Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

December 15, 2017

Like all children, my kids love getting sweets. And there is no shortage of occasions at the end of the year: Halloween, Saint-Nicolas, Christmas, … All of it multiplied by the number of parents, grandparents, schools, clubs, and so on. Put simply, it sometimes becomes hard to justify how Saint-Nicolas travels so quickly from one place to another, and to explain why he seems so intent on fattening up a generation of future diabetics…

But the peculiar thing about my kids is that, while they love receiving sweets, they actually eat very few of them. We have perhaps been raising their awareness of overconsumption and the harms of advertising from a little too young an age.

So the sweets pile up in a veritable treasure drawer that would overflow all year round if PapaPloum didn't occasionally go and satisfy his sugar addiction.

For Saint-Nicolas this year, I went one step further: instead of going out to buy chocolates, I simply dipped into the aforementioned drawer and put sweets they had already received into their shoes.

They didn't notice a thing and were delighted.

And yet, my conscience keeps nagging at me…

Was I the best and greenest PapaPloum-Nicolas? Or the stingiest cheapskate ever to have fathered children?

Photo by Jessica S.

Enjoyed this read? Support the author on Tipeee, Patreon, Paypal or Liberapay. Even a symbolic donation makes all the difference! You can also find me on Facebook, Medium, Twitter or Mastodon.

This text is published under the CC-By BE license.

December 13, 2017

Acquia gift drive

Yesterday in Acquia's Boston headquarters, there were hundreds of presents covering the lobby. These gifts were donated by over 130 Acquians on behalf of the Department of Children and Family Services' Wonderfund. For years, Acquia has participated in this holiday gift drive to support children that otherwise wouldn't receive presents this season. This December, we were able to collect gifts for 200 children throughout Massachusetts.

One of Acquia's founding values is to "Give back more". Inspired by our Open Source roots, contributing back to our communities is ingrained into the way we work. Acquia's annual gift drive is one of the most meaningful examples of giving back. It's incredibly heartwarming to see the effort and passion that goes into making the gift drive possible. Year after year, Acquia's gift drive remains one of my favorite office moments. It makes me incredibly proud to be an Acquian. Happy Holidays!


I published the following diary on isc.sans.org: “Tracking Newly Registered Domains“:

Here is the next step in my series of diaries related to domain names. After tracking suspicious domains with a dashboard and proactively searching for malicious domains, let's focus on newly registered domains. There are a huge number of domain registrations performed every day (on average a few thousand per day, all TLDs combined). Why focus on new domains? With the multiple DGAs ("Domain Generation Algorithms") used by malware families, it is useful to track newly created domains and correlate them with your local resolvers' logs. You could detect some emerging threats or suspicious activities… [Read more]
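
To give a rough idea of the correlation described above, here is a minimal sketch (not the author's actual setup; it assumes a hypothetical file newly-registered-domains.txt with one domain per line and your resolver log in resolver.log):

# count how often newly registered domains show up in the resolver log
grep -oF -f newly-registered-domains.txt resolver.log | sort | uniq -c | sort -rn | head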

[The post [SANS ISC] Tracking Newly Registered Domains has been first published on /dev/random]

December 11, 2017

Fingers on keyboard

We have ambitious goals for Drupal 8, including new core features such as Workspaces (content staging) and Layout Builder (drag-and-drop blocks), completing efforts such as the Migration path and Media in core, automated upgrades, and adoption of a JavaScript framework.

I met with several of the coordinators behind these initiatives. Across the board, they identified the need for faster feedback from Core Committers, citing that a lack of Committer time was often a barrier to the initiative's progress.

We have worked hard to scale the Core Committer Team. When Drupal 8 began, it was just catch and myself. Over time, we added additional Core Committers, and the team is now up to 13 members. We also added the concept of Maintainer roles to create more specialization and focus, which has increased our velocity as well.

I recently challenged the Core Committer Team and asked them what it would take to double their efficiency (and improve the velocity of all other core contributors and core initiatives). The answer was often straightforward: more time in the day to focus on reviewing and committing patches.

Most don't have funding for their work as Core Committers. It's something they take on part-time or as volunteers, and it often involves having to make trade-offs regarding paying work or family.

Of the 13 members of the Core Committer Team, three people noted that funding could make a big difference in their ability to contribute to Drupal 8, and could therefore help them empower others:

  • Lauri 'lauriii' Eskola, Front-end Framework Manager — Lauri is deeply involved with both the Out-of-the-Box Experience and the JavaScript Framework initiatives. In his role as front-end framework manager, he also reviews and unblocks patches that touch CSS/JS/HTML, which is key to many of the user-facing features in Drupal 8.5's roadmap.
  • Francesco 'plach' Placella, Framework Manager — Francesco has extensive experience in the Entity API and multilingual initiatives, making him an ideal reviewer for initiatives that touch lots of moving parts such as API-First and Workflow. Francesco was also a regular go-to for the Drupal 8 Accelerate program due to his ability to dig in on almost any problem.
  • Roy 'yoroy' Scholten, Product Manager — Roy has been involved in UX and Design for Drupal since the Drupal 5 days. Roy's insights into usability best practices and his support and mentoring for developers are invaluable on the core team. He would love to spend more time doing those things, ideally supported by a multitude of companies each contributing a little, rather than just one.

Funding a Core Committer is one of the most high-impact ways you can contribute to Drupal. If you're interested in funding one or more of these amazing contributors, please contact me and I'll get you in touch with them.

Note that there is also ongoing discussion in Drupal.org's issue queue about how to expose funding opportunities for all contributors on Drupal.org.


Ever had to hunt down the obscure places where disk space is being wasted on your Mac? If you're accustomed to the Linux terminal, you'll try this:

$ du -h --max-depth=1
du: illegal option -- -
usage: du [-H | -L | -P] [-a | -s | -d depth] [-c] [-h | -k | -m | -g] [-x] [-I mask] [file ...]

... but that doesn't work on Mac.

So here's a Mac alternative:

$ find . -maxdepth 1 -type d -mindepth 1 -exec du -hs {} \;

I try that --max-depth every. single. time.

Update: it's even easier than that

Sure, the find line works, but turns out I've been missing a much easier shortcut here.

$ du -hd1
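
And if your sort supports human-numeric sorting (-h), piping the output through it surfaces the biggest directories first:

$ du -hd1 | sort -h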

Lesson learned: blog about it faster! ^^

The post "du --max-depth" alternative on Mac OSX appeared first on ma.ttias.be.

Somebody recently pointed me towards a blog post by a small business owner who proclaimed to the world that using Devuan (and not Debian) is better, because it's cheaper.

Hrm.

Looking at creating Devuan, which means splitting of Debian, economically, you caused approximately infinite cost.

Well, no. I'm immensely grateful to the Devuan developers, because when they announced their fork, all the complaints about systemd on the debian-devel mailing list ceased to exist. Rather than a cost, that was an immensely gratifying experience, and it made sure that I started reading the debian-devel mailing list again, which I had stopped for a while before that. Meanwhile, life in Debian went on as it always has.

Debian values choice. Fedora may not be about choice, but Debian is. If there are two ways of doing something, Debian will include all four. If you want to run a Linux system, and you're not sure whether to use systemd, upstart, or something else, then Debian is for you! (well, except if you want to use upstart, which is in jessie but not in stretch). Debian defaults to using systemd, but it doesn't enforce it; and while it may require a bit of manual handholding to make sure that systemd never ever ever ends up on your system, this is essentially not difficult.

you@your-machine:~$ apt install equivs; equivs-control your-sanity; $EDITOR your-sanity

Now make sure that what you get looks something like this (ignoring comments):

Section: misc
Priority: standard
Standards-Version: <whatever was there>

Package: your-sanity
Essential: yes
Conflicts: systemd-sysv
Description: Make sure this system does not install what I don't want
 The packages in the Conflicts: header cannot be installed without
 very difficult steps, and apt will never offer to install them.
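
To turn that control file into an installable (empty) package, a minimal sketch using equivs-build (the exact .deb filename depends on the version you put in the file, hence the glob):

you@your-machine:~$ equivs-build your-sanity
you@your-machine:~$ apt install ./your-sanity_*.deb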

Install it on every system where you don't want to run systemd. You're done, you'll never run systemd. Well, except if someone types the literal phrase "Yes, do as I say!", including punctuation and everything, when asked to do so. If you do that, well, you get to keep both pieces.

But before you take that step, consider this.

Four years ago, I was an outspoken opponent of systemd. It was a bad idea, I thought. It is not portable. It will cause the death of Debian GNU/kFreeBSD, and a few other things. It is difficult to understand and debug. It comes with a truckload of other things that want to replace the universe. Most of all, their developers had a pretty bad reputation of being, pardon my French, arrogant assholes.

Then, the systemd maintainers filed bug 796633, asking me to provide a systemd unit for nbd-client, since it provided an rcS init script (which is really a very special case), and the compatibility support for that in systemd was complicated and support for it would be removed from the systemd side. Additionally, providing a systemd template unit would make the systemd nbd experience much better, without dropping support for other init systems (those cases can still use the init script). In order to develop that, I needed a system to test things on. Since I usually test things on my laptop, I installed systemd on my laptop. The intent was to remove it afterwards. However, for various reasons, that never happened, and I still run systemd as my pid1. Here's why:

  • Systemd is much faster. Where my laptop previously took 30 to 45 seconds to boot using sysvinit, it takes less than five. In fact, it took longer for it to do the POST than it took for the system to boot from the time the kernel was loaded. I changed the grub timeout from the default of five seconds to something more reasonable, because I found that five seconds was just ridiculously long if it takes about half that for the rest of the system to boot to a login prompt afterwards.
  • Systemd is much more reliable. That is, it will fail more often, but it will reliably fail. When it fails, it will tell you why it failed, so you can figure out what went wrong and fix it, making sure the system never fails again in the same fashion. The unfortunate fact of the matter is that there were many bugs in our init scripts, but they were never discovered and therefore lingered. For instance, you would not know about this race condition between two init scripts, because sysvinit is so dog slow that 99 times out of 100 it would not trigger, and therefore you don't see it. The one time you do see it, something didn't come up, but sysvinit doesn't log about such errors (it expects the init script to do so), so all you can do is go "damn, wtf happened?!?" and manually start things, allowing the bug to remain. These race conditions were much more likely to trigger with systemd, which caused it a lot of grief originally; but really, you should be thankful, because now that all these race conditions have been discovered by way of an init system that is much more verbose about such problems, they have also been fixed, and your sysvinit system is more reliable, too, as a result. There are other similar issues (dependency loops, to name one) that systemd helped fix.
  • Systemd is different, and that requires some re-schooling. When I first moved my laptop to systemd, I remember running into some kind of issue that I couldn't figure out how to fix. No, I don't remember the specifics of that issue, but they don't really matter. The point is this: at first, I thought "this is horrible, you can't debug it, how can you use such a system". And while it's true that undebuggable systems are not very useful, the systemd maintainers know this too, and therefore systemd is debuggable. It's just that you don't debug it by throwing some imperative init script code through a debugger (or, worse, something like sh -x), because there is no imperative init script code to throw through such a debugger, and therefore that makes little sense. Instead, there is a wealth of different tools to inspect the systemd state, and a lot of documentation on what the different things mean. It takes a while to internalize all that; and if you're not convinced that systemd is a good thing then it may mean some cursing while you're fighting your way through. But in the end, systemd is not more difficult to debug than simple init scripts -- in fact, it sometimes may be easier, because the system is easier to reason about.
  • While systemd comes with a truckload of extra daemons (systemd-networkd, systemd-resolved, systemd-hostnamed, etc etc etc), the systemd in their names does not imply that they are required by systemd. In fact, it's the other way around: you are required to run systemd if you want to run systemd-networkd (etc), because systemd-networkd (etc) make extensive use of the systemd infrastructure and public APIs; but nothing inside systemd requires that systemd-networkd (etc) are running. In fact, on my personal laptop, beyond systemd and udev themselves, I'm not using anything that gets built from the systemd source.

I'm not saying these reasons are universally true, and I'm not saying that you'll like systemd as much as I have. I am saying, however, that you should give it an honest attempt before you say "I'm not going to run systemd, ever," because you might be surprised by the huge difference between what you expected and what you got. I know I was.

So, given all that, do I think that Devuan is a good idea? It is if you want flamewars. It gives those people who want to vilify systemd a place to do that without bothering Debian with their opinion. But beyond that, if you want to run Debian and you don't want to run systemd, you can! Just make sure you choose the right options, and you're done.

All that makes me wonder why today, almost half a year after the initial release of Debian 9.0 "Stretch", Devuan Ascii still hasn't been released, and why it took them over two years to release their Devuan Jessie based on Debian Jessie. But maybe that's just me.

December 08, 2017

This blog has been quiet for the last year and a half, because I don’t like to announce things until I feel comfortable recommending them. Until today!

Since July 2016, API-First Drupal became my primary focus, because Dries felt this was one of the most important areas for Drupal’s future. Together with the community, I triaged the issue queue, and helped determine the most important bugs to fix and improvements to add. That’s how we ended up with REST: top priorities for Drupal … plan issues for each Drupal 8 minor:

If you want to see what’s going on, start following that last issue. Whenever there’s news, I post a new comment there.

But enough background. This blog post is not an update on the entire API-First Initiative, it’s about a particular milestone.

100% integration test coverage!

The biggest problem we encountered while working on rest.module, serialization.module and hal.module was unknown BC breaks 1. Because in case of a REST API, the HTTP response is the API. What is a bug fix for person X is a BC break for person Y. The existing test coverage was rather thin, and was often only testing “the happy path”: the simplest possible case. That’s why we would often accidentally introduce BC breaks.

Hence the clear need for really thorough functional (integration) test coverage2, which was completed almost exactly a year ago. We added EntityResourceTestBase, which tests dozens of scenarios3 in a generic way4, and used that to test the 9 entity types that already had some REST test coverage more thoroughly than before.

But we had to bring this to all entity types in Drupal core … and covering all 41 entity types in Drupal core was completed exactly a week ago!

The test coverage revealed bugs for almost every entity type. (Most of them are fixed by now.)

Tip: Subclass that base test class for your custom entity types, and easily get full REST test coverage — 41 examples available!

Guaranteed to remain at 100%

We added EntityResourceRestTestCoverageTest, which verifies that we have test coverage for all permutations of:

  • entity type
  • format: json + xml + hal_json
  • authentication: cookie + basic_auth + anon

It is now impossible to add new entity types without also adding solid REST test coverage!

If you forget that test coverage, you'll find an ASCII-art llama talking to you.

That is why we can finally say that Drupal is really API-First!

This of course doesn’t help only core’s REST module, it also helps the contributed JSON API and GraphQL modules: they’ll encounter far fewer bugs!

Thanks

So many people have helped! In random order: rogierbom, alexpott, harings_rob, himanshu-dixit, webflo, tedbow, xjm, yoroy, timmillwood, gaurav.kapoor, Gábor Hojtsy, brentschuddinck, Sam152, seanB, Berdir, larowlan, Yogesh Pawar, jibran, catch, sumanthkumarc, amateescu, andypost, dawehner, naveenvalecha, tstoeckler — thank you all!5

Special thanks to three people I omitted above, because they’re not well known in the Drupal community, and totally deserve the spotlight here, for their impressive contribution to making this happen:

That’s thirty contributors without whom this would not have happened!

And of course thanks to my employer, Acquia, for allowing me to work on this full-time!

Next

What is going to be the next big milestone we hit? That's impossible to say, because it depends on the chains of blocking issues that we encounter. It could be support for modifying and creating config entities, it could be support for translations, it could be that all major serialization gaps are fixed, it could be file uploads, or it could be ensuring all normalizers work in both rest.module & jsonapi.module.

The future will tell, follow along!


  1. Backwards Compatibility. ↩︎

  2. Nowhere near 100% test coverage, definitely not every possible edge case is tested, and that is fine. ↩︎

  3. Including helpful error responses when unauthenticated, unauthorized or just a bad request. This vastly improves DX: no need to be a Drupal expert to talk to a REST API powered by Drupal! ↩︎

  4. It is designed to be subclassed for an entity type, and then there are subclasses of that for every format + authentication combination. ↩︎

  5. And this is just from all the per-entity type test issues, I didn’t look at the blockers and blockers of blockers. ↩︎

And this is already the end of Botconf. Time for my last wrap-up. The day started a little bit later to allow some people to recover from the social event. It kicked off at 09:40 with a talk presented by Anthony Kasza, from Palo Alto Networks: "Formatting for Justice: Crime Doesn't Pay, Neither Does Rich Text". Everybody knows the RTF format… even more since the famous CVE-2017-0199. But what's inside an RTF document? As the name says, it is used to format text. It was created by Microsoft in 1987. It has similarities with HTML:

RTF vs HTML

Entities are represented with ‘{‘ and ‘}’. Example:

{\i This is some italic text}

There are control words like "\rtf", "\info", "\author", "\company", "\i", "\AK", … It is easy to obfuscate such a document with extra whitespace, headers or nested elements:

{\rtf [\info]] == {\rtf {{{\i,nfo}}}}

This means that writing signatures is complex. Also, just rename the document with a .doc extension and it will be opened by Word. How to generate RTF documents? There are the official tools like Microsoft Word or WordPad but there are, of course, plenty of malicious ones:

  • 2017-0199 builder
  • wingd/stone/ooo
  • Sofacy, Monsoon, MWI
  • Ancalog, AK builder

What about analysis tools? Here also, it is easy to build a toolbox with nice tools: rtfdump, rtfobj, pyRTF, YARA are some of them. To write good signatures, Anthony suggested focussing on suspicious words:

  •  \info
  • \object
  • DDEAUTO
  • \pict
  • \insrsid or \rsidtbl

DDEAUTO has been a good candidate for a while and was seen as the "most annoying bug of the year" for its inclusion in everything (RTF & other documents, e-mails, calendar entries…). Anthony finished his talk by providing a challenge based on an RTF file.
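
A quick-and-dirty way to triage a folder of RTF files for those markers could look like this (a hedged sketch, not Anthony's tooling; -a treats the files as text, -l only prints the names of matching files):

grep -alE '\\object|\\pict|\\insrsid|\\rsidtbl|DDEAUTO' *.rtf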

The next talk was presented by Paul Jung: "PWS, Common, Ugly but Effective". PWS, also known as "info stealers", are a very common type of malware. They steal credentials from many sources (browsers, files, registries, wallets, etc).
PWS

They also offer "bonus" features like screenshot grabbers or keyloggers. How to find them? Buy them, find a cracked one or use an open-source one. Some of them even have promotional videos on YouTube! A PWS is based on a builder that generates a specific binary from a config file; it is delivered via email or HTTP, and data is managed via a control panel. Paul reviewed some well-known PWS like JPro Crack Stealer, Pony (the most famous), Predator Pain and Agent Tesla. The last one promotes itself as "not being malware". Some of them support more than 130 different applications to steal passwords from. Some do not reinvent the wheel and just use external tools (ex: the Nirsoft suite). If it is difficult to detect them before the infection, it's quite easy to spot them based on the noise they generate in log files. They use specific queries:

  • “POST /fre.php” for Lokibot
  • “POST /gate.php” for Pony or Zeus
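
Those URIs are easy to hunt for in your own logs; a minimal sketch, assuming a (hypothetical) proxy or web server log called access.log:

# flag requests matching known PWS check-in paths
grep -E 'POST /(fre|gate)\.php' access.log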

Very nice presentation!

After the first coffee refill break, Paul Rascagnères presented "Nyetya Malware & MeDoc Connection". The presentation was a recap of the bad story that affected Ukraine a few months ago. It started with a phone call saying "We need help". They received some info to start the investigation but their telemetry did not return anything juicy (Talos collects a huge amount of data to build its telemetry). Paul explained the case of M.E. Doc, a company providing a Windows application for tax processing. The company's servers were compromised and the software was modified. Then, Paul reviewed the Nyetya malware. It used WMI, PsExec, EternalBlue, EternalRomance and scanned IP ranges to infect more computers. It also used a modified version of Mimikatz. Note that Nyetya cleared the logs of infected hosts. This is a good reminder to always push logs to an external system to prevent losing pieces of evidence.

The next talk was about a system to track the Locky ransomware based on its DGA: "Math + GPU + DNS = Cracking Locky Seeds in Real Time without Analyzing Samples". Yohai Einav and Alexey Sarychev explained how they solved the problem of detecting, as fast as possible, new variations of the domain names used by the Locky ransomware. The challenges were:

  • To get the DGA (it's public now)
  • To be able to process a vast search space. The namespace could be enormous (from a 3-digit seed to 4, then 5, then 6 digits). There is a scalability problem.
  • Mapping the ambiguity (and avoiding collisions with other DGAs)

The solution they developed is based on GPUs (for maximum speed). If you're interested in the Locky DGA, an implementation is available at https://github.com/baderj/domain_generation_algorithms/tree/master/locky, and you can also have a look at their dataset.

The next talk was, for me, the best of the day because it contained a lot of useful information that many people can immediately reuse in their environment to improve the detection of malicious behaviour or to improve their DFIR process. It was titled "Hunting Attacker Activities – Methods for Discovering, Detecting Lateral Movements" and presented by Keisuke Muda and Shusei Tomonaga. Based on their investigations, they explained how attackers can perform lateral movement inside a network just by using standard Windows tools (which, by default, are not flagged as malicious by antivirus software).


They presented multiple examples of commands or small scripts used to scan, pivot, cover tracks, etc. Then they explained how to detect this kind of activity. They made a good comparison of the standard Windows audit log versus the well-known Sysmon tool. They presented the pros & cons of each solution and the conclusion could be that, for maximum detection, you need both. There were so many examples that it's not possible to list them here. I just recommend having a look at the documents available online.

It was an amazing presentation!

After the lunch, Jaeson Schultz, also from Talos, presented "Malware, Penny Stocks, Pharma Spam – Necurs Delivers". The talk was a good review of the biggest active spam botnet. Just some numbers collected from multiple campaigns: 2.1M messages and 1M unique sender IP addresses from 216 countries/territories. The top countries are India, Vietnam, Iran and Pakistan. Jaeson explained that the re-use of IP addresses is so low that it's difficult to maintain blacklists.

IP Addresses Reuse

How do the bad guys send emails? They use harvested accounts (of course) but also auto-generated addresses and common / role-based accounts. That's why the use of catch-all mailboxes is useful. Usually, big campaigns are launched from Monday to Friday and regular campaigns are constantly running at a low speed. Jaeson presented many examples of spam and attachments. A good review with entertaining slides.

Then, Łukasz Siewierski presented "Thinking Outside of the (Sand)box". Łukasz works for Google (Play Store) and analyzes applications. He said that all applications submitted to Google are reviewed from a security point of view. Android has many security features: SELinux, the application sandbox, the permission model, verified boot, (K)ASLR, Seccomp, but the presentation focused on the sandbox. First, why is there a sandboxing system? To prevent spyware from accessing other applications' data, to prevent applications from posing as other ones, to make it easy to attribute actions to specific apps and to allow strict policy enforcement. But how to break the sandbox? First, the malware can ask users for a number of really excessive permissions. In this case, you just have to wait and cross your fingers that the user will click "Allow". Another method is to use Xposed. I already heard about this framework at Hack in the Box. It can prevent apps from being displayed in the list of installed applications. It gives any application every permission, but there is one big drawback: the victim MUST install Xposed! The other method is to root the phone, inject code into other processes and profit. Łukasz explained different techniques to perform injection on Android, but it's not easy. Even more so since the release of "Nougat", which introduced new mitigation techniques.

The last slot was assigned to Robert Simmons who presented "Advanced Threat Hunting". It was very interesting because Robert gave nice tips to improve the threat hunting process. It can require a lot of resources that are … limited! We have small teams with limited resources and limited time. He also gave tips to better share information. A good example is YARA rules. Everybody has a set of YARA rules in private directories, on laptops, etc. Why not store them in a central repository like a GitLab server? Many other tips were given that are worth a read if you are performing threat hunting.

The event closed with the classic kind words from the team. You can already save the date for the 6th edition, which will be held in Toulouse!

 

The Botconf Crew

[The post Botconf 2017 Wrap-Up Day #3 has been first published on /dev/random]

I’m just back from the social event that was organized at the aquarium Mare Nostrum. A very nice place full of threats as you can see in the picture above. Here is my wrap-up for the second day.

The first batch of talks started with "KNIGHTCRAWLER, Discovering Watering-holes for Fun, Nothing" presented by Félix Aimé. This is Félix's personal project that he started in 2016 to get his own threat intelligence platform. He started with some facts like the definition of a watering hole: it is the insertion of specific malicious scripts on a specific website to infect visitors. Usually, Javascript + an iframe that redirects to the malicious server, but it can also be a malvertising campaign (via banners). They are not easy to track because, on the malicious server, you can have protections like IP whitelists (in case of a targeted attack or to keep researchers away), browser fingerprinting, etc. Then he explained how he built his own platform and the techniques used to find suspicious activities: passive DNS, common crawl indexes, directory scraping, leaked DNS, … It is interesting to note that he uses YARA rules. In fact, he created his personal (legal) botnet. The architecture is based on a master server (the C&C) which talks to crawler servers. He is currently monitoring 25K targets. This is an ongoing project and Félix will keep improving it. Note that it is not publicly available. He also gave some nice examples of findings, like the keylogger on WordPress that we reported yesterday. He told me he detected it for the first time a few months ago! Very nice project!

The second talk was a complete review of the Wannacry attack that hit many organizations in May 2017: "The (makes me) Wannacry Investigation" presented by Alan Neville from Symantec. This was the last time that the SANS ISC InfoCON was raised to yellow! Everybody remembers this bad story. Alan reviewed some major virus infections of the last years, like Blaster (2003) or Conficker (2008). These malware families infected millions of computers but, in the case of Wannacry, "only" 300K hosts were infected. The impact, however, was much more important: factories, ATMs, billboards, health devices, etc. Then Alan reviewed some technical aspects of Wannacry and mentioned, of course, the famous kill-switch domain: iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea[.]com. In fact, Symantec detected an early version of the ransomware a few months before (without the Eternal Blue exploit). They also observed some attacks in March/April 2017. Basic security rules could have reduced the impact of the ransomware: have a proper patching procedure as well as backup/restore procedures.

After the morning coffee refill, Maria Jose Erquiaga came on stage to present "Malware Uncertainty Principle: an Alteration of Malware Behavior by Close Observation". This talk presented a study of the influence of TLS interception on malware analysis. Indeed, today, more and more malware communicates over HTTPS. What happens if we play MitM to intercept communications with the C&C server? Maria explained the lab that was deployed, with two scenarios: with and without an intercepting proxy.

Nomad Project Infrastructure

Once the project was in place, they analyzed many samples and captured all the traffic. The result of this research is available online (link). What did they find? Sometimes, there is no communication at all with the C&C because the malware uses a custom protocol over TCP/443, which is rejected by the proxy. Some malware samples tried to reconnect continuously or sought another way to connect (ex: via different ports).

The next one was "Knock Knock… Who's there? admin admin, Get In! An Overview of the CMS Brute-Forcing Malware Landscape" presented by Anna Shirokova from Cisco. This talk was presented at BruCON but, being part of the organization, I was not able to follow it. Fortunately, this time was the right one. I maintain multiple WordPress sites and, I fully agree, brute-force attacks are constantly launched and pollute my logs. Anna started with a review of brute-force attacks and their targets. Did you know that ~5% of the Internet's websites are running WordPress? This is a de-facto target. There are two types of brute-force attacks: vertical (a list of passwords is tested against one target) and horizontal (one password is tested against a list of targets). Brute-force attacks are not new; Anna made a quick recap from 2009 until 2015 with nice names like FortDisco, Mayhem, CMS Catcher, Troldesh, etc. And it's still increasing… Then Anna focused on Sathurbot, which is a modular botnet with different features (downloader, web crawler and brute-forcer). The crawler module uses search engines to find a list of sites to be targeted (ex: "bing.com/search?q=makers%20manage%20manual"). Then the brute-force attack starts against /wp-login.php. Nice research, which revealed that the same technique is always used and that many WordPress instances are still using weak passwords! Note that it is difficult to measure the success rate of those brute-force attacks.
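
If you run WordPress yourself, spotting that noise is a one-liner; a minimal sketch against a standard access log (the file name access.log is an assumption), listing the most aggressive source IPs:

grep 'POST /wp-login.php' access.log | awk '{print $1}' | sort | uniq -c | sort -rn | head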

Then Mayank Dhiman & Will Glazier presented "Automation Attacks at Scale or Understanding 'Credential Exploitation'". There exist many tools to steal credentials on the Internet and others to re-use them for malicious activities (account takeover, fake account creation, shopping bots, API abuse, etc). Many toolkits were briefly reviewed: SentryMBA, Fraudfox, AntiDetect, but also more classic tools like Hydra, curl, wget, Selenium and PhantomJS. The black market is full of services that offer configuration files for popular websites. According to the research, 10% of the Alexa top websites have a config file available on the black market (which describes how to abuse them, their API, etc). Top targets are gaming websites, entertainment and e-commerce. No surprise here. To abuse them, you need: a config file, stolen credentials, some IP addresses (for rotation) and some computing power. Credentials are quite easy to find: pastebin.com is your best friend. Note that attackers need good IP addresses; the best sources are cloud services, compromised IoT devices or proxy farms. They gave a case study about a large US retailer that was targeted from 40K IP addresses in 61 countries. But how to protect organizations against this kind of attack?

  • Analyze HTTP(S) requests and headers to fingerprint attack tools
  • Use machine learning to detect forged browser behaviour
  • Use threat intelligence
  • Data analytics (look for patterns)
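
As a deliberately simple illustration of the first point above, counting User-Agent strings in a combined-format access.log (hypothetical file name) already surfaces the obvious tools such as curl, wget or python-requests:

# field 6, split on double quotes, is the User-Agent in combined log format
awk -F'"' '{print $6}' access.log | sort | uniq -c | sort -rn | head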

The next one was "The Good, the Bad, the Ugly: Handling the Lazarus Incident in Poland" presented by Maciej Kotowicz. Maciej looked back at a big targeted attack that occurred in Poland. This talk was flagged as TLP:AMBER. Sorry, no coverage. If you are interested, here is a link for more info about Lazarus.

 

After the (delicious) lunch, Daniel Plohmann presented his project: "Malpedia: A Collaborative Effort to Inventorize the Malware Landscape". Malpedia can be summed up in a few words: a free, independent resource of labeled, unpacked samples. The idea of Malpedia came up two years ago during Botconf. The idea is to offer a high-quality repository of malware samples (Daniel insisted on the fact that quality is better than quantity), properly analyzed and tagged. Current solutions (botnets.fr, theZoo, VirusBay.io) still have issues properly identifying samples. In Daniel's project, samples are classified by family. What is a malware family? According to Daniel, it's all samples that belong to the same project seen from a developer's point of view. After explaining the collection process, he gave some interesting stats based on his current collection (as of today, 2491 samples from 669 families). Nice project, and access is available upon request (if you meet Daniel IRL) or by having other members vouch for you. Malpedia is available here.

The next talk was… hard! When the speaker warns you that some slides will contain a lot of assembler code, you know what to expect! "YANT – Yet Another Nymaim Talk" was presented by Sebastian Eschweiler. What I was able to follow: Nymaim is a malware family that uses very complex anti-analysis techniques to defeat researchers and analysts. The main technique used is called "Heaven's Gate". It is a mechanism for calling 64-bit code directly from a 32-bit process. It is very useful to encrypt code, hide from static analysis tools and is a nice way to evade sandbox hooks.

After the afternoon coffee break, Amir Asiaee presented "Augmented Intelligence to Scale Humans Fighting Botnets". It started with a fact: today, there are too many malware samples and too few researchers. So we need to automate as much as possible. Amir works for a company that gets feeds of DNS requests from multiple ISPs. They get 100B DNS queries per day! As malware moves faster than it used to (complex DGAs, shorter C&C lifetimes), there is a clear need for quick analysis of all this data. Amir explained how they process this huge amount of data using NLP ("Natural Language Processing").

DNS Processing

The engineering challenge is to process all those data and to spot new core domains… when real time is key! Here is a cool video about the data processing. Then Amir explained some use cases. Two interesting examples: Bedep uses exchange rates as its DGA seed… Some others have too many collisions (ex: [a-z]{6}.com), which could lead to many false positives: what about akamai.com?

The last talk covered the Stantinko botnet: "Stantinko: a Massive Adware Campaign Operating Covertly since 2012" by Matthieu FAOU & Frédéric Vachon from ESET. It was a very nice review of the botnet. It started with some samples they received from a customer. They started the reverse engineering and, when you discover that a DLL belonging to an MP3 encoder application decrypts and loads another one in memory, you are facing something very suspicious! They were able to sinkhole the C&C server and started further analysis. What about persistence? The malware creates two Windows services: PDS (Plugin Downloader Service) and BEDS (Browser Extension Downloader Service).

Stantinko Architecture

The purpose of the PDS is to compromise CMSs (WordPress and Joomla) and install a RAT and a Facebook bot. The BEDS is a flexible plugin system to install malicious extensions in the browser. Stantinko has many interesting anti-analysis features: the code is encrypted with a unique key per infection. Analysis requires finding the dropper and getting a sample plus the related context. There is a fileless plugin system. To get payloads, they had to code a bot mimicking an infected machine. What about the browser extensions? The ad-fraud extension injects ads on targeted websites or redirects the user to an ad website before showing the right one. They also replace ads with their own. Note that URLs are hashed in the config files! Another module is the search parser, which searches Google or Yandex for potential victims of brute-force attacks. Finally, a RAT module is also available. This botnet has an estimated size of 500K hosts. More details about Stantinko are available here.

The day ended with a good lightning talk session: 14 presentations in 1h! Some of them were really interesting, others very funny. In bulk mode, what was presented:

  • The Onyphe project
  • IoT Malware classification
  • Dropper analysis (https://malware.sekoia.fr)
  • Deft Linux (Free DFIR Linux distribution) DART deftlinux.net
  • Sysmon FTW
  • PyOnyphe: Onyphe Python library to use the API
  • Autopwn
  • Just a normal phishing
  • Context enrichment for IR
  • Yet another sandbox evasion "you_got_damn_right" HTTP header gist.github.com/bcse/1834878
  • Sysmon sigs for Linux honeypots
  • Malware config dynamic extraction (Gootkit)
  • IDA Appcall
  • A Knightcrawler demo (see above)

See you tomorrow for the last day!

[The post Botconf 2017 Wrap-Up Day #2 has been first published on /dev/random]

December 07, 2017

I just committed what I consider the last feature to be added to AO Extra: the optimization of Google Fonts, with a choice between:

  • “remove”
  • "aggregate and link", where the Google Font CSS might still be render-blocking but there will be no "flash of unstyled fonts"
  • "aggregate and load asynchronously with webfont.js", which will not be render-blocking but might lead to that dreaded "flash of unstyled fonts"

Next step before merging AO Extra into Autoptimize to become AO 2.3: testing. And that's where I need your help:

  1. Download the AO Extra zip-file from Github
  2. go to Plugins -> Add New -> Upload Plugin
  3. Click “browse” and select the zip-file you downloaded in (1)
  4. Click “Install now”
  5. Click “Activate”
  6. Go to Settings -> Autoptimize -> Extra
  7. Test
  8. Give generic feedback below or file bugs in the project's GitHub Issues
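
If you prefer the command line and have WP-CLI available, steps 2 to 5 can be condensed into a single command (assuming the downloaded file is named ao-extra.zip; adjust to whatever the zip is actually called):

wp plugin install ao-extra.zip --activate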

If all goes well, Autoptimize 2.3 could be released before we have to wave 2017 goodbye :-)

For future reference (to myself, for the most part):

ffmpeg -i foo.webm -i foo.en.vtt -i foo.nl.vtt -map 0:v -map 0:a \
  -map 1:s -map 2:s -metadata:s:a language=eng -metadata:s:s:0   \
  language=eng -metadata:s:s:1 language=nld -c copy -y           \
  foo.subbed.webm

... is one way to create a single .webm file from one .webm input file and multiple .vtt files. A little bit of explanation:

  • The -i arguments pass input files. You can have multiple input files for one output file. They are numbered internally (this is necessary for the -map and -metadata options later), starting from 0.
  • The -map options take a "mapping". With them, you specify which input streams should go where in the output stream. By default, if you have multiple streams of the same type, ffmpeg will only pick one (the "best" one, whatever that is). The mappings we specify are:

    • -map 0:v: this means to take the video stream from the first file (this is the default if you do not specify any mapping at all; but if you do specify a mapping, you need to be complete)
    • -map 0:a: take the audio stream from the first file as well (same as with the video).
    • -map 1:s: take the subtitle stream from the second (i.e., indexed 1) file.
    • -map 2:s: take the subtitle stream from the third (i.e., indexed 2) file.
  • The -metadata options set metadata on the output file. Here, we pass:

    • -metadata:s:a language=eng, to add a 's'tream metadata item on the 'a'udio stream, with name language and content eng. The language metadata in ffmpeg is special, in that it gets automatically translated to the correct way of specifying the language in the target container format.
    • -metadata:s:s:0 language=eng, to add a 's'tream metadata item on the first (indexed 0) 's'ubtitle stream in the output file. This too has the English language set.
    • -metadata:s:s:1 language=nld, to add a 's'tream metadata item on the second (indexed 1) 's'ubtitle stream in the output file. This has Dutch set as the language.
  • The -c copy option tells ffmpeg to not transcode the input video data, but just to rewrite the container. This works because all input files (WebM video plus VTT subtitles) are valid for WebM. If you do not have an input subtitle format that is valid for WebM, you can instead limit the copy modifier to the video and audio only, allowing ffmpeg to transcode the subtitles. This is done by way of -c:v copy -c:a copy.
  • Finally, we pass -y to specify that any pre-existing output file should be overwritten, and the name of the output file.
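
To double-check that the streams and language tags ended up where you expect them, ffprobe (which ships with ffmpeg) will list them:

ffprobe -hide_banner foo.subbed.webm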

December 06, 2017

We have reached December; it's time for another edition of the Botconf security conference, fully dedicated to fighting botnets. This is already the fifth edition that I'm attending. This year, the beautiful city of Montpellier in the south of France is hosting the conference. I arrived on Monday evening and attended a workshop yesterday about The Hive, Cortex and MISP. As usual, I'm following the talks to bring you a wrap-up. Let's go for the first one!

The introduction was not performed by "The Boss" (Eric Freyssinet), who was blocked due to a last-minute change in his work agenda. But the crew was there to ensure a smooth event. What about the current edition? In a few numbers: 4 days, 3 workshops, 12 crew members, 300 attendees (+13%), 28 talks selected amongst 46 submissions and good food as usual. Some attendees have already renamed the event "Bouffeconf" ("bouffe" being a colloquial French word for food, a nod to the huge amount of it). They also insisted on respecting the social network and TLP policies.

The keynote slot was presented by Sébastien Larinier and Robert Erra. The title was "How to Compute the Clusterization of a Very Large Dataset of Malware with Open Source Tools for Fun & Profit?", which suggested a machine-learning-oriented talk. And that was indeed the case; the word appeared quickly on a slide. It was quite heavy for a keynote, with many mathematical formulas. The idea behind Sébastien and Robert's research was to solve the following problem: given a data set of a few million malware samples, how do you process them automatically to classify them into clusters or families and get more information about their differences? In such a complex task, scalability is important, but so is speed. The processing pipeline for the samples is the following:

blob >> parser >> JSON data >> FV (Features Vector) >> Classification

They explained the available algorithms (KMeans and DBScan) and their differences. Read the links if you are interested. Then they explained the issues they faced and finally gave some statistics.

Malware Clustering

They also explained the architecture deployed to parse all those samples. But what is stored? A lot of information: hashes, the size and number of sections, names, entropy, characteristics, resources, entry point, import/export tables, strings, certificates, compilation date, etc. It is good research that is still ongoing. Note that Sébastien has a workshop on this topic that he gives here and there at security conferences.

The first talk, "Get Rich or Die Trying", was given by Or Eshed and Mark Lechtik from Check Point. It started with a fact: many investigations start with a simple finding, like an email… that is the "trigger". In this case, the research performed by Check Point started from an email about an oil company (Aramco) targeting Saudi Arabia. Was it an APT? The investigation revealed, step by step, that it was not really an APT. They explained every step of the case, from the email to the different malware samples delivered via malicious Office documents.

Attacker Infrastructure

One of them was NetWire Lite, a RAT sold by wordwirelabs.com. The second sample was a compiled VB6 program which was an info stealer (ISR Stealer). The next one was a HawkEye keylogger which steals FTP, HTTP and SMTP credentials but also… Minecraft!? Don't ask why! These tools are definitely not what you expect in an APT… so they downgraded the incident level. Digging further, they finally identified the Nigerian guy behind this attack. The main conclusion of this talk could be: this guy was able to create a big operation and cause damage with a limited skill set. What about a group of highly skilled people?

The next slot was assigned to "Exploring a P2P Transient Botnet – From Discovery to Enumeration" by Renato Marinho, a researcher and SANS ISC handler. Renato explained how he found a botnet and how he was able to reverse the communications with the C&C. How did it start? Simply with a Raspberry Pi running a honeypot at his home. The device was quickly infected (using the default Pi credentials) and he saw that it tried to establish a lot of connections to the Internet. Tip: when you're running a honeypot, block (but log!) all connections to the wild Internet. He found that each member of the botnet could be a "Checker" or a "Scaro", either one of them or both at the same time. A "Checker" is a dumb node while a "Scaro" is a C&C. Communications with the C&C were established via HTTPS, but the certificate was found in the binary. In this case, it's easy to play MitM and intercept all communications. The set of commands was quite limited ("POST /ping", "GET /upgrade"). The next step was to estimate the botnet size. The first technique was to crawl the botnet based on the IP addresses found in communications with the C&C. The second one was more interesting: Renato found that it was possible to become part of the botnet by changing some parameters in the communication protocol (this is easy to achieve with a tool like Burp Suite). Another interesting fact about this botnet: there was no persistence mechanism in place, which means that a reboot will remove the malware… until the next infection! Very interesting research!
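
As an aside on the "block (but log!)" tip: a minimal iptables sketch of that idea, assuming the honeypot reaches the Internet through eth0 (adapt the interface and add any exceptions, e.g. for DNS, to your own setup):

# log, then drop, any new outbound connection initiated by the honeypot
iptables -A OUTPUT -o eth0 -m conntrack --ctstate NEW -j LOG --log-prefix "HONEYPOT-OUT: "
iptables -A OUTPUT -o eth0 -m conntrack --ctstate NEW -j DROP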

Then, Jakub Křoustek, Peter Matula and Petr Zemek, from Avast, presented a very nice tool called RetDec. This is an open-source machine code decompiler. The first part of the talk was easy to understand. When a program (source) is compiled, the compiler not only generates machine code but also optimizes the code and reorganizes how data is managed. When you use a decompiler, you get code that is far away from the original source and usually barely readable. There are also other techniques that make decompilation hard work: packers, obfuscation, anti-debugging techniques, etc. RetDec is trying to solve those issues… The goal is to make a generic decompilation of binary code. That was the easy part. In the second part of the talk, they explained in detail how the decompiler does the job, with many examples. It was really complex. I just trust them: RetDec can do a good job. The good news is that it will be released as an open-source project next week. Check retdec.com for more details. A good point for the IDA debugging plugin that can interact directly with RetDec! Impressive work by the Avast team…

After a long half-day, the lunch break was welcome. The afternoon started with "A Silver Path: Ideas for Improving Lawful Sharing of Botnet Evidence with Law Enforcement" by Karine e Silva from the University of Tilburg, NL. Not a technical talk at all, but Karine gave a very good overview of the issues between security researchers and law enforcement agencies. Indeed, by law, attacking people or accessing data without authorization is prohibited. But in the case of a botnet (just an example), the researcher's help could be valuable in helping the LEA take down the C&C server. The project presented by Karine is called BotLeg (more information here):

The project is a consortium between TiU (TILT), SURFNet, SIDN, Abuse Information Exchange, and NHTCU. While the main focus of the research is the Netherlands, the project will develop a comparative analysis to include other EU countries. The project is financed via NWO and will last for 48 months. Among the expected legal research results, the BotLeg project will deliver sectorial guidelines and codes of conduct on anti-botnet operations.

Karine on Stage

 

Some points are quite difficult to address. Example: in some cases, hacking back is allowed but must be performed at the same level as the original attack. That's not easy to quantify: what counts as an "aggressive" attack? Of course, the GDPR was mentioned, because researchers are also collecting sensitive data.

The next talk was presented by two guys from CERT.pl (Jarosław Jedynak & Paweł Srokosz): "Use Your Enemies: Tracking Botnets with Bots". Usually, bots are used for malicious activities, but they can be used for many other purposes: the data they collect can be used to identify and kill botnets. They explained the infrastructure they developed to analyze malware samples, decrypt C&C configurations and then act as a member of the botnet to gain more knowledge. Their Ripper is, in fact, a modified version of Cuckoo plus homemade scripts.

Automated Malware Analysis Tool Chain

It is interesting to note that this approach ties directly into the previous talk: personal or sensitive information can be found. Once information about the botnet is discovered, it's not always easy to infiltrate it, because you need to look legitimate (hostname, behavior, uptime), wait some time before being able to fetch data, and sometimes the configuration is only available in specific countries.

The next talk was similar to the previous one. It focused on SOCKS proxies: "SOCKs as a Service, Botnet Discovery" by Christopher Baker. IP addresses can be easily classified. There are blacklists, GeoIP databases, DNS, CGN, websites, etc. It's easy to block them. But some IP addresses are very difficult to block because doing so could affect too many people (example: cloud services or ISPs). That's why there is a (black) market for SOCKS proxies. This is really a pain for researchers and law enforcement agencies, because many SOCKS proxies are running on compromised computers in homes. Christopher explained how easy it is to "rent" such services for a small fee. In the second part of his talk, he explained how he infiltrated SOCKS proxy networks to gather more information about them. If I understood correctly, he used controlled hosts to join networks of proxies and see what was passing through them. Much like deploying a Tor exit node.

After the afternoon coffee break, Sébastien Mériot from OVH presented "Automation Of Internet-Of-Things Botnets Takedown By An ISP". For an ISP, DDoS attacks can be catastrophic. Not only do they suffer from DDoS attacks, but some C&C servers can be hosted inside their infrastructure and, legally, they can't look at their customers' data. Working from abuse reports isn't very useful because they generate a lot of noise, are often incomplete, or the malicious content is already gone. IoT botnets have been a pain during the last year and generate a lot of DDoS attacks. Finding them is not complicated (Shodan is your best friend), but how to recover information about the C&C servers? Sébastien explained how he performs some reverse engineering to extract juicy information. I like the way he uses Radare2 with r2pipe to get the assembly code of a sample and then greps for patterns of assembly code handling domains or IP addresses.
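
In the same spirit, a much simplified sketch (not Sébastien's actual scripts; it assumes radare2 is installed and sample.bin is the extracted binary): dump all the strings and grep for anything that looks like an IPv4 address or a domain:

rabin2 -zz sample.bin | grep -E '([0-9]{1,3}\.){3}[0-9]{1,3}|[a-z0-9.-]+\.(com|net|org|ru|info)'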

Then, Pedro Drimel Neto (Fox-IT) came on stage to present "The New Era of Android Banking Botnets". It was an interesting review of some banking malware families that spread during the last few years: Perkele, iBanking, GMbot and BankBot. For each of them, he reviewed the infection path, the C&C communications and the backend. Whereas in previous years unencrypted communications occurred via SMS, today it's quite different and the latest malware families are much better (from an attacker's perspective): strong encryption, anti-analysis, packing, C&C communications, etc. The distribution methods have also changed.

The last talk was an excellent review of the Gooligan botnet: "Hunting Down Gooligan" by Elie Bursztein & Oren Koriat. What is Gooligan? It was the first large-scale OAuth-stealing botnet. Since OAuth is used by all major actors on the Internet (Google, Microsoft, Facebook, Twitter, etc), you can imagine the impact of such a botnet. The first version was detected in 2015 by Check Point and it was taken down in November 2016. In a nutshell, it was distributed as repackaged versions of known APKs.

Gooligan in a Nutshell

Once decoded, the payload is downloaded, devices are rooted and persistence is configured. It modifies the install-recovery.sh file used when resetting the phone to factory settings, which makes it very difficult to get rid of the malware. After the technical details, the speakers explained the monetization techniques used by the botnet. There were two: app boosting and ad injection. Stolen OAuth tokens were used to interact with the Play Store to generate fake installs, reviews and searches. Indeed, real users on real phones are much harder to spot than "fraudulent" servers, and the C&C server had all the details needed to spoof the infected phones (IMEI, IMSI, brand, model, token, Android version, etc). The last step was to explain how the remediation was performed: the C&C server was sinkholed and the stolen tokens were revoked. All affected users were notified, which is a challenge given the number of people (1M), different languages, technical skills, etc. I really liked this presentation.

The day finished with beers and pizza in a relaxed atmosphere. Stay tuned for a second wrap-up tomorrow!

[The post Botconf 2017 Wrap-Up Day #1 has been first published on /dev/random]

December 04, 2017

Cable squeeze

Last month, the Chairman of the Federal Communications Commission, Ajit Pai, released a draft order that would soften net neutrality regulations. He wants to overturn the restrictions that make paid prioritization, blocking or throttling of traffic unlawful. If approved, this order could drastically alter the way that people experience and access the web. Without net neutrality, Internet Service Providers could determine what sites you can or cannot see.

The proposed draft order is disheartening. Millions of Americans are trying to save net neutrality; the FCC has received over 5 million emails, 750,000 phone calls, and 2 million comments. Unfortunately this public outpouring has not altered the FCC's commitment to dismantling net neutrality.

The commission will vote on the order on December 14th. We have 10 days to save net neutrality.

Although I have written about net neutrality before, I want to explain the consequences and urgency of the FCC's upcoming vote.

What does Pai's draft order say?

Chairman Pai has long been an advocate for "light touch" net neutrality regulations, and claims that repealing net neutrality will allow "the federal government to stop micromanaging the Internet".

Specifically, Pai aims to scrap the protection that classifies ISPs as common carriers under Title II of the Communications Act of 1934. Radio and phone services are also protected under Title II, which prevents companies from charging unreasonable rates or restricting access to services that are critical to society. Pai wants to treat the internet differently, and proposes that the FCC should simply require ISPs "to be transparent about their practices". The responsibility of policing ISPs would also be transferred to the Federal Trade Commission. Instead of maintaining the FCC's clear-cut and rule-based approach, the FTC would practice case-by-case regulation. This shift could be problematic as a case-by-case approach could make the FTC a weak consumer watchdog.

The consequences of softening net neutrality regulations

At the end of the day, frail net neutrality regulations mean that ISPs are free to determine how users access websites, applications and other digital content.

It is clear that depending on ISPs to be "transparent" will not protect against implementing fast and slow lanes. Rolling back net neutrality regulations means that ISPs could charge website owners to make their website faster than others. This threatens the very idea of the open web, which guarantees an unfettered and decentralized platform to share and access information. Gravitating away from the open web could create inequity in how communities share and express ideas online, which would ultimately intensify the digital divide. This could also hurt startups as they now have to raise money to pay for ISP fees or fear being relegated to the "slow lane".

The way I see it, implementing "fast lanes" could alter the technological, economic and societal impact of the internet we know today. Unfortunately it seems that the chairman is prioritizing the interests of ISPs over the needs of consumers.

What you can do today

Chairman Pai's draft order could dictate the future of the internet for years to come. In the end, net neutrality affects how people, including you and me, experience the web. I've dedicated both my spare time and my professional career to the open web because I believe the web has the power to change lives, educate people, create new economies, disrupt business models and make the world smaller in the best of ways. Keeping the web open means that these opportunities can be available to everyone.

If you're concerned about the future of net neutrality, please take action. Share your comments with the U.S. Congress and contact your representatives. Speak up about your concerns with your friends and colleagues. Organizations like The Battle for the Net help you contact your representatives — it only takes a minute!

Now is the time to stand up for net neutrality: we have 10 days and need everyone's help.

December 02, 2017

I published the following diary on isc.sans.org: “Using Bad Material for the Good“:

There is a huge amount of information shared online by attackers. Once again, pastebin.com is a nice place to start hunting. As this material is available for free, why not use it for the good? Attackers (with or without bots) are constantly looking for entry points on websites. Those entry points are a good place to search, for example, for SQL injections… [Read more]

[The post [SANS ISC] Using Bad Material for the Good has been first published on /dev/random]

December 01, 2017

I published the following diary on isc.sans.org: “Phishing Kit (Ab)Using Cloud Services“:

When you build a phishing kit, there are several critical points to address. You must generate a nice-looking page that matches the original one as closely as possible, and you must work stealthily to avoid being blocked or, at least, to be blocked as late as possible… [Read more]

[The post [SANS ISC] Phishing Kit (Ab)Using Cloud Services has been first published on /dev/random]

This year at Acquia Engage, the Commonwealth of Massachusetts launched Mass.gov on Drupal 8. Holly St. Clair, the Chief Digital Officer of the Commonwealth of Massachusetts, joined me during my keynote to share how Mass.gov is making constituents' interactions with the state fast, easy, meaningful, and "wicked awesome".

Since its founding, Acquia has been headquartered in Massachusetts, so it was very exciting to celebrate this milestone with the Mass.gov team.

Constituents at the center

Today, 76% of constituents prefer to interact with their government online. Before Mass.gov switched to Drupal it struggled to provide a constituent-centric experience. For example, a student looking for information on tuition assistance on Mass.gov would have to sort through 7 different government websites before finding relevant information.

Mass gov before and after

To better serve residents, businesses and visitors, the Mass.gov team took a data-driven approach. After analyzing site data, they discovered that 10% of the content serviced 89% of site traffic. This means that up to 90% of the content on Mass.gov was either redundant, out-of-date or distracting. The digital services team used this insight to develop a site architecture and content strategy that prioritized the needs and interests of citizens. In one year, the team at Mass.gov moved a 15-year-old site from a legacy CMS to Acquia and Drupal.

The team at Mass.gov also incorporated user testing into every step of the redesign process, including usability, information architecture and accessibility. In addition to inviting over 330,000 users to provide feedback on the pilot site, the Mass.gov team partnered with the Perkins School for the Blind to deliver meaningful accessibility that surpasses compliance requirements. This approach has earned Mass.gov a score of 80.7 on the System Usability Scale; 12 percent higher than the reported average.

Open from the start

As an early adopter of Drupal 8, the Commonwealth of Massachusetts decided to open source the code that powers Mass.gov. Everyone can see the code that makes Mass.gov work, point out problems, suggest improvements, or use the code for their own state. It's inspiring to see the Commonwealth of Massachusetts fully embrace the unique innovation and collaboration model inherent to open source. I wish more governments would do the same!

Congratulations Mass.gov

The new Mass.gov is engaging, intuitive and above all else, wicked awesome. Congratulations Mass.gov!

November 29, 2017

So with that nice little page cache experiment concluded, I started working on something that will definitely be included in the next version of Autoptimize: Extra Auto-optimizations!

You can read all about it below or you can skip all of that and immediately download the zipfile of the “AO Extra power-up” from the Github repository.

The 3 features available now:

  • remove emojis
  • remove the (version parameter from the) query string (not that big a deal from a load-time perspective, but still)
  • the ability to have Autoptimize add the async attribute to local or 3rd party JavaScript (local JS files will be excluded automatically if added here).

There’s one extra feature that will very likely be added; optimize Google Fonts (because removing them is considered harsh, apparently).

Do download, do test and do let me know if anything is broken in the comments or via my contact form. And If you have other ideas for extra features, do let me know too!

I published the following diary on isc.sans.org: “Fileless Malicious PowerShell Sample“:

Pastebin.com remains one of my favourite places for hunting. I’m searching for juicy content and reporting findings in a Splunk dashboard:

Yesterday, I found an interesting pastie with a simple Windows CMD script… [Read more]

[The post [SANS ISC] Fileless Malicious PowerShell Sample has been first published on /dev/random]

Airwolf 3D printer

This Thursday, 21 December 2017 at 7 pm, the 64th Mons session of the Belgian Jeudis du Libre will take place.

The topic of this session: Building your own 3D printer

Theme: DIY|Maker|Internet|Graphics|sysadmin|community

Audience: General public|sysadmin|companies|students|…

Speaker: Louis-Marie Croisez

Venue: Université de Mons, Campus Plaine de Nimy, avenue Maistriau, Grands Amphithéâtres, Auditoire Curie (see this map on the UMONS website, or the OSM map).

Participation is free and only requires registration by name, preferably in advance, or at the entrance to the session. Please indicate your intention by registering via http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also benefit from the support of our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, feel free to consult the agenda and to subscribe to the mailing list in order to receive the announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons sessions take place every third Thursday of the month and are organized in collaboration with, and on the premises of, Mons colleges and university faculties involved in IT education (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which promotes free software.

Description: Anyone can buy a 3D printer and, depending on your budget, you will spend €200 to €1000 for an entry-level machine.

The logic is that the more you pay, the more reliability and interesting features you get. To break this logic, if you like tinkering and are handy, it is entirely possible to build your own 3D printer at a controlled cost and to your own specifications. What is the benefit? Besides the obvious savings, this process gives you access to a tool that you master completely and that you can modify and improve according to your own criteria. DIY is also a great way to teach yourself this technology.

Short bio: Louis-Marie Croisez holds a degree in electrical engineering (Ingénieur Civil Électricien). For the past 20 years he has worked in computer networking, telecoms and embedded software, for companies such as CBR, Alcatel and Thales. He currently works at CETIC asbl, an applied IT research centre based in Gosselies. His personal interests are playing music, electronics and tinkering (Maker).

November 28, 2017

The post Root login without password allowed by default on Mac OSX High Sierra appeared first on ma.ttias.be.

Right, this isn't a good day for Apple.

As first reported on Twitter by Lemi Orhan Ergin, you can bypass just about any security dialog on Mac OSX High Sierra (10.13) by using the root user without a password.

Use the user root and click Unlock several times; you'll eventually bypass the dialog and be granted root privileges. You can try it by going to the Users & Groups settings screen and clicking Lock at the bottom.

I'd be very curious to know the technical reasons why this was possible in the first place.

Update: be sure to disable the root user after test

Turns out, testing this actually creates a root user without a password in the background! Make sure to disable the root user in System Preferences to prevent this from getting any worse than it already is.

For a quick workaround, set a non-default (aka: anything) password on the root user via the terminal.

$ sudo passwd -u root

Once a password has been set, it won't change to an empty value anymore.

Also applicable to Remote Management

If you've enabled Remote Management, anyone can log into your Mac using the root user with an empty password.

Woops.

Responsible disclosure?

This issue was first reported on Twitter and is now getting widespread traction. This isn't exactly a good way to disclose security issues, but I'm willing to bet the reporter didn't expect it to get this much media attention.

There's an entire KB about reporting security issues to Apple, if someone ever feels the need to report similar security bugs.

The post Root login without password allowed by default on Mac OSX High Sierra appeared first on ma.ttias.be.

November 27, 2017

November 25, 2017

LOAD 2018 is a GO and will be held on the 21st and 22nd of April 2018 in Antwerp, Belgium.

November 23, 2017

I published the following diary on isc.sans.org: “Proactive Malicious Domain Search“:

In a previous diary, I presented a dashboard that I’m using to keep track of the DNS traffic on my networks. Tracking malicious domains is useful but what if you could, in a certain way, “predict” the upcoming domains that will be used to host phishing pages? Being a step ahead of the attackers is always good, right? Thanks to the CertStream service (provided by Cali Dog Security), you have access to a real-time certificate transparency log update stream… [Read more]
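
As a side note that is not part of the diary itself: consuming the CertStream feed from Python only takes a few lines with the certstream package. A minimal sketch, where the keyword list is purely illustrative:

import certstream

# Watch the certificate transparency stream and print domains containing
# "suspicious" keywords. The keyword list is an arbitrary example.
KEYWORDS = ("paypal", "appleid", "bankofamerica")

def print_callback(message, context):
    if message["message_type"] != "certificate_update":
        return
    for domain in message["data"]["leaf_cert"]["all_domains"]:
        if any(keyword in domain.lower() for keyword in KEYWORDS):
            print("Suspicious certificate issued for: %s" % domain)

certstream.listen_for_events(print_callback, url="wss://certstream.calidog.io/")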


[The post [SANS ISC] Proactive Malicious Domain Search has been first published on /dev/random]

November 22, 2017

Over the past weeks I have shared an update on the Media Initiative and an update on the Layout Initiative. Today I wanted to give an update on the Workflow Initiative.

Creating great software doesn't happen overnight; it requires a desire for excellence and a disciplined approach. Like the Media and Layout Initiatives, the Workflow Initiative has taken such an approach. The disciplined and steady progress these initiatives are making is something to be excited about.

8.4: The march towards stability

As you might recall from my last Workflow Initiative update, we added the Content Moderation module to Drupal 8.2 as an experimental module, and we added the Workflows module in Drupal 8.3 as well. The Workflows module allows for the creation of different publishing workflows with various states (e.g. draft, needs legal review, needs copy-editing, etc) and the Content Moderation module exposes these workflows to content authors.

As of Drupal 8.4, the Workflows module has been marked stable. Additionally, the Content Moderation module is marked beta in Drupal 8.4, and is down to two final blockers before marking stable. If you want to help with that, check out the Content Moderation module roadmap.

8.4: Making more entity types revisionable

To advance Drupal's workflow capabilities, more of Drupal's entity types needed to be made "revisionable". When content is revisionable, it becomes easier to move it through different workflow states or to stage content. Making more entity types revisionable is a necessary foundation for better content moderation, workflow and staging capabilities. But it was also hard work and took various people over a year of iterations — we worked on this throughout the Drupal 8.3 and Drupal 8.4 development cycle.

When working through this, we discovered various adjacent bugs (e.g. bugs related to content revisions and translations) that had to be worked through as well. As a plus, this has led to a more stable and reliable Drupal, even for those who don't use any of the workflow modules. This is a testament to our desire for excellence and disciplined approach.

8.5+: Looking forward to workspaces

While these foundational improvements in Drupal 8.3 and Drupal 8.4 are absolutely necessary to enable better content moderation and content staging functionality, they don't have much to show for in terms of user experience changes. Now that a lot of this work is behind us, the Workflow Initiative has shifted its focus to stabilizing the Content Moderation module, but is also aiming to bring the Workspace module into Drupal core as an experimental module.

The Workspace module allows the creation of multiple environments, such as "Staging" or "Production", and allows moving collections of content between them. For example, the "Production" workspace is what visitors see when they visit your site. Then you might have a protected "Staging" workspace where content editors prepare new content before it's pushed to the Production workspace.

While workflows for individual content items are powerful, many sites want to publish multiple content items at once as a group. This includes new pages, updated pages, but also changes to blocks and menu items — hence our focus on making things like block content and menu items revisionable. 'Workspaces' group all these individual elements (pages, blocks and menus) into a logical package, so they can be prepared, previewed and published as a group. This is one of the most requested features and will be a valuable differentiator for Drupal. It looks pretty slick too:

Drupal workspaces prototype

I'm impressed with the work the Workflow team has accomplished during the Drupal 8.4 cycle: the Workflows module became stable, the Content Moderation module improved by leaps and bounds, and the under-the-hood work has prepared us for content staging via Workspaces. In the process, we've also fixed some long-standing technical debt in the revisions and translations systems, laying the foundation for future improvements.

Special thanks to Angie Byron for contributions to this blog post and to Dick Olsson, Tim Millwood and Jozef Toth for their feedback during the writing process.

November 21, 2017

November 20, 2017

One of the features present in the August release of the SELinux user space is its support for ioctl xperm rules in modular policies. In the past, this was only possible in monolithic ones (and CIL). Through this, allow rules can be extended to cover not only source (domain) and target (resource) identifiers, but also a specific number to which they apply. And ioctls are the first (and currently only) permission on which this is implemented.

Note that ioctl-level permission control isn't a new feature by itself; what is new is that it can now be used in modular policies.

What is ioctl?

Many interactions on a Linux system are done through system calls. From a security perspective, most system calls can be properly categorized based on who is executing the call and what the target of the call is. For instance, the unlink() system call has the following prototype:

int unlink(const char *pathname);

Knowing that a process (source) is executing unlink() (the system call) against a target (path) is sufficient for most security implementations: either the source has the permission to unlink that file or directory, or it doesn't. SELinux maps this to the unlink permission within the file or directory classes:

allow <domain> <resource> : { file dir }  unlink;

Now, ioctl() is somewhat different. It is a system call that allows device-specific operations which cannot be expressed by regular system calls. Devices can have multiple functions/capabilities, and with ioctl() these capabilities can be interrogated or updated. It has the following interface:

int ioctl(int fd, unsigned long request, ...);

The file descriptor is the target device on which an operation is launched. The second argument is the request, which is an integer whose value identifies what kind of operation the ioctl() call is trying to execute. So unlike regular system calls, where the operation itself is the system call, ioctl() actually has a parameter that identifies this.

A list of possible parameter values for a socket, for instance, is available in the Linux kernel source code, under include/uapi/linux/sockios.h.
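
To make that request parameter tangible, here is a small user-space illustration of my own (not taken from the kernel or SELinux documentation): it issues the SIOCGIFHWADDR ioctl against a socket to read an interface's MAC address. The interface name is an assumption, and this only works on Linux.

import fcntl
import socket
import struct

SIOCGIFHWADDR = 0x8927  # request value: get hardware address

# Build an ifreq structure for the (assumed) interface "eth0" and issue the
# ioctl on a dummy UDP socket; the MAC address sits at bytes 18..24.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
result = fcntl.ioctl(s.fileno(), SIOCGIFHWADDR, struct.pack("256s", b"eth0"))
print(":".join("%02x" % byte for byte in result[18:24]))

This is exactly the kind of call that the allowxperm rules discussed below can single out, while a plain ioctl allow rule cannot.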

SELinux allowxperm

For SELinux, having the purpose of the call as part of a parameter means that a regular mapping isn't sufficient. Allowing ioctl() commands for a domain against a resource is expressed as follows:

allow <domain> <resource> : <class> ioctl;

This of course does not allow policy developers to differentiate between harmless or informative calls (like SIOCGIFHWADDR to obtain the hardware address associated with a network device) and impactful calls (like SIOCADDRT to add a routing table entry).

To allow for a fine-grained policy approach, the SELinux developers introduced an extended allow permission, which is capable of differentiating based on an integer value.

For instance, to allow a domain to get a hardware address (SIOCGIFHWADDR, which is 0x8927) from a TCP socket:

allowxperm <domain> <resource> : tcp_socket ioctl 0x8927;

This additional parameter can also be ranged:

allowxperm <domain> <resource> : <class> ioctl 0x8910-0x8927;

And of course, the value can also be complemented (i.e. allow all ioctl parameters except a certain value):

allowxperm <domain> <resource> : <class> ioctl ~0x8927;

Small or negligible performance hit

According to a presentation given by Jeff Vander Stoep at the Linux Security Summit in 2015, the performance impact of this addition to SELinux is well under control, which helped with the introduction of this capability in the Android SELinux implementation.

As a result, interested readers can find examples of allowxperm invocations in the SELinux policy in Android, such as in the app.te file:

# only allow unprivileged socket ioctl commands
allowxperm { appdomain -bluetooth } self:{ rawip_socket tcp_socket udp_socket } ioctl { unpriv_sock_ioctls unpriv_tty_ioctls };

And with that, we again show how fine-grained the SELinux access controls can be.

November 17, 2017

I published the following diary on isc.sans.org: “Top-100 Malicious IP STIX Feed“.

Yesterday, we were contacted by one of our readers who asked if we provide a STIX feed of our blocked list or top-100 suspicious IP addresses. STIX means “Structured Threat Information eXpression” and enables organizations to share indicators of compromise (IOCs) with peers in a consistent and machine-readable manner… [Read more]

[The post [SANS ISC] Top-100 Malicious IP STIX Feed has been first published on /dev/random]

November 16, 2017

So I integrated a page cache (based on KeyCDN Cache Enabler) in Autoptimize, just to see how easy (or difficult) it would be. Turns out it was pretty easy, mostly because Cache Enabler (based on Cachify, which was very popular in Germany until the developer abandoned it) is well-written, simple and efficient. :-)

No plans to release this though. Or do you think I should?

I published the following diary on isc.sans.org: “Suspicious Domains Tracking Dashboard“.

Domain names remain a gold mine when investigating security incidents or preventing malicious activity from occurring on your network (for example by using a DNS firewall). The ISC also has a page dedicated to domain names. But how can we detect potentially malicious DNS activity if domains are not (yet) present in a blacklist? The typical case is DGAs, or Domain Generation Algorithms, used by some malware families… [Read more]


[The post [SANS ISC] Suspicious Domains Tracking Dashboard has been first published on /dev/random]

November 15, 2017

I published the following diary on isc.sans.org: “If you want something done right, do it yourself!“.

Another day, another malicious document! I like to discover how creative the bad guys are when writing new pieces of malicious code. Yesterday, I found another interesting sample. It’s always the same story: a malicious document is delivered by email. The document was called ‘Saudi Declare war Labenon.doc’ (interesting name, by the way!). According to VT, it is already flagged as malicious by many antivirus engines… [Read more]

[The post [SANS ISC] If you want something done right, do it yourself! has been first published on /dev/random]

Now that Drupal 8.4 is released and Drupal 8.5 development is underway, it is a good time to give an update on what is happening with Drupal's Layout Initiative.

8.4: Stable versions of layout functionality

Traditionally, site builders have used one of two layout solutions in Drupal: Panelizer and Panels. Both are contributed modules outside of Drupal core, and both achieved stable releases in the middle of 2017. Given the popularity of these modules, having stable releases closed a major functionality gap that prevented people from building sites with Drupal 8.

8.4: A Layout API in core

The Layout Discovery module added in Drupal 8.3 core has now been marked stable. This module adds a Layout API to core. Both the aforementioned Panelizer and Panels modules have already adopted the new Layout API with their 8.4 release. A unified Layout API in core eliminates fragmentation and encourages collaboration.

8.5+: A Layout Builder in core

Today, Drupal's layout management solutions exist as contributed modules. Because creating and building layouts is expected to be out-of-the-box functionality, we're working towards adding layout building capabilities to Drupal core.

Using the Layout Builder, you start by selecting predefined layouts for different sections of the page, and then populate those layouts with one or more blocks. I showed the Layout Builder in my DrupalCon Vienna keynote and it was really well received:

8.5+: Use the new Layout Builder UI for the Field Layout module

One of the nice improvements that went into Drupal 8.3 was the Field Layout module, which provides the ability to apply pre-defined layouts to what we call "entity displays". Instead of applying layouts to individual pages, you can apply layouts to types of content regardless of what page they are displayed on. For example, you can create a content type 'Recipe' and visually lay out the different fields that make up a recipe. Because the layout is associated with the recipe rather than with a specific page, recipes will be laid out consistently across your website regardless of what page they are shown on.

The basic functionality is already included in Drupal core as part of the experimental Field Layout module. The goal for Drupal 8.5 is to stabilize the Field Layout module, and to improve its user experience by using the new Layout Builder. Eventually, designing the layout for a recipe could look like this:

Drupal field layouts prototype

Layouts remain a strategic priority for Drupal 8, as they were the second most important site builder priority identified in my 2016 State of Drupal survey, right behind Migrations. I'm excited to see the work already accomplished by the Layout team, and look forward to seeing their progress in Drupal 8.5! If you want to help, check out the Layout Initiative roadmap.

Special thanks to Angie Byron for contributions to this blog post, to Tim Plunkett and Kris Vanderwater for their feedback during the writing process, and to Emilie Nouveau for the screenshot and video contributions.

November 13, 2017

Today, I am excited to announce that Michael Sullivan will be joining Acquia as its CEO.

The search for a new CEO

Last spring, Tom Erickson announced that he was stepping down as Acquia's CEO. For over eight years, Tom and I have been working side-by-side to build and run Acquia. I've been lucky to have Tom as my partner as he is one of the most talented leaders I know. When Tom announced he'd be stepping down as Acquia's CEO, finding a new CEO became my top priority for Acquia. For six months, the search consumed a good deal of my time. I was supported by a search committee drawn from Acquia's board of directors, including Rich D'Amore, Tom Bogan, and Michael Skok. Together, we screened over 140 candidates and interviewed 10 in-depth. Finding the right candidate was hard work and time consuming, but we kept the bar high at all times. As much as I enjoyed meeting so many great candidates and hearing their perspective on our business, I'm glad that the search is finally behind me.

The right fit for Acquia

Finding a business partner is like dating; you have to get to know each other, build trust, and see if there is a match. Identifying and recruiting the best candidate is difficult because unlike dating, you have to consider how the partnership will also impact your team, customers, partners, and community. Once I got to know Mike, it didn't take me long to realize how he could help scale Acquia and help make our customers and partners successful. I also realized how much I would enjoy working with him. The fit felt right.

With 25 years of senior leadership in SaaS, enterprise content management and content governance, Mike is well prepared to lead our business. Mike will join Acquia from Micro Focus, where he participated in the merger of Micro Focus with Hewlett Packard Enterprise's software business. The combined company became the world's seventh largest pure-play software company and the largest UK technology firm listed on the London Stock Exchange. At Micro Focus and Hewlett Packard Enterprise, Mike was the Senior Vice President and General Manager for Software-as-a-Service and was responsible for managing over 30 SaaS products.

This summer, I shared that Acquia expanded its focus from website management to data-driven customer journeys. We extended the capabilities of the Acquia Platform with journey orchestration, commerce integrations and digital asset management tools. The fact that Mike has so much experience running a diverse portfolio of SaaS products is something I really valued. Mike's expertise can guide us in our transformation from a single product company to a multi-product company.

Creating a partnership

For many years, I have woken up every day determined to set a vision for the future, formulate a strategy to achieve that vision, and help my fellow Acquians figure out how to achieve it.

One of the most important things in finding a partner and CEO for Acquia was having a shared vision for the future and an understanding of the importance of cloud, Open Source, data-driven experiences, customer success and more. This was very important to me as I could not imagine working with a partner who isn't passionate about these same things. It is clear that Mike shares this vision and is excited about Acquia's future.

Furthermore, Mike's operational strength and enterprise experience will be a natural complement to my focus on vision and product strategy. His expertise will allow Acquia to accelerate its mission to "build the universal platform for the world's greatest digital experiences."

Formalizing my own role

In addition to Mike joining Acquia as CEO, my role will be elevated to Chairman. I will also continue in my position as Acquia CTO. My role has always extended beyond what is traditionally expected of a CTO; my responsibilities have bridged products and engineering, fundraising, investor relations, sales and marketing, resource allocation, and more. Serving as Chairman will formalize the various responsibilities I've taken on over the past decade. I'm also excited to work with Mike because it is an opportunity for me to learn from him and grow as a leader.

Acquia's next decade

The web has the power to change lives, educate the masses, create new economies, disrupt business models and make the world smaller in the best of ways. Digital will continue to change every industry, every company and every life on the planet. The next decade holds enormous promise for Acquia and Drupal because of what the power of digital holds for business and society at large. We are uniquely positioned to deliver the benefits of open source, cloud and data-driven experiences to help organizations succeed in an increasingly complex digital world.

I'm excited to welcome Mike to Acquia as its CEO because I believe he is the right fit for Acquia, has the experience it takes to be our CEO and will be a great business partner to bring Acquia's vision to life. Welcome to the team, Mike!

November 11, 2017

I published the following diary on isc.sans.org: “Keep An Eye on your Root Certificates“.

A few times a year, we can read in the news that a rogue root certificate was installed without the user's consent. The latest story that pops up in my mind is the Savitech audio drivers, which silently install a root certificate. The risks associated with this kind of behaviour are multiple, the most important being MitM attacks. New root certificates are not always the result of an attack or a malware infection. Corporate endpoints might also get new root certificates… [Read more]


[The post [SANS ISC] Keep An Eye on your Root Certificates has been first published on /dev/random]

November 10, 2017

This morning I uploaded version 0.1 of SReview, my video review and transcoding system, to Debian experimental. There's still some work to be done before it'll be perfectly easy for anyone to use, but I do think it has reached the point where it offers basic usability.

Quick HOWTO for how to use it:

  • Enable Debian experimental
  • Install the packages sreview-master, sreview-encoder, sreview-detect, and sreview-web. It's possible to install the four packages on different machines, but let's not go into too much detail there, yet.
  • The installation will create an sreview user and database, and will start the sreview-web service on port 8080, listening only to localhost. The sreview-web package also ships with an apache configuration snippet that shows how to proxy it from the interwebs if you want to.
  • Run sreview-config --action=dump. This will show you the current configuration of sreview. If you want to change something, either change it in /etc/sreview/config.pm, or just run sreview-config --set=variable=value --action=update.
  • Run sreview-user -d --action=create -u <your email>. This will create an administrator user in the sreview database.
  • Open a webbrowser, browse to http://localhost:8080/, and test whether you can log on.
  • Write a script to insert the schedule of your event into the SReview database. Look at the debconf and fosdem scripts for inspiration if you need it. Yeah, that's something I still need to genericize, but I'm not quite sure yet how to do that.
  • Either configure gridengine so that it will have the required queues and resources for SReview, or disable the qsub commands in the SReview state_actions configuration parameter (e.g., by way of sreview-config --action=update --set=state_actions=... or by editing /etc/sreview/config.pm).
  • If you need notification, modify the state_actions entry for notification so that it sends out a notification (e.g., through an IRC bot or an email address, or something along those lines). Alternatively, enable the "anonreviews" option, so that the overview page has links to your talk.
  • Review the inputglob and parse_re configuration parameters of SReview. The first should contain a filesystem glob that will find your raw assets; the second should parse the filename into room, year, month, day, hour, minute, and second components (see the sketch after this list for the general shape). Look at the defaults of those options for examples (or just use those, and store your files as /srv/sreview/incoming/<room>/<year>-<month>-<day>/<hour>:<minute>:<second>.*).
  • Provide an SVG file for opening credits, and point to it from the preroll_template configuration option.
  • Provide an SVG or PNG file for closing credits, and point to it from the postroll_template or postroll configuration option, respectively.
  • Start recording, and watch SReview do its magic :-)
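
As mentioned in the list above, here is a purely hypothetical sketch of the shape parse_re is expected to have (SReview's real configuration lives in Perl, in /etc/sreview/config.pm; this Python version only illustrates the named components):

import re

# Parse the default layout
# /srv/sreview/incoming/<room>/<year>-<month>-<day>/<hour>:<minute>:<second>.*
parse_re = re.compile(
    r".*/(?P<room>[^/]+)/"
    r"(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2})/"
    r"(?P<hour>\d{2}):(?P<minute>\d{2}):(?P<second>\d{2})\."
)

match = parse_re.match("/srv/sreview/incoming/room1/2017-11-10/14:30:00.flac")
if match:
    print(match.groupdict())
    # {'room': 'room1', 'year': '2017', 'month': '11', 'day': '10',
    #  'hour': '14', 'minute': '30', 'second': '00'}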

There are still some bits of the above list that I want to make easier to do, and there are still some things that shouldn't be strictly necessary, but all in all, I think SReview has now reached a level of maturity that made me feel confident doing its first upload to Debian.

Did you try it out? Let me know what you think!

In my blog post, "A plan for media management in Drupal 8", I talked about some of the challenges with media in Drupal, the hopes of end users of Drupal, and the plan that the team working on the Media Initiative was targeting for future versions of Drupal 8. That blog post is one year old today. Since that time we released both Drupal 8.3 and Drupal 8.4, and Drupal 8.5 development is in full swing. In other words, it's time for an update on this initiative's progress and next steps.

8.4: A Media API in core

Drupal 8.4 introduced a new Media API to core. For site builders, this means that Drupal 8.4 ships with the new Media module (albeit still hidden from the UI, pending necessary user experience improvements), which is an adaptation of the contributed Media Entity module. The new Media module provides a "base media entity". Having a "base media entity" means that all media assets — local images, PDF documents, YouTube videos, tweets, and so on — are revisable, extendable (fieldable), translatable and much more. It allows all media to be treated in a common way, regardless of where the media resource itself is stored. For end users, this translates into a more cohesive content authoring experience; you can use consistent tools for managing images, videos, and other media rather than different interfaces for each media type.

8.4+: Porting contributed modules to the new Media API

The contributed Media Entity module was a "foundational module" used by a large number of other contributed modules. It enables Drupal to integrate with Pinterest, Vimeo, Instagram, Twitter and much more. The next step is for all of these modules to adopt the new Media module in core. The required changes are laid out in the API change record, and typically only require a couple of hours to complete. The sooner these modules are updated, the sooner Drupal's rich media ecosystem can start benefitting from the new API in Drupal core. This is a great opportunity for intermediate contributors to pitch in.

8.5+: Add support for remote video in core

As proof of the power of the new Media API, the team is hoping to bring in support for remote video using the oEmbed format. This allows content authors to easily add e.g. YouTube videos to their posts. This has been a long-standing gap in Drupal's out-of-the-box media and asset handling, and would be a nice win.

8.6+: A Media Library in core

The top two requested features for the content creator persona are richer image and media integration and digital asset management.

The top content author improvements for Drupal
The results of the State of Drupal 2016 survey show the importance of the Media Initiative for content authors.

With a Media Library, content authors can select pre-existing media from a library and easily embed it in their posts. Having a Media Library in core would be very impactful for content authors, as it helps with both of these feature requests.

During the 8.4 development cycle, a lot of great work was done to prototype the Media Library discussed in my previous Media Initiative blog post. I was able to show that progress in my DrupalCon Vienna keynote:

The Media Library work uses the new Media API in core. Now that the new Media API landed in Drupal 8.4 we can start focusing more on the Media Library. Due to bandwidth constraints, we don't think the Media Library will be ready in time for the Drupal 8.5 release. If you want to help contribute time or funding to the development of the Media Library, have a look at the roadmap of the Media Initiative or let me know and I'll get you in touch with the team behind the Media Initiative.

Special thanks to Angie Byron for contributions to this blog post and to Janez Urevc, Sean Blommaert, Marcos Cano Miranda, Adam G-H and Gábor Hojtsy for their feedback during the writing process.

November 07, 2017

I published the following diary on isc.sans.org: “Interesting VBA Dropper“.

Here is another sample that I found in my spam trap. The technique to infect the victim’s computer is interesting. I captured a mail with a malicious RTF document (SHA256: c247929d3f5c82247db9102d2dec28c27f73dc0824f8b386f92aad1a22fd8edd) that exploits the OLE2Link vulnerability (CVE-2017-0199). Once opened, the document fetches the following URL… [Read more]

[The post [SANS ISC] Interesting VBA Dropper has been first published on /dev/random]

November 06, 2017

I like what the Ubuntu people did when adopting Gnome as the new desktop after the dismissal of Unity. When the change was announced some months ago, I decided to move to Gnome and see if I liked it. I did.

It’s a good idea to take advantage of the small changes Ubuntu made to Gnome 3. Forking dash-to-dock was a great idea so that untested (e.g. upstream) updates don’t break the desktop. I won’t discuss settings you can change through the “Settings” application (Ubuntu Dock settings) or through “Tweaks”:

$ sudo apt-get install gnome-tweak-tool

It’s a good idea, though, to remove third party extensions so you are sure you’re using the ones provided and adapted by Ubuntu. You can always add new extensions later (the most important ones are even packaged).

$ rm -rf ~/.local/share/gnome-shell/extensions/*

Working with Gnome 3, and to a lesser extent with MacOS, taught me that I prefer bars and docks to autohide. I never did in the past, but I feel that Gnome (and MacOS) got this right. I certainly don’t like the full-height dock: make it only as tall as needed. You can use the graphical “dconf Editor” tool to make the changes, but I prefer the safer command line (you won’t make a change by accident).

To prevent the Ubuntu Dock from taking up all the vertical space (i.e., being mostly just an empty bar):

$ dconf write /org/gnome/shell/extensions/dash-to-dock/extend-height false

A neat Dock trick: when hovering over an icon on the dock, cycle through the windows of that application by scrolling (or using two fingers). Way faster than click + select:

$ dconf write /org/gnome/shell/extensions/dash-to-dock/scroll-action "'cycle-windows'"

I set the dock to autohide in the regular “Settings” application. An extension is needed to do the same for the Top Bar (you need to log out, and then enable it through the “Tweaks” application):

$ sudo apt-get install gnome-shell-extension-autohidetopbar

Oh, just to be safe (e.g., in case you broke something), you can reset all the Gnome settings with:

$ dconf reset -f /

Have a look at the comments for some extra settings (that I personally do not use, but many do).

Some options that I don’t use, but that people have asked me about (here and elsewhere)

Especially with the scrolling setting above, you may want to only switch between windows of the same application in the active workspace. You can isolate workspaces with:

$ dconf write /org/gnome/shell/extensions/dash-to-dock/isolate-workspaces true

Hide the dock all the time, instead of only when needed. You can do this by disabling “intellihide”:

$ dconf write /org/gnome/shell/extensions/dash-to-dock/intellihide false


Filed under: Uncategorized Tagged: better-defaults-needed-department, dconf, gnome, Gnome3, Ubuntu, Ubuntu 17.10

The Drupal render pipeline and its caching capabilities have been the subject of quite a few talks of mine and of multiple writings. But all of those were very technical, very precise.

Over the past year and a half I’d heard multiple times there was a need for a more pragmatic talk, where only high-level principles are explained, and it is demonstrated how to step through the various layers with a debugger. So I set out to do just that.

I figured it made sense to spend 10–15 minutes explaining (using a hand-drawn diagram that I spent a lot of time tweaking) and spend the rest of the time stepping through things live. Yes, this was frightening. Yes, there were last-minute problems (my IDE suddenly didn’t allow font size scaling …), but it seems overall people were very satisfied :)

Have you seen and heard of Render API (with its render caching, lazy builders and render pipeline), Cache API (and its cache tags & contexts), Dynamic Page Cache, Page Cache and BigPipe? Have you cursed them, wondered about them, been confused by them?

I will show you three typical use cases:

  1. An uncacheable block
  2. A personalized block
  3. A cacheable block that you can see if you have a certain permission and that should update whenever some entity is updated

… and for each, will take you on the journey through the various layers: from rendering to render caching, on to Dynamic Page Cache and eventually Page Cache … or BigPipe.

Coming out of this session, you should have a concrete understanding of how these various layers cooperate, how you as a Drupal developer can use them to your advantage, and how you can test that it’s behaving correctly.

I’m a maintainer of Dynamic Page Cache and BigPipe, and an effective co-maintainer of Render API, Cache API and Page Cache.

Preview:

November 04, 2017

As part of working in Acquia’s Office of the CTO, I’ve been working on the API-First Initiative for the past year and a half! Where are we at? Find out :)

Preview:

November 03, 2017

I published the following diary on isc.sans.org: “Simple Analysis of an Obfuscated JAR File“.

Yesterday, I found in my spam trap a file named ‘0.19238000 1509447305.zip’ (SHA256: 7bddf3bf47293b4ad8ae64b8b770e0805402b487a4d025e31ef586e9a52add91). The ZIP archive contained a Java archive named ‘0.19238000 1509447305.jar’ (SHA256: b161c7c4b1e6750fce4ed381c0a6a2595a4d20c3b1bdb756a78b78ead0a92ce4). The file had a score of 0/61 in VT and looks to be a nice candidate for a quick analysis. .jar files are ZIP archives that contain compiled Java classes and a Manifest file that points to the initial class to load. Let’s decompile the classes. To achieve this, I’m using a small Docker container… [Read more]

[The post [SANS ISC] Simple Analysis of an Obfuscated JAR File has been first published on /dev/random]

November 01, 2017

October 31, 2017

The post Fall cleaning: shutting down some of my projects appeared first on ma.ttias.be.

Just a quick heads-up, I'm taking several of my projects offline as they prove to be too much of a hassle.

Mailing List Archive

Almost 2 years ago, I started a project to mirror/archive public mailing lists in a readable, clean layout. It's been fun, but quite a burden to maintain. In particular;

  • The mailing list archive now consists of ~3.500.000 files, which meant special precautions with filesystems (running out of inodes) & other practical concerns related to backups.
  • Abuse & take-down reports: lots of folks mailed a public mailing list, apparently not realizing it would be -- you know -- public. So they ask for their name & email to be removed from those that mirror it. Like me.

    Or, in one spectacular case, a take down request by Italian police because of some drug-related incident on the Jenkins public mailing list. Fun stuff. It involved threatening lawyer mails that I quickly complied with.

All of that combined, it isn't worth it. It got around 1.000 daily pageviews, mostly from the Swift mailing list community. Sorry folks, it's gone now.

This also means the death of the @foss_security & @oss_announce Twitter accounts, since those were built on top of the mailing list archive.

Manpages Archive

I ran a manpages archive on man.ttias.be, but nobody used it (or knew about it), so I deleted it. Also hard to keep up to date, with different versions of packages & different manpages etc.

Also, it's surprisingly hard to download all manpages, extract them, turn them into HTML & throw them on a webserver. Who knew?!

Coinjump

This was a wild idea where I tried to crowdsource the meaning behind cryptocurrency price jumps. A coin can increase in value at a rate of 400% or more in just a few hours; the idea was to find the reason behind it and learn from it.

It also fueled the @CoinJumpApp account on Twitter, but that's dead now too.

The source is up on Github, but I'm no longer maintaining it.

More time for other stuff!

Either way, some projects just cause me more concern & effort than I care to put into them, so time to clean that up & get some more mental braincycles for different projects!

The post Fall cleaning: shutting down some of my projects appeared first on ma.ttias.be.

This October, Acquia welcomed over 650 people to the fourth annual Acquia Engage conference. In my opening keynote, I talked about the evolution of Acquia's product strategy and the move from building websites to creating customer journeys. You can watch a recording of my keynote (30 minutes) or download a copy of my slides (54 MB).

I shared that a number of new technology trends have emerged, such as conversational interfaces, beacons, augmented reality, artificial intelligence and more. These trends give organizations the opportunity to re-imagine their customer experience. Existing customer experiences can be leapfrogged by taking advantage of more channels and more data (e.g. be more intelligent, be more personalized, and be more contextualized).

Digital market trends aligning

I gave an example of this in a blog post last week, which showed how augmented reality can improve the shopping experience and help customers make better choices. It's just one example of how these new technologies advance existing customer experiences and move our industry from website management to customer journey management.

Some of the most important market trends in digital for 2017

This is actually good news for Drupal as organizations will have to create and manage even more content. This content will need to be optimized for different channels and audience segments. However, it puts more emphasis on content modeling, content workflows, media management and web service integrations.

I believe that the transition from web content management to data-driven customer journeys is full of opportunity, and it has become clear that customers and partners are excited to take advantage of these new technology trends. This year's Acquia Engage showed how our own transformation will empower organizations to take advantage of new technology and transform how they do business.

When you use a tool every day, you gain more and more knowledge about it, but you also get plenty of ideas to improve it. I’m using Splunk on a daily basis in many customers’ environments as well as for personal purposes. When you have a big database of events, it quickly becomes necessary to deploy techniques that help you extract juicy information from this huge amount of data. The classic way to do hunting is to submit IOCs to Splunk (IP addresses, domains, hashes, etc) and to schedule searches or search for them in real time. A classic schema is:

Splunk Data Flow

Inputs are logs, OSINT sources or output from 3rd-party tools. Outputs are enriched data. A good example is using the MISP platform: useful IOCs are extracted at regular intervals via the API and injected into Splunk for later searching and reporting.

# wget --header 'Authorization: xxxxx' \
       --no-check-certificate \
       -O /tmp/domain.txt \
       https://misp/attributes/text/download/domain/false/false/false/false/false/7d

This process has a limit: new IOCs are not immediately available when they are only exported on a daily basis (or every x hours). When a new major threat appears, like Bad Rabbit last week, it is useful to have a way to search immediately for the first IOCs released by security researchers. How to achieve this? You can run the export procedure manually by opening a session on the Splunk server and executing commands (but then people need access to the console) or… use a custom search command! Splunk has a very nice language for performing queries, but did you know that you can extend it with your own commands? How?

A Splunk custom search command is just a small program, written in a language that can be executed in the Splunk environment, that processes input data to generate new output data (the basis of any computer program). My choice was to use Python; there is an SDK available.
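
To give an idea of what the skeleton of such a generating command looks like with the Splunk Python SDK, here is a minimal, simplified sketch. It is not the actual getmispioc code (that one is on my GitHub account, see below); the MISP query is stubbed out and the option handling is reduced to a single parameter:

import sys
from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration, Option

def fetch_iocs(last):
    # Placeholder: this is where PyMISP would query the MISP instance,
    # e.g. for the attributes of the last X hours or days.
    return [{"value": "bad-domain.example", "type": "domain", "category": "Network activity"}]

@Configuration()
class GetMispIocCommand(GeneratingCommand):
    last = Option(require=False, default="1d")

    def generate(self):
        # Each yielded dict becomes an event/row in the Splunk search results.
        for ioc in fetch_iocs(self.last):
            yield {"value": ioc["value"], "type": ioc["type"], "category": ioc["category"]}

dispatch(GetMispIocCommand, sys.argv, sys.stdin, sys.stdout, __name__)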

I wrote a custom search command that interacts with MISP to get IOCs. Example:

Custom Search Command Example 1

The command syntax is:

|getmispioc [server=https://host:port] 
            [authkey=misp-authorization-key]
            [sslcheck=y|n]
            [eventid=id]
            [last=interval]
            [onlyids=y|n]
            [category=string]
            [type=string]

The only mandatory parameters are ‘eventid’ (to return IOCs from a specific event) or ‘last’ (to return IOCs from the last x hours, days, weeks or months). You can filter the returned data with the ‘onlyids’ flag and/or a specific category or type. Example:

|getmispioc last=2d onlyids=y type=ip-dst

Custom Search Command Example 2

Now, you can integrate the command into more complex queries to search for IOCs across your logs. Here is an example with BIND logs to search for interesting domains:

source=/var/log/named/queries.log
[|getmispioc last=5d type=domain
 |rename value as query
 |fields query
]

Custom Search Command Example 3

The custom command is based on PyMISP. The script and installation details are available on my github account.

[The post Splunk Custom Search Command: Searching for MISP IOC’s has been first published on /dev/random]

I published the following diary on isc.sans.org: “Some Powershell Malicious Code“.

Powershell is a great language that can interact at a low-level with Microsoft Windows. While hunting, I found a nice piece of Powershell code. After some deeper checks, it appeared that the code was not brand new but it remains interesting to learn how a malware infects (or not) a computer and tries to collect interesting data from the victim… [Read more]

[The post [SANS ISC] Some Powershell Malicious Code has been first published on /dev/random]

Le réchauffement climatique est-il l’œuvre de l’homme ou est-ce un phénomène naturel ?

Le discours climato-sceptique est tellement nocif qu’il a réussi à créer un débat là où, si vous réfléchissez un petit peu, il ne devrait pas y avoir l’ombre d’un hésitation.

Nul besoin de recourir à des dizaines d’études, à des consensus de scientifiques ou à un quelconque argument d’autorité. Branchez votre cerveau et laissez-moi 5 minutes pour vous expliquer.

La quantité totale de carbone

Si on considère l’apport des météorites et l’évaporation de l’atmosphère dans l’espace comme négligeables, ce qu’ils sont, on peut considérer que le nombre d’atomes de carbone présents sur terre est fixe.

Il y a donc un nombre déterminé d’atomes de carbones sur la planète terre. Pendant des milliards d’années, ces atomes existaient essentiellement sous forme minérale (graphite, diamant), sous forme organique (tous les êtres vivants) et sous forme de CO2 dans l’atmosphère.

Le carbone sous forme minérale est stable et sa quantité n’a jamais vraiment évolué depuis la création de la planète. On peut donc sans scrupule se concentrer sur les atomes de carbones qui sont soit dans les êtres vivants (vous êtes essentiellement composés d’atomes de carbones), soit dans l’atmosphère sous forme de CO2.

The cycle of life

Plants feed on the CO2 in the atmosphere to capture the carbon they need to live. They then release the excess oxygen, which for them is a waste product. Splitting CO2 into carbon and oxygen is an endothermic reaction that requires energy. That energy is supplied by the sun through photosynthesis.

Aerobic living beings, which include us, feed on other living beings (plants, animals) to capture the carbon atoms they need. These carbon atoms are stored and then burned with oxygen to produce energy. The waste product is CO2. Burning carbon is an exothermic reaction, one that releases energy.

In short, you eat carbon that comes from plants or other animals, you store it as sugar and fat and, when your body needs energy, those atoms react with the oxygen brought in by breathing and the bloodstream. The reaction produces CO2, which is exhaled, and energy, part of which dissipates as heat. That is why you feel hot and out of breath during exercise: your body is burning more carbon than usual, it overheats and has to get rid of much more CO2.

A subtle equilibrium

As we can see, the richer the atmosphere is in CO2, the more carbon plants have at their disposal and the more they grow. It is actually a simple experiment: in a high-CO2 environment, plants flourish far more.

But if there is more carbon in the plants, there is necessarily less in the atmosphere. The result is an equilibrium in which the carbon captured by plants matches the carbon released by the respiration of animals (or by decomposing plants).

Since CO2 is a greenhouse gas, this carbon equilibrium has a direct impact on the planet’s climate. And, a few billion years ago, that equilibrium resulted in a climate much warmer than today’s.

Carbon fossilization and the climate

However, one process broke this equilibrium. When they died, some living beings (cells, plants or animals) sank into the ground. The carbon they were made of could therefore not return to the atmosphere, whether by decomposing or by serving as food for other animals.

Underground, pressure and time eventually turned these remains into oil, coal or natural gas.

With all that carbon no longer available at the surface, a new equilibrium emerged with less and less CO2 in the atmosphere, which led to a general cooling of the planet. That ice age saw the appearance of Homo sapiens.

The obvious effect of “defossilization”

While burning wood or breathing are activities that produce CO2, they do not disturb the planet’s carbon equilibrium. The carbon atom in the CO2 molecule produced is already part of the current equilibrium: it was very recently in the atmosphere, was captured by a living being, and then returned there.

On the other hand, it seems obvious that if we dig up fossil carbon (oil, gas, coal) and release it into the atmosphere by burning it, we necessarily increase the total amount of carbon in the planet’s cycle and, from there, the amount of CO2 in the atmosphere and therefore the temperature. That is why comparing the CO2 emissions of a cyclist and a car is absurd: only the car “defossilizes” carbon and has an impact on the climate.

In theory, such an upheaval could be offset by an increase in vegetation to absorb the excess CO2. Unfortunately, the change is too fast for vegetation to adapt. Worse: we are reducing that vegetation, mainly through deforestation in the Amazon.

Burning fossil fuels therefore has a direct effect on global warming. If we burned all the planet’s fossil fuel reserves, Antarctica would melt completely, ice and snow would no longer exist on the planet and sea level would be 30 to 40 metres above today’s.

So why is there a debate?

While scientists are absolutely unanimous that burning fossil fuels increases global warming, this truth is particularly inconvenient for the business world, which literally lives by burning fossil fuels.

For a while, the idea was therefore floated that the planet was simply in the warming phase of a natural climate cycle. Different models then competed to determine how much of the warming was humanity’s responsibility.

But we have to admit that this debate is absurd. It is as if two people on the first floor of a burning house were debating the origin of the fire: an accidental short circuit or arson? By now it should be clear to you that burning fossil fuels increases global warming, which makes human responsibility indisputable.

Yet these debates have been exploited by the business world: “Look, scientists disagree on certain details of global warming. Therefore global warming does not exist.”

This anti-scientific strategy is used over and over: the tobacco industry, the Roundup scandal, the creationists. They all claim that if scientists disagree on certain details, we cannot be certain, and if we are not certain, we should keep doing as before. If necessary, it is enough to grease the palms of a few scientists to introduce doubt where the consensus was perfect.

Unlike the creationists or the tobacco industry, whose impact on the planet remains relatively limited, the dangerous ignorance of the climate sceptics serves the economic interests of the whole world! This fake debate allows anyone who drives a car, any industrialist burning fossil fuels, any employee living indirectly off our economy to shrug off their responsibility.

In summary

Burning fossil fuels releases into the atmosphere carbon that was previously inert (hence the word “fossil”). More carbon in the atmosphere means a stronger greenhouse effect and therefore a rise in temperature. This is airtight and absolutely indisputable. But climate scepticism appeals to us because it lets us off the hook and spares us from questioning our way of life.

We are in a burning house but, since some people think the arsonist merely stoked a fire that was already smouldering, we can declare that there is no fire at all!

 

Photos by Lukas Schlagenhauf, US Department of Agriculture, Cameron Strandberg.

Did you enjoy this read? Support the author on Tipeee, Patreon, Paypal or Liberapay. Even a symbolic donation makes all the difference! Then let’s meet up on Facebook, Medium, Twitter or Mastodon.

This text is published under the CC-By BE license.

October 29, 2017

The F-35 shows that the West is too foolish to build aircraft that are, by design and by purpose, actually fit for the job.

Instead, they want a political aircraft that ends up being good at nothing.

What can I say, as a techie?

Well, fools.

October 27, 2017

Michel Bauwens in Mons, November 16, 2017. This Thursday, November 16, 2017, the 63rd Mons session of the Jeudis du Libre de Belgique will take place; it is in fact an event organized by Creative Valley at the Mundaneum, to which the JDL will actively contribute. Exceptional format, unusual venue, and the same goes for the schedule:

  • 15:00 – 17:30: ideation workshops
  • 18:00 – 19:30: talk by Michel Bauwens
  • 19:30 – 21:30: networking evening

The topic of this session: Saving the World? Towards a collaborative economy through the pooling of resources… Or when everyone’s contribution becomes the driver of the innovation process.

  • Theme: community
  • Audience: everyone
  • Speaker and facilitator: Michel Bauwens
  • Venue: Mundaneum, 76 rue de Nimy, 7000 Mons (see this map on the OpenStreetMap website)

Full programme, description of the workshops and of the talk, registration required: https://www.eventbrite.fr/e/billets-sauver-le-monde-ateliers-conference-de-michel-bauwens-et-networking-39219758353

The Jeudis du Libre in Mons also benefit from the support of our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, do not hesitate to check the agenda and to subscribe to the mailing list so that you receive every announcement.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organized in the premises of, and in collaboration with, Mons universities and colleges involved in computer science education (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

October 26, 2017

How augmented reality can be used to superimpose product information

Last spring, Acquia Labs built a chatbot prototype that helps customers choose recipes and plan shopping lists with dietary restrictions and preferences in mind. The ability to interact with a chatbot assistant rather than having to research and plan everything on your own can make grocery shopping much easier. We wanted to take this a step further and explore how augmented reality could also improve the shopping experience.


The demo video above shows how a shopper named Alex interacts with an augmented reality application to remove friction from her shopping experience at Freshland Market (a fictional grocery store). The Freshland Market mobile application not only guides Alex through her shopping list but also helps her to make more informed shopping decisions through augmented reality overlays. It superimposes useful information such as price, user ratings and recommended recipes over shopping items detected by a smartphone camera. The application can personalize Alex's shopping experience by highlighting products that fit her dietary restrictions or preferences.

What is exciting about this demo is that the Acquia Labs team built the Freshland Market application with Drupal 8 and augmented reality technology that is commercially available today.

An augmented reality architecture using Drupal and Vuforia

The first step in developing the application was to use an augmented reality library, Vuforia, which identifies pre-configured targets. In our demo, these targets are images of product labels, such as the tomato sauce and cereal labels shown in the video. Each target is given a unique ID. This ID is used to query the Freshland Market Drupal site for content related to that target.

The Freshland Market site stores all of the product information in Drupal, including price, dietary concerns, and reviews. Thanks to Drupal's web services support and the JSON API module, Drupal 8 can serve content to the Freshland Market application. This means that if the Drupal content for Rosemary & Olive Oil chips is edited to mark the item on sale, this will automatically be reflected in the content superimposed through the mobile application.
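
To make that flow concrete, here is a minimal sketch of what such a client-side lookup could look like. It is only an illustration: the site URL, the "product" content type and the field_target_id field are hypothetical names, not the demo's actual implementation.

# Minimal sketch: fetch product data for an AR target from a Drupal 8 site
# exposing content via the JSON API module. The site URL, the "product"
# bundle and the field_target_id field are hypothetical names.
import requests

SITE = "https://freshlandmarket.example.com"  # placeholder

def product_for_target(target_id):
    # JSON API exposes nodes under /jsonapi/node/<bundle> and supports
    # filtering on fields through query parameters.
    url = SITE + "/jsonapi/node/product"
    params = {"filter[field_target_id]": target_id}
    response = requests.get(url, params=params, timeout=10)
    response.raise_for_status()
    data = response.json()["data"]
    return data[0]["attributes"] if data else None

if __name__ == "__main__":
    print(product_for_target("rosemary-olive-oil-chips"))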

In addition to information on price and nutrition, the Freshland Market site also stores the location of each product. This makes it possible to guide a shopper to the product's location in the store, evolving the shopping list into a shopping route. This makes finding grocery items easy.

Augmented reality is building momentum because it moves beyond the limits of a traditional user interface, or in our case, the traditional website. It superimposes a digital layer onto a user's actual world. This technology is still emerging, and is not as established as virtual assistants and wearables, but it continues to gain traction. In 2016, the augmented reality market was valued at $2.39 billion and it is expected to reach $61.39 billion by 2023.

What is exciting is that these new technology trends require content management solutions. In the featured demo, there is a large volume of product data and content that needs to be managed in order to serve the augmented reality capabilities of the Freshland Market mobile application. The Drupal community's emphasis on making Drupal API-first in addition to supporting distributions like Reservoir means that Drupal 8 is prepared to support emerging channels.

If you are ready to start reimagining how your organization interacts with its users, or how to take advantage of new technology trends, Acquia Labs is here to help.

Special thanks to Chris Hamper and Preston So for building the Freshland Market augmented reality application, and thank you to Ash Heath and Drew Robertson for producing the demo video.

I was in for a nice surprise this week. Andy Georges, Lieven Eeckhout and I received an ACM SIGPLAN award for the most influential OOPSLA 2007 paper. Our paper was called "Statistically rigorous Java performance evaluation" and was published 10 years ago at the OOPSLA conference. It helped set a standard for benchmarking the performance of Java applications. A quick look on the ACM website shows that our paper has been cited 156 times. The award was totally unexpected, but much appreciated. As much as I love my current job, thinking back to some of my PhD research makes me miss my academic work. Congratulations Andy and Lieven!

October 25, 2017

The idea of this Docker container came after reading Micah Hoffman’s excellent blog post: Dark Web Report + TorGhost + EyeWitness == Goodness. Like Micah, I’m also receiving a daily file with new websites discovered on the (dark|deep) web (name it as you prefer). This service is provided by the @hunchly Twitter account. Once a day, you get an XLSX sheet with newly discovered websites. Micah explained how to automatically process these URLs. He’s using TorGhost with EyeWitness (written by Chris Truncer). It worked very well but it requires the installation of many libraries (EyeWitness has a lot of software dependencies) and I hate polluting my system with libraries used by only a few tools. That’s why I’m using Docker to run everything in a container. Also, processing a new XLSX file every day is really a pain! So let’s automate as much as possible. To process the XLSX file, I wrote a small Python script, xlsxtract.py (see my previous blog post):

# ./xlsxtract.py -w 'New Today' -c A -r 2- -s /tmp/HiddenServices.xlsx
marketos2sttgxde.onion
s4usinb4eu7exiqj.onion
jt3wzqga4wrprwrf.onion
anonstni3rufuvab.onion
torhnpnu2vv5xtrh.onion
torl7e6yohnjtrn3.onion
anggactr2fturxop.onion
ntqb6tpjsdl4kpu7.onion
gfdbbv5mmprt3dor.onion
22222owjmamxwgv4.onion
cbehcy6letx6vnao.onion
2reich6dcr3dclrx.onion
36663z4ei2552lu6.onion

I built a Docker container (‘TorWitness’) that performs the following tasks:

  • Setup TorGhost and connect to the Tor network
  • Extract .onion URLs from XLS files
  • Take screenshots of the URLs via EyeWitness

But, sometimes, it can be helpful to visit other websites (not only .onion sites) via the Tor network. That’s why you can pass your own list of URLs as an argument.
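
Wherever the URLs come from, the extraction logic itself is simple. Here is a minimal sketch (an illustration based on openpyxl, not necessarily the exact code running in the container) of pulling .onion URLs out of the XLSX files dropped in the /data directory:

# Illustration only, not necessarily the container's exact code: extract
# .onion URLs from the XLSX files dropped in /data.
import glob
import re

from openpyxl import load_workbook

ONION_RE = re.compile(r"[a-z2-7]{16,56}\.onion", re.IGNORECASE)

def extract_onions(directory="/data"):
    found = set()
    for path in glob.glob(directory + "/*.xlsx"):
        workbook = load_workbook(path, read_only=True)
        for sheet in workbook:
            for row in sheet.iter_rows():
                for cell in row:
                    if isinstance(cell.value, str):
                        found.update(ONION_RE.findall(cell.value))
    return sorted(found)

if __name__ == "__main__":
    print("\n".join(extract_onions()))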

Here is an example of container execution:

$ cat $HOME/torwitness/urls.txt
https://blog.rootshell.be
https://isc.sans.edu
$ docker run \
     --rm \
     -it \
     -v $HOME/torwitness:/data \
     --cap-add=NET_ADMIN --cap-add=NET_RAW \
     torwitness \
     urls.txt
      _____           ____ _               _
     |_   _|__  _ __ / ___| |__   ___  ___| |_
       | |/ _ \| '__| |  _| '_ \ / _ \/ __| __|
       | | (_) | |  | |_| | | | | (_) \__ \ |_
       |_|\___/|_|   \____|_| |_|\___/|___/\__|
    v2.0 - SusmithHCK | www.khromozome.com
[done]
[12:19:18] Configuring DNS resolv.conf file.. [done]
 * Starting tor daemon...  [OK]
[12:19:18] Starting tor service..  [done]
[12:19:19] setting up iptables rules [done]
[12:19:19] Fetching current IP...
[12:19:19] CURRENT IP : 51.15.79.107
Using environment variables:
TIMEOUT=30
MAX_RETRIES=3
Found Onion URLs to process:
https://blog.rootshell.be
https://isc.sans.edu
################################################################################
#                                  EyeWitness                                  #
################################################################################
Starting Web Requests (2 Hosts)
Attempting to screenshot https://blog.rootshell.be
Attempting to screenshot https://isc.sans.edu
Finished in 22.2635469437 seconds
$ cd $HOME/torwitness
$ ls
results-20171025113745 urls.txt
$ cd results-20171025113745
$ firefox result.html

If you don’t pass a file to the container, it will parse the XLSX files in the /data directory and extract the .onion URLs. There are multiple cases where this container can be helpful:

  • Browsing the dark web
  • Performing reconnaissance phase
  • Hunting
  • Browsing attacker’s resources

To build the container yourself, the instructions and the Dockerfile are available on my GitHub repository.

[The post “TorWitness” Docker Container: Automated (Tor) Websites Screenshots has been first published on /dev/random]

October 24, 2017

Excel sheets are very common files in corporate environments. They are definitely not a security tool, but it’s not rare to find useful information stored in such files. When these data must be processed for threat hunting or to collect IOCs, it is mandatory to automate, as much as possible, the processing of the data. Here is a good example: every day, I receive one email that contains a list of new URLs found on the dark web (Tor network). I’m processing them automatically with a script but it’s a painful task: open the Excel sheet, select the URLs, copy, create a text file, paste and feed the file to the next script. Being a lazy guy, why not automate this?

Luckily, Python has a nice module called openpyxl which is very helpful to read/write XLS[X|M] files. Reading the content of a file can be performed with only a few lines of Python, but I decided to write a small tool that can extract specific zones of a sheet based on command-line arguments. I called the script xlsxtract.py.
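
To give an idea of how little code a raw read takes, here is a minimal sketch (the file and sheet names are just examples) that dumps the first column of a worksheet with openpyxl:

# Minimal sketch: dump the first column of a worksheet with openpyxl.
# "HiddenServices.xlsx" and "New Today" are example names.
from openpyxl import load_workbook

workbook = load_workbook("HiddenServices.xlsx", read_only=True)
sheet = workbook["New Today"]

for row in sheet.iter_rows(min_row=2, min_col=1, max_col=1):
    cell = row[0]
    if cell.value is None:   # stop at the first empty cell
        break
    print(cell.value)

xlsxtract.py simply generalises this kind of loop. Its syntax is easy to understand: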

# ./xlsxtract.py -h
Usage: xlsxtract.py [options] <file> ...

Options:
 --version show program's version number and exit
 -h, --help show this help message and exit
 -w WORKBOOK, --workbook=WORKBOOK
 Workbook to extract data from
 -c COLS,... --cols=COLS Read columns (Format: "A", "A-" or "A-B")
 -r ROWS,... --rows=ROWS Read rows (Format: "1", "1-" or "1-10")
 -m MAX, --max=MAX Process maximum rows
 -p, --prefix Display cell name
 -s, --stop Stop processing when empty cell is found

You can specify the cells to dump with the ‘--cols’ and ‘--rows’ parameters: a single value, a range, or an open-ended range (‘A’, ‘A-C’ or ‘A-’ for columns; ‘1’, ‘1-100’ or ‘1-’ for rows). Multiple ranges can be separated by commas. A maximum number of cells to process can be specified (the default is 65535). You can also stop processing cells when the first empty one is found. If no ranges are specified, the script dumps all cells starting from A1 (be careful!).
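
For illustration, here is a sketch of how such a column specification can be expanded using openpyxl’s own helpers (this is only an example, not necessarily how xlsxtract.py implements it):

# Illustration only, not necessarily how xlsxtract.py does it: expand a
# column spec like "A", "A-C" or "A-" into individual column letters.
from openpyxl.utils import column_index_from_string, get_column_letter

def expand_cols(spec, max_col=16384):
    for part in spec.split(","):
        if "-" in part:
            start, end = part.split("-", 1)
            first = column_index_from_string(start)
            last = column_index_from_string(end) if end else max_col
        else:
            first = last = column_index_from_string(part)
        for index in range(first, last + 1):
            yield get_column_letter(index)

print(list(expand_cols("A-C,F")))   # ['A', 'B', 'C', 'F']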

Here is a simple sheet with an inventory of servers:

Example 1

 

 

To extract the list of IP addresses, we can use xlsxtract.py with the following syntax:

$ xlsxtract.py -c A -r 2-600 -s -w 'Sheet1' test.xlsx
10.0.0.1
10.0.0.2
10.0.0.3

Now, if the sheet is more complex and we have IP addresses spread in multiple cells:

Example 2

 

 

We use xlsxtract.py like this:

$ xlsxtract.py -c A,F -r 3-600 -s -w 'Sheet1' test.xlsx
10.0.0.1
10.0.0.2
10.0.0.3
172.16.0.1
172.16.0.2
172.16.0.3

The script is available on my GitHub repository: xlsxtract.py.

 

 

[The post Automatic Extraction of Data from Excel Sheet has been first published on /dev/random]

I published the following diary on isc.sans.org: “Stop relying on file extensions“.

Yesterday, I found an interesting file in my spam trap. It was called ‘16509878451.XLAM’. To be honest, I was not aware of this extension and I found this on the web: “A file with the XLAM file extension is an Excel Macro-Enabled Add-In file that’s used to add new functions to Excel. Similar to other spreadsheet file formats, XLAM files contain cells that are divided into rows and columns that can contain text, formulas, charts, images and… macros!” Indeed, the file contained some VBA code… [Read more]

 

[The post [SANS ISC] Stop relying on file extensions has been first published on /dev/random]

October 23, 2017

With asynchronous commands we have the typical commands from the Model-View-ViewModel world, except that they return asynchronously.

Whenever that happens we want result reporting and progress reporting. We basically want something like this in QML:

Item {
  id: container
  property ViewModel viewModel: ViewModel {}

  Connections {
    target: viewModel.asyncHelloCommand
    onExecuteProgressed: {
        progressBar.value = value
        progressBar.maximumValue = maximum
    }
  }
  ProgressBar {
     id: progressBar
  }
  Button {
    enabled: viewModel.asyncHelloCommand.canExecute
    onClicked: viewModel.asyncHelloCommand.execute()
  }
}

How do we do this? First we start by defining an AbstractAsyncCommand (impl. of protected APIs here):

class AbstractAsyncCommand : public AbstractCommand {
    Q_OBJECT
public:
    AbstractAsyncCommand(QObject *parent=0);

    Q_INVOKABLE virtual QFuture<void*> executeAsync() = 0;
    virtual void execute() Q_DECL_OVERRIDE;
signals:
    void executeFinished(void* result);
    void executeProgressed(int value, int maximum);
protected:
    QSharedPointer<QFutureInterface<void*>> start();
    void progress(QSharedPointer<QFutureInterface<void*>> fut, int value, int total);
    void finish(QSharedPointer<QFutureInterface<void*>> fut, void* result);
private:
    QVector<QSharedPointer<QFutureInterface<void*>>> m_futures;
};

After that we provide an implementation:

#include <QThreadPool>
#include <QRunnable>

#include <MVVM/Commands/AbstractAsyncCommand.h>

class AsyncHelloCommand: public AbstractAsyncCommand
{
    Q_OBJECT
public:
    AsyncHelloCommand(QObject *parent=0);
    bool canExecute() const Q_DECL_OVERRIDE { return true; }
    QFuture<void*> executeAsync() Q_DECL_OVERRIDE;
private:
    void* executeAsyncTaskFunc();
    QSharedPointer<QFutureInterface<void*>> current;
    QMutex mutex;
};

#include "asynchellocommand.h"

#include <QtConcurrent/QtConcurrent>

AsyncHelloCommand::AsyncHelloCommand(QObject* parent)
    : AbstractAsyncCommand(parent) { }

void* AsyncHelloCommand::executeAsyncTaskFunc()
{
    for (int i=0; i<10; i++) {
        QThread::sleep(1);
        qDebug() << "Hello Async!";
        mutex.lock();
        progress(current, i, 10);
        mutex.unlock();
    }
    return nullptr;
}

QFuture<void*> AsyncHelloCommand::executeAsync()
{
    mutex.lock();
    current = start();
    QFutureWatcher<void*>* watcher = new QFutureWatcher<void*>(this);
    connect(watcher, &QFutureWatcher<void*>::progressValueChanged, this, [=]{
        mutex.lock();
        progress(current, watcher->progressValue(), watcher->progressMaximum());
        mutex.unlock();
    });
    connect(watcher, &QFutureWatcher<void*>::finished, this, [=]{
        void* result=watcher->result();
        mutex.lock();
        finish(current, result);
        mutex.unlock();
        watcher->deleteLater();
    });
    watcher->setFuture(QtConcurrent::run(this, &AsyncHelloCommand::executeAsyncTaskFunc));
    QFuture<void*> future = current->future();
    mutex.unlock();

    return future;
}

You can find the complete working example here.

October 21, 2017

Let’s go for more wrap-ups. The second day started smoothly with Haroon Meer’s keynote. There was only one track today, the second room being fully dedicated to hackerspaces. Haroon is a renowned speaker and the title of his keynote was “Time to play ‘D’”. The intro was simple: nothing new, no 0-day; he decided to build his keynote on his previous talks, especially one from 2011: “Penetration testing considered harmful“. Things have changed considerably from a hardware and software point of view but we are still facing the same security issues. Example: a modern computer is made of multiple computers (think about the MacBook Pro and its touch bar, which is based on the same hardware as the Apple Watch). Generic security solutions fail and an AV can still be easily bypassed. He gave many good facts and pieces of advice. Instead of buying expensive appliances, use that money to hire skilled people. But usually, companies have a security issue and fix it by deploying a solution that… introduces new issues. He insisted on and gave examples of “dirty cheap solutions”. With a few lines of PowerShell, we can easily detect new accounts created in an Active Directory. Haroon gave another example with a service he created: canarytokens.org. You create files, URLs or DNS records that are linked to an email address and, in case of a breach or unexpected access, an alert is sent to you. Another one: regular people don’t use command-line tools like ‘uname’, ‘ifconfig’ or ‘whoami’. Create alerts to report when they are used!

The first regular talk was given by Tobias Schrödel: “Hacking drones and buying passwords in the Darknet“. What’s the relation between them? Nothing, Tobias just accepted to cover these two topics! The talk was very entertaining and Tobias is a very good speaker… The first part (drones) was delivered in management style (with a tie) and the second one with a t-shirt, classic style. Why hack drones? In Germany, like in many countries, the market for drones is growing quickly. Small models (classified more as “toys”) use wireless networks to be controlled and to send back the pictures from the camera. Those drones provide an SSID and DHCP and are managed via a web interface, so they can be compared to a flying router! There are different ways to take down a drone. The safest solution is to use eagles because they can carry the drone out of the zone that must be secured. The attack he demonstrated was a simple de-auth attack. The second part of the talk focused on the black market. Not a lot of people have already bought stuff on the darknet (or they hide it), but there are nice webshops where you can buy passwords for many official shops like eBay, Zalando, Paypal, etc… But why should a company buy passwords on the dark web? A few years ago, Dropbox suffered from a mega leak with millions of passwords in the wild. That’s bad, but it is even worse when corporate email addresses are present in sensitive leaks like Ashley Madison. In Germany, a big company found 10 email addresses in this leak. Even if employees are free in their private life, this could have a huge impact in case of blackmail: “Give us access to these internal documents or we make your wife/husband aware of your Ashley Madison account.” Buying these passwords is a way for a company to protect its business.

Then, Adrian Vollmer presented “Attacking RDP with Seth – How to eavesdrop on poorly secured RDP connections“. Adrian explained in detail how the RDP protocol authenticates users. In the past, he used Cain & Abel to attack RDP sessions, but the tool is quite old and unmaintained (I’m still using it from time to time). So, he decided to write his own tool called Seth. It exploits a misconfiguration in many RDP services. RDP security is similar to SSL but not exactly the same. He explained how this can be abused to downgrade from Kerberos to RDP Security. In this case, a warning popup is displayed to the victim, but it is always ignored.

After the morning coffee break, I expected a lot from this talk: “A heaven for Hackers: Breaking Log/SIEM Products” by Mehmet Ince. The talk was not about abusing a SIEM via the logs it processes, but about the fact that a SIEM is an application integrating multiple components. The methodology he used was:

  • Read the documentation
  • Understand the features
  • Get a trial version
  • Break it to access console
  • Define attack vector(s)
  • Find a vulnerability

He reported three cases. The first one was AlienVault. They downloaded two versions (the latest one and the previous one) and made a big diff of the files. Based on this, three problems were found: object injection, authentication bypass and IP spoofing through XFF. By putting the three together, they were able to get a SQL injection, but RCE is always better. They successfully achieved this by creating a rule in the application that triggered a command when a denied SSH connection was reported. Evil! The second case targeted ManageEngine. The product design was bad and passwords used to connect to remote Windows systems were stored in a database. It was possible to get access to a console to perform SQL queries, but the console obfuscated passwords. By renaming the field ‘password’ to ‘somethingelse’, passwords were displayed in clear text! (“SELECT password AS somethingelse FROM …”). In the third case, LogSign, it was more destructive: it was possible to get rid of the logs… so simple! This was a nice talk.

Then, Ben Seri & Gregory Vishnepolsky presented “BlueBorne Explained: Exploiting Android devices over the air“. This vulnerability was in the news recently and is quite important:

  • 5.3B devices vulnerable in the wild
  • 8 vulnerabilities,  4 critical
  • Multiple OS: Android, Linux, Windows, iOS
  • No user interaction or auth
  • Enables RCE, MitM and info leaks

They reviewed the basics of the Bluetooth protocol and the different services (like SDP – “Service Discovery Protocol”). They gave a huge amount of details… They finished with a live demo by compromising an Android phone via the BNEP service (“Bluetooth Network Encapsulation Protocol”). Difficult to follow for me, but a huge piece of research!

After a lunch break and interesting discussions, back to the main theatre for the last set of talks. There were two presentations that I found less interesting (IMHO). Anto Joseph presented “Bug hunting using symbolic virtual machines“. Symbolic execution + fuzzing is a winning combination to find vulnerabilities. Symbolic execution is a way to analyse the behaviour of a program to determine what inputs cause each part of it to execute. The tool used by Anto was KLEE. He made a lot of demos to explain how the tool works. It looks like a great tool but it was difficult to follow for my poor brain.

The next talk started late due to a video issue with the speaker’s laptop. Dmitry Yudin presented “PeopleSoft: HACK THE Planet^W university“. By university, we mean here the PeopleSoft Campus Solutions suite, which is used in more than 1000 universities worldwide. The main components are a browser, a web server, an application, a batch server and a database. Multiple vulnerabilities have been found in this suite; Dmitry explained CVE-2017-10366. He explained all the steps to jump from one service to another until a complete compromise of the suite.

After the last break, the day finished with two interesting presentations. Kirils Solovjovs presented “Tools for effortless reverse engineering of MikroTik routers“. Mikrotik routers are used worldwide and can be considered a nice target. They are based on Linux, but RouterOS is based on an old kernel from 2012 and is closed source. So, we need a jailbreak! Kirils explained two techniques to jailbreak the router. He also found a nice backdoor which requires a specific file to be created on the file system. He explained many features of RouterOS and also some security issues, like one in the backup process: it is possible to create a file containing ‘../../../../’, so it was possible to create the file required by the backdoor. He released the tools here.

To close the day and the conference, Gábor Szappanos talked about “Office Exploit Builders“. Why? Because Office documents remain the main infection vector to drop malware. It’s important to have “good” tools to generate malicious documents, but who’s writing them? Usually, VBA macros are used but, with a modern version of Office, macros are disabled by default, so it’s better to use an exploit. Based on a study conducted two years ago, APT groups lack the knowledge to build malicious documents, so they need tools! Gábor reviewed three of them:

  • AKBuilder: active since 2015, typically used by Nigerian scammers, costs ~$500
  • Ancalog Exploit Builder: peak of activity in 2016, also used by scammers, price ~$300 (now retired)
  • Microsoft Word Intruder: used by higher-profile actors, it can drop more dangerous pieces of malware. Written in PHP for Windows, its price is ~$20000-$35000!

A nice presentation to close the day! So, this closes the two days of Hacktivity 2017, the first edition for me. Note that the presentations will be available on the website in the coming days!

[The post Hacktivity 2017 Wrap-Up Day 2 has been first published on /dev/random]

October 20, 2017

My crazy wrap-up week continues… I’m now in Budapest to attend Hacktivity for the first time. During the opening ceremony some figures were given about this event: 14th edition(!), 900 attendees from 23 different countries and 36 speakers. Here is a nice introduction video. The venue is nice with two tracks in parallel, workshops (called “Hello Workshops”), a hacker center, sponsors’ booths and… a wall of sheep! After so many years, you realize immediately that it is well organized and everything is under control.

As usual, the day started with a keynote. Costin Raiu from Kaspersky presented “Why some APT research is like palaeontology?” Daily, Kaspersky collects 500K malware samples and less than 50 are really interesting for his team. The idea to compare this job with palaeontology came from a picture of Nessie (the Loch Ness monster): we see something in the picture, but are we sure that it’s a monster? Costin gave the example of Regin: they discovered the first sample in 1999, one in 2003 and 43 in 2007. Based on this, how can you be certain that you found something interesting? Finding IOCs and C&Cs is like finding the bones of a dinosaur. At the end, you have a complete skeleton and are able to publish your findings (the report). In the next part of the keynote, Costin gave examples of interesting cases they found, with nice stories like the 0-day that was discovered thanks to a comment left by the developer in his code. Costin’s advice is to learn Yara and to write good signatures to spot interesting stuff.

The first regular talk was presented by Zoltán Balázs: “How to hide your browser 0-days?”. It was a mix of crypto and exploitation. Zoltán’s research started after a discussion with a vendor that was sure to catch all kinds of 0-day exploits against browsers. “Challenge accepted” for him. The problem with 0-day exploits is that they quickly become ex-0-day exploits once they are distributed by exploit kits. Why? Security researchers quickly find samples and analyze them, and patches become available soon after. From an attacker’s point of view, this is very frustrating: you spend a lot of money and lose it quickly. The idea was to deliver the exploit over an encrypted channel between the browser and the dropper. The shellcode is encrypted and executed, then downloads the malware (also via a secure channel). Zoltán explained how he implemented the encrypted channel using ECDH (that was the crypto part of the talk). This is better than SSL because, if you control the client, it is too easy to play MitM and inspect the traffic; that’s not possible with the scheme Zoltán implemented. The proof of concept has been released.

Then another Zoltán came on stage: Zoltán Wollner with a presentation called “Behind the Rabbit and beyond the USB“. He started with a scene of the show Mr Robot where they use a Rubber Ducky to get access to a computer. Indeed a classic USB stick might have hidden/evil features. The talk was in fact a presentation of the Bash Bunny tool from Hak5. This USB stick is … what you want! A keyboard, a flash drive, an Ethernet/serial adapter and more! He demonstrated some scenarios:

  • QuickCreds: stealing credentials from a locked computer
  • EternalBlue

This device is targeting low-hanging fruit but… it just works! Note that it requires physical access to the target computer.

After the coffee break, Mateusz Olejarka presented “REST API, pentester’s perspective“. Mateusz is a pentester and, from experience, he is facing more and more APIs when conducting penetration tests. The first time an API was spotted in an attack was when @sehacure pwned a lot of Facebook accounts via the API and the password reset feature: on the regular website he was rejected after a few attempts, but the anti-bruteforce protection was not enabled on the beta Facebook site! Today REST APIs are everywhere and most applications and web tools have one. An interesting number: by 2018, 50% of B2B exchanges will be performed via web APIs. The principle of an API is simple: a web service that offers methods and processes data in JSON (most of the time). Methods are GET/PUT/PATCH/DELETE/POST/… To test a REST API, we need some information: the endpoint, the documentation, access keys and sample calls. Mateusz explained how to find them. To find endpoints, we just try URIs like “/api”, “/v1”, “/v1.1”, “/api/v1” or “/ping”, “/status”, “/health”, … Sometimes the documentation is available online or returned by the API itself. To find keys, two reliable sources are:

  • Apps / mobile apps
  • Github!

Also, fuzzing can be interesting to stress-test the API. This was one of my favourite talks: plenty of useful information if you are working in the pentesting area.
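
The endpoint discovery step is easy to script. Here is a minimal sketch (not from the talk; the target URL and the wordlist are placeholders) that probes a host for common API base paths:

# Minimal sketch (not from the talk): probe a target for common API base
# paths. The target URL and the wordlist are placeholders.
import requests

CANDIDATES = ["/api", "/api/v1", "/v1", "/v1.1", "/ping", "/status", "/health"]

def probe(base_url):
    for path in CANDIDATES:
        try:
            response = requests.get(base_url + path, timeout=5)
        except requests.RequestException:
            continue
        # Anything other than a 404 deserves a manual look
        if response.status_code != 404:
            print("%s -> %d" % (path, response.status_code))

if __name__ == "__main__":
    probe("https://target.example.com")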

The next speaker was Leigh-Anne Galloway: “Money makes money: How to buy an ATM and what you can do with it“. She started with the history of ATMs. The first one was invented in 1967 (for Barclays in the UK). Today, there are 3.8M devices in the wild. The key players are Siemens Nixdorf, NCR and Fujitsu. She explained how difficult it was for her to simply buy an ATM: do you go through the official way or the “underground” way? After many issues, she was finally able to have an ATM delivered to her home. Cool, but impossible to bring it into her apartment without causing damage, so she decided to leave it in the parking lot and perform the tests outside. In the second part, Leigh-Anne explained the different tests/attacks performed against the ATM: bruteforce, attacks at the OS level, at the hardware and software level.

The event was split into two tracks, so I had to make choices. The afternoon started with Julien Thomas and “Limitations of Android permission system: packages, processes and user privacy“. He explained in detail how access rights and permissions are defined and enforced by Android. Besides a deep review of the components, he also demonstrated an app that, once installed, has no access but, due to weaknesses in the revocation process, ends up with more access than initially granted.

Then Csaba Fitzl talked about malware and the techniques it uses to protect itself against security researchers and analysts: “How to convince a malware to avoid us?“. Malware authors are afraid of:

  • Security researchers
  • Sandboxes
  • Virtual machines
  • Hardened machines

Malware hates to be analysed and sometimes avoids infecting certain targets (e.g. it checks the keyboard mapping to detect the victim’s country). Csaba reviewed several examples of known malware and how they detect whether they are being monitored. The techniques are numerous and, as Csaba said, it could take weeks to review all of them. He also gave nice tips to harden your virtual machines/sandboxes to make them look like a real computer used by a human. He also wrote small utilities to protect the victim. Example: mutex-grabber, which monitors malwr.com and automatically creates the found mutexes on the local OS. The tools reviewed in the presentation are available here. Also a great talk with plenty of useful tips.

After the last coffee break, Harman Singh presented “Active Directory Threats & Detection: Heartbeat that keeps you alive may also kill you!“. Active Directories remain a juicy target because they are implemented in almost all organizations worldwide! He reviewed all the components of an Active Directory then explained some techniques like enumeration of accounts, how to collect data, how to achieve privilege escalation and access to juicy data.

Finally, Ignat Korchagin closed the day with a presentation “Exploiting USB/IP in Linux“. When he asked who in the room knew or used USB/IP, nobody raised a hand. Nobody was aware of this technique, same for me! The principle is nice: USB/IP allows you to use a USB device connected to computer A from computer B. The USB traffic (URBs – USB Request Blocks) is sent over TCP/IP. More information is available here. This looks nice! But… the main problem is that the application-level protocol is implemented at kernel level! A packet is based on a header + payload. The kernel gets the size of the data to process from the header. This size can be controlled by an attacker and we are facing a nice buffer overflow! This vulnerability is referenced as CVE-2016-3955. Ignat also found a nice name for his vulnerability: “UBOAT” for “(U)SB/IP (B)uffer (O)verflow (AT)tack“. He’s still looking for a nice logo :). Fortunately, to be vulnerable, many requirements must be fulfilled:

  • The kernel must be unpatched
  • The victim must use USB/IP
  • The victim must be a client
  • The victim must import at least one device
  • The victim must be root
  • The attacker must own the server or play MitM.

Ignat completed his talk with a live demo that crashed the computer (DoS), but there is probably a way to push the bug further and get remote code execution.

Enough for today, stay tuned for the second day!

[The post Hacktivity 2017 Wrap-Up Day 1 has been first published on /dev/random]

October 19, 2017

Hack.lu is already over and I’m currently waiting for my connecting flight in Munich; that’s the perfect opportunity to write my wrap-up. This one is shorter because I had to leave early to catch my flight to Hacktivity and I missed some talks scheduled in the afternoon. Thanks Lufthansa for rebooking my flight so early in the afternoon… Anyway, it started again early (8AM) and John Bambenek opened the day with a talk called “How I’ve Broken Every Threat Intel Platform I’ve Ever Had (And Settled on MISP)”. The title was well chosen because John is a big fan of OSINT. He collects a lot of data and provides it for free via feeds (available here). He started to extract useful information from malware samples because the main problem today is the flood of samples that are constantly discovered. But how to find relevant information? He explained some of the datasets he’s generating. The first one is DGAs, or “Domain Generation Algorithms“. DNS is a key indicator and is used everywhere. Checking a domain name may also reveal interesting information via the Whois databases. Even if the data are fake, they can be helpful to link different campaigns or malware families together and get more intelligence about the attacker. If you can reverse the algorithm, you can predict the upcoming domains, prepare yourself better and also start takedown operations. The second dataset is malware configurations. Yes, malware is configurable (examples: kill-switch domains, Bitcoin wallets, C2, campaign IDs, etc). Mutexes can be useful to correlate malware from different campaigns, like DGAs. John is also working on a new dataset based on the tool Yalda. In the second part of his presentation, he explained why most solutions he tested to handle this amount of data failed (CIF, CRITS, ThreatConnect, STIX, TAXII). The problem with XML (and also an advantage at the same time): XML can be very verbose to describe events. Finally, he explained how he’s now using MISP. If you’re interested in OSINT, John is definitely a key person to follow and he is also a SANS ISC handler.
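
To illustrate why reversing a DGA is so valuable, here is a toy generator (purely illustrative, not the algorithm of any real malware family): once you know how the seed is derived, you can compute tomorrow’s domains and block or sinkhole them in advance.

# Toy DGA, purely illustrative: not the algorithm of any real malware
# family. Once the seeding scheme (here, the date) is known, future
# domains can be precomputed and blocked or sinkholed.
import datetime
import hashlib

def dga(date, count=5, tld=".com"):
    domains = []
    for index in range(count):
        seed = ("%s-%d" % (date.isoformat(), index)).encode()
        digest = hashlib.md5(seed).hexdigest()
        domains.append(digest[:12] + tld)
    return domains

print(dga(datetime.date.today()))                               # today's domains
print(dga(datetime.date.today() + datetime.timedelta(days=1)))  # tomorrow's domains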

The next talk was “Automation Attacks at Scale” by Will Glazier & Mayank Dhiman. Databases of stolen credentials are a goldmine for bad guys. They are available everywhere on the Internet. Example: just by crawling Pastebin, it is possible to collect ~20K passwords per day (note: most of them are duplicates). It is tempting to test them but this requires a lot of resources. A valid password has a value on the black market, but to test them, attackers must spend some money to buy resources (when those are not available for free or can’t be abused). Will and Mayank explained how attackers work to make a profit. They need tools to test credentials and collect information (e.g. Sentry MBA, Hydra, PhantomJS, Curl, Wget, …). They need fresh meat (credentials), IP addresses (to rotate and avoid blacklists) and of course CPU resources. About IP rotation: they often use big cloud service providers (Amazon, Azure) because those big players on the Internet will almost never be blacklisted. They can also use compromised servers or IoT botnets. In the second part of the talk, some advice was provided to help detect them (e.g. most of them can be fingerprinted just via the User-Agent they use). It is also a good idea to keep an eye on your API logs to see if some malicious activity is ongoing (bruteforce attacks).

Then we switched to a pure hardware session with Obiwan666, who presented “Front door Nightmares. When smart is not secure“. The research started from a broken lock he found. The talk did not cover the “connected” locks that you manage with a smartphone, but real security locks found in many enterprises and restricted environments. Such locks are useful because key management is easier: no need to replace the lock if a key is lost, the access rights just have to be adapted on the lock. It is also possible to play with time constraints. They offer multiple ways to interact with the user: with something you have (an RFID token), something you are (biometrics) or something you know (a PIN code). Obiwan666 explained in detail how such locks are built and, thanks to his job and background in electronics, he has access to plenty of nice devices to analyze the target. He showed X-ray pictures of the lock – an X-ray scanner isn’t very common! Then he explained different attack scenarios. The first one was trivial: sometimes the lock is mounted the wrong way around and the inner part is exposed (“in the wild”). The second attack was a signal replay: locks use a serial protocol that can be sniffed and replayed – no protection. I liked the “brain implant” attack: you just buy a new lock (same model), program it to grant your access and replace the electronic part of the victim’s lock with yours… Of course, traditional lock-picking can be tested. Also, a thermal camera can reveal the PIN code if the lock has a pinpad. I know some organizations which would be very interested to test their locks against all these attacks! 🙂

After the expected coffee break, another awesome piece of research was presented by Aaron Kaplan and Éireann Leverett: “What is the max Reflected Distributed Denial of Service (rDDoS) potential of IPv4?“. DDoS attacks based on UDP amplification are not new but remain quite effective. The four protocols in the scope of the research were: DNS, NTP, SSDP and SNMP. But in theory, what could be the effect of a massive DDoS over the IPv4 network? They started the talk with one simple number:

108.49Tb/s

The idea was to scan the Internet for vulnerable services and to classify them. Based on the location of the servers, they were able to estimate the available bandwidth (e.g. per country) and to calculate the total amount of bandwidth that could be wasted by a massive attack. They showed nice statistics and findings. One of them was a relation between the increase in bandwidth and the risk of affecting other people on the Internet.

Then, the first half-day ended with the third keynote. This one was presented by Vladimir Kropotov and Fyodor Yarochkin: “Information Flows and Leaks in Social Media“. Social media are used everywhere today… for good or for bad. They demonstrated how social networks can react in case of a major event in the world (nothing related to computers). Some examples:

  • Trump and his awesome “Covfefe”
  • Macron and the French elections
  • The Manchester bombing
  • The fight of Barcelona for its independence

They mainly focused on the Twitter social network. They have tools to analyze the traffic, the relations between people and the usage of specific hashtags. In the beginning of the keynote, many slides had content in Russian, not easy to read, but the second part was interesting, covering the tracking of bots and how to detect them.

After the lunch break, there was again a lightning talk session, then Eleanor Saitta came to present “On Strategy“. I did not follow those. The last talk I attended was a cool one: “Digital Vengeance: Exploiting Notorious C&C Toolkits” by Waylon Grange. The idea of the research was to try to compromise the attackers by applying the principles of offensive security. Big disclaimer here: hacking back is often illegal and does not provide any gain, only risks of liability, reputation damage… Waylon focused on RATs (“Remote Access Tools”) like Poison Ivy, Dark Comet or Xtreme RAT. Some of them already have known vulnerabilities. He demonstrated his findings and how he was able to compromise the computers of remote attackers. But what to do when you are “in”? Search for interesting IP addresses (via netstat), browse the filesystem, install persistence, a keylogger or steal credentials, pivot, etc.

Sorry for the last presentations, which I was unable to follow and report on here. I had to leave for Hacktivity in Budapest. I’ll also miss the first edition of BSidesLuxembourg; any volunteer to write a wrap-up for me? So to recap this edition of Hack.lu:

  • Plenty of new stickers
  • New t-shirts and nice MISP sweat-shirt
  • Lot of coffee (and other types of drinks)
  • Nice restaurants
  • Excellent schedule
  • Lot of new friends (and old/classic ones)
  • My Twitter timeline exploded 😉

You can still expect more wrap-ups tomorrow but for another conference!

[The post Hack.lu 2017 Wrap-Up Day 3 has been first published on /dev/random]

October 18, 2017

As said yesterday, the second day started very (too?) early… The winner of the first slot was Aaron Zauner, who talked about pseudo-random number generators. The complete title of the talk was “Because ‘User Random’ isn’t everything: a deep dive into CSPRNGs in Operating Systems & Programming Languages”. He started with an overview of random number generators and why we need them. They are used almost everywhere, even in the Bash shell where you can use ${RANDOM}. A CSPRNG is a cryptographically secure pseudo-random number generator. It is implemented at operating system level via /dev/urandom on Linux or RtlGenRandom() on Windows, but also in programming languages. And sometimes with security issues, like CVE-2017-11671 (GCC generates incorrect code for RDRAND/RDSEED). The /dev/random & /dev/urandom devices are using really old code (from the mid-90s)! According to Aaron, it is pure luck that no major incident arose in the last years. And today? Aaron explained what changed with kernel 4.2. Then he switched to the different languages and how they implement random number generators. He covered Ruby, Node.js and Erlang. None of them implemented proper random number generators at first, but he also explained what changed to improve this. I was a little bit afraid of a crypto talk at 8AM, but it was nice and easy to understand.
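
Python was not one of the languages covered, but it makes a compact illustration of the same lesson: the random module is a fast but predictable PRNG, while the secrets module (or os.urandom) pulls from the OS CSPRNG and should be used for anything security-sensitive.

# Illustration in Python (not one of the languages covered in the talk):
# random is a deterministic PRNG, secrets/os.urandom use the OS CSPRNG.
import random
import secrets

# Predictable: the same seed always yields the same "random" sequence.
prng = random.Random(1337)
print([prng.randint(0, 255) for _ in range(4)])

# Suitable for tokens, keys and session IDs: backed by the OS CSPRNG.
print(secrets.token_hex(16))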

The next talk was “Keynterceptor: Press any key to continue” by Niels van Dijkhuizen. Attacks via HID USB devices are not new. Niels reviewed a timeline of the well-known attacks, from the KeyGhost USB logger in 2005 up to the Bash Bunny in 2017. The main problems with those attacks: they need an unlocked computer, some social engineering skills and an Internet connection (most of the time). There are products to protect against these attacks. Basically, they act as a USB firewall: USBProxy, USBGuest, GoodDog, DuckHunt, etc. Those products are Windows tools; for Linux, have a look at grsecurity. Then Niels explained his own solution, which gets rid of all the previous constraints: his implant sits inline between the keyboard and the host. It must also operate in near real-time. The rogue device clones itself as a classic HID device (“HP Elite USB Keyboard”) and adds random delays to fake a real human typing on a keyboard. This allows bypassing the DuckHunt tool. Niels gave a demonstration of his tool. It comes with another device called the “Companion”, which has a 3G/4G module and connects to the Keynterceptor via a 433MHz link. A nice demo was shown and his devices were available during the coffee break. This is a very nice tool for red teams…

Then, Clement Rouault and Thomas Imbert presented a view into ALPC-RPC. The idea of the talk: how to abuse the UAC feature in Microsoft Windows. They were curious about this feature. How to trigger the UAC manually? Via RPC! A very nice tool to investigate RPC interfaces is RpcView. Then, they switched to ALPC: what it is and how it works. It is a client/server solution. Clients connect to a port and exchange messages that have two parts: the PORT_MESSAGE header and ALPC_MESSAGE_ATTRIBUTES. They explained in detail how they reverse-engineered the messages and, of course, they discovered vulnerabilities. They were able to build a full RPC client in Python and, with the help of fuzzing techniques, they found bugs: NULL dereferences, out-of-bounds accesses, logic bugs, etc. Based on their research, one CVE was created: CVE-2017-11783.

After the coffee break, a very special talk was scheduled: “The untold stories of Hackers in Detention”. Two hackers came on stage to tell how they were arrested and put in jail. It was a very interesting talk. They explained their personal stories: how they were arrested and how it all happened (interviews, etc). They also gave some advice: how to behave in jail, what to do and not to do, the administrative tasks, etc. This was not recorded and, to respect them, no further details will be provided.

The second keynote was assigned to Ange Albertini: “Infosec and failure”. Ange’s presentations are always a good surprise; you never know how he will design his slides. As he said, his talk is not about “funny” failures. Infosec is typically about winning. The keynote was a series of facts showing that we usually fail to provide good infosec services and advice, also in the way we communicate with other people. Ange likes retro-gaming and made several comparisons between the gaming and infosec industries. According to him, we should have some retro-pwning events to play with and learn from old exploits. According to Ange, an infosec crash is coming, like the video game industry crash of 1983, and a new cycle will follow. It was a great keynote with plenty of real facts that we should take care of! Learn, improve, share, don’t be shy, be proactive.

After the lunch, I skipped the second session of lightning talks and got back for “Sigma – Generic Signatures for Log Events” by Thomas Patzke. Let’s talk about logs… When the talk started, my first feeling was “What? Another talk about logs?” but, in fact, it was interesting. The idea behind Sigma is that everybody is looking for a nice way to detect threats, but all solutions have different features and syntax. Some examples of threats are:

  • Authentication and accounts (large amount of failed logins, lateral movement, etc.)
  • Process execution (exec from an unusual location, unknown process relationship, evil hashes, etc.)
  • Windows events

The problem we are facing: there is a lack of a standardised format. Here comes Sigma. The goal of this tool is to write use cases in YAML files that contain all the details to detect a security issue. Thomas gave some examples, like detecting Mimikatz or webshells.

Sigma comes with a generator tool that can produce queries for multiple tools: Splunk, Elasticsearch or Logpoint. This is more complex than expected because field names differ and are inconsistent across tools. In my opinion, Sigma could be useful to write use cases in total independence of any SIEM solution. It is still an ongoing project and, good news, recent versions of MISP can integrate Sigma: a field has been added and a tool exists to generate Sigma rules from MISP data.

The next talk was “SMT Solvers in the IT Security – deobfuscating binary code with logic” by Thaís Moreira Hamasaki. She started with an introduction to CLP, or “Constraint Logic Programming”, and its applications in infosec, such as malware de-obfuscation. Thaís explained how to perform malware analysis using CLP. I did not follow much more of this talk, which was too theoretical for me.

Then, we came back to more practical stuff with Omar Eissa, who presented “Network Automation is not your Safe Haven: Protocol Analysis and Vulnerabilities of Autonomic Network”. Omar works for ERNW and they always provide good content. This time they tested the protocol used by Cisco to provision new routers. The goal is to make a router ready for use in a few minutes without any configuration: the Cisco Autonomic Network. It’s a proprietary protocol developed by Cisco. Omar explained how this protocol works and then how to abuse it. They found several vulnerabilities:

  • CVE-2017-6664: There is no way to protect against malicious nodes within the network
  • CVE-2017-6665 : Possible to reset of the secure channel
  • CVE-2017-3849: registrar crash
  • CVE-2017-3850: DeathKiss – crash with 1 IPv6 packet
The talk had many demos that demonstrated the vulnerabilities above. A very nice talk.

The next speaker was Frank Denis, who presented “API design for cryptography”. The idea of the talk started with a simple Google query: “How to encrypt stuff in C”. Frank found plenty of awful replies with many examples that you should never use. Crypto is hard to design but also hard to use: developers don’t read the documentation, they just expect some pieces of code that they can copy/paste. He reviewed several issues in the current crypto libraries and then presented libhydrogen, a library he developed to solve the issues introduced by the other libraries. You can find the source code here.

Then, Okhin came on stage to give an overview of encryption vs. the law in France. The title of his talk was “WTFRance”. He explained the status of French law against encryption and encryption tools. Basically, many politicians would like to get rid of encryption to better fight crime. It was interesting to learn that France leads the fight against crypto and then pushes its ideas at the EU level. Note that he also named several politicians who are “against” encryption.

The next talk was my favourite of this second day: “In Soviet Russia, Vulnerability Finds You”, presented by Inbar Raz. Inbar is a regular speaker at hack.lu and always delivers entertaining presentations! This time he came with several examples of interesting things he found “by mistake”. Indeed, sometimes you find interesting stuff by accident. Inbar gave several examples, like an issue on a taxi reservation website, the security of an airport in Poland or fighting against bots on the Tinder application. For each example, a status was given. It’s sad to see that some of them remained unresolved for a while! An excellent talk, I liked it!

The last slot was assigned to Jelena Milosevic. Jelena is a nurse but she also has a passion for infosec. Thanks to her job, she learns interesting stuff about healthcare environments. Her talk was a compilation of mistakes, facts and advice for hospitals and health-related services. We all know that those environments are usually very badly protected. It was, once again, proven by Jelena.

The day ended with the social event and the classic Powerpoint karaoke. Tomorrow, it will start again at 08AM with a very interesting talk…

[The post Hack.lu 2017 Wrap-Up Day 2 has been first published on /dev/random]

Four more political posts before I write something technical. Pff. What a nag I am!

I promise that I will soon write a “How it’s made” about an AbstractFutureCommand. That is a Command in MVVM where a QFuture is returned by an executeAsync() method.

First I’m still working with my customer and her developers to get a few things in place, so that the future of CNC control software will be really good.

We are working towards one day introducing, at EMO, a pile of software that is downright awesome.