Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

April 27, 2017

Like many developers, I get a broken build on a regular basis. When setting up a new project this happens a lot because of missing infrastructure dependencies or not having access to the company's private Docker hub …

Anyway, on the old Jenkins (1), I used to go into the workspace folder to have a look at the build itself. In Jenkins 2 this seems to have disappeared (when using the pipeline) … or did it … ?

It’s there, but you have to look very carefully. To get there, follow these simple steps:

  • Go to the failing build (the #…)
  • Click on “Pipeline Steps” in the sidebar
  • Click on “Allocate node : …”
  • On the right side the “Workspace” folder icon that we all love appears
  • Click it

Et voilà, we are all set.

April 26, 2017

Here is my quick wrap-up of the FIRST Technical Colloquium hosted by Cisco in Amsterdam. This was my first participation in a FIRST event. FIRST is an organization that helps with incident response, as stated on their website:

FIRST is a premier organization and recognized global leader in incident response. Membership in FIRST enables incident response teams to more effectively respond to security incidents by providing access to best practices, tools, and trusted communication with member teams.

The event was organized at the Cisco office. Monday was dedicated to a training about incident response and the next two days were dedicated to presentations, all of them focusing on the defence side (“blue team”). Here are a few notes about interesting stuff that I learned.

The first day started with two guys from Facebook: Eric Water and Matt Moren. They presented the solution developed internally at Facebook to solve the problem of capturing network traffic: “PCAPs don’t scale”. In fact, with their solution, it scales! When investigating incidents, PCAPs are often a gold mine. They contain many IOCs, but they also introduce challenges: disk space, retention policy, the growing network throughput. When vendors’ solutions don’t fit, it’s time to build your own. OK, only big organizations like Facebook have the resources to do this, but it’s quite fun. The solution they developed can be seen as a service: “PCAP as a Service”. They started by building the right hardware for the sensors and added a cool software layer on top of it. Once collected, interesting PCAPs are analyzed using the Cloudshark service. They explained how they reached top performance by mixing NFS and their GlusterFS solution. Really a cool solution if you have multi-gigabit networks to tap!

The next presentation focused on “internal network monitoring and anomaly detection through host clustering” by Thomas Atterna from TNO. The idea behind this talk was to explain how to also monitor internal traffic. Indeed, in many cases, organizations still focus on the perimeter, but internal traffic is also important. We can detect proxies, rogue servers, C2, people trying to pivot, etc. The talk explained how to build clusters of hosts. A cluster of hosts is a group of devices that share the same behaviour, like mail servers, database servers, … The idea is then to determine “normal” behaviour per cluster and observe when individual hosts deviate from it. Clusters are based on behaviour (the amount of traffic, the number of flows, protocols, …). The model is useful when your network is quite closed and stable, but much more difficult to implement in an “open” environment (like university networks).
Then Davide Carnali gave a nice review of the Nigerian cybercrime landscape. He explained in detail how they prepare their attacks, how they steal credentials, and how they deploy their attacking platform (RDP, RAT, VPN, etc). The second part was a step-by-step explanation of how they abuse companies to steal (sometimes a lot of!) money. An interesting fact reported by Davide: the time between the compromise of a new host (to drop malicious payloads) and the generation of new maldocs pointing to this host is only… 3 hours!
The next presentation was given by Gal Bitensky (Minerva): “Vaccination: An Anti-Honeypot Approach”. Gal (re-)explained what the purpose of a honeypot is and how they can be defeated. Then, he presented a nice review of the ways attackers use to detect sandboxes. Basically, when a malware detects something “suspicious” (read: something that makes it think it is running in a sandbox), it will just silently exit. Gal had the idea to create a script which creates plenty of artefacts on a Windows system to defeat malware. His tool has been released here.
Paul Alderson (FireEye) presented “Injection without needles: A detailed look at the data being injected into our web browsers”. Basically, it was a huge review of 18 months of web-inject and other configuration data gathered from several botnets. Nothing really exciting.
The next talk was more interesting… Back to the roots: SWITCH presented their DNS Firewall solution. This is a service they provide to their members. It is based on DNS RPZ. The idea is to provide the following features:
  • Prevention
  • Detection
  • Awareness

Indeed, when a DNS request is blocked, the user is redirected to a landing page which gives more details about the problem. Note that this can have collateral effects, like blocking a complete domain (and not only specific URLs). This is a great security control to deploy. Note that RPZ support is implemented in many solutions, especially BIND 9.
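
As a quick illustration (not SWITCH's setup), the effect of an RPZ-enabled resolver can be seen by comparing it with an unfiltered one; the resolver address and blocked domain below are placeholders:

# 192.0.2.53 = hypothetical resolver with an RPZ loaded, bad.example = hypothetical listed domain
dig +short bad.example A @192.0.2.53   # answers with the landing page / walled garden address
dig +short bad.example A @8.8.8.8      # an unfiltered resolver returns the real record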

Finally, the first day ended with a presentation by Tatsuya Ihica from Recruit CSIRT: “Let your CSIRT do malware analysis”. It was a complete review of the platform that they deployed to perform more efficient automatic malware analysis. The project is based on Cuckoo that was heavily modified to match their new requirements.

The second day started with an introduction to the FIRST organization made by Aaron Kaplan, one of the board members. I liked the quote given by Aaron:

If country A does not talk to country B because of ‘cyber’, then a criminal can hide in two countries

Then, the first talk was really interesting: Chris Hall presented “Intelligence Collection Techniques“. After explaining the different sources where intelligence can be collected (open sources, sinkholes, …), he reviewed a series of tools that he developed to help automate these tasks. His tools address:
  • Using the Google API, VT API
  • Paste websites (like pastebin.com)
  • YARA rules
  • DNS typosquatting
  • Whois queries

All the tools are available here. A very nice talk with tips & tricks that you can use immediately in your organization.
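
His own tools are linked above; just to give an idea, the DNS typosquatting check, for instance, boils down to something like this (the candidate domains are made up):

# made-up look-alike candidates; if one resolves, it deserves a closer look with whois
for d in examp1e.com exarnple.com example-login.com; do
  ip=$(dig +short A "$d" | head -n 1)
  [ -n "$ip" ] && echo "$d is registered -> $ip (run: whois $d)"
done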

The next talk was presented by a Cisco guy, Sunil Amin: “Security Analytics with Network Flows”. Netflow isn’t a new technology. Initially developed by Cisco, there are today a lot of versions and forks. Starting from the definition of a “flow” (“a layer 3 IP communication between two endpoints during some time period”), we got a review of Netflow. Netflow is valuable to increase the visibility of what’s happening on your networks, but some specific points must be addressed before performing analysis, e.g. de-duplicating flows. There are many use cases where netflows are useful:
  • Discover RFC1918 address space
  • Discover internal services
  • Look for blacklisted services
  • Reveal reconnaissance
  • Bad behaviours
  • Compromised hosts, pivot
    • HTTP connection to external host
    • SSH reverse shell
    • Port scanning port 445 / 139
I would have expected a real case where netflow was used to discover something juicy. The talk ended with a review of tools available to process netflow data: SiLK, nfdump, ntop, but log management solutions can also be used, like the ELK stack or Apache Spot. Nothing really new but a good reminder.
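
As a small taste of what such an analysis looks like, nfdump can answer the “who is scanning port 445?” use case from the list above in one line (the spool path and filter are assumptions, not from the talk):

# top talkers sending flows towards TCP/445 in the collected netflow data
nfdump -R /var/cache/nfdump -s srcip/flows 'proto tcp and dst port 445'
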
Then, Joel Snape from BT presented “Discovering (and fixing!) vulnerable systems at scale“. BT, as a major player on the Internet, is facing many issues with compromised hosts (from customers to its own resources). Joel explained the workflow and tools they deployed to help with this huge task. It is based on the following cycle: introduction, data collection, exploration and remediation (the hardest part!).
I liked the description of their “CERT dropbox” which can be deployed at any place on the network to perform the following tasks:
  • Telemetry collection
  • Data exfiltration
  • Network exploration
  • Vulnerability/discovery scanning
An interesting remark from the audience: ISPs don’t only have to protect their customers from the wild Internet, but also the Internet from their (bad) customers!
Feike Hacqueboard, from TrendMicro, explained “How politically motivated threat actors attack“. He reviewed some famous stories of compromised organizations (like the French TV channel TV5) and then reviewed the activity of some interesting groups like C-Major or Pawn Storm. A nice review of the Yahoo! OAuth abuse was given, as well as of the tab-nabbing attack against OWA services.
Jose Enrique Hernandez (Zenedge) presented “Lessons learned in fighting Targeted Bot Attacks“. After a quick review of what bots are (they are not always malicious – think about the Google crawler bot), he reviewed different techniques to protect web resources from bots and why they often fail, like the JavaScript challenge or the Cloudflare bypass. These are “silent challenges”; loud challenges are, for example, CAPTCHAs. Then Jose explained how to build a good solution to protect your resources:
  • You need a reverse proxy (to be able to change requests on the fly)
  • LUA hooks
  • State db for concurrency
  • Load balancer for scalability
  • fingerprintjs2 / JS Challenge

Finally, two other Cisco guys, Steve McKinney & Eddie Allan, presented “Leveraging Event Streaming and Large Scale Analysis to Protect Cisco“. Cisco collects a huge amount of data on a daily basis (they speak in terabytes!). As a Splunk user, they are facing an issue with the indexing licence: to index all this data, they would need extra licenses (and pay a lot of money). They explained how they “pre-process” the data before sending it to Splunk to reduce the noise and the amount of data to index.
The idea is to put a “black box” between the collectors and Splunk. They explained what’s in this black box with some use cases (a toy sketch of the pre-processing idea follows the list):
  • WSA logs (350M+ events / day)
  • Passive DNS (7.5TB / day)
  • Users identification
  • osquery data
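
The talk did not share code, but the “black box” boils down to filtering and normalising events before they ever reach the indexer; here is a toy sketch of that idea (the log path and field names are invented):

# drop malformed lines and keep only the fields worth indexing
tail -F /var/log/wsa/access.json \
  | jq -c 'select(.url != null) | {ts, src_ip, url, verdict}' 2>/dev/null \
  >> /var/spool/splunk/wsa-filtered.json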

Some useful tips they gave that are valid for any log management platform:

  • Don’t assume your data is well-formed and complete
  • Don’t assume your data is always flowing
  • Don’t collect all the things at once
  • Share!

Two intense days full of useful information and tips to better defend your networks and/or collect intelligence. The slides should be published soon.

[The post FIRST TC Amsterdam 2017 Wrap-Up has been first published on /dev/random]

MEAN

On Thursday 18 May 2017 at 7 pm, the 59th Mons session of the Jeudis du Libre de Belgique will take place.

The topic of this session: developing with the MEAN stack

Theme: Development

Audience: developers | students | startup founders

Speakers: Fabian Vilers (Dev One) and Sylvain Guérin (Guess Engineering)

Venue: Université de Mons, Campus Plaine de Nimy, avenue Maistriau, Grands Amphithéâtres, Auditoire Curie (see the map on the UMONS website, or the OSM map).

Attendance is free and only requires registration by name, preferably in advance, or at the entrance. Please indicate your intention by registering via http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also benefit from the support of our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, don't hesitate to check the agenda and subscribe to the mailing list to automatically receive the announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons sessions take place every third Thursday of the month and are organized in the premises of, and in collaboration with, Hautes Écoles and university faculties in Mons involved in computer science education (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which promotes free software.

Description: MEAN is a complete JavaScript stack, based on open-source software, that allows web-oriented applications to be developed quickly. It is built on four fundamental pillars:

  • MongoDB (a document-oriented NoSQL database)
  • Express (a back-end application development framework)
  • Angular (a front-end application development framework)
  • Node.js (a cross-platform runtime environment based on Google's V8 JavaScript engine).

During this session, we will present each element of a MEAN application in turn, while building, step by step, live, and with you, a simple and functional application.

Short Bios:

  • A developer for nearly 17 years, Fabian describes himself as a software craftsman. He cares a lot about precision, quality and attention to detail when designing and building software. Since 2010 he has been a consultant at Dev One, where he reinforces, guides and trains development teams while promoting good software development practices in Belgium and France. You can follow Fabian on Twitter and LinkedIn.
  • Sylvain has been helping companies carry out their IT projects for more than 20 years. An active witness of the digital revolution and an agile evangelist, he will try to make you see things from a different angle. You can follow Sylvain on Twitter and LinkedIn.

April 25, 2017

More and more developers are choosing content-as-a-service solutions known as headless CMSes — content repositories which offer no-frills editorial interfaces and expose content APIs for consumption by an expanding array of applications. Headless CMSes share a few common traits: they lack end-user front ends, provide few to no editorial tools for display and layout, and as such leave presentational concerns almost entirely up to the front-end developer. Headless CMSes have gained popularity because of:

  • A desire to separate concerns of structure and presentation so that front-end teams and back-end teams can work independently from each other.
  • A need among editors and marketers for solutions that can serve content to a growing list of channels, including websites, back-end systems, single-page applications, native applications, and even emerging devices such as wearables, conversational interfaces, and IoT devices.

Due to this trend among developers, many are rightfully asking whether headless CMSes are challenging the market for traditional CMSes. I'm not convinced that headless CMSes as they stand today are where the CMS world in general is headed. In fact, I believe a nuanced view is needed.

In this blog post, I'll explain why Drupal has one crucial advantage that propels it beyond the emerging headless competitors: it can be an exceptional CMS for editors who need control over the presentation of their content and a rich headless CMS for developers building out large content ecosystems in a single package.

As Drupal continues to power the websites that have long been its bread and butter, it is also used more and more to serve content to other back-end systems, single-page applications, native applications, and even conversational interfaces — all at the same time.

Headless CMSes are leaving editors behind

This diagram illustrates the differences between a traditional Drupal website and a headless CMS with various front ends receiving content.

Some claim that headless CMSes will replace traditional CMSes like Drupal and WordPress when it comes to content editors and marketers. I'm not so sure.

Where headless CMSes fall flat is in the areas of in-context administration and in-place editing of content. Our outside-in efforts, in contrast, aim to allow an editor to administer content and page structure in an interface alongside a live preview rather than in an interface that is completely separate from the end user experience. Some examples of this paradigm include dragging blocks directly into regions or reordering menu items and then seeing both of these changes apply live.

By their nature, headless CMSes lack full-fledged editorial experience integrated into the front ends to which they serve content. Unless they expose a content editing interface tied to each front end, in-context administration and in-place editing are impossible. In other words, to provide an editorial experience on the front end, that front end must be aware of that content editing interface — hence the necessity of coupling.

Display and layout manipulation is another area that is key to making marketers successful. One of Drupal's key features is the ability to control where content appears in a layout structure. Headless CMSes are unopinionated about display and layout settings. But just like in-place editing and in-context administration, editorial tools that enable this need to be integrated into the front end that faces the end user in order to be useful.

In addition, editors and marketers are particularly concerned about how content will look once it's published. Access to an easy end-to-end preview system, especially for unpublished content, is essential to many editors' workflows. In the headless CMS paradigm, developers have to jump through fairly significant hoops to enable seamless preview, including setting up a new API endpoint or staging environment and deploying a separate version of their application that issues requests against new paths. As a result, I believe seamless preview — without having to tap on a developer's shoulder — is still necessary.

Features like in-place editing, in-context administration, layout manipulation, and seamless but faithful preview are essential building blocks for an optimal editorial experience for content creators and marketers. For some use cases, these drawbacks are totally manageable, especially where an application needs little editorial interaction and is more developer-focused. But for content editors, headless CMSes simply don't offer the toolkits they have come to expect; they fall short where Drupal shines.

Drupal empowers both editors and application developers

This diagram illustrates the differences between a coupled — but headless-enabled — Drupal website and a headless CMS with various front ends receiving content.

All of this isn't to say that headless isn't important. Headless is important, but supporting both headless and traditional approaches is one of the biggest advantages of Drupal. After all, content management systems need to serve content beyond editor-focused websites to single-page applications, native applications, and even emerging devices such as wearables, conversational interfaces, and IoT devices.

Fortunately, the ongoing API-first initiative is actively working to advance existing and new web services efforts that make using Drupal as a content service much easier and more optimal for developers. We're working on making developers of these applications more productive, whether through web services that provide a great developer experience like JSON API and GraphQL or through tooling that accelerates headless application development like the Waterwheel ecosystem.
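
To give a feel for that developer experience, here is how a decoupled client might fetch article nodes from a Drupal 8 site with the contrib JSON API module enabled (example.com and the article content type are placeholders):

# list article nodes as JSON API resources; no Drupal theme layer involved
curl -g -H 'Accept: application/vnd.api+json' \
  'https://example.com/jsonapi/node/article?page[limit]=5'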

For me, the key takeaway of this discussion is: Drupal is great for both editors and developers. But there are some caveats. For web experiences that need significant focus on the editor or assembler experience, you should use a coupled Drupal front end which gives you the ability to edit and manipulate the front end without involving a developer. For web experiences where you don't need editors to be involved, Drupal is still ideal. In an API-first approach, Drupal provides for other digital experiences that it can't explicitly support (those that aren't web-based). This keeps both options open to you.

Drupal for your site, headless Drupal for your apps

This diagram illustrates the ideal architecture for Drupal, which should be leveraged as both a front end in and of itself as well as a content service for other front ends.

In this day and age, having all channels served by a single source of truth for content is important. But what architecture is optimal for this approach? While reading this you might have also experienced some déjà-vu from a blog post I wrote last year about how you should decouple Drupal, which is still solid advice nearly a year after I first posted it.

Ultimately, I recommend an architecture where Drupal is simultaneously coupled and decoupled; in short, Drupal shines when it's positioned both for editors and for application developers, because Drupal is great at both roles. In other words, your content repository should also be your public-facing website — a contiguous site with full editorial capabilities. At the same time, it should be the centerpiece for your collection of applications, which don't necessitate editorial tools but do offer your developers the experience they want. Keeping Drupal as a coupled website, while concurrently adding decoupled applications, isn't a limitation; it's an enhancement.

Conclusion

Today's goal isn't to make Drupal API-only, but rather API-first. It doesn't limit you to a coupled approach like CMSes without APIs, and it doesn't limit you to an API-only approach like Contentful and other headless CMSes. To me, that is the most important conclusion to draw from this: Drupal supports an entire spectrum of possibilities. This allows you to make the proper trade-off between optimizing for your editors and marketers, or for your developers, and to shift elsewhere on that spectrum as your needs change.

It's a spectrum that encompasses both extremes of the scenarios that a coupled approach and headless approach represent. You can use Drupal to power a single website as we have for many years. At the same time, you can use Drupal to power a long list of applications beyond a traditional website. In doing so, Drupal can be adjusted up and down along this spectrum according to the requirements of your developers and editors.

In other words, Drupal is API-first, not API-only, and rather than leave editors and marketers behind in favor of developers, it gives everyone what they need in one single package.

Special thanks to Preston So for contributions to this blog post and to Wim Leers, Ted Bowman, Chris Hamper and Matt Grill for their feedback during the writing process.

I’m the kind of dev that dreads configuring webservers and that would rather not have to put up with random ops stuff before being able to get work done. Docker is one of those things I’ve never looked into, cause clearly it’s evil annoying boring evil confusing evil ops stuff. Two of my colleagues just introduced me to a one-line Docker command that kind of blew my mind.

Want to run tests for a project but don’t have PHP7 installed? Want to execute a custom Composer script that runs both these tests and the linters without having Composer installed? Don’t want to execute code you are not that familiar with on your machine that contains your private keys, etc? Assuming you have Docker installed, this command is all you need:

docker run --rm --interactive --tty --volume $PWD:/app -w /app\
 --volume ~/.composer:/composer --user $(id -u):$(id -g) composer composer ci

This command uses the Composer Docker image, as indicated by the first composer near the end of the command. After that you can specify whatever you want to execute, in this case composer ci, where ci is a custom Composer script. (If you want to know what the Docker image is doing behind the scenes, check its entry point file.)

This works without having PHP or Composer installed, and is very fast after the initial dependencies have been pulled. And each time you execute the command, the environment is destroyed, avoiding state leakage. You can create a composer alias in your .bash_aliases as follows, and then execute composer on your host just as you would do if it was actually installed (and running) there.

alias composer='docker run --rm --interactive --tty --volume $PWD:/app -w /app\
 --volume ~/.composer:/composer --user $(id -u):$(id -g) composer composer'

Of course, you are not limited to running Composer commands; you can also invoke PHPUnit:

...(id -g) composer vendor/bin/phpunit

or indeed any PHP code:

...(id -g) composer php -r 'echo "hi";'

This one-liner is not sufficient if you require additional dependencies, such as PHP extensions, databases or webservers. In those cases you probably want to create your own Dockerfile. Though to run the tests of most PHP libraries, you should be good. I’ve now uninstalled my local Composer and PHP.

Git is cool, for reasons I won't go into here.

It doesn't deal very well with very large files, but that's fine; when using things like git-annex or git-lfs, it's possible to deal with very large files.

But what if you've added a file to git-lfs which didn't need to be there? Let's say you installed git-lfs and told it to track all *.zip files, but then it turned out that some of those files were really small, and that the extra overhead of tracking them in git-lfs is causing a lot of grief with your users. What do you do now?

With git-annex, the solution is simple: you just run git annex unannex <filename>, and you're done. You may also need to tell the assistant to no longer automatically add that file to the annex, but beyond that, all is well.

With git-lfs, this works slightly differently. It's not much more complicated, but it's not documented in the man page. The naive way to do that would be to just run git lfs untrack; but when I tried that it didn't work. Instead, I found that the following does work (a combined sketch follows the list):

  • First, edit your .gitattributes files, so that the file which was (erroneously) added to git-lfs is no longer matched against a git-lfs rule in your .gitattributes.
  • Next, run git rm --cached <file>. This will tell git to mark the file as removed from git, but crucially, will retain the file in the working directory. Dropping the --cached will cause you to lose those files, so don't do that.
  • Finally, run git add <file>. This will cause git to add the file to the index again; but as it's now no longer being smudged up by git-lfs, it will be committed as the normal file that it's supposed to be.
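
Put together, the whole thing looks like this (big.zip is a hypothetical file name):

# 1. edit .gitattributes so big.zip no longer matches a git-lfs rule, then:
git rm --cached big.zip    # removes it from the index but keeps the working copy
git add big.zip            # re-added as a plain git blob, no lfs smudge/clean filter
git commit -m "Stop tracking big.zip with git-lfs"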

Mastonaut’s log, tootdate 10. We started out by travelling on board of mastodon.social. Being the largest one, we met with people from all over the fediverse. Some we could understand, others we couldn’t. Those were interesting days, I encountered a lot of people fleeing from other places to feel free and be themselves, while others were simply enjoying the ride. It wasn’t until we encountered the Pawoo, who turned out to have peculiar tastes when it comes to imagery, that the order in the fediverse got disturbed. But, as we can’t expect to get freedom while restricting others’, I fetched the plans to build my own instance. Ready to explore the fediverse and its inhabitants on my own, I set out on an exciting journey.

As I do not own a server myself, and still had $55 credit on Digital Ocean, I decided to set up a simple $5 Ubuntu 16.04 droplet to get started. This setup assumes you’ve got a domain name, and I will even show you how to run Mastodon on a subdomain while identifying on the root domain. I suggest following the initial server setup guide to make sure you get started the right way. Once you’re all set, grab a refreshment and connect to your server through SSH.

Let’s start by ensuring we have everything we need to proceed. There are a few dependencies to run Mastodon. We need Docker to run the different applications and tools in containers (the easiest approach) and nginx to expose the apps to the outside world. Luckily, Digital Ocean has an insane amount of up-to-date documentation we can use. Follow these two guides and report back.

At this point, we’re ready to grab the source code. Do this in your location of choice.

git clone https://github.com/tootsuite/mastodon.git

Change to that location and check out the latest release (1.2.2 at the time of writing).

cd mastodon
git checkout 1.2.2

Now that we’ve got all this set up, we can build our containers. There’s a useful guide made by the Mastodon community that I suggest you follow. Before we make this available to the outside world, we want to tweak our .env.production file to configure the instance. There are a few keys in there we need to adjust, and some we could adjust. In my case, Mastodon runs as a single user instance, meaning only one user is allowed in: nobody can register and the home page redirects to that user’s profile instead of the login page. Below are the settings I adjusted; remember that I run Mastodon on the subdomain mastodon.herebedragons.io, but my user identifies as @jan@herebedragons.io. The config changes below illustrate that behavior. If you have no use for that, just leave the WEB_DOMAIN key commented out. If you do need it, however, you’ll still have to add a redirect rule for your root domain that points https://rootdomain/.well-known/host-meta to https://subdomain.rootdomain/.well-known/host-meta (a quick way to check that redirect follows the settings below). I added a rule on Cloudflare to achieve this, but any approach will do.

# Federation
LOCAL_DOMAIN=herebedragons.io
LOCAL_HTTPS=true

# Use this only if you need to run mastodon on a different domain than the one used for federation.
# Do not use this unless you know exactly what you are doing.
WEB_DOMAIN=mastodon.herebedragons.io

# Registrations
# Single user mode will disable registrations and redirect frontpage to the first profile
SINGLE_USER_MODE=true
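
A quick way to sanity-check the host-meta redirect described above, using the domains from this example:

# should answer with a 301/302 whose Location header points at the subdomain
curl -sI https://herebedragons.io/.well-known/host-meta | grep -i '^location'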

As we can’t run a site without configuring SSL, we’ll use Let’s Encrypt to secure nginx. Follow the brilliant guide over at Digital Ocean and report back for the last part. Once that is set up, we need to configure nginx (and the DNS settings for your domain) to make Mastodon available for the world to enjoy. You can find my settings here. Just make sure to adjust the key file’s name and the DNS settings. As I redirect all HTTP traffic to HTTPS using Cloudflare, I did not bother to add port 80 to the config; be sure to add it if needed.

Alright, we’re ready to start exploring the fediverse! Make sure to restart nginx to apply the latest settings using sudo service nginx restart and update the containers to reflect your settings via docker-compose up -d. If all went according to plan, you should see your brand new shiny instance on your domain name. Create your first user and get ready to toot! In case you did not bother to add an SMTP server, manually confirm your user:

docker-compose run --rm web rails mastodon:confirm_email USER_EMAIL=alice@alice.com

And make sure to give yourself ultimate admin powers to be able to configure your instance:

docker-compose run --rm web rails mastodon:make_admin USERNAME=alice

Updating is a straightforward process too. Fetch the latest changes from the remote, check out the tag you want and update your containers:

git fetch --tags
git checkout <tag>   # the release you want to run
docker-compose stop
docker-compose build
docker-compose run --rm web rails db:migrate
docker-compose run --rm web rails assets:precompile
docker-compose up -d

Happy tooting!

April 24, 2017

Wim made a stir in the land of the web. Good for Wim that he rid himself of the shackles of social media.

But how will we bring a generation of people, who are now more or less addicted to social media, to a new platform? And what should that platform look like?

I’m not an anthropologist, but I believe the human way of organizing around new concepts and techniques is that we, humans, start central and monolithic. Then we fine-tune it. We figure out that the central organization and monolithic implementation become a limiting factor. Then we decentralize it.

The next step for all those existing and potential so-called ‘online services’ is to become fully decentralized.

Every family or home should have its own IMAP and SMTP server. Should that be JMAP instead? Probably. But that ain’t the point. The fact that every family or home will have its own, is. For chat, XMPP’s s2s is like SMTP. Postfix is an implementation of SMTP like ejabberd is for XMPP’s s2s. We have Cyrus, Dovecot and others for IMAP, which is the c2s of course. And soon we’ll probably have JMAP, too. Addressability? IPv6.

Why not something like this for social media? For the next online appliance, too? Augmented reality worlds can be negotiated in a distributed fashion. Why must Second Life necessarily be centralized? Surely we can run Linden Lab’s server software, locally.

Simple, because money is not interested in anything non-centralized. Not yet.

In other news, the Internet stopped working truly well ever since money became its driving factor.

ps. The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think different. Quote by Friedrich Nietzsche.

April 21, 2017

I deleted my Facebook account because in the past three years, I barely used it. It’s ironic, considering I worked there. 1

More irony: I never used it as much as I did when I worked there.

Yet more irony: a huge portion of my Facebook news feed was activity by a handful of Facebook employees. 2

No longer useful

I used to like Facebook because it delivered on its original mission:

Facebook helps you connect and share with the people in your life.

They’re clearly no longer true to that mission. 3

When I joined in November 2007, the news feed chronologically listed status updates from your friends. Great!

Since then, they’ve done every imaginable thing to increase time spent, also known as the euphemistic “engagement”. They’ve done this by surfacing friends’ likes, suggested likes, friends’ replies, suggested friends, suggested pages to like based on prior likes, and of course: ads. Those things are not only distractions, they’re actively annoying.

Instead of minimizing time spent so users can get back to their lives, Facebook has sacrificed users at the altar of advertising: more time spent = more ads shown.

No longer trustworthy

An entire spectrum of concerns to choose from, along two axes: privacy and walled garden. And of course, the interesting intersection of minimized privacy and maximized walled gardenness: the filter bubble.

If you want to know more, see Vicki Boykis’ well-researched article.

No thanks

Long story short: Facebook is not for me anymore.

It’s okay to not know everything that’s been going on. It makes for more interesting conversations when you do get to see each other again.

The older I get, the more I prefer the one communication medium that is not a walled garden, that I can control, back up and search: e-mail.


  1. To be clear: I’m still very grateful for that opportunity. It was a great working environment and it helped my career! ↩︎

  2. Before deleting my Facebook account, I scrolled through my entire news feed — apparently this time it was seemingly endless, and I stopped after the first 202 items in it, by then I’d had enough. Of those 202 items, 58 (28%) were by former or current Facebook employees. 40% (81) of it was reporting on a like or reply by somebody — which I could not care less about 99% of the time. And the remainder, well… the vast majority of it is mildly interesting at best. Knowing all the trivia in everybody’s lives is fatiguing and wasteful, not fascinating and useful. ↩︎

  3. They’ve changed it since then, to: give people the power to share and make the world more open and connected. ↩︎

I published the following diary on isc.sans.org: “Analysis of a Maldoc with Multiple Layers of Obfuscation“.

Thanks to our readers, we often get interesting samples to analyze. This time, Frederick sent us a malicious Microsoft Word document called “Invoice_6083.doc” (which was delivered in a zip archive). I had a quick look at it and it was interesting enough for a quick diary… [Read more]

[The post [SANS ISC] Analysis of a Maldoc with Multiple Layers of Obfuscation has been first published on /dev/random]

The past weeks have been difficult. I'm well aware that the community is struggling, and it really pains me. I respect the various opinions expressed, including opinions different from my own. I want you to know that I'm listening and that I'm carefully considering the different aspects of this situation. I'm doing my best to progress through the issues and support the work that needs to happen to evolve our governance model. For those that are attending DrupalCon Baltimore and want to help, we just added a community discussions track.

There is a lot to figure out, and I know that it's difficult when there are unresolved questions. Leading up to DrupalCon Baltimore next week, it may be helpful for people to know that Larry Garfield and I are talking. As members of the Community Working Group reported this week, Larry remains a member of the community. While we figure out Larry's future roles, Larry is attending DrupalCon as a regular community member with the opportunity to participate in sessions, code sprints and issue queues.

As we are about to kick off DrupalCon Baltimore, please know that my wish for this conference is for it to be everything you've made it over the years; a time for bringing out the best in each other, for learning and sharing our knowledge, and for great minds to work together to move the project forward. We owe it to the 3,000 people who will be in attendance to make DrupalCon about Drupal. To that end, I ask for your patience towards me, so I can do my part in helping to achieve these goals. It can only happen with your help, support, patience and understanding. Please join me in making DrupalCon Baltimore an amazing time to connect, collaborate and learn, like the many DrupalCons before it.

(I have received a lot of comments and at this time I just want to respond with an update. I decided to close the comments on this post.)

April 20, 2017

The Internet Archive is a well-known website, most notably for its “WaybackMachine” service. It allows you to search for and display old versions of websites. Its current Alexa ranking is 262, which makes it a “popular and trusted” website. Indeed, as I explained in a recent SANS ISC diary, whitelists of websites are very important for attackers! The phishing attempt that I detected was also using the URL shortener bit.ly (position 9380 in the Alexa list).

The phishing is based on a DHL notification email. The mail has a PDF attached to it:

DHL Notification

This PDF has no malicious content and is therefore not blocked by antispam/antivirus. The “Click here” link points to a bit.ly short URL:

hxxps://bitly.com/2jXl8GJ

Note that HTTPS is used, which already prevents the traffic from being inspected by many security solutions.


Tip: If you append a “+” at the end of the URL, bit.ly will not directly redirect you to the hidden URL but will show you an information page where you can read this URL!
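
The same information can also be pulled from the command line without ever loading the page (the short URL below is a placeholder):

# -s silent, -I HEAD request only: shows the Location header without following the redirect
curl -sI https://bit.ly/xxxxxxx | grep -i '^location'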


The URL behind the short URL is:

hxxps://archive.org/download/gxzdhsh/gxzdhsh.html

Bit.ly also maintains statistics about the visitors:

bit.ly Statistics

It’s impressive to see how many people visited the malicious link. The phishing campaign has also been active since the end of March. Thank you bit.ly for this useful information!

This URL returns the following HTML code:

<html>
<head>
<title></title>
<META http-equiv="refresh" content="0;URL=data:text/html;base64, ... (base64 data) ... "
</head>
<body bgcolor="#fffff">
<center>
</center>
</body>
</html>

The refresh META tag displays the decoded HTML code:

<script language="Javascript">
document.write(unescape('%0A%3C%68%74%6D%6C%20%68%6F%6C%61%5F%65%78%74%5F%69%6E%6A%65%63
%74%3D%22%69%6E%69%74%65%64%22%3E%3C%68%65%61%64%3E%0A%3C%6D%65%74%61%20%68%74%74%70%2D
%65%71%75%69%76%3D%22%63%6F%6E%74%65%6E%74%2D%74%79%70%65%22%20%63%6F%6E%74%65%6E%74%3D
%22%74%65%78%74%2F%68%74%6D%6C%3B%20%63%68%61%72%73%65%74%3D%77%69%6E%64%6F%77%73%2D%31
%32%35%32%22%3E%0A%3C%6C%69%6E%6B%20%72%65%6C%3D%22%73%68%6F%72%74%63%75%74%20%69%63%6F
%6E%22%20%68%72%65%66%3D%22%68%74%74%70%3A%2F%2F%77%77%77%2E%64%68%6C%2E%63%6F%6D%2F%69
%6D%67%2F%66%61%76%69%63%6F%6E%2E%67%69%6
...
%3E%0A%09%3C%69%6D%67%20%73%72%63%3D%22%68%74%74%70%3A%2F%2F%77%77%77%2E%66%65%64%61%67
%72%6F%6C%74%64%2E%63%6F%6D%2F%6D%6F%62%2F%44%48%4C%5F%66%69%6C%65%73%2F%61%6C%69%62%61
%62%61%2E%70%6E%67%22%20%68%65%69%67%68%74%3D%22%32%37%22%20%0A%0A%77%69%64%74%68%3D%22
%31%33%30%22%3E%0A%09%3C%2F%74%64%3E%0A%0A%09%3C%2F%74%72%3E%3C%2F%74%62%6F%64%79%3E%3C
%2F%74%61%62%6C%65%3E%3C%2F%74%64%3E%3C%2F%74%72%3E%0A%0A%0A%0A%0A%3C%74%72%3E%3C%74%64
%20%68%65%69%67%68%74%3D%22%35%25%22%20%62%67%63%6F%6C%6F%72%3D%22%23%30%30%30%30%30%30
%22%3E%0A%3C%2F%74%64%3E%3C%2F%74%72%3E%0A%0A%3C%2F%74%62%6F%64%79%3E%3C%2F%74%61%62%6C
%65%3E%0A%0A%0A%0A%3C%2F%62%6F%64%79%3E%3C%2F%68%74%6D%6C%3E'));
</Script>
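
Decoding such a percent-encoded blob does not require a browser; a bash one-liner is enough (assuming the escaped string was saved to escaped.txt):

# turn the %XX sequences into \xXX escapes and let printf interpret them
printf '%b\n' "$(tr -d '\n' < escaped.txt | sed 's/%/\\x/g')"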

The deobfuscated script displays the following page:

DHL Phishing Page

The pictures are stored on a remote website but it has already been cleaned:

hxxp://www.fedagroltd.com/mob/DHL_files/

Stolen data is sent to another website (this one is still alive):

hxxp://www.magnacartapeace.org.ng/wp/stevedhl/kenbeet.php

The question is: how was this phishing page stored on archive.org? If you visit the upper level of the malicious URL (https://archive.org/download/gxzdhsh/), you find this:

archive.org Files

Go up one more directory (‘../’) and you will find the owner of this page: alextray. This guy has many phishing pages available:

alextray's Projects

Indeed, the Internet Archive website allows registered users to upload content, as stated in the FAQ. If you search for ‘archive.org/download’ on Google, you will find a lot of references to multiple contents (most of them harmless), but on VT, there are references to malicious content hosted on archive.org.

Here is the list of phishing sites hosted by “alextray”. You can use them as IOCs:

hxxps://archive.org/download/gjvkrduef/gjvkrduef.html
hxxps://archive.org/download/Jfojasfkjafkj/jfojas;fkj;afkj;.html
hxxps://archive.org/download/ygluiigii/ygluiigii.html (Yahoo!)
hxxps://archive.org/download/ugjufhugyj/ugjufhugyj.html (Microsoft)
hxxps://archive.org/download/khgjfhfdh/khgjfhfdh.html (DHL)
hxxps://archive.org/download/iojopkok/iojopkok.html (Adobe)
hxxps://archive.org/download/Lkmpk/lkm[pk[.html (Microsoft)
hxxps://archive.org/download/vhjjjkgkgk/vhjjjkgkgk.html (TNT)
hxxps://archive.org/download/ukryjfdjhy/ukryjfdjhy.html (TNT)
hxxps://archive.org/download/ojodvs/ojodvs.html (Adobe)
hxxps://archive.org/download/sfsgwg/sfsgwg.html (DHL)
hxxps://archive.org/download/ngmdlxzf/ngmdlxzf.html (Microsoft)
hxxps://archive.org/download/zvcmxlvm/zvcmxlvm.html (Microsoft)
hxxps://archive.org/download/ugiutiyiio/ugiutiyiio.html (Yahoo!)
hxxps://archive.org/download/ufytuyu/ufytuyu.html (Microsoft Excel)
hxxps://archive.org/download/xgfdhfdh/xgfdhfdh.html (Adobe)
hxxps://archive.org/download/itiiyiyo/itiiyiyo.html (DHL)
hxxps://archive.org/download/hgvhghg/hgvhghg.html (Google Drive)
hxxps://archive.org/download/sagsdg_201701/sagsdg.html (Microsoft)
hxxps://archive.org/download/bljlol/bljlol.html (Microsoft)
hxxps://archive.org/download/gxzdhsh/gxzdhsh.html (DHL)
hxxps://archive.org/download/bygih_201701/bygih.html (DHL)
hxxps://archive.org/download/bygih/bygih.html (DHL)
hxxps://archive.org/download/ygi9j9u9/ygi9j9u9.html (Yahoo!)
hxxps://archive.org/download/78yt88/78yt88.html (Microsoft)
hxxps://archive.org/download/vfhyfu/vfhyfu.html (Yahoo!)
hxxps://archive.org/download/yfuyj/yfuyj.html (DHL)
hxxps://archive.org/download/afegwe/afegwe.html (Microsoft)
hxxps://archive.org/download/nalxJL/nalxJL.html (DHL)
hxxps://archive.org/download/jfleg/jfleg.html (DHL)
hxxps://archive.org/download/yfigio/yfigio.html (Microsoft)
hxxps://archive.org/download/gjbyk/gjbyk.html (Microsoft)
hxxps://archive.org/download/nfdnkh/nfdnkh.html (Yahoo!)
hxxps://archive.org/download/GfhdtYry/gfhdt%20yry.html (Microsoft)
hxxps://archive.org/download/fhdfxhdh/fhdfxhdh.html (Microsoft)
hxxps://archive.org/download/iohbo6vu5/iohbo6vu5.html (DHL)
hxxps://archive.org/download/sgsdgh/sgsdgh.html (Adobe)
hxxps://archive.org/download/mailiantrewl/mailiantrewl.html (Google)
hxxps://archive.org/download/ihiyi/ihiyi.html (Microsoft)
hxxps://archive.org/download/glkgjhtrku/glkgjhtrku.html (Microsoft)
hxxps://archive.org/download/pn8n8t7r/pn8n8t7r.html (Microsoft)
hxxps://archive.org/download/aEQWGG/aEQWGG.html (Yahoo!)
hxxps://archive.org/download/isajcow/isajcow.html (Yahoo!)
hxxps://archive.org/download/pontiffdata_yahoo_Kfdk/;kfd;k.html (Yahoo!)
hxxps://archive.org/download/vuivi/vuivi.html (TNT)
hxxps://archive.org/download/lmmkn/lmmkn.html (Microsoft)
hxxps://archive.org/download/ksafaF/ksafaF.html (Google)
hxxps://archive.org/download/fsdgs/fsdgs.html (Microsoft)
hxxps://archive.org/download/joomlm/joomlm.html (Microsoft)
hxxps://archive.org/download/rdgdh/rdgdh.html (Adobe)
hxxps://archive.org/download/pontiffdata_yahoo_Bsga/bsga.html (Microsoft)
hxxps://archive.org/download/ihgoiybot/ihgoiybot.html (Microsoft)
hxxps://archive.org/download/dfhrf/dfhrf.html (Microsoft)
hxxps://archive.org/download/pontiffdata_yahoo_Kgfk_201701/kgfk.html (Microsoft)
hxxps://archive.org/download/jhlhj/jhlhj.html (Yahoo!)
hxxps://archive.org/download/pontiffdata_yahoo_Kgfk/kgfk.html (Microsoft)
hxxps://archive.org/download/pontiffdata_yahoo_Gege/gege.html (Microsoft)
hxxps://archive.org/download/him8ouh/him8ouh.html (DHL)
hxxps://archive.org/download/maiikillll/maiikillll.html (Google)
hxxps://archive.org/download/pontiffdata_yahoo_Mlv/mlv;.html (Microsoft)
hxxps://archive.org/download/oiopo_201701/oiopo.html (Microsoft)
hxxps://archive.org/download/ircyily/ircyily.html (Microsoft)
hxxps://archive.org/download/vuyvii/vuyvii.html (DHL)
hxxps://archive.org/download/fcvbt_201612/fcvbt.html (Microsoft)
hxxps://archive.org/download/poksfcps/poksfcps.html (Yahoo!)
hxxps://archive.org/download/tretr_201612/tretr.html
hxxps://archive.org/download/eldotrivoloto_201612/eldotrivoloto.html (Microsoft)
hxxps://archive.org/download/babalito_201612/babalito.html (Microsoft)
hxxps://archive.org/download/katolito_201612/katolito.html (Microsoft)
hxxps://archive.org/download/kingshotties_201612/kingshotties.html (Microsoft)
hxxps://archive.org/download/fcvbt/fcvbt.html (Microsoft)
hxxps://archive.org/download/vkvkk/vkvkk.html (DHL)
hxxps://archive.org/download/pontiffdata_yahoo_Vkm/vkm;.html (Microsoft)
hxxps://archive.org/download/hiluoogi/hiluoogi.html (Microsoft)
hxxps://archive.org/download/ipiojlj/ipiojlj.html (Microsoft)

[The post Archive.org Abused to Deliver Phishing Pages has been first published on /dev/random]

I published the following diary on isc.sans.org: “DNS Query Length… Because Size Does Matter“.

In many cases, DNS remains a goldmine to detect potentially malicious activity. DNS can be used in multiple ways to bypass security controls. DNS tunnelling is a common way to establish connections with remote systems. It is often based on “TXT” records used to deliver the encoded payload. “TXT” records are also used for good reasons, like delivering SPF records but, too many TXT DNS requests could mean that something weird is happening on your network… [Read more]

[The post [SANS ISC] DNS Query Length… Because Size Does Matter has been first published on /dev/random]

Updated: 20170419: gnome-shell extension browser integration.
Updated: 20170420: natural scrolling on X instead of Wayland.

Introduction

Mark Shuttleworth, founder of Ubuntu and Canonical, dropped a bombshell: Ubuntu drops Unity 8 and –by extension– also the Mir graphical server on the desktop. Starting from the 18.04 release, Ubuntu will use Gnome 3 as the default Desktop environment.

Sadly, the desktop environment used by millions of Ubuntu users –Unity 7– has no path forward now. Unity 7 runs on the X.org graphical stack, while the Linux world –including Ubuntu now– is slowly but surely moving to Wayland (it will be the default on Ubuntu 18.04 LTS). It’s clear that Unity has its detractors, and it’s true that the first releases (6 years ago!) were limited and buggy. However, today, Unity 7 is a beautiful and functional desktop environment. I happily use it at home and at work.

Soon-to-be-dead code is dead code, so even as a happy user I don’t see the point in staying with Unity. I prefer to make the jump now instead of sticking a year with a desktop on life support. Among other environments, I have been a full-time user of CDE, Window Maker, Gnome 1.*, KDE 2.*, Java Desktop System, OpenSolaris Desktop, LXDE and XFCE. I’ll survive :).

The idea of these lines is to collect changes I felt I needed to make to a vanilla Ubuntu Gnome 3 setup to make it work for me. I made the jump 1 week before the release of 17.04, so I’ll stick with 17.04 and skip the 16.10 instructions (in short: you’ll need to install gnome-shell-extension-dashtodock from an external source instead of the Ubuntu repos).

The easiest way to use Gnome on Ubuntu is, of course, installing the Ubuntu Gnome distribution. If you’re upgrading, you can do it manually. In case you want to remove Unity and install Gnome at the same time:
$ sudo apt-get remove --purge ubuntu-desktop lightdm && sudo apt-get install ubuntu-gnome-desktop && sudo apt-get remove --purge $(dpkg -l |grep -i unity |awk '{print $2}') && sudo apt-get autoremove -y

Changes

Add Extensions:

  1. Install Gnome 3 extensions to customize the desktop experience:
    $ sudo apt-get install -y gnome-tweak-tool gnome-shell-extension-top-icons-plus gnome-shell-extension-dashtodock gnome-shell-extension-better-volume gnome-shell-extension-refreshwifi gnome-shell-extension-disconnect-wifi
  2. Install the gnome-shell integration (the one on the main Ubuntu repos does not work):
    $ sudo add-apt-repository ppa:ne0sight/chrome-gnome-shell && sudo apt-get update && sudo apt-get install chrome-gnome-shell
  3. Install the “Refresh wifi” extension by going with Firefox or Chrome to the Gnome Extensions website. You’ll need to install a browser plugin. Refresh the page after installing the plugin.
  4. Log off in order to activate the extensions.
  5. Start gnome-tweak-tool and enable “Better volume indicator” (scroll wheel to change volume), “Dash to dock” (a more Unity-like Dock, configurable. I set the “Icon size limit” to 24 and “Behavior-Click Action” to “minimize”), “Disconnect wifi” (allow disconnection of network without setting Wifi to off), “Refresh Wifi connections” (auto refresh wifi list) and “Topicons plus” (put non-Gnome icons like Dropbox and pidgin on the top menu).

Change window size and buttons:

  1. On the Windows tab, I enabled the Maximise and Minimise Titlebar Buttons.
  2. Make the window top bars smaller if you wish. Just create ~/.config/gtk-3.0/gtk.css with these lines:
    /* From: http://blog.samalik.com/make-your-gnome-title-bar-smaller-fedora-24-update/ */
    window.ssd headerbar.titlebar {
    padding-top: 4px;
    padding-bottom: 4px;
    min-height: 0;
    }
    window.ssd headerbar.titlebar button.titlebutton {
    padding: 0px;
    min-height: 0;
    min-width: 0;
    }

Disable “natural scrolling” for mouse wheel:

While I like “natural scrolling” with the touchpad (enable it in the mouse preferences), I don’t like it on the mouse wheel. To disable it only on the mouse:
$ gsettings set org.gnome.desktop.peripherals.mouse natural-scroll false

If you run Gnome on good old X instead of Wayland (e.g. for driver support or more stability while Wayland matures), you need to use libinput instead of the synaptics driver to make “natural scrolling” possible:

$ sudo mkdir -p /etc/X11/xorg.conf.d && sudo cp -rp /usr/share/X11/xorg.conf.d/40-libinput.conf /etc/X11/xorg.conf.d/

Log out.

Enable Thunderbird notifications:

For Thunderbird new mail notification I installed the gnotifier Thunderbird add-on: https://addons.mozilla.org/en-us/thunderbird/addon/gnotifier/

Extensions that I tried, liked, but ended up not using:

  • gnome-shell-extension-pixelsaver: it feels unnatural on a 3-screen setup like I use at work, e.g. min-max-close window buttons on the main screen for windows on other screens.
  • gnome-shell-extension-hide-activities: the top menu is already mostly empty, so it’s not saving much.
  • gnome-shell-extension-move-clock: although I prefer the clock on the right, the default middle position makes sense as it integrates with notifications.

 

That’s it (so far 🙂 ).

Thx to @sil, @adsamalik and Jonathan Carter.


Filed under: Uncategorized Tagged: gnome, Gnome3, Linux, Linux Desktop, Thanks for all the fish, Ubuntu, unity

Too many MS Office 365 apps

Update 20160421:
– update for MS Office 2016.
– fix configuration.xml view on WordPress.

If you install Microsoft Office through Click-to-Run you’ll end up with the full suite installed. You can no longer select which applications you want to install. That’s kind of OK because you pay for the complete suite. Or at least the organisation (school, work, etc.) offering the subscription does. But maybe you are like me and you dislike installing applications you don’t use. Or even more like me: you’re a Linux user with a Windows VM you boot once in a while out of necessity. And unused applications in a VM residing on your disk are *really* annoying.

The Microsoft documentation on removing the unused applications (Access as a DB? Yeah, right…) wasn’t very straightforward, so I post what worked for me after the needed trial-and-error routines. This is a small howto:

    • Install the Office Deployment Toolkit (download for MS Office 2013 or 2016). The installer asks for an installation location. I put it in C:\Users\nxadm\OfficeDeployTool (change the username accordingly). If you’re short on space (or in a VM), you can put it in a mounted share.
    • Create a configuration.xml describing what you want to install and which applications to exclude. The file should reside in the directory you chose for the Office Deployment Toolkit (e.g. C:\Users\nxadm\OfficeDeployTool\configuration.xml) or you should refer to the file with its full path name. You can find the full list of AppIDs here (more info about other settings). Add or remove ExcludeApp entries as desired. My configuration file is as follows (WordPress removes the XML code below, hence the image; an illustrative sketch follows this list):
      configuration.xml
    • If you run the 64-bit Office version change OfficeClientEdition="32" to OfficeClientEdition="64".
    • Download the office components. Type in a cmd box:
      C:\Users\\OfficeDeployTool>setup.exe /download configuration.xml
    • Remove the unwanted applications:
      C:\Users\\OfficeDeployTool>setup.exe /configure configuration.xml
    • Delete (if you want) the Office Deployment Toolkit directory. Certainly the cached installation files in the “Office” directory take a lot of space.
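
Since WordPress ate the XML above, here is an illustrative sketch of what such a configuration.xml can look like; the product ID, language and excluded applications are assumptions, not the author's exact file:

<Configuration>
  <!-- 32-bit Click-to-Run install; change OfficeClientEdition to "64" for 64-bit Office -->
  <Add SourcePath="C:\Users\nxadm\OfficeDeployTool" OfficeClientEdition="32">
    <Product ID="O365ProPlusRetail">
      <Language ID="en-us" />
      <!-- applications you do not want installed -->
      <ExcludeApp ID="Access" />
      <ExcludeApp ID="Publisher" />
      <ExcludeApp ID="OneNote" />
    </Product>
  </Add>
</Configuration>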

Enjoy the space and faster updates. If you are using a VM don’t forget to defragment and compact the Virtual Hard Disk to reclaim the space.


Filed under: Uncategorized Tagged: Click-to-Run, MS Office 365, VirtualBox, vm, VMWare, Windows

April 19, 2017

I published the following diary on isc.sans.org: “Hunting for Malicious Excel Sheets“.

Recently, I found a malicious Excel sheet which contained a VBA macro. One particularity of this file was that useful information was stored in cells. The VBA macro read and used them to download the malicious PE file. The Excel file looked classic, asking the user to enable macros… [Read more]

[The post [SANS ISC] Hunting for Malicious Excel Sheets has been first published on /dev/random]

The post DNS Spy has launched! appeared first on ma.ttias.be.

I started building a DNS monitoring & validation solution called DNS Spy and I'm happy to report: it has launched!

It's been in private beta since 2016 and in public beta since March 2017. After almost 6 months of feedback, features and bugfixes, I think it's ready for you to kick the tires.

What's DNS Spy?

In case you haven't been following me the last few months, here's a quick rundown of DNS Spy.

  • Monitors your domains for any DNS changes
  • Alerts you whenever a record has changed
  • Keeps a detailed history of each DNS record change
  • Notifies you of invalid or RFC-violating DNS configs
  • Rates your DNS configurations with a scoring system
  • Is free for 1 monitored domain
  • Provides a point-in-time back-up of all your DNS records
  • It can verify if all your nameservers are in sync
  • Supports DNS zone transfer (AXFR)

There are many more features, like CNAME resolving, public domain scanning, offline & change notifications, ... that all make DNS Spy what it is: a reliable & stable DNS monitoring solution.
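
To give a rough idea of the kind of check this automates: the nameserver-sync verification is conceptually close to comparing SOA serials by hand with dig. A quick sketch (example.com is a placeholder, and DNS Spy obviously does far more than this):

for ns in $(dig +short NS example.com); do
  printf '%s ' "$ns"; dig +short SOA example.com @"$ns" | awk '{print $3}'
done

If all serials match, your nameservers are serving the same version of the zone.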

A new look & logo

The beta design of DNS Spy was built using a Font Awesome icon and some copy/paste bootstrap templates, just to validate the idea. I've gotten enough feedback to feel confident that DNS Spy adds actual value, so it was time to make the look & feel match that sentiment.

This was the first design:

Here's the new & improved look.

It's got a brand new look, a custom logo and a way to publicly scan & rate your domain configuration.

Public scoring system

You've probably heard of tools like SSL Labs' test & Security Headers, free webservices that allow you to rate and check your server configurations, each with a focus on its own domain.

From now on, DNS Spy also has such a feature.

Above is the DNS Spy scan report for StackOverflow.com, which has a rock-solid DNS setup.

We rate things like the connectivity (IPv4 & IPv6, records synced, ...), performance, resilience & security (how many providers, domains, DNSSEC & CAA support, ...) & DNS records (how is SPF/DMARC set up, are your TTLs long enough, do your NS records match your nameservers, ...).

The aim is to have DNS Spy become the SSL Labs of DNS configurations. To keep improving it, I encourage any feedback from you!

If you're curious how your domain scores, scan it via dnsspy.io.

Help me promote it?

Next up, of course, is promotion. There are a lot of ways to promote a service, and advertising is surely going to be one of them.

But if you've used DNS Spy and like it or if you've scanned your domain and are proud of your results, feel free to spread word of DNS Spy to your friends, coworkers, online followers, ... You'd have my eternal gratitude! :-)

DNS Spy is available on dnsspy.io or via @dnsspy on Twitter.

The post DNS Spy has launched! appeared first on ma.ttias.be.

April 18, 2017

You may have heard of Mastodon, the new social network competing with Twitter. Its advantages? A per-post limit that goes from 140 to 500 characters, and an approach oriented towards community and respect for others, where Twitter has too often been a breeding ground for cyber-harassment.

But one of Mastodon's major particularities is decentralisation: it is not a single service owned by one company, but a network, like e-mail.

While anyone can in theory create their own Mastodon instance, most of us will join existing ones. I personally joined mamot.fr, the instance run by La Quadrature du Net, because I trust the durability of the association and its technical competence, and, above all, I am aligned with its values of neutrality and freedom of expression. I also recommend framapiaf.org, which is administered by Framasoft.

But you will find a plethora of instances: from those of the French and Belgian Pirate parties to themed instances. There are even paid instances and, why not, one day there could be instances with ads.

The beauty of all this lies, of course, in the choice. The La Quadrature du Net and Framasoft instances are open and free, so I advise making a small recurring voluntary payment to the association of €2, €5 or €10 per month, depending on your means.

Is Mastodon decentralised? Actually, "distributed" would be a better word. Five years ago, I denounced the problems of decentralised/distributed solutions, the main one being that you are at the mercy of the goodwill, or the blunders, of your instance's administrator.

It has to be said that Mastodon has technically solved none of these problems. But it seems to be creating a wonderful community dynamic that is a pleasure to watch. Unlike its ancestor Identi.ca, instances have multiplied quickly. Conversations have taken off and usages have appeared spontaneously: welcoming newcomers, following those who only have a few followers to motivate them, openly discussing the good practices to adopt, using a CW (Content Warning) to hide potentially inappropriate messages, debating moderation rules.

All this energy gives the impression of a space apart, of a freedom of discussion far removed from the omnipresent and omniscient advertising surveillance that is inseparable from tools like Facebook, Twitter or Google.

In fact, one user suggested that on Mastodon we should not talk about "users" but about "people".

In a previous article, I pointed out that social networks are the beginnings of a global consciousness of humanity. But as Neil Jomunsi points out, the medium is an inseparable part of the message being developed. Do we really want humanity to be represented by an advertising platform that seeks to exploit the brain time of its users?

Mastodon is therefore, in my opinion, the expression of a real need, of something missing. A part of our humanity is suffocated by advertising, consumption and conformism, and is looking for a space in which to express itself.

Could Mastodon be the first popular distributed social network? Will it manage to convince less technical users and stand out enough not to be "yet another free clone" (as Diaspora unfortunately is for Facebook)?

Will Mastodon last? As long as there are volunteers to run instances, Mastodon will continue to exist without worrying about stock prices, governments, the laws of any particular country or the wishes of investors. The same cannot be said of Facebook or Twitter.

But, above all, a wind of fresh utopia is blowing through Mastodon, an air of naive freedom, a feeling of collaborative humanity where the quality of the exchanges supersedes the race for audience. It is good and it feels good.

Feel free to join us, to read Funambuline's user guide and to post your first "toot" presenting your interests. If you say that you come on my recommendation ( @ploum@mamot.fr ), I will "boost" you (the equivalent of a retweet) and the community will suggest people for you to follow.

In the end, it doesn't really matter whether Mastodon succeeds or disappears in a few months. We must keep trying, testing and experimenting until it works. If it isn't Diaspora or Mastodon, it will be the next one. Our global consciousness, our expression and our exchanges deserve better than being mere inserts between two ads on a platform subject to laws over which we have no control.

Mastodon is a social network. Twitter and Facebook are advertising networks. Let's not be fooled any longer.

 

Photo by Daniel Mennerich.

Did you enjoy reading this? Support the author on Tipeee, Patreon, Paypal or Liberapay. Even a symbolic donation makes all the difference! Let's also meet on Facebook, Medium, Twitter or Mastodon.

This text is published under the CC-By BE licence.

April 17, 2017

April 14, 2017

After a nice evening with some beers and an excellent dinner with infosec peers, here is my wrap-up for the second day. Coffee? Check! Wireless? Check! Twitter? Check!

As usual, the day started with a keynote. Window Snyder presented “All Fall Down: Interdependencies in the Cloud”. Window is the CSO of Fastly and, like many companies today, Fastly relies on many services running in the cloud. This reminds me of the Amazon S3 outage and their dashboard that was not working because it was relying on… S3! Today, everything is interconnected and the overall security depends on the complete chain. To sum up: you use a cloud service to store your data, you authenticate to it using another cloud service, you analyse your data using a third one, etc… If one fails, we can face a domino effect. Many companies have statements like “We take security very seriously” but they don’t invest. Window reviewed some nightmare stories where security completely failed, like the RSA token compromise in 2011, Diginotar in 2012 or Target in 2013. But sometimes dependencies are very simple, like DNS… What if your DNS is out of service? All your infrastructure is down. DNS remains an Achilles’ heel for many organizations. The keynote was interesting but very short! Anyway, it meant more time for coffee…

The first regular talk was maybe the most expected: “Chasing Cars: Keyless Entry System Attacks”. The talk was promoted via social networks before the conference. I was really curious and not disappointed by the result of the research! Yingtao Zeng, Qing Yang & Jun Li presented their work about car keyless attacks. It was strange that the guy responsible for most of the research did not speak English. He was speaking in Chinese to his colleague, who was translating into English. Because users are looking for more convenience (and because it’s “cool”), modern cars are not using RKE (remote keyless entry) but PKE (passive keyless entry). They started with a technical description of the technology that many of us use daily:

Passive key entry system

How to steal the car? How could we use the key in the car owner’s pocket? The idea was to perform a relay attack. The signal of the key is relayed from the owner’s pocket to the attacker sitting next to the car. Keep in mind that cars require you to press the button on the door or to use a contact sensor to enable communication with the key. A wake-up is sent to the key, which then unlocks the doors. The relay attack scenario looks like this:

Relay attack scenario

During this process, there are time constraints. They showed a nice demo of a guy leaving his car, followed by attacker #1 who captures the signal and relays it to attacker #2, who unlocks the car.

Relay devices

The current range to access the car owner’s key is ~2m. Between the two relays, up to 300m! What about the cost to build the devices? Approximately €20 (the cost of the main components)! What about a real case? Once the car is stolen and the engine is running, it will only warn that the key is not present but it won’t stop! The only limit is running out of gas 🙂 Countermeasures are: use a Faraday cage or bag, remove the battery, or enforce stricter timing constraints.

They are still improving the research and are now investigating how to relay this signal through TCP/IP (read: the Wild internet). [Slides are available here]

My next choice was to follow “Extracting All Your Secrets: Vulnerabilities in Android Password Managers” presented by Stephan Huber, Steven Arzt and Siegfried Rasthofer. Passwords remain a threat for most people. For years, we have asked users to choose strong passwords and to change them regularly. The goal here was not to debate how passwords must be managed but, as we recommend users rely on password managers to handle the huge number of passwords, are those managers really safe? An interesting study demonstrated that, on average, users have to deal with 90 passwords. The research focused on Android applications. First of all, most of them claim to offer “banking level” or “military grade” encryption. True or false? Well, encryption is not the only protection for passwords. Is it possible to steal them using alternative attack scenarios? Guess what? They chose the top password managers by the number of downloads on the Google Play store. They all provide standard features like autofill, a custom browser, comfort features, secure sync and, of course, confidential password storage. (Important note: all the attacks were performed on non-rooted devices.)

The first attack scenario was the manual filling attack: manual filling uses the clipboard. First problem: any application can read from the clipboard without any specific rights, so a clipboard sniffer app could be used to steal any password. The second scenario was the automatic filling attack. How does it work? Applications cannot communicate directly due to the sandboxing system, so they have to use the “Accessibility service” (normally used for disabled people). An issue arises if the application doesn’t check the complete app name. Example: make an app whose name also starts with “com.twitter”, like “com.twitter.twitterleak”. The next attack is based on the backup function: make a backup, convert the backup to .tar, untar it and get the master password in plain text in KeyStorage.xml. Browsers don’t provide APIs to perform autofill, so developers create a custom browser. But it’s running in the same sandbox. Cool! But can we abuse this? These browsers are based on the WebView API which supports access to files… file:///data/package/…./passwords_pref.xml Where is the key? In the source code, split in two 🙂 More fails reported by the speakers:

  • Custom crypto (“because AES isn’t good enough?”)
  • AES used in ECB mode for db encryption
  • Bundled browsers do not consider subdomains in form fields
  • Data leakage in browsers
  • Custom transport security

How to improve the security of password managers:

  • Android provides a keystore, use it!
  • Use a key derivation function
  • Avoid hardcoded keys
  • Do not abuse the account manager

The complete research is available here. [Slides are available here]

After lunch, Antonios Atlasis presented “An Attack-in-Depth Analysis of Multicast DNS and DNS Service Discovery”. The objective was to perform a threat analysis and to release a tool to perform tests on a local network. The starting point was the RFCs and identifying the potential risks. mDNS & DNS-SD are used for zero-conf networking. They are used by the AppleTV, the Google ChromeCast, home speakers, etc. mDNS (RFC6762) provides DNS-alike operations but on the local network (using port 5353). DNS-SD (RFC6763) allows clients to discover instances of a specific service (using standard DNS queries). mDNS uses the “.local” TLD via 224.0.0.251 & FF02::FB. Antonios made a great review of the problems associated with these protocols. The possible attacks are:

  • Reconnaissance (when you search for a printer, all the services will be returned; this is useful to gather information about your victim and makes it easy to get info without scanning). I liked this.
  • Spoofing
  • DoS
  • Remote unicast interaction

An mDNS implementation can be used to perform a DoS attack from remote locations. While most modern operating systems are protected, some embedded systems still use vulnerable Linux implementations. Interesting: close to 1M devices are listening on port 5353 on the Internet (according to Shodan). Not all of them are vulnerable, but chances are that many are. During the demos, Antonios used the tool he developed: pholus.py. [Slides are available here]

Then, Patrick Wardle presented “OverSight: Exposing Spies on macOS”. Patrick presented a quick talk yesterday in the Commsec track. It was very nice so I expected some nice content here too. Today the topic was pieces of malware on OSX that abuse the microphone and webcam. To protect against this, he developed a tool called OverSight. Why do bad guys use webcams? To blackmail victims. Why do governments use the microphone? To spy. From a developer’s point of view, how do you access the webcam? Via the AVFoundation framework. Sandboxed applications must have specific rights to access the camera (via the entitlement ‘com.apple.security.device.camera’), but non-sandboxed applications do not require this entitlement to access the cam. videoSnap is a nice example of AVFoundation use. The upcoming counterpart is audioSnap for the microphone. The best way to protect your webcam is to put a sticker on it. Note that it is also possible to restrict access to it via file permissions.

What about malware that uses the mic/cam? (Note: the LED will always be on.) Patrick reviewed some of them, as he did yesterday:

  • The Hackingteam’s implant
  • Eleanor
  • Mokes
  • FruitFly

To protect against abusive access to the webcam & microphone, Patrick developed a nice tool called OverSight. Version 1.1 was just released with new features (better support for the mic, whitelisting apps which can access resources). The talk ended with a nice case study: Shazam was reported as listening to the mic all the time (even when disabled). This was reported to Patrick by an OverSight user. He decided to have a deeper look and discovered that it’s not a bug but a feature, and contacted Shazam. For performance reasons they use continuous recording on iOS, and a shared SDK is used on OSX. Malicious or not? “OFF” in fact means “stop processing the recording”, not “stop the recording”.

Other tools developed by Patrick:

  • KnockKnock
  • BlockBlock
  • RansomWhere? (detects encryption of files and a high number of created files)

It was a very cool talk with lots of interesting information and tips to protect your OSX computers! [Slides are available here]

The last talk from my list was “Is There a Doctor in The House? Hacking Medical Devices and Healthcare Infrastructure” presented by Anirudh Duggal. Usually, such talks present vulnerabilities around the devices that we can find everywhere in hospitals, but this talk focused on something completely different: the HL7 2.x protocol. Hospitals have: devices (monitors, X-ray, MRI, …), networks, protocols (DICOM, HL7, FHIR, HTTP, FTP) and records (patients). HL7 is a messaging standard used by medical devices to achieve interoperability. Messages may contain patient info (PII), doctor info, patient visit details, allergies & diagnostics. Anirudh reviewed the different types of messages that can be exchanged, like “RDE” or “Pharmacy Order Message”. The common attacks are:

  • MITM (everything is in clear text)
  • Message source not validated
  • DoS
  • Fuzzing

It is scary to see that such important information is exchanged with so little protection. How to improve? According to Anirudh, here are some ideas:

  • Validate messages size
  • Enforce TLS
  • Input sanitization
  • Fault tolerance
  • Anonymization
  • Add consistency checks (checksum)

The future? HL7 will be replaced by FHIR, a lightweight HTTP-based API. I learned interesting stuff about this protocol… [Slides are available here]

The closing keynote was given by Natalie Silvanovich, who works on Google Project Zero. It was about the Chakra JavaScript engine. Natalie reviewed the code and discovered 13 bugs, now fixed. She started the talk with a deep review of the principles of arrays in the JavaScript engine. Arrays are very important in JS. They are simple but can quickly become complicated with arrays of arrays of arrays. Example:

var b = [ 1, "bob", {}, new RegExp() ];

The second part of the talk was dedicated to the review of the bugs she found during her research time. I was a bit lost (the end of the day and not my preferred topic) but the work performed looked very nice.

The 2017 edition is now over. Besides the talks, the main room was full of sponsor booths with nice challenges, hackerspaces, etc. A great edition! See you next year, I hope!

Hackerspaces

ICS Lab

 

[The post HITB Amsterdam 2017 Day #2 Wrap-Up has been first published on /dev/random]

Updating the VCSA is easy when it has internet access or if you can mount the update iso. On a private network, VMware assumes you have a webserver that can serve up the updaterepo files. In this article, we'll look at how to proceed when VCSA is on a private network where internet access is blocked, and there's no webserver available. The VCSA and PSC contain their own webserver that can be used for an HTTP based update. This procedure was tested on PSC/VCSA 6.0.

Follow these steps:


  • First, download the update repo zip (e.g. for 6.0 U3A, the filename is VMware-vCenter-Server-Appliance-6.0.0.30100-5202501-updaterepo.zip ) 
  • Transfer the updaterepo zip to a PSC or VCSA that will be used as the server. You can use Putty's pscp.exe on Windows or scp on Mac/Linux, but you'd have to run "chsh -s /bin/bash root" in the CLI shell before using pscp.exe/scp if your PSC/VCSA is set up with the appliancesh. 
    • chsh -s /bin/bash root
    • "c:\program files (x86)\putty\pscp.exe" VMware*updaterepo.zip root@psc-name-or-address:/tmp 
  • Change your PSC/VCSA root access back to the appliancesh if you changed it earlier: 
    • chsh -s /bin/appliancesh root
  • Make a directory for the repository files and unpack the updaterepo files there:
    • mkdir /srv/www/htdocs/6u3
    • chmod go+rx /srv/www/htdocs/6u3
    • cd /srv/www/htdocs/6u3
    • unzip /tmp/VMware-vCenter*updaterepo.zip
    • rm /tmp/VMware-vCenter*updaterepo.zip
  • Create a redirect using the HTTP rhttpproxy listener and restart it
    • echo "/6u3 local 7000 allow allow" > /etc/vmware-rhttpproxy/endpoints.conf.d/temp-update.conf 
    • /etc/init.d/vmware-rhttpproxy restart 
  • Create a /tmp/nginx.conf (I didn't save mine, but "listen 7000" is the key change from the default; see the sketch after this list)
  • Start nginx
    • nginx -c /tmp/nginx.conf
  • Start the update via the VAMI. Change the repository URL in the settings, using http://psc-name-or-address/6u3/ as the repository URL, then use "Check URL". 
  • Afterwards, clean up: 
    • killall nginx
    • cd /srv/www/htdocs; rm -rf 6u3
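
For reference, the /tmp/nginx.conf mentioned above can be as minimal as the sketch below. This is an assumption of a working config rather than the exact file used; "listen 7000" and the htdocs root are the parts that matter:

cat > /tmp/nginx.conf <<'EOF'
worker_processes 1;
events { worker_connections 64; }
http {
    default_type application/octet-stream;
    server {
        listen 7000;
        root /srv/www/htdocs;
        autoindex on;
    }
}
EOF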


P.S. I personally tested this using a PSC as webserver to update both that PSC, and also a VCSA appliance.
P.P.S. VMware released an update for VCSA 6.0 and 6.5 on the day I wrote this. For 6.0, the latest version is U3B at the time of writing, while I updated to U3A.

April 13, 2017

I recently needed to start "playing" with Openstack (working in an existing RDO setup), so I thought that it would be a good idea to have a personal playground to start deploying from scratch, and then break and fix that playground setup.

At first sight, Openstack looks impressive and "over-engineered", as it's complex and has zillions of modules to make it work. But when you dive into it, you understand that the choice is yours to make it complex or not. Yeah, that sentence can sound strange, but I'll explain why.

First, you should just write down your requirements, and only then have a look at the needed openstack components. For my personal playground, I just wanted a basic setup that would let me deploy VMs on demand, in the existing network, and so directly using a bridge, as I want the VMs to be directly integrated into the existing network/subnet.

So just by looking at the mentioned diagram, we just need :

  • keystone (needed for the identity service)
  • nova (hypervisor part)
  • neutron (handling the network part)
  • glance (to store the OS images that will be used to create the VMs)

Now that I have my requirements and list of needed components, let's see how to set up my PoC ... The RDO project has good docs for this, including the Quickstart guide. You can follow that guide, and as everything is packaged/built/tested and also delivered through the CentOS mirror network, you can have a working RDO/openstack all-in-one setup in minutes ...

The only issue is that it doesn't fit my needs, as it will set up unneeded components, and the network layout isn't the one I wanted either, as it will be based on openvswitch and other rules (so multiple layers I wanted to get rid of). The good news is that Packstack is in fact a wrapper tool around puppet modules, and it also supports a lot of options to configure your PoC.

Let's assume that I wanted a PoC based on openstack-newton, and that my machine has two NICs : eth0 for the mgmt network and eth1 for the VMs network. You don't need to configure the bridge on the eth1 interface, as that will be done automatically by neutron. So let's follow the quickstart guide, but we'll just adapt the packstack command line :

yum install centos-release-openstack-newton -y
systemctl disable firewalld
systemctl stop firewalld
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl enable network
systemctl start network
yum install -y openstack-packstack

Let's fix eth1 to ensure that it's started but without any IP on it :

sed -i 's/BOOTPROTO="dhcp"/BOOTPROTO="none"/' /etc/sysconfig/network-scripts/ifcfg-eth1
sed -i 's/ONBOOT="no"/ONBOOT="yes"/' /etc/sysconfig/network-scripts/ifcfg-eth1
ifup eth1

And now let's call packstack with the required options so that we'll use a basic linux bridge (and so no openvswitch), and we'll instruct it to use eth1 for that mapping :

packstack --allinone --provision-demo=n --os-neutron-ml2-type-drivers=flat --os-neutron-ml2-mechanism-drivers=linuxbridge --os-neutron-ml2-flat-networks=physnet0 --os-neutron-l2-agent=linuxbridge --os-neutron-lb-interface-mappings=physnet0:eth1 --os-neutron-ml2-tenant-network-types=' ' --nagios-install=n 

At this stage we have the openstack components installed, and a /root/keystonerc_admin file that we can source for openstack CLI operations. We have instructed neutron to use linuxbridge, but we haven't (yet) created a network and a subnet tied to it, so let's do that now :

source /root/keystonerc_admin
neutron net-create --shared --provider:network_type=flat --provider:physical_network=physnet0 othernet
neutron subnet-create --name other_subnet --enable_dhcp --allocation-pool=start=192.168.123.1,end=192.168.123.4 --gateway=192.168.123.254 --dns-nameserver=192.168.123.254 othernet 192.168.123.0/24

Before importing image[s] and creating instances, there is one thing left to do : instruct the dhcp_agent that metadata for cloud-init inside the VM will not be served from the traditional "router" inside openstack. And also don't forget to let traffic (in/out) pass through the security group (see doc).

Just be sure to have enable_isolated_metadata = True in /etc/neutron/dhcp_agent.ini and then run systemctl restart neutron-dhcp-agent : from that point, cloud metadata will be served from dhcp too.
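
A minimal sketch of that change (crudini is usually available on an RDO/packstack node, but that's an assumption; editing the file by hand works just as well):

# set the option in the [DEFAULT] section and restart the agent
crudini --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
systemctl restart neutron-dhcp-agent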

From that point, you can just follow the quickstart guide to create projects/users, import images and create instances, and/or do all this from the CLI too.
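
For example, importing a CentOS 7 cloud image and booting a first instance on that flat network could look like the sketch below (image URL, flavor and names are assumptions, using the Newton-era CLI clients that packstack installed):

source /root/keystonerc_admin
curl -O http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
glance image-create --name centos7 --disk-format qcow2 --container-format bare --file CentOS-7-x86_64-GenericCloud.qcow2
# create a small flavor first if none exist yet on your setup
nova flavor-create m1.small auto 2048 20 1
nova boot --flavor m1.small --image centos7 --nic net-id=$(neutron net-show othernet -f value -c id) testvm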

One last remark with linuxbridge in an existing network : as neutron will have a dhcp-agent listening on the bridge, the provisioned VMs will get an IP from the pool declared in the "neutron subnet-create" command. However (and I saw that when I added other compute nodes in the same setup), you'll have a potential conflict with an existing dhcpd instance on the same segment/network, so your VMs can potentially get their IP from your existing dhcpd instance on the network, and not from neutron. As a workaround, you can just ignore the MAC address range used by openstack, so that your VMs will always get their IP from the neutron dhcp. To do this, there are different options, depending on your local dhcpd instance :

  • for dnsmasq : dhcp-host=fa:16:3e:*:*:*,ignore (see doc)
  • for ISC dhcpd : "ignore booting" (see doc)

The default MAC address range for openstack VMs indeed starts at fa:16:3e:00:00:00 (see /etc/neutron/neutron.conf), so that can be changed too.

Those were some of my findings for my openstack PoC/playground. Now that I understand all this a little bit more, I'm currently working on some puppet integration for it, as there are official openstack puppet modules available on git.openstack.org that one can import to deploy/configure openstack (and that's better than using packstack). But there are a lot of "yaks to shave" to get to that point, so that will surely be for another future blog post.

I’m back in Amsterdam for the 8th edition of the security conference Hack in the Box. Last year I was not able to attend, but I have been attending it for a while (you can reread all my wrap-ups here). What to say? It’s a very strong organisation, everything runs fine, and a good team is dedicated to the attendees. This year, the conference was based on four(!) tracks: two regular ones, one dedicated to more “practical” presentations (HITBlabs) and the last one dedicated to short talks (30-60 mins).

Elly van den Heuvel opened the conference with a small 15-minute introduction talk: “How prepared are we for the future?”. Elly works for the Dutch government at the “Cyber Security Council“. She gave some facts about the current security landscape, from the place of women in infosec (things are changing slowly) to the message that cyber-security is important for our security in our daily life. For Elly, we are facing a revolution as big as the industrial revolution, maybe even bigger. Our goal as information security professionals is to build a cyber-security future for the next generations. There are already nice worldwide initiatives like the CERTs or NIST and their guidelines. In companies, board members must take their responsibility for cyber-security projects (budgets & time must be assigned to them). Elly declared the conference officially open 🙂

The first-day keynote was given by Saumil Shah. The title was “Redefining defences”. He started with a warning: this talk is disruptive and… it was! Saumil started with a step back into the past and how security/vulnerabilities evolved. It started with servers and today people are targeted. For years, we have implemented several layers of defence but with the same effect: all of them can be bypassed. Keep in mind that there will always be new vulnerabilities, because products and applications have more and more features and are becoming more complex. I really liked the comparison with the Die Hard movie: it’s the Nakatomi building: we can walk through all the targets exactly like Bruce Willis travels through the building in the movie. Vendors invent new technologies to mitigate the exploits. There was a nice reference to the “Mitigator“. The next part of the keynote focused on the CISO’s daily job and the fight against auditors. A fact: “compliance is not security”. In 2001, the CIO position was split into CIO & CISO but budgets remained assigned to the CIO as “business enabler”. Today, we should have another split: the CISO position must be divided into CISO and COO (Compliance Officer). His/her job is to defend against auditors. It was a great keynote, but the audience should have been more C-level people instead of “technical people” who already agree on all the facts reviewed by Saumil. [Saumil’s slides are available here]

After the first coffee break, I had to choose between two tracks. My first choice was already difficult: hacking femtocell devices or IBM mainframes running z/OS. Even if the second focused on a less known environment, mainframes are used in many critical operations, so I decided to attend this talk. Ayoub Elaassal is a pentester who focused on this type of target. People still have an old idea of mainframes. The good old IBM 370 was a big success. Today, the reality is different: modern mainframes are badass computers like the IBM zEC 13: 10TB of memory, 141 processors, cryptographic chips, etc. Who uses such computers? Almost every big company, from airlines and healthcare to insurance and finance (have a look at this nice gallery of mainframe consoles). Why? Because it’s powerful and stable. Many people (me first) don’t know a lot about mainframes: it’s not a web app, it uses a 3270 emulator over port 23, but we don’t know how it works. On top of the mainframe OS, IBM has an application layer called CICS (“Customer Information Control System”). For Ayoub, it looks like “a combination of Tomcat & Drupal before it was cool”. CICS is a very nice target because it is used a lot; Ayoub gave a nice comparison: worldwide, 1.2M requests/sec are performed using the CICS product while Google reaches 200K requests/sec. Impressive! Before exploiting CICS, the first step was to explain how it works. The mainframe world is full of acronyms, not easy to understand immediately. But then Ayoub explained how he abused a mainframe. The first attack was to jailbreak CICS to get console access (just like finding the admin web page). Mainframes contain a lot of juicy information. The next attack was to read sensitive files. Completed too! So, the next step is to pwn the device. CICS has a feature called “spool” functions. A spool is a dataset (or file) containing the output of a job. Idea: generate a dataset and send it to the job scheduler. Ayoub showed a demo of a reverse shell in REXX. Like a DC trust, you can have the same trust between mainframes and push code to another one: replace NODE(LOCAL) by NODE(WASHDC). If the spool feature is not enabled, there are alternative techniques that were also reviewed. Finally, let’s move to privilege escalation: there are three main levels: Special, Operations and Audit. Special can be considered as the “root” level. Those levels are defined by a simple bit in memory. If you can swap it, you get more privileges. That was the last example. From a novice point of view, this was difficult to follow, but basically, mainframes can be compromised like any other computer. The more dangerous aspect is that people using mainframes think that they’re not targeted. Based on the data stored on them, they are really nice targets. All of Ayoub’s scripts are here. [Ayoub’s slides are available here]

The next talk was “Can’t Touch This: Cloning Any Android HCE Contactless Card” by Slawomir Jasek. Cloning things has always been a dream for people. And they succeeded in 1996 with Dolly the sheep. Later, in 2001, scientists made “Copycat”. Today we even have services to clone pets (if you have a lot of money to spend). Even if cloning humans is unethical, it remains a dream. So, why not also clone objects? Especially if it can help to get some money. Mobile contactless payment cards are a good target. It’s illegal but bad guys don’t care. Such devices implement a lot of countermeasures, but are we sure that they can’t be bypassed? Slawomir briefly explained the HCE technology. So, what are the different ways to abuse a payment application? The first one is of course to steal the phone. We can steal the card data via NFC (but there are already restrictions: the phone screen must be turned on). We can’t pay, but for motivated people, it should be possible to rebuild the mag stripe. Mobile apps use tokenization: random card numbers are generated to pay and are used only for such operations. The transaction is protected by encrypted data. So, the next step is to steal the key. Online? Using man-in-the-middle attacks? Not easy. The key is stored on the phone. The key is also encrypted. How to access it? By reversing the app, but that has a huge cost. What if we copy data across devices? They must be the same (model, OS, IMEI). We can copy the app + data but it’s not easy for a mass-scale attack. The Xposed framework helps to clone the device but it requires root access. Root detection is implemented in many apps. Slawomir performed a live demo: he copied data between two mobile phones using shell scripts and was able to make a payment with the cloned device. Note that the payments were performed on the same network and with small amounts of money. Google and banks have strong fraud detection systems. What about the Google push messages used by the application? Cloned devices received both messages but not always (not reliable). Then Slawomir talked about CDCVM, which is a verification method that asks the user to enter a PIN code but where… on their own device! Some apps do not support it, but there is an API and it is possible to patch the application and enable the support (setting it to “True”) via an API call. What about other applications? As usual, some are good while others are bad (e.g. some don’t even implement root detection). To conclude, can we prevent cloning? Not completely, but we can make the process more difficult. According to Slawomir, the key is also to improve the backend and strong fraud detection controls (e.g. based on the behaviour of the user). [Slawomir’s slides are available here]

After lunch, my choice was to attend the talk by Long Liu and Linan Hao (who was not present). The abstract looked nice: exploitation of the ChakraCore engine. This is a JavaScript engine developed by Microsoft for its Edge browser; today the framework is open source. Why is it a nice target according to the speaker? The source code is freely available and Edge is a nice attack surface. Long explained the different bugs they found in the code and how they helped them win a lot of hacking contests. The problem was the monotonous voice of the speaker, which just invited you to take a small nap. The presentation ended with a nice demo of a web page visited by Edge popping up a notepad running with system privileges. [Long’s slides are available here]

After the break, I switched to track four to attend two short talks. But the quality was there! The first one was by Patrick Wardle: “Meet and Greet with the MacOS Malware Class of 2016“. The presentation was a cool overview of the malware that targeted the OSX operating system. Yes, OSX is also targeted by malware today! For each of them, he reviewed:

  • The infection mechanism
  • The persistence mechanism
  • The features
  • The disinfection process

The examples covered by Patrick were:

  • Keranger
  • Keydnap
  • FakeFileOpener
  • Mokes
  • Komplex

He also presented some nice tools which could increase the security of your OSX environment. [Patrick’s slides are available here]

The next talk was presented by George Chatzisofroniou and covered a new wireless attack technique called Lure10. Wireless automatic association is not new (the well-known KARMA attack). This technique has existed for years, but modern operating systems have implemented controls against it. Still, MitM attacks remain interesting because most applications do not implement countermeasures. In Windows 10, open networks are not added to the PNL (“Preferred Networks List”). Microsoft developed the Wi-Fi Sense feature. The Lure10 attack tries to abuse it by making the Windows Location Service think that it is somewhere else and then mimicking a Wi-Fi Sense-approved local network. In this case, we get an automatic association. A really cool attack that will be implemented in the next release of the wifiphisher framework. [George’s slides are available here]

My next choice was to attend a talk about sandboxing: “Shadow-Box: The Practical and Omnipotent Sandbox” by Seunghun Han. In short, Shadow-box is a lightweight hypervisor-based kernel protector. A fact: Linux kernels are everywhere today (computers, IoT, cars, etc). The kernel suffers from vulnerabilities and the risk of rootkits is always present. The classic ring (ring 0) is not enough to protect against those threats. Basically, a rootkit changes the system call table and diverts calls to itself to perform malicious activities. The idea behind Shadow-box is to use VT technology to help mitigate those threats. This is called “Ring -1”. Previous research was already performed but suffered from many issues (mainly performance). The new research insists on lightweight and practical usage. Seunghun explained in detail how it works and ended with a nice demo. He tried to load a rootkit into a Linux kernel that had the Shadow-box module loaded. Detection was immediate and the rootkit was not installed. Interesting, but is it usable on a day-to-day basis? According to Seunghun, it is. The performance impact on the system is acceptable. [Seunghun’s slides are available here]

The last talk of the day focused on TrendMicro products: “I Got 99 Trends and a # is All of Them! How We Found Over 100 RCE Vulnerabilities in Trend Micro Software” by Roberto Suggi Liverani and Steven Seeley. Their research started after the disclosure of vulnerabilities, and they decided to find more. Why Trend Micro? Nothing against the company, but it’s a renowned vendor, they have a bug bounty program and they want to secure their software. The approach followed was to compromise the products without user interaction. They started with low-hanging fruit, focusing on components like libraries and scripts. They also used the same approach as in malware analysis: check the behaviour and the communications with external services and other components. They reviewed the following products:

  • Smart Protection Server
  • Data Loss Prevention
  • Control Manager
  • Interscan Web Security
  • Threat Discovery Appliance
  • Mobile Security for Enterprise
  • Safesync for Enterprise

The total number of vulnerabilities they found was impressive; most of them led to remote code execution. And, for most of them, exploitation was quite trivial. [Roberto’s & Steven’s slides are available here]

This is the end of day #1. Stay tuned for more tomorrow.

 

[The post HITB Amsterdam 2017 Day #1 Wrap-Up has been first published on /dev/random]

Combining QFuture with QUndoCommand made a lot of sense for us. The undo and the redo methods of the QUndoCommand can also be asynchronous, of course. We wanted to use QFuture without involving threads, because our asynchronicity is achieved through a process and IPC, and not a thread. That, in my opinion, is the design mistake of QtConcurrent‘s run method. It meant using QFutureInterface instead (which is undocumented, but luckily public, so it’ll remain with us until at least Qt’s 6.y.z releases).

So how do we make a QUndoCommand that has an undo method, and a redo method that returns an asynchronous QFuture<ResultType>?

We just did that, today. I’m very satisfied with the resulting API and design. It might have helped if QUndoStack were a QUndoStack<T> and QUndoCommand a QUndoCommand<T>, with undo’s and redo’s return type being T. Just an idea for the Qt 6.y.z developers.

April 12, 2017

As a sysadmin, you probably deploy your bare-metal nodes through kickstarts in combination with pxe/dhcp. That's the most convenient way to deploy nodes in an existing environment. But what about having to remotely init a new DC/environment, without anything at all? Suppose that you have a standalone node that you have to deploy, but there is no PXE/dhcp environment configured (yet).

The simple solution, as long as you have at least some kind of management/out-of-band network, would be to ask the local DC people to burn the CentOS Minimal iso image onto a usb stick or other media. But I needed to deploy a machine without any remote hands available locally to help me. The only things I had were :

  • access to the ipmi interface of that server
  • the fixed IP/netmask/gateway/dns settings for the NIC connected to that segment/vlan

One simple solution would have been to just "attach" the CentOS 7 iso as virtual media, boot the machine, and install from the "locally emulated" cd-rom drive. But that's not something I wanted to do, as I didn't want to slow down the install, which would come from my local iso image and so use my "slow" bandwidth. Instead, I wanted to directly use the Gbit link from that server to kick off the install. So here is how you can do it with ipxe.iso. iPXE is really helpful for such things. The only "issue" was that I had to configure the nic first with a fixed IP (remember? no dhcpd yet).

So, download the ipxe.iso image, add it as "virtual media" (the transfer will be fast, as that's under 1Mb), and boot the server. Once it boots from the iso image, don't let ipxe run, but instead hit CTRL/B when you see ipxe starting. The reason is that we don't want to let it start the dhcp discover/offer/request/ack process, as we know that it will not work.

You're then presented with the ipxe shell, so here we go (all parameters are obviously to be adapted, including the net adapter number) :

set net0/ip x.x.x.x
set net0/netmask x.x.x.x
set net0/gateway x.x.x.x
set dns x.x.x.x

ifopen net0
ifstat

From that point you should have network connectivity, so we can "just" chainload the CentOS pxe images and start the install :

initrd http://mirror.centos.org/centos/7/os/x86_64/images/pxeboot/initrd.img
chain http://mirror.centos.org/centos/7/os/x86_64/images/pxeboot/vmlinuz net.ifnames=0 biosdevname=0 ksdevice=eth2 inst.repo=http://mirror.centos.org/centos/7/os/x86_64/ inst.lang=en_GB inst.keymap=be-latin1 inst.vnc inst.vncpassword=CHANGEME ip=x.x.x.x netmask=x.x.x.x gateway=x.x.x.x dns=x.x.x.x

Then you can just enjoy your CentOS install running entirely from the network, and so at "full steam"! You can also combine this directly with inst.ks= to have a fully automated setup. It's worth knowing that you can also regenerate/build an updated/customized ipxe.iso with those scripts directly too. That's more or less what we used to also provide a 1Mb universal installer for CentOS 6 and 7 (see https://wiki.centos.org/HowTos/RemoteiPXE), but that one defaults to dhcp.
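
For instance, the fully automated variant is just the same chain line as above with a kickstart location appended; the kickstart URL below is of course a placeholder:

initrd http://mirror.centos.org/centos/7/os/x86_64/images/pxeboot/initrd.img
chain http://mirror.centos.org/centos/7/os/x86_64/images/pxeboot/vmlinuz net.ifnames=0 biosdevname=0 ksdevice=eth2 inst.repo=http://mirror.centos.org/centos/7/os/x86_64/ inst.ks=http://your.webserver/ks/centos7.ks ip=x.x.x.x netmask=x.x.x.x gateway=x.x.x.x dns=x.x.x.x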

Hope it helps

I am not a huge fan of the Linux Nvidia drivers*, but once in a while I try them to check the performance of the machine. More often than not, I end up in a console with no X/Wayland.

I have seen some Ubuntu users reinstalling their machine after this f* up, so here are my notes to fix it (I always forget the initramfs step and end up wasting a lot of time):

$ sudo apt-get remove --purge nvidia-*
$ sudo mv /etc/X11/xorg.conf /etc/X11/xorg.conf_pre-nvidia
$ sudo update-initramfs -u
$ reboot

*: I am not a fan of the Windows drivers either now that Nvidia decided to harvest emails and track you if you want updates.


Filed under: Uncategorized Tagged: driver, fix, nvidia, post-it, Ubuntu

April 11, 2017

The post Nginx might have 33% market share, Apache isn’t falling below 50% appeared first on ma.ttias.be.

This is a response to a post published by W3 Techs titled "Nginx reaches 33.3% web server market share while Apache falls below 50%". It's gotten massive upvotes on Hacker News, but I believe it's a fundamentally flawed post.

Here's why.

How server adoption is measured

Let's take a quick moment to look at how W3 Techs can decide if a site is running Apache vs. Nginx. The secret lies in the HTTP headers the server sends on each response.

$ curl -I https://ma.ttias.be 2>/dev/null | grep 'Server:'
Server: nginx

That Server header is collected by W3 Techs and they draw pretty graphs from it.

Cool!

Except, you can't rely on the Server header alone for these statistics and claims.

You (often) can't hide the Nginx Server header

Nginx is most often used as a reverse proxy, for TLS, load balancing and HTTP/2. That's a part the article got right.

Nginx is the leading web server supporting some of the more modern protocols, which is probably one of the reasons why people start using it. 76.8% of all sites supporting HTTP/2 use Nginx, while only 2.3% of those sites rely on Apache.

Yes, Nginx offers functionality that's either unstable or hard to get on Apache (ie: not on versions in current repositories).

As a result, Nginx is often deployed like this;

:443 Nginx
|-> proxy to Apache

:80 Nginx
|-> forward traffic from HTTP -> HTTPs

To the outside world, Nginx is the only HTTP(s) server available. Since measurements of this stat are collected via the Server header, you get this effect.

:443 Nginx
|- HTTP/1.1 200 OK
|- Server: nginx
|- Cache-Control: max-age=600
|- ...
\
 \
  \ Apache
   |- HTTP/1.1 200 OK
   |- Server: Apache
   |- Cache-Control: max-age=600

Both Apache and Nginx generate a Server header, but Nginx replaces that header with its own as it sends the response to the client. You never see the Apache header, even though Apache is involved.

For instance, here's my website response;

$ curl -I https://ma.ttias.be 2>/dev/null | grep 'Server:'
Server: nginx

Spoiler: I use Nginx as an HTTP/2 proxy (in Docker) for Apache, which does all the heavy lifting. That header only tells you my edge is Nginx, it doesn't tell you what's behind it.

And since Nginx is most often deployed at the very edge, it's the surviving Server header.

Nginx supplements Apache

Sure, in some stacks, Nginx completely replaced Apache. There are clear benefits to do so. But a few years ago, many sysadmins & devs changed their stack from Apache to Nginx, only to come back to Apache after all.

This created a series of Apache configurations that learned the good parts from Nginx, while keeping the flexibility of Apache (aka .htaccess). Turns out, Nginx forced a wider use of PHP-FPM (and other runtimes), which were later used with Apache as well.

A better title for the original article would be: Nginx runs on 33% of top websites, supplementing Apache deployments.

This is one of those rare occasions where 1 + 1 != 2. Nginx can have 33% market share and Apache can have 85% market share, because they're often combined on the same stack. Things don't have to add up to 100%.

The post Nginx might have 33% market share, Apache isn’t falling below 50% appeared first on ma.ttias.be.

So, work on Autoptimize 2.2 is almost finished and I need your help testing this version before releasing (targeting May, but that depends on you!). The more people I have testing, the faster I might be able to push this thing out, and there's a lot to look forward to:

  • New option: enable/ disable AO for logged in users for all you pagebuilders out there
  • New minification/ caching system, significantly speeding up your site for non-cached pages (previously part of a power-up)
  • Switched to rel=preload + Filamentgroup’s loadCSS for CSS deferring
  • Additional support for HTTP/2 setups (no GUI, you might need to have a look at the API to see/ use all possibilities)
  • Important improvements to the logic of which JS/ CSS can be optimized (getPath function) increasing reliability of the aggregation process
  • Updated to a newer version of the CSS Minification component (albeit not the 3.x one, which seems a tad too fresh and which would require me to drop support for PHP 5.2 which will come but just not yet)
  • API: Lots of extra filters, making AO (even) more flexible.
  • Lots of bugfixes and smaller improvements (see GitHub commit log)

So if you want to help:

  1. Download the zip-file from Github
  2. Overwrite the contents of wp-content/plugins/autoptimize with the contents of autoptimize-master from the zip (see the sketch after this list)
  3. Test and if any bug (regression) create an issue in GitHub (if it doesn’t exist already).
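
On a typical Linux host, steps 1 and 2 boil down to something like the sketch below (the GitHub URL and the WordPress path are assumptions, adjust them to your install):

cd /tmp
wget https://github.com/futtta/autoptimize/archive/master.zip
unzip master.zip
rsync -a --delete autoptimize-master/ /var/www/html/wp-content/plugins/autoptimize/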

Very much looking forward to your feedback!

April 10, 2017

The last time we made significant changes to our governance was 4 to 5 years ago [1, 2, 3]. It's time to evolve it further. We need to:

  • Update the governance model so governance policies and community membership decisions are not determined by me or by me alone. It is clear that the current governance structure of Drupal, which relies on me being the ultimate decision maker and spokesperson for difficult governance and community membership decisions, has reached its limits. It doesn't work for many in our community -- and frankly, it does not work for me either. I want to help drive the technical strategy and vision of Drupal, not be the arbiter of governance or interpersonal issues.
  • Review our Code of Conduct. Many have commented that the intentions and scope of the Code of Conduct are unclear. For example, some people have asked if violations of the Code of Conduct are the only reasons for which someone might be removed from our community, whether Community Working Group decisions can be made based on actions outside of the Drupal community, or whether we need a Code of Conduct at all. These are all important questions that need clear answers.

I believe that to achieve the best outcome, we will:

  1. Organize both in-person and virtual roundtables during and after DrupalCon Baltimore to focus on gathering direct feedback from the community on evolving our governance.
  2. Refocus the 2-day meeting of the Drupal Association's Board of Directors at DrupalCon Baltimore to discuss these topics.
  3. Collect ideas in the issue queue of the Drupal Governance project. We will share a report from the roundtable discussions (point 1) and the Drupal Association Board Meeting (point 2) in the issue queue so everything is available in one place.
  4. Actively solicit help from experts on diversity, inclusion, experiences of marginalized groups, and codes of conduct and governance. This could include people from both inside and outside the Drupal community (e.g. a leader from another community who is highly respected). I've started looking into this option with the help of the Drupal Association and members of the Community Working Group. We are open to suggestions.

In order to achieve these aims, we plan to organize an in-person Drupal Community Governance sprint the weeks following DrupalCon Baltimore, involving members of the Drupal Association, Community Working Group, the Drupal Diversity & Inclusion group, outside experts, as well as some community members who have been critical of our governance. At the sprint, we will discuss feedback gathered by the roundtables, as well as discussions during the 2-day board meeting at DrupalCon Baltimore, and turn these into concrete proposals: possible modifications to the Code of Conduct, structural changes, expectations of leadership, etc. These proposals will be open for public comment for several weeks or months, to be finalized by DrupalCon Vienna.

We're still discussing these plans but I wanted to give you some insight into our progress and thinking; once the plans are finalized we'll share them on Drupal.org. Let us know your thoughts on this framework. I'm looking forward to working on solutions with others in the community.

April 09, 2017

We are making an editor for industrial uses at Heidenhain. This is to make big Klartext programs editable. I’m sure other industries could also use that.

Nowadays these programs often come out of a conversion from a CAD-CAM format. Before you can mill and turn your pesky military secrets on one of the machines controlled by a Heidenhain set, you’ll have to tweak the program that you converted from your CAD-CAM product. We are making the editor for that.

I wrote on this blog how we will instantaneously open those >4GB files, ready for editing. It looks a lot like how I made the E-mail client modest open the headers instantaneously on the N900. Basically, having a partition or index table that gets mmapped.

We’re also making the overlaying (the changes made by the user) undoable. The APIs for that kinda look like this. All examples on my blog are amateur extracts of the real thing, of course.

I feel like it’s actually going to work out. Architecturally and organizationally, the other developers in our team are getting to the right level of expertise and the sense of wanting this.

That is most important for anything to make it happen.

It feels a bit like how Nokia was: I’m learning a lot about myself from tech leading: how to propose a design, concept or idea; how to convince deeply technical people; how to push others to go further than what they can already do; how to make a team quit competing and start sharing a common goal. The infrastructure for that was provided to me by Nokia. At Heidenhain, I feel like I have played a small role in it.

I have been a long-time user of CyanogenMod, which discontinued its services at the end of 2016. Due to lack of (continuous) time, I was not able to switch over to a different ROM. Also, I wasn't sure whether LineageOS would remain the best choice for me or not. I wanted to review other ROMs for my Samsung Galaxy SIII (the i9300 model) phone.

Today, I made my choice and installed LineageOS.

The requirements list

When looking for new ROMs to use, I had a number of requirements, some must-have, others should-have or would-have (using the MoSCoW method).

First of all, I want the ROM to be installable through ClockworkMod 6.4.0.something. This is a mandatory requirement, because I don't want to venture into installing a different recovery (like TWRP). Not so much that I'm scared of it, but it might require me to install stuff like Heimdall and update my SELinux policies on my system to allow it to run, and it has the additional risk that things still fail.

I tried updating the recovery ROM in the past (a year or so ago) using the mobile application approaches themselves (which require root access, which my phone had at the time), but it continuously said that it failed and that I had to revert to the more traditional way of flashing the recovery.

Given that I know I need to upgrade within a day (and have other things planned today), I didn't want to lose too much time upgrading the recovery first.

Second, the ROM had to allow OTA updates. With CyanogenMod, OTA didn't fully work on my phone (it downloaded and verified the images correctly, but couldn't install them automatically; I had to reboot into recovery manually and install the ZIP), but it worked sufficiently for me to easily update the phone on a weekly basis. I wanted to keep this luxury, and who knows, move towards an end-to-end working OTA.

Furthermore, the ROM had to support Android 7.1. I want the latest Android to see how long this (nowadays aged) phone can handle things. Once the phone cannot get the latest Android anymore, I'll probably move towards a new phone. But as long as I don't have to, I'll put my money in other endeavours ;-)

Finally, the ROM must be in active development. One of the reasons I want the latest Android is also because I want to keep receiving the necessary security fixes. If a ROM doesn't actively follow the security patches and code, then it might become (too) vulnerable for comfort.

ROMs, ROMs everywhere (?)

First, I visited the Galaxy S3 discussion on the XDA-Developers site. This often contains enough material to find ROMs which have a somewhat active development base.

I was still positively surprised by the activity on this quite old phone (the i9300 was first released in May, 2012, making this phone almost 5 years old).

The Vanir mod seemed to imply that TWRP was required, but past articles on Vanir showed that CWM should also work. However, from the discussion I gathered that it is based on LineageOS. Not that that's bad, but it makes LineageOS the "preferred" ROM to try first (default installed software list, larger upstream community, etc.).

The Resurrection Remix shows a very active discussion with good feedback from the developer(s). It is based on a number of other resources (including CyanogenMod), so it seems to borrow and implement various other features. Although I got the slight impression that it would come a bit more filled with applications I might not want, I kept it on the short-list.

SLIMROM is based on AOSP (the Android Open Source Project). It doesn't seem to support OTA though, and its release history is currently still premature. However, I will keep an eye on this one for future reference.

After a while, I started looking for ROMs based on AOSP, as the majority of ROMs shown are based on LineageOS (abbreviated to LOS). Apparently, for the Samsung S3, LineageOS seems to be one of the most popular sources (and ROMs).

So I turned my attention to LineageOS.

So, why not?

Using LineageOS without root

While deciding whether to use LineageOS or continue looking for other ROMs, I stumbled upon the installation instructions, which showed that the ROM can be installed without automatically enabling rooted Android access. I'm not sure if this was the case with CyanogenMod (I've been running a rooted CyanogenMod for too long to remember), but it opened a possibility for me...

Personally, I don't mind having a rooted phone, as long as it is the user who decides which applications can get root access and which can't. For me, the two applications that used root access were an open source ad blocker called AdAway and the Android shell (for troubleshooting purposes, such as killing the media server if it locked my camera).

But some applications seem to think that a rooted phone automatically means that the phone is open access and full of malware. It is hard to find any trustworthy, academic research on the actual security of rooted versus non-rooted devices. I believe that proper application vetting (don't install applications that aren't popular and long-established, check the application vendors, etc.) and keeping your phone up-to-date are much more important than not rooting.

And although these applications happily function on old, unpatched Android 4.x devices, they refuse to function on my (rooted) Android 7.1 phone. So the ability to install LineageOS without root (rooting actually requires flashing an additional package) is a nice thing, as I can start with a non-rooted device first and switch back to a rooted device if I need it later.

With that, I decided to flash my phone with the latest LineageOS nightly for my phone.

Switching password manager

I tend to use such ROM switches (or, in the case of CyanogenMod, major version upgrades) as a time to revisit my mobile application list and reduce it to what I have really used over the last few months.

One of the changes I made to my mobile application list is switching the password application. I used to use Remember Passwords, but it hasn't seen updates for quite some time, and the backup import failed the last time I migrated to a higher CyanogenMod version (possibly Android version related). Because I don't want to synchronize the passwords or see the application have any Internet-oriented activity, I now use Keepass2Android Offline.

This is for passwords which I don't auto-generate using SuperGenPass, my favorite password manager. I don't use the bookmarklet approach myself, but download and run it separately when generating passwords - or use a SuperGenPass mobile application.

First impressions

It is too soon to say if it is fully functional or not. Most standard functionality works OK (phone, SMS, camera) but it is only after a few days that specific issues can come up.

Only the first boot was very slow (probably because it was optimizing the application list in the background); the second boot was well below half a minute. I didn't time it, but it's fast enough for me.

Last week Megan Sanicki, executive director of the Drupal Association, and I published a joint statement. In this blog post, I wanted to follow up with a personal statement focused on the community at large.

I've talked to a lot of people the last two weeks, and it is clear to me that our decisions have caused much alarm and distress in our community. I feel this follow-up is important even though I know it doesn't undo the hurt I've caused.

I want to deeply apologize for causing grief and uncertainty, especially to those in the BDSM and kink communities who felt targeted by the turmoil. This incident was about specific actions of a single member of our community. This was never meant to be about sexual practices or kinks, so it pains me that I unintentionally hurt you. I do support you and respect you as a key part of our community.

Shortly after I started Drupal more than 15 years ago, I based its core values on openness and equality. Gender, race, religion, beliefs, sexuality ... all are welcome in our community. We've always had people with wildly different views and identities. When we walk into a sprint at DrupalCon, we've been able to put our opinions aside, open our laptops, and start collaborating. Diversity has always been a feature, not a bug. I strongly feel that this foundation is what made Drupal what it is today; a global family.

Serving a community as unique and diverse as Drupal is both rewarding and challenging. We've navigated through several defining moments and transitions in our history. I feel what we are going through now is another one of these defining moments for our culture and community. In an excruciating but illuminating way this has shown some of what is best about our community: we care. I'm reminded that what brings us together, what we all have in common, is our love and appreciation of open-source software. Drupal is a positive force, a collective lifting by thousands and thousands, created and maintained by those individuals cooperating toward a common goal, whose other interests have no need to be aligned.

I want to help our community heal and I'm open to learn and change. As one of the next steps, I will make a follow-up post on improving our governance to a healthier model that does not place such sensitive decisions on me. I love this community, and recognize that the things we hold in common are more important than our differences.

(Comments on this post are allowed but for obvious reasons will be moderated.)

April 08, 2017

The post CAA checking becomes mandatory for SSL/TLS certificates appeared first on ma.ttias.be.

This was news to me in a few ways; first, there's a new DNS resource record called CAA (Certificate Authority Authorization) and second, Certificate Authorities are now required to check that record before issuing a certificate, to determine if they're allowed to do so.

Cool!

What's a CAA (Certificate Authority Authorization)?

When in doubt, consult the RFC: RFC 6844,  DNS Certification Authority Authorization (CAA) Resource Record.

In short, it looks like this:

ttias.be. CAA 0 issue "letsencrypt.org"

The CAA record is a new resource record, next to the usual A, CNAME, MX, TXT, ... records you might already know.

The syntax is as follows;

CAA <flags> <tag> <value>

Which translates to;

  • flag: An unsigned integer between 0-255.
  • tag: An ASCII string that represents the identifier of the property represented by the record.
  • value: The value associated with the tag.

In reality, you'll see the configurations mostly as follows;

ttias.be. CAA 0 issue "letsencrypt.org"

-> this means that only Let's Encrypt can issue a certificate for the domain "ttias.be". Note that this also covers subdomains like www, unless the subdomain publishes CAA records of its own: a CA looks for CAA records on the exact name first and then climbs up the DNS tree.

ttias.be. CAA 0 issue "letsencrypt.org"
ttias.be. CAA 0 issue "globalsign.org"

-> this means both Let's Encrypt and Globalsign can issue certificates for the domain "ttias.be".

ttias.be. CAA 0 issuewild "letsencrypt.org"

-> the issuewild tag indicates that wildcard certificates can be issued for "ttias.be", covering "*.ttias.be", but not "*.mail.ttias.be".

Receiving notifications of CAA violations

If a certificate authority receives a certificate request for a domain, but the CAA record denies it, the CA can send a notification to the domain owner. This is configured & managed via the iodef tag.

ttias.be.  CAA 0 iodef "mailto:m@ttias.be"

-> this configures it so that if a CA receives a certificate request for "ttias.be", you'll be notified on "m@ttias.be".

ttias.be.  CAA 0 iodef "https://ma.ttias.be/callback"

-> this configures it so that an HTTPS call will be made to that URL to inform you of the certificate request attempt. It's unclear whether that's a GET or a POST, or what the payload is; that might depend on the CA.
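Putting the pieces together, a complete CAA policy could look something like this (a hypothetical example that just combines the tags shown above):

ttias.be.  CAA 0 issue "letsencrypt.org"
ttias.be.  CAA 0 issuewild "letsencrypt.org"
ttias.be.  CAA 0 iodef "mailto:m@ttias.be"

-> only Let's Encrypt may issue certificates (including wildcards) for the domain, and violation reports are sent to m@ttias.be.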

Querying CAA records

Alas, this doesn't always work in dig: with older versions you just get the A record back if you query for a CAA record.

$ dig google.com CAA
google.com.		143	IN	A	172.217.17.142

Older versions of dig don't understand the CAA record type, so you have to be explicit and query for Resource Record Type 257, the identifier given to CAA.

$ dig google.com type257
google.com.		86399	IN	TYPE257	\# 19 0005697373756573796D616E7465632E636F6D
google.com.		86399	IN	TYPE257	\# 15 00056973737565706B692E676F6F67

Notice how the value isn't exactly readable? That's because it's binary encoded and needs to be decoded first, to be human readable.

Update: the dig version on CentOS 7 works just fine.

$ dig -v
DiG 9.9.4-RedHat-9.9.4-38.el7_3.2
$ dig google.com CAA
google.com.		86400	IN	CAA	0 issue "pki.goog"
google.com.		86400	IN	CAA	0 issue "symantec.com"

If your dig is too old, tools like dnscaa can decode those records for you.

$ ./digcaa google.com
google.com. 86399   IN  CAA 0 issue "symantec.com"

Support for CAA records is coming in the next version of DNS Spy, too.

Do I have to add CAA records to get a certificate?

No, just like HSTS, CAA records are completely optional. You should add them for increased security, though.

As of September 2017, Certificate Authorities are obligated to check for CAA records and honor those preferences. Not having CAA records is essentially the same as saying "everyone can issue a certificate for my domain".


April 07, 2017

I published the following diary on isc.sans.org: “Tracking Website Defacers with HTTP Referers”.

In a previous diary, I explained how pictures may affect your website reputation. Although a suggested recommendation was to prevent cross-linking by using the HTTP referer, this is a control that I do not implement on my personal blog, purely for research purposes. And it successfully worked… [Read more]

[The post [SANS ISC] Tracking Website Defacers with HTTP Referers has been first published on /dev/random]

April 06, 2017

We’ve all been there, working on code and forgetting about source control for a few hours. In my case, this resulted in crappy, way too large commits once I was happy with the result. In the best possible outcome, I could split the changes into multiple commits when the changes span across different files, but most of the time that’s not really the case.

It wasn’t until a while back that I figured out git has a way of dealing with that. All this time I thought I simply sucked at source control and would never be able to master that craft on top of all the other skills. No. In fact, it’s completely normal to forget about source control during programming and git can help you sort stuff once you’re done being awesome. But how? Meet patch mode. In this example, we’ll have a look at git add -p, but know that patch mode exists on a multitude of commands, not just add. I’ll come back to that later, but let’s start by looking add -p first.

As I said, we start off by having a lot of files and changes that, if we want to do the right thing, should be split into different commits. The problem isn’t that we have multiple files, the problem is that we have a set of changes within 1 or more files which belong together. So, we need a way to split those files and keep some changes as modified, and stage others.

When we type git add -p, git will present us with something that looks familiar and other stuff that looks rather unfamiliar if you’ve never been here before.

PS> git add -p

diff --git a/_posts/2017-03-16-cover-all-the-things.md b/_posts/2017-03-16-cover-all-the-things.md
index 4b3185c..646c9e0 100644
--- a/_posts/2017-03-16-cover-all-the-things.md
+++ b/_posts/2017-03-16-cover-all-the-things.md
@@ -35,6 +35,9 @@ Lastly, we need to format and publish the results.

 ...

+Look, I'm new here
+I'm new here too
+
 ...
\ No newline at end of file
Stage this hunk [y,n,q,a,d,/,s,e,?]?

The first part of this output looks like a diff, but we do see something else at the end. git talks about a hunk. If you change more than one line in a file, and provided they are more than a few lines apart, git will already split your file into multiple hunks. At the bottom, you can see git presents us with a few options. I'll only cover the ones I use the most, as those will also be the ones you'll be using the most. There's no need to complicate things 😊. You can type ? to get a bit more information about every option.

y - stage this hunk
n - do not stage this hunk
q - quit; do not stage this hunk or any of the remaining ones
a - stage this hunk and all later hunks in the file
d - do not stage this hunk or any of the later hunks in the file
s - split the current hunk into smaller hunks
e - manually edit the current hunk
? - print help

y and n are rather straightforward, we either chose to stage this hunk or not. q will get you back to your safe zone if you came here by accident. a can be used to stage this hunk and all the remaining hunks in the same file. d is like a, only that it won’t stage this hunk or any remaining ones for that file. Quite boring you say? Yup, but we’ve reached the interesting part. Suppose the hunk you see before you contains changes you want to stage, but also a few lines you’d rather not stage right now. You could try to let git split the hunk for you using s, but that won’t always work. When the lines are too close together, git will just repeat the same hunk over and over when trying to split it. In that case you want to use e and manually edit the diff. And that’s an important concept. You will edit the diff, not the file itself.

Try pressing e and your editor will pop up containing the diff for the selected hunk. There’s also a nice little companion text to guide you through this feature.

# Manual hunk edit mode -- see bottom for a quick guide.
@@ -35,6 +35,9 @@ Lastly, we need to format and publish the results.
 
 ...
 
+Look, I'm new here
+I'm new here too
+
...
\ No newline at end of file
# ---
# To remove '-' lines, make them ' ' lines (context).
# To remove '+' lines, delete them.
# Lines starting with # will be removed.
# 
# If the patch applies cleanly, the edited hunk will immediately be
# marked for staging.
# If it does not apply cleanly, you will be given an opportunity to
# edit again.  If all lines of the hunk are removed, then the edit is
# aborted and the hunk is left unchanged.

In the example above, two lines were added to the file. The help text describes two options, depending on what you want to do. In case it's a removal, you have to replace the - with a space. That might seem straightforward, but I've seen many people (including me at first) mess this up. Leave the line as is and just replace the - with a blank space. That's all. In case of an addition, lines starting with +, you simply delete the entire line to not stage that change. Remember, we are not editing the file itself, only the diff, which means that, on our filesystem, the file will still contain all the changes. We are simply telling git which changes to stage and prepare for a commit. So, after the edit, you'll end up with this if you wish to keep only the first line.

...
 
+Look, I'm new here
...

In case you mess up, git will not apply the patch and tell you. You can either quit or modify the diff again to fix that.

Now, I said in the beginning that add is not the only command where you can use this mode. You can make use of it on commit, checkout, reset and stash. As it's a powerful feature that also iterates through every change you made, I use this all the time to review my changes and create nice, clean, contextual commits. It's almost second nature to simply use git commit -p -m 'Some new code' instead of adding every file separately or worrying about my changes along the way. I can keep on coding, and when I'm happy with the result I'll make sure to create an impeccable git log.
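For reference, these are the kinds of invocations I mean (the file name is just an example); each of them drops you into the same hunk-by-hunk prompt:

PS> git commit -p -m 'Some new code'   # pick hunks and commit them in one go
PS> git checkout -p -- some-file.md    # interactively discard hunks from the working tree
PS> git reset -p HEAD                  # interactively unstage hunks
PS> git stash -p                       # stash only the hunks you select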

Are there any GUI’s out there who support this, you ask? Sure, GitHub for Desktop allows you to stage individual lines, which is exactly what git add -p allows you to do. The ever so awesome GitKraken displays hunks when you click on modified files, where you can also add line per line. When it comes to git and GUI tools, I usually can’t recommend anything useful that won’t create even more confusion. That is, until GitKraken came along. You still need to know what it’s all about, and no GUI can help you with that, but given that it uses the correct naming for its actions, that’s the one I recommend if you’re looking for a GUI tool to manage your git repos.

Now go out and have fun creating clean commits!

April 05, 2017

Almost a year ago, BigPipe was the first experimental module added to Drupal 8. It was still experimental in Drupal 8.2 (October 2016), but it was upgraded from alpha to beta stability. Later today, Drupal 8.3.0 is going to be released, and BigPipe is now stable!

Install it!

BigPipe is a zero-risk module. So … why not install it right now? You can uninstall it at any time. It won’t cause problems in any browser, on any web server, or with any proxy. Because:

There is zero risk of data loss. And when the environment — i.e. web server or (reverse) proxy — doesn’t support streaming, then BigPipe-delivered responses behave as if BigPipe was not installed. Nothing breaks, you just go back to the same perceived performance as before.

If you’re still on Drupal 8.2 for a while — also install it! There are no functional changes for BigPipe between 8.3 and 8.2.

Stability

In hindsight, we could have made BigPipe stable from day one, or at least in Drupal 8.2.

There have been only 4 bug reports since then, 3 of which were trivial forgotten edge cases, and the other one was a bug in the Render system which happened to also affect BigPipe [1]. Still, it is important to very thoroughly validate such a module before marking it stable, because:

This is the sort of module that needs wider testing because it changes how pages are delivered, so before it can be considered stable, it must be tested in as many circumstances as possible, including the most exotic ones.

Given that in the past year only a handful of (non-critical) bugs have been reported … that gave Drupal core committers the confidence to mark it “stable”.

Sessionless BigPipe contrib module

There’s only one thing that is new in BigPipe in Drupal 8.3: some internal refactoring, which makes it possible for the Sessionless BigPipe contrib module to exist.
This contributed module accelerates Page Cache misses using the BigPipe technique.

Future

  1. Only one feature is still planned for the future: interface previews — see Callum Hart’s excellent blog post about it for an introduction.
  2. I hope to enable BigPipe by default in the Standard install profile in 8.4.0 :)

I hope you enjoy faster personalized & authenticated page loads thanks to BigPipe! :)

I want to thank my employer Acquia for giving me the time to make this happen!


[1] See BigPipeRegressionTest — it has an explicit regression test for each of the four bugs: #2698811, #2678662, #2712935 and #2802923.

I published the following diary on isc.sans.org: “Whitelists: The Holy Grail of Attackers“.

As a defender, take the time to put yourself in the place of a bad guy for a few minutes. You’re writing some malicious code and you need to download payloads from the Internet or hide your code on a website. Once your malicious code spreads in the wild, it will be quickly captured by honeypots, IDS, … (name your best tool) and analysed automatically or manually by the good guys… [Read more]

[The post [SANS ISC] Whitelists: The Holy Grail of Attackers has been first published on /dev/random]

April 01, 2017

I published the following diary on isc.sans.org: “Pro & Con of Outsourcing your SOC“.

I’m involved in a project to deploy a SIEM (“Security Information &Event Management“) / SOC (“Security Operation Center“) for a customer. The current approach is to outsource the services to an external company also called a MSSP (“Managed Security Services Provider“). We had an interesting chat about the pro & con to have an internal or external SOC… [Read more]

[The post [SANS ISC] Pro & Con of Outsourcing your SOC has been first published on /dev/random]

March 31, 2017

Oracle Linux 6 for SPARC is now available for download from OTN and the release notes can be found here.

This version of Oracle Linux 6 uses UEK2 (there is no RHCK here, of course, as there is no corresponding release on SPARC) and this OS release can be installed on T4, T5 and T7 (M7, M5), but not yet on the S7 platform. OL6 for SPARC contains all the packages (binary and -devel) for DAX and ADI (SSM), and an updated version of openssl with support for the on-chip crypto features.

We also provide the SPARC LDOM Manager code (both source and binary). With LDOM Manager installed you can run Oracle Linux as a control domain for both Linux and Solaris guests. You can of course also install Linux as a guest domain on top of Solaris. The kernel supports vswitch, vdiskserver, etc. A native (Linux-only) installation is also supported.

Our yum repo will have the OL6/sparc channels later today. The repo also contains the -devel packages and the toolchains for gcc etc. BTW, gcc of course supports M7 (CPU) optimizations. We have optimized memcpy and tons of other stuff.

Lots of SPARC Linux kernel code is already in upstream Linux, but a bunch of stuff is in the process of going in. The same goes for user space code. glib and gcc patches have for the most part been submitted upstream and committed; some are pending.

A newer ISO with UEK(4) is on its way (we have builds and are testing). This update will also support the S7 systems/chip.

OL6 for SPARC doesn't yet contain -all- the RPMs that are part of Oracle Linux on x86. Right now it is just a subset; however, we will be expanding it over time.

I will blog about some Dax and ADI/SSM samples in a few days :) some ldom control domain tips etc...

have fun

March 30, 2017

Jenkins can spread the load of jobs by using H instead of * in the cron fields. This means that:

H 3 * * *

Means: Run between 3 and 4 am.

The minute will be decided by Jenkins, by applying a hash function over the job name.

What about this one:

H H * * *

Means: Run once a day. The moment will be calculated by Jenkins based on the job name.

But what if I have hundreds of jobs that I want to run once a day, but during the night? Something like:

H H(0-5) * * *

Means: Run the job once every day between 12am and 6am.

But when you scale up your Jenkins, you want the jobs to run between, say, 7pm and 6am, because you also want to use the hours before midnight.

There is a BAD WAY to do it:

H H(0-6),H(19-23) H/2 * *

That would run the jobs in the morning and the evening, every 2 days. It is complex and will not behave correctly at the end of the month.

The GOOD WAY to do it is to set the timezone in the cron expression, something which is not documented yet. It has been there since Jenkins 1.615, so you probably have it in your Jenkins:

TZ=GMT+7
H H(0-10) * * *

What does this mean? In the timezone GMT+7, run the jobs once between 12am and 11am, which means between 7pm and 6am in my timezone.

$ date -d '0:00 GMT+7'
Wed Mar 29 19:00:00 CEST 2017
$ date -d '11:00 GMT+7'
Thu Mar 30 06:00:00 CEST 2017

It is a much simpler syntax and it is more reliable. Please note that the validation (preview) below the cron settings does not use that TZ (I opened JENKINS-43228 for this).
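If you run Pipeline jobs, the same expression should work in the triggers directive too, since it goes through the same cron parser (an untested sketch on my side, so treat it as an assumption):

pipeline {
    agent any
    triggers {
        // Same cron syntax as above; the TZ line applies to the whole expression.
        cron('TZ=GMT+7\nH H(0-10) * * *')
    }
    stages {
        stage('Nightly') {
            steps {
                echo 'Runs once between 7pm and 6am local time'
            }
        }
    }
}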

Here are the Robot Framework keywords needed nowadays to set up a SOCKS5 proxy (e.g. ssh -ND 9050 bastion.example.com):

*** Settings ***
Documentation     Open a Web Page using a socks 5 proxy (demo)
Library           Selenium2Library

*** Test Cases ***
Create Webdriver and Open Page
    ${profile}=   Evaluate   sys.modules['selenium.webdriver'].FirefoxProfile()   sys
    Call Method   ${profile}   set_preference   network.proxy.socks   127.0.0.1
    Call Method   ${profile}   set_preference   network.proxy.socks_port   ${9060}
    Call Method   ${profile}   set_preference   network.proxy.socks_remote_dns   ${True}
    Call Method   ${profile}   set_preference   network.proxy.type   ${1}
    Create WebDriver   Firefox   firefox_profile=${profile}
    Go To    http://internal.example.com
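To run it (assuming the suite is saved as socks_demo.robot and the SSH tunnel from the example is already up):

robot socks_demo.robot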

I published the following diary on isc.sans.org: “Diverting built-in features for the bad“.

Sometimes you may find very small pieces of malicious code. Yesterday, I caught this very small Javascript sample with only 2 lines of code… [Read more]

 

[The post [SANS ISC] Diverting built-in features for the bad has been first published on /dev/random]

March 29, 2017

I published the following diary on isc.sans.org: “Logical & Physical Security Correlation“.

Today, I would like to review an example of how we can improve our daily security operations or, for our users, how to help in detecting suspicious content. Last week, I received the following email in my corporate mailbox. The mail is written in French but easy to understand: it is a notification regarding a failed delivery (they pretended that nobody was present at the delivery address to pick up the goods)… [Read more]

[The post [SANS ISC] Logical & Physical Security Correlation has been first published on /dev/random]

March 28, 2017

We just released Oracle Linux 6 update 9. The channels are on ULN and on our yum repo. The ISOs are available for download through MOS and in the next few days also on the software delivery cloud page, as customary. The release notes with changes are published and so on.

One thing we discovered during testing of OL6.9 was that a recent change in "upstream" glibc can cause memory corruption resulting in a database start-up failure every now and then.

Since we caught this prior to release, we have, of course, fixed the bug.

The following code change introduced the bug (glibc-rh1012343.patch):

 	
	     char newmode[modelen + 2];
	  -  memcpy (mempcpy (newmode, mode, modelen), "c", 2);
	  +  memcpy (mempcpy (newmode, mode, modelen), "ce", 2);
	     FILE *result = fopen (file, newmode);

As you can see, someone added e to newmode (changing "c" to "ce") but forgot to increase the buffer size and the copy length from 2 to 3, so the terminating null character never ends up at the end.
The correct patch that we have in glibc as part of OL6.9 is:
	-  char newmode[modelen + 2];
	-  memcpy (mempcpy (newmode, mode, modelen), "ce", 2);
	+  char newmode[modelen + 3];
	+  memcpy (mempcpy (newmode, mode, modelen), "ce", 3);
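To see why the missing terminator matters, here is a small stand-alone illustration (my own example, not the actual glibc code): the literal "ce" occupies three bytes including its '\0', so copying only two of them leaves the mode string unterminated.

/* Minimal illustration of the off-by-one; not the actual glibc code. */
#define _GNU_SOURCE           /* for mempcpy() */
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *mode = "r";   /* mode string passed in by the caller */
    size_t modelen = strlen(mode);

    char good[modelen + 3];   /* room for mode + 'c' + 'e' + '\0' */
    memcpy(mempcpy(good, mode, modelen), "ce", 3);
    printf("fixed mode string: \"%s\"\n", good);   /* prints "rce" */

    /* The buggy variant would have been:
     *   char bad[modelen + 2];
     *   memcpy(mempcpy(bad, mode, modelen), "ce", 2);
     * which leaves 'bad' without a terminating '\0', so using it as a
     * C string (e.g. passing it to fopen()) is undefined behaviour. */
    return 0;
}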

The Oracle bug id is 25609196. The patch for this is in the glibc src rpm. The customer symptom would be a failed start of the database because of fopen() failing.
Something like this:
  Wed Mar 22 17:19:51 2017
  ORA-00210: cannot open the specified control file
  ORA-00202: control file:
  '/opt/oracle/oltest/.srchome/single-database/nas/12.1.0.2.0-8192-72G/control_001'
  ORA-27054: NFS file system where the file is created or resides is
  not mounted with correct options
  Linux-x86_64 Error: 13: Permission denied
  Additional information: 2
  ORA-205 signalled during: ALTER DATABASE   MOUNT...
  Shutting down instance (abort)


March 27, 2017

A new release is now available for the cvechecker application. This is a stupid yet important bugfix release: the 3.7 release saw all newly published CVEs as already known, so it did not add them to the database. As a result, systems would never be checked against the new CVEs.

This Thursday, April 20, 2017 at 7pm, the 58th Mons session of the Belgian Jeudis du Libre will take place.

The topic of this session: configuration and cloud management with SaltStack

Theme: sysadmin|development

Audience: sysadmins|developers|companies|students

Speaker: Sébastien Wains (ETNIC)

Venue of this session: HEPH Condorcet, Chemin du Champ de Mars, 15 – 7000 Mons – Auditorium 2 (G01), located on the ground floor (see this map on the Openstreetmap website; NOTE: the entrance is barely visible from the main road, it is located in the corner formed by a very large parking lot).

Attendance is free and only requires registering by name, preferably in advance, or at the entrance of the session. Please indicate your intention by signing up via the page http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons are also supported by our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, feel free to check the agenda and to subscribe to the mailing list in order to systematically receive the announcements.

As a reminder, the Jeudis du Libre aim to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organized on the premises of, and in collaboration with, Mons colleges and university faculties involved in the training of computer scientists (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description: In the cloud era, automation needs keep growing. The number of configuration items to manage increases, without staffing following that trend. Beyond this aspect, new practices and new needs also have to be met. The world of the system administrator is in (r)evolution.

Tools have been created to support these technical evolutions; configuration management and tools like Puppet or Chef quickly come to mind.

Configuration management alone is however no longer enough: the configuration items you are in charge of must be controlled, verified and, where needed, corrected through an "orchestrator" tool.

SaltStack sets out to do all of that, and more. It differs from "classic" tools such as Puppet because it is built around an event bus, which opens the door to endless possibilities.

The presentation will go over the different Salt concepts and will try to answer the following questions:

  • is it useful?
  • is it easy to get started with?
  • is it easy to integrate into my IT environment?
  • is it extensible?

Short bio: Sébastien is an enthusiastic Linux user. He developed a passion for computing, and particularly for Linux, around the age of 16. Despite his degree in accounting and management, he started his career as a systems administrator, then managing a few servers and about thirty users. His pragmatic approach, backed by his management knowledge, allowed him to grow and to end up in charge of a Linux estate of nearly 260 servers used by several thousand users at ETNIC, a public-interest body active in the Wallonia-Brussels Federation.

March 26, 2017

Some stories start badly. Very badly. But, little by little, life finds its way through the worst situations to blossom into frail and marvelous buds.

This story begins on January 7, 2015. That day, I ran into Damien Van Achter, devastated by what was happening in Paris. He told me about deaths. I didn't understand. I then opened Twitter and discovered the scale of the attacks against Charlie Hebdo.

I didn't know it yet, but those attacks were going to change my life. For the better. Incredibly, wonderfully better.

In the moment, shocked in turn, I fired off an immediate, instinctive tweet. Being myself sometimes the author of bad-taste humour, I felt my values were under attack.

That tweet would be retweeted more than 10,000 times, published in the media, on television, in a printed book and, above all, on Facebook, where it was turned into an image by Pierre Berget, reshared and read by hundreds of thousands of people.

Among them, a young woman. Intrigued, she started reading my blog and sent me a voluntary payment. After running into me by chance at the opening of the Rue du Web coworking space, she contacted me on Facebook to discuss some of our respective ideas.

Two years later, on March 9, 2017, the day of my 36th birthday, this young woman, with whom I am madly in love, gave birth to Miniploum, my son. The most beautiful of birthday presents…

I smile, I savour life and I am happy. This happiness, this love that I am lucky enough to live, do I not owe it in part to the social networks that turned a despicable attack into a new life?

Let us remember that every tragedy, every catastrophe carries within it the seeds of future happiness. Happiness that may not always make the headlines, that sells less, but that is the foundation of each of our lives.

Let us also remember that tools, whatever they are, only become what we make of them. They are neither good nor bad. It is our responsibility to turn them into sources of happiness…

Did you enjoy reading this? Support the author on Tipeee, Patreon, Paypal or Liberapay. Even a symbolic donation makes all the difference! Then let's meet on Facebook, Medium, Twitter or Mastodon.

This text is published under the CC-By BE license.

March 24, 2017

I published the following diary on isc.sans.org: “Nicely Obfuscated JavaScript Sample“.

One of our readers sent us an interesting sample that was captured by his anti-spam. The suspicious email had an HTML file attached to it. Having a look at the file manually, it is heavily obfuscated and the payload is encoded in a single variable… [Read more]

[The post [SANS ISC] Nicely Obfuscated JavaScript Sample has been first published on /dev/random]

Among the problems we’ll face is that we want asynchronous APIs that are undoable and that we want to switch to read only, undoable editing, non-undoable editing and that QML doesn’t really work well with QFuture. At least not yet. We want an interface that is easy to talk with from QML. Yet we want to switch between complicated behaviors.

We will also want synchronous mode and asynchronous mode. Because I just invented that requirement out of thin air.

Ok, first the “design”. We see a lot of behaviors, for something that can do something. The behaviors will perform for that something, the actions it can do. That is the strategy design pattern, then. It’s the one about ducks and wing fly behavior and rocket propelled fly behavior and the ostrich that has a can’t fly behavior. For undo and redo, we have the command pattern. We have this neat thing in Qt for that. We’ll use it. We don’t reinvent the wheel. Reinventing the wheel is stupid.

Let’s create the duck. I mean, the thing-editor as I will use “Thing” for the thing that is being edited. We want copy (sync is sufficient), paste (must be aysnc), and edit (must be async). We could also have insert and delete, but those APIs would be just like edit. Paste is usually similar to insert, of course. Except that it can be a combined delete and insert when overwriting content. The command pattern allows you to make such combinations. Not the purpose of this example, though.

Enough explanation. Let’s start! The ThingEditor, is like the flying Duck in strategy. This is going to be more or less the API that we will present to the QML world. It could be your ViewModel, for example (ie. you could let your ThingViewModel subclass ThingEditor).

class ThingEditor : public QObject
{
    Q_OBJECT

    Q_PROPERTY ( ThingEditingBehavior* editingBehavior READ editingBehavior
                 WRITE setEditingBehavior NOTIFY editingBehaviorChanged )
    Q_PROPERTY ( Thing* thing READ thing WRITE setThing NOTIFY thingChanged )

public:
    explicit ThingEditor( QSharedPointer<Thing> &a_thing,
            ThingEditingBehavior *a_editBehavior,
            QObject *a_parent = nullptr );

    explicit ThingEditor( QObject *a_parent = nullptr );

    Thing* thing() const { return m_thing.data(); }
    virtual void setThing( QSharedPointer<Thing> &a_thing );
    virtual void setThing( Thing *a_thing );

    ThingEditingBehavior* editingBehavior() const { return m_editingBehavior.data(); }
    virtual void setEditingBehavior ( ThingEditingBehavior *a_editingBehavior );

    Q_INVOKABLE virtual void copyCurrentToClipboard ( );
    Q_INVOKABLE virtual void editCurrentAsync( const QString &a_value );
    Q_INVOKABLE virtual void pasteCurrentFromClipboardAsync( );

signals:
    void editingBehaviorChanged ();
    void thingChanged();
    void editCurrentFinished( EditCurrentCommand *a_command );
    void pasteCurrentFromClipboardFinished( EditCurrentCommand *a_command );

private slots:
    void onEditCurrentFinished();
    void onPasteCurrentFromClipboardFinished();

private:
    QScopedPointer<ThingEditingBehavior> m_editingBehavior;
    QSharedPointer<Thing> m_thing;
    QList<QFutureWatcher<EditCurrentCommand*> *> m_editCurrentFutureWatchers;
    QList<QFutureWatcher<EditCurrentCommand*> *> m_pasteCurrentFromClipboardFutureWatchers;
};

For the implementation of this class, I’ll only provide the non-obvious pieces. I’m sure you can do that setThing, setEditingBehavior and the constructor yourself. I’m also providing it only once, and also only for the EditCurrentCommand. The one about paste is going to be exactly the same.

void ThingEditor::copyCurrentToClipboard ( )
{
    m_editingBehavior->copyCurrentToClipboard( );
}

void ThingEditor::onEditCurrentFinished( )
{
    QFutureWatcher<EditCurrentCommand*> *resultWatcher
            = static_cast<QFutureWatcher<EditCurrentCommand*>*> ( sender() );
    emit editCurrentFinished ( resultWatcher->result() );
    if (m_editCurrentFutureWatchers.contains( resultWatcher )) {
        m_editCurrentFutureWatchers.removeAll( resultWatcher );
    }
    delete resultWatcher;
}

void ThingEditor::editCurrentAsync( const QString &a_value )
{
    QFutureWatcher<EditCurrentCommand*> *resultWatcher
            = new QFutureWatcher<EditCurrentCommand*>();
    connect ( resultWatcher, &QFutureWatcher<EditCurrentCommand*>::finished,
              this, &ThingEditor::onEditCurrentFinished, Qt::QueuedConnection );
    resultWatcher->setFuture ( m_editingBehavior->editCurrentAsync( a_value ) );
    m_editCurrentFutureWatchers.append ( resultWatcher );
}

For QUndo we’ll need a QUndoCommand. For each undoable action we indeed need to make such a command. You could add more state and pass it to the constructor. It’s common, for example, to pass Thing, or the ThingEditor or the behavior (this is why I used QSharedPointer for those: as long as your command lives in the stack, you’ll need it to hold a reference to that state).

class EditCurrentCommand: public QUndoCommand
{
public:
    explicit EditCurrentCommand( const QString &a_value,
                                 QUndoCommand *a_parent = nullptr )
        : QUndoCommand ( a_parent )
        , m_value ( a_value ) { }
    void redo() Q_DECL_OVERRIDE {
       // Perform action goes here
    }
    void undo() Q_DECL_OVERRIDE {
      // Undo what got performed goes here
    }
private:
    const QString m_value;  // stored by value so the command can't dangle
};

You can (and probably should) also make this one abstract (and/or a so called pure interface), as you’ll usually want many implementations of this one (one for every kind of editing behavior). Note that it leaks the QUndoCommand instances unless you handle them (ie. storing them in a QUndoStack). That in itself is a good reason to keep it abstract.

class ThingEditingBehavior : public QObject
{
    Q_OBJECT

    Q_PROPERTY ( ThingEditor* editor READ editor WRITE setEditor NOTIFY editorChanged )
    Q_PROPERTY ( Thing* thing READ thing NOTIFY thingChanged )

public:
    explicit ThingEditingBehavior( ThingEditor *a_editor,
                                   QObject *a_parent = nullptr )
        : QObject ( a_parent )
        , m_editor ( a_editor ) { }

    explicit ThingEditingBehavior( QObject *a_parent = nullptr )
        : QObject ( a_parent ) { }

    ThingEditor* editor() const { return m_editor.data(); }
    virtual void setEditor( ThingEditor *a_editor );
    Thing* thing() const;

    virtual void copyCurrentToClipboard ( );
    virtual QFuture<EditCurrentCommand*> editCurrentAsync( const QString &a_value, bool a_exec = true );
    virtual QFuture<EditCurrentCommand*> pasteCurrentFromClipboardAsync( bool a_exec = true );

protected:
    virtual EditCurrentCommand* editCurrentSync( const QString &a_value, bool a_exec = true );
    virtual EditCurrentCommand* pasteCurrentFromClipboardSync( bool a_exec = true );

signals:
    void editorChanged();
    void thingChanged();

private:
    QPointer<ThingEditor> m_editor;
    bool m_synchronous = true;
};

That setEditor, the constructor, etc: these are too obvious to write here. Here are the non-obvious ones:

void ThingEditingBehavior::copyCurrentToClipboard ( )
{
    // Copying the current content to the clipboard goes here.
}

EditCurrentCommand* ThingEditingBehavior::editCurrentSync( const QString &a_value, bool a_exec )
{
    EditCurrentCommand *ret = new EditCurrentCommand ( a_value );
    if ( a_exec )
        ret->redo();
    return ret;
}

QFuture<EditCurrentCommand*> ThingEditingBehavior::editCurrentAsync( const QString &a_value, bool a_exec )
{
    QFuture<EditCurrentCommand*> resultFuture =
            QtConcurrent::run( QThreadPool::globalInstance(), this,
                               &ThingEditingBehavior::editCurrentSync,
                               a_value, a_exec );
    if (m_synchronous)
        resultFuture.waitForFinished();
    return resultFuture;
}

And now we can make the whole thing undoable by making an undoable editing behavior. I’ll leave a non-undoable editing behavior as an exercise to the reader (ie. just perform redo() on the QUndoCommand, don’t store it in the QUndoStack, and immediately delete or cmd->deleteLater() the instance).

Note that if m_synchronous is false, that (all access to) m_undoStack, and the undo and redo methods of your QUndoCommands, must be (made) thread-safe. The thread-safety is not the purpose of this example, though.

class UndoableThingEditingBehavior : public ThingEditingBehavior
{
    Q_OBJECT
public:
    explicit UndoableThingEditingBehavior( ThingEditor *a_editor,
                                           QObject *a_parent = nullptr );
protected:
    EditCurrentCommand* editCurrentSync( const QString &a_value, bool a_exec = true ) Q_DECL_OVERRIDE;
    EditCurrentCommand* pasteCurrentFromClipboardSync( bool a_exec = true ) Q_DECL_OVERRIDE;
private:
    QScopedPointer<QUndoStack> m_undoStack;
};

EditCurrentCommand* UndoableThingEditingBehavior::editCurrentSync( const QString &a_value, bool a_exec )
{
    Q_UNUSED(a_exec)
    EditCurrentCommand *undoable = ThingEditingBehavior::editCurrentSync( a_value, false );
    m_undoStack->push( undoable );
    return undoable;
}

EditCurrentCommand* UndoableThingEditingBehavior::pasteCurrentFromClipboardSync( bool a_exec )
{
    Q_UNUSED(a_exec)
    EditCurrentCommand *undoable = ThingEditingBehavior::pasteCurrentFromClipboardSync( false );
    m_undoStack->push( undoable );
    return undoable;
}
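To wire these classes together, a rough usage sketch (my own illustration using the class names above, not code from the real project) could look like this:

// Create an editor with an undoable behavior and trigger an asynchronous edit.
QSharedPointer<Thing> thing( new Thing() );

ThingEditor *editor = new ThingEditor();
editor->setEditingBehavior( new UndoableThingEditingBehavior( editor ) );
editor->setThing( thing );

QObject::connect( editor, &ThingEditor::editCurrentFinished,
                  []( EditCurrentCommand *a_command ) {
    // By now the command has been executed and pushed onto the undo stack.
    Q_UNUSED( a_command )
} );

editor->editCurrentAsync( QStringLiteral( "new value" ) );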

March 23, 2017

I’m just back from Heidelberg so here is the last wrap-up for the TROOPERS 2017 edition. This day was a little bit more difficult due to the fatigue and the social event of yesterday. That’s why the wrap-up will be shorter…  The second keynote was presented by Mara Tam: “Magical thinking … and how to thwart it”. Mara is an advisor to executive agencies on information security issues. Her job focuses on the technical and strategic implications of regulatory and policy activity. In my opinion, the keynote topic remained classic. Mara explained her vision of the problems that the infosec community is facing when information must be exchanged with “the outside world”. Personally, I found that Mara’s slides were difficult to understand. 

For the first presentation of the day, I stayed in the main room – the offensive track – to follow “How we hacked distributed configuration management systems” by Francis Alexander and Bharadwaj Machhiraju. As we already saw yesterday with SDN, automation is a hot topic and companies tend to install specific tools to automate configuration tasks. Such software is called DCM or “Distributed Configuration Management”. These tools simplify the maintenance of complex infrastructures, synchronization and service discovery. But, as software, they also have bugs or vulnerabilities and are a nice target for a pentester. They’re real goldmines because they contain not only configuration files; if the attacker can change them, it’s even more dangerous. DCMs can be agent-less or agent-based. The second case was the target of Francis & Bharadwaj. They reviewed three tools:

  • HashiCorp Consul
  • Apache Zookeeper
  • CoreOS etcd

For each tool, they explained the vulnerability they found and how it was exploited, up to remote code execution. The crazy part is that none of them have authentication enabled by default! To automate the search for DCMs and their exploitation, they developed a specific tool called Garfield. Nice demos were performed during the talk, with many remote shells and calc.exe spawned here and there.

The next talk was my favourite of the day. It was about a tool called Ruler, used to pivot through Exchange servers. Etienne Stalmans presented his research on Microsoft Exchange and how he reverse engineered the protocol. The goal is simply to get a shell through Exchange. The classic phases of an attack were reviewed:

  • Reconnaissance
  • Exploitation
  • Persistence (always better!)

Basically, Exchange is a mail server, but many more features are available: calendar, Lync, Skype, etc. Exchange must be able to serve local and remote users, so it exposes services on the Internet. How to identify companies that use an Exchange server, and how to find it? Simply thanks to the auto-discovery feature implemented by Microsoft. If your domain is company.com, Outlook will search for https://company.com/autodiscover/autodiscover.xml (plus other alternative URLs if this one isn’t useful). Etienne did some research and found that 10% of the Internet domains have this process enabled. After some triage, he found that approximately 26000 domains are linked to an Exchange server. Nice attack surface! The next step is to compromise at least one account. Here, classic methods can be used (brute-force, rogue wireless AP, phishing or dumps of leaked databases). The exploitation itself is performed by creating a rule that will execute a script. The rule looks like “when the word ‘pwned’ is present in the subject, start ‘badprogram.exe’”. A very nice finding is the way Windows converts UNC paths to WebDAV:

\\host.com@SSL\webdav\pew.zip\s.exe

will be converted to:

https://host.com/webdav/pew.zip

And Windows will even extract s.exe for you! Magic!

Etienne performed a nice demo of Ruler which automates all the process described above. Then, he demonstrated another tool called Linial which takes care of the persistence. To conclude, Etienne explained briefly how to harden Exchange to prevent this kind of attack. Outlook 2016 blocks unsafe rules by default which is good. An alternative is to block WebDAV and use MFA.

After the lunch, Zoz came back with another funny presentation: “Data Demolition: Gone in 60 seconds!”. The idea is simple: when you throw away devices, you must be sure that they don’t contain any remaining critical data. Classic examples are hard drives and printers, but also highly mobile devices like drones. The talk was some kind of “Myth Busters” show for hard drives! Different techniques were tested by Zoz:

  • Thermal
  • Kinetic
  • Electric

For each of them, different scenarios were presented and the results demonstrated with small videos. Crazy!

Destroying HD's

What was interesting to notice is that most techniques failed because the disk platters could still be “cleaned” (e.g. by removing the dust) and might become readable again using forensic techniques. For your information, the most effective techniques were: plasma cutter or oxygen injector, nailguns and HV power spike. Just one piece of advice: don’t try this at home!

There was a surprise talk scheduled. The time slot was offered to The Grugq. A renowned security researcher, he presented “Surprise Bitches, Enterprise Security Lessons From Terrorists”. He talked about APTs, but not as a buzzword: he gave his own view of how APTs work. For him, the term “APT” was invented by Americans and means “Asia Pacific Threat“.

I finished the day back in the offensive track. Mike Ossmann and Dominic Spill from Great Scott Gadgets presented “Exploring the Infrared, part 2“. The first part was presented at ShmooCon. Fortunately, they started with a quick recap: what infrared light is and its applications (remote controls, sensors, communications, heating systems, …). The talk was a suite of nice demos where they used replay attack techniques to abuse tools/toys that work with IR, like a Duck Hunt game or a shark remote controller. The most impressive one was the replay attack against the Bosch audio transmitter. This very expensive device is used at big events for instant translations. They were able to reverse engineer the protocol and to play a song through the device… You can imagine the impact of such an attack at a live event (e.g. switching voices, replacing translations with others, etc). They have many more tests in the pipeline.

IR Fun

The last talk was “Blinded Random Block Corruption” by Rodrigo Branco. Rodrigo is a regular speaker at TROOPERS and always provides good content. His presentation was very impressive. The idea is to evaluate the problems around memory encryption: how and why to use it? Physical access to the victim is the worst case: an attacker has access to everything. You implemented full-disk encryption? Cool, but a lot of information sits in memory while the system is running. Access to memory can be gained via Firewire, PCIe, PCMCIA and the new USB standards. What about memory encryption? It’s good, but encryption alone is not enough: controls must be implemented. The attack explained by Rodrigo is called “BRBC” or “Blinded Random Block Corruption“. After giving the details, a nice demo was shown: how to become root on a locked system. Access to the memory is easier in virtualized (or cloud) environments. Indeed, many hypervisors allow enabling a “debug” feature per VM. Once activated, the administrator has write access to the memory. By using a debugger, you can use the BRBC attack and bypass the login procedure. The video demo was impressive.

So, the TROOPERS 10th anniversary edition is over. I spent four awesome days attending nice talks and meeting a lot of friends (old & new). I learned a lot and my todo-list has already expanded.

 

[The post TROOPERS 2017 Day #4 Wrap-Up has been first published on /dev/random]

The Drupal community is committed to welcome and accept all people. That includes a commitment to not discriminate against anyone based on their heritage or culture, their sexual orientation, their gender identity, and more. Being diverse has strength and as such we work hard to foster a culture of open-mindedness toward differences.

A few weeks ago, I privately asked Larry Garfield, a prominent Drupal contributor, to leave the Drupal project. I did this because it came to my attention that he holds views that are in opposition with the values of the Drupal project.

I had hoped to avoid discussing this decision publicly out of respect for Larry's private life, but now that Larry has written about it on his blog and it is being discussed publicly, I believe I have no choice but to respond on behalf of the Drupal project.

It's not for me to judge the choices anyone makes in their private life or what beliefs they subscribe to. I also don't take any offense to the role-playing activities or sexual preferences of Larry's alternative lifestyle.

What makes this difficult to discuss, is that it is not for me to share any of the confidential information that I've received, so I won't point out the omissions in Larry's blog post. However, I can tell you that those who have reviewed Larry's writing, including me, suffered from varying degrees of shock and concern.

In the end, I fundamentally believe that all people are created equally. This belief has shaped the values that the Drupal project has held since its early days. I cannot in good faith support someone who actively promotes a philosophy that is contrary to this. The Gorean philosophy promoted by Larry is based on the principle that women are evolutionarily predisposed to serve men and that the natural order is for men to dominate and lead.

While the decision was unpleasant, the choice was clear. I remain steadfast in my obligation to protect the shared values of the Drupal project. This is unpleasant because I appreciate Larry's many contributions to Drupal, because this risks setting a complicated precedent, and because it involves a friend's personal life. The matter is further complicated by the fact that this information was shared by others in a manner I don't find acceptable either and will be dealt with separately.

However, when a highly-visible community member's private views become public, controversial, and disruptive for the project, I must consider the impact that his words and actions have on others and the project itself. In this case, Larry has entwined his private and professional online identities in such a way that it blurs the lines with the Drupal project. Ultimately, I can't get past the fundamental misalignment of values.

Collectively, we work hard to ensure that Drupal has a culture of diversity and inclusion. Our goal is not just to have a variety of different people within our community, but to foster an environment of connection, participation and respect. We have a lot of work to do on this and we can't afford to ignore discrepancies between the espoused views of those in leadership roles and the values of our culture. It's my opinion that any association with Larry's belief system is inconsistent with our project's goals.

It is my responsibility and obligation to act in the best interest of the project at large and to uphold our values. Decisions like this are unpleasant and disruptive, but important. It is moments like this that test our commitment to our values. We must stand up and act in ways that demonstrate these values. For these reasons, I'm asking Larry to resign from the Drupal project.

Update March 24th

After reading hundreds of responses, I wanted to make a clarifying statement. First, I made the decision to ask Larry not to participate in the Drupal project, and separately, the Drupal Association made a decision not to invite Larry to speak at DrupalCon Baltimore or serve as a track chair for it. I can only speak to my decision-making here. It's worth noting that I recused myself from the Drupal Association's decision.

Many have rightfully stated that I haven't made a clear case for the decision. When one side chooses to make their case public it creates an imbalance of information. Only knowing one side skews public opinion heavily towards the publicized viewpoint. While I will not share the evidence that I believe would validate the decision that I made for reasons of liability and confidentiality, I will say that I did not make the decision based on the information or beliefs conveyed in Larry's blog post.

Larry accurately pointed out that some of the evidence was brought forth in a manner that is not in alignment with Drupal values. This manner is being addressed with the CWG. While it's disheartening that some of our community members chose the approach they did to bring the matter regarding Larry forward, that does not change the underlying facts that informed my decision.

Update March 31st

Megan Sanicki, Drupal Association Executive Director, and I posted a follow-up statement on Drupal.org. As with any such decision, and especially due to the circumstances of this one, there has been controversy, misinformation and rumors, as well as healthy conversation and debate. Many people feel hurt, worried, and confused. The fact that this matter became very public and divisive greatly saddens all of us involved, especially as we can see the pain it has caused many. We want to strongly emphasize that Drupal is an open-minded and inclusive community, and we welcome people of all backgrounds. Our community's diversity is something to cherish and celebrate as well as protect. We apologize for any anxiety we caused you and reiterate that our decision was not based on anyone's sexual practices. It will take time to heal, but we want to make a start by providing insight into our decision-making, answering questions, and placing a call for improvements to our governance, conflict-resolution processes, and communication.

Update April 9th

I posted an apology for causing grief and uncertainty, especially to those in the BDSM and kink communities who felt targeted. This incident was about specific actions of a single member of our community. This was never about sexual practices or kinks.

Update April 10th

It is clear that the current governance structure of Drupal, which relies on me being the ultimate decision maker and spokesperson for difficult governance and community membership decisions, has reached its limits. It doesn't work for many in our community -- and frankly, it does not work for me either. Community membership decisions shouldn't be determined by me or by me alone. I announced a framework for how we can evolve our governance model.

Update April 16th

The Community Working Group published a more complete explanation for what happened to address some of the misinformation and speculation: "We strongly reject any suggestion or assertion that Larry was asked to leave solely on the basis of his personal beliefs or what he does in his private life. If any of us had any reason to believe that was the case, we would have resigned immediately from the CWG."

(Comments on this post are allowed but for obvious reasons will be moderated.)

Perfection has been reached not when there is nothing left to add, but when there is nothing left to take away.

March 22, 2017

The third day is already over! Today the regular talks were scheduled, split across three tracks: offensive, defensive and a specific one dedicated to SAP. The first slot at 09:00 was, as usual, a keynote. Enno Rey presented ten years of TROOPERS. What happened during all those editions? The main ideas behind TROOPERS have always been that everybody must learn something by attending the conference but… with fun and many interactions with other peers! The goal was to mix infosec people coming from different horizons. And, of course, to use the stuff learned to contribute back to the community. Things changed a lot during these ten years; some are better while others remain the same (or worse?). Enno reviewed all the keynotes presented and, for each of them, gave some comments – sometimes funny. The conference itself also evolved with a SAP track, the Telco Sec Day, the NGI track and the move to Heidelberg. Some famous vulnerabilities were covered like MS08-067 or the RSA hack. What we’ve seen:

  • A move from theory to practice
  • Some things/crap that stay the same (same shit, different day)
  • A growing importance of the socio-economic context around security.

Has progress been made? Enno reviewed infosec in three dimensions:

  • As a (scientific) discipline: From theory to practice. So yes, progress has been made
  • In enterprise environments: Some issues on endpoints have been fixed, but there is a fact: Windows security has become much better… and now they use Android :). Security in the datacenter also improved, but now there is the cloud. 🙂
  • As a constituent of our society: Complexity is ever growing.

Are automated systems the solution? There are still technical and human factors that are important: “Errare Humanum Est”, said Enno. Information security is still a work in progress and we have to work for it. Again, the example of the IoT crap was used. Education is key. So, yes, the TROOPERS motto is still valid: “Make the world a better place”. Based on the applause from the audience, this was a great keynote by a visibly moved Enno!

I started my day in the defensive track. Veronica Valeros presented “Hunting Them All”. Why do we need hunting capabilities? A definition of threat hunting is “to help in spotting attacks that would pass our existing controls and cause more damage to the business”.

Veronica's Daily Job: Hunting!

People are constantly hit by threats (spam, phishing, malware, trojans, RATs, … you name them). Being always online also increases our attack surface. Attacks are very lucrative and attract a lot of bad guys. Sometimes, malware may change things. A good example comes with the ransomware plague: it made people aware that backups are critical. Threat hunting is not easy because, when you are sitting on your network, you don’t always know what to search for. And malicious activity does not always rely on top-notch technologies. Attackers are not all ‘l33t’. They just want to bypass controls and make their malicious code run. To achieve this, they have a lot of time, they abuse the weakest link and they hide in plain sight. To sum up: they follow the “least effort rule”. Which sounds legit, right? Veronica has access to a lot of data. Her team is performing hunting across hundreds of networks, millions of users and billions of web requests. How to process this? Machine learning came to the rescue. And Veronica’s job is to check and validate the output of the machine learning process they developed. But it’s not a magic tool that will solve all issues. The focus must be on what’s important: from 10B requests/day down to 20K incidents/day using anomaly detection, trust modelling, event classification, entity & user modelling. Veronica gave an example. The Sality botnet has been active since 2003 and is still present. IOCs exist but they generate a lot of false positives. Regular expressions are not flexible enough. Can we create algorithms to automatically track malicious behaviour? For some threats it works, for others it doesn’t. Veronica’s team is tracking 200+ malicious behaviours, 60% of which are tracked automatically. “Let the machine do the machine work”. As a good example, Veronica explained how referrers can be the source of important data leaks from corporate networks.

My next choice was “Securing Network Automation” by Ivan Pepelnjak. In a previous talk, Ivan explained why Software Defined Networking failed, but many vendors have improved since, which is good. So today, his new topic was about ways to improve automation from a security perspective. Indeed, we must automate as much as possible, but how to make it reliable and secure? If a process is well defined, it can be automated, as Ivan said. Why automate? From a management perspective, the same reasons always come to the table: increase flexibility while reducing costs, deploy faster and compete with public cloud offerings. About the cloud, do we need to buy or to build? In all cases, you’ll have to build if you want to automate. The real challenge is to move quickly from development to test and production. To achieve this, instead of editing a device configuration live, create configuration text files and push them to a GitLab server. Then you can virtualise a lab, pull the configs and test them. Did it work? Then merge with the main branch. A lot can be automated: device provisioning, VLAN management, ACLs, firewall rules. But the challenge is to have strong controls to prevent issues upfront and troubleshoot if needed. A nice quote was:

“To make a mistake is human, to automatically deploy that mistake to all the servers is DevOps”

Remember the recent Amazon outage story? Be prepared to face issues. To automate, you need tools, and such tools must be secure. An example was given with Ansible. The issue is that it gathers information from untrusted sources:

  • Scripts are executed on managed devices: what about data injection?
  • Custom scripts are included in data gathering: More data injection?
  • Returned data are not properly parsed: Risk of privilege escalation?

The usual controls to put in place are:

  • OOB management
  • Management network / VR
  • Limit access to the management hosts
  • SSH-based access
  • Use SSH keys
  • RBAC (commit scripts)

Keep in mind: your network is critical, so your network automation (network programming) code is too. Don’t write the code yourself (hire a skilled Python programmer for this task), but you must know what the code should do. Test, test, test and, once done, test again. As an example of a control, you can perform a traceroute before and after the change and compare the paths (a small sketch of this idea follows below). Ivan published a nice list of requirements to give your vendor when looking for a new network device. If your current vendor cannot provide basic requirements like an API, change it!
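For illustration only, here is a minimal Python sketch of that before/after path check. It assumes a Linux host with the traceroute binary installed; the destination address is a placeholder.

#!/usr/bin/env python3
# Minimal sketch: compare the network path to a destination before and after a change.
# Assumes a Linux host with the "traceroute" binary installed; the target is a placeholder.
import subprocess
import sys

def trace(host):
    """Return the list of hop addresses reported by 'traceroute -n'."""
    out = subprocess.run(["traceroute", "-n", host],
                         capture_output=True, text=True, check=True).stdout
    hops = []
    for line in out.splitlines()[1:]:      # skip the header line
        fields = line.split()
        if len(fields) > 1:
            hops.append(fields[1])         # second field is the hop address (or '*')
    return hops

if __name__ == "__main__":
    host = sys.argv[1] if len(sys.argv) > 1 else "192.0.2.10"   # placeholder destination
    before = trace(host)
    input("Apply the change, then press Enter to re-check... ")
    after = trace(host)
    if before == after:
        print("Path unchanged:", " -> ".join(after))
    else:
        print("PATH CHANGED!")
        print("  before:", " -> ".join(before))
        print("  after :", " -> ".join(after))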

After the lunch, back to the defence & management track with “Vox Ex Machina” by Graeme Neilson. The title looked interesting: would it be more offensive or defensive content? Voice recognition is used more and more (examples: Cortana, Siri, etc.) but also on non-IT systems like banking or support systems: “Press 1 for X or press 2 for Y”. But is it secure? Voice recognition is not new hype. There are references to the “Voder” as early as 1939. Another system was the Vocoder a few years later. Voice recognition is based on two methods: phrase dependent or phrase independent (this talk focused on the first method). The process is split into three phases:

  • Enrolment: you record a phrase x times. Each repetition must be different and the analysis is stored as a voice print.
  • Authentication: based on feature extraction with MFCC (Mel-Frequency Cepstral Coefficients).
  • Confidence: returned as a percentage.

The next part of the talk focused on the tool developed by Graeme. Written in Python, it tests a remote API. The different supported attacks are: replay, brute-force and voice print fixation. An important remark made by Graeme: even if some services pretend otherwise, your voice is NOT a key! Every time you pronounce a word, the generated file is different. That’s why brute-forcing is completely different with voice recognition: you know when you are getting closer thanks to the returned confidence (in %), instead of a password comparison which returns “0” or “1”. The tool developed by Graeme is available here (or will be soon after the conference).
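To make that difference concrete, here is a toy Python sketch (not Graeme’s tool, and not a voice system at all): it contrasts a binary check with a scoring check on plain strings, to show why a confidence value leaks enough information for hill-climbing.

#!/usr/bin/env python3
# Toy illustration only: why a confidence score leaks more than a yes/no check.
# This is NOT the tool presented in the talk; it works on plain strings.
import string

SECRET = "open sesame"                       # stand-in for the enrolled "voice print"

def binary_check(guess):
    """Classic password compare: one bit of information per attempt."""
    return guess == SECRET

def confidence_check(guess):
    """Returns a percentage, like a voice-auth API: partial matches leak progress."""
    matches = sum(1 for a, b in zip(guess, SECRET) if a == b)
    return 100.0 * matches / len(SECRET)

def hill_climb(alphabet=string.ascii_lowercase + " "):
    """Fix one position at a time, keeping whatever maximises the confidence."""
    guess = [alphabet[0]] * len(SECRET)
    for pos in range(len(SECRET)):
        guess[pos] = max(alphabet,
                         key=lambda c: confidence_check("".join(guess[:pos] + [c] + guess[pos + 1:])))
    return "".join(guess)

if __name__ == "__main__":
    recovered = hill_climb()
    print("recovered:", repr(recovered), "| binary check says:", binary_check(recovered))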

The next talk was presented by Matt Graeber and Casey Smith: “Architecting a Modern Defense using Device Guard”. The talk was scheduled in the defensive track but it covered both worlds. The question that interests many people is: is whitelisting a good solution? Bad guys (and red teams) are trying to find bypass strategies. What are the mitigations available for the blue teams? The attacker’s goal is clear: execute HIS code on YOUR computer. There are two types of attackers: the ones who know what controls you have in place (enlightened) and the novices who aren’t equipped to handle your controls (ex: the massive phishing campaigns dropping Office documents with malicious macros). Device Guard offers the following protections:

  • Prevents unauthorised code execution
  • Provides a restricted scripting environment
  • Prevents policy tampering thanks to virtualisation-based security

The speakers were honest: Device Guard does NOT protect against all threats, but it increases the noise an attacker generates (evidence). Bypasses are possible. How?

  • Policy misconfiguration
  • Misplaced trust
  • Enlightened scripting environments
  • Exploitation of vulnerable code
  • Implementation flaws

The way you deploy your policy depends on your environment, but also on the security ecosystem we live in. Would you trust all code signed by Google? Probably yes. Do you trust any certificate issued by Symantec? Probably not. The next part of the talk was a review of the different bypass techniques (offensive) and then some countermeasures (defensive). A nice demo was performed with PowerShell to bypass constrained language mode. Keep in mind that some allowed applications might be vulnerable. Do you remember the VirtualBox signed driver vulnerability? Besides those problems, Device Guard offers many advantages:

  • Uncomplicated deployment
  • Implicit DLL enforcement
  • Supported across the Windows ecosystem
  • Core system component
  • PowerShell integration

Conclusion: whitelisting is often a huge debate (pro/con). Despite the flaws, it forces the adversaries to reset their tactics. By doing this you disrupt the attackers’ economics: if it makes the system harder to compromise, it will cost them more time and money.

After the afternoon coffee break, I switched to the offensive track again to follow Florian Grunow and Niklaus Schuss who presented “Exploring North Korea’s Surveillance Technology”. I had no idea about the content of the talk but it was really interesting and an eye-opener! It’s a fact: if it’s locked down, it must be interesting. That’s why Florian and Niklaus performed research on the systems provided to DPRK citizens (“Democratic People’s Republic of Korea”). The research was based on papers published by others and on leaked devices / operating systems. They never went over there. The motivation behind the research was to get a clear view of the surveillance and censorship put in place by the government. It started with the Linux distribution called “Red Star OS”. It has been based on Fedora/KDE through multiple versions and looks like a modern Linux distribution but… First finding: the certificates installed in the browser all come from the Korean authorities. Also, some suspicious processes cannot be killed. Integrity checks are performed on system files and downloaded files are changed on the fly by the OS (example: files transferred via a USB storage device). The OS adds a watermark at the end of the file which helps to identify the computer that was used. If the file is transferred to another computer, a second watermark is added, etc. This is a nice method to track dissidents and to build a graph of relations between them. Note that this watermark is added only to data files and that it can easily be removed. An antivirus is installed but it can also be used to delete files based on their hash. Of course, the AV update servers are maintained by the government. After the desktop OS, the speakers reviewed some “features” of the “Woolim” tablet. This device is based on Android and does not have any connectivity onboard. You must use a specific USB dongle for this (provided by the government of course). When you try to open some files, you get a warning message “This is not signed file”. Indeed, the tablet can only work with files signed by the government or signed locally (based on RSA signatures). The goal, here again, is to prevent the distribution of media files. From a network perspective, there is no direct Internet access and all the traffic is routed through proxies. An interesting application running on the tablet is called “TraceViewer”. It takes a screenshot of the tablet at regular intervals. The user cannot delete the screenshots and random physical controls can be performed by the authorities to keep the pressure on citizens. This talk was really an eye-opener for me. Really crazy stuff!

Finally, my last choice was another defensive talk: “Arming Small Security Programs” by Matthew Domko. The idea is to generate a network baseline, exactly like we do for applications on Windows. For many organizations, the problem is detecting malicious activity on the network. An IDS quickly becomes less useful due to the sheer number of signatures and their limitations. Matthew’s idea was to:

  • Build a baseline (all IPs, all ports)
  • Write snort rules
  • Monitor
  • Profit

To achieve this, he used the tool Bro. Bro is some kind of Swiss army knife for IDS environments. Matthew made a quick introduction to the tool and, more precisely, focussed on the scripting capabilities of Bro. Logs produced by Bro are also easy to parse. The tool developed by Matthew implements a simple baseline script. It collects all connections to IP addresses / ports and logs what is NOT known. The tool is called Bropy and should be available soon after the conference. A nice demo was performed. I really liked the idea behind this tool, but it should be improved and features added before being used in big environments. I would recommend having a look at it if you need to build a network activity baseline!
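To give a rough idea of the concept (this is not Bropy itself), here is a minimal Python sketch that reads a Bro/Zeek ASCII conn.log, builds a baseline of (destination IP, destination port) pairs, and prints anything not seen before. The file names are placeholders.

#!/usr/bin/env python3
# Minimal baseline sketch, NOT Bropy: flag (destination IP, destination port) pairs
# that were not seen in a previously saved baseline, based on a Bro/Zeek ASCII conn.log.
import sys

def parse_conn_log(path):
    """Yield (resp_host, resp_port) tuples from a tab-separated conn.log."""
    fields = []
    with open(path) as fh:
        for line in fh:
            if line.startswith("#fields"):
                fields = line.rstrip("\n").split("\t")[1:]    # column names from the header
                continue
            if line.startswith("#") or not line.strip():
                continue
            cols = dict(zip(fields, line.rstrip("\n").split("\t")))
            yield cols["id.resp_h"], cols["id.resp_p"]

def main(conn_log, baseline_path="baseline.txt"):
    try:
        with open(baseline_path) as fh:
            baseline = {tuple(l.split()) for l in fh if l.strip()}
    except FileNotFoundError:
        baseline = set()
    new_pairs = set(parse_conn_log(conn_log)) - baseline
    for host, port in sorted(new_pairs):
        print("NOT IN BASELINE: {}:{}".format(host, port))
    with open(baseline_path, "a") as fh:                      # learn the new pairs
        for host, port in sorted(new_pairs):
            fh.write("{} {}\n".format(host, port))

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "conn.log")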

The day ended with the classic social event: local food, drinks and nice conversations with friends, which is priceless. I have to apologize for the delay in publishing this wrap-up. Complaints can be sent to Sn0rkY! 😉

[The post TROOPERS 2017 Day #3 Wrap-Up has been first published on /dev/random]

In my job at Acquia, I’ve been working almost exclusively on Drupal 8 core. In 2012–2013 I worked on authoring experience (in-place editing, CKEditor, and more). In 2014–2015, I worked on performance, cacheability, rendering and generally the stabilizing of Drupal 8. Drupal 8.0.0 shipped on November 19, 2015. And since then, I’ve spent most of my time on making Drupal 8 API-first: improving the RESTful Web Services support that Drupal 8 ships with, and in the process also strengthening the JSON API & GraphQL contributed modules.

In the process, I’ve learned a lot about the impact of past decisions (by myself and others) on backwards compatibility (BC). The benefit of BC is clear, but the burden of ensuring it can increase exponentially due to certain architectural decisions. I’ve been experiencing that first-hand, since I’m tasked with making Drupal 8’s REST support rock-solid, where I am seeing time and time again that “fixing bugs + improving DX” requires BC breaks. Tough decisions.

In this talk (which will be a core conversation at DrupalCon Baltimore), I analyzed the architectural patterns in Drupal 8, and provided suggestions on how to do better. I don’t have all the answers. But what matters most is not answers, but a critical mindset going forward that is consciously considering BC implications for every patch that goes into Drupal 8!


March 21, 2017

This is my wrap-up for the 2nd day of “NGI” at TROOPERS. My first choice for today was “Authenticate like a boss” by Pete Herzog. This talk was less technical than expected but interesting. It focussed on a complex problem: Identification. It’s not only relevant for users but for anything (a file, an IP address, an application, …). Pete started by providing a definition. Authentication is based on identification and authorisation. But identification can be easy to fake. A classic example is the hijacking of a domain name by sending a fax with a fake ID to the registrar – yes, some of them are still using fax machines! Identification is used at any time to ensure the identity of somebody to give access to something. It’s not only based on credentials or a certificate.

Identification is extremely important. You have to distinguish the good from the bad at any time. Not only people but files, IOCs, threat intelligence actors, etc. For files, metadata can help to identify them. Another example reported by Pete: the attribution of an attack. We cannot be 100% confident about the person or the group behind the attack. The next generation Internet needs more and more identification, especially with all those IoT devices deployed everywhere. We don’t even know what the device is doing. Often, the identification process is not successful. How many times did you say “hello” to somebody on the street or while driving who turned out not to be the person you thought? Why? Because we (as well as objects) are changing. We are getting older, wearing glasses, etc… Every interaction you have in a process increases your attack surface by the same amount as one vulnerability. What is more secure: letting a user choose his password or generating a strong one for him? He won’t remember the generated one and will write it down somewhere. In the same way, what’s best: a password or a certificate? An important concept explained by Pete is “intent”. The problem is to get a good idea of the intent (from 0 – none – to 100% – certain).

Example: if an attacker is filling your firewall state table, is it a DoS attack? If somebody performs a traceroute to your IP addresses, is it footprinting? Can a port scan automatically be categorized as hunting? And will a vulnerability scan immediately be followed by an exploitation attempt? Not always… It’s difficult to predict a specific action. To conclude, Pete mentioned machine learning as a tool that may help in deriving indicators of intent.

After an expected coffee break, I switched to the second track to follow “Introduction to Automotive ECU Research” by Dieter Spaar. ECU stands for “Electronic Control Unit”. It’s some kind of brain present in modern cars that helps to control the car’s behaviour and all its options. The idea of the research came after the problem that BMW faced with the unlocking of their cars. Dieter’s motivations were multiple: engine tuning, speedometer manipulation, ECU repair, information privacy (what data is stored by a car?), the “VW scandal” and eCall (emergency calls). Sometimes, some features are just a question of ECU configuration: they are present but not activated. Also, from a privacy point of view, what do infotainment systems collect from your paired phone? How much data is kept by your GPS? ECUs depend on the car model and options. In the picture below, yellow blocks are activated ECUs, the others (grey) are optional (this picture is taken from an Audi A3 schema):

Audi A3 ECU

Interaction with the ECU is performed via a bus. There are different bus systems: the best known is CAN (Controller Area Network), then MOST (Media Oriented System Transport), FlexRay, LIN (Local Interconnect Network), Ethernet or BroadR-Reach. Interesting fact: some BMW cars have an Ethernet port to speed up upgrades of the infotainment system (like GPS maps), as Ethernet provides more bandwidth to upload big files. ECU hardware is based on typical microcontrollers from Renesas, Freescale or Infineon. Infotainment systems run on ARM, sometimes x86, with QNX, Linux or Android. A special requirement is to provide a fast response time after power on. Dieter showed a lot of pictures of ECUs where you can easily identify the main components (radio, infotainment, telematics, etc). Many of them are manufactured by Peiker. This was a very quick introduction but it demonstrated that there is still room for plenty of research projects on cars. During the lunch break, I had an interesting chat with two people working at Audi. Security is clearly a hot topic for car manufacturers today!

For the next talk, I switched again to the other track and attended “PUF ’n’ Stuf” by Jacob Torrey & Anders Fogh. The idea behind this strange title was “getting the most out of the digital world through physical identities”. The title came from a US TV show popular in the 60’s. Today, within our ultra-connected digital world, we are moving our identity away from the physical world and it becomes difficult to authenticate somebody. We are losing the “physical” aspect. Humans can quickly spot an imposter just by having a look at a picture and after a simple conversation, even without personally knowing the person. But authenticating people via a simple login/password pair in the digital world is much harder. The idea of Jacob & Anders was to bring strong physical identification into the digital world. The concept is called “PUF” or “Physically Uncloneable Function”. To achieve this, they explained how to implement a challenge-response function for devices that should return responses that are as stable (non-volatile) as possible. This can be used to attest the execution state or generate device-specific data. They reviewed examples based on SRAM, EEPROM or CMOS/CCD sensors. The last example is interesting: the technique is called PRNU and can be used to uniquely identify image sensors. It is often used in forensic investigations to link a picture to a camera. You can see this PUF as a dual-factor authentication. But there are caveats, like a lack of proper entropy or PUF spoofing. An interesting idea, but not easy to implement in practical cases.

After the lunch, Stefan Kiese had a two-hour slot to present “The Hardware Striptease Club”. The idea of the presentation was to briefly introduce some components that we find today in our smart houses and see how to break them from a physical point of view. Stefan briefly explained the methodology to approach those devices. When you do this, never forget the impact: loss of revenue, theft of credentials, etc… or, worse, loss of life (pacemakers, cars). Some of the reviewed victims:

  • TP-Link NC250 (Smart home camera)
  • Netatmo weather station
  • BaseTech door camera
  • eQ-3 home control access point
  • Easy home wifi adapter
  • Netatmo Welcome

He gave an electronics crash course but also insisted on the risks of playing with electrically powered devices! Then, people were able to open and disassemble the devices to play with them.

I didn’t attend the second hour because another talk looked interesting: “Metasploit hardware bridge hacking” by Craig Smith. He works at Rapid7 and plays with all “moving” things, from cars to drones. To interact with those devices, a lot of tools and gadgets are required. The idea was to extend the Metasploit framework to be able to pentest these new targets. With an estimated 20.8 billion connected IoT devices (source: Gartner), pentesting projects around IoT devices will become more and more frequent. Many tools are required to test IoT devices: RF transmitters, USB fuzzers, RFID cloners, JTAG devices, CAN bus tools, etc. The philosophy behind Metasploit remains the same: based on modules (exploits, payloads, shellcodes). New modules are available to access relays which talk directly to the hardware. Example:

msf> use auxiliary/server/local_hwbridge

A Metasploit relay is a lightweight HTTP server that just makes JSON translations between the bridge and Metasploit.

Example: the ELM327 diagnostic module can be used via serial USB or Bluetooth. Once connected, all the classic framework features are available as usual:

./tools/hardware/elm327_relay.rb

Other supported relays are RF transmitter or Zigbee. This was an interesting presentation.

For the last time slot, there were two talks: one about vulnerabilities in TP-Link devices and one presented as “Looking through the web of pages to the Internet of Things”. I chose the second one, presented by Gabriel Weaver. The abstract did not describe the topic properly (or I did not understand it), but the presentation was a review of the research performed by Gabriel: “CPTL” or “Cyber-Physical Topology Language”.

That closes the 2nd day. Tomorrow will be dedicated to the regular tracks. Stay tuned for more coverage.

[The post TROOPERS 2017 Day #2 Wrap-Up has been first published on /dev/random]

March 20, 2017

I’m in Heidelberg (Germany) for the 10th edition of the TROOPERS conference. The regular talks are scheduled on Wednesday and Thursday. The first two days are reserved for trainings and a pre-conference event called “NGI” for “Next Generation Internet”, focusing on two hot topics: IPv6 and IoT. As said on the website: “NGI aims to provide discussion on how to secure these core technologies by bringing together practitioners from this space and some of the smartest researchers of the respective fields”. I initially planned to attend talks from both worlds but I stayed in the “IoT” track because many talks were interesting.

The day started with a keynote by Steve Lord: ”Of Unicorns and replicants”. Steve flagged his keynote as a “positive” talk (usually, we tend to present negative stuff). It started with some facts like “The S in IoT stands for Security” and a recap of the IoT history. This is clearly not a new idea. The first connected device was a Coca-Cola machine that was reachable via finger in… 1982! Who remembers this old-fashioned protocol? In 1985 came the first definition of “IoT”: it is the integration of people, processes and technology with connectable devices and sensors to enable remote monitoring of status. In 2000, LG presented its very first connected fridge. 2009 was a key year with the explosion of crowdfunding campaigns. Indeed, many projects were born thanks to the financial participation of many people. It was a nice way to bring ideas to life. In 2015, Vizio smart TVs started to watch you. Of course, Steve also talked about the story of St. Jude Medical and their vulnerable pacemakers. Common IoT problems are: botnets, endpoints, overreach (probably the biggest problem) and availability (remember the outage that affected Amazon a few days ago?). The second part of the keynote was indeed positive and Steve reviewed the differences between 2015 and 2017. In the past, cloud solutions were not so mature, there were communication issues, little open guidance and unrealistic expectations. People learn from mistakes, and some companies don’t want to end up in nightmare stories like others did, so they are investing in security. So, yes, things are getting (a little bit) better because more people are addressing security issues.

The first talk was “IoT hacking and Forensic with 0-day” by Moonbeom Park & Soohyun Jin. More and more IoT devices are involved in security incidents. Mirai is one of the latest examples. To address this problem, the speakers explained their process based on the following steps: search for IoT targets, analyze the filesystem and vulnerabilities, attack and exploit, analyze the artefacts, install a RAT and control it using a C&C, then perform incident response using forensic skills. The example they used was a vacuum robot with a voice recording feature. The first question is just… “why?”. They explained how to compromise the device, which was, at the beginning, properly hardened. But it was possible to attack the protocol used to configure it: some JSON data was sent in clear text with the wireless configuration details. Once the robot was reconfigured to use a rogue access point, root access on the device was granted. That’s nice, but how to control the robot, its camera and microphone? The idea was to turn it into a spying device. They explained how to achieve this and played a nice demo:

Vacuum robot spying device

So, why do we need IoT forensics? IoT devices can be involved in incidents. Issues? One of the issues is the way data is stored: there is no hard disk, only flash memory. Linux remains the most used OS on IoT devices (73% according to the latest IoT developers survey). It is important to be able to extract the filesystem from such devices to understand how they work and to collect logs. Usually, filesystems are based on SquashFS and UBIFS. Tools were presented to access this data directly from Python, for example the ubi_reader module. Once the filesystem is accessible, the forensic process remains the same.
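As a rough illustration of that extraction step (a sketch only, not the speakers’ workflow), the snippet below shells out to the stock unsquashfs and ubireader_extract_files utilities, assuming they are installed; the image path is a placeholder.

#!/usr/bin/env python3
# Rough sketch: extract a SquashFS or UBI/UBIFS image dumped from flash.
# Assumes the stock command-line tools are installed:
#   - unsquashfs               (squashfs-tools)
#   - ubireader_extract_files  (ubi_reader)
# The image path below is a placeholder.
import subprocess
import sys

def extract(image_path, out_dir="extracted"):
    with open(image_path, "rb") as fh:
        magic = fh.read(4)
    if magic == b"hsqs":                        # little-endian SquashFS magic
        cmd = ["unsquashfs", "-d", out_dir, image_path]
    elif magic.startswith(b"UBI"):              # UBI erase-block header magic
        cmd = ["ubireader_extract_files", "-o", out_dir, image_path]
    else:
        sys.exit("Unknown image format: %r" % magic)
    subprocess.run(cmd, check=True)
    print("Filesystem extracted to", out_dir)

if __name__ == "__main__":
    extract(sys.argv[1] if len(sys.argv) > 1 else "firmware_dump.bin")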

The next talk was dedicated to SDR (Software Defined Radio) by Matt Knight & Marc Newlin from Bastille: “So you want to hack radios?”. The idea behind this talk was to open our eyes to all the connected devices that implement radio communications. Why should we care about them? Not only are they often insecure, they are also deployed everywhere. They are also built on compromises: size and cost constraints, weak batteries, challenging deployment scenarios and, once in the wild, they are difficult to patch. Matt and Marc explained during the talk how to perform reverse engineering. There are two approaches: hardware and software defined radio. They reviewed their pros & cons. How to reverse engineer a radio signal? Configure yourself as a receiver and try to map symbols. This is a five-step process:

  • Identify the channel
  • Identify the modulation
  • Determine the symbol rate
  • Synchronize
  • Extract symbols

In many cases, OSINT is helpful to learn how a device works (find online documentation). A lot of information is publicly available (for example on the FCC website – just check the FCC ID on the back of the device to get interesting info). They briefly introduced the RF concepts and then the reverse engineering workflow, illustrated with different scenarios:

  • A Z-Wave home automation protocol
  • A doorbell (capture the button signal and then replay it to make the doorbell ring, of course)
  • An HP wireless keyboard/mouse

After the lunch, Vladimir Wolstencroft presented “SIMBox Security: Fraud, Fun & Failure”. This talk was tagged as TLP:RED so no coverage, but very nice content! It was one of my favourite talks of the day.

The next one was about the same topic: “Dissecting modern cellular 3G/4G modems” by Harald Welte. This talk was the result of research conducted by Harald. His company was looking for a new M2M (“Machine to Machine”) solution. They searched for interesting devices and started to see what was in the box. Once a good candidate was found (the EC20 from Quectel), they started a security review and, guess what, they made nice findings. First, the device contained some Linux code. Based on this, all manufacturers have to respect the GPL and disclose the modified source code (it took a long time to get the information from Quectel). But why is Linux installed on this device? For Harald, it just increases the complexity. Here is a proof with the slide explaining how the device is rebooted:

Reboot Process

Crazy isn’t it? Another nice finding was the following AT command:

AT+QLINUXCMD=

It allows sending Linux commands to the device in read/write mode and as root. What else?

The last talk, “Hacks & case studies: Cellular communications”, was presented by Brian Butterly. Brian’s motto is “to break things you must understand how they work”. The first step: read as much as possible, then build your lab to play with the selected targets. Many IoT devices today use GSM networks to interact with their users via SMS or calls. Others also support TCP/IP communications (data). After a brief introduction to mobile networks and how to deploy your own, an important message from Brian: technically, nothing prevents you from broadcasting valid network IDs (but the law does :-).

It’s important to understand how a device connects to a mobile network:

  • First, connect to its home network if available
  • Otherwise, do NOT connect to a list of blacklisted networks
  • Then connect to the network with the strongest signal.

If you deploy your own mobile network, you can make target devices connect to it and play MitM. So, what can we test? Brian reviewed different gadgets, how to abuse them and what their weaknesses are.

First case: a small GPS Tracker with an emergency button. The Mini A8 (price: 15€). Just send a SMS with “DW” and the device will reply with a SMS containing the following URL:

http://gpsui.net/smap.php?lac=1&cellid=2&c=262&n=23&v=6890 Battery:70%

This is not a real GPS tracker: it returns the operator (“262” is Germany) and cell tower information. If you send “1111”, it will enable the built-in microphone. When the SOS button is pressed, a message is sent to the “authorized” numbers. The second case was a gate relay (RTU5025 – 40€). It allows opening a door via SMS or call. It’s just a relay in fact. Send “xxxxCC” (xxxx is the PIN) to unlock the door. Nothing is sent back if the PIN is wrong. This means that it’s easy to brute-force the device. Even better, once you have found the PIN, you can send “xxxxPyyyy” to replace the PIN xxxx with a new one yyyy (and lock out the owner!). The next case was the Smanos X300 home alarm system (150€). It can also be controlled by SMS or calls (to arm, disarm and get notifications). Here again, there is a lack of protection and it’s easy to get the number, spoof an authorized number and just send a “1” or “0”.

The next step was to check IP communications used by devices like the GPS car tracker (TK105 – 50€). You can change the server using the following message:

adminip 123456 101.202.101.202 9000

You can then point it at your own web server to receive the device data. More fun: the device has a relay that can be connected to the car’s oil pump to turn the engine off (if the car is stolen). It also has a microphone and speaker. Of course, all communications occur over HTTP.
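If you wanted to see what such a re-pointed tracker actually sends, a minimal logging HTTP endpoint is enough. The sketch below (my own illustration, not from the talk) listens on port 9000, the port used in the adminip example above, and simply prints every request; the actual request format is whatever the tracker decides to send.

#!/usr/bin/env python3
# Minimal HTTP "catcher": log whatever a re-pointed device sends.
# Illustration only; port 9000 matches the adminip example above.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Catcher(BaseHTTPRequestHandler):
    def _log_request(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length) if length else b""
        print(self.command, self.path, body)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK")

    do_GET = _log_request
    do_POST = _log_request

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 9000), Catcher).serve_forever()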

The last case was a Siemens module for PLCs (CMR 2020). It was not perfect but much better than the other devices. For example, passwords are not just 4-digit PIN codes but real alphanumeric passwords.

Two other examples: a smart meter sending UDP packets in clear text (with the meter serial number in all packets), and a solar system control box running Windows CE 6.x. Guess what? The only way to manage the system is via Telnet. Who said that Telnet is dead?

It’s over for today. Stay tuned for more news by tomorrow!

[The post TROOPERS 2017 Day #1 Wrap-Up has been first published on /dev/random]

CurrentCost?

CurrentCost is a UK company founded in 2010 which provides home power measuring hardware. Their main product is a transmitter which uses an inductive clamp to measure power usage of your household. On top of that, they also provide Individual Appliance Monitors (IAMs) which sit between your device and the outlet, and measure its power usage.

The transmitters broadcast their findings wirelessly. To display the data, a number of different displays are sold, which can indicate total usage, per-IAM data, trending and even cost if you care to set that up. I myself have the EnviR, which has USB connectivity (with an optional cable) so you can process the CurrentCost data on your computer. Up to 9 IAMs can be connected to the EnviR.

I bought my CurrentCost products over 5 years ago, but it does look like they are still on sale. CurrentCost operates an eBay store where you can buy their hardware if you’re not in the UK.

Important note: Should you decide to acquire any of their IAM hardware, do note that their “EU” IAM plugs are in fact “Schuko” (type F) plugs, which are used in Germany and the Netherlands, with side ground contacts instead of a grounding pin. That wouldn’t be so much of an issue, except that they don’t have an accommodating hole for the pin, so they won’t fit standard EU (non-German, non-Dutch, type E) outlets without a converter! The device side is usually fine, as most if not all plugs also support side ground contacts, and if they don’t, at least the lack of a pin does not impede plugging it in.

OpenHAB?

OpenHAB is an open source, Java-powered Home Automation Bus; what this basically means is it’s a centralized hub that can connect to many systems that have found their way into your house, such as your smart TV, your ethernet-capable Audio Receiver, Hue lighting, Z-Wave- or Zigbee-powered accessories, HVAC systems, Kodi, CUPS, and even anything that speaks SNMP or MQTT or Amazon’s Alexa. It’s pretty much fully able to integrate into anything you can throw at it.

If you’re not already using OpenHAB, this post may not be very useful to you… Yet.

MQTT?

MQTT (short for Message Queue Telemetry Transport) is a publish-subscribe-based “lightweight” messaging protocol. In short, you run one (or multiple) MQTT broker(s), and then clients can “subscribe” to certain topics (topics being freehand names, pretty much), and other clients, or even the same ones, can “publish” data to those same topics.

MQTT is a quick and elegant solution to have data circulate between different services, over the network or on the local machine – the fact that the data is broadcast over certain topics means multiple listeners (“subscribers”) can all act on the same data being published.

OpenHAB supports MQTT in both directions – it can listen to incoming topics, or broadcast certain things over MQTT as well.

The configuration below assumes you already have an MQTT broker to publish the radio messages to; setting up Mosquitto or similar is out of scope for this article.
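To make the publish/subscribe idea concrete, here is a minimal Python round-trip, assuming the paho-mqtt 1.x client library is installed and a broker is reachable; the hostname and topic below are placeholders.

#!/usr/bin/env python3
# Minimal MQTT publish/subscribe round-trip.
# Assumes the paho-mqtt 1.x library; broker hostname and topic are placeholders.
import paho.mqtt.client as mqtt

BROKER = "your.broker.hostname"   # replace with your broker
TOPIC = "demo/hello"

def on_message(client, userdata, msg):
    print("received on", msg.topic, ":", msg.payload.decode())

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER)            # default port 1883
client.subscribe(TOPIC)
client.publish(TOPIC, "hello from the same client")
client.loop_forever()             # Ctrl-C to stop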

Getting data out of CurrentCost

The EnviR outputs the following string regularly (every 6 seconds) over its USB serial port:

<msg><src>CC128-v1.48</src><dsb>00012</dsb><time>22:18:34</time><tmpr>24.9</tmpr><sensor>0</sensor><id>02015</id><type>1</type><ch1><watts>00876</watts></ch1></msg>

This is a data line reporting power usage sent by sensor ID 2015 (every sensor has its own 12-bit ID), of type 1 (this is either the main unit or an IAM), reporting 876W on the first channel. The <tmpr> field is indeed a temperature reading: it comes from a temperature sensor inside the EnviR and simply measures the temperature where your display is located.
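Parsing those lines yourself is straightforward. Here is a minimal Python sketch using pyserial and the standard XML parser; the device path and the 57600 baud rate are assumptions that may need adjusting for your setup.

#!/usr/bin/env python3
# Minimal sketch: read and parse the EnviR's XML lines over the USB serial port.
# Assumes the pyserial package; device path and baud rate are assumptions.
import xml.etree.ElementTree as ET
import serial

with serial.Serial("/dev/ttyUSB0", 57600, timeout=10) as port:
    while True:
        line = port.readline().decode("ascii", errors="replace").strip()
        if not line.startswith("<msg>"):
            continue                              # skip partial or non-XML reads
        msg = ET.fromstring(line)
        sensor_id = msg.findtext("id")
        watts = msg.findtext("ch1/watts")
        temp = msg.findtext("tmpr")
        if watts is not None:                     # history messages have no ch1/watts element
            print("sensor %s: %d W (display temperature %s C)" % (sensor_id, int(watts), temp))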

Back in 2012, I wrote a set of PHP scripts parsing this data and putting it into separate files in /tmp, with a separate script then throwing this data into RRD files every minute to be visualised in graphs. I never published it because, to be frank, it was a bit of an ugly hack. However, it did work, and I had perfect visibility into when my 3 monitors went into standby, when the TV/home theater amplifier was on, etc.

After getting a taste of this, spurred on by JP, I dived into Home Automation and ended up using OpenHAB. For years I’ve “planned” to write a CurrentCost binding for OpenHAB, so it would natively support the XML serial protocol, and just read out everything automatically. I never got around to figuring out how to create an OpenHAB binding though, so my stats were separate in those RRD files for years. When I moved, I didn’t even reinstall the CurrentCost sensors, as I was already using Z-Wave based power sensors for most things I wanted to measure.

In the meantime I also encountered current-cost-forwarder, which listens for the XML data and sends it over to an MQTT broker. That did alleviate the need for a dedicated binding, but I never got around to trying it out, so I can’t tell you how well it works.

The magic of RTLSDR

RTLSDR is a great software defined radio (SDR) based on very cheap Realtek RTL28xx chips, originally meant to receive DVB-T broadcasts. This means you can pretty much listen in on any frequency between ~22MHz and ~1100MHz, including FM radio, ADS-B aircraft data, and many others. And by cheap, I do mean cheap: you can find RTLSDR-compatible USB dongles on eBay for 5€ or less!

CurrentCost transmitters use the 433MHz frequency, like many other home devices (car keys, wireless headphones, garage openers, weather stations, …). As I was playing around with the rtl_433 tool, which uses RTLSDR to sniff 433MHz communications, I noticed my CurrentCost sensor data passing by as well. That gave me the idea for this system, and the setup in use that led to this blog post.

Additional advantages of this approach are that you can now use as many IAMs as you want and are no longer limited to 9, and there is no need to have the EnviR connected to one of your server’s USB ports. In fact, you don’t even need the EnviR display at all, and can even drop the base transmitter if you only want to read out IAM data.

As an extra, any other communication picked up by rtl_433 on 433MHz will also be automatically piped into MQTT for consumption by anything else you want to run. If you (or your neighbours!) have a weather station, anemometer or anything else transmitting on 433MHz (and supported by rtl_433), you can consume this data in OpenHAB just as well!

Sniffing the 433MHz band

First off, let’s install rtl_433:

~# apt-get install rtl-sdr librtlsdr-dev cmake build-essential
~# git clone https://github.com/merbanan/rtl_433.git
Cloning into 'rtl_433'...
remote: Counting objects: 5722, done.
(...)
Checking connectivity... done.
~# cd rtl_433/
~/rtl_433# mkdir build
~/rtl_433# cd build
~/rtl_433/build# cmake ..
-- The C compiler identification is GNU 4.9.2
(...)
-- Build files have been written to: /root/rtl_433/build
~/rtl_433/build# make
~/rtl_433/build# make install

Once it’s been installed, let’s do a test run. If you have CurrentCost transmitters plugged in, you should see their data flash by. Unfortunately, nobody in my neighbourhood seems to have any weather stations or outdoor temperature sensors, so only CurrentCost output for me:

~# rtl_433 -G
Using device 0: Generic RTL2832U
Found Rafael Micro R820T tuner
Exact sample rate is: 250000.000414 Hz
Sample rate set to 250000.
Bit detection level set to 0 (Auto).
Tuner gain set to Auto.
Reading samples in async mode...
Tuned to 433920000 Hz.
2017-02-27 17:53:12 : CurrentCost TX
Device Id: 2015
Power 0: 26 W
Power 1: 0 W
Power 2: 0 W

To end the program, press ctrl-C.

What we can see in the output here:

  • Device Id: the same device ID as was reported by the EnviR’s XML protocol. It likely changes when you press the pair button.
  • Power 0: this is the only power entry you’ll see for IAM modules; the base device may also report data for the second and third clamp.

It is possible you see the following error when running rtl_433:

Kernel driver is active, or device is claimed by second instance of librtlsdr.
In the first case, please either detach or blacklist the kernel module
(dvb_usb_rtl28xxu), or enable automatic detaching at compile time.

This means your OS has autoloaded the DVB drivers when you plugged in your USB stick, before you installed the rtl-sdr package. You can unload them manually:

rmmod dvb_usb_rtl28xxu rtl2832

The rtl-sdr package contains a blacklist entry for this driver, so it shouldn’t be a problem anymore from now on. Then, try running the command again.

Relaying 433MHz data to MQTT

Install mosquitto-clients, which provides mosquitto_pub which will be used to publish data to the broker:

~# apt-get install mosquitto-clients

Next, install the systemd service file which will run the relay for us:

~# cat <<EOF > /etc/systemd/system/rtl_433-mqtt.service
[Unit]
Description=rtl_433 to MQTT publisher
After=network.target
[Service]
ExecStart=/bin/bash -c "/usr/local/bin/rtl_433 -q -F json |/usr/bin/mosquitto_pub -h <your.broker.hostname> -i RTL_433 -l -t RTL_433/JSON"
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
~# systemctl daemon-reload
~# systemctl enable rtl_433-mqtt

This will publish JSON-formatted messages containing the 433MHz data to your MQTT broker on the RTL_433/JSON topic.

Don’t forget to replace your broker’s hostname on the correct line. I tried to make it use an Environment file first, but unfortunately ran into a few issues using those variables, due to needing to run the command in an actual shell because of the pipe. If you figure out a way to make that work, please do let me know.
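If the shell pipe ever becomes annoying, an alternative is to skip it entirely and do the piping in a tiny script. The sketch below (my own variation, assuming the paho-mqtt 1.x library) runs rtl_433 as a subprocess and publishes each JSON line to the same topic; you could point the unit's ExecStart at this script instead.

#!/usr/bin/env python3
# Alternative relay sketch: run rtl_433 and publish each JSON line with paho-mqtt,
# avoiding the shell pipe. Assumes the paho-mqtt 1.x library; replace the broker hostname.
import subprocess
import paho.mqtt.client as mqtt

BROKER = "your.broker.hostname"   # replace with your broker
TOPIC = "RTL_433/JSON"

client = mqtt.Client(client_id="RTL_433")
client.connect(BROKER)
client.loop_start()               # run the network loop in a background thread

proc = subprocess.Popen(["/usr/local/bin/rtl_433", "-q", "-F", "json"],
                        stdout=subprocess.PIPE, text=True)
for line in proc.stdout:
    line = line.strip()
    if line:
        client.publish(TOPIC, line)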

See if it works

We can subscribe to the MQTT topic using the mosquitto_sub tool from the mosquitto-clients package:

mosquitto_sub -h <your.broker.hostname> -t RTL_433/JSON

Doing this should yield a number of output lines such as this:

{"time" : "2017-03-19 21:26:25", "model" : "CurrentCost TX", "dev_id" : 2015, "power0" : 0, "power1" : 0, "power2" : 0}

Press Ctrl-C to exit the utility. If this didn’t yield any useful JSON output, check the status of the service with systemctl status rtl_433-mqtt and correct any issues.

Add items to OpenHAB

The following instructions were written for OpenHAB 1.x. Even though OpenHAB 2.x has a new way of adding “things” and “channels”, they work on that version as well – there is no automatic web-based configuration for MQTT items anyway.

Create items such as these in the OpenHAB items file of your choice (where exactly this is depends on your OpenHAB major version and whether you installed using Debian packages or from the zip file).

Number Kitchen_Boiler_Power "Kitchen Boiler [%.1f W]" (GF_Kitchen) { mqtt="<[your.broker.hostname:RTL_433/JSON:state:JSONPATH($.power0):.*\"dev_id\" \\: 2015,.*]"}

Replace the dev_id being matched (2015 in the example above) with the ID of your CurrentCost transmitter. As all rtl_433 broadcasts come in on the same topic, a regular expression is used to match a single Device Id. If you want to read another phase, replace power0 by power1 or power2.

If you want to receive other 433MHz broadcasts, you may need to change the regex to make sure it’s only catching CurrentCost sensors – although the dev_id match alone might be enough. Let me know!

Writing informative technical how-to documentation takes time, dedication and knowledge. Should my blog series have helped you in getting things working the way you want them to, or configure certain software step by step, feel free to tip me via PayPal (paypal@powersource.cx) or the Flattr button. Thanks!