Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

June 24, 2017

The second edition of BSides Athens took place this Saturday. I already attended the first edition (my wrap-up is here) and I was happy to be accepted as a speaker for the second time! This edition moved to a new location, which was great: good wireless, air conditioning and food. The day was based on three tracks: the first two for regular talks and the third one for the CTF and workshops. The “boss”, Grigorios Fragkos, introduced the 2nd edition. This one gave more attention to a charity program called “the smile of the child”, which helps Greek kids to stay in touch with new technologies. A specific project is called “ODYSSEAS” and is based on a truck that travels across Greece to introduce kids to technologies like mobile phones, social networks, … BSides Athens donated to this project. A very nice initiative, presented by Stefanos Alevizos, who received a slot of a few minutes to describe the program (content in Greek only).


The keynote was assigned to Dave Lewis, who presented “The Unbearable Lightness of Failure”. The main point made by Dave is that we fail but… we learn from our mistakes! In other words, “failure is an acceptable teaching tool“. The keynote was built around many examples, like signs: we receive signs everywhere and we must understand how to interpret them. Or the famous Friedrich Nietzsche quote: “That which does not kill us makes us stronger“. We are facing failures all the time. The latest good example is the WannaCry story, which should never have happened, but… you know the story! Another important message is that we don’t have to be afraid to fail. We also have to share as much as possible, not only good stories but also bad ones. Sharing is key! Participate in blogs, social networks, podcasts. Break out of your silo! Dave is a renowned speaker and delivered a really good keynote!

Then talks were split across the two main rooms. For the first one, I decided to attend Thanassis Diogos’s presentation about “Operation Grand Mars“. In January 2017, Trustwave published an article which described this attack. Thanassis came back on this story with more details. After a quick recap about what incident management is, he reviewed all the facts related to the operation and gave some tips to improve the detection of abnormal activities on your network. It started with an alert generated by a workstation and, three days later, the same message came from a domain controller. Definitely not good! The entry point was infected via a malicious Word document / JavaScript. Then a payload was downloaded from Google Docs which is, for most of our organizations, a trustworthy service. Then he explained how persistence was achieved (via autorun, scheduled tasks) and also lateral movements. The pass-the-hash attack was used. Another tip from Thanassis: if you see local admin accounts used for network logon, this is definitely suspicious! Good review of the attack with some good tips for blue teams.

My next choice was to move to the second track to follow Konstantinos Kosmidis‘s talk about machine learning (a hot topic today in many conferences!). I’m not a big fan of these technologies but I was interested in the abstract. The talk was a classic one: after an introduction to machine learning (which we already use every day with technologies like Google face recognition, self-driving cars or voice recognition), why not apply this technique to malware detection? The goal is to detect and classify but, more importantly, to improve the algorithm! After reviewing some pros & cons, Konstantinos explained the technique he used in his research to convert malware samples into images. But, more interesting, he explained a technique based on steganography to attack this algorithm. The speaker was a little bit stressed but the idea looks interesting. If you’re interested, have a look at his GitHub repository.

Back to the first track to follow Professor Andrew Blyth with “The Role of Professionalism and Standards in Penetration Testing“. The penetration testing landscape has changed considerably in the last years. We moved from script kiddies searching for juicy vulnerabilities to professional services. The problem is that today some pentest projects are requested not to detect security issues and improve, but just for… compliance requirements. You know the “check-the-box” syndrome. Also, the business evolves and is requesting more insurance. The coming European GDPR will increase the demand for penetration tests. But a real pentest is not a Nessus scan with a new logo, as Andrew explained! We need professionalism. In the second part of the talk, Andrew reviewed some standards that involve pentests: iCAST, CBEST, PCI, OWASP, OSSTMM.

After a nice lunch with Greek food, back to talks with the one by Andreas Ntakas and Emmanouil Gavriil about “Detecting and Deceiving the Unknown with Illicium”. They are working for one of the sponsors and presented the tool developed by their company: Illicium. After the introduction, my feeling was that it’s a new honeypot with extended features. There is interesting stuff in it but, IMHO, it was a commercial presentation. I’d have expected a demo. Also, the tool looks nice but is dedicated to organizations that have already reached a mature security level. Indeed, before defeating the attacker, the first step is to properly implement basic controls like… patching! Something some organizations still don’t do today!

The next presentation was “I Thought I Saw a |-|4><0.-” by Thomas V. Fisher. Many interesting tips were provided by Thomas, such as:

  • Understand and define “normal” activities on your network to better detect what is “abnormal”.
  • Log everything!
  • Know your business
  • Keep in mind that the classic cyber kill-chain is not always followed by attackers (they don’t follow rules)
  • The danger is to try to detect malicious stuff based on… assumptions!

The model presented by Thomas was based on 4 A’s: Assess, Analyze, Articulate and Adapt! A very nice talk with plenty of tips!

The next slot was assigned to Ioannis Stais who presented his framework called LightBulb. The idea is to build a framework to help in bypassing common WAFs (web application firewalls). Ioannis first explained how common WAFs work and why they can be bypassed. Instead of testing all possible combinations (brute-force), LightBulb relies on the following process:

  • Formalize the knowledge of code injection attack variations.
  • Expand the knowledge
  • Cross-check for vulnerabilities

Note that LightBulb is also available as a Burp Suite extension! The code is available here.

Then, Anna Stylianou presented “Car hacking – a real security threat or a media hype?“. The last events that I attended also had a talk about cars, but they focused more on abusing the remote control to open doors. Today, the focus is on the ECUs (“Engine Control Units”) that are present in modern cars. Today a car might have >100 ECUs and >100 million lines of code, which means a huge attack surface! There are many tools available to attack a car via its CAN bus; even the Metasploit framework can be used to pentest cars today! The talk was not dedicated to a specific attack or tool but was more a recap of the risks that car manufacturers are facing today. Indeed, threats changed:

  • theft from the car (breaking a window)
  • theft of the car
  • but today: theft of the use of the car (ransomware)

Some infosec gurus also predict that autonomous cars will be used as lethal weapons! As cars can be seen as computers on wheels, the potential attacks are the same: spoofing, tampering, repudiation, disclosure, DoS or privilege escalation issues.

The next slot was assigned to me. I presented “Unity Makes Strength” and explained how to improve interconnections between our security tools/applications. The last talk was performed by Theo Papadopoulos: A “Shortcut” to Red Teaming. He explained how .LNK files can be a nice way to compromise your victim’s computer. I like the “love equation”: Word + Powershell = Love. Step by step, Theo explained how to build a malicious document with a link file, how to avoid mistakes and how to increase the chances of getting the victim infected. I like the persistence method based on assigning a popular hot-key (like CTRL-V) to a shortcut on the desktop. Windows will trigger the malicious script attached to the shortcut and then… execute the original action (in this case, paste the clipboard content). Evil!

The day ended with the announcement of the CTF winners and a lot of information about the next edition of BSides Athens. They already have plenty of ideas! It’s now time for some days off across Greece with the family…

[The post BSides Athens 2017 Wrap-Up has been first published on /dev/random]

June 22, 2017

I published the following diary on isc.sans.org: “Obfuscating without XOR“.

Malicious files are generated and spread over the wild Internet daily (read: “hourly”). The goal of the attackers is to use files that are:

  • not known by signature-based solutions
  • not easy for the human eye to read

That’s why many obfuscation techniques exist to lure automated tools and security analysts… [Read more]

[The post [SANS ISC] Obfuscating without XOR has been first published on /dev/random]

June 21, 2017

[Updated 23/06 to reflect newer versions 2.1.2 and 2.2.1]

Heads-up: Autoptimize 2.2 has just been released with a slew of new features (see changelog) and an important security-fix. Do upgrade as soon as possible.

If you prefer not to upgrade to 2.2 (because you prefer the stability of 2.1.0), you can instead download 2.1.2, which is identical to 2.1.0 except that the security fix has been backported.

I’ll follow up on the new features and on the security issue in more detail later today or tomorrow.

June 20, 2017

The placing of your state is the only really important thing in architecture.

The post MariaDB MaxScale 2.1 defaulting to IPv6 appeared first on ma.ttias.be.

This little upgrade caught me by surprise. In a MaxScale 2.0 to 2.1 upgrade, MaxScale changes the default bind address from IPv4 to IPv6. It's mentioned in the release notes as this:

MaxScale 2.1.2 added support for IPv6 addresses. The default interface that listeners bind to was changed from the IPv4 address 0.0.0.0 to the IPv6 address ::. To bind to the old IPv4 address, add address=0.0.0.0 to the listener definition.

Upgrading MariaDB MaxScale from 2.0 to 2.1

The result is pretty significant though, because authentication in MySQL is often host or IP based, with permissions being granted like this.

$ SET PASSWORD FOR 'xxx'@'10.0.0.1' = PASSWORD('your_password');

Notice the explicit use of IP address there.

Now, after a MaxScale 2.1 upgrade, it'll default to an IPv6 address for authentication, which gives you the following error message:

$ mysql -h127.0.0.1 -P 3306 -uxxx -pyour_password
ERROR 1045 (28000): Access denied for user 'xxx'@'::ffff:127.0.0.1' (using password: YES)

Notice how 127.0.0.1 turned into ::ffff:127.0.0.1? That's an IPv4 address being encapsulated in an IPv6 address. And it'll cause MySQL authentication to potentially fail, depending on how you assigned your users & permissions.

To fix this, you can either:

  • Downgrade MaxScale from 2.1 back to 2.0 (add the 2.0.6 repositories for your OS and downgrade MaxScale)
  • Add the address=0.0.0.0 config to your listener configuration in /etc/maxscale.cnf (see the sketch below)
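
For reference, a listener section in /etc/maxscale.cnf could then look roughly like this (the section name, service and port are illustrative, not taken from an actual setup):

    [MyService-Listener]
    type=listener
    service=MyService
    protocol=MySQLClient
    port=3306
    # bind to the old IPv4 default again instead of the new IPv6 default (::)
    address=0.0.0.0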

Hope this helps!

The post MariaDB MaxScale 2.1 defaulting to IPv6 appeared first on ma.ttias.be.

June 18, 2017

In February we spent a weekend in the Arctic Circle hoping to see the northern lights. I've been so busy, I only now got around to writing about it.

We decided to travel to Nellim for an action-packed weekend with outdoor adventure, wood fires, reindeer and no WiFi. Nellim is a small Finnish village, close to the Russian border and in the middle of nowhere. This place is a true winter wonderland with untouched and natural forests. On our way to the property we saw a wild reindeer eating on the side of the road. It was all very magical.

Beautiful log cabin bed

The trip was my gift to Vanessa for her 40th birthday! I reserved a private, small log cabin instead of the main lodge. The log cabin itself was really nice; even the bed was made of logs with two bear heads carved into it. Vanessa called them Charcoal and Smokey. To stay warm we made fires and enjoyed our sauna.

Dog sledding

One day we went dog sledding. As with all animals it seems, Vanessa quickly named them all: Marshmallow, Brownie, Snickers, Midnight, Blondie and Foxy. The dogs were so excited to run! After 3 hours of dog sledding in -30 C (-22 F) weather we stopped to warm up and eat; we made salmon soup in a small makeshift shelter that was similar to a tepee. The tepee had a small opening at the top and there was no heat or electricity.

The salmon soup was made over a fire, and we were skeptical at first how this would taste. The soup turned out to be delicious and even reminded us of the clam chowder that we have come to enjoy in Boston. We've since remade this soup at home and the boys also enjoy it. Not that this blog will turn into a recipe blog, but I plan to publish the recipe with photos at some point.

Tippy by night
Campfire in the snow

At night we would go out on "aurora hunts". The first night by reindeer sled, the second night using snowshoes, and the third night by snowmobile. To stay warm, we built fires either in tepees or in the snow and drank warm berry juice.

Reindeer sledding

While the untouched land is beautiful, they definitely try to live off the land. The Finns have an abundance of berries, mushrooms, reindeer and fish. We gladly admit we enjoyed our reindeer sled rides, as well as eating reindeer. We had fresh mushroom soup made out of hand-picked mushrooms. And every evening there was an abundance of fresh fish and reindeer offered for dinner. We also discovered a new gin, Napue, made from cranberries and birch leaves.

In the end, we didn't see the Northern Lights. We had a great trip, and seeing them would have been the icing on the cake. It just means that we'll have to come back another time.

June 17, 2017

Let's look at how easy it is to implement a simple cookie-based language switch in the Rocket web framework for the Rust programming language. Defining the language type:
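
A minimal sketch of what such a type could look like (the variant names are illustrative):

    #[derive(PartialEq)]
    enum Lang {
        Nl,
        En,
    }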

In this case there will be support for Dutch and English. PartialEq is derived to be able to compare Lang items with ==.

The Default trait is implemented to define the default language:
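
For example, something like this (picking English as the fallback is an assumption):

    impl Default for Lang {
        fn default() -> Lang {
            // English as the default is an assumption; pick what suits your site.
            Lang::En
        }
    }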

The FromStr trait is implemented to allow creating a Lang item from a string.
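
A sketch, assuming the cookie stores the short codes "nl" and "en":

    use std::str::FromStr;

    impl FromStr for Lang {
        type Err = ();

        fn from_str(s: &str) -> Result<Lang, ()> {
            match s {
                "nl" => Ok(Lang::Nl),
                "en" => Ok(Lang::En),
                _ => Err(()),
            }
        }
    }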

The Into<&'static str> trait is added to allow the conversion in the other direction.
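
The mirror image of the FromStr implementation, again assuming the "nl"/"en" codes:

    impl Into<&'static str> for Lang {
        fn into(self) -> &'static str {
            match self {
                Lang::Nl => "nl",
                Lang::En => "en",
            }
        }
    }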

Finally the FromRequest trait is implemented to allow extracting the "lang" cookie from the request.
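
A sketch of what this request guard could look like, assuming a Rocket 0.3-era cookie API (the exact calls differ between Rocket versions):

    use rocket::Outcome;
    use rocket::request::{self, FromRequest, Request};

    impl<'a, 'r> FromRequest<'a, 'r> for Lang {
        type Error = ();

        fn from_request(request: &'a Request<'r>) -> request::Outcome<Lang, ()> {
            // Read the "lang" cookie; fall back to the default language when
            // it is missing or holds an unknown value.
            let lang = request
                .cookies()
                .get("lang")
                .and_then(|cookie| cookie.value().parse().ok())
                .unwrap_or_default();
            Outcome::Success(lang)
        }
    }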

It always succeeds and falls back to the default when no cookie or an unknown language is found. How to use the Lang constraint on a request:
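
For example, a hypothetical handler that simply takes Lang as a regular request guard:

    #[get("/")]
    fn index(lang: Lang) -> &'static str {
        match lang {
            Lang::Nl => "Hallo wereld!",
            Lang::En => "Hello world!",
        }
    }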

And the language switch page:
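
A possible version of it (the route path and return value are made up for this sketch): it parses the requested language, stores it in the cookie and reuses the conversions defined above:

    use rocket::http::{Cookie, Cookies};

    #[get("/lang/<lang>")]
    fn set_lang(lang: String, mut cookies: Cookies) -> &'static str {
        // Unknown values silently fall back to the default language.
        let lang: Lang = lang.parse().unwrap_or_default();
        let value: &'static str = lang.into();
        cookies.add(Cookie::new("lang", value));
        "Language changed"
    }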

And as the cherry on the cake, let's have the language switch page automatically redirect to the referrer. First let's implement a FromRequest trait for our own Referer type:
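
A sketch of such a type and its guard (it reuses the rocket imports from the Lang guard above):

    struct Referer(String);

    impl<'a, 'r> FromRequest<'a, 'r> for Referer {
        type Error = ();

        fn from_request(request: &'a Request<'r>) -> request::Outcome<Referer, ()> {
            match request.headers().get_one("Referer") {
                // Use the header content when it is present...
                Some(referer) => Outcome::Success(Referer(referer.to_string())),
                // ...otherwise forward to the next matching handler.
                None => Outcome::Forward(()),
            }
        }
    }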

When it finds a Referer header it uses the content, else the request is forwarded to the next handler. This means that if the request has no Referer header it is not handled, and a 404 will be returned.
Finally let's update the language switch request handler:
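
Extending the earlier sketch with the Referer guard so the handler redirects back to the page the visitor came from (again an illustration, not the author's original code; the Redirect API also varies between Rocket versions):

    use rocket::response::Redirect;

    #[get("/lang/<lang>")]
    fn set_lang(lang: String, referer: Referer, mut cookies: Cookies) -> Redirect {
        let lang: Lang = lang.parse().unwrap_or_default();
        let value: &'static str = lang.into();
        cookies.add(Cookie::new("lang", value));
        // Send the visitor back to the page they came from.
        Redirect::to(&referer.0)
    }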

Pretty elegant. A recap with all the code combined and the missing glue added:
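
The snippets above only need a bit of glue to run; a sketch assuming the mid-2017 toolchain (nightly Rust with the rocket_codegen plugin — newer Rocket releases use different crate attributes), with the route names matching the sketches above:

    #![feature(plugin)]
    #![plugin(rocket_codegen)]

    extern crate rocket;

    fn main() {
        rocket::ignite()
            .mount("/", routes![index, set_lang])
            .launch();
    }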

June 16, 2017

Why politicians' financial abuses are inevitable in a representative democracy

Every time someone decides to dig into the spending of the political world, scandals break out. The easy conclusion is that politicians are all rotten and that we should vote for those who are not. Or who promise not to be.

Yet, for as long as representative democracy has existed, this has never worked. What if it were the system itself that made sound management of public money impossible?

According to Milton Friedman, there are only 4 ways to spend money: spending your own money on yourself, your own money on others, other people's money on yourself, and other people's money on others.

Your own money on yourself

When you spend money you have earned, you always optimise the return to get as much as possible while spending as little as possible. You think twice before making big purchases, you compare offers, you plan, you work out the amortisation, even if only intuitively.

If you spend money needlessly, you will be angry with yourself; you will feel either guilty of negligence or cheated by others.

Your own money on others

While the intention of spending on others is always good, you will generally not pay as much attention to the value others will receive. You usually set the budget that seems socially acceptable so as not to look like a cheapskate, and you spend that budget rather arbitrarily.

There is a good chance that your gift will not bring as much pleasure as it cost you, that it will not meet an important or immediate need, or even that it will end up straight in the bin.

Economically, gifts and surprises are rarely a good idea. Nevertheless, since you generally try not to exceed a given budget, the economic damage is limited. And, sometimes, a gift brings enormous pleasure. Idea: offer a ForeverGift!

Other people's money on yourself

When you can spend without counting, for example when your company covers all your travel expenses or you have a fuel card, the economic optimisation becomes catastrophic.

In fact, this scenario usually even amounts to anti-optimisation. You will, without remorse, pick a flight that lets you sleep an hour longer even if it is several hundred euros more expensive than the early-morning one. In extreme cases, you will try to spend as much as possible, even needlessly, to feel like you are getting more than your nominal salary.

This anti-optimisation can be offset by several factors: a sense of moral duty towards the company, especially in small organisations, or a review of expense reports, or even a spending cap.

The cap can, however, have the opposite effect. If an employee has a fuel card with a limit of, say, 2000 litres per year, he will tend to drive more, or to take the car on holiday, in order to use the 2000 litres he feels entitled to.

This is why this economic situation is very rare and should be avoided at all costs.

Other people's money on others

By definition, political institutions are in this last scenario. Politicians sit at the head of an enormous pot of money collected from citizens in various ways. And they have to decide how to spend it. Or even how to grow the pot further, for example with new taxes.

As I explained in a previous post, making money is the default objective of every human being in our society.

Politicians will therefore quite naturally try to benefit, by every possible means, from the pot of money they are responsible for. With the most honest ones, this will happen unconsciously, but it will happen all the same, in an indirect way. The more discreet ones may, for example, award public contracts without receiving any immediate benefit, while building a network of relationships that will later let them sit on lucrative boards of directors. With the most cynical ones, genuine systems will be put in place, what I call evaporation loops, allowing public money to be transferred, most often legally, into private pockets.

Since all of this is completely opaque and drowned in bureaucracy, it is generally impossible for citizens to make the link between the euro they paid in taxes and the euro scandalously handed to certain politicians. Especially since the notion of “scandalous” is subjective. At what point does a salary become scandalous? From how many board members does an inter-municipal company become a machine for paying friends and evaporating public money? From what degree of acquaintance should a politician no longer be allowed to hire family and friends, or let them benefit from a public contract?

Politicians are our employees, to whom we hand an unlimited credit card, without any oversight and with the power to issue new cards for their friends.

What can we do?

We should therefore not rush to vote for those who promise to be less rotten than the others. If they are not rotten yet, it won't take long. Power corrupts. Mixing with the rich and with other politicians who all do the same thing does not help keep a cool head. These behaviours become the norm, and the limits set by law are, just like the fuel card mentioned above, no longer limits but entitlements they feel they legitimately have a right to. In the event of a scandal, they will not even understand what they are being blamed for, taking refuge behind “it's legal”. What we see as a corruption of the system is in fact only its most logical mechanical outcome!

The first step of a solution is to demand total transparency of public spending. Citizens should be able to follow the financial flow of every public cent up to the moment it ends up in a private pocket. The public money paid to every elected official should be public. Going into politics would then come with the knowledge that part of your private life becomes transparent and that all remuneration will from now on be public, without any concession.

This requires a lot of simplification work but, with a little will, it is entirely possible today. Secret budgets should be duly budgeted and justified so that the public can at least follow their evolution over time.

Curiously, this is not on any politician's programme…

 

Post written in collaboration with Mathieu Jamar. Photo by feedee P.

Did you enjoy this read? Support the author on Tipeee, Patreon, Paypal or Liberapay. Even a symbolic donation makes all the difference! Then let's meet on Facebook, Medium, Twitter or Mastodon.

This text is published under the CC-By BE licence.

June 14, 2017

I published the following diary on isc.sans.org: “Systemd Could Fallback to Google DNS?“.

Google is everywhere and provides free services to everyone. Amongst the huge list of services publicly available, there is Google DNS, well known as 8.8.8.8, 8.8.4.4 (IPv4) and 2001:4860:4860::8888, 2001:4860:4860::8844 (IPv6)… [Read more]

 

[The post [SANS ISC] Systemd Could Fallback to Google DNS? has been first published on /dev/random]

June 13, 2017

Recently, I got the chance to assist a team of frontend and back-end developers in doing a bit of open heart surgery. The scope of the project is as follows: migrate a BBOM monolith towards a new BSS system but keep the frontend part, and convert another web frontend and one mobile app to the same BSS system. To facilitate this, and because it’s common sense, the decision was made to create our own REST API in between. But we were faced with an issue: time is limited and we wanted to start creating everything at once. Without a working API implementation, but with the need for a defined interface, we decided to look for a tool to assist us in this process.

Gotta have swag

We began to create our API using API Blueprint in Apiary, but that soon turned out to be quite annoying for a few reasons. One, everything exists within the same file. This implies the file grows quite large once you start adding endpoints, examples and responses. Secondly, there’s no nice way to start working on this as a team, unless you get a Standard Plan. We could debate about whether or not migrating to another plan would have solved our problem, but let’s be honest, I’d rather invest in the team than spend it on unnecessary tooling.

I began a venture to migrate this to another tool, and eventually ended up playing with Swagger. First off, Swagger also supports YAML, which is a great way to describe these things. Secondly, the ecosystem is a bit more mature, which allows us to do things API Blueprint does not provide, such as splitting the specification into smaller parts. I found this great blog post by Mohsen Azimi which explains exactly this, and following his example, I added a compile.js file that collects the .yaml references and combines those into one big swagger.json file.

The advantages are interesting, as we can now split the Swagger specification into folders for context and work on different parts without creating needless overhead all the time, like merge conflicts for example. To make sure we know the changes comply with the Swagger definition, I added a check after compiling swagger.json using swagger-parser to validate the output. Combined with a docker container to do the compilation and validation, we’ve got ourselves a nice way to proceed with certainty. Adding this to a CI is peanuts, as we can use the same docker image to run all the necessary checks. The project is currently being built using Travis; you can find a sample .travis.yml file in the repository.

The setup of the project is as follows. The explanation of the components is listed inline, be aware I only listed the parts which need an explanation. Refer to the repository for a complete overview.

.
├── definitions // the data model used by the API
|   ├── model.yaml // model definition
|   └── index.yaml // list of model definitions
├── examples // sample json output
|   ├── sample.json
|   └── second_sample.json
├── parameters
|   ├── index.yaml // header and query string parameters
├── paths
|   ├── path.yaml // path definition
|   └── index.yaml // list of path definitions
├── swagger-ui // folder containing custom Swagger-UI
├── gulpfile.js // build and development logic
├── makefile // quick access to commands
└── swagger.yaml // swagger spec base file

While this sample contains model, path and parameter definitions in the root of each sub folder, nothing stops you from creating more folders to structure the definitions inside. As the compile function in gulpfile.js (previously compile.js) takes care of stitching the YAML files into one JSON spec, it can be as flexible as you want. The makefile contains a few handy commands so everyone can use the project without the need for specific setup or docker knowledge.

To change the spec you can use any editor of choice; I have Visual Studio Code set up together with the Swagger Viewer plugin. This way I can work on the spec and have it previewed in a tab next to me. In case I need to validate the changes, I can also use the pre-configured validate task to quickly get feedback in my editor console. The tasks are added to the project to get you started using Visual Studio Code. If you do, make sure to also add a key binding to spawn the tasks. Open keybindings.json and enter the following (change the key combo if needed).

    {
        "key": "ctrl+shift+r",
        "command": "workbench.action.tasks.runTask"
    }

On top of that, one of my colleagues, Joeri Hendrickx, extended the setup by creating a watch function inside the gulpfile.js file that automatically reloads changes in Swagger-UI while you adjust files. This way, there’s no need for a specific setup and you can use any editor you like. As an extra bonus, it will also display the errors on top of the page.

To run the Swagger specification, use the make swagger command or the swagger task in Visual Studio Code. By default, Swagger UI will be available at localhost:3000, unless you specify another port using the SWAGGER_PORT environment variable. To enable the watch function, make use of the make watch command or the watch task in Visual Studio Code.

Are you mocking me?

This leaves us with one open item. How do we create a mock service using our Swagger specification? As it turns out, there’s a very useful tool out there called Prism that does just that. Part of the Stoplight tool set, their CLI tool allows you to create a mock server by simply using a Swagger spec. This provides you with all you need to design, test and move fast.

The docker image has been extended to also pull in the latest version of Prism and add it to our path. You can run the mock server through the make mock command or the mock task in Visual Studio Code. By default, the mock server will run on localhost:8010, unless you specify another port using the PRISM_PORT environment variable.
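
For example (the port numbers are arbitrary):

SWAGGER_PORT=3001 make swagger   # Swagger UI on localhost:3001
PRISM_PORT=8011 make mock        # Prism mock server on localhost:8011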

Starting the mock server prints the available endpoints. You now have the ability to start developing and use the mocked API, or validate your work via Postman, curl or any http request tool of choice. Using this repository, the following curl command will output a mocked result.

curl -X GET http://localhost:8010/v1/ping -H 'authorization: Basic trololol'

If for any reason you need to debug inside the container, you can use the make interactive command. This will open a shell inside the container for you to mess around in. I never needed it until now, but it’s there. Just in case.

The setup we have at work currently uses Jenkins to validate the spec, which is deployed to Heroku every time the build on master succeeds (which is, well, every time). This way we have a single place of truth when it comes to our Swagger specification and the accompanying mock service for developers or partners to play with. We can prototype fast while collecting feedback, or change current implementations fast while knowing the impact. Our production API is tested against the Swagger specification, which is integrated in that repository as a submodule to decouple designing and implementation. To get a more behavioral representation of a real API, we created middleware in Python which can keep track of the data you send and respond accordingly for certain processes. Changes to this part are also validated against the specification in order to reduce the chance of creating errors.

Feel free to mess around, ask questions or even create issues and pull requests on GitHub and let me know what you think. And stay tuned for Part II which covers technical documentation!

Source code

June 12, 2017

CoreOS Fest 2017 happened earlier this month in San Francisco. I had the joy of attending this conference. With a vendor-organized conference there’s always the risk of it being mostly a thinly-veiled marketing exercise, but this didn’t prove to be the case: there was a good community and open-source vibe to it, probably because CoreOS itself is for the most part an open-source company.

Not bad for a view

Also fun was encountering a few old-time GNOME developers such as Matthew Garrett (now at Google) and Chris Kühl (who now runs kinvolk). It’s remarkable how good of a talent incubator the GNOME project is. Look at any reasonably successful project and chances are high you’ll find some (ex-)GNOME people.

Main room

I also had the pleasure of presenting the experiences and lessons learned related to introducing Kubernetes at Ticketmatic. Annotated slides and a video of the talk can be found here.

Making your company cloud‑native: the Ticketmatic story


Comments | More on rocketeer.be | @rubenv on Twitter

June 09, 2017

Here is my wrap-up for the last day. Fortunately, after yesterday’s social event, the organisers had the good idea to start later… The first set of talks was dedicated to presentations of tools.

The first slot was assigned to Florian Maury and Sébastien Mainand: “Réutilisez vos scripts d’audit avec PacketWeaver”. When you perform audits, the same tasks are repeated over and over. And, as we are lazy people, Florian & Sébastien’s idea was to automate such tasks. They can be getting a PCAP, using another tool like arpspoof, modifying packets using Scapy, etc… The chain can quickly become complex. By automating, it’s also easier to deploy a proof-of-concept or a demonstration. The tool uses a Metasploit-like interface. You select your modules, you execute them, but you can also chain them: the output of script1 is used as input of script2. The available modules are classified per ISO layer:

  • app_l7
  • datalink_l2
  • network_l3
  • phy_l1
  • transport_l4

The tool is available here.

The second slot was about “cpu_rec.py”. This tool has been developed to help in the reconnaissance of architectures in binary files. A binary file contains instructions to be executed by a CPU (like ELF or PE files). But not only files: it is also interesting to recognise firmwares or memory dumps. At the moment, cpu_rec.py recognises 72 types of architectures. The tool is available here.

And we continue with another tool using machine learning. “Le Machine Learning confronté aux contraintes opérationnelles des systèmes de détection” was presented by Anaël Bonneton and Antoine Husson. The purpose is to detect intrusions based on machine learning. The classic approach is to work with systems based on signatures (like IDS). Those rules are developed by experts but can quickly become obsolete to detect newer attacks. Can machine learning help? Anaël and Antoine explained the tool that they developed (“SecuML”) but also the process associated with it. Indeed, the tool must be used in a first phase to learn from samples. The principle is to use a “classifier” that takes files as input (PDF, PE, …) and returns the conclusions as output (malicious or not malicious). The tool is based on the scikit-learn Python library and is also available here.

Then, Eric Leblond came on stage to talk about… Suricata of course! His talk title was “À la recherche du méchant perdu”. Suricata is a well-known IDS solution that doesn’t have to be presented. This time, Eric explained a new feature that has been introduced in Suricata 4.0: a new “target” keyword available in the JSON output. The idea arose during a blue team / red team exercise. The challenge of the blue team was to detect attackers, and it was discovered that it’s not so easy. With classic rules, the source of the attack is usually the source of the flow, but it’s not always the case. A good example is a web server returning a 4xx or 5xx error: in this case, the attacker is the destination. The goal of the new keyword is to produce better graphs to visualise attacks. This patch must still be approved and merged into the code. It will also require updating the rules.

The next talk was the only one in English: “Deploying TLS 1.3: the great, the good and the bad: Improving the encrypted web, one round-trip at a time” by Filippo Valsorda & Nick Sullivan. After a brief review of the TLS 1.2 protocol, the new version was reviewed. You must know that, while TLS 1.0, 1.1 and 1.2 were very close to each other, TLS 1.3 is a big jump! Many changes in the implementation were reviewed. If you’re interested, here is a link to the specifications of the new protocol.

After a talk about crypto, we switched immediately to another domain which also uses a lot of abbreviations: telecommunications. The presentation was performed by Olivier Le Moal, Patrick Ventuzelo and Thomas Coudray and was called “Subscribers remote geolocation and tracking using 4G VoLTE enabled Android phone”. VoLTE means “Voice over LTE” and is based on VoIP protocols like SIP. This protocol is already implemented by many operators around the world and, if your mobile phone is compatible, allows you to make better calls. But the speakers also found some nasty stuff. They explained how VoLTE works but also how it can leak the position (geolocation) of your contact just by sending a SIP “INVITE” request.

To complete the first half-day, a funny presentation was made about drones. For many companies, drones are seen as evil stuff and must be prohibited from flying over certain places. The goal of the presented tool is just to prevent drones from flying over a specific place and (maybe) spying. There are already solutions for this: DroneWatch, eagles, DroneDefender or SkyJack. What’s new with DroneJack? It focuses on drones communicating via Wi-Fi (like the Parrot models). Basically, a drone is a flying access point. It is possible to detect them based on their SSIDs and MAC addresses using a simple airodump-ng. Based on the signal, it is also possible to estimate how far away the drone is flying. As the technology is based on Wi-Fi, there is nothing brand new. If you are interested, the research is available here.

When you have had your lunch, what do you usually do? You brush your teeth. Normally, it’s not dangerous, but if your toothbrush is connected, it can be worse! Axelle Apvrille presented her research about a connected toothbrush provided by a health insurance company in the USA. The device is connected to a smartphone using a BTLE connection and exchanges a lot of data. Of course, without any authentication or any other security control. The toothbrush even continues to expose its MAC address via Bluetooth all the time (you have to remove the battery to turn it off). Axelle did not attack the device itself but reversed the mobile application and the protocol used to communicate between the phone and the brush. She wrote a small tool to communicate with the brush. But she also wrote an application to simulate a rogue device and communicate with the mobile phone. The next step was of course to analyse the communication between the mobile app and the cloud provided by the health insurance company. She found many vulnerabilities that led to the download of private data (up to pictures of kids!). When she reported the vulnerability, her account was just closed by the company! Big fail! As you pay less for your insurance by brushing your teeth correctly, it’s possible to send fake data to get a nice discount. Excellent presentation from Axelle…

To close the event, the ANSSI came on stage to present a post-incident review of the major security breach that affected the French TV channel TV5 in 2015. Just to remind you: the channel was completely compromised, up to affecting the broadcast of programs for several days. The presentation was excellent for everybody interested in forensic investigation and incident handling. In the first part, the timeline of all events that led to the full compromise was reviewed. To summarise, the overall security level of TV5 was very poor and nothing fancy was used to attack them: contractor’s credentials used, lack of segmentation, default credentials used, exposed RDP server on the Internet, etc. An interesting fact was the deletion of firmwares on switches and routers that prevented them from rebooting properly, causing a major DoS. They also deleted VMs. The second part of the presentation was used to explain all the weaknesses and how to improve / fix them. It was an awesome presentation!

My first edition of SSTIC is now over but I hope not the last one!

[The post SSTIC 2017 Wrap-Up Day #3 has been first published on /dev/random]

June 08, 2017

Here is my wrap-up for the second day. From my point of view, the morning sessions were quite hard with a lot of papers based on hardware research.

Anaïs Gantet started with “CrashOS : recherche de vulnérabilités système dans les hyperviseurs”. The motivations behind this research are multiple: virtualization of computers is everywhere today, not only on servers but also on workstations. The challenge for the hypervisor (the key component of a virtualization system) is to simulate the exact same hardware platform (same behaviour) for the software. It virtualizes access to the memory and devices. But do they do this in the right way? Hypervisors are software, and software has bugs. The approach explained by Anaïs is to build a very specific light OS that can perform a bunch of tests, like fuzzing, against hypervisors. The name of this OS is, logically, “CrashOS“. It proposes an API that is used to script test scenarios. Once booted, tests are executed and results are sent to the display but also to the serial port for debugging purposes. Anaïs demonstrated some tests. Up to today, the following hypervisors have been tested: Ramooflax (a project developed by Airbus Security), VMware and Xen. Some tests that returned errors:

  • On VMware, a buffer overflow and crash of the VM when writing a big buffer to COM1.
  • On Xen, a “FAR JMP” instruction should generate a “general protection” failure but it’s not the case.

CrashOS is available here. A nice presentation to start the day!

The next presentation went deeper and focused again on the hardware, more precisely the PCI Express bus that we find in many computers. The title was “ProTIP: You Should Know What To Expect From Your Peripherals” by Marion Daubignard and Yves-Alexis Perez. Why could it be interesting to keep an eye on our PCIe extensions? Because they all run some software and usually have direct access to the host computer resources like memory (for performance reasons). What if the firmware of your NIC contained some malicious code and searched for data in the memory? They described the PCIe standard, which can be seen as a network with CPU, memory, a PCI hierarchy (a switch) and a root complex. How to analyse all the flows passing over a PCIe network? The challenge is to detect the possible paths and alternatives. The best language to achieve this is Prolog (a very old language that I haven’t used since my studies), but it is still alive. The tool is called “ProTIP” for “Prolog Tester for Information Flow in PCIe networks” and is available here. The topic was interesting once you realise what a PCIe extension could do.

Then, we got a presentation from Chaouki Kasmi, José Lopes Esteves, Mathieu Renard and Ryad Benadjila: “From Academia to Real World: a Practical Guide to Hitag-2 RKE System Analysis“. The talk was dedicated to the Hitag-2 protocol used by the “Remote Keyless Entry” systems of our modern cars. Research in this domain is not brand new; there was already a paper on it presented at Usenix. This talk really focused on Hitag-2 (crypto) and was difficult for me to follow.

After the lunch break, Clémentine Maurice talked about accessing the host memory from a browser with JavaScript: “De bas en haut : attaques sur la microarchitecture depuis un navigateur web“. She started with a deeply detailed review of how DRAM memory works and how read operations make use of a “row buffer” (like a cache). The idea is to be able to detect key presses in the URL bar of Firefox. The amount of work is pretty awesome from an educational point of view, but I have just one question: how to use this in the real world? If you’re interested, all of Clémentine’s publications are available here.

The next talk was interesting for people working on the blue team side. Binacle is a tool developed to perform full-text searches on binaries. Guillaume Jeanne explained why full-text search is important and why classic methods to index binary files fail. The goal is not only to index “strings” like IP addresses and FQDNs but also sequences of bytes. After testing several solutions, he found a good one which is not too resource-consuming. The tool and its features were explained, including the YARA integration (also a feature to generate new signatures). To be tested for sure! Binacle is available here.

The next tool presented was YaCo: “Rétro-Ingénierie Collaborative” by Benoît Amiaux, Frédéric Grelot, Jérémy Bouétard, Martin Tourneboeuf and Valerian Comiti. YaCo means “Yet another collaborative tool”. The idea is to add a “multi-user” layer to the IDA debugger. By default, users have a local DB used by IDA. The idea is to sync those databases via a GitLab server. The synchronisation is performed via a Python plugin. They made a cool demo. YaCo is available here.

Sibyl was the last tool presented today, by Camille Mougey. Sibyl helps to identify libraries used in malicious code. Based on Miasm2, it identifies functions and their side effects. More information is available in the GitHub repository.

The next talk was about the Android platform: “Breaking Samsung Galaxy Secure Boot through Download mode” presented by Frédéric Basse. He explained the attacks that can be launched against the bootloader of a Samsung Galaxy smartphone via a bug.

Finally, a non-technical talk was presented by Martin Untersinger: “Oups, votre élection a été piratée… (Ou pas)”. Martin is a journalist working for the French newspaper “Le Monde”. He already spoke at SSTIC two years ago and came back today to give his view of the “buzz” around the hacking of election processes around the world. Indeed, today when elections are organised, there are often rumours that the democratic process has been altered by state-sponsored hackers. It started in the US and also reached France with the Macronleak. A first fact reported by Martin is that information security today goes way beyond the “techies”… Today all citizens are aware that bad things may happen. It’s not only a technical issue but also a geopolitical one. Therefore, it is very interesting for journalists. Authorities do not always disclose information about the attribution of an attack because it can be wrong and alter the democratic process of elections. Today documents are published but the attribution remains a political decision. It’s touchy and may lead to diplomatic issues. Journalists are also facing challenges:

  • Publish leaked docs or not?
  • Are they real or fake?
  • Ignore the information or maybe participate in the disinformation campaign?

But it is clear that good communication is a key element.

The day closed with the second rump session, with a lot of submissions (21!). Amongst them, some funny ideas like using machine learning to generate automatic paper submissions to the SSTIC call for papers, an awesome analysis of the leaked LinkedIn passwords, connected cars, etc… Everybody moved to the city centre to attend the social event with nice food, drinks and lots of interesting conversations.

Today, a lot of tools were presented. The balance between the two types of presentations is interesting. Indeed, while pure security research is interesting, sometimes it is very difficult to use it in the real context of an information system. The presented tools, however, were quick talks with facts and solutions that can be easily implemented.

[The post SSTIC 2017 Wrap-Up Day #2 has been first published on /dev/random]

June 07, 2017

I’m in Rennes, France to attend my very first edition of the SSTIC conference. SSTIC is an event organised in France, by and for French people. The acronym means “Symposium sur la sécurité des technologies de l’information et des communications“. The event has a good reputation for its content but is also known for how quickly tickets sell out: usually, all of them are sold in a few minutes, spread across 3 waves. I was lucky to get one this year. So, here is my wrap-up! This is already the fifteenth edition, with a new venue to host 600 security people. A live stream is also available and a few hundred people are following the talks remotely.

The first presentation was performed by Octave Klaba, the CEO of OVH. OVH is a key player on the Internet with many services. It is known via the BGP AS16276. Octave started with a complete overview of the backbone that he built from scratch a few years ago. Today, it has a capacity of 11 Tbps and handles 2500 BGP sessions. It’s impressive how this CEO knows his “baby”. The next part of the talk was a deep description of their solution “VAC”, deployed to handle DDoS attacks. For information, OVH handles ~1200 attacks per day! They usually don’t communicate about them, except if some customers are affected (the case of Mirai was provided as an example by Octave). They chose the name “VAC” for “Vacuum Cleaner“. The goal is to clean the traffic as soon as possible before it enters the backbone. An interesting fact about anti-DDoS solutions: it is critical to detect attacks as soon as possible. Why? If your solution detects a DDoS within x seconds, attackers will launch attacks shorter than x seconds. Evil! The “VAC” can be seen as a big proxy and is based on multiple components that can filter specific types of protocols/attacks. Interesting: to better handle some DDoS attacks, the OVH teams reversed some gaming protocols to better understand how they work. Octave described in deep detail how the solution has been implemented and is used today… for any customer! This is a free service! It was really crazy to get so many technical details from a… CEO! Respect!

The second talk was “L’administration en silo” by Aurélien Bordes and focused on some best practices for Windows services administration. Aurélien started with a fact: when you ask a company how its infrastructure is organised, they usually speak about users, data, computers, partners, but… they don’t mention administrative accounts. From where and how are all the resources managed? Basically, there are three categories of assets. They can be classified based on colours or tiers:

  • Red: resources for admins
  • Yellow: core business
  • Green: computers

The most difficult layer to protect is… the yellow one. After some facts about the security of AD infrastructures, Aurélien explained how to improve the Kerberos protocol. The solution is based on FAST, a framework to improve the Kerberos protocol. Another interesting tool developed by Aurélien: the Terminal Server Security Auditor. Interesting presentation, but my conclusion is that it increases the complexity of Kerberos, which is already not easy to master.

During the previous talk, Aurélien presented a slide with potential privilege escalation issues in an Active Directory environment. One of them was the WSUS server. It was the topic of the research presented by Romain Coltel and Yves Le Provost. During a pentest engagement, they compromised a network “A” but they also discovered a network “B” completely disconnected from “A”. Completely? Not really, there were WSUS servers communicating between them. After a quick recap of the WSUS server and its features, they explained how they compromised the second network “B” via the WSUS server. Such a server is based on three major components:

  • A Windows service to sync
  • A web service to talk to clients (configs & push packages)
  • A big database

This database is complex and contains all the data related to patches and systems. Attacking a WSUS server is not new. In 2015, there was a presentation at Black Hat which demonstrated how to perform a man-in-the-middle attack against a WSUS server. But today, Romain and Yves used another approach. They wrote a tool to directly inject fake updates into the database. The important step is to use the stored procedures so as not to break the database integrity. Note that the tool has a “social engineering” approach: fake info about the malicious patch can be injected too, to entice the admin to allow the installation of the fake patch on the target system(s). To be deployed, the “patch” must be a binary signed by Microsoft. Good news: plenty of signed tools can be used to perform malicious tasks. They used the psexec tool for the demo:

psexec -> cmd.exe -> net user /add

Since the DB is synced between different WSUS servers, it was possible to compromise network “B”. The tool they developed to inject data into the WSUS database is called WSUSpendu. A good recommendation is to put WSUS servers in the “red” zone (see above) and to consider them as critical assets. Very interesting presentation!

After two presentations focusing on the Windows world, back to the UNIX world and more precisely Linux with the init system called systemd. Since it was implemented in major Linux distributions, systemd has been at the centre of huge debates between the pro-init and pro-systemd camps. Same for me: I found it not easy to use, it introduces complexity, etc… But the presentation gave nice tips that could be used to improve the security of daemons started via systemd. A first and basic tip is to not use the root account, but many new features are really interesting (a minimal unit-file sketch follows the list):

  • seccomp-bpf can be used to disable access to certain syscalls (like chroot() or obsolete syscalls)
  • capabilities can be disabled (e.g. CAP_NET_BIND_SERVICE)
  • mount namespaces (e.g. /etc/secrets is not visible to the service)
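
A hypothetical hardened unit file illustrating those tips could look like this (the directive names are real systemd options, the values are only examples):

    [Service]
    # don't run the daemon as root
    User=mydaemon
    # seccomp-bpf: block chroot() and obsolete syscalls
    SystemCallFilter=~chroot @obsolete
    # drop a capability
    CapabilityBoundingSet=~CAP_NET_BIND_SERVICE
    # mount namespace: hide a directory from the service
    # (older systemd versions use InaccessibleDirectories=)
    InaccessiblePaths=/etc/secrets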

Nice quick tips that can be easily implemented!

The next talk was about Landlock by Michael Salaün. The idea is to build a sandbox with unprivileged access rights and to run your application in this restricted space. The perfect example used by Michael is a multimedia player. This kind of application includes many parsers and is, therefore, a good candidate for attacks or bugs. The recommended solution is, as always, to write good (read: safe) code, and the sandbox must be seen as an extra security control. Michael explained how the sandbox works and how to implement it. The example with the media player was to disable its write access to the filesystem except if the file is a pipe.

After the lunch, a set of talks was scheduled around the same topic: code analysis. It started with “Static Analysis and Run-time Assertion checking” by Dillon Pariente and Julien Signoles. They presented Frama-C, a framework for C code analysis.

Then Philippe Biondi, Raphaël Rigo, Sarah Zennou and Xavier Mehrenberger presented BinCAT (“Binary Code Analysis Tool”). It can analyse binaries (x86 only) but will never execute code. Just by checking the memory, the registers and much other stuff, it can deduce a program’s behaviour. BinCAT is integrated into IDA. They performed a nice demo of a keygen tool. BinCAT is available here and can also be executed in a Docker container. The last talk in this set was “Désobfuscation binaire: Reconstruction de fonctions virtualisées” by Jonathan Salwan, Marie-Laure Potet and Sébastien Bardin. The principle of this binary protection is to make a binary more difficult to analyse/decode without changing its original capabilities. This is not the same as a packer: here there is some kind of virtualization that emulates proprietary bytecode. Those three presentations represented a huge amount of work but were too specific for me.

Then, Geoffroy Couprie and Pierre Chifflier presented “Writing parsers like it is 2017“. Writing parsers is hard. Just don’t try to write your own parser; you’ll probably fail. But parsers are present in many applications. They are hard to maintain (old code, handwritten, hard to test & refactor). Issues in parsers can have huge security impacts, just remember the Cloudbleed bug! The proposed solution is to replace classic parsers by something stronger. The criteria are: it must be memory safe, it can be called by / can call C code and, if possible, there is no garbage collection process. Rust is a language that makes it possible to develop parsers like nom. To test it, it has been used in projects like the VLC player and the Suricata IDS. Suricata was a good candidate with many challenges: safety, performance. The candidate protocol was TLS. About VLC and parsers, the recent vulnerability affecting the subtitle parser is a perfect example of why parsers are critical.

The last talk of the day was about caradoc. Developed by the ANSSI (the French agency), it’s a toolbox able to decode PDF files. The goal is not to extract and analyse potentially malicious streams from PDF files; like the previous talk, the main idea is to avoid parsing issues. After reviewing the basics of the PDF file format, Guillaume Endignoux and Olivier Levillain made two demos. The first one was to open the same PDF file in two readers (Acrobat and Foxit): the displayed content was not the same. This could be used in phishing campaigns or to defeat the analyst. The second demo was a malicious PDF file that crashed Foxit but not Adobe (DoS). Nice tool.

The day ended with a “rump” session (also called lightning talks at other conferences). I’m really happy with the content of the first day. Stay tuned for more details tomorrow! If you want to follow the live talks, the streaming is available here.

[The post SSTIC 2017 Wrap-Up Day #1 has been first published on /dev/random]

Many organizations struggle with the all-time increase in IP address allocation and the accompanying need for segmentation. In the past, governing the segments within the organization meant keeping close control over the service deployments, firewall rules, etc.

Lately, the idea of micro-segmentation, supported through software-defined networking solutions, seems to defy the need for segmentation governance. However, I think that is a very short-sighted sales proposition. Even with micro-segmentation, or even pure point-to-point / peer-to-peer communication flow control, you'll still need a high-level overview of the services within your scope.

In this blog post, I'll give some insights into how we are approaching this in the company I work for. In short, it starts with requirements gathering, creating labels to assign to deployments, creating groups based on one or two labels in a layered approach, and finally fixing the resulting schema and starting to map guidance documents (policies) onto the presented architecture.

As always, start with the requirements

From an infrastructure architect's point of view, creating structure is one way of dealing with the onslaught of complexity that is prevalent within the wider organizational architecture. By creating a framework in which infrastructural services can be positioned, architects and other stakeholders (such as information security officers, process managers, service delivery owners, project and team leaders ...) can support the wider organization in its endeavor to become or remain competitive.

Structure can be provided through various viewpoints. As such, while creating such a framework, the initial intention is not to start drawing borders or creating a complex graph. Instead, look at the attributes that one would assign to an infrastructural service, and treat those as labels. Create a nice portfolio of attributes which will help guide the development of such a framework.

The following list gives some ideas for labels or attributes that one can use; a small sketch of how such labels could be attached to deployments follows the list. But be creative, and involve experienced people in devising the "true" list of attributes that fits the needs of your organization. Be sure to describe them properly and unambiguously - the list here is just an example, as are the descriptions.

  • tenant identifies the organizational aggregation of business units which are sufficiently similar in areas such as policies (same policies in use), governance (decision bodies or approval structure), charging, etc. It could be a hierarchical aspect (such as organization) as well.
  • location provides insight into the physical (if applicable) location of the service. This could be an actual building name, but can also be structured depending on the size of the environment. If it is structured, make sure to devise a structure up front. Consider things such as regions, countries, cities, data centers, etc. A special case of a location value could be the jurisdiction, if that is something that concerns the organization.
  • service type tells you what kind of service an asset is. It can be a workstation, a server/host, a server/guest, a network device, a virtual or physical appliance, a sensor, a tablet, etc.
  • trust level provides information on how controlled and trusted the service is. Consider the differences between unmanaged (no patching, no users doing any maintenance), open (one or more admins, but no active controlled maintenance), controlled (basic maintenance and monitoring, but still with administrative access by others), managed (actively maintained, no privileged access without strict control), hardened (actively maintained, additional security measures taken) and kiosk (actively maintained, additional security measures taken and limited, well-known interfacing).
  • compliance set identifies specific compliance-related attributes, such as the PCI-DSS compliance level that a system has to adhere to.
  • consumer group informs about the main consumer group active on the service. This could be an identification of the relationship that the consumer group has with the organization (anonymous, customer, provider, partner, employee, ...) or the actual name of the consumer group.
  • architectural purpose gives insight into the purpose of the service in infrastructural terms. Is it a client system, a gateway, a mid-tier system, a processing system, a data management system, a batch server, a reporting system, etc.?
  • domain could be interpreted as the company purpose of the system. Is it for commercial purposes (such as customer-facing software), corporate functions (company management), development, infrastructure/operations ...
  • production status provides information about the production state of a service. Is it a production service, or a pre-production (final testing before going to production), staging (aggregation of multiple changes) or development environment?
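
As mentioned above, here is a minimal sketch (in Python, with purely hypothetical attribute names and values rather than the "true" list your organization should devise) of how such labels could be attached to deployments as plain structured data:

    from dataclasses import dataclass

    # Hypothetical label set for a single deployment; the attribute names and
    # values below are examples only, not the "true" list an organization
    # should devise for itself.
    @dataclass(frozen=True)
    class ServiceLabels:
        service: str
        tenant: str
        location: str
        service_type: str
        trust_level: str
        consumer_group: str
        architectural_purpose: str
        domain: str
        production_status: str

    portal = ServiceLabels(
        service="hr-enterprise-portal",
        tenant="group",
        location="be/brussels/dc1",
        service_type="server/guest",
        trust_level="managed",
        consumer_group="employee",
        architectural_purpose="mid-tier",
        domain="corporate",
        production_status="production",
    )

    webshop = ServiceLabels(
        service="webshop-frontend",
        tenant="group",
        location="be/brussels/dc2",
        service_type="server/guest",
        trust_level="hardened",
        consumer_group="anonymous",
        architectural_purpose="mid-tier",
        domain="commercial",
        production_status="production",
    )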

Given the final set of labels, the next step is to aggregate results to create a high-level view of the environment.

Creating a layered structure

Chances are high that you'll end up with several attributes, and many of these will have multiple possible values. What we don't want is to end up with an N-dimensional infrastructure architecture overview. Sure, it sounds sexy to do so, but you want to show the infrastructure architecture to several stakeholders in your organization. And projecting an N-dimensional structure onto a 2-dimensional slide is not only challenging - you'll possibly create a projection which leaves out important details or makes it hard to interpret.

Instead, we looked at a layered approach, with each layer handling one or two requirements. The top layer represents the requirement which the organization seems to see as the most defining attribute. It is the attribute where, if its value changes, most of the service's architecture changes (and thus the impact of a service relocation is the largest).

Suppose for instance that the domain attribute is seen as the most defining one: the organization has strict rules about placing corporate services and commercial services in separate environments, or the security officers want to see the commercial services, which are exposed to many end users, placed in a separate environment from the corporate services. Or perhaps the company offers commercial services to multiple tenants, and as such wants several separate "commercial services" environments while having a single corporate services domain.

In this case, part of the infrastructure architecture overview on the top level could look like so (hypothetical example):

Top level view

This also shows that, next to the corporate and commercial interests of the organization, a strong development support focus is prevalent as well. This of course depends on the type of organization or company and how significant in-house development is, but in this example it is seen as a major decisive factor for service positioning.

These top-level blocks (depicted as locations, for those of you using Archimate) are what we call "zones". These are not networks, but clearly bounded areas in which multiple services are positioned, and for which particular handling rules exist. These rules are generally written down in policies and standards - more about that later.

Inside each of these zones, a substructure is made available as well, based on another attribute. For instance, let's assume that this is the architectural purpose. This could be because the company has a requirement to segregate workstations and other client-oriented zones from the application-hosting related ones. Security-wise, the company might have a principle where mid-tier services (API and presentation layer exposures) are separated from processing services, and where data is located in a separate zone to ensure specific data access or more optimal infrastructure services.

This zoning result could then be depicted as follows:

Detailed top-level view

From this viewpoint, we can also deduce that this company provides separate workstation services: corporate workstation services (most likely managed workstations with focus on application disclosure, end user computing, etc.) and development workstations (most likely controlled workstations but with more open privileged access for the developer).

By making this separation explicit, the organization makes it clear that the development workstations will have a different position, and even a different access profile toward other services within the company.

We're not done yet. For instance, on the mid-tier level, we could look at the consumer group of the services:

Mid-tier explained

This separation can be established due to security reasons (isolating services that are exposed to anonymous users from customer services or even partner services), but one can also envision this to be from a management point of view (availability requirements can differ, capacity management is more uncertain for anonymous-facing services than authenticated, etc.)

Going one layer down, we use a production status attribute as the defining requirement:

Anonymous user detail

At this point, our company decided that the defined layers are sufficiently established and make for a good overview. We used different defining properties than the ones displayed above (again, find a good balance that fits the company or organization that you're focusing on), but found that the ones we used were mostly involved in existing policies and principles, while the other ones are not that decisive for infrastructure architectural purposes.

For instance, the tenant might not be selected as a deciding attribute, because there will be larger tenants and smaller tenants (which could make the resulting zone set very convoluted) or because some commercial services are offered to multiple tenants and the organization's strategy would be to move toward multi-tenant services rather than multiple deployments.

Now, in the zoning structure there is still another layer, which from an infrastructure architecture point of view is less about rules and guidelines and more about manageability from an organizational point of view. For instance, in the above example, a SAP deployment for HR purposes (which is obviously a corporate service) might have its Enterprise Portal service in the Corporate Services > Mid-tier > Group Employees > Production zone. However, another service such as an on-premise SharePoint deployment for group collaboration might be in the Corporate Services > Mid-tier > Group Employees > Production zone as well. Yet both services are supported by different teams.

This "final" layer thus enables grouping of services based on the supporting team (again, this is an example), which is organizationally aligned with the business units of the company, and potentially further isolation of services based on other attributes which are not defining for all services. For instance, the company might have a policy that services with a certain business impact assessment score must be in isolated segments with no other deployments within the same segment.

What about management services

Now, the above picture is missing some of the (in my opinion) most important services: infrastructure support and management services. These services do not shine in functional offerings (which many non-IT people generally look at) but are needed for non-functional requirements: manageability, cost control, security (if security can be defined as a non-functional - let's not discuss that right now).

Let's first consider interfaces - gateways and other services which are positioned between zones or the "outside world". In the past, we would speak of a demilitarized zone (DMZ). In more recent publications, one can find this as an interface zone, or a set of Zone Interface Points (ZIPs) for accessing and interacting with the services within a zone.

In many cases, several of these interface points and gateways are used in the organization to support a number of non-functional requirements. They can be used for intelligent load balancing, providing virtual patching capabilities, validating content against malware before passing it on to the actual services, etc.

Depending on the top level zone, different gateways might be needed (i.e. different requirements). Interfaces for commercial services will have a strong focus on security and manageability. Those for the corporate services might be more integration-oriented, and have different data leakage requirements than those for commercial services.

Also, inside such an interface zone, one can imagine a substructure to take place as well: egress interfaces (for communication that is exiting the zone), ingress interfaces (for communication that is entering the zone) and internal interfaces (used for routing between the subzones within the zone).

Yet, there will also be requirements which are company-wide. Hence, one could envision a structure where there is a company-wide interface zone (with mandatory requirements regardless of the zone that they support) as well as a zone-specific interface zone (with the mandatory requirements specific to that zone).

Before I show a picture of this, let's consider management services. Unlike interfaces, these services are more oriented toward the operational management of the infrastructure. Software deployment, configuration management, identity & access management services, etc. are services one can put under management services.

And like with interfaces, one can envision the need for both company-wide management services, as well as zone-specific management services.

This information brings us to a final picture, one that assists the organization in providing a more manageable view of its deployment landscape. It does not show the 3rd layer (i.e. production versus non-production deployments) and only displays the second layer through specialization information, for which I've quickly made up a few examples (you don't want to make such decisions in a few hours, like I did for this post).

General overview

If the organization took an alternative approach for structuring (different requirements and grouping) the resulting diagram could look quite different:

Alternative general overview

Flows, flows and more flows

With the high-level picture ready, it is not a bad idea to look at how flows are handled in such an architecture. As the interface layer is available at both the company-wide level and the next one down, flows will cross multiple zones.

Consider the case of a corporate workstation connecting to a reporting server (like a Cognos or PowerBI or whatever fancy tool is used), and this reporting server is pulling data from a database system. Now, this database system is positioned in the Commercial zone, while the reporting server is in the Corporate zone. The flows could then look like so:

Flow example

Note for the Archimate people: I'm sorry that I'm abusing the flow relation here. I didn't want to create abstract services in the locations, use the "serves" or "used by" relation and then have to explain to readers that the arrows are the inverse of what they imagine.

In this picture, the corporate workstation does not connect to the reporting server directly. It goes through the internal interface layer for the corporate zone. This internal interface layer can offer services such as reverse proxies or intelligent load balancers. The idea here is that, if the organization wants, it can introduce additional controls or supporting services in this internal interface layer without impacting the system deployments themselves much.

But the true flow challenge is in the next one, where a processing system connects to a data layer. Here, the processing server will first connect to the egress interface of the corporate zone, then go through the company-wide internal interface, toward the ingress interface of the commercial zone and finally to the data layer.

Now, why three different interfaces, and what would be inside them?

On the corporate level, the egress interface could be focusing on privacy controls or data leakage controls. On the company-wide internal interface more functional routing capabilities could be provided, while on the commercial level the ingress could be a database activity monitoring (DAM) system such as a database firewall to provide enhanced auditing and access controls.
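
Purely as an illustration (hypothetical names, no claim about any real setup), that flow can be written down as an ordered list of interface hops together with the control each hop could host:

    # Functional view of the processing-to-data flow described above: each hop
    # is an interface zone the connection passes through, together with the
    # control it could host. Hops may be simple pass-throughs if the
    # organization agrees.
    PROCESSING_TO_DATA_FLOW = [
        ("corporate egress interface",      "privacy / data leakage controls"),
        ("company-wide internal interface", "functional routing"),
        ("commercial ingress interface",    "database activity monitoring (DAM)"),
        ("commercial data layer",           "target database service"),
    ]

    for hop, control in PROCESSING_TO_DATA_FLOW:
        print(f"{hop:33} -> {control}")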

Does that mean that all flows need to have at least three gateways? No, this is a functional picture. If the organization agrees, then one or more of these interface levels can have a simple pass-through setup. It is well possible that database connections only connect directly to a DAM service and that such flows are allowed to immediately go through other interfaces.

The point thus is not to make flows more difficult to provide, but to offer several areas where the organization can introduce controls.

Making policies and standards more visible

One of the effects of having a better structure for the company-wide deployments (i.e. a good zoning solution) is that one can start making policies clearer, and potentially even simple to implement with supporting tools (such as software-defined networking solutions).

For instance, a company might want to protect its production data and establish that it cannot be used for non-production use, but that there are no restrictions for the other data environments. Another rule could be that web-based access toward the mid-tier is only allowed through an interface.

These are simple statements which, if a company has a good IP plan, are easy to implement - one doesn't need zoning, although it helps. But it goes further than access controls.

For instance, the company might require corporate workstations to be under heavy data leakage prevention and protection measures, while developer workstations are more open (but don't have any production data access). This not only reveals an access control, but also implies particular minimal requirements (for the Corporate > Workstation zone) and services (for the Corporate interfaces).

This zoning structure does not necessarily make any statements about the location (assuming it isn't picked as one of the requirements in the beginning). One can easily extend this to include cloud-based services or services offered by third parties.

Finally, it also supports making policies and standards more realistic. I often see policies that make bold statements such as "all software deployments must be done through the company software distribution tool", but these policies don't consider development environments (production status) or unmanaged, open or controlled deployments (trust level). When challenged, the policy owner might shrug off the comment with "it's obvious that this policy does not apply to our sandbox environment" or the like.

With a proper zoning structure, policies can establish the rules for the right set of zones, and actually pinpoint which zones are affected by a statement. This is also important if a company has many, many policies. With a good zoning structure, the policies can be assigned meta-data so that affected roles (such as project leaders, architects, solution engineers, etc.) can easily get an overview of the policies that influence a given zone.
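
As a rough, hypothetical sketch (made-up policy names and zone prefixes, in Python) of what such policy meta-data could look like and how a zone's applicable policies could be filtered:

    # Hypothetical policy meta-data: each policy lists the zone paths (or
    # prefixes) it applies to; a "*" scope means company-wide.
    POLICY_SCOPES = {
        "software distribution through the central tool": ["Corporate Services > Workstation"],
        "data leakage prevention on workstations": ["Corporate Services > Workstation"],
        "no production data outside production zones": ["*"],
        "web access to mid-tier only through an interface": [
            "Corporate Services > Mid-tier",
            "Commercial Services > Mid-tier",
        ],
    }

    def policies_for_zone(zone: str) -> list:
        """Return the policies whose scope covers the given zone path."""
        return [policy for policy, scopes in POLICY_SCOPES.items()
                if any(scope == "*" or zone.startswith(scope) for scope in scopes)]

    print(policies_for_zone("Corporate Services > Mid-tier > Group Employees > Production"))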

For instance, if I want to position a new management service, I am less concerned about workstation-specific policies. And if the management service is specific for the development environment (such as a new version control system) many corporate or commercially oriented policies don't apply either.

Conclusion

The above approach for structuring an organization is documented here in a high-level manner. It makes many assumptions or hypothetical decisions which are to be tailored toward the company itself. In my company, a different zoning structure was selected, taking into account that it is a financial service provider with entities in multiple countries, handling several thousands of systems and with an ongoing effort to include cloud providers within its infrastructure architecture.

Yet the approach itself is followed in an equal fashion. We looked at requirements, created a layered structure, and finished the zoning schema. Once the schema was established, the requirements for all the zones were written out further, and a mapping of existing deployments (as-is) toward the new zoning picture is on-going. For those thinking that it is just slideware right now - it isn't. Some of the structures that come out of the zoning exercise are already prevalent in the organization, and new environments (due to mergers and acquisitions) are directed to this new situation.

Still, we know we have a large exercise ahead of us before it is finished, but I believe that it will benefit us greatly, not only from a security point of view, but also in terms of clarity and manageability of the environment.

June 06, 2017

In this post I review the source code of the Ayreon software. Well, actually not. This is a review of The Source, a progressive rock/metal album from the band Ayreon. Yes really. Much wow omg.

Overall rating

This album is awesome.

Like every Ayreon album, The Source features a crapton of singers, each portraying a character in the album's story, with songs always featuring multiple of them, often with interleaving lines. The mastermind behind Ayreon is Arjen Anthony Lucassen, who for each album borrows some of the most OP singers and instrumentalists from bands all over the place. Hence, if you are a metal or rock fan, you are bound to know some of the lineup.

What if you are not into those genres? I’ve seen Arjen described as a modern day Mozart and some of his albums as works of art. Art that can be appreciated even if you otherwise are not a fan of the genre.

Some nitpicks

The lyrics, while nice, are to me not quite as epic as those on the earlier Ayreon album 01011001. That is a high bar to set, since 01 is my favorite Ayreon album. (At least when removing 2 tracks from it that I really do not like. Which is what I did years ago, so I don’t remember what they are called. Which is bonus points for The Source, which has no tracks I dislike.) One of the things I really like about it is that some of the lyrics have characters with opposing moral or emotional stances debating which course of action to take.

These are some of the lyrics of The Fifth Extinction (a song from 01011001), with one singer's lines in green italics and the other's in red normal font:

I see a planet, perfect for our needs
behold our target, a world to plant our seeds
There must be life
first remove any trace of doubt!
they may all die
Don’t you think we should check it out?
We have no choice, we waited far too long
this is our planet, this is where they belong.
We may regret this,
is this the way it’s supposed to be?
A cold execution, a mindless act of cruelty!

I see mainly reptiles
A lower form or intelligence
mere brainless creatures
with no demonstrable sentience
What makes us superior
We did not do so great ourselves!
A dying race, imprisoned in restricted shells

<3

The only other nitpick I have is that the background tone in Star of Sirrah is kinda annoying when paying attention to the song.

Given my potato sophistication when it comes to music, I can’t say much more about the album. Go get a copy, it’s well worth the moneys.

The story

Now we get to the real reason I’m writing this review, or more honestly, this rant. If you’re not a Ayreon fanboy like me, this won’t be interesting for you (unless you like rants). Spoilers on the story in The Source ahead.

In summary, the story in The Source is as follows: There is a human civilization on a planet called Alpha and they struggle with several ecological challenges. To fix the problems they give control to Skynet (yes I will call it that). Skynet shuts everything down so everyone dies. Except a bunch of people who get onto a spaceship and find a home on a new planet to start over again. The story focuses on these people, how they deal with Skynet shutting everything down, their coming together, their journey to the new planet and how they begin new lives there.

Originality

It’s an OK story. A bit cheesy and not very original. Arjen clearly likes the “relying on machines is bad” topic, as evidenced by 01011001 and Victims of the Modern Age (Star One). When it comes to originality in similar concept albums, I think 01011001 does a way better job. The same goes for Cybion (Kalisia), which, although it focuses on different themes and was not created by Arjen (it does feature him very briefly), has a story with a similar structure. (Perhaps comparing with Cybion is a bit mean, since that bar is so high it’s hidden by the clouds.)

Consistency

Then there are some serious WTFs in the story. For instance, planet Alpha blows up after some time because the quantum powercore melts down due to the cooling systems being deactivated. Why would Skynet let that happen? If it can take on an entire human civilization, surely it knows what such a deactivation will result in? Why would it commit suicide, and fail its mission to deal with the ecological issues? Of course, this is again a bit nitpicky. Logic and consistency in the story are not the most important thing on such an album. Still, it bugs me.

Specificness

Another difference with 01011001 is that the story in The Source is more specific about things. If it was not, a lot of the WTFs would presumably be avoided, and you’d be more free to use your imagination. 01011001 tells the story of an aquatic race called the Forever and how they create humankind to solve their wee bit of an apathy problem. There are no descriptions of what the Forever look like, beyond them being an aquatic race, and no description of what their world and technology look like beyond the very abstract.

Take this line from Age of Shadows, the opening song of 01011001:

Giant machines blot out the sun

When I first heard this song, this line gave me the chills, as I was imagining giant machines in orbit around the system's star (kinda like a Dyson swarm), or perhaps just passing between the planet and the star, still at a significant distance from the planet. It took some months for me to realize that the lyrics' author probably was thinking of surface-based machines, which makes them significantly less giant and cool. The lyrics don’t specify that though.

The Ayreon story

Everything I described so far amounts to minor points for me. What really gets me is what The Source does to the overall Ayreon story. Let’s recap what it looked like before The Source:

The Forever lose their emotions and create the human race to fix themselves. Humanity goes WW3 and blows itself up, despite the Forever helping them to avoid this fate, and perhaps due to the Forever's meddling to accelerate human evolution. Still, the Forever are able to awaken their race through the memories of the humans.

Of course this is just a very high-level description, and there is much more to it than that. The Source changes this. It’s a prequel to 01011001 and reveals that the Forever are human, namely the humans that fled Alpha… Which turns the high-level story into:

Humans on Alpha fuck their environment through the use of technology and then build Skynet. Some of them run away to a new planet (the water world they call Y) and re-engineer themselves to live there. They manage to fuck themselves with technology again and decide to create a new human race on Earth. Those new humans also fuck themselves with technology.

So to me The Source ruins a lot of the story from the other Ayreon albums. Instead of having this alien race and the humans, each with their own problems, we now have just humans, who manage to fuck themselves over 3 times in a row and win the universe's biggest tards award. Great. #GrumpyJeroenIsGrumpy

Conclusion

Even with all the grump, I think this is an awesome album. Just don’t expect too much from the story, which is OK, but definitely not as great as the rest of the package. Go buy a copy.

June 02, 2017

I published the following diary on isc.sans.org: “Phishing Campaigns Follow Trends“.

Those phishing emails that we receive every day in our mailboxes are often related to key players in different fields (…) But the landscape of online services is ever changing and new actors (and more precisely their customers) become new interesting targets. Yesterday, while hunting, I found for the first time a phishing page trying to lure the Bitcoin operator: BlockChain. Blockchain[1] is a key player in the management of digital assets… [Read more]

[The post [SANS ISC] Phishing Campaigns Follow Trends has been first published on /dev/random]

June 01, 2017

I published the following diary on isc.sans.org: “Sharing Private Data with Webcast Invitations“.

Last week, at a customer, we received a forwarded email in a shared mailbox. It was somebody from another department that shared an invitation for a webcast “that could be interesting for you, guys!”. This time, no phishing attempt, no malware, just a regular email sent from a well-known security vendor. A colleague was interested in the webcast and clicked on the registration link… [Read more]

[The post [SANS ISC] Sharing Private Data with Webcast Invitations has been first published on /dev/random]

Every spring, members of Acquia's Product, Engineering and DevOps teams gather at our Boston headquarters for "Build Week". Build Week gives our global team the opportunity to meet face-to-face, to discuss our product strategy and roadmap, to make plans, and to collaborate on projects.

One of the highlights of Build Week is our annual Hackathon; more than 20 teams of 4-8 people are given 48 hours to develop any project of their choosing. There are no restrictions on the technology or solutions that a team can utilize. Projects ranged from an Amazon Dash Button that spins up a new Acquia Cloud environment with one click, to a Drupal module that allows users to visually build page layouts, to a proposed security solution that would automate pen testing against Drupal sites.

This year's projects were judged on innovation, ship-ability, technical accomplishment and flair. The winning project, Lift HoloDeck, was particularly exciting because it showcases an ambitious digital experience that is possible with Acquia and Drupal today. The Lift Holodeck takes a physical experience and amplifies it with a digital one using augmented reality. The team built a mobile application that superimposes product information and smart notifications over real-life objects that are detected on a user's smartphone screen. It enables customers to interact with brands in new ways that improve a customer's experience.

Lift holodeck banner

At the hackathon, the Lift HoloDeck Team showed how augmented reality can change how both online and physical storefronts interact with their consumers. In their presentation, they followed a customer, Neil, as he used the mobile application to inform his purchases in a coffee shop and clothing store. When Neil entered his favorite coffee shop, he held up his phone to the posted “deal of the day”. The Lift HoloDeck application superimposes nutrition facts, directions on how to order, and product information on top of the beverage. Neil contemplated the nutrition facts before ordering his preferred drink through the Lift HoloDeck application. Shortly after, he received a notification that his order was ready for pick up. Because Acquia Lift is able to track Neil's click and purchase behavior, it is also possible for Acquia Lift to push personalized product information and offerings through the Lift HoloDeck application.

Check out the demo video, which showcases the Lift HoloDeck prototype:

The Lift HoloDeck prototype is exciting because it was built in less than 48 hours and uses technology that is commercially available today. The Lift HoloDeck experience was powered by Unity (a 3D game engine), Vuforia (an augmented reality library), Acquia Lift (a personalization engine) and Drupal as a content store.

The Lift HoloDeck prototype is a great example of how an organization can use Acquia and Drupal to support new user experiences and distribution platforms that engage customers in captivating ways. It's incredible to see our talented teams at Acquia develop such an innovative project in under 48 hours; especially one that could help reshape how customers interact with their favorite brands.

Congratulations to the entire Lift HoloDeck team; Ted Ottey, Robert Burden, Chris Nagy, Emily Feng, Neil O'Donnell, Stephen Smith, Roderik Muit, Rob Marchetti and Yuan Xie.

May 30, 2017

A comparison of the Tediber, Eve and Ilobed mattresses

When replacing our mattress became urgent, rather than going to a bedding store, I quite naturally turned to the web, curious to see what was being done in this area.

I discovered that the mattress is a field buzzing with activity, with startups such as Tediber, Eve, Ilobed and Oscarsleep. But what does a mattress have to do with the Internet? What is the advantage of buying a mattress on the web?

Rest assured, no connected mattress here! The particularity of these mattress startups is that they share a similar concept, launched by Tuft & Needle in 2012 and popularized by Casper, two American startups: a single mattress model delivered rolled up and vacuum-packed, a 100-day trial period and a full refund if you are not satisfied.

Unlike with a mattress from a store, you can therefore really sleep on it for 100 nights before making your choice! Returned mattresses are donated to charities.

Each startup offers only one type of mattress, but in different sizes. The idea came to the founder of Casper when he realized that, in hotels, we generally sleep very well even though nobody ever asks which type of mattress we prefer. It should therefore be possible to create a "universal" mattress.

In short, once again the Internet proves that you can innovate without necessarily doing high tech or connected gadgets. There is not even an app, just a business concept that I simply had to test.

Tediber, the midnight blue one

Not knowing which one to choose, the technical characteristics being very similar, I turned to Twitter, where the community managers of Tediber and Eve battled it out one Sunday evening to convince me.

Not being able to test the mattresses, I was seduced by Tediber's image: technical but very classy, with a soft mix of white and dark blue evoking, for me, calm and sleep. The website, very simple and functional, puts the mattress and its qualities forward.

On Twitter, Tediber's community manager was very factual, describing the product. The Eve account, on the other hand, tended to compare with, or even denigrate, the competitors.

So I opted for a Tediber mattress and, let's say it right away, the product is gorgeous.

Tediber's great strength is its cover, which is simply sublime. On the bed-base side, the mattress has a particularly resistant anti-slip layer. On the sleeper's side, the mattress is incomparably soft and makes you regret having to put a sheet on it.

The mattress is particularly plush and gives a gentle impression of warmth when you sink into it. As this test was done in winter, I am curious to know how the mattress behaves in hot weather.

Because, unfortunately, I had to send the Tediber mattress back, as perfect as it is. The reason? My partner, pregnant at the time, and I were both being pulled toward the centre, where our weight created a slight dip.

Maybe this is due to the chosen size (140 cm wide)? Be that as it may, I decided, somewhat reluctantly, to return the Tediber mattress.

The communication, the return and the refund all went very smoothly.

Eve, the yellow one

My second choice logically went to Eve. As I was sharing my experience on Facebook, several of you said you were interested in a review, which you are reading right now. So I asked Eve whether they would be willing to give me a discount in exchange for my test.

They agreed to give me the "staff member" rate (-30%) and I ordered my Eve mattress.

Unlike Tediber, Eve is a disappointment.

The cover seems to be made of a cheap fabric and the yellow is absolutely garish. The mattress is far less plush than the Tediber but, at least, it does not sag in the middle. For my taste, it is either too soft or too firm. I cannot quite find the words, but I just do not feel good on it. My partner, who prefers a firm mattress, admits to feeling the same.

Two problems particularly irritate me: the mattress slides on the bed base, and the surface of the cover has a honeycomb texture that I find unbearable, despite the presence of a mattress protector and a sheet. (Eve tells me they have since fixed these two problems, which were recurring complaints.)

In short, I do not like the Eve mattress. The mere idea of using yellow to symbolize sleep, what a horror!

I decided to pack it up again, together with the mattress protector I had also ordered. But I was notified that the protector only comes with a 30-day trial, not 100 (the return request having been made around day 35). A bit silly…

While the communication, the return and the refund went well, I am left with a bad feeling about this experience. I do not like the colours, the somewhat confusing website that puts more emphasis on photos of undressed models than on the mattress, the approach bordering on that of a big industrial company, and a mattress that looks better in pictures than in real life. Note that the website has recently been simplified and the photos now focus on the mattress.

Ilobed, the white one

As a last resort, I turned to Ilobed, the last player, which had not taken part in the community manager war on Twitter.

And for good reason: unlike the two previous ones, Ilobed is self-funded and much smaller. I was in direct contact with Clément, the founder of Ilobed, who kindly answered all my questions and offered me a €150 discount when I told him I was going to write this article. I had had little interaction with them on Twitter because he cannot afford to spend his Sundays on social networks, and that is perfectly fine!

It was also Clément who called me directly when he realized that my order was going to Belgium, to an area where a possible return might be difficult or even impossible. He preferred to warn me and talk it over with me, and I appreciated that.

Ilobed goes for simpler and cheaper. The mattress is thinner because, according to Clément, thickness is just a fashion trend. The cover is all white, with a pleasant pattern.

Of the three, Ilobed is certainly the firmest. And we now sleep very well on it.

The only big downside: it slides almost as much as the Eve mattress. I ended up buying a €20 anti-slip underlay on Amazon, which worked miracles, but it is a pity. Not to mention that it is the only mattress for which the top and the bottom sides are not clear at all! An anti-slip layer would solve both problems.

The Ilobed mattress is clearly no Tediber: it is not thrilling, it is not plush. But its sobriety may be precisely its greatest asset. And it is the one we decided to keep.

Oscarsleep, the dark grey one

I must mention Oscarsleep, the Belgian player in the French-speaking mattress market. Oscarsleep was more expensive and offered a flippable mattress (you can sleep on both sides). As they never answered my requests, I did not test it. I note, however, that the price has since dropped and that the mattress has aligned itself with the technology of the other three, with a memory-foam layer on the sleeper's side.

I admit I would be very curious to try it in order to complete this test.

Conclusion

A mattress is something very subjective, and I know that hundreds of people adore their Eve mattress. But I hate it when this kind of comparison ends with a conclusion that is not one, saying that every option has plenty of qualities and not taking a firm stance.

So my conclusion is simple: if budget is not an issue for you, try Tediber. It is a thrilling mattress. If you were disappointed by Tediber, if you are looking for the best value for money, or if you favour sobriety and a firm mattress, go for Ilobed.

But the most important thing in this experience is not so much the mattress I chose. It is the realization that never again will I go back to a bedding store to test a mattress for three minutes, fully dressed. From now on, having 100 trial nights seems indispensable to me before buying a mattress. It may seem anecdotal, but this kind of innovation keeps widening the gap between the new world and the zombie companies.

In any case, I wish you a good night!

Important note: this blog is funded by those of you who pay what they want for each series of 5 posts. This happens on Tipeee or Patreon, and I can never thank you enough for your contributions. As this post has already been funded by a €150 discount on the purchase of my Ilobed mattress, it is free of charge and does not count toward the next series of 5.

Photo by Edgar Crook.

Did you enjoy this read? Support the author on Tipeee, Patreon, Paypal or Liberapay. Even a symbolic donation makes all the difference! Let's also meet on Facebook, Medium, Twitter or Mastodon.

This text is published under the CC-By BE license.

May 29, 2017

Tonight, I was invited by the OWASP Belgium Chapter (thank you again!) to present “something“. When I accepted the invitation, I did not really have an idea, so I decided to compile the findings of my research about webshells. They are common tools used by bad guys: once they have compromised a server, they often install a webshell, which is a kind of toolbox or RAT (“Remote Access Tool”). It’s very interesting to analyze how such interfaces are protected from unauthorized access, but also the mistakes that are present in their code. This is a very first version and more will come soon!

My slides are available on slideshare.net:

[The post HTTP… For the Good or the Bad has been first published on /dev/random]

Hardware

My old netbook is currently seven years old and shows its age: boot times of up to two minutes, and working in Chrome was a drag and took ages. And I'm not even talking about performing updates. All to blame on the slow CPU (never again an Atom!) and the slow hard drive. The last two occasions I used the laptop were at Config Management Days and the Red Hat Summit, and I can tell you the experience was unpleasant. So a new laptop was needed.

Luckily, the laptop market has reinvented itself after it collapsed during the rise of the tablet. Ultrabooks are now super slim, super light and extremely powerful. My new laptop needed to be:

  • fast: no Celeron or Atom chip allowed; an i5 as the minimum CPU
  • beautiful: I need a companion to my vanity. No plasticky stuff; well built and of good quality.
  • well supported by Linux: Linux would be installed, so the hardware needed to be supported
  • reasonably cheap: speaks for itself; a lot of nice ultrabooks are available, but I didn't want to pay an arm and a leg
  • light and small: I carry this everywhere around the world, so the laptop shouldn't weigh more than 1.4 kg

Soon, I saw two main candidates. First, the Dell XPS 13, which is still regarded as the ultrabook king. It supports Linux nicely and has that beautiful Infinity display. Disadvantages were that it was on the heavy side, and I wasn't a fan of its design either. And a tad on the expensive side as well. On the other hand, there was the Asus Zenbook 3 (UX390), which was stunningly beautiful, had a nice screen as well and was extremely light at 0.9 kg.

However, when I saw the silver variant in the shop, I found it a bit on the small side. So when I saw its 14 inch brother, the UX430UQ, I was immediately sold. This is a 14 inch laptop - it is advertised as a 13 inch laptop with a 14 inch screen, but don't believe that - which is as light as 1.25 kg, has a nice dark grey spun-metal exterior and an excellent keyboard and screen. Equipped with an i7 CPU and 16 GB of RAM, it doesn't fail to deliver on the performance front. A shame that Asus doesn't provide a sleeve with this laptop, as it does with the UX390. Also important: it doesn't have a security lock slot, so don't leave this baby unattended.

I wiped Windows 10 and booted the Fedora netinstall CD, but it seemed that both the WiFi and the trackpad were unsupported. I lost quite some time on this, but eventually decided to boot the Fedora LiveCD, only to find out that everything worked out of the box. Probably the netinstall CD uses an older kernel. I baptised the laptop Millennium Falcon, as I have switched to spaceship names for my hardware lately.

Dependencies

When I need to program something, most of the time I use Perl 5, Go or Perl 6. Which one depends on the existence and maturity of libraries and the deployment strategy (and, I must admit, probably my mood). Most applications I write at work are not that big, but they need to be stable and secure. Some end up in production as an extension or addition to the software that is the core of our authentication and authorisation infrastructure. Some programs are managed by other teams, e.g. sysadmin-type applications like the monitoring of a complex chain of microservices. Finally, proof-of-concept code is often needed when designing a new architecture. I believe that if your software is cumbersome/fragile to install and maintain, people won't use it. Or worse, stick with an old version.

Hence, I like CPAN. A lot. I love how easy it is to download, build, test and install Perl 5 libraries. cpanminus made the process even more zero-conf. When not using Docker, tools like App::Fatpacker and Carton can bundle libraries and you end up with a mostly self-contained application (thx mst and miyagawa).

Go, a compiled language, took a different path and opted for static binaries. The included (Go) libraries are not downloaded and built when you’re deploying, but when you’re developing. Although this is a huge advantage, I am not too fond of the dependency system: you mostly end up downloading random versions from the master branch of random Github repos (the workaround is using external webservices like gopkg.in, which is not ideal). The only sane way is to “vendor in” (also with a tool) your dependencies, pretty much the same way Carton does: you copy the libs into your repository (but still without versioning). I hear the Go community is working on this, but so far there are only workarounds.

In the Perl 6 world, the zef module manager does provide a kind of cpanminus-like experience. However, in contrast with cpanminus, it does this by downloading code from a zillion Github repos where the versioning is questionable. There is no clear link between the version fixed in the metadata (META6.json) and the branches/tags on the repo. As mentioned above, Go gets away with this due to static compiling, although the price is high: your projects will have dozens of versions of the same lib, probably even with different APIs… and no way to declare the version in the code.

The centralised Perl 5 approach is fairly complex. It works because of the maturity of the ecosystem (the “big” modules are pretty stable) and the taken-for-granted testing culture (thank you toolchain and testing people!). Actually, in my opinion, the only projects that really solved the dependency management problem are the Linux & BSD distributions, by boxing progress into long release cycles. Developers want the latest shiny lib, so that won’t work.

The Perl 6 devs have no static compiling on the agenda, so it’s clear that the random Github repo situation is far from ideal. That’s why I was pretty excited to read on #perl6 that Perl 6 code can now be uploaded to CPAN (with Shoichi Kaji’s mi6) and installed (with Nick Logan’s zef). Today, the Perl 6 ecosystem has neither the maturity of Perl 5’s nor its testing culture. The dependency chain can be pretty fragile at times. Working within a central repository with an extensive and mature testing infrastructure will certainly help over time. One place to look for libraries, rate them, find the documentation, and so on. One place to look at their state and the state of their dependencies. This is huge. But I don’t think that CPAN will fix the problems of a young ecosystem right away. I think there is an opportunity here to build on the shoulders of CPAN, while keeping the advantages of a new language: find out what works.

Personally, I would love to have a built-in packaging solution like Carton (or even App::Fatpacker) out of the box. I think there is something to be said for "vendoring in" the dependencies in the application repo (putting all the dependencies in a directory in the repo). The Perl 6 language/compiler has advantages over Perl 5: you can specify the compiler release you target, so your code doesn’t break when your compiler (also) targets a more recent milestone (allowing the core devs to advance by breaking stuff). You can even load different versions of the same library in the same program (thx for the correction, nine).

The same tool could even vendor (and update) different versions of the same library. At “package time” this tool would look at your sources and include only the versions of the libraries that are referenced. The result could be a tar file or a directory with everything needed to run your application. As a bonus point, it would be nice to still support random repos for libraries-in-progress or for authors who opted not to use CPAN.
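
As a purely hypothetical illustration of such a "package time" tool (sketched in Python rather than Perl, and assuming a made-up layout: sources under lib/, a local cache of module bundles, and a vendor/ output directory), it could look roughly like this:

    # Hypothetical sketch only: scan the application sources for
    # "use Some::Module" statements and copy just those module bundles from a
    # local cache into a vendor/ directory shipped with the application.
    import re
    import shutil
    from pathlib import Path

    USE_RE = re.compile(r"^\s*use\s+([A-Za-z0-9_:]+)", re.MULTILINE)

    def referenced_modules(src_dir: Path) -> set:
        """Collect module names referenced by the sources (naive text scan)."""
        modules = set()
        for source in src_dir.rglob("*.pm6"):
            modules.update(USE_RE.findall(source.read_text()))
        return modules

    def vendor(src_dir: Path, cache_dir: Path, vendor_dir: Path) -> None:
        """Copy only the referenced module bundles into the vendor directory."""
        for module in sorted(referenced_modules(src_dir)):
            bundle = cache_dir / module.replace("::", "-")
            if bundle.is_dir():
                shutil.copytree(bundle, vendor_dir / bundle.name, dirs_exist_ok=True)

    # Example call (all paths are assumptions):
    # vendor(Path("lib"), Path("~/.module-cache").expanduser(), Path("vendor"))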

My use case is not everyone’s, so I wonder what people would like to see based on their experience or expectations. I think that now, with the possibility of a CPAN migration, is a good time to think about this. Let’s gather ideas. The implementation is for later :).


Filed under: Uncategorized Tagged: cpan, deployment, golang, Perl, perl6

May 28, 2017

How the elected officials of Ottignies-Louvain-la-Neuve seem intent on doing everything they can to sabotage a citizen-initiated popular consultation.

On June 11, a popular consultation will take place in my town of Ottignies-Louvain-la-Neuve. Every citizen aged 16 or over is called upon to answer the question: "Are you in favour of an extension of the shopping centre?"

Asking citizens to have their say on the future of their town seems like the very basis of a democratic society. And yet, the incredible challenge that this simple popular consultation represents leads me to a terrible but crystal-clear conclusion: the municipal councillors of Ottignies-Louvain-la-Neuve are either cruelly incompetent or ready to do anything to make this popular consultation fail.

The story of a citizens' initiative

According to the Belgian code of local democracy, every municipality with more than 30,000 inhabitants (32,000 in Ottignies-Louvain-la-Neuve) is required to organize a popular consultation if the project is backed by at least 10% of the citizens.

Little known, this law is only rarely used. The code of local democracy also limits the number of possible consultations to 6 per 6-year legislature, with a minimum of 6 months between each one and none in the 16 months before the next municipal election.

When the L'Esplanade shopping centre, whose construction had already caused quite a stir, announced that it wanted to expand, a motivated group of citizens set out to collect nearly 3,500 signatures, forcing the local authorities to organize a popular consultation.

Citizens burying the golden calf during the parade of utopias.

Since the mayor, Jean-Luc Roland, although a member of the Écolo party, is a great defender of the shopping centre, it is a safe bet that this consultation is making some people grind their teeth and that its organization is being handled reluctantly. I have never understood this political enthusiasm for the shopping centre on the part of an ecologist, but Mr Roland makes no secret of it.

The complete story of this popular consultation is told with plenty of humour and detail by Stéphane Vanden Eede, an Écolo CPAS councillor of the town.

The omissions of the official brochure

As stipulated by the code of local democracy, the municipality sent residents an explanatory brochure detailing what is at stake and the practical arrangements of the popular consultation.

A big surprise: the brochure heavily insists, several times, on the fact that participation in the consultation is not mandatory (unlike elections).

But nowhere does it say that if turnout does not reach at least 10%, the ballot boxes will not even be opened! If 3,200 citizens aged 16 or over do not show up, the consultation will have been for nothing. Worse, the message sent will be: "We, the citizens, do not want to choose." And yes, that 10% is counted over the whole population, children included, which means that nearly 15% of the electorate must take part.

This information seems crucial to me, and I find it particularly unfortunate that it was omitted from the brochure.

The moral: whatever your opinion, go and vote at all costs in popular consultations and encourage those around you to do the same. You can give a proxy to another voter if you cannot make it that day. The turnout rate is a crucial element in keeping the democratic process alive.

The illegibility of the ballot

The petition signed by 3,500 citizens asked for a popular consultation on a clear and precise question:

"Today, the owner of L'Esplanade is considering enlarging its retail space. Are you in favour of an extension of the shopping centre?"

However, a committee of municipal councillors chaired by Michel Beaussart, alderman for citizen participation, decided to add 20 questions to the ballot!

These 20 extra questions make the ballot completely unreadable. The main question, the only one that matters, is relegated to a tiny space at the top right, and it is easy to miss!

The reactions of citizens confronted with the ballot show real confusion: Which is the main question, the one that counts? Is it a problem if some of my answers contradict one another? How will my answers be counted? For the statement "There is no need to enlarge the shopping centre and increase the retail offering", should I answer yes or no if I am against the extension?

It has to be said that if the aim had been to confuse citizens, they could not have gone about it any better. I think that if the rate of blank votes on the first question turns out to be high, the wording of the ballot can unhesitatingly be blamed. This long ballot also risks slowing down the process and discouraging potential voters by needlessly lengthening the queues.

The impossibility of counting the ballots

Anyone with a passing knowledge of sociology will tell you: writing an opinion survey is difficult work. The methodology for interpreting the results must be studied, tested and validated.

When I see such a ballot, I am very curious to know what the protocol for counting and interpreting the results will be.

Everyone I consulted confirmed the apparent amateurishness of this form. If 10,000 citizens go to the polls and conscientiously fill in the 21 questions, the municipality will quite simply be sitting on a mass of unusable data.

These 20 questions therefore serve no purpose, other than making the ballot particularly unreadable, misleading voters and lengthening the queues.

A vote that is no longer secret

But where the incompetence is most tangible is that these 20 extra questions cancel the anonymity of the vote. The code of local democracy requires the vote to be secret. With such a ballot, it no longer is.

Indeed, besides the main question (the only one that counts), there are 2^20 possible ballots. That is more than a million!

It is possible for a malicious person to exert pressure in order to impose a vote.

A concrete example: an employer tells his 100 employees that he requires them to vote for the extension of the shopping centre. He gives each employee a unique combination of answers to the 20 questions, for example "9 yes - 1 no - 9 yes - 1 no".

The boss then announces that his agents will attend the count and watch for the ballots matching each combination in order to check how the employees voted.

If no ballot matches a given combination, the employee is fired. If the matching ballot or ballots are all against the extension, the employee is fired.

Of course, it is possible for several ballots to carry the same combination. But since there are a million combinations for at most 10,000 or 20,000 voters, the probability of a collision on any given combination is only about one in a hundred, or one in fifty!

Not to mention that some combinations are illogical, and that the boss can give the benefit of the doubt if two ballots share the same combination but one is for and the other against.

On June 11, answer only the very first question, well hidden at the top right. Leave the others blank!

Alors, incompétence ou malveillance ?

Sans être un expert en la matière et sans avoir suivi le dossier de près, j’ai relevé ces problèmes essentiels en quelques minutes à peine.

En conséquence, je suis forcé d’accuser publiquement Michel Beaussart, échevin de la participation citoyenne et tous les conseillers communaux qui ont validé ce bulletin d’être soit incompétents soit malveillants par rapport à l’organisation de cette consultation populaire.

Si Monsieur Beaussart me répond être de bonne foi, ce que je présume, il doit adresser les 3 points que j’ai soulevé, notamment en publiant un protocole validé d’interprétation des résultats.

Faute de réponse correcte, je pense que toute personne un peu soucieuse de la démocratie comprendra qu’il est indispensable de modifier d’urgence le bulletin de vote pour que celui-ci ne comporte que la question initialement demandée par la pétition.

En tant qu’échevin en charge, cette modification incombe à Monsieur Beaussart. Selon ma lecture amateur du code de la démocratie locale, rien ne s’oppose à la modification du bulletin de vote en dernière minute.

Un bulletin de vote difficilement lisible et ne permettant pas de garantir le secret du vote est un manquement gravissime au bon fonctionnement démocratique et devrait entraîner la nullité des résultats.

Si l’incompétence me semble dramatique, je peux reconnaître que l’erreur de bonne foi est humaine et excusable lorsqu’il y’a une volonté de réparer son erreur. Faute de cette volonté, les électeurs seront forcés de tirer la seule conclusion qui s’impose : il ne s’agit plus d’une erreur mais d’un acte délibéré de saboter le processus démocratique par ceux-là même qui ont été élus pour nous représenter. Ou, au mieux, le camouflage irresponsable d’une incompétence dangereuse.

Dans tous les cas, j’invite les électeurs à faire de cette consultation du 11 juin un véritable succès de participation, à ne répondre qu’à la première question et à se souvenir des réactions à cet argumentaire lorsqu’ils voteront en 2018. Et à se demander si le régime sous lequel nous vivons est bel et bien une démocratie.

Photo de couverture par Manu K.

Vous avez aimé votre lecture ? Soutenez l’auteur sur Tipeee, Patreon, Paypal ou Liberapay. Même un don symbolique fait toute la différence ! Retrouvons-nous ensuite sur Facebook, Medium, Twitter ou Mastodon.

Ce texte est publié sous la licence CC-By BE.

May 25, 2017

Updated: 20170419: gnome-shell extension browser integration.
Updated: 20170420: natural scrolling on X instead of Wayland.
Updated: 20170512: better support for multi monitor setups.
Updated: 20170525: add “No TopLeft Hot Corner”, use upstream “Top Icons Plus” instead of the one in the repos.

Introduction

Mark Shuttleworth, founder of Ubuntu and Canonical, dropped a bombshell: Ubuntu drops Unity 8 and –by extension– also the Mir graphical server on the desktop. Starting from the 18.04 release, Ubuntu will use Gnome 3 as the default Desktop environment.

Sadly, the desktop environment used by millions of Ubuntu users –Unity 7– has no path forward now. Unity 7 runs on the X.org graphical stack, while the Linux world –including Ubuntu now– is slowly but surely moving to Wayland (it will be the default on Ubuntu 18.04 LTS). It’s clear that Unity has its detractors, and it’s true that the first releases (6 years ago!) were limited and buggy. However, today, Unity 7 is a beautiful and functional desktop environment. I happily use it at home and at work.

Soon-to-be-dead code is dead code, so even as a happy user I don’t see the interest in staying with Unity. I prefer to make the jump now instead of sticking a year with a desktop on life support. Among other environments, I have been a full time user of CDE, Window Maker, Gnome 1.*, KDE 2.*, Java Desktop System, OpenSolaris Desktop, LXDE and XFCE. I’ll survive :).

The idea of these lines is to collect changes I felt I needed to make to a vanilla Ubuntu Gnome 3 setup to make it work for me. I made the jump 1 week before the release of 17.04, so I’ll stick with 17.04 and skip the 16.10 instructions (in short: you’ll need to install gnome-shell-extension-dashtodock from an external source instead of the Ubuntu repos).

The easiest way to use Gnome on Ubuntu is, of course, to install the Ubuntu Gnome distribution. If you're upgrading, you can do it manually. In case you want to remove Unity and install Gnome at the same time:
$ sudo apt-get remove --purge ubuntu-desktop lightdm && sudo apt-get install ubuntu-gnome-desktop && sudo apt-get remove --purge $(dpkg -l |grep -i unity |awk '{print $2}') && sudo apt-get autoremove -y

Changes

Add Extensions:

  1. Install Gnome 3 extensions to customize the desktop experience:
    $ sudo apt-get install -y gnome-tweak-tool gnome-shell-extension-dashtodock gnome-shell-extension-better-volume gnome-shell-extension-refreshwifi gnome-shell-extension-disconnect-wifi
  2. Install the gnome-shell integration (the one on the main Ubuntu repos does not work):
    $ sudo add-apt-repository ppa:ne0sight/chrome-gnome-shell && sudo apt-get update && sudo apt-get install chrome-gnome-shell
  3. Install the “Multi-monitor add-on“, the “Top Icon Plus” (we use upstream versions of these two extensions because the ones in the Ubuntu repos are buggy), the “No Topleft Hot Corner” (a must in a multi-monitor setup) and the “Refresh wifi” extensions. You’ll need to install a browser plugin. Refresh the page after installing the plugin.
  4. Log off in order to activate the extensions.
  5. Start gnome-tweak-tool and enable “Better volume indicator” (scroll wheel to change volume), “Dash to dock” (a more Unity-like Dock, configurable. I set the “Icon size limit” to 24 and “Behavior-Click Action” to “minimize”), “Disconnect wifi” (allow disconnection of network without setting Wifi to off), “Refresh Wifi connections” (auto refresh wifi list), “Multi monitors add-on” (add a top bar to other monitors) and “Topicons plus” (put non-Gnome icons like Dropbox and pidgin on the top menu).

Change window size and buttons:

  1. On the Windows tab, I enabled the Maximise and Minimise Titlebar Buttons.
  2. Make the window top bars smaller if you wish. Just create ~/.config/gtk-3.0/gtk.css with these lines:
    /* From: http://blog.samalik.com/make-your-gnome-title-bar-smaller-fedora-24-update/ */
    window.ssd headerbar.titlebar {
    padding-top: 4px;
    padding-bottom: 4px;
    min-height: 0;
    }
    window.ssd headerbar.titlebar button.titlebutton {
    padding: 0px;
    min-height: 0;
    min-width: 0;
    }

Disable “natural scrolling” for mouse wheel:

While I like “natural scrolling” with the touchpad (enable it in the mouse preferences), I don’t like it on the mouse wheel. To disable it only on the mouse:
$ gsettings set org.gnome.desktop.peripherals.mouse natural-scroll false

If you run Gnome on good old X instead of Wayland (e.g. for driver support or for more stability while Wayland matures), you need to use libinput instead of the synaptics driver to make “natural scrolling” possible:

$ sudo mkdir -p /etc/X11/xorg.conf.d && sudo cp -rp /usr/share/X11/xorg.conf.d/40-libinput.conf /etc/X11/xorg.conf.d/

Log out.

Enable Thunderbird notifications:

For Thunderbird new mail notification I installed the gnotifier Thunderbird add-on: https://addons.mozilla.org/en-us/thunderbird/addon/gnotifier/

Extensions that I tried, liked, but ended up not using:

  • gnome-shell-extension-pixelsaver: it feels unnatural on a 3 screen setup like the one I use at work, e.g. the min-max-close window buttons end up on the main screen for windows that live on other screens.
  • gnome-shell-extension-hide-activities: the top menu is already mostly empty, so it’s not saving much.
  • gnome-shell-extension-move-clock: although I prefer the clock on the right, the default middle position makes sense as it integrates with notifications.

That’s it (so far 🙂 ).

Thx to @sil, @adsamalik and Jonathan Carter.


Filed under: Uncategorized Tagged: gnome, Gnome3, Linux, Linux Desktop, Thanks for all the fish, Ubuntu, unity

May 24, 2017

In this blog post I outline my thinking on sharing code that deals with different types of Entities in your domain. We’ll cover what Entities are, code reuse strategies, pitfalls such as Shotgun Surgery and Anemic Domain Models and finally Bounded Contexts.

Why I wrote this post

I work at Wikimedia Deutschland, where amongst other things, we are working on a software called Wikibase, which is what powers the Wikidata project. We have a dedicated team for this software, called the Wikidata team, which I am not part of. As an outsider that is somewhat familiar with the Wikibase codebase, I came across a writeup of a perceived problem in this codebase and a pair of possible solutions. I happen to disagree with what the actual problem is, and as a consequence also the solutions. Since explaining why I think that takes a lot of general (non-Wikibase specific) explanation, I decided to write a blog post.

DDD Entities

Let’s start with defining what an Entity is. Entities are a tactical Domain Driven Design pattern. They are things that can change over time and are compared by identity rather than by value, unlike Value Objects, which do not have an identity.

Wikibase has objects which are conceptually such Entities, though they are implemented … oddly from a DDD perspective. In the excerpt quoted below, the word entity, confusingly, does not refer to the DDD concept. Instead, the Wikibase domain has a concept called Entity, implemented by an abstract class with the same name and derived from by specific types of Entities, i.e. Item and Property. Those are the objects that are conceptually DDD Entities, yet diverge from what a DDD Entity looks like.

Entities normally contain domain logic (the lack of this is called an Anemic Domain Model), and don’t have setters. The lack of setters does not mean they are immutable, it’s just that actions are performed through methods in the domain language (see Ubiquitous Language). For instance “confirmBooked()” and “cancel()” instead of “setStatus()”.
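To make this more concrete, here is a minimal sketch of what such an Entity could look like. This is hypothetical illustration code, not taken from the Wikibase or Fundraising codebases; the Donation name, its statuses and its cancellation rule are assumptions made purely for the example.

class Donation {

    const STATUS_NEW = 'new';
    const STATUS_BOOKED = 'booked';
    const STATUS_CANCELLED = 'cancelled';

    private $id;
    private $status;

    // The required field is provided via the constructor, not via a setter.
    public function __construct( int $id ) {
        $this->id = $id;
        $this->status = self::STATUS_NEW;
    }

    // A domain action in the Ubiquitous Language, rather than setStatus( 'booked' ).
    public function confirmBooked() {
        $this->status = self::STATUS_BOOKED;
    }

    // The Entity guards its own invariants (this particular rule is made up for the sketch).
    public function cancel() {
        if ( $this->status === self::STATUS_BOOKED ) {
            throw new DomainException( 'A booked donation can no longer be cancelled' );
        }
        $this->status = self::STATUS_CANCELLED;
    }

    public function getId(): int {
        return $this->id;
    }
}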

The perceived problem

What follows is an excerpt from a document aimed at figuring out how to best construct entities in Wikibase:

Some entity types have required fields:

  • Properties require a data type
  • Lexemes require a language and a lexical category (both ItemIds)
  • Forms require a grammatical feature (an ItemId)

The ID field is required by all entities. This is less problematic however, since the ID can be constructed and treated the same way for all kinds of entities. Furthermore, the ID can never change, while other required fields could be modified by an edit (even a property’s data type can be changed using a maintenance script).

The fact that Properties require the data type ID to be provided to the constructor is problematic in the current code, as evidenced in EditEntity::clearEntity:

// FIXME how to avoid special case handling here?
if ( $entity instanceof Property ) {
  /** @var Property $newEntity */
  $newEntity->setDataTypeId( $entity->getDataTypeId() );
}

…as well as in EditEntity::modifyEntity():

// if we create a new property, make sure we set the datatype
if ( !$exists && $entity instanceof Property ) {
  if ( !isset( $data['datatype'] ) ) {
     $this->errorReporter->dieError( 'No datatype given', 'param-illegal' );
  } elseif ( !in_array( $data['datatype'], $this->propertyDataTypes ) ) {
     $this->errorReporter->dieError( 'Invalid datatype given', 'param-illegal' );
  } else {
     $entity->setDataTypeId( $data['datatype'] );
  }
}

Such special case handling will not be possible for entity types defined in extensions.

It is very natural for (DDD) Entities to have required fields. That is not a problem in itself. For examples you can look at our Fundraising software.

So what is the problem really?

Generic vs specific entity handling code

Normally when you have a (DDD) Entity, say a Donation, you also have dedicated code that deals with those Donation objects. If you have another entity, say MembershipApplication, you will have other code that deals with it.

If the code handling Donation and the code handing MembershipApplication is very similar, there might be an opportunity to share things via composition. One should be very careful to not do this for things that happen to be the same but are conceptually different, and might thus change differently in the future. It’s very easy to add a lot of complexity and coupling by extracting small bits of what would otherwise be two sets of simple and easy to maintain code. This is a topic worthy of its own blog post, and indeed, I might publish one titled The Fallacy of DRY in the near future.

This sharing via composition is not really visible “from outside” of the involved services, except for the code that constructs them. If you have a DonationRepository and a MembershipRepository interface, they will look the same whether their implementations share something or not. Repositories might share cross cutting concerns such as logging. Logging is not something you want to do in your repository implementations themselves, but you can easily create simple logging decorators. A LoggingDonationRepository and LoggingMembershipRepository could both depend on the same Logger class (or interface more likely), and thus be sharing code via composition. In the end, the DonationRepository still just deals with Donation objects, the MembershipRepository still just deals with Membership objects, and both remain completely decoupled from each other.
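As an illustration, such a logging decorator might look roughly like the sketch below. This is assumed example code (the DonationRepository interface and its storeDonation method are invented for this illustration); it only shows the composition idea, with a PSR-3 logger as the shared dependency.

interface DonationRepository {
    public function storeDonation( Donation $donation );
}

class LoggingDonationRepository implements DonationRepository {

    private $repository;
    private $logger;

    public function __construct( DonationRepository $repository, \Psr\Log\LoggerInterface $logger ) {
        $this->repository = $repository;
        $this->logger = $logger;
    }

    public function storeDonation( Donation $donation ) {
        // Handle the cross cutting concern, then delegate to the decorated implementation.
        $this->logger->info( 'Storing donation ' . $donation->getId() );
        $this->repository->storeDonation( $donation );
    }
}

A LoggingMembershipRepository would look exactly the same, depending on the same logger interface, while the two repositories stay unaware of each other.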

In the Wikibase codebase there is an attempt at code reuse by having services that can deal with all types of Entities. Phrased like this it sounds nice. From the perspective of the user of the service, things are great at first glance. Thing is, those services then are forced to actually deal with all types of Entities, which almost guarantees greater complexity than having dedicated services that focus on a single entity.

If your Donation and MembershipApplication entities both implement Foobarable and you have a FoobarExecution service that operates on Foobarable instances, that is entirely fine. Things get dodgy when your Entities don’t always share the things your service needs, and the service ends up getting instances of object, or perhaps some minimal EntityInterface type.

In those cases the service can add a bunch of “if has method doFoobar, call it with these arguments” logic. Or perhaps you’re checking against an interface instead of a method, though this is by and large the same. This approach leads to Shotgun Surgery. It is particularly bad if you have a general service. If your service is really only about the doFoobar method, then at least you won’t need to poke at it when a new Entity is added to the system that has nothing to do with the Foobar concept. If the service on the other hand needs to fully save something or send an email with a summary of the data, each new Entity type will force you to change your service.
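A rough sketch of the difference, using the placeholder names from above (hypothetical code, not taken from Wikibase):

// The fine case: a small, explicit interface that the relevant Entities genuinely implement.
interface Foobarable {
    public function doFoobar();
}

class FoobarExecution {

    public function execute( Foobarable $foobarable ) {
        $foobarable->doFoobar();
    }
}

// The dodgy case: a “generic” service poking at whatever object it is handed.
class GenericFoobarExecution {

    public function execute( $entity ) {
        if ( method_exists( $entity, 'doFoobar' ) ) {
            $entity->doFoobar();
        }
        // Every new Entity type that needs different handling forces a change here.
    }
}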

The “if doFoobar exists” approach does not work if you want plugins to your system to be able to use your generic services with their own types of Entities. To enable that, and avoid the Shotgun Surgery, your general service can delegate to specific ones. For instance, you can have an EntityRepository service with a save method that takes an EntityInterface. In its constructor it would take an array of specific repositories, i.e. a DonationRepository and a MembershipRepository. In its save method it would loop through these specific repositories and somehow determine which one to use. Perhaps they would have a canHandle method that takes an EntityInterface, or perhaps EntityInterface has a getType method that returns a string that is also used as keys in the array of specific repositories. Once the right one is found, the EntityInterface instance is handed over to its save method.

interface Repository {
    public function save( EntityInterface $entity );
    public function canHandle( EntityInterface $entity ): bool;
}

class DonationRepository implements Repository { /**/ }
class MembershipRepository implements Repository { /**/ }

class GenericEntityRepository {

    /**
     * @var Repository[]
     */
    private $repositories;

    /**
     * @param Repository[] $repositories
     */
    public function __construct( array $repositories ) {
        $this->repositories = $repositories;
    }

    public function save( EntityInterface $entity ) {
        foreach ( $this->repositories as $repository ) {
            if ( $repository->canHandle( $entity ) ) {
                $repository->save( $entity );
                break;
            }
        }
    }
}

This delegation approach is sane enough from an OO perspective. It does however involve specific repositories, which begs the question of why you are creating a general one in the first place. If there is no compelling reason to create the general one, just stick to the specific ones and save yourself all this unneeded complexity and vagueness.

In Wikibase there is a generic web API endpoint for creating new entities. The users provide a pile of information via JSON or a bunch of parameters, which includes the type of Entity they are trying to create. If you have this type of functionality, you are forced to deal with this in some way, and probably want to go with the delegation approach. To me having such an API endpoint is very questionable, with dedicated endpoints being the simpler solution for everyone involved.

To wrap this up: dedicated entity handling code is much simpler than generic code, making it easier to write, use, understand and modify. Code reuse, where warranted, is possible via composition inside of implementations without changing the interfaces of services. Generic entity handling code is almost always a bad choice.

On top of what I already outlined, there is another big issue you can run into when creating generic entity handling code like is done in Wikibase.

Bounded Contexts

Bounded Contexts are a key strategic concept from Domain Driven Design. They are key in the sense that if you don’t apply them in your project, you cannot effectively apply tactical patterns such as Entities and Value Objects, and are not really doing DDD at all.

“Strategy without tactics is the slowest route to victory. Tactics without strategy are the noise before defeat.” — Sun Tzu

Bounded Contexts allow you to segregate your domain models, ideally having a Bounded Context per subdomain. A detailed explanation and motivation of this pattern is out of scope for this post, but suffice it to say that Bounded Contexts allow for simplification and thus make it easier to write and maintain code. For more information I can recommend Domain-Driven Design Distilled.

In case of Wikibase there are likely a dozen or so relevant subdomains. While I did not do the analysis to create a comprehensive picture of which subdomains there are, which types they have, and which Bounded Contexts would make sense, a few easily stand out.

There is the so-called core Wikibase software, which was created for Wikidata.org, and deals with structured data for Wikipedia. It has two types of Entities (both in the Wikibase and in the DDD sense): Item and Property. Then there is (planned) functionality for Wiktionary, which will be structured dictionary data, and for Wikimedia Commons, which will be structured media data. These are two separate subdomains, and thus each deserve their own Bounded Context. This means having no code and no conceptual dependencies on each other or the existing Big Ball of Mud type “Bounded Context” in the Wikibase core software.

Conclusion

When standard approaches are followed, Entities can easily have required fields and optional fields. Creating generic code that deals with different types of entities is very suspect and can easily lead to great complexity and brittle code, as seen in Wikibase. It is also a road to not separating concepts properly, which is particularly bad when crossing subdomain boundaries.

May 23, 2017

In 2007, Jay Batson and I wanted to build a software company based on open source and Drupal. I was 29 years old then, and eager to learn how to build a business that could change the world of software, strengthen the Drupal project and help drive the future of the web.

Tom Erickson joined Acquia's board of directors with an outstanding record of scaling and leading technology companies. About a year later, after a lot of convincing, Tom agreed to become our CEO. At the time, Acquia was 30 people strong and we were working out of a small office in Andover, Massachusetts. Nine years later, we count 16 of the Fortune 100 among our customers, have seen our staff grow from 30 to more than 750 employees, have more than $150MM in annual revenue, and have 14 offices across 7 countries. And, importantly, Acquia has also made an undeniable impact on Drupal, as we said we would.

I've been lucky to have had Tom as my business partner and I'm incredibly proud of what we have built together. He has been my friend, my business partner, and my professor. I learned first hand the complexities of growing an enterprise software company; from building a culture, to scaling a global team of employees, to making our customers successful.

Today is an important day in the evolution of Acquia:

  • Tom has decided it's time for him to step down as CEO, giving him more flexibility with his personal time and letting him act more as an advisor to companies, the role that brought him to Acquia in the first place.
  • We're going to search for a new CEO for Acquia. When we find that business partner, Tom will be stepping down as CEO. After the search is completed, Tom will remain on Acquia's Board of Directors, where he can continue to help advise and guide the company.
  • We are formalizing the working relationship I've had with Tom during the past 8 years by creating an Office of the CEO. I will focus on product strategy, product development, including product architecture and Acquia's roadmap; technology partnerships and acquisitions; and company-wide hiring and staffing allocations. Tom will focus on sales and marketing, customer success and G&A functions.

The time for these changes felt right to both of us. We spent the first decade of Acquia laying down the foundation of a solid business model for going out to the market and delivering customer success with Drupal – Tom's core strengths from his long career as a technology executive. Acquia's next phase will be focused on building confidently on this foundation with more product innovation, new technology acquisitions and more strategic partnerships – my core strengths as a technologist.

Tom is leaving Acquia in a great position. This past year, the top industry analysts published very positive reviews based on their dealings with our customers. I'm proud that Acquia made the most significant positive move of all vendors in last year's Gartner Magic Quadrant for Web Content Management and that Forrester recognized Acquia as the leader for strategy and vision. We increasingly find ourselves at the center of our customers' technology and digital strategies. At a time when digital experiences mean more than just web content management, and data and content intelligence play an increasing role in defining success for our customers, we are well positioned for the next phase of our growth.

I continue to love the work I do at Acquia each day. We have a passionate team of builders and dreamers, doers and makers. To the Acquia team around the world: 2017 will be a year of changes, but you have my commitment, in every way, to lead Acquia with clarity and focus.

To read Tom's thoughts on the transition, please check out his blog post. Michael Skok, Acquia's lead investor, also covered it on his blog.


May 19, 2017

The post CentOS 7.4 to ship with TLS 1.2 + ALPN appeared first on ma.ttias.be.

Oh happy days!

I've long been tracking the "Bug 1276310 -- (rhel7-openssl1.0.2) RFE: Need OpenSSL 1.0.2" issue, where Red Hat users are asking for an updated version of the OpenSSL package. Mainly to get TLS 1.2 and ALPN.

_openssl_ rebased to version 1.0.2k

The _openssl_ package has been updated to upstream version 1.0.2k, which provides a number of enhancements, new features, and bug fixes, including:

* Added support for the datagram TLS (DTLS) protocol version 1.2.

* Added support for the TLS automatic elliptic curve selection.

* Added support for the Application-Layer Protocol Negotiation (ALPN).

* Added Cryptographic Message Syntax (CMS) support for the following schemes: RSA-PSS, RSA-OAEP, ECDH, and X9.42 DH.

Note that this version is compatible with the API and ABI in the *OpenSSL* library version in previous releases of Red Hat Enterprise Linux 7.
RFE: Need OpenSSL 1.0.2

The ALPN support is needed because in the Chrome browser, server-side ALPN support is a dependency to support HTTP/2. Without it, Chrome users don't get to use HTTP/2 on your servers.

The newly updated packages for OpenSSL are targeting the RHEL 7.4 release, which -- as far as I'm aware -- has no scheduled release date yet. But I'll be waiting for it!

As soon as RHEL 7.4 is released, we should expect a CentOS 7.4 release soon after.

The post CentOS 7.4 to ship with TLS 1.2 + ALPN appeared first on ma.ttias.be.

There was a lot of buzz about the leak of two huge databases of passwords a few days ago. This has been reported by Troy Hunt on his blog. The two databases are called “Anti-Trust-Combo-List” and “Exploit.In“. While the sources of the leaks are not officially known, there are some ways to discover some of them (see my previous article about the “+” feature offered by Google).

A few days after the first leak, a second version of “Exploit.In” was released with even more passwords:

Name                     Size    Credentials
Anti-Trust-Combo-List    16GB    540.701.509
Exploit.In               15GB    499.305.318
Exploit.In (2)           24GB    805.499.579

With the huge amount of passwords released in the wild, you can assume that your password is also included. But what do those passwords look like? I used Robin Wood‘s tool pipal to analyze them.

I decided to analyze the Anti-Trust-Combo-List but I had to restart several times due to a lack of resources (pipal requires a lot of memory to generate the statistics) and it always failed. I therefore decided to use a sample of the passwords and successfully analyzed 91M of them. The results generated by pipal are available below.

What can we deduce? Weak passwords remain classic. Most passwords have only 8 characters and are based on lowercase characters. Interesting fact: users like to “increase” the complexity of the password by adding trailing numbers:

  • Just one number (due to the fact that they have to change it regularly and just increase it at every expiration)
  • By adding their birth year
  • By adding the current year
Basic Results

Total entries = 91178452
Total unique entries = 40958257

Top 20 passwords
123456 = 559283 (0.61%)
123456789 = 203554 (0.22%)
passer2009 = 186798 (0.2%)
abc123 = 100158 (0.11%)
password = 96731 (0.11%)
password1 = 84124 (0.09%)
12345678 = 80534 (0.09%)
12345 = 76051 (0.08%)
homelesspa = 74418 (0.08%)
1234567 = 68161 (0.07%)
111111 = 66460 (0.07%)
qwerty = 63957 (0.07%)
1234567890 = 58651 (0.06%)
123123 = 52272 (0.06%)
iloveyou = 51664 (0.06%)
000000 = 49783 (0.05%)
1234 = 35583 (0.04%)
123456a = 34675 (0.04%)
monkey = 32926 (0.04%)
dragon = 29902 (0.03%)

Top 20 base words
password = 273853 (0.3%)
passer = 208434 (0.23%)
qwerty = 163356 (0.18%)
love = 161514 (0.18%)
july = 148833 (0.16%)
march = 144519 (0.16%)
phone = 122229 (0.13%)
shark = 121618 (0.13%)
lunch = 119449 (0.13%)
pole = 119240 (0.13%)
table = 119215 (0.13%)
glass = 119164 (0.13%)
frame = 118830 (0.13%)
iloveyou = 118447 (0.13%)
angel = 101049 (0.11%)
alex = 98135 (0.11%)
monkey = 97850 (0.11%)
myspace = 90841 (0.1%)
michael = 88258 (0.1%)
mike = 82412 (0.09%)

Password length (length ordered)
1 = 54418 (0.06%)
2 = 49550 (0.05%)
3 = 247263 (0.27%)
4 = 1046032 (1.15%)
5 = 1842546 (2.02%)
6 = 15660408 (17.18%)
7 = 14326554 (15.71%)
8 = 25586920 (28.06%)
9 = 12250247 (13.44%)
10 = 11895989 (13.05%)
11 = 2604066 (2.86%)
12 = 1788770 (1.96%)
13 = 1014515 (1.11%)
14 = 709778 (0.78%)
15 = 846485 (0.93%)
16 = 475022 (0.52%)
17 = 157311 (0.17%)
18 = 136428 (0.15%)
19 = 83420 (0.09%)
20 = 93576 (0.1%)
21 = 46885 (0.05%)
22 = 42648 (0.05%)
23 = 31118 (0.03%)
24 = 29999 (0.03%)
25 = 25956 (0.03%)
26 = 14798 (0.02%)
27 = 10285 (0.01%)
28 = 10245 (0.01%)
29 = 7895 (0.01%)
30 = 12573 (0.01%)
31 = 4168 (0.0%)
32 = 66017 (0.07%)
33 = 1887 (0.0%)
34 = 1422 (0.0%)
35 = 1017 (0.0%)
36 = 469 (0.0%)
37 = 250 (0.0%)
38 = 231 (0.0%)
39 = 116 (0.0%)
40 = 435 (0.0%)
41 = 45 (0.0%)
42 = 57 (0.0%)
43 = 14 (0.0%)
44 = 47 (0.0%)
45 = 5 (0.0%)
46 = 13 (0.0%)
47 = 1 (0.0%)
48 = 16 (0.0%)
49 = 14 (0.0%)
50 = 21 (0.0%)
51 = 2 (0.0%)
52 = 1 (0.0%)
53 = 2 (0.0%)
54 = 22 (0.0%)
55 = 1 (0.0%)
56 = 3 (0.0%)
57 = 1 (0.0%)
58 = 2 (0.0%)
60 = 10 (0.0%)
61 = 3 (0.0%)
63 = 3 (0.0%)
64 = 1 (0.0%)
65 = 2 (0.0%)
66 = 9 (0.0%)
67 = 2 (0.0%)
68 = 2 (0.0%)
69 = 1 (0.0%)
70 = 1 (0.0%)
71 = 3 (0.0%)
72 = 1 (0.0%)
73 = 1 (0.0%)
74 = 1 (0.0%)
76 = 2 (0.0%)
77 = 1 (0.0%)
78 = 1 (0.0%)
79 = 3 (0.0%)
81 = 3 (0.0%)
83 = 1 (0.0%)
85 = 1 (0.0%)
86 = 1 (0.0%)
88 = 1 (0.0%)
89 = 1 (0.0%)
90 = 6 (0.0%)
92 = 3 (0.0%)
93 = 1 (0.0%)
95 = 1 (0.0%)
96 = 16 (0.0%)
97 = 1 (0.0%)
98 = 3 (0.0%)
99 = 2 (0.0%)
100 = 1 (0.0%)
104 = 1 (0.0%)
107 = 1 (0.0%)
108 = 1 (0.0%)
109 = 1 (0.0%)
111 = 2 (0.0%)
114 = 1 (0.0%)
119 = 1 (0.0%)
128 = 377 (0.0%)

Password length (count ordered)
8 = 25586920 (28.06%)
6 = 15660408 (17.18%)
7 = 14326554 (15.71%)
9 = 12250247 (13.44%)
10 = 11895989 (13.05%)
11 = 2604066 (2.86%)
5 = 1842546 (2.02%)
12 = 1788770 (1.96%)
4 = 1046032 (1.15%)
13 = 1014515 (1.11%)
15 = 846485 (0.93%)
14 = 709778 (0.78%)
16 = 475022 (0.52%)
3 = 247263 (0.27%)
17 = 157311 (0.17%)
18 = 136428 (0.15%)
20 = 93576 (0.1%)
19 = 83420 (0.09%)
32 = 66017 (0.07%)
1 = 54418 (0.06%)
2 = 49550 (0.05%)
21 = 46885 (0.05%)
22 = 42648 (0.05%)
23 = 31118 (0.03%)
24 = 29999 (0.03%)
25 = 25956 (0.03%)
26 = 14798 (0.02%)
30 = 12573 (0.01%)
27 = 10285 (0.01%)
28 = 10245 (0.01%)
29 = 7895 (0.01%)
31 = 4168 (0.0%)
33 = 1887 (0.0%)
34 = 1422 (0.0%)
35 = 1017 (0.0%)
36 = 469 (0.0%)
40 = 435 (0.0%)
128 = 377 (0.0%)
37 = 250 (0.0%)
38 = 231 (0.0%)
39 = 116 (0.0%)
42 = 57 (0.0%)
44 = 47 (0.0%)
41 = 45 (0.0%)
54 = 22 (0.0%)
50 = 21 (0.0%)
48 = 16 (0.0%)
96 = 16 (0.0%)
49 = 14 (0.0%)
43 = 14 (0.0%)
46 = 13 (0.0%)
60 = 10 (0.0%)
66 = 9 (0.0%)
90 = 6 (0.0%)
45 = 5 (0.0%)
71 = 3 (0.0%)
56 = 3 (0.0%)
92 = 3 (0.0%)
79 = 3 (0.0%)
98 = 3 (0.0%)
63 = 3 (0.0%)
61 = 3 (0.0%)
81 = 3 (0.0%)
51 = 2 (0.0%)
58 = 2 (0.0%)
65 = 2 (0.0%)
53 = 2 (0.0%)
67 = 2 (0.0%)
68 = 2 (0.0%)
76 = 2 (0.0%)
111 = 2 (0.0%)
99 = 2 (0.0%)
73 = 1 (0.0%)
72 = 1 (0.0%)
74 = 1 (0.0%)
70 = 1 (0.0%)
69 = 1 (0.0%)
77 = 1 (0.0%)
78 = 1 (0.0%)
64 = 1 (0.0%)
109 = 1 (0.0%)
114 = 1 (0.0%)
119 = 1 (0.0%)
83 = 1 (0.0%)
107 = 1 (0.0%)
85 = 1 (0.0%)
86 = 1 (0.0%)
104 = 1 (0.0%)
88 = 1 (0.0%)
89 = 1 (0.0%)
57 = 1 (0.0%)
100 = 1 (0.0%)
55 = 1 (0.0%)
93 = 1 (0.0%)
52 = 1 (0.0%)
95 = 1 (0.0%)
47 = 1 (0.0%)
97 = 1 (0.0%)
108 = 1 (0.0%)

(pipal’s ASCII histogram of the password length distribution is omitted here; the counts are listed above.)

One to six characters = 18900217 (20.73%)
One to eight characters = 58813691 (64.5%)
More than eight characters = 32364762 (35.5%)

Only lowercase alpha = 25300978 (27.75%)
Only uppercase alpha = 468686 (0.51%)
Only alpha = 25769664 (28.26%)
Only numeric = 9526597 (10.45%)

First capital last symbol = 72550 (0.08%)
First capital last number = 2427417 (2.66%)

Single digit on the end = 13167140 (14.44%)
Two digits on the end = 14225600 (15.6%)
Three digits on the end = 6155272 (6.75%)

Last number
0 = 4370023 (4.79%)
1 = 12711477 (13.94%)
2 = 5661520 (6.21%)
3 = 6642438 (7.29%)
4 = 3951994 (4.33%)
5 = 4028739 (4.42%)
6 = 4295485 (4.71%)
7 = 4055751 (4.45%)
8 = 3596305 (3.94%)
9 = 4240044 (4.65%)

(pipal’s ASCII histogram of the last-number distribution is omitted here; the counts are listed above and below.)

Last digit
1 = 12711477 (13.94%)
3 = 6642438 (7.29%)
2 = 5661520 (6.21%)
0 = 4370023 (4.79%)
6 = 4295485 (4.71%)
9 = 4240044 (4.65%)
7 = 4055751 (4.45%)
5 = 4028739 (4.42%)
4 = 3951994 (4.33%)
8 = 3596305 (3.94%)

Last 2 digits (Top 20)
23 = 2831841 (3.11%)
12 = 1570044 (1.72%)
11 = 1325293 (1.45%)
01 = 1036629 (1.14%)
56 = 1013453 (1.11%)
10 = 909480 (1.0%)
00 = 897526 (0.98%)
13 = 854165 (0.94%)
09 = 814370 (0.89%)
21 = 812093 (0.89%)
22 = 709996 (0.78%)
89 = 706074 (0.77%)
07 = 675624 (0.74%)
34 = 627901 (0.69%)
08 = 626722 (0.69%)
69 = 572897 (0.63%)
88 = 557667 (0.61%)
77 = 557429 (0.61%)
14 = 539236 (0.59%)
45 = 530671 (0.58%)

Last 3 digits (Top 20)
123 = 2221895 (2.44%)
456 = 807267 (0.89%)
234 = 434714 (0.48%)
009 = 326602 (0.36%)
789 = 318622 (0.35%)
000 = 316149 (0.35%)
345 = 295463 (0.32%)
111 = 263894 (0.29%)
101 = 225151 (0.25%)
007 = 222062 (0.24%)
321 = 221598 (0.24%)
666 = 201995 (0.22%)
010 = 192798 (0.21%)
777 = 164454 (0.18%)
011 = 141015 (0.15%)
001 = 138363 (0.15%)
008 = 137610 (0.15%)
999 = 129483 (0.14%)
987 = 126046 (0.14%)
678 = 123301 (0.14%)

Last 4 digits (Top 20)
3456 = 727407 (0.8%)
1234 = 398622 (0.44%)
2009 = 298108 (0.33%)
2345 = 269935 (0.3%)
6789 = 258059 (0.28%)
1111 = 148964 (0.16%)
2010 = 140684 (0.15%)
2008 = 111014 (0.12%)
2000 = 110456 (0.12%)
0000 = 108767 (0.12%)
2011 = 103328 (0.11%)
5678 = 102873 (0.11%)
4567 = 94964 (0.1%)
2007 = 94172 (0.1%)
4321 = 92849 (0.1%)
3123 = 92104 (0.1%)
1990 = 87828 (0.1%)
1987 = 87142 (0.1%)
2006 = 86640 (0.1%)
1991 = 86574 (0.09%)

Last 5 digits (Top 20)
23456 = 721648 (0.79%)
12345 = 261734 (0.29%)
56789 = 252914 (0.28%)
11111 = 116179 (0.13%)
45678 = 96011 (0.11%)
34567 = 90262 (0.1%)
23123 = 84654 (0.09%)
00000 = 81056 (0.09%)
54321 = 73623 (0.08%)
67890 = 66301 (0.07%)
21212 = 28777 (0.03%)
23321 = 28767 (0.03%)
77777 = 28572 (0.03%)
22222 = 27754 (0.03%)
55555 = 26081 (0.03%)
66666 = 25872 (0.03%)
56123 = 21354 (0.02%)
88888 = 19025 (0.02%)
99999 = 18288 (0.02%)
12233 = 16677 (0.02%)

Character sets
loweralphanum: 47681569 (52.29%)
loweralpha: 25300978 (27.75%)
numeric: 9526597 (10.45%)
mixedalphanum: 3075964 (3.37%)
loweralphaspecial: 1721507 (1.89%)
loweralphaspecialnum: 1167596 (1.28%)
mixedalpha: 981987 (1.08%)
upperalphanum: 652292 (0.72%)
upperalpha: 468686 (0.51%)
mixedalphaspecialnum: 187283 (0.21%)
specialnum: 81096 (0.09%)
mixedalphaspecial: 53882 (0.06%)
upperalphaspecialnum: 39668 (0.04%)
upperalphaspecial: 18674 (0.02%)
special: 14657 (0.02%)

Character set ordering
stringdigit: 41059315 (45.03%)
allstring: 26751651 (29.34%)
alldigit: 9526597 (10.45%)
othermask: 4189226 (4.59%)
digitstring: 4075593 (4.47%)
stringdigitstring: 2802490 (3.07%)
stringspecial: 792852 (0.87%)
digitstringdigit: 716311 (0.79%)
stringspecialstring: 701378 (0.77%)
stringspecialdigit: 474579 (0.52%)
specialstring: 45323 (0.05%)
specialstringspecial: 28480 (0.03%)
allspecial: 14657 (0.02%)

[The post Your Password is Already In the Wild, You Did not Know? has been first published on /dev/random]

May 18, 2017

Friduction

There is one significant trend that I have noticed over and over again: the internet's continuous drive to mitigate friction in user experiences and business models.

Since the internet's commercial debut in the early 90s, it has captured success and upset the established order by eliminating unnecessary middlemen. Book stores, photo shops, travel agents, stock brokers, bank tellers and music stores are just a few examples of the kinds of middlemen who have been eliminated by their online counterparts. The act of buying books, printing photos or booking flights online alleviates the friction felt by consumers who must stand in line or wait on hold to speak to a customer service representative.

Rather than negatively describing this evolution as disintermediation or taking something away, I believe there is value in recognizing that the internet is constantly improving customer experiences by reducing friction from systems — a process I like to call "friduction".

Open Source and cloud

Over the past 15 years, I have observed Open Source and cloud-computing solutions remove friction from legacy approaches to technology. Open Source takes the friction out of the technology evaluation and adoption process; you are not forced to get a demo or go through a sales and procurement process, or deal with the limitations of a proprietary license. Cloud computing took off because it also offers friduction; with cloud, companies pay for what they use, avoid large up-front capital expenditures, and gain speed-to-market.

Cross-channel experiences

There is a reason why Drupal's API-first initiative is one of the topics I've talked and written the most about in 2016; it enables Drupal to "move beyond the page" and integrate with different user engagement systems that can eliminate inefficiencies and improve the user experience of traditional websites.

We're quickly headed to a world where websites are evolving into cross-channel experiences, which includes push notifications, conversational UIs, and more. Conversational UIs, such as chatbots and voice assistants, will prevail because they improve and redefine the customer experience.

Personalization and contextualization

In the 90s, personalization meant that websites could address authenticated users by name. I remember the first time I saw my name appear on a website; I was excited! Obviously personalization strategies have come a long way since the 90s. Today, websites present recommendations based on a user's most recent activity, and consumers expect to be provided with highly tailored experiences. The drive for greater personalization and contextualization will never stop; there is too much value in removing friction from the user experience. When a commerce website can predict what you like based on past behavior, it eliminates friction from the shopping process. When a customer support website can predict what question you are going to ask next, it is able to provide a better customer experience. This is not only useful for the user, but also for the business. A more efficient user experience will translate into higher sales, improved customer retention and better brand exposure.

To keep pace with evolving user expectations, tomorrow's digital experiences will need to deliver more tailored, and even predictive customer experiences. This will require organizations to consume multiple sources of data, such as location data, historic clickstream data, or information from wearables to create a fine-grained user context. Data will be the foundation for predictive analytics and personalization services. Advancing user privacy in conjunction with data-driven strategies will be an important component of enhancing personalized experiences. Eventually, I believe that data-driven experiences will be the norm.

At Acquia, we started investing in contextualization and personalization in 2014, through the release of a product called Acquia Lift. Adoption of Acquia Lift has grown year over year, and we expect it to increase for years to come. Contextualization and personalization will become more pervasive, especially as different systems of engagements, big data, the internet of things (IoT) and machine learning mature, combine, and begin to have profound impacts on what the definition of a great user experience should be. It might take a few more years before trends like personalization and contextualization are fully adopted by the early majority, but we are patient investors and product builders. Systems like Acquia Lift will be of critical importance and premiums will be placed on orchestrating the optimal customer journey.

Conclusion

The history of the web dictates that lower-friction solutions will surpass what came before them because they eliminate inefficiencies from the customer experience. Friduction is a long-term trend. Websites, the internet of things, augmented and virtual reality, conversational UIs — all of these technologies will continue to grow because they will enable us to build lower-friction digital experiences.

Today I was attempting to update a local repository, when SSH complained about a changed fingerprint, something like the following:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:p4ZGs+YjsBAw26tn2a+HPkga1dPWWAWX+NEm4Cv4I9s.
Please contact your system administrator.
Add correct host key in /home/user/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /home/user/.ssh/known_hosts:9
ECDSA host key for 192.168.56.101 has changed and you have requested strict checking.
Host key verification failed.

I published the following diary on isc.sans.org: “My Little CVE Bot“.

The massive spread of the WannaCry ransomware last Friday was another good proof that many organisations still fail to patch their systems. Everybody admits that patching is a boring task. There are many constraints that make this process very difficult to implement and… apply! That’s why any help is welcome to know what to patch and when… [Read more]

[The post [SANS ISC] My Little CVE Bot has been first published on /dev/random]

May 17, 2017

  • First rule. You must understand the rules of scuba diving. If you don’t know or understand the rules of scuba diving, go to the second rule.
  • The second rule is that you never dive alone.
  • The third rule is that you always keep close enough to each other to perform a rescue of any kind.
  • The fourth rule is that you signal each other and therefore know each other’s signals. Underwater, communication is key.
  • The fifth rule is that you tell the others, for example, when you don’t feel well. The others want to know when you emotionally don’t feel well. Whenever you are insecure, you tell them. This is hard.
  • The sixth rule is that you don’t violate earlier agreed upon rules.
  • The seventh rule is that the given rules will be eclipsed the moment any form of panic occurs; you will restore the rules using rationalism first, pragmatism next, but emotional feelings last. No matter what.
  • The eighth rule is that the seventh rule is key to survival.

These rules make scuba diving an excellent learning school for software development project managers.

May 16, 2017

I have to admit it, I’m not the biggest fan of Java. But, when they asked me to prepare a talk for first-year students who are currently learning to code using Java, I decided it was time to challenge some of my prejudices. As I selected continuous integration as the topic of choice, I started out by looking at all available tools to quickly set up a reliable Java project. Having played with dotnet core over the past months, I was looking for a tool that could do a bit of the same: a straightforward CLI interface that can create a project out of the box to mess around with. Maven proved to be of little help, but gradle turned out to be exactly what I was looking for. Great, I gained some faith.

It’s only while creating my slides and looking for tooling that can be used specifically for Java that I had an epiphany. What if it were possible to create an entire developer environment using docker? No need for local dependencies like linting tools or gradle. No need to mess with an IDE to get everything set up. And, no more “it works on my machine”. The power and advantages of a CI tool, straight onto your own computer.

A quick search on Google points us to gradle’s own Alpine linux container. It comes with JDK8 out of the box, exactly what we’re looking for. You can create a new Java application with a single command:

docker run -v=$(pwd):/app --workdir=/app gradle:alpine gradle init --type java-application

This starts a container, creates a volume linked to your current working directory and initializes a brand new Java application using gradle init --type java-application. As I don’t feel like typing those commands all the time, I created a makefile to help me build and debug the app. Yes, you can debug the app while it’s running in the container: Java supports remote debugging out of the box, and any modern IDE that supports Java has support for remote debugging. Simply run the make debug command and attach to the remote debugging session on port 1044.

ROOT_DIR:=$(shell dirname $(realpath $(lastword $(MAKEFILE_LIST))))

build:
    docker run --rm -v=${ROOT_DIR}:/app --workdir=/app gradle:alpine gradle clean build

debug: build
    docker run --rm -v=${ROOT_DIR}:/app -p 1044:1044 --workdir=/app gradle:alpine java -classpath /app/build/classes/main -verbose -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=1044 App

Now that we have a codebase that uses the same tools to build, run and debug, we need to bring our coding standard to a higher level. First off we need a linting tool. Traditionally, people look at checkstyle when it comes to Java. And while that could be fine for you, I found that tool rather annoying to set up. XML is not something I like to mess with other than to create UI, so seeing this verbose config set me back. There simply wasn’t time to look at that. Even with the 2 different style guides, it would still require a bit of tweaking to get everything right and make the build pass.

As it turns out, there are other tools out there which feel a bit more 21st century. One of those is coala. Now, coala can be used as a linting tool on a multitude of languages, not just Java, so definitely take a look at it, even if you’re not into Java yourself. It’s a Python based tool which has a lot of neat little bears who can do things. The config is a breeze as it’s a simple ini-style file, and they provide a container so you can run the checks in an isolated environment. All in all, exactly what we’re looking for.

Let’s extend our makefile to run coala:

docker run --rm -v=${ROOT_DIR}:/app --workdir=/app coala/base coala --ci -V

I made sure to enable verbose logging, simply to be able to illustrate the tool to students. Feel free to disable that. You can easily control what coala needs to verify by creating a .coafile in the root of the repository. One of the major advantages to use coala over anything else, is that it can do both simple linting checks as well as full on static code analysis.

Let’s have a look at the settings I used to illustrate its power.

[Default]
files = src/**/*.java
language = java

[SPACES]
bears = SpaceConsistencyBear
use_spaces = True

[TODOS]
bears = KeywordBear

[PMD]
bears = JavaPMDBear
check_optimizations = true
check_naming = false

You can start out by defining a default. In my case, I’m telling coala to look for .java files which are written using Java. There are three bears being used. SpaceConsistencyBear, who will check for spaces and not tabs. KeywordBear, who dislikes //TODO comments in code, and JavaPMDBear, who invokes PMD to do some static code analysis. In the example, I had to set check_naming = false, otherwise I would have lost a lot of time fixing those errors (mostly due to my own lack of Java knowledge).

Now, whenever I want to validate my code and enforce certain rules for me and my team, I can use coala to achieve this. Simply run make validate and it will start the container and invoke coala. At this point, we can setup the CI logic in our makefile by simply combining the two commands.

ci: validate build

The command make ci will invoke coala and, if all goes well, use gradle to build and test the project. As a cherry on top, I also included test coverage. Using Jacoco, you can easily set up rules to fail the build when the coverage goes below a certain threshold. The tool is integrated directly into gradle and provides everything you need out of the box; simply add the following lines to your build.gradle file. This way, the build will fail if the coverage drops below 50%.

apply plugin: 'jacoco'

jacocoTestReport {
    reports {
        xml.enabled true
        html.enabled true
    }
}

jacocoTestCoverageVerification {
    violationRules {
        rule {
            limit {
                minimum = 0.5
            }
        }
    }
}

check.dependsOn jacocoTestCoverageVerification

Make sure to edit the build step in the makefile to also include Jacoco.

build:
    docker run --rm -v=${ROOT_DIR}:/app --workdir=/app gradle:alpine gradle clean build jacocoTestReport

The only thing we still need to do is select a CI service of choice. I made sure to add examples for both circleci and travis, each of which only require docker and an override to use our makefile instead of auto-detecting gradle and running that. The way we set up this project allows us to easily switch CI when we need to, which is not all that strange given the lifecycle of a software project. The tools we choose when we start out, might be selected to fit the needs at the time of creation, but nothing assures us that will stay true forever. Designing for change is not something we need to do in code alone, it has a direct impact on everything, so expect things to change and your assumptions to be challenged.

Have a look at the source code for all the info and the build files for the two services. Enjoy!

Source code

May 15, 2017

It's been a while since I thought about this design, but I finally had time to implement it the proper way, and "just in time" as I needed recently to migrate our Foreman instance to another host (from CentOS 6 to CentOS 7)

Within the CentOS Infra, we use Foreman as an ENC for our Puppet environments (multiple ones). For full automation between configuration management and monitoring, you need some "glue". The idea is that whatever you describe at the configuration management level should be authoritative and so automatically configuring the monitoring solution you have in place in your Infra.

In our case, that means that we have Foreman/puppet on one side, and Zabbix on the other side. Let's see how we can "link" the two sides.

What I've seen so far is that you use exported resources on each node, store them in another PuppetDB, and then on the monitoring node reapply all those resources. The problem with such a solution is that it's "expensive" and, when one thinks about it, a little bit strange to export the "knowledge" from Foreman back into another DB, and then let puppet compile a huge catalog on the monitoring side, even if nothing was changed.

One issue is also that in our Zabbix setup, we have some nodes that aren't really managed by Foreman/puppet (but by other automation around Ansible), so I had to use an intermediate step that other tools can also use/abuse for the same reason.

The other reason also is that I admit that I'm a fan of "event driven" configuration change, so my idea was :

  • update a host in Foreman (or groups of hosts, etc)
  • publish that change on a secure network through a message queue (so asynchronous so that it doesn't slow down the foreman update operation itself)
  • let Zabbix server know that change and apply it (like linking a template to a host)

So the good news is that it can be done really easily with several components:

Here is a small overview of the process :

(diagram: Foreman → MQTT → Zabbix)

Foreman hooks

Setting up foreman hooks is really easy: just install the pkg itself (tfm-rubygem-foreman_hooks.noarch), read the Documentation, and then create your scripts. There are some examples for Bash and python in the examples directory, but basically you just need to place some scripts at specific place[s]. In my case I wanted to "trigger" an event in the case of a node update (like adding a puppet class, or variable/paramater change) so I just had to place it under /usr/share/foreman/config/hooks/host/managed/update/.

One little remark though: if you add a new file, don't forget to restart foreman itself, so that it picks up that hook file, otherwise it will still be ignored and so not run.

Mosquitto

Mosquitto itself is available in your favorite rpm repo, so installing it is a breeze. The reason why I selected mosquitto is that it's very lightweight (package size is under 200Kb) and it supports TLS and ACLs out of the box.

For an introduction to MQTT/Mosquitto, I'd suggest you read Jan-Piet Mens' dedicated blog post about it. I'll even admit that I discovered it by attending one of his talks on the topic, back in the Loadays.org days :-)

Zabbix-cli

While one can always talk to the raw API of Zabbix, I found it useful to use a tool I was already using for various tasks around Zabbix: zabbix-cli. For people interested in using it on CentOS 6 or 7, I built the packages and they are on CBS.

So I plumbed it into a systemd unit file that subscribes to a specific MQTT topic, parses the needed information (like the hostname and the zabbix templates to link, unlink, etc) and then updates that in Zabbix itself (from the log output):

[+] 20170516-11:43 :  Adding zabbix template "Template CentOS - https SSL Cert Check External" to host "dev-registry.lon1.centos.org" 
[Done]: Templates Template CentOS - https SSL Cert Check External ({"templateid":"10105"}) linked to these hosts: dev-registry.lon1.centos.org ({"hostid":"10174"})

Cool, so now I don't have to worry about forgetting to tie a zabbix template to a host, as it's now done automatically. Needless to say, the deployment of those tools was of course automated and comes from Puppet/foreman :-)


For most of the history of the web, the website has been the primary means of consuming content. These days, however, with the introduction of new channels each day, the website is increasingly the bare minimum. Digital experiences can mean anything from connected Internet of Things (IoT) devices, smartphones, chatbots, augmented and virtual reality headsets, and even so-called zero user interfaces which lack the traditional interaction patterns we're used to. More and more, brands are trying to reach customers through browserless experiences and push-, not pull-based, content — often by not accessing the website at all.

Last year, we launched a new initiative called Acquia Labs, our research and innovation lab, part of the Office of the CTO. Acquia Labs aims to link together the new realities in our market, our customers' needs in coming years, and the goals of Acquia's products and open-source efforts in the long term. In this blog post, I'll update you on what we're working on at the moment, what motivates our lab, and how to work with us.


Alexa, ask GeorgiaGov

One of the Acquia Labs' most exciting projects is our ongoing collaboration with GeorgiaGov Interactive. Through an Amazon Echo integration with the Georgia.gov Drupal website, citizens can ask their government questions. Georgia residents will be able to find out how to apply for a fishing license, transfer an out-of-state driver's license, and register to vote just by consulting Alexa, which will also respond with sample follow-up questions to help the user move forward. It's a good example of how conversational interfaces can change civic engagement. Our belief is that conversational content and commerce will come to define many of the interactions we have with brands.

The state of Georgia has always been on the forefront of web accessibility. For example, from 2002 until 2006, Georgia piloted a time-limited text-to-speech telephony service which would allow website information and popular services like driver's license renewal to be offered to citizens. Today, it publishes accessibility standards and works hard to make all of its websites accessible for users of assistive devices. This Alexa integration for Georgia will continue that legacy by making important information about working with state government easy for anyone to access.

And as a testament to the benefits of innovation in open source and our commitment to open-source software, Acquia Labs backported the Drupal 8 module for Amazon Echo to Drupal 7.

Here's a demo video showing an initial prototype of the Alexa integration:

Shopping with chatbots

In addition to physical devices like the Amazon Echo, Acquia Labs has also been thinking about what is ahead for chatbots, another important component of the conversational web. Unlike in-home devices, chatbots are versatile because they can be used across multiple channels, whether on a native mobile application or a desktop website.

The Acquia Labs team built a chatbot demonstrating an integration with the inventory system and recipe collection available on the Drupal website of an imaginary grocery store. In this example, a shopper can interact with a branded chatbot named "Freshbot" to accomplish two common tasks when planning an upcoming barbecue.

First, the user can use the chatbot to choose the best recipes from a list of recommendations with consideration for number of attendees, dietary restrictions, and other criteria. Second, the chatbot can present a shopping list with correct quantities of the ingredients the shopper will need for the barbecue. The ability to interact with a chatbot assistant rather than having to research and plan everything on your own can make hosting a barbecue a much easier and more efficient experience.

Check out our demo video, "Shopping with chatbots", below:

Collaborating with our customers

Many innovation labs are able to work without outside influence or revenue targets by relying on funding from within the organization. But this can potentially create too much distance between the innovation lab and the needs of the organization's customers. Instead, Acquia Labs explores new ideas by working on jointly funded projects for our clients.

I think this model for innovation is a good approach for the next generation of labs. This vision allows us to help our customers stake ground in new territory while also moving our own internal progress forward. For more about our approach, check out this video from a panel discussion with our Acquia Labs lead Preston So, who introduced some of these ideas at SXSW 2017.

If you're looking at possibilities beyond what our current offerings are capable of today, if you're seeking guidance and help to execute on your own innovation priorities, or if you have a potential project that interests you but is too forward-looking right now, Acquia Labs can help.

Special thanks to Preston So for contributions to this blog post and to Nikhil Deshpande (GeorgiaGov Interactive) and ASH Heath for feedback during the writing process.

The post Ways in which the WannaCry ransomware could have been much worse appeared first on ma.ttias.be.

If you're in tech, you will have heard about the WannaCry/WannaCrypt ransomware doing the rounds. The infection started on Friday May 12th 2017 by exploiting MS17-010, a Windows SMB file sharing vulnerability. The virus exploited a known vulnerability, installed a cryptolocker and extorted the owner of the Windows machine to pay ransom to get the files decrypted.

As far as worms go, this one went viral at an unprecedented scale.

But there are some design decisions in this cryptolocker that prevented it from being much worse. This post is a thought exercise: the next ransomware will probably implement one of these methods. Make sure you're prepared.

Time based encryption

This WannaCry ransomware found the security vulnerability, installed the cryptolocker and immediately started encrypting the files.

Imagine the following scenario;

  • Day 1: worm goes round and infects machines via the vulnerable SMB service, installs a backdoor, keeps quiet, infects other machines
  • Day 14: worm activates itself, starts encrypting files

With WannaCrypt, it took a few hours to reach world-scale infections, alerting everyone and their grandmother that something big was going on. Mainstream media picked up on it. Train stations showed cryptolocker screens. Everyone started patching. What if the worm gets a few days' head start?

By keeping quiet, the attacker risks getting caught, but in many cases this can be avoided by excluding known IPv4 networks for banks or government organizations. How many small businesses or large organizations do you think would notice a sudden extra running .exe in the background? Not enough to trigger world-wide coverage, I bet.

Self-destructing files

A variation to the scenario above;

  • Day 1: worm goes round, exploits the SMB vulnerability, encrypts each file, but still allows files to be opened (1)
  • Day 30: worm activates itself, removes decryption key for file access and prompts for payment

How are your back-ups at that point? All files on the machine have some kind of hidden time bomb in them. Every version of that file you have in back-up is affected. The longer they can keep that hidden, the bigger the damage.

More variations of this exist, with Excel or VBA macros etc., and they all boil down to: modify the file, render it unusable unless proper identification is shown.

(1) This should be possible with shortcuts to the files that first open some kind of wrapper script to decrypt the files before they launch. The decryption key is stored in memory and re-requested from the Command & Control servers whenever the machine reboots.

Extortion with your friends

The current scheme is: your files get encrypted, you can pay to get your files back.

What if it's not your own files you're responsible for? What if it's the files of your colleagues, family or friends? What if you had to pay $300 to recover the files of someone you know?

Peer pressure works, especially if the blame angle is played. It's your fault someone you know got infected. Do you feel responsible at that point? Would that make you pay?

From a technical POV, it's tricky but not impossible to identify known associates of a victim. This could only happen at a smaller scale, but it might yield bigger rewards.

Cryptolocker + Windows Update DDoS?

Roughly 200,000 affected Windows PCs have been spotted online. There are probably a lot more that haven't made it into the online reports yet. Those are quite a few PCs to have control over, as an attacker.

The media is now jumping on the news, urging everyone to update. What if those 200k infected machines were to launch an effective DDoS against the Windows Update servers? With everyone trying to update, the pool of possible targets is shrinking every hour.

If you could effectively take down the means with which users can protect themselves, you can create bigger chaos and a bigger market to infect.

The next cryptolocker isn't going to be "just" a cryptolocker; in all likelihood it'll combine its encryption capabilities with even more damaging means.

Stay safe

How to prevent any of these?

  1. Enable auto-updates on all your systems (!!)
  2. Have frequent back-ups, store them long enough

Want more details? Check out my earlier post: Staying Safe Online – A short guide for non-technical people.

The post Ways in which the WannaCry ransomware could have been much worse appeared first on ma.ttias.be.

May 14, 2017

What is the meaning of life? Why are there living beings in the universe rather than just inert matter? For those of you who have already asked yourselves these questions, I have good news and bad news.

The good news is that science may have found an answer.

The bad news is that you are not going to like it.

The imperfections of the big bang

If the big bang had been a perfect event, the universe would be uniform and smooth today. Instead, imperfections appeared.

Under the influence of the fundamental forces, these imperfections clumped together until they formed stars and planets made of matter.

If today we are not a perfectly smooth soup of atoms but solid beings on a planet surrounded by vacuum, it is thanks to those imperfections!

The law of entropy

Thanks to thermodynamics, we have understood that nothing is lost and nothing is created. The energy of a system is constant. To cool its inside, a fridge necessarily has to heat the outside. The energy of the universe therefore is, and will remain, constant.

The same is not true of entropy!

To keep it simple, entropy can be seen as a measure of "energy quality". The higher the entropy, the less usable the energy.

For example, if you place a boiling cup of tea in a very cold room, the entropy of the system is low. Over time, the cup of tea cools down, the room warms up, and the entropy increases until it reaches its maximum when the cup and the room are at the same temperature. This very intuitive phenomenon may be due to quantum entanglement and may be at the root of our perception of the flow of time.

For an outside observer, the amount of energy in the room has not changed. The average temperature of the whole is still the same. Yet something has been lost: the energy can no longer be exploited.

It would have been possible, for example, to use the fact that the cup of tea warms the surrounding air to drive a turbine and generate electricity. That is no longer possible once the cup and the room are at the same temperature.

Without an external supply of energy, the entropy of any system will increase. The same goes for the universe: if the universe does not collapse under its own weight, the stars will inevitably cool down and die out like the cup of tea. The universe will inexorably become a perfect continuum at constant temperature. In English this is called the "Heat Death" of the universe.

The emergence of life

Life seems to be an exception. After all, are we not complex and highly ordered organisms, which implies very low entropy? How can we explain the emergence of life, and thus of elements with lower entropy than their environment, in a universe whose entropy keeps increasing?

Jeremy England, a physicist at MIT, offers a new and particularly original answer: life may simply be the most efficient way to dissipate heat and therefore to increase entropy.

On a planet like Earth, atoms and molecules are constantly bombarded with strong, usable energy: sunlight. This creates a situation of very low entropy.

Naturally, the atoms then organize themselves to dissipate that energy. Physically, the most efficient way to dissipate the received energy is to reproduce. By reproducing, matter creates entropy.

The first molecule capable of such a feat, RNA, was the first step towards life. The mechanisms of natural selection favouring reproduction then did the rest.

According to Jeremy England, life is mechanically inevitable as long as there is enough energy.

Humanity in the service of entropy

If England's theory is confirmed, it would be very bad news for humanity.

Because if the purpose of life is to maximize entropy, then what we are doing to the Earth, the rampant consumption, the wars, the nuclear bombs, is perfectly logical. Destroying the universe as quickly as possible to turn it into a soup of atoms is the very meaning of life!

The only dilemma we might face would then be: should we destroy the Earth right away, or should we manage to develop far enough to bring destruction to the rest of the universe?

In any case, the ultimate goal of life would be to make the universe perfect, bland, uniform. To destroy itself.

What is particularly distressing is that, seen from this angle, humanity seems to be doing incredibly well at it. Far too well…

 

Photo by Bardia Photography.

Enjoyed this read? Support the author on Tipeee, Patreon, Paypal or Liberapay. Even a symbolic donation makes all the difference! You can also find me on Facebook, Medium, Twitter or Mastodon.

This text is published under the CC-By BE license.

May 12, 2017

I published the following diary on isc.sans.org: “When Bad Guys are Pwning Bad Guys…“.

A few months ago, I wrote a diary about webshells[1] and the numerous interesting features they offer. There are plenty of web shells available; they are easy to find and install. They are usually delivered as one big obfuscated (read: Base64, ROT13 encoded and gzip'd) PHP file that can simply be dropped on a compromised computer. Some of them look nice and professional, like the RC-Shell… [Read more]

[The post [SANS ISC] When Bad Guys are Pwning Bad Guys… has been first published on /dev/random]

May 11, 2017

Imagine we want an editor that has undo and redo capability, but the operations on the editor are all asynchronous. This implies that undo and redo are also asynchronous operations.

We want all this to be available in QML, we want to use QFuture for the asynchronous stuff and we want to use QUndoCommand for the undo and redo capability.

But how do we do it?

First of all we will make a status object, to put the status of the asynchronous operations in (asyncundoable.h).

class AbstractAsyncStatus: public QObject
{
    Q_OBJECT

    Q_PROPERTY(bool success READ success CONSTANT)
    Q_PROPERTY(int extra READ extra CONSTANT)
public:
    AbstractAsyncStatus(QObject *parent):QObject (parent) {}
    virtual bool success() = 0;
    virtual int extra() = 0;
};

We will be passing it around as a QSharedPointer, so that lifetime management becomes easy. But typing that out is going to give us long APIs. So let’s make a typedef for that (asyncundoable.h).

typedef QSharedPointer<AbstractAsyncStatus> AsyncStatusPointer;

Now let’s make ourselves an undo command that allows us to wait for asynchronous undo and asynchronous redo. We’re combining QUndoCommand and QFutureInterface here (asyncundoable.h).

class AbstractAsyncUndoable: public QUndoCommand
{
public:
    AbstractAsyncUndoable( QUndoCommand *parent = nullptr )
        : QUndoCommand ( parent )
        , m_undoFuture ( new QFutureInterface<AsyncStatusPointer>() )
        , m_redoFuture ( new QFutureInterface<AsyncStatusPointer>() ) {}
    QFuture<AsyncStatusPointer> undoFuture()
        { return m_undoFuture->future(); }
    QFuture<AsyncStatusPointer> redoFuture()
        { return m_redoFuture->future(); }

protected:
    QScopedPointer<QFutureInterface<AsyncStatusPointer> > m_undoFuture;
    QScopedPointer<QFutureInterface<AsyncStatusPointer> > m_redoFuture;

};

Okay, let’s implement these with an example operation. First the concrete status object (asyncexample1command.h).

class AsyncExample1Status: public AbstractAsyncStatus
{
    Q_OBJECT
    Q_PROPERTY(bool example1 READ example1 CONSTANT)
public:
    AsyncExample1Status ( bool success, int extra, bool example1,
                          QObject *parent = nullptr )
        : AbstractAsyncStatus(parent)
        , m_example1 ( example1 )
        , m_success ( success )
        , m_extra ( extra ) {}
    bool example1() { return m_example1; }
    bool success() Q_DECL_OVERRIDE { return m_success; }
    int extra() Q_DECL_OVERRIDE { return m_extra; }
private:
    bool m_example1 = false;
    bool m_success = false;
    int m_extra = -1;
};

Let’s make a QUndoCommand that uses a timer to simulate asynchronous behavior. We could also use QtConcurrent’s run function to use a QThreadPool and QRunnable instances that also implement QFutureInterface, of course. Seasoned Qt developers know what I mean. For the sake of example, I wanted to illustrate that QFuture can also be used for asynchronous things that aren’t threads. We’ll use the lambda because QUndoCommand isn’t a QObject, so no easy slots. That’s the only reason (asyncexample1command.h).

class AsyncExample1Command: public AbstractAsyncUndoable
{
public:
    AsyncExample1Command(bool example1, QUndoCommand *parent = nullptr)
        : AbstractAsyncUndoable ( parent ), m_example1(example1) {}
    void undo() Q_DECL_OVERRIDE {
        m_undoFuture->reportStarted();
        QTimer *timer = new QTimer();
        timer->setSingleShot(true);
        QObject::connect(timer, &QTimer::timeout, [=]() {
            QSharedPointer<AbstractAsyncStatus> result;
            result.reset(new AsyncExample1Status ( true, 1, m_example1 ));
            m_undoFuture->reportFinished(&result);
            timer->deleteLater();
        } );
        timer->start(1000);
    }
    void redo() Q_DECL_OVERRIDE {
        m_redoFuture->reportStarted();
        QTimer *timer = new QTimer();
        timer->setSingleShot(true);
        QObject::connect(timer, &QTimer::timeout, [=]() {
            QSharedPointer<AbstractAsyncStatus> result;
            result.reset(new AsyncExample1Status ( true, 2, m_example1 ));
            m_redoFuture->reportFinished(&result);
            timer->deleteLater();
        } );
        timer->start(1000);
    }
private:
    QTimer m_timer;
    bool m_example1;
};

Let's now define something we get from the strategy design pattern: an editor behavior. Implementations provide an editor with all of its editing behaviors (abtracteditorbehavior.h).

class AbstractEditorBehavior : public QObject
{
    Q_OBJECT
public:
    AbstractEditorBehavior( QObject *parent) : QObject (parent) {}

    virtual QFuture<AsyncStatusPointer> performExample1( bool example1 ) = 0;
    virtual QFuture<AsyncStatusPointer> performUndo() = 0;
    virtual QFuture<AsyncStatusPointer> performRedo() = 0;
    virtual bool canRedo() = 0;
    virtual bool canUndo() = 0;
};

So far so good, so let's make an implementation that has a QUndoStack and that is therefore undoable (undoableeditorbehavior.h).

class UndoableEditorBehavior: public AbstractEditorBehavior
{
public:
    UndoableEditorBehavior(QObject *parent = nullptr)
        : AbstractEditorBehavior (parent)
        , m_undoStack ( new QUndoStack ){}

    QFuture<AsyncStatusPointer> performExample1( bool example1 ) Q_DECL_OVERRIDE {
        AsyncExample1Command *command = new AsyncExample1Command ( example1 );
        m_undoStack->push(command);
        return command->redoFuture();
    }
    QFuture<AsyncStatusPointer> performUndo() {
        const AbstractAsyncUndoable *undoable =
            dynamic_cast<const AbstractAsyncUndoable *>(
                    m_undoStack->command( m_undoStack->index() - 1));
        m_undoStack->undo();
        return const_cast<AbstractAsyncUndoable*>(undoable)->undoFuture();
    }
    QFuture<AsyncStatusPointer> performRedo() {
        const AbstractAsyncUndoable *undoable =
            dynamic_cast<const AbstractAsyncUndoable *>(
                    m_undoStack->command( m_undoStack->index() ));
        m_undoStack->redo();
        return const_cast<AbstractAsyncUndoable*>(undoable)->redoFuture();
    }
    bool canRedo() Q_DECL_OVERRIDE { return m_undoStack->canRedo(); }
    bool canUndo() Q_DECL_OVERRIDE { return m_undoStack->canUndo(); }
private:
    QScopedPointer<QUndoStack> m_undoStack;
};

Now we only need an editor, right (editor.h)?

class Editor: public QObject
{
    Q_OBJECT
    Q_PROPERTY(AbstractEditorBehavior* editorBehavior READ editorBehavior CONSTANT)
public:
    Editor(QObject *parent=nullptr) : QObject(parent)
        , m_editorBehavior ( new UndoableEditorBehavior ) { }
    AbstractEditorBehavior* editorBehavior() { return m_editorBehavior.data(); }
    Q_INVOKABLE void example1Async(bool example1) {
        QFutureWatcher<AsyncStatusPointer> *watcher = new QFutureWatcher<AsyncStatusPointer>(this);
        connect(watcher, &QFutureWatcher<AsyncStatusPointer>::finished,
                this, &Editor::onExample1Finished);
        watcher->setFuture ( m_editorBehavior->performExample1(example1) );
    }
    Q_INVOKABLE void undoAsync() {
        if (m_editorBehavior->canUndo()) {
            QFutureWatcher<AsyncStatusPointer> *watcher = new QFutureWatcher<AsyncStatusPointer>(this);
            connect(watcher, &QFutureWatcher<AsyncStatusPointer>::finished,
                    this, &Editor::onUndoFinished);
            watcher->setFuture ( m_editorBehavior->performUndo() );
        }
    }
    Q_INVOKABLE void redoAsync() {
        if (m_editorBehavior->canRedo()) {
            QFutureWatcher<AsyncStatusPointer> *watcher = new QFutureWatcher<AsyncStatusPointer>(this);
            connect(watcher, &QFutureWatcher<AsyncStatusPointer>::finished,
                    this, &Editor::onRedoFinished);
            watcher->setFuture ( m_editorBehavior->performRedo() );
        }
    }
signals:
    void example1Finished( AsyncExample1Status *status );
    void undoFinished( AbstractAsyncStatus *status );
    void redoFinished( AbstractAsyncStatus *status );
private slots:
    void onExample1Finished() {
        QFutureWatcher<AsyncStatusPointer> *watcher =
                dynamic_cast<QFutureWatcher<AsyncStatusPointer>*> (sender());
        emit example1Finished( watcher->result().objectCast<AsyncExample1Status>().data() );
        watcher->deleteLater();
    }
    void onUndoFinished() {
        QFutureWatcher<AsyncStatusPointer> *watcher =
                dynamic_cast<QFutureWatcher<AsyncStatusPointer>*> (sender());
        emit undoFinished( watcher->result().objectCast<AbstractAsyncStatus>().data() );
        watcher->deleteLater();
    }
    void onRedoFinished() {
        QFutureWatcher<AsyncStatusPointer> *watcher =
                dynamic_cast<QFutureWatcher<AsyncStatusPointer>*> (sender());
        emit redoFinished( watcher->result().objectCast<AbstractAsyncStatus>().data() );
        watcher->deleteLater();
    }
private:
    QScopedPointer<AbstractEditorBehavior> m_editorBehavior;
};

Okay, let’s register this up to make it known in QML and make ourselves a main function (main.cpp).

#include <QtQml>
#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <editor.h>
int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);
    QQmlApplicationEngine engine;
    qmlRegisterType<Editor>("be.codeminded.asyncundo", 1, 0, "Editor");
    engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
    return app.exec();
}

Now, let’s make ourselves a simple QML UI to use this with (main.qml).

import QtQuick 2.3
import QtQuick.Window 2.2
import QtQuick.Controls 1.2
import be.codeminded.asyncundo 1.0
Window {
    visible: true
    width: 360
    height: 360
    Editor {
        id: editor
        onUndoFinished: text.text = "undo"
        onRedoFinished: text.text = "redo"
        onExample1Finished: text.text = "whoohoo " + status.example1
    }
    Text {
        id: text
        text: qsTr("Hello World")
        anchors.centerIn: parent
    }
    Action {
        shortcut: "Ctrl+z"
        onTriggered: editor.undoAsync()
    }
    Action {
        shortcut: "Ctrl+y"
        onTriggered: editor.redoAsync()
    }
    Button  {
        onClicked: editor.example1Async(99);
    }
}

You can find the sources of this complete example at github. Enjoy!

May 10, 2017

For years, Google has been offering two nice features on its gmail.com platform that give you more control over your email address. You can play with the "+" (plus) sign or "." (dot) to create more email addresses linked to your primary one. Let's take an example with John, who's the owner of john.doe@gmail.com. John can share the email address "john.doe+soccer@gmail.com" with his friends playing soccer or "john.doe+security@gmail.com" to register on forums talking about information security. It's the same with dots: Google just ignores them. So "john.doe@gmail.com" is the same as "john.d.oe@gmail.com". Many people use the "+" format to manage the flood of email they receive every day and automatically process it / store it in separate folders. That's nice, but it can also be very useful to discover where an email address is being used.

A few days ago, Troy Hunt, the owner of the haveibeenpwned.com service (if you don't know it yet, just have a look and register!), announced that new massive dumps were in the wild, for a total of ~1B passwords! The new dumps are called "Exploit.In" (593M entries) and "Anti Public Combo List" (427M entries). The sources of the leaks are not clear. I grabbed a copy of the data and searched for Google "+" email addresses.

Not surprisingly, I found 28K+ unique accounts! I extracted the strings after the "+" sign and indexed everything in Splunk:
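
For reference, the extraction itself only needs a small shell pipeline before indexing; here is a minimal sketch, where the dump file name is hypothetical:

# Pull the gmail.com addresses containing a "+" out of the dump, keep only the
# tag between "+" and "@", then count the unique tags.
grep -Eio '[a-z0-9._%-]+\+[a-z0-9._-]+@gmail\.com' dump.txt \
  | tr 'A-Z' 'a-z' \
  | sed -E 's/.*\+([^@]+)@gmail\.com/\1/' \
  | sort | uniq -c | sort -rn > tags.csv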

Gmail Tags

As you can see, we recognise some known online services:

  • xtube (adult content)
  • friendster (social network)
  • filesavr (file exchange service in the cloud)
  • linkedin (social network)
  • bioware (gaming platform)

This does not mean that those platforms were breached (ok, LinkedIn was) but it can give some indicators…

Here is a dump of the top identified tags (with more than 3 characters to keep the list useful). You can download the complete CSV here.

Tag              Count
xtube            37
spam             18
filedropper      17
daz3d            12
bioware          11
friendster       10
savage           10
linkedin         8
eharmony         7
filesavr         6
bryce            5
savage2          5
porn             4
precyl           4
bravenet         3
comicbookdb      3
freebie          3
freebiejeebies   3
freebies         3
hackforums       3
junk             3
kffl             3
social           3
youporn          3
97979797         2
brice            2
dazstudio        2
detnews          2
eharm            2
free             2
gamigo           2
hack             2
heroesofnewerth  2
itickets         2
lists            2
luther           2
paygr            2
policeauctions   2
test             2
texasmonthly     2
toddy            2
trzy             2
usercash         2
xtube2           2

[The post Identifying Sources of Leaks with the Gmail “+” Feature has been first published on /dev/random]

May 09, 2017

I’m happy to announce the immediate availability of FileFetcher 4.0.0.

FileFetcher is a small PHP library that provides an OO way to retrieve the contents of files.

What's OO about such an interface? You can inject an implementation of it into a class, so that the class does not know about the details of the implementation and you can choose which implementation to provide. Calling file_get_contents does not allow changing the implementation, as it is a procedural/static call making use of global state.

Library number 8234803417 that does this exact thing? Probably not. The philosophy behind this library is to provide a very basic interface (FileFetcher) that while insufficient for plenty of use cases, is ideal for a great many, in particular replacing procedural file_get_contents calls. The provided implementations are to facilitate testing and common generic tasks around the actual file fetching. You are encouraged to create your own core file fetching implementation in your codebase, presumably an adapter to a library that focuses on this task such as Guzzle.

So what is in it then? The library provides several trivial implementations of the FileFetcher interface at its heart:

  • SimpleFileFetcher: Adapter around file_get_contents
  • InMemoryFileFetcher: Adapter around an array provided to its constructor
  • ThrowingFileFetcher: Throws a FileFetchingException for all calls (added after 4.0)
  • NullFileFetcher: Returns an empty string for all calls (added after 4.0)
  • StubFileFetcher: Returns a stub value for all calls (added after 4.0)

It also provides a number of generic decorators:

Version 4.0.0 brings PHP7 features (scalar type hints \o/) and adds a few extra handy implementations. You can add the library to your composer.json (jeroen/file-fetcher) or look at the documentation on GitHub. You can also read about its inception in 2013.

Jenkins logo

This Thursday, June 15th 2017 at 7pm, the 60th Mons session of the Belgian Jeudis du Libre will take place.

The topic of this session: continuous integration with Jenkins

Theme: continuous integration | automated deployment | processes

Audience: everyone

Speaker: Dimitri Durieux (CETIC)

Venue: Campus technique (ISIMs) of the Haute Ecole en Hainaut, Avenue V. Maistriau 8a, Salle Académique, 2nd building (see this map on the ISIMs website, and here on the OpenStreetMap map).

Attendance is free and only requires registering your name, preferably in advance, or at the entrance of the session. Please indicate your intention by registering via the page http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also benefit from the support of our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, do not hesitate to check the agenda and to subscribe to the mailing list in order to receive all the announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organized on the premises of, and in collaboration with, the Mons higher education institutions involved in IT education (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description: In order to improve resource management, virtualization and the cloud have revolutionized the way we design and deploy IT applications. The complexity of IT systems has evolved considerably. It is no longer a matter of deploying and maintaining one monolithic application, but a set of services interacting with each other. Developers therefore face a lot of complexity when testing, integrating and deploying applications.

On top of this new technical context come agile, fast and iterative development methodologies. They favour the regular deployment of new versions of components that incorporate new features. Testing, integrating and deploying therefore has to happen at a much more sustained pace than before.

Developers thus have to carry out complex testing, integration and deployment tasks on a very regular basis. They need a tool, and several exist. In this presentation, I propose to introduce the Jenkins tool. It is the open-source tool with the largest number of customization possibilities to date. This presentation will be an opportunity to cover the basics of Jenkins and of so-called continuous integration practices. It will be illustrated with concrete examples of how the solution is used.

Jenkins is a tool that is installed as a web application and allows automated tasks to be configured. The possible tasks are numerous; let us mention in particular compiling or publishing a module, integrating a system, testing a feature, or deploying to the production environment. Tasks can be triggered automatically as needed. For example, an integration test can be run on every change to the code base of one of the modules. The individual features remain fairly simple and answer very specific needs, but they can be combined to automate most peripheral tasks so that more time is available for developing your software.

Short bio: Dimitri Durieux is a member of CETIC specialized in software quality. He has worked on several research projects in contexts with varying complexity and constraints, and helps Walloon industries produce better quality code. In particular, he has assessed the code quality and development practices of more than 50 Walloon IT companies in order to help them put effective quality management in place. During his time at CETIC, he has had the opportunity to deploy and maintain several Jenkins instances, while demonstrating the feasibility of complex use cases of this technology for several Walloon companies.

7-Eleven is the largest convenience store chain in the world with 60,000 locations around the globe. That is more than any other retailer or food service provider. In conjunction with the release of its updated 7-Rewards program, 7-Eleven also relaunched its website and mobile application using Drupal 8! Check it out at https://www.7-eleven.com, and grab a Slurpee while you're at it!

7 eleven

During our development we are using the Gitflow process. (Some may prefer others, but this works well in the current situation). In order to do this we are using the JGitflow Maven plugin (developed by Atlassian)

If our software is ready to go out of the door, we make a release branch. This release branch is then filled with documentation with regards to the release (Jira tickets, git logs, build numbers, … pretty much everything to make it trackable inside the artifact).
From time to time the back merge of this release branch towards Develop would fail and cause us all kinds of trouble. By default the JGitflow Maven plugin tries to do a back merge and delete the branch. This was a behaviour we wanted to change.
So during my spare time (read nights) I decided it was time to do some nightly hacking. The result is a new maven option “backMergeToDevelop” that defaults to true but can be overridden.
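
Assuming the pull request gets merged one day, using the option would presumably look something like the command below; jgitflow:release-finish is the standard plugin goal, but exposing backMergeToDevelop as a -D property is an assumption on my side (it may need to be set in the plugin <configuration> block instead):

# finish the release without merging the release branch back to develop
mvn jgitflow:release-finish -DbackMergeToDevelop=false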

I created the necessary tests to validate that it works and created a pull request; however, there has been no follow-up from Atlassian so far… Anybody from Atlassian reading this? Reach out to me so I can get this into the next release (if any is coming…?)

It seems there have been no commits since 2015-09 …

 

May 07, 2017

It's not a secret that I've been playing/experimenting with OpenStack over the last few days. When I mention OpenStack, I should really say RDO, as it's RPM packaged, built and tested on the CentOS infra.

Now that it's time to deploy it in production, you should have a deeper look at how to proceed and which tool to use. Sure, Packstack can help you set up a quick PoC, but after some discussions with people hanging out in the #rdo IRC channel on Freenode, it seems that almost everybody agrees it's not the kind of tool you want to use for a proper deployment.

So let's have a look at the available options. While I really like/prefer Ansible, we (the CentOS Project) still use Puppet as our configuration management tool, with Foreman as the ENC. So let's look at both options.

  • Ansible : Lots of native modules exist to manage an existing/already deployed OpenStack cloud, but nothing that really helps you set one up from scratch. OTOH it's true that OpenStack-Ansible exists, but that sets up the OpenStack components in LXC containers, and I wasn't really comfortable with the whole idea (YMMV)
  • Puppet : Lots of Puppet modules exist, so you can reuse/import those into your existing Puppet setup; this seems to be the preferred method when discussing with people in #rdo (when not using TripleO, though)

So, after some analysis, and despite the fact that I really prefer Ansible over Puppet, I decided (so that it could still make sense in our infra) to go the "puppet modules" way. That was the beginning of a journey in which I also saw a lot of yaks to shave.

It started with me trying to "just" reuse and adapt some existing modules I found. Wrong. And it's even funny because one of my mantras is: "Don't try to automate what you can't understand from scratch" (and I fully agree with Matthias' thoughts on this).

So one can just read all the OpenStack Puppet modules and then try to understand how to assemble them to build a cloud. But I remembered that Packstack itself is Puppet-driven. So I decided to have a look at what it generates and start from that to write my own module from scratch. How to proceed? Easy: on a VM, just install packstack, generate the answer file, adapt it to your needs, and generate the manifests:

 yum install -y centos-release-openstack-ocata && yum install openstack-packstack -y
 packstack --gen-answer-file=answers.txt
 vim answers.txt
 packstack --answer-file=answers.txt --dry-run
 * The installation log file is available at: /var/tmp/packstack/20170508-101433-49cCcj/openstack-setup.log
 * The generated manifests are available at: /var/tmp/packstack/20170508-101433-49cCcj/manifests

So now we can have a look at all the generated manifests and start writing our own from scratch, re-importing all the needed OpenStack Puppet modules. That's what I did... but I started to encounter some issues. The first one was that the Puppet version we were using was 3.6.2 (everywhere, on every release/arch we support, so CentOS 6 and 7, and x86_64, i386, aarch64, ppc64, ppc64le).

One of the OpenStack components is RabbitMQ, but the OpenStack modules rely on the puppetlabs module to deploy/manage it. You'll see a lot of those external modules being called/needed by the OpenStack Puppet modules. The first thing I had to do was investigate our own modules, as some have the same name but don't come from puppetlabs/the Forge. Instead of analyzing all of those, I moved everything RDO-related to a different environment so that it wouldn't conflict with some of our existing modules. Back to the RabbitMQ one: Puppet threw errors when just trying to use it. First yak to shave: updating the whole CentOS infra to a higher Puppet version because of a Puppet bug. So let's rebuild Puppet for CentOS 6/7 with a higher version on CBS.

That of course means testing our own modules on our test Foreman/puppetmasterd instance first, and as the upgrade worked, I applied it everywhere. Good, so let's jump to the next yak.

After the RabbitMQ issue was solved, I encountered other ones, this time coming from the OpenStack Puppet modules, as the .rb code used for types/providers expected Ruby 2 and not 1.8.3, which was the one available on our puppetmasterd (yeah, our Foreman was on a CentOS 6 node). So, another yak to shave: migrating our Foreman instance from CentOS 6 to a new CentOS 7 node. Basically installing a CentOS 7 node with the same Foreman version that was running on the CentOS 6 node and then following the migration procedure, but then, again, time lost testing the update/upgrade and also all the other modules, etc. (One can see why I prefer agentless config management.)

Finally, I found that some of the OpenStack Puppet modules don't touch the whole config. Let me explain why. In OpenStack Ocata, some things are mandatory, like the Placement API, but despite all the classes being applied, I had issues getting it to run correctly when deploying an instance. It's true that I initially had a bug in my Puppet code for the user/password used to configure the RabbitMQ settings, but that was solved and also applied correctly in /etc/nova/nova.conf (the "transport_url=" setting). But the OpenStack Nova services (all nova-*.log files, btw) kept saying that the given credentials were refused by RabbitMQ, even though they worked when tested manually.

After checking the RabbitMQ logs, I saw that despite what was configured in nova.conf, the services were still trying to use the wrong user/pass to connect to RabbitMQ. Strange, as ::nova::cell_v2::simple_setup was included and was also supposed to use the transport_url declared at the nova.conf level (and so configured by ::nova). That's how I discovered that something "ugly" happens: even if you modify nova.conf, some settings are stored in the MySQL DB, and you can see those (the "wrong" ones in my case) with:

nova-manage cell_v2 list_cells --debug

Something to keep in mind for the initial deployment: if your RabbitMQ user/pass needs to be changed, Puppet will not complain, but it will only update the conf file, not the settings that were first imported by Puppet into the DB (table nova_api.cell_mapping, if you're interested). After that, everything was running, and I reinstalled/reprovisioned my test nodes multiple times, applying the Puppet module/manifests from puppetmasterd, to confirm.
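
To compare what the config file says with what was actually imported into the API database, something like this does the trick (the MySQL credentials/defaults are whatever your deployment uses):

 # what the config file says
 grep '^transport_url' /etc/nova/nova.conf
 # what was imported into the cell mapping when cell_v2 was set up
 mysql -N -e 'SELECT name, transport_url FROM nova_api.cell_mapping;'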

That was quite a journey; it's probably only the beginning, but it's a good start. Now to investigate other options for Cinder/Glance, as it seems Gluster was deprecated and I'd like to know why.

Hope this helps if you need to bootstrap OpenStack with Puppet!

May 06, 2017

In my computing lifetime, I've used Windows 3, NT, 95, … 10, Linux (mostly Gnome from 2002 onwards) and, since late 200X, Mac. Currently I'm mostly using a Mac because it fits nicely between Windows and Linux.

At a certain point I was doing a demo on my Linux laptop and got a blue screen in front of a client. Pretty much a no-no for everybody… So I decided to switch to Mac, and ever since I've had no problems doing demos and presentations.

My Mac is already pretty old (read: 3+ years). As I've installed a lot on it, I decided to reinstall Sierra. Try the good old Windows way: reinstall and get a fast machine again. But what happened pretty much blew my mind…

I did a clean install and afterwards checked my laptop's memory consumption… the bare system was still eating 7 gigs of memory… Okay, I may have 16 in total, but why would my core OS, without any additional installations, need 7 gigs… This is utter madness and even a complete rip-off. This would mean that in order to run a decent Mac Sierra you always need 16 gigs to run at a decent speed… I'm following Microsoft pretty closely as they are more than ever focusing on good products and helping the open-source community (did you know Microsoft is the #1 contributor to open-source software?!). Even their Windows 10 is looking like a decent product again…

My current Mac is still doing fine, so I hope to postpone the purchase of a new machine to a far future… However, if I needed to purchase something at the moment, I wouldn't know what to buy.
The new Macs really scare me. The touch bar is something that nobody seems to find useful; even worse, pretty much everybody I talked to hates it and finds it a waste of money… For myself, I find Microsoft did a good job with Windows 10, however the command-line interface is still not up to par with the one from Mac or Linux… So if I had to buy something right now, it would probably be a laptop with an SSD, an i7, and 16 gigs or more of disk space that is completely supported on Linux. I would then install Linux Mint, as that's still the distro that seems the most user-friendly to me… If you have other suggestions, just let me know!!!

Let's just hope Apple can turn things around and make decent desktops again that don't eat all your memory for no good reason.

I published the following diary on isc.sans.org: “The story of the CFO and CEO…“.

I read an interesting article in a Belgian IT magazine[1]. Every year, they organise a big survey to collect feelings from people working in the IT field (not only security). It is very broad and covers their salary, work environments, expectations, etc. For infosec people, one of the key points was that people wanted to attend more trainings and conferences… [Read more]

[The post [SANS ISC] The story of the CFO and CEO… has been first published on /dev/random]

May 05, 2017

I published the following diary on isc.sans.org: “HTTP Headers… the Achilles’ heel of many applications“.

When browsing a target web application, a pentester is looking for all “entry” or “injection” points present in the pages. Everybody knows that a static website with pure HTML code is less juicy compared to a website with many forms and gadgets where visitors may interact with it. Classic vulnerabilities (XSS, SQLi) are based on the user input that is abused to send unexpected data to the server… [Read more]

[The post [SANS ISC] HTTP Headers… the Achilles’ heel of many applications has been first published on /dev/random]

May 02, 2017

Today, while hunting, I found a malicious HTML page in my spam trap. The page was a fake JP Morgan Chase bank login page. Nothing fancy. When I find such material, I usually search for "POST" HTTP requests to collect URLs and visit the websites that receive the victims' data. As usual, the website was not properly protected and all files were readable. This one looked interesting:

Data File

The first question was: is this data relevant? Probably not… Why?

Today, many attackers protect their malicious websites via an .htaccess file to restrict access to their victims only. In this case, the Chase bank being based in the US, we could expect most of the visitors' IP addresses to be geolocated there, but that was not the case this time. I downloaded the data file, which contained 503 records. Indeed, most of them contained empty or irrelevant information. So I decided to have a look at the IP addresses. Who's visiting the phishing site? Let's generate some statistics!

$ grep ^ip: data.txt |cut -d ' ' -f 2 | sort -u >victims.csv
$ wc -l victims.csv
150

With Splunk, we can easily display them on a fancy map:

| inputlookup victims.csv | iplocation IP \
| stats count by IP, lat, lon, City, Country, Region

IP Map

Here is the top-5 of countries which visited the phishing page or, more precisely, which submitted a POST request:

United States 64
United Kingdom 13
France 11
Germany 7
Australia 5

Some IP addresses visited the website multiple times:

37.187.173.11 187
51.15.46.11 77
219.117.238.170 21
87.249.110.180 9
95.85.8.153 6

A reverse lookup on the IP addresses revealed some interesting information:

  • The Google App Engine was the top visitor
  • Many VPS providers visited the page, probably owned by researchers (OVH, Amazon EC2)
  • Services protecting against phishing sites visited the page (ex: phishtank.com, phishmongers.com, isitphishing.org)
  • Many Tor exit-nodes
  • Some online URL scanners (urlscan.io)
  • Some CERTS (CIRCL)

Two nice names were found:

  • Trendmicro
  • Trustwave

No real victim left his/her data on the fake website. Some records contained data, but fake data (albeit probably entered manually). All the traffic was generated by crawlers, bots and security tools…

[The post Who’s Visiting the Phishing Site? has been first published on /dev/random]

The post How to enable TLS 1.3 on Nginx appeared first on ma.ttias.be.

Since Nginx 1.13, support has been added for TLSv1.3, the latest version of the TLS protocol. Depending on when you read this post, chances are you're running an older version of Nginx at the moment, which doesn't yet support TLS 1.3. In that case, consider running Nginx in a container for the latest version, or compiling Nginx from source.

Enable TLSv1.3 in Nginx

I'm going to assume you already have a working TLS configuration. It'll include configs like these;

...
ssl_protocols               TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers                 ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256;
ssl_prefer_server_ciphers   on;
ssl_ecdh_curve              secp384r1;
...

And quite a few more parameters.

To enable TLS 1.3, add TLSv1.3 to the ssl_protocols list.

ssl_protocols               TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;

And reload your Nginx configuration.

Test if your Nginx version supports TLS 1.3

Add the config as shown above, and test the Nginx configuration.

$ nginx -t
nginx: [emerg] invalid value "TLSv1.3" in /etc/nginx/conf.d/ma.ttias.be.conf:34
nginx: configuration file /etc/nginx/nginx.conf test failed

If you see the message above, your Nginx version doesn't support TLS 1.3. A working config will tell you this;

$ nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

If you don't see any errors, your Nginx version supports TLS 1.3.

Further requirements

Now that you've told Nginx to use TLS 1.3, it will use TLS 1.3 where available, however ... there aren't many libraries out there that offer TLS 1.3.

For instance, OpenSSL is still debating the TLS 1.3 implementation, which seems reasonable because to the best of my knowledge, the TLS 1.3 spec isn't final yet. There's TLS 1.3 support included in the very latest OpenSSL version though, but there doesn't appear to be a sane person online that actually uses it.

TLSv1.3 draft-19 support is in master if you "config" with "enable-tls1_3". Note that draft-19 is not compatible with draft-18 which is still be used by some other libraries. Our draft-18 version is in the tls1.3-draft-18 branch.
TLS 1.3 support in OpenSSL

In short: yes, you can enable TLS 1.3 in Nginx, but I haven't found an OS & library that will allow me to actually use TLS 1.3.
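
For completeness, once you do have a client-side OpenSSL build with TLS 1.3 enabled (e.g. built with "enable-tls1_3", as mentioned above), a quick way to check what actually gets negotiated would be something like this; the hostname is a placeholder and the -tls1_3 flag only exists in TLS 1.3-capable builds:

# only works with an openssl binary that was built with TLS 1.3 support
openssl s_client -connect example.com:443 -tls1_3 < /dev/null 2>/dev/null | grep -iE 'protocol|cipher is'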

The post How to enable TLS 1.3 on Nginx appeared first on ma.ttias.be.

May 01, 2017

Drupal community

Last week, 3,271 people gathered at DrupalCon Baltimore to share ideas, to connect with friends and colleagues, and to collaborate on both code and community. It was a great event. One of my biggest takeaways from DrupalCon Baltimore is that Drupal 8's momentum is picking up more and more steam. There are now about 15,000 Drupal 8 sites launching every month.

I want to continue the tradition of sharing my State of Drupal presentations. You can watch a recording of my keynote (starting at 24:00) or download a copy of my slides here (108 MB).



The first half of my presentation provided an overview of Drupal 8 updates. I discussed why Drupal is for ambitious digital experiences, how we will make Drupal upgrades easier and why we added four new Drupal 8 committers recently.

The second half of my keynote highlighted the newest improvements to Drupal 8.3, which was released less than a month ago. I showcased how an organization like The Louvre could use Drupal 8 to take advantage of new or improved site builder (layouts video, workflow video), content author (authoring video) and end user (BigPipe video, chatbot video) features.

I also shared that the power of Drupal lies in its ability to support the spectrum of both traditional websites and decoupled applications. Drupal continues to move beyond the page, and is equipped to support new user experiences and distribution platforms, such as conversational user interfaces. The ability to support any user experience is driving the community's emphasis on making Drupal API-first, not API-only.

Finally, it was really rewarding to spotlight several Drupalists that have made an incredible impact on Drupal. If you are interested in viewing each spotlight, they are now available on my YouTube channel.

Thanks to all who made DrupalCon Baltimore a truly amazing event. Every year, DrupalCon allows the Drupal community to come together to re-energize, collaborate and celebrate. Discussions on evolving Drupal's Code of Conduct and community governance were held and will continue to take place virtually after DrupalCon. If you have not yet had the chance, I encourage you to participate.

Luc and Annick are making their space available to entice passers-by, during the whole month of May, to come and visit LovArte.

Shop window of funeral home Cocquyt

April 30, 2017

Sometimes, amid the contempt of the human crush, he managed to catch a fleeting glance, to draw an attention fixed on a phone, to break for a few seconds through the stress-filled, anxious disdain of the hurried commuters. But the rare answers to his gesture never varied:
— No!
— No, thank you. (accompanied by pursed lips and a shake of the head)
— No time!
— No change…

Yet he was not asking for money! He was asking for nothing in exchange for his red roses. Except perhaps a smile.

Seized by an instinctive impulse, he had gone down into the metro that morning, determined to offer a little kindness, a little happiness in the form of a bouquet of flowers for the first stranger who would accept it.

As the human swarm gradually scattered and dispersed into the tall grey buildings of the business district, he looked sadly at his bouquet.
— At least I tried, he murmured, before entrusting the flowers to the rubbish bin, that cynical metal vase.

A tear welled up at the corner of his eye. He wiped it away with the back of his hand before sitting down right on the concrete steps. He closed his eyes, forcing his heart to stop.
— Sir! Sir!

A hand was shaking his shoulder. Before his tired eyes stood a young police officer, uniform gleaming, haircut neat and dashing.
— Sir, I have been watching you with your bouquet of flowers…
— Yes? he said, filled with hope and gratitude.
— May I see your permit for peddling in the metro? If you do not have one, I will have to fine you.

A short story inspired by this tweet. Photo by Tiberiu Ana.

 

Enjoyed this read? Support the author on Tipeee, Patreon, Paypal or Liberapay. Even a symbolic donation makes all the difference! You can also find me on Facebook, Medium, Twitter or Mastodon.

This text is published under the CC-By BE license.

April 27, 2017

Like many developers, I get a broken build on a regular basis. When setting up a new project this happens a lot, because of missing infrastructure dependencies or not having access to the company's private Docker hub…

Anyway, on the old Jenkins (1), I used to go into the workspace folder to have a look at the build itself. In Jenkins 2 this seems to have disappeared (when using the pipeline)… or did it?

It’s there, but you got to look very careful. To get there follow these simple steps:

  • Go to the failing build (the #…)
  • Click on “Pipeline Steps” in the sidebar
  • Click on “Allocate node : …”
  • On the right side the “Workspace” folder icon that we all love appears
  • Click it

Et voilà, we are all set.

April 26, 2017

Here is my quick wrap-up of the FIRST Technical Colloquium hosted by Cisco in Amsterdam. This was my first participation in a FIRST event. FIRST is an organization helping with incident response, as stated on their website:

FIRST is a premier organization and recognized global leader in incident response. Membership in FIRST enables incident response teams to more effectively respond to security incidents by providing access to best practices, tools, and trusted communication with member teams.

The event was organized at the Cisco office. Monday was dedicated to a training about incident response and the next two days were dedicated to presentations, all of them focusing on the defence side ("blue team"). Here are a few notes about interesting stuff that I learned.

The first day started with two guys from Facebook: Eric Water and Matt Moren. They presented the solution developed internally at Facebook to solve the problem of capturing network traffic: "PCAP doesn't scale". In fact, with their solution, it scales! To investigate incidents, PCAPs are often the gold mine. They contain many IOCs but they also introduce challenges: the disk space, the retention policy, the growing network throughput. When vendors' solutions don't fit, it's time to build your own solution. Ok, only big organizations like Facebook have the resources to do this, but it's quite fun. The solution they developed can be seen as a service: "PCAP as a Service". They started by building the right hardware for the sensors and added a cool software layer on top of it. Once collected, interesting PCAPs are analyzed using the Cloudshark service. They explained how they reached top performance by mixing NFS and their GlusterFS solution. Really a cool solution if you have multi-gigabit networks to tap!

The next presentation focused on “internal network monitoring and anomaly detection through host clustering” by Thomas Atterna from TNO. The idea behind this talk was to explain how to monitor also internal traffic. Indeed, in many cases, organizations still focus on the perimeter but internal traffic is also important. We can detect proxies, rogue servers, C2, people trying to pivot, etc. The talk explained how to build clusters of hosts. A cluster of hosts is a group of devices that have the same behaviour like mail servers, database servers, … Then to determine “normal” behaviour per cluster and observe when individual hosts deviate. Clusters are based on the behaviour (the amount of traffic, the number of flows, protocols, …). The model is useful when your network is quite close and stable but much more difficult to implement in an “open” environment (like universities networks).
Then Davide Carnali made a nice review of the Nigerian cybercrime landscape. He explained in details how they prepare their attacks, how they steal credentials, how they deploy the attacking platform (RDP, RAT, VPN, etc). The second part was a step-by-step explanation how they abuse companies to steal (sometimes a lot!) of money. An interesting fact reported by Davide: the time required between the compromisation of a new host (to drop malicious payload) and the generation of new maldocs pointing to this host is only… 3 hours!
The next presentation was performed by Gal Bitensky ( Minerva):  “Vaccination: An Anti-Honeypot Approach”. Gal (re-)explained what the purpose of a honeypot and how they can be defeated. Then, he presented a nice review of ways used by attackers to detect sandboxes. Basically, when a malware detects something “suspicious” (read: which makes it think that it is running in a sandbox), it will just silently exit. Gal had the idea to create a script which creates plenty of artefacts on a Windows system to defeat malware. His tool has been released here.
Paul Alderson (FireEye) presented “Injection without needles: A detailed look at the data being injected into our web browsers”. Basically, it was a huge review of 18 months of web-inject and other configuration data gathered from several botnets. Nothing really exciting.
The next talk was more interesting… Back to the roots: SWITCH presented their DNS Firewall solution. This is a service that they provide, not only to their members. It is based on DNS RPZ. The idea was to provide the following features:
  • Prevention
  • Detection
  • Awareness

Indeed, when a DNS request is blocked, the user is redirected to a landing page which gives more details about the problem. Note that this can have collateral effects, like blocking a complete domain (and not only specific URLs). This is a great security control to deploy. RPZ support is implemented in many resolvers, notably BIND 9.
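As a small illustration (not SWITCH’s setup), a resolver enforcing an RPZ policy typically rewrites blocked names to the landing page, so a quick check from a client could look like this; the landing-page address is a placeholder:

    # Minimal sketch: detect whether the local resolver rewrites a name via RPZ
    # by comparing the answer with the (assumed) landing-page address.
    import dns.resolver  # pip install dnspython

    LANDING_PAGE_IP = "192.0.2.10"   # placeholder walled-garden address

    def is_blocked(name: str) -> bool:
        try:
            answers = dns.resolver.resolve(name, "A")
        except dns.resolver.NXDOMAIN:
            return True   # some policies answer NXDOMAIN instead of rewriting
        return any(r.address == LANDING_PAGE_IP for r in answers)

    print(is_blocked("malicious.example"))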

Finally, the first day ended with a presentation by Tatsuya Ihica from Recruit CSIRT: “Let your CSIRT do malware analysis”. It was a complete review of the platform that they deployed to perform more efficient automated malware analysis. The project is based on Cuckoo, which was heavily modified to match their new requirements.
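For reference, submitting a sample to a stock Cuckoo Sandbox and polling for the report only takes a few lines against its REST API (their heavily modified platform certainly differs); the host, port and file name below are assumptions for a default install:

    # Minimal sketch: submit a sample to a stock Cuckoo REST API and fetch the
    # report once the analysis is finished. URL and file name are placeholders.
    import time
    import requests

    API = "http://localhost:8090"

    with open("sample.exe", "rb") as f:
        task_id = requests.post(f"{API}/tasks/create/file",
                                files={"file": f}).json()["task_id"]

    while requests.get(f"{API}/tasks/view/{task_id}").json()["task"]["status"] != "reported":
        time.sleep(10)

    report = requests.get(f"{API}/tasks/report/{task_id}").json()
    print("Score:", report["info"]["score"])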

The second day started with an introduction to the FIRST organization made by Aaron Kaplan, one of the board members. I liked the quote given by Aaron:

If country A does not talk to country B because of ‘cyber’, then a criminal can hide in two countries

Then, the first talk was really interesting: Chris Hall presented “Intelligence Collection Techniques“. After explaining the different sources where intelligence can be collected (open sources, sinkholes, …), he reviewed a series of tools that he developed to help automate these tasks. His tools address:
  • Using the Google API, VT API
  • Paste websites (like pastebin.com)
  • YARA rules
  • DNS typosquatting
  • Whois queries

All the tools are available here. A very nice talk with tips & tricks that you can use immediately in your organization.
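As a trivial example of this kind of automation (not one of Chris’s tools), a hash lookup against the VirusTotal v2 API fits in a few lines; the API key and hash are placeholders:

    # Minimal sketch: look up a file hash with the VirusTotal API (v2).
    # The API key and hash below are placeholders.
    import requests

    API_KEY = "YOUR_VT_API_KEY"
    sha256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

    report = requests.get(
        "https://www.virustotal.com/vtapi/v2/file/report",
        params={"apikey": API_KEY, "resource": sha256},
    ).json()

    if report.get("response_code") == 1:
        print(f"{report['positives']}/{report['total']} engines flag this file")
    else:
        print("Hash not known to VirusTotal")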

The next talk was presented by a Cisco guy, Sunil Amin: “Security Analytics with Network Flows”. NetFlow isn’t a new technology. Initially developed by Cisco, there are today a lot of versions and forks. Based on the definition of a “flow” (“a layer 3 IP communication between two endpoints during some time period”), we got a review of NetFlow. NetFlow is valuable to increase the visibility of what’s happening on your networks, but some specific points must be addressed before performing analysis, e.g. de-duplicating flows. There are many use cases where network flows are useful:
  • Discover RFC1918 address space
  • Discover internal services
  • Look for blacklisted services
  • Reveal reconnaissance
  • Bad behaviours
  • Compromised hosts, pivot
    • HTTP connection to external host
    • SSH reverse shell
    • Port scanning port 445 / 139
I would have expected a real case where network flows were used to discover something juicy. The talk ended with a review of tools available to process flow data: SiLK, nfdump, ntop, but log management platforms can also be used, like the ELK stack or Apache Spot. Nothing really new but a good reminder.
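To give a flavour of the “reveal reconnaissance” use case above, here is a small sketch that hunts for SMB/NetBIOS scanners in flow records exported to CSV; the column names and the threshold are assumptions, not the exact output of any particular tool:

    # Minimal sketch: spot hosts scanning SMB/NetBIOS (445/139) in flow records
    # exported to CSV. Column names and the threshold are assumptions.
    import pandas as pd

    flows = pd.read_csv("flows.csv")   # expected columns: src_ip, dst_ip, dst_port

    smb = flows[flows["dst_port"].isin([445, 139])]
    targets = smb.groupby("src_ip")["dst_ip"].nunique().sort_values(ascending=False)

    # A single source talking to many distinct hosts on 445/139 looks like a scan
    print(targets[targets > 50])
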
Then, Joel Snape from BT presented “Discovering (and fixing!) vulnerable systems at scale“. BT, as a major player on the Internet, is facing many issues with compromised hosts (from customers to its own resources). Joel explained the workflow and tools they deployed to help with this huge task. It is based on the following cycle: introduction, data collection, exploration and remediation (the hardest part!).
I liked the description of their “CERT dropbox”, which can be deployed anywhere on the network to perform the following tasks:
  • Telemetry collection
  • Data exfiltration
  • Network exploration
  • Vulnerability/discovery scanning
An interesting remark from the audience: ISPs don’t only have to protect their customers from the wild Internet, but also the Internet from their (bad) customers!
Feike Hacqueboard, from TrendMicro, explained “How politically motivated threat actors attack“. He reviewed some famous stories of compromised organizations (like the French TV channel TV5) and then reviewed the activity of some interesting groups like C-Major or Pawn Storm. A nice review of the Yahoo! OAuth abuse was given, as well as of the tab-nabbing attack against OWA services.
Jose Enrique Hernandez (Zenedge) presented “Lessons learned in fighting Targeted Bot Attacks“. After a quick review of what bots are (they are not always malicious – think about the Google crawler bot), he reviewed different techniques to protect web resources from bots and why they often fail, like the JavaScript challenge or the Cloudflare bypass. These are “silent” challenges; “loud” challenges are, for example, CAPTCHAs. Then Jose explained how to build a good solution to protect your resources (a toy sketch of the silent-challenge idea follows the list):
  • You need a reverse proxy (to be able to change requests on the fly)
  • LUA hooks
  • State db for concurrency
  • Load balancer for scalability
  • fingerprintjs2 / JS Challenge
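Here is the toy sketch mentioned above: a very naive “silent” JavaScript challenge (my own illustration, nothing to do with Zenedge’s product) where a real browser executes a tiny script, stores the result in a cookie and retries, while a dumb bot without a JavaScript engine never gets past the challenge:

    # Toy sketch of a "silent" JavaScript challenge, not a production design:
    # real browsers run the embedded JS and come back with the expected cookie;
    # simple bots that do not execute JavaScript stay stuck on the challenge.
    from flask import Flask, request, make_response

    app = Flask(__name__)
    EXPECTED = str(21 * 2)   # the value the JS snippet computes

    CHALLENGE_PAGE = """
    <script>
      document.cookie = "js_ok=" + (21 * 2) + "; path=/";
      location.reload();
    </script>
    """

    @app.route("/")
    def index():
        if request.cookies.get("js_ok") == EXPECTED:
            return "Welcome, you passed the silent challenge."
        resp = make_response(CHALLENGE_PAGE)
        resp.status_code = 503   # ask the client to prove it runs JavaScript
        return resp

    if __name__ == "__main__":
        app.run()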

Finally, two other Cisco guys, Steve McKinney & Eddie Allan, presented “Leveraging Event Streaming and Large Scale Analysis to Protect Cisco“. Cisco is collecting a huge amount of data on a daily basis (they speak in terabytes!). As a Splunk user, they are facing an issue with the indexing licence: to index all these data, they would need extra licenses (and pay a lot of money). They explained how to “pre-process” the data before sending them to Splunk, to reduce the noise and the amount of data to index.
The idea is to put a “black box” between the collectors and Splunk. They explained what’s in this black box with some use cases (a minimal sketch of such a pre-filter follows the list):
  • WSA logs (350M+ events / day)
  • Passive DNS (7.5TB / day)
  • Users identification
  • osquery data
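As an illustration of the approach (not Cisco’s actual pipeline), a pre-filter that drops malformed or duplicate events before forwarding to Splunk’s HTTP Event Collector could look like this; the URL, token and field names are placeholders:

    # Toy sketch of a pre-processing "black box": drop malformed or duplicate
    # events before forwarding them to Splunk's HTTP Event Collector, so only
    # useful data consumes indexing licence. URL, token and fields are placeholders.
    import json
    import requests

    HEC_URL = "https://splunk.example.com:8088/services/collector/event"
    HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

    seen = set()

    def forward(raw_line: str) -> None:
        try:
            event = json.loads(raw_line)      # don't assume data is well-formed
        except ValueError:
            return
        key = (event.get("host"), event.get("timestamp"), event.get("msg"))
        if key in seen:                       # cheap de-duplication
            return
        seen.add(key)
        requests.post(HEC_URL,
                      headers={"Authorization": f"Splunk {HEC_TOKEN}"},
                      json={"event": event})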

Some useful tips that they gave and that are valid for any log management platform:

  • Don’t assume your data is well-formed and complete
  • Don’t assume your data is always flowing
  • Don’t collect all the things at once
  • Share!

Two intense days full of useful information and tips to better defend your networks and/or collect intelligence. The slides should be published soon.

[The post FIRST TC Amsterdam 2017 Wrap-Up has been first published on /dev/random]

MEAN

This Thursday, 18 May 2017 at 7 p.m., the 59th Mons session of the Jeudis du Libre de Belgique will take place.

The topic of this session: Developing with the MEAN stack

Theme: Development

Audience: developers | students | startup founders

Speakers: Fabian Vilers (Dev One) and Sylvain Guérin (Guess Engineering)

Location: Université de Mons, Campus Plaine de Nimy, avenue Maistriau, Grands Amphithéâtres, Auditoire Curie (see this map on the UMONS website, or the OSM map).

Participation is free and only requires registration by name, preferably in advance, or at the entrance to the session. Please indicate your intention by registering via the page http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also benefit from the support of our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, feel free to check the agenda and to subscribe to the mailing list in order to receive the announcements.

As a reminder, the Jeudis du Libre aim to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organized on the premises of, and in collaboration with, the Mons colleges and university faculties involved in computer-science education (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description: MEAN is a complete JavaScript stack, based on open-source software, that allows you to quickly develop web-oriented applications. It is built on 4 fundamental pillars:

  • MongoDB (a document-oriented NoSQL database)
  • Express (a back-end application framework)
  • Angular (a front-end application framework)
  • Node.js (a cross-platform runtime environment based on Google’s V8 JavaScript engine)

During this session, we will present in turn each element that makes up a MEAN application, while building, step by step, live, and together with you, a simple and functional application.

Short bios:

  • A developer for nearly 17 years, Fabian describes himself as a software craftsman. He pays great attention to precision, quality and detail when designing and fine-tuning software. Since 2010, he has been a consultant at Dev One, where he reinforces, guides and trains development teams while promoting good software development practices in Belgium and France. You can follow Fabian on Twitter and LinkedIn.
  • For more than 20 years, Sylvain has been helping companies deliver their IT projects. An active witness of the digital revolution and an advocate of the agile approach, he will try to make you see things from a different angle. You can follow Sylvain on Twitter and LinkedIn.