Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

August 18, 2017

In the .NET XAML world, you have the ICommand, the CompositeCommand and the DelegateCommand. You bind these commands, in a declarative way, as properties of XAML components such as menu items and buttons. You can find an excellent book on this titled Prism 5.0 for WPF.

The ICommand defines two things: a canExecute property and an execute() method. The CompositeCommand allows you to combine multiple commands into one, and the DelegateCommand makes it possible to pass in two delegates (functors or lambdas): one for the canExecute evaluation and one for the execute() method.

The idea is that you put said commands in a ViewModel and then data bind them to your View (in QML that means Q_INVOKABLE and Q_PROPERTY). The action of the component in the view then results in execute() being called, and whether the component in the view is enabled is bound to the canExecute bool property.

In QML that of course corresponds to a ViewModel.cpp for a View.qml. Meanwhile you also want to be able to use certain commands declaratively in the View.qml without involving the ViewModel.cpp.

So I tried making exactly that. I've placed it on GitHub in a project I plan to use more often to collect MVVM techniques I come up with, and in this article I'll explain how and what. I'll stick to the header files and the QML file.

We start by defining an AbstractCommand interface. This is very much like .NET's ICommand, of course:

#include <QObject>

class AbstractCommand : public QObject {
    Q_OBJECT
    Q_PROPERTY(bool canExecute READ canExecute NOTIFY canExecuteChanged)
public:
    AbstractCommand(QObject *parent = 0):QObject(parent){}
    Q_INVOKABLE virtual void execute() = 0;
    virtual bool canExecute() const = 0;
signals:
    void canExecuteChanged(bool canExecute);
};
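
Although the article itself does not need one, a .NET-style DelegateCommand could be sketched on top of this interface as well. The class below is purely hypothetical and not part of the repository; it just shows the idea of injecting the two behaviours as std::function objects instead of subclassing:

#include <functional>
#include <utility>

#include <MVVM/Commands/AbstractCommand.h>

// Hypothetical DelegateCommand: execute and canExecute are injected as
// callables rather than implemented in a subclass.
class DelegateCommand : public AbstractCommand {
    Q_OBJECT
public:
    DelegateCommand(std::function<void()> executeFunc,
                    std::function<bool()> canExecuteFunc,
                    QObject *parent = 0)
        : AbstractCommand(parent)
        , m_execute(std::move(executeFunc))
        , m_canExecute(std::move(canExecuteFunc)) {}

    void execute() Q_DECL_OVERRIDE {
        if (canExecute())
            m_execute();
    }

    bool canExecute() const Q_DECL_OVERRIDE {
        return m_canExecute ? m_canExecute() : false;
    }

    // Call this whenever the state inspected by the canExecute delegate
    // changes, so bound views can refresh their enabled state.
    void reevaluate() { emit canExecuteChanged(canExecute()); }

private:
    std::function<void()> m_execute;
    std::function<bool()> m_canExecute;
};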

We will also make a command that is very easy to use in QML, the EmitCommand:

#include <MVVM/Commands/AbstractCommand.h>

class EmitCommand : public AbstractCommand
{
    Q_OBJECT
    Q_PROPERTY(bool canExecute READ canExecute WRITE setCanExecute NOTIFY privateCanExecuteChanged)
public:
    EmitCommand(QObject *parent=0):AbstractCommand(parent){}

    void execute() Q_DECL_OVERRIDE;
    bool canExecute() const Q_DECL_OVERRIDE;
public slots:
    void setCanExecute(bool canExecute);
signals:
    void executes();
    void privateCanExecuteChanged();
private:
    bool canExe = false;
};
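
The post sticks to headers, but to give an idea of how little sits behind them, a plausible EmitCommand implementation could look like this (my guess; the actual .cpp in the repository may differ):

#include <MVVM/Commands/EmitCommand.h>

// execute() simply re-emits a signal that QML can react to with onExecutes,
// and canExecute is a plain settable bool.
void EmitCommand::execute()
{
    emit executes();
}

bool EmitCommand::canExecute() const
{
    return canExe;
}

void EmitCommand::setCanExecute(bool canExecute)
{
    if (canExe == canExecute)
        return;
    canExe = canExecute;
    emit canExecuteChanged(canExe);   // notify views bound to the base property
    emit privateCanExecuteChanged();  // notify signal of this class' own property
}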

Next we make a command that allows us to combine multiple commands into one. This is the equivalent of .NET's CompositeCommand; here is our own:

#include <QSharedPointer>
#include <QQmlListProperty>

#include <MVVM/Commands/AbstractCommand.h>
#include <MVVM/Commands/ListCommand.h>

class CompositeCommand : public AbstractCommand {
    Q_OBJECT

    Q_PROPERTY(QQmlListProperty<AbstractCommand> commands READ commands NOTIFY commandsChanged )
    Q_CLASSINFO("DefaultProperty", "commands")
public:
    CompositeCommand(QObject *parent = 0):AbstractCommand (parent) {}
    CompositeCommand(QList<QSharedPointer<AbstractCommand> > cmds, QObject *parent=0);
    ~CompositeCommand();
    void execute() Q_DECL_OVERRIDE;
    bool canExecute() const Q_DECL_OVERRIDE;
    void remove(const QSharedPointer<AbstractCommand> &cmd);
    void add(const QSharedPointer<AbstractCommand> &cmd);

    void add(AbstractCommand *cmd);
    void clearCommands();
    QQmlListProperty<AbstractCommand> commands();

signals:
    void commandsChanged();
private slots:
    void onCanExecuteChanged(bool canExecute);
private:
    QList<QSharedPointer<AbstractCommand> > cmds;
    static void appendCommand(QQmlListProperty<AbstractCommand> *lst, AbstractCommand *cmd);
    static AbstractCommand* command(QQmlListProperty<AbstractCommand> *lst, int idx);
    static void clearCommands(QQmlListProperty<AbstractCommand> *lst);
    static int commandCount(QQmlListProperty<AbstractCommand> *lst);
};
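
Here too only the header is shown, but the composite logic is presumably along these lines: the composite is executable only when all of its children are, and it re-emits canExecuteChanged whenever a child changes its mind. A sketch of the core functions (not the repository code):

#include <MVVM/Commands/CompositeCommand.h>

void CompositeCommand::add(const QSharedPointer<AbstractCommand> &cmd)
{
    cmds.append(cmd);
    connect(cmd.data(), &AbstractCommand::canExecuteChanged,
            this, &CompositeCommand::onCanExecuteChanged);
    emit canExecuteChanged(canExecute());
}

bool CompositeCommand::canExecute() const
{
    // Enabled only when every child command is enabled; an empty composite
    // is treated as disabled here (a design choice for this sketch).
    for (const QSharedPointer<AbstractCommand> &cmd : cmds) {
        if (!cmd->canExecute())
            return false;
    }
    return !cmds.isEmpty();
}

void CompositeCommand::execute()
{
    for (const QSharedPointer<AbstractCommand> &cmd : cmds) {
        if (cmd->canExecute())
            cmd->execute();
    }
}

void CompositeCommand::onCanExecuteChanged(bool)
{
    // Any child change may flip the combined state, so re-evaluate it.
    emit canExecuteChanged(canExecute());
}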

We also make a command that looks a lot like ListElement in QML’s ListModel:

#include <MVVM/Commands/AbstractCommand.h>

class ListCommand : public AbstractCommand
{
    Q_OBJECT
    Q_PROPERTY(AbstractCommand *command READ command WRITE setCommand NOTIFY commandChanged)
    Q_PROPERTY(QString text READ text WRITE setText NOTIFY textChanged)
public:
    ListCommand(QObject *parent = 0):AbstractCommand(parent){}
    void execute() Q_DECL_OVERRIDE;
    bool canExecute() const Q_DECL_OVERRIDE;
    AbstractCommand* command() const;
    void setCommand(AbstractCommand *newCommand);
    void setCommand(const QSharedPointer<AbstractCommand> &newCommand);
    QString text() const;
    void setText(const QString &newValue);
signals:
    void commandChanged();
    void textChanged();
private:
    QSharedPointer<AbstractCommand> cmd;
    QString txt;
};
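
ListCommand itself presumably just forwards to the wrapped command; text() and setText() are plain accessors. Something like this (again a sketch, not necessarily the author's code):

#include <MVVM/Commands/ListCommand.h>

void ListCommand::execute()
{
    if (cmd && cmd->canExecute())
        cmd->execute();
}

bool ListCommand::canExecute() const
{
    // Without a wrapped command there is nothing to enable.
    return cmd ? cmd->canExecute() : false;
}

void ListCommand::setCommand(AbstractCommand *newCommand)
{
    // Non-owning wrap: QML owns the objects it declares, so don't delete them here.
    setCommand(QSharedPointer<AbstractCommand>(newCommand, [](AbstractCommand *) {}));
}

void ListCommand::setCommand(const QSharedPointer<AbstractCommand> &newCommand)
{
    cmd = newCommand;
    if (cmd) {
        // Forward the child's canExecuteChanged as our own.
        connect(cmd.data(), &AbstractCommand::canExecuteChanged,
                this, &AbstractCommand::canExecuteChanged);
    }
    emit commandChanged();
}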

Let's now also make the equivalent of QML's ListModel, the CommandListModel:

#include <QObject>
#include <QQmlListProperty>

#include <MVVM/Commands/ListCommand.h>

class CommandListModel:public QObject {
    Q_OBJECT
    Q_PROPERTY(QQmlListProperty<ListCommand> commands READ commands NOTIFY commandsChanged )
    Q_CLASSINFO("DefaultProperty", "commands")
public:
    CommandListModel(QObject *parent = 0):QObject(parent){}
    void clearCommands();
    int commandCount() const;
    QQmlListProperty<ListCommand> commands();
    void appendCommand(ListCommand *command);
    ListCommand* command(int idx) const;
signals:
    void commandsChanged();
private:
    static void appendCommand(QQmlListProperty<ListCommand> *lst, ListCommand *cmd);
    static ListCommand* command(QQmlListProperty<ListCommand> *lst, int idx);
    static void clearCommands(QQmlListProperty<ListCommand> *lst);
    static int commandCount(QQmlListProperty<ListCommand> *lst);

    QList<ListCommand* > cmds;
};
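
The interesting part here is how the four static functions are wired into the QQmlListProperty, so that ListCommand elements declared as children in QML end up in the cmds list. Roughly like this (a sketch; the include path and details are assumptions, the repository code may differ):

#include <MVVM/Models/CommandListModel.h>  // assumed path, adjust to the project layout

QQmlListProperty<ListCommand> CommandListModel::commands()
{
    // Hand QML the four callbacks; 'this' travels along as lst->object.
    return QQmlListProperty<ListCommand>(this, this,
                                         &CommandListModel::appendCommand,
                                         &CommandListModel::commandCount,
                                         &CommandListModel::command,
                                         &CommandListModel::clearCommands);
}

void CommandListModel::appendCommand(QQmlListProperty<ListCommand> *lst, ListCommand *cmd)
{
    CommandListModel *self = qobject_cast<CommandListModel *>(lst->object);
    self->cmds.append(cmd);
    emit self->commandsChanged();
}

int CommandListModel::commandCount(QQmlListProperty<ListCommand> *lst)
{
    return qobject_cast<CommandListModel *>(lst->object)->cmds.count();
}

ListCommand *CommandListModel::command(QQmlListProperty<ListCommand> *lst, int idx)
{
    return qobject_cast<CommandListModel *>(lst->object)->cmds.at(idx);
}

void CommandListModel::clearCommands(QQmlListProperty<ListCommand> *lst)
{
    CommandListModel *self = qobject_cast<CommandListModel *>(lst->object);
    self->cmds.clear();
    emit self->commandsChanged();
}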

Okay, let’s now put all this together in a simple example QML:

import QtQuick 2.3
import QtQuick.Window 2.0
import QtQuick.Controls 1.2

import be.codeminded.mvvm 1.0

import Example 1.0 as A

Window {
    width: 360
    height: 360
    visible: true

    ListView {
        id: listView
        anchors.fill: parent

        delegate: Item {
            height: 20
            width: listView.width
            MouseArea {
                anchors.fill: parent
                onClicked: if (modelData.canExecute) modelData.execute()
            }
            Text {
                anchors.fill: parent
                text: modelData.text
                color: modelData.canExecute ? "black" : "grey"
            }
        }

        model: comsModel.commands

        property bool combineCanExecute: false

        CommandListModel {
            id: comsModel

            ListCommand {
                text: "C++ Lambda command"
                command:  A.LambdaCommand
            }

            ListCommand {
                text: "Enable combined"
                command: EmitCommand {
                    onExecutes: { console.warn( "Hello1");
                        listView.combineCanExecute=true; }
                    canExecute: true
                }
            }

            ListCommand {
                text: "Disable combined"
                command: EmitCommand {
                    onExecutes: { console.warn( "Hello2");
                        listView.combineCanExecute=false; }
                    canExecute: true
                }
            }

            ListCommand {
                text: "Combined emit commands"
                command: CompositeCommand {
                    EmitCommand {
                        onExecutes: console.warn( "Emit command 1");
                        canExecute: listView.combineCanExecute
                    }
                    EmitCommand {
                        onExecutes: console.warn( "Emit command 2");
                        canExecute: listView.combineCanExecute
                    }
                }
            }
        }
    }
}
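
To close the loop with the ViewModel angle from the beginning of the post: nothing stops you from exposing one of these commands from a ViewModel.cpp instead of declaring it in QML. A hypothetical ViewModel (not part of the repository) could look roughly like this:

#include <QObject>

#include <MVVM/Commands/AbstractCommand.h>

// Hypothetical ViewModel: the View only sees an AbstractCommand*, so QML can
// bind enabled/clicked exactly as the delegate above does with modelData.
class ExampleViewModel : public QObject {
    Q_OBJECT
    Q_PROPERTY(AbstractCommand* saveCommand READ saveCommand CONSTANT)
public:
    explicit ExampleViewModel(AbstractCommand *saveCommand, QObject *parent = 0)
        : QObject(parent), m_saveCommand(saveCommand) {}

    AbstractCommand* saveCommand() const { return m_saveCommand; }

private:
    AbstractCommand *m_saveCommand;
};

In the View.qml, a button would then simply bind enabled to viewModel.saveCommand.canExecute and call viewModel.saveCommand.execute() from onClicked.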

I made a task-bug for this on Qt.

The post MariaDB: JSON datatype supported as of 10.2 appeared first on ma.ttias.be.

Let's say you have the following CREATE TABLE statement you want to run on a MariaDB instance.

CREATE TABLE `test` (
  `id` int unsigned not null auto_increment primary key,
  `value` json null
) default character set utf8mb4 collate utf8mb4_unicode_ci

You might be greeted with this error message:

SQLSTATE[42000]: Syntax error or access violation: 1064 You have an error in your SQL
syntax; check the manual that corresponds to your MariaDB server version for the right
syntax to use near 'json null at line 1

Syntax-wise all is good, but the JSON data type is actually pretty new, and it's only "supported" (those quotes will become clear in a moment) as of MariaDB 10.2.

First of all: make sure you run the latest MariaDB.

MariaDB> select @@version;
+-----------------+
| @@version       |
+-----------------+
| 10.0.31-MariaDB |
+-----------------+

If, like me, you were on an older release, upgrade until you're on MariaDB 10.2.

MariaDB> select @@version;
+----------------+
| @@version      |
+----------------+
| 10.2.7-MariaDB |
+----------------+

Then, be aware that MariaDB's JSON implementation is slightly different from MySQL's, as explained in MDEV-9144.

JSON data type directly contradicts SQL standard, that says, that JSON_* functions take a string as an argument.

Also, speed-wise MariaDB does not need binary JSON, according to our benchmarks, our JSON parser is as fast on text JSON as MySQL on binary JSON.
[...]
We'll add JSON "type" for MySQL compatibility, though.

And as a final remark:

added JSON as an alias for TEXT

So behind the scenes, a JSON data type is actually a TEXT data type. But at least those CREATE TABLE queries will actually work.

That same CREATE TABLE statement above gets translated to this in MariaDB:

MariaDB> SHOW CREATE TABLE test;

CREATE TABLE `test` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `value` text COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci

Note the TEXT datatype for the 'value' column, which I specified as JSON.

August 17, 2017

I published the following diary on isc.sans.org: “Maldoc with auto-updated link“.

Yesterday, while hunting, I found another malicious document that (ab)used a Microsoft Word feature: auto-update of links. This feature is enabled by default for any newly created document (that was the case for my Word 2016 version). If you add links to external resources like URLs, Word will automatically update them without any warning or prompt… [Read more]

[The post [SANS ISC] Maldoc with auto-updated link has been first published on /dev/random]

I hope by now we all integrate with third-party tooling when it comes to validating code changes and enforcing conventions and good practice. But what do you do when you can't find what you're looking for? That's exactly where I was when one of my team members asked if it's possible to block merging pull requests in GitHub when there are still fixup or squash commits to be found (this all boils down to our branching strategy; more on that in a future blog post).

We want to proceed safely, which means no one should have to worry about merging something which isn't ready. People have to be able to focus on being awesome, and everything in the development process should facilitate that. When we build gated check-ins, we want to cover everything we can, leaving no room for mistakes. As we all fail at times, we have to automate wherever we can.

Luckily for us, GitHub provides a rich API that allows you to interact with basically everything. Creating an automated review tool starts by setting up a small API that GitHub can post events to. Now, I've tried numerous languages and frameworks, and when it comes to quickly creating a web API, I always go to Express in Node.js. You can hate on JavaScript all you want, but even in a swamp there are flowers to be found.

Thanks to an active community, we can make use of two modules to help us out: express-x-hub to assist in validating requests, and octonode to interact with the GitHub API. We start out simple by creating an endpoint to integrate with GitHub and test the API. I created a gist that contains all the code needed to follow along. Let's go over the files and their responsibilities.

As we're dealing with a Node.js project, we've got our obligatory package.json file containing the project description and dependencies. I prefer yarn over npm, so let's install these dependencies locally using yarn install. There are two other files to be found: app.js contains the logic to run our Express API, and .env holds the environment variables we do not wish to expose in the source code (you want to add this to your .gitignore file). For your enjoyment, I also included the launch.json debug settings for Visual Studio Code.

Verify that you can run everything locally before connecting this to the web. Open the project in Visual Studio Code, open app.js and press F5. Great, you've got a little API running in debug mode. Navigating to localhost:3000 should display a hello world in your browser (rejoice). Of course, we aren't interacting with GitHub yet. Thanks to the marvels of today's development tools, we can use ngrok to expose the API running on our machine. Install ngrok and expose the API running on port 3000 using ngrok http 3000 in your preferred shell environment.

Next up, we have to create a webhook in the GitHub repository you wish to interact with. Go to Settings/Webhooks/Add webhook to create a new one. Enter https://<GUID>.ngrok.io/github-hook using the specific endpoint ngrok provided you with; just make sure to use https. Set Content type to application/json and enter a random secret (save this value for later). Enable Let me select individual events, select Pull request and remove the Push event. Pressing Add webhook immediately triggers a POST to our newly created API, which should then print the received data. You can see the data GitHub sends by opening a webhook's details and scrolling to the bottom. There's a section Recent Deliveries that contains the payloads and response data. For our newly received request, you should see a response status 200 and the same payload printed by the API running on our machine.

To talk to GitHub, we will use a personal access token. Whether you're setting this up for a personal repository or not, you might want to go for a token from either a bot or a regular user. On GitHub, navigate to Your personal settings/Personal access tokens/Generate new token, add a description and select the top-level repo scope. Generate the token and copy the key. Go back to the project in Visual Studio Code and edit the .env file using your newly generated token and the secret you entered earlier when setting up the webhook (you can always edit the hook in case you forgot this).

XHUB_SECRET=your_secret
GITHUB_API_KEY=your_personal_access_token

There’s one file in the code sample we didn’t cover yet. github.js contains a function using octonode to interact with GitHub and set a commit status. We will use this to set a status on the last commit in a pull request. Change the app.post('/github-hook', function(req, res) function in app.js to look like this and restart the API using F5.

app.post('/github-hook', function(req, res) {
    var isValid = req.isXHubValid();
    if (!isValid) {
        res.status(404).end();
        return;
    }
    res.status(200).end();
    github.setCommitStatus(req.body.pull_request.head.repo.full_name,
        req.body.pull_request.head.sha,
        'failure',
        'Oh no, this commit looks bad!',
        'github-tools/test');
});

Let's have a look at this function. First, the secret you selected when setting up the webhook serves as validation for the request; this makes sure the received request is indeed coming from our repository. If we can't validate the request, we simply return a 404 because we're not really here. Once the request passes validation, we immediately return a 200, as processing the pull request might take longer than the time it takes GitHub to mark the request as timed out. Finally, we set a hardcoded commit status on the HEAD of the pull request.

GitHub provides 4 types of commit statuses: pending, success, error, or failure. Depending on your use case, you might want to mark commits as pending before assigning the final status. And make sure to only mark them as error when your own API code fails to validate due to a bug.

Now, when you push a branch to the repository you added the webhook to and create a pull request, you will see the data being posted to your API. If you did everything according to the sample, the API will immediately mark the commit as a failure and GitHub will display the message "Oh no, this commit looks bad!". The cool thing is that you can now mark this check as required in the repository's settings to enforce the rule and not allow anyone to bypass it. Navigate to Branches/Protected branches/<branch>/edit/Require status checks to pass before merging/<select github-tools/test>.

You can extend this sample towards whatever use cases your team could benefit from. You can find the implementation our team currently uses in my github-tools repository. We currently have 2 use cases: one checks for any fixup or squash commits in the pull request, the other looks for changes in requirements.txt files and makes sure everyone neatly follows PEP 440 rules and properly sets dependencies. As the possibilities are endless, I can't wait to see what you'll come up with, so make sure to let me know in the comments or on Twitter!

August 16, 2017

I published the following diary on isc.sans.org: “Analysis of a Paypal phishing kit“.

There are plenty of phishing kits in the wild that try to lure victims into providing their credentials. Services like Paypal are nice targets and we can find new fake pages almost daily. Sometimes, the web server isn't properly configured and the source code is publicly available. A few days ago, I was lucky to find a ZIP archive containing a very nice phishing kit targeting Paypal. I took some time to have a look at it… [Read more]

[The post [SANS ISC] Analysis of a Paypal phishing kit has been first published on /dev/random]

August 15, 2017

Dri.es
I recently was able to obtain the domain name dri.es so I decided to make the switch from buytaert.net to dri.es. I made the switch because my first name is a lot easier to remember and pronounce than my last name. It's bittersweet because I've been blogging on buytaert.net for almost 12 years now. The plan is to stick with dri.es for the rest of the blog's life so it's worth the change. Old links to buytaert.net will automatically be redirected, but if you can, please update your RSS feeds and other links you might have to my website.

August 13, 2017

In this post, I explore the future of nutrition by testing various powdered meals, as I already did two years ago.

To produce the energy needed for life, we only need two things: fuel and an oxidizer. Seen from that angle, our entire digestive system serves one single purpose: extracting fuel from our environment while rejecting the vast majority of it, which is not usable. The oxidizer, for its part, is supplied by the respiratory system.

All that organic complexity, all that energy, all those potential sources of disease and complications, for one single thing: extracting carbon (and a few other components) from everything around us, which we can then combine with oxygen to produce energy.

As an aside, this also means that if we are overweight, the one and only way to get rid of the excess carbon is to… breathe. The CO2 we exhale is indeed the only exit door for the carbon in our body, along with urine, which also carries away a very small amount.

Since the digestive system is extremely energy-hungry, ironic given that its role is to obtain energy, human beings invented cooking. Recipes made it possible to select the most nourishing foods, while cooking over fire made digestion easier.

Since then, although recipes are waved around as a cultural banner, the fact is that we mostly eat industrial ersatz versions of the original dishes. The food industry has figured out how to trick our reflexes into making us ingest cheap food. Sugar, originally a natural indicator of a ripe fruit full of good vitamins, has been isolated and sprinkled into just about everything, making us addicted to products that are barely nourishing or even downright harmful.

So what will the next evolution in nutrition be? While there is no question of wasting an opportunity to eat a good meal, I am convinced that industrial junk food and the lunchtime sandwich can advantageously be replaced by food specifically designed to give the body what it needs as efficiently as possible. That is exactly the goal of Soylent, which has spawned numerous European clones that I told you about two years ago when I asked the question "Is eating still necessary?".

In two years, things have changed quite a bit. French alternatives have appeared and the products have improved in quality. Here is a short overview of the various powdered meals I have tested.

Vitaline

Vitaline is the health-focused product of this comparison. Made of mostly organic ingredients, Vitaline aims above all for quality. Indeed, the first versions were not very filling and tasted rather bland. The latest version has greatly improved both aspects, even though you are still limited to 3 flavours (2 for me, since I really don't like the almond flavour even though I love marzipan).

Another peculiarity: Vitaline is the only one of the products tested that expires fairly quickly. The powder becomes inedible after a few weeks of storage, whereas the others stay the same for several months or even years. Maybe that is the price to pay for organic ingredients and fewer preservatives.

At €4 per meal, I recommend Vitaline to those for whom health and organic ingredients come before taste. I also really appreciate the single-serving sachets, which are far more practical than the big 3-meal bags.

Note: Vitaline spontaneously offered me two test boxes after reading my article from two years ago.

Smeal

Another French alternative, Smeal does not particularly stand out. The flavours are good (sometimes surprisingly so, like Speculoos) but very sweet and rather cloying. At €3 per meal, I saw it more as a cheap alternative that does its job: you are no longer hungry for several hours after a Smeal.

Worth highlighting is the "garden vegetables" flavour, a real innovation that breaks away from the essentially sweet character of these meals.

Note: at my request, I received a free test pack from Smeal.

Feed

Still in France, Feed stands out for its design and practicality. Rather than the traditional sachets, Feed offers plastic bottles pre-filled with powder, an innovation that Vitaline has since adopted.

The problem? The powder forms such lumps that I never managed to mix it properly in the bottles. It is also particularly unecological. So I was not convinced by the experience, even though being able to do without a shaker to clean is very handy on days on the go.

Note that, like Smeal, Feed is branching out into savoury flavours and also offers meal bars. These bars are filling without being cloying and come in savoury flavours. In short, what I mostly remember from Feed is the bars.

Note: at my request, I received a free test pack from Feed.

Queal

Already tested two years ago, Queal, a Dutch product, now focuses its marketing on performance, both physical and intellectual. For the record, their new shaker is the worst performer on the market, borderline unusable, with a cap that comes off and cannot be washed properly.

But the fact remains that, for me, their powder is still the easiest to digest, with a plethora of flavours, some of which are delicious. At €2.50 per meal, Queal remains the best buy.

Queal is trying to diversify with meal bars, the Wundrbars, which are absolutely foul yet filling (they all keep a very pronounced trace of bitterness).

Another innovation: Queal offers the "Boost" powder, a nootropic supplement meant to improve memory and concentration. The effect on memory of some of Boost's components is said to be scientifically demonstrated.

Does it work? I have the impression that on mornings when I add Boost to my Queal, I am calmer and slightly more focused than usual. I feel less grumpy and less inclined to procrastinate. Placebo effect? Quite probably. At €60 per jar of Boost, I don't want to spoil the effect by idling on Facebook!

Note: Queal spontaneously contacted me to offer a test box after my article from two years ago.

Ambronite

An ultra-luxury product at more than €8 per meal, Ambronite stands out greatly by its composition.

Where all the other products are essentially milk protein with supplements and flavourings, Ambronite is a genuine blend of dried fruits and vegetables ground into powder. All the vitamins and minerals come from natural products, with the protein supplied mostly by oats.

The result is a kind of green soup with sweet, fruity aftertastes. The absence of milk protein makes Ambronite much easier to digest and far less cloying.

Ambronite's problem remains above all its price, which confronts us with an interesting question: if paying €3-4 to be fed while skipping a meal seems "worth it", am I willing to pay more than double that for a meal that brings me no gustatory pleasure and is swallowed in a few seconds?

Conclusion of the test

As far as brands go, were it not for its prohibitive price, I think I would mostly consume Ambronite, which is at once tasty, effective and excellent for your health. It is also the one I would recommend to my children without any hesitation.

In a reasonable price range, I appreciate Vitaline's approach, which puts the health and scientific aspects of its product before taste and marketing. Despite a composition still essentially based on milk protein, the attention paid to using organic ingredients also makes me comfortable giving Vitaline to my children, even if the taste does not make it very attractive.

But for daily use, Queal remains a safe bet, with an "easy" taste that will please everyone. For the children, I reassure myself by thinking it cannot be worse than a hamburger, but I do not push for too frequent use of Queal. I also remain torn about the Boost supplement: it is either absolutely brilliant or a complete scam. I cannot make up my mind.

A rapid and welcome evolution

In two years, the quality of powdered meals has gone up several notches. Many products have appeared and, sometimes, disappeared just as quickly. It is actually a bit hard for the consumer to keep track.

But it is clear that this kind of product is here to stay. Over the past two years, I have at times eaten mostly powdered meals for several days in a row and felt particularly full of energy. Stools also become lighter. Personally, I can no longer do without a powdered meal before a long bike ride. Even for intellectual work, I know that a powdered meal helps my concentration more than any other meal would.

I think the main innovation will be the gradual abandonment of milk protein and the democratization of very high quality products like Ambronite. More and more attention will be paid to a meal's carbon footprint and to the absence of undesirable products (pesticides, refined sugars). In parallel, I predict the appearance of very cheap products (less than €1 per meal) of much lower quality.

Far from remaining mere meal alternatives, these powders will become meal components, as recipes that mix them with other ingredients become popular. It will become socially acceptable, even normal, to consume powdered meals, whereas today people still often look at me as if I were an alien.

And after that?

In the longer term, I am convinced that we will come to see the way we feed ourselves today as prehistoric and morbid. With no consideration for nutritional value, we tend to let ourselves be guided by our taste, smell and sight, senses that are easily fooled by advertising, marketing and chemical additives. If the idea of a powdered meal shocks some people, it may be worth remembering that we all spent the first months of our lives fed from a single source of food (whether powdered or through breastfeeding).

On the same principle as the 3D printer, we will then have in our kitchen a shaker that mixes the ingredients on the spot, based on our craving of the moment for the taste and on biosensor data for the nutritional value our body needs.

The most sophisticated 3D printers will be able to reproduce the taste and texture of most known foods, including meat. Their price and bulk will, however, initially reserve these devices for restaurants. It will be possible to order a rare vegan steak, rich in vitamins and low in fat.

The era when we ingested saturated fats from dead animals while drinking sodas will probably seem particularly barbaric to us, just as it would not occur to us today to kill an animal and tear off its still-warm flesh with our teeth, our face smeared with blood.

Important note: this blog is funded by those of you who pay what they want for each series of 5 posts. That happens on Tipeee or Patreon, and I can never thank you enough for your contribution. I consider this post as having been funded by the Vitaline, Smeal and Feed samples I received. It is not a paid post and does not count towards the next series of 5.

Photo by Brian Brodeur.

Enjoyed this read? Support the author on Tipeee, Patreon, Paypal or Liberapay. Even a symbolic donation makes all the difference! Afterwards, let's meet up on Facebook, Medium, Twitter or Mastodon.

This text is published under the CC-By BE license.

August 11, 2017

So I wanted to replace a site's (*) main title, which required downloading some fancy font (Courgette!), with an SVG image.

Here's what I did:

  1. made a screenshot of the node in Firefox Developer Tools -> Inspector -> select the node -> click “screenshot node” in the context menu
  2. converted the png file into an svg at http://pngtosvg.com/, the result being a 6.93KB file.
  3. optimized the svg at http://petercollingridge.appspot.com/svg-optimiser, resulting in a 3.1KB file (see above image) which remains crispy at whatever size.
  4. added the SVG as a background image (not inline though, might do that next) and set “visibility” of the logo->a->h3 (which has the title in it as text) to “hidden”
  5. ticked Autoptimize’s “remove Google Fonts” option (which also removed a slew of other unwanted requests for fonts)

(*) The site is my wife’s boeken-jagers.be which is an offspring of her successful “De Boekenjagers” Facebook group where people hide books for others to find (hunt) and share info about that. 27 000 members and counting, proud of my Veerleken!

August 10, 2017

As heard on an older Gilles Peterson WorldWide-show I listened to on the train this morning.

About Cossi Anatz:

Comment ça va ? Cossi Annatz in french occitan means “How are you doing ?” I’m doing well when I listen to this beautiful free/spiritual jazz album recorded and produced by Michel Marre in the South of France and released on Vendémiaire (Jef Gilson’s label) with a similar style to Francois Tusques Intercommunal. Jazz Afro-occitan is written on the cover and it’s clearly the case when you put the needle on wax you instantly feel the North African, Arabic & Mediterranean influences, that are quite obvious here, especially in the brass & percussions. Some brilliant tracks like the beautiful deep jazz Al Coltrane or the monster spiritual oriental jazz Aden Arabie with great Fender solo.

(Source: http://www.diggersdigest.com/shop/product/jazz-afro-occitan-1/)

August 08, 2017

Due to some technical issues, it took slightly longer than I'd originally expected, but the first four videos of the currently running DebConf 17 conference are available. Filenames are based on the talk title, so that should be reasonably easy to understand. I will probably add an RSS feed (like we've done for DebConf 16) to that place some time soon as well, but the code for that still needs to be written.

Meanwhile, we're a bit behind on the reviewing front, with (currently) 34 talks still needing review. If you're interested in helping out, please join the #debconf-video channel on OFTC and ask what you can do. This is something you can do from home, so don't be shy! We'd be happy for your help.

You might not realize this looking at my blog posts, but I was actually actively using Linux (Debian) over 10 years ago. Back in the day, Vista made me move to Linux after I had dual booted it alongside XP for a few years (Vista was simply terrible, don't even try to deny this). I went back to Windows when I started studying to become a developer in 2012: first because I had to, since I needed to be able to create Windows Phone apps; later on 8.1.1 because I liked it; and in the end on 10 because I loved it.

Why the love?

Contrary to popular belief, Windows can be a great development environment by using amazing tools such as ConEmu, PowerShell and Visual Studio. And thanks to the WSL, you can now use tools just as you would on Linux, removing even more restrictions. Windows 10 finally enabled virtual desktops, and it got a lot more responsive compared to previous editions. All in all, this was a nice experience.

So, what happened?

It was only when I wanted to start using Docker, that my personal machine hit a “dead” end. There I was, trying to get Docker to work on Windows 10 Home, when suddenly I realized I needed a Pro version to enable Hyper-V and get Docker up and running. I had the choice to either spend 150 euro on Pro, or dual boot Linux once again and have some fun with Docker. The initial plan was to simply play with Docker, see if I liked it enough to use it and if so, purchase Windows 10 Pro and continue living my life as the weird CLI Windows guy.

In an unexpected turn of events, I really enjoyed my time on Linux. I started out by dual booting Ubuntu 17.04 next to Windows 10. I needed something fast, and Ubuntu's easy installation procedure was all I needed to get things going. But I quickly noticed a few things. First, network connectivity is A LOT more reliable on Linux, when it comes to connecting to Wi-Fi networks for example. You can't imagine (or maybe you can) how often I cursed at not being able to connect to a network on Windows 10 when my iPhone had no issues at all. Second, it's FAST. Windows 10 took a step in the right direction, but boy, it still has a long road ahead to get to where Linux is right now.

Running Windows 10, I always stripped it naked before using it. That means doing a fresh install, removing all the crap I didn’t want or need simply to make room or remove unnecessary CPU load. On Linux, I could work the other way around. Simply install what I need and be done with it. Nice. I’m in control.

And another benefit: everything works. Do you know how hard it can be to configure a printer on Windows? Well, it's silly how easy this is on Linux. Seriously, I had no idea, given that my last experience dated from 5 years back.

Now what?

Right now I've ditched Ubuntu and run Arch with Gnome as my DE. I like the idea behind Arch and how well documented everything is. Windows 10 is completely gone from my machine; I moved to Linux full time. These past months I learned so much that I feel like I've only gained something valuable. Can't wait to see where this adventure will take me.

You might wonder if I'm now in that "I'm running Linux, fuck Microsoft" vibe. The answer is no. Knowing myself, I can't tell if I'm going to be on Linux forever; probably not. Microsoft still has time to fix the annoyances in Windows 10; it's just silly that their upgrade plan turned against them as soon as I realized I don't need them. I can only suggest everyone do the same. Stop staring blindly at your tooling/vendor and look at the world out there. Switch when you feel like it and trust me, you'll learn a lot. You can't see the world when you're on an island.

August 07, 2017

Most of us now always have a smartphone in our pocket. When we have to wait, even for a few seconds, it is socially accepted to take out our phone and give it our attention. Worse, it is more and more common to see people in the middle of a conversation while also being on their phone. Entire families go for a walk, each on their own smartphone.

In my view, we will not get out of this dynamic by forcing ourselves to use our phones less or by feeling guilty about it. It is already too late. But a new type of device could change the game. I call it the "Zen Device". And I think the innovation will come from e-reader manufacturers, not from phone manufacturers.

The phone, king of micro-tasks

Doing things rather than being bored, why not? Especially since our phones connect us to an impressive amount of culture, reflection and content, and put us in touch with our friends.

The problem is that when we have a few minutes, or even a few seconds, of boredom, we do not know how long that boredom will last. We cannot start a task that requires a little concentration, such as replying to an email or reading an article of several hundred words on an interesting subject.

To avoid being interrupted in one of those tasks, we confine ourselves to micro-tasks, tasks that give us a brief satisfaction even after a few seconds: checking Facebook, liking a tweet, posting a photo, sending a smiley in a conversation, scrolling through a feed of funny pictures, and so on. Even video games, rich and deep on consoles or PC, are played in a few seconds on smartphones.

Needless to say, even put end to end, these micro-tasks bring us absolutely nothing. Worse, they make us addicted to those little hormonal hits. Even with time on our hands, we now often tend to favour these micro-tasks and scroll through the Facebook feed instead of finishing reading that excellent article.

E-readers, a new type of electronic companion

That may be why my e-reader quickly became my most important object, the one I always take with me. Because with an e-reader, I know that micro-tasks are not possible, only deep tasks. Of course, it is only one kind of deep task, reading, but that is already huge.

A true talisman, my e-reader contains my entire library. When I travel, I sometimes run my hand over its leather cover, which reminds me that I carry with me all the knowledge I could dream of, even if I were to be stranded for several years on a desert island (with a power outlet to recharge, let's not get carried away).

With no notifications, no permanent connection, no bright screen and no fast movement, e-readers bring a zen quality that is particularly welcome in our hectic world.

But couldn't we improve them to make them the ideal companion for my retreat? What if e-readers became high-end objects that foster concentration, allowing us to buy cheap phones that we would happily forget entirely when heading out for a walk?

Let's do the exercise of building this Zen Device.

A machine that encourages reading

Contrary to what the biggest e-reader manufacturers, like Amazon and Kobo, try to sell us, reading is not "buying books". The Zen Device therefore does not constantly push a bookstore or subscription plans.

A web service lets you upload any Epub file. These can be read on the Zen Device but also online, with the reading position kept in sync. Users can add catalogues or bookstores to their liking, Project Gutenberg being a good default catalogue.

In the age of the web, reducing reading to books alone seems criminal to me. The Zen Device (and any e-reader worthy of the name) must allow reading articles saved to Pocket or Wallabag. New literature is emerging on platforms like Wattpad, Scribay and even Medium. How is it that, as far as I know, no e-reader gives access to them?

The Zen Device also makes it easy to use Wiktionary and Wikipedia. Its backlight can be dimmed way down so as not to act like a lighthouse during night-time reading sessions.

A replacement for the notebook

The one and only reason I cannot do without my smartphone, even at night, is that it has replaced my notebook, whether for handwritten notes with the stylus or audio notes with the voice recorder.

But wouldn't jotting down dreams or new ideas make more sense on a Zen Device than on a Micro-Task-Hyper-Distracting-Phone?

That is why the Zen Device of my dreams has a stylus, allowing handwritten notes on a blank page, and a voice recorder function. That would finally let me break my dependence on Samsung and its Galaxy Note, the only phones offering a stylus experience good enough for handwriting.

Notes can be either completely independent or tied to the current reading if an excerpt has been highlighted. These notes would be synchronized with an online service, Evernote, Dropbox or another. That way, it would be particularly easy to know that when I had a given idea I was on page 184 of a given book, to assemble notes and to let real ideas emerge. An absolutely revolutionary tool!

Besides handwritten and audio notes, it could be possible to use a keyboard. But anyone who has used an e-reader knows how frustrating the virtual keyboard experience is on an e-ink screen. The Zen Device will therefore allow connecting a Bluetooth keyboard. And support the BÉPO layout.

Functional without notifications

As it evolves, a Zen Device could even gain more and more features, but with one major constraint: no speaker and no notifications.

One could imagine access to your calendar to see the day's schedule, listening to music or audiobooks with headphones (jack or Bluetooth), a camera to scan text with character recognition, and access to OpenStreetMap to find your way around (though perhaps without needing route calculation).

One can imagine a whole ecosystem of apps subject to several constraints: not being able to run in the background, not sending notifications, and working on an e-ink screen.

Zen, but without forgetting the social side and pay-what-you-want

No, the Zen Device will not let you go on Facebook. But it would make sense to integrate certain kinds of social networks, such as SensCritique or Babelio. That way, the Zen Device would have my reading wish list to suggest my next book and would let me keep a journal of my reading, potentially a public one.

In fact, one could even imagine an entirely new type of social network that would not be based solely on the "Like" but on finer-grained interactions such as "I endorse and recommend", "This puzzles me, is it serious?" or "This read is uninteresting".

When a reader recommends a piece, a very small amount is paid to the author, but a percentage goes to the recommenders thanks to whom the reader discovered that article.

But here we are straying a bit from the Zen Device itself…

Does the Zen Device exist?

All of this sounds like a beautiful dream, but technically nothing I describe is impossible.

Kobo offers Pocket integration on its e-readers, Bookeen is reportedly seriously considering a Wallabag integration, and Tolino lets you manage your library of Epub files online with reading-position synchronization. Tolino also ships with excellent Wiktionary integration.

For its part, the PocketBook Ultra e-reader offers a camera and an audio jack for listening to audiobooks. It should not be rocket science to add a microphone for a voice recorder function.

As for the stylus, Sony offers a virtual A4 sheet for annotating PDFs. But the most interesting device is the Remarkable tablet. Unfortunately, the "reading" side is barely developed on the website and the device has no lighting at all, making it completely unsuitable for taking notes at night or writing down dreams. Moreover, its large size makes it impractical to carry around.

The NoteSlate project, for its part, could have been a first version of the Zen Device, which is why I chose to use it as the illustration for this post. But the site, which announced a launch in March 2017, has not been updated since January. Many pre-orders were placed and, since then, radio silence apart from a few marketing tweets. That does not bode very well for what comes next.

A paradigm shift for e-reader designers

In short, as you can see, the Zen Device is not pure science fiction. Technologically, nothing is impossible. So, is there a market? Personally, I am willing to pay the price of a very high-end phone for such a device, because I find it a shame to pay a fortune for my micro-tasks and distractions and €150 for my most important tool, my e-reader…

But I notice that, sooner or later, all e-reader manufacturers end up wanting to reproduce Amazon. Selling virtual books according to the very 20th-century idea of a "bookstore" is more profitable than selling devices. As a result, over the course of updates and partnerships, e-readers turn into "a device that encourages you to buy books". The exact opposite of a Zen Device…

Yes, I think it is possible to design a new type of device that will free us from our dependence on Facebook/Twitter/Snapchat/WhatsApp. One that will let us enjoy the present moment while allowing us to enrich our knowledge and our intellectual heritage.

But for that, it will take either a new player or an e-reader manufacturer willing to take the risk of not being yet another Amazon wannabe. So, if you are in the business of designing e-readers and the Zen Device idea interests you, do not hesitate to contact me.

Enjoyed this read? Support the author on Tipeee, Patreon, Paypal or Liberapay. Even a symbolic donation makes all the difference! Afterwards, let's meet up on Facebook, Medium, Twitter or Mastodon.

This text is published under the CC-By BE license.

I published the following diary on isc.sans.org: “Increase of phpMyAdmin scans“.

PMA (or “phpMyAdmin”) is a well-known MySQL front-end written in PHP that “brings MySQL to the web” as stated on the web site. The tool is very popular amongst web developers because it helps to maintain databases just by using a web browser. This also means that the front-end might be publicly exposed! It is a common finding in many penetration tests to find an old PMA interface left by an admin… [Read more]

[The post [SANS ISC] Increase of phpMyAdmin scans has been first published on /dev/random]

August 02, 2017

You can't deny the power of a good book. And yet, for the past few years I relied mostly on blog posts and hands-on experience when it came to learning and keeping up in tech. Some of that is due to books being hard to combine with family life, blogging and basically everything else that comes your way, but a few weeks ago I decided to get back on track and start working my way down that built-up book backlog.

Getting started

I bought a Kobo Aura Edition 2 after a lot of contemplating and information gathering, purchased a few ebooks and got ready to read until my eyes hurt. Getting books on the Kobo is a breeze: simply plug it into your laptop and copy the epub files to your reader. I immediately fell in love. Being able to take notes, highlight and look up more information on certain words while you're reading is super neat. Not only does it, in my experience, work better than a paper alternative, you can also get a complete overview of all your notes and export them where needed afterwards. Given that my main use case is to learn and study, this feature is key in turning this into a success story.

The only thing that kept bugging me was the slow response time when opening a book or swiping to the next page. Only after looking into this, and learning about ebooks and the epub format, did I realize you can have a huge difference in performance depending on the format settings. Luckily, fixing this is rather easy. Download Calibre, install the Kobo Touch Extended plugin and reformat the epub files. I made sure to split files larger than 20KB in the epub output; this allows for fast-responding ebooks, and the smaller size hasn't got any impact on your reading experience. An extra, and not to be taken lightly, advantage of the plugin is that it automatically converts epub to kepub when selecting Send to device in Calibre. This provides you with additional reading insights and even better response times. I now convert every epub to this format before adding it to my Kobo (extra thanks to Kris Hermans for pointing me to this information).

Sources

If you want to read, and you're looking for interesting deals like me, you'll want to know where to get free ebooks that are worth your time. I get my books from a couple of sources:

The last two often have interesting deals so you can get books cheap. Also, Humble Bundle offers interesting deals on ebooks. Make sure to keep an eye out for interesting stuff. And in the end, there are a lot of books out there worth every penny.

Here are the books I'm certain I will actually read, listed in the order I will read them. Have a look for inspiration, or add your own recommendations and/or sources in the comments below. Stay tuned for a review of Pro Git, second edition, which I finished reading last week. I can already recommend it to anyone working with Git or wanting to get started with it.

  • DevOpsSec - Jim Bird
  • The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win - Gene Kim
  • The DevOps Handbook - Gene Kim
  • Using Docker - Adrian Mouat
  • Docker Security - Adrian Mouat
  • The Art of Software Security Assessment - Mark Dowd
  • Automate the Boring Stuff with Python - Al Sweigart
  • The Little Go Book - Karl Seguin
  • Modular Programming with Python - Unknown

August 01, 2017

The post RHEL & CentOS 7.4 restores HTTP/2 functionality on Nginx appeared first on ma.ttias.be.

Red Hat Enterprise Linux 7.4 has just been released, and with it, a much-awaited (at least by me) update to OpenSSL, bringing it to version 1.0.2k.

openssl rebased to version 1.0.2k

The openssl package has been updated to upstream version 1.0.2k, which provides a number of enhancements, new features, and bug fixes, including:

-- Added support for the Datagram Transport Layer Security (DTLS) protocol version 1.2.
-- Added support for the automatic elliptic curve selection for the ECDHE key exchange in TLS.
-- Added support for the Application-Layer Protocol Negotiation (ALPN).
-- Added Cryptographic Message Syntax (CMS) support for the following schemes: RSA-PSS, RSA-OAEP, ECDH, and X9.42 DH.

Note that this version is compatible with the API and ABI in the OpenSSL library version in previous releases of Red Hat Enterprise Linux 7.
(Source: Security updates in RHEL 7.4)

That bold text? That's the good part: added support for the Application-Layer Protocol Negotiation (ALPN).

You might remember that more than a year ago, Chrome disabled support for NPN and only allowed ALPN. As a direct result, HTTP/2 became impossible on RHEL or CentOS systems, because it didn't support ALPN. You could run Nginx in a Docker container to get a more up-to-date OpenSSL version, but for many, that's too hard or too much effort.

Now with this RHEL 7.4 update -- and the CentOS 7.4 release that'll be out in a few days -- you no longer need Docker and you can enable HTTP/2 in Nginx again for both Firefox & Chrome users.

server {
  listen        443 ssl http2;
  server_name   your.fqdn.tld;
  ...
}

I'm very happy with this update as it -- finally -- brings a more recent OpenSSL version to RHEL & CentOS, removing the need for hacks/workarounds just for the sake of HTTP/2.

If you're a CentOS user, wait a couple of days for the packages to be built and update to CentOS 7.4. If you're a RHEL user: go forth and upgrade to 7.4!

July 31, 2017

Niagara Falls by night

I had a chance to visit the Niagara Falls last week on a family trip. If you like nature, it's a must-see, both during the day and at night when they light up the falls. I love this photo, but it still doesn't capture the majesty and beauty of the Niagara Falls.

July 29, 2017

This year, our intelligence services and Comité I released some figures about the investigative work of State Security, apparently 'confidential', to a Belgian newspaper.

It is a pity that ordinary citizens cannot (easily) find this themselves on the website of Comité I. I was, however, able to find an older report covering 2014 – 2015.

Perhaps one has to use special investigative methods to find official information released by Comité I? Let me not do that. I honestly have no idea why we should simply trust De Tijd, and why we as citizens cannot read the original report ourselves right away.

I have always believed that releasing such information is perfectly possible without revealing anything that makes life easier for criminals. This release of figures proves that, in my opinion. Our country appears to be fairly unique in publishing this kind of data. Such openness of government is something we citizens can be proud of, I think. It is to the credit of the staff of the intelligence services, and of Comité I, that this is possible in our country.

Hoewel er een forse toename is in terrorisme dossiers, wat te verwachten was na wat er gebeurd is in Zaventem, blijkt er geen sprake te zijn van grootschalig aftappen of van grote inbreuken, of zo iets. Slechts één procent van de operaties moest door Comité I stopgezet worden. Dat is één procent te veel, maar dat is eerlijk gezegd ook vrij weinig. Een vraag is hoe Comité I er voor zorgt dat dit a) zo weinig blijft en b) steeds minder wordt? Worden er cursussen gegeven aan de medewerkers van de diensten? Wordt men desnoods gesanctioneerd bij (herhaaldelijke) inbreuken?

Er wordt meer gebruik gemaakt van het inbreken op computersystemen, en ook dat lijkt me logisch: heel wat criminaliteit verhuist de dag van vandaag naar de digitale wereld. Ik vraag me daar wel bij af of onze inlichtingendiensten voldoende scholing en recruitering van specialisten ter zake doet. Het is ook eenvoudiger inbreuken tegen de wet te verbergen dan bij gebruik van meer conventionele bijzondere inlichtingendiensten. Snel een tap afzetten en de logs en data verwijderen is eenvoudiger dan officiële verslagen te vernietigen. Toch moet ook dit allemaal volgens de wet gebeuren. Uiteraard.

One question that remains for me is what percentage of the operations Comité I has monitored. Because if only one percent of the operations had to be stopped by Comité I, but Comité I only examines two percent of all operations carried out, that would mean that half of all operations are executed incorrectly. Presumably they examine far more. But how does the citizen know that? I also do not see how revealing that percentage would make things easier for criminals. Or perhaps I simply have not found that information yet?

It is nice that Comité I reports that not a single journalist, doctor or lawyer was a target last year. Journalist Lars of De Tijd will be very pleased about that. It is presumably a reaction to this news item. Our law is, rightly, very specific on this point. It must, of course, be followed. Everyone in this country, including criminals, must be able to receive medical care in confidence. Wiretapping a doctor is only allowed in very exceptional cases (e.g. when the doctor is personally involved in the criminal activities of his or her patient). Journalists and lawyers must, furthermore, always be able to do their work independently.

I would like to repeat what Peter Buysrogge, N-VA member of parliament, already stated: it reassures us that the intelligence services apparently apply their powers correctly. For me, though, it can be added that it would do our country credit, and reassure its citizens even more, if more information were published about the workings of the oversight body Comité I itself.

July 28, 2017

I'm on vacation this week, and I've been trying to disconnect and soak up time with my family. However, I had to make an exception to write a quick but exciting blog post, as Acquia was named a leader in the 2017 Gartner Magic Quadrant for Web Content Management. This marks Acquia's placement as a leader for the fourth year in a row, solidifying our position as one of the top three vendors in Gartner's report.

The 2017 Gartner Magic Quadrant for Web Content Management: Acquia recognized as a top 3 leader, next to Adobe and Sitecore.

Early in my career I didn't fully understand or value the role of industry analysts like Gartner. Experience has taught me that strong analyst reports provide credibility and expose vendors to new markets and customers. It's easy to underestimate the importance of this kind of recognition for Acquia, and by extension for Drupal. If you're not familiar with the role of analyst firms, you can think of it this way: if you want to find a good coffee place, you use Yelp. If you want to find a nice hotel in New York, you use TripAdvisor. Similarly, if a CIO or CMO wants to spend $250,000 or more on enterprise software, they often consult an analyst firm like Gartner.

This year's report further cements Acquia's position as an industry leader as we received the highest marks for "Cloud Capability and Architecture". The report further highlights how Acquia enables our customers to use Drupal to the fullest extent. We enhance Drupal with services like Acquia Lift that empower organizations to not only meet the needs of their customers, but to be ambitious with digital. Today, a variety of organizations, ranging from DocuSign to the Tennessee Department of Tourism are using Acquia Lift to create significant value for their businesses.

In addition to tools like Acquia Lift, Gartner also highlighted the flexibility inherent to Acquia's platform. Acquia's emphasis on Open APIs, ranging from Drupal 8's API-first initiative to APIs for Acquia Cloud, Acquia Site Factory, and Acquia Lift, allows organizations to deliver critical capabilities faster and better integrated in their existing environments. For example, Wilson Sporting Goods delivers experiential commerce by marrying the abilities of Drupal and Magento, while Acquia supports Motorola's partnership with Demandware.

Our tenure as a leader in the Gartner Magic Quadrant for Web Content Management has enabled organizations across every industry to take a closer look at Acquia and Drupal. Organizations like Nasdaq, Pfizer, The City of Boston, and the YMCA continue to demonstrate the advantages of evolving their operating models with Drupal in comparison to our proprietary counterparts. Every day, I get to witness firsthand how incredible and influential brands are shaping the world with Acquia and Drupal, and our standing in the Gartner Magic Quadrant reinforces that.

July 27, 2017

As announced already, I was (among other things) playing with OpenStack/RDO and had deployed a small OpenStack setup in the CentOS Infra. Then I had to look at our existing DevCloud setup. That setup was based on OpenNebula running on CentOS 6, using Gluster as the backend for the VM store. That's when I found out that Gluster isn't a valid option anymore: it was deprecated and has now even been removed from Cinder. That's a pity, as one advantage of Gluster is that you could (in fact, you had to!) use libgfapi, so that the qemu-kvm process talks directly to Gluster through libgfapi instead of accessing VM images over locally mounted Gluster volumes (please, don't even try to do that through FUSE).

So what could replace Gluster on the OpenStack side? I still have some dedicated nodes for the storage backend[s], but not enough to even think about Ceph. So it seemed my only option was to consider NFS. (Technically speaking, only the Cinder driver was removed, so I could have tried to keep using Gluster just for Glance and Nova (I have no need for Cinder in the DevCloud project), but that would clearly be dangerous for future upgrades.)

It's not that I'm a fan of storing qcow2 images on top of NFS, but it seemed to be my only option, and at least the most transparent/least intrusive path, should I need to migrate to something else later. So let's test this first, before then using NFS over InfiniBand (using IPoIB), and thus at "good speed" (I still have the InfiniBand hardware in place, currently running for Gluster, which will be replaced).

It's easy to mount the NFS-exported directory under /var/lib/glance/images for Glance, and then, on every compute node, also mount an NFS export under /var/lib/nova/instances/.
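
As a rough sketch, the mounts could look like this (the NFS server name and export paths here are just assumptions, adapt them to your setup):

# On the Glance node: the image store
$ mount -t nfs nfs-server.example.com:/exports/glance /var/lib/glance/images

# On every compute node: the Nova instances directory
$ mount -t nfs nfs-server.example.com:/exports/nova /var/lib/nova/instances

# Or persistently, via an /etc/fstab entry like:
# nfs-server.example.com:/exports/nova  /var/lib/nova/instances  nfs  defaults,_netdev  0 0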

That's where you have to check what gets blocked by SELinux, as the current policy shipped with openstack-selinux-0.8.6-0 (from Ocata) doesn't seem to allow that.

I initially tested the services one by one and decided to open a pull request for this, but in the meantime I built a custom SELinux policy that seems to do the job in my RDO playground.

Here is the .te file that you can compile into a usable .pp policy file:

module os-local-nfs 0.2;

require {
    type glance_api_t;
    type virtlogd_t;
    type nfs_t;
    class file { append getattr open read write unlink create };
    class dir { search getattr write remove_name create add_name };
}

#============= glance_api_t ==============
allow glance_api_t nfs_t:dir { search getattr write remove_name create add_name };
allow glance_api_t nfs_t:file { write getattr unlink open create read};

#============= virtlogd_t ==============
allow virtlogd_t nfs_t:dir search;
allow virtlogd_t nfs_t:file { append getattr open };
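
To turn that .te into a loadable .pp module, the usual checkmodule/semodule_package/semodule workflow does the job (run as root):

# Compile the type enforcement file into a policy module
$ checkmodule -M -m -o os-local-nfs.mod os-local-nfs.te

# Package it as a .pp and load it
$ semodule_package -o os-local-nfs.pp -m os-local-nfs.mod
$ semodule -i os-local-nfs.pp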

Of course you also need to enable some booleans. Some are already loaded by openstack-selinux (and you can see that from the enabled booleans by looking at /etc/selinux/targeted/active/booleans.local) but you also now need virt_use_nfs=1
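
Setting that boolean persistently is a one-liner:

$ setsebool -P virt_use_nfs 1

# Verify it took effect
$ getsebool virt_use_nfs
virt_use_nfs --> on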

Now that it works, I can replay all of that (it all comes from Puppet) on the DevCloud nodes.

Many infosec professionals travelled to Las Vegas to attend the BlackHat security conference. As I'm not one of those lucky people, I'm waiting for the presentations (they are published once the talk has been given). But I don't have time to waste sitting in front of my computer pressing F5… so let's crawl them automatically…

#!/bin/bash
curl -s https://www.blackhat.com/us-17/briefings.html \
| egrep 'https://www.blackhat.com/docs/us-17/.*\.pdf' \
| awk -F '"' '{ print $4 }' \
| while read URL
do
  F=$(basename "$URL")
  if [ ! -r "$F" ]; then
     curl -s -o "$F" "$URL"
     echo "Scraped $F"
  fi
done

Quick and dirty but it works…

[The post Lazy BlackHat Presentations Crawler has been first published on /dev/random]

I published the following diary on isc.sans.org: “TinyPot, My Small Honeypot“.

Running honeypots is always interesting to get an overview of what's happening on the Internet in terms of scanners or new threats. Honeypots are useful not only in the wild but also on your internal networks. There are plenty of solutions to deploy honeypots with more or less nice features (depending on the chosen solution). There are plenty of honeypots which can simulate specific services or even mimic a complete file system, computer or specific hardware… [Read more]

 

[The post [SANS ISC] TinyPot, My Small Honeypot has been first published on /dev/random]

July 25, 2017

Yesterday was the second time I wondered why the hotspot of my MotoG5 phone wasn't working with my tablet (after an OS update on both the phone and the tablet).

The tablet connected fine to the phone's hotspot over wifi, but the traffic was not routed to the Internet. I had forgotten how I debugged the problem the first time, so here's a little post-it so I don't forget it again. The secret is to change the APN type to:

APN type: supl,default

As reference, the other settings for Mobile Vikings:

APN: web.be
Username: web
Password: web
MCC: 206
MNC: 20
Authentication Type: PAP


July 20, 2017

As a Belgian sports fan, I will always be loyal to the Belgium National Football Team. However, I am willing to extend my allegiance to Arsenal F.C. because they recently launched their new site in Drupal 8! As one of the most successful teams of England's Premier League, Arsenal has been lacing up for over 130 years. On the new Drupal 8 site, Arsenal fans can access news, club history, ticket services, and live match results. This is also a great example of collaboration, with two Drupal companies working together - Inviqa in the UK and Phase2 in the US. If you want to see Drupal 8 on Arsenal's roster, check out https://www.arsenal.com!

Arsenal

July 19, 2017

I published the following diary on isc.sans.org: “Bots Searching for Keys & Config Files“.

If you don’t know our “404” project, I would definitively recommend having a look at it! The idea is to track HTTP 404 errors returned by your web servers. I like to compare the value of 404 errors found in web sites log files to “dropped” events in firewall logs. They can have a huge value to detect ongoing attacks or attackers performing some reconnaissance… [Read more]

 

[The post [SANS ISC] Bots Searching for Keys & Config Files has been first published on /dev/random]

July 18, 2017

The post Choose source IP with ping to force ARP refreshes appeared first on ma.ttias.be.

This is a little trick you can use to force outgoing traffic via one particular IP address on a server. By default, if your server has multiple IPs, it'll use the default IP address for any outgoing traffic.

However, if you're changing IP addresses or testing failovers, you might want to force traffic to leave your server as if it's coming from one of the other IP addresses. That way, upstream switches learn your server has this IP address and they can update their ARP caches.

ping allows you to do that very easily (sending ICMP traffic).

$ ping 8.8.8.8 -I 10.0.1.4

The -I parameter sets the interface via which packets should be sent; it accepts either an interface name (like eth1) or an IP address. This way, traffic leaves your server with srcip 10.0.1.4.
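
The same trick works with a device name instead of an address, and you can watch the outgoing packets to confirm which source IP is actually used (eth1 is just an example interface here):

# Send the pings out via a specific interface instead of a specific source IP
$ ping 8.8.8.8 -I eth1

# In another terminal: check which source address the ICMP packets carry
$ tcpdump -n -i eth1 icmp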

Docs describe -I like this;

-I interface address
 Set  source address to specified interface address. Argument may be numeric IP
 address or name of device. When pinging IPv6 link-local address this option is
 required.

So it works for both individual IP addresses and interfaces.

The post Choose source IP with ping to force ARP refreshes appeared first on ma.ttias.be.

The post Apache httpd 2.2.15-60: underscores in hostnames are now blocked appeared first on ma.ttias.be.

A minor update to the Apache httpd project on CentOS 6 had an unexpected consequence. The update from 2.2.15-59 to 2.2.15-60, advised because of a small security issue, started respecting RFC 1123 and, as a result, stopped allowing underscores in hostnames.

I spent a while debugging a similar problem with IE dropping cookies on hostnames with underscores, because it turns out that's not a valid "hostname" as per the definition in RFC 1123.

Long story short, the minor update to Apache broke these kinds of URLs;

  • http://client_one.site.tld/
  • http://client_two.site.tld/page/three

These worked fine before, but now started throwing these errors;

Bad Request

Your browser sent a request that this server could not understand.
Additionally, a 400 Bad Request error was encountered while trying to use an ErrorDocument to handle the request.

The error logs showed the message as such;

[error] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23)

The short-term workaround was to downgrade Apache again.

$ yum downgrade httpd-tools httpd mod_ssl

And that allowed the underscores again; the long-term plan is to migrate those (sub)domains to names without underscores.
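
If downgrading isn't an option, the stricter parsing that was backported into these httpd releases is, as far as I know, controlled by the new HttpProtocolOptions directive, so relaxing it should also restore the old behaviour. Verify that your build actually supports the directive before relying on this, and keep in mind that "Unsafe" loosens other request checks too:

# Possible workaround (check your httpd changelog first to confirm HttpProtocolOptions exists)
$ echo 'HttpProtocolOptions Unsafe' > /etc/httpd/conf.d/protocol.conf
$ service httpd configtest && service httpd restart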

The post Apache httpd 2.2.15-60: underscores in hostnames are now blocked appeared first on ma.ttias.be.

This is a long read, skip to “Prioritizing the projects and changes” for the approach details...

Organizations and companies generally have an IT workload (dare I say, backlog?) which needs to be properly assessed, prioritized and taken up. Sometimes, the IT team(s) get an amount of budget and HR resources to "do their thing", while others need to continuously ask for approval to launch a new project or instantiate a change.

Sizeable organizations even require engineering and development effort on IT projects which is not readily available: specialized teams exist, but they are, governance-wise, assigned to projects. And as everyone thinks their project is the top-most priority, many will be disappointed when they hear there are no resources available for their pet project.

So... how should organizations prioritize such projects?

Structure your workload, the SAFe approach

A first exercise you want to implement is to structure the workload, ideas or projects. Some changes are small, others are large. Some are disruptive, others are evolutionary. Trying to prioritize all different types of ideas and changes in the same way is not feasible.

Structuring workload is a common approach. Changes are grouped in projects, projects grouped in programs, programs grouped in strategic tracks. Lately, with the rise in Agile projects, a similar layering approach is suggested in the form of SAFe.

In the Scaled Agile Framework a structure is suggested that uses, as a top-level approach, value streams. These are strategically aligned steps that an organization wants to use to build solutions that provide a continuous flow of value to a customer (which can be internal or external). For instance, for a financial service organization, a value stream could focus on 'Risk Management and Analytics'.

SAFe full framework

SAFe full framework overview, picture courtesy of www.scaledagileframework.com

The value streams are supported through solution trains, which implement particular solutions. This could be a final product for a customer (fitting in a particular value stream) or a set of systems which enable capabilities for a value stream. It is at this level, imo, that the benefits exercises from IT portfolio management and benefits realization management research play their role (more about that later). For instance, a solution train could focus on an 'Advanced Analytics Platform'.

Within a solution train, agile release trains provide continuous delivery for the various components or services needed within one or more solutions. Here, the necessary solutions are continuously delivered in support of the solution trains. At this level, focus is given to the culture within the organization (think DevOps) and the relatively short-lived delivery periods. This is the level where I see 'projects' come into play.

Finally, you have the individual teams working on deliverables supporting a particular project.

SAFe is just one of the many methods for organization and development/delivery management. It is a good blueprint to look into, although I fear that larger organizations will find it challenging to dedicate resources in a manageable way. For instance, how to deal with specific expertise across solutions which you can't dedicate to a single solution at a time? What if your organization only has two telco experts to support dozens of projects? Keep that in mind, I'll come back to that later...

Get non-content information about the value streams and solutions

Next to the structuring of the workload, you need to obtain information about the solutions that you want to implement (keeping with the SAFe terminology). And bear in mind that seemingly dull things such as ensuring your firewalls are up to date are also deliverables within a larger ecosystem. Now, with information about the solutions, I don't mean the content-wise information, but instead focus on other areas.

Way back, in 1952, Harry Markowitz introduced Modern portfolio theory as a mathematical framework for assembling a portfolio of assets such that the expected return is maximized for a given level of risk (quoted from Wikipedia). This was later used in an IT portfolio approach by McFarlan in his Portfolio Approach to Information Systems article, published in September 1981.

There it was already introduced that risk and return shouldn't be looked at from an individual project viewpoint, but from how each project contributes to the overall risk and return. A balance, if you wish. His article attempts to categorize projects based on risk profiles in various areas. Personally, I see the suggested categorization more as a way of supporting workload assessments (how many man-days of work will this be), but I digress.

Since then, other publications came up which tried to document frameworks and methodologies that facilitate project portfolio prioritization and management. The focus often boils down to value or benefits realization. In The Information Paradox, John Thorp comes up with a benefits realization approach, which enables organizations to better define and track benefits realization - although it again boils down to larger transformation exercises rather than the lower-level backlogs. The realm of IT portfolio management and Benefits realization management gives interesting pointers for the theoretical side of prioritizing projects.

Still, although one can hardly state the resources are incorrect, a common question is how to make this tangible. Personally, I tend to view the above on the value stream level and solution train level. Here, we have a strong alignment with benefits and value for customers, and we can leverage the ideas of past research.

The information needed at this level often boils down to strategic insights and business benefits, coarse-grained resource assessments, with an important focus on quality of the resources. For instance, a solution delivery might take up 500 days of work (rough estimation) but will also require significant back-end development resources.

Handling value streams and solutions

As we implement this on the highest level in the structure, it should be conceivable that the overview of the value streams (a dozen or so) and solutions (a handful per value stream) is manageable, and something that at an executive level is feasible to work with. These are the larger efforts for structuring and making strategic alignment. Formal methods for prioritization are generally not implemented or described.

In my company, there are exercises that are aligning with SAFe, but it isn't company-wide. Still, there is a structure in place that (within IT) one could map to value streams (with some twisting ;-) and, within value streams, there are structures in place that one could map to the solution train exercises.

We could assume that the enterprise knows about its resources (people, budget ...) and makes a high-level suggestion on how to distribute the resources in the mid-term (such as the next 6 months to a year). This distribution is challenged and worked out with the value stream owners. See also "lean budgeting" in the SAFe approach for one way of dealing with this.

There is no prioritization of value streams. The enterprise has already made its decision on what it finds to be the important values and benefits and decided those in value streams.

Within a value stream, the owner works together with the customers (internal or external) to position and bring out solutions. My experience here is that prioritization is generally based on timings and expectations from the customer. In case of resource contention, the most challenging decision to make here is to put a solution down (meaning, not to pursue the delivery of a solution), and such decisions are hardly taken.

Prioritizing the projects and changes

In the lower echelons of the project portfolio structure, we have the projects and changes. Let's say that the levels here are projects (agile release trains) and changes (team-level). Here, I tend to look at prioritization on project level, and this is the level that has a more formal approach for prioritization.

Why? Because unlike the higher levels, where the prioritization is generally quality-oriented on a manageable amount of streams and solutions, here we have a large quantity of projects and ideas. Hence, prioritization is more quantity-oriented, where formal methods are more efficient.

The method that is used in my company uses scoring criteria on a per-project level. This is not innovative per se, as past research has also revealed that project categorization and mapping is a powerful approach for handling project portfolios. Just look for "categorizing priority projects it portfolio" in Google and you'll find ample resources. Kendal's Advanced Project Portfolio Management and the PMO (book) has several example project scoring criteria. But allow me to explain our approach.

It basically is like so:

  1. Each project selects three value drivers (list decided up front)
  2. For the value drivers, the projects check if they contribute to it slightly (low), moderately (medium) or fully (high)
  3. The value drivers have weights, as do the values. Sum the resulting products to get a priority score
  4. Have the priority score validated by a scoring team

Let's get to the details of it.

For the IT projects within the infrastructure area (which is what I'm active in), we have around 5 scoring criteria (value drivers) that are value-stream agnostic, and then 3 to 5 scoring criteria that are value-stream specific. Each scoring criteria has three potential values: low (2), medium (4) and high (9). The numbers are the weights that are given to the value.

A scoring criteria also has a weight. For instance, we have a scoring criteria on efficiency (read: business case) which has a weight of 15, so a score of medium within that criteria gives a total value of 60 (4 times 15). The potential values here are based on the "return on investment" value, with low being a return within 2 years, medium within a year, and high within a few months (don't hold me to the actual values, but you get the idea).

The sum of all values gives a priority score. Now, hold your horses, because we're not done yet. There is a scoring rule that says a project can only be scored on at most 3 scoring criteria. Hence, project owners need to see which scoring areas their project is most visible in, and use those scoring criteria. This rule supports the notion that people shouldn't bring forward ideas that will fix world hunger and cure cancer, but specific, well-scoped ideas (the former are generally huge projects, while the latter require much less resources).

OK, so you have a score - is that your priority? No. As a project always falls within a particular value stream, we have a "scoring team" for each value stream which does a number of things. First, it checks if your project really belongs in that value stream (but that's generally implied) and has a deliverable that fits the solution or target within that stream. Projects that don't deliver any value or aren't requested by customers are eliminated.

Next, the team validates if the scoring that was used is correct: did you select the right values (low, medium or high) matching the methodology for said criteria? If not, then the score is adjusted.

Finally, the team validates if the resulting score is perceived to be OK or not. Sometimes, ideas just don't map correctly on scoring criteria, and even though a project has a huge strategic importance or deliverable it might score low. In those cases, the scoring team can adjust the score manually. However, this is more of a fail-safe (due to the methodology) rather than the norm. About one in 20 projects gets its score adjusted. If too many adjustments come up, the scoring team will suggest a change in methodology to rectify the situation.

With the score obtained and validated by the scoring team, the project is given a "go" to move to the project governance. It is the portfolio manager that then uses the scores to see when a project can start.

Providing levers to management

Now, these scoring criteria are not established from a random number generator. An initial suggestion was made on the scoring criteria, and their associated weights, to the higher levels within the organization (read: the people in charge of the prioritization and challenging of value streams and solutions).

The same people are those that approve the weights on the scoring criteria. If management (as this is often the level at which this is decided) feels that business case is, overall, more important than risk reduction, then they will be able to put a higher value in the business case scoring than in the risk reduction.

The only constraint is that the total value of all scoring criteria must be fixed. So an increase on one scoring criteria implies a reduction on at least one other scoring criteria. Also, changing the weights (or even the scoring criteria themselves) cannot be done frequently. There is some inertia in project prioritization: not the implementation (because that is a matter of following through) but the support it will get in the organization itself.

Management can then use external benchmarks and other sources to gauge the level that an organization is at, and then - if needed - adjust the scoring weights to fit their needs.

Resource allocation in teams

Portfolio managers use the scores assigned to the projects to drive their decisions as to when (and which) projects to launch. The trivial approach is to always pick the projects with the highest scores. But that's not all.

Projects can have dependencies on other projects. If these dependencies are "hard" and non-negotiable, then the upstream project (the one being dependent on) inherits the priority of the downstream project (the one depending on the first) if the downstream project has a higher priority. Soft dependencies however need to validate if they can (or have to) wait, or can implement workarounds if needed.

Projects also have specific resource requirements. A project might have a high priority, but if it requires expertise (say DBA knowledge) which is unavailable (because those resources are already assigned to other ongoing projects) then the project will need to wait (once resources are fully allocated and the projects are started, then they need to finish - another reason why projects have a narrow scope and an established timeframe).

For engineers, operators, developers and other roles, this approach allows them to see which workload is more important versus others. When their scope is always within a single value stream, then the mentioned method is sufficient. But what if a resource has two projects, each of a different value stream? As each value stream has its own scoring criteria it can use (and weight), one value stream could systematically have higher scores than others...

Mixing and matching multiple value streams

To allow projects to be somewhat comparable in priority values, an additional rule has been made in the scoring methodology: value streams must have a comparable amount of scoring criteria (value drivers), and the total value of all criteria must be fixed (as was already mentioned before). So if there are four scoring criteria and the total value is fixed at 20, then one value stream can have its criteria at (5,3,8,4) while another has it at (5,5,5,5).

This is still not fully adequate, as one value stream could use a single criteria with the maximum amount (20,0,0,0). However, we elected not to put in an additional constraint, and have management work things out if the situation ever comes up. Luckily, even managers are just human and they tend to follow the notion of well-balanced value drivers.

The result is that two projects will have priority values that are currently sufficiently comparable to allow cross-value-stream experts to be exchangeable without monopolizing these important resources to a single value stream portfolio.

Current state

The scoring methodology has been around for a few years already. Initially, it had fixed scoring criteria used by three value streams (out of seven, the other ones did not use the same methodology), but this year we switched to support both value stream agnostic criteria (like in the past) as well as value stream specific ones.

The methodology is furthest progressed in one value stream (with a focus on around 1000 projects) and is being taken up by two others (they are still looking at what their stream-specific criteria are before switching).

July 17, 2017

The post mysqldump without table locks (MyISAM and InnoDB) appeared first on ma.ttias.be.

It's one of those things I always have to Google again and again, so I'm documenting it for my own sanity.

Here's how to run mysqldump without any locks, which works on both the MyISAM and InnoDB storage engines. There are neither READ nor WRITE locks, which means the dump will have little to no influence on the machine (except for the CPU & disk I/O of taking the back-up), but your data will also not be consistent (!!!).

This is useful for testing migrations though, where you might not need locks, but are more interested in timing back-ups or spotting other timeout-related bugs.

$ mysqldump --compress --quick --triggers --routines --lock-tables=false --single-transaction {YOUR_DATABASE_NAME}

The key parameters are --lock-tables=false (for MyISAM) and --single-transaction (for InnoDB).
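
For example, a lock-free dump piped straight into a compressed file could look like this (the database name and target path are placeholders):

$ mysqldump --compress --quick --triggers --routines --lock-tables=false \
    --single-transaction {YOUR_DATABASE_NAME} | gzip > /tmp/{YOUR_DATABASE_NAME}.sql.gz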

If you need more options or flexibility, xtrabackup is a tool to check out.

The post mysqldump without table locks (MyISAM and InnoDB) appeared first on ma.ttias.be.

July 13, 2017

July 12, 2017

A QEventLoop is a heavy dependency. Not every worker thread wants to require all its consumers to have one. This renders QueuedConnection not always suitable. I get that signals and slots are a useful mechanism, also for communication between threads. But what if your worker thread has no QEventLoop, yet wants to wait for a result of what another worker thread produces?

QWaitCondition is often what you want. Don’t be afraid to use it. Also, don’t be afraid to use QFuture and QFutureWatcher.

Just be aware that the guys at Qt have not yet decided what the final API for the asynchronous world should be. The KIO guys discussed making a QJob and/or a QAbstractJob. Because QFuture is result (of T) based (and waits and blocks on it, using a condition). And a QJob (derived from what currently KJob is), isn’t or wouldn’t or shouldn’t block (such a QJob should allow for interactive continuation, for example — “overwrite this file? Y/N”). Meanwhile you want a clean API to fetch the result of any asynchronous operation. Blocked waiting for it, or not. It’s an uneasy choice for an API designer. Don’t all of us want APIs that can withstand the test of time? We do, yes.

Yeah. The world of programming is, at some level, complicated. But I'm also sure something good will come out of it. Meanwhile, form your asynchronous APIs on the principles of QFuture and/or KJob: return something that can be waited for.

Sometimes a prediction of how it will be like is worth more than a promise. I honestly can’t predict what Thiago will approve, commit or endorse. And I shouldn’t.

 

I published the following diary on isc.sans.org: “Backup Scripts, the FIM of the Poor“.

File Integrity Management or "FIM" is an interesting security control that can help to detect unusual changes in a file system. For example, on a server, there are directories that do not change often. An example in a UNIX environment:

  • Binaries & libraries in /usr/lib, /usr/bin, /bin, /sbin, /usr/local/bin, …
  • Configuration files in /etc
  • Devices files in /dev

[Read more]
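
As a rough illustration of the idea (this is not from the diary itself, just a minimal sketch): a baseline-and-compare check over a directory that rarely changes can be as simple as a find piped into sha256sum.

# Build a baseline of checksums once
$ find /etc -type f -exec sha256sum {} + | sort -k 2 > /var/tmp/etc.baseline

# Later, compare the current state against that baseline
$ find /etc -type f -exec sha256sum {} + | sort -k 2 | diff /var/tmp/etc.baseline -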

 

[The post [SANS ISC] Backup Scripts, the FIM of the Poor has been first published on /dev/random]

If you visit Acquia's homepage today, you will be greeted by this banner:

Acquia supports net neutrality

We've published this banner in solidarity with the hundreds of companies who are voicing their support of net neutrality.

Net neutrality regulations ensure that web users are free to enjoy whatever sites they choose without interference from Internet Service Providers (ISPs). These protections establish an open web where people can explore and express their ideas. Under the current administration, the U.S. Federal Communications Commission favors less-strict regulation of net neutrality, which could drastically alter the way that people experience and access the web. Today, Acquia is joining the ranks of companies like Amazon, Atlassian, Netflix and Vimeo to advocate for strong net neutrality regulations.

Why the FCC wants to soften net neutrality regulations

In 2015, the United States implemented strong protections favoring net neutrality after ISPs were classified as common carriers under Title II of the Communications Act of 1934. This classification catalogs broadband as an "essential communication service", which means that services are to be delivered equitably and costs kept reasonable. Title II was the same classification granted to telcos decades ago to ensure consumers had fair access to phone service. Today, the Title II classification of ISPs protects the open internet by making paid prioritization, blocking or throttling of traffic unlawful.

The issue of net neutrality has come under scrutiny since the appointment of Ajit Pai as the Chairman of the Federal Communications Commission. Pai favors less regulation and has suggested that the net neutrality laws of 2015 impede the ISP market. He argues that while people may support net neutrality, the market requires more competition to establish faster and cheaper access to the Internet. Pai believes that net neutrality regulations have the potential to curb investment in innovation and could heighten the digital divide. As FCC Chairman, Pai wants to reclassify broadband services under less-restrictive regulations and to eliminate definitive protections for the open internet.

In May 2017, the three members of the Federal Communications Commission voted 2-1 to advance a plan to remove Title II classification from broadband services. That vote launched a public comment period, which is open until mid August. After this period the commission will take a final vote.

Why net neutrality protections are good

I strongly disagree with Pai's proposed reclassification of net neutrality. Without net neutrality, ISPs can determine how users access websites, applications and other digital content. Today, both the free flow of information, and exchange of ideas benefit from 'open highways'. Net neutrality regulations ensure equal access at the point of delivery, and promote what I believe to be the fairest competition for content and service providers.

If the FCC rolls back net neutrality protections, ISPs would be free to charge site owners for priority service. This goes directly against the idea of an open web, which guarantees an unfettered and decentralized platform to share and access information. There are many challenges in maintaining an open web, including "walled gardens" like Facebook and Google. We call them "walled gardens" because they control the applications, content and media on their platform. While these closed web providers have accelerated access and adoption of the web, they also raise concerns around content control and privacy. Issues of net neutrality present a similar challenge.

When certain websites have degraded performance because they can't afford the premiums asked by ISPs, it affects how we explore and express ideas online. Not only does it drive up the cost of maintaining a website, but it undermines the internet as an open space where people can explore and express their ideas. It creates a class system that puts smaller sites or less funded organizations at a disadvantage. Dismantling net neutrality regulations raises the barrier for entry when sharing information on the web as ISPs would control what we see and do online. Congruent with the challenge of "walled gardens", when too few organizations control the media and flow of information, we must be concerned.

In the end, net neutrality affects how people, including you and me, experience the web. The internet's vast growth is largely a result of its openness. Contrary to Pai's reasoning, the open web has cultivated creativity, spawned new industries, and protects the free expression of ideas. At Acquia, we believe in supporting choice, competition and free speech on the internet. The "light touch" regulations now proposed by the FCC may threaten that very foundation.

What you can do today

If you're also concerned about the future of net neutrality, you can share your comments with the FCC and the U.S. Congress (it will only take you a minute!). You can do so through Fight for the Future, who organized today's day of action. The 2015 ruling that classified broadband service under Title II came after the FCC received more than 4 million comments on the topic, so let your voice be heard.

July 11, 2017

The post Launching the cron.weekly forum appeared first on ma.ttias.be.

I've been writing a weekly newsletter on Linux & open source technologies for nearly 2 years now at cron.weekly, and to this day I'm amazed by all the feedback and responses I've gotten from it. What's even more fun is watching the subscriber base grow on a weekly basis, to over 6,000 users already!

(That initial big spike is actually from when I imported the MailChimp list into a self-hosted Sendy installation)

At every issue there's good feedback on the projects I list, comments on the stories, ... it's so much fun!

But there's a downside ...

Taking cron.weekly to the next level

All that feedback has been directed at me. I've been mailing thousands of Linux enthusiasts every week and it's been one-way communication. I talk, they listen. That seems like a waste of potential.

Imagine if everyone on that newsletter could share their knowledge and expertise and connect with one another. I'm definitely not the smartest person to be mailing them all; there are far more intelligent folks subscribed. I want to get them involved!

To start building a community around cron.weekly I've launched the cron.weekly forum. Yes, there are things like Reddit, Stack Overflow, Quora, ... and all other fun places to discuss topics. But there's one thing they don't have: the like-minded people that subscribe to cron.weekly, who each have the same interest at heart: caring about Linux, open source & web technologies.

Launching the forum is an experiment; there are already so many places out there to "waste your time" online, but I'm confident it can become an interactive place to discuss new technologies, share ideas or launch new open source projects.

Heck, there are already interesting topics on load balancing features & requirements, end-to-end encrypted backups, new open source projects like git-dit, ...

Highlighting questions

If questions posted to the forum remain unanswered, I'll call upon the great powers of cron.weekly subscribers to highlight them and raise awareness, get more eyes on the topic & find the best answer possible.

The last cron.weekly issue #88 already had an "Ask cron.weekly" section to get that started.

I'm confident about the future of that forum. I think the newsletter can be a great way to get more attention to difficult-to-solve questions and allow cron.weekly to actively help its community members.

How can you help?

  • Promote the cron.weekly newsletter to your friends, family & pets
  • If you've got a problem, try asking on the cron.weekly forum
  • If you started a new open source project or reached a new milestone, brag about it in the "Show CW" section

Now let's build a community together. :-) (1)

(1) I know, it's cheesy -- but I had to end my post with something, right?

The post Launching the cron.weekly forum appeared first on ma.ttias.be.

July 10, 2017

July 09, 2017

The first release of the CDN module for Drupal was 9.5 years ago yesterday: cdn 5.x-1.0-beta1 was released on January 8, 2008!

Excitement

On January 27, 2008, the first RC followed, with boatloads of new features. Over the years, it was ported to Drupal 6¹, 7 and 8 and gained more features (I effectively added every single feature that was requested — I loved empowering the site builder). I did the same with my Hierarchical Select module.

I was a Computer Science student for the first half of those 9.5 years, and it was super exciting to see people actually use my code on hundreds, thousands and even tens of thousands of sites! In stark contrast with the assignments at university, where the results were graded, then discarded.

Frustration

Unfortunately this approach resulted in feature-rich modules, with complex UIs to configure them, and many, many bug reports and support requests, because they were so brittle and confusing. Rather than making the 80% case simple, I supported 99% of needed features, and made things confusing and complex for 100% of the users.


CDN settings: 'Details' tab. The main CDN module configuration UI in Drupal 7.


Learning

In Acquia’s Office of the CTO, my job is effectively “make Drupal better & faster”.

In 2012–2013, it was improving the authoring experience by adding in-place editing and tightly integrating CKEditor. Then it shifted in 2014 and 2015 to “make Drupal 8 shippable”, first by working on the cache system, then on the render pipeline and finally on the intersection of both: Dynamic Page Cache and BigPipe. After Drupal 8 shipped at the end of 2015, the next thing became “improve Drupal 8’s REST APIs”, which grew into the API-First Initiative.

All this time (5 years already!), I’ve been helping to build Drupal itself (the system, the APIs, the infrastructure, the overarching architecture), and have seen the long-term consequences from both up close and afar: the concepts required to understand how it all works, the APIs to extend, override and plug in to. In that half decade, I’ve often cursed past commits, including my own!

That’s what led to:

  • my insistence that the dynamic_page_cache and big_pipe modules in Drupal 8 core do not have a UI, nor any configuration, and rely entirely on existing APIs and metadata to do their thing (with only a handful of bug reports in >18 months!)
  • my “Backwards Compatibility: Burden & Benefit” talk a few months ago
  • and of course this rewrite of the CDN module

CDN module in Drupal 8: radically simpler

I started porting the CDN module to Drupal 8 in March 2016 — a few months after the release of Drupal 8. It is much simpler to use (just look at the UI). It has less overhead (the UI is in a separate module, the altering of file URLs has far simpler logic). It has lower technical complexity (File Conveyor support was dropped, it no longer needs to detect HTTP vs HTTPS: it always uses protocol-relative URLs, less unnecessary configurability, the farfuture functionality no longer tries to generate file and no longer has extremely detailed configurability).

In other words: the CDN module in Drupal 8 is much simpler. And has much better test coverage too. (You can see this in the tarball size too: it’s about half of the Drupal 7 version of the module, despite significantly more test coverage!)


CDN UI module version 3.0-rc2: the CDN UI module in Drupal 8.


all the fundamentals
  • the ability to use simple CDN mappings, including conditional ones depending on file extensions, auto-balancing, and complex combinations of all of the above
  • preconnecting (and DNS prefetching for older browsers)
  • a simple UI to set it up — in fact, much simpler than before!
changed/improved
  1. the CDN module now always uses protocol-relative URLs, which means there’s no more need to distinguish between HTTP and HTTPS, which simplifies a lot
  2. the UI is now a separate module
  3. the UI is optional: for power users there is a sensible configuration structure with strict config schema validation
  4. complete unit test coverage of the heart of the CDN module, thanks to D8’s improved architecture
  5. preconnecting (and DNS prefetching) using headers rather than tags in the HTML head, which allows a much simpler/cleaner Symfony response subscriber
  6. tours instead of advanced help, which very often was ignored
  7. there is nothing to configure for the SEO (duplicate content prevention) feature anymore
  8. nor is there anything to configure for the Forever cacheable files feature anymore (named Far Future expiration in Drupal 7), and it’s a lot more robust
removed
  1. File Conveyor support
  2. separate HTTPS mapping (also mentioned above)
  3. all the exceptions (blacklist, whitelist, based on Drupal path, file path…) — all of them are a maintenance/debugging/cacheability nightmare
  4. configurability of SEO feature
  5. configurability of unique file identifiers for the Forever cacheable files feature
  6. testing mode

For very complex mappings, you must manipulate cdn.settings.yml — there’s inline documentation with examples there. Those who need the complex setups don’t mind reading three commented examples in a YAML file. This used to be configurable through the UI, but it also was possible to configure it “incorrectly”, resulting in broken sites — that’s no longer possible.

There’s comprehensive test coverage for everything in the critical path, and basic integration test coverage. Together, they ensure peace of mind, and uncover bugs in the next minor Drupal 8 release: BC breaks are detected early and automatically.

The results after 8 months: contributed module maintainer bliss

The first stable release of the CDN module for Drupal 8 was published on December 2, 2016. Today, I released the first patch release: cdn 8.x-3.1. The change log is tiny: a PHP notice fixed, two minor automated testing infrastructure problems fixed, and two new minor features added.

We can now compare the Drupal 7 and 8 versions of the CDN module:

In other words: maintaining this contributed module now requires pretty much zero effort!

Conclusion

For your own Drupal 8 modules, no matter if they’re contributed or custom, I recommend a few key rules:

  • Selective feature set.
  • Comprehensive unit test coverage for critical code paths (UnitTestCase)² + basic integration test coverage (BrowserTestBase) maximizes confidence while minimizing time spent.
  • Don’t provide/build APIs (that includes hooks) unless you see a strong use case for it. Prefer coarse over granular APIs unless you’re absolutely certain.
  • Avoid configurability if possible. Otherwise, use config schemas to your advantage, provide a simple UI for the 80% use case. Leave the rest to contrib/custom modules.

This is more empowering for the Drupal site builder persona, because they can’t shoot themselves in the foot anymore. It’s no longer necessary to learn the complex edge cases in each contributed module’s domain, because they’re no longer exposed in the UI. In other words: domain complexities no longer leak into the UI.

At the same time, it hugely decreases the risk of burnout in module maintainers!

And of course: use the CDN module, it’s rock solid! :)

Related reading

Finally, read Amitai Burstein’s “OG8 Development Mindset”! He makes very similar observations, albeit about a much bigger contributed module (Organic Groups). Some of my favorite quotes:

  1. About edge cases & complexity:
    Edge cases are no longer my concern. I mean, I’m making sure that edge cases can be done and the API will cater to it, but I won’t go too far and implement them. […] we’ve somewhat reduced the flexibility in order to reduce the complexity; but while doing so, made sure edge cases can still hook into the process.
  2. About tests:
    I think there is another hidden merit in tests. By taking the time to carefully go over your own code - and using it - you give yourself some pause to think about the necessity of your recently added code. Do you really need it? If you are not afraid of writing code and then throwing it out the window, and you are true to yourself, you can create a better, less complex, and polished module.
  3. About feature set & UI:
    One of the mistakes that I feel we made in OG7 was exposing a lot of the advanced functionality in the UI. […] But these are all advanced use cases. When thinking about how to port them to OG8, I think we found the perfect solution: we didn’t port it.

  1. I also did my bachelor thesis about Drupal + CDN integration, which led to the Drupal 6 version of the module. ↩︎

  2. Unit tests in Drupal 8 are wonderful, they’re nigh impossible in Drupal 7. They finish running in seconds. ↩︎

July 08, 2017

I published the following diary on isc.sans.org: “A VBScript with Obfuscated Base64 Data“.

A few months ago, I posted a diary to explain how to search for (malicious) PE files in Base64 data. Base64 is indeed a common way to distribute binary content in an ASCII form. There are plenty of scripts based on this technique. On my Macbook, I’m using a small service created via Automator to automatically decode highlighted Base64 data and submit them to my Viper instance for further analysis… [Read more]

[The post [SANS ISC] A VBScript with Obfuscated Base64 Data has been first published on /dev/random]

July 06, 2017

I’m at home now. I don’t do non-public unpaid work. So let’s blog the example I’m making for him.

workplace.h

#ifndef Workplace_H
#define Workplace_H

#include <QObject>
#include <QFuture>
#include <QWaitCondition>
#include <QMutex>
#include <QStack>
#include <QList>
#include <QThread>
#include <QFutureWatcher>

class Workplace;

typedef enum {
    WT_INSERTS,
    WT_QUERY
} WorkplaceWorkType;

typedef struct {
    WorkplaceWorkType type;
    QList<int> values;
    QString query;
    QFutureInterface<bool> insertIface;
    QFutureInterface<QList<QStringList> > queryIface;
} WorkplaceWork;

class WorkplaceWorker: public QThread {
    Q_OBJECT
public:
    WorkplaceWorker(QObject *parent = NULL)
        : QThread(parent), m_running(false) { }
    void run() Q_DECL_OVERRIDE;
    void pushWork(WorkplaceWork *a_work);
private:
    QStack<WorkplaceWork*> m_ongoing;
    QMutex m_mutex;
    QWaitCondition m_waitCondition;
    bool m_running;
};

class Workplace: public QObject {
    Q_OBJECT
public:
    explicit Workplace(QObject *a_parent=0) : QObject (a_parent) {}
    bool insert(QList<int> a_values);
    QList<QStringList> query(const QString &a_param);
    QFuture<bool> insertAsync(QList<int> a_values);
    QFuture<QList<QStringList> > queryAsync(const QString &a_param);
private:
    WorkplaceWorker m_worker;
};

class App: public QObject {
    Q_OBJECT
public slots:
    void perform();
    void onFinished();
private:
    Workplace m_workplace;
};

#endif// Workplace_H

workplace.cpp

#include "workplace.h"

void App::onFinished()
{
    QFutureWatcher<bool> *watcher = static_cast<QFutureWatcher<bool>* > ( sender() );
    delete watcher;
}

void App::perform()
{
    for (int i=0; i<10; i++) {
       QList<int> vals;
       vals.append(1);
       vals.append(2);
       QFutureWatcher<bool> *watcher = new QFutureWatcher<bool>;
       connect (watcher, &QFutureWatcher<bool>::finished, this, &App::onFinished);
       watcher->setFuture( m_workplace.insertAsync( vals ) );
    }

    for (int i=0; i<10; i++) {
       QList<int> vals;
       vals.append(1);
       vals.append(2);
       qWarning() << m_workplace.insert( vals );
       qWarning() << m_workplace.query("test");
    }
}

void WorkplaceWorker::pushWork(WorkplaceWork *a_work)
{
    if (!m_running) {
        start();
    }

    m_mutex.lock();
    switch (a_work->type) {
    case WT_QUERY:
        m_ongoing.push_front( a_work );
    break;
    default:
        m_ongoing.push_back( a_work );
    }
    m_waitCondition.wakeAll();
    m_mutex.unlock();
}

void WorkplaceWorker::run()
{
    m_mutex.lock();
    m_running = true;
    while ( m_running ) {
        m_mutex.unlock();
        m_mutex.lock();
        while ( m_ongoing.isEmpty() ) { // loop, not if: guards against spurious wakeups
            m_waitCondition.wait(&m_mutex);
        }
        WorkplaceWork *work = m_ongoing.pop();
        m_mutex.unlock();

        // Do work here and report progress
        sleep(1);

        switch (work->type) {
        case WT_QUERY: {
            // Report result here
            QList<QStringList> result;
            QStringList row;
            row.append("abc"); row.append("def");
            result.append(row);
            work->queryIface.reportFinished( &result );
            } break;

        case WT_INSERTS:
        default: {
            // Report result here
            bool result = true;
            work->insertIface.reportFinished( &result );
            } break;
        }

        m_mutex.lock();
        delete work;
    }
    m_mutex.unlock();
}

bool Workplace::insert(QList<int> a_values)
{
    WorkplaceWork *work = new WorkplaceWork;
    QFutureWatcher<bool> watcher;
    work->type = WT_INSERTS;
    work->values = a_values;
    work->insertIface.reportStarted();
    watcher.setFuture ( work->insertIface.future() );
    m_worker.pushWork( work );
    watcher.waitForFinished();
    return watcher.result();
}

QList<QStringList> Workplace::query(const QString &a_param)
{
    WorkplaceWork *work = new WorkplaceWork;
    QFutureWatcher<QList<QStringList> > watcher;
    work->type = WT_QUERY;
    work->query = a_param;
    work->queryIface.reportStarted();
    watcher.setFuture ( work->queryIface.future() );
    m_worker.pushWork( work );
    watcher.waitForFinished();
    return watcher.result();
}

QFuture<bool> Workplace::insertAsync(QList<int> a_values)
{
    WorkplaceWork *work = new WorkplaceWork;
    work->type = WT_INSERTS;
    work->values = a_values;
    work->insertIface.reportStarted();
    QFuture<bool> future = work->insertIface.future();
    m_worker.pushWork( work );
    return future;
}

QFuture<QList<QStringList> > Workplace::queryAsync(const QString &a_param)
{
    WorkplaceWork *work = new WorkplaceWork;
    work->type = WT_QUERY;
    work->query = a_param;
    work->queryIface.reportStarted();
    QFuture<QList<QStringList> > future = work->queryIface.future();
    m_worker.pushWork( work );
    return future;
}

The post Some more nuances to the systemd debacle appeared first on ma.ttias.be.

I published my thoughts on the systemd "0day username gets root access" bug a few days ago. That got quite a bit of attention and discussion in places like HN, Reddit & Twitter.

So yesterday, The Register reached out asking for my comment. It ended up in a piece they published in which I'm quoted ever so slightly.

In a blog post on Sunday, Mattias Geniar, a developer based in Belgium, said the issue qualifies as a bug because systemd's parsing of the User= parameter in unit files falls back on root privileges when user names are invalid.

It would be better, Geniar said in an email to The Register, if systemd defaulted to user rather than root privileges. Geniar stressed that while this presents a security concern, it's not a critical security issue because the attack vectors are limited.
Create a user called '0day', get bonus root privs – thanks, Systemd!

My reply to their inquiry was a bit more nuanced than that though, so for the sake of transparency I'll publish my response below.

They also reached out on Twitter to confirm that "the snark is all theirs, of course" -- I appreciate that!

My response to The Register

The e-mail starts off with a set of questions.

Lennart Poettering seems disinclined to accept that systemd should
check for invalid names.

I think he's right in that systemd doesn't have to check for invalid names, after all -- those shouldn't even get on the system in the first place. It would be nice if systemd did, though: the more validation, the better. Any web developer knows he/she shouldn't blindly trust user input, so why should systemd?

In this regard, I think Lennart is absolutely right that systemd should & does try to validate the username.

However, the problem here is that the username "0day" is a legitimate username that systemd rejects as invalid, after which it falls back to its system default, root. Arguably not a sane default; a non-privileged user would be better.

I wanted to find out why you see the issue with systemd as a security
flaw. How might the ability to create a user with a name like "0day"
be exploited?

If I can be 100% clear upfront: this flaw/bug report in systemd is most definitely a security issue. However, even though it sounds bad, it's not a critical security issue. Attack vectors are limited, but they exist. My post was mostly aimed at preventing bad press that would interpret this bug as "if your username contains a digit, you can become root".

In order to exploit this, you need;

  • a username that gets interpreted by systemd as invalid (there are most likely more potential usernames that get interpreted as invalid)
  • a systemd unit file to launch a script or service

Here's where this is a potential issue;

  • Shared hosting: systems that allow a username to be chosen by the client and that eventually run PHP, Ruby, ... as that user. On RHEL/CentOS 7, those could (1) be started by systemd
  • Self-service portals that use systemd to manage one-off or recurring tasks
  • Any place that allows user input for systemd-managed tasks, think control panels like Plesk, DirectAdmin, ... that allow usernames to be chosen for script execution

(1) those implementing shared hosting have a wide variety of ways to implement it though, so no guarantee that it's going to be a unit file with systemd.

In most cases (all?), you already need access to the system in one way or another to use this bug as a vector for privilege escalation.

Take care,

Mattias

The post Some more nuances to the systemd debacle appeared first on ma.ttias.be.

July 02, 2017

The post Giving perspective on systemd’s “usernames that start with digit get root privileges”-bug appeared first on ma.ttias.be.

Fire in the hole! There's a new systemd bug that gets the haters aroused! 

The bug in question is one where systemd's unit files that contain illegal usernames get defaulted to root, so they get run as the root user. It sounds pretty bad;

> In case of bug report: Expected behaviour you didn't see
The process started by systemd should be user previlege

> In case of bug report: Unexpected behaviour you saw
The process started by systemd was root previlege

systemd can't handle the process previlege that belongs to user name startswith number, such as 0day

Let's break it down.

How the bug happens

If you have a systemd unit file that looks like this, the service that should start as the user "0day" will actually be run as the root user.

[Unit]
Description=0day socat service
After=network.target

[Service]
User=0day
Restart=always
Type=simple
WorkingDirectory=/home/0day/
ExecStart=/usr/bin/socat TCP-LISTEN:18086,reuseaddr,fork EXEC:"/opt/run-elf"

[Install]
WantedBy=multi-user.target

This is definitely a bug, because RHEL7 (and thus, CentOS and other derivatives) does allow usernames that start with a digit. It's systemd's parsing of the User= parameter that decides the name doesn't follow its naming conventions and falls back to its default value, root.

Just to prove a point, here's a 0day user on CentOS 7.3.

$ cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
$ useradd 0day
$ su - 0day

$ id
uid=1067(0day) gid=1067(0day) groups=1067(0day)

That user works. The bug is thus in systemd where it doesn't recognize that as a valid username.
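
To illustrate the mismatch, here is a small standalone sketch (this is not systemd's actual code, just a conservative validator in the same spirit) where a name must start with a letter or an underscore. Under such a rule "0day" gets rejected, even though useradd happily created the account above:

#include <cctype>
#include <cstdio>

// Conservative username check: first character must be a letter or '_',
// the rest letters, digits, '_' or '-'. "0day" fails the first test.
static bool conservative_user_name_is_valid(const char *name)
{
    if (!name || !*name)
        return false;
    if (!isalpha((unsigned char)name[0]) && name[0] != '_')
        return false;
    for (const char *p = name + 1; *p; ++p) {
        if (!isalnum((unsigned char)*p) && *p != '_' && *p != '-')
            return false;
    }
    return true;
}

int main()
{
    const char *names[] = { "0day", "day0", "root" };
    for (unsigned i = 0; i < sizeof(names) / sizeof(names[0]); ++i)
        printf("%-5s -> %s\n", names[i],
               conservative_user_name_is_valid(names[i]) ? "accepted" : "rejected");
    return 0;
}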

Why the big fuss?

If you quickly glance over the bug (and especially the hype-media that loves to blow this up), it can come across as if every username that starts with a digit can automatically get root privileges on any machine that has systemd installed (which, let's be frank, is pretty much every modern Linux distro).

That's not the case. You need a valid systemd Unit file before that could ever happen.

This might be a security issue, but is hard to trigger

So in order to trigger this behaviour, someone with root-level privileges needs to edit a Unit file and enter an "invalid username", in this case one that starts with a digit.

But you need root level privileges to edit the file in the first place and to reload systemd to make use of that Unit file.

So here's the potential security risk;

  • You could trick a sysadmin into creating such a Unit file, hoping they miss this behaviour, and trick your user into becoming root
  • You need an exploit to grant you write access to systemd's Unit files in order to escalate your privileges further

At this point, I don't think I'm missing another attack vector here.

Should this be fixed?

Yes. It's an obvious bug (at least on RHEL/CentOS 7), since a valid username does not get accepted by systemd so it triggers unexpected behaviour by launching services as root.

However, it isn't as bad as it sounds and does not grant any username with a digit immediate root access.

But it's systemd, so everyone loves to jump on that bandwagon and hype this as much as possible. Here's the deal, folks: systemd is software and it has bugs. The Linux kernel also has bugs, but we don't go around blaming Linus for every one of those either.

I disabled the comments on this post because I'm not in the mood for yet another systemd debacle where my comment section gets abused for personal threats or violence. If you want to discuss, the post is on HackerNews & /r/linux.

The post Giving perspective on systemd’s “usernames that start with digit get root privileges”-bug appeared first on ma.ttias.be.

June 29, 2017

This week marked Acquia's 10th anniversary. In 2007, Jay Batson and I set out to build a software company based on open source and Drupal that we would come to call Acquia. In honor of our tenth anniversary, I wanted to share some of the milestones and lessons that have helped shape Acquia into the company it is today. I haven't shared these details before, so I hope that my record of Acquia's founding not only pays homage to our incredible colleagues, customers and partners that have made this journey worthwhile, but that it offers honest insight into the challenges and rewards of building a company from the ground up. If you like this story, I also encourage you to read Jay's side of the story.

A Red Hat for Drupal

In 2007, I was attending the University of Ghent working on my PhD dissertation. At the same time, Drupal was gaining momentum; I will never forget when MTV called me seeking support for their new Drupal site. I remember being amazed that a brand like MTV, an institution I had grown up with, had selected Drupal for their website. I was determined to make Drupal successful and helped MTV free of charge.

It became clear that for Drupal to grow, it needed a company focused on helping large organizations like MTV be successful with the software. A "Red Hat for Drupal", as it were. I also noticed that other open source projects, such as Linux, had benefitted from well-capitalized backers like Red Hat and IBM. While I knew I wanted to start such a company, I had not yet figured out how. I wanted to complete my PhD first before pursuing business. Due to the limited time and resources afforded to a graduate student, Drupal remained a hobby.

Little did I know that at the same time, over 3,000 miles away, Jay Batson was skimming through a WWII Navajo Code Talker Dictionary. Jay was stationed as an Entrepreneur in Residence at North Bridge Venture Partners, a venture capital firm based in Boston. Passionate about open source, Jay realized there was an opportunity to build a company that provided customers with the services necessary to scale and succeed with open source software. We were fortunate that Michael Skok, a Venture Partner at North Bridge and Jay's sponsor, was working closely with Jay to evaluate hundreds of open source software projects. In the end, Jay narrowed his efforts to Drupal and Apache Solr.

If you're curious as to how the Navajo Code Talker Dictionary fits into all of this, it's how Jay stumbled upon the name Acquia. Roughly translating as "to spot or locate", Acquia was the closest concept in the dictionary that reinforced the ideals of information and content that are intrinsic to Drupal (it also didn't hurt that the letter A would rank first in alphabetical listings). Finally, the similarity to the word "Aqua" paid homage to the Drupal Drop; this would eventually provide direction for Acquia's logo.

Breakfast in Sunnyvale

In March of 2007, I flew from Belgium to California to attend Yahoo's Open Source CMS Summit, where I also helped host DrupalCon Sunnyvale. It was at DrupalCon Sunnyvale where Jay first introduced himself to me. He explained that he was interested in building a company that could provide enterprise organizations supplementary services and support for a number of open source projects, including Drupal and Apache Solr. Initially, I was hesitant to meet with Jay. I was focused on getting Drupal 5 released, and I wasn't ready to start a company until I finished my PhD. Eventually I agreed to breakfast.

Over a baguette and jelly, I discovered that there was overlap between Jay's ideas and my desire to start a "Red Hat for Drupal". While I wasn't convinced that it made sense to bring Apache Solr into the equation, I liked that Jay believed in open source and that he recognized that open source projects were more likely to make a big impact when they were supported by companies that had strong commercial backing.

We spent the next few months talking about a vision for the business, eliminated Apache Solr from the plan, talked about how we could elevate the Drupal community, and how we would make money. In many ways, finding a business partner is like dating. You have to get to know each other, build trust, and see if there is a match; it's a process that doesn't happen overnight.

On June 25th, 2007, Jay filed the paperwork to incorporate Acquia and officially register the company name. We had no prospective customers, no employees, and no formal product to sell. In the summer of 2007, we received a convertible note from North Bridge. This initial seed investment gave us the capital to create a business plan, travel to pitch to other investors, and hire our first employees. Since meeting Jay in Sunnyvale, I had gotten to know Michael Skok who also became an influential mentor for me.

Jay and me on one of our early fundraising trips to San Francisco.

Throughout this period, I remained hesitant about committing to Acquia as I was devoted to completing my PhD. Eventually, Jay and Michael convinced me to get on board while finishing my PhD, rather than doing things sequentially.

Acquia, my Drupal startup

Soon thereafter, Acquia received a Series A term sheet from North Bridge, with Michael Skok leading the investment. We also selected Sigma Partners and Tim O'Reilly's OATV from all of the interested funds as co-investors with North Bridge; Tim had become both a friend and an advisor to me.

In many ways we were an unusual startup. Acquia itself didn't have a product to sell when we received our Series A funding. We knew our product would likely be support for Drupal, and evolve into an Acquia-equivalent of the Red Hat Network. However, neither of those things existed, and we were raising money purely on a PowerPoint deck. North Bridge, Sigma and OATV mostly invested in Jay and me, and the belief that Drupal could become a billion dollar company that would disrupt the web content management market. I'm incredibly thankful to Jay, North Bridge, Sigma and OATV for making a huge bet on me.

Receiving our Series A funding was an incredible vote of confidence in Drupal, but it was also a milestone with lots of mixed emotions. We had raised $7 million, which is not a trivial amount. While I was excited, it was also a big step into the unknown. I was convinced that Acquia would be good for Drupal and open source, but I also understood that this would have a transformative impact on my life. In the end, I felt comfortable making the jump because I found strong mentors to help translate my vision for Drupal into a business plan; Jay and Michael's tenure as entrepreneurs and business builders complemented my technical strength and enabled me to fine-tune my own business building skills.

In November 2007, we officially announced Acquia to the world. We weren't ready but a reporter had caught wind of our stealth startup, and forced us to unveil Acquia's existence to the Drupal community with only 24 hours notice. We scrambled and worked through the night on a blog post. Reactions were mixed, but generally very supportive. I shared in that first post my hopes that Acquia would accomplish two things: (i) form a company that supported me in providing leadership to the Drupal community and achieving my vision for Drupal and (ii) establish a company that would be to Drupal what Ubuntu or Red Hat were to Linux.

An early version of Acquia.com, with our original logo and tagline. March 2008.

The importance of enduring values

It was at an offsite in late 2007 where we determined our corporate values. I'm proud to say that we've held true to those values that were scribbled onto our whiteboard 10 years ago. The leading tenet of our mission was to build a company that would "empower everyone to rapidly assemble killer websites".

Acquia vision

In January 2008, we had six people on staff: Gábor Hojtsy (Principal Acquia engineer, Drupal 6 branch maintainer), Kieran Lal (Acquia product manager, key Drupal contributor), Barry Jaspan (Principal Acquia engineer, Drupal core developer) and Jeff Whatcott (Vice President of Marketing). Because I was still living in Belgium at the time, many of our meetings took place screen-to-screen:

Typical work day

Opening our doors for business

We spent a majority of the first year building our first products. Finally, in September of 2008, we officially opened our doors for business. We publicly announced commercial availability of the Acquia Drupal distribution and the Acquia Network. The Acquia Network would offer subscription-based access to commercial support for all of the modules in Acquia Drupal, our free distribution of Drupal. This first product launch closely mirrored the Red Hat business model by prioritizing enterprise support.

We quickly learned that in order to truly embrace Drupal, customers would need support for far more than just Acquia Drupal. In the first week of January 2009, we relaunched our support offering and announced that we would support all things related to Drupal 6, including all modules and themes available on drupal.org as well as custom code.

This was our first major turning point; supporting "everything Drupal" was a big shift at the time. Selling support for Acquia Drupal exclusively was not appealing to customers, however, we were unsure that we could financially sustain support for every Drupal module. As a startup, you have to be open to modifying and revising your plans, and to failing fast. It was a scary transition, but we knew it was the right thing to do.

Building a new business model for open source

Exiting 2008, we had launched Acquia Drupal, the Acquia Network, and had committed to supporting all things Drupal. While we had generated a respectable pipeline for Acquia Network subscriptions, we were not addressing Drupal's biggest adoption challenges: usability and scalability.

In October of 2008, our team gathered for a strategic offsite. Tom Erickson, who was on our board of directors, facilitated the offsite. Red Hat's operational model, which primarily offered support, had laid the foundation for how companies could monetize open source, but we were convinced that the emergence of the cloud gave us a bigger opportunity and helped us address Drupal's adoption challenges. Coming out of that seminal offsite we formalized the ambitious decision to build "Acquia Gardens" and "Acquia Fields". Here is why these two products were so important:

Solving for scalability: In 2008, scaling Drupal was a challenge for many organizations. Drupal scaled well, but the infrastructure that companies required to make Drupal scale was expensive and hard to find. We determined that the best way to help enterprise companies scale was by shifting the paradigm for web hosting from traditional rack models to the then emerging promise of the Cloud.

Solving for usability: In 2008, Wordpress and Ning made it really easy for people to start blogging or to set up a social network. At the time, Drupal didn't encourage this same level of adoption for non-technical audiences. Acquia Gardens was created to offer an easy on-ramp for people to experience the power of Drupal, without worrying about installation, hosting, and upgrading. It was one of the first times we developed an operational model that would offer "Drupal-as-a-service".

Acquia roadmap

Fast forward to today, and Acquia Fields was renamed Acquia Hosting and later Acquia Cloud. Acquia Gardens became Drupal Gardens and later evolved into Acquia Cloud Site Factory. In 2008, this product roadmap to move Drupal into the cloud was a bold move. Today, the Cloud is the starting point for any modern digital architecture. By adopting the Cloud into our product offering, I believe Acquia helped establish a new business model to commercialize open source. Today, I can't think of many open source companies that don't have a cloud offering.

Tom Erickson takes a chance on Acquia

Tom joined Acquia as an advisor and a member of our Board of Directors when Acquia was founded. Since the first time I met Tom, I always wanted him to be an integral part of Acquia. It took some convincing, but Tom eventually agreed to join us full time as our CEO in 2009. Jay Batson, Acquia's founding CEO, continued on as the Vice President at Acquia responsible for incubating new products and partnerships.

Moving from Europe to the United States

In 2010, after spending my entire life in Antwerp, I decided to move to Boston. The move would allow me to be closer to the team. A majority of the company was in Massachusetts, and at the pace we were growing, it was getting harder to help execute our vision all the way from Belgium. I was also hoping to cut down on travel time; in 2009 I flew 100,000 miles in just one year (little did I know that come 2016, I'd be flying 250,000 miles!).

This is a challenge that many entrepreneurs face when they commit to starting their own company. Initially, I was only planning on staying on the East Coast for two years. Moving 3,500 miles away from your home town, most of your relatives, and many of your best friends is not an easy choice. However, it was important to increase our chances of success, and relocating to Boston felt essential. My experience of moving to the US had a big impact on my life.

Building the universal platform for the world's greatest digital experiences

Entering 2010, I remember feeling that Acquia was really 3 startups in one; our support business (Acquia Network, which was very similar to Red Hat's business model), our managed cloud hosting business (Acquia Cloud) and Drupal Gardens (a WordPress.com based on Drupal). Welcoming Tom as our CEO would allow us to best execute on this offering, and moving to Boston enabled me to partner with Tom directly. It was during this transformational time that I think we truly transitioned out of our "founding period" and began to emulate the company I know today.

The decisions we made early in the company's life have proven to be correct. The world has embraced open source and cloud without reservation, and our long-term commitment to this disruptive combination has put us at the right place at the right time. Acquia has grown into a company with over 800 employees around the world; in total, we have 14 offices around the globe, including our headquarters in Boston. We also support an incredible roster of customers, including 16 of the Fortune 100 companies. Our work continues to be endorsed by industry analysts, as we have emerged as a true leader in our market. Over the past ten years I've had the privilege of watching Acquia grow from a small startup to a company that has crossed the chasm.

With a decade behind us, and many lessons learned, we are on the cusp of yet another big shift that is as important as the decision we made to launch Acquia Field and Gardens in 2008. In 2016, I led the project to update Acquia's mission to "build the universal platform for the world's greatest digital experiences". This means expanding our focus, and becoming the leader in building digital customer experiences. Just like I openly shared our roadmap and strategy in 2009, I plan to share our next 10 year plan in the near future. It's time for Acquia to lay down the ambitious foundation that will enable us to be at the forefront of innovation and digital experience in 2027.

A big thank you

Of course, none of these results and milestones would be possible without the hard work of the Acquia team, our customers, partners, the Drupal community, and our many friends. Thank you for all your hard work. After 10 years, I continue to love the work I do at Acquia each day — and that is because of you.

June 24, 2017

The second edition of BSides Athens took place this Saturday. I already attended the first edition (my wrap-up is here) and I was happy to be accepted as a speaker for the second time! This edition moved to a new location which was great: good wireless, air conditioning and food. The day was based on three tracks: the first two for regular talks and the third one for the CTF and workshops. The “boss”, Grigorios Fragkos, introduced the 2nd edition. This one gave more attention to a charity program called “the smile of the child” which helps Greek kids to remain in touch with new technologies. A specific project is called “ODYSSEAS” and is based on a truck that travels across Greece to educate kids about technologies like mobile phones, social networks, … BSides Athens donated to this project. A very nice initiative, presented by Stefanos Alevizos, who received a slot of a few minutes to describe the program (content in Greek only).


The keynote was assigned to Dave Lewis who presented “The Unbearable Lightness of Failure”. The main point made by Dave is that we fail but… we learn from our mistakes! In other words, “failure is an acceptable teaching tool”. The keynote was built on many examples, like signs: we receive signs everywhere and we must understand how to interpret them. Or the famous Friedrich Nietzsche quote: “That which does not kill us makes us stronger”. We are facing failures all the time. The latest good example is the WannaCry story, which should never have happened, but… you know the story! Another important message is that we don’t have to be afraid to fail. We also have to share as much as possible, not only good stories but also bad stories. Sharing is key! Participate in blogs, social networks, podcasts. Break out of your silo! Dave is a renowned speaker and delivered a really good keynote!

Then talks were split across the two main rooms. For the first one, I decided to attend Thanassis Diogos’s presentation about “Operation Grand Mars”. In January 2017, Trustwave published an article which described this attack. Thanassis came back to this story with more details. After a quick recap of what incident management is, he reviewed all the facts related to the operation and gave some tips to improve the detection of abnormal activities on your network. It started with an alert generated by a workstation and, three days later, the same message came from a domain controller. Definitely not good! The entry point was infected via a malicious Word document / JavaScript. Then a payload was downloaded from Google Docs which is, for most of our organizations, a trustworthy service. Then he explained how persistence was achieved (via autorun, scheduled tasks) and also the lateral movements. The pass-the-hash attack was used. Another tip from Thanassis: if you see local admin accounts used for network logon, this is definitely suspicious! A good review of the attack with some good tips for blue teams.

My next choice was to move to the second track to follow Konstantinos Kosmidis’s talk about machine learning (a hot topic today in many conferences!). I’m not a big fan of these technologies but I was interested in the abstract. The talk was a classic one: after an introduction to machine learning (which we already use every day with technologies like Google face recognition, self-driving cars or voice recognition), why not apply this technique to malware detection? The goal is to detect, classify but, more importantly, to improve the algorithm! After reviewing some pros & cons, Konstantinos explained the technique he used in his research to convert malware samples into images. But, more interestingly, he explained a technique based on steganography to attack this algorithm. The speaker was a little bit stressed but the idea looks interesting. If you’re interested, have a look at his GitHub repository.

Back to the first track to follow Professor Andrew Blyth with “The Role of Professionalism and Standards in Penetration Testing”. The penetration testing landscape has changed considerably in the last years. We switched from script kiddies searching for juicy vulnerabilities to professional services. The problem is that today some pentest projects are required not to detect security issues and improve, but just for… compliance requirements. You know the “checkbox” syndrome. Also, the business evolves and is requesting more assurance. The coming GDPR European regulation will increase the demand for penetration tests. But a real pentest is not a Nessus scan with a new logo, as Andrew explained! We need professionalism. In the second part of the talk, Andrew reviewed some standards that involve pentests: iCAST, CBEST, PCI, OWASP, OSSTMM.

After a nice lunch with Greek food, back to the talks with the one by Andreas Ntakas and Emmanouil Gavriil about “Detecting and Deceiving the Unknown with Illicium”. They are working for one of the sponsors and presented the tool developed by their company: Illicium. After the introduction, my feeling was that it’s a new honeypot with extended features. There is interesting stuff in it but, IMHO, it was a commercial presentation. I’d have expected a demo. Also, the tool looks nice but is dedicated to organizations that have already reached a mature security level. Indeed, before defeating the attacker, the first step is to properly implement basic controls like… patching! Something some organizations still don’t do today!

The next presentation was “I Thought I Saw a |-|4><0.-” by Thomas V. Fisher.  Many interesting tips were provided by Thomas like:

  • Understand and define “normal” activities on your network to better detect what is “abnormal”.
  • Log everything!
  • Know your business
  • Keep in mind that the classic cyber kill-chain is not always followed by attackers (they don’t follow rules)
  • The danger is to try to detect malicious stuff based on… assumptions!

The model presented by Thomas was based on 4 A’s: Assess, Analyze, Articulate and Adapt! A very nice talk with plenty of tips!

The next slot was assigned to Ioannis Stais who presented his framework called LightBulb. The idea is to build a framework to help in bypassing common WAFs (web application firewalls). Ioannis first explained how common WAFs work and why they can be bypassed. Instead of testing all possible combinations (brute force), LightBulb relies on the following process:

  • Formalize the knowledge of code injection attack variations.
  • Expand the knowledge
  • Cross check for vulnerabilities

Note that LightBulb is also available as a Burp Suite extension! The code is available here.

Then, Anna Stylianou presented “Car hacking – a real security threat or a media hype?”. The last events that I attended also had a talk about cars, but they focused more on abusing the remote control to open doors. Today, the focus is on the ECUs (“Engine Control Units”) that are present in modern cars. Today a car might have >100 ECUs and >100 million lines of code, which means a great attack surface! There are many tools available to attack a car via its CAN bus; even the Metasploit framework can be used to pentest cars today! The talk was not dedicated to a specific attack or tool but was more a recap of the risks that car manufacturers are facing today. Indeed, the threats have changed:

  • theft from the car (breaking a window)
  • theft of the car
  • but today: theft of the use of the car (ransomware)

Some infosec gurus also predict that autonomous cars will be used as lethal weapons! As cars can be seen as computers on wheels, the potential attacks are the same: spoofing, tampering, repudiation, disclosure, DoS or privilege escalation issues.

The next slot was assigned to me. I presented “Unity Makes Strength” and explained how to improve the interconnections between our security tools/applications. The last talk was given by Theo Papadopoulos: A “Shortcut” to Red Teaming. He explained how .LNK files can be a nice way to compromise your victim’s computer. I like the “love equation”: Word + PowerShell = Love. Step by step, Theo explained how to build a malicious document with a link file, how to avoid mistakes and how to increase the chances of getting the victim infected. I liked the persistence method based on assigning a popular hot-key (like CTRL-V) to a shortcut on the desktop. Windows will trigger the malicious script attached to the shortcut and then… execute it (in this case, paste the clipboard content). Evil!

The day ended with the announcement of the CTF winners and a lot of information about the next edition of BSides Athens. They already have plenty of ideas! It’s now time for some off-days across Greece with the family…

[The post BSides Athens 2017 Wrap-Up has been first published on /dev/random]

June 22, 2017

I published the following diary on isc.sans.org: “Obfuscating without XOR“.

Malicious files are generated and spread over the wild Internet daily (read: “hourly”). The goal of the attackers is to use files that are:

  • not known to signature-based solutions
  • not easy to read for the human eye

That’s why many obfuscation techniques exist to lure automated tools and security analysts… [Read more]

[The post [SANS ISC] Obfuscating without XOR has been first published on /dev/random]

June 21, 2017

[Updated 23/06 to reflect newer versions 2.1.2 and 2.2.1]

Heads-up: Autoptimize 2.2 has just been released with a slew of new features (see changelog) and an important security-fix. Do upgrade as soon as possible.

If you prefer not to upgrade to 2.2 (because you prefer the stability of 2.1.0), you can instead download 2.1.2, which is identical to 2.1.0 except that the security fix has been backported.

I’ll follow up on the new features and on the security issue in more detail later today/tomorrow.

June 20, 2017

The placing of your state is the only really important thing in architecture.

The post MariaDB MaxScale 2.1 defaulting to IPv6 appeared first on ma.ttias.be.

This little upgrade caught me by surprise. In a MaxScale 2.0 to 2.1 upgrade, MaxScale changes the default bind address from IPv4 to IPv6. It's mentioned in the release notes as this;

MaxScale 2.1.2 added support for IPv6 addresses. The default interface that listeners bind to was changed from the IPv4 address 0.0.0.0 to the IPv6 address ::. To bind to the old IPv4 address, add address=0.0.0.0 to the listener definition.

Upgrading MariaDB MaxScale from 2.0 to 2.1

The result is pretty significant though, because authentication in MySQL is often host or IP based, with permissions being granted like this.

$ SET PASSWORD FOR 'xxx'@'10.0.0.1' = PASSWORD('your_password');

Notice the explicit use of IP address there.

Now, after a MariaDB MaxScale 2.1 upgrade, it'll default to an IPv6 address for authentication, which gives you the following error message;

$ mysql -h127.0.0.1 -P 3306 -uxxx -pyour_password
ERROR 1045 (28000): Access denied for user 'xxx'@'::ffff:127.0.0.1' (using password: YES)

Notice how 127.0.0.1 turned into ::ffff:127.0.0.1? That's an IPv4 address being encapsulated in an IPv6 address. And it'll cause MySQL authentication to potentially fail, depending on how you assigned your users & permissions.

To fix, you can either;

  • Downgrade MaxScale from 2.1 back to 2.0 (add the 2.0.6 repositories for your OS and downgrade MaxScale)
  • Add the address=0.0.0.0 config to your listener configuration in /etc/maxscale.cnf

In the case of your MaxScale listeners, this should be enough to resolve the problem.

$ cat maxscale.cnf
[listener_1]
type=listener
service=...
protocol=...
address=0.0.0.0
port=...

Hope this helps!

The post MariaDB MaxScale 2.1 defaulting to IPv6 appeared first on ma.ttias.be.

June 18, 2017

In February we spent a weekend in the Arctic Circle hoping to see the northern lights. I've been so busy, I only now got around to writing about it.

We decided to travel to Nellim for an action-packed weekend with outdoor adventure, wood fires, reindeer and no WiFi. Nellim is a small Finnish village, close to the Russian border and in the middle of nowhere. This place is a true winter wonderland with untouched and natural forests. On our way to the property we saw a wild reindeer eating on the side of the road. It was all very magical.

Beautiful log cabin bed

The trip was my gift to Vanessa for her 40th birthday! I reserved a private, small log cabin instead of the main lodge. The log cabin itself was really nice; even the bed was made of logs with two bear heads carved into it. Vanessa called them Charcoal and Smokey. To stay warm we made fires and enjoyed our sauna.

Dog sledding
Dog sledding
Dog sledding

One day we went dog sledding. As with all animals it seems, Vanessa quickly named them all; Marshmallow, Brownie, Snickers, Midnight, Blondie and Foxy. The dogs were so excited to run! After 3 hours of dog sledding in -30 C (-22 F) weather we stopped to warm up and eat; we made salmon soup in a small make-shift shelter that was similar to a tepee. The tepee had a small opening at the top and there was no heat or electricity.

The salmon soup was made over a fire, and we were skeptical at first how this would taste. The soup turned out to be delicious and even reminded us of the clam chowder that we have come to enjoy in Boston. We've since remade this soup at home and the boys also enjoy it. Not that this blog will turn into a recipe blog, but I plan to publish the recipe with photos at some point.

Tepee by night
Campfire in the snow

At night we would go out on "aurora hunts". The first night by reindeer sled, the second night using snowshoes, and the third night by snowmobile. To stay warm, we built fires either in tepees or in the snow and drank warm berry juice.

Reindeer sledding
Reindeer sledding

While the untouched land is beautiful, they definitely try to live off the land. The Finns have an abundance of berries, mushrooms, reindeer and fish. We gladly admit we enjoyed our reindeer sled rides, as well as eating reindeer. We had fresh mushroom soup made out of hand-picked mushrooms. And every evening there was an abundance of fresh fish and reindeer offered for dinner. We also discovered a new gin, Napue, made from cranberries and birch leaves.

In the end, we didn't see the Northern Lights. We had a great trip, and seeing them would have been the icing on the cake. It just means that we'll have to come back another time.

June 17, 2017

Let's look at how easy it is to implement a simple cookie-based language switch in the Rocket web framework for the Rust programming language. Defining the language type:

In this case there will be support for Dutch and English. PartialEq is derived to be able to compare Lang items with ==.

The Default trait is implemented to define the default language:

The FromStr trait is implemented to allow creating a Lang item from a string.

The Into<&'static str> trait is added to allow the conversion in the other direction.

Finally the FromRequest trait is implemented to allow extracting the "lang" cookie from the request.

It always succeeds and falls back to the default when no cookie or an unknown language is found. How to use the Lang constraint on a request:

And the language switch page:

And as the cherry on the cake, let's have the language switch page automatically redirect to the referrer. First let's implement a FromRequest trait for our own Referer type:

When it finds a Referer header it uses the content, else the request is forwarded to the next handler. This means that if the request has no Referer header it is not handled, and a 404 will be returned.
Finally let's update the language switch request handler:

Pretty elegant. A recap with all code combined and adding the missing glue:

June 16, 2017

Why politicians' financial abuses are inevitable in a representative democracy

Every time someone decides to dig into the spending of the political world, scandals erupt. The easy conclusion is that politicians are all rotten and that we should vote for those who are not. Or who promise not to be.

Yet for as long as representative democracy has existed, that has never worked. What if it were the system itself that makes sound management of public money impossible?

According to Milton Friedman, there are only four ways to spend money: spending your own money on yourself, your own money on others, other people's money on yourself, and other people's money on others.

Your own money on yourself

When you spend money you have earned, you always optimise the return to get as much as possible while spending as little as possible. You think twice before making large purchases, you compare offers, you plan, you work out the amortisation, even if only intuitively.

If you spend money needlessly, you will resent it; you will feel either guilty of negligence or cheated by others.

Your own money on others

While the intention behind spending on others is always good, you generally won't pay that much attention to the value the others will actually receive. You usually set whatever budget seems socially acceptable so as not to come across as a cheapskate, and you spend that budget fairly arbitrarily.

There is a good chance that your gift won't be appreciated as much as it cost you, that it won't meet an important or immediate need, or even that it will go straight into the bin.

Economically, gifts and surprises are rarely a good idea. Still, since you generally try not to exceed a given budget, the economic damage is small. And sometimes a gift brings enormous pleasure. Idea: offer a ForeverGift!

Other people's money on yourself

When you can spend without counting, for instance when your employer covers all your travel expenses or you have a fuel card, economic optimisation becomes catastrophic.

In fact, this scenario usually amounts to anti-optimisation. You will, without remorse, pick the flight that lets you sleep an hour longer even if it costs several hundred euros more than the early-morning one. In extreme cases, you will try to spend as much as possible, even needlessly, to feel like you are getting more than your nominal salary.

This anti-optimisation can be offset by several factors: a sense of moral duty towards the company, especially in small organisations, or oversight of expense reports, or even a cap.

The cap can, however, have the opposite effect. If an employee has a fuel card with a limit of, say, 2000 litres per year, he will tend to drive more, or to take the car on holiday, in order to use up the 2000 litres he feels entitled to.

This is why this economic situation is very rare and should be avoided at all costs.

Other people's money on others

By definition, political institutions fall into this last category. Politicians sit on top of an enormous pot of money collected from citizens in various ways. And they must decide how to spend it. Or even how to grow the pot further, for instance with new taxes.

As I explained in a previous post, making money is the default objective of every human being in our society.

Politicians will therefore quite naturally try to benefit, by every possible means, from the pot of money they are responsible for. For the most honest among them it will happen unconsciously, but it will happen all the same, indirectly. The more discreet ones may, for example, award public contracts without receiving any immediate benefit, while building a network of relationships that later lands them seats on lucrative boards of directors. For the most cynical, genuine systems will be put in place, what I call evaporation loops, which transfer public money, most often legally, into private pockets.

Since all of this is completely opaque and drowned in bureaucracy, it is generally impossible for citizens to draw the link between the euro they paid in taxes and the euro scandalously paid out to certain politicians. Especially since the notion of "scandalous" is subjective. At what point does a salary become scandalous? From how many board members does an inter-municipal company become a machine for paying friends and evaporating public money? At what degree of acquaintance may a politician no longer hire family and friends, or let them benefit from a public contract?

Politicians are our employees, to whom we hand an unlimited credit card, with no oversight and with the power to issue new cards for their friends.

What can we do?

So let's not rush to vote for those who promise to be less rotten than the others. If they aren't rotten yet, it shouldn't take long. Power corrupts. Spending time with the rich and with other politicians who all do the same thing does not help keep a cool head. These behaviours become the norm, and the limits set by law, just like the fuel card mentioned above, are no longer limits but entitlements they feel they legitimately have a right to. When a scandal breaks, they won't even understand what they are being blamed for and will hide behind "It's legal". What we think of as a corruption of the system is in fact merely its most logical mechanical outcome!

The first step towards a solution is to demand total transparency of public spending. Citizens should be able to follow the financial flow of every public cent up to the moment it lands in a private pocket. The public money paid to each office holder should be public. Going into politics would be done in the knowledge that part of one's private life becomes transparent and that all remuneration will from now on be public, without any concession.

That requires a great deal of simplification effort but, with a little will, it is entirely possible today. Secret budgets should be duly budgeted and justified so that the public can at least follow how they evolve over time.

Curiously, this is not on any politician's programme…

 

Post written in collaboration with Mathieu Jamar. Photo by feedee P.

Enjoyed this read? Support the author on Tipeee, Patreon, Paypal or Liberapay. Even a symbolic donation makes all the difference! Then let's meet up on Facebook, Medium, Twitter or Mastodon.

This text is published under the CC-By BE licence.

June 14, 2017

I published the following diary on isc.sans.org: “Systemd Could Fallback to Google DNS?“.

Google is everywhere and provides free services to everyone. Amongst the huge list of services publicly available, there are the Google DNS, well known as 8.8.8.8, 8.8.4.4 (IPv4) and 2001:4860:4860::8888, 2001:4860:4860::8844 (IPv6)… [Read more]

 

[The post [SANS ISC] Systemd Could Fallback to Google DNS? has been first published on /dev/random]

June 13, 2017

Recently, I got the chance to assist a team of frontend and back-end developers in doing a bit of open heart surgery. The scope of the project is as follows: migrate a BBOM monolith towards a new BSS system but keep the frontend part, and convert another web frontend and one mobile app to the same BSS system. To facilitate this, and because it’s common sense, the decision was made to create our own REST API in between. But we were faced with an issue: time is limited and we wanted to start creating everything at once. Without a working API implementation, but with the need for a defined interface, we decided to look for a tool to assist us in this process.

Gotta have swag

We began to create our API using API Blueprint in Apiary, but that soon turned out to be quite annoying for a few reasons. First, everything exists within the same file. This implies the file grows quite large once you start adding endpoints, examples and responses. Secondly, there’s no nice way to start working on this as a team, unless you get a Standard Plan. We could debate about whether or not migrating to another plan would have solved our problem, but let’s be honest, I’d rather invest in the team than spend money on unnecessary tooling.

I began a venture to migrate this to another tool, and eventually ended up playing with Swagger. First off, Swagger also supports yaml, which is a great way to describe these things. Secondly, the ecosystem is a bit more mature which allows us to do things API Blueprint does not provide, such as split the specification into smaller parts. I found this great blog post by Mohsen Azimi which explains exactly this, and following his example, I added a compile.js file that collects the .yaml references and combines those into one big swagger.json file.

The advantages are interesting as we can now split the Swagger specification into folders for context and work on different parts without creating needless overhead all the time, like merge conflicts for example. To make sure we know the changes comply with the Swagger definition, I added a check after compiling swagger.json using swagger-parser to validate the output. Combined with a docker container to do the compilation and validation, we’ve got ourselves a nice way to proceed with certainty. Adding this to a CI is peanuts, as we can use the same docker image to run all the necessary checks. The project is currently being built using Travis; you can find a sample .travis.yml file in the repository.

The setup of the project is as follows. The explanation of the components is listed inline, be aware I only listed the parts which need an explanation. Refer to the repository for a complete overview.

.
├── definitions // the data model used by the API
|   ├── model.yaml // model definition
|   └── index.yaml // list of model definitions
├── examples // sample json output
|   ├── sample.json
|   └── second_sample.json
├── parameters
|   └── index.yaml // header and query string parameters
├── paths
|   ├── path.yaml // path definition
|   └── index.yaml // list of path definitions
├── swagger-ui // folder containing custom Swagger-UI
├── gulpfile.js // build and development logic
├── makefile // quick access to commands
└── swagger.yaml // swagger spec base file

While this sample contains model, path and parameter definitions in the root of each sub folder, nothing stops you from creating more folders to structure the definitions inside. As the compile function in gulpfile.js (previously compile.js) takes care of stitching the YAML files into one JSON spec, it can be as flexible as you want. The makefile contains a few handy commands so everyone can use the project without the need for specific setup or docker knowledge.

To change the spec you can use any editor of choice, I have Visual Studio Code setup together with the Swagger Viewer plugin. This way I can work on the spec and have it preview in a tab next to me. In case I need to validate the changes, I can also use the pre-configured validate task to quickly get feedback in my editor console. The tasks are added to the project to get you started using Visual Studio Code. If you do, make sure to also add a key binding to spawn the tasks. Open keybindings.json and enter the following (change the key combo if needed).

    {
        "key": "ctrl+shift+r",
        "command": "workbench.action.tasks.runTask"
    }

On top of that, one of my colleagues, Joeri Hendrickx, extended the setup by creating a watch function inside the gulpfile.js file that automatically reloads changes in Swagger-UI while you adjust files. This way, there’s no need for a specific setup and you can use any editor you like. As an extra bonus, it will also display the errors on top of the page.

To run the swagger specification, use the make swagger command or the swagger task in Visual Studio Code. By default, Swagger UI will be available at localhost:3000, unless you specify another port using the SWAGGER_PORT environment variable. To enable the watch function, make use of the make watch command or watch task in Visual Studio Code.

Are you mocking me?

This leaves us with one open item. How do we create a mock service using our Swagger specification? As it turns out, there’s a very useful tool out there called Prism that does just that. Part of the Stoplight tool set, their CLI tool allows you to create a mock server by simply using a Swagger spec. This provides you with all you need to design, test and move fast.

The docker image has been extended to also pull in the latest version of Prism and add it to our path. You can run the mock server through the make mock command or the mock task in Visual Studio Code. By default, the mock server will run on localhost:8010, unless you specify another port using the PRISM_PORT environment variable.

Starting the mock server prints the available endpoints. You now have the ability to start developing and use the mocked API, or validate your work via Postman, curl or any http request tool of choice. Using this repository, the following curl command will output a mocked result.

curl -X GET http://localhost:8010/v1/ping -H 'authorization: Basic trololol'

If for any reason you need to debug inside the container, you can use the make interactive command. This will open a shell inside the container for you to mess around in. I never needed it until now, but it’s there. Just in case.

The setup we have at work currently uses Jenkins to validate the spec, which is deployed to Heroku every time the build on master succeeds (which is, well, every time). This way we have a single source of truth when it comes to our Swagger specification and accompanying mock service for developers or partners to play with. We can prototype fast while collecting feedback, or change current implementations fast while knowing the impact. Our production API is tested against the Swagger specification, which is integrated in that repository as a submodule to decouple design and implementation. To get a more behavioral representation of a real API, we created middleware in Python which can keep track of the data you send and respond accordingly for certain processes. Changes to this part are also validated against the specification in order to reduce the chance of creating errors.

Feel free to mess around, ask questions or even create issues and pull requests on GitHub and let me know what you think. And stay tuned for Part II which covers technical documentation!

Source code

June 12, 2017

CoreOS Fest 2017 happened earlier this month in San Francisco. I had the joy of attending this conference. With a vendor-organized conference there’s always the risk of it being mostly a thinly-veiled marketing exercise, but this didn’t prove to be the case: there was a good community and open-source vibe to it, probably because CoreOS itself is for the most part an open-source company.

Not bad for a view

Also fun was encountering a few old-time GNOME developers such as Matthew Garrett (now at Google) and Chris Kühl (who now runs kinvolk). It’s remarkable how good of a talent incubator the GNOME project is. Look at any reasonably successful project and chances are high you’ll find some (ex-)GNOME people.

Main room

I also had the pleasure of presenting the experiences and lessons learned related to introducing Kubernetes at Ticketmatic. Annotated slides and a video of the talk can be found here.

Making your company cloud‑native: the Ticketmatic story


Comments | More on rocketeer.be | @rubenv on Twitter

June 10, 2017

June 09, 2017

Here is my wrap-up for the last day. Fortunately, after yesterday’s social event, the organisers had the good idea to start later… The first set of talks was dedicated to the presentation of tools.

The first slot was assigned to Florian Maury and Sébastien Mainand: “Réutilisez vos scripts d’audit avec PacketWeaver” (“Reuse your audit scripts with PacketWeaver”). When you perform audits, the same tasks are performed again and again. And, as we are lazy people, Florian & Sébastien’s idea was to automate such tasks. They can be getting a PCAP, using another tool like arpspoof, modifying packets using Scapy, etc… The chain can quickly become complex. By automating, it’s also easier to deploy a proof-of-concept or a demonstration. The tool uses a Metasploit-like interface. You select your modules, you execute them, but you can also chain them: the output of script1 is used as input of script2. The available modules are classified per ISO layer:

  • app_l7
  • datalink_l2
  • network_l3
  • phy_l1
  • transport_l4

The tool is available here.

The second slot was about “cpu_rec.py”. This tool has been developed to help in the recognition of architectures in binary files. A binary file contains instructions to be executed by a CPU (like ELF or PE files). But not only files: it is also interesting to recognise firmwares or memory dumps. At the moment, cpu_rec.py recognises 72 types of architectures. The tool is available here.

And we continue with another tool using machine learning: “Le Machine Learning confronté aux contraintes opérationnelles des systèmes de détection” (“Machine learning confronted with the operational constraints of detection systems”), presented by Anaël Bonneton and Antoine Husson. The purpose is to detect intrusions based on machine learning. The classic approach is to work with systems based on signatures (like IDS). Those rules are developed by experts but can quickly become obsolete for detecting newer attacks. Can machine learning help? Anaël and Antoine explained the tool that they developed (“SecuML”) but also the process associated with it. Indeed, the tool must be used in a first phase to learn from samples. The principle is to use a “classifier” that takes files as input (PDF, PE, …) and returns the conclusion as output (malicious or not malicious). The tool is based on the scikit-learn Python library and is also available here.

Then, Eric Leblond came on stage to talk about… Suricata of course! His talk title was “À la recherche du méchant perdu” (“In search of the lost bad guy”). Suricata is a well-known IDS solution that doesn’t need to be introduced. This time, Eric explained a new feature that has been introduced in Suricata 4.0: a new “target” keyword is available in the JSON output. The idea arose during a blue team / red team exercise. The challenge of the blue team was to detect attackers and it was discovered that it’s not so easy. With classic rules, the source of the attack is usually the source of the flow, but that’s not always the case. A good example is a web server returning a 4xx or 5xx error: in this case, the attacker is the destination. The goal of the new keyword is to produce better graphs to visualise attacks. This patch must still be approved and merged into the code. It will also require updating the rules.

The next talk was the only one in English: “Deploying TLS 1.3: the great, the good and the bad: Improving the encrypted web, one round-trip at a time” by Filippo Valsorda & Nick Sullivan. After a brief review of the TLS 1.2 protocol, the new version was reviewed. You must know that, while TLS 1.0, 1.1 and 1.2 were very close to each other, TLS 1.3 is a big jump! Many changes in the implementation were reviewed. If you’re interested, here is a link to the specifications of the new protocol.

After a talk about crypto, we switched immediately to another domain which also uses a lot of abbreviations: telecommunications. The presentation was given by Olivier Le Moal, Patrick Ventuzelo and Thomas Coudray and was called “Subscribers remote geolocation and tracking using 4G VoLTE enabled Android phone”. VoLTE means “Voice over LTE” and is based on VoIP protocols like SIP. This protocol is already implemented by many operators around the world and, if your mobile phone is compatible, allows better quality calls. But the speakers also found some nasty stuff. They explained how VoLTE works, but also how it can leak the position (geolocation) of your contact just by sending a SIP “INVITE” request.
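
To make the leak concrete, here is a minimal sketch extracting a cell identifier from a SIP message. The P-Access-Network-Info header and the sample value are my assumptions based on the 3GPP SIP extensions; the exact header abused in the talk may differ.

# Sketch: pull a cell identity out of SIP signalling (header name is an assumption).

import re

SAMPLE_SIP_MESSAGE = """SIP/2.0 200 OK
Via: SIP/2.0/UDP example.invalid;branch=z9hG4bK776asdhds
P-Access-Network-Info: 3GPP-E-UTRAN-FDD; utran-cell-id-3gpp=208011234567890
Content-Length: 0
"""

def extract_cell_id(sip_message: str):
    """Return the utran-cell-id-3gpp value if the header is present."""
    match = re.search(r"utran-cell-id-3gpp=([0-9A-Fa-f]+)", sip_message)
    return match.group(1) if match else None

print(extract_cell_id(SAMPLE_SIP_MESSAGE))  # -> '208011234567890'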

To complete the first half-day, a funny presentation was made about drones. For many companies, drones are seen as evil stuff and must be prohibited from flying over certain places. The goal of the presented tool is simply to prevent drones from flying over a specific place and (maybe) spying. There are already solutions for this: DroneWatch, eagles, DroneDefender or SkyJack. What’s new with DroneJack? It focuses on drones communicating via Wi-Fi (like the Parrot models). Basically, such a drone is a flying access point. It is possible to detect them based on their SSIDs and MAC addresses using a simple airodump-ng (a minimal Scapy sketch of this idea follows below). Based on the signal strength, it is also possible to estimate how far away the drone is. As the technology is based on Wi-Fi, there is nothing brand new. If you are interested, the research is available here.
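
Not DroneJack itself, just a sketch of the detection idea with Scapy: sniff 802.11 beacon frames and flag SSIDs or MAC prefixes that look like Wi-Fi drones. The SSID patterns and vendor prefix are illustrative placeholders, and the interface is assumed to already be in monitor mode.

# Sketch: flag beacon frames whose SSID or BSSID prefix looks like a Wi-Fi drone.

from scapy.all import sniff, Dot11Beacon, Dot11Elt

SUSPICIOUS_SSID_PREFIXES = ("ardrone", "bebop")   # placeholder patterns
SUSPICIOUS_OUIS = ("90:03:b7",)                   # placeholder vendor prefix

def check_beacon(pkt):
    if not pkt.haslayer(Dot11Beacon):
        return
    ssid = pkt[Dot11Elt].info.decode(errors="ignore").lower()
    bssid = (pkt.addr3 or "").lower()
    if ssid.startswith(SUSPICIOUS_SSID_PREFIXES) or bssid.startswith(SUSPICIOUS_OUIS):
        # dBm signal strength, when the driver exposes RadioTap information
        rssi = getattr(pkt, "dBm_AntSignal", None)
        print("possible drone: ssid=%r bssid=%s rssi=%s" % (ssid, bssid, rssi))

sniff(iface="wlan0mon", prn=check_beacon, store=False)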

What do you usually do after lunch? You brush your teeth. Normally, it’s not dangerous, but if your toothbrush is connected, it can be worse! Axelle Apvrille presented her research about a connected toothbrush provided by a health insurance company in the USA. The device is connected to a smartphone using a BTLE connection and exchanges a lot of data. Of course, without any authentication or any other security control. The toothbrush even continues to expose its MAC address via Bluetooth all the time (you have to remove the battery to turn it off; a small scan sketch follows below). Axelle did not attack the device itself but reversed the mobile application and the protocol used to communicate between the phone and the brush. She wrote a small tool to communicate with the brush, but she also wrote an application to simulate a rogue device and communicate with the mobile phone. The next step was of course to analyse the communication between the mobile app and the cloud operated by the health insurance company. She found many vulnerabilities leading to the download of private data (even pictures of kids!). When she reported the vulnerability, her account was simply closed by the company! Big fail! And since the insurance rewards you for brushing your teeth correctly, it’s possible to send fake data to get a nice discount. Excellent presentation from Axelle…
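
As a minimal illustration of why a permanently advertising device is trackable, here is a sketch using the bluepy library (assuming Linux, a BLE adapter and sufficient privileges); the name filter is a placeholder, not the toothbrush’s real identifier.

# Sketch: anyone nearby can collect the address and name of an always-advertising device.

from bluepy.btle import Scanner  # requires root or the proper capabilities

for dev in Scanner().scan(10.0):                    # listen for 10 seconds
    adv = {desc: value for _, desc, value in dev.getScanData()}
    name = adv.get("Complete Local Name", "")
    if "brush" in name.lower():                     # placeholder pattern
        print("tracked device: addr=%s rssi=%s dBm name=%r" % (dev.addr, dev.rssi, name))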

To close the event, the ANSSI came on stage to present a post-incident review of the major security breach that affected the French TV channel TV5 in 2015. As a reminder, the channel was completely compromised, to the point that the broadcast of programs was disrupted for several days. The presentation was excellent for everybody interested in forensic investigation and incident handling. In the first part, the timeline of all the events that led to the full compromise was reviewed. To summarise, the overall security level of TV5 was very poor and nothing fancy was used to attack them: contractor credentials, lack of segmentation, default credentials, an RDP server exposed on the Internet, etc. An interesting fact was the deletion of the firmware on switches and routers, which prevented them from rebooting properly and caused a major DoS. They also deleted VMs. The second part of the presentation was used to explain all the weaknesses and how to improve / fix them. It was an awesome presentation!

My first edition of SSTIC is now over but I hope not the last one!

[The post SSTIC 2017 Wrap-Up Day #3 has been first published on /dev/random]

June 08, 2017

Here is my wrap-up for the second day. From my point of view, the morning sessions were quite hard with a lot of papers based on hardware research.

Anaïs Gantet started with “CrashOS : recherche de vulnérabilités système dans les hyperviseurs” (“CrashOS: looking for system vulnerabilities in hypervisors”). The motivations behind this research are multiple: virtualization is everywhere today, not only on servers but also on workstations. The challenge for the hypervisor (the key component of a virtualization system) is to simulate the exact same hardware platform (same behaviour) for the guest software. It virtualizes access to the memory and devices. But does it do this in the right way? Hypervisors are software, and software has bugs. The approach explained by Anaïs is to build a very specific, lightweight OS that can perform a bunch of tests, like fuzzing, against hypervisors. The name of this OS is, logically, “CrashOS“. It provides an API that is used to script test scenarios. Once booted, tests are executed and results are sent to the display but also to the serial port for debugging purposes. Anaïs demonstrated some tests. Up to today, the following hypervisors have been tested: Ramooflax (a project developed by Airbus Security), VMware and Xen. Some tests that returned errors:

  • On VMware, a buffer overflow and crash of the VM when writing a big buffer to COM1.
  • On Xen, a “FAR JMP” instruction should generate a “general protection” fault, but this is not the case.

CrashOS is available here. A nice presentation to start the day!

The next presentation went deeper and focused again on hardware, more precisely on PCI Express, which we find in many computers. The title was “ProTIP: You Should Know What To Expect From Your Peripherals” by Marion Daubignard and Yves-Alexis Perez. Why could it be interesting to keep an eye on our PCIe extensions? Because they all run some software and usually have direct access to host resources like memory (for performance reasons). What if the firmware of your NIC contained some malicious code and searched for data in memory? They described the PCIe standard, which can be seen as a network with CPUs, memory, a PCI hierarchy (switches) and a root complex. How to analyse all the flows passing over a PCIe network? The challenge is to detect the possible paths and alternatives. The best language to achieve this is Prolog (a very old language that I had not used since my studies, but still alive). The tool is called “ProTIP” for “Prolog Tester for Information Flow in PCIe networks” and is available here. The topic gets interesting when you realise what a PCIe extension could do.

Then, we got a presentation from Chaouki Kasmi, José Lopes Esteves, Mathieu Renard and Ryad Benadjila: “From Academia to Real World: a Practical Guide to Hitag-2 RKE System Analysis“. The talk was dedicated to the Hitag-2 protocol used by the remote keyless entry systems of our modern cars. Research in this domain is not brand new; there was already a paper on it presented at USENIX. The talk really focused on Hitag-2 (crypto) and was difficult for me to follow.

After the lunch break, Clémentine Maurice talked about accessing the host memory from a browser with JavaScript: “De bas en haut : attaques sur la microarchitecture depuis un navigateur web“ (“Bottom up: attacks on the microarchitecture from a web browser”). She started with a deeply detailed review of how DRAM memory works and how read operations make use of a “row buffer” (like a cache). The idea is to be able to detect key presses in the URL bar of Firefox. The amount of work is pretty awesome from an educational point of view, but I have just one question: how would you use this in the real world? If you’re interested, all of Clémentine’s publications are available here.

The next talk was interesting for people working on the blue team side. Binacle is a tool developed to perform full-text search on binaries. Guillaume Jeanne explained why full-text search is important and why classic indexing methods fail on binary files. The goal is not only to index “strings” like IP addresses and FQDNs but also arbitrary byte sequences. After testing several solutions, he found a good one which is not too resource-consuming. The tool and its features were explained, including the YARA integration (and a feature to generate new signatures); a minimal sketch of the indexing idea follows below. To be tested for sure! Binacle is available here.
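
This is not Binacle’s actual design, only a toy sketch of the general idea of indexing byte n-grams so that arbitrary byte sequences, not just printable strings, can be searched across a corpus.

# Sketch: inverted index of byte trigrams for full-text search over binaries.

from collections import defaultdict

class TrigramIndex:
    def __init__(self):
        self.index = defaultdict(set)   # trigram -> set of file ids
        self.files = {}

    def add(self, file_id: str, data: bytes):
        self.files[file_id] = data
        for i in range(len(data) - 2):
            self.index[data[i:i+3]].add(file_id)

    def search(self, pattern: bytes):
        """Candidates share every trigram of the pattern; a real scan confirms the hit."""
        if len(pattern) < 3:
            return [f for f, d in self.files.items() if pattern in d]
        candidates = set(self.files)
        for i in range(len(pattern) - 2):
            candidates &= self.index.get(pattern[i:i+3], set())
        return [f for f in candidates if pattern in self.files[f]]

# idx = TrigramIndex()
# idx.add("sample.exe", open("sample.exe", "rb").read())
# idx.search(bytes.fromhex("8b4c2408"))   # look for a byte sequence across the corpus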

The next tool presented was YaCo (“Rétro-Ingénierie Collaborative”, i.e. collaborative reverse engineering) by Benoît Amiaux, Frédéric Grelot, Jérémy Bouétard, Martin Tourneboeuf and Valerian Comiti. YaCo means “Yet another collaborative tool”. The idea is to add a “multi-user” layer to IDA. By default, each user has a local database used by IDA; the idea is to sync those databases via a GitLab server. The synchronisation is performed via a Python plugin. They made a cool demo. YaCo is available here.

Sibyl was the last tool presented today, by Camille Mougey. Sibyl helps to identify library functions used in malicious code. Based on Miasm2, it identifies functions by their side effects (a toy sketch of the idea follows below). More information is available on the GitHub repository.
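
This is not Sibyl or Miasm2, just a toy sketch of the “identify a function by its side effects” idea: run each candidate on known inputs (plain Python callables stand in for emulated code here) and match the observed behaviour against expected library functions.

# Sketch: recognise a function by testing its behaviour on known input/output pairs.

TEST_CASES = {
    "strlen": [((b"hello\x00world",), 5), ((b"\x00",), 0)],
    "abs":    [((-7,), 7), ((3,), 3)],
}

def identify(candidate):
    """Return the names of library functions whose behaviour the candidate matches."""
    matches = []
    for name, cases in TEST_CASES.items():
        try:
            if all(candidate(*args) == expected for args, expected in cases):
                matches.append(name)
        except Exception:
            pass  # wrong arity or a crash means "not this function"
    return matches

def unknown_function(buf):          # imagine this was lifted from a malware sample
    return buf.index(b"\x00")

print(identify(unknown_function))   # -> ['strlen']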

The next talk was about the Android platform: “Breaking Samsung Galaxy Secure Boot through Download mode” presented by Frédéric Basse. He explained the attacks that can be launched against the bootloader of a Samsung Galaxy smartphone via a bug in its Download mode.

Finally, a non-technical talk was presented by Martin Untersinger: “Oups, votre élection a été piratée… (Ou pas)” (“Oops, your election has been hacked… (Or not)”). Martin is a journalist working for the French newspaper “Le Monde”. He already spoke at SSTIC two years ago and came back to give his view of the “buzz” around the hacking of election processes around the world. Indeed, when elections are organised today, there are often rumours that the democratic process has been altered by state-sponsored hackers. It started in the US and also reached France with the MacronLeaks. A first fact reported by Martin is that information security today goes way beyond the “techies”… All citizens are now aware that bad things may happen. It’s not only a technical issue but also a geopolitical one. Therefore, it is very interesting for journalists. Authorities do not always disclose information about the attribution of an attack because it can be wrong and alter the democratic process of the election. Documents are published, but attribution remains a political decision. It is touchy and may lead to diplomatic issues. Journalists also face challenges:

  • Publish leaked docs or not?
  • Are they real or fake?
  • Ignore the information, or maybe participate in a disinformation campaign?

But it is clear that good communication is a key element.

The day closed with the second rump session, with a lot of submissions (21!). Amongst them, some funny ideas like using machine learning to generate automatic paper submissions for the SSTIC call for papers, an awesome analysis of the leaked LinkedIn passwords, connected cars, etc… Everybody then moved to the city centre to attend the social event, with nice food, drinks and lots of interesting conversations.

Today, a lot of tools were presented. The balance between the two types of presentations is interesting: while pure security research is interesting, it is sometimes very difficult to use in the real context of an information system. The tool presentations, on the other hand, were quick talks with facts and solutions that can easily be implemented.

[The post SSTIC 2017 Wrap-Up Day #2 has been first published on /dev/random]

June 07, 2017

I’m in Rennes, France to attend my very first edition of the SSTIC conference. SSTIC is an event organised in France, by and for French people. The acronym means “Symposium sur la sécurité des technologies de l’information et des communications“ (symposium on the security of information and communication technologies). The event has a good reputation for its content, but is also known for how hard it is to get a ticket: usually, all of them are sold in a few minutes, spread across 3 waves. I was lucky to get one this year. So, here is my wrap-up! This is already the fifteenth edition, with a new venue hosting 600 security people. A live stream is also available and a few hundred people are following the talks remotely.

The first presentation was given by Octave Klaba, the CEO of OVH. OVH is a key player on the Internet with many services; it is known via the BGP AS16276. Octave started with a complete overview of the backbone that he built from scratch a few years ago. Today, it has a capacity of 11 Tbps and handles 2500 BGP sessions. It’s impressive how well this CEO knows his “baby”. The next part of the talk was a deep description of their solution “VAC”, deployed to handle DDoS attacks. For information, OVH handles ~1200 attacks per day! They usually don’t communicate about them, except if some customers are affected (the case of Mirai was given as an example by Octave). They chose the name “VAC” for “Vacuum Cleaner“. The goal is to clean the traffic as soon as possible, before it enters the backbone. An interesting fact about anti-DDoS solutions: it is critical to detect attacks as soon as possible. Why? If your solution detects a DDoS within x seconds, attackers will simply launch attacks shorter than x seconds. Evil! The “VAC” can be seen as a big proxy and is based on multiple components that can filter specific types of protocols/attacks. Interesting: to better handle some DDoS attacks, the OVH teams reversed some gaming protocols to better understand how they work. Octave described in deep detail how the solution has been implemented and is used today… for any customer! This is a free service! It was really crazy to get so many technical details from a… CEO! Respect!

The second talk was “L’administration en silo” (“Siloed administration”) by Aurélien Bordes and focused on some best practices for the administration of Windows services. Aurélien started with a fact: when you ask a company how its infrastructure is organised, they usually speak about users, data, computers, partners but… they don’t mention administrative accounts. From where, and how, are all those resources managed? Basically, there are three categories of assets. They can be classified based on colours or tiers.

  • Red: resources for admins
  • Yellow: core business
  • Green: computers

The most difficult layer to protect is… the yellow one. After some facts about the security of AD infrastructures, Aurélien explained how to improve the Kerberos protocol. The solution is based on FAST, a framework that extends Kerberos. Another interesting tool developed by Aurélien: the Terminal Server Security Auditor. Interesting presentation, but my conclusion is that it increases the complexity of Kerberos, which is already not easy to master.

During the previous talk, Aurélien presented a slide with potential privilege escalation issues in an Active Directory environment. One of them was the WSUS server. It was the topic of the research presented by Romain Coltel and Yves Le Provost. During a pentest engagement, they compromised a network “A” but they also discovered a network “B” completely disconnected from “A”. Completely? Not really: there were WSUS servers communicating with each other. After a quick recap of the WSUS server and its features, they explained how they compromised the second network “B” via the WSUS server. Such a server is based on three major components:

  • A Windows service to sync
  • A web service to talk to clients (configs & package push)
  • A big database

This database is complex and contains all the data related to patches and systems. Attacking a WSUS server is not new. In 2015, there was a presentation at Black Hat which demonstrated how to perform a man-in-the-middle attack against a WSUS server. But today, Romain and Yves used another approach: they wrote a tool to directly inject fake updates into the database. The important step is to use the stored procedures so as not to break the database integrity. Note that the tool has a “social engineering” approach: fake information about the malicious patch can also be injected to entice the admin to approve the installation of the fake patch on the target system(s). To be deployed, the “patch” must be a binary signed by Microsoft. Good news, plenty of signed tools can be used to perform malicious tasks. They used psexec for the demo:

psexec -> cmd.exe -> net user /add

The DB being synced between the different WSUS servers, it was possible to compromise network “B”. The tool they developed to inject data into the WSUS database is called WUSpendu. A good recommendation is to put WSUS servers in the “red” zone (see above) and to consider them as critical assets. Very interesting presentation!

After two presentations focusing on the Windows world, back to the UNIX world, and more precisely Linux with the init system called systemd. Since it was adopted by the major Linux distributions, systemd has been at the centre of huge debates between the pro-init and pro-systemd camps. Same for me: I find it not easy to use, it introduces complexity, etc… But the presentation gave nice tips that can be used to improve the security of daemons started via systemd. A first and basic tip is to not run them as root, but many new features are really interesting (see the configuration sketch below):

  • seccomp-bpf can be used to disable access to certain syscalls (like chroot() or obsolete syscalls)
  • capabilities can be dropped (e.g. CAP_NET_BIND_SERVICE)
  • mount namespaces (e.g. /etc/secrets is not visible to the service)

Nice quick tips that can be easily implemented!
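
As a hedged sketch, here is what a unit file applying those tips could look like (directive names follow recent systemd versions; the service path, user and /etc/secrets are placeholders):

[Unit]
Description=Example hardened daemon

[Service]
ExecStart=/usr/local/bin/mydaemon
# Basic tip: don't run as root
User=mydaemon
NoNewPrivileges=yes
# seccomp-bpf: deny obsolete syscalls and chroot()
SystemCallFilter=~@obsolete chroot
# Drop the capability to bind to ports below 1024
CapabilityBoundingSet=~CAP_NET_BIND_SERVICE
# Mount namespace: hide this path from the service
InaccessiblePaths=/etc/secrets
PrivateTmp=yes

[Install]
WantedBy=multi-user.target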

The next talk was about Landlock, by Michael Salaün. The idea is to build a sandbox with unprivileged access rights and to run your application in this restricted space. The perfect example used by Michael is a multimedia player. This kind of application includes many parsers and is, therefore, a good candidate for attacks or bugs. The recommended solution is, as always, to write good (read: safe) code, and the sandbox must be seen as an extra security control. Michael explained how the sandbox works and how to implement it. The example with the media player was to deny write access to the filesystem except when the file is a pipe.

After the lunch, a set of talks was scheduled around the same topic: code analysis. It started with “Static Analysis and Run-time Assertion Checking” by Dillon Pariente and Julien Signoles, who presented Frama-C, a framework for the analysis of C code.

Then Philippe Biondi, Raphaël Rigo, Sarah Zennou and Xavier Mehrenberger presented BinCAT (“Binary Code Analysis Tool”). It can analyse binaries (x86 only) but will never execute code: just by tracking the memory, the registers and much other state, it can deduce a program’s behaviour. BinCAT is integrated into IDA. They performed a nice demo of a keygen tool. BinCAT is available here and can also be executed in a Docker container. The last talk in this set was “Désobfuscation binaire: Reconstruction de fonctions virtualisées” (“Binary deobfuscation: reconstruction of virtualised functions”) by Jonathan Salwan, Marie-Laure Potet and Sébastien Bardin. The principle of this kind of binary protection is to make a binary more difficult to analyse/decode without changing its original capabilities. This is not the same as a packer: here there is some kind of virtualisation that emulates proprietary bytecode. Those three presentations represented a huge amount of work but were too specific for me.

Then, Geoffroy Couprie and Pierre Chifflier presented “Writing parsers like it is 2017“. Writing parsers is hard. Just don’t try to write your own parser; you’ll probably fail. But parsers are present in many applications. They are hard to maintain (old code, handwritten, hard to test & refactor), and parser issues can have huge security impacts: just remember the Cloudbleed bug! The proposed solution is to replace classic parsers with something stronger. The criteria are: memory safety, the ability to be called by / to call C code and, if possible, no garbage collection. Rust fits those criteria, and nom is a parser combinator library written in it. To test the approach, it has been used in projects like the VLC player and the Suricata IDS. Suricata was a good candidate with many challenges: safety, performance. The candidate protocol was TLS. Regarding VLC and parsers, the recent vulnerability affecting the subtitle parser is a perfect example of why parsers are critical.

The last talk of the day was about caradoc. Developed by the ANSSI (the French agency), it’s a toolbox able to decode PDF files. The goal is not to extract and analyse potentially malicious streams from PDF files; like the previous talk, the main idea is to avoid parsing issues. After reviewing the basics of the PDF file format, Guillaume Endignoux and Olivier Levillain made two demos. The first one was to open the same PDF file in two readers (Acrobat and Foxit): the displayed content was not the same. This could be used in phishing campaigns or to defeat an analyst. The second demo was a malicious PDF file that crashed Foxit but not Adobe (DoS). Nice tool.

The day ended with a “rump” session (also called lightning talks at other conferences). I’m really happy with the content of the first day. Stay tuned for more details tomorrow! If you want to follow the talks live, the streaming is available here.

[The post SSTIC 2017 Wrap-Up Day #1 has been first published on /dev/random]