
May 28, 2015

Xavier Mertens

HITB Amsterdam Wrap-Up Day #1

HITB Track 1

The HITB crew is back in the beautiful city of Amsterdam for a new edition of their security conference. Here is my wrap-up for the first day!

The opening keynote was given by Marcia Hofmann, who worked for the EFF (the Electronic Frontier Foundation). Her keynote title was: “Fighting for Internet Security in the New Crypto Wars”. The EFF has always fought for more privacy, and she reviewed the history of encryption and all the bad stories around it. It started with a statement: “We need strong encryption but we need some backdoors”. Ever since encryption algorithms were first developed, developers have received pressure from governments to implement backdoors or use weak(er) keys to allow interception… just in case! This has happened before and will happen again.

Marcia on stage

In a prologue, Marcia explained how everything started with the development of RSA and Diffie-Hellman. For some other protocols, like DES, it was clear that the development team was very close to the NSA, which deliberately asked for weaker keys so that it could brute-force them. In the 80’s and 90’s, cryptography was increasingly developed by the private sector and academic researchers. Personal computers took off and people started to encrypt their data. Then came the famous PGP, key escrow and the Clipper chip. Marcia also explained CALEA (the “Communications Assistance for Law Enforcement Act”): technology must be designed in such a way that the FBI can intercept communications if needed (of course, with a proper warrant). Then came the restrictions on exporting encryption technology outside the US. It was a good opportunity for Marcia to say a few words about the Wassenaar Arrangement and the recent story about the proposal to prevent the export of intrusion software and surveillance technology. Today, in the Snowden era, governments are seen as attackers. The NSA was able to tamper with all our data but also infiltrated major Internet players. What about the future? What should happen and what should we do? Marcia took a pendulum as an example: just as external forces affect the way a pendulum swings, different external pressures affect how security is designed:

I hope the keynote opened a lot of eyes about our privacy!

After the morning coffee break, I went to the second track to follow Pedram Hayati’s presentation: “Uncovering Secret Connections Using Network Theory and Custom Honeypots”. The first part gave background information about our classic defence model and honeypots. In a traditional security model, the perimeter is very hardened, but once the intruder is inside, nothing can stop him. We keep the focus on making the perimeter stronger. It comes from the physical security approach (the old castles), and attackers put all their effort into bypassing a single high barrier. Our second problem? We enter a battle without knowing the attackers. This leads to the idea of active defence and protection. Active defence is defined as:

A security approach that actively increases the cost of performing an attack in terms of time, effort and required resources to the point where a successful compromise against a target is impossible.

Pedram on stage

To achieve this, we need:

The foundation is knowing the attacker! So what tools do we have? Our logs, of course, but honeypots can be very helpful! In the second part, Pedram explained what honeypots are… “a decoy system to lure attackers”. They increase the cost of a successful attack: the attacker will spend time in the honeypot. It is fundamental that it looks legitimate, but a honeypot has its own signatures and behaviour; that’s why it must be fully configurable to lure the attacker. Some principles:

The next section was the experiment. Pedram deployed 13 honeypots at major cloud providers (AWS, Google), distributed across the Internet. They mimicked a typical server and had unpublished IP addresses (no domain mapping). The goal was to identify SSH attacks, discover attack profiles per region and the relations between them. How long until the first infection was detected? On average, less than 10 minutes! An analysis of the collected data made it possible to classify the attackers into three categories:

Pedram also explained how he generated nice statistics about the attackers, their behaviour and locations. To conclude, he compared attackers to somebody throwing bricks through windows. How do we react? We can take actions to prevent this guy from throwing more bricks, or we can buy bullet-proof windows. It’s the same with information security. Try to get rid of the attackers!

My next choice was a talk about mobile phone operators: “Bootkit via SMS: 4G Access Level Security Assessment”, presented by Timur Yunusov and Kirill Nesterov. Today, 3G/4G networks are not only used by people to surf Facebook but also, more and more, for M2M (“machine to machine”) communications. They explained that many operators have GGSNs (“Gateway GPRS Support Nodes”) facing the Internet (just use Shodan to find some). A successful attack against such devices can lead to DoS, information leaks, fraud or APN guessing.

Timur & Kirill on stage

BTW, did you know that, when you are out of credit, telcos block TCP traffic but UDP remains available? It’s time to use your UDP VPN! But attacking the network this way is not new, so the speakers focused on another path: attacking the network via SMS! On the hardware side, they investigated some USB modems used by many computers. Such devices are based on Linux/Android/Busybox and have many interesting features. Most of them suffer from basic vulnerabilities like XSS, CSRF or the ability to brick the device. They showed a demo video demonstrating an XSS attack and a CSRF to steal the user password. If you can own the device, the next challenge is to own the computer using the USB modem! To achieve this, they successfully turned the modem into an HID device: it is first detected as a classic RNDIS device, then as a keyboard, and it operates like a Teensy to inject keystrokes. You own the modem and the computer; what about the SIM card? They explained in detail how they achieved this step and ended with a demonstration where they remotely cloned a SIM card and captured GSM traffic! The best advice they could give as a conclusion: always change your PIN code!

After the lunch, Didier Stevens and myself gave our workshop on IOS forensics. I missed two talks, but my next choice was to listen to Bas Venis, a very young security researcher, who talked about browsers: “Exploiting Browsers the Logical Way”. The presentation was based on “logic” bugs: no need for debuggers and other complicated tools to find such vulnerabilities. Bas explained the Chrome URL spoofing vulnerability he discovered (CVE-2013-6636).

Bas on stage

Then he switched to the Flash player. The goal was to evade the sandbox. After explaining the different types of sandboxes (remote, local_with_file, local_with_network, local_trusted and application), he explained that the logic of URL/URI handling is not rock solid in sandboxes, which led to CVE-2014-0535. The conclusion was that looking for logic bugs and using them proved to be a sensible approach when trying to hack browsers. Sweet results can be found, and they do not require tools, just dedication and creativity. One remark about the quality of the video: it was almost unreadable on the big plasma screens installed in the room.

Finally, the first day ended with a rock star: Saumil Shah, who presented “Stegosploit: Hacking with Pictures”. This presentation is the next step in Saumil’s research about owning the user with pictures. In 2014 at hack.lu, he already presented “Hacking with Pictures”. What’s new? Saumil insisted on the fact that “A good exploit is one delivered with style”. Pwning the browser can be complicated, so why not find a simpler way to deliver the exploit? The first part was a review of the history of steganography, a technique used to hide a message in a picture without visibly altering it. Then came the principle of GIFAR: a GIF file with a JAR file appended to it. Then webshells arose, with tags like “<?php>” or “<% … %>” embedded in images. Finally, EXIF data was used (for example, to deliver an XSS).

Saumil on stage

Stegosploit is not a new 0-day exploit with a nice name and logo. It’s a technique to deliver browser exploits via pictures. To achieve this, we need an attack payload and a “safe” decoder which can transform pixels into dangerous data. How?

How to decode it in the browser? Using the HTML5 CANVAS (so, only available in modern browsers). The next step is to run the decoder automatically in the browser. To achieve this, we have IMAJS: depending on the tag used, a browser will display the file as an image or execute it. Why is this attack possible? Because browsers follow this principle:
Be conservative in what you send and liberal in what you receive. (Jon Postel)
Browsers accept unknown tags and get pwned! The talk ended with some nice demos of exploits popping up calc.exe or Meterpreter sessions. Keep an eye on this technique; it’s an amazing way to own a user!

This closed the first day! Note that slides are uploaded after each talk and are available here.

by Xavier at May 28, 2015 09:08 PM

Mattias Geniar

Google Jump: VR For The Masses

The post Google Jump: VR For The Masses appeared first on ma.ttias.be.

They had already made commodity-class hardware in the cloud a standard, and their R&D team seems to be heading the same way for Virtual Reality, too.

From a cardboard Virtual Reality goggle to affordable VR recording. Impressive stuff.

GoPro's Jump-ready 360 camera array uses HERO4 camera modules and allows all 16 cameras to act as one. It makes camera syncing easy, and includes features like shared settings and frame-level synchronization.
Google Jump

If this works as simply as advertised, we'll be seeing a lot of VR content in the near future.

Is Virtual Reality on the web really the next step?

The post Google Jump: VR For The Masses appeared first on ma.ttias.be.

by Mattias Geniar at May 28, 2015 07:11 PM

Les Jeudis du Libre

Arlon, June 4th, S01E03: Doing Java in Scala

Scala logo

On Thursday, June 4th at 7 p.m., the Jeudis du Libre of Arlon present an opera in five acts on the Scala programming language:

On stage, the tenor will be Alexis Vandendaele, a Java developer at Sfeir who is passionate about rising new technologies, web and otherwise. The goal of his talk is to give you an overview of Scala’s capabilities in order to demystify the language.

Venue: InforJeunes Arlon (1st floor), Place Didier 31, 6700 Arlon
For details and registration, it’s here!
For further information, contact the ASBL 6×7.

by Didier Villers at May 28, 2015 07:26 AM

May 27, 2015

Frank Goossens

Firefox: how to enable the built-in tracking protection

Just read an article on BBC News that starts off with the AdBlock Plus team winning another case in a German court (yeay) and ends with a report on how Firefox also has built-in tracking protection which -for now- is off by default and somewhat hidden. To enable it, just open about:config and set privacy.trackingprotection.enabled to true. I disabled Ghostery for now, let’s see how things go from here.
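If you prefer to preset this (e.g. when scripting a fresh profile), the same preference can be appended to the profile’s user.js; a minimal sketch, where the profile folder name is illustrative:

# Append the tracking-protection pref to a Firefox profile's user.js
# (find your real profile folder under ~/.mozilla/firefox/)
echo 'user_pref("privacy.trackingprotection.enabled", true);' >> ~/.mozilla/firefox/xxxxxxxx.default/user.js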

by frank at May 27, 2015 04:46 PM

Autoptimize: video tutorial en Español!

The webempresa.com team contacted me a couple of days ago to let me know they created a small tutorial on the installation & basic configuration of Autoptimize, including this video of the process;

YouTube Video
Watch this video on YouTube or on Easy Youtube.

The slowdown noticed when activating JS optimization is due to the relative cost of aggregating & minifying the JS. To avoid this overhead for each request, implementing a page caching solution (e.g. HyperCache plugin or a Varnish-based solution) is warmly recommended.

Muchas gracias Webempresa!

by frank at May 27, 2015 03:34 PM

May 26, 2015

Mattias Geniar

iOS9 Major Feature: Fixing The Shift Key

The post iOS9 Major Feature: Fixing The Shift Key appeared first on ma.ttias.be.

I may be an Apple fan, but I LOL'd at this major feature that leaked.

Alongside this, the company plans to tweak the keyboard to work better in both landscape and portrait keyboard mode and will make it easier to tell when the shift key is selected.

iOS 9 to reportedly support Force Touch, fix shift key and open up Apple Pay to Canada

If new OS details leak and they mention fixing the shift key, you're doing something wrong.

The post iOS9 Major Feature: Fixing The Shift Key appeared first on ma.ttias.be.

by Mattias Geniar at May 26, 2015 06:51 PM

Bert de Bruijn

which vSphere version is my VM running on?

(an update of an older post, now complete up to vSphere 6)

Your Linux runs on a VMware VM, but on which ESXi version? You can see for yourself: run "dmidecode" and look at lines 10, 11 and 12.
ESX 2.5 - BIOS Release Date: 04/21/2004 - Address 0xE8480 - Size 97152 bytes
ESX 3.0 - BIOS Release Date: 04/17/2006 - Address 0xE7C70 - Size 99216 bytes
ESX 3.5 - BIOS Release Date: 01/30/2008 - Address 0xE7910 - Size 100080 bytes
ESX 4 - BIOS Release Date: 08/15/2008 - Address 0xEA6C0 - Size 88384 bytes
ESX 4U1 - BIOS Release Date: 09/22/2009 - Address 0xEA550 - Size 88752 bytes
ESX 4.1 - BIOS Release Date: 10/13/2009 - Address 0xEA2E0 - Size 89376 bytes
ESXi 5 - BIOS Release Date: 01/07/2011 - Address 0xE72C0 - Size 101696 bytes

ESXi 5.1 - BIOS Release Date: 06/22/2012 - Address: 0xEA0C0 - Size: 89920 bytes
ESXi 5.5 - BIOS Release Date: 07/30/2013 - Address: 0xEA050 - Size: 90032 bytes
ESXi 6 - BIOS Release Date: 09/30/2014 - Address: 0xE9A40 - Size: 91584 bytes
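If you just want to see those fields, a quick sketch (assuming dmidecode is installed and you have root):

# Print the BIOS information block; the Release Date, Address and Size
# lines identify the ESX(i) version of the host the VM booted on.
sudo dmidecode -t bios | grep -E 'Release Date|Address|Size'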


NB These DMI properties are set at boot time. Even if your VM gets live-migrated to a host running a different vSphere version, your VM will keep the values it got from the host it booted on. What you see is the vSphere version of the host your VM booted on.

by Bert de Bruijn (noreply@blogger.com) at May 26, 2015 12:02 PM

Frank Goossens

Music from Our Tube: Daniel Lanois going drum&bass

As usual I heard this on KCRW earlier today; Daniel Lanois leaving his roots-oriented songwriting for some pretty spaced-out instrumentals with a jazz-like and sometimes straight drum & bass feel to them. This is “Opera”, live;

YouTube Video
Watch this video on YouTube or on Easy Youtube.

You can watch, listen & enjoy more live “flesh & machine”-material in this live set on KCRW.

by frank at May 26, 2015 05:27 AM

May 25, 2015

Mattias Geniar

Custom Fonts On Kindle Paperwhite First Generation

The post Custom Fonts On Kindle Paperwhite First Generation appeared first on ma.ttias.be.

I have something to admit: I'm a bit obsessive when it comes to fonts.

Nobody notices it, but this blog has changed more fonts in the last 6 months than I care to remember. Chaparral-Pro, Open Sans, HelveticaNeue-Light, ... they've all been used.

For me, a proper font makes or breaks a reading experience. In fact, I can still remember when just a few hours before launching our corporate website, I mentioned this "cool font I just came across" to Chris, and we decided to switch to HelveticaNeue-Light for the site, last minute.

Even our animated video got a font-change because of my obsessiveness. All for the better, of course.

So as much as the internet is about fonts and typography, kerning and whitespace, it sort of surprises me that the e-book world isn't. Or maybe it is, but only for the newer generations of e-readers.

I'm at my second Kindle now and I'd buy one again in a heartbeat. It's absolutely brilliant. But after a few years, you begin to notice the outdatedness of the device. It looks and feels old. To me, that's in large part because of the default font Caecilia.

Here's what it looks like.

caecilia_font_kindle

It looks bold and makes the device feel older than it really is.

On the Kindle, I've used that font for many years. Largely because I had no idea that I could change the font in the first place. But it's readable and there really isn't much bad about it. It gets the job done.

Here's what it looks like on the Kindle itself, as the cover page of Becoming Steve Jobs.

kindle-caecille

A very quick way of making the Kindle feel new again is by changing to the other built-in fonts, more specifically Palatino.

kindle-palatino

It's more in line with modern typography as seen on the web: a lighter font that vaguely resembles HelveticaNeue.

But the default font options are limited. There are 6 included in my Paperwhite. And because geeks will be geeks, I wanted a font I chose myself.

First, make sure you're on the 5.3.1 version. I've read some blogposts about alternative methods working on 5.3.1+ versions, but none of them seemed to work for me. Download the 5.3.1 binary image here.

Next, disable WiFi on the Paperwhite, because auto-upgrades will break this functionality. You can enable it again later on, after you've added the fonts.

To downgrade the Kindle (in case you need to) follow these steps;

  1. Download earlier update file from Amazon: Kindle Paperwhite 1 Update 5.3.1
  2. Disable wifi on your Paperwhite (airplane mode).
  3. Connect your Kindle Paperwhite to your computer (do not disconnect until the last step).
  3. Connect your Kindle Paperwhite to your computer (do not disconnect until the last step).
  4. Copy the bin file you downloaded in step 1 to the root folder of the Paperwhite.
  5. Wait at least 2 minutes after the copy has completed (the device needs to register the .bin internally).
  6. Push and hold the power button until your Paperwhite restarts (the LED blinks orange and the screen lights up).
  7. Wait until the Paperwhite has installed the upgrade (which is really a downgrade).
  8. Now you can DISCONNECT from your computer.

If you've done the steps right, the next time the Kindle boots it'll flash itself from the supplied .bin file.
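On Linux or Mac, copying the update file comes down to something like this (the mount point and file name are illustrative):

# Kindle mounted at /media/Kindle -- adjust the path for your OS
cp ~/Downloads/update_kindle_5.3.1.bin /media/Kindle/
sync   # make sure the .bin is fully written before the long-press restart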

kindle_downgrading

Once you're on 5.3.1, getting the fonts activated is pretty easy.

Connect the device to your PC and do these steps;

  1. In the root of the Kindle, make a file called "USE_ALT_FONTS" with no content. (touch USE_ALT_FONTS)
  2. Make a folder called fonts and drop your favourite font in there, in 4 versions each: Regular, Italic, Bold, BoldItalic. The filename needs to include those versions in the suffix, see the example below.

    I downloaded the ChaparralPro font from Fontzone.

    ChaparralPro_fonts_kindle

  3. After you've uploaded the fonts, reboot your Kindle
  4. After it booted, go to search in the dashboard/home screen and type ;fc-cache as a command.

    That forces the Kindle to rebuild its font database. After 4-5 minutes, the device will flash white and reload its UI; that is the sign that the rebuild finished. Take your time for this.

    The command will look like it completed instantly, but is still running in the background.

kindle-fc-cache
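For the file-system part (steps 1 and 2), this is what it boils down to on Linux or Mac (mount point and font files are illustrative):

# Kindle mounted at /media/Kindle -- adjust for your OS
cd /media/Kindle
touch USE_ALT_FONTS   # empty flag file that unlocks custom fonts
mkdir -p fonts
# one file per style; the style suffix in the filename matters
cp ~/Downloads/ChaparralPro-Regular.ttf fonts/
cp ~/Downloads/ChaparralPro-Italic.ttf fonts/
cp ~/Downloads/ChaparralPro-Bold.ttf fonts/
cp ~/Downloads/ChaparralPro-BoldItalic.ttf fonts/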

Once it reboots, you'll find a lot more fonts available in the Font Selection window. Enabling the USE_ALT_FONTS flag also unlocks other, already installed, fonts on the device.

kindle_unlocked_font_selections

After the Kindle booted, I chose the new Chaparral Pro font, increased the font size by 2 steps above the default, and we're good to go.

kindle_chaparralpro

I'm really happy with the results: the Chaparral Pro font is very pleasing to read.

Here's a side-by-side comparison of the original, the on-board Palatino and the newly installed Chaparral Pro. Click on the image for a bigger view.

kindle_font_comparison

The photo quality is sloppy, as I took "screenshots" with my phone. That means the angle is off every time and the alignment just downright sucks. But it gets the message across.

I'm hoping the next e-reader I buy has simpler options for managing custom fonts and takes its typography more seriously.

The post Custom Fonts On Kindle Paperwhite First Generation appeared first on ma.ttias.be.

by Mattias Geniar at May 25, 2015 09:03 PM

May 24, 2015

Wouter Verhelst

Fixing CVE-2015-0847 in Debian

Because of CVE-2015-0847 and CVE-2013-7441, two security issues in nbd-server, I've had to prepare updates for nbd, for which there are various supported versions: upstream, unstable, stable, oldstable, oldoldstable, and oldoldstable-backports. I've just finished uploading security fixes for those supported versions of nbd-server in Debian. There are various relevant archives, and unfortunately it looks like they all have their own way of doing things regarding security:

While I understand how the differences between the various approaches have come to exist, I'm not sure I understand why they are necessary. Clearly, there's some room for improvement here.

As anyone who reads the above may see, doing an upload for squeeze-lts is in fact the easiest of the three "stable" approaches, since no intermediate steps are required. While I'm not about to advocate dropping all procedures everywhere, a streamlining of them might be appropriate.

May 24, 2015 07:18 PM

May 23, 2015

Ruben Vermeersch

dupefinder - Removing duplicate files on different machines

Imagine you have an old and a new computer. You want to get rid of that old computer, but it still contains loads of files. Some of them are already on the new one, some aren’t. You want to get the ones that aren’t: those are the ones you want to copy before tossing the old machine out.

That was the problem I was faced with. Not willing to do this tedious task of comparing and merging files manually, I decided to write a small tool for it. Since it might be useful to others, I’ve made it open-source.

Introducing dupefinder

Here’s how it works:

  1. Use dupefinder to generate a catalog of all files on your new machine.
  2. Transfer this catalog to the old machine
  3. Use dupefinder to detect and delete any known duplicate
  4. Anything that remains on the old machine is unique and needs to be transferred to the new machine
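In shell terms, that round trip could look like this (catalog name and folders are illustrative):

# On the new machine: build a catalog of everything you already have
dupefinder -generate catalog.dat ~/Documents ~/Pictures

# Copy catalog.dat to the old machine, then preview what would be deleted...
dupefinder -detect -dryrun catalog.dat ~/Documents

# ...and actually remove the known duplicates (at your own risk!)
dupefinder -detect -rm catalog.dat ~/Documents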

You can get it in two ways: there are pre-built binaries on Github, or you may use go get:

go get github.com/rubenv/dupefinder/...

Usage should be pretty self-explanatory:

Usage: dupefinder -generate filename folder...
    Generates a catalog file at filename based on one or more folders

Usage: dupefinder -detect [-dryrun / -rm] filename folder...
    Detects duplicates using a catalog file on one or more folders

  -detect=false: Detect duplicate files using a catalog
  -dryrun=false: Print what would be deleted
  -generate=false: Generate a catalog file
  -rm=false: Delete detected duplicates (at your own risk!)

Full source code on Github

Technical details

Dupefinder was written in Go, which is my default choice of language nowadays for this kind of tool.

There’s no doubt that you could use any language to solve this problem, but Go really shines here. The combination of lightweight threads (goroutines) and message passing (channels) makes it possible to have clean and simple code that is extremely fast.

Internally, dupefinder is a small pipeline: a file crawler feeds a pool of hashers (one hashing routine per CPU core), which in turn feed a single result processor. Each of these stages is a goroutine, connected by channels.

The beauty of this design is that it’s simple and efficient: the file crawler ensures that there is always work to do for the hashers, the hashers just do one small task (read a file and hash it) and there’s one small task that takes care of processing the results.

The end-result?

A multi-threaded design, with no locking misery (the channels take care of that), in what is basically one small source file.

Any language can be used to get this design, but Go makes it so simple to quickly write this in a correct and (dare I say it?) beautiful way.

And let’s not forget the simple fact that this trivially compiles to a native binary on pretty much any operating system that exists. Highly performant cross-platform code with no headaches, in no time.

The distinct lack of bells and whistles makes Go a bit of an odd duck among modern programming languages. But that’s a good thing. It takes some time to wrap your head around the language, but it’s a truly refreshing experience once you do. If you haven’t done so, I highly recommend playing around with Go.

Random questions


Comments | @rubenv on Twitter

May 23, 2015 11:44 AM

May 22, 2015

Xavier Mertens

When Security Makes Users Asleep!

Asleep

It’s a fact: in industry or on building sites, professionals make mistakes or, worse, get injured. Why? Because their attention drops at a certain point. When you’re doing the same job all day long, you get tired and lose concentration. The same applies in information security! For years, more and more solutions have been deployed in companies to protect their data and users. Just make your wishlist amongst firewalls, (reverse-)proxies, next-generation firewalls, ID(P)S, anti-virus, anti-malware, end-point protection, etc. (the list is very long). Often multiple lines of defence are implemented with different firewalls, segmented networks and NAC. The combination of all those security controls tends to reduce successful attacks to a minimum. “Tends” does not mean that all of them will be blocked! A good example is phishing emails: they remain a very good way to abuse people. Even if most of them are successfully detected, a single one can have a disastrous impact. Once dropped in a user’s mailbox, chances are that the potential victim will be asleep… Indeed, the company spent a lot of money to protect its infrastructure, so the user will think: “My company is doing a good job at protecting me, so if I receive a message in my mailbox, I can trust it!“ Here is a real-life example I’m working on.

A big organization received a very nicely formatted email from a business partner. The mail had an attachment pretending to be a pending invoice and was sent to <info@company.com>. The person reading the information mailbox forwarded it, logically, to the accounting department. There, an accountant read the mail (coming from a trusted partner and forwarded by a colleague – what could go wrong?) and opened the attachment. No need to tell the rest of the story; you can imagine what happened. The malicious file was part of a new CTB-Locker campaign: it had been generated only a few hours before the attack and, no luck, the installed solutions were not (yet) able to detect it. The malicious file successfully passed the following controls:

Users, don’t fall asleep! Keep your eyes open and keep in mind that the controls deployed by your company are a way to reduce the risk of attacks. Your car has ABS, ESP, lane departure detection systems and much more, but you still need to pay attention to the road! The same applies in IT: stay safe…

by Xavier at May 22, 2015 11:53 AM

Dries Buytaert

Why WooMattic is big news for small businesses

Earlier this week Matt Mullenweg, founder and CEO of Automattic, parent company of WordPress.com, announced the acquisition of WooCommerce. This is a very interesting move that I think cements the SMB/enterprise positioning between WordPress and Drupal.

As Matt points out, a huge percentage of the digital experiences on the web are now powered by open source solutions: WordPress, Joomla and Drupal. Yet one question the acquisition may evoke is: "How will open source platforms drive ecommerce innovation in the future?".

Larger retailers with complex requirements usually rely on bespoke commerce engines or build their online stores on solutions such as Demandware, Hybris and Magento. Small businesses access essential functions such as secure transaction processing, product information management, shipping and tax calculations, and PCI compliance from third-party solutions such as Shopify, Amazon's merchant services and, increasingly, solutions from Squarespace and Wix.

I believe the WooCommerce acquisition by Automattic puts WordPress in a better position to compete against the slickly marketed offerings from Squarespace and Wix, and defend WordPress's popular position among small businesses. WooCommerce brings to WordPress a commerce toolkit with essential functions such as payments processing, inventory management, cart checkout and tax calculations.

Drupal has a rich library of commerce solutions ranging from Drupal Commerce -- a library of modules offered by Commerce Guys -- to connectors offered by Acquia for Demandware and other ecommerce engines. Brands such as LUSH Cosmetics handle all of their ecommerce operations with Drupal; others, such as Puma, use a Drupal-Demandware integration to combine the best elements of content and commerce to deliver stunning shopping experiences that break down the old division between brand marketing experiences and the shopping process. Companies such as Tesla Motors have created their own custom commerce engine and rely on Drupal to deliver the front-end customer experience across multiple digital channels from traditional websites to mobile devices, in-store kiosks and more.

To me, this further accentuates the division of the CMS market with WordPress dominating the small business segment and Drupal further solidifying its position with larger organizations with more complex requirements. I'm looking forward to seeing what the next few years will bring for the open source commerce world, and I'd love to hear your opinion in the comments.

by Dries at May 22, 2015 03:38 AM

May 21, 2015

Mattias Geniar

rtop: Remote System Monitoring Via SSH

The post rtop: Remote System Monitoring Via SSH appeared first on ma.ttias.be.

This is a simple but effective tool: rtop.

rtop is a remote system monitor. It connects over SSH to a remote system and displays vital system metrics (CPU, disk, memory, network). No special software is needed on the remote system, other than an SSH server and working credentials.

You could question why you wouldn't just SSH into the box and run top, but hey, let's just appreciate rtop for what it is: a simple overview of the system's state and performance.

Installation

Not that hard, you just need the Go language runtime.

$ git clone --recursive http://github.com/rapidloop/rtop
$ cd rtop
$ make

For a few days, there was a problem with connecting over keys that use passphrases, but that was resolved in issue #16.

Running rtop

As easy as the installation.

rtop user@host:2222 1

This translates to: connect as user to host on SSH port 2222 and refresh the stats every 1 second.

And then you have your output.

./rtop user@host:2222 1
host.domain.tld up 57d 22h 32m 7s

Load:
    0.19 0.05 0.01

Processes:
    1 running of 240 total

Memory:
    free    = 573.58 MiB
    used    =   1.89 GiB
    buffers = 144.43 MiB
    cached  =   1.05 GiB
    swap    =   4.00 GiB free of   4.00 GiB

Filesystems:
           /:  21.25 GiB free of  23.23 GiB

Network Interfaces:
    eth0 - 192.168.10.5/26, fe80::aa20:66ff:fe0d/64
      rx = 523.23 GiB, tx = 4972.94 GiB

    lo - 127.0.0.1/8, ::1/128
      rx =   2.69 GiB, tx =   2.69 GiB

Pretty neat summary of the system.

The post rtop: Remote System Monitoring Via SSH appeared first on ma.ttias.be.

by Mattias Geniar at May 21, 2015 09:30 PM

Frank Goossens

Instant Pages vs Instant Web?

Again an interesting ALA article about web performance (or the lack thereof), triggered by Facebook’s “Instant Articles” announcement;

I think we do have to be better at weighing the cost of what we design, and be honest with ourselves, our clients, and our users about what’s driving those decisions. This might be the toughest part to figure out, because it requires us to question our design decisions at every point. Is this something our users will actually appreciate? Is it appropriate? Or is it there to wow someone (ourselves, our client, our peers, awards juries) and show them how talented and brilliant we are?

This exercise clearly starts at the design phase, because thinking about performance in the development or testing phase is simply too late.

by frank at May 21, 2015 05:17 AM

May 20, 2015

Mattias Geniar

Bitrot

The post Bitrot appeared first on ma.ttias.be.

Nothing new, but I recently got reminded of this bitrot thing.

Let's talk about "bitrot," the silent corruption of data on disk or tape. One at a time, year by year, a random bit here or there gets flipped. If you have a malfunctioning drive or controller—or a loose/faulty cable—a lot of bits might get flipped. Bitrot is a real thing, and it affects you more than you probably realize.

The JPEG that ended in blocky weirdness halfway down? Bitrot. The MP3 that startled you with a violent CHIRP!, and you wondered if it had always done that? No, it probably hadn't—blame bitrot. The video with a bright green block in one corner followed by several seconds of weird rainbowy blocky stuff before it cleared up again? Bitrot.

Bitrot and atomic COWs: Inside “next-gen” filesystems

If you're an Accidental Tech Podcast listener, you'll have heard the rants of John on HFS+ and Bitrot by now. Here's some reading material to keep you focussed;

For the next few weeks, every unexplained filesystem corruption error I encounter will be blamed on bitrot.

The post Bitrot appeared first on ma.ttias.be.

by Mattias Geniar at May 20, 2015 07:08 PM

Joram Barrez

Launching Activiti version 6 in Paris (June 10th)!

The Activiti engine was born five years ago and we started with version 5.0 (a cheeky reference to our jBPM past). In those five years we’ve seen Activiti grow beyond our wildest dreams. It is used all across the globe in all kinds of cool companies. The last couple of months we’ve been working very hard at […]

by Joram Barrez at May 20, 2015 06:40 PM

Frank Goossens

Is your string zipped?

While looking into a strange issue on a multisite WordPress installation, which optimized the pages of the main blog but not of the sub-blogs, I needed code to check whether a string was gzipped or not. I found this like-a-boss code snippet on StackOverflow which works for gzencoded strings:

$is_gzip = 0 === mb_strpos($mystery_string , "\x1f" . "\x8b" . "\x08");

But this does not work for strings compressed with gzcompress or gzdeflate, which don’t have the GZIP header data, so in the end I came up with this less funky function, which somewhat brutally just tries to gzuncompress and gzinflate the string:

function isGzipped($in) {
  // gzencode output starts with the GZIP magic number plus the deflate method byte.
  if (mb_strpos($in, "\x1f" . "\x8b" . "\x08") === 0) {
    return true;
  // No header, so brute-force it: gzcompress output inflates with gzuncompress...
  } else if (@gzuncompress($in) !== false) {
    return true;
  // ...and gzdeflate output inflates with gzinflate.
  } else if (@gzinflate($in) !== false) {
    return true;
  } else {
    return false;
  }
}

Clunky, but it works. Now if only this would confirm my “educated guess” that the original problem was due to a compressed string.

by frank at May 20, 2015 02:01 PM

May 19, 2015

Mattias Geniar

Stunt Hacking: The Sad State of Our Security Industry

The post Stunt Hacking: The Sad State of Our Security Industry appeared first on ma.ttias.be.

There's a new term in the security industry: Stunt Hacking. And it isn't positive.

In case you've missed the recent buzz on the internet, there's a lot of response to an alleged hack of a plane, in mid-flight, by a security researcher. His goal was to demonstrate software vulnerabilities in the onboard entertainment system.

Obviously, what he did was wrong. There's no disputing that.

Putting the lives of others at risk, while manipulating a plane in flight, is absolutely wrong.

Yet at the same time, we have to acknowledge that security news nowadays doesn't get the attention it deserves without such stunt hacking.

Could he have done it differently? Yes.

Should he have done it differently? Absolutely.

Media Manipulation

This stunt got a lot of media coverage. From security professionals distancing themselves from the act to CNN covering it in prime time. People are aware that it happened.

This kind of media manipulation almost seems necessary nowadays. The recent VENOM vulnerability got a well-prepared website, with the details of the vulnerability carefully explained. Someone took their time, whilst knowing about the vulnerability, to prepare that site.

They did so, because they know it's necessary. Without it, the vulnerability wouldn't get the attention it deserved. It would be noticed among hackers, but perhaps not by those in charge of updating their infrastructure (1).

Vulnerabilities fly under the radar all the time (pun intended).

OpenSSL CVE-2014-0160? It got a marketing name and a website: Heartbleed. Bash CVE-2014-6271? It got the ShellShock and BashBug labels pinned to it. CVE-2015-1635? The "Windows HTTP Packet Of Death". Hell, I'm even guilty of naming a recent Drupal hack "The EngineHack", just to draw attention to it.

Nowadays, security incidents just don't get noticed without the necessary media buzz and fancy naming and logos. This is a sad trend our industry is moving towards.

Cover-ups

Back to the flying hacker. He's not talking to the media, as advised by his lawyer. Probably a good idea, too. But that makes it a one-sided story. A story that authorities are now denying ever happened. (2)

Here's the current state: the security researcher is under attack by the media. The fact that he uncovered major flaws in the entertainment system is no longer the primary focus of attention.

His plan failed. The attempt to get media coverage on the security of planes backfired. That's a damn shame, because by the sound of it, the subject badly needs more attention.

Software On The Plane

Planes run on software. From autopilots to navigation to entertainment systems, it's driven by software. And it has bugs, like buffer overflows. And bugs cause crashes with fatal results. This is what deserves attention.

Like any business, aviation is driven by profit. If there's a way to make more profit, they'll go for it. Apparently, that sometimes means taking shortcuts in software. Crucial software. They're not alone though, medical equipment is far worse.

The aviation industry has strict norms that have to be followed for software in planes.

A recent HackerNews post has several developers who work/have worked in the aviation industry comment on this.

All safety critical software (every piece of code ran on-board is safety critical the least) in aerospace needs to pass the DO-178 standard [1].
alandarev

I worked at an avionics contractor for a number of years and that mirrors my experience as well. On Level A projects, the process was at least followed, but it was often under tight stressful deadlines. Testing was frequently off-shored to save money, but often resulted in low quality tests that had to be reworked at the last minute by in-house engineers.
Jayschwa

This is cause for concern.

I'm in no way approving nor endorsing what the security researcher did. It was wrong on every level. He got yelled at by the media and will have his hands full with all the legal action that follows. Good, he deserves that.

But shift the focus back on software security, please. It's much needed.

(1) Yes, that's their own fault. But let's be honest: a lot of system administrators lack the time and resources to monitor all security flaws all the time.

(2) Paranoid people would take the opportunity to refer to large-scale cover-ups.

The post Stunt Hacking: The Sad State of Our Security Industry appeared first on ma.ttias.be.

by Mattias Geniar at May 19, 2015 05:54 PM

Lionel Dricot

The 10 million drivers of the magic killer train

8051906007_9d42074b13_z

Close your eyes.

Imagine a long train carrying all the goods transported in Europe during a whole year. This train is magic: it leaves on January 1st, rolls a few million kilometres along an endless track and, automatically, all of the year's deliveries are made to every shop and factory on the continent.

Now imagine that a person is lying on the track and prevents the train from passing. If the train brakes, the entire year's economy collapses. Should the train stop to save that individual's life? Or should society, on the contrary, sacrifice one life to keep the economy running?

In the United States, with 4,000 people on the track, the train does not stop. And I think the figures would be similar everywhere in the world. 4,000 is indeed the number of people killed every year in accidents caused by freight trucks (and, in most cases, by a human error of the driver). By comparison, the September 11 attacks, which turned the world upside down and led to trillions being spent on "security measures", caused… 3,000 victims. Freight trucks on their own amount to more than one September 11 every year on American soil alone.

And while we were hypnotized by the corpses of September 11, we superbly ignore the thousands of deaths on the road, considering them anonymous individual tragedies. Perhaps if, as with September 11, the images of people dying were replayed to us in a loop, we would have a different perception of driving? Personally, that is the reason why I no longer want to drive.

But I have good news for you: I recently had the opportunity to sit behind the wheel of a modern truck. Everything is now automated: the truck anticipates braking, monitors the driver's behaviour, warns of obstacles and slows down. Enough to avoid many accidents and save lives.

Better still! This month of May 2015 sees the first fully autonomous truck put on the road in the United States. No driver, no error. As the Google Car has demonstrated, the gradual replacement of drivers by artificial intelligences will drastically reduce the number of victims. Over more than a million miles, the Google Cars have had only 11 minor accidents, all of them without exception caused by a human error (in 7 of those accidents, the car was rear-ended while standing still).

Great, isn't it?

There is just one small problem. There are 3.5 million truck drivers in the United States. In his excellent article, which I encourage you to read, Scott Santens estimates that, with the motels, the highway restaurants and all the associated services, truck driving represents 10 million jobs.

10 million jobs that will become obsolete. Or rather that already are, since the automated truck exists. A truck that pollutes less because it can drive in an optimal way. A truck that relieves the road because it can drive 24 hours a day and thus replace 3 trucks that are forced to take regular breaks.

10 million jobs that will be done more efficiently, faster and more safely by artificial intelligences. 10 million jobs that are responsible, every year, for 4,000 deaths.

We could rejoice without changing anything about society. We save 4,000 lives and we send 10 million people into poverty. The income currently earned by those 10 million people will be shared among the few thousand lucky ones who will have bought automated trucks. They will live in luxury by renting them out without really doing anything with their days, accusing the former drivers of being lazy. That is one possibility.

We could also fight with all our strength against an innovation that is inevitable anyway; we could claim that nothing beats a good manual truck driven by a trucker who smells of sweat. We could try to pass laws banning automated trucks, allowing 10 million people to keep doing the useless work of digging holes and filling them up again while killing 4,000 people a year. That is another possibility.

I'll let you choose the better one.

Done? Have you picked your side?

Don't dawdle, because truckers are, of course, only one example. If your current livelihood is not obsolete yet, it won't take long. Everything a human can do, including creating or inventing, can or will be done tomorrow by an artificial intelligence. Better, faster and cheaper.

So hurry up and make your choice: will you invest massively, hoping to be among the rich while the poor starve to death before cutting off your head? Will you fight with all your strength to prevent the slightest technological progress, so that everyone can dig holes and pointlessly fill them up again, even at the cost of many human lives?

Couldn't we imagine an alternative, a third way? Unlike the politicians, whose total lack of vision confines them to the equation jobs = welfare and hence to the duality above, I am persuaded that many other ways exist. And just like Scott Santens, I am convinced that a basic income is a necessary condition for those alternatives.

If you are against a basic income, I'll let you choose between the two previous solutions.

Photo by Daniel Bracchetti.

Thank you for taking the time to read this freely-priced post. Feel free to support me with a few milliBitcoins or a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

Flattr this!

by Lionel Dricot at May 19, 2015 04:23 PM

May 18, 2015

Frederic Hornain

[Red Hat] Tried. Tested. Trusted.

TriedTestedTrusted


From the trading floor to air traffic control, industries around the world rely on Red Hat.

 

Ref :

http://www.redhat.com/en/about/trusted

Kind Regards

Frederic


by Frederic Hornain at May 18, 2015 08:35 PM

Xavier Mertens

Tracking SSL Issues with the SSL Labs API

SSL Lock

The SSL and TLS protocols have been at the front of the stage for months. Besides the many vulnerabilities disclosed in the OpenSSL library, the deployment of SSL and TLS is not always easy. There are weak ciphers (like RC4), weak signatures and certificate issues (self-signed certificates, expiration or fake ones). Other useful features are misunderstood and often not configured, like PFS (“Perfect Forward Secrecy”). Encryption effectiveness is directly related to the way it is implemented and used; if that’s not the case, encrypted data can be compromised by multiple attack scenarios. To summarize: for users, the presence of a small yellow lock close to the URL in your browser does not mean that you are 100% safe. For administrators and website owners, having a good SSL configuration today does not mean it will remain safe in the coming months/years. Unfortunately, keeping an eye on your SSL configurations is a pain.

To help us, Qualys offers a very nice tool to assess SSL configurations via the ssllabs.com website. Very easy to use, the website allows you to submit a URL and gives you a nice report after a few minutes. The URL is flagged with a grade between “A+” (the best) and “F“. (Note that the worst I have seen so far is “T“.) The methodology used to compute the grade is explained in this document. Here is a sample report:

SSL Labs Example

The good news is that the checks performed against the URLs are updated when new vulnerabilities are discovered (as was the case with the POODLE attack). You can now understand that your grade at a time “x” can be different at a time “x+y“. That’s why you need to test your sites again at regular intervals. The other good news is that an API has been made available to everybody. Check results are returned as JSON data. Such data is not very friendly to the human eye, so I wrote a few lines of Python to extract the useful information (site, grade, certificate issuer, expiration date and certificate MD5):

The API can be used via a tool called ssllabs-scan. The tool runs on any UNIX flavor (you’ll need “go” to compile it) and reads the hosts to be tested from a flat file:

$ cat sites.txt
registration.brucon.org
www.truesec.be
$ ./ssllabs-scan -hostfile=sites.txt -quiet=true

The small Python script will extract the information listed above from the data returned by the API:

$ ./ssllabs-scan -hostfile=sites.txt -quiet=true | ./ssllabs-scan-parse.py >ssllabs-scan.output
$ cat ssllabs-scan.output
Site: www.truesec.be:443, Grade: A-, CertIssuer: StartCom Class 1 Primary Intermediate Server CA, CertExpiration: Wed Jan 20 04:46:26 2016 CertMD5: 89bd5d0dde1d3ec8a06b192d62ffbb72
Site: registration.brucon.org:443, Grade: A-, CertIssuer: StartCom Class 1 Primary Intermediate Server CA, CertExpiration: Fri Feb 26 01:21:57 2016 CertMD5: da6812d4d43282260d9a4751ed88b908

The last step is to add some automation (because we are lazy people). Create a crontab entry to run the above command at a regular interval (once a month is enough); a minimal sketch, reusing the paths from above:
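# Run the SSL Labs scan on the 1st of every month at 03:00
0 3 1 * * cd /data/ssllabs && ./ssllabs-scan -hostfile=sites.txt -quiet=true | ./ssllabs-scan-parse.py > ssllabs-scan.output

Being a fan of OSSEC, I’m just monitoring the result file: any difference will generate an OSSEC alert + notification (e.g. if the grade or the MD5 changes). Configure a new source in your ossec.conf: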

 <localfile>
    <log_format>full_command</log_format>
    <command>cat /data/ssllabs/ssllabs-scan.output</command>
 </localfile>

Create a new rule in your local_rules.xml:

 <rule id="100900" level="9">
    <if_sid>530</if_sid>
    <match>ossec: output: 'cat /data/ssllabs/ssllabs-scan.output</match>
    <check_diff />
    <description>SSLLabs: SSL configuration change detected</description>
 </rule>

Now you can’t miss any degradation of your SSL configurations, especially if you maintain a lot of websites!

by Xavier at May 18, 2015 03:05 PM

May 17, 2015

Mattias Geniar

The Scary Revelation: The Advertising Options On Facebook & Twitter

The post The Scary Revelation: The Advertising Options On Facebook & Twitter appeared first on ma.ttias.be.

This is probably common knowledge to marketers, but it was an eye opener for me. It's absolutely stunning how many filtering options are available on both Facebook and Twitter if you're looking to advertise there.

I had never actually done any advertising on Facebook or Twitter. This was an experiment. How does it work? What can you advertise? What options are there?

Facebook Advertising

My first experiment was with the different options on Facebook. I recently created a Facebook page for this blog. Now that I have my very own page on Facebook, what can I do?

I decided to test if I could promote the page via advertising. Just for fun. So, let's create a target audience.

Poking around on Facebook's Advertising platform, there's no shortage of options to filter on.

The classics, including: location, age, gender, language.

facebook_advertising_options_1

Next, it gets a bit more scary. Straight? Gay? Hispanic? Hispanic with English dominant? Parent with child between the age of 8-12? Liberal or conservative? Engaged for more than 3 months?

It's unbelievable what you can filter as an advertiser.

facebook_advertising_options_2

Next up, you can filter based on interests. This is a never-ending list. Every item has a submenu. And then another one.

Interest filters range from "DIY gardening as a hobby" to "Prefers shopping in boutiques" to "Uses E-Book readers" and anything in between.

facebook_advertising_options_3

You can filter further by Behaviours, like selecting expats, people who like baseball in 2015, people who have returned from a trip in the last 2 weeks, people who have played a game in the last month, ...

facebook_advertising_options_4

Combining all these filter options gives you tremendous power.

This information is gathered by Facebook using the pages you like or visit, the posts you make (or decide not to post), the websites you visit, ... It's a goldmine.

Twitter Advertising

Facebook's advertising filters are scary. But Twitter is just the same.

In fact, Twitter surprised me even more. There are no "pages" to like on Twitter. But the same principle applies: data is gathered from the profiles you watch or follow, the interactions you have, the favorites/retweets, the links you share, the sites you visit, ...

You can experiment for yourself on the Twitter Advertising platform.

The first filter is the same: define locations and gender.

twitter_advertising_options_1

You can further drill down by the device or platform used (iOS, Android, ...).

twitter_advertising_options_2

Or even select the mobile carrier the user has and how long they've been using Twitter on their mobile device.

twitter_advertising_options_3

And then the scary filtering begins.

Filter based on user interests. Notice the scrollbars on the left and right hand column? This is a long list.

twitter_advertising_options_4

And I mean really long lists.

twitter_advertising_options_5

Further options exist to make selections like "people who have visited my site in the last 3 weeks" (yes, actual site tracking), "people who have interacted with a particular TV show", ...

It's pretty unbelievable what you actually give away by using these services.

Dealing With It

First, I encourage you to visit both Advertising Platforms: Facebook Advertising and Twitter Advertising. See for yourself what options are available and what combinations you can make.

Then ask yourself the simple question: do you care if the platform you're using can target you like this? If you don't, carry on. You're good to go.

If you want to prevent some of this tracking, but still keep using the service, there are some browser options to test.

If you think un-liking your Facebook pages or un-favoriting those brand-tweets will help you, you're probably wrong. You may un-like and un-favorite, but Facebook knows. And Twitter knows. And they do not forget easily.

I'm going for the Ghostery and uBlock Origin options. I like to use Facebook and Twitter. I know it's hypocritical. I would much rather pay for Facebook and Twitter and avoid the tracking and targeting altogether.

But the math is simple: there's much more money to be gained from advertisers than from monthly user subscriptions.

The post The Scary Revelation: The Advertising Options On Facebook & Twitter appeared first on ma.ttias.be.

by Mattias Geniar at May 17, 2015 08:03 PM

May 16, 2015

Mattias Geniar

It’s Cheaper To Upgrade An Illegal Windows Version Than Buy A New One

The post It’s Cheaper To Upgrade An Illegal Windows Version Than Buy A New One appeared first on ma.ttias.be.

Because ... logic?

Existing Windows users get the next version of the OS for free. A Windows installation used in an enterprise environment will still have to pay for the upgrade. And users of pirated copies? They get a discount.

While our free offer to upgrade to Windows 10 will not apply to Non-Genuine Windows devices, and as we’ve always done, we will continue to offer Windows 10 to customers running devices in a Non-Genuine state.

In addition, in partnership with some of our valued OEM partners, we are planning very attractive Windows 10 upgrade offers for their customers running one of their older devices in a Non-Genuine state.

Genuine Windows and Windows 10

Here's how I interpret this: installing a Windows license on an illegally downloaded Windows will be cheaper than buying a new copy of Windows off the shelf.

Piracy has won.

The post It’s Cheaper To Upgrade An Illegal Windows Version Than Buy A New One appeared first on ma.ttias.be.

by Mattias Geniar at May 16, 2015 08:38 PM

Understanding the /bin, /sbin, /usr/bin and /usr/sbin Split

The post Understanding the /bin, /sbin, /usr/bin and /usr/sbin Split appeared first on ma.ttias.be.

A short but illuminating read [pdf].

When their root filesystem grew too big to fit on their tiny (half a megabyte) system disk, they let it leak into the larger but slower RK05 disk pack, which is where all the user and home directories lived and why the mount was called /usr.

They replicated all the OS directories under the second disk (/bin, /sbin, /lib, /tmp...) and wrote files to those new directories because their original disk was out of space.
Understanding the bin, sbin, usr/bin, usr/sbin Split

That artefact, the split between /bin and /usr/bin, remains to this day.

Today, the files needed to boot a system are (mostly) statically linked and still live in /bin and /sbin. All other-purpose binaries reside in /usr/bin and /usr/sbin.

It's interesting how something like the lack of disk space from over 30 years ago still influences directory structuring to this day.

Update 18/5/2015: as pointed out to me by @tvlooy on Twitter, the man-page on the filesystem hierarchy also reflects this.

$ man hier

(abbreviated version below)
HIER(7)                    Linux Programmer’s Manual                   HIER(7)

NAME
       hier - Description of the file system hierarchy

DESCRIPTION
       A typical Linux system has, among others, the following directories:

       /      This is the root directory.  This is where the whole tree starts.

       /bin   This directory contains executable programs which are needed in single user mode and to bring the system up or repair it.

       /sbin  Like /bin, this directory holds commands needed to boot the system, but which are usually not executed by normal users.

       /usr   This directory is usually mounted from a separate partition.  It should hold only sharable, read-only data, so that it can be mounted by various machines running  Linux.

       /usr/bin
              This is the primary directory for executable programs.  Most programs executed by normal users which are not needed for booting or for repairing the system and which are
              not installed locally should be placed in this directory.

       /usr/sbin
              This directory contains program binaries for system administration which are not essential for the boot process, for mounting /usr, or for system repair.

Good to know!

The post Understanding the /bin, /sbin, /usr/bin and /usr/sbin Split appeared first on ma.ttias.be.

by Mattias Geniar at May 16, 2015 04:13 PM

May 15, 2015

Paul Cobbaut

healthy breakfast ?

It is said that breakfast is the most important meal of the day.

Here is mine (often):
picture of 6 seeds and oats

- hemp seed (hennepzaad)
- brown flax seed (lijnzaad)
- golden flax seed (geel lijnzaad)
- sesame seed (sesamzaad)
- chia seed (chiazaad)
- sunflower seeds (zonnebloempitten)
- oatmeal (havervlokken)

mixed with:
picture of a plate with 6 berry types
- redcurrant (rode bessen)
- strawberries (aardbeien)
- raspberries (frambozen)
- blackberries (braambes)
- blackcurrant (zwarte bes)
- cranberries? (wilde bosbes)

Mixed with lots of soy milk and left to soak for half an hour.

Healthy?

by Paul Cobbaut (noreply@blogger.com) at May 15, 2015 02:49 PM

Mattias Geniar

Hypertext Transfer Protocol Version 2 (HTTP/2): RFC7540

It's out!

This specification describes an optimized expression of the semantics of the Hypertext Transfer Protocol (HTTP), referred to as HTTP version 2 (HTTP/2). HTTP/2 enables a more efficient use of network resources and a reduced perception of latency by introducing header field compression and allowing multiple concurrent exchanges on the same connection. It also introduces unsolicited push of representations from servers to clients. This specification is an alternative to, but does not obsolete, the HTTP/1.1 message syntax. HTTP's existing semantics remain unchanged.
RFC 7540

Links:

The post Hypertext Transfer Protocol Version 2 (HTTP/2): RFC7540 appeared first on ma.ttias.be.

by Mattias Geniar at May 15, 2015 12:54 AM

May 14, 2015

Mattias Geniar

Microsoft Remotely Locking Xbox One’s

Sometimes, remote control access can be used for good. In this case, not so good.

In response to a video leak of in-game footage of a yet-to-be-released video game, Microsoft has blocked several Xbox One accounts -- which is understandable, it's an online service whose accounts you can block -- as well as remotely locked their Xbox One devices, rendering them completely unusable.

Yep, according to VMC, Microsoft has both permanently banned those leakers' Xbox Live accounts and temporarily made their Xbox Ones totally unusable. If you didn’t think Microsoft had this power, you’re not alone. The digital present is scary.
Microsoft Punishes Gears Leakers By (Temporarily) Bricking Xbox Ones

They shouldn't have leaked that footage. But they bought the Xbox hardware. Where does one draw the line between owning the device and Microsoft stepping in as virtual gatekeeper, making its own laws?

Blocking the Xbox Live account I can understand. Remotely disabling the physical device, until Microsoft decides the punishment has been enough, that's crossing the line.

I'd be pissed if this happened to me(1).

(1) Luckily, I'm not a gamer, so chances of that happening are virtually zero.

The post Microsoft Remotely Locking Xbox One’s appeared first on ma.ttias.be.

by Mattias Geniar at May 14, 2015 08:57 PM

Linux futex_wait bug

A deep technical read, but something you better be aware of.

TL;DR: make sure you update your Linux kernels in the near future, or you'll experience some nasty deadlocks.

The impact of this kernel bug is very simple: user processes can deadlock and hang in seemingly impossible situations. A futex wait call (and anything using a futex wait) can stay blocked forever, even though it had been properly woken up by someone. Thread.park() in Java may stay parked. Etc.

If you are lucky you may also find soft lockup messages in your dmesg logs.
Linux futex_wait() bug...

Anything running RHEL 6.x or CentOS 6.x is advised to upgrade to the latest kernel (2.6.32-504.16.2 or higher). The post mentions it happens mostly on systems with Intel's Haswell processors (Xeon E3 v3, Xeon E5 v3, etc.).

If you haven't been bitten by this bug, it's probably just a matter of time. Or perhaps you've already experienced a service that crashed, couldn't figure out the actual reason, and left it at "meh, I'll just restart it and it'll be fine".

The changelog for the 2.6.32-504.16.2 kernel on CentOS 6.6 mentions this futex fix.

$ yum install yum-changelog python-dateutil
$ yum changelog all kernel-2.6.32-504.16.2.el6 | grep futex
...
- [kernel] futex: Ensure get_futex_key_refs() always implies a barrier (Larry Woodman) [1192107 1167405]
...

It's a long shot, but this kernel bug may be the actual reason.
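
If you want a quick first check across your machines, a naive version comparison like the following can flag hosts still running an older kernel. This is a minimal sketch, assuming the RHEL/CentOS 6.x version scheme mentioned above; it is not a substitute for your distribution's security advisories.

import platform

# Fixed release per the post: 2.6.32-504.16.2 on RHEL/CentOS 6.x
FIXED = (2, 6, 32, 504, 16, 2)

def version_tuple(release):
    # Turn e.g. "2.6.32-504.16.2.el6.x86_64" into (2, 6, 32, 504, 16, 2)
    parts = release.replace("-", ".").split(".")
    return tuple(int(p) for p in parts if p.isdigit())[:6]

release = platform.release()
if version_tuple(release) < FIXED:
    print("Kernel %s may still be affected by the futex_wait bug" % release)
else:
    print("Kernel %s looks patched" % release)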

The post Linux futex_wait bug appeared first on ma.ttias.be.

by Mattias Geniar at May 14, 2015 12:43 PM

May 13, 2015

Wim Leers

Making Drupal fly — The fastest Drupal ever is near!

Together with Fabian Franz from Tag1 Consulting, I had a session about BigPipe in Drupal 8, as well as related performance/cacheability improvements. Fabian's demo of BigPipe and other render strategies in the first ten minutes is especially worth watching :)

I’ll let Fabian’s session description speak for itself:

Come and join us for a wild ride into the depths of Render Caching and how it enables Drupal to be faster than ever.

The Masterplan of Drupal Performance

Here we will reveal the TRUE MASTERPLAN of Drupal Performance. The plan we have secretly (not really!) been implementing for years and are now “sharing” finally with all of you! (Well you could look at the issue queue too or this public google doc, but this session will be more fun!)

Learn what we have in store for the future, what has changed since we last talked about this topic in Amsterdam, why Drupal 8 will be even more awesome, and why you don't have to wait: you can do it all in Drupal 7 right now with the help of the render_cache module (with some extra work).

Get the edge advantage of knowing more

Learn how to utilize cache contexts to vary the content of your site, cache tags to know perfectly when items are expired and cache keys to identify the objects - and what the difference between them is.

Learn how powerful ‘placeholders’ will allow the perfect ESI caching you always wanted and how it will all be very transparent and how you can make your modules ready for the placeholder future.

See with your own eyes how you can utilize all of that functionality now on your Drupal 7 and 8 sites.

Get ready for a new era of performance

We will show you:

  • The biggest Do’s and Don’ts when creating render-cache enabled modules and sites
  • Frontend performance pitfalls and why front-end performance is tied to backend performance more than you thought
  • Why libraries[] are so great and why single CSS/JS files make trouble.
  • Common scenarios and how to solve them (mobile sites variation, cookie variation, etc.)
  • Drupal using an intelligent BigPipe approach

Get to know the presenters

This session will be presented by Wim Leers and Fabian Franz. Wim implemented a lot of what we show here in Drupal 8, made the APIs easy and simple to use, and made cache tags and #post_render_cache a very powerful concept. Fabian has prototyped a lot of these concepts in his render_cache module, introduced powerful Drupal 8 concepts into Drupal 7, and is always one step ahead in making the next big thing. Together they have set out on a crusade to rule the Drupal performance world and bring you the fastest Drupal ever!
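
To make the cache tags mentioned in the session description concrete, here is a toy illustration in Python (not Drupal's actual API): entries are stored with a set of tags, and invalidating one tag expires every entry that carries it.

class TagCache:
    def __init__(self):
        self.entries = {}  # key -> (value, tags)

    def set(self, key, value, tags):
        self.entries[key] = (value, set(tags))

    def get(self, key):
        entry = self.entries.get(key)
        return entry[0] if entry else None

    def invalidate_tag(self, tag):
        # Drop every cached item tagged with `tag`
        self.entries = {k: v for k, v in self.entries.items()
                        if tag not in v[1]}

cache = TagCache()
cache.set("node:5:full", "<article>...</article>", ["node:5", "user:3"])
cache.invalidate_tag("node:5")  # node 5 was edited: its renderings expire
assert cache.get("node:5:full") is None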

by Wim Leers at May 13, 2015 04:45 PM

May 12, 2015

Frederic Hornain

[I am an Enterpriser] How can CIOs who leverage relationships drive the business forward ?

EnterpriserProject

I agree 100% with what Lisa Davis, CIO of Georgetown University, explains in the following video about how the role of the CIO is changing.

I would like to take this opportunity to propose the following:

If you are a CIO/CEO of a small, medium or large company based in Belgium or Luxembourg and are interested in better understanding Open Source and why it is going to change the business, "YOUR BUSINESS", then feel free to contact me via my LinkedIn profile[1].

On my side, I would be really interested to better understand your needs, constraints and challenges.

OK, it is true that I work for Red Hat[2], but I can guarantee you we will have an objective and neutral conversation.

We will only talk about technology, Open Source and what they can bring to drive your business forward.

[1] https://be.linkedin.com/in/fhornain

[2] http://www.redhat.com/en

Kind Regards

Frederic

by Frederic Hornain at May 12, 2015 06:21 PM

May 11, 2015

Paul Cobbaut

new data center

My new data center is under construction (two Raspberry Pi 2s and one Raspberry Pi Model B).


by Paul Cobbaut (noreply@blogger.com) at May 11, 2015 09:08 PM

Frederic Hornain

Red Hat JBoss Enterprise Application Platform version 6.4 release

EAP6.4_images

In today's fast-moving, demanding economy, organizations are using DevOps and bi-modal IT initiatives to compete and achieve the next level of developer productivity. They also seek complementary, flexible technologies that enable them to experiment, fail fast, and still deliver innovations on time.

With new support for deploying a JBoss EAP subscription across multiple environments, customers can now better tailor their applications based on their individual business requirements: for example, JBoss EAP in traditional on-premise or virtualized environments, and/or the newly-renamed Red Hat JBoss Enterprise Application Platform for xPaaS* in OpenShift Enterprise.

JBoss EAP 6.4 also includes:

  1. Support for Java 8 applications
  2. Support for the Java API for WebSockets,
  3. Support for JSR 356,
  4. Features that enable developers to build real-time, rich client and mobile applications with reduced overhead and complexity.

Ref :

http://wwpi.com/red-hats-jboss-enterprise-application-platform-6-4-offers-elasticity-to-move-into-the-cloud/

http://www.redhat.com/en/about/press-releases/red-hat-expands-jboss-enterprise-application-platform-subscription-greater-flexibility-move-cloud

Kind Regards

Frederic


by Frederic Hornain at May 11, 2015 08:42 PM

Kris Buytaert

On the importance of idempotence.

A couple of months ago we were seeing weird behaviour with Consul not knowing all its members, at a customer where we had deployed Consul for service registration as a POC.
The first couple of weeks we hadn't noticed any difficulties, but after a while we had the impression that the number of nodes in the cluster wasn't stable.

Obviously the first thought is that such a new tool probably isn't stable enough, so it's expected behaviour, but rest assured that was not the case.

We set out to monitor the number of nodes frequently, with a simple cron job feeding a graph.

NOW=`date +%s`
HOST=`hostname -f`
MEMBERS=`/usr/local/bin/consul members | wc -l`

echo "consul_members.$HOST $MEMBERS $NOW" | graphite 2003

It didn't take us very long to see that the number of members in the cluster indeed wasn't stable: frequently there were fewer nodes in the cluster, and then slowly the expected number of nodes came back on our graph.

Some digging taught us that the changes in the number of nodes were in sync with our Puppet runs.
But we weren't reconfiguring Consul anymore; there were no changes in the configuration of our nodes.
Yet Puppet triggered a restart of Consul on every run. The restart happened because Puppet thought it had rewritten the Consul config file.
Which was weird, as the values in that file were the same.

On closer inspection we noticed that the values in the file didn't change; however, the order of the values in the file changed. From a functional point of view that did not introduce any changes, but Puppet rightfully assumed the configuration file had changed and thus dutifully restarted the service.

The actual problem lay in the implementation of the writing of the config file, which was JSON: the ancient Ruby library just took the hash and wrote it in no specific order, each time potentially resulting in a file with the content in a different order.

A bugfix to the Puppet module made sure that the hash was written out in a sorted way, so each run resulted in the same file being generated.
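
The fix boils down to making the serialization deterministic. Here is a minimal Python sketch of the idea (the actual fix was in the Ruby JSON writer used by the Puppet module, and the settings below are hypothetical stand-ins for the real config hash):

import json

# Hypothetical Consul settings, standing in for the real config hash.
config = {"datacenter": "dc1", "server": True, "bootstrap_expect": 3}

# Writing the hash unsorted may order keys differently on every run, which
# config management sees as a changed file and answers with a restart.
# Sorting the keys makes the output deterministic, and thus idempotent.
with open("consul.json", "w") as f:
    json.dump(config, f, indent=2, sort_keys=True)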

After that bugfix, our graph of the number of nodes in the cluster obviously flatlined, as restarts were no longer being introduced.

This is yet another example of the importance of idempotence. When we trigger a configuration run, we want to be absolutely sure that it won't change the state of the system if the system is already in the desired state. Rewriting the config file should only happen if it gets new content.

The yak is shaved... and sometimes it's not a funky DNS problem, but just a legacy Ruby library one.

by Kris Buytaert at May 11, 2015 06:06 PM

Lionel Dricot

Printeurs 29

This is post 29 of 29 in the Printeurs series.

In the police station where he found refuge, Nellio has befriended Junior Freeman, the officer who saved his life. Together, they decide to print the mysterious contents of the memory card that Eva had implanted under Nellio's skin. But to reach the printeur before Georges Farreck, they will have to use an avatar, a robot into which police officers upload their minds.

I open my eyes and stare, astonished, at the concrete walls of the cubbyhole. Even though I was expecting it, the sensation remains particularly surprising. A diffuse feeling of panic runs through my body. My body? Or rather this artificial body momentarily controlled by my mind. This mechanical assembly locked inside an oppressive concrete coffin.

— The exit is in front of you! Don't waste time. If necessary, I'll send you the video feed from the Farreck squad.

Junior's voice is strange, so close and yet so distant. He insisted that I take his place in the avatar. He would not have been able to operate the printeur without hesitation or guidance from me. And every second may be critical.

I take a deep breath. With which body? No time to answer that question right now. I move forward.

Walking and opening the door turn out to be incredibly intuitive. I have barely taken a few steps in the open air before the idea of being in an artificial body vanishes. By reflex, I turn my face toward the sun. The weather is beautiful. Is it my imagination, or did I really smell that odor of softened asphalt, of baked tarmac, so characteristic of cities on hot days?

— Nellio, stop daydreaming! Georges Farreck is getting closer and your girlfriend hasn't intercepted him yet!

Obeying the order, I start running through the familiar alleys. As I pass, pedestrians fearfully step aside without asking themselves any questions. After all, what could be more natural than a running police officer?

The speed of my run surprises even me. In a few bounds, I reach the entrance of our old hideout. Crossing the small living room and the devastated laboratory, I find myself facing the overturned nitrogen fridge. Effortlessly, I lift it and clear the entrance of the cubbyhole where Max had put me through the famous multi-modal scanner to which I most likely owe my amnesia. But why would Max have done that? Was it really Max, after all?

Seeing these places again, I have a sudden flash of understanding: I am not amnesiac! I was kept, drugged and fed, for several months. Someone else took my place, probably to extract information from Georges Farreck. Unless he was an accomplice himself? And, in that case, who had an interest in hiding me in a place Georges Farreck didn't know? Max, of course! To protect me! Georges Farreck probably had me assassinated or, at the very least, had my double assassinated! It all fits!

— Nellio, you have to see this. I think your girlfriend made it!
An image suddenly appears in my field of vision. It is filmed from inside the police vehicle. In it, Georges Farreck is looking out of a window. Fists are pounding on the bodywork.
— Georges Farreck! Georges Farreck!
— There are too many of them, we can't move forward anymore.
— But how could they know about my presence? It's incomprehensible!
— It reeks of a setup. I'll send two or three guys to try to identify the ringleaders; it won't take long.

That's something gained, at least, I murmur. Entering the windowless room, I start checking the state of the printeur. The frame has been knocked over but seems intact. The printing tank, however, broke during my brutal awakening. I try to think at full speed. The liquid is not a problem. We just have to print it: it is self-generating. The tank is more problematic. It has to be watertight, and we had no spare.

— The tank is broken! No way to print!

Did my voice come out of my avatar or out of my abandoned body? Both, perhaps? Whatever the case, Junior's disembodied answer reaches me immediately.

— What do you need?
— A watertight container.
— What size?
— The size of the object on that damn memory card.
— In short, you have no idea.
— No. For all I know, it's as big as the room!

A sudden intuition runs through me. Returning to the devastated lab, I run toward the tiny corner we familiarly called the "cafeteria". The area has been more or less spared, and I easily find the remains of the collapsed table.
— It's still here!
With one gesture, I grab the tablecloth. An indestructible oilcloth tablecloth, the kind you can't find in any shop but which spontaneously appears on your kitchen table the day you have grandchildren. Maybe they come with the "grandma's herbal tea" kit? Back in the secret room, I start arranging tables so as to mark out a closed space right on the floor. Over it all, I spread the tablecloth. It could cover a table for eight.
— There! A genuine luxury bathtub.
— Nellio, I have bad news. Take a look at what's happening over at Georges Farreck's!
— Isabelle!

In my field of vision appears an image of Isabelle surrounded by two police officers. She is screaming:
— Georges Farreck! Let me talk to Georges Farreck! I have revelations for him.
Georges' voice rings in my ears, extremely close.
— Bring me that woman!
— But she's a hysterical télé-pass, probably one of your fans. She just wants to rape you or something.
— You're capable of protecting me, aren't you? This crowd blocking our way doesn't seem like a coincidence to me.
A gloved hand appears on the screen and signals to the other officers. Isabelle is led in without ceremony. I can make out her disheveled face, her ruddy cheeks. Her breathlessness is visible. She stops for a moment, dumbfounded.
— Oh shit! Georges Farreck! The Georges Farreck! My panties are dripping! I... I've seen all your films, I adore you!
Georges can't help flashing a charming smile. His teeth sparkle.
— Thank you, that's very kind of you. I'm flattered. But you were telling me about a revelation?
— Yeah, right, are you going to shoot a new film here in town?
— I don't know yet, why do you ask?
— Because, see, I was told to come play the fan on account of your film. An obligation, they said. But I'm not stupid. I can tell it's something else.
— Wait, I'm not sure I follow. You mean someone asked you to gather people to cheer me here?
— That's right!
— For what purpose?
— Dunno. And that's what seems weird.
— And why did you do it?
— Well, it's an obligation. I don't want to lose my benefits. But I figure if I help you, maybe you can help me in return. I always knew I'd be a star. I could act in your films.
— Who gave you this obligation?
— Hold on, buddy, first we negotiate what I get in exchange!

I burst out laughing. Good old Isabelle. She has pulled off the feat of delaying Georges Farreck while extorting some advantage from him.
— Nellio, don't dawdle! Isabelle is buying us an unhoped-for respite, but the police really aren't far away.

Mechanically, I set the printeur back up. With a press on the keyboard, I start printing the printing liquid. I make a mental note to optimize the algorithm so the liquid is printed dynamically, depending on the object to be processed.

— Connect to the computer so I can upload the file to print!
— Connect? But how?
— Avatars have most of the standard ports. Look inside your torso.

It is the first law of the electric era. Ever since it became possible to plug two devices into each other, the format of connectors has evolved in a manner as explosive as it is irrational, everyone trying to create the one standard format that everybody will use. In the end, every terminal implements some fifteen ports, in the hope of an intersection with the fifteen implemented by the terminal across from it.

The second law states that it is always the last cable you try that fits into the hole. A law which, once again, proves empirically correct.

— There, I'm plugged in!
— File uploaded, now transferring to the computer.
— What? That fast? That's not possible!
— Avatars don't go through the traditional network. Too dangerous. Besides, the room you're in seems to be a perfectly isolated Faraday cage.
— But...
— Each avatar is linked to the center by quantum entanglement. Two photons emitted at the same moment. One is stored in the avatar, the other at the control center, all thanks to light decelerators. This allows instantaneous communication whose speed is theoretically unlimited.
— I thought that was still just a prototype!
— That's the advantage of working for a high-end police station!
Stunned, I try to refocus on my task.
— Right, I'm starting the print!
— Shit! The cops! They're here, I got distracted! Nellio!

The sound of an explosion suddenly rings out in the entrance of the laboratory.

Photo by Trey Ratcliff.

Thank you for taking the time to read this freely-paid post. Feel free to support me with a few milliBitcoins or a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

by Lionel Dricot at May 11, 2015 05:59 PM

Mattias Geniar

Who Came First: The Source Code Or The Compiler?

Let's get philosophical for a second.

The dilemma of who came first, the chicken or the egg, is an old one. And still up for debate.

From a modern scientific perspective, the chicken came first because the genetic recombination that produced the first "chicken" occurred in germ-line cells in a non-chicken ancestor.

Another literal answer is that "the egg" in general came first, because egg-laying species pre-date the existence of chickens.

To others, the chicken came first, seeing as chickens are merely domesticated red junglefowls.
Chicken or the egg

Do we have the same philosophical question in IT?

After all, imagine the following scenario for compiling the gcc compiler from source.

  1. Download gcc source code
  2. Configure the different options of gcc
  3. Compile gcc from source ... using gcc?

How do you compile gcc from source when it requires gcc in the first place? As the installation docs describe it:

When configuring a native system, either cc or gcc must be in your path or you must set CC in your environment before running configure. Otherwise the configuration scripts may fail.
gcc configuration

When you try to compile gcc without the gcc binary present, the build will indeed fail.

$ ./configure
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
...
configure: error: in `/usr/local/src/gcc_test/gcc-5.1.0':
configure: error: no acceptable C compiler found in $PATH
See `config.log' for more details.

Is this the chicken-or-the-egg equivalent of software engineering(1)?

(1) Yes, I know there are ways around this, but it just struck me as a funny comparison.

The post Who Came First: The Source Code Or The Compiler? appeared first on ma.ttias.be.

by Mattias Geniar at May 11, 2015 01:31 AM

May 10, 2015

Mattias Geniar

Scan Your WordPress For Security Vulnerabilities With WPScan

If you're comfortable at the CLI, WPScan is super easy to get going.

The project is open source on Github and uses the WPScan Vulnerability Database, an open dataset of known WordPress vulnerabilities.

Installation on a Mac is a piece of cake. Other methods and operating systems are documented on Github.

$ git clone https://github.com/wpscanteam/wpscan.git
$ cd wpscan
$ bundle install --without test

The first time you run wpscan.rb, you'll be prompted to update the vulnerability database.

$ ./wpscan.rb

[i] It seems like you have not updated the database for some time.
[?] Do you want to update now? [Y]es [N]o [A]bort, default: [N]Y

[i] Updating the Database ...
[i] Update completed.

To scan your own site, simply pass the --url parameter.

$ ./wpscan.rb --url https://ma.ttias.be
...
[+] robots.txt available under: 'https://ma.ttias.be/robots.txt'
[!] The WordPress 'https://ma.ttias.be/readme.html' file exists exposing a version number
[+] Interesting header: SERVER: nginx
[+] XML-RPC Interface available under: https://ma.ttias.be/xmlrpc.php
...
[+] WordPress version 4.x.x identified from meta generator
...
[+] Enumerating plugins from passive detection ...
 | 9 plugins found:
...

[+] Finished: Sun May 10 16:19:35 2015
[+] Requests Done: 126
[+] Memory used: 20.738 MB
[+] Elapsed time: 00:00:12

It enumerates all known themes and plugins, detects the WordPress version and gives you a nice summary. In my case, I still had to remove a readme.html file that exposes the version number.

If it happens to find a known vulnerability, you'll be notified in the output. Like the example below:

 ...
[!] Title: Jetpack <= 3.5.2 - DOM Cross-Site Scripting (XSS)
    Reference: https://wpvulndb.com/vulnerabilities/7964
    Reference: https://blog.sucuri.net/2015/05/jetpack-and-twentyfifteen-vulnerable-to-dom-based-xss-millions-of-wordpress-websites-affected-millions-of-wordpress-websites-affected.html

This was from an old WordPress I had lying around that hadn't been updated in a while.

A very useful tool; I'd recommend everyone scan their own site with it at least once!

The post Scan Your WordPress For Security Vulnerabilities With WPScan appeared first on ma.ttias.be.

by Mattias Geniar at May 10, 2015 06:30 PM

May 09, 2015

Mattias Geniar

Under The Hood: Facebook’s Cold Storage System

An interesting look at how Facebook handles the cold storage of its users' data.

Among others, it covers Open Vault Storage and Open Rack, both hardware designs from the Open Compute Project.

We knew that building a completely new system from top to bottom would bring challenges. But some of them were extremely nontechnical and simply a side effect of our scale.

For example, one of our test production runs hit a complete standstill when we realized that the data center personnel simply could not move the racks. Since these racks were a modification of the OpenVault system, we used the same rack castors that allowed us to easily roll the racks into place. But the inclusion of 480 4 TB drives drove the weight to over 1,100 kg, effectively crushing the rubber wheels.

That's a scale many of us can only dream of.

The post Under The Hood: Facebook’s Cold Storage System appeared first on ma.ttias.be.

by Mattias Geniar at May 09, 2015 06:45 AM

May 08, 2015

Frederic Hornain

[Openshift Online] “Visionary” in Gartner Magic Quadrant

Garner_Openshift

Red Hat[1] cited as a “Visionary” in Gartner Magic Quadrant for Enterprise Application Platform-as-a-Service, Worldwide for OpenShift Online[2] Offering

Red Hat has been positioned in the Visionary quadrant of the Gartner, Inc. “Magic Quadrant for Enterprise Application Platform-as-a-Service, Worldwide” for our OpenShift Online public Platform-as-a-Service offering… Red Hat was evaluated against 16 other public PaaS vendors. OpenShift allows a transferable code base between on-premise and public offerings, giving developers better flexibility of choice. “Its robust platform and interoperability with popular middleware development frameworks has enabled more than 2.25 million applications to be created for a variety of business and developer needs.”

Learn More

[1] http://www.redhat.com

[2] https://www.openshift.com/products/online

Kind Regards

Frederic


by Frederic Hornain at May 08, 2015 04:14 PM

Red Hat JBoss Web Server 3.0 is GA

Red Hat JBoss Web Server (JWS) 3.0 is generally available.

This major version release updates Apache httpd and Apache Tomcat to recent versions, including updates to all of the mod_* extensions for httpd and to the version of Hibernate shipped with the JWS Plus product.

Some highlights of the release are:

  • Support for Java 8 with OpenJDK, Oracle JDK, and IBM JDK
  • Addition of Tomcat version 8.0.18
  • Update to Tomcat version 7.0.59
  • Update to Apache httpd version 2.4.6
  • Update to Hibernate 4.2.18
  • Support for WebSockets with Apache and Tomcat.

More information at https://access.redhat.com/articles/111723

Ref :

http://www.redhat.com/en/technologies/jboss-middleware/web-server

Kind Regards

Frederic


by Frederic Hornain at May 08, 2015 03:50 PM

Xavier Mertens

Deobfuscating Malicious VBA Macro with a Few Lines of Python

Deobfuscate

Just a quick post about a problem that security analysts are facing daily… For a while now, malicious Office documents have been delivered with OLE objects containing VBA macros. Bad guys are always using obfuscation techniques to make the analysis more difficult and to (try to) bypass basic filters. This makes the analysis not impossible, but boring and time consuming.

As an example, we see more and more VBA macros with strings obfuscated by encoding characters with the ‘Chr()‘ or ‘ChrW()‘ functions. Check the following piece of code:

Set ertertFFFg = CreateObject(Chr$(77) & Chr$(83) & Chr$(88) & Chr$(77) & Chr$(76) & Chr$(50) & Chr$(46) & Chr$(88) & Chr$(77) & Chr$(76) & Chr$(72) & Chr$(84) & Chr$(84) & Chr$(80))

Once decoded, the variable ‘ertertFFFg‘ is assigned the following value:

Set ertertFFFg = CreateObject("MSXML2.XMLHTTP")

Seeing more and more macros based on this obfuscation technique, I wrote a quick and dirty Python script to help a friend. Currently it supports the following syntaxes:

The script reads the macro from stdin and output the decoded strings to stdout. Feel free to use it, it is available on my github repo.
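
For illustration, here is a minimal Python sketch of the idea (not the actual script from the repo) that folds runs of concatenated Chr()/Chr$()/ChrW() calls back into readable strings:

import re
import sys

# A run of Chr()/Chr$()/ChrW() calls glued together with '&'
RUN = re.compile(r'(?:Chrw?\$?\(\d+\)\s*&\s*)*Chrw?\$?\(\d+\)', re.IGNORECASE)
NUM = re.compile(r'\((\d+)\)')

def decode(match):
    # Convert every character code in the run into one quoted string
    chars = ''.join(chr(int(n)) for n in NUM.findall(match.group(0)))
    return '"%s"' % chars

for line in sys.stdin:
    sys.stdout.write(RUN.sub(decode, line))

Fed the CreateObject() line above, this prints the decoded CreateObject("MSXML2.XMLHTTP") in one pass.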

by Xavier at May 08, 2015 01:31 PM

Wim Leers

The Montpellier Perf Sprint, and what’s next

At least 20 people helped push one or more issues forward in Montpellier, at the Drupal Dev Days Performance Sprint!

Here’s an overview of what we set out to do, what we did, and what the next steps are.

The plan for DDD Montpellier

Drupal Dev Days Montpellier Performance sprint board — midweek

The sprint board, midweek. Yellow stickies are performance stickies. Middle of the board is “in progress”. Bottom right is “fixed”.

1. Finding more performance issues

We already know that certain things are slow, and we know how to fix them. But significant portions of the slowness do not yet have explanations, let alone precise or even rough plans for reducing them.
The parts of Drupal 8 that are slow and that we do not yet have a strong grasp on are the bootstrap phase in general, but also routing, container services and route access checking.

Berdir, amateescu, dawehner, yched, znerol, pwolanin and I did a lot of profiling, testing hypotheses about why certain things took a given amount of time, comparing to Drupal 7 where possible, figuring out where the differences lay, and so on.
Bits and pieces of that profiling work1 are in https://www.drupal.org/node/2470679, including patches that help profile the routing system.

In the weeks since DDD Montpellier, effulgentsia and catch have continued discussions there, posted further analyses and filed many more issues about fixing individual issues.

2. Try to break Drupal 8’s internal page cache & fix it

So, Drupal 8 had the internal page cache enabled by default shortly before DDD Montpellier, with only a few known problems. Having many people try to break it, using scenarios from their daily jobs, is the ideal way to find any remaining cache invalidation problems.

Many tried, and most did not succeed in breaking it (yay! :)), but about half a dozen problems were discovered. See https://www.drupal.org/node/2467071.

What we got done

We fixed so incredibly many issues, and we had more than twenty people helping out! Notably, fgm was all over class loading-related issues, borisson_ and swentel got many of the page cache issues fixed, pwolanin pushed routing/REST/menu links issues forward significantly, and we overall simply made progress on many fronts simultaneously.

We made Drupal 8’s authenticated user page loads several percent faster in the course of that week!

Most of the page cache problems that were discovered (see above) were fixed right at the sprint! There are 4 known issues left, of which one is critical on its own, one is blocked, and the two others are very hard.

(If you want more details, we have day-by-day updates on what got done.)

Drupal Dev Days Montpellier Performance sprint board end result

The sprint board, end result. Focused on the bottom right corner: look at all those yellow stickies!

Next steps

We currently have 11 remaining criticals with the “Performance” tag. Getting that to zero is our top priority. But many in that list are difficult.

If you specifically care about performance for authenticated users: less difficult issues can be found in the child issues of the Cache contexts meta issue. And for some least difficult issues, see the child issues of the SmartCache issue.

Generally speaking, all major issues tagged with either “Performance” or “D8 cacheability” can use a hand.

Hopefully see you in the queues! :)


  1. It was impossible to capture all things we considered in an issue, that’d have slowed us down at least tenfold. 

by Wim Leers at May 08, 2015 09:37 AM

May 07, 2015

Mattias Geniar

In Defence Of WordPress

The internet is verbally attacking WordPress again. I read a lot of hate towards WordPress for its latest security vulnerabilities that have become public.

leave_wordpress_alone_meme

What I don't see is praise in how those updates are handled and distributed to its millions of users.

Cross-Site Scripting Vulnerabilities

In the last two weeks, three major security releases have been announced by the WordPress team:

Oh my, WordPress must pose a security risk, right?!

The Magical Release: WordPress 3.7

I was skeptical when they first announced this, but automatic background updates as featured in the 3.7 release are amazing.

Automatic background updates were introduced in WordPress 3.7 in an effort to promote better security, and to streamline the update experience overall. By default, only minor releases – such as for maintenance and security purposes – and translation file updates are enabled on most sites. In special cases, plugins and themes may be updated.

If you read the comments on Twitter, security blogs and even major news sites, you would expect the internet to have crashed and burned by now, with all the WordPress security vulnerabilities.

But that magical feature saved the internet from a lot of problems. That feature, that most WordPress users take for granted, is the single best thing ever to happen to WordPress.

And to think I questioned it at launch. What happens when your auto-update breaks all sites? What happens if an update is pushed that introduces more vulnerabilities or backdoors? What if WordPress.org is ever compromised and attackers can influence that update?

None of those scenarios has happened. At least, not yet. But WordPress' track record is solid.

Patching several million websites

WordPress is popular. It powers millions of sites, small and big. This puts it in a position where it's bound to attract some unwanted attention. Once a critical WordPress vulnerability comes out, the update is pushed to those millions of sites within hours.

Hours.

Let that sink in for a while. After a few hours, WordPress administrators that left the auto-update enabled (which it is, by default), receive an e-mail like this.

wordpress_auto_updated

Just to put that into perspective, the steps to update Drupal core contain 13 instructions, among which:

5. Delete all the files & folders inside your original Drupal instance except for /sites folder and any custom files you added elsewhere.

6. Copy all the folders and files except /sites from inside the extracted Drupal package [tar ball or zip package] into your original Drupal instance.

WordPress users get that automatically.

Disabled auto-updates? Just log in and click a single button.

wordpress_update_now

Does the update need a database schema change or upgrade? A single button.

wordpress_database_upgrade

Want to update your installed plugins to the latest version? A single button.

wordpress_update_plugins

Your themes? A single button.

wordpress_update_themes

Let that sink in for a while.

The Punching Bag

At PHP conferences, WordPress often serves as a punching bag.

Nearly every talk that discusses code quality brings in WordPress and compares it to other frameworks. WordPress always ends up at the bottom.

Yet here it is, powering the internet. Patching millions of sites in less than 24 hours.

As much as I appreciate the other frameworks, WordPress is by far the best at handling security incidents. Magento? Don't get me started. Drupal? Your average user has no idea how to apply patches. I'm certain Drupalgeddon did far more damage than the three recent WordPress vulnerabilities combined.

Joomla? Typo3? Each and every one can learn from WordPress.

Thanks, WordPress

I for one would like to thank WordPress. Besides powering this blog, it powers thousands of our clients. And thanks to this auto-update feature, each and every one of those is safer.

For all the hate the internet redirects to WordPress and for all the punches it has to take, I think there's far too little appreciation for everything WordPress does.

Thanks guys, keep it up.

The post In Defence Of WordPress appeared first on ma.ttias.be.

by Mattias Geniar at May 07, 2015 08:29 PM

Xavier Mertens

The Art of Logging

Logfiles

[This blogpost has been published as a guest diary on isc.sans.org]

Handling log files is not a new topic. People have known for a long time that taking care of your logs is a must. They are very valuable when you need to investigate an incident. But if collecting events and storing them for later processing is one point, events must also be properly generated to be able to investigate suspicious activities! Take a firewall as an example… Logging all the accepted traffic is one step, but what's really important is to log all the rejected traffic. Most modern security devices (IDS, firewalls, web application firewalls, …) can integrate dynamic blacklists maintained by external organizations. There are plenty of useful blacklists on the internet with IP addresses, domain names, etc. It's quite easy to add a rule on top of your security policy which says:

if (source_ip in blacklist):
    drop_traffic()

The “blacklist” table is populated by an external process. Usually, this rule is defined at the beginning of the security policy for performance reasons. Very efficient, but is it the right place?
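
That external process can be as simple as a periodic fetch of a published list. A minimal Python sketch, using a placeholder feed URL (a hypothetical example, not a specific recommendation):

import urllib.request

FEED_URL = "https://example.org/blacklist.txt"  # hypothetical feed

def fetch_blacklist(url=FEED_URL):
    # One IP address or domain per line; skip comments and blank lines
    with urllib.request.urlopen(url) as resp:
        lines = resp.read().decode().splitlines()
    return {l.strip() for l in lines if l.strip() and not l.startswith("#")}

blacklist = fetch_blacklist()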

Let's assume a web application firewall which has this kind of feature. It will drop all connections from a (reportedly) suspicious IP address right at the start, without further details. Now let's put the blacklist rule at the end of our WAF's policy instead. We get something like this:

if (detected_attack(pattern1)):
    drop_traffic()
elif (detected_attack(pattern2)):
    drop_traffic()
elif (detected_attack(pattern3)):
    drop_traffic()
elif (source_ip in blacklist):
    drop_traffic()

If we block the malicious IP addresses at the beginning of the policy, we’ll never know which kind of attack has been tried. By blocking our malicious IP addresses at the end, we know that if one IP is blocked, our policy was not effective enough to block the attack! Maybe a new type of attack was tried and we need to add a new pattern. Blocking attackers is good but it’s more valuable to know why they were blocked…

by Xavier at May 07, 2015 04:12 PM

Mattias Geniar

The (Lack Of?) Durability in SSDs

We all love our fast SSDs. But has our adoption of SSDs blinded us to their durability limits?

A stored SSD, without power, can start to lose data in as little as a single week on the shelf.

[...]

For client application SSDs, the powered-off retention period standard is one year while enterprise application SSDs have a powered-off retention period of three months.
SSD Storage -- Ignorance of Technology is No Excuse

To cover the durability loss of SSD cells, most SSDs ship with more capacity than they actually expose. The firmware is smart enough to use those spare cells once the live cells start to show their age.

Some SSD vendors devote more of the flash to overprovisioned spare area that's inaccessible to the OS but can be used to replace blocks that have become unreliable and must be retired.
Introducing the SSD Endurance Experiment

But this assumes SSDs are in use so the firmware can manage the block distribution.

Is the potential for data loss as mentioned by KoreBlog a real concern for SSD drives lying on the shelf?

In datacenter environments this won't be much of an issue, but for home PC users that have a computer that hasn't been turned on for a while?

For them, it just might be a disaster.

The post The (Lack Of?) Durability in SSDs appeared first on ma.ttias.be.

by Mattias Geniar at May 07, 2015 02:42 PM

May 06, 2015

Frederic Hornain

Red Hat Developer Toolset 3.1 Now Available

DeveloperToolset

Latest, most stable versions of open source developer tools bridge developer productivity and production stability.

New to Red Hat Developer Toolset 3.1 are:

As with all versions of Red Hat Developer Toolset, Red Hat Developer Toolset 3.1 allows for the creation of applications compatible with both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 across physical, virtual and cloud environments, including OpenShift, Red Hat’s award-winning Platform-as-a-Service (PaaS) offering. Additionally, these tools are delivered on a lifecycle separate from that of Red Hat Enterprise Linux, helping developers stay up-to-date with the latest innovations while retaining deployment stability.

Ref

https://access.redhat.com/solutions/218273

http://www.redhat.com/en/about/press-releases/red-hat-developer-toolset-31-now-available

Kind Regards

Frederic


by Frederic Hornain at May 06, 2015 09:05 PM

Mattias Geniar

uBlock Origin

I've tested this the last few weeks and I'm not going back to Adblock Plus. Welcome to the club, uBlock Origin.

uBlock is a general-purpose blocker — not an ad blocker specifically.

uBlock blocks ads through its support of the Adblock Plus filter syntax. uBlock extends the syntax and is designed to work with custom rules and filters.

...

On average, uBlock really does make your browser run leaner. [1]
uBlock on Github

Its open source mentality, the speed and memory consumption, ... I've had nothing but good experiences from uBlock.

ublock_cpu_time

My browser feels snappy again.

Remove Adblock Plus and install uBlock, you won't regret it.

Update 10/5/2015: uBlock has been renamed to uBlock Origin

Many thanks to @haploc who let me know in the comments!

In a Reddit post, the reasoning behind the move from uBlock to uBlock Origin is further explained.

The plugin was originally created by gorhill and got help from chrismatic. It seems both had mixed intentions for the project, resulting in the rename.

-- gorhill got tired of dozens of "my facebook isnt working plz help" issues.
-- he handed the repository to chrismatic while maintaining control of the extension in the Chrome webstore (by forking chrismatic's version back to himself).
-- chrismatic promptly added donate buttons and a "made with love by Chris" note.
-- gorhill took exception to this and asked chrismatic to change the name so people didn't confuse uBlock (the original, now called uBlock Origin) and uBlock (chrismatic's version)
-- Google took down gorhill's extension. Apparently this was because of the naming issue (since technically chrismatic has control of the repo).
-- gorhill renamed and rebranded his version of ublock to uBlock Origin.
/r/chrome/

So the original uBlock extension got handed over to uBlock Origin. Time to update your plugin!

Save yourself memory, CPU and get a smoother web browsing experience in return. Get uBlock Origin.

The post uBlock Origin appeared first on ma.ttias.be.

by Mattias Geniar at May 06, 2015 06:42 PM

Frederic Hornain

Red Hat Expands JBoss Enterprise Application Platform Subscription with Greater Flexibility to Move into the Cloud

JEE App. Go To The Cloud

Enterprises are under pressure to deliver new applications fast; however, many factors, including rigid proprietary stacks, inflexible licensing agreements, and cultural silos in IT can prevent enterprises from achieving the agility they need to stay competitive. Enterprises are increasingly implementing DevOps methodologies, and technologies that complement them, to break down siloed communications between development and operations teams and accelerate application development and delivery. As DevOps adoption increases, so does the demand for technologies that complement DevOps methodologies and enable high productivity of developers and operations teams working closely together.

“Although DevOps emphasizes people (and culture) over tools and processes, implementation utilizes technology. As a result, Gartner, Inc. expects strong growth opportunities for DevOps toolsets, with the total for DevOps tools reaching $2.3 billion in 2015, up 21.1 percent from $1.9 billion in 2014. By 2016, DevOps will evolve from a niche strategy employed by large cloud providers to a mainstream strategy employed by 25 percent of Global 2000 organizations.”1

Red Hat is committed to helping enterprises get more out of their technology by allowing for greater freedom of choice. JBoss EAP supports a broad range of third-party frameworks, operating systems, databases, security, and identity systems to make integration into existing infrastructure easier. In addition, new subscription flexibility expands support for customers deploying JBoss EAP across multiple environments based on their individual needs and business requirements, including on-premise, in a Platform-as-a-Service (PaaS) environment, or in hybrid cloud scenarios.

Additional enhancements introduced in JBoss EAP 6.4 include:

As one of the only open source application platforms that commercially supports Java EE applications deployed in PaaS environments, JBoss EAP deployed on OpenShift provides developers with a fully certified Java EE 6 container and all the tools needed to build, run, and manage a wide range of Java applications. The combination of JBoss EAP and OpenShift Enterprise helps enterprises to optimize both development and operations by offering the ability to build enterprise-grade Java applications in a streamlined PaaS environment more quickly.

Ref :

http://www.redhat.com/en/about/press-releases/red-hat-expands-jboss-enterprise-application-platform-subscription-greater-flexibility-move-cloud

Kind Regards

Frederic


by Frederic Hornain at May 06, 2015 06:41 PM

Building enterprise Java apps in the cloud

CloudAndApplication

  • Do you want to build and deliver Java™ EE apps faster?
  • Are your customers demanding more?
  • What if you could develop new business critical Java EE applications using your existing skills and deliver them faster?

http://www.youtube.com/watch?v=Iquk68o1Q-k

Learn how Red Hat accelerates application development and delivery, allowing you to get to market faster and deliver new features and value more frequently.

Learn more at:

Kind Regards
Frederic


by Frederic Hornain at May 06, 2015 06:03 PM

Les Jeudis du Libre

Mons, May 21: Processing for visual arts, interactive graphics and much more!

Processing (logo source: Wikimedia)

This Thursday, May 21, 2015 at 7 pm, the 39th Mons session of the Belgian Jeudis du Libre will take place.

The topic of this session: Processing for visual arts, interactive graphics and much more!

Theme: Programming|Education|Community

Audience: everyone

Speaker: Martin Waroux, Arts² and Numediart (UMONS)

Venue: HEPH Condorcet, Chemin du Champ de Mars 15, 7000 Mons, Auditorium 2 on the ground floor (see the map on the Openstreetmap website; NOTE: the entrance is barely visible from the main road, it is located in the corner formed by a very large parking lot).

Attendance is free and only requires registration by name, preferably in advance, or at the entrance. Please indicate your intention by registering via http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also enjoy the support of our partners: CETIC, Normation, OpenSides, MeaWeb, NextLab, Phonoid and Creative Monkeys.

If you are interested in this monthly series, feel free to consult the agenda and to subscribe to the mailing list so you systematically receive the announcements.

As a reminder, the Jeudis du Libre are meant as spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month, and are organized in premises of, and in collaboration with, the Mons universities and colleges involved in IT education (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description: Simplicity is the very essence of the Processing project, in how programs are written, in the interface, in the syntax and in how it runs. It is used by many digital creators around the world and makes it possible to approach real-time programming, and therefore interactivity, very quickly. Out of the box, it supports graphic creation and geometry, 2D and 3D animation, image processing, and keyboard/mouse interaction. Then, with extensions that considerably expand its possibilities, it becomes easy to generate and manipulate sound, video and webcams/cameras. It is also possible to do data processing, data visualization, facial recognition, use physics engines, drive an Arduino board, use a Kinect, exchange data over the network, and so on.

Thanks to its simplicity and ease of use, Processing is thus perfectly suited to learning how to code, but it can also satisfy programming aficionados wishing to develop more interactive and creative projects, or even to quickly prototype an idea.

Besides discovering Processing and how it works, we will have the chance to see artistic and/or technical projects that use it. We will also take the opportunity to explore other initiatives close to Processing, extensions and alternative projects, some of which can even work hand in hand with it. Finally, as Processing is widely used in teaching code, we will mention a series of initiatives for learning and discovering the basics of programming for the youngest (and not so young), in connection with the previous talk of October 16, 2014: “Apprendre à programmer à l’école : pourquoi et comment ?” by Martin Quinson.

by Didier Villers at May 06, 2015 05:24 AM

May 05, 2015

Mattias Geniar

French Law Forces Backdoors On ISPs

This law just got approved in France. 438 votes in favor, 86 against, 42 abstained.

The original, French, version is here: ASSEMBLÉE NATIONALE 2669.

A google translated version reads as follows.

Article 6 [...]

It is also stated that operators and service providers will, if necessary, be able to observe the provisions governing the secrecy of national defense.

Finally, Article L. 871-4 provides that CNCTR the members and agents can penetrate, for control purposes, on the premises of operators and service providers.

Article 7 also moves, adapting in the new Book VIII of the Code of internal security of existing criminal provisions, including the fact that repress from revealing that information technology is implemented or refusal to transmit login data whose collection has been authorized.

Every ISP or hosting provider in France should be worried. OVH, one of the biggest hosting providers in the world, was already threatening to leave France. If they're looking to store their servers physically nearby, maybe we can partner up.

But all kidding aside, I'm curious what they'll do now, since the law has been approved.

A translated post on OVH's statement shows some of the real dangers of this law.

... Requiring French hosts to accept real-time capture of connection data and the establishment of "black boxes" with blurred in infrastructure, means giving French intelligence services access and visibility into all data traveling over networks. This unlimited access insinuate doubt among customers of hosting providers on the use of these "black boxes" and the protection of their personal data.
OVH.com

We can all argue in favor of end-to-end encryption, SSL everywhere, ... but that doesn't change the fact that your government is forcing the internet providers in your country to install and maintain a backdoor, so French law enforcement can intervene and spy at their own choosing.

It's a sad day for France and a sad day for ISPs and hosting providers in general.

Hat tip to @FredericJacobs for bringing this to my attention.

The post French Law Forces Backdoors On ISPs appeared first on ma.ttias.be.

by Mattias Geniar at May 05, 2015 04:21 PM

Frank Goossens

Music from Our Tube; Stromae live in the KEXP studio

Proud to be Belgian! Stromae live in the KEXP studio a couple of days ago:

[Embedded YouTube video: watch it on YouTube or on Easy Youtube]

by frank at May 05, 2015 02:26 PM

Mattias Geniar

Security In Medical Equipment

This isn't the first occurrence and it sure won't be the last, either.

Hospira Lifecare PCA infusion pump running "SW ver 412" does not require authentication for Telnet sessions, which allows remote attackers to gain root privileges via TCP port 23.

CVE-2015-3459

Imagine having an infusion pump that someone can remotely control. Power on, power off? Increase or decrease the supply?
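
The bar really is that low. As a sketch (the IP address below is made up for illustration, and you should obviously never probe equipment you don't own or aren't authorised to test), "exploiting" this is a single command:

# Hypothetical address. If the device greets you with a root shell
# instead of a login prompt, anyone on the network owns the pump.
nc -v 192.168.1.50 23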

How is security not a top priority for anything medically related? Even remote surgery equipment (the actual robotic hands someone can control from the other side of the world) has known security issues.

The post Security In Medical Equipment appeared first on ma.ttias.be.

by Mattias Geniar at May 05, 2015 10:51 AM

May 04, 2015

Mattias Geniar

What’s The Value Of Owning A Browser?

Perhaps a better question: why would Microsoft still invest development efforts in a new Internet Explorer?

First, I must admit a wrong judgement on my part. Back when Project Spartan was announced, I called it the new IE6. As more and more details emerge, that turns out to be a wrong assessment.

Since its announcement, Project Spartan has been renamed to Microsoft Edge. And it's actually looking pretty good, from a technical point-of-view.

But I can't stop wondering: why would Microsoft still invest resources in a new browser?

The Decline Of Internet Explorer

I have access to the Google Analytics of a fair number of websites. If I compare the technology stack of their visitors, most importantly the User-Agent, we can see Internet Explorer continuing to decline.

For instance, here are the User-Agent (browser) statistics of this blog. Target audience: highly technical.

[Charts: browser share, 2013-2014 vs. 2014-2015]

Internet Explorer: from 10.55% to 4.96%.

A large automotive website. Target audience: normal people.

[Charts: browser share, 2013-2014 vs. 2014-2015]

Internet Explorer: from 37.73% to 29.76%.

A large entertainment website. Target audience: normal people.

[Charts: browser share, 2013-2014 vs. 2014-2015]

Internet Explorer: from 19.98% to 13.39%.

Our company website. Target audience: mixed technical and normal people.

[Charts: browser share, 2013-2014 vs. 2014-2015]

Internet Explorer: from 22.37% to 18.30%.

I think I've got a good mix of content sites in my examples. Suffice it to say, Internet Explorer is declining in each and every segment. Chrome is winning in each and every segment, with Firefox a close second.

Every geek could probably have predicted this, without any numbers.

Betting On A Browser

What does Microsoft have to gain by building their own browser again? Their reputation amongst web developers, thanks to Internet Explorer, isn't great. There's a reason why websites are developed in either Chrome or Firefox (debug tools, compatibility, ...) and not in IE.

Even today, we're cursing at Internet Explorer for its slow adoption of web standards, its mix of old and new versions, its quirks and all the workarounds we need to make modern web development work.

Why does Microsoft risk this all again with a new browser?

Or perhaps a better question: with Microsoft using existing technology to build tools like the new Visual Studio Code (using Github's Atom as its foundation), why would they go and create an entirely new rendering engine for the web?

Why not use existing engines with a proven track record, like Webkit, Blink or Gecko? Why risk your image even more, when there are viable alternatives?

Monetising The Browser

There's only one way to make money from building a web browser: advertising.

In Microsoft's case, it's by pushing Bing as the default search engine and having ads be displayed in the search results.

I would even go so far as saying that without Internet Explorer/Edge, there would be no Bing. The search market share is dominated by Google. If Microsoft didn't push a browser that defaulted to its own search engine, Bing would not have any value.

Bing needs Internet Explorer. Internet Explorer/Edge needs Bing -- it's the reason it exists.

The Alternatives

When actually considering possible alternatives for Microsoft, there probably aren't that many. If they were to ship with an existing browser -- with default settings to use their Bing search engine -- what options would they have?

Google Chrome? They've got Google Search to compete with Bing.

Apple's Safari? They've got Mac OS X and iOS to compete with Windows and Windows Phone.

Mozilla Firefox? They recently chose Yahoo as the default search engine over Google and Bing.

All things considered, they probably don't have much choice. Adopting one of the existing browsers as the default would mean giving market share to a competitor.

Microsoft has to develop their own browser. It's driving their Bing market share.

I just wish they had based it on an existing rendering engine.

The post What’s The Value Of Owning A Browser? appeared first on ma.ttias.be.

by Mattias Geniar at May 04, 2015 08:22 PM

May 03, 2015

Paul Cobbaut

encrypted storage (on Pi-2)

I used this procedure today to create encrypted storage on two 64GB USB sticks on a Raspberry Pi 2.

Advantage: nobody can read the backups but me
Disadvantage: I need to type a long passphrase at boot (twice)


# prepare to enter a passphrase
cryptsetup luksFormat /dev/sda --cipher=aes --key-size=256
cryptsetup luksFormat /dev/sdb --cipher=aes --key-size=256

# verify device
cryptsetup isLuks /dev/sda -v
cryptsetup isLuks /dev/sdb -v

# dump metadata (just for information)
cryptsetup luksDump /dev/sda
cryptsetup luksDump /dev/sdb

# find uuid (so you can add them with uuid to /etc/crypttab)
cryptsetup luksUUID /dev/sda
cryptsetup luksUUID /dev/sdb

# create mapper devices
cryptsetup luksOpen /dev/sda encrypt-backup
cryptsetup luksOpen /dev/sdb encrypt-archive

# verify dm devices
dmsetup info

# mkfs (Wouter told me to use ext4 ;-)
mkfs.ext4 /dev/mapper/encrypt-backup
mkfs.ext4 /dev/mapper/encrypt-archive

# tune reserved space for root
tune2fs -m2 /dev/mapper/encrypt-backup
tune2fs -m2 /dev/mapper/encrypt-archive

# mount
mount /dev/mapper/encrypt-backup /srv/encrypt-backup
mount /dev/mapper/encrypt-archive /srv/encrypt-archive
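
To make the mapping survive a reboot, the luksUUID output from above can go into /etc/crypttab, with matching /etc/fstab entries. A minimal sketch with placeholder UUIDs (substitute your own; the "none" key field is what makes boot prompt for the passphrase, twice, as noted above):

# /etc/crypttab: <name>  <source device>  <key file>  <options>
encrypt-backup   UUID=aaaaaaaa-1111-2222-3333-444444444444  none  luks
encrypt-archive  UUID=bbbbbbbb-5555-6666-7777-888888888888  none  luks

# /etc/fstab
/dev/mapper/encrypt-backup   /srv/encrypt-backup   ext4  defaults  0  2
/dev/mapper/encrypt-archive  /srv/encrypt-archive  ext4  defaults  0  2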

by Paul Cobbaut (noreply@blogger.com) at May 03, 2015 07:36 PM

Kris Buytaert

What done REALLY looks like in devops

Steve Ropa blogged about What done looks like in devops. I must say I respectfully, but fully, disagree with Steve here.

For those of you who remember: I gave an Ignite about my views on the use of the Definition of Done back at #devopsdays 2013 in Amsterdam.

In the early days we talked about the #devops movement partly being a reaction against the late Friday-night deployments, where the ops people got a tarball with some minimalistic notes and were supposed to put stuff in production. The work of the development team was Done, but the operations team's work had just started.

Things have improved. Like Steve mentions, for a lot of teams done now means that their software is deployable, that we have metrics from it, and that we can monitor the application.

But let's face it: even if all of that is in place, there is still going to be maintenance, security fixes, major stack upgrades and minor application changes, and we all still need to keep the delivery pipelines running.

A security patch on an application stack means that both the ops people and the developers need to figure out the required changes together.

Building and delivering value to your end users is something that never ends, we are never actually done.

So let me repeat:

"Done is when your last enduser is in his grave"
In other words, when the application is decommissioned.

And that is the shared responsibility mindset devops really brings: everybody, both developers and operations people, caring about the value they bring to their customers. Thinking about keeping the application running. And not assuming that, because a list of requirements has been validated at the end of a sprint, we are done. Because we never are...

By the way, here are my original slides for that #devopsdays Amsterdam talk.


DoD is not done from Kris Buytaert

by Kris Buytaert at May 03, 2015 05:05 PM

May 02, 2015

Mattias Geniar

Using Webserver Access Logs As A Database Storage System

I used this hack a few days ago when I launched the Drupal EngineHack Detection Website, and it's serving its purpose just fine.

My use case

The EngineHack site scans a website and tells the user if it has been hacked or not. So for that particular tool, I wanted to log the results of those scans. Most importantly, I wanted to log the timestamp and IP of each scan, the URL of the Drupal site that was scanned, and the result of the check: compromised, yes or no.

Traditionally, I would create either a table in an RDBMS like MySQL or a simple key/value system like MongoDB and store the results in there. But I didn't want to spend much time dealing with SQL injection, data validation, ...

And storing things inside a MySQL table isn't always as practical: I had no GUI, so everything had to be done at the CLI. Creating tables, querying data, ... It all sounded like a lot of work for a tool this simple.

There must be an easier system, right?

Access Logs as a Storage Method

Everything you do in a browser gets logged on the webserver. Every timestamp, every URL and every GET parameter. I can use that to serve my purpose!
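
For reference, a stock Apache "combined" LogFormat already captures all of that; %r is the full request line, GET parameters included (I'm assuming a default setup here, and nginx's default log format is nearly identical):

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined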

Most of what I wanted to log was already present in the logs. I had the timestamp and the IP of each scan.

All that was left was the URL of the Drupal site that was scanned, and the result: compromised, yes or no.

I solved that by including a hidden 1px by 1px image in the result page. The markup looked like this.

<img src="/check_pixel.png?url=http://www.domain.tld&compromised=false"
     width="1px"
     height="1px"
/>

Nobody notices this in the browser. It's the same kind of technique many trackers use in mailing tools, to track the open rates of newsletters.

All I had to do now, was check my access logs for GET requests to that particular .png file and I had everything: the IP, the timestamp, which site got scanned and what the result was.

...
10.0.1.1 [28/Apr/2015:11:19:54] "GET /check_pixel.png?url=http://some.domain.tld&compromised=false HTTP/1.1" 200 901
10.0.1.1 [28/Apr/2015:11:20:05] "GET /check_pixel.png?url=http://www.domain.tld&compromised=true HTTP/1.1" 200 901 
...

Perfect!

Querying the dataset

Granted, in the long run, a SQL statement is easier than this. But since I live mostly at the CLI, this feels more natural to me.

How many sites have been scanned?

$ grep 'check_pixel.png' scan_results.log | awk '{print $7}' | sort | uniq | wc -l
843

Note that I'm not simply using grep -c to count: every log line has its own timestamp, and some sites have been checked multiple times, so I extract just the request field and keep only the unique values.

How many were compromised?

$ grep 'compromised=true' scan_results.log | awk '{print $7}' | sort | uniq | wc -l
9

Which sites have been scanned?

$ awk '{print $7}' scan_results.log | sort | uniq
...
/check_pixel.png?url=http://domain.tld&compromised=false
/check_pixel.png?url=http://otherdomain.Tld&compromised=false
...

Which were compromised?

$ awk '{print $7}' /var/www/enginehack.ma.ttias.be/results/scan_results.log | grep 'compromised=true' | sort | uniq
...
/check_pixel.png?url=http://domain.tld&compromised=true
/check_pixel.png?url=http://otherdomain.Tld&compromised=true
...

I have all the queries I need, right there at the CLI.

Logrotate

One downside to this system is that it lacks several of the ACID properties. Most important, to me at least, is durability.

I've got logrotate configured to rotate all logs every night and store them for 7 days. That would mean my scan-logs would also be deleted after 7 days once the webserver access logs are cleared.

A simple script takes care of that.

#!/bin/bash
grep 'check_pixel.png' /path/to/access.log >> /path/to/permanent/logs/scan_results.log

That runs every night, before logrotate. It takes the results from the current log and appends them to the permanent log, where they're safe. Easy.
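
As a sketch of the scheduling (the script name and timing are made up; the only requirement is that it fires before the nightly logrotate run):

# /etc/cron.d/preserve-scan-results -- hypothetical entry,
# runs at 03:50, comfortably before the usual cron.daily logrotate
50 3 * * * root /usr/local/bin/preserve_scan_results.sh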

Benefits and downsides

For me, this technique worked flawlessly. My benefits: no database to set up or maintain, no SQL injection or data validation to worry about, and everything queryable with the CLI tools I use daily.

I could live with the downsides as well.

And there we have it. This implementation took me 30 seconds to make and it has the same results, at least for me, as a relational database. Implementing the MySQL solution, since it's been a while for me, would have taken 30 minutes or more. Not to mention the security angle of SQL injection, sanitising data, ...

Glad I didn't have to do that.

Caveats

Obviously, in the long run, I should have stored it in a MySQL table. It would allow for a much better storage system.

This #OpsHack worked for me, because my dataset is simple. The number of possible permutations of my data is incredibly small. As soon as the complexity of the data increases, using the access logs to store anything is no longer an option.

Just like my other #OpsHack (abusing Zabbix to monitor Hacker News submissions), this was the easiest solution for me.

The post Using Webserver Access Logs As A Database Storage System appeared first on ma.ttias.be.

by Mattias Geniar at May 02, 2015 10:20 AM

248 days

Or: how a system's uptime can trigger an integer overflow.

Let's do some quick math.

The maximum value a 32-bit signed integer can hold:
2^31 - 1 = 2.147.483.647.

The number of hundredths of a second in 248 days:
248 days x 24 hours x 60 minutes x 60 seconds x 100 = 2.142.720.000.

Those are remarkably close, aren't they? In fact, as soon as day 248 reaches somewhere around 14:00h, the value exceeds the maximum a 32-bit integer can hold.
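
You can check the arithmetic at the shell (bash does its integer math in 64 bits, which is the only reason these expressions don't overflow themselves):

$ echo $(( 2**31 - 1 ))
2147483647
$ echo $(( 248 * 24 * 60 * 60 * 100 ))
2142720000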

Boeing learned this with its 787s.

We have been advised by Boeing of an issue identified during laboratory testing.

The software counter internal to the generator control units (GCUs) will overflow after 248 days of continuous power, causing that GCU to go into failsafe mode.

If the four main GCUs (associated with the engine mounted generators) were powered up at the same time, after 248 days of continuous power, all four GCUs will go into failsafe mode at the same time, resulting in a loss of all AC electrical power regardless of flight phase.
Federal Aviation Administration

Ouch.

This issue reminded me of a problem some Dell EqualLogic storage arrays experienced as well.

While running firmware version 7.0.x, unexpected controller failovers may have occurred at 248 consecutive days of uptime.
Dell EQL firmware v7.0.9

Storing the system uptime in a 32-bit integer? Not the best idea, so it seems.

The post 248 days appeared first on ma.ttias.be.

by Mattias Geniar at May 02, 2015 09:56 AM

May 01, 2015

Philip Van Hoof

Hey guys

Have you guys stopped debating systemd like a bunch of morons already? Because I’ve been keeping myself away from the debate a bit: the amount of idiot was just too large for my mind. People who know me also know that quite a bit of idiot fits into it.

I remember when I was younger, somewhere in the beginning of the century, how we first debated ORBit-2, then Bonobo, then software foolishly written with it like Evolution, then Mono (and the idea of rewriting Evolution in C#. But first we needed a development environment, MonoDevelop, to write it in – oh the gnomes). XFree86 and then the X.Org fork. Then Scaffolding and Anjuta. Beagle and Tracker (oh the gnomes). Rhythmbox versus Banshee (oh the gnomes). Desktop settings in gconf, then all sorts of gnome services, then having a shared mainloop implementation with Qt.

Then god knows what. Dconf, udev, gio, hal, FS monitoring: a lot of things that happened silently but were actually often bigger-impact changes than systemd is, because much, much more real code had to be rewritten, not just badly written init.d scripts. The Linux eco-system has reinvented itself several times without most people having noticed it.

Then finally D-Bus came. And yes, evil Lennart was there too. He was also one of those young guys working on stuff. Working on evil evil pulseaudio by that time, I think (thank god Lennart replaced the old utter crap we had with it). You know, working on stuff.

D-Bus’s debate began a bit like systemd’s debate: everybody had feelings about their own IPC system being better because of this and that (most of which were really bad copies of xmms’s remote control infrastructure). It turned out that KDE got it mostly right with DCOP, so D-Bus copied a lot from it. It also opened a lot of IPC authors’ eyes to the fact that message-based IPC, uniform activation of services, introspection and a uniform way of defining the interface are all goddamned important things. Other things, like tools for monitoring and debugging, plus libraries for all goddamn popular programming environments and, most importantly for IPC, their mainloops, also appeared to be really important. The uniformity between Qt/KDE and Gtk+/GNOME applications’ IPC systems was quite a nice thing and a real enabler: suddenly the two worlds’ softwares could talk with each other. Without it, Tracker could not have happened on the N900 and N9. Or how do you think qt/qsparql talks with it?

Nowadays, everybody who isn’t insane, and who doesn’t have a really, really, really good reason (like awesome latency or something, although kdbus solves that too), and with the exception of all Belgian Linux programmers (who for inexplicable reasons all want their own personal IPC – and then endlessly work on bridges to all the other Belgian Linux programmers’ IPC systems), won’t write their own IPC system. They’ll just use D-Bus and get it over with (or they initially bridge to D-Bus, and refactor their own code away over time).

But anyway.

The D-Bus debate was among software developers. And sometimes the morons joined. But they didn’t understand what the heck we were doing. It was easy to just keep the debate mostly technical. Besides, we had some (for them) really difficult-to-understand stuff to reply with, like “you have file descriptor passing for that”, “study it and come back”. Those who came back are now all expert D-Bus users (I btw think and/or remember that evil Lennart worked on FD passing in D-Bus).

Good times. Lots of debates.

But the systemd debate, not the software, the debate, is only moron.

Recently I’ve seen some people actually looking into it and learning it. And then reporting about what they learned. That’s not really moron of course. But then their blogs get morons in the comments. Morons all over the place.

Why aren’t they on their fantastic *BSD or Devuan or something already?

ps. Lennart, if you read this (I don’t think so): I don’t think you are evil. You’re super nice and fluffy. Thanks for all the fish!

by admin at May 01, 2015 01:53 PM

April 30, 2015

Frederic Hornain

[Red Hat BeLux Office] New Address

Hi all,

Just to let you know: we have moved from Berchem to Brussels. Here is our new address:

Brussels

Red Hat BeLux
MC-Square Business Centre
Stockholm Building
Leonardo da Vincilaan 19
Diegem 1831
Belgium
Tel: +32 2 719 0340
Fax: +32 3 218 2020

Ref: http://www.redhat.com/en/about/offices

Kind Regards

Frederic


by Frederic Hornain at April 30, 2015 12:17 PM