Subscriptions

Planet Grep is open to all people who either have the Belgian nationality or live in Belgium, and who actively work with or contribute to Open Source/Free software.

About Planet Grep...

Other planets can be found at the Planet Index.

A complete feed is available in a number of syndication formats: RSS 1.0, RSS 2.0, FOAF, and OPML.

The layout of this site was done by Gregory

April 19, 2014

Paul Cobbaut

Vagrant: Creating 10 vm's with 6 disks each

Hello lazyweb,

the Vagrantfile below works fine, but it can probably be written more simply. I've been struggling to create variables like "servers=10" and "disks=6" to automate the creation of 10 servers with 6 disks each.

Drop me a hint if you feel like creating those two loops.


paul@retinad:~/vagrant$ cat Vagrantfile
hosts = [ { name: 'server1', disk1: './server1disk1.vdi', disk2: 'server1disk2.vdi' },
          { name: 'server2', disk1: './server2disk1.vdi', disk2: 'server2disk2.vdi' },
          { name: 'server3', disk1: './server3disk1.vdi', disk2: 'server3disk2.vdi' }]

Vagrant.configure("2") do |config|

  config.vm.provider :virtualbox do |vb|
   vb.customize ["storagectl", :id, "--add", "sata", "--name", "SATA" , "--portcount", 2, "--hostiocache", "on"]
  end

  hosts.each do |host|

    config.vm.define host[:name] do |node|
      node.vm.hostname = host[:name]
      node.vm.box = "chef/centos-6.5"
      node.vm.network :public_network
      node.vm.synced_folder "/srv/data", "/data"
      node.vm.provider :virtualbox do |vb|
        vb.name = host[:name]
        vb.customize ['createhd', '--filename', host[:disk1], '--size', 2 * 1024]
        vb.customize ['createhd', '--filename', host[:disk2], '--size', 2 * 1024]
        vb.customize ['storageattach', :id, '--storagectl', "SATA", '--port', 1, '--device', 0, '--type', 'hdd', '--medium', host[:disk1] ]
        vb.customize ['storageattach', :id, '--storagectl', "SATA", '--port', 2, '--device', 0, '--type', 'hdd', '--medium', host[:disk2] ]
      end
    end

  end

end
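
One possible shape for those two loops is sketched below. This is an untested sketch, not a drop-in replacement: it reuses the box and network settings from the Vagrantfile above, and it moves the SATA controller creation into the per-VM provider block so the port count can follow the disk count (whether VirtualBox is happy attaching to ports 1..6 this way is an assumption to verify).

servers = 10
disks   = 6

Vagrant.configure("2") do |config|
  (1..servers).each do |s|
    name = "server#{s}"
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.box = "chef/centos-6.5"
      node.vm.network :public_network
      node.vm.synced_folder "/srv/data", "/data"
      node.vm.provider :virtualbox do |vb|
        vb.name = name
        # one SATA controller per VM, with enough ports for all disks
        vb.customize ["storagectl", :id, "--add", "sata", "--name", "SATA",
                      "--portcount", disks + 1, "--hostiocache", "on"]
        (1..disks).each do |d|
          disk = "./#{name}disk#{d}.vdi"
          vb.customize ["createhd", "--filename", disk, "--size", 2 * 1024]
          vb.customize ["storageattach", :id, "--storagectl", "SATA",
                        "--port", d, "--device", 0, "--type", "hdd",
                        "--medium", disk]
        end
      end
    end
  end
end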

by Paul Cobbaut (noreply@blogger.com) at April 19, 2014 10:02 AM

April 18, 2014

Frederic Hornain

2014 Red Hat Summit: Open Playground

;)

/f


by Frederic Hornain at April 18, 2014 10:16 AM

Mark Van den Borre

Reglementitis

"Wie toerist laat overnachten riskeert boete": anyone who lets a tourist stay overnight risks a fine.





What we can over-regulate ourselves, we over-regulate better! ("Wat we zelf regelneven, regelneven we beter!")

by Mark Van den Borre (noreply@blogger.com) at April 18, 2014 07:49 AM

April 17, 2014

Wim Coekaerts

Oracle E-Business Suite R12 Pre-Install RPM available for Oracle Linux 5 and 6

One of the things we have been focusing on with Oracle Linux for quite some time now is making it easy to install and deploy Oracle products on top of it, without having to worry about which RPMs to install and what the basic OS configuration needs to be.

A minimal Oracle Linux install contains a really small set of RPMs, typically not enough for a product to install on, while a full/complete install contains way more packages than you need. While a full install is convenient, it also means that the likelihood of having to install an erratum for a package is higher, and as such the cost of patching and updating/maintaining systems increases.

In an effort to make it as easy as possible, we have created a number of pre-install RPM packages. They don't really contain actual programs; they're more or less dummy packages with a few configuration scripts. They are built around the concept that you have a minimal OL installation (configured to point to a yum repository), and all the RPMs/packages which the specific Oracle product requires to install cleanly and pass its prerequisites are dependencies of the pre-install RPM.

When you install the pre-install RPM, yum will calculate the dependencies, figure out which additional RPMs are needed beyond what's installed, download them and install them. The configuration scripts in the RPM will also set up a number of sysctl options, create the default user, etc. After installation of this pre-install RPM, you can confidently start the Oracle product installer.

We have released pre-install RPMs in the past for the Oracle Database (11g, 12c,..) and the Oracle Enterprise Manager 12c agent. And we have now also released a similar RPM for E-Business Suite R12.

This RPM is available on both ULN and public-yum in the addons channel.
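
Installation then boils down to a single yum command. The package name below is from memory, so treat it as an assumption and check the addons channel listing for the exact name:

# yum install oracle-ebs-server-R12-preinstall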

by wcoekaer at April 17, 2014 11:44 PM

Frank Goossens

Some HTML DOM parsing gotchas in PHP’s DOMDocument

Although I had used Simple HTML DOM parser for WP DoNotTrack, I've been looking into native PHP HTML DOM parsing as a possible replacement for the regular expressions in Autoptimize, as proposed by Arturo. I won't go into the performance comparison results just yet, but here are some of the things I learned while experimenting with DOMDocument, which in turn might help innocent passers-by of this blogpost.

// loadHTML from string, suppressing errors
$dom = new DOMDocument();
@$dom->loadHTML($html);

// get all script-nodes
$_scripts=$dom->getElementsByTagName("script");

// move the results from a DOMNodeList to an array
$scripts = array();
foreach ($_scripts as $script) {
   $scripts[]=$script;
}

// iterate over array and remove script-tags from DOM
foreach ($scripts as $script) {
   $script->parentNode->removeChild($script);
}

// write DOM back to the HTML-string
$html = $dom->saveHTML();

Now chop chop, back to my code to finish that performance comparison. Who knows what else we'll learn ;-)

by frank at April 17, 2014 05:05 PM

April 16, 2014

Wouter Verhelst

Call for help for DVswitch maintenance

I took over "maintaining" DVswitch from Ben Hutchings a few years ago, after Ben realized he didn't have the time anymore to work on it properly.

After a number of years, I have to admit that I haven't done a very good job. Not because I didn't want to work on it, but mainly because I don't have enough time to keep DVswitch in sync with the numerous moving targets it builds upon; the APIs of libav and liblivemedia change quickly enough that just keeping everything compilable and in working order is quite a job.

DVswitch is used by many people; DebConf, FOSDEM, and the CCC are just a few examples, but I know of at least three more.

Most of these (apart from DebConf and FOSDEM) maintain local patches which I've been wanting to merge into the upstream version of dvswitch. However, my time is limited, and over the past few years I've not been able to get dvswitch into a state where I confidently felt I could upload it into Debian unstable for a release. One step we took to get closer was to remove the liblivemedia dependency (which implied removing support for RTSP sources). Unfortunately, the resulting situation still wasn't good enough, since libav had changed its API enough that current versions of DVswitch compiled against current versions of libav will segfault if you try to do anything useful.

I must admit to myself that I don't have the time and/or skill set to maintain DVswitch on an acceptable level all by myself. So, this is a call for help:

If you're using DVswitch for your conference and want to continue doing so, please talk to us. The first things we'll need to do:

See you there?

April 16, 2014 04:24 PM

April 15, 2014

Luc Stroobant

Telenet ipv6 pfSense configuratie

After years of fiddling with commercial wifi routers that break every 2-3 years, I finally invested in a Soekris board for a much more powerful pfSense box in the basement. An additional interesting point of pfSense is that IPv6 is well supported.

What you need to enable to make this work with Telenet is not obvious at first sight, so here's a quick overview for anyone who wants to find it in a single search. :)

On freshly installed pfSense setups, the option "Allow IPv6" is enabled by default. If your setup has existed for a while, you still need to enable it under System: Advanced: Networking.

On the WAN interface, under "DHCP6 client configuration", set "DHCPv6 Prefix Delegation size" to /56, the size of the prefix you get from Telenet. The remaining options can stay disabled.
On the LAN interface, set "IPv6 Configuration Type" to "Track interface" and then, a bit lower under "Track ipv6 interface", select the WAN interface. That's all... You should now get IPv6 addresses on your pfSense interfaces and on the clients behind the pfSense.

By default, all inbound traffic is blocked; if you want to let ping through, change the standard rule for inbound IPv4 ICMP on the WAN interface so that it applies to IPv4+6.

NB: this is tested and works with a plain modem, not with the Telenet-managed home-gateway wifi-router thing. Reports on whether it also works with the latter are always welcome in the comments.

by luc at April 15, 2014 06:28 PM

Lionel Dricot

Lily & Lily à Ottignies


Amid the glitter and sequins of 1930s Hollywood, fading star Lily Da Costa fills glasses and tabloid columns more often than movie theatres and film sets. Sam, her good-natured impresario, overwhelmed by her every whim, no longer knows which way to turn. Between a gigolo husband, an escaped convict and dishonest servants, Déborah, Lily's twin sister, suddenly turns up unannounced, full of good intentions. But isn't hell paved with good intentions?

Want to know what happens next? Then I invite you to come and attend one of the performances of Lily & Lily by the Comédiens du Petit-Ry at the Saint-Pie X primary school in Ottignies-Louvain-la-Neuve:

Tickets cost €10 and reservations can be made at reservationscomry@gmail.com.

Beyond the laughter, the slamming doors, and the lovers under beds and in closets, Lily & Lily is also an occasion to celebrate the troupe's 30 years of existence and Laure Destercke's 25 years of participation; she will, of course, play Lily.


The troupe, in mid-rehearsal

On a more personal note, Lily & Lily is my first participation in the troupe. While reading the script, I was also surprised to discover that the play was staged in 1985 with Jacqueline Maillan and… Francis Lemaire, my uncle, who passed away a year ago already. It is therefore with a touch of emotion and a certain pride that I will take the stage thinking of him.

All of that adds up to plenty of occasions to laugh and celebrate. So grab your calendar, pick a date, pass the events along, invite your friends and, like Lily Da Costa, come and knock back a drink with us during the intermission! With the Comédiens du Petit-Ry, the atmosphere is as much in the audience as on the stage!

Looking forward to seeing you in the audience one of these evenings…

Thank you for taking the time to read this text. This blog is paid-for, but you are free to choose the price. You can support the writing of these posts via Flattr, Patreon, IBAN transfers, Paypal or bitcoins. But the best way to thank me is simply to share this text around you, or to help me find new challenges in 2014.


by Lionel Dricot at April 15, 2014 12:04 PM

April 14, 2014

Xavier Mertens

xip.py: Executing Commands per IP Address

During a penetration test, I had to execute specific commands against some IP networks. Those networks were represented in CIDR form (network/subnet). Being a lazy guy, I spent some time writing a small Python script to solve this problem. The idea is based on the "xargs" UNIX command, which is used to build complex command lines. From the xargs man page:

xargs reads items from the standard input, delimited by blanks (which can be protected with double or single quotes or a backslash) or newlines, and executes the command (default is /bin/echo) one or more times with any initial-arguments followed by items read from standard input. Blank lines on the standard input are ignored.

I logically called the tool "xip.py", as it allows you to execute a given command for each IP address in a subnet or range. The syntax is simple:

$ ./xip.py -h
Usage: xip.py [options]

Options:
 --version             show program's version number and exit
 -h, --help            show this help message and exit
 -i IPADDRESSES, --ip-addresses=IPADDRESSES
                       IP Addresses subnets to expand
 -c COMMAND, --command=COMMAND
                       Command to execute for each IP ("{}" will be replaced by the IP)
 -o OUTPUT, --output=OUTPUT
                       Send commands output to a file
 -s, --split           Split outfile files per IP address
 -d, --debug           Debug output

The IP addresses can be specified in two formats: x.x.x.x/x or x.x.x.x-x. Multiple subnets can be delimited by commas, and subnets starting with a "-" will be excluded. Examples:

$ ./xip.py -i 10.0.0.0/29,10.10.0.0/29,-10.0.0.1-4 -c "echo {}"

This command will return:

10.0.0.0
10.0.0.5
10.0.0.6
10.0.0.7
10.10.0.0
10.10.0.1
10.10.0.2
10.10.0.3
10.10.0.4
10.10.0.5
10.10.0.6
10.10.0.7

Like with the "find" UNIX command, "{}" is replaced by the IP address (multiple "{}" pairs can be used). With the "-o <file>" option, the command output (stdout & stderr) will be stored in the file. You can split the output across multiple files using the "-s" switch; in that case, each file name will end with the IP address.
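
For example, a hypothetical run combining these switches (the flags come from the help output above; the command and file name are made up) would ping each host and write one output file per IP address:

$ ./xip.py -i 192.168.0.0/29 -c "ping -c 1 {}" -o ping.log -s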

This is a quick and dirty tool which helped me a lot. I already have some ideas to improve it, if I have time… The script is available in my github repository.

by Xavier at April 14, 2014 06:50 PM

Tom Laermans

Setting up a Postfix-based relay server with user authentication via Active Directory

In this post I will explain how to set up Postfix authentication against an AD server. This is similar to regular LDAP authentication. I am running a Samba 4.0 domain, but it should work just as well against a "real" Microsoft AD domain.

Packages required:

/etc/default/saslauthd (excerpts):

MECHANISMS="ldap"
OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd"
NAME=Mailserver

This code sets the SASL auth daemon (saslauthd) up to use LDAP authentication (for which the configuration is read from /etc/saslauthd.conf as detailed below), and puts the saslauthd communication socket inside the Postfix chroot so we can reach it from Postfix.

/etc/saslauthd.conf (complete file):

ldap_servers: ldap://domaincontroller.example.com/
ldap_search_base: cn=Users,dc=domain,dc=example,dc=com
ldap_filter: (userPrincipalName=%u@domain.example.com)

ldap_bind_dn: cn=lookupuser,cn=Users,dc=domain,dc=example,dc=com
ldap_password: omnomnom

This file configures the actual LDAP connection. You need a working user/password combination for the domain to be able to connect to the domain controller and browse the tree. We’re filtering on userPrincipalName; in this example @domain.example.com is added behind the username, as the principalName in AD is actually yourusername@yourwindowsdomain. I prefer to authenticate to Postfix without adding the Windows domain to the username, so we have to hardcode it in the LDAP query filter. You can add multiple servers after ldap_servers, which will be tried in order.

After configuring both saslauthd files, (re)start the saslauthd service.

You can then test if the SASL authentication works already with the testsaslauthd command. Careful, you have to pass it the password on the command line in plain text – be sure to use a test password or clear your terminal and shell history! If this doesn’t work, there’s no reason to continue to Postfix yet, as working SASL authentication is key!
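
A test session could look something like this (the socket path follows from the OPTIONS line above; the user name and password are placeholders):

# testsaslauthd -u someuser -p some-test-password -f /var/spool/postfix/var/run/saslauthd/mux
0: OK "Success."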

/etc/postfix/main.cf (excerpts):

smtpd_tls_cert_file=/etc/ssl/certs/mycert.crt
smtpd_tls_key_file=/etc/ssl/private/mycert.key
smtpd_tls_CAfile = /etc/ssl/certs/myintermediate.crt
smtpd_use_tls=yes
smtpd_sasl_auth_enable = yes
broken_sasl_auth_clients = yes
smtpd_recipient_restrictions = permit_mynetworks,
  permit_sasl_authenticated,
  reject_unauth_destination

Use your commercial or self-signed certificate and key combination for the first 3 lines. I have a wildcard certificate that I use for most of my servers, which matches the hostname used to identify this relaying Postfix server.

/etc/postfix/master.cf (excerpts):

submission inet n       -       -       -       -       smtpd
  -o smtpd_enforce_tls=yes
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
smtps     inet  n       -       -       -       -       smtpd
  -o smtpd_tls_wrappermode=yes
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_client_restrictions=permit_sasl_authenticated,reject

These 2 blocks are already in master.cf on Debian, but they're commented out. The first block enables submission on port 587 via STARTTLS (fully encrypted after the initial greeting dialogue). The second enables SMTPS on port 465, which is fully SSL-encrypted.
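
To quickly verify that STARTTLS is offered on the submission port, you can use openssl's built-in client (replace the hostname with your own):

$ openssl s_client -connect mail.example.com:587 -starttls smtp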

/etc/postfix/sasl/smtpd.conf (complete file):

pwcheck_method: saslauthd
mech_list: plain login

Finally, we set Postfix up to use saslauthd as its authentication mechanism. We can only support plain and login, because we don't have the plaintext password stored in Active Directory. This means we have to get the password from the client and verify it on our side (by binding to LDAP with it) to make sure it's correct.

Then add the postfix user to the "sasl" group, so it can access the saslauthd communication socket. The user already exists, so use usermod rather than useradd:

# usermod -a -G sasl postfix

Restart Postfix, and sending mail through it should work, authenticated against Active Directory! Be sure to test with a wrong password, so that you don't accidentally create an open relay somehow. With running a mail server on the internet comes great responsibility, so make sure not to contribute to the spam problem – SMTP relay accounts get stolen on a regular basis as well, so monitor your queues for unusually high amounts of outgoing mail.

Feel free to leave comments, questions or suggestions below!

by Tom Laermans at April 14, 2014 10:43 AM

Joram Barrez

Review ‘Activiti 5.x Business Process Management’ by Zakir Laliwala and Irshad Mansuri

I’ve been contacted by the people of Packt Publishing to review their recent book release of the ‘Activiti 5.x Business Process Management”, written by Dr. Zakir Laliwala and Irshad Mansuri. For an open source project, books are a good thing. They indicate that a project is popular, and often people prefer books over gathering all […]

by Joram Barrez at April 14, 2014 08:39 AM

April 13, 2014

Mattias Geniar

Follow-up: use ondemand PHP-FPM masters using systemd

A few days ago, I published a blogpost called A better way to run PHP-FPM. It's gotten a fair amount of attention. It detailed the use of the "ondemand" process manager as well as using a separate PHP-FPM master per pool, for a unique APC cache.

The setup works fine, but has the downside that you'll have multiple masters running -- an obvious consequence. Kim Ausloos created a solution for this by using systemd's socket activation. This means PHP-FPM masters are only started when needed and no longer linger on the system when they're idle.
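
The core idea is a socket unit per pool: systemd owns the listening socket and only starts the matching service on the first connection. A hypothetical sketch of such a unit (the unit and socket names here are assumptions; see Kim's post for the real implementation):

$ cat /etc/systemd/system/php-fpm-pool1.socket
[Unit]
Description=PHP-FPM socket for pool1

[Socket]
ListenStream=/var/run/php-fpm/pool1.sock

[Install]
WantedBy=sockets.target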

This has a few benefits and possible downsides;

I'll do some more testing on this use case, as well as on the performance penalty (if any) of having to start a new master on the first request to the PHP-FPM socket. For this to work, I need RHEL or CentOS 7 (we're a RHEL/CentOS shop), as systemd is required and will only be available from RHEL/CentOS 7 onwards.

by Mattias Geniar at April 13, 2014 11:26 AM

Frank Goossens

Gastgeblogt op nummervandedag.nl

There's a piece about Kate Bush by me on nummervandedag.nl. A fine music blog, by the way, and they use WP YouTube Lyte :-)

by frank at April 13, 2014 07:17 AM

April 11, 2014

Wouter Verhelst

Review: John Scalzi: Redshirts

I'm not much of a reader anymore these days (I used to be when I was a young teenager), but I still tend to like reading something every once in a while. When I do, I generally prefer books that can be read cover to cover in one go—because that allows me to immerse myself in the book so much more.

John Scalzi's book is... interesting. It talks about a bunch of junior officers on a starship of the "Dub U" (short for "Universal Union"), which flies off into the galaxy to Do Things. This invariably involves away missions, and on these away missions invariably people die. The title is pretty much a dead giveaway; but in case you didn't guess, it's mainly the junior officers who die.

What I particularly liked about this book is that after the story pretty much wraps up, Scalzi doesn't actually let it end there. First there's a bit of a tie-in that has the book end up talking about itself; after that, there are three epilogues in which the author considers what this story would do to some of its smaller characters.

All in all, a good read, and something I would not hesitate to recommend.

April 11, 2014 03:25 PM

Xavier Mertens

Log Awareness Trainings?

More and more companies organize "security awareness" trainings for their team members. With the growing threats faced by people using their computers or any connected device, it is definitely a good idea. The goal of such trainings is to make people open their eyes and change their attitude towards security.

If the goal of an awareness training is to change people's attitude, why not apply the same idea in other domains? Log files sound like a good example! Most log management solutions claim they can be extended to collect and digest almost any type of log file. With their standard configuration, they are able to process logfiles generated by most solutions on the information security market, but they can also "learn" unknown logfile formats. Maaaagic!

A small reminder for those who are new to this domain: the primary goal of a log management solution is to collect, parse and store events in a common format to help with searching, alerting or reporting on those events. The keyword here is "to parse". Let's take the following event generated by UFW on Ubuntu:

Apr 10 23:56:17 marge kernel: [8209773.464692] [UFW BLOCK] IN=eth0 OUT= \
  MAC=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx SRC=11.12.13.14 DST=88.191.132.217 \
  LEN=60 TOS=0x00 PREC=0x00 TTL=42 ID=36063 DF PROTO=TCP SPT=32345 DPT=143 \
  WINDOW=14600 RES=0x00 SYN URGP=0

We can extract some useful "fields" like the source IP address and port, the destination IP address and port, a timestamp, interfaces, protocols, etc. Let's come back to our unknown logfile format! The biggest issue is our total dependence on the way developers generate and store the events. If events are stored in a database or if fields are delimited by a common character, it's quite easy: we just have to set up a mapping between the source fields and our standard fields:

if ($event =~ /(\S+),(\S+),(\S+)/) {
  $source_address = $1;
  $source_port    = $2;
  $dest_address   = $3;
  # ...
}

Alas, most of the time it's more complicated, and we have to switch to complex regular expressions to extract the juicy fields. And the nightmare begins… I had to integrate events generated by "Exotic Product 3.2.1" because "its events are interesting for our compliance requirements". Challenge accepted!
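
For the UFW event shown earlier, such a regular expression might look like this (a sketch in the same pseudo-code style as above; the variable names are mine):

if ($event =~ /SRC=(\S+) DST=(\S+) .* PROTO=(\S+) SPT=(\d+) DPT=(\d+)/) {
  $source_address = $1;
  $dest_address   = $2;
  $protocol       = $3;
  $source_port    = $4;
  $dest_port      = $5;
}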

Of course, chances are that your regex will fail after an upgrade from 3.2.1 to 3.2.2, because the developers decided to change some messages. This bad scenario is real! With this, I would like to attract the attention of developers. Guys, could you please not only write logs but write good logs? In the example above, I faced the following issues:

The primary goal of logfiles is to help sysadmins, network admins or the security team during investigation or debugging phases. When something occurs, the first place people will look is the logs! I'm not a developer, but I'm playing with logs almost every day. Here are some guidelines which seem important to me:

Enough said; if you are interested, have a look at the OWASP "Logging Cheat Sheet" document (here). This was my Friday tribune to all developers! Happy logging…

by Xavier at April 11, 2014 09:14 AM

Frederic Descamps

PLMCE: SKySQL/MariaDB: my Digital Caricature

Once again MariaDB invited Doug Shannon from EventToons to their booth at the Percona Live MySQL Conference & Expo.

And once again he drew me.

This is the picture of last year:

and the one of this year:

The conclusion is simple: my beard narrowed and my face grew! ... is this a sign that I'm getting old? ;-)

BTW, thx to the MariaDB team for this nice and funny gift!

by lefred at April 11, 2014 06:26 AM

April 10, 2014

Mattias Geniar

Varnish 4.0.0 released together with configuration templates

Good news! Today, Varnish 4.0.0 has been released! Among the most important features are:

* Full support for streaming objects through from the backend on a cache miss.
Bytes will be sent to 1..n requesting clients as they come in from the backend
server.
* Background (re)fetch of expired objects. On a cache miss where a stale copy
is available, serve the client the stale copy while fetching an updated copy
from the backend in the background.
* New varnishlog query language, allowing automatic grouping of requests when
debugging ESI or a failed backend request. (among much more)
* Comprehensive request timestamp and byte counters.
* Security improvements, including disabling of run-time changes to security
sensitive parameters.

Together with this release, I'm making public the first draft of my Varnish 4.0 configuration templates on GitHub. Since the syntax and internals have changed quite a bit, this is a work in progress.

My config templates for Varnish 3.x have received a great deal of feedback and attention; I'm hoping to accomplish the same with these 4.x configs. They're still in draft and need fine-tuning, so I appreciate any feedback!
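
To give an idea of the syntax change: every VCL file must now start with a version marker, and several subroutines were renamed (vcl_fetch became vcl_backend_response, for instance). A minimal 4.0 config looks like this (host and port are placeholders):

vcl 4.0;

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}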

by Mattias Geniar at April 10, 2014 03:27 PM

Laurent Bigonville

Hide partitions in nautilus

If you want to hide a partition in nautilus (which uses udisks2), you can do so easily by setting the UDISKS_IGNORE environment variable to 1 in a udev rules file.

The following example hides all partitions with a logical volume name that ends in "-sbuild":

$ cat /etc/udev/rules.d/99-hide-lv-udisks.rules
ENV{DM_LV_NAME}=="*-sbuild", ENV{UDISKS_IGNORE}="1"

After that you need to run "udevadm trigger" as root; the disks should then immediately disappear from nautilus.

You can use "udevadm info" to see the different environment variables that can be used to identify a disk/partition.
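
For example (output abbreviated; the device name is a placeholder):

$ udevadm info --query=property --name=/dev/sda1
DEVNAME=/dev/sda1
ID_FS_TYPE=ext4
...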


by bigon at April 10, 2014 12:03 PM

Mattias Geniar

Scan your network for Heartbleed vulnerabilities with Nmap

Nmap now has an NSE script (Nmap Scripting Engine) to detect SSL Heartbleed vulnerabilities. You can find how to patch yourself in my previous blogpost: Patch against the heartbleed OpenSSL bug (CVE-2014-0160).

First, download nmap. If you're on a Mac, "brew install nmap" should do the trick. On Linux, your package manager should have nmap readily available.

Get the latest version of Nmap

This NSE script requires Nmap version 6.25 or later. Make sure you're on a recent enough version before trying this.

To check your nmap version, run it with "--version":

$  nmap --version

Nmap version 6.40 ( http://nmap.org )
Platform: x86_64-apple-darwin13.0.0
Compiled with: liblua-5.2.2 openssl-1.0.1f nmap-libpcre-7.6 libpcap-1.3.0 nmap-libdnet-1.12 ipv6
Compiled without:
Available nsock engines: kqueue poll select

Download the extra TLS LUA

Since the heartbleed bug is an SSL vulnerability, you'll need some extra TLS libraries. Download the tls.lua file into the Nmap share directory (most likely /usr/local/share/nmap/nselib/).
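
Something like this should do it (the URL points at the nmap development tree and is an assumption; adjust the target path to wherever your nselib directory lives):

$ wget https://svn.nmap.org/nmap/nselib/tls.lua -O /usr/local/share/nmap/nselib/tls.lua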

Download the SSL-Heartbleed NSE script

You can download the ssl-heartbleed.nse script via the NSE website. Save it to your local drive and refer to it via the --script $FILENAME parameter. Then run nmap with the NSE script against port 443.

$ nmap -T4 -p 443 -n -Pn --open 192.168.0.0/24 --script nse/ssl-heartbleed.nse

The above should list all your IPs that have an open 443 port together with a remark if they're vulnerable or not.

The parameters explained;

  1. -T4: an aggressive scan; it will be detected by IDS/IPS systems, but it's the fastest
  2. -p 443: only scan port 443 (you may want to extend your reach if you have alternative SSL-enabled ports)
  3. -n: no name resolution, faster scans
  4. -Pn: no Ping Scan first, assume all hosts are up, don't waste time with ICMP
  5. --open: only list the hosts with open ports
  6. 192.168.0.0/24: the IP range to scan
  7. --script nse/ssl-heartbleed.nse: the file location of the NSE script to scan for Heartbleed vulnerabilities

The result, if a vulnerable host has been found, looks like this.

Nmap scan report for 192.168.0.5
Host is up (0.017s latency).
PORT    STATE SERVICE
443/tcp open  https
| ssl-heartbleed:
|   VULNERABLE:
|   The Heartbleed Bug is a serious vulnerability in the popular OpenSSL cryptographic
|        software library. It allows for stealing information intended to be protected
|        by SSL/TLS encryption.
|     State: VULNERABLE
|     Risk factor: High
|     Description:
|       OpenSSL versions 1.0.1 and 1.0.2-beta releases (including 1.0.1f and 1.0.2-beta1)
|       of OpenSSL are affected by the Heartbleed bug. The bug allows for reading memory
|       of systems protected by the vulnerable OpenSSL versions and could allow for
|       disclosure of otherwise encrypted confidential information as well as the
|       encryption keys themselves.
|
|     References:
|       http://cvedetails.com/cve/2014-0160/
|       https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0160
|_      http://www.openssl.org/news/secadv_20140407.txt

Good luck patching!

by Mattias Geniar at April 10, 2014 10:24 AM

Frederic Descamps

golden cage's phone supported in Fedora 20

If you want to be able to use an iPhone with Fedora 20 to copy photos or songs, you will need to upgrade libimobiledevice to 1.1.6. If you don't, the phone will constantly prompt you to trust or not trust the computer:

You can find the rpms for libimobiledevice 1.1.6 here.

by lefred at April 10, 2014 10:08 AM

April 09, 2014

Frederic Descamps

MySQL Community Awards 2014 : Community Contributor of the year 2014

At the Percona Live MySQL Conference & Expo (PLMCE), I had the honor of receiving a MySQL Community Award as "Community Contributor of the Year 2014". I was so proud, and I still am.

This is the reason why I received it: "Frederic organizes the MySQL & Friends Devroom at FOSDEM every year. He worked towards making it a true community-driven event in which all key players participate. In 2014 the MySQL & Friends devroom also presented a shared booth/stand regrouping Oracle, MariaDB/SkySQL and Percona engineers and developers."

But this Award can't be mine alone; I need to share it with all the people who helped me in this adventure and who made it possible. The FOSDEM MySQL & Friends Devroom is maybe now the second-best MySQL event in Europe, after Percona Live UK (PLUK).

I'm sorry, "apologies upfront" (some people will understand this sentence), but I'll now thank all the people who deserve it, and the list is long:

- FOSDEM, for accepting the MySQL & Friends Devroom every year since 2009.
- Kris sdog Buytaert, who encouraged me to give my first talk in the MySQL Devroom in 2010.
- Lenz Grimmer, for having run the devroom in the beginning.
- Giuseppe @datacharmer Maxia, for having helped me and introduced me to key players in the community, enabling me to put together, every year, a strong committee with people from different companies and open source projects.
- Colin @bytebot Charles, for having participated in all these devrooms as visitor, speaker and committee member (and congratulations on the Award too).
- Sergey Petrunia, who also helped me the first year I was in charge of the devroom.
- Henrik @h_ingo Ingo, who was a model for me in how he represented open source in the MySQL ecosystem (you can come back whenever you want).
- Oracle, Tungsten, SkySQL, MariaDB and Percona, for having allowed their engineers to travel, speak and share the booth with each other.
- All committee members.
- All speakers (2014, 2013, 2012).
- All attendees.
- And last but not least, the Percona Belgian team: Liz @lizztheblizz van Dijk, Dimitri @dim0 Vanoverbeke and Kenny @gryp Gryp, who supported me with the logistics and organized a wonderful Community Dinner this year.

Again, thank you everybody; this award belongs to all of us!

by lefred at April 09, 2014 09:14 PM

Mattias Geniar

A better way to run PHP-FPM

If you search the web for PHP-FPM configurations, you'll find many of the same ones popping up. They nearly all use the 'dynamic' process manager and all assume you will have one master process running all your PHP-FPM pools. While there's nothing technically wrong with that, there is a better way to run PHP-FPM.

In this blogpost I'll detail:

  1. Why 'dynamic' should not be your default process manager
  2. Why it's better to have multiple PHP-FPM masters

Why you should prefer the 'ondemand' process manager over 'dynamic'

Most of the copy/paste work on the internet for PHP-FPM has configurations such as the one below.

[pool_name]
...
pm = dynamic
pm.max_children = 5
pm.start_servers = 3
pm.min_spare_servers = 2
pm.max_spare_servers = 4
pm.max_requests = 200

Most "guides" advocate the use of the 'dynamic' Process Manager ('pm' option in the config), which allows you to choose how many minimum and maximum (spare) processes you have per pool. Many guides however make blind assumptions on what your server specs are and will cause, like in the example above, a minimum of 3 PHP processes running per pool (pm.start_servers = 3). If you're on a low-traffic site, that could very well be overkill. For your server, it looks like this in your processlist.

root      3986  4704 ?        Ss   19:04   0:00 php-fpm: master process (/etc/php-fpm.conf)
user      3987  4504 ?        S    19:04   0:00  \_ php-fpm: pool pool_name
user      3987  4504 ?        S    19:04   0:00  \_ php-fpm: pool pool_name
user      3987  4504 ?        S    19:04   0:00  \_ php-fpm: pool pool_name

Those 3 processes will always be running, whether they're needed or not.

A far better, but badly documented, way to run PHP-FPM pools is the 'ondemand' process manager. As the name suggests, it does not leave processes lingering, but spawns them as they are needed. The configuration is similar to the above, but simplified.

[pool_name]
...
pm = ondemand
pm.max_children = 5
pm.process_idle_timeout = 10s
pm.max_requests = 200

The 'ondemand' process manager was added in PHP 5.3.8. The config above makes your default process list look like this.

root      3986  4704 ?        Ss   19:04   0:00 php-fpm: master process (/etc/php-fpm.conf)

Only the master process is spawned; there are no pre-forked PHP-FPM processes. Processes are only started when they are needed, up to a maximum of 5 (defined by pm.max_children in the config above). So if there are 2 simultaneous PHP requests going on, the process list would be:

root      3986  4704 ?        Ss   19:04   0:00 php-fpm: master process (/etc/php-fpm.conf)
user      3987  4504 ?        S    19:04   0:00  \_ php-fpm: pool pool_name
user      3987  4504 ?        S    19:04   0:00  \_ php-fpm: pool pool_name

After the timeout configured in "pm.process_idle_timeout", the process will be stopped again. This does not interfere with PHP's max_execution_time, because the process manager only considers a process "idle" when it's not serving any request.

If you're working on a high performance PHP setup, the 'ondemand' PM may not be for you. In that case, it's wise to pre-fork your PHP-FPM processes up to the maximum your server can handle. That way, all your processes are ready to serve your requests without needing to be spawned first. However, for 90% of the sites out there, the ondemand PHP-FPM configuration is better than either static or dynamic.

A shared APC cache: why multiple PHP-FPM masters are better

You may not be aware that the APC opcode cache is actually held by the master process in PHP. Any configuration for APC needs to come from the .INI files and cannot be overwritten later via ini_set() or php_admin_value. That's because the spawned PHP-FPM processes have no influence on the size or config of the APC cache, as it is initiated and managed by the master process.

That inherently means that the APC cache is shared between all PHP-FPM pools. If you only have a single site to serve, that's no issue. If you have a few dozen sites on the same server via PHP-FPM, you should be aware that they all share the same APC cache. The APC cache size should then be big enough to hold the opcode cache of all your sites combined.

To avoid this, each PHP-FPM pool can also be started separately, with its own master process. That means each site can have its own APC cache and can be started/stopped independently of all the other PHP-FPM pools. A change in one pool's config then no longer causes all the other FPM pools to be reloaded when the new config needs to be activated, which is the default behaviour of "/etc/init.d/php-fpm reload" (it reloads all pools).

What's needed to achieve this:

  1. Each PHP-FPM pool needs its own init.d start/stop script
  2. Each PHP-FPM pool needs its own php-fpm.conf file to have a unique PID

If you manage your environment via a configuration management tool such as Puppet/Chef/Salt/Ansible, the above is not difficult to set up. If you do things manually, it can become a burden and difficult to manage.

Here's what an abbreviated configuration can look like. You now have a single .conf file that contains the configuration of the master process (PID etc.) as well as the definition of one PHP-FPM pool.

$ cat /etc/php-fpm.d/pool1.conf
[global]
pid = /var/run/php-fpm/pool1.pid
log_level = notice
emergency_restart_threshold = 0
emergency_restart_interval = 0
process_control_timeout = 0
daemonize = yes

[pool1]
listen = /var/run/php-fpm/pool1.sock
listen.owner = pool1
listen.group = pool1
listen.mode = 0666

user = pool1
group = pool1

pm = ondemand
pm.max_children = 5
pm.process_idle_timeout = 10s
pm.max_requests = 500

The above contains the most important bits: the global config determines that the process can be daemonized and where the PID file should be located, while the pool configuration holds the basic information on where to listen and which type of process manager to use.

The init.d file is a simple copy/paste from the default /etc/init.d/php-fpm with a few modifications.

$ cat /etc/init.d/php-fpm-pool1
#! /bin/sh
#
# chkconfig: - 84 16
# description:  PHP FastCGI Process Manager for pool 'pool1'
# processname: php-fpm-pool1
# config: /etc/php-fpm.d/pool1.conf
# pidfile: /var/run/php-fpm/pool1.pid

# Standard LSB functions
#. /lib/lsb/init-functions

# Source function library.
. /etc/init.d/functions

# Check that networking is up.
. /etc/sysconfig/network

if [ "$NETWORKING" = "no" ]
then
    exit 0
fi

RETVAL=0
prog="php-fpm-pool1"
pidfile=/var/run/php-fpm/pool1.pid
lockfile=/var/lock/subsys/php-fpm-pool1
fpmconfig=/etc/php-fpm.d/pool1.conf

start () {
    echo -n $"Starting $prog: "
    daemon --pidfile ${pidfile} php-fpm --fpm-config=${fpmconfig} --daemonize
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && touch ${lockfile}
}
stop () {
    echo -n $"Stopping $prog: "
    killproc -p ${pidfile} php-fpm
    RETVAL=$?
    echo
    if [ $RETVAL -eq 0 ] ; then
        rm -f ${lockfile} ${pidfile}
    fi
}

restart () {
        stop
        start
}

reload () {
    echo -n $"Reloading $prog: "
    killproc -p ${pidfile} php-fpm -USR2
    RETVAL=$?
    echo
}


# See how we were called.
case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  status)
    status -p ${pidfile} php-fpm
    RETVAL=$?
    ;;
  restart)
    restart
    ;;
  reload|force-reload)
    reload
    ;;
  condrestart|try-restart)
    [ -f ${lockfile} ] && restart || :
    ;;
  *)
    echo $"Usage: $0 {start|stop|status|restart|reload|force-reload|condrestart|try-restart}"
    RETVAL=2
        ;;
esac

exit $RETVAL

The only pieces we changed in that init.d script are at the top: a new, unique process name has been defined, and the PID file has been changed to point to the custom PID file for this pool, as defined in the pool1.conf file above.

You can now start/stop this pool separately from all the others. Its configuration can be changed without impacting the other pools. If you have multiple pools configured, your process list would look like this.
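
For example, reloading only pool1 with the init script above (the exact output will vary per distro):

$ /etc/init.d/php-fpm-pool1 reload
Reloading php-fpm-pool1:                                   [  OK  ]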

root      5963  4704 ?        Ss   19:23   0:00 php-fpm: master process (/etc/php-fpm.d/pool1.conf)
root      6036  4744 ?        Ss   19:23   0:00 php-fpm: master process (/etc/php-fpm.d/pool2.conf)

Multiple master processes are running as root, each listening on the socket defined in its pool configuration. As soon as PHP requests are made, they spawn children to handle them and stop those children again after 10s of idling. Each master process also shows which configuration file it loaded, making it easy to pinpoint the configuration of a particular pool.

As soon as PHP requests are made, the processlist looks like this.

root      5963  4704 ?        Ss   19:23   0:00 php-fpm: master process (/etc/php-fpm.d/pool1.conf)
user      3987  4504 ?        S    19:23   0:00  \_ php-fpm: pool pool1
user      3987  4504 ?        S    19:23   0:00  \_ php-fpm: pool pool1
root      6036  4744 ?        Ss   19:23   0:00 php-fpm: master process (/etc/php-fpm.d/pool2.conf)
user      3987  4504 ?        S    19:23   0:00  \_ php-fpm: pool pool2

To summarise, the above has 2 main advantages: a separate APC cache per PHP-FPM pool, and the ability to start/stop/reconfigure PHP-FPM pools without impacting the other defined pools. For anyone struggling with APC/realpath/stat cache issues on PHP deploys, this configuration could be a solution, by allowing (sudo) access to restart or reload the master PHP-FPM process of one particular pool in order to clear all its caches.

Things to keep in mind when doing this:

Feedback appreciated!

by Mattias Geniar at April 09, 2014 06:20 PM

April 08, 2014

Wim Coekaerts

Easy access to Java SE 7 on Oracle Linux

In order to make it very easy to install Java SE 7 on Oracle Linux, we added a Java channel on ULN (http://linux.oracle.com). Here is a brief description of how to enable the channel and install Java on your system.

Enable the Java SE 7 ULN channel for Oracle Linux 6

- Start with a server or desktop installed with Oracle Linux 6 and registered with ULN (http://linux.oracle.com) for updates

This is typically done using uln_register on your system.

- Log into ULN, go to the Systems tab for your server/desktop and click on Manage Subscriptions

-> Ensure your system is registered to the "Oracle Linux 6 Add ons (x86_64)" channel (it should appear in the 'Subscribed channels' list)

If your system is not registered with the above channel, add it:

-> Click on "Oracle Linux 6 Add ons (x86_64)" in the Available Channels tab and click on the right arrow to move it to Subscribed channels. -> Click on Save Subscriptions

- In order to register with the 'Java SE 7' channel, you first have to install a yum plugin that enables access to channels with licenses:

# yum install yum-plugin-ulninfo
Loaded plugins: rhnplugin
This system is receiving updates from ULN.
ol6_x86_64_addons                                        | 1.2 kB     00:00     
ol6_x86_64_addons/primary                                |  44 kB     00:00     
ol6_x86_64_addons                                                       177/177
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package yum-plugin-ulninfo.noarch 0:0.2-9.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

========================================================================================================================
 Package                          Arch                 Version                    Repository                       Size
========================================================================================================================
Installing:
 yum-plugin-ulninfo               noarch               0.2-9.el6                  ol6_x86_64_addons                13 k

Transaction Summary
========================================================================================================================
Install       1 Package(s)

Total download size: 13 k
Installed size: 23 k
Is this ok [y/N]: y
Downloading Packages:
yum-plugin-ulninfo-0.2-9.el6.noarch.rpm                                                          |  13 kB     00:00     
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : yum-plugin-ulninfo-0.2-9.el6.noarch                                                                  1/1 
  Verifying  : yum-plugin-ulninfo-0.2-9.el6.noarch                                                                  1/1 

Installed:
  yum-plugin-ulninfo.noarch 0:0.2-9.el6                                                                                 

Complete!

- In future versions of Oracle Linux 6, this RPM will become part of the base channel and at that point you will no longer need to register with the Add ons channel to install yum-plugin-ulninfo

- Add the Java SE 7 channel subscription to your system in ULN

-> Click on "Java SE 7 for Oracle Linux 6 (x86_64) (Public)" in the Available Channels tab and click on the right arrow to move it to Subscribed channels

-> Click on Save Subscriptions

-> A popup will appear with the EULA for Java SE 7, click on Accept or Decline

- Now your system has access to the Java SE 7 channel. You can verify this by executing:

# yum repolist
Loaded plugins: rhnplugin, ulninfo
This system is receiving updates from ULN.
ol6_x86_64_JavaSE7_public:
By downloading the Java software, you acknowledge that your use of the Java software is 
subject to the Oracle Binary Code License Agreement for the Java SE Platform Products and 
JavaFX (which you acknowledge you have read and agree to) available 
at http://www.java.com/license.

ol6_x86_64_JavaSE7_public                                                                        | 1.2 kB     00:00     
ol6_x86_64_JavaSE7_public/primary                                                                | 1.9 kB     00:00     
ol6_x86_64_JavaSE7_public                                                                                           2/2
repo id                        repo name                                                                          status
ol6_x86_64_JavaSE7_public      Java SE 7 for Oracle Linux 6 (x86_64) (Public)                                          2
ol6_x86_64_UEKR3_latest        Unbreakable Enterprise Kernel Release 3 for Oracle Linux 6 (x86_64) - Latest          122
ol6_x86_64_addons              Oracle Linux 6 Add ons (x86_64)                                                       177
ol6_x86_64_ksplice             Ksplice for Oracle Linux 6 (x86_64)                                                 1,497
ol6_x86_64_latest              Oracle Linux 6 Latest (x86_64)                                                     25,093
repolist: 26,891

- To install Java SE 7 on your system, simply use yum install:

# yum install jdk
Loaded plugins: rhnplugin, ulninfo
This system is receiving updates from ULN.
ol6_x86_64_JavaSE7_public:
By downloading the Java software, you acknowledge that your use of the Java software is 
subject to the Oracle Binary Code License Agreement for the Java SE Platform Products
 and JavaFX (which you acknowledge you have read and agree to) 
available at http://www.java.com/license.

Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package jdk.x86_64 2000:1.7.0_51-fcs will be installed
--> Finished Dependency Resolution

Dependencies Resolved

========================================================================================================================
 Package           Arch                 Version                           Repository                               Size
========================================================================================================================
Installing:
 jdk               x86_64               2000:1.7.0_51-fcs                 ol6_x86_64_JavaSE7_public               117 M

Transaction Summary
========================================================================================================================
Install       1 Package(s)

Total download size: 117 M
Installed size: 193 M
Is this ok [y/N]: y
Downloading Packages:
jdk-1.7.0_51-fcs.x86_64.rpm                                                                                                         | 117 MB     02:27     
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : 2000:jdk-1.7.0_51-fcs.x86_64                                                                                                            1/1 
Unpacking JAR files...
	rt.jar...
	jsse.jar...
	charsets.jar...
	tools.jar...
	localedata.jar...
	jfxrt.jar...
  Verifying  : 2000:jdk-1.7.0_51-fcs.x86_64                                                                                                            1/1 

Installed:
  jdk.x86_64 2000:1.7.0_51-fcs                                                                                                                             

Complete!

- You now have a complete Java SE 7 install in your Oracle Linux environment.

# ls /usr/java/jdk1.7.0_51/
bin  COPYRIGHT  db  include  jre  lib  LICENSE  man  README.html  release  src.zip  
THIRDPARTYLICENSEREADME-JAVAFX.txt  THIRDPARTYLICENSEREADME.txt
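
A quick sanity check (the exact build numbers in the output will vary with the release you install):

# /usr/java/jdk1.7.0_51/bin/java -version
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)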

by wcoekaer at April 08, 2014 06:10 PM

Lionel Dricot

Gigas chocottes chez les opérateurs téléphoniques

zombiephone

For years, telephone operators have strived to offer pricing plans of post-relativistic complexity, supposedly to better fit your needs: 15 cents per minute during off-peak hours on full-moon Thursdays, 3,000 text messages per week of four Thursdays, free calls to numbers that are prime, and… 20 MB of data per month. Yay!

Except that I send at most 15 text messages and make 10 calls of less than a minute per month. The only thing that matters to users like me? The price of a gigabyte! That's why I take the €15 Mobile Vikings plan, which puts the gigabyte at €7.50 (and why I don't take the €25 or €50 plans).

The gigabyte cannot be billed

For the operators, this poses a sizeable problem. The GB is not at all intuitive. Outside the geeky crowd, customers have no idea what a GB is. Some have 20 MB per month, others 1 GB, and they understand neither the difference nor what they can do with it. In the end, many block 3G altogether for fear of paying without understanding.

But even for a technician, the GB cannot be controlled. You can always watch the duration of a call and hang up, or stop sending text messages. But when you click a link, you cannot control whether your browser will download a high-resolution version of a photo or an optimized one. Not to mention the ads which, besides corrupting your brain, directly cost you money!

Today, most of our smartphones' services rely on the availability of permanent Internet access. Sometimes a bug or a badly coded application causes a spike in connections. Could you be held responsible for your smartphone's connections and be handed an astronomical bill? Most of the time this problem has come up, the operator has preferred to make "a commercial gesture" rather than go to court.

In short, the gigabyte is, as such, unbillable.

The death of competition

Ce n’est pas tout ! La complexité des offres entretenait la compétition. Personne n’y comprenant rien, on se retrouvait finalement chez celui qui avait les vendeurs les plus persuasifs. Avec le « tout-au-net » que nous sommes en train de vivre, la concurrence se fait uniquement sur le coût du Giga. Plus besoin d’un doctorat en mécanique quantique pour comprendre quelle offre est la plus intéressante !

For the operators, fighting on price means cutting deep into their margins. And that risks leading to the gradual disappearance of competition. In the end, everyone will be with the cheapest operator, who will be able, through economies of scale, to lower the price of its gigabyte a little further still.

Net neutrality

C’est pour lutter contre ces deux énormes failles dans leur business model que les opérateurs souhaitent avec tant d’insistance pouvoir discriminer les paquets du réseau en fonction du site et du contenu. Histoire de pouvoir offrir des abonnements « Facebook illimité et 3h de vidéo Youtube par mois ».

This discrimination would artificially preserve complexity in a product that has become too simple, maintain competition (this operator is better if you're on Facebook a lot, otherwise use that one) and avoid putting out of work that army of salespeople who concoct for us, every year, those ultra-complex formulas so perfectly suited to my daily usage that my teeth whiten and I jump for joy! (At least, that's what the ads promise.)

Of course, undermining net neutrality could have dramatic consequences in terms of censorship, freedom of expression and democracy. It would reinforce a multi-speed network where a few players would have absolute control. But in the operators' eyes, these considerations are secondary next to the questioning of their business model. Especially since they would be the ones in power.

Je n’aimerais pas être un opérateur

Ce que j’aime beaucoup avec cette histoire, c’est qu’il s’agit d’un exemple particulièrement édifiant de la manière dont Internet remet en cause un business bien implanté. Pour l’utilisateur comme vous et moi, Internet bon marché et partout est une bénédiction. Plus de soucis à communiquer avec une personne qui se trouve à l’autre bout du monde. Pas d’obligation de couper les données mobiles dès qu’on s’approche trop près d’une frontière. Plus de limitations arbitraires dans la manière dont nous communiquons. Un choix clair et évident pour un service bon marché.

Put yourself in the operators' shoes, and these blessings become plagues. The intensive lobbying against net neutrality makes more sense. You sense the fear of disappearing in the barely veiled threats: "without us, there will be no more infrastructure, no more investment". You realize the economic value of all those borders that justify roaming the moment you approach them. In short, the telephone operators have become adversaries of their own users. They are fighting to keep our lives from getting better!

But for once, and it's a pleasure to point it out, the political world has not been fooled. The European Parliament supports net neutrality and a ban on roaming charges.

Of course, victory is far from won. But the telephone operators are now officially going through the grieving process. And unlike many zombie industries, it looks like this time the politicians are not going to engage in therapeutic obstinacy.

In short, get the popcorn out. The coming years are going to be giga exciting!

Photo by Pete.

Thank you for taking the time to read this text. This blog is paid-for, but you are free to choose the price. You can support the writing of these posts via Flattr, Patreon, IBAN transfers, Paypal or bitcoins. But the best way to thank me is simply to share this text around you, or to help me find new challenges in 2014.


by Lionel Dricot at April 08, 2014 03:03 PM

Mattias Geniar

Patch against the heartbleed OpenSSL bug (CVE-2014-0160)

A very unfortunate and dangerous bug has been discovered in OpenSSL that allows an attacker to read otherwise sensitive information protected by OpenSSL's encryption. In some cases, it allows an attacker to retrieve the private key of certificates. The vulnerability is known as CVE-2014-0160.

The bug has been fully disclosed on the site heartbleed.com. Unfortunately, someone went through a lot of trouble to get massive publicity for this bug/vulnerability but did not notify the OpenSSL project first. So the vulnerability is now public, but the software may not yet be patched everywhere.

How do you protect yourself? Update OpenSSL!

Most distros already have a patched version of OpenSSL included. In the case of CentOS, a workaround has been created by removing the vulnerable pieces of code from OpenSSL. A full patch is expected in the next few days.

Red Hat / CentOS / fedora

$ yum update openssl
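
Whether the updated package actually contains the fix is worth verifying. On RPM-based systems, one quick sanity check (a sketch; the exact changelog wording varies per distribution) is to grep the package changelog for the CVE identifier:

$ rpm -q --changelog openssl | grep CVE-2014-0160

If a matching entry is printed, the installed package carries the fix.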

Debian / Ubuntu

$ apt-get update
$ apt-get install openssl

Restart services that rely on OpenSSL

You can find all the services on your system by running the following command as root. It lists all services that rely on libssl.

$ lsof | grep libssl | awk '{print $1}' | sort | uniq

After the update of OpenSSL, every one of those services needs to be restarted.
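
If you want to script the restarts, something along these lines can serve as a starting point. This is only a sketch: lsof reports process names, which do not always match init script names, so review the list before letting it loose on a production box.

# restart every service whose process maps libssl -- verify the names first!
for svc in $(lsof | grep libssl | awk '{print $1}' | sort | uniq); do
    service "$svc" restart
done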

Consider re-issuing your certificates

Since this vulnerability allowed an attacker to possibly get your private keys (without leaving a trace in your logs), you should consider replacing all your certificates. This of course comes down to money; a re-issue will cost you some $$.

If you're not running a high-profile website over SSL, I would assume you're probably safe. If you're dealing with millions of dollars in transactions every day and SSL is one of the ways to protect your clients, then yes -- consider issuing all new certificates and consider the current private keys as compromised.

How do you know if you're vulnerable?

There are a few tools to help you test if you're vulnerable. For now (April 8th, 2014), it's safe to assume that if you're running anything that uses OpenSSL for TLS -- web servers, mail servers, VPNs and the like -- you're vulnerable until you update your OpenSSL version.

You can use the tools below to test if you are actually vulnerable.

  1. Heartbleed Test: a website that allows you to enter any (publicly available) URL and test for the exploit (alternative site is possible.lv/tools/hb).
  2. Heartbleeder: a script written in Go to test the vulnerability.
  3. ssltest.py: a python script to test this vulnerability. (github mirror here)

I'd be happy to hear of other alternatives to protect yourself.

by Mattias Geniar at April 08, 2014 11:54 AM

Xavier Mertens

The Day Windows XP Died!

XP Tombstone. Tuesday the 8th of April 2014: a page of computer industry history has been turned! Windows XP is dead! Of course, I had to write a blog post about this event. For months now, Microsoft has been warning its customers that XP would no longer be supported as of today. Do you remember? Windows XP was available on floppies and had – in the beginning – no native USB support! What does it mean today? From an end-user's point of view, their computer will not collapse! No need to recite voodoo formulas; it will boot again and work like yesterday… unless something bad happens. In that case, Microsoft won't help you (instead they will be very happy to offer you an upgrade to Windows 8.1). Well, this is not 100% true: Microsoft is still ready to “offer” you some support if you subscribe to their Premium Service program! (Business is business)

Things are nastier from a security point of view! Your computer will still run but will be vulnerable to new attacks. By “new” I mean the ones that will be discovered (because XP will be a very nice target given its installed base – see the graph below). But I’m also pretty sure that some vulnerabilities have been discovered a while ago and kept below the radar, ready to be used in the wild. And this may happen as soon as tomorrow. People are still migrating to newer operating systems and the attack surface will shrink over time. From an attacker’s perspective, now is the right time!

But is this old Windows XP still a problem? People have had quite a long time to switch to an alternative OS, right? Have a look at the following statistics. They come from this blog and are based on the last 30 days:

Windows Statistics

Based on Google Analytics, 11% of my visitors are still using Windows XP! Based on my regular audience and the content of this blog, I would expect people to have a “high-level profile”: IT professionals, infosec people, etc. Those people should have gotten rid of XP a while ago. Ok, let’s shave off a few percent for fake User-Agents used by some of you, or bots and crawlers. Let’s settle on a final estimate of 7-8%. This remains a huge number of vulnerable computers (and my blog does not generate a lot of traffic). I’m curious to see statistics for big players on the web… Can somebody share?

If you’re still using XP today, look above your head: there is a sword of Damocles! Windows XP was not only used on desktop computers. There are plenty of services still running on top of it:

What can you do against this? First reaction: upgrade as soon as possible (for laptops & desktops). Installations like medical devices have a bad reputation for being hard (or impossible) to upgrade. In all other cases, security best practices apply as usual:

Finally, if you have old applications, test them on a newer OS in the “Windows XP” compatibility mode. Please take action today!

by Xavier at April 08, 2014 05:21 AM

April 06, 2014

Lionel Dricot

I sold my soul for a Chromebook

chromebook

For a few months now, I have caught myself using my computer less and less in favor of my tablet. Besides, most of my documents are accessible through online services. But the tablet has one major drawback: it has no keyboard.

Even though I keep an easy-to-plug-in keyboard within reach, the tablet discourages me from writing. It is above all a content-consumption device. Is there a similar device dedicated to producing content? To find out, and also out of pure curiosity, I decided to dive into the Chromebook universe.

Peace of mind

What strikes you when you first pick up a Chromebook is how fast and lightweight the process is. You open your new Chromebook, you enter your Google credentials, and that's it.

You can take it anywhere without risk or worry: if you lose or break your Chromebook, you just buy another one and you are immediately back on your desktop with your wallpaper, your icons and all your services.

Between carrying around a €300 Chromebook and a computer four or five times more expensive whose last backup is several weeks old, the choice is quickly made. For that peace of mind alone, the Chromebook is an excellent product.

Dependence on Google

Of course, this peace of mind comes at a price. And that price is steep: total dependence on Google. Personally, I have long been convinced by the principle of a thin client connected to cloud services. Unfortunately, free cloud services (such as Owncloud) still lag far behind, and it seems reasonable to me, in the meantime, to use proprietary services like Dropbox.

Except that, in this particular case, Google abuses its dominant position by integrating the Chromebook directly with Google Drive without leaving any possibility of using an alternative.

This impossibility of using any service other than Google Drive to simply access your files leaves a bitter taste. Using a Chromebook means selling your soul to Google for good. You might as well know it.

The ecosystem

The main criticisms of the Chromebook concern the lack of applications: no Photoshop, no way to install a development environment, and so on.

But keep in mind that the Chrome universe is still young. Switching to Chrome is a bit like switching from Windows to Linux 10 years ago. You have to accept changing certain habits. Personally, I was very surprised by the richness of the Chrome application ecosystem. Applications like Pixlr Touch Up perfectly cover my meager image-editing needs, and more simply than a Gimp, in which I have always been lost.

Is development on Nitrous.io still quite limited? Is there no application that exactly matches my specific need? No way to open that zip file? I think it is only a matter of time before lasting solutions appear or before new usage patterns make certain needs obsolete.

Poor handling of offline mode

While the lack of features is not a real problem for me, I was surprised by the incredible failure that is the offline mode.

Let's be realistic: in today's world, we are often disconnected. The movement of a train prevents a stable 3G connection, wifi on planes remains an exception, service provider outages are common, not to mention the acquaintances who have no wifi at home.

With Google Drive and Google Music, Google has proven that it can handle offline mode efficiently: depending on the space available on your device, these applications will try to cache the files you are most likely to want to access. In the same spirit, the Dropbox application on Android lets you mark folders as available offline.

But the Chromebook does not follow this logic. Google Drive, which is the main hard drive of your Chromebook, only allows offline access to Google Doc files! If you are working offline on a text file or an image and you accidentally close the editor, you will have to reconnect to reopen that file. (EDIT: actually, offline mode also works with text files, but they have to be marked individually and there is no intelligence as there is for Google Docs.)

Faced with this criticism, some Google fanatics advocate using Google-only solutions. For example: turning all text files into Google Keep notes.

Except that even with its own applications, Google can get it badly wrong. Google Keep, for example, only synchronizes while it is running. If you have notes modified offline, you must remember to relaunch Keep once the connection is re-established in order to synchronize. And, for no reason, Google Drive or Google Keep will sometimes warn you to copy your content and refresh the page, as your changes could not be saved. Gmail's offline mode is also so different from the connected mode that I find myself preferring to hunt down a connection at all costs rather than use it.

Are these the growing pains of the Chromebook or, on the contrary, a deliberate attempt to reinforce the need to be connected everywhere, all the time? Either way, this dependence on a connection goes against the total peace of mind that I saw as the Chromebook's main selling point.

Almost total efficiency

But if there is one thing I take away from my first week spent almost exclusively on a Chromebook, it is the feeling of having a machine truly dedicated to work.

In the twenty years I have been using a computer daily, administering the machine has represented a non-negligible workload. You constantly have to apply updates, clean things up, install a new piece of software and uninstall an old one. There is so much to do on a computer that you can spend your whole day on it while having the impression of being productive.

Over the past decade, these tasks have been my greatest source of procrastination. How many times have I put off an urgent task because I wanted to test the latest version of software X? How many times have I decided to "clean up my hard drive in order to be more productive"? And even when I was motivated, how many times have I been interrupted by a popup reminding me to install an update?

The Chromebook has put me in another universe. When I open the machine, I realize that I have nothing to do except the tasks on my todo list. A single key switches any application to full-screen mode and I can devote myself entirely to one single idea. Even the keyboard, which finally does away with the abstruse function keys, and the touchpad, packed with handy shortcuts, seem to have been designed with a single goal: making my life easier.

A machine that helps me be productive, that stays out of my way, that I am not afraid to break or lose. That is the price at which I sold my soul. Add real, transparent support for offline mode and I will sell it to you a second time, gift-wrapped.

 

Photo by Morid1n.

Thank you for taking the time to read this text. This blog is paid-for, but you are free to choose the price. You can support the writing of these posts via Flattr, Patreon, IBAN transfers, Paypal or in bitcoins. But the best way to thank me is simply to share this text around you or to help me find new challenges in 2014.

flattr this!

by Lionel Dricot at April 06, 2014 11:18 AM

April 05, 2014

Ruben Vermeersch

Benchmarking on OSX: HTTP timeouts!

I’ve been doing some HTTP benchmarking on OSX lately, using ab (ApacheBench). After a large volume of requests, I always ended up with connection timeouts. I used to blame my application and mentally filed it as “must investigate”.

I was wrong.

The problem here was OSX, which seems to have only roughly 16000 ports available for connections. A port that was used by a closed connection is only released after 15 seconds. A quick calculation (16000 ports ÷ 15 seconds ≈ 1066) shows that you can only sustain a rate of about 1000 connections per second. Try to do more and you'll end up with timeouts.

That’s not acceptable for testing pretty much anything that scales.

 

Here’s the workaround: you can control the 15 seconds release delay with sysctl:

sudo sysctl -w net.inet.tcp.msl=100

There’s probably a good reason why it’s in there, so you might want to revert this value once you are done testing:

sudo sysctl -w net.inet.tcp.msl=15000
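
To know exactly what to restore, you can read the current value before changing anything (a read-only query; the value is expressed in milliseconds):

sysctl net.inet.tcp.msl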

 

Alternatively, you could just use Linux if you want to get some real work done.

by Ruben at April 05, 2014 04:57 PM

April 03, 2014

LOADays Organizers

Roadworks

Some practical information for people coming by car. The A12 motorway will be closed during the LOADays weekend. People coming from Brussels towards Antwerp should take the E19 highway instead. More info can be found here (Dutch).

Translated : link

by Loadays Crew at April 03, 2014 10:00 PM

April 01, 2014

LOADays Organizers

New Updates

LOADays is a free open source event. You do not need to register or announce that you are coming. However, you might want to tell the world about our event.

The Build-your-own-OpenNebula-Cloud day on Monday 7/4/2014 is also a free open source tutorial day, held just after LOADays. While we appreciate it if you register beforehand, again this is not a requirement; it just makes our life easier.

For the speakers, please note that you should have received a mail relating to:

  1. Hotel reservations
  2. Speakers dinner

Please take a look at our still provisional schedule online. Check it out:

Loadays 2014 schedule

Notice: the schedule will still change

by Loadays Crew at April 01, 2014 10:00 PM

Ruben Vermeersch

Release Notes: Mar 2014

What’s the point of releasing open-source code when nobody knows about it? In “Release Notes” I give a round-up of recent open-source activities.

Slightly calmer month, nonetheless, here are some things you might enjoy:

 

angular-debounce (New, github)

Tiny debouncing function for Angular.JS. Debouncing is a form of rate-limiting: it prevents rapid-firing of events. You can use this to throttle calls to an autocomplete API: call a function multiple times and it won’t get called more than once during the time interval you specify.
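
For readers new to the pattern, this is the core idea in plain JavaScript (a generic sketch of debouncing, not the angular-debounce API itself):

function debounce(fn, wait) {
    var timer = null;
    return function () {
        var self = this, args = arguments;
        // Restart the countdown on every call; fn only fires once the
        // calls have stopped coming in for `wait` milliseconds.
        clearTimeout(timer);
        timer = setTimeout(function () {
            fn.apply(self, args);
        }, wait);
    };
}

The angular-debounce version builds on this idea but integrates with Angular's event loop, which a plain implementation like the one above does not.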

One distinct little feature I added is the ability to flush the debounce. Suppose you are periodically sending the input that’s being entered by a user to the backend. You’d throttle that with debounce, but at the end of the process, you’ll want to immediately send it out, but only if it’s actually needed. The flush method does exactly that.

Second benefit of using an Angular.JS implementation of debounce: it integrates in the event loop. A consequence of that is that the testing framework for E2E tests (Protractor) is aware of the debouncing and it can take it into account.

 

angular-gettext (Updated, website, original announcement)

A couple of small feature additions to angular-gettext, but nothing shocking. I’m planning a bigger update to the documentation website, which will describe most of these.

 

ensure-schema (New, github)

Working with a NoSQL store (like MongoDB) is really refreshing in that it frees you from having to manage database schemas. You really feel this pain when you go back to something like PostgreSQL.

The ensure-schema module is a very early work-in-progress module to lessen some of that pain. You specify a schema in code and the module ensures that your database will be in that state (pretty much what it says on the box).

var schema = function () {
    this.table("values", function () {
        this.field('id', 'integer', { primary: true });
        this.field('value', 'integer', { default: 3 });
    });
    this.table("people", function () {
        this.field('id', 'integer', { primary: true });
        this.field('first_name', 'text');
        this.field('last_name', 'text');
        this.index('uniquenameidx', ['first_name', 'last_name'], true);
    });
};
ensureSchema('postgresql', db, schema, function (err) {
    // Do things
});

It supports PostgreSQL and SQLite (for now). One thing I specifically do not try to do is database abstractions: there are other tools for that. This means you’ll have to write specific schemas for each storage type.

There’s a good reason for that: you should pick your storage type based on its strenghts and weaknesses. Once you pick one, there’s no reason to fully use all of its capabilities.

This module is being worked out in the context of the project where I use it, so things could change.

 

Testing with Angular.JS (New, article)

Earlier last month I gave a presentation for the Belgian Angular.JS Meetup group:

ngmeetup-testing.001

The slides from this presentation are now available as an annotated article. You can read it over here.

by Ruben at April 01, 2014 06:06 AM

March 31, 2014

Mattias Geniar

Presentation: Code Obfuscation, PHP shells & more: what hackers do once they get past your code

I recently gave a presentation titled "Code Obfuscation, PHP shells & more: what hackers do once they get past your (PHP) code". I've received positive feedback, which is why I think this may be worth sharing. This presentation is based on nearly a decade of experience working at Nucleus.be.

Any comments are greatly appreciated.

If the presentation embed doesn't work, it's viewable online at:

If you'd like to hear this presentation again on a User Group or conference, let me know via @mattiasgeniar.be or via mail at m@ttias.be.

by Mattias Geniar at March 31, 2014 06:56 PM

Frank Goossens

Music from Our Tube; A/T/O/S – “What I Need”

Last Saturday shortly before midnight I was listening to the radio, my ear-buds plugged in tightly, slowly falling asleep, only for this tune to awaken my auditory senses, urging me to wake up and really listen;

YouTube Video
Watch this video on YouTube or on Easy Youtube.

A/T/O/S or “A Taste of Struggle” is the project of Belgian producer Truenoys and singer Amos. Their debut album is released on Deep Medi Musik.

Triphop, dark soul and silence as an integral part of the music (à la James Blake). Not your average summertime sing-a-long, but a very impressive track nonetheless!

by frank at March 31, 2014 01:26 PM

March 28, 2014

Dries Buytaert

Entrepreneurship is 80% sales and marketing

Background in business is a 'nice to have', not a 'must have' for an aspiring entrepreneur. I had no solid business background when I founded Mollom or Acquia (I launched them roughly at the same time).

Other than the standard things (an idea, passion and the willingness to act), the most important thing that aspiring entrepreneurs need is the understanding that 80% of entrepreneurship is sales and marketing. If, as a founder, you're not obsessed with sales and marketing, you're a liability rather than an asset.

You don't have to be the best sales and marketing guy (I am far from that), but you better enjoy getting other people excited about your project, company or product. It will help you not only with finding customers, but also with recruiting a world-class team, raising venture capital, and more. So if there is one thing you should learn before starting a company, it is "sales and marketing" (in the broad sense) — and you better be passionate about it, because you'll invest years of your life in selling and evangelizing to make your company a success. Without customers or a team, you won't need any other skills, because you'll be out of business.

You need to be talking about your idea all the time. Too many entrepreneurs believe that if they build a killer product, customers will come. It almost never works like that. Smart entrepreneurs do it backwards; they find customers first and build their product only when they have customers ready to start paying. Not testing the market by selling from day one can lead to months, if not years, of wasted time and money. So stop being so secretive about your idea. You will never find your product-market fit by keeping your idea secret until it is perfect. If you're afraid of people telling you that your idea is stupid, chances are you may not be ready to be an entrepreneur.

by Dries at March 28, 2014 05:29 PM

March 27, 2014

Kristof Provost

Bug reports

I've noticed that many testers, project/program/product managers, ... have no idea how to properly report bugs or request new features.

It's well known that developers will do everything they can to avoid actually writing code, so it's vitally important to avoid falling for their traps when reporting a bug. My fellow developers will be angry with me for giving away this secret information, but, well, it wants to be free!

Here are a few hints:

In conclusion, the ideal bug report is "It breaky. You fix.".

In case someone doesn't get it, the above suggestions are NOT something you should do in bug reports. Unfortunately just about all of these are based on one or more bug reports I have received over the years.

March 27, 2014 09:22 PM

Wouter Verhelst

GSS-TSIG

While reading up on dnssec-keygen and other related stuff in order to update my puppet bind module to support DNSSEC transparently, I accidentally stumbled across something in the BIND Administrator's Reference Manual called GSS-TSIG. Intrigued, I set out to learn more, and found that this is terribly easy to set up with BIND:
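
The manual has the full recipe, but the heart of it is a named.conf fragment along these lines (a sketch, assuming a service keytab in /etc/bind/dns.keytab, the realm EXAMPLE.COM and a principal user@EXAMPLE.COM; all of these are placeholders):

// named.conf (fragment)
options {
    // keytab containing the DNS service principal, used for GSSAPI
    tkey-gssapi-keytab "/etc/bind/dns.keytab";
};

zone "example.com" {
    type master;
    file "example.com.zone";
    update-policy {
        // allow user@EXAMPLE.COM to update A records anywhere in the zone
        grant "user@EXAMPLE.COM" subdomain example.com. A;
    };
};

A client that holds a valid ticket (kinit) can then send GSS-TSIG-signed updates with nsupdate -g.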

There are a few more interesting features; e.g., krb5-self can be used to allow the host/machine@REALM principal to update the A record for machine, and there are ways to specify ACLs by wildcard. For more information, see the BIND administrator's reference manual, chapter 6, in the section entitled "Dynamic Update Policies".

March 27, 2014 09:21 PM

Les Jeudis du Libre

Mons, le 24 avril : Montage vidéo en libre avec Kdenlive


Logo KdenliveCe jeudi 24 avril 2014 à 19h se déroulera la 28ème séance montoise des Jeudis du Libre de Belgique.

Le sujet de cette séance : Montage vidéo en libre avec Kdenlive

Thématique : Graphisme|vidéo

Public : Tout public

L’animateur conférencier : Thierry Gavroy (LoLiGrUB)

Lieu de cette séance : Université de Mons, Campus Plaine de Nimy, avenue Maistriau, Grands Amphithéâtres, Auditoire Curie (cf. ce plan sur le site de l’UMONS, ou la carte OSM).

La participation sera gratuite et ne nécessitera que votre inscription nominative, de préférence préalable, ou à l’entrée de la séance. Merci d’indiquer votre intention en vous inscrivant via la page http://jeudisdulibre.fikket.com/

Cette séance sera suivie d’un verre de l’amitié, offert par la Faculté des Sciences de l’UMONS (le tout sera terminé à 22h).

Si vous êtes intéressé(e) par ce cycle mensuel, n’hésitez pas à consulter l’agenda et à vous inscrire sur la liste de diffusion afin de recevoir systématiquement les annonces.

Pour rappel, les Jeudis du Libre se veulent des rencontres autour de thématiques des Logiciels Libres. Les rencontres montoises se déroulent chaque troisième jeudi du mois (sauf exceptions comme cette fois), et sont organisées dans des locaux et en collaboration avec des Hautes Écoles et Facultés Universitaires du Pôle Hainuyer d’enseignement supérieur impliquées dans les formations d’informaticiens (UMONS, HEH et Condorcet), et avec le concours de l’A.S.B.L. LoLiGrUB, active dans la promotion des logiciels libres.

Description :

Il existe une offre importante de logiciels libres consacrés à la création de vidéos : Kino, Avidemux, Pitivi, Openshot, VLMC, KDenlive, Cinelerra, Blender…

L’exposé sera l’occasion d’expliquer, de façon pratique et progressive comment réaliser un montage vidéo sur base d’une collection personnelle de séquences filmées, de photos et de bandes sons. Thierry Gavroy nous fera partager son expérience avec Kdenlive, qu’il utilise pour ses besoins privés et pour la réalisation de vidéos de quelques conférences des Jeudis du Libre

Kdenlive, sous licence GNU GPL, fonctionne sous Linux et Mac OS X. Il gère les format DV et HDV (mpeg2, AVCHD) et de nombreux autres formats, sans qu’il soit nécessaire d’importer et de convertir les fichiers. Kdenlive peut utiliser plusieurs pistes vidéo et audio pour le montage, et offre un choix important d’effets spéciaux et de transitions. Son interface claire et intuitive le rend intéressant à la fois pour un public débutant en montage vidéo mais aussi pour des monteurs confirmés.

La présentation s’appuiera sur des extraits de cette vidéo de la conférence de décembre 2013 donnée par Roberto Di Cosmo.

by Didier Villers at March 27, 2014 06:12 AM

March 26, 2014

LOADays Organizers

Schedule

We have got a provisional schedule online. Check it out :

Loadays 2014 schedule

notice: the schedule is still subject to change.

by Loadays Crew at March 26, 2014 11:00 PM

Dries Buytaert

Do well and do good

This blog post is on purpose, Open Source, profit and pie. This week I had an opportunity to meet Professor Klaus Schwab, Founder and Executive Chairman of the World Economic Forum. I was inspired by the following comment he made (not his exact words):

"Because companies strive to have a positive balance sheet, they take more in, than they give out. However, as individuals, we define success as giving more than you take. Given that many of us are leaders as individuals *and* also leaders in our businesses, we often wrestle with these opposing forces. Therein lies the leadership challenge."

I’ve seen many Open Source developers struggle with this as they are inherently wired to give back more than they take. Open Source developers often distrust businesses, sometimes including their own employer, because they take more than they give back. They believe businesses just act out of greed and self-interest.

This kind of corporate distrust comes from the “fixed-pie concept"; that there is only so much work or resources to go around, and as pieces of the pie are taken by some, there is less left for everyone else. The reality is that businesses are often focused on expanding the pie. As the pie grows, there is more for everyone. It is those who believe in the "expanding-pie concept" who can balance the opposing forces. It is those who believe in the "fixed-pie concept" who worry about their own self-interests and distrust businesses.

Imagine a business that is born out of a desire to improve the world, that delivers real value to everyone it touches. A business that makes employees proud and where team members are passionate and committed. A business that aspires to do more than just turn a profit. A business that wants to help fuel a force of good. That is Acquia for me. That is what your employer should be for you (whoever your employer is).

The myth that profit maximization is the sole purpose of business is outdated, yet so many people seem to hold on to it. I started Acquia because I believed in the potential and transformative nature of Drupal and Open Source. The purpose of business is to improve our lives and create value for all stakeholders.

Acquia's growth and capital position has given me power and responsibility. Power and responsibility that has enabled me to give back more and grow the pie. I have seen the power that businesses have to improve the world by accelerating the power of good, even if they have to take more than they give. It's a story worth telling because business is not a zero-sum game with one winner. I believe Open Source companies are in a prime position to balance the opposing forces. We can do well and do good.

by Dries at March 26, 2014 02:38 PM

Wouter Verhelst

LDAP and Kerberos authentication

Kerberos is a great protocol for single sign-on authentication. It's supported by many protocols, allowing you to not have to enter a password to each and every one of them; instead, the protocols behind the scenes (not "just" Kerberos, but also the things that embed tickets, such as GSSAPI, or the things that embed GSSAPI, such as SASL or SPNEGO) use your ticket-granting-ticket to ask for security credentials for the service you're trying to use, magic happens, and you're authenticated. I blogged about kerberos before (even if it was ages ago); since then, I've not only used it on my own systems, but also on the systems of various customers.

One thing I've learned in that time, however, is that most web application developers have a bad case of NIHilism when it comes to authentication. Most webservers that I've seen have a wide range of methods to do authentication in the webserver through various means, including things like certificate-based authentication, one-time password modules, and, yes, kerberos. Yet almost no webapp out there will look at the magic variables that those webservers set to explain we're authenticated, instead reinventing the wheel through webforms and other various stupid means. Sigh.

So, that means, no kerberos authentication for webapps. Worse, if the application has no way to pass on authentication to something external, that means users will now have to learn another password: one for Kerberos, one for the webapp. And, probably, one for this other webapp, too -- because once you add one webapp, people expect you to add more of them.

Well, mostly. In some cases, webapps do have ways to externalize authentication. In most cases this means "store passwords in a database", or "try authenticating against this other service here".

When "this other service here" is an IMAP server, then all you need to do is make sure cleartext authentication on the IMAP server eventually ends up trying to authenticate against the Kerberos server, and you're all set. When "this other service here" is an LDAP server, however, you're out of luck. Right?

It turns out that no, you're not. I recently learned that OpenLDAP can, in fact, check "simple" bind requests by checking some other service, and that this other service can be a Kerberos realm. Doing so is called "Pass-through authentication" in the OpenLDAP documentation, and this is how you do it:
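
(These two lines go into slapd's SASL configuration file; on Debian that is typically /etc/ldap/sasl2/slapd.conf, though the exact path is an assumption worth verifying, as it depends on how cyrus-sasl was packaged.)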


mech_list: plain
pwcheck_method: saslauthd

This directs the SASL libraries, loaded by slapd, to hand "plain" authentication requests over to saslauthd. Make sure slapd has the correct permissions to access the saslauthd unix domain socket. On Debian, that means you need to add the "openldap" user to the "sasl" group:

`adduser openldap sasl`

(obviously that won't be active until the next slapd restart)

And that's it; if you now try a simple bind against the LDAP directory, and enter your Kerberos password, you should be in. If it doesn't work, try running "testsaslauthd"; if that works, it means the error is in your slapd configuration. If it doesn't, then the problem is in saslauthd.
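
On the saslauthd side, make sure it actually checks passwords against Kerberos. A sketch of /etc/default/saslauthd on Debian (assuming you want the kerberos5 mechanism rather than, say, pam):

START=yes
MECHANISMS="kerberos5"

Restart saslauthd after changing this, and re-test with testsaslauthd.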

Some notes:

Oh, and if you're a webapp developer: please please please make it easy for me to use an external authentication mechanism. This isn't hard; all you need is a separate page that will read the magic variables (probably REMOTE_USER if you're using apache), set a session variable or whatever, and then redirect to your normal "we've just logged in" page. ikiwiki gets this right; you can too!

(for added bonus points, have some way to declare a mapping from REMOTE_USER values to internal, readable, usernames, but it's not too crucial)

March 26, 2014 12:35 PM

March 25, 2014

Xavier Mertens

Pwned or not Pwned?

Pwn3d! Just before the announcement of the Full-Disclosure shutdown a few days ago, a thread generated a lot of traffic and finally turned into a small flame war. At the beginning of the month, a security researcher reported a vulnerability found on Youtube. According to him, the Google service was suffering from a file upload vulnerability. Reading this kind of post is juicy! Accepting files sent by visitors is always a touchy feature on a website. For example, if you allow your users to upload images to create an avatar, you must implement proper controls to be sure that the uploaded file is in the correct format and does not contain any malicious code. I won’t describe how to protect against this vulnerability, and even less discuss the Full-Disclosure thread, but it reveals an important fact: the severity of an issue is linked to its “context“…

When you are mandated by a customer to perform a pentest against his infrastructure, you always have two parties with different expectations. The customer hopes that nothing critical will be found and the pentester expects interesting stuff. In most cases, a pentest is based on highly technical tests and requires very specific tools and procedures. Once the fun is over, it’s time for the boring homework: writing the report. To this day, I have never met a pentester who likes writing reports! Unfortunately, there is no “translator” to explain technical findings in plain (English|French|Dutch|German) words (choose your best language). We are all dreaming of an interface like this one:

Business Translation

From a pentester’s point of view, it’s tempting to be cynical and point the finger at the customer: “Ah ah! You have a XSS on your homepage! Fail!“. First of all, this is not professional. Don’t forget that you’re in a customer-contractor relationship. The customer doesn’t expect blame but input to improve the overall security. It’s not the pentester’s job to provide a turnkey solution to fix an issue, but it must be properly reported with a valid severity. Again, I won’t discuss the severity of an XSS vulnerability because the context is a key element here. When I report an issue, there is of course a complete technical description of the issue, how to reproduce it and a PoC (proof-of-concept) which can be a screenshot, a password, a DB dump, … Then the vulnerability must be placed in the context of the customer’s business and properly rated. The final goal isn’t to stress the customer.

The rating scheme I’m using adds the following flags to findings:

Some examples? A likelihood of “rare” is assigned to an issue which requires a highly skilled and determined attacker with substantial resources. Consequences are translated into business impacts: a catastrophic consequence for system confidentiality is a major disclosure of highly confidential information.

To apply those flags correctly, a good understanding of the business (the context) is required. It’s easy to understand that a website operated by a major bank will be more critical than the website of your local flower shop. But, in the same vein, an XSS vulnerability on the corporate bank portal will have less impact than the same issue on the e-banking front-end! I always suggest that the customer and I sit down together around the table and review the issues found. An open discussion might change the initial flags that were applied. To conclude, each issue found during a pentest must be properly addressed and reviewed with the customer.

by Xavier at March 25, 2014 07:39 PM

Frank Marien

Rebooting ExtreMon

Aaah, Spring!

After all the initial excitement a few years back, and despite seeing active duty, I have to admit that ExtreMon wasn’t evolving as it should have these past few years.  Suffice it to say I didn’t have the time to work on it, despite the many requests.

https://github.com/m4rienf/ExtreMon

it says “m4rienf authored

https://github.com/m4rienf/ExtreMon-Display

“a year ago”

which is really sad.

So, it being spring, I decided to do something about it. I’m arranging my replacement in other projects in order to focus on ExtreMon, to bring IT monitoring into the 21st Century, and a few other, smaller but related real-time projects, mostly from my Wetteren, Belgium office.

If you don’t remember what ExtreMon is about, allow me to quote myself:

Designers and operators of critical systems like (nuclear) power plants, automated factories, and power distribution grid management installations have always [...]  taken for granted, [...], that information about a system is visualised on an exact schematic representation of that system, and that the information thus visualized is the current, up-to-date state of the system at all times, regardless of the technology used, and regardless of the distance between the system and the operator. [...] What is sought is called Situational Awareness, which we consider requires a monitoring system to be Real-time, Representative, Comprehensive, and to respect the cognitive strengths and limitations of its human operators. The IT sector, despite having successfully supported these other industries for decades and being one of the most impacted by the evolution towards supersystems, remains decades behind in terms of supporting Situation Awareness for its operators, remaining in a dispersed, non-interoperable, out-of-date mode with representations requiring the extensive cognitive efforts much like the dreaded “stove-pipe” accumulation of technologies that plague combat technologies and that military designers have been actively trying to resolve for many years.

Marien, F. (2012). ‘Mending The Cobbler’s Shoes: Supporting Situation Awareness in Distributed IT Supersystems.’. Master’s Thesis submitted to the University Of Liverpool. Available on request.
 

Immediate focus is on installability, converting the Display from Java WebStart to 100% JavaScript, rewriting one or two core components in C or C++ for maximum performance (smaller hardware, less power usage..), and hardware sensor integration (airco, UPS, heating systems).

I’m therefore looking for IT-driven organisations that have a need for schematic, live, internet-wide monitoring, power plant / chemical factory / NORAD style, and are willing to engage in an ideally Agile type process with my company to achieve this. The idea is that ExtreMon is FOSS, and that you will not be paying for the software, only for my time in getting you set up, optionally in managing your monitoring infrastructure, or hosting it, and any custom development that you would need (“custom” means you need something that no other ExtreMon user would be able to use).

If you would be interested or know of organisations that might be, I would very much like to hear about it.

There are a few pilot sites, but I would need several more to make this viable (and to avoid it becoming too focused on a particular organisation’s Modus Operandi). All sorts of cooperation modes are possible.

I need the Field Exposure and, “Frankly”,  you probably need the Situation Awareness!

WKR,

-f

by root at March 25, 2014 01:16 PM

March 24, 2014

Joram Barrez

Important: Activiti 5.15 and MySQL 5.6+ users

Giving it the attention it needs: http://forums.activiti.org/content/important-activiti-515-and-mysql-56-users

by Joram Barrez at March 24, 2014 09:36 PM

Steven Wittens

Shadow DOM

SVG, CSS, React and Angular

For a while now I've been working on MathBox 2. I want to have an environment where you take a bunch of mathematical legos, bind them to data models, draw them, and modify them interactively at scale. Preferably in a web browser.

Unfortunately HTML is crufty, CSS is annoying and the DOM's unwieldy. Hence we now have libraries like React. It creates its own virtual DOM just to be able to manipulate the real one—the Agile Bureaucracy design pattern.

The more we can avoid the DOM, the better. But why? And can we fix it?

Netscape
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
  width="400px" height="400px" viewBox="0 0 400 400" enable-background="new 0 0 400 400" xml:space="preserve">
  <polygon fill="#FDBD10" stroke="#BE1E2D" stroke-width="3" stroke-miterlimit="10" points="357.803,105.593 276.508,202.82 
    343.855,310.18 226.266,262.91 144.973,360.139 153.592,233.697 36.002,186.426 158.918,155.551 167.538,29.109 234.885,136.469 "/>
  <polygon fill="#FDEB10" points="326.982,114.932 259.695,195.408 315.441,284.271 218.109,245.146 150.821,325.623 157.955,220.966 
    60.625,181.838 162.364,156.283 169.499,51.625 225.242,140.488 "/>
</svg>

Dangling Nodes

Take SVG. Each XML tag is a graphical shape or instruction. Like all XML, the data has to be serialized into tags with attributes made of strings. Large data sets turn into long string attributes to be parsed. Large collections of stuff turn into many separate tags to be iterated over. Neither is really desirable.

It only represents basic operations, so all serious prep work has to be done by the user up front. This is what D3 is used for, generating and managing more complex mappings for you.

When you put SVG into HTML, each element becomes a full DOM node. A simple <tag> with attributes is now a colossal binding between HTML, JS, CSS and native. It's a JavaScript object that pretends to be an XML tag, embedded inside a layout model that takes years to understand fully.

Its namespace mixes metadata with page layout, getters and setters with plain properties, native methods with JS, string shorthands with nested objects, and so on. Guess how many properties the DOM Node Object actually has in total? We'll be generous and count style as one.

A hundred is not even close. A plain <div> doesn't fare much better. Just serializing a chunk of DOM back into its constituent XML is a tricky task once you get into fun stuff like namespaces. Nothing in the DOM is as simple as JSON.stringify. Why does my polygon have a base URI?

We have all these awesome dev tools now, yet we're using them to teach a terrible model to people who don't know any better.

DOM Shader

In contrast, there's Angular. I like it because they've pulled off a very neat trick: convincing people to adopt a whole new DOM by disguising it as HTML.

<body ng-controller="PhoneListCtrl">
  <ul>
    <li ng-repeat="phone in phones">
      {{phone.name}}
      <p>{{phone.snippet}}</p>
    </li>
  </ul>
</body>

When you use <input ng-model="foo"> or <my-directive>, you're creating a controller and a scope, entirely separate from the DOM, with their own rules and chain of inheritance. The pseudo-HTML in the source code is merely an initial definition, most of it inert to the browser. Angular parses it out and replaces much of it.

Like React, the browser's live DOM is subsumed and used as a sort of render tree, a generic canvas to be cleverly manipulated to match a given set of views. The real view tree hides in the shadows of JS, where controllers operate on scopes. They only use the DOM to find each other on creation, and then communicate directly. The DOM is mostly there to trigger events, do layout and look pretty. Form controls are the one exception.

It's a bad fit because the DOM was built for text markup and there's tons of baggage in the form of inline spans, floats, alignment, indentation, etc. Most of these are layout systems disguised as typography, of which CSS now has several.

The whole idea of cascading styles is suspect. In reality, most styles don't actually cascade: paddings and backgrounds are set on individual elements. The inherited ones are almost all about typography: font styles, text justification, writing direction, word wrap, etc.

Think of it this way: why should a table have a font size? Only the text inside the table can have a font size, the table is just a box with layout that contains other boxes. Why don't we write table text { size: 16px; } instead of table { font-size: 16px; }? Text nodes exist today.

Well because that's how HTML's <font> tag worked. Instead of just making a selector for text nodes, they gave all the other elements font properties. They didn't get rid of font tags, they made them invisible and put one inside each DOM node.

<html><font>
  <body><font>
    <h1><font>Hello World</font></h1>
    <p><font>Welcome to the future.</font></p>
  </font></body>
</font></html>

Unreasonable Behavior

It was decided the world would be made of block and inline elements—divs and spans—and they saw that it was good, until someone came along and said, hey, so what about my table?

<table>
  <tr>
    <td>Forever</td>
    <td>Alone</td>
  </tr>
</table>

This <table> can't be replicated with CSS 1. Tables require a particular arrangement of children and apply their own box model. It's a directive posing amongst generic markup, just like Angular.

CSS has never been able to deliver on the promise of turning semantic HTML into arbitrary layout. We've always been forced to add extra divs or classes. These are really just attachment points for independent behaviors.

Purists see these as a taint upon otherwise pristine HTML, even though I've never seen someone close a website because the markup was messy. Not all HTML should be semantic. Rather, HTML stripped of its non-semantic parts should remain meaningful to robots.

CSS 2's solution was instead to make <table> invisible too, to go with the invisible <float>, <layer>, <clear> and <frame> tags which we pretended we didn't have. Watch:

17.2.1 Anonymous table objects

[…] Any table element will automatically generate necessary anonymous table objects around itself, consisting of at least three nested objects corresponding to a 'table'/'inline-table' element, a 'table-row' element, and a 'table-cell' element. Missing elements generate anonymous objects (e.g., anonymous boxes in visual table layout) according to the following rules […]
.grid {
  display: table;
}
.grid > ul {
  display: table-row;
}
.grid > ul > li {
  display: table-cell;
}

This is called Not Using Tables.

Without typographical styles, block elements start to look very different. They're styled boxes with implied layout constraints. They stack vertically, expand horizontally and shrink wrap vertically. Floated blocks are boxes that stack horizontally, and shrink wrap both ways. Tables are grids of boxes that are locked together.

Just think how much simpler CSS would be if boxes had box styles and text had text styles, instead of all of them having both. Besides, block margins and paddings don't even work the same on inline elements, there's a whole new layout behavior there.

So we do have two kinds of objects, text and boxes, but several different ways of combining them into layout: inline, stacked, nested, absolute, relative, fixed, floated, flex or table. We have optional behaviors like scrollable, draggable, clipped or overflowing.

They're spread across display, position, float and more, only meaningful in some combinations. And presence is mixed in there too. As a result, you can't unhide an element without knowing its display model. This is a giant red flag.

Thinking with Portals

It should further raise eyebrows that the binary world of inline and block now also includes a hybrid called inline-block.

Medium share thing

You generally don't need to embed a contact form–or all of Gmail—in the middle of mixed English/Hebrew poetry shaped like a bird. You just don't. To attach something to flowing text, you should insert an anchor point instead and add floating constraints. Links are called anchor tags for a reason. Why did we forget this?

Don't shove your entire widget right between the words. You'd inherit styles, match new selectors and bubble all your events up through the text just for the sake of binding a pair of (x, y) coordinates.

Heck, pointer events, cursors, hover states... these are for interactive elements only. Why isn't that optional, so mouse events wouldn't need to bubble up through inert markup? This would completely avoid the mouseover vs mouseenter problem. What is the point of putting a resize cursor on something that is dead without JavaScript? Pointer events shouldn't fire on inert children, and inert parents shouldn't care about interactive children. It's about boundaries, not hierarchy.

Things like SVG are better used as image tags instead of embedded trees, just slotting into place while ignoring their surroundings. They do need their own tree structure, but there is little reason to graft it onto HTML/CSS, inheriting original sin. The nodes have too little in common. At most you can share the models, not the controllers.

We should be able to manipulate them from the outside, like a <canvas>, but define and load them declaratively, like an image tag.

For that matter, MathML should really be a single inline text tag, optimized for math, not a bunch of tags. Regular text spans are not just "plain text". They are trimmed, joined, bidirectionalized, word wrapped and ellipsified before display. It's a separate embedded layout model that makes up the true, invisible <p> tag. A tag that HTML1 actually sort of got right: as an operator.

We create JavaScript with code, not as abstract syntax trees. Why should I build articles and embedded languages out of enormously nested trees, instead of just typing them out and adding some anchor tags around specific interesting parts? The DOM already inserts invisible text nodes everywhere. We didn't need to wrap all our words in <text> tags by hand just to embiggen one of them. The mutant MathML tree below could just look like this:

<math>x = (-b &pm; &Sqrt;(b^2 - 4 a c)) / 2a</math>
<math>x = (-b &pm &Sqrt(b^2 - 4 a c)) / 2a</math>

Wasn't HTML5 supposed to match how people write it? LaTeX exists.

And which is easier: defining a hairy new category of pseudo-elements like :first-letter and :first-line… or just telling people to wrap their first letter in a span if they really want to make it giant? It was ridiculous to have this feature in a spec that didn't include tables.

The :first-line problem should be solved differently: you define two separate blocks inside a directive, to spread markup across two children with a content binding. It's no different from flowing text across lines and columns.

<mrow>
  <mi>x</mi>
  <mo>=</mo>
  <mfrac>
    <mrow>
      <mrow>
        <mo>-</mo>
        <mi>b</mi>
      </mrow>
      <mo>&#xB1;<!--PLUS-MINUS SIGN--></mo>
      <msqrt>
        <mrow>
          <msup>
            <mi>b</mi>
            <mn>2</mn>
          </msup>
          <mo>-</mo>
          <mrow>
            <mn>4</mn>
            <mo>&#x2062;<!--INVISIBLE TIMES--></mo>
            <mi>a</mi>
            <mo>&#x2062;<!--INVISIBLE TIMES--></mo>
            <mi>c</mi>
          </mrow>
        </mrow>
      </msqrt>
    </mrow>
    <mrow>
      <mn>2</mn>
      <mo>&#x2062;<!--INVISIBLE TIMES--></mo>
      <mi>a</mi>
    </mrow>
  </mfrac>
</mrow>

This is the first example in the MathML spec. Really. "Invisible times".

<join>
  <box class="first-line"></box>
  <box></box>
  <content>Hello New World</content>
</join>

Would this really be insane?

The Boxed Model

CSS got it wrong and we're now suffering the consequences. The HTML feature that was ignored in CSS 1 was the thing they should've focused on: tables, which were directives that generated layout. It set us on a path of trying to fake them by piggybacking on supposedly semantic elements, like lipstick on a div. Really we were pigeonholing non-linear layout as a nested styling problem.

Semantic content was a false spectre on the document level. Making our menus out of <ul> and <li> tags did not help impaired users skip to the main article. Adding roman numerals for lists did not help us number our headers and chapters automatically.

View and render trees are supposed to be simple and transparent data structures, the model for and result of layout. This is why absolute positioning is a performance win for mobile: it avoids creating invisible dynamic constraints between things that rarely change. Styles are orthogonal to that, they merely define the shape, not where it goes.

Flash had its flaws, but it worked 15 years ago. Shoving raw SVG or MathML into the DOM—or god forbid XML3D—is a terrible idea. It's like there's an entire class of developers who've now forgotten how fast computers actually are and how memory is supposed to work. A stringly typed kitchen sink is not it.

So I frown when I see people excited about SVG in the browser in the year 2014, making polygons out of CSS 3D or driving divs with React. Yes I know, it's fun and it does work. And Angular shows the web component approach has merit. But we need a way out.

CSS should be limited to style and typography. We can define a real layout system next to it rather than on top of it. The two can combine in something that still includes semantic HTML fragments, but wraps layout as a first class citizen. We shouldn't be afraid to embrace a modular web page made of isolated sections, connected by reference instead of hierarchy.

Not my problem though, I can make better SVGs with WebGL in the meantime. But one can dream.

March 24, 2014 07:00 AM

March 23, 2014

Mattias Geniar

What if you are the network admin tasked with blocking IPs in Turkey?

A lot is happening in Turkey these last few days. Not the least of it is the heavy violation of network neutrality, with access blocked to services such as Twitter, Google's Open DNS resolvers and more.

This means that the ISPs in Turkey are being forced to block access to crucial news sources and are preventing residents in Turkey from having free access to the internet.

What would you do if you were the network admin at one of those ISPs? Would you simply block any IP/DNS record that you were told to? Would you first put up a fight to defend net neutrality? Would you rather quit your job than deny millions of people free internet and access to crucial information on the political and military state in your country?

Not everyone has the technical knowledge to block these kind of networks at a large ISP. That means you as a network admin, and a handful of colleagues probably, have the power to make a difference. What would you risk to make that difference?

I'm guessing you didn't see those choices coming when you got a job as a network admin at a Turkish ISP. Whoever you are, if you're quietly sabotaging the regime to fight for free internet, I wish you the best of luck.

by Mattias Geniar at March 23, 2014 10:41 AM

March 22, 2014

LOADays Organizers

Build your OpenNebula Cloud Day

After LOADays, on Monday 07/04/2014, we organize a tutorial and image session for OpenNebula.
This is a free event; we recommend registering, as seats are limited and we want to anticipate the number of participants.

SCHEDULE

Timing
09:30 - 13:30 OpenNebula Tutorial (Jaime Melis)
13:30 - 14:30 LUNCH
14:30 - 17:30 OpenNebula Cloud Image Creation/Hack Session
17:30 - 18:00 CLOSING

Registration is possible using this link: http://load2014opennebuladay.eventbrite.com

by Loadays Crew at March 22, 2014 11:00 PM

Laurent Bigonville

Add a new CA certificate to the certificates stash in Debian

A few days ago, the CAcert root certificates were removed from the ca-certificates package. While there was a discussion about whether they should be trusted by default in Debian, let’s see here how an administrator can trust CAcert again (or any other CA certificates).

In Debian, the certificates stash is located in /etc/ssl/certs/. This directory contains by default a series of symlinks that point to the certificates installed by the ca-certificates package (including the needed symlinks generated by c_rehash(1)) and a ca-certificates.crt file which is a concatenation of all these certificates. Everything is managed by the update-ca-certificates(8) command, which takes care of updating the symlinks and the ca-certificates.crt file.

Adding a new (CA) certificate to the stash is quite easy, as update-ca-certificates(8) also looks for files in /usr/local/share/ca-certificates/: the administrator just has to place the new certificate, in PEM format, in this directory (with a .crt extension) and run update-ca-certificates(8) as root. All the applications on the system (wget, …) should then trust it.
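
For example, a minimal sketch of the whole procedure, assuming the new CA certificate was saved locally as cacert-root.crt (a hypothetical filename) in PEM format:

# place the certificate in the local stash; the .crt extension is required
cp cacert-root.crt /usr/local/share/ca-certificates/
# regenerate the symlinks and ca-certificates.crt (run as root)
update-ca-certificates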

by bigon at March 22, 2014 01:20 PM

March 21, 2014

Frank Goossens

Music from Our Tube; Fatboy Slim & Co raving like it’s 1999

I’ve been to a couple of tech-house events back in the day, and I must say this 2013 track does bring back some of those memories;

Watch this video on YouTube or on Easy Youtube.

Eat, sleep, rave, repeat by Fatboy Slim, Riva Starr and Beardyman. Coz it’s Friday after all!

by frank at March 21, 2014 03:06 PM

March 20, 2014

Xavier Mertens

2nd European Information Security Blogger Awards Announced

Today, Brian Honan announced on his blog the second European edition of the Security Bloggers Awards. In a few weeks, many infosec guys will head to London to attend BSidesLondon and/or InfoSecurity Europe. This is the perfect time to organize a meet-up, on Wednesday 30th April. Security bloggers are welcome for drinks and chats in a relaxed atmosphere. Bad timing for me, I won’t be able to attend…

Awards will be distributed to European bloggers. It’s now time to nominate your favourite sites/people! Several different categories are defined.

Ready to participate? Nominate them here. Psssst, if you like this blog, think of me!

by Xavier at March 20, 2014 10:53 PM

Dries Buytaert

Acquia certification for Drupal

I'm proud to share that Acquia announced its certification program today. You can now get "Acquia certified in Drupal", something I'm pretty excited about.

This is something I've been hoping to see in the community. While there have been other experiments around certification, we as a community have lacked a way to ensure professional standards across Drupal. Over the years, I've heard the demand coming from partners and clients who need a way to evaluate the skills of people on their teams. More and more, that demand has drowned out any perceived criticisms of a certification for Drupal.

A good certification is not just a rubber stamp, but a way for people to evaluate their own abilities, and make plans for improving their knowledge. In some countries, certification is really important to create a career path (something I learned when visiting India). For these reasons, I feel Drupal's growth and development has been hindered without a formal certification in place.

The certification we've built is based on the combined years of experience among Acquia staff who oversee and manage thousands of Drupal sites. We've observed patterns in errors and mistakes; we know what works and what doesn't.

People have debated the pros and cons of software certifications for years (including myself), especially where it involves evaluating candidates for hire. Certainly no certification can be used in isolation; it cannot be used to evaluate a candidate's ability to perform a job well, to work in teams or to learn quickly. Certification can, however, provide a valuable data point for recruiters, and a way for developers to demonstrate their knowledge and stand out. It is undeniably valuable for people who are early in their Drupal career; being certified increases their chance to find a great Drupal job opportunity.

One of the biggest challenges for Drupal adoption has been the struggle to find qualified staff to join projects. Certification will be helpful to recruiters who require that job candidates have a good understanding of Drupal. There are many other aspects of recruitment for which certification does not provide a substitute; it is only one piece of the puzzle. However, it will provide organizations added confidence when hiring Drupal talent. This will encourage the adoption of Drupal, which in turn will grow the Drupal project.

The community has been talking about this need for a long time. One approach, Certified to Rock, evaluated an individual's participation and contribution in the Drupal community. Acquia's certification is different because we're assessing Drupal problem-solving skills. But the community needs more assessments and qualifications. I hope to see other providers come into this space.

by Dries at March 20, 2014 08:16 PM

Lionel Dricot

Are your observables killing your value?

In the far-off land of Observabilia, only two types of cars remain: the red ones, expensive and aerodynamic, and the green ones, cheap and more common.

In Observabilia, road safety is a real problem. Indeed, a law forbids the use of any kind of speed camera, and reckless drivers couldn't care less about speed limits.

However, a renowned statistician has demonstrated that nearly 95% of speeding offences are committed by red cars. There are many possible explanations: aggressive drivers may prefer red; red cars are faster; they are also more expensive, and their owners want to get a return on their purchase through thrills. In the end, it hardly matters. The correlation seems clear!

The Observabilian administration therefore immediately deploys speeder detectors based on the colour of the vehicle. If a red vehicle passes in front of a detector, a photo is taken and a fine is sent to the owner.

Of course, some owners of red vehicles will be punished unfairly. And some reckless drivers in green vehicles will be able to drive dangerously with impunity. But the statistical study shows that these are marginal cases. Should we really worry about them?

Imperfect correlation leads to total decorrelation

As I said in my article « Méfiez-vous des observables » ("Beware of observables"), a strong correlation (here, 95%) does not allow you to draw any valid conclusion. Only an absolutely perfect correlation is relevant when you are trying to measure a value.

But it gets worse. In Observabilia, after this measure came into force, an astonishing phenomenon was observed. Drivers of the small green cars suddenly felt entitled to ignore speed limits: after all, those only applied to red cars. Sports car dealers noticed demand for fast, green cars; the buyers didn't like green, but no matter. Some show-offs decided to keep red cars and proudly posted their fines on the Internet as proof of their virile driving style. Some youngsters who lacked the money for a sports car were seen painting their small green cars red, just so they too could post fines online and give an impression of wealth and a thrilling life.

What does this story tell us? Well, that if the correlation between a value and an observable is not perfect, measuring the observable will alter the system and amplify that imperfection. Naturally, the system will drift toward maximizing the observable, completely forgetting the original goal: maximizing the value.

The example of work

In our society, one of the most striking examples is the model of work. We consider work to be a production of value, and hold that an employee must be paid according to the value they produce. But measuring that value is very complex, different for everyone and often subjective. So it was decided, almost universally, to pay workers by the hour, based on the claim that the more time you spend working, the more value you produce.

While that claim holds in a Fordist factory, where the employee has no control and mechanically performs tasks paced by a machine, it is obviously untrue for most of today's jobs.

By definition, any company where employees are paid by the hour will therefore tend toward maximum inefficiency. Every employee will tend, consciously or not, to stretch each task out as long as possible, or even to invent tasks and set up endless meetings in order to justify overtime. All the while being perfectly convinced of working, since work is defined as spending hours at the office.

Likewise, the system makes it implicit that a better-paid employee brings more value. Yet salary generally depends only on the employee's negotiating skills at hiring time. Unconsciously, managers will tend to reward the employees who cost more and put in long hours. The real added value is completely ignored.

My experience at the FOREM follows the same principle: unconsciously, every FOREM employee, even the most competent, knows that they owe their job solely to the existence of unemployed people. The whole system will therefore tend toward an unconscious maximization of the number of unemployed, to ward off the spectre, irrational yet frightening, of full employment.

Content production on the web

The web is another domain where decorrelation becomes dramatic. For many content creators, only advertising has proven to be a relatively profitable business model. And advertising pays per view or per click.

It follows that content producers no longer look for readers but for clickers. Unconsciously, all their content will drift toward one single goal: attract as many clicks as possible and make visitors leave the page immediately via the advertising. What works: a catchy headline that makes people want to click, followed by very short content. Quality content even becomes a liability: the reader might forget to click on the ad, or be too far down the page to see it. The mere fact that advertisers arbitrarily chose "number of clicks" as an observable was enough to turn the web into a gigantic machine for generating mediocre content.

Yet even the click-through rate is not an ideal observable. But one can assume there is a correlation, however imperfect, between the click-through rate and page views. And between page views and your Google page rank, or the number of fans on your Facebook page. Hence a whole industry supplying ways to improve observables that are more or less correlated with observables that are themselves more or less correlated with your original goal.

Facebook exploits this decorrelation wonderfully by offering the observables directly, along with a button to increase them for a fee. When you administer a Facebook page, the number of views is displayed below each post. One quick Paypal payment and that number immediately goes up. Is it useful for your business? Maybe. Maybe not. But deep down, what Facebook sells is the simple pleasure of watching a number go up. It is really just a slightly elaborate Candy Crush; usefulness hardly matters.

What should you do?

Over my career, I have seen an impressive number of entrepreneurs lose themselves in observables completely decorrelated from their business. Google Analytics, Facebook statistics and SEO reports have an almost hypnotic effect. They deliver quick satisfaction, like a sweet. It is extremely hard to wean yourself off them.

Keep in mind that one satisfied customer who pays you with pleasure is worth more than 100,000 fans on Facebook or a million clicks on your page. Focus on your goal, your specialty. Produce quality, share value! Build your own observables: how many thank-you or congratulation emails did I receive this month? To how many people did I bring value? How many actions did I take that make the world a tiny bit better? Did I act in line with my values?

Your personal values are central. Once you have identified them, you can try to find observables that are perfectly correlated with them. Or do without observables entirely, which is preferable to using bad ones. When in doubt, remember that in Observabilia, a colour detector seemed like an excellent idea for punishing speeding.

Whatever your goals or your business, I am fairly sure that a number in Google Analytics, in Facebook or on a time clock is not representative of your value. In any case, no more than the colour of your car.

Photo by Motorito.

Thank you for taking the time to read this text. This blog is paid-for, but you are free to choose the price. You can support the writing of these posts via Flattr, Patreon, IBAN transfers, Paypal or in bitcoins. But the best way to thank me is simply to share this text around you, or to help me find new challenges in 2014.

by Lionel Dricot at March 20, 2014 12:24 PM

Fabian Arrotin

CentOS Mirrors “Spring Clean-up operation”

Just to let you know that I verified some mirrors last week and sent several mails to the contact addresses we had for those mirrors (unreachable/far behind).
I've received feedback from some people still willing to be listed as a third-party mirror, and they fixed the issues they had (thank you!)

Some other people replied with a "sorry, we can't host a mirror anymore" answer. (Thanks for replying to my email, and thank you for having been part of the successful "CentOS mirror party"!)

For the "unanswered" ones, I decided it was time to launch a "Spring clean-up operation" in the mirrors DB/network.
I've removed them from the DB, meaning that the crawler process we use to detect bad/unreachable mirrors will no longer even try to verify them.
We currently have more than 500 external (third-party) mirrors serving CentOS to the whole world, not counting the 50+ servers (managed by CentOS) used to feed those external mirrors, which sometimes also serve content for less well-covered countries.

Thanks a lot for your collaboration and support! We *love* you :-)

by fabian.arrotin at March 20, 2014 12:24 PM

March 19, 2014

Xavier Mertens

R.I.P Full-Disclosure… What’s Next?

Sad news received today: a (last) message was posted to the Full-Disclosure mailing-list. John Cartwright, one of its founders and owners, announced the end of the list (copy here). Personally, I subscribed in December 2006 (more than seven years ago!). I was a passive reader but learned so much interesting stuff!

I was surprised to read John’s announcement, but I can fully understand and respect his decision. Operating a public service in 2002 versus today are two completely different things. The word “public” is the main issue here. Why? First of all, the mailing-list was open to everybody after a simple registration. It started completely unmoderated but, around 2010, some controls were added. Was it a first smoke signal? Maybe… But with the list archive being replicated on multiple sites, Google & co did their job and indexed all the content. Today, the behavior of most organizations has changed and they try to keep an eye on what’s being said about them. It became usual to send requests asking to remove some sensitive content. According to John, the number of such requests kept growing over time. I can imagine the workload of handling this!

Over the years, more and more people subscribed to the list, “young” people jumped into the security community (no, I don’t consider myself old ;-)) and the list was also known to be flooded by flamewars from time to time (often, even). The last example was a few days ago, about the vulnerability reported in YouTube… But that’s normal: a space to express yourself that is open to anyone, with people from different countries, experiences and generations; all the ingredients for clashes were present!

What is a shame is the lack of a strong community in the infosec field. What’s next? A fork of a new Full-Disclosure? In which format? Mailing-list, forum, Google group? Personally, I prefer a solution based on email: it’s easy to read, archive and process. Who will join? If the same people move to the new platform, the same problems will occur again. What about restricting access and adding moderation? I’m definitely for people’s freedom, but today you simply can’t publish everything online. Create an “underground” list without a community? There are already plenty… It’s maybe time to review the concept, but we definitely need a Full-Disclosure mailing-list!

Thank you John for your awesome work!

by Xavier at March 19, 2014 04:04 PM

Frank Goossens

How to keep Autoptimize’s cache size under control (and improve visitor experience)

Confession time: Autoptimize does not have a proper cache purging mechanism of its own. There are some good reasons for that (see below), but in most cases this is not something to worry about.

Except when it is something to worry about, of course. Because in some cases the amount of cache files generated by Autoptimize can grow to several gigabytes. Why, you might wonder? Well, for each page being loaded, Autoptimize aggregates all JS (and CSS), calculates the hash of that string and uses that hash to check whether an optimized version is in cache. If there is a difference (even just a comma), the hash is not the same and the aggregated CSS/JS is cached separately. This behavior is typically caused by plugins that generate JavaScript variables (or CSS selectors) that are specific to each page (or even worse, to each page request). That not only leads to a huge amount of files in the cache, but also impacts visitors, as their browsers will have to request a different optimized CSS or JS file for each page instead of reusing the same file across several pages.
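
A quick way to see why even a one-byte difference results in a separate cache file (md5sum is used here purely as an illustration; the exact hash function Autoptimize uses is an internal detail, and the nonce values are made up):

# two near-identical aggregated scripts...
echo -n 'var nonce="abc123";' | md5sum
echo -n 'var nonce="abc124";' | md5sum
# ...yield two completely different hashes, and thus two cache files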

This is what you can do if you want a healthier cache, both from a server and a visitor perspective (based on JavaScript, but the same principle applies to CSS):

  1. Open two similar pages (posts).
  2. View source of the optimized JavaScript in those two pages.
  3. Copy the source of each to a separate file and replace all semi-colons (“;”) with semi-colon+linefeed (“;\n”) in both files.
  4. Execute an automatic comparison between the two using e.g. diff (or “compare” in Notepad++); this should give you one or more lines that will probably be almost the same, but not exactly (e.g. with a different nonce or a post id in them). See the shell sketch after this list.
  5. Now disable JS optimization and look for similar strings in the inline and the external JavaScript.
  6. If you find it in the inline JavaScript, try to identify a unique string in there (the name of a specific variable, probably) and write that down. If the variable JS is in a file, jot down the filename.
  7. Go to the autoptimize settings page and make sure the advanced settings are shown.
  8. Now add the strings or filenames from (6) to “Exclude scripts from Autoptimize:” (which is a comma-separated list).
  9. Re-enable JS optimization.
  10. Save settings & clear cache.
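
As referenced in step 4, here is a minimal shell sketch of steps 3 and 4 (assuming the optimized JS of the two pages was saved as page1.js and page2.js, which are hypothetical filenames; the “\n” in the replacement requires GNU sed):

# split each file on semi-colons so diff can compare statement by statement
sed 's/;/;\n/g' page1.js > page1-split.js
sed 's/;/;\n/g' page2.js > page2-split.js
# the lines flagged by diff point at the page-specific variable(s)
diff page1-split.js page2-split.js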

This does require some digging, but the advantages are clear: a (much) smaller cache size on disk and better performance for your visitors. Everyone will be so happy, people will want to hug you and there will be much rejoicing, generally.

So why doesn’t Autoptimize have automatic cache pruning? Well, the problem is that a page caching layer (which could be a browser, a caching reverse proxy or a WordPress page caching plugin) contains pages that refer to the aggregated JS/CSS files. If those optimized files were automatically removed while the page remained in the page caching layer, people would get the cached page without any JS or CSS files being available. And as I don’t want Autoptimize to break your pages, I didn’t include an automatic cache purging mechanism. But if you have a bright idea about how this problem could be tackled, I’d be happy to reconsider, of course!

by frank at March 19, 2014 01:29 PM

Wouter Verhelst

Spelling should not be hard

I know, I know, I should resist saying this. But every time I see it, I wonder why it happens, and I should just get this off my chest.

The difference between "its" and "it's" is something that many people, even native English speakers, seem to miss. Yet it's so extremely simple that I, a non-native English speaker, have been baffled by that common mistake for as long as I can remember.

The apostrophe (') in any sentence usually means that something at the location of that apostrophe has gone out to lunch. In this particular case, it means that the " " and the "i" in the phrase "it is" were hungry. So rather than "it is", we contract that to "it's", and allow the space and the i to enjoy their meal while the apostrophe keeps their seats warm.

Practically, what that means is that every time you want to write "it's", you should consider whether you can replace it with "it is" without making the sentence sound like junk. If you can't, you probably meant to write "its" rather than "it's".

For instance, consider the following sentence:

"It's not possible to repair this car within the budget that its owner wants to pay"

It's perfectly possible to say "it is not possible" here, so we need to have the apostrophe keep a seat warm for the space and the i.

It makes no sense to say "it is owner", unless you're trying to speak a much deformed form of English, so that makes it a possessive pronoun (similar to "hers", "his", "theirs", etc.) and you shouldn't use an apostrophe.

Speaking of food, it's time for lunch now.

March 19, 2014 11:04 AM

March 16, 2014

LOADays Organizers

LPI at Loadays

The Linux Professional Institute (LPI) is offering paper-based (PBT) LPI exams at Loadays.

LPI certifications are globally accepted certifications.

At Loadays we want to offer a number of these certification exams.

Registration is possible using this link: https://lpievent.lpice.eu/index.php

by Loadays Crew at March 16, 2014 11:00 PM

March 15, 2014

Fabian Arrotin

CentOS Dojo Lyon (France)

As you may (or may not!) know, we will hold a CentOS Dojo in Lyon (France) on Friday, April 11th. So if you feel like sharing your experience around CentOS, for example by giving a presentation, or if you simply want to come and have a good time with us listening to the planned talks (a subliminal call for volunteer speakers!), feel free to register. Registration is free! More information on the wiki page: http://wiki.centos.org/Events/Dojo/Lyon2014

by fabian.arrotin at March 15, 2014 02:00 PM

Lionel Dricot

Printeurs, book 1: La fin de l’innocence (The End of Innocence)

In a world where advertisements are displayed directly on your contact lenses and automated cars drive you straight to your destination, Nellio and Eva are trying to develop a 3D printer of an entirely new kind. An invention that may well upset the fragile balance between an idle social class that dreams of work and the far-away world of movie stars and high finance. But under a sky studded with drones and in streets carpeted with cameras, aren't Nellio and Eva taking on someone stronger than themselves? And is the social balance really the only thing their printer calls into question?

That, in a nutshell, is the story of Printeurs, whose first 19 episodes you have been able to read on this blog. Those 19 episodes form a first part, « La fin de l’innocence » (The End of Innocence), which I invite you to (re)discover as an e-book.

.epub format - .pdf format

Without your presence, your shares and your attentive proofreading, Printeurs would not exist. I have only one word: thank you! Thank you for your messages of encouragement, your reports of typos, your impatience to read the next episode. A special thank-you to François Martin, who honoured Printeurs with his encyclopedic knowledge of spelling, and to Roudou, who created the cover of this first part in less than 48 hours!

Like all my writings, Printeurs is paid-for, but the price is up to you. If you enjoy Printeurs, feel free to support its writing. Thanks to a reader's suggestion, you can also subscribe to Printeurs on Flattr: two clicks are enough to send me a Flattr every month.

As for the second part? Well, I invite you to discover it on this blog starting next week.

Happy reading!

Thank you for taking the time to read this text. This blog is paid-for, but you are free to choose the price. You can support the writing of these posts via Flattr, Patreon, IBAN transfers, Paypal or in bitcoins. But the best way to thank me is simply to share this text around you, or to help me find new challenges in 2014.

by Lionel Dricot at March 15, 2014 12:18 PM

March 14, 2014

Jochen Maes

django_atomiadns and pyatomiadns release

\\//!

After a long hiatus from blogging, I'd like to introduce django_atomiadns. As I've been using AtomiaDNS for quite a while now, the only thing that still bothered me was their webapp. There haven't been any new releases, and I had fixed some bugs locally. However, as I'm not an avid Node developer, I wasn't actually going to add the features I wanted there.

When a friend asked me if I was able to host his domains (DNS-wise) and give him the rights to change/update them and so on, I decided I'd quickly make a webapp that allows me to do just that.

My current client (Amplidata) could use the same functionality, and it would be a fun project.

So django_atomiadns was born. To implement the initial functionality I needed to update pyatomiadns; over a few days' time I released a few versions, and we are now at version 1.5.

As always, if you have questions, contact me on Twitter (sejo_it), IRC (freenode#sejo) or by email (you should be able to figure that out :p).

Have fun (ab)using/forking/submitting issues!

LLAP!

by Jochen Maes at March 14, 2014 08:15 AM

March 13, 2014

Frank Goossens

Music from Our Tube; Child of Lov

I just “discovered” Child of Lov (a.k.a. Martijn Teerlinck, a young man who was born in Belgium/Flanders but raised in the Netherlands) and then learned the guy had died after heart surgery two months ago. Damn!

Watch this video on YouTube or on Easy Youtube.

His debut album was released in 2013 and got rave reviews from all over the world. If you’re into this kind of thing, listen to the full album here and you’ll understand the fuss. Such a pity.

by frank at March 13, 2014 06:28 PM