Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

November 17, 2018

A heads-up to Autoptimize users who are using Divi (and potentially other Elegant Themes themes): as discovered and documented by Chris, Divi purges Autoptimize’s cache every time a page/post is published (or saved?).

To be clear: there is no reason for the AO cache to be cleared at that point, as:

  1. A new page/post does not introduce new CSS/JS.
  2. Even if new CSS/JS were somehow added, AO would automatically pick up on that and create new optimized CSS/JS.

Chris contacted Divi support to get this fixed, so this is in the ticketing queue, but if you’re using Divi and encounter slower saving of posts/pages or Autoptimized files mysteriously disappearing, his workaround can help you until ET fixes this.
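For those who want to experiment in the meantime, here is a minimal sketch of what such a workaround could look like as a small (mu-)plugin. This is not Chris' documented workaround; the Divi-side callback name and the Autoptimize purge action name below are assumptions used purely for illustration, so verify them against your own install before relying on this.

<?php
/*
 * Plugin Name: Keep Divi from purging Autoptimize's cache (sketch)
 * Note: 'et_divi_purge_autoptimize_cache' and 'autoptimize_action_cachepurged'
 * are assumed names used for illustration; check your install for the real ones.
 */
add_action( 'init', function () {
	// Hypothetical: if the theme wires its purge to post saves, unhook that callback.
	remove_action( 'save_post', 'et_divi_purge_autoptimize_cache' );

	// Diagnostic alternative: log a backtrace whenever the AO cache gets purged,
	// so the purge can be traced back to whatever triggered it.
	add_action( 'autoptimize_action_cachepurged', function () {
		error_log( 'AO cache purged, backtrace: ' . wp_debug_backtrace_summary() );
	} );
} );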

I published the following diary on isc.sans.edu: “Quickly Investigating Websites with Lookyloo”:

While we are enjoying our weekend, it’s always a good time to learn about new pieces of software that could be added to your toolbox. Security analysts often have to quickly investigate a website for malicious content and it’s not always easy to keep a good balance between online and local services… [Read more]

[The post [SANS ISC] Quickly Investigating Websites with Lookyloo was first published on /dev/random]

I am impressed by the feedback I have received on what I called, in a fit of pretentious grandiloquence, my disconnection.

Although extremely simple in form (blocking all access to a dozen websites), it turns out to be surprisingly profound and touches on a particularly sensitive subject. I have received several testimonies from people who, after reading my posts, started measuring the time they spend online and discovered with horror the ravages of their addiction to Facebook, Instagram or Youtube. The diversity of your addictions is in itself a particularly interesting piece of data. I had not even thought of blocking Youtube, because nothing bores me more than listening to someone recite a text that is too slow and too poor in information over a procession of light flashes filmed by a colour-blind cocaine addict and edited by a parkinsonian advertising executive. Still, if you enjoy the genre, Youtube is particularly devious with its "suggested next video".

But rest assured, we are not alone in our dependence. Awareness has grown to the point that business is starting to worry about it! I thus spotted, in a shop window, the cover of a TV guide: "Screens: switch them off or tame them?". The rhetoric is barely subtle and inevitably brings to mind the infamous "Smoker or not, let's stay courteous" of the 80s, a slogan that delayed the fight against tobacco by nearly four decades by suggesting that there was a happy medium, that the non-smoker was a kind of extremist.

In this particular case, it consists of blindly lumping "screens" together. The problem is not, and has never been, the screen, but what it displays. Apart from the rare and very futuristic cases of hypnotist toads, a screen stuck on a test pattern has never made anyone an addict. Social networks, endless content and likes do so every day.

One could object that it is not as serious as cigarettes. And, as Tonton Alias says, that my position is extremist.

But the further I detox, the more I think that the pollution caused by social networks is at least as serious as that of tobacco.

Not so long ago, smoking was allowed on planes, on trains, in offices, in hallways. Even non-smokers only complained sporadically, because the smell was everywhere, because everyone was at least a passive smoker.

For me, who has no memories of that era, a neighbour smoking in his garden forces me to close the windows. Let a cigarette be lit at the next table on a restaurant terrace and I change tables or even leave, unable to eat in those conditions. Let a smoker sit down next to me in a public place after stubbing out his cigarette and I find myself forced to move. Never having been forced to get used to that permanent irritation, I simply cannot stand it; I feel physically unwell, assaulted. I get nauseous. Several ex-smokers have told me that they used to think non-smokers were exaggerating and laying it on thick, until they realised that, only a few months after quitting, they simply could no longer stand the slightest whiff of the vile tobacco.

A few days ago, I decided to check that my social network publishing system was working properly. So I disabled my filters and looked at my various accounts without logging in. The operation only lasted a few seconds, but I had time to see, without really reading them, several reactions.

I cannot even remember the content of those reactions. I just know that none of them were insulting or aggressive. Yet I felt my whole body go on the defensive, adrenaline flowing through my veins. Some reactions showed a misunderstanding (at least, that is how I perceived it) that absolutely required clarification, a fight. Only one reaction seemed mocking, ironic to me (but how can one be sure?). My blood boiled!

I immediately re-enabled my blocking, breathing deeply. I told my wife about it, and she said: "Next time, ask me, I'll tell you whether everything was posted correctly!"

Not having been exposed for a month to the permanent irritation, the constant stress, the shared anger, I took the discharge of violence head-on. A violence that cannot even be blamed on the comment authors, because my perception multiplies and amplifies whatever I want or fear to see in them. If I am afraid of being misunderstood, I will see misunderstanding everywhere; I will take head-on a silly remark written in 12 seconds by someone I do not know, who probably read the first 5 lines of one of my articles while queuing at the supermarket.

Since that experience, the very idea of going on a social network scares me. My enabled Adguard icon reassures me, relieves me.

The media, in general, are a source of anxiety. Social networks are amplifiers of it. Seen that way, the cigarette analogy seems perfectly appropriate to me. Like cigarettes, a supposedly private freedom is both morbid for the individual and a source of dangerous pollution for the community. Should collective well-being, like clean air, not be considered a common good? Anxiety and fear are cancers gnawing at the individual cells that we are, forming a gigantic tumour of a society. And recent experiments do indeed seem to confirm this intuition: social networks appear to cause symptoms of depression.

Is it not paradoxical that we try to treat our existential anxieties with tools that claim to help us by offering recognition and vainglory, while stoking the source of all our ills?

Which raises the question: am I not complicit by continuing to offer my posts on social networks, by continuing to feed my accounts and my pages? The answer is not easy and will be the subject of the next post.

Photo by Andre Hunter on Unsplash

I am @ploum, a disconnected speaker and electronic writer, paid on a pay-what-you-want basis via Tipeee, Patreon, Paypal, Liberapay or in millibitcoins at 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE license.

November 16, 2018

The lighthouse slices through the thickness of the night with its repeated, regular sword strokes, a luminous metronome in a silence of darkness. The sea stretches out like a smooth border between the darkness of the stars and the abyss of a transparent black.

A few waves lick the sand in a languorous, hypnotic backwash. From the bay window of my fully automated passive house I see, at the edge of my screen, the shadow of a horse-drawn carriage from another century, another time, passing by. Did I dream it? Yet the characteristic clatter of hooves on gravel still rings out, carried by the sea air.

Intrigued, I put down my computer and take a few steps outside. I push aside a thorn bush and discover, at the back of the garden, a path I had never seen. A few steps lead me to the main road. Or to what should have been the main road, because, inexplicably, the asphalt has been replaced by dust, earth and a few scattered cobblestones vainly trying to hold back the weeds.

I look up. The lighthouse continues its silent saraband of deafening flashes. A noise, a clatter followed by a screech. Another carriage stops. In the half-light, I can only make out a vague black shape topped by a top hat.

"Get in! You are needed at the lighthouse!"

The sentence echoes in my temple. Mechanically, I obey the order given to me, without knowing whether the harsh, gravelly accent was really speaking to me in French. Or was it Breton? A mixture? How did I understand it so easily?

But I have no time to wonder before the vehicle sets off. The landscape streams past my eyes, now used to the darkness. By the light of a few stars, I glimpse the fishermen's boats, put away for the night, groaning softly in their woody sleep. We cross the village adjoining the harbour at full speed in a clatter of hooves.

Along the road, thousand-year-old stones try to trace a low wall through the thorns, sometimes sketching a cross, a calvary or the outline of an even more distant memory. A strange vapour floats in the sea air, a past trying to seep into a filigree present.

The carriage drops me off at the foot of the enormous stone cylinder. An old door creaks ajar, revealing a face weathered by sea spray and the smoke of an old meerschaum pipe. Under bushy, snow-white eyebrows, two black slits pierce me with their intensity.

"So you are the one we were waiting for?"

Punctuated by curses and spitting, the sentence rang out like a wave breaking on a breakwater, in a language I should not have understood. Timidly, I nod slightly.

"Well then? Come up!" the man continues. "Here, to pay the coachman!" he says, handing me a large, flat, silvery coin.

Without bothering to look at it, I hand it to my driver, who bites it, pockets it and gives me three small coins in exchange, grumbling:

"Your change."

Without another word, he turns his carriage around and plunges into the night. But I have no time to watch him disappear before a hand grabs me and hauls me up a metal staircase.

"Come! We don't have all night!"

The climb seems endless, dizzying. Under my feet, the steps follow one another, infinitely identical. After passing through several sparsely furnished rooms, we finally come out onto the gallery surrounding the optic. The glare is blinding, but bearable. I make out a shape stretched out on the floor. The second keeper.

"What happened to him?"

My host does not answer and points at him. I approach the body. The face is white, the lips have turned blue, but, under my fingers, I can still feel a faint warmth.

The first keeper looks at me and, reluctantly, throws me a rough:

"He drowned."

"Drowned? How? Thirty metres above sea level?"

"Yes!"

"And what do you want me to do?"

"Save him!"

"But given how long I have been on the road, he has died dozens of times over!"

"In Brittany, time does not always flow the same way. Save him!"

Forbidding myself to think, I mechanically apply first aid. Dry drowning. Mouth-to-mouth. Chest compressions. Once more. Again. And, suddenly, a movement, a cough, eyes opening, and a terrifying cold that engulfs me and takes my breath away.

I am in icy shock. I am underwater. Instinctively, my brain imposes the freediver's survival routine on me. Calm down. Relax the muscles. Accept the cold. Do not breathe, do not open your mouth. Swim. One stroke. Another stroke.

The pressure crushes my chest, pushes my glottis down into my ribcage, drills into my eardrums. I am deep. Very deep. But a depth that is familiar to me, one I have already dived to. So I take a stroke. But how do I know which way is up? A small air bubble escapes from my shoe and rises along my trouser leg. So I am vertical. All that is left is to swim. One stroke. And another stroke. I count about ten. My lungs are about to burst. But another ten or so and everything should be fine. Come on, courage, just ten more and… my skull suddenly breaks the surface. Breathe in! Breathe in! Breathe in!

The cold bores into my temples. I am out of the water and I can see the shore standing out in deep black against the dark line of the horizon. It is only a few hundred metres away. I swim on my back to catch my breath; the current carries me. The biting cold is not deadly. Not for an hour or two at this temperature. I turn over to take a few strokes. The little beach is close now; my feet scrape the sand. Panting, walking, swimming, I drag myself out of the sea and collapse on foul-smelling seaweed.

The ground gives way beneath me and I fall into the dark. A hard impact! A cry! Ouch! A light dazzles me?

"What are you doing? Keep it down, you'll wake the little one!"

Dazed, I realise I am lying at the foot of my bed.

"I… I must have fallen out of bed. I had a dream!"

"Tell me tomorrow," my wife says, switching off the light. "Climb back in and go back to sleep!"

But when I tried to tell my dream the next morning, the words would not come. The memories were fraying, the images growing blurry. At most, I could show the three small coins found in my pocket and a dive to minus thirty metres recorded that night by my depth-gauge watch, in water of about ten degrees.

The lighthouse, for its part, keeps sweeping each night with its dagger of light, sending its vital warning to men. And if sailors have learned to be humble before the waves of the sea, what lighthouse will guide presumptuous humanity through the raging waters of time in which we are condemned to be swallowed up?

Who will be its keepers?

Night of 9 to 10 September, watching the lighthouse from l'Aber Wrac'h.

Photo by William Bout on Unsplash

I am @ploum, a disconnected speaker and electronic writer, paid on a pay-what-you-want basis via Tipeee, Patreon, Paypal, Liberapay or in millibitcoins at 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE license.

Acquia Engage attendees that arrived at the Austin airport were greeted by an Acquia banner!

Last week, Acquia welcomed more than 600 attendees to the fifth annual Acquia Engage Conference in Austin, Texas. During my keynote, my team and I talked about Acquia's strategy, recent product developments, and our product roadmap. I also had the opportunity to invite three of our customers on stage — Paychex, NBC Sports, and Wendy's — to hear how each organization is leveraging the Acquia Platform.

All three organizations demonstrate incredible use cases, and I invite you to watch the recording of the Innovation Showcase (78 minutes) or download a copy of my slides (219 MB).

I also plan to share more in-depth blog posts on my conversations with Wendy’s, NBC Sports, and Paychex next week.

I published the following diary on isc.sans.edu: “Basic Obfuscation With Permissive Languages”:

For attackers, obfuscation is key to keep their malicious code below the radar. Code is obfuscated for two main reasons: to defeat automatic detection by AV solutions or tools like YARA (which still rely mainly on signatures) and to make the code difficult to read/understand for a security analyst… [Read more]

[The post [SANS ISC] Basic Obfuscation With Permissive Languages was first published on /dev/random]

November 15, 2018

This week Acquia was named a leader in the Forrester Wave: Web Content Management Systems. Acquia was previously named a leader in WCM systems in Q1 2017.

The report highlights Acquia and Drupal's leadership on decoupled and headless architectures, in addition to Acquia Cloud's support for Node.js.

I'm especially proud of the fact that Acquia received the highest strategy score among all vendors.

Thank you to everyone who contributed to this result! It's another great milestone for Acquia and Drupal.

November 13, 2018

If you want to go quickly, go alone. If you want to go far, go together.

Drupal exists because of its community. What started from humble beginnings has grown into one of the largest Open Source communities in the world. This is due to the collective effort of thousands of community members.

What distinguishes Drupal from other open source projects is both the size and diversity of our community, and the many ways in which thousands of contributors and organizations give back. It's a community I'm very proud to be a part of.

Without the Drupal community, the Drupal project wouldn't be where it is today and perhaps would even cease to exist. That is why we are always investing in our community and why we constantly evolve how we work with one another.

The last time we made significant changes to Drupal's governance was over five years ago when we launched a variety of working groups. Five years is a long time. The time had come to take a step back and to look at Drupal's governance with fresh eyes.

Throughout 2017, we did a lot of listening. We organized both in-person and virtual roundtables to gather feedback on how we can improve our community governance. This led me to invest a lot of time and effort in documenting Drupal's Values and Principles.

In 2018, we transitioned from listening to planning. Earlier this year, I chartered the Drupal Governance Task Force. The goal of the task force was to draft a set of recommendations for how to evolve and strengthen Drupal's governance based on all of the feedback we received. Last week, after months of work and community collaboration, the task force shared thirteen recommendations (PDF).

Me reviewing the Drupal Governance Task Force's proposal on a recent trip.

Before any of us jump to action, the Drupal Governance Task Force recommended a thirty-day, open commentary period to give community members time to read the proposal and to provide more feedback. After the thirty-day commentary period, I will work with the community, various stakeholders, and the Drupal Association to see how we can move these recommendations forward. During the thirty-day open commentary period, you can get involved by collaborating on and responding to each of the individual recommendations.

I'm impressed by the thought and care that went into writing the recommendations, and I'm excited to help move them forward.

Some of the recommendations are not new; they are ideas that the Drupal Association, I, or others have been working on, but that none of us have been able to move forward without a significant amount of funding or collaboration.

I hope that 2019 will be a year of organizing and finding resources that allow us to take action and implement a number of the recommendations. I'm convinced we can make valuable progress.

I want to thank everyone who has participated in this process. This includes community members who shared information and insight, facilitated conversations around governance, were interviewed by the task force, and supported the task force's efforts. Special thanks to all the members of the task force who worked on this with great care and determination for six straight months: Adam Bergstein, Lyndsey Jackson, Ela Meier, Stella Power, Rachel Lawson, David Hernandez and Hussain Abbas.

Why the vast majority of online content today is in fact noise, how we can protect ourselves from it, and how to make the world a little less noisy.

My experience with marketing blogs

Having been an active and, to some extent, recognised blogger for more than 14 years, it is only natural that I have tried many times to turn my blogging pen into a string in the bow of my professional career.

Either by offering, on my own initiative, to write a company's blog for pay, or at the request of an employer impressed by my blog's various metrics and dreaming of reproducing the same thing, of transferring my audience to his company.

In every case, these projects crashed as miserably as a drunk BASE jumper clinging to a sheet of corrugated iron.

I was not satisfied with my work. Neither was my employer. Most of the time, my posts were simply rejected before publication. The rest were reworked to death; I had to rewrite them several times, and none of those that were eventually published ever achieved even a fraction of the relative success I can regularly boast of on my personal blog. Besides, I had the impression that the letters spread across the screen like a fresh, fragrant springtime cowpat. My sponsor made it clear to me that it was not just an impression.

I always attributed these failures to bike shedding, that need of managers to supervise and tweak whatever looks simple to them instead of trusting me. I thought they were incapable of letting anything be published that they had not written themselves (even though, if you hire me, it is precisely so that you do not have to write it yourself). In short, micromanagement.

The two types of content

With hindsight, I am only beginning to realise a much deeper reason for these failures: there are two types of content, two types of text.

The first, like the one you are reading right now, is the result of personal expression. The goal is to think, in public or not, through the keyboard, to share reflections, to throw them into the ether without really knowing what will come of them.

Companies, however, by definition have only one goal: to sell their carpets, their plastic camels produced in China by blind children in mines, their fortune-telling/consulting services as inept as they are expensive, their golfware and other tupperware. But you do not sell directly through a blog; if you could, we would know! Consequently, the sole purpose of the corporate blog is to increase the visibility, the pagerank, the ranking of the company's website in search engines and on social networks. The company will try to produce content, but without any real motivation (because it has nothing to share or, if it does, it precisely does not want to share it) and without any ambition to be read. A corporate blog is above all about saying nothing (so as not to give ideas to competitors) while generating hot air, in the hope that random sequences of buzz-laden words will lull the ears of Google's robots with the sweet music of pagerank, or that the title will be catchy enough to generate a click from some random probability of a potential customer.

Am I exaggerating? Just look, for instance, at the blog of Freedom, a piece of software whose mission is to filter out distractions, to let you concentrate. It is full of hollow, empty and, let's be honest, utterly worthless articles on the theme of focus. The goal is obvious: to attract attention. To try to distract internet users in order to sell them a solution for avoiding distractions.

What is particularly unfortunate is that this kind of blog could have things to say. But interesting content, interesting because it talks about the software itself, is drowned in the middle of an ersatz of Flair l'hebdo. Freedom is not an exception; it is the general rule. Projects, even interesting ones, feel obliged to produce content regularly, to do marketing instead of focusing on the technology.

Once the difference between a blog of ideas and a marketing blog is clearly identified, it seems absurd that people have, many times, wanted to hand me the reins of a marketing blog.

A marketing blog, in essence, is nothing but hot air, noise. Its goal is to attract attention without taking the slightest risk. A personal blog is the opposite. My aspiration is to write for myself, to take risks, to lay myself bare. Even if, too often, I give in to the sirens of noise and of the marketing of my fragile ego.

The noise of email

It seems obvious that this differentiation of content does not apply only to blogs, but to absolutely every type of medium, from email to paper letters. There are two types of content: content created to share something, and content that only seeks to take up space in an ocean of content, to exist, to monopolise your attention. In short, noise…

All the "science" taught in marketing schools can be summed up as this: make noise. Make more noise than the others. And, incidentally, measure how many people were forced to hear it. So, inevitably, when thousands of marketers trained in the same schools find themselves competing, it produces the gigantic din, the hubbub, the racket in which we currently live.

Today, I am only just beginning to realise that most of my pre-disconnection consumption time was in fact dedicated to "sorting" the noise, with no real guide other than intuition. I was trying to make sense of all that racket. For every piece of content that "looked interesting", I spent time either deleting it or trying to understand what was hidden beneath the static. And, inevitably, when you are deafened, everything sounds noisy. Even a normal conversation becomes unintelligible.

In the light of this manichean dichotomy, my disconnection takes on a new meaning: how can I spend less time in the sea of content to be sorted? How can I remove the noise from my existence?

And the solution turns out, for me, to be incredibly simple.

Stop accepting noise

For email, this means unsubscribing from absolutely every mail that contains the word "unsubscribe" and asking the authors of all impersonal mails to remove me from their lists. Each mail of that kind therefore takes a small effort (sometimes there are negotiations), but the effect, after a few weeks, is absolutely striking. The vast majority of our email usage is in fact "filtering the noise". My solution is so effective that sometimes I get the impression my email is broken. It requires a certain rigour, because receiving an email triggers a small hit of dopamine. We are addicted to receiving email. If you are one of those people who receive more than fifty emails a day, and I was one for a long time, you are not an important person; you are quite simply a victim of noise.

There are probably some newsletters you find instructive, interesting. If you do not pay for those newsletters, then they are by definition noise. Emails designed to remind you that the project or the person exists. The big online services like Facebook and Linkedin are particularly devious in this regard: new categories of email are added regularly, making it possible to reach those who had already unsubscribed from everything else. Unsubscribing can sometimes turn into a real obstacle course, and it is sometimes worse for paper mailings!

Beyond email, as you may know, I no longer access social networks either. At heart, they are the archetype of noise. On social networks, the vast majority of content amounts to nothing more than "Look at me, I exist!". Being on social networks is a bit like walking into a nightclub hoping to bump into someone with whom to discuss the Kantian influence in Heidegger's work. It happens, but it is so rare that it is better to use other means and cut yourself off from the noise.

On the other hand, I take pleasure in reading what my friends take the time to recommend to me. I also follow the RSS feeds of a few selected individuals. No longer reading them in a hurry, in the middle of the noise, scanning them to test their "potential interest", brings me an enormous amount. I now trust them; I read them while being 100% available. That lets me soak up their ideas. I no longer merely list their ideas to archive them in a corner of my head or of Evernote; I take the time to let them grow in me, to make them my own, to share their vision.

Instead of scanning the noise to spot what interests me, I accept missing out on news and I only accept sources that are mostly personal and deep, whatever the subject. I can get passionate about a post by someone like Alias even when it deals with abstruse subjects completely outside my interests. For example, the difference between neo-prog metal and prog-leaning neo-metal, a fundamental topic. Whereas I used to rail against articles that were too long and did not get straight to the point, today I am disappointed by the brevity of some that merely touch on, brush against, what would deserve far greater depth.

Contributing to the silence

But reducing the world's noise does not only go one way. We are all responsible for a share of the noise. How can we contribute?

It's simple! Whether it is a simple email, a post on social networks or a blog post, ask yourself the question: am I making the world better by broadcasting this content?

If it is about trying to gain recognition, to convince your audience (whether of the relevance of your ideas, the need to buy your product or the importance of your life) or to spit out your anger, then you are not making the world better.

Somewhat by chance, I bought an Antidote license when starting my disconnection (which is annoying, as they announced version 10 three weeks later). Antidote has a feature whereby every mail you send is displayed on screen with all its spelling mistakes. It is a bit tedious because, Antidote being slow, it makes email less immediate.

Yet, two or three times, on rereading, I simply gave up on sending the mail. For someone who used to be ultra-impulsive at the keyboard, here is a tool that is curing me. If the world does not need my email, then I do not send it. Not only do I apply Inbox 0 for myself, I now also help others reach it.

A striking example: my template unsubscribe-request mail citing the GDPR used to contain a paragraph trying to convince the recipient of the immorality of marketing practices. I stopped. I try to answer every email with only the minimum of information necessary. I keep all my suggestions for improvement to myself. It is hard, but it will be a relief to quite a few people.

Too much noise, not enough silence?

Since the advent of the Internet, we are all content producers. If we add the content generated automatically by computers, it seems obvious that there is now far too much content and that finding an audience is difficult. It is something the author Neil Jomunsi regularly writes about.

Far from being frightened by this appearance of abundance, I think there is in reality not enough quality content. There are not enough writers, not enough artists. There is not enough content whose sole purpose is to enrich the common heritage of humanity.

What if, before anything else, we stopped producing and consuming noise, crap?

In an age of information overload, of epileptic blinking advertisements on every street corner and of omniscient screens, publishing a piece of content should be subject to a strict filter: "Do I really want to publish this? Am I not adding another layer of manure on top of the world's crap?"

Am I, Ploum the moraliser in search of ego, not facing a paradox by continuing to automatically feed my noisy social network accounts so that you will come and read my pontificating sermons on silence? I am still thinking that one over.

Photo by @chairulfajar_ on Unsplash

I am @ploum, a disconnected speaker and electronic writer, paid on a pay-what-you-want basis via Tipeee, Patreon, Paypal, Liberapay or in millibitcoins at 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE license.

November 12, 2018

Content authors want an easy-to-use page building experience; they want to create and design pages using drag-and-drop and WYSIWYG tools. For over a year the Drupal community has been working on a new Layout Builder, which is designed to bring this page building capability into Drupal core.

Drupal's upcoming Layout Builder is unique in offering a single, powerful visual design tool for the following three use cases:

  1. Layouts for templated content. The creation of "layout templates" that will be used to layout all instances of a specific content type (e.g. blog posts, product pages).
  2. Customizations to templated layouts. The ability to override these layout templates on a case-by-case basis (e.g. the ability to override the layout of a standardized product page).
  3. Custom pages. The creation of custom, one-off landing pages not tied to a content type or structured content (e.g. a single "About us" page).

Let's look at all three use cases in more detail to explain why we think this is extremely useful!

Use case 1: Layouts for templated content

For large sites with significant amounts of content, it is important that the same types of content have a similar appearance.

A commerce site selling hundreds of different gift baskets with flower arrangements should have a similar layout for all gift baskets. For customers, this provides a consistent experience when browsing the gift baskets, making them easier to compare. For content authors, the templated approach means they don't have to worry about the appearance and layout of each new gift basket they enter on the site. They can be sure that once they have entered the price, description, and uploaded an image of the item, it will look good to the end user and similar to all other gift baskets on the site.

The Drupal 8 Layout Builder showing a templated gift basket

Drupal 8's new Layout Builder allows a site creator to visually create a layout template that will be used for each item of the same content type (e.g. a "gift basket layout" for the "gift basket" content type). This is possible because the Layout Builder benefits from Drupal's powerful "structured content" capabilities.

Many of Drupal's competitors don't allow such a templated approach to be designed in the browser. Their browser-based page builders only allow you to create a design for an individual page. When you want to create a layout that applies to all pages of a specific content type, it is usually not possible without a developer.

Use case 2: Customizations to templated layouts

While having a uniform look for all products of a particular type has many advantages, sometimes you may want to display one or more products in a slightly (or dramatically) different way.

Perhaps a customer recorded a video of giving their loved one one of the gift baskets, and that video has recently gone viral (because somehow it involved a puppy). If you only want to update one of the gift baskets with a video, it may not make sense to add an optional "highlighted video" field to all gift baskets.

Drupal 8's Layout Builder offers the ability to customize templated layouts on a case-by-case basis. In the "viral, puppy, gift basket" video example, this would allow a content creator to rearrange the layout for just that one gift basket, and put the viral video directly below the product image. In addition, the Layout Builder would allow the site to revert the layout to match all other gift baskets once the world has moved on to the next puppy video.

The Drupal 8 Layout Builder showing a templated gift basket with a puppy video

Since most content management systems don't allow you to visually design a layout pattern for certain types of structured content, they of course can't allow for this type of customization.

Use case 3: Custom pages (with unstructured content)

Of course, not everything is templated, and content authors often need to create one-off pages like an "About us" page or the website's homepage.

In addition to visually designing layout templates for different types of content, Drupal 8's Layout Builder can also be used to create these dynamic one-off custom pages. A content author can start with a blank page, design a layout, and start adding blocks. These blocks can contain videos, maps, text, a hero image, or custom-built widgets (e.g. a Drupal View showing a list of the ten most popular gift baskets). Blocks can expose configuration options to the content author. For instance, a hero block with an image and text may offer a setting to align the text left, right, or center. These settings can be configured directly from a sidebar.

The Drupal 8 Layout Builder showing how to configure a block
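To make the block configuration described above concrete, here is a minimal sketch of a custom block plugin whose settings show up in that sidebar. This is not code from Drupal core or the Layout Builder itself; the module name, block ID and the single text-alignment setting are made up for illustration, but the plugin structure (BlockBase, blockForm(), blockSubmit(), build()) follows the standard Drupal 8 block plugin API.

<?php

namespace Drupal\my_hero\Plugin\Block;

use Drupal\Core\Block\BlockBase;
use Drupal\Core\Form\FormStateInterface;

/**
 * A hypothetical hero block exposing a text alignment setting.
 *
 * @Block(
 *   id = "my_hero_block",
 *   admin_label = @Translation("Hero block (example)")
 * )
 */
class HeroBlock extends BlockBase {

  public function defaultConfiguration() {
    return ['text_align' => 'left'] + parent::defaultConfiguration();
  }

  public function blockForm($form, FormStateInterface $form_state) {
    // This select is what the content author sees in the off-canvas sidebar.
    $form['text_align'] = [
      '#type' => 'select',
      '#title' => $this->t('Text alignment'),
      '#options' => [
        'left' => $this->t('Left'),
        'center' => $this->t('Center'),
        'right' => $this->t('Right'),
      ],
      '#default_value' => $this->configuration['text_align'],
    ];
    return $form;
  }

  public function blockSubmit($form, FormStateInterface $form_state) {
    $this->configuration['text_align'] = $form_state->getValue('text_align');
  }

  public function build() {
    // Render a simple container whose CSS class reflects the chosen alignment.
    return [
      '#type' => 'container',
      '#attributes' => ['class' => ['hero', 'hero--' . $this->configuration['text_align']]],
      'content' => ['#markup' => $this->t('Hero text goes here.')],
    ];
  }

}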

In many other systems content authors are able to use drag-and-drop WYSIWYG tools to design these one-off pages. This type of tool is used in many projects and services such as Squarespace and the new Gutenberg Editor for WordPress (now available for Drupal, too!).

On large sites, free-form page creation is almost certainly going to be a scalability, maintenance and governance challenge.

For smaller sites where there may not be many pages or content authors, these dynamic free-form page builders may work well, and the unrestricted creative freedom they provide might be very compelling. However, on larger sites, when you have hundreds of pages or dozens of content creators, a templated approach is going to be preferred.

When will Drupal's new Layout Builder be ready?

Drupal 8's Layout Builder is still a beta-level experimental module, with 25 known open issues to be addressed prior to becoming stable. We're on track to complete this in time for Drupal 8.7's release in May 2019. If you are interested in increasing the likelihood of that, you can find out how to help on the Layout Initiative homepage.

An important note on accessibility

Accessibility is one of Drupal's core tenets, and building software that everyone can use is part of our core values and principles. A key part of bringing Layout Builder functionality to a "stable" state for production use will be ensuring that it passes our accessibility gate (Level AA conformance with WCAG and ATAG). This holds for both the authoring tool itself, as well as the markup that it generates. We take our commitment to accessibility seriously.

Impact on contributed modules and existing sites

Currently there are a few methods in the Drupal module ecosystem for creating templated layouts and landing pages, including the Panels and Panelizer combination. We are currently working on a migration path for Panels/Panelizer to the Layout Builder.

The Paragraphs module can currently be used to solve several kinds of content authoring use cases, including the creation of custom landing pages. It is still being determined how Paragraphs will work with the Layout Builder and/or if the Layout Builder will be used to control the layout of Paragraphs.

Conclusion

Drupal's upcoming Layout Builder is unique in that it supports multiple different use cases: from templated layouts that can be applied to dozens or hundreds of pieces of structured content, to designing custom one-off pages with unstructured content. The Layout Builder is even more powerful when used in conjunction with Drupal's other out-of-the-box features such as revisioning, content moderation, and translations, but that is a topic for a future blog post.

Special thanks to Ted Bowman (Acquia) for co-authoring this post. Also thanks to Wim Leers (Acquia), Angie Byron (Acquia), Alex Bronstein (Acquia), Jeff Beeman (Acquia) and Tim Plunkett (Acquia) for their feedback during the writing process.

Dear Kettle and Neo4j friends,

Since I joined the Neo4j team in April I haven’t given you any updates despite the fact that a lot of activity has been taking place in both the Neo4j and Kettle realms.

First and foremost, you can grab the cool Neo4j plugins from neo4j.kettle.be (the plugin in the marketplace is always out of date since it takes weeks to update the metadata).

Then, based on valuable feedback from community members, we’ve updated the DataSet plugin (including unit testing) to include relative paths for filenames (for easier git support), to avoid modifying transformation metadata and to set custom variables or parameters.

Kettle unit testing ready for prime time

I’ve also created a plugin to make debugging transformations and jobs a bit easier. You can do things like set specific logging levels on steps (or only for a few rows) and work with zoom levels.

Right-clicking on a step, you can choose “Logging…” and set logging specifics.

Then, back on the subject of Neo4j, I’ve created a plugin to log the execution results of transformations and jobs (and a bit of their metadata) to Neo4j.

Graph of a transformation executing a bunch of steps. Metadata on the left, logging nodes on the right.

Those working with Azure might enjoy the Event Hubs plugins for a bit of data streaming action in Kettle.

The Kettle Needful Things plugin aims to fix bugs and solve silly problems in Kettle. For now it sets the correct local metastore on Carte servers AND… features a new launcher script called Maitre. Maitre supports transformations and jobs, as well as local, remote and clustered execution.

The Kettle Environment plugin aims to take a stab at life-cycle management by allowing you to define a list of Environments:

The Environments dialog shown at the start of Spoon

In each Environment you can set all sorts of metadata but also the location of the Kettle and MetaStore home folders.

Finally, because downloading, patching, installing and configuring all this is a lot of work, I’ve created an automated process which does this for you on a daily basis (for testing), so you can download Kettle Community Edition version 8.1.0.0 patched to 8.1.0.4, with all the extra plugins above, in its 1GB glory at: remix.kettle.be

To get it on your machine simply run:

wget remix.kettle.be -O remix.zip

You can also give these plugins (except for Needful Things and Environment) a try live on my sandbox WebSpoon server. You can easily run your own WebSpoon from the Docker container, which is also updated daily.

If you have suggestions, bugs, rants, please feel free to leave them here or in the respective GitHub projects. Any feedback is, as always, more than welcome. In fact, thank you all for the feedback given so far. It’s making all the difference. If you feel the need to contribute more opinions on the subject of Kettle, feel free to send me a mail (mattcasters at gmail dot com) to join our kettle-community Slack channel.

Enjoy!

Matt

November 11, 2018

The post HTTP-over-QUIC will officially become HTTP/3 appeared first on ma.ttias.be.

The protocol that's been called HTTP-over-QUIC for quite some time has now changed name and will officially become HTTP/3. This was triggered by this original suggestion by Mark Nottingham.

The QUIC Working Group in the IETF works on creating the QUIC transport protocol. QUIC is a TCP replacement done over UDP. Originally, QUIC was started as an effort by Google and then more of a "HTTP/2-encrypted-over-UDP" protocol.

Source: HTTP/3 | daniel.haxx.se

If you're interested in what QUIC is and does, I wrote a lengthy blogpost about it: Google’s QUIC protocol: moving the web from TCP to UDP.

The post HTTP-over-QUIC will officially become HTTP/3 appeared first on ma.ttias.be.

November 10, 2018

WP YouTube Lyte users might have received the following mail from Google/YouTube:

This is to inform you that we noticed your project(s) has not accessed or used the YouTube Data API Service in the past 60 days.

Please note, if your project(s) remains inactive for another 30 days from the date of this email (November 9, 2018), we will disable your project’s access to, or use of, the YouTube API Data Service. As per Section III(D)(4) of the YouTube API Services Developer Policies (link), YouTube has the right to disable your access to, and use of, the YouTube Data API Service if your project has been inactive for 90 consecutive days.

The reason for this is that LYTE caches the responses from YouTube to ensure optimal performance. You can check your API usage at https://console.developers.google.com/apis/dashboard. In my case, the project actually was not inactive, although not very active either.

To make sure my API access would not get disabled, I ticked the “Empty WP YouTube Lyte’s cache” checkbox in LYTE’s settings and saved changes, forcing LYTE to re-request the data from the YouTube API when pages/posts with LYTEs in them are requested again. The result was visible in the API usage graph.
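For the curious, re-requesting that data essentially boils down to one call to the YouTube Data API's videos endpoint plus caching the answer. The snippet below is a rough sketch of such a call using the WordPress HTTP API; it is not LYTE's actual code, and the option name, transient name and video ID are placeholders.

<?php
// Rough sketch: fetch snippet data for one video from the YouTube Data API v3
// and cache it, so the API is not hit on every page load.
// 'my_yt_api_key' and the transient name are hypothetical.
function my_lyte_like_refresh( $video_id ) {
	$endpoint = add_query_arg(
		array(
			'part' => 'snippet',
			'id'   => $video_id,
			'key'  => get_option( 'my_yt_api_key' ),
		),
		'https://www.googleapis.com/youtube/v3/videos'
	);

	$response = wp_remote_get( $endpoint );
	if ( is_wp_error( $response ) ) {
		return false;
	}

	$data = json_decode( wp_remote_retrieve_body( $response ), true );
	set_transient( 'my_lyte_' . $video_id, $data, WEEK_IN_SECONDS );

	return $data;
}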

I do have a non-negligible number of videos on this little blog already, but here’s one more for a rainy Saturday afternoon:

YouTube Video
Watch this video on YouTube.

 

November 09, 2018

How marketers and other advertisers are on a mission to destroy humanity, one tupperware lid at a time.

We all have a tupperware drawer with 20 containers and 20 lids. And yet, no matter how many you try, nothing works. None of them fits any of the others. Sometimes, by a stroke of luck, there is one pair that snaps together at the cost of great effort to hold down all 4 corners, a precious pair you will forever keep an eye out for, without ever getting rid of the rest of the drawer.

It is a mathematically or physically complex problem, a true enigma of the universe, of the same order as Schrödinger's sock.

So, yes, Elon Musk sends electric cars to Mars, but the important problems, like those of tupperware, nobody solves.

All because of Marketing!

Imagine: there is, perhaps, some little guy in Tupperware's R&D department. For 12 years he has worked on this problem; he has written insane equations in 18-dimensional affine spaces, reinvented mathematics. One day, Eureka!

Without losing a second, he runs to announce the good news. Not only has he understood the paradox, he has a solution to propose. The problem will be solved once and for all! His heart is pounding, sweat is dripping into his hair. Glory for him, a better world for the rest of humanity! No more mismatched tupperware! Less pollution!

That is when the marketing director arrives.

Yes, but no. That will tank our sales. 65% of our revenue comes from people who buy new tupperware because they can no longer find the lid (or, conversely, the container). So, you understand, my boy… Your affine spaces are all very nice, but this here is real life!

Twelve years of research and work down the drain. This great problem of humanity, which affects millions of people and will leave them forever in the suffering and torment of mismatched tupperware. Worse, the marketing director will propose making subtly different collections so that the lid looks compatible but, in the end, is not.

Marketing and sales are the antithesis of progress. Their sole objective is to make humanity miserable so that we keep buying again and again. If your job consists of "increasing sales", whether through web marketing, advertising or anything else, you are not a solution, you are the problem.

I am not making anything up; you only have to look at your tupperware drawer for proof.

Photo by QiYun Lu.

I am @ploum, a disconnected speaker and electronic writer, paid on a pay-what-you-want basis via Tipeee, Patreon, Paypal, Liberapay or in millibitcoins at 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE license.

Passive DNS is not a new technique but, in recent months, there has been more and more noise around it. Passive DNS is a technique used to record all resolution requests performed by DNS resolvers (the bigger they are, the more they collect) and then to allow searching through the historical data. This is very useful when you are working on a security incident, for example to track the different IP addresses used to host a C2 server.

But Passive DNS can also be very helpful on the other side: offensive security! A practical example is always better to demonstrate the value of Passive DNS. During an engagement, I found a server with ports 80, 443 and 8080 publicly available. Port 8080 looks juicy because it could be a management interface, a Tomcat instance or… who knows? Alas, there was no reverse DNS defined for the IP address and, when accessing the server directly through the IP address, I just hit a default page:

Default Vhost

You probably recognised the good old default page of IIS 6.0, but that is not so important here. The problem was the same on port 8080: an error message. Now, let's search for historical data about this IP address. That is where Passive DNS is helpful. Bingo! There were three records associated with this IP. Let's try to access port 8080 again, but this time through a vhost:

JBoss

This looks much more interesting! We can now go deeper and maybe find more juicy stuff to report… When you scan IP addresses, always query them against one or more Passive DNS databases to easily detect potential websites.
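As a sketch of what such a lookup can look like in practice, the snippet below queries a CIRCL-style Passive DNS REST endpoint for an IP address and prints the historical records. The endpoint URL, credentials and field names follow the common Passive DNS output format but are assumptions here; adapt them to whatever provider you actually use.

<?php
// Minimal Passive DNS lookup sketch (CIRCL-style REST API returning one JSON record per line).
// Endpoint, credentials and record fields are assumptions; check your provider's documentation.
$ip  = '203.0.113.10'; // documentation/example IP
$url = 'https://www.circl.lu/pdns/query/' . $ip;

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_USERPWD, 'your_username:your_password');
$body = curl_exec($ch);
curl_close($ch);

foreach (explode("\n", trim((string) $body)) as $line) {
    if ($line === '') {
        continue;
    }
    $record = json_decode($line, true);
    if (!is_array($record)) {
        continue;
    }
    printf(
        "%s %s %s (seen %s -> %s)\n",
        $record['rrname'],
        $record['rrtype'],
        $record['rdata'],
        date('Y-m-d', $record['time_first']),
        date('Y-m-d', $record['time_last'])
    );
}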

How to protect against this? You cannot prevent your IP addresses and hostnames from being logged in Passive DNS databases, but there is a simple way to avoid problems such as the one described above: keep your infrastructure up-to-date. Based on what I found and the Passive DNS data, this server is an old one. The application was moved a while ago, but the vhost and the application were still configured on the old server…

[The post Passive DNS for the Bad was first published on /dev/random]

November 08, 2018

PHP, the Open Source scripting language, is used by nearly 80 percent of the world's websites.

According to W3Techs, around 61 percent of all websites on the internet still use PHP 5, a version of PHP that was first released fourteen years ago.

Now is the time to give PHP 5 some attention. In less than two months, on December 31st, security support for PHP 5 will officially cease. (Note: Some Linux distributions, such as Debian Long Term Support distributions, will still try to backport security fixes.)

If you haven't already, now is the time to make sure your site is running an updated and supported version of PHP.
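Checking what a site actually runs takes one line; the 7.1.0 threshold in this small sketch is only an example, pick whatever minimum your platform requires.

<?php
// Print the running PHP version and warn if it is below an example minimum (7.1.0 here).
if (version_compare(PHP_VERSION, '7.1.0', '<')) {
    echo 'Running PHP ' . PHP_VERSION . ': time to plan an upgrade.' . PHP_EOL;
} else {
    echo 'Running PHP ' . PHP_VERSION . ': you are on a recent branch.' . PHP_EOL;
}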

Beyond security considerations, sites that are running on older versions of PHP are missing out on the significant performance improvements that come with the newer versions.

Drupal and PHP 5

Drupal 8

Drupal 8 will drop support for PHP 5 on March 6, 2019. We recommend updating to at least PHP 7.1 if possible, and ideally PHP 7.2, which is supported as of Drupal 8.5 (released in March 2018). Drupal 8.7 (to be released in May 2019) will support PHP 7.3, and we may backport PHP 7.3 support to Drupal 8.6 in the coming months as well.

Drupal 7

Drupal 7 will drop support for older versions of PHP 5 on December 31st, but will continue to support PHP 5.6 as long as there are one or more third-party organizations providing reliable, extended security support for PHP 5.

Earlier today, we released Drupal 7.61 which now supports PHP 7.2. This should make upgrades from PHP 5 easier. Drupal 7's support for PHP 7.3 is being worked on but we don't know yet when it will be available.

Thank you!

It's a credit to the PHP community that they have maintained PHP 5 for fourteen years. But that can't go on forever. It's time to move on from PHP 5 and upgrade to a newer version so that we can all innovate faster.

I'd also like to thank the Drupal community — both those contributing to Drupal 7 and Drupal 8 — for keeping Drupal compatible with the newest versions of PHP. That certainly helps make PHP upgrades easier.

We’ve had a week of heated discussion within the Perl 6 community. It is the type of debate where everyone seems to lose. It is not the first time we have done this and it certainly won’t be the last. It seems to me that we have one of those about every six months. I decided not to link to the many reiterations of the debate in order not to feed the fire.

Before defining sides in the discussion, it is important to identify the problems that drive the fears and hopes of the community. I don’t think that the latest round of discussions was about the Perl 6 alias in itself (Raku), but rather about the best strategy to address two underlying problems:

  • Perl 5 is steadily losing popularity.
  • Perl 6, despite the many enhancements and a steady growth, is not yet a popular language and does not seem to have a niche or killer app in sight yet to spur rapid adoption.

These are my observations and I don’t present them as facts set in stone. However, to me, they are the two elephants in the room. As an indication, we could refer to the likes of TIOBE, where Perl 5 fell out of the top 10 in just 5 years (from 8th to 16th, see “Very Long Time History” at the bottom), or compare the Github stars and Reddit subscribers of Perl 5 and 6 with those of languages at the same level of popularity on TIOBE.

Perl 5 is not really active on Github and the code mirror there has, as of today, only 378 stars. Rakudo, developed on Github, understandably has more: 1022. On Reddit, 11726 people subscribe to Perl threads (mostly Perl 5) and only 1367 to Perl 6. By comparison, Go has 48970 stars on Github and 57536 subscribers on Reddit. CPython, Python’s main implementation, has 20724 stars on Github and 290308 Reddit subscribers. Or put differently: if you don’t work in a Perl shop, can you remember the last time a young colleague knew about Perl 5 other than by reputation? Have you met people in the wild who code in Perl 6?

Maybe this isn’t your experience or maybe you don’t care about popularity and adoption. If this is the case, you probably shrugged at the discussions, frowned at the personal attacks and just continued hacking. You plan to reopen IRC/Twitter/Facebook/Reddit when the dust settles. Or you may have lost your patience and moved on to a more popular language. If this is the case, this post is not for you.

I am under the impression that the participants of what I would call “cyclical discussions” *do* agree with this evaluation of the situation of Perl 5 and 6. What is discussed most of the time is clearly not a technical issue. The arguments reflect different, and in many cases opposing, strategies to alleviate the aforementioned problems. The strategies I can uncover are as follows:

Perl 5 and Perl 6 carry on as they have for longer than a decade and a half

Out of inertia, this status-quo view is what we’ve seen in practice until today. While this vision honestly subscribes to the “sister languages narrative” (Perl is made up of two languages, in order to explain the weird major version situation), it doesn’t address the perceived problems. It chooses to ignore them. The flip side is that with every real or perceived threat to the status quo, the debate resurges.

Perl 6 is the next major version of Perl

This is another status-quo view. The “sister languages narrative” is dead: Perl 6 is the next major version of Perl 5. While a lot of work is needed to make this happen, it’s work that is already happening: make Perl 6 fast. The target, however, is defined by Perl 5: it must be faster than the previous release. Perl consists of many layers and VMs are interchangeable: it’s culturally still Perl if you replace the Perl 5 runtime with the one from Perl 6. This view is not well received by many people in the Perl 5 community, and certainly not by those emotionally or professionally invested in “Perl”, with Perl most of the time meaning Perl 5.

Both Perl 5 and Perl 6 are qualified/renamed

This is a renaming view that looks for a compromise between both communities. The “sister languages narrative” matches the real-world experience, and both languages can stand on their own feet while remaining one big community. By renaming both projects and keeping Perl in the name (e.g. Rakudo Perl, Pumpkin Perl) the investment in the Perl name is kept, while the next-major-version dilemma is dissolved. However, this strategy is not an answer for those in the Perl 6 community who feel that the (unjustified) reputation of Perl 5 is hurting Perl 6’s adoption. On the Perl 5 side there is some resentment about why good old “Perl” should need to be renamed when Perl 6 is the newcomer.

Rename Perl 6

Perl 6’s adoption is hindered by Perl 5’s reputation and, at the same time, Perl 6’s major number “squatting” places Perl 5 in limbo. The “sister language narrative” reflects the real-world situation: Perl 5 is not going away, and it should not. The reputation of Perl 5, unjustified as many feel it is, is not something Perl 6 needs to fix. Only action from the Perl 6 community is required in this view. However, a “sisterly” rename will also benefit Perl 5. Liberating the next major version will not fix Perl 5’s decline, but it may be a small piece of the puzzle of its recovery. Renaming will result in more loosely coupled communities, but Perl communities are not mutually exclusive and the relationship may improve without the version dilemma. The “sister language narrative” becomes a proud origin story. It was mostly Perl 6 people heavily invested in both the Perl *and* the Perl 6 brand who opposed this strategy.

Alias Perl 6

While very similar to the strategy above, this view is less ambitious: it only addresses the fact that Perl 6’s adoption is hindered by Perl 5’s reputation. It’s up to Perl 5 to fix its own major version problem. It’s a compromise between (a number of) people in the Perl 6 community. It may or may not be a way to prove whether an alias catches on; the renaming of Perl 6 should stay on the table.

Every single strategy will result in people being angry or disappointed because they honestly believe it hurts the strategy that they feel is necessary to alleviate Perl’s problems. We need to acknowledge that the fears and hopes are genuine and often related. Without going into detail so as not to reignite the fire (again), the tone of many of the arguments I heard this week from people opposing the Raku alias rang, to me, very close to the arguments Perl 5 users have against the Perl 6 name: being the victim of an injustice by people who don’t care about an investment of years, and the feeling of not being listened to.

By losing sight of the strategies in play, I feel the discussion degenerated very early into personal accusations that certainly leave scars while not resulting in even a hint of progress. We are not unique in this situation; see the recent example of the toll it took on Guido van Rossum. I can only sympathize with how Larry must be feeling these days.

While the heated debates may continue for years to come, it’s important to keep an eye on people that silently leave. The way to irrelevance is a choice.

 

(I disabled comments on this entry, feel free to discuss it on Reddit or the like. However respect the tone of this message and refrain from personal attacks.)

November 07, 2018

The post Automatic monitoring of Laravel Forge managed sites with Oh Dear! appeared first on ma.ttias.be.

Did I mention there's a blog for Oh Dear! yet? :-)

This week we shipped a pretty cool feature: if you use Laravel Forge to manage your servers & sites, we can automatically detect newly added sites and start monitoring those instantly.

The cool thing is that, while Laravel Forge might've started out as a pure-Laravel management tool, it can be used for any PHP-based framework. You can use it to manage your servers for Zend Framework, Symfony, ... or any other kind of project.

Today we're launching a cool feature for our users on the Laravel Forge platform: automatic monitoring of any of your sites and servers managed through Laravel Forge!

Source: Automatic monitoring of Laravel Forge managed sites

Give it a try, the integration is pretty seamless!

The post Automatic monitoring of Laravel Forge managed sites with Oh Dear! appeared first on ma.ttias.be.

November 06, 2018

While I have to admit that I've been using Zabbix since the 1.8.x era, I also have to admit that I'm not an expert, and that one can learn new things every day. I recently had to implement a new template for a custom service that is multi-instance aware: it can be started multiple times with different configurations, each with its own set of settings (like the TCP port on which to listen, etc.), and the number of running instances can differ from one node to the next.

I was thinking about the best way to implement this in Zabbix, and my initial idea was to just have one template per possible instance type, which would then use macros defined at the host level to know which port to check, etc. That would in fact mean backporting into Zabbix what configuration management (Ansible in our case) already has to know in order to deploy such an app instance.

But in parallel to that, I have always liked the fact that Zabbix itself has internal tools to auto-discover items, and thus triggers for those: that's called Low-Level Discovery (LLD for short).

By default, if you use (or have modified) some of the stock Zabbix templates, you can see LLD in action for the mounted filesystems or the network interfaces present in your Linux OS. That's the "magic": you added a new mount point or a new interface? Zabbix discovers it automatically, starts monitoring it, and also graphs values for it.

So back to our monitoring problem and the need for multiple templates: what if we could use LLD too and have Zabbix check our deployed instances (multiple ones) automatically? The good news is that we can: one can create custom LLD rules, so everything works out of the box with only one template added for those nodes.

If you read the link above about custom LLD rules, you'll see some examples of a script being called at the agent level, from the Zabbix server, at a periodic interval, to "discover" those custom checks. The interesting part is that what gets returned to the Zabbix server is a json document, tied to a new key that is declared at the template level.

So it (usually) goes like this:

  • create a template
  • create a new discovery rule, give it a name and a key (and optionally add Filters)
  • deploy a new UserParameter at the agent level that reports, under that key, the json string it needs to declare to the Zabbix server
  • the Zabbix server receives/parses that json and, based on the macros returned in it, creates Item prototypes, Trigger prototypes and so on (a minimal sketch of such a payload follows below)
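To make that concrete, here is a minimal sketch of what the agent returns for a custom LLD rule (not taken from a real setup; the macro names simply match the ones used later in this post and the values are illustrative). It boils down to a "data" array with one object per discovered entity, each object mapping LLD macros to values:

{
  "data": [
    { "{#INSTANCE}": "default", "{#RPCPORT}": "8443", "{#HTTPPORT}": "8444" },
    { "{#INSTANCE}": "second",  "{#RPCPORT}": "8445", "{#HTTPPORT}": "8446" }
  ]
}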

Magic! ... except that in my specific case, for various reasons, I never allowed the zabbix user to really launch commands, due to its limited rights and also the SELinux context in which it runs (for interested people, that's the zabbix_agent_t context).

I simply didn't want to change that base rule for our deployments, but the good news is that you don't have to use UserParameter for LLD! It's true that if you look at the existing Discovery Rule "Network interface discovery", you'll see the key net.if.discovery on which everything else is built, but its Type is "Zabbix agent". We can use something else from that list, just like we already do for a "normal" check.

I'm already (ab)using the Trapper item type for a lot of hardware checks. The reason is simple: as the zabbix user is limited (and I don't want to grant it more rights), I have some scripts checking for hardware RAID controllers (if any), etc., and reporting back to Zabbix through zabbix_sender.

Let's use the same logic for the json string to be returned to the Zabbix server for LLD (and yes, Trapper is in the list of available Types for a discovery rule).

It's even easier for us, as we'll control all of that through Ansible: it's what we already use to deploy/configure our RepoSpanner instances, so we have all the logic there.

Let's first start by creating the new template for repospanner, with a discovery rule (detecting each instance and its settings):

zabbix-discovery-type.png

You can then apply that template to your host(s) and wait ... but first we need to report back, from the agent to the server, which instances are deployed/running. So let's see how to implement that through Ansible.

To keep it short, in Ansible we have the following variables (default values, not the real ones) from roles/repospanner/default.yml:

...
repospanner_instances:
  - name: default
    admin_cli: False
    admin_ca_cert:
    admin_cert:
    admin_key:
    rpc_port: 8443
    rpc_allow_from:
      - 127.0.0.1
    http_port: 8444
    http_allow_from:
      - 127.0.0.1
    tls_ca_cert: ca.crt
    tls_cert: nodea.regiona.crt
    tls_key: nodea.regiona.key
    my_cn: localhost.localdomain
    master_node : nodea.regiona.domain.com # to know how to join a cluster for other nodes
    init_node: True # To be declared only on the first node
...

That simple example has only one instance, but you can easily see how to have multiple ones. So here is the logic: let's have Ansible, when configuring the node, create the file that will be used by zabbix_sender (triggered by Ansible itself) to send the json to the Zabbix server. zabbix_sender can read a file in which each line contains the following fields (see the man page):

  • hostname (or '-' to use name configured in zabbix_agentd.conf)
  • key
  • value

Those three fields have to be separated by a single space only and, importantly, you can't have extra empty lines (but that's something you quickly notice when playing with this for the first time).

What does our file (roles/repospanner/templates/zabbix-repospanner-lld.j2) look like?

- repospanner.lld.instances { "data": [ {% for instance in repospanner_instances -%} { "{{ '{#INSTANCE}' }}": "{{ instance.name }}", "{{ '{#RPCPORT}' }}": "{{ instance.rpc_port }}", "{{ '{#HTTPPORT}' }}": "{{ instance.http_port }}" } {%- if not loop.last -%},{% endif %} {% endfor %} ] }

If you have already used jinja2 templates with Ansible, it's quite easy to understand. But I have to admit that I had trouble with the {#INSTANCE} part: that one isn't an Ansible variable, but rather a fixed name for the macro that we'll send to Zabbix (and so reuse as a macro everywhere). Ansible, when trying to render the jinja2 template, was complaining about a missing "comment": indeed, {# ... #} is a comment in jinja2. So the best way (thanks to the people in #ansible for that trick) is to wrap it in {{ }} brackets and then escape it so that it is rendered literally as {#INSTANCE} (nice to know if you ever have to do that too).

The rest is trivial; excerpt from monitoring.yml (included in that repospanner role):

- name: Distributing zabbix repospanner check file
  template:
    src: "{{ item }}.j2"
    dest: "/usr/lib/zabbix/{{ item }}"
    mode: 0755
  with_items:
    - zabbix-repospanner-check
    - zabbix-repospanner-lld
  register: zabbix_templates   
  tags:
    - templates

- name: Launching LLD to announce to zabbix
  shell: /bin/zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -i /usr/lib/zabbix/zabbix-repospanner-lld
  when: zabbix_templates is changed

And this is how it is rendered on one of my test nodes:

- repospanner.lld.instances { "data": [ { "{#INSTANCE}": "namespace_rpms", "{#RPCPORT}": "8443", "{#HTTPPORT}": "8444" }, { "{#INSTANCE}": "namespace_centos", "{#RPCPORT}": "8445", "{#HTTPPORT}": "8446" }  ] }

As Ansible auto-announces/pushes that back to Zabbix, the Zabbix server can automatically start creating (through LLD, based on the item prototypes) the corresponding checks, triggers and graphs, and so start monitoring each newly deployed instance. You want to add a third one (we have two in our case)? Ansible pushes the config, the rendered .j2 template changes, and Ansible notifies the Zabbix server again. Etc, etc ...

The rest is just "normal" Zabbix operation: you create item/trigger prototypes and just use those special macros coming from LLD:

zabbix-item-prototypes.png

It was worth spending some time in the LLD documentation and in #zabbix discussing LLD: once you see the added value, and that you can configure it all automatically through Ansible, you realize how powerful it can be.

I’ve been running a lot lately, and so have been listening to lots of podcasts! Which is how I stumbled upon this great episode of the Lullabot podcast recently — embarrassingly one from over a year ago: “Talking Performance with Pantheon’s David Strauss and Josh Koenig”, with David and Josh from Pantheon and Nate Lampton from Lullabot.

(Also, I’ve been meaning to blog more, including simple responses to other blog posts!)

Interesting remarks about BigPipe

Around 49:00, they start talking about BigPipe. David made these observations around 50:22:

I have some mixed views on exactly whether that’s the perfect approach going forward, in the sense that it relies on PHP pumping cached data through its own system, which basically requires handling a whole bunch of strings to send them out, as well as that it seems to be optimized around this sort of HTTP 1.1 behavior. Which, to compare against HTTP 2, there’s not really any additional cost to additional connections in HTTP 2. So I think it still remains to be seen how much benefit it provides in the real world with the ongoing evolution of some of these technologies.

David is right; BigPipe is written for an HTTP 1.1 world, because BigPipe is intended to benefit as many end users as possible.

And around 52:00, Josh then made these observations:

It’s really great that BigPipe is in Drupal core because it’s the kind of thing that if you’re building your application from scratch that you might have to do a six month refactor to even make possible. And the cache layer that supports it, can support lots other interesting things that we’ll be able to develop in the future on top of Drupal 8. […] I would also say that I think the number of cases where BigPipe or ESI are actually called for is very very small. I always whenever we talk about these really hot awesome bleeding-edge cache technologies, I kinda want to go back to what Nate said: start with your Page Cache, figure out when and how to use that, and figure out how to do all the fundamentals of performance before even entertaining doing any of these cutting-edge technologies, because they’re much trickier to implement, much more complex and people sometimes go after those things first and get in over their head, and miss out on a lot of the really big wins that are easier to get and will honestly matter a lot more to end users. “Stop thinking about ESI, turn on your block cache.”

Josh is right too, BigPipe is not a silver bullet for all performance problems; definitely ensure your images and JS are optimized first. But equating BigPipe with ESI is a bit much; ESI is indeed extremely tricky to set up. And … Drupal 8 has always cached blocks by default. :)

Finally, around 53:30, David cites another reason why more sites are not handling authenticated traffic:

[…] things like commenting often move to tools like Disqus and whether you want to use Facebook or the Google+ ones or any one of those kind of options; none of those require dynamic interaction with Drupal.

Also true, but we’re now seeing the inverse movement, with increased skepticism about trusting social media giants, not to mention the privacy (GDPR) implications. Which means sites that have great performance for dynamic/personalized/uncacheable responses are becoming more important again.

BigPipe’s goal

David and Josh were being constructively critical; I would expect nothing less! :)

But in their description and subsequent questioning of BigPipe, I think they forget its two crucial strengths:

BigPipe works on any server and is therefore available to everybody, and it works for many things out of the box, including e.g. every uncacheable Drupal block! In other words: no infrastructure (changes) required!

Bringing this optimization that sits at the intersection of front-end & back-end performance to the masses rather than having it only be available for web giants like Facebook and LinkedIn is a big step forward in making the entire web fast.

Using BigPipe does not require writing a single line of custom code; the module effectively progressively enhances Drupal’s HTML rendering, and it has been turned on by default since Drupal 8.5!

Conclusion

Like Josh and David say: don’t forget about performance fundamentals! BigPipe is no silver bullet. If you serve 100% anon traffic, BigPipe won’t make a difference. But for sites with auth traffic, personalized and uncacheable blocks on your Drupal site are streamed automatically by BigPipe, no code changes necessary:

(That’s with 2 slow blocks that take 3 s to render. Only one is cacheable. Hence the page load takes ~6 s with cold caches, ~3 s with warm caches.)

Today, Acquia announced a partnership with BigCommerce, a leading cloud commerce platform. BigCommerce recently launched a headless commerce solution called BigCommerce Commerce-as-a-Service to complement its "traditional" commerce solutions. Acquia's partnership with BigCommerce will center around this Commerce-as-a-Service solution to enable customers to take advantage of headless commerce architectures, while leveraging Drupal and Acquia to power content-rich shopping experiences.

With BigCommerce and Acquia, brands can use a commerce-as-a-service approach to quickly build an online store and oversee product management and transactional data. The front-end of the commerce experience will be powered by Drupal, built and managed using the Acquia Platform, and personalized with Acquia Lift.

This month, Acquia has been focused on expanding our partnerships with headless commerce vendors. This announcement comes on the heels of our partnership with Elastic Path. Our partnership with BigCommerce not only reinforces our belief in headless commerce, but also our commitment to a best-of-breed commerce strategy that puts the needs of our customers first.

I published the following diary on isc.sans.edu: “Malicious Powershell Script Dissection”:

Here is another example of malicious Powershell script found while hunting. Such scripts remain a common attack vector and many of them can be easily detected just by looking for some specific strings. Here is an example of YARA rule that I’m using to hunt for malicious Powershell scripts… [Read more]

[The post [SANS ISC] Malicious Powershell Script Dissection has been first published on /dev/random]

Restés seuls et nus, Nellio et Eva pénètrent dans une grande pièce dont la porte vient de s’ouvrir.

– Georges Farreck !

Alors que je tente machinalement de récupérer quelques vêtements et d’essuyer les poisseux fluides qui recouvrent mon entrejambe, je ne peux retenir un cri de surprise.

Debout, les mains appuyées sur un bureau, Georges Farreck est en grande conversation avec une élégante femme blonde vêtue d’un tailleur bleu électrique qui souligne son ventre rebondi. Elle caresse un chat angora au long pelage gris. Georges sursaute et se retourne vers nous.

— Nellio ! Eva ! Que…

Il a l’air profondément surpris. La femme, elle, garde son calme et se contente d’un petit sourire ironique. À ses côtés, je reconnais Warren, l’administrateur du conglomérat de la zone industrielle. Il semble nerveux, ennuyé et passe constamment un mouchoir sur son front luisant.

Je sens la colère me faire vibrer les tempes. J’ai envie de saisir Georges par la gorge, de lui faire mal, de le griffer, de le faire saigner. Une envie sourde de violence se mélange effroyablement au reste de l’orgasme que je viens de vivre.

— Georges ! hurlé-je, tes sales manigances viennent de coûter la vie à deux de mes amis, tu ne vas pas…

Il semble profondément hébété et se contente de bégayer de vagues réponses.

Sans se départir de sa morgue ironique, la femme enceinte me lance alors :
— Bienvenue Nellio. Heureuse d’enfin vous rencontrer, je suis Mérissa. Je suis heureuse de voir que vous avez remplacé le sex-toy que nous avons détruit chez Georges. Vous avez bien fait ! Après tout, nous les produisons en série.

Avant que je n’aie pu émettre la moindre réponse, Eva s’avance sans un mot. Sans aucune honte, son corps nu glisse majestueusement vers le bureau. Elle se saisit d’une paire de ciseaux qu’elle s’enfonce, sans un cri, dans l’avant-bras, la blessure tournée bien en évidence vers la femme blonde. Ennuyé, le chat saute sur le sol et s’éloigne d’un air digne.

Après quelques secondes de silence, du sang se met à couler de la blessure d’Eva.

Mérissa a un mouvement d’effroi.

— Mais… Ce n’est pas possible ! murmure-t-elle.
— Si, grogne Eva en serrant les dents.
— Vous… vous êtes le modèle original ? balbutie Warren.
— Il n’y a pas de modèle original, répond violemment Mérissa.
Puis, se retournant vers Eva :
— Qui es-tu ?
— Je suis humaine et je viens débrancher l’algorithme.

Le visage de Mérissa se tord de rage et de surprise.

— Comment… Comment peux-tu connaître l’existence de l’algorithme ? J’en suis l’auteur, j’en suis le seul et unique maître !
— Tu en es l’esclave, répond Eva. Je le sais. Car je suis l’algorithme.

Mon cœur s’arrête. L’explication que me propose mon cerveau me semble si incroyable, si inconcevable.

— Si tu es humaine, je peux te tuer, menace Mérissa. Et tuer ton soupirant. Tout continuera comme avant, comme si vous n’aviez jamais existé. Le printeur rejoindra la longue liste des inventions immédiatement perdues.
— Pas sûr, ne puis-je m’empêcher de répondre. Junior et moi avons envoyé les plans détaillés du printeur à des dizaines de personnes en leur demandant de le construire et de diffuser l’information.
— Mon pauvre Nellio, l’information sur le réseau, ça se contrôle, ça s’efface. Quelques morts dans les attentats, quelques sites web modifiés et plus personne ne se souviendra de ce printeur.

Elle ricane.

— C’est pour cela que nous n’avons pas utilisé le réseau, dis-je calmement.

Elle s’arrête.

— Nous avons rédigé des plans papiers que nous avons envoyés à travers l’intertube…
— Quoi ? Mais l’intertube n’a jamais été inauguré ! Ce n’était que pour occuper les politiciens !
— …avec des instructions demandant de construire des printeurs et de les envoyer à travers l’intertube. D’ici quelques jours, les printeurs seront monnaie courante.
— Mais vous êtes fous !

Elle hurle avant de jeter un regard noir à Georges Farreck.

— Et toi, pauvre abruti, tu n’as pas réussi à l’arrêter.
– Mérissa, je ne te permets pas de me traiter en sous-fifre. J’ai créé une fondation…
— …pour servir de paravent crédible à la création des printeurs, je le sais, c’est moi qui te l’ai ordonné.
— Non, pas de paravent, hurle Georges, les joues gonflées par la colère. Je voulais sincèrement faire cesser cet esclavage dont je soupçonnais l’existence sans en avoir la preuve définitive.
— Et, au passage, devenir le seul et unique détenteur du brevet sur le printeur.
— Mais…
— Développer le printeur et le garder secret, c’était ta mission. Le printeur était vital pour nous, les astéroïdes sont de moins en moins rentables. Mais il fallait que cela reste un secret ! Toi, pauvre idiot, en voulant faire cavalier seul, tu as fait en sorte que le secret se répande dans la nature ! C’est une catastrophe !
— Je pense au contraire que c’est la meilleure chose qui puisse arriver à l’humanité, fais-je d’une voix que je veux assurée.

Mérissa me fixe de ses yeux noirs tout en se tenant le ventre arrondi par la maternité prochaine. Une chemise à peine enfilée, sans pantalon, le sexe humide, je frissonne d’humiliation.

— Pauvre inconscient. Tu ne réalises pas ce que tu as fait !
— Je crois que si, fais-je en soutenant opiniâtrement son regard.
– Tu vas foutre en l’air l’économie.
— Une économie qui repose sur l’esclavage d’une partie de la population et l’abrutissement de l’autre. Je suis fier de la détruire !
— Tu ne te rends pas compte des conséquences ! Sans règles, les gens vont utiliser les atomes de l’atmosphère qui les entoure pour faire des objets inutiles, ils risquent de détruire la planète !
— Parce que c’est vrai que vous, on peut vous faire confiance, vous avez démontré une réelle maturité dans le domaine !

— Madame ! Nous sommes conscients que la santé de votre bébé est primordiale…

Comme un seul homme, toutes les personnes présentes dans la pièce se retournent. Sur le mur, un homme en costume est apparu. Il n’a pas de jambes et s’adresse à nous avec un sourire forcé orné d’une moustache lissée.

— …ou de vos bébés, devrais-je dire, car les jumeaux apportent deux fois plus de bonheur. Mais également cinq fois plus de risques de vergetures disgracieuses. Y avez-vous pensé ?

Soudainement, un Georges Farreck géant se met à descendre en parachute le long du mur. Il est magnifique, jeune mais pourtant mature, plein de charme et de sex appeal. Pendant un instant, mon cœur s’arrête de battre et je sens pointer un début d’érection.

— Madame, dit le Georges Farreck parachutiste, avec BioVerge au Cadmium actif, vous pouvez dire adieu aux vergetures.

Il lance un sourire étincelant avant de disparaître. Le flacon qu’il tenait dans les mains se met à grandir et tourbillonner, lui donnant un effet de relief très réussi. Puis, le mur s’éteint et redevient soudainement triste et silencieux.

— Ah oui, je me souviens de ce contrat, murmure Georges Farreck, le vrai au visage ridé et cerné. Ils ont fait du beau travail au montage, je suis très réaliste. Mais le texte est franchement nul. J’ai l’air de sortir d’un spectacle d’école primaire.

Photo par Trey Ratcliff

Je suis @ploum, conférencier et écrivain électronique déconnecté rémunéré en prix libre sur Tipeee, Patreon, Paypal, Liberapay ou en millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Vos soutiens, même symboliques, font une réelle différence pour moi. Merci !

Ce texte est publié sous la licence CC-By BE.

Ce jeudi 15 novembre 2018 à 19h se déroulera la 72ème séance montoise des Jeudis du Libre de Belgique.

Le sujet de cette séance : À quoi ressemble une application décentralisée ?

Thématique : blockchain|sécurité|développement

Public : étudiant|professionnel|développeur (le public attendu doit avoir une connaissance de base en informatique. Comprendre le JS est amplement suffisant)

L’animateur conférencier : Said Eloudrhiri

Lieu de cette séance : Campus technique (ISIMs) de la Haute Ecole en Hainaut, Avenue V. Maistriau, 8a, Salle Académique, 2e bâtiment (cf. ce plan sur le site de l’ISIMs, et ici sur la carte Openstreetmap).

La participation sera gratuite et ne nécessitera que votre inscription nominative, de préférence préalable, ou à l’entrée de la séance. Merci d’indiquer votre intention en vous inscrivant via la page http://jeudisdulibre.fikket.com/. La séance sera suivie d’un verre de l’amitié.

Les Jeudis du Libre à Mons bénéficient aussi du soutien de nos partenaires : CETIC, OpenSides, MeaWeb et Phonoid.

Si vous êtes intéressé(e) par ce cycle mensuel, n’hésitez pas à consulter l’agenda et à vous inscrire sur la liste de diffusion afin de recevoir systématiquement les annonces.

Pour rappel, les Jeudis du Libre se veulent des espaces d’échanges autour de thématiques des Logiciels Libres. Les rencontres montoises se déroulent chaque troisième jeudi du mois, et sont organisées dans des locaux et en collaboration avec des Hautes Écoles et Facultés Universitaires montoises impliquées dans les formations d’informaticiens (UMONS, HEH et Condorcet), et avec le concours de l’A.S.B.L. LoLiGrUB, active dans la promotion des logiciels libres.

Description : De nos jours, les blockchains restent indissociables des cryptomonnaies. Beaucoup ne voient que le volet spéculatif de ces monnaies virtuelles en oubliant le potentiel disruptif des blockchains. Cette présentation se concentrera sur la blockchain Ethereum vue au travers du fonctionnement d’une DApp (Decentralized Application). Après une brève introduction, nous aborderons les éléments clés pour installer l’environnement de développement en vue de construire, tester et déployer un simple smart contract. Une connaissance de base en programmation est utile. Au terme de cette présentation, nous espérons encourager les participants à poursuivre leur découverte de ce formidable écosystème.

Short bio : Said Eloudrhiri is a freelance consultant, an Agile coach and an Ethereum blockchain speaker and trainer. In 2016, he co-founded the ChainSkills initiative with Sébastien Arbogast to involve more developers in the Ethereum blockchain ecosystem. With ChainSkills, Said has published a course on Udemy about Ethereum to create and publish a first decentralised application (Dapp).

The post How to size & scale your Laravel Queues appeared first on ma.ttias.be.

We've got a new blog set up for our Oh Dear! monitoring service. In it, we share some details of our setup, how we tackle technical challenges and how we implement some of our monitoring tooling.

The first post covers how we use Laravel queues and how we designed around those.

Laravel offers a convenient way to create asynchronous background tasks using its queues. We utilize those heavily at Oh Dear! for all our monitoring jobs and in this post we'll share some of our lessons learned and what we consider to be best practices.

Source: How to size & scale your Laravel Queues

The post How to size & scale your Laravel Queues appeared first on ma.ttias.be.

November 04, 2018

Cela fait un mois que je suis déconnecté, 30% du temps initial que je m’étais fixé en démarrant l’expérience. Puis-je déjà tirer des conclusions ?

Oui, clairement. La première, c’est que je ne suis pas du tout impatient de me reconnecter. En fait, j’ai même peur de le faire. Il y a quelques jours, après avoir fini une tâche, j’ai machinalement tapé l’URL d’un site bloqué, un site d’informations générales.

Ce simple fait est en soi incroyable : mon cerveau est tellement formaté que même après un mois d’abstinence, les réflexes musculaires agissent encore. Je n’ai pas voulu aller sur ce site. Cela fait un mois que je n’y suis plus allé. Je n’avais aucune envie, aucun intérêt à y aller. Et, pourtant, mes doigts ont tapé l’URL.

Or, il s’avère que Adguard était désactivé. Pour la petite histoire, un site ne chargeait pas correctement et j’avais voulu vérifier si ce n’était pas Adguard qui interférait. J’avais oublié de le réactiver juste après.

Je me suis donc retrouvé sur le site d’actualités, à ma grande surprise. Premièrement, car je ne m’étais pas rendu compte que j’avais tapé l’URL et, en deuxième lieu, car j’étais persuadé que le site était bloqué. J’ai donc décidé de fermer le site et de réactiver le blocage. Mais, auparavant, j’ai quand même lu tous les grands titres et j’ai ouvert deux articles. Je n’ai littéralement pas pu m’en empêcher.

C’est effrayant.

Le pire est que, sans ma déconnexion, je ne me serais probablement jamais rendu compte de ces automatismes incontrôlables. Il y a également de fortes chances pour que mon cas soit relativement représentatif d’une certaine frange de la population. Même si je suis un cas extrême, nous sommes certainement tous atteints à des degrés différents. La seule chose qui change est la conscience.

Mais la question primordiale reste de savoir si cette addiction est vraiment nocive.

Pour moi, la réponse est sans conteste un oui généralisé.

En un mois seulement, j’ai déjà l’impression d’être en train de changer profondément et durablement. Je suis plus patient, plus calme. Mes colères sont moins fréquentes. J’écoute les autres sans éprouver de l’ennui (ou moins, selon mon interlocuteur). Je savoure le temps avec mes proches. Je suis moins stressé, j’ai moins l’impression qu’il y a des millions de choses à faire. Ma todo list et mon inbox email sont sous contrôle permanent. Mieux : ça vaut ce que ça vaut, mais j’ai accompli en octobre autant de tâches de ma liste de todos qu’en 3 mois habituellement.

En fait, c’est simple : lorsque je lance mon ordinateur, au lieu d’aller dans mon navigateur, je vais dans ma todo list et je choisis ce que j’ai envie de faire à ce moment.

Je recommence à écrire de la fiction, même si ce n’est encore que le tout début. Je tiens un journal de manière régulière et ça me plait.

Pour procrastiner, je réfléchis à mon workflow général de travail, j’ai eu une phase où j’ai testé pas mal de logiciels. Voire j’écris dans mon journal. Ce n’est certes pas le plus productif, mais c’est au moins une procrastination qui n’est pas nocive. Autre procrastination : je teste le fait de m’autoriser un lecteur de flux RSS, mais avec seulement quelques sources sélectionnées. Si un flux a un volume trop important ou poste des articles trop génériques, je le supprime immédiatement. D’ailleurs, je cherche des flux RSS d’individus qui partagent leurs idées. Pas de news, pas d’actualités, mais des lectures un peu fouillées qui poussent à la réflexion. Le but n’est pas d’être informé, mais d’être inspiré. Vous avez des recommandations ?

Si je lis toujours autant de fiction qu’avant, j’ai par contre ajouté à mon régime de véritables livres sur arbres morts de non-fiction, livres qui traînent un peu partout dans la maison et que je saisis quand l’envie m’en prend.

Bon, évidemment, tout n’est pas rose. Je ne publie pas beaucoup plus qu’avant. Je ne suis toujours pas un parangon de patience et, si je me distancie de mon téléphone, je reste accro à mon clavier. En fait, je deviens de plus en plus accro à l’écriture, ce qui est une bonne chose, mais les tâches ménagères ne m’intéressent pas plus qu’avant.

Un autre chantier en cours, outre le fait que je commence à lâcher prise sur les opportunités manquées, c’est celui de l’ego. J’ai toujours eu beaucoup d’égo et le fait d’être parfois appelé « blogueur influent » n’aide pas à rester modeste. Mon blog est une quête de visibilité personnelle, de reconnaissance. J’ai le désir profond de devenir célèbre grâce à mon travail. Les réseaux sociaux exploitent cette faiblesse en me faisant guetter les followers, les likes, etc.

Me déconnecter me permet de m’affranchir de cela. Je ne sais pas si ce que je poste a du succès ou non, je ne sais pas si je suis en train de devenir (un tout petit peu) célèbre ou non. Et, du coup, une partie de mon cerveau commence à lâcher prise sur cet objectif irrationnel. L’effet net est que je sens grandir en moi l’envie de me concentrer sur des projets à plus long terme plutôt que de tenter d’obtenir une gratification immédiate. Mais je ne suis clairement qu’au début.

D’ailleurs, en parlant de gratification, je n’ai jamais reçu autant de mails de lecteurs et de dons sur Tipeee, Paypal ou en Bitcoin. Cela me touche très profondément de voir que certains prennent le temps pour réagir à mes billets, pour me recommander des lectures, pour me remercier. Je suis reconnaissant d’avoir cette chance. Merci !

Enfin, je constate également que j’ai moins envie d’acheter des gadgets sur Kickstarter ou du matériel de vélo. Je ne sais pas si c’est ma volonté de minimalisme ou bien si c’est un effet de ma déconnexion (ou bien les deux), mais, clairement, je prends conscience que le shopping, même en ligne, est un passe-temps, une procrastination pour éviter de faire autre chose. La déconnexion a donc un impact positif sur notre objectif de famille de dépenser moins pour devoir gagner moins.

D’ailleurs, mon cerveau perd de la tolérance face à toutes les sollicitations sensorielles. Voir un réseau social sur un écran m’agresse par les couleurs, les notifications, les publicités. Les écrans publicitaires me font mal aux yeux dans la rue. Bon, c’était déjà un peu le cas avant.

Cette déconnexion me permet également d’apprécier la chance que j’ai d’avoir ma femme et ma famille.

Car, soyons honnête, mon misanthrope égoïsme est peu compatible avec la vie de famille. Si j’adore ma famille et mes enfants, il m’est impossible de ne pas imaginer que si j’étais resté célibataire, j’aurais tout le temps que je veux pour écrire, regarder des films, faire plein de sport, écouter du métal à fond. Que je n’aurais pas dû annuler en catastrophe l’enregistrement d’un épisode du ProofOfCast pour éponger du vomi dans toute la chambre.

Mais cette déconnexion, je l’ai souhaitée parce que je voulais améliorer ma vie de famille. Parce que je voulais améliorer la qualité de nos relations et que ma femme avait pointé du doigt mon addiction à mon smartphone.

Aujourd’hui, cette déconnexion m’apporte bien plus que ce que je n’imaginais. Du coup, je réalise que sans femme et sans enfants, je n’aurais sans doute pas entrepris cette expérience. J’aurais certainement continué à me perdre, à courir après ma petite gloriole, à guetter les opportunités pour avoir quelques miettes de gloire.

Bref, cette déconnexion me permet d’apprendre à apprécier qui je deviens tout en me rendant compte que ce que je suis est désormais indissociable de ma femme et de mes enfants, que le Ploum alternatif resté célibataire m’est de plus en plus étranger.

Et tout ça après seulement un mois sans réseaux sociaux…

Photo by Will van Wingerden on Unsplash

Je suis @ploum, conférencier et écrivain électronique déconnecté rémunéré en prix libre sur Tipeee, Patreon, Paypal, Liberapay ou en millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Vos soutiens, même symboliques, font une réelle différence pour moi. Merci !

Ce texte est publié sous la licence CC-By BE.

The post Laravel Telescope: Data too long for column ‘content’ appeared first on ma.ttias.be.

For Oh Dear!, we're using Laravel Telescope as a convenient way of tracking/displaying exceptions, runs, ...

It stores its exceptions in the database, so you can easily view them again. It stores those in a TEXT field in MySQL, which is limited to 65,535 bytes (2^16 - 1). Some of our exceptions tend to be pretty long, and they were hitting this limit.

The fix is to change the column from a TEXT to a LONGTEXT instead.

MariaDB [ohdear]> ALTER TABLE telescope_entries MODIFY content LONGTEXT;

You now have up to 2^32 - 1 bytes for storing data, or roughly 4GB.
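If you prefer to keep that change in version control rather than running the SQL by hand, a Laravel migration can do the same thing. This is only a sketch (the class name is made up, and changing an existing column requires the doctrine/dbal package to be installed):

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

class MakeTelescopeContentLongtext extends Migration
{
    public function up()
    {
        Schema::table('telescope_entries', function (Blueprint $table) {
            // Switch the content column from TEXT to LONGTEXT (kept nullable, as in the original schema)
            $table->longText('content')->nullable()->change();
        });
    }
}

Run it with php artisan migrate as usual.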

The table will now show it has a LONGTEXT field instead.

 MariaDB [ohdear]> describe telescope_entries;
+-------------------------+---------------------+------+-----+---------+----------------+
| Field                   | Type                | Null | Key | Default | Extra          |
+-------------------------+---------------------+------+-----+---------+----------------+
| sequence                | bigint(20) unsigned | NO   | PRI | NULL    | auto_increment |
| uuid                    | char(36)            | NO   | UNI | NULL    |                |
| batch_id                | char(36)            | NO   | MUL | NULL    |                |
| family_hash             | varchar(191)        | YES  | MUL | NULL    |                |
| should_display_on_index | tinyint(1)          | NO   |     | 1       |                |
| type                    | varchar(20)         | NO   | MUL | NULL    |                |
| content                 | longtext            | YES  |     | NULL    |                |
| created_at              | datetime            | YES  |     | NULL    |                |
+-------------------------+---------------------+------+-----+---------+----------------+
8 rows in set (0.00 sec)

Your exceptions/stacktraces should now be logged properly.

The post Laravel Telescope: Data too long for column ‘content’ appeared first on ma.ttias.be.

October 31, 2018

Today, Acquia announced a partnership with Elastic Path, a headless commerce platform. In this post, I want to explore the advantages of headless commerce and the opportunity it holds for both Drupal and Acquia.

The advantages of headless commerce

In a headless commerce approach, the front-end shopping experience is decoupled from the commerce business layer. Headless commerce platforms provide a clean separation between the front end and back end; the shopping experience is provided by Drupal and the commerce business logic is provided by the commerce platform. This decoupling provides advantages for the developer, merchant and shopping experience.

  • For developers, it means that you can decouple both the development and the architecture. This allows you to build an innovative shopping experience without having to worry about impacting a system as critical as your commerce backend. For instance, you can add ratings and reviews to your shopping experience without having to redeploy your commerce platform.
  • For merchants, it can provide a better experience for administering the shop. Traditional commerce solutions usually ship with a lightweight content management system. This means that there can be competition over which system provides the experience layer (i.e. the "glass"). This can introduce overlap in functionality; both systems offer ways to manage URLs, create landing pages, manage user access rights, provide theming systems, etc. Because headless commerce systems are designed from the ground up to integrate with other systems, there is less duplication of functionality. This provides a streamlined experience for merchants.
  • And last but not least, there is the shopping experience for end-users or consumers. Consumers are demanding better experiences when they shop online; they want editorials, lookbooks, tutorials, product demonstration videos, testimonials, and more. They desire the content-rich experiences that a comprehensive content management system can provide.

All this is why Acquia is excited about our partnership with Elastic Path. I believe the partnership is a win-win-win. It's a win for Acquia because we are now better equipped than ever to offer personal, unique and delightful shopping experiences. It is a win for Elastic Path as they have the opportunity to provide contextual commerce solutions to any Acquia customer. Last but not least, it's a win for Drupal because it will introduce more organizations to the project.

Note that many of the above integration challenges don't apply to native solutions like Drupal Commerce for Drupal or WooCommerce for WordPress. They only apply when you have to integrate two entirely different systems. Integrating two different systems is a common use case, because customers either already have a commerce platform in place that they don't want to replace, or because native solutions don't meet their needs.

Acquia's commitment to best of breed

Acquia remains committed to a best-of-breed strategy for commerce. There isn't a single commerce platform that meets the needs of all our customers. This belief comes from years of experience in the field. Acquia's customers want to integrate with a variety of commerce systems such as Elastic Path, SAP Hybris, Salesforce Commerce Cloud (Demandware), Magento, BigCommerce, Reaction Commerce, Oracle ATG, Moltin, and more. Our customers also want to use Drupal Commerce, Drupal's native commerce solution. We believe customers should be able to integrate Drupal with their commerce management solutions of choice.

October 30, 2018

Configuration management is an important feature of any modern content management system. Those following modern development best-practices use a development workflow that involves some sort of development and staging environment that is separate from the production environment.

Configuration management example

Given such a development workflow, you need to push configuration changes from development to production (similar to how you need to push code or content between environments). Drupal's configuration management system helps you do that in a powerful yet elegant way.

Since I announced the original Configuration Management Initiative over seven years ago, we've developed and shipped a strong configuration management API in Drupal 8. Drupal 8's configuration management system is a huge step forward from where we were in Drupal 7, and a much more robust solution than what is offered by many of our competitors.

All configuration in a Drupal 8 site — from one-off settings such as site name to content types and field definitions — can be seamlessly moved between environments, allowing for quick and easy deployment between development, staging and production environments.
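As a rough illustration of what that looks like in practice (this is my sketch, not part of the original post; it assumes Drush 9+, a site alias for production and a config sync directory tracked in Git, here assumed to be config/sync):

# On the development environment: export the active configuration to the sync directory
drush config:export -y

# Commit the exported YAML like any other code change
git add config/sync
git commit -m "Export configuration changes"

# On production, after deploying the code: import the configuration
drush @prod config:import -y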

However, now that we have a couple of years of building Drupal 8 sites behind us, various limitations have surfaced. While these limitations usually have solutions via contributed modules, it has become clear that we would benefit from extending Drupal core's built-in configuration management APIs. This way, we can establish best practices and standard approaches that work for all.

The four different focus areas for Drupal 8. The configuration management initiative is part of the 'Improve Drupal for developers' track.

I first talked about this need in my DrupalCon Nashville keynote, where I announced the Configuration Management 2.0 initiative. The goal of this initiative is to extend Drupal's built-in configuration management so we can support more common workflows out-of-the-box without the need of contributed modules.

What is an example of a workflow that is not currently supported out-of-the-box? Support for different configurations per environment. This is a valuable use case because some settings should not be enabled in every environment. For example, you most likely don't want to enable debugging tools in production.

Configuration management example

The contributed module Config Filter extends Drupal core's built-in configuration management capabilities by providing an API to support different workflows which filter out or transform certain configuration changes as they are being pushed to production. Config Split, another contributed module, builds on top of Config Filter to allow for differences in configuration between various environments.

The Config Split module's use case is just one example of how we can improve Drupal's out-of-the-box configuration management capabilities. The community created a longer list of pain points and advanced use cases for the configuration management system.

While the initiative team is working on executing on these long-term improvements, they are also focused on delivering incremental improvements with each new version of Drupal 8, and have distilled the most high-priority items into a configuration management roadmap.

  • In Drupal 8.6, we added support for creating new sites from existing configuration. This enables developers to launch a development site that matches a production site's configuration with just a few clicks (see the Drush sketch after this list).
  • For Drupal 8.7, we're planning on shipping an experimental module for dealing with environment specific configuration, moving the capabilities of Config Filter and the basic capabilities of Config Split to Drupal core through the addition of a Configuration Transformer API.
  • For Drupal 8.8, the focus is on supporting configuration updates across different sites. We want to allow both sites and distributions to package configuration (similar to the well-known Features module) so they can easily be deployed across other sites.
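As an illustration of that first item (again my own sketch, not from the original post, and assuming Drush 9+ with a populated config sync directory), installing a new site from existing configuration boils down to a single command:

# Install a fresh site whose configuration matches the exported config in the sync directory
drush site:install --existing-config -y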

How to get involved

There are many opportunities to contribute to this initiative and we'd love your help.

If you would like to get involved, check out the Configuration Management 2.0 project and various Drupal core issues tagged as "CMI 2.0 candidate".

Special thanks to Fabian Bircher (Nuvole), Jeff Beeman (Acquia), Angela Byron (Acquia), ASH (Acquia), and Alex Pott (Thunder) for contributions to this blog post.

Several tricks and heuristics that I apply to write easy-to-understand functions keep coming up when I look at other people’s code. This post outlines the second key principle. The first principle is Minimize State. Following posts will contain specific tips and tricks, often building on these two principles.

Do One Thing

Often functions become less readable because they are doing multiple things. If your function Foo needs to do tasks A, B and C, then create a function for each of the tasks and call those from Foo.
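For example, a trivial sketch of that shape (the function and task names here are made up):

function foo() {
    $this->doTaskA();
    $this->doTaskB();
    $this->doTaskC();
}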

Having small functions is OK. Having just three calls to other functions in your function is OK. Don’t be afraid of “too simple” code or of this style making things harder to follow (it does not, unless you are splitting things up stupidly). And don’t be afraid of performance. In most programs the number of function calls has effectively zero impact on performance. There are exceptions, though unless you know you are dealing with one of them, don’t mash things together for performance reasons.

One Level of Abstraction

Doing one thing includes dealing with a single level of abstraction. For instance, suppose you have a function in which in some cases a message needs to be logged to the file system.

function doThing() {
    if (condition) {
        low_level_write_api_call('write-mode', $this->somePath, 'Some error message');
    }
}

Here the details of how the error message is logged are on a lower level of abstraction than the rest of the function. This makes it harder to tell what the function is actually doing, because details that don’t matter on the higher level of abstraction clutter the high level logic. These details should be in their own function.

function doThing() {
    if (condition) {
        $this->log('Some error message');
    }
}


private function log(string $message) {
    low_level_write_api_call('write-mode', $this->somePath, $message);
}

See also

The post Readable Functions: Do One Thing appeared first on Entropy Wins.

October 29, 2018

MacOS Mojave, Apple's newest operating system, now features a Dark Mode interface. In Dark Mode, the entire system adopts a darker color palette. Many third-party desktop applications have already been updated to support Dark Mode.

Today, more and more organizations rely on cloud-based web applications to support their workforce; from Gmail to Google Docs, SalesForce, Drupal, WordPress, GitHub, Trello and Jira. Unlike native desktop applications, web applications aren't able to adopt the Dark Mode interface. I personally spend more time using web applications than desktop applications, so not having web applications support Dark Mode defeats its purpose.

This could change as the next version of Safari adds a new CSS media query called prefers-color-scheme. Websites can use it to detect if Dark Mode is enabled.

I learned about the prefers-color-scheme media query on Jeff Geerling's blog, so I decided to give it a try on my own website. Because I use CSS variables to set the colors of my site, it took less than 30 minutes to add Dark Mode support on dri.es. Here is all the code it took:

@media (prefers-color-scheme: dark) {
  :root {
    --primary-font-color: #aaa;
    --secondary-font-color: #777;
    --background-color: #222;
    --table-zebra-color: #333;
    --table-hover-color: #444;
    --hover-color: #333;
  }
}

If you use MacOS Mojave, Safari 12.1 or later, and have Dark Mode enabled, my site will be shown in black:

Dark mode on dri.es

It will be interesting to see if any of the large web applications, like Gmail or Google Docs will adopt Dark Mode. I bet they will, because it adds a level of polish that will be expected in the future.

It was just announced that IBM bought Red Hat for $34 billion in cash. Wow!

I remember taking the bus to the local bookstore to buy the Red Hat Linux 5.2 CD-ROMs. It must have been 1998. Ten years later, Red Hat acted as an inspiration for starting my own Open Source business.

While it's a bit sad to see the largest, independent Open Source company get acquired, it's also great news for Open Source. IBM has been a strong proponent and contributor to Open Source, and its acquisition of Red Hat should help accelerate Open Source even more. IBM has the ability to introduce Open Source to more organizations in a way that Red Hat never could.

Just a few weeks ago, I wrote that 2018 is a breakout year for Open Source businesses. The acquisition of Red Hat truly cements that, as this is one of the largest acquisitions in the history of technology. It's very exciting to see that the largest technology companies in the world are getting comfortable with Open Source as part of their mainstream business.

Thirty-four billion is a lot of money, but IBM had to do something big to get back into the game. Public cloud gets all the attention, but hybrid cloud is just now setting up for growth. It was only last year that both Amazon Web Services and Google partnered with VMware on hybrid cloud offerings, so IBM isn't necessarily late to the hybrid cloud game. Both IBM and Red Hat are big believers in hybrid cloud, so this acquisition makes sense and helps IBM compete with Amazon, Google and Microsoft in terms of hybrid cloud.

In short, this should be great for Open Source, it should be good for IBM, and it should be healthy for the cloud wars.

PS: I predict that Jim Whitehurst becomes IBM's CEO in less than five years.

October 28, 2018

Lately there has been a lot of hype around the typed properties that PHP 7.4 will bring. In this post I outline why typed properties are not as big of a game changer as some people seem to think and how they can lead to shitty code. I start with a short introduction to what typed properties are.

What Are Typed Properties

As of version 7.3, PHP supports types for function parameters and for function return values. Over the last few years many additions to PHP’s type system were made, such as primitive (scalar) types like string and int (PHP 7.0), return types (PHP 7.0), nullable types (PHP 7.1) and parameter type widening (PHP 7.2). The introduction of typed properties (PHP 7.4) is thus a natural progression.

Typed properties work as follows:

class User {
    public int $id;
    public string $name;
 
    public function __construct(int $id, string $name) {
        $this->id = $id;
        $this->name = $name;
    }
}
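As a quick illustration (mine, not from the original post) of what this buys you: the engine enforces the declared types on every assignment, and typed properties must be initialized before they are read.

$user = new User(1, 'Jane');
$user->id = 42;      // fine
$user->id = 'abc';   // throws a TypeError: a non-numeric string cannot be assigned to an int property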

You can do in two simple lines what takes a lot more boilerplate in PHP 7.3 or earlier. In these versions, if you want to have type safety, you need a getter and setter for each property.

    private $id;
    private $name;

    public function getId(): int {
        return $this->id;
    }
    public function setId(int $id): void {
        $this->id = $id;
    }
 
    public function getName(): string {
        return $this->name;
    }
    public function setName(string $name): void {
        $this->name = $name;
    }

Not only is it a lot more work to write all of these getters and setters, it is also easy to make mistakes when not automatically generating the code with some tool.

These advantages are what the hype is all about. People are saying it will save us from writing so much code. I think not, and I am afraid of the type of code those people will write using typed properties.

Applicability of Typed Properties

Let’s look at some of the different types of classes we have in a typical well-designed OO codebase.

Services are classes that allow doing something. Loggers are services, Repositories are services and LolcatPrinters are services. Services often need collaborators, which get injected via their constructor and stored in private fields. These collaborators are not visible from the outside. While services might have additional state, they normally do not have getters or setters. Typed properties thus do not save us from writing code when creating services and the added type safety they provide is negligible.

Entities (DDD term) encapsulate both data and behavior. Normally their constructors take a bunch of values, typically in the form of Value Objects. The methods on entities provide ways to manipulate these values via actions that make sense in the domain language. There might be some getters, though setters are rare. Having getters and setters for most of the values in your entities is an anti-pattern. Again typed properties do not save us from writing code in most cases.

Value Objects (DDD term) are immutable. This means you can have getters but not setters. Once again typed properties are of no real help. What would be really helpful however is a first-class Value Object construct part of the PHP language.

Typed properties are only useful when you have public mutable state with no encapsulation. (And in some cases where you assign to private fields after doing complicated things.) If you design your code well, you will have very little code that matches all of these criteria.

Going to The Dark Side

By throwing immutability and encapsulation out of the window, you can often condense code using typed properties. This standard Value Object …

class Name {
    private $firstName;
    private $lastName;
 
    public function __construct(string $firstName, string $lastName) {
        $this->firstName = $firstName;
        $this->lastName = $lastName;
    }
 
    public function getFirstName(): string {
        return $this->firstName;
    }

    public function getLastName(): string {
        return $this->lastName;
    }
}

… becomes the much shorter

class Name {
    public string $firstName;
    public string $lastName;
 
    public function __construct(string $firstName, string $lastName) {
        $this->firstName = $firstName;
        $this->lastName = $lastName;
    }
}

The same goes for Services and Entities: by giving up on encapsulation and immutability, you gain the ability to not write a few lines of simple code.

This trade-off might actually make sense if you are working on a small codebase on your own or with few people. It can also make sense if you create a throw-away prototype that you then actually throw away. For codebases that are not small and are worked on by several people, writing a few simple getters is a low price to pay for the advantages that encapsulation and immutability provide.

Conclusion

Typed properties marginally help with type safety and in some rare cases can help reduce boilerplate code. In most cases typed properties do not reduce the amount of code needed unless you throw the valuable properties of immutability and encapsulation out of the window. Due to the hype I expect many junior programmers to do exactly that.

The post PHP Typed Properties appeared first on Entropy Wins.

October 26, 2018

The use of a wall of sheep (or WoS) is a nice way to raise the security awareness of your audience. A WoS is a tool used to demonstrate what can happen when users connect to a wild network without a minimum level of security. The non-encrypted traffic is analyzed and evidence of bad behaviour is displayed publicly (mainly logins and passwords). To increase the impact, images can also be extracted from the TCP flows and displayed to everybody. We have been running a WoS at the BruCON security conference for a few years:

Wall of Sheep

Most of the attendees are aware of it and, honestly, we have been collecting fewer credentials over the last few years. There are three reasons for this:

  1. People are more aware of the risks of connecting to a “wild” network at a security conference. This is very good news!
  2. People don’t connect to the wireless network. Most of our attendees come from Europe and, with the end of data roaming charges, they aren’t afraid to use their 4G plan while abroad.
  3. Most online services are now encrypted. The rise of services like Let’s Encrypt is a key point here: most of the traffic today is encrypted.

But the WoS remains very popular because attendees like to play with it and try to display funny pictures on it. While some people try to promote their company by flooding the WoS with its logo, most people try to display p0rn0graphic pictures. Even though the majority of our attendees are men, we like to welcome women and encourage them to join security conferences. That’s why one goal of the network team is to not hurt them, by fighting against p0rn on the WoS!

The first version of the WoS that we found on the web was written in Python and we quickly added a “skin colour filter”. The idea is simple: collected pictures are analyzed and their colour map is extracted; if a picture contains more than x% of skin colour, it is not displayed (a minimal sketch of this idea follows the list below). Nice, but what do hackers do when you try to block them? They find new ways to flood the WoS with more funny pictures. So we started to see waves of:

  • Smurf or Avatar p0rn (images containing a lot of blue pixels)
  • Hulk p0rn (mainly green pixels)
  • … (name your preferred colour)
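
For the curious, here is a minimal sketch of the skin colour idea in Python (using Pillow); the RGB thresholds and the 30% cut-off are illustrative assumptions, not the actual BruCON filter:

from PIL import Image

def skin_ratio(path, sample_size=(64, 64)):
    """Return the fraction of pixels that fall within a rough skin-tone range."""
    img = Image.open(path).convert("RGB").resize(sample_size)
    pixels = list(img.getdata())

    def is_skin(rgb):
        r, g, b = rgb
        # Very rough RGB heuristic for skin tones.
        return r > 95 and g > 40 and b > 20 and r > g and r > b and abs(r - g) > 15

    return sum(1 for p in pixels if is_skin(p)) / len(pixels)

# Example: block the picture if more than 30% of its pixels look like skin.
if skin_ratio("capture.jpg") > 0.30:
    print("blocked")
else:
    print("displayed")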

To be honest, the funniest was “furniture p0rn”… Just don’t ask how they found this…

Furniture p0rn

Later, the developer of the WoS decided to rewrite it in Node.js. It was not easy to re-implement the “skin colour filter”, so we had to find an alternative. Fortunately, the developer added support for a feature called ‘NSFW’ which was not really documented. After some investigation, we found a project released as open source by Yahoo: “The Not Suitable for Work classification deep neural network” or, in short, “NSFW”. More information can be found in a blog post. The framework is based on CaffeOnSpark, which brings deep learning to Hadoop and Spark clusters.

The installation process is not very easy because the code relies on many libraries (I think that in total I had to install more than 1GB of dependencies). Once installed, NSFW can be started as a daemon that listens on a port, waiting for submissions. What does it do?

  1. It receives pictures from the WoS
  2. Pictures are resized to the size expected by the Yahoo! model
  3. Pictures are submitted for analysis
  4. The “score” is returned to the WoS

Here is a sample of the WoS log with a wave of bad pictures:

Driftnet: 8545fee5449bfbb62910c2f17a226aba.jpg created from DRIFTNET with nsfw score of 21.743951737880707
Driftnet: c169deab61a6f2b01aaacd0902456cb6.jpg created from DRIFTNET with nsfw score of 0.06645353860221803
Driftnet: 5e8e9e369b4d3de92924cd8f9d7754af.jpg created from DRIFTNET with nsfw score of 1.8503639847040176
Driftnet: 8fbf475267bdcb6557996fa6a58172e2.jpg created from DRIFTNET with nsfw score of 2.0092489197850227
Driftnet: ffee8ded38319084341fb2631a866161.jpg created from DRIFTNET with nsfw score of 2.17214897274971
Driftnet: 9b106349f6f1dbf128f8ab2bb76ad2d6.jpg created from DRIFTNET with nsfw score of 1.8500618636608124
Driftnet: 24326cdd7e0785960befc0594c6b2195.jpg created from DRIFTNET with nsfw score of 32.450902462005615
Driftnet: dfb5a1ac2537fe29b46c7376b1ae1307.png created from DRIFTNET with nsfw score of 0.38929139263927937
Driftnet: ed7671a42d45c58382a1274002768692.jpg created from DRIFTNET with nsfw score of 1.3844704255461693
Driftnet: 0f409595e35e42c76fdafcdc607f70e8.jpg created from DRIFTNET with nsfw score of 30.140525102615356
Driftnet: da44c46c99d2bb2c17fb85041769e638.jpg created from DRIFTNET with nsfw score of 61.361753940582275
Driftnet: 225524f6d09e7b76bfb3538c23b17754.jpg created from DRIFTNET with nsfw score of 1.0479548946022987
Driftnet: 49c46d62651ebd15a320e26c2feeb678.jpg created from DRIFTNET with nsfw score of 46.900102496147156
Driftnet: f5859183fc621cdacc2d323eae7503a3.jpg created from DRIFTNET with nsfw score of 11.838868260383606
Driftnet: 64e526cdf2bea9c80c95c39cdc20e654.jpg created from DRIFTNET with nsfw score of 61.6860032081604
Driftnet: b35c23ae6c6ded481f68a9759e4ce2fa.jpg created from DRIFTNET with nsfw score of 42.94847548007965
Driftnet: 3effdac3157fcd0fcd9e71679f248f5f.jpg created from DRIFTNET with nsfw score of 88.63767981529236
Driftnet: 9f01e91aae2b17e1ca755beeb5dfd5e2.jpg created from DRIFTNET with nsfw score of 99.12621974945068
Driftnet: 609164070904483972de4b75f3239c29.jpg created from DRIFTNET with nsfw score of 88.15133571624756
Driftnet: 97e4f04647c601ecfeb449304d545ebb.jpg created from DRIFTNET with nsfw score of 99.12621974945068
Driftnet: df13ced5cc054addcd9448e78cf2415a.jpg created from DRIFTNET with nsfw score of 5.358363315463066
Driftnet: db022beb1d447d56f21aa93dd4335bd5.jpg created from DRIFTNET with nsfw score of 68.98346543312073
Driftnet: a32821c97bf4563ad8db5cf20903ea5b.jpg created from DRIFTNET with nsfw score of 97.75400161743164
Driftnet: 8a47b1a56916db2cb79d0d43f09c26b5.jpg created from DRIFTNET with nsfw score of 62.90503144264221
Driftnet: 8b78d63137a9e141c36eceb2b61e3c6d.jpg created from DRIFTNET with nsfw score of 39.26731050014496
Driftnet: 0f45686ff31a8b58764635948665a5d5.jpg created from DRIFTNET with nsfw score of 0.6589059717953205
Driftnet: bd39aab8ee9112c773a6863ab59e8a1e.jpg created from DRIFTNET with nsfw score of 99.40871000289917
Driftnet: 7a1a95f4fa4511f202f19a6410d44d1c.jpg created from DRIFTNET with nsfw score of 1.829291507601738
Driftnet: 131b3701695a9c0e04d900b0c06373ba.jpg created from DRIFTNET with nsfw score of 0.6589059717953205
Driftnet: ac18617d741377f18a4c8978f0f085de.jpg created from DRIFTNET with nsfw score of 2.146361581981182
Driftnet: 61d3c8aa05b275774f4cead727bf8bb6.jpg created from DRIFTNET with nsfw score of 35.02090573310852

We ran some tests to fine-tune the threshold and finally fixed it at 20 (most of the pictures with a score >20 were indeed very bad). The filter was quite effective and some people were disappointed by it (especially Zoz ;-))

About the network, here are some statistics based on the 3 days:

  • 112784 pictures collected (only via clear-text protocols)
  • 403 Gigabytes of traffic was inspected
  • 880 unique devices connected to the network (MAC addresses)
  • 1 rogue access point spoofed the WiFi for a few minutes (security incident)

For fun, we detected that a lot of people were using browsers configured with proxy auto-configuration, performing DNS requests for ‘wpad’, ‘wpad.local’ or ‘wpad.<company>’. As we are kind people, when you ask us for something, we are happy to provide it. We delivered a wpad.dat file:

function FindProxyForURL(url, host) {
  if (isInNet(dnsResolve(host), "172.16.0.0", "255.240.0.0") ||
      isInNet(dnsResolve(host), "192.168.0.0", "255.255.0.0") ||
      isInNet(dnsResolve(host), "127.0.0.0", "255.255.255.0")) {
    return "DIRECT";
  }
  if (isInNet(host,"192.168.0.0", "255.255.0.0") ||
      isInNet(host,"172.16.0.0", "255.255.240.0") {
    return "DIRECT";
  }
  return "PROXY 192.168.10.20:3128";
}

The HTTP traffic was proxied by us (nothing malicious). More than 120K pages were requested via our proxy.

See you again next year at BruCON with our WoS!

[Edited]
Some readers asked me for details about the tool behind the WoS. It is called Dofler (“Dashboard of Fail”) and was developed by Steve McGrath (the code is available here).

[The post Post-BruCON Experience – Running a Wall of Sheep in the Wild has been first published on /dev/random]

It has now been three weeks since I started my disconnection, three weeks full of surprises. Three weeks so surprising that I find it worth sharing with you "how" I disconnected and exactly what I put in place. Because there is no secret: to reach your goals, you have to build yourself tools.

1. Becoming aware.

Before anything else, you need to become aware of the problem you want to solve. Accept it. Want to change things. Be motivated. If you see no problem with your current use of the Internet, there is no reason to change it.

Technology can only help and support motivation. On its own, it is useless. You can install 15 apps to track the time you spend online or to block access to your browser for 25 minutes, but you still have to use them. And that use must not be counter-productive: if a pseudo-productive 25-minute session lets you reward yourself by spending the rest of the day slacking off, the problem is not solved. Perhaps it is necessary to go back to the root of the problem rather than covering it up.

2. Set the rules of your disconnection.

Even though they will inevitably change and be adapted, it is important to clarify your rules. For Thierry Crouzet, the rule was "no direct Internet access", but he did not refrain from watching television, reading newspapers or asking his wife to download what he needed.

For me the initial rules were:

  • no social networks
  • no news sites
  • no sites or apps with an infinite "feed"
  • no sites or apps with advertising

In the end, these rules quickly evolved into:

  • No site or app I visit without knowing what I am looking for. So no "discovery".
  • No site or app that feeds me content.
  • No television, no radio, no media of any kind. But that is a rule I have been following for years.

The Refind app, which offers the best links shared by your Twitter contacts, was thus dropped on the second day even though I had expected it to be an important tool of my disconnection. To my great surprise, Pocket also disappeared from my phone. I keep it on my Kobo, but since it is no longer fed with content, I hardly ever use it. I am thinking about reintroducing an RSS reader on my computer only (suggestions of feeds to follow are welcome; I do not want news but, on the contrary, new and different reflections).

3. Setting up a technical blocking solution

It is essential to understand that human willpower is limited. Setting rules is one thing, but it is an illusion to trust yourself to "follow the rules". Habits die hard, and I still sometimes open a browser and type the URL of a forbidden site without thinking, purely by reflex.

On Facebook, I had already had my own system for several months, inspired by Pierre Valade. The idea is to unfollow absolutely all of your Facebook friends and all the pages you like (in fact, you can unlike absolutely everything; likes only serve to profile you for advertising. Yes, you can even remove the like you kindly left on my own page, I am that suicidal!). This will make your feed completely empty while preserving your friendships. Beware: for this to work, you must unfollow everybody and do it again for every new friend. A handful of friends is enough for Facebook to have enough material to fill your feed. Once that is done, you can uninstall the Facebook app and use Messenger Lite to chat.

The beauty of this solution is that you are officially on Facebook, but when somebody mentions something to you, you answer that you did not see it, and everybody finds that normal because it is obvious that nobody can see everything.

But if, like me, you opt for total disconnection, this step is not even necessary (it can however be seen as a temporary step to soften the disconnection).

On the day you choose for the big jump, log out of every one of your accounts! I made the mistake of not doing it immediately, so the first time I lifted the blocking for a good reason (such as looking up the address of a restaurant that only has a Facebook page), the notifications jumped in my face.

For the blocking itself, I chose Adguard, because it is available on Android and MacOS, with the following block list:

||dhnet.be^$empty,important
||lalibre.be^$empty,important
||9gag.com^$empty,important
||facebook.com^$empty,important
||lesoir.be^$empty,important
||rtbf.be^$empty,important
||lavenir.net^$empty,important
||sudpresse.be^$empty,important
||linkedin.com^$empty,important
||twitter.com^$empty,important
||mamot.fr^$empty,important
||macrumors.com^$empty,important
||numerama.com^$empty,important
||plus.google.com^$empty,important
||joindiaspora.org^$empty,important

I started by blocking a site I never visited (dhnet.be) as a test, then I added those I visited regularly (note that the list is in no particular order). During the first three days, this list grew a little. But it remains remarkably short! It goes without saying that the corresponding apps are also uninstalled from my smartphone.

The only block I gave up on is medium.com, because many legitimate sites use it and I constantly had to disable Adguard to reach a search result. But I logged out and I never visit Medium directly; I only end up there when a link I follow is hosted on it.

Because, yes, I allow myself to turn off my filtering if the information I am looking for requires it. The goal is not to arbitrarily "deprive" myself but to use the Internet with full awareness of what I want to get out of it. Having to turn off Adguard forces me to be conscious of what I am doing.

4. Configuring the computer and the phone.

After a few days of disconnection, I also felt the need to change my smartphone's interface to make it less "sexy", so that I no longer get pleasure from whipping it out and mechanically looking for the Twitter icon. So out with colour and pretty icons. The AIO launcher turned out to be absolutely perfect. By the way, if you know a way to achieve this kind of result on an iPhone, I am open to any advice. Having colour on my screen now feels extremely aggressive to me.

This is what I now see when I unlock my phone. And if the app I want is not in my shortcuts (towards the bottom of the screen), I have to search for it by typing its name. No way to launch an app without thinking!

I enabled notifications for the following applications: mail, Whatsapp, Signal and calendar. I had initially disabled mail notifications but, as a result, every time I looked at my phone I launched the app. Now, a quick glance lets me see whether there is something to deal with. Everything else is, in theory, disabled. Thanks to a hardware switch, the phone is permanently on silent unless I am expecting a call. So I do not put my phone on silent; it is the other way around: I put it in noisy mode when I need it. Direct consequence: I only see notifications on the screen, and only when I deliberately look at it. They do not interrupt me.

The calendar and phone calls are allowed to make my watch vibrate and thus to interrupt me.

On my computer, I also cleaned up the quick launch bar, leaving only the calendar and my todo list. For everything else, I have to type the name of the application in Alfred, which forces me to think instead of reflexively clicking on an icon. As a result, I only reach my browser by typing a DuckDuckGo search in Alfred.

Alfred, my search launcher
My icon bar

By the way, my computer is permanently in Do Not Disturb mode. I know that MacOS does not allow this mode to be permanent, but it does let you set a time range. Mine runs from 4am to 3am. In theory, between 3am and 4am I will receive notifications, but I am rarely in front of my computer at that time.

Do not disturb

This disconnection also came with a pressing need for minimalism. So I uninstalled a huge number of apps and programs. Out with all tracking (except Strava) and statistics tools. RescueTime, for example, turned out to be particularly devious: after only a few days of use, I had developed the reflex of checking my "productivity" statistics and of opening so-called "productive" apps on my screen while I was staring into space thinking. In short, the exact opposite of what I was looking for.

5. About emails

I have been applying the Inbox 0 method for years and unsubscribe from anything containing an "unsubscribe" link. By definition, if there is a link to unsubscribe, it is not critical and I can do without it. I also have a filter that sends straight to the archives any mail where I am not explicitly in the "to" field. As an anecdote, the university where I teach produces an unbelievable quantity of newsletters and mailing lists that are impossible to unsubscribe from. There is even a whole team whose only job is to produce these newsletters. Before my automatic filter, my days were a permanent battle between the team in charge of filling my inbox and me, trying to empty it and to find the right filters to keep only what interested me. I ended up winning, even if they do not know it yet.

I also send an automated reply (via an Alfred snippet) demanding my removal from any "marketing" mailing (citing the GDPR), even personalised ones. I take the opportunity to ask where my address came from, which allowed me to trace it back to the source and get myself removed from several databases (the most annoying being a certain « hors antenne »). I now do the same with paper mailings.

With my disconnection, I simply decided to be even stricter and to systematically apply these rules that I was already following, but somewhat laxly. In three weeks, my inbox has calmed down incredibly, to the point that I can now go several days without "processing my mail".

So yes, this extreme system carries the risk of making me "miss" something. But the whole experiment lies precisely in no longer being afraid of missing something. Before, I missed 100 pieces of news a day; today I may miss 150. The universe always ends up finding a way to warn us about what really matters. In any case, the excuse "Oh no, I did not see that email go by. It must be in my spam" is now universally accepted.

6. Posting on social networks

My particularity in this experiment is that I am a producer of web content, and that the majority of my audience comes from social networks. So I cannot simply stop posting. Well, I could, but I do not want to.

The best-known solution for posting to several social networks is Buffer, but the subscription is quite expensive and the interface far too complex for my meagre needs. I discovered the alternative Friends+Me thanks to Balise. It is much less polished but a lot simpler. The free plan lets me post to my Facebook page and to Twitter, which also sends the post to Mastodon.

So I post to these networks, but with a delay (the time is scheduled in advance), and I do not see the reactions, the replies or the shares. In short, nothing at all.

Diaspora, Google+ and Linkedin are abandoned, as is my personal Facebook account. Anyway, between us, it is not as if these networks mattered. As an anecdote, I had to look something up on Linkedin and, since I had forgotten to log out, a chat popup jumped in my face with a dozen new messages.

Most of them were congratulations for some vague work anniversary (is that really something to celebrate?) and came from people I did not know or from ex-colleagues I have not seen in a good decade. The other messages were trying to get me to advertise obscure blockchain-related projects or were offering me job interviews to become an underpaid developer at large banking institutions. Linkedin really is its own parody.

7. Making a commitment

Commit to your relatives and your contacts; make your disconnection public, with a start date and an end date. This commitment can serve as a moral safeguard. Having an end date is also a good objective for testing things in depth. Without an end date, it would be easy to say "this does not suit me" after two weeks. Several months seems like a minimum to me. I personally set 31 December because it was an easy symbol and it gave the experiment a duration of three months.

Making the news public, especially on the networks where you are usually active, allows your contacts not to worry, and even to contact you about what they consider important.

I have already received several emails from close friends, "virtual acquaintances" or readers of my blog with a link to an article that "should interest me". In every case it turned out to be interesting, and I was very touched by this direct exchange. Perhaps because I also had the mental space to appreciate the message rather than seeing it as "one more email to process".

In the end…

Of course, these solutions have their limits. They only apply to my particular case (I would never have given up social networks if I had to clock in eight hours a day in a dreary open-plan office. Social networks are the perfect escape for looking focused on your computer). Moreover, these rules have only been in place for three weeks and still have to be tested over time.

But questioning your use of technology and your dependence on it is always a healthy exercise. The little system I have just shared with you seems to me an excellent starting point for those who suffer from the same problems as I do and would like to try the same kind of solution.

Between us, I already tried to detox from Facebook exactly eight years ago. Rereading that article, I see that nothing has changed. Being a candidate in the elections made me relapse.

Will this attempt be the right one? Place your bets!

Photo by Kyle Glenn on Unsplash

I am @ploum, a disconnected speaker and electronic writer paid on a pay-what-you-want basis via Tipeee, Patreon, Paypal, Liberapay or in millibitcoins at 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE license.

I published the following diary on isc.sans.edu: “Dissecting Malicious Office Documents with Linux”:

A few months ago, Rob wrote a nice diary to explain how to dissect a (malicious) Office document (.docx). The approach was to use the OpenXML SDK with Powershell. This is nice but how to achieve the same on a Linux system? One of our readers (thanks Mike!) provided us with the steps to perform the same kind of analysis but on a Kali instance (replace Kali with your preferred distribution)… [Read more]

[The post [SANS ISC] Dissecting Malicious Office Documents with Linux has been first published on /dev/random]

October 25, 2018

Mateu, Gabe and I just released the first RC of JSON API 2, so time for an update!

It’s been three months since the previous “state of JSON API” blog post, where we explained why JSON API didn’t get into Drupal 8.6 core.

What happened since then? In a nutshell:

  • We’re now much closer to getting JSON API into Drupal core!
  • JSON API 2.0-beta1, 2.0-beta2 and 2.0-rc1 were released
  • Those three releases span 84 fixed issues. (Not counting support requests.)
  • includes are now 3 times faster, 4xx responses are now cached!
  • Fixed all spec compliance issues mentioned previously
  • Zero known bugs (the only two open bugs are core bugs)
  • Only 10 remaining tasks (most of which are for test coverage in obscure cases)
  • ~75% of the open issues are feature requests!
  • ~200 sites using the beta!
  • Also new: JSON API Extras 2.10, works with JSON API 1.x & 2.x!
  • Two important features are >80% done: file uploads & revisions (they will ship in a release after 2.0)

So … now is the time to update to 2.0-RC1!

JSON API spec v1.1

We’ve also helped shape the upcoming 1.1 update to the JSON API spec, which we especially care about because it allows a JSON API server to use “profiles” to communicate support for capabilities outside the scope of the spec. 1

Retrospective

Now that we’ve reached a major milestone, I thought it’d be interesting to do a small retrospective using the project page’s sparklines:

JSON API project statistics, annotated with green vertical lines for the start of 2018 and the time of the previous blog post.

The first green line indicates the start of 2018. Long-time Drupal & JSON API contributor Gabe Sullice joined Acquia’s Office of the CTO two weeks before 2018 started. He was hired specifically to help push forward the API-First initiative. Upon joining, he immediately started contributing to the JSON API module, and I joined him shortly thereafter. (Yes, Acquia is putting its money where its mouth is.)
The response rate for this module has always been very good, thanks to original maintainer Mateu “e0ipso” Aguiló Bosch working on it quite a lot in his scarce free time. (And some company time — thanks Lullabot!) But there’s of course a limit to how much of your free time you can contribute to open source.

  • The primary objective for Gabe and I for most of 2018 has been to get JSON API ready to move into Drupal core. We scrutinized every area of the existing JSON API module, filed lots of issues, minimized the API surface, maximized spec compliance (hence also minimizing Drupalisms), minimized potential for regressions to occur, and so on. This explains the significantly elevated rate of the new issues sparkline. It also explains why the open bugs sparkline first increased.
  • This being our primary objective also explains the response rate sparkline being at 100% nearly continuously. It also explains the plummeted average first response time: it went from days to hours! This surely benefited the sites using JSON API: bug fixes happened much faster.
  • By the end of June, we managed to make the 1.x branch maximally stable and mature in the 1.22 release (shortly before the second green vertical line) — hence the “open bugs” sparkline decreased. The remaining problems required BC breaks — usually minor ones, but BC breaks nonetheless! The version of JSON API that ends up in core needs to be as future proof as possible: BC breaks are not acceptable in core. 2 Hence the need for a 2.x branch.

Surely the increased development rate has helped JSON API reach a strong level of stability and maturity faster, and I believe this is also reflected in its adoption: a 50–70 percent increase since the end of 2017!

From 1 to 3 maintainers

This was the first time I’ve worked so closely and so actively on a small codebase in an open-source setting. I’ve learned some things.

Some of you might understandably think that Gabe and I steamrolled this module. But Mateu is still very actively involved, and every significant change still requires his blessing. Funded contributions have accelerated this module’s development, but neither Acquia nor Lullabot ever put any pressure on how it should evolve. It’s always been the module maintainers, through debate (and sometimes heartfelt concessions), who have moved this module forward.

The “participants” sparkline being at a slightly higher level than before (with more consistency!) speaks for itself. Probably more importantly: if you’re wondering how the original maintainer Mateu feels about this, I’ll be perfectly honest: it’s been frustrating at times for him — but so it has been for Gabe and me — for everybody! Differences in availability, opinion, priorities (and private life circumstances!) all have effects. When we disagree, we meet face to face to chat about it openly.

In the end I still think it’s worth it though: Mateu has deeper ties to concrete complex projects, I have deeper ties to Drupal core requirements, and Gabe sits in between those extremes. Our discussions and disagreements force us to build consensus, which makes for a better, more balanced end result! And that’s what open source is all about: meeting the needs of more people better :)

Thanks to Mateu & Gabe for their feedback while writing this!


  1. The spec does not specify exactly how filtering and pagination should work, so the Drupal JSON API implementation has to define how it handles them. ↩︎

  2. I’ve learned the hard way how frustratingly sisyphean it can be to stabilize a core module where future evolvability and maintainability were not fully thought through. ↩︎

October 24, 2018

I published the following diary on isc.sans.edu: “Diving into Malicious AutoIT Code”:

Following my yesterday diary, I had a deeper look at the malicious AutoIT script dropped in my sandbox. For those who are not aware of AutoIT, it is a BASIC-like scripting language designed for automating Windows tasks. If scripts can be very simple, they can also interact with any feature of the operating system… [Read more]

 

[The post [SANS ISC] Diving into Malicious AutoIT Code has been first published on /dev/random]

The cover of the Decoupled Drupal book

Drupal has evolved significantly over the course of its long history. When I first built the Drupal project eighteen years ago, it was a message board for my friends that I worked on in my spare time. Today, Drupal runs two percent of all websites on the internet with the support of an open-source community that includes hundreds of thousands of people from all over the world.

Today, Drupal is going through another transition as its capabilities and applicability continue to expand beyond traditional websites. Drupal now powers digital signage on university campuses, in-flight entertainment systems on commercial flights, interactive kiosks on cruise liners, and even pushes live updates to the countdown clocks in the New York subway system. It doesn't stop there. More and more, digital experiences are starting to encompass virtual reality, augmented reality, chatbots, voice-driven interfaces and Internet of Things applications. All of this is great for Drupal, as it expands its market opportunity and long-term relevance.

Several years ago, I began to emphasize the importance of an API-first approach for Drupal as part of the then-young phenomenon of decoupled Drupal. Now, Drupal developers can count on JSON API, GraphQL and CouchDB, in addition to a range of surrounding tools for developing the decoupled applications described above. These decoupled Drupal advancements represent a pivotal point in Drupal's history.

Decoupled Drupal sites: a few examples of organizations that use decoupled Drupal.

Speaking of important milestones in Drupal's history, I remember the first Drupal book ever published in 2005. At the time, good information on Drupal was hard to find. The first Drupal book helped make the project more accessible to new developers and provided both credibility and reach in the market. Similarly today, decoupled Drupal is still relatively new, and up-to-date literature on the topic can be hard to find. In fact, many people don't even know that Drupal supports decoupled architectures. This is why I'm so excited about the upcoming publication of a new book entitled Decoupled Drupal in Practice, written by Preston So. It will give decoupled Drupal more reach and credibility.

When Preston asked me to write the foreword for the book, I jumped at the chance because I believe his book will be an important next step in the advancement of decoupled Drupal. I've also been working with Preston So for a long time. Preston is currently Director of Research and Innovation at Acquia and a globally respected expert on decoupled Drupal. Preston has been involved in the Drupal community since 2007, and I first worked with him directly in 2012 on the Spark initiative to improve Drupal's editorial user experience. Preston has been researching, writing and speaking on the topic of decoupled Drupal since 2015, and had a big impact on my thinking on decoupled Drupal, on Drupal's adoption of React, and on decoupled Drupal architectures in the Drupal community overall.

To show the value that this book offers, you can read exclusive excerpts of three chapters from Decoupled Drupal in Practice on the Acquia blog and at the Acquia Developer Center. It is available for preorder today on Amazon, and I encourage my readers to pick up a copy!

Congratulations on your book, Preston!

Several tricks and heuristics that I apply to write easy-to-understand functions keep coming up when I look at other people’s code. In this post I share the first of two key principles for writing readable functions. Follow-up posts will contain the second key principle and specific tricks, often building on these general principles.

What makes functional programming so powerful? Why do developers that have mastered it say it makes them so much more productive? What amazing features or capabilities does the functional paradigm provide to enable this enhanced productivity? The answer is not what you might expect if you never looked into functional programming. The power of the functional paradigm does not come from new functionality, it comes from restricting something we are all familiar with: mutable state. By minimizing or altogether avoiding mutable state, functional programs skip a great source of complexity, thus becoming easier to understand and work with.

Minimize Mutability

If you are doing Object Oriented Programming you are hopefully aware of the drawbacks of having mutable objects. Similar drawbacks apply to mutable state within function scope, even if those functions are part of a procedural program. Consider the below PHP code snippet:

function getThing() {
    $thing = 'default';
    
    if (someCondition()) {
        $thing = 'special case';
    }
    
    return $thing;
}

This function is needlessly complex because of mutable state. The variable $thing is in scope in the entire function and it gets modified. Thus to understand the function you need to keep track of the value that was assigned and how that value might get modified/overridden. This mental overhead can easily be avoided by using what is called a Guard Clause:

function getThing() {
    if (someCondition()) {
        return 'special case';
    }
    
    return 'default';
}

This code snippet is easier to understand because there is no state. The less state, the fewer things you need to remember while simulating the function in your head. Even though the logic in these code snippets is trivial, you can already notice how the Accidental Complexity created by the mutable state makes understanding the code take more time and effort. It pays to write your functions in a functional manner even if you are not doing functional programming.

Minimize Scope

While mutable state is particularly harmful, non-mutable state also comes with a cost. What is the return value of this function?

function getThing() {
    $foo = 1;
    $bar = 2;
    $baz = 3;

    $meh = $foo + $baz * 2;
    $baz = square($meh);

    print($baz);
    return $bar;
}

It is a lot easier to tell what the return value is when refactored as follows:

function getThing() {
    $foo = 1;
    $baz = 3;

    $meh = $foo + $baz * 2;
    $baz = square($meh);
    print($baz);

    $bar = 2;
    return $bar;
}

To understand the return value you need to know where the last assignment to $bar happened. In the first snippet you, for no reason at all, need to scan up all the way to the first lines of the function. You can avoid this by minimizing the scope of $bar. This is especially important if, like in PHP, you cannot declare function scope values as constants. In the first snippet you likely spotted $bar = 2 early on, but you still had to go through the irrelevant details that follow to make sure it was not reassigned. If instead the code had been const bar = 2, as you can do in JavaScript, you would not have needed to make that effort.

Conclusion

With this understanding we arrive at two guidelines for the state in functions that you cannot avoid altogether in the first place. Thou shalt:

  • Minimize mutability
  • Minimize scope

Indeed, these are two very general directives that you can apply in many other areas of software design. Keep in mind that these are just guidelines that serve as a starting point. Sometimes a little state or mutability can help readability.

To minimize scope, create it as close as possible to where it is needed. The worst thing you can do is declare all state at the start of a function, as this maximizes scope. Yes, I’m looking at you JavaScript developers and university professors. If you find yourself in a team or community that follows the practice of declaring all variables at the start of a function, I recommend not going along with this custom because its harmful nature outweighs “consistency” and “tradition” benefits.

To minimize mutability, stop every time you are about to override a variable and ask yourself if you cannot simplify the code. The answer is nearly always that you can, via tricks such as Guard Clauses, many of which I will share in follow-up posts. I myself rarely end up mutating variables, less than once per 1000 lines of code. Because each removal of harmful mutability makes your code easier to work with, you reap the benefits incrementally and can start applying this style right away. If you are lucky enough to work with a language that has constants in function scopes, use them as the default instead of variables.

Thanks to Gabriel Birke for proofreading and making some suggestions.

The post Readable Functions: Minimize State appeared first on Entropy Wins.

October 23, 2018

I published the following diary on isc.sans.edu: “Malicious Powershell using a Decoy Picture“:

I found another interesting piece of malicious Powershell while hunting. The file size is 1.3MB and most of the file is a PE file Base64 encoded. You can immediately detect it by checking the first characters of the string… [Read more]

[The post [SANS ISC] Malicious Powershell using a Decoy Picture has been first published on /dev/random]

While Acquia's global headquarters are in Boston, we have fourteen offices all over the world. Last week, we moved our European headquarters to a larger space in Reading, UK. We've been steadily growing in Europe, and the new Reading office now offers room for 120 employees. It's an exciting move. Here are a few photos:

Reading office
Reading office
Reading office
Reading office

October 22, 2018

I have just added Sea Island support to flashrom. This puts the R600 style SPI interface behind a pair of indexed registers, but otherwise is highly compatible.

While only tested on the first Sea-Island type SPI interface (Bonaire), support should work for everything up to and including Polaris, for a grand total of 240 devices supported today.

Bonaire was the introduction of this relocated register interface. Sadly there was an issue with this chip. There are 256 bytes of SPI data registers, but registers 0xD8 through 0xE8 are shifted up by 0x10. There's an ugly work-around for that, but it works fine and the bit of extra CPU overhead is not an issue with a slowish interface like SPI. This shift was fixed for the next ASIC though, and Hawaii no longer needs that workaround.

As for my ordered hw...
* I still need to get a Tahiti/Hainan or an Oland, so there are a few dozen Southern Islands cards still to be added.
* The HD4650 I ordered turned out to be an HD2600. The seller mistook one Club3D card for another with the same passive cooler. He did not even bother to compare the connectors :(
* I did get a passive X300 (rv370) and an X1300 (rv515), which I will start REing now, so that I can extend flashrom support to all PCIe devices.

As always, the code is available here.

October 20, 2018

October 18, 2018

For the last 15 years, I've been using floats for laying out web pages on dri.es. This approach to layout involves a lot of trial and error, including hours of fiddling with widths, max-widths, margins, absolute positioning, and the occasional calc() function.

I recently decided it was time to redesign my site and to go all-in on CSS Grid and Flexbox. I had never used them before but was surprised by how easy they were to use. After all these years, we finally have a good CSS layout system that eliminates all the trial and error.

I don't usually post tutorials on my blog, but decided to make an exception.

What is our basic design?

The overall layout of the homepage for dri.es is shown below. The page consists of two sections: a header and a main content area. For the header, I use CSS Flexbox to position the site name next to the navigation. For the main content area, I use CSS Grid Layout to lay out the article across 7 columns.

Css page layout

Creating a basic responsive header with Flexbox

Flexbox stands for the Flexible Box Module and allows you to manage "one-dimensional layouts". Let me explain that further using a real example.

Defining a flex container

First, we define a simple page header in HTML:
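
Something along these lines will do — the important part is that the site name and the navigation are the two direct children of the #header container (the names and links are purely illustrative):

<header id="header">
  <div class="site-name">dri.es</div>
  <nav>
    <a href="/blog">Blog</a>
    <a href="/photos">Photos</a>
    <a href="/about">About</a>
  </nav>
</header>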

To turn this into a Flexbox layout, simply give the container the following CSS property:

#header {
  display: flex;
}

By setting the display property to flex, the #header element becomes a flex container, and its direct children become flex items.

Css flexbox container vs items

Setting the flex container's flow

The flex container can now determine how the items are laid out:

#header {
  display: flex;
  flex-direction: row;
}

flex-direction: row; will place all the elements in a single row:

Css flexbox direction row

And flex-direction: column; will place all the elements in a single column:

Css flexbox direction column

This is what we mean by a "one-dimensional layout". We can lay things out horizontally (row) or vertically (column), but not both at the same time.

Aligning a flex item

#header {
  display: flex;
  flex-direction: row;
  justify-content: space-between;
}

Finally, the justify-content property is used to horizontally align or distribute the Flexbox items in their flex container. Different values exist, such as flex-start, center, space-between and more; here, justify-content: space-between maximizes the space between the site name and the navigation.

Css flexbox justify content

Making a Flexbox container responsive

Thanks to Flexbox, making the navigation responsive is easy. We can change the flow of the items in the container using only a single line of CSS. To make the items flow differently, all we need to do is change or overwrite the flex-direction property.

Css flexbox direction row vs column

To stack the navigation below the site name on a smaller device, simply change the direction of the flex container using a media query:

@media all and (max-width: 900px) {
  #header {
    flex-direction: column;
  }
}

On devices that are less than 900 pixels wide, the menu will be rendered as follows:

Css flexbox direction column

Flexbox makes it really easy to build responsive layouts. I hope you can see why I prefer using it over floats.

Laying out articles with CSS Grid

Flexbox deals with layouts in one dimension at a time ― either as a row or as a column. This is in contrast to CSS Grid Layout, which allows you to use rows and columns at the same time. In this next section, I'll explain how I use CSS Grid to make the layout of my articles more interesting.

Css grid layout

For our example, we'll use the following HTML code:

Lorem ipsum dolor sit amet

Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium.

Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.

Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.

Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt explicabo. Nemo enim ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt.

Some meta data Some meta data Some meta data

Simply put, CSS Grid Layout allows you to define columns and rows. Those columns and rows make up a grid, much like an Excel spreadsheet or an HTML table. Elements can be placed onto the grid. You can place an element in a specific cell, or an element can span multiple cells across different rows and different columns.

We apply a grid layout to the entire article and give it 7 columns:

article {
  display: grid;
  grid-template-columns: 1fr 200px 10px minmax(320px, 640px) 10px 200px 1fr;
}

The first statement, display: grid, sets the article to be a grid container.

The second statement grid-template-columns defines the different columns in our grid. In our example, we define a grid with seven columns. The middle column is defined as minmax(320px, 640px), and will hold the main content of the article. minmax(320px, 640px) means that the column can stretch from 320 pixels to 640 pixels, which helps to make it responsive.

On each side of the main content section there are three columns. Columns 3 and 5 provide 10 pixels of padding. Columns 2 and 6 are defined to be 200 pixels wide and can be used for metadata or for allowing an image to extend beyond the width of the main content.

The outer columns are defined as 1fr, and act as margins as well. 1fr stands for fraction or fractional unit. The width of the fractional units is computed by the browser. The browser takes the space that is left after the fixed-width columns and divides it by the number of fractional units. In this case we defined two fractional units, one for each of the two outer columns. The two outer columns will be equal in size and make sure that the article is centered on the page. If the browser is 1440 pixels wide, the fixed columns will take up 1060 pixels (640 + 10 + 10 + 200 + 200). This means there are 380 pixels left (1440 - 1060). Because we defined two fractional units, column 1 and column 7 will each be 190 pixels wide (380 divided by 2).

Css grid layout columns

While we have to explicitly declare the columns, we don't have to define the rows. The CSS Grid Layout system will automatically create a row for each direct child of our grid container article.

Css grid layout rows

Now that we have the grid defined, we have to assign content elements to their locations in the grid. By default, the CSS Grid Layout system has a flow model; it will automatically assign content to the next open grid cell. Most likely, you'll want to explicitly define where the content goes:

article > * {
  grid-column: 4 / -4;
}

The code snippet above makes sure that all elements that are a direct child of article start at the 4th column line of the grid and end at the 4th column line from the end. To understand that syntax, I have to explain the concept of column lines or grid lines:

Css grid layout column lines

By using grid-column: 4 / -4, all elements will be displayed in the "main column" between column line 4 and -4.

However, we want to overwrite that default for some of the content elements. For example, we might want to show metadata next to the content or we might want images to be wider. This is where CSS Grid Layout really shines. To make our image take up the entire width, we just tell it to span from the first to the last column line:

article > figure {
  grid-column: 1 / -1;
}

To put the metadata left from the main content, we write:

#main article > footer {
  grid-column: 2 / 3;
  grid-row: 2 / 4;
}
Css grid layout placement

I hope you enjoyed reading this tutorial and that you are encouraged to give Flexbox and Grid Layouts a try in your next project.

Here we go with the last wrap-up of the 2018 edition! The first presentation was about worms: “Worms that turn: nematodes and neotodes” by Matt Wixey. The first slide carried the disclaimer “for educational purposes only”. What could we expect? The idea behind Matt's research was interesting. A nematode is, in infosec, an anti-worm: it is designed to kill, block or disrupt other worms. Matt started with a history of worms, from the first one written in the 70’s (“I’m the creeper: catch me if you can”) up to today. Modern worms target IoT devices, which are juicy targets. If people can write worms to infect devices, why not write worms that fix them by patching and/or performing security remediation? Matt performed some demos of his research. Not a classic way to address the problem of worms, but interesting.
The second presentation was called “Mind the (Air)Gap” by Pedro Umbelino. Why search for security issues on air-gapped networks? Because they are used in highly secured environments and because it’s extra challenging and fun. But even with air gaps, covert channels are everywhere: physical media (USB sticks), acoustic, light, seismic, magnetic, thermal, etc. Pedro's idea was to use a BLE (“Bluetooth Low Energy“) light bulb to exfiltrate data, and particularly the blue colour because the human eye is less sensitive to it. He demonstrated how to vary the blue light to exfiltrate data character by character. Then, he explained how to use the NFC chip of an Android device to perform the same kind of exfiltration. The conclusion of the talk was: “As long as a device is on, people will find a way to exfiltrate data out of it“. Just a personal remark: in highly secured environments, electronic devices are not allowed or must be turned off… but it was a nice demo!
Then, Gaelle De Julis came on stage to talk about PRNGs (“Pseudo-Random Number Generators“). Her talk was “Not So Random”. Randomness is widely used today (for encryption, session identifiers, API key generation, etc). There are two important properties that randomness must have:
  • Distribution (no pattern)
  • Prediction (not predictable)

There has already been research and exploits in this area, like “How I met your girlfriend”, “PS3 Epic Fail” or “Mind your Ps and Qs”. Gaelle had a look at Java and the Math.random() function. She explained how a token generation function can be abused; so yes, sometimes it’s not so random. Her background is in mathematics, so the talk was quite difficult to follow, but you get the idea: take care when you need to generate random data.

The next one was “How we trained the dragon^H classified APKs via ANNs” by Roman Graf and Aaron Kaplan. The idea of their talk was to use machine learning techniques to detect malicious Android APK files and qualify them. Unfortunately, I had to leave the room during the talk.
The following talk was the keynote, by a very special speaker: Mr Paul Vixie himself! For those who don’t know him (yet!), he’s one of the founders of the ISC (“Internet Software Consortium”), which maintains the BIND domain name server! I had the opportunity to meet him in real life! His keynote focused on the need for collaboration and exchange to better protect our Internet. Competition is necessary (to influence (nations), for profit (companies) or for lifestyle (people)), but it must happen within a framework. On the Internet, there is no cooperation (spam, e-crime, DDoS) and it is difficult to track bad actors. Indeed, on the Internet, as Paul said, “packets have no passport“! And an attack on one will become an attack on all. Then, he presented the SIE (“Security Information Exchange”), founded in 2007 when he started to deploy sensors to collect DNS data (which is today known as passive DNS – see yesterday‘s presentation by Irena). Today, SIE is still growing and there is something new: a European office has been opened in Germany. So, they are still collecting more and more data. Feel free to join them and submit your logs!
After lunch, I attended a very interesting workshop about AIL (“Analysis of Information Leaks”). This is a framework that is fed by popular data sharing platforms like pastebin.com (but not only) and that helps to search for interesting information like PII or IOCs.
After the workshop, back to the main room for the two last talks. The first one was “Serial-Killer: Security Analysis of Industrial Serial Device Servers” by Florian Adamsky. His research started after the attack in Ukraine which affected the distribution of electricity (report here). Apparently, Ethernet-to-serial converters were abused during the attack. Such devices are more and more used in ICS environments to provide remote access to devices that only have a serial port. Some examples of usage: traffic lights, tank monitoring. He started to investigate and found many vulnerabilities that are really basic and scary. Some examples:
  • DoS (blocking regular users by opening just 1 TCP session)
  • No access control or very weak access control (like 4 digits)
  • Firmware manipulation (no verification)
  • Etherleaking (exposure of some portions of the kernel memory)
  • Hidden WiFi access point that can’t be disabled
  • XSS

All vulnerabilities were reported to ICS-CERT and,  after three months, publicly disclosed.

My last presentation was the one by Francois Durvaux about “Practical and Affordable Side-Channel Attacks“. Crypto is everywhere (thanks to more and more IoT devices) and ciphers are strong today… as long as the key is not disclosed or stolen. François explained how easy it is to steal an AES key using a side-channel attack. What does side-channel mean? It is all the information that is not part of the standard input/output. We can have timing attacks, power attacks, sound-based attacks or cache attacks (like Meltdown). Francois explained how a key can be stolen by looking at the electromagnetic radiation emitted by an AES implementation on an 8-bit microcontroller. He explained how we can detect variations in the radiation emitted by the device while it is performing encryption. And all of this on a low budget! After explaining how to perform the attack, François presented his demo, which was quite impressive! Note that it’s not only AES that is targeted: all block ciphers are affected by this non-intrusive attack. The code used in the demo has been released on github.com. Awesome!
That’s all for this edition of hack.lu, now let’s switch to BSidesLux organized tomorrow.

[The post Hack.lu 2018 Wrap-Up Day #3 has been first published on /dev/random]

October 17, 2018

The second day started early with an eye-opener talk: “IPC – the broken dream of inherent security” by Thanh Bui. IPC, or “Inter-Process Communication”, is everywhere. You can compare it to a network connection between a client and a server, but inside the operating system. The idea of Thanh’s research was to see how to intercept communications occurring via IPC using MitMa, or “Man-in-the-Machine”, attacks. There are multiple attack vectors available. Usually, a server binds to a specific identifier (a network socket, a named pipe). Note that there are also secure methods, via socket pairs or unnamed pipes. In the case of a network socket, the server binds to a specific port on 127.0.0.1:<port> and waits for a client connection but, often, there are failover ports. A malicious process can bind to the official port first and connect to the server via the failover port, so that it receives the client connections arriving on the primary port (a small sketch of this port-squatting trick follows after the U2F scenario below). Thanh also reviewed named pipes on Windows systems (accessible via \\.\pipe\). The second part of the talk focused on juicy targets that use IPC: password managers. Most of them use IPC for communications between the desktop client and the browser (to fill in passwords automatically). Thanh demonstrated how it is possible to intercept juicy information! Another popular device using IPC is the FIDO U2F security key. The attack scenario is the following:
  1. The attacker signs in using the first factor and receives a challenge
  2. The attacker keeps sending the challenge to the device at a high rate
  3. The victim signs in to ANY service using the same security key and touches the button
  4. The attacker receives the response with high probability
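As a minimal illustration of the localhost port-squatting trick described above (my own sketch, not Thanh’s code), the attacker only needs to bind the well-known port before the legitimate server does; any local client that blindly connects to that port then talks to the attacker. The port number here is a made-up placeholder:

import socket

# Hypothetical well-known loopback port of some local IPC "server"
# (e.g. a password manager's browser bridge).
WELL_KNOWN_PORT = 24680

# Whoever binds first wins: the real server will have to fall back to
# another port, while clients keep connecting to the well-known one.
squatter = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
squatter.bind(("127.0.0.1", WELL_KNOWN_PORT))
squatter.listen(1)
print("squatting on 127.0.0.1:%d, waiting for a client..." % WELL_KNOWN_PORT)

conn, _ = squatter.accept()
print("intercepted:", conn.recv(1024))
conn.close()

Defenses such as peer-credential checks or authenticated channels exist, but they have to be implemented explicitly by each IPC server.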

A nice talk to start the day! Then, I decided to attend a workshop about Sigma. I had known about the tool for a while but never had the opportunity to play with it. This workshop was the perfect occasion to learn more.
Sigma is an open signature format that allows you to write rules in YAML and export them to many log management solutions (Splunk, ES, QRadar, <name your preferred tool here>). Here is a simple example of a Sigma rule to detect Mimikatz binaries launched on a Windows host:

title: Exercise 1
status: experimental
description: Detects the execution of Mimikatz binaries
references:
    - https://website
author: Xavier Mertens
date: 2018/10/17
tags:
    - attack.execution
logsource:
    product: windows
    service: sysmon
detection:
    selection:
        EventID: 1
    filter:
        Hashes:
            - 97f93fe103c5a3618b505e82a1ce2e0e90a481aae102e52814742baddd9fed41
            - 6bfc1ec16f3bd497613f57a278188ff7529e94eb48dcabf81587f7c275b3e86d
            - e46ba4bdd4168a399ee5bc2161a8c918095fa30eb20ac88cac6ab1d6dbea2b4a
    condition: selection and filter
level: high
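Sigma’s value is in the converters that turn such a rule into a query for your SIEM. As a rough illustration of what that conversion amounts to (a naive sketch of my own, not the output of the real sigmac/pySigma tools, and the file name is assumed), the rule above can be flattened into a Splunk-style search with a few lines of Python:

import yaml

with open("exercise1.yml") as fh:        # the rule shown above, saved locally
    rule = yaml.safe_load(fh)

detection = rule["detection"]
event_id = detection["selection"]["EventID"]
hashes = detection["filter"]["Hashes"]

# Naive rendering of the 'selection and filter' condition; the real
# converters also handle field mappings, modifiers, aggregations, etc.
query = 'EventID=%d (%s)' % (
    event_id,
    " OR ".join('Hashes="%s"' % h for h in hashes),
)
print(query)

In practice you would run the official converter against your backend of choice rather than rolling your own.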
After the lunch break, I attended a new set of presentations. It started with Ange Albertini who presented the third episode of his keynote trilogy: “Education & Communication“. The idea was the same as in the previous presentations: can we imagine a world where everything is secure? Definitely not. And today, our lives are bound to computers. It is a fact that infosec is a life requirement for everybody, said Ange. We need to share our expertise because we are all in the same boat. But the problem is that not everybody acts the same way when facing a security issue or something suspicious. It is so easy to blame people who react badly. But, according to Ange, education and communication are part of our daily job as infosec professionals. Keep in mind that “not knowing is not a crime”, said Ange. An interesting keynote but unfortunately not easy to implement on a day-to-day basis.
The next talk was presented by Saumil Shah, a classic speaker at hack.lu. Saumil is an awesome speaker and always has very interesting topics. This year it was “Make ARM Shellcode Great Again“. It was quite hard for me… Saumil’s talk was about ARM shellcode. But as usual, the demos were well prepared and just worked!
Then Alicia Hickey and Dror-John Roecher came on stage to present “Finding the best threat intelligence provider for a specific purpose: trials and tribulations“. IOCs have been a hot topic for a while and many organizations are looking into implementing security controls based on IOCs to detect suspicious activities. Speaking about IOCs, there are the two classic approaches: free sources and commercial feeds. Which is the best one? It depends on you, but Alicia & Dror did research on commercial feeds and tried to compare them, which was not so easy! They started with a huge list of potential partners but the study focused on a pool of four (they did not disclose the names). They used MISP to store the received IOCs but ran into many issues, like the lack of a standard format across the tested partners. About the delivery of IOCs, there were also differences in timing, but what do you prefer? Common IOCs released quickly, or a set of valuable IOCs within a fixed context? IMHO, both are required and should be used for different purposes. This was interesting research.
The next talk was presented by Sva: “Pretty Easy Privacy“. The idea behind this talk was to present the PEP project. I already saw this talk last year at FSEC.
My best choice for today was the talk presented by Stephan Gerling: “How to hack a Yacht – swimming IoT“. Last year, Stephan presented an awesome talk about smart locks. This year, he came back with something again smart but way bigger: yachts! Modern yachts are big floating IoT devices. They have a complete infrastructure based on a router, WiFi access points, radars, cameras, AIS (“Automatic Identification System”) and also entertainment systems. There is also a CAN bus, like in cars, that carries data from multiple sensors to pilot the boat. While it is easy to jam a GPS signal (to make the boat blind), Stephan tried to find ways to pwn the boat through its Internet connection. The router (brand: Locomarine) is, in fact, a Mikrotik router that is managed by a crappy management tool that uses FTP to retrieve/push XML configuration files. In the file, the WiFi password can be found in clear text, amongst other useful information. The funniest demo was when Stephan used Shodan to find live boats. The router interface is publicly available and the administrative page can be accessed without authentication thanks to this piece of (pseudo) code:
if (user == "Dealer") { display_admin_page(); }
Awesome and scary talk at the same time!
The last regular talk was presented by Irena Damsky: “Simple analysis using pDNS“. Passive DNS is not new but many people do not realize the power of these databases. After a quick recap of how DNS works, Irena performed several live demonstrations, querying a passive DNS instance to collect juicy information. What can you find? (a short list)
  • List of domains hosted on a name server
  • Malicious domains
  • Suspicious domains (typosquatting & co.)

This is a very useful tool for incident responders but don’t forget that there is only ONE condition: the domain must have been queried at least once!
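For the curious, querying a passive DNS service usually boils down to one HTTP call returning one JSON record per line. The endpoint below is a made-up placeholder (every provider has its own URL, authentication and rate limits), but the record fields (rrname, rrtype, rdata) follow the common passive DNS output format:

import json
import urllib.request

PDNS_URL = "https://pdns.example.org/query/%s"   # hypothetical endpoint

def pdns_lookup(domain):
    with urllib.request.urlopen(PDNS_URL % domain) as resp:
        for line in resp.read().decode().splitlines():
            rec = json.loads(line)   # one JSON record per line
            print(rec.get("rrname"), rec.get("rrtype"), rec.get("rdata"))

pdns_lookup("example.com")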

And the second day ended with a set of lightning talks (always interesting ideas and tools promoted in a few minutes) followed by the famous PowerPoint karaoké! Stay tuned tomorrow for the next wrap-up!

[The post Hack.lu 2018 Wrap-Up Day #2 has been first published on /dev/random]

The post The convincing Bitcoin scam e-mail extorting you appeared first on ma.ttias.be.

A few months ago I received an e-mail that got me worried for a few seconds. It looked like this, and chances are you've seen it too.

From: Kalie Paci 
Subject: mattias - UqtX7m

It seems that, UqtX7m, is your pass word. You do not know me and you are probably thinking
why you are getting this mail, correct?

Well, I actually placed a malware on the adult video clips (porn) web-site and guess what,
you visited this site to have fun (you know what I mean). While you were watching videos,
your browser started operating as a RDP (Remote control Desktop) that has a keylogger which
gave me access to your display and also web camera. Immediately after that, my software
program collected your entire contacts from your Messenger, FB, and email.

What exactly did I do?

I created a double-screen video. First part displays the video you were viewing (you have
a nice taste lol), and second part displays the recording of your web camera.

What should you do?

Well, in my opinion, $1900 is a fair price for our little secret. You’ll make the payment
through Bitcoin (if you do not know this, search “how to buy bitcoin” in Google).

BTC Address: 1MQNUSnquwPM9eQgs7KtjDcQZBfaW7iVge
(It is cAsE sensitive, so copy and paste it)

Important:
You now have one day to make the payment. (I’ve a unique pixel in this message, and right
now I know that you have read this email message). If I don’t get the BitCoins, I will
send your video recording to all of your contacts including members of your family,
colleagues, and many others. Having said that, if I do get paid, I will destroy the video
immidiately. If you need evidence, reply with “Yes!” and I definitely will send your video
recording to your 11 friends. This is a non-negotiable offer, and so please don’t waste
my personal time and yours by responding to this mail.

If you read it, it looks like spam -- doesn't it?

Well, the thing that got me worried for a few seconds was that the subject line and the body contained an actual password I used a while back: UqtX7m.

Receiving an email with what feels like a personal secret in the subject line draws your attention. It's clever in the sense that you feel both violated and ashamed about the consequences. It looks legit.

Let me tell you clearly: it's a scam and you don't need to pay anyone.

I first mentioned it on Twitter, describing what feels like the brilliant part of this scam:

  • Email + Passwords, easy to get (plenty of leaks online)
  • Everyone watches porn
  • Nobody wants that leaked
  • The same generic e-mail can be used for every victim

Whoever is running this scam thought about the psychology of this one and found the sweet spot: it gets your attention and it gets you worried.

Well played. But don't fall for it and most importantly: do not pay anything.

The post The convincing Bitcoin scam e-mail extorting you appeared first on ma.ttias.be.

October 16, 2018

Remember that time when ATI employees tried to re-market their atomBIOS bytecode as scripts?

You probably don't, it was a decade ago.

It was in the middle of the RadeonHD versus -ati shitstorm. One was the original, written in actual C poking display registers directly, and depending on ATI's atomBIOS as little as practical. The other was the fork, implementing everything the fglrx way, and they stopped at nothing to smear the real code (including vandalizing the radeonhd repo, _after_ radeonhd had died). It was AMD teamed with SUSE versus ATI teamed with Red Hat. From a software and community point of view, it was real open source code and the goal of technical excellence, versus spite and grandstanding. From an AMD versus ATI point of view, it was trying to gain control over and clean up a broken company, and limiting the damage to server sales, versus fighting the new "corporate" overlord and people fighting to keep their old (ostensibly wrong) ways of working.

The RadeonHD project started in April 2007, when we started working on a proposal for an open source driver. AMD management loved it, and supported innovations such as fully free docs (the first time since 3dfx went bust in 1999), and we started coding in July 2007. This is also when we were introduced to John Bridgman. At SUSE, we were told that John Bridgman was there to provide us with the information we needed, and to make sure that the required information would go through legal and be documented in public register documents. As an ATI employee, he had previously been tasked by AMD to provide working documentation infrastructure inside ATI (or bring one into existence?). From the very start, John Bridgman was underhandedly working on slowly killing the RadeonHD project. First by endlessly stalling and telling a different lie every week about why he did not manage to get us information this time round either. Later, when the RadeonHD driver did make it out to the public, by playing a very clear double game, specifically by supporting a competing driver project (which did things the ATI way) and publicly deriding or understating the role of the AMD-backed SUSE project.

In November 2007, John Bridgman hired Alex Deucher, a supposed open source developer and x.org community member. While the level of support Bridgman had from his own ATI management is unclear to me, to AMD management he claimed that Alex was only there to help out the AMD sponsored project (again, the one with real code, and public docs), and that Alex was only working on the competing driver in his spare time (yeah right!). Let's just say that John slowly "softened" his claim there over time, as this is how one cooks a frog, and Mr Bridgman is an expert on cooking frogs.

One particularly curious instance occurred in January 2008, when John and Alex started to "communicate" differently about atomBIOS, specifically by consistently referring to it as a set of "scripts". You can see one shameful display here on Alex's (now defunct) blog. He did this as part of a half-arsed series of write-ups trying to educate everyone about graphics drivers... Starting of course with... those "scripts" called atomBIOS...

I of course responded with a blog entry myself. Here is a quote from it: "At no point do AtomBIOS functions come close to fitting the definition of script, at least not as we get them. It might start life as "scripts", but what we get is the bytecode, stuck into the ROM of our graphics cards or our mainboard." (libv, 2008-01-28)

At the same time, Alex and John were busy renaming the C code for r100-r4xx to "legacy", implying both that these old graphics cards were too old to support, and that actual C code is the legacy way of doing things. "The warping of these two words make it impossible to deny: it is something wrong, that somehow has to be made to appear right." (libv, 2008-01-29) Rewriting an obvious truth... Amazing behaviour from someone who was supposed to be part of the open source community.

At no point did ATI provide us with any tools to alter these so-called scripts. They provided only the interpreter for the bytecode, the bare minimum of what it took for the rest of the world to actually use atomBIOS. There never was atomBIOS language documentation. There was no tooling for converting to and from the bytecode. There never was tooling to alter or create PCI BIOS images from said atomBIOS scripts. And no open source tool to flash said BIOSes to the hardware was available. There was an obscure atiflash utility doing the rounds on the internet, and that tool still exists today, but it is not even clear who the author is. It has to be ATI, but it is all very clandestine; i think it is safe to assume that some individuals at some card makers sometimes break their NDAs and release this.

The only tool for looking at atomBIOS is Matthias Hopf's excellent atomdis. He wrote this in the first few weeks of the RadeonHD project. This became a central tool for RadeonHD development, as it gave us the insight into how things fit together. Yes, we did have register documentation, but Mr. Bridgman had twice given us 500 pages of "this bit does that" (the same ones made public a few months later), in the hope that we would not see the forest for the trees. Atomdis, register docs, and the (then) most experienced display driver developer on the planet (by quite a margin) made RadeonHD a viable driver in record time, and it gave ATI no way back from an open source driver. When we showed Mr. Bridgman the output of atomdis in September 2007, he was amazed at just how readable its output was compared to ATI internal tools. So much for scripts, eh?

I have to confess though that i convinced Matthias to hold off making atomdis public, as i knew that people like Dave Airlie would use it against us otherwise (as they ended up doing with everything else, they just did not get to use atomdis for this purpose as well). In Q2 2009, after the RadeonHD project was well and truly dead, Matthias brought up the topic again, and i wholeheartedly agreed to throw it out. ATI and the forkers had succeeded anyway, and this way we would give others a tool to potentially help them move on from atomBIOS. Sadly, not much has happened with this code since.

One major advantage of full openness is the ability to support future use cases that were unforeseeable at the time of hardware, code or documentation release. One such use-case is the nowadays rather common use of GPUs for cryptocurrency "mining". This was not a thing back in 2007-2009 when we were doing RadeonHD. It was not a thing when AMD created a GPGPU department out of a mix of ATI and new employees (this is the department that has been pushing out 3D ISA information ever since). It was also not considered when ATI stopped the flow of (non shader ISA) documentation back in Q1 2009 (coinciding with radeonHD dying). We only just got to the point of providing a 3d driver then, and never got near considering pushing for open firmware for Radeon GPUs. Fully open power management and fully open 3d engine firmware could mean that both could be optimised to provide a measurable boost to a very specific use-case, which cryptocurrency mining usually is. By retreating into its proprietary shell with the death of the RadeonHD driver, and by working on the blob-based fig-leaf driver to keep the public from hating the fglrx driver as much, ATI has denied us the ability to gain those few tenths of a percent, or even whole percents, that there are likely to be gained from optimised support.

Today though, miners are mostly just altering the gpu and memory frequencies and voltages in their atomBIOS based data tables (not touching the function tables). They tend to use a binary only gui tool to edit those values. Miners also extensively use the rather horrible atiflash. That all seems very limited compared to what could have been if AMD had not lost the internal battle.

So much for history, as giving ATI the two fingered salute is actually more of an added bonus for me. I primarily wanted to play around with some newer ideas for tracing registers, and i wanted to see how much more effective working with capstone would make me. Since SPI chips are simple hardware, and SPI engines are just as trivial, ati spi engines seemed like an easy target. The fact that there is one version of atiflash (that made it out) that runs under linux, made this an obvious and fun project. So i spent the last month, on and off, instrumenting this binary, flexing my REing muscle.

The information i gleaned is now stuck into flashrom, a tool to which i have been contributing since 2007, from right before i joined SUSE. I even held a coreboot devroom at FOSDEM in 2010, and had a talk on how to RE BIOSes to figure out board specific flash enables. I then was on a long flashrom hiatus from 2010 til earlier this year, when i was solely focused on ARM. But a request for a quick board enable got me into flashrom again. Artificial barriers are made to be breached, and it is fun and rewarding to breach them, and board enables always were a quick fix.

The current changes are still in review at review.coreboot.org, but i have a github clone for those who want immediate and simple access to the complete tree.

To use the ati_spi programmer in flashrom you need to use:
./flashrom -pati_spi
and then specify the usual flashrom arguments and commands.

Current support is limited to the hardware i have in my hands today, namely Rx6xx, and only spans the hardware that was obviously directly compatible with the Rx6xx SPI engine and GPIO block. This list of a whopping 182 devices includes:
* rx6xx: which are the radeon HD2xxx and HD3xxx series, released April 2007
* rx7xx: or HD4xxx, released June 2008.
* evergreen: or HD5xxx, released September 2009.
* northern island: HD6xxx series, released October 2010.
* Lombok, part of the Southern Island family, released in January 2012.

The other Southern Islanders have some weird IO read hack that i need to go test.

I have just ordered 250eur worth of used cards that should extend support across all PCIE devices all the way through to Polaris. Vega is still out of reach, as that is prohibitively expensive for a project that is not going to generate any revenue for yours truly anyway (FYI, i am self-employed these days, and i now need to find a balance between fun, progress and actual revenue, and i can only do this sort of thing during downtime between customer projects). According to pci.ids, Vega 12 and 20 are not out yet, but they too will need specific changes when they come out. Having spent all that time instrumenting atiflash, I do have enough info to quickly cover the entire range.

One good thing about targeting AMD/ATI again is that I have been blackballed there since 2007 anyway, so i am not burning any bridges that were not already burned a decade ago \o/

If anyone wants to support this or related work, either as a donation, or as actually invoiced time, drop me an email. If any miners are interested in specifically targeting Vega, soon, or doing other interesting things with ATI hardware, I will be happy to set up a coinbase wallet. Get in touch.

Oh, and test out flashrom, and report back so we can update the device table to "Tested, OK" status, as there are currently 180 entries ranked as "Not-Tested", a number that is probably going to grow quite a bit in the next few weeks as hardware trickles in :)
The 14th edition (!) of hack.lu is ongoing in Luxembourg. I arrived yesterday to attend the MISP summit, which was a success. It’s great to see that more and more people are using this information sharing platform to fight the bad guys! Today, the conference officially started with the regular talks. I spent my day in the main room to follow most of the scheduled talks. Here is my quick recap…

There was no official keynote speaker this year; the first ones to come on stage were Ankit Gangwal and Eireann Leverett with a talk about ransomware: “Come to the dark side! We have radical insurance groups & ransomware”. It took a different, interesting approach. Yesterday, Eireann had already presented the results of his research based on MISP: “Logistical Budget: Can we quantitatively compare APTs with MISP”. Today’s talk was on the same topic: how to quantify the impact of ransomware attacks. Cyber insurance likes quantifiable risks. How do you put numbers (read: an amount of money) on this threat? They reviewed the ransomware life cycle as well as some popular malware families and estimated the financial impact when a company gets infected. It was an interesting approach, as was the analysis of the cryptocurrency used to pay the ransoms (how often, when, weekends vs weekdays). Note that they also developed a set of scripts to help extract IOCs from ransomware samples (the code is available here).
Back to the technical side with the next talk, presented by Matthieu Tarral. He presented his solution to debug malware in a virtualized environment. What are the problems with classic debuggers? They are noisy, they alter the environment, they can affect what the analyst sees, or the system view can be incomplete. For Matthieu, a better approach is to put the debugger at level -1, to be stealthier and able to perform a full analysis of the malware. Another benefit is that the guest is unmodified (no extra process, no serial connection, …). Debuggers working at the hypervisor level are not new; he mentioned HyperDBG (from 2010!), virtdbg and PulseDBG. For the second part, Matthieu presented his own project based on LibVMI, a VMI abstraction layer library independent of any hypervisor. At the moment, Xen is fully supported and KVM is coming. He showed a nice demo (video) of a malware analysis. Very nice project!
The next talk was again less technical but quite interesting. It was presented by Fabien Mathey and focused on problems related to performing risk assessments in a company (time constraints, budgets, motivated people and the lack of a proper tool). The second part was a presentation of MONARC, a tool developed in Luxembourg and dedicated to performing risk assessments through a nice web interface.
And the series of non-technical talks continued with the one by Elle Armageddon about threat intelligence. The first part of the talk was a comparison between the medical and infosec environments. If you look at them carefully, you’ll quickly spot many common ideas like:
  • Take preventive measures
  • Listen to users
  • Get details before recommending strategies and tools
  • Be good stewards of user data
  • Treat users with respect and dignity

The second part was too long and less interesting (IMHO) and focused on the different types of threats like stalkers, state repression, etc.: what they have in common, what the differences are, and so on. I really liked the comparison of the medical and infosec environments!

After the lunch break (we always get good food at hack.lu!), Dan Demeter came on stage to talk about YARA and particularly the tool he developed: “Let me Yara that for you!“. YARA is a well-known tool for security researchers. It helps to spot malicious files, but it can also quickly become very difficult to manage all the rules that you collected here and there or developed yourself. Klara is the tool developed by Dan to help automate this. It can be described as a distributed YARA scanner. It offers a web interface, user groups, email notifications and more useful options. The idea is to be able to manage rules but also to (re)apply them to a set of samples in a quick way (for retro-search purposes). In terms of performance, Klara is able to scan 10TB in 30 minutes!
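For readers who have never written a YARA rule, here is a tiny self-contained example of my own (unrelated to Klara’s code base) using the yara-python bindings; it compiles one rule and runs it against an in-memory buffer:

import yara

# A deliberately simple rule: flag any data containing the string "mimikatz".
RULE = """
rule demo_mimikatz_string
{
    strings:
        $s = "mimikatz" nocase
    condition:
        $s
}
"""

rules = yara.compile(source=RULE)
matches = rules.match(data=b"... sekurlsa::logonpasswords via MIMIKATZ ...")
print(matches)   # prints [demo_mimikatz_string] when the string is found

Klara’s point is to run thousands of such rules over terabytes of samples in a distributed way, but the building block is this compile-and-match step.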
I skipped the next talk – “The (not so profitable) path towards automated heap exploitation“ – which looked to be a technical one. The next slot was assigned to Emmanuel Nicaise, who presented a talk about “Neuro Hacking”, or “The science behind social engineering and an effective security culture“. Everybody knows what social engineering is and that its goal is to convince the victim to perform actions or to disclose sensitive information. But to achieve efficient social engineering, it is mandatory to understand how the human brain works, and Emmanuel has deep knowledge of this topic. Our brain is very complex and can be compared to a computer with inputs/outputs, a bus, a firewall, etc. Compared to computers, humans do not perform multi-tasking but time-sharing. He explained what neurotransmitters are and how they can be excitatory (ex: glutamate) or inhibitory (GABA). Hormones were also covered. Our brains have two main functions:
  • To make sense
  • To make pattern analysis
Emmanuel demonstrated with several examples how our brain puts a picture in one category or another based purely on context. Some people might look good or bad depending on the way the brain sees the pictures. The way we process information is also important (fast vs slow mode). For the social engineer, it’s best to keep the brain in fast mode to make it easier to manipulate or predict. As a conclusion, to perform efficient social engineering:
  • Define a frame / context
  • Be consistent, respect the pattern
  • Pick a motivation
  • Use emotion accordingly
  • Redefine the norm
The next talk was about Turla, or Snake: “The Snake keeps reinventing itself” by Jean-Ian Boutin and Matthieu Faou. This group has been very active for a long time and has targeted governments and diplomats. It has a large toolset targeting all major platforms. What was the infection vector? Mosquito was a backdoor distributed via a fake Flash installer via the URL admdownload[.]adobe[.]com, which pointed to an IP in the Akamai CDN used by Adobe. How did it work? MitM? BGP hijack? Based on the location of the infected computers, it was via a compromised ISP. Once installed, Mosquito used many tools to perform lateral movement:
  • Sniffing the network for ports 21, 22, 80, 110, 143 and 389
  • Cliproxy (a reverse shell)
  • Quarks PwDump
  • Mimikatz
  • NirSoft tools
  • ComRAT (a keylogger and recent file scraper)

Particular attention was paid to covering all the tracks (deletion of all files, logs, etc.). Then, the speakers presented the Outlook backdoor used to interact with the compromised computer (via the MAPI protocol). All received emails were exfiltrated to a remote address, and commands were received via PDF files attached to incoming emails. They finished the presentation with a nice demo of a compromised Outlook popping up calc.exe upon receiving a malicious email.

The next presentation was (for me) the best one of this first day, not only because the content was technically great but because of the funny way it was presented. The title was “What the fax?” by Eyal Itkin and Yaniv Balmas. Their research started last year at the hotel bar with a joke: is it possible to compromise a company just by sending a fax? Challenge accepted! Fax machines are old devices (1966) and the ITU standard was defined in 1980 but, still today, many companies accept faxes as a communication channel. Standalone fax machines have disappeared and been replaced by MFPs (“Multi-Function Printers”), and such devices are usually connected to the corporate network. They focused their research on an HP printer due to the popularity of the brand. They explained how to grab the firmware and how they figured out the compression protocol used. Step by step, they explained how they discovered vulnerabilities (CVE-2018-5924 & CVE-2018-5925). Guess what? They made a demo: a printer received a fax and, using EternalBlue, they compromised a computer connected to the same LAN. Awesome to see how an old technology can still be (ab)used today!
I skipped the last two talks: one about the SS7 protocol suite and how it can be abused to collect information about mobile users or intercept SMS messages, and the last one about DDoS attacks based on IoT devices.
There is already a YouTube channel with all the presentations (added as soon as possible) and the slides will be uploaded to the archive website. That’s all for today folks, stay tuned for another wrap-up tomorrow!

[The post Hack.lu 2018 Wrap-Up Day #1 has been first published on /dev/random]

That's it, it's done! I have uninstalled Pocket from my smartphone. Pocket, which is nevertheless the application I have always considered the most important, the application that justified my switching back to a Kobo in order to read saved articles. Pocket which is, by definition, a way to keep articles gleaned from the web "for later", a list that tends to fill up with every post on social networks.

Since I no longer glean anything from the web since my disconnection, I have to admit that I am not adding anything new to Pocket (except when you send me an email with a link you think should interest me. I love those recommendations, thank you, keep them coming!). Having the time to read the pile of accumulated, still unread articles (about a hundred) made me realize how very rarely these articles are interesting (as I pointed out in ProofOfCast 12, the ones about blockchain all say exactly the same thing). In fact, out of that hundred or so articles, I shared 4 or 5 right away, I remember only one that really taught me something (an article summarizing the work of Harold Adams Innis) and zero that changed my perspective in any way.

The article about Innis is rather paradoxical, in the sense that his best-known book has been on my reading list for months and I have never taken the time to read it. Overall, I can say that my Pocket list was 99% useless and, for the remaining 1%, a substitute for a book I want to read. Reading those 100 articles probably took me as much time as it would have taken to quickly read the book in question.

The result is thus not exactly brilliant…

But it gets worse!

Pocket has a feed of "recommended" articles. This feed is extremely badly designed (lots of repetition, reshuffling at every refresh, it even shows articles that are already in your reading list), but it is the only application left on my phone that provides a news feed.

You can see where this is going…

My brain very quickly got into the habit of launching Pocket to "empty my reading list" before going to check the "recommended articles" (between us, the quality is truly appalling).

Today, with only 2 articles left to read, here are two suggestions that appeared back to back in my feed:

The coincidence is absolutely striking!

First an article to make us dream, whose title could be translated as "Becoming a billionaire in 2 years at age 20 is possible!", followed by an article titled "Global warming is the billionaires' fault".

These two articles alone probably sum up the current state of the media we consume. On one side, dreams and envy; on the other, defeatist catastrophism.

Obviously you want to click on the first link! Maybe you could learn the technique to do the same? Even if it sounds stupid, the mere fact that I am scrolling a feed proves that my brain is not looking to think, but to feel good.

The article in question, to be precise, contains only one factual piece of information: the startup Brex, founded by two twenty-somethings from Brazil, has just raised 100 million in a Series C round, which puts its valuation at 1 billion. Put like that, it is much less sexy, and of no interest if you are not in the fintech world. Incidentally, that does not make the founders billionaires, since they now have to work towards a profitable exit (even if, financially, we can assume they are not earning minimum wage and that money is no longer a problem for them).

The second article reminds us that the biggest industries are the ones that pollute the most (what a surprise!) and that they have always lobbied against any attempt to reduce their profits (no kidding!). The connection to billionaires is extremely tenuous (the implication being that billionaires sit on the boards of these companies). The article even goes so far as to absolve the reader, saying that, given the numbers, the average consumer can do nothing about pollution. Yet the sentence carries its own solution: in "consumer" there is "consume", and so "consumers" can decide to favour the companies that pollute less (which, let's note, has been happening for several years now, hence corporate greenwashing followed by a current movement trying to see through the greenwashing).

In short, the article is useless, dangerously stupid and unrelated to its title, but the title and the image make you want to click. Worse, in fact: the title and the image make you want to discuss, to react. I have witnessed many debates on Facebook in the comments of an article, debates about… the article's title!

When a commenter slightly more astute than the others points out that the comments are beside the point, because the objection is addressed in the article and/or the article goes further than its title, it is not rare to see the commenter caught out reply that "they don't have time to read the article". Social networks are thus populated by people who read no further than the headlines but launch into diatribes (because there is always time to react). Which is all profit for the social networks, since it generates activity, interactions, visits, displayed ads. But also for the articles' authors, since it generates likes and views on their articles.

The fact that nobody reads the content? That is not the goal of the business. Just as it is not a fast-food chain's goal to worry about whether you have a balanced, vitamin-rich diet.

If you want a balanced diet, you have to find a supplier whose business model is not to "fill you up" but to make you "better". Intellectually, that means buying directly from your small local producers, your organic bloggers and writers who live only off your contributions.

But let's move past this shameless advertising interlude and note that I uninstalled Pocket, my most indispensable app, after only 12 days of disconnection.

Am I establishing lasting changes, or am I still riding the initial enthusiasm, excited by the novelty this disconnection represents? Will I hold out with absolutely no feed at all? (and damn, it's much harder than expected) Will I abandon my envious jealousy of those who became billionaires (because if it happened to me, I could devote myself to writing) and start writing while reducing my consumption (being less exposed to ads)?

Time will tell…

Photo by Mantas Hesthaven on Unsplash

I am @ploum, a disconnected speaker and electronic writer, paid on a pay-what-you-want basis via Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE license.

October 14, 2018

For a few years now I've been planning to add support for responsive images to my site.

The past two weeks, I've had to take multiple trips to the West Coast of the United States; last week I traveled from Boston to San Diego and back, and this week I'm flying from Boston to San Francisco and back. I used some of that airplane time to add responsive image support to my site, and just pushed it to production from 30,000 feet in the air!

When a website supports responsive images, it allows a browser to choose between different versions of an image. The browser will select the most optimal image by taking into account not only the device's dimensions (e.g. mobile vs desktop) but also the device's screen resolution (e.g. regular vs retina) and the browser viewport (e.g. full-screen browser or not). In theory, a browser could also factor in the internet connection speed but I don't think they do.

First of all, with responsive image support, images should always look crisp (I no longer serve an image that is too small for certain devices). Second, my site should also be faster, especially for people using older smartphones on low-bandwidth connections (I no longer serve an image that is too big for an older smartphone).

Serving the right image to the right device can make a big difference in the user experience.

Many articles suggest supporting three image sizes; however, based on my own testing with Chrome's Developer Tools, I didn't feel that three sizes were sufficient. There are so many different screen sizes and screen resolutions today that I decided to offer six versions of each image: 480, 640, 768, 960, 1280 and 1440 pixels wide. And I'm on the fence about adding 1920 pixels as a seventh size.
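Generating those variants is easy to script. Here is a small sketch with Pillow (my own illustration; dri.es itself presumably relies on Drupal's image handling rather than a standalone script, and the file name is just an example):

from pathlib import Path
from PIL import Image

WIDTHS = [480, 640, 768, 960, 1280, 1440]

def make_variants(path):
    original = Image.open(path)
    for width in WIDTHS:
        if width >= original.width:
            continue  # never upscale a smaller original
        height = round(original.height * width / original.width)
        variant = original.resize((width, height), Image.LANCZOS)
        src = Path(path)
        variant.save(src.with_name("%s-%dw%s" % (src.stem, width, src.suffix)))

make_variants("photo.jpg")

The browser then picks the best candidate from the srcset/sizes attributes that list these variants.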

Because I believe in being in control of my own data, I host almost 10,000 original images on my site. This means that in addition to the original images, I now also store 60,000 image variants. To further improve the site experience, I'm contemplating adding WebP variants as well — that would bring the total number of stored images to 130,000.

If you notice that my photos are clearer and/or page delivery a bit faster, this is why. Through small changes like these, my goal is to continue to improve the user experience on dri.es.

October 13, 2018

In 1999, I decided to start dri.es (formerly buytaert.net) as a place to blog, write, and deepen my thinking. While I ran other websites before dri.es, my blog is one of my longest-running projects.

Working on my site helps me relax, so it's not unusual for me to spend a few hours now and then making tweaks. This could include updating my photo galleries, working on more POSSE features, fixing broken links, or upgrading to the latest version of Drupal.

The past month, a collection of smaller updates have resulted in a new visual design for my site. If you are reading this post through an RSS aggregator or through my mailing list, consider checking out the new design on dri.es.

2018 dri.es redesign: before (left) and after (right).

The new dri.es may not win design awards, but will hopefully make it easier to consume the content. My design goals were the following:

  • Improve the readability of the content
  • Improve the discoverability of the content
  • Optimize the performance of my site
  • Give me more creative freedom

Improve readability of the content

To improve the readability of the content, I implemented various usability best practices for spacing text and images.

I also adjusted the width of the main content area. For optimal readability, you should have between 45 and 75 characters on each line. No more, no less. The old design had about 95 characters on each line, while the new design is closer to 70.

Both the line width and the spacing changes should improve the readability.

Improve the discoverability of content

I also wanted to improve the discoverability of my content. I cover a lot of different topics on my blog — from Drupal to Open Source, startups, business, investing, travel, photography and the Open Web. To help visitors understand what my site is about, I created a new navigation. The new Archive page shows visitors a list of the main topics I write about. It's a small change, but it should help new visitors figure out what my site is about.

Optimize the performance of my site

Less noticeable is that the underlying HTML and CSS code is now entirely different. I'm still using Drupal, of course, but I decided to rewrite my Drupal theme from scratch.

The new design's CSS code is more than three times smaller: the previous design had almost 52K of theme-specific CSS while the new design has only 16K of theme-specific CSS.

The new design also results in fewer HTTP requests as I replaced all stand-alone icons with inline SVGs. Serving the page you are reading right now only takes 16 HTTP requests compared to 33 HTTP requests with the previous design.

All this results in faster site performance. This is especially important for people visiting my site from a mobile device, and even more important for people visiting my site from areas in the world with slow internet. A lighter theme with fewer HTTP requests makes my site more accessible. It is something I plan to work more on in the future.

Website bloat is a growing problem and impacts the user experience. I wanted to lead by example, and made my site simpler and faster to load.

The new design also uses Flexbox and CSS Grid Layout — both are more modern CSS standards. The new design is fully supported in all main browsers: Chrome, Firefox, Safari and Edge. It is, however, not fully supported on Internet Explorer, which accounts for 3% of all my visitors. Internet Explorer users should still be able to read all content though.

Give me more creative freedom

Last but not least, the new design provides me with a better foundation to build upon in subsequent updates. I wanted more flexibility in how to lay out images in my blog posts, highlight important snippets, and add a table of contents to long posts. You can see all three in action in this post, assuming you're looking at it on a larger screen.

We are pleased to announce the developer rooms that will be organised at FOSDEM 2019. Developer rooms are assigned to self-organising groups to work together on open source projects, to discuss topics relevant to a broader subset of the community, etc. The individual developer room organisers will issue their calls for participation in the next few days. We will update this table with links to the calls for participation.
Saturday 2 February 2019 (topic | call for participation | CfP deadline):
  • .NET and TypeScript | announcement | 2018-12-03
  • Ada | announcement | 2018-12-01
  • BSD | announcement | 2018-12-10
  • Collaborative Information and Content Management Applications | announcement | 2018-12-10
  • Decentralized…

October 12, 2018

I was talking to Chetan Puttagunta yesterday, and we both agreed that 2018 has been an incredible year for Open Source businesses so far. (Chetan helped lead NEA's investment in Acquia, but is also an investor in Mulesoft, MongoDB and Elastic.)

Between a series of acquisitions and IPOs, Open Source companies have shown incredible financial returns this year. Just look at this year-to-date list:

Company | Acquirer | Date | Value
CoreOS | Red Hat | January 2018 | $250 million
Mulesoft | Salesforce | May 2018 | $6.5 billion
Magento | Adobe | June 2018 | $1.7 billion
GitHub | Microsoft | June 2018 | $7.5 billion
SUSE | EQT Partners | July 2018 | $2.5 billion
Elastic | IPO | September 2018 | $4.9 billion

For me, the success of Open Source companies is not a surprise. In 2016, I explained how open source had crossed the chasm, and predicted that proprietary software giants would soon need to incorporate Open Source into their own offerings to remain competitive:

The FUD-era where proprietary software giants campaigned aggressively against open source and cloud computing by sowing fear, uncertainty and doubt is over. Ironically, those same critics are now scrambling to paint themselves as committed to open source and cloud architectures.

Adobe's acquisition of Magento, Microsoft's acquisition of GitHub, the $5.2 billion merger of Hortonworks and Cloudera, and Salesforce's acquisition of Mulesoft all bear out this prediction. The FUD around Open Source businesses is officially over.

I published the following diary on isc.sans.edu: “More Equation Editor Exploit Waves“:

This morning, I spotted another wave of malicious documents that (ab)use again CVE-2017-11882 in the Equation Editor (see my yesterday’s diary). This time, malicious files are RTF files. One of the samples is SHA256:bc84bb7b07d196339c3f92933c5449e71808aa40a102774729ba6f1c152d5ee2 (VT score: 19/57)… [Read more]

[The post [SANS ISC] More Equation Editor Exploit Waves has been first published on /dev/random]

In my country, you have to be 18 to run in elections (not so long ago, you even had to be 23 to be a senator). I suppose the reason is that most people under 18 are considered too intellectually immature. They are too easily influenced; they do not yet know the realities of life.

Of course, the limit is arbitrary. Some people are more mature and realistic at 15 than others will ever be in their entire hundred-year lives.

We accept this arbitrary limit to keep out the innocence and naivety of youth.

But what about the opposite?

I observe that many adults who were very idealistic at 20 suddenly become more concerned with their creature comforts and paying off the mortgage in their thirties and forties. Beyond that, many of us become conservative, even reactionary. It is certainly not universal, but it is nevertheless a kind of human constant observed since the first philosophers. Pliny (or Seneca? I can no longer find the text) was already talking about it in my Latin classes.

So why not put an age limit on running for office?

Rather than turning elections into a contest of shaking retirees' hands at village fairs, let the young take power. They still have ideals. They are far more concerned about the future of their planet, because they know they will be living on it. They are sharper, more aware of the latest trends. But, unlike their elders, they have neither the time nor the money to invest in elections.

Imagine for a moment a world where no elected official would be over 35 (which disqualifies me straight away)! Well, in all honesty, I find that rather appealing. The elders would become advisers to the young, and no longer the other way around, where the young fetch coffee for the old in the hope of climbing the hierarchy and landing an enviable position at 50.

As they approached the fateful 35-year limit, elected officials would start wanting to please the younger ones still full of rebellion. How beautiful it would be to see these thirty-somethings treat the interests of the young as their top priority. How satisfying it would be to no longer endure the smugness of decrepit officials lecturing the youth on their laziness between two rounds of golf.

Of course, there are plenty of older people who remain young at heart and in mind. You will recognise them easily: they do not seek to occupy the space, but to put forward the youth, the next generation. Go figure, I have the feeling those people would tend to rather agree with this post.

After all, we already deprive ourselves of all the intelligence and liveliness of the under-18s; I don't get the impression we would much miss the conservatism of the formerly young.

Photo by Cristina Gottardi on Unsplash

I am @ploum, a disconnected speaker and electronic writer, paid on a pay-what-you-want basis via Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE license.

October 11, 2018

I published the following diary on isc.sans.edu: “New Campaign Using Old Equation Editor Vulnerability“:

Yesterday, I found a phishing sample that looked interesting:

From: sales@tjzxchem[.]com
To: me
Subject: RE: Re: Proforma Invoice INV 075 2018-19 ’08
Reply-To: exports.sonyaceramics@gmail[.]com

[Read more]

[The post [SANS ISC] New Campaign Using Old Equation Editor Vulnerability has been first published on /dev/random]