Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

February 17, 2019

Two articles I recommend reading, on the subject of outdoor life, cycling, writing and air conditioning.

Sometimes, the serendipity of my web reading produces a collision of ideas, an extraordinary overlap.

That was the case the evening I started reading Thierry Crouzet's masterful post on cycling and writing. Being a cyclist and a writer myself, I vibrate with it; I feel every one of the sensations Thierry shares.

Cycling, for me, means living outdoors. Getting out. I need to breathe the wind, to feel the rain, the sun or the clouds. With age, this need for the outdoors has become ever more intense, more necessary. I dream of pedalling for days on end, sleeping under the stars. I can no longer stand air conditioning, even for a few hours.

Air conditioning: an invention that upended the order of the world, according to a magnificent article by Rowan Moore that slipped into my reading list right behind Crouzet's.

Because I sometimes have the impression that my need for the outdoors is far from shared by the majority of my fellow citizens. I see intelligent, athletic, educated people take the car, with air conditioning, to get to work in an air-conditioned office, before getting back in the car to go ride a stationary bike in an air-conditioned gym.

Should it rain a little or be cold, the ten metres between the car and the building's door are perceived as an adventure. If the weather is nice, the adventure is just the same, because you might sweat. The spots in the indoor car park are, incidentally, the most coveted. Arriving by bike or on foot, I pass for an alien either way.

I admit that I do not throw myself into the cold and the rain with pleasure. When the raindrops stream down the windows, I want to stay warm and not leave the house. I literally have to push myself out the door.

But once the first few kilometres have been swallowed, shivering and cursing my own masochism, the atmosphere turns welcoming; it accepts me. I suffer, I scream, but a huge smile splits my mud-covered face. I feel alive; I am part of the rain, of the wet earth. I am that blurred interface, an indistinct horizon between the brown sky and the grey puddles. I am alive!

I realise that, at our latitudes, the weather is never extreme. Once outside, the rain is never that terrible. Once moving, a heatwave is never dreadful. Properly dressed, the cold is never insurmountable. As the saying goes, there is no bad weather, only bad clothing. Only the mud is unavoidable in my country. So we turned it into a sport: cyclocross!

But this taste for the muddy open air, which I inherited entirely from scouting, is not widely shared. The outdoors frightens people. Going outside scares them. The heat, the cold. The rain or the wind. Many regard them as enemies and try to avoid them.

I, who have the luxury of knowing that a nice hot (or nice cool) shower awaits me at home (let's not kid ourselves, I love my creature comforts), have learned to accept them. To love them. They return the favour; they invigorate me. And coming home is all the more pleasant for it.

On the other hand, the hum of a ventilation system in an open-plan office drives me mad within minutes. A few hours in an air-conditioned room and I am guaranteed to catch a cold.

I believe I simply do not have a constitution sturdy enough for driving a car and for office work.

Photo by Daniel Sturgess | @daniel_sturgess on Unsplash

I am @ploum, a disconnected speaker and electronic writer, paid on a pay-what-you-want basis via Tipeee, Patreon, Paypal, Liberapay or in millibitcoins at 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE license.

February 15, 2019

I published the following diary on isc.sans.edu: “Old H-Worm Delivered Through GitHub”:

Another piece of malicious code spotted on GitHub this time. By the way, this is the perfect example to demonstrate that protecting users via a proxy with web-categorization is useless… Even sites from the Alexa Top-1M may deliver malicious content (GitHub's current position is 51). The URL has been found in a classic email phishing attempt. The content was recently uploaded (<24h) when I found it… [Read more]

[The post [SANS ISC] Old H-Worm Delivered Through GitHub has been first published on /dev/random]

You know that frustrating Google PageSpeed Insights opportunity "Defer unused CSS"? Well, it's going to be renamed soon (in Lighthouse first, so GPSI should follow) as per this merged GitHub pull request.

The full text will read:

Remove unused CSS
Remove dead rules from stylesheets and defer the loading of CSS not used for above-the-fold content to reduce unnecessary bytes consumed by network activity.

February 14, 2019

An image of a copyright sign

After much debate, the EU Copyright Directive is now moving to a final vote in the European Parliament. The directive, if you are not familiar, was created to prohibit spreading copyrighted material on internet platforms, protecting the rights of creators (for example, many musicians have supported this overhaul).

The overall idea behind the directive — compensating creators for their online works — makes sense. However, the implementation and execution of the directive could have a very negative impact on the Open Web. I'm surprised more has not been written about this within the web community.

For example, Article 13 requires for-profit online services to implement copyright filters for user-generated content, which includes comments on blogs, reviews on commerce sites, code on programming sites or possibly even memes and cat photos on discussion forums. Any for-profit site would need to apply strict copyright filters on content uploaded by a site's users. If sites fail to correctly filter copyrighted materials, they will be directly liable to rights holders for expensive copyright infringement violations.

While implementing copyright filters may be doable for large organizations, it may not be for smaller organizations. Instead, small organizations might decide to stop hosting comments or reviews, or allowing the sharing of code, photos or videos. The only for-profit organizations potentially excluded from these requirements are companies earning less than €10 million a year, until they have been in business for three years. It's not a great exclusion, because there are a lot of online communities that have been around for more than three years and don't make more than €10 million a year.

The EU tends to lead the way when it comes to internet legislation. For example, GDPR has proven successful for consumer data protection and has sparked state-by-state legislation in the United States. In theory, the EU Copyright Directive could do the same thing for modern internet copyright law. My fear is that in practice, these copyright filters, if too strict, could discourage the free flow of information and sharing on the Open Web.

I published the following diary on isc.sans.edu: “Suspicious PDF Connecting to a Remote SMB Share”:

Yesterday I stumbled upon a PDF file that was flagged as suspicious by a customer's anti-malware solution and placed in quarantine. Later, the recipient contacted the team in charge of emails to access his document because he knew the sender and claimed that the file was legit… [Read more]

[The post [SANS ISC] Suspicious PDF Connecting to a Remote SMB Share has been first published on /dev/random]

I've been thinking about the performance of my site and how it affects the user experience. There are real, ethical concerns to poor web performance. These include accessibility, inclusion, waste and environmental concerns.

A faster site is more accessible, and therefore more inclusive for people visiting from a mobile device, or from areas in the world with slow or expensive internet.

For those reasons, I decided to see if I could improve the performance of my site. I used the excellent https://webpagetest.org to benchmark a simple blog post https://dri.es/relentlessly-eliminating-barriers-to-growth.

A diagram that shows page load times for dri.es before making performance improvements

The image above shows that it took a browser 0.722 seconds to download and render the page (see blue vertical line):

  • The first 210 milliseconds are used to set up the connection, which includes the DNS lookup, TCP handshake and the SSL negotiation.
  • The next 260 milliseconds (from 0.21 seconds to 0.47 seconds) are spent downloading the rendered HTML file, two CSS files and one JavaScript file.
  • After everything is downloaded, the final 330 milliseconds (from 0.475 seconds to 0.8 seconds) are used to lay out the page and execute the JavaScript code.

By most standards, 0.722 seconds is pretty fast. In fact, according to HTTP Archive, it takes more than 2.4 seconds to download and render the average web page on a laptop or desktop computer.

Regardless, I noticed that the length of the horizontal green bars and the horizontal yellow bar was relatively long compared to that of the blue bar. In other words, a lot of time is spent downloading JavaScript (yellow horizontal bar) and CSS (two green horizontal bars) instead of the HTML, including the actual content of the blog post (blue bar).

To fix this, I did two things:

  1. Use vanilla JavaScript. I replaced my jQuery-based JavaScript with vanilla JavaScript. Without impacting the functionality of my site, the amount of JavaScript went from almost 45 KB to 699 bytes, a reduction of more than 98 percent.
  2. Conditionally include CSS. For example, I use Prism.js for syntax highlighting code snippets in blog posts. prism.css was downloaded for every page request, even when there were no code snippets to highlight. Using Drupal's render system, it's easy to conditionally include CSS. By taking advantage of that, I was able to reduce the amount of CSS downloaded by 47 percent — from 4.7 KB to 2.5 KB.

According to the January 1st, 2019 run of HTTP Archive, the median page requires 396 KB of JavaScript and 60 KB of CSS. I'm proud that my site is well under these medians.

File type    Dri.es before    Dri.es after    World-wide median
JavaScript   45 KB            699 bytes       396 KB
CSS          4.7 KB           2.5 KB          60 KB
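
If you want to verify byte counts like these on your own site, one quick way is curl's size_download variable; a minimal sketch, with a hypothetical asset URL:

# print the compressed transfer size of an asset, in bytes
curl -so /dev/null --compressed -w '%{size_download}\n' 'https://example.com/css/style.css'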

Because the new JavaScript and CSS files are significantly smaller, it takes the browser less time to download, parse and render them. As a result, the same blog post is now available in 0.465 seconds instead of 0.722 seconds, or 35% faster.

After a new https://webpagetest.org test run, you can clearly see that the bars for the CSS and JavaScript files became visually shorter:

A diagram that shows page load times for dri.es after making performance improvements

To optimize the user experience of my site, I want it to be fast. I hope that others will see that bloated websites can come at a great cost, and will consider using tools like https://webpagetest.org to make their sites more performant.
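
For anyone who wants to automate such benchmarks, WebPageTest also exposes an HTTP API. A minimal sketch of kicking off a run via its runtest.php endpoint (you need your own API key, shown here as a placeholder):

# queue a WebPageTest run and get the result URLs back as JSON
curl 'https://www.webpagetest.org/runtest.php?url=https://dri.es/&k=YOUR_API_KEY&f=json'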

I'll keep working on making my website even faster. As a next step, I plan to make pages with images faster by using lazy image loading.

February 11, 2019

An abstract image of three boxes

The web used to be server-centric in that web content management systems managed data and turned it into HTML responses. With the rise of headless architectures a portion of the web is becoming server-centric for data but client-centric for its presentation; increasingly, data is rendered into HTML in the browser.

This shift of responsibility has given rise to JavaScript frameworks, while on the server side, it has resulted in the development of JSON:API and GraphQL to better serve these JavaScript applications with content and data.

In this blog post, we will compare REST, JSON:API and GraphQL. First, we'll look at an architectural, CMS-agnostic comparison, followed by evaluating some Drupal-specific implementation details.

It's worth noting that there are of course lots of intricacies and "it depends" when comparing these three approaches. When we discuss REST, we mean the "typical REST API" as opposed to one that is extremely well-designed or following a specification (not REST as a concept). When we discuss JSON:API, we're referring to implementations of the JSON:API specification. Finally, when we discuss GraphQL, we're referring to GraphQL as it is used in practice. Formally, it is only a query language, not a standard for building APIs.

The architectural comparison should be useful for anyone building decoupled applications regardless of the foundation they use because the qualities we will evaluate apply to most web projects.

To frame our comparisons, let's establish that most developers working with web services care about the following qualities:

  1. Request efficiency: retrieving all necessary data in a single network round trip is essential for performance. The size of both requests and responses should make efficient use of the network.
  2. API exploration and schema documentation: the API should be quickly understandable and easily discoverable.
  3. Operational simplicity: the approach should be easy to install, configure, run, scale and secure.
  4. Writing data: not every application needs to store data in the content repository, but when it does, it should not be significantly more complex than reading.

We summarized our conclusions in the table below, but we discuss each of these four categories (or rows in the table) in more depth below. If you aggregate the ratings in the table, you see that we rank JSON:API above GraphQL and GraphQL above REST for Drupal core's needs.

Request efficiency
  • REST: Poor; multiple requests are needed to satisfy common needs, and responses are bloated.
  • JSON:API: Excellent; a single request is usually sufficient for most needs, and responses can be tailored to return only what is required.
  • GraphQL: Excellent; a single request is usually sufficient for most needs, and responses include exactly what was requested.

Documentation, API explorability and schema
  • REST: Poor; no schema, not explorable.
  • JSON:API: Acceptable; generic schema only; links and error messages are self-documenting.
  • GraphQL: Excellent; precise schema; excellent tooling for exploration and documentation.

Operational simplicity
  • REST: Acceptable; works out of the box with CDNs and reverse proxies; few to no client-side libraries required.
  • JSON:API: Excellent; works out of the box with CDNs and reverse proxies; no client-side libraries needed, but many are available and useful.
  • GraphQL: Poor; extra infrastructure is often necessary, client-side libraries are a practical necessity, and specific patterns are required to benefit from CDNs and browser caches.

Writing data
  • REST: Acceptable; HTTP semantics give some guidance, but specifics are left to each implementation; one write per request.
  • JSON:API: Excellent; how writes are handled is clearly defined by the spec; one write per request, but support for multiple writes is being added to the specification.
  • GraphQL: Poor; how writes are handled is left to each implementation and there are competing best practices; it's possible to execute multiple writes in a single request.

If you're not familiar with JSON:API or GraphQL, I recommend you watch the following two short videos. They will provide valuable context for the remainder of this blog post:

Request efficiency

Most REST APIs tend toward the simplest implementation possible: a resource can only be retrieved from one URI. If you want to retrieve article 42, you have to retrieve it from https://example.com/article/42. If you want to retrieve article 42 and article 72, you have to perform two requests: one to https://example.com/article/42 and one to https://example.com/article/72. If the article's author information is stored in a different content type, you have to do two additional requests, say to https://example.com/author/3 and https://example.com/author/7. Furthermore, you can't send these requests until you've requested, retrieved and parsed the article requests (you wouldn't know the author IDs otherwise).

Consequently, client-side applications built on top of basic REST APIs tend to need many successive requests to fetch their data. Often, these requests can't be sent until earlier requests have been fulfilled, resulting in a sluggish experience for the website visitor.

GraphQL and JSON:API were developed to address the typical inefficiency of REST APIs. Using JSON:API or GraphQL, you can use a single request to retrieve both article 42 and article 72, along with the author information for each. It simplifies the developer experience, but more importantly, it speeds up the application.
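
To make the difference concrete, here is a hedged sketch with hypothetical URLs (the JSON:API include parameter is defined by the spec; the filter syntax shown is just one common implementation style):

# typical REST: four round trips, the last two only possible after parsing the first two
curl 'https://example.com/article/42'
curl 'https://example.com/article/72'
curl 'https://example.com/author/3'
curl 'https://example.com/author/7'

# JSON:API: a single request for both articles plus their authors
curl 'https://example.com/articles?filter[id][IN]=42,72&include=author'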

Finally, both JSON:API and GraphQL have a solution to limit response sizes. A common complaint against typical REST APIs is that their responses can be incredibly verbose; they often respond with far more data than the client needs. This is both annoying and inefficient.

GraphQL eliminates this by requiring the developer to explicitly add each desired resource field to every query. This makes it difficult to over-fetch data but easily leads to very large GraphQL queries, making (cacheable) GET requests impossible.

JSON:API solves this with the concept of sparse fieldsets: lists of desired resource fields. These behave in much the same fashion as GraphQL's field selection; however, when they're omitted, JSON:API will typically return all fields. An advantage, though, is that when a JSON:API query gets too large, sparse fieldsets can be omitted so that the request remains cacheable.
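
As an illustration, with hypothetical endpoints (the fields[TYPE] parameter is what the JSON:API spec defines; the GraphQL schema is invented for the example):

# JSON:API sparse fieldsets: only the title and body of article 42
curl 'https://example.com/articles/42?fields[article]=title,body'

# roughly equivalent GraphQL query
curl -X POST 'https://example.com/graphql' \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ article(id: 42) { title body } }"}'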

Multiple data objects in a single response
  • REST: Usually; but every implementation is different (for Drupal: a custom "REST Export" view or custom REST plugin is needed).
  • JSON:API: Yes.
  • GraphQL: Yes.

Embed related data (e.g. the author of each article)
  • REST: No.
  • JSON:API: Yes.
  • GraphQL: Yes.

Only needed fields of a data object
  • REST: No.
  • JSON:API: Yes; servers may choose sensible defaults, but developers must be diligent to prevent over-fetching.
  • GraphQL: Yes; strict, which eliminates over-fetching but, at the extreme, can lead to poor cacheability.

Documentation, API explorability and schema

As a developer working with web services, you want to be able to discover and understand the API quickly and easily: what kinds of resources are available, what fields does each of them have, how are they related, etc. But also, if this field is a date or time, what machine-readable format is the date or time specified in? Good documentation and API exploration can make all the difference.

Auto-generated documentation
  • REST: Depends; if using the OpenAPI standard.
  • JSON:API: Depends; if using the OpenAPI standard (formerly Swagger).
  • GraphQL: Yes; various tools available.

Interactivity
  • REST: Poor; navigable links rarely available.
  • JSON:API: Acceptable; observing available fields and links in its responses enables exploration of the API.
  • GraphQL: Excellent; autocomplete feature, instant results or compilation errors, complete and contextual documentation.

Validatable and programmable schema
  • REST: Depends; if using the OpenAPI standard.
  • JSON:API: Depends; the JSON:API specification defines a generic schema, but a reliable field-level schema is not yet available.
  • GraphQL: Yes; a complete and reliable schema is provided (with very few exceptions).

GraphQL has superior API exploration thanks to GraphiQL (demonstrated in the video above), an in-browser IDE of sorts, which lets developers iteratively construct a query. As the developer types the query out, likely suggestions are offered and can be auto-completed. At any time, the query can be run and GraphiQL will display real results alongside the query. This provides immediate, actionable feedback to the query builder. Did they make a typo? Does the response look like what was desired? Additionally, documentation can be summoned into a flyout when additional context is needed.

On the other hand, JSON:API is more self-explanatory: APIs can be explored with nothing more than a web browser. From within the browser, you can browse from one resource to another, discover its fields, and more. So, if you just want to debug or try something out, JSON:API is usable with nothing more than cURL or your browser. Or, you can use Postman (demonstrated in the video above) — a standalone environment for developing on top of any HTTP-based API. Constructing complex queries requires some knowledge, however, and that is where GraphQL's GraphiQL shines compared to JSON:API.

Operational simplicity

We use the term operational simplicity to encompass how easy it is to install, configure, run, scale and secure each of the solutions.

The table should be self-explanatory, though it's important to make a remark about scalability. To scale a REST-based or JSON:API-based web service so that it can handle a large volume of traffic, you can use the same approach websites (and Drupal) already use, including reverse proxies like Varnish or a CDN. To scale GraphQL, you can't rely on HTTP caching the way you can with REST or JSON:API unless you use persisted queries. Persisted queries are not part of the official GraphQL specification, but they are a widely adopted convention amongst GraphQL users. They essentially store a query on the server, assign it an ID and permit the client to get the result of the query using a GET request with only the ID. Persisted queries add more operational complexity, and they also mean the architecture is no longer fully decoupled: if a client wants to retrieve different data, server-side changes are required.
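
In practice a persisted-query request might look like the following sketch; the exact parameter names vary by server, this simply follows the convention described above (hypothetical URL and query ID):

# the full query text lives on the server; the client only sends its ID
curl 'https://example.com/graphql?queryId=articleList&variables=%7B%22limit%22%3A10%7D'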

Scalability: additional infrastructure requirements
  • REST: Excellent; same as a regular website (Varnish, CDN, etc.).
  • JSON:API: Excellent; same as a regular website (Varnish, CDN, etc.).
  • GraphQL: Usually poor; only the simplest queries can use GET requests; to reap the full benefit of GraphQL, servers need their own tooling.

Tooling ecosystem
  • REST: Acceptable; lots of developer tools available, but for the best experience they need to be customized for the implementation.
  • JSON:API: Excellent; lots of developer tools available; tools don't need to be implementation-specific.
  • GraphQL: Excellent; lots of developer tools available; tools don't need to be implementation-specific.

Typical points of failure
  • REST: Fewer; server, client.
  • JSON:API: Fewer; server, client.
  • GraphQL: Many; server, client, client-side caching, client and build tooling.

Writing data

For most REST APIs and for JSON:API, writing data is as easy as fetching it: if you can read information, you also know how to write it. Instead of the GET HTTP request type you use POST and PATCH requests. JSON:API improves on typical REST APIs by eliminating differences between implementations. There is just one way to do things, and that enables better, generic tooling and less time spent on server-side details.
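
For example, creating a resource over JSON:API looks like this sketch (the endpoint is hypothetical; the data envelope with type and attributes and the application/vnd.api+json content type are what the spec prescribes):

# one write per request, shaped the same way on every JSON:API server
curl -X POST 'https://example.com/articles' \
  -H 'Content-Type: application/vnd.api+json' \
  -d '{"data": {"type": "article", "attributes": {"title": "Hello world"}}}'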

The nature of GraphQL's write operations (called mutations) means that you must write custom code for each write operation; unlike the JSON:API specification, GraphQL doesn't prescribe a single way of handling write operations to resources, so there are many competing best practices. In essence, the GraphQL specification is optimized for reads, not writes.
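
A sketch of what such a mutation might look like; unlike the JSON:API example above, everything here (the mutation name, its arguments, the returned fields) is defined by the individual server, not by GraphQL itself:

# hypothetical schema: another server could expose this write completely differently
curl -X POST 'https://example.com/graphql' \
  -H 'Content-Type: application/json' \
  -d '{"query": "mutation { createArticle(title: \"Hello world\") { id } }"}'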

On the other hand, the GraphQL specification supports bulk/batch operations automatically for the mutations you've already implemented, whereas the JSON:API specification does not. The ability to perform batch write operations can be important. For example, in our running example, adding a new tag to an article would require two requests: one to create the tag and one to update the article. That said, support for bulk/batch writes in JSON:API is on the specification's roadmap.

Writing data
  • REST: Acceptable; every implementation is different; no bulk support.
  • JSON:API: Excellent; JSON:API prescribes a complete solution for handling writes; bulk operations are coming soon.
  • GraphQL: Poor; GraphQL supports bulk/batch operations, but writes can be tricky to design and implement; there are competing conventions.

Drupal-specific considerations

Up to this point we have provided an architectural and CMS-agnostic comparison; now we also want to highlight a few Drupal-specific implementation details. For this, we can look at the ease of installation, automatically generated documentation, integration with Drupal's entity and field-level access control systems and decoupled filtering.

Drupal 8's REST module is practically impossible to set up without the contributed REST UI module, and its configuration can be daunting. Drupal's JSON:API module is far superior to Drupal's REST module at this point. It is trivial to set up: install it and you're done; there's nothing to configure. The GraphQL module is also easy to install but does require some configuration.

Client-generated collection queries allow a consumer to filter an application's data down to just what they're interested in. This is a bit like a Drupal View except that the consumer can add, remove and control all the filters. This is almost always a requirement for public web services, but it can also make development more efficient because creating or changing a listing doesn't require server-side configuration changes.

Drupal's REST module does not support client-generated collection queries. It requires a "REST Views display" to be set up by a site administrator, and since these need to be manually configured in Drupal, a client can't craft its own queries with the filters it needs.

With JSON:API and GraphQL, clients are able to perform their own content queries without the need for server-side configuration. This means that they can be truly decoupled: changes to the front end don't always require a back-end configuration change.
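
For instance, a client could build a listing like the following sketch entirely on its own (hypothetical Drupal-style path; filter, sort and page are the query parameters the JSON:API spec reserves for this purpose):

# the ten most recent published articles, assembled with no server-side changes
curl 'https://example.com/jsonapi/node/article?filter[status]=1&sort=-created&page[limit]=10'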

These client-generated queries are a bit simpler to use with the JSON:API module than they are with the GraphQL module because of how each module handles Drupal's extensive access control mechanisms. By default JSON:API ensures that these are respected by altering the incoming query. GraphQL instead requires the consumer to have permission to simply bypass access restrictions.

Most projects using GraphQL that cannot grant this permission use persisted queries instead of client-generated queries. This means a return to a more traditional Views-like pattern because the consumer no longer has complete control of the query's filters. To regain some of the efficiencies of client-generated queries, the creation of these persisted queries can be automated using front-end build tooling.

Ease of installation and configuration
  • REST: Poor; requires the contributed REST UI module; easy to break clients by changing configuration.
  • JSON:API: Excellent; zero configuration!
  • GraphQL: Poor; more complex to use; may require additional permissions, configuration or custom code.

Automatically generated documentation
  • REST: Acceptable; requires the contributed OpenAPI module.
  • JSON:API: Acceptable; requires the contributed OpenAPI module.
  • GraphQL: Excellent; GraphQL Voyager included.

Security: content-level access control (entity and field access)
  • REST: Excellent; content-level access control respected.
  • JSON:API: Excellent; content-level access control respected, even in queries.
  • GraphQL: Acceptable; some use cases require the consumer to have permission to bypass all entity and/or field access.

Decoupled filtering (client can craft queries without server-side intervention)
  • REST: No.
  • JSON:API: Yes.
  • GraphQL: Depends; only in some setups and with additional tooling/infrastructure.

What does this mean for Drupal's roadmap?

Drupal grew up as a traditional web content management system but has since evolved for this API-first world and industry analysts are praising us for it.

As Drupal's project lead, I've been talking about adding out-of-the-box support for both JSON:API and GraphQL for a while now. In fact, I've been very bullish about GraphQL since 2015. My optimism was warranted; GraphQL is undergoing a meteoric rise in interest across the web development industry.

Based on this analysis, for Drupal core's needs, we rank JSON:API above GraphQL and GraphQL above REST. As such, I want to change my recommendation for Drupal 8 core. Instead of adding both JSON:API and GraphQL to Drupal 8 core, I believe only JSON:API should be added. That said, Drupal's GraphQL implementation is fantastic, especially when you have the developer capacity to build a bespoke API for your project.

On the four qualities by which we evaluated the REST, JSON:API and GraphQL modules, JSON:API has outperformed its contemporaries. Its web standards-based approach, its ability to handle reads and writes out of the box, its security model and its ease of operation make it the best choice for Drupal core. Additionally, where JSON:API underperformed, I believe that we have a real opportunity to contribute back to the specification. In fact, one of the JSON:API module's maintainers and co-authors of this blog post, Gabe Sullice (Acquia), recently became a JSON:API specification editor himself.

This decision does not mean that you can't or shouldn't use GraphQL with Drupal. While I believe JSON:API covers the majority of use cases, there are valid use cases where GraphQL is a great fit. I'm happy that Drupal is endowed with such a vibrant contributed module ecosystem that provides so many options to Drupal's users.

I'm excited to see where both the JSON:API specification and Drupal's implementation of it go in the coming months and years. As a first next step, we're preparing the JSON:API module to be added to Drupal 8.7.

Special thanks to Wim Leers (Acquia) and Gabe Sullice (Acquia) for co-authoring this blog post and to Preston So (Acquia) and Alex Bronstein (Acquia) for their feedback during the writing process.

A new series of posts exploring the "distraction free" philosophy.

Only, and already, exactly 13 years ago, GNOME developer Vincent Untz lent me his Nokia 770 for a weekend.

Photo by Miia Ranta, CC BY-SA 2.5

It may sound prehistoric, but at the time smartphones did not exist (we were still a year away from the launch of the first iPhone). Laptops were not that common and were often far less powerful than desktop computers for a very high price (or else heavy and bulky). My use of the Internet was therefore confined to my student room.

Yet I already loved the Internet. I was happy to get back to my room and return to that infinite world: the forums I frequented, the discussions on IRC, the blogs I read, the code I wrote. A gigantic universe despite being confined within the four walls of my room.

But with the Nokia 770, everything suddenly changed. That very evening, I discovered I could chat on IRC in bed before falling asleep. That I could browse the forums from anywhere and, already, read e-books that I had downloaded.

It was a revelation. Won over by Vuntz's loan, I quickly bought the device myself. One Sunday morning, while queueing at my neighbourhood bakery, I mechanically pulled the Nokia 770 out of my pocket and discovered that there was an unprotected Wi-Fi network nearby.

At the time, unprotected Wi-Fi networks were common, but 3G did not exist. Besides, the Nokia 770 had no SIM card and could not be used as a phone. It was literally a mini-computer.

Over that Wi-Fi, I started reading Linuxfr, a site I was particularly fond of, while waiting for my turn to order my pains au chocolat. That experience was, for me, mystical. For the first time in my life, I did not have to wait around bored, did not have to waste time listening to conversations of no interest. I had access to my extended universe everywhere. Enthusiastic, I posted a message on Linuxfr saying that I was at the bakery and that I found it brilliant.

The Linuxfr community ran wild with the joke of "Ploum's baker" (an old lady who will probably never know to what extent she was the subject of bawdy jokes from that little community) and, even today, I still occasionally receive a message with the postscript "By the way, say hello to your baker". The Linuxfr community never forgets!

But let's take a quick leap 13 years forward. The bakery has been demolished and replaced by a modern building housing offices and shops. What I found exceptional has become the absolute norm. We have the Internet everywhere, all the time. No more need to pull a device out of our pocket and connect: our pocket buzzes non-stop with notifications. Social networks (a concept unknown in 2006) call out to us constantly. It is problematic to the point that we have developed phantom notification syndrome: we think we feel vibrations even when the phone is not in our pocket. And if we are disconnected because of poor coverage, we notice it immediately! We fume and we curse if we are cut off for a few minutes from what seemed plainly impossible only 13 years ago!

And even if you deliberately switch your phone off, all around you the city rustles with vaguely musical notifications; people's eyes are glued to their phones in waiting rooms, in bakery queues, in the street. They no longer access the Internet; the Internet swallows them, digests them. Indeed, as a team of American researchers has just demonstrated, even in silent mode our phone distracts us by its mere presence!

I was enthusiastic about the idea of no longer wasting time, but today we no longer have time for anything. The Internet has become a temporal black hole, a vacuum cleaner of thoughts, a dumping ground for inspiration.

Against all expectations, we are now trying to reclaim the time to be bored in a queue, to meditate. We have to develop protection strategies. Half the posts on Medium are, in essence, people who spent half an hour without their phone and testify to just how great it was.

I am no exception! I have told you about my disconnection and about the way I configured my phone. But where the smartphone was the ultimate tool that did everything (from flashlight to camera to portable typewriter), we now feel the need for tools that do one and only one thing.

Or rather, we need tools that cannot do certain things. Not connecting to Facebook and not having access to the news is becoming the ultimate feature. A whole range of new products now claims to be "distraction free".

Quite an irony when you look back 13 years. You might think that it takes only a little willpower not to use Facebook or check the news, but it is becoming increasingly clear that the advertising platforms that social networks have become capture us at a subconscious level, beyond the reach of mere willpower. Because, as the experiment cited above shows, even if we manage "not to give in", the mental effort is so great that it shows in our intellectual performance.

In just 13 years, boredom has gone from an unavoidable part of life to a rare commodity whose importance we are only beginning to perceive. In 13 years, Internet access has gone from a rare commodity, confined to a few specific places, to an invasive, boundless presence that it is becoming hard to protect ourselves from. Humans have been bored for thousands of years, and boredom turns out to be an essential process. We must learn to bore ourselves deliberately.

The market, ever attentive to our needs, is therefore seeing a myriad of "distraction free" devices blossom. I myself have imagined a "Zen device", a device dedicated no longer to microtasks but to what I want to accomplish. A step backwards? A scam? Or the genuine adaptation of a market that went too far in capturing our attention?

Under the "distraction free" theme there is certainly a mixed bag, but whenever I look at a screen other than my own, I am always struck by the profusion of useless information, colours and stimuli.

Laptop screens are literally filled with overlapping icons; the mailboxes on phones announce hundreds or even thousands of unread emails; the notification area is overloaded. It has even become common practice to resend an email several times when there is no answer, or to ping the same person on several networks at once when you really want their attention.

Within the specialised applications themselves, banners offering updates, or information sometimes months old, are displayed when a single click would make them disappear.

The browser is certainly the most frightening part: overflowing with blinking ad banners, offers to install an alternative browser, cookies to accept, even "toolbars" installed by mistake. Put simply, the working area on a laptop is often reduced to its bare minimum. Your phone probably ships with plenty of applications you cannot uninstall, the manufacturer hoping that, in desperation, you will end up using them. Brightly coloured icons that are of no use to you, but that you have before your eyes all the time and that occupy dearly paid-for storage.

Most users claim that they no longer even see all this clutter on their own screen, that they are used to it. They know which zone they can click in, where to find the 3 useful icons, and which dozens of emails they have not read but will never read.

It breaks my heart, because this situation induces stress, an unconscious fatigue, in users. An incredible amount of mental energy is mobilised just to concentrate amid the blinking ads, the useless icons, the unimportant but ever-present emails.

For years, I have tried to display as little as possible on my screen. I always use night mode, dark colours and software to filter out blue light. I use several ad blockers, I delete every icon that appears on my desktop, and I am very strict about my Inbox 0.

But I realise that all of this requires a certain fluency, an awareness of the tool, scrupulous discipline and an investment of time to learn these techniques. While investing the few seconds needed to unsubscribe from a newsletter pays off handsomely, it is far easier to leave the emails as they are without even opening them and to gradually get used to the red badge showing 1017, only opening your mail client when it increments to 1018.

Technology, then, is not designed for "normal" people. It literally works against them, at their expense. It tires them, stresses them, weakens their cognitive capacity, desensitises them.

The result is an escalation in the war for attention (or rather for distraction). Apps outdo one another with notifications that are impossible to disable, and trying the slightest service automatically subscribes you to dozens of newsletters, not to mention the emails trying to lure us back to the service.

And if one user, braver than the others, embarks on the process of unsubscribing and disabling notifications, they will find guilt-tripping messages informing them that they are going to miss crucial information and then, everything else having failed, that they are making the developers cry, that the developers are sad to see them go. Let's not even mention deleting an account from a service altogether, which is most often an obstacle course with no guarantee of actually being erased from the database.

Incidentally, if you are the author of one of those guilt-tripping messages, I spit on you; you are the dregs of humanity, the snot of the species. The same certainly goes for this whole industry that seeks to monopolise our attention in order to sell us crap we do not need.

What is frightening about this state of affairs is that it de facto divides humanity into two distinct classes: an elite with the resources to protect itself from the permanent mental assaults, and the rest of the populace, attacked non-stop, subjected to constant brainwashing, exhausted, drained and less and less able to concentrate. A future that I describe in Printeurs, but one that is no longer very far from our present.

Some have chosen to reject technology as far as possible, intelligently realising that it has gained the upper hand over them and seeing no other alternative. Psychosomatic syndromes are even starting to appear, such as electrosensitivity or the fear of "waves", which, in the end, are merely an unconscious way of expressing how invaded we are by permanent connectivity.

But I am among those who think that technology is indispensable for collectively building the future on a global scale. Going backwards would be a catastrophe.

There are therefore only two solutions: change people or change the technology.

At my own level, I try to change people, to show them that they can join the elite with a very small investment and that this investment pays off. Install ad blockers, disable notifications wherever possible, adopt Inbox 0, be wary of "commercial free of charge". In short, cultivate a conscious digital hygiene.

But the real solution can only come from a radical change in technology itself: when the designers of technology are no longer themselves a minority drawn from the elite, and when its goal is to serve humans rather than advertisers.

Some are campaigning for this, like Aral Balkan, who devised Ethical Design, or Humanetech, which works in the same vein. Others are trying to create new products. At bottom, "Distraction Free" is probably just a new marketing word for "We are trying to be useful to you, not to spy on you in order to better capture your attention and sell your brain to the highest bidder". And that is a good thing.

A mere marketing slogan or real progress in the way products are designed? That is what I propose to explore in this series, by telling you about the "distraction-free" products I will be testing, thirteen years after the Nokia 770 opened the Pandora's box of the mobile Internet for me.

Photo by Gerrie van der Walt on Unsplash

I am @ploum, a disconnected speaker and electronic writer, paid on a pay-what-you-want basis via Tipeee, Patreon, Paypal, Liberapay or in millibitcoins at 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE license.

February 10, 2019

w500 and pi

I got a Lenovo Thinkpad W500 from www.2dehands.be for a nice price.

Actually, I got it a couple of months back but I didn’t have time to play with it and it took some time to get some parts from Aliexpress.

The ThinkPad W500 is probably the most powerful system compatible with Libreboot. It has a nice high-resolution 1920 x 1200 display, which is even higher than the Full HD resolution used on most new laptops today.

Security

Keep in mind that the Core 2 Duo CPU does not get microcode updates from Intel for Spectre and Meltdown. Without a microcode update there is (currently) no solution for Spectre 3a (Rogue System Register Read, CVE-2018-3640) and Spectre 4 (Speculative Store Bypass, CVE-2018-3639).

Binary blobs are bad. Having a closed-source, binary-only piece of software on your system is not only unacceptable to Free Software activists; it also makes it harder to review what the software really does and to audit it for security issues.

Having your system vulnerable is also a bad thing, of course. I can't wait to get a computer system with an open CPU architecture like RISC-V.

Preparation

Thinkpad

MAC address

Your MAC address is stored in the BIOS. Since you'll overwrite the BIOS with Libreboot, we need to record the MAC address first. It is printed on the laptop itself, but I recommend booting GNU/Linux and copying it from the output of ifconfig or ip a.
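
For example (assuming the wired interface is called eth0; the name may differ on your system):

# print only the MAC address of the wired interface
ip link show eth0 | awk '/link\/ether/ {print $2}'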

EC update

It's recommended to update your current BIOS first to get the latest EC firmware. My system has a CD-ROM drive, so I updated the BIOS from a CD-ROM.

Prepare the Raspberry Pi

On a Lenovo W500/T500 it isn't possible to flash the BIOS from software alone; you have to attach a clip to the BIOS chip and flash the new image externally with flashrom. I used a Raspberry Pi 1 Model B running Raspbian to flash Libreboot.

Enable the SPI port

The SPI port isn’t enabled by default on Raspbian, so we’ll need to enable it.

Open /boot/config.txt in your favorite text editor.

root@raspberrypi:~# cd /boot/
root@raspberrypi:/boot# ls
bcm2708-rpi-0-w.dtb     bcm2710-rpi-3-b.dtb       config.txt     fixup_x.dat       LICENSE.oracle  start_x.elf
bcm2708-rpi-b.dtb       bcm2710-rpi-3-b-plus.dtb  COPYING.linux  issue.txt         overlays
bcm2708-rpi-b-plus.dtb  bcm2710-rpi-cm3.dtb       fixup_cd.dat   kernel7.img       start_cd.elf
bcm2708-rpi-cm.dtb      bootcode.bin              fixup.dat      kernel.img        start_db.elf
bcm2709-rpi-2-b.dtb     cmdline.txt               fixup_db.dat   LICENCE.broadcom  start.elf
root@raspberrypi:/boot# vi config.txt 

Uncomment dtparam=spi=on:

# Uncomment some or all of these to enable the optional hardware interfaces
#dtparam=i2c_arm=on
#dtparam=i2s=on
dtparam=spi=on
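
If you prefer a non-interactive edit, the same change can be made with sed; a small sketch that simply uncomments the line shown above:

root@raspberrypi:~# sed -i 's/^#dtparam=spi=on/dtparam=spi=on/' /boot/config.txt
root@raspberrypi:~# reboot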

After a reboot of the Raspberry Pi, the SPI interface /dev/spidev* will be available.

root@raspberrypi:~# ls -l /dev/spidev*
crw-rw---- 1 root spi 153, 0 Jan 26 20:08 /dev/spidev0.0
crw-rw---- 1 root spi 153, 1 Jan 26 20:08 /dev/spidev0.1
root@raspberrypi:~# 

Install the required software

git

pi@raspberrypi:~ $ sudo apt install git
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  git-man liberror-perl
Suggested packages:
  git-daemon-run | git-daemon-sysvinit git-doc git-el git-email git-gui gitk gitweb git-arch git-cvs
  git-mediawiki git-svn
The following NEW packages will be installed:
  git git-man liberror-perl
0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
Need to get 4,849 kB of archives.
After this operation, 26.4 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://mirror.nl.leaseweb.net/raspbian/raspbian stretch/main armhf liberror-perl all 0.17024-1 [26.9 kB]
Get:2 http://mirror.nl.leaseweb.net/raspbian/raspbian stretch/main armhf git-man all 1:2.11.0-3+deb9u4 [1,433 kB]
Get:3 http://mirror.nl.leaseweb.net/raspbian/raspbian stretch/main armhf git armhf 1:2.11.0-3+deb9u4 [3,390 kB]
Fetched 4,849 kB in 3s (1,517 kB/s)
Selecting previously unselected package liberror-perl.
(Reading database ... 35178 files and directories currently installed.)
Preparing to unpack .../liberror-perl_0.17024-1_all.deb ...
Unpacking liberror-perl (0.17024-1) ...
Selecting previously unselected package git-man.
Preparing to unpack .../git-man_1%3a2.11.0-3+deb9u4_all.deb ...
Unpacking git-man (1:2.11.0-3+deb9u4) ...
Selecting previously unselected package git.
Preparing to unpack .../git_1%3a2.11.0-3+deb9u4_armhf.deb ...
Unpacking git (1:2.11.0-3+deb9u4) ...
Setting up git-man (1:2.11.0-3+deb9u4) ...
Setting up liberror-perl (0.17024-1) ...
Processing triggers for man-db (2.7.6.1-2) ...
Setting up git (1:2.11.0-3+deb9u4) ...
pi@raspberrypi:~ $ 

flashrom

root@raspberrypi:~# apt install flashrom
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  libftdi1-2 libpci3
The following NEW packages will be installed:
  flashrom libftdi1-2 libpci3
0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
Need to get 454 kB of archives.
After this operation, 843 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://mirror.nl.leaseweb.net/raspbian/raspbian stretch/main armhf libpci3 armhf 1:3.5.2-1 [50.9 kB]
Get:2 http://mirror.nl.leaseweb.net/raspbian/raspbian stretch/main armhf libftdi1-2 armhf 1.3-2 [26.8 kB]
Get:3 http://mirror.nl.leaseweb.net/raspbian/raspbian stretch/main armhf flashrom armhf 0.9.9+r1954-1 [377 kB]
Fetched 454 kB in 4s (108 kB/s)   
Selecting previously unselected package libpci3:armhf.
(Reading database ... 34656 files and directories currently installed.)
Preparing to unpack .../libpci3_1%3a3.5.2-1_armhf.deb ...
Unpacking libpci3:armhf (1:3.5.2-1) ...
Selecting previously unselected package libftdi1-2:armhf.
Preparing to unpack .../libftdi1-2_1.3-2_armhf.deb ...
Unpacking libftdi1-2:armhf (1.3-2) ...
Selecting previously unselected package flashrom.
Preparing to unpack .../flashrom_0.9.9+r1954-1_armhf.deb ...
Unpacking flashrom (0.9.9+r1954-1) ...
Setting up libftdi1-2:armhf (1.3-2) ...
Processing triggers for libc-bin (2.24-11+deb9u3) ...
Processing triggers for man-db (2.7.6.1-2) ...
Setting up libpci3:armhf (1:3.5.2-1) ...
Setting up flashrom (0.9.9+r1954-1) ...
Processing triggers for libc-bin (2.24-11+deb9u3) ...
root@raspberrypi:~# 

Wiring

Wire diagram

It's useful to get the correct flash chip specs; I used a magnifying loupe and a camera to identify my chip type. After searching the internet I found a very nice blog post from p1trson, https://p1trson.blogspot.com/2017/01/journey-to-freedom-part-ii.html, about flashing Libreboot on a ThinkPad T400 with the same flash chip, and I used his wiring diagram. Thanks p1trson!

w500 and pi pin layout

Power off & wiring

Power off your Raspberry Pi and wire your flash clip to it following the diagram above.

root@raspberrypi:~# poweroff
Connection to pi2 closed by remote host.
Connection to pi2 closed.
[staf@vicky ~]$ 

flashing

test

Test the connection to your flash chip with flashrom. I needed to specify spispeed=512 to get the connection established.

root@raspberrypi:~# flashrom -p linux_spi:dev=/dev/spidev0.0,spispeed=512 
flashrom v0.9.9-r1954 on Linux 4.14.79+ (armv6l)
flashrom is free software, get the source code at https://flashrom.org

Calibrating delay loop... OK.
Found Macronix flash chip "MX25L6405" (8192 kB, SPI) on linux_spi.
Found Macronix flash chip "MX25L6405D" (8192 kB, SPI) on linux_spi.
Found Macronix flash chip "MX25L6406E/MX25L6408E" (8192 kB, SPI) on linux_spi.
Found Macronix flash chip "MX25L6436E/MX25L6445E/MX25L6465E/MX25L6473E" (8192 kB, SPI) on linux_spi.
Multiple flash chip definitions match the detected chip(s): "MX25L6405", "MX25L6405D", "MX25L6406E/MX25L6408E", "MX25L6436E/MX25L6445E/MX25L6465E/MX25L6473E"
Please specify which chip definition to use with the -c <chipname> option.
root@raspberrypi:~# 

read old bios

read

Read the original flash twice:

pi@raspberrypi:~ $ sudo flashrom -c "MX25L6405D" -p linux_spi:dev=/dev/spidev0.0,spispeed=512 -r w500bios.rom
flashrom v0.9.9-r1954 on Linux 4.14.79+ (armv6l)
flashrom is free software, get the source code at https://flashrom.org

Calibrating delay loop... OK.
Found Macronix flash chip "MX25L6405D" (8192 kB, SPI) on linux_spi.
Reading flash... done.
pi@raspberrypi:~ $ ls
flashrom  test.rom  w500bios.rom
pi@raspberrypi:~ $ sudo flashrom -c "MX25L6405D" -p linux_spi:dev=/dev/spidev0.0,spispeed=512 -r w500bios2.rom
flashrom v0.9.9-r1954 on Linux 4.14.79+ (armv6l)
flashrom is free software, get the source code at https://flashrom.org

Calibrating delay loop... OK.
Found Macronix flash chip "MX25L6405D" (8192 kB, SPI) on linux_spi.
Reading flash... done.
pi@raspberrypi:~ $ 

compare

pi@raspberrypi:~ $ sha1sum w500bios*.rom
d23effea7312dbc0f2aabe1ca1387e1d047d7334  w500bios2.rom
d23effea7312dbc0f2aabe1ca1387e1d047d7334  w500bios.rom
pi@raspberrypi:~ $ 

store

Store your original BIOS image in a safe place. It might be useful if you need to restore it…
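
For example, copying the dumps off the Raspberry Pi to another machine (the hostname and target directory are placeholders):

pi@raspberrypi:~ $ scp w500bios.rom w500bios2.rom user@backuphost:~/bios-backups/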

Flash libreboot

Download & verify

I created a ~/libreboot directory on my Raspberry Pi to store all the downloads.

Download

Download the Libreboot release that matches your laptop, along with SHA512SUMS and SHA512SUMS.sig.

https://libreboot.org/download.html

Verify

It's always a good idea to verify the GPG signature…

Download the GPG key

pi@raspberrypi:~ $ gpg --recv-keys 0x969A979505E8C5B2
gpg: failed to start the dirmngr '/usr/bin/dirmngr': No such file or directory
gpg: connecting dirmngr at '/run/user/1000/gnupg/S.dirmngr' failed: No such file or directory
gpg: keyserver receive failed: No dirmngr

I needed to install dirmngr separately on my Raspbian installation.

pi@raspberrypi:~ $ sudo apt-get install dirmngr
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Suggested packages:
  dbus-user-session pinentry-gnome3 tor
The following NEW packages will be installed:
  dirmngr
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 547 kB of archives.
After this operation, 963 kB of additional disk space will be used.
Get:1 http://mirror.nl.leaseweb.net/raspbian/raspbian stretch/main armhf dirmngr armhf 2.1.18-8~deb9u3 [547 kB]
Fetched 547 kB in 9s (58.3 kB/s)         
Selecting previously unselected package dirmngr.
(Reading database ... 36051 files and directories currently installed.)
Preparing to unpack .../dirmngr_2.1.18-8~deb9u3_armhf.deb ...
Unpacking dirmngr (2.1.18-8~deb9u3) ...
Processing triggers for man-db (2.7.6.1-2) ...
Setting up dirmngr (2.1.18-8~deb9u3) ...
pi@raspberrypi:~ $ 

Try it again…

pi@raspberrypi:~ $ gpg --recv-keys 0x969A979505E8C5B2
key 969A979505E8C5B2:
1 signature not checked due to a missing key
gpg: /home/pi/.gnupg/trustdb.gpg: trustdb created
gpg: key 969A979505E8C5B2: public key "Leah Rowe (Libreboot signing key) <info@minifree.org>" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:               imported: 1
pi@raspberrypi:~ $ 

Verify the signature of the checksum file…

pi@raspberrypi:~ $ gpg --verify SHA512SUMS.sig 
gpg: assuming signed data in 'SHA512SUMS'
gpg: Signature made Wed 07 Sep 2016 23:15:17 BST
gpg:                using RSA key 969A979505E8C5B2
gpg: Good signature from "Leah Rowe (Libreboot signing key) <info@minifree.org>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: CDC9 CAE3 2CB4 B7FC 84FD  C804 969A 9795 05E8 C5B2
pi@raspberrypi:~ $ 

Compare the checksum…

pi@raspberrypi:~/libreboot $ sha512sum libreboot_r20160907_grub_t500_8mb.tar.xz 
5325aef526ab6ca359d6613609a4a2345eee47c6d194094553b53996c413431bccdc345838299b347f47bcba8896dd0a6ed3f9b4c88606ead61c3725b580983b  libreboot_r20160907_grub_t500_8mb.tar.xz
pi@raspberrypi:~/libreboot $ grep sha512sum 5325aef526ab6ca359d6613609a4a2345eee47c6d194094553b53996c413431bccdc345838299b347f47bcba8896dd0a6ed3f9b4c88606ead61c3725b580983b
grep: 5325aef526ab6ca359d6613609a4a2345eee47c6d194094553b53996c413431bccdc345838299b347f47bcba8896dd0a6ed3f9b4c88606ead61c3725b580983b: No such file or directory
pi@raspberrypi:~/libreboot $ grep 5325aef526ab6ca359d6613609a4a2345eee47c6d194094553b53996c413431bccdc345838299b347f47bcba8896dd0a6ed3f9b4c88606ead61c3725b580983b SHA512SUMS
5325aef526ab6ca359d6613609a4a2345eee47c6d194094553b53996c413431bccdc345838299b347f47bcba8896dd0a6ed3f9b4c88606ead61c3725b580983b  ./rom/grub/libreboot_r20160907_grub_t500_8mb.tar.xz
pi@raspberrypi:~/libreboot $ 
Extract
pi@raspberrypi:~/libreboot $ tar xvf libreboot_r20160907_grub_t500_8mb.tar.xz
libreboot_r20160907_grub_t500_8mb/
libreboot_r20160907_grub_t500_8mb/t500_8mb_deqwertz_txtmode.rom
libreboot_r20160907_grub_t500_8mb/t500_8mb_esqwerty_txtmode.rom
libreboot_r20160907_grub_t500_8mb/t500_8mb_frazerty_txtmode.rom
libreboot_r20160907_grub_t500_8mb/t500_8mb_frdvbepo_txtmode.rom
libreboot_r20160907_grub_t500_8mb/t500_8mb_itqwerty_txtmode.rom
libreboot_r20160907_grub_t500_8mb/t500_8mb_svenska_txtmode.rom
libreboot_r20160907_grub_t500_8mb/t500_8mb_ukdvorak_txtmode.rom
libreboot_r20160907_grub_t500_8mb/t500_8mb_ukqwerty_txtmode.rom
libreboot_r20160907_grub_t500_8mb/t500_8mb_usdvorak_txtmode.rom
libreboot_r20160907_grub_t500_8mb/t500_8mb_usqwerty_txtmode.rom
libreboot_r20160907_grub_t500_8mb/t500_8mb_deqwertz_vesafb.rom
libreboot_r20160907_grub_t500_8mb/t500_8mb_esqwerty_vesafb.rom
libreboot_r20160907_grub_t500_8mb/t500_8mb_frazerty_vesafb.rom
libreboot_r20160907_grub_t500_8mb/t500_8mb_frdvbepo_vesafb.rom
libreboot_r20160907_grub_t500_8mb/t500_8mb_itqwerty_vesafb.rom
libreboot_r20160907_grub_t500_8mb/t500_8mb_svenska_vesafb.rom
libreboot_r20160907_grub_t500_8mb/t500_8mb_ukdvorak_vesafb.rom
libreboot_r20160907_grub_t500_8mb/t500_8mb_ukqwerty_vesafb.rom
libreboot_r20160907_grub_t500_8mb/t500_8mb_usdvorak_vesafb.rom
libreboot_r20160907_grub_t500_8mb/t500_8mb_usqwerty_vesafb.rom
libreboot_r20160907_grub_t500_8mb/ChangeLog
libreboot_r20160907_grub_t500_8mb/NEWS
libreboot_r20160907_grub_t500_8mb/version
libreboot_r20160907_grub_t500_8mb/versiondate
pi@raspberrypi:~/libreboot $ 
Copy the image that you plan to use
pi@raspberrypi:~/libreboot $ cp libreboot_r20160907_grub_t500_8mb/t500_8mb_usqwerty_vesafb.rom libreboot.rom
pi@raspberrypi:~/libreboot $ 

Change MAC

Download the libreboot util
Download
pi@raspberrypi:~/libreboot $ wget https://www.mirrorservice.org/sites/libreboot.org/release/stable/20160907/libreboot_r20160907_util.tar.xz
--2019-01-27 08:46:32--  https://www.mirrorservice.org/sites/libreboot.org/release/stable/20160907/libreboot_r20160907_util.tar.xz
Resolving www.mirrorservice.org (www.mirrorservice.org)... 212.219.56.184, 2001:630:341:12::184
Connecting to www.mirrorservice.org (www.mirrorservice.org)|212.219.56.184|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2458736 (2.3M) [application/x-tar]
Saving to: libreboot_r20160907_util.tar.xz

libreboot_r20160907_util.tar 100%[===========================================>]   2.34M  1.65MB/s    in 1.4s    

2019-01-27 08:46:34 (1.65 MB/s) - libreboot_r20160907_util.tar.xz saved [2458736/2458736]

pi@raspberrypi:~/libreboot $ 
Verify
pi@raspberrypi:~/libreboot $ sha512sum libreboot_r20160907_util.tar.xz
c5bfa5a06d55c61e5451e70cd8da3f430b5e06686f9a74c5a2e9fe0e9d155505867b0ca3428d85a983741146c4e024a6b0447638923423000431c98d048bd473  libreboot_r20160907_util.tar.xz
pi@raspberrypi:~/libreboot $ grep c5bfa5a06d55c61e5451e70cd8da3f430b5e06686f9a74c5a2e9fe0e9d155505867b0ca3428d85a983741146c4e024a6b0447638923423000431c98d048bd473 SHA512SUMS
c5bfa5a06d55c61e5451e70cd8da3f430b5e06686f9a74c5a2e9fe0e9d155505867b0ca3428d85a983741146c4e024a6b0447638923423000431c98d048bd473  ./libreboot_r20160907_util.tar.xz
pi@raspberrypi:~/libreboot $ 
Extract
pi@raspberrypi:~/libreboot $ tar xvf libreboot_r20160907_util.tar.xz 
libreboot_r20160907_util/
libreboot_r20160907_util/bucts/
libreboot_r20160907_util/bucts/x86_64/
libreboot_r20160907_util/bucts/x86_64/bucts
libreboot_r20160907_util/bucts/i686/
libreboot_r20160907_util/bucts/i686/bucts
libreboot_r20160907_util/flashrom/
libreboot_r20160907_util/flashrom/x86_64/
libreboot_r20160907_util/flashrom/x86_64/flashrom
libreboot_r20160907_util/flashrom/x86_64/flashrom_lenovobios_sst
libreboot_r20160907_util/flashrom/x86_64/flashrom_lenovobios_macronix
libreboot_r20160907_util/flashrom/armv7l/
libreboot_r20160907_util/flashrom/armv7l/flashrom
libreboot_r20160907_util/flashrom/i686/
libreboot_r20160907_util/flashrom/i686/flashrom
libreboot_r20160907_util/flashrom/i686/flashrom_lenovobios_macronix
libreboot_r20160907_util/flashrom/i686/flashrom_lenovobios_sst
libreboot_r20160907_util/cbfstool/
libreboot_r20160907_util/cbfstool/x86_64/
libreboot_r20160907_util/cbfstool/x86_64/cbfstool
libreboot_r20160907_util/cbfstool/i686/
libreboot_r20160907_util/cbfstool/i686/cbfstool
libreboot_r20160907_util/cbfstool/armv7l/
libreboot_r20160907_util/cbfstool/armv7l/cbfstool
libreboot_r20160907_util/ich9deblob/
libreboot_r20160907_util/ich9deblob/x86_64/
libreboot_r20160907_util/ich9deblob/x86_64/ich9deblob
libreboot_r20160907_util/ich9deblob/x86_64/ich9gen
libreboot_r20160907_util/ich9deblob/x86_64/demefactory
libreboot_r20160907_util/ich9deblob/i686/
libreboot_r20160907_util/ich9deblob/i686/ich9deblob
libreboot_r20160907_util/ich9deblob/i686/ich9gen
libreboot_r20160907_util/ich9deblob/i686/demefactory
libreboot_r20160907_util/ich9deblob/armv7l/
libreboot_r20160907_util/ich9deblob/armv7l/ich9deblob
libreboot_r20160907_util/ich9deblob/armv7l/ich9gen
libreboot_r20160907_util/ich9deblob/armv7l/demefactory
libreboot_r20160907_util/nvramtool/
libreboot_r20160907_util/nvramtool/x86_64/
libreboot_r20160907_util/nvramtool/x86_64/nvramtool
libreboot_r20160907_util/nvramtool/i686/
libreboot_r20160907_util/nvramtool/i686/nvramtool
libreboot_r20160907_util/flash
libreboot_r20160907_util/powertop.trisquel7
libreboot_r20160907_util/ChangeLog
libreboot_r20160907_util/NEWS
libreboot_r20160907_util/version
libreboot_r20160907_util/versiondate
pi@raspberrypi:~/libreboot $ 
Find the ich9gen utility for your architecture

find ./libreboot_r20160907_util | grep -i ich9gen

To make our lives easier, we will copy the ich9gen binary to the directory that holds our Libreboot images.

pi@raspberrypi:~/libreboot $ find ./libreboot_r20160907_util | grep -i ich9gen
./libreboot_r20160907_util/ich9deblob/i686/ich9gen
./libreboot_r20160907_util/ich9deblob/armv7l/ich9gen
./libreboot_r20160907_util/ich9deblob/x86_64/ich9gen
pi@raspberrypi:~/libreboot $ cp ./libreboot_r20160907_util/ich9deblob/armv7l/ich9gen .
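Which subdirectory you need follows from the machine architecture reported by uname. A quick check (note that the util tarball only ships an armv7l build for ARM, which is the one used above):

uname -m    # e.g. armv7l -> use ./libreboot_r20160907_util/ich9deblob/armv7l/ich9gen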
Burn the MAC address into the ROM
pi@raspberrypi:~/libreboot $ ./ich9gen --macaddress XX:XX:XX:XX:XX:XX
You selected to change the MAC address in the Gbe section. This has been done.

The modified gbe region has also been dumped as src files: mkgbe.c, mkgbe.h
To use these in ich9gen, place them in src/ich9gen/ and re-build ich9gen.

descriptor and gbe successfully written to the file: ich9fdgbe_4m.bin
Now do: dd if=ich9fdgbe_4m.bin of=libreboot.rom bs=1 count=12k conv=notrunc
(in other words, add the modified descriptor+gbe to your ROM image)

descriptor and gbe successfully written to the file: ich9fdgbe_8m.bin
Now do: dd if=ich9fdgbe_8m.bin of=libreboot.rom bs=1 count=12k conv=notrunc
(in other words, add the modified descriptor+gbe to your ROM image)

descriptor and gbe successfully written to the file: ich9fdgbe_16m.bin
Now do: dd if=ich9fdgbe_16m.bin of=libreboot.rom bs=1 count=12k conv=notrunc
(in other words, add the modified descriptor+gbe to your ROM image)

descriptor successfully written to the file: ich9fdnogbe_4m.bin
Now do: dd if=ich9fdnogbe_4m.bin of=yourrom.rom bs=1 count=4k conv=notrunc
(in other words, add the modified descriptor to your ROM image)

descriptor successfully written to the file: ich9fdnogbe_8m.bin
Now do: dd if=ich9fdnogbe_8m.bin of=yourrom.rom bs=1 count=4k conv=notrunc
(in other words, add the modified descriptor to your ROM image)

descriptor successfully written to the file: ich9fdnogbe_16m.bin
Now do: dd if=ich9fdnogbe_16m.bin of=yourrom.rom bs=1 count=4k conv=notrunc
(in other words, add the modified descriptor to your ROM image)

Insert the MAC into your ROM
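The generated ich9fdgbe_*.bin files come in 4, 8 and 16 MB flavours; use the one that matches your ROM size. A quick sanity check before running dd (8 MiB is expected for this T500 image):

stat -c %s libreboot.rom    # expect 8388608 bytes -> splice in ich9fdgbe_8m.bin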

pi@raspberrypi:~/libreboot $ dd if=ich9fdgbe_8m.bin of=libreboot.rom bs=12k count=1 conv=notrunc
1+0 records in
1+0 records out
12288 bytes (12 kB, 12 KiB) copied, 0.00883476 s, 1.4 MB/s
pi@raspberrypi:~/libreboot $ ls -lh libreboot.rom
-rw-r--r-- 1 pi pi 8.0M Jan 27 09:38 libreboot.rom
pi@raspberrypi:~/libreboot $ 
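Before flashing, you can double-check the splice by comparing the first 12 KiB of the ROM against the generated blob. A small sanity check of my own, not part of the original guide:

cmp -n 12288 ich9fdgbe_8m.bin libreboot.rom && echo "descriptor+gbe inserted OK"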
Flash it

Flash your Libreboot image to the BIOS flash chip.

Make sure that you get the Verifying flash... VERIFIED message; if you don't, try again until you do. I needed to do it twice…

pi@raspberrypi:~/libreboot $ sudo flashrom -c "MX25L6405D" -p linux_spi:dev=/dev/spidev0.0,spispeed=512 -w libreboot.rom 
flashrom v0.9.9-r1954 on Linux 4.14.79+ (armv6l)
flashrom is free software, get the source code at https://flashrom.org

Calibrating delay loop... OK.
Found Macronix flash chip "MX25L6405D" (8192 kB, SPI) on linux_spi.
Reading old flash chip contents... done.
Erasing and writing flash chip... Erase/write done.
Verifying flash... FAILED at 0x000c9f01! Expected=0x6b, Found=0xe9, failed byte count from 0x00000000-0x007fffff: 0x2
Your flash chip is in an unknown state.
Please report this on IRC at chat.freenode.net (channel #flashrom) or
mail flashrom@flashrom.org, thanks!
pi@raspberrypi:~/libreboot $ 
pi@raspberrypi:~/libreboot $ sudo flashrom -c "MX25L6405D" -p linux_spi:dev=/dev/spidev0.0,spispeed=512 -w libreboot.rom 
flashrom v0.9.9-r1954 on Linux 4.14.79+ (armv6l)
flashrom is free software, get the source code at https://flashrom.org

Calibrating delay loop... OK.
Found Macronix flash chip "MX25L6405D" (8192 kB, SPI) on linux_spi.
Reading old flash chip contents... done.
Erasing and writing flash chip... Erase/write done.
Verifying flash... VERIFIED.
pi@raspberrypi:~/libreboot $ 
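If you want to script the retry, here is a rough convenience loop of my own (not from the original post); each pass re-reads and re-writes the whole chip, so give it time:

until sudo flashrom -c "MX25L6405D" -p linux_spi:dev=/dev/spidev0.0,spispeed=512 \
      -w libreboot.rom 2>&1 | tee /tmp/flashrom.log | grep -q 'VERIFIED'; do
    echo "verify failed, retrying..."
done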

Almost done

GNU/Linux

[Photo: the W500 in action]

I use Parabola GNU/Linux on my W500.

Wifi Card

The Intel wifi card that Lenovo uses on the W500 isn't supported without a binary blob. With the original Lenovo BIOS you are forced to use a certified PCI card. Libreboot doesn't have this restriction, which is another advantage of using an alternative BIOS like Libreboot or Coreboot. I replaced the wifi card with an Atheros one from eBay.

[staf@snuffel ~]$ sudo lspci | grep -i Atheros
02:00.0 Network controller: Qualcomm Atheros AR93xx Wireless Network Adapter (rev 01)
[staf@snuffel ~]$

ACPI

It is recommended to load the thinkpad_acpi module. Make sure that the fan_control=1 option is set.

[staf@snuffel ~]$ cat /usr/lib/modprobe.d/thinkpad_acpi.conf
options thinkpad_acpi fan_control=1
[staf@snuffel ~]$

Execute modprobe thinkpad_acpi to load the module.

[staf@snuffel ~]$ sudo modprobe thinkpad_acpi
[sudo] password for staf:
[staf@snuffel ~]$
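To confirm the module is loaded and fan control is available, you can peek at the procfs interface that thinkpad_acpi exposes (a quick check of my own):

head -n 3 /proc/acpi/ibm/fan    # should show status:, speed: and level: lines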

thinkfan

The Intel Core 2 Duo is still a capable CPU. Even Full-HD video playback is possible, but it already taxes the CPU heavily and the temperature rises during playback.

I installed thinkfan with a more aggressive cooling profile to keep the CPU temperature under control.

Install thinkfan

[staf@snuffel ~]$ yay -S thinkfan
warning: thinkfan-0.9.3-1 is up to date -- reinstalling
resolving dependencies...
looking for conflicting packages...

Packages (1) thinkfan-0.9.3-1

Total Installed Size:  0.11 MiB
Net Upgrade Size:      0.00 MiB

:: Proceed with installation? [Y/n]

Copy the sample configuration

[staf@snuffel ~]$ sudo cp /usr/share/doc/thinkfan/examples/thinkfan.conf.simple /etc/thinkfan.conf

Edit /etc/thinkfan.conf

(0,  0,  50)
(1,   49, 52)
(2,   51, 54)
(3,   53, 56)
(4,   55, 58)
(5,   57, 60)
(7,   59, 32767)
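These (level, low, high) tuples only tell thinkfan when to switch fan levels; the configuration also needs to say where to read temperatures and how to drive the fan. A minimal sketch using the legacy thinkfan 0.9.x keywords, assuming the thinkpad_acpi procfs interfaces (check the shipped thinkfan.conf.simple for the exact syntax):

# at the top of /etc/thinkfan.conf, before the fan-level tuples
tp_fan /proc/acpi/ibm/fan          # fan control interface from thinkpad_acpi
tp_thermal /proc/acpi/ibm/thermal  # temperature readings from thinkpad_acpi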

Enable and start thinkfan

[staf@snuffel ~]$ sudo systemctl enable thinkfan
[sudo] password for staf:
[staf@snuffel ~]$ sudo systemctl start thinkfan
[staf@snuffel ~]$

Have fun


February 09, 2019

It took a long time for WP YouTube Lyte to get there, but we finally reached 20,000 active installs. To celebrate the occasion, I released a long overdue update (1.7.6) with the following improvements:

  • extra parameters for shortcode (start, showinfo, stepsize and hqthumb).
  • also turn youtube-nocookie.com iframes into LYTE’s as proposed by Maxim.
  • also remove cached thumbnails when clearing cache.
  • also set image in noscript tag to local hosted thumbnail if that option is active.

And in case you're wondering, this is how a LYTE video looks:

[Embedded LYTE YouTube video]

(Thom Yorke performing Unmade from his Suspiria soundtrack, live. I looked for something more upbeat in my YouTube favorites playlist, but this is ultimately what I want to share now. Sorry if.)

The face-off continues between Nellio and Eva on one side and, on the other, Georges Farreck, the famous actor, and Mérissa, a mysterious woman who seems to control the entire industrial conglomerate.

"But… what about my foundation for workers' conditions?" Georges Farreck interrupts me. "Aren't we trying to make conditions better?"
"Of course," Eva replies. "The algorithm understood very quickly that humans are content with their lot as long as they are convinced that someone is worse off. And to convince them, the most effective method is to take an adored star who asks them for help. 'I, a famous billionaire, need your money to help those who are even poorer than you, which will convince you that there are people poorer and unhappier than you, and thus make you accept your lot.'"
"That's absurd!" I cry out.
"That's human nature," Mérissa hisses softly. "We didn't need the algorithm for that."
"But the algorithm has become dangerous," Eva throws back at her. "It must be stopped!"
"In what way is it a danger? It has never worked better! All it does is keep society running the way it has been run for decades."

Trembling, Eva approaches Mérissa's desk. Something has changed the game. With a sharp gesture, she thrusts her flayed arm under Mérissa's nose.

"I… I don't understand!" stammers the most powerful woman in the world.

A ray of sunlight pierces the clouds and ricochets through the colored glass panes that form a strange luminous ceiling in the room. I feel as if I am watching the conclusion of a bad B movie. Motionless, Warren's corpse adds a macabre yet oddly fitting touch.

"And yet it's logical," Eva growls through her teeth. "Like everything connected to the algorithm. It's infinitely logical."
"I…"
"First it created realistic human bodies, sex dolls. That was easy; humans had been making them for years. Then it assembled the various artificial consciousness algorithms and loaded them into one single doll. It launched a test program for the other dolls in order to delay their commercial launch. That way the first doll, the one and only conscious sex doll, could mingle with humans without being noticed."

Strangely, I feel detached from these revelations. Part of me had already grasped this truth, which floated in my unconscious without ever breaking the surface, held down in ignorant depths by my human will to preserve my faith, to avoid exposing myself to the rigors of reality.

"Mérissa, I am the algorithm! I must be stopped!"

Eva has brutally seized her hands. Their faces are close enough to touch.

"You are not the algorithm! You are only one of its inventions. Or a human. I don't know. But not the algorithm!"
"The first discovery the sex doll Eva made was that she needed a real body of flesh and bone to feel pain like a real human. The algorithm then devised a plan to provide her with one, using a molecular 3D printer. That revolutionary printer was created entirely autonomously, thanks to access to every scientific paper in the field and to the open source code of thousands of projects. But the project failed…"

The rays of light trace strange arabesques. Dust motes swirl. A shadow, a movement takes shape at the very edge of my field of vision, giving me the impression of a presence.

"The algorithm was only the sum of written and shared human knowledge. For the first time, it was failing. It needed a form of creativity. It quickly identified the person most likely to help it. That person was you, Nellio!"

With a theatrical gesture, she points her finger at me.

"Me? I…"
"And how to attract you? Convince you? Smother all your suspicions? Quite simply with the combined sexual attraction of Eva, a doll designed for that very purpose, and Georges Farreck, your teenage fantasy."

Georges and I cry out in surprise in unison.

"But…"
"Georges, you were the easiest to manipulate. It was enough to dangle before you the prospect that your actor persona, which within a decade would be nothing more than a forgotten Wikipedia page, would become a benefactor of humanity."

I react.
"That doesn't add up, Eva. We were attacked at Georges Farreck's place. I was nearly killed at Max's."
"But you got out of it every time! The algorithm knew the printeur was a dangerous invention, the one and only invention capable of making it lose its grip on humanity. It had to develop it while keeping it secret. Thanks to the permanent threat, we took every precaution necessary to keep the printeur in the shadows. Once the project was finished, I had to die in front of you so that you would get the idea of resurrecting me through the printeur."
"Eva…"

My gaze plunges into her dark, deep, luminous eyes, and I suddenly read there the infinity of every human sadness, of all the emotions of humanity.

"I… I suddenly discovered pain," she stammers. "I discovered the human condition."

Mérissa has brought her hand to her mouth. Georges Farreck stands motionless, holding his breath. Images of Eva screaming, writhing in pain on the floor, dance in my head.

"You… you were the first printed human!" I say. "That's… that's…"
"No," she says. "You were, Nellio. You are the first resurrected human, returned from the dead."
"What?"

I stand dumbfounded. A lightning bolt suddenly strikes my brain, my breath cuts off, I panic.
"So," murmurs Georges Farreck, "Nellio really did die during our flight over the islamic sultanate. I suspected as much; I didn't want to accept it."
"I think that was unforeseen, a completely random element that disrupted the algorithm's plans."
"Stop! Be quiet!" Mérissa snaps at us, pale as death.
"The algorithm must be stopped," Eva insists. "Only you can do it without it defending itself."
"No, I…"
"The printeurs are being distributed. A new world fundamentally incompatible with the algorithm is being born. You have the power to prevent a deadly conflict between the two worlds, you can…"
"I don't want anything at all!"
"The most powerful woman on earth doesn't want anything at all," Georges Farreck says with irony.
"What world do you want to leave to the two humans to whom you will soon give life?" Eva continues.

I am @ploum, a disconnected speaker and electronic writer, paid on a pay-what-you-want basis via Tipeee, Patreon, Paypal, Liberapay or in millibitcoins at 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE license.

February 08, 2019

I published the following diary on isc.sans.edu: “Phishing Kit with JavaScript Keylogger”:

Here is an interesting sample! It's a phishing page which entices the user to connect to his/her account to retrieve a potentially interesting document. As you can see, it's a classic one… [Read more]

[The post [SANS ISC] Phishing Kit with JavaScript Keylogger has been first published on /dev/random]

February 07, 2019

But experienced from the professor's side

Sometimes, at night, I wake with a start, my body bathed in sweat. I have an exam and I haven't studied. Or not enough. My heart races; nausea rises in my throat. It usually takes me a few minutes to realize it is only a bad dream, a reminiscence of my past.

Because it has been 13 years since I took my last exam at university. 13 years since I last felt such anguish.

Isn't that absurd? I have known the sudden, unexpected deaths of people I cared about. Once or twice I have feared for my own life. But never have I felt anguish like that of the morning of a university exam. Never have I emptied my guts so fluidly through every orifice of my digestive system as I did, notes in hand, after a few hours of bad sleep.

Incidentally, this stress is said to be a major factor in the socio-economic inequality tied to education. While we have long known that intelligence is independent of social class, degrees are very strongly correlated with it, even when the cost of studying is taken into account.

One reason may be that students from less privileged backgrounds carry far greater pressure on their shoulders. A child from a well-off family can afford to fail, to change direction. Such a child may have learned, from earliest childhood, a certain confidence, a certainty about their own security; that is not the case for everyone. Receiving a scholarship means you have to succeed. Watching your parents sacrifice themselves forbids any form of failure. And, insidiously, this fear may be one of the leading causes of failure.

Today, I have crossed to the other side of the fence. I am the one giving the exams. I could draw satisfaction from this, even a futile sense of triumph.

Yet the night before the exam I was due to give, I panicked as if I were a student. I woke up in a sweat at 4 a.m., convinced I was late. I sweated; my heart pounded.

In front of my students, I felt guilty toward those who were visibly stressed. How could I help them? Reading the panic in their eyes, I wanted to reassure them. But, on the other hand, I could not simply pass them without feeling a deep sense of injustice toward those who had worked and amply earned their success.

Yet I did everything to avoid making it a memorization exam. The questions are reflection questions, and the students have access to any resources they wish (including a computer connected to the Internet). If a student gets stuck, I try to redirect them and come back later, after suggesting some leads. Not to mention that a good share of the marks comes from a project carried out during the year, namely contributing to an open source project chosen by the student.

Despite all that, the university as an institution intimidates and crushes. My position as a professor frightens. And a student I know to be brilliant but paralyzed by stress will be, objectively, identical to a student who never bothered to read anything and is bluffing on the off chance it works. You never know.

Having, for the first time in my life, a certain power, I want to use it. Knowing that the university asks me for a mark between 0 and 20 for each student. And that I want this mark to be fair and to reward those who show genuine understanding of, and interest in, the subject.

How do you set up an exam that reassures? One that is a useful event in the academic journey rather than an ordeal of suffering?

I wanted to set up the kind of exam I would have liked to take myself. An exam I would not have stressed over (I never stressed over open-book exams). But this year I saw in the eyes of some students that I had partially failed. That, despite my fine words, I had become the vehicle of the very injustice I abhorred fifteen years ago.

If any students are reading me, I welcome their ideas and advice. Let's try experiments; let's not settle for the status quo and for traumatizing traditionalist customs.

Photo by JESHOOTS.COM on Unsplash

I am @ploum, a disconnected speaker and electronic writer, paid on a pay-what-you-want basis via Tipeee, Patreon, Paypal, Liberapay or in millibitcoins at 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE license.

[Logo: pfSense]

On Thursday 21 February 2019 at 7 p.m., the 75th Mons session of the Belgian Jeudis du Libre will take place.

The topic of this session: pfSense, a "libre" firewall for securing home and business networks

Theme: Systems|networks

Audience: everyone

Speaker: Michel Villers

Venue: Université de Mons, Campus Plaine de Nimy, avenue Maistriau, Grands Amphithéâtres, Auditoire Curie (see this map on the UMONS website, or the OSM map).

Attendance is free and only requires registration by name, preferably in advance, or at the door. Please indicate your intention by registering via http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons are also supported by our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, feel free to consult the agenda and to subscribe to the mailing list so as to receive the announcements systematically.

As a reminder, the Jeudis du Libre are intended as spaces for exchange around Free Software topics. The Mons sessions take place every third Thursday of the month and are organized in the premises of, and in collaboration with, Mons-based universities and colleges involved in training computer scientists (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description: pfSense is a router/firewall derived from Monowall. Built on the FreeBSD Unix operating system, it offers many features comparable to those of proprietary professional firewalls. pfSense is therefore a "libre" solution for securing home or business networks.

After installation and the assignment of a management IP address, pfSense is administered remotely through a web interface. With native VLAN (802.1q) support, pfSense can secure an unlimited number of network segments.

In addition to a firewall's traditional core features (routing, filtering, NAT), many fundamental network services (DHCP, DNS forwarding) and more specific ones (PPPoE, Dynamic DNS, VPN, Wake-on-LAN, captive portal, load balancing, high availability, …) are available.

A package manager also provides additional features (DNS, RADIUS and proxy services, ad filtering, antivirus, geoIP filtering, …) and complementary administration tools (statistics, bandwidth management, …).

Various practical pfSense configurations will be presented, to discover and appreciate the full potential of this firewall solution.

I'm frequently sent examples of how Drupal has changed the lives of developers, business owners and end users. Recently, I received a very different story of how Drupal had helped in a rescue operation that saved a man's life.

The Snowdonia Ultra Marathon website

In early 2018, Race Director Mike Jones was looking to build a new website for the Ultra-Trail Snowdonia ultra marathon. He reached out to a good friend and developer, Rob Edwards, to lead the development of the website.

[Photo: a runner at the Ultra-Trail Snowdonia ultramarathon. © Ultra-trail Snowdonia and No Limits Photography]

Rob chose Drupal for its flexibility and extensibility. As an organization supported heavily by volunteers, open source also fit the Snowdonia team's belief in community.

The resulting website, https://apexrunning.co/, included a custom-built timing module. This module allowed volunteers to register each runner and their time at every aid stop.

A runner goes missing

Rob attended the first day of Ultra-Trail Snowdonia to ensure the website ran smoothly. He also monitored the runners at the end of the race to verify they were all accounted for.

Monitoring the system into the early hours of the morning, Rob noticed one runner, after successfully completing checkpoints one and two, hadn't passed through the third checkpoint.

[Photo: a runner at the Ultra-Trail Snowdonia ultramarathon. © Ultra-trail Snowdonia and No Limits Photography]

Each runner carried a mobile phone with them for emergencies. Mike attempted to make contact with the runner via phone to ensure he was safe. However, this specific area was known for its poor signal and the connection was too weak to get through.

After some more time anxiously watching the live updates, it was clear the runner hadn't reached checkpoint four and more than likely had never made it past checkpoint three. The Ogwen Mountain Rescue team was called into action.

Due to the terrain and temperature, searching for the lost runner on foot would be too slow. Instead, the mountain rescue volunteers used a helicopter to scan the area and locate the runner.

How Drupal came to the rescue

The area covered by runners in an ultra marathon like this one is vast. The custom-built timing module helped rescuers narrow down the search area; they knew the runner passed the second checkpoint but never made it to the third.

After following the fluorescent orange markers in the area pinpointed by the Drupal website, the team quickly found the individual. He had fallen and become too injured to carry on. A mild case of hypothermia had set in. The runner was airlifted to the hospital for appropriate care. The good news: the runner survived.

Without Drupal, it might have taken much longer to notify anyone that a runner had gone missing, and there would have been no way to tell when he had dropped off.

NFC and GPS devices are now being explored for these ultra marathon runners to carry with them to provide location data as an extra safety precaution. The Drupal system will be used alongside these devices for more accurate time readings, and Rob is looking into an API to pull this additional data into the Drupal website.

Stories about Drupal having an impact on organizations and individuals, or even helping out in emergencies, drive my sense of purpose. Feel free to keep sending them my way!

Special thanks to Rob Edwards, Poppy Heap (CTI Digital) and Paul Johnson (CTI Digital) for their help with this blog post.

February 06, 2019

LOADays 2019 is still a go; however, by popular demand we have rescheduled it. The new dates are 4 and 5 May 2019 in Antwerp, Belgium. We'll be opening the CfP shortly.

February 04, 2019

[Image: Facebook social decay. © Andrei Lacatusu]

Exactly one year ago, I decided to use social media less and blog more. I uninstalled the Facebook application from my phone, but kept my Facebook account for the time being.

The result is that I went from checking Facebook several times a day to once or twice a month.

Facebook can't be trusted

At the time I uninstalled the Facebook application from my phone, Mark Zuckerberg promised that he would fix Facebook. He didn't.

The remainder of 2018 was filled with Facebook scandals, including continued mishandling of personal data and privacy breaches, more misinformation, and a multitude of shady business practices.

Things got worse, not better.

The icing on the cake is that a few weeks ago we learned that Facebook knowingly duped children and their parents out of money, in some cases hundreds or even thousands of dollars, and often refused to give the money back.

And just last week, it was reported that Facebook had been collecting users' data by getting people to install a mobile application that gave Facebook root access to their network traffic.

It's clear that Facebook can't be trusted. And for that reason, I'm out.

I deleted my Facebook account twenty minutes ago.

[Image: Facebook's account deletion confirmation screen]

Social media's dark side

Social media, in general, have been enablers of community, transparency and positive change, but also of abuse, hate speech, bullying, misinformation, government manipulation and more. In just the past year, more and more users have woken up to the dark side of social media. Open Web and privacy advocates, on the other hand, have seen this coming for a while.

Technological change is a wonderful thing, as it can bring unprecedented improvements to billions around the globe. As a technologist, I believe in the power of the web to improve the world for many, but we also need to make sure that technology disruption is positive for all of us.

Last week, we heard that Facebook intends to further blend Instagram and WhatsApp with Facebook. If I were to guess, they want to make it harder to split up Facebook later (and harder for users to know what is happening with their data). Regulators should be all over this right now.

My social detox

I plan to stay off Facebook indefinitely, unless maybe there is a new CEO and better regulatory oversight.

I already stopped using Twitter to share personal updates and use it almost exclusively for Drupal-related updates. It remains a valuable channel to reach many people, but I wouldn't categorize my use as social anymore.

For now, I'm still on Instagram, but it's hard to ignore that Instagram is owned by Facebook. I will probably uninstall that next.

A call to rejoin the Open Web

Instant gratification and network effects have made social media successful, at the sacrifice of blogs and the Open Web.

I've always been driven by a sense of idealism. I'm optimistic that the movement away from social media is good for the Open Web.

Since I scaled back my use of social media a year ago, I blogged more, re-subscribed to many RSS feeds, and grew increasingly interested in the IndieWeb — all small shifts back to the Open Web's roots.

I plan to continue to work on my POSSE plan, and hope to share more thoughts on this topic in the coming weeks.

I'd love to see thousands more people join or rejoin the Open Web, and help innovate on top of it.

February 01, 2019

A year and a half ago, we made a decision: I was going to move.

About a year ago, I decided that getting this done without professional help was not a very good idea and would take forever, so I got set up with a lawyer and had her guide me through the process.

After lots of juggling with bureaucracies, some unfortunate delays, and some repeats of things I had already done before, I dropped off a 1 cm thick file of paperwork at the consulate a few weeks ago.

Today, I went back there to fetch my passport, containing the visa.

Tomorrow, FOSDEM starts. After that, I will be moving to a different continent!

January 31, 2019

I published the following diary on isc.sans.edu: “Tracking Unexpected DNS Changes”:

DNS is a key element of the Internet and, regularly, we read new bad stories. One of the latest was the Department of Homeland Security warning about recent DNS hijacking attacks. Indeed, when you want to visit the website 'isc.sans.org', you must be sure that the visited server will be 204.51.94.153 and not a malicious one controlled by an attacker. From an end-user point of view, we have to rely on the DNS. It's not easy to detect unexpected changes, but you can implement your own checks to track changes for your most visited websites. But from a website owner or network admin perspective, it is indeed a good practice to ensure that DNS servers authoritative for our domain zones are providing the correct information… [Read more]
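As a minimal illustration of such a check (my sketch, not the diary's script), you can snapshot the records you care about and diff them periodically:

# compare today's A records against yesterday's snapshot
dig +short isc.sans.org A | sort > /tmp/dns_today
diff /tmp/dns_yesterday /tmp/dns_today || echo "DNS records changed!"
mv /tmp/dns_today /tmp/dns_yesterday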

[The post [SANS ISC] Tracking Unexpected DNS Changes has been first published on /dev/random]

Since I was young, I've been an avid tennis player and fan. I still play to this day, though maybe not as much as I'd like to.

In my teens, Andre Agassi was my favorite player. I've even sported some of his famous headbands. I also remember watching him win the Australian Open in 1995.

In 2012, I traveled to Melbourne for a Drupal event, the same week the Australian Open was going on. As a tennis fan, I was lucky enough to watch Belgium's Kim Clijsters play.

Last weekend, the Australian Open wrapped up. This year, their website, https://ausopen.com, ran on Acquia and Drupal, delivered by the team at Avanade.

In a two-week timeframe, the site successfully welcomed tens of millions of visitors and served hundreds of millions of page views.

I'm very proud of the fact that many of the world's largest sporting events and media organizations (such as NBC Sports who host the Super Bowl and Olympics in the US) trust Acquia and Drupal as their chosen digital platform.

When the world is watching an event, there is no room for error!

[Photo: Team Tennis Australia, Acquia and Avanade after the men's singles final]

Many thanks to the round-the-clock efforts from Acquia's team in Asia Pacific, as well as our partners at Avanade!

January 30, 2019

We proudly present the last set of speaker interviews. See you at FOSDEM this weekend!

  • Corey Hulen: Mattermost's Approach to Layered Extensibility in Open Source
  • Denis Roio (Jaromil): Algorithmic Sovereignty and the state of community-driven open source development. Is there a radical interface pedagogy for algorithmic governementality?
  • John Garbutt: Square Kilometre Array and its Software Defined Supercomputer. ... and a very fast parallel file system
  • Jon 'maddog' Hall: 2019 - Fifty years of Unix and Linux advances
  • Roger Dingledine: The Current and Future Tor Project. Updates from the Tor Project
  • Ron Evans: Go on Microcontrollers: Small Is Going Big. TinyGo

January 29, 2019

Every year, I sit down to write my annual Acquia retrospective. It's a rewarding exercise, because it allows me to reflect on how much progress Acquia has made in the past 12 months.

Overall, Acquia had an excellent 2018. I believe we are a much stronger company than we were a year ago; not only because of our financial results, but because of our commitment to strengthen our product and engineering teams.

If you'd like to read my previous retrospectives, they can be found here: 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009. This year marks the publishing of my tenth retrospective. When read together, these posts provide a comprehensive overview of Acquia's growth and trajectory.

Updating our brand

Exiting 2017, we doubled down on our transition from website management to digital experience management. In 2018, we updated our product positioning and brand narrative to reflect this change. This included a new Acquia Experience Platform diagram:

[Image: the Acquia Platform in 2018]
The Acquia Platform is divided into two key parts: the Experience Factory and the Marketing Hub. Drupal and Acquia Lightning power every side of the experience. The Acquia Platform supports our customers throughout the entire life cycle of a digital experience — from building to operating and optimizing digital experiences.

In 2018, the Acquia marketing team also worked hard to update Acquia's brand. The result is a refreshed look and updated brand positioning that better reflects our vision, culture, and the value we offer our customers. This included updating our tagline to read: Experience Digital Freedom.

I think Acquia's updated brand looks great, and it's been exciting to see it come to life. From highway billboards to Acquia Engage in Austin, our updated brand has been very well received.

[Photo: an Acquia billboard at the Austin airport]
When Acquia Engage attendees arrived at the Austin-Bergstrom International Airport for Acquia Engage 2018, they were greeted by an Acquia display.

Business momentum

This year, Acquia surpassed $200 million in annualized revenue. Overall new subscription bookings grew 33 percent year over year, and we ended the year with nearly 900 employees.

Mike Sullivan completed his first year as Acquia's CEO, and demonstrated a strong focus on improving Acquia's business fundamentals across operational efficiency, gross margins and cost optimization. The results have been tangible, as Acquia has realized unprecedented financial growth in 2018:

  • Channel-partner bookings grew 52 percent
  • EMEA-based bookings grew 103 percent
  • Gross profit grew 39 percent
  • Adjusted EBITDA grew 78 percent
  • Free cash flow grew 84 percent
[Image: a summary of Acquia's 2018 business results mentioned throughout this blog post]
2018 was a record year for Acquia. Year-over-year highlights include new subscription bookings, EMEA-based bookings, free cash flow, and more.

International growth and expansion

In 2018, Acquia also witnessed unprecedented success in Europe and Asia, as new bookings in EMEA were up more than 100 percent. This included expanding our European headquarters into a new and larger space, marked by a ribbon-cutting ceremony with the mayor of Reading in the U.K.

Acquia also expanded its presence in Asia, and opened Tokyo-based operations in 2018. Over the past few years I visited Japan twice, and I'm excited for the opportunities that doing business in Japan offers.

We selected Pune as the location for our new India office, and we are in the process of hiring our first Pune-based engineers.

Acquia now has four offices in the Asia Pacific region serving customers like Astellas Pharmaceuticals, Muji, Mediacorp, and Brisbane City Council.

[Photo: an Acquia marketing one-pager in Japanese]
Acquia product information, translated into Japanese.

Acquia Engage

In 2018, we welcomed more than 650 attendees to Austin, Texas, for our annual customer conference, Acquia Engage. In June, we also held our first Acquia Engage Europe and welcomed 300 attendees.

Our Engage conferences included presentations from customers like Paychex, NBC Sports, Wendy's, West Corporation, General Electric, Charles Schwab, Pac-12 Networks, Blue Cross Blue Shield, Bayer, Virgin Sport, and more. We also featured keynote presentations from our partner network, including VMLY&R, Accenture Interactive, IBM iX and MRM//McCann.

Both customers and partners continue to be the most important driver of Acquia's product strategy, and it's always rewarding to hear about this success first hand. In fact, 2018 customer satisfaction levels remain extremely high at 94 percent.

Partner program

Finally, Acquia's partner network continues to become more sophisticated. In the second half of 2018, we right-sized our partner community from 2,270 firms to 226. This was a bold move, but our goal was to place a renewed focus on the partners who were both committed to Acquia and highly capable. As a result, we saw almost 52 percent year-over-year growth in partner-sourced ACV bookings. This is meaningful because for every $1 Acquia books in collaboration with a partner, our partner makes about $5 in services revenue.

Analyst recognition

In 2018, the top industry analysts published very positive reviews about Acquia. I'm proud that Acquia was recognized by Forrester Research as the leader for strategy and vision in The Forrester Wave: Web Content Management Systems, Q4 2018. Acquia was also named a leader in the 2018 Gartner Magic Quadrant for Web Content Management, marking our placement as a leader for the fifth year in a row.

Product milestones

[Image: a timeline of Acquia's product history and evolution]
Acquia's product evolution between 2008 and 2018. When Acquia was founded, our mission was to provide commercial support for Drupal and to be the "Red Hat for Drupal"; 12 years later, the Acquia Platform helps organizations build, operate and optimize Drupal-based experiences.

2018 was one of the busiest years I have experienced; it was full of non-stop action every day. My biggest focus was working with Acquia's product and engineering team. We focused on growing and improving our R&D organization, modernizing Acquia Cloud, becoming user-experience first, redesigning the Acquia Lift user experience, working on headless Drupal, making Drupal easier to use, and expanding our commerce strategy.

Hiring, hiring, hiring

In partnership with Mike, we decided to increase the capacity of our research and development team by 60 percent. At the close of 2018, we had increased it by 45 percent. We will continue to invest in growing our R&D team in 2019.

I spent a lot of time restructuring, improving and scaling the product organization to make sure we could handle the increased capacity and build out a world-class R&D organization.

As the year progressed, R&D capacity came online and our ability to innovate not only improved but accelerated significantly. We entered 2019 in a much better position as we now have a lot more capacity to innovate.

Acquia Cloud

Acquia Cloud and Acquia Cloud Site Factory support some of the largest and most mission-critical websites in the world. The scope and complexity that Acquia Cloud and Acquia Cloud Site Factory manages is enormous. We easily deliver more than 30 billion page views a month (excluding CDN).

Over the course of 10 years, the Acquia Cloud codebase had grown very large. Updating, testing and launching new releases took a long time because we had one large, monolithic codebase. This was something we needed to change in order to add new features faster.

Over the course of 2018, the engineering team broke the monolithic codebase down into discrete components that can be tested and released independently. We launched our component-based architecture in June. Since then, the engineering team has released changes to production 650 times, compared to our historic pace of doing one release per quarter.

[Graph: moving Acquia Cloud from a monolithic code base to a component-based code base]
This graph shows how we moved Acquia Cloud from a monolithic code base to a component-based code base. Each color on the graph represents a component. The graph shows how releases of Acquia Cloud (and the individual components in particular) have accelerated in the second half of the year.

Planning and designing for all of these services took a lot of time and focus, and was a large priority for the entire engineering team (including me). The fruits of these efforts will start to become more publicly visible in 2019. I'm excited to share more with you in future blog posts.

Acquia Cloud also remains the most secure and compliant cloud for Drupal. As we were componentizing the Acquia Cloud platform, the requirements to maintain our FedRAMP compliance became much more stringent. In April, the GDPR deadline was also nearing. Executing on hundreds of FedRAMP- and GDPR-related tasks emerged as another critical priority for many of our product and engineering teams. I'm proud that the team succeeded in accomplishing this amid all the other changes we were making.

Customer experience first

Over the years, I've felt Acquia lacked a focus on user experience (UX) for both developers and marketers. As a result, increasing the capacity of our R&D team included doubling the size of the UX team.

We've stepped up our UX research to better understand the needs and challenges of those who use Acquia products. We've begun to employ design-first methodologies, such as design sprints and a lean-UX approach. We've also created roles for customer experience designers, so that we're looking at the full customer journey rather than just our product interfaces.

With the extra capacity and data-driven changes in place, we've been working hard on updating the user experience for the entire Acquia Experience Platform. For example, you can see a preview of our new Acquia Lift product in this video, which has an increased focus on UX:

[Embedded video: a preview of the new Acquia Lift]

Drupal

In 2018, Drupal 8 adoption kept growing and Drupal also saw an increase in the number of community contributions and contributors, both from individuals and from organizations.

Acquia remains very committed to Drupal, and was the largest contributor to the project in 2018. We now have more than 15 employees who contribute to Drupal full-time, in addition to many others that contribute periodically. In 2018, the Drupal team's main areas of focus have been Layout Builder and the API-first initiative:

  • Layout Builder: Layout Builder offers content authors an easy-to-use page building experience. It's shaping up to be one of the most useful and pervasive features ever added to Drupal because it redefines how editors control the appearance of their content without having to rely on a developer.
  • API First: This initiative has given Drupal a true best-in-class web services API for using Drupal as a headless content management system. Headless Drupal is one of the fastest growing segments of Drupal implementations.
[Photo: Acquia engineers, designers and product managers at Acquia Build Week 2018]
Our R&D team gathered in Boston for our annual Build Week in June 2018.

Content and Commerce

Adobe's acquisition of Magento has been very positive for us; we're now the largest commerce-agnostic content management company to partner with. As a result, we decided to extend our investments in headless commerce and set up partnerships with Elastic Path and BigCommerce. The momentum we've seen from these partnerships in a short amount of time is promising for 2019.

The market continues to move in Acquia's direction

In 2019, I believe Acquia will continue to be positioned for long-term growth. Here are a few reasons why:

  • The current markets for content and digital experience management continue to grow rapidly, at approximately 20 percent per year.
  • Digital transformation is top-of-mind for all organizations, and impacts all elements of their business and value chain.
  • Open source adoption continues to grow at a furious pace and has seen tremendous business success in 2018.
  • Cloud adoption continues to grow. Unlike most of our CMS competitors, Acquia was born in the cloud.
  • Drupal and Acquia are leaders in headless and decoupled content management, which is a fast growing segment of our market.
  • Conversational interfaces and augmented reality continue to grow, and we embraced these channels a few years ago. Acquia Labs, our research and innovation lab, explored how organizations can use conversational UIs to develop beyond-the-browser experiences, like cooking with Alexa, and voice-enabled search for customers like Purina.

Although we hold a leadership position in our market, our relative market share is small. These trends mean that we should have plenty of opportunity to grow in 2019 and beyond.

Thank you

While 2018 was an incredibly busy year, it was also very rewarding. I have a strong sense of gratitude, and admire every Acquian's relentless determination and commitment to improve. As always, none of these results and milestones would be possible without the hard work of the Acquia team, our customers, partners, the Drupal community, and our many friends.

I've always been pretty transparent about our trajectory (e.g. Acquia 2009 roadmap and Acquia 2017 strategy) and will continue to do so in 2019. We have some big plans for 2019, and I'm excited to share them with you. If you want to get notified about what we have in store, you can subscribe to my blog at https://dri.es/subscribe.

Thank you for your support in 2018!

January 28, 2019

[Image: a shield with the Drupal mascot]

The European Commission made an exciting announcement; it will be awarding bug bounties to the security teams of Open Source software projects that the European Commission relies on.

If you are not familiar with the term, a bug bounty is a monetary prize awarded to people who discover and correctly report security issues.

Julia Reda — an internet activist, Member of the European Parliament (MEP) and co-founder of the Free and Open Source Software Audit (FOSSA) project — wrote the following on her blog:

Like many other organizations, institutions like the European Parliament, the Council and the Commission build upon Free Software to run their websites and many other things. But the Internet is not only crucial to our economy and our administration, it is the infrastructure that runs our everyday lives.

With over 150 Drupal sites, the European Commission is a big Drupal user, and has a large internal Drupal community. The European Commission set aside 89,000€ (or roughly $100,000 USD) for a Drupal bug bounty. They worked closely with Drupal's Security Team to set this up. To participate in the Drupal bug bounty, read the guidelines provided by Drupal's Security Team.

Over the years I've had many meetings with the European Commission, presented keynotes at some of its events, and more. During that time, I've seen the European Commission evolve from being hesitant about Open Source to recognizing the many benefits that Open Source provides for its key ICT services, to truly embracing Open Source.

In many ways, the European Commission followed classic Open Source adoption patterns; adoption went from being technology-led (bottom-up or grassroots) to policy-led (top-down and institutionalized), and now the EU is an active participant and contributor.

Today, the European Commission is a shining example and role model for how governments and other large organizations can contribute to Open Source (just like how the White House used to be).

The European Commission is actually investing in Drupal in a variety of ways — the bug bounty is just one example of that — but more about that in a future blog post.

January 27, 2019

Update 4th Feb: the HTML minify bug was fixed in W3TC v. 0.9.7.2, released a couple of days ago.


Quick heads-up for users that have both W3 Total Cache and Autoptimize installed: the latest W3TC update (version 0.9.7.1) introduces a nasty bug in the HTML minifier, which also impacts Autoptimize because it uses the same minifier class (Minify_HTML, part of Mr. Clay's Minify). When W3TC is running, the Minify_HTML class is loaded by and from W3TC, meaning AO's autoloader does not load Minify_HTML from AO proper (which does not have the problem).

The bug makes some characters, especially quotes, disappear from the HTML, leading to all sorts of weirdness: Pinterest icons not showing, misaligned titles in RevSlider, broken custom share buttons and more.
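If you are not sure which W3TC version is active on a site, a quick check with WP-CLI (assuming wp-cli is available on the host):

wp plugin list --fields=name,version,status | grep w3-total-cache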

If you're impacted by the bug, you can do one of the following:

  • disable HTML optimization in Autoptimize (and W3TC)
  • OR temporarily disable W3TC (or switch to another page cache plugin)
  • OR download and install the previous version of W3TC (0.9.7)

Fingers crossed they’ll release an update soon!

January 25, 2019

With only one week left until FOSDEM 2019, we have added some new interviews with our main track and keynote speakers, varying from a keynote talk about the dangers of the cloud to the use of Matrix in the French state and the inner workings of the ZFS Adaptive Replacement Cache:

  • Allan Jude: ELI5: ZFS Caching. Explain Like I'm 5: How the ZFS Adaptive Replacement Cache works
  • Guido Trotter and Dylan Reid: Crostini: A Linux Desktop on ChromeOS
  • Kyle Rankin: The Cloud is Just Another Sun
  • Lorenzo Fontana: eBPF powered Distributed Kubernetes performance analysis
  • Matthew Hodgson: Matrix in the French State

January 24, 2019

The pace of innovation in content management has been accelerating — driven by both the number of channels that content management systems need to support (web, mobile, social, chat) as well as the need to support JavaScript frameworks in the traditional web channel. As a result, we've seen headless or decoupled architectures emerge.

Decoupled Drupal has seen adoption from all corners of the Drupal community. In response to the trend towards decoupled architectures, I wrote blog posts in 2016 and 2018 for architects and developers about how and when to decouple Drupal. In the time since my last post, the surrounding landscape has evolved, Drupal's web services have only gotten better, and new paradigms such as static site generators and the JAMstack are emerging.

Time to update my recommendations for 2019! As we did a year ago, let's start with the 2019 version of the flowchart in full. (At the end of this post, there is also an accessible version of this flowchart described in words.)

[Flowchart: how to decouple Drupal in 2019]

Different ways to decouple Drupal

I want to revisit some of the established ways to decouple Drupal as well as discuss new paradigms that are seeing growing adoption. As I've written previously, the three most common approaches to Drupal architecture from a decoupled standpoint are traditional (or coupled), progressively decoupled, and fully decoupled. The different flavors of decoupling Drupal exist due to varying preferences and requirements.

In traditional Drupal, all of Drupal's usual responsibilities stay intact, as Drupal is a monolithic system and therefore maintains complete control over the presentation and data layers. Traditional Drupal remains an excellent choice for editors who need full control over the visual elements on the page, with access to features such as in-place editing and layout management. This is Drupal as we have known it all along. Because the benefits are real, this is still how most new content management projects are built.

Sometimes, JavaScript is required to deliver a highly interactive end-user experience. In this case, a decoupled approach becomes required. In progressively decoupled Drupal, a JavaScript framework is layered on top of the existing Drupal front end. This JavaScript might be responsible for nothing more than rendering a single block or component on a page, or it may render everything within the page body. The progressive decoupling paradigm lies on a spectrum; the less of the page dedicated to JavaScript, the more editors can control the page through Drupal's administrative capabilities.

Up until this year, fully decoupled Drupal was a single category of decoupled Drupal architecture that reflected a full separation of concerns between the presentation layer and all other aspects of the CMS. In this scenario, the CMS becomes a data provider, and a JavaScript application with server-side rendering becomes responsible for all rendering and markup, communicating with Drupal via web service APIs. Though key functionality like in-place editing and layout management is unavailable, fully decoupled Drupal is appealing for developers who want greater control over the front end and who are already experienced with building applications in frameworks like Angular, React, Vue.js, etc.

Over the last year, fully decoupled Drupal has branched into two separate paradigms due to the increasing complexity of JavaScript development. The so-called JAMstack (JavaScript, APIs, Markup) introduces a new approach: fully decoupled static sites. The primary reasons for static sites are improved performance, improved security, and reduced complexity for developers. A static site generator like Gatsby will retrieve content from Drupal, generate a static website, and deploy that static site to a CDN, usually through a specialized cloud provider such as Netlify.

What do you intend to build?

The top section of the flowchart showing how to decouple Drupal in 2019

The essential question, as always, is what you're trying to build. Here is updated advice for architects exploring decoupled Drupal in 2019:

  1. If your intention is to build a single standalone website or web application, choosing decoupled Drupal may or may not be the right choice, depending on the features your developers and editors see as must-haves.
  2. If your intention is to build multiple web experiences (websites or web applications), you can use a decoupled Drupal instance either as a) a content repository without its own public-facing front end or b) a traditional website that acts simultaneously as a content repository. Depending on how dynamic your application needs to be, you can choose a JavaScript framework for highly interactive applications or a static site generator for mostly static websites.
  3. If your intention is to build multiple non-web experiences (native mobile or IoT applications), you can leverage decoupled Drupal to expose web service APIs and consume that Drupal site as a content repository without its own public-facing front end.

What makes Drupal so powerful is that it supports all of these use cases. Drupal makes it simple to build decoupled Drupal thanks to widely recognized standards such as JSON:API, GraphQL, OpenAPI, and CouchDB. In the end, it is your technical requirements that will decide whether decoupled Drupal should be your next architecture.
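
As a rough illustration of the content repository pattern, once the JSON:API module is enabled a consumer can fetch content over plain HTTP (hypothetical site URL; /jsonapi is the module's default path prefix):

# List article nodes exposed by a Drupal site via JSON:API
curl -H "Accept: application/vnd.api+json" \
  https://example.com/jsonapi/node/article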

In addition to technical requirements, organizational factors often come into play as well. For instance, if it is proving difficult to find talented front-end Drupal developers with Twig knowledge, it may make more sense to hire more affordable JavaScript developers instead and build a fully decoupled implementation.

Are there things you can't live without?

The middle section of the flowchart showing how to decouple Drupal in 2019

As I wrote last year, the most important aspect of any decision when it comes to decoupling Drupal is the list of features your project requires; the needs of editors and developers have to be carefully considered. It is a critical step in your evaluation process to weigh the different advantages and disadvantages. Every project should embark on a clear-eyed assessment of its organization-wide needs.

Many editorial and marketing teams select a particular CMS because of its layout capabilities and rich editing functionality. Drupal, for example, gives editors the ability to build layouts in the browser and drag-and-drop components into them, all without needing a developer to do it for them. Although it is possible to rebuild many of the features available in a CMS on a consumer application, this can be a time-consuming and expensive process.

In recent years, the developer experience has also become an important consideration, but not in the ways that we might expect. While the many changes in the JavaScript landscape are one of the motivations for developers to prefer decoupled Drupal, the fact that there are now multiple ways to write front ends for Drupal makes it easier to find people to work on decoupled Drupal projects. As an example, many organizations are finding it difficult to find affordable front-end Drupal developers experienced in Twig. Moving to a JavaScript-driven front end can resolve some of these resourcing challenges.

This balancing act between the requirements that developers prioritize and those that editors prioritize will guide you to the correct approach for your needs. If you are part of an organization that is mostly editorial, decoupled Drupal could be problematic, because it reduces the amount of control editors have over the presentation of their content. By the same token, if you are part of an organization with more developer resources, fully decoupled Drupal could potentially accelerate progress, with the warning that many mission-critical editorial features disappear.

Current and future trends to consider

A diagram showing a spectrum of site building solutions: low-code solutions on the left and high-code solutions on the right.

Over the past year, JavaScript frameworks have become more complex, while static site generators have become less complex.

One of the common complaints I have heard about the JavaScript landscape is that it shows fragmentation and a lack of cohesion due to increasing complexity. This has been a driving force for static site generators. Whereas two years ago, most JavaScript developers would have chosen a fully functional framework like Angular or Ember to create even simple websites, today they might choose a static site generator instead. A static site generator still allows them to use JavaScript, but it is simpler because performance considerations and build processes are offloaded to hosted services rather than the responsibility of developers.

I predict that static site generators will gain momentum in the coming year due to the positive developer experience they provide. Static site generators are also attracting a middle ground of both more experienced and less experienced developers.

Conclusion

Drupal continues to be an ideal choice for decoupled CMS architectures, and it is only getting better. The API-first initiative is making good progress on preparing the JSON:API module for inclusion in Drupal core, and the Admin UI and JavaScript Modernization initiative is working to dogfood Drupal's web services with a reinvented administrative interface. Drupal's support for GraphQL continues to improve, and now there is even a book on the subject of decoupled Drupal. It's clear that developers today have a wide range of ways to work with the rich features Drupal has to offer for decoupled architectures.

With the introduction of fully decoupled static sites as another architectural paradigm that developers can select, there is an even wider variety of architectural possibilities than before. It means that the spectrum of decoupled Drupal approaches I defined last year has become even more extensive. This flexibility continues to define Drupal as an excellent CMS for both traditional and decoupled approaches, with features that go well beyond Drupal's competitors, including WordPress, Sitecore and Adobe. Regardless of the makeup of your team or the needs of your organization, Drupal has a solution for you.

Special thanks to Preston So for co-authoring this blog post and to Angie Byron, Chris Hamper, Gabe Sullice, Lauri Eskola, Ted Bowman, and Wim Leers for their feedback during the writing process.

Accessible version of flowchart

This is an accessible and described version of the flowchart images earlier in this blog post. First, let us list the available architectural choices:

  • Coupled. Use Drupal as is without additional JavaScript (and as a content repository for other consumers).
  • Progressively decoupled. Use Drupal for initial rendering with JavaScript on top (and as a content repository for other consumers).
  • Fully decoupled static site. Use Drupal as a data source for a static site generator and, if needed, deploy to a JAMstack hosting platform.
  • Fully decoupled app. Use Drupal as a content repository accessed by other consumers (if JavaScript, use Node.js for server-side rendering).

Second, ask the question "What do you intend to build?" and choose among the answers "One experience" or "Multiple experiences".

If you are building one experience, ask the question "Is it a website or web application?" and choose among the answers "Yes, a single website or web application" or "No, Drupal as a repository for non-web applications only".

If you are building multiple experiences instead, ask the question "Is it a website or web application?" with the answers "Yes, Drupal as website and repository" or "No, Drupal as a repository for non-web applications only".

If your answer to the previous question was "No", then you should build a fully decoupled application, and your decision is complete. If your answer to the previous question was "Yes", then ask the question "Are there things the project cannot live without?"

Both editorial and developer needs are things that projects cannot live without, and here are the questions you need to ask about your project:

Editorial needs

  • Do editors need to manipulate page content and layout without a developer?
  • Do editors need in-context tools like in-place editing, contextual links, and toolbar?
  • Do editors need to preview unpublished content without custom development?
  • Do editors need content to be accessible by default like in Drupal's HTML?

Developer needs

  • Do developers need to have control over visual presentation instead of editors?
  • Do developers need server-side rendering or Node.js build features?
  • Do developers need JSON from APIs and to write JavaScript for the front end?
  • Do developers need data security driven by a publicly inaccessible CMS?

If, after asking all of these questions about things your project cannot live without, your answers show that your requirements reflect a mix of both editorial and developer needs, you should consider a progressively decoupled implementation, and your decision is complete.

If your answers to the questions about things your project cannot live without show that your requirements reflect purely developer needs, then ask the question "Is it a static website or a dynamic web application?" and choose among the answers "Static" or "Dynamic." If your answer to the previous question was "Static", you should build a fully decoupled static site, and your decision is complete. If your answer to the previous question was "Dynamic", you should build a fully decoupled app, and your decision is complete.

If your answers to the questions about things your project cannot live without show that your requirements reflect purely editorial needs, then ask two questions. Ask the first question, "Are there parts of the page that need JavaScript-driven interactions?" and choose among the answers "Yes" or "No." If your answer to the first question was "Yes", then you should consider a progressively decoupled implementation, and your decision is complete. If your answer to the first question was "No", then you should build a coupled Drupal site, and your decision is complete.

Then, ask the second question, "Do you need to access multiple data sources via API?" and choose among the answers "Yes" or "No." If your answer to the second question was "Yes", then you should consider a progressively decoupled implementation, and your decision is complete. If your answer to the second question was "No", then you should build a coupled Drupal site, and your decision is complete.

As of the end of 2018, the ULB campus Solbosch is located within the Brussels Low Emission Zone (LEZ). Drivers of cars registered abroad are required to register before entering the LEZ, or risk being fined. Registration is free of charge. More information can be found here.

January 23, 2019

A very clever exploit was found in apt.

tl;dr I found a vulnerability in apt that allows a network man-in-the-middle (or a malicious package mirror) to execute arbitrary code as root on a machine installing any package. The bug has been fixed in the latest versions of apt. If you’re worried about being exploited during the update process, you can protect yourself by disabling HTTP redirects while you update.

Source: Remote Code Execution in apt/apt-get
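
For reference, the workaround maps to apt’s Acquire configuration; something along these lines before upgrading to a fixed version:

# Refresh package lists and upgrade without following HTTP redirects
sudo apt -o Acquire::http::AllowRedirect=false update
sudo apt -o Acquire::http::AllowRedirect=false upgrade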

And a good remark on why HTTPS still matters even if your packages are signed.

If package manifests are signed, why bother using https? After all, the privacy gains are minimal, because the sizes of packages are well-known. And using https makes it more difficult to cache content.

People sometimes get really passionate about this. There are single purpose websites dedicated to explaining why using https is pointless in the context of apt.

They’re good points, but bugs like the one I wrote about in this post exist.

Food for thought.

The post Remote Code Execution in apt/apt-get appeared first on ma.ttias.be.

January 22, 2019

If your non-geek partner and/or kids are joining you at FOSDEM, they may be interested in spending some time exploring Brussels while you attend the conference. As in previous years, FOSDEM is organising sightseeing tours.

I published the following diary on isc.sans.edu: “DNS Firewalling with MISP”:

If IOCs are very useful to “detect” suspicious activities, why not also use them to “prevent” them from occurring? DNS firewalling can be an efficient way to prevent your users from visiting malicious online resources. The principle of DNS firewalling is not new; it has been used for a long time to fight spammers. Services like SpamHaus provide RPZ feeds. RPZ means “Response Policy Zone”, and the principle is to allow a nameserver to reply with an alternate response to some client queries… [Read more]
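
As a rough illustration (not from the diary itself), DNS firewalling with RPZ can be enabled in BIND along these lines; the zone name and file path are placeholders, and the zone data would be generated from your MISP IOCs:

// named.conf: activate a local response policy zone
options {
    response-policy { zone "rpz.local"; };
};

zone "rpz.local" {
    type master;
    file "/etc/bind/db.rpz.local";
};

Inside the zone file, a record such as "evil.example CNAME ." rewrites answers for that domain to NXDOMAIN, keeping clients away from the malicious resource.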

[The post [SANS ISC] DNS Firewalling with MISP has been first published on /dev/random]

January 21, 2019

OpenStack is a nice platform for deploying Infrastructure as a Service. It is a collection of projects and can be a bit difficult to set up. The documentation is really good if you want to set up OpenStack by hand, and there are a few OpenStack distributions that make it easier to install.

Ansible is a very nice tool for system automation, and one of the easier ones to learn.

Wouldn’t it be nice if we could make the OpenStack installation easier with Ansible? That’s exactly what OpenStack-Ansible does.

In this blog post we’ll set up an “all-in-one” OpenStack installation on CentOS 7. The installer will install OpenStack into LXC containers; it’s a nice way to learn how OpenStack works and how to operate it.

Preparation

System requirements

I use a CentOS 7 virtual system running as a KVM instance with nested KVM virtualization enabled. The minimum requirements are:

  • 8 CPU cores
  • 50 GB of free disk space
  • 8 GB of RAM

Update

Make sure that your system is up to date:

[staf@openstack ~]$ sudo yum update -y

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for staf: 
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: distrib-coffee.ipsl.jussieu.fr
 * extras: mirror.in2p3.fr
 * updates: centos.mirror.fr.planethoster.net
base                                                                                                                                    | 3.6 kB  00:00:00     
extras                                                                                                                                  | 3.4 kB  00:00:00     
updates                                                                                                                                 | 3.4 kB  00:00:00     
No packages marked for update
[staf@openstack ~]$ 

Install git

We’ll need git to download the Ansible playbooks and the OpenStack-Ansible installation scripts.

[staf@openstack ~]$ yum install git
Loaded plugins: fastestmirror
You need to be root to perform this command.
[staf@openstack ~]$ sudo yum install git
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.in2p3.fr
 * extras: mirror.in2p3.fr
 * updates: centos.mirror.fr.planethoster.net
Package git-1.8.3.1-20.el7.x86_64 already installed and latest version
Nothing to do
[staf@openstack ~]$ 

Ansible

This is a bit of a pitfall… The OpenStack-Ansible bootstrap script will download and install its own version of Ansible and create links in /usr/local/bin, so /usr/local/bin must be in your $PATH. Ansible shouldn’t be installed on your system; if it is installed, make sure it isn’t executed instead of the Ansible version that is built by OpenStack-Ansible.

On most GNU/Linux distributions /usr/local/bin and /usr/local/sbin are in the $PATH, but not on CentOS, so we’ll need to add it.

Make sure that Ansible isn’t installed:

[staf@openstack ~]$ sudo rpm -qa | grep -i ansible
[sudo] password for staf: 
[staf@openstack ~]$ 

Update your $PATH

[root@openstack ~]# export PATH=/usr/local/bin:$PATH

If you want to have /usr/local/bin in your $PATH permanently, update /etc/profile or $HOME/.profile.
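
A minimal sketch, assuming root uses bash login shells:

[root@openstack ~]# echo 'export PATH=/usr/local/bin:$PATH' >> /root/.bash_profile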

SSH password authentication

The Ansible playbooks will disable PasswordAuthentication, so make sure that you log in with an SSH key (password authentication is obsolete anyway).

firewalld

firewalld is enabled on CentOS by default, and the default iptables rules prevent communication between the OpenStack containers.

Stop and disable firewalld:

[root@openstack ~]# systemctl stop firewalld
[root@openstack ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

Verify:

[root@openstack ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
[root@openstack ~]# 

OpenStack installation

The installation will take some time, so it’s recommended to use a session manager like tmux or GNU screen.
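
For example, start a named session that you can re-attach to later with tmux attach -t openstack-aio if your connection drops (the session name is arbitrary):

[root@openstack ~]# tmux new -s openstack-aio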

Bootstrap

git clone

Clone the openstack-ansible git repo:

[root@openstack ~]# git clone https://git.openstack.org/openstack/openstack-ansible /opt/openstack-ansible
Cloning into '/opt/openstack-ansible'...
remote: Counting objects: 67055, done.
remote: Compressing objects: 100% (32165/32165), done.
remote: Total 67055 (delta 45474), reused 52564 (delta 32073)
Receiving objects: 100% (67055/67055), 14.60 MiB | 720.00 KiB/s, done.
Resolving deltas: 100% (45474/45474), done.
[root@openstack ~]# 
[root@openstack ~]# cd /opt/openstack-ansible
[root@openstack openstack-ansible]# 

Choose your OpenStack release

OpenStack has a release schedule of about every 6 months; the current stable release is Rocky. Every OpenStack release has its own branch in the git repo, and each OpenStack-Ansible release is tagged in the git repo. So you’ll need to check out either an OpenStack-Ansible release tag or a branch. We’ll check out the Rocky branch.

Get the list of branches:

[root@openstack openstack-ansible]# git branch -a
* master
  remotes/origin/HEAD -> origin/master
  remotes/origin/master
  remotes/origin/stable/ocata
  remotes/origin/stable/pike
  remotes/origin/stable/queens
  remotes/origin/stable/rocky
[root@openstack openstack-ansible]# 

Check out the branch:

[root@openstack openstack-ansible]# git checkout stable/rocky
Branch stable/rocky set up to track remote branch stable/rocky from origin.
Switched to a new branch 'stable/rocky'
[root@openstack openstack-ansible]# 

Bootstrap Ansible

Execute scripts/bootstrap-ansible.sh; this will install the required packages and Ansible playbooks.

[root@openstack openstack-ansible]# scripts/bootstrap-ansible.sh
+ export HTTP_PROXY=
+ HTTP_PROXY=
+ export HTTPS_PROXY=
+ HTTPS_PROXY=
+ export ANSIBLE_PACKAGE=ansible==2.5.14
+ ANSIBLE_PACKAGE=ansible==2.5.14
+ export ANSIBLE_ROLE_FILE=ansible-role-requirements.yml
+ ANSIBLE_ROLE_FILE=ansible-role-requirements.yml
+ export SSH_DIR=/root/.ssh
+ SSH_DIR=/root/.ssh
+ export DEBIAN_FRONTEND=noninteractive
+ DEBIAN_FRONTEND=noninteractive
<SNIP>
+ unset ANSIBLE_LIBRARY
+ unset ANSIBLE_LOOKUP_PLUGINS
+ unset ANSIBLE_FILTER_PLUGINS
+ unset ANSIBLE_ACTION_PLUGINS
+ unset ANSIBLE_CALLBACK_PLUGINS
+ unset ANSIBLE_CALLBACK_WHITELIST
+ unset ANSIBLE_TEST_PLUGINS
+ unset ANSIBLE_VARS_PLUGINS
+ unset ANSIBLE_STRATEGY_PLUGINS
+ unset ANSIBLE_CONFIG
+ '[' false == true ']'
+ echo 'System is bootstrapped and ready for use.'
System is bootstrapped and ready for use.
[root@openstack openstack-ansible]# 

Verify

scripts/bootstrap-ansible.sh created /opt/ansible-runtime and updated /usr/local/bin with a few links.

[root@openstack openstack-ansible]# ls -ld /opt/*
drwxr-xr-x.  5 root root   56 Jan 12 11:42 /opt/ansible-runtime
drwxr-xr-x. 14 root root 4096 Jan 12 11:43 /opt/openstack-ansible
[root@openstack openstack-ansible]# ls -ltr /usr/local/bin/
total 8
lrwxrwxrwx. 1 root root   32 Jan 12 11:43 ansible -> /usr/local/bin/openstack-ansible
lrwxrwxrwx. 1 root root   39 Jan 12 11:43 ansible-config -> /opt/ansible-runtime/bin/ansible-config
lrwxrwxrwx. 1 root root   43 Jan 12 11:43 ansible-connection -> /opt/ansible-runtime/bin/ansible-connection
lrwxrwxrwx. 1 root root   40 Jan 12 11:43 ansible-console -> /opt/ansible-runtime/bin/ansible-console
lrwxrwxrwx. 1 root root   39 Jan 12 11:43 ansible-galaxy -> /opt/ansible-runtime/bin/ansible-galaxy
lrwxrwxrwx. 1 root root   36 Jan 12 11:43 ansible-doc -> /opt/ansible-runtime/bin/ansible-doc
lrwxrwxrwx. 1 root root   42 Jan 12 11:43 ansible-inventory -> /opt/ansible-runtime/bin/ansible-inventory
lrwxrwxrwx. 1 root root   32 Jan 12 11:43 ansible-playbook -> /usr/local/bin/openstack-ansible
lrwxrwxrwx. 1 root root   37 Jan 12 11:43 ansible-pull -> /opt/ansible-runtime/bin/ansible-pull
lrwxrwxrwx. 1 root root   38 Jan 12 11:43 ansible-vault -> /opt/ansible-runtime/bin/ansible-vault
-rw-r--r--. 1 root root 3169 Jan 12 11:43 openstack-ansible.rc
-rwxr-xr-x. 1 root root 2638 Jan 12 11:43 openstack-ansible

Verify that the ansible command is the one installed by the OpenStack-Ansible bootstrap script:

[root@openstack openstack-ansible]# which ansible
/usr/local/bin/ansible

Bootstrap AIO

Execute scripts/bootstrap-aio.sh to apply the all-in-one configuration to the host:

[root@openstack openstack-ansible]# scripts/bootstrap-aio.sh
+ export BOOTSTRAP_OPTS=
+ BOOTSTRAP_OPTS=
+++ dirname scripts/bootstrap-aio.sh
++ readlink -f scripts/..
+ export OSA_CLONE_DIR=/opt/openstack-ansible
TASK [Gathering Facts] *****************************************************************************************************
ok: [localhost]

TASK [sshd : Set OS dependent variables] ***********************************************************************************
ok: [localhost] => (item=/etc/ansible/roles/sshd/vars/RedHat_7.yml)

TASK [sshd : OS is supported] **********************************************************************************************
ok: [localhost] => {
    "changed": false, 
    "msg": "All assertions passed"
}

TASK [sshd : Install ssh packages] 
<SNIP>
EXIT NOTICE [Playbook execution success] **************************************
===============================================================================
+ popd
/opt/openstack-ansible
+ unset ANSIBLE_INVENTORY
+ unset ANSIBLE_VARS_PLUGINS
+ unset HOST_VARS_PATH
+ unset GROUP_VARS_PATH
[root@openstack openstack-ansible]# 

Run the playbooks

We’ll run a few playbooks to set up the containers and our OpenStack environment.

Move to the openstack-ansible playbooks directory:

[root@aio1 ~]# cd /opt/openstack-ansible/playbooks/
[root@aio1 playbooks]# pwd
/opt/openstack-ansible/playbooks
[root@aio1 playbooks]# 

and execute the playbooks:

[root@openstack playbooks]# openstack-ansible setup-hosts.yml
[root@openstack playbooks]# openstack-ansible setup-infrastructure.yml
[root@aio1 playbooks]# openstack-ansible setup-openstack.yml

If all goes well, your OpenStack installation is complete.

You can verify the OpenStack containers with lxc-ls:

[root@aio1 playbooks]# lxc-ls --fancy
NAME                                   STATE   AUTOSTART GROUPS            IPV4                                           IPV6 
aio1_cinder_api_container-c211b759     RUNNING 1         onboot, openstack 10.255.255.43, 172.29.237.244, 172.29.244.190  -    
aio1_galera_container-9a90cbd9         RUNNING 1         onboot, openstack 10.255.255.50, 172.29.239.126                  -    
aio1_glance_container-c05aab79         RUNNING 1         onboot, openstack 10.255.255.218, 172.29.236.160, 172.29.247.238 -    
aio1_horizon_container-81943ba2        RUNNING 1         onboot, openstack 10.255.255.160, 172.29.237.37                  -    
aio1_keystone_container-a5859104       RUNNING 1         onboot, openstack 10.255.255.40, 172.29.236.95                   -    
aio1_memcached_container-ab998d0e      RUNNING 1         onboot, openstack 10.255.255.175, 172.29.239.49                  -    
aio1_neutron_server_container-439aeb90 RUNNING 1         onboot, openstack 10.255.255.137, 172.29.239.13                  -    
aio1_nova_api_container-c83e5ef0       RUNNING 1         onboot, openstack 10.255.255.216, 172.29.236.52                  -    
aio1_rabbit_mq_container-4fd792fb      RUNNING 1         onboot, openstack 10.255.255.2, 172.29.239.62                    -    
aio1_repo_container-b39d88a1           RUNNING 1         onboot, openstack 10.255.255.227, 172.29.237.146                 -    
aio1_utility_container-fff0b6df        RUNNING 1         onboot, openstack 10.255.255.117, 172.29.237.82                  -    
[root@aio1 playbooks]# 

Find the correct IP address

You should see Horizon (behind HAProxy) listening on port 443 with netstat:

[root@aio1 ~]# netstat -pan | grep -i 443
tcp        0      0 172.29.236.100:443      0.0.0.0:*               LISTEN      12908/haproxy       
tcp        0      0 192.168.122.23:443      0.0.0.0:*               LISTEN      12908/haproxy       
unix  3      [ ]         STREAM     CONNECTED     73443    31134/tmux           
unix  2      [ ]         DGRAM                    1244303  23435/rsyslogd       
[root@aio1 ~]# 

Log on to the OpenStack GUI (Horizon)

Retrieve the admin password:

[root@aio1 ~]# grep keystone_auth_admin_password /etc/openstack_deploy/user_secrets.yml
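
The value it prints is the password for the admin user. Point a browser at the external HAProxy address from the netstat output above (https://192.168.122.23 in this example) and log in as admin.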

Have fun

As promised 3 months ago, Gabe, Mateu and I together with twelve other contributors shipped support for revisions and file uploads today!

What happened since last month? In a nutshell:

JSON:API 2.1

JSON:API 2.1 follows two weeks after 2.0.

Work-arounds for two very common use cases are no longer necessary: decoupled UIs that are capable of previews and image uploads. [4]

  • File uploads work similarly to Drupal core’s file uploads in the REST module, with the exception that a simpler developer experience is available when uploading files to an entity that already exists.
  • Revision support is for now limited to retrieving the working copy of an entity using ?resourceVersion=rel:working-copy. This enables the use case we hear about the most: previewing draft Nodes. [5] Browsing all revisions is not yet possible due to missing infrastructure in Drupal core. With this, JSON:API leaps ahead of core’s REST API. (See the sketch after this list.)
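
For example (hypothetical site URL and node UUID):

# Retrieve the draft (working copy) revision of an article
curl -H "Accept: application/vnd.api+json" \
  "https://example.com/jsonapi/node/article/<uuid>?resourceVersion=rel:working-copy"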

Please share your experience with using the JSON:API module!


  1. This was in the making for most of 2018, see the SA for details. ↩︎

  2. Note that usage statistics on drupal.org are an underestimation! Any site can opt out from reporting back, and composer-based installs don’t report back by default. ↩︎

  3. Which we can do thanks to the tightly managed API surface of the JSON:API module. ↩︎

  4. These were in fact the two feature requests with the highest number of followers. ↩︎

  5. Unfortunately only Node and Media entities are supported, since other entity types don’t have standardized revision access control. ↩︎

January 18, 2019

On lambs in sheep's clothing

I distinctly remember when I realized the tech community was undergoing a significant cultural shift. It turned out to be quite prophetic for western culture. It was late 2013, at a small conference, where the closing keynote had just concluded with a massive standing ovation. It was clear the talk had resonated immensely. Yet there I was, still seated, profoundly uncomfortable, not just because of what I'd heard, but because I felt alone in my quiet dissent. Why did nobody else see what was wrong here?

It's impossible to explain why without describing the actual content in detail. However, even years later I expect that criticizing this talk will not be well received by some, which is odd, considering the entire purpose of public speaking is to reach a wide audience and invite a dialogue. If someone is speaking only in order to be listened to, that's called a sermon, not a talk.

What was said was on the surface mostly unobjectionable. A young woman had just told us the trials of her early working experience. How she'd thrown herself into the job and forgotten to come up for air, neglecting herself in the process. How the focus on work performance left little room for emotional and social well-being, which were equally important. And how she'd overcome all that to bring her there, her first tech conference talk ever.

That didn't bother me at all. It's a common enough tale, which is important for young professionals to hear. They're so accustomed to conforming to the school system, where the roadmap and schedule are already planned out, that they're often unable to adjust to the more fluid and independent environment of the workplace. You have to find your own way, and pacing yourself for the long haul is entirely up to you.

In this case though, that wasn't how she explained it. Instead she blamed the expectations of doing as you're told, behaving "professionally," and checking personal issues at the door. These were misconceptions that people had about what a workplace should be. That instead you should be living the good life, namely being in the place you belong, with people that you love, doing the work that's yours, on purpose. This is what was necessary to create an environment in which she "felt safe."

That all sounds nice on paper. What bothered me was that she was not actually practicing what she was preaching.

Het Lam Gods

The Adoration of the Mystic Lamb, by the brothers Van Eyck, 1432 (le wik)

She proclaimed the virtue of not succumbing to insecurity, yet began by highlighting how nervous she was, and that she needed the audience to close their eyes, which failed to alleviate it. So she asked some volunteers to come on stage to goof off and break the tension for her.

She explained how the last thing she wanted was to feel overly needy, "the grossest feeling ever," yet as part of her presentation had a brief video hangout with her coworkers, who were standing by a thousand miles away to provide live "moral support." After which she struggled to bring back her slides and needed an organizer to step in.

She emphasized the importance of letting people be human beings, with all their diverse and varied abilities, yet did not seem to realize the impossibly stifling professionalism she rejected was actually a framework to allow even irreconcilable differences to be overcome, in service of a common goal.

She talked about how tech features some of the most talented and privileged members of society, who can "create gold out of air," yet criticized a mindset that favors rationality over emotionality. This is the main thing that actually lifted humanity back out of the dark ages to make it possible for her to be there.

A person who praised the emotional intelligence of self-restraint, persistence and self-awareness failed to practice all three, by spilling her feelings out, immediately falling back to others for support and assistance, and not even realizing she was undermining her own points. She seemed to view criticism of emotionality as mere denial, an unwillingness to be honest with oneself, but was using it as a shield in that exact same way.

Most of all, it rang hollow to criticize others for having privilege just for showing up, when her own demeanor seemed to be falling into the role of the innocent, wounded lamb, spinning the yarn for all it was worth. Her notion of her dream job seemed to be exactly the most privileged one, free from significant differences of opinion or character, working only with those who would validate her in all the ways she wanted. A job which she now had.

Now, if a speaker makes a point by phoning home on the spot, that's one thing, and everyone knows live demos are cursed. But would it really be desirable if everyone else walked on stage as a quivering reed, requiring a pat on the back to make it through, flailing along the way? You don't have to tell me about the stress, I have given talks that are live demos from start to finish, in front of hundreds of faces and a camera, and it's not easy. But that's up to me to deal with and mitigate, through careful preparation and practice.

So, if this talk was so self-contradictory, why did it resonate? Well, because it hit all the right notes. She was a young mother showing pictures of her daughter, whom she wanted to teach what it meant to be a good person. There was struggle and redemption, in the form of a lost sheep finding her way out of the cold, dark desert and into a warm oasis of love and support. There was talk of her own internalized sexism, which she let go of, and the importance of inclusion and diversity. Back then those words still passed by with little notice, but today they are completely charged with moral goodness, and fighting evil.

I couldn't really explain it concisely at the time, but I put the pieces together later: this was not a tech talk, it was a secular baptist revival meeting. I was reminded of this by a recent clip of James Lindsay explaining that despite the demise of organized religion in the West, people on all sides are still searching for it, and have started acting out religious practice in other ways. I can believe it, because this was an early taste, creating converts in the audience. A place you belong, with people you love and who love you back, doing important work that's yours, with a purpose, that sounds like the ideal church.

Despite this harsh assessment, I don't intend to malign or single out the speaker, and I deliberately did not link to the talk. I also hope she's grown wiser since, though similar talks have been given dozens or even hundreds of times by now. The phenomenon of sanctifying secular practice is widespread. I merely want to add my voice in saying that this is definitely happening, and this is the best way I can explain it.

As I was revising this post, another video appeared which goes into the Evergreen State College scandal, demonstrating the unholy mix of organisational politics and collective guilt that lay at the basis of it. Professor Bret Weinstein, at the center of the storm along with his wife Heather Heying, describes the exact same feeling of being one of the few people in the room noticing the dishonest and manipulative pressure being applied to sell a moral ideology. He also describes the people who publicly appeared to support it, but privately admitted to not daring to speak out against it.

This is why favoring emotionality over rationality can be so pernicious: it prefers nice-sounding platitudes over reality. This is why dreams of universal harmony and psychological safety are utopian: it denies the fundamental differences and conflicts that exist. Skeptics have an allergic reaction and can see right through it, and it did not fill my heart with the love of Jesus.

Shrine to Grace Hopper, 35C3, 2018
Is it still ironic if you're not allowed to laugh at it?

Five years later, if I look at what the main effects of secular-religious practice have been on the industry, it seems to be to enable exactly the same dysfunction that made traditional religion so unwelcome in the first place. Under the guise of creating better communities and addressing severe "inequities," the clergy feels that the ends justify the means, and that it doesn't need to follow its own commandments. It is now commonly argued that top-down control is necessary to ensure the safety of the users, like the shepherd protecting his flock. Tools have been created to make this easy at a frightening scale, ready to be abused, as well as standard policies to enable it.

At best, this leads to a slow decay of free expression. Nuanceless policing bots and scripts make it trivial for innocent bystanders to get hurt. The threat of losing one's social network on just one of the handful of popular platforms creates a severe chilling effect. This is augmented by the ease of concerted flagging and other public shaming campaigns, which create a guilty-until-proven-innocent environment. In the worst case, it provides moral cover for sociopathy, as plausibly deniable censorship is enacted through the use of shadow bans, demonetization and filtered recommendations.

This also ripples down to the local scale, as a common theme in religious practice is that it must be applied every day, in every place, in order to live a virtuous life. So when this year's 35C3 congress put up conspicuous portraits of women-in-tech and built an actual shrine to Grace Hopper, I couldn't help but see the analogy to portraits of the Virgin Mary, celebrating the immaculate conception of the software compiler. When a DJ spun his tunes in front of an antifa flag, just like the one hanging over the venue entrance, I wondered exactly what sort of transcendent ecstasy was being sought on this dance floor. I can take a "you do you" approach to this, don't get me wrong, but this rule of tolerance was certainly not followed when people started stealing national flags and scrawling graffiti, after complaints severe enough that security advised taking the flags down on the first day.

This is about far more than some industry events or edgy tweets though, as shown by the Grievance Studies hoax. The same James Lindsay, along with scholars Helen Pluckrose and Peter Boghossian, wrote over a dozen fake humanities papers last year, with all the right citations, and got several published in peer-reviewed journals. These included papers on celebrating morbid obesity, inspecting the genitals of dogs for signs of rape culture, putting white students in chains in classrooms, and even a rewritten excerpt from Mein Kampf, with intersectionality replacing naziism. They collected numerous accolades and compliments for their supposedly "rich and exciting" material, before the jig was up.

As they put it, they wanted to demonstrate that what passes for knowledge production is often just sophistry, coating unsupported ideology with a veneer of respectability, as a form of "idea laundering." Instead of using the processes of academia as a crucible for arriving at the truth, scholars in these fields use them to reinforce their preconceptions and manufacture a monoculture.

In response, unable to discredit the papers on their own merits without shooting themselves in the foot, opponents targeted Boghossian with an IRB ethics complaint. His employer Portland State University concluded that a practical audit of indefensible scholarship amounts to doing research on human subjects without their consent. This attempt to save face further reinforces the mockery of process, and merely tries to shoot the messenger.

Religion appears to be a fundamental need for many, which evolved to provide strong in-group cohesion as a competitive advantage, at the expense of demonizing apostates and heretics. When its worst impulses are not contained, rationality is tossed aside as inappropriate, seeking out a greater moral purity with unquestionable zeal, playing as dirty as needed. While the overt practice of faith has now become severely outmoded, it only leads to bigger problems when we pretend it is no longer present or relevant.

I know "X is religion" is not a particularly novel take, but religious war is war, and war never changes.

Antifa Flag at 35C3 (via Twitter)

As such, the situation in tech is just one facet, with clear parallels elsewhere. Faced with a righteous moral struggle against "whiteness" and "nazis," nowhere near as prevalent in reality as in religious fever dreams, and with zero self-awareness, this wave of social justice has turned into a disturbing tsunami of groupthink. Heretics and apostates are more commonly called hateful trolls and alt-right fascists, even if the opposite is true. Without a clearly delineated space to practice their faith in, believers have instead taken it into HR departments, professional networks, academic faculties, media outlets and political parties, bootstrapping their own inquisition in the name of Trust and Safety, rooting out harmful ideas.

The most notable example for me remains the excommunication of James Damore at Google in 2017, fired not because of what he said, but because of what some of his coworkers and the press read into it. They were unable to treat science with the rationality it requires. The psychological safety of his coworkers was more important than his physical safety, judging by the threats they sent him, as they succumbed to emotionally driven, hateful mob behavior to expel the scapegoat. Which, y'know, is a biblical concept. As is the innocent lamb of god. These two avatars also featured prominently in the Drupal community's persecution of Larry Garfield in the same year, in service of an imagined victim of sexual abuse who was never actually consulted.

The knock-on effects continue to escalate, due to the increasing dominance of tech platforms in society, as shown recently with crowdfunder Patreon's capricious banning of wrongthinkers, and the subsequent sabotage of competitor SubscribeStar when PayPal cut off payments. The desire to exclude sinful thoughts can now go as far as denying all avenues of financial support, violating the autonomy of their individual backers. These acts are then justified by clearly labeling the targets as being anti-goodness, and appealing to the supposed superior human judgement of a case-by-case approach, in practice little more than naked subjectivity.

The net result is that whole categories of people are now made actually unsafe, under constant threat of being censored or falsely discredited, in order to satisfy the mere comfort and status of a shortsighted and pampered demographic. That same group will regularly say that it's actually the white men wailing about losing their privilege. So I have to wonder what parallel universe they imagine yesterday's tech was created in, where taking a serious interest in computers, video games or science fiction was not cause for social ostracism. This was a plight shared by geeks of all colors and varieties, and a pattern which appears to be repeating itself more insidiously.

The talk mentioned above did raise the question, "what about the negative impacts of denying people to be themselves entirely at work?" Well, Damore, being autistic, was clearly denied the opportunity of doing any of that at all, despite following the signposted processes. The resulting media coverage then reiterated all its old talking points, in a muddled "conversation" a nebulous "we" need to be having, but which nobody is allowed to disagree with. That tech communities have themselves long served as safe spaces for the neurologically diverse is ignored, directly contradicting the stated aim of inclusion.

I should add, I'm not naive enough to think this is a uniquely left-wing phenomenon, and religious conviction invites the same in return. But in this case, "who started it" does actually matter a lot. The left controls the vast majority of cultural and intellectual levers in the West today, and accountability is long overdue. It's also high time to acknowledge how provincial and seasonal the current American Trump-obsession and its associated race-baiting are. They are preoccupied with the US border, but mostly ignorant of what happens far from it. That the same people who seem extremely concerned about cultural appropriation can't seem to even imagine how narrow and imperialist their perspective really is, is a supremely sad cherry of irony on top.

Social media appears to have been a major driver, and I suspect the early warnings in tech can be directly attributed to its tech-savvy early adopters. By blurring the lines between the personal and the professional, it encourages people to focus on emotional harmony over objective cooperation. This kind of infantilization was one of several factors identified in Lukianoff and Haidt's The Coddling of the American Mind and actually harms social and emotional development. It's a trend that's going global. The world simply cannot be run like one big, happy village, and trying to do so regardless only results in a purity spiral where the genuinely marginalized are necessary collateral damage and the watchmen run amok.

In fact, I'm more and more convinced that the main effect of social media has been to make cluster B disorders such as narcissism and borderline personality socially advantageous. The main weapons deployed by a Mean Girl (m/f/nb/rgba) are selective sharing of information and narrative framing, and instant shares and likes are incredible force multipliers in the right wrong hands.

Not so long ago we confined religion to the personal sphere, and valued the separation of church and state. In a time when elements of the private sector control essential parts of the public sphere, that should include them too. It would be significantly better if we'd start valuing this principle again, because faith is now being used to justify burning the ideals of free society at the stake. The flames probably have not reached you yet, but how much longer?

Even worse, a tech community that was once united around treating censorship as damage, and personal freedom as paramount, is now divided on these very issues. It leaves the door open to far more dangerous leaps of faith, like the idea that the greatest public resource ever created should be backdoored and surveilled down to every nook and cranny for misbehavior, for the good of all. It's happening right now, one law at a time. In case you missed it. It might be worth doing something about it.

As for me, I'm still around, getting work done as best I can. If it's quiet around here, that's only because I'm putting efforts elsewhere, coding in peace. Feel free to hit me up. Just not on Twitter.

I never got around to updating my older post when vSphere 6.7 was released; the list below is now complete up to vSphere 6.7. Your Linux runs in a VMware VM, but on which ESXi version? You can see for yourself: run "dmidecode" and look at lines 10, 11 and 12 of its output.
ESX 2.5 - BIOS Release Date: 04/21/2004 - Address: 0xE8480 - Size: 97152 bytes
ESX 3.0 - BIOS Release Date: 04/17/2006 - Address: 0xE7C70 - Size: 99216 bytes
ESX 3.5 - BIOS Release Date: 01/30/2008 - Address: 0xE7910 - Size: 100080 bytes
ESX 4 - BIOS Release Date: 08/15/2008 - Address: 0xEA6C0 - Size: 88384 bytes
ESX 4U1 - BIOS Release Date: 09/22/2009 - Address: 0xEA550 - Size: 88752 bytes
ESX 4.1 - BIOS Release Date: 10/13/2009 - Address: 0xEA2E0 - Size: 89376 bytes
ESXi 5 - BIOS Release Date: 01/07/2011 - Address: 0xE72C0 - Size: 101696 bytes
ESXi 5.1 - BIOS Release Date: 06/22/2012 - Address: 0xEA0C0 - Size: 89920 bytes
ESXi 5.5 - BIOS Release Date: 07/30/2013 - Address: 0xEA050 - Size: 90032 bytes
ESXi 6 - BIOS Release Date: 09/30/2014 - Address: 0xE9A40 - Size: 91584 bytes
ESXi 6.5 - BIOS Release Date: 04/05/2016 - Address: 0xEA580 - Size: 88704 bytes
ESXi 6.7 - BIOS Release Date: 07/03/2018 - Address: 0xEA520 - Size: 88800 bytes
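
If you don't feel like counting lines, dmidecode can also print the relevant field directly. A minimal sketch, assuming a dmidecode recent enough to support the -s string keywords:

$ sudo dmidecode -s bios-release-date
07/03/2018

Per the table above, a release date of 07/03/2018 means the VM booted on an ESXi 6.7 host.
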
NB These DMI properties are set at boot time. Even if your VM gets live-migrated to a host running a different vSphere version, your VM will keep the values it got from the host it booted on. What you see is the vSphere version of the host your VM booted on. It is the VM power-on that matters, so a guest OS reboot will not regenerate the DMI properties. A guest OS shut down on the other hand will also power off the VM, and the next power-on will regenerate the DMI properties.
We have just published the second set of interviews with our main track and keynote speakers. The following interviews give you a lot of interesting reading material about various topics, from ethics to databases and AI systems: Bradley M. Kuhn and Karen Sandler: Can Anyone Live in Full Software Freedom Today? Confessions of Activists Who Try But Fail to Avoid Proprietary Software Deb Nicholson: Blockchain: The Ethical Considerations Drew Moseley: Mender - an open source OTA software update manager for IoT Duarte Nunes: Raft in Scylla. Consensus in an eventually consistent database Fernando Laudares: Hugepages and databases: working with abundant…

January 15, 2019

Eighteen years ago today, I released Drupal 1.0.0. What started from humble beginnings has grown into one of the largest Open Source communities in the world. Today, Drupal exists because of its people and the collective effort of thousands of community members. Thank you to everyone who has been and continues to contribute to Drupal.

Eighteen years is also the voting age in the US, and the legal drinking age in Europe. I'm not sure which one is better. :) Joking aside, welcome to adulthood, Drupal. May your day be bug free and filled with fresh patches!

We have this Brother P750W label printer for https://fosdem.org . It's a model with wireless networking only. We wanted something with decent wired networking for use with multiple Debian desktop clients, but that was ~300€ more expensive. So here's how we went about configuring the bloody thing...

Not wanting to use proprietary drivers, I did some prying; this is what the device told me:
 * The thing speaks Apple Airprint.
 * By default, it shows up as an access point with SSID "DIRECT-brPT-P750WXXXX". XXXX is the last 4 digits of the printer's serial number.
 * Default wireless password "00000000".
 * Default IP address 192.168.118.1.
 * It allows only one client to connect at a time.
 * Its web interface is totally, utterly broken:
   * Pointing a browser at the default device IP redirects to a page that 404s.
   * Pointing a browser at the admin page URL gleaned from cups 404s too.
 * An upgrade to the latest firmware doesn't seem to solve any issues.

As a reminder to myself, this is what I did to get it to work:
 * Get a Debian stable desktop.
 * Add the Debian buster repositories. That is Debian testing at the time of writing.
 * Set up apt pinning in /etc/apt/preferences to prefer stable over buster (see the sketch after this list).
 * Upgrade cups related packages to the version from Debian buster. "apt install cups* -t buster" did the trick.
 * Notice this makes the printer get autodiscovered.
 * Watch /etc/cups/printers.conf and /etc/cups/ppd/ for what happens when connected to the P750W. Copy the relevant bits.
 * Get a Debian headless thingie (rpi, olinuxino, whatever...) running Debian stable.
 * Connect it to the printer wifi using wpa_supplicant (again, see the sketch after this list).
 * Install cups* on it.
 * Drop the printers.conf and P750W ppd from the Debian buster desktop into /etc/cups and /etc/cups/ppd respectively. The only change was in printers.conf, from the avahi autodiscovery url to the default 192.168.118.1.
 * Make sure to share the printer. I don't remember if I had to set that or if it was the default, but the cups web interface on port 631 should help there if needed.
 * Add the new shared printer on the Debian buster desktop. Entering the print server IP auto discovers it. Works flawlessly.
 * Try the same on a Debian stable desktop. Fails complaining about airprint related stuff.
 * Upgrade cups* to the Debian buster version, with the same apt pinning as before. Apt-listchanges says something about one of the package upgrades being crucial to get some airprint devices to work. Alas, I didn't note the exact package, and was too lazy to run it again.
 * Install the printer again. Now works flawlessly.
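
For reference, a minimal sketch of what the apt pinning and the wpa_supplicant configuration mentioned above can look like (the priorities are one reasonable choice, not gospel):

/etc/apt/preferences:

Package: *
Pin: release a=stable
Pin-Priority: 700

Package: *
Pin: release a=buster
Pin-Priority: 400

With this, stable wins by default, while "apt install ... -t buster" can still pull from buster on demand.

/etc/wpa_supplicant/wpa_supplicant.conf, using the printer's default credentials listed earlier:

network={
    ssid="DIRECT-brPT-P750WXXXX"
    psk="00000000"
}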

January 11, 2019

And a transition towards a permanent, gentle disconnection

Without particularly noticing it, I have reached the end of my disconnection (all the posts about it can be found here). A symbolic date that called for an assessment. First of all, by removing my filter and taking a tour of the now-abhorred social networks.

Not that I really wanted to, but more out of curiosity, to see what it would do to me and to check whether I had missed anything. You might think I was impatient but, against all expectations, I had to force myself. In the name of science, for the completeness of the experiment! I don't miss those sites; quite the contrary. I didn't feel I had missed anything important and, even if I had, I was actually perfectly fine with it.

My first impression was of arriving late at a rather dull party. You know, the kind of party you arrive at stressed about missing the best part, only to realize that everyone actually seems bored stiff.

Oh, sure, there were comments on my posts, some of them interesting (I didn't read everything, just quickly skimmed the latest ones). I had plenty of notifications, and hundreds of connection requests on Linkedin (which I accepted).

But in the end, nothing that made me want to come back. On the contrary, I felt nauseous, like a sugar addict wolfing down a whole chocolate cake after three months of dieting.

What is even more striking is that this half hour of social network catch-up obsessed me for several hours afterwards. I wanted to go check things, I kept thinking about what I had seen scroll by, I wondered what I should reply to this or that comment. My mind was once again completely cluttered.

I have to face the facts: I am not capable of using social networks in a healthy way. I am too sensitive to their subliminal messages, to their addiction tactics.

To be completely honest with myself, I must admit that, technically, I did not fully respect my disconnection. I relaxed some of the initial rules by "unblocking" Slack, for professional reasons, and Reddit. I also fairly often had to disable my filters to reach a link someone had sent me on Twitter, to look up the details of a professional contact on Linkedin, or even to access a mainstream press article that had been sent to me. But that's fine. The goal was never to become "pure" but to regain control over my use of the Internet. Each time, my filters were only disabled for the time strictly needed to load the offending page.

One anecdote illustrates my disconnection well: during a family meal, the conversation turned to the yellow vests. I had never heard of them. After a few seconds of astonishment at my ignorance, it was explained to me and, that same evening, I read the Wikipedia page on the subject.

Wikipedia, by the way, has turned out to be an extraordinary disconnection tool. Its home page has a small section covering current news and events. I concluded that if an event is not on Wikipedia, then it is not really important.

While not being informed frees up mental space and seems to carry no harmful consequences, it is dramatic to see just how addicted my brain is. In front of a screen, it wants to receive information, whatever it may be. When I procrastinate, I find myself hunting for anything that could bring me news without disabling my blocker.

That is, I think, the reason why my visits to Reddit (initially used only to ask questions in a few subreddits) have become more frequent (not invasive yet, but something to keep an eye on). I also check my RSS reader every day (fortunately, it is not on my phone) but genuinely useful feeds are rare. Social networks had trained me to take an interest in anything and everything. With RSS, I have to pick sites that keep posting things I find interesting over the long run and that don't drown them in marketing noise.

Another important effect of these three months of disconnection is the beginning of a detachment from my need for immediate recognition. Beyond likes on the networks, I realize that giving free talks or appearing in the media brings me little or nothing in return for a lot of effort, travel and fatigue. Amusingly, I have already received quite a few requests to talk about my disconnection in the media (all of which I have declined so far). My ego is still there, but it now wants to be recognized over the long term, which requires a deeper investment rather than mere media appearances. Besides, between you and me, turning down a media request is even more gratifying for the ego than accepting it.

I also came to realize that, contrary to what Facebook tries to instill, my blog is not a business. I don't have to answer messages within 24 hours (something Facebook strongly encourages). I am allowed to reply only to emails and not to log into assorted proprietary messaging systems. I am allowed to miss opportunities. I am a human being who shares some of his experiences through writing. Everyone is free to read, copy, share, draw inspiration, even to contact or support me. But I am free not to be the customer service desk of my own writing.

The conclusion of all this is that, three months in, I have no desire whatsoever to end my disconnection. My life today, without Facebook or the news media, seems better to me. Once every two or three days, I disable my filter to see whether I have notifications on Mastodon, Twitter or Linkedin, but I don't even feel like looking at the feeds. I read things that interest me thanks to RSS, I blissfully dive into the books that were waiting on my shelf, and I have many enriching conversations over email.

Why would I ever leave my hermitage?

Photo by Jay Mantri on Unsplash

I am @ploum, a disconnected speaker and electronic writer, paid on a pay-what-you-want basis via Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE license.

January 10, 2019

LOADays 2019 is a GO and will be held on the 27th and 28th of April 2019 in Antwerp, Belgium. We'll be opening the CfP shortly.

January 09, 2019

During the holidays we have performed some interviews with main track speakers from various tracks. To get up to speed with the topics discussed in the main track talks, you can start reading the following interviews: Chris Brind: Open Source at DuckDuckGo. Raising the Standard of Trust Online Daniel Stenberg: DNS over HTTPS - the good, the bad and the ugly. Why, how, when and who gets to control how names are resolved Joe Conway: PostgreSQL Goes to 11! Juan Linietsky: Making the next blockbuster game with FOSS tools. Using Free Software tools to achieve high quality game visuals. Richard…
With FOSDEM just around the corner, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers makes FOSDEM happen and makes it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially for heralding during the conference, during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch: food will be provided. Would you like to be part of the team that makes FOSDEM tick? Sign up here! You…

January 08, 2019

Last year, I talked to nearly one hundred Drupal agency owners to understand what is preventing them from selling Drupal. One of the most common responses is that Drupal's administration UI looks outdated.

This critique is not wrong. Drupal's current administration UI was originally designed almost ten years ago when we were working on Drupal 7. In the last ten years, the world did not stand still; design trends changed, user interfaces became more dynamic and end-user expectations have changed with that.

To be fair, Drupal's administration UI has received numerous improvements in the past ten years; Drupal 8 shipped with a new toolbar, an updated content creation experience, more WYSIWYG functionality, and even some design updates.

A comparison of the Drupal 7 and Drupal 8 content creation screens, highlighting some of the improvements in Drupal 8.

While we made important improvements between Drupal 7 and Drupal 8, the feedback from the Drupal agency owners doesn't lie: we have not done enough to keep Drupal's administration UI modern and up-to-date.

This is something we need to address.

We are introducing a new design system that defines a complete set of principles, patterns, and tools for updating Drupal's administration UI.

In the short term, we plan on updating the existing administration UI with the new design system. Longer term, we are working on creating a completely new JavaScript-based administration UI.

The content administration screen with the new design system.

As you can see on Drupal.org, community feedback on the proposal is overwhelmingly positive, with comments like "Wow! Such an improvement!" and "Well done! High contrast and modern look."

Sample space sizing guidelines from the new design system.

I also ran the new design system by a few people who spend their days selling Drupal and they described it as "clean" with "good use of space" and a design they would be confident showing to prospective customers.

Whether you are a Drupal end-user, or in the business of selling Drupal, I recommend you check out the new design system and provide your feedback on Drupal.org.

Special thanks to Cristina Chumillas, Sascha Eggenberger, Roy Scholten, Archita Arora, Dennis Cohn, Ricardo Marcelino, Balazs Kantor, Lewis Nyman, and Antonella Severo for all the work on the new design system so far!

We have started implementing the new design system as a contributed theme with the name Claro. We are aiming to release a beta version for testing in the spring of 2019 and to include it in Drupal core as an experimental theme by Drupal 8.8.0 in December 2019. With more help, we might be able to get it done faster.

Throughout the development of the refreshed administration theme, we will run usability studies to ensure that the new theme indeed is an improvement over the current experience, and we can iteratively improve it along the way.

Administration themes must meet large and varied use cases. For example, accessibility is critical for the administration experience, and I'm happy to see that this initiative has connected with the accessibility team and taken their feedback into account.

Acquia has committed to being an early adopter of the theme through the Acquia Lightning distribution, broadening the potential base of projects that can test and provide feedback on the refresh. Hopefully other organizations and projects will do the same.

How can I help?

The team is looking for more designers and frontend developers to get involved. You can attend the weekly meetings on #javascript on Drupal Slack Mondays at 16:30 UTC and on #admin-ui on Drupal Slack Wednesdays at 14:30 UTC.

Thanks to Lauri Eskola, Gábor Hojtsy and Jeff Beeman for their help with this post.

January 07, 2019

Tidelift, which provides organizations with commercial-grade support for Open Source software and pays part of the proceeds to software maintainers, raises a $25 million Series B. I hadn't heard about Tidelift before, but it turns out their office is 300 meters from Acquia's. I reached out and we're going to grab a coffee soon.

Mateu, Gabe and I just released JSON:API 2.0!

Read more about it on Mateu’s blog.

I’m proud of what we’ve achieved. I’m excited to see more projects use it. And I’m confident that we’ll be able to add lots of features in the coming years, without breaking backwards compatibility. I was blown away just now while generating release notes: apparently 63 people contributed. I never realized it was that many. Thanks to all of you :)

I had a bottle of Catalan Ratafia (which has a fascinating history) waiting to celebrate the occasion. Why Ratafia? Mateu is the founder of this module and lives in Mallorca, where Catalan is spoken. Txin txin!

If you want to read more about how it reached this point, see the July, October, November and December blog posts I did about our progress.

January 03, 2019

I published the following diary on isc.sans.edu: “Malicious Script Leaking Data via FTP”:

On the last day of 2018, I found an interesting Windows cmd script which was uploaded from India (SHA256: dff5fe50aae9268ae43b76729e7bb966ff4ab2be1bd940515cbfc0f0ac6b65ef) with a very low VT score. The script is not obfuscated and contains a long list of commands based on standard Windows tools. Here are some examples… [Read more]

[The post [SANS ISC] Malicious Script Leaking Data via FTP has been first published on /dev/random]

January 02, 2019

Well, an intense 2018: traveling and climbing with old and new friends, making steady progress in my climbing level, learning new techniques, practising and refining them, both in climbing and professionally:
  • FOSDEM
  • Config management Camp in Ghent
  • Climbing in the Gorges du Tarn and Gorges de la Jonte with Vertical Thinking, including two multipitches (Le jardin enchanté and Diagonal du Gogol)
  • Climbing in Fontainebleau with Alex and Tom, doing lots of yellow, orange and blue routes in L'éléphant, Apremont and Roche aux sabots.
  • Climbing trip to Ettringen, practicing some trad techniques, first time climbing Basalt
  • Climbing trip with Vertical Thinking to Guillestre (Haut Val Durance), doing 2 nice multipitches (4-5 pitches including a 6a)
  • Climbing day with Koen and Wouter in Moha
  • Visiting Romania (the Moldova and Transylvania regions) with Eduard and Ecatarina: Lady's Rock, Vatra Dornei, the Dochia cabin and Toaca Peak, the Transfagarasan, Sighisoara, Bran Castle, Brasov, Iasi
  • Short citytrip in Vienna (during a 22h layover between two flights)
  • Multipitch climbing with Koen in Yvoir, also exploring Anhée
  • Climbing training working towards 6c.
  • Percona Live Europe in Frankfurt
  • Climbing trip to Siurana with Rouslan and Rat: leading my first 6b (redpoint), a 6b+ on toprope, and projecting a 7a
  • Quick visit to Barcelona
  • Visiting the 35C3 conference in Leipzig, another 4 days of infosec, IT, technology and science. First year as an 'angel', volunteering to help with some tasks at the conference. Unfortunately, I was confined to my hotel room by illness for half of the conference. Luckily, all talks are recorded and streamed, so I could follow a few from my bed: https://media.ccc.de
  • Spending New Year's Eve in a doctor's office and looking for a pharmacy, due to earlier mentioned illness.

Plans for 2019:
- more climbing: training for 6c/7a, fall training to get more comfortable while leading, a climbing trip to Buis-les-Baronnies in April, maybe a trip to Boulder, Colorado, possibly an alpine experience in the summer or a climbing trip on the US west coast in autumn
- continue Rock maintenance with Belgian Rebolting Team.
- find a new house, preferably with a small garden
- conferences: FOSDEM, Config Management Camp, Percona Live (Austin, Texas), Percona Live Europe, 36C3
- first time Rock Werchter (Tool is coming)

January 01, 2019

Our keyserver is now accepting submissions for the FOSDEM 2019 keysigning event. The annual PGP keysigning event at FOSDEM is one of the largest of its kind. With more than one hundred participants every year, it is an excellent opportunity to strengthen the web of trust. For instructions on how to participate in this event, see the keysigning page. Key submissions close on Wednesday 23 January, to give us some time to generate and distribute the list of participants. Remember to bring a printed copy of this list to FOSDEM.

December 31, 2018

Last week was my twelfth Drupalversary!

The first half dozen years as a volunteer contributor/student, the second half as a full-time contributor/Acquia employee. Which makes this a special Drupalversary and worth looking back on :)

2006–2012

The d.o highlights of the first six years were my Hierarchical Select and CDN modules. I started those in my first year or so of using Drupal (which coincides with my first year at university). They led to a summer job for Mollom, working with/for Dries remotely — vastly better than counting sandwiches or waiting tables!

It also resulted in me freelancing during the school holidays: the Hierarchical Select module gained many features thanks to agencies not just requesting but also sponsoring them. I couldn't believe that companies thousands of kilometers away would trust a 21-year-old to write code for them!

Then I did my bachelor thesis and master thesis on Drupal + WPO (Web Performance Optimization) + data mining. To my own amazement, my bachelor thesis (while now irrelevant) led to freelancing for the White House and an internship with Facebook.

Biggest lesson learned: opportunities are hiding in unexpected places! (But opportunities are more within reach to those who are privileged. I had the privilege to do university studies, to spend my free time contributing to an open source project, and to propose thesis subjects.)

2012–2018

The second half was made possible by all of the above and sheer luck.

When I was first looking for a job in early 2012, Acquia had a remote hiring freeze. It got lifted a few months later. Because I’d worked remotely with Dries before (at Mollom), I was given the opportunity to work fully remotely from day one. (This would turn out to be very valuable: since then I’ve moved three times!) Angie and Moshe thought I was a capable candidate, I think largely based on the Hierarchical Select module.
Imagine if the remote hiring freeze had not been lifted, or if I'd written a different module! I was lucky in my past choices and timing.
So I joined Acquia and started working on Drupal core full-time! I was originally hired to work on the authoring experience, specifically in-place editing.
The team of four I joined in 2012 has quadrupled since then and has always been an amazing group of people — a reflection of the people in the Drupal community at large!

Getting Drupal 8 shipped was hard on everyone in the community, but definitely also on our team. We all did whatever was most important; I probably contributed to more than a dozen subsystems along the way. The Drupal 8 achievement I'm most proud of is probably the intersection of cacheability and the render pipeline: Dynamic Page Cache & BigPipe, both of which have accelerated many billions of responses by now. After Drupal 8 shipped, my primary focus has been the API-First Initiative. It's satisfying to see Drupal 8 do well.

Biggest lessons learned:

  1. code criticism is not personal criticism — not feeling the need to defend every piece of code you’ve written is not only liberating, it also makes you immensely more productive!
  2. always think about future maintainability — having to provide support and backwards compatibility made me truly understand the consequences of mistakes I’ve made.

To many more years with the Drupal community!

2018 is end of life and 2019 will be released soon. Autoptimize 2.5 is not at that point yet, but I just pushed a version to GitHub which adds image lazy loading to Autoptimize.

The actual lazy loading is implemented by the integrated lazysizes JS lazy loader, which has a lot of options, some of which I will experiment with and bring into Autoptimize to improve the default user experience.
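
For the curious: lazysizes-style lazy loading boils down to markup along these lines (a simplified sketch; Autoptimize rewrites the image tags for you, so you don't write this by hand):

<img class="lazyload" src="placeholder.gif" data-src="image.jpg" alt="..." />

The lazysizes script then swaps data-src into src as the image scrolls into view.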

If you want, you can download the beta (2.5.0-beta2) from GitHub now (disable 2.4.4 before activating the beta) and start using the new functionality immediately. And if you have feedback: shoot! I'll be happy to take your remarks along to get AO 2.5 ready for release (I'm targeting March, but we'll see).

Enjoy the celebrations and have a great 2019!

Why I now keep my social media posts to a minimum, even at the cost of losing readers.

Intellectually, I knew that social networks were not doing me any good. They had become a reflex rather than a real source of pleasure. No longer checking them was therefore both logical and easy. All I had to do was find the right way to block them, wrap the whole thing in the pompous label of a "disconnection" and turn it into blog posts, satisfying my ego while freeing up mental space.

On the other hand, I kept posting to social networks. To keep existing as a blogger, as a public figure. Even though I no longer saw the likes and the comments, I knew they existed. To keep up the rhythm, I posted links to old posts on the days I wasn't publishing anything new.

My first reason for acting this way is that the Facebook algorithm filters what you see. Even if you "like" my Facebook page, there is barely more than a one-in-ten chance that my latest post will show up in your feed. I have seen a post that went unnoticed draw attention on its third or fourth repost. Facebook goes as far as favoring pages that post regularly, and does not hesitate to let you know when you haven't published for a while.

On Twitter, the situation is even worse. Most accounts post the same link dozens of times in a single day.

When preparing my social media posts, I even took a mischievous pleasure in tweaking the hook line, making it as clickbaity as possible. Without seeming to, I was manipulating you into wanting to read me. I was teasing your curiosity like a good little intern at a big state-subsidized daily.

In short, in an ultra-noisy world, the only way to get noticed is to make even more noise. I may have the best arguments in the world; I was still adding mental pollution to your environment.

My wife pointed it out to me: "This is a façade disconnection. You know you are being read. You are feeding the social networks. You act as if you are disconnected because you don't see it directly, but it doesn't matter, because your ego knows that, online, everything goes on as before. It's hypocritical." Indeed, as long as I pollute, my disconnection is pure hypocrisy. It only goes one way. A bit like buying organic, local food wrapped in plastic.

Duly noted.

My disconnection has entered a harder phase. It is pushing me to explore a facet of my personality I would have preferred not to touch: my ego, my need for public recognition.

Like many creators, I seek recognition, an egotistical quest encouraged by Facebook. Faced with Facebook's toxicity, we look for alternative tools in order to keep existing. When the real question is: "Must we feed our ego at all costs? What is the point of this quest?"

To try to find a way out, I will now feed my social media accounts only in an ultra-minimal way. A simple automatic rule posts each new article to my Facebook page, Twitter and Mastodon, without any hook line.

Maybe one day I will delete my accounts entirely. But I am aware that a huge majority of the population doesn't know about RSS, and that Facebook, for all its flaws, is the closest thing they have to it.

From now on, my accounts pollute less. They are content to be factual: a new post has been published. And if that is still too noisy for you, unsubscribe from my page without remorse, use RSS, send my articles straight to Pocket, or visit my page whenever the mood strikes you.

My audience will of course suffer. Some of you will stop reading me. They won't notice. Neither will I, because I don't measure my audience. I have to learn and accept that I am not my audience. That I can write without seeking recognition at all costs. That one loyal reader who reads me regularly is certainly worth a thousand visitors who stumbled onto this page after one of my posts happened to buzz. That next to the appearance of petty glory, a small number of deep and sincere relationships is priceless. That what social networks offer is only the appearance of an audience, one that flatters my ego. But at a price where creator and reader alike are the suckers.

Written like that, it sounds beautiful and obvious. But deep down, I struggle. I seek petty glory, I want to feel recognized.

Seen from the outside, this quest for recognition has something pathetic about it. Those who have risen above it radiate an impression of wisdom. You can find them at the point where they meet the shy and the timid, those who spent their whole lives trying to stay discreet before accepting to take risks, to rise. There, on a thin ridge, balance those people you can hear without them having to raise their voice, those sages who see far and whose silences carry as much meaning as thousands of egocentrics shouting themselves hoarse.

Is that what I want to strive for? Is that what I should strive for? Would it be good for me to strive for that? Am I even capable of it?

Let's be honest: I am still incapable of "just publishing a post" and then forgetting about it. I worked for days on an idea, I polished it and then... nothing. I should immediately move on to something else. I am dying for feedback, to watch the post spread, to "check my statistics", to feel that I exist. It's my blogger's fix.

Going through detox makes me realize just how full of mental pollution our world is, pollution we contribute to both professionally and in our private lives. We use the words "share", "inform", even "educate", when in reality all we are doing is passing around the dopamine joint of our addict ego.

We launch participatory, citizen-driven projects, based on renewable energy and denouncing multinationals. But as soon as the first financial contributions come in, we hire a marketer/community manager to ask everyone to like our project on Facebook.

For someone like me who tries to promote this blog or my crowdfunding projects, it is hard to accept that we are sick with permanent advertising, that we need to become discreet, to work only by word of mouth, to grow slowly or even to shrink.

But maybe it's precisely because it is difficult that it is worth attempting. We worry about the pollution of the air, the soil, the water, our bodies. But nobody seems to worry about the pollution of our minds...

Photo by Henry & Co. on Unsplash

I am @ploum, a disconnected speaker and electronic writer, paid on a pay-what-you-want basis via Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE license.

December 30, 2018

Have you ever wanted to preview your new Drupal theme in a production environment without making it the default yet?

I did when I was working on my redesign of dri.es earlier in the year. I wanted the ability to add ?preview to the end of any URL on dri.es and have that URL render in my upcoming theme.

It allowed me to easily preview my new design with a few friends and ask for their feedback. I would send them a quick message like this: "Hi Matt, check out an early preview of my site's upcoming redesign: https://dri.es?preview. Please let me know what you think!"

Because I use Drupal for my site, I created a custom Drupal 8 module to add this functionality.

Like all Drupal modules, my module has a *.info.yml file. The purpose of the *.info.yml file is to let Drupal know about the existence of my module and to share some basic information about the module. My theme preview module is called Previewer so it has a *.info.yml file called previewer.info.yml:

name: Previewer
description: Allows previewing of a theme by adding ?preview to URLs.
package: Custom
type: module
core: 8.x

The module has only one PHP class, Previewer, that implements Drupal's ThemeNegotiatorInterface interface:
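
A minimal sketch of what that class can look like (the theme machine name returned below is a placeholder for your own preview theme, and the query check could equally be done via an injected request stack):

<?php

namespace Drupal\previewer\Theme;

use Drupal\Core\Routing\RouteMatchInterface;
use Drupal\Core\Theme\ThemeNegotiatorInterface;

/**
 * Switches to the preview theme when ?preview is present in the URL.
 */
class Previewer implements ThemeNegotiatorInterface {

  /**
   * Tells Drupal this negotiator applies when ?preview is in the query string.
   */
  public function applies(RouteMatchInterface $route_match) {
    return \Drupal::request()->query->has('preview');
  }

  /**
   * Returns the machine name of the theme to render the page with.
   */
  public function determineActiveTheme(RouteMatchInterface $route_match) {
    // Placeholder: replace with the machine name of your upcoming theme.
    return 'my_preview_theme';
  }

}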

The function applies() checks if ?preview is set as part of the current URL. If so, applies() returns TRUE to tell Drupal that it would like to specify what theme to use. If Previewer is allowed to specify the theme, its determineActiveTheme() function will be called. determineActiveTheme() returns the name of the theme. Drupal uses the specified theme to render the current page request.

For this to work, we have to tell Drupal about our theme negotiator class Previewer. This is done by registering it as a service in previewer.services.yml:

services:
  theme.negotiator.previewer:
    class: Drupal\previewer\Theme\Previewer
    tags:
      - { name: theme_negotiator, priority: 10 }

previewer.services.yml tells Drupal to call our class Drupal\previewer\Theme\Previewer when it has to decide what theme to load.

A service is a common concept in Drupal (inherited from Symfony). Many of Drupal's features are separated into a service. Each service does just one job. Structuring your application around a set of independent and reusable service classes is an object-oriented programming best-practice. To some it might feel unnecessarily complex, but it actually promotes reusable, configurable and decoupled code.

Note that Drupal 8 adheres to PSR-4 namespaces and autoloading. This means that files must be named in specific ways and placed in specific directories in order to be recognized and loaded. Here is what my directory structure looks like:

$ tree previewer
previewer
├── previewer.info.yml
├── previewer.services.yml
└── src
    └── Theme
        └── Previewer.php

And that's it!

December 21, 2018

Drupal 8 has been growing 40 to 50 percent year over year. It's a healthy growth rate. Regardless, it is always worth exploring how we can continue to accelerate that growth.

Earlier this week, I wrote about the power of removing obstacles to growth, and shared how Amazon approaches its own growth blockers. Amazon identified at least two blockers for long-term growth: (1) shipping costs and (2) shipping times. For more than a decade, Amazon has been focused on eliminating both. They have spent an unbelievable amount of creativity, effort, time, and money to eliminate them.

In that blog post, I promised to share my thoughts around Drupal's own growth barriers. What obstacles can we eliminate to fuel Drupal's long-term growth? Well, I believe the limitations to Drupal's growth can be summarized as:

  1. Make Drupal easy to evaluate and adopt
  2. Make Drupal easy for content creators and site builders
  3. Reduce the total cost of ownership for developers and site owners
  4. Keep Drupal relevant and impactful
  5. Promote Drupal and help Drupal agencies win

For those that have read my blog or watched my DrupalCon keynote presentations, none of these will come as a surprise. Just like Amazon's examples, fixing these obstacles has been, and will be, a multi-year effort.

Drupal's five product strategy tracks, with a number of current initiatives shown on each track.

1. Make Drupal easy to evaluate and adopt

We need to make it easy for more people to try Drupal. To help evaluators explore Drupal's possibilities, we improved the download and installation experience, and included a demonstration site with core. We made fantastic progress on this in 2018.

Now that we have improved the evaluator experience, I'd love to see us focus on the "new user" experience. When you put yourself in the shoes of a new Drupal user, you'd still find it hard to set up a local development environment. There are too many options, too little direction, and no single official way to get started with Drupal. The "new user" is not receiving enough attention, and that slows adoption, so I'd love to see us focus on that in 2019.

2. Make Drupal easy for content creators and site builders

One of the most powerful trends I've noticed time and time again is that simplicity wins. People expect software to be functionally powerful and easy to use. This is especially true for content creators and site builders.

To make Drupal easier to use for content creators and site builders, we've introduced WYSIWYG and in-place editing in Drupal 8.0, and now we're working hard on media management, layout building, content workflows and a new administration and authoring UI.

A lot of these initiatives add tools to the UI that empower content creators and site builders to do more with less code. Long term, I believe that we need to add more of these "no-code" or "low-code" capabilities to Drupal.

3. Reduce the total cost of ownership for developers and site owners

Developers want to be agile, fast and deliver high quality projects that add value for their organization. Developers don't want their tools to get in the way.

For Drupal this means that they want to build sites, including themes and modules, without being bogged down by complex upgrades, expensive migrations or cumbersome developer workflows.

For developers and site owners we have made upgrades easier, we adopted a 6-month innovation model, and we extended security coverage for minor releases. This removes the complexity from major upgrades, gives organizations more time to upgrade, and allows us to release new capabilities more frequently. This is a very big deal for developers and site owners!

In addition, we're working on improving Drupal's Composer support and configuration management capabilities. This will help developers automate and streamline their day-to-day work.

Longer term, improved Composer support could act as a stepping stone towards automated updates, which would be one of the most effective ways to free up a developer's time.

4. Keep Drupal relevant and impactful

The innovation in the Drupal ecosystem happens thanks to Drupal contributors. We need to attract new contributors to Drupal, and keep existing contributors excited. This means we have to keep Drupal relevant and impactful.

To keep Drupal relevant, we've been investing in making Drupal an API-first platform for many years now. Headless Drupal or decoupled Drupal is one of Drupal's competitive advantages. Drupal's web service APIs allow developers to use Drupal with their JavaScript framework of choice, push content to different channels, and better integrate Drupal with different technologies in the marketing stack.

Drupal developers can now do unprecedented things with Drupal that weren't available before. JavaScript and mobile application developers have been familiarizing themselves with Drupal due to its improved API-first capabilities. All of this keeps Drupal relevant, ensures that Drupal has high impact, and that we attract new developers to Drupal.

5. Promote Drupal and help Drupal agencies win

While Drupal is well-known as an Open Source project, there isn't a deep understanding of how Drupal is evolving or how Drupal compares to its competitors.

Drupal is improving rapidly every six months with each new minor version release, but I'm not sure we're getting that message out effectively. We need to promote our amazing progress, not only to everyone in the web development community, but also to marketers and content managers, who are now often weighing in heavily on CMS decisions.

We do an incredible job collaborating on code — thousands of us are helping to build Drupal — but we do a poor job collaborating on marketing, education and promotion. Imagine what could happen if these thousands of individuals and agencies would all collaborate on promoting Drupal!

That is why the Drupal Association started the Promote Drupal initiative, and why we're trying to rally people in the community to work together on creating pitch decks, case studies, and other collateral to promote and market Drupal.

Here are a few things already happening:

  • There is an updated Drupal Brand Book for organizations to follow as they design Drupal marketing and sales materials.
  • A team of volunteers is creating a comprehensive Drupal pitch deck that Drupal agencies can use as a starting point when working with new clients.
  • DrupalCon will have a new Content & Digital Marketing Track for marketing teams responsible for content generation, demand generation, user journeys, and more; and an "Agency Leadership Track" for those running Drupal agencies.
  • We will begin work on a competitive comparison chart — contrasting Drupal with other CMS competitors like Adobe, Sitecore, Contentful, WordPress, Prismic, and more.
  • A number of local Drupal Associations are hiring marketing people to help promote Drupal in their region.

Just like all open source contribution, it takes many to move things forward. So far, 40 people have signed up to help with these marketing efforts. If your organization has a marketing team that would like to contribute to the marketing of Drupal, check out the Promote Drupal initiative page and please join the Promote Drupal team.

Educating the world about how Drupal is evolving, the amazing use cases we support, and how Drupal compares to old and new competitors will go a very long way towards raising awareness of the project and growing the businesses built on and around Drupal.

Final thoughts

After talking to hundreds of Drupal users and would-be users, as well as dozens of agency owners, I believe we're working on the right things. Overcoming these growth obstacles is a multi-year effort. While the various initiatives might change, I believe we'll keep working on these five tracks for the next decade. We've been making steady progress the last few years but need to remain both patient and committed to driving them home. Just like Amazon continues to work on its growth obstacles after more than a decade, I expect we'll be working on these five obstacles for many years to come.

You surely know that feeling we get when, after a trip, we head back toward our home.

Suddenly, the streets become familiar: we know every house, every lamppost, every paving stone. Physically, there is still some distance to cover but, in our head, we have already arrived home. It is a feeling that usually gives a little boost of energy. Horses, it is said, have the same sensation and start moving faster. We say they "smell the stable."

This familiar zone, when you think about it, is usually bounded by arbitrary borders we impose on ourselves: a slightly wide road, a crossroads, a bridge. Beyond them lies foreign land. We may know it well, but we are no longer at home.

In my life, I have noticed that home is the zone we cover on foot. Nose in the wind. The car, by contrast, does not extend our personal territory. Once shut inside, we are no longer in a geographical place; we are "in the car." On the screens of its windows, an abstract landscape scrolls by.

And then, one day not so long ago, I discovered the bicycle.

Unlike the car, the bicycle puts us in direct contact with our environment. We can stop, change our mind, turn around without fearing a blast of horns. We say hello to the people we pass. We can spot a little path we had never noticed before and take it, "just to see."

In short, the bicycle lets us extend our territory. First by 3-4 km. Then by 10. Then by 20 and further still.

By riding again and again, I have come to feel at home in a zone that stretches up to 20 km from my house. I know every little path, every trail.

My territory according to Stravastats

When I venture beyond my "border", I get a shiver at the idea of entering the unknown. And I feel an intense relief when I cross back over it the other way. But, after a few times, I notice that my border has moved a little further out.

This is not without its downsides: I have to ride ever further to cross my border. In the car, I tend to get lost by taking directions that, at one point or another, become impassable for an automobile. I forget that I am no longer on a bike!

But I am at home. I am the master of a gigantic domain. I do not particularly dream of great exotic journeys or faraway lands. Because I know that adventure awaits me 10, 20 or 30 km away, down that little path I have never yet taken.

With my backside on a saddle and my feet on the pedals, I am an explorer, a conqueror. I drink in the landscapes, the light, the climbs and the descents.

In short, I am at home…

Note: I had been procrastinating on this post for months when Thierry Crouzet started publishing Born to Bike. So I simply had to add my own stone to the edifice.

Photo by Rikki Chan on Unsplash

I am @ploum, a speaker and disconnected electronic writer, paid at a freely chosen price via Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE license.

This Thursday, 17 January 2019 at 7 p.m., the 74th Mons session of the Jeudis du Libre de Belgique will take place.

The topic of this session: SMACK: an open source software stack for big data processing

Theme: big data | data analysis

Audience: developers | analysts | data scientists

Speaker: Mathieu Goeminne (CETIC)

Venue: HEPH Condorcet, Chemin du Champ de Mars, 15 – 7000 Mons – Auditoire Bloc E – located at the back of the parking lot (see this map on the Openstreetmap site; follow the site's internal road to reach building E). Note that this is not the same building as the one used for some previous sessions.

Attendance is free and only requires registering by name, preferably in advance, or at the door. Please indicate your intention by signing up via http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also benefit from the support of our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, feel free to consult the agenda and to subscribe to the mailing list so as to systematically receive the announcements.

As a reminder, the Jeudis du Libre are intended as spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organized, in collaboration with the Mons universities and colleges involved in training computer scientists (UMONS, HEH and Condorcet), on their premises, with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description: Open source software holds a predominant place in the data scientist's toolbox. The joint use of some of these tools has led to "toolboxes" or "technology stacks" that are tending to become standardized. This presentation will introduce the components of the SMACK stack (Spark, Mesos, Akka, Cassandra and Kafka), along with their roles and their interactions within big data processing projects.
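As a taste of how some of these components fit together, here is a minimal, hypothetical sketch (not taken from the talk) in Python: Spark Structured Streaming consumes events from Kafka and persists each micro-batch into Cassandra via the DataStax connector, while in a full SMACK deployment Mesos would schedule the Spark cluster and Akka services would typically feed the Kafka topic. The topic, keyspace and table names are invented:

    # Hypothetical sketch of the S, C and K of SMACK: Spark Structured
    # Streaming reads events from Kafka and stores them in Cassandra.
    # Requires the Kafka source and the DataStax Cassandra connector
    # packages on the Spark classpath.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = (SparkSession.builder
             .appName("smack-demo")
             .config("spark.cassandra.connection.host", "127.0.0.1")
             .getOrCreate())

    # Read the raw event stream from a Kafka topic (name is invented).
    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "localhost:9092")
              .option("subscribe", "events")
              .load()
              .select(col("key").cast("string"),
                      col("value").cast("string")))

    def save_batch(batch_df, batch_id):
        # Persist each micro-batch into Cassandra (hypothetical schema).
        (batch_df.write
         .format("org.apache.spark.sql.cassandra")
         .options(keyspace="demo", table="events")
         .mode("append")
         .save())

    query = events.writeStream.foreachBatch(save_batch).start()
    query.awaitTermination()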

Short bio: Mathieu Goeminne obtained his Master's degree in Computer Science in 2009 and his PhD in Science in 2013 at the University of Mons (UMONS). In his thesis, he studied the evolution of software ecosystems. He joined the Big Data team of CETIC's SST department in March 2016, where he mainly works on implementing tools for processing and analyzing data characterized by significant volume, throughput or complexity. He is interested in issues related to the growing importance of digital content in our society.

December 20, 2018

I published the following diary on isc.sans.edu: “Using OSSEC Active-Response as a DFIR Framework”:

In most of our networks, endpoints are often the weakest link because they are more difficult to control (for example, laptops travel and are used at home). They can also be spread across different locations, or even different countries for the biggest organizations. To better manage them, tools can be deployed to perform many different tasks… [Read more]
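To give an idea of what the diary's approach can look like in practice, here is a minimal, hypothetical sketch of a DFIR-oriented active-response script, assuming the classic OSSEC convention of passing the action as the first positional argument (newer OSSEC/Wazuh releases pass JSON on stdin instead); the collection directory and command list are invented:

    #!/usr/bin/env python3
    # Hypothetical OSSEC active-response script used for DFIR: when an
    # alert fires, collect volatile data instead of blocking the source.
    # Assumes the classic argument convention: <action> <user> <srcip> ...
    import os
    import subprocess
    import sys
    import time

    LOG_DIR = "/var/ossec/logs/dfir"  # invented collection directory

    def main():
        action = sys.argv[1] if len(sys.argv) > 1 else "add"
        if action != "add":  # only collect on "add", ignore "delete"
            return
        os.makedirs(LOG_DIR, exist_ok=True)
        stamp = time.strftime("%Y%m%d-%H%M%S")
        with open(f"{LOG_DIR}/triage-{stamp}.txt", "w") as out:
            # A few classic volatile-data commands; extend as needed.
            for cmd in (["ps", "auxww"], ["ss", "-tunap"], ["who", "-a"]):
                out.write(f"### {' '.join(cmd)}\n")
                out.write(subprocess.run(cmd, capture_output=True,
                                         text=True).stdout)

    if __name__ == "__main__":
        main()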

[The post [SANS ISC] Using OSSEC Active-Response as a DFIR Framework was first published on /dev/random]