Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

October 15, 2024

Bikepunk, les chroniques du flash

Twenty years after the flash, the catastrophe that decimated humanity, young Gaïa has only one way to escape the stifling community she grew up in: get on her bike and ride off with Thy, an old cycling hermit.

To survive in this devastated world, where any form of electricity is impossible, where cyclists are hunted down, and where fertile young women are highly sought after, Gaïa and Thy can count on nothing but their mastery of the handlebars.

My new novel, « Bikepunk, les chroniques du flash », is now available in every bookshop in France, Belgium and Switzerland. Or online, on the publisher's website.

I have so many anecdotes to tell about this very personal project, typed entirely on a mechanical typewriter and born of my passion for cycling exploration. I have so much to tell you about my relationship with Gaïa and Thy, and about the chronicles of the flash that punctuate the story like blog posts from a future in which electricity, gasoline and the Internet have suddenly disappeared.

An eco-cycling fable or an action thriller? A philosophical reflection or Mad Max on a bike?

I'll let you choose; I'll let you find out.

But I must warn you:

You may never again dare to get into an electric car…

Welcome to the Bikepunk universe!

Cover of the book Bikepunk: a cyclist made of recycled paper races down a slope through a devastated, entirely silver city.

An engineer and writer, I explore the impact of technology on humans, both in writing and in my talks.

Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

To support me, buy my books (from your local bookshop if possible)! I have just published Bikepunk, a post-apocalyptic cycling thriller typed entirely on a mechanical typewriter.

October 11, 2024

Only the mad attempt the climb

Have I told you about Bruno Leyval, the artist who created the cover and the illustration for Bikepunk? Of course I have; remember, I even put one of his photos into words.

I love working with Bruno. He is a consummate professional who nonetheless embodies the archetype of the mad artist. To get to know him, I recommend this wonderful interview.

The artist is a madman. Mad with that revolutionary, necessary, indispensable folly whose praises Erasmus was already singing. Dangerous insanity, however, is bred by fame. A fame Bruno has known, and fled with fascinating discernment:

I have known the effects of artistic recognition and I can tell you that you need a solid head not to tip over into madness. You find yourself plunged into a parallel world where everyone kisses your ass, and the worst part is that it feels good!

This is the artist's paradox: either he goes unrecognized and his message dies out with little reach, or he becomes famous and his message turns into a vile commercial concoction. The message is thoroughly corrupted by the time it reaches the public (who, through their idolatrous adoration, bear much of the responsibility for that corruption).

The worst victims of this insanity are surely those who experience it without an ounce of artistic madness. Those who are corrupted from the start, because they have no message other than the quest for fame. Politicians.

Humanity's gangrene may well be this inevitable glorification of those on a compulsive quest for fame, this handing of power to the power-obsessed. Faced with this disease, art and madness seem to me the only reasonable ways to fight back.

I love Bruno because we are completely different. He is a shaman, spiritual, mystical, a worker with his hands. I am a hyper-rational, scientific engineer grafted onto a keyboard. Each of us keeps the other awake with his snoring, literally and figuratively.

And yet, once the costumes come off (the body paint and bison head for him, the white lab coat and test tube for me), you discover remarkably similar ways of thinking. We are both climbing the same mountain, but by opposite paths.

Reading Bruno, and being lucky enough to share moments with him, I think that mountain is called wisdom.

It is a mountain whose summit nobody has ever reached. A mountain from which no climber has ever returned alive.

Only the mad attempt the climb.

Only the mad, in this world, are in search of wisdom.

Which is, after all, why we call them mad.

When you open the book Bikepunk, once past the cover, before the first turn of the wheels, before even the first printed letter, an image, a mystical sensation, may assail you as it assails me every time: Bruno's spiritual madness reflecting back at me my own mechanico-scientistic madness.

You are about to enter, to share, our madness.

There: now you are mad.

Welcome to our mountain!

  • Illustration: « La montagne sacrée », Bruno Leyval, Creative Commons BY-NC-ND 4.0


October 08, 2024

Bicycle and typewriter: in praise of satiety

What if we had enough?

One of Bikepunk's first readers has the impression of finding the character of Thy in Grant Petersen, a maker of indestructible, anti-competitive bicycles.

That touches me and reassures me about the message I am trying to carry in the book. Grant Petersen could perfectly well adopt the adage I put at the end of Bikepunk: "I don't fear the end of the world: I have a bicycle."

Grant Petersen's blog is fascinating. The latest post includes, among other things, the photo of a newspaper containing an interview with José Mujica, the former president of Uruguay (the one who lived in his little house and walked to his job as president).

One sentence struck me:

Take Uruguay. Uruguay has 3.5 million inhabitants. But it imports 27 million pairs of shoes. We produce garbage and suffer at work. For what?

This is something I was already trying to articulate in my 2015 TEDx talk, which is very popular on YouTube (but I draw no benefit from that, so here is the PeerTube link).

Production literally means turning our environment into waste (the intermediate consumption step being very limited). All our factories, all our shopping malls, all our marketing experts do only one thing: try to convert our planet into waste as quickly as possible.

At what point will we decide that we have enough?

The anti-waste, built-to-last side of typewriters is one of the things that fascinate me about them. They are no longer manufactured, yet they are so robust that all those ever produced are enough to supply today's small (but real) market.

In fact, typewriters were built to last so well that, during the Second World War, manufacturers bought machines back from private owners in order to supply the army, which needed them in huge quantities.

I find a fascinating common thread between the reflections of Grant Petersen on bicycles, José Mujica on politics, Paul Watson on marine wildlife, Richard Stallman on software, Richard Polt on typewriters and even Bruno Leyval on graphic art. All of them, beyond being bearded white men, have rejected the very principle of growth. Each, at his own level, tries to ask the question: "What if we had enough?"

They are seen as extremists. But is the endless quest for fame and wealth not the truly morbid extreme?

In a poetic ideal, I realize that I need nothing more than a bicycle, a typewriter and a good library. Funny, that reminds me of a novel on precisely this subject, out on October 15. Have I mentioned it already? Have you preordered it yet?

Addendum on book preorders

The reason these preorders matter is that, unlike some publishing giants, my publisher PVH cannot afford to finance piles of 30 books in every bookshop to push booksellers into trying to sell them, even if it means pulping half of them afterwards (which is what will happen to half of the huge pile of bestsellers at the entrance of your bookshop). For most lesser-known books, the bookseller will therefore not have it in stock, or will take a single copy. Which is how a little-known book stays little known.

If, on the other hand, a bookseller receives two or three preorders for a book that has not yet come out, their curiosity is piqued. They may order more copies from the distributor. They will probably read it and, sometimes, put it on display (a supreme honour that my short-story collection « Stagiaire au spatioport Omega 3000 » was lucky enough to receive at the lovely bookshop and tea room Maruani in Paris).

A few booksellers ordering a particular title catches the attention of the distributor and the sales network, who decide the book has potential and mention it to other booksellers, who read it and, sometimes, put it on display. The loop is closed…

It is of course less effective than printing 100,000 copies in one go, selling them in supermarkets and covering the subway with posters bearing grandiloquent phrases pulled out of thin air: "The new literary event from Ploum, the new master of the cycling thriller".

We can agree that ads in the subway are ugly. In any case, stories about typewriters and bicycles probably will not interest 100,000 people (although my publisher would not object…).

But I think it could really touch those who love to browse bookshops, their bicycle leaning against the storefront, and whose eye might be caught by a shiny, flashy cover on which a warm cyclist made of recycled paper stands out…

Those who like to settle down with a book and sometimes wonder: "What if we had enough?"


October 05, 2024

Ember Knights cover art

Proton is a compatibility layer that lets Windows games run on Linux. Running a Windows game is mostly just a matter of hitting the Play button within Steam. It works so well that many games now run faster on Linux than on native Windows. That's what makes the Steam Deck the best gaming handheld of the moment.

But a compatibility layer is still a layer, so you may encounter … incompatibilities. Ember Knights is a lovely game with fun co-op multiplayer support. It runs perfectly on the (Linux-based) Steam Deck, but on my Ubuntu laptop I encountered long loading times (startup took 5 minutes and loading between worlds was slow). Once the game was loaded, though, it ran fine.

Debugging the game revealed lots of EAGAIN errors while the game was trying to access the system clock. Raising the number of allowed open files fixed the problem for me.

Add the following to the end of these files:

  • in /etc/security/limits.conf:
* hard nofile 1048576
  • in /etc/systemd/system.conf and /etc/systemd/user.conf:
DefaultLimitNOFILE=1048576 

Reboot.
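
After rebooting, you can check that the new limit actually took effect. A minimal sketch in Python (my addition, assuming the 1048576 value configured above; the resource module is in the standard library on Linux):

import resource

# Query the soft and hard limits on open file descriptors for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft, hard)  # expect 1048576 1048576 after the change above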

The Witcher 3 cover art and an in-game screenshot

“The Witcher 3: Wild Hunt” is considered to be one of the greatest video games of all time. I certainly agree with that sentiment.

At its core, The Witcher 3 is an action role-playing game with a third-person perspective in a huge open world. You develop your character as the story advances. At the same time, you can freely roam and explore as much as you like. The main story is captivating, and the world is filled with side quests and lots of interesting people. Fun for at least 200 hours, if you're the exploring kind. If you're not, the base game (without DLCs) will still take you 50 hours to finish.

While similar to other great games like Nintendo's The Legend of Zelda: Breath of the Wild and Sony's Horizon Zero Dawn, the strength of the game is its deep lore, which originates from the Witcher novels written by the "Polish Tolkien", Andrzej Sapkowski. It's not just a game but a universe (nowadays it even includes a Netflix TV series).

A must play.

Played on the Steam Deck without any issues (“Steam Deck Verified”)

October 03, 2024

A couple of months after reaching 1600, I hit another milestone: Elo 1700!

When I reached an Elo rating of 1600, I expected the climb to get more difficult. Surprisingly, moving up to 1700 was easier than I thought.
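
As a refresher, here is a minimal sketch (my illustration, not from the post) of the standard Elo expected-score formula, which predicts a result from the rating difference between two players:

def elo_expected(rating_a: float, rating_b: float) -> float:
    # Expected score for player A against player B (1 = win, 0.5 = draw, 0 = loss).
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# A 1700-rated player is expected to score about 0.64 against a 1600-rated opponent.
print(round(elo_expected(1700, 1600), 2))  # 0.64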

I stuck with my main openings but added a few new variations. For example, I started using the "Queen's Gambit Declined: Cambridge Springs Defense" against White's 1. d4 – a name that, just six months ago, might as well have been a spell from Harry Potter. Despite expanding my opening repertoire, my opening knowledge remains limited, and I tend to stick to the few openings I know well.

Two men playing chess at an outdoor market in New York City. Trying my luck against a chess hustler in New York City. I lost.

A key challenge I keep facing is what my chess coach calls "pattern recognition", the ability to instantly recognize common tactical setups and positional themes. Vanessa, my wife, would almost certainly agree with that – she's great at spotting patterns in my life that I completely miss. To work on this blind spot, I've made solving chess puzzles a daily habit.

What has really started to make sense for me is understanding pawn breaks and how to create weaknesses in my opponent's position – while avoiding creating weaknesses in my own. These concepts are becoming clearer, and I feel like I'm seeing the board better.

Next stop: Elo 1800.

Invitation to the launch of the novel Bikepunk and to 20 years of Ploum.net

To celebrate the release of his new novel « Bikepunk, les chroniques du flash » on October 15 and the 20th anniversary of the blog Ploum.net on October 20, Ploum is pleased to invite:

.........................(add your name here, dear reader)

to a signing session, a meet-up and a friendly drink, in two different settings.

Brussels, October 12

October 12 from 4 pm at the brand-new Chimères bookshop, in Brussels.

404 avenue de la Couronne
1050 Ixelles
(an address computer scientists may find impossible to locate)

Paris, October 18

October 18 at 5:30 pm at the bookshop À Livr'Ouvert, in Paris.

171b Boulevard Voltaire
75011 Paris

In both cases entry is free; family, friends and companions are all welcome.

Cycling attire encouraged.

Elsewhere in the French-speaking world

The French-speaking world is not limited to Brussels and Paris. If you are a bookseller, or if you know great independent bookshops, order Bikepunk there today (ISBN: 978-2-88979-009-8) and don't hesitate to put us in touch to organize a signing session. Or even, why not, a signing session in a bike shop?

If you don't have a bookshop at hand, order directly from the publisher's website.

Looking forward to meeting you, chains and pedals at the ready,

Bikepunkly yours…


October 02, 2024

A scale with blocks next to it

Recently, a public dispute has emerged between WordPress co-founder Matt Mullenweg and hosting company WP Engine. Matt has accused WP Engine of misleading users through its branding and profiting from WordPress without adequately contributing back to the project.

As the Founder and Project Lead of Drupal, another major open source Content Management System (CMS), I hesitated to weigh in on this debate, as this could be perceived as opportunistic. In the end, I decided to share my perspective because this conflict affects the broader open source community.

I've known Matt Mullenweg since the early days, and we've grown both our open source projects and companies alongside each other. With our shared interests and backgrounds, I consider Matt a good friend and can relate uniquely to him. Equally valuable to me are my relationships with WP Engine's leadership, including CEO Heather Brunner and Founder Jason Cohen, both of whom I've met several times. I have deep admiration for what they've achieved with WP Engine.

Although this post was prompted by the controversy between Automattic and WP Engine, it is not about them. I don't have insight into their respective contributions to WordPress, and I'm not here to judge. I've made an effort to keep this post as neutral as possible.

Instead, this post is about two key challenges that many open source projects face:

  1. The imbalance between major contributors and those who contribute minimally, and how this harms open source communities.
  2. The lack of an environment that supports the fair coexistence of open source businesses.

These issues could discourage entrepreneurs from starting open source businesses, which could harm the future of open source. My goal is to spark a constructive dialogue on creating a more equitable and sustainable open source ecosystem. By solving these challenges, we can build a stronger future for open source.

This post explores the "Maker-Taker problem" in open source, using Drupal's contribution credit system as a model for fairly incentivizing and recognizing contributors. It suggests how WordPress and other open source projects could benefit from adopting a similar system. While this is unsolicited advice, I believe this approach could help the WordPress community heal, rebuild trust, and advance open source productively for everyone.

The Maker-Taker problem

At the heart of this issue is the Maker-Taker problem, where creators of open source software ("Makers") see their work being used by others, often service providers, who profit from it without contributing back in a meaningful or fair way ("Takers").

Five years ago, I wrote a blog post called Balancing Makers and Takers to scale and sustain Open Source, where I defined these concepts:

The difference between Makers and Takers is not always 100% clear, but as a rule of thumb, Makers directly invest in growing both their business and the open source project. Takers are solely focused on growing their business and let others take care of the open source project they rely on.

In that post, I also explain how Takers can harm open source projects. By not contributing back meaningfully, Takers gain an unfair advantage over Makers who support the open source project. This can discourage Makers from keeping their level of contribution up, as they need to divert resources to stay competitive, which can ultimately hurt the health and growth of the project:

Takers harm open source projects. An aggressive Taker can induce Makers to behave in a more selfish manner and reduce or stop their contributions to open source altogether. Takers can turn Makers into Takers.

Solving the Maker-Taker challenge is one of the biggest remaining hurdles in open source. Successfully addressing this could lead to the creation of tens of thousands of new open source businesses while also improving the sustainability, growth, and competitiveness of open source – making a positive impact on the world.

Drupal's approach: the Contribution Credit System

In Drupal, we've adopted a positive approach to encourage organizations to become Makers rather than relying on punitive measures. Our approach stems from a key insight, also explained in my Makers and Takers blog post: customers are a "common good" for an open source project, not a "public good".

Since a customer can choose only one service provider, that choice directly impacts the health of the open source project. When a customer selects a Maker, part of their revenue is reinvested into the project. However, if they choose a Taker, the project sees little to no benefit. This means that open source projects grow faster when commercial work flows to Makers and away from Takers.

For this reason, it's crucial for an open source community to:

  1. Clearly identify the Makers and Takers within their ecosystem
  2. Actively support and promote their Makers
  3. Educate end users about the importance of choosing Makers

To address these needs and solve the Maker-Taker problem in Drupal, I proposed a contribution credit system 10 years ago. The concept was straightforward: incentivize organizations to contribute to Drupal by giving them tangible recognition for their efforts.

We've since implemented this system in partnership with the Drupal Association, our non-profit organization. The Drupal Association transparently tracks contributions from both individuals and organizations. Each contribution earns credits, and the more you contribute, the more visibility you gain on Drupal.org (visited by millions monthly) and at events like DrupalCon (attended by thousands). You can earn credits by contributing code, submitting case studies, organizing events, writing documentation, financially supporting the Drupal Association, and more.

A screenshot of an issue comment on Drupal.org showing example issue credits: jamadar worked on this patch as a volunteer, but also as part of his day job at TATA Consultancy Services, on behalf of their customer, Pfizer.

Drupal's credit system is unique and groundbreaking within the Open Source community. The Drupal contribution credit system serves two key purposes: it helps us identify who our Makers and Takers are, and it allows us to guide end users towards doing business with our Makers.

Here is how we accomplish this:

  • Certain benefits, like event sponsorships or advertising on Drupal.org, are reserved for organizations with a minimum number of credits.
  • The Drupal marketplace only lists Makers, ranking them by their contributions. Organizations that stop contributing gradually drop in ranking and are eventually removed.
  • We encourage end users to require open source contributions from their vendors. Drupal users like Pfizer and the State of Georgia only allow Makers to apply in their vendor selection process.
A slide from my recent DrupalCon Barcelona State of Drupal keynote: a grid of 14 projects, each labeled with the individual and organization responsible for building the corresponding recipe. It illustrates how we recognize and celebrate Makers in our community, encouraging active participation in the project.
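
To make the mechanics concrete, here is a hypothetical sketch (invented organizations, data, and names, not the Drupal Association's actual implementation) of a marketplace ranking based on credits earned over a trailing twelve-month window, as described later in this post:

from datetime import date, timedelta

# Hypothetical contribution records: (organization, date, credits earned).
contributions = [
    ("Acme Web Co", date(2024, 3, 1), 120),
    ("Acme Web Co", date(2023, 6, 1), 300),  # older than twelve months: ignored
    ("Pixel Agency", date(2024, 8, 15), 90),
]

def marketplace_ranking(records, today):
    """Rank organizations by credits earned in the twelve months before `today`."""
    cutoff = today - timedelta(days=365)
    totals = {}
    for org, when, credits in records:
        if when >= cutoff:
            totals[org] = totals.get(org, 0) + credits
    # Organizations that stop contributing naturally fall out of the ranking.
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

print(marketplace_ranking(contributions, today=date(2024, 9, 30)))
# [('Acme Web Co', 120), ('Pixel Agency', 90)]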

Governance and fairness

To make sure the contribution credit system is fair, it benefits from oversight by an independent, neutral party.

In the Drupal ecosystem, the Drupal Association fulfills this crucial role. The Drupal Association operates independently, free from control by any single company within the Drupal ecosystem. Some of the Drupal Association's responsibilities include:

  1. Organizing DrupalCons
  2. Managing Drupal.org
  3. Overseeing the contribution tracking and credit system

It's important to note that while I serve on the Drupal Association's Board, I am just one of 12 members and have not held the Chair position for several years. My company, Acquia, receives no preferential treatment in the credit system; the visibility of any organization, including Acquia, is solely determined by its contributions over the preceding twelve months. This structure ensures fairness and encourages active participation from all members of the Drupal community.

Drupal's credit system certainly isn't perfect. It is hard to accurately track and fairly value diverse contributions like code, documentation, mentorship, marketing, event organization, etc. Some organizations have tried to game the system, while others question whether the cost-benefit is worthwhile.

As a result, Drupal's credit system has evolved significantly since I first proposed it ten years ago. The Drupal Association continually works to improve the system, aiming for a credit structure that genuinely drives positive behavior.

Recommendations for WordPress

WordPress has already taken steps to address the Maker-Taker challenge through initiatives like the Five for the Future program, which encourages organizations to contribute 5% of their resources to WordPress development.

Building on this foundation, I believe WordPress could benefit from adopting a contribution credit system similar to Drupal's. This system would likely require the following steps to be taken:

  1. Expanding the current governance model to be more distributed.
  2. Providing clear definitions of Makers and Takers within the ecosystem.
  3. Implementing a fair and objective system for tracking and valuing various types of contributions.
  4. Implementing a structured system of rewards for Makers who meet specific contribution thresholds, such as priority placement in the WordPress marketplace, increased visibility on WordPress.org, opportunities to exhibit at WordPress events, or access to key services.

This approach addresses both key challenges highlighted in the introduction: it balances contributions by incentivizing major involvement, and it creates an environment where open source businesses of all sizes can compete fairly based on their contributions to the community.

Conclusion

Addressing the Maker-Taker challenge is essential for the long-term sustainability of open source projects. Drupal's approach may provide a constructive solution not just for WordPress, but for other communities facing similar issues.

By transparently rewarding contributions and fostering collaboration, we can build healthier open source ecosystems. A credit system can help make open source more sustainable and fair, driving growth, competitiveness, and potentially creating thousands of new open source businesses.

As Drupal continues to improve its credit system, we understand that no solution is perfect. We're eager to learn from the successes and challenges of other open source projects and are open to ideas and collaboration.

October 01, 2024

Ode to the losers

Is the Fediverse for losers? asks Andy Wingo.

The question is provocative and clever: the Fediverse seems to be a den of ecologists, free-software advocates, defenders of social rights, feminists and cyclists. In short, the list of everyone who is not in the spotlight, who seems to be "losing".

I had never looked at things from that angle. To me, the common trait is above all a will to change things. And, by definition, wanting to change things means not being satisfied with the current situation. So you are a "loser". In fact, every revolutionary is, by definition, a loser. The moment they win, they are no longer a revolutionary but a person in power!

Being progressive therefore means being seen as a loser through Andy's lens. Progress is born of dissatisfaction. The world will never be perfect; it will always need improving, there will always be a fight to wage. But not everyone has the energy to fight all the time. That's human, and that's as it should be. When I lack the energy, when I am not a revolutionary, I focus on a minimal goal: not being an obstacle to those who lead the fight.

I am not a vegetarian, but when a restaurant offers vegetarian menus, I compliment the staff on the initiative. It's not much, but at least I send a signal.

If you don't have the strength to quit proprietary social networks, encourage those who do, keep an account on the free networks, treat them as equals of the others. You won't be a revolutionary, but at least you won't be an obstacle.

The cost of rebellion

I am aware that not everyone has the luxury of being a rebel. I teach my students that they will hold one of the best degrees on the market, and that companies fight to hire them (I know; I have recruited for my own teams). That they are therefore, for the most part, immune to long-term unemployment. And that this luxury comes with a moral responsibility: to say "No!" when our conscience dictates it.

Besides, I have said "No!" often enough (and not always very politely) to know that the risk is truly minimal when you are a white man with a good degree.

Note that I am not trying to tell you what is right or wrong, nor how you should think. That is your business. I just think the world would be infinitely better if workers refused to do what goes against their conscience. And stop telling yourselves stories, stop lying to yourselves. No, you do not have a "mission". You are just helping to build a system that sells crap so you can buy your own crap.

That said, saying "No" is not always possible. Because we really have something to lose. Or to gain. Because the moral compromise seems necessary or advantageous. Once again, I am not judging. We all make moral compromises. The point is to be aware of them. To each their own.

But when a head-on "No" is not possible, there remains passive rebellion: playing dumb. It is a technique that works remarkably well. Sometimes it even works better than open opposition.

It consists of asking questions:

— Is that legal? It does seem odd to me, doesn't it?
— Can you walk me through it? Because I get the feeling we are being asked to do something really filthy here, no?
— That doesn't seem very ethical towards the customers, or the planet. What do you think?

This kind of questioning works best when both sides are willing to be convinced. Sadly, that is all too rare…

Be losers: consume independent culture

You can also be a rebel in what you consume and, more particularly, in your cultural consumption.

So as not to look like losers, our reflex is to do as everyone else does. If a book or a film is successful, we buy it more readily. Just look at the "already a million readers" banners on bestseller covers to see that it sells. I must be an exception, as my housemates demonstrated to me 25 years ago:

— Hey, you're watching a film? What is it?
— You wouldn't like it, Ploum.
— Why wouldn't I like it?
— Because everyone likes this film.

The "Relay" railway-station bookshops, owned by Bolloré, even have shelves with the top 10 bestsellers. Note that this top 10 is entirely fictitious: to be on it, you just have to pay. The booksellers receive orders about which books to put on the shelf. More generally, I invite you to question every "top sellers" or "top 50" list. You will discover how arbitrary, commercial and easily manipulated these rankings are once there is a little money to invest. Sometimes the "top" is simply a registered trademark, like the "Products of the Year": you just pay to put the badge on your packaging (one day I will tell you all about that). Not to mention the influence of advertising, or of media which, even if they claim otherwise, are bought through the connections of press officers. I recently came across, in a Belgian public-service outlet, a glowing review of the film « L'Heureuse Élue ». I have not seen the film, but judging by the trailer, the probability of it being good seems extremely low. Add that the review points out no flaws, repeatedly urges you to go see it as soon as possible, and is unsigned, and you have plenty of clues that this is (barely) disguised advertising (and completely illegal advertising at that, on a public-service outlet no less).

Sometimes it is a little more subtle. Even, in some cases, so subtle that the journalists themselves do not realize how much they are being manipulated (given the rates paid per article, most good journalists are elsewhere). The press is now little more than an advertising echo chamber, mainly for the benefit of the sponsors of football teams. You just have to be aware of it.

Being a rebel, for me, is therefore deliberately choosing to take an interest in independent, offbeat culture. Going to concerts of local bands nobody has heard of. Buying books at festivals from self-published authors. Taking an interest in everything that has no advertising budget.

At the last Trolls & Légendes festival, I met Morgan The Slug and bought the two volumes of his comic « Rescue The Princess ». It is clearly not great comic art. But I can see the author growing page after page and, to my great surprise, my son loves it. The moments we spent together reading the two volumes would not have been any better with more commercial comics. We are both waiting for volume 3!

Be losers: stop following proprietary networks

It has now been years since I deleted my accounts on all social networks (except Mastodon). I regularly see feedback from people who have done the same and, every time, I read exactly what I felt myself.

I noticed a low level anxiety, that I had not even known was there, disappear. I'm less and less inclined to join in the hyper-polarization that dominates so much of online dialog. I underestimated the psychological effects of social media on how I think.

(excerpt from ATYH's gemlog)

Being exposed all day long to a constant stream of images, videos and short texts creates permanent stress, an urge to conform, to react, to take a position.

The Sleeping Giants collective shows what it is like to be subscribed to the feeds of the far-right fachosphere.

Personally, I do not find the article very interesting, because it teaches me nothing. It all seems obvious to me, said and said again. Fascists live in a bubble designed to frighten them (a Black man or an Arab beats up a white man) and to reassure them (the white man beats up the Arab. Yes, it's subtle, try to keep up).

But on social networks, we are all in a bubble. We all have a bubble that both frightens us (politics, wars, global warming…) and reassures us (the opening of a bike lane, cute cats). We are permanently riding the same emotional roller coaster and wrecking our own brains. All the better to consume: to fight the fear (alarms, medication) or to "treat ourselves" (an organic cotton t-shirt with the cover of Ploum's new novel; yes, really, they are coming).

There is no "right" way to consume, whether goods or information. You can never "win"; that is the very principle of the consumerist society of dissatisfaction.

Be losers: win back your lives!

Being a rebel is therefore, above all, accepting your own identity, your own solitude. Accepting not being informed about everything, not having an opinion on everything, leaving a group that does not suit you, not putting yourself forward all the time, losing all those essentially fake micro-opportunities to make a contact, gain a follower or a customer.

So yes, being a rebel is seen as losing. To me, it is above all refusing to blindly accept the rules of the game.

Being a loser may, today, be the only way to win back your life.


September 30, 2024

Approximately 1,100 Drupal enthusiasts gathered in Barcelona, Spain, last week for DrupalCon Europe. As per tradition, I delivered my State of Drupal keynote, often referred to as the "DriesNote".

If you missed it, you can watch the video or download my slides (177 MB).

In my keynote, I gave an update on Drupal Starshot, an ambitious initiative we launched at DrupalCon Portland 2024. Originally called Drupal Starshot, inspired by President Kennedy's Moonshot challenge, the product is now officially named Drupal CMS.

The goal of Drupal CMS is to set the standard for no-code website building. It will allow non-technical users, like marketers, content creators, and site builders, to create digital experiences with ease, without compromising on the power and flexibility that Drupal is known for.

A four-month progress report

A preview of the new Drupal.org front page with the updated Drupal brand and content.

While Kennedy gave NASA eight years, I set a goal to deliver the first version of Drupal CMS in just eight months. It's been four months since DrupalCon Portland, which means we're halfway through.

So in my keynote, I shared our progress and gave a 35-minute demo of what we've built so far. The demo highlights how a fictional marketer, Sarah, can build a powerful website in just hours with minimal help from a developer. Along her journey, I showcased the following key innovations:

  1. A new brand for a new market: A brand refresh of Drupal.org, designed to appeal to both marketers and developers. The first pages are ready and available for preview at new.drupal.org, with more pages launching in the coming months.
  2. A trial experience: A trial experience that lets you try Drupal CMS with a single click, eliminating long-standing adoption barriers for new users. Built with WebAssembly, it runs entirely in the browser – no servers to install or manage.
  3. An improved installer: An installer that lets users install recipes – pre-built features that combine modules, configuration, and default content for common website needs. Recipes bundle years of expertise into repeatable, shareable solutions.
  4. Events recipe: A simple events website that used to take an experienced developer a day to build can now be created in just a few clicks by non-developers.
  5. Project Browser support for recipes: Users can now browse the Drupal CMS recipes in the Project Browser, and install them in seconds.
  6. First page of documentation: New documentation created specifically for end users. Clear, effective documentation is key to Drupal CMS's success, so we began by writing a single page as a model for the quality and style we aim to achieve.
  7. AI for site building: AI agents capable of creating content types, configuring fields, building Views, forms, and more. These agents will transform how people build and manage websites with Drupal.
  8. Responsible AI policy: To ensure responsible AI development, we've created a Responsible AI policy. I'll share more details in an upcoming blog, but the policy focuses on four key principles: human-in-the-loop, transparency, swappable large language models (LLMs), and clear guidance.
  9. SEO Recipe: Combines and configures all the essential Drupal modules to optimize a Drupal site for search engines.
  10. 14 recipes in development: In addition to the Events and SEO recipes, 12 more are in development with the help of our Drupal Certified Partners. Each Drupal CMS recipe addresses a common marketing use case outlined in our product strategy. We showcased both the process and progress during the Initiative Lead Keynote for some of the tracks. After DrupalCon, we'll begin developing even more recipes and invite additional contributors to join the effort.
  11. AI-assisted content migration: AI will crawl your source website and handle complex tasks like mapping unstructured HTML to structured Drupal content types in your destination site, making migrations faster and easier. This could be a game-changer for website migrations.
  12. Experience Builder: An early preview of a brand new, out-of-the-box tool for content creators and designers, offering layout design, page building, basic theming and content editing tools. This is the first time I've showcased our progress on stage at a DrupalCon.
  13. Future-proof admin UI with React: Our strategy for modernizing Drupal's backend UI with React.
  14. The "Adopt-a-Document" initiative: A strategy and funding model for creating comprehensive documentation for Drupal CMS. If successful, I'm hopeful we can expand this model to other areas of Drupal. For more details, please read the announcement on drupal.org.
  15. Global Documentation Lead: The Drupal Association's commitment to hire a dedicated Documentation Lead, responsible for managing all aspects of Drupal's documentation, beyond just Drupal CMS.

The feedback on my presentation has been incredible, both online and in-person. The room was buzzing with energy and positivity! I highly recommend watching the recording.

Attendees were especially excited about the AI capabilities, Experience Builder, and recipes. I share their enthusiasm as these capabilities are transformative for Drupal.

Many of these features are designed with non-developers in mind. Our goal is to broaden Drupal's reach beyond its traditional user base and reach more people than ever before.

Release schedule

Our launch plan targets Drupal CMS's release on Drupal's upcoming birthday: January 15, 2025. It's also just a couple of weeks after the Drupal 7 End of Life, marking the end of one era and the beginning of another.

The next milestone is DrupalCon Singapore, taking place from December 9–11, 2024, less than 3 months away. We hope to have a release candidate ready by then.

Now that we're back from DrupalCon and have key milestone dates set, there is a lot to coordinate and plan in the coming weeks, so stay tuned for updates.

Call for contribution

Ambitious? Yes. But achievable if we work together. That's why I'm calling on all of you to get involved with Drupal CMS. Whether it's building recipes, enhancing the Experience Builder, creating AI agents, writing tests, improving documentation, or conducting usability testing – there are countless ways to contribute and make a difference. If you're ready to get involved, visit https://drupal.org/starshot to learn how to get started.

Thank you

This effort has involved so many people that I can't name them all, but I want to give a huge thank you to the Drupal CMS Leadership Team, who I've been working with closely every week: Cristina Chumillas (Lullabot), Gábor Hojtsy (Acquia), Lenny Moskalyk (Drupal Association), Pamela Barone (Technocrat), Suzanne Dergacheva (Evolving Web), and Tim Plunkett (Acquia).

A special shoutout goes to the demo team we assembled for my presentation: Adam Hoenich (Acquia), Amber Matz (Drupalize.me), Ash Sullivan (Acquia), Jamie Abrahams (FreelyGive), Jim Birch (Kanopi), Joe Shindelar (Drupalize.me), John Doyle (Digital Polygon), Lauri Timmanee (Acquia), Marcus Johansson (FreelyGive), Martin Anderson-Clutz (Acquia), Matt Glaman (Acquia), Matthew Grasmick (Acquia), Michael Donovan (Acquia), Tiffany Farriss (Palantir.net), and Tim Lehnen (Drupal Association).

I also want to thank the Drupal CMS track leads and contributors for their development work. Additionally, I'd like to recognize the Drupal Core Committers, Drupal Association staff, Drupal Association Board of Directors, and Certified Drupal Partners for continued support and leadership. There are so many people and organizations whose contributions deserve recognition that I can't list everyone individually, partly to avoid the risk of overlooking anyone. Please know your efforts are deeply appreciated.

Lastly, thank you to everyone who helped make DrupalCon Barcelona a success. It was excellent!

September 24, 2024

FOSDEM 2025 will take place at the ULB on the 1st and 2nd of February 2025. As has become traditional, we offer free and open source projects a stand to display their work "in real life" to the audience. You can share information, demo software, interact with your users and developers, give away goodies, sell merchandise or accept donations. Anything is possible! We offer you: One table (180x80cm) with a set of chairs and a power socket. Fast wireless internet access. You can choose whether you want the spot for the entire conference, or simply for one day.

September 23, 2024

Old farts (or: the human imperfection of moral perfection)

Idolizing our idols

SF writer John Scalzi reacts to the fact that some people seem to be turning him into an idol, especially since Neil Gaiman was accused of sexual assault. His main argument: all humans are imperfect and you should take nobody as an idol. Especially not those who wish to be one.

What is interesting is that we tend to idealize the people we follow online who make things we love. I am not a millionth of John Scalzi but, at my own scale, I have sometimes seen frightening reactions. Like when a reader who meets me apologizes to me for using proprietary software.

But good grief, I use proprietary software too. I fight for free software, but I am (fortunately) not Richard Stallman. Like everyone, I give in to convenience or to network effects. I simply try to raise awareness, to call out the social pressure. But, like everyone, I am very imperfect. When I write on my blog, I write about the ideal I want to reach at the moment of writing, not about what I am. Remember, too, that an ideal is not meant to be reached: it is only a guide along the way and must remain changeable at any moment. An ideal carved in stone is morbid. Those who dangle an ideal before your eyes are often dangerous.

The hypocrisy of public perfection

Generally speaking, a person's crusades reveal a lot about their obsessions. Homophobes are often repressed homosexuals. Sex scandals most often strike those who cultivate a public image of rectitude. Personally, I talk a lot about the dangers of social-network addiction. I will let you guess why the subject obsesses me so much…

I also recognize myself in what Scalzi says about conventions and conferences: everyone finds him friendly and then, back home, he collapses and shuts himself away in solitude. I do exactly the same. My public persona is very different from the private Ploum. My wife hates the public Ploum: "You are nicer to your readers than to your own family!"

The grey areas of every human being

I also hate this tendency to over-analyze an individual's perfection, especially through their supposed social connections. How many times have I heard: "You shared this text by so-and-so, but do you know that so-and-so himself shared texts by such-and-such, and that such-and-such is actually rather borderline when it comes to antisemitism?"

Answer: no, I don't know. And I don't investigate everyone who publishes, because all of them, without exception, have their grey areas, their youthful errors, their works that belong to another era and are no longer acceptable at all today.

Nobody is perfect. If I share a text, it is for the text. Because it speaks to me and I find it interesting. Context can sometimes bring finer understanding, but I will not, and cannot, judge individuals on their past, their actions or whatever information I may hear about them. I am not a judge. I neither condemn nor endorse anyone by sharing works.

If you judge people, I am certain that among the 900 posts on this blog there is at least one that will shock you and condemn me for good in your eyes. Yes, Bertrand Cantat is a jerk guilty of femicide. Yes, I love Noir Désir and listen to them almost every day.

Paul Watson

Paul Watson has been arrested. For years, he has been accused of eco-terrorism.

I have so much to say but, to avoid repeating myself, I invite you to reread what I wrote after reading his book.

Yes, Paul Watson is friends with Brigitte Bardot, who is herself friends with Jean-Marie Le Pen, who is a complete jerk. The fact remains that I support Paul Watson's fight to protect the oceans. That fight seems right and important to me. My impression is that Paul Watson genuinely defends the interests of marine wildlife. I may of course be wrong, but that is my position today.

The media lie and distort everything

I insist on this point because, as I told you, I read Paul Watson's book and can only note that the media completely distort statements or passages that I remember particularly well.

In the same vein, professor Emily Bender describes how her words were distorted on the BBC to turn her "debunking ChatGPT" message into "AI is the future".

I once had a similar experience. A Belgian TV channel wanted to interview me on an IT news topic. The appointment was set, and the journalist called me an hour beforehand to prepare his segment. On the phone, he asked me a few questions. I answered, and he rephrased my answers so that they said exactly the opposite. I assumed it was a misunderstanding. I pointed it out. He insisted. The tone rose slightly.

— That is not at all what I said. It is exactly the opposite.
— But I need someone who says this.
— But it's false. I am the expert, and what you want to say is completely wrong.
— It's too late, our whole segment has been prepared along those lines; you will have to adapt what you say.
— If you already know what you want me to say, you don't need to interview me. Goodbye.

I never heard from that journalist, or even that channel, again.

Be wary of what experts say in the media. Sometimes the desire to appear on television is stronger than the concern for accuracy. Sometimes we justify ourselves by saying "at least the message will go out in roughly the right direction, even if it's a bit distorted". But the final edit, the framing within a broader story or the background music will end up warping the words anyway.

As soon as you know a field a little, you realize that everything said about it in the mainstream media is completely wrong. To such an extent that an article that is even vaguely accurate tends to get highlighted by the experts of the field. It is that rare!

Remember this every time you read an article in a mainstream outlet: what you are reading is completely false and misleading, and was written by someone who understands nothing about the field and is paid to generate clicks on ads.

The enlightening example of the blue zones

A very good illustration of this is the famous "blue zones", those regions of Italy, Greece or Japan supposedly teeming with supercentenarians. The media often talk about these regions where living to 110 seems to be the norm, with articles praising olive oil or communal rural life, illustrated with smiling old folks sitting on a bench in the sun.

Well, professor Saul Justin Newman has just cracked the secret of these blue zones, and has received an "Ig Nobel" prize for it (a prize rewarding the most absurd research).

Why an Ig Nobel? Because he discovered that the secret of these zones is quite simply that 80% of the supercentenarians are… dead. The others have no birth certificates.

These "blue zones" are often the regions where everything seems to encourage and facilitate pension fraud.

I bet you this research will remain obscure. The media will keep praising olive oil as the key to living to 110 and telling us about the "mystery" of the blue zones. Because reality is not interesting for an entertainment program whose purpose is advertising.

As Aaron Swartz said: I hate the news!

Thoughts for Aaron

An excellent article by Dave Karpf taught me that Sam Altman, the current CEO behind ChatGPT, was in the YCombinator startup incubator at the same time as Aaron Swartz. In the 2007 photo, they are literally shoulder to shoulder. The photo drew from me one tear of sadness and another of anger.

Aaron Swartz, Sam Altman and Paul Graham at YCombinator in 2007

Aaron Swartz wanted to free knowledge. He had created a script to download thousands of scientific papers through his university's network. Suspected of doing so in order to make them public illegally, he was prosecuted and faced… decades in prison, which drove him to suicide. Note that, in the end, he never made those papers public. Papers funded by taxpayers, by the way.

Today, Sam Altman is hoovering up a billion times more content. All artistic content, all your messages, all your emails and, of course, all the scientific papers. But unlike Aaron, who did it to free knowledge, Sam Altman has a much more acceptable motive. He wants to make money. And so… it is acceptable.

I am not making this up; that really is Sam Altman's argument.

We lock up climate activists and whale defenders. We decorate the CEOs of the big corporations that are destroying the planet as fast as possible. Actually, I think I am starting to see something like a common thread…

The problem, as Dave Karpf points out, is that YCombinator is full of people who dream of and want to become… Sam Altman. Becoming a rebel who kills himself or is sent to prison appeals to nobody.

At one point in my life, immersed in the startup world, I myself tried to become a "successful entrepreneur". Intellectually, I understood everything that had to be done to get there. I told myself: "Why not me?" But I could not do it. I was unable to accept what did not seem right or perfectly moral to me. Perhaps I share a few too many traits with Aaron Swartz (mostly the flaws and the dietary quirks, rather less the genius).

Well, too bad for success. Aaron, every time I think of you, tears come to my eyes and I want to fight. You give me the energy to try to change the world with my keyboard.

I do not expect to succeed. But I have no right to give up.

Dave, in your article you complain that everyone wants to become Sam Altman. Not me. Not anymore. I have grown up, I have matured. I want to become, once more, a disciple of Aaron Swartz. Him I can take as a model, an idol, because he never got the chance to live long enough to develop his own grey areas, to become an old fart.

An old fart like the one that cerebral entropy pushes me a little closer to every day.

It must be said that typing your blog posts in Vim and your novels on a mechanical typewriter takes a real old fart…

I realize, by the way, that the heroes of my next novel are a young rebel and an old fart. Come to think of it, becoming a rebellious old fart is not so bad… (out October 15; you can already order EAN 9782889790098 from your bookshop)


September 22, 2024

We now invite proposals for developer rooms for FOSDEM 2025. FOSDEM offers open source and free software developers a place to meet, share ideas and collaborate. Renowned for being highly developer-oriented, the event brings together some 8000+ geeks from all over the world. The twenty-fifth edition will take place Saturday 1st and Sunday 2nd February 2025 in Brussels, Belgium. Developer rooms are assigned to self-organising groups to work together on open source and free software projects, to discuss topics relevant to a broader subset of the community, etc. Most content should take the form of presentations. Proposals involving collaboration…

September 17, 2024

FOSDEM 2025 will take place on Saturday 1st and Sunday 2nd of February 2025. This will be our 25th anniversary. Further details and calls for participation will be announced in the coming days and weeks.

September 10, 2024

In previous blog posts, we discussed setting up a GPG smartcard on GNU/Linux and FreeBSD.

In this blog post, we will configure Thunderbird to work with an external smartcard reader and our GPG-compatible smartcard.


Before Thunderbird 78, if you wanted to use OpenPGP email encryption, you had to use a third-party add-on such as https://enigmail.net/.

Thunderbird’s recent versions natively support OpenPGP. The Enigmail addon for Thunderbird has been discontinued. See: https://enigmail.net/index.php/en/home/news.

I didn’t find good documentation on how to set up Thunderbird with a GnuPG smartcard when I moved to a new coreboot laptop, so this was the reason I created this blog post series.

GnuPG configuration

We’ll not go into too much detail on how to set up GnuPG. This was already explained in the previous blog posts.

If you want to use an HSM with GnuPG, you can use the gnupg-pkcs11-scd agent https://github.com/alonbl/gnupg-pkcs11-scd, which translates the pkcs11 interface for GnuPG. A previous blog post describes how this can be configured with SmartCard-HSM.

We’ll go over a few steps to make sure that GnuPG is set up correctly before we continue with the Thunderbird configuration. The pinentry command must be configured with graphical support so that we can type our pin code in the graphical user environment.

Import Public Key

Make sure that your public key - or the public keys of the receiver(s) - are imported.

[staf@snuffel ~]$ gpg --list-keys
[staf@snuffel ~]$ 
[staf@snuffel ~]$ gpg --import <snip>.asc
gpg: key XXXXXXXXXXXXXXXX: public key "XXXX XXXXXXXXXX <XXX@XXXXXX>" imported
gpg: Total number processed: 1
gpg:               imported: 1
[staf@snuffel ~]$ 
[staf@snuffel ~]$  gpg --list-keys
/home/staf/.gnupg/pubring.kbx
-----------------------------
pub   xxxxxxx YYYYY-MM-DD [SC]
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
uid           [ xxxxxxx] xxxx xxxxxxxxxx <xxxx@xxxxxxxxxx.xx>
sub   xxxxxxx xxxx-xx-xx [A]
sub   xxxxxxx xxxx-xx-xx [E]

[staf@snuffel ~]$ 

Pinentry

Thunderbird will not ask for your smartcard’s pin code.

Entering the pin code must happen on your smartcard reader if it has a pin pad, or through an external pinentry program.

The pinentry program is configured in the gpg-agent.conf configuration file. As we’re using Thunderbird in a graphical environment, we’ll configure a graphical version.

Installation

I’m testing KDE Plasma 6 on FreeBSD, so I installed the Qt version of pinentry.

On GNU/Linux, check the documentation of your favourite Linux distribution to install a graphical pinentry. If you use a graphical user environment, there is probably already a graphics-enabled pinentry installed.

[staf@snuffel ~]$ sudo pkg install -y pinentry-qt6
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        pinentry-qt6: 1.3.0

Number of packages to be installed: 1

76 KiB to be downloaded.
[1/1] Fetching pinentry-qt6-1.3.0.pkg: 100%   76 KiB  78.0kB/s    00:01    
Checking integrity... done (0 conflicting)
[1/1] Installing pinentry-qt6-1.3.0...
[1/1] Extracting pinentry-qt6-1.3.0: 100%
==> Running trigger: desktop-file-utils.ucl
Building cache database of MIME types
[staf@snuffel ~]$ 

Configuration

The gpg-agent is responsible for starting the pinentry program. Let’s reconfigure it to start the pinentry program we want to use.

[staf@snuffel ~]$ cd .gnupg/
[staf@snuffel ~/.gnupg]$ 
[staf@snuffel ~/.gnupg]$ vi gpg-agent.conf

The pinentry is configured in the pinentry-program directive. You’ll find the complete gpg-agent.conf that I’m using below.

debug-level expert
verbose
verbose
log-file /home/staf/logs/gpg-agent.log
pinentry-program /usr/local/bin/pinentry-qt

Reload the scdaemon and gpg-agent configuration.

staf@freebsd-gpg3:~/.gnupg $ gpgconf --reload scdaemon
staf@freebsd-gpg3:~/.gnupg $ gpgconf --reload gpg-agent
staf@freebsd-gpg3:~/.gnupg $ 

Test

To verify that gpg works correctly and that the pinentry program works in our graphical environment, we sign a file.

Create a new file.

$ cd /tmp
[staf@snuffel /tmp]$ 
[staf@snuffel /tmp]$ echo "foobar" > foobar
[staf@snuffel /tmp]$ 

Try to sign it.

[staf@snuffel /tmp]$ gpg --sign foobar
[staf@snuffel /tmp]$ 

If everything works fine, the pinentry program will ask for the pincode to sign it.
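
As an extra sanity check (not part of the original post), you can also verify the signature you just produced; gpg should report a good signature made by your smartcard key:

[staf@snuffel /tmp]$ gpg --verify foobar.gpg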


Thunderbird

In this section we’ll (finally) configure Thunderbird to use GPG with a smartcard reader.

Allow external smartcard reader


Open the global settings: click on the "Hamburger" icon and select Settings.

Or press [F10] to bring up the "Menu bar" in Thunderbird and select [Edit] and Settings.


In the settings window click on [Config Editor].

This will open the Advanced Preferences window.


In the Advanced Preferences window, search for "external_gnupg" and set mail.identity.allow_external_gnupg to true.


 

Setup End-To-End Encryption

The next step is to configure the GPG keypair that we’ll use for our user account.


Open the account settings by clicking on the "Hamburger" icon and selecting Account Settings, or press [F10] to open the menu bar and select Edit, Account Settings.

Select End-to-End Encryption and, in the OpenPGP section, select [ Add Key ].


Select ( * ) Use your external key through GnuPG (e.g. from a smartcard).

And click on [Continue].

The next window will ask you for the Secret Key ID.


Execute gpg --list-keys to get your secret key id.
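
If you are unsure which id to paste, listing your secret keys with long key ids shows the 16-character id that Thunderbird expects here (a hedged example; the output depends on your keys):

[staf@snuffel ~]$ gpg --list-secret-keys --keyid-format=long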

Copy/paste your key id and click on [ Save key ID ].

I found that it is sometimes necessary to restart Thunderbird to reload the configuration when a new key id is added. So restart Thunderbird if it fails to find your key id in the keyring.

Test


As a test we send an email to our own email address.

Open a new message window and enter your email address into the To: field.

Click on [OpenPGP] and Encrypt.


Thunderbird will show a warning message that it doesn't know the public key to set up the encryption.

Click on [Resolve].

In the next window, Thunderbird will ask to Discover Public Keys online or to import the Public Keys From File; we'll import our public key from a file.

In the Import OpenPGP key File window, select your public key file and click on [ Open ].

Thunderbird will show a window with the key fingerprint. Select ( * ) Accepted.

Click on [ Import ] to import the public key.


With our public key imported, the warning that end-to-end encryption requires resolving key issues should be gone.

Click on the [ Send ] button to send the email.


To encrypt the message, Thunderbird will start a gpg session that invokes the pinentry command; type in your pincode. gpg will encrypt the message file and, if everything works fine, the email is sent.

 

Have fun!

Links

Producing the abundant using scarce resources

A short ramble on writing, art, and productivism

I was discussing with my wife the work of Marina Abramović, a total artist who shakes us to the core (I discovered her years ago while reading 2312, a novel by Kim Stanley Robinson). Thanks to her, we agreed on a definition of art.

Art is a human exchange that confronts us and pushes us out of our certainties, out of our intellectual and moral comfort zone.

While automated generation does push us to question art itself (and that is nothing new), what algorithms produce today is, to me, not art.

SF author Ted Chiang puts it exactly right: every writer has at some point been approached by someone who thinks they have a brilliant idea for a book and wants "just a writer to write it" (and yes, it has happened to me). As if "writing" were merely a technical layer on top of the idea. As if it were enough to ask someone who can draw, "Paint me a lady with an enigmatic smile," to produce the Mona Lisa.

I even wrote a post explaining how the idea alone is nothing.

According to Ted Chiang, writing is an art, and every art consists of thousands, of millions of decisions and adjustments. Technique is an intrinsic part of creation. On October 15 my new novel comes out, typed entirely on a mechanical typewriter. That is no accident. It is a particularity deeply embedded in the text itself. I could not have written it any other way.

Every writer, Saint-Exupéry first among them, will tell you: the art of writing is to delete, to cut, to shorten the text to give it power.

Generators of the ChatGPT type all do exactly one thing: lengthen a short text (called a "prompt"). I would even say it is the opposite of art! Lengthening for the sake of lengthening is the very essence of the acute administrative bullshititis gangrening our species, the disease artists and scientists, from Kafka to Graeber, have been fighting against, suffering and screaming.

Producing shit in abundance

That is exactly the definition of morbid late capitalism: keep mass-producing what is already abundant (plastic, fine particles, distraction, shit…) while consuming resources that have become scarce (clean air, lifetime…). To the point of making our world, real or virtual, unlivable. Enshittification is not just an image; it is a physical description of what we are doing at scale.

In American cities, you can no longer stop if you are not a car. You can no longer take a leak. You can no longer exist if you do not drive.

I cannot help drawing a parallel with what Louis Derrac says about the Web having become a mere vector of consumption, like television, and no longer a space for exchange.

Advertising insert

By the way, let me take this opportunity to tell you that my new novel comes out on October 15. Did I mention that already? Cars are in for a rough ride at the hands of cyclists.

A tool, not a solution

Some artists have understood that AI is not a solution but a tool like any other. Ted Chiang cites Bennet Miller; I think of Thierry Crouzet.

These artists share one particularity: they spend more time and more energy using AIs than if they had chosen to create their work by traditional means. They explore the limits of a tool, and they are not at all representative of the commercial market for these tools, which try to seduce the amateur authors taking part in NanoWriMo.

As Ted Chiang says, most people who think they are writing with ChatGPT do not like writing. Around me, people use it to crank out administrative files. So, is it useful? No; it is just that these files are completely stupid, that nobody will read them, and that we use stupid tools to handle the stupid problems we create for ourselves. In short, to generate shit when we are already drowning in it.

Humanity is striving toward the principle of maximum inefficiency, and we now have the tools to be even more inefficient.

And all this at the price of a total loss of competence. As linguist Emily M. Bender says, we do not ask students to write essays because the world needs essays, but to teach them to structure their thoughts, to think critically. Using ChatGPT is, in Ted Chiang's words, like bringing a forklift to the gym. Yes, the weights will go up and down, but will you build any muscle?

A moral panic?

Aren't we in a moral panic like the ones over role-playing games and video games 30 years ago?

It is worth remembering that moral panics were sometimes justified, and sometimes even underestimated the problem. Think of the globally harmful impact of cars or television on society. Some predicted the worst. They were still far short of the truth. You will see, on October 15…

People also said GPS would make us lose our sense of direction. I live in a pedestrian city where, admittedly, it is quite hard to find your way. In the 25 years I have known the city, I have always come across lost people asking me for directions. But in recent years a worrying trend has emerged: the people asking me the way all hold a phone running Google Maps. Google Maps shows them the direction. All they have to do is literally follow the blue line. And yet they are incapable of following the direction of the arrow and of detaching themselves from the screen. I can feel that my explanations are incomprehensible. Their gaze is lost; they cling to the screen. Sometimes I simply tell them, "follow the direction your phone is showing".

A threat to writers?

I am very fond of the well-known aphorism that anyone can write; the writer is the one who cannot help writing. I write books, this blog, and my journal because I love writing. Because the physical act of writing is indispensable to me and soothes me. Since I was 18, I have known that I would rather write than take a photo.

Writing forces me to think. I call my typewriter my "thinking machine". Over several decades, writing has had as much impact on me as I have had on my writing. I merge with my keyboard, I become my thinking machine, and I evolve; sometimes I even improve. I am no Nobel laureate in literature, but I can state plainly that "I write". I am trying to master writing and am only beginning to perceive it as something other than a few words intuitively thrown onto a virtual blank page. I am only now discovering how deep writing really goes; I am a beginner.

I think back to what Bruno Leyval told me about his relationship to drawing: he has drawn every day since he was very small. He draws all the time. He has turned himself into a drawing machine. That lifelong sensitivity will never compare to an image-generating algorithm.

For the cover of my new novel (out October 15 - oh, you knew that already?), I could have done what many publishers do: write a prompt and pick a pretty picture. Instead, I gave my prompt to Bruno, in the form of a 300-page novel typed on a typewriter. Bruno seized the story and created a cover that we discussed back and forth. That cover then went through the scrutiny of the collection's graphic designer. There were many disagreements. The cover is not perfect. It never will be. But it is powerful. The ink drawing, whose original Bruno gave me as a present, expresses a story. The contrast between ink and paper expresses the universe I tried to build.

I am presumptuous enough to believe that these disagreements, these imperfections, this human sensitivity, in the cover as in the text and the layout, will speak to readers and convey, from the very first pages, the idea of a world where cars, computers, and algorithms have suddenly gone dark.

I talk about writing text because I am a writer. But the same applies to computer code, as Ian Cooper shows. We try to optimize "software creation" while forgetting the maintenance of the software and of the infrastructure needed to run it.

We all know companies that turned to Visual Basic or J2EE to "save money". They are still paying that debt a hundredfold today. Turning to AI is just the umpteenth rerun of the same scenario: trying to save money by avoiding taking the time to think and to learn, then paying the price for decades without even realizing it. Because that is how everybody does it…

Digital resistance is getting organized

Carl Svensson explains his strategy for using digital technology and it is, quite surprisingly, incredibly similar to my own use of the Net. Except that I use Offpunk to read the web/gemini and RSS feeds.

For the curious, I have just launched a site presenting what Offpunk is.

Simon Phipps, with whom I worked on LibreOffice, compares free software to organic food.

As he rightly notes, organic started out as a holistic, rebellious concept. Since there was demand, marketing co-opted it. Marketing does not try to design a holistic approach; it asks, "what are the minimal, cheapest steps that let me stick the organic label on my crappy product?".

Open source and free software are in exactly the same situation. The ideal of freedom has become so incidental that Richard Stallman is now perceived as an eccentric extremist within the very movement he defined and founded!

Sometimes it is necessary to burn the marketing down and return to the fundamentals. Which is a good thing for non-monopolistic businesses, as Simon points out. That is why I encourage reconsidering copyleft licenses such as the AGPL.

Back to marketing

As I have subtly hinted, my next novel comes out on October 15. Given my love of marketing, I am counting on you to help me spread the word, for I am presumptuous enough to believe that its story will enthrall you while making you think. I am looking for contacts in "cycling" media (bloggers, journalists, youtubers, alternative media, etc.) who would be interested in receiving an advance "press" copy. Contact me by email.

As soon as the book is available for sale, I will of course announce it here on this blog. Yes, I may well mention it more than once. If you really want to be informed before everyone else, send me an email.


September 09, 2024

The NBD protocol has grown a number of new features over the years. Unfortunately, some of those features are not (yet?) supported by the Linux kernel.

I suggested a few times over the years that the maintainer of the NBD driver in the kernel, Josef Bacik, take a look at these features, but he hasn't done so; presumably he has other priorities. As with anything in the open source world, if you want it done you must do it yourself.

I'd been considering, off and on, working on the kernel driver so that I could implement these new features, but I never really got anywhere.

A few months ago, however, Christoph Hellwig posted a patch set that reworked a number of block device drivers in the Linux kernel to a new type of API. Since the NBD mailinglist is listed in the kernel's MAINTAINERS file, this patch series was crossposted to the NBD mailinglist, too, and when I noticed that it explicitly disabled the "rotational" flag on the NBD device, I suggested to Christoph that perhaps "we" (meaning, "he") might want to vary the decision on whether a device is rotational depending on whether the NBD server signals, through the flag that exists for that very purpose, whether the device is rotational.

To which he replied "Can you send a patch".

That got me down the rabbit hole, and now, for the first time in the 20+ years of being a C programmer who uses Linux exclusively, I got a patch merged into the Linux kernel... twice.

So, what do these things do?

The first patch adds support for the ROTATIONAL flag. If the NBD server mentions that the device is rotational, it will be treated as such, and the elevator algorithm will be used to optimize accesses to the device. For the reference implementation, you can do this by adding a line "rotational = true" to the relevant section (relating to the export where you want it to be used) of the config file.
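
As an illustration (a sketch, not taken from the patch or its documentation; the export name and path are invented), the relevant bit of an nbd-server config file would look something like:

[generic]

[myexport]
    exportname = /srv/nbd/disk.img
    rotational = true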

It's unlikely that this will be of much benefit in most cases (most nbd-server installations will be exporting a file on a filesystem, with the elevator algorithm already applied server side, in which case it doesn't matter whether the client-side device has the rotational flag set), but it's there in case you wish to use it.

The second set of patches adds support for the WRITE_ZEROES command. Most devices these days allow you to tell them "please write N zeroes starting at this offset", which is a lot more efficient than sending over a buffer of N zeroes and asking the device to do DMA to copy buffers etc etc for just zeroes.

The NBD protocol has supported its own WRITE_ZEROES command for a while now, and hooking it up was reasonably simple in the end. The only problem is that it expects length values in bytes, whereas the kernel expresses them in blocks. It took me a few tries to get that right -- and then I also fixed up handling of discard messages, which required the same conversion.

My HTTP Header Analyzer continues to be used a lot. Last week, I received a bug report, so I decided to look into it over the weekend. One thing led to another, and I ended up making a slew of improvements:

  1. Clarified the explanations for various Cloudflare headers, including CF-Edge-Cache, CF-APO-Via, CF-BGJ, CF-Polish, CF-Mitigated, CF-Ray, CF-Request-ID, CF-Connecting-IP, and CF-IPCountry.
  2. Added support for new headers: X-Logged-In, X-Hacker, X-Vimeo-Device, and Origin-Agent-Cluster.
  3. Improved checks and explanations for cache-related headers, including X-Cache, X-Cache-Status, and X-Varnish.
  4. Expanded the validation and explanation for the X-Content-Type-Options header.
  5. Marked X-Content-Security-Policy as a deprecated version of the Content-Security-Policy header and provided a more comprehensive breakdown of Content Security Policy (CSP) directives.
  6. Improved the validation for CORS-related headers: Access-Control-Expose-Headers and Access-Control-Max-Age.
  7. Expanded the explanation of the Cross-Origin-Resource-Policy header, covering its possible values.
  8. Added support for the Timing-Allow-Origin header.
  9. Clarified the X-Runtime header, which provides timing information for server response generation.
  10. Expanded the explanations for TLS and certificate-related headers: Strict-Transport-Security, Expect-Staple, and Expect-CT.
  11. Added an explanation for the Host-Header header.
  12. Improved details for X-Forwarded-For.
  13. Refined the explanations for Cache-Control directives like Public, Private, and No-Cache.
  14. Expanded the explanation for the Vary header and its impact on caching behavior.
  15. Added an explanation for the Retry-After header.
  16. Updated the explanation for the legacy X-XSS-Protection header.
  17. Added an explanation for the Akamai-specific Akamai-Age-MS header.

HTTP headers are crucial for web application functionality and security. While some are commonly used, there are many lesser-known headers that protect against security vulnerabilities, enforce stronger security policies, and improve performance.

To explore these headers further, you can try the latest HTTP Header Analyzer. It is pretty simple to use: enter a URL, and the tool will analyze the headers sent by your website. It then explains these headers, provides a score, and suggests possible improvements.
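
If you'd rather eyeball the raw response headers yourself before (or after) running them through the analyzer, a plain HEAD request from the command line will print them; a quick sketch, with a placeholder URL:

$ curl -sI https://example.com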

September 06, 2024

Some users, myself included, have noticed that their MySQL error log contains many lines like this one: Where does that error come from? The error MY-010914 is part of the Server Network issues like: Those are usually more problematic than the ones we are covering today. The list is not exhaustive and in the source […]

September 05, 2024

IT architects generally use architecture-specific languages or modeling techniques to document their thoughts and designs. ArchiMate, the framework I have the most experience with, is a specialized enterprise architecture modeling language. It is maintained by The Open Group, an organization known for its broad architecture framework titled TOGAF.

My stance, however, is that architects should not use the diagrams from their architecture modeling framework to convey their message to every stakeholder out there...

An enterprise framework for architects

Certainly, using a single modeling language like ArchiMate is important. It allows architects to use a common language, a common framework, in which they can convey their ideas and designs. When collaborating on the same project, it would be unwise to use different modeling techniques or design frameworks among each other.

By standardizing on a single framework for a particular purpose, a company can optimize their efforts surrounding education and documentation. If several architecture frameworks are used for the same purpose, inefficiencies arise. Supporting tooling can also be selected (such as Archi), which has specialized features to support this framework. The more architects are fluent in a common framework, the less likely ambiguity or misunderstandings occur about what certain architectures or designs want to present.

Now, I highlighted "for a particular purpose", because that architecture framework isn't the goal, it's a means.

Domain-specific language, also in architecture

In larger companies, you'll find architects with different specializations and focus areas. A common setup is to have architects at different levels or layers of the architecture:

  • enterprise architects focus on the holistic and strategic level,
  • domain architects manage the architecture for one or more domains (a means of splitting the complexity of a company, often tied to business domains),
  • solution or system architects focus on the architecture for specific projects or solutions,
  • security architects concentrate on the cyber threats and protection measures,
  • network architects look at the network design, flows and flow controls, etc.

Architecture frameworks are often not meant to support all levels. ArchiMate, for instance, is tailored to the enterprise and domain levels in general. It also supports solution or system architecture well when the focus is on applications. Sure, other architecture layers can be expressed as well, but after a while you'll notice that the expressivity of the framework lacks the details or specifics needed for those layers.

It is thus not uncommon that, at a certain point, architects drop one framework and start using another. Network architecture and design is expressed differently than the ICT domain architecture. Both need to 'plug into each other', because network architects need to understand the larger picture they operate in, and domain architects should be able to read network architecture design and relate it back to the domain architecture.

Such a transition is not unique to IT. Consider city planning and housing, where architects design new community areas and housing units. These designs need to be well understood by the architects responsible for specific specializations such as utilities, transportation, interior, landscaping, and more. Each uses different ways of designing, but makes sure the result is understandable (and often even standardized) for the others.

Your schematic is not your presentation

I've seen architects who are very satisfied with their architectural design: they want nothing more than to share this with their (non-architect) stakeholders in all its glory. And while I do agree that lead engineers, for instance, should be able to understand architecture drawings, the schematics themselves shouldn't be the presentation material.

And definitely not towards higher management.

When you want to bring a design to a broader community, or to stakeholders with different backgrounds or roles, it is important to tell your story in an easy-to-understand way. Just like building architects would create physical mock-ups at scale to give a better view of a building, IT architects should create representative material to expedite presentations and discussions.

Certainly, you will lose a lot of insight compared to the architectural drawings, but you'll get much better acceptance by the community.

What is the definition of “Open Source”?

There’s been no shortage of contention on what “Open Source software” means. Two instances that stand out to me personally are ElasticSearch’s “Doubling down on Open” and Scott Chacon’s “public on GitHub”.

I’ve been active in Open Source for 20 years and could use a refresher on its origins and officialisms. The plan was simple: write a blog post about why the OSI (Open Source Initiative) and its OSD (Open Source Definition) are authoritative, and collect evidence in their support (confirmation that they invented the term, of widespread acceptance with little dissent, and of the OSD being a practical, well-functioning tool). That’s what I keep hearing; I just wanted to back it up. Since contention always seems to be around commercial re-distribution restrictions (which are forbidden by the OSD), I particularly wanted to confirm that there haven’t been all that many commercial vendors who’ve used, or wanted to use, the term “open source” to mean “you can view/modify/use the source, but you are limited in your ability to re-sell, or need to buy additional licenses for use in a business”.

However, the further I looked, the more I found evidence of the opposite of all of the above. I’ve spent a few weeks now digging, and some of my long-standing beliefs are shattered. I can’t believe some of the things I found out. Clearly I was too emotionally invested, but after a few weeks of thinking, I think I can put things in perspective. So this will become not one, but multiple posts.

The goal for the series is to look at the tensions in the community/industry (in particular those directed towards the OSD), and to figure out how to resolve, or at least reduce, them.

Without further ado, let’s get into the beginnings of Open Source.

The “official” OSI story.

Let’s first get the official story out of the way, the one you see repeated over and over on websites, on Wikipedia, and probably in most computing history books.

Back in 1998, there was a small group of folks who felt that the verbiage at the time (Free Software) had become too politicized. (note: the Free Software Foundation was founded 13 years prior, in 1985, and informal use of “free software” had been around since the 1970s). They felt they needed a new word “to market the free software concept to people who wore ties”. (source) (somewhat ironic, since today many of us like to say “Open Source is not a business model”)

Bruce Perens - an early Debian project leader and hacker on free software projects such as busybox - had authored the first Debian Free Software Guidelines in 1997, which were turned into the first Open Source Definition when he founded the OSI (Open Source Initiative) with Eric Raymond in 1998. As you continue reading, keep in mind that from the get-go, OSI’s mission was supporting the industry. Not the community of hobbyists.

Eric Raymond is of course known for his seminal 1999 essay on development models “The cathedral and the bazaar”, but he also worked on fetchmail among others.

According to Bruce Perens, there was some criticism at the time, but only of the term “Open” in general, and of “Open Source” only in a completely different industry.

At the time of its conception there was much criticism for the Open Source campaign, even among the Linux contingent who had already bought-in to the free software concept. Many pointed to the existing use of the term “Open Source” in the political intelligence industry. Others felt the term “Open” was already overused. Many simply preferred the established name Free Software. I contended that the overuse of “Open” could never be as bad as the dual meaning of “Free” in the English language–either liberty or price, with price being the most oft-used meaning in the commercial world of computers and software

From Open Sources: Voices from the Open Source Revolution: The Open Source Definition

Furthermore, from Bruce Perens’ own account:

I wrote an announcement of Open Source which was published on February 9 [1998], and that’s when the world first heard about Open Source.

source: On Usage of The Phrase “Open Source”

Occasionally it comes up that it may have been Christine Peterson who coined the term earlier that week in February but didn’t give it a precise meaning. That was a task for Eric and Bruce in followup meetings over the next few days.

Even when you’re the first to use or define a term, you can’t legally control how others use it, until you obtain a Trademark. Luckily for OSI, US trademark law recognizes the first user when you file an application, so they filed for a trademark right away. But what happened? It was rejected! The OSI’s official explanation reads:

We have discovered that there is virtually no chance that the U.S. Patent and Trademark Office would register the mark “open source”; the mark is too descriptive. Ironically, we were partly a victim of our own success in bringing the “open source” concept into the mainstream

This is our first 🚩 red flag and it lies at the basis of some of the conflicts which we will explore in this, and future posts. (tip: I found this handy Trademark search website in the process)

Regardless, since 1998 the OSI has vastly grown its scope of influence (more on that in future posts), with the Open Source Definition remaining mostly unaltered for 25 years and having been widely used in the industry.

Prior uses of the term “Open Source”

Many publications simply repeat the idea that OSI came up with the term, has the authority (if not legal, at least in practice) and call it a day. I, however, had nothing better to do, so I decided to spend a few days (which turned into a few weeks 😬) and see if I could dig up any references to “Open Source” predating OSI’s definition in 1998, especially ones with different meanings or definitions.

Of course, it’s totally possible that multiple people come up with the same term independently and I don’t actually care so much about “who was first”, I’m more interested in figuring out what different meanings have been assigned to the term and how widespread those are.

In particular, because most contention is around commercial limitations (non-competes) where receivers of the code are forbidden to resell it, this clause of the OSD stands out:

Free Redistribution: The license shall not restrict any party from selling (…)

Turns out, the term “Open Source” was already in use for more than a decade prior to the OSI’s founding.

OpenSource.com

In 1998, a business in Texas called “OpenSource, Inc” launched their website. They were a “Systems Consulting and Integration Services company providing high quality, value-added IT professional services”. Sometime during the year 2000, the website became a RedHat property. Enter the domain name in ICANN’s lookup and it reveals the domain was registered on Jan 8, 1998 - a month before the term was “invented” by Christine/Richard/Bruce. What a coincidence. We are just warming up…


Caldera announces Open Source OpenDOS

In 1996, a company called Caldera had “open sourced” a DOS operating system called OpenDos. Their announcement (accessible on google groups and a mailing list archive) reads:

Caldera Announces Open Source for DOS.
(…)
Caldera plans to openly distribute the source code for all of the DOS technologies it acquired from Novell, Inc.
(…)
Caldera believes an open source code model benefits the industry in many ways.
(…)
Individuals can use OpenDOS source for personal use at no cost.
Individuals and organizations desiring to commercially redistribute
Caldera OpenDOS must acquire a license with an associated small fee.

Today we would refer to it as dual-licensing, using Source Available due to the non-compete clause. But in 1996, actual practitioners referred to it as “Open Source” and OSI couldn’t contest it because it didn’t exist!

You can download the OpenDos package from ArchiveOS and have a look at the license file, which includes even more restrictions such as “single computer”. (like I said, I had nothing better to do).

Investigations by Martin Espinoza re: Caldera

On his blog, Martin has an article making a similar observation about Caldera’s prior use of “open source”, following up with another article which includes a response from Lyle Ball, who headed the PR department of Caldera.

Quoting Martin:

As a member of the OSI, he [Bruce] frequently championed that organization’s prerogative to define what “Open Source” means, on the basis that they invented the term. But I [Martin] knew from personal experience that they did not. I was personally using the term with people I knew before then, and it had a meaning — you can get the source code. It didn’t imply anything at all about redistribution.

The response from Caldera includes such gems as:

I joined Caldera in November of 1995, and we certainly used “open source” broadly at that time. We were building software. I can’t imagine a world where we did not use the specific phrase “open source software”. And we were not alone. The term “Open Source” was used broadly by Linus Torvalds (who at the time was a student (…), John “Mad Dog” Hall who was a major voice in the community (he worked at COMPAQ at the time), and many, many others.

Our mission was first to promote “open source”, Linus Torvalds, Linux, and the open source community at large. (…) we flew around the world to promote open source, Linus and the Linux community….we specifically taught the analysts houses (i.e. Gartner, Forrester) and media outlets (in all major markets and languages in North America, Europe and Asia.) (…) My team and I also created the first unified gatherings of vendors attempting to monetize open source

So according to Caldera, “open source” was already a phenomenon in the industry, and Linus himself had used the term. He mentions plenty of avenues for further research; I pursued one of them below.

Linux Kernel discussions

Mr. Ball’s mentions of Linus and Linux piqued my interest, so I started digging.

I couldn’t find a mention of “open source” in the Linux Kernel Mailing List archives prior to the OSD day (Feb 1998), though the archives only start as of March 1996. I asked ChatGPT where people used to discuss Linux kernel development prior to that, and it suggested five Usenet groups, which Google still lets you search through.

What were the hits? Glad you asked!

comp.os.linux: a 1993 discussion about supporting binary-only software on Linux

This conversation predates the OSI by five whole years and leaves very little to the imagination:

The GPL and the open source code have made Linux the success that it is. Cygnus and other commercial interests are quite comfortable with this open paradigm, and in fact prosper. One need only pull the source code to GCC and read the list of many commercial contributors to realize this.

comp.os.linux.announce: 1996 announcement of Caldera’s open-source environment

In November 1996 Caldera shows up again, this time with a Linux based “open-source” environment:

Channel Partners can utilize Caldera’s Linux-based, open-source environment to remotely manage Windows 3.1 applications at home, in the office or on the road. By using Caldera’s OpenLinux (COL) and Wabi solution, resellers can increase sales and service revenues by leveraging the rapidly expanding telecommuter/home office market. Channel Partners who create customized turn-key solutions based on environments like SCO OpenServer 5 or Windows NT,

comp.os.linux.announce: 1996 announcement of a trade show

On 17 Oct 1996 we find this announcement:

There will be a Open Systems World/FedUnix conference/trade show in Washington DC on November 4-8. It is a traditional event devoted to open computing (read: Unix), attended mostly by government and commercial Information Systems types.

In particular, this talk stands out to me:

** Schedule of Linux talks, OSW/FedUnix'96, Thursday, November 7, 1996 ***
(…)
11:45 Alexander O. Yuriev, “Security in an open source system: Linux study”

The context here seems to be open standards, and maybe also the open source development model.

1990: Tony Patti on “software developed from open source material”

In 1990, a magazine editor by the name of Tony Patti not only refers to Open Source software, but mentions that the NSA in 1987 referred to software that “was developed from open source material”.

1995: open-source changes emails on OpenBSD-misc email list

I could find one mention of “open-source” on an OpenBSD email list: it seems there was a directory “open-source-changes” which had incoming patches, distributed over email (source). Though perhaps the way to interpret it is that it concerns “source-changes” to OpenBSD, paraphrased as “open”, so let’s not count this one.

(I did not look at other BSD’s)

Bryan Lunduke’s research

Bryan Lunduke has done similar research and found several more USENET posts about “open source”, clearly in the context of open source software, predating OSI by many years. He breaks it down on his substack. Some interesting examples he found:

19 August, 1993 post to comp.os.ms-windows

Anyone else into “Source Code for NT”? The tools and stuff I’m writing for NT will be released with source. If there are “proprietary” tricks that MS wants to hide, the only way to subvert their hoarding is to post source that illuminates (and I don’t mean disclosing stuff obtained by a non-disclosure agreement).

(source)

Then he writes:

Open Source is best for everyone in the long run.

Written as a matter-of-fact generalization to the whole community, implying the term is well understood.

December 4, 1990

BSD’s open source policy meant that user developed software could be ported among platforms, which meant their customers saw a much more cost effective, leading edge capability combined hardware and software platform.

source

1985: The “the computer chronicles documentary” about UNIX.

The Computer Chronicles was a TV documentary series about computer technology; it started as a local broadcast, but in 1983 became a national series. In February 1985, they broadcast an episode about UNIX. You can watch the entire 28 min episode on archive.org, and it’s an interesting snapshot in time, when UNIX was coming out of its shell and competing with MS-DOS with its multi-user and concurrent multi-tasking features. It contains a segment in which Bill Joy, co-founder of Sun Microsystems, is interviewed about Berkeley Unix 4.2. Sun had more than 1000 staff members, and now its CTO was on national TV in the United States. This was a big deal, with a big audience. At the 13:50 mark, the interviewer quotes Bill:

“He [Bill Joy] says its open source code, versatility and ability to work on a variety of machines means it will be popular with scientists and engineers for some time”

“Open Source” on national TV. 13 years before the founding of OSI.


Uses of the word “open”

We’re specifically talking about “open source” in this article. But we should probably also consider how the term “open” was used in software, as they are related, and that may have played a role in the rejection of the trademark.

Well, the Open Software Foundation launched in 1988 (10 years before the OSI). Their goal was to make an open standard for UNIX. The word “open” was also used elsewhere in software, e.g. Common Open Software Environment in 1993 (standardized software for UNIX), OpenVMS in 1992 (a renaming of VAX/VMS as an indication of its support for open systems industry standards such as POSIX and Unix compatibility), OpenStep in 1994 and, of course, in 1996 the OpenBSD project started. They have this to say about their name (while OpenBSD started in 1996, this quote is from 2006):

The word “open” in the name OpenBSD refers to the availability of the operating system source code on the Internet, although the word “open” in the name OpenSSH means “OpenBSD”. It also refers to the wide range of hardware platforms the system supports.

Does it run DOOM?

The proof of any hardware platform is always whether it can run Doom. Since the DOOM source code was published in December 1997, I thought it would be fun if id Software had happened to use the term “Open Source” at that time. There are some FTP mirrors where you can still see the files with the original December 1997 timestamps (e.g. this one). However, after sifting through the README and other documentation files, I only found references to the “Doom source code”. No mention of Open Source.

The origins of the famous “Open Source” trademark application: SPI, not OSI

This is not directly relevant, but may provide useful context: in June 1997 the SPI (“Software in the Public Interest”) organization was born to support the Debian project, funded by its community, although it grew in scope to help many more free software / open source projects. It looks like Bruce, as a representative of SPI, started the “Open Source” trademark proceedings (and may have paid for it himself). But then something happened: 3/4 of the SPI board (including Bruce) left and founded the OSI, which Bruce announced along with a note that the trademark would move from SPI to OSI as well. Ian Jackson - Debian Project Leader and SPI president - expressed his “grave doubts” and lack of trust. SPI later confirmed they owned the trademark (application) and would not let any OSI members take it. The perspective of Debian developer Ean Schuessler provides more context.

A few years later, it seems wounds were healing, with Bruce re-applying to SPI, Ean making amends, and Bruce taking the blame.

All the bickering over the Trademark was ultimately pointless, since it didn’t go through.

Searching for SPI on the OSI website reveals no acknowledgment of SPI’s role in the story. You only find mentions in board meeting notes (ironically, they’re all requests to SPI to hand over domains or to share some software).

By the way, in November 1998, this is what SPI’s open source web page had to say:

Open Source software is software whose source code is freely available

A Trademark that was never meant to be.

Lawyer Kyle E. Mitchell knows how to write engaging blog posts. Here is one where he digs further into the topic of trademarking and why “open source” is one of the worst possible terms to try to trademark (in comparison to, say, Apple computers).

He writes:

At the bottom of the hierarchy, we have “descriptive” marks. These amount to little more than commonly understood statements about goods or services. As a general rule, trademark law does not enable private interests to seize bits of the English language, weaponize them as exclusive property, and sue others who quite naturally use the same words in the same way to describe their own products and services.
(…)
Christine Peterson, who suggested “open source” (…) ran the idea past a friend in marketing, who warned her that “open” was already vague, overused, and cliche.
(…)
The phrase “open source” is woefully descriptive for software whose source is open, for common meanings of “open” and “source”, blurry as common meanings may be and often are.
(…)
no person and no organization owns the phrase “open source” as we know it. No such legal shadow hangs over its use. It remains a meme, and maybe a movement, or many movements. Our right to speak the term freely, and to argue for our own meanings, understandings, and aspirations, isn’t impinged by anyone’s private property.

So, we have here a great example of the Trademark system working exactly as intended, doing the right thing in the service of the people: not giving away unique rights to common words, rights that were demonstrably never OSI’s to have.

I can’t decide which is more wild: OSI’s audacious outcries for the whole world to forget about the trademark failure and trust their “pinky promise” right to authority over a common term, or the fact that so much of the global community (myself included) actually fell for it and repeated a misguided narrative without much further thought.

I think many of us, through our desire to be part of a movement with a positive, fulfilling mission, were too easily swept away by OSI’s origin tale.

Co-opting a term

OSI was never relevant as an organization and hijacked a movement that was well underway without them.

(source: a harsh but astute Slashdot comment)

We have plentiful evidence that “Open Source” was used for at least a decade prior to OSI existing, in the industry, in the community, and possibly in government. You saw it at trade shows, in various newsgroups around Linux and Windows programming, and on national TV in the United States. The word was often uttered without any further explanation, implying it was a known term. For a movement that happened largely offline in the eighties and nineties, it seems likely there were many more examples that we can’t access today.

“Who was first?” is interesting, but more relevant is “what did it mean?”. Many of these uses were fairly informal and/or didn’t consider re-distribution. We saw these meanings:

  • a collaborative development model
  • portability across hardware platforms, open standards
  • disclosing (making available) of source code, sometimes with commercial limitations (e.g. per-seat licensing) or restrictions (e.g. non-compete)
  • possibly a buzz-word in the TV documentary

Then came the OSD, which gave the term a very different, and much stricter, meaning than what had already been in use for 15 years. However, the OSD was refined, “legal-aware” and the starting point for an attempt at global consensus and wider industry adoption, so we are far from finished with our analysis.

(ironically, it never quite matched with free software either - see this e-mail or this article)

Legend has it…

Repeat a lie often enough and it becomes the truth

Yet the OSI still promotes their story about being first to use the term “Open Source”. RedHat’s article still claims the same. I could not find evidence of resolution. I hope I just missed it (please let me know!). What I did find is one request for clarification remaining unaddressed and another handled in a questionable way, to put it lightly. Expand all the comments in the thread and see for yourself. For an organization all about “open”, this seems especially strange. It seems we have veered far away from the “We will not hide problems” motto in the Debian Social Contract.

Real achievements are much more relevant than “who was first”. Here are some suggestions for actually relevant ways the OSI could introduce itself and its mission:

  • “We were successful open source practitioners and industry thought leaders”
  • “In our desire to assist the burgeoning open source movement, we aimed to give it direction and create alignment around useful terminology”.
  • “We launched a campaign to positively transform the industry by defining the term - which had thus far only been used loosely - precisely and popularizing it”

I think any of these would land well in the community. Instead, they are strangely obsessed with “we coined the term, therefore we decide its meaning”, and anything else is “flagrant abuse”.

Is this still relevant? What comes next?

Trust takes years to build, seconds to break, and forever to repair

I’m quite an agreeable person, and until recently happily defended the Open Source Definition. Now, my trust has been tainted, but at the same time, there is beauty in knowing that healthy debate has existed since the day OSI was announced. It’s just a matter of making sense of it all, and finding healthy ways forward.

Most of the events covered here are from 25 years ago, so let’s not linger too much on it. There is still a lot to be said about adoption of Open Source in the industry (and the community), tension (and agreements!) over the definition, OSI’s campaigns around awareness and standardization and its track record of license approvals and disapprovals, challenges that have arisen (e.g. ethics, hyper clouds, and many more), some of which have resulted in alternative efforts and terms. I have some ideas for productive ways forward.

Stay tuned for more, sign up for the RSS feed and let me know what you think!
Comment below, on X or on HackerNews

August 29, 2024

In his latest Lex Fridman appearance, Elon Musk makes some excellent points about the importance of simplification.

Follow these steps:

  1. Simplify the requirements
  2. For each step, try to delete it altogether
  3. Implement well

1. Simplify the Requirements

Even the smartest people come up with requirements that are, in part, dumb. Start by asking yourself how they can be simplified.

There is no point in finding the perfect answer to the wrong question. Try to make the question as least wrong as possible.

I think this is so important that it is included in my first item of advice for junior developers.

There is nothing so useless as doing efficiently that which should not be done at all.

2. Delete the Step

For each step, consider if you need it at all, and if not, delete it. Certainty is not required. Indeed, if you only delete what you are 100% certain about, you will leave in junk. If you never put things back in, it is a sign you are being too conservative with deletions.

The best part is no part.

Some further commentary by me:

This applies both to the product and technical implementation levels. It’s related to YAGNI, Agile, and Lean, also mentioned in the first section of advice for junior developers.

It’s crucial to consider probabilities and compare the expected cost/value of different approaches. Don’t spend 10 EUR each day to avoid a 1% chance of needing to pay 100 EUR: the expected loss is only 0.01 × 100 = 1 EUR per day, a tenth of what the precaution costs. Consistent Bayesian reasoning will reduce such mistakes, though Elon’s “if you do not put anything back in, you are not removing enough” heuristic is easier to understand and implement.

3. Implement Well

Here, Elon talks about optimization and automation, which are specific to his problem domain of building a supercomputer. More generally, this can be summarized as good implementation, which I advocate for in my second section of advice for junior developers.

 

The relevant segment begins at 43:48.

The post Simplify and Delete appeared first on Entropy Wins.

August 27, 2024

I just reviewed the performance of a customer’s WordPress site. “Things got a lot worse”, he wrote; he assumed Autoptimize (he is an AOPro user) wasn’t working any more and asked me to guide him in fixing the issue. Instead, it turned out he had installed CookieYes, which adds tons of JS (part of which is render-blocking), taking up 3.5s of main-thread work and (fasten your seat-belts) which somehow seems…

Source

August 26, 2024

Setting up a new computer can be a lot of work, but I've made it much simpler with Homebrew, a popular package manager.

Creating a list of installed software

As a general rule, I prefer to install all software on my Mac using Homebrew. I always try Homebrew first and only resort to downloading software directly from websites if it is not available through Homebrew.

Homebrew manages both formulae and casks. Casks are typically GUI applications, while formulae are command-line tools or libraries.

First, I generate a list of all manually installed packages on my old computer:

$ brew leaves

The brew leaves command displays only the packages you installed directly using Homebrew. It excludes any packages that were installed automatically as dependencies. To view all installed packages, including dependencies, you can use the brew list command.

To save this list to a file:

$ brew leaves > brews.txt

Next, I include installed casks in the same list:

$ brew list --cask >> brews.txt

This appends the cask list to brews.txt, giving me a complete list of all the packages I've explicitly installed.

Reviewing your packages

It is a good idea to check if you still need all packages on your new computer. I review my packages as follows:

$ cat brews.txt | xargs brew desc --eval-all

This command provides a short description for each package in your list.

Installing your packages on a new machine

Transfer your brews.txt file to the new computer, install Homebrew, and run:

$ xargs brew install < brews.txt

This installs all the packages listed in your file.

Building businesses based on an Open Source project is like balancing a solar system. Just as the sun is the center of our own little universe, powering life on the planets that revolve around it in a brittle yet tremendously powerful astrophysical equilibrium, so a thriving open source project sits at the center of a community, one or more vendors, and their commercially supported customers, all revolving around it, driven by astronomical aspirations.

Source-available & Non-Compete licensing have existed in various forms, and have been tweaked and refined for decades, in an attempt to combine just enough proprietary conditions with just enough of Open Source flavor, to find that perfect trade-off. Fair Source is the latest refinement for software projects driven by a single vendor wanting to combine monetization, a high rate of contributions to the project (supported by said monetization), community collaboration and direct association with said software project.

Succinctly: Fair Source licenses provide many of the same benefits to users as Open Source licenses, although outsiders are not allowed to build their own competing service based on the software. However, after 2 years the software automatically becomes MIT or Apache2 licensed, and at that point you can do pretty much whatever you want with the older code.

To avoid confusion, this project is different from:

It seems we have reached an important milestone in 2024: on the surface, “Fair Source” is yet another new initiative that positions itself as a more business-friendly alternative to “Open Source”, but the delayed open source publication (DOSP) model has been refined to the point where the licenses are succinct, clear, easy to work with, and should hold up well in court. Several technology companies are choosing this software licensing strategy (Sentry being the most famous one; you can see the others on their website).

My 2 predictions:

  • we will see 50-100 more such companies in the next couple of years.
  • a governance legal entity will appear soon, and a trademark will follow.

In this article, I’d like to share my perspective and address some - what I believe to be - misunderstandings in current discourse.

The licenses

At this time, the Fair Source ideology is implemented by the following licenses:

BSL/BUSL are trickier to understand and can have different implementations. FCL and FSL are nearly identical. They are clearly and concisely written and embody the Fair Source spirit in its purest form.

Seriously, try running the following in your terminal. Sometimes as an engineer you have to appreciate legal text when it’s this concise, easy to understand, and diff-able!

wget https://raw.githubusercontent.com/keygen-sh/fcl.dev/master/FCL-1.0-MIT.md
wget https://fsl.software/FSL-1.1-MIT.template.md
diff FSL-1.1-MIT.template.md FCL-1.0-MIT.md

I will focus on FSL and FCL, the Fair Source “flagship licenses”.

Is it “open source, fixed”, or an alternative to open source? Neither.

First, we’ll need to agree on what the term “Open Source” means. This has itself been a battle for decades, with non-competes (commercial restrictions) being especially contentious and in use even before the OSI came along; I’m working on an article challenging OSI’s Open Source Definition, which I will publish soon. Still, the OSD is probably the most common understanding in the industry today, so we’ll use it here. The folks behind FSL/Fair Source made the wise decision to distance themselves from these contentious debates: after some initial conversations about FSL using the “Open Source” term, they adopted the less common term “Fair Source”, and I’ve seen a lot of meticulous work (e.g. fsl#2 and fsl#10) on how they articulate what they stand for. (The Open Source Definition debate is why I hope the Fair Source folks will file a trademark if this project gains more traction.)

Importantly, OSI’s definition of “Open Source” includes non-discrimination and free redistribution.

When you check out code that is FSL licensed, and the code was authored:

  1. less than 2 years ago: it’s available to you under terms similar to MIT, except you cannot compete with the author by making a similar service using the same software
  2. more than 2 years ago: it is now MIT licensed. (or Apache2, when applicable)

While code older than 2 years is clearly open source, the non-compete clause in option 1 is not compatible with the set of terms set forth by the OSI Open Source Definition (or with freedom 0 of the Four Freedoms of Free Software). Such a license is often referred to as “Source Available”.
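To make the timed mechanics concrete, here is a minimal sketch in Python (assuming a simplified per-date model; in the actual FSL the two-year clock runs from each version’s release, and the license text itself is what governs):

from datetime import date, timedelta

# Simplified model of delayed open source publication under FSL:
# anything released more than two years ago falls under the
# designated future license (MIT or Apache 2.0).
FSL_DELAY = timedelta(days=2 * 365)

def effective_terms(released: date, today: date) -> str:
    """Return which terms apply to a version released on a given date."""
    if today - released >= FSL_DELAY:
        return "MIT/Apache-2.0: do pretty much what you want"
    return "FSL: MIT-like, but no competing use"

print(effective_terms(date(2022, 1, 1), date(2024, 9, 1)))  # future license applies
print(effective_terms(date(2024, 6, 1), date(2024, 9, 1)))  # still FSL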

So, Fair Source is a system that combines two licenses (an Open Source one and a Source Available one with proprietary conditions) in one. I think this is a very clever approach, but it’s not all that useful to compare it to Open Source. Rather, it has a certain symmetry to Open Core:

  • In an Open Core product, you have a “scoped core”: a core built from open source code, surrounded by specifically scoped pieces of proprietary code, for an indeterminate, but usually many-year or perpetual, timeframe
  • With Fair Source, you have a “timed core”: the open source core is all the code that’s more than 2 years old, and the proprietary bits are the most recent developments (regardless of which scope they belong to).

Open Core and Fair Source both try to balance open source with business interests: both have an open source component to attract a community and a proprietary shell to make the business more viable. Fair Source is a licensing choice that’s only relevant to businesses, not individuals. How many businesses monetize pure Open Source software? I can count them on one hand. The vast majority go for something like Open Core. This is why the comparison with Open Core makes much more sense.

A lot of the criticisms of Fair Source suddenly become a lot more palatable when you consider it an alternative to Open Core.

As a customer, which is more tolerable: proprietary features, or a proprietary two years’ worth of product developments? I don’t think it matters nearly as much as some of the advantages Fair Source has over Open Core:

  • Users can view, modify and distribute (but not commercialize) the proprietary code (with Open Core, you only get the binaries)
  • It follows that the project can use a single repository and a single license (with Open Core, multiple repositories and licenses are involved)

Technically, Open Core is more of a business architecture, where you still have to figure out which licenses to use for the core and the shell, whereas Fair Source is more of a prepackaged solution that defines the business architecture as well as the two licenses to use.


Note that you can also devise hybrid approaches. Here are some ideas:

  • a Fair Source core and a Closed Source shell (more defensive than Open Core or Fair Source separately; e.g. PowerSync does this)
  • an Open Source core with a Fair Source shell (more open than Open Core or Fair Source separately)
  • an Open Source core with a Source Available shell (users can view, modify and distribute the code but not commercialize it, without the delayed open source publication). This would be the “true” symmetrical counterpart to Fair Source. It is essentially Open Core where the community also has access to the proprietary features (but can’t commercialize them). It would also allow putting all the code in the same repository (although this benefit works better with Fair Source, because any contributed code will definitely become open source, thus incentivizing the community more). I find this a very interesting option that I hope Open Core vendors will start considering (although it has little to do with Fair Source).
  • etc.

Non-Competition

The FSL introduction post states:

In plain language, you can do anything with FSL software except economically undermine its producer through harmful free-riding

The issue of large cloud vendors selling your software as a service, making money, and contributing little to nothing back to the project, has been widely discussed under a variety of names. This can indeed severely undermine a project’s health, or kill it.

(Personally, I find discussions around whether this is “fair” not very useful. Businesses will act in their best interest; you can’t change the rules of the game, you only have control over how you play it, in other words your own licensing and strategy.)

Here, we’ll just use the same terminology the FSL does: the “harmful free-rider” problem.

However, the statement above is incorrect. Something like this would be more correct:

In plain language, you can do anything with FSL software except offer a similar paid service based on the software when it’s less than 2 years old.

What’s the difference? There are different forms of competition that are not harmful free-riding.

Multiple companies can offer a similar service/product based on the same project, which they all contribute to. They can synergize and grow the market together (“non-zero-sum”, if you want to sound smart). There are many good examples of this, e.g. Hadoop, Linux, Node.js, OpenStack, OpenTelemetry, Prometheus, etc.

When the FSL website makes statements such as “You can do anything with FSL software except undermine its producer”, it seems to forget that some of the best and most ubiquitous software in the world is the result of synergies between multiple companies collaborating.

Furthermore, when the company that owns the copyright on the project turns its back on its community or customers, wouldn’t the community “deserve” a new player offering a similar service on friendlier terms? The new player may even contribute more to the project. Are they a harmful free-rider? Who gets to be the judge of that?

Let’s be clear, FSL allows no competition whatsoever, at least not during the first 2 years. What about after 2 years?

Zeke Gabrielse, one of the shepherds of Fair Source, said it well here:

Being 2 years old also puts any SaaS competition far enough back to not be a concern

Therefore, you may as well say no competition is allowed. Although, in Zeke’s post, I presume he was writing from the position of an actively developed software project. If the project becomes abandoned, the 2-year countdown is an obstacle, a surmountable one, that eventually does let you compete; but in that case the copyright holder has probably gone bust, so you aren’t really competing with them either. The 2-year window is not designed to enable competition; rather, it is a contingency plan for when the company goes bankrupt. The wait can be needlessly painful for the community in such a situation. If a company is about to go bust, it could immediately release its Fair Source code as Open Source, but I wonder whether this could be automated via the actual license text.

(I had found some ambiguous use of the term “direct” competition, which I reported and which has since been resolved.)

Perverse incentives

Humans are notoriously bad at predicting second-order effects, so I like to try. What could be some second-order effects of Fair Source projects? And how do they compare to Open Core?

  • Can companies first grow on top of their Fair Source codebase, take community contributions, and then switch to more restrictive or completely closed licensing, shutting out the community? Yes, if a CLA is in place (or by using the 2-year-old code). This isn’t any different from any other CLA-using Open Source or Open Core project, though with Open Core you can’t take in external contributions on proprietary parts to begin with.
  • If you enjoy a privileged position where others can’t meaningfully compete with you based on the same source code, that can affect how the company treats its community and its customers. It can push through undesirable changes, it can price more aggressively, etc. (These issues are the same with Open Core.)
  • With Open Source & Open Core, the company is incentivized to make the code well understood by the community. Under Fair Source this would still be sensible (in order to get free contributions), but at the same time, by hiding design documents, subtly obfuscating the code, and withholding information, the company can give itself an edge for when the code does become Open Source; although, as we’ve seen, the 2-year delay makes competition unrealistic anyway.

All in all, nothing particularly worse than Open Core, here.

Developer sustainability

The FSL introduction post says:

We value user freedom and developer sustainability. Free and Open Source Software (FOSS) values user freedom exclusively. That is the source of its success, and the source of the free-rider problem that occasionally boils over into a full-blown tragedy of the commons, such as Heartbleed and Log4Shell.

F/OSS indeed doesn’t involve itself with sustainability, for the simple reason that Open Source has nothing to do with business models and monetization. As stated above, it makes more sense to compare with Open Core.

It’s like saying asphalt-paving machinery doesn’t care about funding and is therefore to blame when roads don’t get built, and that therefore we need tolls. It would be more useful to compare tolls to road taxes and vignettes.

Of course it happens that people dedicate themselves to writing open source projects, usually driven by their interests, don’t get paid, and receive volumes of support requests (including from commercial entities). This can become suffering, and can also lead to codebases becoming critically important yet critically misunderstood and fragile. This is clearly a situation to avoid, and there are many ways to address it, ranging from sponsorships (e.g. GitHub, Tidelift), bounty programs (e.g. Algora), and direct funding (e.g. Sentry’s 500k donation), to many more initiatives launched in the last few years. Certainly a positive development. Sometimes formally abandoning a project is also a clear signal that puts the burden of responsibility onto whoever consumes it, and can be a relief to the original author. If anything, it can trigger alarm bells within corporations and be a fast path to properly engaging and compensating the author. There is no way around the fact that developers (and people in general) are responsible for their own well-being and sometimes need to put their foot down, or put on their business hat (which many developers don’t like to do), if their decision to open source a project is causing problems. No amount of licensing can change this hard truth.

Furthermore, you can make money via Open Core around OSI-approved open source projects (e.g. Grafana), via consulting/support, and at the many companies that pay developers to work on (pure) Open Source code (Meta, Microsoft and Google are the most famous ones, but there are many smaller ones). Companies that try to achieve sustainability (let alone thriving) on pure open source software for which they are the main or single driving force are extremely rare. (Chef tried, and now System Initiative is trying to do it better. I remain skeptical but am hopeful and rooting for them to prove the model.)

Doesn’t it sound a bit ironic that the path to getting developers paid is releasing your software under a non-compete license?

Do we reach developer sustainability by preventing developers from making money on top of projects they want to contribute to, or already have contributed to?

Important caveats:

  • Fair Source does allow making money via consulting and auxiliary services related to the software.
  • Open Core shuts people out similarly, but many of the business models above don’t.

CLA needed?

When a project uses an Open Source license with some restrictions (e.g. GPL with its copyleft), it is common to use a CLA so that the company backing it can use more restrictive or commercial licenses (either as a license change later on, or as dual licensing). With Fair Source (and indeed all Source Available licenses), this is also the case.

However, unlike with Open Source licenses, with Fair Source / Source Available licenses a CLA becomes much more of a necessity, because such a license without a CLA isn’t compatible with anything else, and the commercial FSL restriction may not always apply to outside contributions (it depends, for example, on whether the contribution can be offered stand-alone). I’m not a lawyer; for more clarity you should consult one. I think the Fair Source website, or at least its adoption guide, should mention something about CLAs, because it’s an important step beyond simply choosing a license and publishing, so I’ve raised this with them.

AGPL

The FSL website states:

AGPLv3 is not permissive enough. As a highly viral copyleft license, it exposes users to serious risk of having to divulge their proprietary source code.

This looks like fear mongering.

  • AGPL is not categorically less permissive than FSL. It is less permissive when the code is 2 years old or older (and the FSL has turned into MIT/Apache2). For current and recent code, AGPL permits competition; FSL does not.
  • The word “viral” is more divisive than accurate. In my mind, complying with AGPL is rather easy; my rule of thumb is that you trigger copyleft when you “ship”. Most engineers have an intuitive understanding of what it means to “ship” a feature, whether on cloud or on-prem. In my experience, people struggle more with patent clauses, or even the relation between trademarks and software licensing, than they do with copyleft. There’s still some level of uncertainty and caution around AGPL, mainly due to its complexity. (Side note: Google and the CNCF don’t allow copyleft licenses, and their portfolios don’t have a whole lot of commercial success to show for it; I see mainly projects that can easily be picked up by Google.)

Heather Meeker, the lawyer consulted to draft the FSL, has spoken out against the virality discourse, tempering the FUD around AGPL.

Conclusion

I think Fair Source, the FSL and the FCL have a lot to offer. Throughout my analysis I may have raised some criticisms, but if anything, it reminds me of how much Open Core can suck (though it depends on the relative size of core vs shell), so I find it a very compelling alternative to Open Core. Despite some poor choices of wording, I find it well executed: it ties up a lot of loose ends from previous initiatives (Source Available, BSL and other custom licenses) into a neat package. Despite the need for a CLA, it’s still quite easy to implement and arguably more viable than Open Core is in its current state today. When comparing to Open Source, the main question is: which is worse, the “harmful free-rider” problem or the non-compete? (Anecdotally, my gut says the former, but I’m on the lookout for data-driven evidence.) When comparing to Open Core, the main question is: is a business more viable keeping proprietary features closed, or making them source-available (with a non-compete)?

As mentioned, there are many more hybrid approaches possible. For a business thinking about their licensing strategy, it may make sense to think of these questions separately:

  • should our proprietary shell be time based or feature scoped? Does it matter?
  • should our proprietary shell be closed, or source-available?

I certainly would prefer to see companies and projects appear:

  • as Fair Source, rather than not at all
  • as Open Core, rather than not at all
  • as Fair Source, rather than Open Core (depending on “shell thickness”).
  • with more commercial restrictions from the get-go, instead of starting more permissively and re-licensing later. Just kidding, but that’s a topic for another day.

For vendors, I think there are some options left to explore, such as Open Core with a source-available (instead of closed) shell; something to consider for any company doing Open Core today. For end users and customers, “Open Source” vendors are not the only ones to be taken with a grain of salt; it’s the same with Fair Source vendors, since they may have a more complicated arrangement than just using a Fair Source license.

Thanks to Heather Meeker and Joseph Jacks for providing input, although this article reflects only my personal views.

The Return of the Revenge of the Technophile Luddites

Enshittification by touchscreen

In 2017, the US Navy destroyer John S. McCain collided with a Liberian oil tanker, killing 10 of its sailors and causing 100 million in material damage.

Adrian Hanft analyzes in great detail this accident, caused primarily by the replacement of mechanical controls with touch interfaces. As a result, following an unfortunate tap on a “checkbox”, the two engines became decoupled (which is an expected feature) and, consequently, the ship started to turn when one of the engines changed power.

The details are juicy. As Adrian rightly points out, nothing in the accident report incriminates the interface. The US Navy nevertheless seems determined to be done with touch interfaces and to return to physical controls.

Adrian, a designer by trade, protests and argues that it is possible to design “good graphical interfaces”.

No.

We cannot. A touchscreen requires you to look at it. You cannot acquire motor reflexes on a touchscreen. You cannot use a touchscreen while looking elsewhere (which is rather useful when you are steering a 9,000-ton ship). You cannot train sailors to act intuitively under stress if the interface changes every 6 months because of an update.

It is time to accept that we are being screwed over by the computer industry and that computerizing everything is essentially counterproductive, even dangerous. If I were vulgar, I would say we are getting fingered by digitalization.

Cartoon showing a lady trying to buy a train ticket and being told to go home, create an account on the platform, buy the ticket, transfer it to her smartphone, and that this will be easier (drawing by Len Hawkins for “Private Eye” magazine)

The first and indeed only property of a good user interface is that it never changes. That it remains identical for years, for decades, so that durable, transmissible expertise can be built.

The exploitation of the worker

As Danièle Linhart points out in “La comédie humaine du travail”, permanent change is a conscious strategy to destroy all expertise and thereby strip the worker of all power. By abolishing the very concept of expertise, workers become interchangeable pawns with no bargaining power whatsoever.

Permanent change is not a consequence; it is a perfectly conscious and fully theorized management strategy!

The first to understand the dispossession inflicted on them through the changing of their tools were the Luddites. Gavin Mueller tells the story very well in “Breaking Things at Work: The Luddites Were Right About Why You Hate Your Job”. In his first chapter, Mueller describes with uncanny accuracy the free software movement as a technophile Luddite movement.

Contrary to what propaganda would have us believe, Luddism is not opposition to technology. It is opposition to a form of technology owned by a minority and used to exploit the majority.

The exploitation of the user

But why stop at exploiting workers now that we have “users”?

Numerama looks back at the Tesla that blocked the center of Sète for 45 minutes while installing an update. According to them, the problem is the driver, who deliberately launched the update.

That’s not wrong.

And the article goes on, insisting and offering advice on scheduling your vehicle’s updates at night and planning for the possibility that an update goes wrong and leaves the vehicle inoperative.

I would like us to pause for a moment on this concept, popularized by our computers, then our phones, and now our cars: it is now taken for granted that every device must regularly be rendered inoperative for several minutes. It is taken for granted that every user must take the time to read and understand what each update entails (the folks at Numerama are quite naive to believe that clicking “accept” has not become a muscle reflex). At the end of this process, if the device still works (which is not always the case), the user gets to discover that it no longer works the same way (that is the very principle of an update). According to the manufacturer, it is always “better”. In my experience, for the majority of users, it is a terrible period of stress in which the device can no longer be trusted and some things have to be relearned. When they have not simply disappeared.

In short, a device that modifies itself is, in essence, an enemy of its user. The user pays for the right to become the slave of their device! I said as much in my talk “Pourquoi ?”.

The upside of all this is that if warships and personal cars start acting against their users more openly, people may finally understand that these machines have been ruining their lives from the start.

But really, I am dreaming. A majority of people want crappy stuff. Sporiff comes back to the subject, noting that in Germany sharing your IBAN or your email address is considered “intimate”, but sharing your Insta to chat, or your PayPal account, is not.

The Telegram scam

Speaking of privacy and the exploitation of users. Or at least of their credulity.

I regularly hear that Telegram is an encrypted messenger. This is not, and has never been, the case. Telegram is the equivalent of Facebook Messenger, which says it all. The Telegram company (and the Russian government) can read and analyze 99% of the messages that transit through the platform.

One might think this is some subtle technicality of encryption techniques, but not at all. Telegram is not, and has never been, end-to-end encrypted. It is as simple as that. They simply lied through their marketing.

But how did this lie become popular? Simply because Telegram offers the possibility of creating a “secret chat”, which is technically encrypted. So, in theory, there is a way to have a secret conversation on Telegram. In practice there is not, because:

  • These secret conversations only work for direct conversations, not for groups.
  • These secret conversations are difficult to set up.
  • These secret conversations are not the default.
  • These secret conversations do not use the standards of the cryptographic industry, but a home-grown solution developed by people with no cryptographic expertise.
  • The Telegram company (and the Russian government) know when and with whom you initiated a secret chat. And they can cross-reference it with the non-secret part (just imagine: “Shall we move to the secret chat to discuss this?”).

I sincerely thought everyone was more or less aware of this, and that Telegram’s success was comparable to that of Messenger or Instagram: popular among people who don’t care. But the media keep describing Telegram as a secure application. This is false and has never been true.

Which confirms the adage (that I invented):

Articles in the mainstream media are reliable, except in the fields where you have a minimum of competence.

If you use Telegram, it is urgent to migrate to another platform (the easiest, for me, being Signal; but above all, avoid WhatsApp). Even if you think you don’t care, even if you never post. Your mere presence on the platform may one day encourage someone else to contact you there about private matters you don’t particularly want to share with the Russian government.

Supporting free software

But all is not bleak.

French companies are joining forces to support free software with a percentage of their revenue. I love these technophile Luddite bastards!

The return of the Luddite blogs

The Luddite world is, in fact, doing rather well. Solderpunk, the founder of Gemini, is surprised to see that some of the non-tech bloggers he followed several years ago are now reopening their blogs, after having closed them to become Instagrammers for a few years.

I don’t know whether this is a real trend or just my personal bubble. Truth be told, I think I have learned not to chase trends at all costs. I love blogging and I want to do it until the end of my life. I love writing science fiction and I intend to keep writing it until the end of my life. If blogs and SF are not trendy, too bad; at least I will be doing what I love and what I believe in. If blogs or SF suddenly become fashionable (again), then maybe I will benefit from it one way or another. And even then, I am not sure, because the media success will certainly go to those with a talent for surfing trends.

But I am convinced that, at the twilight of my life, I will take far more pride in having devoted my time to doing what I love and sharing it with those who enjoy the same things as I do. The sirens of fleeting glory sometimes make this choice difficult, especially for me, who is easily drawn to the glitter of the star system. But I am convinced it is the right one.

An interview about metablogging

For those interested in how I blog, why I blog, and the way I blog, I gave a long interview in English to Manuel Moreale for the “People & Blogs” newsletter. I am not very fond of “metablogging” (blogging about blogging), so this interview contains quite a few things I would never have discussed here.


August 25, 2024

I made some time to give some love to my own projects and spent some time rewriting the Ansible role stafwag.ntpd and cleaning up some other Ansible roles.

There is some work ongoing for some other Ansible roles/projects, but this might be a topic for some other blog post(s) ;-)


stafwag.ntpd


An ansible role to configure ntpd/chrony/systemd-timesyncd.


This might be controversial, but I decided to add support for chrony and systemd-timesyncd. Ntpd is still supported and remains the default on the BSDs (FreeBSD, NetBSD, OpenBSD).

It’s possible to switch the ntp implementation by using the ntpd.provider directive.

The Ansible role stafwag.ntpd v2.0.0 is available at:

Release notes

V2.0.0

  • Added support for chrony and systemd-timesyncd on GNU/Linux
    • systemd-timesyncd is the default on Debian GNU/Linux 12+ and Archlinux
    • ntpd is the default on the non-Linux operating systems (the BSDs, Solaris) and on Debian GNU/Linux 10 and 11
    • chrony is the default on all other GNU/Linux distributions
    • The role now takes an ntpd hash as its input
    • Updated README
    • CleanUp

stafwag.ntpdate


An ansible role to activate the ntpdate service on FreeBSD and NetBSD.


The ntpdate service is used on FreeBSD and NetBSD to sync the time during system boot-up. On most Linux distributions this is now handled by chronyd or systemd-timesyncd. The OpenBSD ntpd implementation, OpenNTPD, also supports syncing the time during system boot-up.

The role is available at:

Release notes

V1.0.0

  • Initial release on Ansible Galaxy
    • Added support for NetBSD

stafwag.libvirt


An ansible role to install libvirt/KVM packages and enable the libvirtd service.


The role is available at:

Release notes

V1.1.3

  • Force bash for shell execution on Ubuntu
    • The default dash shell doesn’t support pipefail.

V1.1.2

  • CleanUp
    • Corrected ansible-lint errors
    • Removed the install task “install/.yml”
      • It was introduced to support Kali Linux; Kali Linux is reported as “Debian” now.
      • It isn’t used in this role.
    • Removed the invalid CentOS-8.yml softlink
      • CentOS 8 should be caught by RedHat-yum.yml.

stafwag.cloud_localds


An ansible role to create cloud-init config disk images. This role is a wrapper around the cloud-localds command.


It’s still planned to add support for distributions that don’t have cloud-localds in their official package repositories, such as RedHat 8+.

See the GitHub issue: https://github.com/stafwag/ansible-role-cloud_localds/issues/7

The role is available at:

Release notes

V2.1.3

  • CleanUp
    • Switched to vars and package to install the required packages
    • Corrected ansible-lint errors
    • Added more examples

stafwag.qemu_img


An ansible role to create QEMU disk images.


The role is available at:

Release notes

V2.3.0

  • CleanUp Release
    • Added doc/examples
    • Updated meta data
    • Switched to vars and package to install the required packages
    • Corrected ansible-lint errors

stafwag.virt_install_import


An ansible role to import virtual machine with the virt-install import command


The role is available at:

Release notes

  • Use var and package to install pkgs
    • v1.2.1 wasn’t merged correctly; this release should fix it…
    • Switched to var and package to install the required packages
    • Updated meta data
    • Updated documentation and included examples
    • Corrected ansible-lint errors



Have fun!

August 18, 2024

Marketing, a malevolent, incompetent and dangerous religion

Marketing has become a religion: a set of actions that its practitioners apply blindly because “everyone does it”. Like any religion, it is profoundly stupid.

A European train ticket

Take the idea of introducing a European train ticket. Simple, effective.

But the hardest part will be convincing the marketers at the SNCF, who have been striving for years to do exactly the opposite, forcing every traveler crossing France to learn the difference between a TGV Ouigo and a TGV Inoui.

As if we gave a damn about the name of the train we are sitting in…

But for the marketers, two types of trains means twice the budget to do twice the advertising. In short, if you work in marketing, as Bill Hicks said: “Please kill yourself!”

As Esther puts it on Mastodon: advertising should be treated the way security specialists treat code injection. A fraudulent exploitation of a bug!

Smart marketing

Look at this fashion of making everything “smart”. Well, it turns out I am not the only one who does not want a “smart” toothbrush or fridge.

A study has shown that the words “artificial intelligence” are a major reason not to buy a product.

One of my major criticisms of the AI bubble is that it is absolutely nothing new.

Remember the Ashley Madison scandal? A dating site for married adults wishing to have extramarital affairs. Its database was hacked, and the names and details of the registered adults were made public. “But honey, I swear it was just to test it!”

What everyone missed is that, in reality, the site’s members were nothing but men. And that they were paying to chat, unknowingly, with bots. Which, ironically, means that Ashley Madison made wayward husbands more faithful, since they wasted their time flirting with algorithms.

But what makes me weep is when companies whose interests are a priori opposed to this AI-everywhere fashion fall right into the trap. Like Protonmail, for instance, which focuses on the security and confidentiality of your emails.

Well, they have just launched an AI assistant to write your emails for you. This raises plenty of privacy questions, well summarized here:

But, before that, the main question that comes to my mind is: if you don’t want to write an email, why send it in the first place? Why make it longer than necessary?

Who really finds it easier to write a prompt and proofread an automatically generated email than to just write what they mean? Do you feel we don’t have enough email already? Do you think people are so gifted at written expression that their texts are too precise, too clear, and absolutely need to be made longer and more complex?

It is frightening how the world seems designed by incompetents for incompetents. As I said at Breizhcamp: if you use a computer regularly, learn touch typing. As long as you have not mastered your keyboard, the rest is bullshit.

Malevolent, but incompetent

The worst thing about this religion of marketing is not that it is utterly malevolent and makes us all contribute to the destruction of ecosystems. No, the worst is that it is, above all, the reign of total incompetence. Everyone does whatever, without understanding anything.

Like using Slack.

The most unproductive thing in the world.

Which brings us back to Richard Feynman’s algorithm for solving a problem:

1. Write down the problem
2. Think very hard
3. Write down the solution

All the tools, the meetings, the chats, the notifications are attempts to avoid the first two steps, because they are hard. They require effort. They require moments of solitude. So we flail about, hoping the problem will solve itself, or we harass someone into solving it.

Harassing the person solving the problem has two advantages: if they succeed, we can claim it was thanks to our harassment. If they fail, we can claim we did not harass them enough.

As Sporiff says, before buying every specialized utensil in a kitchen, you are better off investing in a few very good knives and learning how to use them.

Reactionary tech

The geeks of my generation grew up with the idea of positive technological progress carried by benevolent companies. “Don’t be evil”.

But, as Olivier Alexandre, a sociologist at the CNRS, analyzes, while tech enjoys a progressive image, it has become fundamentally reactionary. Yesterday’s hippies have become billionaires and are building morbid ideologies to justify their fortunes.

Our dependence on tech is, for the younger generation, what dependence on oil was for the boomers.

Winter, for that matter, writes about deleting their Twitter, Reddit and Facebook accounts.

Twitter/X has become a network dedicated to promoting Trump. Would you create an account on Truth Social, Trump’s network? Probably not. So why are you still on X/Twitter?

The answer is simple: inertia. Because it is harder to “undo” than to do, something marketing understands very well. We mocked Musk when he bought Twitter for a veritable fortune and kept losing money on it. But if Trump is re-elected, we will realize that Musk has done nothing different from Bolloré: investing his money to try to impose his ideology.

Administrative absurdity

But in fact, all of this is perfectly logical. Marketing sells us an illusion of benevolence and efficiency, but above all it wants us to be as inefficient as possible. To waste as much as possible. And to be unhappy, so that we compensate by buying.

A post by Olivier Ertzcheid resonates particularly with me. He describes the everyday absurdity he experiences at his university. And his strategy of always asking “And then what?” whenever tasks are presented to him as unavoidable, and of negotiating his way out of doing them.

I do not have his patience. I boil with anger. I flatly refuse. I say: “This is perfectly stupid. I will do it once you have demonstrated to me that it is not completely absurd, idiotic and dangerous.”

When I say this, I sincerely expect to be shown my errors. I sincerely believe that there are aspects I have not understood. I want to understand.

But, in the vast majority of cases, there is nothing to understand beyond blind obedience and a total refusal to ask the slightest question. I am simply, subtly, sidelined from every process.

I am often told that I am pretentious. That I take people for idiots. That is because, when I ask questions, nobody wants to explain. Nobody knows how to explain. I am told “That’s just how it is!”, “It’s obvious!”. I reply that no, not to me, it isn’t.

Then people look at me with a kind of pity: “Poor guy… He does not understand the importance of the decision process for the timesheet format.” And I read in their eyes that they do not understand it either, but that they cannot even realize it. That they are not allowed to realize it.

So I ask you, you who read me, you who, I am sure, perceive the absurdity of many of the things you are asked to do: ask “Why?”. Ask “What if we don’t do it?”. It will cost you nothing. It will not make you a rebel. But it will be a start…


August 14, 2024

Retro-style image of a couple in a spaceship with the text: 'Drupal CMS, The official name for Drupal Starshot'

We're excited to announce that "Drupal CMS" will be the official name for the product developed by the Drupal Starshot Initiative.

The name "Drupal CMS" was chosen after user testing with both newcomers and experienced users. This name consistently scored highest across all tested groups, including marketers unfamiliar with Drupal.

Participants appreciated the clarity it brings:

Having the words CMS in the name will make it clear what the product is. People would know that Drupal was a content management system by the nature of its name, rather than having to ask what Drupal is.
I'm a designer familiar with the industry, so the term CMS or content management system is the term or phrase that describes this product most accurately in my opinion. I think it is important to have CMS in the title.

The name "Drupal Starshot" will remain an internal code name until the first release of Drupal CMS, after which it will most likely be retired.

August 13, 2024

Suppose you want to do something crazy, like watching a sports game on a PC...

There are basically two options.

Option 1:
Step 1: Find a service provider such as TV Vlaanderen via Google.
Step 2: Register an account.
Step 3: Provide private information such as full name, date of birth, telephone number, home address.
Step 4: Prove that you are indeed the owner of the e-mail or telephone number.
Step 5: Give permission to withdraw money from your account during the free trial period.
Step 6: Watch carefully all the time to prevent being subscribed to 'newsletters'.
Step 7: Accept the General Terms and Conditions!
Step 8: Accept third-party cookies!
Step 9: Disable your adblocker!
Step 10: Press 'Play' and get an error message instead of the game.
Step 11: Log in on another computer with a different operating system and browser.
Step 12: Press 'Play' and get yet another error message.
Step 13: Find out how to email customer service. You can't!
Step 14: Fill out a contact form asking for help.
Step 15: Endure the daily SPAM emails.
Step 16: Endure the daily SPAM emails.
Step 17: Endure the daily SPAM emails.
Step 18: Wait fruitlessly for an answer.

Option 2:
Step 1. Find a dubious website via Google and press 'Play'.

With option 2 there is no registration, no Terms and Conditions, the AdBlocker stays on, it's free, there's no SPAM in your mailbox, no personal data requested, and no idiotic playback errors...

Here’s a neat little trick for those of you using Home Assistant while also driving a Volvo.

To get your Volvo driving data (fuel level, battery state, …) into Home Assistant, there’s the excellent volvo2mqtt addon.

One little annoyance is that every time it starts up, you will receive an e-mail from Volvo with a two-factor authentication code, which you then have to enter in Home Assistant.

Fortunately, there’s a solution for that, you can automate this using the built-in imap support of Home Assistant, with an automation such as this one:

alias: Volvo OTP
description: ""
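# Trigger on the verification e-mail that Volvo sends for each volvo2mqtt login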
trigger:
  - platform: event
    event_type: imap_content
    event_data:
      initial: true
      sender: no-reply@volvocars.com
      subject: Your Volvo ID Verification code
condition: []
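# Extract the numeric code from the mail body, publish it on the MQTT topic
# that volvo2mqtt listens on, then delete the mail from the inbox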
action:
  - service: mqtt.publish
    metadata: {}
    data:
      topic: volvoAAOS2mqtt/otp_code
      payload: >-
        {{ trigger.event.data['text'] | regex_findall_index(find='Your Volvo ID verification code is:\s+(\d+)', index=0) }}
  - service: imap.delete
    data:
      entry: "{{ trigger.event.data['entry_id'] }}"
      uid: "{{ trigger.event.data['uid'] }}"
mode: single

This will post the OTP code to the right location and then delete the message from your inbox (if you’re using Google Mail, that means archiving it).


Comments | More on rocketeer.be | @rubenv on Twitter

August 06, 2024

Starshot strategy

I'm excited to share the first version of Drupal Starshot's product strategy, a document that aims to guide the development and marketing of Drupal Starshot. To read it, download the full Drupal Starshot strategy document as a PDF (8 MB).

This strategy document is the result of a collaborative effort among the Drupal Starshot leadership team, the Drupal Starshot Advisory Council, and the Drupal Core Committers. We also tested it with marketers who provided feedback and validation.

Drupal Starshot and Drupal Core

Drupal Starshot is the temporary name for an initiative that extends the capabilities of Drupal Core. Drupal Starshot aims to broaden Drupal's appeal to marketers and a wider range of project budgets. Our ultimate goal is to increase Drupal's adoption, solidify Drupal's position as a leading CMS, and champion an Open Web.

For more context, please watch my DrupalCon Portland keynote.

It's important to note that Drupal Starshot and Drupal Core will have separate, yet complementary, product strategies. Drupal Starshot will focus on empowering marketers and expanding Drupal's presence in the mid-market, while Drupal Core will prioritize the needs of developers and more technical users. I'll write more about the Drupal Core product strategy in a future blog post once we have finalized it. Together, these two strategies will form a comprehensive vision for Drupal as a product.

Why a product strategy?

By defining our goals, target audience and necessary features, we can more effectively guide contributors and ensure that everyone is working towards a common vision. This product strategy will serve as a foundation for our development roadmap, our marketing efforts, enabling Drupal Certified Partners, and more.

Drupal Starshot product strategy TL;DR

For the detailed product strategy, please read the full Drupal Starshot strategy document (8 MB, PDF). Below is a summary.

Drupal Starshot aims to be the gold standard for marketers that want to build great digital experiences.

We'd like to expand Drupal's reach by focusing on two strategic shifts:

  1. Prioritizing Drupal for content creators, marketers, web managers, and web designers so they can independently build websites. A key goal is to empower these marketing professionals to build and manage their websites independently without relying on developers or having to use the command line or an IDE.
  2. Extending Drupal's presence in the mid-market segment, targeting projects with total budgets between $30,000 and $120,000 USD (€25,000 to €100,000).

Drupal Starshot will differentiate itself from competitors by providing:

  1. A thoughtfully designed platform for marketers, balancing ease of use with flexibility. It includes smart defaults, best practices for common marketing tasks, marketing-focused editorial tools, and helpful learning resources.
  2. A growth-oriented approach. Start simple with Drupal Starshot's user-friendly tools, and unlock advanced features as your site grows or you gain expertise. With sophisticated content modeling, efficient content reuse across channels, and robust integrations with other leading marketing technologies, ambitious marketers won't face the limitations of other CMSs and will have the flexibility to scale their site as needed.
  3. AI-assisted site building tools to simplify complex tasks, making Drupal accessible to a wider range of users.
  4. Drupal's existing competitive advantages such as extensibility, scalability, security, accessibility, multilingual support, and more.

What about ambitious site builders?

In the past, we used the term ambitious site builders to describe Drupal's target audience. Although this term doesn't appear in the product strategy document, it remains relevant.

While the strategy document is publicly available, it is primarily an internal guide. It outlines our plans but doesn't dictate our marketing language. Our product strategy's language purposely aligns with terms used by our target users, based on persona research and interviews.

To me, "ambitious site builders" includes all Drupal users, from those working with Drupal Core (more technically skilled) to those working with Drupal Starshot (less technical). Both groups are ambitious, with Drupal Starshot specifically targeting "ambitious marketers" or "ambitious no-code developers".

Give feedback

The product strategy is a living document, and we value input. We invite you to share your thoughts, suggestions, and questions in the product strategy feedback issue within the Drupal Starshot issue queue.

Get involved

There are many opportunities to get involved with Drupal Starshot, whether you're a marketer, developer, designer, writer, project manager, or simply passionate about the future of Drupal. To learn more about how you can contribute to Drupal Starshot, visit https://drupal.org/starshot.

Thank you

I'd like to thank the Drupal Starshot leadership team, the Drupal Starshot Advisory Council, and the Drupal Core Committers for their input on the strategy. I'm also grateful for the marketers who provided feedback on our strategy, helping us refine our approach.

August 05, 2024

Decidedly, the slightest thing suits you!

A short ode to nudity

While I enjoy swimming or taking a sauna naked, naturism had never really attracted me. Without judging those who practiced it, I considered that it simply was not for me.

Until the day a couple of friends moved abroad. For thirty years they have spent all their vacations in a naturist resort, and they invited us to join them for a week.

At first I balked. My basest patriarchal instincts, whose existence I had not even suspected, rebelled at the idea of my wife being naked among strangers. But she argued that we would not have many more opportunities to see our friends again, that I was not obliged to come along, that it was only a few days, and that, at worst, it would make for an interesting experience.

I put up a categorical male refusal. Which is how, a few months later, the whole family landed, with arms and (too much) baggage, in a gigantic naturist complex.

The first thing that reassured me was to see that many people were in fact fully dressed. While nudity is mandatory at the beach and at the pool, the rest of the camp is entirely free.

It must be said that, during the first hours, my gaze was irresistibly drawn to these naked bodies walking around, playing mini-golf, cycling, or playing pétanque. My mind saw something abnormal, shocking in them. I myself undressed only to go to the beach.

And then, much faster than anything I could have imagined, my sense of normality flipped. These young people, these old men and women, these men, these women, these children, these teenagers, the fat and the thin. They all became a tanned, flesh-colored blur in which I moved without having to pay attention to my own appearance, to the image I projected.

A normal, respectful and disconnected nudity

From the camping areas with Décathlon tents to the large chalets with brand-new Teslas parked in front of them, the naturist camp brings together a mix of social categories. Yet, once freed from the inescapable apparatus of clothing, the classes can no longer be told apart. A sense of equality emerges.

Very quickly, my own body appeared to me as perfectly normal, banal. Neither the fattest nor the thinnest, neither the most muscular nor the puniest. I acquired the particularly restful conviction that it was of interest to no one. This point is especially important for women, who are used to being ogled. My wife confided to me the surprising feeling of confidence of being naked on the beach amid the natural respect of the men. For in a naturist camp, an indelicate man will not linger over a body the way he might, in ordinary life, over a neckline or a thong. Naked bodies are the norm.

Admittedly, this respectful atmosphere is made possible by an impressive security organization. Young seasonal workers patrol constantly. Any indelicate behavior is immediately sanctioned and, if repeated, can lead to expulsion.

The freedom of nudity is not merely psychological. It is also material. My wife and I discovered that we had packed far too many clothes for the week. No clothes, no laundry, no wear and tear, no need to renew a wardrobe. Beyond equality, nudity offers an incredible countermeasure to consumerism.

We, who cannot stand tobacco, have also rarely been so little bothered by cigarettes. Some people smoke, but they seemed fewer to me than in the places I usually frequent. Philosophically, the rejection of tobacco and alcohol is part of the historical foundations of naturism. Smoking is in fact forbidden on the naturist island of Le Levant, in the Var.

One obvious rule of a naturist camp is the ban on taking photos in which other members might appear. It sounds logical, but it has a profound impact: the vast majority of vacationers move around without a smartphone. Without pockets, carrying one is not very practical anyway. With a few rare exceptions, I saw no one glued to a screen during the entire week. Passing groups of teenagers meeting up or wandering around, I was more struck by the total disappearance of smartphones than by the absence of clothes.

La contre-sexualisation de la nudité

L’une de mes craintes inconscientes avant de pratiquer le naturisme était certainement l’aspect sexuel. Je ne supporte ni le voyeurisme ni l’exhibitionnisme et j’avais peur de me retrouver au milieu d’une population pratiquant une forme douce des deux.

Je m’étais totalement fourvoyé.

Ce n’est pas la nudité qui sexualise. C’est nous qui sexualisons la nudité en la cachant, en la rendant honteuse.

Nous avons tellement peur que nos enfants soient confrontés à la violence de la pornographie en ligne que nous en oublions que c’est l’une des seules occasions durant laquelle ils seront confrontés à la nudité. Le corps est alors irrémédiablement associé au sexe, à la violence, à l’humiliation.

L’hyper sexualisation de la nudité est poussée à l’extrême par les religions qui cherchent à camoufler le corps des femmes. Mais, d’une manière générale, toutes les religions monothéistes rejettent violemment la nudité. En cachant le corps, on génère artificiellement la honte et la violence. On transforme le corps en marchandise tout en soumettant l’esprit aux dictats religieux.

À l’inverse, l’ostentation si prisée par le consumérisme est également délétère. Sur tout le séjour, une seule personne m’a négativement impactée. Une femme sur la plage qui, bien que nue, portait de lourds bracelets aux poignets et aux chevilles, un collier, des lunettes de soleil de marque, un chapeau compliqué. Marchant avec des sandales aux talons surélevés, elle fumait de longues et très fines cigarettes. Il m’a fallu quelques secondes avant de comprendre pourquoi j’avais été choqué. Contrairement aux milliers d’autres naturistes, cette personne mettait sa nudité en scène. Elle perpétuait, probablement sans en être conscient, le jeu capitaliste de la marchandisation des corps. Le fait qu’elle ait été la seule de tout mon séjour à fumer sur la plage n’est probablement pas anodin.

La maladie de l’anti-nudité

Le souvenir d’un ancien voisin m’est un jour revenu. Il y a quelques années, cet homme, avec qui j’échangeais jusque là de simples « Bonjour », avait commencé à m’injurier en hurlant, en me traitant de malade mental. Il m’avait fallu de longues minutes de palabres pour comprendre qu’il m’avait un jour vu nu dans mon jardin.

Il faut reconnaitre que lorsqu’il faisait nuit noire, persuadé que personne ne pouvait me voir, j’allais parfois me plonger nu dans le bac d’eau qui nous sert de piscine.

Mes explications et excuses n’ont en rien atténué sa colère. Le simple fait que je puisse être nu chez moi m’avait transformé définitivement en monstre abject mentalement dérangé.

Ma brève expérience du naturisme m’a permis de me rendre compte à quel point ce rejet de la nudité est une maladie. Car nous sommes tous nus sous nos vêtements. Nous avons tous un corps. Il n’y a rien de plus naturel que la nudité. Que ce voisin soit entré dans une colère aussi noire pour un événement aussi anodin en dit long sur sa propre haine inconsciente du corps humain.

En rejetant et sexualisant la nudité, nous traumatisons le regard de nos enfants sur leur propre corps, nous générons artificiellement de la violence, de la souffrance, de la honte.

À toutes les personnes qui sont complexées vis-à-vis de leur propre corps, je ne peux que conseiller de passer quelques jours dans un camp naturiste.

Cela demande du courage, c’est réellement étrange. Ce n’est certainement pas pour tout le monde.

Mais c’est un avant-goût d’une liberté que nous avons trop souvent camouflée. C’est un remède contre la marchandisation et la bigoterie qui étouffent nos corps et nos esprits.

Être nu, c’est une lettre d’amour à la vie, à l’humanité. Nues, les personnes sont belles. En se croisant au détour d’une promenade, leurs regards se disent :

« Vous êtes beaux, vous êtes belles ! Décidément, un rien vous habille ! »


August 02, 2024

Drupal logo with the text "Eleven" and a rocket launch in the background.

Today is a big day for Drupal as we officially released Drupal 11!

In recent years, we've seen an uptick in innovation in Drupal. Drupal 11 continues this trend with many new and exciting features. You can see an overview of these improvements in the video below:

Drupal 11 has been out for less than six hours, and updating my personal site was my first order of business this morning. I couldn't wait! Dri.es is now running on Drupal 11.

I'm particularly excited about two key features in this release, which I believe are transformative and will likely reshape Drupal in the years ahead:

  1. Recipes (experimental): This feature allows you to add new features to your website by applying a set of predefined configurations.
  2. Single-Directory Components: SDCs simplify front-end development by providing a component-based workflow where all necessary code for each component lives in a single, self-contained directory.

These two new features represent a major shift in how developers and site builders will work with Drupal, setting the stage for even greater improvements in future releases. For example, we'll rely heavily on them in Drupal Starshot.
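To make Recipes more concrete, here is a minimal sketch of what a recipe definition could look like. This is illustrative only: the format is still experimental, and the recipe name and configuration below are hypothetical.

name: 'Contact form'
description: 'Adds a site-wide contact form.'
type: 'Site'
install:
  - contact
config:
  import:
    contact: '*'

At the time of writing, a recipe like this can be applied to an existing site with the experimental php core/scripts/drupal recipe <path-to-recipe> command. Single-Directory Components follow a similar self-contained philosophy: a component's Twig template, metadata (*.component.yml), CSS and JavaScript all live together in one directory.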

Drupal 11 is the result of contributions from 1,858 individuals across 590 organizations. These numbers show how strong and healthy Drupal is. Community involvement remains one of Drupal's greatest strengths. Thank you to everyone who contributed to Drupal 11!

July 28, 2024


Updated @ Mon Sep 2 07:55:20 PM CEST 2024: Added devfs section
Updated @ Wed Sep 4 07:48:56 PM CEST 2024: Corrected gpg-agent.conf


I use FreeBSD and GNU/Linux.

In a previous blog post, we set up GnuPG with smartcard support on Debian GNU/Linux.

In this blog post, we’ll install and configure GnuPG with smartcard support on FreeBSD.

The GNU/Linux blog post provides more details about GnuPG, so it might be useful for FreeBSD users to read it first.

Likewise, Linux users are welcome to read this blog post if they’re interested in how it’s done on FreeBSD ;-)

Install the required packages

To begin, we need to install the required packages on FreeBSD.

Update the package database

Execute pkg update to update the package database.
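For example (no transcript shown here; the command simply refreshes the repository catalogue):

[staf@monty ~]$ sudo pkg update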

Thunderbird

[staf@monty ~]$ sudo pkg install -y thunderbird
Password:
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Checking integrity... done (0 conflicting)
The most recent versions of packages are already installed
[staf@monty ~]$ 

lsusb

You can verify the USB devices on FreeBSD using the usbconfig command or lsusb, which is also available on FreeBSD as part of the usbutils package.

[staf@monty ~/git/stafnet/blog]$ sudo pkg install usbutils
Password:
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 3 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
	usbhid-dump: 1.4
	usbids: 20240318
	usbutils: 0.91

Number of packages to be installed: 3

301 KiB to be downloaded.

Proceed with this action? [y/N]: y
[1/3] Fetching usbutils-0.91.pkg: 100%   54 KiB  55.2kB/s    00:01    
[2/3] Fetching usbhid-dump-1.4.pkg: 100%   32 KiB  32.5kB/s    00:01    
[3/3] Fetching usbids-20240318.pkg: 100%  215 KiB 220.5kB/s    00:01    
Checking integrity... done (0 conflicting)
[1/3] Installing usbhid-dump-1.4...
[1/3] Extracting usbhid-dump-1.4: 100%
[2/3] Installing usbids-20240318...
[2/3] Extracting usbids-20240318: 100%
[3/3] Installing usbutils-0.91...
[3/3] Extracting usbutils-0.91: 100%
[staf@monty ~/git/stafnet/blog]$

GnuPG

We’ll need GnuPG (of course), so ensure that it is installed.

[staf@monty ~]$ sudo pkg install gnupg
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Checking integrity... done (0 conflicting)
The most recent versions of packages are already installed
[staf@monty ~]$ 

Smartcard packages

To enable smartcard support on FreeBSD, we’ll need to install the smartcard packages. The same software as on GNU/Linux - opensc - is available on FreeBSD.

pkg provides

It’s handy to be able to check which packages provide certain files. On FreeBSD this is provided by the provides plugin, which is not enabled by default in the pkg command.

To install the provides plugin, install the pkg-provides package.

[staf@monty ~]$ sudo pkg install pkg-provides
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        pkg-provides: 0.7.3_3

Number of packages to be installed: 1

12 KiB to be downloaded.

Proceed with this action? [y/N]: y
[1/1] Fetching pkg-provides-0.7.3_3.pkg: 100%   12 KiB  12.5kB/s    00:01    
Checking integrity... done (0 conflicting)
[1/1] Installing pkg-provides-0.7.3_3...
[1/1] Extracting pkg-provides-0.7.3_3: 100%
=====
Message from pkg-provides-0.7.3_3:

--
In order to use the pkg-provides plugin you need to enable plugins in pkg.
To do this, uncomment the following lines in /usr/local/etc/pkg.conf file
and add pkg-provides to the supported plugin list:

PKG_PLUGINS_DIR = "/usr/local/lib/pkg/";
PKG_ENABLE_PLUGINS = true;
PLUGINS [ provides ];

After that run `pkg plugins' to see the plugins handled by pkg.
[staf@monty ~]$ 

Edit the pkg configuration to enable the provides plugin.

staf@freebsd-gpg:~ $ sudo vi /usr/local/etc/pkg.conf
PKG_PLUGINS_DIR = "/usr/local/lib/pkg/";
PKG_ENABLE_PLUGINS = true;
PLUGINS [ provides ];

Verify that the plugin is enabled.

staf@freebsd-gpg:~ $ sudo pkg plugins
NAME       DESC                                          VERSION   
provides   A plugin for querying which package provides a particular file 0.7.3     
staf@freebsd-gpg:~ $ 

Update the pkg-provides database.

staf@freebsd-gpg:~ $ sudo pkg provides -u
Fetching provides database: 100%   18 MiB   9.6MB/s    00:02    
Extracting database....success
staf@freebsd-gpg:~ $

Install the required packages

Let’s check which packages provide the tools to set up the smartcard reader on FreeBSD, and install the required packages.

staf@freebsd-gpg:~ $ pkg provides "pkcs15-tool"
Name    : opensc-0.25.1
Comment : Libraries and utilities to access smart cards
Repo    : FreeBSD
Filename: usr/local/share/man/man1/pkcs15-tool.1.gz
          usr/local/etc/bash_completion.d/pkcs15-tool
          usr/local/bin/pkcs15-tool
staf@freebsd-gpg:~ $ 
staf@freebsd-gpg:~ $ pkg provides "bin/pcsc"
Name    : pcsc-lite-2.2.2,2
Comment : Middleware library to access a smart card using SCard API (PC/SC)
Repo    : FreeBSD
Filename: usr/local/sbin/pcscd
          usr/local/bin/pcsc-spy
staf@freebsd-gpg:~ $ 
[staf@monty ~]$ sudo pkg install opensc pcsc-lite
Password:
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Checking integrity... done (0 conflicting)
The most recent versions of packages are already installed
[staf@monty ~]$ 
staf@freebsd-gpg:~ $ sudo pkg install -y pcsc-tools
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Checking integrity... done (0 conflicting)
The most recent versions of packages are already installed
staf@freebsd-gpg:~ $ 
staf@freebsd-gpg:~ $ sudo pkg install -y ccid
Password:
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Checking integrity... done (0 conflicting)
The most recent versions of packages are already installed
staf@freebsd-gpg:~ $ 

USB

To use the smartcard reader we will need access to the USB devices as the user we use for our desktop environment. (No, this shouldn’t be the root user :-) )

permissions

verify

Execute the usbconfig command to verify that you can access the USB devices.

[staf@snuffel ~]$ usbconfig
No device match or lack of permissions.
[staf@snuffel ~]$ 

If you don’t have access, verify the permissions of the USB devices.

[staf@snuffel ~]$ ls -l /dev/usbctl
crw-r--r--  1 root operator 0x5b Sep  2 19:17 /dev/usbctl
[staf@snuffel ~]$  ls -l /dev/usb/
total 0
crw-------  1 root operator 0x34 Sep  2 19:17 0.1.0
crw-------  1 root operator 0x4f Sep  2 19:17 0.1.1
crw-------  1 root operator 0x36 Sep  2 19:17 1.1.0
crw-------  1 root operator 0x53 Sep  2 19:17 1.1.1
crw-------  1 root operator 0x7e Sep  2 19:17 1.2.0
crw-------  1 root operator 0x82 Sep  2 19:17 1.2.1
crw-------  1 root operator 0x83 Sep  2 19:17 1.2.2
crw-------  1 root operator 0x76 Sep  2 19:17 1.3.0
crw-------  1 root operator 0x8a Sep  2 19:17 1.3.1
crw-------  1 root operator 0x8b Sep  2 19:17 1.3.2
crw-------  1 root operator 0x8c Sep  2 19:17 1.3.3
crw-------  1 root operator 0x8d Sep  2 19:17 1.3.4
crw-------  1 root operator 0x38 Sep  2 19:17 2.1.0
crw-------  1 root operator 0x56 Sep  2 19:17 2.1.1
crw-------  1 root operator 0x3a Sep  2 19:17 3.1.0
crw-------  1 root operator 0x51 Sep  2 19:17 3.1.1
crw-------  1 root operator 0x3c Sep  2 19:17 4.1.0
crw-------  1 root operator 0x55 Sep  2 19:17 4.1.1
crw-------  1 root operator 0x3e Sep  2 19:17 5.1.0
crw-------  1 root operator 0x54 Sep  2 19:17 5.1.1
crw-------  1 root operator 0x80 Sep  2 19:17 5.2.0
crw-------  1 root operator 0x85 Sep  2 19:17 5.2.1
crw-------  1 root operator 0x86 Sep  2 19:17 5.2.2
crw-------  1 root operator 0x87 Sep  2 19:17 5.2.3
crw-------  1 root operator 0x40 Sep  2 19:17 6.1.0
crw-------  1 root operator 0x52 Sep  2 19:17 6.1.1
crw-------  1 root operator 0x42 Sep  2 19:17 7.1.0
crw-------  1 root operator 0x50 Sep  2 19:17 7.1.1

devfs

If the /dev/usb* device nodes are only accessible by the root user, you probably want to create a devfs ruleset that grants permissions to the operator group or another group.

See https://man.freebsd.org/cgi/man.cgi?devfs.rules for more details.

/etc/rc.conf

Update the /etc/rc.conf to apply custom devfs permissions.

[staf@snuffel /etc]$ sudo vi rc.conf
devfs_system_ruleset="localrules"

/etc/devfs.rules

Create or update /etc/devfs.rules with a ruleset that grants read/write access to the operator group.

[staf@snuffel /etc]$ sudo vi devfs.rules
[localrules=10]
add path 'usbctl*' mode 0660 group operator
add path 'usb/*' mode 0660 group operator

Restart the devfs service to apply the custom devfs ruleset.

[staf@snuffel /etc]$ sudo -i
root@snuffel:~ #
root@snuffel:~ # service devfs restart

The operator group should have read/write permissions now.

root@snuffel:~ # ls -l /dev/usb/
total 0
crw-rw----  1 root operator 0x34 Sep  2 19:17 0.1.0
crw-rw----  1 root operator 0x4f Sep  2 19:17 0.1.1
crw-rw----  1 root operator 0x36 Sep  2 19:17 1.1.0
crw-rw----  1 root operator 0x53 Sep  2 19:17 1.1.1
crw-rw----  1 root operator 0x7e Sep  2 19:17 1.2.0
crw-rw----  1 root operator 0x82 Sep  2 19:17 1.2.1
crw-rw----  1 root operator 0x83 Sep  2 19:17 1.2.2
crw-rw----  1 root operator 0x76 Sep  2 19:17 1.3.0
crw-rw----  1 root operator 0x8a Sep  2 19:17 1.3.1
crw-rw----  1 root operator 0x8b Sep  2 19:17 1.3.2
crw-rw----  1 root operator 0x8c Sep  2 19:17 1.3.3
crw-rw----  1 root operator 0x8d Sep  2 19:17 1.3.4
crw-rw----  1 root operator 0x38 Sep  2 19:17 2.1.0
crw-rw----  1 root operator 0x56 Sep  2 19:17 2.1.1
crw-rw----  1 root operator 0x3a Sep  2 19:17 3.1.0
crw-rw----  1 root operator 0x51 Sep  2 19:17 3.1.1
crw-rw----  1 root operator 0x3c Sep  2 19:17 4.1.0
crw-rw----  1 root operator 0x55 Sep  2 19:17 4.1.1
crw-rw----  1 root operator 0x3e Sep  2 19:17 5.1.0
crw-rw----  1 root operator 0x54 Sep  2 19:17 5.1.1
crw-rw----  1 root operator 0x80 Sep  2 19:17 5.2.0
crw-rw----  1 root operator 0x85 Sep  2 19:17 5.2.1
crw-rw----  1 root operator 0x86 Sep  2 19:17 5.2.2
crw-rw----  1 root operator 0x87 Sep  2 19:17 5.2.3
crw-rw----  1 root operator 0x40 Sep  2 19:17 6.1.0
crw-rw----  1 root operator 0x52 Sep  2 19:17 6.1.1
crw-rw----  1 root operator 0x42 Sep  2 19:17 7.1.0
crw-rw----  1 root operator 0x50 Sep  2 19:17 7.1.1
root@snuffel:~ # 

Make sure that you’re part of the operator group.

staf@freebsd-gpg:~ $ ls -l /dev/usbctl 
crw-rw----  1 root operator 0x5a Jul 13 17:32 /dev/usbctl
staf@freebsd-gpg:~ $ ls -l /dev/usb/
total 0
crw-rw----  1 root operator 0x31 Jul 13 17:32 0.1.0
crw-rw----  1 root operator 0x53 Jul 13 17:32 0.1.1
crw-rw----  1 root operator 0x33 Jul 13 17:32 1.1.0
crw-rw----  1 root operator 0x51 Jul 13 17:32 1.1.1
crw-rw----  1 root operator 0x35 Jul 13 17:32 2.1.0
crw-rw----  1 root operator 0x52 Jul 13 17:32 2.1.1
crw-rw----  1 root operator 0x37 Jul 13 17:32 3.1.0
crw-rw----  1 root operator 0x54 Jul 13 17:32 3.1.1
crw-rw----  1 root operator 0x73 Jul 13 17:32 3.2.0
crw-rw----  1 root operator 0x75 Jul 13 17:32 3.2.1
crw-rw----  1 root operator 0x76 Jul 13 17:32 3.3.0
crw-rw----  1 root operator 0x78 Jul 13 17:32 3.3.1
staf@freebsd-gpg:~ $ 

You’ll need to be part of the operator group to access the USB devices.

Execute the vigr command and add the user to the operator group.

staf@freebsd-gpg:~ $ sudo vigr
operator:*:5:root,staf
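Alternatively, FreeBSD's pw(8) can append a user to a group non-interactively. A minimal sketch, assuming your username is staf:

[staf@monty ~]$ sudo pw groupmod operator -m staf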

Relogin and check that you are in the operator group.

staf@freebsd-gpg:~ $ id
uid=1001(staf) gid=1001(staf) groups=1001(staf),0(wheel),5(operator)
staf@freebsd-gpg:~ $ 

The usbconfig command should work now.

staf@freebsd-gpg:~ $ usbconfig
ugen1.1: <Intel UHCI root HUB> at usbus1, cfg=0 md=HOST spd=FULL (12Mbps) pwr=SAVE (0mA)
ugen2.1: <Intel UHCI root HUB> at usbus2, cfg=0 md=HOST spd=FULL (12Mbps) pwr=SAVE (0mA)
ugen0.1: <Intel UHCI root HUB> at usbus0, cfg=0 md=HOST spd=FULL (12Mbps) pwr=SAVE (0mA)
ugen3.1: <Intel EHCI root HUB> at usbus3, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen3.2: <QEMU Tablet Adomax Technology Co., Ltd> at usbus3, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (100mA)
ugen3.3: <QEMU Tablet Adomax Technology Co., Ltd> at usbus3, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (100mA)
staf@freebsd-gpg:~ $ 

SmartCard configuration

Verify the USB connection

The first step is to ensure your smartcard reader is detected at the USB level. Execute usbconfig and lsusb and make sure your smartcard reader is listed.

usbconfig

List the USB devices.

[staf@monty ~/git]$ usbconfig
ugen1.1: <Intel EHCI root HUB> at usbus1, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen0.1: <Intel XHCI root HUB> at usbus0, cfg=0 md=HOST spd=SUPER (5.0Gbps) pwr=SAVE (0mA)
ugen2.1: <Intel EHCI root HUB> at usbus2, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen2.2: <Integrated Rate Matching Hub Intel Corp.> at usbus2, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen1.2: <Integrated Rate Matching Hub Intel Corp.> at usbus1, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen0.2: <AU9540 Smartcard Reader Alcor Micro Corp.> at usbus0, cfg=0 md=HOST spd=FULL (12Mbps) pwr=ON (50mA)
ugen0.3: <VFS 5011 fingerprint sensor Validity Sensors, Inc.> at usbus0, cfg=0 md=HOST spd=FULL (12Mbps) pwr=ON (100mA)
ugen0.4: <Centrino Bluetooth Wireless Transceiver Intel Corp.> at usbus0, cfg=0 md=HOST spd=FULL (12Mbps) pwr=ON (0mA)
ugen0.5: <SunplusIT INC. Integrated Camera> at usbus0, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (500mA)
ugen0.6: <X-Rite Pantone Color Sensor X-Rite, Inc.> at usbus0, cfg=0 md=HOST spd=LOW (1.5Mbps) pwr=ON (100mA)
ugen0.7: <GemPC Key SmartCard Reader Gemalto (was Gemplus)> at usbus0, cfg=0 md=HOST spd=FULL (12Mbps) pwr=ON (50mA)
[staf@monty ~/git]$ 

lsusb

[staf@monty ~/git/stafnet/blog]$ lsusb
Bus /dev/usb Device /dev/ugen0.7: ID 08e6:3438 Gemalto (was Gemplus) GemPC Key SmartCard Reader
Bus /dev/usb Device /dev/ugen0.6: ID 0765:5010 X-Rite, Inc. X-Rite Pantone Color Sensor
Bus /dev/usb Device /dev/ugen0.5: ID 04f2:b39a Chicony Electronics Co., Ltd 
Bus /dev/usb Device /dev/ugen0.4: ID 8087:07da Intel Corp. Centrino Bluetooth Wireless Transceiver
Bus /dev/usb Device /dev/ugen0.3: ID 138a:0017 Validity Sensors, Inc. VFS 5011 fingerprint sensor
Bus /dev/usb Device /dev/ugen0.2: ID 058f:9540 Alcor Micro Corp. AU9540 Smartcard Reader
Bus /dev/usb Device /dev/ugen1.2: ID 8087:8008 Intel Corp. Integrated Rate Matching Hub
Bus /dev/usb Device /dev/ugen2.2: ID 8087:8000 Intel Corp. Integrated Rate Matching Hub
Bus /dev/usb Device /dev/ugen2.1: ID 0000:0000  
Bus /dev/usb Device /dev/ugen0.1: ID 0000:0000  
Bus /dev/usb Device /dev/ugen1.1: ID 0000:0000  
[staf@monty ~/git/stafnet/blog]$ 

Check the GnuPG smartcard status

Let’s check if we get access to our smart card with gpg.

This might work out of the box if GnuPG supports your smartcard reader natively.

[staf@monty ~]$ gpg --card-status
gpg: selecting card failed: Operation not supported by device
gpg: OpenPGP card not available: Operation not supported by device
[staf@monty ~]$ 

In my case, it doesn’t. I prefer the OpenSC interface anyway; it can be useful if you want to use your smartcard for other purposes.

opensc

Enable pcscd

FreeBSD has a handy tool, sysrc, to manage rc.conf.

Enable the pcscd service.

[staf@monty ~]$ sudo sysrc pcscd_enable=YES
Password:
pcscd_enable: NO -> YES
[staf@monty ~]$ 

Start the pcscd service.

[staf@monty ~]$ sudo /usr/local/etc/rc.d/pcscd start
Password:
Starting pcscd.
[staf@monty ~]$ 
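Equivalently, you can use the service(8) wrapper instead of calling the rc.d script directly:

[staf@monty ~]$ sudo service pcscd start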

Verify smartcard access

pcsc_scan

The pcsc-tools package provides a tool, pcsc_scan, to verify the smartcard readers.

Execute pcsc_scan to verify that your smartcard is detected.

[staf@monty ~]$ pcsc_scan 
PC/SC device scanner
V 1.7.1 (c) 2001-2022, Ludovic Rousseau <ludovic.rousseau@free.fr>
Using reader plug'n play mechanism
Scanning present readers...
0: Gemalto USB Shell Token V2 (284C3E93) 00 00
1: Alcor Micro AU9540 01 00
 
Thu Jul 25 18:42:34 2024
 Reader 0: Gemalto USB Shell Token V2 (<snip>) 00 00
  Event number: 0
  Card state: Card inserted, 
  ATR: <snip>

ATR: <snip>
+ TS = 3B --> Direct Convention
+ T0 = DA, Y(1): 1101, K: 10 (historical bytes)
  TA(1) = 18 --> Fi=372, Di=12, 31 cycles/ETU
    129032 bits/s at 4 MHz, fMax for Fi = 5 MHz => 161290 bits/s
  TC(1) = FF --> Extra guard time: 255 (special value)
  TD(1) = 81 --> Y(i+1) = 1000, Protocol T = 1 
-----
  TD(2) = B1 --> Y(i+1) = 1011, Protocol T = 1 
-----
  TA(3) = FE --> IFSC: 254
  TB(3) = 75 --> Block Waiting Integer: 7 - Character Waiting Integer: 5
  TD(3) = 1F --> Y(i+1) = 0001, Protocol T = 15 - Global interface bytes following 
-----
  TA(4) = 03 --> Clock stop: not supported - Class accepted by the card: (3G) A 5V B 3V 
+ Historical bytes: 00 31 C5 73 C0 01 40 00 90 00
  Category indicator byte: 00 (compact TLV data object)
    Tag: 3, len: 1 (card service data byte)
      Card service data byte: C5
        - Application selection: by full DF name
        - Application selection: by partial DF name
        - EF.DIR and EF.ATR access services: by GET DATA command
        - Card without MF
    Tag: 7, len: 3 (card capabilities)
      Selection methods: C0
        - DF selection by full DF name
        - DF selection by partial DF name
      Data coding byte: 01
        - Behaviour of write functions: one-time write
        - Value 'FF' for the first byte of BER-TLV tag fields: invalid
        - Data unit in quartets: 2
      Command chaining, length fields and logical channels: 40
        - Extended Lc and Le fields
        - Logical channel number assignment: No logical channel
        - Maximum number of logical channels: 1
    Mandatory status indicator (3 last bytes)
      LCS (life card cycle): 00 (No information given)
      SW: 9000 (Normal processing.)
+ TCK = 0C (correct checksum)

Possibly identified card (using /usr/local/share/pcsc/smartcard_list.txt):
<snip>
        OpenPGP Card V2

 Reader 1: Alcor Micro AU9540 01 00
  Event number: 0
  Card state

pkcs15

pkcs15 is the application-level interface for hardware tokens, while pkcs11 is the low-level interface.

You can use pkcs15-tool -D to verify that your smartcard is detected.

[staf@monty ~]$ pkcs15-tool -D
Using reader with a card: Gemalto USB Shell Token V2 (<snip>) 00 00
PKCS#15 Card [OpenPGP card]:
        Version        : 0
        Serial number  : <snip>
        Manufacturer ID: ZeitControl
        Language       : nl
        Flags          : PRN generation, EID compliant


PIN [User PIN]
        Object Flags   : [0x03], private, modifiable
        Auth ID        : 03
        ID             : 02
        Flags          : [0x13], case-sensitive, local, initialized
        Length         : min_len:6, max_len:32, stored_len:32
        Pad char       : 0x00
        Reference      : 2 (0x02)
        Type           : UTF-8
        Path           : 3f00
        Tries left     : 3

PIN [User PIN (sig)]
        Object Flags   : [0x03], private, modifiable
        Auth ID        : 03
        ID             : 01
        Flags          : [0x13], case-sensitive, local, initialized
        Length         : min_len:6, max_len:32, stored_len:32
        Pad char       : 0x00
        Reference      : 1 (0x01)
        Type           : UTF-8
        Path           : 3f00
        Tries left     : 0

PIN [Admin PIN]
        Object Flags   : [0x03], private, modifiable
        ID             : 03
        Flags          : [0x9B], case-sensitive, local, unblock-disabled, initialized, soPin
        Length         : min_len:8, max_len:32, stored_len:32
        Pad char       : 0x00
        Reference      : 3 (0x03)
        Type           : UTF-8
        Path           : 3f00
        Tries left     : 0

Private RSA Key [Signature key]
        Object Flags   : [0x03], private, modifiable
        Usage          : [0x20C], sign, signRecover, nonRepudiation
        Access Flags   : [0x1D], sensitive, alwaysSensitive, neverExtract, local
        Algo_refs      : 0
        ModLength      : 3072
        Key ref        : 0 (0x00)
        Native         : yes
        Auth ID        : 01
        ID             : 01
        MD:guid        : <snip>

Private RSA Key [Encryption key]
        Object Flags   : [0x03], private, modifiable
        Usage          : [0x22], decrypt, unwrap
        Access Flags   : [0x1D], sensitive, alwaysSensitive, neverExtract, local
        Algo_refs      : 0
        ModLength      : 3072
        Key ref        : 1 (0x01)
        Native         : yes
        Auth ID        : 02
        ID             : 02
        MD:guid        : <snip>

Private RSA Key [Authentication key]
        Object Flags   : [0x03], private, modifiable
        Usage          : [0x200], nonRepudiation
        Access Flags   : [0x1D], sensitive, alwaysSensitive, neverExtract, local
        Algo_refs      : 0
        ModLength      : 3072
        Key ref        : 2 (0x02)
        Native         : yes
        Auth ID        : 02
        ID             : 03
        MD:guid        : <snip>

Public RSA Key [Signature key]
        Object Flags   : [0x02], modifiable
        Usage          : [0xC0], verify, verifyRecover
        Access Flags   : [0x02], extract
        ModLength      : 3072
        Key ref        : 0 (0x00)
        Native         : no
        Path           : b601
        ID             : 01

Public RSA Key [Encryption key]
        Object Flags   : [0x02], modifiable
        Usage          : [0x11], encrypt, wrap
        Access Flags   : [0x02], extract
        ModLength      : 3072
        Key ref        : 0 (0x00)
        Native         : no
        Path           : b801
        ID             : 02

Public RSA Key [Authentication key]
        Object Flags   : [0x02], modifiable
        Usage          : [0x40], verify
        Access Flags   : [0x02], extract
        ModLength      : 3072
        Key ref        : 0 (0x00)
        Native         : no
        Path           : a401
        ID             : 03

[staf@monty ~]$ 
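If you also want to poke at the lower-level pkcs11 interface, opensc ships pkcs11-tool. A quick check (output omitted):

[staf@monty ~]$ pkcs11-tool --list-slots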

GnuPG configuration

First test

Stop (kill) the scdaemon, to ensure that a fresh scdaemon instance is used when we try to access the card.

[staf@monty ~]$ gpgconf --kill scdaemon
[staf@monty ~]$ 
[staf@monty ~]$ ps aux | grep -i scdaemon
staf  9236  0.0  0.0   12808   2496  3  S+   20:42   0:00.00 grep -i scdaemon
[staf@monty ~]$ 

Try to read the card status again.

[staf@monty ~]$ gpg --card-status
gpg: selecting card failed: Operation not supported by device
gpg: OpenPGP card not available: Operation not supported by device
[staf@monty ~]$ 

Reconfigure GnuPG

Go to the .gnupg directory in your $HOME directory.

[staf@monty ~]$ cd .gnupg/
[staf@monty ~/.gnupg]$ 

scdaemon

Reconfigure scdaemon to disable the internal CCID driver and to enable logging. Logging is always useful to verify why something isn’t working…

[staf@monty ~/.gnupg]$ vi scdaemon.conf
disable-ccid

verbose
debug-level expert
debug-all
log-file    /home/staf/logs/scdaemon.log

gpg-agent

Enable debug logging for the gpg-agent.

[staf@monty ~/.gnupg]$ vi gpg-agent.conf
debug-level expert
verbose
verbose
log-file /home/staf/logs/gpg-agent.log

Verify

Stop the scdaemon.

[staf@monty ~/.gnupg]$ gpgconf --kill scdaemon
[staf@monty ~/.gnupg]$ 

If everything goes well, gpg will detect the smartcard.

If not, you have some logs to start debugging with ;-)

[staf@monty ~/.gnupg]$ gpg --card-status
Reader ...........: Gemalto USB Shell Token V2 (<snip>) 00 00
Application ID ...: <snip>
Application type .: OpenPGP
Version ..........: 2.1
Manufacturer .....: ZeitControl
Serial number ....: 000046F1
Name of cardholder: <snip>
Language prefs ...: nl
Salutation .......: Mr.
URL of public key : <snip>
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: xxxxxxx xxxxxxx xxxxxxx
Max. PIN lengths .: 32 32 32
PIN retry counter : 3 0 3
Signature counter : 80
Signature key ....: <snip>
      created ....: <snip>
Encryption key....: <snip>
      created ....: <snip>
Authentication key: <snip>
      created ....: <snip>
General key info..: [none]
[staf@monty ~/.gnupg]$ 

Test

shadow private keys

After you executed gpg --card-status, GnuPG created “shadow private keys”. These keys just contain references to the hardware token on which the private keys are stored.

[staf@monty ~/.gnupg]$ ls -l private-keys-v1.d/
total 14
-rw-------  1 staf staf 976 Mar 24 11:35 <snip>.key
-rw-------  1 staf staf 976 Mar 24 11:35 <snip>.key
-rw-------  1 staf staf 976 Mar 24 11:35 <snip>.key
[staf@monty ~/.gnupg]$ 

You can list the (shadow) private keys with the gpg --list-secret-keys command.
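A schematic example of the output (key details elided, as elsewhere in this post); the > after sec and ssb indicates that the private key material lives on the card:

[staf@monty ~/.gnupg]$ gpg --list-secret-keys
/home/staf/.gnupg/pubring.kbx
-----------------------------
sec>  XXXXXXX XXXX-XX-XX [SC]
      <snip>
      Card serial no. = XXXX XXXXXXXX
uid           [ unknown] <snip>
ssb>  XXXXXXX XXXX-XX-XX [A]
ssb>  XXXXXXX XXXX-XX-XX [E]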

Pinentry

To be able to type in your PIN code, you’ll need a pinentry application unless your smartcard reader has a pinpad.

You can use pkg provides to verify which pinentry applications are available.

For the integration with Thunderbird, you probably want a graphical version. But that is a topic for a future blog post ;-)

We’ll stick with the (n)curses version for now.

Install a pinentry program.

[staf@monty ~/.gnupg]$ pkg provides pinentry | grep -i curses
Name    : pinentry-curses-1.3.1
Comment : Curses version of the GnuPG password dialog
Filename: usr/local/bin/pinentry-curses
[staf@monty ~/.gnupg]$ 
[staf@monty ~/.gnupg]$ sudo pkg install pinentry-curses
Password:
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Checking integrity... done (0 conflicting)
The most recent versions of packages are already installed
[staf@monty ~/.gnupg]$ 

On FreeBSD, a soft link for the pinentry binary is created and managed by the pinentry package.

You can verify this with the pkg which command.

[staf@monty ~]$ pkg which /usr/local/bin/pinentry
/usr/local/bin/pinentry was installed by package pinentry-1.3.1
[staf@monty ~]$ 

The curses version is the default.

If you want to use another pinentry flavour, set it in the gpg-agent configuration ($HOME/.gnupg/gpg-agent.conf):

pinentry-program <PATH>
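For example, to select the curses version explicitly (a sketch; adjust the path to the pinentry flavour you installed), and restart the agent afterwards so the change takes effect:

[staf@monty ~/.gnupg]$ vi gpg-agent.conf
pinentry-program /usr/local/bin/pinentry-curses
[staf@monty ~/.gnupg]$ gpgconf --kill gpg-agent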

Import your public key

Import your public key.

[staf@monty /tmp]$ gpg --import <snip>.asc
gpg: key <snip>: public key "<snip>" imported
gpg: Total number processed: 1
gpg:               imported: 1
[staf@monty /tmp]$ 

List the public keys.

[staf@monty /tmp]$ gpg --list-keys
/home/staf/.gnupg/pubring.kbx
-----------------------------
pub   XXXXXXX XXXX-XX-XX [SC]
      <snip>
uid           [ unknown] <snip>
sub   XXXXXXX XXXX-XX-XX [A]
sub   XXXXXXX XXXX-XX-XX [E]

[staf@monty /tmp]$ 

As a test, we try to sign something with the private key on our GnuPG smartcard.

Create a test file.

[staf@monty /tmp]$ echo "foobar" > foobar
[staf@monty /tmp]$ 
[staf@monty /tmp]$ gpg --sign foobar

If your smartcard isn’t inserted, GnuPG will ask you to insert it, identifying the card by the serial number stored in the shadow private key.


                ┌────────────────────────────────────────────┐
                │ Please insert the card with serial number: │
                │                                            │
                │ XXXX XXXXXXXX                              │
                │                                            │
                │                                            │
                │      <OK>                      <Cancel>    │
                └────────────────────────────────────────────┘


Type in your PIN code.



               ┌──────────────────────────────────────────────┐
               │ Please unlock the card                       │
               │                                              │
               │ Number: XXXX XXXXXXXX                        │
               │ Holder: XXXX XXXXXXXXXX                      │
               │ Counter: XX                                  │
               │                                              │
               │ PIN ________________________________________ │
               │                                              │
               │      <OK>                        <Cancel>    │
               └──────────────────────────────────────────────┘


[staf@monty /tmp]$ ls -l foobar*
-rw-r-----  1 staf wheel   7 Jul 27 11:11 foobar
-rw-r-----  1 staf wheel 481 Jul 27 11:17 foobar.gpg
[staf@monty /tmp]$ 
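To check that the signature is valid, you can verify the generated file (output omitted; gpg reports the signing key):

[staf@monty /tmp]$ gpg --verify foobar.gpg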

In the next blog post in this series, we’ll configure Thunderbird to use the smartcard for OpenPGP email encryption.

Have fun!


July 26, 2024

We saw in the previous post how we can deal with data stored in the new VECTOR datatype that was released with MySQL 9.0. We implemented the 4 basic mathematical operations between two vectors. To do so we created JavaScript functions. MySQL JavaScript functions are available in MySQL HeatWave and MySQL Enterprise Edition (you can […]

July 25, 2024

MySQL 9.0.0 has brought the VECTOR datatype to your favorite Open Source Database. There are already some functions available to deal with those vectors: This post will show how to deal with vectors and create our own functions to create operations between vectors. We will use the MLE Component capability to create JavaScript functions. JS […]

July 23, 2024

Keeping up appearances in tech

The word "rant" is used far too often, and in various ways.
It's meant to imply aimless, angry venting.

But often it means:

Naming problems without proposing solutions,
this makes me feel confused.

Naming problems and assigning blame,
this makes me feel bad.

I saw a remarkable pair of tweets the other day.

In the wake of the outage, the CEO of CrowdStrike sent out a public announcement. It's purely factual. The scope of the problem is identified, the known facts are stated, and the logistics of disaster relief are set in motion.


  CrowdStrike is actively working with customers impacted by a defect found in a single content update for Windows hosts. Mac and Linux hosts are not impacted.

  This is not a security incident or cyberattack. The issue has been identified, isolated and a fix has been deployed.

  We refer customers to the support portal for the latest updates and will continue to provide complete and continuous updates on our website. We further recommend organizations ensure they’re communicating with CrowdStrike representatives through official channels.

  Our team is fully mobilized to ensure the security and stability of CrowdStrike customers.

Millions of computers were affected. This is the equivalent of a frazzled official giving a brief statement in the aftermath of an earthquake, directing people to the Red Cross.

Everything is basically on fire for everyone involved. Systems are failing everywhere, some critical, and quite likely people are panicking. The important thing is to give the technicians the information and tools to fix it, and for everyone else to do what they can, and stay out of the way.

In response, a communication professional posted an 'improved' version:


  I’m the CEO of CrowdStrike. I’m devastated to see the scale of today’s outage and will be personally working on it together with our team until it’s fully fixed for every single user.

  But I wanted to take a moment to come here and tell you that I am sorry. People around the world rely on us, and incidents like this can’t happen. This came from an error that ultimately is my responsibility. 

  Here’s what we know: [brief synopsis of what went wrong and how it wasn’t a cyberattack etc.]

  Our entire team will be working all day, all night, all weekend, and however long it takes to resolve this and make sure it doesn’t happen again.

  We’ll be sharing updates as often as possible, which you can find here [link]. If you need to contact us, the quickest way is to go here [link].

  We’re responding as quickly as possible. Thank you to everyone who has alerted us to the outage, and again, please accept my deepest apologies. More to come soon.

Credit where credit is due, she nailed the style. 10/10. It seems unobjectionable, at first. Let's go through, shall we?

Hyacinth fixing her husband's tie

Opposite Day

First is that the CEO is "devastated." A feeling. And they are personally going to ensure it's fixed for every single user.

This focuses on the individual who is inconvenienced. Not the disaster. They take a moment out of their time to say they are so, so sorry a mistake was made. They have let you and everyone else down, and that shouldn't happen. That's their responsibility.

By this point, the original statement had already told everyone the relevant facts. Here the technical details are left to the imagination. The writer's self-assigned job is to wrap the message in a more palatable envelope.

Everyone will be working "all day, all night, all weekends," indeed, "however long it takes," to avoid it happening again.

I imagine this is meant to be inspiring and reassuring. But if I was a CrowdStrike technician or engineer, I would find it demoralizing: the boss, who will actually be personally fixing diddly-squat, is saying that the long hours of others are a sacrifice they're willing to make.

Plus, CrowdStrike's customers are in the same boat: their technicians get volunteered too. They can't magically unbrick PCs from a distance, so "until it's fully fixed for every single user" would be a promise outsiders will have to keep. Lovely.

There's even a punch line: an invitation to go contact them, the quickest way linked directly. It thanks people for reaching out.

If everything is on fire, that includes the phone lines, the inboxes, and so on. The most stupid thing you could do in such a situation is to tell more people to contact you, right away. Don't encourage it! That's why the original statement refers to pre-existing lines of communication, internal representatives, and so on. The Support department would hate the CEO too.

Hyacinth and Richard peering over a fence

Root Cause

If you're wondering about the pictures, it's Hyacinth Bucket, from 90s UK sitcom Keeping Up Appearances, who would always insist "it's pronounced Bouquet."

Hyacinth's ambitions always landed her out of her depth, surrounded by upper-class people she's trying to impress, in the midst of an embarrassing disaster. Her increasingly desperate attempts to save face, which invariably made things worse, are the main source of comedy.

Try reading that second statement in her voice.

I’m devastated to see the scale of today’s outage and will be personally working on it together with our team until it’s fully fixed for every single user.

But I wanted to take a moment to come here and tell you that I am sorry. People around the world rely on us, and incidents like this can’t happen. This came from an error that ultimately is my responsibility.

I can hear it perfectly, telegraphing Britishness to restore dignity for all. If she were in tech she would give that statement.

It's about reputation management first, projecting the image of competence and accountability. But she's giving the speech in front of a burning building, not realizing the entire exercise is futile. Worse, she thinks she's nailing it.

If CrowdStrike had sent this out, some would've applauded and called it an admirable example of wise and empathetic communication. Real leadership qualities.

But it's the exact opposite. It focuses on the wrong things, it alienates the staff, and it definitely amplifies the chaos. It's Monty Python-esque.

Apologizing is pointless here, the damage is already done. What matters is how severe it is and whether it could've been avoided. This requires a detailed root-cause analysis and remedy. Otherwise you only have their word. Why would that reassure you?

The original restated the company's mission: security and stability. Those are the stakes to regain a modicum of confidence.

You may think that I'm reading too much into this. But I know the exact vibe on an engineering floor when the shit hits the fan. I also know how executives and staff without that experience end up missing the point entirely. I once worked for a Hyacinth Bucket. It's not an anecdote, it's allegory.

They simply don't get the engineering mindset, and confuse authority with ownership. They step on everyone's toes without realizing, because they're constantly wearing clown shoes. Nobody tells them.

Hyacinth is not happy

Softness as a Service

The change in style between #1 and #2 is really a microcosm of the conflict that has been broiling in tech for ~15 years now. I don't mean the politics, but the shifting of norms, of language and behavior.

It's framed as a matter of interpersonal style, which needs to be welcoming and inclusive. In practice this means they assert or demand that style #2 be the norm, even when #1 is advisable or required.

Factuality is seen as deficient, improper and primitive. It's a form of doublethink: everyone's preference is equally valid, except yours, specifically.

But the difference is not a preference. It's about what actually works and what doesn't. Style #1 is aimed at the people who have to fix it. Style #2 is aimed at the people who can't do anything until it's fixed. Who should they be reaching out to?

In #2, communication becomes an end in itself, not a means of conveying information. It's about being seen saying the words, not living them. Poking at the statement makes it fall apart.

When this becomes the norm in a technical field, it has deep consequences:

  • Critique must be gift-wrapped in flattery, and is not allowed to actually land.
  • Mistakes are not corrected, and sentiment takes precedence over effectiveness.
  • Leaders speak lofty words far from the trenches to save face.
  • The people they thank the loudest are the ones they pay the least.

Inevitably, quiet competence is replaced with gaudy chaos. Everyone says they're sorry and responsible, but nobody actually is. Nobody wants to resign either. Sound familiar?

Onslow

Cope and Soothe

The elephant in the room is that #1 is very masculine, while #2 is more feminine. When you hear "women are more empathetic communicators", this is what it means. They tend to focus on the individual and their relation to them, not the team as a whole and its mission.

Complaints that tech is too "male dominated" and "notoriously hostile to women" are often just this. Tech was always full of types who won't preface their proposals and criticisms with fluff, and instead lean into autism. When you're used to being pandered to, neutrality feels like vulgarity.

The notable exceptions are rare and usually have an exasperating lead-up. Tech is actually one of the most accepting and egalitarian fields around. The maintainers do a mostly thankless job.

"Oh so you're saying there's no misogyny in tech?" No I'm just saying misogyny doesn't mean "something 1 woman hates".

The tone is really a distraction. If someone drops an analysis, saying shit or get off the pot, even very kindly and patiently, some will still run away screaming. Like an octopus spraying ink, they'll deploy a nasty form of #2 as a distraction. That's the real issue.

Many techies, in their naiveté, believed the cultural reformers when they showed up to gentrify them. They obediently branded heretics like James Damore, and burned witches like Richard Stallman. Thanks to racism, words like 'master' and 'slave' are now off-limits as technical terms. Ironic, because millions of computers just crashed because they worked exactly like that.

Django commit replacing master/slave
Guys, I'm stuck in the WeWork lift.

The cope is to pretend that nothing has truly changed yet, and more reform is needed. In fact, everything has already changed. Tech forums used to be crucibles for distilling insight, but now they are guarded jealously by people more likely to flag and ban than strongly disagree.

I once got flagged on HN because I pointed out Twitter's mass lay-offs were a response to overhiring, and that people were rooting for the site to fail after Musk bought it. It suggested what we all know now: that the company would not implode after trimming the dead weight, and that they'd never forgive him for it.

Diversity is now associated with incompetence, because incompetent people have spent over a decade reaching for it as an excuse. In their attempts to fight stereotypes, they ensured the stereotypes came true.

Hyacinth is not happy

Bait and Snitch

The outcry tends to be: "We do all the same things you do, but still we get treated differently!" But they start from the conclusion and work their way backwards. This is what the rewritten statement does: it tries to fix the relationship before fixing the problem.

The average woman and man actually do things very differently in the first place. Individual men and women choose. And others respond accordingly. The people who build and maintain the world's infrastructure prefer the masculine style for a reason: it keeps civilization running, and helps restore it when it breaks. A disaster announcement does not need to be relatable, it needs to be effective.

Furthermore, if the job of shoveling shit falls on you, no amount of flattery or oversight will make that more pleasant. It really won't. Such commentary is purely for the benefit of the ones watching and trying to look busy. It makes it worse, stop pretending otherwise.

There's little loyalty in tech companies nowadays, and it's no surprise. Project and product managers are acting more like demanding clients to their own team, than leaders. "As a user, I want..." Yes, but what are you going to do about it? Do you even know where to start?

What's perceived as a lack of sensitivity is actually the presence of sensibility. It's what connects the words to the reality on the ground. It does not need to be improved or corrected, it just needs to be respected. And yes it's a matter of gender, because bashing men and masculine norms has become a jolly recreational sport in the overculture. Mature women know it.

It seems impossible to admit. The entire edifice of gender equality depends on there not being a single thing men are actually better at, even just on average. Where men and women's instincts differ, women must be right.

It's childish, and not harmless either. It dares you to call it out, so they can then play the wounded victim, and paint you as the unreasonable asshole who is mean. This is supposed to invalidate the argument.

* * *

This post is of course a giant cannon pointing in the opposite direction, sitting on top of a wall. Its message will likely fly over the reformers' heads.

If they read it at all, they'll selectively quote or paraphrase, call me a tech-bro, and spool off some sentences they overheard, like an LLM. It's why they adore AI, and want it to be exactly as sycophantic as them. They don't care that it makes stuff up wholesale, because it makes them look and feel competent. It will never tell them to just fuck off already.

Think less about what is said, more about what is being done. Otherwise the next CrowdStrike will probably be worse.

July 16, 2024

A pilot races a spacecraft along a curved track in space.

The Drupal Starshot initiative has been making significant progress behind the scenes, and I'm excited to share some updates with the community.

Leadership team formation and product definition

Over the past few months, we've been working diligently on Drupal Starshot. One of our first steps was to appoint a leadership team to guide the project. With the leadership team in place as well as the new Starshot Advisory Council, we shifted our focus to defining the product. We've made substantial progress on this front and will be sharing more details about the product strategy in the coming weeks.

Introducing Drupal Starshot tracks

We have already started breaking the initiative down into manageable components, and are introducing the concept of "tracks". Tracks are smaller, focused parts of the Drupal Starshot project that allow for targeted development and contributions. We've already published the first set of tracks on the Drupal Starshot issue queue on Drupal.org.

Example tracks include:

  1. Creating Drupal Recipes for features like contact forms, advanced search, events, SEO and more.
  2. Enhancing the Drupal installer to enable Recipes during installation.
  3. Updating Drupal.org for Starshot, including product marketing and a trial experience.

While many tracks are technical and need help from developers, most of the tracks need contribution from designers, UX experts, marketers, testers and site builders.

Recruiting more track leads

Several tracks already have track leads and have made significant progress.

However, we need many additional track leads to drive our remaining tracks to completion.

We're now accepting applications for track lead positions. Interested individuals and organizations can apply by completing our application form. The application window closes on July 31st, two weeks from today.

Key responsibilities of a track lead

Track leads can be individuals, teams, or organizations, including Drupal Certified Partners. While technical expertise is beneficial, the role primarily focuses on strategic coordination and project management. Key responsibilities include:

  • Defining and validating requirements to ensure the track meets the expectations of our target audience.
  • Developing and maintaining a prioritized task list, including creating milestones and timelines.
  • Overseeing and driving the track's implementation.
  • Collaborating with key stakeholders, including the Drupal Starshot leadership team, module maintainers, the marketing team, etc.
  • Communicating progress to the community (e.g. blogging).

Track lead selection and announcement

After the application deadline, the Drupal Starshot Leadership Team will review the applications and appoint track leads. We expect to announce the selected track leads in the first week of August.

While the application period is open, we will be available to answer any questions you may have. Feel free to reach out to us through the Drupal.org issue queue, or join us in an upcoming Zoom meeting (details to be announced / figured out).

Looking ahead to DrupalCon Barcelona

Our goal is to make significant progress on these tracks by DrupalCon Barcelona, where we plan to showcase the advancements we've made. We're excited about the momentum building around Drupal Starshot and can't wait to see the contributions from the community.

If you're passionate about Drupal and want to play a key role in shaping its future, consider applying for a track lead position.

Stay tuned for more updates on Drupal Starshot, and thank you for your continued support of the Drupal community.

July 11, 2024

A group of people in a futuristic setting having a meeting in an open forum with a space background.

I'm excited to announce the formation of the Drupal Starshot Advisory Council. When I announced Starshot's Leadership Team, I explained that we are innovating on the leadership model by adding a team of advisors. This council will provide strategic input and feedback to help ensure Drupal Starshot meets the needs of key stakeholders and end-users.

The Drupal Starshot initiative represents an ambitious effort to expand Drupal's reach and impact. To guide this effort, we've established a diverse Advisory Council that includes members of the Drupal Starshot project team, Drupal Association staff and Board of Directors, representatives from Drupal Certified Partners, Drupal Core Committers, and last but not least, individuals representing the target end-users for Drupal Starshot. This ensures a wide range of perspectives and expertise to inform the project's direction and decision-making.

The initial members are drawn from each of these groups.

The council has been meeting monthly to receive updates from myself and the Drupal Starshot Leadership Team. Members will provide feedback on project initiatives, offer recommendations, and share insights based on their diverse experiences and areas of expertise.

In addition to guiding the strategic direction of Drupal Starshot, the Advisory Council will play a vital role in communication and alignment between the Drupal Starshot team, the Drupal Association, Drupal Core, and the broader Drupal community.

I'm excited to be working with this accomplished group to make the Drupal Starshot vision a reality. Together we can expand the reach and impact of Drupal, and continue advancing our mission to make the web a better place.

July 10, 2024

MySQL HeatWave 9.0 was released under the banner of artificial intelligence. It includes a VECTOR datatype and can easily process and analyze vast amounts of proprietary unstructured documents in object storage, using HeatWave GenAI and Lakehouse. Oracle Cloud Infrastructure also provides a wonderful GenAI Service, and in this post, we will see how to use […]

July 04, 2024

As you can read in my previous post related to MySQL 9 and authentication, the old mysql_native_password plugin has been removed. In that post, I showed an example using PHP 7.2, the default version in OL8. If you are using PHP and you want to use MySQL 9, you must be using a more recent […]

With the latest MySQL Innovation Release, we decided that it was time to remove the remaining weak authentication plugin: mysql_native_password. We previously deprecated it and made it not default loaded in MySQL 8.4 LTS and, now, in 9.0 it’s gone! Reasons: Oracle places significant attention on the security of all its products, and MySQL is […]

July 03, 2024

On July 1st, 3 new releases of MySQL came out. Indeed, we released the next 8.0 (8.0.38), the first update of the 8.4 LTS (8.4.1), and the very first 9.0 as Innovation Release. We now support 3 versions of the most popular Open Source database. With these releases, we also want to thank all the […]

June 15, 2024

 

book picture
A book written by a doctor who has ADD himself, and it shows.

I was annoyed in the first half by his incessant use of anecdotes to prove that ADD is not genetic. It felt like he had to convince himself, and it read as an excuse for his actions as a father.

He uses clear and obvious examples of how not to raise a child (often with himself as the child or the father) to play on the reader's emotions. Most of these examples are not even related to ADD.

But in the end, and definitely in the second half, it is a good book. Most people will recognize several situations, and it often does make one think about life choices and the interpretation of actions and emotions.

So for those getting past the disorganization (yes, there are parts and chapters in this book, but most of it feels randomly disorganized), the second half of the book is a worthy, thought-provoking read.


June 11, 2024

When I mindlessly unplugged a USB audio card from my laptop while viewing its mixer settings in AlsaMixer, I was surprised by the following poetic error message: [1]

In the midst of the word he was trying to say, In the midst of his laughter and glee, He had softly and suddenly vanished away— For the Snark was a Boojum, you see.

These lines are actually the final verse of Lewis Carroll's whimsical poem The Hunting of the Snark:

In the midst of the word he was trying to say,
  In the midst of his laughter and glee,
He had softly and suddenly vanished away—
  For the Snark was a Boojum, you see.

-- Lewis Carroll, "The Hunting of the Snark"

Initially, I didn't even notice the bright red error message: "The sound device was unplugged. Press F6 to select another sound card." The poetic lines were enough to convey what had happened, and they put a smile on my face.

Please, developers, keep putting a smile on my face when something unexpected happens.

[1]

Fun fact: the copyright header of AlsaMixer's source code includes: Copyright (c) 1874 Lewis Carroll.

June 02, 2024

Some years ago, I wrote an article on connecting to MySQL 8.0 using the default authentication plugin (caching_sha2_password) with Perl. I also provided Perl-DBD-MySQL packages for EL7. Somebody recently left a comment as he was looking for a perl-DBD-MySQL driver compatible with the caching_sha2_password for Rocky Linux 8 (EL8 or OL8 compatible). Therefore, I build […]

May 30, 2024


https://www.wagemakers.be

I’ve finally found the time to give my homepage a complete makeover. Yes, HTTPS is enabled now ;-)

The content has been migrated from WebGUI to Hugo.

It still contains the same old content, but I’ll update it in the coming weeks or when some of the projects are updated.




May 28, 2024

In MySQL HeatWave, the Database as a Service (DBaaS) offered by Oracle, Machine Learning is incorporated to provide database administrators (DBAs) and developers with an optimal experience. Those tools are the Auto ML advisors like the autopilot indexing for example. MySQL HeatWave is also an invaluable asset for Data Analysts, offering straightforward tools that streamline […]

May 22, 2024

MySQL HeatWave is the MySQL DBaaS provided by Oracle in OCI and some other clouds. Compared to the vanilla MySQL, one of the key features of the service is that it allows you to run analytics queries (aka OLAP) very quickly using the HeatWave cluster. You can also run such queries on files using LakeHouse. […]

May 20, 2024

Mask27.dev

I started my company last year and recently had the time to create my company’s website.

https://mask27.dev

Feel free to check it out.




May 09, 2024

Dear fellow Belgians

Venera and I have made a baby. His name is Troi, and apparently he looks a lot like me.

Here is a little photo of him and his mama. Troi is, of course, very cute:

Last week my new book was published: Building Wireless Sensor Networks with OpenThread: Developing CoAP Applications for Thread Networks with Zephyr.

Thread is a protocol for building efficient, secure and scalable wireless mesh networks for the Internet of Things (IoT), based on IPv6. OpenThread is an open-source implementation of the Thread protocol, originally developed by Google. It offers a comprehensive Application Programming Interface (API) that is both operating system and platform agnostic. OpenThread is the industry’s reference implementation and the go-to platform for professional Thread application developers.

All code examples from this book are published on GitHub. The repository also lists errors that may have been found in this book since its publication, as well as information about changes that impact the examples.

This book uses OpenThread in conjunction with Zephyr, an open-source real-time operating system designed for use with resource-constrained devices. This allows you to develop Thread applications that work on various hardware platforms, without the need to delve into low-level details or to learn another API when switching hardware platforms.

With its practical approach, this book not only explains theoretical concepts, but also demonstrates Thread’s network features with the use of practical examples. It explains the code used for a variety of basic Thread applications based on Constrained Application Protocol (CoAP). This includes advanced topics such as service discovery and security. As you work through this book, you’ll build on both your knowledge and skills. By the time you finish the final chapter, you’ll have the confidence to effectively implement Thread-based wireless networks for your own IoT projects.

What is Thread?

Thread is a low-power, wireless, and IPv6-based networking protocol designed specifically for IoT devices. Development of the protocol is managed by the Thread Group, an alliance founded in 2015. This is a consortium of major industry players, including Google, Apple, Amazon, Qualcomm, Silicon Labs, and Nordic Semiconductor.

You can download the Thread specification for free, although registration is required. The version history is as follows:

Versions of the Thread specification:

Thread 1.0 (2015)

  • Never implemented in commercial products

Thread 1.1 (2017)

  • The ability to automatically move to a clear channel on detecting interference (channel agility)

  • The ability to reset a master key and drive new rotating keys in the network

Thread 1.2 (2019)

  • Enhancements to scalability and energy efficiency

  • Support for large-scale networking applications, including the ability to integrate multiple Thread networks into one singular Thread domain

Thread 1.3 (2023)

  • Enhancements to scalability, reliability, and robustness of Thread networks

  • Standardization of Thread Border Routers, Matter support, and firmware upgrades for Thread devices

All versions of the Thread specification maintain backward compatibility.

Thread’s architecture

Thread employs a mesh networking architecture that allows devices to communicate directly with each other, eliminating the need for a central hub or gateway. This architecture offers inherent redundancy, as devices can relay data using multiple paths, ensuring increased reliability in case of any single node failure.

The Thread protocol stack is built upon the widely used IPv6 protocol, which simplifies the integration of Thread networks into existing IP infrastructures. Thread is designed to be lightweight, streamlined and efficient, making it an excellent protocol for IoT devices running in resource-constrained environments.

Before discussing Thread, it’s essential to understand its place in the network protocol environment. Most modern networks are based on the Internet Protocol suite, which uses a four-layer architecture. The layers of the Internet Protocol suite are, from bottom to top:

Link layer

Defines the device’s connection with a local network. Protocols like Ethernet and Wi-Fi operate within this layer. In the more complex Open Systems Interconnection (OSI) model, this layer includes the physical and data link layers.

Network layer

Enables communication across network boundaries, known as routing. The Internet Protocol (IP) operates within this layer and defines IP addresses.

Transport layer

Enables communication between two devices, either on the same network or on different networks with routers in between. The transport layer also defines the concept of a port. User Datagram Protocol (UDP) and Transmission Control Protocol (TCP) operate within this layer.

Application layer

Facilitates communication between applications on the same device or different devices. Protocols like HyperText Transfer Protocol (HTTP), Domain Name System (DNS), Simple Mail Transfer Protocol (SMTP), and Message Queuing Telemetry Transport (MQTT) operate within this layer.

When data is transmitted across the network, the data of each layer is embedded in the layer below it, as shown in this diagram:

/images/internet-protocol-encapsulation.png

Network data is encapsulated in the four layers of the Internet Protocol suite (based on: Colin Burnett, CC BY-SA 3.0)

Thread operates in the network and transport layers. However, it’s important to know what’s going on in all layers.

Network layer: 6LoWPAN and IPv6

Thread’s network layer consists of IPv6 over Low-Power Wireless Access Networks (6LoWPAN). This is an IETF specification described in RFC 4944, "Transmission of IPv6 Packets over IEEE 802.15.4 Networks", with an update for the compression mechanism in RFC 6282, "Compression Format for IPv6 Datagrams over IEEE 802.15.4-Based Networks". 6LoWPAN’s purpose is to allow even the smallest devices with limited processing power and low energy requirements to be part of the Internet of Things.

6LoWPAN essentially enables you to run an IPv6 network over the IEEE 802.15.4 link layer. It acts as an ‘adaptation layer’ between IPv6 and IEEE 802.15.4 and is therefore sometimes considered part of the link layer. This adaptation poses some challenges. For example, IPv6 mandates that all links can handle datagram sizes of at least 1280 bytes, but IEEE 802.15.4, as explained in the previous subsection, limits a frame to 127 bytes. 6LoWPAN solves this by excluding information in the 6LoWPAN header that can already be derived from IEEE 802.15.4 frames and by using local, shortened addresses. If the payload is still too large, it’s divided into fragments.

The link-local IPv6 addresses of 6LoWPAN devices are derived from the IEEE 802.15.4 EUI-64 addresses and their shortened 16-bit addresses. The devices in a 6LoWPAN network can communicate directly with ‘normal’ IPv6 devices in a network if connected via an edge router. This means there is no need for translation via a gateway, in contrast to non-IP networks such as Zigbee or Z-Wave. Communication with non-6LoWPAN networks purely involves forwarding data packets at the network layer.

In this layer, Thread builds upon IEEE 802.15.4 to create an IPv6-based mesh network. In IEEE 802.15.4 only communication between devices that are in immediate radio range is possible, whereas routing allows devices that aren’t in immediate range to communicate.

If you want to delve into more details of Thread’s network layer, consult the Thread Group’s white paper Thread Usage of 6LoWPAN.

Transport layer: UDP

On top of 6LoWPAN and IPv6, Thread’s transport layer employs the User Datagram Protocol (UDP), the lesser-known alternative to the Transmission Control Protocol (TCP).

UDP has the following properties:

Connectionless

Unlike TCP, UDP doesn’t require establishing a connection before transferring data. It sends datagrams (packets) directly without any prior setup.

No error checking and recovery

UDP doesn’t provide built-in error checking, nor does it ensure the delivery of packets. If data is lost or corrupted during transmission, UDP doesn’t attempt to recover or resend it.

Speed

UDP is faster than TCP, as it doesn’t involve the overhead of establishing and maintaining a connection, error checking, or guaranteeing packet delivery.

UDP is ideal for use by Thread for several reasons:

Resource constraints

Thread devices often have limited processing power, memory, and energy. The simplicity and low overhead of UDP make it a better fitting choice for such resource-constrained devices compared to TCP.

Lower latency

UDP’s connectionless and lightweight nature enables data transmission with low latency, making it appropriate for wireless sensors.

Thread lacks an application layer

Unlike home automation protocols such as Zigbee, Z-Wave, or Matter, the Thread standard doesn’t define an application layer. A Thread network merely offers network infrastructure that applications can use to communicate. Just as your web browser and email client use TCP/IP over Ethernet or Wi-Fi, home automation devices can use Thread over IEEE 802.15.4 as their network infrastructure.

Several application layers exist that can make use of Thread:

Constrained Application Protocol (CoAP)

A simpler version of HTTP, designed for resource-constrained devices and using UDP instead of TCP.

MQTT for Sensor Networks (MQTT-SN)

A variant of MQTT designed to be used over UDP.

Apple HomeKit

Apple’s application protocol for home automation devices.

Matter

A new home automation standard developed by the Connectivity Standards Alliance (CSA).

Note

Matter was developed by the Zigbee Alliance, the original developers of the Zigbee standard for home automation. Following the introduction of Matter, the Zigbee Alliance rebranded to the Connectivity Standards Alliance. Matter’s application layer is heavily influenced by Zigbee. It not only runs on top of Thread, but can also be used over Wi-Fi and Ethernet.

Like your home network, which simultaneously hosts many application protocols including HTTP, DNS, and SMTP, a Thread network can run all these application protocols concurrently.

/images/thread-stack.png

The Thread network stack supports multiple application protocols simultaneously.

Throughout this book, I will be using CoAP as an application protocol in a Thread network. Once you have a Thread network set up, you can also use it for Apple HomeKit and Matter devices.
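To give a concrete feel for CoAP’s HTTP-like request model, here is a minimal sketch of querying a device from a Linux host using the coap-client tool from libcoap. The IPv6 address and the /light resource are invented for illustration; they are not taken from the book:

# GET the state of a (hypothetical) light resource on a Thread device
$ coap-client -m get coap://[fd11:22::a1b2:c3d4]/light
1

# CoAP mirrors HTTP's verbs: a PUT with a payload of 0 could switch it off
$ coap-client -m put -e 0 coap://[fd11:22::a1b2:c3d4]/light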

Advantages of Thread

Some of Thread’s key benefits include:

Scalability

Thread’s mesh network architecture allows for large-scale deployment of IoT devices, supporting hundreds of devices within a single network. Thread accommodates up to 32 Routers per network and up to 511 End Devices per Router. In addition, from Thread 1.2, multiple Thread networks can be integrated into one single Thread domain, allowing thousands of devices within a mesh network.

Security

Devices can’t access a Thread network without authorization. Moreover, all communication on a Thread network is encrypted using IEEE 802.15.4 security mechanisms. As a result, an outsider without access to the network credentials can’t read network traffic.

Reliability

Thread networks are robust and support self-healing and self-organizing network properties at various levels, ensuring the network’s resilience against failures. This all happens transparently to the user; messages are automatically routed around any bad node via alternative paths.

For example, an End Device requires a parent Router to communicate with the rest of the network. If communication with this parent Router fails for any reason and it becomes unavailable, the End Device will choose another parent Router in its neighborhood, after which communication resumes.

Thread Routers also relay packets from their End Devices to other Routers using the most efficient route they can find. In case of connection problems, the Router will immediately seek an alternate route. Router-Eligible End Devices can also temporarily upgrade their status to Routers if necessary. Each Thread network has a Leader who supervises the Routers and whose role is dynamically selected by the Routers. If the Leader fails, another Router automatically takes over as Leader.

Border Routers have the same built-in resilience. In a Thread network with multiple Border Routers, communication between Thread devices and devices on another IP network (such as a home network) occurs along multiple routes. If one Border Router loses connectivity, communications will be rerouted via the other Border Routers. If your Thread network only has a single Border Router, this will become a single point of failure for communication with the outside network.

Low power consumption

Thread is based on the power-efficient IEEE 802.15.4 link layer. This enables devices to operate on batteries for prolonged periods. Sleepy End Devices will also switch off their radio during idle periods and only wake up periodically to communicate with their parent Router, thereby giving even longer battery life.

Interoperability

As the protocol is built upon IPv6, Thread devices are straightforward to incorporate into existing networks, both in industrial and home infrastructure, and for both local networks and cloud connections. You don’t need a proprietary gateway; every IP-based device is able to communicate with Thread devices, as long as there’s a route between both devices facilitated by a Border Router.

Disadvantages of Thread

In life, they say, there’s no such thing as a free lunch, and Thread is no exception to this rule. Some of its limitations are:

Limited range

Thread’s emphasis on low power consumption can lead to a restricted wireless range compared with other technologies such as Wi-Fi. Although this can be offset by its mesh architecture, where Routers relay messages, it still means that devices in general can’t be placed too far apart.

Low data rates

Thread supports lower data rates than some rival IoT technologies. This makes Thread unsuitable for applications requiring high throughput.

Complexity

The Thread protocol stack may be more challenging to implement compared to other IoT networking solutions, potentially ramping up development costs and time.

This is an abridged version of the introduction of my book Building Wireless Sensor Networks with OpenThread: Developing CoAP Applications for Thread Networks with Zephyr. You can read the full introduction for free in the Elektor Store, where you can also find the table of contents.

Platforms used in this book

In this book I focus on building wireless sensor networks using the Thread protocol together with the following hardware and software environments:

Nordic Semiconductor’s nRF52840 SoC

A powerful yet energy-efficient and low cost hardware solution for Thread-based IoT devices

OpenThread

An open-source implementation of the Thread protocol, originally developed by Google, making it easy to incorporate Thread into your IoT projects

Zephyr

An open-source real-time operating system designed for resource-constrained devices

Armed with these tools, and using practical examples, I will guide you step by step through the details of the hardware and software required to build Thread networks and OpenThread-based applications.

Nordic Semiconductor’s nRF52840 SoC

Nordic Semiconductor’s nRF52840 is a SoC built around the 32-bit ARM Cortex-M4 CPU running at 64 MHz. This chip supports Bluetooth Low Energy (BLE), Bluetooth Mesh, Thread, Zigbee, IEEE 802.15.4, ANT and 2.4 GHz proprietary stacks. With its 1 MB flash storage and 256 KB RAM, it offers ample resources for advanced Thread applications.

This SoC is available for developers in the form of Nordic Semiconductor’s user-friendly nRF52840 Dongle. You can power and program this small, low-cost USB dongle via a computer USB port. The documentation lists comprehensive information about the dongle.

/images/nrf52840-dongle.png

Nordic Semiconductor’s nRF52840 Dongle is a low-cost USB dongle ideal for experimenting with Thread. (image source: Nordic Semiconductor)

OpenThread

Thread is basically a networking protocol; in order to work with it you need an implementation of the Thread networking protocol. The industry’s reference implementation, used even by professional Thread device developers, is OpenThread. It’s a BSD-licensed implementation, with development ongoing via its GitHub repository. The license for this software allows for its use in both open-source and proprietary applications.

OpenThread provides an Application Programming Interface (API) that’s operating system and platform agnostic. It has a narrow platform abstraction layer to achieve this, and has a small memory footprint, making it highly portable. In this book, I will be using OpenThread in conjunction with Zephyr, but the same API can be used in other combinations, such as ESP-IDF for Espressif’s Thread SoCs.

/images/openthread-website.png

The BSD-licensed OpenThread project is the industry’s standard implementation of the Thread protocol.

Zephyr

Zephyr is an open-source real-time operating system (RTOS) designed for resource-constrained devices. It incorporates OpenThread as a module, simplifying the process of creating Thread applications based on Zephyr and OpenThread. Because of Zephyr’s hardware abstraction layer, these applications will run on all SoCs supported by Zephyr that have an IEEE 802.15.4 radio facility.

/images/zephyr-website.png

Zephyr is an open-source real-time operating system (RTOS) with excellent support for OpenThread.

April 21, 2024

I use a Free Software Foundation Europe fellowship GPG smartcard for my email encryption and package signing. While the FSFE doesn’t provide the smartcard anymore, it’s still available at www.floss-shop.de.

gpg smartcard readers

I moved to a ThinkPad W541 with coreboot running Debian GNU/Linux and FreeBSD, so I needed to set up my email encryption in Thunderbird again.

It took me more time to reconfigure it - as usual - so I decided to take notes this time and turn them into a blog post, as this might be useful for somebody else … or for me in the future :-)

The setup was executed on Debian GNU/Linux 12 (bookworm) with the FSFE fellowship GPG smartcard, but the setup for other Linux distributions, FreeBSD, or other smartcards is very similar.

umask

When you’re working on privacy-related tasks - like email encryption - it’s recommended to set the umask to a more private setting to only allow the user to read the generated files.

This is also recommended in most security benchmarks for “interactive users”.

umask 027 is a good default: only the owner and users in the same group can read the generated files, and only the user that created a file can modify it. This should be the default IMHO, but for historical reasons, umask 022 is the default on most Un!x or alike systems.

On most modern Un!x systems the default group for the user is the same as the user name. If you really want to be sure that only the user can read the generated files you can also use umask 077.

Set the umask to something more secure than the default. You probably want to configure it in your ~/.profile or ~/.bashrc (if you’re using bash), or configure it at the system level for the UID range that is assigned to “interactive users”.

$ umask 27

Verify.

$ umask
0027
$
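To make the stricter umask persistent, append it to your shell startup file; a minimal sketch, assuming bash and ~/.profile as mentioned above:

# Persist the stricter umask for future login shells
$ echo "umask 027" >> ~/.profile

# Apply it to the current shell without logging out
$ . ~/.profile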

Install GnuPG

Smartcards are supported in GnuPG through scdaemon. With more recent versions of GnuPG it should be possible to use either the native GnuPG CCID (Chip Card Interface Device) driver or the OpenSC PC/SC interface.

gpg logo

For the native GnuPG CCID interface you’ll need a supported smartcard reader. I continue to use the OpenSC PC/SC method, which implements the PKCS11 interface - the de facto standard interface for smartcards and HSMs (Hardware Security Modules) - and supports more smartcard readers.

The PKCS11 interface also allows you to reuse your PGP keypair for other things, like SSH authentication (SSH support is also possible with the gpg-agent) or other applications like web browsers.
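As an aside, the gpg-agent route for SSH boils down to a few steps; a minimal sketch, assuming a default Debian setup:

# Tell gpg-agent to also act as an SSH agent
$ echo "enable-ssh-support" >> ~/.gnupg/gpg-agent.conf
$ gpgconf --reload gpg-agent

# Point SSH at gpg-agent's socket (typically exported from ~/.profile)
$ export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)

# The authentication key on the card should now show up
$ ssh-add -L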

On Debian GNU/Linux, scdaemon is a separate package.

Make sure that gpg and scdaemon are installed.

$ sudo apt install gpg scdaemon

GPG components

GnuPG has a few components, to list the components you can execute the gpgconf --list-components command.

$ gpgconf --list-components
gpg:OpenPGP:/usr/bin/gpg
gpgsm:S/MIME:/usr/bin/gpgsm
gpg-agent:Private Keys:/usr/bin/gpg-agent
scdaemon:Smartcards:/usr/lib/gnupg/scdaemon
dirmngr:Network:/usr/bin/dirmngr
pinentry:Passphrase Entry:/usr/bin/pinentry

If you want to know in more detail what the function is for a certain component you can check the manpages for each component.

In this blog post we use the following components, as they are involved in OpenPGP encryption:

  • scdaemon: Smartcard daemon for the GnuPG system
  • gpg-agent: Secret key management for GnuPG

To reload the configuration of all components, execute the gpgconf --reload command. To reload a single component, execute gpgconf --reload <component>.

e.g.

$ gpgconf --reload scdaemon

Will reload the configuration of the scdaemon.

When you want or need to stop a gpg process you can use the gpgconf --kill <component> command.

e.g.

$ gpgconf --kill scdaemon

Will kill the scdaemon process.

(Try to) Find your smartcard device

Make sure that the reader is connected to your system.

$ lsusb
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 003: ID 08e6:3438 Gemalto (was Gemplus) GemPC Key SmartCard Reader
Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd QEMU Tablet
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
$ 

Without config

Let’s see if gpg can detect the smartcard.

Execute gpg --card-status.

$ gpg --card-status
gpg: selecting card failed: No such device
gpg: OpenPGP card not available: No such device
$ 

gpg doesn’t detect the smartcard…

When you execute the gpg command the gpg-agent is started.

$ ps aux | grep -i gpg
avahi        540  0.0  0.0   8288  3976 ?        Ss   Apr05   0:00 avahi-daemon: running [debian-gpg.local]
staf         869  0.0  0.0  81256  3648 ?        SLs  Apr05   0:00 /usr/bin/gpg-agent --supervised
staf        5143  0.0  0.0   6332  2076 pts/1    S+   12:48   0:00 grep -i gpg
$ 

The gpg-agent also starts the scdaemon process.

$ ps aux | grep -i scdaemon
staf        5150  0.0  0.1  90116  5932 ?        SLl  12:49   0:00 scdaemon --multi-server
staf        5158  0.0  0.0   6332  2056 pts/1    S+   12:49   0:00 grep -i scdaemon
$ 

The first time you start gpg, it’ll create a ~/.gnupg directory.

staf@debian-gpg:~/.gnupg$ ls
private-keys-v1.d  pubring.kbx
staf@debian-gpg:~/.gnupg$ 

Add debug info

scdaemon

To debug why GPG doesn’t detect the smartcard, we’ll enable debug logging in the scdaemon.

Create a directory for the logging.

$ mkdir ~/logs
$ chmod 700 ~/logs

Edit ~/.gnupg/scdaemon.conf to reconfigure the scdaemon to create logging.

staf@debian-gpg:~/.gnupg$ cat scdaemon.conf
verbose
debug-level expert
debug-all
log-file    /home/staf/logs/scdaemon.log
staf@debian-gpg:~/.gnupg$ 

Reload the scdaemon; this stops the running process, which will be restarted the next time it’s needed.

staf@debian-gpg:~/logs$ gpgconf --reload scdaemon
staf@debian-gpg:~/logs$ 

Verify.

$ ps aux | grep -i scdaemon
staf        5169  0.0  0.0   6332  2224 pts/1    S+   12:51   0:00 grep -i scdaemon
$ 

The next time you execute a gpg command, the scdaemon is restarted.

$ gpg --card-status
gpg: selecting card failed: No such device
gpg: OpenPGP card not available: No such device
$ 

With the new config, the scdaemon creates a log file with debug info.

$ ls -l ~/logs
total 8
-rw-r--r-- 1 staf staf 1425 Apr 15 19:45 scdaemon.log
$ 

gpg-agent

We’ll configure the gpg-agent to generate a debug log as this might help to debug if something goes wrong.

Open the gpg-agent.conf configuration file with your favourite editor.

staf@bunny:~/.gnupg$ vi gpg-agent.conf

And set the debug-level and the log output file.

debug-level expert
verbose
log-file /home/staf/logs/gpg-agent.log

Reload the gpg-agent configuration.

staf@bunny:~/.gnupg$ gpgconf --reload gpg-agent
staf@bunny:~/.gnupg$ 

Verify that a log-file is created.

staf@bunny:~/.gnupg$ ls ~/logs/
gpg-agent.log  scdaemon.log
staf@bunny:~/.gnupg$ 

The main components are configured to generate debug info now. We’ll continue with the gpg configuration.

Setup opensc

We will use opensc to configure the smartcard interface.

opensc

Install apt-file

To find the required files/libraries, it can be handy to have apt-file on our system.

Install apt-file.


$ sudo apt install apt-file

Update the apt-file database.

$ sudo apt-file update

Search for the tools that will be required.

$ apt-file search pcsc_scan 
pcsc-tools: /usr/bin/pcsc_scan            
pcsc-tools: /usr/share/man/man1/pcsc_scan.1.gz
$ 
$ apt-file search pkcs15-tool
opensc: /usr/bin/pkcs15-tool              
opensc: /usr/share/bash-completion/completions/pkcs15-tool
opensc: /usr/share/man/man1/pkcs15-tool.1.gz
$ 

Install smartcard packages

Install the required packages.

$ sudo apt install opensc pcsc-tools

Start the pcscd.service

On Debian GNU/Linux the services are normally automatically enabled/started when a package is installed.

But the pcscd service wasn’t enabled on my system.

Let’s enable it.

$ sudo systemctl enable pcscd.service 
Synchronizing state of pcscd.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable pcscd
$ 

Start it.

$ sudo systemctl start pcscd.service 

And verify that it’s running.

$ sudo systemctl status pcscd.service 
● pcscd.service - PC/SC Smart Card Daemon
     Loaded: loaded (/lib/systemd/system/pcscd.service; indirect; preset: enabled)
     Active: active (running) since Tue 2024-04-16 07:47:50 CEST; 16min ago
TriggeredBy: ● pcscd.socket
       Docs: man:pcscd(8)
   Main PID: 8321 (pcscd)
      Tasks: 7 (limit: 19026)
     Memory: 1.5M
        CPU: 125ms
     CGroup: /system.slice/pcscd.service
             └─8321 /usr/sbin/pcscd --foreground --auto-exit

Apr 16 07:47:50 bunny systemd[1]: Started pcscd.service - PC/SC Smart Card Daemon.

Verify connection

Execute the pcsc_scan command and insert your smartcard.

$ pcsc_scan
PC/SC device scanner
V 1.6.2 (c) 2001-2022, Ludovic Rousseau <ludovic.rousseau@free.fr>
Using reader plug'n play mechanism
Scanning present readers...
0: Alcor Micro AU9540 00 00
 
Tue Apr 16 08:05:26 2024
 Reader 0: Alcor Micro AU9540 00 00
  Event number: 0
  Card state: Card removed, 
 Scanning present readers...
0: Alcor Micro AU9540 00 00
1: Gemalto USB Shell Token V2 (284C3E93) 01 00
 
Tue Apr 16 08:05:50 2024
 Reader 0: Alcor Micro AU9540 00 00
  Event number: 0
  Card state: Card removed, 
 Reader 1: Gemalto USB Shell Token V2 (284C3E93) 01 00
  Event number: 0
  Card state: Card inserted, 
  ATR: <snip>

ATR: <snip>
+ TS = 3B --> Direct Convention
+ T0 = DA, Y(1): 1101, K: 10 (historical bytes)
<snip>
+ TCK = 0C (correct checksum)
<snip>

Possibly identified card (using /usr/share/pcsc/smartcard_list.txt):
<snip>
	OpenPGP Card V2
$ 

Execute pkcs15-tool -D to ensure that the pkcs11/pkcs15 interface is working.

$ pkcs15-tool -D
Using reader with a card: Gemalto USB Shell Token V2 (284C3E93) 00 00
PKCS#15 Card [OpenPGP card]:
	Version        : 0
	Serial number  : <snip>
	Manufacturer ID: ZeitControl
	Language       : nl
	Flags          : PRN generation, EID compliant


PIN [User PIN]
	Object Flags   : [0x03], private, modifiable
	Auth ID        : 03
	ID             : 02
	Flags          : [0x13], case-sensitive, local, initialized
	Length         : min_len:6, max_len:32, stored_len:32
	Pad char       : 0x00
	Reference      : 2 (0x02)
	Type           : UTF-8
	Path           : 3f00
	Tries left     : 3

PIN [User PIN (sig)]
	Object Flags   : [0x03], private, modifiable
	Auth ID        : 03
	ID             : 01
	Flags          : [0x13], case-sensitive, local, initialized
	Length         : min_len:6, max_len:32, stored_len:32
	Pad char       : 0x00
	Reference      : 1 (0x01)
	Type           : UTF-8
	Path           : 3f00
	Tries left     : 3

PIN [Admin PIN]
	Object Flags   : [0x03], private, modifiable
	ID             : 03
	Flags          : [0x9B], case-sensitive, local, unblock-disabled, initialized, soPin
	Length         : min_len:8, max_len:32, stored_len:32
	Pad char       : 0x00
	Reference      : 3 (0x03)
	Type           : UTF-8
	Path           : 3f00
	Tries left     : 3

Private RSA Key [Signature key]
	Object Flags   : [0x03], private, modifiable
	Usage          : [0x20C], sign, signRecover, nonRepudiation
	Access Flags   : [0x1D], sensitive, alwaysSensitive, neverExtract, local
	Algo_refs      : 0
	ModLength      : 3072
	Key ref        : 0 (0x00)
	Native         : yes
	Auth ID        : 01
	ID             : 01
	MD:guid        : <snip>

Private RSA Key [Encryption key]
	Object Flags   : [0x03], private, modifiable
	Usage          : [0x22], decrypt, unwrap
	Access Flags   : [0x1D], sensitive, alwaysSensitive, neverExtract, local
	Algo_refs      : 0
	ModLength      : 3072
	Key ref        : 1 (0x01)
	Native         : yes
	Auth ID        : 02
	ID             : 02
	MD:guid        : <snip>

Private RSA Key [Authentication key]
	Object Flags   : [0x03], private, modifiable
	Usage          : [0x222], decrypt, unwrap, nonRepudiation
	Access Flags   : [0x1D], sensitive, alwaysSensitive, neverExtract, local
	Algo_refs      : 0
	ModLength      : 3072
	Key ref        : 2 (0x02)
	Native         : yes
	Auth ID        : 02
	ID             : 03
	MD:guid        : <snip>

Public RSA Key [Signature key]
	Object Flags   : [0x02], modifiable
	Usage          : [0xC0], verify, verifyRecover
	Access Flags   : [0x02], extract
	ModLength      : <snip>
	Key ref        : 0 (0x00)
	Native         : no
	Path           : b601
	ID             : 01

Public RSA Key [Encryption key]
	Object Flags   : [0x02], modifiable
	Usage          : [0x11], encrypt, wrap
	Access Flags   : [0x02], extract
	ModLength      : <snip>
	Key ref        : 0 (0x00)
	Native         : no
	Path           : b801
	ID             : 02

Public RSA Key [Authentication key]
	Object Flags   : [0x02], modifiable
	Usage          : [0x51], encrypt, wrap, verify
	Access Flags   : [0x02], extract
	ModLength      : <snip>
	Key ref        : 0 (0x00)
	Native         : no
	Path           : a401
	ID             : 03

$ 

You might want to reinsert your smartcard after you execute pkcs15-tool or pkcs11-tool commands, as these commands might keep the smartcard locked, which generates a sharing violation (0x8010000b) error (see below).

GnuPG configuration

scdaemon

The first step is to get the smartcard detected by gpg (scdaemon).

Test

Execute gpg --card-status to verify that your smartcard is detected by gpg. This might already work, as gpg will first try to use the internal CCID driver and, if that fails, fall back to the opensc interface.

$ gpg --card-status
gpg: selecting card failed: No such device
gpg: OpenPGP card not available: No such device
$ 

When the smartcard doesn’t get detected, have a look at the ~/logs/scdaemon.log log file.

Disable the internal ccid

When you use a smartcard reader that isn’t supported by GnuPG, or you want to force GnuPG to use the opensc interface, it’s a good idea to disable the internal CCID driver.

Edit scdaemon.conf.

staf@bunny:~/.gnupg$ vi scdaemon.conf
staf@bunny:~/.gnupg$

and add disable-ccid.

card-timeout 5
disable-ccid

verbose
debug-level expert
debug-all
log-file    /home/staf/logs/scdaemon.log

Reload the scdaemon.

staf@bunny:~/.gnupg$ gpgconf --reload scdaemon
staf@bunny:~/.gnupg$ 

gpg will try to use opensc as a fallback by default, so disabling the internal CCID interface will probably not fix the issue when your smartcard isn’t detected.

multiple smartcard readers

When you have multiple smartcard readers on your system, e.g. an internal one and another connected over USB, gpg might only use the first smartcard reader.

Check the scdaemon.log to get the reader port for your smartcard reader.

staf@bunny:~/logs$ tail -f scdaemon.log 
2024-04-14 11:39:35 scdaemon[471388] DBG: chan_7 -> D 2.2.40
2024-04-14 11:39:35 scdaemon[471388] DBG: chan_7 -> OK
2024-04-14 11:39:35 scdaemon[471388] DBG: chan_7 <- SERIALNO
2024-04-14 11:39:35 scdaemon[471388] detected reader 'Alcor Micro AU9540 00 00'
2024-04-14 11:39:35 scdaemon[471388] detected reader 'Gemalto USB Shell Token V2 (284C3E93) 01 00'
2024-04-14 11:39:35 scdaemon[471388] reader slot 0: not connected
2024-04-14 11:39:35 scdaemon[471388] reader slot 0: not connected
2024-04-14 11:39:35 scdaemon[471388] DBG: chan_7 -> ERR 100696144 No such device <SCD>
2024-04-14 11:39:35 scdaemon[471388] DBG: chan_7 <- RESTART
2024-04-14 11:39:35 scdaemon[471388] DBG: chan_7 -> OK

To force gpg to use a smartcard reader you can set the reader-port directive in the scdaemon.conf configuration file.

staf@bunny:~/.gnupg$ vi scdaemon.conf
card-timeout 5
disable-ccid

reader-port 'Gemalto USB Shell Token V2 (284C3E93) 01 00'

verbose
debug-level expert
debug-all
log-file    /home/staf/logs/scdaemon.log
staf@bunny:~$ gpgconf --reload scdaemon
staf@bunny:~$

sharing violation (0x8010000b)

Applications will try to lock your opensc smartcard; if your smartcard is already in use by another application, you might get a sharing violation (0x8010000b).

This is fixed by reinserting your smartcard. If this doesn’t help, it might be a good idea to restart the pcscd service, as this disconnects all the applications that might lock the opensc interface.

$ sudo systemctl restart pcscd

When everything goes well, your smartcard should be detected now. If not, take a look at the scdaemon.log.

staf@bunny:~$ gpg --card-status
Reader ...........: Gemalto USB Shell Token V2 (284C3E93) 00 00
Application ID ...: <snip>
Application type .: OpenPGP
Version ..........: 2.1
Manufacturer .....: ZeitControl
Serial number ....: 000046F1
Name of cardholder: <snip>
Language prefs ...: <snip>
Salutation .......: <snip>
URL of public key : <snip>
Login data .......: [not set]
Signature PIN ....: <snip>
Key attributes ...: <snip>
Max. PIN lengths .: <snip>
PIN retry counter : <snip>
Signature counter : <snip>
Signature key ....: <snip>
      created ....: <snip>
Encryption key....: <snip>
      created ....: <snip>
Authentication key: <snip>
      created ....: <snip>
General key info..: [none]
staf@bunny:~$ 

Note that gpg will lock the smartcard; if you try to use your smartcard from another application, you’ll get a “Reader in use by another application” error.

staf@bunny:~$ pkcs15-tool -D
Using reader with a card: Gemalto USB Shell Token V2 (284C3E93) 00 00
Failed to connect to card: Reader in use by another application
staf@bunny:~$ 

Private key references

When you execute gpg --card-status successfully, gpg creates references (shadowed private keys) to the private key(s) on your smartcard. These references are text files created in ~/.gnupg/private-keys-v1.d.

staf@bunny:~/.gnupg/private-keys-v1.d$ ls
<snip>.key
<snip>.key
<snip>.key
staf@bunny:~/.gnupg/private-keys-v1.d$ 

These references also include the smartcard serial number; this way, gpg knows on which hardware token the private key is located and will ask you to insert the correct hardware token (smartcard).

Pinentry

pinentry allows you to type in a passphrase (PIN code) to unlock the encryption if you don’t have a smartcard reader with a PIN-pad.

Let’s look for the packages that provide a pinentry binary.

$ sudo apt-file search "bin/pinentry"
kwalletcli: /usr/bin/pinentry-kwallet     
pinentry-curses: /usr/bin/pinentry-curses
pinentry-fltk: /usr/bin/pinentry-fltk
pinentry-gnome3: /usr/bin/pinentry-gnome3
pinentry-gtk2: /usr/bin/pinentry-gtk-2
pinentry-qt: /usr/bin/pinentry-qt
pinentry-tty: /usr/bin/pinentry-tty
pinentry-x2go: /usr/bin/pinentry-x2go
$ 

Install the packages.

$ sudo apt install pinentry-curses pinentry-gtk2 pinentry-gnome3 pinentry-qt

/usr/bin/pinentry is a symbolic link that eventually points to the selected pinentry binary.

$ which pinentry
/usr/bin/pinentry
staf@debian-gpg:/tmp$ ls -l /usr/bin/pinentry
lrwxrwxrwx 1 root root 26 Oct 18  2022 /usr/bin/pinentry -> /etc/alternatives/pinentry
staf@debian-gpg:/tmp$ ls -l /etc/alternatives/pinentry
lrwxrwxrwx 1 root root 24 Mar 23 11:29 /etc/alternatives/pinentry -> /usr/bin/pinentry-gnome3
$ 

If you want to use another pinentry provider you can update it with update-alternatives --config pinentry.

sudo update-alternatives --config pinentry
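Alternatively, gpg-agent can be pointed at a specific pinentry directly, bypassing the alternatives system; a sketch, with the curses variant as an arbitrary example:

# In ~/.gnupg/gpg-agent.conf: select a specific pinentry binary
pinentry-program /usr/bin/pinentry-curses

$ gpgconf --reload gpg-agent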

Import public key

The last step is to import our public key.

$ gpg --import <snip>.asc
gpg: key <snip>: "Fname Sname <email>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1
$

Test

Check if our public and private keys are known by GnuPG.

List public keys.

$ gpg --list-keys
/home/staf/.gnupg/pubring.kbx
-----------------------------
pub   rsa<snip> <snip> [SC]
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
uid           [ unknown] fname sname <email>
sub   rsa<snip> <snip> [A]
sub   rsa<snip> <snip> [E]

$ 

List our private keys.

$ gpg --list-secret-keys 
/home/staf/.gnupg/pubring.kbx
-----------------------------
sec>  rsa<size> <date> [SC]
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      Card serial no. = XXXX YYYYYYYY
uid           [ unknown] Fname Sname <email>
ssb>  rsa<size> <date> [A]
ssb>  rsa<size> <date> [E]

$ 

Test whether we can sign something with our GnuPG smartcard.

Create a test file.

$ echo "I'm boe." > /tmp/boe
$ 

Sign it.

$ gpg --sign /tmp/boe
$ 

Type your PIN code.

                ┌──────────────────────────────────────────────┐
                │ Please unlock the card                       │
                │                                              │
                │ Number: XXXX YYYYYYYY                        │
                │ Holder: first list                           │
                │ Counter: <number>                            │
                │                                              │
                │ PIN ________________________________________ │
                │                                              │
                │      <OK>                        <Cancel>    │
                └──────────────────────────────────────────────┘

In a next blog post, we will set up Thunderbird to use the smartcard for OpenPGP email encryption.

Have fun!


April 11, 2024

Getting the Belgian eID to work on Linux systems should be fairly easy, although some people do struggle with it.

For that reason, there is a lot of third-party documentation out there in the form of blog posts, wiki pages, and other kinds of things. Unfortunately, some of this documentation is simply wrong. Written by people who played around with things until they kind of worked, it sometimes describes steps that used to work in the past (but weren’t really necessary), have since stopped working, and are still copied to a number of locations as though they were the gospel.

And then people follow these instructions and now things don't work anymore.

One of these revolves around OpenSC.

OpenSC is an open source smartcard library that has support for a pretty large number of smartcards, amongst which the Belgian eID. It provides a PKCS#11 module as well as a number of supporting tools.

For those not in the know, PKCS#11 is a standardized C API for offloading cryptographic operations. It is an API that can be used when talking to a hardware cryptographic module, in order to make that module perform some actions, and it is especially popular in the open source world, with support in NSS, amongst others. NSS is written and maintained by Mozilla; it is a low-level cryptographic library that is used by Firefox (on all platforms it supports) as well as by Google Chrome and other browsers based on it (but only on Linux, and as I understand it, only for talking to smartcards; their BoringSSL library is used for other things).

The official eID software that we ship through eid.belgium.be, also known as "BeID", provides a PKCS#11 module for the Belgian eID, as well as a number of support tools to make interacting with the card easier, such as the "eID viewer", which provides the ability to read data from the card, and validate their signatures. While the very first public version of this eID PKCS#11 module was originally based on OpenSC, it has since been reimplemented as a PKCS#11 module in its own right, with no lineage to OpenSC whatsoever anymore.

About five years ago, the Belgian eID card was renewed. At the time, a new physical appearance was the most obvious difference with the old card, but there were also some technical, on-chip differences that are not so apparent. The most important one here, although it is not the only one, is the fact that newer eID cards now use NIST P-384 elliptic curve-based private keys, rather than the RSA-based ones that were used in the past. This change required some changes to any PKCS#11 module that supports the eID: both the BeID one, as well as the OpenSC card-belpic driver that is written in support of the Belgian eID.

Obviously, the required changes were implemented for the BeID module; however, the OpenSC card-belpic driver was not updated. While I did do some preliminary work on the required changes, I was unable to get it to work, and eventually other things took up my time so I never finished the implementation. If someone would like to finish the work that I started, the preliminary patch that I wrote could be a good start -- but like I said, it doesn't work yet. Also, you'll probably be interested in the official documentation of the eID card.

Unfortunately, in the mean time someone added the Applet 1.8 ATR to the card-belpic.c file, without also implementing the required changes to the driver so that the PKCS#11 driver actually supports the eID card. The result of this is that if you have OpenSC installed in NSS for either Firefox or any Chromium-based browser, and it gets picked up before the BeID PKCS#11 module, then NSS will stop looking and pass all crypto operations to the OpenSC PKCS#11 module rather than to the official eID PKCS#11 module, and things will not work at all, causing a lot of confusion.

I have therefore taken the following two steps:

  1. The official eID packages now conflict with the OpenSC PKCS#11 module. Specifically only the PKCS#11 module, not the rest of OpenSC, so you can theoretically still use its tools. This means that once we release this new version of the eID software, when you do an upgrade and you have OpenSC installed, it will remove the PKCS#11 module and anything that depends on it. This is normal and expected.
  2. I have filed a pull request against OpenSC that removes the Applet 1.8 ATR from the driver, so that OpenSC will stop claiming that it supports the 1.8 applet.

When the pull request is accepted, we will update the official eID software to make the conflict versioned, so that as soon as it works again you will again be able to install the OpenSC and BeID packages at the same time.

In the mean time, if you have the OpenSC PKCS#11 module installed on your system, and your eID authentication does not work, try removing it.
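If you are not sure whether NSS is picking up the OpenSC module, you can inspect the loaded PKCS#11 modules; a sketch for Chromium-based browsers (modutil ships in Debian's libnss3-tools package; Firefox keeps its NSS database in its profile directory instead):

# List the PKCS#11 modules registered in Chromium's NSS database
$ modutil -dbdir sql:$HOME/.pki/nssdb -list

# If an OpenSC module is listed, removing the Debian package that ships it
# (opensc-pkcs11) takes the module out of the picture
$ sudo apt remove opensc-pkcs11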

March 21, 2024

Box Art In Game

Mario & Luigi: Superstar Saga is a surprisingly fun game. At its core it’s a role-playing game, but you wouldn’t be able to tell because of the amount of platforming, the quantity and quality of small puzzles and -most of all- the funny self-deprecating dialogue.

As the player you control Mario and Luigi simultaneously, and that takes some time to master. As you learn techniques along the way, each character gets special abilities allowing you to solve new puzzles and advance on your journey. It’s not a short game for a handheld, taking around 20-30 hours to finish at the minimum.

The 2017 3DS remake (Mario & Luigi: Superstar Saga + Bowser’s Minions) kept most of the game as it is, just updating the graphics and music for the more powerful handheld. That says a lot about the original, whose 2D SNES/GBA graphics don’t feel outdated today, but hip and cute pixel art.

Have a look at “How to play retrogames?” if you don’t know how to play retrogames.

March 15, 2024

This is simply impossible in Belgium, because it requires a serious, non-populist debate between all political parties (one where they bring their calculators and their logical reasoning, and see to it that the budget really stays in balance). That hasn’t existed at the federal level for decades. So, yes.

But one way to solve the country’s budgetary problem could be to tax consumption more by raising the VAT rate from 21% to e.g. 25% or 30%, to tax income (much) less, and finally to move more products and services to the 6% and 12% rates.

In other words: all essential expenses such as electricity, Internet, (gas) heating, water, bread, and so on at 6%. A (much) larger basket of (e.g. healthy) foodstuffs and housing (including costs such as renovations) at 12%.

But for all luxury consumption: 25% or 30%, or possibly even 40%.

Coupled to that, a substantial reduction in personal income tax.

The result: more VAT revenue from consumption. A better competitive position relative to our neighbouring countries, because salaries don’t need to rise (thanks to the reduction in personal income tax). Purchasing power is thereby strengthened, and that strengthening is topped up by the larger basket of goods at the 6% and 12% VAT rates.

In other words: only luxury consumption is taxed (much) more. Essential consumption goes to 6% or 12% (or keeps the current 21%).

The biggest drawback is that our accountants will have more work. But I do think they won’t mind all that much (besides, they bill their extra hours at the luxury-consumption VAT rate).

One example of an advantage is that you can steer consumption better. Do you want us Belgians to buy more electric cars? Put such a car at 6% VAT and a car with a diesel combustion engine at 40% VAT. Do you want young people to eat healthier food? Coca-Cola at 40% VAT and fruit juice at 6% VAT.

Of course, you also have to cut spending and carry out a serious efficiency exercise.

March 12, 2024

Nintendo killed the Switch emulator Yuzu. Sadly, the official Citra distribution disappeared as well, because they shared some developers.

The good news is that Citra, being open source, is still available as archive builds and new forks. EmuDeck on the Steam Deck adapted to this new reality and adopted a bring-your-own-Yuzu-and-Citra model. The changes to these emulators came at the same time as a big EmuDeck release, complicating things a little. Without the changes below, you won’t have access to your previous saves, hotbuttons won’t work, and you will no longer start games directly (but start the Citra application itself).

Here are my notes for completing the 3DS emulation setup on the Steam Deck:

  1. Go to Desktop mode.
  2. Get the latest citra-linux-appimage-*.tar.gz from https://github.com/PabloMK7/citra/releases.
  3. Unpack it and put citra-qt.AppImage in $HOME/Applications.
  4. Open EmuDeck and reset the settings for the Citra emulator. Rescan the games with the ROM Manager.
  5. In case you already had Citra installed (see the sketch after this list for the copy step):
    • Copy your savefiles from /home/deck/.var/app/org.citra_emu.citra/data/citra-emu/sdmc/Nintendo 3DS/ to /home/deck/.local/share/citra-emu/sdmc/Nintendo 3DS/.
    • Uninstall Citra from Discover. DON'T remove the settings. (Better: back up /home/deck/.var/app/org.citra_emu.citra.)
  6. Open Citra from the KDE menu:
    • You can bump the resolution to 3x (720p).
    • You can use Vulkan instead of OpenGL.
    • Set your preferred windowing mode (I use one big screen and flip when needed).
  7. Go to Game mode.
  8. Pick a 3DS game and go to the controller icon. Pick “EmuDeck - Controller Hotkeys”. The backbuttons will allow to switch screen and viewing modes.
  9. Go to the menu and set the game to Full Screen. When you start the game again it will start in Full Screen mode.
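For step 5, the savefile copy could look like this in a Konsole session; a sketch based on the paths above (mind the space in the “Nintendo 3DS” directory name):

# Back up the old Flatpak Citra data first
$ cp -a "$HOME/.var/app/org.citra_emu.citra" "$HOME/citra-flatpak-backup"

# Copy the savefiles to the location the AppImage build uses
$ mkdir -p "$HOME/.local/share/citra-emu/sdmc/Nintendo 3DS"
$ cp -a "$HOME/.var/app/org.citra_emu.citra/data/citra-emu/sdmc/Nintendo 3DS/." \
       "$HOME/.local/share/citra-emu/sdmc/Nintendo 3DS/"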

That’s it. Have fun.

February 25, 2024

Through Euroclear, we hold some 260 billion euros in frozen Russian assets.

If we were to use those, it would mean that our country will be seen internationally as a thief for the next hundred years and more.

At the same time, we spend only about one percent of our GDP on defence. That is not good. But well. Perhaps, if the US so badly wants us to use those Russian assets for its political amusements, it could count these ‘favours’ as Belgian defence spending?

Moreover, there ought to be a number of guarantees against the reprisals that Russia will devise against Belgium when Euroclear reinvests the assets (which, from Russia’s point of view, means: steals them).

Where are those guarantees? What are they? How much money is involved? Is it an extreme amount? Because it has to be. Is it a disgustingly extreme amount? Because it has to be.

Those guarantees, do they run for more than a hundred years? Because they have to.

It is all very easy to want this and that. But what is being offered in return?

I started to migrate all the services that I use on my internal network to my Raspberry Pi 4 cluster. I migrated my FreeBSD jails to BastilleBSD on a virtual machine running on a Raspberry Pi. See my blog post on how to migrate from ezjail to BastilleBSD: https://stafwag.github.io/blog/blog/2023/09/10/migrate-from-ezjail-to-bastille-part1-introduction-to-bastillebsd/

tianocore

Running FreeBSD as a virtual machine with UEFI on ARM64 has come to the point that it just works. I had to use QEMU with u-boot to get FreeBSD up and running on the Raspberry Pi as a virtual machine with older FreeBSD versions: https://stafwag.github.io/blog/blog/2021/03/14/howto_run_freebsd_as_vm_on_pi/.

But with the latest versions of FreeBSD (I’m not sure when it started to work, but it works on FreeBSD 14) you can run FreeBSD as a virtual machine on ARM64 with UEFI, just like on x86 on GNU/Linux with KVM.

UEFI on KVM is in general provided by the open-source tianocore project.

I didn’t find much information on how to run OpenBSD with UEFI on x86 or ARM64.

OpenBSD 7.4

So I decided to write a blog post about it, in the hope that this information might be useful to somebody else. First I tried to download the OpenBSD 7.4 ISO image and boot it as a virtual machine on KVM (x86), but the ISO image failed to boot on a virtual machine with UEFI enabled. It looks like the ISO image only supports a legacy BIOS.

ARM64 doesn’t support a “legacy BIOS”. The ARM64 download page for OpenBSD 7.4 doesn’t even have an ISO image, but there is an install-<version>.img image available. So I tried to boot this image on one of my Raspberry Pi systems, and this worked. I had more trouble getting NetBSD working as a virtual machine on the Raspberry Pi, but that might be a topic for another blog post :-)

You’ll find my journey with my installation instructions below.

Download

Download the installation image

Download the latest OpenBSD installation ARM64 image from: https://www.openbsd.org/faq/faq4.html#Download

The complete list of the mirrors is available at https://www.openbsd.org/ftp.html

Download the image.

[staf@staf-pi002 openbsd]$ wget https://cdn.openbsd.org/pub/OpenBSD/7.4/arm64/install74.img
--2024-02-13 19:04:52--  https://cdn.openbsd.org/pub/OpenBSD/7.4/arm64/install74.img
Connecting to xxx.xxx.xxx.xxx:3128... connected.
Proxy request sent, awaiting response... 200 OK
Length: 528482304 (504M) [application/octet-stream]
Saving to: 'install74.img'

install74.img       100%[===================>] 504.00M  3.70MB/s    in 79s     

2024-02-13 19:06:12 (6.34 MB/s) - 'install74.img' saved [528482304/528482304]

[staf@staf-pi002 openbsd]$ 

Download the checksum and the signed checksum.

2024-02-13 19:06:12 (6.34 MB/s) - 'install74.img' saved [528482304/528482304]

[staf@staf-pi002 openbsd]$ wget https://cdn.openbsd.org/pub/OpenBSD/7.4/arm64/SHA256
--2024-02-13 19:07:00--  https://cdn.openbsd.org/pub/OpenBSD/7.4/arm64/SHA256
Connecting to xxx.xxx.xxx.xxx:3128... connected.
Proxy request sent, awaiting response... 200 OK
Length: 1392 (1.4K) [text/plain]
Saving to: 'SHA256'

SHA256                  100%[=============================>]   1.36K  --.-KB/s    in 0s      

2024-02-13 19:07:01 (8.09 MB/s) - 'SHA256' saved [1392/1392]

[staf@staf-pi002 openbsd]$ 
[staf@staf-pi002 openbsd]$ wget https://cdn.openbsd.org/pub/OpenBSD/7.4/arm64/SHA256.sig
--2024-02-13 19:08:01--  https://cdn.openbsd.org/pub/OpenBSD/7.4/arm64/SHA256.sig
Connecting to xxx.xxx.xxx.xxx:3128... connected.
Proxy request sent, awaiting response... 200 OK
Length: 1544 (1.5K) [text/plain]
Saving to: 'SHA256.sig'

SHA256.sig              100%[=============================>]   1.51K  --.-KB/s    in 0s      

2024-02-13 19:08:02 (3.91 MB/s) - 'SHA256.sig' saved [1544/1544]

[staf@staf-pi002 openbsd]$ 

Verify

OpenBSD uses signify to validate the cryptographic signatures. signify is also available for GNU/Linux (at least on Debian GNU/Linux and Arch Linux).

More details on how to verify the signature with signify are available at: https://www.openbsd.org/74.html

This blog post was also useful: https://www.msiism.org/blog/2019/10/20/authentic_pufferfish_for_penguins.html

Install OpenBSD signify

Download the signify public key from: https://www.openbsd.org/74.html

[staf@staf-pi002 openbsd]$ wget https://ftp.openbsd.org/pub/OpenBSD/7.4/openbsd-74-base.pub
--2024-02-13 19:14:25--  https://ftp.openbsd.org/pub/OpenBSD/7.4/openbsd-74-base.pub
Connecting to xxx.xxx.xxx.xxx:3128... connected.
Proxy request sent, awaiting response... 200 OK
Length: 99 [text/plain]
Saving to: 'openbsd-74-base.pub'

openbsd-74-base.pub     100%[=============================>]      99   397 B/s    in 0.2s    

2024-02-13 19:14:26 (397 B/s) - 'openbsd-74-base.pub' saved [99/99]

[staf@staf-pi002 openbsd]$

I run Debian GNU/Linux on my Raspberry Pis; let's see which signify packages are available.

[staf@staf-pi002 openbsd]$ sudo apt search signify
sudo: unable to resolve host staf-pi002: Name or service not known
[sudo] password for staf: 
Sorting... Done
Full Text Search... Done
chkrootkit/stable 0.57-2+b1 arm64
  rootkit detector

elpa-diminish/stable 0.45-4 all
  hiding or abbreviation of the mode line displays of minor-modes

fcitx-sayura/stable 0.1.2-2 arm64
  Fcitx wrapper for Sayura IM engine

fcitx5-sayura/stable 5.0.8-1 arm64
  Fcitx5 wrapper for Sayura IM engine

signify/stable 1.14-7 all
  Automatic, semi-random ".signature" rotator/generator

signify-openbsd/stable 31-3 arm64
  Lightweight cryptographic signing and verifying tool

signify-openbsd-keys/stable 2022.2 all
  Public keys for use with signify-openbsd

[staf@staf-pi002 openbsd]$

There are two OpenBSD signify packages available on Debian 12 (bookworm):

  • signify-openbsd: The OpenBSD signify tool.
  • signify-openbsd-keys: The OpenBSD release public keys, installed in /usr/share/signify-openbsd-keys/. Unfortunately, the key for the OpenBSD 7.4 release isn’t (yet) included in Debian 12 (bookworm).

Install both packages.

[staf@staf-pi002 openbsd]$ sudo apt install signify-openbsd signify-openbsd-keys
sudo: unable to resolve host staf-pi002: Name or service not known
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
  signify-openbsd signify-openbsd-keys
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 70.4 kB of archives.
After this operation, 307 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian bookworm/main arm64 signify-openbsd arm64 31-3 [62.3 kB]
Get:2 http://deb.debian.org/debian bookworm/main arm64 signify-openbsd-keys all 2022.2 [8020 B]
Fetched 70.4 kB in 0s (404 kB/s)          
Selecting previously unselected package signify-openbsd.
(Reading database ... 94575 files and directories currently installed.)
Preparing to unpack .../signify-openbsd_31-3_arm64.deb ...
Unpacking signify-openbsd (31-3) ...
Selecting previously unselected package signify-openbsd-keys.
Preparing to unpack .../signify-openbsd-keys_2022.2_all.deb ...
Unpacking signify-openbsd-keys (2022.2) ...
Setting up signify-openbsd-keys (2022.2) ...
Setting up signify-openbsd (31-3) ...
[staf@staf-pi002 openbsd]$ 
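
Since the 7.4 key isn’t packaged yet, we’ll use the key downloaded above. For releases whose keys are shipped in signify-openbsd-keys, you could point signify-openbsd straight at the packaged key instead. A sketch; the exact key and image file names are assumptions, so list the directory first:

# see which release keys Debian ships
ls /usr/share/signify-openbsd-keys/
# verify against a packaged key (adjust the key and image names to your release)
signify-openbsd -C -p /usr/share/signify-openbsd-keys/openbsd-73-base.pub -x SHA256.sig install73.img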

Verify the checksum

Verify the checksum.

[staf@staf-pi002 openbsd]$ sha256sum install74.img 
09e4d0fe6d3f49f2c4c99b6493142bb808253fa8a8615ae1ca8e5f0759cfebd8  install74.img
[staf@staf-pi002 openbsd]$ 
[staf@staf-pi002 openbsd]$ grep 09e4d0fe6d3f49f2c4c99b6493142bb808253fa8a8615ae1ca8e5f0759cfebd8 SHA256
SHA256 (install74.img) = 09e4d0fe6d3f49f2c4c99b6493142bb808253fa8a8615ae1ca8e5f0759cfebd8
[staf@staf-pi002 openbsd]$ 
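
grep works, but GNU coreutils sha256sum can also check the BSD-style “SHA256 (file) = …” lines in the SHA256 file directly. A sketch, assuming coreutils 8.25 or later for --ignore-missing:

# verifies the files that are present locally and skips the rest of the list
sha256sum -c --ignore-missing SHA256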

Verify with signify

Execute the signify command to verify the checksum. See the OpenBSD signify manpage for more information.

Below is a brief list of the arguments used to verify the authenticity of the image.

  • -C: verify a checksum list.
  • -p <path>: the path to the public key.
  • -x <path>: the path to the signature file.

Verify the image with signify.

[staf@staf-pi002 openbsd]$ signify-openbsd -C -p openbsd-74-base.pub -x SHA256.sig install74.img
Signature Verified
install74.img: OK
[staf@staf-pi002 openbsd]$

Secure boot

The Debian UEFI firmware package for libvirt, ovmf, is based on https://github.com/tianocore/tianocore.github.io/wiki/OVMF.

Debian Bookworm comes with the following UEFI firmware images for ARM64:

  • /usr/share/AAVMF/AAVMF_CODE.ms.fd: with secure boot enabled.
  • /usr/share/AAVMF/AAVMF_CODE.fd: with secure boot disabled.

The full description is available at /usr/share/doc/ovmf/README.Debian on a Debian system when the ovmf package is installed.

To install OpenBSD we need to disable secure boot.
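
A quick way to confirm which firmware images are installed, and to find the documentation mentioned above:

# list the available AAVMF firmware images
ls -l /usr/share/AAVMF/
# the README may be shipped compressed; zless handles both cases
zless /usr/share/doc/ovmf/README.Debian*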

Test boot

I first started a test boot.

Log on to the Raspberry Pi.

[staf@vicky ~]$ ssh -X -CCC staf-pi002 
Warning: untrusted X11 forwarding setup failed: xauth key data not generated
Linux staf-pi002 6.1.0-17-arm64 #1 SMP Debian 6.1.69-1 (2023-12-30) aarch64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Wed Feb 14 06:08:45 2024 from xxx.xxx.xxx.xxx
[staf@staf-pi002 ~]$ 


virt-manager

Start virt-manager and click on the [ Create a new VM ] icon.

[screenshot: new vm]

This will bring up the new vm window. Select ( ) Import existing disk image; you can review the architecture options by expanding \/ Architecture options. The defaults are fine. Click on [ Forward ].

[screenshot: import vm]

This will open the “import vm” window. Click on [ Browse ] to select the OpenBSD installation image or just copy/paste the path.

At the bottom of the screen, you’ll see Choose the operating system you are installing. Start typing openbsd and select [ X ] include end-of-life operating systems. Debian 12 (bookworm) doesn’t include support for OpenBSD 7.4 (yet), so we need to set it to “OpenBSD 7.0”. Click on [ Forward ].

[screenshot: select custom]

In the next window, keep the default Memory and CPU settings, as we’re just verifying that we can boot from the installation image.

Debian uses “secure boot” by default, so we need to disable it. Select [ X ] Customize configuration before install; this allows us to set the UEFI firmware image.

[screenshot: begin install]

Set the Firmware to /usr/share/AAVMF/AAVMF_CODE.fd to disable secure boot and click on [ Begin Installation ].

[screenshot: begin install]

Let’s check if OpenBSD can boot. Great, it works!

Installation with virt-install

I prefer to use the command line for the installation, as this allows me to make the installation reproducible and automated.

Create a ZFS dataset

I use ZFS on my Raspberry Pis; this makes it easier to create snapshots when you’re testing software.

root@staf-pi002:/var/lib/libvirt/images# zfs create staf-pi002_pool/root/var/lib/libvirt/images/openbsd-gitlabrunner001
root@staf-pi002:/var/lib/libvirt/images# 
root@staf-pi002:/var/lib/libvirt/images/openbsd-gitlabrunner001# pwd
/var/lib/libvirt/images/openbsd-gitlabrunner001
root@staf-pi002:/var/lib/libvirt/images/openbsd-gitlabrunner001# 
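
This is where the per-VM dataset pays off; a sketch using the dataset created above:

# snapshot the guest's dataset before experimenting...
zfs snapshot staf-pi002_pool/root/var/lib/libvirt/images/openbsd-gitlabrunner001@pre-install
# ...and roll back if the experiment goes wrong (with the VM shut down)
zfs rollback staf-pi002_pool/root/var/lib/libvirt/images/openbsd-gitlabrunner001@pre-install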

Get the correct os-variant

To get the supported operating system variants, you can execute the command virt-install --osinfo list.

root@staf-pi002:/var/lib/libvirt/images/openbsd-gitlabrunner001# virt-install --osinfo list | grep -i openbsd7
openbsd7.0
root@staf-pi002:/var/lib/libvirt/images/openbsd-gitlabrunner001# 

We’ll use openbsd7.0 as the operating system variant.

Create QEMU image

Create a destination disk image.

root@staf-pi002:/var/lib/libvirt/images/openbsd-gitlabrunner001# qemu-img create -f qcow2 openbsd-gitlabrunner001.qcow2 50G
Formatting 'openbsd-gitlabrunner001.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=53687091200 lazy_refcounts=off refcount_bits=16
root@staf-pi002:/var/lib/libvirt/images/openbsd-gitlabrunner001# 
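
You can double-check the result with qemu-img; this confirms the format and the virtual size of the new disk:

qemu-img info openbsd-gitlabrunner001.qcow2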

Run virt-install

Run virt-install to import the virtual machine.

#!/bin/bash

virt-install --name openbsd-gitlabrunner001  \
 --noacpi \
 --boot loader=/usr/share/AAVMF/AAVMF_CODE.fd \
 --os-variant openbsd7.0 \
 --ram 2048 \
 --import \
 --disk /home/staf/Downloads/isos/openbsd/install74.img  \
 --disk /var/lib/libvirt/images/openbsd-gitlabrunner001/openbsd-gitlabrunner001.qcow2

If everything goes well, the virtual machine boots.

BdsDxe: loading Boot0001 "UEFI Misc Device" from PciRoot(0x0)/Pci(0x1,0x3)/Pci(0x0,0x0)
BdsDxe: starting Boot0001 "UEFI Misc Device" from PciRoot(0x0)/Pci(0x1,0x3)/Pci(0x0,0x0)
disks: sd0*
>> OpenBSD/arm64 BOOTAA64 1.18
boot> 
cannot open sd0a:/etc/random.seed: No such file or directory
booting sd0a:/bsd: 2861736+1091248+12711584+634544 [233295+91+666048+260913]=0x13d5cf8
Copyright (c) 1982, 1986, 1989, 1991, 1993
	The Regents of the University of California.  All rights reserved.
Copyright (c) 1995-2023 OpenBSD. All rights reserved.  https://www.OpenBSD.org

OpenBSD 7.4 (RAMDISK) #2131: Sun Oct  8 13:35:40 MDT 2023
    deraadt@arm64.openbsd.org:/usr/src/sys/arch/arm64/compile/RAMDISK
real mem  = 2138013696 (2038MB)
avail mem = 2034593792 (1940MB)
random: good seed from bootblocks
mainbus0 at root: linux,dummy-virt
psci0 at mainbus0: PSCI 1.1, SMCCC 1.1
efi0 at mainbus0: UEFI 2.7
efi0: EDK II rev 0x10000
smbios0 at efi0: SMBIOS 3.0.0
smbios0:
sd1 at scsibus1 targ 0 lun 0: <VirtIO, Block Device, >
sd1: 51200MB, 512 bytes/sector, 104857600 sectors
virtio35: msix per-VQ
ppb5 at pci0 dev 1 function 5 vendor "Red Hat", unknown product 0x000c rev 0x00: irq
pci6 at ppb5 bus 6
ppb6 at pci0 dev 1 function 6 vendor "Red Hat", unknown product 0x000c rev 0x00: irq
pci7 at ppb6 bus 7
ppb7 at pci0 dev 1 function 7 vendor "Red Hat", unknown product 0x000c rev 0x00: irq
pci8 at ppb7 bus 8
ppb8 at pci0 dev 2 function 0 vendor "Red Hat", unknown product 0x000c rev 0x00: irq
pci9 at ppb8 bus 9
ppb9 at pci0 dev 2 function 1 vendor "Red Hat", unknown product 0x000c rev 0x00: irq
pci10 at ppb9 bus 10
ppb10 at pci0 dev 2 function 2 vendor "Red Hat", unknown product 0x000c rev 0x00: irq
pci11 at ppb10 bus 11
ppb11 at pci0 dev 2 function 3 vendor "Red Hat", unknown product 0x000c rev 0x00: irq
pci12 at ppb11 bus 12
ppb12 at pci0 dev 2 function 4 vendor "Red Hat", unknown product 0x000c rev 0x00: irq
pci13 at ppb12 bus 13
ppb13 at pci0 dev 2 function 5 vendor "Red Hat", unknown product 0x000c rev 0x00: irq
pci14 at ppb13 bus 14
pluart0 at mainbus0: rev 1, 16 byte fifo
pluart0: console
"pmu" at mainbus0 not configured
agtimer0 at mainbus0: 54000 kHz
"apb-pclk" at mainbus0 not configured
softraid0 at root
scsibus2 at softraid0: 256 targets
root on rd0a swap on rd0b dump on rd0b
WARNING: CHECK AND RESET THE DATE!
erase ^?, werase ^W, kill ^U, intr ^C, status ^T

Welcome to the OpenBSD/arm64 7.4 installation program.
(I)nstall, (U)pgrade, (A)utoinstall or (S)hell? 

Continue with the OpenBSD installation as usual. Make sure that you select the second disk during the installation process.
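
Once the installation has finished and the guest is shut down, you can detach the installation image so that the VM boots from the qcow2 disk. A sketch; the disk target name is an assumption, so check virsh domblklist first:

# find the target of install74.img (e.g. vda)
virsh domblklist openbsd-gitlabrunner001
# detach the installation image from the persistent configuration (assumed target)
virsh detach-disk openbsd-gitlabrunner001 vda --config
virsh start openbsd-gitlabrunner001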

To fully automate the installation, we need a system that executes the post-configuration at the first boot. On GNU/Linux this is normally done by cloud-init; there are solutions to get cloud-init working on the BSDs, but I haven’t looked into this (yet).

Have fun!

Links