November 15, 2024
It's been a while since I last blogged about one of my favorite songs. Even after more than 25 years of listening to "Tonight, Tonight" by The Smashing Pumpkins, it has never lost its magic. It has aged better than I have.
DDEV is an Open Source development environment that makes it easy to set up Drupal on your computer. It handles all the complex configuration by providing pre-configured Docker containers for your web server, database, and other services.
On macOS, you can install DDEV using Homebrew:
$ brew install ddev
Next, clone the Drupal CMS Git repository:
$ git clone https://git.drupalcode.org/project/drupal_cms.git
This command fetches the latest version of Drupal CMS from the official repository and saves it in the drupal_cms directory.
Next, configure DDEV for your Drupal project:
$ cd drupal_cms
$ ddev config --docroot=web --project-type=drupal
The --docroot=web parameter tells DDEV where your Drupal files will live, while --project-type=drupal ensures DDEV understands the project type.
Next, let's start our engines:
$ ddev start
The first time you start DDEV, it will set up Docker containers for the web server and database. It will also use Composer to download the necessary Drupal files and dependencies.
The final step is configuring Drupal itself. This includes things like setting your site name, database credentials, etc. You can do this in one of two ways:
Option 1: Configure Drupal via the command line
$ ddev drush site:install
This method is the easiest and the fastest, as things like the database credentials are automatically set up. The downside is that, at the time of this writing, you can't choose which Recipes to enable during installation.
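If you do want to set a few basics at install time, drush accepts options for them. A minimal sketch, assuming the default administrator account (the site name and password below are placeholders you should change):
$ ddev drush site:install --site-name="My Drupal CMS site" --account-name=admin --account-pass=change-me -y
This installs non-interactively and sets the site name and administrator credentials in one step.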
Option 2: Configure Drupal via the web installer
You can also use the web-based installer to configure Drupal, which allows you to enable individual Recipes. You'll need your site's URL and database credentials. Run this command to get both:
$ ddev describe
Navigate to your site and step through the installer.
Once everything is installed and configured, you can access your new Drupal CMS site. You can simply use:
$ ddev launch
This command opens your site's homepage in your default browser — no need to remember the specific URL that DDEV created for your local development site.
To build or manage a Drupal site, you'll need to log in. By default, Drupal creates a main administrator account. It's a good idea to update the username and password for this account. To do so, run the following command:
$ ddev drush uli
This command generates a one-time login link that takes you directly to the Drupal page where you can update your Drupal account's username and password.
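If you prefer to stay in the terminal, drush can also set a new password directly. A hedged example (the account name and password are placeholders):
$ ddev drush user:password admin 'a-long-unique-password'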
That's it! Happy Drupal-ing!
November 13, 2024
Hyperconnection, addiction and obedience
On hyperconnection
We are now connected everywhere, all the time. I call this "hyperconnection" (and it does not necessarily involve screens).
Sometimes I try to convince myself that my own addiction to this hyperconnection is mostly down to my geek side, that I cannot generalize from my own case.
And then, when I ride my bike, I notice how many pedestrians do not hear my bell, do not see me coming (even head-on), do not step aside and who, when they finally register my presence (which in some cases requires a tap on the shoulder), look completely dazed, as if I had just pulled them out of a parallel universe.
And then I see that mother in a waiting room, whose two-year-old daughter is vainly trying to get her attention: "Look, mommy! Look!"
And then I remember that other mother who perched her little girl on a parapet with a drop of several metres below, just to get a nice photo for Instagram.
And then I see that businessman in a sharp suit, looking very serious, who pulls out his phone to line up a few virtual fruits during the thirty seconds he waits for his coffee or the fifteen seconds of a red light at an intersection.
And then, on my bike, I try in vain to catch the eye of those drivers who, eyes glued to their screens, endanger the lives of their own children sitting in the back.
It is impossible to judge each anecdote on its own, because there are plenty of justifications that are, in some cases, perfectly valid. A mother may simply be exhausted from giving her child her full attention all day long. She has the right to think about something else. A pedestrian may be absorbed in an important phone conversation or lost in thought.
The individual has that right. But the problem strikes me as a collective one.
In the accounts I collect, the hardest thing for those who manage to disconnect is realizing how addicted everyone else is. It is particularly striking in couples.
If both partners are addicted, neither suffers. But let one of them try to regain a little mental freedom and they discover that their spouse is not listening to them. Is inattentive. Gritty describes here the pain of his disconnection, noticing that his wife no longer listens to him, that she is constantly listening to audio messages. He had never noticed the problem before disconnecting himself.
I think of our children. We worry about the impact of screens on our kids. But is it not, first and foremost, the impact of hyperconnection on the parents that rubs off on the children?
Negotiating with machines
What if this addiction were simply an adaptation to our environment? Because, as Gee describes very well, we now have to spend our time adapting to machines, negotiating with their absurd behaviour.
Negotiating is when you want the machine to do something and it will not. But there is worse. Machines give orders. Notifications.
There are, of course, the smartphone notifications that demand you drop everything to obey them. I just watched my plumber, on all fours against an open wall, both hands busy feeding pipes into a conduit, contort himself to answer his phone. A customer satisfaction survey. Which he answered calmly for several minutes, despite his uncomfortable position.
Beyond smartphones, every appliance now tries to impose its own way of working. My microwave beeps in a deeply irritating way when it is done, and keeps beeping until you open the door. If you use the two minutes your soup takes to reheat to go to the toilet, you are in for a beep ringing through the whole house the entire time. So you have to wait and obey patiently. It is even more absurd with the washing machine and the dryer. It is not as if it were urgent! Honourable mention to my dishwasher, which does not beep at all and simply pops its door open to say it is finished, which is a very good idea and proves it is possible not to pester the user. No doubt a mistake that will be fixed in the next version.
The prize goes to my (thankfully former) cooktop, which could not stand having any object placed on it. And which was so sensitive that the slightest spot of grease or moisture triggered a continuous, extremely irritating noise that said: "Clean me, human! Right now!" Yes, even at three in the morning, if a fly had landed on it or if it had suddenly detected a corner of the cleaning sponge.
Note that the same cooktop switched itself off whenever it detected a drop of liquid. Yes, even in the middle of cooking. And during cooking, a splash of grease or water is hardly rare.
— Clean me, human!
— Finish cooking my meal first!
— No, clean me first.
— You are a cooktop, you can survive a drop of water on your surface.
— Clean me!
— But you are scalding hot, I cannot clean you, I have to let you cool down first.
— That is your problem. Clean me!
— Then at least stop beeping while you cool down.
— CLEAN ME, HUMAN!
Non-smart tools have become a luxury.
The addiction to self-satisfaction
When you are an engineer, or have technical skills, you know how these machines work. When, like me, you have worked in industry, you can even make out the dozens of stupid decisions, taken by petty middle managers, that led to this disaster of inefficiency. Negotiating with a machine for four minutes to do something that should take one second is absurd, infuriating, maddening.
But among people for whom technology is a kind of black magic, I often see satisfaction, even pride, at managing to work around perfectly arbitrary limitations.
People are proud! So proud of pulling off something absurd and counter-intuitive that they brag about it, even offer to teach it to others. They make YouTube videos to show how they enable an option on their iPhone or how they write a ChatGPT prompt. They become addicted to that pride, that satisfaction, that false sense of being in control of the technology.
They dive into hyperconnection while looking down on poor geeks like me "who can't adapt to new technologies"; they obey like sheep the siren calls of marketing, the commercial decisions that impose yet another proprietary interface on them. They proclaim themselves "geeks" because they spend their days on Whatsapp and Fruit Ninja.
They think they are learning; they are only obeying.
Humans are addicted to blind obedience. They love submitting to an opaque, totalitarian power, provided they get brief moments in which they are given the illusion of holding power themselves.
And on one point they are right: the poor rebels fighting for a freedom nobody wants are the misfits.
- Shall we misfits get together this Thursday, November 14, in Paris to talk about all this?
- Photo by Matt Brown
I am Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!
Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.
November 07, 2024
Literary events in Paris and Louvain-la-Neuve, and a small contribution to the commons
Paris, November 14
This Thursday, November 14 at 7pm, I will take part in a literary event in Paris at the Librairie Wallonie-Bruxelles. It is a chance to get a copy of Bikepunk signed and put it under the Christmas tree.
Louvain-la-Neuve, December 10
For Belgian readers, the literary event will take place on Tuesday, December 10 at 7pm in my home base of Louvain-la-Neuve, at La Page d'Après, a bookshop that smells the way a bookshop should. Sign up now by sending them an email.
I will come and sign for you…
For Brussels readers who would like a signed copy before Christmas and cannot make it to Louvain-la-Neuve, I suggest picking up a copy of Bikepunk at the Chimères bookshop and leaving it there on deposit with a little note describing the dedication you would like. I am due to stop by the bookshop at the end of November and will sign the waiting books so you can pick them up later.
Of course, the French-speaking world is not limited to Paris, Brussels and Louvain-la-Neuve. But I go where I am invited. Neither my publisher nor I have a press officer in France to put us in touch with the French media or to arrange signing sessions in bookshops across the country.
And that is where you, dear readers, can help us!
By talking about the book with your bookseller and, if they are interested, putting us in touch so we can set a date (I try, of course, to combine trips). I will come with my typewriter and my ploum-pen. If you have contacts in independent and/or cycling media, tell them about us!
Thank you for posting photos of your bikes with the #bikepunk tag. It warms my heart every time! (Thanks to Vincent Jousse for starting the trend!)
On the importance of the commons
Now that the administrative matters are out of the way, let's talk seriously: an important aspect of Bikepunk that I have not yet mentioned is that it is published under a free license (CC BY-SA). It is therefore, as of now, part of the commons. I must admit that, unfortunately, this does not seem to interest the media.
The commons are a concept that is sometimes hard to grasp, that capitalism seeks to render invisible, and that is nevertheless utterly indispensable. Without ever being named, the commons never appeared to me more clearly than in the novel « Place d'âmes » by Sara Schneider (which is, of course, also under a free license).
I know nothing about the history of the Swiss Jura. The subject does not interest me in the slightest. It was out of pure friendship with Sara that I started her « Place d'âmes », somewhat reluctantly. Except that I was grabbed, sucked in. The question driving the book is only superficially about the Jura. In reality, it is about protecting our commons. About protecting our rebels, our revolutionaries, our witches, who themselves protect our commons.
A poetic and timely historical uchronia, essential reading. Through the story of the Jura, it is the entire planet Earth that Sara describes.
And while we are on the subject of the commons, Bruno Leyval has started putting his archives under the Art Libre license. Because art is a grammar, a vocabulary. Making it a common good means giving everyone the power to appropriate it, to modify it, to refine their perception of the world as an active citizen rather than a mere consumer.
Use these images, modify them!
Bruno is, among other things, the author of the Bikepunk cover and of the illustration that adorns this blog. They are under a free license. Many of you have asked for t-shirts (we are working on it, promise, it will just take a little time).
But you know what? You do not have to wait for us. The images, like the book, are in the commons.
They now belong to you as much as to me or to Bruno…
I am Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!
Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.
November 04, 2024
A number of years ago, my friend Klaas and I made a pact: instead of exchanging birthday gifts, we'd create memories together. You can find some of these adventures on my blog, like Snowdonia in 2019 or the Pemi Loop in 2023. This time our adventure led us to the misty Isle of Skye in Scotland.
For a week, we lived out of a van, wild camping along Scotland's empty roads. Our days were filled with muddy hikes, including the iconic Storr and Quiraing trails. The weather was characteristically Scottish – a mix of fog, rain, and occasional clear skies that revealed breathtaking cliffs and rolling highlands. Along the way, we shared the landscape with Highland cows and countless sheep, who seemed unfazed by our presence.
Our daily routine was refreshingly simple: wake up to misty mornings, often slightly cold from sleeping in the van. Fuel up with coffee and breakfast, including the occasional haggis breakfast roll, and tackle a hike. We'd usually skip lunch, but by the end of the day, we'd find a hearty meal or local pub. We even found ourselves having evening coding sessions in our mobile home; me working on my location tracking app and Klaas working on an AI assistant.
Though we mostly embraced the camping lifestyle, we did surrender to civilization on day three, treating ourselves to a hotel room and much-needed shower. We also secretly washed our camping dishes in the bathroom sink.
At one point, a kind park ranger had to kick us out of a parking lot, but decided to sprinkle some Scottish optimism our way. "The good news is that the weather will get better…", he said, watching our faces light up. Then came the punchline, delivered with the timing of a seasoned comedian: "… in April."
Would April have offered better weather? Perhaps. But watching the fog dance around the peaks of Skye, coding in our van during quiet evenings, and sharing sticky toffee pudding in restaurants made for exactly the kind of memory we'd hoped to create. Each year, these birthday trips deepen our friendship – the hugs get stronger and the conversations more meaningful.
November 01, 2024
Dear WordPress friends in the USA: I hope you vote and when you do, I hope you vote for respect. The world worriedly awaits your collective verdict, as do I. Peace! Watch this video on YouTube.
October 25, 2024
At Acquia Engage NYC this week, our partner and customer conference, we shared how Acquia's Digital Experience Platform (DXP) helps organizations deliver digital experiences through three key capabilities:
- Content: Create, manage and deliver digital content and experiences - from images and videos to blog posts, articles, and landing pages - consistently across all your digital channels.
- Optimize: Continuously improve your digital content and experiences by strengthening accessibility, readability, brand compliance, and search engine optimization (SEO).
- Insights: Understand how people interact with your digital experiences, segment audiences based on their behavior and interests, and deliver personalized content that drives better engagement and conversion rates.
Since our last Acquia Engage conference in May, roughly six months ago, we've made some great progress, and we announced some major innovations and updates across our platform.
The Acquia Open DXP platform consists of three pillars - Content, Optimize, and Insights - with specialized products in each category to help organizations create, improve, and personalize digital experiences.
Simplify video creation in Acquia DAM
Video is one of the most engaging forms of media, but it's also one of the most time-consuming and expensive to create. Producing professional, branded videos has traditionally required significant time, budget, and specialized skills. Our new Video Creator for DAM changes this equation. By combining templating, AI, and DAM's workflow functionality, organizations can now create professional, on-brand videos in minutes rather than days.
Make assets easier to find in Acquia DAM
Managing large digital asset libraries can become increasingly overwhelming. Traditional search methods rely on extensive metadata tagging and manual filtering options. Depending on what you are looking for, it might be difficult to quickly find the right assets.
To address this, we introduced Acquia DAM Copilot, which transforms the experience through conversational AI. Instead of navigating complicated filter menus, users can now simply type natural requests like "show me photos of bikes outside" and refine their search conversationally with commands like "only show bikes from the side view". This AI-powered approach eliminates the need for extensive tagging and makes finding the right content intuitive and fast.
Easier site building with Drupal
I updated the Acquia Engage audience on Drupal CMS (also known as Drupal Starshot), a major initiative I'm leading in the Drupal community with significant support from Acquia. I demonstrated several exciting innovations coming to Drupal: "recipes" to simplify site building, AI-powered site creation capabilities, and a new Experience Builder that will transform how we build Drupal websites.
Many in the audience had already watched my DrupalCon Barcelona keynote and expressed continued enthusiasm for the direction of Drupal CMS and our accelerated pace of innovation. Even after demoing it multiple times the past month, I'm still very excited about it myself. If you want to learn more, be sure to check out my DrupalCon presentation!
Improving content ranking with Acquia SEO
Creating content that ranks well in search engines traditionally requires both specialized SEO expertise and skilled content writers - making it an expensive and time-consuming process. Our new SEO Copilot, powered by Conductor and integrated directly into Drupal's editing experience, provides real-time guidance on keyword optimization, content suggestions, length recommendations, and writing complexity for your target audience. This helps content teams create search-engine-friendly content more efficiently, without needing deep SEO expertise.
Improving content quality with Acquia Optimize
We announced the rebranding of Monsido to Acquia Optimize and talked about two major improvements to this offering.
First, we improved how organizations create advanced content policies. Creating advanced content policies usually requires some technical expertise, as it can involve writing regular expressions. Now, users can simply describe in plain language what they want to monitor. For example, they could enter something like "find language that might be insensitive to people with disabilities", and AI will help create the appropriate policy rules. Acquia Optimize will then scan content across all your websites to detect any violations of those rules.
Second, we dramatically shortened the feedback loop for content checking. Previously, content creators had to publish their content and then wait for scheduled scans to discover problems with accessibility, policy compliance or technical SEO - a process that could take a couple of days. Now, they can get instant feedback. Authors can request a check while they work, and the system immediately flags accessibility issues, content policy violations, and other problems, allowing them to fix problems while the content is being written. This shift from "publish and wait" to "check and fix" helps teams maintain higher content quality standards, allows them to work faster, and can prevent non-compliant content from ever going live.
FedRAMP for Acquia Cloud Next
We were excited to announce that our next-generation Drupal Cloud, Acquia Cloud Next (ACN), has achieved FedRAMP accreditation, just like our previous platform, which remains FedRAMP accredited.
This means our government customers can now migrate their Drupal sites onto our latest cloud platform, taking advantage of improved autoscaling, self-healing, and cutting-edge features. We already have 56 FedRAMP customers hosting their Drupal sites on ACN, including Fannie Mae, The US Agency for International Development, and the Department of Education, to name a few.
Improved fleet management for Drupal
Acquia Cloud Site Factory is a platform that helps organizations manage fleets of Drupal sites from a single dashboard, making it easier to launch, update, and scale sites. Over the past two years, we've been rebuilding Site Factory on top of Acquia Cloud Next, integrating them more closely. Recently, we reached a major milestone in this journey. At Engage, we showcased Multi-Experience Operations (MEO) to manage multiple Drupal codebases across your portfolio of sites.
Previously, all sites in a Site Factory instance had to run the same Drupal code, requiring simultaneous updates across all sites. Now, organizations can run sites on different codebases and update them independently. This added flexibility is invaluable for large organizations managing hundreds or thousands of Drupal sites, allowing them to update at their own pace and maintain different Drupal versions where needed.
Improved conversion rates with Acquia Convert
Understanding user behavior is key to optimizing digital experiences, but interpreting the data and deciding on next steps can be challenging. We introduced some new Acquia Convert features (powered by VWO) to solve this.
First, advanced heat-mapping shows exactly how users interact with your pages, where they click first, how far they scroll, and where they show signs of frustration (like rage clicks).
Next, and even more powerful, is our new Acquia Convert Copilot that automatically analyzes this behavioral data to suggest specific improvements. For example, if the AI notices high interaction with a pricing slider but also signs of user confusion, it might suggest an A/B test to clarify the slider's purpose. This helps marketers and site builders make data-driven decisions and improve conversion rates.
Privacy-first analytics with Piwik Pro
As data privacy regulations become stricter globally, organizations face growing challenges with web analytics. Google Analytics has been banned in several European countries for not meeting data sovereignty requirements, leaving organizations scrambling for compliant alternatives.
We announced a partnership with Piwik Pro to address this need. Piwik Pro offers a privacy-first analytics solution that maintains compliance with global data regulations by allowing organizations to choose where their data is stored and maintaining full control over their data.
This makes it an ideal solution for organizations that operate in regions with strict data privacy laws, or any organization that wants to ensure their analytics solution remains compliant with evolving privacy regulations.
After the Piwik Pro announcement at Acquia Engage, I spoke with several customers who are already using Piwik Pro. Most worked in healthcare and other sectors handling sensitive data. They were excited about our partnership and a future that brings deeper integration between Piwik Pro, Acquia Optimize, Drupal, and other parts of our portfolio.
Conclusion
The enthusiasm from our customers and partners at Acquia Engage always reinvigorates me. None of these innovations would be possible without the dedication of our teams at Acquia. I'm grateful for their hard work in bringing these innovations to life, and I'm excited for what is next!
October 24, 2024
They are lying to us
The politician's lie
During my 2012 election campaign for the Pirate Party, I was stopped in the street by a citizen of my municipality who asked me what I was going to do to save his job.
I was surprised by the question and I answered honestly. Nothing! Nothing, because if I am elected I will be a municipal councillor in the opposition, in a municipality that has nothing to do with the factory. In fact, any candidate who tells you otherwise is lying to you, because there is nothing they can do. The factory is not even in the municipality.
— Then I won't vote for you! The others say they will do something.
I had explained that the others were lying, but the man preferred a lie. The vast majority of humans prefer a comforting lie. I, who hate lying, who am physically incapable of it, was discovering at past thirty that some humans could lie while looking you straight in the eye and that everyone seemed to appreciate it (I know, I am both naive and a little slow).
In essence, politicians lie because it works. And also because they face a multitude of shifting, overlapping groups. Contrary to what we sometimes believe, society is not binary, divided between workers on one side and managers on the other, foreigners and "natives", drivers and cyclists, and so on. We all belong to many categories at once. We are incompatible with ourselves.
So the politician tells cyclists he will build more bike lanes. Then tells drivers he will build more parking. The two promises are contradictory. Nobody notices, even though some people sit in both audiences (most cyclists also being drivers). The politician lies. He knows it. He cannot do otherwise. It is what we expect of him.
In any case, once elected, he will do what has to be done, what the administrative machine pushes him to do, while trying to nudge it slightly in the direction of his own interests of the moment. In any case, at the next election he can say whatever he likes. In any case, the people interested in "fact checking" are a minority who do not vote for him anyway.
Politicians lie; it is the very essence of politics. Those who try not to do not last long enough to acquire any real power. It is impossible. Denouncing a politician's lies is like denouncing the wetness of water.
But the politician's lie remains artisanal, human. They try to convince each voter by making a hundred contradictory promises, knowing that the average human will only hear the one they want to hear.
Secretly, politicians all envy Trump, who no longer even needs to make promises, because a fringe of the electorate hears whatever it wants in his slightest grunt.
But that is nothing next to the industrial lie, that gigantic, enormous lie. The one underpinning the greatest fortunes of our era. The lie we have elevated to the rank of an academic discipline and dressed up with the name "marketing".
The industry's lie
Lying is at the root of all modern industry. Marketers learn to lie, encourage one another to do it in seminars, all while dreaming of becoming the biggest liars in modern history: the executives of the tech multinationals.
Unlike the worst industries, such as tobacco or sugary soda, which sell crap but at least sell something, Silicon Valley CEOs are not trying to build a profitable company or a useful service. They tell us stories, they invent myths, lies. We believe this is a marketing technique in the service of their product.
But it is the other way around: the product is merely a prop for their story, their fiction. The product has to do the bare minimum for their story to remain credible with the most gullible audience of all: the media.
Mark Zuckerberg is not a genius of computing, nor of advertising. He simply managed to convince advertisers that he had invented the ultimate advertising technique, a "mind-control ray". And he managed to convince us that we should all become advertisers to benefit from his invention (even though we are all convinced the mind-control ray does not work on us; it is only for other people).
It works because that is how Silicon Valley CEOs see their entire lives: as a prop for the myth they are building. Steve Jobs was not trying to create the iPhone. He was initially opposed to the App Store. But he built a business that made people believe he was a visionary, and turned himself into an immortal god whose biography flies off the shelves.
Bill Gates never created an operating system. He created a world in which everyone has to pay Bill Gates to have one. He was not a particularly good coder, but he managed to convince the world that sharing software without paying for it was heresy. And then he applied his ideology to vaccination.
Elon Musk did not create Tesla. But he rewrote its history so as to become, after the fact, a founder. He invented the Hyperloop concept and promoted it knowing full well that it would never work, but that believing it might work would hold back the development of conventional rail, which threatened to compete with the private car.
These people are, in essence, by definition, mythomaniacs. They are not stupid; they are in fact very intelligent. But, as Edward Zitron points out, they are not remotely intellectual and have not the slightest morals. They all read the same books. They never get to the bottom of anything. They are proud of making crucial decisions in three minutes. They are not interested in subtlety, in reasoning. They are junkies for adrenaline and the adulation of the crowd.
If they read the summary of a science-fiction novel, they find the ideas brilliant even though the author was trying to warn us against them.
Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary Tale
Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don’t Create The Torment Nexus
– Alex Blechman
They are intellectually mediocre at everything. Except for the crucial ability to craft a media image, an aura.
And that media image warps reality. We give them our money. Our governments hand them the keys to our democracies. Politicians would sell a kidney to be photographed with them. Journalists are delighted to land consensual interviews, even though these billionaires need journalists in order to exist! But since they control the press, whether by owning it or by being its biggest customers, they make sure that any journalist who gets a little too clever is quickly pushed out.
When a company buys advertising space in the media, it is not to advertise to the general public, even if that is a nice side effect. The real objective is to make the outlet dependent on that stream of money, to exert power: if we no longer like what you publish, we turn off the tap. You should see the naive pride of the editor-in-chief of a media outlet, largely financed by public money, when he signs an advertising contract with Google or Microsoft.
Every newsroom grasps this power dynamic intuitively and exerts, sometimes unconsciously, control over journalists who are a little too independent. Who are, in any case, too poorly paid to have the time to investigate or think seriously, and settle for relaying "press releases", sometimes without even rereading or editing them. (For those unaware, a press release is in fact a pre-written article that the journalist can copy and paste to hit their quota of articles published that day without breaking a sweat.)
All the communication around OpenAI, the company behind ChatGPT, is one gigantic lie. Even the investors pouring in millions do not actually get shares in the company, but "a promise of a share in future profits". Yet anyone with a primary-school grasp of mathematics quickly understands that it is literally impossible for OpenAI to ever turn a profit. Worse: to survive, the company will have to find more investors than any other company before it in the history of capitalism. And that, every single year! "We are spending so much money that we are bound to make some eventually, and if you give us your money, you will get a share when that happens." The OpenAI business is the definition of a Ponzi scheme, except that it is legal, because it is written in black and white in the contract.
The market value of Nvidia, which supplies the hardware that ChatGPT runs on, is approaching 15% of the gross domestic product of the United States. No sane person can defend the idea that a few chips represent nearly one sixth of the value of the work of the entire United States. The fact that Nvidia is now an "investor" in OpenAI shows that they know perfectly well it is a bubble and are choosing to feed it. It is obvious that Nvidia and OpenAI are lying to us, themselves astonished that it keeps working.
At this point, I am tempted to say that the fools who keep believing in it truly deserve what is coming to them. And those who, on LinkedIn, still marvel at the impact ChatGPT could have on their business fully deserve to wear a sign reading "I am literally the last of the fools".
But even the billionaires' fiercest opponents take them at their word and try to fight them inside their own mythologies. Facebook controls our minds, that is bad! ChatGPT will destroy jobs, that is bad! Bill Gates is putting 5G chips in the vaccines he sells us! These fears, fictional as they are, only reinforce the fiction told by people who picture themselves as James Bond supervillains and are incredibly flattered to see that even their enemies believe them to be far more powerful than they really are.
And yet, like any superstition, like any cult guru, a single simple thing would be enough to destroy these lies. We would only have to stop believing them.
We would only have to say "Nonsense!" with a laugh and a shrug.
We would only have to stop believing that Facebook/Instagram ads work, stop giving credence to the metrics they make up ("views" and "likes"), for Meta to collapse (those metrics have been amply shown to be completely fictitious). We would only have to stop believing that Google's search results are relevant for Alphabet to collapse (they have not been for a long time). We would only have to recognize Tesla's flaws and the dangers of a connected personal car for Elon Musk's fortune to melt like snow in the sun (a fortune built on the fiction that Tesla will outsell every other carmaker, every year, for decades to come). We would only have to stop and think for thirty seconds to realize that ChatGPT is not the beginning of a revolution but the end of a bubble.
But stopping to think for thirty seconds between two notifications has become a luxury. An unimaginable luxury, an extraordinary effort.
Rather than thinking about what we see, we prefer to click the "Like" button and move on. The lie is more comfortable.
Every time we take out our phones, we are asking for only one thing…
That they lie to us.
I am Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!
Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.
October 20, 2024
20 years of Linux on the Desktop (part 1)
Twenty years ago, I had an epiphany: Linux was ready for the desktop.
(*audience laughs*)
I had been one of those teenagers invited everywhere to "fix" the computer. Neighbours, friends, family. Yes, that kind of nerdy teenager. You probably know what I mean. But I was tired of installing cracked antivirus and cleaning infested Microsoft Windows computers, their RAM full of malware, their CPU slowing to a crawl, with their little power LED begging me to alleviate their suffering.
Tired of being an unpaid Microsoft support technician, I offered people a choice: either I would install Linux on their computer, with my full support, or they would never talk to me about their computer again.
To my surprise, some accepted the Linux offer.
I started to always carry two CD-ROMs with me: the latest Knoppix and Debian Woody. I would first launch Knoppix, the first Linux Live CD, give a demonstration to my future victims and, more importantly, save the autogenerated XFree86 config file. I would then install Debian. When X failed to start after installation, which was a given, I would copy the X config file from Knoppix, install GNOME 2 and OpenOffice from the testing repository and start working on what needed to be done. Like installing and launching ESD by default to allow multiple applications to play sound at once. Or configuring the network, which was, most of the time, a USB ADSL modem requiring some proprietary firmware that I had downloaded beforehand.
I would also create some shell scripts for common operations: connect to Internet, mount the USB camera, etc. I put those scripts on the GNOME desktop so people could simply click to launch them. In some cases, I would make a quick Zenity interface.
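To give an idea of what those desktop helpers looked like, here is a minimal sketch reconstructed from memory rather than an actual file from the time; the connection command is an assumption (a typical Debian PPP setup of that era):
#!/bin/sh
# Hypothetical one-click "Connect to the Internet" helper with a Zenity prompt.
if zenity --question --text="Connect to the Internet?"; then
    pon dsl-provider \
        && zenity --info --text="Connected." \
        || zenity --error --text="Connection failed."
fi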
It was a lot of work upfront but, after that, it worked. People were using their computers for months or years without messing anything up or breaking anything. Most of the complaints I received were about running some Windows software (like the programs on CD-ROMs found in cereal boxes).
With GNOME 2.0, I felt that Linux was ready for the desktop. It was just really hard to install. And that could be fixed.
The Perfect Desktop
I had my own public wiki (called the FriWiki) with a few regular contributors. On it, I wrote a long text called: "Debian+GNOME=The Perfect Desktop?". It explained my observations and all the problems that needed to be fixed.
As I wanted to improve the situation, I described how the installation should autodetect everything, like Knoppix did. I also suggested having a Live CD and an installation CD that mirrored each other exactly, so you could test and then install. In a perfect world, you could install directly from the Live CD, but I did not know if that was technically possible. The installer should also offer standard partitioning schemes, and autodetect and preserve the Windows partition, so people would not be afraid of messing up their system.
Installation was not everything. I suggested that the user created during installation should automatically get root rights. I had learned that two passwords were too high a bar for most, if not all, people. I don't remember anyone understanding the "root" concept. Systems with multiple users were too complex to teach, so I ended up, in every case, with a single account for the whole family. The fact that this single account was prevented from doing certain things on the computer was baffling. Especially for trivial things such as mounting a CD-ROM or a USB key.
Speaking of root: installing software could be made much friendlier. I imagined a synaptic-like interface which would only display main applications (not all packages) with screenshots, descriptions and reviews. I did some mockups and noted that the hardest parts would probably be the selection and the translation. I've lost those mockups but, as I remember them, they were incredibly close to what app stores would become years later.
I, of course, insisted on installing ESD by default to multiplex sound, and on including all the multimedia codecs, libdvdcss and every possible firmware (called "drivers" at the time) in case hardware was added later.
I had pages and pages of detailed analysis of every single aspect I wanted to be improved to make Linux ready for the desktop.
With 2.0, GNOME switched to a time-based release cycle: every six months, a new GNOME desktop would be released, no matter what. I thought it would be a nice idea to have the underlying OS released just after, to keep them in sync. But every six months was probably a bit too much work, and the people I knew would not upgrade that often anyway. So I advocated for a yearly release where the version number would be the full year. This would greatly help people understand what version they were running. As in "I'm running Linux Desktop 2003".
UserLinux
When you have a good idea, it’s probably because this idea is already in the zeitgeist of the time. I don’t believe in "ownership" or even "stealing" when it’s about ideas. Bruce Perens himself was thinking about the subject. He decided to launch UserLinux, an initiative that had the goal of doing exactly what I had in mind.
I immediately joined the project and became a very vocal member, always referring to my "Perfect Desktop" essay. I wanted UserLinux to succeed. If Bruce Perens was behind it, it could not fail, right?
A mockup of the UserLinux desktop (GNOME 2.0 with a custom theme)
Unfortunately, most of the UserLinux people were, like me, talking a lot but not doing much. The only active member was an artist who designed a logo and started to create multiple GNOME themes. It was great. And lots of discussions about how to make the theme even better ensued.
This is how I learned about "bike shedding".
UserLinux was the ultimate bikeshedded project. To my knowledge, no code was ever written. In fact, it was not even clear what code should be written at all. After launching the initial idea, Bruce Perens was mostly discussing with us, everybody waiting for someone to "do something".
No-name-yet
At the start of 2004, I was contacted by Sébastien Bacher, a Debian developer who told me that he had read my "Perfect Desktop" essay months ago and forwarded it to someone who had very similar ideas. And lots of money. So much money that they were already secretly working on it and, now that it was starting to take shape, they were interested in my feedback about the very alpha version.
I, of course, agreed and was excited. This is how I joined a mysterious project called "no-name-yet", with a nonameyet.com website and an IRC channel. While discussing the project and getting to know it in great depth, my greatest fear was that it would become a fork of Debian. I felt strongly that one should not fork Debian lightly. Instead, it should be a few packages and metapackages sitting on top of Debian. Multiple people on the team assured me that the goal was to cooperate with Debian, not to fork it.
At one point, I strongly argued with someone on IRC whose nick was "sabdfl". Someone else asked me in private if I knew who it was. I didn’t.
That’s how I learned that the project was funded by Mark Shuttleworth himself.
Dreaming of being an astronaut, I was a huge fan of Mark Shuttleworth. The guy was an astronaut but was also a free software supporter. I knew him from the days when he tried to offer bounties to improve free software like Thunderbird. Without much success. But I was surprised to learn that Mark had also been a Debian developer.
This guy was my hero (and still kinda is). He represented all of my dreams: an astronaut, a Debian developer and a billionaire (in my order of importance). Years later, I would meet him once in the hall of an Ubuntu Summit. He was typing on his laptop, he looked at me, and I could not say anything other than "Hello". And that was it. But I'm proud to say that his hackergotchi on planet.ubuntu.com is still the one I designed for him to celebrate his space flight.
sabdfl's hackergotchi, designed by yours truly
During the spring or early summer of 2004, I received a link to the very first alpha version of no-name-yet. Which, suddenly, had a real name. And I liked it: Ubuntu. I installed Ubuntu on a partition to test it. Quickly, I found myself using it daily, forgetting about my Debian partition. It was brown. Very brown, at first. A bit later, it even got naked people on the login screen (and I defended sabdfl over this controversial decision). Instead of studying for my exams, I started to write lengthy reports about what could be improved, about bugs I had found, etc.
The first Ubuntu login picture, with three half-naked people looking toward the sky. Some alpha versions came with even fewer clothes.
This makes me one of the very few people on earth who started to use Ubuntu with 4.04 (it was not named like that, of course).
Wanting to promote Ubuntu and inspired by Tristan Nitot, I decided to start a blog.
A blog which, coincidentally, was started the very same day the first public Ubuntu was released, exactly twenty years ago.
A blog you are reading right now.
And this was just the beginning…
(to be continued)
Subscribe by email or by rss to get the next episodes of "20 years of Linux on the Desktop".
I'm currently turning this story into a book. I'm looking for an agent or a publisher interested in working with me on this book and on an English translation of "Bikepunk", my new post-apocalyptic cyclist typewritten novel.
As a writer and an engineer, I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.
I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!
October 15, 2024
I'm excited to share an experiment I've been working on: a solar-powered, self-hosted website running on a Raspberry Pi. The website at https://solar.dri.es is powered entirely by a solar panel and battery on our roof deck in Boston.
My solar panel and Raspberry Pi Zero 2 are set up on our rooftop deck for testing. Once it works, it will be mounted properly and permanently.
By visiting https://solar.dri.es, you can dive into all the technical details and lessons learned – from hardware setup to networking configuration and custom monitoring.
As the content on this solar-powered site is likely to evolve or might even disappear over time, I've included the full article below (with minor edits) to ensure that this information is preserved.
Finally, you can view the real-time status of my solar setup on my solar panel dashboard, hosted on my main website. This dashboard stays online even when my solar-powered setup goes offline.
Background
For over two decades, I've been deeply involved in web development. I've worked on everything from simple websites to building and managing some of the internet's largest websites. I've helped create a hosting business that uses thousands of EC2 instances, handling billions of page views every month. This platform includes the latest technology: cloud-native architecture, Kubernetes orchestration, auto-scaling, smart traffic routing, geographic failover, self-healing, and more.
This project is the complete opposite. It's a hobby project focused on sustainable, solar-powered self-hosting. The goal is to use the smallest, most energy-efficient setup possible, even if it means the website goes offline sometimes. Yes, this site may go down on cloudy or cold days. But don't worry! When the sun comes out, the website will be back up, powered by sunshine.
My primary website, https://dri.es, is reliably hosted on Acquia, and I'm very happy with it. However, if this solar-powered setup proves stable and efficient, I might consider moving some content to solar hosting. For instance, I could keep the most important pages on traditional hosting while transferring less essential content – like my 10,000 photos – to a solar-powered server.
Why am I doing this?
This project is driven by my curiosity about making websites and web hosting more environmentally friendly, even on a small scale. It's also a chance to explore a local-first approach: to show that hosting a personal website on your own internet connection at home can often be enough for small sites. This aligns with my commitment to both the Open Web and the IndieWeb.
At its heart, this project is about learning and contributing to a conversation on a greener, local-first future for the web. Inspired by solar-powered sites like LowTech Magazine, I hope to spark similar ideas in others. If this experiment inspires even one person in the web community to rethink hosting and sustainability, I'll consider it a success.
Solar panel and battery
The heart of my solar setup is a 50-watt panel from Voltaic, which captures solar energy and delivers 12-volt output. I store the excess power in an 18 amp-hour Lithium Iron Phosphate (LFP or LiFePO4) battery, also from Voltaic.
I'll never forget the first time I plugged in the solar panel – it felt like pure magic. Seeing the battery spring to life, powered entirely by sunlight, was an exhilarating moment that is hard to put into words. And yes, all this electrifying excitement happened right in our laundry room.
Voltaic's battery system includes a built-in charge controller with Maximum Power Point Tracking (MPPT) technology, which regulates the solar panel's output to optimize battery charging. In addition, the MPPT controller protects the battery from overcharging, extreme temperatures, and short circuits.
A key feature of the charge controller is its ability to stop charging when temperatures fall below 0°C (32°F). This preserves battery health, as charging in freezing conditions can damage the battery cells. As I'll discuss in the Next steps section, this safeguard complicates year-round operation in Boston's harsh winters. I'll likely need a battery that can charge in colder temperatures.
The 12V to 5V voltage converter used to convert the 12V output from the solar panel to 5V for the Raspberry Pi.
I also encountered a voltage mismatch between the 12-volt solar panel output and the Raspberry Pi's 5-volt input requirement. Fortunately, this problem had a straightforward solution: I used a buck converter to step down the voltage. While this conversion introduces some energy loss, it allows me to use a more powerful solar panel.
Raspberry Pi models
This website is currently hosted on a Raspberry Pi Zero 2 W. The main reason for choosing the Raspberry Pi Zero 2 W is its energy efficiency. Consuming just 0.4 watts at idle and up to 1.3 watts under load, it can run on my battery for about a week. This decision is supported by a mathematical uptime model, detailed in Appendix 1.
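As a rough sanity check of that week-long figure (back-of-envelope numbers only, not the actual model from Appendix 1, and assuming a 12 V nominal battery): 18 Ah at 12 V is about 216 Wh of storage, which at roughly 1 W of average draw works out to about nine days before converter losses and cold weather eat into the margin.
$ echo "18 * 12 / (1 * 24)" | bc   # ≈ 9 days of runtime at ~1 W average draw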
That said, the Raspberry Pi Zero 2 W has limitations. Despite its quad-core 1 GHz processor and 512 MB of RAM, it may still struggle with handling heavier website traffic. For this reason, I also considered the Raspberry Pi 4. With its 1.5 GHz quad-core ARM processor and 4 GB of RAM, the Raspberry Pi 4 can handle more traffic. However, this added performance comes at a cost: the Pi 4 consumes roughly five times the power of the Zero 2 W. As shown in Appendix 2, my 50W solar panel and 18Ah battery setup are likely insufficient to power the Raspberry Pi 4 through Boston's winter.
With a single-page website now live on https://solar.dri.es, I'm actively monitoring the real-world performance and uptime of a solar-powered Raspberry Pi Zero 2 W. For now, I'm using the lightest setup that I have available and will upgrade only when needed.
Networking
The Raspberry Pi's built-in Wi-Fi is perfect for our outdoor setup. It wirelessly connects to our home network, so no extra wiring was needed.
I want to call out that my router and Wi-Fi network are not solar-powered; they rely on my existing network setup and conventional power sources. So while the web server itself runs on solar power, other parts of the delivery chain still depend on traditional energy.
Running this website on my home internet connection also means that if my ISP or networking equipment goes down, so does the website – there is no failover in place.
For security reasons, I isolated the Raspberry Pi in its own Virtual Local Area Network (VLAN). This ensures that even if the Pi is compromised, the rest of our home network remains protected.
To make the solar-powered website accessible from the internet, I configured port forwarding on our router. This directs incoming web traffic on port 80 (HTTP) and port 443 (HTTPS) to the Raspberry Pi, enabling external access to the site.
One small challenge was the dynamic nature of our IP address. ISPs typically do not assign fixed IP addresses, meaning our IP address changes from time to time. To keep the website accessible despite these IP address changes, I wrote a small script that looks up our public IP address and updates the DNS record for solar.dri.es on Cloudflare. This script runs every 10 minutes via a cron job.
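The script itself isn't published in this post, but the idea is simple. A hedged sketch of what it could look like, using a public what-is-my-IP service and Cloudflare's DNS records API (the zone ID, record ID, and API token are placeholders, and the real script may differ):
#!/bin/sh
# Look up the current public IP and update the A record for solar.dri.es.
IP=$(curl -s https://api.ipify.org)
curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"solar.dri.es\",\"content\":\"$IP\",\"ttl\":300,\"proxied\":true}"
A crontab entry like */10 * * * * /home/pi/update-dns.sh would then run it every 10 minutes.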
I use Cloudflare's DNS proxy, which handles DNS and offers basic DDoS protection. However, I do not use Cloudflare's caching or CDN features, as that would somewhat defeat the purpose of running this website on solar power and keeping it local-first.
The Raspberry Pi uses Caddy as its web server, which automatically obtains SSL certificates from Let's Encrypt. This setup ensures secure, encrypted HTTP connections to the website.
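Caddy keeps this part of the configuration refreshingly small. A minimal sketch of the kind of Caddyfile involved (the document root is an assumption; Caddy obtains and renews the Let's Encrypt certificate on its own):
solar.dri.es {
    root * /var/www/solar
    file_server
}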
Monitoring and dashboard
The Raspberry Pi 4 (on the left) can run a website, while the RS485 CAN HAT (on the right) will communicate with the charge controller for the solar panel and battery.
One key feature that influenced my decision to go with the Voltaic battery is its RS485 interface for the charge controller. This allowed me to add an RS485 CAN HAT (Hardware Attached on Top) to the Raspberry Pi, enabling communication with the charge controller using the Modbus protocol. In turn, this enabled me to programmatically gather real-time data on the solar panel's output and the battery's status.
I collect data such as battery capacity, power output, temperature, uptime, and more. I send this data to my main website via a web service API, where it's displayed on a dashboard. This setup ensures that key information remains accessible, even if the Raspberry Pi goes offline.
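The web service API itself isn't documented in this post, but pushing the readings as JSON could look something like the sketch below; the endpoint URL, field names, and bearer token are hypothetical.

```python
import requests

# Hypothetical endpoint and token: the real web service API is not public.
DASHBOARD_ENDPOINT = "https://example.com/api/solar-telemetry"
API_TOKEN = "your-api-token"

def push_telemetry(battery_capacity_pct: float, power_output_w: float,
                   temperature_c: float, uptime_s: int) -> None:
    # Send one snapshot of the solar setup's state to the dashboard.
    payload = {
        "battery_capacity": battery_capacity_pct,
        "power_output": power_output_w,
        "temperature": temperature_c,
        "uptime": uptime_s,
    }
    response = requests.post(
        DASHBOARD_ENDPOINT,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json=payload,
        timeout=10,
    )
    response.raise_for_status()
```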
My main website runs on Drupal. The dashboard is powered by a custom module I developed. This module adds a web service endpoint to handle authentication, validate incoming JSON data, and store it in a MariaDB database table. Using the historical data stored in MariaDB, the module generates Scalable Vector Graphics (SVGs) for the dashboard graphs. For more details, check out my post on building a temperature and humidity monitor, which explains a similar setup in much more detail. Sure, I could have used a tool like Grafana, but sometimes building it yourself is part of the fun.
A Raspberry Pi 4 with an attached RS485 CAN HAT module is being installed in a waterproof enclosure.
For more details on the charge controller and some of the issues I've observed, please refer to Appendix 3.
Energy use, cost savings, and environmental impact
When I started this solar-powered website project, I wasn't trying to revolutionize sustainable computing or drastically cut my electricity bill. I was driven by curiosity, a desire to have fun, and a hope that my journey might inspire others to explore local-first or solar-powered hosting.
That said, let's break down the energy consumption and cost savings to get a better sense of the project's impact.
The tiny Raspberry Pi Zero 2 W at the heart of this project uses just 1 Watt on average. This translates to 0.024 kWh daily (1W * 24h / 1000 = 0.024 kWh) and approximately 9 kWh annually (0.024 kWh * 365 days = 8.76 kWh). The cost savings? Looking at our last electricity bill, we pay an average of $0.325 per kWh in Boston. This means the savings amount to $2.85 USD per year (8.76 kWh * $0.325/kWh = $2.85). Not exactly something to write home about.
The environmental impact is similarly modest. Saving 9 kWh per year reduces CO2 emissions by roughly 4 kg, which is about the same as driving 16 kilometers (10 miles) by car.
There are two ways to interpret these numbers. The pessimist might say that the impact of my solar setup is negligible, and they wouldn't be wrong. Offsetting the energy use of a Raspberry Pi Zero 2, which only draws 1 Watt, will never be game-changing. The $2.85 USD saved annually won't come close to covering the cost of the solar panel and battery. In terms of efficiency, this setup isn't a win.
But the optimist in me sees it differently. When you compare my solar-powered setup to traditional website hosting, a more compelling case emerges. Using a low-power Raspberry Pi to host a basic website, rather than large servers in energy-hungry data centers, can greatly cut down on both expenses and environmental impact. Consider this: a Raspberry Pi Zero 2 W costs just $15 USD, and I can power it from mains electricity for only $0.50 USD a month. In contrast, traditional hosting might cost around $20 USD a month. Viewed this way, my setup is both more sustainable and more economical.
Lastly, it's also important to remember that solar power isn't just about saving money or cutting emissions. In remote areas without grid access or during disaster relief, solar can be the only way to keep communication systems running. In a crisis, a small solar setup could make the difference between isolation and staying connected to essential information and support.
Why do so many websites need to stay up?
The reason the energy savings from my solar-powered setup won't offset the equipment costs is that the system is intentionally oversized to keep the website running during extended low-light periods. Once the battery reaches full capacity, any excess energy goes to waste. That is unfortunate, because the surplus could be put to use, and using it would help offset more of the hardware costs.
This inefficiency isn't unique to solar setups – it highlights a bigger issue in web hosting: over-provisioning. The web hosting world is full of mostly idle hardware. Web hosting providers often allocate more resources than necessary to ensure high uptime or failover, and this comes at an environmental cost.
One way to make web hosting more eco-friendly is by allowing non-essential websites to experience more downtime, reducing the need to power as much hardware. Of course, many websites are critical and need to stay up 24/7 – my own work with Acquia is dedicated to ensuring essential sites do just that. But for non-critical websites, allowing some downtime could go a long way in conserving energy.
It may seem unconventional, but I believe it's worth considering: many websites, mine included, aren't mission-critical. The world won't end if they occasionally go offline. That is why I like the idea of hosting my 10,000 photos on a solar-powered Raspberry Pi.
And maybe that is the real takeaway from this experiment so far: to question why our websites and hosting solutions have become so resource-intensive and why we're so focused on keeping non-essential websites from going down. Do we really need 99.9% uptime for personal websites? I don't think so.
Perhaps the best way to make the web more sustainable is to accept more downtime for those websites that aren't critical. By embracing occasional downtime and intentionally under-provisioning non-essential websites, we can make the web a greener, more efficient place.
Next steps
As I continue this experiment, my biggest challenge is the battery's inability to charge in freezing temperatures. As explained, the battery's charge controller includes a safety feature that prevents charging when the temperature drops below freezing. While the Raspberry Pi Zero 2 W can run on my fully charged battery for about six days, this won't be sufficient for Boston winters, where temperatures often remain below freezing for longer.
With winter approaching, I need a solution to charge my battery in extreme cold. Several options to consider include:
- Adding a battery heating system that uses excess energy during peak sunlight hours.
- Applying insulation, though this alone may not suffice since the battery generates minimal heat.
- Replacing the battery with one that charges at temperatures as low as -20°C (-4°F), such as Lithium Titanate (LTO) or certain AGM lead-acid batteries. However, it's not as simple as swapping it out – my current battery has a built-in charge controller, so I'd likely need to add an external charge controller, which would require rewiring the solar panel and updating my monitoring code.
Each solution has trade-offs in cost, safety, and complexity. I'll need to research the different options carefully to ensure safety and reliability.
The last quarter of the year is filled with travel and other commitments, so I may not have time to implement a fix before freezing temperatures hit. With some luck, the current setup might make it through winter. I'll keep monitoring performance and uptime – and, as mentioned, a bit of downtime is acceptable and even part of the fun! That said, the website may go offline for a few weeks and restart after the harshest part of winter. Meanwhile, I can focus on other aspects of the project.
For example, I plan to expand this single-page site into one with hundreds or even thousands of pages. Here are a few things I'd like to explore:
- Testing Drupal on a Raspberry Pi Zero 2 W: As the founder and project lead of Drupal, I naturally run my main website on Drupal. I'm curious to see if Drupal can actually run on a Raspberry Pi Zero 2 W. The answer might be "probably not", but I'm eager to try.
- Upgrading to a Raspberry Pi 4 or 5: I'd like to experiment with upgrading to a Raspberry Pi 4 or 5, as I know it could run Drupal. As noted in Appendix 2, this might push the limits of my solar panel and battery. There are some optimization options to explore though, like disabling CPU cores, lowering the RAM clock speed, and dynamically adjusting features based on sunlight and battery levels.
- Creating a static version of my site: I'm interested in experimenting with a static version of https://dri.es. A static site doesn't require PHP or MySQL, which would likely reduce resource demands and make it easier to run on a Raspberry Pi Zero 2 W. However, dynamic features like my solar dashboard depend on PHP and MySQL, so I'd potentially need alternative solutions for those. Tools like Tome and QuantCDN offer ways to generate static versions of Drupal sites, but I've never tested these myself. Although I prefer keeping my site dynamic, creating a static version also aligns with my interests in digital preservation and archiving, offering me a chance to delve deeper into these concepts.
Either way, it looks like I'll have some fun ahead. I can explore these ideas from my office while the Raspberry Pi Zero 2 W continues running on the roof deck. I'm open to suggestions and happy to share notes with others interested in similar projects. If you'd like to stay updated on my progress, you can sign up to receive new posts by email or subscribe via RSS. Feel free to email me at dries@buytaert.net. Your ideas, input, and curiosity are always welcome.
Appendix
Appendix 1: Sizing a solar panel and battery for a Raspberry Pi Zero 2 W
To keep the Raspberry Pi Zero 2 W running in various weather conditions, we need to estimate the ideal solar panel and battery size. We'll base this on factors like power consumption, available sunlight, and desired uptime.
The Raspberry Pi Zero 2 W is very energy-efficient, consuming only 0.4W at idle and up to 1.3W under load. For simplicity, we'll assume an average power consumption of 1W, which totals 24Wh per day (1W * 24 hours).
We also need to account for energy losses due to inefficiencies in the solar panel, charge controller, battery, and inverter. Assuming a total loss of 30%, our estimated daily energy requirement is 24Wh / 0.7 ≈ 34.3Wh.
In Boston, peak sunlight varies throughout the year, averaging 5-6 hours per day in summer (June-August) and only 2-3 hours per day in winter (December-February). Peak sunlight refers to the strongest, most direct sunlight hours. Basing the design on peak sunlight hours rather than total daylight hours provides a margin of safety.
To produce 34.3Wh in the winter, with only 2 hours of peak sunlight, the solar panel should generate about 17.15W (34.3Wh / 2 hours ≈ 17.15W). As mentioned, my current setup includes a 50W solar panel, which provides well above the estimated 17.15W requirement.
Now, let's look at battery sizing. As explained, I have an 18Ah battery, which provides about 216Wh of capacity (18Ah * 12V = 216Wh). If there were no sunlight at all, this battery could power the Raspberry Pi Zero 2 W for roughly 6 days (216Wh / 34.3Wh per day ≈ 6.3 days), ensuring continuous operation even on snowy winter days.
These estimates suggest that I could halve both my 50W solar panel and 18Ah battery to a 25W panel and a 9Ah battery, and still meet the Raspberry Pi Zero 2 W's power needs during Boston winters. However, I chose the 50W panel and larger battery for flexibility, in case I need to upgrade to a more powerful board with higher energy requirements.
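To make this arithmetic easy to replay, here is a small sketch of the same sizing calculation. The numbers mirror the assumptions in this appendix; note that, as in this appendix, the autonomy estimate uses the loss-adjusted daily figure, which is slightly more conservative than the rougher battery estimate in Appendix 2.

```python
def solar_sizing(avg_power_w: float, system_loss: float,
                 peak_sun_hours: float, battery_wh: float) -> None:
    # Daily energy need, adjusted for losses in the panel, controller, and battery.
    daily_wh = avg_power_w * 24
    adjusted_wh = daily_wh / (1 - system_loss)

    # Panel wattage needed to harvest that energy during the peak sunlight window,
    # and how long the battery alone could carry the load with no sun at all.
    panel_w = adjusted_wh / peak_sun_hours
    autonomy_days = battery_wh / adjusted_wh

    print(f"daily need: {adjusted_wh:.1f} Wh, "
          f"panel: {panel_w:.1f} W, autonomy: {autonomy_days:.1f} days")

# Raspberry Pi Zero 2 W in a Boston winter: 1 W average draw, 30% system losses,
# 2 peak sun hours, and an 18 Ah * 12 V = 216 Wh battery.
solar_sizing(avg_power_w=1.0, system_loss=0.30, peak_sun_hours=2, battery_wh=216)

# The Raspberry Pi 4 estimate from Appendix 2 uses a 4.5 W average draw instead.
solar_sizing(avg_power_w=4.5, system_loss=0.30, peak_sun_hours=2, battery_wh=216)
```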
Appendix 2: Sizing a solar panel and battery for a Raspberry Pi 4
If I need to switch to a Raspberry Pi 4 to handle increased website traffic, the power requirements will rise significantly. The Raspberry Pi 4 consumes around 3.4W at idle and up to 7.6W under load. For estimation purposes, I'll assume an average consumption of 4.5W, which totals 108Wh per day (4.5W * 24 hours = 108Wh).
Factoring in a 30% loss due to system inefficiencies, the adjusted daily energy requirement increases to approximately 154.3Wh (108Wh / 0.7 ≈ 154.3Wh). To meet this demand during winter, with only 2 hours of peak sunlight, the solar panel would need to produce about 77.15W (154.3Wh / 2 hours ≈ 77.15W).
While some margin of safety is built into my calculations, this likely means my current 50W solar panel and 216Wh battery are insufficient to power a Raspberry Pi 4 during a Boston winter.
For example, with an average power draw of 4.5W, the Raspberry Pi 4 requires 108Wh daily. In winter, if the solar panel generates only 70 to 105Wh per day, there would be a shortfall of 3 to 38Wh each day, which the battery would need to cover. And with no sunlight at all, a fully charged 216Wh battery would keep the system running for about 2 days (216Wh / 108Wh per day ≈ 2 days) before depleting.
To ensure reliable operation, a 100W solar panel, capable of generating enough power with just 2 hours of winter sunlight, paired with a 35Ah battery providing 420Wh, could be better. This setup, roughly double my current capacity, would offer sufficient backup to keep the Raspberry Pi 4 running for 3-4 days without sunlight.
Appendix 3: Observations on the Lumiax charge controller
As I mentioned earlier, my battery has a built-in charge controller. The brand of the controller is Lumiax, and I can access its data programmatically. While the controller excels at managing charging, its metering capabilities feel less robust. Here are a few observations:
- I reviewed the charge controller's manual to clarify how it defines and measures different currents, but the information provided was insufficient.
- The charge controller allows monitoring of the "solar current" (register 12367). I expected this to measure the current flowing from the solar panel to the charge controller, but it actually measures the current flowing from the charge controller to the battery. In other words, it tracks the "useful current" – the current from the solar panel used to charge the battery or power the load. The problem with this is that when the battery is fully charged, the controller reduces the current from the solar panel to prevent overcharging, even though the panel could produce more. As a result, I can't accurately measure the maximum power output of the solar panel. For example, in full sunlight with a fully charged battery, the calculated power output could be as low as 2W, even though the solar panel is capable of producing 50W.
- The controller also reports the "battery current" (register 12359), which appears to represent the current flowing from the battery to the Raspberry Pi. I believe this to be the case because the "battery current" turns negative at night, indicating discharge.
- Additionally, the controller reports the "load current" (register 12362), which, in my case, consistently reads zero. This is odd because my Raspberry Pi Zero 2 typically draws between 0.1-0.3A. Even with a Raspberry Pi 4, drawing between 0.6-1.3A, the controller still reports 0A. This could be a bug or suggest that the charge controller lacks sufficient accuracy.
- When the battery discharges and the low voltage protection activates, it shuts down the Raspberry Pi as expected. However, if there isn't enough sunlight to recharge the battery within a certain timeframe, the Raspberry Pi does not automatically reboot. Instead, I must perform a manual 'factory reset' of the charge controller. This involves connecting my laptop to the controller – a cumbersome process that requires me to disconnect the Raspberry Pi, open its waterproof enclosure, detach the RS485 hat wires, connect them to a USB-to-RS485 adapter for my laptop, and run a custom Python script. Afterward, I have to reverse the entire process. This procedure can't be performed while traveling as it requires physical access.
- The charge controller has two temperature sensors: one for the environment and one for the controller itself. However, the controller's temperature readings often seem inaccurate. For example, while the environment temperature might correctly register at 24°C, the controller could display a reading as low as 14°C. This seems questionable though there might be an explanation that I'm overlooking.
- The battery's charge and discharge patterns are non-linear, meaning the charge level may drop rapidly at first, then stay steady for hours. For example, I've seen it drop from 100% to 65% within an hour but remain at 65% for over six hours. This is common for LFP batteries due to their voltage characteristics. Some advanced charge controllers use look-up tables, algorithms, or coulomb counting to more accurately predict the state of charge based on the battery type and usage patterns. The Lumiax doesn't support this, but I might be able to implement coulomb counting myself by tracking the current flow to improve charge level estimates.
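As a rough illustration of that last idea, coulomb counting simply integrates the battery current over time to estimate how much charge has flowed in or out. The sketch below assumes a read_battery_current_amps() helper (for example, built on the Modbus register described above) and treats the battery's nominal 18 Ah as its usable capacity, which is a simplification.

```python
import time

BATTERY_CAPACITY_AH = 18.0  # nominal capacity; usable capacity will be lower
SAMPLE_INTERVAL_S = 10

def coulomb_counter(read_battery_current_amps, initial_soc: float = 1.0) -> None:
    # Estimate state of charge by integrating current over time.
    # Positive current means charging, negative means discharging, matching
    # the sign convention observed on the "battery current" register.
    soc = initial_soc
    while True:
        amps = read_battery_current_amps()
        soc += (amps * SAMPLE_INTERVAL_S / 3600.0) / BATTERY_CAPACITY_AH
        soc = max(0.0, min(1.0, soc))  # clamp to a sensible range
        print(f"estimated state of charge: {soc * 100:.1f}%")
        time.sleep(SAMPLE_INTERVAL_S)
```

In practice you would also correct for charging inefficiency and re-anchor the estimate whenever the controller reports a full battery, but even this naive version would smooth out the jumpy percentage readings.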
Appendix 4: When size matters (but it's best not to mention it)
When buying a solar panel, sometimes it's easier to beg for forgiveness than to ask for permission.
One day, I casually mentioned to my wife, "Oh, by the way, I bought something. It will arrive in a few days."
"What did you buy?", she asked, eyebrow raised.
"A solar panel", I said, trying to sound casual.
"A what?!", she asked again, her voice rising.
"Don't worry!", I reassured her. "It's not that big", I said, gesturing with my hands to show a panel about the size of a laptop.
She looked skeptical but didn't push further.
Fast forward to delivery day. As I unboxed it, her eyes widened in surprise. The panel was easily four or five times larger than what I'd shown her. Oops.
The takeaway? Sometimes a little underestimation goes a long way.
Bikepunk, les chroniques du flash
Twenty years after the flash, the catastrophe that decimated humanity, young Gaïa has only one way to escape the stifling community she grew up in: hop on her bike and pedal off alongside Thy, an old cycling hermit.
To survive in this devastated world, where any form of electricity is impossible, where cyclists are hunted down, and where fertile young women are in high demand, Gaïa and Thy can count on nothing but their mastery of the handlebars.
My new novel, "Bikepunk, les chroniques du flash", is now available in every bookshop in France, Belgium, and Switzerland, or online from the publisher's website.
I have so many anecdotes to tell about this very personal project, typed entirely on a mechanical typewriter and born from my passion for exploring by bike. I have so much to tell you about my relationship with Gaïa and Thy, and about the chronicles of the flash that punctuate the story like blog posts from a future where electricity, gasoline, and the Internet have suddenly disappeared.
An eco-cycling fable or an action thriller? A philosophical reflection or Mad Max on a bike?
I'll let you choose; I'll let you discover it.
But I must warn you:
You may never again dare to set foot in an electric car…
Welcome to the Bikepunk universe!
Cover of the book Bikepunk: a recycled-paper cyclist races down a slope in a devastated, entirely silver-toned city.
I'm Ploum and I've just published Bikepunk, an eco-cycling fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!
Receive my writing in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.
October 11, 2024
Only madmen attempt the climb
Have I already told you about Bruno Leyval, the artist who created the cover and the illustration for Bikepunk? Of course I have, remember. I even turned one of his photos into a text.
I love working with Bruno. He is a consummate professional who nevertheless embodies the archetype of the mad artist. To discover him, I recommend this wonderful interview.
The artist is a madman, with that revolutionary, necessary, indispensable madness whose praises Erasmus was already singing. Dangerous insanity, on the other hand, is generated by fame. A fame Bruno has known, and fled, with fascinating discernment:
I have experienced the effects of artistic recognition and I can tell you that you need a solid head not to tip over into madness. You find yourself plunged into a parallel world where everyone kisses your ass, and the worst part is that it feels good!
That is the artist's paradox: either he goes unrecognized and his message fades away with little reach, or he becomes famous and his message turns into a vile commercial concoction. The message is thoroughly corrupted by the time it reaches the public (which, through its idolatrous adoration, bears much of the responsibility for that corruption).
The worst victims of this insanity are surely those who experience it without the slightest ounce of artistic madness. Those who are corrupted from the start, because they have no message at all beyond the quest for fame. Politicians.
Humanity's gangrene may well be this inescapable glorification of those on a compulsive quest for fame, this handing of power to people obsessed with power. Faced with this disease, art and madness seem to me the only reasonable ways to fight back.
I love Bruno because we are completely different. He is a shaman, spiritual, mystical, a worker with his hands. I am a hyper-rational, scientific engineer grafted onto a keyboard. Each of us keeps the other awake with his snoring, literally and figuratively.
And yet, once you strip away the costumes (body paint and a bison head for him, a white lab coat and a test tube for me), you discover remarkably similar ways of thinking. We are both climbing the same mountain, just by opposite paths.
Reading Bruno, and being lucky enough to share moments with him, I think that mountain is called wisdom.
It is a mountain whose summit no one has ever reached. A mountain from which no climber has ever returned alive.
Only madmen attempt the climb.
Only madmen, in this world, are in search of wisdom.
Which is precisely why we call them mad.
When you open the book Bikepunk, once past the cover, before the first turns of the wheel, before even the first printed letter, an image, a mystical sensation may strike you the way it strikes me every single time: Bruno's spiritual madness reflecting back my own mechanico-scientistic madness.
You are about to enter, and to share, our madness.
That's it: you are mad now.
Welcome to our mountain!
- Illustration: "La montagne sacrée", Bruno Leyval, Creative Commons BY-NC-ND 4.0
October 08, 2024
Bicycle and typewriter: a little eulogy for having enough
What if we had enough?
One of the first readers of Bikepunk has the impression of recognizing the character of Thy in Grant Petersen, a maker of indestructible, anti-competitive bicycles.
That touches me and reassures me about the message I am trying to carry in the book. Grant Petersen could perfectly well wear the adage I put at the conclusion of Bikepunk: "I don't fear the end of the world, I have a bicycle."
Grant Petersen's blog is fascinating. His latest post includes, among other things, a photo of a newspaper containing an interview with José Mujica, the former president of Uruguay (the one who lived in his small house and walked to his job as president).
One sentence struck me:
Take Uruguay. Uruguay has 3.5 million inhabitants. Yet it imports 27 million pairs of shoes. We produce garbage and suffer at work. For what?
It is something I was already trying to articulate in my 2015 TEDx talk, which is very popular on YouTube (but I get no benefit from that, so here is the PeerTube link).
Production literally means turning our environment into waste (the intermediate consumption step being very brief). All our factories, all our shopping malls, all our marketing experts do only one thing: try to convert our planet into waste as quickly as possible.
At what point will we decide that we have enough?
The "anti-waste, built to last" side is one of the things that fascinates me about typewriters. They are no longer manufactured, yet they are so durable that all the ones ever made are enough, today, to supply a small (but real) market.
In fact, typewriters were built to last to such a degree that, during the Second World War, manufacturers bought them back from private owners in order to supply the army, which needed them in huge quantities.
I find a fascinating common thread between the thinking of Grant Petersen on bicycles, José Mujica on politics, Paul Watson on marine wildlife, Richard Stallman on software, Richard Polt on typewriters, and even Bruno Leyval on graphic art. All of them, besides being bearded white men, have rejected the very principle of growth. Each, at his own level, tries to ask the question: "What if we had enough?"
They are perceived as extremists. But isn't the endless quest for fame and wealth the truly morbid extreme?
In a poetic ideal, I realize that I need nothing more than a bicycle, a typewriter, and a good library. Funny, that reminds me of a novel on exactly that subject coming out on October 15. Have I mentioned it already? Have you preordered it yet?
- Order Bikepunk from your bookshop (France)
- Order Bikepunk from your bookshop (Belgium)
- Order Bikepunk if you have no bookshop nearby
Addendum on book preorders
The reason these preorders matter is that, unlike some publishing giants, my publisher PVH cannot afford to finance piles of 30 books in every bookshop to push booksellers into trying to sell them, even if it means pulping half of them afterwards (which is what will happen to half of the huge pile of bestsellers at the entrance of your bookshop). For most lesser-known books, the bookseller will therefore have none in stock, or will take a single copy. Which is how a little-known book stays little known.
If, on the other hand, a bookseller receives two or three preorders for a book that hasn't come out yet, their curiosity is piqued. They may order more copies from the distributor. They will probably read it and, sometimes, put it on display (a supreme honor my short-story collection "Stagiaire au spatioport Omega 3000" was lucky enough to receive at the lovely Maruani bookshop and tea room in Paris).
A few booksellers ordering a particular title catches the attention of the distributor and the sales network, who decide the book has potential and talk about it with other booksellers, who read it and, sometimes, put it on display. And the loop is closed…
It is, of course, less effective than printing 100,000 copies in one go, selling them in supermarkets, and plastering the metro with posters bearing grandiloquent phrases pulled out of thin air: "The latest must-read from Ploum, the new master of the cycling thriller."
We can agree that ads in the metro are ugly. In any case, stories about typewriters and bicycles probably won't interest 100,000 people (although my publisher wouldn't mind…).
But I think it could genuinely reach those who love browsing bookshops, their bike leaning against the storefront, and whose eye might be caught by a shiny, flashy cover on which a warm, recycled-paper cyclist stands out…
Those who love settling down with a book and who sometimes wonder: "What if we had enough?"
October 05, 2024
Proton is a compatibility layer that lets Windows games run on Linux. Running a Windows game is mostly just a matter of hitting the Play button within Steam. It's so good that many games now run faster on Linux than on native Windows. That's what makes the Steam Deck the best gaming handheld of the moment.
But a compatibility layer is still a layer, so you may encounter… incompatibilities. Ember Knights is a lovely game with fun co-op multiplayer support. It runs perfectly on the (Linux-based) Steam Deck, but on my Ubuntu laptop I ran into long loading times (startup took 5 minutes and loading between worlds was slow). Once the game was loaded, though, it ran fine.
Debugging the game revealed lots of EAGAIN errors while the game was trying to access the system clock. Changing the number of allowed open files fixed the problem for me.
Add the following to the end of these files:
- in /etc/security/limits.conf:
  * hard nofile 1048576
- in /etc/systemd/system.conf and /etc/systemd/user.conf:
  DefaultLimitNOFILE=1048576
Reboot.
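Not part of the original fix, but if you want to confirm that the new limit actually applies after the reboot, a quick check from Python works (ulimit -n in a shell tells you much the same thing):

```python
import resource

# Print the soft and hard limits on open file descriptors for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open file limit: soft={soft}, hard={hard}")
```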
“The Witcher 3: Wild Hunt” is considered to be one of the greatest video games of all time. I certainly agree with that sentiment.
At its core, The Witcher 3 is an action role-playing game with a third-person perspective in a huge open world. You develop your character as the story advances, and at the same time you can freely roam and explore as much as you like. The main story is captivating and the world is filled with side quests and lots of interesting people. Fun for at least 200 hours, if you're the exploring kind. If you're not, the base game (without DLCs) will still take you 50 hours to finish.
While similar to other great games like Nintendo's Zelda: Breath of the Wild and Sony's Horizon Zero Dawn, the strength of the game is its deep lore, originating from the Witcher novels written by the "Polish Tolkien", Andrzej Sapkowski. It's not just a game, but a universe (nowadays it even includes a Netflix TV series).
A must play.
Played on the Steam Deck without any issues (“Steam Deck Verified”)
October 03, 2024
A couple of months after reaching 1600, I hit another milestone: Elo 1700!
When I reached an Elo rating of 1600, I expected the climb to get more difficult. Surprisingly, moving up to 1700 was easier than I thought.
I stuck with my main openings but added a few new variations. For example, I started using the "Queen's Gambit Declined: Cambridge Springs Defense" against White's 1. d4 – a name that, just six months ago, might as well have been a spell from Harry Potter. Despite expanding my opening repertoire, my opening knowledge remains limited, and I tend to stick to the few openings I know well.
A key challenge I keep facing is what my chess coach calls "pattern recognition", the ability to instantly recognize common tactical setups and positional themes. Vanessa, my wife, would almost certainly agree with that – she's great at spotting patterns in my life that I completely miss. To work on this blind spot, I've made solving chess puzzles a daily habit.
What has really started to make sense for me is understanding pawn breaks and how to create weaknesses in my opponent's position – while avoiding creating weaknesses in my own. These concepts are becoming clearer, and I feel like I'm seeing the board better.
Next stop: Elo 1800.
Invitation to the launch of the novel Bikepunk and the 20th anniversary of Ploum.net
To celebrate the release of his new novel "Bikepunk, les chroniques du flash" on October 15 and the 20th anniversary of the Ploum.net blog on October 20, Ploum is pleased to invite:
.........................(add your name here, dear reader)
to a signing session, a meet-up, and a friendly drink in two different settings.
Brussels, October 12
October 12, from 4 pm, at the brand-new Chimères bookshop in Brussels.
404 avenue de la Couronne
1050 Ixelles
(an address that computer scientists may find impossible to locate)
Paris, October 18
October 18 at 5:30 pm at the À Livr'Ouvert bookshop in Paris
171b Boulevard Voltaire
75011 Paris
In both cases, admission is free; family, friends, and companions are all welcome.
Cycling attire encouraged.
Elsewhere in the French-speaking world
The French-speaking world is not limited to Brussels and Paris. If you are a bookseller, or if you know great independent bookshops, order Bikepunk there now (ISBN: 978-2-88979-009-8) and don't hesitate to put us in touch to organize a signing session. Or even, why not, a signing session in a bike shop?
If you don't have a bookshop at hand, order directly from the publisher's website.
Looking forward to meeting you, in chain and in pedals,
Bikepunkly yours…
October 02, 2024
Recently, a public dispute has emerged between WordPress co-founder Matt Mullenweg and hosting company WP Engine. Matt has accused WP Engine of misleading users through its branding and profiting from WordPress without adequately contributing back to the project.
As the Founder and Project Lead of Drupal, another major open source Content Management System (CMS), I hesitated to weigh in on this debate, as this could be perceived as opportunistic. In the end, I decided to share my perspective because this conflict affects the broader open source community.
I've known Matt Mullenweg since the early days, and we've grown both our open source projects and companies alongside each other. With our shared interests and backgrounds, I consider Matt a good friend and can relate uniquely to him. Equally valuable to me are my relationships with WP Engine's leadership, including CEO Heather Brunner and Founder Jason Cohen, both of whom I've met several times. I have deep admiration for what they've achieved with WP Engine.
Although this post was prompted by the controversy between Automattic and WP Engine, it is not about them. I don't have insight into their respective contributions to WordPress, and I'm not here to judge. I've made an effort to keep this post as neutral as possible.
Instead, this post is about two key challenges that many open source projects face:
- The imbalance between major contributors and those who contribute minimally, and how this harms open source communities.
- The lack of an environment that supports the fair coexistence of open source businesses.
These issues could discourage entrepreneurs from starting open source businesses, which could harm the future of open source. My goal is to spark a constructive dialogue on creating a more equitable and sustainable open source ecosystem. By solving these challenges, we can build a stronger future for open source.
This post explores the "Maker-Taker problem" in open source, using Drupal's contribution credit system as a model for fairly incentivizing and recognizing contributors. It suggests how WordPress and other open source projects could benefit from adopting a similar system. While this is unsolicited advice, I believe this approach could help the WordPress community heal, rebuild trust, and advance open source productively for everyone.
The Maker-Taker problem
At the heart of this issue is the Maker-Taker problem, where creators of open source software ("Makers") see their work being used by others, often service providers, who profit from it without contributing back in a meaningful or fair way ("Takers").
Five years ago, I wrote a blog post called Balancing Makers and Takers to scale and sustain Open Source, where I defined these concepts:
The difference between Makers and Takers is not always 100% clear, but as a rule of thumb, Makers directly invest in growing both their business and the open source project. Takers are solely focused on growing their business and let others take care of the open source project they rely on.
In that post, I also explain how Takers can harm open source projects. By not contributing back meaningfully, Takers gain an unfair advantage over Makers who support the open source project. This can discourage Makers from keeping their level of contribution up, as they need to divert resources to stay competitive, which can ultimately hurt the health and growth of the project:
Takers harm open source projects. An aggressive Taker can induce Makers to behave in a more selfish manner and reduce or stop their contributions to open source altogether. Takers can turn Makers into Takers.
Solving the Maker-Taker challenge is one of the biggest remaining hurdles in open source. Successfully addressing this could lead to the creation of tens of thousands of new open source businesses while also improving the sustainability, growth, and competitiveness of open source – making a positive impact on the world.
Drupal's approach: the Contribution Credit System
In Drupal, we've adopted a positive approach to encourage organizations to become Makers rather than relying on punitive measures. Our approach stems from a key insight, also explained in my Makers and Takers blog post: customers are a "common good" for an open source project, not a "public good".
Since a customer can choose only one service provider, that choice directly impacts the health of the open source project. When a customer selects a Maker, part of their revenue is reinvested into the project. However, if they choose a Taker, the project sees little to no benefit. This means that open source projects grow faster when commercial work flows to Makers and away from Takers.
For this reason, it's crucial for an open source community to:
- Clearly identify the Makers and Takers within their ecosystem
- Actively support and promote their Makers
- Educate end users about the importance of choosing Makers
To address these needs and solve the Maker-Taker problem in Drupal, I proposed a contribution credit system 10 years ago. The concept was straightforward: incentivize organizations to contribute to Drupal by giving them tangible recognition for their efforts.
We've since implemented this system in partnership with the Drupal Association, our non-profit organization. The Drupal Association transparently tracks contributions from both individuals and organizations. Each contribution earns credits, and the more you contribute, the more visibility you gain on Drupal.org (visited by millions monthly) and at events like DrupalCon (attended by thousands). You can earn credits by contributing code, submitting case studies, organizing events, writing documentation, financially supporting the Drupal Association, and more.
A screenshot of an issue comment on Drupal.org. You can see that jamadar worked on this patch as a volunteer, but also as part of his day job working for TATA Consultancy Services on behalf of their customer, Pfizer.
Drupal's credit system is unique and groundbreaking within the Open Source community. The Drupal contribution credit system serves two key purposes: it helps us identify who our Makers and Takers are, and it allows us to guide end users towards doing business with our Makers.
Here is how we accomplish this:
- Certain benefits, like event sponsorships or advertising on Drupal.org, are reserved for organizations with a minimum number of credits.
- The Drupal marketplace only lists Makers, ranking them by their contributions. Organizations that stop contributing gradually drop in ranking and are eventually removed.
- We encourage end users to require open source contributions from their vendors. Drupal users like Pfizer and the State of Georgia only allow Makers to apply in their vendor selection process.
Governance and fairness
To make sure the contribution credit system is fair, it benefits from oversight by an independent, neutral party.
In the Drupal ecosystem, the Drupal Association fulfills this crucial role. The Drupal Association operates independently, free from control by any single company within the Drupal ecosystem. Some of the Drupal Association's responsibilities include:
- Organizing DrupalCons
- Managing Drupal.org
- Overseeing the contribution tracking and credit system
It's important to note that while I serve on the Drupal Association's Board, I am just one of 12 members and have not held the Chair position for several years. My company, Acquia, receives no preferential treatment in the credit system; the visibility of any organization, including Acquia, is solely determined by its contributions over the preceding twelve months. This structure ensures fairness and encourages active participation from all members of the Drupal community.
Drupal's credit system certainly isn't perfect. It is hard to accurately track and fairly value diverse contributions like code, documentation, mentorship, marketing, event organization, etc. Some organizations have tried to game the system, while others question whether the cost-benefit is worthwhile.
As a result, Drupal's credit system has evolved significantly since I first proposed it ten years ago. The Drupal Association continually works to improve the system, aiming for a credit structure that genuinely drives positive behavior.
Recommendations for WordPress
WordPress has already taken steps to address the Maker-Taker challenge through initiatives like the Five for the Future program, which encourages organizations to contribute 5% of their resources to WordPress development.
Building on this foundation, I believe WordPress could benefit from adopting a contribution credit system similar to Drupal's. This system would likely require the following steps to be taken:
- Expanding the current governance model to be more distributed.
- Providing clear definitions of Makers and Takers within the ecosystem.
- Implementing a fair and objective system for tracking and valuing various types of contributions.
- Implementing a structured system of rewards for Makers who meet specific contribution thresholds, such as priority placement in the WordPress marketplace, increased visibility on WordPress.org, opportunities to exhibit at WordPress events, or access to key services.
This approach addresses both key challenges highlighted in the introduction: it balances contributions by incentivizing major involvement, and it creates an environment where open source businesses of all sizes can compete fairly based on their contributions to the community.
Conclusion
Addressing the Maker-Taker challenge is essential for the long-term sustainability of open source projects. Drupal's approach may provide a constructive solution not just for WordPress, but for other communities facing similar issues.
By transparently rewarding contributions and fostering collaboration, we can build healthier open source ecosystems. A credit system can help make open source more sustainable and fair, driving growth, competitiveness, and potentially creating thousands of new open source businesses.
As Drupal continues to improve its credit system, we understand that no solution is perfect. We're eager to learn from the successes and challenges of other open source projects and are open to ideas and collaboration.
October 01, 2024
Ode to the losers
Is the Fediverse for losers? wonders Andy Wingo.
The question is provocative and clever: the Fediverse seems to be a den of environmentalists, free-software advocates, social-rights defenders, feminists, and cyclists. In short, a list of all those who are not in the spotlight, who seem to be "losing".
I had never looked at things from that angle. To me, the common thread is above all a desire to change things. And, by definition, if you want to change things, it's because you are not satisfied with the current situation. So you are a "loser". In fact, every revolutionary is, by definition, a loser. As soon as they win, they are no longer a revolutionary, but a person in power!
Being progressive therefore implies being seen as a loser through Andy's filter. Progress is born of dissatisfaction. The world will never be perfect; it will always need improving, there will always be a fight to wage. But not everyone has the energy to fight all the time. That's human, and that's as it should be. When I lack the energy, when I am not a revolutionary, I focus on a minimal goal: not being an obstacle to those who lead the fight.
I'm not a vegetarian, but when a restaurant offers vegetarian menus, I compliment the staff on the initiative. It's not much, but at least I'm sending a signal.
If you don't have the strength to leave the proprietary social networks, encourage those who do, keep an account on the free networks, and treat them as equals of the others. You won't be a revolutionary, but at least you won't be an obstacle.
The cost of rebellion
I'm aware that not everyone has the luxury of being a rebel. I teach my students that they will hold one of the best degrees on the market, that companies fight to hire them (I know; I've recruited for my own teams). That they are therefore, for the most part, immune to long-term unemployment. And that luxury comes with a moral responsibility: to say "No!" when our conscience demands it.
Besides, I've said "No!" often enough (and not always very politely) to know that the risk is really minimal when you are a white man with a fancy degree.
Note that I'm not trying to tell you what is right or wrong, or how you should think. That's your business. I just think the world would be infinitely better if workers refused to do what goes against their conscience. And stop telling yourself stories, stop lying to yourself: no, you do not have a "mission". You are just helping to build a system for selling crap so that you can buy your own crap.
That said, saying "No" isn't always possible. Because we really have something to lose. Or to gain. Because the moral compromise seems necessary or advantageous. Once again, I'm not judging. We all make moral compromises. The point is to be aware of them. To each their own.
But when a head-on "No" isn't possible, there is still passive rebellion by playing dumb. It's a technique that works really well. Sometimes it even works better than open opposition.
It consists of asking questions:
"Is that even legal? It does seem a bit odd to me, doesn't it?"
"Can you walk me through it? Because right now I get the feeling we're being asked to do something really disgusting, no?"
"That doesn't seem very ethical toward the customers, or the planet. What do you think?"
This kind of questioning works best when both parties are willing to let themselves be convinced. Unfortunately, that is all too rare…
Be losers: consume independent culture
You can also be a rebel in what you consume and, more specifically, in the culture you consume.
To avoid looking like a loser, our reflex is to want to do what everyone else does. If a book or a film is successful, we are more likely to buy it. Just look at the "already one million readers" banners on bestseller covers to understand that it sells. I must be an exception, as my housemates demonstrated to me 25 years ago:
"Hey, are you watching a movie? What is it?"
"You won't like it, Ploum."
"Why wouldn't I like it?"
"Because everyone likes this movie."
The "Relay" train-station bookshops, owned by Bolloré, even have shelves displaying the top 10 bestsellers. Note that this top 10 is entirely fictitious. To be part of it, you just have to pay. The shop staff receive orders about which books to put on the shelf. More generally, I invite you to question every "top seller" or "top 50" list. You will discover just how arbitrary, commercial, and easily manipulated these rankings are as soon as someone has a bit of money to invest. Sometimes the "top" is simply a registered trademark, like "Products of the Year": you just pay to put the badge on your packaging (one day I'll tell you that story in detail). Not to mention the influence of advertising or of media outlets which, even if they claim otherwise, are bought through the connections of press officers. I recently came across, in a Belgian public-service media outlet, a glowing review of the film "L'Heureuse Élue". I haven't seen the film, but judging by the trailer, the idea that it is good seems extremely unlikely. Add to that the fact that the review points out no flaws, repeatedly urges you to go see it as soon as possible, and is unsigned, and you have plenty of clues that you are looking at (barely) disguised advertising (and completely illegal advertising at that, on a public-service outlet no less).
But sometimes it is a bit more subtle. In some cases, so subtle that the journalist himself has not realized how much he is being manipulated (given the rates paid per article, the good journalists are mostly elsewhere). The press is now little more than an advertising echo chamber, mainly for the benefit of football team sponsors. You just have to be aware of it.
Being a rebel, for me, therefore means deliberately choosing to take an interest in independent, off-beat culture. Going to concerts by local bands nobody knows. Buying books at festivals from self-published authors. Taking an interest in everything that has no advertising budget.
At the last Trolls & Légendes festival, I met Morgan The Slug and bought the two volumes of his comic "Rescue The Princess". It is clearly not great comic art. But I can see the author evolving as the pages go by and, to my great surprise, my son loves it. The moments we spent reading the two volumes together would not have been any better with more commercial comics. We are both waiting for volume 3!
- Morgan The Slug's shop on Ko-fi.
- Alternalivre, an online shop dedicated to books that have no distributor
- Croafunding, a brick-and-mortar bookshop in Lille dedicated to self-publishing
Be losers: stop following the proprietary networks
It has now been years since I deleted my accounts on every social network (with the exception of Mastodon). I regularly see feedback from people who have done the same and, every time, I read exactly what I felt myself.
I noticed a low level anxiety, that I had not even known was there, disappear. I'm less and less inclined to join in the hyper-polarization that dominates so much of online dialog. I underestimated the psychological effects of social media on how I think.
(excerpt from ATYH's gemlog)
Being exposed all day long to a constant stream of images, videos, and short texts creates permanent stress, an urge to conform, to react, to take a position.
The Sleeping Giants collective shows what it is like to be subscribed to the feeds of the far-right sphere.
Personally, I don't find the article very interesting, because it teaches me nothing. It all seems obvious to me, said and said again. Fascists live in a bubble whose whole point is to scare yourself (a Black or Arab man beats up a white man) and to reassure yourself (the white man beats up the Arab; yes, it's subtle, try to keep up).
But on social networks we are all in a bubble. We all have a bubble that both frightens us (politics, wars, climate change…) and reassures us (the opening of a new bike path, cute cats). We are permanently riding the same emotional roller coaster and wrecking our brains, the better to consume in order to fight the fear (alarms, medication) or to "treat ourselves" (an organic cotton t-shirt featuring Ploum's new novel; yes, really, they are coming).
There is no "right" way to consume, whether it is goods or information. You can never "win"; that is the very principle of the consumer-dissatisfaction society.
Be losers: win back your life!
Being a rebel is therefore, above all, accepting your own identity, your own solitude. Accepting not being informed about everything, not having an opinion on everything, leaving a group that doesn't suit you, not putting yourself forward all the time, losing all those essentially artificial micro-opportunities to make a contact, gain a follower, or win a customer.
So yes, being a rebel is seen as losing. For me, it is above all refusing to blindly accept the rules of the game.
Being a loser, today, may be the only way to win back your life.
September 30, 2024
Approximately 1,100 Drupal enthusiasts gathered in Barcelona, Spain, last week for DrupalCon Europe. As per tradition, I delivered my State of Drupal keynote, often referred to as the "DriesNote".
If you missed it, you can watch the video or download my slides (177 MB).
In my keynote, I gave an update on Drupal Starshot, an ambitious initiative we launched at DrupalCon Portland 2024. Originally called Drupal Starshot, inspired by President Kennedy's Moonshot challenge, the product is now officially named Drupal CMS.
The goal of Drupal CMS is to set the standard for no-code website building. It will allow non-technical users, like marketers, content creators, and site builders, to create digital experiences with ease, without compromising on the power and flexibility that Drupal is known for.
A four-month progress report
While Kennedy gave NASA eight years, I set a goal to deliver the first version of Drupal CMS in just eight months. It's been four months since DrupalCon Portland, which means we're halfway through.
So in my keynote, I shared our progress and gave a 35-minute demo of what we've built so far. The demo highlights how a fictional marketer, Sarah, can build a powerful website in just hours with minimal help from a developer. Along her journey, I showcased the following key innovations:
- A new brand for a new market: A brand refresh of Drupal.org, designed to appeal to both marketers and developers. The first pages are ready and available for preview at new.drupal.org, with more pages launching in the coming months.
- A trial experience: A trial experience that lets you try Drupal CMS with a single click, eliminating long-standing adoption barriers for new users. Built with WebAssembly, it runs entirely in the browser – no servers to install or manage.
- An improved installer: An installer that lets users install recipes – pre-built features that combine modules, configuration, and default content for common website needs. Recipes bundle years of expertise into repeatable, shareable solutions.
- Events recipe: A simple events website that used to take an experienced developer a day to build can now be created in just a few clicks by non-developers.
- Project Browser support for recipes: Users can now browse the Drupal CMS recipes in the Project Browser, and install them in seconds.
- First page of documentation: New documentation created specifically for end users. Clear, effective documentation is key to Drupal CMS's success, so we began by writing a single page as a model for the quality and style we aim to achieve.
- AI for site building: AI agents capable of creating content types, configuring fields, building Views, forms, and more. These agents will transform how people build and manage websites with Drupal.
- Responsible AI policy: To ensure responsible AI development, we've created a Responsible AI policy. I'll share more details in an upcoming blog, but the policy focuses on four key principles: human-in-the-loop, transparency, swappable large language models (LLMs), and clear guidance.
- SEO Recipe: Combines and configures all the essential Drupal modules to optimize a Drupal site for search engines.
- 14 recipes in development: In addition to the Events and SEO recipes, 12 more are in development with the help of our Drupal Certified Partners. Each Drupal CMS recipe addresses a common marketing use case outlined in our product strategy. We showcased both the process and progress during the Initiative Lead Keynote for some of the tracks. After DrupalCon, we'll begin developing even more recipes and invite additional contributors to join the effort.
- AI-assisted content migration: AI will crawl your source website and handle complex tasks like mapping unstructured HTML to structured Drupal content types in your destination site, making migrations faster and easier. This could be a game-changer for website migrations.
- Experience Builder: An early preview of a brand new, out-of-the-box tool for content creators and designers, offering layout design, page building, basic theming and content editing tools. This is the first time I've showcased our progress on stage at a DrupalCon.
- Future-proof admin UI with React: Our strategy for modernizing Drupal's backend UI with React.
- The "Adopt-a-Document" initiative: A strategy and funding model for creating comprehensive documentation for Drupal CMS. If successful, I'm hopeful we can expand this model to other areas of Drupal. For more details, please read the announcement on drupal.org.
- Global Documentation Lead: The Drupal Association's commitment to hire a dedicated Documentation Lead, responsible for managing all aspects of Drupal's documentation, beyond just Drupal CMS.
The feedback on my presentation has been incredible, both online and in-person. The room was buzzing with energy and positivity! I highly recommend watching the recording.
Attendees were especially excited about the AI capabilities, Experience Builder, and recipes. I share their enthusiasm as these capabilities are transformative for Drupal.
Many of these features are designed with non-developers in mind. Our goal is to broaden Drupal's appeal beyond its traditional user base and reach more people than ever before.
Release schedule
Our launch plan targets Drupal CMS's release on Drupal's upcoming birthday: January 15, 2025. It's also just a couple of weeks after the Drupal 7 End of Life, marking the end of one era and the beginning of another.
The next milestone is DrupalCon Singapore, taking place December 9–11, 2024, less than three months away. We hope to have a release candidate ready by then.
Now that we're back from DrupalCon and have key milestone dates set, there is a lot to coordinate and plan in the coming weeks, so stay tuned for updates.
Call for contribution
Ambitious? Yes. But achievable if we work together. That's why I'm calling on all of you to get involved with Drupal CMS. Whether it's building recipes, enhancing the Experience Builder, creating AI agents, writing tests, improving documentation, or conducting usability testing – there are countless ways to contribute and make a difference. If you're ready to get involved, visit https://drupal.org/starshot to learn how to get started.
Thank you
This effort has involved so many people that I can't name them all, but I want to give a huge thank you to the Drupal CMS Leadership Team, who I've been working with closely every week: Cristina Chumillas (Lullabot), Gábor Hojtsy (Acquia), Lenny Moskalyk (Drupal Association), Pamela Barone (Technocrat), Suzanne Dergacheva (Evolving Web), and Tim Plunkett (Acquia).
A special shoutout goes to the demo team we assembled for my presentation: Adam Hoenich (Acquia), Amber Matz (Drupalize.me), Ash Sullivan (Acquia), Jamie Abrahams (FreelyGive), Jim Birch (Kanopi), Joe Shindelar (Drupalize.me), John Doyle (Digital Polygon), Lauri Timmanee (Acquia), Marcus Johansson (FreelyGive), Martin Anderson-Clutz (Acquia), Matt Glaman (Acquia), Matthew Grasmick (Acquia), Michael Donovan (Acquia), Tiffany Farriss (Palantir.net), and Tim Lehnen (Drupal Association).
I also want to thank the Drupal CMS track leads and contributors for their development work. Additionally, I'd like to recognize the Drupal Core Committers, Drupal Association staff, Drupal Association Board of Directors, and Certified Drupal Partners for continued support and leadership. There are so many people and organizations whose contributions deserve recognition that I can't list everyone individually, partly to avoid the risk of overlooking anyone. Please know your efforts are deeply appreciated.
Lastly, thank you to everyone who helped make DrupalCon Barcelona a success. It was excellent!
September 24, 2024
September 23, 2024
Old farts (or: the human imperfection of moral perfection)
Idealizing our idols
SF writer John Scalzi is reacting to the fact that some people seem to be turning him into an idol, especially since Neil Gaiman was accused of sexual assault. His main argument: all humans are imperfect and you should take no one as an idol. And especially not those who want to be one.
What is interesting is that we tend to idealize the people we follow online who make things we love. I am not one millionth of John Scalzi, but, at my own scale, I have sometimes seen frightening reactions. Like when a reader who meets me apologizes to me for using proprietary software.
But come on, I use proprietary software too. I fight for free software, but I am (thankfully) not Richard Stallman. Like everyone else, I give in to convenience or to the network effect. It's just that I try to raise awareness, to call out the social pressure. But, like everyone else, I am very imperfect. When I write on my blog, it is about the ideal I want to reach at the moment of writing, not about what I am. Remember too that an ideal is not meant to be reached: it is only a guide along the way and must be open to change at any moment. An ideal carved in stone is morbid. Those who dangle an ideal in front of you are often dangerous.
The hypocrisy of public perfection
Generally speaking, a person's battles say a lot about their obsessions. Homophobes are often repressed homosexuals. Sex scandals most often hit those who cultivate a public image of rectitude. Personally, I talk a lot about the dangers of social media addiction. I'll let you guess why the subject obsesses me so much…
I also recognize myself in what Scalzi says about conventions and conferences: everyone finds him friendly and then, once back home, he collapses and withdraws into his solitude. I do exactly the same. My public persona is very different from the private Ploum. My wife hates the public Ploum: "You are nicer to your readers than to your own family!"
The grey areas of every human being
I also hate this tendency to over-analyze someone's perfection, especially through their supposed social connections. How many times have I heard: "You shared that text by so-and-so, but do you know that so-and-so himself shared texts by such-and-such, and that such-and-such is actually pretty dubious when it comes to antisemitism?"
Answer: no, I don't know. And I don't investigate everyone who publishes, because all of them, without exception, have their grey areas, their youthful mistakes, their works that belonged to an era but are no longer acceptable at all today.
Nobody is perfect. If I share a text, it is for the text. Because it speaks to me and I find it interesting. Context can sometimes bring a finer understanding, but I will not and cannot judge individuals on their past, their actions, or whatever I may hear about them. I am not a judge. I neither condemn nor endorse anyone by sharing their work.
If you judge people, I am certain that among the 900 posts on this blog there is at least one that will shock you and condemn me forever in your eyes. Yes, Bertrand Cantat is a jerk guilty of femicide. Yes, I love Noir Désir and listen to them almost every day.
Paul Watson
Paul Watson has been arrested. For years, he has been accused of eco-terrorism.
I have so much to say but, to avoid repeating myself, I invite you to reread what I wrote after reading his book.
Yes, Paul Watson is friends with Brigitte Bardot, who is herself friends with Jean-Marie Le Pen, who is a complete jerk. Nevertheless, I support Paul Watson's fight to protect the oceans. That fight seems just and important to me. My impression is that Paul Watson genuinely defends the interests of marine wildlife. I may of course be wrong, but that is my position today.
The media lie and distort everything
I insist on this point because, as I said, I read Paul Watson's book and can only observe that the media completely distort statements or passages that I remember particularly well.
In the same vein, professor Emily Bender describes how her words were twisted by the BBC, turning her "debunking ChatGPT" message into "AI is the future".
I once had a similar experience. A Belgian TV channel wanted to interview me about an IT news story. The appointment was set, and the journalist called me an hour beforehand to prepare his segment. On the phone, he asked me a few questions. I answered, and he rephrased my answers to say exactly the opposite. I assumed it was a misunderstanding. I pointed it out. He insisted. The tone rose slightly.
— That is not at all what I said. It is exactly the opposite.
— But I need someone who says that.
— But it is false. I am the expert, and what you want me to say is completely wrong.
— It is too late, our whole segment has been prepared along those lines; you will have to adapt what you say.
— If you already know what you want me to say, you don't need to interview me. Goodbye.
I never heard from that journalist again, nor from that channel.
Be wary of what experts say in the media. Sometimes the desire to be on television is stronger than the concern for accuracy. Sometimes people tell themselves "at least we'll get the message across in the right direction, even if it's a bit distorted". But the final edit, the framing within a broader story, or the background music will end up distorting the words for good.
As soon as you know a field even a little, you realize that everything said about it in the mainstream media is completely wrong. So much so that an article that is even vaguely accurate tends to get highlighted by experts in the field, because it is so rare!
Remember that every time you read an article in a mainstream outlet: what you are reading is completely false and misleading, written by someone who understands nothing about the field and is paid to generate clicks on ads.
The illuminating example of the blue zones
One example that illustrates this very well is the famous "blue zones", regions of Italy, Greece, or Japan that are supposedly teeming with supercentenarians. The media regularly cover these regions where living to 110 seems to be the norm, with articles praising olive oil or rural community life, all illustrated with smiling old folks sitting on a bench in the sun.
Well, professor Saul Justin Newman has just cracked the secret of these blue zones, and earned an Ig Nobel prize for it (a prize that rewards the most absurd research).
Why an Ig Nobel? Because he discovered that the secret of these zones is quite simply that 80% of the supercentenarians are… dead. The others have no birth certificates.
These "blue zones" are often the very regions where everything seems to encourage and facilitate pension fraud.
I bet you this research will stay under the radar. The media will keep praising olive oil as the way to live to 110 and keep talking about the "mystery" of the blue zones. Because reality is not interesting for an entertainment program whose purpose is advertising.
As Aaron Swartz put it: I hate the news!
Thoughts for Aaron
An excellent article by Dave Karpf taught me that Sam Altman, the current CEO behind ChatGPT, was in the YCombinator startup incubator at the same time as Aaron Swartz. In the 2007 photo, they are literally shoulder to shoulder. The photo drew one tear of sadness from me, and another of anger.
Aaron Swartz, Sam Altman and Paul Graham at YCombinator in 2007.
Aaron Swartz wanted to free knowledge. He had written a script to download thousands of scientific papers through his university's network. Suspected of doing so in order to publish them illegally, he was prosecuted and faced… decades in prison, which drove him to suicide. Note that he never actually made those papers public, papers that were funded by taxpayers, by the way.
Today, Sam Altman is hoovering up a billion times more content. Every piece of artistic content, all your messages, all your emails and, of course, every scientific paper. But, unlike Aaron, who did it to free knowledge, Sam Altman has a far more acceptable motive. He wants to make money. And so… it's fine.
I am not making this up; that really is Sam Altman's argument.
We lock up climate activists and whale defenders. We decorate the CEOs of the big corporations destroying the planet as fast as they can. In fact, I think I am starting to see something like a common thread…
The problem, as Dave Karpf points out, is that YCombinator is full of people who dream of and want to become… Sam Altman. Becoming a rebel who kills himself or gets sent to prison appeals to no one.
At one point in my life, immersed in the "startup" environment, I tried to become a "successful entrepreneur" myself. Intellectually, I understood everything that had to be done to get there. I told myself, "Why not me?" But I could not do it. I was unable to accept what did not seem just or perfectly moral to me. Perhaps I share a few too many traits with Aaron Swartz (mostly the flaws and the dietary quirks, rather less the genius).
Well, too bad for success. Aaron, every time I think of you, tears well up in my eyes and I want to fight. You give me the energy to try to change the world with my keyboard.
I do not expect to succeed. But I have no right to give up.
Dave, in your article you complain that everyone wants to become Sam Altman. Not me. Not anymore. I have grown up, I have matured. I want to become, once again, a disciple of Aaron Swartz. Him I can take as a model, an idol, because he never got the chance to live long enough to develop his own grey areas, to become an old fart.
An old fart like the one that cerebral entropy pushes me a little closer to every day.
It must be said that typing your blog posts in Vim and your novels on a mechanical typewriter does take a genuine old fart…
I realize, by the way, that the heroes of my next novel are a young rebel and an old fart. Actually, becoming a rebellious old fart is not so bad… (out October 15; you can already order EAN 9782889790098 from your bookshop)
I am Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!
Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.
September 22, 2024
September 17, 2024
September 10, 2024
In previous blog posts, we discussed setting up a GPG smartcard on GNU/Linux and FreeBSD.
In this blog post, we will configure Thunderbird to work with an external smartcard reader and our GPG-compatible smartcard.
Before Thunderbird 78, if you wanted to use OpenPGP email encryption, you had to use a third-party add-on such as https://enigmail.net/.
Recent versions of Thunderbird support OpenPGP natively, and the Enigmail add-on for Thunderbird has been discontinued. See: https://enigmail.net/index.php/en/home/news.
I didn't find good documentation on how to set up Thunderbird with a GnuPG smartcard when I moved to a new coreboot laptop, which is why I created this blog post series.
GnuPG configuration
We’ll not go into too much detail on how to set up GnuPG. This was already explained in the previous blog posts.
- Use a GPG smartcard with Thunderbird. Part 1: setup GnuPG
- Use a GPG smart card with Thunderbird. Part 2: setup GnuPG on FreeBSD
If you want to use an HSM with GnuPG, you can use the gnupg-pkcs11-scd agent (https://github.com/alonbl/gnupg-pkcs11-scd), which translates the PKCS#11 interface to GnuPG. A previous blog post describes how this can be configured with SmartCard-HSM.
We'll go over some steps to make sure that GnuPG is set up correctly before we continue with the Thunderbird configuration. The pinentry command must be configured with graphical support so that we can type our PIN code in the graphical user environment.
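Before importing any keys, it is also worth checking that GnuPG can actually talk to the card. A quick, hedged sanity check:
$ gpg --card-status
If this prints the card details, the reader and scdaemon are working; an error like "selecting card failed: No such device" usually means scdaemon cannot see the reader.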
Import Public Key
Make sure that your public key - or the public key(s) of the receiver(s) - are imported.
[staf@snuffel ~]$ gpg --list-keys
[staf@snuffel ~]$
[staf@snuffel ~]$ gpg --import <snip>.asc
gpg: key XXXXXXXXXXXXXXXX: public key "XXXX XXXXXXXXXX <XXX@XXXXXX>" imported
gpg: Total number processed: 1
gpg: imported: 1
[staf@snuffel ~]$
[staf@snuffel ~]$ gpg --list-keys
/home/staf/.gnupg/pubring.kbx
-----------------------------
pub xxxxxxx YYYYY-MM-DD [SC]
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
uid [ xxxxxxx] xxxx xxxxxxxxxx <xxxx@xxxxxxxxxx.xx>
sub xxxxxxx xxxx-xx-xx [A]
sub xxxxxxx xxxx-xx-xx [E]
[staf@snuffel ~]$
Pinentry
Thunderbird will not ask for your smartcard's PIN code. This must be handled by your smartcard reader, if it has a PIN pad, or by an external pinentry program.
The pinentry is configured in the gpg-agent.conf configuration file. As we're using Thunderbird in a graphical environment, we'll configure it to use a graphical version.
Installation
I'm testing KDE Plasma 6 on FreeBSD, so I installed the Qt version of pinentry.
On GNU/Linux, check the documentation of your favourite distribution to install a graphical pinentry. If you use a graphical desktop environment, a graphics-enabled pinentry is probably already installed.
[staf@snuffel ~]$ sudo pkg install -y pinentry-qt6
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 1 package(s) will be affected (of 0 checked):
New packages to be INSTALLED:
pinentry-qt6: 1.3.0
Number of packages to be installed: 1
76 KiB to be downloaded.
[1/1] Fetching pinentry-qt6-1.3.0.pkg: 100% 76 KiB 78.0kB/s 00:01
Checking integrity... done (0 conflicting)
[1/1] Installing pinentry-qt6-1.3.0...
[1/1] Extracting pinentry-qt6-1.3.0: 100%
==> Running trigger: desktop-file-utils.ucl
Building cache database of MIME types
[staf@snuffel ~]$
Configuration
The gpg-agent is responsible for starting the pinentry program. Let's reconfigure it to start the pinentry that we like to use.
[staf@snuffel ~]$ cd .gnupg/
[staf@snuffel ~/.gnupg]$
[staf@snuffel ~/.gnupg]$ vi gpg-agent.conf
The pinentry is configured in the pinentry-program directive. You'll find the complete gpg-agent.conf that I'm using below.
debug-level expert
verbose
verbose
log-file /home/staf/logs/gpg-agent.log
pinentry-program /usr/local/bin/pinentry-qt
Reload the scdaemon and gpg-agent configuration.
staf@freebsd-gpg3:~/.gnupg $ gpgconf --reload scdaemon
staf@freebsd-gpg3:~/.gnupg $ gpgconf --reload gpg-agent
staf@freebsd-gpg3:~/.gnupg $
Test
To verify that gpg works correctly and that the pinentry program works in our graphical environment, we'll sign a file.
Create a new file.
$ cd /tmp
[staf@snuffel /tmp]$
[staf@snuffel /tmp]$ echo "foobar" > foobar
[staf@snuffel /tmp]$
Try to sign it.
[staf@snuffel /tmp]$ gpg --sign foobar
[staf@snuffel /tmp]$
If everything works fine, the pinentry program will ask for the PIN code to sign it.
Thunderbird
In this section we’ll (finally) configure Thunderbird to use GPG with a smartcard reader.
Allow external smartcard reader
Open the global settings: click on the "hamburger" icon and select Settings. Or press [F10] to bring up the menu bar in Thunderbird and select [Edit] and then Settings.
In the Advanced Preferences window (Thunderbird's Config Editor), search for "external_gnupg" and set mail.identity.allow_external_gnupg to true.
Setup End-To-End Encryption
The next step is to configure the GPG keypair that we’ll use for our user account.
Open the account settings by clicking on the "hamburger" icon and selecting Account Settings, or press [F10] to open the menu bar and select Edit, Account Settings.
Select End-to-End Encryption and, in the OpenPGP section, select [ Add Key ].
Select ( * ) Use your external key through GnuPG (e.g. from a smartcard) and click on [Continue].
The next window will ask you for the Secret Key ID.
Execute gpg --list-secret-keys (or gpg --list-keys and locate your own key) to get your secret key ID.
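For reference, a hedged sketch of what this can look like with a key stored on a smartcard; the key IDs and user ID below are placeholders, and the ">" markers indicate key material that lives on the card:
$ gpg --list-secret-keys --keyid-format=long
sec>  rsa4096/0123456789ABCDEF 2020-01-01 [SC]
      Card serial no. = 0006 12345678
uid           [ultimate] Example User <user@example.org>
ssb>  rsa4096/FEDCBA9876543210 2020-01-01 [E]
The 16-character value after the algorithm (0123456789ABCDEF in this sketch) is the key ID that Thunderbird expects.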
Copy/paste your key id and click on [ Save key ID ].
I found that it is sometimes necessary to restart Thunderbird to reload the configuration after a new key ID is added. So restart Thunderbird if it fails to find your key ID in the keyring.
Test
As a test, we'll send an email to our own email address.
Open a new message window and enter your email address into the To: field.
Click on [OpenPGP] and Encrypt.
Thunderbird will show a warning message that it doesn't know the public key to set up the encryption.
Click on [Resolve].
Thunderbird will show a window with the key fingerprint. Select ( * ) Accepted.
Click on [ Import ] to import the public key.
With our public key imported, the warning that end-to-end encryption requires resolving key issues should be gone.
Click on the [ Send ] button to send the email.
To encrypt the message, Thunderbird will start a gpg session that invokes the pinentry command. Type in your PIN code; gpg will encrypt the message and, if everything works fine, the email is sent.
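If sending fails, it can help to rule out Thunderbird by doing a similar round trip from the command line. A hedged sketch - replace the address with the user ID of your own key:
$ echo "test" | gpg --armor --encrypt -r you@example.org | gpg --decrypt
Encryption only needs your public key, but the decryption step has to go through the card, so pinentry should pop up and "test" should be printed back. If that works, GnuPG, the smartcard, and pinentry are fine, and the problem is on the Thunderbird side.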
Have fun!
Links
- https://wiki.mozilla.org/Thunderbird:OpenPGP
- https://wiki.mozilla.org/Thunderbird:OpenPGP:Smartcards
- https://support.mozilla.org/en-US/kb/openpgp-thunderbird-howto-and-faq
- https://addons.thunderbird.net/nl/thunderbird/addon/enigmail/
- https://wiki.debian.org/Smartcards/OpenPGP
- https://www.floss-shop.de/en/security-privacy/smartcards/13/openpgp-smart-card-v3.4?c=11
- https://www.gnupg.org/howtos/card-howto/en/smartcard-howto-single.html
- https://support.nitrokey.com/t/nk3-mini-gpg-selecting-card-failed-no-such-device-gpg-card-setup/5057/7
- https://security.stackexchange.com/questions/233916/gnupg-connecting-to-specific-card-reader-when-multiple-reader-available#233918
- https://www.fsij.org/doc-gnuk/stop-scdaemon.html
- https://wiki.debian.org/Smartcards
- https://github.com/OpenSC/OpenSC/wiki/Overview/c70c57c1811f54fe3b3989d01708b45b86fafe11
- https://superuser.com/questions/1693289/gpg-warning-not-using-as-default-key-no-secret-key
- https://stackoverflow.com/questions/46689885/how-to-get-public-key-from-an-openpgp-smart-card-without-using-key-servers
September 09, 2024
The NBD protocol has grown a number of new features over the years. Unfortunately, some of those features are not (yet?) supported by the Linux kernel.
I suggested a few times over the years that the maintainer of the NBD driver in the kernel, Josef Bacik, take a look at these features, but he hasn't done so; presumably he has other priorities. As with anything in the open source world, if you want it done you must do it yourself.
I'd been considering, off and on, working on the kernel driver myself so that I could implement these new features, but I never really got anywhere.
A few months ago, however, Christoph Hellwig posted a patch set that reworked a number of block device drivers in the Linux kernel to a new type of API. Since the NBD mailinglist is listed in the kernel's MAINTAINERS file, this patch series was crossposted to the NBD mailinglist, too, and when I noticed that it explicitly disabled the "rotational" flag on the NBD device, I suggested to Christoph that perhaps "we" (meaning, "he") might want to vary the decision on whether a device is rotational depending on whether the NBD server signals, through the flag that exists for that very purpose, whether the device is rotational.
To which he replied "Can you send a patch".
That got me down the rabbit hole, and now, for the first time in the 20+ years of being a C programmer who uses Linux exclusively, I got a patch merged into the Linux kernel... twice.
So, what do these things do?
The first patch adds support for the ROTATIONAL flag. If the NBD server mentions that the device is rotational, it will be treated as such, and the elevator algorithm will be used to optimize accesses to the device. For the reference implementation, you can do this by adding a line "rotational = true" to the relevant section (relating to the export where you want it to be used) of the config file.
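For the reference implementation, a minimal sketch of what that could look like; the export name, image path, and config file location are made up for illustration:
$ cat /etc/nbd-server/config
[generic]
[spinningdisk]
    exportname = /srv/images/disk0.img
    rotational = true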
It's unlikely that this will be of much benefit in most cases (most nbd-server installations export a file on a filesystem, where the elevator algorithm is effectively handled server side, so it doesn't matter whether the device has the rotational flag set), but it's there in case you wish to use it.
The second set of patches adds support for the WRITE_ZEROES command. Most devices these days allow you to tell them "please write N zeroes starting at this offset", which is a lot more efficient than sending over a buffer of N zeroes and asking the device to do DMA to copy buffers etc etc for just zeroes.
The NBD protocol has supported its own WRITE_ZEROES command for a while now, and hooking it up was reasonably simple in the end. The only problem is that it expects length values in bytes, whereas the kernel uses it in blocks. It took me a few tries to get that right -- and then I also fixed up handling of discard messages, which required the same conversion.
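As an aside, one hedged way to exercise this path from userspace, assuming a device is attached as /dev/nbd0 and your util-linux is recent enough (this zeroes real data, so only try it on a scratch export):
$ blkdiscard --zeroout --offset 0 --length 1MiB /dev/nbd0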
My HTTP Header Analyzer continues to be used a lot. Last week, I received a bug report, so I decided to look into it over the weekend. One thing led to another, and I ended up making a slew of improvements:
- Clarified the explanations for various Cloudflare headers, including CF-Edge-Cache, CF-APO-Via, CF-BGJ, CF-Polish, CF-Mitigated, CF-Ray, CF-Request-ID, CF-Connecting-IP, and CF-IPCountry.
- Added support for new headers: X-Logged-In, X-Hacker, X-Vimeo-Device, and Origin-Agent-Cluster.
- Improved checks and explanations for cache-related headers, including X-Cache, X-Cache-Status, and X-Varnish.
- Expanded the validation and explanation for the X-Content-Type-Options header.
- Marked X-Content-Security-Policy as a deprecated version of the Content-Security-Policy header and provided a more comprehensive breakdown of Content Security Policy (CSP) directives.
- Improved the validation for CORS-related headers: Access-Control-Expose-Headers and Access-Control-Max-Age.
- Expanded the explanation of the Cross-Origin-Resource-Policy header, covering its possible values.
- Added support for the Timing-Allow-Origin header.
- Clarified the X-Runtime header, which provides timing information for server response generation.
- Expanded the explanations for TLS and certificate-related headers: Strict-Transport-Security, Expect-Staple, and Expect-CT.
- Added an explanation for the Host-Header header.
- Improved details for X-Forwarded-For.
- Refined the explanations for Cache-Control directives like Public, Private, and No-Cache.
- Expanded the explanation for the Vary header and its impact on caching behavior.
- Added an explanation for the Retry-After header.
- Updated the explanation for the legacy X-XSS-Protection header.
- Added an explanation for the Akamai-specific Akamai-Age-MS header.
HTTP headers are crucial for web application functionality and security. While some are commonly used, there are many lesser-known headers that protect against security vulnerabilities, enforce stronger security policies, and improve performance.
To explore these headers further, you can try the latest HTTP Header Analyzer. It is pretty simple to use: enter a URL, and the tool will analyze the headers sent by your website. It then explains these headers, provides a score, and suggests possible improvements.
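If you want to see the raw material the analyzer works with, you can also dump a site's response headers from the command line first. A quick sketch, with example.com standing in for your own URL:
$ curl -sI https://example.com
The -I option requests only the response headers and -s hides the progress meter; paste the same URL into the analyzer to get the explanations, score, and suggestions.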
September 06, 2024
September 05, 2024
IT architects generally use architecture-specific languages or modeling techniques to document their thoughts and designs. ArchiMate, the framework I have the most experience with, is a specialized enterprise architecture modeling language. It is maintained by The Open Group, an organization known for its broad architecture framework titled TOGAF.
My stance, however, is that architects should not use the diagrams from their architecture modeling framework to convey their message to every stakeholder out there...
An enterprise framework for architects
Certainly, using a single modeling language like ArchiMate is important. It allows architects to use a common language, a common framework, in which they can convey their ideas and designs. When collaborating on the same project, it would be unwise for architects to use different modeling techniques or design frameworks.
By standardizing on a single framework for a particular purpose, a company can optimize its efforts around education and documentation. If several architecture frameworks are used for the same purpose, inefficiencies arise. Supporting tooling, such as Archi, can also be selected for its specialized features supporting that framework. The more architects are fluent in a common framework, the less likely ambiguity or misunderstandings occur about what certain architectures or designs want to present.
Now, I highlighted "for a particular purpose", because that architecture framework isn't the goal, it's a means.
Domain-specific language, also in architecture
In larger companies, you'll find architects with different specializations and focus areas. A common set is to have architects at different levels or layers of the architecture:
- enterprise architects focus on the holistic and strategic level,
- domain architects manage the architecture for one or more domains (a means of splitting the complexity of a company, often tied to business domains),
- solution or system architects focus on the architecture for specific projects or solutions,
- security architects concentrate on the cyber threats and protection measures,
- network architects look at the network design, flows and flow controls, etc.
Architecture frameworks are often not meant to support all levels. ArchiMate, for instance, is tailored to the enterprise and domain levels in general. It also supports solution or system architecture well when the focus is on applications. Sure, other architecture layers can be expressed as well, but after a while, you'll notice that the expressivity of the framework lacks the details or specifics needed for those layers.
It is thus not uncommon that, at a certain point, architects drop one framework and start using another. Network architecture and design is expressed differently than the ICT domain architecture. Both need to 'plug into each other', because network architects need to understand the larger picture they operate in, and domain architects should be able to read network architecture design and relate it back to the domain architecture.
Such a transition is not only within IT. Consider city planning and housing units, where architects design new community areas and housing. These designs need to be well understood by the architects, who are responsible for specific specializations such as utilities, transportation, interior, landscaping, and more. They use different ways of designing, but make sure it is understandable (and often even standardized) by the others.
Your schematic is not your presentation
I've seen architects who are very satisfied with their architectural design: they want nothing more than to share this with their (non-architect) stakeholders in all its glory. And while I do agree that lead engineers, for instance, should be able to understand architecture drawings, the schematics themselves shouldn't be the presentation material.
And definitely not towards higher management.
When you want to bring a design to a broader community, or to stakeholders with different backgrounds or roles, it is important to tell your story in an easy-to-understand way. Just like building architects would create physical mock-ups at scale to give a better view of a building, IT architects should create representative material to expedite presentations and discussions.
Certainly, you will lose a lot of insight compared to the architectural drawings, but you'll get much better acceptance by the community.
What is the definition of “Open Source”?
There’s been no shortage of contention on what “Open Source software” means. Two instances that stand out to me personally are ElasticSearch’s “Doubling down on Open” and Scott Chacon’s “public on GitHub”.
I’ve been active in Open Source for 20 years and could use a refresher on its origins and officialisms. The plan was simple: write a blog post about why the OSI (Open Source Initiative) and its OSD (Open Source Definition) are authoritative, collect evidence in its support (confirmation that they invented the term, of widespread acceptance with little dissent, and of the OSD being a practical, well functioning tool). That’s what I keep hearing, I just wanted to back it up. Since contention always seems to be around commercial re-distribution restrictions (which are forbidden by the OSD), I wanted to particularly confirm that there haven’t been all that many commercial vendors who’ve used, or wanted to use, the term “open source” to mean “you can view/modify/use the source, but you are limited in your ability to re-sell, or need to buy additional licenses for use in a business”
However, the further I looked, the more I found evidence of the opposite of all of the above. I’ve spent a few weeks now digging and some of my long standing beliefs are shattered. I can’t believe some of the things I found out. Clearly I was too emotionally invested, but after a few weeks of thinking, I think I can put things in perspective. So this will become not one, but multiple posts.
The goal for the series is to look at the tensions in the community/industry (in particular those directed towards the OSD), and figure out how to resolve, or at least reduce, them.
Without further ado, let’s get into the beginnings of Open Source.
The “official” OSI story.
Let’s first get the official story out the way, the one you see repeated over and over on websites, on Wikipedia and probably in most computing history books.
Back in 1998, there was a small group of folks who felt that the verbiage at the time (Free Software) had become too politicized. (note: the Free Software Foundation was founded 13 years prior, in 1985, and informal use of “free software” had been around since the 1970s). They felt they needed a new word “to market the free software concept to people who wore ties”. (source) (somewhat ironic since today many of us like to say “Open Source is not a business model”)
Bruce Perens - an early Debian project leader and hacker on free software projects such as busybox - had authored the first Debian Free Software Guidelines in 1997 which was turned into the first Open Source Definition when he founded the OSI (Open Source Initiative) with Eric Raymond in 1998. As you continue reading, keep in mind that from the get-go, OSI’s mission was supporting the industry. Not the community of hobbyists.
Eric Raymond is of course known for his seminal 1999 essay on development models “The cathedral and the bazaar”, but he also worked on fetchmail among others.
According to Bruce Perens, there was some criticism at the time, but only of the term “Open” in general, and of “Open Source” only in a completely different industry.
At the time of its conception there was much criticism for the Open Source campaign, even among the Linux contingent who had already bought-in to the free software concept. Many pointed to the existing use of the term “Open Source” in the political intelligence industry. Others felt the term “Open” was already overused. Many simply preferred the established name Free Software. I contended that the overuse of “Open” could never be as bad as the dual meaning of “Free” in the English language–either liberty or price, with price being the most oft-used meaning in the commercial world of computers and software
From Open Sources: Voices from the Open Source Revolution: The Open Source Definition
Furthermore, from Bruce Perens’ own account:
I wrote an announcement of Open Source which was published on February 9 [1998], and that’s when the world first heard about Open Source.
source: On Usage of The Phrase “Open Source”
Occasionally it comes up that it may have been Christine Peterson who coined the term earlier that week in February but didn’t give it a precise meaning. That was a task for Eric and Bruce in followup meetings over the next few days.
Even when you’re the first to use or define a term, you can’t legally control how others use it, until you obtain a Trademark. Luckily for OSI, US trademark law recognizes the first user when you file an application, so they filed for a trademark right away. But what happened? It was rejected! The OSI’s official explanation reads:
We have discovered that there is virtually no chance that the U.S. Patent and Trademark Office would register the mark “open source”; the mark is too descriptive. Ironically, we were partly a victim of our own success in bringing the “open source” concept into the mainstream
This is our first 🚩 red flag and it lies at the basis of some of the conflicts which we will explore in this, and future posts. (tip: I found this handy Trademark search website in the process)
Regardless, since 1998, the OSI has vastly grown its scope of influence (more on that in future posts), with the Open Source Definition mostly unaltered for 25 years, and having been widely used in the industry.
Prior uses of the term “Open Source”
Many publications simply repeat the idea that OSI came up with the term, has the authority (if not legal, at least in practice) and call it a day. I, however, had nothing better to do, so I decided to spend a few days (which turned into a few weeks 😬) and see if I could dig up any references to “Open Source” predating OSI’s definition in 1998, especially ones with different meanings or definitions.
Of course, it’s totally possible that multiple people come up with the same term independently and I don’t actually care so much about “who was first”, I’m more interested in figuring out what different meanings have been assigned to the term and how widespread those are.
In particular, because most contention is around commercial limitations (non-competes) where receivers of the code are forbidden to resell it, this clause of the OSD stands out:
Free Redistribution: The license shall not restrict any party from selling (…)
Turns out, the term “Open Source” was already in use for more than a decade prior to the OSI’s founding.
OpenSource.com
In 1998, a business in Texas called “OpenSource, Inc” launched their website. They were a “Systems Consulting and Integration Services company providing high quality, value-added IT professional services”. Sometime during the year 2000, the website became a RedHat property. Enter the domain name on Icann and it reveals the domain name was registered Jan 8, 1998. A month before the term was “invented” by Christine/Richard/Bruce. What a coincidence. We are just warming up…
Caldera announces Open Source OpenDOS
In 1996, a company called Caldera had “open sourced” a DOS operating system called OpenDos. Their announcement (accessible on google groups and a mailing list archive) reads:
Caldera Announces Open Source for DOS.
(…)
Caldera plans to openly distribute the source code for all of the DOS technologies it acquired from Novell., Inc
(…)
Caldera believes an open source code model benefits the industry in many ways.
(…)
Individuals can use OpenDOS source for personal use at no cost.
Individuals and organizations desiring to commercially redistribute
Caldera OpenDOS must acquire a license with an associated small fee.
Today we would refer to it as dual-licensing, using Source Available due to the non-compete clause. But in 1996, actual practitioners referred to it as “Open Source” and OSI couldn’t contest it because it didn’t exist!
You can download the OpenDos package from ArchiveOS and have a look at the license file, which includes even more restrictions such as “single computer”. (like I said, I had nothing better to do).
Investigations by Martin Espinoza re: Caldera
On his blog, Martin has an article making a similar observation about Caldera’s prior use of “open source”, following up with another article which includes a response from Lyle Ball, who headed the PR department of Caldera.
Quoting Martin:
As a member of the OSI, he [Bruce] frequently championed that organization’s prerogative to define what “Open Source” means, on the basis that they invented the term. But I [Martin] knew from personal experience that they did not. I was personally using the term with people I knew before then, and it had a meaning — you can get the source code. It didn’t imply anything at all about redistribution.
The response from Caldera includes such gems as:
I joined Caldera in November of 1995, and we certainly used “open source” broadly at that time. We were building software. I can’t imagine a world where we did not use the specific phrase “open source software”. And we were not alone. The term “Open Source” was used broadly by Linus Torvalds (who at the time was a student (…), John “Mad Dog” Hall who was a major voice in the community (he worked at COMPAQ at the time), and many, many others.
Our mission was first to promote “open source”, Linus Torvalds, Linux, and the open source community at large. (…) we flew around the world to promote open source, Linus and the Linux community….we specifically taught the analysts houses (i.e. Gartner, Forrester) and media outlets (in all major markets and languages in North America, Europe and Asia.) (…) My team and I also created the first unified gatherings of vendors attempting to monetize open source
So according to Caldera, “open source” was a phenomenon in the industry already and Linus himself had used the term. He mentions plenty of avenues for further research, I pursued one of them below.
Linux Kernel discussions
Mr. Ball’s mentions of Linus and Linux piqued my interest, so I started digging.
I couldn’t find a mention of “open source” in the Linux Kernel Mailing List archives prior to the OSD day (Feb 1998), though the archives only start as of March 1996. I asked ChatGPT where people used to discuss Linux kernel development prior to that, and it suggested 5 Usenet groups, which google still lets you search through:
- alt.os.linux (no hits)
- comp.os.minix (no hits)
- comp.os.linux (one hit!)
- comp.os.linux.development (no hits)
- comp.os.linux.announce (two hits!)
What were the hits? Glad you asked!
comp.os.linux: a 1993 discussion about supporting binary-only software on Linux
This conversation predates the OSI by five whole years and leaves very little to the imagination:
The GPL and the open source code have made Linux the success that it is. Cygnus and other commercial interests are quite comfortable with this open paradigm, and in fact prosper. One need only pull the source code to GCC and read the list of many commercial contributors to realize this.
comp.os.linux.announce: 1996 announcement of Caldera’s open-source environment
In November 1996 Caldera shows up again, this time with a Linux based “open-source” environment:
Channel Partners can utilize Caldera’s Linux-based, open-source environment to remotely manage Windows 3.1 applications at home, in the office or on the road. By using Caldera’s OpenLinux (COL) and Wabi solution, resellers can increase sales and service revenues by leveraging the rapidly expanding telecommuter/home office market. Channel Partners who create customized turn-key solutions based on environments like SCO OpenServer 5 or Windows NT,
comp.os.linux.announce: 1996 announcement of a trade show
On 17 Oct 1996 we find this announcement
There will be a Open Systems World/FedUnix conference/trade show in Washington DC on November 4-8. It is a traditional event devoted to open computing (read: Unix), attended mostly by government and commercial Information Systems types.
In particular, this talk stands out to me:
** Schedule of Linux talks, OSW/FedUnix'96, Thursday, November 7, 1996 ***
(…)
11:45 Alexander O. Yuriev, “Security in an open source system: Linux study”
The context here seems to be open standards, and maybe also the open source development model.
1990: Tony Patti on “software developed from open source material”
In 1990, a magazine editor by the name of Tony Patti not only refers to Open Source software but also mentions that the NSA in 1987 referred to how “software was developed from open source material”.
1995: open-source changes emails on OpenBSD-misc email list
I could find one mention of “Open-source” on an OpenBSD email list; it seems there was a directory “open-source-changes” which had incoming patches, distributed over email (source). Though perhaps the way to interpret it is that it concerns “source-changes” to OpenBSD, prefixed with “open”, so let’s not count this one.
(I did not look at other BSD’s)
Bryan Lunduke’s research
Bryan Lunduke has done similar research and found several more USENET posts about “open source”, clearly in the context of open source software, predating OSI by many years. He breaks it down on his substack. Some interesting examples he found:
19 August, 1993 post to comp.os.ms-windows
Anyone else into “Source Code for NT”? The tools and stuff I’m writing for NT will be released with source. If there are “proprietary” tricks that MS wants to hide, the only way to subvert their hoarding is to post source that illuminates (and I don’t mean disclosing stuff obtained by a non-disclosure agreement).
(source)
Then he writes:
Open Source is best for everyone in the long run.
Written as a matter-of-fact generalization to the whole community, implying the term is well understood.
December 4, 1990
BSD’s open source policy meant that user developed software could be ported among platforms, which meant their customers saw a much more cost effective, leading edge capability combined hardware and software platform.
1985: The Computer Chronicles documentary about UNIX
The Computer Chronicles was a TV documentary series about computer technology; it started as a local broadcast, but in 1983 became a national series. In February 1985, they broadcast an episode about UNIX. You can watch the entire 28 min episode on archive.org, and it’s an interesting snapshot in time, when UNIX was coming out of its shell and competing with MS-DOS with its multi-user and concurrent multi-tasking features. It contains a segment in which Bill Joy, co-founder of Sun Microsystems, is interviewed about Berkeley Unix 4.2. Sun had more than 1000 staff members, and now its CTO was on national TV in the United States. This was a big deal, with a big audience. At 13:50 min, the interviewer quotes Bill:
“He [Bill Joy] says its open source code, versatility and ability to work on a variety of machines means it will be popular with scientists and engineers for some time”
“Open Source” on national TV. 13 years before the founding of OSI.
Uses of the word “open”
We’re specifically talking about “open source” in this article. But we should probably also consider how the term “open” was used in software, as they are related, and that may have played a role in the rejection of the trademark.
Well, the Open Software Foundation launched in 1988. (10 years before the OSI). Their goal was to make an open standard for UNIX. The word “open” is also used in software, e.g. Common Open Software Environment in 1993 (standardized software for UNIX), OpenVMS in 1992 (renaming of VAX/VMS as an indication of its support of open systems industry standards such as POSIX and Unix compatibility), OpenStep in 1994 and of course in 1996, the OpenBSD project started. They have this to say about their name: (while OpenBSD started in 1996, this quote is from 2006):
The word “open” in the name OpenBSD refers to the availability of the operating system source code on the Internet, although the word “open” in the name OpenSSH means “OpenBSD”. It also refers to the wide range of hardware platforms the system supports.
Does it run DOOM?
The proof of any hardware platform is always whether it can run Doom. Since the DOOM source code was published in December 1997, I thought it would be fun to check whether ID Software happened to use the term “Open Source” at the time. There are some FTP mirrors where you can still see the files with the original December 1997 timestamps (e.g. this one). However, after sifting through the README and other documentation files, I only found references to the “Doom source code”. No mention of Open Source.
The origins of the famous “Open Source” trademark application: SPI, not OSI
This is not directly relevant, but may provide useful context: In June 1997 the SPI (“Software In the Public Interest”) organization was born to support the Debian project, funded by its community, although it grew in scope to help many more free software / open source projects. It looks like Bruce, as a representative of SPI, started the “Open Source” trademark proceedings (and may have paid for it himself). But then something happened: 3/4 of the SPI board (including Bruce) left and founded the OSI, which Bruce announced along with a note that the trademark would move from SPI to OSI as well. Ian Jackson - Debian Project Leader and SPI president - expressed his “grave doubts” and lack of trust. SPI later confirmed they owned the trademark (application) and would not let any OSI members take it. The perspective of Debian developer Ean Schuessler provides more context.
A few years later, it seems wounds were healing, with Bruce re-applying to SPI, Ean making amends, and Bruce taking the blame.
All the bickering over the Trademark was ultimately pointless, since it didn’t go through.
Searching for SPI on the OSI website reveals no acknowledgment of SPI’s role in the story. You only find mentions in board meeting notes (ironically, they’re all requests to SPI to hand over domains or to share some software).
By the way, in November 1998, this is what SPI’s open source web page had to say:
Open Source software is software whose source code is freely available
A Trademark that was never meant to be.
Lawyer Kyle E. Mitchell knows how to write engaging blog posts. Here is one where he digs further into the topic of trademarking and why “open source” is one of the worst possible terms to try to trademark (in comparison to, say, Apple computers).
He writes:
At the bottom of the hierarchy, we have “descriptive” marks. These amount to little more than commonly understood statements about goods or services. As a general rule, trademark law does not enable private interests to seize bits of the English language, weaponize them as exclusive property, and sue others who quite naturally use the same words in the same way to describe their own products and services.
(…)
Christine Peterson, who suggested “open source” (…) ran the idea past a friend in marketing, who warned her that “open” was already vague, overused, and cliche.
(…)
The phrase “open source” is woefully descriptive for software whose source is open, for common meanings of “open” and “source”, blurry as common meanings may be and often are.
(…)
no person and no organization owns the phrase “open source” as we know it. No such legal shadow hangs over its use. It remains a meme, and maybe a movement, or many movements. Our right to speak the term freely, and to argue for our own meanings, understandings, and aspirations, isn’t impinged by anyone’s private property.
So, we have here a great example of the Trademark system working exactly as intended, doing the right thing in the service of the people: not giving away unique rights to common words, rights that were demonstrably never OSI’s to have.
I can’t decide which is more wild: OSI’s audacious outcries for the whole world to forget about the trademark failure and trust their “pinky promise” right to authority over a common term, or the fact that so much of the global community actually fell for it and repeated a misguided narrative without much further thought. (myself included)
I think many of us, through our desire to be part of a movement with a positive, fulfilling mission, were too easily swept away by OSI’s origin tale.
Co-opting a term
OSI was never relevant as an organization and hijacked a movement that was well underway without them.
(source: a harsh but astute Slashdot comment)
We have plentiful evidence that “Open Source” was used for at least a decade prior to OSI existing, in the industry, in the community, and possibly in government. You saw it at trade shows, in various newsgroups around Linux and Windows programming, and on national TV in the United States. The word was often uttered without any further explanation, implying it was a known term. For a movement that happened largely offline in the eighties and nineties, it seems likely there were many more examples that we can’t access today.
“Who was first?” is interesting, but more relevant is “what did it mean?”. Many of these uses were fairly informal and/or didn’t consider re-distribution. We saw these meanings:
- a collaborative development model
- portability across hardware platforms, open standards
- disclosing (making available) of source code, sometimes with commercial limitations (e.g. per-seat licensing) or restrictions (e.g. non-compete)
- possibly a buzz-word in the TV documentary
Then came the OSD, which gave the term a very different, and much stricter, meaning than what had already been in use for 15 years. However, the OSD was refined, “legal-aware” and the starting point for an attempt at global consensus and wider industry adoption, so we are far from finished with our analysis.
(ironically, it never quite matched with free software either - see this e-mail or this article)
Legend has it…
Repeat a lie often enough and it becomes the truth
Yet, the OSI still promotes their story around being first to use the term “Open Source”. RedHat’s article still claims the same. I could not find evidence of resolution. I hope I just missed it (please let me know!). What I did find is one request for clarification remaining unaddressed and another handled in a questionable way, to put it lightly. Expand all the comments in the thread and see for yourself. For an organization all about “open”, this seems especially strange. It seems we have veered far away from the “We will not hide problems” motto in the Debian Social Contract.
Real achievements are much more relevant than “who was first”. Here are some suggestions for actually relevant ways the OSI could introduce itself and its mission:
- “We were successful open source practitioners and industry thought leaders”
- “In our desire to assist the burgeoning open source movement, we aimed to give it direction and create alignment around useful terminology”.
- “We launched a campaign to positively transform the industry by defining the term - which had thus far only been used loosely - precisely and popularizing it”
I think any of these would land well in the community. Instead, they are strangely obsessed with “we coined the term, therefore we decide its meaning”, and anything else is “flagrant abuse”.
Is this still relevant? What comes next?
Trust takes years to build, seconds to break, and forever to repair
I’m quite an agreeable person, and until recently happily defended the Open Source Definition. Now, my trust has been tainted, but at the same time, there is beauty in knowing that healthy debate has existed since the day OSI was announced. It’s just a matter of making sense of it all, and finding healthy ways forward.
Most of the events covered here are from 25 years ago, so let’s not linger too much on it. There is still a lot to be said about adoption of Open Source in the industry (and the community), tension (and agreements!) over the definition, OSI’s campaigns around awareness and standardization and its track record of license approvals and disapprovals, challenges that have arisen (e.g. ethics, hyper clouds, and many more), some of which have resulted in alternative efforts and terms. I have some ideas for productive ways forward.
Stay tuned for more, sign up for the RSS feed and let me know what you think!
Comment below, on X or on HackerNews
August 29, 2024
In his latest Lex Fridman appearance, Elon Musk makes some excellent points about the importance of simplification.
Follow these steps:
- Simplify the requirements
- For each step, try to delete it altogether
- Implement well
1. Simplify the Requirements
Even the smartest people come up with requirements that are, in part, dumb. Start by asking yourself how they can be simplified.
There is no point in finding the perfect answer to the wrong question. Try to make the question as little wrong as possible.
I think this is so important that it is included in my first item of advice for junior developers.
There is nothing so useless as doing efficiently that which should not be done at all.
2. Delete the Step
For each step, consider if you need it at all, and if not, delete it. Certainty is not required. Indeed, if you only delete what you are 100% certain about, you will leave in junk. If you never put things back in, it is a sign you are being too conservative with deletions.
The best part is no part.
Some further commentary by me:
This applies both to the product and technical implementation levels. It’s related to YAGNI, Agile, and Lean, also mentioned in the first section of advice for junior developers.
It’s crucial to consider probabilities and compare the expected cost/value of different approaches. Don’t spend 10 EUR each day to avoid a 1% chance of needing to pay 100 EUR. Consistent Bayesian reasoning will reduce such mistakes, though Elon’s “if you do not put anything back in, you are not removing enough” heuristic is easier to understand and implement.
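To make that example concrete (the numbers are illustrative): skipping the safeguard has an expected loss of 1% × 100 EUR = 1 EUR per day, while the safeguard itself costs 10 EUR per day. Paying 10 EUR to avoid an expected 1 EUR loss only makes sense if the rare failure carries costs the 100 EUR doesn’t capture (downtime, reputation, safety).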
3. Implement Well
Here, Elon talks about optimization and automation, which are specific to his problem domain of building a supercomputer. More generally, this can be summarized as good implementation, which I advocate for in my second section of advice for junior developers.
The relevant segment begins at 43:48.
The post Simplify and Delete appeared first on Entropy Wins.
August 27, 2024
I just reviewed the performance of a customer’s WordPress site. “Things got a lot worse,” he wrote; he assumed Autoptimize (he is an AOPro user) wasn’t working any more and asked me to guide him in fixing the issue. Instead, it turns out he had installed CookieYes, which adds tons of JS (part of which is render-blocking), taking up 3.5s of main-thread work and (fasten your seat-belts) which somehow seems…
August 26, 2024
Setting up a new computer can be a lot of work, but I've made it much simpler with Homebrew, a popular package manager.
Creating a list of installed software
As a general rule, I prefer to install all software on my Mac using Homebrew. I always try Homebrew first and only resort to downloading software directly from websites if it is not available through Homebrew.
Homebrew manages both formulae and casks. Casks are typically GUI applications, while formulae are command-line tools or libraries.
First, I generate a list of all manually installed packages on my old computer:
$ brew leaves
The brew leaves command displays only the packages you installed directly using Homebrew. It excludes any packages that were installed automatically as dependencies. To view all installed packages, including dependencies, you can use the brew list command.
To save this list to a file:
$ brew leaves > brews.txt
Next, I include installed casks in the same list:
$ brew list --cask >> brews.txt
This appends the cask list to brews.txt, giving me a complete list of all the packages I've explicitly installed.
Reviewing your packages
It is a good idea to check if you still need all packages on your new computer. I review my packages as follows:
$ cat brews.txt | xargs brew desc --eval-all
This command provides a short description for each package in your list.
Installing your packages on a new machine
Transfer your brews.txt file to the new computer, install Homebrew, and run:
$ xargs brew install < brews.txt
This installs all the packages listed in your file.
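One refinement I'd suggest: keep casks in their own file and install them with the explicit --cask flag, which avoids any ambiguity when a formula and a cask share a name. A minimal sketch of that variant (casks.txt is just a name I picked):
$ brew leaves > brews.txt
$ brew list --cask > casks.txt
$ xargs brew install < brews.txt
$ xargs brew install --cask < casks.txt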
Building businesses based on an Open Source project is like balancing a solar system. Just as the sun is the center of our own little universe, powering life on the planets that revolve around it in a brittle yet tremendously powerful astrophysical equilibrium, so is a thriving open source project surrounded by a community, one or more vendors, and their commercially supported customers revolving around it, driven by astronomical aspirations.
Source-available & Non-Compete licensing have existed in various forms and have been tweaked and refined for decades, in an attempt to combine just enough proprietary conditions with just enough Open Source flavor to find that perfect trade-off. Fair Source is the latest refinement for software projects driven by a single vendor wanting to combine monetization, a high rate of contributions to the project (supported by said monetization), community collaboration, and direct association with said software project.
Succinctly, Fair Source licenses provide many of the same benefits to users as Open Source licenses, although outsiders are not allowed to build their own competing service based on the software; however, after 2 years the software automatically becomes MIT or Apache2 licensed, and at that point you can pretty much do whatever you want with the older code.
To avoid confusion, this project is different from:
- a previous “Fair Source” initiative that worked quite differently and has been abandoned.
- other modern Source-Available projects such as Fair Code and Commons Clause which also use non-compete clauses but don’t use the delayed open source publication.
It seems we have reached an important milestone in 2024: on the surface, “Fair Source” is yet another new initiative that positions itself as a more business friendly alternative to “Open Source”, but the delayed open source publication (DSOP) model has been refined to the point where the licenses are succinct, clear, easy to work with and should hold up well in court. Several technology companies are choosing this software licensing strategy (Sentry being the most famous one, you can see the others on their website).
My 2 predictions:
- we will see 50-100 more companies in the next couple of years.
- a governance legal entity will appear soon, and a trademark will follow after.
In this article, I’d like to share my perspective and address some - what I believe to be - misunderstandings in current discourse.
The licenses
At this time, the Fair Source ideology is implemented by the following licenses:
BSL/BUSL are trickier to understand and can have different implementations. FCL and FSL are nearly identical. They are clearly and concisely written and embody the Fair Source spirit in its purest form.
Seriously, try running the following in your terminal. Sometimes as an engineer you have to appreciate legal text when it’s this concise, easy to understand, and diff-able!
wget https://raw.githubusercontent.com/keygen-sh/fcl.dev/master/FCL-1.0-MIT.md
wget https://fsl.software/FSL-1.1-MIT.template.md
diff FSL-1.1-MIT.template.md FCL-1.0-MIT.md
I will focus on FSL and FCL, the Fair Source “flagship licenses”.
Is it “open source, fixed”, or an alternative to open source? Neither.
First, we’ll need to agree on what the term “Open Source” means. This itself has been a battle for decades, with non-competes (commercial restrictions) being especially contentious and in use even before OSI came along, so I’m working on an article challenging OSI’s Open Source Definition, which I will publish soon. However, the OSD is probably the most common understanding in the industry today, so we’ll use it here. The folks behind FSL/Fair Source seem to have made the wise decision to distance themselves from these contentious debates: after some initial conversations about FSL using the “Open Source” term, they adopted the less common term “Fair Source”, and I’ve seen a lot of meticulous work (e.g. fsl#2 and fsl#10) on how they articulate what they stand for. (The Open Source Definition debate is why I hope the Fair Source folks will file a trademark if this project gains more traction.)
Importantly, OSI’s definition of “Open Source” includes non-discrimination and free redistribution.
When you check out code that is FSL licensed, and the code was authored:
- less than 2 years ago: it’s available to you under terms similar to MIT, except you cannot compete with the author by making a similar service using the same software
- more than 2 years ago: it is now MIT licensed. (or Apache2, when applicable)
While code older than 2 years is clearly open source, the non-compete clause in the first case is not compatible with the terms set forth by the OSI Open Source Definition (or with freedom 0 of the four freedoms of Free Software). Such a license is often referred to as “Source Available”.
So, Fair Source is a system that combines 2 licenses (an Open Source one and a Source Available one with proprietary conditions) in one. I think this is a very clever approach, but it’s not all that useful to compare it to Open Source. Rather, it has a certain symmetry to Open Core:
- In an Open Core product, you have a “scoped core”: a core built from open source code which is surrounded by specifically scoped pieces of proprietary code, for an indeterminate, but usually multi-year or perpetual, timeframe
- With Fair Source, you have a “timed core”: the open source core is all the code that’s more than 2 years old, and the proprietary bits are the most recent developments (regardless of which scope they belong to).
Open Core and Fair Source both try to balance open source with business interests: both have an open source component to attract a community, and a proprietary shell to make a business more viable. Fair Source is a licensing choice that’s only relevant to businesses, not individuals. How many businesses monetize pure Open Source software? I can count them on one hand. The vast majority go for something like Open Core. This is why the comparison with Open Core makes much more sense.
A lot of the criticisms of Fair Source suddenly become a lot more palatable when you consider it an alternative to Open Core.
As a customer, which is more tolerable: proprietary features, or a proprietary two years’ worth of product developments? I don’t think it matters nearly as much as some of the advantages Fair Source has over Open Core:
- Users can view, modify and distribute (but not commercialize) the proprietary code. (with Open Core, you only get the binaries)
- It follows then, that the project can use a single repository and single license (with Open Core, there are multiple repositories and licenses involved)
Technically, Open Core is more of a business architecture, where you still have to figure out which licenses you want to use for the core and shell, whereas Fair Source is more of a prepackaged solution which defines the business architecture as well as the 2 licenses to use.
Note that you can also devise hybrid approaches. Here are some ideas:
- a Fair Source core and Closed Source shell. (more defensive than Open Core or Fair Source separately). (e.g. PowerSync does this)
- an Open Source core, with Fair Source shell. (more open than Open Core or Fair Source separately).
- Open Source core, with Source Available shell (users can view, modify and distribute the code but not commercialize it, and without the delayed open source publication). This would be the “true” symmetrical counterpart to Fair Source. It is essentially Open Core where the community also has access to the proprietary features (but can’t commercialize them). It would also allow putting all code in the same repository (although this benefit works better with Fair Source, because any contributed code will definitely become open source, thus incentivizing the community more). I find this a very interesting option that I hope Open Core vendors will start considering, although it has little to do with Fair Source.
- etc.
Non-Competition
The FSL introduction post states:
In plain language, you can do anything with FSL software except economically undermine its producer through harmful free-riding
The issue of large cloud vendors selling your software as a service, making money, and contributing little to nothing back to the project, has been widely discussed under a variety of names. This can indeed severely undermine a project’s health, or kill it.
(Personally, I find discussions around whether this is “fair” not very useful. Businesses will act in their best interest; you can’t change the rules of the game, you only have control over how you play it, in other words your own licensing and strategy.)
Here, we’ll just use the same terminology that the FSL does: the “harmful free-rider” problem.
However, the statement above is incorrect. Something like this would be more correct:
In plain language, you can do anything with FSL software except offer a similar paid service based on the software when it’s less than 2 years old.
What’s the difference? There are different forms of competition that are not harmful free-riding.
Multiple companies can offer a similar service/product which they base on the same project, which they all contribute to. They can synergize and grow the market together. (aka “non-zero-sum” if you want to sound smart). I think there are many good examples of this, e.g. Hadoop, Linux, Node.js, OpenStack, Opentelemetry, Prometheus, etc.
When the FSL website makes statements such as “You can do anything with FSL software except undermine its producer”, it seems to forget that some of the best and most ubiquitous software in the world is the result of synergies between multiple companies collaborating.
Furthermore, when the company that owns the copyright on the project turns its back on its community/customers, wouldn’t the community “deserve” a new player who offers a similar service, but on friendly terms? The new player may even contribute more to the project. Are they a harmful free-rider? Who gets to be the judge of that?
Let’s be clear, FSL allows no competition whatsoever, at least not during the first 2 years. What about after 2 years?
Zeke Gabrielse, one of the shepherds of Fair Source, said it well here:
Being 2 years old also puts any SaaS competition far enough back to not be a concern
Therefore, you may as well say no competition is allowed. Although, in Zeke’s post, I presume he was writing from the position of an actively developed software project. If the project becomes abandoned, the 2-year countdown is an obstacle (a surmountable one that eventually does let you compete), but in that case the copyright holder probably went bust, so you aren’t really competing with them either. The 2-year window is not designed to enable competition; instead, it is a contingency plan for when the company goes bankrupt. The wait can be needlessly painful for the community in such a situation. If a company is about to go bust, it could immediately release its Fair Source code as Open Source, but I wonder if this can be automated via the actual license text.
(I had found some ambiguous use of the term “direct” competition which I’ve reported and has since been resolved)
Perverse incentives
Humans are notoriously bad at predicting second-order effects, so I like to try. What could be some second-order effects of Fair Source projects? And how do they compare to Open Core?
- can companies first grow on top of their Fair Source codebase, take community contributions, and then switch to more restrictive or completely closed licensing, shutting out the community? Yes, if a CLA is in place (or by using the 2-year-old code). (This isn’t any different from any other CLA-using Open Source or Open Core project. Though with Open Core, you can’t take in external contributions on the proprietary parts to begin with.)
- if you enjoy a privileged position where others can’t meaningfully compete with you based on the same source code, that can affect how the company treats its community and its customers. It can push through undesirable changes, it can price more aggressively, etc. (these issues are the same with Open Core)
- With Open Source & Open Core, the company is incentivized to make the code well understood by the community. Under Fair Source it would still be sensible (in order to get free contributions), but at the same time, by hiding design documents, subtly obfuscating the code and withholding information it can also give itself the edge for when the code does become Open Source, although as we’ve seen, the 2 year delay makes competition unrealistic anyway.
All in all, nothing particularly worse than Open Core, here.
Developer sustainability
The FSL introduction post says:
We value user freedom and developer sustainability. Free and Open Source Software (FOSS) values user freedom exclusively. That is the source of its success, and the source of the free-rider problem that occasionally boils over into a full-blown tragedy of the commons, such as Heartbleed and Log4Shell.
F/OSS indeed doesn’t involve itself with sustainability, because of the simple fact that Open Source has nothing to do with business models and monetization. As stated above, it makes more sense to compare to Open Core.
It’s like saying asphalt paving machinery doesn’t care about funding and is therefore to blame when roads don’t get built. Therefore we need tolls. But it would be more useful to compare tolls to road taxes and vignettes.
Of course it happens that people dedicate themselves to writing open source projects, usually driven by their interests, don’t get paid, and receive volumes of support requests (including from commercial entities). This can become suffering, and it can also lead to codebases becoming critically important yet critically misunderstood and fragile. This is clearly a situation to avoid, and there are many ways to tackle it, ranging from sponsorships (e.g. GitHub, Tidelift), bounty programs (e.g. Algora) and direct funding (e.g. Sentry’s 500k donation) to many more initiatives that have launched in the last few years. Certainly a positive development. Sometimes formally abandoning a project is also a clear sign that puts the burden of responsibility onto whoever consumes it, and it can be a relief to the original author. If anything, it can trigger alarm bells within corporations and be a fast path to properly engaging and compensating the author. There is no way around the fact that developers (and people in general) are responsible for their own well-being and sometimes need to put their foot down, or put on their business hat (which many developers don’t like to do), if their decision to open source a project is resulting in problems. No amount of licensing can change this hard truth.
Furthermore, you can make money via Open Core around OSI-approved open source projects (e.g. Grafana) or via consulting/support, and many companies pay developers to work on (pure) Open Source code (Meta, Microsoft and Google are the most famous ones, but there are many smaller ones). Companies that try to achieve sustainability (let alone thriving) on pure open source software for which they are the main or single driving force are extremely rare. (Chef tried, and now System Initiative is trying to do it better. I remain skeptical but am hopeful and am rooting for them to prove the model.)
Doesn’t it sound a bit ironic that the path to getting developers paid is releasing your software via a non-compete license?
Do we reach developer sustainability by preventing developers from making money on top of projects they want to - or already have - contribute(d) to?
Important caveats:
- Fair Source does allow making money via consulting and auxiliary services related to the software.
- Open Core shuts out people similarly, but many of the business models above don’t.
CLA needed?
When a project uses an Open Source license with some restrictions (e.g. GPL with its copyleft), it is common to use a CLA so that the company backing it can use more restrictive or commercial licenses (either as a license change later on, or as dual licensing). With Fair Source (and indeed all Source Available licenses), this is also the case.
However, unlike with Open Source licenses, with Fair Source / Source Available licenses a CLA becomes much more of a necessity, because such a license without a CLA isn’t compatible with anything else, and the commercial FSL restriction may not always apply to outside contributions (it depends, e.g., on whether they can be offered stand-alone). I’m not a lawyer; for more clarity you should consult one. I think the Fair Source website, or at least its adoption guide, should mention something about CLAs, because it’s an important step beyond simply choosing a license and publishing, so I’ve raised this with them.
AGPL
The FSL website states:
AGPLv3 is not permissive enough. As a highly viral copyleft license, it exposes users to serious risk of having to divulge their proprietary source code.
This looks like fear mongering.
- AGPL is not categorically less permissive than FSL. It is less permissive when the code is 2 years old or older (and the FSL has turned into MIT/Apache2). For current and recent code, AGPL permits competition; FSL does not.
- The word “viral” is more divisive than accurate. In my mind, complying with AGPL is rather easy; my rule of thumb is that you trigger copyleft when you “ship”. Most engineers have an intuitive understanding of what it means to “ship” a feature, whether that’s on cloud or on-prem. In my experience, people struggle more with patent clauses, or even the relation between trademarks and software licensing, than they do with copyleft. There’s still some level of uncertainty and caution around AGPL, mainly due to its complexity. (Side note: Google and the CNCF don’t allow copyleft licenses, and their portfolios don’t have a whole lot of commercial success to show for it; I see mainly projects that can easily be picked up by Google.)
Heather Meeker, the lawyer consulted to draft the FSL, has spoken out against the virality discourse and has tried to temper the FUD around AGPL.
Conclusion
I think Fair Source, the FSL and the FCL have a lot to offer. Throughout my analysis I may have raised some criticisms, but if anything, it reminds me of how much Open Core can suck (though it depends on the relative size of core vs shell). So I find it a very compelling alternative to Open Core. Despite some poor choices of wording, I find it well executed: it ties up a lot of loose ends from previous initiatives (Source Available, BSL and other custom licenses) into a neat package. Despite the need for a CLA, it’s still quite easy to implement and is arguably more viable than Open Core is in its current state today. When comparing to Open Source, the main question is: which is worse, the “harmful free-rider” problem or the non-compete? (Anecdotally, my gut feeling says the former, but I’m on the lookout for data-driven evidence.) When comparing to Open Core, the main question is: is a business more viable keeping proprietary features closed, or making them source-available (non-compete)?
As mentioned, there are many more hybrid approaches possible. For a business thinking about their licensing strategy, it may make sense to think of these questions separately:
- should our proprietary shell be time based or feature scoped? Does it matter?
- should our proprietary shell be closed, or source-available?
I certainly would prefer to see companies and projects appear:
- as Fair Source, rather than not at all
- as Open Core, rather than not at all
- as Fair Source, rather than Open Core (depending on “shell thickness”).
- with more commercial restrictions from the get-go, instead of starting more permissively and re-licensing later. Just kidding, but that’s a topic for another day.
For vendors, I think there are some options left to explore, such as Open Core with a source-available (instead of closed) shell; something to consider for any company doing Open Core today. For end-users / customers, “Open Source” vendors are not the only ones to be taken with a grain of salt; it’s the same with Fair Source vendors, since they may have a more complicated arrangement than just using a Fair Source license.
Thanks to Heather Meeker and Joseph Jacks for providing input, although this article reflects only my personal views.
August 25, 2024
I made some time to give my own projects some love: I rewrote the Ansible role stafwag.ntpd and cleaned up some other Ansible roles.
There is some work ongoing for some other Ansible roles/projects, but this might be a topic for some other blog post(s) ;-)
stafwag.ntpd
An ansible role to configure ntpd/chrony/systemd-timesyncd.
This might be controversial, but I decided to add support for chrony and systemd-timesyncd. Ntpd is still supported and remains the default on the BSDs (FreeBSD, NetBSD, OpenBSD).
It’s possible to switch the ntp implementation by using the ntpd.provider directive; see the sketch below.
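A minimal usage sketch from the command line (the playbook and inventory names are placeholders of my own, and the exact variable layout should be checked against the role’s README):
$ ansible-galaxy install stafwag.ntpd
$ ansible-playbook -i inventory.yml ntp.yml -e '{"ntpd": {"provider": "chrony"}}'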
The Ansible role stafwag.ntpd v2.0.0 is available at:
- https://github.com/stafwag/ansible-role-ntpd
- https://galaxy.ansible.com/ui/standalone/roles/stafwag/ntpd/
Release notes
V2.0.0
- Added support for chrony and systemd-timesyncd on GNU/Linux
- systemd-timesyncd is the default on Debian GNU/Linux 12+ and Arch Linux
- ntpd is the default on the other operating systems (the BSDs, Solaris) and on Debian GNU/Linux 10 and 11
- chrony is the default on all other GNU/Linux distributions
- The role now takes an ntpd hash as its input
- Updated README
- CleanUp
stafwag.ntpdate
An ansible role to activate the ntpdate service on FreeBSD and NetBSD.
The ntpdate service is used on FreeBSD and NetBSD to sync the time during system boot-up. On most Linux distributions this is now handled by chronyd or systemd-timesyncd. The OpenBSD ntpd implementation, OpenNTPD, also has support to sync the time during system boot-up.
The role is available at:
- https://github.com/stafwag/ansible-role-ntpdate
- https://galaxy.ansible.com/ui/standalone/roles/stafwag/ntpdate/
Release notes
V1.0.0
- Initial release on Ansible Galaxy
- Added support for NetBSD
stafwag.libvirt
An ansible role to install libvirt/KVM packages and enable the libvirtd service.
The role is available at:
- https://github.com/stafwag/ansible-role-libvirt
- https://galaxy.ansible.com/ui/standalone/roles/stafwag/libvirt/
Release notes
V1.1.3
- Force bash for shell execution on Ubuntu, as the default dash shell doesn’t support pipefail.
V1.1.2
- CleanUp
- Corrected ansible-lint errors
- Removed install task “install/.yml’”;
- This was introduced to support Kali Linux, Kali Linux is reported as “Debian” now.
- It isn’t used in this role
- Removed the invalid CentOS-8.yml soft link; CentOS 8 should be caught by RedHat-yum.yml
stafwag.cloud_localds
An ansible role to create cloud-init config disk images. This role is a wrapper around the cloud-localds command.
It’s still planned to add support for distributions that don’t have cloud-localds as part of their official package repositories, like RedHat 8+.
See the GitHub issue: https://github.com/stafwag/ansible-role-cloud_localds/issues/7
The role is available at:
- https://github.com/stafwag/ansible-role-cloud_localds/
- https://galaxy.ansible.com/ui/standalone/roles/stafwag/cloud_localds/
Release notes
V2.1.3
- CleanUp
- Switched to vars and package to install the required packages
- Corrected ansible-lint errors
- Added more examples
stafwag.qemu_img
An ansible role to create QEMU disk images.
The role is available at:
- https://github.com/stafwag/ansible-role-qemu_img
- https://galaxy.ansible.com/ui/standalone/roles/stafwag/qemu_img/
Release notes
V2.3.0
- CleanUp Release
- Added doc/examples
- Updated meta data
- Switched to vars and package to install the required packages
- Corrected ansible-lint errors
stafwag.virt_install_import
An ansible role to import virtual machines with the virt-install import command.
The role is available at:
- https://github.com/stafwag/ansible-role-virt_install_import
- https://galaxy.ansible.com/ui/standalone/roles/stafwag/virt_install_import/
Release notes
- Use var and package to install pkgs
- v1.2.1 wasn’t merged correctly. The release should fix it…
- Switched to var and package to install the required packages
- Updated meta data
- Updated documentation and include examples
- Corrected ansible-lint errors
Have fun!
August 13, 2024
Here’s a neat little trick for those of you using Home Assistant while also driving a Volvo.
To get your Volvo driving data (fuel level, battery state, …) into Home Assistant, there’s the excellent volvo2mqtt addon.
One little annoyance is that every time it starts up, you will receive an e-mail from Volvo with a two-factor authentication code, which you then have to enter in Home Assistant.
Fortunately, there’s a solution for that: you can automate this using the built-in IMAP support of Home Assistant, with an automation such as this one:
alias: Volvo OTP
description: ""
trigger:
  - platform: event
    event_type: imap_content
    event_data:
      initial: true
      sender: no-reply@volvocars.com
      subject: Your Volvo ID Verification code
condition: []
action:
  - service: mqtt.publish
    metadata: {}
    data:
      topic: volvoAAOS2mqtt/otp_code
      payload: >-
        {{ trigger.event.data['text'] | regex_findall_index(find='Your Volvo ID verification code is:\s+(\d+)', index=0) }}
  - service: imap.delete
    data:
      entry: "{{ trigger.event.data['entry_id'] }}"
      uid: "{{ trigger.event.data['uid'] }}"
mode: single
This will post the OTP code to the right location and then delete the message from your inbox (if you’re using Google Mail, that means archiving it).
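If you want to confirm that the automation really publishes to the right topic, you could watch it with the mosquitto clients (this assumes the clients are installed and your broker runs on localhost; adjust host and credentials to your setup):
$ mosquitto_sub -h localhost -t volvoAAOS2mqtt/otp_code -v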
July 28, 2024
Updated @ Mon Sep 2 07:55:20 PM CEST 2024: Added devfs section
Updated @ Wed Sep 4 07:48:56 PM CEST 2024 : Corrected gpg-agent.conf
In a previous blog post, we set up GnuPG with smartcard support on Debian GNU/Linux.
In this blog post, we’ll install and configure GnuPG with smartcard support on FreeBSD.
The GNU/Linux blog post provides more details about GnuPG, so it might be useful for the FreeBSD users to read it first.
Likewise, Linux users are welcome to read this blog post if they’re interested in how it’s done on FreeBSD ;-)
Install the required packages
To begin, we need to install the required packages on FreeBSD.
Update the package database
Execute pkg update to update the package database.
Thunderbird
[staf@monty ~]$ sudo pkg install -y thunderbird
Password:
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Checking integrity... done (0 conflicting)
The most recent versions of packages are already installed
[staf@monty ~]$
lsusb
You can verify the USB devices on FreeBSD using the usbconfig command or lsusb, which is also available on FreeBSD as part of the usbutils package.
[staf@monty ~/git/stafnet/blog]$ sudo pkg install usbutils
Password:
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 3 package(s) will be affected (of 0 checked):
New packages to be INSTALLED:
usbhid-dump: 1.4
usbids: 20240318
usbutils: 0.91
Number of packages to be installed: 3
301 KiB to be downloaded.
Proceed with this action? [y/N]: y
[1/3] Fetching usbutils-0.91.pkg: 100% 54 KiB 55.2kB/s 00:01
[2/3] Fetching usbhid-dump-1.4.pkg: 100% 32 KiB 32.5kB/s 00:01
[3/3] Fetching usbids-20240318.pkg: 100% 215 KiB 220.5kB/s 00:01
Checking integrity... done (0 conflicting)
[1/3] Installing usbhid-dump-1.4...
[1/3] Extracting usbhid-dump-1.4: 100%
[2/3] Installing usbids-20240318...
[2/3] Extracting usbids-20240318: 100%
[3/3] Installing usbutils-0.91...
[3/3] Extracting usbutils-0.91: 100%
[staf@monty ~/git/stafnet/blog]$
GnuPG
We’ll need GnuPG (of course), so ensure that it is installed.
[staf@monty ~]$ sudo pkg install gnupg
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Checking integrity... done (0 conflicting)
The most recent versions of packages are already installed
[staf@monty ~]$
Smartcard packages
To enable smartcard support on FreeBSD, we’ll need to install the smartcard packages. The same software as on GNU/Linux - opensc - is available on FreeBSD.
pkg provides
It’s handy to be able to check which packages provide certain files. On FreeBSD this is provided by the provides plugin. This plugin is not enabled by default in the pkg command.
To install the provides plugin, install the pkg-provides package.
[staf@monty ~]$ sudo pkg install pkg-provides
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 1 package(s) will be affected (of 0 checked):
New packages to be INSTALLED:
pkg-provides: 0.7.3_3
Number of packages to be installed: 1
12 KiB to be downloaded.
Proceed with this action? [y/N]: y
[1/1] Fetching pkg-provides-0.7.3_3.pkg: 100% 12 KiB 12.5kB/s 00:01
Checking integrity... done (0 conflicting)
[1/1] Installing pkg-provides-0.7.3_3...
[1/1] Extracting pkg-provides-0.7.3_3: 100%
=====
Message from pkg-provides-0.7.3_3:
--
In order to use the pkg-provides plugin you need to enable plugins in pkg.
To do this, uncomment the following lines in /usr/local/etc/pkg.conf file
and add pkg-provides to the supported plugin list:
PKG_PLUGINS_DIR = "/usr/local/lib/pkg/";
PKG_ENABLE_PLUGINS = true;
PLUGINS [ provides ];
After that run `pkg plugins' to see the plugins handled by pkg.
[staf@monty ~]$
Edit the pkg configuration to enable the provides plug-in.
staf@freebsd-gpg:~ $ sudo vi /usr/local/etc/pkg.conf
PKG_PLUGINS_DIR = "/usr/local/lib/pkg/";
PKG_ENABLE_PLUGINS = true;
PLUGINS [ provides ];
Verify that the plugin is enabled.
staf@freebsd-gpg:~ $ sudo pkg plugins
NAME DESC VERSION
provides A plugin for querying which package provides a particular file 0.7.3
staf@freebsd-gpg:~ $
Update the pkg-provides database.
staf@freebsd-gpg:~ $ sudo pkg provides -u
Fetching provides database: 100% 18 MiB 9.6MB/s 00:02
Extracting database....success
staf@freebsd-gpg:~ $
Install the required packages
Let’s check which packages provide the tools to set up the smartcard reader on FreeBSD, and install the required packages.
staf@freebsd-gpg:~ $ pkg provides "pkcs15-tool"
Name : opensc-0.25.1
Comment : Libraries and utilities to access smart cards
Repo : FreeBSD
Filename: usr/local/share/man/man1/pkcs15-tool.1.gz
usr/local/etc/bash_completion.d/pkcs15-tool
usr/local/bin/pkcs15-tool
staf@freebsd-gpg:~ $
staf@freebsd-gpg:~ $ pkg provides "bin/pcsc"
Name : pcsc-lite-2.2.2,2
Comment : Middleware library to access a smart card using SCard API (PC/SC)
Repo : FreeBSD
Filename: usr/local/sbin/pcscd
usr/local/bin/pcsc-spy
staf@freebsd-gpg:~ $
[staf@monty ~]$ sudo pkg install opensc pcsc-lite
Password:
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Checking integrity... done (0 conflicting)
The most recent versions of packages are already installed
[staf@monty ~]$
staf@freebsd-gpg:~ $ sudo pkg install -y pcsc-tools
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Checking integrity... done (0 conflicting)
The most recent versions of packages are already installed
staf@freebsd-gpg:~ $
staf@freebsd-gpg:~ $ sudo pkg install -y ccid
Password:
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Checking integrity... done (0 conflicting)
The most recent versions of packages are already installed
staf@freebsd-gpg:~ $
USB
To use the smartcard reader we will need access to the USB devices as the user we use for our desktop environment. (No, this shouldn’t be the root user :-) )
permissions
verify
Execute the usbconfig command to verify that you can access the USB devices.
[staf@snuffel ~]$ usbconfig
No device match or lack of permissions.
[staf@snuffel ~]$
If you don’t have access, verify the permissions of the USB devices.
[staf@snuffel ~]$ ls -l /dev/usbctl
crw-r--r-- 1 root operator 0x5b Sep 2 19:17 /dev/usbctl
[staf@snuffel ~]$ ls -l /dev/usb/
total 0
crw------- 1 root operator 0x34 Sep 2 19:17 0.1.0
crw------- 1 root operator 0x4f Sep 2 19:17 0.1.1
crw------- 1 root operator 0x36 Sep 2 19:17 1.1.0
crw------- 1 root operator 0x53 Sep 2 19:17 1.1.1
crw------- 1 root operator 0x7e Sep 2 19:17 1.2.0
crw------- 1 root operator 0x82 Sep 2 19:17 1.2.1
crw------- 1 root operator 0x83 Sep 2 19:17 1.2.2
crw------- 1 root operator 0x76 Sep 2 19:17 1.3.0
crw------- 1 root operator 0x8a Sep 2 19:17 1.3.1
crw------- 1 root operator 0x8b Sep 2 19:17 1.3.2
crw------- 1 root operator 0x8c Sep 2 19:17 1.3.3
crw------- 1 root operator 0x8d Sep 2 19:17 1.3.4
crw------- 1 root operator 0x38 Sep 2 19:17 2.1.0
crw------- 1 root operator 0x56 Sep 2 19:17 2.1.1
crw------- 1 root operator 0x3a Sep 2 19:17 3.1.0
crw------- 1 root operator 0x51 Sep 2 19:17 3.1.1
crw------- 1 root operator 0x3c Sep 2 19:17 4.1.0
crw------- 1 root operator 0x55 Sep 2 19:17 4.1.1
crw------- 1 root operator 0x3e Sep 2 19:17 5.1.0
crw------- 1 root operator 0x54 Sep 2 19:17 5.1.1
crw------- 1 root operator 0x80 Sep 2 19:17 5.2.0
crw------- 1 root operator 0x85 Sep 2 19:17 5.2.1
crw------- 1 root operator 0x86 Sep 2 19:17 5.2.2
crw------- 1 root operator 0x87 Sep 2 19:17 5.2.3
crw------- 1 root operator 0x40 Sep 2 19:17 6.1.0
crw------- 1 root operator 0x52 Sep 2 19:17 6.1.1
crw------- 1 root operator 0x42 Sep 2 19:17 7.1.0
crw------- 1 root operator 0x50 Sep 2 19:17 7.1.1
devfs
When the /dev/usb* devices are only accessible by the root user, you probably want to create a devfs.rules ruleset that grants permissions to the operator group or another group.
See https://man.freebsd.org/cgi/man.cgi?devfs.rules for more details.
/etc/rc.conf
Update the /etc/rc.conf to apply custom devfs permissions.
[staf@snuffel /etc]$ sudo vi rc.conf
devfs_system_ruleset="localrules"
/etc/devfs.rules
Create or update /etc/devfs.rules with the updated permissions to grant read/write access to the operator group.
[staf@snuffel /etc]$ sudo vi devfs.rules
[localrules=10]
add path 'usbctl*' mode 0660 group operator
add path 'usb/*' mode 0660 group operator
Restart the devfs service to apply the custom devfs ruleset.
[staf@snuffel /etc]$ sudo -i
root@snuffel:~ #
root@snuffel:~ # service devfs restart
The operator group should have read/write permissions now.
root@snuffel:~ # ls -l /dev/usb/
total 0
crw-rw---- 1 root operator 0x34 Sep 2 19:17 0.1.0
crw-rw---- 1 root operator 0x4f Sep 2 19:17 0.1.1
crw-rw---- 1 root operator 0x36 Sep 2 19:17 1.1.0
crw-rw---- 1 root operator 0x53 Sep 2 19:17 1.1.1
crw-rw---- 1 root operator 0x7e Sep 2 19:17 1.2.0
crw-rw---- 1 root operator 0x82 Sep 2 19:17 1.2.1
crw-rw---- 1 root operator 0x83 Sep 2 19:17 1.2.2
crw-rw---- 1 root operator 0x76 Sep 2 19:17 1.3.0
crw-rw---- 1 root operator 0x8a Sep 2 19:17 1.3.1
crw-rw---- 1 root operator 0x8b Sep 2 19:17 1.3.2
crw-rw---- 1 root operator 0x8c Sep 2 19:17 1.3.3
crw-rw---- 1 root operator 0x8d Sep 2 19:17 1.3.4
crw-rw---- 1 root operator 0x38 Sep 2 19:17 2.1.0
crw-rw---- 1 root operator 0x56 Sep 2 19:17 2.1.1
crw-rw---- 1 root operator 0x3a Sep 2 19:17 3.1.0
crw-rw---- 1 root operator 0x51 Sep 2 19:17 3.1.1
crw-rw---- 1 root operator 0x3c Sep 2 19:17 4.1.0
crw-rw---- 1 root operator 0x55 Sep 2 19:17 4.1.1
crw-rw---- 1 root operator 0x3e Sep 2 19:17 5.1.0
crw-rw---- 1 root operator 0x54 Sep 2 19:17 5.1.1
crw-rw---- 1 root operator 0x80 Sep 2 19:17 5.2.0
crw-rw---- 1 root operator 0x85 Sep 2 19:17 5.2.1
crw-rw---- 1 root operator 0x86 Sep 2 19:17 5.2.2
crw-rw---- 1 root operator 0x87 Sep 2 19:17 5.2.3
crw-rw---- 1 root operator 0x40 Sep 2 19:17 6.1.0
crw-rw---- 1 root operator 0x52 Sep 2 19:17 6.1.1
crw-rw---- 1 root operator 0x42 Sep 2 19:17 7.1.0
crw-rw---- 1 root operator 0x50 Sep 2 19:17 7.1.1
root@snuffel:~ #
Make sure that you’re part of the operator group
staf@freebsd-gpg:~ $ ls -l /dev/usbctl
crw-rw---- 1 root operator 0x5a Jul 13 17:32 /dev/usbctl
staf@freebsd-gpg:~ $ ls -l /dev/usb/
total 0
crw-rw---- 1 root operator 0x31 Jul 13 17:32 0.1.0
crw-rw---- 1 root operator 0x53 Jul 13 17:32 0.1.1
crw-rw---- 1 root operator 0x33 Jul 13 17:32 1.1.0
crw-rw---- 1 root operator 0x51 Jul 13 17:32 1.1.1
crw-rw---- 1 root operator 0x35 Jul 13 17:32 2.1.0
crw-rw---- 1 root operator 0x52 Jul 13 17:32 2.1.1
crw-rw---- 1 root operator 0x37 Jul 13 17:32 3.1.0
crw-rw---- 1 root operator 0x54 Jul 13 17:32 3.1.1
crw-rw---- 1 root operator 0x73 Jul 13 17:32 3.2.0
crw-rw---- 1 root operator 0x75 Jul 13 17:32 3.2.1
crw-rw---- 1 root operator 0x76 Jul 13 17:32 3.3.0
crw-rw---- 1 root operator 0x78 Jul 13 17:32 3.3.1
staf@freebsd-gpg:~ $
You’ll need to be part of the operator group to access the USB devices.
Execute the vigr command and add the user to the operator group.
staf@freebsd-gpg:~ $ sudo vigr
operator:*:5:root,staf
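If you prefer not to edit the group file by hand, pw(8) can do the same thing; a one-liner sketch (replace staf with your own username):
$ sudo pw groupmod operator -m staf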
Relogin and check that you are in the operator group.
staf@freebsd-gpg:~ $ id
uid=1001(staf) gid=1001(staf) groups=1001(staf),0(wheel),5(operator)
staf@freebsd-gpg:~ $
The usbconfig command should work now.
staf@freebsd-gpg:~ $ usbconfig
ugen1.1: <Intel UHCI root HUB> at usbus1, cfg=0 md=HOST spd=FULL (12Mbps) pwr=SAVE (0mA)
ugen2.1: <Intel UHCI root HUB> at usbus2, cfg=0 md=HOST spd=FULL (12Mbps) pwr=SAVE (0mA)
ugen0.1: <Intel UHCI root HUB> at usbus0, cfg=0 md=HOST spd=FULL (12Mbps) pwr=SAVE (0mA)
ugen3.1: <Intel EHCI root HUB> at usbus3, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen3.2: <QEMU Tablet Adomax Technology Co., Ltd> at usbus3, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (100mA)
ugen3.3: <QEMU Tablet Adomax Technology Co., Ltd> at usbus3, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (100mA)
staf@freebsd-gpg:~ $
SmartCard configuration
Verify the USB connection
The first step is to ensure your smartcard reader is detected at the USB level. Execute usbconfig and lsusb and make sure your smartcard reader is listed.
usbconfig
List the USB devices.
[staf@monty ~/git]$ usbconfig
ugen1.1: <Intel EHCI root HUB> at usbus1, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen0.1: <Intel XHCI root HUB> at usbus0, cfg=0 md=HOST spd=SUPER (5.0Gbps) pwr=SAVE (0mA)
ugen2.1: <Intel EHCI root HUB> at usbus2, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen2.2: <Integrated Rate Matching Hub Intel Corp.> at usbus2, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen1.2: <Integrated Rate Matching Hub Intel Corp.> at usbus1, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen0.2: <AU9540 Smartcard Reader Alcor Micro Corp.> at usbus0, cfg=0 md=HOST spd=FULL (12Mbps) pwr=ON (50mA)
ugen0.3: <VFS 5011 fingerprint sensor Validity Sensors, Inc.> at usbus0, cfg=0 md=HOST spd=FULL (12Mbps) pwr=ON (100mA)
ugen0.4: <Centrino Bluetooth Wireless Transceiver Intel Corp.> at usbus0, cfg=0 md=HOST spd=FULL (12Mbps) pwr=ON (0mA)
ugen0.5: <SunplusIT INC. Integrated Camera> at usbus0, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (500mA)
ugen0.6: <X-Rite Pantone Color Sensor X-Rite, Inc.> at usbus0, cfg=0 md=HOST spd=LOW (1.5Mbps) pwr=ON (100mA)
ugen0.7: <GemPC Key SmartCard Reader Gemalto (was Gemplus)> at usbus0, cfg=0 md=HOST spd=FULL (12Mbps) pwr=ON (50mA)
[staf@monty ~/git]$
lsusb
[staf@monty ~/git/stafnet/blog]$ lsusb
Bus /dev/usb Device /dev/ugen0.7: ID 08e6:3438 Gemalto (was Gemplus) GemPC Key SmartCard Reader
Bus /dev/usb Device /dev/ugen0.6: ID 0765:5010 X-Rite, Inc. X-Rite Pantone Color Sensor
Bus /dev/usb Device /dev/ugen0.5: ID 04f2:b39a Chicony Electronics Co., Ltd
Bus /dev/usb Device /dev/ugen0.4: ID 8087:07da Intel Corp. Centrino Bluetooth Wireless Transceiver
Bus /dev/usb Device /dev/ugen0.3: ID 138a:0017 Validity Sensors, Inc. VFS 5011 fingerprint sensor
Bus /dev/usb Device /dev/ugen0.2: ID 058f:9540 Alcor Micro Corp. AU9540 Smartcard Reader
Bus /dev/usb Device /dev/ugen1.2: ID 8087:8008 Intel Corp. Integrated Rate Matching Hub
Bus /dev/usb Device /dev/ugen2.2: ID 8087:8000 Intel Corp. Integrated Rate Matching Hub
Bus /dev/usb Device /dev/ugen2.1: ID 0000:0000
Bus /dev/usb Device /dev/ugen0.1: ID 0000:0000
Bus /dev/usb Device /dev/ugen1.1: ID 0000:0000
[staf@monty ~/git/stafnet/blog]$
Check the GnuPG smartcard status
Let’s check if we get access to our smart card with gpg.
This might work if you have a native-supported GnuPG smartcard.
[staf@monty ~]$ gpg --card-status
gpg: selecting card failed: Operation not supported by device
gpg: OpenPGP card not available: Operation not supported by device
[staf@monty ~]$
In my case, it doesn’t work. I prefer the OpenSC interface anyway; it might be useful if you want to use your smartcard for other purposes.
opensc
Enable pcscd
FreeBSD has a handy tool, sysrc, to manage rc.conf.
Enable the pcscd service.
[staf@monty ~]$ sudo sysrc pcscd_enable=YES
Password:
pcscd_enable: NO -> YES
[staf@monty ~]$
Start the pcscd service.
[staf@monty ~]$ sudo /usr/local/etc/rc.d/pcscd start
Password:
Starting pcscd.
[staf@monty ~]$
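Alternatively, the service(8) wrapper runs the same rc.d script:
$ sudo service pcscd start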
Verify smartcard access
pcsc_scan
The pcsc-tools package provides a tool, pcsc_scan, to verify the smartcard readers.
Execute pcsc_scan to verify that your smartcard is detected.
[staf@monty ~]$ pcsc_scan
PC/SC device scanner
V 1.7.1 (c) 2001-2022, Ludovic Rousseau <ludovic.rousseau@free.fr>
Using reader plug'n play mechanism
Scanning present readers...
0: Gemalto USB Shell Token V2 (284C3E93) 00 00
1: Alcor Micro AU9540 01 00
Thu Jul 25 18:42:34 2024
Reader 0: Gemalto USB Shell Token V2 (<snip>) 00 00
Event number: 0
Card state: Card inserted,
ATR: <snip>
ATR: <snip>
+ TS = 3B --> Direct Convention
+ T0 = DA, Y(1): 1101, K: 10 (historical bytes)
TA(1) = 18 --> Fi=372, Di=12, 31 cycles/ETU
129032 bits/s at 4 MHz, fMax for Fi = 5 MHz => 161290 bits/s
TC(1) = FF --> Extra guard time: 255 (special value)
TD(1) = 81 --> Y(i+1) = 1000, Protocol T = 1
-----
TD(2) = B1 --> Y(i+1) = 1011, Protocol T = 1
-----
TA(3) = FE --> IFSC: 254
TB(3) = 75 --> Block Waiting Integer: 7 - Character Waiting Integer: 5
TD(3) = 1F --> Y(i+1) = 0001, Protocol T = 15 - Global interface bytes following
-----
TA(4) = 03 --> Clock stop: not supported - Class accepted by the card: (3G) A 5V B 3V
+ Historical bytes: 00 31 C5 73 C0 01 40 00 90 00
Category indicator byte: 00 (compact TLV data object)
Tag: 3, len: 1 (card service data byte)
Card service data byte: C5
- Application selection: by full DF name
- Application selection: by partial DF name
- EF.DIR and EF.ATR access services: by GET DATA command
- Card without MF
Tag: 7, len: 3 (card capabilities)
Selection methods: C0
- DF selection by full DF name
- DF selection by partial DF name
Data coding byte: 01
- Behaviour of write functions: one-time write
- Value 'FF' for the first byte of BER-TLV tag fields: invalid
- Data unit in quartets: 2
Command chaining, length fields and logical channels: 40
- Extended Lc and Le fields
- Logical channel number assignment: No logical channel
- Maximum number of logical channels: 1
Mandatory status indicator (3 last bytes)
LCS (life card cycle): 00 (No information given)
SW: 9000 (Normal processing.)
+ TCK = 0C (correct checksum)
Possibly identified card (using /usr/local/share/pcsc/smartcard_list.txt):
<snip>
OpenPGP Card V2
Reader 1: Alcor Micro AU9540 01 00
Event number: 0
Card state
pkcs15
pkcs15 is the application interface for hardware tokens, while pkcs11 is the low-level interface.
You can use pkcs15-tool -D to verify that your smartcard is detected.
[staf@monty ~]$ pkcs15-tool -D
Using reader with a card: Gemalto USB Shell Token V2 (<snip>) 00 00
PKCS#15 Card [OpenPGP card]:
Version : 0
Serial number : <snip>
Manufacturer ID: ZeitControl
Language : nl
Flags : PRN generation, EID compliant
PIN [User PIN]
Object Flags : [0x03], private, modifiable
Auth ID : 03
ID : 02
Flags : [0x13], case-sensitive, local, initialized
Length : min_len:6, max_len:32, stored_len:32
Pad char : 0x00
Reference : 2 (0x02)
Type : UTF-8
Path : 3f00
Tries left : 3
PIN [User PIN (sig)]
Object Flags : [0x03], private, modifiable
Auth ID : 03
ID : 01
Flags : [0x13], case-sensitive, local, initialized
Length : min_len:6, max_len:32, stored_len:32
Pad char : 0x00
Reference : 1 (0x01)
Type : UTF-8
Path : 3f00
Tries left : 0
PIN [Admin PIN]
Object Flags : [0x03], private, modifiable
ID : 03
Flags : [0x9B], case-sensitive, local, unblock-disabled, initialized, soPin
Length : min_len:8, max_len:32, stored_len:32
Pad char : 0x00
Reference : 3 (0x03)
Type : UTF-8
Path : 3f00
Tries left : 0
Private RSA Key [Signature key]
Object Flags : [0x03], private, modifiable
Usage : [0x20C], sign, signRecover, nonRepudiation
Access Flags : [0x1D], sensitive, alwaysSensitive, neverExtract, local
Algo_refs : 0
ModLength : 3072
Key ref : 0 (0x00)
Native : yes
Auth ID : 01
ID : 01
MD:guid : <snip>
Private RSA Key [Encryption key]
Object Flags : [0x03], private, modifiable
Usage : [0x22], decrypt, unwrap
Access Flags : [0x1D], sensitive, alwaysSensitive, neverExtract, local
Algo_refs : 0
ModLength : 3072
Key ref : 1 (0x01)
Native : yes
Auth ID : 02
ID : 02
MD:guid : <snip>
Private RSA Key [Authentication key]
Object Flags : [0x03], private, modifiable
Usage : [0x200], nonRepudiation
Access Flags : [0x1D], sensitive, alwaysSensitive, neverExtract, local
Algo_refs : 0
ModLength : 3072
Key ref : 2 (0x02)
Native : yes
Auth ID : 02
ID : 03
MD:guid : <snip>
Public RSA Key [Signature key]
Object Flags : [0x02], modifiable
Usage : [0xC0], verify, verifyRecover
Access Flags : [0x02], extract
ModLength : 3072
Key ref : 0 (0x00)
Native : no
Path : b601
ID : 01
Public RSA Key [Encryption key]
Object Flags : [0x02], modifiable
Usage : [0x11], encrypt, wrap
Access Flags : [0x02], extract
ModLength : 3072
Key ref : 0 (0x00)
Native : no
Path : b801
ID : 02
Public RSA Key [Authentication key]
Object Flags : [0x02], modifiable
Usage : [0x40], verify
Access Flags : [0x02], extract
ModLength : 3072
Key ref : 0 (0x00)
Native : no
Path : a401
ID : 03
[staf@monty ~]$
GnuPG configuration
First test
Stop (kill) the scdaemon, to ensure that scdaemon tries to use the opensc interface.
[staf@monty ~]$ gpgconf --kill scdaemon
[staf@monty ~]$
[staf@monty ~]$ ps aux | grep -i scdaemon
staf 9236 0.0 0.0 12808 2496 3 S+ 20:42 0:00.00 grep -i scdaemon
[staf@monty ~]$
Try to read the card status again.
[staf@monty ~]$ gpg --card-status
gpg: selecting card failed: Operation not supported by device
gpg: OpenPGP card not available: Operation not supported by device
[staf@monty ~]$
Reconfigure GnuPG
Go to the .gnupg directory in your $HOME directory.
[staf@monty ~]$ cd .gnupg/
[staf@monty ~/.gnupg]$
scdaemon
Reconfigure scdaemon to disable the internal ccid driver and enable logging - always useful to verify why something isn’t working…
[staf@monty ~/.gnupg]$ vi scdaemon.conf
disable-ccid
verbose
debug-level expert
debug-all
log-file /home/staf/logs/scdaemon.log
gpg-agent
Enable debug logging for the gpg-agent.
[staf@monty ~/.gnupg]$ vi gpg-agent.conf
debug-level expert
verbose
log-file /home/staf/logs/gpg-agent.log
Verify
Stop the scdaemon.
[staf@monty ~/.gnupg]$ gpgconf --kill scdaemon
[staf@monty ~/.gnupg]$
If everything goes well, gpg will detect the smartcard.
If not, you have some logs to help with the debugging ;-)
[staf@monty ~/.gnupg]$ gpg --card-status
Reader ...........: Gemalto USB Shell Token V2 (<snip>) 00 00
Application ID ...: <snip>
Application type .: OpenPGP
Version ..........: 2.1
Manufacturer .....: ZeitControl
Serial number ....: 000046F1
Name of cardholder: <snip>
Language prefs ...: nl
Salutation .......: Mr.
URL of public key : <snip>
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: xxxxxxx xxxxxxx xxxxxxx
Max. PIN lengths .: 32 32 32
PIN retry counter : 3 0 3
Signature counter : 80
Signature key ....: <snip>
created ....: <snip>
Encryption key....: <snip>
created ....: <snip>
Authentication key: <snip>
created ....: <snip>
General key info..: [none]
[staf@monty ~/.gnupg]$
Test
shadow private keys
After you executed gpg --card-status, GnuPG created “shadow private keys”. These keys just contain references to the hardware token on which the private keys are stored.
[staf@monty ~/.gnupg]$ ls -l private-keys-v1.d/
total 14
-rw------- 1 staf staf 976 Mar 24 11:35 <snip>.key
-rw------- 1 staf staf 976 Mar 24 11:35 <snip>.key
-rw------- 1 staf staf 976 Mar 24 11:35 <snip>.key
[staf@monty ~/.gnupg]$
You can list the (shadow) private keys with the gpg --list-secret-keys command.
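For example, the command below lists them; GnuPG marks card-backed secret keys with a > after sec/ssb (output omitted here):
$ gpg --list-secret-keys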
Pinentry
To be able to type in your PIN code, you’ll need a pinentry application, unless your smartcard reader has a pinpad.
You can use pkg provides to verify which pinentry applications are available.
For the integration with Thunderbird, you probably want to have a graphical-enabled version. But this is the topic for a next blog post ;-)
We’ll stick with the (n)curses version for now.
Install a pinentry program.
[staf@monty ~/.gnupg]$ pkg provides pinentry | grep -i curses
Name : pinentry-curses-1.3.1
Comment : Curses version of the GnuPG password dialog
Filename: usr/local/bin/pinentry-curses
[staf@monty ~/.gnupg]$
[staf@monty ~/.gnupg]$ sudo pkg install pinentry-curses
Password:
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Checking integrity... done (0 conflicting)
The most recent versions of packages are already installed
[staf@monty ~/.gnupg]$
A soft link is created for the pinentry binary. On FreeBSD, the pinentry soft link is managed by the pinentry package. You can verify this with the pkg which command.
[staf@monty ~]$ pkg which /usr/local/bin/pinentry
/usr/local/bin/pinentry was installed by package pinentry-1.3.1
[staf@monty ~]$
The curses version is the default. If you want to use another pinentry program, set it in the gpg-agent configuration ($HOME/.gnupg/gpg-agent.conf):
pinentry-program <PATH>
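For example, to point it at the curses version installed above (the path comes from the pkg provides output earlier):
pinentry-program /usr/local/bin/pinentry-curses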
Import your public key
Import your public key.
[staf@monty /tmp]$ gpg --import <snip>.asc
gpg: key <snip>: public key "<snip>" imported
gpg: Total number processed: 1
gpg: imported: 1
[staf@monty /tmp]$
List the public keys.
[staf@monty /tmp]$ gpg --list-keys
/home/staf/.gnupg/pubring.kbx
-----------------------------
pub XXXXXXX XXXX-XX-XX [SC]
<snip>
uid [ unknown] <snip>
sub XXXXXXX XXXX-XX-XX [A]
sub XXXXXXX XXXX-XX-XX [E]
[staf@monty /tmp]$
As a test, we try to sign something with the private key on our GnuPG smartcard.
Create a test file.
[staf@monty /tmp]$ echo "foobar" > foobar
[staf@monty /tmp]$
[staf@monty /tmp]$ gpg --sign foobar
If your smartcard isn’t inserted, GnuPG will ask you to insert it. GnuPG asks for the smartcard with the serial number stored in the shadow private key.
┌────────────────────────────────────────────┐
│ Please insert the card with serial number: │
│ │
│ XXXX XXXXXXXX │
│ │
│ │
│ <OK> <Cancel> │
└────────────────────────────────────────────┘
Type in your PIN code.
┌──────────────────────────────────────────────┐
│ Please unlock the card │
│ │
│ Number: XXXX XXXXXXXX │
│ Holder: XXXX XXXXXXXXXX │
│ Counter: XX │
│ │
│ PIN ________________________________________ │
│ │
│ <OK> <Cancel> │
└──────────────────────────────────────────────┘
[staf@monty /tmp]$ ls -l foobar*
-rw-r----- 1 staf wheel 7 Jul 27 11:11 foobar
-rw-r----- 1 staf wheel 481 Jul 27 11:17 foobar.gpg
[staf@monty /tmp]$
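As a final check, you can verify the signature against the public key we imported earlier:
$ gpg --verify foobar.gpg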
In the next blog post in this series, we’ll configure Thunderbird to use the smartcard for OpenPGP email encryption.
Have fun!
Links
- https://vermaden.wordpress.com/2019/01/17/less-known-pkg8-features/
- https://www.floss-shop.de/en/security-privacy/smartcards/13/openpgp-smart-card-v3.4?c=11
- https://www.gnupg.org/howtos/card-howto/en/smartcard-howto-single.html
- https://support.nitrokey.com/t/nk3-mini-gpg-selecting-card-failed-no-such-device-gpg-card-setup/5057/7
- https://security.stackexchange.com/questions/233916/gnupg-connecting-to-specific-card-reader-when-multiple-reader-available#233918
- https://www.fsij.org/doc-gnuk/stop-scdaemon.html
- https://wiki.debian.org/Smartcards
- https://github.com/OpenSC/OpenSC/wiki/Overview/c70c57c1811f54fe3b3989d01708b45b86fafe11
- https://superuser.com/questions/1693289/gpg-warning-not-using-as-default-key-no-secret-key
- https://stackoverflow.com/questions/46689885/how-to-get-public-key-from-an-openpgp-smart-card-without-using-key-servers
July 26, 2024
July 25, 2024
July 23, 2024
Keeping up appearances in tech
The word "rant" is used far too often, and in various ways.
It's meant to imply aimless, angry venting.
But often it means:
- Naming problems without proposing solutions - this makes me feel confused.
- Naming problems and assigning blame - this makes me feel bad.
I saw a remarkable pair of tweets the other day.
In the wake of the outage, the CEO of CrowdStrike sent out a public announcement. It's purely factual. The scope of the problem is identified, the known facts are stated, and the logistics of disaster relief are set in motion.
Millions of computers were affected. This is the equivalent of a frazzled official giving a brief statement in the aftermath of an earthquake, directing people to the Red Cross.
Everything is basically on fire for everyone involved. Systems are failing everywhere, some critical, and quite likely people are panicking. The important thing is to give the technicians the information and tools to fix it, and for everyone else to do what they can, and stay out of the way.
In response, a communication professional posted an 'improved' version:
Credit where credit is due, she nailed the style. 10/10. It seems unobjectionable, at first. Let's go through, shall we?
Opposite Day
First is that the CEO is "devastated." A feeling. And they are personally going to ensure it's fixed for every single user.
This focuses on the individual who is inconvenienced. Not the disaster. They take a moment out of their time to say they are so, so sorry a mistake was made. They have let you and everyone else down, and that shouldn't happen. That's their responsibility.
By this point, the original statement had already told everyone the relevant facts. Here the technical details are left to the imagination. The writer's self-assigned job is to wrap the message in a more palatable envelope.
Everyone will be working "all day, all night, all weekends," indeed, "however long it takes," to avoid it happening again.
I imagine this is meant to be inspiring and reassuring. But if I were a CrowdStrike technician or engineer, I would find it demoralizing: the boss, who will actually be personally fixing diddly-squat, is saying that the long hours of others are a sacrifice they're willing to make.
Plus, CrowdStrike's customers are in the same boat: their technicians get volunteered too. They can't magically unbrick PCs from a distance, so "until it's fully fixed for every single user" would be a promise outsiders will have to keep. Lovely.
There's even a punch line: an invitation to go contact them, the quickest way linked directly. It thanks people for reaching out.
If everything is on fire, that includes the phone lines, the inboxes, and so on. The most stupid thing you could do in such a situation is to tell more people to contact you, right away. Don't encourage it! That's why the original statement refers to pre-existing lines of communication, internal representatives, and so on. The Support department would hate the CEO too.
Root Cause
If you're wondering about the pictures, it's Hyacinth Bucket, from 90s UK sitcom Keeping Up Appearances, who would always insist "it's pronounced Bouquet."
Hyacinth's ambitions always landed her out of her depth, surrounded by upper-class people she's trying to impress, in the midst of an embarrassing disaster. Her increasingly desperate attempts to save face, which invariably made things worse, are the main source of comedy.
Try reading that second statement in her voice.
I’m devastated to see the scale of today’s outage and will be personally working on it together with our team until it’s fully fixed for every single user.
But I wanted to take a moment to come here and tell you that I am sorry. People around the world rely on us, and incidents like this can’t happen. This came from an error that ultimately is my responsibility.
I can hear it perfectly, telegraphing Britishness to restore dignity for all. If she were in tech she would give that statement.
It's about reputation management first, projecting the image of competence and accountability. But she's giving the speech in front of a burning building, not realizing the entire exercise is futile. Worse, she thinks she's nailing it.
If CrowdStrike had sent this out, some would've applauded and called it an admirable example of wise and empathetic communication. Real leadership qualities.
But it's the exact opposite. It focuses on the wrong things, it alienates the staff, and it definitely amplifies the chaos. It's Monty Python-esque.
Apologizing is pointless here, the damage is already done. What matters is how severe it is and whether it could've been avoided. This requires a detailed root-cause analysis and remedy. Otherwise you only have their word. Why would that reassure you?
The original restated the company's mission: security and stability. Those are the stakes to regain a modicum of confidence.
You may think that I'm reading too much into this. But I know the exact vibe on an engineering floor when the shit hits the fan. I also know how executives and staff without that experience end up missing the point entirely. I once worked for a Hyacinth Bucket. It's not an anecdote, it's allegory.
They simply don't get the engineering mindset, and confuse authority with ownership. They step on everyone's toes without realizing, because they're constantly wearing clown shoes. Nobody tells them.
Softness as a Service
The change in style between #1 and #2 is really a microcosm of the conflict that has been broiling in tech for ~15 years now. I don't mean the politics, but the shifting of norms, of language and behavior.
It's framed as a matter of interpersonal style, which needs to be welcoming and inclusive. In practice this means they assert or demand that style #2 be the norm, even when #1 is advisable or required.
Factuality is seen as deficient, improper and primitive. It's a form of doublethink: everyone's preference is equally valid, except yours, specifically.
But the difference is not a preference. It's about what actually works and what doesn't. Style #1 is aimed at the people who have to fix it. Style #2 is aimed at the people who can't do anything until it's fixed. Who should they be reaching out to?
In #2, communication becomes an end in itself, not a means of conveying information. It's about being seen saying the words, not living them. Poking at the statement makes it fall apart.
When this becomes the norm in a technical field, it has deep consequences:
- Critique must be gift-wrapped in flattery, and is not allowed to actually land.
- Mistakes are not corrected, and sentiment takes precedence over effectiveness.
- Leaders speak lofty words far from the trenches to save face.
- The people they thank the loudest are the ones they pay the least.
Inevitably, quiet competence is replaced with gaudy chaos. Everyone says they're sorry and responsible, but nobody actually is. Nobody wants to resign either. Sound familiar?
Cope and Soothe
The elephant in the room is that #1 is very masculine, while #2 is more feminine. When you hear "women are more empathetic communicators", this is what it means. They tend to focus on the individual and their relation to them, not the team as a whole and its mission.
Complaints that tech is too "male dominated" and "notoriously hostile to women" are often just this. Tech was always full of types who won't preface their proposals and criticisms with fluff, and instead lean into autism. When you're used to being pandered to, neutrality feels like vulgarity.
The notable exceptions are rare and usually have an exasperating lead up. Tech is actually one of the most accepting and egalitarian fields around. The maintainers do a mostly thankless job.
"Oh so you're saying there's no misogyny in tech?" No I'm just saying misogyny doesn't mean "something 1 woman hates".
The tone is really a distraction. If someone drops an analysis, saying shit or get off the pot, even very kindly and patiently, some will still run away screaming. Like an octopus spraying ink, they'll deploy a nasty form of #2 as a distraction. That's the real issue.
Many techies, in their naiveté, believed the cultural reformers when they showed up to gentrify them. They obediently branded heretics like James Damore, and burned witches like Richard Stallman. Thanks to racism, words like 'master' and 'slave' are now off-limits as technical terms. Ironic, because millions of computers just crashed because they worked exactly like that.
The cope is to pretend that nothing has truly changed yet, and more reform is needed. In fact, everything has already changed. Tech forums used to be crucibles for distilling insight, but now they are guarded jealously by people more likely to flag and ban than strongly disagree.
I once got flagged on HN because I pointed out Twitter's mass lay-offs were a response to overhiring, and that people were rooting for the site to fail after Musk bought it. It suggested what we all know now: that the company would not implode after trimming the dead weight, and that they'd never forgive him for it.
Diversity is now associated with incompetence, because incompetent people have spent over a decade reaching for it as an excuse. In their attempts to fight stereotypes, they ensured the stereotypes came true.
Bait and Snitch
The outcry tends to be: "We do all the same things you do, but still we get treated differently!" But they start from the conclusion and work their way backwards. This is what the rewritten statement does: it tries to fix the relationship before fixing the problem.
The average woman and man actually do things very differently in the first place. Individual men and women choose. And others respond accordingly. The people who build and maintain the world's infrastructure prefer the masculine style for a reason: it keeps civilization running, and helps restore it when it breaks. A disaster announcement does not need to be relatable, it needs to be effective.
Furthermore, if the job of shoveling shit falls on you, no amount of flattery or oversight will make that more pleasant. It really won't. Such commentary is purely for the benefit of the ones watching and trying to look busy. It makes it worse, stop pretending otherwise.
There's little loyalty in tech companies nowadays, and it's no surprise. Project and product managers are acting more like demanding clients to their own team, than leaders. "As a user, I want..." Yes, but what are you going to do about it? Do you even know where to start?
What's perceived as a lack of sensitivity is actually the presence of sensibility. It's what connects the words to the reality on the ground. It does not need to be improved or corrected, it just needs to be respected. And yes it's a matter of gender, because bashing men and masculine norms has become a jolly recreational sport in the overculture. Mature women know it.
It seems impossible to admit. The entire edifice of gender equality depends on there not being a single thing men are actually better at, even just on average. Where men and women's instincts differ, women must be right.
It's childish, and not harmless either. It dares you to call it out, so they can then play the wounded victim, and paint you as the unreasonable asshole who is mean. This is supposed to invalidate the argument.
* * *
This post is of course a giant cannon pointing in the opposite direction, sitting on top of a wall. Its message will likely fly over the reformers' heads.
If they read it at all, they'll selectively quote or paraphrase, call me a tech-bro, and spool off some sentences they overheard, like an LLM. It's why they adore AI, and want it to be exactly as sycophantic as them. They don't care that it makes stuff up wholesale, because it makes them look and feel competent. It will never tell them to just fuck off already.
Think less about what is said, more about what is being done. Otherwise the next CrowdStrike will probably be worse.
July 10, 2024
July 04, 2024
July 03, 2024
June 15, 2024
A book written by a doctor who has ADD himself, and it shows.
I was annoyed in the first half by his incessant use of anecdotes to prove that ADD is not genetic. It felt like he had to convince himself, and it read as an excuse for his actions as a father.
He uses clear and obvious examples of how not to raise a child (often with himself as the child or the father) to play on the reader's emotions. Most of these examples are not even related to ADD.
But in the end, definitely in the second half, it is a good book. Most people will recognize several situations, and it often makes one think about life choices and the interpretation of actions and emotions.
So for those getting past the disorganization (yes, there are parts and chapters in this book, but most of it feels randomly disorganized), the second half of the book is a worthy, thought-provoking read.
June 11, 2024
When I mindlessly unplugged a USB audio card from my laptop while viewing its mixer settings in AlsaMixer, I was surprised by the following poetic error message: [1]
These lines are actually the final verse of Lewis Carroll's whimsical poem The Hunting of the Snark:
In the midst of the word he was trying to say,
In the midst of his laughter and glee,
He had softly and suddenly vanished away—
For the Snark was a Boojum, you see.
-- Lewis Carroll, "The Hunting of the Snark"
Initially, I didn't even notice the bright red error message The sound device was unplugged. Press F6 to select another sound card. The poetic lines were enough to convey what had happened and they put a smile on my face.
Please, developers, keep putting a smile on my face when something unexpected happens.
[1] Fun fact: the copyright header of AlsaMixer's source code includes: Copyright (c) 1874 Lewis Carroll.
June 02, 2024
May 30, 2024
May 20, 2024
May 09, 2024
Dear fellow Belgians
Venera and I have made a baby. His name is Troi and apparently he looks a lot like me.
Here is a little photo of him and his mama. Troi is, of course, very cute:
Last week my new book, Building Wireless Sensor Networks with OpenThread: Developing CoAP Applications for Thread Networks with Zephyr, was published.
Thread is a protocol for building efficient, secure and scalable wireless mesh networks for the Internet of Things (IoT), based on IPv6. OpenThread is an open-source implementation of the Thread protocol, originally developed by Google. It offers a comprehensive Application Programming Interface (API) that is both operating system and platform agnostic. OpenThread is the industry’s reference implementation and the go-to platform for professional Thread application developers.
All code examples from this book are published on GitHub. The repository also lists errors that may have been found in this book since its publication, as well as information about changes that impact the examples.
This book uses OpenThread in conjunction with Zephyr, an open-source real-time operating system designed for use with resource-constrained devices. This allows you to develop Thread applications that work on various hardware platforms, without the need to delve into low-level details or to learn another API when switching hardware platforms.
With its practical approach, this book not only explains theoretical concepts, but also demonstrates Thread’s network features with the use of practical examples. It explains the code used for a variety of basic Thread applications based on Constrained Application Protocol (CoAP). This includes advanced topics such as service discovery and security. As you work through this book, you’ll build on both your knowledge and skills. By the time you finish the final chapter, you’ll have the confidence to effectively implement Thread-based wireless networks for your own IoT projects.
What is Thread?
Thread is a low-power, wireless, and IPv6-based networking protocol designed specifically for IoT devices. Development of the protocol is managed by the Thread Group, an alliance founded in 2015. This is a consortium of major industry players, including Google, Apple, Amazon, Qualcomm, Silicon Labs, and Nordic Semiconductor.
You can download the Thread specification for free, although registration is required. The version history is as follows:
Version | Release year | Remarks
--- | --- | ---
Thread 1.0 | 2015 | Never implemented in commercial products
Thread 1.1 | 2017 |
Thread 1.2 | 2019 |
Thread 1.3 | 2023 |
All versions of the Thread specification maintain backward compatibility.
Thread’s architecture
Thread employs a mesh networking architecture that allows devices to communicate directly with each other, eliminating the need for a central hub or gateway. This architecture offers inherent redundancy, as devices can relay data using multiple paths, ensuring increased reliability in case of any single node failure.
The Thread protocol stack is built upon the widely used IPv6 protocol, which simplifies the integration of Thread networks into existing IP infrastructures. Thread is designed to be lightweight, streamlined and efficient, making it an excellent protocol for IoT devices running in resource-constrained environments.
Before discussing Thread, it’s essential to understand its place in the network protocol environment. Most modern networks are based on the Internet Protocol suite, which uses a four-layer architecture. The layers of the Internet Protocol suite are, from bottom to top:
- Link layer
Defines the device’s connection with a local network. Protocols like Ethernet and Wi-Fi operate within this layer. In the more complex Open Systems Interconnection (OSI) model, this layer includes the physical and data link layers.
- Network layer
Enables communication across network boundaries, known as routing. The Internet Protocol (IP) operates within this layer and defines IP addresses.
- Transport layer
Enables communication between two devices, either on the same network or on different networks with routers in between. The transport layer also defines the concept of a port. User Datagram Protocol (UDP) and Transmission Control Protocol (TCP) operate within this layer.
- Application layer
Facilitates communication between applications on the same device or different devices. Protocols like HyperText Transfer Protocol (HTTP), Domain Name System (DNS), Simple Mail Transfer Protocol (SMTP), and Message Queuing Telemetry Transport (MQTT) operate within this layer.
When data is transmitted across the network, the data of each layer is embedded in the layer below it, as shown in this diagram:
Network data is encapsulated in the four layers of the Internet Protocol suite (based on: Colin Burnett, CC BY-SA 3.0)
Thread operates in the network and transport layers. However, it’s important to know what’s going on in all layers.
Link layer: IEEE 802.15.4
For its link layer, Thread builds upon the IEEE 802.15.4 standard, a specification by the Institute of Electrical and Electronics Engineers (IEEE). This is the same link layer that Zigbee uses. It focuses on low power consumption, allowing devices to communicate for months or even years on a single battery charge. The direct communication range is approximately 10 meters under normal conditions.
IEEE 802.15.4 is a radio standard operating in the license-free 2.4 GHz frequency band with data rates up to 250 kbit/s. It has 16 channels (channels 11 to 26), ranging from 2405 to 2480 MHz with 5 MHz channel spacing. This is the same 2.4 GHz frequency band used by Bluetooth and Wi-Fi.
Note
In Europe and North America, IEEE 802.15.4 also supports channels in the 868 MHz and 915 MHz bands, respectively. Channel 0 (with a data rate of 20 kbit/s) is for 868.3 MHz, and channels 1 to 10 (with a data rate of 40 kbit/s) are for 906 to 924 MHz with 2 MHz channel spacing.
An IEEE 802.15.4 network includes two types of devices:
- Full Function Device (FFD)
A device that implements the complete feature set of the IEEE 802.15.4 standard. It can communicate with all other IEEE 802.15.4 devices within its immediate radio range.
- Reduced Function Device (RFD)
A device that implements only a subset of features of the IEEE 802.15.4 standard to conserve device resources. It can only communicate with the FFD through which it joined the network.
An IEEE 802.15.4 network is known as a Personal Area Network (PAN). Each PAN is identified by a unique PAN ID that no other PAN in range is allowed to have. Within each PAN, one FFD is the PAN coordinator. This device initiates and manages the PAN, enabling other devices to join the network.
IEEE 802.15.4 includes built-in encryption and authentication, using AES-128 (128-bit Advanced Encryption Standard) in CCM* mode, a variation of CCM mode (counter with cipher block chaining message authentication code). This mode provides authenticated encryption by combining encryption in CTR (counter) mode with authentication using CBC-MAC (cipher block chaining message authentication code). The original message is transmitted alongside a MIC (Message Integrity Check) so the recipient can validate that the message has not been tampered with.
Devices can only communicate with each other in an IEEE 802.15.4 network if they use the same 128-bit AES key. The IEEE 802.15.4 standard doesn’t, however, define the key exchange process. The key must therefore either be stored in each device during manufacture, entered manually by the user, or by a key exchange protocol in the layers above the link layer.
IEEE 802.15.4 data packets can transmit data from 0 to 127 bytes long. Each IEEE 802.15.4 device has an EUI-64 (Extended Unique Identifier) address set by the manufacturer, which is used as a MAC address. When a device joins a PAN, it’s also assigned a 16-bit short address. This allows the transmitted packets to be shorter because they only need to use two instead of eight bytes to define source and target addresses. Some special short addresses exist, such as 0xffff
as a broadcast address to send packets to all devices, and 0xfffe
as an unassigned address, used by devices while joining a network.
Network layer: 6LoWPAN and IPv6
Thread’s network layer consists of IPv6 over Low-Power Wireless Access Networks (6LoWPAN). This is an IETF specification described in RFC 4944, "Transmission of IPv6 Packets over IEEE 802.15.4 Networks", with an update for the compression mechanism in RFC 6282, "Compression Format for IPv6 Datagrams over IEEE 802.15.4-Based Networks". 6LoWPAN’s purpose is to allow even the smallest devices with limited processing power and low energy requirements to be part of the Internet of Things.
6LoWPAN essentially enables you to run an IPv6 network over the IEEE 802.15.4 link layer. It acts as an ‘adaptation layer’ between IPv6 and IEEE 802.15.4 and is therefore sometimes considered part of the link layer. This adaptation poses some challenges. For example, IPv6 mandates that all links can handle datagram sizes of at least 1280 bytes, but IEEE 802.15.4, as explained in the previous subsection, limits a frame to 127 bytes. 6LoWPAN solves this by excluding information in the 6LoWPAN header that can already be derived from IEEE 802.15.4 frames and by using local, shortened addresses. If the payload is still too large, it’s divided into fragments.
The link-local IPv6 addresses of 6LoWPAN devices are derived from the IEEE 802.15.4 EUI-64 addresses and their shortened 16-bit addresses. The devices in a 6LoWPAN network can communicate directly with ‘normal’ IPv6 devices in a network if connected via an edge router. This means there is no need for translation via a gateway, in contrast to non-IP networks such as Zigbee or Z-Wave. Communication with non-6LoWPAN networks purely involves forwarding data packets at the network layer.
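In practice this means that, once a Border Router advertises the route, any IPv6-capable host on your home network can reach a Thread node directly; as a sketch, with a made-up ULA address (use ping6 on older systems):
$ ping fd11:22::1234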
In this layer, Thread builds upon IEEE 802.15.4 to create an IPv6-based mesh network. In IEEE 802.15.4 only communication between devices that are in immediate radio range is possible, whereas routing allows devices that aren’t in immediate range to communicate.
If you want to delve into more details of Thread’s network layer, consult the Thread Group’s white paper Thread Usage of 6LoWPAN.
Transport layer: UDP
On top of 6LoWPAN and IPv6, Thread’s transport layer employs the User Datagram Protocol (UDP), which is the lesser known alternative to Transmission Control Protocol (TCP).
UDP has the following properties:
- Connectionless
Unlike TCP, UDP doesn’t require establishing a connection before transferring data. It sends datagrams (packets) directly without any prior setup.
- No error checking and recovery
UDP doesn’t provide built-in error checking, nor does it ensure the delivery of packets. If data is lost or corrupted during transmission, UDP doesn’t attempt to recover or resend it.
- Speed
UDP is faster than TCP, as it doesn’t involve the overhead of establishing and maintaining a connection, error checking, or guaranteeing packet delivery.
UDP is ideal for use by Thread for several reasons:
- Resource constraints
Thread devices often have limited processing power, memory, and energy. The simplicity and low overhead of UDP make it a better fitting choice for such resource-constrained devices compared to TCP.
- Lower latency
UDP's connectionless and lightweight nature keeps transmission latency low, making it appropriate for wireless sensors.
Thread lacks an application layer
Unlike home automation protocols such as Zigbee, Z-Wave, or Matter, the Thread standard doesn’t define an application layer. A Thread network merely offers network infrastructure that applications can use to communicate. Just as your web browser and email client use TCP/IP over Ethernet or Wi-Fi, home automation devices can use Thread over IEEE 802.15.4 as their network infrastructure.
Several application layers exist that can make use of Thread:
- Constrained Application Protocol (CoAP)
A simpler version of HTTP, designed for resource-constrained devices and using UDP instead of TCP.
- MQTT for Sensor Networks (MQTT-SN)
A variant of MQTT designed to be used over UDP.
- Apple HomeKit
Apple’s application protocol for home automation devices.
- Matter
A new home automation standard developed by the Connectivity Standards Alliance (CSA).
Note
Matter was developed by the Zigbee Alliance, the original developers of the Zigbee standard for home automation. Following the introduction of Matter, the Zigbee Alliance rebranded to the Connectivity Standards Alliance. Matter’s application layer is heavily influenced by Zigbee. It not only runs on top of Thread, but can also be used over Wi-Fi and Ethernet.
Like your home network which simultaneously hosts a lot of application protocols including HTTP, DNS, and SMTP, a Thread network can also run all these application protocols concurrently.
The Thread network stack supports multiple application protocols simultaneously.
Throughout this book, I will be using CoAP as an application protocol in a Thread network. Once you have a Thread network set up, you can also use it for Apple HomeKit and Matter devices.
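To give a feel for what that looks like in practice, the coap-client tool from libcoap can query a CoAP resource on a Thread node directly over IPv6; the address and resource path below are made up for illustration:
$ coap-client -m get coap://[fd11:22::1234]/temperature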
Advantages of Thread
Some of Thread’s key benefits include:
Scalability
Thread’s mesh network architecture allows for large-scale deployment of IoT devices, supporting hundreds of devices within a single network. Thread accommodates up to 32 Routers per network and up to 511 End Devices per Router. In addition, from Thread 1.2, multiple Thread networks can be integrated into one single Thread domain, allowing thousands of devices within a mesh network.
Security
Devices can’t access a Thread network without authorization. Moreover, all communication on a Thread network is encrypted using IEEE 802.15.4 security mechanisms. As a result, an outsider without access to the network credentials can’t read network traffic.
Reliability
Thread networks are robust and support self-healing and self-organizing network properties at various levels, ensuring the network’s resilience against failures. This all happens transparently to the user; messages are automatically routed around any bad node via alternative paths.
For example, an End Device requires a parent Router to communicate with the rest of the network. If communication with this parent Router fails for any reason and it becomes unavailable, the End Device will choose another parent Router in its neighborhood, after which communication resumes.
Thread Routers also relay packets from their End Devices to other Routers using the most efficient route they can find. In case of connection problems, the Router will immediately seek an alternate route. Router-Eligible End Devices can also temporarily upgrade their status to Routers if necessary. Each Thread network has a Leader who supervises the Routers and whose role is dynamically selected by the Routers. If the Leader fails, another Router automatically takes over as Leader.
Border Routers have the same built-in resilience. In a Thread network with multiple Border Routers, communication between Thread devices and devices on another IP network (such as a home network) occurs along multiple routes. If one Border Router loses connectivity, communications will be rerouted via the other Border Routers. If your Thread network only has a single Border Router, this will become a single point of failure for communication with the outside network.
Low power consumption
Thread is based on the power-efficient IEEE 802.15.4 link layer. This enables devices to operate on batteries for prolonged periods. Sleepy End Devices will also switch off their radio during idle periods and only wake up periodically to communicate with their parent Router, thereby giving even longer battery life.
Interoperability
As the protocol is built upon IPv6, Thread devices are straightforward to incorporate into existing networks, both in industrial and home infrastructure, and for both local networks and cloud connections. You don’t need a proprietary gateway; every IP-based device is able to communicate with Thread devices, as long as there’s a route between both devices facilitated by a Border Router.
Disadvantages of Thread
In life, they say there’s no such thing as a free lunch and Thread is no exception to this rule. Some of its limitations are:
Limited range
Thread’s emphasis on low power consumption can lead to a restricted wireless range compared with other technologies such as Wi-Fi. Although this can be offset by its mesh architecture, where Routers relay messages, it still means that devices in general can’t be placed too far apart.
Low data rates
Thread supports lower data rates than some rival IoT technologies. This makes Thread unsuitable for applications requiring high throughput.
Complexity
The Thread protocol stack may be more challenging to implement compared to other IoT networking solutions, potentially ramping up development costs and time.
This is an abridged version of the introduction of my book Building Wireless Sensor Networks with OpenThread: Developing CoAP Applications for Thread Networks with Zephyr. You can read the full introduction for free in the Elektor Store, where you can also find the table of contents.
Platforms used in this book
In this book I focus on building wireless sensor networks using the Thread protocol together with the following hardware and software environments:
- Nordic Semiconductor’s nRF52840 SoC
A powerful yet energy-efficient and low cost hardware solution for Thread-based IoT devices
- OpenThread
An open-source implementation of the Thread protocol, originally developed by Google, making it easy to incorporate Thread into your IoT projects
- Zephyr
An open-source real-time operating system designed for resource-constrained devices
Armed with these tools and by using practical examples I will go on to guide you step-by-step through the details of the hardware and software required to build Thread networks and OpenThread-based applications.
Nordic Semiconductor’s nRF52840 SoC
Nordic Semiconductor’s nRF52840 is a SoC built around the 32-bit ARM Cortex-M4 CPU running at 64 MHz. This chip supports Bluetooth Low Energy (BLE), Bluetooth Mesh, Thread, Zigbee, IEEE 802.15.4, ANT and 2.4 GHz proprietary stacks. With its 1 MB flash storage and 256 KB RAM, it offers ample resources for advanced Thread applications.
This SoC is available for developers in the form of Nordic Semiconductor’s user-friendly nRF52840 Dongle. You can power and program this small, low-cost USB dongle via a computer USB port. The documentation lists comprehensive information about the dongle.
Nordic Semiconductor’s nRF52840 Dongle is a low-cost USB dongle ideal for experimenting with Thread. (image source: Nordic Semiconductor)
OpenThread
Thread is basically a networking protocol; in order to work with it you need an implementation of the Thread networking protocol. The industry’s reference implementation, used even by professional Thread device developers, is OpenThread. It’s a BSD-licensed implementation, with development ongoing via its GitHub repository. The license for this software allows for its use in both open-source and proprietary applications.
OpenThread provides an Application Programming Interface (API) that’s operating system and platform agnostic. It has a narrow platform abstraction layer to achieve this, and has a small memory footprint, making it highly portable. In this book, I will be using OpenThread in conjunction with Zephyr, but the same API can be used in other combinations, such as ESP-IDF for Espressif’s Thread SoCs.
The BSD-licensed OpenThread project is the industry’s standard implementation of the Thread protocol.
Zephyr
Zephyr is an open-source real-time operating system (RTOS) designed for resource-constrained devices. It incorporates OpenThread as a module, simplifying the process of creating Thread applications based on Zephyr and OpenThread. Because of Zephyr’s hardware abstraction layer, these applications will run on all SoCs supported by Zephyr that have an IEEE 802.15.4 radio facility.
Zephyr is an open-source real-time operating system (RTOS) with excellent support for OpenThread.
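To give an idea of the workflow, building one of Zephyr's network samples with its OpenThread overlay for the nRF52840 Dongle looks roughly like this; board and overlay names vary between Zephyr versions, so treat it as a sketch rather than exact instructions:
$ west build -b nrf52840dongle_nrf52840 samples/net/sockets/echo_server -- -DOVERLAY_CONFIG=overlay-ot.conf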
Recommended hardware
For the hardware, I recommend a Nordic Semiconductor nRF52840 Dongle, but you can use any microcontroller board with Thread radio supported by Zephyr. There are variants of the dongle by other manufacturers, such as the April USB Dongle 52840 (with an external antenna) or the nRF52840 MDK USB Dongle from makerdiary (with a case).
From left to right: April Brother’s April USB Dongle 52840, Nordic Semiconductor’s nRF52840 Dongle, and makerdiary’s nRF52840 MDK USB Dongle.
The more devices you have in a Thread network, the better it works, because Routers can relay messages via other devices. Having more devices also aids in understanding how Thread operates: you can turn devices on and off, run commands on them through Zephyr’s shell, and observe how the network reacts. So, if possible, I recommend purchasing a handful of nRF52840 Dongles or other IEEE 802.15.4 devices supported by Zephyr. A USB hub with several Thread devices plugged in makes an excellent test setup, as shown in this example:
A USB hub with some Thread dongles plugged in is a great test setup to get to know Thread networking.
If you’re serious about Thread development with Zephyr, I also recommend Nordic Semiconductor’s nRF52840 Development Kit. This is the bigger (and slightly less affordable) brother of the nRF52840 Dongle, offering more capabilities to debug code easily. With one of these, you can carry out initial development and debugging then flash the code to the nRF52840 Dongle once all the bugs have been ironed out.
Nordic Semiconductor’s nRF52840 Development Kit has some powerful debugging capabilities. (image source: Nordic Semiconductor)
April 21, 2024
I use a Free Software Foundation Europe fellowship GPG smartcard for my email encryption and package signing. While FSFE doesn't provide the smartcard anymore, it's still available at www.floss-shop.de.
I moved to a Thinkpad w541 with coreboot running Debian GNU/Linux and FreeBSD, so I needed to set up my email encryption on Thunderbird again.
It took me more time than expected to reconfigure it - as usual - so I decided to take notes this time and turn them into a blog post, as this might be useful for somebody else … or for me in the future :-)
The setup is executed on Debian GNU/Linux 12 (bookworm) with the FSFE fellowship GPG smartcard, but the setup for other Linux distributions, FreeBSD, or other smartcards is very similar.
umask
When you’re working on privacy-related tasks - like email encryption - it’s recommended to set the umask
to a more private setting to only allow the user to read the generated files.
This is also recommended in most security benchmarks for “interactive users”.
umask 27
is a good default as it allows only users in the same group to read the generated files; only the user that created the file can modify it. This should be the default IMHO, but for historical reasons, umask 022
is the default on most Un!x-like systems.
On most modern Un!x systems the default group for the user is the same as the user name. If you really want to be sure that only the user can read the generated files you can also use umask 077
.
Set the umask
to something more secure than the default. You probably want to configure it in your ~/.profile
or ~/.bashrc
( if you’re using bash
) or configure this on the system level for the UID
range that is assigned to “interactive users”.
$ umask 27
Verify.
$ umask
0027
$
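To make this persistent for your interactive shells, you can append it to your shell startup file; a minimal sketch, assuming bash:
$ echo "umask 027" >> ~/.profile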
Install GnuPG
Smartcards are supported in GnuPG by scdaemon. With recent versions of GnuPG it should be possible to use the native GnuPG CCID (Chip Card Interface Device) driver, or the OpenSC PC/SC interface.
For the native GnuPG CCID interface you'll need a supported smartcard reader. I continue to use the OpenSC PC/SC method, which implements the PKCS11 interface - the de facto standard interface for smartcards and HSMs (Hardware Security Modules) - and supports more smartcard readers.
The PKCS11 interface also allows you to reuse your PGP keypair for other things, like SSH authentication (SSH support is also possible with the gpg-agent) or other applications like web browsers.
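As a side note, a minimal sketch of the gpg-agent route for SSH (not used further in this post) is to enable the agent's built-in SSH support and point your shell at its socket:
$ echo "enable-ssh-support" >> ~/.gnupg/gpg-agent.conf
$ gpgconf --kill gpg-agent
$ export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)
$ ssh-add -L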
On Debian GNU/Linux, scdaemon
is a separate package.
Make sure that the gpg
and scdaemon
are installed.
$ sudo apt install gpg scdaemon
GPG components
GnuPG has a few components, to list the components you can execute the gpgconf --list-components
command.
$ gpgconf --list-components
gpg:OpenPGP:/usr/bin/gpg
gpgsm:S/MIME:/usr/bin/gpgsm
gpg-agent:Private Keys:/usr/bin/gpg-agent
scdaemon:Smartcards:/usr/lib/gnupg/scdaemon
dirmngr:Network:/usr/bin/dirmngr
pinentry:Passphrase Entry:/usr/bin/pinentry
If you want to know in more detail what a certain component does, you can check the manpage for that component.
In this blog post we mainly use the following components, as they are involved in OpenPGP encryption:
- scdaemon: Smartcard daemon for the GnuPG system
- gpg-agent: Secret key management for GnuPG
To reload the configuration of all components, you can execute the gpgconf --reload command. To reload a single component, execute gpgconf --reload <component>.
e.g.
$ gpgconf --reload scdaemon
This will reload the configuration of the scdaemon.
When you want or need to stop a gpg
process you can use the gpgconf --kill <component>
command.
e.g.
$ gpgconf --kill scdaemon
This will kill the scdaemon process.
(Try to) Find your smartcard device
Make sure that the reader is connected to your system.
$ lsusb
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 003: ID 08e6:3438 Gemalto (was Gemplus) GemPC Key SmartCard Reader
Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd QEMU Tablet
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
$
Without config
Let’s see if gpg
can detect the smartcard.
Execute gpg --card-status
.
$ gpg --card-status
gpg: selecting card failed: No such device
gpg: OpenPGP card not available: No such device
$
gpg
doesn’t detect the smartcard…
When you execute the gpg
command the gpg-agent
is started.
$ ps aux | grep -i gpg
avahi 540 0.0 0.0 8288 3976 ? Ss Apr05 0:00 avahi-daemon: running [debian-gpg.local]
staf 869 0.0 0.0 81256 3648 ? SLs Apr05 0:00 /usr/bin/gpg-agent --supervised
staf 5143 0.0 0.0 6332 2076 pts/1 S+ 12:48 0:00 grep -i gpg
$
The gpg-agent
also starts the scdaemon
process.
$ ps aux | grep -i scdaemon
staf 5150 0.0 0.1 90116 5932 ? SLl 12:49 0:00 scdaemon --multi-server
staf 5158 0.0 0.0 6332 2056 pts/1 S+ 12:49 0:00 grep -i scdaemon
$
When it’s the first time that you started gpg
it’ll create a ~/.gnupg
directory.
staf@debian-gpg:~/.gnupg$ ls
private-keys-v1.d pubring.kbx
staf@debian-gpg:~/.gnupg$
Add debug info
scdaemon
To debug why GPG doesn't detect the smartcard, we'll enable debug logging in the scdaemon.
Create a directory for the logging.
$ mkdir ~/logs
$ chmod 700 ~/logs
Edit ~/.gnupg/scdaemon.conf
to reconfigure the scdaemon
to create logging.
staf@debian-gpg:~/.gnupg$ cat scdaemon.conf
verbose
debug-level expert
debug-all
log-file /home/staf/logs/scdaemon.log
staf@debian-gpg:~/.gnupg$
Stop the scdaemon
process.
staf@debian-gpg:~/logs$ gpgconf --reload scdaemon
staf@debian-gpg:~/logs$
Verify.
$ ps aux | grep -i scdaemon
staf 5169 0.0 0.0 6332 2224 pts/1 S+ 12:51 0:00 grep -i scdaemon
$
The next time you execute a gpg
command, the scdaemon
is restarted.
$ gpg --card-status
gpg: selecting card failed: No such device
gpg: OpenPGP card not available: No such device
$
With the new config, the scdaemon
creates a log file with debug info.
$ ls -l ~/logs
total 8
-rw-r--r-- 1 staf staf 1425 Apr 15 19:45 scdaemon.log
$
gpg-agent
We’ll configure the gpg-agent to generate a debug log as this might help to debug if something goes wrong.
Open the gpg-agent.conf
configuration file with your favourite editor.
staf@bunny:~/.gnupg$ vi gpg-agent.conf
And set the debug-level
and the log output file.
debug-level expert
verbose
verbose
log-file /home/staf/logs/gpg-agent.log
Reload the gpg-agent
configuration.
staf@bunny:~/.gnupg$ gpgconf --reload gpg-agent
staf@bunny:~/.gnupg$
Verify that a log-file
is created.
staf@bunny:~/.gnupg$ ls ~/logs/
gpg-agent.log scdaemon.log
staf@bunny:~/.gnupg$
The main components are configured to generate debug info now.
We’ll continue with the gpg
configuration.
Setup opensc
We will use opensc
to configure the smartcard interface.
Install apt-file
To find the required files/libraries, it can be handy to have apt-file on our system.
Install apt-file
.
$ sudo apt install apt-file
Update the apt-file
database.
$ sudo apt-file update
Search for the tools that will be required.
$ apt-file search pcsc_scan
pcsc-tools: /usr/bin/pcsc_scan
pcsc-tools: /usr/share/man/man1/pcsc_scan.1.gz
$
$ apt-file search pkcs15-tool
opensc: /usr/bin/pkcs15-tool
opensc: /usr/share/bash-completion/completions/pkcs15-tool
opensc: /usr/share/man/man1/pkcs15-tool.1.gz
$
Install smartcard packages
Install the required packages.
$ sudo apt install opensc pcsc-tools
Start the pcscd.service
On Debian GNU/Linux the services are normally automatically enabled/started when a package is installed.
But the pcscd
service wasn’t enabled on my system.
Let’s enable it.
$ sudo systemctl enable pcscd.service
Synchronizing state of pcscd.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable pcscd
$
Start it.
$ sudo systemctl start pcscd.service
And verify that it’s running.
$ sudo systemctl status pcscd.service
● pcscd.service - PC/SC Smart Card Daemon
Loaded: loaded (/lib/systemd/system/pcscd.service; indirect; preset: enabled)
Active: active (running) since Tue 2024-04-16 07:47:50 CEST; 16min ago
TriggeredBy: ● pcscd.socket
Docs: man:pcscd(8)
Main PID: 8321 (pcscd)
Tasks: 7 (limit: 19026)
Memory: 1.5M
CPU: 125ms
CGroup: /system.slice/pcscd.service
└─8321 /usr/sbin/pcscd --foreground --auto-exit
Apr 16 07:47:50 bunny systemd[1]: Started pcscd.service - PC/SC Smart Card Daemon.
Verify connection
Execute the pcsc_scan
command and insert your smartcard.
$ pcsc_scan
PC/SC device scanner
V 1.6.2 (c) 2001-2022, Ludovic Rousseau <ludovic.rousseau@free.fr>
Using reader plug'n play mechanism
Scanning present readers...
0: Alcor Micro AU9540 00 00
Tue Apr 16 08:05:26 2024
Reader 0: Alcor Micro AU9540 00 00
Event number: 0
Card state: Card removed,
Scanning present readers...
0: Alcor Micro AU9540 00 00
1: Gemalto USB Shell Token V2 (284C3E93) 01 00
Tue Apr 16 08:05:50 2024
Reader 0: Alcor Micro AU9540 00 00
Event number: 0
Card state: Card removed,
Reader 1: Gemalto USB Shell Token V2 (284C3E93) 01 00
Event number: 0
Card state: Card inserted,
ATR: <snip>
ATR: <snip>
+ TS = 3B --> Direct Convention
+ T0 = DA, Y(1): 1101, K: 10 (historical bytes)
<snip>
+ TCK = 0C (correct checksum)
<snip>
Possibly identified card (using /usr/share/pcsc/smartcard_list.txt):
<snip>
OpenPGP Card V2
$
Execute pkcs15-tool -D
to ensure that the pkcs11/pkcs15 interface is working.
$ pkcs15-tool -D
Using reader with a card: Gemalto USB Shell Token V2 (284C3E93) 00 00
PKCS#15 Card [OpenPGP card]:
Version : 0
Serial number : <snip>
Manufacturer ID: ZeitControl
Language : nl
Flags : PRN generation, EID compliant
PIN [User PIN]
Object Flags : [0x03], private, modifiable
Auth ID : 03
ID : 02
Flags : [0x13], case-sensitive, local, initialized
Length : min_len:6, max_len:32, stored_len:32
Pad char : 0x00
Reference : 2 (0x02)
Type : UTF-8
Path : 3f00
Tries left : 3
PIN [User PIN (sig)]
Object Flags : [0x03], private, modifiable
Auth ID : 03
ID : 01
Flags : [0x13], case-sensitive, local, initialized
Length : min_len:6, max_len:32, stored_len:32
Pad char : 0x00
Reference : 1 (0x01)
Type : UTF-8
Path : 3f00
Tries left : 3
PIN [Admin PIN]
Object Flags : [0x03], private, modifiable
ID : 03
Flags : [0x9B], case-sensitive, local, unblock-disabled, initialized, soPin
Length : min_len:8, max_len:32, stored_len:32
Pad char : 0x00
Reference : 3 (0x03)
Type : UTF-8
Path : 3f00
Tries left : 3
Private RSA Key [Signature key]
Object Flags : [0x03], private, modifiable
Usage : [0x20C], sign, signRecover, nonRepudiation
Access Flags : [0x1D], sensitive, alwaysSensitive, neverExtract, local
Algo_refs : 0
ModLength : 3072
Key ref : 0 (0x00)
Native : yes
Auth ID : 01
ID : 01
MD:guid : <snip>
Private RSA Key [Encryption key]
Object Flags : [0x03], private, modifiable
Usage : [0x22], decrypt, unwrap
Access Flags : [0x1D], sensitive, alwaysSensitive, neverExtract, local
Algo_refs : 0
ModLength : 3072
Key ref : 1 (0x01)
Native : yes
Auth ID : 02
ID : 02
MD:guid : <snip>
Private RSA Key [Authentication key]
Object Flags : [0x03], private, modifiable
Usage : [0x222], decrypt, unwrap, nonRepudiation
Access Flags : [0x1D], sensitive, alwaysSensitive, neverExtract, local
Algo_refs : 0
ModLength : 3072
Key ref : 2 (0x02)
Native : yes
Auth ID : 02
ID : 03
MD:guid : <snip>
Public RSA Key [Signature key]
Object Flags : [0x02], modifiable
Usage : [0xC0], verify, verifyRecover
Access Flags : [0x02], extract
ModLength : <snip>
Key ref : 0 (0x00)
Native : no
Path : b601
ID : 01
Public RSA Key [Encryption key]
Object Flags : [0x02], modifiable
Usage : [0x11], encrypt, wrap
Access Flags : [0x02], extract
ModLength : <snip>
Key ref : 0 (0x00)
Native : no
Path : b801
ID : 02
Public RSA Key [Authentication key]
Object Flags : [0x02], modifiable
Usage : [0x51], encrypt, wrap, verify
Access Flags : [0x02], extract
ModLength : <snip>
Key ref : 0 (0x00)
Native : no
Path : a401
ID : 03
$
You might want to reinsert your smartcard after you execute pkcs15-tool or pkcs11-tool commands, as these might still hold a lock on the smartcard, which generates a sharing violation (0x8010000b) error (see below).
GnuPG configuration
scdaemon
The first step is to get the smartcard detected by gpg
(scdaemon
).
Test
Execute gpg --card-status
to verify that your smartcard is detected by gpg
.
This might already work: gpg will first try to use the internal CCID driver and, if this fails, fall back to the opensc interface.
$ gpg --card-status
gpg: selecting card failed: No such device
gpg: OpenPGP card not available: No such device
$
When the smartcard doesn’t get detected, have a look at the ~/logs/scdaemon.log
log file.
Disable the internal ccid
When you use a smartcard reader that isn’t supported by GnuPG or you want to force GnuPG to use the opensc
interface it’s a good idea to disable the internal ccid.
Edit scdaemon.conf
.
staf@bunny:~/.gnupg$ vi scdaemon.conf
staf@bunny:~/.gnupg$
and add disable-ccid
.
card-timeout 5
disable-ccid
verbose
debug-level expert
debug-all
log-file /home/staf/logs/scdaemon.log
Reload the scdaemon
.
staf@bunny:~/.gnupg$ gpgconf --reload scdaemon
staf@bunny:~/.gnupg$
gpg will try to use opensc as a fallback by default, so disabling the internal CCID interface will probably not fix the issue when your smartcard isn't detected.
multiple smartcard readers
When you have multiple smartcard readers on your system, e.g. an internal one and another connected over USB, gpg might only use the first smartcard reader.
Check the scdaemon.log
to get the reader port
for your smartcard reader.
staf@bunny:~/logs$ tail -f scdaemon.log
2024-04-14 11:39:35 scdaemon[471388] DBG: chan_7 -> D 2.2.40
2024-04-14 11:39:35 scdaemon[471388] DBG: chan_7 -> OK
2024-04-14 11:39:35 scdaemon[471388] DBG: chan_7 <- SERIALNO
2024-04-14 11:39:35 scdaemon[471388] detected reader 'Alcor Micro AU9540 00 00'
2024-04-14 11:39:35 scdaemon[471388] detected reader 'Gemalto USB Shell Token V2 (284C3E93) 01 00'
2024-04-14 11:39:35 scdaemon[471388] reader slot 0: not connected
2024-04-14 11:39:35 scdaemon[471388] reader slot 0: not connected
2024-04-14 11:39:35 scdaemon[471388] DBG: chan_7 -> ERR 100696144 No such device <SCD>
2024-04-14 11:39:35 scdaemon[471388] DBG: chan_7 <- RESTART
2024-04-14 11:39:35 scdaemon[471388] DBG: chan_7 -> OK
To force gpg
to use a smartcard reader you can set the reader-port
directive in the scdaemon.conf
configuration file.
staf@bunny:~/.gnupg$ vi scdaemon.conf
card-timeout 5
disable-ccid
reader-port 'Gemalto USB Shell Token V2 (284C3E93) 01 00'
verbose
debug-level expert
debug-all
log-file /home/staf/logs/scdaemon.log
staf@bunny:~$ gpgconf --reload scdaemon
staf@bunny:~$
sharing violation (0x8010000b)
Applications will try to lock your opensc smartcard. If your smartcard is already in use by another application, you might get a sharing violation (0x8010000b). This is usually fixed by reinserting your smartcard. If that doesn't help, it might be a good idea to restart the pcscd service, as this disconnects all applications that might hold a lock on the opensc interface.
$ sudo systemctl restart pcscd
When everything goes well your smartcard should be detected now. If not, take a look at the scdaemon.log
.
staf@bunny:~$ gpg --card-status
Reader ...........: Gemalto USB Shell Token V2 (284C3E93) 00 00
Application ID ...: <snip>
Application type .: OpenPGP
Version ..........: 2.1
Manufacturer .....: ZeitControl
Serial number ....: 000046F1
Name of cardholder: <snip>
Language prefs ...: <snip>
Salutation .......: <snip>
URL of public key : <snip>
Login data .......: [not set]
Signature PIN ....: <snip>
Key attributes ...: <snip>
Max. PIN lengths .: <snip>
PIN retry counter : <snip>
Signature counter : <snip>
Signature key ....: <snip>
created ....: <snip>
Encryption key....: <snip>
created ....: <snip>
Authentication key: <snip>
created ....: <snip>
General key info..: [none]
staf@bunny:~$
Note that gpg will lock the smartcard; if you try to use your smartcard with another application you'll get a “Reader in use by another application” error.
staf@bunny:~$ pkcs15-tool -D
Using reader with a card: Gemalto USB Shell Token V2 (284C3E93) 00 00
Failed to connect to card: Reader in use by another application
staf@bunny:~$
Private key references
When you execute gpg --card-status successfully, gpg creates references (shadowed private keys) to the private key(s) on your smartcard.
These references are text files created in ~/.gnupg/private-keys-v1.d
.
staf@bunny:~/.gnupg/private-keys-v1.d$ ls
<snip>.key
<snip>.key
<snip>.key
staf@bunny:~/.gnupg/private-keys-v1.d$
These references also include the smartcard serial number; this way gpg knows on which hardware token the private key is located and will ask you to insert the correct hardware token (smartcard).
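If you're curious which serial number gpg has associated with the card, you can ask scdaemon directly; this prints the card's serial number (a quick check, not required for the setup):
$ gpg-connect-agent 'scd serialno' /bye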
Pinentry
pinentry allows you to type in a passphrase (PIN code) to unlock the smartcard if you don't have a smartcard reader with a PIN pad.
Let's look for the packages that provide a pinentry binary.
$ sudo apt-file search "bin/pinentry"
kwalletcli: /usr/bin/pinentry-kwallet
pinentry-curses: /usr/bin/pinentry-curses
pinentry-fltk: /usr/bin/pinentry-fltk
pinentry-gnome3: /usr/bin/pinentry-gnome3
pinentry-gtk2: /usr/bin/pinentry-gtk-2
pinentry-qt: /usr/bin/pinentry-qt
pinentry-tty: /usr/bin/pinentry-tty
pinentry-x2go: /usr/bin/pinentry-x2go
$
Install the packages.
$ sudo apt install pinentry-curses pinentry-gtk2 pinentry-gnome3 pinentry-qt
/usr/bin/pinentry is a soft link (via /etc/alternatives) to the selected pinentry binary.
$ which pinentry
/usr/bin/pinentry
staf@debian-gpg:/tmp$ ls -l /usr/bin/pinentry
lrwxrwxrwx 1 root root 26 Oct 18 2022 /usr/bin/pinentry -> /etc/alternatives/pinentry
staf@debian-gpg:/tmp$ ls -l /etc/alternatives/pinentry
lrwxrwxrwx 1 root root 24 Mar 23 11:29 /etc/alternatives/pinentry -> /usr/bin/pinentry-gnome3
$
If you want to use another pinentry
provider you can update it with update-alternatives --config pinentry
.
sudo update-alternatives --config pinentry
Import public key
The last step is to import our public key.
$ gpg --import <snip>.asc
gpg: key <snip>: "Fname Sname <email>" not changed
gpg: Total number processed: 1
gpg: unchanged: 1
$
Test
Check if our public and private keys are known by GnuPG.
List public keys.
$ gpg --list-keys
/home/staf/.gnupg/pubring.kbx
-----------------------------
pub rsa<snip> <snip> [SC]
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
uid [ unknown] fname sname <email>
sub rsa<snip> <snip> [A]
sub rsa<snip> <snip> [E]
$
List our private keys.
$ gpg --list-secret-keys
/home/staf/.gnupg/pubring.kbx
-----------------------------
sec> rsa<size> <date> [SC]
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Card serial no. = XXXX YYYYYYYY
uid [ unknown] Fname Sname <email>
ssb> rsa<size> <date> [A]
ssb> rsa<size> <date> [E]
$
Test whether we can sign something with our GnuPG smartcard.
Create a test file.
$ echo "I'm boe." > /tmp/boe
$
Sign it.
$ gpg --sign /tmp/boe
$
Type your PIN code.
┌──────────────────────────────────────────────┐
│ Please unlock the card │
│ │
│ Number: XXXX YYYYYYYY │
│ Holder: first list │
│ Counter: <number> │
│ │
│ PIN ________________________________________ │
│ │
│ <OK> <Cancel> │
└──────────────────────────────────────────────┘
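As a final check, verify the resulting signed file against the public key you imported earlier:
$ gpg --verify /tmp/boe.gpg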
In the next blog post, we'll set up Thunderbird to use the smartcard for OpenPGP email encryption.
Have fun!
Links
- https://wiki.debian.org/Smartcards/OpenPGP
- https://www.floss-shop.de/en/security-privacy/smartcards/13/openpgp-smart-card-v3.4?c=11
- https://www.gnupg.org/howtos/card-howto/en/smartcard-howto-single.html
- https://support.nitrokey.com/t/nk3-mini-gpg-selecting-card-failed-no-such-device-gpg-card-setup/5057/7
- https://security.stackexchange.com/questions/233916/gnupg-connecting-to-specific-card-reader-when-multiple-reader-available#233918
- https://www.fsij.org/doc-gnuk/stop-scdaemon.html
- https://wiki.debian.org/Smartcards
- https://github.com/OpenSC/OpenSC/wiki/Overview/c70c57c1811f54fe3b3989d01708b45b86fafe11
- https://superuser.com/questions/1693289/gpg-warning-not-using-as-default-key-no-secret-key
- https://stackoverflow.com/questions/46689885/how-to-get-public-key-from-an-openpgp-smart-card-without-using-key-servers
April 11, 2024
Getting the Belgian eID to work on Linux systems should be fairly easy, although some people do struggle with it.
For that reason, there is a lot of third-party documentation out there in the form of blog posts, wiki pages, and other kinds of things. Unfortunately, some of this documentation is simply wrong: written by people who played around with things until it kind of worked. Sometimes a step that used to work in the past (but was never really necessary) has since stopped working, yet it is still copied to a number of locations as though it were the gospel.
And then people follow these instructions and now things don't work anymore.
One of these revolves around OpenSC.
OpenSC is an open source smartcard library that has support for a pretty large number of smartcards, amongst which the Belgian eID. It provides a PKCS#11 module as well as a number of supporting tools.
For those not in the know, PKCS#11 is a standardized C API for offloading cryptographic operations. It is an API that can be used when talking to a hardware cryptographic module, in order to make that module perform some actions, and it is especially popular in the open source world, with support in NSS, amongst others. NSS is a low-level cryptographic library written and maintained by Mozilla; it is used by Firefox (on all platforms it supports) as well as by Google Chrome and other browsers based on it (but only on Linux, and as I understand it, only for interacting with smartcards; their BoringSSL library is used for other things).
The official eID software that we ship through eid.belgium.be, also known as "BeID", provides a PKCS#11 module for the Belgian eID, as well as a number of support tools to make interacting with the card easier, such as the "eID viewer", which provides the ability to read data from the card, and validate their signatures. While the very first public version of this eID PKCS#11 module was originally based on OpenSC, it has since been reimplemented as a PKCS#11 module in its own right, with no lineage to OpenSC whatsoever anymore.
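If you are curious what such a PKCS#11 module actually exposes, OpenSC's pkcs11-tool can load an arbitrary module and list its slots and objects. A small sketch, assuming the BeID module is installed at the path below (the exact path varies per distribution):
$ pkcs11-tool --module /usr/lib/x86_64-linux-gnu/libbeidpkcs11.so.0 --list-slots
$ pkcs11-tool --module /usr/lib/x86_64-linux-gnu/libbeidpkcs11.so.0 --list-objects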
About five years ago, the Belgian eID card was renewed. At the time, a new physical appearance was the most obvious difference with the old card, but there were also some technical, on-chip, differences that are not so apparent. The most important one here, although it is not the only one, is the fact that newer eID cards now use NIST P-384 elliptic curve-based private keys, rather than the RSA-based ones that were used in the past. This change required some changes to any PKCS#11 module that supports the eID; both the BeID one, as well as the OpenSC card-belpic driver that is written in support of the Belgian eID.
Obviously, the required changes were implemented for the BeID module; however, the OpenSC card-belpic driver was not updated. While I did do some preliminary work on the required changes, I was unable to get it to work, and eventually other things took up my time so I never finished the implementation. If someone would like to finish the work that I started, the preliminary patch that I wrote could be a good start -- but like I said, it doesn't yet work. Also, you'll probably be interested in the official documentation of the eID card.
Unfortunately, in the meantime someone added the Applet 1.8 ATR to the card-belpic.c file without also implementing the changes the driver needs to actually support the newer eID cards. The result is that if you have the OpenSC PKCS#11 module registered in NSS for either Firefox or any Chromium-based browser, and it gets picked up before the BeID PKCS#11 module, then NSS will stop looking and pass all crypto operations to the OpenSC PKCS#11 module rather than to the official eID one, and things will not work at all, causing a lot of confusion.
I have therefore taken the following two steps:
- The official eID packages now conflict with the OpenSC PKCS#11 module. Specifically only the PKCS#11 module, not the rest of OpenSC, so you can theoretically still use its tools. This means that once we release this new version of the eID software, when you do an upgrade and you have OpenSC installed, it will remove the PKCS#11 module and anything that depends on it. This is normal and expected.
- I have filed a pull request against OpenSC that removes the Applet 1.8 ATR from the driver, so that OpenSC will stop claiming that it supports the 1.8 applet.
When the pull request is accepted, we will update the official eID software to make the conflict versioned, so that as soon as the OpenSC driver works again you will be able to install the OpenSC and BeID packages at the same time.
In the meantime, if you have the OpenSC PKCS#11 module installed on your system and your eID authentication does not work, try removing it.
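On a Debian-based system, a quick way to check is to list the PKCS#11 modules registered in the per-user NSS database that Chromium-based browsers use, and to remove only the OpenSC module if it shows up. A sketch, assuming libnss3-tools is installed and the module came in via the opensc-pkcs11 package:
$ modutil -list -dbdir sql:$HOME/.pki/nssdb
$ sudo apt remove opensc-pkcs11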
March 21, 2024
Mario & Luigi: Superstar Saga is a surprisingly fun game. At its core it’s a role-playing game. But you can hardly tell, because of the amount of platforming, the quantity and quality of small puzzles and, most of all, the funny self-deprecating dialogue.
As the player, you control Mario and Luigi simultaneously, and that takes some time to master. As you learn techniques along the way, each character gets special abilities allowing you to solve new puzzles and advance on your journey. It’s not a short game for a handheld, taking around 20-30 hours to finish at the minimum.
The 2017 3DS remake (Mario & Luigi: Superstar Saga + Bowser’s Minions) kept most of the game as it was, just updating the graphics and music for the more powerful handheld. That says a lot about the original, whose 2D SNES/GBA graphics don’t feel outdated today, but rather like hip and cute pixel art.
Have a look at “How to play retrogames?” if you don’t know how to play retrogames.
March 15, 2024
This is completely impossible in Belgium, because it requires a serious, non-populist debate between all political parties (one in which they bring their calculators and their logical reasoning, and make sure the budget really stays in balance). That has not existed at the federal level for decades. So, well.
Still, one way to solve the country's budget problem could be to tax consumption more by raising the VAT rate from 21% to, say, 25% or 30%, to tax income (much) less, and finally to move more products and services to the 6% and 12% rates.
In other words: all essential expenses such as electricity, Internet, (gas) heating, water, bread, and so on at 6%. A (much) larger basket of (for example, healthy) foods and of housing (including costs such as renovations) at 12%.
But for all luxury consumption: 25% or 30%, or possibly even 40%.
Coupled with that, a substantial reduction in personal income tax.
The result: more VAT revenue from consumption, and a better competitive position relative to our neighbouring countries because salaries would not need to rise (thanks to the lower personal income tax). Purchasing power is strengthened by that same tax cut, and strengthened further by the larger basket of goods at the 6% and 12% VAT rates.
In other words, only luxury consumption is taxed (much) more heavily. Essential consumption moves to 6% or 12% (or simply keeps the current 21%).
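A purely illustrative calculation, with made-up amounts, for a product with a net price of 100 euro:
- at 21% VAT the consumer pays 121 euro, of which 21 euro is VAT;
- at 6% VAT the same essential product costs 106 euro, leaving 15 euro extra in the household's pocket;
- at 40% VAT a luxury product costs 140 euro, bringing in 19 euro more VAT than today.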
The biggest drawback is that our accountants will have more work. But I don't think they will mind all that much (they will, after all, bill their extra hours at the luxury-consumption VAT rate).
An example of an advantage is that you can steer consumption better. Do you want us Belgians to buy more electric cars? Put such a car at 6% VAT and a car with a diesel engine at 40% VAT. Do you want young people to eat healthier food? Coca Cola at 40% VAT, fruit juice at 6% VAT.
Of course, you also still need to cut spending and carry out a serious efficiency exercise.