Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

January 16, 2025

With FOSDEM just a few days away, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers makes FOSDEM happen and makes it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially for heralding during the conference, during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch at the weekend: food will be provided. Would you like to be part of the team that makes FOSDEM tick?
If your non-geek partner and/or kids are joining you to FOSDEM, they may be interested in spending some time exploring Brussels while you attend the conference. Like previous years, FOSDEM is organising sightseeing tours.

January 15, 2025

We were made aware of planned protests during the upcoming FOSDEM 2025 in response to a scheduled talk which is causing controversy. The talk in question is claimed to be on the schedule for sponsorship reasons; additionally, some of the speakers scheduled to speak during this talk are controversial to some of our attendees. To be clear, in our 25 year history, we have always had the hard rule that sponsorship does not give you preferential treatment for talk selection; this policy has always applied, it applied in this particular case, and it will continue to apply in the future.

We did it: Drupal CMS 1.0 is here! 🎉

Eight months ago, I challenged our community to make Drupal easier for marketers, content creators, and site builders. Today, on Drupal's 24th birthday, we're making history with the launch of Drupal CMS 1.0.

With this release, you now have two ways to build with Drupal:

  • Drupal Core serves expert users and developers who want complete control over their websites. It provides a blank canvas for building websites and has been the foundation behind millions of websites since Drupal began 24 years ago.
  • Drupal CMS is a ready-to-use platform for marketing teams, content creators and site builders built on Drupal 11 core. When you install Drupal CMS, you get a set of out-of-the-box tools such as advanced media management, SEO tools, AI-driven website building, consent management, analytics, search, automatic updates and more.

To celebrate this milestone, more than 60 launch parties are happening around the world today! These celebrations highlight one of Drupal's greatest strengths: a worldwide community that builds and innovates together.

If you want to try Drupal CMS, you can start a free trial today at https://www.drupal.org/drupal-cms/trial.

Built for ambitious marketers

Drupal CMS targets organizations with ambitious digital goals, particularly in mid-market and enterprise settings. The platform provides a robust foundation that adapts and scales with evolving needs.

Organizations often hit a growth ceiling with non-Drupal CMS platforms. What starts as a simple website becomes a constraint as needs expand. Take privacy and consent management as an example: while these features are now essential due to GDPR, CCPA, and growing privacy concerns, most CMS platforms don't offer them out of the box. This forces organizations to create patchwork solutions.

Drupal CMS addresses this by including privacy and consent management tools by default. This not only simplifies setup but also sets a new standard for CMS platforms, promoting a better Open Web – one that prioritizes user privacy while helping organizations meet regulatory requirements.

Recipes for success

The privacy and consent management feature is just one of many 'recipes' available in Drupal CMS. Recipes are pre-configured packages of features, like blogs, events, or case studies, that simplify and speed up site-building. Each recipe automatically installs the necessary modules, sets up content types, and applies configurations, reducing manual setup.

This streamlined approach not only makes Drupal more accessible for beginners but also more efficient for experienced developers. Drupal CMS 1.0 launches with nearly 30 recipes included, many of which are applied by default to provide common functionality that most sites require. Recipes not applied by default are available as optional add-ons and can be applied either during setup or later through the new Project Browser. More recipes are already in development, with plans to release new versions of Drupal CMS throughout the year, each adding fresh recipes.

The Drupal CMS installer lets users choose from predefined 'recipes' like blog, events, case studies and more. Each recipe automatically downloads the required modules, sets up preconfigured content types, and applies the necessary configurations.

Pioneering the future, again

Drupal CMS not only reduces costs and accelerates time to value with recipes but also stands out with innovative features like AI agents designed specifically for site building. While many platforms use AI primarily for content creation, our AI agents go further by enabling advanced tasks such as creating custom content types, configuring taxonomies, and more.

This kind of innovation really connects to Drupal's roots. In its early days, Drupal earned its reputation as a forward-thinking, innovative CMS. We helped pioneer the assembled web (now called 'composable') and contributed to the foundation of Web 2.0, shipping with features like blogging, RSS, and commenting long before the term Web 2.0 existed. Although it happened long ago and many may not remember, Drupal was the first CMS to adopt jQuery. This move played a key role in popularizing jQuery and establishing it as a cornerstone of web development.

Curious about what Drupal CMS' AI agents can do? Watch Ivan Zugec's video for a hands-on demonstration of how these tools simplify site-building tasks – even for expert developers.

We don't know exactly where AI agents will take us, but I'm excited to explore, learn, and grow. It feels like the early days when we experimented and boldly ventured into the unknown.

Changing perceptions and reaching more users

Drupal has often been seen as complex, but Drupal CMS is designed to change that. Still, we know that simply creating a more user-friendly and easier-to-maintain product isn't enough. After 24 years, many people still hold outdated perceptions shaped by experiences from over a decade ago.

Changing those perceptions takes time and deliberate effort. That is why the Drupal CMS initiative is focused not just on building software but also on repositioning and marketing Drupal in a way that highlights how much it has evolved.

The new Drupal.org features a refreshed brand and updated messaging, positioning Drupal as a modern, composable CMS.

To make this happen, we've refreshed our brand and started reworking Drupal.org with the help of the Drupal Association and our Drupal Certified Partners. The updated brand feels fresher, more modern, and more appealing to a larger audience.

For the first time, the Drupal Association has hired two full-time product marketers to help communicate our message.

Our goal is clear: to help people move past outdated perceptions and see Drupal for what it truly is – a powerful, modern platform for building websites that is becoming more user-friendly, as well as more affordable to use and maintain.

Achieving bold ambitions through collaboration

Launching the Drupal CMS initiative was bold and ambitious, requiring extraordinary effort from our community – and they truly stepped up. It was ambitious because this initiative has been about much more than building a second version of Drupal. It's been a focused and comprehensive effort to expand our market, modernize our brand, accelerate innovation, expand our marketing, and reimagine our partner ecosystem.

When I announced Drupal Starshot and Drupal CMS just 8 months ago, I remember turning to the team and asking, "How exactly are we going to pull this off?" We had a lot to figure out – from building a team and setting goals to mapping a path forward. It was a mix of uncertainty, determination, and maybe a touch of "What have we gotten ourselves into?"

A key success factor has been fostering closer collaboration among contributors, agency partners, Drupal Core Committers, Drupal Association staff, and the Drupal Association Board of Directors. This stronger alignment didn't happen by chance; it's the result of thoughtfully structured meetings and governance changes that brought everyone closer together.

After just 8 months, the results speak for themselves. Drupal CMS has significantly increased the pace of innovation and the level of contributions to Drupal. It's a testament to what we can achieve when we work together. We've seen a 40% increase in contributor activity since the initiative launch, with over 2,000 commits from more than 300 contributors.

Drupal CMS has been a powerful catalyst for accelerating innovation and collaboration. Since development began in 2024, contributions have soared. Organization credits for strategic initiatives grew by 44% compared to 2023, with individual contributions increasing by 37%. The number of unique contributors rose by 12.5%, and participating organizations grew by 11.3%.

The initiative required me to make a significant time commitment I hadn't anticipated at the start of 2024 – but it's an experience I'm deeply grateful for. The Drupal CMS leadership team met at least twice a week, often more, to tackle challenges head-on. Similarly, I had weekly meetings with the Drupal Association.

Along the way we developed new working principles. One key principle was to solve end-user problems first, focusing on what marketers truly need rather than trying to account for every edge case. Another was prioritizing speed over process, enabling us to innovate and adapt quickly. These principles are still evolving, and now that the release is behind us, I'm eager to refine them further with the team.

The work we did together was intense, energizing, and occasionally filled with uncertainty about meeting our deadlines. We built strong bonds, learned to make quick, effective decisions, and maintained forward momentum. This experience has left me feeling more connected than ever to our shared mission.

The Drupal CMS roadmap for 2025

As exciting as this achievement is, some might ask if we've accomplished everything we set out to do. The answer is both yes and no. We've exceeded my expectations in collaboration and innovation, making incredible progress. But there is still much to do. In many ways, we're just getting started. We're less than one-third of the way through our three-year product strategy.

With Drupal CMS 1.0 released, 2025 is off to a strong start. Our roadmap for 2025 is clear: we'll launch Experience Builder 1.0, roll out more out-of-the-box recipes for marketers, improve our documentation, roll out our new brand to more parts of Drupal.org, and push forward with innovative experiments.

Each step brings us closer to our goal: modernizing Drupal and making Drupal the go-to platform for marketers and developers who want to build ambitious digital experiences — all while championing the Open Web.

Thank you, Drupal community

We built Drupal CMS in a truly open source way – collaboratively, transparently, and driven by community contributions – proving once again that open source is the best way to build software.

The success of Drupal CMS 1.0 reflects the work of countless contributors. I'm especially grateful to these key contributors and their organizations (listed alphabetically): Jamie Abrahams (FreelyGive), Gareth Alexander (Zoocha), Martin Anderson-Clutz (Acquia), Tony Barker (Annertech), Pamela Barone (Technocrat), Addison Berry (Drupalize.me), Jim Birch (Kanopi Studios), Baddy Breidert (1xINTERNET), Christoph Breidert (1xINTERNET), Nathaniel Catchpole (Third and Grove / Tag1 Consulting), Cristina Chumillas (Lullabot), Suzanne Dergacheva (Evolving Web), Artem Dmitriiev (1xINTERNET), John Doyle (Digital Polygon), Tim Doyle (Drupal Association), Sascha Eggenberger (Gitlab), Dharizza Espinach (Evolving Web), Tiffany Farriss (Palantir.net), Matthew Grasmick (Acquia), Adam Globus-Hoenich (Acquia), Jürgen Haas (LakeDrops), Mike Herchel (DripYard), J. Hogue (Oomph, Inc), Gábor Hojtsy (Acquia), Emma Horrell (University of Edinburgh), Marcus Johansson (FreelyGive), Nick Koger (Drupal Association), Tim Lehnen (Drupal Association), Pablo López Escobés (Lullabot), Christian López Espínola (Lullabot), Leah Magee (Acquia), Amber Matz (Drupalize.me), Lenny Moskalyk (Drupal Association), Lewis Nyman, Matt Olivera (Lullabot), Shawn Perritt (Acquia), Megh Plunkett (Lullabot), Tim Plunkett (Acquia), Kristen Pol (Salsa Digital), Joe Shindelar (Drupalize.me), Lauri Timmanee (Acquia), Matthew Tift (Lullabot), Laurens Van Damme (Dropsolid), Ryan Witcombe (Drupal Association), Jen Witowski (Lullabot).

I also want to recognize our Marketing Committee, the Core Committers, the Drupal Association Board of Directors, and the Drupal Starshot Advisory Council, whose guidance and strategic input shaped this initiative along the way.

While I've highlighted some contributors here, I know there are hundreds more who shaped Drupal CMS 1.0 through their code, testing, UX work, feedback, advocacy and more. Each contribution, big or small, moved us forward. To everyone who helped build this milestone: THANK YOU!

January 12, 2025

One of the topics that most financial institutions are (still) working on is their compliance with a piece of European legislation called DORA. This abbreviation, which stands for "Digital Operational Resilience Act", is a European regulation. European regulations apply automatically and uniformly across all EU countries. This is unlike another recent piece of legislation called NIS2, the "Network and Information Security" directive. As an EU directive, NIS2 requires the EU countries to transpose the directive into local law. As a result, different EU countries can have slightly different implementations.

The DORA regulation applies to the EU financial sector, and contains some strict requirements that affect companies' IT stakeholders. It doesn't sugar-coat things the way some frameworks do. This has the advantage that its "interpretation flexibility" is quite reduced - but not zero, of course. Yet that advantage is also a disadvantage: financial entities might have had different strategies covering their resiliency, and now need to adjust those strategies.

History of DORA

Officially called Regulation (EU) 2022/2554, DORA was proposed as a new regulatory framework in September 2020. It aimed to further strengthen the digital operational resilience of the financial sector. "Operational resilience" here focuses strongly on cyber threat resilience and operational risks like IT disasters. On January 16th 2023, the main DORA regulation entered into force, and it will apply as of January 17th 2025. Yes, that's about now.

Alongside the main DORA text, additional standards are being developed too. These are Regulatory Technical Standards (RTS) that detail the requirements of one or more articles within DORA. The one I currently come into contact with the most is the RTS on the ICT risk management framework. This RTS elaborates on various requirements close to my own expertise. However, other RTS documents are also on my radar to read through, such as the technical standard on ICT services supporting critical or important functions provided by ICT third-party service providers and subcontracting ICT services.

During the development of the DORA regulation and these technical standards, various stakeholders were consulted. The European Supervisory Authorities (ESAs) were of course primary stakeholders here, but other stakeholders could also provide their feedback. This feedback, and the answers or reactions to it from the legislative branch, helps in understanding parts of the regulation ("am I reading this right") as well as conveying how well the regulator understands what is being asked ("does the regulator understand what they are asking").

DORA is not a first, of course. The moment you start reading the regulation, you notice that it amends previous regulations. These were predecessors, so they should also be seen as part of the "history" of DORA:

  • Regulation (EC) No 1060/2009, which regulates the credit rating agencies.
  • Regulation (EU) No 648/2012, covering regulation on derivatives, central counterparties, and trade repositories.
  • Regulation (EU) No 600/2014, regulating the markets in financial instruments.
  • Regulation (EU) No 909/2014, which focuses on securities settlements and central securities depositories.
  • Regulation (EU) 2016/1011, focusing on benchmarks in financial instruments and financial contracts, and measuring the performance of investment funds.

Now, these are all market-oriented regulations, and while many of them do sporadically refer to operational resilience aspects, they require a significant understanding of those financial markets to begin with, which isn't the case for DORA. But DORA wasn't the first to be more IT-oriented either.

The first part of the DORA regulation provides context into the actual legislative articles (which only start one-third into the document). It provides references to previous publications or legislation that are more IT (or cyber threat) oriented. This first part is called the "preamble" in EU legislation.

In this preamble, paragraph 15 references Directive (EU) 2016/1148 as the first broad cybersecurity framework enacted at EU level. It covers a high common level of security of network and information systems. Yes, that's the "NIS" directive that recently got a new iteration: Directive (EU) 2022/2555, aka NIS2. Plenty of other references exist as well. Sometimes these refer to the legislation that covers certain markets, as listed before. Other references focus on the supervisory bodies. Many references are towards other legislation that provides definitions for the used terms.

Structure of DORA

The main DORA legislation (so excluding the regulatory technical standards) covers 64 articles, divided into 9 chapters. But as mentioned earlier, it has quite a sizeable preamble that covers context, motivation and interpretation details for the legislative text. This allows for improved interpretation of the articles themselves.

More specifically, such a preamble covers the legal basis of the legislation and the objectives it sets out to achieve. I found How to read EU law by Fabian Bohnenberger to be a very good and quick-to-read overview of how an EU legislative text is structured. The DORA preamble already covers 106 paragraphs, and those aren't even the actual legislative articles.

So, how are the legislative articles themselves structured?

  • Chapter I - General provisions defines what the purpose of the legislation is and its structure (art 1), where the legislation applies to (art 2), 65 definitions used throughout the legislation (art 3) and the notion that proportionality applies (art 4).

  • Chapter II - ICT risk management tells the in-scope markets and institutions how to govern and organize themselves to cover risk management (art 5), what the risk management framework should do (art 6), that they need to have fit-for-purpose tooling and procedures to cover risk management (art 7), that they need to properly identify risks towards their assets (art 8), need to take sufficient measures to protect themselves against various threats (art 9), be able to detect anomalous activities (art 10), have a response and recovery process in place (art 11), have backup/restore processes in place (art 12), ensure knowledge of the employees is sound (art 13), be able to communicate properly (art 14) and follow regulatory technical standards for ICT risk management (art 15). In the end, art 16 covers the requirements for smaller financial institutions (as DORA differentiates requirements based on the impact, size and some other criteria).

  • Chapter III - ICT-related incident management, classification and reporting describes how to handle ICT incidents (art 17), how to classify these incidents and threats (art 18), what reporting expectations exist (art 19), and the standardization of that reporting (art 20). The reported incidents are centralized (art 21), supervisors will provide answers to the reports (art 22), and art 23 tells us to which incidents all of the above applies.

  • Chapter IV - Digital operational resilience testing covers the testing of the operational resilience. First, general requirements are provided (art 24), after which DORA covers the testing of tools and systems (art 25), mandatory use of penetration testing (art 26), and how these threat-led penetration tests (TLPTs) are carried out (art 27).

  • Chapter V - Managing of ICT third-party risk delves further into managing threats related to outsourcing and the use of third parties. Art 28 covers the general principles, whereas art 29 covers the potential concentration risk (aka "if everyone depends on this third party, then..."). Contractual expectations are covered in art 30. Further, this chapter covers the introduction of an oversight framework for large, critical third-party service providers. Art 31 designates when a third-party service provider is deemed critical, art 32 covers the oversight structure, art 33 introduces the role of the Lead Overseer(s), their operational coordination (art 34), their powers (art 35), and their capabilities outside of the EU (art 36). Art 37 covers how the overseer can receive information, if and how investigations take place (art 38), how inspections are handled (art 39), how this relates to existing oversight (art 40), how the conduct of oversight activities is handled (art 41), how authorities follow up on overseer activities (art 42), who will pay for the oversight activities (art 43), and the international cooperation amongst regulatory/supervisory bodies (art 44).

  • Chapter VI - Information-sharing arrangements has a single article, art 45, on threat/intelligence information sharing amongst the financial institutions.

  • Chapter VII - Competent authorities assigns the appropriate authorities to the various financial institutions (art 46), how these authorities cooperate with others (art 47), and how they cooperate amongst themselves (art 48). Cross-sector exercises are covered in art 49, and the penalties and remedial measures in art 50. Art 51 covers how and when administrative penalties and measures are imposed, and art 52 covers cases where criminal liabilities are found. Art 53 requires EU member states to notify the EU institutions about related legislation/provisions, and art 54 documents how administrative penalties are published. Art 55 confirms the professional secrecy of the authorities, and art 56 covers the data protection provisions for the authorities.

  • Chapter VIII - Delegated acts has one article (art 57) covering by whom this legislation is exercised (role of the Commission, Parliament, etc.)

  • Chapter IX - Transitional and final provisions is a "the rest" chapter. It covers a review of the law and implementations by January 17th 2028 (art 58), and then many amendments to existing regulations to make them aligned and consistent with the DORA legislation (art 59 - 63). The last article, art 64, describes when DORA comes into force and when it shall apply.

For me, chapters II (risk management), IV (resilience testing) and V (third party risk) are the most interesting as they cover expectations for many IT processes or controls.

Regulatory Technical Standards

Within the DORA legislation, references are made to regulatory technical standards that need to be drafted up. The intention of the regulatory technical standards is to further elaborate on the expectations and requirements of DORA. These RTS documents also have legislative power (hence the "regulatory" in the name) and are important to track too.

The RTS that covers the ICT risk framework from article 15 is one with a strong IT orientation. Like the EU legislative texts, it holds a lot of context to begin with. The draft publications also cover the feedback received and the answers/results from that feedback. It is unlikely that these will be found in the final published RTS documents, though.

The current JC 2023 76 - Final report on draft RTS on ICT Risk Management Framework and on simplified ICT Risk Management Framework has the actual technical standard between pages 45 and 89. It too uses chapters to split the text up a bit. After art 1, covering the overall risk profile and complexity, we have:

  • Chapter I - ICT security policies, procedures, protocols, and tools contains significant input for various IT processes and domains. It is further subdivided into sections:

  • Section I covers what should be in the ICT security policies (art 2),

  • Section II describes the details of the ICT risk management framework (art 3),
  • Section III - ICT Asset Management covers the ICT asset management expectations (art 4) and ICT asset management process (art 5),
  • Section IV - Encryption and cryptography covers cryptography expectations (art 6 and 7),
  • Section V - ICT Operations Security handles the ICT operations policies (art 8), capacity and performance management (art 9), vulnerability and patch management (art 10), data and system security (art 11), logging expectations (art 12),
  • Section VI - Network security is about the network security expectations (art 13), and in-transit data protection measures (art 14),
  • Section VII - ICT project and change management covers the ICT project management (art 15), ICT development and maintenance activities (art 16), and change management (art 17),
  • Section VIII handles physical security measures (art 18)

  • Chapter II - Human Resources Policy and Access Control handles HR policies (art 19), identity management (art 20), access control (art 21).

  • Chapter III - ICT-related incident detection and response covers incident management (art 22), anomalous activity detection and response (art 23), business continuity (arts 24 and 25), and ICT response and recovery (art 26).

As you can read from the titles, these are more specific. Don't think "Oh, it is just a single article" about a subject. Some articles span more than a full page. For instance, Article 13 on network security has 13 sub-paragraphs.

DORA for architects

I think that the DORA legislation is a crucial authority to consider when you are developing internal policies for EU-based financial institutions. I've mentioned the use of frameworks in the past, which can inspire companies in the development of their own policies. Companies should never blindly copy these frameworks (or legislative requirements) into a policy, because then your policy becomes a mess of overlapping or sometimes even contradictory requirements. Instead, policies should refer to these authorities when relevant, allowing readers to understand which requirements are triggered by which source.

When you're not involved in the development of policies, having a read through some of the DORA texts might still be sensible, as it gives you a grasp of which requirements are being pushed onto your company. And while we're at it, do the same for the NIS2 documents, because even if your company is in scope of DORA, NIS2 still applies (DORA is a specialized law, so it takes precedence over what NIS2 asks, but if DORA doesn't cover a topic and NIS2 does, then you still have to follow NIS2).

Feedback? Comments? Don't hesitate to get in touch on Mastodon.

January 09, 2025

The preFOSDEM MySQL Belgian Days 2025 will occur at the usual place (ICAB Incubator, Belgium, 1040 Bruxelles) on Thursday, January 30th, and Friday, January 31st, just before FOSDEM. Again this year, we will have the chance to have incredible sessions from our Community and the opportunity to meet some MySQL Engineers from Oracle. DimK will […]

To our valued customers, partners, and the Drupal community.

I'm excited to share an important update about my role at Acquia, the company I co-founded 17 years ago. I'm transitioning from my operational roles as Chief Technology Officer (CTO) and Chief Strategy Officer (CSO) to become Executive Chairman. In this new role, I'll remain an Acquia employee, collaborating with Steve Reny (our CEO), our Board of Directors, and our leadership team on company strategy, product vision, and M&A.

This change comes at the right time for both Acquia and me. Acquia is stronger than ever, investing more in Drupal and innovation than at any point in our history. I made this decision so I can rebalance my time and focus on what matters most to me. I'm looking forward to spending more time with family and friends, as well as pursuing personal passions (including more blogging).

This change does not affect my commitment to Drupal or my role in the project. I will continue to lead the Drupal Project, helping to drive Drupal CMS, Drupal Core, and the Drupal Association.

Six months ago, I already chose to dedicate more of my time to Drupal. The progress we've made is remarkable. The energy in the Drupal community today is inspiring, and I'm amazed by how far we've come with Drupal Starshot. I'm truly excited to continue our work together.

Thank you for your continued trust and support!

January 07, 2025

FOSDEM Junior is a collaboration between FOSDEM, Code Club, CoderDojo, developers, and volunteers to organize workshops and activities for children during the FOSDEM weekend. These activities are for children to learn and get inspired about technology. This year’s activities include microcontrollers, embroidery, game development, music, and mobile application development. Last year we organized the first edition of FOSDEM Junior. We are pleased to announce that we will be back this year. Registration for individual workshops is required. Links can be found on the page of each activity. The full schedule can be viewed at the junior track schedule page.

January 02, 2025

2024 brought a mix of work travel and memorable adventures, taking me to 13 countries across four continents — including the ones I call home. With 39 flights and 90 nights spent in hotels and rentals (about 25% of the year), it was a year marked by movement and new experiences.

Activity Count
🌍 Countries visited 13
✈️ Flights taken 39
🚕 Taxi rides 158
🍽️ Restaurant visits 175
☕️ Coffee shop visits 44
🍺 Bar visits 31
🏨 Days at hotel or rentals 90
⛺️ Days camping 12

Countries visited:

  • Australia
  • Belgium
  • Bulgaria
  • Canada
  • Cayman Islands
  • France
  • Japan
  • Netherlands
  • Singapore
  • South Korea
  • Spain
  • United Kingdom
  • United States

January 01, 2025

2025 = (20 + 25)²

2025 = 45²

2025 = 1³+2³+3³+4³+5³+6³+7³+8³+9³

2025 = (1+2+3+4+5+6+7+8+9)²

2025 = 1+3+5+7+9+11+...+89

2025 = 9² x 5²

2025 = 40² + 20² + 5²
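
For the skeptical, these identities are quick to verify. A small Perl snippet using the core List::Util module (my own addition, not part of the original list) does the job:

use v5.40;
use List::Util qw(sum);

say( (20 + 25)**2 );               # 2025
say( sum(map { $_**3 } 1..9) );    # 2025
say( sum(1..9)**2 );               # 2025
say( sum(grep { $_ % 2 } 1..89) ); # 2025 (the 45 odd numbers up to 89)
say( 9**2 * 5**2 );                # 2025
say( 40**2 + 20**2 + 5**2 );       # 2025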

December 30, 2024

At DrupalCon Asia in Singapore a few weeks ago, I delivered my traditional State of Drupal keynote. This event marked DrupalCon's return to Asia after an eight-year hiatus, with the last one being DrupalCon Mumbai in 2016.

It was so fun to reconnect with the Drupal community across Asia and feel the passion and enthusiasm for Drupal. The positive energy was so contagious that three weeks later, I still feel inspired by it.

If you missed the keynote, you can watch the video below, or download my slides (196 MB).

I talked about the significant progress we've made on Drupal CMS (code name Drupal Starshot) since DrupalCon Barcelona just a few months ago.

Our vision for Drupal CMS is clear: to set the standard for no-code website building. My updates highlighted how Drupal CMS empowers digital marketers and content creators to design sophisticated digital experiences while preserving Drupal's power and flexibility.

For more background on Drupal CMS, I recommend reading our three-year strategy document. We're about a quarter of the way through, time-wise, and as you'll see from my keynote, we're making very fast progress.

A slide from my recent DrupalCon Singapore State of Drupal keynote showcasing key contributors to Drupal CMS. This slide showcases how we recognize and celebrate Makers in our community, encouraging active participation in the project.

Below are some of the key improvements I showcased in my keynote, highlighted in short video segments. These videos demonstrate just 7 recipes, but we have nearly 20 in development.

Watching these demos, you'll quickly see how much time and effort Drupal CMS can save for both beginners and experienced developers. Manually assembling these features would take weeks for a Drupal expert and months for a novice. These recipes pack a wealth of expertise and best practices. What once took a Drupal expert weeks can now be done by a novice in hours.

AI support in Drupal

We're integrating AI agents into Drupal to assist with complex site-building tasks, going far beyond just content creation. Users can choose to have AI complete tasks automatically or provide step-by-step guidance, helping them learn Drupal as they go.

Search

We're including enhanced search functionality with autocomplete and faceted search, delivering enterprise-grade capabilities out of the box.

Privacy

With increasing global privacy regulations, marketers need robust compliance solutions, yet very few content management systems offer this out-of-the-box. I demonstrated how Drupal CMS will offer a user-centric approach to privacy and consent management, making compliance easier and more effective.

Media management

Our improved media management tools now include features like focal point control and image style presets, enabling editors to handle visual content efficiently while maintaining accessibility standards.

Accessibility tools

Our accessibility tools provide real-time feedback during content creation, helping identify and resolve potential issues that could affect the user experience for visually-impaired visitors.

Analytics

Analytics integration streamlines the setup of Google Analytics and Tag Manager, something that 75% of all marketers use.

Experience Builder

Drupal's new Experience Builder will bring a massive improvement in visual page building. It combines drag-and-drop simplicity with an enterprise-grade component architecture. It looks fantastic, and I'm really excited for it!

Conclusion

Drupal CMS has been a catalyst for innovation and collaboration, driving strong growth in organizational credits. Development of Drupal CMS began in 2024, and we expect a significant increase in contributions this year. Credits have tripled from 2019 to 2024, demonstrating our growing success in driving strategic innovation in Drupal.

In addition to our progress on Drupal CMS, the product, we've made real strides in other areas, such as marketing, modernizing Drupal.org, and improving documentation – all important parts of the Drupal Starshot initiative.

Overall, I'm incredibly proud of the progress we've made. So much so that we've released our first release candidate at DrupalCon Singapore, which you can try today by following my installation instructions for Drupal CMS.

While we still have a lot of work left, we are on track for the official release on January 15, 2025! To mark the occasion, we're inviting the Drupal community to organize release parties around the world. Whether you want to host your own event or join a party near you, you can find all the details and sign-up links for Drupal CMS release parties. I'll be celebrating from Boston and hope to drop in on other parties via Zoom!

Finally, I want to extend my heartfelt thanks to everyone who has contributed to Drupal CMS and DrupalCon Singapore. Your hard work and dedication have made this possible. Thank you!

December 27, 2024

At work, I've been maintaining a perl script that needs to run a number of steps as part of a release workflow.

Initially, that script was very simple, but over time it has grown to do a number of things. And then some of those things did not need to be run all the time. And then we wanted to do this one exceptional thing for this one case. And so on; eventually the script became a big mess of configuration options and unreadable flow, and so I decided that I wanted it to be more configurable. I sat down and spent some time on this, and eventually came up with what I now realize is a domain-specific language (DSL) in JSON, implemented by creating objects in Moose, extensible by writing more object classes.

Let me explain how it works.

In order to explain, however, I need to explain some perl and Moose basics first. If you already know all that, you can safely skip ahead past the "Preliminaries" section that's next.

Preliminaries

Moose object creation, references.

In Moose, creating a class is done something like this:

package Foo;

use v5.40;
use Moose;

has 'attribute' => (
    is  => 'ro',
    isa => 'Str',
    required => 1
);

sub say_something {
    my $self = shift;
    say "Hello there, our attribute is " . $self->attribute;
}

The above is a class that has a single attribute called attribute. To create an object, you use the Moose constructor on the class, and pass it the attributes you want:

use v5.40;
use Foo;

my $foo = Foo->new(attribute => "foo");

$foo->say_something;

(output: Hello there, our attribute is foo)

This creates a new object with the attribute attribute set to foo. The attribute accessor is a method generated by Moose, which functions both as a getter and a setter (though in this particular case we made the attribute "ro", meaning read-only, so while it can be set at object creation time it cannot be changed by the setter anymore). So yay, an object.

And it has methods, things that we set ourselves. Basic OO, all that.

One of the peculiarities of perl is its concept of "lists". Not to be confused with the lists of python -- a concept that is called "arrays" in perl and is somewhat different -- in perl, lists are enumerations of values. They can be used as initializers for arrays or hashes, and they are used as arguments to subroutines. Lists cannot be nested; whenever a hash or array is passed in a list, the list is "flattened", that is, it becomes one big list.

This means that the below script is functionally equivalent to the above script that uses our "Foo" object:

use v5.40;
use Foo;

my %args;

$args{attribute} = "foo";

my $foo = Foo->new(%args);

$foo->say_something;

(output: Hello there, our attribute is foo)

This creates a hash %args wherein we set the attributes that we want to pass to our constructor. We set one attribute in %args, the one called attribute, and then use %args and rely on list flattening to create the object with the same attribute set (list flattening turns a hash into a list of key-value pairs).

Perl also has a concept of "references". These are scalar values that point to other values; the other value can be a hash, a list, or another scalar. There is syntax to create a non-scalar value at assignment time, called anonymous references, which is useful when one wants to remember non-scoped values. By default, references are not flattened, and this is what allows you to create multidimensional values in perl; however, it is possible to request list flattening by dereferencing the reference. The below example, again functionally equivalent to the previous two examples, demonstrates this:

use v5.40;
use Foo;

my $args = {};

$args->{attribute} = "foo";

my $foo = Foo->new(%$args);

$foo->say_something;

(output: Hello there, our attribute is foo)

This creates a scalar $args, which is a reference to an anonymous hash. Then, we set the key attribute of that anonymous hash to foo (note the use of the arrow operator here, which is used to indicate that we want to dereference a reference to a hash), and create the object using that reference, requesting hash dereferencing and flattening by using a double sigil, %$.

As a side note, objects in perl are references too, hence the fact that we have to use the dereferencing arrow to access the attributes and methods of Moose objects.

Moose attributes don't have to be strings or even simple scalars. They can also be references to hashes or arrays, or even other objects:

package Bar;

use v5.40;
use Moose;

extends 'Foo';

has 'hash_attribute' => (
    is => 'ro',
    isa => 'HashRef[Str]',
    predicate => 'has_hash_attribute',
);

has 'object_attribute' => (
    is => 'ro',
    isa => 'Foo',
    predicate => 'has_object_attribute',
);

sub say_something {
    my $self = shift;

    if($self->has_object_attribute) {
        $self->object_attribute->say_something;
    }

    $self->SUPER::say_something unless $self->has_hash_attribute;

    say "We have a hash attribute!"
}

This creates a subclass of Foo called Bar that has a hash attribute called hash_attribute, and an object attribute called object_attribute. Both of them are references; one to a hash, the other to an object. The hash ref is further limited in that it requires that each value in the hash must be a string (this is optional but can occasionally be useful), and the object ref in that it must refer to an object of the class Foo, or any of its subclasses.

The predicates used here are extra subroutines that Moose provides if you ask for them, and which allow you to see if an object's attribute has a value or not.

The example script would use an object like this:

use v5.40;
use Bar;

my $foo = Foo->new(attribute => "foo");

my $bar = Bar->new(object_attribute => $foo, attribute => "bar");

$bar->say_something;

output:

Hello there, our attribute is foo
Hello there, our attribute is bar

This example also shows object inheritance, and methods implemented in child classes.

Okay, that's it for perl and Moose basics. On to...

Moose Coercion

Moose has a concept of "value coercion". Value coercion allows you to tell Moose that if it sees one thing but expects another, it should convert is using a passed subroutine before assigning the value.

That sounds a bit dense without an example, so let me show you how it works. Reimagining the Bar package, we could use coercion to eliminate one object creation step from the creation of a Bar object:

package "Bar";

use v5.40;

use Moose;
use Moose::Util::TypeConstraints;

extends "Foo";

coerce "Foo",
    from "HashRef",
    via { Foo->new(%$_) };

has 'hash_attribute' => (
    is => 'ro',
    isa => 'HashRef',
    predicate => 'has_hash_attribute',
);

has 'object_attribute' => (
    is => 'ro',
    isa => 'Foo',
    coerce => 1,
    predicate => 'has_object_attribute',
);

sub say_something {
    my $self = shift;

    if($self->has_object_attribute) {
        $self->object_attribute->say_something;
    }

    $self->SUPER::say_something unless $self->has_hash_attribute;

    say "We have a hash attribute!"
}

Okay, let's unpack that a bit.

First, we add the Moose::Util::TypeConstraints module to our package. This is required to declare coercions.

Then, we declare a coercion to tell Moose how to convert a HashRef to a Foo object: by using the Foo constructor on a flattened list created from the hashref that it is given.

Then, we update the definition of the object_attribute to say that it should use coercions. This is not the default, because going through the list of coercions to find the right one has a performance penalty, so if the coercion is not requested then we do not do it.

This allows us to simplify declarations. With the updated Bar class, we can simplify our example script to this:

use v5.40;

use Bar;

my $bar = Bar->new(attribute => "bar", object_attribute => { attribute => "foo" });

$bar->say_something

output:

Hello there, our attribute is foo
Hello there, our attribute is bar

Here, the coercion kicks in because the value object_attribute, which is supposed to be an object of class Foo, is instead a hash ref. Without the coercion, this would produce an error message saying that the type of the object_attribute attribute is not a Foo object. With the coercion, however, the value that we pass to object_attribute is passed to a Foo constructor using list flattening, and then the resulting Foo object is assigned to the object_attribute attribute.

Coercion works for more complicated things, too; for instance, you can use coercion to coerce an array of hashes into an array of objects, by creating a subtype first:

package MyCoercions;
use v5.40;

use Moose;
use Moose::Util::TypeConstraints;

use Foo;

subtype "ArrayOfFoo", as "ArrayRef[Foo]";
subtype "ArrayOfHashes", as "ArrayRef[HashRef]";

coerce "ArrayOfFoo", from "ArrayOfHashes", via { [ map { Foo->create(%$_) } @{$_} ] };

Ick. That's a bit more complex.

What happens here is that we use the map function to iterate over a list of values.

The given list of values is @{$_}, which is perl for "dereference the default variable $_ as an array reference, and flatten the list of values in that array reference".

So the ArrayRef of HashRefs is dereferenced and flattened, and each HashRef in the ArrayRef is passed to the map function.

The map function then takes each hash ref in turn and passes it to the block of code that it is also given. In this case, that block is { Foo->create(%$_) }. In other words, we invoke the create factory method with the flattened hashref as an argument. This returns an object of the correct implementation (assuming our hash ref has a type attribute set), with all attributes of that object set to the correct values. That object is then returned from the block (this could be made more explicit with a return call, but that is optional; perl defaults the return value of a block to the value of its last expression).

The map function then returns a list of all the created objects, which we capture in an anonymous array ref (the [] square brackets), i.e., an ArrayRef of Foo objects, satisfying the Moose requirement of ArrayRef[Foo].

Usually, I tend to put my coercions in a special-purpose package. Although it is not strictly required by Moose, I find that it is useful to do this, because Moose does not allow a coercion to be defined if a coercion for the same type has already been declared in a different package. And while it is theoretically possible to make sure you only ever declare a coercion once in your entire codebase, I find that this is easier to guarantee if you put all your coercions in a specific package.

Okay, now you understand Moose object coercion! On to...

Dynamic module loading

Perl allows loading modules at runtime. In the simplest case, you just use require inside a stringy eval:

my $module = "Foo";
eval "require $module";

This loads "Foo" at runtime. Obviously, the $module string could be a computed value, it does not have to be hardcoded.

There are some obvious downsides to doing things this way, mostly in the fact that a computed value can basically be anything, and so without proper checks this can quickly become an arbitrary code execution vulnerability. As such, there are a number of distributions on CPAN that help you with the low-level work of figuring out what the possible modules are, and how to load them.

For the purposes of my script, I used Module::Pluggable. Its API is fairly simple and straightforward:

package Foo;

use v5.40;
use Moose;

use Module::Pluggable require => 1;

has 'attribute' => (
    is => 'ro',
    isa => 'Str',
);

has 'type' => (
    is => 'ro',
    isa => 'Str',
    required => 1,
);

sub handles_type {
    return 0;
}

sub create {
    my $class = shift;
    my %data = @_;

    foreach my $impl($class->plugins) {
        if($impl->can("handles_type") && $impl->handles_type($data{type})) {
            return $impl->new(%data);
        }
    }
    die "could not find a plugin for type " . $data{type};
}

sub say_something {
    my $self = shift;
    say "Hello there, I am a " . $self->type;
}

The new concept here is the plugins class method, which is added by Module::Pluggable, and which searches perl's library paths for all modules that are in our namespace. The namespace is configurable, but by default it is the name of our module; so in the above example, if there is a package "Foo::Bar" which

  • has a subroutine handles_type,
  • that returns a truthy value when passed the value of the type key in the hash that is passed to the create subroutine,

then the create subroutine will create a new object of that class, with the passed key/value pairs used as attribute initializers.

Let's implement a Foo::Bar package:

package Foo::Bar;

use v5.40;
use Moose;

extends 'Foo';

has 'type' => (
    is => 'ro',
    isa => 'Str',
    required => 1,
);

has 'serves_drinks' => (
    is => 'ro',
    isa => 'Bool',
    default => 0,
);

sub handles_type {
    my $class = shift;
    my $type = shift;

    return $type eq "bar";
}

sub say_something {
    my $self = shift;
    $self->SUPER::say_something;
    say "I serve drinks!" if $self->serves_drinks;
}

We can now indirectly use the Foo::Bar package in our script:

use v5.40;
use Foo;

my $obj = Foo->create(type => "bar", serves_drinks => 1);

$obj->say_something;

output:

Hello there, I am a bar
I serve drinks!

Okay, now you understand all the bits and pieces that are needed to understand how I created the DSL engine. On to...

Putting it all together

We're actually quite close already. The create factory method in the last version of our Foo package allows us to decide at run time which module to instantiate an object of, and to load that module at run time. We can use coercion and list flattening to turn a reference to a hash into an object of the correct type.

We haven't looked yet at how to turn a JSON data structure into a hash, but that bit is actually ridiculously trivial:

use JSON::MaybeXS;

my $data = decode_json($json_string);

Tada, now $data is a reference to a deserialized version of the JSON string: if the JSON string contained an object, $data is a hashref; if the JSON string contained an array, $data is an arrayref, etc.

So, in other words, to create an extensible JSON-based DSL that is implemented by Moose objects, all we need to do is create a system that

  • takes hash refs to set arguments
  • has factory methods to create objects, which

    • uses Module::Pluggable to find the available object classes, and
    • uses the type attribute to figure out which object class to use to create the object
  • uses coercion to convert hash refs into objects using these factory methods

In practice, we could have a JSON file with the following structure:

{
    "description": "do stuff",
    "actions": [
        {
            "type": "bar",
            "serves_drinks": true,
        },
        {
            "type": "bar",
            "serves_drinks": false,
        }
    ]
}

... and then we could have a Moose object definition like this:

package MyDSL;

use v5.40;
use Moose;

use MyCoercions;

has "description" => (
    is => 'ro',
    isa => 'Str',
);

has 'actions' => (
    is => 'ro',
    isa => 'ArrayOfFoo',
    coerce => 1,
    required => 1,
);

sub say_something {
    my $self = shift;

    say "Hello there, I am described as " . $self->description . " and I am performing my actions: ";

    foreach my $action(@{$self->actions}) {
        $action->say_something;
    }
}

Now, we can write a script that loads this JSON file and creates a new object using the flattened arguments:

use v5.40;
use MyDSL;
use JSON::MaybeXS;

my $input_file_name = shift;

my $args = do {
    local $/ = undef;

    open my $input_fh, "<", $input_file_name or die "could not open file";
    <$input_fh>;
};

$args = decode_json($args);

my $dsl = MyDSL->new(%$args);

$dsl->say_something

Output:

Hello there, I am described as do stuff and I am performing my actions:
Hello there, I am a bar
I serve drinks!
Hello there, I am a bar

In some more detail, this will:

  • Read the JSON file and deserialize it;
  • Pass the object keys in the JSON file as arguments to a constructor of the MyDSL class;
  • The MyDSL class then uses those arguments to set its attributes, using Moose coercion to convert the "actions" array of hashes into an array of Foo::Bar objects.
  • Perform the say_something method on the MyDSL object

Once this is written, extending the scheme to also support a "quux" type simply requires writing a Foo::Quux class, making sure it has a method handles_type that returns a truthy value when called with quux as the argument, and installing it into the perl library path. This is rather easy to do.

It can even be extended deeper, too; if the quux type requires a list of arguments rather than just a single argument, it could itself also have an array attribute with relevant coercions. These coercions could then be used to convert the list of arguments into an array of objects of the correct type, using the same schema as above.
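
For illustration, here is a minimal sketch of what such a Foo::Quux class could look like, including an array attribute that reuses the ArrayOfFoo coercion from the MyCoercions package above. The steps attribute name is made up for this example; the real DSL will of course use different attributes:

package Foo::Quux;

use v5.40;
use Moose;

use MyCoercions;

extends 'Foo';

# A nested list of actions, coerced from an array of hash refs into
# Foo (sub)class objects by the ArrayOfFoo coercion declared in MyCoercions.
has 'steps' => (
    is      => 'ro',
    isa     => 'ArrayOfFoo',
    coerce  => 1,
    default => sub { [] },
);

sub handles_type {
    my $class = shift;
    my $type = shift;

    return $type eq "quux";
}

sub say_something {
    my $self = shift;

    $self->SUPER::say_something;
    $_->say_something for @{$self->steps};
}

Once a file like this is installed in the perl library path, a JSON snippet with "type": "quux" and a "steps" array of hashes would be picked up by the create factory method without any changes to the existing code.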

The actual DSL is of course somewhat more complex, and also actually does something useful, in contrast to the DSL that we define here which just says things.

Creating an object that actually performs some action when required is left as an exercise to the reader.

December 23, 2024

Mon collègue Julius

Vous connaissez Julius ? Mais si, Julius ! Vous voyez certainement de qui je veux parler !

J’ai rencontré Julius à l’université. Un jeune homme discret, sympathique, le sourire aux lèvres. Ce qui m’a d’abord frappé chez Julius, outre ses vêtements toujours parfaitement repassés, c’est la qualité de son écoute. Il ne m’interrompait jamais, acceptait de s’être trompé et répondait sans hésiter à toutes mes interrogations.

Il allait à tous les cours, demandait souvent les notes des autres pour « comparer avec les siennes » comme il disait. Et puis il y eut le fameux projet informatique. Nous devions, en équipe, coder un logiciel système assez complexe en utilisant le langage C. Julius participait à toutes nos réunions, mais je ne me souviens pas de l’avoir vu écrire une seule ligne de code. Au final, je crois qu’il s’est contenté de faire la mise en page du rapport. Qui était très bien.

De par sa prestance et son élégance, Julius était tout désigné pour faire la présentation finale. Je suis sûr qu’il a fait du théâtre, car, à son charisme naturel, il ajoute une diction parfaite. Il émane de sa personne une impression de confiance innée.

À tel point que les professeurs n’ont pas tout de suite réalisé le problème lorsqu’il s’est mis à parler de la machine virtuelle C utilisée dans notre projet. Il avait intégré dans la présentation un slide avec un logo que je n’avais jamais vu, un screenshot et des termes n’ayant aucun rapport avec quoi que ce soit de connu en informatique.

Pour celleux qui ne connaissent pas l’informatique, le C est un langage compilé. Il n’a pas besoin d’une machine virtuelle. Parler de machine virtuelle C, c’est comme parler du carburateur d’une voiture électrique. Cela n’a tout simplement aucun sens.

Je me suis levé, j’ai interrompu Julius et j’ai improvisé en disant qu’il s’agissait d’une simple blague entre nous. « Bien entendu ! » a fait Julius en me regardant avec un grand sourire. Le jury de projet était perplexe, mais j’ai sauvé les meubles.

Durant toutes nos études, j’ai entendu plusieurs professeurs discuter du « cas Julius ». Certains le trouvaient très bon. D’autres disaient qu’il avait des lacunes profondes. Mais, malgré des échecs dans certaines matières, il a fini par avoir son diplôme en même temps que moi.

Nos chemins se sont ensuite séparés durant plusieurs années.

Alors que je travaillais depuis presque une décennie dans une grande entreprise où j’avais acquis de belles responsabilités, mon chef m’a annoncé que les recruteurs avaient trouvé la perle rare pour renforcer l’équipe. Un CV hors-norme m’a-t-il dit.

À la coupe parfaite de son costume, à sa démarche et sa prestance, je reconnus Julius avant même de voir son visage.

Julius ! Mon vieux camarade !

Si j’avais vieilli, il semblait avoir mûri. Toujours autant de charisme, d’assurance. Il portait désormais une barbe de trois jours légèrement grisonnante qui lui donnait un air de sage autorité. Il semblait sincèrement content de me revoir.

Nous parlâmes du passé et de nos carrières respectives. Contrairement à moi, Julius n’était jamais resté très longtemps dans la même entreprise. Il partait après un an, parfois moins. Son CV était impressionnant : il avait acquis diverses expériences, il avait touché à tous les domaines de l’informatique. À chaque fois, il montait en compétence et en salaire. Je devais découvrir plus tard que, alors que nous occupions une position similaire, il avait été engagé pour le double de mon salaire. Plus des primes dont j’ignorais jusqu’à l’existence.

But I wasn’t aware of that side of things when we started working together. At first, I tried to train him on our projects and our internal processes. I gave him tasks about which he would ask me questions. Many questions, not always very relevant ones. With that olympian calm and that eternal smile that characterised him.

Sometimes he took initiatives. Wrote code or documentation. He had an answer to every question we could ask ourselves, whatever the field. Sometimes it was very good, often mediocre, sometimes complete nonsense. It took us a while to understand that every one of Julius’s contributions had to be entirely reviewed and corrected by another member of the team. If we didn’t know the field, it had to be checked by an outside expert. Very quickly, the watchword became that no document coming from Julius should be made public before being proofread by two of us.

But Julius excelled at layout, presentation and running meetings. Regularly, my boss would come up to me and say: “We’re really lucky to have this Julius! What talent! What a contribution to the team!”

I tried in vain to explain that Julius understood nothing of what we were doing, that we had reached the point of sending him to useless meetings just to get rid of him, so that we wouldn’t have to answer his questions and correct his work. But even that strategy had its limits.

It took us a week of crisis meetings to explain to a customer disappointed by an update of our software that, while Julius had promised the interface would be simplified down to a single button doing exactly what the customer wanted, there had been a misunderstanding. That, short of building a mind-reading machine, it was impossible to meet needs as complex as his with a single button.

It was when I heard Julius claim to another customer, panicked at the idea of getting “hacked”, that, as a security measure, our Internet-connected servers had no IP address, that we had to forbid him from meeting a client alone.

For those who don’t know about computing, the “I” in IP address stands for Internet. The very definition of the Internet is the set of interconnected computers that have an IP address.

Being on the Internet without an IP address is like claiming to be reachable by phone without having a phone number.

The team had by then organised itself so that one of us was always in charge of keeping Julius busy. I never wanted to speak ill of him, because he was my friend. An exasperated programmer, however, raised the problem with my boss, who responded by accusing her of jealousy, since he was very satisfied with Julius’s work. She was reprimanded and resigned shortly afterwards.

Fortunately, Julius announced one day that he was leaving us because he had received an offer he couldn’t refuse. He brought cakes to celebrate his last day with us. My boss and the entire human resources department were genuinely sad to see him go.

I said goodbye to Julius and never saw him again. On his LinkedIn account, which is very active and receives hundreds of comments, the year he spent with us has become an incredible experience. Yet he hasn’t exaggerated anything. Everything is true. But his way with words and a certain poorly concealed modesty give the impression that he really contributed a lot to the team. He apparently then became deputy to the CEO, then interim CEO, of a startup that had just been bought by a multinational. A business newspaper wrote an article about him. After that episode, he joined a ministerial cabinet. A meteoric career!

For my part, I tried to forget Julius. But, recently, my boss came in with a huge smile. He had met a salesman from a company whose products had dazzled him. Artificial intelligence software that would, I quote, boost our productivity!

I now have an artificial intelligence tool that helps me code. Another that helps me search for information. A third that summarises and writes my emails. I am not allowed to disable them.

At every moment, every second, I feel surrounded by Julius. By dozens of Juliuses.

I have to work hemmed in by Juliuses. Every click on my computer, every notification on my phone seems to come from Julius. My life is a hell paved with Juliuses.

My boss came to see me. He told me that the team’s productivity was dropping dangerously. That we should use artificial intelligence more effectively. That we risked being overtaken by competitors who, no doubt, were using the very latest artificial intelligence. That he had commissioned a consultant to install a time and productivity management artificial intelligence for us.

I started to cry. “Another Julius!” I sobbed.

My boss sighed. He patted my shoulder and said: “I understand. I miss Julius too. He would certainly have helped us get through this difficult time.”

I’m Ploum and I’ve just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

My colleague Julius

Do you know Julius? You certainly know who I’m talking about!

I met Julius at university. A measured, friendly young man. He always wore a smile on his face. What struck me about Julius, aside from his always perfectly ironed clothes, was his ability to listen. He never interrupted me. He accepted gratefully when he was wrong. He answered questions without hesitation.

He attended all the classes and often asked for our notes to "compare with his own" as he said. Then came the infamous computer project. As a team of students, we had to code a fairly complex system software using the C language. Julius took part in all our meetings but I don’t remember witnessing him write a single line of code. In the end, I think he did the report formatting. Which, to his credit, was very well done.

Because of his charisma and elegance, Julius was the obvious choice to give the final presentation.

He was so self-confident during the presentation that the professors didn’t immediately notice the problem. He had started talking about the C virtual machine used in our project. He even showed a slide with an unknown logo and several random screenshots which had nothing to do with anything known in computing.

For those who don’t know about computing, C is a compiled language. It doesn’t need a virtual machine. Talking about a C virtual machine is like talking about the carburettor of an electric vehicle. It doesn’t make sense.
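To make the contrast concrete, here is a minimal sketch, assuming only that an ordinary C compiler (cc or gcc) is installed: the source file is turned directly into a native executable, and no virtual machine is involved at any point.

# write a tiny C program to a file
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { printf("Hello, Julius\n"); return 0; }
EOF
cc hello.c -o hello   # the compiler emits machine code for your CPU
./hello               # the binary runs directly on the hardware: no VM, no interpreter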

I stood up, interrupted Julius and improvised by saying it was just a joke. “Of course!” said Julius, looking at me with a big smile. The jury was perplexed. But I saved the day.

Throughout our studies, I heard several professors discuss the “Julius case.” Some thought he was very good. Others said he lacked a fundamental understanding. Despite failing some classes, he ended up graduating with me.

After that, our paths went their separate ways for several years.

I had been working for nearly a decade at a large company, where I held significant responsibilities. One day, my boss announced that recruiters had found a rare gem for our team. An extraordinary resume, he told me.

From the perfect cut of his suit, I recognised Julius before seeing his face.

Julius! My old classmate!

If I had aged, he had matured. Still charismatic and self-assured. He now sported a slightly graying three-day beard that gave him an air of wise authority. He genuinely seemed happy to see me.

We talked about the past and about our respective careers. Unlike me, Julius had never stayed very long in the same company. He usually left after a year, sometimes less. His resume was impressive: he had gained various experiences, touched on all areas of computing. Each time, he moved up in skills and salary. I would later discover that, while we held similar positions, he had been hired at twice my salary. He also got bonuses I didn’t even know existed.

But I wasn’t aware of this aspect when we started working together. At first, I tried to train him on our projects and internal processes. I assigned him tasks on which he would ask me questions. Many questions, not always very relevant ones. With his characteristic calm and his signature smile.

He took initiatives. Wrote code or documentation. He had answers to all the questions we could ask, regardless of the field. Sometimes it was very good, often mediocre or, in some cases, complete nonsense. It took us some time to understand that each of Julius’s contributions needed to be completely reviewed and corrected by another team member. If it was not our field of expertise, it had to be checked externally. We quickly adopted an unwritten rule that no document from Julius should leave the team before being proofread by two of us.

But Julius excelled in formatting, presentation, and meeting management. Regularly, my boss would come up to me and say, “We’re really lucky to have this Julius! What talent! What a contribution to the team!”

I tried, without success, to explain that Julius understood nothing of what we were doing. That we had reached the point where we sent him to useless meetings to get rid of him for a few hours. But even that strategy had its limits.

It took us a week of crisis management meetings to calm down a customer disappointed by an update of our software. We had to explain that, while Julius had promised the interface would be simplified to a single button doing exactly what the client wanted, there had been a misunderstanding. That, short of developing a mind-reading machine, it was impossible to meet needs as complex as his with just one button.

We decided to act when I heard Julius claim to a customer, panicked at the idea of being "hacked", that, for security reasons, our servers connected to the Internet had no IP address. We had to forbid him from meeting a client alone.

For those who don’t know about computing, the "I" in IP address stands for Internet. The very definition of the Internet is the network of interconnected computers that have an IP address.

Being on the Internet without an IP address is like claiming to be reachable by phone without having a phone number.

The team was reorganised so that one of us was always responsible for keeping Julius occupied. I never wanted to speak ill of him because he was my friend. An exasperated programmer had no such restraint and exposed the problem to my boss. Who responded by accusing her of jealousy, as he was very satisfied with Julius’s work. She was reprimanded and resigned shortly after.

Fortunately, Julius announced that he was leaving because he had received an offer he couldn’t refuse. He brought cakes to celebrate his last day with us. My boss and the entire human resources department were genuinely sad to see him go.

I said goodbye to Julius and never saw him again. On his LinkedIn account, which is very active and receives hundreds of comments, the year he spent with us has become an incredible experience. He hasn’t exaggerated anything. Everything is true. But his way with words and a certain poorly concealed modesty give the impression that he really contributed a lot to the team. He later became deputy CEO, then interim CEO, of a startup that had just been acquired by a multinational. A business newspaper wrote an article about him. After that episode, he joined the team of a secretary of state. A meteoric career!

For my part, I tried to forget Julius. But, recently, my boss came to me with a huge smile. He had met a salesperson from a company whose products had amazed him. Artificial intelligence software that would, I quote, boost our productivity!

I now have an artificial intelligence tool that helps me code. Another that helps me search for information. A third that summarises and writes my emails. I am not allowed to disable them.

At every moment, every second, I feel surrounded by Julius. By dozens of Juliuses.

I have to work hemmed in by Juliuses. Every click on my computer, every notification on my phone seems to come from Julius. My life is a hell paved with Juliuses.

My boss came to see me. He told me that the team’s productivity was dangerously declining. That we should use artificial intelligence more effectively. That we risked being overtaken by competitors who, without a doubt, were using the very latest artificial intelligence. That he had hired a consultant to install a new time and productivity management artificial intelligence.

I started to cry. “Another Julius!” I sobbed.

My boss sighed. He patted my shoulder and said, “I understand. I miss Julius too. He would certainly have helped us get through this difficult time.”

I’m Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.

I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!

December 20, 2024

Let’s stay a bit longer with MySQL 3.2x to advance the MySQL Retrospective in anticipation of the 30th Anniversary. The idea of this article was suggested to me by Daniël van Eeden. Did you know that in the early days, and therefore still in MySQL 3.20, MySQL used the ISAM storage format? IBM introduced the […]

December 18, 2024

To further advance the MySQL Retrospective in anticipation of the 30th Anniversary, today, let’s discuss the very first version of MySQL that became available to a wide audience through the popular InfoMagic distribution: MySQL 3.20! In 1997, InfoMagic incorporated MySQL 3.20 as part of the RedHat Contrib CD-ROM (MySQL 3.20.25). Additionally, version 3.20.13-beta was also […]

The urgency of supporting the energy of free software

Editorial written for Le Lama déchaîné no. 9, the weekly published by April to raise the alarm about the association’s financial precarity. I was limited to 300 words. For a chatterbox like me, that is a very difficult exercise! (it is déchaîné… hee hee hee! That one is funny, I have only just got it)

Rising extremism, climate disasters, political and social crises, wars. Amid these emergencies, is it still reasonable to devote energy to free software and to digital and cultural commons? Shouldn’t we rethink our priorities?

That shortcut is a dangerous one.

Shouldn’t we, on the contrary, go back to fundamentals and think about the very infrastructure of our society?

Contrary to what the industry magnates keep telling us, technology is never neutral. It carries its own ideology. By its very nature, the extreme centralisation of our Internet tools prefigures the centralisation of an authoritarian, fascist power. The ubiquity of the advertising model makes growth and hyperconsumption inescapable. These two pillars meet and complement each other in the normalisation of permanent technological surveillance.

If we want to change direction, if we want to learn to limit our consumption of natural resources, to listen to and respect our differences, to build democratic compromises, then it is urgent and essential to attack the root: our infrastructure for communication and exchange. To free the network that connects us, that connects our data, our commercial exchanges, our thoughts, our emotions.

Joining an anticapitalist group on Facebook, posting zero-waste videos on Instagram or using the Outlook infrastructure for your union’s email are acts that actively help promote, justify and perpetuate the very system they naively seek to denounce.

It is no coincidence that discussions on the Fediverse and the Mastodon network are about cycling, ecology, feminism. Because free and decentralised technology carries its own ideology. Because it manages to hold on, to frighten the biggest monopolies capitalism has ever produced, and that despite the fact that it only holds together thanks to bits of string and the energy of a few underpaid people and volunteers.

When everything seems to be going wrong, we must focus on the roots, the fundamentals. Infrastructure, education. That is why I think the actions of April, La Quadrature du Net, Framasoft, La Contre-Voie and all the free-software associations are not merely important.

They are vital, crucial.

Free software is not a luxury, it is an absolute emergency.

— And then, hee hee hee, Ploum said: it is… hee hee hee… It is déchaîné!
— For pity’s sake, donate to April or he will tell it again!
It is déchaîné… hee hee hee…

I’m Ploum and I’ve just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

December 16, 2024

20 years of Linux on the Desktop (part 2)

Previously in "20 years of Linux on the Desktop": Looking to make the perfect desktop with GNOME and Debian, a young Ploum finds himself joining a stealth project called "no-name-yet". The project is later published under the name "Ubuntu".

Flooded with Ubuntu CD-ROMs

The first official Ubuntu release was 4.10. At that time, I happened to be the president of my university LUG: LouvainLiNux. LouvainLiNux had been founded a few years before by Fabien Pinckaers, Anthony Lesuisse and Benjamin Henrion as an informal group of friends. After they graduated and left university, Fabien handed me all the archives and all the information and told me to continue the work while he ran his company, which would, much later, become Odoo. With my friend Bertrand Rousseau, we decided to make Louvain-Li-Nux a formal and enduring organisation known as a "KAP" (Kot-à-Projet). Frédéric Minne designed the logo by putting Fabien's student hat ("calotte") on a penguin clipart.

In 2005 and 2006, we worked really hard to organise multiple install parties and conferences. We also offered resources and support. At a time when broadband Internet was not really common, the best resource for installing GNU/Linux was an installation CD-ROM.

Thanks to Mark Shuttleworth’s money, Ubuntu was doing something unprecedented: sending free CD-ROMs of Ubuntu to anyone requesting them. Best of all: the box contained two CD-ROMs. A live image and an installation CD. Exactly how I dreamed it (I’m not sure if the free CD-ROMs started with 4.10, 5.04 or even 5.10).

I managed to get Louvain-Li-Nux recognised as an official Ubuntu distributor and we started to receive boxes full of hundreds of CD-ROMs with small cardboard dispensers. We had entire crates of Ubuntu CD-ROMs. It was the easiest to install. It was the one I knew best, and I had converted Bertrand (before Fabien taught me about Debian, Bertrand had tried to convert me to Mandrake, which he was using himself. He nevertheless spent the whole night with me when I installed Debian for the first time and could not configure the network, because the chipset of my ethernet card was not the one listed on the card's box. At the time, you had to manually choose which kernel module to load. It was another era; kids these days don't know what they are missing).

With Louvain-Li-Nux, we literally distributed hundreds of CD-ROMs. I myself installed Ubuntu on dozens of computers. It was not always easy, as the market was pivoting from desktop computers to laptops. Laptops were starting to be affordable and powerful enough. But laptops came with exotic hardware: wifi, Bluetooth, power management, sleep, hibernate, strange keyboard keys and lots of very complex stuff that you don't need to handle on a desktop computer with an RJ-45 socket.

Sound was a hard problem. I remember spending hours on a laptop before realising there was a hardware switch. To play multiple sounds at the same time, you needed to launch a daemon called ESD. Our frustration with ESD would lead Bertrand and me to trap Lennart Poettering in a cellar in Brussels and spend the whole night drinking beers with him, while swearing we would wear a "we love Lennart" t-shirt during FOSDEM in order to support his new Polypaudio project, which was heavily criticised at the time. Spoiler: we never did the t-shirt thing, but Polypaudio was renamed PulseAudio and succeeded without our support.

Besides offering beers to developers, I reported all the bugs I encountered and worked hard with Ubuntu developers. If I remember correctly, I would, at some point, even become the head of the "bug triaging team" (if such a position ever existed; it might be that someone called me that to flatter my ego). Selected as a student for the Google Summer of Code, I created a Python client for Launchpad called "Conseil". Launchpad had just replaced Bugzilla but, as I found out after starting Conseil, was not open source and had no API. I learned web scraping and was forced to update Conseil each time something changed on the Launchpad side.

The most important point about Bugzilla and Launchpad was the famous bug #1. Bug #1, reported by sabdfl himself, was about breaking the Microsoft monopoly. It could be closed once any computer user could freely choose which operating system to use on a newly bought computer.

The very first book about Ubuntu

Meanwhile, I was contacted by a French publisher who stumbled upon my newly created blog that I mainly used to profess my love of Ubuntu and Free Software. Yes, the very blog you are currently reading.

That French publisher had contracted two authors to write a book about Ubuntu and wanted my feedback about the manuscript. I didn’t really like what I read and said it bluntly. Agreeing with me, the editor asked me to write a new book, using the existing material if I wanted. But the two other authors would remain credited and the title could not be changed. I naively agreed and did the work, immersing myself even more in Ubuntu.

The result was « Ubuntu, une distribution facile à installer », the very first book about Ubuntu. I hated the title. But, as I have always dreamed of becoming a published author, I was proud of my first book. And it had a foreword by Mark Shuttleworth himself.

I updated and rewrote a lot of it in 2006, changing its name to "Ubuntu Efficace". A later version was published in 2009 as "Ubuntu Efficace, 3ème édition". During those years, I was wearing Ubuntu t-shirts. In my room, I had a collection of CD-ROMs with each Ubuntu version (I would later throw them, something I still regret). I bootstrapped "Ubuntu-belgium" at FOSDEM. I had ploum@ubuntu.com as my primary email on my business card and used it to look for jobs, hoping to set the tone. You could say that I was an Ubuntu fanatic.

The very first Ubuntu-be meeting. I took the picture and gimped a quick logo.

Ironically, I was never paid by Canonical and never landed a job there. The only money I received for that work was from my books or from Google through the Summer of Code (remember: Google was still seen as a good guy). I would later work for Lanedo and be paid to contribute to GNOME and LibreOffice. But never to contribute to Ubuntu nor Debian.

In the Ubuntu and GNOME community with Jeff Waugh

Something which was quite new to me was that Ubuntu had a "community manager". At the time, it was not the title of someone posting on Twitter (which didn’t exist). It was someone tasked with putting the community together, with being the public face of the project.

Jeff Waugh is the first Ubuntu community manager I remember, and I was blown away by his charisma. Jeff came from the GNOME project and one of his pet issues was making computers easier. He started a trend that would, way later, give birth to the infamous GNOME 3 design.

You have to remember that the very first fully integrated desktop on Linux was KDE. And KDE had a very important problem: it was relying on the Qt toolkit which, at the time, was under a non-free license. You could not use Qt in a commercial product without paying Trolltech, the author of Qt.

GNOME was born as an attempt by Miguel de Icaza and Federico Mena to create a KDE-like desktop using the free toolkit created for the Gimp image editor: Gtk.

This is why I liked to make the joke that the G in GNOME stands for Gtk, that the G in Gtk stands for Gimp, that the G in Gimp stands for GNU and that the G in GNU stands for GNU. This is not accurate as the G in GNOME stands for GNU but this makes the joke funnier. We, free software geeks, like to have fun.

Like its KDE counterpart, GNOME 1 was full of knobs and whistles. Everything could be customised to the pixel and to the millisecond. Jeff Waugh often made fun of it by showing the preferences boxes and asking the audience who wanted to customise a menu animation to the millisecond. GNOME 1 was less polished than KDE and heavier than very simple window managers like Fvwm95 or Fvwm2 (my WM of choice before I started my quest for the perfect desktop).

Screenshot from my FVWM2 config, which is still featured on fvwm.org, 21 years later.

With GNOME 2, GNOME introduced its own paradigm and philosophy: GNOME would be different from KDE by being less customisable but more intuitive. GNOME 2 opened a new niche in the Linux world: a fully integrated desktop for those who don’t want to tweak it.

KDE was for those wanting to customise everything. The most popular distributions featured KDE: Mandrake, Red Hat, Suse. The RPM world. There was no real GNOME centric distribution. And there was no desktop distribution based on Debian. As Debian was focused on freedom, there was no KDE in Debian.

Which explains why GNOME + Debian made a lot of sense in my mind.

As Jeff Waugh had been the GNOME release manager for GNOME 2 and was director of the GNOME board, having him as the first Ubuntu community manager set the tone: Ubuntu would be very close to GNOME. And it is exactly what happened. There was a huge overlap between GNOME and Ubuntu enthusiasts. As GNOME 2 would thrive and get better with each release, Ubuntu would follow.

But some people were not happy. While some Debian developers had been hired by Canonical to make Ubuntu, others feared that Ubuntu was a kind of Debian fork that would weaken Debian. Similarly, Red Hat had been investing a lot of time and money in GNOME. I've never understood why: Qt was released under the GPL in 2000, making KDE free, yet Red Hat wanted to offer both KDE and GNOME. It went as far as tweaking both of them so they would look perfectly identical when used on Red Hat Linux. Red Hat employees were the biggest pool of contributors to GNOME.

There was a strong feeling in the atmosphere that Ubuntu was piggybacking on the work of Debian and Red Hat.

I didn’t really agree as I thought that Ubuntu was doing a lot of thankless polishing and marketing work. I liked the Ubuntu community and was really impressed by Jeff Waugh. Thanks to him, I entered the GNOME community and started to pay attention to user experience. He was inspiring and full of energy.

Drinking a beer with Jeff Waugh and lots of hackers at FOSDEM. I'm the one with the red sweater.

Benjamin Mako Hill

What I didn’t realise at the time was that Jeff Waugh’s energy was not in infinite supply. Mostly burned out by his dedication, he had to step down and was replaced by Benjamin Mako Hill. That’s, at least, how I remember it. A quick look at Wikipedia told me that Jeff Waugh and Benjamin Mako Hill were, in fact, working in parallel and that Jeff Waugh was not the community manager but an evangelist. It looks like I’ve been wrong all those years. But I choose to stay true to my own experience as I don’t want to write a definitive and exhaustive history.

Benjamin Mako Hill was not a GNOME guy. He was a Debian and FSF guy. He was focused on the philosophical aspects of free software. His intellectual influence would prove to have a long-lasting effect on my own work. I remember fondly that he introduced the concept of "anti-features" to describe the fact that developers sometimes work to do something against their own users. They spend energy to make the product worse. Examples include advertisements in apps or limited-version software. But it is not limited to software: Benjamin Mako Hill took the example of benches designed so you can't sleep on them, to prevent homeless people from taking a nap. It is obviously more work to design a bench that prevents napping. The whole anti-feature concept would be extended and popularised twenty years later by Cory Doctorow under the term "enshittification".

Benjamin Mako Hill introduced a code of conduct in the Ubuntu community and made the community very aware of the freedom and philosophical aspects. While I never met him, I admired and still admire Benjamin. I felt that, with him at the helm, the community would always stay true to its ethical value. Bug #1 was the leading beacon: offering choice to users, breaking monopolies.

Jono Bacon

But the one who would have the greatest influence on the Ubuntu community is probably Jono Bacon, who replaced Benjamin Mako Hill. Unlike Jeff Waugh and Benjamin Mako Hill, Jono Bacon had no Debian or GNOME background. As far as I remember, he was mostly unknown in those communities. But he was committed to communities in general and had very good taste in music. I'm forever grateful to him for introducing me to Airbourne.

With what felt like an immediate effect, but probably took months or years, the community mood switched from engineering/geek discussions to a cheerful, all-inclusive community.

It may look great on the surface but I hated it. The GNOME, Debian and early Ubuntu communities were shared-interest communities. You joined the community because you liked the project. The communities were focused on making the project better.

With Jono Bacon, the opposite became true. The community was great, and people joined the project because they liked the community, the sense of belonging. Ubuntu felt more like a church every day. The project was seen as less important than the people. Some aspects would not be discussed openly, so as not to hurt the community.

I felt every day less and less at home in the Ubuntu community. Decisions about the project were taken behind closed doors by Canonical employees and the community transformed from contributors to unpaid cheerleaders. The project to which I contributed so much was every day further away from Debian, from freedom, from openness and from its technical roots.

But people were happy because Jono Bacon was such a good entertainer.

Something was about to break…

(to be continued)

Subscribe by email or by rss to get the next episodes of "20 years of Linux on the Desktop".

I’m currently turning this story into a book. I’m looking for an agent or a publisher interested in working with me on this book and on an English translation of "Bikepunk", my new post-apocalyptic-cyclist typewritten novel, which sold out in three weeks in France and Belgium.

I’m Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.

I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!

December 09, 2024

The writer’s anger

REMINDER: I will be in Louvain-la-Neuve on Tuesday 10 December at 7pm at La Page d’Après for my last meet-up of the year.
As I explain later in this post, the success of Bikepunk caught the distributor off guard and the book ended up unavailable in many bookshops. The situation should now be resolved. If your bookseller cannot get it quickly, order before 10-11 December on the PVH website. You should receive it before Christmas. A huge thank you for your patience!

An inextinguishable anger

About twenty years ago, I went to a big Brussels hotel for an evening of poetry readings. I called the lift. When it opened in front of me, I suddenly found myself face to face with Jean-Marie Le Pen and his bodyguard. I recognised him without any hesitation, without any doubt.

I was petrified.

Without thinking, by reflex, I took a step back and refused to get into the lift. While climbing the stairs, I wondered at length whether that reflex had been the right one. Whether that person deserved such attention, whether I should have ignored him completely and acted as if nothing had happened. I also wondered whether he was going to the same evening as me, an evening organised by the widow of a famous Belgian resistance fighter who had been a prisoner at Dachau.

At the end of the evening, chatting with this adorable and very interesting old lady, I handed her a collection of my own poems. She took a very long time to read my poems, to reread them, before turning to me with a small shake of the head:

— You are not a poet.

A cold shower.

— You have too much to say. Too much anger, too many subjects to tackle. You are not a poet. You are a writer!

That encounter regularly comes back to me, and I can only acknowledge how right her analysis was. I want to say too much, all the time. Sometimes I let my anger flow, raw, wild, violent and aggressive. Sometimes I try to channel it or disguise it with rational analysis, humour or even hypocrisy. The fact remains that all I do is add an “é” in front of each of my cris (cries) to turn them into écrits (writings).

Every écrit is a cri, every cry is a piece of writing. My whole life is nothing but the management of this boiling anger that drives me (and that my wife is teaching me to control).

That is why I was incredibly touched to discover that Terry Pratchett, one of my role models, was a deeply, desperately angry man.

Another anecdote I remember from the very enjoyable Terry Pratchett biography is that, on the verge of burn-out, he finally agreed to take a six-month sabbatical to rest from writing. When he came back, his agent asked him what he had done with those six months. “I wrote two books,” Terry Pratchett replied.

Anyone can write. The writer is the one who cannot help writing. The same goes for blogging. People sometimes ask me why I blog so much. I can only answer: “Because I physically cannot do less.” As I told Eddy Caekelberghs on the RTBF airwaves, if I did not do it, I feel I would explode. I write the way I refuse to get into a lift with Jean-Marie Le Pen: by reflex, out of vital necessity.

I have so many things to write and so few years left to do it. I constantly have to make choices, say “no” to myself, push back the new ideas that assail me. A problem Thurk has called “The Boltzmann Brain”.

The book industry

Writing. Yes, but why? To be read, of course! The Holy Grail, for a non-English-speaking author, is to be translated into English in order to reach the international market. But is that dream market really so wonderful? Publishing is a typical “winner takes all” market, as Jaron Lanier describes in his book “Who Owns the Future?”.

Unless you are a mega-celebrity like Michelle Obama or a franchise author like Tom Clancy, do not expect to sell more than 1,000 or maybe 2,000 books. Worldwide. And, in fact, even if you are a mega-celebrity with millions of followers, success is not guaranteed at all.

So why are there so many books? Because, like startup investors, publishers are looking for the next Harry Potter or the next 50 Shades of Grey, the anomaly that only comes along every 5 or 10 years.

The article also depicts an incredibly morbid dependency on Amazon, with a large share of books’ marketing budgets going to Amazon to “move up in the search results”. It has to be said that one book in two is now bought on Amazon.

So, please, support independent bookshops! Go and browse, order from them, listen to their recommendations. We have already lost too many freedoms to the GAFAM; the idea of losing bookshops terrifies me.

I invite you to join me this Tuesday 10 December, at 7pm at La Page d’Après in Louvain-la-Neuve, for a literary meet-up. I love booksellers! If you are a bookseller, do not hesitate to contact me: I am happy to travel, or to take advantage of trips to pay you a visit.

The unavailability of Bikepunk

I am aware that I am shooting myself in the foot. Because of my publisher’s strategy of minimising interactions with Amazon, of conceding them only the strict minimum, my novel Bikepunk found itself out of stock at the American giant at the very moment I was making several TV appearances. Then out of stock at the bookshops, as the distributor could not keep up with demand. We are at the point where I fear the EPUB version will soon be sold out as well.

I am not going to complain that my book is successful! To the point of seeing a TV programme attach my name to the sentence “The bicycle as the only weapon against the blindness of a society”.

Ploum on LN24, “The bicycle as the only weapon against the blindness of a society”

Or of reading Paris-Match, hardly suspect of leftism, define Bikepunk as an urban rebellion movement “using the bicycle as a symbol of resistance against dominant systems” (sic).

I am at once filled with gratitude towards you for this enthusiasm, for your incredible support, and full of frustration, because there are certainly hundreds of copies of Bikepunk that never found their readers. Hundreds of frustrated or disappointed people. Missed opportunities, because both my publisher and I are trying to step outside the line of Amazon and the large distributors owned by billionaires.

If Bikepunk is not available at your bookshop and you want it for Christmas, order it urgently on the PVH website.

And remember that it is under a free licence. You have the right to copy it and share it! The first freedom is that of the imagination. Free the stories, free the imagination!

On the food we feed our brains

This stranglehold of a few monopolies on the book industry worries me. Many young authors are not promoted, in favour of “safe bets”. If we need to keep rereading Zola or Hugo, we also need to hear new voices, to test new ideas. To avoid uniformity, a book does not need to be a work of genius, grandiose or cult. It can simply offer us, at one moment of our life, a new unconscious perspective. It just needs to be different. Your body is made of what you eat; your mind is made of what you read. Even if you no longer remember it, even if it seemed unimportant, even if you did not finish the book. You are what you read.

Eleven years ago already, I drew an analogy between the food we consume and what we feed our brains.

Cal Newport, the author of “Digital Minimalism”, took up this analogy to describe social networks as factories of “ultra-processed” content. Ultra-processed food is indeed engineered to be irresistible: a very strong taste, salt, fat, a numbing of satiety. In the end, we pounce on Facebook or Instagram posts just as we pounce on a bag of crisps.

What disturbs me in this analogy is the realisation that the problems are identical, but that the people who are aware of one completely ignore the other.

At gatherings of Linux geeks or free-software role-players, coke and cigarettes, for example, are too often normalised and tolerated (something I strongly disapprove of).

But the opposite is just as true: people who care about food, ecology and health concentrate on Instagram, Facebook, TikTok and/or Twitter, feeding a morbid crisis of informational obesity. Suggesting that they communicate via Signal rather than WhatsApp often makes them roll their eyes. The subject does not interest them. Let’s not even mention Mastodon…

There are even “anticapitalist” groups on Facebook!

Coherence rather than perfection

That is why I am particularly happy to work with a publisher that explores, that tests new models, that creates an atmosphere of collaboration among all its authors and whose primary mission is to contribute to the freedom of culture.

Not only are the books under a free licence, but the authors are encouraged to join the Fediverse!

The PVH team fights every day, every hour, to try to distribute books and ideas of freedom without bowing down to the billionaires who try to impose what they are sure to sell or, worse, whatever serves their ideological interests. It is hard, and there are hiccups like the unavailability of Bikepunk. But, you know what? It is starting to work! Be-Bop, the free software developed in-house, is starting to gain momentum.

PVH is not perfect. Nobody is perfect. But we try to keep a coherent line in our fight to promote and spread the culture of the imagination and the philosophy of free software.

Your understanding, your patience, your purchases, your shares are the best support there is. Thank you! Thank you for being part of this adventure and for talking about us around you. You do not soothe my anger, but you help me transform it, channel it, so that it contributes to something bigger, something we will one day be able to be proud of.

As Framasoft puts it so well: the road is long, but the way is free!

I’m Ploum and I’ve just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

December 08, 2024

It’s been 2 years since AOPro was launched and a lot has happened in that time; bugs were squashed, improvements were made and some great features were added. Taking that into account on the one hand, and increasing costs from suppliers on the other: prices will see a small increase from 2025 (exact amounts still to be determined). But rest assured; if you already signed up, you will continue to…

Source

December 06, 2024

Preserving my digital ephemerality

I write my personal diary on a typewriter. Simple sheets of paper that I have bound every year and whose content is nowhere online.

Yet I have the feeling that this content has a much better chance of one day being accessible or even, let’s be crazy, read by my children, my grandchildren and, who knows, further still.

Because, barring a dramatic fire, these writings are almost indestructible. They are easy to find on the shelf in my office. They are easy to read and will remain so without any knowledge other than the French language.

Everything published digitally is a few careless operations away from being permanently erased. A trivial accident can make a storage medium unreadable. A forgotten password can make documents permanently inaccessible. But no need to go that far: if I disappeared tomorrow, who would be able to find anything in the clutter of my hard drive? Even assuming it is not encrypted!

If everything is transitory, is the worst option not to entrust that impermanence to outside companies?

The anti-permanence of proprietary networks

Your history on proprietary social networks can disappear at any moment. I have been shouting this and repeating it everywhere for a long time, but nothing beats a good example.

Since 2024, Strava no longer allows you to share a link. Whether in a post or an activity, the link is deleted.

Worse: all the links in previous posts have been deleted. All your Strava stories have been permanently altered.

Admittedly, Strava is following the enshittification playbook to the letter by restricting its API and its terms of use.

Do not trust any proprietary social network! Do not forget that every proprietary service will one day shut down or suddenly become unusable by your own standards. Or arbitrarily delete your account.

If your only reason for keeping an account on a proprietary service is “because you have a history there”, know that this history does not belong to you. The question is not whether it will disappear, but “when?”. Because it will disappear. That is a certainty.

The humanity and non-commodification of free networks

You might tell me that free software is not necessarily better. Mastodon, for example, does not, as far as I know, let you export your entire personal history.

But, since it is free software, nothing stands in the way of implementing tools that do this kind of thing. Nothing prevents us from finding or creating alternatives.
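As a rough illustration of why such tools are perfectly feasible, here is a minimal sketch against the public Mastodon REST API using curl; the instance name, account id and token below are placeholders, and a real archiving tool would also have to follow the pagination headers.

# fetch the latest statuses of an account (placeholder instance, id and token)
curl -s "https://mastodon.example/api/v1/accounts/123456/statuses?limit=40" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN" > statuses.json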

It is completely imperfect but, at least, we are dealing with human beings.

A lovely testimony from Aemarielle, a watercolourist who arrived on Mastodon in 2022. In short, Mastodon is much better for humans.

But influencers with big metrics are completely lost: there is no algorithm, and therefore no way to exploit one to grow their audience. There are no thousands of bots and abandoned accounts inflating follower counts. The population of the Fediverse also tends to ignore, or even criticise, advertising content and needlessly clickbaity headlines.

The skills that some people developed over fifteen years on Twitter are therefore useless, even counterproductive, on Mastodon!

Some are not ready for this kind of platform. For a while they had the luck of being favoured by the Twitter/X/Instagram algorithms and, as a result, they cling at all costs to those platforms, to that model.

On those platforms, you are the merchandise, as X judiciously reminds us. Indeed, the satirical magazine The Onion wants to buy the far-right media outlet InfoWars (which is very funny). But X points out that they own the InfoWars X account and will not transfer it to The Onion.

Free software and science

Free and durable access to data and algorithms is not only of historical and practical value; it is also essential to the scientific principle. If, when reading your paper, it is not possible to reproduce your results because one and/or the other is missing, it is no longer science, just “personal branding”.

Marketing and the “protection of intellectual property” are two worms gnawing away at brains, including those of the sharpest scientists.

By the way, I learned that the words “science” and “shit” share the same root. And that “nice” comes from the negation of science and originally meant “ignorant” before subtly evolving towards “kind”. A bit silly, in short…

I love etymology.

The free-software solution

Biblys, the bookshop management software, is going free. A very nice testimony from its author.

Free software is not utopian, it is not better, it is not idealistic: today, it is indispensable!

Thinking about your own disappearance with git

If the durability of your online data matters to you, you have to think about it and prepare for it. This is one of the major reasons that pushed me to simplify this blog and turn it into a simple directory filled with plain-text files that anyone can copy in full with a single git command.

To get a copy of my entire blog on your computer, including the software that generates it, just type, in a terminal:

git clone https://git.sr.ht/~lioploum/ploum.net

I am even considering putting the sources of my books in this repository. I need to look into sub-repositories (submodules) in git.
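For what it is worth, a submodule is simply a pointer to another repository recorded inside this one; a hypothetical sketch (the book-sources URL and the books path are made up for the example) would look like this:

# record an external repository as a submodule of the blog repository (URL and path are placeholders)
git submodule add https://example.org/book-sources.git books
git commit -m "Add book sources as a submodule"
# readers would then fetch everything in one go:
git clone --recurse-submodules https://git.sr.ht/~lioploum/ploum.net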

Gwit

I am following the development of Gwit, a way of publishing websites through Git. The author of Gwit has in fact taken my site as an example and adapted it to the Gwit format.

It is very technical, but what I think everyone should remember is that it was made possible because the sources and content of my blog are available under a free licence and through a single git command, as mentioned above. The author did not have to ask my permission, did not hesitate. He has the right to do it.

And I am incredibly flattered by it…

The ultimate weapon: RSS

I say it and I repeat it: the solution to the vast majority of our problems, the ultimate social network, is the RSS feed. You follow what you want to follow, without ads, without tracking, in whichever font is most readable for you. Your subscriptions stay confined to your RSS reader.

You just have to learn to use it (and it is much easier than email).

There is a reason the big platforms are trying to kill RSS. There is a reason you should follow this blog via RSS.

Portrait of Aaron Swartz by Bruno Leyval

Aaron Swartz contributed to the RSS standard. By using it (or its successor, Atom), you celebrate his memory and his fight.

Would you like to use a classic online RSS reader? I recommend the FreshRSS offered by Zaclys, or Flus:

Do you prefer something more automated, more modern? Flus is for you.

But there are hundreds of possibilities. And, as Cory Doctorow points out, you can switch from one to another without any problem. You simply export the list of your feeds in a format called OPML and then import it into the new platform.
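For the curious, such an OPML export is nothing more than a small XML file listing your feeds; a minimal sketch written from a shell (the feed URL is a placeholder) looks like this:

# write a one-feed OPML file (the xmlUrl is a placeholder)
cat > my-feeds.opml <<'EOF'
<opml version="2.0">
  <head><title>My subscriptions</title></head>
  <body>
    <outline type="rss" text="Ploum" xmlUrl="https://example.org/feed.xml"/>
  </body>
</opml>
EOF
# most RSS readers have an "import OPML" option that accepts exactly this kind of file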

Thinking about disappearance

Then again, one can do as Bruno, the author of the portrait of Aaron Swartz above, does, and wish to disappear. One can celebrate impermanence.

Preserving and passing on our personal cultural heritage takes thought and organisation. But it is certainly not something to entrust to an advertising multinational.

I’m Ploum and I’ve just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

December 02, 2024

The conspiracy of ignorant pride

Scientists, science communicators and teachers devote their lives to fighting ignorance. But ignorance is not really the problem. What is dangerous is when it camouflages itself. When it turns into confidence.

To a perfect ignorant, camouflaged ignorance is indistinguishable from knowledge, even from expertise. Besides, the expert will always say: “It’s complicated, I don’t have a ready-made answer!” whereas the camouflaged ignorant will reply: “It’s very simple! I have the truth, let me explain it to you!”

By design, tools like ChatGPT are camouflaged ignorants. They were literally built to make you believe they know when they do not. To camouflage their ignorance under an excess of confidence, something that has worked very well for centuries; politicians and multinational CEOs are the proof.

That is, by the way, why those people are convinced that artificial intelligence can replace workers. Because their own job consists of pretending to understand things, and they have pretended so much that they have forgotten that some positions, subordinate though they may be, require actual understanding.

Fear as a marketing tool

An excellent analogy from Marcello Vitali-Rosati. ChatGPT uses LLM techniques to produce a conversational automaton. Using it for anything else (for example for research, summaries, translations, analyses…) amounts to trying to drive a chainsaw down the motorway on the pretext that it has a combustion engine.

More generally, it is frightening how people who understand nothing are so afraid of admitting that they understand nothing that they rush into every piece of marketing nonsense. For example, politicians proudly encouraging the training of young people “in AI” (what on earth does “training in AI” even mean?), or certain professors in the social sciences proudly showing a ChatGPT window to their students to analyse a text or prepare the topics to cover in a class.

When I ask people what pushes them to use AI, the most frequent answer is: “So as not to be left behind.” Fear.

What certainly strikes me most is that all the people I have met who are all-in on “AI” are completely stunned that I am critical of it myself. They have never heard the slightest criticism. They have never imagined that one could be critical. We are talking about doctors, lawyers, bankers, business leaders: for these people, AI is an unquestionable given.

The fact that I can be critical surprises them. They are destabilised. They did not think it was possible. But perhaps it is because I do not understand it very well.

So I deliver the final blow: I say that I did my master’s thesis in this field and that I teach in the computer science department of the polytechnic faculty. I know, it is an argument from authority of middling relevance. But it lets me shorten the conversation without giving my “Pouet Pouet Coin Coin” talk every single time.

On the danger of camouflaged ignorance

The worst category of people is that of technological ignoramuses who claim not to be. Those who have elevated the camouflage of their ignorance to the rank of core competence. This pretence of knowledge allows them not to listen to those who have real knowledge, or even to look down on them and call them “geeks”. There is a trick to recognising them: they talk about “new technologies”, a slip that reveals how new and unexplored these technologies, which they pretend to understand through LinkedIn shares, still are in their minds.

ChatGPT is the ultimate culmination of dumbing-down by the tool. It requires no interface, no learning. Anyone who can hit the keys of a keyboard can use it.

When a tool is useful but complex to use, the belief is that it must be made simpler, more accessible. That is partly true, but only up to a point.

Every tool requires learning in order to understand what it fits into. To get a driving licence, the first thing that gets checked is your knowledge of the highway code, knowledge that is completely independent of and orthogonal to your ability to grasp the mechanics of a car.

By completely removing the learning barrier, your tool becomes a free-for-all. People use it without thinking. You cannot learn to use ChatGPT: it is random by nature anyway, and it gets updated. You can use it every day for ten years without ever getting “better”. Just as you do not learn to watch television. The very principle is stupid.

Ignorance creates abstraction, which creates complexity, which feeds ignorance

In computing, the abstraction layers created to simplify the interface add complexity and opacity at the technical level. They prevent learning. Those who protest against the misuse of their work are treated as reactionaries, mocked and excluded from their own system.

When, in a real case I experienced 25 years ago, someone sends an email containing a .doc document that itself contains a macro launching an .exe file, which turns out to be a Flash animation with the Flash player bundled in, and that animation is nothing more than a static .jpg image, that person is considered normal.

I am not making this up; it really happened to me.

That person, who had no computing knowledge whatsoever, then claimed that I was the only one who had any trouble opening their photo.

So anyone unable to see the image was supposed to consider it their own fault. To consider it normal, at the time, to have the version of Microsoft Outlook that automatically opens the version of Word that automatically runs macros. To consider that the geek (me) who, faced with this gaping security hole, had configured the family computer to forbid that heresy was "an alternative type with stuff that never works".

To be normal, then, you have to be stupid. You have to bend down to the level of the most stupid. It is a frantic race to the bottom to blend into the herd of the dimmest while claiming the opposite.

Companies count on this. Without this instinct for herd idiocy, Microsoft Windows would not have held a monopoly for decades (Microsoft Windows being the archetype of "It's terrible, it doesn't work, but that's how everyone does it").

Appearances are so deceptive that to try to look like the smartest one, you have to be the most stupid of the bunch. To try to look like the richest, you have to impoverish yourself by buying incredibly expensive, useless stuff.

Following blindly for fear of looking ignorant

In 2006, as a young engineer looking for work at a job fair, I sit down at a recruiter's table and hand him my CV.

— Aha, Linux. That's fine for students. But to do real work, you have to be serious, you have to use Microsoft.
— I think Linux is very serious, and I fully intend to work in that field, where I have recognized expertise.
— No, you have to be serious. Bill Gates is the richest man in the world. If you want to get rich, you have to follow him, not fight him!

I went blank. At the time, I did not yet believe that people older, more experienced, and better paid than me could be that stupid. I first tried to understand:

— But by following him, you are giving him money that makes him rich in the first place. Better to compete with him.
— Aha, you're young, you don't know the reality of business yet.

I stood up, took my CV back out of the recruiter's hands and said:

— I'm taking my CV back; I don't think it will be of any use to you. Goodbye.

That anecdote stayed with me for a long time, making me doubt my own understanding. But today I have the experience to recognize that, yes, that guy was a complete moron. Giving money to a billionaire in the hope that it will make you rich is utterly idiotic.

And yet it is incredibly common, because nobody dares to say "I don't understand how I can get rich", and everybody thinks "I'll blindly obey someone rich because I want to be like them" (which is stupid, we agree).

The ecology of stupidity

Another thing that is completely idiotic as soon as you ask yourself how it actually works: carbon offset schemes.

It seems obvious that these schemes take us for fools. Fortunately, some people are starting to realize it. The University of Exeter investigated carbon credits and concluded, unsurprisingly, that their impact is either zero or negative. In short, in the best case they achieve nothing, and more often they pollute even more than doing nothing at all.

But then, these are just a minority of intellectuals who reason things through. Sure, they are right. But being right is not a good thing for individual success.

The goal is not to be intelligent, but to convince millions of people that you are.

The goal of a CEO is not to make a good decision, but to convince their audience that they have made one.

The goal of a politician is not to have an opinion on a subject, but to make voters believe they share theirs.

The peculiarity of intelligent people is that they doubt. Which makes them not very convincing…

Intelligence condemns you to being one of the losers.

I'm Ploum and I've just published Bikepunk, an eco-cycling fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writing in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

November 27, 2024

Give the gift of literary escape!

The end of the year is fast approaching and, with it, the obligation to give gifts to family, in-laws, and friends. It is hard to escape this consumerist pressure without coming across as a boor. How about trying a reusable gift?

But isn't the best reusable gift a book, which can be shared, lent, lost, and found again? So here are a few book ideas to inspire you and get a head start before the shopping centres are packed!

Unless otherwise noted, the recommended novels are under a free licence! Most of the authors mentioned are on Mastodon, and all of them are great people I thoroughly enjoy spending time with in real life!

The "Cyclisteurs" pack

To give wrapped in a fluorescent yellow safety vest with a pair of cycling gloves:

The "Ecology, protection of the commons and of frolicking squirrels" pack

To gently tease, without seeming to, the uncle who drives an SUV and supports the construction of the new motorway interchange:

The "Adblock, anti-capitalism and resistance" pack

Dear trade-unionist Auntie, here is some reading for the picket lines of 2025, because we are going to need you, I can feel it:

The anti-GAFAM pack

Dear cousin René, in addition to a donation in your name to Framasoft and a migration of your email account to an ethical provider, I'm giving you these fine reads.

The "LOL, but a slightly hollow one" pack

I can tell you're tense; here is something to lighten the mood in these troubled times…

The "History, with a capital F as in Fantastique" pack

For you who particularly love history, the grand kind: here it is, but each time with a light, surprising touch of the fantastic.

The "not for the faint of heart" pack

Don't come saying you weren't warned!

The box sets

To discover science fiction as seen by a Swiss author, a French author, and a Belgian author, all in a magnificent silver collector's box set. Perfect for bringing your SF culture up to date.

To discover three very different and very complementary fantasy universes in a magnificent red box set. Perfect for those looking for new horizons.

Order quickly!

Your bookshop may not have some of these books in stock.

If you place an order before December 2 on the PVH website, you are not only certain to receive your parcels before Christmas, but 20% of the proceeds will also go to a brand-new grant supporting artists who create under free licences. One more reason to order quickly!

Find the authors on Mastodon…

To add a personal touch, offer to help your lucky recipient create a Mastodon account and get directly in touch with the authors!

Happy reading and happy winter solstice!

I'm Ploum and I've just published Bikepunk, an eco-cycling fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writing in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

November 26, 2024

The deadline for talk submissions is rapidly approaching! If you are interested in talking at FOSDEM this year (yes, I'm talking to you!), it's time to polish off and submit those proposals in the next few days before the 1st:

  • Devrooms: follow the instructions in each CfP listed here.
  • Main tracks: for topics which are more general or don't fit in a devroom, select 'Main' as the track here.
  • Lightning talks: for short talks (15 minutes) on a wide range of topics, select 'Lightning Talks' as the track here.

For more details, refer to the previous post.

November 25, 2024

Last month we released MySQL 9.1, the latest Innovation Release. Of course, we released bug fixes for 8.0 and 8.4 LTS but in this post, I focus on the newest release. Within these releases, we included patches and code received by our amazing Community. Here is the list of contributions we processed and included in […]

November 15, 2024

With great pleasure we can announce that the following projects will have a stand at FOSDEM 2025 (1st & 2nd February). This is the list of stands (in alphabetic order): 0 A.D. Empires Ascendant AlekSIS and Teckids AlmaLinux OS CalyxOS Ceph Chamilo CISO Assistant Cloud Native Computing Foundation (CNCF) Codeberg and Forgejo coreboot / flashprog / EDKII / OpenBMC Debian DeepComputing's DC-ROMA RISC-V Mainboard with Framework Laptop 13 DevPod Digital Public Goods Dolibarr ERP CRM Drupal Eclipse Foundation Fedora Project FerretDB Firefly Zero FOSSASIA Free Software Foundation Europe FreeBSD Project FreeCAD and KiCAD Furi Labs Gentoo Linux & Flatcar…

It's been a while since I last blogged about one of my favorite songs. Even after more than 25 years of listening to "Tonight, Tonight" by The Smashing Pumpkins, it has never lost its magic. It has aged better than I have.

Installation instructions for end users and testers

We will use DDEV to set up and run Drupal on your computer. DDEV handles all the complex configuration by providing pre-configured Docker containers for your web server, database, and other services.

To install DDEV, you can use Homebrew (or choose an alternative installation method):

$ brew install ddev/ddev/ddev

Next, download a pre-packaged zip-file. Unzip it, navigate to the new directory and simply run:

$ ddev launch

That's it! DDEV will automatically configure everything and open your new Drupal site in your default browser.

Installation instructions for contributors

If you plan to contribute to Drupal CMS development, set up your environment using Git to create merge requests and submit contributions to the project. If you're not contributing, this approach isn't recommended. Instead, follow the instructions provided above.

First, clone the Drupal CMS Git repository:

$ git clone https://git.drupalcode.org/project/drupal_cms.git

This command fetches the latest version of Drupal CMS from the official Git repository and saves it in the drupal_cms directory.

Drupal CMS comes pre-configured for DDEV with all the necessary settings in .ddev/config.yaml, so you don't need to configure anything.

So, let's just fire up our engines:

$ ddev start

The first time you start DDEV, it will set up Docker containers for the web server and database. It will also use Composer to download the necessary Drupal files and dependencies.

The final step is configuring Drupal itself. This includes things like setting your site name, database credentials, etc. You can do this in one of two ways:

  • Option 1: Configure Drupal via the command line
    $ ddev drush site:install

    This method is the easiest and the fastest, as things like the database credentials are automatically set up. The downside is that, at the time of this writing, you can't choose which Recipes to enable during installation.

  • Option 2: Configure Drupal via the web installer

    You can also use the web-based installer to configure Drupal, which allows you to enable individual Recipes. You'll need your site's URL and database credentials. Run this command to get both:

    $ ddev describe

    Navigate to your site and step through the installer.

Once everything is installed and configured, you can access your new Drupal CMS site. You can simply use:

$ ddev launch

This command opens your site's homepage in your default browser — no need to remember the specific URL that DDEV created for your local development site.

To build or manage a Drupal site, you'll need to log in. By default, Drupal creates a main administrator account. It's a good idea to update the username and password for this account. To do so, run the following command:

$ ddev drush uli

This command generates a one-time login link that takes you directly to the Drupal page where you can update your Drupal account's username and password.

That's it! Happy Drupal-ing!

November 13, 2024

Hyperconnection, addiction and obedience

On hyperconnection

We are now connected everywhere, all the time. I call this "hyperconnection" (and it does not necessarily go through screens).

Sometimes I try to convince myself that my personal addiction to this hyperconnection is mostly tied to my geek side, that I cannot generalize from my own case.

And then, when I ride my bike, I notice how many pedestrians do not hear my bell, do not see me coming (even head-on), do not step aside and who, when they finally register my presence (which in some cases requires a tap on the shoulder), look completely dazed, as if I had just pulled them out of a parallel universe.

And then I see that mother in a waiting room, whose two-year-old daughter tries in vain to get her attention: "Look, mummy! Look!"

And then I remember that other mother who had put her little girl on a parapet with a several-metre drop below, just to take a pretty picture for Instagram.

And then I see that businessman in a sharp suit, looking very serious, who pulls out his phone to line up a few virtual fruits during the thirty seconds he waits for his coffee or the fifteen seconds of a red light at a crossroads.

And then, on my bike, I try in vain to catch the eye of those drivers who, eyes glued to their screens, endanger the lives of their own children sitting in the back.

It is impossible to judge each individual anecdote, because there are plenty of justifications that are, in some cases, entirely valid. A mother may simply be exhausted from giving her child her full attention all day long. She has the right to think about something else. A pedestrian may be absorbed in an important phone conversation or in their thoughts.

The individual has that right. But the problem seems to me to be a collective one.

In the accounts I gather, the hardest part for those who manage to disconnect is realizing how addicted everyone else is. It is particularly striking in couples.

If both partners are addicted, neither suffers. But let one of them try to regain a little mental freedom and they discover that their spouse is not listening to them. Is inattentive. Gritty describes here the pain of his disconnection, noticing that his wife no longer listens to him, that she is constantly listening to audio messages. He had never noticed the problem before disconnecting himself.

I think of our children. We worry about the impact of screens on our children. But isn't it above all the impact of hyperconnection on the parents that rubs off on the children?

Negotiating with machines

What if this addiction were just an adaptation to our environment? Because, as Gee describes very well, we now have to spend our time adapting to machines, negotiating with their absurd behaviour.

Negotiating is when you want the machine to do something and it refuses. But there is worse. Machines give orders. Notifications.

There are of course the smartphone notifications that demand you drop everything to obey them. I have just watched my plumber, on all fours against an open wall, both hands busy sliding pipes into a conduit, contort himself to answer the phone. A satisfaction survey. Which he answered calmly for several minutes, despite his uncomfortable position.

Beyond smartphones, every appliance now tries to impose its own way of working. My microwave beeps in a deeply annoying way when it has finished, until the moment someone opens the door. If you use the two minutes your soup takes to reheat to go to the toilet, you are in for a beep echoing through the whole house the entire time. So you have to wait and obey patiently. It is even more absurd with the washing machine and the tumble dryer. It is not as if it were urgent! Honourable mention to my dishwasher, which does not beep at all and simply pops its door open to say it has finished, which is a very good idea and proves it is possible not to pester the user. No doubt a mistake that will be fixed in the next version.

The prize goes to my, thankfully former, induction hob, which could not stand having any object placed on it. And which was so sensitive that the slightest spot of grease or moisture triggered a continuous, extremely irritating noise to say: "Clean me, human! Right now!" Yes, even at three in the morning, if a fly had landed on it or if it had suddenly detected a corner of the cleaning sponge.

Note that the same hob would switch itself off when it detected a drop of liquid. Yes, even in the middle of cooking. And while cooking, a splash of grease or water is rather common.

— Clean me, human!
— Finish cooking my meal first!
— No, clean me first.
— You're a cooktop, you can survive a drop of water on your surface.
— Clean me!
— But you're boiling hot, I can't clean you, I have to let you cool down first.
— That's your problem, clean me!
— Then at least stop beeping while you cool down.
— CLEAN ME, HUMAN!

Non-smart tools are now a luxury.

Addiction to self-satisfaction

When you are an engineer, or have technical skills, you know how these contraptions work. When, like me, you have worked in industry, you can even see the dozens of stupid decisions made by petty managers that led to what is a disaster of inefficiency. Negotiating for four minutes with a machine to do something that should take one second is absurd, infuriating, irritating.

But among people for whom technology is a kind of black magic, I often observe a satisfaction, even a pride, in managing to work around perfectly arbitrary limitations.

People are proud! So proud of pulling off something absurd and counter-intuitive that they boast about it, or even offer to teach it to others. They make YouTube videos showing how they enable an option on their iPhone or how they write a ChatGPT prompt. They become addicted to that pride, that satisfaction, that false sense of being in control of technology.

They dive into hyperconnection while looking down on poor geeks like me "who don't know how to adapt to new technologies"; they obey like sheep the siren calls of marketing, the commercial decisions that impose yet another proprietary interface on them. They proclaim themselves "geeks" because they spend their day on WhatsApp and Fruit Ninja.

They think they are learning; they are merely obeying.

Humans are addicted to blind obedience. They like submitting to an opaque totalitarian power, on condition that they get brief moments where they are given the illusion of holding power themselves.

And on one thing, they are right: the poor rebels fighting for a freedom nobody wants are misfits.

I'm Ploum and I've just published Bikepunk, an eco-cycling fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writing in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

November 07, 2024

Literary events in Paris and Louvain-la-Neuve, and a small contribution to the commons

Paris, November 14

This Thursday, November 14 at 7 pm, I will be taking part in a literary event in Paris, at the Librairie Wallonie-Bruxelles. It's a chance to get a copy of Bikepunk signed to put under the Christmas tree.

Louvain-la-Neuve, December 10

For the Belgians, the literary event will take place on Tuesday, December 10 at 7 pm in my home turf of Louvain-la-Neuve, at La Page d'Après, a bookshop that smells delightfully of bookshop. Sign up now by sending them an email.

I'll come and sign books near you…

For Brussels readers who would like a signed copy before Christmas and cannot make it to Louvain-la-Neuve, I suggest picking up a copy of Bikepunk at the Chimères bookshop and leaving the book there on deposit with a little note describing the dedication you would like. I am due to stop by the bookshop at the end of November and will sign the waiting books so you can pick them up later.

Of course, the French-speaking world is not limited to Paris, Brussels, and Louvain-la-Neuve. But I go where I am invited. Neither my publisher nor I have a press agent in France to put us in touch with French media and arrange signing sessions in bookshops across the country.

And this is where you, dear readers, can help us!

By talking about the book with your bookseller and, if they are interested, putting us in touch so we can set a date (I try, of course, to combine trips). I will come with my typewriter and my stylo-ploum. If you have contacts in independent and/or cycling media, tell them about us!

Thank you for posting photos of your bikes with the #bikepunk tag. It warms my heart every time! (Thanks to Vincent Jousse for starting the trend!)

On the importance of the commons

Right, now that the administrative matters are out of the way, let's talk seriously: an important aspect of Bikepunk that I have not yet mentioned is that it is under a free licence (CC By-SA). It is therefore, as of now, part of the commons. I have to admit that, unfortunately, this does not seem to interest the media.

The commons are a concept that is sometimes hard to grasp, one that capitalism tries to render invisible and yet one that is utterly indispensable. Without ever being named, the commons never appeared to me more clearly than in the novel "Place d'âmes" by Sara Schneider (which is, of course, also under a free licence).

I know nothing about the history of the Swiss Jura. The subject does not interest me in the slightest. It was out of pure friendship with Sara that I started her "Place d'âmes", somewhat reluctantly. Except that I was grabbed, sucked in. The question that drives the book is only superficially about the Jura. In reality, it is about protecting our commons. About protecting our rebels, our revolutionaries, our witches, who themselves protect our commons.

A poetic and topical historical uchronia, essential reading. Through the history of the Jura, it is the entire planet Earth that Sara describes.

Incidentally, while we are on the subject of common goods, Bruno Leyval is starting to put his archives under the Art Libre licence. Because art is a grammar, a vocabulary. Making it a common good means giving everyone the power to appropriate it, to modify it, to refine their perception of the world as an active citizen rather than a mere consumer.

Use these images, modify them!

Bruno is, among other things, the author of the Bikepunk cover and of the illustration that adorns this blog. They are under a free licence. Many of you have asked for t-shirts (we're working on it, promise, it will take a little while).

But you know what? You don't have to wait for us. The images, like the book, are in the commons.

They now belong to you as much as to me or to Bruno…

I'm Ploum and I've just published Bikepunk, an eco-cycling fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writing in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

November 04, 2024

A number of years ago, my friend Klaas and I made a pact: instead of exchanging birthday gifts, we'd create memories together. You can find some of these adventures on my blog, like Snowdonia in 2019 or the Pemi Loop in 2023. This time our adventure led us to the misty Isle of Skye in Scotland.

Each year, Klaas and I pick a new destination for our outdoor adventure. In 2024, we set off for the Isle of Skye in Scotland. This stop was near Glencoe, about halfway between Glasgow and Skye.

For a week, we lived out of a van, wild camping along Scotland's empty roads. Our days were filled with muddy hikes, including the iconic Storr and Quiraing trails. The weather was characteristically Scottish – a mix of fog, rain, and occasional clear skies that revealed breathtaking cliffs and rolling highlands. Along the way, we shared the landscape with Highland cows and countless sheep, who seemed unfazed by our presence.

Our daily routine was refreshingly simple: wake up to misty mornings, often slightly cold from sleeping in the van. Fuel up with coffee and breakfast, including the occasional haggis breakfast roll, and tackle a hike. We'd usually skip lunch, but by the end of the day, we'd find a hearty meal or local pub. We even found ourselves having evening coding sessions in our mobile home; me working on my location tracking app and Klaas working on an AI assistant.

We hiked the Quiraing through mud and wind, with Highland cows watching us trudge by. My favorite part was wandering across the open highlands, letting the strong wind push me forward.

Though we mostly embraced the camping lifestyle, we did surrender to civilization on day three, treating ourselves to a hotel room and much-needed shower. We also secretly washed our camping dishes in the bathroom sink.

At one point, a kind park ranger had to kick us out of a parking lot, but decided to sprinkle some Scottish optimism our way. "The good news is that the weather will get better…", he said, watching our faces light up. Then came the punchline, delivered with the timing of a seasoned comedian: "… in April."

Would April have offered better weather? Perhaps. But watching the fog dance around the peaks of Skye, coding in our van during quiet evenings, and sharing sticky toffee pudding in restaurants made for exactly the kind of memory we'd hoped to create. Each year, these birthday trips deepen our friendship – the hugs get stronger and the conversations more meaningful.

November 02, 2024

We now invite proposals for presentations. FOSDEM offers open source and free software developers a place to meet, share ideas and collaborate. Renowned for being highly developer-oriented, the event brings together some 8000+ geeks from all over the world. The twenty-fifth edition will take place on Saturday 1st and Sunday 2nd February 2025 at the usual location, ULB Campus Solbosch in Brussels. Developer Rooms: for more details about submission to Developer Rooms, please refer to each devroom's Call for Participation listed here. Main Tracks: main track presentations cover topics of interest to a significant part of our audience that…

November 01, 2024

Dear WordPress friends in the USA: I hope you vote and when you do, I hope you vote for respect. The world worriedly awaits your collective verdict, as do I. Peace! Watch this video on YouTube.
October 29, 2024

As announced yesterday, the MySQL Devroom is back at FOSDEM! For people preparing for their travel to Belgium, we want to announce that the MySQL Belgian Days fringe event will be held on the Thursday and Friday before FOSDEM. This event will take place on January 30th and 31st, 2025, in Brussels at the usual […]

October 28, 2024

We are pleased to announce the Call for Participation (CfP) for the FOSDEM 2025 MySQL Devroom. The Devroom will be held on February 2 (Sunday), 2025 in Brussels, Belgium. The submission deadline for talk proposals is December 1, 2024. FOSDEM is a free event for software developers to meet, share ideas, and collaborate. Every year, […]

October 27, 2024

We are pleased to announce the developer rooms that will be organised at FOSDEM 2025. Developer rooms are assigned to self-organising groups to work together on open source projects, to discuss topics relevant to a broader subset of the community, etc. The individual developer room organisers will issue their calls for participation in the next few days. The list below will be updated accordingly. Topic Call for Participation Ada CfP Android Open Source Project CfP APIs: GraphQL, OpenAPI, AsyncAPI, and friends CfP Attestation CfP BSD CfP Cloud Native Databases CfP Collaboration and Content Management CfP Community CfP Confidential Computing舰

October 25, 2024

At Acquia Engage NYC this week, our partner and customer conference, we shared how Acquia's Digital Experience Platform (DXP) helps organizations deliver digital experiences through three key capabilities:

  • Content: Create, manage and deliver digital content and experiences - from images and videos to blog posts, articles, and landing pages - consistently across all your digital channels.
  • Optimize: Continuously improve your digital content and experiences by improving accessibility, readability, brand compliance, and search engine optimization (SEO).
  • Insights: Understand how people interact with your digital experiences, segment audiences based on their behavior and interests, and deliver personalized content that drives better engagement and conversion rates.

Since our last Acquia Engage conference in May, roughly six months ago, we've made some great progress, and we announced some major innovations and updates across our platform.

The Acquia Open DXP platform consists of three pillars - Content, Optimize, and Insights - with specialized products in each category to help organizations create, improve, and personalize digital experiences.

Simplify video creation in Acquia DAM

Video is one of the most engaging forms of media, but it's also one of the most time-consuming and expensive to create. Producing professional, branded videos has traditionally required significant time, budget, and specialized skills. Our new Video Creator for DAM changes this equation. By combining templating, AI, and DAM's workflow functionality, organizations can now create professional, on-brand videos in minutes rather than days.

Make assets easier to find in Acquia DAM

Managing large digital asset libraries can become increasingly overwhelming. Traditional search methods rely on extensive metadata tagging and manual filtering options. Depending on what you are looking for, it might be difficult to quickly find the right assets.

To address this, we introduced Acquia DAM Copilot, which transforms the experience through conversational AI. Instead of navigating complicated filter menus, users can now simply type natural requests like "show me photos of bikes outside" and refine their search conversationally with commands like "only show bikes from the side view". This AI-powered approach eliminates the need for extensive tagging and makes finding the right content intuitive and fast.

Easier site building with Drupal

I updated the Acquia Engage audience on Drupal CMS (also known as Drupal Starshot), a major initiative I'm leading in the Drupal community with significant support from Acquia. I demonstrated several exciting innovations coming to Drupal: "recipes" to simplify site building, AI-powered site creation capabilities, and a new Experience Builder that will transform how we build Drupal websites.

Many in the audience had already watched my DrupalCon Barcelona keynote and expressed continued enthusiasm for the direction of Drupal CMS and our accelerated pace of innovation. Even after demoing it multiple times the past month, I'm still very excited about it myself. If you want to learn more, be sure to check out my DrupalCon presentation!

Improving content ranking with Acquia SEO

Creating content that ranks well in search engines traditionally requires both specialized SEO expertise and skilled content writers - making it an expensive and time-consuming process. Our new SEO Copilot, powered by Conductor, integrated directly into Drupal's editing experience, provides real-time guidance on keyword optimization, content suggestions, length recommendations, and writing complexity for your target audience. This helps content teams create search-engine-friendly content more efficiently, without needing deep SEO expertise.

Improving content quality with Acquia Optimize

We announced the rebranding of Monsido to Acquia Optimize and talked about two major improvements to this offering.

First, we improved how organizations create advanced content policies. Creating advanced content policies usually requires some technical expertise, as it can involve writing regular expressions. Now, users can simply describe in plain language what they want to monitor. For example, they could enter something like "find language that might be insensitive to people with disabilities", and AI will help create the appropriate policy rules. Acquia Optimize will then scan content across all your websites to detect any violations of those rules.

Second, we dramatically shortened the feedback loop for content checking. Previously, content creators had to publish their content and then wait for scheduled scans to discover problems with accessibility, policy compliance or technical SEO - a process that could take a couple of days. Now, they can get instant feedback. Authors can request a check while they work, and the system immediately flags accessibility issues, content policy violations, and other problems, allowing them to fix problems while the content is being written. This shift from "publish and wait" to "check and fix" helps teams maintain higher content quality standards, allows them to work faster, and can prevent non-compliant content from ever going live.

FedRAMP for Acquia Cloud Next

We were excited to announce that our next-generation Drupal Cloud, Acquia Cloud Next (ACN), has achieved FedRAMP accreditation, just like our previous platform, which remains FedRAMP accredited.

This means our government customers can now migrate their Drupal sites onto our latest cloud platform, taking advantage of improved autoscaling, self-healing, and cutting-edge features. We already have 56 FedRAMP customers hosting their Drupal sites on ACN, including Fannie Mae, The US Agency for International Development, and the Department of Education, to name a few.

Improved fleet management for Drupal

Acquia Cloud Site Factory is a platform that helps organizations manage fleets of Drupal sites from a single dashboard, making it easier to launch, update, and scale sites. Over the past two years, we've been rebuilding Site Factory on top of Acquia Cloud Next, integrating them more closely. Recently, we reached a major milestone in this journey. At Engage, we showcased Multi-Experience Operations (MEO) to manage multiple Drupal codebases across your portfolio of sites.

Previously, all sites in a Site Factory instance had to run the same Drupal code, requiring simultaneous updates across all sites. Now, organizations can run sites on different codebases and update them independently. This added flexibility is invaluable for large organizations managing hundreds or thousands of Drupal sites, allowing them to update at their own pace and maintain different Drupal versions where needed.

Improved conversion rates with Acquia Convert

Understanding user behavior is key to optimizing digital experiences, but interpreting the data and deciding on next steps can be challenging. We introduced some new Acquia Convert features (powered by VWO) to solve this.

First, advanced heat-mapping shows exactly how users interact with your pages, where they click first, how far they scroll, and where they show signs of frustration (like rage clicks).

Next, and even more powerful, is our new Acquia Convert Copilot that automatically analyzes this behavioral data to suggest specific improvements. For example, if the AI notices high interaction with a pricing slider but also signs of user confusion, it might suggest an A/B test to clarify the slider's purpose. This helps marketers and site builders make data-driven decisions and improve conversion rates.

Privacy-first analytics with Piwik Pro

As data privacy regulations become stricter globally, organizations face growing challenges with web analytics. Google Analytics has been banned in several European countries for not meeting data sovereignty requirements, leaving organizations scrambling for compliant alternatives.

We announced a partnership with Piwik Pro to address this need. Piwik Pro offers a privacy-first analytics solution that maintains compliance with global data regulations by allowing organizations to choose where their data is stored and maintaining full control over their data.

This makes it an ideal solution for organizations that operate in regions with strict data privacy laws, or any organization that wants to ensure their analytics solution remains compliant with evolving privacy regulations.

After the Piwik Pro announcement at Acquia Engage, I spoke with several customers who are already using Piwik Pro. Most worked in healthcare and other sectors handling sensitive data. They were excited about our partnership and a future that brings deeper integration between Piwik Pro, Acquia Optimize, Drupal, and other parts of our portfolio.

Conclusion

The enthusiasm from our customers and partners at Acquia Engage always reinvigorates me. None of these innovations would be possible without the dedication of our teams at Acquia. I'm grateful for their hard work in bringing these innovations to life, and I'm excited for what is next!

October 15, 2024

I'm excited to share an experiment I've been working on: a solar-powered, self-hosted website running on a Raspberry Pi. The website at https://solar.dri.es is powered entirely by a solar panel and battery on our roof deck in Boston.

My solar panel and Raspberry Pi Zero 2 are set up on our rooftop deck for testing. Once it works, it will be mounted properly and permanently.

By visiting https://solar.dri.es, you can dive into all the technical details and lessons learned – from hardware setup to networking configuration and custom monitoring.

As the content on this solar-powered site is likely to evolve or might even disappear over time, I've included the full article below (with minor edits) to ensure that this information is preserved.

Finally, you can view the real-time status of my solar setup on my solar panel dashboard, hosted on my main website. This dashboard stays online even when my solar-powered setup goes offline.

Background

For over two decades, I've been deeply involved in web development. I've worked on everything from simple websites to building and managing some of the internet's largest websites. I've helped create a hosting business that uses thousands of EC2 instances, handling billions of page views every month. This platform includes the latest technology: cloud-native architecture, Kubernetes orchestration, auto-scaling, smart traffic routing, geographic failover, self-healing, and more.

This project is the complete opposite. It's a hobby project focused on sustainable, solar-powered self-hosting. The goal is to use the smallest, most energy-efficient setup possible, even if it means the website goes offline sometimes. Yes, this site may go down on cloudy or cold days. But don't worry! When the sun comes out, the website will be back up, powered by sunshine.

My primary website, https://dri.es, is reliably hosted on Acquia, and I'm very happy with it. However, if this solar-powered setup proves stable and efficient, I might consider moving some content to solar hosting. For instance, I could keep the most important pages on traditional hosting while transferring less essential content – like my 10,000 photos – to a solar-powered server.

Why am I doing this?

This project is driven by my curiosity about making websites and web hosting more environmentally friendly, even on a small scale. It's also a chance to explore a local-first approach: to show that hosting a personal website on your own internet connection at home can often be enough for small sites. This aligns with my commitment to both the Open Web and the IndieWeb.

At its heart, this project is about learning and contributing to a conversation on a greener, local-first future for the web. Inspired by solar-powered sites like LowTech Magazine, I hope to spark similar ideas in others. If this experiment inspires even one person in the web community to rethink hosting and sustainability, I'll consider it a success.

Solar panel and battery

The heart of my solar setup is a 50-watt panel from Voltaic, which captures solar energy and delivers 12-volt output. I store the excess power in an 18 amp-hour Lithium Iron Phosphate (LFP or LiFePO4) battery, also from Voltaic.

A solar panel being tested on the floor in our laundry room. Upon connecting it, it started charging a battery right away. It feels truly magical. Of course, it won't stay in the laundry room forever so stay tuned for more ...

I'll never forget the first time I plugged in the solar panel – it felt like pure magic. Seeing the battery spring to life, powered entirely by sunlight, was an exhilarating moment that is hard to put into words. And yes, all this electrifying excitement happened right in our laundry room.

An 18Ah LFP battery from Voltaic, featuring a waterproof design and integrated MPPT charge controller. The battery is large and heavy, weighing 3kg (6.6lbs), but it can power a Raspberry Pi for days.

Voltaic's battery system includes a built-in charge controller with Maximum Power Point Tracking (MPPT) technology, which regulates the solar panel's output to optimize battery charging. In addition, the MPPT controller protects the battery from overcharging, extreme temperatures, and short circuits.

A key feature of the charge controller is its ability to stop charging when temperatures fall below 0°C (32°F). This preserves battery health, as charging in freezing conditions can damage the battery cells. As I'll discuss in the Next steps section, this safeguard complicates year-round operation in Boston's harsh winters. I'll likely need a battery that can charge in colder temperatures.

The 12V to 5V voltage converter used to convert the 12V output from the solar panel to 5V for the Raspberry Pi.

I also encountered a voltage mismatch between the 12-volt solar panel output and the Raspberry Pi's 5-volt input requirement. Fortunately, this problem had a more straightforward solution. I solved this using a buck converter to step down the voltage. While this conversion introduces some energy loss, it allows me to use a more powerful solar panel.

Raspberry Pi models

This website is currently hosted on a Raspberry Pi Zero 2 W. The main reason for choosing the Raspberry Pi Zero 2 W is its energy efficiency. Consuming just 0.4 watts at idle and up to 1.3 watts under load, it can run on my battery for about a week. This decision is supported by a mathematical uptime model, detailed in Appendix 1.

That said, the Raspberry Pi Zero 2 W has limitations. Despite its quad-core 1 GHz processor and 512 MB of RAM, it may still struggle with handling heavier website traffic. For this reason, I also considered the Raspberry Pi 4. With its 1.5 GHz quad-core ARM processor and 4 GB of RAM, the Raspberry Pi 4 can handle more traffic. However, this added performance comes at a cost: the Pi 4 consumes roughly five times the power of the Zero 2 W. As shown in Appendix 2, my 50W solar panel and 18Ah battery setup are likely insufficient to power the Raspberry Pi 4 through Boston's winter.

With a single-page website now live on https://solar.dri.es, I'm actively monitoring the real-world performance and uptime of a solar-powered Raspberry Pi Zero 2 W. For now, I'm using the lightest setup that I have available and will upgrade only when needed.

Networking

The Raspberry Pi's built-in Wi-Fi is perfect for our outdoor setup. It wirelessly connects to our home network, so no extra wiring was needed.

I want to call out that my router and Wi-Fi network are not solar-powered; they rely on my existing network setup and conventional power sources. So while the web server itself runs on solar power, other parts of the delivery chain still depend on traditional energy.

Running this website on my home internet connection also means that if my ISP or networking equipment goes down, so does the website – there is no failover in place.

For security reasons, I isolated the Raspberry Pi in its own Virtual Local Area Network (VLAN). This ensures that even if the Pi is compromised, the rest of our home network remains protected.

To make the solar-powered website accessible from the internet, I configured port forwarding on our router. This directs incoming web traffic on port 80 (HTTP) and port 443 (HTTPS) to the Raspberry Pi, enabling external access to the site.

One small challenge was the dynamic nature of our IP address. ISPs typically do not assign fixed IP addresses, meaning our IP address changes from time to time. To keep the website accessible despite these IP address changes, I wrote a small script that looks up our public IP address and updates the DNS record for solar.dri.es on Cloudflare. This script runs every 10 minutes via a cron job.
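
The script itself isn't shown here, but a minimal sketch of such a dynamic DNS updater might look like the following, assuming Cloudflare's standard DNS records API and the ipify service for the public IP lookup. The API token, zone ID, and record ID below are placeholders, not real values.

    #!/usr/bin/env python3
    """Sketch of a dynamic DNS updater for Cloudflare (placeholder credentials)."""
    import requests

    CF_API_TOKEN = "YOUR_API_TOKEN"   # placeholder, not a real token
    CF_ZONE_ID = "YOUR_ZONE_ID"       # placeholder
    CF_RECORD_ID = "YOUR_RECORD_ID"   # placeholder
    RECORD_NAME = "solar.dri.es"

    HEADERS = {"Authorization": f"Bearer {CF_API_TOKEN}", "Content-Type": "application/json"}
    RECORD_URL = f"https://api.cloudflare.com/client/v4/zones/{CF_ZONE_ID}/dns_records/{CF_RECORD_ID}"

    def current_public_ip() -> str:
        # ipify returns the caller's public IPv4 address as plain text.
        return requests.get("https://api.ipify.org", timeout=10).text.strip()

    def dns_record_ip() -> str:
        # Read the A record currently stored at Cloudflare.
        return requests.get(RECORD_URL, headers=HEADERS, timeout=10).json()["result"]["content"]

    def update_record(ip: str) -> None:
        # Overwrite the A record with the new public IP.
        payload = {"type": "A", "name": RECORD_NAME, "content": ip, "proxied": True}
        requests.put(RECORD_URL, headers=HEADERS, json=payload, timeout=10).raise_for_status()

    if __name__ == "__main__":
        ip = current_public_ip()
        if ip != dns_record_ip():
            update_record(ip)

A crontab entry that runs the script every 10 minutes (*/10 * * * *) matches the cadence described above.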

I use Cloudflare's DNS proxy, which handles DNS and offers basic DDoS protection. However, I do not use Cloudflare's caching or CDN features, as that would somewhat defeat the purpose of running this website on solar power and keeping it local-first.

The Raspberry Pi uses Caddy as its web server, which automatically obtains SSL certificates from Let's Encrypt. This setup ensures secure, encrypted HTTP connections to the website.

Monitoring and dashboard

The Raspberry Pi 4 (on the left) can run a website, while the RS485 CAN HAT (on the right) will communicate with the charge controller for the solar panel and battery.

One key feature that influenced my decision to go with the Voltaic battery is its RS485 interface for the charge controller. This allowed me to add an RS485 CAN HAT (Hardware Attached on Top) to the Raspberry Pi, enabling communication with the charge controller using the Modbus protocol. In turn, this enabled me to programmatically gather real-time data on the solar panel's output and battery's status.

I collect data such as battery capacity, power output, temperature, uptime, and more. I send this data to my main website via a web service API, where it's displayed on a dashboard. This setup ensures that key information remains accessible, even if the Raspberry Pi goes offline.
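
The exact register map and web service endpoint aren't documented in this post, so the snippet below is only an illustrative sketch: gather a few metrics (with the Modbus reads stubbed out) and POST them as JSON to a hypothetical dashboard endpoint. The URL, field names, and token are all made up for the example.

    #!/usr/bin/env python3
    """Sketch: push solar metrics to a dashboard web service (all names are placeholders)."""
    import time
    import requests

    API_ENDPOINT = "https://example.com/api/solar-metrics"  # placeholder endpoint
    API_TOKEN = "YOUR_SECRET_TOKEN"                         # placeholder credential

    def read_charge_controller() -> dict:
        # Placeholder for the Modbus reads over the RS485 CAN HAT; in practice this
        # would query the charge controller's registers for battery and panel data.
        return {
            "battery_capacity_pct": 87,
            "power_output_w": 12.4,
            "temperature_c": 18.2,
            "uptime_s": int(time.monotonic()),
        }

    def push_metrics(metrics: dict) -> None:
        # Authenticate and send the readings as JSON to the dashboard endpoint.
        headers = {"Authorization": f"Bearer {API_TOKEN}"}
        requests.post(API_ENDPOINT, json=metrics, headers=headers, timeout=10).raise_for_status()

    if __name__ == "__main__":
        push_metrics(read_charge_controller())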

My main website runs on Drupal. The dashboard is powered by a custom module I developed. This module adds a web service endpoint to handle authentication, validate incoming JSON data, and store it in a MariaDB database table. Using the historical data stored in MariaDB, the module generates Scalable Vector Graphics (SVGs) for the dashboard graphs. For more details, check out my post on building a temperature and humidity monitor, which explains a similar setup in much more detail. Sure, I could have used a tool like Grafana, but sometimes building it yourself is part of the fun.

A Raspberry Pi 4 with an attached RS485 CAN HAT module is being installed in a waterproof enclosure.

For more details on the charge controller and some of the issues I've observed, please refer to Appendix 3.

Energy use, cost savings, and environmental impact

When I started this solar-powered website project, I wasn't trying to revolutionize sustainable computing or drastically cut my electricity bill. I was driven by curiosity, a desire to have fun, and a hope that my journey might inspire others to explore local-first or solar-powered hosting.

That said, let's break down the energy consumption and cost savings to get a better sense of the project's impact.

The tiny Raspberry Pi Zero 2 W at the heart of this project uses just 1 Watt on average. This translates to 0.024 kWh daily (1W * 24h / 1000 = 0.024 kWh) and approximately 9 kWh annually (0.024 kWh * 365 days = 8.76 kWh). The cost savings? Looking at our last electricity bill, we pay an average of $0.325 per kWh in Boston. This means the savings amount to $2.85 USD per year (8.76 kWh * $0.325/kWh = $2.85). Not exactly something to write home about.

The environmental impact is similarly modest. Saving 9 kWh per year reduces CO2 emissions by roughly 4 kg, which is about the same as driving 16 kilometers (10 miles) by car.
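
For anyone who wants to check or adapt these numbers, here is the same arithmetic as a short Python snippet. The grid carbon intensity of 0.45 kg CO2 per kWh is a rough assumption, chosen to be consistent with the roughly 4 kg figure above.

    # Rough annual energy, cost, and CO2 figures for a ~1 W average load.
    avg_power_w = 1.0
    kwh_per_year = avg_power_w * 24 / 1000 * 365          # ~8.76 kWh
    price_per_kwh = 0.325                                  # USD per kWh (Boston rate above)
    co2_kg_per_kwh = 0.45                                  # assumed grid carbon intensity

    print(f"{kwh_per_year:.2f} kWh per year")                       # 8.76
    print(f"${kwh_per_year * price_per_kwh:.2f} saved per year")    # ~2.85
    print(f"~{kwh_per_year * co2_kg_per_kwh:.1f} kg CO2 avoided")   # ~3.9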

There are two ways to interpret these numbers. The pessimist might say that the impact of my solar setup is negligible, and they wouldn't be wrong. Offsetting the energy use of a Raspberry Pi Zero 2, which only draws 1 Watt, will never be game-changing. The $2.85 USD saved annually won't come close to covering the cost of the solar panel and battery. In terms of efficiency, this setup isn't a win.

But the optimist in me sees it differently. When you compare my solar-powered setup to traditional website hosting, a more compelling case emerges. Using a low-power Raspberry Pi to host a basic website, rather than large servers in energy-hungry data centers, can greatly cut down on both expenses and environmental impact. Consider this: a Raspberry Pi Zero 2 W costs just $15 USD, and I can power it with main power for only $0.50 USD a month. In contrast, traditional hosting might cost around $20 USD a month. Viewed this way, my setup is both more sustainable and economical, showing some merit.

Lastly, it's also important to remember that solar power isn't just about saving money or cutting emissions. In remote areas without grid access or during disaster relief, solar can be the only way to keep communication systems running. In a crisis, a small solar setup could make the difference between isolation and staying connected to essential information and support.

Why do so many websites need to stay up?

The reason the energy savings from my solar-powered setup won't offset the equipment costs is that the system is intentionally oversized to keep the website running during extended low-light periods. Once the battery reaches full capacity, any excess energy goes to waste. That is unfortunate as that surplus could be used, and using it would help offset more of the hardware costs.

This inefficiency isn't unique to solar setups – it highlights a bigger issue in web hosting: over-provisioning. The web hosting world is full of mostly idle hardware. Web hosting providers often allocate more resources than necessary to ensure high uptime or failover, and this comes at an environmental cost.

One way to make web hosting more eco-friendly is by allowing non-essential websites to experience more downtime, reducing the need to power as much hardware. Of course, many websites are critical and need to stay up 24/7 – my own work with Acquia is dedicated to ensuring essential sites do just that. But for non-critical websites, allowing some downtime could go a long way in conserving energy.

It may seem unconventional, but I believe it's worth considering: many websites, mine included, aren't mission-critical. The world won't end if they occasionally go offline. That is why I like the idea of hosting my 10,000 photos on a solar-powered Raspberry Pi.

And maybe that is the real takeaway from this experiment so far: to question why our websites and hosting solutions have become so resource-intensive and why we're so focused on keeping non-essential websites from going down. Do we really need 99.9% uptime for personal websites? I don't think so.

Perhaps the best way to make the web more sustainable is to accept more downtime for those websites that aren't critical. By embracing occasional downtime and intentionally under-provisioning non-essential websites, we can make the web a greener, more efficient place.

The solar panel and battery mounted on our roof deck.

Next steps

As I continue this experiment, my biggest challenge is the battery's inability to charge in freezing temperatures. As explained, the battery's charge controller includes a safety feature that prevents charging when the temperature drops below freezing. While the Raspberry Pi Zero 2 W can run on my fully charged battery for about six days, this won't be sufficient for Boston winters, where temperatures often remain below freezing for longer.

With winter approaching, I need a solution to charge my battery in extreme cold. Several options to consider include:

  1. Adding a battery heating system that uses excess energy during peak sunlight hours.
  2. Applying insulation, though this alone may not suffice since the battery generates minimal heat.
  3. Replacing the battery with one that charges at temperatures as low as -20°C (-4°F), such as Lithium Titanate (LTO) or certain AGM lead-acid batteries. However, it's not as simple as swapping it out – my current battery has a built-in charge controller, so I'd likely need to add an external charge controller, which would require rewiring the solar panel and updating my monitoring code.

Each solution has trade-offs in cost, safety, and complexity. I'll need to research the different options carefully to ensure safety and reliability.

The last quarter of the year is filled with travel and other commitments, so I may not have time to implement a fix before freezing temperatures hit. With some luck, the current setup might make it through winter. I'll keep monitoring performance and uptime – and, as mentioned, a bit of downtime is acceptable and even part of the fun! That said, the website may go offline for a few weeks and restart after the harshest part of winter. Meanwhile, I can focus on other aspects of the project.

For example, I plan to expand this single-page site into one with hundreds or even thousands of pages. Here are a few things I'd like to explore:

  1. Testing Drupal on a Raspberry Pi Zero 2 W: As the founder and project lead of Drupal, my main website runs on Drupal. I'm curious to see if Drupal can actually run on a Raspberry Pi Zero 2 W. The answer might be "probably not", but I'm eager to try.
  2. Upgrading to a Raspberry Pi 4 or 5: I'd like to experiment with upgrading to a Raspberry Pi 4 or 5, as I know it could run Drupal. As noted in Appendix 2, this might push the limits of my solar panel and battery. There are some optimization options to explore though, like disabling CPU cores, lowering the RAM clock speed, and dynamically adjusting features based on sunlight and battery levels.
  3. Creating a static version of my site: I'm interested in experimenting with a static version of https://dri.es. A static site doesn't require PHP or MySQL, which would likely reduce resource demands and make it easier to run on a Raspberry Pi Zero 2 W. However, dynamic features like my solar dashboard depend on PHP and MySQL, so I'd potentially need alternative solutions for those. Tools like Tome and QuantCDN offer ways to generate static versions of Drupal sites, but I've never tested these myself. Although I prefer keeping my site dynamic, creating a static version also aligns with my interests in digital preservation and archiving, offering me a chance to delve deeper into these concepts.

Either way, it looks like I'll have some fun ahead. I can explore these ideas from my office while the Raspberry Pi Zero 2 W continues running on the roof deck. I'm open to suggestions and happy to share notes with others interested in similar projects. If you'd like to stay updated on my progress, you can sign up to receive new posts by email or subscribe via RSS. Feel free to email me at dries@buytaert.net. Your ideas, input, and curiosity are always welcome.

Appendix

Appendix 1: Sizing a solar panel and battery for a Raspberry Pi Zero 2 W

To keep the Raspberry Pi Zero 2 W running in various weather conditions, we need to estimate the ideal solar panel and battery size. We'll base this on factors like power consumption, available sunlight, and desired uptime.

The Raspberry Pi Zero 2 W is very energy-efficient, consuming only 0.4W at idle and up to 1.3W under load. For simplicity, we'll assume an average power consumption of 1W, which totals 24Wh per day (1W * 24 hours).

We also need to account for energy losses due to inefficiencies in the solar panel, charge controller, battery, and inverter. Assuming a total loss of 30%, our estimated daily energy requirement is 24Wh / 0.7 ≈ 34.3Wh.

In Boston, peak sunlight varies throughout the year, averaging 5-6 hours per day in summer (June-August) and only 2-3 hours per day in winter (December-February). Peak sunlight refers to the strongest, most direct sunlight hours. Basing the design on peak sunlight hours rather than total daylight hours provides a margin of safety.

To produce 34.3Wh in the winter, with only 2 hours of peak sunlight, the solar panel should generate about 17.15W (34.3Wh / 2 hours ≈ 17.15W). As mentioned, my current setup includes a 50W solar panel, which provides well above the estimated 17.15W requirement.

Now, let's look at battery sizing. As explained, I have an 18Ah battery, which provides about 216Wh of capacity (18Ah * 12V = 216Wh). If there were no sunlight at all, this battery could power the Raspberry Pi Zero 2 W for roughly 6 days (216Wh / 34.3Wh per day ≈ 6.3 days), ensuring continuous operation even on snowy winter days.

These estimates suggest that I could halve both my 50W solar panel and 18Ah battery to a 25W panel and a 9Ah battery, and still meet the Raspberry Pi Zero 2 W's power needs during Boston winters. However, I chose the 50W panel and larger battery for flexibility, in case I need to upgrade to a more powerful board with higher energy requirements.
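For anyone who wants to plug in their own numbers, here is a minimal sketch of the arithmetic above in Python. The power draw, loss factor and winter sun hours are the assumptions stated in this appendix:

AVG_POWER_W = 1.0            # assumed average draw of the Raspberry Pi Zero 2 W
SYSTEM_LOSS = 0.30           # combined panel/controller/battery/inverter losses
WINTER_PEAK_SUN_HOURS = 2    # conservative Boston winter estimate
BATTERY_WH = 18 * 12         # 18Ah battery at 12V = 216Wh

daily_need_wh = AVG_POWER_W * 24 / (1 - SYSTEM_LOSS)   # ≈ 34.3Wh per day
panel_min_w = daily_need_wh / WINTER_PEAK_SUN_HOURS    # ≈ 17.15W of panel
autonomy_days = BATTERY_WH / daily_need_wh             # ≈ 6.3 days without sun

print(f"Daily energy requirement: {daily_need_wh:.1f} Wh")
print(f"Minimum panel size in winter: {panel_min_w:.1f} W")
print(f"Battery autonomy without sun: {autonomy_days:.1f} days")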

Appendix 2: Sizing a solar panel and battery for a Raspberry Pi 4

If I need to switch to a Raspberry Pi 4 to handle increased website traffic, the power requirements will rise significantly. The Raspberry Pi 4 consumes around 3.4W at idle and up to 7.6W under load. For estimation purposes, I'll assume an average consumption of 4.5W, which totals 108Wh per day (4.5W * 24 hours = 108Wh).

Factoring in a 30% loss due to system inefficiencies, the adjusted daily energy requirement increases to approximately 154.3Wh (108Wh / 0.7 ≈ 154.3Wh). To meet this demand during winter, with only 2 hours of peak sunlight, the solar panel would need to produce about 77.15W (154.3Wh / 2 hours ≈ 77.15W).

While some margin of safety is built into my calculations, this likely means my current 50W solar panel and 216Wh battery are insufficient to power a Raspberry Pi 4 during a Boston winter.

For example, with an average power draw of 4.5W, the Raspberry Pi 4 requires 108Wh daily. In winter, if the solar panel generates only 70 to 105Wh per day, there would be a shortfall of 3 to 38Wh each day, which the battery would need to cover. And with no sunlight at all, a fully charged 216Wh battery would keep the system running for about 2 days (216Wh / 108Wh per day ≈ 2 days) before depleting.

To ensure reliable operation, a 100W solar panel, capable of generating enough power with just 2 hours of winter sunlight, paired with a 35Ah battery providing 420Wh, could be better. This setup, roughly double my current capacity, would offer sufficient backup to keep the Raspberry Pi 4 running for 3-4 days without sunlight.
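The same back-of-the-envelope math for the Raspberry Pi 4, as a small sketch using the assumptions from this appendix, including the estimated 70-105Wh winter yield:

AVG_POWER_W = 4.5            # assumed average draw of the Raspberry Pi 4
SYSTEM_LOSS = 0.30
WINTER_PEAK_SUN_HOURS = 2
BATTERY_WH = 216             # current 18Ah * 12V battery

daily_use_wh = AVG_POWER_W * 24                        # 108Wh consumed per day
daily_need_wh = daily_use_wh / (1 - SYSTEM_LOSS)       # ≈ 154.3Wh to generate
panel_min_w = daily_need_wh / WINTER_PEAK_SUN_HOURS    # ≈ 77.15W of panel

for winter_yield_wh in (70, 105):                      # estimated winter production
    shortfall_wh = daily_use_wh - winter_yield_wh      # 38Wh down to 3Wh per day
    print(f"{winter_yield_wh}Wh/day of sun -> {shortfall_wh}Wh/day from the battery")

print(f"Days of autonomy without sun: {BATTERY_WH / daily_use_wh:.1f}")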

Appendix 3: Observations on the Lumiax charge controller

As I mentioned earlier, my battery has a built-in charge controller. The brand of the controller is Lumiax, and I can access its data programmatically. While the controller excels at managing charging, its metering capabilities feel less robust. Here are a few observations:

  1. I reviewed the charge controller's manual to clarify how it defines and measures different currents, but the information provided was insufficient.
    • The charge controller allows monitoring of the "solar current" (register 12367). I expected this to measure the current flowing from the solar panel to the charge controller, but it actually measures the current flowing from the charge controller to the battery. In other words, it tracks the "useful current" – the current from the solar panel used to charge the battery or power the load. The problem with this is that when the battery is fully charged, the controller reduces the current from the solar panel to prevent overcharging, even though the panel could produce more. As a result, I can't accurately measure the maximum power output of the solar panel. For example, in full sunlight with a fully charged battery, the calculated power output could be as low as 2W, even though the solar panel is capable of producing 50W.
    • The controller also reports the "battery current" (register 12359), which appears to represent the current flowing from the battery to the Raspberry Pi. I believe this to be the case because the "battery current" turns negative at night, indicating discharge.
    • Additionally, the controller reports the "load current" (register 12362), which, in my case, consistently reads zero. This is odd because my Raspberry Pi Zero 2 typically draws between 0.1-0.3A. Even with a Raspberry Pi 4, drawing between 0.6-1.3A, the controller still reports 0A. This could be a bug or suggest that the charge controller lacks sufficient accuracy.
  2. When the battery discharges and the low voltage protection activates, it shuts down the Raspberry Pi as expected. However, if there isn't enough sunlight to recharge the battery within a certain timeframe, the Raspberry Pi does not automatically reboot. Instead, I must perform a manual 'factory reset' of the charge controller. This involves connecting my laptop to the controller – a cumbersome process that requires me to disconnect the Raspberry Pi, open its waterproof enclosure, detach the RS485 hat wires, connect them to a USB-to-RS485 adapter for my laptop, and run a custom Python script. Afterward, I have to reverse the entire process. This procedure can't be performed while traveling as it requires physical access.
  3. The charge controller has two temperature sensors: one for the environment and one for the controller itself. However, the controller's temperature readings often seem inaccurate. For example, while the environment temperature might correctly register at 24°C, the controller could display a reading as low as 14°C. This seems questionable though there might be an explanation that I'm overlooking.
  4. The battery's charge and discharge patterns are non-linear, meaning the charge level may drop rapidly at first, then stay steady for hours. For example, I've seen it drop from 100% to 65% within an hour but remain at 65% for over six hours. This is common for LFP batteries due to their voltage characteristics. Some advanced charge controllers use look-up tables, algorithms, or coulomb counting to more accurately predict the state of charge based on the battery type and usage patterns. The Lumiax doesn't support this, but I might be able to implement coulomb counting myself by tracking the current flow to improve charge level estimates.
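As an illustration of that last idea, here is a rough coulomb-counting sketch. The battery-current register number is the one mentioned above; the serial port, slave address, baud rate, scaling factor, polling interval and the use of the minimalmodbus library are all illustrative assumptions rather than values from the Lumiax documentation:

import time
import minimalmodbus

BATTERY_CURRENT_REG = 12359   # "battery current" register; negative while discharging
CURRENT_SCALE = 0.01          # assumed amps per register count (check your datasheet)
BATTERY_CAPACITY_AH = 18

controller = minimalmodbus.Instrument('/dev/ttyUSB0', slaveaddress=1)  # assumed wiring
controller.serial.baudrate = 9600

charge_ah = BATTERY_CAPACITY_AH   # start from an assumed full battery
last = time.time()

while True:
    raw = controller.read_register(BATTERY_CURRENT_REG, functioncode=3, signed=True)
    amps = raw * CURRENT_SCALE
    now = time.time()
    charge_ah += amps * (now - last) / 3600.0          # integrate current into Ah
    charge_ah = max(0.0, min(charge_ah, BATTERY_CAPACITY_AH))
    last = now
    soc = 100 * charge_ah / BATTERY_CAPACITY_AH
    print(f"Estimated state of charge: {soc:.1f}%")
    time.sleep(60)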

Appendix 4: When size matters (but it's best not to mention it)

When buying a solar panel, sometimes it's easier to beg for forgiveness than to ask for permission.

One day, I casually mentioned to my wife, "Oh, by the way, I bought something. It will arrive in a few days."

"What did you buy?", she asked, eyebrow raised.

"A solar panel", I said, trying to sound casual.

"A what?!", she asked again, her voice rising.

Don't worry!", I reassured her. "It's not that big", I said, gesturing with my hands to show a panel about the size of a laptop.

She looked skeptical but didn't push further.

Fast forward to delivery day. As I unboxed it, her eyes widened in surprise. The panel was easily four or five times larger than what I'd shown her. Oops.

The takeaway? Sometimes a little underestimation goes a long way.

October 05, 2024

Cover Ember Knights

Proton is a compatibility layer that lets Windows games run on Linux. Running a Windows game is mostly just hitting the Play button within Steam. It’s so good that many games now run faster on Linux than on native Windows. That’s what makes the Steam Deck the best gaming handheld of the moment.

But a compatibility layer is still a layer, so you may encounter … incompatibilities. Ember Knights is a lovely game with fun co-op multiplayer support. It runs perfectly on the (Linux-based) Steam Deck, but on my Ubuntu laptop I encountered long loading times (startup was 5 minutes and loading between worlds was slow). But once the game was loaded it ran fine.

Debugging the game revealed lots of EAGAIN errors while the game was trying to access the system clock. Changing the number of allowed open files fixed the problem for me.

Add this to the end of the following files:

  • in /etc/security/limits.conf:
* hard nofile 1048576
  • in /etc/systemd/system.conf and /etc/systemd/user.conf:
DefaultLimitNOFILE=1048576 

Reboot.
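After rebooting, a quick sanity check is to look at the open-file limits in a new shell; if the changes above were picked up, the hard limit should report 1048576:

ulimit -Hn   # hard limit on open files
ulimit -Sn   # soft limit on open files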

Cover In Game

“The Witcher 3: Wild Hunt” is considered to be one of the greatest video games of all time. I certainly agree with that sentiment.

At its core, The Witcher 3 is an action role-playing game with a third-person perspective in a huge open world. You develop your character while the story advances. At the same time you can freely roam and explore as much as you like. The main story is captivating and the world is filled with side quests and lots of interesting people. Fun for at least 200 hours, if you’re the exploring kind. If you’re not, the base game (without DLCs) will still take you 50 hours to finish.

While similar to other great games like Nintendo’s Zelda Breath of the Wild and Sony’s Horizon Zero Dawn, the strength of the game is its deep lore, originating from the Witcher novels written by the “Polish Tolkien”, Andrzej Sapkowski. It’s not just a game, but a universe (nowadays it even includes a Netflix TV series).

A must play.

Played on the Steam Deck without any issues (“Steam Deck Verified”)

October 03, 2024

A couple of months after reaching 1600, I hit another milestone: Elo 1700!

When I reached an Elo rating of 1600, I expected the climb to get more difficult. Surprisingly, moving up to 1700 was easier than I thought.

I stuck with my main openings but added a few new variations. For example, I started using the "Queen's Gambit Declined: Cambridge Springs Defense" against white opening with 1. d4 – a name that, just six months ago, might as well have been a spell from Harry Potter. Despite expanding my opening repertoire, my opening knowledge remains limited, and I tend to stick to the few openings I know well.

Trying my luck against a chess hustler in New York City. I lost.

A key challenge I keep facing is what my chess coach calls "pattern recognition", the ability to instantly recognize common tactical setups and positional themes. Vanessa, my wife, would almost certainly agree with that – she's great at spotting patterns in my life that I completely miss. To work on this blind spot, I've made solving chess puzzles a daily habit.

What has really started to make sense for me is understanding pawn breaks and how to create weaknesses in my opponent's position – while avoiding creating weaknesses in my own. These concepts are becoming clearer, and I feel like I'm seeing the board better.

Next stop: Elo 1800.

September 24, 2024

FOSDEM 2025 will take place at the ULB on the 1st and 2nd of February 2025. As has become traditional, we offer free and open source projects a stand to display their work "in real life" to the audience. You can share information, demo software, interact with your users and developers, give away goodies, sell merchandise or accept donations. Anything is possible! We offer you: One table (180x80cm) with a set of chairs and a power socket. Fast wireless internet access. You can choose whether you want the spot for the entire conference, or simply for one day. Joint舰

September 22, 2024

We now invite proposals for developer rooms for FOSDEM 2025. FOSDEM offers open source and free software developers a place to meet, share ideas and collaborate. Renowned for being highly developer-oriented, the event brings together some 8000+ geeks from all over the world. The twenty-fifth edition will take place Saturday 1st and Sunday 2nd February 2025 in Brussels, Belgium. Developer rooms are assigned to self-organising groups to work together on open source and free software projects, to discuss topics relevant to a broader subset of the community, etc. Most content should take the form of presentations. Proposals involving collaboration舰

September 10, 2024

In previous blog posts, we discussed setting up a GPG smartcard on GNU/Linux and FreeBSD.

In this blog post, we will configure Thunderbird to work with an external smartcard reader and our GPG-compatible smartcard.

beastie gnu tux

Before Thunderbird 78, if you wanted to use OpenPGP email encryption, you had to use a third-party add-on such as https://enigmail.net/.

Thunderbird’s recent versions natively support OpenPGP. The Enigmail addon for Thunderbird has been discontinued. See: https://enigmail.net/index.php/en/home/news.

I didn’t find good documentation on how to set up Thunderbird with a GnuPG smartcard when I moved to a new coreboot laptop, so this was the reason I created this blog post series.

GnuPG configuration

We’ll not go into too much detail on how to set up GnuPG. This was already explained in the previous blog posts.

If you want to use a HSM with GnuPG you can use the gnupg-pkcs11-scd agent https://github.com/alonbl/gnupg-pkcs11-scd that translates the pkcs11 interface to GnuPG. A previous blog post describes how this can be configured with SmartCard-HSM.

We’ll go over some steps to make sure that GnuPG is set up correctly before we continue with the Thunderbird configuration. The pinentry command must be configured with graphical support so we can type our PIN code in the graphical user environment.

Import Public Key

Make sure that your public key - or the public key(s) of the receiver(s) - are imported.

[staf@snuffel ~]$ gpg --list-keys
[staf@snuffel ~]$ 
[staf@snuffel ~]$ gpg --import <snip>.asc
gpg: key XXXXXXXXXXXXXXXX: public key "XXXX XXXXXXXXXX <XXX@XXXXXX>" imported
gpg: Total number processed: 1
gpg:               imported: 1
[staf@snuffel ~]$ 
[staf@snuffel ~]$  gpg --list-keys
/home/staf/.gnupg/pubring.kbx
-----------------------------
pub   xxxxxxx YYYYY-MM-DD [SC]
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
uid           [ xxxxxxx] xxxx xxxxxxxxxx <xxxx@xxxxxxxxxx.xx>
sub   xxxxxxx xxxx-xx-xx [A]
sub   xxxxxxx xxxx-xx-xx [E]

[staf@snuffel ~]$ 

Pinentry

Thunderbird will not ask for your smartcard’s pin code.

This must be done either on your smartcard reader, if it has a pin pad, or through an external pinentry program.

The pinentry is configured in the gpg-agent.conf configuration file. As we’re using Thunderbird in a graphical environment, we’ll configure it to use a graphical version.

Installation

I’m testing KDE plasma 6 on FreeBSD, so I installed the Qt version of pinentry.

On GNU/Linux you can check the documentation of your favourite Linux distribution to install a graphical pinentry. If you use a graphical desktop environment, there is probably already a graphics-enabled pinentry installed.

[staf@snuffel ~]$ sudo pkg install -y pinentry-qt6
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        pinentry-qt6: 1.3.0

Number of packages to be installed: 1

76 KiB to be downloaded.
[1/1] Fetching pinentry-qt6-1.3.0.pkg: 100%   76 KiB  78.0kB/s    00:01    
Checking integrity... done (0 conflicting)
[1/1] Installing pinentry-qt6-1.3.0...
[1/1] Extracting pinentry-qt6-1.3.0: 100%
==> Running trigger: desktop-file-utils.ucl
Building cache database of MIME types
[staf@snuffel ~]$ 

Configuration

The gpg-agent is responsible for starting the pinentry program. Let’s reconfigure it to start the pinentry that we like to use.

[staf@snuffel ~]$ cd .gnupg/
[staf@snuffel ~/.gnupg]$ 
[staf@snuffel ~/.gnupg]$ vi gpg-agent.conf

The pinentry is configured in the pinentry-program directive. You’ll find the complete gpg-agent.conf that I’m using below.

debug-level expert
verbose
verbose
log-file /home/staf/logs/gpg-agent.log
pinentry-program /usr/local/bin/pinentry-qt

Reload the scdaemon and gpg-agent configuration.

staf@freebsd-gpg3:~/.gnupg $ gpgconf --reload scdaemon
staf@freebsd-gpg3:~/.gnupg $ gpgconf --reload gpg-agent
staf@freebsd-gpg3:~/.gnupg $ 

Test

To verify that gpg works correctly and that the pinentry program works in our graphical environment we sign a file.

Create a new file.

$ cd /tmp
[staf@snuffel /tmp]$ 
[staf@snuffel /tmp]$ echo "foobar" > foobar
[staf@snuffel /tmp]$ 

Try to sign it.

[staf@snuffel /tmp]$ gpg --sign foobar
[staf@snuffel /tmp]$ 

If everything works fine, the pinentry program will ask for the pincode to sign it.
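Signing creates a foobar.gpg file next to the original; you can optionally verify it to confirm the whole chain works (a standard GnuPG command, not specific to this setup):

[staf@snuffel /tmp]$ gpg --verify foobar.gpg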

image info

Thunderbird

In this section we’ll (finally) configure Thunderbird to use GPG with a smartcard reader.

Allow external smartcard reader

open settings

Open the global settings, click on the "Hamburger" icon and select settings.

Or press [F10] to bring up the "Menu bar" in Thunderbird and select [Edit] and Settings.

open settings

In the settings window click on [Config Editor].

This will open the Advanced Preferences window.

allow external gpg

In the Advanced Preferences window search for "external_gnupg" and set mail.identity.allow_external_gnupg to true.


 

Setup End-To-End Encryption

The next step is to configure the GPG keypair that we’ll use for our user account.

open settings

Open the account settings by clicking the "Hamburger" icon and selecting Account Settings, or press [F10] to open the menu bar and select Edit, Account Settings.

Select End-to-End Encryption and in the OpenPGP section select [ Add Key ].

open settings

Select the ( * ) Use your external key through GnuPG (e.g. from a smartcard)

And click on [Continue]

The next window will ask you for the Secret Key ID.

open settings

Execute gpg --list-keys to get your secret key id.

Copy/paste your key id and click on [ Save key ID ].
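If you have several keys and are not sure which ID to paste, the long IDs of your secret keys (for a smartcard these are stubs pointing at the card) can be listed with a standard GnuPG command:

[staf@snuffel ~]$ gpg --list-secret-keys --keyid-format=long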

I found that it is sometimes required to restart Thunderbird to reload the configuration when a new key ID is added. So restart Thunderbird if it fails to find your key ID in the keyring.

Test

open settings

As a test we send an email to our own email address.

Open a new message window and enter your email address into the To: field.

Click on [OpenPGP] and Encrypt.

open settings

Thunderbird will show a warning message that it doesn't have the public key needed to set up the encryption.

Click on [Resolve].

discover keys

In the next window Thunderbird will ask to Discover Public Keys online or to import the Public Keys From File; we'll import our public key from a file.

open key file

In the Import OpenPGP key File window select your public key file, and click on [ Open ].
open settings

Thunderbird will show a window with the key fingerprint. Select ( * ) Accepted.

Click on [ Import ] to import the public key.

open settings

With our public key imported, the warning that end-to-end encryption requires resolving key issues should be gone.

Click on the [ Send ] button to send the email.

open settings

To encrypt the message, Thunderbird will start a gpg session that invokes the pinentry command; type in your PIN code. gpg will encrypt the message and, if everything works fine, the email is sent.

 

Have fun!


September 09, 2024

The NBD protocol has grown a number of new features over the years. Unfortunately, some of those features are not (yet?) supported by the Linux kernel.

I suggested a few times over the years that the maintainer of the NBD driver in the kernel, Josef Bacik, take a look at these features, but he hasn't done so; presumably he has other priorities. As with anything in the open source world, if you want it done you must do it yourself.

I'd been off and on considering to work on the kernel driver so that I could implement these new features, but I never really got anywhere.

A few months ago, however, Christoph Hellwig posted a patch set that reworked a number of block device drivers in the Linux kernel to a new type of API. Since the NBD mailinglist is listed in the kernel's MAINTAINERS file, this patch series was crossposted to the NBD mailinglist, too, and when I noticed that it explicitly disabled the "rotational" flag on the NBD device, I suggested to Christoph that perhaps "we" (meaning, "he") might want to vary the decision on whether a device is rotational depending on whether the NBD server signals, through the flag that exists for that very purpose, that the device is rotational.

To which he replied "Can you send a patch".

That got me down the rabbit hole, and now, for the first time in the 20+ years of being a C programmer who uses Linux exclusively, I got a patch merged into the Linux kernel... twice.

So, what do these things do?

The first patch adds support for the ROTATIONAL flag. If the NBD server mentions that the device is rotational, it will be treated as such, and the elevator algorithm will be used to optimize accesses to the device. For the reference implementation, you can do this by adding a line "rotational = true" to the relevant section (relating to the export where you want it to be used) of the config file.
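For example (a minimal sketch; the export name and path are placeholders), the relevant section of the nbd-server config file would look something like this:

[generic]

[myexport]
    exportname = /srv/nbd/disk.img
    rotational = true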

It's unlikely that this will be of much benefit in most cases (most nbd-server installations export a file on a filesystem, where the elevator algorithm is already applied server-side, so it doesn't matter whether the client device has the rotational flag set), but it's there in case you wish to use it.

The second set of patches adds support for the WRITE_ZEROES command. Most devices these days allow you to tell them "please write N zeroes starting at this offset", which is a lot more efficient than sending over a buffer of N zeroes and asking the device to do DMA to copy buffers etc etc for just zeroes.

The NBD protocol has supported its own WRITE_ZEROES command for a while now, and hooking it up was reasonably simple in the end. The only problem is that the protocol expects length values in bytes, whereas the kernel uses blocks. It took me a few tries to get that right -- and then I also fixed up handling of discard messages, which required the same conversion.

September 06, 2024

Some users, myself included, have noticed that their MySQL error log contains many lines like this one: Where does that error come from? The error MY-010914 is part of the Server Network issues like: Those are usually more problematic than the ones we are covering today. The list is not exhaustive and in the source […]

September 05, 2024

IT architects generally use architecture-specific languages or modeling techniques to document their thoughts and designs. ArchiMate, the framework I have the most experience with, is a specialized enterprise architecture modeling language. It is maintained by The Open Group, an organization known for its broad architecture framework titled TOGAF.

My stance, however, is that architects should not use the diagrams from their architecture modeling framework to convey their message to every stakeholder out there...

What is the definition of “Open Source”?

There’s been no shortage of contention on what “Open Source software” means. Two instances that stand out to me personally are ElasticSearch’s “Doubling down on Open” and Scott Chacon’s “public on GitHub”.

I’ve been active in Open Source for 20 years and could use a refresher on its origins and officialisms. The plan was simple: write a blog post about why the OSI (Open Source Initiative) and its OSD (Open Source Definition) are authoritative, collect evidence in its support (confirmation that they invented the term, of widespread acceptance with little dissent, and of the OSD being a practical, well functioning tool). That’s what I keep hearing, I just wanted to back it up. Since contention always seems to be around commercial re-distribution restrictions (which are forbidden by the OSD), I wanted to particularly confirm that there haven’t been all that many commercial vendors who’ve used, or wanted to use, the term “open source” to mean “you can view/modify/use the source, but you are limited in your ability to re-sell, or need to buy additional licenses for use in a business”.

However, the further I looked, the more I found evidence of the opposite of all of the above. I’ve spent a few weeks now digging and some of my long standing beliefs are shattered. I can’t believe some of the things I found out. Clearly I was too emotionally invested, but after a few weeks of thinking, I think I can put things in perspective. So this will become not one, but multiple posts.

The goal for the series is to look at the tensions in the community/industry (in particular those directed towards the OSD), and figure out how to resolve, or at least reduce, them.

Without further ado, let’s get into the beginnings of Open Source.

The “official” OSI story.

Let’s first get the official story out the way, the one you see repeated over and over on websites, on Wikipedia and probably in most computing history books.

Back in 1998, there was a small group of folks who felt that the verbiage at the time (Free Software) had become too politicized. (note: the Free Software Foundation was founded 13 years prior, in 1985, and informal use of “free software” had been around since the 1970s). They felt they needed a new word “to market the free software concept to people who wore ties”. (source) (somewhat ironic since today many of us like to say “Open Source is not a business model”)

Bruce Perens - an early Debian project leader and hacker on free software projects such as busybox - had authored the first Debian Free Software Guidelines in 1997 which was turned into the first Open Source Definition when he founded the OSI (Open Source Initiative) with Eric Raymond in 1998. As you continue reading, keep in mind that from the get-go, OSI’s mission was supporting the industry. Not the community of hobbyists.

Eric Raymond is of course known for his seminal 1999 essay on development models “The cathedral and the bazaar”, but he also worked on fetchmail among others.

According to Bruce Perens, there was some criticism at the time, but only of the term “Open” in general, and of “Open Source” only in a completely different industry.

At the time of its conception there was much criticism for the Open Source campaign, even among the Linux contingent who had already bought-in to the free software concept. Many pointed to the existing use of the term “Open Source” in the political intelligence industry. Others felt the term “Open” was already overused. Many simply preferred the established name Free Software. I contended that the overuse of “Open” could never be as bad as the dual meaning of “Free” in the English language–either liberty or price, with price being the most oft-used meaning in the commercial world of computers and software

From Open Sources: Voices from the Open Source Revolution: The Open Source Definition

Furthermore, from Bruce Perens’ own account:

I wrote an announcement of Open Source which was published on February 9 [1998], and that’s when the world first heard about Open Source.

source: On Usage of The Phrase “Open Source”

Occasionally it comes up that it may have been Christine Peterson who coined the term earlier that week in February but didn’t give it a precise meaning. That was a task for Eric and Bruce in followup meetings over the next few days.

Even when you’re the first to use or define a term, you can’t legally control how others use it, until you obtain a Trademark. Luckily for OSI, US trademark law recognizes the first user when you file an application, so they filed for a trademark right away. But what happened? It was rejected! The OSI’s official explanation reads:

We have discovered that there is virtually no chance that the U.S. Patent and Trademark Office would register the mark “open source”; the mark is too descriptive. Ironically, we were partly a victim of our own success in bringing the “open source” concept into the mainstream

This is our first 🚩 red flag and it lies at the basis of some of the conflicts which we will explore in this, and future posts. (tip: I found this handy Trademark search website in the process)

Regardless, since 1998, the OSI has vastly grown its scope of influence (more on that in future posts), with the Open Source Definition mostly unaltered for 25 years, and having been widely used in the industry.

Prior uses of the term “Open Source”

Many publications simply repeat the idea that OSI came up with the term, has the authority (if not legal, at least in practice) and call it a day. I, however, had nothing better to do, so I decided to spend a few days (which turned into a few weeks 😬) and see if I could dig up any references to “Open Source” predating OSI’s definition in 1998, especially ones with different meanings or definitions.

Of course, it’s totally possible that multiple people come up with the same term independently and I don’t actually care so much about “who was first”, I’m more interested in figuring out what different meanings have been assigned to the term and how widespread those are.

In particular, because most contention is around commercial limitations (non-competes) where receivers of the code are forbidden to resell it, this clause of the OSD stands out:

Free Redistribution: The license shall not restrict any party from selling (…)

Turns out, “Open Source” was already in use for more than a decade prior to the OSI’s founding.

OpenSource.com

In 1998, a business in Texas called “OpenSource, Inc” launched their website. They were a “Systems Consulting and Integration Services company providing high quality, value-added IT professional services”. Sometime during the year 2000, the website became a RedHat property. Enter the domain name on ICANN’s lookup and it reveals the domain was registered on Jan 8, 1998. A month before the term was “invented” by Christine/Richard/Bruce. What a coincidence. We are just warming up…

image

Caldera announces Open Source OpenDOS

In 1996, a company called Caldera had “open sourced” a DOS operating system called OpenDos. Their announcement (accessible on google groups and a mailing list archive) reads:

Caldera Announces Open Source for DOS.
(…)
Caldera plans to openly distribute the source code for all of the DOS technologies it acquired from Novell., Inc
(…)
Caldera believes an open source code model benefits the industry in many ways.
(…)
Individuals can use OpenDOS source for personal use at no cost.
Individuals and organizations desiring to commercially redistribute
Caldera OpenDOS must acquire a license with an associated small fee.

Today we would refer to it as dual-licensing, using Source Available due to the non-compete clause. But in 1996, actual practitioners referred to it as “Open Source” and OSI couldn’t contest it because it didn’t exist!

You can download the OpenDos package from ArchiveOS and have a look at the license file, which includes even more restrictions such as “single computer”. (like I said, I had nothing better to do).

Investigations by Martin Espinoza re: Caldera

On his blog, Martin has an article making a similar observation about Caldera’s prior use of “open source”, following up with another article which includes a response from Lyle Ball, who headed the PR department of Caldera

Quoting Martin:

As a member of the OSI, he [Bruce] frequently championed that organization’s prerogative to define what “Open Source” means, on the basis that they invented the term. But I [Martin] knew from personal experience that they did not. I was personally using the term with people I knew before then, and it had a meaning — you can get the source code. It didn’t imply anything at all about redistribution.

The response from Caldera includes such gems as:

I joined Caldera in November of 1995, and we certainly used “open source” broadly at that time. We were building software. I can’t imagine a world where we did not use the specific phrase “open source software”. And we were not alone. The term “Open Source” was used broadly by Linus Torvalds (who at the time was a student (…), John “Mad Dog” Hall who was a major voice in the community (he worked at COMPAQ at the time), and many, many others.

Our mission was first to promote “open source”, Linus Torvalds, Linux, and the open source community at large. (…) we flew around the world to promote open source, Linus and the Linux community….we specifically taught the analysts houses (i.e. Gartner, Forrester) and media outlets (in all major markets and languages in North America, Europe and Asia.) (…) My team and I also created the first unified gatherings of vendors attempting to monetize open source

So according to Caldera, “open source” was a phenomenon in the industry already and Linus himself had used the term. He mentions plenty of avenues for further research, I pursued one of them below.

Linux Kernel discussions

Mr. Ball’s mentions of Linus and Linux piqued my interest, so I started digging.

I couldn’t find a mention of “open source” in the Linux Kernel Mailing List archives prior to the OSD day (Feb 1998), though the archives only start as of March 1996. I asked ChatGPT where people used to discuss Linux kernel development prior to that, and it suggested 5 Usenet groups, which google still lets you search through:

What were the hits? Glad you asked!

comp.os.linux: a 1993 discussion about supporting binary-only software on Linux

This conversation predates the OSI by five whole years and leaves very little to the imagination:

The GPL and the open source code have made Linux the success that it is. Cygnus and other commercial interests are quite comfortable with this open paradigm, and in fact prosper. One need only pull the source code to GCC and read the list of many commercial contributors to realize this.

comp.os.linux.announce: 1996 announcement of Caldera’s open-source environment

In November 1996 Caldera shows up again, this time with a Linux based “open-source” environment:

Channel Partners can utilize Caldera’s Linux-based, open-source environment to remotely manage Windows 3.1 applications at home, in the office or on the road. By using Caldera’s OpenLinux (COL) and Wabi solution, resellers can increase sales and service revenues by leveraging the rapidly expanding telecommuter/home office market. Channel Partners who create customized turn-key solutions based on environments like SCO OpenServer 5 or Windows NT,

comp.os.linux.announce: 1996 announcement of a trade show

On 17 Oct 1996 we find this announcement

There will be a Open Systems World/FedUnix conference/trade show in Washington DC on November 4-8. It is a traditional event devoted to open computing (read: Unix), attended mostly by government and commercial Information Systems types.

In particular, this talk stands out to me:

** Schedule of Linux talks, OSW/FedUnix'96, Thursday, November 7, 1996 ***
(…)
11:45 Alexander O. Yuriev, “Security in an open source system: Linux study

The context here seems to be open standards, and maybe also the open source development model.

1990: Tony Patti on “software developed from open source material”

In 1990, a magazine editor by the name of Tony Patti not only refers to Open Source software but mentions that the NSA in 1987 stated that “software was developed from open source material”

1995: open-source changes emails on OpenBSD-misc email list

I could find one mention of “open-source” on an OpenBSD email list: it seems there was a directory “open-source-changes” which had incoming patches, distributed over email (source). Though perhaps the way to interpret it is that it concerns “source-changes” to OpenBSD, paraphrased as “open”, so let’s not count this one.

(I did not look at other BSD’s)

Bryan Lunduke’s research

Bryan Lunduke has done similar research and found several more USENET posts about “open source”, clearly in a software context, predating OSI by many years. He breaks it down on his substack. Some interesting examples he found:

19 August, 1993 post to comp.os.ms-windows

Anyone else into “Source Code for NT”? The tools and stuff I’m writing for NT will be released with source. If there are “proprietary” tricks that MS wants to hide, the only way to subvert their hoarding is to post source that illuminates (and I don’t mean disclosing stuff obtained by a non-disclosure agreement).

(source)

Then he writes:

Open Source is best for everyone in the long run.

Written as a matter-of-fact generalization to the whole community, implying the term is well understood.

December 4, 1990

BSD’s open source policy meant that user developed software could be ported among platforms, which meant their customers saw a much more cost effective, leading edge capability combined hardware and software platform.

source

1985: The Computer Chronicles documentary about UNIX

The Computer Chronicles was a TV documentary series talking about computer technology; it started as a local broadcast, but in 1983 became a national series. In February 1985, they broadcast an episode about UNIX. You can watch the entire 28 min episode on archive.org, and it’s an interesting snapshot in time, when UNIX was coming out of its shell and competing with MS-DOS with its multi-user and concurrent multi-tasking features. It contains a segment in which Bill Joy, co-founder of Sun Microsystems, is interviewed about Berkeley Unix 4.2. Sun had more than 1000 staff members, and now its CTO was on national TV in the United States. This was a big deal, with a big audience. At 13:50 min, the interviewer quotes Bill:

“He [Bill Joy] says its open source code, versatility and ability to work on a variety of machines means it will be popular with scientists and engineers for some time”

“Open Source” on national TV. 13 years before the founding of OSI.

image

Uses of the word “open”

We’re specifically talking about “open source” in this article. But we should probably also consider how the term “open” was used in software, as they are related, and that may have played a role in the rejection of the trademark.

Well, the Open Software Foundation launched in 1988. (10 years before the OSI). Their goal was to make an open standard for UNIX. The word “open” is also used in software, e.g. Common Open Software Environment in 1993 (standardized software for UNIX), OpenVMS in 1992 (renaming of VAX/VMS as an indication of its support of open systems industry standards such as POSIX and Unix compatibility), OpenStep in 1994 and of course in 1996, the OpenBSD project started. They have this to say about their name: (while OpenBSD started in 1996, this quote is from 2006):

The word “open” in the name OpenBSD refers to the availability of the operating system source code on the Internet, although the word “open” in the name OpenSSH means “OpenBSD”. It also refers to the wide range of hardware platforms the system supports.

Does it run DOOM?

The proof of any hardware platform is always whether it can run Doom. Since the DOOM source code was published in December 1997, I thought it would be fun if id Software had happened to use the term “Open Source” at that time. There are some FTP mirrors where you can still see the files with the original December 1997 timestamps (e.g. this one). However, after sifting through the README and other documentation files, I only found references to the “Doom source code”. No mention of Open Source.

The origins of the famous “Open Source” trademark application: SPI, not OSI

This is not directly relevant, but may provide useful context: In June 1997 the SPI (“Software In the Public Interest”) organization was born to support the Debian project, funded by its community, although it grew in scope to help many more free software / open source projects. It looks like Bruce, as a representative of SPI, started the “Open Source” trademark proceedings (and may have paid for it himself). But then something happened: 3/4 of the SPI board (including Bruce) left and founded the OSI, which Bruce announced along with a note that the trademark would move from SPI to OSI as well. Ian Jackson - Debian Project Leader and SPI president - expressed his “grave doubts” and lack of trust. SPI later confirmed they owned the trademark (application) and would not let any OSI members take it. The perspective of Debian developer Ean Schuessler provides more context.

A few years later, it seems wounds were healing, with Bruce re-applying to SPI, Ean making amends, and Bruce taking the blame.

All the bickering over the Trademark was ultimately pointless, since it didn’t go through.

Searching for SPI on the OSI website reveals no acknowledgment of SPI’s role in the story. You only find mentions in board meeting notes (ironically, they’re all requests to SPI to hand over domains or to share some software).

By the way, in November 1998, this is what SPI’s open source web page had to say:

Open Source software is software whose source code is freely available

A Trademark that was never meant to be.

Lawyer Kyle E. Mitchell knows how to write engaging blog posts. Here is one where he digs further into the topic of trademarking and why “open source” is one of the worst possible terms to try to trademark (in comparison to, say, Apple computers).

He writes:

At the bottom of the hierarchy, we have “descriptive” marks. These amount to little more than commonly understood statements about goods or services. As a general rule, trademark law does not enable private interests to seize bits of the English language, weaponize them as exclusive property, and sue others who quite naturally use the same words in the same way to describe their own products and services.
(…)
Christine Peterson, who suggested “open source” (…) ran the idea past a friend in marketing, who warned her that “open” was already vague, overused, and cliche.
(…)
The phrase “open source” is woefully descriptive for software whose source is open, for common meanings of “open” and “source”, blurry as common meanings may be and often are.
(…)
no person and no organization owns the phrase “open source” as we know it. No such legal shadow hangs over its use. It remains a meme, and maybe a movement, or many movements. Our right to speak the term freely, and to argue for our own meanings, understandings, and aspirations, isn’t impinged by anyone’s private property.

So, we have here a great example of the Trademark system working exactly as intended, doing the right thing in the service of the people: not giving away unique rights to common words, rights that were demonstrably never OSI’s to have.

I can’t decide which is more wild: OSI’s audacious outcries for the whole world to forget about the trademark failure and trust their “pinky promise” right to authority over a common term, or the fact that so much of the global community actually fell for it and repeated a misguided narrative without much further thought. (myself included)

I think many of us, through our desire to be part of a movement with a positive, fulfilling mission, were too easily swept away by OSI’s origin tale.

Co-opting a term

OSI was never relevant as an organization and hijacked a movement that was well underway without them.

(source: a harsh but astute Slashdot comment)

We have plentiful evidence that “Open Source” was used for at least a decade prior to OSI existing, in the industry, in the community, and possibly in government. You saw it at trade shows, in various newsgroups around Linux and Windows programming, and on national TV in the United States. The word was often uttered without any further explanation, implying it was a known term. For a movement that happened largely offline in the eighties and nineties, it seems likely there were many more examples that we can’t access today.

“Who was first?” is interesting, but more relevant is “what did it mean?”. Many of these uses were fairly informal and/or didn’t consider re-distribution. We saw these meanings:

  • a collaborative development model
  • portability across hardware platforms, open standards
  • disclosing (making available) of source code, sometimes with commercial limitations (e.g. per-seat licensing) or restrictions (e.g. non-compete)
  • possibly a buzz-word in the TV documentary

Then came the OSD which gave the term a very different, and much more strict meaning, than what was already in use for 15 years. However, the OSD was refined, “legal-aware” and the starting point for an attempt at global consensus and wider industry adoption, so we are far from finished with our analysis.

(ironically, it never quite matched with free software either - see this e-mail or this article)

Legend has it…

Repeat a lie often enough and it becomes the truth

Yet, the OSI still promotes their story around being first to use the term “Open Source”. RedHat’s article still claims the same. I could not find evidence of resolution. I hope I just missed it (please let me know!). What I did find is one request for clarification remaining unaddressed and another handled in a questionable way, to put it lightly. Expand all the comments in the thread and see for yourself. For an organization all about “open”, this seems especially strange. It seems we have veered far away from the “We will not hide problems” motto in the Debian Social Contract.

Real achievements are much more relevant than “who was first”. Here are some suggestions for actually relevant ways the OSI could introduce itself and its mission:

  • “We were successful open source practitioners and industry thought leaders”
  • “In our desire to assist the burgeoning open source movement, we aimed to give it direction and create alignment around useful terminology”.
  • “We launched a campaign to positively transform the industry by defining the term - which had thus far only been used loosely - precisely and popularizing it”

I think any of these would land well in the community. Instead, they seem strangely obsessed with “we coined the term, therefore we decide its meaning”, and anything else is “flagrant abuse”.

Is this still relevant? What comes next?

Trust takes years to build, seconds to break, and forever to repair

I’m quite an agreeable person, and until recently happily defended the Open Source Definition. Now, my trust has been tainted, but at the same time, there is beauty in knowing that healthy debate has existed since the day OSI was announced. It’s just a matter of making sense of it all, and finding healthy ways forward.

Most of the events covered here are from 25 years ago, so let’s not linger too much on it. There is still a lot to be said about adoption of Open Source in the industry (and the community), tension (and agreements!) over the definition, OSI’s campaigns around awareness and standardization and its track record of license approvals and disapprovals, challenges that have arisen (e.g. ethics, hyper clouds, and many more), some of which have resulted in alternative efforts and terms. I have some ideas for productive ways forward.

Stay tuned for more, sign up for the RSS feed and let me know what you think!
Comment below, on X or on HackerNews

August 29, 2024

In his latest Lex Fridman appearance, Elon Musk makes some excellent points about the importance of simplification.

Follow these steps:

  1. Simplify the requirements
  2. For each step, try to delete it altogether
  3. Implement well

1. Simplify the Requirements

Even the smartest people come up with requirements that are, in part, dumb. Start by asking yourself how they can be simplified.

There is no point in finding the perfect answer to the wrong question. Try to make the question as least wrong as possible.

I think this is so important that it is included in my first item of advice for junior developers.

There is nothing so useless as doing efficiently that which should not be done at all.

2. Delete the Step

For each step, consider if you need it at all, and if not, delete it. Certainty is not required. Indeed, if you only delete what you are 100% certain about, you will leave in junk. If you never put things back in, it is a sign you are being too conservative with deletions.

The best part is no part.

Some further commentary by me:

This applies both to the product and technical implementation levels. It’s related to YAGNI, Agile, and Lean, also mentioned in the first section of advice for junior developers.

It’s crucial to consider probabilities and compare the expected cost/value of different approaches. Don’t spend 10 EUR each day to avoid a 1% chance of needing to pay 100 EUR. Consistent Bayesian reasoning will reduce making such mistakes, though Elon’s “if you do not put anything back in, you are not removing enough” heuristic is easier to understand and implement.

3. Implement Well

Here, Elon talks about optimization and automation, which are specific to his problem domain of building a supercomputer. More generally, this can be summarized as good implementation, which I advocate for in my second section of advice for junior developers.

 

The relevant segment begins at 43:48.

The post Simplify and Delete appeared first on Entropy Wins.

August 27, 2024

I just reviewed the performance of a customer’s WordPress site. Things got a lot worse, he wrote, and he assumed Autoptimize (he was an AOPro user) wasn’t working any more and asked me to guide him to fix the issue. Instead it turns out he had installed CookieYes, which adds tons of JS (part of which is render-blocking), taking 3.5s of main thread work and (fasten your seat-belts) which somehow seems…

Source

August 26, 2024

Building businesses based on an Open Source project is like balancing a solar system. Like the sun is the center of our own little universe, powering life on the planets which revolve around it in a brittle, yet tremendously powerful astrophysical equilibrium; so is the relationship between a thriving open source project, with a community, one or more vendors and their commercially supported customers revolving around it, driven by astronomical aspirations.

Source-available & Non-Compete licensing have existed in various forms, and have been tweaked and refined for decades, in an attempt to combine just enough proprietary conditions with just enough of Open Source flavor, to find that perfect trade-off. Fair Source is the latest refinement for software projects driven by a single vendor wanting to combine monetization, a high rate of contributions to the project (supported by said monetization), community collaboration and direct association with said software project.

Succinctly, Fair Source licenses provide much of the same benefits to users as Open Source licenses, although outsiders are not allowed to build their own competing service based on the software; however after 2 years the software automatically becomes MIT or Apache2 licensed, and at that point you can pretty much do whatever you want with the older code.

To avoid confusion, this project is different from:

It seems we have reached an important milestone in 2024: on the surface, “Fair Source” is yet another new initiative that positions itself as a more business friendly alternative to “Open Source”, but the delayed open source publication (DSOP) model has been refined to the point where the licenses are succinct, clear, easy to work with and should hold up well in court. Several technology companies are choosing this software licensing strategy (Sentry being the most famous one, you can see the others on their website).

My 2 predictions:

  • we will see 50-100 more companies in the next couple of years.
  • a governance legal entity will appear soon, and a trademark will follow after.

In this article, I’d like to share my perspective and address some - what I believe to be - misunderstandings in current discourse.

The licenses

At this time, the Fair Source ideology is implemented by the following licenses:

BSL/BUSL are trickier to understand and can have different implementations. FCL and FSL are nearly identical. They are clearly and concisely written and embody the Fair Source spirit in its purest form.

Seriously, try running the following in your terminal. Sometimes as an engineer you have to appreciate legal text when it’s this concise, easy to understand, and diff-able!

wget https://raw.githubusercontent.com/keygen-sh/fcl.dev/master/FCL-1.0-MIT.md
wget https://fsl.software/FSL-1.1-MIT.template.md
diff FSL-1.1-MIT.template.md FCL-1.0-MIT.md

I will focus on FSL and FCL, the Fair Source “flagship licenses”.

Is it “open source, fixed”, or an alternative to open source? Neither.

First, we’ll need to agree on what the term “Open Source” means. This itself has been a battle for decades, with non-competes (commercial restrictions) being especially contentious and in use even before OSI came along, so I’m working on an article which challenges OSI’s Open Source Definition which I will publish soon. However, the OSD is probably the most common understanding in the industry today - so we’ll use that here - and it seems that the folks behind FSL/Fair Source made the wise decision to distance themselves from these contentious debates: after some initial conversations about FSL using the “Open Source” term, they’ve adopted the less common term of “Fair Source”, and I’ve seen a lot of meticulous work (e.g. fsl#2 and fsl#10) on how they articulate what they stand for. (The Open Source Definition debate is why I hope the Fair Source folks will file a trademark if this project gains more traction.)

Importantly, OSI’s definition of “Open Source” includes non-discrimination and free redistribution.

When you check out code that is FSL licensed, and the code was authored:

  1. less than 2 years ago: it’s available to you under terms similar to MIT, except you cannot compete with the author by making a similar service using the same software
  2. more than 2 years ago: it is now MIT licensed. (or Apache2, when applicable)

While after 2 years, it is clearly open source, the non-compete clause in option 1 is not compatible with the set of terms set forth by the OSI Open Source Definition. (or freedom 0 from the 4 freedoms of Free Software). Such a license is often referred to as “Source Available”.

So, Fair Source is a system to combine 2 licenses (an Open Source one and a Source Available one with proprietary conditions) in one. I think this is a very clever approach, but I think it’s not all that useful to compare this to Open Source. Rather, it has a certain symmetry to Open Core:

  • In an Open Core product, you have a “scoped core”: a core built from open source code which is surrounded by specifically scoped pieces of proprietary code, for an indeterminate, but usually many-year or perpetual timeframe
  • With Fair Source, you have a “timed core”: the open source core is all the code that’s more than 2 years old, and the proprietary bits are the most recent developments (regardless of which scope they belong to).

Open Core and Fair Source both try to balance open source with business interests: both have an open source component to attract a community, and a proprietary shell to make a business more viable. Fair Source is a licensing choice that’s only relevant to businesses, not individuals. How many businesses monetize pure Open Source software? I can count them on one hand. The vast majority go for something like Open Core. This is why the comparison with Open Core makes much more sense.

A lot of the criticisms of Fair Source suddenly become a lot more palatable when you consider it an alternative to Open Core.

As a customer, which is more tolerable: proprietary features, or a proprietary 2 years’ worth of product developments? I don’t think it matters nearly as much as some of the advantages Fair Source has over Open Core:

  • Users can view, modify and distribute (but not commercialize) the proprietary code. (with Open Core, you only get the binaries)
  • It follows, then, that the project can use a single repository and a single license (with Open Core, there are multiple repositories and licenses involved)

Technically, Open Core is more of a business architecture, where you still have to figure out which licenses you want to use for the core and shell, whereas Fair Source is more of a prepackaged solution which defines the business architecture as well as the 2 licenses to use.


Note that you can also devise hybrid approaches. Here are some ideas:

  • a Fair Source core with a Closed Source shell (more defensive than Open Core or Fair Source separately); e.g. PowerSync does this.
  • an Open Source core with a Fair Source shell (more open than Open Core or Fair Source separately).
  • an Open Source core with a Source Available shell (users can view, modify and distribute the code but not commercialize it, and without the delayed open source publication). This would be the "true" symmetrical counterpart to Fair Source. It is essentially Open Core where the community also has access to the proprietary features (but can't commercialize them). It would also allow putting all code in the same repository (although this benefit works better with Fair Source, because any contributed code will definitely become open source, thus incentivizing the community more). I find this a very interesting option that I hope Open Core vendors will start considering, although it has little to do with Fair Source.
  • etc.

Non-Competition

The FSL introduction post states:

In plain language, you can do anything with FSL software except economically undermine its producer through harmful free-riding

The issue of large cloud vendors selling your software as a service, making money, and contributing little to nothing back to the project, has been widely discussed under a variety of names. This can indeed severely undermine a project’s health, or kill it.

(Personally, I don't find discussions around whether this is "fair" very useful. Businesses will act in their best interest; you can't change the rules of the game, you only have control over how you play it, in other words your own licensing and strategy.)

Here, we'll just use the same terminology that the FSL does: the "harmful free-rider" problem.

However, the statement above is incorrect. Something like this would be more correct:

In plain language, you can do anything with FSL software except offer a similar paid service based on the software when it’s less than 2 years old.

What’s the difference? There are different forms of competition that are not harmful free-riding.

Multiple companies can offer a similar service/product based on the same project, which they all contribute to. They can synergize and grow the market together ("non-zero-sum", if you want to sound smart). I think there are many good examples of this, e.g. Hadoop, Linux, Node.js, OpenStack, OpenTelemetry, Prometheus, etc.

When the FSL website makes statements such as "You can do anything with FSL software except undermine its producer", it seems to forget that some of the best and most ubiquitous software in the world is the result of synergies between multiple companies collaborating.

Furthermore, when the company that owns the copyright on the project turns its back on its community/customers, wouldn't the community "deserve" a new player who offers a similar service, but on friendly terms? The new player may even contribute more to the project. Are they a harmful free-rider? Who gets to be the judge of that?

Let’s be clear, FSL allows no competition whatsoever, at least not during the first 2 years. What about after 2 years?

Zeke Gabrielse, one of the shepherds of Fair Source, said it well here:

Being 2 years old also puts any SaaS competition far enough back to not be a concern

Therefore, you may as well say no competition is allowed. Although, in Zeke's post, I presume he was writing from the perspective of an actively developed software project. If the project becomes abandoned, the 2-year countdown is an obstacle, a surmountable one that eventually does let you compete, but in that case the copyright holder has probably gone bust, so you aren't really competing with them either. The 2-year window is not designed to enable competition; instead it is a contingency plan for when the company goes bankrupt. The wait can be needlessly painful for the community in such a situation. If a company is about to go bust, it could immediately release its Fair Source code as Open Source, but I wonder if this could be automated via the actual license text.

(I had found some ambiguous use of the term “direct” competition which I’ve reported and has since been resolved)

Perverse incentives

Humans are notoriously bad at predicting second-order effects, so I like to try. What could be some second-order effects of Fair Source projects? And how do they compare to Open Core?

  • Can companies first grow on top of their Fair Source codebase, take community contributions, and then switch to more restrictive or completely closed licensing, shutting out the community? Yes, if a CLA is in place (or by using the 2-year-old code). This isn't any different from any other CLA-using Open Source or Open Core project, though with Open Core you can't take in external contributions on proprietary parts to begin with.
  • If you enjoy a privileged position where others can't meaningfully compete with you based on the same source code, that can affect how the company treats its community and its customers. It can push through undesirable changes, it can price more aggressively, etc. (These issues are the same with Open Core.)
  • With Open Source & Open Core, the company is incentivized to make the code well understood by the community. Under Fair Source it would still be sensible (in order to get free contributions), but at the same time, by hiding design documents, subtly obfuscating the code and withholding information, it can also give itself an edge for when the code does become Open Source, although as we've seen, the 2-year delay makes competition unrealistic anyway.

All in all, nothing particularly worse than Open Core, here.

Developer sustainability

The FSL introduction post says:

We value user freedom and developer sustainability. Free and Open Source Software (FOSS) values user freedom exclusively. That is the source of its success, and the source of the free-rider problem that occasionally boils over into a full-blown tragedy of the commons, such as Heartbleed and Log4Shell.

F/OSS indeed doesn't involve itself with sustainability, for the simple reason that Open Source has nothing to do with business models and monetization. As stated above, it makes more sense to compare to Open Core.

It’s like saying asphalt paving machinery doesn’t care about funding and is therefore to blame when roads don’t get built. Therefore we need tolls. But it would be more useful to compare tolls to road taxes and vignettes.

Of course it happens that people dedicate themselves to writing open source projects, usually driven by their interests, don't get paid, and receive volumes of support requests (including from commercial entities). This can become suffering, and can also lead to codebases becoming critically important yet critically misunderstood and fragile. This is clearly a situation to avoid, and there are many ways to address it, ranging from sponsorships (e.g. GitHub, Tidelift), bounty programs (e.g. Algora), direct funding (e.g. Sentry's 500k donation), and many more initiatives that have launched in the last few years. Certainly a positive development. Sometimes formally abandoning a project is also a clear signal that puts the burden of responsibility onto whoever consumes it, and can be a relief to the original author. If anything, it can trigger alarm bells within corporations and be a fast path to properly engaging and compensating the author. There is no way around the fact that developers (and people in general) are responsible for their own well-being and sometimes need to put their foot down, or put on their business hat (which many developers don't like to do), if their decision to open source a project is resulting in problems. No amount of licensing can change this hard truth.

Furthermore, you can make money via Open Core around OSI-approved open source projects (e.g. Grafana) or via consulting/support, and many companies pay developers to work on (pure) Open Source code (Meta, Microsoft and Google are the most famous ones, but there are many smaller ones). Companies that try to achieve sustainability (let alone thrive) on pure open source software for which they are the main or single driving force are extremely rare. (Chef tried, and now System Initiative is trying to do it better. I remain skeptical but am hopeful and am rooting for them to prove the model.)

Doesn’t it sound a bit ironic that the path to getting developers paid is releasing your software via a non-compete license?

Do we reach developer sustainability by preventing developers from making money on top of projects they want to - or already have - contribute(d) to?

Important caveats:

  • Fair Source does allow making money via consulting and auxiliary services related to the software.
  • Open Core shuts people out similarly, but many of the business models above don't.

CLA needed?

When a project uses an Open Source license with some restrictions (e.g. GPL with its copyleft) it is common to use a CLA such that the company backing it can use more restrictive or commercial licenses (either as a license change later on, or as dual licensing). With Fair Source (and indeed all Source Available licenses), this is also the case.

However, unlike Open Source licenses, with Fair Source / Source Available licenses a CLA becomes much more of a necessity, because such a license without a CLA isn't compatible with anything else, and the commercial FSL restriction may not always apply to outside contributions (it depends on e.g. whether they can be offered stand-alone). I'm not a lawyer; for more clarity you should consult one. I think the Fair Source website, or at least their adoption guide, should mention something about CLAs, because it's an important step beyond simply choosing a license and publishing, so I've raised this with them.

AGPL

The FSL website states:

AGPLv3 is not permissive enough. As a highly viral copyleft license, it exposes users to serious risk of having to divulge their proprietary source code.

This looks like fear mongering.

  • AGPL is not categorically less permissive than FSL. It is less permissive when the code is 2 years old or older (and the FSL has turned into MIT/Apache2). For current and recent code, AGPL permits competition; FSL does not.
  • The word "viral" is more divisive than accurate. In my mind, complying with AGPL is rather easy; my rule of thumb is that you trigger copyleft when you "ship". Most engineers have an intuitive understanding of what it means to "ship" a feature, whether that's in the cloud or on-prem. In my experience, people struggle more with patent clauses, or even the relation between trademarks and software licensing, than they do with copyleft. There's still some level of uncertainty and caution around AGPL, mainly due to its complexity. (Side note: Google and the CNCF don't allow copyleft licenses, and their portfolio doesn't have a whole lot of commercial success to show for it; I see mainly projects that can easily be picked up by Google.)

Heather Meeker, the lawyer consulted to draft the FSL, has spoken out against the virality discourse, tempering the FUD around AGPL.

Conclusion

I think Fair Source, the FSL and the FCL have a lot to offer. Throughout my analysis I may have raised some criticisms, but if anything, it reminds me of how much Open Core can suck (though it depends on the relative size of core vs shell). So I find it a very compelling alternative to Open Core. Despite some poor choices of wording, I find it well executed: it ties up a lot of loose ends from previous initiatives (Source Available, BSL and other custom licenses) into a neat package. Despite the need for a CLA, it's still quite easy to implement and is arguably more viable than Open Core is in its current state today. When comparing to Open Source, the main question is: which is worse, the "harmful free-rider" problem or the non-compete? (Anecdotally, my gut feeling says the former, but I'm on the lookout for data-driven evidence.) When comparing to Open Core, the main question is: is a business more viable keeping proprietary features closed, or making them source-available (non-compete)?

As mentioned, there are many more hybrid approaches possible. For a business thinking about their licensing strategy, it may make sense to think of these questions separately:

  • should our proprietary shell be time based or feature scoped? Does it matter?
  • should our proprietary shell be closed, or source-available?

I certainly would prefer to see companies and projects appear:

  • as Fair Source, rather than not at all
  • as Open Core, rather than not at all
  • as Fair Source, rather than Open Core (depending on “shell thickness”).
  • with more commercial restrictions from the get-go, instead of starting more permissively and re-licensing later. Just kidding, but that’s a topic for another day.

For vendors, I think there are some options left to explore, such as Open Core with a source-available (instead of closed) shell; something to consider for any company doing Open Core today. For end-users / customers, "Open Source" vendors are not the only ones to be taken with a grain of salt; it's the same with Fair Source vendors, since they may have a more complicated arrangement than just using a Fair Source license.

Thanks to Heather Meeker and Joseph Jacks for providing input, although this article reflects only my personal views.

August 25, 2024

I made some time to give some love to my own projects: I rewrote the Ansible role stafwag.ntpd and cleaned up some other Ansible roles.

There is some work ongoing for some other Ansible roles/projects, but this might be a topic for some other blog post(s) ;-)


stafwag.ntpd


An ansible role to configure ntpd/chrony/systemd-timesyncd.


This might be controversial, but I decided to add support for chrony and systemd-timesyncd. Ntpd is still supported and the default on the BSDs (FreeBSD, NetBSD, OpenBSD).

It's possible to switch the ntp implementation by using the ntpd.provider directive.
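A minimal sketch of how that could look when applying the role; the exact variable layout (an ntpd hash with a provider key) is assumed from the post's wording, and site.yml is a hypothetical playbook that includes the stafwag.ntpd role:

# Select chrony as the ntp implementation via extra vars (variable names assumed).
ansible-playbook site.yml -e '{"ntpd": {"provider": "chrony"}}'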

The Ansible role stafwag.ntpd v2.0.0 is available at:

Release notes

V2.0.0

  • Added support for chrony and systemd-timesyncd on GNU/Linux
    • systemd-timesyncd is the default on Debian GNU/Linux 12+ and Arch Linux
    • ntpd is the default on the BSDs, Solaris, and Debian GNU/Linux 10 and 11
    • chrony is the default on all other GNU/Linux distributions
    • The role now takes an ntpd hash as input
    • Updated README
    • CleanUp

stafwag.ntpdate


An ansible role to activate the ntpdate service on FreeBSD and NetBSD.


The ntpdate service is used on FreeBSD and NetBSD to sync the time during system boot-up. On most Linux distributions this is handled by chronyd or systemd-timesyncd now. The OpenBSD ntpd implementation, OpenNTPD, also has support for syncing the time during system boot-up.

The role is available at:

Release notes

V1.0.0

  • Initial release on Ansible Galaxy
    • Added support for NetBSD

stafwag.libvirt


An ansible role to install libvirt/KVM packages and enable the libvirtd service.


The role is available at:

Release notes

V1.1.3

  • Force bash for shell execution on Ubuntu.
    • Force bash for shell execution on Ubuntu, as the default dash shell doesn't support pipefail.

V1.1.2

  • CleanUp
    • Corrected ansible-lint errors
    • Removed install task "install/.yml"
      • This was introduced to support Kali Linux, Kali Linux is reported as “Debian” now.
      • It isn’t used in this role
    • Removed invalid CentOS-8.yml softlink
      • Removed the invalid soft link; CentOS 8 should be caught by RedHat-yum.yml

stafwag.cloud_localds


An ansible role to create cloud-init config disk images. This role is a wrapper around the cloud-localds command.
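For context, the underlying cloud-localds invocation looks roughly like this sketch; the file names are made up for illustration:

# Pack user-data (and optional meta-data) into a cloud-init seed image.
cloud-localds seed.img user-data.yml meta-data.yml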


It's still planned to add support for distributions that don't have cloud-localds in their official package repositories, like RedHat 8+.

See the GitHub issue: https://github.com/stafwag/ansible-role-cloud_localds/issues/7

The role is available at:

Release notes

V2.1.3

  • CleanUp
    • Switched to vars and package to install the required packages
    • Corrected ansible-lint errors
    • Added more examples

stafwag.qemu_img


An ansible role to create QEMU disk images.
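For context, the kind of command the role wraps is qemu-img create; a minimal sketch with an illustrative file name and size:

# Create a 20G qcow2 disk image (name and size are just examples).
qemu-img create -f qcow2 vm-disk.qcow2 20G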


The role is available at:

Release notes

V2.3.0

  • CleanUp Release
    • Added doc/examples
    • Updated meta data
    • Switched to vars and package to install the required packages
    • Corrected ansible-lint errors

stafwag.virt_install_import


An ansible role to import a virtual machine with the virt-install import command.
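For context, a virt-install import run looks roughly like the sketch below; the VM name, resources and disk path are made up for illustration:

# Import an existing disk image as a new libvirt guest.
virt-install --import \
  --name testvm \
  --memory 2048 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/testvm.qcow2 \
  --os-variant generic \
  --noautoconsole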


The role is available at:

Release notes

  • Use var and package to install pkgs
    • v1.2.1 wasn’t merged correctly. The release should fix it…
    • Switched to var and package to install the required packages
    • Updated meta data
    • Updated documentation and include examples
    • Corrected ansible-lint errors



Have fun!

August 13, 2024

Here’s a neat little trick for those of you using Home Assistant while also driving a Volvo.

To get your Volvo driving data (fuel level, battery state, …) into Home Assistant, there’s the excellent volvo2mqtt addon.

One little annoyance is that every time it starts up, you will receive an e-mail from Volvo with a two-factor authentication code, which you then have to enter in Home Assistant.

Fortunately, there's a solution for that: you can automate this using the built-in imap support of Home Assistant, with an automation such as this one:

alias: Volvo OTP
description: ""
trigger:
  - platform: event
    event_type: imap_content
    event_data:
      initial: true
      sender: no-reply@volvocars.com
      subject: Your Volvo ID Verification code
condition: []
action:
  - service: mqtt.publish
    metadata: {}
    data:
      topic: volvoAAOS2mqtt/otp_code
      payload: >-
        {{ trigger.event.data['text'] | regex_findall_index(find='Your Volvo ID verification code is:\s+(\d+)', index=0) }}
  - service: imap.delete
    data:
      entry: "{{ trigger.event.data['entry_id'] }}"
      uid: "{{ trigger.event.data['uid'] }}"
mode: single

This will post the OTP code to the right location and then delete the message from your inbox (if you’re using Google Mail, that means archiving it).



July 28, 2024


Updated @ Mon Sep 2 07:55:20 PM CEST 2024: Added devfs section
Updated @ Wed Sep 4 07:48:56 PM CEST 2024 : Corrected gpg-agent.conf


I use FreeBSD and GNU/Linux.

In a previous blog post, we set up GnuPG with smartcard support on Debian GNU/Linux.

In this blog post, we’ll install and configure GnuPG with smartcard support on FreeBSD.

The GNU/Linux blog post provides more details about GnuPG, so it might be useful for the FreeBSD users to read it first.

Likewise, Linux users are welcome to read this blog post if they’re interested in how it’s done on FreeBSD ;-)

Install the required packages

To begin, we need to install the required packages on FreeBSD.

Update the package database

Execute pkg update to update the package database.
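For example:

sudo pkg update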

Thunderbird

[staf@monty ~]$ sudo pkg install -y thunderbird
Password:
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Checking integrity... done (0 conflicting)
The most recent versions of packages are already installed
[staf@monty ~]$ 

lsusb

You can verify the USB devices on FreeBSD using the usbconfig command, or with lsusb, which is also available on FreeBSD as part of the usbutils package.

[staf@monty ~/git/stafnet/blog]$ sudo pkg install usbutils
Password:
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 3 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
	usbhid-dump: 1.4
	usbids: 20240318
	usbutils: 0.91

Number of packages to be installed: 3

301 KiB to be downloaded.

Proceed with this action? [y/N]: y
[1/3] Fetching usbutils-0.91.pkg: 100%   54 KiB  55.2kB/s    00:01    
[2/3] Fetching usbhid-dump-1.4.pkg: 100%   32 KiB  32.5kB/s    00:01    
[3/3] Fetching usbids-20240318.pkg: 100%  215 KiB 220.5kB/s    00:01    
Checking integrity... done (0 conflicting)
[1/3] Installing usbhid-dump-1.4...
[1/3] Extracting usbhid-dump-1.4: 100%
[2/3] Installing usbids-20240318...
[2/3] Extracting usbids-20240318: 100%
[3/3] Installing usbutils-0.91...
[3/3] Extracting usbutils-0.91: 100%
[staf@monty ~/git/stafnet/blog]$

GnuPG

We'll need GnuPG (of course), so ensure that it is installed.

[staf@monty ~]$ sudo pkg install gnupg
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Checking integrity... done (0 conflicting)
The most recent versions of packages are already installed
[staf@monty ~]$ 

Smartcard packages

To enable smartcard support on FreeBSD, we’ll need to install the smartcard packages. The same software as on GNU/Linux - opensc - is available on FreeBSD.

pkg provides

It’s handy to be able to check which packages provide certain files. On FreeBSD this is provided by the provides plugin. This plugin is not enabled by default in the pkg command.

To install the provides plugin, install the pkg-provides package.

[staf@monty ~]$ sudo pkg install pkg-provides
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        pkg-provides: 0.7.3_3

Number of packages to be installed: 1

12 KiB to be downloaded.

Proceed with this action? [y/N]: y
[1/1] Fetching pkg-provides-0.7.3_3.pkg: 100%   12 KiB  12.5kB/s    00:01    
Checking integrity... done (0 conflicting)
[1/1] Installing pkg-provides-0.7.3_3...
[1/1] Extracting pkg-provides-0.7.3_3: 100%
=====
Message from pkg-provides-0.7.3_3:

--
In order to use the pkg-provides plugin you need to enable plugins in pkg.
To do this, uncomment the following lines in /usr/local/etc/pkg.conf file
and add pkg-provides to the supported plugin list:

PKG_PLUGINS_DIR = "/usr/local/lib/pkg/";
PKG_ENABLE_PLUGINS = true;
PLUGINS [ provides ];

After that run `pkg plugins' to see the plugins handled by pkg.
[staf@monty ~]$ 

Edit the pkg configuration to enable the provides plug-in.

staf@freebsd-gpg:~ $ sudo vi /usr/local/etc/pkg.conf
PKG_PLUGINS_DIR = "/usr/local/lib/pkg/";
PKG_ENABLE_PLUGINS = true;
PLUGINS [ provides ];

Verify that the plugin is enabled.

staf@freebsd-gpg:~ $ sudo pkg plugins
NAME       DESC                                          VERSION   
provides   A plugin for querying which package provides a particular file 0.7.3     
staf@freebsd-gpg:~ $ 

Update the pkg-provides database.

staf@freebsd-gpg:~ $ sudo pkg provides -u
Fetching provides database: 100%   18 MiB   9.6MB/s    00:02    
Extracting database....success
staf@freebsd-gpg:~ $

Install the required packages

Let's check which packages provide the tools to set up the smartcard reader on FreeBSD, and install the required packages.

staf@freebsd-gpg:~ $ pkg provides "pkcs15-tool"
Name    : opensc-0.25.1
Comment : Libraries and utilities to access smart cards
Repo    : FreeBSD
Filename: usr/local/share/man/man1/pkcs15-tool.1.gz
          usr/local/etc/bash_completion.d/pkcs15-tool
          usr/local/bin/pkcs15-tool
staf@freebsd-gpg:~ $ 
staf@freebsd-gpg:~ $ pkg provides "bin/pcsc"
Name    : pcsc-lite-2.2.2,2
Comment : Middleware library to access a smart card using SCard API (PC/SC)
Repo    : FreeBSD
Filename: usr/local/sbin/pcscd
          usr/local/bin/pcsc-spy
staf@freebsd-gpg:~ $ 
[staf@monty ~]$ sudo pkg install opensc pcsc-lite
Password:
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Checking integrity... done (0 conflicting)
The most recent versions of packages are already installed
[staf@monty ~]$ 
staf@freebsd-gpg:~ $ sudo pkg install -y pcsc-tools
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Checking integrity... done (0 conflicting)
The most recent versions of packages are already installed
staf@freebsd-gpg:~ $ 
staf@freebsd-gpg:~ $ sudo pkg install -y ccid
Password:
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Checking integrity... done (0 conflicting)
The most recent versions of packages are already installed
staf@freebsd-gpg:~ $ 

USB

To use the smartcard reader we will need access to the USB devices as the user we use for our desktop environment. (No, this shouldn’t be the root user :-) )

permissions

verify

Execute the usbconfig command to verify that you can access the USB devices.

[staf@snuffel ~]$ usbconfig
No device match or lack of permissions.
[staf@snuffel ~]$ 

If you don’t have access, verify the permissions of the USB devices.

[staf@snuffel ~]$ ls -l /dev/usbctl
crw-r--r--  1 root operator 0x5b Sep  2 19:17 /dev/usbctl
[staf@snuffel ~]$  ls -l /dev/usb/
total 0
crw-------  1 root operator 0x34 Sep  2 19:17 0.1.0
crw-------  1 root operator 0x4f Sep  2 19:17 0.1.1
crw-------  1 root operator 0x36 Sep  2 19:17 1.1.0
crw-------  1 root operator 0x53 Sep  2 19:17 1.1.1
crw-------  1 root operator 0x7e Sep  2 19:17 1.2.0
crw-------  1 root operator 0x82 Sep  2 19:17 1.2.1
crw-------  1 root operator 0x83 Sep  2 19:17 1.2.2
crw-------  1 root operator 0x76 Sep  2 19:17 1.3.0
crw-------  1 root operator 0x8a Sep  2 19:17 1.3.1
crw-------  1 root operator 0x8b Sep  2 19:17 1.3.2
crw-------  1 root operator 0x8c Sep  2 19:17 1.3.3
crw-------  1 root operator 0x8d Sep  2 19:17 1.3.4
crw-------  1 root operator 0x38 Sep  2 19:17 2.1.0
crw-------  1 root operator 0x56 Sep  2 19:17 2.1.1
crw-------  1 root operator 0x3a Sep  2 19:17 3.1.0
crw-------  1 root operator 0x51 Sep  2 19:17 3.1.1
crw-------  1 root operator 0x3c Sep  2 19:17 4.1.0
crw-------  1 root operator 0x55 Sep  2 19:17 4.1.1
crw-------  1 root operator 0x3e Sep  2 19:17 5.1.0
crw-------  1 root operator 0x54 Sep  2 19:17 5.1.1
crw-------  1 root operator 0x80 Sep  2 19:17 5.2.0
crw-------  1 root operator 0x85 Sep  2 19:17 5.2.1
crw-------  1 root operator 0x86 Sep  2 19:17 5.2.2
crw-------  1 root operator 0x87 Sep  2 19:17 5.2.3
crw-------  1 root operator 0x40 Sep  2 19:17 6.1.0
crw-------  1 root operator 0x52 Sep  2 19:17 6.1.1
crw-------  1 root operator 0x42 Sep  2 19:17 7.1.0
crw-------  1 root operator 0x50 Sep  2 19:17 7.1.1

devfs

When the /dev/usb* devices are only accessible by the root user, you probably want to create a devfs.rules ruleset that grants permissions to the operator or another group.

See https://man.freebsd.org/cgi/man.cgi?devfs.rules for more details.

/etc/rc.conf

Update the /etc/rc.conf to apply custom devfs permissions.

[staf@snuffel /etc]$ sudo vi rc.conf
devfs_system_ruleset="localrules"
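Alternatively (not part of the original steps), the same rc.conf setting can be applied with sysrc, which we'll also use later on to enable pcscd:

sudo sysrc devfs_system_ruleset="localrules"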

/etc/devfs.rules

Create or update /etc/devfs.rules with the updated permissions to grant read/write access to the operator group.

[staf@snuffel /etc]$ sudo vi devfs.rules
[localrules=10]
add path 'usbctl*' mode 0660 group operator
add path 'usb/*' mode 0660 group operator

Restart the devfs service to apply the custom devfs ruleset.

[staf@snuffel /etc]$ sudo -i
root@snuffel:~ #
root@snuffel:~ # service devfs restart

The operator group should have read/write permissions now.

root@snuffel:~ # ls -l /dev/usb/
total 0
crw-rw----  1 root operator 0x34 Sep  2 19:17 0.1.0
crw-rw----  1 root operator 0x4f Sep  2 19:17 0.1.1
crw-rw----  1 root operator 0x36 Sep  2 19:17 1.1.0
crw-rw----  1 root operator 0x53 Sep  2 19:17 1.1.1
crw-rw----  1 root operator 0x7e Sep  2 19:17 1.2.0
crw-rw----  1 root operator 0x82 Sep  2 19:17 1.2.1
crw-rw----  1 root operator 0x83 Sep  2 19:17 1.2.2
crw-rw----  1 root operator 0x76 Sep  2 19:17 1.3.0
crw-rw----  1 root operator 0x8a Sep  2 19:17 1.3.1
crw-rw----  1 root operator 0x8b Sep  2 19:17 1.3.2
crw-rw----  1 root operator 0x8c Sep  2 19:17 1.3.3
crw-rw----  1 root operator 0x8d Sep  2 19:17 1.3.4
crw-rw----  1 root operator 0x38 Sep  2 19:17 2.1.0
crw-rw----  1 root operator 0x56 Sep  2 19:17 2.1.1
crw-rw----  1 root operator 0x3a Sep  2 19:17 3.1.0
crw-rw----  1 root operator 0x51 Sep  2 19:17 3.1.1
crw-rw----  1 root operator 0x3c Sep  2 19:17 4.1.0
crw-rw----  1 root operator 0x55 Sep  2 19:17 4.1.1
crw-rw----  1 root operator 0x3e Sep  2 19:17 5.1.0
crw-rw----  1 root operator 0x54 Sep  2 19:17 5.1.1
crw-rw----  1 root operator 0x80 Sep  2 19:17 5.2.0
crw-rw----  1 root operator 0x85 Sep  2 19:17 5.2.1
crw-rw----  1 root operator 0x86 Sep  2 19:17 5.2.2
crw-rw----  1 root operator 0x87 Sep  2 19:17 5.2.3
crw-rw----  1 root operator 0x40 Sep  2 19:17 6.1.0
crw-rw----  1 root operator 0x52 Sep  2 19:17 6.1.1
crw-rw----  1 root operator 0x42 Sep  2 19:17 7.1.0
crw-rw----  1 root operator 0x50 Sep  2 19:17 7.1.1
root@snuffel:~ # 

Make sure that you're part of the operator group.

staf@freebsd-gpg:~ $ ls -l /dev/usbctl 
crw-rw----  1 root operator 0x5a Jul 13 17:32 /dev/usbctl
staf@freebsd-gpg:~ $ ls -l /dev/usb/
total 0
crw-rw----  1 root operator 0x31 Jul 13 17:32 0.1.0
crw-rw----  1 root operator 0x53 Jul 13 17:32 0.1.1
crw-rw----  1 root operator 0x33 Jul 13 17:32 1.1.0
crw-rw----  1 root operator 0x51 Jul 13 17:32 1.1.1
crw-rw----  1 root operator 0x35 Jul 13 17:32 2.1.0
crw-rw----  1 root operator 0x52 Jul 13 17:32 2.1.1
crw-rw----  1 root operator 0x37 Jul 13 17:32 3.1.0
crw-rw----  1 root operator 0x54 Jul 13 17:32 3.1.1
crw-rw----  1 root operator 0x73 Jul 13 17:32 3.2.0
crw-rw----  1 root operator 0x75 Jul 13 17:32 3.2.1
crw-rw----  1 root operator 0x76 Jul 13 17:32 3.3.0
crw-rw----  1 root operator 0x78 Jul 13 17:32 3.3.1
staf@freebsd-gpg:~ $ 

You’ll need to be part of the operator group to access the USB devices.

Execute the vigr command and add the user to the operator group.

staf@freebsd-gpg:~ $ sudo vigr
operator:*:5:root,staf
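Alternatively (not part of the original steps), FreeBSD's pw utility can add the user to the group non-interactively; a sketch assuming the user is staf:

# Add user staf to the operator group.
sudo pw groupmod operator -m staf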

Relogin and check that you are in the operator group.

staf@freebsd-gpg:~ $ id
uid=1001(staf) gid=1001(staf) groups=1001(staf),0(wheel),5(operator)
staf@freebsd-gpg:~ $ 

The usbconfig command should work now.

staf@freebsd-gpg:~ $ usbconfig
ugen1.1: <Intel UHCI root HUB> at usbus1, cfg=0 md=HOST spd=FULL (12Mbps) pwr=SAVE (0mA)
ugen2.1: <Intel UHCI root HUB> at usbus2, cfg=0 md=HOST spd=FULL (12Mbps) pwr=SAVE (0mA)
ugen0.1: <Intel UHCI root HUB> at usbus0, cfg=0 md=HOST spd=FULL (12Mbps) pwr=SAVE (0mA)
ugen3.1: <Intel EHCI root HUB> at usbus3, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen3.2: <QEMU Tablet Adomax Technology Co., Ltd> at usbus3, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (100mA)
ugen3.3: <QEMU Tablet Adomax Technology Co., Ltd> at usbus3, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (100mA)
staf@freebsd-gpg:~ $ 

SmartCard configuration

Verify the USB connection

The first step is to ensure your smartcard reader is detected on a USB level. Execute usbconfig and lsusb and make sure your smartcard reader is listed.

usbconfig

List the USB devices.

[staf@monty ~/git]$ usbconfig
ugen1.1: <Intel EHCI root HUB> at usbus1, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen0.1: <Intel XHCI root HUB> at usbus0, cfg=0 md=HOST spd=SUPER (5.0Gbps) pwr=SAVE (0mA)
ugen2.1: <Intel EHCI root HUB> at usbus2, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen2.2: <Integrated Rate Matching Hub Intel Corp.> at usbus2, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen1.2: <Integrated Rate Matching Hub Intel Corp.> at usbus1, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen0.2: <AU9540 Smartcard Reader Alcor Micro Corp.> at usbus0, cfg=0 md=HOST spd=FULL (12Mbps) pwr=ON (50mA)
ugen0.3: <VFS 5011 fingerprint sensor Validity Sensors, Inc.> at usbus0, cfg=0 md=HOST spd=FULL (12Mbps) pwr=ON (100mA)
ugen0.4: <Centrino Bluetooth Wireless Transceiver Intel Corp.> at usbus0, cfg=0 md=HOST spd=FULL (12Mbps) pwr=ON (0mA)
ugen0.5: <SunplusIT INC. Integrated Camera> at usbus0, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (500mA)
ugen0.6: <X-Rite Pantone Color Sensor X-Rite, Inc.> at usbus0, cfg=0 md=HOST spd=LOW (1.5Mbps) pwr=ON (100mA)
ugen0.7: <GemPC Key SmartCard Reader Gemalto (was Gemplus)> at usbus0, cfg=0 md=HOST spd=FULL (12Mbps) pwr=ON (50mA)
[staf@monty ~/git]$ 

lsusb

[staf@monty ~/git/stafnet/blog]$ lsusb
Bus /dev/usb Device /dev/ugen0.7: ID 08e6:3438 Gemalto (was Gemplus) GemPC Key SmartCard Reader
Bus /dev/usb Device /dev/ugen0.6: ID 0765:5010 X-Rite, Inc. X-Rite Pantone Color Sensor
Bus /dev/usb Device /dev/ugen0.5: ID 04f2:b39a Chicony Electronics Co., Ltd 
Bus /dev/usb Device /dev/ugen0.4: ID 8087:07da Intel Corp. Centrino Bluetooth Wireless Transceiver
Bus /dev/usb Device /dev/ugen0.3: ID 138a:0017 Validity Sensors, Inc. VFS 5011 fingerprint sensor
Bus /dev/usb Device /dev/ugen0.2: ID 058f:9540 Alcor Micro Corp. AU9540 Smartcard Reader
Bus /dev/usb Device /dev/ugen1.2: ID 8087:8008 Intel Corp. Integrated Rate Matching Hub
Bus /dev/usb Device /dev/ugen2.2: ID 8087:8000 Intel Corp. Integrated Rate Matching Hub
Bus /dev/usb Device /dev/ugen2.1: ID 0000:0000  
Bus /dev/usb Device /dev/ugen0.1: ID 0000:0000  
Bus /dev/usb Device /dev/ugen1.1: ID 0000:0000  
[staf@monty ~/git/stafnet/blog]$ 

Check the GnuPG smartcard status

Let’s check if we get access to our smart card with gpg.

This might work if you have a natively supported GnuPG smartcard.

[staf@monty ~]$ gpg --card-status
gpg: selecting card failed: Operation not supported by device
gpg: OpenPGP card not available: Operation not supported by device
[staf@monty ~]$ 

In my case, it doesn't work. I prefer the OpenSC interface; this might be useful if you want to use your smartcard for other purposes.

opensc

Enable pcscd

FreeBSD has a handy tool, sysrc, to manage rc.conf.

Enable the pcscd service.

[staf@monty ~]$ sudo sysrc pcscd_enable=YES
Password:
pcscd_enable: NO -> YES
[staf@monty ~]$ 

Start the pcscd service.

[staf@monty ~]$ sudo /usr/local/etc/rc.d/pcscd start
Password:
Starting pcscd.
[staf@monty ~]$ 

Verify smartcard access

pcsc_scan

The pcsc-tools package provides a tool, pcsc_scan, to verify the smartcard readers.

Execute pcsc_scan to verify that your smartcard is detected.

[staf@monty ~]$ pcsc_scan 
PC/SC device scanner
V 1.7.1 (c) 2001-2022, Ludovic Rousseau <ludovic.rousseau@free.fr>
Using reader plug'n play mechanism
Scanning present readers...
0: Gemalto USB Shell Token V2 (284C3E93) 00 00
1: Alcor Micro AU9540 01 00
 
Thu Jul 25 18:42:34 2024
 Reader 0: Gemalto USB Shell Token V2 (<snip>) 00 00
  Event number: 0
  Card state: Card inserted, 
  ATR: <snip>

ATR: <snip>
+ TS = 3B --> Direct Convention
+ T0 = DA, Y(1): 1101, K: 10 (historical bytes)
  TA(1) = 18 --> Fi=372, Di=12, 31 cycles/ETU
    129032 bits/s at 4 MHz, fMax for Fi = 5 MHz => 161290 bits/s
  TC(1) = FF --> Extra guard time: 255 (special value)
  TD(1) = 81 --> Y(i+1) = 1000, Protocol T = 1 
-----
  TD(2) = B1 --> Y(i+1) = 1011, Protocol T = 1 
-----
  TA(3) = FE --> IFSC: 254
  TB(3) = 75 --> Block Waiting Integer: 7 - Character Waiting Integer: 5
  TD(3) = 1F --> Y(i+1) = 0001, Protocol T = 15 - Global interface bytes following 
-----
  TA(4) = 03 --> Clock stop: not supported - Class accepted by the card: (3G) A 5V B 3V 
+ Historical bytes: 00 31 C5 73 C0 01 40 00 90 00
  Category indicator byte: 00 (compact TLV data object)
    Tag: 3, len: 1 (card service data byte)
      Card service data byte: C5
        - Application selection: by full DF name
        - Application selection: by partial DF name
        - EF.DIR and EF.ATR access services: by GET DATA command
        - Card without MF
    Tag: 7, len: 3 (card capabilities)
      Selection methods: C0
        - DF selection by full DF name
        - DF selection by partial DF name
      Data coding byte: 01
        - Behaviour of write functions: one-time write
        - Value 'FF' for the first byte of BER-TLV tag fields: invalid
        - Data unit in quartets: 2
      Command chaining, length fields and logical channels: 40
        - Extended Lc and Le fields
        - Logical channel number assignment: No logical channel
        - Maximum number of logical channels: 1
    Mandatory status indicator (3 last bytes)
      LCS (life card cycle): 00 (No information given)
      SW: 9000 (Normal processing.)
+ TCK = 0C (correct checksum)

Possibly identified card (using /usr/local/share/pcsc/smartcard_list.txt):
<snip>
        OpenPGP Card V2

 Reader 1: Alcor Micro AU9540 01 00
  Event number: 0
  Card state

pkcs15

pkcs15 is the application interface for hardware tokens while pkcs11 is the low-level interface.

You can use pkcs15-tool -D to verify that your smartcard is detected.

staf@monty ~]$ pkcs15-tool -D
Using reader with a card: Gemalto USB Shell Token V2 (<snip>) 00 00
PKCS#15 Card [OpenPGP card]:
        Version        : 0
        Serial number  : <snip>
        Manufacturer ID: ZeitControl
        Language       : nl
        Flags          : PRN generation, EID compliant


PIN [User PIN]
        Object Flags   : [0x03], private, modifiable
        Auth ID        : 03
        ID             : 02
        Flags          : [0x13], case-sensitive, local, initialized
        Length         : min_len:6, max_len:32, stored_len:32
        Pad char       : 0x00
        Reference      : 2 (0x02)
        Type           : UTF-8
        Path           : 3f00
        Tries left     : 3

PIN [User PIN (sig)]
        Object Flags   : [0x03], private, modifiable
        Auth ID        : 03
        ID             : 01
        Flags          : [0x13], case-sensitive, local, initialized
        Length         : min_len:6, max_len:32, stored_len:32
        Pad char       : 0x00
        Reference      : 1 (0x01)
        Type           : UTF-8
        Path           : 3f00
        Tries left     : 0

PIN [Admin PIN]
        Object Flags   : [0x03], private, modifiable
        ID             : 03
        Flags          : [0x9B], case-sensitive, local, unblock-disabled, initialized, soPin
        Length         : min_len:8, max_len:32, stored_len:32
        Pad char       : 0x00
        Reference      : 3 (0x03)
        Type           : UTF-8
        Path           : 3f00
        Tries left     : 0

Private RSA Key [Signature key]
        Object Flags   : [0x03], private, modifiable
        Usage          : [0x20C], sign, signRecover, nonRepudiation
        Access Flags   : [0x1D], sensitive, alwaysSensitive, neverExtract, local
        Algo_refs      : 0
        ModLength      : 3072
        Key ref        : 0 (0x00)
        Native         : yes
        Auth ID        : 01
        ID             : 01
        MD:guid        : <snip>

Private RSA Key [Encryption key]
        Object Flags   : [0x03], private, modifiable
        Usage          : [0x22], decrypt, unwrap
        Access Flags   : [0x1D], sensitive, alwaysSensitive, neverExtract, local
        Algo_refs      : 0
        ModLength      : 3072
        Key ref        : 1 (0x01)
        Native         : yes
        Auth ID        : 02
        ID             : 02
        MD:guid        : <snip>

Private RSA Key [Authentication key]
        Object Flags   : [0x03], private, modifiable
        Usage          : [0x200], nonRepudiation
        Access Flags   : [0x1D], sensitive, alwaysSensitive, neverExtract, local
        Algo_refs      : 0
        ModLength      : 3072
        Key ref        : 2 (0x02)
        Native         : yes
        Auth ID        : 02
        ID             : 03
        MD:guid        : <snip>

Public RSA Key [Signature key]
        Object Flags   : [0x02], modifiable
        Usage          : [0xC0], verify, verifyRecover
        Access Flags   : [0x02], extract
        ModLength      : 3072
        Key ref        : 0 (0x00)
        Native         : no
        Path           : b601
        ID             : 01

Public RSA Key [Encryption key]
        Object Flags   : [0x02], modifiable
        Usage          : [0x11], encrypt, wrap
        Access Flags   : [0x02], extract
        ModLength      : 3072
        Key ref        : 0 (0x00)
        Native         : no
        Path           : b801
        ID             : 02

Public RSA Key [Authentication key]
        Object Flags   : [0x02], modifiable
        Usage          : [0x40], verify
        Access Flags   : [0x02], extract
        ModLength      : 3072
        Key ref        : 0 (0x00)
        Native         : no
        Path           : a401
        ID             : 03

[staf@monty ~]$ 

GnuPG configuration

First test

Stop (kill) the scdaemon, to ensure that the scdaemon tries to use the opensc interface.

[staf@monty ~]$ gpgconf --kill scdaemon
[staf@monty ~]$ 
[staf@monty ~]$ ps aux | grep -i scdaemon
staf  9236  0.0  0.0   12808   2496  3  S+   20:42   0:00.00 grep -i scdaemon
[staf@monty ~]$ 

Try to read the card status again.

[staf@monty ~]$ gpg --card-status
gpg: selecting card failed: Operation not supported by device
gpg: OpenPGP card not available: Operation not supported by device
[staf@monty ~]$ 

Reconfigure GnuPG

Go to the .gnupg directory in your $HOME directory.

[staf@monty ~]$ cd .gnupg/
[staf@monty ~/.gnupg]$ 

scdaemon

Reconfigure scdaemon to disable the internal ccid and enable logging - always useful to verify why something isn’t working…

[staf@monty ~/.gnupg]$ vi scdaemon.conf
disable-ccid

verbose
debug-level expert
debug-all
log-file    /home/staf/logs/scdaemon.log

gpg-agent

Enable debug logging for the gpg-agent.

[staf@monty ~/.gnupg]$ vi gpg-agent.conf
debug-level expert
verbose
verbose
log-file /home/staf/logs/gpg-agent.log

Verify

Stop the scdaemon.

[staf@monty ~/.gnupg]$ gpgconf --kill scdaemon
[staf@monty ~/.gnupg]$ 

If everything goes well, gpg will detect the smartcard.

If not, you now have logging in place to do some debugging ;-)

[staf@monty ~/.gnupg]$ gpg --card-status
Reader ...........: Gemalto USB Shell Token V2 (<snip>) 00 00
Application ID ...: <snip>
Application type .: OpenPGP
Version ..........: 2.1
Manufacturer .....: ZeitControl
Serial number ....: 000046F1
Name of cardholder: <snip>
Language prefs ...: nl
Salutation .......: Mr.
URL of public key : <snip>
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: xxxxxxx xxxxxxx xxxxxxx
Max. PIN lengths .: 32 32 32
PIN retry counter : 3 0 3
Signature counter : 80
Signature key ....: <snip>
      created ....: <snip>
Encryption key....: <snip>
      created ....: <snip>
Authentication key: <snip>
      created ....: <snip>
General key info..: [none]
[staf@monty ~/.gnupg]$ 

Test

shadow private keys

After you executed gpg --card-status, GnuPG created "shadow private keys". These keys just contain references to the hardware token on which the private keys are stored.

[staf@monty ~/.gnupg]$ ls -l private-keys-v1.d/
total 14
-rw-------  1 staf staf 976 Mar 24 11:35 <snip>.key
-rw-------  1 staf staf 976 Mar 24 11:35 <snip>.key
-rw-------  1 staf staf 976 Mar 24 11:35 <snip>.key
[staf@monty ~/.gnupg]$ 

You can list the (shadow) private keys with the gpg --list-secret-keys command.
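For example (-K is the short form):

gpg --list-secret-keys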

Pinentry

To be able to type in your PIN code, you’ll need a pinentry application unless your smartcard reader has a pinpad.

You can use pkg provides to verify which pinentry applications are available.

For the integration with Thunderbird, you probably want to have a graphical version. But that is a topic for the next blog post ;-)

We’ll stick with the (n)curses version for now.

Install a pinentry program.

[staf@monty ~/.gnupg]$ pkg provides pinentry | grep -i curses
Name    : pinentry-curses-1.3.1
Comment : Curses version of the GnuPG password dialog
Filename: usr/local/bin/pinentry-curses
[staf@monty ~/.gnupg]$ 
[staf@monty ~/.gnupg]$ sudo pkg install pinentry-curses
Password:
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Checking integrity... done (0 conflicting)
The most recent versions of packages are already installed
[staf@monty ~/.gnupg]$ 

A soft link is created for the pinentry binary. On FreeBSD, the pinentry soft link is managed by the pinentry package.

You can verify this with the pkg which command.

[staf@monty ~]$ pkg which /usr/local/bin/pinentry
/usr/local/bin/pinentry was installed by package pinentry-1.3.1
[staf@monty ~]$ 

The curses version is the default.

If you want to use another pinentry version, set it in the gpg-agent configuration ($HOME/.gnupg/gpg-agent.conf):

pinentry-program <PATH>

Import your public key

Import your public key.

[staf@monty /tmp]$ gpg --import <snip>.asc
gpg: key <snip>: public key "<snip>" imported
gpg: Total number processed: 1
gpg:               imported: 1
[staf@monty /tmp]$ 

List the public keys.

[staf@monty /tmp]$ gpg --list-keys
/home/staf/.gnupg/pubring.kbx
-----------------------------
pub   XXXXXXX XXXX-XX-XX [SC]
      <snip>
uid           [ unknown] <snip>
sub   XXXXXXX XXXX-XX-XX [A]
sub   XXXXXXX XXXX-XX-XX [E]

[staf@monty /tmp]$ 

As a test, we try to sign something with the private key on our GnuPG smartcard.

Create a test file.

[staf@monty /tmp]$ echo "foobar" > foobar
[staf@monty /tmp]$ 
[staf@monty /tmp]$ gpg --sign foobar

If your smartcard isn't inserted, GnuPG will ask you to insert it.

GnuPG asks for the smartcard with the serial number stored in the shadow private key.


                ┌────────────────────────────────────────────┐
                │ Please insert the card with serial number: │
                │                                            │
                │ XXXX XXXXXXXX                              │
                │                                            │
                │                                            │
                │      <OK>                      <Cancel>    │
                └────────────────────────────────────────────┘


Type in your PIN code.



               ┌──────────────────────────────────────────────┐
               │ Please unlock the card                       │
               │                                              │
               │ Number: XXXX XXXXXXXX                        │
               │ Holder: XXXX XXXXXXXXXX                      │
               │ Counter: XX                                  │
               │                                              │
               │ PIN ________________________________________ │
               │                                              │
               │      <OK>                        <Cancel>    │
               └──────────────────────────────────────────────┘


[staf@monty /tmp]$ ls -l foobar*
-rw-r-----  1 staf wheel   7 Jul 27 11:11 foobar
-rw-r-----  1 staf wheel 481 Jul 27 11:17 foobar.gpg
[staf@monty /tmp]$ 

In the next blog post in this series, we'll configure Thunderbird to use the smartcard for OpenPGP email encryption.

Have fun!

Links

July 26, 2024

We saw in the previous post how we can deal with data stored in the new VECTOR datatype that was released with MySQL 9.0. We implemented the 4 basic mathematical operations between two vectors. To do so we created JavaScript functions. MySQL JavaScript functions are available in MySQL HeatWave and MySQL Enterprise Edition (you can […]

July 25, 2024

MySQL 9.0.0 has brought the VECTOR datatype to your favorite Open Source Database. There are already some functions available to deal with those vectors: This post will show how to deal with vectors and create our own functions to create operations between vectors. We will use the MLE Component capability to create JavaScript functions. JS […]

July 23, 2024

Keeping up appearances in tech

The word "rant" is used far too often, and in various ways.
It's meant to imply aimless, angry venting.

But often it means:

Naming problems without proposing solutions,
this makes me feel confused.

Naming problems and assigning blame,
this makes me feel bad.

I saw a remarkable pair of tweets the other day.

In the wake of the outage, the CEO of CrowdStrike sent out a public announcement. It's purely factual. The scope of the problem is identified, the known facts are stated, and the logistics of disaster relief are set in motion.


  CrowdStrike is actively working with customers impacted by a defect found in a single content update for Windows hosts. Mac and Linux hosts are not impacted.

  This is not a security incident or cyberattack. The issue has been identified, isolated and a fix has been deployed.

  We refer customers to the support portal for the latest updates and will continue to provide complete and continuous updates on our website. We further recommend organizations ensure they’re communicating with CrowdStrike representatives through official channels.

  Our team is fully mobilized to ensure the security and stability of CrowdStrike customers.

Millions of computers were affected. This is the equivalent of a frazzled official giving a brief statement in the aftermath of an earthquake, directing people to the Red Cross.

Everything is basically on fire for everyone involved. Systems are failing everywhere, some critical, and quite likely people are panicking. The important thing is to give the technicians the information and tools to fix it, and for everyone else to do what they can, and stay out of the way.

In response, a communication professional posted an 'improved' version:


  I’m the CEO of CrowdStrike. I’m devastated to see the scale of today’s outage and will be personally working on it together with our team until it’s fully fixed for every single user.

  But I wanted to take a moment to come here and tell you that I am sorry. People around the world rely on us, and incidents like this can’t happen. This came from an error that ultimately is my responsibility. 

  Here’s what we know: [brief synopsis of what went wrong and how it wasn’t a cyberattack etc.]

  Our entire team will be working all day, all night, all weekend, and however long it takes to resolve this and make sure it doesn’t happen again.

  We’ll be sharing updates as often as possible, which you can find here [link]. If you need to contact us, the quickest way is to go here [link].

  We’re responding as quickly as possible. Thank you to everyone who has alerted us to the outage, and again, please accept my deepest apologies. More to come soon.

Credit where credit is due, she nailed the style. 10/10. It seems unobjectionable, at first. Let's go through, shall we?

Hyacinth fixing her husband's tie

Opposite Day

First is that the CEO is "devastated." A feeling. And they are personally going to ensure it's fixed for every single user.

This focuses on the individual who is inconvenienced. Not the disaster. They take a moment out of their time to say they are so, so sorry a mistake was made. They have let you and everyone else down, and that shouldn't happen. That's their responsibility.

By this point, the original statement had already told everyone the relevant facts. Here the technical details are left to the imagination. The writer's self-assigned job is to wrap the message in a more palatable envelope.

Everyone will be working "all day, all night, all weekends," indeed, "however long it takes," to avoid it happening again.

I imagine this is meant to be inspiring and reassuring. But if I was a CrowdStrike technician or engineer, I would find it demoralizing: the boss, who will actually be personally fixing diddly-squat, is saying that the long hours of others are a sacrifice they're willing to make.

Plus, CrowdStrike's customers are in the same boat: their technicians get volunteered too. They can't magically unbrick PCs from a distance, so "until it's fully fixed for every single user" would be a promise outsiders will have to keep. Lovely.

There's even a punch line: an invitation to go contact them, the quickest way linked directly. It thanks people for reaching out.

If everything is on fire, that includes the phone lines, the inboxes, and so on. The most stupid thing you could do in such a situation is to tell more people to contact you, right away. Don't encourage it! That's why the original statement refers to pre-existing lines of communication, internal representatives, and so on. The Support department would hate the CEO too.

Hyacinth and Richard peering over a fence

Root Cause

If you're wondering about the pictures, it's Hyacinth Bucket, from 90s UK sitcom Keeping Up Appearances, who would always insist "it's pronounced Bouquet."

Hyacinth's ambitions always landed her out of her depth, surrounded by upper-class people she's trying to impress, in the midst of an embarrassing disaster. Her increasingly desperate attempts to save face, which invariably made things worse, are the main source of comedy.

Try reading that second statement in her voice.

I’m devastated to see the scale of today’s outage and will be personally working on it together with our team until it’s fully fixed for every single user.

But I wanted to take a moment to come here and tell you that I am sorry. People around the world rely on us, and incidents like this can’t happen. This came from an error that ultimately is my responsibility.

I can hear it perfectly, telegraphing Britishness to restore dignity for all. If she were in tech she would give that statement.

It's about reputation management first, projecting the image of competence and accountability. But she's giving the speech in front of a burning building, not realizing the entire exercise is futile. Worse, she thinks she's nailing it.

If CrowdStrike had sent this out, some would've applauded and called it an admirable example of wise and empathetic communication. Real leadership qualities.

But it's the exact opposite. It focuses on the wrong things, it alienates the staff, and it definitely amplifies the chaos. It's Monty Python-esque.

Apologizing is pointless here, the damage is already done. What matters is how severe it is and whether it could've been avoided. This requires a detailed root-cause analysis and remedy. Otherwise you only have their word. Why would that re-assure you?

The original restated the company's mission: security and stability. Those are the stakes to regain a modicum of confidence.

You may think that I'm reading too much into this. But I know the exact vibe on an engineering floor when the shit hits the fan. I also know how executives and staff without that experience end up missing the point entirely. I once worked for a Hyacinth Bucket. It's not an anecdote, it's allegory.

They simply don't get the engineering mindset, and confuse authority with ownership. They step on everyone's toes without realizing, because they're constantly wearing clown shoes. Nobody tells them.

Hyacinth is not happy

Softness as a Service

The change in style between #1 and #2 is really a microcosm of the conflict that has been broiling in tech for ~15 years now. I don't mean the politics, but the shifting of norms, of language and behavior.

It's framed as a matter of interpersonal style, which needs to be welcoming and inclusive. In practice this means they assert or demand that style #2 be the norm, even when #1 is advisable or required.

Factuality is seen as deficient, improper and primitive. It's a form of doublethink: everyone's preference is equally valid, except yours, specifically.

But the difference is not a preference. It's about what actually works and what doesn't. Style #1 is aimed at the people who have to fix it. Style #2 is aimed at the people who can't do anything until it's fixed. Who should they be reaching out to?

In #2, communication becomes an end in itself, not a means of conveying information. It's about being seen saying the words, not living them. Poking at the statement makes it fall apart.

When this becomes the norm in a technical field, it has deep consequences:

  • Critique must be gift-wrapped in flattery, and is not allowed to actually land.
  • Mistakes are not corrected, and sentiment takes precedence over effectiveness.
  • Leaders speak lofty words far from the trenches to save face.
  • The people they thank the loudest are the ones they pay the least.

Inevitably, quiet competence is replaced with gaudy chaos. Everyone says they're sorry and responsible, but nobody actually is. Nobody wants to resign either. Sound familiar?

Onslow

Cope and Soothe

The elephant in the room is that #1 is very masculine, while #2 is more feminine. When you hear "women are more empathetic communicators", this is what it means. They tend to focus on the individual and their relation to them, not the team as a whole and its mission.

Complaints that tech is too "male dominated" and "notoriously hostile to women" are often just this. Tech was always full of types who won't preface their proposals and criticisms with fluff, and instead lean into autism. When you're used to being pandered to, neutrality feels like vulgarity.

The notable exceptions are rare and usually have an exasperating lead-up. Tech is actually one of the most accepting and egalitarian fields around. The maintainers do a mostly thankless job.

"Oh so you're saying there's no misogyny in tech?" No I'm just saying misogyny doesn't mean "something 1 woman hates".

The tone is really a distraction. If someone drops an analysis, saying shit or get off the pot, even very kindly and patiently, some will still run away screaming. Like an octopus spraying ink, they'll deploy a nasty form of #2 as a distraction. That's the real issue.

Many techies, in their naiveté, believed the cultural reformers when they showed up to gentrify them. They obediently branded heretics like James Damore, and burned witches like Richard Stallman. Thanks to racism, words like 'master' and 'slave' are now off-limits as technical terms. Ironic, because millions of computers just crashed because they worked exactly like that.

Django commit replacing master/slave
Guys, I'm stuck in the WeWork lift.

The cope is to pretend that nothing has truly changed yet, and that more reform is needed. In fact, everything has already changed. Tech forums used to be crucibles for distilling insight, but now they are guarded jealously by people more likely to flag and ban than to strongly disagree.

I once got flagged on HN because I pointed out that Twitter's mass layoffs were a response to overhiring, and that people were rooting for the site to fail after Musk bought it. That comment suggested what we all know now: that the company would not implode after trimming the dead weight, and that they'd never forgive him for it.

Diversity is now associated with incompetence, because incompetent people have spent over a decade reaching for it as an excuse. In their attempts to fight stereotypes, they ensured the stereotypes came true.

Hyacinth is not happy

Bait and Snitch

The outcry tends to be: "We do all the same things you do, but still we get treated differently!" But they start from the conclusion and work their way backwards. This is what the rewritten statement does: it tries to fix the relationship before fixing the problem.

The average woman and man actually do things very differently in the first place. Individual men and women choose their own approach, and others respond accordingly. The people who build and maintain the world's infrastructure prefer the masculine style for a reason: it keeps civilization running, and helps restore it when it breaks. A disaster announcement does not need to be relatable; it needs to be effective.

Furthermore, if the job of shoveling shit falls on you, no amount of flattery or oversight will make that more pleasant. It really won't. Such commentary is purely for the benefit of the ones watching and trying to look busy. It makes it worse; stop pretending otherwise.

There's little loyalty in tech companies nowadays, and it's no surprise. Project and product managers act more like demanding clients of their own team than like leaders. "As a user, I want..." Yes, but what are you going to do about it? Do you even know where to start?

What's perceived as a lack of sensitivity is actually the presence of sensibility. It's what connects the words to the reality on the ground. It does not need to be improved or corrected, it just needs to be respected. And yes it's a matter of gender, because bashing men and masculine norms has become a jolly recreational sport in the overculture. Mature women know it.

It seems impossible to admit. The entire edifice of gender equality depends on there not being a single thing men are actually better at, even just on average. Where men and women's instincts differ, women must be right.

It's childish, and not harmless either. It dares you to call it out, so they can then play the wounded victim, and paint you as the unreasonable asshole who is mean. This is supposed to invalidate the argument.

* * *

This post is of course a giant cannon pointing in the opposite direction, sitting on top of a wall. Its message will likely fly over the reformers' heads.

If they read it at all, they'll selectively quote or paraphrase, call me a tech-bro, and spool off some sentences they overheard, like an LLM. It's why they adore AI, and want it to be exactly as sycophantic as them. They don't care that it makes stuff up wholesale, because it makes them look and feel competent. It will never tell them to just fuck off already.

Think less about what is said, more about what is being done. Otherwise the next CrowdStrike will probably be worse.

July 10, 2024

MySQL HeatWave 9.0 was released under the banner of artificial intelligence. It includes a VECTOR datatype and can easily process and analyze vast amounts of proprietary unstructured documents in object storage, using HeatWave GenAI and Lakehouse. Oracle Cloud Infrastructure also provides a wonderful GenAI Service, and in this post, we will see how to use […]
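
As a rough illustration of the new VECTOR datatype, here is a minimal Python sketch, assuming a reachable MySQL 9.x / HeatWave instance and the mysql-connector-python driver. The endpoint, credentials, table name (docs) and column name (embedding) are hypothetical; STRING_TO_VECTOR and VECTOR_TO_STRING are the conversion helpers that ship alongside the VECTOR type in MySQL 9.0. The GenAI and Lakehouse processing described above is a separate HeatWave layer on top of this basic type.

```python
# Minimal sketch (not from the post): storing and reading back a VECTOR column
# on a MySQL 9.x / HeatWave instance via mysql-connector-python.
import mysql.connector

# Hypothetical connection details -- replace with your own endpoint.
conn = mysql.connector.connect(
    host="heatwave.example.com",
    user="demo",
    password="secret",
    database="demo",
)
cur = conn.cursor()

# VECTOR(3) holds a fixed-length float vector; real embeddings are much wider.
cur.execute(
    "CREATE TABLE IF NOT EXISTS docs (id INT PRIMARY KEY, embedding VECTOR(3))"
)

# STRING_TO_VECTOR converts a JSON-style array literal to the binary vector format.
cur.execute(
    "INSERT INTO docs (id, embedding) VALUES (%s, STRING_TO_VECTOR(%s))",
    (1, "[0.12, -0.51, 0.33]"),
)
conn.commit()

# VECTOR_TO_STRING renders the stored vector back as readable text.
cur.execute("SELECT id, VECTOR_TO_STRING(embedding) FROM docs")
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()
```

Real embeddings would of course be much wider than three dimensions and would typically be produced by an embedding model such as the OCI GenAI Service mentioned above.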

June 15, 2024

 

A book written by a doctor who has ADD himself, and it shows.

I was annoyed in the first half by his incessant use of anecdotes to prove that ADD is not genetic. It felt like he had to convince himself, and it read as an excuse for his actions as a father.

He uses clear and obvious examples of how not to raise a child (often with himself as the child or the father) to play on the reader's emotions. Most of these examples are not even related to ADD.

But in the end, and definitely in the second half, it is a good book. Most people will recognize several of the situations, and it often makes one think about life choices and the interpretation of actions and emotions.

So for those who get past the disorganization (yes, there are parts and chapters in this book, but most of it feels randomly organized), the second half of the book is a worthy, thought-provoking read.