Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

November 26, 2021

We rode in a converted German missile launcher over Langjökull, Iceland's second-largest glacier.

The glacier is massive: 50 kilometers long, 20 kilometers wide, and the ice is up to 580 meters thick.

A converted German missile launcher in front of the entrance of the glacier.

Through a small entrance, we descended into a man-made tunnel. We walked through various tunnels and caves, and experienced blue ice deep inside the glacier.

One of the tunnels in the glacier, lit with Christmas lights.

November 22, 2021

There is a common misconception that large open source projects are well-funded. In practice, many rely on a small group of maintainers.

The PHP programming language is one of them. Despite being used by more than 75% of the web, PHP has only a few full-time contributors.

That is why the PHP Foundation is launching. Its mission: "to ensure the long life and prosperity of the PHP language".

Acquia is proud to support the PHP Foundation by contributing $25,000 to the foundation, alongside Automattic, JetBrains, Laravel and others. Our donations will help fund the development of PHP.

PHP is vital to the functioning of governments, schools, non-profits, private companies, public companies, and much more. If your organization relies on PHP, I'd encourage you to make a contribution.

Large open source projects like PHP need meaningful, long-term support. I remain very passionate about how to make Open Source production more sustainable, more fair, more egalitarian, and more cooperative. It will be interesting to see how the PHP Foundation develops.

This weekend I created a topics page for my site. It aspires to the simplicity of printed works.

It's already been a great way to re-discover old blog posts. Quite a few blog posts brought a smile to my face.

November 21, 2021

My dad loves working on old cars, and my kids have always been fascinated by it. Great memories!

Stan sitting behind the steering wheel of Opa's Citroën 2CV.
Opa showing Stan how to use the kludge of his Citroën 2CV.
Stan and Opa in a Citroën 2CV.

About two weeks ago, OSMC happened: the Open Source Monitoring Conference, a conference that normally takes place every year, and of which I am a habitual visitor and speaker. The conference takes place in Nuremberg, Germany, and runs for three days: Tuesday, Wednesday and Thursday. Like most conferences, OSMC 2020 didn't happen, but the 2021 edition was able to run. It was a close call, as the situation in Germany changed over the weekend before the conference. The complete conference was "3G" safe, and while we did need to wear masks for a lot of the conference, it was still doable, and there was a safe zone where you didn't need to wear the mask all of the time. There was also enough opportunity for hallway-track discussions and extracurricular activities.

On day one of the conference, the opening covered the logistics of the event and how happy we all were to be back at an in-person conference. Next I learned about Merlin and Naemon in a good presentation; the demo failed, but it was still interesting. After that I saw Fue talk about contributing to open source. While I already contribute to open source in many ways, it is always good to hear others' opinions, and this presentation was a good reminder that open source is made up of more than just code: any contribution is welcome, from documentation to hosting events, spreading the word, and being a good user. After lunch I saw the ignite talks. Lennart presented his Icinga-Installer, more as a lightning talk; then Bram did his "Overengineering your personal website", which adds more complexity with every iteration. Finally Kris did his "Dashboard as Code", a nice tool. Impressively, the tool that runs the ignites seemed to have an NTP issue and didn't adhere to the 15-seconds-per-slide setting, but all went well. I then went to "Monitoring Open Infrastructure", a talk I had already seen from Marcelo, so I switched to Bram's talk, "Gamification of Observability", in which he advises training for outages the way firefighters do. We have never actually tried this ourselves; I have always been able to convince customers to create check lists and emergency lists, similar to what pilots have, so that an outage means grabbing that check list. While we do not advise customers to train like this as a general practice, simulations are advised, and Bram's presentation highlights why training is vital, and not just for operators. I then saw the presentation on Thola, a tool which I have played with, but not in full detail; the presentation gave a good overview of the tool and how to extend it, a very nice piece of software. And to close the first day, the usual "Current State of Icinga", where Bernd presented two years of Icinga development in one hour. It was very interesting, with a lot of German jokes.

On day two, I opened with "Monitoring Open Source Hardware", in which I spoke about open source hardware and the choices available today, the projects in the works, like OPAL, an open source firmware for POWER systems, and the LibreBMC project, an open source hardware project, and then how monitoring can be used from the inside out, letting us get more information with less overhead. After my talk, I saw "Open Source Application Performance", in which a large Java stack gets monitored using open source tooling. I then saw Philipp's talk about observability, in which he reminds us that tools are just tools: similar to using Linux, where the distribution you favour doesn't make a difference, or to DevOps, where the tooling you choose doesn't mean anything unless you use it correctly. After that I saw "Still directing the director", in which the Icinga Director, Icinga Business Processes, and Ansible are glued together. Then Kris gave his "Observability will not fix your broken Monitoring", in which he spoke about the misconceptions around a hype. The hyped idea that observability will solve problems like monitoring, alerting, or culture does not hold; there is no magic. If you want to achieve real observability, you need more than just traditional monitoring, logging, and alerting tools that work together; you need to use those tools to gain better insights, which you then use to better understand and improve your infrastructure. Last, I saw the "Icinga for Windows" presentation, an update on the work around Icinga 2 on Windows.

Overall, the whole conference went smoothly and the talks were good, even though the number of attendees was lower than in a normal year. We all enjoyed the conference, and it was good to be at an in-person event again.

November 20, 2021

Nowadays it is impossible to ignore open source, or to prevent it from being active within the enterprise world. Even if a company only wants to use commercially backed solutions, many, if not most, of these are built with and use open source software.

However, open source is more than just a way of sourcing code. With a good statement on how the company wants to deal with open source, what it wants to support, etc., engineers and developers can better understand what they can do to support their business further.

In many cases, companies will draft up an open source policy, and in this post I want to share some practices I've learned on how to draft such a policy.

Assess the current situation

When drafting a policy, make sure you know what the current situation is. Especially when the policy might be very restrictive, you might face a huge backlash from the organization if the policy does not reflect reality. If that is the case, and the policy still needs to go through, proper communication and grooming will be needed (and of course, the "upper management hammer" can help out as well).

Often, higher management is not aware of the current situation either. They might think that open source is hardly in use. Presenting them with facts and figures not only makes it more understandable, it will also support the need for a decent open source policy.

When you have a good view on the current usage, you can use that to track where you want to go. For instance, if your company wants to adopt open source more actively and pursue open source contributions, you might want to report on the currently detected contributions, and use that for follow-up later.

Get HR and compliance involved

Before you embark on the journey of developing a decent open source policy, make sure you have management support on this, as well as people from HR and your compliance department (unless your policy will be extremely restrictive, but let's hope that is not the case).

You will need (legal &) compliance involved in order to draft and assess the impact of internal developers and engineers working on open source projects, as well as of the same people working on open source projects in their free time. These are different use cases, but both have to be assessed regardless.

HR is generally involved at a later stage, so they know how the company wants to deal with open source development. This could be useful for recruitment, but also for HR to understand what the policy is about in case of issues.

An important consideration to assess is how the company, and the contractual obligations that the employees have, deals with intellectual property. In some companies, the contract allows for the employees to retain the intellectual property rights for their creations outside of company projects. However, that is not always the case, and in certain sectors intellectual property might be assumed to be owned by the company whenever the creation is something in which the company is active. And that might be considered very broadly (such as anything IT related for employees of an IT company).

The open source policy that you develop should take into account what the contractual stipulations say, and clarify for engineers and developers how the company considers intellectual property ownership. This is important, as it defines who can decide to contribute something to open source.

Understand and simplify license requirements

Many of the decisions that the open source policy has to clarify will be related to the open source licenses in use. Moreover, it might even be relevant to define what open source is to begin with.

A good source to use is the Open Source Definition as published and maintained by the Open Source Initiative (OSI). Another definition is the one by the Free Software Foundation titled "What is free software and why is it so important for society".

The license is the agreement, put out by the owner of the software, that declares how users can use that software. Most, if not all, software that a company uses will have a license, open source or not. But most commercial software titles have specific licenses that you need to go through for each specific product, as the licenses are not reused. In the open source world, licenses are reused, so end users do not need to go through product-specific terms.

The OSI organization has a list of approved licenses. However, even amongst these licenses, you will find different types of licenses out there. While they are commonly grouped into copyleft and permissive open source licenses, there are two main categories within the copyleft licenses that you need to understand:

  • strong copyleft licenses that require making all source code available upon distribution, or sometimes even disclosure of the application base
  • "scoped" copyleft licenses that require making only the source code available of the modules or libraries that use the open source license (especially if you modified them) without impacting the entire application

While the term "strong copyleft" is something that I think is somewhat generally accepted (such as in the Snyk article "Open Source Licenses: Types and Comparison" or in Wikipedia's article), I do not like to use its opposite "weak" term, as the licenses themselves do not reduce the open source identity from the code. Instead, they make sure the scope of the license is towards a particular base (such as a library) and not the complete application that uses the license.

Hence, open source policies might want to focus on those three license types for each of the use cases:

  • permissive licenses, like Apache License 2.0 or MIT
  • scoped copyleft licenses, like LGPL or EPL-2.0
  • strong copyleft licenses, like GPL or AGPL

Differentiate on the different open source use cases

There are several use cases that the policy will need to tackle. These are, in no particular order:

  • Using off-the-shelf, ready-to-use open source products
  • Using off-the-shelf libraries and modules for development
  • Using open source code
  • Contributing to open source projects for company purposes
  • Contributing to open source projects for personal/private purposes
  • Launching and maintaining open source projects from the company

Each of these use cases might have its own specific focus. Combine that with the license categories listed earlier, and you can start assessing how to deal with these situations.

For instance, you might want to have a policy that generally boils down to the following:

  • When using off-the-shelf, ready-to-use open source products, all types of products are allowed, assuming the organization remains able to support the technologies adopted. Furthermore, the products have to be known by the inventory and asset tooling used by the company.
  • When using libraries or modules in development projects, only open source products with permissive or scoped copyleft licenses can be used. Furthermore, the libraries or modules have to be well managed (kept up-to-date) and known by the inventory and asset tooling used by the company.
  • When using open source code, only code that is published with a permissive license can be used. At all times, a reference towards the original author has to be retained.
  • When contributing to open source projects for company purposes, approval has to be given by the hierarchical manager of the team. Contributions have to be tagged appropriately as originating from the company (e.g. using the company e-mail address as author). Furthermore, employees are not allowed to contribute code or intellectual property that is deemed a competitive advantage for the company.
  • When contributing to open source projects for personal/private purposes, employees are prohibited to use code from the company or to do contributions using their company's e-mail address. However, the company does not claim ownership on the contributions an employee does outside the company's projects and hours.
  • When creating new projects or publishing internal projects as open source, sufficient support for the project has to be granted from the company, and the publications are preferentially done within the same development services (like version control) under management of the company. This ensures consistency and control over the company's assets and liability. Projects have to use a permissive license (and perhaps even a single, particular license).

Or, if the company actively pursues an open source first strategy:

  • Off-the-shelf, ready-to-use open source products are preferred over proprietary products. Internal support teams must be able to deal with general maintenance and updates. The use of commercially backed products is not mandatory, but might help when there is a need for acquiring short-term support (such as through independent consultants).
  • Development projects must use projects that use permissive or scoped copyleft licenses for the libraries and dependencies of that project. Only when the development project itself uses a strong copyleft license are dependencies with (the same) strong copyleft license allowed. Approval to use a strong copyleft license is left to the management board.
  • Engineers and developers retain full intellectual property rights to their contributions. However, a Contributor License Agreement (CLA) is used to grant the company the rights to use and distribute the contributions under the license mentioned, as well as initiate or participate in legal actions related to the contributed code.

Clarify what is allowed to be contributed and what not

In the above example I already included a "do not contribute code that is deemed a competitive advantage" statement. While it may seem common sense, companies need to clarify this (if they follow this principle) in their policies so they are not liable for problems later on.

A competitive advantage primarily focuses on a company's crown jewels, but can be extended with code or other intellectual property (like architectural information, documentation, etc.) that refers to indirect advantageous solutions. If a company is a strong data-driven company that gains massive insights from data, it might refuse to share its artificial intelligence related code.

There are other principles that might decide whether code is contributed or not. For instance, the company might only want to contribute code that has received all the checks and controls to ensure it is secure, effective and efficient, and understandable and well-written. After all, when such contributions are made in the name of the company, the quality of that code reflects upon the company as well.

I strongly suggest including examples in the open source policy to clarify or support certain statements.

Assess the maturity of an open source product

When supporting the use of open source products, the policy will also have to decide which open source products can be used and which ones can't. Now, it is not feasible to create an exhaustive list (and that would defeat the purpose of an open source policy anyway). Instead, I recommend clarifying how stakeholders can assess whether an open source product can be used or not.

Personally, I consider this from a "maturity" point of view. Open source products that are mature are less likely to become a liability within a larger company, whereas products that have only a single maintainer (like my own cvechecker project) are not to be used without understanding the consequences.

So, what is a mature open source project? There are online resources that could help you out (like the Qualipso-originated Open Source Maturity Model (OSMM)), but personally I tend to look at the following principles:

  • The project has active development, with more than 5 active contributors in the last three months (a quick heuristic for checking this is sketched after this list).
  • The project is visibly used by several other projects or products.
  • The project has well-maintained documentation, both for developers and for users. This can very well be a decent wiki site.
  • The project has an active support community, with not only an issue system, but also interactive services like forums, IRC, Slack, Discord, etc.
  • The project supports more than one major version in parallel, and has a clear lifecycle for its support (such as "major version is supported up to at least 1 year after the next major version is released").
  • The project publishes its artefacts in a controlled and secure manner.
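
For the first of these criteria, a rough, hypothetical heuristic is to count recent committers in the project's git history. A sketch, not part of any formal maturity model:

# Count distinct committers over the last three months.
$ git shortlog -sn --since="3 months ago" | wc -l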

A policy is just the beginning, not the end

As always, there will be situations where a company wants to allow a one-off deviation from the policy. Hence, make clear how such deviations can be requested and handled.

For instance, you might want to position an architecture review board to support deviations from the license usage. When you do, make sure that this governance body knows how to deal with such deviations - understanding what licenses are, what the impact might be towards the organization, etc.

Furthermore, once the policy is ready to be made available, make sure you have support for that policy in the organization, as well as supporting tools and processes.

You might want to foster an internal community to support open source/free software endeavors. This community can help other stakeholders with the assessment of a product's maturity, or with license identification.

You might want to make sure you can track license usage in projects and deployments. For software development projects, there are plenty of commercial and free services that scan and present license usage (and other details) for a project. Inventory and asset management utilities often also include identification of detected software. Validate that you can report on open source usage if the demand comes up, and that you can support development and engineering teams in ensuring open source usage is in line with the company's expectations.
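
As an illustration, the open source ScanCode toolkit can generate such a license report from a source tree. A sketch, assuming a Python environment (the project path is a placeholder):

$ pip install scancode-toolkit
$ scancode --license --json-pp license-report.json ./my-project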

The company might want to dedicate resources to additional leakage detection and prevention measures for its open source contributions. While the company might already have code scanning techniques in place in its on-premise version control system, it might be interesting to extend this service to the public services (like GitHub and GitLab). And with that, I do not mean using the same tools and integrations, but rather something on a functional level.

Finishing off

A few companies, and most governmental organizations, publish their open source policies online. The TODO Group has graciously drafted a list of examples and templates to use. They might be a good resource when drafting up your own.

Having a clear and understandable open source policy simplifies discussions, and with the appropriate support within the organization it might jumpstart initiatives even further. Assuming the policy is sufficiently supportive of open source, having it published might eliminate the fear of engineers and developers to suggest certain open source projects.

Feedback? Comments? Don't hesitate to drop me an email, or join the discussion on Twitter.

November 19, 2021

When I have a little bit of time, I enjoy working on my website. I sand it down, polish it, or smoothen a rough edge.

Just this week I added a search feature to my blog posts page.

I often have to find links to old blog posts. To do so, I navigate to https://dri.es/blog, which provides a long list of every blog post I've ever written. From there, I type ⌘-F to use my browser's built-in search.

I like that my blog posts page is one long list that is easy to search. What I don't like is that when there is more than one match, iterating through the results causes the search experience to jump around.

The new search feature smoothens that rough edge. Instead of jumping around, it filters the list. It creates a nice overview. Try it out, and you'll see that it makes a big difference.

Amazing what 30 lines of vanilla JavaScript code can do!

November 18, 2021

Over 20 years have passed since I first saw Magnolia, which was not only a beautiful movie, but also a testament to what a great songwriter Aimee Mann is. “I See You” is a song from her new album “Queens of the Summer Hotel”, and it is just as great!

Source

November 17, 2021

Security professionals are high-profile users and virtualization is a key component of our labs. Many of us are also fans of MacBook laptops. But since Apple started to roll out its new computers with M1 processors, we are facing a major issue… The M1 is an ARM-based chipset, and this architecture has a huge impact on virtualization… Let's be clear: today, there is no way to easily run a classic (Intel) Windows guest on an M1-based MacBook! We see blog posts here and there that explain how to install an ARM version of Windows 11 on a new MacBook, but it remains impractical to run your best tools on it. How can we deal with this?

My current MacBook Pro is one year old and pretty powerful (64GB RAM and 2TB of storage). I have no plans to change it in the coming months, but who knows 😉 When the time for a change comes, there will be no alternative (because I love MacBooks) and I'll switch to an M1-based setup. That's why I decided to prepare for the future and change the way I'm working. I'm teaching the SANS FOR610 training, and we use a malware analysis lab based on two virtual machines: one Windows and one Linux (based on REMnux).

The idea is to get rid of the virtual machines on my MacBook and run them on a light device that I can bring with me when travelling. Let's review the hardware I chose. My first idea was to use an Intel NUC, but it was difficult to find one with multiple NICs onboard. After some research, I found the following MiniPC on Amazon:

MiniPC picture

The hardware specifications are more than enough to run a hypervisor:

  • Intel CPU with all virtualization features
  • 2 x NICs (1Gbps & 2.5Gbps)
  • Wireless
  • Enough USB ports
  • 16GB memory
  • 512GB SSD
  • HDMI w/4K support (ok, less interesting for virtualization)

It’s possible to extend the memory by replacing the modules, and a free slot is present to host an extra SSD!

My first choice was to use ESXi (the free version), but I faced a problem with the network chipsets. The 1Gbps port is based on a Realtek chipset and the 2.5Gbps one on an Intel chipset. I was able to generate a customized ESXi 6.7 image with the Realtek driver, but not the Intel one. The Intel driver is available with ESXi 7.0, but... not the Realtek one! After testing multiple images, I gave up and decided to switch to something else. The perfect candidate was Proxmox! This hypervisor had already been mentioned multiple times in my entourage. Based on Debian, this distribution offers a complete virtualization solution, and it was able to detect and use all three NICs:

root@pve0:~# ip a|grep s0
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
3: enp2s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
4: wlp3s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000

I won’t describe the installation of Proxmox; it’s pretty straightforward: create a bootable USB drive, boot it, and follow the wizard. The configuration is very simple: no cluster, nothing special. Once the setup is ready, the hypervisor is able to boot automatically without a keyboard and a screen. My network setup is the following: the 1Gbps NIC is dedicated to management and has a fixed IP address. It will be available on my home network and, when traveling, I’ll just need a crossover cable between my MacBook and the MiniPC. The 2.5Gbps NIC is dedicated to guests that need to be connected to the Internet.

Network schema

The lab used by students during the FOR610 class must be disconnected from the Internet and any other network for security reasons: we use it to analyze pieces of malware. The first thing to do is to create a new, isolated network. In /etc/network/interfaces, add the following lines and restart the network:

auto vmbr1
iface vmbr1 inet static
address 10.0.0.1/24
bridge_ports none
bridge_stp off
bridge_fd 0
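
To apply the change without a reboot, something like the following should work (Proxmox ships the ifupdown2 tools; the command is assumed to be present on your version):

# Reload the network configuration so the new isolated bridge comes up.
ifreload -a
# Verify that vmbr1 is now present.
ip addr show vmbr1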

Then, I installed the two guests. Because SANS supports only the VMware hypervisor, the virtual machines are provided as VMware guests. The first step is to convert the disk images from VMDK to QCOW2. Because I don’t like to install specific tools on my MacBook, I’m a big fan of Docker containers. You can use a simple container that offers the qemu toolbox and directly convert the image:

$ docker run -v $(pwd):/data --rm heinedej/docker-qemu-utils \
      qemu-img convert \
               -f vmdk /data/REMnux-disk.vmdk \
               -O qcow2 /data/REMnux-disk.qcow2

Once the conversion has completed, transfer the .qcow2 file (use scp) into /root/imported-disks/ on your Proxmox host.
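
For example, using the hostname from the prompt shown earlier:

$ scp REMnux-disk.qcow2 root@pve0:/root/imported-disks/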

It’s now time to create the two guests. Start with a standard config (assign resources like cores and memory depending on your future usage). Be sure to select the right bridge (the one created just above) for isolation. You will have to create a disk, but we will delete it later; just create a disk of a few gigabytes.

My REMnux guest looks like this:

REMnux config

Note: I had to change the display driver from “default” to “VMware compatible” to be able to boot the guest. The same goes for the SCSI controller.

And my REMWorkstation guest:

REMworkstation config

Once the guests are created, we must import the converted disks into the existing VMs. SSH to the Proxmox host and attach the disk images to the newly created guests:

$ cd /root/imported-disks
$ qm importdisk <vm-id> <disk>.qcow2 local-lvm

Detach (and delete) the original disk created during the initial configuration, change the boot order, and boot the guests. The last step is to configure the network to allow connectivity between the two guests. Configure a fixed IP address on REMnux and on REMworkstation. Usually, I use the bridge network plus the VM ID: 10.0.0.100 & 10.0.0.101. Don’t forget to configure the REMnux IP address as DNS server and default gateway on REMworkstation!
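
These reconfiguration steps can also be done from the shell. A sketch, assuming VM ID 100 and the disk name that qm importdisk reported (the --boot syntax may differ on older Proxmox releases):

$ qm set 100 --scsi0 local-lvm:vm-100-disk-1   # attach the imported disk
$ qm set 100 --boot order=scsi0                # make it the boot device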

Last steps:

  • Fine tune your hosts
  • Create your initial snapshot
  • Enable auto-start of both guests

Happy reversing!

Note: This setup can be deployed in a cloud environment or a colocation server.

The post Portable Malware Analysis Lab appeared first on /dev/random.

November 10, 2021

Just when I was starting to get a good old-fashioned cold I heard this on the radio while in the car. It didn’t stop me from going into hibernation for a couple of days, but man what a great tune!

Source

I published the following diary on isc.sans.edu: “Shadow IT Makes People More Vulnerable to Phishing“:

Shadow IT is a real problem in many organizations. Behind this term, we speak about pieces of hardware or software that are installed by users without the approval of the IT department. In many cases, shadow IT is used because internal IT teams are not able to provide tools in time. Think about a user who needs to safely exchange files with partners and no tool is available. A change request will be created to deploy one but, with the lack of (time|money|resources), the project will take time. Unfortunately, the user needs the tool now, so an alternative path will be used like a cloud file sharing service… [Read more]

The post [SANS ISC] Shadow IT Makes People More Vulnerable to Phishing appeared first on /dev/random.

November 08, 2021

I am not an advocate for hybrid cloud architectures. Or at least, not the definition of hybrid cloud that assumes one environment (cloud or on-premises) is just an extension of another. While such architectures seem simple and fruitful (you can easily add some capacity in the other environment to handle burst load), they are complex beasts to tame.

Every now and then I run into some awesome open source project on GitHub, that is written in some cool programming language, and it assumes that the development tools for that language are already installed. My assumption is that they have a specific target audience in mind: an already existing developer community around that specific language. People who already have those tools installed.

The annoying thing is when someone like me, who doesn’t really need to know if a thing is written in Python or Ruby or JavaScript or whatever, tries to follow instructions like these:

$ pip install foo
Command 'pip' not found
$ gem install bar
Command 'gem' not found
$ yarn install baz
Command 'yarn' not found
$ ./configure && make && sudo make install
Command 'make' not found

By now, I already know that I first need to do sudo apt install python3-pip (or the equivalent installation commands for RubyGems, Yarn, build-essential,…). I also understand that, within the context of a specific developer community, this is so obvious that it is often assumed. That being said, I am making a promise:
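
For reference, a sketch of those Ubuntu equivalents (package names are my assumption and may vary per release):

$ sudo apt install python3-pip      # provides pip
$ sudo apt install ruby-full        # provides gem
$ sudo apt install yarnpkg          # provides yarn (the binary may be named yarnpkg)
$ sudo apt install build-essential  # provides make, gcc, ...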

For every open source project that I will henceforth publish online (on Github or any other code sharing platforms), I promise to do the following things:
(1) Test the installation on at least one clean installed operating system – which will be documented.
(2) Include full installation steps in the documentation, including all frameworks, development tools, etc. that would otherwise be assumed.
(3) Where possible and useful, provide an installation script.

The operating system I’m currently targeting is Ubuntu, which means I’ll include apt commands. I’m counting on Continuous Integration to help me test on other operating systems that I don’t personally use.
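
As a minimal sketch, such an installation script for a Python project on Ubuntu could look like this (file names and dependencies are illustrative):

#!/usr/bin/env bash
set -euo pipefail
# Install system-level build tools and the Python package manager.
sudo apt update
sudo apt install -y python3-pip build-essential
# Install the project's own dependencies (assumed to live in requirements.txt).
pip3 install -r requirements.txt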

The post A small rant about dependencies (and a promise) appeared first on amedee.be.

I published the following diary on isc.sans.edu: “(Ab)Using Security Tools & Controls for the Bad“:

As security practitioners, we give daily advice to our customers to increase the security level of their infrastructures. Install this tool, enable this feature, disable this function, etc. When enabled, these techniques can also be (ab)used by attackers to perform nasty actions.

PAM, or Pluggable Authentication Modules, is an old authentication system that has been around since 1997! It allows you to extend the authentication capabilities of a system to interconnect with third-party systems. PAM is available on all Linux flavors and is used, amongst plenty of others, by the SSH daemon. By default, SSH allows you to authenticate via credentials or a key, but there are plenty of other ways to authenticate a user: via a centralized DB (LDAP, RADIUS, Kerberos), against proprietary databases, and much more. It can also be used to raise the security level by implementing MFA (“Multi-Factor Authentication”). In 2009(!), I already wrote a blog post explaining how to use a Yubikey as the second factor via PAM… [Read more]

The post [SANS ISC] (Ab)Using Security Tools & Controls for the Bad appeared first on /dev/random.

November 06, 2021

This train today was 22 minutes late in Berchem, and did NOT continue to Antwerpen-Centraal... yet in the NMBS statistics it counts as 'on time'. A bunch of liars at the NMBS, indeed.



And then the conductor even dares to announce that the train 'exceptionally' stops in Berchem instead of at the Antwerpen-Centraal terminus, while this happens regularly.

Is it really that hard to publish correct (and honest) data?

November 02, 2021

I reread the magnificent conclusion of Joseph Campbell's The Hero with a Thousand Faces. The essence of a book lies in its conclusion; everything else only prepares the reader to understand and appreciate that conclusion. Rereading Campbell, I realize that what I believed to be a quest to improve my writing is becoming a universal quest, a plunge that draws me into reflecting on my place in society, on the role of society, on the essence of the individual, on the quest for identity.

Who am I? Who are we? How can reading and writing answer this question? How can this quest comfort me in my own mortality, and in the mortality of all of humanity, now symbolized by my children?

Doing without screens, doing without interaction, allows us to contemplate the infinite chasm of our questions, of our anxieties. A frightening, dangerous contemplation. Is it any wonder that the majority take refuge in the crutches of work, overactivity, and comforting stupefaction? What we call boredom or loneliness, I now call consciousness and truth.

But how could I judge those who unconsciously wish to turn away from it? I have been part of that group; I still am. I struggle to extricate myself, to find or regain an illusory clarity of mind that some have never lost and that others will never be able to imagine.

An excerpt from my private journal, published on my wife's recommendation.

Receive my posts by email or RSS. At most two posts per week, nothing else. Your email address is never shared and is permanently deleted when you unsubscribe. Latest book: Printeurs, a cyberpunk thriller. To support the author, read, give, and share books.

This text is published under the CC-By BE license.

November 01, 2021

We now invite proposals for developer rooms for FOSDEM 2022. Invitations to submit proposals for other elements of the event will follow in the next couple of weeks. FOSDEM offers open source and free software developers a place to meet, share ideas and collaborate. Renowned for being highly developer-oriented, the event brings together some 8000+ geeks from all over the world. The twenty-second edition will take place Saturday 5th and Sunday 6th February 2022, online. Developer rooms are assigned to self-organising groups to work together on open source and free software projects, to discuss topics relevant to a broader…

October 31, 2021

ansible-role-pkg_update

Keeping your software up to date is an important task in system administration, not only for security reasons but also to roll out bug fixes to your systems.

As always, we should try to automate this process as much as possible.

Ansible has a package module to install packages in a generic way. It supports most Un*x platforms (GNU/Linux, BSD, …), but it doesn't allow you to update all packages.

For this reason, I created an Ansible role: package update.

Package update enables you to update all packages on most Linux distributions and the BSD operating systems. It can also update the running jails on FreeBSD.
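
To try it, the role can be installed from Ansible Galaxy (assuming it is published there under the name used in the examples below):

$ ansible-galaxy install stafwag.package_update
$ ansible-playbook -i inventory update.yml   # inventory and playbook names are illustrative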

Version 2.0.2 is now available.

Changelog:

  • Always update the apt cache on Debian based distributions.

Have fun!

Ansible Role: package_update

An ansible role to update all packages (multiplatform)

Requirements

Supported platforms

  • Archlinux
  • Debian
  • FreeBSD
  • NetBSD
  • OpenBSD
  • RedHat
  • Suse
  • Kali GNU/Linux

Role Variables

The role works with the following variables:

  • freebsd_running_jails: list of the running FreeBSD jails (set by the role).
  • package_update: the role's configuration namespace.
    • freebsd: FreeBSD-specific configuration.
      • get_running_jails: yes (default) or no. Whether to set the freebsd_running_jails variable.
      • host: yes (default) or no. Whether to update the host system.
      • jails: array of jails to update; freebsd_running_jails by default.

Dependencies

None

Example Playbooks

Upgrade

---
- name: update packages
  hosts: all
  become: true
  roles:
    - stafwag.package_update

Update only the FreeBSD host systems.

---
- name: update packages
  hosts: all
  become: true
  roles:
    - role: stafwag.package_update
      vars:
        package_update:
          freebsd:
            get_running_jails: no
            jails: []

Update only the running jails on FreeBSD systems.

---
- name: update packages
  hosts: all
  become: true
  roles:
    - role: stafwag.package_update
      vars:
        package_update:
          freebsd:
            host: no

Update a jail on a FreeBSD system.

---
- name: update packages
  hosts: rataplan
  become: true
  roles:
    - role: stafwag.package_update
      vars:
        package_update:
          freebsd:
            host: no
            jails:
              - stafmail

License

MIT/BSD

Author Information

Created by Staf Wagemakers, email: staf@wagemakers.be, website: http://www.wagemakers.be

October 30, 2021

If you are reading this message and you follow me, or we are friends on some social network, stop following me. Cancel our friendship on Facebook, stop liking my page, stop following me on Twitter, on Medium, even on Mastodon.

I discovered, when deleting my LinkedIn account, how much it lifted an unconscious weight from my chest, how much the mere existence of an account in my name sucked me into a world of appearances, marketing, and the pursuit of petty glory. After all, my Facebook account was only (re)created with the avowed goal of getting me elected in the 2012 elections (which, fortunately, did not work).

By creating a gemlog, the equivalent of a blog for the Gemini protocol, I rediscovered the pleasure of writing simply, without frills, without worrying about the potential success of a post or about my readership. The harmful influence of social networks, into which my egotistical quest for glory drew me, and into which, I apologize today, I tried to drag you with me, turned my reflections into bombastic pretensions, begging for "likes" and shares at any cost.

I will soon delete my Facebook account and page, something I already did in 2008 and should have done again a long time ago. In any case, even if you follow me on that network, chances are you don't see what I post anyway.

That is precisely why I am asking you to "unlike/unfriend/unfollow" me.

I have indeed observed that the impact of a post on Facebook or Twitter is always very small compared to the theoretical number of followers.

=> https://ploum.net/le-mensonge-des-reseaux-sociaux/

One of my theories is that the more followers you have, the less Facebook and Twitter distribute your content, to encourage you to pay. If this theory is right, something I want to verify with this experiment, then by unfollowing me you would help me spread this paradoxical message: "stop following me!"

Unlike Facebook and Twitter, I am completely aligned with the ethics of the Mastodon project. Nevertheless, I have the impression that this network also fosters a quest for followers. I am keeping my account, but, when in doubt, stop following me there too.

If my writings interest you, you can subscribe by email or RSS. Or simply visit this blog at your own convenience. If you think a post deserves to be shared, send it by email, copy/paste it, print it. Also allow yourself not to share anything immediately, but to keep the idea in a corner of your brain, to make it your own, to talk about it around you while forgetting its origin.

"What I reproach newspapers for is that they make us pay attention every day to insignificant things, while we read only three or four times in our lives the books in which the essential things are found."

(Proust, Du côté de chez Swann)

Like Proust, I believe that the truly important ideas, the ones that can change our lives, are not found in the media, of which social networks are only a bastard offspring, but in books.

Books that wait patiently, in a family or neighborhood library, in a pile on your desk, in a box in the attic, for you to take your eyes off your screen.

Help me read more books: stop following me on social networks!

Receive my posts by email or RSS. At most two posts per week, nothing else. Your email address is never shared and is permanently deleted when you unsubscribe. Latest book: Printeurs, a cyberpunk thriller. To support the author, read, give, and share books.

This text is published under the CC-By BE license.

The Digispark USB development board is a compact board with the ATtiny85 AVR microcontroller. You can use it in the Arduino IDE by adding http://digistump.com/package_digistump_index.json as an additional board support URL in File / Preferences next to Additional Boards Manager URLs.

After this, open the menu Tools / Board / Boards Manager... and install the package Digistump AVR Boards:

/images/arduino-digistump-avr-boards.png

The Digispark is now supported, but the board support package installs an older version of the command-line tool micronucleus for the bootloader. This is incompatible with newer versions of the bootloader. If you've upgraded the micronucleus bootloader on your Digispark, you need to upgrade the command-line tool too.
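
If you still need that newer command-line tool, a sketch of building it from source (repository location and build steps assumed from the upstream project; it needs the libusb development headers):

git clone https://github.com/micronucleus/micronucleus.git
cd micronucleus/commandline
make
sudo make install   # installs the binary to /usr/local/bin/micronucleus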

So make a backup of the original file and copy the newer micronucleus binary that you installed earlier:

cd ~/.arduino15/packages/digistump/tools/micronucleus/2.0a4/
mv micronucleus micronucleus.original
cp /usr/local/bin/micronucleus .

Now you can flash your Arduino sketches to the device. Try it with one of the example sketches, such as the one in File / Examples / Digispark_Examples / Start. This has the following code that blinks the Digispark's LED:

// the setup routine runs once when you press reset:
void setup() {
  // initialize the digital pin as an output.
  pinMode(0, OUTPUT); //LED on Model B
  pinMode(1, OUTPUT); //LED on Model A  or Pro
}

// the loop routine runs over and over again forever:
void loop() {
  digitalWrite(0, HIGH);   // turn the LED on (HIGH is the voltage level)
  digitalWrite(1, HIGH);
  delay(1000);               // wait for a second
  digitalWrite(0, LOW);    // turn the LED off by making the voltage LOW
  digitalWrite(1, LOW);
  delay(1000);               // wait for a second
}

Then choose the board Digispark (Default - 16.5mhz) in the menu Tools / Board / Digistump AVR Boards and compile the code. If you click on the upload button after this, the Arduino IDE asks you to put the Digispark in a USB port. If all goes well, it uploads the firmware to your board with the newer version of the micronucleus command. And finally the LED on the board starts blinking.
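
With the newer command-line tool you can also flash a compiled hex file directly, outside the Arduino IDE. A sketch (the --run flag is taken from the upstream usage text; the file name is illustrative):

micronucleus --run start.ino.hex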

Note

If you want to use your Digispark with a newer bootloader in PlatformIO, you need the same trick: make sure that PlatformIO uses the newer micronucleus command.


The open source collaboration software Nextcloud Hub improves year after year. It started out as an online storage service that you could install on your own server, but by now it does much more. You can use it to share and synchronize files, chat and make video calls, and even run a calendar, contacts, and email on it. Year after year new features are added, which makes Nextcloud interesting for home users, and even for companies.

Another success story is the web-based office suite Collabora Online, the web version of LibreOffice. If you install it on a Linux server, you can edit documents through your browser and even collaborate with multiple people. Especially the integration of Collabora Online in Nextcloud Hub makes it an interesting program for sharing documents and working on them together:

/images/nextcloud-collabora-online.png

Installing Nextcloud Hub and integrating it with Collabora Online can be done in several ways. If you do it entirely yourself, it takes quite some work. Keeping it up to date and managing it also requires attention. That is not for everyone.

A solution for those who don't want to deal with all that is the Nextcloud Ubuntu appliance. This is a customized version of the Ubuntu Linux distribution, Ubuntu Core, with Nextcloud Hub installed as a 'snap'. You then get updates automatically. Moreover, you can also install Collabora Online in it.
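
On a regular Ubuntu server you can get the same snap-based installation with a single command (assuming the snap is named as in the appliance):

sudo snap install nextcloud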

Versions of all of this also exist for ARM processors, ideal for running on a Raspberry Pi 3B(+) or 4B. If you want to know how this works, read my step-by-step guide in the article "Nextcloud Hub installeren op Raspberry Pi met Collabora Online" for PCM.

If you have ever installed Nextcloud and/or Collabora Online before and found it too cumbersome, you should definitely give it another try. Not only is the Ubuntu appliance handy; the management of Nextcloud has also become much simpler in other areas over the last few years. Providing the web interface with a Let's Encrypt certificate, for example, takes just one command and one configuration change.
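
For the snap, that one command is, as far as I know, the one below, provided by the Nextcloud snap itself (ports 80 and 443 must be reachable from the internet):

sudo nextcloud.enable-https lets-encrypt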

October 29, 2021

Definitely a halfling barbarian. Alignment: chaotic neutral.

Oh, you didn’t mean tabletop role playing but job roles? Riiiight…

I don’t think this blog post will ever be complete; it will always be evolving. But at this point, these are some of the things I see myself doing:


Anything related to Continuous Delivery in software. From my perspective, that may include:

  • Test Automation – I’ve done this a lot, I liked it and wouldn’t mind doing more of it.
  • DevOps – I’m still not sure if DevOps must be a separate role, or if other roles can work better if they apply DevOps principles. That being said, I have done some devops-ish things, I liked it, and I would sure like to do more of it.
  • Software Development – There, I’ve put it in writing. I haven’t done this yet in a work context, but I like doing it and learning about it. And really – isn’t test automation also writing software?

Maybe you noticed that I don’t mention a specific technology in any of these. There may be tech & tools that I already have experience with, and you can read about that in my CV or on LinkedIn, but that is not what this blog post is about. I believe that technologies can (and should) always be learned; working quality-driven is more of an attitude.


Technical Storytelling or Technical Community Management
Storytelling can help simplify the complexities of new technologies. It’s a combination of technical skills, communication skills and empathy. It’s about supporting a community by creating helpful content, from sample code to tutorials, blog posts(*) and videos; speaking at conferences; and helping improve a product or technology by collecting feedback from the community. I recently read a blog post on this, and I can totally recognize myself there.

(*) Yes, the blog posts that I’m writing now, are also written with that kind of role in mind.


Also have a look at the roles that I am not interested in (but do get a lot of emails about).

The post What are my preferred roles? appeared first on amedee.be.

October 28, 2021

Acquia Engage, now in its eighth year, was an all-virtual event this year. In my keynote, I talked about how the Acquia Digital Experience Platform (DXP) helps with many of the biggest challenges in the world of websites and digital experiences.

If you want to see what Acquia has been up to, watch the recording of my keynote. It's packed full of short product demos. You can also download a copy of my slides (47 MB), but you'd miss out on all the demos.

My keynote focused on how we are fixing broken, disconnected digital workflows across marketing and IT. By fixing these workflows, we can accelerate the delivery and optimization of new digital experiences.

We showcased innovations in the following areas:

  • DevOps: Connecting developers with all the tools they need to plan, build, review, and test any Drupal application, all in a 100% cloud-based development environment.
  • WebOps: Automating testing, deployment, and scaling of Drupal applications to fuel faster innovation with maximum security, availability, and resiliency.
  • ExperienceOps: Empowering marketers with new low-code / no-code tools to rapidly compose digital experiences, including digital commerce experiences. These tools use reusable application and design components.
  • ContentOps: Connecting marketers with all the tools they need to create, collaborate, review, and share any kind of content across omni-channel campaigns.
  • CampaignOps: Streamlining creation, targeting, execution, and optimization of digital campaigns within a framework of overall brand governance and privacy compliance.
  • MLOps: Industrializing the application of machine learning to real-world digital workflows and experiences. New features help both data scientists and marketers continuously optimize each customer's unique journey.

I want to thank the product and engineering teams at Acquia for working incredibly hard to deliver all of these new innovations. I'm proud of the work the Acquia team has done to create this event for our customers and partners.

October 25, 2021

When recruiters contact me, I invariably get asked in what region I am willing to work. Well. It depends.
(scroll down for a map if you don’t want to read).

The thing is, I actually enjoy going from point A to point B. At the same time, if it happens in much less than ideal conditions (lots of traffic, or crowded public transportation), I may get overstimulated, which leads to fatigue and lack of concentration. My least enjoyable commute was only 20 km, by car, but it typically took me more than one hour. This was when a new bridge was being constructed over the Scheldt in Temse.

The most pleasant work experiences I had, involved these commute patterns:

  • A 3km bicycle ride (about 10 minutes).
  • 30 km by car, with the first 15 minutes on almost empty rural roads, and then 25 minutes on a highway in the direction that had the least amount of traffic.
  • 5km, which I did on foot in 50 minutes (I was training for the Dodentocht at the time).
  • 40km, which I did with 5 minutes of cycling, 35 minutes on the train, and 5 minutes of walking. Ideal for listening to one or two episodes of a podcast. Doing the same distance by car would have taken me about the same amount of time, in ideal conditions. And I can’t focus on traffic and listen to a podcast at the same time.
  • 6km, which was 20 minutes on a bicycle or 12 minutes by car. I preferred cycling, because I had separate bike lanes for about 80% of the way. 20 minutes was also an ideal amount of time to listen to one episode of a podcast.

That looks like a lot of cycling, even though I don’t really consider myself to be an athletic type. It’s also eco-friendly, even though I don’t really consider myself to be an eco-warrior.

I’m not a petrol head; I don’t know anything about cars. Four wheels and a steering wheel, that’s about the limit of my knowledge. Currently I don’t even have a car; I use car sharing services like Cambio on the rare occasions that I actually need one. At the same time, I do enjoy the experience of driving, especially long, smooth stretches. For example, each year I go to a music course somewhere in the middle of Germany. That’s a 5-hour drive, not including stops. I absolutely love the change of scenery along the way. But put me in city traffic for an hour and I get too much input.

I have found a website where you can draw a map of the places you can reach within a certain time: TravelTime (they also have an API!).

This is a map I made with the following data:

  • Yellow: reachable by cycling in 30 minutes or less. That’s about all of the city center of Ghent.
  • Red: reachable by public transport in 1 hour or less. That doesn’t get me to Antwerp, Mechelen or Kortrijk, but Brussels and Bruges are just about reachable.
  • Blue: reachable by car in 45 minutes or less. That barely touches Antwerp. Brussels: the north, west and south edges. Kortrijk and Bruges are also within reach. Why the cutoff at 45 minutes? Well, I would need really, really good other motivations to consider Brussels. Some time ago I thought that 30 minutes would be my maximum, but it isn’t. I’d rather call it an optimum than a maximum.
TravelTime

Even with this map, I still have a personal bias. Most of my social life occurs somewhere in the triangle Ghent-Antwerp-Brussels. It becomes harder to do something after work when working in West-Flanders. It’s not a hard pass, just a preference.

I have more to tell on this topic, so I might update this blog post later.

The post What is my preferred region? appeared first on amedee.be.

October 24, 2021

This post is meant to encourage me to read a bit more (paper books). By the way, I thought I was reading four books simultaneously, but when I put them next to each other it turned out to be seven.

From left to right (writer - title (year) - pages read / total pages):

Daniele Benedettelli - Creating Cool Mindstorms NXT Robots(2008) 24-575
Leo Tolstoj - Oorlog en Vrede(1869, NL translation 1973) 115-462
Dirk De Wachter - De kunst van het ongelukkig zijn(2019) 35-101
Allen/Fonagy/Bateman - Mentaliseren(2008/2019 edition) 30-368
LEGO and philosophy(2017) 56-226
Charlie Mackesy - The Boy, the mole, the fox and the horse(2019)
Michael Collins - Carrying the Fire(1974) - finished

Personal goal: finish at least three more of these before 2022.

I just finished Michael Collins - Carrying the Fire, and it took me five weeks, which I consider a bit too long for a 470-ish page book. It was a very good book though. If you're into space travel, I would definitely recommend it. It also shows how vastly different the Sixties were from today: of the fourteen astronauts selected in 1963, four died during training, and two of the earlier group of nine also died. Such numbers would be unacceptable in 2021, even for 'dangerous' jobs.

The Daniele Benedettelli book is about programming finite state machines using Lego robots. I don't know much about programming, but this looks like fun. The thing is, I need to build some Lego robots (like this one) before I can continue with the book.

And I probably need to start from page 1 again in War and Peace because of the many characters that I forgot.

Some other books that I read the past three years are:

Celestin-Westreich/Celestin - Observeren en Rapporteren
Dick Swaab - Ons creatieve brein
Dirk De Wachter - Borderline Times
Dirk De Wachter - De wereld van De Wachter
Etienne Vermeersch - Over God
Etienne Vermeersch - Provencaalse gesprekken
Jan Van de Craats - Basisboek wiskunde
Jude Woodward - The US vs China
Paul Verhaeghe - Autoriteit
Paul Verhaeghe - Identiteit
Randall Munroe - Thing Explainer
Randall Munroe - What If
Rebecca Smethurst - Space: 10 Things You Should Know
Robert Bly - De Wildeman
Terry Goodkind - Law of Nines
Terry Goodkind - Severed Souls
Terry Goodkind - The first Confessor
Terry Goodkind - The Omen Machine
Terry Goodkind - The Third Kingdom
Terry Goodkind - Warheart
Thomas D'ansembourg - Stop met aardig zijn

Most of it is non-fiction, apparently. I really enjoyed 'Borderline Times' and both books by Paul Verhaeghe and Dick Swaab. I couldn't really get into Robert Bly or Thomas D'Ansembourg (although collaborative communication gives me a lot of insight into people).

October 21, 2021

FOSDEM 2022 will take place on Saturday 5 and Sunday 6 February 2022. It will be an online event. After several long and passionate debates, we have decided to make FOSDEM 2022 an online event. There are a lot of good arguments in favor of each of physical, hybrid, and online events. We do not wish to rehash all of them in public, so please understand if we do not engage in public debates on their relative merits. This was not an easy decision, but at least it's a decision. Similar to CCC, we would prefer something else, but it…

October 20, 2021

For the past few years, I've examined Drupal.org's contribution data to understand how the Drupal project works. Who develops Drupal? How diverse is the Drupal community? How much of Drupal's maintenance and innovation is sponsored? Where do sponsorships come from?

The report might be of interest even if you don't use Drupal. It provides insights into the inner workings of one of the largest Open Source projects in the world.

This year's report shows that:

  • Compared to last year, we have fewer contributions and fewer contributors. The slowdown is consistent across organizations, countries, project types, and more. I believe this is the result of COVID-19, of where we are in the Drupal Super Cycle, and of many Drupal shops being too busy growing.
  • Despite a slowdown, it's amazing to see that just in the last year, Drupal welcomed more than 7,000 individual contributors and over 1,100 corporate contributors.
  • Two-thirds of all contributions are sponsored, but volunteer contributions remain important to Drupal's success.
  • Drupal's maintenance and innovation depends mostly on smaller Drupal agencies and Acquia. We don't see many contributions from hosting companies, multi-platform digital agencies, system integrators, or end users.
  • Drupal's contributors have become more diverse, but are still not diverse enough.

For comparison, you can also look at the 2016 report, 2017 report, 2018 report, 2019 report, and the 2020 report.

Methodology

What data did I analyze?

I looked at all Drupal.org issues marked "closed" or "fixed" in the 12-month period from July 1, 2020 to June 30, 2021. This is across issues in Drupal Core and all contributed projects, including all major versions of Drupal.

What are Drupal.org issues?

Each "Drupal.org issue" tracks an idea, feature request, bug report, task, or more. It's similar to "issues" in GitHub or "tickets" in Jira. See https://www.drupal.org/project/issues for the list of all issues.

What are Drupal.org credits?

In the spring of 2015, I proposed some ideas for how to give credit to Drupal contributors. A year later, Drupal.org added the ability for contributors to attribute their work to an organization or customer sponsor, or mark it the result of volunteer efforts.

Example issue credit on drupal org
A screenshot of an issue comment on Drupal.org. You can see that jamadar worked on this patch as a volunteer, but also as part of his day job working for TATA Consultancy Services on behalf of their customer, Pfizer.

Drupal.org's credit system is unique and groundbreaking within the Open Source community. It provides unprecedented insights into the inner workings of a large Open Source project. There are a few limitations with this approach, which I'll address at the end of this report.

How is the Drupal community doing?

In the 12-month period between July 1, 2020 and June 30, 2021, Drupal.org's credit system received contributions from 7,420 different individuals and 1,186 different organizations. We saw a 10% decline in individual contributors, and a 2% decrease in organizational contributors.

Contributions by individuals vs organizations

For this report's time period, 23,882 issues were marked "closed" or "fixed", a 23% decline from the 2019-2020 period. This averages out to 65 issues marked "closed" or "fixed" each day.

In total, the Drupal community worked on 3,779 different Drupal.org projects this year compared to 4,195 projects in the 2019-2020 period — a 10% year-over-year decline.

Metric 2019 - 2020 2020 - 2021 Delta
Number of individual contributors 8,303 7,420 -11%
Number of organizational contributors 1,216 1,186 -2%
Number of issues "fixed" or "closed" 31,153 23,882 -23%
Number of projects worked on 4,195 3,779 -10%

Understanding the slowdown in contribution

Individual contributors slowed down

To understand the slowdown, I looked at the behavior of the top 1,000 contributors:

  • The top 1,000 individual contributors are responsible for 65% of all contributions. The other 6,420 individuals account for the remaining 35%. Overall, Drupal follows a long tail model.
  • In the last year, 77 of the top 1,000 individual contributors stopped contributing to Drupal, 671 contributed less, and 252 contributed more.

A 7.7% annual attrition rate in the top 1,000 contributors is very low. It means that the average contributor in the top 1,000 is active for 13 years. In other words, Drupal's top 1,000 contributors are extremely loyal — we should be grateful for their contributions and continued involvement in the Drupal project.

While we can't directly compare Open Source projects like Drupal to commercial companies, it might be useful to know that most commercial organizations are happy with an attrition rate of 15% or less, which corresponds to an average employee tenure of about 6.7 years. Nowadays, a lot of people don't stay with their employer that long. Put that way, an attrition rate of 7.7% is very good!
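As a back-of-the-envelope model: with a constant annual attrition rate r, the expected tenure is simply 1/r. A minimal sketch in Python (the constant-rate assumption is mine; the rates are the ones discussed above):

def average_tenure_years(annual_attrition_rate):
    # With a constant attrition rate, expected tenure is its inverse.
    return 1.0 / annual_attrition_rate

print(average_tenure_years(0.077))  # ~13 years: Drupal's top 1,000 contributors
print(average_tenure_years(0.15))   # ~6.7 years: a typical software company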

The big takeaway is that the top individual and organizational contributors aren't leaving Drupal. They just became less active in 2020-2021.

Organizational contributors also slowed down

Next, I looked at the behavior of the top 250 organizations:

  • The top 250 organizational contributors are responsible for 82% of all contributions. The other 936 organizations account for the remaining 18%.
  • In the last year, 8 organizations (3%) stopped contributing, 168 (67%) contributed less, and 74 (30%) contributed more.
  • Five of the 8 organizations that stopped contributing were end users; they most likely switched their website away from Drupal. The remaining 3 were digital agencies. The end user attrition rate in the top 250 was 2%, while the digital agency attrition rate was 0.4%.

The top Drupal agencies remain very committed to Drupal. While many agencies contributed less, very few agencies stopped contributing to Drupal altogether.

Why are individuals and organizations contributing less?

As part of my research, I reached out to some of the top contributing Drupal agencies. The main reason why they are contributing less is that they are too busy growing:

  • We grew 33% so far in 2021. We have grown our contribution as well, but there has been a shift from code contributions to non-code contributions. We've contributed less code because Drupal has all the features we need to deliver amazing digital experiences, and has become really stable and robust. There has been less code to contribute. — Baddý Sonja Breidert, CEO of 1xINTERNET, Germany
  • We have grown 35% in the last year — from around 65 employees to 90. — Nick Veenhof, CTO of DropSolid, Belgium
  • Customer investment in digital has accelerated by several years the past 12 months. We grew our Drupal practice by 35% in the past year. — Paul Johnson, Drupal Director at CTI Digital, UK
  • We grew 27% in revenue last year. We expect to continue on that growth trajectory. Our only concern is shortage of Drupal talent. — Janne Kalliola, CEO of Exove, Finland
  • We grew 40% over the last year. This has been driven by an increased demand for large Drupal projects on tight deadlines. With more time pressures from clients and changing personal commitments, it’s been more difficult for people to find the time to contribute. But also, more of our contribution shifted from Drupal.org to GitHub, and doesn't use the credit system. — Stella Power, Managing Director of Annertech, Ireland
  • We experienced unexpected sales growth during COVID. Thanks to Drupal Commerce, we grew 95% in 2020 and 25% year to date. In addition, two of our leading contributors pursued other opportunities. As new team members get onboarded and the workload stabilizes, I'm hopeful we see our overall contributions increase again in 2022. — Ryan Szrama, CEO of Centarro, United States

It's great to see so many Drupal agencies doing well.

Other than being too busy with client work, the following secondary reasons were provided:

  • Drupal is a stable and mature software project. Drupal has all the features we need to deliver ambitious digital experiences. Furthermore, Drupal has never been this stable and robust; we don't have many bug fixes to contribute, either.
  • There is a shortage of Drupal talent; the people we hire don't know how to contribute yet.
  • COVID eliminated in-person events and code sprints. In-person events inspired our employees to contribute and collaborate. Without in-person events, it's hard to instill employees with a passion to contribute.
  • It's more difficult to teach new employees how to contribute when everyone is remote.
  • People want a vision for Drupal that they can rally behind. We have already achieved the vision: Drupal is for ambitious digital experiences. People want to know: what is next?
  • The tools and processes to contribute are becoming more complex; contribution has become more difficult and less desirable.
  • We are getting more efficient at managing major Drupal releases. Rector automates more and more of the upgrade work. When we work smarter, contribution drops.

There is no doubt that COVID has accelerated a lot of digital transformation projects, but it has also slowed down contribution. Parents are busy home-schooling their children, people have Zoom-fatigue, some families may have lost income, etc. COVID added both stress and extra work to people's lives. For many, this made contribution more difficult or less possible.

Drupal Super Cycle

Drupal agencies provided many valid reasons for why contribution is down. In addition to those, I believe a Drupal Super Cycle might exist. The Drupal Super Cycle is a new concept that I have not talked about before. In fact, this is just a theory — and only time will tell if it is valid.

The Drupal Super Cycle is a recognition that Drupal's development cycle ebbs and flows between a "busy period" and "quiet period" depending on when the next major release takes place. There is a "busy period" before a major release, followed by a "quiet period" after each major release.

Major Drupal releases only happen every 2 or 3 years. When a major release is close, contributors work on making their projects compatible. This requires extra development work, such as adopting new APIs, subsystems, libraries, and more. Once projects are compatible, the work often shifts from active development to maintenance work.

A visual representation of the Drupal Super Cycle; contribution accelerates just before a major release and slows down after.
A slide from my DrupalCon Europe 2021 keynote where I explain the Drupal Super Cycle theory.

The last major Drupal release was Drupal 9, released in June of 2020. Last year's report analyzed contribution activity between July 1, 2019 and June 30, 2020. This period includes the 11-month period leading up to the Drupal 9 release, the Drupal 9 release itself, and 1 month after the Drupal 9 release. It's the "busy period" of the Super Cycle because the Drupal community is getting thousands of contributed modules ready for Drupal 9.

This year's report analyzes contribution data starting 1 month after the Drupal 9 release. There was no major Drupal release this year, and we are still 9 to 14 months away from Drupal 10, currently targeted for the summer of 2022. We are in the "quiet period" of the Super Cycle.

If the Drupal Super Cycle concept is valid, we should see increased activity in next year's report, assuming we remain on track for a Drupal 10 release in June of 2022. Time will tell!

What is the community working on?

Contribution credits decreased across all project types, but increased for Drupal Core.

A graph showing the year over year growth of contributions per project type: only contributions to core grew

Core contributions saw a 7% year-over-year increase in credits, while work on contributed projects — modules, themes and distributions — is down compared to last year.

Who are Drupal's top individual contributors?

The top 30 individual contributors between July 1, 2020 and June 30, 2021 are:

A graph showing the top 30 individual contributors ranked by the quantity of their contributions.
A graph showing the top 30 individual contributors ranked by the impact of their contributions.

For the weighted ranking, I weighed each credit based on the adoption of the project the credit is attributed to. For example, each contribution credit to Drupal Core is given a weight of 10, because Drupal Core has about 1 million active installations. Credits to the Webform module, which has over 450,000 installations, get a weight of 4.5. And credits to Drupal's Commerce project get 0.5 points, as it is installed on around 50,000 sites.

The weighting algorithm also makes adjustments for Drupal's strategic initiatives. Strategic initiatives get a weight of 10, the highest possible score, regardless of whether these are being developed in Drupal Core's Git repository or in a sandbox on Drupal.org.

The idea is that these weights capture the end user impact of each contribution, but also act as a proxy for the effort required to get a change committed. Getting a change accepted in Drupal Core is both more difficult and more impactful than getting a change accepted to a much smaller, contributed project.
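To make the mechanics concrete, here is a minimal Python sketch of the adoption-based weighting described above. The install counts mirror the examples in the text; the 100,000-installs-per-point scale and the function name are my own reconstruction, not the exact algorithm behind the report.

PROJECT_INSTALLS = {
    "drupal_core": 1_000_000,  # weight 10
    "webform": 450_000,        # weight 4.5
    "commerce": 50_000,        # weight 0.5
}

def credit_weight(project, is_strategic_initiative=False):
    # Strategic initiatives always get the maximum weight of 10.
    if is_strategic_initiative:
        return 10.0
    installs = PROJECT_INSTALLS.get(project, 0)
    # One point per 100,000 installs, capped at Drupal Core's weight of 10.
    return min(installs / 100_000, 10.0)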

This weighting is far from perfect, but so is the unweighted view. For code contributions, the weighted chart may be more accurate than the purely unweighted one. I included both charts above.

No matter how you look at the data, all of these individuals put an incredible amount of time and effort into Drupal.

It's important to recognize that most of the top contributors are sponsored by an organization. We value the organizations that sponsor these remarkable individuals. Without their support, it could be more challenging for these individuals to contribute.

How much of the work is sponsored?

When people contribute to Drupal, they can tag their contribution as a "volunteer contribution" or a "sponsored contribution". Contributions can be marked both volunteer and sponsored at the same time (shown in jamadar's screenshot near the top of this post). This could be the case when a contributor does paid work for a customer, in addition to using unpaid time to add extra functionality or polish.

For those credits with attribution details, 16% were "purely volunteer" (7,034 credits). This is in stark contrast to the 68% that were "purely sponsored" (29,240 credits). Put simply, roughly two-thirds of all contributions are "purely sponsored". Even so, volunteer contribution remains very important to Drupal.

A graph showing how many of the contributions are volunteered vs sponsored.

Volunteers contribute across all areas of the project. A lot of volunteer time and energy goes towards non-product related contributions such as event organization, mentoring, and more. Non-code contributions like these are very valuable, yet they are under-recognized in many Open Source communities.


Who are Drupal's top organizational contributors?

Similar to the individual contributors, I've ranked organizations by both "unweighted contributions" and "weighted contributions". Unweighted scores are based solely on volume of contributions, while weighted scores also try to take into account both the effort and impact of each contribution.

A graph showing the top 30 organizational contributors ranked by the quantity of their contributions.
A graph showing the top 30 organizational contributors ranked by the impact of their contributions.

If you are an end user looking for a company to work with, these are some of the companies I'd work with first. Not only do they know Drupal best, but they also help improve your investment in Drupal. If you are a Drupal developer looking for work, these are some of the companies I'd apply to first.

A variety of different types of companies are active in Drupal's ecosystem:

Category Description
Traditional Drupal businesses Small-to-medium-sized professional services companies that primarily make money using Drupal. They typically employ fewer than 100 employees. Because they specialize in Drupal, many of these companies contribute frequently and are a huge part of our community. Examples are Third and Grove, OpenSense Labs, Srijan, etc.
Digital marketing agencies Larger full-service agencies that have marketing-led practices using a variety of tools, typically including Drupal, Adobe Experience Manager, Sitecore, WordPress, etc. Many of these larger agencies employ thousands of people. Examples are Wunderman Thompson, Possible, and Mirum.
System integrators Larger companies that specialize in bringing together different technologies into one solution. Example system integrators are Accenture, TATA Consultancy Services, EPAM Systems, and CI&T.
Hosting companies Examples are Acquia, Pantheon, and Platform.sh, but also Rackspace or Bluehost.
End users Examples are the European Commission or Pfizer.

A few observations:

  • Most of the sponsors in the top 30 are traditional Drupal businesses with fewer than 100 employees. With the exception of Acquia, Drupal's maintenance and innovation largely depends on these small Drupal businesses.
  • The larger, multi-platform digital marketing agencies are barely contributing to Drupal. Only 1 digital marketing agency shows up in the top 30: Intracto with 410 credits. Hardly any appear in the entire list of contributing organizations. I'm frustrated that we have not yet found the right way to communicate the value of contribution to these companies. We need to incentivize these firms to contribute with the same level of commitment that we see from traditional Drupal businesses.
  • The only system integrator in the top 30 is CI&T with 1,177 credits. CI&T is a smaller system integrator with approximately 5,200 employees. We see various system integrators outside of the top 30, including EPAM Systems (138 credits), TATA Consultancy Services (109 credits), Publicis Sapient (60 credits), Capgemini (40 credits), Globant (8 credits), Accenture (2 credits), etc.
  • Various hosting companies make a lot of money with Drupal, yet only Acquia appears in the top 30 with 1,263 credits. The contribution gap between Acquia and other hosting companies remains very large. Pantheon earned 71 credits compared to 122 last year. Platform.sh earned 8 credits compared to 23 in the last period. In general, there is a persistent problem with hosting companies not contributing back.
  • We only saw 1 end user in the top 30 this year: Thunder (815 credits). Many end users contribute though: European Commission (152 credits), Pfizer (147 credits), bio.logis (111 credits), Johnson & Johnson (93 credits), University of British Columbia (105 credits), Georgia Institute of Technology (75 credits), United States Department of Veterans Affairs (51 credits), NBCUniversal (45 credits), Princeton University (43 credits), Estée Lauder (38 credits), University of Texas at Austin (22 credits), and many more.
A graph showing that Acquia is by far the number one contributing hosting company.
A graph showing that CI&T is by far the number one contributing system integrator.

I often recommend that end users mandate contributions from their partners. Pfizer, for example, only works with agencies that contribute back to Drupal. The State of Georgia started doing the same; they made Open Source contribution a vendor selection criterion. If more end users took this stance, it could have a big impact on Drupal. We'd see many more digital agencies, hosting companies, and system integrators contributing to Drupal.

While we should encourage more organizations to sponsor Drupal contributions, we should also understand and respect that some organizations can give more than others — and that some might not be able to give back at all. Our goal is not to foster an environment that demands what and how others should give back. Instead, we need to help foster an environment worthy of contribution. This is clearly laid out in Drupal's Values and Principles.

How diverse is Drupal?

Supporting diversity and inclusion is essential to the health and success of Drupal. The people who work on Drupal should reflect the diversity of people who use the web.

I looked at both the gender and geographic diversity of Drupal.org contributors.

Gender diversity

While Drupal is slowly becoming more diverse, less than 9% of the recorded contributions were made by contributors who do not identify as men. The gender imbalance in Drupal remains profound. We need to continue fostering diversity and inclusion in our community.

A graph showing contributions by gender: 67% of the contributions come from people who identify as male.

A few years ago I wrote a post about the privilege of free time in Open Source. I made the case that Open Source is not a meritocracy: not everyone has equal amounts of free time to contribute. For example, research shows that women still spend more than twice as much time as men on unpaid domestic work, such as housework or childcare. This makes it more difficult for women to contribute to Open Source on an unpaid, volunteer basis. Organizations capable of giving back should consider financially sponsoring individuals from underrepresented groups to contribute to Open Source.

A graph that shows that compared to males, female contributors do more sponsored work, and less volunteer work.
Compared to men, women do more sponsored work, and less volunteer work. We believe this is because men have the privilege of more free time.

Free time being a privilege is just one of the reasons why Open Source projects suffer from a lack of diversity.

The gender diversity chart above shows a growing number of individuals who no longer share their gender identity on Drupal.org. This is because a couple of years ago, the gender field on Drupal.org profiles was deprecated in favor of a Big 8/Big 10 demographics field.

Today, over 100,000 individuals have filled out the new "Big 8/Big 10" demographics field. The new demographics field allows for more axes of representation, but is also somewhat non-specific within each axis. Here are the results:

A graph showing different axes of diversity in Drupal

Diversity in leadership

Drupal.org recently introduced the ability for contributors to identify what contributor roles they fulfill. The people who hold these key contribution roles can be thought of as the leaders of different aspects of our community, whether they are local community leaders, event organizers, project maintainers, etc. As more users begin to fill out this data, we can use it to build a picture of the key contributor roles in our community. Perhaps most importantly, we can look at the diversity of individuals who hold these key contributor roles. In next year's report we will provide a focused picture of diversity in these leadership positions.

Geographic diversity

We saw individual contributors from 6 continents and 121 countries. Consistent with the trends described above, most countries contributed less compared to a year earlier. Here are the top countries for 2020-2021:

 A graph showing the top 20 contributing countries in 2021.
The top 20 countries from which contributions originate. The data is compiled by aggregating the countries of all individual contributors behind each issue. Note that the geographical location of contributors doesn't always correspond with the origin of their sponsorship. Wim Leers, for example, works from Belgium, but his funding comes from Acquia, which has the majority of its customers in North America. Wim's contributions count towards Belgium as that is his country of residence.

Europe contributes more than North America. However, contribution from Europe continues to decline, while all other continents have become more active contributors.

A graph that shows most contributions in 2021 come from Europe and North America.

Asia, South America, and Africa remain big opportunities for Drupal; their combined population accounts for 6.3 billion out of 7.5 billion people in the world.

Limitations of the credit system

It is important to note a few of the current limitations of Drupal.org's credit system:

  • The credit system doesn't capture all code contributions. Parts of Drupal are developed on GitHub rather than Drupal.org. Contributions on GitHub usually aren't credited on Drupal.org. For example, a lot of the work on the Automatic Updates initiative is happening on GitHub instead of Drupal.org, and companies like Acquia and Pantheon don't get credit for that work.
  • The credit system is not used by everyone. Because using the credit system is optional, many contributors don't. For example, while they could, not all event organizers and speakers capture their work in the credit system. As a result, contributions often have incomplete or no contribution credits. Where possible, we should automatically capture credits. For example, translation efforts on https://localize.drupal.org are not currently captured in the credit system, but could be automatically.
  • The credit system doesn't accurately value complexity and quality. One person might have worked several weeks for just 1 credit, while another person might receive a credit for 10 minutes of work. Each year we see a few individuals and organizations trying to game the credit system. In this post, I used a basic weighting system based on project adoption. In future, we should consider refining that by looking at issue priority, patch size, number of reviews, etc. This could help incentivize people to work on larger and more important problems and save smaller issues, such as coding standards improvements, for new contributor sprints.

Because of these limitations, the actual number of contributions and contributors could be much higher than what we report.

Conclusions

While we have fewer contributions and fewer contributors compared to last year, it is not something to be worried about. We can attribute this to various things, such as COVID-19, agency growth, and the Drupal Super Cycle.

Our data confirms that Drupal is a vibrant community full of contributors who are constantly evolving and improving the software. It's amazing to see that just in the last year, Drupal welcomed more than 7,000 individual contributors and over 1,100 corporate contributors.

To grow and sustain Drupal, we should support those that contribute to Drupal and find ways to get those that are not contributing involved in our community. We are working on several new ways to make it easier for new contributors to get started with Drupal, which I covered in my latest DrupalCon keynote. Improving diversity within Drupal is critical, and we should welcome any suggestions that encourage participation from a broader range of individuals and organizations.

Special thanks to Tim Lehnen, CTO at the Drupal Association, for supporting me during my research.

A lot has been said about how the web evolved into a kind of monstrous entity. If you are on Gemini, you probably see what I’m talking about. The mail protocol has followed a similar evolution, but it’s a bit more subtle and has often been summarised as « too much email ».

I’m currently thinking deeply about Offmini, a protocol which would be to email what Gemini is to the web. This prompted me to write down what is wrong with email. This post started as part of another one, in which I describe how I’m building Offmini; it became so long that I figured it deserved to be a post of its own.

Sending Problem

The first and obvious problem with email is that it was developed 40 years ago as a receiver-only protocol. 15 years ago, most mail traffic was random spam. By random spam I mean that spammers were literally generating random mail addresses (or scraping them off the web) and sending trillions of emails to every possible address. You could receive spam in a language you didn’t even speak.

Complex protocols have been added on top of SMTP, relying once again on DNS (SPF, DKIM, DMARC), to try to control spam by making email a sender-and-receiver protocol (a notion I will describe in a subsequent post). This had the side effect of making it harder and harder to set up your own mail server.
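To illustrate, all three of those sender-validation layers live in DNS TXT records published by the sending domain. A minimal sketch for a hypothetical example.com (the selector and key are placeholders):

example.com.                     TXT "v=spf1 mx -all"
selector._domainkey.example.com. TXT "v=DKIM1; k=rsa; p=<base64-public-key>"
_dmarc.example.com.              TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"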

As homemade mail servers became harder to build and less reliable, email consolidated into a small oligopoly: Hotmail, Gmail, Yahoo and a handful of others. Having lots of power, those huge monopolistic beasts could easily reject mail from independent servers as spam, making their service even more attractive to customers. It might not have been on purpose, but it was the case (source: I maintained multiple independent mail servers between 2000 and 2010).

One clear consequence is that you can’t send email from your own computer anymore, as originally intended. You need a properly configured mail server with a permanent connection to act as your sender identity. And even that is tough, as Cory Doctorow learned.

=> https://doctorow.medium.com/dead-letters-73924aa19f9d

Format problem

The monopolies’ impact on email extended to the format of the messages themselves. Emails are encoded in a very impractical format called MIME 1.0. By today’s standards, this format is very hard to parse, and it has been awfully abused with HTML emails, trackers, etc. Basically created on a napkin by two guys, MIME never went further than 1.0 because nobody agreed on an upgrade. It was too successful too quickly.

There was only one major addition to MIME. And it was not a planned one: the infamous winmail.dat format.

At the time, there was a bug in Microsoft Outlook, the dominant mail client on the market, that transformed outgoing emails into a cryptic file called « winmail.dat ». The catch was that Outlook itself could decode winmail.dat files. Because Outlook had a dominant position, only people not using Outlook had the problem of receiving empty emails with a single attachment, called « winmail.dat », that they could not open. Users of independent mail servers relying on the open source Squirrelmail webmail interface started to blame their mail administrator (myself, for ten years).

The problem became worse when Google reverse engineered the winmail.dat bug in order to transparently support it in Gmail. At that point, the winmail.dat Microsoft bug became part of the MIME standard without any specification ever having been written.

=> https://ploum.net/winmail-dat-syndrome/

It may gradually fade out, but most mail clients still carry code to deal with the never-documented winmail.dat format.

Email has a bad protocol for sending and a bad file format, but what about receiving? It’s no better.

Receiving Problem

As Vint Cerf acknowledged, the whole IP stack was built at a time when memory was really expensive. There was a feeling that storing data would always be more expensive than sending it directly. That’s why we envision the Internet as real-time connections only: as soon as you unplug your computer, you are off the Internet. Each piece of software has to deal with disconnection independently, usually by popping up an error in the face of the user and telling them to check their connection.

As it was clear that most personal computers were not connected all the time, yet another protocol was created to retrieve emails from mail servers and store them on a non-mail server computer: POP3.

The protocol was bad enough not to allow any synchronisation of things like folders or read status. Once again, the goal was to save memory on the server: users would download their mail once and for all, after which it was removed from the server (there was an option to leave mail on the server, but it was rarely used, as space was limited and, with some clients, you had to regularly redownload all your email).
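A typical POP3 exchange (sketched from RFC 1939; the mailbox contents are invented) shows how little the protocol offers: commands to list, retrieve and delete messages, and nothing at all for folders or read/unread state:

C: USER alice
S: +OK
C: PASS secret
S: +OK maildrop has 2 messages
C: RETR 1
S: +OK 340 octets
S: <message content, terminated by a line containing a single dot>
C: DELE 1
S: +OK message 1 deleted
C: QUIT
S: +OK bye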

IMAP was introduced to fix POP3’s flaws. IMAP is a very complex protocol with fuzzy parts open to interpretation. It is well known among mail administrators that some particular IMAP servers are not compatible with some particular clients. IMAP was so severely abused by providers that most mail clients handle some of them separately (like Gmail or Outlook).

IMAP was also created with a permanent connection in mind, and most mail clients expect one, throwing an error at each offline action. If an offline mode exists, like in Thunderbird, it must be manually configured and is clearly an afterthought.

Worst of all: every IMAP client stores emails in its own particular way. There are multiple standards, like mbox and Maildir, but each client seems to have its own interpretation of them. If you ever browse Stack Overflow, you will find lots of people asking a naively simple question: « How can I access my email locally, both with Mutt on the command line and with a graphical mail client? » Sounds reasonable, right? After all, you did all the increasingly obscure work of configuring mbsync/isync to get your mail on your local computer in a standardised Maildir format (a setup which randomly stops working from time to time, until you realize that not receiving any email for a few days is not normal), not to mention the cryptic config file you had to copy/paste to be able to send email through Postfix. Why not access that mail with something other than Mutt?

Right ?

Well, you can’t. Or, as all the Stack Overflow answers will tell you, you « only » need to install a local IMAP server and point the graphical client at it on 127.0.0.1.

Simply have a look at how to use Himalaya, a neat and fresh CLI mail client, offline.

=> https://github.com/soywod/himalaya/wiki/Tips:offline-with-isync-dovecot
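For a flavour of the kind of configuration being complained about, here is a minimal mbsyncrc sketch that mirrors one IMAP account into a local Maildir (the host, user and paths are placeholders, not a recommendation):

IMAPAccount example
Host imap.example.com
User alice@example.com
PassCmd "pass show mail/example"
SSLType IMAPS

IMAPStore example-remote
Account example

MaildirStore example-local
Path ~/Mail/example/
Inbox ~/Mail/example/Inbox

Channel example
Far :example-remote:
Near :example-local:
Patterns *
Create Both
SyncState *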

Email has become a monstrosity beyond reasonable comprehension while retaining inherent flaws such as plaintext transport. Every email out there is sent and stored in plaintext (we can easily agree that PGP/GPG use is anecdotal) and, through HTML and inline pictures, most of them try to track you to know when you open them.

The whole ecosystem is becoming ever more centralised, with some modern mail providers not offering any way to get your mail out of their service at all, arguing, with reason, that IMAP sucks and does not permit some features (the hipsterish Hey! and the privacy-oriented Tutanota only give you access to your email through their own proprietary webmail). You can’t even read your mail offline, by design, and nobody blinks an eye.

The spam problem

But, at the very least, we have solved the spam problem, haven’t we?

According to my own statistics, we indeed solved most of the random spam. The spam that was plaguing the network 20 years ago seems to have been greatly reduced or, at the very least, is easily blocked.

But, instead, we are now receiving 10 times the amount of what I call « corporate and notification spam »: unsolicited emails that come from real, identified companies and people. They send you thousands of emails that, they think, should interest you. Most of the time, you can quickly identify why you are receiving a given email: it’s linked to one of your accounts somewhere. At worst, your address has merely been sold to « trusted partners ». They always provide an option to unsubscribe, even if they are very sad to see you leave. Unsubscribing only works for a short while, because they basically create a new mailing list for every mail sent, and this one should really interest you.

User tracking is on by default in most mailing-list tools, and it shows marketers that most of their emails are never opened. This prompts them to send even more emails, on the theory that, at some point, you will get tired of not opening them. Providers like Gmail heavily spy on how you use email. It is widely known in the mailing-list community that Gmail learns to mark as spam mail from senders that are rarely opened, prompting marketers to regularly change their newsletter address and to hunt for catchy subject lines. Sounds familiar?

This effect is worsened by the fact that email has become the lazy default for everything. If anything happens on a service, even a non-commercial one, mail is sent. Facebook and LinkedIn are quite infamous for regularly adding « notification categories » to which you are subscribed by default, even if you previously found the hidden « unsubscribe me from everything » setting. Besides lazy engineering, as Szczeżuja points out in the link below, it is obviously a cheap way to remind their users that they exist.

=> gemini://szczezuja.flounder.online/gemlog/2021-10-10-dont-be-like-a-developer.gmi

We are now forced to rely on hundreds of very centralised web services for everything, and each of those services fills your inbox by default. When you start a new job at a big company, the first action of HR is to subscribe you to a bunch of mailing lists. It’s even worse in the academic sector. From the « Weekly news » you don’t care about to « There’s no paper left in the printer on the second floor », email has become a centralised broadcast network. You are forced to be on the receiving end while every central authority known to man tries to broadcast as much as it can.

Your mail inbox has become a battlefield where everyone with a bit of authority fights for your attention, trying to occupy your mental space even if you never open the mail. I’m an Inbox 0 extremist, and I’m mortified each time I catch a glance of a « normal person’s inbox »: basically a long list of companies (lots of Facebook, but also local businesses) where barely one mail out of ten has ever been opened.

Ever wondered why Gmail doesn’t display advertising in its interface? Because it does! All that mail neatly lined up is basically cheap advertising. What’s Google’s benefit? The clumsier your inbox is, the more appealing their automatic triaging rules look. Google is already deciding what you write (by suggesting replies) and what is important to look at (with their « smart folders »). It’s not far-fetched to imagine that, at some point, you will have to pay Google for your emails to be considered important rather than spam. Maybe they are already doing it through some kind of « trusted partner program ». It is, after all, the reason why Facebook created its feed: choosing for you what is important to see and monetising that access to your brain.

There should be a better way.

I’m an Inbox 0 extremist. I unsubscribe from everything that contains an unsubscribe link. I spent the last two years sending GDPR removal requests to every company that sent me an unwanted email. The first three months were completely exhausting, but once the first bulk was done, it became simple, rigorous hygiene.

I could tell many anecdotes about how companies handle GDPR requests: how I found that I was in some commercial databases and tracked down their owners; how I permanently removed more than 300 online accounts; how I sometimes receive a very unexpected email from a company a year after it told me all information about me had been removed. Or how I got stuck in a loop where being unsubscribed from a newsletter required posting on a support forum that automatically subscribed everyone who posted on it. The latest story in town: a company lying that it had removed my data while I could still log in (they had simply renamed my user account to « removed » without deleting anything).

Funny stories about human stupidity and dishonesty but, in summary, it mostly works. The effort pays off.

My inbox rarely has more than 5 emails at once. In the last year, I received only 20-25 unwanted emails per month, including every random spam and phishing attempt. Half of them are marketers who want to advertise on my website.

But how many people are able to put in this time, effort and dedication only to get a less polluted inbox?

Not to mention that, despite all my motivation, I have not yet managed to build an offline setup where I can read and reply to email offline, with everything synchronised once I connect, while still being able to use the webmail if I want. Mbsync configuration is cryptic and randomly stops working for no reason. Some mails are, for some reason, never downloaded. I know it should be possible, but it is very hard or too convoluted (no, I’m not installing a local IMAP server on my laptop).

I spent so much time on this because I love emails. But there should be a better way.


This text is published under the CC-By BE license.

October 19, 2021

Transparent encryption is relatively easy to implement, but if you don't understand what it actually means or why you are implementing it, you will probably assume that it prevents the data from being accessed by unauthorized users. Nothing could be further from the truth.

October 17, 2021

Unbound

In previous blog posts, I described how to set up stubby as a DNS-over-TLS resolver. I used stubby on my laptop(s) and unbound on my internal network.

I migrated to unbound last year and created a docker container for it. Unbound is a popular DNS resolver; it is less well known that you can also use it as an authoritative DNS server.

This work was based on Debian Buster. I have migrated the container to Debian Bullseye and reorganized it a bit, to make it easier to store the zone configuration outside the container, for example in a ConfigMap or persistent volume on Kubernetes.

Version 2.0.0 is available at https://github.com/stafwag/docker-stafwag-unbound.

Changelog:

  • Updated the base image to debian:bullseye.
  • Updated create_zone_config.sh to be able to run outside the container.
  • Removed the zones.conf generation from the entrypoint.
  • Start the container as the unbound user.
  • Updated logging.conf.
  • Set the pidfile to /tmp/unbound.pid.
  • Added remote-control.conf.
  • Updated the documentation.

docker-stafwag-unbound

Dockerfile to run unbound inside a docker container. The unbound daemon will run as the unbound user. The uid/gid is mapped to 5000153.

Installation

Clone the git repo:

$ git clone https://github.com/stafwag/docker-stafwag-unbound.git
$ cd docker-stafwag-unbound

Configuration

Port

The default DNS port is set to 5353; this port is mapped to the default port 53 by the docker run command (see below). If you want to use another port, you can edit etc/unbound/unbound.conf.d/interface.conf.
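For example, a minimal interface.conf listening on port 5353 on all interfaces could look like the sketch below (an illustration, not necessarily the exact file shipped in the repository):

server:
  interface: 0.0.0.0@5353
  interface: ::0@5353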

scripts/create_zone_config.sh helper script

The create_zone_config.sh helper script can help you create the zones.conf configuration file. It is executed during the container build and creates zones.conf from the data files in etc/unbound/zones.

If you want to use a docker volume, or ConfigMaps/persistent volumes on Kubernetes, you can use this script to generate the zones.conf from a zone data directory.

create_zone_config.sh has the following arguments:

  • -f: the zones.conf file to create. Default: /etc/unbound/unbound.conf.d/zones.conf
  • -d: the directory with the zone data source files. Default: /etc/unbound/zones/
  • -p: the zone file path to use in zones.conf. Default: the realpath of the zone files
  • -s: skip chown/chmod

Use unbound as an authoritative DNS server

To use unbound as an authoritative DNS server - a DNS server that hosts DNS zones - add your zone files to etc/unbound/zones/.

During the creation of the image scripts/create_zone_config.sh is executed to create the zones configuration file.

Alternatively, you can also use a docker volume to mount /etc/unbound/zones/ to your zone files. And a volume mount for the zones.conf configuration file.

You can use subdirectories. Each zone file needs to have $ORIGIN set to your zone origin.

Use DNS-over-TLS

The default configuration uses quad9 to forward DNS queries over TLS. If you want to use another vendor, or if you want to query the root DNS servers directly, you can remove this file.
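For reference, a quad9 DNS-over-TLS forward configuration in unbound typically looks like the sketch below; the certificate bundle path varies per distribution:

server:
  tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt

forward-zone:
  name: "."
  forward-tls-upstream: yes
  forward-addr: 9.9.9.9@853#dns.quad9.net
  forward-addr: 149.112.112.112@853#dns.quad9.net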

Build the image

$ docker build -t stafwag/unbound . 

To use a different base image, you can pass the --build-arg BASE_IMAGE=your_base_image option.

$ docker build --build-arg BASE_IMAGE=stafwag/debian:bullseye -t stafwag/unbound .

Run

Recursive DNS server with DNS-over-TLS

Run

$ docker run -d --rm --name myunbound -p 127.0.0.1:53:5353 -p 127.0.0.1:53:5353/udp stafwag/unbound

Test

$ dig @127.0.0.1 www.wagemakers.be

Authoritative DNS server

If you want to use unbound as an authoritative DNS server, you can follow the steps below.

Create a directory with your zone files:

[staf@vicky ~]$ mkdir -p ~/docker/volumes/unbound/zones/stafnet
[staf@vicky ~]$ cd ~/docker/volumes/unbound/zones/stafnet
[staf@vicky stafnet]$ 

Create the zone files

stafnet.zone:

$TTL  86400 ; 24 hours
$ORIGIN stafnet.local.
@  1D  IN  SOA @  root (
            20200322001 ; serial
            3H ; refresh
            15 ; retry
            1w ; expire
            3h ; minimum
           )
@  1D  IN  NS @ 

stafmail IN A 10.10.10.10

stafnet-rev.zone:

$TTL    86400 ;
$ORIGIN 10.10.10.IN-ADDR.ARPA.
@       IN      SOA     stafnet.local. root.localhost.  (
                        20200322001; Serial
                        3h      ; Refresh
                        15      ; Retry
                        1w      ; Expire
                        3h )    ; Minimum
        IN      NS      localhost.
10      IN      PTR     stafmail.stafnet.local.

Make sure that the volume directory and zone files have the correct permissions.

$ sudo chmod 750 ~/docker/volumes/unbound/zones/stafnet/
$ sudo chmod 640 ~/docker/volumes/unbound/zones/stafnet/*
$ sudo chown -R root:5000153 ~/docker/volumes/unbound/

Create the zones.conf configuration file.

[staf@vicky stafnet]$ cd ~/github/stafwag/docker-stafwag-unbound/
[staf@vicky docker-stafwag-unbound]$ 

The script executes a chown and chmod on the generated zones.conf file; for this reason it is run with sudo.

[staf@vicky docker-stafwag-unbound]$ sudo scripts/create_zone_config.sh -f ~/docker/volumes/unbound/zones.conf -d ~/docker/volumes/unbound/zones/stafnet -p /etc/unbound/zones
Processing: /home/staf/docker/volumes/unbound/zones/stafnet/stafnet.zone
origin=stafnet.local
Processing: /home/staf/docker/volumes/unbound/zones/stafnet/stafnet-rev.zone
origin=10.10.10.IN-ADDR.ARPA
[staf@vicky docker-stafwag-unbound]$ 

Verify the generated zones.conf

[staf@vicky docker-stafwag-unbound]$ sudo cat ~/docker/volumes/unbound/zones.conf
auth-zone:
  name: stafnet.local
  zonefile: /etc/unbound/zones/stafnet.zone

auth-zone:
  name: 10.10.10.IN-ADDR.ARPA
  zonefile: /etc/unbound/zones/stafnet-rev.zone

[staf@vicky docker-stafwag-unbound]$ 

Run the container

$ docker run --rm --name myunbound -v ~/docker/volumes/unbound/zones/stafnet:/etc/unbound/zones/ -v ~/docker/volumes/unbound/zones.conf:/etc/unbound/unbound.conf.d/zones.conf -p 127.0.0.1:53:5353 -p 127.0.0.1:53:5353/udp stafwag/unbound

Test

[staf@vicky ~]$ dig @127.0.0.1 soa stafnet.local

; <<>> DiG 9.16.1 <<>> @127.0.0.1 soa stafnet.local
; (1 server found)
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 37184
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;stafnet.local.     IN  SOA

;; ANSWER SECTION:
stafnet.local.    86400 IN  SOA stafnet.local. root.stafnet.local. 3020452817 10800 15 604800 10800

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Sun Mar 22 19:41:09 CET 2020
;; MSG SIZE  rcvd: 83

[staf@vicky ~]$ 


October 13, 2021

Update: Found it, fixed it. We can computer, after all :) We seem to be losing some email sent to our mailing lists. If you send anything important, please check the list archive to make certain it has arrived.

Last week, Drupalists around the world gathered virtually for DrupalCon Europe 2021.

In good tradition, I delivered my State of Drupal keynote. You can watch the video of my keynote, download my slides (156 MB), or read the brief summary below.

I talked about end-of-life schedules for various Drupal versions, delivered some exciting updates on Drupal 10 progress, and covered the health of the Drupal community in terms of contributor dynamics. Last but not least, I talked about how we are attracting new users and contributors by making it much easier to contribute to Drupal.

Drupal 7 and Drupal 8 end-of-life

If you are using Drupal 7 or Drupal 8, time is of the essence to upgrade to Drupal 9. Drupal 7 end-of-life is scheduled for November 2022.

Drupal 8's end-of-life is more pressing, as it is scheduled for November 2nd, 2021 (i.e. in less than a month). If you are wondering why Drupal 8 is end-of-life before Drupal 7, that is because we changed how we develop Drupal in 2016. These changes have been really great for Drupal. They've made it much easier to upgrade to the latest version without friction.

As a community, we've spent thousands of hours building tools and automations to make migrating to Drupal 9 as simple as possible.

Drupal 10 timeline

Next, I gave an update on Drupal 10 timelines. Timing-wise, our preferred option would be to ship Drupal 10 in June 2022. That date hinges on how much work we can get done in the next few months.

Drupal and timelines

Drupal core strategic initiatives

After these timelines, I walked through the six strategic initiatives for Drupal core. We've made really great progress on almost all of them. To see our progress in action, I invited key contributors to present video updates.

A slide with progress bars for each of the 6 initiatives; 3 of them are over 80% complete.

Project Browser

You may recall that I introduced the Project Browser initiative in my April 2021 State of Drupal presentation. The idea is to make it easy for site builders to find and install modules right from their Drupal site, much like an app store on a smartphone. The goal of this initiative is to help more evaluators and site builders fall in love with Drupal.

Today, just six months later, we have a working prototype! Take a look at the demo video:

Decoupled Menus

Drupal is an excellent headless CMS with support for REST, JSON:API and GraphQL.
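As a quick illustration of the headless side: with the JSON:API module enabled, fetching content is a plain HTTP GET. A minimal Python sketch against a hypothetical Drupal 9 site (example.com and the article content type are assumptions):

import requests

# /jsonapi is the default base path of Drupal core's JSON:API module.
resp = requests.get(
    "https://example.com/jsonapi/node/article",
    headers={"Accept": "application/vnd.api+json"},
)
for node in resp.json()["data"]:
    print(node["attributes"]["title"])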

As a next step in our evolution, we want to expand the number of web service endpoints Drupal offers, and build a large repository of web components and JavaScript framework integrations.

With that big goal in mind, we launched the Decoupled Menus initiative about one year ago. The goal was to create a small web component that could ship quickly and solve a common use case. We focused on a single component so we could use what we learned from it to improve our development infrastructure and policies, helping us create many more web service endpoints and JavaScript components.

I talked about the various improvements we made to Drupal.org to support the development and management of more JavaScript components. I also showed that we've now shipped Drupal menu components for React, Svelte and more. Take a look at the video below to see where we're at today:

Our focus on inviting more JavaScript developers to the Drupal community is a transformative step. Why? Headless momentum is growing fast, largely driven by the growth of JavaScript frameworks. Growing right along with it is the trend of composability, or the use of independent, API-first micro-services. Building more web service endpoints and JavaScript components extends Drupal's leadership in both headless development and composability. This will continue to make Drupal one of the most powerful and flexible tools for developers.

Easy Out of the Box

The goal of this initiative is to have Layout Builder, Media, and Claro added to the Standard Profile. That means these features would be enabled by default for any new Drupal user.

Unfortunately, we have not made a lot of progress on this initiative. In my presentation, I talked about how I'd like to find a way for us to get it done by Drupal 10. My recommendation is that we reduce the scope of work that is required to get them into Standard Profile.

Automatic Updates

The Automatic Updates initiative's goal is to make it easier to update Drupal sites. Vulnerabilities in software, if left unchecked, can lead to security problems. Automatic updates are an important step toward helping Drupal users keep their sites secure.

The initiative made excellent progress. For the very first time, I was able to show a working development version:

Drupal 10 Readiness

The Drupal 10 Readiness initiative is focused on upgrading the third-party components that Drupal depends on. This initiative has been a lot of work, but we are largely on track.

A slide from the DriesNote saying that the Drupal 10 upgrade work is 300% more automated than Drupal 9.

The most exciting part? The upgrade to Drupal 10 will be easy thanks to careful management of deprecated code and continued investment in Rector. As it stands, upgrading modules from Drupal 9 to Drupal 10 can be almost entirely automated, a big 300% improvement compared to the Drupal 8 to Drupal 9 upgrade.

New front end theme

We are nearly at the finish line for our new front end theme, Olivero. In the past few months, a lot of effort has gone into ensuring that Olivero is fully accessible, consistent with our commitment to accessibility.

Olivero already received a glowing review from the National Federation of the Blind (USA):

Olivero is very well done and low-vision accessible. We are not finding any issues with contrast, focus, or scaling, the forms are very well done, and the content is easy to find and navigate.

Something to be really proud of!

The health of Drupal's contribution dynamics

Next, I took a look at Drupal's contribution data. These metrics show that contributions are down. At first I panicked when I saw this data, but then I realized that there are some good explanations for this trend. I also believe this trend could be temporary.

Contribution metrics

To learn more about why this was happening, I looked at the attrition rate of Drupal's contributors — the percentage of individuals and organizations who stopped contributing within the last year. I compared this data to industry averages for software and services companies.

Slide with data that shows Drupal's top contributors are very loyal
While typical attrition for software and services companies is considered "good" at 15%, Drupal's attrition rate for its Top 1,000 contributors is only 7.7%. The attrition rate for Drupal agencies in the Top 250 organizations is only 1.2%.

I was very encouraged by this data. It shows that we have a very strong, loyal and resilient community of contributors. While many of our top contributors are contributing less (see the full recording for more data), almost none of them are leaving Drupal.

There are a number of reasons for the slowdown in contribution:

  • The COVID-19 pandemic has made contribution more difficult and/or less desirable.
  • We are in the slow period of the "Drupal Super Cycle" — after every major release, work shifts from active development to maintenance.
  • Anecdotally, many Drupal agencies have told me they have less time to contribute because they are growing so fast (see quotes in image below). That is great news for Drupal adoption.
  • Drupal is a stable and mature software project. Drupal has nearly all the features organizations need to deliver state-of-the-art digital experiences. Because of Drupal's maturity, there are simply fewer bug fixes and feature improvements to contribute.
  • Rector automations mean there is simply less manual work left to contribute. It's good to work smarter, not harder.

I'll expand on this more in my upcoming Who sponsors Drupal development post.

Slide with quotes from Drupal agency CEOs stating that they are growing fast

The magic of contribution

I wrapped up my presentation by talking about some of the things that we are doing to make it easier to adopt Drupal. I highlighted DrupalPod and Simplytest as two examples of amazing community-driven innovations.

A slide promoting DrupalPod and Simplytest

After people adopt Drupal, we need to make it easier for them to become contributors. To make contribution easier, Drupal has started adopting GitLab in place of our home-grown development tools. Many developers outside the Drupal ecosystem are accustomed to tools like GitLab, and allowing them to use tools they already know is an important step in attracting new contributors. Check out this video to get the latest update on our GitLab effort:

Thank you

To wrap up I'd like to thank all of the people and organizations who have contributed to Drupal since the last DriesNote. It's pretty amazing to see the momentum on our core initiatives! As always, your contributions are inspiring to me!

Thank you for the many contributions

October 11, 2021

FOSDEM 2022 will take place on Saturday 5 and Sunday 6 February 2022. The exact format is yet to be decided. As every year, we started planning in earnest in August. As evident from our lack of updates since then, it's a bit harder this year. There are a lot of strong opinions about what the best, or least bad, FOSDEM 2022 could look like. Finding consensus is harder than we would like it to be. We will do a better job of keeping you informed going forward. Apologies; we're also burned out and just want…

October 08, 2021

Dear parents,

It is October 8, 2051. I am 30 years old today, with some perspective on my childhood, my upbringing and the world I grew up in.

My generation faces a problem without precedent in human history: having to manage the waste of the previous generation.

Until the 1970s, the planet regenerated naturally. Human waste was absorbed and recycled spontaneously. From your generation onward, that was no longer the case. You were the very first generation in history to produce and consume more than the earth could sustain.

You left us holding the surplus waste.

The worst part is that you knew.

When I consult the archives and my earliest childhood memories, your era was hardly welcoming. You had cars burning fossil fuel and smokers in the heart of cities! Today, the electric car is only used to travel between urban centers. They are strictly forbidden in urban areas, where everything is done on foot, by bicycle, by scooter or by autonomous tram-taxi. And yet our air is less breathable than yours!

Thank you for working toward this transformation. Perhaps it was the minimum required for us to survive. Because while you did act, often with a great deal of good will, it was rarely in the right direction.

Like that obsession you had with saving electricity. I find it hard to believe that, even in your day, electricity was not abundant and, for individuals, barely polluting. If the archives are to be believed, the 2020s saw regular peaks of electricity overproduction due to solar panels, while you were dismantling perfectly functional nuclear plants. Did you really waste your time convincing each other to install energy-saving bulbs that were so polluting to produce? A bit like peeing in the shower or not printing out emails. Did you seriously think we were going to thank you for that?

You seem to have spent so much energy and time trying, sometimes in vain, to save 10% of your private consumption of what was in any case a drop in the ocean compared to industry. You guilt-tripped individuals even though personal consumption represented a quarter of the electricity consumed globally (a third of that for heating alone). Even if you had completely stopped consuming electricity as individuals, the impact for us would have been imperceptible.

On the other hand, you saddled us with gigatonnes of waste from appliances nobody wanted anymore because they consumed a little too much. Every year, guilt-tripped by marketing, you equipped yourselves with a new generation of appliances that consumed "less", with "fair trade" clothes, supposedly recyclable water bottles and bamboo tableware. All of it shipped around the world to sit briefly in your cupboards before filling up the landfills we now live on.

You seem to have gone out of your way to buy as many useless gadgets as possible, while reassuring yourselves because, that year, manufacturing the gadget in question had emitted 10% less CO2 than the previous year's model, and the packaging was "almost entirely recyclable". Its components had circled the globe three times but, rest assured, two trees had been planted. Even today, we struggle to understand how you physically had the time to buy so much. It seems you must have spent more time "shopping" and filling your cupboards than actually using your purchases. Cupboards stuffed to bursting that we have to empty in the days after your death, half mourning your loss, half grumbling at your propensity to keep everything.

Consuming gadgets was perhaps the only way you could imagine to pursue your generation's obsession: creating jobs. Ever more jobs. Some of those jobs, moreover, consisted explicitly of convincing you to buy more. How could you morally carry out such explicitly morbid tasks? Because it was your job, no doubt. History shows that the worst atrocities were committed by people for whom "it was their job". Pushing others to consume is now counted among those historical crimes against humanity. Using ecology as a pretext to consume even more only aggravates the guilt of those who were involved.

For 40 years, your policy was to create as many jobs as possible, jobs whose primary role was to turn resources into waste. For 40 years, you worked yourselves ragged to fill your planetary garbage can as fast as possible: us, the year 2050.

We, your children, are your garbage can. That faraway country that seemed so abstract to you: we live in it.

It took until our generation to decide that any seller of a good or packaging that is neither immediately consumable nor naturally degradable is required to buy back its products at half price, whatever their condition. So each part, each component, travels back up the chain. In the end, the producer is in charge of disposal and forced to manage its own impact.

Of course, there was enormous disruption in logistics services, which suddenly had to work in both directions. Industries adapted by trying to develop products that would last as long as possible and by favoring repairability and ease of disassembly. Suddenly, that was a selling point. Marketing did not take long to turn its coat and try to convince you that renting, even on very long terms, was freedom compared to owning. Repair created an economic activity that you would perhaps equate with jobs. Paradoxically, a natural economic activity developed the day we stopped trying to create it artificially. The day we accepted that it should be possible to live without a job. In this way we hope to become, once again, a generation that produces no more waste than the planet can absorb. Whether in CO2, in microparticles or in heavy metals.

Global warming and forest fires are not helping us, but we have good hope of getting there.

Still, even if we do get there, we are stuck with your 50 years of waste. They are not about to disappear, those cheap plastic toys bought to quiet the youngest child in the store, or the super-revolutionary phone that became a has-been paperweight two years later. Not to mention that the price of their manufacture and transport accompanies each of our breaths, in air saturated with CO2.

Each of our breaths reminds us of your existence. Makes us wonder why you did not act. Why did we have to wait to bury you or retire you before we could do anything?

And then some of us tell me they had parents who smoked. That it was normal to smoke in the street near children, even in houses and cars.

So your generation spent money with the sole and unique aim of destroying its own health, destroying the health of its own children, while polluting the atmosphere and the water? You financed a flourishing industry whose sole and unique objective was the destruction of the health of its customers, of its customers' children, of its customers' entourage and of nature? It is estimated today that nearly 1% of the excess CO2 in the atmosphere is due to the tobacco industry. We could have done without that.

On the other hand, it must be said, we have plenty of photos and historical documents proving that you were activists, that you signed petitions and that you "marched for the climate". While smoking cigarettes.

It has become a running joke when we talk about you. The generation of smoking environmentalists. The image became famous as an illustration of that mixture of useless, lazy collective good will, that propensity to guilt-trip individuals over trifles, to carry out symbolic collective actions with nothing at stake, and to look away from genuinely morbid behavior.

You shouted "Priority to saving the planet!". To which the politicians replied "Absolutely! Priority to the economy and to saving the planet!". Then, throats a little hoarse, everyone went home satisfied. Before organizing a big participatory workshop on "transcendental meditation and composting toilets" where you passed around a joint of industrial tobacco mixed with organic weed from the shared vegetable garden.

Our generation is permissive. In many parts of the world, recreational drug use is allowed or tolerated. On the other hand, any emission of toxic particles is strictly forbidden in public places. It really was not hard to put in place, and the only reason we can see for why you did not do it is that you did not want to.

Despite your speeches, you absolutely did not want to build a better world for us. You only had to ask yourselves the question: "Do I want my children to smoke?" Even among inveterate smokers, I think very few would have answered yes. "Do I want my children to bear the ecological weight of twenty mobile phones on which I spent, in total, a year's salary? Of thousands of kilometers of diesel and five company cars?" It would have been enough to ask yourselves the question. Banning cigarettes in public space would have been a very simple way of showing that you thought of us at least a little.

But you were not thinking of us. You never thought of us. You just wanted to clear your conscience while changing strictly nothing about your habits, even the most stupid ones. In your defense, you did not inherit an easy situation from your own parents either: the generation which, after its post-May-68 hangover, grabbed all the wealth and kept it, voting Reagan/Thatcher and extending its life expectancy. Without ever making room for you.

When we discuss it among ourselves, we think that, in the end, we are lucky to be here. We have to manage your garbage, but you could, for the same price, have annihilated us. You treated us like virgin territory, a faraway country to conquer and exploit for its resources at any cost. A country that belonged to you by right, since the natives offered no active resistance.

What is done is done. We are left with the arduous task of not doing the same, of trying to offer a better world to our children. Not by claiming to think of them to clear our conscience, but by trying to think the way they will. By treating them as a friendly country to be respected, a partner. No longer as a bottomless garbage can.

Signed: your future

Author's note: The idea of treating the future as a country with which we maintain international relations was inspired by Vinay Gupta during a meeting at the European Parliament in 2017. Vinay later published a very interesting analysis in which he suggests seeing all our actions through the filter of the future we are preparing for the children of this planet.

https://medium.com/@vinay_12336/a-simple-plan-for-repairing-our-society-we-need-new-human-rights-and-this-is-how-we-get-them-cee5d6ededa9

Although these two inspirations were not conscious while this text was being written, they seem unmistakable to me on rereading.

Photo by Simon Hurry on Unsplash


This text is published under the CC-By BE license.

October 04, 2021

Security vendors are touting the benefits of "zero trust" as the new way to approach security and security-conscious architecture. But while some principles within the zero trust mindset emerged over the last dozen years, most of the content in zero trust discussions is tied to age-old security propositions.

October 01, 2021

Cover Image - Tackling
"The problem isn't that Johnny can't read. The problem isn't even that Johnny can't think. The problem is that Johnny doesn't know what thinking is; he confuses it with feeling."
– Thomas Sowell

I'm not one to miss an important milestone, so let me draw your attention to a shift in norms taking place in the Ruby open source community: tolerance of differing views is now no longer expected.

This ought to be a remarkable change: previously, a common refrain was that "in order to be tolerant, we cannot tolerate intolerance." This was the rationale for excluding certain people, under the guise of inclusivity. Well, that line of reasoning is now on its way out, and intolerance is now openly advocated for, with lots of heart emoji to boot.

heart

The Anatomy of Man - Da Vinci (1513)

Code of Misconduct

The source for this is a series of changes to the Ruby Code of Conduct, which subtly tweak the language. The stated rationale is to "remove abuse enabling language."

There are a few specific shifts to notice here:

  • Objections no longer have to be based on reasonable concerns.
  • All that matters is that someone could consider something to be harassing behavior.
  • Behavior is now mainly unacceptable if it targets protected classes.
  • Tolerance of opposing views is removed entirely as expected conduct.

Also noticeable is that this is done through multiple small changes, each stacking on top of the next over a few days, as a perfect illustration of "boiling the frog."

This ought to set off alarm bells. If concerns no longer have to be reasonable, then completely unreasonable complaints will have to be taken seriously. If opposing views are no longer welcome, then casting doubt on accusations of abuse is also misconduct. If only protected classes are singled out as worthy of protection, then it creates a grey area of traits which are acceptable to use as weapons to bully people.

It shouldn't take much imagination to see how these changes can actually enable abuse, if you know how emotional blackmail works: it's when an abuser makes other people responsible for managing the abuser's feelings, which are unstable and not grounded in mutual respect and obligation. If Alice's behavior causes Bob to be upset, Bob castigates Alice as an offender. If Bob's behavior causes Alice to be upset, then Alice is making Bob feel unsafe, and it's still Alice's fault, who needs to make amends.

A good example is how the social interaction style of people with autism can be trivially recast as deliberate insensitivity. Cancelled Googler James Damore made exactly this point in The Neurodiversity Case for Free Speech. This is also excellently illustrated in Splain it to Me which highlights how one person's gift of information can almost always be recast as an attempt to embarrass another as ignorant.

For all this to seem sensible, the people involved have to have enormous blinders on, suffering from the phenomenon that Sowell so aptly described: the focus isn't on thinking out a set of effective and consistent rules, but rather on letting the feelings do the driving, letting the most volatile members dominate over everyone else. Quite possibly they themselves have one or more emotional abusers in their lives, who have trained them to see such asymmetry as normal. "Heads I win, tails you lose" is a recipe for gaslighting, after all.

The Ruby community is of course free to decide what constitutes acceptable behavior. But there is little evidence there is widespread support for such a change. On HackerNews, the change in policy was widely criticized. Discussion on the proposals themselves was locked within a day, for being "too heated," despite involving only a handful of people. This moderator action seems itself an example of the new policy, letting feelings dominate over reality: after proposing a controversial change, maintainers plug their ears because they do not wish to hear opposing views, even before they are actually uttered in full.

A man kneeling and placing a laurel branch upon a pile of burning books

Marco Dente (ca. 1515-1527)

Harassment Policy

Way back in 2013, something similar happened at the PyCon conference in the notorious DongleGate incident. After overhearing a joke between two men seated in the audience, activist Adria Richards decided to take the offenders' picture and post it on Twitter. She was widely praised in media for doing so, and it resulted in the loss of the jokester's job.

What was crucial to notice, and which many people didn't, was that "harassing photography" was explicitly against the conference's anti-harassment policy. By any reasonable interpretation of the rules, Richards was the harasser, who wielded social media as a weapon for intimidation. She should've been sanctioned and told in no uncertain terms that such behavior was not welcome.

Of course, that did not happen. Citing concerns about women in tech, she appealed exactly to those "protected classes" to justify her behavior. She cast herself in the role of defender of women, while engaging in an unquestionable attack.

It's easy to show that this was not motivated by fairness or equality: had the joke been made by a woman instead, Richards wouldn't have been able to make the same argument. The accusation of sexism seemed to derive from the sexual innuendo in the joke, an assumed male-only trait. Indeed, the only reason it worked was because of her own sexism: she assumed that when one man makes a joke, he is an avatar of oppression by men in the entire industry. She treated him differently because of his sex, so her accusation of sexism was a cover for her own.

Even more ridiculous was that her actual job was "Developer Relations." She was supposedly tasked with improving relations with and between developers, but did the exact opposite, creating a scandal that would resonate for years. What it really showed was that she was volatile and a liability for any company that would hire her in this role.

Somehow, this all went unnoticed. Nobody involved seemed to actually think it through. The entire story ran purely on hurt feelings, narrating the entire experience from one person's subjective point of view. This is now a common thread in many environments that are supposed to be professional: the people in charge have no idea how to keep their own members in check, and allow them to hijack everyone's resources and time for grievances and external drama.

As a rare counter-example, consider the crypto exchange Coinbase. They explicitly went against the grain a year ago by announcing they were a mission-focused company that would concentrate its efforts on its actual core competence. Today, things are looking much brighter for them, as the negative response and doom-saying in the media turned out to be entirely irrelevant. On the inside, the reaction was mostly positive. The employees who left in anger were eventually replaced, with a group of equally diverse people.

The School of Athens

The School of Athens - Raphael (1508)

Professing

Professionalism seems to be a concept that is very poorly understood. In the direct sense, it's a set of policies and strategies that allow people with wildly different interests to come together and get productive work done regardless.

In a world where many people wish to bring "their entire selves to work," this can't happen. If it's more important to keep everyone's feelings in check, and less important to actually deliver results, then there's no room for fixing mistakes. It creates an environment where pointing out problems is considered an unwelcome insensitivity, to which the response is to gang up on the messenger and shoot them for being abusive.

The most common strategy is simply to shame people into silence. If that doesn't work, their objections are censored out of sight, and then reframed as bigotry if anyone asks. The narrative machine will spin up again, using emotionally charged terms such as "harassment" and "sexism."

The idea of "victim blaming" is particularly pernicious here: any time someone invokes it, without knowing all the details, they must have pre-assumed they know who is the victim and who is the offender. This is where the concept of "protected classes" comes into play again.

While it's supposed to mean that we cannot discriminate e.g. on the basis of sex, what it means in practice is that one assumes automatically that men are the offenders and that women are being victimized. Even if it's the other way around. Indeed, such a model is the cornerstone of intersectionality, a social theory which teaches that on every demographic axis, one can identify exclusive categories of oppressors and the oppressed. White oppresses black, straight oppresses gay, cis oppresses trans, and so on.

If you engage such bigoteers in debate, the experience is pretty much like talking to a brick wall. You are not speaking to someone who is interested in being correct, merely in remaining on the right side. This seems to be the axiom from which they start, and a core part of their self-image. If you insist on peeling off the fallacies and mistakes in reasoning, you only invoke more ire. Your line of reasoning is upsetting to them, and therefore, you are a bigot who needs to leave, or be forcefully expelled. In the name of tolerance, for the sake of diversity and inclusion, they flatten the actual complexities of life and become utterly intolerant and exclusionary.

It's no coincidence that these cultural flare ups first came to a head in environments like open source, where results speak the loudest. Or in STEM and video games, where merit reigns supreme. When faced with widespread competence, the incompetent resort to lesser weapons and begin to undermine social norms, to try and mend the gap between their self-image and what they are actually able to do.

* * *

Personally, I'm quite optimistic, because the game is now clearly visible. In their zeal for ideological purity, activists have blown straight past their own end zone. When they tell you they are no longer interested in tolerance, you should believe them. It represents a complete abandonment of the principles that allowed liberal society to grow and flourish.

That means tolerance now again belongs to the adults in the room, who are able to separate fact from fiction, and feelings from actual principled conviction. We can only hope these children finally learn.

When something goes wrong with wireless devices, it is often hard to pinpoint the exact cause. Wireshark is a popular open source packet sniffer that runs on Windows, Linux and macOS. It has become a standard tool in the network administrator's toolbox. You can use it to inspect both Wi-Fi and Ethernet traffic, for example to analyze network problems.

With an nRF52840 Dongle from Nordic Semiconductor and a plug-in for Wireshark, you can also pluck Bluetooth Low Energy, Zigbee and Thread traffic out of the air. For PCM I described the whole procedure for sniffing BLE and Zigbee.

Note

The firmware is also compatible with the April USB Dongle 52840 from April Brother. Its external antenna makes a big difference in range compared to the PCB antenna of the nRF52840 Dongle.

Nordic Semiconductor offers the firmware for the nRF52840 Dongle and the accompanying Wireshark plug-ins here:

I use the nRF Sniffer for Bluetooth LE all the time to debug BLE devices. Wireshark's display filters make it easy to filter on specific types of BLE packets. For example, this is how you filter on iBeacon packets:

(btcommon.eir_ad.entry.company_id == 0x004c) && (btcommon.eir_ad.entry.data[:2] == 02:15)

This limits the displayed packets to those with manufacturer-specific data for company ID 0x004c (Apple) and with the first two bytes equal to 0x0215. 1

But how do you come up with such a display filter? To filter on manufacturer-specific data for company ID 0x004c in Wireshark, simply click on the Company ID field in the packet details pane of an iBeacon packet, right-click, and choose Apply as Filter and then Selected. This adds a display filter for all packets with the selected value for the company ID.

The extra filter on the first two bytes is a bit more work if you don't know the syntax. Select the full Data field in the packet details pane of an iBeacon packet, right-click, and choose Apply as Filter and then ... and Selected. This adds the filter as an extra requirement on top of the filter already in use. But now you are filtering on all packets with exactly the same data as the selected packet:

btcommon.eir_ad.entry.data == 02:15:18:ee:15:16:01:6b:4b:ec:ad:96:bc:b9:6d:16:6e:97:00:00:00:00:d8

To filter on just the first two bytes, add the slice [:2] to the data field and compare it with the bytes 02:15.
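The resulting clause is exactly the second half of the full filter shown above:

btcommon.eir_ad.entry.data[:2] == 02:15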

1

Why 0x0215? You can find this in the iBeacon specification. Apple uses a TLV (type-length-value) format in its manufacturer-specific data. The 0x02 stands for the iBeacon type and the 0x15 is the length of the data that follows (21 in decimal: 16 bytes for the UUID, 2 for the major, 2 for the minor and 1 for the measured power).
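Applied to the example data shown earlier, that layout breaks down as follows (the measured power is a two's complement byte, so 0xd8 is -40 dBm):

02 -> type (iBeacon)
15 -> length of the data that follows (21 bytes)
18:ee:15:16:01:6b:4b:ec:ad:96:bc:b9:6d:16:6e:97 -> UUID (16 bytes)
00:00 -> major
00:00 -> minor
d8 -> measured power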

September 30, 2021

For the past two years I’ve been working on something less visible but no less important.

Since DrupalCon Amsterdam 2019 (an actual in-person conference — sounds surreal in 2021, doesn’t it?!) I’ve been working on Acquia Migrate Accelerate, or “AMA” for short. In a few days, another DrupalCon Europe is starting … so perfect timing for a recap! :D

Why?

Drupal 8 comes with an awesome migration system built in, originating in the Drupal 7 Migrate module. It standardized many migration best practices. But it still required a huge time investment to learn.

Of course, there’s the “Migrate Drupal UI” (migrate_drupal_ui) module in Drupal core. But that does not allow for granular migrations. It allows for a one-shot migration: you see which things will be migrated and which won’t. You can click a button and hope for the best. It only works for the very simplest of sites. It is impressively minimal in terms of the code it needs, but unfortunately it also means one pretty much needs to be a expert in migrations to use it successfully.

It will be of little help as soon as you run into important data you want to migrate for which no migration path exists.

See Mauricio Dinarte’s excellent “31 days of Drupal migrations”. In those 31 blog posts, you’ll come to know and appreciate the migration system (I sure did!). Unfortunately, that still won’t fully prepare you: you’ll need to decipher and reverse engineer the intricacies of how the data gets stored in Drupal 7 with its entities, revisions and fields (each field type having its own intricacies) and map that to the Drupal 9 equivalents.

And how does one migrate straight from Drupal 7 with its more fragmented ecosystem? 1

For example: media handling. There are easily a dozen approaches possible in Drupal 7. Each in use on tens of thousands of sites. In Drupal 8 & 9, everything has standardized on Media and Media Library. But how do you get your Drupal 7 site’s content in there?

Another example: location data. location was very popular, but is now dead. geofield was equally popular and is still alive. geolocation was less popular then, but is more popular now. addressfield was popular; address is its successor. None of the Drupal 9 modules offer Location’s feature set. How do you migrate this data?

Goal

The goal for AMA (the vision of https://www.drupal.org/u/grasmash and especially https://www.drupal.org/u/webchick!) is to empower the non-technical user to perform migrations: a UI aimed at the site builder’s point of view. One should be able to select which content types (also vocabularies, menus, et cetera) make sense to migrate, and then not have to bother with technical details such as “migration plugins” or YAML files.

Acquia Migrate Accelerate:

For example, AMA shows just “Page” in the UI. Under the hood (and you can see this in the UI too, but it’s just not prominent), that corresponds to the following migration plugin definitions:

  • d7_node_type:page
  • d7_field_instance:node:page
  • d7_field_formatter_settings:node:page
  • d7_field_instance_widget_settings:node:page
  • d7_node_complete:page
  • d7_url_alias:node:page
  • node_translation_menu_links:node:page

In other words: the supporting configuration for nodes of the page bundle (the first four), then all actual entity/field data (d7_node_complete:page), followed by the URL aliases and menu links referencing pages.

However, to be able to do this, we need many more migrations in Drupal core to be derived: view modes, fields, formatters and widgets should all have an entity type + bundle-specific derivative. That would allow each bundle to be migrated individually, which enables the site builder to check that their pages and everything related to them have been correctly migrated before moving on to the next data concept. So far we’ve not yet been able to convince the migration system maintainers of the value of this. 2

(AMA does many more things, but that’s not in scope of this blog post.)

Closed & Open

Acquia understandably wants AMA to be available to its own customers, not to its competitors’ users. Like all Drupal modules, the AMA module is GPL 2+ licensed. Only the React UI is closed source. The automated recommendations engine is closed source. Obviously, the code to spin up AMA environments in Acquia Cloud is closed source 3.

But … all of the work that goes into making migrations reliable is open source. At the time of writing, we have ~100 unique patches being applied, 39 of which are against Drupal core! While I was writing this, https://www.drupal.org/project/drupal/issues/3190818 got committed, plus a few were committed recently but have not yet shipped in a 9.2.x point release, so that number will soon be lower :)

An overview

In the past 20 months we’ve hardened many migrations, and created new ones from scratch. Thousands of Drupal sites have already benefited — many more than there are Acquia customers.

The highlights:

Overall, 29 Drupal core patches 4 and 18 Drupal contrib patches have been committed! Plus another 36 core patches 5 and 34 contrib patches 6 are successfully being used, and will hopefully land in the near future. (Not counting commits to the migration modules we now (co-)maintain.) Many dozens of migration paths from Drupal 7 have been stabilized, especially by https://www.drupal.org/project/media_migration.

A comprehensive overview (all patches are uncommitted unless stated otherwise):

D7D9 for all

We aim to continue to do to the work to get patches committed: address feedback, add test coverage, and so on. We want to help everyone migrate from Drupal 7 to 9!

Teamwork

These many hardened migrations are thanks to the titanic work of:

If you found this interesting, check out Gabe’s write-up of the application architecture that powers the awesome React-based UI that Peter built.


  1. Some would say richer↩︎

  2. It also implicitly reveals one of the most fundamental assumptions in the migration system: that only the final state after running all migrations matters. For developers who know both Drupal 7 and 9’s data models really well, this may be fine. But for a non-expert, it’d be simpler if they were able to migrate the entities of each entity type + bundle and then inspect the results, not to mention that it’d take less time to gain some confidence in the migration! For example, first the “tags” taxonomy terms, then the “image” media items, then the “blog post” content items. Verifying each of those clusters of data is conceptually simpler. Site builders, if you want this, please leave a comment in https://www.drupal.org/node/3097336↩︎

  3. Acquia Cloud handles the creation of an ephemeral Drupal 9 migration environment, with a Drupal 9 site automatically generated, with all equivalent D9 modules pre-composer required, and all modules with a vetted migration path pre-installed. For Acquia the value is obvious: its customers are more likely to successfully migrate to Drupal 9 sooner, and the customer is more likely to stay a customer. We’ve currently got over 100 customers using AMA. ↩︎

  4. Committed core patches: #3096676, #2814953 (this one has the majority of the work done by the community!), #3122056, #3126063, #3126063, #2834958 (from those first 14), #3152789, #3151980, #3151993, #3153791, #2925899, #3165944, #3176394, #3178966, #3187320, #3187415, #3187418, #3187463, #3189463, #3187263 (from 2020), #3190815, #3190818, #3191490, #3097312, #3212539, #3213616, #3224620, #3227549, #3085192 (from 2021). ↩︎

  5. We started out with 14 core patches. Of those, #3115073, #3122649, #3096972, #3108302, #3097336, #3115938, #3123775 still remain. Other core patches we’ve contributed in 2020 that are not yet committed: #2845340, #3151979, #3051251, #3154156, #3156083, #3156730, #3156733, #3165813, #3166930, #3167267, #3186449, #3187334, #3187419, #3187474, #3187616. And those in 2021: #2859314, #3200949, #3204343, #3198732, #3204212, #3202462, #3118262, #3213636, #3218294, #3219078, #3219140, #3226744, #3227361, #3227660 ↩︎

  6. Added October 1, 2021. ↩︎ ↩︎

September 28, 2021

Not that long ago, a vulnerability was found in Microsoft Azure Cosmos DB, a NoSQL SaaS database within the Microsoft Azure cloud. The vulnerability, dubbed ChaosDB by the Wiz Research Team, uses a vulnerability or misconfiguration in the Jupyter Notebook feature within Cosmos DB. It allowed an attacker to gain access to others' Cosmos DB credentials. Not long thereafter, a second vulnerability, dubbed OMIGOD, showed that cloud security is not as simple as some vendors would like you to believe.

These vulnerabilities are a good example of how scale is a cloud threat. Companies that do not have enough experience with public cloud might not account for this in their threat models.

September 27, 2021

SReview, the video review and transcode tool that I originally wrote for FOSDEM 2017 but which has since been used for debconfs and minidebconfs as well, has long had a sizeable component for inspecting media files with ffprobe, and generating ffmpeg command lines to convert media files from one format to another.

This component, SReview::Video (plus a number of supporting modules), is really not tied very much to the SReview webinterface or the transcoding backend. That is, the webinterface and the transcoding backend obviously use the ffmpeg handling library, but they don't provide any services that SReview::Video could not live without. It did use the configuration API that I wrote for SReview, but disentangling that turned out to be very easy.
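To give an idea of the kind of work this component wraps, inspecting a file and then converting it might look like the following on the command line (plain ffprobe/ffmpeg invocations shown purely for illustration, with placeholder filenames; this is not Media::Convert's Perl API):

# inspect the container and streams, as machine-readable JSON
ffprobe -v quiet -print_format json -show_format -show_streams talk.mkv

# convert to H.264 video and AAC audio in an MP4 container
ffmpeg -i talk.mkv -c:v libx264 -c:a aac talk.mp4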

As I think SReview::Video is actually an easy-to-use, flexible API, I decided to refactor it into Media::Convert, which I have just uploaded to CPAN.

The intent is to refactor the SReview webinterface and transcoding backend so that they will also use Media::Convert instead of SReview::Video in the near future -- otherwise I would end up maintaining everything twice, and then what's the point. This hasn't happened yet, but it will soon (this shouldn't be too difficult after all).

Unfortunately Media::Convert doesn't currently install cleanly from CPAN, since I made it depend on Alien::ffmpeg which currently doesn't work (I'm in communication with the Alien::ffmpeg maintainer in order to get that resolved), so if you want to try it out you'll have to do a few steps manually.
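Until then, the usual manual dance for a CPAN distribution should do the trick (a sketch, assuming an ExtUtils::MakeMaker-based distribution and ffmpeg already installed on the system):

# from the unpacked distribution directory
perl Makefile.PL
make
make test
make install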

I'll upload it to Debian soon, too.

September 24, 2021

I published the following diary on isc.sans.edu: “Keep an Eye on Your Users Mobile Devices (Simple Inventory)“:

Today, smartphones are everywhere and became our best friends for many tasks. Probably your users already access their corporate mailbox via a mobile device. If it’s not yet the case, you probably have many requests to implement this. There are two ways to achieve this: you provide corporate devices to all users. From a risk perspective, it’s the best solution: you select the models and control them. But it’s very expensive and people don’t like to carry two devices (a personal and a corporate one). Fortunately, if you use a Microsoft Exchange platform, there are ways to authorize personal devices to access corporate emails with a software component called ActiveSync. ActiveSync allows deploying basic security policies like forcing the device to be locked with a password, forcing a minimum password length, etc. However, it’s not a real MDM (“Mobile Device Management”)… [Read more]

The post [SANS ISC] Keep an Eye on Your Users Mobile Devices (Simple Inventory) appeared first on /dev/random.

September 23, 2021

I published the following diary on isc.sans.edu: “Excel Recipe: Some VBA Code with a Touch of Excel4 Macro“:

Microsoft Excel supports two types of macros. The legacy format is known as the “Excel4 macro” and the newer one (though already in use for a while) is based on VBA. We have already covered both formats in many diaries. Yesterday, I spotted an interesting sample that implements… both!

The malicious file was delivered through a classic phishing email and is called “Document_195004540-Copy.xls” (SHA256:4f4e67dccb3dfc213fac91d34d53d83be9b9f97c0b75fbbce8a6d24f26549e14). The file is unknown on VT at this time. It looks like a classic trap… [Read more]

The post [SANS ISC] Excel Recipe: Some VBA Code with a Touch of Excel4 Macro appeared first on /dev/random.

September 20, 2021

You can now preorder the audiobook version of Printeurs and give your opinion on which voice to choose. Which gets me thinking about voice, noise, marketing, crowdfunding and floods…

My novel Printeurs is about to find its voice: it will soon be produced as an audiobook. A format I am not at all familiar with (I am a visual reader), but whose result I look forward to hearing. I am curious to hear from heavy audiobook listeners about what makes a "good" audiobook. What do you like? What should one pay attention to? And what makes you stop listening every single time?

To finance this undertaking, my publisher has set up a crowdfunding campaign during which you can preorder the audio version of Printeurs. You will even have the chance to give your opinion on a shortlist of voices. I am really curious to read what fans of the format think.

Preorder the audio version of Printeurs: https://fr.ulule.com/ludomire/?reward=752654
Vote for your favorite voice (link for subscribers only): https://fr.ulule.com/ludomire/news/decouvrez-le-casting-de-voix-pour-le-livre-audio-p-312900
Technical explanations about the audio adaptation: https://fr.ulule.com/ludomire/news/les-adaptations-en-livres-audio-312264/

The voice

The voice is a peculiar medium. When we speak, charisma and intonation often matter more than the content itself. Inconsistencies are smoothed over by the rhythm. One example among many: I was recently interviewed by Valentin Demé for the Cryptoast podcast, to talk about monopolies and blockchain.

For an hour, I talk and let my ideas wander. Ideas far less formed than what I usually write: intuitions, explorations. Judging by the reactions, what I say seems interesting. But keep in mind that, except for a fully prepared speech (a lecture, for example), the information is much more haphazard and always to be taken with a grain of salt. Paradoxically, the voice is more convincing even though it is less rigorous. We learn and think through books; we are persuaded by speeches. Politics is a matter of voice. Science is a matter of writing.

Ploum on Cryptoast: https://www.youtube.com/watch?v=vq6o_30LxJM

Crowdfunding in question

This crowdfunding campaign is not just about Printeurs. It is above all a campaign covering all the new releases in the Ludomire SFFF collection, notably the four-volume paper edition of Thierry Crouzet's One Minute. One Minute is an SF novel taking place during… one single minute, as the title says. Each of the 365 chapters lasts… one minute. I greatly enjoyed the Wattpad version and look forward to reading this entirely reworked edition.

More publicity for a crowdfunding campaign? However enthusiastic I am about the content, I understand the sigh.

The Printeurs crowdfunding campaign left me with a rather bitter memory. True, it was an incredible success (thanks to you, my readers), but I felt like I was endlessly spamming my networks. Producing the very noise I fight so hard against. I came out of it drained, and so did those who follow me. The problem, as my publisher pointed out, is that spam… works!

These campaigns are now far more numerous. You have to stand out, professionalize. In short, marketing becomes essential again, whereas in my mind one of the original goals of crowdfunding was to do away with that step. Ironically, the marketing no longer focuses on the product itself but on promoting… the funding campaign! While this method is supposed to bring creator and consumer closer together, it paradoxically drives them apart.

It is a question that Lionel, my publisher, is also asking himself. How do you make yourself known and fund your work without falling into spam? Thierry himself confided to me that he has not the slightest desire to promote the campaign tied to the publication of his own novel.

The Ludomire 2021 campaign: https://fr.ulule.com/ludomire/
Crouzet on One Minute: https://tcrouzet.com/2021/09/14/de-lecriture-de-la-vie-du-roman/
Reflections on crowdfunding: http://ludom.cc/index.php/2021/09/08/levolution-du-crowdfunding-selon-mon-experience/

Pay what you want?

The problem is not limited to crowdfunding. The "prix libre", pay-what-you-want pricing, is also affected. A few years ago, I was one of the French-speaking pioneers of the prix libre on the web, with a provocatively titled post: « Ce blog est payant ! » ("This blog costs money!"). The concept has clearly become widely popular, to the point of having its own Wikipedia page.

A little too popular, perhaps. These days the prix libre is everywhere and, as if by magic, gathers on a few centralized platforms. Alias writes precisely about his doubts regarding Tipeee, a platform I too have left.

There is an undeniable public fatigue: we are solicited all the time to finance every imaginable project, from revolutionary connected knitting needles to the installation of flower pots along our neighborhood streets. Beyond the money, we have to juggle the different platforms and the amounts, recurring or not. I also have the feeling that it is always the same people who contribute to everything, and not especially the wealthiest.

I have deduced from this a sort of general law of the Internet: every good idea is either not popular enough to be broadly useful, or so popular that it turns into a bad idea. Social networks and mobility are the most striking illustrations. Is the prix libre going down the same path?

Are the alternatives we build attractive only because they are, precisely, alternatives? Does success not inevitably bring inexorable excess? I am thinking, for example, of the minimalist Gemini network I told you about.

The prix libre on Wikipedia: https://fr.wikipedia.org/wiki/Prix_libre
Alias leaves Tipeee: https://erdorin.org/il-est-temps-de-changer-de-tipeee/
The Tipeee drama (gemini link): gemini://lord.re/fast-posts/62-le-drama-tipee-2021/index.gmi

The suspended book and the floods

Faced with all this, I have decided to remove all calls for donations from my blog and to encourage buying books instead. I find books to be among humanity's most symbolic objects. A book is never useless. It can sleep for years, even centuries, on a shelf before being reborn to light up a day or a life. The book, including in electronic form, is the gift par excellence: a world to discover, an object to pass on, intellectual explorations to share, in the present and in the future.

Buy my books: https://ploum.net/livres/

The paper book knows only two dangers: fire and water. The latter is unfortunately what struck my country this summer. While I was not personally affected, my town (Ottignies) was, and above all the region my wife, my parents and my ancestors come from (the Vesdre valley).

If you lost your library in the floods, or know someone who did, drop me a line and I will send you a copy of Printeurs. I also have several books from the Ludomire collection that I will gladly send to libraries trying to rebuild. Do not hesitate to get in touch, or to act as a go-between for people to whom it might bring a small smile. It is always something in this difficult period of reconstruction, in which life, like the Vesdre, seems to have resumed its normal course. Except for those who lost everything, who live in the damp, who are fed by the Red Cross, and whose hearts clench with dread at every new downpour.

I still have a few copies of the book « Les aventures d'Aristide, le lapin cosmonaute ». They are normally for sale, but I am happy to offer them to families with children (ideally aged 5 to 9) who are short of books, whether because of the floods or for reasons that are none of my business.

Send me an email at suspendu at ploum.net saying which book you would like (or both).

Happy reading and happy listening!


This text is published under the CC-By BE license.

September 19, 2021

playbook

I wrote a few articles on my blog on how to use cloud images with cloud-init in a “non-cloud” environment.

I finally took the time to create an Ansible role for it. You’ll find the README.md below.

Virt_install_vm 1.0.0 is available at: https://github.com/stafwag/ansible-role-virt_install_vm

Have fun!


Ansible Role: virt_install_vm

An Ansible role to install a libvirt virtual machine with virt-install and cloud-init. It is “designed” to be flexible.

An example template is provided to set up a Debian system.

Requirements

The role is a wrapper around the following roles:

  • stafwag.qemu_img
  • stafwag.cloud_localds
  • stafwag.virt_install_import

Install the required roles with

$ ansible-galaxy install -r requirements.yml

This will install the latest default branch releases.
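For reference, a minimal requirements.yml could look like this (a sketch, assuming the three roles listed above):

- src: stafwag.qemu_img
- src: stafwag.cloud_localds
- src: stafwag.virt_install_import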

Or follow the installation instructions for each role on Ansible Galaxy.

https://galaxy.ansible.com/stafwag

Supported GNU/Linux Distributions

It should work on most GNU/Linux distributions. cloud-localds is required. cloud-localds was available on CentOS/RedHat 7 but is not on RedHat 8, so you’ll need to install it manually to use this role on CentOS/RedHat 8.

  • Archlinux
  • Debian
  • Centos 7
  • RedHat 7
  • Ubuntu

Role Variables and templates

Variables

See the documentation of the roles in the Requirements section.

  • virt_install_vm: “namespace”

    • skip_if_deployed: boolean, default: false.

      When true: skip the role if the VM is already deployed; the role will exit successfully.
      When false: the role will exit with an error if the VM is already deployed.


Templates.

  • templates/simple_debian: Example template to create a Debian virtual machine.

This template uses cloud_localds.cloudinfo to configure the cloud-init user-data.

See the Usage section for an example.

Usage

Create a virtual machine template

This is a file with the role variables to set up a virtual machine, holding all the settings the virtual machines have in common. In this example, vm.hostname and vm.ip_address can be configured for each virtual machine.

  • debian_vm_template.yml:
qemu_img:
  dest: "/var/lib/libvirt/images/{{ vm.hostname }}.qcow2"
  format: qcow2
  src: /Downloads/isos/debian/cloud/debian-10-generic-amd64.qcow2
  size: "50G"
  owner: root
  group: kvm
  mode: 660
cloud_localds:
  dest: "/var/lib/libvirt/images/{{ vm.hostname }}_cloudinit.iso"
  config_template: "templates/simple_debian/debian.j2"
  network_config_template: "templates/simple_debian/debian_netconfig.j2"
  cloud_config:
    system_info:
      default_user:
        name: ansible
        passwd: ""
        ssh_authorized_keys:
          - ""
    network:
      dns_nameservers:
        9.9.9.9
      dns_search:
        intern.local
      interface:
        name:
          enp1s0
        address:
          "{{ vm.ip_address }}"
        gateway:
          192.168.123.1
    disable_cloud_init: true
    reboot:
      true
virt_install_import:
  wait: 0
  name: "{{ vm.hostname }}"
  os_type: Linux
  os_variant: debian10
  network: network:default
  graphics: spice
  disks:
    - "/var/lib/libvirt/images/{{ vm.hostname }}.qcow2,device=disk"
    - "/var/lib/libvirt/images/{{ vm.hostname }}_cloudinit.iso,device=cdrom"

Playbook

Playbook to set up a virtual machine:

- name: Install tstdebian2
  hosts: kvmhost
  become: true
  vars:
    vm:
      hostname:
        tstdebian2
      ip_address:
        192.168.123.2/24
  pre_tasks:
    - name: Load the vm template
      include_vars: debian_vm_template.yml
    - name: Display the qemu_img variable
      debug:
        msg:
          - "qemu_img: {{ qemu_img }}"
  roles:
    - stafwag.virt_install_vm
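Assuming the playbook is saved as tstdebian2.yml (a hypothetical filename) and kvmhost is resolvable from your inventory, you would run it with:

$ ansible-playbook -i inventory tstdebian2.yml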

September 17, 2021

I published the following diary on isc.sans.edu: “Malicious Calendar Subscriptions Are Back?“:

Did this threat really disappear? This isn’t a brand new technique to deliver malicious content to mobile devices but it seems that attackers started new waves of spam campaigns based on malicious calendar subscriptions. Being a dad, you can imagine that I always performed security awareness with my daughters. Since they use computers and the Internet, my message was always the same: “Don’t be afraid to ask me, there are no stupid questions or shame if you think you did something wrong”… [Read more]

The post [SANS ISC] Malicious Calendar Subscriptions Are Back? appeared first on /dev/random.

September 15, 2021

The VRT radio streams, such as this one (http://icecast.vrtcdn.be/radio1-high.mp3), have recently developed the tendency to play advertising when you start them. Not always, but regularly.

Just now (a few seconds past 21:00) I wanted to listen to the news, but an ad started playing. I reloaded the stream and the ad started again (while the news was already running, so I didn't get to hear it).

A few days ago I had the same thing with the Radio 1 stream: ads every time you start the stream. This is annoying.

Time, then, to adapt the script to mute the first 60 seconds of the stream. The cron job that plays the news can then start a minute earlier.

UPDATE 2021-09-15: this works

#!/bin/bash

# mplayer needs a writable HOME
export HOME=/var/www

# stop any previously running mplayer instance
pkill mplayer

# start the stream muted, in slave mode, reading commands from the FIFO /var/www/master
mplayer -volume 0 -slave -input file=/var/www/master http://icecast.vrtcdn.be/radio1-high.mp3 &

# skip the advertising block at the start of the stream
sleep 60

# unmute: set the volume to 100 (the trailing 1 makes it an absolute value)
echo volume 100 1 > /var/www/master
exit
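The matching cron entry can then fire at 20:59 instead of 21:00, so that the muted minute swallows the ads. A sketch, assuming the script is saved as /usr/local/bin/radio1-news.sh (a hypothetical path):

# start one minute before the 21:00 news
59 20 * * * /usr/local/bin/radio1-news.sh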

Naming conventions. Picking the right naming convention is easy if you are all by yourself, but hard when you need to agree on conventions in a larger group. Everybody has an opinion on naming conventions, and once you decide on one, you do expect everybody to follow through on it.

Let's consider why naming conventions are (or are not) important, and look at a few examples to help you create a good naming convention yourself.

My laptop is a 2011 MacBook Air. I’m not a huge Apple fan, it’s just that at the time it had the most interesting hardware features compared to similar laptops. And it’s quite sturdy, so that’s nice.

Over the years I have experimented with installing Linux in parallel to the OS X operating system, but in the end I settled on installing my favorite Linux tools inside OS X using Homebrew, because having two different operating systems on one laptop was Too Much Effort™. In recent times Apple has decided, in its infinite wisdom (no sarcasm at all *cough*), that it will no longer provide operating system upgrades for older hardware. Okay, then. Lately the laptop had become slow as molasses anyway, so I decided to replace OS X entirely with Ubuntu. No more half measures! I chose 20.04 LTS for the laptop because reasons. 🙂

The laptop was really slow…

According to the Ubuntu Community Help Wiki, all hardware should be supported, except Thunderbolt. I don’t use anything Thunderbolt, so that’s OK for me. The installation was pretty straightforward: I just created a bootable USB stick and powered on the Mac with the Option/Alt (⌥) key pressed. Choose EFI Boot in the Startup Manager, and from there on it’s all a typical Ubuntu installation.

Startup Manager

I did not bother with any of the customizations described on the Ubuntu Wiki, because everything worked straight out of the box, and besides, the wiki is terribly outdated anyway.

The end result? I now have a laptop that feels snappy again, and that still gets updates for the operating system and the installed applications. And it’s my familiar Linux. What’s next? I’m thinking about using Ansible to configure the laptop.

To finish, I want to show you my sticker collection on the laptop. There’s still room for a lot more!

sticker collection on my laptop. Photo copyright: me.

The post Installing Ubuntu 20.04 LTS on 2011 MacBook Air appeared first on amedee.be.

September 10, 2021

Cover Image

Cultural Assimilation, Theory vs Practice

The other day, I read the following, shared 22,000+ times on social media:

"Broken English is a sign of courage and intelligence, and it would be nice if more people remembered that when interacting with immigrants and refugees."

This resonates with me, as I spent 10 years living on the other side of the world. Eventually I lost my accent in English, which took conscious effort and practice. These days I live in a majority French city and neighborhood, as a native Dutch speaker. When I need to call a plumber, I first have to go look up the words for "drainage pipe." When my barber asks me what kind of cut I want, it mostly involves gesturing and "short".

This is why I am baffled by the follow-up, by the same person:

"Thanks to everyone commenting on the use of 'broken' to describe language. You're right. It is problematic. I'll use 'beginner' from now on."

It's not difficult to imagine the pile-on that must've happened for the author to add this note. What is difficult to imagine is that anyone who raised the objection has actually ever thought about it.

mines

Minesweeper

Consider what this situation looks like to an actual foreigner who is learning English and trying to speak it. While being ostensibly lauded for their courage, they are simultaneously shown that the English language is a minefield where an expression as plain as "broken English" is considered a faux pas, enough to warrant a public correction and apology.

To stay in people's good graces, you must speak English not as the dictionary teaches you, but according to the whims and fashions of a highly volatile and easily triggered mass. They effectively demand you speak a particular dialect, one which mostly matches the sensibilities of the wealthier, urban parts of coastal America. This is an incredibly provincial perspective.

The objection relies purely on the perception that "broken" is a word with a negative connotation. It ignores the obvious fact that people who speak a language poorly do so in a broken way: they speak with interruptions, struggling to find words, and will likely say things they don't quite mean. The dialect demands that you pretend this isn't so, by never mentioning it directly.

But in order to recognize the courage and intelligence of someone speaking a foreign language, you must be able to see past such connotations. You must ignore the apparent subtleties of the words, and try to deduce the intended meaning of the message. Therefore, the entire sentiment is self-defeating. It fell on such deaf ears that even the author seemingly missed the point. One must conclude that they don't actually interact with foreigners much, at least not ones who speak broken English.

The sentiment is a good example of what is often called a luxury belief: a conviction that doesn't serve the less fortunate or less able people it claims to support. Often the opposite. It merely helps privileged, upper-class people feel better about themselves, by demonstrating to everyone how sophisticated they are. That is, people who will never interact with immigrants or refugees unless they are already well integrated and wealthy enough.

By labeling it as "beginner English," they effectively demand an affirmation that the way a foreigner speaks is only temporary, that it will get better over time. But I can tell you, this isn't done out of charity. Because I have experienced the transition from speaking like a foreigner to speaking like one of them. People treat you and your ideas differently. In some ways, they cut you less slack. In other ways, it's only then that they finally start to take you seriously.

Let me illustrate this with an example that sophisticates will surely be allergic to. One time, while at a bar, when I still had my accent, I attempted to colloquially use a particular word. That word is "nigga." With an "a" at the end. In response, there was a proverbial record scratch, and my companions patiently and carefully explained to me that that was a word that polite people do not use.

No shit, Sherlock. You live on a continent that exports metric tons of gangsta rap. We can all hear and see it. It's really not difficult to understand the particular rules. Bitch, did I stutter?

Even though I had plenty of awareness of the linguistic sensitivities they were beholden to, in that moment, they treated me like an idiot, while playing the role of a more sophisticated adult. They saw themselves as empathetic and concerned, but actually demonstrated they didn't take me fully seriously. Not like one of them at all.

If you want people's unconditional respect, here's what did work for me: you go toe-to-toe with someone's alcoholic wine aunt at a party, as she tries to degrade you and your friend, who is the host. You effortlessly spit back fire in her own tongue and get the crowd on your side. Then you casually let them know you're not even one of them, not one bit. Jawdrops guaranteed.

This is what peak assimilation actually looks like.

Ethnic food

The Ethnic Aisle

In a similar vein, consider the following, from NYT Food:

"Why do American grocery stores still have an ethnic aisle?

The writer laments the existence of segregated foods in stores, and questions their utility. "Ethnic food" is a meaningless term, we are told, because everyone has an ethnicity. Such aisles even personify a legacy of white supremacy and colonialism. They are an anachronism which must be dismantled and eliminated wholesale, though it "may not be easy or even all that popular."

We do get other perspectives: shop owners simply put products where their customers are most likely to go look for them. Small brands tend to receive obscure placement, while larger brands get mixed in with the other foods, which is just how business goes. The ethnic aisle can also signal that the products are the undiluted original, rather than a version adapted to local palates. Some native shoppers explicitly go there to discover new ingredients or flavors, and find it convenient.

More so, the point about colonialism seems to be entirely undercut by the mention of "American aisles" in other countries, containing e.g. peanut butter, BBQ sauce and boxed cake mix. It cannot be colonialism on "our" part both when "we" import "their" products and when "they" import "ours". That's just called trade.

Along the way, the article namedrops the exotic ingredients and foreign brands that apparently should just be mixed in with the rest: cassava flour, pomegranate molasses, dal makhani, jollof rice seasoning, and so on. We are introduced to a whole cast of business owners "of color," with foreign-sounding names. We are told about the "desire for more nuanced storytelling," including two sisters who bypassed stores entirely by selling online, while mocking ethnic aisles on TikTok. Which we all know is the most nuanced of places.

I find the whole thing preposterous. In order to even consider the premise, you already have to live in an incredibly diverse, cosmopolitan city. You need to have convenient access to products imported from around the world. This is an enormous luxury, enabled by global peace and prosperity, as well as long-haul and just-in-time logistics. There, you can open an app on your phone and have top-notch world cuisine delivered to your doorstep in half an hour.

For comparison, my parents are in their 70s and they first ate spaghetti as teenagers. Also, most people here still have no clue what to do with fish sauce other than throw it away as soon as possible, lest you spill any. This is fine. The expectation that every cuisine is equally commoditized in your local corner store is a huge sign of privilege, which reveals how provincial the premise truly is. It ignores that there are wide ranging differences between countries in what is standard in a grocery store, and what people know how to make at home.

Even chips flavors can differ wildly from country to country, from the very same multinational brands. Did you know paprika chips are the most common thing in some places, and not a hipster food?

paprika chips by lays

Crucially, in a different time, you could come up with the same complaints. In the past it would be about foods we now consider ordinary. In the future it would be about things we've never even heard of. While the story is presented as a current issue for the current times, there is nothing to actually support this.

To me, this ignorance is a feature, not a bug. The point of the article is apparently to waffle aimlessly while namedropping a lot of things the reader likely hasn't heard of. The main selling point is novelty, which paints the author and their audience as being particularly in-the-know. It lets them feel they are sophisticated because of the foods they cook and eat, as well as the people they know and the businesses they frequent. If you're not in this loop, you're supposed to feel unsophisticated and behind the times.

It's no coincidence that this is published in the New York Times. New Yorkers have a well-earned reputation for being oblivious about life outside their bubble: the city offers the sense that you can have access to anything, but its attention is almost always turned inwards. It's not hard to imagine why, given the astronomical cost of living: surely it must be worth it! And yes, I have in fact spent a fair amount of time there, working. It couldn't just be that life elsewhere is cheaper, safer, cleaner and friendlier. That you can reach an airport in less than 2 hours during rush hour. On a comfortable, modern train. Which doesn't look and smell like an ashtray that hasn't been emptied out since 1975.

But I digress.

"Ethnic aisles are meaningless because everyone has an ethnicity" is revealed to be a meaningless thought. It smacks headfirst into the reality of the food business, which is a lesson the article seems determined not to learn. When "diversity" turns out to mean that people are actually diverse, have different needs and wants, and don't all share the same point of view, they just think diversity is wrong, or at least, outmoded, a "necessary evil." Even if they have no real basis of comparison.

graffiti near school in New York

Negative Progress

I think both stories capture an underlying social affliction, which is about progress and progressivism.

The basic premise of progressivism is seemingly one of optimism: we aim to make the future better than today. But the way it often works is by painting the present as fundamentally flawed, and the past as irredeemable. The purpose of adopting progressive beliefs is then to escape these flaws yourself, at least temporarily. You make them other people's fault by calling for change, even demanding it.

What is particularly noticeable is that perceived infractions are often in defense of people who aren't actually present at all. The person making the complaint doesn't suffer any particular injury or slight, but others might, and this is enough to condemn in the name of progress. "If an [X] person saw that, they'd be upset, so how dare you?" In the story of "broken English," the original message doesn't actually refer to a specific person or incident. It's just a general thing we are supposed to collectively do. That the follow-up completely contradicts the premise, well, that apparently doesn't matter. In the case of the ethnic aisle, the contradictory evidence is only reluctantly acknowledged, and you get the impression they had hoped to write a very different story.

This too is a provincial belief masquerading as sophistication. It mashes together groups of people as if they all share the exact same beliefs, hang-ups and sensitivities. Even if individuals are all saying different things, there is an assumed archetype that overrules it all, and tells you what people really think and feel, or should feel.

To do this, you have to see entire groups as an "other," as people that are fundamentally less diverse, self-aware and curious than the group you're in. That they need you to stand up for them, that they can't do it themselves. It means that "inclusion" is often not about including other groups, but about dividing your own group, so you can exclude people from it. The "diversity" it seeks reeks of blandness and commodification.

In the short term it's a zero-sum game of mining status out of each other, but in the long run everyone loses, because it lets the most unimaginative, unworldly people set the agenda. The sense of sophistication that comes out of this is imaginary: it relies on imagining fault where there is none, and playing meaningless word games. It's not about what you say, but how you say it, and the rules change constantly. Better keep up.

Usually this is associated with a profound ignorance about the actual past. This too is a status-mining move, only against people who are long gone and can't defend themselves. Given how much harsher life was, with deadly diseases, war and famine being regular occurrences, our ancestors had to be far smarter, stronger and more self-sufficient, just to survive. They weren't less sophisticated; they came up with all the sophisticated things in the first place.

When it comes to the more recent past, you get the impression many people still think 1970 was 30, not 51 years ago. The idea that everyone was irredeemably sexist, racist and homophobic barely X years ago just doesn't hold up. Real friendships and relationships have always been able to transcend larger social matters. Vice versa, the idea that one day, everyone will be completely tolerant flies in the face of evidence and human nature. Especially the people who loudly say how tolerant they are: there are plenty of skeletons in those closets, you can be sure of that.

* * *

There's a Dutch expression that applies here: claiming to have invented hot water. To American readers, I gotta tell you: it really isn't hard to figure out that America is a society stratified by race, or exactly how. I figured that out the first time I visited in 2001. I hadn't even left the airport in Philadelphia when it occurred to me that every janitor I had seen was both black and morbidly obese. Completely unrelated, McDonald's was selling $1 cheeseburgers.

Later in the day, a black security guard had trouble reading an old-timey handwritten European passport. Is cursive racist? Or is American literacy abysmal because of fundamental problems in how school funding is tied to property taxes? You know this isn't a thing elsewhere, right?

In the 20 years since then, nothing substantial has improved on this front. Quite the opposite: many American schools and universities have abandoned their mission of teaching, in favor of pushing a particular worldview on their students, which leaves them ill-equipped to deal with the real world.

Ironically this has created a wave of actual American colonialism, transplanting the ideology of intersectionality onto other Western countries where it doesn't apply. Each country has their own long history of ethnic strife, with entirely different categories. The aristocrats who ruled my ancestors didn't even let them get educated in our own language. That was a right people had to fight for in the late 1960s. You want to tell me which words I should capitalize and which I shouldn't? Take a hike.

Not a year ago, someone trying to receive health care here in Dutch was called racist for it, by a French speaker. It should be obvious the person who did so was 100% projecting. I suspect insecurity: Dutch speakers are commonly multi-lingual, but French speakers are not. When you are surrounded by people who can speak your language, when you don't speak a word of theirs, the moron is you, but the ego likes to say otherwise. So you pretend yours is the sophisticated side.

All it takes to pierce this bubble is to actually put the platitudes and principles to the test. No wonder people are so terrified.

September 09, 2021

It's been long overdue, but Planet Grep now does the https dance (i.e., if you try to use an unencrypted connection, it will redirect you to https). Thank you letsencrypt!

I hadn't previously done this because some blogs that we carry might link to http-only images; but really, that shouldn't matter, and we can make Planet Grep itself an https site even if some of the content is http-only.
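
If you want to verify the redirect yourself, here is a quick sketch with curl (using http://planet.grep.be/ as an example URL): the response headers should show a 301 or 302 status with a Location header pointing to the https:// version.

# show only the response headers of the plain-http request
curl -sI http://planet.grep.be/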

Enjoy!

September 08, 2021

This interdependence is something we try to forget in order to hide the essential contribution of idleness and open-ended reflection.

In 2014, when I was talking a lot about pay-what-you-want pricing, I received a large payment from a reader. That reader was thanking me, because the ideas I described inspired him for his project, an online chess website. Six years later, one of my students picked that very piece of software as the free software to present for his exam: Lichess. He described to me Lichess's free development model, its donation system and its pay-what-you-want pricing. Lichess is one of the most important chess sites in the world and is frequented by grandmasters such as Magnus Carlsen.

Beyond the immense pride of knowing that some of the seeds I sowed contributed to magnificent forests, this anecdote above all illustrates a very important point that Randian ideology tries at all costs to hide: success is not the property of an individual. An individual is never productive alone; nobody is "self-made", despite the image we like to give of billionaires. If Jeff Bezos's parents had not given him $300,000 while making him promise to find a real job once the $300,000 was spent, there would be no Amazon today. Each of us uses roads, means of communication, hospitals and schools, and benefits from intellectual exchanges provided by the community. The ideology of intellectual property and patents makes us believe in a single inventor, a solitary genius who deserves to reap the fruits of his efforts. This is completely false. We depend on one another, and our successes are essentially opportunities, seized or not, that the community offers us.

Moreover, patents are a gigantic intellectual scam. I experienced this myself and wrote about it in a fairly old article that got quite some attention without ever being contradicted.

https://ploum.net/working-with-patents/

Patents, moreover, only serve the interests of the rich and powerful. Amazon, for example, developed a technique to spot what sells well on its site in order to copy it and make its own version. Even when there are patents. Because nobody has the resources to take on Amazon over a patent dispute.

https://www.currentaffairs.org/2020/12/how-amazon-destroys-the-intellectual-justifications-for-capitalism

Patents are a scam built on an entirely fictional concept: that of the solitary inventor. A fiction that denies the very idea of social interdependence.

A social interdependence whose essential contribution to individual productivity was illustrated by the geneticist William Muir, who decided to breed the hens that laid the most eggs in order to create a hyper-productive "super henhouse". The result was catastrophic. The hens that laid the most eggs within a henhouse were in fact the most aggressive ones, which kept the others from laying. The super henhouse turned into a butchery that produced almost no eggs and in which most of the hens died!

The conclusion is simple: even hens that lay few eggs play an essential role in the overall productivity of the community. The best henhouse is not made up of the best layers, quite the contrary.

https://economicsfromthetopdown.com/2021/01/14/the-rise-of-human-capital-theory/

Thanks to my readers' testimonies, I can say that my blog posts have an influence on the society I belong to. An influence I consider essentially positive, even very positive, by my own criteria. Lichess is a spectacular example, but I also receive much more intimate emails along the same lines, which touch me deeply (even though I have decided to no longer reply to them systematically). So I can say that I am useful, at my own humble scale.

Over the course of my career, I cannot find a single example where my salaried work ever had the slightest impact or where my usefulness was demonstrated. Worse: I cannot see a single positive impact of the entire companies I worked for. Being very optimistic, I can claim that we improved the profitability of some of our clients. But that is not really a positive societal impact. And that gain is in any case drowned in a flood of obscure projects and administrative procedures. For ten years, I was paid in super henhouses, in companies that are themselves in competition. For a result that was either nil or harmful to humanity and the planet, since it increased overall consumption.

By contrast, I can directly see the impact of the projects I contributed to without any payment, in particular free software projects. The developer Mike Williamson came to the same conclusion.

https://mike.zwobble.org/2021/08/side-projects-vs-industry/

If you look up my name on Wikipedia, you will land on the page of a project to which I devoted several years' worth of sleep without earning a single cent.

https://fr.wikipedia.org/wiki/Getting_Things_Gnome

Basic income

This is perhaps why basic income seems so essential to me. In 2013, I tried to convince you that basic income was a good idea, and to sign the petition to force the European institutions to study the question. Alas, the required number of signatures was not reached.

https://ploum.net/pourquoi-vous-etes-sans-le-savoir-favorable-au-revenu-de-base/

Eight years later, a new petition has just been launched. If you are a European citizen, I strongly invite you to sign it. It is very easy and very official. You have to enter your personal data, but not your email address. A minimum number of signatures must be reached in every country of Europe. Don't hesitate to share it with your international contacts.

https://eci.ec.europa.eu/014/public/#/screen/home

Observables

When someone talks to you about the productivity of an individual or the merit of the rich, remember the story of the henhouses.

For hens, though, it is easy: you simply count the eggs they lay. The problem with modern capitalism is that we get the metrics wrong all the time. And if we use a bad metric, we optimize the whole system for bad results.

I have written a lot about this paradigm of metrics, which I call "observables". I keep circling around the same theme: we measure productivity in hours of work (since the average employee doesn't lay eggs), so we create hours of work, so jobs exist to fill as many hours as possible. This is what I call the principle of maximum inefficiency. In the end, we spend eight hours a day trying to burn the planet so that, once out of the office, we can buy organic vegetables and feel like we are saving that very same planet.

https://ploum.net/le-principe-dinefficacite-maximale/

Besides hours of work, there are other absurd metrics, such as clicks, page views and that sort of thing. The metrics of marketing people: make as much noise as possible! The marketing department is a bit like a super henhouse into which all the loudest roosters have been put. And then we are surprised not to get a single egg. But plenty of noise.

https://ploum.net/le-silence-au-milieu-du-bruit/

The effect of absurd metrics has a direct impact on your life. For instance, if you use Microsoft Teams at work. Because from now on, your manager can get statistics about your Teams usage. The hyper-focused programmer who switched Teams off to code a great feature will soon get fired over bad statistics. And your privacy? It doesn't fit into the plans of the super henhouse!

https://www.zdnet.com/article/i-looked-at-all-the-ways-microsoft-teams-tracks-users-and-my-head-is-spinning/

Since nobody has time to think anymore (given that there are no metrics for it and that, on the contrary, thinking wrecks other metrics), the future belongs to those who manage to maximize the metrics. Or better: to those who manage to make people believe they are responsible for maximized metrics. Changing jobs regularly means never really exposing your incompetence and climbing a rank at every step, raising your salary until you become a highly paid senior manager in a universe where the metrics get fuzzier and fuzzier. Competence is replaced by the appearance of competence, which is essentially self-confidence and political opportunism. This echoes the thesis Daniel Drezner develops in "The Ideas Industry": simple, pre-chewed, easy-to-appropriate ideas (think TED) crowd out deep, more subtle analyses. It is also an observation made by Cal Newport in "A World Without Email", where he denounces the "hyperactive hive mind" mentality of every modern company.

Are you an entrepreneur or a freelancer? Same thing: you maximize your clients' absurd metrics. If you are lucky enough to have clients! Otherwise, you spend your time optimizing the metrics that Facebook, Google Analytics or Amazon feed you, all while feeling like you are working on your project. There is even an entire profession dedicated to optimizing a single metric provided by Google: SEO.

A few years ago, merely having voiced this idea led professionals of that sector to organize so that a search for my name would return insults of their own making. This anecdote illustrates the problem with absurd metrics well: it is impossible to make someone understand that a metric is absurd when they pay to optimize that metric, or when they have built their career on that very metric. A simple challenge triggers completely disproportionate, religious violence.

Religion and violence

Identity-based retreat, religiosity and most conservative opinions are generated by anxiety and by the feeling of not understanding. This is not a political analysis but a neurological one. Deactivating a few neurons in the brain is enough for the anxiety to suddenly no longer be linked to that retreat. Since we cannot deactivate those neurons in everyone, one solution remains that has already proven itself: education, which lets people understand and therefore be less anxious.

https://www.lemonde.fr/passeurdesciences/article/2015/10/21/moins-croire-en-dieu-avec-la-stimulation-magnetique_6001729_5470970.html

Religion is, in any case, only a pretext. Religious interpretations are not the cause of violence or withdrawal; on the contrary, they are its symptom, its excuse.

https://medium.com/incerto/religion-violence-tolerance-progress-nothing-to-do-with-theology-a31f351c729e

The headless henhouse!

By religiously using the wrong metrics, we are turning the planet into a kind of super henhouse in which foolishness and stupidity are optimized. That is, by the way, the very definition of faith: believing without asking questions, without trying to understand. Faith is stupidity elevated to the rank of a virtue. The invasion of the Capitol by Trump's supporters was the supreme illustration of this: not very bright people, having faith that one of them had a plan and that they would follow it. Except there was no plan; that invasion was a "meme", just as Q is: a mere idea launched on social networks that built up its own importance through rumor and virtual word of mouth. Besides, once inside the Capitol, nobody knew what to do. They sat in the chairs to feel important, took selfies, and tried to find juicy conspiracies, within seconds, in the hundreds of pages of legislative documents that are probably available on the government's website. When your political culture is fed essentially by Netflix action series, the revolution quickly reaches its limits.

As Cory Doctorow points out very well, memes and fake news are not reality, but they are the expression of a fantasy. Internet memes are not created to describe reality, but to try to bend reality to our desires.

https://locusmag.com/2019/07/cory-doctorow-fake-news-is-an-oracle/

But there is no need to go that far. Well before Trump, Belgium had experienced the concept of the "meme politician" with MP Laurent Louis. An MP so absurd that I had joked, in a satirical article, that he was nothing but a hoax. An article which, incidentally, led Laurent Louis himself to post his birth certificate on social networks to prove that he existed. That failure to perceive irony struck me in particular.

Like Trump, Laurent Louis eventually found a niche and followers. Enough to stir up a bit of chaos, not enough to avoid disappearing into oblivion as a footnote illustrating the weaknesses of a political system far too optimized to reward marketing and stupidity. But now I am lapsing into pleonasm.

https://ploum.net/le-depute-qui-nexistait-pas/

Escaping the henhouse

I bought a collection of short stories by Valery Bonneau. I lent it to my mother before even reading it. She told me I absolutely had to read the first story, "Putain de cafetière". I dove in. I fell off my chair laughing. Honestly, the bit about the American fridge with a PIN code still cracks me up.

Enjoy! (The paper version is even more delightful!)

https://www.valerybonneau.com/romans/nouvelles-noires-pour-se-rire-du-desespoir/putain-de-cafetiere

Looking for a novel pumped full of vitamins? Need to escape the endless lockdowns and curfews? Printeurs by Ploum is made for you!

It's not me saying it; it's a review I never tire of rereading:

https://albdoblog.com/2021/01/20/printeurs-ploum/

By the way, if you have read Printeurs, don't hesitate to leave your review on Senscritique and Babelio. I hate Senscritique, but I haven't found a sustainable alternative yet.

https://www.senscritique.com/livre/Printeurs/43808921

https://www.babelio.com/livres/Dricot-Printeurs-Science-fiction/1279338

Another Firefox plugin that saves my life, and for which I took out a pay-what-you-want premium subscription:

https://ninja-cookie.com/

No more fiddling with cookie settings. The plugin automatically refuses them as far as they can be refused. It is perfect and indispensable.

It says a lot about the state of today's web. When you see how many protections you need simply to "read" the content of web pages without frying your brain and without being spied on from every side, you understand better the appeal of a protocol like Gemini, designed from the ground up to be as un-extensible as possible!

Comics recommendation

After the magnificent "L'Autre Monde" and "Mary la Noire", I am discovering a new facet of Florence Magnin's universe: "L'héritage d'Émilie".

I discovered Magnin by chance, in my favorite bookshop. L'Autre Monde caught my eye. The artwork was magnificent, but of a peculiar naivety. I wasn't sure I liked it. I didn't like it; I was literally sucked in. That blend of naivety and adult universe, of a fantastique both quaint and incredibly modern. L'héritage d'Émilie is no exception. In fact, it even transcends the other two by mixing the Paris of the Roaring Twenties with the Celtic legends of Ireland, all in a work of pastoral fantasy that suddenly slides into intergalactic space opera. Yes, it is completely incredible. And yes, I love it.

Photo by Artem Beliaikin on Unsplash

Receive my posts by email or RSS. At most two posts a week, nothing else. Your email address is never shared and is permanently deleted when you unsubscribe. Latest book: Printeurs, a cyberpunk thriller. To support the author: read, give and share books.

This text is published under the CC-By BE license.

I'm excited to announce that Acquia has signed a definitive agreement to acquire Widen, a digital asset management (DAM) and product information management (PIM) company.

The Acquia and Widen logos shown next to each other

It's not hard to understand how Widen fits Acquia's strategy. Our goal is to build the best Digital Experience Platform (DXP). Content is at the heart of any digital experience. By adding a DAM and PIM to our platform, our customers will be able to create better content, more easily. That will result in better customer experiences. Plain and simple.

Widen is for organizations with larger marketing teams managing one or more brands. These teams create thousands of "digital assets": images, videos, PDFs and much more. Those digital assets are used on websites, mobile applications, in-store displays, presentations, etc. Managing thousands of files, plus all the workflows to support them, is difficult without the help of a DAM.

For commerce purposes, marketers need to correlate product images with product information like pricing, sizing, or product specifications. To do so, Widen offers a PIM. Widen built their PIM on top of their DAM — an approach that is both clever and unique. Widen's PIM can ingest product content from proprietary systems, master data management (MDM) platforms, data lakes, and more. From there, marketers can aggregate, synthesize, and syndicate product content across digital channels.

In short, organizations need a lot of content to do business. And online commerce can't exist without product information. It's why we are so excited about Widen, and the ability to add a DAM and PIM to our product portfolio.

Because content is at the heart of any digital experience, we will build deep integrations between Widen and Acquia's DXP. So in addition to acquiring Widen, we are making a large investment in growing Widen's engineering team. That investment will go towards extending the existing Widen module for Drupal, and creating integrations with Acquia's products: Acquia Site Studio, Acquia Campaign Studio (Mautic), Acquia Personalization, and more. Digital asset management will be a core building block of our DXP.

Needless to say, we will continue to support and invest in Widen working with other Content Management Systems and Digital Experience Platforms. We are building an open DXP; one of our key principles is that customers should be able to integrate with their preferred technologies, and that might not always be ours. By growing the engineering team, we can focus on building Drupal and Acquia integrations without disrupting the existing roadmap and customer commitments.

A few other facts that might be of interest:

Last but not least, I'd like to welcome all of Widen's customers and employees to the Acquia family. I'm excited to see what we will accomplish together.

September 07, 2021

In this last post on the infrastructure domain, I cover the fifth and final viewpoint that is important for an infrastructure domain representation, and that is the location view. As mentioned in previous posts, the viewpoints I think are most representative of the infrastructure domain are:

  • the component view
  • the location view
  • the process view
  • the service view
  • the zoning view

Like with the component view, the location view is a layered approach. While I initially wanted to call it the network view, "location" might be a broader term that matches the content better. Still, it's not a perfect name, but the name is less important than the content, no?

September 04, 2021

I had been a happy user of the Nokia 6.1 I bought three and a half years ago, but with battery life slowly declining and both major OS updates and security updates having stopped, it was time to find a replacement. Although the tech reporters and vloggers were underwhelmed by the screen (no OLED or AMOLED, only a 60Hz refresh rate) and CPU (the SM4350 Snapdragon 480 is considered too slow)...

Source

September 02, 2021

I published the following diary on isc.sans.edu: “Attackers Will Always Abuse Major Events in our Lifes“:

All major events in our daily life are potential sources of revenue for attackers. When elections or major sports events are organized, attackers will surf on these waves and try to make some profit or collect interesting data (credentials). It’s the same with major meteorological phenomena. The hurricane “Ida” was the second most intense hurricane to hit the state of Louisiana on record, only behind “Katrina”… [Read more]

The post [SANS ISC] Attackers Will Always Abuse Major Events in our Lifes appeared first on /dev/random.

September 01, 2021

Blogging sometimes feels like talking to an imaginary friend. It's an interesting comparison because it could help me write more regularly. For example: I can picture myself going to dinner with my imaginary friend. Once we sit down, what would we talk about? What would I share?

I'd share that I've been doing well the past year.

Work is going well. I'm fortunate to help lead at a growing software company. We continue to hit record sales quarter after quarter, and we hired more than 250 new employees in 2021 alone. Keeping up with all the work can be challenging, but I continue to have fun and learn a lot, which is the most important part.

Most days I work from home. Working from home consists of 8 hours of Zoom meetings, followed by email, presentation and planning work. I finish most work days energized and drained at the same time.

Over the course of two years, I've created a home office setup that is more comfortable, more ergonomic, and more productive than my desk at the office. I invested in an ergonomic chair, standing desk, camera setup, a second screen, and even a third screen. Possibly an interesting topic for a future blog post.

Despite having a great home office setup, I'd like to work more from interesting locations. I'm writing this blog post from an island on Lake Winnipesaukee in New Hampshire where we have a management offsite. Working from an island is as awesome as it sounds. The new hybrid work arrangement provides that extra flexibility.

A chair with a view of Lake Winnipesaukee
Overlooking Lake Winnipesaukee in New Hampshire. Coffee and laptop for morning blogging.

When not working, I've been enjoying the summer in Boston. We moved from the suburbs to the city this year, and have been busy exploring our new neighborhood. We love it!

I've been very happy with our decision to move to the city, except for one thing: tennis. I love playing tennis with a coach, and that has been nearly impossible in the city. As a result I haven't played tennis for months — the lack of workout routine has been really bothering me. Because I love racket sports the most, I started to explore if there are good squash, pickleball or table tennis options in downtown Boston. Recommendations welcome!

Last but not least, we spent some time at Cape Cod this summer, and traveled to Iceland for a weekend. I'll tie off this blog post with a few photos of those trips.

An American flag waving in the light of the moon
A red moon over the water in Cape Cod.
Eating dinner outside overlooking the ocean
Dinner at Cape Cod.
A marshmallow over a campfire
S'mores on the beach.

The Geldingadalur volcano in Iceland.

In my previous post, I started with the five different views that would support a good view of what infrastructure would be. I believe these views (component, location, process, service, and zoning) cover the breadth of the domain. The post also described the component view a bit more and linked to previous posts I made (one for services, another for zoning).

The one I want to tackle here is the most elaborate one, and also the most enterprise-ish. As an architect, it is always a balancing act how much time and effort to put into it, while hoping that the processes are standardized in a sufficiently flexible manner so that you don't need to cover everything again and again in each project.

So, let's talk about processes...

August 30, 2021

As I was unclipping my feet from my pedals after my long mountain-bike crossing of the Massif Central in the company of Thierry Crouzet, my phone showed me an email with a title both obvious and incomprehensible, unimaginable: "Roudou has left us".

With the Internet appeared a new form of social relationship, a new form of interaction, even, I dare say it, of friendship. A friendship with people with whom you discover intellectual affinities, but whom you will rarely or never see. A friendship all the same. A friendship that can lead to complicity, to the creation of joint projects. A friendship that outdoes many of the flesh-and-blood relationships that proximity imposes on us daily.

Jean-Marc Delforge, Roudou to his friends, was for me one of those long-haul friendships. A reader of my blog for years, a free software user and amateur illustrator, he sent me the very first Printeurs fan art and went on to create the cover of the first Printeurs EPUB.

Through our discussions, we created together the webcomic "Les startupeurs", for which I piled up scripts until, unfortunately, Roudou could no longer find the time to draw them. Characters of slightly disillusioned employees (one of whom is, according to Roudou, a parody of me), dreaming of founding their startup and addicted to the coffee machine (Roudou's idea!).

https://ploum.net/les-startupeurs-un-nouveau-webcomic/

We had a blast with those ideas, trying our hand at political cartoons, sharing, discussing and discovering a common passion for mountain biking.

For Roudou was more than a mountain-biking enthusiast. He was a leader, a trail builder and the founder of the VTTnet forum. In his wake, it was impossible not to pedal.

In 2015, he invited me to join him, together with my godson Loïc, for three days of intensive mountain biking in the company of the forum's members.

Roudou, his daughter Noémie, my godson Loïc and the other VTTnet maniacs in 2015

By the greatest of coincidences, Loïc and I passed through his region again in early July for a bikepacking trip. When Roudou found out, he immediately sent me a message to say we had missed each other by a hair. While Loïc and I were lounging on the shore of the Eau d'Heure lakes, he was probably out boating there. He laughed as he read the route we had taken, telling me he could have guided us, that he lived very close by.

I felt sad at the idea of having missed such an opportunity to ride together. I promised we would redo the trip the next year. That it would be really nice to meet up on a bike (even if, for health reasons he didn't want to detail, Roudou's mountain bike had become electric).

To a slightly accusing message asking how I dared come ride in his region without telling him, I replied that I had been convinced he lived much further west.

Roudou's reply was not long in coming: "My wife also often tells me I'm way too far out west."

That was the last message I received from him. On July 16, I set off on 1,000 km of mostly disconnected mountain biking, promising myself to go ride with Roudou the next summer.

But while I was pedaling far from everything, death took him by surprise, forever interrupting our conversation thread and plunging the startupeurs, the mountain bikers, his wife, his daughters and his friends into infinite sadness.

I will miss Roudou. I will miss his doodles and the humorous photos he sent in reaction to my blog posts and my books. I will miss the startupeurs, even if they were hibernating (I don't even have a copy of that joint work, which may be lost). When I dive into the sequel to Printeurs, I know the characters will spare a thought for Roudou, the reader who gave them shape under his graphics tablet.

I will always carry with me the regret of having forgotten to let him know, of having wasted that last opportunity before he left to pedal a little further west. A little too far west…

Farewell, artist; farewell, Roudou! We will keep following your tracks, thinking of you.

Receive my posts by email or RSS. At most two posts a week, nothing else. Your email address is never shared and is permanently deleted when you unsubscribe. Latest book: Printeurs, a cyberpunk thriller. To support the author: read, give and share books.

This text is published under the CC-By BE license.

I published the following diary on isc.sans.edu: “Cryptocurrency Clipboard Swapper Delivered With Love“:

Be careful if you’re a user of cryptocurrencies. My goal is not to re-open a debate about them and their associated financial risks. No, I’m talking here about technical risk. Wallet addresses are long strings of characters that are pretty impossible to use manually. It means that you’ll use your clipboard to copy/paste your wallets to perform payments. But some malware monitors your clipboard for “interesting data” (like wallet addresses) and tries to replace it with another one. If you perform a payment operation, it means that you will transfer some BTC or XMR to the wrong wallet, owned by the attacker… [Read more]
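
A cheap habit that blunts this class of malware: inspect what is actually on the clipboard right before pasting a wallet address. A minimal sketch for Linux, assuming xclip is installed (on macOS, pbpaste plays the same role):

# print the current clipboard contents so the address can be
# compared character by character with the intended wallet
xclip -selection clipboard -o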

The post [SANS ISC] Cryptocurrency Clipboard Swapper Delivered With Love appeared first on /dev/random.