Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

September 20, 2018

A small reminder of the danger of centralized platforms

We tend to forget it, but today most of our interactions on the net take place through centralized platforms. This means that a single entity holds absolute power over what happens on its platform.

That remains rather theoretical until the day when, for no apparent reason, your account is suspended. Overnight, all your content becomes inaccessible. All your contacts become unreachable (and you are unreachable to them).

It is unbearable when it happens to you in a private capacity. It can be outright ruin if it happens to you professionally. And it doesn't only happen to other people.

A friend of mine woke up one day to find that his Google account had been completely suspended. He no longer had access to his personal and semi-professional emails. He no longer had access to his Google Drive documents or his calendar. He no longer had access to any of the services that use your Google address to log in. Most of the apps on his phone stopped working. He was digitally annihilated and had absolutely no recourse. On top of that, he was travelling on the other side of the world. It could almost be the plot of a movie.

Luckily, one of his contacts worked at Google and managed to unblock the situation after a few weeks, but without any explanation. It could happen to him again tomorrow. It could happen to you. Try living for a week without your Google account and you will understand.

Personally, I didn't feel too concerned because, while I am still partially on Google, I have a paid account with my own domain name. That one, I told myself, should never run into problems. After all, I am a paying customer.

And yet I have just discovered that my Google+ profile has been suspended. While I can still use my email and my calendar, all my Google+ posts are now inaccessible. Fortunately, I stopped posting there actively years ago and had copied to my blog everything I considered important. Nevertheless, for the 3,000 people who follow me on Google+, I have simply ceased to exist and they don't know it (feel free to send them this post). When I collaborate on a Google Drive document, it is no longer my photo and my chosen name that are displayed. I can no longer comment on Youtube either (not that I ever do).

In short, while this profile suspension is not dramatic for me, it still serves as a wake-up call. On Twitter, I went through a similar incident when I decided to test the advertising model by paying to promote the adventures of Aristide, my children's book (which you can still order for a few more days). After a few impressions, my campaign was blocked and I was warned that my ad did not comply with Twitter's rules (a children's book, imagine the scandal!). Luckily, my account itself was not affected by that decision (which would have bothered me far more than the Google+ suspension) but, once again, it was a close call.

Same story on Reddit, where my account was "shadow banned". That means that, without my noticing, everything I posted was invisible. I wondered why my comments never got any replies, and it was several contacts who confirmed that they could not see what I was posting. An anonymous moderator had decided, irrevocably, that I did not belong on Reddit, and I knew nothing about it.

On Facebook, stories like these are plentiful, even if I have not experienced them personally. On Medium, the Liberapay project went through the same kind of adventure because, like any centralized platform, Medium arrogates to itself the right to decide what may or may not be published on it.

In every one of these cases, notice that there is no clearly identified violation and there is no recourse. Sometimes it is possible to ask for the situation to be reviewed, but a real discussion is generally impossible; everything is automated.

So what can we do?

Well, that is why I warmly recommend that you also follow me on Mastodon, a decentralized social network. I don't run my own instance, but I chose to create my account on mamot.fr, which is operated by La Quadrature du Net. I trust that, in case of a problem or dispute, I will be dealing with a human being who more or less shares my sensibilities.

Of course, the solution is not perfect. If your Mastodon instance disappears, you lose everything too. That is why many Mastodon users now have several accounts. I admit that, in this case, I place blind trust in the technical abilities of La Quadrature du Net. But given the choice, I would rather lose an account to a technical failure than to an unjust and unappealable political decision.

While we wait for a perfect solution, it is important to keep reminding ourselves that we hand our content to the platforms and that they do with it whatever they want or can. So don't spend too much energy writing long essays or polishing your texts over there. Keep in mind that all your accounts, your writings, your data and your contacts could disappear tomorrow. Plan fallback solutions, be resilient.

That is why this blog has remained, for 14 years, a constant of my online presence, my one and only official archive. It has already outlived most web services and I even hope it will outlive me.

PS: Without any explanation, my Google+ account seems to be active again. But for how long?

Photo by Kev Seto on Unsplash

Enjoyed your read? Support the author on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins at 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Even a symbolic donation makes all the difference! Then let's meet up on Facebook, Twitter or Mastodon.

This text is published under the CC-By BE license.

I published the following diary on isc.sans.edu: “Hunting for Suspicious Processes with OSSEC“:

Here is a quick example of how OSSEC can be helpful to perform threat hunting. OSSEC is a free security monitoring tool/log management platform which has many features related to detecting malicious activity on a live system, such as the rootkit detection or syscheck modules. Here is an example of rules that can be deployed to track malicious processes running on a host (it can be seen as an extension of the existing rootkit detection features). What do I mean by malicious processes? Think about crypto miners. There are plenty of suspicious processes that can be extracted from malicious scripts… [Read more]

[The post [SANS ISC] Hunting for Suspicious Processes with OSSEC has been first published on /dev/random]

September 18, 2018

September 17, 2018

Or are you the one listening to Facebook?

You have probably heard the rumor: Facebook is said to listen to all our conversations through our phones' microphones, and its recognition algorithms then use them to show us ads related to our recent discussions.

Many testimonies point in this direction, always following the same pattern: a Facebook user sees an ad that seems related to a conversation they just had. What strikes them is that at no point did they do anything online that could have informed advertisers (visiting sites on the subject, searching for it, liking posts, etc.).

Facebook has formally denied using phone microphones to record conversations. Most specialists also consider that doing so at such a scale is not yet technologically feasible, nor profitable. Google, for its part, acknowledges recording its users' surroundings, but without using that data for advertising (you can listen to the recordings made by your Google account at this link; it is quite striking to hear random moments of your daily life).

What is striking in this debate is, first of all, how easily the problem could be solved: uninstall the Facebook application from your phone (and access Facebook through your browser).

Second, while it is not completely impossible that Facebook records our conversations, it is amusing to note that millions of people pay to install in their homes a device that does exactly that: Amazon Alexa, Google Echo or Apple Homepod record our conversations and send them to Amazon, Google and Apple. Phew, since it's not Facebook, that's fine then. And your location? The people you are with in a given place? The people whose photos you look at? The people you correspond with? Well, that's fine, Facebook is allowed to know all that. In any case, Facebook will know everything even if you are careful. The developer Laura Kalbag was served ads for a funeral service when her mother died, despite blocking Facebook completely. The leak? One of her sisters had reportedly told a friend via Messenger.

And yet the phone-microphone rumor persists. The most likely hypothesis seems to be simple coincidence. We are struck when an ad resembles a conversation we had earlier but, when it doesn't, we don't consciously notice the ad and don't register the event.

But there is another hypothesis. An even more frightening one. Frightening in its simplicity and its perversity. The very simple hypothesis that if we have a conversation about a particular topic, it is because one or more of us saw an ad on that topic. Sometimes without remembering it. Often without realizing it.

As a result, the ad would seem far more conspicuous to us afterwards. We would refuse to accept the hypothesis that our free will is that easy to manipulate. And we would imagine that Facebook controls our phones, controls our microphones.

Rather than admit that it already controls our minds. That it controls our topics of conversation, because that is its business model. Even if we are not on Facebook: it is enough that our friends are.

Facebook doesn't need to listen to our conversations. It already decides what we talk about, what we think and who we are going to vote for.

But it is easier to be outraged at the idea that Facebook might control a mere microphone than to question what underlies our beliefs and our identity…

Photo by Nathaniel dahan on Unsplash

Enjoyed your read? Support the author on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins at 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Even a symbolic donation makes all the difference! Then let's meet up on Facebook, Twitter or Mastodon.

This text is published under the CC-By BE license.

Last week, nearly 1,000 Drupalists gathered in Darmstadt, Germany for Drupal Europe. In good tradition, I presented my State of Drupal keynote. You can watch a recording of my keynote (starting at 4:38) or download a copy of my slides (37 MB).



Drupal 8 continues to mature

I started my keynote by highlighting this month's Drupal 8.6.0 release. Drupal 8.6 marks the sixth consecutive Drupal 8 release that has been delivered on time. Compared to one year ago, we have 46 percent more stable Drupal 8 modules, and 10 percent more contributors are working on Drupal 8 core. All of these milestones indicate that Drupal 8 is healthy and growing.

The number of stable modules for Drupal 8 is growing fast

Next, I gave an update on our strategic initiatives:

An overview of Drupal 8's strategic initiatives

Make Drupal better for content creators

A photo from Drupal Europe in Darmstadt. © Paul Johnson

The expectations of content creators are changing. For Drupal to be successful, we have to continue to deliver on their needs by providing more powerful content management tools, in addition to delivering simplicity through drag-and-drop functionality, WYSIWYG editing, and more.

With the release of Drupal 8.6, we have added new functionality for content creators by making improvements to the Media, Workflow, Layout and Out-of-the-Box initiatives. I showed a demo video to demonstrate how all of these new features not only make content authoring easier, but more powerful:



We also need to improve the content authoring experience through a modern administration user interface. We have been working on a new administration UI using React. I showed a video of our latest prototype:





Extended security coverage for Drupal 8 minor releases

I announced an update to Drupal 8's security policy. Until now, site owners had one month after a new minor Drupal 8 release to upgrade their sites before losing security updates. Going forward, Drupal 8 site owners have six months to upgrade between minor releases. This extra time should give site owners the flexibility to plan, prepare and test minor updates. For more information, check out my recent blog post.

Make Drupal better for evaluators

One of the most significant updates since DrupalCon Nashville is Drupal's improved evaluator experience. The time required to get a Drupal site up and running has decreased from more than 15 minutes to less than two minutes and from 20 clicks to 3. This is a big accomplishment. You can read more about it in my recent blog post.



Promote Drupal

After launching Promote Drupal at DrupalCon Nashville, we hit the ground running with this initiative and successfully published a community press release for the release of Drupal 8.6, which was also translated into multiple languages. Much more is underway, including a brand book, a marketing collaboration space on Drupal.org, and a Drupal pitch deck.

The Drupal 9 roadmap and a plan to end-of-life Drupal 7 and Drupal 8

To keep Drupal modern, maintainable, and performant, we need to stay on secure, supported versions of Drupal 8's third-party dependencies. This means we need to end-of-life Drupal 8 with Symfony 3's end-of-life. As a result, I announced that:

  1. Drupal 8 will be end-of-life by November 2021.
  2. Drupal 9 will be released in 2020, and it will be an easy upgrade.

Historically, our policy has been to only support two major versions of Drupal; Drupal 7 would ordinarily reach end of life when Drupal 9 is released. Because a large number of sites might still be using Drupal 7 by 2020, we have decided to extend support of Drupal 7 until November 2021.

For those interested, I published a blog post that further explains this.

Adopt GitLab on Drupal.org

Finally, the Drupal Association is working to integrate GitLab with Drupal.org. GitLab will provide support for "merge requests", which means contributing to Drupal will feel more familiar to the broader audience of open source contributors who learned their skills in the post-patch era. Some of GitLab's tools, such as inline editing and web-based code review, will also lower the barrier to contribution, and should help us grow both the number of contributions and contributors on Drupal.org.

To see an exciting preview of Drupal.org's GitLab integration, watch the video below:

Thank you

Our community has a lot to be proud of, and this progress is the result of thousands of people collaborating and working together. It's pretty amazing! The power of our community isn't just visible in minor releases or a number of stable modules. It was also felt at this very conference, as many volunteers gave their weekends and evenings to help organize Drupal Europe in the absence of a DrupalCon Europe organized by the Drupal Association. From code to community, the Drupal project is making an incredible impact. I look forward to celebrating our community's work and friendships at future Drupal conferences.

A summary of our recent progress

Someone pointed me towards this email, in which Linus apologizes for some of his more unhealthy behaviour.

The above is basically a long-winded way to get to the somewhat painful personal admission that hey, I need to change some of my behavior, and I want to apologize to the people that my personal behavior hurt and possibly drove away from kernel development entirely.

To me, this came somewhat as a surprise. I'm not really involved in Linux kernel development, and so the history of what led up to this email mostly passed unnoticed, at least for me; but that doesn't mean I cannot recognize how difficult this must have been to write for him.

As I know from experience, admitting that you have made a mistake is hard. Admitting that you have been making the same mistake over and over again is even harder. Doing so publicly? Even more so, since you're placing yourself in a vulnerable position, one that the less honorably inclined will take advantage of if you're not careful.

There isn't much I can contribute to the whole process, but there is this: Thanks, Linus, for being willing to work on those things, which can only make the community healthier as a result. It takes courage to admit things like that, and that is only to be admired. Hopefully this will have the result we're hoping for, too; but that, only time can tell.

September 14, 2018

Now that Debian has migrated away from alioth and towards a gitlab instance known as salsa, we get a pretty advanced Continuous Integration system for (almost) free. Having that, it might make sense to use that setup to autobuild and -test a package when committing something. I had a look at doing so for one of my packages, ola; the reason I chose that package is because it comes with an autopkgtest, so that makes testing it slightly easier (even if the autopkgtest is far from complete).

Gitlab CI is configured through a .gitlab-ci.yml file, which supports many options and may therefore be a bit complicated for first-time users. Since I've worked with it before, I understand how it works, so I thought it might be useful to show people how you can do things.

First, let's look at the .gitlab-ci.yml file which I wrote for the ola package:

stages:
  - build
  - autopkgtest
.build: &build
  before_script:
  - apt-get update
  - apt-get -y install devscripts adduser fakeroot sudo
  - mk-build-deps -t "apt-get -y -o Debug::pkgProblemResolver=yes --no-install-recommends" -i -r
  - adduser --disabled-password --gecos "" builduser
  - chown -R builduser:builduser .
  - chown builduser:builduser ..
  stage: build
  artifacts:
    paths:
    - built
  script:
  - sudo -u builduser dpkg-buildpackage -b -rfakeroot
  after_script:
  - mkdir built
  - dcmd mv ../*ges built/
.test: &test
  before_script:
  - apt-get update
  - apt-get -y install autopkgtest
  stage: autopkgtest
  script:
  - autopkgtest built/*ges -- null
build:testing:
  <<: *build
  image: debian:testing
build:unstable:
  <<: *build
  image: debian:sid
test:testing:
  <<: *test
  dependencies:
  - build:testing
  image: debian:testing
test:unstable:
  <<: *test
  dependencies:
  - build:unstable
  image: debian:sid

That's a bit much. How does it work?

Let's look at every individual toplevel key in the .gitlab-ci.yml file:

stages:
  - build
  - autopkgtest

Gitlab CI has a "stages" feature. A stage can have multiple jobs, which will run in parallel, and gitlab CI won't proceed to the next stage unless and until all the jobs in the previous stage have finished. Jobs from one stage can use files from a previous stage by way of the "artifacts" or "cache" features (which we'll get to later). However, in order to be able to use the stages feature, you have to create stages first. That's what we do here.

.build: &build
  before_script:
  - apt-get update
  - apt-get -y install devscripts adduser fakeroot sudo
  - mk-build-deps -t "apt-get -y -o Debug::pkgProblemResolver=yes --no-install-recommends" -i -r
  - adduser --disabled-password --gecos "" builduser
  - chown -R builduser:builduser .
  - chown builduser:builduser ..
  stage: build
  artifacts:
    paths:
    - built
  script:
  - sudo -u builduser dpkg-buildpackage -b -rfakeroot
  after_script:
  - mkdir built
  - dcmd mv ../*ges built/

This tells gitlab CI what to do when building the ola package. The main bit is the script: key in this template: it essentially tells gitlab CI to run dpkg-buildpackage. However, before we can do so, we need to install all the build-dependencies and a few helper things, as well as create a non-root user (since ola refuses to be built as root). This we do in the before_script: key. Finally, once the packages have been built, we create a built directory, and use devscripts' dcmd to move the output of the dpkg-buildpackage command into the built directory.

Note that the name of this key starts with a dot. This signals to gitlab CI that it is a "hidden" job, which it should not start by default. Additionally, we create an anchor (the &build at the end of that line) that we can refer to later. This makes it a job template, not a job itself, that we can reuse if we want to.

The reason we split up the script to be run into three different scripts (before_script, script, and after_script) is simply so that gitlab can understand the difference between "something is wrong with this commit" and "we failed to even configure the build system". It's not strictly necessary, but I find it helpful.

Since we configured the built directory as the artifacts path, gitlab will do two things:

  • First, it will create a .zip file in gitlab, which allows you to download the packages from the gitlab webinterface (and inspect them if need be). The length of time for which the artifacts are stored can be configured by way of the artifacts:expire_in key (see the sketch below); if not set, it defaults to 30 days or to whatever the salsa maintainers have configured (I'm not sure what that is).
  • Second, it will make the artifacts available in the same location on jobs in the next stage.

The first can be avoided by using the cache feature rather than the artifacts one, if preferred.
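
As a rough sketch of how those two options look in the configuration, the artifacts: block of the build template above could get an explicit expiry, or be replaced by a cache: block. The artifacts:expire_in and cache keys, as well as the predefined CI_COMMIT_REF_SLUG variable, are standard gitlab CI features; the one-week expiry and the per-branch cache key are only illustrative values, not what the ola package uses:

  # variant 1: keep artifacts, but expire the downloadable .zip after a week
  artifacts:
    paths:
    - built
    expire_in: 1 week

  # variant 2: use the cache feature instead (no .zip in the web interface),
  # keyed per branch via the predefined CI_COMMIT_REF_SLUG variable
  cache:
    key: "$CI_COMMIT_REF_SLUG"
    paths:
    - built

Which variant fits best mostly depends on whether you want the built packages to be downloadable from the web interface at all.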

.test: &test
  before_script:
  - apt-get update
  - apt-get -y install autopkgtest
  stage: autopkgtest
  script:
  - autopkgtest built/*ges -- null

This is very similar to the build template that we had before, except that it sets up and runs autopkgtest rather than dpkg-buildpackage, and that it does so in the autopkgtest stage rather than the build one, but there's nothing new here.

build:testing:
  <<: *build
  image: debian:testing
build:unstable:
  <<: *build
  image: debian:sid

These two use the build template that we defined before. This is done by way of the <<: *build line, which is YAML-ese to say "inject the other template here". In addition, we add extra configuration -- in this case, we simply state that we want to build inside the debian:testing docker image in the build:testing job, and inside the debian:sid docker image in the build:unstable job.

test:testing:
  <<: *test
  dependencies:
  - build:testing
  image: debian:testing
test:unstable:
  <<: *test
  dependencies:
  - build:unstable
  image: debian:sid

This is almost the same as the build:testing and the build:unstable jobs, except that:

  • We instantiate the test template, not the build one;
  • We say that the test:testing job depends on the build:testing one. This does not cause the job to start before the end of the previous stage (that is not possible); instead, it tells gitlab that the artifacts created in the build:testing job should be copied into the test:testing working directory. Without this line, all artifacts from all jobs from the previous stage would be copied, which in this case would create file conflicts (since the files from the build:testing job have the same name as the ones from the build:unstable one).

It is also possible to run autopkgtest in the same image in which the build was done. However, the downside of doing that is that if one of your built packages lacks a dependency that is an indirect dependency of one of your build dependencies, you won't notice; by blowing away the docker container in which the package was built and running autopkgtest in a pristine container, we avoid this issue.
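
For comparison, here is roughly what that single-image approach could look like: one job that builds the package and immediately runs autopkgtest inside the same debian:testing container. This is only a sketch to illustrate the trade-off, not how the ola repository is actually set up; it simply merges the commands from the two templates above into one job:

build-and-test:testing:
  image: debian:testing
  stage: build
  before_script:
  - apt-get update
  # autopkgtest is installed next to the build tools, since there is no separate test stage
  - apt-get -y install devscripts adduser fakeroot sudo autopkgtest
  - mk-build-deps -t "apt-get -y -o Debug::pkgProblemResolver=yes --no-install-recommends" -i -r
  - adduser --disabled-password --gecos "" builduser
  - chown -R builduser:builduser .
  - chown builduser:builduser ..
  script:
  - sudo -u builduser dpkg-buildpackage -b -rfakeroot
  - mkdir built
  - dcmd mv ../*ges built/
  # the tests now run against packages built in this same, no longer pristine, container
  - autopkgtest built/*ges -- null

The pipeline gets simpler and somewhat faster, but, as noted above, a missing runtime dependency that happens to be pulled in as a build dependency would slip through unnoticed.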

With that, you have a complete working example of how to do continuous integration for Debian packaging. To see it work in practice, you might want to look at the ola version

UPDATE (2018-09-16): dropped the autoreconf call, it isn't needed (it was there because the build didn't work on the first try and I thought that might have been related, but that turned out to be a red herring, and I forgot to drop it)

Seven months ago, Matthew Grasmick published an article describing how hard it is to install Drupal. His article included the following measurements for creating a new application on his local machine, across four different PHP frameworks:

Platform  | Clicks | Time
Drupal    | 20+    | 15:00+
Symfony   | 3      | 1:55
WordPress | 7      | 7:51
Laravel   | 3      | 17:28

The results from Matthew's blog were clear: Drupal is too hard to install. It required more than 15 minutes and 20 clicks to create a simple site.




Seeing these results prompted me to launch a number of initiatives to improve the evaluator experience at DrupalCon Nashville. Here is the slide from my DrupalCon Nashville presentation:

Improve technical evaluator experience

A lot has happened between then and now:

  • We improved the download page to streamline the discovery experience on drupal.org
  • We added an Evaluator Guide to Drupal.org
  • We added a quick-start command to Drupal 8.6
  • We added the Umami demo profile to Drupal 8.6
  • We started working on a more modern administration experience (in progress)

You can see the result of that work in this video:




Thanks to this progress, here is the updated table:

Platform  | Clicks | Time
Drupal    | 3      | 1:27
Symfony   | 3      | 1:55
WordPress | 7      | 7:51
Laravel   | 3      | 17:28

Drupal now requires the least time and is tied for least clicks! You can now install Drupal in less than two minutes. Moreover, the Drupal site that gets created isn't an "empty canvas" anymore; it's a beautifully designed and fully functional application with demo content.

Copy-paste the following commands in a terminal window if you want to try it yourself:

mkdir drupal && cd drupal && curl -sSL https://www.drupal.org/download-latest/tar.gz | tar -xz --strip-components=1
php core/scripts/drupal quick-start demo_umami

For more detailed information on how we achieved these improvements, read Matthew's latest blog post: The New Drupal Evaluator Experience, by the numbers.

A big thank you to Matthew Grasmick (Acquia) for spearheading this initiative!

September 13, 2018

I published the following diary on isc.sans.edu: “Malware Delivered Through MHT Files“:

What are MHT files? Microsoft is a wonderful source of multiple file formats. MHT files are web page archives. Usually, a web page is based on a piece of HTML code with links to external resources, images and other media. MHT files contain all the data related to a web page in a single place and are therefore very useful for archiving it. Also called MHTML (MIME Encapsulation of Aggregate HTML Documents), they are encoded like email messages using MIME parts… [Read more]

[The post [SANS ISC] Malware Delivered Through MHT Files has been first published on /dev/random]

Since the launch of Drupal 8.0, we have successfully launched a new minor release on schedule every six months. I'm very proud of the community for this achievement. Prior to Drupal 8, most significant new features were only added in major releases like Drupal 6 or Drupal 7. Thanks to our new release cadence we now consistently and predictably ship great new features twice a year in minor releases (e.g. Drupal 8.6 comes with many new features).

However, only the most recent minor release has been actively supported for both bug fixes and security coverage. With the release of each new minor version, we gave a one-month window to upgrade to the new minor. In order to give site owners time to upgrade, we would not disclose security issues with the previous minor release during that one-month window.

Old Drupal 8 security policy for minor releases: illustration of the security policy for minor releases since the launch of Drupal 8.0, demonstrating that previous minor releases receive one month of security coverage. Source: Drupal.org issue #2909665: Extend security support to cover the previous minor version of Drupal, and the Drupal Europe DriesNote.

Over the past three years, we have learned that users find it challenging to update to the latest minor in one month. Drupal's minor updates can include dependency updates, internal API changes, or features being transitioned from contributed modules to core. It takes time for site owners to prepare and test these types of changes, and a window of one month to upgrade isn't always enough.

At DrupalCon Nashville we declared that we wanted to extend security coverage for minor releases. Throughout 2018, Drupal 8 release managers quietly conducted a trial. You may have noticed that we had several security releases against previous minor releases this year. This trial helped us understand the impact to the release process and learn what additional work remained ahead. You can read about the results of the trial at #2909665: Extend security support to cover the previous minor version of Drupal.

I'm pleased to share that the trial was a success! As a result, we have extended the security coverage of minor releases to six months. Instead of one month, site owners now have six months to upgrade between minor releases. It gives teams time to plan, prepare and test updates. Releases will have six months of normal bug fix support followed by six months of security coverage, for a total lifetime of one year. This is a huge win for Drupal site owners.

New Drupal 8 security policy for minor releases: illustration of the new security policy for minor releases, demonstrating that the security coverage for minor releases is extended to six months. Source: Drupal.org issue #2909665: Extend security support to cover the previous minor version of Drupal, and the Drupal Europe DriesNote.

It's important to note that this new policy only applies to Drupal 8 core starting with Drupal 8.5, and only applies to security issues. Non-security bug fixes will still only be committed to the actively supported release.

While the new policy will provide extended security coverage for Drupal 8.5.x, site owners will need to update to an upcoming release of Drupal 8.5 to be correctly notified about their security coverage.

Next steps

We still have some user experience issues we'd like to address around how site owners are alerted of a security update. We have not yet handled all of the potential edge cases, and we want to be very clear about the potential actions to take when updating.

We also know module developers may need to declare that a release of their project only works against specific versions of Drupal core. Resolving outstanding issues around semantic versioning support for contrib and module version dependency definitions will help developers of contributed projects better support this policy. If you'd like to get involved in the remaining work, the policy and roadmap issue on Drupal.org is a great place to find related issues and see what work is remaining.

Special thanks to Jess and Jeff Beeman for co-authoring this post.

The post HHVM is Ending PHP Support appeared first on ma.ttias.be.

It looks like HHVM is becoming an independent language, free from the shackles of PHP.

HHVM v3.30 will be the last release series where HHVM aims to support PHP.

The key dates are:
-- 2018-12-03: branch cut: expect PHP code to stop working with master and nightly builds after this date
-- 2018-12-17: expected release date for v3.30.0
-- 2019-01-28: expected release date for v4.0.0, without PHP support
-- 2019-11-19: expected end of support for v3.30

Ultimately, we recommend that projects either migrate entirely to the Hack language, or entirely to PHP7 and the PHP runtime.

Source: Ending PHP Support, and The Future Of Hack | HHVM

The post HHVM is Ending PHP Support appeared first on ma.ttias.be.

September 12, 2018

We just released Drupal 8.6.0. With six minor releases behind us, it is time to talk about the long-term future of Drupal 8 (and therefore Drupal 7 and Drupal 9). I've written about when to release Drupal 9 in the past, but this time, I'm ready to provide further details.

The plan outlined in this blog has been discussed with the Drupal 7 Core Committers, the Drupal 8 Core Committers and the Drupal Security Team. While we feel good about this plan, we can't plan for every eventuality and we may continue to make adjustments.

Drupal 8 will be end-of-life by November 2021

Drupal 8's innovation model depends on introducing new functionality in minor versions while maintaining backwards compatibility. This approach is working so well that some people have suggested we institute minor releases forever, and never release Drupal 9 at all.

However that approach is not feasible. We need to periodically remove deprecated functionality to keep Drupal modern, maintainable, and performant, and we need to stay on secure, supported versions of Drupal 8's third-party dependencies. As Nathaniel Catchpole explained in his post "The Long Road to Drupal 9", our use of various third party libraries such as Symfony, Twig, and Guzzle means that we need to be in sync with their release timelines.

Our biggest dependency in Drupal 8 is Symfony 3, and according to Symfony's roadmap, Symfony 3 has an end-of-life date in November 2021. This means that after November 2021, security bugs in Symfony 3 will not get fixed. To keep your Drupal sites secure, Drupal must adopt Symfony 4 or Symfony 5 before Symfony 3 goes end-of-life. A major Symfony upgrade will require us to release Drupal 9 (we don't want to fork Symfony 3 and have to backport Symfony 4 or Symfony 5 bug fixes). This means we have to end-of-life Drupal 8 no later than November 2021.

Drupal 9 will be released in 2020, and it will be an easy upgrade

If Drupal 8 will be end-of-life in November 2021, we have to release Drupal 9 before that. Working backwards from November 2021, we'd like to give site owners one year to upgrade from Drupal 8 to Drupal 9.

If November 2020 is the latest we could release Drupal 9, what is the earliest we could release Drupal 9?

We certainly can't release Drupal 9 next week or even next month. Preparing for Drupal 9 takes a lot of work: we need to adopt Symfony 4 and/or Symfony 5, we need to remove deprecated code, we need to allow modules and themes to declare compatibility with more than one major version, and possibly more. The Drupal 8 Core Committers believe we need more than one year to prepare for Drupal 9.

Therefore, our current plan is to release Drupal 9 in 2020. Because we still need to figure out important details, we can't be more specific at this time.

If we release Drupal 9 in 2020, it means we'll certainly have Drupal 8.7 and 8.8 releases.

Wait, I will only have one year to migrate from Drupal 8 to 9?

Yes, but fortunately moving from Drupal 8 to 9 will be far easier than previous major version upgrades. The first release of Drupal 9 will be very similar to the last minor release of Drupal 8, as the primary goal of the Drupal 9.0.0 release will be to remove deprecated code and update third-party dependencies. By keeping your Drupal 8 sites up to date, you should be well prepared for Drupal 9.

And what about contributed modules? The compatibility of contributed modules is historically one of the biggest blockers to upgrading, so we will also make it possible for contributed modules to be compatible with Drupal 8 and Drupal 9 at the same time. As long as contributed modules do not use deprecated APIs, they should work with Drupal 9 while still being compatible with Drupal 8.

Drupal 7 will be supported until November 2021

Historically, our policy has been to only support two major versions of Drupal; Drupal 7 would ordinarily reach end of life when Drupal 9 is released. Because a large number of sites might still be using Drupal 7 by 2020, we have decided to extend support of Drupal 7 until November 2021. Drupal 7 will receive community support for three whole more years.

We'll launch a Drupal 7 commercial Long Term Support program

In the past, commercial vendors have extended Drupal's security support. In 2015, a Drupal 6 commercial Long Term Support program was launched and continues to run to this day. We plan a similar paid program for Drupal 7 to extend support beyond November 2021. The Drupal Security Team will announce the Drupal 7 commercial LTS program information by mid-2019. Just like with the Drupal 6 LTS program, there will be an application for vendors.

We'll update Drupal 7 to support newer versions of PHP

The PHP team will stop supporting PHP 5.x on December 31st, 2018 (in 3 months), PHP 7.0 on December 3rd, 2018 (in 2 months), PHP 7.1 on December 1st, 2019 (in 1 year and 3 months) and PHP 7.2 on November 30th, 2020 (in 2 years and 2 months).

Drupal will drop official support for unsupported PHP versions along the way and Drupal 7 site owners may have to upgrade their PHP version. The details will be provided later.

We plan on updating Drupal 7 to support newer versions of PHP in line with their support schedule. Drupal 7 doesn't fully support PHP 7.2 yet as there have been some backwards-incompatible changes since PHP 7.1. We will release a version of Drupal 7 that supports PHP 7.2. Contributed modules and custom modules will have to be updated too, if not already.

Conclusion

If you are still using Drupal 7 and are wondering what to do, you currently have two options:

  1. Stay on Drupal 7 while also updating your PHP version. If you stay on Drupal 7 until after 2021, you can either engage a vendor for a long term support contract, or migrate to Drupal 9.
  2. Migrate to Drupal 8 by 2020, so that it's easier to update to Drupal 9 when it is released.

The announcements in this blog post make option (1) a lot more viable and hopefully help you better evaluate option (2).

If you are on Drupal 8, you just have to keep your Drupal 8 site up-to-date and you'll be ready for Drupal 9.

We plan to have more specifics by April 2019 (DrupalCon Seattle).

Thanks to the Drupal 7 Core Committers, the Drupal 8 Core Committers and the Drupal Security Team for their contributions to this blog post.

September 11, 2018

Wow, 10 years already! In a few weeks, we will celebrate the 10th edition of BruCON, or the "0x0A edition". If you know me or follow me, you probably know that I have been part of this wonderful experience since the first edition. I'm also sponsoring the conference through my company with a modest contribution: I'm hosting the websites and other technical stuff. As we get close to the event, we are still receiving many requests from people who are looking for tickets, but we have been sold out for a while. That's good news (for us) but it may be frustrating (if you're still looking for one).

As a sponsor, I still have a free ticket to give away… but it must be earned! I wrote a quick challenge for you. Not too complicated nor too easy because everybody must have a chance to solve it. If you have some skills in document analysis, it should not be too complicated.

Here are the rules of the game:

  • The ticket will be assigned to the first one who submits the flag by email ONLY (I’m easy to catch!)
  • There is no malicious activity required to solve the challenge, nothing relevant on this blog or any BruCON website.
  • Do not just submit the flag; explain how you found it (provide a quick wrap-up, a few lines are ok)
  • You're free to play and solve the challenge, but only claim the ticket if you are certain you will attend the conference! (Be fair-play)
  • All costs besides the free ticket are on you! (hotel, travel, …)

Ready? So, enjoy this file

 

[The post Wanna Come to BruCON? Solve This Challenge! has been first published on /dev/random]

The post Indie Hackers interview with Oh Dear! appeared first on ma.ttias.be.

I recently did an interview with Indie Hackers about our newly launched Oh Dear! monitoring service.

Some fun questions that I hope give more insight into how the idea came to be, where we are now and what our plans are for the future.

Hi! My name is Mattias Geniar and together with my partner Freek Van der Herten we founded Oh Dear!, a new tool focused on monitoring websites.

We both got fed up with existing tools that didn't fit our needs precisely. Most came close, but there was always something — whether settings or design choices — that we just didn't like. Being engineers, the obvious next step was to just build it ourselves!

Oh Dear! launched in private beta earlier this year and has been running in production for a few months now.

Source: The Market Lacked the Tool We Wanted. So We Built it Ourselves. -- Indie Hackers

Some more details are available in the forum discussion about our launch.

The post Indie Hackers interview with Oh Dear! appeared first on ma.ttias.be.

The post Elasticsearch fails to start: access denied “/etc/elasticsearch/${instance}/scripts” appeared first on ma.ttias.be.

Ran into a not-so-obvious issue with Elasticsearch today. After an update from 6.2.x to 6.4.x, an Elasticsearch instance failed to start with the following error in its logs.

$ tail -f instance.log
...
Caused by: java.security.AccessControlException:
  access denied ("java.io.FilePermission" "/etc/elasticsearch/production/scripts" "read")
   at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)
   at java.security.AccessController.checkPermission(AccessController.java:884)
   at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
...

The logs indicated it couldn't read from /etc/elasticsearch/production/scripts, but all permissions were OK. Apparently, Elasticsearch doesn't follow symlinks for the scripts directory, even if it has all the required read permissions.

In this case, that scripts directory was indeed a symlink to a shared resource.

$ ls -alh /etc/elasticsearch/production/scripts 
/etc/elasticsearch/production/scripts -> /usr/share/elasticsearch/scripts

The fix was simple, as the shared directory didn't contain any scripts to begin with.

$ rm -f /etc/elasticsearch/production/scripts
$ mkdir /etc/elasticsearch/production/scripts
$ chown elasticsearch:elasticsearch /etc/elasticsearch/production/scripts

Turn that symlink into a normal directory & restart your instance.

If you run multiple instances, you'll have to repeat these steps for every instance that fails to start.

The post Elasticsearch fails to start: access denied “/etc/elasticsearch/${instance}/scripts” appeared first on ma.ttias.be.

For the past two years, I've examined Drupal.org's commit data to understand who develops Drupal, how much of that work is sponsored, and where that sponsorship comes from.

I have now reported on this data for three years in a row, which means I can start to better compare year-over-year data. Understanding how an open-source project works is important because it establishes a benchmark for project health and scalability.

I would also recommend taking a look at the 2016 report or the 2017 report. Each report looks at data collected in the 12-month period between July 1st and June 30th.

This year's report affirms that Drupal has a large and diverse community of contributors. In the 12-month period between July 1, 2017 and June 30, 2018, 7,287 different individuals and 1,002 different organizations contributed code to Drupal.org. This includes contributions to Drupal core and all contributed projects on Drupal.org.

In comparison to last year's report, both the number of contributors and contributions has increased. Our community of contributors (including both individuals and organizations) is also becoming more diverse. This is an important area of growth, but there is still work to do.

For this report, we looked at all of the issues marked "closed" or "fixed" in our ticketing system in the 12-month period from July 1, 2017 to June 30, 2018. This includes Drupal core and all of the contributed projects on Drupal.org, across all major versions of Drupal. This year, 24,447 issues were marked "closed" or "fixed", a 5% increase from the 23,238 issues in the 2016-2017 period. This averages out to 67 feature improvements or bug fixes a day.

In total, we captured 49,793 issue credits across all 24,447 issues. This marks a 17% increase from the 42,449 issue credits recorded in the previous year. Of the 49,793 issue credits reported this year, 18% (8,822 credits) were for Drupal core, while 82% (40,971 credits) went to contributed projects.

"Closed" or "fixed" issues are often the result of multiple people working on the issue. We try to capture who contributes through Drupal.org's unique credit system. We used the data from the credit system for this analysis. There are a few limitations with this approach, which we'll address at the end of this report.

What is the Drupal.org credit system?

In the spring of 2015, after proposing ideas for giving credit and discussing various approaches at length, Drupal.org added the ability for people to attribute their work to an organization or customer in the Drupal.org issue queues. Maintainers of Drupal modules, themes, and distributions can award issue credits to people who help resolve issues with code, translations, documentation, design and more.

Example issue credit on drupal.org: a screenshot of an issue comment on Drupal.org. You can see that jamadar worked on this patch as a volunteer, but also as part of his day job working for TATA Consultancy Services on behalf of their customer, Pfizer.

Credits are a powerful motivator for both individuals and organizations. Accumulating credits provides individuals with a way to showcase their expertise. Organizations can utilize credits to help recruit developers, to increase their visibility within the Drupal.org marketplace, or to showcase their Drupal expertise.

Who is working on Drupal?

In the 12-month period between July 1, 2017 and June 30, 2018, Drupal.org received code contributions from 7,287 different individuals and 1,002 different organizations.

Contributions by individuals vs organizations

While the number of individual contributors rose, a relatively small number of individuals still do the majority of the work. Approximately 48% of individual contributors received just one credit. Meanwhile, the top 30 contributors (the top 0.4%) account for more than 24% of the total credits. These individuals put an incredible amount of time and effort in developing Drupal and its contributed projects:

Rank | Username         | Issues
1    | RenatoG          | 851
2    | RajabNatshah     | 745
3    | jrockowitz       | 700
4    | adriancid        | 529
5    | bojanz           | 515
6    | Berdir           | 432
7    | alexpott         | 414
8    | mglaman          | 414
9    | Wim Leers        | 395
10   | larowlan         | 360
11   | DamienMcKenna    | 353
12   | dawehner         | 340
13   | catch            | 339
14   | heddn            | 327
15   | xjm              | 303
16   | pifagor          | 284
17   | quietone         | 261
18   | borisson_        | 255
19   | adci_contributor | 255
20   | volkswagenchick  | 254
21   | drunken monkey   | 231
22   | amateescu        | 225
23   | joachim          | 199
24   | mkalkbrenner     | 195
25   | chr.fritsch      | 185
26   | gaurav.kapoor    | 178
27   | phenaproxima     | 177
28   | mikeytown2       | 173
29   | joelpittet       | 170
30   | timmillwood      | 169

Out of the top 30 contributors featured, 15 were also recognized as top contributors in our 2017 report. These Drupalists' dedication and continued contribution to the project has been crucial to Drupal's development. It's also exciting to see 15 new names on the list. This mobility is a testament to the community's evolution and growth. It's also important to recognize that a majority of the 15 repeat top contributors are at least partially sponsored by an organization. We value the organizations that sponsor these remarkable individuals, because without their support, it could be more challenging to be in the top 30 year over year.

How diverse is Drupal?

Next, we looked at both the gender and geographic diversity of Drupal.org code contributors. While these are only two examples of diversity, this is the only available data that contributors can currently choose to share on their Drupal.org profiles. The reported data shows that only 7% of the recorded contributions were made by contributors that do not identify as male, which continues to indicate a steep gender gap. This is a one percent increase compared to last year. The gender imbalance in Drupal is profound and underscores the need to continue fostering diversity and inclusion in our community.

To address this gender gap, in addition to advancing representation across various demographics, the Drupal community is supporting two important initiatives. The first is to adopt more inclusive user demographic forms on Drupal.org. Adopting Open Demographics on Drupal.org will also allow us to improve reporting on diversity and inclusion, which in turn will help us better support initiatives that advance diversity and inclusion. The second initiative is supporting the Drupal Diversity and Inclusion Contribution Team, which works to better include underrepresented groups to increase code and community contributions. The DDI Contribution Team recruits team members from diverse backgrounds and underrepresented groups, and provides support and mentorship to help them contribute to Drupal.

It's important to reiterate that supporting diversity and inclusion within Drupal is essential to the health and success of the project. The people who work on Drupal should reflect the diversity of people who use and work with the software. While there is still a lot of work to do, I'm excited about the impact these various initiatives will have on future reports.

Contributions by gender

When measuring geographic diversity, we saw individual contributors from 6 different continents and 123 different countries:

Contributions by continent
Contributions by country: the top 20 countries from which contributions originate. The data is compiled by aggregating the countries of all individual contributors behind each commit. Note that the geographical location of contributors doesn't always correspond with the origin of their sponsorship. Wim Leers, for example, works from Belgium, but his funding comes from Acquia, which has the majority of its customers in North America.

That is seven more countries than in the 2017 report. The new countries include Rwanda, Namibia, Senegal, Sierra Leone, Swaziland and Zambia. Seeing contributions from more African countries is certainly a highlight.

How much of the work is sponsored?

Issue credits can be marked as "volunteer" and "sponsored" simultaneously (shown in jamadar's screenshot near the top of this post). This could be the case when a contributor does the minimum required work to satisfy the customer's need, in addition to using their spare time to add extra functionality.

While Drupal started out as a 100% volunteer-driven project, today the majority of the code on Drupal.org is sponsored by organizations. Only 12% of the commit credits that we examined in 2017-2018 were "purely volunteer" credits (6,007 credits), in stark contrast to the 49% that were "purely sponsored". In other words, there were four times as many "purely sponsored" credits as "purely volunteer" credits.

Contributions by volunteer vs sponsored

A few comparisons between the 2017-2018 and the 2016-2017 data:

  • The credit system is being used more frequently. In total, we captured 49,793 issue credits across all 24,447 issues in the 2017-2018 period. This marks a 17% increase from the 42,449 issue credits recorded in the previous year. Between July 1, 2016 and June 30, 2017, 28% of all credits had no attribution while in the period between July 1, 2017 to June 30, 2018, only 25% of credits lacked attribution. More people have become aware of the credit system, the attribution options, and their benefits.
  • Sponsored credits are growing faster than volunteer credits. Both "purely volunteer" and "purely sponsored" credits grew, but "purely sponsored" credits grew faster. There are two reasons why this could be the case: (1) more contributions are sponsored and (2) organizations are more likely to use the credit system compared to volunteers.

No data is perfect, but it feels safe to conclude that most of the work on Drupal is sponsored. At the same time, the data shows that volunteer contribution remains very important to Drupal. Maybe most importantly, while the number of volunteers and sponsors has grown year over year in absolute terms, sponsored contributions appear to be growing faster than volunteer contributions. This is consistent with how open source projects grow and scale.

Who is sponsoring the work?

Now that we've established a majority of contributions to Drupal are sponsored, we want to study which organizations contribute to Drupal. While 1,002 different organizations contributed to Drupal, approximately 50% of them received four credits or less. The top 30 organizations (roughly the top 3%) account for approximately 48% of the total credits, which implies that the top 30 companies play a crucial role in the health of the Drupal project. The graph below shows the top 30 organizations and the number of credits they received between July 1, 2017 and June 30, 2018:

Top 30 organizations contributing to Drupal: the top 30 contributing organizations based on the number of Drupal.org commit credits.

While not immediately obvious from the graph above, a variety of different types of companies are active in Drupal's ecosystem:

  • Traditional Drupal businesses: Small-to-medium-sized professional services companies that primarily make money using Drupal. They typically employ fewer than 100 employees, and because they specialize in Drupal, many of these professional services companies contribute frequently and are a huge part of our community. Examples are Chapter Three and Lullabot (both shown on graph).
  • Digital marketing agencies: Larger full-service agencies that have marketing-led practices using a variety of tools, typically including Drupal, Adobe Experience Manager, Sitecore, WordPress, etc. They tend to be larger, with the larger agencies employing thousands of people. Examples are Wunderman and Mirum.
  • System integrators: Larger companies that specialize in bringing together different technologies into one solution. Example system agencies are Accenture, TATA Consultancy Services, Capgemini and CI&T (shown on graph).
  • Technology and infrastructure companies: Examples are Acquia (shown on graph), Lingotek, BlackMesh, Rackspace, Pantheon and Platform.sh.
  • End-users: Examples are Pfizer (shown on graph) or NBCUniversal.

A few observations:

  • Almost all of the sponsors in the top 30 are traditional Drupal businesses. Companies like MD Systems (12 employees), Valuebound (58 employees), Chapter Three (33 employees), Commerce Guys (13 employees) and PreviousNext (22 employees) are, despite their size, critical to Drupal's success.
  • Compared to these traditional Drupal businesses, Acquia has nearly 800 employees and at least ten full-time Drupal contributors. Acquia works to resolve some of the most complex issues on Drupal.org, many of which are not recognized by the credit system (e.g. release management, communication, sprint organizing, and project coordination). Acquia added several full-time contributors compared to last year, however, I believe that Acquia should contribute even more due to its comparative size.
  • No digital marketing agencies show up in the top 30, though some of them are starting to contribute. It's exciting that an increasing number of digital marketing agencies are delivering beautiful experiences using Drupal. As a community, we need to work to ensure that each of these firms is contributing back to the project with the same commitment that we see from firms like Commerce Guys, CI&T or Acro Media. Compared to last year, we have not made meaningful progress on growing contributions from digital marketing agencies. It would be interesting to see what would happen if more large organizations mandated contributions from their partners. Pfizer, for example, only works with agencies and vendors that contribute back to Drupal, and requires that its agency partners contribute to open source. If more organizations took this stance, it could have a big impact on the number of digital agencies that contribute to Drupal.
  • The only system integrator in the top 30 is CI&T, which ranked 3rd with 959 credits. As far as system integrators are concerned, CI&T is a smaller player with approximately 2,500 employees. However, we do see various system integrators outside of the top 30, including Globant, Capgemini, Sapient and TATA Consultancy Services. Each of these system integrators reported 30 to 85 credits in the past year. The top contributor is TATA with 85 credits.
  • Infrastructure and software companies also play an important role in our community, yet only Acquia appears in the top 30. While Acquia has a professional services division, more than 75% of the contributions come from the product organization. Other infrastructure companies include Pantheon and Platform.sh, which are both venture-backed, platform-as-a-service companies that were born from the Drupal community. Pantheon has 6 credits and Platform.sh has 47 credits. Amazee Labs, a company that is building an infrastructure business, reported 40 credits. Compared to last year, Acquia and Rackspace have slightly more credits, while Pantheon, Platform.sh and Amazee contributed less. Lingotek, a vendor that offers cloud-based translation management software, has 84 credits.
  • We also saw three end-users in the top 30 as corporate sponsors: Pfizer (491 credits, up from 251 credits the year before), Thunder (432 credits), and the German company, bio.logis (319 credits, up from 212 credits the year before). Other notable customers outside of the top 30 include Workday, Wolters Kluwer, Burda Media, YMCA and OpenY, CARD.com and NBCUniversal. We also saw contributions from many universities, including University of Colorado Boulder, University of Waterloo, Princeton University, University of Adelaide, University of Sydney, University of Edinburgh, McGill University and more.
Contributions by technology companies

We can conclude that technology and infrastructure companies, digital marketing agencies, system integrators and end-users are not making significant code contributions to Drupal.org today. How can we explain this disparity in comparison to the traditional Drupal businesses that contribute the most? We believe the biggest reasons are:

  1. Drupal's strategic importance. A variety of the traditional Drupal agencies almost entirely depend on Drupal to support their businesses. Given both their expertise and dependence on Drupal, they are most likely to look after Drupal's development and well-being. Contrast this with most of the digital marketing agencies and system integrators who work with a diversified portfolio of content management platforms. Their well-being is less dependent on Drupal's success.
  2. The level of experience with Drupal and open source. Drupal aside, many organizations have little or no experience with open source, so it is important that we motivate and teach them to contribute.
  3. Legal reservations. We recognize that some organizations are not legally permitted to contribute, let alone attribute their customers. We hope that will change as open source continues to get adopted.
  4. Tools barriers. Drupal contribution still involves a patch-based workflow on Drupal.org's unique issue queue system. This presents a fairly steep learning curve to most developers, who primarily work with more modern and common tools such as GitHub. We hope to lower some of these barriers through our collaboration with GitLab.
  5. Process barriers. Getting code changes accepted into a Drupal project — especially Drupal core — is hard work. Peer reviews, gates such as automated testing and documentation, required sign-offs from maintainers and committers, knowledge of best practices and other community norms are a few of the challenges a contributor must face to get code accepted into Drupal. Collaborating with thousands of people on a project as large and widely-used as Drupal requires such processes, but new contributors often don't know that these processes exist, or don't understand why they exist.

We should do more to entice contribution

Drupal is used by more than one million websites. Everyone who uses Drupal benefits from work that thousands of other individuals and organizations have contributed. Drupal is great because it is continuously improved by a diverse community of contributors who are enthusiastic to give back.

However, the vast majority of the individuals and organizations behind these Drupal websites never participate in the development of the project. They might use the software as it is or don't feel the need to help drive its development. We have to provide more incentive for these individuals and organizations to contribute back to the project.

Consequently, this data shows that the Drupal community can do more to entice companies to contribute code to Drupal.org. The Drupal community has a long tradition of encouraging organizations to share code rather than keep it behind firewalls. While the spirit of the Drupal project cannot be reduced to any single ideology — not every organization can or will share their code — we would like to see organizations continue to prioritize collaboration over individual ownership.

We understand and respect that some can give more than others and that some might not be able to give back at all. Our goal is not to foster an environment that demands what and how others should give back. Our aim is not to criticize those who do not contribute, but rather to help foster an environment worthy of contribution. This is clearly laid out in Drupal's Values and Principles.

Given the vast number of Drupal users, we believe continuing to encourage organizations and end-users to contribute is still a big opportunity. From my own conversations, it's clear that organizations still need education, training and help. They ask questions like: "Where can we contribute?", "How can we convince our legal department?", and more.

There are substantial benefits and business drivers for organizations that contribute: (1) it improves their ability to sell and win deals and (2) it improves their ability to hire. Companies that contribute to Drupal tend to promote their contributions in RFPs and sales pitches. Contributing to Drupal also results in being recognized as a great place to work for Drupal experts.

What projects have sponsors?

To understand where the organizations sponsoring Drupal put their money, I've listed the top 20 most sponsored projects:

Rank Project name Issues
1 Drupal core 5919
2 Webform 905
3 Drupal Commerce 607
4 Varbase: The Ultimate Drupal 8 CMS Starter Kit (Bootstrap Ready) 551
5 Commerce Point of Sale (POS) 324
6 Views 318
7 Commerce Migrate 307
8 JSON API 304
9 Paragraphs 272
10 Open Social 222
11 Search API Solr Search 212
12 Drupal Connector for Janrain Identity Cloud 197
13 Drupal.org security advisory coverage applications 189
14 Facets 171
15 Open Y 162
16 Metatag 162
17 Web Page Archive 154
18 Drupal core - JavaScript Modernization Initiative 145
19 Thunder 144
20 XML sitemap 120

Who is sponsoring the top 30 contributors?

Rank Username Issues Volunteer Sponsored Not specified Sponsors
1 RenatoG 851 0% 100% 0% CI&T (850), Johnson & Johnson (23)
2 RajabNatshah 745 14% 100% 0% Vardot (653), Webship (90)
3 jrockowitz 700 94% 97% 1% The Big Blue House (680), Memorial Sloan Kettering Cancer Center (7), Rosewood Marketing (2), Kennesaw State University (1)
4 adriancid 529 99% 19% 0% Ville de Montréal (98)
5 bojanz 515 0% 98% 2% Commerce Guys (503), Torchbox (17), Adapt (6), Acro Media (4), Bluespark (1)
6 Berdir 432 0% 92% 8% MD Systems (396), Translations.com (10), Acquia (2)
7 alexpott 414 13% 84% 10% Chapter Three (123), Thunder (120), Acro Media (103)
8 mglaman 414 5% 96% 1% Commerce Guys (393), Impactiv (17), Circle Web Foundry (16), Rosewood Marketing (14), LivePerson (13), Bluespark (4), Acro Media (4), Gaggle.net (3), Thinkbean (2), Matsmart (2)
9 Wim Leers 395 8% 94% 0% Acquia (371)
10 larowlan 360 13% 97% 1% PreviousNext (350), University of Technology, Sydney (24), Charles Darwin University (10), Australian Competition and Consumer Commission (ACCC) (1), Department of Justice & Regulation, Victoria (1)
11 DamienMcKenna 353 1% 95% 5% Mediacurrent (334)
12 dawehner 340 48% 86% 4% Chapter Three (279), Torchbox (10), Drupal Association (5), Tag1 Consulting (3), Acquia (2), TES Global (1)
13 catch 339 1% 97% 3% Third and Grove (320), Tag1 Consulting (8)
14 heddn 327 2% 99% 1% MTech (325)
15 xjm 303 0% 97% 3% Acquia (293)
16 pifagor 284 32% 99% 1% GOLEMS GABB (423), Drupal Ukraine Community (73)
17 quietone 261 48% 55% 5% Acro Media (143)
18 borisson_ 255 93% 55% 3% Dazzle (136), Intracto digital agency (1), Acquia (1), DUG BE vzw (Drupal User Group Belgium) (1)
19 adci_contributor 255 0% 100% 0% ADCI Solutions (255)
20 volkswagenchick 254 1% 100% 0% Hook 42 (253)
21 drunken monkey 231 91% 22% 0% DBC (24), Vizala (20), Sunlime Web Innovations GmbH (4), Wunder Group (1), epiqo (1), Zebralog (1)
22 amateescu 225 3% 95% 3% Pfizer (211), Drupal Association (1), Chapter Three (1)
23 joachim 199 56% 44% 19% Torchbox (88)
24 mkalkbrenner 195 0% 99% 1% bio.logis (193), OSCE: Organization for Security and Co-operation in Europe (119)
25 chr.fritsch 185 0% 99% 1% Thunder (183)
26 gaurav.kapoor 178 0% 81% 19% OpenSense Labs (144), DrupalFit (55)
27 phenaproxima 177 0% 99% 1% Acquia (176)
28 mikeytown2 173 0% 0% 100%
29 joelpittet 170 28% 74% 16% The University of British Columbia (125)
30 timmillwood 169 1% 100% 0% Pfizer (169), Appnovation (163), Millwood Online (6)

We observe that the top 30 contributors are sponsored by 58 organizations. This kind of diversity is aligned with our desire to make sure that Drupal is not controlled by a single organization. These top contributors and organizations are from many different parts of the world, and work with customers large and small. Nonetheless, we will continue to benefit from an increased distribution of contribution.

Limitations of the credit system and the data

While the benefits are evident, it is important to note a few of the limitations in Drupal.org's current credit system:

  • Contributing to issues on Drupal.org is not the only way to contribute. Other activities, such as sponsoring events, promoting Drupal, and providing help and mentorship are also important to the long-term health of the Drupal project. Many of these activities are not currently captured by the credit system. For this post, we chose to only look at code contributions.
  • We acknowledge that parts of Drupal are developed on GitHub and therefore aren't fully credited on Drupal.org. The actual number of contributions and contributors could be significantly higher than what we report. The Drupal Association is working to integrate GitLab with Drupal.org. GitLab will provide support for "merge requests", which means contributing to Drupal will feel more familiar to the broader audience of open source contributors who learned their skills in the post-patch era. Some of GitLab's tools, such as inline editing and web-based code review, will also lower the barrier to contribution, and should help us grow both the number of contributions and contributors on Drupal.org.
  • Even when development is done on Drupal.org, the credit system is not used consistently. As using the credit system is optional, a lot of code committed on Drupal.org has no or incomplete contribution credits.
  • Not all code credits are the same. We currently don't have a way to account for the complexity and quality of contributions; one person might have worked several weeks for just one credit, while another person might receive a credit for ten minutes of work. In the future, we should consider issuing credit data in conjunction with issue priority, patch size, etc. This could help incentivize people to work on larger and more important problems and save coding standards improvements for new contributor sprints. Implementing a scoring system that ranks the complexity of an issue would also allow us to develop more accurate reports of contributed work.
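
Purely as an illustration of what such a scoring system might look like (the priority names, weights and formula below are invented for this sketch and are not anything Drupal.org implements today), a weighted credit could combine issue priority with a dampened patch size:

import math

# Hypothetical weights per issue priority; purely illustrative values.
PRIORITY_WEIGHT = {"critical": 4.0, "major": 2.0, "normal": 1.0, "minor": 0.5}

def weighted_credit(priority: str, patch_lines: int) -> float:
    """Scale a single commit credit by issue priority and (dampened) patch size."""
    return PRIORITY_WEIGHT.get(priority, 1.0) * (1 + math.log1p(patch_lines))

print(weighted_credit("critical", 250))  # larger, high-priority work scores more
print(weighted_credit("minor", 3))       # small cleanup patches score less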

Like Drupal itself, the Drupal.org credit system needs to continue to evolve. Ultimately, the credit system will only be useful when the community uses it, understands its shortcomings, and suggests constructive improvements.

Conclusion

Our data confirms that Drupal is a vibrant community full of contributors who are constantly evolving and improving the software. While we have amazing geographic diversity, we still need greater gender diversity, in addition to better representation across various demographic groups. Our analysis of the Drupal.org credit data concludes that most contributions to Drupal are sponsored. At the same time, the data shows that volunteer contribution remains very important to Drupal.

As a community, we need to understand that a healthy open source ecosystem includes more than the traditional Drupal businesses that contribute the most. We still don't see a lot of contribution from the larger digital marketing agencies, system integrators, technology companies, or end-users of Drupal — we believe that might come as these organizations build out their Drupal practices and Drupal becomes more strategic for them.

To grow and sustain Drupal, we should support those that contribute to Drupal and find ways to get those that are not contributing involved in our community. We invite you to help us continue to strengthen our ecosystem.

September 10, 2018

The post MySQL: Calculate the free space in IBD files appeared first on ma.ttias.be.

If you use MySQL with InnoDB, chances are you've seen growing IBD data files. Those are the files that actually hold your data within MySQL. By default, they only grow -- they don't shrink. So how do you know if you still have free space left in your IBD files?

There's a query you can use to determine that:

SELECT round((data_length+index_length)/1024/1024/1024,2)
FROM information_schema.tables
WHERE
  table_schema='zabbix'
  AND table_name='history_text';

The above will check a database called zabbix for a table called history_text. The result will be the size that MySQL has "in use" in that file. If that returns 5.000 as a value, you have 5GB of data in there.

In my example, it showed the data size to be 16GB. But the actual IBD file was over 50GB.

$ ls -alh history_text.ibd
-rw-r----- 1 mysql mysql 52G Sep 10 15:26 history_text.ibd

In this example I had 36GB of wasted space on the disk (52GB according to the OS, 16GB in use by MySQL). If you run MySQL with innodb_file_per_table=ON, you can individually shrink the IBD files. One way is to run an OPTIMIZE query on that table.
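
As a rough cross-check that doesn't require comparing file sizes on disk, information_schema also exposes a data_free column (free bytes inside the tablespace). A minimal sketch, again assuming the zabbix schema; treat the result as an approximation rather than an exact figure:

SELECT table_name,
       round(data_free/1024/1024/1024,2) AS free_gb
FROM information_schema.tables
WHERE table_schema='zabbix'
ORDER BY data_free DESC;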

Note: this can be a blocking operation, depending on your MySQL version. Read and write I/O to the table can be blocked for the duration of the OPTIMIZE query.

MariaDB [zabbix]> OPTIMIZE TABLE history_text;
Stage: 1 of 1 'altering table'   93.7% of stage done
Stage: 1 of 1 'altering table'    100% of stage done

+---------------------+----------+----------+-------------------------------------------------------------------+
| Table               | Op       | Msg_type | Msg_text                                                          |
+---------------------+----------+----------+-------------------------------------------------------------------+
| zabbix.history_text | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
| zabbix.history_text | optimize | status   | OK                                                                |
+---------------------+----------+----------+-------------------------------------------------------------------+
2 rows in set (55 min 37.37 sec)

The result is quite a big file size savings:

$ ls -alh history_text.ibd
-rw-rw---- 1 mysql mysql 11G Sep 10 16:27 history_text.ibd

The file that was previously 52GB in size is now just 11GB.


September 09, 2018

Thanks to updates from Vignesh Jayaraman, Anton Hillebrand and Rolf Eike Beer, a new release of cvechecker is now available.

This new release (v3.9) is a bugfix release.

A small exercise in accounting transparency for the printing of the adventures of Aristide.

Thanks to your support, our Ulule project to back the adventures of Aristide is a resounding success, with more than 200 books ordered while we are only halfway through the campaign!

I am a staunch advocate of transparency, whether in politics or in an endeavour that is, admittedly, commercial. So I wanted to explain why this milestone of 200 copies is a financial relief: it is from 200 copies onward that we start to cover our costs!

To print 200 copies, the minimum print run, the printer charges €12 per copy. Add the shipping costs (between €1 and €2 for packaging, plus €5-6 for shipping within Belgium and nearly €10 for shipping to France), and each copy costs us at least €18. That is why I decided to deliver part of the books by bike: it is a nice challenge, but it also saves a serious amount of money to finance shipping to readers in France!

The packaging alone would cost €2 apiece if we ordered 150 boxes, but drops to €1 apiece if we order 450. That is 450 boxes, or one cubic metre, to store at home while waiting for the books!

On top of that, add the 8% Ulule commission and various fixed costs (accountant, one-off administrative fees, etc.). It turns out that with 100 copies, we would have had to pay out of our own pockets. But Vinch and I wanted so badly to see Aristide exist in the hands of readers!

Above 200 copies, we move into the printer's next tier (500 copies). And there, miracle: each copy costs us only €7 (the other costs staying the same). All in all, this means that somewhere between 200 and 300 copies the campaign breaks even and pays for a surplus of books. We will not earn money directly, but we may be able to pay ourselves by selling the surplus copies after the campaign (which, after a quick calculation, will never cover a hundredth of the time spent on this project, but it is better than paying out of pocket).

From 1,000 copies onward, the book would cost only €4 to produce. That is where it would get really interesting. But you still have to sell them…

Maybe that is why I so enjoy publishing my texts on my blog while encouraging you to support me at a price of your choosing on Tipeee, Patreon or Liberapay. Selling is a trade I am not really cut out for.

But Aristide wanted so much to know paper, to be stroked by little hands covered in dirt and food, to be torn in a burst of enthusiasm, to exist in a shared moment from which screens are banished.

Thank you for making that possible! The magic of the Internet is also that it allows two authors to exist without a single contact in the publishing world, and that it allows everyone to become an author and a creator.

PS: since we are now certain to print 500 copies, do not hesitate to recommend Aristide to your friends. It would break my heart to see a few hundred copies rot in boxes or, worse, be pulped! Only 20 days of campaign left!

Did you enjoy this read? Support the author on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Even a symbolic donation makes all the difference! Then let's meet up on Facebook, Twitter or Mastodon.

This text is published under the CC-By BE licence.

I’m happy to release two Clean Architecture + Bounded Contexts diagrams into the public domain (CC0 1.0).

I created these diagrams for Wikimedia Deutschland with the help of Jan Dittrich, Charlie Kritschmar and Hanna Petruschat. They represent the architecture of our fundraising codebase. I explain the rules of this architecture in my post Clean Architecture + Bounded Contexts. The new diagrams are based on the ones I published two years ago in my Clean Architecture Diagram post.

Diagram 1: Clean Architecture + DDD, generic version. Click to enlarge. Link: SVG version

Diagram 2: Clean Architecture + DDD, fundraising version. Click to enlarge. Link: SVG version

 

September 07, 2018

I published the following diary on isc.sans.edu: “Crypto Mining in a Windows Headless Browser“:

Crypto miners in the browser are not new. Delivery through malicious or compromised pieces of JavaScript code is common these days (see my previous diary about this topic). This time, it’s another way to deliver the crypto miner that we found in a sample reported by one of our readers… [Read more]

[The post [SANS ISC] Crypto Mining in a Windows Headless Browser has been first published on /dev/random]

The post Chrome stops showing “www”-prefix in URI appeared first on ma.ttias.be.

Almost 10 years ago I ranted about removing the WWW prefix from your sites. Today, Chrome is starting to push that a bit more.

Here's what a www site looks like in the address bar now (example: www.nucleus.be): Chrome simply displays it as nucleus.be.

But in reality, it's actually www.nucleus.be that has been loaded, with the www-prefix.

Since it's no longer shown in the most popular browser, why not drop the www-prefix altogether?

As a webdeveloper or web agency, how many times will you have to explain what's going on here?

(Hat tip to @sebdedeyne, who showed this behaviour)


September 06, 2018

I published the following diary on isc.sans.edu: “Malicious PowerShell Compiling C# Code on the Fly“:

What I like when hunting is to discover how attackers are creative to find new ways to infect their victim’s computers. I came across a Powershell sample that looked new and interesting to me. First, let’s deobfuscate the classic way.

It started with a simple PowerShell command with a big Base64 encoded string… [Read more]

[The post [SANS ISC] Malicious PowerShell Compiling C# Code on the Fly has been first published on /dev/random]

Last night, we shipped Drupal 8.6.0! I firmly believe this is the most significant Drupal 8 release to date. It is significant because we made a lot of progress on all twelve of Drupal 8 core's strategic initiatives. As a result, Drupal 8.6 delivers a large number of improvements for content authors, evaluators, site builders and developers.

What is new for content authors?

For content authors, Drupal 8.6 adds support for "remote media types". This means you can now easily embed YouTube or Vimeo videos in your content.

The Media Library in Drupal 8.6.

Content authors want Drupal to be easy to use. We made incredible progress on a variety of features that will help to achieve that: we've delivered an experimental Media Library, added the experimental Workspaces module, which provides sophisticated content staging capabilities, and made great strides on the upcoming Layout Builder. The Layout Builder is shaping up to be a very powerful tool that solves a lot of authoring challenges, and is something many are looking forward to.

The Workspaces module in Drupal 8.6.

Each initiative related to content authoring is making disciplined and steady progress. These features not only address the most requested authoring improvements, but also provide a solid foundation on which we can continue to innovate. This means we can provide better compatibility and upgradability for contributed modules.

The top 10 requested features for content creators according to the 2016 State of Drupal survey.

What is new for evaluators?

Evaluators want an out-of-the-box experience that allows them to install and test drive Drupal in minutes. With Drupal 8.6, we have finally delivered on this need.

Prior to Drupal 8.6, downloading and installing Drupal was a complex and lengthy process that ended with an underwhelming "blank slate".

Now, you can install Drupal with the new "Umami demo profile". The Umami demo profile showcases some of Drupal's most powerful capabilities by providing a beautiful website filled with content right out of the box. A demo profile will not only help to onboard new users, but it can also be used by Drupal professionals and digital agencies to showcase Drupal to potential customers.

The new Umami demo profile together with the Layout Builder.

In addition to a new installation profile, we added a "quick-start" command that allows you to launch a Drupal site in one command using only one dependency, PHP. If you want to try Drupal, you no longer have to set up a web server, a database, containers, etc.
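
For reference, the whole flow can be compressed into a few shell commands. This is a sketch from memory rather than the canonical instructions: the download URL and the demo_umami profile name are assumptions to verify against the official documentation.

# Download and unpack the latest Drupal release (URL assumed; check drupal.org).
curl -sSL https://www.drupal.org/download-latest/tar.tar.gz | tar -xzf -
cd drupal-*

# Launch a throwaway site with PHP's built-in web server and SQLite,
# installing the Umami demo profile.
php core/scripts/drupal quick-start demo_umami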

Last but not least, the download experience and evaluator documentation on Drupal.org has been vastly improved.

With Drupal 8.6, you can download and install a fully functional Drupal demo application in less than two minutes. That is something to be very excited about.

What is new for developers?

You can now upgrade a single-language Drupal 6 or Drupal 7 site to Drupal 8 using the built-in user interface. While we saw good progress on multilingual migrations, they will remain experimental as we work on the final gaps.

I recently wrote about our progress in making Drupal an API-first platform, including an overview of REST improvements in Drupal 8.6, an update on JSON API, and the reasons why JSON API didn't make it into this release. I'm looking forward to JSON API being added in Drupal 8.7. Other decoupled efforts, including a React-based administration application and GraphQL support are still under heavy development, but making rapid progress.

We also converted almost all of our tests from SimpleTest to PHPUnit; and we've added Nightwatch.js and Prettier for JavaScript developers. While Drupal 8 has extensive back-end test coverage, using PHPUnit and Nightwatch.js provides a more modern platform that will make Drupal more familiar to PHP and JavaScript developers.

Drupal 8 continues to hit its stride

These are just some of the highlights that I'm most excited about. If you'd like to read more about Drupal 8.6.0, check out the official release announcement and important update information from the release notes. Over the next couple of months, I will write more detailed progress reports on initiatives that I didn't touch upon in this blog post.

In my Drupal 8.5.0 announcement, I talked about how Drupal is hitting its stride, consistently delivering improvements and new features:

In future releases, we plan to add a media library, support for remote media types like YouTube videos, support for content staging, a layout builder, JSON API support, GraphQL support, a React-based administration application and a better out-of-the-box experience for evaluators.

As you can see from this blog post, Drupal 8.6 delivered on a number of these plans and made meaningful progress on many others.

In future releases we plan to:

  • Stabilize more of the features targeting content authors
  • Add JSON API, allowing developers to more easily and rapidly create decoupled applications
  • Provide stable multilingual migrations
  • Make big improvements for developers with Composer and configuration management changes
  • Continually improve the evaluator experience
  • Iterate towards an entirely new decoupled administrative experience
  • ... and more

Releases like Drupal 8.6.0 only happen with the help of hundreds of contributors and organizations. Thank you to everyone that contributed to this release. Whether you filed issues, wrote code, tested patches, funded a contributor, tested pre-release versions, or cheered for the team from the sidelines, you made this release happen. Thank you!

September 05, 2018

When I met XyglZ (from the 3rd nebula), our conversations quickly turned to our respective ways of life. And one of the concepts that astonished him the most was that of the state. Whatever I did, the principle of a state levying taxes seemed unintelligible to him.

— It's quite simple, I tried to explain for the umpteenth time. The state is a sum of money pooled together to finance public services. Everyone pays a percentage of what they earn to the state; that is taxation. The more you earn, the higher your percentage. That way, the rich pay more.

Explained like that, nothing seemed more beautiful, simpler or fairer to me. Until the moment XyglZ (from the 3rd nebula) asked me how we calculated what each person earned.

— Well, for those who receive a fixed salary from a single person, the question is trivial. For the others, however, it is the money earned minus the money spent for professional purposes.

As I said this, I realized how absurd it was to define the money earned according to how it would be spent.

XyglZ (from the 3rd nebula) quickly pointed out that this arbitrarily divided society into two castes: those who received a fixed sum from a single other person, and those with varied incomes. The second caste could greatly reduce its taxes by arranging to spend the money it earned in a "professional" way, that notion being arbitrarily vague.

— But, I tried to argue, both castes can reduce their taxes by spending their money in certain ways the government wishes to encourage. That is the tax deduction.

XyglZ (from the 3rd nebula) remarked that this was unfair, because those who earned less paid less tax and therefore had less opportunity to benefit from tax breaks. One way or another, tax reductions always ended up benefiting the richest. Not to mention the complexity all of this created.

— Indeed, I argued. That is why it seems fair to me that the tax should be higher for those who earn more money.

XyglZ (from the 3rd nebula) was not convinced. According to him, it was enough to spend the money in a professional way for those taxes to become pointless. In fact, this system strongly encouraged earning as little money as possible, and thus spending it professionally as quickly as possible. Economically irrational behaviour suddenly became more advantageous.

— And you haven't heard the best part, I couldn't help adding. Our planet is divided into countries. Each country has completely different rules. The rich are free to settle wherever it is most profitable for them, or even to create companies in different countries. That is not the case for the poorest.

XyglZ (from the 3rd nebula) emitted a sound which, on his planet, corresponded to mocking laughter. Between two hiccups, he asked me how we could manage such complexity. Did we have supercomputers running all our lives?

— No, I said. Each country has a large administration whose only purpose is to apply the tax rules, calculate them and make sure everyone pays. A non-negligible percentage of a country's inhabitants is employed by this administration.

XyglZ (from the 3rd nebula) was visibly surprised. He asked me how these civil servants, so numerous and capable of applying such complex procedures, were paid.

— They are paid by taxes, I said.

XyglZ (from the 3rd nebula) slapped his forehead with a tentacle, climbed into his saucer and swiftly flew away.

Photo by Jonas Verstuyft on Unsplash

Did you enjoy this read? Support the author on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Even a symbolic donation makes all the difference! Then let's meet up on Facebook, Twitter or Mastodon.

This text is published under the CC-By BE licence.

September 04, 2018

This summer, I went to SANSFire, Defcon and BSidesLV. Usually, September is a lighter month without big events for me, which helps to prepare for the next wave of conferences ahead! Of course, BruCON will be held in the first week of October, but above all there is Hack.lu, which remains one of my favourites year after year. First of all, because the relaxed atmosphere is good for networking: I meet friends at hack.lu on a yearly basis, and there are meetups here and there, like the Belgian dinner that has been organized for a few years now, mainly among Belgian infosec people (but we remain open to friends 😜). The second reason (the official one) is the number of high-quality talks scheduled on a single track. The organizers told me that they received a huge number of replies to the call-for-papers. It was not easy to prepare the schedule (days are not extensible!).

The agenda was published today and I have already made my list of favourite sessions. Here is the list of topics:

  • Ransomware and their economic model (how to quantify them)
  • MONARC, a risk analysis framework and the ability to share information
  • Threat intelligence in the real world or the gap that exists between what the security community think users need, what users think they need and what they actually need
  • Klara, the Kaspersky’s open source tool which allows building a YARA scanner in the cloud
  • Neuro-Hacking or how to perform efficient social-engineering
  • “WHAT THE FAX?!” or how to pwn a company with a 30-years old technology
  • DDoS or how to optimize attacks based on a combination of big-data analysis and pre-attack analysis to defeat modern anti-DDoS solutions
  • Vulnerabilities related to IPC  (“Inter-Process Communication) in computers
  • How to improve your pentesting skills
  • Privilege escalation and post-exploitation with bash on Windows
  • Relations between techies and users in the world of Infosec
  • How to find the best threat intelligence provider?
  • Boats are also connected, “How to hack a Yacht?”
  • How to discover API’s?
  • Data exfiltration on air-gapped networks
  • Presentation about the Security Information Exchange (S.I.E.)
  • Practical experiences in operating large scale honeypot sensor networks
  • An overview of the security of industrial serial-to-Ethernet converters

Here is a link to the official agenda. Besides the presentations, there are 18 workshops scheduled across the three days. I’ll be present to write some wrap-ups! (Here is a list of the previous ones starting from 2011). See you there!

[The post Hack.lu 2018 is ahead! has been first published on /dev/random]

Conscientiously, the young apprentice was arranging the jars on the shelves that stretched and intertwined as far as the eye could see.
— Check that they are properly closed! grumbled an old man who was busying himself with a large ledger.
— Of course, Monsieur Joutche, I'm always careful.
— You're always careful, but do you know what happens when a jar is left open?
— Uh… The contents escape?
— And what do you think of that?
— I don't really see the harm, Monsieur Joutche. Actually…
— What?

The old man leapt from his chair, knocking the inkwell over onto the ledger. His nostrils were quivering and he seemed to pay no attention to the black liquid spreading out, forming a tiny lake of abysses.

— These jars contain dreams, you reckless boy. Nothing less than dreams. If they escape, they get the chance to come true.
Pale and stammering, the apprentice tried to reply.
— But it's a good thing for dreams to come true, isn't it?
— Do you know how much it costs me to store all these dreams? Do you know how many dreams are in this shop? And the anarchy it would mean if they all came true at the same time?

Fuming, he grabbed the young man by the sleeve and dragged him down an aisle. After a few detours, they stopped in front of one particular shelf.
— Look at these jars! These are your own dreams.
The apprentice was dumbfounded.
— Imagine what would happen if all your dreams could suddenly come true.
— It would be…
— No, it would be terrible! Believe me, it is important to keep dreams locked up. When someone wishes to fulfil one of their dreams, they come to the shop. I then analyse the consequences of the dream and set a price for making it possible. That way, everything is under control.
— So you just have to pay to see your dream come true?
— Not quite. Once the jar is open, the dream has the possibility of coming true. But the person still has to act to make it happen. We only make a dream possible.
— And if the person has no money?
— Then they are left with small dreams that have no impact. In fact, since it's Christmas Eve, I'm running a promotion and will give a small, inconsequential dream to every customer who comes in today.

The old man had softened. His eyes sparkled as he gazed fondly over the rows of jars.

— But it is essential to control dreams, to limit what is possible. Without that, we would fall into anarchy. That is why our trade is so important. Go on, it's Christmas Eve, you've worked well, you can go home. Tomorrow will be a big day. We cannot close on Christmas Day! People are inclined to spend much more on their dreams!

Walking slowly, the apprentice let his feet guide him to his little house. He gazed at the cold hearth of his small fireplace and, as he did every Christmas, set out his shoes with a silent prayer.

— Father Christmas, I don't want any presents, I just want things to become a little different.

He woke up in the early morning with a strange feeling. Father Christmas had heard him! He was sure of it! Jumping out of bed, he rushed into his little living room.

Alas, his shoes were empty.

Sadly, he got dressed and put them on before heading to the shop. The aisles seemed terribly dark and silent to him.
— Monsieur Joutche?

Intrigued, he began to walk along the shelves. He found his boss, face pale, mouth wide open, right in front of the dreams he had shown him, his own dreams.
— Is everything all right, Monsieur Joutche?

The old man spun around.

— What? You dare set foot in here again? Out! I never want to see you again! Disgraceful employee! Never have I seen such a thing in forty years of work! Do you hear me? Never! You're fired!
— But…
— Out!

The apprentice did not dare insist but, before turning away, he caught sight of the jars that held his dreams. All of them were open. The lids had disappeared! He had not noticed before, but now it leapt out at him: every dream in the warehouse had escaped.
— Out! yelled Monsieur Joutche.

Without waiting to be told twice, the apprentice started to run. He left the shop and came out onto the street. Snowflakes were gently swirling in the early Christmas morning.

The apprentice began to smile.
— Thank you, Father Christmas! What a beautiful gift! he murmured.

Then, without losing his smile, he began to run through the snow, letting out little cries of freedom…

Photo by Javardh on Unsplash.

Did you enjoy this read? Support the author on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Even a symbolic donation makes all the difference! Then let's meet up on Facebook, Twitter or Mastodon.

This text is published under the CC-By BE licence.

Today is the first day back from my two-week vacation. We started our vacation in Maine and ended it with a few days in Italy.

While I did some work on vacation, it was my first two-week vacation since starting Acquia 11 years ago.

This morning when the alarm went off I thought: "Why is my alarm going off in the middle of the night?". A few moments later, reality struck. It's time to go back to work.  

Going on vacation is like going to space. Lots of work before take-off, followed by serenity and peaceful floating around in space, eventually cut short by an insane re-entry process into the earth's atmosphere.

I got up early this morning to work on my "re-entry" and prioritize what I have to do this week. It's a lot!

Drupal Europe is only one week away and I have to make a lot of progress on my keynote presentation and prepare for other sessions and meetings. Between now and Drupal Europe, I also have two analyst meetings (Forrester and Gartner), three board meetings, and dozens of meetings to catch up with co-workers, projects, partners and customers.  Plus, I would love to write about the upcoming Drupal 8.6.0 release and publish my annual "Who sponsors Drupal development?" report. Lots to do this week, but all things I'm excited about.

If you're expecting to hear from me, know that it might take me several weeks to dig out.

August 31, 2018

As you might have read on the Drupal Association blog, Megan Sanicki, the Executive Director of the Drupal Association, has decided to move on.

Megan has been part of the Drupal Association for almost 8 years. She began as our very first employee responsible for DrupalCon Chicago sponsorship sales in 2011, and progressed to be our Executive Director, in charge of the Drupal Association.

It's easy to forget how far we've come in those years. When Megan started, the Drupal Association had little to no funding. During her tenure, the Drupal Association grew from one full-time employee to 17 full-time employees, and from $1.8 million in annual revenues to $4 million today. We have matured into a nonprofit that can support and promote the mission of the Drupal project.

Megan led the way. She helped grow, mature and professionalize every aspect of the Drupal Association. The last two years in her role as Executive Director she was the glue for our staff and the driving force for expanding the Drupal Association's reach and impact. She understood how important it is to diversify the community, and include more stakeholders such as content creators, marketers, and commercial organizations.

I'm very grateful for all of this and more, including the many less visible contributions that it takes to make a global organization run each day, respond to challenges, and, ultimately, to thrive. Her work impacted everyone involved with Drupal.

It's sad to see Megan go, both professionally and personally. I enjoyed working with Megan from our weekly calls, to our strategy sessions as well as our email and text messages about the latest industry developments, fun stories taking place in our community, and even the occasional frustration. Open source stewardship can be hard and I'm glad we could lean on each other. I'll miss our collaboration and her support but I also understand it is time for Megan to move on. I'm excited to see her continue her open source adventure at Google.

It will be hard to fill Megan's shoes, but we have a really great story to tell. The Drupal community and the Drupal Association are doing well. Drupal continues to be a role model in the Open Source world and impacts millions of people around the world. I'm confident we can find excellent candidates.

Megan's last day is September 21st. We have activated our succession plan: putting in place a transition team and readying for a formal search for a new Executive Director. An important part of this plan is naming Tim Lehnen Interim Executive Director, elevating him from Director, Engineering. I'm committed to finding a new Executive Director who can take the Drupal Association to the next level. With the help of Tim and the staff, our volunteers, sponsors and the Board of Directors, the Drupal Association is in good hands.

August 30, 2018

I published the following diary on isc.sans.edu: “Crypto Mining Is More Popular Than Ever!“:

We already wrote some diaries about crypto miners and they remain more popular than ever. Based on my daily hunting statistics, we can see that malicious scripts performing crypto mining operations remain at the top of the samples I detected over the last 24h… [Read more]

[The post [SANS ISC] Crypto Mining Is More Popular Than Ever! has been first published on /dev/random]

August 29, 2018

In this post I go over the principles that govern package (library) design and one specific issue I have come across several times.

Robert C. Martin has proposed six Package Principles. Personally, I find the (linked) description on Wikipedia rather confusing, especially if you do not already understand the principles. Here is my take on library design, using short and simple points:

Libraries should

  • cover a single responsibility
  • provide a clean and well segregated interface to their users
  • hide/encapsulate implementation details
  • not have cyclic dependencies
  • not depend on less stable libraries

The rest of this post focuses on the importance of the last point.

Stable Base Libraries

I am assuming usage of a package manager such as Composer (PHP) or NPM. This means that each library defines its dependencies (if any) and the version ranges of those that it is compatible with. Libraries have releases and these follow semver. It also means that libraries are ultimately consumed by applications that define their dependencies in the same way.

Consider the scenario where you have various applications that all consume a base library.

If this library contains a lot of concrete code, it will not be very stable, and now and then a major release will be needed or desired. A major release is one that can contain breaking changes and therefore does not automatically get used by consumers of the library. These consumers instead need to manually update the version range of the library they are compatible with and possibly adjust their code to work with the breaking changes.

In this scenario that is not a problem. Suppose the library is at version 1.2 and that all applications use version 1.x. If breaking changes are made in the library, these will need to be released as version 2.0. Due to the versioning system the consuming applications can upgrade at their leisure. There is of course some cost to making a major release. The consumers still do need to spend some time on the upgrade, especially if they are affected by the breaking changes. Still, this is typically easy to deal with and is a normal part of the development process.

Things change drastically when we have an unstable library that is used by other libraries, even if those libraries themselves are unstable.

In such a scenario, making breaking changes to the base library is very costly. Let’s assume we make a breaking change to the base library and that we want to use it in our application. What do we actually need to do to get there?

  • Make a release of the base library (Library A)
  • Update library B to specify it is compatible with the new A and make a new release of B
  • Update library C to specify it is compatible with the new A and make a new release of C
  • Update library D to specify it is compatible with the new A and make a new release of D
  • Update our application to specify it is compatible with the new version of A

In other words, we need to make a new release of EVERY library that uses our base library and that is also used by our application. This can be very painful, and is a big waste of time if these intermediate libraries were not actually affected by the breaking change to begin with. This means that it is costly, and generally a bad idea, to have libraries depend on another library that is not very stable.
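
To make the release chain concrete, here is a hypothetical composer.json for library B; the acme/* package names are made up purely for illustration:

{
    "name": "acme/library-b",
    "description": "Intermediate library that depends on the unstable base library A.",
    "require": {
        "acme/library-a": "^1.2"
    }
}

Once acme/library-a tags 2.0, this constraint blocks the upgrade: B (and likewise C and D) must each publish a release that allows "^2.0" (or "^1.2 || ^2.0") before the application's composer update can pull in the new base library.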

Stability can be achieved in several ways. If the package is very abstract, and we assume good design, it will be stable. Think of a package providing a logging interface, such as psr/log. It can also be approached by a combination of following the package principles and taking care to avoid breaking changes.

Conclusion

Keep the package principles in mind, and avoid depending on unstable libraries in your own libraries where possible.


I published the following diary on isc.sans.edu: “3D Printers in The Wild, What Can Go Wrong?“:

Richard wrote a quick diary yesterday about an interesting piece of information that we received from one of our readers. It’s about a huge number of OctoPrint interfaces that are publicly facing the Internet. OctoPrint is a web interface for 3D printers that allows you to control and monitor all features of the printer. There are thousands of OctoPrint instances accessible without any authentication, as reported by Shodan… [Read more]

[The post [SANS ISC] 3D Printers in The Wild, What Can Go Wrong? has been first published on /dev/random]

For our 2018 family vacation, we wanted to explore one of America's National Parks. We decided to take advantage of one of the national parks closest to us: Acadia National Park in Maine.

Day 1: Driving around Mount Desert Island

An aerial photo of our rental house near Bar Harbor, Maine.

We rented a house on the water near Bar Harbor. So on our first morning we explored the beach area around the house. In good tradition, the boys collected some sticks to practice their ninja moves and ninja sword fighting. Both also enjoyed throwing their "ninja stars" (rocks) into what they called "fudge" (dried up piles of seaweed). The ninja stars landed with a nice, soggy "plop".

When not being pretend ninjas, Axl's favorite part of exploring the beach was finding sea life in the tidal pools. For Stan it was collecting various crab shells in hopes to glue them all together to make a whole crab.

Next up, we drove around Mount Desert Island, stopped for lunch, visited Bass Harbor Lighthouse and explored the rocks. On the way, we saw multiple deer, which triggered a memory for Stan that he saw a "man deer".

Stan: I saw a man deer once!
Vanessa: What? A man deer? You mean like a half man and half deer?
Stan: Yes.
Vanessa: LOL! Like a mythical creature?
Stan: Yes.
Dries: You saw a deer that was half man and half deer in real life?
Stan: Yes, you know a deer that is a man.
Laughter erupted in the car
Everyone: Oh, you mean a male deer!?
Stan: Yes.

Descending the steep stairs to the rocks near Bass Harbor Lighthouse.
Stan standing on one of the many rock formations at Acadia National Park.

Acadia's rocky landscape dates back to more than 500 million years ago. It's the result of continents colliding, volcanoes erupting, and glaciers scraping the bedrock. It all sounds very dramatic, and I'm sure it was. Sand, mud and volcanic ash piled up in thick layers, and over millions of years, was compressed into sedimentary rock. As tectonic plates shifted, this newly formed rock was pushed to the surface of the earth. The end result? One of the most stunning islands in the United States.

Day 2: Hiking on Acadia Mountain

This beautiful landscape is perfect for hiking so on the second day of our vacation, we decided to take our first hike: Acadia Mountain Trail. The forested trail quickly makes its ascent up Acadia Mountain, with a few sections of relatively steep rock formations that require a bit of scrambling. Once at the top, the trail took us along the ridge of the mountain with great views into Somes Sound and the Atlantic Ocean.

The stunning view from Acadia Mountain Trail.

Coming down was harder than going up; the trail down is steep. Vanessa and I often had to sit down and hop off the rocks. Axl and Stan, on the other hand, hopped down the rocks like the elves in The Hobbit. We were happily surprised how much they loved hiking. Axl even declared he enjoyed hiking much more than walking, because "walking is exhausting" to him.

Descending Acadia Mountain Trail was harder than expected.

After our hike we went to Echo Lake for a swim and picnic. It is a gorgeous fresh water lake in a spectacular setting. Imagine a beautiful mountain lake, next to 800 foot (250 meter) steep rocks. We even saw two bald eagles flying by when swimming. Pretty magical!

Day 3: Watching the sunrise on Cadillac Mountain

We woke up at 4am to the blaring noise of our alarm clock. There are few sounds worse than an alarm clock, especially on vacation.

I feel tired, but also excited. We drank a quick cup of coffee, and hit the road. By 5am we were up at the top of Cadillac Mountain, a different part of Acadia National Park. Cadillac Mountain is famously the first spot in the United States to see the sunrise.

The view from the top of Cadillac Mountain minutes before sunrise.

For the photographer in me, it also meant that I got to see the park in its most beautiful light. As the sun started to peek through, the colors first turned purple and blue, and then slowly yellow, until the morning sun covered everything in golden hues.

The view from the top of Cadillac Mountain minutes after sunrise.

To watch the sunrise from the top of a mountain is an experience worth getting up early for.

Day 4: Rain day

Raining today! In the morning, Vanessa made cornmeal griddle cakes with fresh Maine wild blueberries. Stan gave them a 10 out of 10, and while Axl liked them a lot, he indicated he prefers it when the blueberries are still whole (i.e. haven't popped during the cooking process) … our little food critics! :)

We played a variety of board games throughout the day. It's fun to watch the boys start to think more strategically. In between games, Vanessa taught Stan how to make chocolate peanut butter chip cookies, and Axl how to make pasta bolognese. Both of the boys wrote down their recipes so they can make them again — we can't wait to try these!

Axl making pasta.
Playing a game of Pente.

We also love our long dinner conversations about life. Axl said he has a great business idea. I hope he doesn't mind me sharing it here but it's a "self-charging smartphone". He is determined to sell his idea to Apple for $3.5 million. This prompted a long conversation about how it's not enough to have a great idea, but how it takes a lot of determination and team work to bring such an idea to life. In that conversation, we covered a range of topics from working hard, to wealth, giving back, and what really matters in life to be happy.

Day 5: Out on the water

We started the day with a Kubb tournament (a lawn game that involves throwing wooden batons), where Stan and I won the first game, and then Stan gave the second game away to Vanessa and Axl. Nonetheless it was a lot of fun!

After lunch we made our way into Bar Harbor and then boarded a boat to explore the Acadia coastline. On the boat tour we learned more about the animals indigenous to the area, the history of the park and how Bar Harbor came to be. On our trip we saw grey seals, Egg Rock Lighthouse and another bald eagle.

Egg Rock Lighthouse in the distance.

It wouldn't be a vacation in Maine without lobster and chowder. When we got back on land, we went to a lobster shack where we were able to pick out our lobsters, and then they were boiled in custom fishermen's nets in large pots filled with seawater outside over fires fed with local wood. The lobsters were cooked to perfection! Stan was in heaven as he had been craving lobster for weeks.

Lobsters cooking in large pots filled with seawater.

Day 6: Exploring Pemetic Mountain

After a quick work call in the morning, we're off for a hike. We hiked Pemetic Mountain today; roughly 4 miles (6.5 kilometer) and 1,200 feet (365 meter) of elevation. Getting to the top involves some steep and strenuous uphill hiking, but the views of the ocean, surrounding forests, lakes and other mountains make it worth it. The appeal is not just in communing with nature, but also connecting with the boys.

After our hike we grabbed lunch at Sweet Pea's Cafe. The name is misleading; it's not really a cafe, but a farm-to-table restaurant, and one where you are literally eating at the farm. All the food was fresh and delicious, and we all agreed to come back on a future trip (which is why I decided to capture the name in my blog).

My favorite part of the day was undeniably the dance party we had in the kitchen just before dinner. No, there are not pictures of it. The party climaxed with Neil Diamond's Sweet Caroline which involved us dancing and singing along. More than Neil Diamond, I love that we can dance and sing together without caution or reservation, and just being our true selves.

Day 7: Hiking Bubble Rock

We're kicking off the day with Axl and Stan doing some homework: math, reading, writing, reading the clock, etc. For the entire month of August, we've been doing homework for about one hour a day. As is often the case, it frustrates me. I continue to be surprised by how many mistakes they make and why they still don't seem to understand basic concepts. We'll keep practicing though!

In the afternoon, we decided to hike Bubble Rock, named after one of the most famous boulders in Maine. Bubble Rock is a huge boulder that was placed into this unique position, teetering on the edge of a cliff, millions of years ago by a glacier. It makes you wonder how it remained in place for all these years. Of course, we tried to push it off, and failed.

Giving the famous Bubble Rock at Acadia National Park a little push.

Day 8: Chilling in the backyard

The last day of our vacation, we just hung around the house. We played Spikeball (also called Roundnet), soccer, made pizza on the grill, collected shells on the beach, caught up on some work, and relaxed in the hammock. At home, we live in a condominium so we don't have a backyard or a large grill — just a small electric one. So playing games in the backyard and grilling actually makes for a wonderful experience. And with that last relaxing day, the Maine part of our vacation came to an end. We're headed back to the Boston suburbs, because the next day, we have a flight to Europe to catch. By the time you're reading this, we'll probably be in Europe.

Vanessa making pizza on the grill, which has become a summer vacation tradition.

Some photos were taken with my Nikon DSLR, some with my iPhone X and some with my DJI Mavic Pro. I wish I could have taken my Nikon on all our hikes, but they were often too strenuous to bring it along.

August 28, 2018

I recently bought a new webcam, because my laptop screen broke down, and the replacement screen that the Fujitsu people had in stock did not have a webcam anymore -- and, well, I need one.

One thing the new webcam has which the old one did not is a sensor with a 1920x1080 resolution. Since I've been playing around with various video-related things, I wanted to see if it was possible to record something in VP9 at live encoding settings. A year or so ago I would have said "no, that takes waaay too much CPU time", but right now I know that this is not true: you can easily do so if you use the right ffmpeg settings.

After a bit of fiddling about, I came up with the following:

ffmpeg -f v4l2 -framerate 25 -video_size 1920x1080 -c:v mjpeg -i /dev/video0 -f alsa -ac 1 -i hw:CARD=C615 -c:a libopus -map 0:v -map 1:a -c:v libvpx-vp9 -r 25 -g 90 -s 1920x1080 -quality realtime -speed 6 -threads 8 -row-mt 1 -tile-columns 2 -frame-parallel 1 -qmin 4 -qmax 48 -b:v 4500 output.mkv

Things you might want to change in the above:

  • Set the hw:CARD=C615 bit to something that occurs in the output of arecord -L on your system rather than on mine.
  • Run v4l2-ctl --list-formats-ext and verify that your camera supports 1920x1080 resolution in motion JPEG at 25 fps. If not, change the values of the parameters -framerate and -video_size, and the -c:v that occurs before the -i /dev/video0 position (which sets the input video codec; the one after selects the output video codec and you don't want to touch that unless you don't want VP9).
  • If you don't have a quad-core CPU with hyperthreading, change the -threads setting.

If your CPU can't keep up with things, you might want to read the documentation on the subject and tweak the -qmin, -qmax, and/or -speed parameters.

This was done on a four-year-old Haswell Core i7; it should be easier on more modern hardware.

Next up: try to get a live stream into a DASH system. Or something.

An “Autoptimize Critical CSS”-user saw some weirdness in how the site looked when optimized. The reason for this turned out to be not the critical CSS, but the fact that he used “Hide My WordPress Pro”, a plugin that changes well-known paths in WordPress URLs to other paths (e.g. /wp-content/plugins -> /modules/ and /wp-content/themes -> /templates) and uses rewrite rules in .htaccess to map requests to the filesystem. This resulted in Autoptimize not finding the files on the filesystem (as AO does not “see” the mapping in .htaccess), leaving them un-aggregated.

To fix something like that, a small code snippet that hooks into Autoptimize’s API can do the trick;

add_filter('autoptimize_filter_cssjs_alter_url', 'rewrite_hidemywp');
function rewrite_hidemywp($url) {
    if ( strpos( $url, 'modules/' ) !== false && strpos( $url, 'wp-content' ) === false ) {
        $url = str_replace( 'modules/', 'wp-content/plugins/', $url );
    } elseif ( strpos( $url, 'templates/' ) !== false && strpos( $url, 'wp-content' ) === false ) {
        $url = str_replace( 'templates/', 'wp-content/themes/', $url );
    }
    return $url;
}

The above is just an example (as in the Pro version of hide-my-wp you can set paths of your own liking and you can even replace theme names by random-ish strings), but with a small amount of PHP skills you should be able to work out the solution for your own site. Happy optimized hiding!
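If your Hide My WordPress Pro setup uses custom or randomized path names, the same autoptimize_filter_cssjs_alter_url hook can be driven by a small lookup table instead. This is only a hedged sketch building on the snippet above; the entries in $path_map are placeholders that you would replace with the paths configured on your own site.

add_filter( 'autoptimize_filter_cssjs_alter_url', 'rewrite_hidden_paths' );
function rewrite_hidden_paths( $url ) {
    // Map the "hidden" paths (as configured in Hide My WordPress) back to the
    // real wp-content paths so Autoptimize can find the files on the filesystem.
    // These mappings are placeholders; adapt them to your own configuration.
    $path_map = array(
        'modules/'   => 'wp-content/plugins/',
        'templates/' => 'wp-content/themes/',
    );

    // Leave URLs alone that already point to the real wp-content locations.
    if ( strpos( $url, 'wp-content' ) !== false ) {
        return $url;
    }

    foreach ( $path_map as $hidden => $real ) {
        if ( strpos( $url, $hidden ) !== false ) {
            $url = str_replace( $hidden, $real, $url );
            break;
        }
    }
    return $url;
}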

When they collaborate, individuals can accomplish amazing feats. Feats that nobody actually wants…

Saving the planet seems like a laudable goal, but one in the face of which individuals can do little more than sort their garbage and buy organic muesli in bulk. A kind of wait-and-see attitude sets in, hoping that the "politicians" will make "decisions".

Yet individuals are capable of accomplishing great things through cooperation. They are, for example, able to synchronize themselves so that each of them moves several tonnes of outrageously expensive material (sometimes worth several years of their work) in order to completely block every access route to gigantic living areas.

And all of that twice a day, every day of the year! Even though no individual actually wants it, and the famous politicians do everything they can to prevent it!

Seen that way, the daily traffic jams we create at the gates of our cities (for we are not "stuck in" the traffic jams, we are the traffic jams) are an incredible display of willpower, cooperation and revolt against the powers that be.

Imagine the energy it takes to pull off a traffic jam against the will of the authorities! If extraterrestrials are observing our planet, they must wonder how our species manages to reproduce this incredible effort every day in so many places around the globe.

Meanwhile, of course, even among the self-styled greens who shout at the government, people keep smoking, keep tossing their cigarette butts on the ground ("oops, but me, I promise, I almost never do it") and keep ranting on Facebook, between two ultra-targeted ads, about politicians doing nothing "to create jobs".

Change never comes from some authority. Individuals must learn to be critical, to open their eyes to their own actions and to refuse, en masse, all authority, be it political, religious, ideological or even economic. Whether it imposes itself by force, like the state, through credulity, like religion, or by leaning on neuroscience, like the media, authority has only one goal: to control your thoughts in order to direct your actions. To make you stop thinking so that you spend and obey all the better.

In that sense, any attempt to "replace authority with the same thing but a bit less bad" can only be doomed to fail. It is time for humanity to enter the era of intellectual independence. It is time to start thinking again while, by an absurd irony, some individuals put their energy into fighting "the authority of science" (science is not an authority but a method) while accepting without question the authority of a sectarian chakra-opening guru, of a decrepit religious morality, of a local politician who buys the beers, and of the permanent marketing that pushes us to consume.

In the end, the only important question is what you want to contribute to. What do you want to put your energy into? Do you really want to tell your descendants one day that "for 30 years, I spent my time and my money anonymously helping to create the biggest traffic jams of the decade"?

Which authorities control you? What are you contributing your energy to today? And what would you rather be contributing to?

The answer does not have to be immediate. That is precisely why it is urgent to start looking at ourselves in the mirror, to observe the only truly legitimate authority, the only one we should obey: ourselves!

Photo by McKylan Mullins.

Enjoyed this read? Support the author on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins at 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Even a symbolic donation makes all the difference! Then let's meet up again on Facebook, Twitter or Mastodon.

This text is published under the CC-By BE licence.

August 24, 2018

I published the following diary on isc.sans.org: “Microsoft Publisher Files Delivering Malware“:

Attackers are always searching for new ways to deliver malicious content to their victims. A few days ago, Microsoft Publisher malicious files were spotted by security researchers[1]. Publisher is a low-level desktop publishing application offered by Microsoft in its Office suite. They are linked to the “.pub” extension. If not very popular these days, Publisher is still installed on many computers because the default setup of Office 365 proposes it by default… [Read more]

[The post [SANS ISC] Microsoft Publisher Files Delivering Malware has been first published on /dev/random]

August 23, 2018

I published the following diary on isc.sans.org: “Simple Phishing Through formcrafts.com“:

For a long time, moving services to the cloud has been a major trend. Many organizations jumped into the cloud because it’s much easier and cost less money (in terms of maintenance, licence, electricity, etc). If so, why should bad guys not follow the same trend? Last year, I wrote a diary about a phishing campaign that used jotform.com to handle the HTTP POST data… [Read more]

[The post [SANS ISC] Simple Phishing Through formcrafts.com has been first published on /dev/random]

August 22, 2018

Just now, at the checkout of a shop, there was a lady in a dress. From the back of her dress hung a wide ribbon, from her neck all the way down to her behind.

So I say: "Madam, madam, there's a ribbon hanging loose from your dress."

She answers: "Mmno, it's meant to be like that", "mmbut sweet of you to say so anyway".

So now I'm wrecked. That second m was heartbreaking every time. Poor little soul.

And yet that dress did look broken from behind. That ribbon wasn't attached properly either.

Women and their strange ideas about clothes..

For personal reasons, I didn't make it to DebConf18 in Taiwan this year; but that didn't mean I wasn't interested in what was happening. Additionally, I remotely configured SReview, the video review and transcoding system which I originally wrote for FOSDEM.

I received a present for that today:

And yes, of course I'm happy about that :-)

On a side note, the videos for DebConf18 have all been transcoded now. There are actually three files per event:

  • The .webm file contains the high quality transcode of the video. It uses VP9 for video, and Opus for audio, and uses the Google-recommended settings for VP9 bitrate configuration. On modern hardware with decent bandwidth, you'll want to use this file.
  • The .lq.webm file contains the low-quality transcode of the video. It uses VP8 for video, and Vorbis for audio, and uses the builtin default settings of ffmpeg for VP8 bitrate configuration, as well as a scale-down to half the resolution; those usually end up being somewhere between a third and half of the size of the high quality transcodes.
  • The .ogg file contains the extracted vorbis audio from the .lq.webm file, and is useful only if you want to listen to the talk and not watch it. It's also the smallest download, for obvious reasons.

If you have comments, feel free to let us know on the mailing list.

August 21, 2018

I published the following diary on isc.sans.org: “Malicious DLL Loaded Through AutoIT“:

Here is an interesting sample that I found while hunting. It started with the following URL:

hxxp://200[.]98[.]170[.]29/uiferuisdfj/W5UsPk.php?Q8T3=OQlLg3rUFVE740gn1T3LjoPCQKxAL1i6WoY34y2o73Ap3C80lvTr9FM5

The value of the parameter (‘OQlLg3rUFVE740gn1T3LjoPCQKxAL1i6WoY34y2o73Ap3C80lvTr9FM5’) is used as the key to decode the first stage. If you don’t specify it, you get garbage data… [Read more]

[The post [SANS ISC] Malicious DLL Loaded Through AutoIT has been first published on /dev/random]

August 20, 2018

I’m proud to have been selected to give a training at DeepSec (Vienna, Austria) in November: “Hunting with OSSEC“. This training is intended for Blue Team members and system/security engineers who would like to take advantage of the OSSEC integration capabilities with other tools and increase the visibility of their infrastructure behaviour.

OSSEC is sometimes described as a low-cost log management solution but it has many interesting features which, when combined with external sources of information, may help in hunting for suspicious activity occurring on your servers and end-points. During this training, you will learn the basics of OSSEC and its components, how to deploy it and quickly get results. Then we will learn how to deploy specific rules to catch suspicious activities. From an input point of view, we will see how easy it is to learn new log formats to increase the detection scope and, from an output point of view, how we can generate alerts by interconnecting OSSEC with other tools like MISP, TheHive or an ELK stack, Splunk, etc.

A quick overview of the training content:

  • Day 1
    • Introduction to OSSEC
    • Day to day management
      • Deployment (automation!)
      • Maintenance
      • Debugging
    • Collecting events using homemade decoders and rules
    • Reporting and alerting
  • Day 2
    • “Pimping” OSSEC with external feeds & data
    • Automation using Active-Response
    • Integration with external tools for better visibility

The DeepSec schedule is already online and the registration page is here. Please spread the word!

[The post Training Announce: “Hunting with OSSEC” has been first published on /dev/random]

August 17, 2018

Acquia was once again included in the Inc 5000 listing of fast-growing private U.S. companies. It's a nice milestone for us because it is the seventh year in a row that Acquia has been included. We first appeared on the list in 2012, when Acquia was ranked the eighth fastest growing private company in the United States. It's easy to grow fast when you get started, but as you grow, it's increasingly challenging to sustain high growth rates. While there may be 4,700 companies ahead of us, we have kept a solid track record of growth ever since our debut seven years ago. I continue to be proud of the entire Acquia team, who are relentless in making these achievements possible. Kapow!

August 16, 2018

A very quick post about a new thread that was started yesterday on the OSS-Security mailing list. It's about a vulnerability affecting almost ALL SSH server versions. Quoted from the initial message;

It affects all operating systems, all OpenSSH versions (we went back as far as OpenSSH 2.3.0, released in November 2000)

It is possible to enumerate usernames on a server that offers SSH services publicly. Of course, it did not take too long to see a proof-of-concept posted. I just tested it and it works like a charm:

$ ./ssh-check-username.py victim.domain.com test
[*] Invalid username
$ ./ssh-check-username.py victim.domain.com xavier
[+] Valid username

This is very nice/evil (depending on the side you’re working on). For Red Teams, it’s nice to enumerate usernames and focus on the weakest ones (“guest”, “support”, “test”, etc). There are plenty of username lists available online to brute force the server.

From a Blue Team point of view, how can you detect whether a host is being targeted by this attack? Search for this type of event:

Aug 16 21:42:10 victim sshd[10680]: fatal: ssh_packet_get_string: incomplete message [preauth]

Note that the offending IP address is not listed in the error message. It’s time to keep an eye on your log files and block suspicious IP addresses that make too many SSH attempts (correlate with your firewall logs).
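As a purely illustrative sketch (not part of the original diary), a few lines of PHP are enough to keep an eye on such events from a cron job; the log file location and the alert threshold below are assumptions to adapt to your own environment, and in practice you would rather feed this into OSSEC or your log management tooling.

<?php
// Minimal sketch: count OpenSSH username-enumeration errors in the auth log.
// The log file location and the threshold are assumptions; adjust as needed.
$logfile   = '/var/log/auth.log';
$needle    = 'ssh_packet_get_string: incomplete message [preauth]';
$threshold = 5;

$hits = 0;
foreach ( file( $logfile, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES ) as $line ) {
    if ( strpos( $line, $needle ) !== false ) {
        $hits++;
    }
}

if ( $hits >= $threshold ) {
    // The error message itself does not contain the source IP address,
    // so correlate the timestamps with your firewall logs to find it.
    echo "[!] $hits possible SSH username enumeration attempts found in $logfile\n";
}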

[The post Detecting SSH Username Enumeration has been first published on /dev/random]

I published the following diary on isc.sans.org: “Truncating Payloads and Anonymizing PCAP files“:

Sometimes, you may need to provide PCAP files to third-party organizations like a vendor support team to investigate a problem with your network. I was looking for a small tool to anonymize network traffic but also to restrict data to packet headers (and drop the payload). Google pointed me to a tool called ‘TCPurify’… [Read more]

 

[The post [SANS ISC] Truncating Payloads and Anonymizing PCAP files has been first published on /dev/random]

August 14, 2018

In this follow-up to Implementing the Clean Architecture I introduce you to a combination of The Clean Architecture and the strategic DDD pattern known as Bounded Contexts.

At Wikimedia Deutschland we use this combination of The Clean Architecture and Bounded Contexts for our fundraising applications. In this post I describe the structure we have and the architectural rules we follow in the abstract. For the story on how we got to this point and a more concrete description, see my post Bounded Contexts in the Wikimedia Fundraising Software. In that post and at the end of this one I link you to a real-world codebase that follows the abstract rules described in this post.

If you are not yet familiar with The Clean Architecture, please first read Implementing the Clean Architecture.

Clean Architecture + Bounded Contexts

Diagram by Jeroen De Dauw, Charlie Kritschmar, Jan Dittrich and Hanna Petruschat.

Diagram depicting Clean Architecture + Bounded Contexts

In the top layer of the diagram we have applications. These can be web applications, they can be console applications, they can be monoliths, they can be microservices, etc. Each application has presentation code which in bigger applications tends to reside in a decoupled presentation layer using patterns such as presenters. All applications also somehow construct the dependency graph they need, perhaps using a Dependency Injection Container or set of factories. Often this involves reading configuration from somewhere. The applications contain ALL framework binding, hence they are the place where you will find the Controllers if you are using a typical web framework.

Since the applications are in the top layer, and dependencies can only go down, no code outside of the applications is allowed to depend on code in the applications. That means there is 0 binding to mechanisms such as frameworks and presentation code outside of the applications.

In the second layer we have the Bounded Contexts. Ideally one Bounded Context per subdomain. At the core of each BC we have the Domain Model and Domain Services, containing the business logic part of the subdomain. Dependencies can only point inwards, so the Domain Model, which is at the center, cannot depend on anything further out. Around the Domain Model are the Domain Services. These include interfaces for persistence services such as Repositories. The UseCases form the final ring. They can use both the Domain Model and the Domain Services. They also form a boundary around the two, meaning that no code outside of the Bounded Context is allowed to talk to the Domain Model or Domain Services.

The Bounded Contexts include their own Persistence Layer. The Persistence Layer can use a relational database, files on the file system, a remote web API, a combination of these, etc. It has implementations of domain services such as Repositories which are used by the UseCases. These implementations are the only thing that is allowed to talk to and know about the low-level aspects of the Persistence Layer. The only things that can use these service implementations are other Domain Services and the UseCases.

The UseCases, including their Request Models and Response Models, form the public interface of the Bounded Context. This means that there is 0 binding to the persistence mechanisms outside of the Bounded Context. It also means that the code responsible for the domain logic cannot be directly accessed elsewhere, such as in the presentation layer of an application.
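To make this more concrete, here is a minimal, purely hypothetical PHP sketch of what such a public interface can look like. None of the class names below (Donation, DonationRepository, AddDonationUseCase, and so on) are taken from the actual fundraising codebase; they only illustrate the dependency rules described above.

<?php
// Hypothetical Bounded Context sketch, illustrating the rings described above.

// Domain Model (innermost ring): depends on nothing further out.
class Donation {
    private $amountInCents;
    public function __construct( int $amountInCents ) {
        $this->amountInCents = $amountInCents;
    }
    public function getAmountInCents(): int {
        return $this->amountInCents;
    }
}

// Domain Service interface: implemented by the Bounded Context's Persistence Layer.
interface DonationRepository {
    public function storeDonation( Donation $donation );
}

// Request and Response Models: together with the UseCase they form the
// public interface of the Bounded Context.
class AddDonationRequest {
    public $amountInCents;
    public function __construct( int $amountInCents ) {
        $this->amountInCents = $amountInCents;
    }
}

class AddDonationResponse {
    public $isSuccess;
    public function __construct( bool $isSuccess ) {
        $this->isSuccess = $isSuccess;
    }
}

// UseCase (outer ring of the Bounded Context): the only thing applications
// are allowed to call; it may use the Domain Model and Domain Services.
class AddDonationUseCase {
    private $repository;
    public function __construct( DonationRepository $repository ) {
        $this->repository = $repository;
    }
    public function addDonation( AddDonationRequest $request ): AddDonationResponse {
        if ( $request->amountInCents <= 0 ) {
            return new AddDonationResponse( false );
        }
        $this->repository->storeDonation( new Donation( $request->amountInCents ) );
        return new AddDonationResponse( true );
    }
}

In this sketch an application would wire things up in its factories or dependency injection container by passing in the Persistence Layer's implementation of DonationRepository, and its presentation layer would only ever see the request and response models.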

The applications and Bounded Contexts contain all the domain specific code. This code can make use of libraries and of course the runtime (ie PHP) itself.

As examples of Bounded Contexts following this approach, see the Donation Context and Membership Context. For an application following this architecture, see the FundraisingFrontend, which uses both the Donation Context and Membership Context. Both these contexts are also used by another application, the code of which is sadly not currently public. You can also read the stories of how we rewrote the FundraisingFrontend to use the Clean Architecture and how we refactored towards Bounded Contexts.

Further reading

If you are not yet familiar with Bounded Contexts or how to design them well, I recommend reading Domain-Driven Design Distilled.

In this follow-up to rewriting the Wikimedia Deutschland fundraising I tell the story of how we reorganized our codebases along the lines of the DDD strategic pattern Bounded Contexts.

In 2016 the FUN team at Wikimedia Deutschland rewrote the Wikimedia Deutschland fundraising application. This new codebase uses The Clean Architecture and near the end of the rewrite got reorganized partially towards Bounded Contexts. After adding many new features to the application in 2017, we reorganized further towards Bounded Contexts, this time also including our other fundraising applications. In this post I explain the questions we had and which decisions we ended up making. I also link to the relevant code so you have real world examples of using both The Clean Architecture and Bounded Contexts.

This post is a good primer to my Clean Architecture + Bounded Contexts post, which describes the structure and architecture rules we now have in detail.

Our initial rewrite

Back in 2014, we had two codebases, each in their own git repository and not using code from the other. The first one being a user facing PHP web-app that allows people to make donations and apply for memberships, called FundraisingFrontend. The “frontend” here stands for “user facing”. This is the application that we rewrote in 2016. The second codebase mainly contains the Fundraising Operations Center, a PHP web-app used by the fundraising staff to moderate and analyze donations and membership applications. This second codebase also contains some scripts to do exports of the data for communication with third-party systems.

Both the FundraisingFrontend and Fundraising Operations Center (FOC) used the same MySQL database. They each accessed this database in their own way, using PDO, Doctrine or SQL directly. In 2015 we created a Fundraising Store component based on Doctrine, to be used by both applications. Because we rewrote the FundraisingFrontend, all data access code there now uses this component. The FOC codebase we have been gradually migrating away from random SQL or PDO in its logic to also using this component, a process that is still ongoing.

Hence as we started with our rewrite of the FundraisingFrontend in 2016, we had 3 sets of code: FundraisingFrontend, FOC and Fundraising Store. After our rewrite the picture still looked largely the same. What we had done was turn FundraisingFrontend from a big ball of mud into a well designed application with proper architecture and separation of subdomains using Bounded Contexts. So while there were multiple components within the FundraisingFrontend, it was still all in one codebase / git repository. (Except for several small PHP libraries, but they are not relevant here.)

Need for reorganization

During 2017 we did a bunch of work on the FOC application and the associated export script, all the while doing gradual refactoring towards sanity. While refactoring we found ourselves implementing things such as a DonationRepository, things that already existed in the Bounded Contexts part of the FundraisingFrontend codebase. We realized that at least the subset of the FOC application that does moderation is part of the same subdomains that we created those Bounded Contexts for.

The inability to use these Bounded Contexts in the FOC app was forcing us to do double work by creating a second implementation of perfectly good code we already had. This also forced us to pay a lot of extra attention to avoid inconsistencies.  For instance, we ran into issues with the FOC app persisting Donations in a state deemed invalid by the donations Bounded Context.

To address these issues we decided to share the Bounded Contexts that we created during the 2016 rewrite with all relevant applications.

Sharing our Bounded Contexts

We considered several approaches to sharing our Bounded Contexts between our applications. The first approach we considered was having a dedicated git repository per BC. To do this we would need to answer what exactly would go into this repository and what would stay behind. We were concerned that for many changes we'd need to touch both the BC git repo and the application git repo, which requires more work and coordination than being able to make a change in a single repository. This led us to consider options such as putting all BCs together into a single repo, to minimize this cost, or to simply put all code (BCs and applications) into a single huge repository.

We ended up going with one repo per BC, though started with a single BC to see how well this approach would work before committing to it. With this approach we still faced the question of what exactly should go into the BC repo. Should that just be the UseCases and their dependencies, or also the presentation layer code that uses the UseCases? We decided to leave the presentation layer code in the application repositories, to avoid extra (heavy) dependencies in the BC repo and because the UseCases provide a nice interface to the BC. Following this approach it is easy to tell if code belongs in the BC repo or not: if it binds to the domain model, it belongs in the BC repo.

These are the Bounded Context git repositories:

The BCs still use the FundraisingStore component in their data access services. Since this is not visible from outside of the BC, we can easily refactor towards removing the FundraisingStore and having the data access mechanism of a BC be truly private to it (as it is supposed to be for BCs).

The new BC repos allow us to continue gradual refactoring of the FOC and swap out legacy code with UseCases from the BCs. We can do so without duplicating what we already did and, thanks to the proper encapsulation, also without running into consistency issues.

Clean Architecture + Bounded Contexts

During our initial rewrite we created a diagram to represent the flavor of The Clean Architecture that we were using. For an updated version that depicts the structure of The Clean Architecture + Bounded Contexts and describes the rules of this architecture in detail, see my blog post The Clean Architecture + Bounded Contexts.

Diagram depicting Clean Architecture + Bounded Contexts

August 10, 2018

So you have a WooCommerce shop which uses YouTube videos to showcase your products, but those same videos are slowing your site down as YouTube embeds typically do? WP YouTube Lyte can go a long way to fix that issue, replacing the “fat embedded YouTube player” with a LYTE alternative.

LYTE will automatically detect and replace YouTube links (oEmbeds) and iFrames in your content (blogposts, pages, product descriptions) but is not active on content that does not hook into WordPress’ the_content filter (e.g. category descriptions or short product descriptions). To have LYTE active on those as well, just hook the respective filters up with the lyte_parse-function and you’re good to go;

if ( function_exists( 'lyte_parse' ) ) {
    add_filter( 'woocommerce_short_description', 'lyte_parse' );
    add_filter( 'category_description', 'lyte_parse' );
}

And a LYTE video, in case you’re wondering, looks like this (in this case beautiful harmonies by David Crosby & Venice, filmed way back in 1999 on Dutch TV);

YouTube Video
Watch this video on YouTube.

August 09, 2018

We now invite proposals for main track presentations, developer rooms, stands and lightning talks. FOSDEM offers open source and free software developers a place to meet, share ideas and collaborate. Renowned for being highly developer-oriented, the event brings together some 8000+ geeks from all over the world. The nineteenth edition will take place on Saturday 2nd and Sunday 3rd February 2019 at the usual location: ULB Campus Solbosch in Brussels. We will record and stream all main tracks, devrooms and lightning talks live. The recordings will be published under the same licence as all FOSDEM content (CC-BY). If, exceptionally, …

Aristide's problem is that he spends a little too much time on the Internet. And spending time on the Internet gives you ideas. Ideas like going into space, exploring the planets and taking one small step for a rabbit but one giant leap for rabbitkind.

That settles it: Aristide will not become a salesman in the family carrot-export business. He will be a cosmonaut!

To get there, he needs your help with our crowdfunding campaign.

Born in the imagination of yours truly and brought to life by the graphic talent of Vinch, the adventures of Aristide were conceived as a children's book aimed at adults: dense text, rich vocabulary, absurd humour and plenty of tongue-in-cheek.

But don't we infantilize children a little too much? They too are capable of getting passionate about a longer story, of appreciating the colourful naivety of a space conquest unlike any other. Aristide is therefore a children's book for adults for children. With, woven through it, a very current question: should you believe everything you read on the Internet? Sometimes yes, sometimes no, and sometimes it gives you ideas…

Although this project has required an enormous amount of work and effort, we chose the self-publishing route in order to make a true children's book (but for adults for children, still following?) of the Internet age. Rather than optimizing costs, we are above all trying to produce a book of quality in every respect (printing, recycled paper). The final choice of printer has not been settled yet, by the way, so if you have any good leads, let us know!

In short, I could talk about this project for hours but, today, what we mostly need is your support, both financially and in spreading the word about the project on social networks (especially the offline ones, like friends, family, the parents at school, all that).

On Aristide's behalf, a huge thank-you in advance!

Enjoyed this read? Support the author on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins at 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Even a symbolic donation makes all the difference! Then let's meet up again on Facebook, Twitter or Mastodon.

This text is published under the CC-By BE licence.

August 07, 2018

About

I finished my earlier work on build environment examples, illustrating how to do versioning of shared object files right with autotools, qmake, cmake and meson. You can find it here.

The DIR examples show, for various build environments, how to create a good project structure that will build libraries that are versioned with libtool, or have versioning equivalent to what libtool would deliver, have a pkg-config file and have a so-called API version in the library's name.

What is right?

Information on this can be found in the autotools mythbuster docs, the libtool docs on versioning and FreeBSD's chapter on shared libraries. I tried to ensure that what is written here works with all of the build environments in the examples.

libpackage-4.3.so.2.1.0, what is what?

You’ll notice that a library called ‘package’ will in your LIBDIR often be called something like libpackage-4.3.so.2.1.0

We call the 4.3 part the APIVERSION, and the 2.1.0 part the VERSION (the ABI version).

I will explain these examples using semantic versioning as APIVERSION and either libtool’s current:revision:age or a semantic versioning alternative as field for VERSION (like in FreeBSD and for build environments where compatibility with libtool’s -version-info feature ain’t a requirement).

Noting that with libtool’s -version-info feature the values that you fill in for current, age and revision will not necessarily be identical to what ends up as suffix of the soname in LIBDIR. The formula to form the filename’s suffix is, for libtool, “(current – age).age.revision”. This means that for soname libpackage-APIVERSION.so.2.1.0, you would need current=3, revision=0 and age=1.

The VERSION part

In case you want compatibility with or use libtool’s -version-info feature, the document libtool/version.html on autotools.io states:

The rules of thumb, when dealing with these values are:

  • Increase the current value whenever an interface has been added, removed or changed.
  • Always increase the revision value.
  • Increase the age value only if the changes made to the ABI are backward compatible.

The updating-version-info part of libtool's docs on the -version-info feature states:

  1. Start with version information of ‘0:0:0’ for each libtool library.
  2. Update the version information only immediately before a public release of your software. More frequent updates are unnecessary, and only guarantee that the current interface number gets larger faster.
  3. If the library source code has changed at all since the last update, then increment revision (‘c:r:a’ becomes ‘c:r+1:a’).
  4. If any interfaces have been added, removed, or changed since the last update, increment current, and set revision to 0.
  5. If any interfaces have been added since the last public release, then increment age.
  6. If any interfaces have been removed or changed since the last public release, then set age to 0.

When you don’t care about compatibility with libtool’s -version-info feature, then you can take the following simplified rules for VERSION:

  • SOVERSION = Major version
  • Major version: increase it if you break ABI compatibility
  • Minor version: increase it if you add ABI compatible features
  • Patch version: increase it for bug fix releases.

Examples where these simplified rules are or can be applicable are build environments like cmake, meson and qmake. When you use autotools you will be using libtool, and then they ain't applicable.

The APIVERSION part

For the API version I will use the rules from semver.org. You can also use the semver rules for your package’s version:

Given a version number MAJOR.MINOR.PATCH, increment the:

  1. MAJOR version when you make incompatible API changes,
  2. MINOR version when you add functionality in a backwards-compatible manner, and
  3. PATCH version when you make backwards-compatible bug fixes.

When you have an API, that API can change over time. You typically want to version those API changes so that the users of your library can adopt newer versions of the API while at the same time other users still use older versions of your API. For this we can follow section 4.3, called "multiple libraries versions", of the autotools mythbuster documentation. It states:

In this situation, the best option is to append part of the library's version information to the library's name, which is exemplified by GLib's libglib-2.0.so.0 soname. To do so, the declaration in the Makefile.am has to be like this:

lib_LTLIBRARIES = libtest-1.0.la

libtest_1_0_la_LDFLAGS = -version-info 0:0:0

The pkg-config file

Many people use many build environments (autotools, qmake, cmake, meson, you name it). Nowadays almost all of those build environments support pkg-config out of the box, both for generating the file and for consuming it to get information about dependencies.

I consider it a necessity to ship with a useful and correct pkg-config .pc file. The filename should be /usr/lib/pkgconfig/package-APIVERSION.pc for soname libpackage-APIVERSION.so.VERSION. In our example that means /usr/lib/pkgconfig/package-4.3.pc. We'd use the command pkg-config package-4.3 --cflags --libs, for example.

Examples are GLib’s pkg-config file, located at /usr/lib/pkgconfig/glib-2.0.pc

The include path

I consider it a necessity to ship API headers in a per API-version different location (like for example GLib’s, at /usr/include/glib-2.0). This means that your API version number must be part of the include-path.

For example using earlier mentioned API-version 4.3, /usr/include/package-4.3 for /usr/lib/libpackage-4.3.so(.2.1.0) having /usr/lib/pkg-config/package-4.3.pc

What will the linker typically link with?

The linker will for -lpackage-4.3 typically link with /usr/lib/libpackage-4.3.so.2 or with libpackage-APIVERSION.so.(current – age). Noting that the part that is calculated as (current – age) in this example is often, for example in cmake and meson, referred to as the SOVERSION. With SOVERSION the soname template in LIBDIR is libpackage-APIVERSION.so.SOVERSION.

What is wrong?

Not doing any versioning

Without versioning you can't make any API or ABI changes that won't break all your users' code in a way that could be manageable for them. If you do decide not to do any versioning, then at least also don't put anything behind the .so part of your so's filename. That way, at least you won't break things in spectacular ways.

Coming up with your own versioning scheme

Thinking you know better than the rest of the world will, in spectacular ways, make everything you do break with what the entire rest of the world does. You shouldn't congratulate yourself on that. The only thing that can be said about it is that it probably makes little sense, and that others will probably start ignoring your work. Your mileage may vary. Keep in mind that without a correct SOVERSION, certain things will simply not work correctly.

In case of libtool: using your package’s (semver) release numbering for current, revision, age

This is similarly wrong to ‘Coming up with your own versioning scheme’.

The Libtool documentation on updating version info is clear about this:

Never try to set the interface numbers so that they correspond to the release number of your package. This is an abuse that only fosters misunderstanding of the purpose of library versions.

This basically means that once you are using libtool, also use libtool’s versioning rules.

Refusing or forgetting to increase the current and/or SOVERSION on breaking ABI changes

The current part of the VERSION (current, revision and age) minus age, in other words the SOVERSION, is the most significant field. The current and age are usually involved in forming the so-called SOVERSION, which in turn is used by the linker to know with which ABI version to link. That makes it … damn important.

Some people think 'all this is just too complicated for me', 'I will just refuse to do anything and always release using the same version numbers'. That goes spectacularly wrong whenever you make ABI incompatible changes. It's similarly wrong to 'Coming up with your own versioning scheme'.

That way, after your shared library gets updated, all programs that link with it can easily crash, can corrupt data and might or might not work.

By updating the current and age (or the SOVERSION) you will basically trigger people who manage packages and their tooling to rebuild programs that link with your shared library. You actually want that the moment you make breaking ABI changes in a newer version of it.

When you don't want to care about libtool's -version-info feature, then there is also a simpler set of rules to follow. Those rules, for VERSION, are:

  • SOVERSION = Major version (with these simplified set of rules, no subtracting of current with age is needed)
  • Major version: increase it if you break ABI compatibility
  • Minor version: increase it if you add ABI compatible features
  • Patch version: increase it for bug fix releases.

What isn’t wrong?

Not using libtool (but nonetheless doing ABI versioning right)

GNU libtool was made to make certain things easier. Nowadays many popular build environments also make things easier. Meanwhile, GNU libtool has been around for a long time, and its versioning rules, commonly known as the current:revision:age field passed as parameter to -version-info, got widely adopted.

What GNU libtool did was, however, not really a standard. It is one interpretation of how to do it. And a rather complicated one, at that.

Please let it be crystal clear that not using libtool does not mean that you can do ABI versioning wrong. Because very often people seem to think that they can, and think they’ll still get out safely while doing ABI versioning completely wrong. This is not the case.

Not having an APIVERSION at all

It isn't wrong not to have an APIVERSION in the soname. It does, however, mean that you promise to not ever break API. Because the moment you break API, you make it impossible for your users to stay on the old API for a little longer. They might have programs that use the old API and programs that use the new API. Now what?

When you have an APIVERSION then you can allow the introduction of a new version of the API while simultaneously the old API remains available on a user’s system.

Using a different naming-scheme for APIVERSION

I used the MAJOR.MINOR version numbers from semver to form the APIVERSION. I did this because only the MAJOR and the MINOR are technically involved in API changes (unless you are doing semantic versioning wrong – in which case see ‘Coming up with your own versioning scheme’).

Some projects only use MAJOR. An example is Qt, which puts the MAJOR number behind the Qt part. For example libQt5Core.so.VERSION (so that's "Qt" + MAJOR + Module). The GLib world, however, uses "g" + Module + "-" + MAJOR + ".0" as they have releases like 2.2, 2.3, 2.4 that are all called libglib-2.0.so.VERSION. I guess they figured that maybe someday in their 2.x series, they could use that MINOR field?

DBus seems to be using a similar thing to GLib, but then without the MINOR suffix: libdbus-1.so.VERSION. For their GLib integration they also use it as libdbus-glib-1.so.VERSION.

Who is right, who is wrong? It doesn’t matter too much for your APIVERSION naming scheme. As long as there is a way to differentiate the API in a) the include path, b) the pkg-config filename and c) the library that will be linked with (the -l parameter during linking/compiling). Maybe someday a standard will be defined? Let’s hope so.

Differences in interpretation per platform

FreeBSD

FreeBSD's Shared Libraries section of Chapter 5, Source Tree Guidelines and Policies, states:

The three principles of shared library building are:

  1. Start from 1.0
  2. If there is a change that is backwards compatible, bump minor number (note that ELF systems ignore the minor number)
  3. If there is an incompatible change, bump major number

For instance, added functions and bugfixes result in the minor version number being bumped, while deleted functions, changed function call syntax, etc. will force the major version number to change.

I think that when using libtool on FreeBSD (i.e. when you use autotools), the platform will provide a variant of libtool's scripts that will convert the earlier mentioned current, revision and age rules to FreeBSD's. The same goes for the VERSION variable in cmake and qmake. Meaning that with those three build environments, you can just use the rules for GNU libtool's -version-info.

I could be wrong on this, but I did find mailing list E-mails from ~ 2011 stating that this SNAFU is dealt with. Besides, the *BSD porters otherwise know what to do and you could of course always ask them about it.

Note that FreeBSD’s rules are or seem to be compatible with the rules for VERSION when you don’t want to care about libtool’s -version-info compatibility. However, when you are porting from a libtoolized project, then of course you don’t want to let newer releases break against releases that have already happened.

Modern Linux distributions

Nowadays you sometimes see things like /usr/lib/$ARCH/libpackage-APIVERSION.so linking to /lib/$ARCH/libpackage-APIVERSION.so.VERSION. I have no idea how this mechanism works. I suppose this is being done by packagers of various Linux distributions? I also don’t know if there is a standard for this.

I will update the examples and this document the moment I know more and/or if upstream developers need to worry about it. I think that using GNUInstallDirs in cmake, for example, makes everything go right. I have not found much for this in qmake, meson seems to be doing this by default and in autotools you always use platform variables for such paths.

As usual, I hope standards will be made and that the build environment and packaging community gets to their senses and stops leaving this into the hands of developers. I especially think about qmake, which seems to not have much at all to state that standardized installation paths must be used (not even a proper way to define a prefix).

Questions that I can imagine already exist

Why is there a difference between APIVERSION and VERSION?

The API version is the version of your programmable interfaces. This means the version of your header files (if your programming language has such header files), the version of your pkgconfig file, the version of your documentation. The API is what software developers need to utilize your library.

The ABI version can definitely be different and it is what programs that are compiled and installable need to utilize your library.

An API breaks when recompiling a program that consumes libpackage-4.3.so.2, without any changes to that program, is no longer going to succeed at compile time. The API got broken the moment any possible way in which package's API was used no longer compiles. Yes, any way. It means that a libpackage-5.0.so.0 should be started.

An ABI breaks when, without recompiling the program, replacing a libpackage-4.3.so.2.1.0 with a libpackage-4.3.so.2.2.0 or a libpackage-4.3.so.2.1.1 (or later) as libpackage-4.3.so.2 is not going to succeed at runtime. For example because it would crash, or because the results would be wrong (in any way). It implies that libpackage-4.3.so.2 shouldn't be overwritten, but that libpackage-4.3.so.3 should be started.

For example, when you change a parameter of a function in C from an integer to a floating point (and/or the other way around), then that's an ABI change but not necessarily an API change.

What is this SOVERSION about?

In most projects that got ported from an environment that uses GNU libtool (for example autotools) to for example cmake or meson, and in the rare cases that they did anything at all in a qmake based project, I saw people converting the current, revision and age parameters that they passed to the -version-info option of libtool to a string concatenated together using (current – age), age, revision as VERSION, and (current – age) as SOVERSION.

I wanted to use the exact same rules for versioning for all these examples, including autotools and GNU libtool. When you don’t have to (or want to) care about libtool’s set of (for some people, needlessly complicated) -version-info rules, then it should be fine using just SOVERSION and VERSION using these rules:

  • SOVERSION = Major version
  • Major version: increase it if you break ABI compatibility
  • Minor version: increase it if you add ABI compatible features
  • Patch version: increase it for bug fix releases.

I, however, also sometimes saw variations that are incomprehensible with little explanation and magic foo invented on the spot. Those variations are probably wrong.

In the example I made it so that in the root build file of the project you can change the numbers and the calculation for the numbers. However, do follow the rules for those correctly, as this versioning is about ABI compatibility. Doing this wrong can make things blow up in spectacular ways.

The examples

qmake in the qmake-example

Note that the VERSION variable must be filled in as “(current – age).age.revision” for qmake (to get 2.1.0 at the end, you need VERSION=2.1.0 when current=3, revision=0 and age=1)

To try this example out, go to the qmake-example directory and type

$ cd qmake-example
$ mkdir _test
$ qmake PREFIX=$PWD/_test
$ make
$ make install

This should give you this:

$ find _test/
_test/
├── include
│   └── qmake-example-4.3
│       └── qmake-example.h
└── lib
    ├── libqmake-example-4.3.so -> libqmake-example-4.3.so.2.1.0
    ├── libqmake-example-4.3.so.2 -> libqmake-example-4.3.so.2.1.0
    ├── libqmake-example-4.3.so.2.1 -> libqmake-example-4.3.so.2.1.0
    ├── libqmake-example-4.3.so.2.1.0
    ├── libqmake-example-4.la
    └── pkgconfig
        └── qmake-example-4.3.pc

When you now use pkg-config, you get a nice CFLAGS and LIBS line back (I’m replacing the current path with $PWD in the output each time):

$ export PKG_CONFIG_PATH=$PWD/_test/lib/pkgconfig
$ pkg-config qmake-example-4.3 --cflags
-I$PWD/_test/include/qmake-example-4.3
$ pkg-config qmake-example-4.3 --libs
-L$PWD/_test/lib -lqmake-example-4.3

And it means that you can do things like this now (and people who know about pkg-config will now be happy to know that they can use your library in their own favorite build environment).

$ export LD_LIBRARY_PATH=$PWD/_test/lib
$ echo -en "#include <qmake-example.h>\nmain() {} " > test.cpp
$ g++ -fPIC test.cpp -o test.o `pkg-config qmake-example-4.3 --libs --cflags`

You can see that it got linked to libqmake-example-4.3.so.2, where that 2 at the end is (current – age).

$ ldd test.o 
    linux-gate.so.1 (0xb77b0000)
    libqmake-example-4.3.so.2 => $PWD/_test/lib/libqmake-example-4.3.so.2 (0xb77a6000)
    libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb75f5000)
    libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb759e000)
    libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb7580000)
    libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb73c9000)
    /lib/ld-linux.so.2 (0xb77b2000)

cmake in the cmake-example

Note that the VERSION property on your library target must be filled in with “(current – age).age.revision” for cmake (to get 2.1.0 at the end, you need VERSION=2.1.0 when current=3, revision=0 and age=1. Note that in cmake you must also fill in the SOVERSION property as (current – age), so SOVERSION=2 when current=3 and age=1).

To try this example out, go to the cmake-example directory and do

$ cd cmake-example
$ mkdir _test
$ cmake -DCMAKE_INSTALL_PREFIX:PATH=$PWD/_test
-- Configuring done
-- Generating done
-- Build files have been written to: .
$ make
[ 50%] Building CXX object src/libs/cmake-example/CMakeFiles/cmake-example.dir/cmake-example.cpp.o
[100%] Linking CXX shared library libcmake-example-4.3.so
[100%] Built target cmake-example
$ make install
[100%] Built target cmake-example
Install the project...
-- Install configuration: ""
-- Installing: $PWD/_test/lib/libcmake-example-4.3.so.2.1.0
-- Up-to-date: $PWD/_test/lib/libcmake-example-4.3.so.2
-- Up-to-date: $PWD/_test/lib/libcmake-example-4.3.so
-- Up-to-date: $PWD/_test/include/cmake-example-4.3/cmake-example.h
-- Up-to-date: $PWD/_test/lib/pkgconfig/cmake-example-4.3.pc

This should give you this:

$ tree _test/
_test/
├── include
│   └── cmake-example-4.3
│       └── cmake-example.h
└── lib
    ├── libcmake-example-4.3.so -> libcmake-example-4.3.so.2
    ├── libcmake-example-4.3.so.2 -> libcmake-example-4.3.so.2.1.0
    ├── libcmake-example-4.3.so.2.1.0
    └── pkgconfig
        └── cmake-example-4.3.pc

When you now use pkg-config, you get a nice CFLAGS and LIBS line back (I’m replacing the current path with $PWD in the output each time):

$ pkg-config cmake-example-4.3 --cflags
-I$PWD/_test/include/cmake-example-4.3
$ pkg-config cmake-example-4.3 --libs
-L$PWD/_test/lib -lcmake-example-4.3

And it means that you can do things like this now (and people who know about pkg-config will now be happy to know that they can use your library in their own favorite build environment):

$ echo -en "#include <cmake-example.h>\nmain() {} " > test.cpp
$ g++ -fPIC test.cpp -o test.o `pkg-config cmake-example-4.3 --libs --cflags`

You can see that it got linked to libcmake-example-4.3.so.2, where that 2 at the end is the SOVERSION. This is (current – age).

$ ldd test.o
    linux-gate.so.1 (0xb7729000)
    libcmake-example-4.3.so.2 => $PWD/_test/lib/libcmake-example-4.3.so.2 (0xb771f000)
    libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb756e000)
    libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb7517000)
    libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb74f9000)
    libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb7342000)
    /lib/ld-linux.so.2 (0xb772b000)

autotools in the autotools-example

Note that you pass -version-info current:revision:age directly with autotools. The libtool will translate that to (current – age).age.revision to form the so’s filename (to get 2.1.0 at the end, you need current=3, revision=0, age=1).

To try this example out, go to the autotools-example directory and do

$ cd autotools-example
$ mkdir _test
$ libtoolize
$ aclocal
$ autoheader
$ autoconf
$ automake --add-missing
$ ./configure --prefix=$PWD/_test
$ make
$ make install

This should give you this:

$ tree _test/
_test/
├── include
│   └── autotools-example-4.3
│       └── autotools-example.h
└── lib
    ├── libautotools-example-4.3.a
    ├── libautotools-example-4.3.la
    ├── libautotools-example-4.3.so -> libautotools-example-4.3.so.2.1.0
    ├── libautotools-example-4.3.so.2 -> libautotools-example-4.3.so.2.1.0
    ├── libautotools-example-4.3.so.2.1.0
    └── pkgconfig
        └── autotools-example-4.3.pc

When you now use pkg-config, you get a nice CFLAGS and LIBS line back (I’m replacing the current path with $PWD in the output each time):

$ export PKG_CONFIG_PATH=$PWD/_test/lib/pkgconfig
$ pkg-config autotools-example-4.3 --cflags
-I$PWD/_test/include/autotools-example-4.3
$ pkg-config autotools-example-4.3 --libs
-L$PWD/_test/lib -lautotools-example-4.3

And it means that you can do things like this now (and people who know about pkg-config will now be happy to know that they can use your library in their own favorite build environment):

$ echo -en "#include <autotools-example.h>\nmain() {} " > test.cpp
$ export LD_LIBRARY_PATH=$PWD/_test/lib
$ g++ -fPIC test.cpp -o test.o `pkg-config autotools-example-4.3 --libs --cflags`

You can see that it got linked to libautotools-example-4.3.so.2, where that 2 at the end is (current – age).

$ ldd test.o 
    linux-gate.so.1 (0xb778d000)
    libautotools-example-4.3.so.2 => $PWD/_test/lib/libautotools-example-4.3.so.2 (0xb7783000)
    libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb75d2000)
    libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb757b000)
    libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb755d000)
    libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb73a6000)
    /lib/ld-linux.so.2 (0xb778f000)

meson in the meson-example

Note that the version property on your library target must be filled in with “(current – age).age.revision” for meson (to get 2.1.0 at the end, you need version=2.1.0 when current=3, revision=0 and age=1. Note that in meson you must also fill in the soversion property as (current – age), so soversion=2 when current=3 and age=1).

To try this example out, go to the meson-example directory and do

$ cd meson-example
$ mkdir -p _build/_test
$ cd _build
$ meson .. --prefix=$PWD/_test
$ ninja
$ ninja install

This should give you this:

$ tree _test/
_test/
├── include
│   └── meson-example-4.3
│       └── meson-example.h
└── lib
    └── i386-linux-gnu
        ├── libmeson-example-4.3.so -> libmeson-example-4.3.so.2.1.0
        ├── libmeson-example-4.3.so.2 -> libmeson-example-4.3.so.2.1.0
        ├── libmeson-example-4.3.so.2.1.0
        └── pkgconfig
            └── meson-example-4.3.pc

When you now use pkg-config, you get a nice CFLAGS and LIBS line back (I’m replacing the current path with $PWD in the output each time):

$ export PKG_CONFIG_PATH=$PWD/_test/lib/i386-linux-gnu/pkgconfig
$ pkg-config meson-example-4.3 --cflags
-I$PWD/_test/include/meson-example-4.3
$ pkg-config meson-example-4.3 --libs
-L$PWD/_test/lib -lmeson-example-4.3

This means you can now do things like the following (and people who know pkg-config will be happy to hear that they can use your library in their own favorite build environment):

$ echo -en "#include <meson-example.h>\nint main() {}" > test.cpp
$ export LD_LIBRARY_PATH=$PWD/_test/lib/i386-linux-gnu
$ g++ -fPIC test.cpp -o test.o `pkg-config meson-example-4.3 --libs --cflags`

You can see that it got linked to libmeson-example-4.3.so.2, where that 2 at the end is the soversion. This is (current – age).

$ ldd test.o 
    linux-gate.so.1 (0xb772e000)
    libmeson-example-4.3.so.2 => $PWD/_test/lib/i386-linux-gnu/libmeson-example-4.3.so.2 (0xb7724000)
    libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb7573000)
    libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb751c000)
    libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb74fe000)
    libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb7347000)
    /lib/ld-linux.so.2 (0xb7730000)
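
If you want to double-check which soname actually got embedded in the library (the name that ldd resolves against), you can inspect the ELF dynamic section with readelf; for the meson build it should report libmeson-example-4.3.so.2 (and libautotools-example-4.3.so.2 for the autotools build):

$ readelf -d _test/lib/i386-linux-gnu/libmeson-example-4.3.so.2.1.0 | grep SONAME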

August 03, 2018

So it’s probably the crazy nineties and you have Adriano Celentano in a live show joined by the young Manu Chao (then the “leader” of Mano Negra) and they perform a weird mix of “Prisencolinensinainciusol” and “King Kong 5” and they throw in an interview during the song and there’s lots of people dancing, a pretty awkward upskirt shot and general craziness.

This must (have) be(en) Italian TV, mustn’t it?

(Embedded YouTube video.)

August 01, 2018

Today, Acquia was named a leader in the 2018 Gartner Magic Quadrant for Web Content Management. Acquia has now been recognized as a leader for five years in a row.

Acquia recognized as a leader, next to Adobe and Sitecore, in the 2018 Gartner Magic Quadrant for Web Content Management.

Analyst reports like the Gartner Magic Quadrant are important because they introduce organizations to Acquia and Drupal. Last year, I explained it in the following way: "If you want to find a good coffee place, you use Yelp. If you want to find a nice hotel in New York, you use TripAdvisor. Similarly, if a CIO or CMO wants to spend $250,000 or more on enterprise software, they often consult an analyst firm like Gartner."

Our tenure as a top vendor is not only a strong endorsement of Acquia's strategy and vision, but also underscores our consistency. Drupal and Acquia are here to stay!

What I found interesting about this year's report is the increased emphasis on flexibility and ease of integration. I've been saying this for a few years now, but it's all about innovation through integration, rather than just innovation in the core platform itself.

An image of the Marketing Technology Landscape 2018. For reference, here are the 2011, 2012, 2014, 2015, 2016 and 2017 versions of the landscape. It shows how fast the marketing technology industry is growing.

Just look at the 2018 Martech 5000: the supergraphic now includes 7,000 marketing technology solutions, which is a 27% increase from a year ago. This accelerated innovation isn't exclusive to marketing technology; it's happening across every part of the enterprise technology stack. From headless commerce integrations to the growing adoption of JavaScript frameworks and emerging cross-channel experiences, organizations have the opportunity to re-imagine customer experiences like never before.

It's not surprising that customers are looking for an open platform that allows for open innovation and unlimited integrations. The best way to serve this need is through open APIs, decoupled architectures and an Open Source innovation model. This is why Drupal can offer its users thousands of integrations, more than all of the other Gartner leaders combined.

Acquia Experience Platform

When you marry Drupal's community-driven innovation with Acquia's cloud platform and suite of marketing tools, you get an innovative solution across every layer of your technology stack. It allows our customers to bring powerful new experiences to market, across the web, mobile, native applications, chatbots and more. Most importantly, it gives customers the freedom to build on their own terms.

Thank you to everyone who contributed to this result!

When creating an application that follows The Clean Architecture you end up with a number of UseCases that hold your application logic. In this blog post I outline a testing pattern for effectively testing these UseCases and avoiding common pitfalls.

Testing UseCases

A UseCase contains the application logic for a single “action” that your system supports. For instance “cancel a membership”. This application logic interacts with the domain and various services. These services and the domain should have their own unit and integration tests. Each UseCase gets used in one or more applications, where it gets invoked from inside the presentation layer. Typically you want to have a few integration or edge-to-edge tests that cover this invocation. In this post I look at how to test the application logic of the UseCase itself.

UseCases tend to have “many” collaborators. I can’t recall any that had less than 3. For the typical UseCase the number is likely closer to 6 or 7, with more collaborators being possible even when the design is good. That means constructing a UseCase takes some work: you need to provide working instances of all the collaborators.

Integration Testing

One way to deal with this is to write integration tests for your UseCases. Simply get an instance of the UseCase from your Top Level Factory or Dependency Injection Container.

This approach often requires you to mutate the factory or DIC. Want to test that an exception from the persistence service gets handled properly? You’ll need to use some test double instead of the real service, or perhaps mutate the real service in some way. Want to verify a mail got sent? You definitely want to use a Spy here instead of the real service. Mutability comes with a cost, so it is better avoided.
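
To illustrate, such a Spy could look roughly like the MailerSpy used further down in this post (a minimal sketch: the sendMail() signature is an assumption, the real TemplateMailerInterface may differ):

class MailerSpy implements TemplateMailerInterface {

    private $sendMailCalls = [];

    public function sendMail( string $recipient, array $templateArguments = [] ): void {
        // Record the call instead of sending anything, so tests can assert on it.
        $this->sendMailCalls[] = [ $recipient, $templateArguments ];
    }

    public function getSendMailCalls(): array {
        return $this->sendMailCalls;
    }

}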

A second issue with using real collaborators is that your tests get slow due to real persistence usage. Even using an in-memory SQLite database (that needs initialization) instead of a simple in-memory fake repository makes for a speed difference of easily two orders of magnitude.

Unit Testing

While there might be some cases where integration tests make sense, normally it is better to write unit tests for UseCases. This means having test doubles for all collaborators. Which leads us to the question of how to best inject these test doubles into our UseCases.

As an example I will use the CancelMembershipApplicationUseCase of the Wikimedia Deutschland fundraising application.

function __construct(ApplicationAuthorizer $authorizer, ApplicationRepository $repository, TemplateMailerInterface $mailer) {
    $this->authorizer = $authorizer;
    $this->repository = $repository;
    $this->mailer = $mailer;
}

This UseCase uses 3 collaborators: an authorization service, a repository (persistence service) and a mailing service. First it checks whether the operation is allowed with the authorizer, then it interacts with the persistence service and finally, if all went well, it uses the mailing service to send a confirmation email. Our unit test should cover all this behavior and needs to inject test doubles for the 3 collaborators.
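
To make that flow concrete, the cancelApplication method looks roughly like this (an illustrative sketch, not the literal Wikimedia Deutschland code: the collaborator method names, the response class and the exception type are assumptions):

public function cancelApplication( CancellationRequest $request ): CancellationResponse {
    $application = $this->repository->getApplicationById( $request->getApplicationId() );

    // An unknown application or failed authorization results in a failed cancellation.
    if ( $application === null || !$this->authorizer->canModifyApplication( $application ) ) {
        return CancellationResponse::newFailureResponse();
    }

    $application->cancel();

    try {
        $this->repository->storeApplication( $application );
    }
    catch ( \RuntimeException $ex ) {
        // A persistence failure also results in a failed cancellation.
        return CancellationResponse::newFailureResponse();
    }

    // The confirmation mail is only sent once everything else succeeded.
    $this->mailer->sendMail( $application->getApplicantEmailAddress() );

    return CancellationResponse::newSuccessResponse();
}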

The most obvious approach is to construct the UseCase in each test method.

public function testGivenIdOfUnknownDonation_cancellationIsNotSuccessful(): void {
    $useCase = new CancelMembershipApplicationUseCase(
        new SucceedingAuthorizer(),
        $this->newRepositoryWithCancellableDonation(),
        new MailerSpy()
    );

    $response = $useCase->cancelApplication(
        new CancellationRequest( self::ID_OF_NON_EXISTING_APPLICATION )
    );

    $this->assertFalse( $response->cancellationWasSuccessful() );
}

public function testGivenIdOfCancellableApplication_cancellationIsSuccessful(): void {
    $useCase = new CancelMembershipApplicationUseCase(
        new SucceedingAuthorizer(),
        $this->newRepositoryWithCancellableDonation(),
        new MailerSpy()
    );
    
    $response = $useCase->cancelApplication(
        new CancellationRequest( $this->cancelableApplication->getId() )
    );

    $this->assertTrue( $response->cancellationWasSuccessful() );
}

Note how both these test methods use the same test doubles. This is not always the case: when testing authorization failure the test double for the authorizer service will differ, and when testing persistence failure the test double for the persistence service will differ.

public function testWhenAuthorizationFails_cancellationFails(): void {
    $useCase = new CancelMembershipApplicationUseCase(
        new FailingAuthorizer(),
        $this->newRepositoryWithCancellableDonation(),
        new MailerSpy()
    );

    $response = $useCase->cancelApplication(
        new CancellationRequest( $this->cancelableApplication->getId() )
    );

    $this->assertFalse( $response->cancellationWasSuccessful() );
}

Normally a test function will only change a single test double.

UseCases tend to have, on average, two or more behaviors (and thus tests) per collaborator. That means for most UseCases you will be repeating the construction of the UseCase in a dozen or more test functions. That is a problem. Ask yourself why.

If the answer you came up with was DRY then think again and read my blog post on DRY 😉 The primary issue is that you couple each of those test methods to the list of collaborators. So when the constructor signature of your UseCase changes, you will need to do Shotgun Surgery and update all test functions. Even if those tests have nothing to do with the changed collaborator. A second issue is that you pollute the test methods with irrelevant details, making them harder to read.

Default Test Doubles Pattern

The pattern is demonstrated using PHP + PHPUnit and will need some adaptation when using a testing framework that does not work with a class based model like that of PHPUnit.

The coupling to the constructor signature and the resulting Shotgun Surgery can be avoided by having a default instance of the UseCase filled with the right test doubles. This can be done by having a newUseCase method that constructs the UseCase and returns it. A way to change specific collaborators is needed (e.g. a FailingAuthorizer to test handling of failing authorization).

private function newUseCase() {
    return new CancelMembershipApplicationUseCase(
        new SucceedingAuthorizer(),
        new InMemoryApplicationRepository(),
        new MailerSpy()
    );
}

Making the UseCase itself mutable is a big no-no. Adding optional parameters to the newUseCase method works in languages that have named parameters. Since PHP does not have named parameters, another solution is needed.

An alternative approach to getting modified collaborators into the newUseCase method is using fields. This is less nice than named parameters, as it introduces mutable state on the level of the test class. Since in PHP this approach gives us named fields and is understandable by tools, it is better than either using a positional list of optional arguments or emulating named arguments with an associative array (key-value map).

The fields can be set in the setUp method, which gets called by PHPUnit before the test methods. For each test method PHPUnit instantiates the test class, then calls setUp, and then calls the test method.

public function setUp() {
    $this->authorizer = new SucceedingAuthorizer();
    $this->repository = new InMemoryApplicationRepository();
    $this->mailer = new MailerSpy();

    $this->cancelableApplication = ValidMembershipApplication::newDomainEntity();
    $this->repository->storeApplication( $this->cancelableApplication );
}

private function newUseCase(): CancelMembershipApplicationUseCase {
    return new CancelMembershipApplicationUseCase(
        $this->authorizer,
        $this->repository,
        $this->mailer
    );
}

With this field-based approach individual test methods can modify a specific collaborator by writing to the field before calling newUseCase.

public function testWhenAuthorizationFails_cancellationFails(): void {
    $this->authorizer = new FailingAuthorizer();

    $response = $this->newUseCase()->cancelApplication(
        new CancellationRequest( $this->cancelableApplication->getId() )
    );

    $this->assertFalse( $response->cancellationWasSuccessful() );
}

public function testWhenSaveFails_cancellationFails() {
    $this->repository->throwOnWrite();

    $response = $this->newUseCase()->cancelApplication(
        new CancellationRequest( $this->cancelableApplication->getId() )
    );

    $this->assertFalse( $response->cancellationWasSuccessful() );
}
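
For completeness, the throwOnWrite call above belongs to the in-memory repository test double, which could look roughly like this (a sketch: the getApplicationById method and the exception type are assumptions, only storeApplication is taken from the test code above):

class InMemoryApplicationRepository implements ApplicationRepository {

    private $applications = [];
    private $throwOnWrite = false;

    public function throwOnWrite(): void {
        $this->throwOnWrite = true;
    }

    public function storeApplication( $application ): void {
        if ( $this->throwOnWrite ) {
            // The real double presumably throws a domain-specific storage exception.
            throw new \RuntimeException( 'Simulated persistence failure' );
        }
        $this->applications[$application->getId()] = $application;
    }

    public function getApplicationById( int $id ) {
        return $this->applications[$id] ?? null;
    }

}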

The choice of default collaborators is important. To minimize binding in the test functions, the default collaborators should not cause any failures. This is the case both when using the field-based approach and when using optional named parameters.

If the authorization service failed by default, most test methods would need to modify it, even if they have nothing to do with authorization. And it is not always self-evident they need to modify the unrelated collaborator. Imagine the default authorization service indeed fails and that the testWhenSaveFails_cancellationFails test method forgets to modify it. This test method would end up passing even if the behavior it tests is broken, since the UseCase will return the expected failure result even before getting to the point where it saves something.

This is why inside of the setUp function the example creates a “cancellable application” and puts it inside an in-memory test double of the repository.

I chose the CancelMembershipApplication UseCase as an example because it is short and easy to understand. For most UseCases it is even more important to avoid the constructor signature coupling as this issue becomes more severe with size. And no matter how big or small the UseCase is, you benefit from not polluting your tests with unrelated setup details.

You can view the whole CancelMembershipApplicationUseCase and CancelMembershipApplicationUseCaseTest.


July 31, 2018

I published the following diary on isc.sans.org: “Exploiting the Power of Curl“:

Didier explained in a recent diary that it is possible to analyze malicious documents with standard Linux tools. I’ve been using Linux for more than 20 years and I regularly find new commands or new switches that help me perform recurring (boring?) tasks in a more efficient way. How to use these tools can be found by running them with the flag ‘-h’ or ‘--help’. They also have a corresponding man page that describes precisely how to use the numerous options available (just type ‘man <command>’ in your shell)… [Read more]

[The post [SANS ISC] Exploiting the Power of Curl was first published on /dev/random]

That Francken, didn't he swear an oath on our Belgian constitution as state secretary?

Because claiming that his hypothetical assumptions stand above a ruling of the courts goes against one of the principles of our constitution: the separation of powers. Someone who holds office, has sworn an oath on that constitution and then goes completely against it commits perjury and is liable to prosecution.

A state secretary who cannot keep his oath and has no respect for the Belgian constitution cannot, as far as I am concerned, stay in office. No matter how popular his populist drivel makes him.

July 30, 2018

Digital backpack

I recently heard a heart-warming story from the University of California, Davis. Last month, UC Davis used Drupal to launch Article 26 Backpack, a platform that helps Syrian Refugees document and share their educational credentials.

Over the course of the Syrian civil war, more than 12 million civilians have been displaced. Hundreds of thousands of these refugees are students, who now have to overcome the obstacle of re-entering the workforce or pursuing educational degrees away from home.

Article 26 Backpack addresses this challenge by offering refugees a secure way to share their educational credentials with admissions offices, scholarship agencies, and potential employers. The program also includes face-to-face counseling to provide participants with academic advising and career development.

The UC Davis team launched their Drupal 8 application for Article 26 Backpack in four months. On the site, students can securely store their educational data, such as diplomas, transcripts and resumes. The next phase of the project will be to leverage Drupal's multilingual capabilities to offer the site in Arabic as well.

This is a great example of how organizations are using Drupal to prioritize impact. It's always inspiring to hear stories of how Drupal is changing lives for the better. Thank you to the UC Davis team for sharing their story, and keep up the good work!

Toys

No, despite the title, this isn't a post about Microsoft. I'm talking about the Xiaomi M365 electric step. Lately I've been looking to use my car far less, partially because parking space is very limited at the train station. Getting fines for not parking at designated places surely doesn't help either. It took me a while to obtain an M365 before summer, but eventually I got it. The e-step has a range of 20 km (in my case), which just suffices for the round trip to and from the station. The step has a maximum speed of 25 km/h, which is 'acceptable': I would have preferred a bit more, as overtaking bikes sometimes takes a while.

This e-step is quite high-tech: it features cruise control, ABS and KERS, which means I hardly ever use the brakes. Cruising at 25 km/h really is a blast, and I have become quite fond of my daily ride with it. Additionally, it allows me to explore different routes, is more versatile than a bike, and can be taken with me on the train (although even when folded it is quite large).

There's quite an active group of 'developers' around this e-step, creating custom firmware that lets you change various parameters such as the maximum speed or the KERS behaviour. I've tested out a few, but the additional speed has too much of an impact on the battery, so I decided to stick with the official firmware.

Mobile phones

I take back anything I said about the Xiaomi Mi Mix. Well, at least about the slippery part: five months after my purchase, I placed the phone on a pile of papers on my desk. Five minutes later it must have noticed it wasn't placed 100% horizontally and decided to go for a walk. It fell on the floor: cracked screen. This thing *is* as slippery as soap.

I still find the design of the phone the most beautiful I've ever seen, and the large screen is just gorgeous! But you cannot use this phone without a cover, and that just breaks the beauty of it. That, combined with the fact that mobile reception is zero without band 20 support and that a light sensor at the bottom of the screen is simply irritating, was reason enough for me not to order another Mi Mix.

I've switched to a OnePlus 5T as my new phone. The design pales in comparison with the Mi Mix, but the development support is fantastic, and my OnePlus One was brilliant. I hope I'll be able to say the same about the OP5T in three to four years.