Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.


February 19, 2021

I published the following diary on isc.sans.edu: “Dynamic Data Exchange (DDE) is Back in the Wild?”:

DDE or “Dynamic Data Exchange” is a Microsoft technology for interprocess communication, used in early versions of Windows and OS/2. DDE allows programs to manipulate objects provided by other programs and respond to user actions affecting those objects. For a while, DDE was partially replaced by Object Linking and Embedding (OLE), but it’s still available in the latest versions of the Microsoft operating system for backward-compatibility reasons. If fashion is known to be in a state of perpetual renewal, we could say the same about the cybersecurity landscape. Yesterday, I spotted a malicious Word document that abused this DDE technology… [Read more]

The post [SANS ISC] Dynamic Data Exchange (DDE) is Back in the Wild? appeared first on /dev/random.

myMail is a popular (10M+ downloads!) alternative email client for mobile devices. Available for iOS and Android, it is a powerful email client compatible with most mail providers (POP3/IMAP, Gmail, Yahoo!, Outlook, and even ActiveSync). Recently, I was involved in an incident related to a malicious use of myMail. I had a closer look at the application and how it works, and found something weird…

[Note: I tested the Android version of this app; the iOS version was not tested and I don’t know if it behaves in the same way.]

I installed version 6.4.0.24788 on a lab Android device. This device is configured to use a BurpSuite instance to log and inspect all the HTTP traffic. As usual in the Android ecosystem, many permissions are requested by the application; besides the classic ones (access to contacts, Internet, storage, …), these two look more suspicious:

  • android.permission.REQUEST_INSTALL_PACKAGES: allows an application to request installing packages.
  • android.permission.WRITE_CONTACTS: allows an application to change contacts.

Once you have installed the application on your device, it’s time for the basic configuration. This is a pretty straightforward process: you enter your credentials (email address + password) and the application takes care of everything for you (if your email platform supports autodiscovery services).

Now that you have entered your email and password, you can consider them lost, because myMail sends them to the backend architecture:

GET /cgi-bin/auth?Password=XXXXXXXX&mobile=1&mob_json=1&simple=1&useragent=android&Login=johndoe%40pwn3d.be&Lang=en_GB&mp=android&mmp=mail&DeviceID=9ff695be583a7eb72048dbe5e321189a&client=mobile&DeviceInfo=%7B%22playservices%22%3A%2211400000%22%2C%22Timezone%22%3A%22%2B0100%22%2C%22OS%22%3A%22Android%22%2C%22DeviceName%22%3A%22samsung%20GT-S7275R%22%2C%22Device%22%3A%22GT-S7275R%22%2C%22AppVersion%22%3A%22com.my.mail6.4.0.24788%22%7D&idfa=f70118e8-2e56-4e75-aa3e-8ab003c1ce24&appsflyerid=1613031754648-6863156579261663340&current=google&first=google&md5_signature=be20216f2b3114b37c3c6a6977d04f89 HTTP/1.1
User-Agent: mobmail android 6.4.0.24788 com.my.mail
Host: aj-https[.]my[.]com
Connection: close
Accept-Encoding: gzip, deflate

Indeed, the application does not talk directly to your mail server but uses the my.com cloud infrastructure to take care of all email-related communications. The mobile app is just a client for the API, and all traffic happens over HTTPS. This technique is used to provide the “push services”.
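
To see exactly which fields leave the device, the query string of the authentication request above can be decoded with a few lines of Python (an illustrative sketch using only the standard library; the URL below is a shortened version of the captured one):

```python
from urllib.parse import urlsplit, parse_qs

# Shortened version of the captured myMail authentication request
url = ("https://aj-https.my.com/cgi-bin/auth?Password=XXXXXXXX&mobile=1"
       "&Login=johndoe%40pwn3d.be&DeviceID=9ff695be583a7eb72048dbe5e321189a"
       "&client=mobile")

# parse_qs decodes percent-encoding and maps each parameter to its values
params = parse_qs(urlsplit(url).query)
for name, values in params.items():
    print(f"{name} = {values[0]}")

# Note how both the mailbox login and its password travel (over HTTPS)
# to the my.com backend, not to your own mail server.
```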

Once configured, the app will poll the myMail cloud for messages:

GET /api/v1/messages/status?prefetch=1&sort=%7B%22type%22%3A%22id%22%2C%20%22order%22%3A%22desc%22%7D&snippet_limit=183&last_modified=1613681670&folder=0&offset=0&limit=20&email=johndoe%40pwn3d.be&token=1600745569219451055721945105573116678049%3A510f584c63015605190504070904050007020a01040003000406010403000000040200090c0d06030006030102090403XXXXXXXXXXXXXXXX&htmlencoded=false&mp=android&mmp=mail&DeviceID=9ff695be583a7eb72048dbe5e321189a&client=mobile&playservices=11400000&os=Android&os_version=4.2.2&ver=com.my.mail6.4.0.24788&vendor=samsung&model=GT-S7275R&device_type=Smartphone&country=GB&language=en_GB&timezone=%2B0100&device_name=samsung%20GT-S7275R&idfa=f70118e8-2e56-4e75-aa3e-8ab003c1ce24&device_year=2011&connection_class=MODERATE&current=google&first=google&appsflyerid=1613031754648-6863156579261663340&reqmode=fg&ExperimentID=Experiment_simple_signin&isExperiment=false&md5_signature=e29d6417e6ab8e90eabab6bdc5096c9e HTTP/1.1
Accept-Encoding: gzip, deflate
User-Agent: mobmail android 6.4.0.24788 com.my.mail
Cookie: Mpop=1613681611:510f584c630156051905000017031f051c054f6c5150445e05190401041d5b56585955565c19XXXXXXXXXXXXXXXX:johndoe@pwn3d.be:;domain=my.com
Host: aj-https[.]my[.]com
Connection: close

Because all the traffic passes through the cloud, your attached files are also stored on their infrastructure. Here is an example of a file download:

GET /cgi-bin/readmsg/cacert.cer?rid=354637854942502553472181455302888008800&&id=16130393950000000005;0;0&notype=1&notype=1&mp=android&mmp=mail&DeviceID=9ff695be583a7eb72048dbe5e321189a&client=mobile&playservices=11400000&os=Android&os_version=4.2.2&ver=com.my.mail6.4.0.24788&vendor=samsung&model=GT-S7275R&device_type=Smartphone&country=GB&language=en_GB&timezone=%2B0100&device_name=samsung%20GT-S7275R&idfa=f70118e8-2e56-4e75-aa3e-8ab003c1ce24&device_year=2011&connection_class=MODERATE&current=google&first=google&appsflyerid=1613031754648-6863156579261663340&reqmode=bg&ExperimentID=Experiment_simple_signin&isExperiment=false&md5_signature=3b75207d6ed3ff58fbbf874a2f13bebb HTTP/1.1
Accept-Encoding: gzip, deflate
User-Agent: mobmail android 6.4.0.24788 com.my.mail
Cookie: Mpop=1613681611:510f584c630156051905000017031f051c054f6c5150445e05190401041d5b56585955565c19XXXXXXXXXXXXXXXX:johndoe@pwn3d.be:;domain=my.com
Host: af[.]attachmy[.]com
Connection: close

But the worst was yet to come…

Once I had completed my investigation, I un-configured my test account. Before doing so, I performed a proper logout:

GET /cgi-bin/logout?mp=android&mmp=mail&DeviceID=9ff695be583a7eb72048dbe5e321189a&client=mobile&playservices=11400000&os=Android&os_version=4.2.2&ver=com.my.mail6.4.0.24788&vendor=samsung&model=GT-S7275R&device_type=Smartphone&country=GB&language=en_GB&timezone=%2B0100&device_name=samsung%20GT-S7275R&idfa=f70118e8-2e56-4e75-aa3e-8ab003c1ce24&device_year=2011&connection_class=GOOD&current=google&first=google&appsflyerid=1613031754648-6863156579261663340&reqmode=fg&ExperimentID=Experiment_simple_signin&isExperiment=false&md5_signature=11f4aa0e644d79f8cadfb4aa0cd3b2e5 HTTP/1.1
Accept-Encoding: gzip, deflate
User-Agent: mobmail android 6.4.0.24788 com.my.mail
Cookie: Mpop=1613681611:510f584c630156051905000017031f051c054f6c5150445e05190401041d5b56585955565c195XXXXXXXXXXXXXXXX:johndoe@pwn3d.be:
Host: aj-https[.]my[.]com
Connection: close

While checking my mail server logs later, I saw this:

Feb 18 21:27:48 marge dovecot: imap-login: Login: user=, method=PLAIN, rip=185.30.176.168, lip=192.168.255.13, mpid=5140, TLS, session=<0JtFK6K7+7C5HrCo>
Feb 18 21:27:49 marge dovecot: imap(johndoe): Logged out in=275 out=1145

The my.com cloud infrastructure is still connecting to the mail account at regular intervals! Connections are coming from two /24 networks:

  • 185.30.176.0/24
  • 185.30.177.0/24
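
To check whether connections in your own mail-server logs come from these ranges, here is a quick sketch using Python’s standard ipaddress module (illustrative; the addresses tested are the ones appearing in the Dovecot log above):

```python
import ipaddress

# The two my.com ranges observed polling the IMAP server
MYCOM_NETS = [ipaddress.ip_network("185.30.176.0/24"),
              ipaddress.ip_network("185.30.177.0/24")]

def is_mycom(ip: str) -> bool:
    """Return True if the address belongs to one of the my.com ranges."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in MYCOM_NETS)

print(is_mycom("185.30.176.168"))  # rip= from the Dovecot log -> True
print(is_mycom("192.168.255.13"))  # the local server address  -> False
```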

The next step was to kill the app, which was still running on the phone (but unconfigured). Same effect: the polling was (and still is) ongoing.

Once authenticated, my.com keeps a session with the mail server to fetch new emails. In my lab, it was a simple IMAP server:

root@marge:/var/tmp# netstat -anp | egrep "185\.30\.17[67]"
tcp 0 0 192.168.255.13:993 185.30.177.72:32735 ESTABLISHED 27234/dovecot/imap-

Let’s switch caps now and try to think like an attacker. myMail can also be very interesting because you can test credentials belonging to your target without any direct connection from your own network/devices! The application can be used as a “proxy”. In the case I was involved in, one of the users who was detected as a myMail user had never installed the application! This is perfect for monitoring / stealing data from your victim’s mailbox without leaving artefacts.

Conclusion: from a user’s point of view, myMail is a nice application for handling multiple mailboxes, especially when you don’t have a built-in push notification service, but if you use it in a corporate environment, you should keep in mind that they have ALL your data in their hands! By the way, my.com is an international subsidiary of mail.ru, which provides the well-known mail service of the same name. The application code is also full of Java classes like “ru/mail/mailapp/DTOConfigurationImpl.java” with references to mail.ru URLs:

  • hxxps://e[.]mail[.]ru/fines/documents?openStatus=1
  • hxxps://clicker[.]bk[.]ru/api/v1/clickerproxy/mobile
  • hxxps://aj-https[.]mail[.]ru/api/v1/gibdd/gmap
  • hxxps://help[.]mail[.]ru/mail/account/login/qr#noaccount
  • hxxps://account[.]mail[.]ru/login?opener=androidapp
  • hxxps://e[.]mail[.]ru/payment/center
  • hxxps://m[.]calendar[.]mail[.]ru/
  • hxxps://touch[.]calendar[.]mail[.]ru/
  • hxxps://touch[.]calendar[.]mail[.]ru/create
  • hxxps://todo[.]mail[.]ru/?source=webview
  • hxxps://todo[.]mail[.]ru/?source=superapp
  • hxxps://iframe[.]imgsmail[.]ru/pkgs/amp.viewer/2.3.1/iframe.html
  • hxxps://amproxy[.]imgsmail[.]ru/
  • hxxps://help[.]mail[.]ru/mail-help/bonus/offline/ua?utm_source=mail_app&utm_medium=android
  • hxxps://help[.]mail[.]ru/mail-help/bonus/offline/support?utm_source=mail_app&utm_medium=android

The post myMail Manages Your Mailbox… in a Strange Way! appeared first on /dev/random.

February 18, 2021

Up until now, Autoptimize, when performing image optimization, has relied on JS-based lazyloading (with the great lazysizes component) to differentiate between browsers that support different image formats (AVIF, WebP, and JPEG as fallback).

As JS-based lazyload is going out of fashion though (with native lazyload being supported by more browsers and WordPress having out-of-the-box support for it too), it is time to start working on <picture> output in Autoptimize to serve “next-gen image formats”, where different <source> tags offer the AVIF and WebP files and the <img> tag (which includes the loading=”lazy” attribute) offers JPEG as fallback.

For now that functionality is in a separate “power-up”, available on GitHub. If you have Image Optimization active in Autoptimize (and you are on the Beta version; Autoptimize 2.8.1 is missing a filter which the power-up needs, so download & install the Beta if you haven’t yet), you can download the plugin from GitHub and give it a go. All feedback is welcome!
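
As a sketch of the markup described above (a hypothetical helper for illustration, not Autoptimize’s actual code), the <picture> output with AVIF/WebP <source> tags and a lazy-loading JPEG fallback could be built like this:

```python
def build_picture(base: str, alt: str) -> str:
    """Build a <picture> tag with AVIF and WebP <source> entries and a
    JPEG fallback <img> carrying the native loading="lazy" attribute.
    Hypothetical helper illustrating the markup, not Autoptimize code."""
    return (
        "<picture>"
        f'<source srcset="{base}.avif" type="image/avif">'
        f'<source srcset="{base}.webp" type="image/webp">'
        f'<img src="{base}.jpg" alt="{alt}" loading="lazy">'
        "</picture>"
    )

print(build_picture("/wp-content/uploads/photo", "A photo"))
```

Browsers pick the first <source> format they support and fall back to the plain <img>, which is why no JavaScript-based format detection is needed.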

Facebook social decay
© Andrei Lacatusu

In response to a proposed law that requires technology companies to pay Australian publishers for linking to their news articles, Facebook made the sudden decision to restrict people and publishers from sharing news in Australia.

Facebook has struggled to silence misinformation for years, but succeeded in silencing quality news in days. In both cases, Facebook's fast and loose approach continues to hurt millions of people ...

Social media platforms, news publishers, governments and internet users have been stuck in an inadequate equilibrium for years. The silver lining is that conflict is often necessary for driving positive changes.

My preferred outcome is that Australians "unfriend" Facebook, and switch back to reading real news websites.

February 17, 2021

From the harmfulness of electromagnetic waves to organic food and pedophile networks, from COVID crisis policy to vaccine distribution: what if the conspiracies were real? Real, but not quite as we imagine them.

The electromagnetic wave conspiracy

When I find myself facing someone who tells me about the harmfulness of electromagnetic waves, I first ask whether they know what, physically, such a wave is. In every case I have encountered, the person admits total ignorance.

An electromagnetic wave is nothing but a stream of particles, called photons, travelling while vibrating at a certain frequency. Within a certain frequency range, photons become visible. We call that… light. There are other frequencies we cannot see: infrared, ultraviolet and, of course, radio waves.

Radio waves are so hard to detect that we have to build particularly sophisticated antennas to pick them up. Antennas that equip our phones.

Electromagnetic waves can be absorbed. The energy of their vibration then turns into heat. To convince yourself, just take a walk under the biggest electromagnetic source at our disposal: the sun. The waves emitted by the sun warm you up. In too large doses, they can even burn you. That is the famous “sunburn”. It is also the principle your microwave oven uses: it sends waves at a frequency whose energy transfers particularly well to water. That is why your oven stays cold: it only heats the water.

Electromagnetic waves that carry a very large amount of energy can knock an electron off the atom they hit. That atom is ionized. If too many atoms in our DNA are ionized, the DNA can no longer be repaired, and this can induce cancers. It takes, of course, long, repeated exposure to a very powerful source.

For example, the sun, responsible for many skin cancers. Or X-rays, used for medical radiography. The advantage of very-high-energy waves is that they interact with the first thing they touch and are therefore easily stopped. That is why there are little lead-lined rubber curtains on airport X-ray belts. These protections mostly serve to protect the employees who would otherwise be permanently exposed to X-rays. For the traveller who only passes through twice a year, it matters far less.

In that sense, GSM antennas are a bit like lighthouses. They emit electromagnetic rays in the same way. Only the frequency differs.

While a lighthouse can dazzle you, or even burn you if you get within a few centimetres, nobody dares imagine that exposure to a lighthouse could cause cancers or be harmful. The same goes for your wifi router: it emits no more energy than your halogen bulb.

Worrying about the impact of electromagnetic waves therefore seems absurd. Even if we were to discover that certain very specific frequencies could have a harmful effect, we have been bathed in electromagnetic waves since the dawn of humanity. It is therefore reasonable to think that any currently unknown impact, if such an impact exists, is anecdotal.
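
To put a rough number on this (a back-of-the-envelope addition, not in the original text): Planck's relation gives the energy carried by a single photon as a function of its frequency,

```latex
E = h f, \qquad
E_{\mathrm{GSM}} \approx (6.63\times10^{-34}\,\mathrm{J\,s})
                  \times (2\times10^{9}\,\mathrm{Hz})
         \approx 1.3\times10^{-24}\,\mathrm{J}
         \approx 8\times10^{-6}\,\mathrm{eV}
```

Ionizing an atom takes on the order of electron-volts (ultraviolet light and beyond), so a GSM-band photon falls short by roughly six orders of magnitude; raising the transmitter power only adds more photons, it does not change the energy of each one.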

And yet, I think the “anti-wave” people are right.

The waves are harmful. Not because they are waves, but because of the use we make of them. Today, we are permanently hyperconnected. Our phones buzz with unwanted notifications we do not know how to turn off. Our houses are full of little lights blinking to tell us the network is up, that the tablet is charging. When I sleep in a hotel room, I have to take the television apart to reach the router hidden behind it and unplug it. Not because of the waves, but because I cannot stand those blinking green lights in the dark, topped off by the unbearable glowing red eye of the television on standby.

How can you not be stressed at the thought of the millions of bits constantly passing through us to notify our restaurant neighbour that a new YouTube video is available? How can you sleep knowing all that activity is flowing through you? Experiments have shown that electromagnetic sensitivity is very real. That people genuinely suffer from it. But that it is not caused by the presence of electromagnetic waves. It is caused by the belief that electromagnetic waves are present.

The anti-wave people intuitively perceived the problem. Before assigning it to a cause that is not under their control.

Generally speaking, all conspiracy theories are constructions built on a very real problem. A problem to which an absurd or exaggerated artificial cause has been attached, a cause that symbolizes and personifies the problem so as to give the impression of understanding it.

That is why proving the absurdity of a conspiracy theory does not work. The conspiracy usually really exists. But it is far too simple, too banal. Which produces a feeling of powerlessness. By giving it a name, you create an identified enemy and the possibility of acting, of fighting it actively.

The deep state conspiracy

According to legend, Lady Carcas freed the city of Carcassonne, which Charlemagne had besieged for five years. With the population starving, Lady Carcas had the idea of taking the city’s last pig, feeding it the last sack of wheat, and throwing it from the top of the ramparts onto the attackers. They reasoned that if the city could afford to throw away a wheat-fed pig, it must still have plenty of resources, and that it was better to lift the siege. Charlemagne never asked himself how the city could still have so many resources after five years of siege. As the troops moved away, Lady Carcas had the city’s bells rung, from which the city would take its name: “Carcas sonne” (Carcas rings)!

Most conspiracy theories run into a fundamental problem: their reality would require thousands of specialists from extremely different fields working in total secrecy within an incredibly perfect and efficient organization that never makes the slightest mistake. Yet you only need to open your eyes to see that as soon as three people work together, inefficiency is the rule.

To convince yourself, just watch spy movies. The story is always the same: an ultra-secret counter-espionage service fights an ultra-secret espionage organization whose goal is to expose the counter-espionage service, which therefore engages in a counter-counter-espionage struggle. It is particularly striking in the “Mission: Impossible” films or in the series Alias. With a bit of distance, you realize that all these organizations… serve strictly no purpose. Even screenwriters, specialists in fiction, cannot come up with ideas to justify the existence of such organizations. So a terrorist who wants to set off a nuclear bomb is artificially parachuted in, to thinly disguise the vacuity of the plot.

The reality of intelligence services is quite different: civil servants who, to justify their budget and their many jobs, go as far as inventing plots (something that also comes up in Mission: Impossible). Unlike Tom Cruise, all-powerful billionaires and spies are humans who eat, sleep, poop and scratch their hemorrhoids. They make errors of judgment and get carried away by their ideology and their feeling of omnipotence.

And yes, they try to advance their interests, even illegally or immorally. This essentially consists of trying to convince the world to buy their crap (marketing), committing insider trading on the stock markets, and funding political lobbying so that the laws favour them. That is where the real conspiracies lie, the real scandals, which require the complicity of only a few people, which need no particular skill or technology and which, most of the time, are not even secret at all!

Most of the Cold War’s secret innovations were just hoaxes meant to scare the other side: death rays, mind-control rays, extraterrestrial contacts. The real innovations, for that matter, were anything but secret: the nuclear bomb, the space race, computing and the beginnings of the Internet. Like Lady Carcas’s pig, everything was entirely public, and the only truly secret things were those that did not exist, in an attempt at informational intoxication.

In some cases, secret-service research did lead to a few rare genuine advances. That was the case, for example, of Clifford Cocks, who invented asymmetric cryptography in 1973 for the British secret services. Unfortunately, this purely theoretical invention could not be put into practice without development work that Cocks could not do alone. It was therefore consigned to oblivion until the concept was rediscovered on the other side of the Atlantic, three years later, by Diffie, Hellman and Merkle, who published it and laid the foundations of a new science: computer cryptography. Once again, history shows that nothing is really possible in secrecy and isolation. The myth of the lone scientist-entrepreneur works in the novels of Ayn Rand (when he is a good guy) and Ian Fleming (when he is a bad guy), not in reality.

The notion of a “deep state” or of secret elites making the decisions is more reassuring than the truth, namely that yes, our leaders are corrupt, but simply in the way humans are, favouring their own petty personal interests over the general interest. All while making mistakes and trying to justify to themselves, morally, that their profit really serves the general interest (like trickle-down economics or the idea that wealth is deserved). Conspiracies exist, but they are small, petty, and not particularly secret.

The vaccine conspiracy

The idea of a vaccine containing chips to surveil us, or of chemtrails to control our minds (technologies that seem completely impossible given the current state of our knowledge, and which it would therefore be particularly hard to develop outside the scientific community, in total secrecy), helps us forget that our phones already surveil us very well and supply more data than governments can exploit, that television numbs us perfectly, and that we chose to use them, that nobody ever forced us.

Likewise, anti-vaxxers rightly point out that the pharmaceutical oligopoly has an obvious commercial interest in our being as sick as possible so that we consume as many drugs as possible. That through patents, the pharmaceutical industry privatizes enormous public budgets and turns them into juicy profits, sometimes to the detriment of our health. But it is hard to do without medicine. So it is easier to attack vaccines, drugs whose procedure is impressive (an injection) and which have, in the very short term, an unpleasant effect (fever or a hard lump). Worse, you never perceive the usefulness of a vaccine. If a vaccine works, you will spend your whole life telling yourself it was not necessary… and that you were the victim of a conspiracy.

The vaccine, which is probably humanity’s finest invention as far as comfort and life expectancy are concerned, thus very unjustly serves as a banner for the intuited conflict of interest and the (real) rapacity of the pharmaceutical industry. Most drugs are far less effective than they claim; they are sold through massive marketing. The mere fact that pharmacy storefronts have been turned into giant advertising billboards is a scandal in itself. Vaccines are perhaps the safest, most beneficial and most closely monitored exception. But they are also, intuitively, the easiest to criticize.

And such criticism is sometimes necessary: since vaccines are not very profitable (you take them only once in your life), the pharmaceutical industry tries to have them developed with public funds through universities, before claiming all the profits by selling them back at a high price to the states… which funded their development! The University of Oxford had, in fact, announced its wish to put its COVID vaccine in the public domain, on the Open Source model, before backtracking under, so it seems, pressure from the Bill Gates foundation. A conspiracy which, without calling the vaccine’s quality into question, seems perfectly plausible and realistic to me. One could almost believe that absurd conspiracies like 5G chips in vaccines are invented on purpose, to discredit the slightest criticism and divert us from the real issues. Note that the Bill Gates foundation plays a major positive role in the eradication of polio. Nothing is ever perfectly black or white. The world is complex.

The pedophile network conspiracy

To make a good conspiracy theory, all you need to do is take real suffering and amalgamate it with a seductive, shocking story. One example among many is the persistence of theories about highly sophisticated pedophile networks for the elites. Sometimes spiced with satanism and cannibalism for decorum.

Pedophilia is indeed a problem in our society. Sadly, it is mostly found within families themselves. Children are most often abused by a parent or a trusted relative (as priests often have been). But imagining that an uncle or a father could rape a child of his own family is so horrifying that we shift the blame onto the ultra-rich. Ultra-rich who only add fuel to the fire by sometimes having a sexuality unleashed by a sense of impunity, a feeling exacerbated by a macho rape culture that sometimes really does lead to pedophilia, as in the Weinstein or Polanski affairs.

The trauma of the Dutroux affair in Belgium can be explained in part by how hard it is to accept that a completely sick nobody could simply snatch girls into his van and hide them in his cellar. That his name was indeed on the list of suspects, but that the police’s slowness in unmasking him is mostly explained by the blind application of the administrative procedures in force at the time, procedures slowed down by power struggles within the hierarchy (which, incidentally, led to a complete overhaul of the Belgian police). There is a certain comfort in imagining that the crime was not just a series of misfortunes and administrative pettiness, but the will of an all-powerful organization reaching all the way up to the royal family.

The “international Jewish cabal” and QAnon conspiracies

Conspiracy theories are generally the illustration of a justified loss of confidence in the guarantors of morality and authority. They flourish most often in periods of deep distress. The economic misery of the 1930s, just after the stock-market crash, allowed the centuries-old theory of the Jewish cabal to come to the fore, with the consequences we know in Germany. I cannot, by the way, help recommending Umberto Eco’s excellent “The Prague Cemetery” for a fictionalized illustration of that cabal.

The 2008 financial crisis is no exception to the rule. From its ashes were born Donald Trump and QAnon, which have, from a historical point of view, no originality whatsoever. Everything seems to be lifted, almost to the letter, from the conspiracy theories of the past.

Absurd theses, but with, once again, the intuition of a very real problem: the problem of the very existence of the finance industry. How is it that an industry which seems to produce nothing concrete for ordinary citizens, which generates billions, which seems to make every one of its members a millionaire, how is it that this industry with its incomprehensible practices receives so much government money during a crisis it created itself? How is it that, in what presents itself as a democracy, the main factor in reaching power is wealth? How is it that all our best brains from engineering, science and administration schools are recruited into finance?

On that subject, I recommend Fabrice Luchini’s magnificent speech (prepared, but never delivered) in the film “Alice et le maire”. A film that very realistically depicts the underside of politics: stressed people who go from meeting to meeting and no longer have time to think. How could organizations whose long-term vision extends to the next election seriously pull off large-scale conspiracies?

The junk-food conspiracy

Conspiracy theories can only divide. The intuitive know that they represent a real problem. The rational can demonstrate that they are absurd, and end up denying the existence of the initial problem. The two camps can then no longer talk to each other. Sensible and absurd behaviours get mixed together.

Walk into an organic food shop and you will be stunned by the jumble of concepts a simple tin can lay claim to.

Your tin is “organic”. That means it has received a label certifying that it used a limited quantity of certain pesticides.

The approach is rational. While the harmfulness of pesticides to humans has not always been demonstrated, their harmfulness to living things has. The absorption of pesticides by the body has been demonstrated, and the hypothesis that these pesticides could have an impact on health is being seriously studied.

Your tin also comes in “eco-friendly” packaging. That seems intuitive but, unfortunately, organic farming produces considerably more CO2 than farming with pesticides. That said, pesticides also have a non-negligible environmental impact, even if it is not CO2.

The food is also guaranteed GMO-free. Here it gets stranger. Nature produces GMOs all the time; that is the very principle of evolution. GMOs could therefore be particularly beneficial, for instance by being more nutritious. Rejecting GMOs means rejecting the principle of grafting, as old as agriculture itself. But the rejection of GMOs is, once again, the symptom of a real problem, namely the drive to put intellectual property on seeds, a dangerous monopolistic practice. The anti-GMO fight is not so much against the principle of the GMO itself (most anti-GMO activists do not even know what a GMO is) as a distrust of those who claim to manipulate food without telling us how, or letting us do it ourselves. Distrust of the industry that practises genetic modification is relevant. Distrust of the very principle of the GMO probably is not.

Finally, your food (or your beauty products, if they are from the Weleda brand) may come from the principles of biodynamics. Biodynamics is a concept invented by Rudolf Steiner, a notorious crank who decided to reinvent philosophy, the sciences, medicine, education and religion based solely on his intuition. He knew strictly nothing about agriculture, but one day improvised a lecture on the best way to farm, in front of a hundred or so friends, only a minority of whom were farmers. The lecture was transcribed by a stenographer, but Steiner himself said several times that he had never reread the transcription, that his lecture was meant to be oral, not written, and that the transcription must contain a great many errors. He died shortly afterwards without ever rereading it, or even mentioning the term “biodynamics”, which was coined later.

And yet that erroneous transcription of a lecture improvised by a non-farmer with a passion for occultism and magic today serves as the reference for an entire industry. The rules are of the kind: “Such-and-such a plant must be planted when Mars is visible in the sky, because its flowers are red and Mars is red. And dead rats must be spread in the compost on full-moon nights, just because.” Every book or farmer claiming to follow biodynamics today does only one thing: recycle the ramblings, devoid of any empirical substance, from the erroneous transcription of one single lecture by a crank. In short, the very definition of theology. However, if you strip away the whole esoteric part, you find the foundations of organic farming. Like any religion, biodynamics is thus far from being entirely wrong. Quite simply because, statistically, being entirely wrong is as improbable as being entirely right, and because, as Kahneman points out, intuition is often correct. But not always. Which is its big problem.

So by buying organic food, which I personally do, I am usually mixing the sensible, the not-entirely-sensible and the totally absurd.

All because of a very real intuitive problem: we now enjoy enough comfort to be picky about our food, and the fact is, we eat crap. Through sugar and saturated fats, food producers seek only to make us addicted at the lowest cost, in total contempt of our health. Food is manipulated to look pretty in the shop, at the expense of its composition. For decades, intellectual scams, sometimes promoted by our governments, have served industrial interests (for example, drinking milk to strengthen bones, or the food pyramid, a principle with no scientific basis whatsoever). So the conspiracy really does exist!

The conspiracy of the conspiracy theorists

We sense it, so we try to preserve our health, to reduce our cancers by shielding ourselves from electromagnetic waves and by eating organic. Which, objectively, could have a positive impact. A very small one, but it is not impossible.

But do you know what has a major impact on our health?

Cigarettes, car exhaust, alcohol. Remove those three, two of which are within your immediate reach, and it will have a million times more effect than eating organic and putting your phone in airplane mode at night. For maximum effect, also cut down on red meat, an established carcinogen, and get 30 minutes of exercise a day.

There they are, the conspiracies against your health. They are staring us in the face. It is the tobacco lobby that makes it legal to smoke in public, stinking up the air around you. It is the car lobby that sells you SUVs while making you curse at traffic jams, and that kills reckless young adults driving at full speed. It is the alcohol lobby that publishes op-eds against the "tournée minérale" concept in Belgium and subsidizes student societies; it is the Facebooks and Googles that seize your entire private life and set up monopolistic schemes that make themselves unavoidable.

We can all fight these conspiracies that directly threaten our physical and mental integrity. The biggest causes of avoidable mortality, suicide aside, can be summed up as alcohol, tobacco and cars.

But it is very hard to give up your smokes, your car and your Facebook account. So we post against vaccines, against GMOs and against 5G. We demonstrate against what we cannot really change. Even if it means endangering ourselves by smoking "organic" weed, drinking artisanally distilled spirits and refusing vaccines for our children. All while proclaiming it loud and clear on Facebook.

By dint of questioning authority, we turn to sources of authority with no legitimacy whatsoever, but which make us feel good. We claim not to want to be manipulated, and we walk straight into the commercial interests of gurus, shamans and sellers of jugs that "energize" water. Under the pretext of refusing to obey, we end up doing exactly the opposite of what the authorities say, without considering that this makes us even easier to manipulate, like the child who always says no and is told "Whatever you do, don't eat your soup!".

If you think that some field, from the food industry to scientific research, is corrupt, you are probably right. But it is not the field itself you should fight, it is the corruption. The organic food industry, the cannabis industry, the energy crystals and astrological anti-cancer coaching networks are just as corrupt, as is green politics. They contain a share of honest people diluted in a population that only wants to empty your wallet.

The hardest thing to accept is that, no, the truth is not being hidden from us. It is right there, before the eyes of anyone willing to see it. There is nothing secret, nothing mysterious. Average intelligence stays the same whatever the level of wealth or political power. But this reality is hard to accept, because it offers no ready-made answer, because it offers no certainties, only probabilities, and because it very often contradicts our convictions and our past actions. And because, while the conspiracy is most often invented or exaggerated, the suffering that results from it is all too real.

Going further: the Covid conspiracy and other reading

"Vaincre les épidémies", by Didier Pittet and Thierry Crouzet.

Inventor of the hydroalcoholic hand gel we now use every day, Didier Pittet is a world-renowned Swiss specialist in infectious diseases and epidemics. In this book he retraces his discovery of Covid, his comparison with other epidemics (H1N1, avian flu) and his experience of becoming the reference expert for Macron, who sent a private jet to bring him to a meeting at the Élysée. The book thus perfectly illustrates the view of someone at the very highest level of power where COVID is concerned. On the menu: incompetence at every level of decision-making, political conflicts affecting decisions that should be purely scientific, and not-very-effective attempts to steer public opinion "the right way" through marketing. With COVID as everywhere else, conspiracies really do exist, but they are so small, so human, so petty…

Didier Pittet has just been made Doctor honoris causa of the university where I teach Open Source. Which I applaud because, with the formula of his hydroalcoholic gel, he is a pioneer of Open Source in healthcare.

Thierry Crouzet revisits the need for an Open Source vaccine.


https://tcrouzet.com/2020/12/02/je-veux-la-paix-dit-le-vaccin-mais-je-fais-la-guerre/

Which, unfortunately, is not the case, as I have recounted, because of the Bill Gates foundation.


https://khn.org/news/rather-than-give-away-its-covid-vaccine-oxford-makes-a-deal-with-drugmaker/

In his speech, the Belgian MP François De Smet tries to find a middle ground between anti-Covid measures and civil liberties. Far from crying conspiracy, in either direction, he argues for a reasonable balance. That has become so rare that it deserves to be highlighted. In the same spirit, he denounced the procedures surrounding the anti-Covid vaccine market while campaigning for more transparency. A politician after my own heart. He is unlikely to get many votes. Indeed, nobody but me seems to be interested in him.

https://francoisdesmet.blog/2021/02/05/chambre-debat-covid-et-libertes-publiques/

https://francoisdesmet.blog/2020/12/22/chambre-vaccins-et-transparence/

Bad Science, a book and a column revisiting the scientific scams of the pharmaceutical industry, from Big Pharma to the bio/independent labs that supply "alternative" food supplements (I haven't read the book; I'm relying on Cory Doctorow's review).


https://memex.craphound.com/2010/10/19/bad-science-comes-to-the-usa-ben-goldacres-tremendous-woo-fighting-book-in-print-in-the-states/

"The Prague Cemetery", by Umberto Eco. With his usual verve, Eco plunges us into the life of a forger obliged to fabricate, from scratch, the evidence of a conspiracy. Delicious.

An account of the incompetence of the British secret services


https://www.bbc.co.uk/blogs/adamcurtis/entries/3662a707-0af9-3149-963f-47bea720b460

A very long personal account of how conspiracy theories manipulate us, and of the parallel between "alternative" dietetics, religions and political conspiracies.


https://wisetendersnob.medium.com/this-secret-message-could-change-your-life-wellness-culture-jesus-and-qanon-cd576e53c9c8

Photo by Markus Spiske on Unsplash

I am @ploum, engineer and writer. Subscribe by email or RSS so as not to miss a post (max 2 per week). I am convinced that Printeurs, my latest science-fiction novel, will thrill you. Ordering my books is the best way to support me and help me spread my ideas!


This text is published under the CC-By BE license.

Cory Doctorow is one of the most prolific bloggers in the world, capable of publishing multiple great posts a day. He recently documented his writing and publishing process. It's fascinating.

Over the last 20 years, Cory built a huge, personal database of thoughts, articles and links. He explains how his database simplifies and supports his writing process:

The memex I've created by thinking about and then describing every interesting thing I've encountered is hugely important for how I understand the world. It's the raw material of every novel, article, story and speech I write.

Inspired by Cory, I brought back the Notes section on my site. I will use Notes to document articles or ideas that grab my attention, but that I'm not ready to write a longer blog post about. I'll build my own memex with the goal to become a better writer.

February 13, 2021

Elementor 3.1 came with a “refactored YouTube source to use YouTube API in Video widget”, which resulted in WP YouTube Lyte not being able to lyten up the YouTubes any more. The code snippet below (consider it beta) hooks into Elementor to fix this regression:

// Short-circuit Elementor's Video widget rendering: output the "lyte"
// version of the YouTube embed ourselves and tell Elementor to skip
// rendering the original widget.
add_filter( 'elementor/frontend/widget/should_render', function( $should_render, $elementor_element ) {
	if ( function_exists( 'lyte_preparse' ) && 'Elementor\Widget_Video' === get_class( $elementor_element ) ) {
		$pas = get_private_property( $elementor_element, 'parsed_active_settings' );
		if ( ! empty( $pas['youtube_url'] ) ) {
			echo lyte_preparse( $pas['youtube_url'] );
			// Returning false stops Elementor from rendering the widget itself.
			return false;
		}
	}

	return $should_render;
}, 10, 2 );

// from https://www.lambda-out-loud.com/posts/accessing-private-properties-php/
// Casting an object to an array exposes private properties under mangled
// keys ("\0ClassName\0property"), so we match on the key's suffix.
function get_private_property( object $object, string $property ) {
	$array          = (array) $object;
	$propertyLength = strlen( $property );
	foreach ( $array as $key => $value ) {
		if ( substr( $key, -$propertyLength ) === $property ) {
			return $value;
		}
	}
}

Hat tip to Koen for his assistance in digging into Elementor, much appreciated!

February 12, 2021

I published the following diary on isc.sans.edu: “Agent Tesla Dropped Through Automatic Click in Microsoft Help File”:

Attackers have plenty of resources to infect our systems. If some files may look suspicious because the extension is less common (like .xsl files), others look really safe and make the victim confident to open it. I spotted a phishing campaign that delivers a fake invoice. The attached file is a classic ZIP archive but it contains a .chm file: a Microsoft compiled HTML Help file. The file is named “INV00620224400.chm” (sha256:af9fe480abc56cf1e1354eb243ec9f5bee9cac0d75df38249d1c64236132ceab) and has a current VT score of 27/59. If you open this file, you will get a normal help file (.chm extension is handled by the c:\windows\hh.exe tool)… [Read more]

The post [SANS ISC] Agent Tesla Dropped Through Automatic Click in Microsoft Help File appeared first on /dev/random.

February 11, 2021

(In case you don’t know rakudo-pkg: it’s a project to provide native packages and relocatable builds of Rakudo for Linux distributions.)

The downward spiral of Travis since it was taken over by a Private Equity firm (there is a pattern here) triggered a long-overdue refactoring of the rakudo-pkg workflow. Until now the packages were created on Travis because it supported running Linux containers, so we could tailor the build for each distribution/release. Almost at the same time, JFrog announced it was sunsetting Bintray and other services popular with FOSS projects. The deb and rpm repos needed a new home.

Oh well, Travis and Bintray will be missed, and they certainly deserve our thanks for their years of service. It was hard to imagine a better time to implement a few ideas from the good old TODO list.

From Travis to Github Actions

Github Actions is way faster than the post-P.E. Travis. Builds that took a few hours (we build around 25 distro/version combinations) are now done in 10 to 20 minutes. Not surprisingly, this meant not only learning a completely new tool, but also a complete rewrite of the rakudo-pkg workflow.

Running on Github also means that every Github user can fork the repo, including the rakudo-pkg Github workflows. One of the advantages of a rewrite is the chance to implement new functionality, like the new rakudo-pkg feature that allows everyone to test the upstream MoarVM/NQP/Rakudo and zef commits/releases:

devbuild workflow

From Bintray to CloudSmith with Alpine repositories

Cloudsmith has a huge advantage over Bintray: it supports Alpine Linux repositories, making it a great addition to the rpm and deb repositories we already offer. The packages in the old repositories were GPG-signed by Bintray. From now on, rakudo-pkg packages will be signed with their own key on Cloudsmith. Check the project’s README for instructions on how to change your configuration. So far the experience with Cloudsmith has been stellar. Their support is impressively responsive and they solve issues immediately, sometimes while we were still chatting about them.

From fpm to nfpm

rakudo-pkg created packages with the venerable fpm, written in Ruby. While the tool is extremely capable, I spent most of the time of a release getting fpm to build on new versions of distributions; the Ruby Gems ecosystem looks like a moving target. Enter nfpm, inspired by fpm but with the promise of a single binary. Written in Go, it's easier for me to understand how it works and to send fixes upstream if needed. The devs are also very responsive and fixed one issue I encountered extremely fast.


Feel free to open issues or send PRs if I missed something or if you have ideas to improve rakudo-pkg.

February 10, 2021

All talks have been recorded and will be made available on video.fosdem.org/2021 as soon as the presenters have reviewed their talks. For FOSDEM 2021, we make a mix of the original recording submitted by the presenter and the Q&A happening in the main room. To make sure that the result is up to our quality standards, we ask that the original presenter reviews the video so the result contains no errors or mistakes. This can take a while. We expect that it will take some days to some weeks for all videos to become available at video.fosdem.org/2021.

February 09, 2021

Cover Image

Lies, damned lies, and social media

Rhetoric about the dangers of the internet is at a feverish high. The new buzzword these days is "disinformation." Funnily enough, nobody actually produces it themselves: it's only other people who do so.

Arguably the internet already came up with a better word for it: an "infohazard." This is information that is inherently harmful and destructive to anyone who hears or sees it. Infohazards are said to be profoundly unsettling and, in horror stories, possibly terminal: a maddening song stuck in your head; a realization so devastating it saps the will to live; a curse that happens to anyone who learns of it. You know, like the videotape from The Ring.

Words of power are nothing new. Neither is the concept of magic: weaving language into spells, blessing or harming specific people, or even compelling the universe itself to obey. Like much human mythology, it's wrong on the surface, but correct in spirit, at least more than is convenient. Luckily most actual infohazards are pretty mundane and individually often harmless, being just dumb memes. The problem is when magical thinking becomes the norm and forms a self-reinforcing system.

This is a weird place to start for sure, but I think it's a useful one. Because that is the concern, right, that people are being bewitched by the internet?

A witch offering a poison apple

I.

Last year the following was making the rounds, about polarization and extremism on Facebook, and their efforts to curb it. Citing work by Facebook researcher and sociologist Monica Lee:

The high number of extremist groups was concerning, the presentation says. Worse was Facebook’s realization that its algorithms were responsible for their growth. The 2016 presentation states that “64% of all extremist group joins are due to our recommendation tools” and that most of the activity came from the platform’s “Groups You Should Join” and “Discover” algorithms: “Our recommendation systems grow the problem.”

It's an extremely tweetable stat and quote, so naturally they first tell you how to feel about it. The article is rather long and dry, so we all know many people who shared it didn't read it in full. The current headline says that Facebook "shut down efforts to make the site less divisive." If you look at the URL, you can see it was originally titled: "Facebook knows it encourages division, top executives nixed solutions." So pot, meet kettle, but that's an aside.

While they acknowledge that some people believe social media has little to do with it, they immediately drown out the entire notion by referencing the American election and never mention this viewpoint again.

Don't get me wrong, I do think Facebook has problems, which is why I'm not on it. Optimizing mainly for time spent on the site is a textbook case of Goodhart's law: it's no surprise that it goes wrong. But a) that's not even a big-tech-specific problem and b) that's not what most people actually want changed when they cite this. What they say is that Facebook needs to limit "harmful content": they don't want Facebook to interfere less, they want it to interfere more.

This is in fact a very common pattern in all "disinformation" discourse: they tell you that what you are seeing is specifically abnormal and/or harmful, without any basis to actually justify limiting the conclusion. Like that anything about this is Facebook-specific. It's just that it's an easier sell to pressure one platform or channel at a time. It's not about what, but about who and whom.

There's also an assumption being snuck in. If a recommendation algorithm suggests you join a group, does the operator of that algorithm have a moral responsibility for your subsequent actions? If you agree with this, it seems Facebook should never recommend you join an extremist group, because extremism is bad. That sounds admirable, but is also not very achievable. It also hinges on what exactly is and isn't extreme, which is highly subjective.

Harmful content is for example "racist, conspiracy-minded and pro-Russian." So I doubt that they would consider e.g. the Jussie Smollett hoax or the hunt for the Covington Kid as harmful, seeing as they were sanctioned by both media outlets and prominent politicians. Despite being textbook cases of mass hysteria.

Calls for racially motivated action which result in violence, arson and anarchy under the Black Lives Matter flag also do not seem to count in practice. Facebook in fact placed BLM banners on official projects for most of 2020, as did others. The people who say we need to be deprogrammed seem to be doing most of the actual activism in the first place.

Facebook's React urging people to donate to Black Lives Matter

Facebook using its open-source projects to urge people to donate to political causes in an election year.

Like most big social media companies, Facebook's claims of political impartiality ring hollow on both an American and international level. Plus, if you actually read the whole article, the take-away is that they have put an inordinate amount of time, effort and process into this issue. They're just not very good at it.

But it doesn't really matter because in practice, people will share one number, with no actual comparison or context, from a source we can't see ourselves. This serves to convince you to let a specific group of people have specific powers with extremely wide reach, with no end in sight, for the good of all.

Infohazard.

II.

It's worth pondering the stat. What should the number be?

First, some control. If 64% of members of extremist groups joined due to a machine recommendation, then what is that number for non-extremist groups? It would be useful to have some kind of reference. It would also be useful to compare to other platforms, to know whether Facebook is particularly different here. This is not rocket science.

Second, you should be sure about what this number is specifically measuring. It only tracks how effective recommendations are relative to other Facebook methods of discovering extremist groups, like search or likes. This is very different from which recommended groups people actually join. Confusing the two is how the trick works.

The two different conditional probabilities

The difference between P(Recommended | (Joined & Extremist)) and P((Joined & Extremist) | Recommended).

They're talking about the thing on the left, not on the right.
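To make the distinction concrete, here is a small sketch with entirely invented counts (none of these numbers come from Facebook; they are chosen only so the quoted 64% falls out) showing that the two conditional probabilities answer very different questions:

```python
# Toy population of group joins. Each tuple:
# (number of joins, was_recommended, was_extremist_group)
# The counts are made up so that 64% of extremist joins are recommendation-driven.
joins = [
    (640,     True,  True),   # extremist joins driven by a recommendation
    (360,     False, True),   # extremist joins via search, likes, ...
    (50_000,  True,  False),  # non-extremist joins via recommendation
    (150_000, False, False),  # non-extremist joins via other routes
]

def p(cond, given):
    """Conditional probability P(cond | given) over the toy population."""
    num = sum(n for n, rec, ext in joins if cond(rec, ext) and given(rec, ext))
    den = sum(n for n, rec, ext in joins if given(rec, ext))
    return num / den

# P(Recommended | Joined & Extremist): the stat quoted in the article.
p_rec_given_ext = p(lambda rec, ext: rec, lambda rec, ext: ext)

# P(Joined Extremist | Recommended): how often a recommendation actually
# leads into an extremist group -- a very different, much smaller number.
p_ext_given_rec = p(lambda rec, ext: ext, lambda rec, ext: rec)

print(p_rec_given_ext)  # 0.64 by construction
print(p_ext_given_rec)  # ~0.0126: ~1% of recommended joins are extremist here
```

With these (hypothetical) base rates, the headline "64%" coexists with recommendations that land on an extremist group barely 1% of the time, which is exactly the confusion described above.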

If ~0% of group joins were due to a recommendation, that would mean the recommendation algorithm is so bad nobody uses it. It's always wrong about who is interested in something. You wouldn't even see this if e.g. right-wing extremism was being shown only to left-wing extremists, or vice versa, because both camps pay enormous attention to each other. You would basically only need to recommend extremist groups to 100% apolitical people. Ironically, people would interpret that as the worst possible radicalization machine.

They do say Facebook had the idea of "[tweaking] recommendation algorithms to suggest a wider range of Facebook groups than people would ordinarily encounter," but seem to ignore that this implies exposing the middle to more of the extremes.

If ~100% of group joins were due to a recommendation, then that would imply the algorithm is so overwhelmingly good that it eclipses all other forms of discovery on the site, at least for extremist groups. Hence it could only be recommending them to people who are extremists or definitely want to hang out with them. This would be based on a person's strong prior interests, so the algorithm wouldn't be causing extremism either.

The value of this number doesn't really matter. The higher it is, the worse it sounds on paper: the recommendation engine seems to be doing more of the driving, even though the absolute numbers likely shrink. But the lower it is, the less relevant the recommendations must be, creating more complaints about them. It's somewhere in the middle, so nobody can actually say.

The popularity of extremist groups has nothing to do with what percentage of their members join due to a recommendation. You need to know the other stats: how often are certain recommendations made, and actually acted upon? How does it differ from topic to topic? If a particular category of recommendations is removed, do people seek it out in other ways?

The only acceptable value, per the censors, is if it becomes ~0% because Facebook stops recommending extremist groups altogether. That's why this really is just a very fancy call for censorship, using a lone statistic to create a sense of moral urgency. If a person had a choice of extremist and non-extremist recommendations, but deliberately chose the more extreme one... wouldn't that make a pretty strong case that the algorithm explicitly isn't responsible and actually just a scapegoat?

The article tells us that "[divisive] groups were disproportionately influenced by a subset of hyperactive users," so it seems to me that personal recommendations have a much higher success rate than machine recommendations. In that case, your problem isn't an algorithm, it's that everyone and their dog has a stake in manipulating this, and they do. They even brag about it in Time magazine.

There's a question early on in the article: "Does its platform aggravate polarization and tribal behavior? The answer it found, in some cases, was yes." Another way of putting that is: "The answer in most cases was no."

III.

The notion that groups are dominated by hyperactive influencers does track with my experience. I once worked in social media advertising, and let me tell you the open industry secret: going viral is a lie.

The romantic idea is that some undiscovered Justin Bieber-like wunderkind posts something brilliant online, in a little unknown corner. A kind soul discovers it, and it starts being shared from person to person. It's a bottom-up grass-roots phenomenon, growing in magnitude, measured by the virality coefficient: just like COVID if it's >1 then it spreads exponentially. This is the sort of thing mediocre marketing firms will explain to you with various diagrams.

Going viral from one person to many

The reality is very different. Bottom-up virality near 1 is almost unheard of, because that would imply that every person who sees it also shares it with someone else. We just don't do this. For something to go viral, it must be broadcast to a large enough group each time, so that at least one person decides to repost it to another big group.

Thus, going viral is not about bottom-up sharing at all: it's about content riding the top-down lightning between broadcasters and audiences. These channels build up their audiences glacially by comparison, one person at a time. One pebble does not shift the landscape. It also means the type of content that can go viral is constrained by the existing network. Even group chats work this way: you have to be invited in first.

This should break any illusions of the internet as a flat open space: rather it is accumulated infrastructure to route attention. Everyone knows you need sufficient eyeballs to be able to sell ad space, but somehow, translated into the world of social media, this basic insight was lost. When the billboard is a person or a personality, people forget. Because they seem so accessible.

In the conflict between big tech and old media, you often hear the lament that "tech people don't like to think about the moral implications of what they build." My pithy answer to that is "we learned it from you" but a more accurate answer is "yes, we do actually, quite a lot, and our code doesn't even have bylines." Though I can't actually consider myself "big tech" in any meaningful way. In this house we hack.

I can agree that large parts of social media are basically just cesspits of astroturfing and mass-hypnosis. But once you factor in who is broadcasting what to whom, and at what scale, the lines of causation look very different from the usually cited suspects.

While there are indeed weird niche groups online and offline, it is the crazy non-niche groups we should be more concerned about. Who is shouting the loudest, to the most people? Why, the traditional news outlets. The ones that 2020 revealed to be far less capable, objective and in-the-know than they pretend to be. The ones who chastised public gatherings as irresponsible only when it was the wrong group of people doing so.

So don't just question what they say and how they say it. Ask yourself what other stories they could've written, and why they did not.

IV.

What's especially frustrating is that the class of people who are supposed to be experts on media and communication have themselves been bubbled inside dogmatic social sciences and their media outposts. If you ask these people to tell you about radicalization on the internet, you are likely to hear a very detailed, incomplete, mostly wrong summary of pertinent events.

Much of this can be attributed to what I mentioned earlier: telling you how to feel about something before they tell you the details. Sometimes this is done explicitly, but often this is done by way of Russell Conjugation: "I am being targeted, you are getting pushback, they are being held accountable." The phrasing tells you whether something is happening to a person you should root for, be neutral about, or dislike. Given a pre-existing framework of oppressors and oppressed, they just snap to grid, and repetition does the rest.

Sometimes it's blatant, like when the NYT rewrote a fairly neutral article about Ellen Pao into nakedly partisan hero worship a few years ago.

But most people don't even realize they're doing it. When called upon they will insist "it's totally different," even if it's not. It's judging things by their subjective implications, not by what they objectively are. Once the initial impression of the players is set, future events are shoehorned to fit. The charge of whataboutism is a convenient excuse to not have to think about it. This is how you end up with people sincerely believing Jordan Peterson is an evil transphobe rather than a compelled speech objector dragged through the mud.

Information that disproves the narrative is swept under the rug, with plenty of scare quotes so you don't go check. If a reporter embarrasses themselves by asking incessantly leading questions, the story will shift to how mean and negative the response is, instead of actually correcting the misconceptions they just tried to dupe an entire nation with.

The magnitude of a particular event is also not decided by how many people participated in it, but rather, by how many people heard about it. This is epitomized by the story format that "the X community is mad about Y" based on 4 angry tweets or screencaps. It wasn't a thing until they decided to make it a thing, and now it is definitely a thing.

The idea of the media as an active actor in this process is a concept they are quite resistant to. Because if it isn't a story until they write the story, that means they are not actually reporting broadly on what's happening. They're just the same as anyone else.

Part of the problem is that what passes for news about the internet is mostly just gossip. Even if it is done in good faith, it is very hard for an individual to effectively see the full extent of an online phenomenon. In practice they rarely try, and when they do, the data gathering and analysis is usually amateur at best, lacking any reasonable control or perspective.

You will often be shown a sinister looking network graph, proving how corrupting influences are spreading around. What they don't tell you is that this is what the entire internet looks like everywhere, and e.g. a knitting community likely looks exactly the same as an alt-right influencer network. Each graph is just a particular subset of a larger thing, and you have to think about what's not shown as well as what is.

It's an even more profound mistake to think that we can all agree on which links should be cut and which should be amplified. Just because we can observe it, doesn't mean we know how to improve it. In fact, the biggest problem here is that so few people can decide what so many can't see. That's what makes them want to fight over it.

The idea that any of this is achievable in a moral way is itself a big red flag: it requires you to be sufficiently bubbled inside one ideology to even consider doing so. It is fundamentally illiberal.

It really is quite stunning. By and large, today the people who shout the loudest about disinformation, and the need to correct it, are themselves living in an enormous house of cards. It is built on bad thinking, distorted facts and sometimes, straight up gaslighting. They have forced themselves on companies, schools and governments, using critical theory to beat others into submission. They use the threat of cancellation as the stick, amplified eagerly by clickbait farms... but it's Facebook's fault. "They need to be held accountable."

These advocates only know how to mouth other people's incantations, they don't actually live by them.

Here's the thing about secret police: studies find that it's mostly underachievers who get the job and stick with it. Because they know that in a more merit-driven system, they would be lower on the totem pole.

If the world is messed up, it's because we gave power to people who don't know wtf they're supposed to do with it.

February 07, 2021

The Linux Professional Institute (LPI) has fixed the conference rate exam registration, and it is now available for anyone who participated in FOSDEM and is interested in taking advantage of it. As in previous years, the Linux Professional Institute will again offer a discount to examination candidates at FOSDEM 2021. The level 1, level 2 and level 3 exams are offered at a discount of nearly 50%. See our certification page for more information.

February 06, 2021

If “Load WebP or AVIF in supported browsers?” is on, .png files with transparency will lose that transparency in browsers that support AVIF, due to a recent technical change in Shortpixel’s AVIF toolchain.

Shortpixel is looking at alternative solutions, but until then as a workaround you can either:

  • add .png to Autoptimize’s lazyload exclusion field
  • or use the code snippet below to disable AVIF images:

add_filter( 'autoptimize_filter_imgopt_do_avif', '__return_false');

February 05, 2021

An important part of our programme: the T-shirts and hoodies. They are (again) available, and contrary to normal practice, you won't have to stand in line to get them and they won't run out! For the online edition, we partnered with a print-on-demand provider, which means that they won't run out at Saturday noon, as they normally do. Visitors from the EU can go to treasure.fosdem.org, offering shipping to the EU. If you want to ship to a destination outside of the EU, go to row.treasure.fosdem.org. If you can't find your country in either of the shops, send an…

The M5Stack Core is a modular, stackable, ESP32 board with a 2 inch LCD screen and three buttons, all in a package that doesn't look bad in your living room. ESPHome is a system to configure your ESP8266/ESP32 with YAML files to connect them to your home automation system (MQTT or Home Assistant). As of ESPHome 1.16.0, released this week, ESPHome supports the M5Stack Core's ILI9341 display.

What this means is that you can now set up your own display device without having to solder (thanks to the M5Stack Core's all-in-one package) and without having to program (thanks to ESPHome's configuration-based approach), and let it talk to your home automation system, including advanced functionality such as over-the-air (OTA) updates. This is really bringing do-it-yourself home automation to the masses.

For an example, have a look at my ESPHome configuration for the M5Stack PM2.5 Air Quality Kit. I wrote it two months ago for the dev version of ESPHome, and I can confirm that this now just works with the 1.16.0 release. The result looks like this:

/images/m5stack-air-quality-kit-esphome.jpg

Using the display

The display uses the SPI bus, so you define the bus with the right clock, MOSI and MISO pins:

spi:
  clk_pin: 18
  mosi_pin: 23
  miso_pin: 19

Then define a font: 1

# Download Roboto font from https://fonts.google.com/specimen/Roboto
font:
  - file: "fonts/Roboto-Medium.ttf"
    id: font_roboto_medium22
    size: 22
    glyphs: '!"%()+,-_.:°0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz/³µ'

Then you can define the display and add a lambda expression to show something on the screen:

display:
  - platform: ili9341
    id: m5stack_display
    model: M5Stack
    cs_pin: 14
    dc_pin: 27
    led_pin: 32
    reset_pin: 33
    rotation: 0
    lambda: |-
      Color RED(1,0,0);
      Color BLUE(0,0,1);
      Color WHITE(1,1,1);
      it.rectangle(0,  0, it.get_width(), it.get_height(), BLUE);
      it.rectangle(0, 22, it.get_width(), it.get_height(), BLUE);
      it.print(it.get_width() / 2, 11, id(font_roboto_medium22), RED, TextAlign::CENTER, "Particulate matter");

Using the buttons

The M5Stack Core's buttons were already supported: they can just be used as a binary sensor. For instance, this is how you use the middle button to toggle the display's backlight:

# Button to toggle the display backlight
binary_sensor:
  - platform: gpio
    id: M5_BtnB
    pin:
      number: 38
      inverted: true
    on_click:
      then:
        - switch.toggle: backlight

# GPIO pin of the display backlight
switch:
  - platform: gpio
    pin: 32
    name: "Backlight"
    id: backlight
    restore_mode: ALWAYS_ON

The M5Stack Core has three buttons. From left to right those are button A (GPIO39), button B (GPIO38) and button C (GPIO37). You can use them all like in the code above. As an exercise, I have reimplemented Homepoint's interface in ESPHome. With the left and right buttons I cycle through the pages showing sensor values of my home, and with the middle button I toggle the display backlight.

1 To be able to use fonts in ESPHome, you need to install Pillow with pip3 install pillow.

I’m a fan of the Nanoleaf light panels! I use them in my office all the time. They provide a great daylight color while I’m in a Webex or training, react to my music, or give a relaxing atmosphere (when you need to concentrate on important stuff). Years ago, when I was working on Solaris systems, I often used the “snoop” command (the Solaris version of tcpdump) with the “-a” parameter to play a sound when some packets matched my filter. Who remembers this feature? Today, it’s the same: I’m often running a tcpdump command with a complex BPF filter, expecting to grab some interesting packets. Why not use my Nanoleaf panels to react to specific flows?

The Nanoleaf controller is connected to the IoT network and can be interfaced with any home automation platform (“Hey Siri, turn on the lights in my office“). But it is also reachable through an API. A Python library is available for this purpose. Let’s try it!

$ python3
Python 3.8.5 (default, Jul 28 2020, 12:59:40)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from nanoleafapi import Nanoleaf
>>> nl = Nanoleaf("192.168.254.54")
>>> nl.get_power()
True
>>> nl.get_current_effect()
'Blaze'
>>>

When you need to capture packets with Python, the best library to use is scapy! The script is simple: it sniffs some traffic and, for each received packet that matches the requirements, performs an action on the light panels. The script I wrote works in two “modes”:

  • With random colours – The first time a port is seen, it is assigned to a panel and a random colour is generated.
  • With fixed colours assigned by port – Easier to track the activity on a specific port (e.g. you’re expecting an incoming connection on port 8000 and turn a panel red)
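The core logic behind both modes — remembering seen flows, mapping ports to panels, and picking colours — can be sketched independently of the capture and API plumbing. A minimal sketch with hypothetical names (the real script drives scapy for sniffing and the nanoleafapi library for the panels):

```python
import random

class FlowLights:
    """Map network flows to light panels, one panel per destination port."""

    def __init__(self, panel_count, fixed_colors=None):
        self.panel_count = panel_count
        self.fixed_colors = fixed_colors or {}   # e.g. {8000: (255, 0, 0)}
        self.port_to_panel = {}                  # destination port -> panel index
        self.seen_flows = set()                  # avoid repeated API calls

    def color_for(self, port):
        # Fixed colour if one was configured for this port, random otherwise
        if port in self.fixed_colors:
            return self.fixed_colors[port]
        return tuple(random.randint(0, 255) for _ in range(3))

    def handle_packet(self, src, dst, dport):
        flow = (src, dst, dport)
        if flow in self.seen_flows:
            return None                          # already reacted to this flow
        self.seen_flows.add(flow)
        if dport not in self.port_to_panel:
            # First time this port is seen: assign the next panel (wrapping around)
            self.port_to_panel[dport] = len(self.port_to_panel) % self.panel_count
        return self.port_to_panel[dport], self.color_for(dport)
```

In the real script, a method like handle_packet() would be called from the sniffing callback, and the returned panel index and colour pushed to the Nanoleaf controller through the API.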

To reduce the calls to the API, the script keeps a list of seen network flows and reacts only with new ones. The syntax is pretty simple:

$ ./pcaplights.py -h
Usage: pcaplights.py [options]

Options:
  --version             show program's version number and exit
  -h, --help            show this help message and exit
  -i INTERFACE, --interface=INTERFACE
                        Interface (default: "lo")
  -f BPF_FILTER, --filter=BPF_FILTER
                        BPF Filter (default: "ip")
  -c COUNT, --count=COUNT
                        Packets to capture (default: no limit)
  -H HOST, --host=HOST  Nanoleaf Controller IP/FQDN
  -C COLORS, --colors=COLORS
                        Color for protocols (port1=(r,g,b)/port2=(r,g,b)/…)
                        (default: random)
  -v, --verbose         Verbose output

Here is a quick video to demonstrate the script:

The good point is that you can sniff at any location in your network and have the light panels react right in front of you.

The script is available on my Github repository. As usual, the code is working for me but it has not been intensively tested. Bugs are always ahead! 😉

The post Network Flows Visualization With Nanoleaf Light Panels appeared first on /dev/random.

I published the following diary on isc.sans.edu: “VBA Macro Trying to Alter the Application Menus”:

Who remembers the worm Melissa? It started to spread in March 1999! In information security, it looks like speaking about prehistory but I spotted a VBA macro that tried to use the same defensive technique as Melissa. Yes, back in 1999, attackers already tried to use techniques to defeat users’ protections. The sample macro has a low VT score (7/44) (SHA256:386e1a60011ff0a818adff8c638005ec5015930c1b35d06cacc11f3ab53725d0)… [Read more]

The post [SANS ISC] VBA Macro Trying to Alter the Application Menus appeared first on /dev/random.

February 04, 2021

FOSDEM 2021 starts in only a few hours, so now is the time to check the schedule and mark all talks and stands that interest you! For a preview of how everything looks and works, go to the virtual conference floor and click on any of the links (they all work!). Talks and stands can be found on the schedule (look for the S building to find the stands). All of them will have several links to a live stream (called Video only), a live stream with the Q&A visible (called Video with Q&A) and a dedicated chatroom (called Join…

How to create the long-lasting computer that will save your attention, your wallet, your creativity, your soul and the planet. Killing monopolies will only be a byproduct.

Each time I look at my Hermes Rocket typewriter (on the left in the picture), I’m astonished that the thing looks pretty modern and, after a bit of cleaning, works like a charm. The device is 75 years old and is a very complex piece of technology with more than 2000 moving parts. It’s still one of the best tools to focus on writing. Well, not really. I prefer the younger Lettera 32, which is barely 50 years old (on the right in the picture).

Typewriters are incredibly complex and precise pieces of machinery. At their peak, in the decades around World War II, we built them so well that, today, we don’t need to build any typewriters anymore. We simply have enough of them on earth. You may object that it’s because nobody uses them anymore. That’s not true. Lots of writers keep using them, they became trendy in the 2010s and, to escape surveillance, some secret services even started using them again. It’s a very niche but real market.

Let that idea sink in: we basically built enough typewriters for the world in less than a century. If we want more typewriters, the solution is not to build more but to find them in attics and restore them. For most typewriters, restoration is only a matter of taking the time to do it. There are no complex skills or tools involved. Even the most difficult operations can be learned alone, by simple trial and error. The whole theory needed to understand a typewriter is the typewriter itself.

By contrast, we have to change our laptops every three or four years. Our phones every couple of years. And all the other pieces of equipment (chargers, routers, modems, printers, …) need to be changed regularly.

Even with proper maintenance, they simply fade out. They are not compatible with their environment anymore. It’s impossible for one person alone to understand perfectly what they are doing, let alone repair them. Batteries wear out. Screens crack. Processors become obsolete. Software becomes insecure when it doesn’t crash or refuse to launch.

It’s not that you changed anything in your habits. You still basically communicate with people, look for information, watch videos. But today your work is on Slack. Which requires a modern CPU to load the interface of what is basically a slick IRC. Your videoconference software uses a new codec which requires a new processor. And a new wifi router. Your mail client is now 64 bits only. If you don’t upgrade, you are left out in the cold.

Of course, computers are not typewriters. They do a lot more than typewriters.

But could we imagine a computer built like a typewriter? A computer that could stay with you for your lifetime and get passed to your children?

Could we build a computer designed to last at least fifty years?

Well, given how we use the resources of our planet, the question is not if we could or not. We need to do it, no matter what.

So, how could we build a computer to last fifty years? That’s what I want to explain in this essay. In my notes, I’m referring to this object as the #ForeverComputer. You may find a better name. It’s not really important. It’s not the kind of object that will have a yearly keynote to present the new shiny model and ads everywhere telling us how revolutionary it is.

Focusing on timeless use cases

There’s no way we can predict what the next video codec or the next wifi standard will be. There’s no point in trying. We can’t even guess what kind of online activity will be trendy in the next two years.

Instead of trying to do it all, we could instead focus on building a machine that will do timeless activities and do them well. My typewriter from 1944 is still typing. It is still doing something I find useful. Instead of trying to create a generic gaming station/Netflix watching computer, let’s accept a few constraints.

The machine will be built to communicate in written format. It means writing and reading. That covers already a lot of use cases. Writing documents. Writing emails. Reading mails, documents, ebooks. Searching on the network for information. Reading blogs and newsletters and newsgroups.

It doesn’t seem like much but, if you think about it, it’s already a lot. Lots of people would be happy with a computer that does only that. Of course, graphic designers, movie makers and gamers would not be happy with such a computer. That’s not the point. It’s just that we don’t need a full-fledged machine all the time. Dedicated and powerful workstations would still exist but could be shared, or renewed less often, if everybody had access to their own writing and reading device.

By constraining the use cases, we create lots of design opportunities.

Hardware

The goal of the 50-year computer is not to be tiny, ultra-portable and ultra-powerful. Instead, it should be sturdy and resilient.

Back in the typewriter’s day, a 5 kg machine was considered ultraportable. As someone used to a 900 g MacBook who felt that a 1.1 kg Thinkpad was bulky, I could not imagine being encumbered. But, as I started to write on a Freewrite (pictured between my typewriters), I realised something important. If we want to create long-lasting objects, the objects need to be able to create a connection with us.

A heavier and well-designed object feels different. You don’t have it always with you just in case. You don’t throw it in your bag without thinking about it. It is not there to relieve you from your boredom. Instead, moving the object is a commitment. A conscious act that you need it. You feel it in your hands, you feel the weight. You are telling the object: « I need you. You have a purpose. » When such a commitment is done, the purpose is rarely « scroll an endless stream of cat videos ». Having a purpose makes it harder to throw the object away because a shiny new version has been released. It also helps draw the line between the times where you are using the object and the times you are not.

Besides sturdiness, one main objective of the ForeverComputer would be to use as little electricity as possible. Batteries should be easily swappable.

In order to stay relevant for the next 50 years, the computer needs to be made of easily replaceable parts. Inspirations are the Fairphone and the MNT Reform laptop. The specifications of all the parts need to be open source so anybody can produce them, repair them or even invent alternatives. The parts could be separated into a few logical blocks: the computing unit, which includes the motherboard, CPU and RAM; the powering unit, aka the battery; the screen; the keyboard; the networking unit; the sound unit and the storage unit. All of this comes in a case.

Of course, each block could be made of separate components that could be fixed but making clear logical blocks with defined interfaces allows for easier compatibility.

The body requires special attention because it will be the essence of the object. As with the Ship of Theseus, the computer may stay the same even if you replace every part. But the enclosing case is special. As long as you keep the original case, the feeling toward the object would be that nothing has changed.

Instead of being mass-produced in China, ForeverComputers could be built locally, from open source blueprints. Manufacturers could bring their own skills to the game, their own experience. We could go as far as linking each ForeverComputer to a system like Mattereum, where modifications and repairs would be listed. Each computer would thus be unique, with a history of ownership.

As with the Fairphone, the computer should be built with materials as ethical as possible. If you want to create a connection with an object, if you want to give it a soul, that object should be as respectful of your ethical principles as possible.

Opinionated choices

As we made the choice to mostly use the computer for written interaction, it makes sense, given the current state of the technology, to use an e-ink screen. E-ink screens save a lot of power. This could make all the difference between a device that you need to recharge every night, replacing the battery every two years, and a device that basically sits idle for days, sometimes weeks, and that you recharge once in a while. Or that you never need to recharge if, for example, the external protective case comes with solar panels or an emergency crank.

E-ink is currently harder to use with mice and pointing devices. But we may build the computer without any pointing device. Geeks and programmers know the benefits of keyboard-oriented workflows. They are efficient but hard to learn.

With dedicated software, this problem could be smartly addressed. The Freewrite has a dedicated part of the screen, mostly used for text statistics or displaying the time. The concept could be extended to display available commands. Most people are ready to learn how to use their tools. But, by changing the interface all the time with unexpected upgrades, by asking designers to innovate instead of being useful, we forbid any long-term learning, considering users as idiots instead of empowering them.

Can we create a text-oriented user interface with a gradual learning curve? For a device that should last fifty years, it makes sense. By essence, such a device should reveal itself, unlocking its powers gradually. Careful design will not be about « targeting a given customer segment » but « making it useful to humans who took the time to learn it ».

Of course, one could imagine replacing the input block to have a keyboard with a pointing device, like the famous Thinkpad red dot. Or a USB mouse could be connected. Or the screen could be a touchscreen. But what if we tried to go as far as we could without those?

E-ink and no pointing device would kill the endless scrolling, forcing us to think of the user interface as a textual tool that should be efficient and serve the user, even if it requires some learning. Tools need to be learned and cared for. If you don’t need to learn it, if you don’t need to care for it, then it’s probably not a tool. You are not using it; you are the one being used.

Of course, this doesn’t mean that every user should learn to program in order to use it. A good durable interface requires some learning but doesn’t require a complex mental model. You understand intuitively how a typewriter works. You may have to learn some more complex features like tabulations. But you don’t need to understand how the inside mechanism works to bring the paper forward with each key press.

Offline first

Our current devices expect to be online all the time. If you are disconnected for whatever reason, you will see plenty of notifications, plenty of errors. In 2020, MacOS users infamously discovered that their OS was sending lots of information to Apple’s servers because, for a few hours, those servers were not responding, resulting in an epidemic of bugs and errors. At the same time, simply trying to use my laptop offline allowed me to spot a bug in the Regolith Linux distribution. Expecting to be online, a small applet was trying to reconnect furiously, using all the available CPU. The bug had never been caught before because very few users go offline for an extended period of time (it should be noted that it was fixed in the hours following my initial report; open source is great).

This permanent connectivity has a deep effect on our attention and on the way we use computers. By default, the computer is notifying us all the time with sounds and popups. Disabling those requires heavy configuration and sometimes hacks. On MacOS, for example, you can’t enable Do Not Disturb mode permanently. By design, not being disturbed is something that should be rare. The hack I used was to configure the mode to be set automatically between 3AM and 2AM.

When you are online, your brain knows that something might be happening, even without notification. There might be a new email waiting for you. A new something on a random website. It’s there, right on your computer. Just move the current window out of the way and you may get something that you are craving: newness. You don’t have to think. As soon as you hit a hard thought, your fingers will probably spontaneously find a diversion.

But this permanent connectivity is a choice. We can design a computer to be offline first. Once connected, it will synchronise everything that needs to be: mails will be sent and received, news and podcasts will be downloaded from your favourite websites and RSS, files will be backed up, some websites or gemini pods could even be downloaded down to a given depth. This would be something conscious. The state of your sync will be displayed full screen. By default, you would not be allowed to use the computer while it is online. You would verify that all the sync is finished, then take the computer back offline. Of course, the full screen could be bypassed, but you would need to consciously do it. Being online would not be the mindless default.

This offline first design would also have a profound impact on the hardware. It means that, by default, the networking block could be wired. All you need is a simple RJ-45 plug.

We don’t know how wifi protocols will change. There is a good chance that today’s wifi will not be supported by tomorrow’s routers, or only as a fallback alternative. But chances are that RJ-45 will stay for at least a few decades. And if not RJ-45, a simple adaptor could be printed.

Wifi has other problems: it’s a power hog. It constantly needs to scan in the background. It is unreliable and complex. If you want to briefly connect to wifi, you need to enable wifi, wait for the background scan, choose the network to connect to, cross your fingers that it is not some random access point that wants to spy on your data, enter the password. Wait. Re-enter the password because you probably wrote a zero instead of an O. Wait. It looks to be connected. Is it? Are the files synchronised? Why was the connection interrupted? Am I out of range? Are the walls too thick?

By contrast, all of this could be achieved by plugging in an RJ-45 cable. Is there a small green or orange light? Yes, then the cable is properly plugged in, problem solved. This also adds to the consciousness of connection. You need to walk to a router and physically connect the cable. It feels like loading the tank with information.

Of course, the open source design means that anybody could produce a wifi or 5G network card that you could plug into a ForeverComputer. But, as with pointing devices, it is worth trying to see how far we could go without it.

Introducing peer-to-peer connectivity

The Offline First paradigm leads to a new era of connectivity: physical peer to peer. Instead of connecting to a central server, you could connect two random computers with a simple cable.

During this connection, both computers will tell each other what they need and, if by any chance they can answer one of those needs, they will. They could also transmit encrypted messages for other users, like bottles in the sea. If you ever happen to meet Alice, please give her this message.

Peer-to-peer connectivity implies strong cryptography. Private information should be encrypted with no other metadata than the recipient. The computer connecting to you has no idea whether you are the original sender or just one node in the transmission chain. Public information should be signed, so you are sure that it comes from a user you trust.

This also means that our big hard disks would be used fully. Instead of sitting on a lot of empty disk space, your storage will act as a carrier for others. When full, it will smartly erase older and probably less important stuff.
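The "bottle in the sea" relaying described here is essentially store-and-forward. A toy sketch, with hypothetical names and the actual encryption and signing left out, just to show carrying, eviction of old messages and delivery on a physical meeting:

```python
import time

class Node:
    """Carry encrypted bottles-in-the-sea messages for other users."""

    def __init__(self, name, capacity=3):
        self.name = name
        self.capacity = capacity
        self.carried = {}               # msg_id -> (recipient, payload, timestamp)
        self.inbox = []                 # messages delivered to this node

    def store(self, msg_id, recipient, payload):
        if len(self.carried) >= self.capacity:
            # Storage full: smartly erase the oldest carried message
            oldest = min(self.carried, key=lambda k: self.carried[k][2])
            del self.carried[oldest]
        self.carried[msg_id] = (recipient, payload, time.monotonic())

    def meet(self, other):
        # On a physical connection, deliver anything addressed to the peer
        # and hand over copies of the rest so they keep travelling
        for msg_id, (recipient, payload, _) in list(self.carried.items()):
            if recipient == other.name:
                other.inbox.append(payload)
                del self.carried[msg_id]
            elif msg_id not in other.carried:
                other.store(msg_id, recipient, payload)
```

If Bob hands a message for Alice to Carol, and Carol later meets Alice, the message arrives without any central server involved; in a real design the payloads would be opaque ciphertext and the eviction policy much smarter.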

In order to use my laptop offline, I downloaded Wikipedia, with pictures, using the software Kiwix. It only takes 30 GB of my hard drive and I’m able to have Wikipedia with me all the time. I only miss a towel to be a true galactic hitchhiker.

In this model, big centralised servers only serve as a gateway to make things happen faster. They are not required anymore. If a central gateway disappears, it’s not a big deal.

But it’s not only about Wikipedia. Protocols like IPFS may allow us to build a whole peer-to-peer and serverless Internet. In some rural areas of the planet where broadband is not easily available, such Delay Tolerant Networks (DTNs) are already working and extensively used, including to browse the web.

Software

It goes without saying that, in order to build a computer that can be used for the next 50 years, all the software should be open source.

Open source means that bugs and security issues could be solved long after the company that coded them has disappeared. Once again, look at typewriters. Most companies have disappeared or have been transformed beyond any recognition (try to bring back your IBM Selectric to an IBM dealer and ask for a repair, just to see the look on their face. And, yes, your IBM Selectric is probably exactly 50 years old). But typewriters are still a thing because you don’t need a company to fix them for you. All you need is a bit of time, dexterity and knowledge. For missing parts, other typewriters, sometimes from other brands, can be scavenged.

For a fifty-year computer to hit the market, we need an operating system. This is the easiest part, as the best operating systems out there are already open source. We also need a user interface dedicated to our particular needs. This is hard work but doable.

The peer-to-peer, offline-first networking is probably the most challenging part. As said previously, essential pieces like IPFS already exist. But everything needs to be glued together with a good user interface.

Of course, it might make sense to rely on some centralised servers first. For example, building on Debian and managing to get all dedicated features uploaded as part of the Official Debian repository already offers some long-term guarantees.

The main point is to switch our psychological stance about technological projects. Let’s scrap the Silicon Valley mentality of staying stealthy and then suddenly trying to grab as much market share as possible in order to hire more developers.

The very fact that I’m writing this in public is a commitment to the spirit of the project. If we ever manage to build a computer which is usable in 50 years and I’m involved, I want it highlighted that, since the first description, everything was done in the open and free.

More about the vision

A computer built to last 50 years is not about market shares. It’s not about building a brand, raising money from VC and being bought by a monopoly. It’s not about creating a unicorn or even a good business.

It’s all about creating a tool to help humanity survive. It’s all about taking the best of 8 billion brains to create this tool instead of hiring a few programmers.

Of course, we all need to pay bills. A company might be a good vehicle to create the computer or at least parts of it. There’s nothing wrong with a company. In fact, I think that a company is currently the best option. But, since the very beginning, everything should be built by considering that the product should outlast the company.

Which means that customers will buy a tool. An object. It will be theirs. They could do whatever they want with it afterward.

It seems obvious but, nowadays, nearly every high-tech item we have is not owned by us. We rent them. We depend on the company to use them. We are not allowed to do what we want. We are even forced to do things we don’t want, such as upgrading software at an inappropriate time, sending data about ourselves, hosting software we don’t use and can’t remove, or using proprietary clouds.

When you think about it, the computer built to last 50 years is trying to address the excessive consumption of devices, to fight monopolies, to claim back our attention, our time and our privacy and free us from abusive industries.

Isn’t that a lot for a single device? No, because those problems are all facets of the same issue. You can’t fight them separately. You can’t fight on their own grounds. The only hope? Changing the ground. Changing the rules of the game.

The ForeverComputer is not a replacement. It will not be better than your MacBook or your Android tablet. It will not be cheaper. It will be different. It will be an alternative. It will allow you to use your time on a computer differently.

It doesn’t need to replace everything else to win. It just needs to exist. To provide a safe place. Mastodon will never replace Twitter. Linux desktop never replaced Windows. But they are huge successes because they exist.

We can dream. If the concept becomes popular enough, some businesses might try to become compatible with that niche market. Some popular websites or services may try to become available on a device which is offline most of the time, which doesn’t have a pointer by default and which has only an e-ink screen.

Of course, those businesses would need to find something other than advertising, click rates and views to earn money. That’s the whole point. Each opportunity to replace an advertising job (which includes all the Google and Facebook employees) with an honest way to earn money is a step toward destroying our planet a bit less.

Building the first layers

There’s a fine equilibrium at play when an innovation tries to change our relationship with technology. In order to succeed, you need technologies, a product and content. Most technologists try to build the technology first, then products on top of it, then wait for content. It either fails or becomes a niche thingy. To succeed, there should be a game of back and forth between those steps. People should gradually use the new products without realising it.

The ForeverComputer that I described here would never gain real traction if released today. It would be incompatible with too much of the content we consume every day.

The first very small step I imagined is building content that will, later, already be compatible. Not being a hardware guy (I’m a writer with a software background), it’s also the easiest step I can take today myself.

I call this first step WriteOnly. It doesn’t exist yet but is a lot more realistic than the ForeverComputer.

WriteOnly, as I imagine it, is a minimalist publishing tool for writers. The goal is simple: write markdown text files on your computer. Keep them. And let WriteOnly publish them. The readers choose how they read you. They can read you on a website like a blog, receive your texts by email or RSS if they subscribe, or read you through Gemini, DAT or IPFS. They may receive a notification through a social network or through the fediverse. It doesn’t matter to you. You should not care about it; just write. Your text files are your writing.
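
As a thought experiment, the "one source, many channels" idea can be sketched in a few lines of Python. Everything here is hypothetical (WriteOnly doesn't exist yet), and the markdown handling is deliberately naive:

```python
# Hypothetical sketch: one markdown source, rendered into every
# channel a reader might have chosen. Renderers are pluggable, so
# adding a new channel (Gemini, RSS, ...) never touches the writing.
import html

def to_html(title, body):
    # Very naive markdown handling: blank lines separate paragraphs.
    paragraphs = "".join(f"<p>{html.escape(p)}</p>"
                         for p in body.split("\n\n"))
    return f"<article><h1>{html.escape(title)}</h1>{paragraphs}</article>"

def to_gemtext(title, body):
    # Gemini's gemtext format is already close to plain text.
    return f"# {title}\n\n{body}\n"

RENDERERS = {"html": to_html, "gemini": to_gemtext}

def publish(title, body, renderers=RENDERERS):
    """Render one text into every registered channel at once."""
    return {name: render(title, body) for name, render in renderers.items()}

post = publish("Hello", "First paragraph.\n\nSecond paragraph.")
```

The writer only ever touches the text files; each new output format is one more function in the dictionary.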

Features are minimal. No comments. No tracking. No statistics. Pictures are dithered in greyscale by default (a format that keeps them incredibly light while staying informative, and sharper than full-colour pictures when displayed on an e-ink screen).
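
Dithering deserves a tiny illustration. Here is a hedged sketch of Floyd-Steinberg error diffusion, one classic algorithm (nothing says WriteOnly would use exactly this one): each pixel is rounded to pure black or white, and the rounding error is pushed onto neighbouring pixels, so a flat grey area becomes a black-and-white pattern the eye averages back to grey.

```python
# Minimal Floyd-Steinberg dithering: quantise each greyscale pixel
# to 0 or 255 and diffuse the quantisation error to its neighbours.
def dither(pixels):
    """Dither a 2D list of greyscale values (0-255) to 0/255 in place."""
    h, w = len(pixels), len(pixels[0])
    for y in range(h):
        for x in range(w):
            old = pixels[y][x]
            new = 255 if old >= 128 else 0
            pixels[y][x] = new
            err = old - new
            # Spread the error with the classic 7/16, 3/16, 5/16, 1/16 weights.
            if x + 1 < w:
                pixels[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    pixels[y + 1][x - 1] += err * 3 / 16
                pixels[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    pixels[y + 1][x + 1] += err * 1 / 16
    return pixels

# A flat mid-grey patch comes out roughly half black, half white.
result = dither([[128] * 8 for _ in range(8)])
```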

The goal of WriteOnly is to stop writers worrying about where to post a particular piece. It’s also a fight against censorship and cultural conformity. Writers should not try to write to please the readers of a particular platform according to the metrics of that platform’s moguls. They should connect with their inner selves and write, launching words into the ether.

We never know what will be the impact of our words. We should set our writing free instead of reducing it to a marketing tool to sell stuff or ourselves.

The benefit of a platform like WriteOnly is that adding a new publishing method would automatically make all the existing content available through it. The end goal is to have your writing reachable by everyone without being hosted anywhere in particular. It could be through IPFS, DAT or a new blockchain protocol. We don’t know yet, but we can already work on WriteOnly as an open source platform.

We can also already work on the ForeverComputer. There will probably be different flavours. Some may fail. Some may reinvent personal computing as we know it.

At the very least, I know what I want tomorrow.

I want an open source, sustainable, decentralised, offline-first and durable computer.

I want a computer built to last 50 years and sit on my desk next to my typewriter.

I want a ForeverComputer.

Make it happen

As I said, I’m a software guy. I’m unlikely to make a ForeverComputer happen alone. But I still have a lot of ideas on how to do it. I also want to focus on WriteOnly first. If you think you could help make it a reality and want to invest in this project, contact me at lionel at ploum.net.

If you would like to use a ForeverComputer or WriteOnly, you can either follow this blog (which is mostly in French) or subscribe here to a dedicated mailing list. I will not sell those emails, I will not share them and will not use them for anything else than telling you about the project when it becomes reality. In fact, there’s a good chance that no mail will ever be sent to that dedicated mailing list. And to make things harder, you will have to confirm your email address by clicking on a link in a confirmation mail written in French.


Further Reads

« The Future of Stuff », by Vinay Gupta. A short, must-read book about our relationship with objects and manufacturing.

« The Typewriter Revolution », by Richard Polt. A complete book and guide about the philosophy behind typewriters in the 21st century. Who is using them, why and how to use one yourself in an era of permanent connectivity.

NinjaTrappeur built a home-made digital typewriter with an e-ink screen in a wooden case:
https://alternativebit.fr/posts/ultimate-writer/

Another DIY project with an e-ink screen and a solar panel included:
https://forum.ei2030.org/t/e-ink-low-power-cpu-solar-power-3-sides-of-the-same-lid/82

SL is using an old and experimental operating system (Plan9) which allows him to do only what he wants (mail, simple web browsing and programming).
http://helpful.cat-v.org/Blog/2019/12/03/0/

Two artists living off the grid on a sail boat and connecting only rarely.
https://100r.co/site/working_offgrid_efficiently.html

« If somebody would produce a simple typewriter, an electronic typewriter that was silent, that I could use on airplanes, that would show me a screen of 8 1/2 by 11, like a regular page, and I could store it and print it out as a manuscript, I would buy one in a second! » (Harlan Ellison, SF writer and Star Trek screenwriter)
http://harlanellison.com/interview.htm

LowTech magazine has an excellent article about low-tech Internet, including Delay Tolerant Networks.
https://solar.lowtechmagazine.com/2015/10/how-to-build-a-low-tech-internet.html

Another LowTech magazine article about the impact typewriters and computers had on office work.
https://solar.lowtechmagazine.com/2016/11/why-the-office-needs-a-typewriter-revolution.html

UPDATE 6th Feb 2020 : Completely forgot about Scuttlebutt, which is an offline-first, p2p social network. It does exactly what I’m describing here to communicate.

https://scuttlebutt.nz/get-started/

A good, very short introduction to it on BoingBoing:

https://boingboing.net/2017/04/07/bug-in-tech-for-antipreppers.html

UPDATE 8th Feb 2020 : The excellent « Tales from the Dork Web » has an issue on The 100 Year Computer which is strikingly similar to this piece.

https://thedorkweb.substack.com/p/the-100-year-computer

I also add this attempt at an offline-first protocol: the Pigeon protocol:

https://github.com/PigeonProtocolConsortium/pigeon-spec

And another e-ink DIY typewriter :

https://hackaday.com/2019/02/18/offline-e-paper-typewriter-lets-you-write-without-distractions/

UPDATE 15th Feb 2020 : Designer Micah Daigle has proposed the concept of the Prose, an e-ink, distraction-free laptop.

https://medium.com/this-should-exist/prose-a-distraction-free-e-ink-laptop-for-thinkers-writers-4182a62d63b2

I am @ploum, an engineer and writer. Subscribe by email or RSS so you don't miss a post (two per week at most). I'm convinced that Printeurs, my latest science-fiction novel, will captivate you. Ordering my books is the best way to support me and to help me spread my ideas!


This text is published under the CC-By BE licence.

February 02, 2021

I published the following diary on isc.sans.edu: “New Example of XSL Script Processing aka ‘Mitre T1220‘”:

Last week, Brad posted a diary about TA551. A few days later, one of our readers submitted another sample belonging to the same campaign. Brad had a look at the traffic, so I decided to have a look at the macro, not because the code is heavily obfuscated but because the data are spread across different locations in the Word document… [Read more]

The post [SANS ISC] New Example of XSL Script Processing aka “Mitre T1220” appeared first on /dev/random.

February 01, 2021

FOSDEM is all about talks, so we're going to delve a bit deeper into that topic. FOSDEM 2021 Online will happen through fosdem.org/2021/schedule, our main website. Each talk is in a room, and each room maps to a virtual room on chat.fosdem.org, combining video and Q&A. We'll add a link to each room, but the virtual room name is the same as the room name minus the bit before the dot. So you can find the M.misc room at chat.fosdem.org/#/room/#misc:fosdem.org. You do not need an account to view the talks in the virtual room. Find more details on the…

January 31, 2021

FOSDEM is in 6 days, so now is a good time to check whether you can visit the conference. We're going to walk through all of the bits and pieces so you are fully prepared on Saturday and Sunday between 10am and 6pm CET. FOSDEM 2021 Online will happen through fosdem.org/2021/schedule, our main website. Each talk is in a room, and each room maps to a virtual room on chat.fosdem.org, combining video and Q&A. We'll add a link to each room, but the virtual room name is the same as the room name minus the bit before the dot.…

January 30, 2021

FOSDEM 2021 is just around the corner, and some of you are probably curious how everything is going to work. After all, in other years you simply had to go to the ULB, find your room (always a challenge), sit back and enjoy. But how is that going to work in this virtual event? Read more and find out! Like every year, there will be talks, devrooms and stands. While you would normally try to get in a room to see the talk, and watch the video only afterwards (or as a substitute because the room was full); this year,…

January 29, 2021

For the second year, Acquia was named a Leader in Gartner's Magic Quadrant for Digital Experience Platforms (DXPs).

Gartner magic quadrant for digital experience platforms

Our leadership position improved compared to last year. Acquia is now the clear number two behind Adobe. Market validation from Gartner on our vision is exciting and encouraging.

In the report, the analysts note the Drupal community as a powerful entity that sets Acquia apart from closed monoliths. Closed monolithic stacks and martech silos are quickly becoming a thing of the past. Drupal's scale, modularity and openness are a real differentiator.

Mandatory disclaimer from Gartner

Gartner, Magic Quadrant for Digital Experience Platforms, Irina Guseva, Mick MacComascaigh, Mike Lowndes, January 27, 2021.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Acquia.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

I published the following diary on isc.sans.edu: “Sensitive Data Shared with Cloud Services“:

Yesterday was the data protection day in Europe. I was not on duty so I’m writing this quick diary a bit late. Back in 2020, the Nitro PDF service suffered from a data breach that impacted many companies around the world. This popular service allows you to create, edit and sign PDF documents. A few days ago, the database leak was released in the wild:  14GB compressed, 77M credentials… [Read more]


The post [SANS ISC] Sensitive Data Shared with Cloud Services appeared first on /dev/random.

January 27, 2021

With this year’s virtual edition of FOSDEM fast approaching, we look forward to inviting you to volunteer to help support the running of the event. The twenty-first edition will take place Saturday 6th and Sunday 7th February 2021, online. We will be running a variety of streams including Developer Rooms, Stands, Main Tracks and Lightning Talks. Because we are sadly unable to meet in person at the ULB in Brussels, a lot of our usual tasks will be superfluous - don’t worry, we won’t send you to an attendee’s house to help clean up a spilled drink! There will…

Hello interwebz!

I am looking for a non-invasive front door key turning lock addon like https://nuki.io without too much brain. I'd rather run the brain myself in a way I can trust.

 The invasive version that requires changing the lock is easy to find. The non-invasive one, not so much...

Any hints?

January 26, 2021

*tap tap tap* Hello? Is this mic on? Is anyone reading this?

The post appeared first on amedee.be.

January 25, 2021

In many cases you control your home-automation system through a web interface on your laptop or a mobile app on your smartphone. But always having access to it through a small screen on the wall is at least as convenient. You can build such a device yourself with a microcontroller board, a small display and some buttons, but if you want to use it in the living room, you don't want loose wires and bare circuit boards in sight.

Homepoint on an M5Stack Core

M5Stack, a young Chinese company, offers all kinds of products built around an ESP32 microcontroller that don't look like development boards. Particularly interesting is the M5Stack Core BASIC Kit: a modular, stackable ESP32 board with a 2-inch LCD screen and three buttons in an understated enclosure. It doesn't look out of place in your living room. The module is very compact: 13 mm high, and 54 mm long and wide. By now I have several of them at home.

One of the programs you can install on the M5Stack Core 1 is Homepoint. It provides an interface to control an MQTT-based home-automation system and to show the status of sensors.

For Computer!Totaal I wrote an article on how to install Homepoint on your M5Stack Core and turn it into a dashboard for your home-automation system. In it I explain various configurations. You can also find a complete configuration in the article about Homepoint I wrote a while ago in the English-language part of my blog.

A system like Homepoint on a device like the M5Stack Core makes your home-automation system much more accessible. A dashboard that is always available, without having to open a laptop or unlock a smartphone, makes you use your sensor data far more. The question "Is it cold outside?" is now answered quickly here :-)

1

The M5Core2 is not yet supported at the moment.

January 22, 2021

It’s very tempting and, honestly, I do it from time to time… I search for pictures on the Internet and use them in my documents! Why can this be dangerous in some cases? Let’s put aside copyright issues (yes, some pictures might not be free to use) and focus on the risk of linking pictures into your presentations, reports, …

I just faced an interesting case. A Word document passed through a set of security controls and was put in quarantine due to “suspicious HTTP activity” once opened. When you read this, you immediately think of a VBA macro trying to download some payload from the wild Internet, right? After a first suite of classic tests, the document did not appear to be malicious:

  • No “executable” code in the file (VBA, exploit, …)
  • The document metadata were correct (names, organisation) (ok, they can easily be changed if the attacker is clever)
  • The document was delivered through an email sent by the purported sender (from his network, his servers, etc.)

Was it a BEC attack, maybe? No Sir, it was not. The recipient was contacted; he confirmed being in contact with the sender and even requested that the document be released from quarantine. So, what’s next?

The suspicious document contained a header with logos, and one of them was linked to a URL pointing to a WordPress site. Ok, we found the culprit! An exploit would probably be delivered with the help of a vulnerable WordPress plugin. It was not possible to verify this scenario because the website replied to all requests with an HTTP/301 (redirect). The traffic was redirected to another site about football, and another redirect landed on a Turkish online betting service. This last page had many pieces of JavaScript but, after checking them, none was malicious. Note that the first website was hosted on a domain registered more than two years ago; once again, not suspicious.

After contacting the sender of the initial email, things became clearer. The document was created from an old template. The author searched the web for a “nice” logo to add to the header and found one, probably via a search engine. The image was added as a linked object into the document, so every time the document is opened, Word performs an HTTP request against the URL. In this case, Word followed the multiple HTTP redirects, hit the JavaScript code, and the document was flagged as malicious…

How does this work? Let’s check the following Word document:


The small bug is a picture grabbed from a website and added as an object in the document. You can see and edit links in a Word document via the “Edit Links” feature:


The link to the picture could have been found by using any search engine. Note also that the automatic update of the picture is disabled.
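
Such external links are easy to audit yourself: a .docx file is just a ZIP archive, and linked (rather than embedded) resources appear as relationships marked TargetMode="External" in the .rels parts of the package. A small sketch (the function name and demo URL are mine):

```python
# List the external URLs referenced by a .docx, so a document can be
# audited before it is ever opened in Word. Linked pictures live in
# the package relationship files (word/_rels/*.rels) with the
# attribute TargetMode="External".
import io
import zipfile
import xml.etree.ElementTree as ET

REL_NS = "{http://schemas.openxmlformats.org/package/2006/relationships}"

def external_targets(docx):
    """Return the external URLs referenced by a .docx document."""
    urls = []
    with zipfile.ZipFile(docx) as z:
        for name in z.namelist():
            if not name.endswith(".rels"):
                continue
            root = ET.fromstring(z.read(name))
            for rel in root.iter(REL_NS + "Relationship"):
                if rel.get("TargetMode") == "External":
                    urls.append(rel.get("Target"))
    return urls

# Demo: a tiny in-memory "document" with one linked picture
# (hypothetical URL) and one embedded picture.
rels = ('<Relationships xmlns='
        '"http://schemas.openxmlformats.org/package/2006/relationships">'
        '<Relationship Id="rId4" Type="img" '
        'Target="http://example.com/images/bug.png" TargetMode="External"/>'
        '<Relationship Id="rId5" Type="img" Target="media/image1.png"/>'
        '</Relationships>')
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("word/_rels/document.xml.rels", rels)
found = external_targets(buf)
```

Only the linked picture shows up; the embedded one stays internal to the archive.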

Even if the automatic update is disabled, Word will generate some HTTP requests every time the document is opened:

x.x.x.x - - [21/Jan/2021:15:23:09 +0100] "OPTIONS /images/bug.png/ HTTP/1.1" 200 3600 "-" "Microsoft Office Word 2014"
x.x.x.x - - [21/Jan/2021:15:23:09 +0100] "OPTIONS /images/bug.png/ HTTP/1.1" 200 3600 "-" "Microsoft Office Word 2014"
x.x.x.x - - [21/Jan/2021:15:23:10 +0100] "OPTIONS /images/bug.png/ HTTP/1.1" 200 308 "-" "Microsoft Office Word"
x.x.x.x - - [21/Jan/2021:15:23:10 +0100] "OPTIONS /images/bug.png/ HTTP/1.1" 200 3600 "-" "Microsoft Office Protocol Discovery"
x.x.x.x - - [21/Jan/2021:15:23:10 +0100] "HEAD /images/bug.png HTTP/1.1" 200 378 "-" "Microsoft Office Word 2014"
x.x.x.x - - [21/Jan/2021:15:23:10 +0100] "OPTIONS /images/bug.png/ HTTP/1.1" 200 308 "-" "Microsoft Office Word 2014"
x.x.x.x - - [21/Jan/2021:15:23:10 +0100] "HEAD /images/bug.png HTTP/1.1" 200 378 "-" "Microsoft Office Word 2014"
x.x.x.x - - [21/Jan/2021:15:23:10 +0100] "HEAD /images/bug.png HTTP/1.1" 200 378 "-" "Microsoft Office Existence Discovery"
x.x.x.x - - [21/Jan/2021:15:23:20 +0100] "GET /images/bug.png HTTP/1.1" 200 14625 "-" "Mozilla/4.0 (compatible; ms-office; MSOffice 16)"

Why is this technique dangerous? Here are some attack scenarios.

First, it’s possible to change the ranking of a website (and its pictures) in search engines using SEO techniques. If you know that your victims are looking for very specific pictures, you can expect the search engine to propose your pictures first.

If attackers have access to a document containing such a URL, they can target the site hosting the picture, compromise it and replace the original picture with another one. This is even more dangerous if auto-update is enabled in the document.

If attackers have access to the server hosting the picture, they can also track who opens the document and collect public IP addresses and User-Agents (revealing the Word version, as you can see above).

If the website hosting the picture you linked gets flagged as suspicious, your document might be flagged as suspicious too (like in the case I explained above).

My recommendation is to NOT link pictures directly in your Word documents. Search for pictures with your favourite search engine, download them to your file system and add them manually into your documents. The key point is to prevent Word from performing HTTP requests on its own! This is a common hunting rule implemented by many SOCs: try to spot HTTP traffic not originating from a classic browser.
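
That hunting rule can be sketched as a trivial filter over access or proxy logs. The patterns below only cover the Office user-agents visible in the log excerpt above; a real SOC rule would use a much longer list:

```python
# Flag log lines whose quoted User-Agent is not a regular browser.
# In the Apache combined log format, the User-Agent is the last
# quoted field of each line.
import re

NON_BROWSER = re.compile(r"Microsoft Office|ms-office|MSOffice", re.I)

def suspicious(log_line):
    """True when the User-Agent field points at a non-browser client."""
    quoted = re.findall(r'"([^"]*)"', log_line)
    return bool(quoted) and bool(NON_BROWSER.search(quoted[-1]))

office = ('x.x.x.x - - [21/Jan/2021:15:23:10 +0100] "HEAD /images/bug.png '
          'HTTP/1.1" 200 378 "-" "Microsoft Office Existence Discovery"')
browser = ('x.x.x.x - - [21/Jan/2021:15:23:42 +0100] "GET /index.html '
           'HTTP/1.1" 200 5120 "-" "Mozilla/5.0 (X11; Linux x86_64) '
           'Firefox/85.0"')
```

Note that the trick of matching on "Mozilla" alone would not work: as the last log line of the excerpt shows, Office itself sends a Mozilla-compatible string, so the Office markers inside the string are what give it away.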

The post Be Careful When Using Images Grabbed Online In Your Documents appeared first on /dev/random.

I published the following diary on isc.sans.edu: “Another File Extension to Block in your MTA: .jnlp“:

When hunting, one thing that I like to learn is how attackers can be imaginative at deploying new techniques. I spotted some emails that had suspicious attachments based on the ‘.jnlp’ extension. I’m pretty sure that many people don’t know what’s their purpose and, if you don’t know them, you don’t have a look at them on your logs, SIEM, … That makes them a good candidate to deliver malicious code… [Read more]

The post [SANS ISC] Another File Extension to Block in your MTA: .jnlp appeared first on /dev/random.

January 21, 2021

If you've been into home automation for a while, you probably have devices in your house using all kinds of protocols: Z-Wave, ZigBee, KNX, Loxone, but also Wi-Fi and Bluetooth. That quickly creates a language barrier, with every device speaking its own dialect.

Manufacturers thought they had found a solution: they developed their own home-automation platforms supporting all these standards. That's how you get HomeKit from Apple and SmartThings from Samsung. And of course there are also open source platforms achieving the same, such as Home Assistant.

But why go to all that trouble when an open protocol for exchanging messages over IP networks in a standardised way has existed for twenty years? MQTT (Message Queuing Telemetry Transport) is the protocol for home automation. The project's website illustrates the concept of MQTT nicely:

/images/mqtt-publish-subscribe.png
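
The heart of that publish/subscribe model is topic matching: subscribers register a topic filter, and the broker matches each published topic against it. As a hedged sketch (real MQTT adds details such as special handling of topics starting with "$"), "+" matches exactly one topic level and "#" matches the whole remainder:

```python
# Minimal MQTT-style topic matching: "+" is a single-level wildcard,
# "#" is a multi-level wildcard that must stand on its own level.
def topic_matches(topic_filter, topic):
    f_parts = topic_filter.split("/")
    t_parts = topic.split("/")
    for i, part in enumerate(f_parts):
        if part == "#":                    # matches everything from here on
            return True
        if i >= len(t_parts):
            return False
        if part not in ("+", t_parts[i]):  # "+" = any single level
            return False
    return len(f_parts) == len(t_parts)
```

So a dashboard subscribing to home/+/temperature receives home/attic/temperature and home/cellar/temperature alike, while home/# catches everything the house publishes.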

In an article in Computer!Totaal I explain how MQTT works and how to use it for home automation, for example by installing an MQTT broker such as Eclipse Mosquitto on a Raspberry Pi or a Linux server at home. You can also read how to integrate Home Assistant and Node-RED with your MQTT broker.

I devote the biggest part of the article to various projects that bridge other protocols to MQTT and that you can install on a Raspberry Pi:

  • bt-mqtt-gateway to read Bluetooth sensors

  • rtl_433toMQTT to read, via an RTL-SDR USB dongle, wireless weather sensors transmitting in the ISM band around 433.92 MHz

  • Zwave2Mqtt to control Z-Wave devices

  • Zigbee2MQTT to control Zigbee devices (including Philips Hue and IKEA Trådfri)

  • knx-mqtt-bridge to control KNX devices

  • LoxBerry with its MQTT Gateway to communicate with Loxone devices

I have been using most of these projects for years in my own home-automation system, which runs on a Raspberry Pi 4.

A flexible home-automation system

So why not use a home-automation platform like Home Assistant, which also supports all kinds of protocols and offers an all-in-one solution? That is indeed how I started, but Home Assistant is rather restrictive: it assumes you run everything on one machine. If your Raspberry Pi with Home Assistant sits in the basement while your Zigbee devices are mostly in the attic, the Zigbee transceiver attached to your Raspberry Pi will have poor reception. By letting the programs above translate all those protocols to MQTT, you can in principle run each of them on a different machine, placed wherever it has the best range.

Moreover, MQTT has existed much longer than Home Assistant and is very flexible. I still use Home Assistant, but not as an all-in-one solution: in my home-automation system it is just one of many independently operating components, namely a front-end. Home Assistant integrates with MQTT very well. If I ever want to switch to another front-end, I can do so easily without any impact on the other components, as long as that front-end supports MQTT. And if I want to replace bt-mqtt-gateway and rtl_433toMQTT with OpenMQTTGateway on an ESP32 microcontroller board, I can do that too. Thanks to MQTT, my home-automation system can evolve along with my changing needs.

I published the following diary on isc.sans.edu: “Powershell Dropping a REvil Ransomware“:

I spotted a piece of Powershell code that deserved some investigation because it makes use of RunSpaces. The file (SHA256:e1e19d637e6744fedb76a9008952e01ee6dabaecbc6ad2701dfac6aab149cecf) has a very low VT score: only 1/59!

The technique behind RunSpaces is helpful to create new threads on the existing Powershell process, and you can simply add what you need to it and send it off running. Here is an example of Runspace created by the malicious script… [Read more]

The post [SANS ISC] Powershell Dropping a REvil Ransomware appeared first on /dev/random.

January 17, 2021

The other day, we went to a designer's fashion shop whose owner was rather adamant that he was never ever going to wear a face mask, and that he didn't believe the COVID-19 thing was real. When I argued for the opposing position, he pretty much dismissed what I said out of hand, claiming that "the hospitals are empty dude" and "it's all a lie". When I told him that this really isn't true, he went like "well, that's just your opinion". Well, no -- certain things are facts, not opinions. Even if you don't believe that this disease kills people, the idea that this is a matter of opinion is missing the ball by so much that I was pretty much stunned by the level of ignorance.

His whole demeanor pissed me off rather quickly. While I disagree with the position that it should be your decision whether or not to wear a mask, it's certainly possible to have that opinion. However, whether or not people need to go to hospitals is not an opinion -- it's something else entirely.

After calming down, the encounter got me thinking, and made me focus on something I'd been thinking about before but hadn't fully formulated: the fact that some people in this world seem to misunderstand the nature of what it is to do science, and end up, under the claim of being "sceptical", with various nonsense things -- see scientology, flat earth societies, conspiracy theories, and what have you.

So, here's something that might (but probably won't) help some people figuring out stuff. Even if it doesn't, it's been bothering me and I want to write it down so it won't bother me again. If you know all this stuff, it might be boring and you might want to skip this post. Otherwise, take a deep breath and read on...

Statements are things people say. They can be true or false; "the sun is blue" is an example of a statement that is trivially false. "The sun produces light" is another one that is trivially true. "The sun produces light through a process that includes hydrogen fusion" is another statement, one that is a bit more difficult to prove true or false. Another example is "Wouter Verhelst does not have a favourite color". That happens to be a true statement, but it's fairly difficult for anyone that isn't me (or any one of the other Wouters Verhelst out there) to validate as true.

While statements can be true or false, combining statements without more context is not always possible. As an example, the statement "Wouter Verhelst is a Debian Developer" is a true statement, as is the statement "Wouter Verhelst is a professional volleyball player"; but the statement "Wouter Verhelst is a professional volleyball player and a Debian Developer" is not, because while I am a Debian Developer, I am not a professional volleyball player -- I just happen to share a name with someone who is.

A statement is never a fact, but it can describe a fact. When a statement is a true statement, either because we trivially know what it states to be true or because we have performed an experiment that proved beyond any possible doubt that the statement is true, then what the statement describes is a fact. For example, "Red is a color" is a statement that describes a fact (because, yes, red is definitely a color, that is a fact). Such statements are called statements of fact. There are other possible statements. "Grass is purple" is a statement, but it is not a statement of fact; because as everyone knows, grass is (usually) green.

A statement can also describe an opinion. "The Porsche 911 is a nice car" is a statement of opinion. It is one I happen to agree with, but it is certainly valid for someone else to make a statement that conflicts with this position, and there is nothing wrong with that. As the saying goes, "opinions are like assholes: everyone has one". Statements describing opinions are known as statements of opinion.

The differentiating factor between facts and opinions is that facts are universally true, whereas opinions only hold for the people who state the opinion and anyone who agrees with them. Sometimes it's difficult or even impossible to determine whether a statement is true or not. The statement "The numbers that win the South African Powerball lottery on the 31st of July 2020 are 2, 3, 5, 19, 35, and powerball 14" is not a statement of fact, because at the time of writing, the 31st of July 2020 is in the future, which at this point gives it a 1 in 24,435,180 chance to be true). However, that does not make it a statement of opinion; it is not my opinion that the above numbers will win the South African powerball; instead, it is my guess that those numbers will be correct. Another word for "guess" is hypothesis: a hypothesis is a statement that may be universally true or universally false, but for which the truth -- or its lack thereof -- cannot currently be proven beyond doubt. On Saturday, August 1st, 2020 the above statement about the South African Powerball may become a statement of fact; most likely however, it will instead become a false statement.
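
Incidentally, the 1 in 24,435,180 figure checks out, assuming the South African Powerball format of the time: five balls drawn from 45, plus one powerball drawn from 20.

```python
# Odds of one Powerball ticket, assuming 5 balls chosen from 45
# plus a separate powerball chosen from 20: C(45, 5) * 20.
from math import comb

combinations = comb(45, 5) * 20
```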

An unproven hypothesis may be expressed as a matter of belief. The statement "There is a God who rules the heavens and the Earth" cannot currently (or ever) be proven beyond doubt to be either true or false, which by definition makes it a hypothesis; however, for matters of religion this is entirely unimportant, as for believers the belief that the statement is correct is all that matters, whereas for nonbelievers the truth of that statement is not at all relevant. A belief is not an opinion; an opinion is not a belief.

Scientists do not deal with unproven hypotheses, except insofar as they attempt to prove, through direct observation of nature (either out in the field or in a controlled laboratory setting) that the hypothesis is, in fact, a statement of fact. This makes unprovable hypotheses unscientific -- but that does not mean that they are false, or even that they are uninteresting statements. Unscientific statements are merely statements that science cannot either prove or disprove, and that therefore lie outside of the realm of what science deals with.

Given that background, I have always found the so-called "conflict" between science and religion to be a non-sequitur. Religion deals in one type of statements; science deals in another. They do not overlap, since a statement can either be proven or it cannot, and religious statements by their very nature focus on unprovable belief rather than universal truth. Sure, the range of things that science has figured out the facts about has grown over time, which implies that religious statements have sometimes been proven false; but is it heresy to say that "animals exist that can run 120 kph" if that is the truth, even if such animals don't exist in, say, Rome?

Something very similar can be said about conspiracy theories. Yes, it is possible to hypothesize that NASA did not send men to the moon, and that all the proof contrary to that statement was somehow fabricated. However, by its very nature such a hypothesis cannot be proven or disproven (because the statement states that all proof was fabricated), which therefore implies that it is an unscientific statement.

It is good to be sceptical about what is being said to you. People can have various ideas about how the world works, but only one of those ideas -- one of the possible hypotheses -- can be true. As long as a hypothesis remains unproven, scientists love to be sceptical themselves. In fact, if you can somehow prove beyond doubt that a scientific hypothesis is false, scientists will love you -- it means they now know something more about the world and that they'll have to come up with something else, which is a lot of fun.

When a scientific experiment or observation proves that a certain hypothesis is true, then this probably turns the hypothesis into a statement of fact. That is, it is of course possible that there's a flaw in the proof, or that the experiment failed (but that the failure was somehow missed), or that no observance of a particular event happened when a scientist tried to observe something, but that this was only because the scientist missed it. If you can show that any of those possibilities hold for a scientific proof, then you'll have turned a statement of fact back into a hypothesis, or even (depending on the exact nature of the flaw) into a false statement.

There's more. It's human nature to want to be rich and famous, sometimes no matter what the cost. As such, there have been scientists who have falsified experimental results, or who have claimed to have observed something when this was not the case. For that reason, a scientific paper that gets written after an experiment turned a hypothesis into fact describes not only the results of the experiment and the observed behavior, but also the methodology: the way in which the experiment was run, with enough details so that anyone can retry the experiment.

Sometimes that may mean spending a large amount of money just to be able to run the experiment (most people don't have an LHC in their backyard, say), and in some cases some of the required materials won't be available (the latter is especially true for, e.g., certain chemical experiments that involve highly explosive things); but the information is always there, and if you spend enough time and money reading through the available papers, you will be able to independently prove the hypothesis yourself. Scientists tend to do just that; when the results of a new experiment are published, they will try to rerun the experiment, partially because they want to see things with their own eyes; but partially also because if they can find fault in the experiment or the observed behavior, they'll have reason to write a paper of their own, which will make them a bit more rich and famous.

I guess you could say that there are three types of people who deal with statements: scientists, who deal with provable hypotheses and statements of fact (but who have no use for unprovable hypotheses and statements of opinion); religious people and conspiracy theorists, who deal with unprovable hypotheses (where the religious people deal with these to serve a larger cause, while conspiracy theorists only care about the unprovable hypotheses themselves); and politicians, who should care about proven statements of fact and produce statements of opinion, but who usually attempt the reverse of those two these days :-/

Anyway...


Just over 7 months ago, I blogged about extrepo, my answer to the "how do you safely install software on Debian without downloading random scripts off the Internet and running them as root" question. I also held a talk during the recent "MiniDebConf Online" that was held, well, online.

The most important part of extrepo is "what can you install through it". If the number of available repositories is too low, there's really no reason to use it. So, I thought, let's look what we have after 7 months...

To cut to the chase, there's a bunch of interesting content there, although not all of it has a "main" policy. Each of these can be enabled by installing extrepo, and then running extrepo enable <reponame>, where <reponame> is the name of the repository.
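
As a concrete sketch (commands assume root privileges; winehq is one of the repositories described below):

```shell
# Install extrepo, then enable one of the repositories it knows about.
apt install extrepo
extrepo enable winehq
# Refresh the package lists so the newly enabled repository is picked up.
apt update
```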

Note that the list is not exhaustive, but I intend to show that even though we're nowhere near complete, extrepo is already quite useful in its current state:

Free software

  • The debian_official, debian_backports, and debian_experimental repositories contain Debian's official, backports, and experimental repositories, respectively. These shouldn't have to be managed through extrepo, but then again it might be useful for someone, so I decided to just add them anyway. The config here uses the deb.debian.org alias for CDN-backed package mirrors.
  • The belgium_eid repository contains the Belgian eID software. Obviously this is added, since I'm upstream for eID, and as such it was a large motivating factor for me to actually write extrepo in the first place.
  • elastic: the elasticsearch software.
  • Some repositories, such as dovecot, winehq and bareos contain upstream versions of their respective software. These repositories contain software that is available in Debian, too; but their upstreams package their most recent releases independently, and some people might prefer to run those instead.
  • The sury, fai, and postgresql repositories, as well as a number of repositories such as openstack_rocky, openstack_train, haproxy-1.5 and haproxy-2.0 (there are more) contain more recent versions of software that is already packaged in Debian. For the sury repository, that is PHP; for the others, the name should give it away.

    The difference between these repositories and the ones in the previous item is that here the official Debian maintainer of the software also maintains the external repository, which is not the case for the others.

  • The vscodium repository contains the unencumbered version of Microsoft's Visual Studio Code; i.e., codium is to code as the chromium browser is to chrome: it is a build of the same software, but without the non-free bits that make code not entirely Free Software.
  • While Debian ships with at least two browsers (Firefox and Chromium), additional browsers are available through extrepo, too. The iridiumbrowser repository contains a Chromium-based browser that focuses on privacy.
  • Speaking of privacy, perhaps you might want to try out the torproject repository.
  • For those who want to do Cloud Computing on Debian in ways that aren't covered by Openstack, there is a kubernetes repository that contains the Kubernetes stack, as well as the google_cloud repository containing the Google Cloud SDK.

Non-free software

While these are available to be installed through extrepo, please note that non-free and contrib repositories are disabled by default. Before any of them can be enabled, the corresponding policies must first be allowed in /etc/extrepo/config.yaml.
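
What such a change could look like (a sketch: the key name is taken from the extrepo version current at the time of writing, so verify it against your installed copy):

```yaml
# /etc/extrepo/config.yaml -- which policy classes extrepo may enable.
# Adding "contrib" and "non-free" here allows e.g. "extrepo enable vscode".
enabled_policies:
- main
- contrib
- non-free
```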

  • In case you don't care about freedom and want the official build of Visual Studio Code, the vscode repository contains it.
  • While we're on the subject of Microsoft, there's also Microsoft Teams available in the msteams repository. And, hey, skype.
  • For those who are not satisfied with the free browsers in Debian or any of the free repositories, there's opera and google_chrome.
  • The docker-ce repository contains the official build of Docker CE. While this is the free "community edition" that should have free licenses, I could not find a licensing statement anywhere, and therefore I'm not 100% sure whether this repository is actually free software. For that reason, it is currently marked as a non-free one. Merge Requests for rectifying that from someone with more information on the actual licensing situation of Docker CE would be welcome...
  • For gamers, there's Valve's steam repository.

Again, the above lists are not meant to be exhaustive.

Special thanks go out to Russ Allbery, Kim Alvefur, Vincent Bernat, Nick Black, Arnaud Ferraris, Thorsten Glaser, Thomas Goirand, Juri Grabowski, Paolo Greppi, and Josh Triplett, for helping me build the current list of repositories.

Is your favourite repository not listed? Create a configuration based on template.yaml, and file a merge request!

... Why do you have to be so effing difficult about a YouTube API project that is used for a single event per year?

FOSDEM creates 600+ videos on a yearly basis. There is no way I am going to manually upload 600+ videos through your web interface, so we use the API you provide, using a script written by Stefano Rivera. This script grabs video filenames and metadata from a YAML file, and then uses your APIs to upload said videos with said metadata. It works quite well. I run it from cron, and it uploads files until the quota is exhausted, then waits until the next time the cron job runs. It ran so well that, the first time we used it, we could upload 50+ videos on a daily basis, and the uploads were done as soon as all the videos were created, which was a few months after the event. Cool!
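
A rough sketch of such an upload loop, for illustration only: the YAML layout, the function names, and the quota handling below are my assumptions, not Stefano Rivera's actual script.

```python
def video_body(entry):
    """Map one metadata record (from the YAML file) to a request body for
    the YouTube Data API v3 videos.insert call. The field names ("title",
    "description") are assumed, not FOSDEM's actual schema."""
    return {
        "snippet": {
            "title": entry["title"],
            "description": entry.get("description", ""),
        },
        "status": {"privacyStatus": "public"},
    }


def upload_all(entries, credentials):
    """Upload videos until the daily API quota runs out; cron retries later.
    Imports are done lazily so the metadata mapping above can be used
    without google-api-python-client installed."""
    from googleapiclient.discovery import build
    from googleapiclient.errors import HttpError
    from googleapiclient.http import MediaFileUpload

    youtube = build("youtube", "v3", credentials=credentials)
    for entry in entries:
        try:
            youtube.videos().insert(
                part="snippet,status",
                body=video_body(entry),
                media_body=MediaFileUpload(entry["filename"], resumable=True),
            ).execute()
        except HttpError as err:
            if err.resp.status == 403:  # simplistic quota check: stop for today
                break
            raise
```

The bare 403 check is a simplification; a real script would presumably inspect the error reason (quotaExceeded) in the response body rather than the status code alone.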

The second time we used the script, it did not work at all. We asked one of our keynote speakers, who happened to be a hotshot at your company, to help us out. He contacted the YouTube people, and whatever had been broken was quickly fixed, so yay, uploads worked again.

I found out later that this is actually a normal thing if you don't use your API quota for 90 days or more. Because it's happened to us every bloody year.

For the 2020 event, rather than going through back channels (which happened to be unavailable this edition), I tried to use your normal ways of unblocking the API project. This involves creating a screencast of a bloody command line script and describing various things that don't apply to FOSDEM and ghaah shoot me now so meh, I created a new API project instead, and had the uploads go through that. Doing so gives me a limited quota that only allows about 5 or 6 videos per day, but that's fine, it gives people subscribed to our channel the time to actually watch all the videos while they're being uploaded, rather than being presented with a boatload of videos that they can never watch in a day. Also it doesn't overload subscribers, so yay.

About three months ago, I started uploading videos. Since then, every day, the "fosdemtalks" channel on YouTube has published five or six videos.

Given that, imagine my surprise when I found this in my mailbox this morning...

Google lies, claiming that my YouTube API project isn't being used for 90 days and informing me that it will be disabled

This is an outright lie, Google.

The project was created 90 days ago, yes, that's correct. It has been used every day since then to upload videos.

I guess that means I'll have to deal with your broken automatic content filters to try and get stuff unblocked...

... or I could just give up and not do this anymore. After all, all the FOSDEM content is available on our public video host, too.

... isn't ready yet, but it's getting there.

I had planned to release a new version of SReview, my online video review and transcoding system that I wrote originally for FOSDEM but is being used for DebConf, too, after it was set up and running properly for FOSDEM 2020. However, things got a bit busy (both in my personal life and in the world at large), so it fell a bit by the wayside.

I've now also been working on things a bit more, in preparation for an improved administrator's interface, and have started implementing a REST API to deal with talks etc through HTTP calls. This seems to be coming along nicely, thanks to OpenAPI and the Mojolicious plugin for parsing that. I can now design the API nicely, and autogenerate client side libraries to call them.

While at it, because libmojolicious-plugin-openapi-perl isn't available in Debian 10 "buster", I moved the docker containers over from stable to testing. This revealed that both bs1770gain and inkscape changed their command line incompatibly, resulting in me having to work around those incompatibilities. The good news is that I managed to do so in a way that keeps running SReview on Debian 10 viable, provided one installs Mojolicious::Plugin::OpenAPI from CPAN rather than from a Debian package. Or installs a backport of that package, of course. Or, heck, uses the Docker containers in a kubernetes environment or some such -- I'd love to see someone use that in production.

Anyway, I'm still finishing the API, and the implementation of that API and the test suite that ensures the API works correctly, but progress is happening; and as soon as things seem to be working properly, I'll do a release of SReview 0.6, and will upload that to Debian.

Hopefully that'll be soon.

January 15, 2021

Birthday cup cakes

On January 15, 2001, exactly 20 years ago, I released Drupal 1.0.0 into the world. I was 22 years old, and just finished college. At the time, I had no idea that Drupal would someday power 1 in 35 websites, and impact so many people globally.

As with anything, there are things Drupal did right, and things we could have done differently. I recently spoke about this in my DrupalCon Europe 2020 keynote, but I'll summarize some thoughts here.

Why I'm still working on Drupal after 20 years

Student room
Me, twenty years ago, in the dorm room where I started Drupal. I'd work on Drupal sitting in that chair.

I started Drupal to build something for myself. As Drupal grew, my "why", or reasons for working on Drupal, evolved. I began to care more about its impact on end users and even non-users of Drupal. Today, I care about everyone on the Open Web.

Optimizing for impact means creating software that works for everyone. In recent years, our community has prioritized accessibility for users with disabilities, and features like lazy loading of images that help users with slower internet connections. Drupal's priority is to continue to foster diversity and inclusion within our community so all voices are represented in building an Open Web.

Three birthday wishes for Drupal

Dries giving a presentation on Drupal
Me in 2004, giving my first ever Drupal presentation, wearing my first ever Drupal t-shirt.

Drupal's 20th birthday got me thinking about things I'm hoping for in the future. Here are a few of those birthday wishes.

Birthday wish 1: Never stop evolving

Only 7% of the world's population had internet access when I released Drupal 1 in 2001. Smartphones or the mobile web didn't exist. Many of the largest and most prominent internet companies were either startups (e.g. Google) or had not launched yet (e.g. Facebook, Twitter).

A timeline with key technology events that impacted Drupal. Examples include the first mobile browser, social media, etc.
A list of technology events that came after Drupal, and that directly or indirectly impacted Drupal. To stay relevant, Drupal had to adjust to many of them.

Why has Drupal stayed relevant and thrived all these years?

First and foremost, we've been focused on a problem that existed 20 years ago, exists today, and will exist 20 years from now: people and organizations need to manage content and participate on the web. Working on a long-lasting problem certainly helps you stay relevant.

Second, we made Drupal easy to adopt (which is inherent to Open Source), and kept up with the ebbs and flows of technology trends (e.g. the mobile web, being API-first, supporting multiple channels of interaction, etc).

The great thing about Drupal is that we will never stop evolving and innovating.

Birthday wish 2: Continue our growing focus on ease-of-use

For the longest time I was focused on the technical purity of Drupal and neglected its user experience. My focus attracted more like-minded people. This resulted in Drupal's developer-heavy user experience, and poor usability for less technical people, such as content authors.

I wish I had spent more time thinking about the less technical end user from the start. Today, we've made the transition, and are much more focused on Drupal's ease-of-use, out-of-the-box experience, and more. We will continue to focus on this.

Birthday wish 3: Economic systems to sustain and scale Open Source

In the early years of the Open Source movement, commercial involvement was often frowned upon, or even banned. Today it's easy to see the positive impacts of sponsored contributions on Drupal's growth: two-thirds of all contributions come from Drupal's roughly 1,200 commercial contributors.

I believe we need to do more than just accept commercial involvement. We need to embrace it, encourage it, and promote it. As I've discussed before, we need to reward Makers to maximize contributions to Drupal. No Open Source community, Drupal included, does this really well today.

Why is that important?

In many ways, Open Source has won. Open Source provides better quality software, at a lower cost, without vendor lock-in. Drupal has helped Open Source win.

That said, scaling and sustaining Open Source projects remains hard. If we want to create Open Source projects that thrive for decades to come, we need to create economic systems that support the creation, growth and sustainability of Open Source projects.

The alternative is that we are stuck in the world we live in today, where proprietary software dominates most facets of our lives.

In another decade, I predict Drupal's incentive models for Makers will be a world-class example of Open Source sustainability. We will help figure out how to make Open Source more sustainable, more fair, more egalitarian, and more cooperative. And in doing so, Drupal will help remove the last hurdle that prevents Open Source from taking over the world.

Thank you

Areal photo of DrupalCon Seattle 2019 attendees
A group photo taken at DrupalCon Seattle in 2019.

Drupal wouldn't be where it is today without the Drupal community. The community and its growth continues to energize and inspire me. I'd like to thank everyone who helped improve and build Drupal over the past two decades. I continue to learn from you all. Happy 20th birthday Drupal!

The 18th century and pre-social networks

In 1793, at the height of the French Revolution, most of the Girondin deputies, particularly the Brissotins, were arrested and executed by the Montagnards led by Robespierre. This was the infamous Terror, during which Robespierre saw in the slightest moderate idea a threat to the revolution.

Although he was not officially a Girondin, an arrest warrant was also issued against the deputy, mathematician and philosopher Condorcet, a friend of Brissot. Condorcet was an optimistic idealist. Opposed to the death penalty, he had refused to vote for the execution of Louis XVI, a counter-revolutionary crime if ever there was one.

A poor orator, Condorcet was a writer of great finesse and great intelligence, but with limited social skills. While he used his pen mainly to study mathematics, notably the mathematics of voting, and to defend his ideals (far ahead of their time, notably on equal rights for men, women and black people, on the need for free, compulsory, quality education, and even on a form of basic income), he did not hesitate to dip it in venom to sharply criticize the various extremists that Robespierre and Marat were in his eyes.

For in the 18th century, no Facebook was needed to insult one another in public. Publications, often ephemeral and self-published, followed and answered one another with virulence, to the great delight of Parisians. The most famous troll was without doubt Voltaire.

Forced to hide in a small room to avoid being guillotined like his comrades, Condorcet first wrote out all his rage, trying to set the record straight through missives that he had delivered to various publications.

Seeing that he was exhausting himself and wasting away, Sophie de Condorcet, his wife and accomplice, urged him to stop writing against others and convinced him to write for himself. Following his wife's advice, Condorcet set about writing what would become his magnum opus: "Esquisse d'un tableau historique des progrès de l'esprit humain" (Sketch for a Historical Picture of the Progress of the Human Mind).

Although hunted, and having seen his friends guillotined, Condorcet, at bay and sensing that his end was near, delivered a humanist and optimistic plea about the future of humanity. Once the manuscript was finished, he left his room after nine months, for fear of getting his landlady condemned. The brave woman knew she would be guillotined without trial if Condorcet were found in her home; she nevertheless refused to let him leave, and he had to resort to a ruse to slip away. After two days of wandering, Condorcet was arrested as a suspect, but not recognized. Pending the investigation, he was placed in a small local prison, where he died mysteriously. Suicide, or a heart attack brought on by poor health? We will never know.

Condorcet's story comes back to me every time I see debates, attacks, and detailed replies to those attacks (often written in good faith) go by. It is a story to keep in mind whenever you feel the urge to engage on social networks or to crank out a virulent, accusatory blog post.

Books vs networks

For social networks are mathematically even worse than we imagined. Even if we built an ethical/organic/fairtrade/cyclist/gluten-free social network that did not try to sell us any old crap or make us react at all costs, the mere fact of its being a social network is enough for the quality of the messages put forward to be inversely proportional to the number of messages on that network.

In short: the more messages there are, the smaller the percentage of them you will read (logically), and the lower the average quality of the messages will be.

=> http://meta.ath0.com/2020/12/social-notwork/

The optimistic conclusion is that it is hard to build a bigger network than Facebook, that their algorithms drag quality down even further, and that, mathematically, we may have hit rock bottom. What you read on Facebook would thus be the dregs of humanity, the worst of the worst.

The good news is that if you step away from the screen for a moment and walk to the municipal library (or the one in your living room, which has become purely decorative since you got a smartphone), you will find plenty of things to read that generally represent the best of humanity (particularly old books that have remained classics or stayed popular). I invite you to try it; you will see, the effect is striking! (even if it is unsettling not to have notifications and discussions interrupting your reading).

Well, that still requires publishers to play their part: forcing artists to innovate, to leave the beaten track. Unfortunately, they do exactly the opposite, trying to format works and avoiding risks to guarantee profitability. Yet it is the original works that become real successes.

=> https://unspicilege.org/index.php?post/Des-auteurs-et-des-artistes

Growing in order to sell crap

There seems to be a common thread: the bigger you get, the more crap you produce. As Vinay Gupta points out in his excellent "The Future Of Stuff" (a very short book that reads like Marx's Capital live-tweeted by a cyberpunk on amphetamines), the goal of a tailor is to make clothes that fit you well. That is the essence of his craft, his reputation, his business. The goal of a clothing manufacturer is to sell clothes. That is very different. The trade is different, and the products will be different. In fact, the garment can even be unwearable; who cares, as long as it sells.

And when a big company finds itself making a good product by mistake, it immediately makes sure to correct course. Sorry, we didn't do it on purpose.

For those who set up wifi networks in the 2010s, the Linksys WRT54G was the best router money could buy. The reason? You could flash the firmware and replace it with open source software that enabled all sorts of things (acting as a repeater, configuring the power of each antenna, installing a firewall, etc. Personally, I used DD-WRT). But this was not actually intentional. Linksys had outsourced the management of the chip to Broadcom, which had itself outsourced the firmware development to an obscure Asian company, which had simply put Linux on it. And since Linux is under the GPL, the sources had to be made public.

This did not suit Cisco, which had just bought Linksys, for a very simple reason: the open source firmware offered features normally reserved for far more expensive products. Cisco's trick? Release a new version of the WRT54G with a less powerful processor and less memory, just to keep the open source version from running on it.

Which nicely illustrates that the goal of a company is, as a rule, not to make a good product for consumers, but on the contrary to try to force them to buy the worst possible quality.

=> https://tedium.co/2021/01/13/linksys-wrt54g-router-history/

The ideology of Silicon Valley is that you must either grow to the point of controlling the world, get bought by someone bigger, or disappear. Let us keep in mind that this is only an ideology. And that we are currently suffering its harmful effects. No, Gemini and Mastodon will not replace the web and Twitter. Who cares. The goal is to be an alternative, not a replacement.

My first piece of advice? Avoid working for the big monopolies.

Drew DeVault explains why working for the GAFAM is a bad idea. A bad idea both for you and for the world. Even if you are extremely well paid, you are playing Russian roulette with your future.

=> https://drewdevault.com/2021/01/01/Megacorps-are-not-your-dream-job.html

Privacy and mutual respect

My second piece of advice?

Even if all this goes over your head, make an effort to listen when the people who care about the subject explain things. Right now, if you use Whatsapp, you are most likely being asked to install Signal.

Well, do it! It only costs you a few minutes of effort and a few megabytes of space on your phone.

I know you already have three messaging apps, that you don't care, that you have nothing to hide.

But it is a matter of simple respect for others. By installing Whatsapp, you handed Facebook your entire address book, including the people in it who do not want to be on Facebook. The least you can do is let those of your friends to whom it matters contact you on Signal. Nobody is asking you to abandon Whatsapp. You are just being asked not to add to the social pressure to use a Facebook service.

It is like with smokers: we are not asking you to quit smoking, just not to force us to smoke (even if, to stretch the analogy, we are still exposed to second-hand smoke).

So yes, Signal is not perfect. But it is very good, so install it as a simple act of civic support for those who do not wish to be completely tracked by Facebook. That is their right, isn't it? In fact, the question really runs the other way: what right would you have to impose Whatsapp on contacts who do not want it?

=> https://standblog.org/blog/post/2021/01/13/Par-quoi-remplacer-WhatsApp

If you want to go further in exploring privacy protection, here is the browser plug-in I had been waiting for: Privacy Redirect. It redirects Twitter, Instagram, Youtube, Reddit, Google Maps and Google to "mirrors" or privacy-preserving alternatives, while still letting you access the content. It is of course fully configurable, and it is easy to disable a particular filter temporarily. I use Teddit to access Reddit links, and I installed Freetube, which opens outside the browser and lets you watch Youtube videos quickly and without ads! I also discovered that while the Open Street Maps interface is not as easy as Google Maps, the results are perfectly usable. I don't even have to force myself: I mechanically type "maps" and land on openstreetmaps. For search, though, I am sticking with Duckduckgo, because Searx is still very experimental.

In short, an excellent tool for trying to change your habits with minimal effort.

=> https://github.com/SimonBrazell/privacy-redirect

Staying informed outside the monopolies

To stay informed outside social networks, there are still good old RSS feeds, mailing lists and even… Gemini. I explain my morning routine for reading Gemini with the command-line client AV-98. It is very geeky, it is not for everyone, but it makes a change from tabs open all over the place.

=> https://linuxfr.org/users/ploum/journaux/augmenter-le-rendement-de-votre-moulage-de-pres-de-174

I have also launched my own gemlog, in English.

=> gemini://rawtext.club/~ploum/

If Gemini does not interest you, you can safely skip it. If you missed my post explaining what Gemini is, here it is.

=> https://ploum.net/gemini-le-protocole-du-slow-web/

What is refreshing about Gemini is the feeling of reading texts that are simply human. There is no drive to convince, no followers, no likes, no statistics. There is a quiet pleasure in reading opinions, even completely divergent ones, on Gemini. Being on the command line, without images, has a lot to do with it too.

But I admit that Gemini is still limited. I mostly inform and inspire myself through RSS feeds.

If I had to keep only one RSS feed, it would be Cory Doctorow's Pluralistic. Every day, Cory posts a long entry made up of several topics that he develops briefly. It is extremely interesting and extremely well summarized. Since he posts it to Twitter, Mastodon, his mailing list, his website and Tumblr, and each post comes with images and a colophon, I had asked him what his working method was. He replied that he was in fact planning to talk about it to celebrate Pluralistic's first anniversary.

I figured that, even with good tools and plenty of automation, the whole thing had to be a titanic amount of work. It turns out Cory uses no particular tools. Worse, he makes his life harder by first posting everything to Twitter, then copy-pasting it into Mastodon and Tumblr, then putting it all back into an XML file that he edits by hand. He goes as far as writing the HTML for the Creative Commons license by hand. And doing his photo montages in Gimp. Every single day!

Unbearable for a programmer. One reader took such pity on him that he wrote him a Python script to simplify some of the operations. It all remains thoroughly inefficient nonetheless. But one small aside caught my attention: his father used to lay out publications by hand, cutting out pieces of paper and gluing them together into compositions. Cory feels he is redoing the same work.

And perhaps this inefficiency is essential, letting him digest what he has read and reread what he writes. By being inefficient, he spends several hours immersed in what he has written. He is not trying to optimize his clicks or his visitor counts. He is not trying to post quickly, to be efficient. He is a tailor, not a clothing manufacturer. And it shows.

=> https://pluralistic.net/2021/01/13/two-decades/

Comics, and the obligatory plug for Printeurs

Besides feeds and biographies of Condorcet, I also read comics.

I have just finished the 4th volume of Aspic, by Gloris and Lamontagne. A truly brilliant series. It is not an absolute masterpiece, but when very good artwork serves particularly endearing characters and a real story, it would be churlish not to enjoy it.

Far from being a never-ending series with an endless plot (looking at you, XIII and Sillage), Aspic instead offers two-volume investigations, which cheerfully mix a childlike side with a rather dark adult one.

Speaking of dark adult books, Tonton Alias wrote a review of Printeurs in which he compares me to… Cory Doctorow!

I would liken Printeurs to Cory Doctorow's novels of technological anticipation. I found some truly brilliant ideas in it, such as using advertising targeting as a tool for flirting or for surveillance.

It also has something Doctorow's books often lack: a thrilling plot. It is a good page-turner.

=> https://alias.erdorin.org/printeurs-de-ploum/

He also makes some negative remarks that I largely agree with. For true SF fans, Printeurs is a touch old-fashioned. That was deliberate, as the book was originally conceived as an homage to the pulp serials of the 1950s. I take note, so I can improve.

That said, if you are not convinced, I invite you to read what Sebsauvage and other Mastodon users say about it:

=> https://sebsauvage.net/links/?SlHGmQ

Notably Sebiii and Pyves, who complain that they almost pulled an all-nighter because of me.

=> https://mastodon.social/@Sebiiiii/105321398407717687
=> https://framapiaf.org/@Pyves/105389956170119860

Purexo has also done me the pleasure of publishing a review of Printeurs... on Gemini! I wonder whether Printeurs is the first novel ever reviewed on Gemini.

=> gemini://purexo.mom/blog/2020/12-16-critique-printeurs.gmi

If all this has convinced you, here is the link to order:

=> https://www.plaisirvaleurdhistoire.com/shop/38-printeurs

But if this kind of SF is not your thing, I completely understand. I hope to pleasantly surprise you with my future writings.

Thank you for taking the time to read me. Writing to you, my subscribers and occasional readers, is both a pleasure and an important intellectual process for me. I hope the pleasure is shared, and I wish you an excellent day.

Image: Death of Condorcet, Musée de la Révolution française.

I am @ploum, an engineer and writer. Subscribe by email or RSS so you never miss a post (two per week at most). I am convinced that Printeurs, my latest science-fiction novel, will captivate you. Ordering my books is the best way to support me and to help me spread my ideas!


This text is published under the CC-By BE license.

January 12, 2021

I lied. At the very least, I broke a promise: I promised about ten blog posts whose undertone would be that there is no danger.

The only two that could qualify were my blog items about our Earth being a rectangular block, and the one in which I call for quitting NATO.

Because now that our Earth is rectangular, we can no longer fall off our side of it anyway. And if we were to quit NATO, we would no longer be destroying our rectangular block 'the Earth' with our own nuclear weapons either!

So, if humanity were to follow and believe these two blog items, roughly 99.98% of all danger to humanity would disappear.

But the future brings a new promise: Lozh and Vranyo. Because now we are finally going to tackle social media. For starters, we have banned Trump everywhere.

Vranyo!

January 11, 2021

At the beginning of every year, I like to publish a retrospective to look back and take stock of how far Acquia has come over the past 12 months. I take the time to write these retrospectives because I want to keep a record of the changes we've gone through as a company. It also helps me track my thinking and personal growth year over year.

If you'd like to read my previous retrospectives, you can find them here: 2019, 2018, 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009. This year marks the publishing of my twelfth retrospective. When read together, these posts provide a comprehensive overview of Acquia's trajectory.

COVID-19

2020 was a strange year. Since March 2020, I've only been in an Acquia office twice. Despite the changes the pandemic brought upon us all, Acquia did well in 2020. We continued our unbroken, 14-year growth streak.

We were proud to help many of our customers cope with COVID-19. A few notable examples:

  • The State of New York spun up COVID-19 sites in just three days using Acquia Site Factory. We supported 49 million service interactions and 342 million page views across 60 million users at the onset of the pandemic.
  • King Arthur Baking Company attracted a new digital audience of pandemic bakers that led to a 200% year-over-year growth in e-commerce sales.
  • When stores first closed, Godiva used Acquia Customer Data Platform (CDP) to double email-open rates and nearly triple click-through rates. Their website traffic was up 63%.

Execution on product vision

Open Source, digital transformation, cloud computing and data-driven marketing are powerful tailwinds transforming how every company does business. 2020 only accelerated these trends, which Acquia benefited from.

While COVID-19 changed some of Acquia's 2020 plans, our vision and strategy didn't waver. We have a clear vision for how to redefine Digital Experience Platforms. For years now, we've been patient investors and builders towards that vision.

Throughout 2020 we continued to invest decisively in our Web Content Management solutions while accelerating our move into the broader Digital Experience Platform market. "Decisively", because we grew our Marketing Cloud engineering team by 65% and our Drupal Cloud engineering team by 35% — a testament to the strength of our company amidst a pandemic.

The best way to learn about what Acquia has been working on is to watch the recording of my Acquia Engage keynote. The presentation is only two months old, and in 30 minutes I cover 10 major product updates. Rather than repeat them here, take a look at the video.

Our product portfolio is organized in two lines of business: the Drupal Cloud and the Marketing Cloud. Content is at the core of the Drupal Cloud, and data is at the core of the Marketing Cloud.

Shown on the left: Drupal Cloud. Shown on the right: Marketing Cloud.
Acquia's product portfolio exiting 2020.

Acquia's Drupal Cloud remains the number one Drupal platform in the enterprise with 40% of Fortune 100 companies as our customers. Acquia is the number one vendor in terms of performance, scalability, security and compliance. Across our customers, we served more than 500 billion HTTP requests in 2020.

Acquia's Marketing Cloud managed over 3 billion customer profiles, 20 billion e-commerce transactions, and over 100 billion customer interactions in 2020. We made nearly 1.5 billion machine learning predictions every day.

This year, we saw much larger spikes than normal on Black Friday and Cyber Monday. These huge spikes are not surprising given that much store traffic turned digital. The autoscaling of the platform helped us handle these spikes without hiccups.

As of November 2020, Acquia moved completely off of Marketo to Acquia Campaign Studio (based on the Open Source Mautic). While this move probably won't come as a surprise, it is an important milestone for us. Marketo was a critical part of Acquia's marketing operations so we're excited to practice what we preach.

At the beginning of 2018, shortly after Mike Sullivan joined Acquia as our CEO, we set a goal to become a leader in Digital Experience Platforms within three years. We did it in two years. 2020 was the first time Acquia was recognized as a leader in the Gartner MQ for Digital Experience Platforms. We're now among leaders like Salesforce and Adobe, which I consider some of the very best software companies in the world.

The success that Acquia earned during 2020 was not just driven by our strategy and roadmap execution. It was also the result of our unique culture and how we continued to support our teams throughout the pandemic. We were recognized in Great Places to Work UK and Excellence in Wellbeing; Great Places to Work and Top IT Companies India; and were named a Boston Globe Top Place to Work.

Contributing to Open Source

Drupal did well in 2020. After five years of work, Drupal 9 was released, bringing many important improvements. In 2020, Drupal received contributions from more than 8,000 different individuals and more than 1,200 different organizations. The total number of contributions to Drupal increased year over year. For example, the Drupal community worked on 4,195 different Drupal.org projects in 2020 — a large 20% year-over-year increase compared to 2019.

Acquia remained the top commercial contributor to Drupal in 2020:

A chart showing that Acquia contributes much more than Pantheon, Platform.sh, Rackspace, Bluehost and DigitalOcean
The contribution gap between Acquia and other PaaS and hosting companies is very large. According to Drupal.org's credit system, Acquia contributes 15x more than Pantheon and 80x more than Platform.sh.

One specific contribution that I'm extra proud of is that Acquia bought advertising on Drupal.org, and used said advertising to highlight the 10 Acquia partners who contribute back to Drupal the most. It's always important to promote the organizations that contribute to Drupal, but in an economic downturn, it's even more important.

Drupal.org advertising paid for by Acquia
The Acquia-paid banner that was on Drupal.org for most of 2020. It promotes Third and Grove, Acro Media, Mediacurrent, QED42, CI&T, FFW, Palantir.net, Lullabot, Four Kitchens, Phase2 and Srijan.

We also contributed to Mautic: we helped evolve Mautic's governance, release Mautic 3, and organize the first ever MautiCon. The Mautic project made 13 releases over the past year compared to only 3 releases in 2019 before Acquia acquired Mautic. We're also seeing a steady growth in community members and active contributors.

A year of personal growth

Acquia started off 2020 as a new member of the Vista portfolio. I've been learning a lot from working with Vista and implementing the best practices Vista is famous for. From a personal growth perspective, I wouldn't trade the experience for anything.

At the end of 2019, Acquia lost a great leader: Mike Aeschliman. Mike was our Head of Engineering. Mike and I were joined at the hip. Because of Mike's passing, I stepped in to help run Engineering, as well as Product, for the first three months of 2020. In March, John Mandel joined us to fill Mike's shoes. John has been a great leader and partner. Throughout 2020, Mike remained in my thoughts, especially when we achieved milestones that he and I had been working towards. I believe he'd be proud of our progress.

Early in 2020, I organized my team into 4 groups: Drupal Cloud, Marketing Cloud, Product Marketing, and User Experience. I spent the year scaling and operationalizing the R&D organization: hiring, setting and tracking goals, managing 10+ product initiatives, evolving our pricing and packaging, reviewing business cases, improving our messaging and positioning, and more. I spent as much time focused on managing the top line as managing our bottom line — proof that Acquia is no longer a startup. It was a dense year; 10 to 16 meetings a day, 5 days a week. Every day was packed, but Acquia is better for it.

I did quite a few virtual speaking engagements in 2020, including my keynotes at DrupalCon Global, DrupalCon Europe and Web Summit. With COVID-19, it was uncertain if DrupalCon would happen, but I'm glad it still did. Virtual events are not the same as in-person events though; I miss the travel experience, direct attendee feedback and personal interactions. In-person events give me energy; virtual events don't.

Thank you

As I mentioned at the beginning of this post, 2020 was a strange year. On days like today, when looking back at the past year, I am reminded of how lucky I am. I'm fortunate to be working at a healthy and growing company during these uncertain times. I hope that 2021 brings good health, predictability and stability for everyone.

January 07, 2021

What a book...

Coming of age, the suburbs, New York, Norway. Music, theatre, painting. The war in Vietnam, and the war that was Coppola's film about the war in Vietnam. Communism, capitalism. Loneliness, alienation, but also love.

1230 pages, and I would start all over again immediately (and I am rereading parts of it), despite the sometimes heavy-handed reflections on theatre or painting (though the passages about theatre in particular are so much more relevant now).

What a book!

For the music freaks: a YouTube playlist:

[YouTube playlist embed]

A few weeks ago I wrote an ISC diary about a piece of malicious code that used ngrok.io to communicate with its C2 server. A quick reminder about this service: it provides a kind of reverse proxy for servers or applications that people need to publish on the Internet. I've known the service for a while, but how popular is it these days? What are people publishing? So, I wrote some Python code to have a look at *.tcp.ngrok.io. I focused on HTTP traffic and, for each live service found, I took a screenshot.
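The probing approach can be sketched in a few lines of Python. This is a simplified, hypothetical reconstruction, not the author's actual script: the host name, timeout and title extraction are my assumptions.

```python
import re
import socket

def http_probe(host, port, timeout=3.0):
    """Send a bare HTTP request to host:port; return the raw response text or None."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"GET / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode("latin-1")
    except OSError:  # connection refused, reset, timeout, DNS failure, ...
        return None

def extract_title(response):
    """Pull the HTML <title> out of a raw HTTP response, if there is one."""
    match = re.search(r"<title>(.*?)</title>", response, re.IGNORECASE | re.DOTALL)
    return match.group(1).strip() if match else None

def scan(host="0.tcp.ngrok.io", ports=range(1, 65536)):
    """Walk the port range and report the page title of every live HTTP service."""
    for port in ports:
        response = http_probe(host, port)
        if response:
            print(port, extract_title(response))
```

A real scan would parallelize this with a thread pool and capture screenshots with a headless browser, which the sketch leaves out.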

ngrok.io is a good alternative to the classic NAT port-forwarding feature present in all residential routers: your IP address stays hidden, and you don't have to rely on dynamic DNS services to keep access to the published service when your IP address is renewed. But, like NAT, it is NOT a security feature, and the service you publish must be hardened or restricted to authorized users.

I checked all ports (1-65535) for *.tcp.ngrok.io, collected screenshots and data returned by the discovered services. Here is a very small set of services found:

Most classic services:

  • Plex servers
  • Home automation
  • Game servers
  • Many types of “IoT” related stuff
  • Proxies
  • Development tools / services
  • Miners
  • Unconfigured services (Apache, Tomcat default pages)
  • Administration pages

Most exotic ones:

  • A BurpSuite instance
  • A Casino interface system?

Note that I also found plenty of MongoDB databases!

The post What’s Hosted Behind ngrok.io? appeared first on /dev/random.

January 06, 2021

As I mentioned previously, the latest installment of my "SELinux System Administration" book has recently been released by Packt Publishing. This is already the third edition, after the first (2013) and second (2016) editions enjoyed reasonable success given the technical and often hard nature of full SELinux administration.

Like the previous editions, this book remains aimed at system administrators rather than SELinux policy developers. Of course, SELinux policy development is not ignored in the book.

What has changed

First and foremost, it of course updates the content of the previous edition to reflect the latest evolutions within SELinux. There are no earth-shattering changes, so the second edition isn't suddenly deprecated. The examples are brought up to date with a recent distribution setup, for which I used Gentoo Linux and CentOS.

The latter is, given the recent announcement that support for CentOS 8 will end, a bit unfortunate, although it doesn't really matter much for the scope of the book. I hope that Rocky Linux will get the focus and support it deserves.

Anyway, I digress. A significant part of the updates on the existing content is on SELinux-enabled applications, applications that act as a so-called object manager themselves. While quite a few were already covered in the past, these applications continue to enhance their SELinux support, and in the third edition a few of these receive a full dedicated chapter.

There is also a small set of SELinux behavioral changes, like SELinux' NNP support, as well as SELinux userspace changes, like specific extended attributes for restorecon.

Most of the book though isn't about changes, but about new content.

What has been added

As administrators face SELinux-aware applications more and more, the book goes into much more detail on how to tune SELinux with those SELinux-aware applications. If we look at the book's structure, you'll find that it has roughly three parts:

  1. Using SELinux, which covers the fundamentals of using SELinux and understanding what SELinux is.
  2. SELinux-aware platforms, which dives into the SELinux-aware application suites that administrators might come in contact with
  3. Policy management, which focuses on managing, analyzing and even developing SELinux policies.

By including additional content on SEPostgreSQL, libvirt, container platforms like Kubernetes, and even Xen Security Modules (which is not SELinux itself, but strongly influenced and aligned to it to the level that it even uses the SELinux userspace utilities) the book is showing how wide SELinux is being used.

Even on policy development, the book now includes more support than before. While another book of mine, SELinux Cookbook, is more applicable to policy development, I did not want to keep administrators out of the loop on how to develop SELinux policies at all. Especially not since there are more tools available nowadays that support policy creation, like udica.

SELinux CIL

One of the changes I also introduced in the book is to include SELinux Common Intermediate Language (CIL) information and support. When we need to add in a small SELinux policy change, the book will suggest CIL based changes as well.

SELinux CIL is not commonly used directly in large-scale policy development. The most significant policy development effort out there, the SELinux Reference Policy, does not itself use CIL directly, and the traditional development approach is still very much the default way of working, with far broader support available. So I do not ignore this more traditional approach.

The reason I did include more CIL focus is that CIL has a few advantages up its sleeve that are harder to get with the traditional language. Nothing major perhaps, but enough that I feel it should be promoted more actively. And this book is hopefully a nice start to that.

I hope the book is a good read for administrators or even architects that would like to know more about the technology.

Every day, millions of new web pages are added to the internet. Most of them are unstructured, uncategorized, and nearly impossible for software to understand. It irks me.

Look no further than Sir Tim Berners-Lee's Wikipedia page:

The markup for Tim Berners-Lee's Wikipedia page; it's complex and inconsistent
What Wikipedia editors write (source).
The browser output for Tim Berners-Lee's Wikipedia page
What visitors of Wikipedia see.

At first glance, there is no rhyme or reason to Wikipedia's markup. (Wikipedia also has custom markup for hieroglyphs, which admittedly is pretty cool.)

The problem? Wikipedia is the world's largest source of knowledge. It's a top 10 website in the world. Yet, Wikipedia's markup language is nearly impossible to parse, Tim Berners-Lee's Wikipedia page has almost 100 HTML validation errors, and the page's generated HTML output is not very semantic. It's hard to use or re-use with other software.

I bet it irks Sir Tim Berners-Lee too.

The markup for Tim Berners-Lee's Wikipedia page; it's complex and inconsistent
What Wikipedia editors write (source).
The generated HTML code for Tim Berners-Lee's Wikipedia page; it could be more semantic
What the browser sees; the HTML code Wikipedia (MediaWiki) generates.

It's not just Wikipedia. Every site is still messing around with custom <div> elements for a table of contents, footnotes, logos, and more. I could think of a dozen new HTML tags that would make web pages, including Wikipedia, easier to write and reuse.

A good approach would be to take the most successful Schema.org schemas, Microformats and Web Components, and incorporate their functionality into the official HTML specification.

Adding new semantic markup options to the HTML specification is the surest way to improve the semantic web, improve content reuse, and advance content authoring tools.

Unfortunately, I don't see new tags being introduced. I don't see experiments with Web Components being promoted to official standards. I hope I'm wrong! (Cunningham's Law states that the best way to get the right answer on the internet is not to ask a question; it's to post the wrong answer. If I'm wrong, I'll update this post.)

If you want to help make the web better, you could literally start with Sir Tim Berners-Lee's Wikipedia page, and use it as the basis to spend a decade pushing for HTML markup improvements. It could be the start of a long and successful career.

January 04, 2021

I enjoyed reading Vitalik's 2020 endnotes. Vitalik is one of the founders of Ethereum, and one of the most interesting people in the world to follow right now.

Like Vitalik, I'm interested in economic systems, multi-stakeholder coordination, and public good governance and sustainability. How do we create Open Source communities that will thrive for hundreds of years to come? How do we make sure the Open Web is still thriving in a thousand years? These are some of the questions I think about.

While I think about these things, Vitalik is making it happen. The blockchain world is experimenting with new funding models (e.g. ICOs, DAICOs or quadratic funding), decision-making models (e.g. quadratic voting), organizational models (e.g. DAOs), and architectural innovation (e.g. Filecoin or Blockstack).

The blockchain allows these concepts to be implemented in a robust and secure way. Eventually, they could be used to help sustain and govern public goods like Open Source projects and the Open Web.

But it's not just Open Source or the Open Web that should be considered. Some of the very biggest problems in the world (for example, climate change) are multi-stakeholder problems that require better funding, coordination and decision-making models too.

So, yes, these are important developments to pay attention to!

January 03, 2021

And suddenly, before you notice it, the year has passed. And what a year it has been…

It’s easy to brush 2020 off as a year to quickly forget, given the pandemic we suddenly found ourselves in. But I’d rather not. Looking back, despite everything we took for granted but currently can no longer do, it’s been a year full of great experiences, new friends, new business, launching things and lots of joy with the family.

I for one am very optimistic and excited about what 2021 brings in terms of plot twists. You can’t always predict what will come, but flexibility goes a long way. Onwards and upwards!


Comments | More on rocketeer.be | @rubenv on Twitter

January 01, 2021

2021

$ sudo -i
# find / -name "*covid*" -exec rm -rf {} \;
# find / -name "*corona*" -exec rm -rf {} \;
# pkill -9 covid19
# pkill -9 corona
# reboot

Have fun!

December 31, 2020

As a technology writer and a full-time nerd I like to tinker with everything, including all my hardware. I like to study it, program it, change it, make it do things the manufacturer hasn't even thought about. I want to be in control.

Many of the articles and books that I write have this practical approach. And when I review hardware for a magazine and I find that the hardware has a cloud-based and a local mode, I turn my attention to the latter, because I don't have any interest in the former and almost all other reviewers will rave about the cloud-based mode anyway.

It won't surprise you that I love hackable hardware. Not hackable in the sense that it's insecure, but hackable in the sense that I can tinker with it. 1 In principle all devices are hackable; they just differ in the proficiency you need to do it. But this year I decided to put my money where my mouth is and buy some devices that are more or less expressly designed to be tinkered with. This way I want to support these companies, and of course it makes tinkering with these devices much easier for me, as I'm not an elite hardware hacker.

These are four hackable devices that I have bought in 2020 and that I really like:

NumWorks, a scientific calculator with MicroPython

/images/numworks-inside.jpg

As a math geek, I made do with the bc command or an interactive Python session on my computer when I needed to compute something, but it always felt a bit like... a hack? So I wanted a dedicated scientific calculator. I couldn't find the one I had to buy as an engineering student, and I'm actually happy I didn't, because I remember that device being quite kludgy.

Enter the NumWorks: at first sight it looks like the hipster's version of a graphical calculator, but the internals are as cool as its design. Both the hardware design and the software are available under a Creative Commons license. The 3D models of the case, the source code of the firmware, even the electronic schematics of the printed circuit board, you'll find it all on the company's Engineering page.

I have used the NumWorks a lot in the last few months and it's a dream to work with. Contrary to many hackable devices, it just works. It's slim and light, it has an amazing battery life, it instantly turns on and off, you can recharge it from a standard USB port and you can upgrade its firmware from your browser (for now sadly only Chromium/Chrome) thanks to WebUSB. It has all the basic computing and graphing functionality I need, and you can even run your own MicroPython scripts.
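As a taste of what such a script can look like, here is a small example of my own (not an official NumWorks sample): plain MicroPython-compatible Python that approximates a square root with Newton's method, using nothing beyond the math module.

```python
import math

def newton_sqrt(a, iterations=20):
    """Approximate sqrt(a) by iterating x <- (x + a/x) / 2 (Newton's method)."""
    x = a if a > 1 else 1.0
    for _ in range(iterations):
        x = (x + a / x) / 2
    return x

# On the calculator you would simply call the function from the Python app:
print(newton_sqrt(2))                               # converges to math.sqrt(2)
print(abs(newton_sqrt(2) - math.sqrt(2)) < 1e-12)   # True
```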

Another really cool feature is the online simulator of the NumWorks. This is a fully working version of the firmware that you can try in your web browser to explore the calculator's functionality. I hope more companies will offer something like this so you can try their device before you buy it.

PinePhone, a Linux phone designed for hackers

Mobian on the PinePhone: all the power of Linux on your phone

I have never liked the dumbed-down, closed walled gardens of Android and iOS with their ecosystem of apps that spy on you, yell at you with ads, force you to use their cloud service, and artificially limit what you can do with your phone. When you're running Linux on your desktop/laptop as I do, the contrast with the mature, privacy-friendly and open ecosystem of Linux is just unbelievable.

When I had to choose between Android and iOS for my first smartphone, the former seemed the lesser evil, so that's what I still use on my phone. But naturally I wanted to have a hackable phone with an operating system that treats me like an adult, and PINE64's PinePhone is the ultimate hackable phone. PINE64 has published as much information as possible to hack on the device, such as electronic schematics and data sheets.

You can run many Linux distributions on the PinePhone, and you have full control over your phone in a familiar Linux environment. Moreover, you have access to the vast ecosystem of open source programs that you're used to on the desktop.

That said, I don't consider the PinePhone ready as a daily phone. The hardware specifications are meager, the software is still buggy and many programs still have to be adapted to the mobile form factor. I'm using the PinePhone mainly as a development system to explore the mobile Linux ecosystem. 2

But I like the vibe of the community of hackers that has originated around the PinePhone. And the device is really designed for hackers. There are six pogo pins on the back that expose an interrupt line, power input to charge the battery, a power source and an I²C interface. PINE64 has already announced a battery extension backcover and a keyboard backcover that make use of these pogo pins, but you're free to connect anything you want. There's a breakout board for easier access to the pins, and Martijn Braam even connected a thermal camera to the backcover of his PinePhone.

Kobol Helios64, a five-bay NAS with built-in battery

/images/helios64-bundle.png

As an operating system, Linux is no stranger to NAS devices, but most of them are locked down. Of course you can install a Linux-based or FreeBSD-based operating system on a server to function as a NAS 3, but just as for my other devices I wanted a NAS that was expressly designed for running your own Linux distribution and for being hacked on.

That's how I discovered Kobol's Helios64. It's quite a good-looking five-bay NAS, powered by a 64-bit ARM Rockchip RK3399 SoC with 4 GB RAM, 16 GB eMMC storage, dual Ethernet and a built-in battery.

The Helios64 comes as a DIY kit: you have to assemble it yourself, which took me two hours earlier this week. The advantage is that you learn a lot about the internals of your NAS while assembling it, which could come in handy when you start hacking on it.

Kobol's wiki has extensive documentation about the hardware, including electronic schematics and data sheets. The NAS has rich expansion possibilities through I²C, SPI, GPIO, ... for instance if you want to add a custom control panel or OLED display. Unfortunately the Kobol team made a mistake in their Ethernet wiring, so the 2.5 Gbps Ethernet interface cannot be connected to a Gigabit Ethernet switch without sacrificing performance, but they fully document the issue and even suggest a hardware fix you can apply yourself.

Armbian has support for the Helios64, both in the Debian Buster and Ubuntu Focal variants. Not all hardware features are supported yet (for instance there are some issues with the USB Type C port in the driver), but I'm looking forward to testing the Helios64 with Armbian as a NAS.

reMarkable 2, a paper tablet with an awesome community

/images/remarkable2-schematics.jpg

The reMarkable 2 is really what it says: remarkable. The company calls it a "paper tablet"; I would call it an e-reader. I bought it mainly because I wanted to read PDF reports and data sheets, which I don't like reading on a computer screen but don't want to print either. I already owned a 6" Kobo e-reader, but I haven't used it in the last few years because it was too slow, too small and just too cumbersome.

The reMarkable 2 is everything you would want from an e-reader: a sharp 10.3 inch e-paper display, thin, light, fast and beautiful to the eye. You can also take notes directly on PDF files, which I have been using quite a lot in the week I've owned the device. And many people even use their reMarkable 2 for drawing.

The company expects you to use their reMarkable cloud, which offers synchronization of your documents and notes and adds conversion of your handwriting to text. However, I want to self-host as much as possible, so before I bought the device I made sure that I could do without the cloud.

And yes you can. The reMarkable 2 is running a proprietary Linux system, Codex, but the company doesn't restrict you in any way. You have root access and you can run your own software on it. There's a wiki with many hacking tips, and the reHackable community on GitHub offers custom software and the Awesome reMarkable list of third-party software.

There's also a community-maintained repository of free software for the reMarkable, toltec, which works on top of the opkg package manager and the Entware distribution, and one of the projects that I'm definitely going to try is rmfakecloud, which fakes the cloud synchronization protocol of reMarkable so you can run this on your own server. The stock firmware of the reMarkable 2 has only quite basic functionality compared to some competitors, but I'm quite confident the open source community will expand the possibilities.

The reMarkable 2 also has pogo pins, and someone even used these to attach a foot pedal so he could move forward and back through sheet music while playing his instrument. So while not open source, the reMarkable 2 is quite hackable and I expect I will enjoy it a lot.

1

I never even use the word hacking in the first sense, but every time I use the word, I have to consciously remind myself that most people won't even know the second sense, and often I have to explain it. Where did it go wrong between our society and the hacker culture?

2

To be honest, I have been using the PinePhone less than I wanted to, because I thought by now I should have received Purism's more powerful Librem 5 that I pre-ordered way too long ago.

3

This is in fact the only 'NAS' I have had: I never had a Synology, QNAP or whatever proprietary NAS device, but have always been running FreeNAS, FreeBSD or Ubuntu Server on SOHO server hardware.

December 26, 2020

Neural networks are an important technique in machine learning, but unfortunately they are also compute-intensive and power-hungry. With the Coral USB Accelerator, Google offers a handy accelerator board. You can use it with your laptop or a Raspberry Pi 4.

At its heart is an Edge TPU coprocessor, which consumes only 2 W, a very low power draw for this kind of computation. The Edge TPU is optimized for TensorFlow Lite, a stripped-down version of TensorFlow, the open-source machine-learning platform.

/images/rpi4-coral-usb-accelerator.jpg

Models

Google has put all of its models for the Edge TPU on a single web page, where you can easily download them. There are many models available, especially for image classification. For example, there are three versions of EfficientNet, which can recognize 1000 types of objects in small, medium or larger images. MobileNet also comes in several variants: to recognize 1000 objects, 1000 insects, 2000 plants and 900 bird species. Inception, which recognizes 1000 types of objects, is likewise available in four versions.

/images/edge-tpu-modellen.png
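As a side note, a downloaded classification model typically ships with a labels .txt file that maps each class index to a name. A minimal sketch of parsing such a file (the exact file format and the helper name are my own assumption, not taken from the article):

```python
# Parse "index name" label lines, as shipped with Edge TPU classification
# models, into a dict mapping class index to class name.

def read_labels(lines):
    labels = {}
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        index, name = line.split(maxsplit=1)
        labels[int(index)] = name
    return labels

labels = read_labels(["0  background", "1  tench", "2  goldfish"])
print(labels[2])  # goldfish
```

With such a mapping in hand, the raw class index a model outputs can be turned into a human-readable label like "goldfish".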

There are also several models for object detection. Object detection does more than image classification: the latter just gives you a label such as "bird" if there is a bird in the image. An object detection model not only tells you that a bird is visible, but also shows you its location, which you can then visualize by drawing a box around the object. There are two versions of MobileNet SSD that can detect the location of 90 types of objects, and one version that detects the location of a human face.
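To make the location output concrete: an SSD-style detection model typically returns each bounding box as normalized [ymin, xmin, ymax, xmax] coordinates between 0 and 1, which you scale to pixel coordinates before drawing. A minimal sketch, my own illustration rather than code from the article:

```python
# Scale a normalized [ymin, xmin, ymax, xmax] bounding box, as returned by
# SSD-style detection models, to pixel coordinates for a given image size.

def to_pixel_box(norm_box, image_width, image_height):
    ymin, xmin, ymax, xmax = norm_box
    return (int(xmin * image_width), int(ymin * image_height),
            int(xmax * image_width), int(ymax * image_height))

# Example: a detection covering the centre of a 640x480 image.
print(to_pixel_box([0.25, 0.25, 0.75, 0.75], 640, 480))  # (160, 120, 480, 360)
```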

If you search a bit further, you will find other models elsewhere too. But you always have to make sure they have been compiled for the Edge TPU. And which model is suitable for your application depends on the images you want to process. I recommend testing several models.

Getting started with the Coral USB Accelerator

In an article in Linux Magazine I walk you through the installation of the Coral USB Accelerator on Raspberry Pi OS and through some machine learning examples:

  • Recognizing bird species in images.

  • Recognizing keywords in an audio stream from the microphone.

  • Retraining existing image classification models with transfer learning to recognize new classes of objects.

  • Recognizing objects and their location in images.
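One way on-device retraining of a classifier can work is "weight imprinting": the pretrained feature extractor stays fixed, and the weight vector of each new class in the final layer is simply the averaged, normalized embedding of a few example images of that class. A toy sketch of that idea, my own illustration with made-up two-dimensional embeddings:

```python
import math

def normalize(v):
    # Scale a vector to unit length (leave the zero vector alone).
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def imprint(examples_per_class):
    # Build one weight vector per class: the normalized mean of its embeddings.
    weights = {}
    for label, embs in examples_per_class.items():
        dim = len(embs[0])
        mean = [sum(e[i] for e in embs) / len(embs) for i in range(dim)]
        weights[label] = normalize(mean)
    return weights

def classify(weights, embedding):
    # Score each class by cosine similarity and return the best match.
    e = normalize(embedding)
    return max(weights, key=lambda label: sum(w * x for w, x in zip(weights[label], e)))

weights = imprint({"cat": [[1.0, 0.1], [0.9, 0.2]], "dog": [[0.1, 1.0], [0.2, 0.8]]})
print(classify(weights, [0.95, 0.15]))  # cat
```

Because only the last layer changes, a few example images per class are enough, which is what makes this kind of retraining feasible on a Raspberry Pi.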

There are many example programs in Python for the Edge TPU. Definitely have a look at the official page with examples. You will also find all kinds of interesting third-party projects on GitHub. You can even analyze the images from a Raspberry Pi Camera Module in real time.

Thanks to the Google Coral USB Accelerator, powerful AI applications that used to require access to powerful servers in the cloud come within reach of the Raspberry Pi.

December 25, 2020

IT is complex. Some even consider it to be more magic than reality. And with the ongoing evolutions and inventions, the complexity is not really going away. Sure, some IT areas are becoming easier to understand, but that is often offset with new areas being explored.

Companies and organizations with a sizeable IT footprint generally see their infrastructure grow, regardless of how many rationalization initiatives are started. Personally, I find it challenging, in a fun way, to keep up with the onslaught of new technologies and services that are onboarded in the infrastructure landscape I'm responsible for.

But just understanding a technology isn't enough to deal with its position in the larger environment.

December 24, 2020

I published the following diary on isc.sans.edu: “Malicious Word Document Delivering an Octopus Backdoor“:

Here is an interesting malicious Word document that I spotted yesterday. This time, it does not contain a macro but two embedded objects that the victim must “activate” (by clicking on one of them) to trigger the malicious activities. The document (SHA256:ba6cc16770dc67c1af1a3e103c3fd19a854193e7cd1fecbb11ca11c2c47cdf04) has a VT score of 20/62… [Read more]

The post [SANS ISC] Malicious Word Document Delivering an Octopus Backdoor appeared first on /dev/random.

December 23, 2020

So in the ever-continuing saga of learning, testing and fine-tuning your site’s settings to get the ultimate in performance, you might have enjoyed the new Autoptimize filter/setting not to lazyload the first N images, which keeps the impact of lazyloading on Largest Contentful Paint down.

Well, now you can take things a step further with the code snippet below, which tells Autoptimize not to lazyload the logo and the featured image (hat tip to Gijo Varghese of Flying Press for the idea).

add_filter( 'autoptimize_filter_imgopt_lazyload_exclude_array', function( $lazy_excl ) {
	// Site logo; wp_get_attachment_image_src() returns false if no logo is set.
	$custom_logo_id = get_theme_mod( 'custom_logo' );
	$image          = wp_get_attachment_image_src( $custom_logo_id, 'full' );
	if ( is_array( $image ) && ! empty( $image[0] ) ) {
		$lazy_excl[] = $image[0];
	}
	// Featured image.
	if ( is_single() || is_page() ) {
		$featured_img = get_the_post_thumbnail_url();
		if ( ! empty( $featured_img ) ) {
			$lazy_excl[] = $featured_img;
		}
	}
	return $lazy_excl;
} );

This might end up in a future version of Autoptimize, maybe even preloading those images … :-)

December 22, 2020

Brussels sprouts with Brussels' skyline in the background
Brussels sprouts in Brussels

We went on a holiday walk with my sister, and stumbled upon a Brussels sprout field that overlooks Brussels' skyline. Meta!

I published the following diary on isc.sans.edu: “Malware Victim Selection Through WiFi Identification“:

Last week, I found a malware sample that does nothing fancy, it’s a data stealer, but it has an interesting feature. It’s always interesting to have a look at the network flows generated by malware samples. For a while, attackers have used GeoIP API services to test whether the victim’s computer deserves to be infected… or not! By checking the public IP address used by the victim, an attacker might prevent “friends” from being infected (ex: IP addresses from the attacker’s country) or skip infection if the IP address belongs to a security vendor. On the other hand, the attacker might decide to infect the computer because it is located in a specific country or belongs to the targeted organization… [Read more]

The post [SANS ISC] Malware Victim Selection Through WiFi Identification appeared first on /dev/random.

December 21, 2020

I recently had the opportunity to be on a panel at Web Summit with Mitchell Baker, the co-founder and CEO of Mozilla, and Matt Mullenweg, co-founder of WordPress and CEO of Automattic. We talked about the power of Open Source and the Open Web.

In 2020, the threat to the web's openness has gained more mainstream awareness. Between big tech hearings in U.S. Congress and ongoing consumer data privacy issues, there are a lot of problems that need solving.

Our Web Summit panel only scratched the surface of some of these issues. Matt pointed out that we've been quick to trade our freedoms on the web for conveniences. Of course, companies haven't made that exchange very obvious to everyday people. Instead of 20-page user agreements, something clearer like nutrition facts labels could help people understand the contracts they're making with companies.

I pointed out that there is a shared responsibility among technology companies and governments to make the web better for everyone. I believe in the idea of a governing body for technology, kind of like an FDA for algorithms. Just as the FDA protects people against harmful pharmaceuticals, there should be a regulator for high-impact software. Privacy laws like GDPR are just the beginning of making sure that consumer data isn't abused.

Mitchell added that beyond regulation, technologists play an important role in developing clearer, fairer and more open alternatives for people on the web. I couldn't agree more. Open Source is an important contributor to maintaining the Open Web, and technologies like the blockchain hold potential for making the Open Web more robust.

To watch a video recording of our panel, visit https://youtu.be/qaIYdsy-Gb8. We only had 20 minutes, but this is the kind of conversation that could have lasted hours. Let's continue this important discussion in 2021!