Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

June 19, 2019

Choices If You Want Cash Online Now

Financial dilemmas can befall anybody. Even when you're on top, an unexpected expense can come your way: anything from a costly trip to the doctor to a pricey automotive repair. Sometimes last week's income covers these inconveniences; other times it may not stretch far enough. Expenses won't wait for your next payday, so what can you do?

1. Unsecured Loans

One way of dealing with a financial obligation is taking out a personal loan. That may be fine for longer-term problems but does little to ease urgent ones. Most finance companies require a process of reviewing your credit and deciding on collateral should you default on repayment.

Bank lenders tend to offer loans of more than $1000 to make the loan worth their while where interest is concerned. Another issue with personal loans is that they can take time to be approved, time you may not have.

2. Seeking Money

Asking friends or family members for help with money problems might be one way to cope, but that avenue seems to carry risks of its own. Having to ask a friend or a relative can be tough and can put strain on a relationship.

Some social circles are more comfortable with helping to bear financial problems, while in others the idea of someone asking to borrow money carries a kind of stigma. It is always better to keep financial drama out of important relationships when you can.

3. Considering an online payday loan

An option worth considering when you want cash today is to take out a payday loan, otherwise referred to as a cash advance. These kinds of loans take very little time to be approved; you can expect approval within minutes to a few hours of applying. Payday loans require no collateral, as they are unsecured loans.

Essentially, you receive a sum of money that is to be repaid, plus a fee set by the lender, upon receiving your next paycheck. It is a good way of borrowing smaller amounts quickly and paying them off just as quickly. It's a way to make any day payday when you really need it to be.

Life Happens

There are a lot of things in life that won't wait until payday. You need a car in good repair to get back and forth to work. If the doctor prescribes new medication, you'll need to buy it to stay healthy. If you're at the dentist and your child needs cavities filled, you're not going to turn down the procedure. All of these things cost money, and they rarely happen when it's convenient.

Most of us try our best to be self-sufficient, but sometimes we need something to tip the odds back in our favor. Fortunately, cash advances usually meet the urgency of the situation and take little more than a week or two to pay back.

Payday loans don't always have to be just for emergencies, either. There have probably been plenty of occasions when a little more money on hand would have made a wedding or prom a bit more special, or a getaway more unforgettable. A cash advance can be a fine way to have extra money for a special upcoming celebration.

There are direct payday lenders that you can find online. If you could use more money to cover an emergency or a special occasion, a cash advance may be exactly what you're looking for.

June 18, 2019

Lawyers involved in company mergers and acquisitions are used to working through mountains of documentation; careful analysis involves gigabytes of data. Until fairly recently, the customary trip by the lawyers to the seller's office was how they gained access to the required information, which was stored in an archive in a separate room.

Technology has made it possible to speed this process up through the exchange of electronic documents. You could complete the procedure simply by sending the requested files via email or cloud storage, but the security of such an approach is dubious. A modern solution addresses this, providing not only data protection but also speed: the secure VDR, or virtual data room.

What is a Due Diligence Virtual Data Room

The term covers online file storage that lets you share documents with third parties using access keys, and lets you set different access levels for different project participants.

Users access the system with an assigned login and password, and both the data and the access session are encrypted. Specialised due diligence virtual data rooms stand out from other solutions by offering the highest level of security, which is why many enterprises have barred their staff from using ordinary cloud services and similar tools.

Today's providers of secure virtual data rooms have considerable experience in the field of mergers and enterprise transactions, and the software products they offer combine the essential functionality with a high level of security. With their help, reliable processing and storage of large volumes of data can be provided. The workspace is set up for each project.

Modern data room providers are distinguished by a high priority on security and compliance with advanced requirements, including up-to-date encryption standards and multi-factor authentication. Some rooms can restrict access to a document that has already been viewed, which lets the administrator revoke user rights at any time, even after a file has been downloaded.

A further distinguishing feature is the use of dynamic watermarks displayed on any downloaded document. This overlay records the download date, the project name, and the name and IP address of the person who obtained the file.

How to Select a Data Room Provider

virtual data rooms

There are a number of factors that need to be considered when selecting a virtual data room:

– Security level. You should look for a provider that stores files itself, without handing them over to a subcontractor. It is also worth checking certifications, reviews of the quality of the solution, and so on.

– Cost. Many vendors set the price of the solution based on the storage space used and the period during which the virtual data room will be open. Services offering storage for a fixed period are suited to one-off use; for regular use, a subscription is usually the better option.

– Convenience and functionality. It is important to pay attention to the documentation formats the virtual data room supports, whether it is possible to download file archives, and which platforms it runs on. For international deals, round-the-clock access to the room and support for several languages is usually essential.

Today, we released a new version of Acquia Lift, our web personalization tool.

In today's world, personalization has become central to the most successful customer experiences. Most organizations know that personalization is no longer optional, but have put it off because it can be too difficult. The new Acquia Lift solves that problem.

Where the previous Acquia Lift may have required a degree of fine-tuning from a developer, the new version simplifies how marketers create and launch website personalization: anyone can point, click and personalize content without any code.

We started working on the new version of Acquia Lift in early 2018, well over a year ago. In the process we interviewed over 50 customers, redesigned the user interface and workflows, and added various new capabilities to make it easier for marketers to run website personalization campaigns. And today, at our European customer conference, Acquia Engage London, we released the new Acquia Lift to the public.

You can see all of the new features in action in this 5-minute Acquia Lift demo video:

The new Acquia Lift offers the best web personalization solution in Acquia's history, and definitely the best tool for Drupal.

June 14, 2019

Terrorism has always been a political invention of whatever power happens to be in place. Thousands of people die every year in car accidents or from consuming products that are bad for their health? That's normal, it keeps the economy turning, it creates jobs. Let a suicidal madman slaughter the people around his home and we talk about a deranged loner. But let a Muslim pull out a knife in a metro station and, on a loop on every television channel, we invoke terrorism: the terrorism that will have soldiers camping in our cities, that will allow liberty-killing laws to be passed, the kind of laws that ought to alarm any attentive democrat.

The very definition of terrorism implies that it is a construct of the mind, a carefully maintained terror rather than a rational observation. We are told about means, accomplices, financing. It is simply a tactic to cloud our brains, to switch on the conspiratorial mode that our neural wiring unfortunately cannot shake off without a violent conscious effort.

For if those in power and the media invent terrorism, it is above all because we crave battles, blood, fear, grand intricate theories. Reality is far more prosaic: a lone man can do far more damage than most recent terrorist attacks have. I would even go further! I claim that, with a few exceptions, acting as a group has worked against the terrorists. Their attacks are levelled down, their collective stupidity smothering the slightest glimmer of individual lucidity. I am not talking idly: I decided to prove it.

It is true that the signs cost me some time and a few round trips to the hardware store. But I was not unhappy with the result. Thirty mobile signs repeating the anti-terrorism security instructions: no weapons, no dangerous objects, no liquids. Underneath, three recycling-bin compartments that also serve as the base.

Inside that base, well hidden, a very simple electronic device with a smoke charge. Enough to frighten without hurting anyone.

All I then had to do was rent a vaguely official-looking work van and, dressed in orange overalls, go and set my signs down in front of the entrances to the stadium and to the capital's various concert halls.

Of course I would have been arrested if I had been dropping off badly wrapped packages in a djellaba. But what can you say to a clean-shaven guy in work clothes who is putting up information signs printed with the city's official logo? To those who asked questions, I said I had simply been ordered to install them. The only security guard who found it a bit odd, I showed him a work order signed by the alderman for public works. That reassured him. I should add that I did not even forge the signature: I reused the one published in the minutes of a municipal council meeting on the city's website. My colour printer did the rest.

As a general rule, nobody is suspicious if you are not taking anything away. I was bringing in equipment that costs money. Surely I could not have come up with that all on my own.

With my thirty signs in place, I moved on to the second phase of my plan, which required somewhat tighter timing. I had meticulously chosen my day because of a major international match at the stadium and several big concerts.

If only I had known… Even today I regret not having done it a little earlier. Or later. Why at the same time?

But let us not get ahead of ourselves. Having changed clothes, I went to the railway station. My duly purchased Thalys ticket in hand, I found my seat and, greeting the woman across the aisle, put my heavy briefcase in the luggage rack. I also slipped my newspaper into the seat pocket in front of me. Checking my watch, I remarked out loud that there were a few minutes left before departure. With an innocent air, I asked where the bar carriage was. I got up and, after walking through two carriages, stepped off the train.

There is nothing more suspicious than an abandoned bag. But if its owner is well groomed, wears a tie, shows no sign of nervousness and leaves his jacket on the seat, the bag is no longer abandoned. It has merely been set down for a moment. It is as simple as that!

For the beauty of the gesture, I had worked out the ideal spot for my detonator and bought my seat accordingly. I added a small touch of complexity: a microcomputer with a GPS sensor that would trigger my contraption at just the right moment. It was not strictly necessary, but it was, in a way, a technophile's cherry on the cake.

I only wanted to scare people, nothing more! The coincidence is unfortunate, but consider that I had gone so far as to make sure my smoke device would not produce the slightest bang that could be mistaken for a gunshot! I wanted to avoid a panic!

Leaving the station, I jumped on the shuttle to the airport. I made it a point of honour to finish my work. I arrived just in time. Under the gaze of a security guard, I dropped the plastic bottles containing my device into the bin provided for that purpose. It was rush hour, and the queue was enormous.

I have never understood why terrorists try either to get onto the plane or to set off their bomb in the vast check-in area. The security checkpoint is the smallest and most densely packed zone, precisely because of the long queues. Tightening the checks only makes the queues longer and makes that zone even more suitable for an attack. How absurd!

Paradoxically, it is also the only zone where abandoning a fairly bulky object is not suspicious: it is even normal and encouraged! No containers over a tenth of a litre! Besides the plastic bottles, I put on a crestfallen face over my cheap thermos. The screener waved at me to comply. So it was on his orders that I dropped the detonator into the bin, right in the middle of the criss-crossing queues.

I heard the news while sipping a coffee near the boarding gates. A series of explosions at the stadium, at the very moment the crowd was pressing in to enter. My blood ran cold! Not today! Not at the same time as me!

Social media was buzzing with rumours. Some spoke of explosions outside concert halls. At first those reports were denied by the authorities and the journalists, which partly reassured me. Until a wave of tweets made me realise that, fixated on the stadium explosions and several hundred wounded, the authorities simply were not aware. There were no ambulances left. Outside the concert halls, music lovers with limbs torn off lay dying in their own blood. The phone network was saturated; the same images played on a loop, reshared thousands of times by anyone who could catch a semblance of wifi.

Some witnesses spoke of a massive attack, of armed fighters shouting "Allah Akbar!". Other accounts described soldiers patrolling the streets and valiantly defending themselves. Bodies littered the cobblestones, even far from any explosion. To believe Twitter, it was all-out war!

It seems that in reality only about forty people were hit by gunfire from the frightened soldiers. And that at only one location did an armed individual, believing he himself was under attack, fire back, killing one of the soldiers and triggering a response that left two dead and eight wounded. The rest of the dead and injured away from the explosion sites were apparently due to crowd panic.

But the presence of the soldiers also made up, in some places, for the lack of ambulances. They gave first aid and saved lives. Paradoxically, in some cases they treated people they had just shot.

Just as well that the weapons of war they were lugging around were not loaded. A single round from a gun of that calibre can go through several people, cars, partition walls. The Geneva Convention strictly forbids their use outside war zones. They serve only to put on a show, and to provide a small income for the osteopaths who set up practice next to the barracks. In the event of a terrorist attack, the soldiers therefore have to shed their hideous burden and draw their handguns. Which remain no less lethal.

I was ashen as I read my Twitter feed. Around me, everyone was trying to call family and friends. I think the derailment of the two TGVs went almost unnoticed in the middle of it all. Exactly as I had imagined it: a fairly powerful blast in one carriage, at luggage height, timed to go off at the moment the train crossed paths with another. Relative to each other, the two trains were moving at 600 km/h. It is not hard to imagine that if one of them lurches under the force of an explosion and clips the other, it is a catastrophe.

How do you make sure the explosion happens at exactly the right moment?

I do not know how the real terrorists went about it, but I, for my part, had devised a small learning algorithm that recognised the noise and the change in cabin pressure when two trains pass each other. I tested it a dozen times and coupled it to a GPS to delimit a firing zone. A very amusing gadget. But it was only hooked up to a smoke device, of course.

It was when the explosion rang out in the airport that I was forced to rule out a mere coincidence. The coincidence was becoming too enormous. I immediately thought of my blog post, scheduled to be published and shared on social media at the time of the last explosion. In it I explained my approach, apologised for the inconvenience caused by the smoke devices, and justified myself by saying that it was a very small price to pay to demonstrate that all the attacks on our most fundamental rights, all the surveillance in the world, could never stop terrorism. That the only way to prevent terrorism is to give people reasons to love their own lives. That, to prevent radicalisation, prison sentences should be replaced by library sentences, with no television, no Internet, no smartphone. Locked up between Victor Hugo and Amin Maalouf, extremism would quickly breathe its last.

Had my blog post been shared too early because of a scheduling mistake on my part? Or had it been hacked? Had my ideas been picked up by a real terrorist group intending to pin the blame on me?

None of it made any sense. People in the airport were screaming, throwing themselves flat on the ground. Those fleeing collided with those who wanted to help or to see the source of the explosion. In the middle of the uproar, I had to admit to myself that I had often wondered what "my" attacks would look like. I had even done extensive research on explosives, hiding my activity behind the Tor network. That way I discovered a great deal and even ran a few tests, but it would never have crossed my mind to kill.

Everything was planned as a simple intellectual game, a way of spectacularly exposing the inanity of our leaders.

It is true that real explosions would be even more striking. I told myself so several times. But from there to actually doing it!

And yet, reading your investigation, a sense of unease comes over me. I have so often imagined the method for priming real explosives… Do you believe I could have done it without knowing?

I do not want to kill. I did not want to kill. All those dead, those images of horror. The responsibility suffocates me, terrifies me. To what ignominy would my subconscious have driven me!

I would never have thought myself capable of killing so much as an animal. But if your investigation proves it, I must face the facts. I am the sole culprit, the only person responsible for all these deaths. There are no terrorists, no underground organisation, no ideology.

When you think about it, it is rather amusing. But I am not sure it suits you. Are you really prepared to reveal the truth to the public? To announce that the martial law you put in place was, in the end, about nothing more than a slightly absent-minded engineer who mixed up smoke devices and explosives? That the billions invested in counter-terrorism will never be able to stop a lone individual, and are at the root of these attacks?

Good luck explaining all that to the media and to our irresponsible politicians, Madam Investigating Judge!

Completed on 14 April 2019, somewhere off Piraeus, in the Aegean Sea.
Photo by Warren Wong on Unsplash

I am @ploum, speaker and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins at 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and recognition. Thank you!

This text is published under the CC-By BE licence.

June 11, 2019

Recently I was interviewed on RTL Z, the Dutch business news television network. In the interview, I talk about the growth and success of Drupal, and what is to come for the future of the web. Beware, the interview is in Dutch. If you speak Dutch and are subscribed to my blog (hi mom!), feel free to check it out!

June 10, 2019

Recently, GitHub announced an initiative called GitHub Sponsors where open source software users can pay contributors for their work directly within GitHub.

There has been quite a bit of debate about whether initiatives like this are good or bad for Open Source.

On the one hand, there is the concern that the commercialization of Open Source could corrupt Open Source communities, harm contributors' intrinsic motivation and quest for purpose (blog post), or could lead to unhealthy corporate control (blog post).

On the other hand, there is the recognition that commercial sponsorship is often a necessary condition for Open Source sustainability. Many communities have found that to support their growth, as a part of their natural evolution, they need to pay developers or embrace corporate sponsors.

Personally, I believe initiatives like GitHub Sponsors, and others like Open Collective, are a good thing.

It helps not only with the long-term sustainability of Open Source communities, but also improves diversity in Open Source. Underrepresented groups, in particular, don't always have the privilege of free time to contribute to Open Source outside of work hours. Most software developers have to focus on making a living before they can focus on self-actualization. Without funding, Open Source communities risk losing or excluding valuable talent.

I published the following diary on isc.sans.edu: “Interesting JavaScript Obfuscation Example“:

Last Friday, one of our readers (thanks Mickael!) reported a phishing campaign based on a simple HTML page to us. He asked us how to properly extract the malicious code within the page. I did an analysis of the file and it looked interesting for a diary, because a nice obfuscation technique was used in a JavaScript file, but also because the attacker tried to prevent automatic analysis by adding some boring code. In fact, the HTML page contains a malicious Word document encoded in Base64. HTML is wonderful because you can embed data into a page and the browser will automatically decode it. This is often used to deliver small pictures like logos… [Read more]
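
The diary walks through the full analysis; purely as a rough sketch of the general idea (the file name suspicious.html is an assumption, and this is not the diary's exact workflow), a long Base64 blob embedded in such an HTML page can be carved out and decoded with standard tools:

# Pull the longest Base64-looking run out of the page and decode it
# (GNU base64 uses -d, macOS/BSD uses -D). An embedded Word document
# typically shows up as an OLE "Composite Document File".
grep -oE '[A-Za-z0-9+/=]{200,}' suspicious.html | head -n 1 | base64 -d > payload.bin
file payload.bin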

[The post [SANS ISC] Interesting JavaScript Obfuscation Example has been first published on /dev/random]

June 07, 2019

I believe the web has a lot of essay writing services, and they try to draw students in with various features and discounts, but as a student I'm not looking for such features or a price cut. In fact, there are hundreds of websites offering essay writing services to students, and it is very tricky to pick the best one. With the essayswriting.org custom essay writing service you will be able to sleep calmly, because its specialists can cope with almost any custom essay, so just relax. One of the most recommended options is to use an effective essay writing service, especially when you have no proper information or trusted sources to produce an exceptional essay in a short time. Having a well-written essay is now within your reach. If you need a same-day essay, that is not an issue either; you just have to make sure the essay meets the criteria. You will get an original, plagiarism-free admission essay that will make you stand out from the remaining applicants.

The post Nginx: add_header not working on 404 appeared first on ma.ttias.be.

I had the following configuration block on an Nginx proxy.

server {
  listen        80 default;
  add_header X-Robots-Tag  noindex;
  ...
}

The idea was to add the X-Robots-Tag header on all requests being served. However, it didn't work for 404 hits.

$ curl -I "http://domain.tld/does/not/exist"
HTTP/1.1 404 Not Found
...

... but no add_header was triggered.

Turns out, you need to specify that the header should be added always, not just on successful requests.

server {
  listen        80 default;
  ...

  add_header X-Robots-Tag  noindex always;
  ...
}

This fixes it, by adding always to the end of the add_header directive.
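
As a quick sanity check (using the same placeholder domain.tld as above), re-running the earlier curl against a non-existent path should now show the header on the 404 response too:

$ curl -I "http://domain.tld/does/not/exist"
HTTP/1.1 404 Not Found
X-Robots-Tag: noindex
...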

The post Nginx: add_header not working on 404 appeared first on ma.ttias.be.

June 04, 2019

June 01, 2019

A few months ago, I moved from Mechelen, Belgium to Cape Town, South Africa.

Muizenberg Beach

No, that's not the view outside my window. But Muizenberg beach is just a few minutes away by car. And who can resist such a beach? Well, if you ignore the seaweed, that is.

Getting settled after a cross-continental move takes some time, especially since the stuff that I decided I wanted to take with me would need to be shipped by boat, and that took a while to arrive. Additionally, the reason for the move was to move in with someone, and having two people live together for the first time requires some adjustment to their daily routines; that is no different for us.

After having been here for several months now, however, things are pulling together:

I had a job lined up before I moved to Cape Town, but there were a few administrative issues in getting started, which meant that I had to wait at home for a few months while my savings were being reduced to almost nothingness. That is now mostly resolved (except that there seem to be a few delays with payment, but nothing that can't be sorted out).

My stuff arrived a few weeks ago. Unfortunately the shipment was not complete; the 26" 16:10 FullHD monitor that I've had for a decade or so now, somehow got lost. I contacted the shipping company and they'll be looking into things, but I'm not hopeful. Beyond that, there were only a few minor items of damage (one of the feet of the TV broke off, and a few glasses broke), but nothing of real consequence. At least I did decide to go for the insurance which should cover loss and breakage, so worst case I'll just have to buy a new monitor (and throw away a few pieces of debris).

While getting settled, it was harder for me to spend quality time on free software- or Debian-related things, but I did manage to join Gerry and Mark for some FOSDEM-related video review a while ago (meaning all the 2019 videos that could be released have now been released), and spent some time configuring the Debian SReview instance for the Marseille miniconf last weekend; I will do the same for the Hamburg one that will be happening soon. Additionally, there were some threads on the upstream nbd mailinglist that I followed up on.

All in all, I guess it's safe to say that I'm slowly coming out of hibernation. Up next: once the payment issues have been fully resolved and I can spend money with a bit more impudence, join a local choir and/or orchestra and/or tennis club, and have some time off.

May 29, 2019

I published the following diary on isc.sans.edu: “Behavioural Malware Analysis with Microsoft ASA“:

When you need to quickly analyze a piece of malware (or just a suspicious program), your goal is to determine as quickly as possible what the impact is. In many cases, we don't have time to dive very deep because operations must be restored ASAP. To achieve this, there are different actions that can be performed against the sample: the first step is to perform a static analysis (like extracting strings, PE sections, YARA rules, etc). Based on the first results, you can decide to go to the next step: the behavioural analysis. And finally, you decide to perform some live debugging or reverse-engineering. Let's focus on the second step, the behavioural analysis… [Read more]
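
As a minimal sketch of that first, static pass (the sample name and the rules.yar file are assumptions, not part of the diary):

# Printable strings of 8+ characters often reveal URLs, IP addresses or API names
strings -n 8 sample.bin | less
# Match the sample against your own YARA rule set
yara rules.yar sample.bin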

[The post [SANS ISC] Behavioural Malware Analysis with Microsoft ASA has been first published on /dev/random]

May 27, 2019

The post Some useful changes to the Oh Dear! monitoring service appeared first on ma.ttias.be.

Things have been a bit quieter on this blog, but only because my time & energy have been spent elsewhere. In the last couple of months, we launched some useful new additions to the Oh Dear! monitoring service.

On top of that, we shared tips on how to use our crawler to keep your cache warm, and we've been featured by Google as a prominent user of their new .app TLD. Cool!

In addition, we're always looking for feedback. If you notice something we're missing in terms of monitoring, reach out and we'll see what we can do!

The post Some useful changes to the Oh Dear! monitoring service appeared first on ma.ttias.be.

May 25, 2019

37,000 American flags on Boston Common

More than 37,000 American flags are on the Boston Common — an annual tribute to fallen military service members. Seeing all these flags was moving, made me pause, and recognize that Memorial Day weekend is not just about time off from work, outdoor BBQs with friends, or other fun start-of-summer festivities.

Autoptimize just joined the “1+ million active installs”-club. Crazy!

I'm very happy. Thanks everyone for using it, thanks for the support questions and all the great feedback in them, and a special thanks to the people who actively contributed, especially Emilio López (Turl) for creating Autoptimize and handing it over to me back in 2013, and Tomaš Trkulja, who cleaned up a lot of the messy code I added to it and introduced me to PHP CodeSniffer and Travis CI tests.

May 24, 2019

Autoptimize 2.5.1 (out earlier this week) does not have an option to enforce font-display on Google Fonts just yet, but this little code snippet does exactly that:

add_filter('autoptimize_filter_extra_gfont_fontstring','add_display');
function add_display($in) {
  return $in.'&display=swap';
}

Happy swapping (or fallback-ing or optional-ing) :-)

May 23, 2019

Last month, Special Counsel Robert Mueller's long-awaited report on Russian interference in the U.S. election was released on the Justice.gov website.

With the help of Acquia and Drupal, the report was successfully delivered without interruption, despite a 7,000% increase in traffic on its release date, according to the Ottawa Business Journal.

According to Federal Computer Week, by 5pm on the day of the report's release, there had already been 587 million site visits, with 247 million happening within the first hour.

During these types of high-pressure events when the world is watching, no news is good news. Keeping sites like this up and available to the public is an important part of democracy and the freedom of information. I'm proud of Acquia's and Drupal's ability to deliver when it matters most!

May 22, 2019

Drupal 8.7 was released with huge API-First improvements!

The REST API got only fairly small improvements in the seventh minor release of Drupal 8, because it reached a good level of maturity in 8.6 (where we added file uploads, exposed more of Drupal's data model and improved DX), and because we were, of course, busy with JSON:API :)

Thanks to everyone who contributed!

  1. JSON:API #2843147

    Need I say more? :) After keeping you informed about progress in October, November, December and January, followed by one of the most frantic Drupal core contribution periods I’ve ever experienced, the JSON:API module was committed to Drupal 8.7 on March 21, 2019.

    Surely you're now thinking: But when should I use Drupal's JSON:API module instead of the REST module? Well, choose REST if you have non-entity data you want to expose. In all other cases, choose JSON:API.

    In short, after having seen people use the REST module for years, we believe JSON:API makes solving the most common needs an order of magnitude simpler — sometimes two. Many things that require a lot of client-side code, a lot of requests or even server-side code you get for free when using JSON:API. That's why we added it, of course :) See Dries' overview if you haven't already. It includes a great video that Gabe recorded that shows how it simplifies common needs. And of course, check out the spec! (A concrete request sketch follows after this list.)

  2. datetime & daterange fields now respect standards #2926508

    They were always intended to respect standards, but didn’t.

    For a field configured to store date + time, before:

    "field_datetime":[{
      "value": "2017-03-01T20:02:00",
    }]

    and after:

    "field_datetime":[{
      "value": "2017-03-01T20:02:00+11:00",
    }]

    The site’s timezone is now present! This is now a valid RFC3339 datetime string.

    For a field configured to store date only, before:

    "field_dateonly":[{
      "value": "2017-03-01T20:02:00",
    }]

    and after:

    "field_dateonly":[{
      "value": "2017-03-01",
    }]

    Time information used to be present despite this being a date-only field! RFC3339 only covers combined date and time representations. For date-only representations, we need to use ISO 8601. There isn’t a particular name for this date-only format, so we have to hardcode the format. See https://en.wikipedia.org/wiki/ISO_8601#Calendar_dates.

    Note: backward compatibility is retained: a datetime string without timezone information is still accepted. This is discouraged though, and deprecated: it will be removed for Drupal 9.

    Previous Drupal 8 minor releases have fixed similar problems. But this one, for the first time ever, implements these improvements as a @DataType-level normalizer, rather than a @FieldType-level normalizer. Perhaps that sounds too far into the weeds, but … it means it fixed this problem in both rest.module and jsonapi.module! We want the entire ecosystem to follow this example.

  3. rest.module has proven to be maintainable!

    In the previous update, I proudly declared that Drupal 8 core's rest.module was in a "maintainable" state. I'm glad to say that has proven to be true: six months later, the number of open issues still fits on "a single page" (fewer than 50). :) So don't worry as a rest.module user: we still triage the issue queue, it's still maintained. But new features will primarily appear for jsonapi.module.
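
To make that comparison concrete, here is a hedged sketch of the kind of request JSON:API handles out of the box. The site URL and the article content type are assumptions; the /jsonapi/node/article path, sparse fieldsets, includes and paging are standard JSON:API query parameters:

# Fetch 5 article nodes with only their title and creation date, and pull the
# author entity into the same response; -g stops curl from globbing the [] in the URL.
curl -g -H 'Accept: application/vnd.api+json' \
  'https://example.com/jsonapi/node/article?fields[node--article]=title,created&include=uid&page[limit]=5'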


Want more nuance and detail? See the REST: top priorities for Drupal 8.7.x issue on drupal.org.

Are you curious what we’re working on for Drupal 8.8? Want to follow along? Click the follow button at API-First: top priorities for Drupal 8.8.x — whenever things on the list are completed (or when the list gets longer), a comment gets posted. It’s the best way to follow along closely!1

Was this helpful? Let me know in the comments!


For reference, historical data:


  1. ~50 comments per six months — so very little noise. ↩︎

May 21, 2019

opnsense with no inodes

I use OPNsense as my firewall on a PC Engines ALIX board.

The primary reason is to have a firewall that is always up to date, unlike most commercial consumer-grade firewalls that are only supported for a few years. Having a firewall that runs open-source software - it's based on FreeBSD - also makes it easier to review and to verify that there are no back doors.

When I tried to upgrade it to the latest release - 19.1.7 - the upgrade failed because the filesystem ran out of inodes. There is already a topic about this at the OPNsense forum and a fix available for the upcoming nano OPNsense images.

But this will only resolve the issue when a new image becomes available and would require a reinstallation of the firewall.

Unlike ext2/3/4 or SGI's XFS on GNU/Linux, it isn't possible to increase the number of inodes on a UFS filesystem on FreeBSD.

To resolve the upgrade issue I created a backup of the existing filesystem, created a new filesystem with enough inodes, and restored the backup.

You'll find my journey of fixing the out-of-inodes issue below; all commands are executed on a FreeBSD system. Hopefully this is useful for someone.

Fixing the out-of-inodes issue

I connected the OPNsense CF disk to a USB cardreader on a FreeBSD virtual system.

Find the disk

To find the CF disk I use gpart show, as this will also display the disk labels etc.

root@freebsd:~ # gpart show
=>      40  41942960  vtbd0  GPT  (20G)
        40      1024      1  freebsd-boot  (512K)
      1064       984         - free -  (492K)
      2048   4194304      2  freebsd-swap  (2.0G)
   4196352  37744640      3  freebsd-zfs  (18G)
  41940992      2008         - free -  (1.0M)

=>      0  7847280  da0  BSD  (3.7G)
        0  7847280    1  freebsd-ufs  (3.7G)

=>      0  7847280  ufsid/5a6b137a11c4f909  BSD  (3.7G)
        0  7847280                       1  freebsd-ufs  (3.7G)

=>      0  7847280  ufs/OPNsense_Nano  BSD  (3.7G)
        0  7847280                  1  freebsd-ufs  (3.7G)

root@freebsd:~ # 

Backup

Create a backup with the old-school dd, just in case.

root@freebsd:~ # dd if=/dev/da0 of=/home/staf/opnsense.dd bs=4M status=progress
  4013948928 bytes (4014 MB, 3828 MiB) transferred 415.226s, 9667 kB/s
957+1 records in
957+1 records out
4017807360 bytes transferred in 415.634273 secs (9666689 bytes/sec)
root@freebsd:~ # 

mount

Verify that we can mount the filesystem.

root@freebsd:/ # mount /dev/da0a /mnt
root@freebsd:/ # cd /mnt
root@freebsd:/mnt # ls
.cshrc                          dev                             net
.probe.for.install.media        entropy                         proc
.profile                        etc                             rescue
.rnd                            home                            root
COPYRIGHT                       lib                             sbin
bin                             libexec                         sys
boot                            lost+found                      tmp
boot.config                     media                           usr
conf                            mnt                             var
root@freebsd:/mnt # cd ..
root@freebsd:/ #  df -i /mnt
Filesystem 1K-blocks    Used   Avail Capacity iused ifree %iused  Mounted on
/dev/da0a    3916903 1237690 2365861    34%   38203  6595   85%   /mnt
root@freebsd:/ # 

umount

root@freebsd:/ # umount /mnt

dump

We'll create a backup with dump; we'll use this backup to restore the data after we have created a new filesystem with enough inodes.

root@freebsd:/ # dump 0uaf /home/staf/opensense.dump /dev/da0a
  DUMP: Date of this level 0 dump: Sun May 19 10:43:56 2019
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/da0a to /home/staf/opensense.dump
  DUMP: mapping (Pass I) [regular files]
  DUMP: mapping (Pass II) [directories]
  DUMP: estimated 1264181 tape blocks.
  DUMP: dumping (Pass III) [directories]
  DUMP: dumping (Pass IV) [regular files]
  DUMP: 16.21% done, finished in 0:25 at Sun May 19 11:14:51 2019
  DUMP: 33.27% done, finished in 0:20 at Sun May 19 11:14:03 2019
  DUMP: 49.65% done, finished in 0:15 at Sun May 19 11:14:12 2019
  DUMP: 66.75% done, finished in 0:09 at Sun May 19 11:13:57 2019
  DUMP: 84.20% done, finished in 0:04 at Sun May 19 11:13:41 2019
  DUMP: 99.99% done, finished soon
  DUMP: DUMP: 1267205 tape blocks on 1 volume
  DUMP: finished in 1800 seconds, throughput 704 KBytes/sec
  DUMP: level 0 dump on Sun May 19 10:43:56 2019
  DUMP: Closing /home/staf/opensense.dump
  DUMP: DUMP IS DONE
root@freebsd:/ # 

newfs

According to the newfs manpage, we can specify the inode density with the -i option. A lower number will give us more inodes.

root@freebsd:/ # newfs -i 1 /dev/da0a
density increased from 1 to 4096
/dev/da0a: 3831.7MB (7847280 sectors) block size 32768, fragment size 4096
        using 8 cylinder groups of 479.00MB, 15328 blks, 122624 inodes.
super-block backups (for fsck_ffs -b #) at:
 192, 981184, 1962176, 2943168, 3924160, 4905152, 5886144, 6867136
root@freebsd:/ # 

mount and verify

root@freebsd:/ # mount /dev/da0a /mnt
root@freebsd:/ # df -i
Filesystem         1K-blocks    Used    Avail Capacity iused    ifree %iused  Mounted on
zroot/ROOT/default  14562784 1819988 12742796    12%   29294 25485592    0%   /
devfs                      1       1        0   100%       0        0  100%   /dev
zroot/tmp           12742884      88 12742796     0%      11 25485592    0%   /tmp
zroot/usr/home      14522868 1780072 12742796    12%      17 25485592    0%   /usr/home
zroot/usr/ports     13473616  730820 12742796     5%  178143 25485592    1%   /usr/ports
zroot/usr/src       13442804  700008 12742796     5%   84122 25485592    0%   /usr/src
zroot/var/audit     12742884      88 12742796     0%       9 25485592    0%   /var/audit
zroot/var/crash     12742884      88 12742796     0%       8 25485592    0%   /var/crash
zroot/var/log       12742932     136 12742796     0%      21 25485592    0%   /var/log
zroot/var/mail      12742884      88 12742796     0%       8 25485592    0%   /var/mail
zroot/var/tmp       12742884      88 12742796     0%       8 25485592    0%   /var/tmp
zroot               12742884      88 12742796     0%       7 25485592    0%   /zroot
/dev/da0a            3677780       8  3383552     0%       2   980988    0%   /mnt
root@freebsd:/ # 

restore

root@freebsd:/mnt # restore rf /home/staf/opensense.dump
root@freebsd:/mnt # 

verify

root@freebsd:/mnt # df -ih /mnt
Filesystem    Size    Used   Avail Capacity iused ifree %iused  Mounted on
/dev/da0a     3.5G    1.2G    2.0G    39%     38k  943k    4%   /mnt
root@freebsd:/mnt # 

Label

The installation had an OPNsense_Nano label; underscores are no longer allowed in label names on FreeBSD 12, so I used OPNsense instead. We'll update /etc/fstab with the new label name.

root@freebsd:~ # tunefs -L OPNsense /dev/da0a
root@freebsd:~ # gpart show
=>      40  41942960  vtbd0  GPT  (20G)
        40      1024      1  freebsd-boot  (512K)
      1064       984         - free -  (492K)
      2048   4194304      2  freebsd-swap  (2.0G)
   4196352  37744640      3  freebsd-zfs  (18G)
  41940992      2008         - free -  (1.0M)

=>      0  7847280  da0  BSD  (3.7G)
        0  7847280    1  freebsd-ufs  (3.7G)

=>      0  7847280  ufsid/5ce123f5836f7018  BSD  (3.7G)
        0  7847280                       1  freebsd-ufs  (3.7G)

=>      0  7847280  ufs/OPNsense  BSD  (3.7G)
        0  7847280             1  freebsd-ufs  (3.7G)

root@freebsd:~ # 

update /etc/fstab

root@freebsd:~ # mount /dev/da0a /mnt
root@freebsd:~ # cd /mnt
root@freebsd:/mnt # cd etc
root@freebsd:/mnt/etc # vi fstab 
# Device                Mountpoint      FStype  Options         Dump    Pass#
/dev/ufs/OPNsense       /               ufs     rw              1       1

umount

root@freebsd:/mnt/etc # cd
root@freebsd:~ # umount /mnt
root@freebsd:~ # 

test

              ______  _____  _____                         
             /  __  |/ ___ |/ __  |                        
             | |  | | |__/ | |  | |___  ___ _ __  ___  ___ 
             | |  | |  ___/| |  | / __|/ _ \ '_ \/ __|/ _ \
             | |__| | |    | |  | \__ \  __/ | | \__ \  __/
             |_____/|_|    |_| /__|___/\___|_| |_|___/\___|

 +=========================================+     @@@@@@@@@@@@@@@@@@@@@@@@@@@@
 |                                         |   @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
 |  1. Boot Multi User [Enter]             |   @@@@@                    @@@@@
 |  2. Boot [S]ingle User                  |       @@@@@            @@@@@    
 |  3. [Esc]ape to loader prompt           |    @@@@@@@@@@@       @@@@@@@@@@@
 |  4. Reboot                              |         \\\\\         /////     
 |                                         |   ))))))))))))       (((((((((((
 |  Options:                               |         /////         \\\\\     
 |  5. [K]ernel: kernel (1 of 2)           |    @@@@@@@@@@@       @@@@@@@@@@@
 |  6. Configure Boot [O]ptions...         |       @@@@@            @@@@@    
 |                                         |   @@@@@                    @@@@@
 |                                         |   @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
 |                                         |   @@@@@@@@@@@@@@@@@@@@@@@@@@@@  
 +=========================================+                                  
                                             19.1  ``Inspiring Iguana''   

/boot/kernel/kernel text=0x1406faf data=0xf2af4+0x2abeec syms=[0x4+0xf9f50+0x4+0x1910f5]
/boot/entropy size=0x1000
/boot/kernel/carp.ko text=0x7f90 data=0x374+0x74 syms=[0x4+0xeb0+0x4+0xf40]
/boot/kernel/if_bridge.ko text=0x7b74 data=0x364+0x3c syms=[0x4+0x1020+0x4+0x125f]
loading required module 'bridgestp'
/boot/kernel/bridgestp.ko text=0x4878 data=0xe0+0x18 syms=[0x4+0x6c0+0x4+0x65c]
/boot/kernel/if_enc.ko text=0x1198 data=0x2b8+0x8 syms=[0x4+0x690+0x4+0x813]
/boot/kernel/if_gre.ko text=0x31d8 data=0x278+0x30 syms=[0x4+0xa30+0x4+0xab8]
<snip>
Root file system: /dev/ufs/OPNsense
Sun May 19 10:28:00 UTC 2019

*** OPNsense.stafnet: OPNsense 19.1.6 (i386/OpenSSL) ***
<snip>

 HTTPS: SHA256 E8 9F B2 8B BE F9 D7 2D 00 AD D3 D5 60 E3 77 53
           3D AC AB 81 38 E4 D2 75 9E 04 F9 33 FF 76 92 28
 SSH:   SHA256 FbjqnefrisCXn8odvUSsM8HtzNNs+9xR/mGFMqHXjfs (ECDSA)
 SSH:   SHA256 B6R7GRL/ucRL3JKbHL1OGsdpHRDyotYukc77jgmIJjQ (ED25519)
 SSH:   SHA256 8BOmgp8lSFF4okrOUmL4YK60hk7LTg2N08Hifgvlq04 (RSA)

FreeBSD/i386 (OPNsense.stafnet) (ttyu0)

login: 
root@OPNsense:~ # df -ih
Filesystem           Size    Used   Avail Capacity iused ifree %iused  Mounted on
/dev/ufs/OPNsense    3.5G    1.2G    2.0G    39%     38k  943k    4%   /
devfs                1.0K    1.0K      0B   100%       0     0  100%   /dev
tmpfs                307M     15M    292M     5%     203  2.1G    0%   /var
tmpfs                292M     88K    292M     0%      27  2.1G    0%   /tmp
devfs                1.0K    1.0K      0B   100%       0     0  100%   /var/unbound/dev
devfs                1.0K    1.0K      0B   100%       0     0  100%   /var/dhcpd/dev
root@OPNsense:~ # 
CTRL-A Z for help | 115200 8N1 | NOR | Minicom 2.7.1 | VT102 | Offline | ttyUSB0                                    

update

login: root
Password:
----------------------------------------------
|      Hello, this is OPNsense 19.1          |         @@@@@@@@@@@@@@@
|                                            |        @@@@         @@@@
| Website:      https://opnsense.org/        |         @@@\\\   ///@@@
| Handbook:     https://docs.opnsense.org/   |       ))))))))   ((((((((
| Forums:       https://forum.opnsense.org/  |         @@@///   \\\@@@
| Lists:        https://lists.opnsense.org/  |        @@@@         @@@@
| Code:         https://github.com/opnsense  |         @@@@@@@@@@@@@@@
----------------------------------------------

*** OPNsense.stafnet: OPNsense 19.1.6 (i386/OpenSSL) ***
<snip>

 HTTPS: SHA256 E8 9F B2 8B BE F9 D7 2D 00 AD D3 D5 60 E3 77 53
               3D AC AB 81 38 E4 D2 75 9E 04 F9 33 FF 76 92 28
 SSH:   SHA256 FbjqnefrisCXn8odvUSsM8HtzNNs+9xR/mGFMqHXjfs (ECDSA)
 SSH:   SHA256 B6R7GRL/ucRL3JKbHL1OGsdpHRDyotYukc77jgmIJjQ (ED25519)
 SSH:   SHA256 8BOmgp8lSFF4okrOUmL4YK60hk7LTg2N08Hifgvlq04 (RSA)

  0) Logout                              7) Ping host
  1) Assign interfaces                   8) Shell
  2) Set interface IP address            9) pfTop
  3) Reset the root password            10) Firewall log
  4) Reset to factory defaults          11) Reload all services
  5) Power off system                   12) Update from console
  6) Reboot system                      13) Restore a backup

Enter an option: 12

Fetching change log information, please wait... done

This will automatically fetch all available updates, apply them,
and reboot if necessary.

This update requires a reboot.

Proceed with this action? [y/N]: y 

Have fun!

Links

In his post Iconic consoles of the IBM System/360 mainframes, 55 years old, Ken Shirriff gives a beautiful overview of how IBM mainframes were operated.

I particularly liked this bit:

The second console function was “operator intervention”: program debugging tasks such as examining and modifying memory or registers and setting breakpoints. The Model 30 console controls below were used for operator intervention. To display memory contents, the operator selected an address with the four hexadecimal dials on the left and pushed the Display button, displaying data on the lights above the dials. To modify memory, the operator entered a byte using the two hex dials on the far right and pushed the Store button. (Although the Model 30 had a 32-bit architecture, it operated on one byte at a time, trading off speed for lower cost.) The Address Compare knob in the upper right set a breakpoint.

IBM System/360 Model 30 console, lower part

Debugging a program was built right into the hardware, to be performed at the console of the machine. Considering the fact that these machines were usually placed in rooms optimized for the machine rather than the human, that must have been a difficult job. Think about that the next time you’re poking at a Kubernetes cluster using your laptop, in the comfort of your home.

Also recommended is the book Core Memory: A Visual Survey of Vintage Computers. It really shows the intricate beauty of some of the earlier computers. It also shows how incredibly cumbersome these machines must have been to handle.

Core Memory: A Visual Survey of Vintage Computers

Even when you’re in IT operations, it’s getting more and more rare to see actual hardware and that’s probably a good thing. It never hurts to look at history to get a taste of how far we’ve come. Life in operations has never been more comfortable: let’s enjoy it by celebrating the past!


Comments | More on rocketeer.be | @rubenv on Twitter

May 16, 2019

I published the following diary on isc.sans.edu: “The Risk of Authenticated Vulnerability Scans“:

NTLM relay attacks have been a well-known way to attack Microsoft Windows environments for a while, and they usually remain successful. The magic with NTLM relay attacks? You don't need to lose time cracking the hashes, just relay them to the victim machine. To achieve this, we need a "responder" that will capture the authentication session on a system and relay it to the victim. A lab is easy to set up: install the Responder framework. The framework contains a tool called MultiRelay.py which helps to relay the captured NTLM authentication to a specific target and, if the attack is successful, execute some code! (There are plenty of blog posts that explain in detail how to (ab)use this attack scenario)… [Read more]
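
The diary only describes the lab at a high level; as a hedged sketch of that setup (the interface name eth0 is an assumption, and the exact options should be checked against the project's README):

# Grab the Responder framework and start listening on the lab interface
git clone https://github.com/lgandx/Responder
cd Responder
python3 Responder.py -I eth0
# MultiRelay.py, which relays the captured NTLM authentication to a chosen
# target, lives in the tools/ subdirectory; see its --help for usage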

[The post [SANS ISC] The Risk of Authenticated Vulnerability Scans has been first published on /dev/random]

May 13, 2019

I published the following diary on isc.sans.edu: “From Phishing To Ransomware?“:

On Friday, one of our readers reported a phishing attempt to us (thanks to him!). Usually, those emails are simply part of classic phishing waves and try to steal credentials from victims but, this time, it was not a simple phishing. Here is a copy of the email, which was nicely redacted… [Read more]

[The post [SANS ISC] From Phishing To Ransomware? has been first published on /dev/random]

May 12, 2019

In my previous two posts (1, 2), we created Docker Debian and Arch-based images from scratch for the i386 architecture.

In this blog post - the last one in this series - we'll do the same for yum-based distributions like CentOS and Fedora.

Building your own Docker base images isn't difficult and lets you trust your distribution's GPG signing keys instead of the Docker Hub, as explained in the first blog post. The mkimage scripts in the contrib directory of the Moby project git repository are a good place to start if you want to build your own Docker images.

Fedora is one of the GNU/Linux distributions that still supports 32-bit systems. CentOS has a Special Interest Group to support alternative architectures: the Alternative Architecture SIG creates installation images for POWER, i386, armhfp (ARMv7 32-bit) and aarch64 (ARMv8 64-bit).

CentOS

In this blog post, we will create CentOS-based Docker images. The procedure to create Fedora images is the same.

Clone moby

staf@centos386 github]$ git clone https://github.com/moby/moby
Cloning into 'moby'...
remote: Enumerating objects: 7, done.
remote: Counting objects: 100% (7/7), done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 269517 (delta 0), reused 1 (delta 0), pack-reused 269510
Receiving objects: 100% (269517/269517), 139.16 MiB | 3.07 MiB/s, done.
Resolving deltas: 100% (182765/182765), done.
[staf@centos386 github]$ 

Go to the contrib directory

[staf@centos386 github]$ cd moby/contrib/
[staf@centos386 contrib]$ 

mkimage-yum.sh

When you run mkimage-yum.sh you get the usage message.

[staf@centos386 contrib]$ ./mkimage-yum.sh 
mkimage-yum.sh [OPTIONS] <name>
OPTIONS:
  -p "<packages>"  The list of packages to install in the container.
                   The default is blank. Can use multiple times.
  -g "<groups>"    The groups of packages to install in the container.
                   The default is "Core". Can use multiple times.
  -y <yumconf>     The path to the yum config to install packages from. The
                   default is /etc/yum.conf for Centos/RHEL and /etc/dnf/dnf.conf for Fedora
  -t <tag>         Specify Tag information.
                   default is reffered at /etc/{redhat,system}-release
[staf@centos386 contrib]$

build the image

The mkimage-yum.sh script will use /etc/yum.conf or /etc/dnf/dnf.conf to build the image. Running mkimage-yum.sh <name> will create the image with that name.

[staf@centos386 contrib]$ sudo ./mkimage-yum.sh centos
[sudo] password for staf: 
+ mkdir -m 755 /tmp/mkimage-yum.sh.LeZQNh/dev
+ mknod -m 600 /tmp/mkimage-yum.sh.LeZQNh/dev/console c 5 1
+ mknod -m 600 /tmp/mkimage-yum.sh.LeZQNh/dev/initctl p
+ mknod -m 666 /tmp/mkimage-yum.sh.LeZQNh/dev/full c 1 7
+ mknod -m 666 /tmp/mkimage-yum.sh.LeZQNh/dev/null c 1 3
+ mknod -m 666 /tmp/mkimage-yum.sh.LeZQNh/dev/ptmx c 5 2
+ mknod -m 666 /tmp/mkimage-yum.sh.LeZQNh/dev/random c 1 8
+ mknod -m 666 /tmp/mkimage-yum.sh.LeZQNh/dev/tty c 5 0
+ mknod -m 666 /tmp/mkimage-yum.sh.LeZQNh/dev/tty0 c 4 0
+ mknod -m 666 /tmp/mkimage-yum.sh.LeZQNh/dev/urandom c 1 9
+ mknod -m 666 /tmp/mkimage-yum.sh.LeZQNh/dev/zero c 1 5
+ '[' -d /etc/yum/vars ']'
+ mkdir -p -m 755 /tmp/mkimage-yum.sh.LeZQNh/etc/yum
+ cp -a /etc/yum/vars /tmp/mkimage-yum.sh.LeZQNh/etc/yum/
+ [[ -n Core ]]
+ yum -c /etc/yum.conf --installroot=/tmp/mkimage-yum.sh.LeZQNh --releasever=/ --setopt=tsflags=nodocs --setopt=group_package_types=mandatory -y groupinstall Core
Loaded plugins: fastestmirror, langpacks
There is no installed groups file.
Maybe run: yum groups mark convert (see man yum)
<snip>
+ tar --numeric-owner -c -C /tmp/mkimage-yum.sh.LeZQNh .
+ docker import - centos:7.6.1810
sha256:7cdb02046bff4c5065de670604fb3252b1221c4853cb4a905ca04488f44f52a8
+ docker run -i -t --rm centos:7.6.1810 /bin/bash -c 'echo success'
success
+ rm -rf /tmp/mkimage-yum.sh.LeZQNh
[staf@centos386 contrib]$

Rename

A new image is created with the name centos.

[staf@centos386 contrib]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
centos              7.6.1810            7cdb02046bff        3 minutes ago       281 MB
[staf@centos386 contrib]$ 

You might want to rename the image to include your user or project name. You can do this by retagging the image and removing the old tag.

[staf@centos386 contrib]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
centos              7.6.1810            7cdb02046bff        20 seconds ago      281 MB
[staf@centos386 contrib]$ docker rmi centos
Error response from daemon: No such image: centos:latest
[staf@centos386 contrib]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
centos              7.6.1810            7cdb02046bff        3 minutes ago       281 MB
[staf@centos386 contrib]$ docker tag 7cdb02046bff stafwag/centos_386:7.6.1810 
[staf@centos386 contrib]$ docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
centos               7.6.1810            7cdb02046bff        7 minutes ago       281 MB
stafwag/centos_386   7.6.1810            7cdb02046bff        7 minutes ago       281 MB
[staf@centos386 contrib]$ docker rmi centos:7.6.1810
Untagged: centos:7.6.1810
[staf@centos386 contrib]$ docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
stafwag/centos_386   7.6.1810            7cdb02046bff        8 minutes ago       281 MB
[staf@centos386 contrib]$ 

Test

[staf@centos386 contrib]$ docker run -it --rm stafwag/centos_386:7.6.1810 /bin/sh
sh-4.2# yum update -y
Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: mirror.usenet.farm
 * extras: mirror.usenet.farm
 * updates: mirror.usenet.farm
base                                                                                                                   | 3.6 kB  00:00:00     
extras                                                                                                                 | 2.9 kB  00:00:00     
updates                                                                                                                | 2.9 kB  00:00:00     
(1/4): updates/7/i386/primary_db                                                                                       | 2.5 MB  00:00:00     
(2/4): extras/7/i386/primary_db                                                                                        | 157 kB  00:00:01     
(3/4): base/7/i386/group_gz                                                                                            | 166 kB  00:00:01     
(4/4): base/7/i386/primary_db                                                                                          | 4.6 MB  00:00:02     
No packages marked for update
sh-4.2# 

Have fun!

May 10, 2019

I published the following diary on isc.sans.edu: “DSSuite – A Docker Container with Didier’s Tools“:

If you follow us and read our daily diaries, you probably already know some famous tools developed by Didier (like oledump.py, translate.py and many more). Didier is using them all the time to analyze malicious documents. His tools are also used by many security analysts and researchers. The complete toolbox is available on his github.com page. You can clone the repository or download the complete package available as a zip archive. However, it's not convenient to install them again and again if, like me, you're switching computers all the time and are always on the road between different customers… [Read more]

[The post [SANS ISC] DSSuite – A Docker Container with Didier’s Tools has been first published on /dev/random]

May 08, 2019

This Thursday, 23 May 2019 at 7 p.m., the 78th Mons session of the Belgian Jeudis du Libre will take place.

The topic of this session: Chamilo: improving access to quality education everywhere in the world

Theme: education|training|e-learning

Audience: developers|teachers|companies

Speaker: Yannick Warnier (Chamilo)

Location: Université de Mons, Campus Plaine de Nimy, avenue Maistriau, Grands Amphithéâtres, Auditoire Curie (see the map on the UMONS website, or the OSM map).

Attendance is free and only requires registering by name, preferably in advance, or at the entrance to the session. Please indicate your intention by registering via the page http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also benefit from the support of our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, don't hesitate to check the agenda and subscribe to the mailing list in order to receive every announcement.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organised on the premises of, and in collaboration with, the Mons universities and colleges involved in training computer scientists (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description: The Chamilo e-learning platform project was born in 2010 from the still-glowing embers of the Dokeos project, itself based on the Claroline project. In 9 years, the project, born in Belgium and today a combined effort of Europeans and South Americans, has been ranked in every top 20 LMS comparison and in the top 3 of open source LMS solutions. It has already been used by more than 21 million users and registers at least 100,000 new users per month. It has now largely surpassed, in popularity and in features, the two projects it originated from.

Chamilo is today a complete suite of online training and education tools that lets you run online tests, store documents, organise structured courses, foster collaboration through forums, launch video conference sessions within courses, generate certificates and digital "badges", and much more, which will be covered briefly.

But behind the scenes of a successful techno-educational project, it took a lot of sweat to make the development of a solution counting more than 2 million lines of code possible.

What is the secret of this success? How do you develop commercial resilience on top of an Open Source solution (or, more precisely, free software)? Where does social responsibility end to make room for the revenue needed for growth?

These are all fascinating topics that Yannick is looking forward to presenting to you this Thursday, 23 May.

Short bio: Yannick Warnier, who graduated in Computer Science from the Institut Paul Lambin in 2003 and is a former member of BxLUG, is the founder of the Chamilo project and the current president of the Chamilo Association. To combine work with passion, he also runs BeezNest, the leading service provider for the Chamilo software, and develops its sister company in Latin America. He oversees the development of the software and takes part in the Association's decisions that set the main directions of the project, beyond the software itself.

Acquia joins forces with Mautic

I'm happy to announce today that Acquia acquired Mautic, an open source marketing automation and campaign management platform.

A couple of decades ago, I was convinced that every organization required a website — a thought that sounds rather obvious now. Today, I am convinced that every organization will need a Digital Experience Platform (DXP).

Having a website is no longer enough: customers expect to interact with brands through their websites, email, chat and more. They also expect these interactions to be relevant and personalized.

If you don't know Mautic, think of it as an alternative to Adobe's Marketo or Salesforce's Marketing Cloud. Just like these solutions, Mautic provides marketing automation and campaign management capabilities. It's differentiated in that it is easier to use, supports one-to-one customer experiences across many channels, integrates more easily with other tools, and is less expensive.

The flowchart style visual campaign builder you saw in the beginning of the Mautic demo video above is one of my favorite features. I love how it allows marketers to combine content, user profiles, events and a decision engine to deliver the best-next action to customers.

Mautic is a relatively young company, but has quickly grown into the largest open source player in the marketing automation space, with more than 200,000 installations. Its ease of use, flexibility and feature completeness has won over many marketers in a very short time: the company's top-line grew almost 400 percent year-over-year, its number of customers tripled, and Mautic won multiple awards for product innovation and customer service.

The acquisition of Mautic accelerates Acquia's product strategy to deliver the only Open Digital Experience Platform:

The pieces that make up a Digital Experience Platform, and how Mautic fits into Acquia's Open Digital Experience Platform. Acquia is strong in content management, personalization, user profile management and commerce (yellow blocks). Mautic adds or improves Acquia's multi-channel delivery, campaign management and journey orchestration capabilities (purple blocks).

There are many reasons why we like Mautic, but here are my top 3:

Reason 1: Disrupting the market with "open"

Open Source will disrupt every component of the modern technology stack. It's not a matter of if, it's when.

Just as Drupal disrupted web content management with Open Source, we believe Mautic disrupts marketing automation.

With Mautic, Acquia is now the only open and open source alternative to the expensive, closed, and stagnant marketing clouds.

I'm both proud and excited that Acquia is doubling down on Open Source. Given our extensive open source experience, we believe we can help grow Mautic even faster.

Reason 2: Innovating through integrations

To build an optimal customer experience, marketers need to integrate with different data sources, customer technologies, and bespoke in-house platforms. Instead of buying a suite from a single vendor, most marketers want an open platform that allows for open innovation and unlimited integrations.

Only an open architecture can connect any technology in the marketing stack, and only an open source innovation model can evolve fast enough to offer integrations with thousands of marketing technologies (to date, there are 7,000 vendors in the martech landscape).

Because developers are largely responsible for creating and customizing marketing platforms, marketing technology should meet the needs of both business users and technology architects. Unlike other companies in the space, Mautic is loved by both marketers and developers. With Mautic, Acquia continues to focus on both personas.

Reason 3: The same technology stack and business model

Like Drupal, Mautic is built in PHP and Symfony, and like Drupal, Mautic uses the GNU GPL license. Having the same technology stack has many benefits.

Digital agencies or in-house teams need to deliver integrated marketing solutions. Because both Drupal and Mautic use the same technology stack, a single team of developers can work on both.

The similarities also make it possible for both open source communities to collaborate — while it is not something you can force to happen, it will be interesting to see how that dynamic naturally plays out over time.

Last but not least, our business models are also very aligned. Both Acquia and Mautic were "born in the cloud" and make money by offering subscription- and cloud-based delivery options. This means you pay for only what you need and that you can focus on using the products rather than running and maintaining them.

Mautic offers several commercial solutions:

  • Mautic Cloud, a fully managed SaaS version of Mautic with premium features not available in Open Source.
  • For larger organizations, Mautic has a proprietary product called Maestro. Large organizations operate in many regions or territories, and have teams dedicated to each territory. With Maestro, each territory can get its own Mautic instance, but they can still share campaign best-practices, and repeat successful campaigns across territories. It's a unique capability, which is very aligned with the Acquia Cloud Site Factory.

Try Mautic

If you want to try Mautic, you can either install the community version yourself or check out the demo or sandbox environment of Mautic Open Marketing Cloud.

Conclusion

We're very excited to join forces with Mautic. It is such a strategic step for Acquia. Together we'll provide our customers with more freedom, faster innovation, and more flexibility. Open digital experiences are the way of the future.

I've got a lot more to share about the Mautic acquisition, how we plan to integrate Mautic in Acquia's solutions, how we could build bridges between the Drupal and Mautic community, how it impacts the marketplace, and more.

In time, I'll write more about these topics on this blog. In the meantime, you can listen to this podcast with DB Hurley, Mautic's founder and CTO, and me.

May 07, 2019

A couple of weeks ago our first-born daughter came into my life. All the clichés about what this miracle does to a man are very much true. Not only is this (quite literally) the start of a new life, it also gives you a pause to reflect on your own life.

Around the same time I’ve finished working on the project that has occupied most of my time over the past years: helping a software-as-a-service company completely modernize and rearchitect their software stack, to help it grow further in the coming decade.

Going forward, I’ll be applying the valuable lessons learned while doing this, combined with all my previous experiences, as a consultant. More specifically I’ll be focusing on DevOps and related concerns. More information on that can be found on this page.

I also have a new business venture in the works, but that’s the subject of a future post.


Comments | More on rocketeer.be | @rubenv on Twitter

May 05, 2019

In my previous post, we started with creating Debian based docker images from scratch for the i386 architecture.

In this blog post, we’ll create Arch GNU/Linux based images.

Arch GNU/Linux

Arch Linux stopped supporting i386 systems. If you want to run Arch Linux on an i386 system, there is the community-maintained Archlinux32 project and the free-software version Parabola GNU/Linux-libre.

For the ARM architecture, there is the Arch Linux ARM project, which I have used.

mkimage-arch.sh in moby

I used mkimage-arch.sh from the Moby/Docker project in the past, but it failed when I tried it this time…

I created a small patch to fix it and opened a pull request. Until the issue is resolved, you can use the version in my cloned git repository.

Build the docker image

Install the required packages

Make sure that your system is up-to-date.

staf@archlinux32 contrib]$ sudo pacman -Syu

Install the required packages.

[staf@archlinux32 contrib]$ sudo pacman -S arch-install-scripts expect wget

Directory

Create a directory that will hold the image data.

[staf@archlinux32 ~]$ mkdir -p dockerbuild/archlinux32
[staf@archlinux32 ~]$ cd dockerbuild/archlinux32
[staf@archlinux32 archlinux32]$ 

Get mkimage-arch.sh

[staf@archlinux32 archlinux32]$ wget https://raw.githubusercontent.com/stafwag/moby/master/contrib/mkimage-arch.sh
--2019-05-05 07:46:32--  https://raw.githubusercontent.com/stafwag/moby/master/contrib/mkimage-arch.sh
Loaded CA certificate '/etc/ssl/certs/ca-certificates.crt'
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.36.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.36.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3841 (3.8K) [text/plain]
Saving to: 'mkimage-arch.sh'

mkimage-arch.sh                 100%[====================================================>]   3.75K  --.-KB/s    in 0s      

2019-05-05 07:46:33 (34.5 MB/s) - 'mkimage-arch.sh' saved [3841/3841]

[staf@archlinux32 archlinux32]$ 

Make it executable.

[staf@archlinux32 archlinux32]$ chmod +x mkimage-arch.sh
[staf@archlinux32 archlinux32]$ 

Setup your pacman.conf

Copy your pacman.conf, as mkimage-arch-pacman.conf, to the directory that holds mkimage-arch.sh.

[staf@archlinux32 contrib]$ cp /etc/pacman.conf mkimage-arch-pacman.conf
[staf@archlinux32 contrib]$ 

Build your image

[staf@archlinux32 archlinux32]$ TMPDIR=`pwd` sudo ./mkimage-arch.sh
spawn pacstrap -C ./mkimage-arch-pacman.conf -c -d -G -i /var/tmp/rootfs-archlinux-wqxW0uxy8X base bash haveged pacman pacman-mirrorlist --ignore dhcpcd,diffutils,file,inetutils,iproute2,iputils,jfsutils,licenses,linux,linux-firmware,lvm2,man-db,man-pages,mdadm,nano,netctl,openresolv,pciutils,pcmciautils,psmisc,reiserfsprogs,s-nail,sysfsutils,systemd-sysvcompat,usbutils,vi,which,xfsprogs
==> Creating install root at /var/tmp/rootfs-archlinux-wqxW0uxy8X
==> Installing packages to /var/tmp/rootfs-archlinux-wqxW0uxy8X
:: Synchronizing package databases...
 core                                              198.0 KiB   676K/s 00:00 [##########################################] 100%
 extra                                               2.4 MiB  1525K/s 00:02 [##########################################] 100%
 community                                           6.3 MiB   396K/s 00:16 [##########################################] 100%
:: dhcpcd is in IgnorePkg/IgnoreGroup. Install anyway? [Y/n] n
:: diffutils is in IgnorePkg/IgnoreGroup. Install anyway? [Y/n] n
:: file is in IgnorePkg/IgnoreGroup. Install anyway? [Y/n] n
<snip>
==> WARNING: /var/tmp/rootfs-archlinux-wqxW0uxy8X is not a mountpoint. This may have undesirable side effects.
Generating locales...
  en_US.UTF-8... done
Generation complete.
tar: ./etc/pacman.d/gnupg/S.gpg-agent.ssh: socket ignored
tar: ./etc/pacman.d/gnupg/S.gpg-agent.extra: socket ignored
tar: ./etc/pacman.d/gnupg/S.gpg-agent: socket ignored
tar: ./etc/pacman.d/gnupg/S.gpg-agent.browser: socket ignored
sha256:41cd9d9163a17e702384168733a9ca1ade0c6497d4e49a2c641b3eb34251bde1
Success.
[staf@archlinux32 archlinux32]$ 

Rename

A new image is created with the name archlinux.

[staf@archlinux32 archlinux32]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED              SIZE
archlinux           latest              18e74c4d823c        About a minute ago   472MB
[staf@archlinux32 archlinux32]$ 

You might want to rename it. You can do this by retagging the image and removing the old tag.

[staf@archlinux32 archlinux32]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED              SIZE
archlinux           latest              18e74c4d823c        About a minute ago   472MB
[staf@archlinux32 archlinux32]$ docker tag stafwag/archlinux:386 18e74c4d823c
Error response from daemon: No such image: stafwag/archlinux:386
[staf@archlinux32 archlinux32]$ docker tag 18e74c4d823c stafwag/archlinux:386             
[staf@archlinux32 archlinux32]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
archlinux           latest              18e74c4d823c        3 minutes ago       472MB
stafwag/archlinux   386                 18e74c4d823c        3 minutes ago       472MB
[staf@archlinux32 archlinux32]$ docker rmi archlinux
Untagged: archlinux:latest
[staf@archlinux32 archlinux32]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
stafwag/archlinux   386                 18e74c4d823c        3 minutes ago       472MB
[staf@archlinux32 archlinux32]$ 

Test

[staf@archlinux32 archlinux32]$ docker run --rm -it stafwag/archlinux:386 /bin/sh
sh-5.0# pacman -Syu
:: Synchronizing package databases...
 core is up to date
 extra is up to date
 community is up to date
:: Starting full system upgrade...
 there is nothing to do
sh-5.0# 

Have fun!

May 04, 2019

The post Enable the RPC JSON API with password authentication in Bitcoin Core appeared first on ma.ttias.be.

The bitcoin daemon has a very useful & easy-to-use HTTP API built in, which allows you to talk to it like a simple webserver and get JSON responses back.

By default, it's enabled, but it only listens on localhost port 8332 and it is unauthenticated.

$ netstat -alpn | grep 8332
tcp     0   0 127.0.0.1:8332     0.0.0.0:*    LISTEN      31667/bitcoind
tcp6    0   0 ::1:8332           :::*         LISTEN      31667/bitcoind

While useful if you're on the same machine (you can query it locally without username/password), it won't help much if you're querying a remote node.

In order to allow bitcoind to bind on a public-facing IP and have username/password authentication, you can modify the bitcoin.conf.

$ cat .bitcoin/bitcoin.conf
# Expose the RPC/JSON API
server=1
rpcbind=10.0.1.5
rpcallowip=0.0.0.0/0
rpcport=8332
rpcuser=bitcoin
rpcpassword=J9JkYnPiXWqgRzg3vAA

If you restart your daemon with this config, it will try to bind to IP "10.0.1.5" and open the RPC JSON API endpoint on its default port 8332. To authenticate, you'd give the user & password shown in the config.

If you do not pass the rpcallowip parameter, the server won't bind on the requested IP, as confirmed in the manpage:

-rpcbind=[:port]
Bind to given address to listen for JSON-RPC connections. Do not expose
the RPC server to untrusted networks such as the public internet!
This option is ignored unless -rpcallowip is also passed. Port is
optional and overrides -rpcport. Use [host]:port notation for
IPv6. This option can be specified multiple times (default:
127.0.0.1 and ::1 i.e., localhost)

Keep in mind that it's a lot safer to actually pass the allowed IPs and treat it as a whitelist, rather than as a workaround to listen on all IPs like I did above.
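For example, limiting the API to a management subnet would look like this in bitcoin.conf (a sketch; the subnet is a placeholder):

# Only accept JSON-RPC connections from this subnet instead of 0.0.0.0/0
rpcallowip=10.0.1.0/24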

Here's an example of a curl call to query the daemon.

$ curl \
  --user bitcoin \
  --data-binary '{"jsonrpc": "1.0", "id":"curltest", "method": "getnetworkinfo", "params": [] }' \ 
  -H 'content-type: text/plain;' \
  http://10.0.1.5:8332/
Enter host password for user 'bitcoin':

{
  "result":
    {
      "version":180000,
      ...
    }
}

You can now safely query your bitcoin daemon with authentication.

The post Enable the RPC JSON API with password authentication in Bitcoin Core appeared first on ma.ttias.be.

May 02, 2019

The post How to upgrade to the latest Bitcoin Core version appeared first on ma.ttias.be.

This guide assumes you've followed my other guide, where you compile Bitcoin Core from source. These steps will allow you to upgrade a running Bitcoin Core node to the latest version.

Stop current Bitcoin Core node

First, stop the running version of the Bitcoin Core daemon.

$ su - bitcoin
$ bitcoin-cli stop
Bitcoin server stopping

Check to make sure your node is stopped.

$ bitcoin-cli status
error: Could not connect to the server 127.0.0.1:8332

Once that's done, download & compile the new version.

Install the latest version of Bitcoin Core

To do so, you can follow all other steps to self-compile Bitcoin Core.

In those steps, there's a part where you do a git checkout to a specific version. Change that to refer to the latest release. In this example, we'll upgrade to 0.18.0.

$ git clone https://github.com/bitcoin/bitcoin.git
$ cd bitcoin
$ git checkout v0.18.0

And compile Bitcoin Core using the same steps as before.

$ ./autogen.sh
$ ./configure
$ make -j $(nproc)

Once done, you have the latest version of Bitcoin Core.

Start the Bitcoin Core daemon

Start it again as normal;

$ bitcoind --version
Bitcoin Core Daemon version v0.18.0
$ bitcoind -daemon
Bitcoin server starting

Check the logs to make sure everything is OK.

$ tail -f ~/.bitcoin/debug.log
Bitcoin Core version v0.18.0 (release build)
Assuming ancestors of block 0000000000000000000f1c54590ee18d15ec70e68c8cd4cfbadb1b4f11697eee have valid signatures.
Setting nMinimumChainWork=0000000000000000000000000000000000000000051dc8b82f450202ecb3d471
Using the 'sse4(1way),sse41(4way),avx2(8way)' SHA256 implementation
Using RdSeed as additional entropy source
Using RdRand as an additional entropy source
Default data directory /home/bitcoin/.bitcoin
...

And you're now running the latest version.

The post How to upgrade to the latest Bitcoin Core version appeared first on ma.ttias.be.

I published the following diary on isc.sans.edu: “Another Day, Another Suspicious UDF File“:

In my last diary, I explained that I found a malicious UDF image used to deliver a piece of malware. After this, I created a YARA rule on VT to try to spot more UDF files in the wild. It seems like the tool ImgBurn is the attacker's best friend to generate such malicious images. To find more UDF images, I used the following very simple YARA rule… [Read more]

[The post [SANS] Another Day, Another Suspicious UDF File has been first published on /dev/random]

May 01, 2019

The post I forgot how to manage a server appeared first on ma.ttias.be.

Something embarrassing happened to me the other day. I was playing around with a new server on Digital Ocean and it occurred to me: I had no idea how to manage it.

This is slightly awkward because I've been a sysadmin for over 10yrs, the largest part of my professional career.

The spoils of configuration management

The thing is, I've been writing and using config management for the last 6-ish years. I've shared many blogposts regarding Puppet, some of its design patterns, even pretended to hold all the knowledge by sharing my lessons learned after 3yr of using Puppet.

And now I've come to the point where I no longer know how to install, configure or run software without Puppet.

My config management does this for me. Whether it's Puppet, Ansible, Chef, ... all of the boring parts of being a sysadmin have been hidden behind management tools. Yet here I am, trying to quickly configure a personal server, without my company-managed config management to aid me.

Boy did I feel useless.

I had to Google the correct SSH config syntax to allow root logins, but only via public keys. I had to Google for iptables rule syntax and using ufw to manage them. I forgot where I had to place the configs for supervisor for running jobs, let alone how to write the config.
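For the record, this is roughly what I had to look up; a minimal sketch, assuming a stock OpenSSH and ufw setup:

# /etc/ssh/sshd_config: allow root logins, but only with public keys
PermitRootLogin prohibit-password
PasswordAuthentication no

# ufw: deny everything incoming except SSH
sudo ufw default deny incoming
sudo ufw allow 22/tcp
sudo ufw enable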

I know these configs in our tools. In our abstractions. In our automation. But I forgot what it's like in Linux itself.

A pitfall to remember

I previously blogged about 2 pitfalls I had already known about: trying to automate a service you don't fully understand or blindly trusting the automation of someone else, not understanding what it's doing under the hood.

I should add a third one to my pitfall list: I'm forgetting the basic and core tools used to manage a Linux server.

Is this a bad thing though? I'm not sure yet. Maybe the lower level knowledge isn't that valuable, as long as we have our automation standby to take care of this? It frees us from having to think about too many things and allows us to focus on the more important aspects of being a sysadmin.

But it sure felt strange Googling things I had to Google almost a decade ago.

The post I forgot how to manage a server appeared first on ma.ttias.be.

April 30, 2019

The Drupal Association announced today that Heather Rocker has been selected as its next Executive Director.

This is exciting news because it concludes a seven month search since Megan Sanicki left.

We looked long and hard for someone who could help us grow the global Drupal community by building on its diversity, working with developers and agency partners, and expanding our work with new audiences such as content creators and marketers.

The Drupal Association (including me) believes that Heather can do all of that, and is the best person to help lead Drupal into its next phase of growth.

Heather earned her engineering degree from Georgia Tech. She has dedicated much of her career to working with women in technology, both as the CEO of Girls, Inc. of Greater Atlanta and the Executive Director of Women in Technology.

We were impressed not only with her valuable experience with volunteer organizations, but also her work in the private sector with large customers. Most recently, Heather was part of the management team at Systems Evolution, a team of 250 business consultants, where she specialized in sales operations and managed key client relationships.

She is also a robotics fanatic who organizes and judges competitions for children. So, maybe we’ll see some robots roaming around DrupalCon in the future!

As you can tell, Heather will bring a lot of great experience to the Drupal community and I look forward to partnering with her.

Last but not least, I want to thank Tim Lehnen for serving as our Interim Executive Director. He did a fantastic job leading the Drupal Association through this transition.

How simply having a Facebook account made me unreachable for 5 years for several dozen readers of my blog

Not being on Facebook, or leaving it, is often up for debate: "But how will people contact you? How will you stay in touch?". Everything seems to come down to an impossible choice: protect your privacy, or be reachable by ordinary mortals.

I have just realised that this is a false debate. Just as Facebook offers the illusion of popularity and audience through likes, online availability is illusory. Worse! I discovered that being on Facebook had made me less reachable for a whole category of readers of my blog!

There is one very simple reason that pushes me to keep a Facebook Messenger account: it's where the handful of Belgian freedivers organise their outings. If I'm not there, I miss the outings, it's as simple as that. So I installed the Messenger Lite app, with the sole purpose of being able to go diving. While digging through the app's options, I discovered a well-hidden sub-section called "Filtered requests".

There, I found several dozen messages that had been sent to me since 2013. More than 5 years of messages I didn't know existed! Mostly reactions, not always positive, to my blog posts. Others were more factual. They came from conference organisers, from people I had met who wanted to stay in touch. All of these messages, without exception, would have had their place in my mailbox.

I didn't know they existed. At no point did Facebook notify me of these messages or give me the chance to reply, even though some date from a time when I was very active on the network and the app was installed on my phone.

To all these people, Facebook gave the illusion that they had contacted me. That I was reachable. To everyone who one day posted a comment under one of my automatically published posts, Facebook gave the impression of being in contact with me.

To me, a public figure, Facebook gave the illusion that I could be reached, that those who didn't use email could contact me.

We were all deceived.

It is time to lift the veil. Facebook does not offer a service, it offers the illusion of a service. An illusion that is perhaps what many are looking for. The illusion of having friends, of having a social life, recognition, a certain success. But if you are not looking for the illusion, then it is time to flee Facebook. It is not easy, because the illusion is strong. Just as Bible worshippers claim it is the ultimate truth "because it is written in the Bible", Facebook users feel heard and listened to because "Facebook tells me I have been seen".

That is, in fact, Facebook's one and only goal: to make us believe we are connected, whether it is true or not.

For every piece of content posted, Facebook's algorithm will try to find the few users most likely to comment, and encourage the others to click like without even reading what is going on, just because the photo is pretty or because one user spontaneously likes the posts of another. In the end, a conversation between 5 people punctuated by 30 likes will give the impression of nationwide impact. Celebrities are the exception: they will harvest tens of thousands of likes and messages because they are celebrities, whatever the platform.

Facebook gives us a small taste of celebrity and vainglory thanks to a few likes; Facebook gives us the impression of belonging to a tribe, of having social relationships.

It is undeniable that Facebook also has positive effects and enables exchanges that would not exist otherwise. But, to paraphrase Cal Newport in his book Digital Minimalism: isn't the price we pay too high for the benefits we get out of it?

I would add: do we really get any benefits? Or just the illusion of them?

Photo by Aranka Sinnema on Unsplash

I am @ploum, speaker and electronic writer. If you enjoyed this text, don't hesitate to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and recognition. Thank you!

This text is published under the CC-By BE licence.

April 28, 2019

Puppet CA/puppetmasterd cert renewal

While we're still converting our puppet-controlled infra to Ansible, we still have some nodes "controlled" by puppet, as converting some roles isn't something that can be done in just one or two days. Add to that the other items in your backlog that all have priority #1, and time flies, until you see this for your existing legacy puppet environment (assuming a fake FQDN here, but you get the idea):

Warning: Certificate 'Puppet CA: puppetmasterd.domain.com' will expire on 2019-05-06T12:12:56UTC
Warning: Certificate 'puppetmasterd.domain.com' will expire on 2019-05-06T12:12:56UTC

So, as long as your PKI setup for puppet is still valid, you can act in advance: re-sign/extend the CA and puppetmasterd certificates, distribute the newer CA cert to the agents, and then go forward with the other items in your backlog while still converting from puppet to Ansible (at least for us).

Puppetmasterd/CA

Before anything else (in case you don't back this up, but you should), let's take a backup on the Puppet CA (in our case it's a Foreman-driven puppetmasterd, so the foreman host is where all of this will happen, YMMV):

tar cvzf /root/puppet-ssl-backup.tar.gz /var/lib/puppet/ssl/

CA itself

We first need to regenerate the CSR for the CA cert and sign it again. Ideally, we first confirm that ca_key.pem and the existing ca_crt.pem "match" by comparing their modulus (the two hashes should be equal):

cd /var/lib/puppet/ssl/ca
( openssl rsa -noout -modulus -in ca_key.pem  2> /dev/null | openssl md5 ; openssl x509 -noout -modulus -in ca_crt.pem  2> /dev/null | openssl md5 ) 

(stdin)= cbc4d35f58b28ad7c4dca17bd4408403
(stdin)= cbc4d35f58b28ad7c4dca17bd4408403

As that's the case, we can now regenerate a CSR from that private key and the existing crt:

openssl x509 -x509toreq -in ca_crt.pem -signkey ca_key.pem -out ca_csr.pem
Getting request Private Key
Generating certificate request

Now that we have the CSR for the CA, we need to sign it again, but we have to add the CA extensions:

cat > extension.cnf << EOF
[CA_extensions]
basicConstraints = critical,CA:TRUE
nsComment = "Puppet Ruby/OpenSSL Internal Certificate"
keyUsage = critical,keyCertSign,cRLSign
subjectKeyIdentifier = hash
EOF

And now archive the old CA crt and sign the new, extended one:

cp ca_crt.pem ca_crt.pem.old
openssl x509 -req -days 3650 -in ca_csr.pem -signkey ca_key.pem -out ca_crt.pem -extfile extension.cnf -extensions CA_extensions
Signature ok
subject=/CN=Puppet CA: puppetmasterd.domain.com
Getting Private key

openssl x509 -in ca_crt.pem -noout -text|grep -A 3 Validity
 Validity
            Not Before: Apr 29 08:25:49 2019 GMT
            Not After : Apr 26 08:25:49 2029 GMT

Puppetmasterd server

We also have to regenerate the CSR from the existing cert (assuming the FQDN in our cert is also the currently set hostname):

cd /var/lib/puppet/ssl
openssl x509 -x509toreq -in certs/$(hostname).pem -signkey private_keys/$(hostname).pem -out certificate_requests/$(hostname)_csr.pem
Getting request Private Key
Generating certificate request

Now that we have the CSR, we can sign it with the new CA:

cp certs/$(hostname).pem certs/$(hostname).pem.old #Backing up
openssl x509 -req -days 3650 -in certificate_requests/$(hostname)_csr.pem -CA ca/ca_crt.pem \
  -CAkey ca/ca_key.pem -CAserial ca/serial -out certs/$(hostname).pem
Signature ok  

Validate that the puppetmasterd key and the new cert match (so crt and private key are OK):

( openssl rsa -noout -modulus -in private_keys/$(hostname).pem  2> /dev/null | openssl md5 ; openssl x509 -noout -modulus -in certs/$(hostname).pem 2> /dev/null | openssl md5 )

(stdin)= 0ab385eb2c6e9e65a4ed929a2dd0dbe5
(stdin)= 0ab385eb2c6e9e65a4ed929a2dd0dbe5

It all seems good, so let's restart puppetmasterd/httpd (foreman launches puppetmasterd for us):

systemctl restart puppet

Puppet agents

From this point on, puppet agents will no longer complain about the puppetmasterd cert, but they will still warn that the CA itself is about to expire:

Warning: Certificate 'Puppet CA: puppetmasterd.domain.com' will expire on 2019-05-06T12:12:56GMT

But as we now have the new ca_crt.pem on the puppetmasterd/foreman side, we can just distribute it to the clients (through puppet, ansible or whatever) and then everything will continue to work.

cd /var/lib/puppet/ssl/certs
mv ca.pem ca.pem.old

And now distribute the new ca_crt.pem as ca.pem here

Puppet snippet for this (in our puppet::agent class):

 file { '/var/lib/puppet/ssl/certs/ca.pem': 
   source => 'puppet:///puppet/ca_crt.pem', 
   owner => 'puppet', 
   group => 'puppet', 
   require => Package['puppet'],
 }

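If a node happens to be managed with Ansible instead of puppet, a roughly equivalent task could look like this (just a sketch: same destination path; the src path on the control host is a placeholder):

- name: Distribute the renewed Puppet CA certificate
  ansible.builtin.copy:
    src: files/ca_crt.pem      # placeholder location on the Ansible control host
    dest: /var/lib/puppet/ssl/certs/ca.pem
    owner: puppet
    group: puppet
    mode: '0644'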
The next time you run "puppet agent -t", or puppet contacts puppetmasterd, it will apply the new cert; on the following runs there will be no more warnings or issues:

Info: Computing checksum on file /var/lib/puppet/ssl/certs/ca.pem
Info: /Stage[main]/Puppet::Agent/File[/var/lib/puppet/ssl/certs/ca.pem]: Filebucketed /var/lib/puppet/ssl/certs/ca.pem to puppet with sum c63b1cc5a39489f5da7d272f00ec09fa
Notice: /Stage[main]/Puppet::Agent/File[/var/lib/puppet/ssl/certs/ca.pem]/content: content changed '{md5}c63b1cc5a39489f5da7d272f00ec09fa' to '{md5}e3d2e55edbe1ad45570eef3c9ade051f'

Hope it helps

I've loved this song since I was 15 years old, so after 25 years it definitely deserves a place in my favorite music list. When I watched this recording, I stopped breathing for a while. Beautifully devastating. Don't mix this song with alcohol.

April 25, 2019

After reading several of his books, I had the opportunity to meet the engineer-philosopher Luc de Brabandere. Over an orange juice, between a conversation about cycling and one about blockchain, we discussed his presence on the Écolo lists for the upcoming European elections and the pressure from those around him to get on social networks.

"People ask me how I hope to win votes if I'm not on social networks," he confided to me.

A problem I know well, having myself created a Facebook account in 2012 with the sole aim of being a candidate in the elections for the Pirate Party. If, a few years ago, not being on social networks was seen as being out of step with the times, are things any different today? Haven't Facebook accounts become the equivalent of posters plastered everywhere? In politics, there is a saying that posters don't win votes, but not having posters can lose them.

Perhaps for a professional politician whose only goal is to get elected at any price, social networks are as indispensable as shaking hands in cafés and at markets. But Luc clearly does not fall into that category. His position as 4th substitute on the European list makes his election mathematically improbable.

He makes no secret of it, stating that he is a candidate above all to reconcile ecology and economics. But wouldn't social networks be precisely a way to spread his ideas?

I think it's the opposite.

Luc's ideas are amply accessible through his books, his writings, his talks. Social networks grind down rigour and the subtlety of analysis and turn them into an ideological substitute, ready to share. On social networks, the thinker becomes a salesman and an advertiser. The numbers send you into a frenzy and force you into a whirl of likes, into the illusory impression of a carefully orchestrated fake popularity.

Whether we like it or not, the tool transforms us. Social networks are designed to prevent us from thinking, from reflecting. They trade our soul and our critical mind for a few numbers showing growth in followers, clicks or likes. They keep calling us back, invade us and mould us to a consumerist ideal. They serve as outlets for our anger and our ideas, the better to let them fall into oblivion once the flash in the pan of the buzz has died out.

In short, social networks are the politician's dream tool. And the philosopher's sworn enemy.

An election campaign will turn anyone into a true politician thirsty for power and popularity. And what are social networks if not a permanent campaign for each of us?

Not being on social networks is, for the voter that I am, a strong electoral statement. Taking the time to lay down one's ideas in books, a blog, videos or any medium outside social networks should be the foundation of the job of anyone thinking about public affairs. I often cite the example of the Liège municipal councillor François Schreuer who, through his blog and his website, transcends the politico-political debate to try to grasp the very essence of the issues, bringing true citizen transparency to the abstruse debates of a municipal council.

Then there remains the question of whether Luc's presence on the Écolo lists is not just a vote-catcher, and whether his ideas will have any influence once the elections are over. There is probably a bit of that. Before voting for him, you have to ask yourself whether you want to vote for Philippe Lamberts, the head of the list.

I have many disagreements with Écolo at the local and regional level, but I note that it is the party that best represents me in Europe on the subjects dear to me: privacy, copyright, control of the Internet, ecological transition, basic income, questioning the place of work.

If I could vote for Julia Reda, of the German Pirate Party, the question would not arise, because I admire her work. But the European electoral system being what it is, I think my vote will go to Luc de Brabandere.

Among other reasons, because he is not on social networks.

Photo by Elijah O’Donnell on Unsplash

I am @ploum, speaker and electronic writer. If you enjoyed this text, don't hesitate to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and recognition. Thank you!

This text is published under the CC-By BE licence.

April 22, 2019

The post Retrieving the Genesis block in Bitcoin with bitcoin-cli appeared first on ma.ttias.be.

If you run a Bitcoin full node, you have access to every transaction and block that was ever created on the network. This also allows you to look at the content of, say, the genesis block. The first block ever created, over 10y ago.

Retrieving the genesis block

First, you can ask for the block hash by providing it the block height. As with everything in computer science, arrays and block counts start at 0.

You use command getblockhash to find the correct hash.

$ bitcoin-cli getblockhash 0
000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f

Now you have the block hash that matches with the first ever block.

You can now request the full content of that block using the getblock command.

$ bitcoin-cli getblock 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f
{
  "hash": "000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f",
  "confirmations": 572755,
  "strippedsize": 285,
  "size": 285,
  "weight": 1140,
  "height": 0,
  "version": 1,
  "versionHex": "00000001",
  "merkleroot": "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b",
  "tx": [
    "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b"
  ],
  "time": 1231006505,
  "mediantime": 1231006505,
  "nonce": 2083236893,
  "bits": "1d00ffff",
  "difficulty": 1,
  "chainwork": "0000000000000000000000000000000000000000000000000000000100010001",
  "nTx": 1,
  "nextblockhash": "00000000839a8e6886ab5951d76f411475428afc90947ee320161bbf18eb6048"
}

This is the only block that doesn't have a previousblockhash; all other blocks have one, as that is what forms the chain itself. The first block simply can't have a previous one.
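You can check this yourself by looking at block 1, whose previousblockhash should point back to the genesis block (a quick sketch):

$ bitcoin-cli getblock $(bitcoin-cli getblockhash 1) | grep previousblockhash
  "previousblockhash": "000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f",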

Retrieving the first and only transaction from the genesis block

In this block, there is only one transaction included: the one with the hash 4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b. This is a coinbase transaction: it's the block reward for the miner who found this block (50 BTC).

[...]
  "tx": [
    "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b"
  ],
[...]

Let's have a look at what's in there, shall we?

$ bitcoin-cli getrawtransaction 4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b
The genesis block coinbase is not considered an ordinary transaction and cannot be retrieved

Ah, sucks! This is a special kind of transaction, but we'll see a way to find the details of it later on.

Getting more details from the genesis block

We retrieved the block details using the getblock command, but there's actually more details in that block than initially shown. You can get more verbose output by adding the 2 at the end of the command, indicating you want a json object with transaction data.

$ bitcoin-cli getblock 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f 2
{
  "hash": "000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f",
  "confirmations": 572758,
  "strippedsize": 285,
  "size": 285,
  "weight": 1140,
  "height": 0,
  "version": 1,
  "versionHex": "00000001",
  "merkleroot": "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b",
  "tx": [
    {
      "txid": "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b",
      "hash": "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b",
      "version": 1,
      "size": 204,
      "vsize": 204,
      "weight": 816,
      "locktime": 0,
      "vin": [
        {
          "coinbase": "04ffff001d0104455468652054696d65732030332f4a616e2f32303039204368616e63656c6c6f72206f6e206272696e6b206f66207365636f6e64206261696c6f757420666f722062616e6b73",
          "sequence": 4294967295
        }
      ],
      "vout": [
        {
          "value": 50.00000000,
          "n": 0,
          "scriptPubKey": {
            "asm": "04678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5f OP_CHECKSIG",
            "hex": "4104678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5fac",
            "reqSigs": 1,
            "type": "pubkey",
            "addresses": [
              "1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"
            ]
          }
        }
      ],
      "hex": "01000000010000000000000000000000000000000000000000000000000000000000000000ffffffff4d04ffff001d0104455468652054696d65732030332f4a616e2f32303039204368616e63656c6c6f72206f6e206272696e6b206f66207365636f6e64206261696c6f757420666f722062616e6b73ffffffff0100f2052a01000000434104678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5fac00000000"
    }
  ],
  "time": 1231006505,
  "mediantime": 1231006505,
  "nonce": 2083236893,
  "bits": "1d00ffff",
  "difficulty": 1,
  "chainwork": "0000000000000000000000000000000000000000000000000000000100010001",
  "nTx": 1,
  "nextblockhash": "00000000839a8e6886ab5951d76f411475428afc90947ee320161bbf18eb6048"
}

Aha, that's more info!

Now, you'll notice there is a section with details of the coinbase transaction. It shows the 50BTC block reward, and even though we can't retrieve it with getrawtransaction, the data is still present in the genesis block.

      "vout": [
        {
          "value": 50.00000000,
          "n": 0,
          "scriptPubKey": {
            "asm": "04678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5f OP_CHECKSIG",
            "hex": "4104678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5fac",
            "reqSigs": 1,
            "type": "pubkey",
            "addresses": [
              "1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"
            ]
          }
        }
      ],

Satoshi's Embedded Secret Message

I've always heard that Satoshi encoded a secret message in the first genesis block. Let's find it?

In our extensive output, there's a hex line in the block.

"hex": "01000000010000000000000000000000000000000000000000000000000000000000000000ffffffff4d04ffff001d0104455468652054696d65732030332f4a616e2f32303039204368616e63656c6c6f72206f6e206272696e6b206f66207365636f6e64206261696c6f757420666f722062616e6b73ffffffff0100f2052a01000000434104678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5fac00000000"

If we transform this hexadecimal format to a more readable ASCII form, we get this:

$ echo "01000000010000000000000000000000000000000000000000000000000000000000000000ffffffff
4d04ffff001d0104455468652054696d65732030332f4a616e2f32303039204368616e63656c6c6f72206f6e20
6272696e6b206f66207365636f6e64206261696c6f757420666f722062616e6b73ffffffff0100f2052a010000
00434104678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f3
5504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5fac00000000" | xxd -r -p

����M��EThe Times 03/Jan/2009 Chancellor on brink of second bailout for banks�����*CAg���UH'g�q0�\֨(�9	�yb��a޶I�?L�8��U���\8M�
        �W�Lp+k�_�

This confirms there is indeed a message in the form of "The Times 03/Jan/2009 Chancellor on brink of second bailout for banks", referring to a newspaper headline at the time of the genesis block.

The post Retrieving the Genesis block in Bitcoin with bitcoin-cli appeared first on ma.ttias.be.

The post Requesting certificates with Let’s Encrypt’s official certbot client appeared first on ma.ttias.be.

There are plenty of guides on this already, but I recently used the Let's Encrypt certbot client manually again (instead of through already automated systems) and figured I'd write up the commands for myself. Just in case.

$ git clone https://github.com/letsencrypt/letsencrypt.git /opt/letsencrypt
$ cd /opt/letsencrypt

Now that the client is available on the system, you can request new certificates. If the DNS is already pointing to this server, it's super easy with the webroot validation.

$ /opt/letsencrypt/letsencrypt-auto certonly --expand \
  --email you@domain.tld --agree-tos \
  --webroot -w /var/www/vhosts/yoursite.tld/htdocs/public/ \
  -d yoursite.tld \
  -d www.yoursite.tld

You can add multiple domains with the -d flag and point it to the right document root using the -w flag.

After that, you'll find your certificates in

$ ls -alh /etc/letsencrypt/live/yoursite.tld/*
/etc/letsencrypt/live/yoursite.tld/cert.pem -> ../../archive/yoursite.tld/cert1.pem
/etc/letsencrypt/live/yoursite.tld/chain.pem -> ../../archive/yoursite.tld/chain1.pem
/etc/letsencrypt/live/yoursite.tld/fullchain.pem -> ../../archive/yoursite.tld/fullchain1.pem
/etc/letsencrypt/live/yoursite.tld/privkey.pem -> ../../archive/yoursite.tld/privkey1.pem

You can now use these certs in whichever webserver or application you like.
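For instance, in an nginx vhost the new certificates would be referenced roughly like this (a sketch; the server name and document root reuse the placeholders from above):

server {
    listen 443 ssl;
    server_name yoursite.tld www.yoursite.tld;

    # Certificate chain and private key issued by certbot above
    ssl_certificate     /etc/letsencrypt/live/yoursite.tld/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yoursite.tld/privkey.pem;

    root /var/www/vhosts/yoursite.tld/htdocs/public/;
}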

The post Requesting certificates with Let’s Encrypt’s official certbot client appeared first on ma.ttias.be.

Autoptimize 2.5 has been released earlier today (April 22nd).

Main focus of this release is more love for image optimization, now on a separate tab and including lazyload and WebP support.

Lots of other bugfixes and smaller improvements too, of course, e.g. an option to disable the minification of excluded CSS/JS (which 2.4 did by default).

No Easter eggs in there though :-)

I was using docker on an Odroid U3, but my Odroid stopped working. I switched to another system that is i386 only.

You’ll find my journey to build docker images for i386 below.

Reasons to build your own docker images

If you want to use Docker, you can start with the Docker images on the Docker registry, but there are several reasons to build your own base images.

  • Security

The first reason is security: docker images are not signed by default.

Anyone can upload docker images to the public docker hub with bugs or malicious code.

There are “official” docker images available at https://docs.docker.com/docker-hub/official_images/. When you execute a docker search, the official docker images are flagged in the official column and are also signed by Docker. To only allow signed docker images, you need to set the DOCKER_CONTENT_TRUST=1 environment variable (this should be the default IMHO); see the short sketch after this list.

There is one distinction: the “official” docker images are signed by the “Repo admin” of the Docker Hub, not by the official GNU/Linux distribution project. If you want to trust the official project instead of the Docker repo admin, you can resolve this by building your own images.

  • Support other architectures

Docker images are generally built for the AMD64 architecture. If you want to use other architectures - ARM, POWER, SPARC or even i386 - you'll find some images on the Docker Hub, but these are usually not official docker images.

  • Control

When you build your own images, you have more control over what does or doesn't go into the image.
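A quick sketch of the content-trust setting mentioned under "Security" above (the image name is just an example):

# Refuse unsigned images for this shell session
export DOCKER_CONTENT_TRUST=1
# This pull only succeeds if the image is signed (official images are)
docker pull debian:stretch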

Building your own docker base images

There are several ways to build your own docker images.

The Moby project is Docker's development project - a bit like what Fedora is to Red Hat. The Moby project has a few scripts that help you create docker base images, and it is also a good place to start if you want to review how to build your own images.

GNU/Linux distributions

I build the images on the same GNU/Linux distribution (e.g. the Debian images are built on a Debian system) to get the correct gpg keys.

Debian GNU/Linux & Co

Debian GNU/Linux makes it very easy to build your own Docker base images: only debootstrap is required. I’ll use the moby script to build the Debian base image, and plain debootstrap to build an i386 Ubuntu 18.04 docker image.

Ubuntu doesn’t support i386 officially, but it still includes the i386 userland, so it’s possible to build i386 Docker images.

Clone moby

staf@whale:~/github$ git clone https://github.com/moby/moby
Cloning into 'moby'...
remote: Enumerating objects: 265639, done.
remote: Total 265639 (delta 0), reused 0 (delta 0), pack-reused 265640
Receiving objects: 99% (265640/265640), 137.75 MiB | 3.05 MiB/s, done.
Resolving deltas: 99% (179885/179885), done.
Checking out files: 99% (5508/5508), done.
staf@whale:~/github$ 

Make sure that debootstrap is installed

staf@whale:~/github/moby/contrib$ sudo apt install debootstrap
[sudo] password for staf: 
Reading package lists... Done
Building dependency tree       
Reading state information... Done
debootstrap is already the newest version (1.0.114).
debootstrap set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
staf@whale:~/github/moby/contrib$ 

The Moby way

Go to the contrib directory

staf@whale:~/github$ cd moby/contrib/
staf@whale:~/github/moby/contrib$ 

mkimage.sh

mkimage.sh --help gives you more details on how to use the script.

staf@whale:~/github/moby/contrib$ ./mkimage.sh --help
usage: mkimage.sh [-d dir] [-t tag] [--compression algo| --no-compression] script [script-args]
   ie: mkimage.sh -t someuser/debian debootstrap --variant=minbase jessie
       mkimage.sh -t someuser/ubuntu debootstrap --include=ubuntu-minimal --components=main,universe trusty
       mkimage.sh -t someuser/busybox busybox-static
       mkimage.sh -t someuser/centos:5 rinse --distribution centos-5
       mkimage.sh -t someuser/mageia:4 mageia-urpmi --version=4
       mkimage.sh -t someuser/mageia:4 mageia-urpmi --version=4 --mirror=http://somemirror/
staf@whale:~/github/moby/contrib$ 

Build the image

staf@whale:~/github/moby/contrib$ sudo ./mkimage.sh -t stafwag/debian_i386:stretch debootstrap --variant=minbase stretch
[sudo] password for staf: 
+ mkdir -p /var/tmp/docker-mkimage.dY9y9apEoK/rootfs
+ debootstrap --variant=minbase stretch /var/tmp/docker-mkimage.dY9y9apEoK/rootfs
I: Target architecture can be executed
I: Retrieving InRelease 
I: Retrieving Release 
I: Retrieving Release.gpg 
I: Checking Release signature
I: Valid Release signature (key id 067E3C456BAE240ACEE88F6FEF0F382A1A7B6500)
I: Retrieving Packages 
<snip>

Test

Verify that the image is imported.

staf@whale:~/github/moby/contrib$ docker images
REPOSITORY            TAG                 IMAGE ID            CREATED              SIZE
stafwag/debian_i386   stretch             cb96d1663079        About a minute ago   97.6MB
staf@whale:~/github/moby/contrib$ 

Run a test docker instance

staf@whale:~/github/moby/contrib$ docker run -t -i --rm stafwag/debian_i386:stretch /bin/sh
# cat /etc/debian_version 
9.8
# 

The debootstrap way

Make sure that debootstrap is installed

staf@ubuntu184:~/github/moby$ sudo apt install debootstrap
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Suggested packages:
  ubuntu-archive-keyring
The following NEW packages will be installed:
  debootstrap
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 35,7 kB of archives.
After this operation, 270 kB of additional disk space will be used.
Get:1 http://be.archive.ubuntu.com/ubuntu bionic-updates/main amd64 debootstrap all 1.0.95ubuntu0.3 [35,7 kB]
Fetched 35,7 kB in 0s (85,9 kB/s)    
Selecting previously unselected package debootstrap.
(Reading database ... 163561 files and directories currently installed.)
Preparing to unpack .../debootstrap_1.0.95ubuntu0.3_all.deb ...
Unpacking debootstrap (1.0.95ubuntu0.3) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Setting up debootstrap (1.0.95ubuntu0.3) ...
staf@ubuntu184:~/github/moby$ 

Bootstrap

Create a directory that will hold the chrooted operating system.

staf@ubuntu184:~$ mkdir -p dockerbuild/ubuntu
staf@ubuntu184:~/dockerbuild/ubuntu$ 

Bootstrap.

staf@ubuntu184:~/dockerbuild/ubuntu$ sudo debootstrap --verbose --include=iputils-ping --arch i386 bionic ./chroot-bionic http://ftp.ubuntu.com/ubuntu/
I: Retrieving InRelease 
I: Checking Release signature
I: Valid Release signature (key id 790BC7277767219C42C86F933B4FE6ACC0B21F32)
I: Validating Packages 
I: Resolving dependencies of required packages...
I: Resolving dependencies of base packages...
I: Checking component main on http://ftp.ubuntu.com/ubuntu...
I: Retrieving adduser 3.116ubuntu1
I: Validating adduser 3.116ubuntu1
I: Retrieving apt 1.6.1
I: Validating apt 1.6.1
I: Retrieving apt-utils 1.6.1
I: Validating apt-utils 1.6.1
I: Retrieving base-files 10.1ubuntu2
<snip>
I: Configuring python3-yaml...
I: Configuring python3-dbus...
I: Configuring apt-utils...
I: Configuring netplan.io...
I: Configuring nplan...
I: Configuring networkd-dispatcher...
I: Configuring kbd...
I: Configuring console-setup-linux...
I: Configuring console-setup...
I: Configuring ubuntu-minimal...
I: Configuring libc-bin...
I: Configuring systemd...
I: Configuring ca-certificates...
I: Configuring initramfs-tools...
I: Base system installed successfully.

Customize

You can customize your installation before it goes into the image. One thing you should do is include the latest updates in the image.

Update /etc/resolv.conf

staf@ubuntu184:~/dockerbuild/ubuntu$ sudo vi chroot-bionic/etc/resolv.conf
nameserver 9.9.9.9

Update /etc/apt/sources.list

staf@ubuntu184:~/dockerbuild/ubuntu$ sudo vi chroot-bionic/etc/apt/sources.list

And include the updates

deb http://ftp.ubuntu.com/ubuntu bionic main
deb http://security.ubuntu.com/ubuntu bionic-security main
deb http://ftp.ubuntu.com/ubuntu/ bionic-updates main

Chroot into your installation and run apt update

staf@ubuntu184:~/dockerbuild/ubuntu$ sudo chroot $PWD/chroot-bionic
root@ubuntu184:/# apt update
Hit:1 http://ftp.ubuntu.com/ubuntu bionic InRelease
Get:2 http://ftp.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]   
Get:3 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]       
Get:4 http://ftp.ubuntu.com/ubuntu bionic/main Translation-en [516 kB]                  
Get:5 http://ftp.ubuntu.com/ubuntu bionic-updates/main i386 Packages [492 kB]           
Get:6 http://ftp.ubuntu.com/ubuntu bionic-updates/main Translation-en [214 kB]          
Get:7 http://security.ubuntu.com/ubuntu bionic-security/main i386 Packages [241 kB]     
Get:8 http://security.ubuntu.com/ubuntu bionic-security/main Translation-en [115 kB]
Fetched 1755 kB in 1s (1589 kB/s)      
Reading package lists... Done
Building dependency tree... Done

and apt upgrade

root@ubuntu184:/# apt upgrade
Reading package lists... Done
Building dependency tree... Done
Calculating upgrade... Done
The following NEW packages will be installed:
  python3-netifaces
The following packages will be upgraded:
  apt apt-utils base-files bsdutils busybox-initramfs console-setup console-setup-linux
  distro-info-data dpkg e2fsprogs fdisk file gcc-8-base gpgv initramfs-tools
  initramfs-tools-bin initramfs-tools-core keyboard-configuration kmod libapparmor1
  libapt-inst2.0 libapt-pkg5.0 libblkid1 libcom-err2 libcryptsetup12 libdns-export1100
  libext2fs2 libfdisk1 libgcc1 libgcrypt20 libglib2.0-0 libglib2.0-data libidn11
  libisc-export169 libkmod2 libmagic-mgc libmagic1 libmount1 libncurses5 libncursesw5
  libnss-systemd libpam-modules libpam-modules-bin libpam-runtime libpam-systemd
  libpam0g libprocps6 libpython3-stdlib libpython3.6-minimal libpython3.6-stdlib
  libseccomp2 libsmartcols1 libss2 libssl1.1 libstdc++6 libsystemd0 libtinfo5 libudev1
  libunistring2 libuuid1 libxml2 mount ncurses-base ncurses-bin netcat-openbsd
  netplan.io networkd-dispatcher nplan openssl perl-base procps python3 python3-gi
  python3-minimal python3.6 python3.6-minimal systemd systemd-sysv tar tzdata
  ubuntu-keyring ubuntu-minimal udev util-linux
84 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 26.6 MB of archives.
After this operation, 450 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://security.ubuntu.com/ubuntu bionic-security/main i386 netplan.io i386 0.40.1~18.04.4 [64.6 kB]
Get:2 http://ftp.ubuntu.com/ubuntu bionic-updates/main i386 base-files i386 10.1ubuntu2.4 [60.3 kB]
Get:3 http://security.ubuntu.com/ubuntu bionic-security/main i386 libapparmor1 i386 2.12-4ubuntu5.1 [32.7 kB]
Get:4 http://security.ubuntu.com/ubuntu bionic-security/main i386 libgcrypt20 i386 1.8.1-
<snip>
running python rtupdate hooks for python3.6...
running python post-rtupdate hooks for python3.6...
Setting up initramfs-tools-core (0.130ubuntu3.7) ...
Setting up initramfs-tools (0.130ubuntu3.7) ...
update-initramfs: deferring update (trigger activated)
Setting up python3-gi (3.26.1-2ubuntu1) ...
Setting up file (1:5.32-2ubuntu0.2) ...
Setting up python3-netifaces (0.10.4-0.1build4) ...
Processing triggers for systemd (237-3ubuntu10.20) ...
Setting up networkd-dispatcher (1.7-0ubuntu3.3) ...
Installing new version of config file /etc/default/networkd-dispatcher ...
Setting up netplan.io (0.40.1~18.04.4) ...
Setting up nplan (0.40.1~18.04.4) ...
Setting up ubuntu-minimal (1.417.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for initramfs-tools (0.130ubuntu3.7) ...
root@ubuntu184:/# 
staf@ubuntu184:~/dockerbuild/ubuntu$ 

Import

Go to your chroot installation.

staf@ubuntu184:~/dockerbuild/ubuntu$ cd chroot-bionic/
staf@ubuntu184:~/dockerbuild/ubuntu/chroot-bionic$ 

and import the image.

staf@ubuntu184:~/dockerbuild/ubuntu/chroot-bionic$ sudo tar cpf - . | docker import - stafwag/ubuntu_i386:bionic
sha256:83560ef3c8d48b737983ab8ffa3ec3836b1239664f8998038bfe1b06772bb3c2
staf@ubuntu184:~/dockerbuild/ubuntu/chroot-bionic$ 

Test

staf@ubuntu184:~/dockerbuild/ubuntu/chroot-bionic$ docker images
REPOSITORY            TAG                 IMAGE ID            CREATED              SIZE
stafwag/ubuntu_i386   bionic              83560ef3c8d4        About a minute ago   315MB
staf@ubuntu184:~/dockerbuild/ubuntu/chroot-bionic$ 
staf@ubuntu184:~/dockerbuild/ubuntu/chroot-bionic$ docker run -it --rm stafwag/ubuntu_i386:bionic /bin/bash
root@665cec6ee24f:/# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.2 LTS
Release:        18.04
Codename:       bionic
root@665cec6ee24f:/# 
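From here, the imported image can be used as a base image like any other. A small sketch (the Dockerfile contents and the resulting tag are just examples, not part of the original build):

$ cat > Dockerfile <<'EOF'
FROM stafwag/ubuntu_i386:bionic
RUN apt-get update && apt-get install -y --no-install-recommends curl && rm -rf /var/lib/apt/lists/*
CMD ["/bin/bash"]
EOF
$ docker build -t stafwag/ubuntu_i386_curl:bionic .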

Have fun!

Links

April 21, 2019

Several years ago, I created a list of ESXi versions with matching VM BIOS identifiers. The list is now complete up to vSphere 6.7 Update 2.
Your Linux runs on a VMware VM, but on which ESXi version? You can see for yourself: run "dmidecode" and look at lines 10, 11 and 12.
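For example (a minimal sketch; the exact line numbers can shift between dmidecode versions, so you can also just look at the whole BIOS Information block):

$ sudo dmidecode | sed -n '10,12p'
$ sudo dmidecode -t bios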
ESX 2.5 - BIOS Release Date: 04/21/2004 - Address 0xE8480 - Size 97152 bytes
ESX 3.0 - BIOS Release Date: 04/17/2006 - Address 0xE7C70 - Size 99216 bytes
ESX 3.5 - BIOS Release Date: 01/30/2008 - Address 0xE7910 - Size 100080 bytes
ESX 4 - BIOS Release Date: 08/15/2008 - Address 0xEA6C0 - Size 88384 bytes
ESX 4U1 - BIOS Release Date: 09/22/2009 - Address 0xEA550 - Size 88752 bytes
ESX 4.1 - BIOS Release Date: 10/13/2009 - Address 0xEA2E0 - Size 89376 bytes
ESXi 5 - BIOS Release Date: 01/07/2011 - Address 0xE72C0 - Size 101696 bytes
ESXi 5.1 - BIOS Release Date: 06/22/2012 - Address: 0xEA0C0 - Size: 89920 bytes
ESXi 5.5 - BIOS Release Date: 07/30/2013 - Address: 0xEA050 - Size: 90032 bytes
ESXi 6 - BIOS Release Date: 09/30/2014 - Address: 0xE9A40 - Size: 91584 bytes
ESXi 6.5 - BIOS Release Date: 04/05/2016 - Address: 0xEA580 - Size: 88704 bytes 
ESXi 6.7 - BIOS Release Date: 07/03/2018 - Address: 0xEA520 - Size: 88800 bytes
ESXi 6.7 U2 - BIOS Release Date 12/12/2018 - Address: 0xEA490 - Size: 88944 bytes
NB These DMI properties are set at boot time. Even if your VM gets live-migrated to a host running a different vSphere version, your VM will keep the values it got from the host it booted on. What you see is the vSphere version of the host your VM booted on. It is the VM power-on that matters, so a guest OS reboot will not regenerate the DMI properties. A guest OS shut down on the other hand will also power off the VM, and the next power-on will regenerate the DMI properties.

April 20, 2019

We ditched the crowded streets of Seattle for a short vacation in Tuscany's beautiful countryside. After the cold winter months, Tuscany's rolling hills are coming back to life and showing their new colors.

Beautiful Tuscany

April 18, 2019

I published the following diary on isc.sans.edu: “Malware Sample Delivered Through UDF Image“:

I found an interesting phishing email which was delivered with a malicious attachment: a UDF image (.img). UDF means “Universal Disk Format” and, as Wikipedia says, is an open vendor-neutral file system for computer data storage. It has supplanted the well-known ISO 9660 format (used for burning CDs & DVDs), which was also used in previous campaigns to deliver malicious files… [Read more]
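If you want a quick, safe look at such an attachment on a Linux analysis box, a rough sketch (the filename is hypothetical) could be:

$ file suspicious.img                        # should identify a UDF/ISO 9660 filesystem
$ sudo mount -o loop,ro suspicious.img /mnt  # mount it read-only over a loop device
$ ls -la /mnt                                # list the payload without executing anything
$ sudo umount /mnt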

[The post [SANS ISC] Malware Sample Delivered Through UDF Image has been first published on /dev/random]

April 16, 2019

In a recent VMware project, an existing environment of vSphere ESXi hosts had to be split off to a new instance of vCenter. These hosts were members of a distributed virtual switch, an object that saves its configuration in the vCenter database. This information would be lost after the move to the new vCenter, and the hosts would be left with "orphaned" distributed vswitch configurations.

Thanks to the export/import function available in vSphere 5.5 and 6.x, we can move the full distributed vswitch configuration to the new vCenter:

  • In the old vCenter, right-click the switch object, click "Export configuration" and choose the default "Distributed switch and all port groups"
  • Add the hosts to the new vCenter
  • In the new vCenter, right-click the datacenter object, click "Import distributed switch" in the "Distributed switch" sub-menu.
  • Select your saved configuration file, and tick the "Preserve original distributed switch and port group identifiers" box (which is not default!)
What used to be orphaned configurations on the hosts are now valid members of the distributed switch you just imported!
In vSphere 6, if the vi-admin account gets locked because of too many failed logins and you don't have the root password of the appliance, you can reset the account(s) using these steps:

  1. reboot the vMA
  2. from GRUB, "e"dit the entry
  3. "a"ppend init=/bin/bash
  4. "b"oot
  5. # pam_tally2 --user=vi-admin --reset
  6. # passwd vi-admin # Optional. Only if you want to change the password for vi-admin.
  7. # exit
  8. reset the vMA
  9. log in with vi-admin
These steps can be repeated for root or any other account that gets locked out.

If you do have root or vi-admin access, "sudo pam_tally2 --user=mylockeduser --reset" would do it, no reboot required.
Most VMware appliances (vCenter Appliance, VMware Support Appliance, vRealize Orchestrator) have the so-called VAMI: the VMware Appliance Management Interface, generally served via https on port 5480. VAMI offers a variety of functions, including "check updates" and "install updates". Some appliances offer to check/install updates from a connected CD iso, but the default is always to check online. How does that work?
VMware uses a dedicated website to serve the updates: vapp-updates.vmware.com. Each appliance is configured with a repository URL: https://vapp-updates.vmware.com/vai-catalog/valm/vmw/PRODUCT-ID/VERSION-ID . The PRODUCT-ID is a hexadecimal code specific for the product. vRealize Orchestrator uses 00642c69-abe2-4b0c-a9e3-77a6e54bffd9, VMware Support Appliance uses 92f44311-2508-49c0-b41d-e5383282b153, vCenter Server Appliance uses 647ee3fc-e6c6-4b06-9dc2-f295d12d135c. The VERSION-ID contains the current appliance version and appends ".latest": 6.0.0.20000.latest, 6.0.4.0.latest, 6.0.0.0.latest.
The appliance will check for updates by retrieving /manifest/manifest-latest.xml under the repository URL. This xml contains the latest available version in fullVersion and version (fullVersion includes the build number), pre- and post-install scripts, the EULA, and a list of updated rpm packages. Each package entry has a relative location that can be appended to the repository URL and downloaded. The update procedure downloads the manifest and rpms, verifies checksums on the downloaded rpms, executes the preInstallScript, runs rpm -U on the downloaded rpm packages, executes the postInstallScript, displays the exit code and prompts for reboot.
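As an illustration (assembled from the URL pattern above with the vCenter Server Appliance product ID; not an official VMware example), you can fetch such a manifest yourself:

curl -O https://vapp-updates.vmware.com/vai-catalog/valm/vmw/647ee3fc-e6c6-4b06-9dc2-f295d12d135c/6.0.0.0.latest/manifest/manifest-latest.xml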
With this information, you can set up your own local repository (for cases where internet access is impossible from the virtual appliances), or you can even execute the procedure manually. Be aware that a manual update would be unsupported. Using a different repository is supported by a subset of VMware appliances (e.g. VCSA, VRO) but not all (VMware Support Appliance).
I did not update my older post when vSphere 6.7 was released. The list is now complete up to vSphere 6.7. Your Linux runs on a VMware VM, but on which ESXi version? You can see for yourself: run "dmidecode" and look at lines 10, 11 and 12.
ESX 2.5 - BIOS Release Date: 04/21/2004 - Address 0xE8480 - Size 97152 bytes
ESX 3.0 - BIOS Release Date: 04/17/2006 - Address 0xE7C70 - Size 99216 bytes
ESX 3.5 - BIOS Release Date: 01/30/2008 - Address 0xE7910 - Size 100080 bytes
ESX 4 - BIOS Release Date: 08/15/2008 - Address 0xEA6C0 - Size 88384 bytes
ESX 4U1 - BIOS Release Date: 09/22/2009 - Address 0xEA550 - Size 88752 bytes
ESX 4.1 - BIOS Release Date: 10/13/2009 - Address 0xEA2E0 - Size 89376 bytes
ESXi 5 - BIOS Release Date: 01/07/2011 - Address 0xE72C0 - Size 101696 bytes
ESXi 5.1 - BIOS Release Date: 06/22/2012 - Address: 0xEA0C0 - Size: 89920 bytes
ESXi 5.5 - BIOS Release Date: 07/30/2013 - Address: 0xEA050 - Size: 90032 bytes
ESXi 6 - BIOS Release Date: 09/30/2014 - Address: 0xE9A40 - Size: 91584 bytes
ESXi 6.5 - BIOS Release Date: 04/05/2016 - Address: 0xEA580 - Size: 88704 bytes 
ESXi 6.7 - BIOS Release Date: 07/03/2018 - Address: 0xEA520 - Size: 88800 bytes
NB These DMI properties are set at boot time. Even if your VM gets live-migrated to a host running a different vSphere version, your VM will keep the values it got from the host it booted on. What you see is the vSphere version of the host your VM booted on. It is the VM power-on that matters, so a guest OS reboot will not regenerate the DMI properties. A guest OS shut down on the other hand will also power off the VM, and the next power-on will regenerate the DMI properties.
Updating the VCSA is easy when it has internet access or if you can mount the update iso. On a private network, VMware assumes you have a webserver that can serve up the updaterepo files. In this article, we'll look at how to proceed when VCSA is on a private network where internet access is blocked, and there's no webserver available. The VCSA and PSC contain their own webserver that can be used for an HTTP based update. This procedure was tested on PSC/VCSA 6.0.

Follow these steps:


  • First, download the update repo zip (e.g. for 6.0 U3A, the filename is VMware-vCenter-Server-Appliance-6.0.0.30100-5202501-updaterepo.zip ) 
  • Transfer the updaterepo zip to a PSC or VCSA that will be used as the server. You can use Putty's pscp.exe on Windows or scp on Mac/Linux, but you'd have to run "chsh -s /bin/bash root" in the CLI shell before using pscp.exe/scp if your PSC/VCSA is set up with the appliancesh. 
    • chsh -s /bin/bash root
    • "c:\program files (x86)\putty\pscp.exe" VMware*updaterepo.zip root@psc-name-or-address:/tmp 
  • Change your PSC/VCSA root access back to the appliancesh if you changed it earlier: 
    • chsh -s /bin/appliancesh root
  • Make a directory for the repository files and unpack the updaterepo files there:
    • mkdir /srv/www/htdocs/6u3
    • chmod go+rx /srv/www/htdocs/6u3
    • cd /srv/www/htdocs/6u3
    • unzip /tmp/VMware-vCenter*updaterepo.zip
    • rm /tmp/VMware-vCenter*updaterepo.zip
  • Create a redirect using the HTTP rhttpproxy listener and restart it
    • echo "/6u3 local 7000 allow allow" > /etc/vmware-rhttpproxy/endpoints.conf.d/temp-update.conf 
    • /etc/init.d/vmware-rhttpproxy restart 
  • Create a /tmp/nginx.conf (I copied /etc/nginx/nginx.conf, changed "listen 80" to "listen 7000" and changed "mime.types" to "/etc/nginx/mime.types")
  • Start nginx
    • nginx -c /tmp/nginx.conf
  • Start the update via the VAMI. Change the repository URL in the settings to http://psc-name-or-address/6u3/, then use "Check URL". (You can also verify the repository from the command line first; see the quick check after this list.)
  • Afterwards, clean up: 
    • killall nginx
    • cd /srv/www/htdocs; rm -rf 6u3
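As mentioned in the "Check URL" step above, before starting the update you can quickly verify that the PSC/VCSA is serving the unpacked repository (a hedged check that simply follows the URL layout described earlier; adjust hostname and path to your setup):

curl http://psc-name-or-address/6u3/manifest/manifest-latest.xml | head

If the zip was unpacked correctly and the rhttpproxy/nginx redirect works, this should return the beginning of the update manifest XML.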


P.S. I personally tested this using a PSC as webserver to update both that PSC, and also a VCSA appliance.
P.P.S. VMware released an update for VCSA 6.0 and 6.5 on the day I wrote this. For 6.0, the latest version is U3B at the time of writing, while I updated to U3A.
VMware's solution to a lost or forgotten root password for ESXi is simple: go to https://kb.vmware.com/s/article/1317898?lang=en_US and you'll find that "Reinstalling the ESXi host is the only supported way to reset a password on ESXi".

If your host is still connected to vCenter, you may be able to use Host Profiles to reset the root password, or alternatively you can join ESXi in Active Directory via vCenter, and log in with a user in the "ESX Admins" AD group.

If your host is no longer connected to vCenter, those options are closed. Can you avoid reinstallation? Fortunately, you can. You will need to reset and reboot your ESXi though. If you're ready for an unsupported deep dive into the bowels of ESXi, follow these steps:

  1. Create a bootable Linux USB-drive (or something else you can boot your server with). I used a CentOS 7 installation USB-drive that I could use to boot into rescue mode.
  2. Reset your ESXi and boot from the Linux medium.
  3. Identify your ESXi boot device from the Linux prompt. Use "fdisk -l /dev/sda", "fdisk -l /dev/sdb", etc. until you find a device that has 9 (maybe 8 in some cases) partitions. Partitions 5 and 6 are 250 MB and type "Microsoft basic" (for more information on this partition type, see https://en.wikipedia.org/wiki/Microsoft_basic_data_partition ). These are the ESXi boot banks. My boot device was /dev/sda, so I'll be using /dev/sda5 and/or /dev/sda6 as partition devices.
  4. Create a temporary directory for the primary boot bank: mkdir /tmp/b
  5. Mount the first ESXi bootbank on that directory: mount /dev/sda5 /tmp/b
  6. The current root password hash is stored inside state.tgz . We'll unpack this first. Create a temp directory for the state.tgz contents: mkdir /tmp/state
  7. Unpack state.tgz: cd /tmp/state ; tar xzf /tmp/b/state.tgz
  8. Inside state.tgz is local.tgz. Create a tempfile for the local.tgz contents: mkdir /tmp/local
  9. Unpack local.tgz: cd /tmp/local ; tar xzf /tmp/state/local.tgz
  10. Generate a new password hash: on a Linux system with Perl installed, you can use this: perl -e 'print crypt("MySecretPassword@","\$6\$AbCdEfGh") . "\n";' . On a Linux system with Python installed (like the CentOS rescue mode), you can use this: python -c 'import crypt; print crypt.crypt("MySecretPassword@", "$6$AbCdEfGh")' (Python 2's crypt.crypt() needs the salt argument, and the single quotes keep the shell from expanding the $ signs). Both will print out a new password hash for the given password, e.g. $6$MeOt/VCSA4PoKyHl$yk5Q5qbDVussUjt/3QZdy4UROEmn5gaRgYG7ckYIn1NC2BXXCUnCARnvNkscL5PA5ErbTddoVQWPqBUYe.S7Y0 . Alternatively, you can use an online hash generator, or you can leave the password hash field empty.
  11. Edit the shadow file to change the root password: vi /tmp/local/etc/shadow . Replace the current password hash in the second field of the first line (the line that starts with root:) with the new hash. Esc : w q Enter saves the contents of the shadow file.
  12. Recreate the local.tgz file: cd /tmp/local ; tar czf /tmp/state/local.tgz etc
  13. Recreate the state.tgz file: cd /tmp/state ; tar czf /tmp/b/state.tgz local.tgz
  14. Detach the bootbank partition: umount /tmp/b
  15. Exit from the Linux rescue environment and boot ESXi.
  16. Do the same for the other boot bank (/dev/sda6 in my case) if your system doesn't boot from the first boot bank. NB logging in via SSH doesn't work with an empty hash field. The Host UI client via a web browser does let you in with an empty password, and allows you to change your password.


April 15, 2019

Last week, many Drupalists gathered in Seattle for DrupalCon North America, for what was the largest DrupalCon in history.

As a matter of tradition, I presented my State of Drupal keynote. You can watch a recording of my keynote (starting at 32 minutes) or download a copy of my slides (153 MB).

Making Drupal more diverse and inclusive

DrupalCon Seattle was not only the largest, but also had the most diverse speakers. Nearly 50% of the DrupalCon speakers were from underrepresented groups. This number has been growing year over year, and is something to be proud of.

I actually started my keynote by talking about how we can make Drupal more diverse and inclusive. As one of the largest and most thriving Open Source communities, I believe that Drupal has an obligation to set a positive example.

Free time to contribute is a privilege

I talked about how Open Source communities often incorrectly believe that everyone can contribute. Unfortunately, not everyone has equal amounts of free time to contribute. In my keynote, I encouraged individuals and organizations in the Drupal community to strongly consider giving time to underrepresented groups.

Improving diversity is not only good for Drupal and its ecosystem, it's good for people, and it's the right thing to do. Because this topic is so important, I wrote a dedicated blog post about it.

Drupal 8 innovation update

I dedicated a significant portion of my keynote to Drupal 8. In the past year alone, there have been 35% more sites and 48% more stable modules in Drupal 8. Our pace of innovation is increasing, and we've seen important progress in several key areas.

With the release of Drupal 8.7, the Layout Builder will become stable. Drupal's new Layout Builder makes it much easier to build and change one-off page layouts, templated layouts and layout workflows. Best of all, the Layout Builder will be accessible.

Drupal 8.7 also brings a lot of improvements to the Media Library.

We also continue to innovate on headless or decoupled Drupal. The JSON:API module will ship with Drupal 8.7. I believe this not only advances Drupal's leadership in API-first, but sets Drupal up for long-term success.

These are just a few of the new capabilities that will ship with Drupal 8.7. For the complete list of new features, keep an eye out for the release announcement in a few weeks.

Drupal 7 end of life

If you're still on Drupal 7, there is no need to panic. The Drupal community will support Drupal 7 until November 2021 — two years and 10 months from today.

After the community support ends, there will be extended commercial support for a minimum of three additional years. This means that Drupal 7 will be supported for at least five more years, or until 2024.

Upgrading from Drupal 7 to Drupal 8

Upgrading from Drupal 7 to Drupal 8 can be a lot of work, especially for large sites, but the benefits outweigh the challenges.

For my keynote, I featured stories from two end-users who upgraded large sites from Drupal 7 to Drupal 8 — the State of Georgia and Pegasystems.

The keynote also featured quietone, one of the maintainers of the Migrate API. She talked about the readiness of Drupal 8 migration tools.

Preparing for Drupal 9

As announced a few months ago, Drupal 9 is targeted for June 2020. June 2020 is only 14 months away, so I dedicated a significant amount of my keynote to Drupal 9.

Making Drupal updates easier is a huge, ongoing priority for the community. Thanks to those efforts, the upgrade path to Drupal 9 will be radically easier than the upgrade path to Drupal 8.

In my keynote, I talked about how site owners, Drupal developers and Drupal module maintainers can start preparing for Drupal 9 today. I showed several tools that make Drupal 9 preparation easier. Check out my post on how to prepare for Drupal 9 for details.

A timeline with important dates and future milestones

Thank you

I'm grateful to be a part of a community that takes such pride in its work. At each DrupalCon, we get to see the tireless efforts of many volunteers that add up to one amazing event. It makes me proud to showcase the work of so many people and organizations in my presentations.

Thank you to all who have made this year's DrupalCon North America memorable. I look forward to celebrating our work and friendships at future events!

April 13, 2019

April 12, 2019

April 11, 2019

With Drupal 9 targeted to be released in June of 2020, many people are wondering what they need to do to prepare.

The good and important news is that upgrading from Drupal 8 to Drupal 9 should be really easy — radically easier than upgrading from Drupal 7 to Drupal 8.

The only caveat is that you need to manage "deprecated code" well.

If your site doesn't use deprecated code that is scheduled for removal in Drupal 9, your upgrade to Drupal 9 will be easy. In fact, it should be as easy as a minor version upgrade (like upgrading from Drupal 8.6 to Drupal 8.7).

What is deprecated code?

Code in Drupal is marked as "deprecated" when it should no longer be used. Typically, code is deprecated because there is a better alternative that should be used instead.

For example, in Drupal 8.0.0, we deprecated \Drupal::l($text, $url). Instead of using \Drupal::l(), you should use Link::fromTextAndUrl($text, $url). The \Drupal::l() function was marked for removal as part of some clean-up work; Drupal 8 had too many ways to generate links.

Deprecated code will continue to work for some time before it gets removed. For example, \Drupal::l() continues to work in Drupal 8.7 despite the fact that it was deprecated in Drupal 8.0.0 more than three years ago. This gives module maintainers ample time to update their code.

When we release Drupal 9, we will "drop" most deprecated code. In our example, this means that \Drupal::l() will not be available anymore in Drupal 9.

In other words:

  • Any Drupal 8 module that does not use deprecated code will continue to work with Drupal 9.
  • Any Drupal 8 module that uses deprecated code needs to be updated before Drupal 9 is released, or it will stop working with Drupal 9.

If you're interested, you can read more about Drupal's deprecation policy at https://www.drupal.org/core/deprecation.

How do I know if my site uses deprecated code?

There are a few ways to check if your site is using deprecated code.

If you work on a Drupal site as a developer, run drupal-check. Matt Glaman (Centarro) developed a static PHP analysis tool called drupal-check, which you can run against your codebase to check for deprecated code. I recommend running drupal-check in an automated fashion as part of your development workflow.
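A minimal sketch of what that can look like (the Composer package name and the module path are assumptions based on the public drupal-check project, not something prescribed here):

$ composer require --dev mglaman/drupal-check
$ ./vendor/bin/drupal-check web/modules/contrib/your_module

Each use of deprecated code is reported with its file and line, so the same command can be wired into CI to fail the build when new deprecations slip in.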

If you are a site owner, install the Upgrade Status module. This module was built by Acquia. The module provides a graphical user interface on top of drupal-check. The goal is to provide an easy-to-use readiness assessment for your site's migration to Drupal 9.

If you maintain a project on Drupal.org, enable Drupal.org's testing infrastructure to detect the use of deprecated code. There are two complementary ways to do so: you can run a static deprecation analysis and/or configure your existing tests to fail when calling deprecated code. Both can be set up in your drupalci.yml configuration file.

If you find deprecated code in a contributed module used on your site, consider filing an issue in the module's issue queue on Drupal.org (after having checked no issue has been created yet). If you can, provide a patch to fix the deprecation and engage with the maintainer to get it committed.

How hard is it to update my code?

While there are some deprecations that require more detailed refactoring, many are a simple matter of search-and-replace.

You can check the API documentation for instructions on how to remedy the deprecation.

When can I start updating my code?

I encourage you to start today. When you update your Drupal 8 code to use the latest and greatest APIs, you can benefit from those improvements immediately. There is no reason to wait until Drupal 9 is released.

Drupal 8.8.0 will be the last release to deprecate for Drupal 9. Today, we don't know the full set of deprecations yet.

How much time do I have to update my code?

The current plan is to release Drupal 9 in June of 2020, and to end-of-life Drupal 8 in November of 2021.

Contributed module maintainers are encouraged to remove the use of deprecated code by June of 2020 so everyone can upgrade to Drupal 9 the day it is released.

A timeline with important dates and future milestones

Drupal.org project maintainers should keep the extended security coverage policy in mind, which means that Drupal 8.8 will still be supported until Drupal 9.1 is released. Contributed projects looking to support both Drupal 8.8 and Drupal 9.0 might need to use two branches.

How ready are the contributed modules?

Dwayne McDaniel (Pantheon) analyzed all 7,000 contributed modules for Drupal 8 using drupal-check.

As it stands today, 44% of the modules have no deprecation warnings. The remaining 56% of the modules need to be updated, but the majority have less than three deprecation warnings.

The benefits of backwards compatibility (BC) are clear: no users are left behind, which leads to higher adoption rates because you keep getting new features and always have the latest security fixes.

Of course, that’s easy when you have a small API surface (as Nate Haug once said: “the WordPress API has like 11 functions!”, which is surprisingly close to the truth). But Drupal has an enormous API surface. In fact, it seems there are APIs hiding in every crevice!

In my job at Acquia, I’ve been working almost exclusively on Drupal 8 core. In 2012–2013 I worked on authoring experience (in-place editing, CKEditor, and more). In 2014–2015, I worked on performance, cacheability, rendering and generally stabilizing Drupal 8. Drupal 8.0.0 shipped on November 19, 2015. And since then, I’ve spent most of my time on making Drupal 8 truly API-first: improving the RESTful Web Services support that Drupal 8 ships with, and in the process also strengthening the JSON API & GraphQL contributed modules.

I’ve learned a lot about the impact of past decisions (by myself and others) on backwards compatibility (BC): the benefits of BC, but also the burden of ensuring it, which can increase exponentially due to certain architectural decisions. I’ve been experiencing that first-hand, since I’m tasked with making Drupal 8’s REST support rock-solid, and I’m seeing time and time again that “fixing bugs + improving DX” requires BC breaks. Tough decisions.

In Drupal 8, we have experience with some extremes:

  1. the BigPipe & Dynamic Page Cache modules have no API, but build on top of other APIs: they provide functionality only, not APIs
  2. the REST module has an API, and its functionality can be modified not just via that API, but also via other APIs

The first cannot break BC. The second requires scrutiny for every line of code modified to ensure we don’t break BC. For the second, the burden can easily outweigh the benefit, because how many sites actually are using this obscure edge case of the API?


We’ll look at:

  • How can we make our modules more evolvable in the future? (Contrib & core, D8 & D9.)
  • Ideas to improve this, and root cause hypotheses (for example, the fact that we have API cascades and not orthogonal APIs)

We should be thinking more actively about how feature X, configuration Y or API Z might get in the way of BC. I analyzed the architectural patterns in Drupal 8, and have some thoughts about how to do better. I don’t have all the answers. But what matters most is not answers, but a critical mindset going forward that is consciously considering BC implications for every patch that goes into Drupal 8! This session is only a starting point; we should continue discussing in the hallways, during dinner and of course: in the issue queues!


DrupalCon Seattle
Seattle, WA, United States

April 10, 2019

In Open Source, there is a long-held belief in meritocracy, or the idea that the best work rises to the top, regardless of who contributes it. The problem is that a meritocracy assumes an equal distribution of time for everyone in a community.

Open Source is not a meritocracy

Free time to contribute is a privilege

I incorrectly made this assumption myself, saying: "The only real limitation [to Open Source contribution] is your willingness to learn."

Today, I've come to understand that inequality makes it difficult for underrepresented groups to have the "free time" it takes to contribute to Open Source.

For example, research shows that women still spend more than double the time as men doing unpaid domestic work, such as housework or childcare. I've heard from some of my colleagues that they need to optimize every minute of time they don't spend working, which makes it more difficult to contribute to Open Source on an unpaid, volunteer basis.

Or, in other cases, many people's economic conditions require them to work more hours or several jobs in order to support themselves or their families.

Systemic issues like racial and gender wage gaps continue to plague underrepresented groups, and it's both unfair and impractical to assume that these groups of people have the same amount of free time to contribute to Open Source projects, if they have any at all.

What this means is that Open Source is not a meritocracy.

Underrepresented groups don't have the same amount of free time

Free time is a mark of privilege, rather than an equal right. Instead of chasing an unrealistic concept of meritocracy, we should be striving for equity. Rather than thinking, "everyone can contribute to open source", we should be thinking, "everyone deserves the opportunity to contribute".

Time inequality contributes to a lack of diversity in Open Source

This fallacy of "free time" makes Open Source communities suffer from a lack of diversity. The demographics are even worse than the technology industry overall: while 22.6% of professional computer programmers in the workforce identify as women (Bureau of Labor Statistics), less than 5% of contributors do in Open Source (GitHub). And while 34% of programmers identify as ethnic or national minorities (Bureau of Labor Statistics), only 16% do in Open Source (GitHub).

Diversity in data

It's important to note that time isn't the only factor; sometimes a hostile culture or unconscious bias play a part in limiting diversity. According to the same GitHub survey cited above, 21% of people who experienced negative behavior stopped contributing to Open Source projects altogether. Other recent research showed that women's pull requests were more likely to get accepted if they had a gender-neutral username. Unfortunately, examples like these are common.

Taking action: giving time to underrepresented groups

A person being ignored

While it's impossible to fix decades of gender and racial inequality with any single action, we must do better. Those in a position to help have an obligation to improve the lives of others. We should not only invite underrepresented groups into our Open Source communities, but make sure that they are welcomed, supported and empowered. One way to help is with time:

  • As individuals, by making sure you are intentionally welcoming people from underrepresented groups, through both outreach and actions. If you're in a community organizing position, encourage and make space for people from underrepresented groups to give talks or lead sprints about the work they're interested in. Or if you're asked to, mentor an underrepresented contributor.
  • As organizations in the Open Source ecosystem, by giving people more paid time to contribute.

Taking the extra effort to help onboard new members or provide added detail when reviewing code changes can be invaluable to community members who don't have an abundance of free time. Overall, being kinder, more patient and more supportive to others could go a long way in welcoming more people to Open Source.

In addition, organizations within the Open Source ecosystem capable of giving back should consider financially sponsoring underrepresented groups to contribute to Open Source. Sponsorship can look like full or part-time employment, an internship or giving to organizations like Girls Who Code, Code2040, Resilient Coders or one of the many others that support diversity in technology. Even a few hours of paid time during the workweek for underrepresented employees could help them contribute more to Open Source.

Applying the lessons to Drupal

Over the years, I've learned a lot from different people's perspectives. Learning out in the open is not always easy, but it's been an important part of my personal journey.

Knowing that Drupal is one of the largest and most influential Open Source projects, I find it important that we lead by example.

I encourage individuals and organizations in the Drupal community to strongly consider giving time and opportunities to underrepresented groups. You can start in places like:

When we have more diverse people contributing to Drupal, it will not only inject a spark of energy, but it will also help us make better, more accessible, inclusive software for everyone in the world.

Each of us needs to decide if and how we can help to create equity for everyone in Drupal. Not only is it good for business, it's good for people, and it's the right thing to do.

Special thanks to the Drupal Diversity and Inclusion group for discussing this topic with me. Ashe Dryden's thought-leadership indirectly influenced this piece. If you are interested in this topic, I recommend you check out Ashe's blog post The Ethics of Unpaid Labor and the OSS Community.

Imio

This Thursday, 25 April 2019 at 19:00, the 77th Mons session of the Jeudis du Libre de Belgique will take place.

The topic of this session: Imio: keys to the success of free software in the Walloon municipalities

Theme: community|development

Audience: developers|companies|students

Speaker: Joël Lambillotte (IMIO)

Venue: the technical campus (ISIMs) of the Haute Ecole en Hainaut, Avenue V. Maistriau 8a, Salle Académique, 2nd building (see the map on the ISIMs website, and the location on OpenStreetMap).

Participation is free and only requires registration by name, preferably in advance, or at the entrance to the session. Please indicate your intention by registering via the page http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also enjoy the support of our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, don't hesitate to consult the agenda and subscribe to the mailing list so that you automatically receive the announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organised on the premises of, and in collaboration with, the Hautes Écoles and university faculties of Mons involved in training IT professionals (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description: The inter-municipal organisation Imio designs and hosts free-software solutions for nearly 300 local public administrations in Wallonia.

The project, initiated by two municipalities in 2005, spontaneously teamed up with the Plone open source community to co-build solutions over which it could retain control.

The technological and philosophical choices that were made are what ensured the sustainability and growth of the organisation. They are many: component-oriented programming, which makes it easy to maintain a common core for large cities as well as rural municipalities; a quality-driven approach through the systematic writing of tests and the use of tools such as Robot Framework; industrialisation via Jenkins, Puppet, Docker and Rundeck; and the social aspects via user workshops and sprints.

Short bio: Joël Lambillotte is deputy managing director of Imio, which he helped to create. A computer science graduate, he has long experience as IT manager of the municipality of Sambreville and co-founded the CommunesPlone and PloneGov open source communities, finalists at the EU eGovernment Awards in 2007 and 2009.

April 09, 2019

For most people, today marks the first day of DrupalCon Seattle.

Open Source communities create better, more inclusive software when diverse people come to the table. Unfortunately, there is still a huge gender gap in Open Source, and software more broadly. It's something I'll talk more about in my keynote tomorrow.

One way to help close the gender gap in the technology sector is to give to organizations that are actively working to solve this problem. During DrupalCon Seattle, Acquia will donate $5 to Girls Who Code for every person that visits our booth.