
February 11, 2016

Frank Goossens

Music by Old Farts: Underworld exhales

So those Old Underworld Farts have a new single. This here fart is old enough to notice some similarities with the Blue Aeroplanes' vocal performance, albeit carefully hidden behind Underworld's big beats. Still not too shabby a track, despite the video … :-)

[Embedded video: watch on YouTube]

by frank at February 11, 2016 03:03 PM

February 10, 2016

Dries Buytaert

Turning Drupal outside-in

There has been a lot of discussion around the future of the Drupal front end both on Drupal.org (#2645250, #2645666, #2651660, #2655556) and on my blog posts about the future of decoupled Drupal, why a standard framework in core is a good idea, and the process of evaluating frameworks. These all relate to my concept of "progressive decoupling", in which some portions of the page are handed over to client-side logic after Drupal renders the initial page (not to be confused with "full decoupling").

My blog posts have drawn a variety of reactions. Members of the Drupal community, including Lewis Nyman, Théodore Biadala and Campbell Vertesi, have written blog posts with their opinions, as well as Ed Faulkner of the Ember community. Last but not least, in response to my last blog post, Google changed Angular 2's license from Apache to MIT for better compatibility with Drupal. I read all the posts and comments with great interest and wanted to thank everyone for all the feedback; the open discussion around this is nothing short of amazing. This is exactly what I hoped for: community members from around the world brainstorming about the proposal based on their experience, because only with the combined constructive criticism will we arrive at the best solution possible.

Based on the discussion, rather than selecting a client-side JavaScript framework for progressive decoupling right now, I believe the overarching question the community wants to answer first is: How do we keep Drupal relevant and widen Drupal's adoption by improving the user experience (UX)?

Improving Drupal's user experience is a topic near and dear to my heart. Drupal's user experience challenges led to my invitation to Mark Boulton to redesign Drupal 7, the creation of the Spark initiative to improve the authoring experience for Drupal 8, and continued support for usability-related initiatives. In fact, the impetus behind progressive decoupling and adopting a client-side framework is the need to improve Drupal's user experience.

It took me a bit longer than planned, but I wanted to take the time to address some of the concerns and share more of my thoughts about improving Drupal's UX (and JavaScript frameworks).

To iterate or to disrupt?

In his post, Lewis writes that the issues facing Drupal's UX "go far deeper than code" and that many of the biggest problems found during last year's Drupal 8 usability study are not resolved with a JavaScript framework. This is true; the results of the Drupal 8 usability study show that Drupal can confuse users with its complex mental models and terminology, but they also show how modern features like real-time previews and in-page block insertion are increasingly assumed to be available.

To date, most of our UX improvements have come from an iterative process, meaning it converges on a more refined end state by removing problems in the current state. However, for true innovation to happen we also require disruptive thinking, which is about introducing entirely new ideas: essentially removing all constraints and imagining what an ideal result would look like.

I think we need to recognize that while some of the documented usability problems coming out of the Drupal 8 usability study can be addressed by making incremental changes to Drupal's user experience (e.g. our terminology), other well-known usability problems most likely require a more disruptive approach (e.g. our complex mental model). I also believe that we must acknowledge that disruptive improvements are possibly more impactful in keeping Drupal relevant and widening Drupal's adoption.

At this point, to get ahead and lead, I believe we have to do both. We have to iterate and disrupt.

From inside-out to outside-in

Let's forget about Drupal for a second and observe the world around us. Think of all the web applications you use on a regular basis, and consider the interaction patterns you find in them. In popular applications like Slack, the user can perform any number of operations to edit preferences (such as color scheme) and modify content (such as in-place editing) without incurring a single full page refresh. Many elements of the page can be changed without the user's flow being interrupted. Another example is Trello, in which users can create new lists on the fly and then add cards to them without ever having to wait for a server response.

Contrast this with Drupal's approach, where any complex operation requires the user to have detailed prior knowledge about the system. In our current mental model, everything begins in the administration layer at the most granular level and requires an unmapped process of bottom-up assembly. A user has to make a content type, add fields, create some content, configure a view mode, build a view, and possibly make the view the front page. If each individual step is already this involved, consider how much more difficult it becomes to traverse them in the right order to finally see an end result. While very powerful, the problem is that Drupal's current model is "inside-out". This is why it would be disruptive to move Drupal towards an "outside-in" mental model. In this model, I should be able to start entering content, click anything on the page, seamlessly edit any aspect of its configuration in-place, and see the change take effect immediately.

Drupal 8's in-place editing feature is actually a good start at this; it enables the user to edit what they see without interrupting their workflow, with faster previews and without first having to figure out what kind of thing they are looking at before they can start editing.

Making it real with content modeling

Eight years ago in 2007, I wrote about a database product called DabbleDB. I shared my belief that it was important to move CCK and Views into Drupal's core and learn from DabbleDB's integrated approach. DabbleDB was acquired by Twitter in 2010 but you can still find an eight-year-old demo video on YouTube. While the focus of DabbleDB is different, and the UX is obsolete, there is still a lot we can learn from it today: (1) it shows a more integrated experience between content creation, content modeling, and creating views of content, (2) it takes more of an outside-in approach, (3) it uses a lot less intimidating terminology while offering very powerful capabilities, and (4) it uses a lot of in-place editing. At a minimum, DabbleDB could give us some inspiration for what a better, integrated content modeling experience could look like, with the caveat that the UX should be as effortless as possible to match modern standards.

Other new data modeling approaches with compelling user experiences have recently entered the landscape. These include back end-as-a-service (BEaaS) solutions such as Backand, which provides a visually clean drag-and-drop interface for data modeling and helpful features for JavaScript application developers. Our use cases are not quite the same, but Drupal would benefit immensely from a holistic experience for content modeling and content views that incorporates both the rich feature set of DabbleDB and the intuitive UX of Backand.

This sort of vision was not possible in 2007 when CCK was a contributed module for Drupal 6. It still wasn't possible in Drupal 7 when Views existed as a separate contributed module. But now that both CCK and Views are in Drupal 8 core, we can finally start to think about how we can more deeply integrate the two. This kind of integration would be nontrivial but could dramatically simplify Drupal's UX. This should be really exciting because so many people are attracted to Drupal exactly because of features like CCK and Views. Taking an integrated approach like DabbleDB, paired with a seamless and easy-to-use experience like Slack, Trello and Backand, is exactly the kind of disruptive thinking we should do.

What most of the examples above have in common are in-place editing, immediate previews, no page refreshes, and non-blocking workflows. The implications for our form and render systems of providing configuration changes directly on the rendered page are significant. Achieving this requires robust state management and rendering on the client side as well as the server side. In my vision, Twig will provide structure for the overall page and the non-interactive portions, but more JavaScript will more than likely be necessary for certain parts of the page in order to achieve the UX that all users of the web have come to expect.
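
To make this concrete, here is a hedged sketch, in Drupal 8 render-array terms, of handing one portion of a page over to client-side logic after the initial render. The module, library and attribute names are hypothetical:

// Sketch of a progressively decoupled block: Drupal/Twig renders the
// container and fallback markup server-side, after which the attached
// JavaScript library takes over just this element.
$build['preferences_editor'] = [
	'#type' => 'container',
	// Mount point the client-side application looks for.
	'#attributes' => [ 'data-decoupled-app' => 'preferences-editor' ],
	// Server-rendered fallback shown until the JavaScript boots.
	'fallback' => [ '#markup' => t( 'Loading preferences…' ) ],
	// Library that would be defined in mymodule.libraries.yml.
	'#attached' => [ 'library' => [ 'mymodule/preferences-editor' ] ],
];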

We shouldn't limit ourselves to this one example, as there are a multitude of Drupal interfaces that could all benefit from both big and small changes. We all want to improve Drupal's user experience — and we have to. To do so, we have to constantly iterate and disrupt. I hope we can all collaborate on figuring out what that looks like.

Special thanks to Preston So and Kevin O'Leary for contributions to this blog post and to Wim Leers for feedback.

by Dries at February 10, 2016 10:49 AM

February 09, 2016

Ruben Vermeersch

sql-migrate slides

I recently gave a short lightning talk about sql-migrate (a SQL schema migration tool for Go) in the Go developer room at FOSDEM.

Annotated slides can be found here.

[Embedded slides: sql-migrate]



February 09, 2016 05:14 PM

Frank Goossens

Universal JS exception handler

[Screenshot: universal JS exception handler]

by frank at February 09, 2016 03:19 PM

Lionel Dricot

Tomorrow, the conquest of the galaxy!


We talk a lot about self-driving cars, but there is another field where artificial intelligence will certainly soon play a disruptive role: space exploration.

While humans are still confined to the near suburbs of Earth, the same is not true of robots: they have already been to Mars, to Venus, on a comet, on Titan. They have photographed Jupiter, Saturn and Pluto, and they are in the process of leaving the solar system.

Of course, these robots are extremely rudimentary. They have little or no intelligence and simply obey orders sent directly from Earth. With the big drawback that the greater the distance, the longer an order takes to arrive.

For a rover driving on Mars that spots a rock blocking its path, it takes 20 minutes for that information to reach Earth and then, assuming an immediate response from the operator, another 20 minutes to transmit the order to avoid the rock.

Making exploration probes more intelligent and more autonomous therefore makes a lot of sense. They would be able to take the most relevant decisions themselves and send back to Earth only the most interesting pictures and scientific results.

Energy and repairs

The main problem for a space probe is energy. As the Sun gets farther away, it becomes harder and harder to collect energy, so energy has to be stored and saved. By being more intelligent, probes could develop novel strategies suited to their particular environments.

By being more intelligent, probes should also be able to repair the problems they encounter, exploiting, where possible, the resources of their surroundings. This requires the ability to extract materials.

If you combine an energy-preserving intelligence with a self-repairing intelligence, it is likely that the probes will start building new means of capturing energy. For example, a probe could build a wind turbine on Mars.

All of this is of course still very theoretical. Is it completely crazy? Given current progress, it is likely that this kind of probe will be launched into the solar system within twenty or thirty years. Fifty, perhaps.

Collaboration

On a planet like Mars, imagine that the probes start to cooperate with each other. Or, why not, that they start to communicate with each other across the whole solar system. They would also be able to communicate with the older probes and learn from their observations, as human operators do today. After all, why not?

We would then have a genuine network of space robots launched into the cosmos: learning, communicating, repairing themselves, helping each other. Granted, most of the probes would eventually die, until the day when…

The singularity

None of this sounds fundamentally exciting. Probes that repair themselves and communicate: that's nice, but it doesn't change your life. Really? Yet one immediate consequence would be the progressive adaptation of the probes, each repair turning a probe into a machine less and less recognizable to its designers. Some probes could become microscopic! Other probes, having found an abundance of resources, would have a natural tendency to build their own autonomous robots to help them repair themselves, obtain energy and explore.

By building probes that repair themselves and that seek to obtain energy, we will have instilled the survival instinct!

Among these new second-generation probes, some will themselves be launched to explore the cosmos and contribute to the network of probes or to their mother probe. The conquest of the galaxy will have begun!

Time

When we talk about space exploration, enemy number one is time. According to current theories, it is strictly impossible to travel faster than light (something I explain here). Space being immense, travelling from star to star would take decades or even centuries, far too long for humans.

But not for robots. Robots have all the time in the world.

If a robot takes a hundred years to reach a new star, and the information it sends takes ten or twenty years to reach the other robots, that is not a problem. Moreover, if each robot gives birth to more than one other robot, the conquest will be exponential. Within a few hundred millennia, the galaxy could be conquered by humanity's space probes. Every star, every planet in the galaxy will be explored and exploited, and the robots could start cooperating in order to reach another galaxy.

The human race will probably have disappeared by then, but its descendants, the robots, will carry on its work.

The worry

This space singularity, this key moment when we will have irrevocably set the conquest of the galaxy in motion, is not that far away. Some of us will likely witness it in our lifetime, even if nobody necessarily realizes it at the time.

But this raises an existential question: why have we not yet detected the slightest sign of an extraterrestrial intelligence that has followed the same path? Where are the space probes from other planets? Should we worry, or rejoice, at apparently being the first in our corner of the cosmos?

In a future post I will try to offer answers to these questions, but, I might as well tell you now, they are not reassuring answers as far as our immediate future is concerned.

 

Photo: NASA.

Thank you for taking the time to read this pay-what-you-want post. Feel free to support me with a few milliBitcoins: 12uAH27PhWepyzZ8kzY9rMAZzwpfsWSxXt, with a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

by Lionel Dricot at February 09, 2016 03:07 PM

February 08, 2016

Xavier Mertens

[SANS ISC Diary] More Malicious JavaScript Obfuscation

The following diary was published on isc.sans.org: More Malicious JavaScript Obfuscation.

by Xavier at February 08, 2016 07:41 PM

The Best Broth is Made in The Oldest Pot

In 2014, I blogged about security awareness through proverbs. Many proverbs can be used to deliver important security messages. We are now in 2016 and I could add a new one to the long list that I already built:

The Best Broth is Made in The Oldest Pot

A new piece of malware, called T9000, has recently been discovered by Palo Alto Networks. It specifically targets Skype users. Have a look at the blog post if you are interested in the technical details. However, what caught my attention is the way the malware is delivered to potential victims. An RTF document is sent via a spam campaign and tries to exploit the following CVEs affecting (but not only) Microsoft Office:

Both are rated as critical and allow remote code execution. So what? On one hand, companies are starting bug bounty programs to find vulnerabilities in their code, new 0-day vulnerabilities are for sale at high prices, and $VENDORS try to sell us new boxes that will keep us safe… On the other hand, we have a new piece of malware using a vulnerability from… 2012?

If you're still vulnerable to a four-year-old vulnerability in a product like Microsoft Office, don't be surprised to be pwn3d! Hint: run Windows Update or WSUS before spending your money on new security toys…

by Xavier at February 08, 2016 03:55 PM

Joram Barrez

“Orchestrating Work with Activiti and Spring Integration” by Josh Long

When my buddy Josh Long from Pivotal writes about Activiti and Spring, it's always must-read material. No exception this time, so I want to give it some exposure here too: Orchestrating Work With Activiti and Spring Integration. A nice conclusion too: “BPM adds a fair amount of cognitive load for simple integrations but offers a lot of […]

by Joram Barrez at February 08, 2016 11:02 AM

February 07, 2016

Mark Van den Borre

A monument is no more


Farewell, Eddy!

by Mark Van den Borre (noreply@blogger.com) at February 07, 2016 10:43 AM

February 05, 2016

Julien Pivotto

puppetlabs-concat 2.0

If you are a Puppet user, you know that you have multiple choices regarding the way you manage your files. You might decide to manage the whole file or only parts of it. I presented that as part of my Augeas talk at PuppetCamp London in 2014.

One of the solutions out there is concat. Concat allows you to define parts of a file as different resources, which are then glued together to create a single file.

In the past, the Puppetlabs concat module was built out of defined types, which made it seem fragile and unreliable. It created temporary files, required a setup class, and did other things that could have been handled internally by a custom type/provider.

On the other hand, some people decided to write concat as a provider. They created the Puppet module theforeman-concat_native, which was a really nice solution to the concat problems. From the catalog you only see the concat resources, not all the temporary files, directories and execs that were needed by the Puppetlabs concat module.

What I want to tell you today is that the Foreman module, which was really great, is now deprecated because its changes are available in the Puppetlabs module itself (in the 2.x releases). It is almost compatible with the old one, as it provides backwards-compatible defines.

What it will change for you:

This article has been written by Julien Pivotto and is licensed under a Creative Commons Attribution 4.0 International License.

by Julien Pivotto at February 05, 2016 03:50 PM

February 03, 2016

Jeroen De Dauw

Missing in PHP7: Value objects

This is the third post in my Missing in PHP7 series. The previous one is about named parameters.

A Value Object does not have an identity, which means that if you have two of them with the same data, they are considered equal (take two latitude, longitude pairs for instance). Generally they are immutable and do not have methods beyond simple getters.

Such objects are a key building block in Domain Driven Design, and one of the common types of objects even in well designed codebases that do not follow Domain Driven Design. My current project at Wikimedia Deutschland by and large follows the Clean Architecture, which means that each “use case” or “interactor” comes with two value objects: a request model and a response model. Those are certainly not the only Value Objects, and by the time this relatively small application is done, we're likely to have over 50 of them. This makes it really unfortunate that PHP makes it such a pain to create Value Objects, even though it certainly does not prevent you from doing so.

Let’s look at an example of such a Value Object:

class ContactRequest {

	private $firstName;
	private $lastName;
	private $emailAddress;
	private $subject;
	private $messageBody;

	public function __construct( string $firstName, string $lastName, string $emailAddress, string $subject, string $messageBody ) {
		$this->firstName = $firstName;
		$this->lastName = $lastName;
		$this->emailAddress = $emailAddress;
		$this->subject = $subject;
		$this->messageBody = $messageBody;
	}

	public function getFirstName(): string {
		return $this->firstName;
	}

	public function getLastName(): string {
		return $this->lastName;
	}

	public function getEmailAddress(): string {
		return $this->emailAddress;
	}

	public function getSubject(): string {
		return $this->subject;
	}

	public function getMessageBody(): string {
		return $this->messageBody;
	}

}

As you can see, this is a very simple class. So what the smag am I complaining about? Three different things actually.

1. Initialization sucks

If you’ve read my previous post in the series, you probably saw this one coming. Indeed, I mentioned Value Objects at the end of that post. Why does it suck?

new ContactRequest( 'Nyan', 'Cat', 'maxwells-demon@entropywins.wtf', 'Kittens', 'Kittens are awesome' );

The lack of named parameters forces one to use a positional list of non-named arguments, which is bad for readability and is error prone. Of course one can create a PersonName Value Object with first- and last name fields, and some kind of partial email message Value Object. This only partially mitigates the problem though.

There are some ways around this, though none of them are nice. An obvious fix with an equally obvious downside is to have a Builder using a Fluent Interface for each Value Object. To me the added clutter sufficiently complicates the program to undo the benefits gained from removing the positional unnamed argument lists.

Another approach to avoid the positional list is to not use the constructor at all, and instead rely on setters. This unfortunately introduces two new problems. Firstly, the Value Object becomes mutable during its entire lifetime. While it might be clear to some people that those setters should not be used, their presence suggests that there is nothing wrong with changing the object. Having to rely on such special understanding, or on people reading documentation, is certainly not good. Secondly, it becomes possible to construct an incomplete object, one that misses required fields, and pass it to the rest of the system. When there is no automated checking going on, people will end up doing this by mistake, and the errors might be very non-local, and thus hard to trace back to their source.

Some time ago I tried out one approach to tackle both these problems introduced by using setters. I created a wonderfully named Trait to be used by Value Objects which use setters in the format of a Fluent Interface.

class ContactRequest {
	use ValueObjectsInPhpSuckBalls;

	private $firstName;
	// ...

	public function withFirstName( string $firstName ): self {
		$this->firstName = $firstName;
		return $this;
	}
	// ...

}

The trait provides a static newInstance method, enabling construction of the Value Object that uses it as follows:

$contactRequest =
    ContactRequest::newInstance()
        ->withFirstName( 'Nyan' )
        ->withLastName( 'Cat' )
        // ...
        ->withMessageBody( 'Pink fluffy unicorns dancing on rainbows' );

The trait also provides some utility functions to check if the object was fully initialized, which by default will assume that a field with a null value was not initialized.
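
A minimal sketch of what such a trait can look like (my illustration of the mechanism, not necessarily the exact original code):

trait ValueObjectsInPhpSuckBalls {

	public static function newInstance() {
		// Inside a trait, static resolves to the class using the trait,
		// so this returns a new ContactRequest in the example above.
		return new static();
	}

	public function isFullyInitialized(): bool {
		// By default a field holding null counts as not initialized.
		foreach ( get_object_vars( $this ) as $value ) {
			if ( $value === null ) {
				return false;
			}
		}
		return true;
	}

}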

More recently I tried out another approach, also using a trait to be used by Value Objects: FreezableValueObject. One thing I wanted to change here compared to the previous approach is that the users of the initialized Value Object should not have to do anything different from or additional to what they would do for a more standard Value Object initialized via constructor call. Freezing is a very simple concept. An object starts out as being mutable, and then when freeze is called, modification stops being possible. This is achieved via a freeze method that when called sets a flag that is checked every time a setter is called. If a setter is called when the flag is set, an exception is thrown.
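
A rough sketch of how such a trait can work (the name of the flag-checking method is my own; the real trait may differ):

trait FreezableValueObject {

	private $isFrozen = false;

	public function freeze(): self {
		$this->isFrozen = true;
		return $this;
	}

	// Every setter is expected to call this before mutating state.
	protected function assertNotFrozen() {
		if ( $this->isFrozen ) {
			throw new \RuntimeException( 'Cannot modify a frozen value object' );
		}
	}

}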

$contactRequest = ( new ContactRequest() )
        ->setFirstName( 'Nyan' )
        ->setLastName( 'Cat' )
        // ...
        ->setMessageBody( 'Pink fluffy unicorns dancing on rainbows' )
        ->freeze();

$contactRequest->setFirstName( 'Nyan' ); // boom

To also verify that initialization is complete in the code that constructs the object, the trait provides an assertNoNullFields method which can be called together with freeze. (The name assertFieldsInitialized would actually be better, as the former leaks implementation details and ends up being incorrect if a class overrides it.)

A downside this second trait approach has over the first is that each Value Object needs to call the method that checks the freeze flag in every setter. This is something that is easy to forget, and thus another potential source of bugs. I have yet to investigate if the need for this can be removed via some reflection magic.

It’s quite debatable if any of these approaches pay for themselves, and it’s clear none of them are even close to being nice.

2. Duplication and clutter

For each part of a value object you need a constructor parameter (or setter), a field and a getter. This is a lot of boilerplate, and the flexibility the class language construct provides, which is not needed here, creates ample room for inconsistency. I've come across plenty of bugs in Value Objects caused by assignments to the wrong field in the constructor or by returning the wrong field in a getter.

“Duplication is the primary enemy of a well-designed system.”

― Robert C. Martin

(I actually disagree with the (wording of the) above quote and would replace “duplication” by “Complexity of interpretation and modification”.)

3. Concept missing from language

It's important to convey intent with your code. Unclear intent wastes time, both through programmers trying to understand the intent and through bugs caused by the intent not being understood. When Value Objects are classes, and many other things are classes, it might not be clear whether a given class is intended to be a Value Object or not. This is especially a problem when there are more junior people in a project. Having a dedicated Value Object construct in the language itself would make the intent unambiguous. It also forces conscious and explicit action to change the Value Object into something else, eliminating one avenue of code rot.

“Clean code never obscures the designers’ intent but rather is full of crisp abstractions and straightforward lines of control.”

― Grady Booch, author of Object-Oriented Analysis

I can haz

ValueObject ContactRequest {
    string $firstName;
    string $lastName;
    string $emailAddress;
}

// Construction of a new instance:
$contactRequest = new ContactRequest( firstName='Nyan', lastName='cat', emailAddress='something' );

// Access of the "fields":
$firstName = $contactRequest->firstName;

// Syntax error:
$contactRequest->firstName = 'hax';

See also

I will always favor value objects

by Jeroen at February 03, 2016 07:17 PM

Frank Goossens

Autoptimize PowerUp sneak peek: Noptimize

In case you're wondering whether those Autoptimize Power-Ups are still coming and, if so, what they'll look like and what they'll do:

[Screenshot: AO PowerUp sneak peek: Noptimize]

So, some quick pointers:

Next steps for me: register my “secondary activity as independent” (as I still have an official day-time job), get in touch with an accountant, decide on EDD vs. Freemius, set up shop on optimizingmatters.com (probably including bbPress for support) and determine pricing (I'm actually thinking one Euro/month for each PowerUp; what do you think?).

Exciting times ahead …

by frank at February 03, 2016 05:36 PM

Wouter Verhelst

OMFG, ls

alias ls='ls --color=auto -N' # -N (--literal) lists names without the new quoting

Unfortunately it doesn't actually revert to the previous behaviour, but it's close enough.

February 03, 2016 01:54 PM

Xavier Mertens

[SANS ISC Diary] Automating Vulnerability Scans

The following diary was published on isc.sans.org: Automating Vulnerability Scans.

by Xavier at February 03, 2016 08:28 AM

February 02, 2016

Johan Van de Wauw

Report of Geospatial@FOSDEM 2016


Last Sunday, during FOSDEM, the second edition of the geospatial devroom took place.
As said before, we had a nice lineup of 15 presentations, with topics ranging from established GIS tools and OpenStreetMap to 3D visualisation and spatial databases.

Anyway, I did the first presentation on SAGA GIS, and I was happy to have some very nice slides prepared by other members of the SAGA community.
Next up was Hugo Mercier (Oslandia), who presented Tempus. Planning multimodal transportation is a very hard problem, and it was nice to see that you can actually get the sources of the whole project. Then Astrid Emde presented Mapbender, an OSGeo project that allows one to easily create web SDIs.
The next talk, "Build a geo-aware OS" by Zeeshan Ali, was one I was looking forward to: it is not a talk you usually encounter at FOSS4G conferences, but an example of the cross-pollination of technologies you get at FOSDEM. It showed how location can now be used in GNOME (and other projects building on FreeDesktop) through common APIs.
Getting ready to add location using a Raspberry Pi Zero

Next we got a full room for the presentation by Margherita Di Leo and Anne Ghisla on the results of the Google Summer of Code projects coached at OSGeo.
This is how we like our devroom: full!


Everything got even more crowded for the next talk by Tuukka Hastrup on open journey planning. As people kept trying to enter our room even though it was full, I decided to play doorkeeper and missed the actual talk. 


For those who don't get it "FULL", stop entering!
But based on the feedback and the slides, I'm really looking forward to the video of the talk.
Still in a crowded room, Ilya Zverev gave a talk about mapping for OSM with a phone. Rather than mapping from satellite imagery, it makes more sense to map when you are actually in the field.


The next talks all focused on 3D. Thomas Bremer talked about how one could implement a flight simulator in JavaScript, focusing on the different components that such a simulator could be built from. I hope to see a demo of this soon. The next talk was on the integration of OpenLayers and Cesium. I missed that talk myself (one has to get lunch at some point), but I got some nice feedback about it. The last talk in our 3D theme was Vincent Mora on iTowns, showing impressive lidar scans of large parts of Paris. Even though many of us are still struggling with 2D, a lot of data collection is now 3D, and I think GIS systems still have to catch up.

The next block of four presentations all focused on databases, starting off with MySQL GIS. I was pleasantly surprised by the presentation. MySQL and Oracle have received some bad press in some open source communities, but I believe the way they are working on MySQL is exemplary: not only is MySQL catching up quickly with other (spatial) databases, this is largely being done by improving the underlying Boost.Geometry library, which can be leveraged by other projects. Oracle is currently sponsoring two developers on that project.
Rasdaman, a raster array database extension for PostgreSQL, was also presented. Even though their use cases may currently feel remote to many developers, I like the fact that they are not only implementing a product but also shaping the standards (OGC and ISO) along the way.
The third presentation focused on the geographical capabilities of a completely different type of database: MongoDB. I see a lot of possibilities for this database with geodata that is less structured than in classic databases. Nice to see their performance is improving.
The last presentation, on the Pivotal Greenplum database, was sometimes hard to follow, but I will definitely check it out further: I was not aware of this database, but it is a parallel database based on PostgreSQL, meaning that PostGIS functionality can be used, as can the extensions for trajectory data which were the topic of the presentation.

Closing the devroom was a presentation on geocoding by Ervin Ruci, a topic where the performance of open solutions can still be improved a lot.


At the end of the day came the closing keynote of FOSDEM, given by the people of HOT OSM. A full Janson is always something special, and that is definitely true when the talk covers a topic I'm so familiar with.


Closing Keynote about Humanitarian Openstreetmap

To conclude, I believe the devroom at FOSDEM was a great event, and I really think it has its own special role compared to other events such as FOSS4G and State of the Map: the chance to meet other open source communities.

Videos of the day should be available soon, thanks to the incredible FOSDEM video team and to Anne Ghisla and Thomas Gratier, who helped with the camera and organisation.

by Johan Van de Wauw (noreply@blogger.com) at February 02, 2016 09:27 PM

Paul Cobbaut

ls uses quotes?

paul@d:~$ mkdir newls
paul@d:~$ cd newls/
paul@d:~/newls$ touch a\ space
paul@d:~/newls$ ls
'a space'
paul@d:~/newls$
oh well...

paul@d:~/newls$ touch \'
paul@d:~/newls$ ls
''\''' 'a space'
What?

by Paul Cobbaut (noreply@blogger.com) at February 02, 2016 08:37 PM

Dries Buytaert

Dries meets the King and Queen of Belgium

A few weeks ago at Davos, I had the honor of meeting the King and Queen of Belgium. VTM, the main commercial television station in Belgium, captured the moment for the evening news. Proof in the video below!

by Dries at February 02, 2016 12:31 PM

February 01, 2016

Jeroen De Dauw

Missing in PHP7: Named parameters

This is the second post in my Missing in PHP7 series. The previous one is about function references.

Readability of code is very important, and this is most certainly not readable:

getCatPictures( 10, 0, true );

You can make some guesses, and in a lot of cases you’ll be passing in named variables.

getCatPictures( $limit, $offset, !$includeNonApproved );

It’s even possible to create named variables where you would otherwise have none.

$limit = 10;
$offset = 0;
$approvedPicturesOnly = true;
getCatPictures( $limit, $offset, $approvedPicturesOnly );

This gains you naming of the arguments, at the cost of boilerplate, and worse, the cost of introducing local state. I’d hate to see such state be introduced even if the function did nothing else, and it only gets worse as the complexity and other state of the function increases.

Another way to improve the situation a little is by making the boolean flag more readable via usage of constants.

getCatPictures( 10, 0, CatPictureRepo::APPROVED_PICTURES_ONLY );

Of course, long argument lists and boolean flags are both things you want to avoid to begin with, and they are rarely needed when your functions are designed well. It's however not possible to avoid all argument lists. Using the cat pictures example, the limit and offset parameters cannot be removed.

getApprovedCatPictures( 10, 0 );

You can create a value object, though this just moves the problem to the constructor of said value object, and unless you create weird function-specific value objects, it is only a partial move.

getApprovedCatPictures( new LimitOffset( 10, 0 ) );
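
For illustration, a minimal version of such a LimitOffset Value Object could look as follows; note how its own constructor still takes a positional argument list, which is why this is only a partial move of the problem:

class LimitOffset {

	private $limit;
	private $offset;

	public function __construct( int $limit, int $offset ) {
		$this->limit = $limit;
		$this->offset = $offset;
	}

	public function getLimit(): int {
		return $this->limit;
	}

	public function getOffset(): int {
		return $this->offset;
	}

}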

A naive solution to this problem is to have a single parameter that is an associative array.

getApprovedCatPictures( [
    'limit' => 10,
    'offset' => 0
] );

The result of this is a catastrophe. You are no longer able to see from the function signature which parameters are required and supported, or what their types are. You need to look at the implementation, where you are also forced to do a lot of checks before doing the actual job of the function. So many checks that they probably deserve their own function. Yay, recursion! Furthermore, static code analysis gets thrown out of the window, making it next to impossible for tools to assist with renaming a parameter or finding its usages.

What I’d like to be able to do is naming parameters with support from the language, as you can do in Python.

getApprovedCatPictures( limit=10, offset=0 );

To me this is not a small problem. Functions with more than 3 arguments might be rare in a well designed codebase, though even with fewer arguments readability suffers greatly. And there is an exception to the rule of no more than 3 parameters per function: constructors. Unless you are being rather extreme and following Object Calisthenics, you'll have plenty of constructors where the lack of named parameters gets extra annoying. This is especially true for value objects, though that is a topic for another post.

by Jeroen at February 01, 2016 04:59 PM

Mattias Geniar

Slides: HTTP/2 for PHP developers

Last weekend I had the pleasure of speaking in the PHP & Friends room at FOSDEM, Brussels. The slides for my talk "HTTP/2 for PHP developers" are now available online at speakerdeck.

[Embedded slide deck]

I hope to be back next year!

by Mattias Geniar at February 01, 2016 12:54 PM

January 31, 2016

Mattias Geniar

A clean mailing list browser, focussing on readability

I wrote a new frontend for viewing mailing list entries that I'd like to show you. Its focus is on readability, typography and a clean layout.

I've used Twitter's Bootstrap with Red Hat's Overpass font for a responsive and focussed layout.

Here's what it looks like. I think it's best compared against marc.info, our industry's de facto standard for mailing list archives.

First: the original.


[Screenshot: the original marc.info layout]


Next: the redesign.


[Screenshot: the redesigned layout]


I'm not entirely satisfied yet, but it'll do for now. More tweaks will surely follow.

My main focus was:

We often rely on mailing list communication to spread important news, there's no reason that can't have a nice and professional looking layout.

The new layout is up at marc.ttias.be (I may change the URL later, I'm not sure yet).

You're free to use it, of course. I now mirror the mailing lists that are closest to me, but if I'm missing one that you would like to follow -- just drop me a note. You'll know where to find me.

As for the technology involved, it's a pretty default setup:

To add a new mailing list, all I have to do is add a new system user in the correct Linux group and subscribe (manually) to the mailing list. From there on, the indexing, listing etc. happen automatically.

This was a fun weekend-hack!

by Mattias Geniar at January 31, 2016 08:57 PM

Ruben Vermeersch

Show me the way

If you need further proof that OpenStreetMap is a great project, here’s a very nice near real-time animation of the most recent edits: https://osmlab.github.io/show-me-the-way/

[Screenshot: Show me the way]

Seen today at FOSDEM, at the stand of the Humanitarian OpenStreetMap team which also deserves attention: https://hotosm.org



January 31, 2016 08:28 PM

January 30, 2016

Jeroen De Dauw

Missing in PHP7: function references

This is the first post in my Missing in PHP7 series.

Over time, PHP has improved its capabilities with regard to functions. As of PHP 5.3 you can create anonymous functions, and as of 5.4 you can use the callable type hint. However, referencing a function still requires using a string.

call_user_func( 'globalFunctionName' );
call_user_func( 'SomeClass::staticFunction' );
call_user_func( [ $someObject, 'someMethod' ] );

Unlike in languages such as Python, which provide proper function references, tools provide no support for the PHP approach whatsoever. No autocompletion or type checking in your IDE. No warnings from static code analysis tools. No “find usages” support.

Example

A common place where I run into this limitation is when I have a method that needs to return a modified version of an input array.

public function someStuff( array $input ) {
    $output = [];

    foreach ( $input as $element ) {
        $output[] = $this->somePrivateMethod( $element );
    }

    return $output;
}

In such cases array_map and similar higher order functions are much nicer than creating additional state, doing an imperative loop and a bunch of assignments.

public function someStuff( array $input ) {
    return array_map(
        [ $this, 'somePrivateMethod' ],
        $input
    );
}

I consider the benefit of tool support big enough to prefer the following code over the above:

public function someStuff( array $input ) {
    return array_map(
        function( $element ) {
            return $this->somePrivateMethod( $element );
        },
        $input
    );
}

This does make the already hugely verbose array_map even more verbose, and makes this one of those scenarios where I go “really PHP? really?” when I come across it.

Related: class references

A similar stringly-typed problem in PHP used to be creating mocks in PHPUnit. This is of course not a problem of PHP itself, though it is still something affecting many PHP projects.

$kittenRepo = $this->getMock( 'Awesome\Software\KittenRepo' );

This causes the same types of problems as the lack of function references. If you now rename or move KittenRepo, tools will not update these string references. If you try to find usages of the class, you’ll miss this one, unless you do string search.

Luckily PHP 5.5 introduced the ::class construct, which allows doing the following:

$kittenRepo = $this->getMock( KittenRepo::class );

Here KittenRepo is imported with a use statement.

by Jeroen at January 30, 2016 10:28 PM

Missing in PHP7

I've decided to start a series of short blog posts on how PHP gets in the way of creating well designed applications, with a focus on missing features.

The language flamewar

PHP is one of those languages that people love to hate. Its standard library is widely inconsistent, and its gradual typing approach leaves fundamentalists in both the strongly typed and dynamically typed camps unsatisfied. The standard library of a language is important, and, amongst other things, it puts an upper bound to how nice an application written in a certain language can be. This upper bound is however not something you run into in practice. Most code out there suffers from all kinds of pathologies that have quite little to do with the language used, and are much more detrimental to understanding or changing the code than its language. I will take a well designed application in a language that is not that great (such as PHP) over a ball of mud in [insert the language you think is holy here] any day.

“That’s the thing about people who think they hate computers. What they really hate is lousy programmers.”

― Larry Niven

Well designed applications

By a well designed application, I do not mean an application that uses at least 10 design patterns from the GoF book and complies with a bunch of design principles. It might well do that; however, what I'm getting at is code that is easy to understand, maintain, modify, extend and verify the correctness of. In other words, code that provides high value to the customer.

“The purpose of software is to help people.”

― Max Kanat-Alexander

Missing features

These will be outlined in upcoming blog posts which I will link here.

by Jeroen at January 30, 2016 09:47 PM

Les Jeudis du Libre

Mons, February 18: Local search with OscaR.cbls, explained to my neighbour

This Thursday, February 18, 2016 at 7pm, the 46th Mons session of the Jeudis du Libre de Belgique will take place.

The topic of this session: Local search with OscaR.cbls, explained to my neighbour

Theme: Algorithmics|Optimization|Programming|Development

Audience: Developers|Students|Teachers

Speaker: Renaud De Landtsheer (CETIC)

Venue: Université de Mons, Faculté Polytechnique, Site Houdain, Rue de Houdain 9, auditorium 3 (see this map on the UMONS website, or the OSM map). Enter through the main door, at the back of the main courtyard, and follow the signs from there.

Participation is free and only requires registration by name, preferably in advance, or at the entrance to the session. Please indicate your intention by registering via http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink (everything will be over by 10pm at the latest).

The Jeudis du Libre in Mons are also supported by our partners: CETIC, Normation, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, feel free to consult the agenda and to subscribe to the mailing list in order to automatically receive the announcements.

As a reminder, the Jeudis du Libre aim to be a space for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month, and are organised on the premises of, and in collaboration with, the Mons universities and colleges involved in training IT professionals (UMONS, HEH and Condorcet), with the support of the A.S.B.L. LoLiGrUB, which is active in promoting free software.

Description: OscaR is a free (LGPL) library for solving optimization problems. OscaR offers several optimization engines: constraint-based local search, constraint programming, and linear programming. OscaR is implemented in Scala.

The talk will start with the motivations that led to the birth of this open source project, from CETIC's point of view, including the ease of exploiting the results, the pooling of contributions, the spontaneous contributions, and the credibility that the open source model gives to the project.

After a general overview of OscaR's different engines, the talk will focus on the constraint-based local search engine. It will present its main principles before concentrating on the declarative layer that makes it easy to define a search problem and to express local search procedures in a compact way.

by Didier Villers at January 30, 2016 09:30 AM

January 29, 2016

Joram Barrez

Activiti 5.19.0.2 released

We just released Activiti 5.19.0.2. This is a bugfix release with the following changes:

  • Security fix around the use of commons-collections (for more info, see the commit)
  • Upgrade to MyBatis 3.3.0
  • Enhanced AsyncExecutor for high-load scenarios (especially when the job queue is full)
  • A few small bug fixes

by Joram Barrez at January 29, 2016 03:52 PM

Frank Goossens

Music from Our Tube: Rocket Juice for Peace

Before Atoms for Peace released AMOK (but after Thom Yorke, Flea et al. joined forces) there was a one-off album by Rocket Juice and the Moon, a group around Damon Albarn (Blur), Flea and the great Tony Allen (Fela Kuti's drummer/percussionist and producer). And below's Follow-Fashion, for example, sounds like the nonidentical twin of Atoms for Peace's Before Your Very Eyes, doesn't it?

[Embedded video: watch on YouTube]

by frank at January 29, 2016 12:29 PM

Xavier Mertens

[SANS ISC Diary] Scripting Web Categorization

The following diary was published on isc.sans.org: Scripting Web Categorization.

by Xavier at January 29, 2016 09:27 AM

January 27, 2016

Wim Leers

BigPipe webinar

This talk covers everything in BigPipe (the webinar I did a few weeks ago was only an introduction).

I’ll let the webinar summary speak for itself:

Did you know that the majority of the time spent generating an HTML page is spent on a few personalized parts? Pages are only sent to the client after everything is rendered. Well, what if Drupal could start sending a page before it's fully finished? What if Drupal could let the browser start downloading CSS, JS, and images, and even make the majority of the page available for interaction while Drupal generates the personalized parts? Good news - it can.

This is where Drupal 8’s improved architecture comes in: it allows us to do Facebook BigPipe-style rendering. You can now simply install the BigPipe module for Drupal 8 and see the difference for yourself.

In this webinar, we will discuss:

  • The ideas behind BigPipe
  • How to use BigPipe on your Drupal site
  • How Drupal is able to do this (high-level technical information)
  • How you can make your code BigPipe-compatible (technical details)
  • What else is made possible thanks to the improved architecture in Drupal 8 (ESI etc.), without any custom code
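
As a rough illustration of the last two points: in Drupal 8, a render array can mark a personalized part for placeholdering via #lazy_builder, and the BigPipe module then streams that part after the rest of the page. A minimal sketch, assuming a hypothetical mymodule.cart_builder service:

$build['cart'] = [
	// Defer rendering of this personalized part: with BigPipe enabled,
	// the placeholder is replaced by a later chunk of the response.
	// Arguments passed to a lazy builder must be scalars.
	'#lazy_builder' => [ 'mymodule.cart_builder:build', [ $user_id ] ],
	'#create_placeholder' => TRUE,
];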

by Wim Leers at January 27, 2016 10:55 PM

Matt Casters

Matt’s Mad Labs Hangout

I'm starting a new quick and easy way for Pentaho Labs to connect with the community: a regular hangout.
The first hangout will take place on February 16th, 7pm CET, 1pm Eastern, 10am Pacific.

Matt’s Mad Labs Hangout

We’ll discuss the cool things Labs can and will build in 2016 and beyond.

See you then,

Matt

by Matt Casters at January 27, 2016 05:57 PM

January 26, 2016

FOSDEM organizers

Roadworks near ULB Solbosch

Like last year, there are roadworks on Av Buyl near the ULB Solbosch campus. This affects travellers coming by car or public transport.

By car

Please be aware that the entrance to the campus on Av Buyl is completely blocked due to roadworks: the only access to and from the parking is via Av Roosevelt. Also note that this means the narrow and steep ramp now carries two-way traffic, so be very careful, especially in bad weather.

Public transport

Like last year, buses and trams will avoid Av Buyl. When coming by tram line 25 or 94 you can switch…

January 26, 2016 03:00 PM

January 25, 2016

Mattias Geniar

Coming soon: SysCast

I'd like to introduce you to SysCast, a new online resource for learning about Open Source projects and Linux through screencasts.

I'm excited to announce a teaser for an upcoming new project I've been working on for the last several months. While it isn't finished yet, I'm just about ready to start a closed beta program with a limited number of testers.

What is it?

If you're familiar with any of these websites, SysCast is a healthy mix between Laracasts, Servers for Hackers, Udemy and Linux Academy.

The concept is relatively simple: if you want to learn about Nginx, follow a screencasting course. Everything you need to know is explained in "video series", spanning multiple episodes. Each episode talks you through, in detail, about a specific topic.

In the case of Nginx, for example, the episodes span the installation & basic configuration, how proxying works, what the different configuration parameters mean, ...

But instead of boring blogposts where you just skip straight to the configurations to copy/paste (like this blog, for example), you'll learn the impact of each individual parameter.

Each episode will have specific and to-the-point examples on why each parameter is needed and which purpose it serves.

Right now, we're working on series on The HTTP Protocol, Varnish 4, Config Management with Puppet, Basic Linux training, Apache configurations, HTTP/2 Explained, ....

We'll cover the most popular open source projects first and move on to more complex configurations like Elastic Search, Kibana, Ansible, ... So much to share!

Sounds like a lot of work, is this going to be free?

Well, it is a lot of work. And no, not everything will be free. I plan to offer a monthly subscription that gives you unlimited access to all videos and notifies you about new series and episodes being released.

However, not everything will be behind a paywall. In part for publicity and in part because I do enjoy sharing knowledge and content, some series will be free for all. It hasn't been decided which series those are going to be, though.

Time will tell.

What does it look like?

It's an early prototype (but functional!), but here are some teasers -- if you're into that sort of thing.

The homepage


The listing of all series


The listing of episodes in a specific series


The detail page of a single screencast, with video


Coolio, sign me up!

Well, I hope you're as excited as I am. :-)

I'll be looking for beta testers in the next couple of weeks, so if you'd like to give SysCast a try, sign up for the beta program at SysCa.st!

For other news, you can follow the Twitter account at @SysCast_Online. We'll also post the configurations used in each video on GitHub, so you can follow the SysCast organisation there too.

Any feedback is appreciated! Perhaps especially the negative criticism, so I can fine-tune (or abort altogether) the project where needed.

by Mattias Geniar at January 25, 2016 09:15 PM

Jeroen De Dauw

Replicator: Wikidata import tool

I’m happy to announce the first release of Replicator, a CLI tool for importing entities from Wikidata.

Replicator was created for importing data from Wikidata into the QueryR REST API persistence. It has two big conceptual components: getting entities from a specified source, and then doing something with said entities.

Entity sources

Wikidata Web API

[Embedded asciicast: importing entities via the web API]

As the above asciicast shows, you can import entities via the Wikidata web API. You need to be able to connect to the API, and this is by far the slowest way to import, though it is still much more convenient than getting a dump when you just want to import a few entities for testing purposes.

The one required argument of the API import command is the list of entities to import. This argument accepts single entity IDs such as Q1 and P42, as well as ranges such as Q1-Q100. You can have as many of these as you want, for instance P42 Q1-100 P20-30 Q1337. The -r or --include-references flag allows you to specify that all referenced entities should also be imported. This is particularly useful when you need the labels of these entities when displaying the one you actually imported. Finally, there is a verbosity option that allows switching between 3 different levels of output.

The command internally does batching, using my Batching Iterator PHP library. You can specify the batch size with the -b or --batchsize option. The command can also be safely aborted via ctrl+c. Rather than immediately dying and leaving your database (or other output) in a potentially inconsistent state, Replicator will finish importing the current entity before exiting. A summary of the import is displayed once it has completed or was aborted.

Wikidata dumps

It is possible to import data from both compressed and uncompressed JSON dumps. This functionality is exposed via several commands. import:json for uncompressed dumps, import:bz2 for bzip2 compressed dumps and import:gz for gzip compressed dumps. It is possible to specify a maximum number of entities to import, or to safely and interactively abort the import via ctrl+c. In both cases you will be presented with a continuation token that can be used to continue the import from where it stopped.

The JSON import functionality is built on my Wikidata JSON Dump Reader PHP library. You can even import XML dumps via the import:xml command, though you are likely better off sticking with the recommended JSON dumps.

Import targets

As Replicator was written for the QueryR REST API, it by default imports into the persistence used by this API. This persistence is composed of the QueryR EntityStore, the QueryR TermStore and Wikibase QueryEngine, all open source PHP libraries providing persistence for Wikibase data.

While internally Replicator uses a plugin system, there currently is no way to add additional targets without modifying the PHP code. The needed modifications are very trivial, and it is also relatively simple to make the application as a whole truly extensible. While I'm currently working on other projects, I suspect this capability is useful for various use cases. All it takes is implementing an interface with a method handleEntity( EntityDocument $entity ), and suddenly the tool becomes capable of importing into your MediaWiki, your Wikibase or your custom persistence.
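
To give an idea of what such a plugin could look like, here is a minimal sketch. The EntityHandler interface name and the file-writing body are assumptions for illustration; only the handleEntity signature comes from the description above:

<?php

use Wikibase\DataModel\Entity\EntityDocument;

// Hypothetical example: the real interface lives in the Replicator
// codebase; only the handleEntity signature is given in this post.
class MyWikiImporter implements EntityHandler {

    public function handleEntity( EntityDocument $entity ) {
        // Do whatever "importing" means for your persistence: write to
        // MediaWiki, your Wikibase, a file, a queue... Here we simply
        // append the ID of every imported entity to a file.
        file_put_contents(
            'imported-ids.txt',
            $entity->getId()->getSerialization() . PHP_EOL,
            FILE_APPEND
        );
    }

}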

Let me know if you are interested in creating such a plugin, then I will add the missing parts of the plugin system. I might get to this in some time anyway, and then do another blog post covering the details.

Further points of interest

I should mention that the Replicator application is built on top of the Wikibase DataModel and Wikibase DataModel Serialization PHP libraries, without which creating such a tool would be a lot more work, both initially and maintenance-wise. It also uses the Symfony Console component, which I can highly recommend to anyone creating a CLI application in PHP.

See also

If you are running a Wikibase instance on your wiki, take a look at the Wikibase Import MediaWiki extension by Aude. If you want to import things into Wikidata, then have a look at this reference Microdata import script by Addshore. If you are working with Java and want to import dumps, check out Wikidata Toolkit.

by Jeroen at January 25, 2016 05:43 PM

Les Jeudis du Libre

Arlon, February 4, S02E04: Electronic signatures

Come join us on February 4 to talk about electronic signatures!

After a slow start following the 1999 Directive, electronic signatures are finally taking off in Europe. At the intersection of legal, technical and institutional aspects, this domain presented numerous challenges that had to be overcome before European actors could benefit from its advantages. The presentation will focus on the SD-DSS framework and its companion software NexU, two open-source tools that ease the adoption of these technologies by taking care of the complex technical aspects of advanced signatures, of trust, and of smartcard access.

Practical information

Speakers:

Pierrick Vandenbroucke
Passionate about everything related to Java technology and IT security, Pierrick started his career five years ago in consultancy in Brussels before returning to Luxembourg. He joined the Nowina Solutions team as a Java analyst/developer and specialises in the creation and validation of advanced electronic signatures, notably through his work on the SD-DSS electronic signature framework.
David Naramski
A Java developer and relentless explorer/innovator, David took part in the design of the SD-DSS framework. A recognised expert in European legal electronic signatures, he has also assisted several Member States in deploying this solution. As the founder of Nowina Solutions, David helps financial and governmental institutions improve the efficiency of their business processes by capitalising on the advantages of electronic signatures.

 

by Didier Villers at January 25, 2016 05:26 PM

FOSDEM organizers

Last set of speaker interviews

We proudly present the last set of speaker interviews. See you at FOSDEM this weekend! Blake Girardot: Putting 8 Million People on the Map. Revolutionizing crisis response through open mapping tools. Holger Levsen: Beyond reproducible builds. Making the whole free software ecosystem reproducible and then… James Bottomley: How containers work in Linux. An introduction to NameSpaces and Cgroups. Langdon White: Re-thinking Linux Distributions. ... separate the operating system from the content. Michael Meeks: Scaling and Securing LibreOffice Online. Caging, taming and go-faster-striping a big beast of an office suite. If you haven't read our previous interviews with main track speakers…

January 25, 2016 03:00 PM

Dries Buytaert

Fast tracking life-saving innovations

Last week, a member of my family died in a car accident. Jasper was on his way home and was hit by a taxi. He fought for his life, but died the next day. Jasper was only 16 years old. I was at Davos and at one point I had to step out of the conference to cry. Five years ago, another family member died after she was hit by a truck when crossing the road.

It's hard to see a tragedy like this juxtaposed against a conference filled with people talking about improving the state of the world. These personal losses make me want to fast-forward to a time in the future where self-driving cars are normal, and life-saving innovations don't have as much regulatory red tape to cut through before they can have an impact. It's frustrating that we may have the right technology in sight today, but aren't making it available, especially when people's lives are at stake.

Imagine two buses full of people crashing, killing everyone on board, every single day. That is how many people die on America's roads every day. In fact, more people are killed by cars than by guns, but I don't see anyone calling for a ban on automobiles. Car accidents (and traffic jams) are almost always the result of human error. It is estimated that self-driving cars could reduce deaths on the road by 90%. That is almost 30,000 lives saved each year in the US alone. The life-saving estimates for driverless cars are on par with the efficacy of modern vaccines. I hope my children, now ages 6 and 8, will never need a driver's license and can grow up in a world with driverless cars.

The self-driving car isn't as far off as you might think but is still being held back by government regulators. Delayed technology isn't limited to self-driving cars. Life-saving innovations in healthcare are often held back by regulatory requirements. The challenge of climate change could be addressed faster if the regulatory uncertainty around solar and wind power permits and policies were reduced. The self-serving interest of lobbying groups focused on maintaining the status quo for industries like Big Oil make it harder for alternative energies to gain momentum.

Regulators need to frame their jobs differently; they need to ask how they can facilitate and enable emerging disruptive innovations, rather than maintain existing systems. Their job should focus more on removing any barriers that prevent disruptions from having a faster impact. If they do this job well, some established institutions will fail. In some cases, economic sacrifices by the incumbents should be of lesser concern than advancing social health and safety for the benefit of society. I'm less concerned about technology destroying jobs, and more concerned about our children not being able to benefit from available technical advances that improve their lives. We should realize that opportunities for long-term economic growth come with short-term disruption or temporary pain.

Losing family members in fatal accidents makes one think about what could have been done. I'm often asked how one can create a "Silicon Valley" model elsewhere in the world. I may have an answer. If you want to out-"Silicon Valley" Silicon Valley, create a region with a regulatory environment that supports prompt, responsible innovation to drive the adoption and iteration of new technologies. A region where people can responsibly launch self-driving cars, fast-track healthcare and address climate change. A region where long-term advantages are valued more than short-term disadvantages. Such a region would attract capital and entrepreneurs, and would be much better for our children.

by Dries at January 25, 2016 01:54 PM

January 24, 2016

Wouter Verhelst

Playing with Letsencrypt

While I'm not convinced that encrypting everything by default is necessarily a good idea, it is certainly true that encryption has its uses. Unfortunately, for the longest time getting an SSL certificate from a CA was quite a hassle -- and then I'm not even mentioning the fact that it would cost money, too. In that light, the letsencrypt project is a useful alternative: rather than having to dabble with emails or webforms, letsencrypt does everything by way of a few scripts. Also, the letsencrypt CA is free to use, in contrast to many other certificate authorities.

Of course, once you start scripting, it's easy to overdo it. When you go to the letsencrypt website and try to install their stuff, they point you to a "letsencrypt-auto" thing, which starts installing a python virtualenv and loads of other crap, just so we can talk to a server and get a certificate properly signed. I'm sure that makes sense for some people, and I'm also sure it makes some things significantly easier, but honestly, I think it's a bit overkill. Luckily, there is also a regular letsencrypt script that doesn't do all the virtualenv stuff, and which is already packaged for Debian, so installing that rids us of the overkill bits. It's not in stable, but it is an arch:all package. Unfortunately, not all dependencies are available in Jessie, but backporting it turned out to be not too complicated.

Once the client is installed, you need to get it to obtain a certificate, and you need to activate the certificate for your webserver. By default, the letsencrypt client will update your server configuration for you, run a temporary webserver, and do a whole lot of other things that I would rather not see it do. The good news, though, is that you can switch all that off. If you're willing to let it write a few files to your DocumentRoot, it can do the whole thing without any change to your system:

letsencrypt certonly -m <your mailaddress> --accept-tos --webroot -w \
    /path/to/DocumentRoot -d example.com -d www.example.com

In that command, the -m option (with mail address) and the --accept-tos option are only required the first time you ever run letsencrypt. Doing so creates an account for you at letsencrypt.org, without which you cannot get any certificates. The --accept-tos option specifies that you accept their terms of service, which you might want to read first before blindly accepting.

If all goes well, letsencrypt will tell you that it wrote your account information and some certificates somewhere under /etc/letsencrypt. All that's left is to update your webserver configuration so that the certificates are actually used. No, I won't tell you how that's done -- if you need that information, it's probably better if you just use the apache or nginx plugin of letsencrypt, which updates your webserver's configuration automatically. But if you already know how to do so, and prefer to maintain your configuration by yourself, then it's good to know that it is at least possible.

Since we specified two -d parameters, the certificate you get from the letsencrypt CA will have subjectAlternativeName fields, allowing you to use them for more than one domain name. I specified example.com and www.example.com as the valid names, but there is nothing in letsencrypt requiring that the domain names you specify share a suffix; so yes, it is certainly possible to have multiple domains on a single certificate, if you want that.

Finally, you need to ensure that your certificates are renewed before they expire. The easiest way to do that is to add a cronjob, which runs letsencrypt again; only this time, you'll need slightly different options:

letsencrypt certonly --renew-by-default --webroot -w \
    /path/to/DocumentRoot -d domain.name.tld -d www.domain.name.tld

The --renew-by-default option tells letsencrypt that you want to renew your certificates, not create new ones. If you specify this, the old certificate will be revoked, and a new one will be generated, again with three months validity. The symlinks under /etc/letsencrypt/live will be updated, so if your webserver's configuration uses those as the path to your certificates, you don't need to update any configuration files (although you may need to restart your webserver). You'll also notice that the -m and --accept-tos options are not specified this time around; this is because those are only required when you want to create an account -- once that step has been taken, letsencrypt will read the necessary information from its configuration files (and creating a new account for every renewal is just silly).
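
Wrapped in an actual crontab entry, that could look like this (a sketch; the schedule is a placeholder and the paths are the ones from the earlier example -- since certificates are valid for three months, running monthly leaves plenty of margin):

# run at 03:00 on the first day of every month
0 3 1 * * letsencrypt certonly --renew-by-default --webroot -w /path/to/DocumentRoot -d example.com -d www.example.com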

With that, you now have a working, browser-trusted, certificate.

Update: fix mistake about installing on Jessie. It's possible (I've done so), but it's been a while and I forgot that I had to recompile several of letsencrypt's dependencies in order for it to be installable.

January 24, 2016 11:01 AM

January 23, 2016

Jeroen De Dauw

The Liskov Substitution Principle

I have a number of web-based presentations online on the software craftsmanship topic. You can find these on the software craftsmanship page of my website.

I've just written an abstract for a presentation on the so-called Liskov Substitution Principle. This is a presentation I've done a number of times and have refined over time. You can read the abstract and view the slides, though the slides are not really made for use outside of a presentation. Here is the abstract:

The Liskov Substitution Principle

Everyone knows how to do inheritance. You create an interface or a base class and implement it or derive from it. Done.

Good software design is, however, not quite that easy. When using inheritance, a number of basic principles should be taken into account. Favouring composition over inheritance, not using inheritance for code reuse, and using narrow, well-segregated interfaces come to mind. Yet even though it belongs on this list of fundamentals and is critical in nature, the Liskov Substitution Principle is not well known by many developers.

This presentation will introduce you to said principle, and teach you how to both recognize violations and deal with the consequences. Examples of typical violations are provided, the undesired results are outlined, and means to avoid these pitfalls are explained.

Audience

This presentation is aimed at developers. It is suitable both for people new to the field and those with many years of experience. Knowledge of inheritance is required.

Examples are mainly in PHP and Java, though the presentation also applies to other OOP languages. All examples are trivial, so knowledge of these languages is not required, though it is of course a plus.
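
As a small teaser of the kind of problem the presentation covers, here is a minimal PHP sketch of the textbook Rectangle/Square violation (my own illustrative example, not one taken from the slides):

<?php

class Rectangle {
    protected $width;
    protected $height;

    public function setWidth( $width ) {
        $this->width = $width;
    }

    public function setHeight( $height ) {
        $this->height = $height;
    }

    public function getArea() {
        return $this->width * $this->height;
    }
}

// A Square "is a" Rectangle, so inheritance seems natural...
class Square extends Rectangle {
    public function setWidth( $width ) {
        $this->width = $width;
        $this->height = $width; // keep the sides equal
    }

    public function setHeight( $height ) {
        $this->setWidth( $height );
    }
}

// ... yet code written against Rectangle breaks when handed a Square:
function testArea( Rectangle $rectangle ) {
    $rectangle->setWidth( 4 );
    $rectangle->setHeight( 5 );
    // Holds for a Rectangle, fails for a Square (which yields 25).
    assert( $rectangle->getArea() === 20 );
}

The subclass fulfils the parent's interface but not its behavioural contract, which is exactly what the principle is about.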

Topic list

The covered topics include:

Slides

The presentation contains high quality slides with lots of long complicated sentences, citations and detailed proofs, to ensure sufficient seriousness.

The Liskov Substitution Principle


 

by Jeroen at January 23, 2016 11:33 PM

Lionel Dricot

Printeurs 38

This is post 38 of 38 in the Printeurs series

Nellio and Junior have arrived in a factory making sex dolls in Eva's likeness. Eva's voice rings out behind them.

— What proves to me that you are the real Eva?
— Nellio, I am the Eva you have always known, the one you met at CrazyDog's conference.
— You haven't answered my question! And besides, why did you send me the coordinates of this factory?
— But I didn't…
— That was me!

The voice is soft and warm. It springs up behind me like that of a wildlife-documentary narrator. I turn around and find myself face to face with a jumble of flesh, metal and plastic.
— Max!
— Indeed, he continues in his incredibly warm timbre. As you can hear, I have even repaired my voice generator.

The two employees come towards us. I notice that neither shows any excessive curiosity, or even surprise. The taller of the two addresses Max familiarly.
— Max, damn it, are these your friends? Did you bring them here? You're going to get us into trouble!
— Relax, the warm, captivating voice of the generator continues. This was the only place I trusted enough to meet Nellio in safety. It was also necessary for his education.
— Huh? I don't get any of this, Max. All I know is that the four of you are unauthorised and that we risk losing our job.
— Your job?

Max bursts out laughing. His hybrid body twists and wriggles but, strangely, only a calm, friendly laugh can be heard.

— Some job! You spend your time browsing porn sites and screwing disjointed mannequins. You are the only two humans in the whole factory, because the operating contract demanded the creation of local jobs!
— That's not your problem, Max! What matters is that we escape tele-pass status. And that's an advantage I intend to keep, even if it means calling in a militia.
— And who would supply your fixes if I got arrested by the cops?
— You're not the only dealer, Max!
— No, but I'm the best. Come on, I'm teasing you. I understand your concern. Thanks for your help and your welcome! Here, a dose for each of you, it's on me.

A broad smile lights up the two employees' faces. With an incredibly swift gesture, Max hands them two small black capsules. They immediately insert them into their left ear before collapsing instantly to the floor, disjointed puppets, their mouths twisted into a drooling grin.

I let out a cry:
— Max, you haven't…
— No, don't worry. They're just asleep for a quarter of an hour. Afterwards, they'll get their normal fix and won't even be aware of having been in a coma. They'll barely remember our meeting.

Junior exclaims:
— Holy keyboard! My whole department is hunting for the neuro-software traffickers.
— Trafficker is a big word. I am the only real supplier.

Max punctuates his sentence with a facial movement resembling a wink. But between the screws, the glistening exposed muscles and the printed parts, I confess I cannot really read his emotions.

— Neuro-software? Isn't that nasty stuff? You really disappoint me, Max! I say in an outraged tone.
— That's what the propaganda would have us believe. But neuro-software is, on the contrary, the ideal drug! All it takes is an implant in the eardrum, which connects to your neural network. Then, provided you know what you are doing, you can program absolutely any trip. I have specialised in micro-doses. The chip melts with the heat of the human body and dissolves completely. Without that, trips would be unlimited, which is not very good for my business! A short, punchy trip is perfect, my clients keep coming back for more!
— But… that's repugnant! You are manipulating people's brains!
— Yes. But without all the side effects that chemical drugs can induce. My trips are completely safe. I even stock trips compatible with normal social activity. Your partner bores you? I have a trip that lets you live in inner ecstasy while you spend the evening looking attentive and answering questions.
— Say, Junior murmurs, wouldn't you have something to help me bear the pain until I get patched up?

Max leans towards Junior's ear. I interrupt him solemnly.
— Max, why did you bring me here?

He hesitates for a fraction of a second.
— Because I don't think anyone will come looking for you here.
— Who is "anyone"?
— The cops, Georges Farreck, the industrialists… I admit I don't really know who is pulling the strings. You are hunted, wanted, but I can't put my finger on the person running it all.
— Georges Farreck, I say, but I don't understand his persistence.
— Georges Farreck is only a pawn, Max asserts. He was merely an instrument, a catalyst to bring the printeur into being. Just like you and…

With a sweeping gesture, he points at the hangar filled with motionless mannequins.

— … Eva!

I turn towards Eva, the real one. She has not moved, her gaze is dull and her face betrays anguish.

I realise that Max did not point at her: he really did indicate the whole hangar.

— Eva? What exactly is your role!

A tear rolls down her cheek.

— Nellio! I beg you to trust me. I am on your side!
— Answer my question: are you, yes or no, the real Eva? The original?
— I thought you had understood, Max interrupts us in a soft voice. The real Evas are there, in this hangar. You have only ever known the copy.
— Huh?
— Not that I want to spoil the party, but I am still pissing blood and those two jokers will wake up soon. Would it kill you to look after me a little? Damn it, it hurts!

Watching Junior writhe in pain, I suddenly have a flash of insight.
— Max, take care of Junior and make sure the two employees stay asleep.
— But I can't modify their shot without…
— Listen Max, figure it out. I need two hours. And then we get out of here.

I grab Eva's arm.
— Come with me!
— What are you doing, Nellio?
— They want the printeur? Well, we'll give them the printeur!

 

Photo by 7-how-7.

Thank you for taking the time to read this freely paid post. Feel free to support me with a few milliBitcoins: 12uAH27PhWepyzZ8kzY9rMAZzwpfsWSxXt, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE licence.

Flattr this!

by Lionel Dricot at January 23, 2016 11:13 AM

Frank Goossens

Music from Our Tube: Alabama Shakes’ feeling good

Alabama Shakes performing “This Feeling” live. “Please don’t take this feeling I have found at last”. Soul as it was meant to be!

YouTube Video
Watch this video on YouTube.

by frank at January 23, 2016 09:06 AM

January 20, 2016

Joram Barrez

Activiti Community Meetup Antwerp – March 2nd 2016 !

It’s been way too long since we’ve had an Activiti meetup in Belgium (which is where I live, so there is really no excuse). Over a lunch with my good friend Yannick Spillemaeckers, we decided to rectify that. So I’m proud to announce we’ll be having a meetup on March 2nd 2016 in Antwerp! You can […]

by Joram Barrez at January 20, 2016 01:03 PM

January 19, 2016

Wim Leers

BigPipe introduction webinar

Acquia frequently does webinars, and given my work on BigPipe, I was asked to present together with Fabien Potencier (of Symfony fame). Fabien covered back-end performance in the form of profiling; I covered front-end performance, and really the intersection between front-end and back-end performance, in the form of BigPipe.

by Wim Leers at January 19, 2016 11:06 AM

January 18, 2016

Frank Goossens

Know about secure communication apps and privacy?

For the people on Planet Grep (and other technology-minded boys & girls): I’ve been contacted by a journalist who is looking for someone with knowledge of how secure communication apps (Telegram was mentioned) work and who can explain how these can be used to stay under the radar of law enforcement (with good or bad intentions, obviously). If you’re interested or know anyone who might be, do contact me and I’ll patch you through!

by frank at January 18, 2016 10:23 PM

Wim Leers

Performance Calendar 2015: "2013 is calling: are CMSes fast by default yet?"

This year, Performance Planet did an advent calendar again, just like the last few years. I also contributed an article — what follows is a verbatim copy of my “2013 is calling: are CMSes fast by default yet?” article for the 2015 Performance Calendar that was published exactly a month ago.

Two years ago, I wrote about how making the entire web fast is the real challenge, and how only a few big companies are able to have very fast websites, because only they have the resources to optimize for performance. Many (if not most) organizations are happy just to have a functioning, decent-looking site.

In that article, I said I thought that if we made the most popular CMSes faster by default, that would go a long way toward making the majority of the web faster. Less technical users would then not need to know about the many arcane switches and knobs just to get the intended performance. No more “oh, but you didn’t set it up correctly/optimally” — or at least far less of it.

Drupal 8: fast by default on many fronts

Since that article two years ago I’ve been working with >3300 other contributors to get the next version of Drupal done. Drupal 8 was released a month ago and is much faster by default.

All of the following changes improve perceived performance and Drupal 8 has all of them enabled by default:

… and for those last three fast defaults, each of which does caching: they don’t ever serve stale content.

Invalidation is instantaneous and optimal thanks to cache tags, and Drupal 8’s cacheability metadata opens up lots of interesting possibilities.

Also, some CDNs already support cache tags, which means instantaneous invalidation of content even when it is served from a CDN.
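
To sketch the mechanism (a simplified illustration; exact header names vary per integration, and Drupal 8 only emits its tags in a debug header when configured to do so): every response lists the cache tags of everything that was used to render it, along the lines of:

HTTP/1.1 200 OK
X-Drupal-Cache-Tags: node:42 user:3 config:system.site

When node 42 is then saved, every cached response carrying the node:42 tag is invalidated -- and nothing else, so no stale content and no needless purging.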

More fast defaults coming in minor Drupal 8 releases

We also paved the path for more optimizations to land in upcoming minor versions:

There are of course many additional defaults that would help Drupal 8 be even faster by default, such as:

WordPress 4.4: responsive images by default

WordPress has made one significant change to improve performance: responsive images are enabled by default since the 4.4 release a week ago. Sadly, that’s the only significant WPO change in 2 years. (Six significant releases were made: versions 3.8 through 4.4.)

Joomla 3.x: status quo

Unfortunately, no significant steps forward have been made by Joomla to make it fast by default. 3.3.0 made the routing system faster and continued the conversion from MooTools to jQuery, which hopefully translates to less JavaScript being loaded. (Two significant releases were made: 3.3.0 and 3.4.0.)

Let’s…

So, Drupal has put a lot of effort into making version 8 fast by default. As Drupal 8 is rolled out, hundreds of thousands of sites will become faster.

Hopefully other CMSes (and frameworks) will follow.

Let’s make the entire web fast!


  1. Functionality that is implemented in JavaScript for authenticated users is applied on the server side once and cached for subsequent visits. 

  2. Yes, for HTTP/2 that will make less sense … but at least today’s HTTP/2 servers and browsers still benefit from aggregation.

  3. To do JavaScript minification in compliance with the GPL is not trivial.

by Wim Leers at January 18, 2016 08:35 PM

FOSDEM organizers

Third set of speaker interviews

We have added a whole lot of new interviews with some of our main track speakers, with topics varying from systemd through the Vulkan graphics API, live migration of virtual machines and code reviews to the future of OpenDocument: Adrien Béraud and Guillaume Roguez: Building a peer-to-peer network for Real-Time Communication. Can a true peer-to-peer architecture, with no central point of control, be a universal and secure solution? Alberto Bacchelli: What Do Code Reviews at Microsoft and in Open Source Projects Have in Common? Amit Shah: Live Migration of Virtual Machines From the Bottom Up Anand Babu (AB) Periasamy: Rearchitecting…

January 18, 2016 03:00 PM

January 17, 2016

Frank Goossens

Autoptimize: why are .php,jquery excluded from JS optimization?

If you see .php,jquery added to the JS optimization exclusion list and wonder why that is the case: these are added by WP Spamshield to ensure the optimal functioning of that plugin, and when you remove them, they will be re-added automatically by WP Spamshield.

I am currently exchanging information with Scott (WP Spamshield’s developer) to potentially improve this (as the jquery entry causes all JS containing that string to be excluded from optimization), which is sub-optimal from a performance point of view.

If you know what you are doing (i.e. if you are willing to do the trial-and-error dance) you can trick WP Spamshield into leaving the JS optimization exclusion list as is by entering:

.php,jquery.js

which reduces the exclusion to just jquery.js (and .php, but that doesn’t really matter), which based on a (too) quick test could be enough for WP Spamshield to function. Alternatively, you could enter:

.php,jquery.IknowwhatImdoing

which invalidates the exclusion of any jquery (which might work with WP Spamshield if you ‘force JS in head’).

If you do decide to trick WP Spamshield this way, this is entirely and only your responsibility and you should test this thoroughly. Don’t ask Scott (or me) for support for such ugly hacks ;-)

by frank at January 17, 2016 05:23 PM

January 16, 2016

Jeroen De Dauw

FAF science: Percivals vs T1 PDs

I’ve decided to start writing small, to-the-point blog posts on Supreme Commander Forged Alliance whenever I stumble upon a good topic. For those who don’t know it, this is the best Real Time Strategy game out there. It’s been said that it is to StarCraft what chess is to Tic-tac-toe. Check out the community-maintained multiplayer service Forged Alliance Forever for more info.

Lately I’ve been playing a lot of Phantom-X games on the Four Corners map. This changes up the Phantom meta compared to the usual maps so much and leads to games I find way more interesting. One scenario I’ve run into a few times is that I’m UEF inno and am next to a UEF phantom that spams a crapton of Percivals. I started wondering how effective T1 PDs are at countering percies on this map, and did some testing to find out.

Supreme Commander Forged Alliance: Four Corners map

In most situations percies will straight out win, since they can pick off the T1 PDs from a distance, as their range is significantly bigger. However, on this map the percies have to move through the water first, a period during which neither they nor the T1 PDs can exchange fire. And once they emerge from the water, they’ll be in range of the PDs (unless you placed them really badly).

Even with this advantage, how mass-effective is countering percies with T1 PDs? Percies have a 4 second firing cycle and deal 1600 damage per shot, thus one-shotting T1 PDs. During my testing I found that you can build 5 PDs for the cost of a percie. I did not look into the stats further, as the best way to find out how they actually compare is to let the game itself do the work.

(Unit icons: 1 Percival vs 5 T1 point defenses)

The simplest scenario is 1 percie vs 5 PDs, which is what I started with. The percie wins, with 20%ish of its health left. Next I tried 3 percies and 15 PDs. I figured that the more percies, the more the advantage would go to the PDs. Some percies might target the same PD, plus they do not all emerge from the water at the same time when in a group. A third factor is that the DPS of the percies goes down in “bigger steps”. (If you do 90% damage to a single percie, it will still have 100% dps, while 90% damage to 5 PDs will only leave 20% dps.) And indeed, like this the PDs win, with 4 or 5 PDs surviving. Moving up another step to 10 percies and 50 PDs, the difference gets even more pronounced, with half of the PDs surviving.
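
To see that “bigger steps” effect in isolation, here is a toy PHP sketch (the numbers are placeholders, not actual game stats): one big unit versus five small ones with the same total health and dps, both taking 90% damage.

<?php

// Toy numbers only, NOT actual game stats.
$oneBigUnit = array( array( 'health' => 1000, 'dps' => 100 ) );
$fiveSmall  = array_fill( 0, 5, array( 'health' => 200, 'dps' => 20 ) );

function remainingDps( array $units, $damage ) {
    $dps = 0;
    foreach ( $units as $unit ) {
        $taken = min( $damage, $unit['health'] );
        $damage -= $taken;
        if ( $taken < $unit['health'] ) {
            $dps += $unit['dps']; // survivors keep firing at full dps
        }
    }
    return $dps;
}

echo remainingDps( $oneBigUnit, 900 ); // 100: the big unit survives at full dps
echo remainingDps( $fiveSmall, 900 );  // 20: four of the five small units are dead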

As UEF also has access to the highest health per mass cost unit in the game, the T2 mobile shield, I also experimented with this. The mass cost of such a generator is only half that of a T1 PD, and it takes percie shots to kill the shield, and then another to kill the generator. Interestingly, the generators appear to get targeted before the PDs, which in this situation is advantageous for the defender. Repeating the 10 percie assault when there are only 45 PDs and 5 shields leads to the shields getting killed and only a few PDs dying.

So it seems that T1 PDs plus shields are a viable counter to percies in this very specific scenario, even taking the Phantom bonus into account. Of course it’s better if you prevent the Phantom from building up a percie blob in the first place :)

by Jeroen at January 16, 2016 05:38 PM

Quotes: Entropy, the final enemy of all things

Recently I wrote about the books I read in 2015, and promised a post with quotes I particularly liked. These fall into several categories: funny ones, those with a high coolness factor, and those that make one reconsider something or are otherwise interesting. Before I get to that though, I just have to post my collection of quotes on entropy :)

“You cannot fight entropy.”
― Peter F. Hamilton, The Neutronium Alchemist

“It is the epitome of entropy, the final enemy of all things.”
― Peter F. Hamilton, The Evolutionary Void

“Time travel is pure bullshit, impossible; nobody can defeat causality or entropy.”
― Peter F. Hamilton, The Temporal Void

“Venter Biomorphics—the last of the old-time corporations—had finally lost the fight against entropy and been swept away.”
― Peter Watts, Echopraxia (Firefall)

“Yet, in the end, entropy will always emerge victorious, snuffing out the very last glimmer of heat and light. After that there is only darkness. When that state is reached even eternity will cease to exist, for one moment will be like every other and nothingness will claim the universe.”
― Peter F. Hamilton, The Temporal Void

“Out of twinkling stardust all came, into dark matter all will fall. Death mocks us as we laugh defiance at entropy, yet ignorance birthed mortals sail forth upon time’s cruel sea.”
― Peter F. Hamilton, The Temporal Void

“This must be the sixth realm, the nameless void. Entropy is the only lord here. We will all bow down before him in the end.”
― Peter F. Hamilton, The Naked God

“This universe and all it is connected with will come to an end. Entropy carries us towards the inevitable omega point, that is why entropy exists.”
― Peter F. Hamilton, The Naked God

Hamilton makes it seem like entropy is so evil. Luckily there are more enlightened characters in other novels!

“Entropy is a bitch,” I said. “Now, now,” said Aenea from where she was leaning on the terrace wall. “Entropy can be our friend.”
― Dan Simmons, Endymion (Hyperion Cantos)

“Nothing up there tonight but entropy, and the same imaginary shapes that people had been imposing on nature since they’d first thought to wonder at the heavens.”
― Peter Watts, Echopraxia (Firefall)

As a bonus, enjoy this great short story by Asimov about entropy:

by Jeroen at January 16, 2016 07:44 AM

January 15, 2016

Frank Goossens

My m.deredactie.be alternative, now with Sporza too

It was exactly a year ago that I last wrote about my m.deredactie.be alternative, and it had been even longer since I had tinkered with it. Because I had some spare time between my real work and my WordPress plugin tinkering, I improved a few things:

  1. You can now also read Sporza news
  2. On PHP installations without APC support (apc_store/apc_fetch), the cache is now kept on disk
  3. On PHP installations without cURL support, it now falls back to file_get_contents
  4. A series of smaller UI improvements and fixes for PHP notices
  5. Tested on PHP 5.2, 5.5, 7.0 and HHVM (on OpenShift)

The (crappy) source code is still up on GitHub; bug reports or pull requests are very welcome there :-)
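
A sketch of what the cURL fallback from point 3 can look like (an illustration only, not the actual code from the repo):

function fetch_url( $url ) {
    // Prefer cURL when the extension is available...
    if ( function_exists( 'curl_init' ) ) {
        $curl = curl_init( $url );
        curl_setopt( $curl, CURLOPT_RETURNTRANSFER, true );
        $response = curl_exec( $curl );
        curl_close( $curl );
        return $response;
    }
    // ... and fall back to file_get_contents otherwise.
    return file_get_contents( $url );
}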

by frank at January 15, 2016 04:28 PM

Xavier Mertens

[SANS ISC Diary] JavaScript Deobfuscation Tool

The following diary was published on isc.sans.org: JavaScript Deobfuscation Tool.

[The post [SANS ISC Diary] JavaScript Deobfuscation Tool has been first published on /dev/random]

by Xavier at January 15, 2016 12:20 PM

Dries Buytaert

Drupal: 15 years old and still gaining momentum

On December 29, 2000, I made a code commit that would change my life; it is in this commit that I called my project "Drupal" and added the GPL license to it.

Drupal name and license

The commit where I dubbed my website project "Drupal" and added the GPL license.

A couple weeks later, on January 15, 2001, exactly 15 years ago from today, I released Drupal 1.0.0 into the world. The early decisions to open-source Drupal and use the GPL license set the cornerstone principles for how our community shares with one another and builds upon each other's achievements to this day.

Drupal is now 15 years old. In internet terms, that is an eternity. In 2001, only 7 percent of the world's population had internet access. The mobile internet had not entered the picture, less than 50% of the people in the United States had a mobile phone, and AT&T had just introduced text messaging. People searched the web with Lycos, Infoseek, AltaVista and Hot Bot. Google -- launched in 1998 as a Stanford University research project -- was still a small, private company just beginning its rise to prominence. Google AdWords, now a $65 billion business, had less than 500 customers when Drupal launched. Chrome, Firefox, and Safari didn't exist yet; most people used Netscape, Opera or Internet Explorer. New ideas for sharing and exchanging content such as "public diaries" and RSS had yet to gain widespread acceptance and Drupal was among the first to support those. Wikipedia was launched on the same day as Drupal and sparked the rise of user-generated content. Facebook and Twitter didn't exist until 4-5 years later. Proprietary software vendors started to feel threatened by open source; most didn't understand how a world-class operating system could coalesce out of part-time hacking by several thousand developers around the world.

Looking back, Drupal has not only survived massive changes in our industry; it has also helped drive them. Over the past decade and a half, I've seen many content management systems emerge and become obsolete: Vignette, Interwoven, PHP-Nuke, and Scoop were all popular at some point in the past, but Drupal has outlived them all. A big reason is that from the very beginning we have been about constant evolution and reinvention, painful as that is.

Keeping up with the pace of the web is a funny thing. Sometimes you'll look back on choices made years ago and think, "Well, I'm glad that was the right decision!". For example, Drupal introduced "hooks" and "modules" early on, concepts that are commonplace in today's platforms. At some point, you could even find some of my code in WordPress, which Matt Mullenweg started in 2003 with some inspiration from Drupal. Another fortuitous early decision was to focus Drupal on the concept of "nodes" rather than "pages". It wasn't until 10 years later with the rise of mobile that we started to see the web revolve less and less around pages. A node-based approach makes it possible to reuse content in different ways for different devices. In a way, much of the industry is still catching up to that vision. Even though the web is a living, breathing thing, there are a lot of things that we got right.

Other times, we got it wrong. For example, we added support for OpenID, which never took off. In the early days I focused, completely and utterly, on the aesthetics of Drupal's code. I spent days trying to do something better, with fewer lines of code and more elegant than elsewhere. But in the process, I didn't focus enough on end-user usability, shunned JavaScript for too long, and later tried to improve usability by adding a "dashboard" and "overlay".

In the end, I feel fortunate that our community is willing to experiment and break things to stay relevant. Most recently, with the release of Drupal 8, we've made many big changes that will fuel Drupal's continued adoption. I believe we got a lot of things right in Drupal 8 and that we are on the brink of another new and bright era for Drupal.

I've undergone a lot of personal reinvention over the past 15 years too. In the early days, I spent all my time writing code and building Drupal.org. I quickly learned that a successful open source project requires much more than writing code. As Drupal started to grow, I found myself an "accidental leader" and worried about our culture, scaling the project, attracting a strong team of contributors, focusing more and more on Drupal's end-users, growing the commercial ecosystem around Drupal, starting the Drupal Association, and providing vision. Today, I wear a lot of different hats: manager of people and projects, evangelist, fundraiser, sponsor, public speaker, and BDFL. At times, it is difficult and overwhelming, but I would not want it any other way. I want to continue to push Drupal to reach new heights and new goals.

Today we risk losing much of the privacy, serendipity and freedom of the web we know. As the web evolves from a luxury to a basic human right, it's important that we treat it that way. To increase our impact, we have to continue to make Drupal easier to use. I'd love to help build a world where people's privacy is safe and Drupal is more approachable. And as the pace of innovation continues to accelerate, we have to think even more about how to scale the project, remain agile and encourage experimentation. I think about these issues a lot, and am fortunate enough to work with some of the smartest people I know to build the best possible version of the web.

So, here is to another 15 years of evolution, reinvention, and continued growth. No one knows what the web will look like 15 years in the future, but we'll keep doing our best to guide Drupal responsibly.

by Dries at January 15, 2016 11:32 AM

January 13, 2016

Mattias Geniar

Delete files older than X days in Linux

The post Delete files older than X days in Linux appeared first on ma.ttias.be.

Sometimes you want to remove old files on a server, like old log files or temporary files created ages ago, that are no longer relevant. Here's a quick command that lets you do that.

But first, let's play safe and show you a command to view the files older than X days, so you can review the list first before you launch the delete-command.

$ find . -mtime +180 -print

To break it down: we use find in the current directory (the dot .) and list all files with a modification date that's older than 180 days. After that, we print the files to stdout with the -print command (which, technically, is obsolete -- you can omit that parameter and it'll still work, but it makes for a clearer example).

To use another directory, simply replace the dot:

$ find /to/your/directory -mtime +180 -print

Now, to actually delete the files older than X days, you can use any of these variants. Be careful though: there's no trash bin or confirmation here; once you copy/paste this, it'll delete everything in your current working directory older than 180 days. ;-)

$ find . -mtime +180 -delete;
$ find . -mtime +180 -exec rm -f {} \;

As before, the -mtime parameter is used to find files older than X. In this case, it's older than 180 days. You can either use the -delete parameter to immediately let find delete the files, or you can let any arbitrary command be executed (-exec) on the found files.

The latter is slightly more complex, but offers more flexibility if you want to move the files to a temp directory instead of deleting them. In that case, you can swap out the rm with an mv.
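
For instance, moving everything older than 180 days into a temp directory instead (a sketch; the target directory is just an example and must already exist):

$ find . -mtime +180 -exec mv {} /tmp/old-files/ \;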

The post Delete files older than X days in Linux appeared first on ma.ttias.be.

by Mattias Geniar at January 13, 2016 08:30 PM

Lionel Dricot

Escape the manipulation of your emotions!


It has already been two and a half years since I wrote Le blog d'un condamné (republished on Wattpad for those who want to rediscover it).

An interesting anecdote: two years on, I still regularly receive insults. The reason? What I wrote is supposedly scandalous because "I play with people's emotions". I wrote a piece of fiction without flashing signs declaring that it was fiction. That supposedly makes me a manipulator of emotions.

These critics show a disarming naivety in the face of the harsh realities of the world.

Because we are hungry for emotions

Using reason is difficult and tiring. Emotions, on the other hand, are easily absorbed. Sentimental novels, horror films, blockbusters and adverts all set out to give us raw emotions, without any reflection, by playing on the manipulation of our senses through colours, music, sound effects, carefully crafted editing, and so on.

We are shown people in the grip of extreme emotions; we are saturated with emotions so that they spill over onto us a little. TV series, whose stories are often uninteresting and riddled with contradictions, keep their audience hooked by "attaching us to a character".

Reality TV and celebrity gossip are even more effective. There is no longer even an attempt at construction: you are simply exposed to the fact that other people in the world have lives and emotions. You can then live those emotions vicariously.

On the Internet, the market for low-cost emotions has become a true caricature, with clickbait headlines: "You won't believe what this little dog is about to do" or "A mum cried when she saw this video".

The content is always useless and utterly without interest. But it contains a simplistic emotion.

Because we live in a fiction

Take the buzz on the Internet or in the media. Whether the story is true or not hardly matters: most are either completely false, very old, or so distorted that they no longer make any sense.

"Journalism" and "the media", so generously helped along by public subsidies, have become full participants in this great market of emotion. They do not seek to inform, even if they are convinced they do, but only to supply emotion: local, low-budget versions of Hollywood or Endemol.

The bad faith or incompetence of most editors distorts every story we read anyway. The TV news is now nothing but a concentrate of emotional flashes that sells you a clear conscience ("I like being informed") while priming your brain for the commercial break.

Because it is a business and we are the suckers

An interesting consequence of emotions, especially the most vivid ones, is that they open wide the doors of the brain and of memory.

The reason is simple: for a human living on the savannah, remembering striking events perfectly (the encounter with a predator) provides a non-negligible evolutionary advantage (remembering how you got away).

This peculiarity of our brain is exploited to perfection by advertisers, who can thus inscribe their messages in our brain as in a blank book.

We need our dose of emotion and, in exchange, we hand over our personality to be moulded.

In your opinion, when you switch on the TV or click on an unexpected Facebook link, is the deal in your favour? Is the emotion you receive worth more than the piece of your personality you have handed over to some advertiser?

Because we give up our free will too easily

During certain events, this media machinery of emotion-as-commodity goes into overdrive. The media enter a game of one-upmanship and it becomes virtually impossible not to be informed in real time about the event in question. Everyone rushes in, insists on "being informed", watches the same images on a loop. It is even socially unacceptable not to take part in the collective emotion!

A godsend for advertisers and for the "press" sites living off advertising which, true vultures of tragedy, see their revenues explode. Advertising algorithms automatically latch onto the juiciest events.

This emotional terrorism does not only have economic spin-offs; it is also extremely profitable for all those who thrive on the irrational and on fear. What better way to push through absurd, security-obsessed and dangerously retrograde political measures than a population numbed by fear?

Because we must learn to protect ourselves

To fight this incessant flood of manipulative emotions, I have already explained that I avoid news sites and that I separate "information gathering" from the actual reading.

During a particularly publicised and striking event, I now apply the following precepts:

With a week's hindsight, I realise that the dozens of articles in Pocket are already obsolete, that they all say the same thing and that I would really have been wasting my time. Now and then, an article written with a bit more distance and analysis sums up the whole event very well. After that, I no longer need to know more.

Because your emotions are the most precious thing you have

Our brain is our most precious possession. And, despite all the ethics committees, nobody will protect it in our place. That is why I strongly encourage installing ad-blocking software.

But advertising is not the only form of manipulation and control of emotions. In a hyper-connected world, we must each learn to take responsibility and to put strategies in place to keep our individuality and our free will.

Our emotions are so beautiful, so personal, that it would be a shame to sell them off to the first media outlet or advertiser that comes along…

 

Photo by Alexis.

Thank you for taking the time to read this freely paid post. Feel free to support me with a few milliBitcoins: 12uAH27PhWepyzZ8kzY9rMAZzwpfsWSxXt, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE licence.

Flattr this!

by Lionel Dricot at January 13, 2016 04:40 PM

January 12, 2016

Mattias Geniar

Sort ‘ps’ by memory usage in Linux

The post Sort ‘ps’ by memory usage in Linux appeared first on ma.ttias.be.

If you type ps aux, you'll get a detailed output of all linux processes running at that time.

Here's a little trick that allows you to sort that output by memory consumption, so the biggest memory consumers are at the bottom.

$ ps aux | sort -nk +4 | tail -n 10

This sorts the output of ps by the 4th field (%MEM) and only shows the last 10 lines.

Alternatively, you can also pass the --sort parameter to ps directly. For instance:

$ ps aux --sort -rss | head

This shows the top consumers at the top.

The post Sort ‘ps’ by memory usage in Linux appeared first on ma.ttias.be.

by Mattias Geniar at January 12, 2016 09:20 PM

Get the file or directory owner in Bash for use in scripts on Linux

The post Get the file or directory owner in Bash for use in scripts on Linux appeared first on ma.ttias.be.

If you ever need to retrieve the user that owns the file, or the group, you can use the very simple stat command. But instead of the usual grep/sed/awk dance, you can just set some additional parameters that retrieve only the user or group.

Here's how that works:

#!/bin/bash
USER=$(stat -c '%U' /path/to/your/file)

From now on, the $USER variable holds the username of the owner of that file. Want the group? Just as easy:

GROUP=$(stat -c '%G' /path/to/your/file)

The stat command allows for several output formats; for handling users/groups these are the most important:

%u     user ID of owner
%U     user name of owner
%g     group ID of owner
%G     group name of owner
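
As a quick (hypothetical) illustration of using this in a script: give a newly deployed file the same owner and group as the directory it lands in. The paths and file names below are placeholders.

#!/bin/bash
# Hypothetical example: /var/www/html and the file names are placeholders.
TARGET_DIR=/var/www/html
OWNER=$(stat -c '%U' "$TARGET_DIR")
GROUP=$(stat -c '%G' "$TARGET_DIR")

cp /tmp/new-index.html "$TARGET_DIR/index.html"
chown "$OWNER:$GROUP" "$TARGET_DIR/index.html"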

Handy little tool!

The post Get the file or directory owner in Bash for use in scripts on Linux appeared first on ma.ttias.be.

by Mattias Geniar at January 12, 2016 09:15 PM

Apache 2.4 AH01762 & AH01760: failed to initialize shm (Shared Memory Segment)

The post Apache 2.4 AH01762 & AH01760: failed to initialize shm (Shared Memory Segment) appeared first on ma.ttias.be.

I recently ran into the following problem on an Apache 2.4 server, where after server reboot the service itself would no longer start.

This was the error whenever I tried to start it:

$ tail -f error.log
[auth_digest:error] [pid 11716] (2)No such file or directory:
   AH01762: Failed to create shared memory segment on file /run/httpd/authdigest_shm.11716
[auth_digest:error] [pid 11716] (2)No such file or directory:
   AH01760: failed to initialize shm - all nonce-count checking, one-time nonces,
   and MD5-sess algorithm disabled

Systemd reported the same problem:

$ systemctl status -l httpd.service
 - httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since ...

The cause is shown in the first error message: Failed to create shared memory segment on file /run/httpd/authdigest_shm.11716.

When I traced this, I noticed the directory /run/httpd no longer existed. The simple fix in this case was to re-create that missing directory.

$ mkdir /run/httpd
$ chown root:httpd /run/httpd
$ chmod 0710 /run/httpd

The directory should be owned by root and be writeable by the root user. The Apache group (in my case, httpd) needs execute rights to look into the directory.
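
One caveat: on systemd-based systems /run usually lives on a tmpfs, so a directory created by hand disappears again at the next reboot (which is likely how it went missing in the first place). To make the fix permanent, a tmpfiles.d snippet can recreate it at boot time; a sketch, mirroring the ownership and permissions above:

$ cat /etc/tmpfiles.d/httpd.conf
d /run/httpd 0710 root httpd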

The post Apache 2.4 AH01762 & AH01760: failed to initialize shm (Shared Memory Segment) appeared first on ma.ttias.be.

by Mattias Geniar at January 12, 2016 09:00 PM

Frank Goossens

WordPress Plugin releases: who needs a big bang anyway?

On January 1st Mika Epstein blogged about releasing/ updating software for large projects, advising against releasing software during the festive season;

With the increasing provenance of online stories and websites for everyone, pushing a change when we know that the majority of the world is celebrating something between Nov 15th and January 15th is reckless. […] picture what happens when an update has a small bug that takes down […] 1/1000 of 1/4th of the entire Internet. […] It may be time to call a year end moratorium on updates to our systems and apps.

Working in corporate IT myself I could only agree. In theory that is, because a couple of days before I had purposely pushed out a major Autoptimize release in the last week of December, on a Saturday. Why? While inching closer to Autoptimize 2.0’s release, I was becoming worried about the impact some of the bigger changes could have. Impact as in “what if this breaks half of the sites AO is installed on“. One way to limit such impact, I thought, is by releasing at a moment when people are bound to be less busy with their websites. So by releasing on Boxing Day, I assumed fewer people would see & install the update on day 0, limiting the damage a major bug could do.

Now I do agree this approach is very clumsy, but being able to limit the number of people seeing/installing a plugin (or theme) update on day 0 could help prevent disasters such as the ones that plagued, for example, Yoast SEO. The idea of “throttled releases” is not new; it already exists for Android apps, with Google allowing developers to flag an update for a “staged rollout”:

You can release an app update to production using a staged roll-out, where you release an app update to a percentage of your users and increase the percentage over time. New and existing users are eligible to receive updates from staged roll-outs. […] During a staged roll-out, it’s a good idea to closely monitor crash reports and user feedback.

Pushing an update to a percentage of users and monitoring feedback, allowing you to catch problems without the risk of impacting your entire install base? I want that for my WordPress plugins! So how could we get that to work?

What if an extra header were included in readme.txt, e.g. an optional “throttled release” flag? With that flag set, the percentage of people seeing the update in their wp-admin screens would be low on day one and increase every day, for example:

Day after release: % of people seeing the release in their dashboard
day 0: 5%
day 1: 10%
day 2: 20%
day 3: 40%
day 4: 80%
day 5: 100%

This could be accomplished by having https://api.wordpress.org/plugins/update-check/ (against which WordPress installs check for updates) “lie” about updates being available if the “throttled release”-flag is set, by e.g. simply introducing randomness in plugins/update-check/:

$showupdate = false;
// Days elapsed since the plugin release (both dates as parseable strings).
$daysago = (int) ( ( strtotime( $datenow ) - strtotime( $pluginreleasedate ) ) / 86400 );
if ( $throttledrelease === true && $daysago >= 0 ) {
    // Threshold doubles daily (2, 3, 5, 9, 17, ...), so the odds of
    // showing the update roughly double each day after release.
    $threshold = pow( 2, $daysago ) + 1;
    $showupdate = ( mt_rand( 1, 40 ) < $threshold );
}

(The “magic” in the above code is the random value between 1 and 40. On the release date itself the threshold is 2, so only a roll of 1 passes: a 1 in 40 (2.5%) chance per request. A WordPress instance checks for updates every 12 hours, i.e. twice a day, so that translates to roughly 5% of requesting instances seeing the update on day 0. Because the threshold doubles daily, on +1d a roll of 1 or 2 passes (5% per check, approx. 10% per day), on +2d rolls 1–4 pass (approx. 20% per day), and so on. This randomness-based approach means plugins/update-check does not have to keep tabs on which instances have or haven’t seen the update on a given date.)
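To sanity-check that arithmetic, here is a small stand-alone PHP simulation (hypothetical, purely illustrative; the per-day figure uses the same simplification of doubling the per-check percentage, capped at 100%):

<?php
// Simulate 100,000 update-check requests per day and count how often
// the throttled update would be shown with threshold 2^day + 1.
foreach ( range( 0, 5 ) as $day ) {
    $threshold = pow( 2, $day ) + 1; // 2, 3, 5, 9, 17, 33
    $shown = 0;
    for ( $i = 0; $i < 100000; $i++ ) {
        if ( mt_rand( 1, 40 ) < $threshold ) {
            $shown++;
        }
    }
    $percheck = 100 * $shown / 100000;
    printf( "day %d: ~%.1f%% per check, ~%d%% per day\n",
        $day, $percheck, min( 100, round( 2 * $percheck ) ) );
}

The output should land close to the table above: 5%, 10%, 20%, 40%, 80% and (capped) 100%.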

This obviously is just a simplistic idea that does not take into account any of the smart stuff undoubtedly going on in plugins/update-check/ (such as caching, most likely), but I’m pretty sure the wordpress.org people responsible for that code could implement something along these lines. And I do think it would very much be worth the trouble, as it would allow Yoast and other major plugin developers to release without the fear of breaking hundreds of thousands of WordPress sites within a couple of hours. And I would not have to release on Boxing Day, leaving me and the users of my plugins time to digest that Christmas dinner peacefully. What’s not to like?

Blogpost updated (code example + explanation) on 13/01/2016 to reflect the fact that a WordPress instance checks for updates every 12 hours, which impacts the randomness.

by frank at January 12, 2016 06:34 AM

January 11, 2016

Xavier Mertens

[SANS ISC Diary] Virtual Bitlocker Containers

The following diary was published on isc.sans.org: Virtual Bitlocker Containers.

[The post [SANS ISC Diary] Virtual Bitlocker Containers has been first published on /dev/random]

by Xavier at January 11, 2016 08:00 AM

January 10, 2016

FOSDEM organizers

Second set of speaker interviews

We have just published the second batch of interviews with our main track speakers. The following interviews give you a lot of interesting reading material about various topics:

Dan McDonald: illumos at 5. An overview of illumos five years later
Daniel Pocock: Free communications with Free Software. Is there any credible way to build a trustworthy communications platform without using free software?
Dave Neary: How to run a telco on free software. The network transformation with OPNFV
Francis Rowe: Libreboot - free your BIOS today
Frank Ch. Eigler: Applying band-aids over security wounds with systemtap. A data-modification-based approach for fixing…

January 10, 2016 03:00 PM

January 09, 2016

FOSDEM organizers

Work for us FOR FREE! Volunteer!

Are you coming to FOSDEM? Would you like to meet some crazy, energetic, smart, friendly people? Have you got some time to spare before, during or after FOSDEM? Come work for us FOR FREE! Volunteer!

HOW? Subscribe to volunteers.fosdem.org. Pick your tasks.
WHERE? Come meet us at infodesk K.
WHEN? 10 minutes before the first task you picked, for a short briefing.
WHAT? Buildup on Friday. Infodesk, video, and heralding during FOSDEM. Teardown on Sunday evening.
CONTACT? Mark, mvandenborre@fosdem.org (preferred), or +32 486 961726 (emergencies only).
WHO? You, together with us other volunteers. Remember, staff are volunteers too, just the…

January 09, 2016 03:00 PM

Join your fellow hackers for a drink on the eve of FOSDEM!

Looking for something to do the evening before FOSDEM? Like every year, many FOSDEM attendees are planning to enjoy some beers at the Delirium Café, in a beautiful alley near the Grand Place in Brussels. We have reserved most of the bar (most of the alley, in fact!) again this year. Come and join us on the eve of FOSDEM, on Friday the 29th of January. You are welcome as of 16h00. Read the beer event page for all the details! If the beer event is very busy or if you would like dinner before heading over (Delirium Café…

January 09, 2016 03:00 PM