Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

August 26, 2016

The post Podcast: Application Security, Cryptography & PHP appeared first on ma.ttias.be.

I just uploaded the newest SysCast episode; it's available for your listening pleasure.

In the latest episode I talk to Scott Arciszewski to discuss all things security: from the OWASP top 10 to cache timing attacks, SQL injection and local/remote file inclusion. We also talk about his secure CMS called Airship, which takes a different approach to over-the-air updates.

Go have a listen: SysCast -- 6 -- Application Security & Cryptography with Scott Arciszewski.


Let’s Encrypt

In a previous post, I’ve already briefly touched on Let’s Encrypt. It’s a fairly new but already well-established Certificate Authority, providing anyone with free SSL certificates to use for sites and devices they own. This is a welcome change from the older CAs, who charge a premium to get that padlock into your visitors’ browsers. Thanks to Let’s Encrypt being free, however, those older CAs’ prices have come down as well in the last year, which is great!

A fairly major stumbling block for some people is the fact that, out of security concerns, Let’s Encrypt certificates are only valid for 90 days, while paid certificates are usually valid for up to 3 years, letting administrators put that part on autopilot for quite a while without intervention.

However, since the renewal of Let’s Encrypt certificates can be fully automated, if you do it right you may never have to manually touch an SSL certificate renewal again!

The ACME protocol

In that same previous post I’ve also touched on the fact that I don’t very much like the beginner-friendly software provided by Let’s Encrypt themselves. It’s nice for simple setups, but as it by default tries to mangle your Apache configuration to its liking, it breaks a lot of advanced set-ups. Luckily, the Let’s Encrypt system uses an open protocol called ACME (“Automated Certificate Management Environment“), so instead of using their own provided ACME client, we can use any other client that also speaks ACME. The client of my choice is letsencrypt.sh, which is written in bash and allows us to manage and control a lot more things. Last but not least, it allows the use of the dns-01 challenge type, which uses a DNS TXT entry to validate ownership of the domain/host name instead of a web server.

The dns-01 challenge

There are a few different reasons to use the dns-01 challenge instead of the http-01 challenge:

  • Non-server hardware: not all devices supporting SSL are fully under your control. It might be a router, for example, or even a management card of some sort, where you can’t just go in and install Let’s Encrypt’s ACME client, but you can (usually manually) upload SSL certificates to it. It would be nice to be able to request an “official” (non-self-signed) certificate for anything that can use one, as otherwise the value of SSL communication is debatable (users quickly learn to dismiss certificate warnings and errors if they are trained to expect them).
  • Internally used systems: these don’t exist in outside DNS, and are likely not reachable from the internet on port 80 either, so the ACME server cannot contact the web server to validate the token.
  • Centralized configuration management: most if not all of my server configuration is centrally managed by Puppet, including distribution of SSL certificates and reloading daemons after certificate changes. I don’t feel much for running an ACME client on every single server, all managing its own certificates. Being able to retrieve all SSL certificates to this same system directly and coordinate redistribution from there is a big win, plus there’s only one ACME client on the entire network.

The DNS record creation challenge

When using the dns-01 challenge, the script needs to be able to update your public DNS server(s), inserting (and removing) a TXT record for the zone(s) you want to secure with Let’s Encrypt. There are a few different ways of accomplishing this, depending on what DNS server software you use.

For example, if you use Amazon’s Route53, CloudFlare, or any other cloud-based system, you’ll have to use their API to manipulate DNS records. If you’re using PowerDNS with a database backend, you could modify the database directly (as this script by Joe Holden demonstrates for PowerDNS with MySQL backend). Other types of server may require you to (re)write a zone file and load it into the software.
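For reference, what the ACME server ultimately checks is a TXT record under the _acme-challenge label of the name being validated. A quick way to look at such a record once it exists (the host name and token below are made up):

$ dig +short TXT _acme-challenge.www.example.com
"gfj9XqbdE1E9yyMV3qSkxWe0dMNDczrBF4OAyqiWRg8"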

RFC2136 aka Dynamic DNS Update

Luckily, there’s also somewhat of a standard solution to remote DNS updates, as detailed in RFC2136. This allows for signed (or unsigned) updates to happen on your DNS zones over the network if your DNS server supports this and is configured to allow it. RFC2136-style updates are supported in ISC BIND, and since version 4.0 also in PowerDNS authoritative server.

As I use PowerDNS for all my DNS needs, this next part will focus on setting up PowerDNS, but if you can configure your own DNS server to accept dynamic updates, the rest of the article will apply just the same.

Setting up PowerDNS for dynamic DNS updates

First things first, the requirements: RFC2136 is only available since version 4.0 of the PowerDNS Authoritative Server – it was available as an experimental option in 3.4.x already, but I do recommend running the latest incarnation. Also important is the backend support: as detailed on the Dynamic DNS Update documentation page only a number of backends can accept updates – this includes most database-based backends, but not the bind zone file backend, for example.

I will assume you already have a running PowerDNS server hosting at least one domain, and replication configured (database, AXFR, rsync, …) to your secondary name servers.

There are a number of ways in PowerDNS to secure dynamic DNS updates: you can allow specific IPs or IP ranges to modify either a single domain, or give them blanket authorization to modify records on all domains, or you can secure updates per domain with TSIG signatures.

In this example I went with the easiest route, giving my configuration management server full access for all domains hosted on the server.

Only 2 (extra) statements are required in your PowerDNS configuration:

dnsupdate=yes
allow-dnsupdate-from=10.1.1.53

This will enable the Dynamic DNS Updates functionality and allow changes coming from the 10.1.1.53 server only. Multiple entries (separated by spaces) and netmasks (e.g. 10.1.53.0/24) are allowed.
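Before wiring this up to letsencrypt.sh, you can check that dynamic updates really work with a quick manual test from the allowed host. This is only a sketch: the zone (example.com) and the DNS server address (192.0.2.53) are placeholders, and the nsupdate and dig tools are only installed in the prerequisites below:

$ nsupdate <<'EOF'
server 192.0.2.53 53
zone example.com
update add _dnsupdate-test.example.com. 300 IN TXT "it works"
send
EOF
$ dig +short TXT _dnsupdate-test.example.com @192.0.2.53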

Prerequisites

Installing letsencrypt.sh

The script is hosted on GitHub; we can install it into /root/letsencrypt.sh with the following commands:

# apt-get install git
# cd /root; git clone https://github.com/lukas2511/letsencrypt.sh

Configuring letsencrypt.sh

# cd /root/letsencrypt.sh
# echo HOOK=/root/letsencrypt.sh/hook.sh > config

The HOOK variable in the configuration above points to the hook script we will install for dns-01, so we don’t have to supply the path on every invocation.

Hook script requirements

The hook script we will use is a simple bash script that requires 2 binaries: ‘nsupdate’, which does the RFC2136 talking for us, and ‘host’, which is used to check propagation. In Debian and derivatives, these are contained in the ‘dnsutils’ and ‘bind9-host’ packages, respectively.

# apt-get install dnsutils bind9-host

The hook script

I’ve uploaded the hook script to GitHub; download it and save it as /root/letsencrypt.sh/hook.sh.
Make sure the script is executable as otherwise it won’t be run by letsencrypt.sh.

# chmod a+x hook.sh

This script will be called by letsencrypt.sh and will handle the creation and removal of the DNS entry using dynamic updates. It will also check whether the record has correctly propagated to the outside world.

If you don’t have direct database replication between your master and its slaves, say you use AXFR with notifies, it will take a short while before all nameservers responsible for the domain are up to date and serving the new record.

I initially thought of iterating through all the NS records for the domain and checking whether they are all serving the correct TXT record, but after seeing Joe’s PowerDNS/MySQL script run the check against Google’s 8.8.8.8, I decided to do the same. If in the end it turns out there are too many failures, I might update the script to check every nameserver individually before continuing.

The hook script will load the configuration file used by letsencrypt.sh itself (/root/letsencrypt.sh/config), so you can add a number of configuration values for the hook script in there:

Required variables

SERVER=10.1.1.53

This is the DNS server IP to send the dynamic update to.

Optional variables

NSUPDATE="/usr/bin/nsupdate"

This is the path to the nsupdate binary; the default is the correct path on Debian and derivatives.

ATTEMPTS=10

The number of times to ask Google whether the DNS record propagation succeeded.

SLEEP=30

The amount of time to wait (in seconds) before retrying the DNS propagation check.

PORT=53

This is the DNS server port to send the dynamic update to.

TTL=300

This is the TTL for the record we will be inserting; the default of 5 minutes should be fine.

DESTINATION="/etc/puppet/modules/letsencrypt/files"
CERT_OWNER=puppet
CERT_GROUP=puppet
CERT_MODE=0600
CERTDIR_OWNER=root
CERTDIR_GROUP=root
CERTDIR_MODE=0755

This block defines where to copy the newly created certificates to after they have been received from Let’s Encrypt. A new directory inside DESTINATION will be created (named after the hostname) and the 3 files (key, certificate and full chain) will be copied into it. Leaving DESTINATION empty will disable the copy feature.

The CERT_OWNER, CERT_GROUP and CERT_MODE fields define the new owner of the files and their mode. Leaving CERT_OWNER empty will disable the chown functionality, leaving CERT_GROUP empty will change group ownership to the CERT_OWNER’s primary group, and leaving CERT_MODE empty will disable the chmod functionality.

CERTDIR_OWNER, CERTDIR_GROUP and CERTDIR_MODE offer the same functionality for the certificate files’ directory created inside DESTINATION.

I use this functionality to copy the files to the Puppet configuration directory, and I need to change ownership and/or mode because the generated certificates are by default readable by root only, which means my Puppet install cannot actually deploy them as it runs as the ‘puppet’ user.
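Putting it all together, a complete /root/letsencrypt.sh/config could look something like this (all values are the ones discussed above; adjust them to your own environment):

HOOK=/root/letsencrypt.sh/hook.sh
SERVER=10.1.1.53
PORT=53
TTL=300
ATTEMPTS=10
SLEEP=30
NSUPDATE="/usr/bin/nsupdate"
DESTINATION="/etc/puppet/modules/letsencrypt/files"
CERT_OWNER=puppet
CERT_GROUP=puppet
CERT_MODE=0600
CERTDIR_OWNER=root
CERTDIR_GROUP=root
CERTDIR_MODE=0755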

Requesting a certificate

To request a certificate, run:

# ./letsencrypt.sh --cron --challenge dns-01 --domain <your.host.name>

If everything goes well, you will end up with a brand new 90-day certificate from Let’s Encrypt for the host name you provided, copied into the destination directory of your choice.

Renewing your certificates automatically

The hook script adds any successful certificate creations into domains.txt. This file is used by letsencrypt.sh to automatically renew certificates if you don’t pass the --domain parameter on the command line.

# ./letsencrypt.sh --cron --challenge dns-01

To do this fully automatically, just add the command into a cron job.
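For example, a crontab entry along these lines (the schedule and log file are just a suggestion) will attempt renewals once a day:

30 2 * * * cd /root/letsencrypt.sh && ./letsencrypt.sh --cron --challenge dns-01 >> /var/log/letsencrypt-renew.log 2>&1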

Writing informative technical how-to documentation takes time, dedication and knowledge. Should my blog series have helped you in getting things working the way you want them to, or configure certain software step by step, feel free to tip me via PayPal (paypal@powersource.cx) or the Flattr button. Thanks!

August 25, 2016

August 23, 2016

I published the following diary on isc.sans.org: “Voice Message Notifications Deliver Ransomware“.

Bad guys constantly need to find new ways to lure their victims. While billing notifications were very common for a while, not all people in a company work with that kind of document. Which type of notification do they all have in common? All of them have a phone number, and with modern communication channels (“Unified Communications”) like Microsoft Lync or Cisco, everybody can receive a mail with a voice mail notification. Even residential systems can deliver voice message notifications…[Read more]

[The post [SANS ISC Diary] Voice Message Notifications Deliver Ransomware has been first published on /dev/random]

Over the weekend, Drupal 8.2 beta was released. One of the reasons why I'm so excited about this release is that it ships with "more outside-in". In an "outside-in experience", you can click anything on the page, edit its configuration in place without having to navigate to the administration back end, and watch it take effect immediately. This kind of on-the-fly editorial experience could be a game changer for Drupal's usability.

When I last discussed turning Drupal outside-in, we were still in the conceptual stages, with mockups illustrating the concepts. Since then, those designs have gone through multiple rounds of feedback from Drupal's usability team and a round of user testing led by Cheppers. This study identified some issues and provided some insights which were incorporated into subsequent designs.

Two policy changes we introduced in Drupal 8 — semantic versioning and experimental modules — have fundamentally changed Drupal's innovation model starting with Drupal 8. I should write a longer blog post about this, but the net result of those two changes is ongoing improvements with an easy upgrade path. In this case, it enabled us to add outside-in experiences to Drupal 8.2 instead of having to wait for Drupal 9. The authoring experience improvements we made in Drupal 8 are well-received, but that doesn't mean we are done. It's exciting that we can move much faster on making Drupal easier to use.

In-place block configuration

As you can see from the image below, Drupal 8.2 adds the ability to trigger "Edit" mode, which currently highlights all blocks on the page. Clicking on one — in this case, the block with the site's name — pops out a new tray or sidebar. A content creator can change the site name directly from the tray, without having to navigate through Drupal's administrative interface to theme settings as they would have to in Drupal 7 and Drupal 8.1.

Editing the site name using outside-in

Making adjustments to menus

In the second image, the pattern is applied to a menu block. You can make adjustments to the menu right from the new tray instead of having to navigate to the back end. Here the content creator changes the order of the menu links (moving "About us" after "Contact") and toggles the "Team" menu item from hidden to visible.

Editing the menu using outside-in

In-context block placement

In Drupal 8.1 and prior, placing a new block on the page required navigating away from your front end into the administrative back end and noting the available regions. Once you discover where to go to add a block, which can in itself be a challenge, you'll have to learn about the different regions, and some trial and error might be required to place a block exactly where you want it to go.

Starting in Drupal 8.2, content creators can now just click "Place block" without navigating to a different page and knowing about available regions ahead of time. Clicking "Place block" will highlight the different possible locations for a block to be placed in.

Placing a block using outside-in

Next steps

These improvements are currently tagged "experimental". This means that anyone who downloads Drupal 8.2 can test these changes and provide feedback. It also means that we aren't quite satisfied with these changes yet and that you should expect to see this functionality improve between now and 8.2.0's release, and even after the Drupal 8.2.0 release.

As you probably noticed, things still look pretty raw in places; as an example, the forms in the tray are exposing too many visual details. There is more work to do to bring this functionality to the level of the designs. We're focused on improving that, as well as the underlying architecture and accessibility. Once we feel good about how it all works and looks, we'll remove the experimental label.

We deliberately postponed most of the design work to focus on introducing the fundamental concepts and patterns. That was an important first step. We wanted to enable Drupal developers to start experimenting with the outside-in pattern in Drupal 8.2. As part of that, we'll have to determine how this new pattern will apply broadly to Drupal core and the many contributed modules that would leverage it. Our hope is that once the outside-in work is stable and no longer experimental, it will trickle down to every Drupal module. At that point we can all work together, in parallel, on making Drupal much easier to use.

Users have proven time and again in usability studies to be extremely "preview-driven", so the ability to make quick configuration changes right from their front end, without becoming an expert in Drupal's information architecture, could be revolutionary for Drupal.

If you'd like to help get these features to stable release faster, please join us in the outside-in roadmap issue.

Thank you

I'd also like to thank everyone who contributed to these features and reviewed them, including Bojhan, yoroy, pwolanin, andrewmacpherson, gtamas, petycomp, zsofimajor, SKAUGHT, nod_, effulgentsia, Wim Leers, catch, alexpott, and xjm.

And finally, a special thank you to Acquia's outside-in team for driving most of the design and implementation: tkoleary, webchick, tedbow, Gábor Hojtsy, tim.plunkett, and drpal.

Acquia's outside in team
Acquia's outside-in team celebrating that the outside-in patch was committed to Drupal 8.2 beta. Go team!

August 22, 2016

Wifi

Update 20160818: added Proximus RADIUS server.

The Belgian ISPs Proximus and Telenet both provide access to a network of hotspots. A nice recent addition is the use of alternative SSIDs for “automatic” connections instead of a captive portal where you log in through a webpage. Sadly, their support pages provide next to no information on how to make a safe connection to these hotspots.

Proximus is a terrible offender. According to their support page, on a PC only Windows 8.1 is supported. Linux, OS X *and* Windows 8 (!) or 7 users are kindly encouraged to use the open wifi connection and log in through the captive portal. Oh, and no certificate information is given for Windows 8.1 either. That’s pretty silly, as they use EAP-TTLS. Here is the setup to connect from whatever OS you use (terminology from gnome-network-manager):

SSID: PROXIMUS_AUTO_FON
Security: WPA2 Enterprise
Authentication: Tunneled TLS (TTLS)
Anonymous identity: what_ever_you_wish_here@proximusfon.be
Certificate: GlobalSign Root CA (in Debian/Ubuntu in /usr/share/ca-certificates/mozilla/)
Inner Authentication: MSCHAPv2
Username: your_fon_username_here@proximusfon.be
Password: your_password_here
RADIUS server certificate (optional): radius.isp.belgacom.be

Telenet’s support page is slightly better (not a fake Windows 8.1 restriction), but pretty useless as well with no certificate information whatsoever. Here is the information needed to use TelenetWifree using PEAP:

SSID: TelenetWifree
Security: WPA2 Enterprise
Authentication: Protected EAP (PEAP)
Anonymous identity: what_ever_you_wish_here@telenet.be
Certificate: GlobalSign Root CA (in Debian/Ubuntu in /usr/share/ca-certificates/mozilla/)
Inner Authentication: MSCHAPv2
Username: your_fon_username_here@telenet.be
Password: your_password_here
RADIUS server certificate (optional): authentic.telenet.be
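If you're not using NetworkManager, the same settings translate to a wpa_supplicant block. This is an untested sketch based on the values above (the CA file name is the one shipped by Debian/Ubuntu's ca-certificates package); the Proximus variant is analogous with eap=TTLS:

network={
    ssid="TelenetWifree"
    key_mgmt=WPA-EAP
    eap=PEAP
    anonymous_identity="what_ever_you_wish_here@telenet.be"
    identity="your_fon_username_here@telenet.be"
    password="your_password_here"
    ca_cert="/usr/share/ca-certificates/mozilla/GlobalSign_Root_CA.crt"
    phase2="auth=MSCHAPV2"
}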

If you’re interested, screenshots of the relevant parts of the wireshark trace are attached here:

proximus_rootca telenet_rootca


Filed under: Uncategorized Tagged: GNU/Linux, Lazy support, proximus, PROXIMUS_AUTO_PHONE, telenet, TelenetWifree, Windows 7

If you’re a Vim user you probably use it for almost everything. Out of the box, Perl 6 support is rather limited. That’s why many people use editors like Atom for Perl 6 code.

What if, with a few plugins, you could configure Vim to be a great Perl 6 editor? I made the following notes while configuring Vim on my main machine running Ubuntu 16.04. The instructions should be trivially easy to port to other distributions or operating systems. Skip the relevant steps if you already have a working Vim setup (i.e. do not overwrite your .vimrc file).

I maintain my Vim plugins using pathogen, as it allows me to directly use git clones from GitHub. This is especially important for plugins in rapid development.
(If your .vim directory is a git repository, replace ‘git clone’ in the commands by ‘git submodule add’.)

Basic vim Setup

Install vim with scripting support and pathogen. Create the directory where the plugins will live:
$ sudo apt-get install vim-nox vim-pathogen && mkdir -p ~/.vim/bundle

$ vim-addons install pathogen

Create a minimal .vimrc in your $HOME, with at least this configuration (enabling pathogen). Lines commencing with " are comments:

"Enable extra features (e.g. when run systemwide). Must be before pathogen
set nocompatible

"Enable pathogen
execute pathogen#infect()
"Enable syntax highlighting
syntax on
"Enable indenting
filetype plugin indent on

Additionally I use these settings (the complete .vimrc is linked at the end):

"Set line wrapping
set wrap
set linebreak
set nolist
set formatoptions+=l

"Enable 256 colours
set t_Co=256

"Set auto indenting
set autoindent

"Smart tabbing
set expandtab
set smarttab
set sw=4 " no of spaces for indenting
set ts=4 " show \t as 4 spaces and treat 4 spaces as \t when deleting

"Set title of xterm
set title

" Highlight search terms
set hlsearch

"Strip trailing whitespace for certain types of files
autocmd BufWritePre *.{erb,md,pl,pl6,pm,pm6,pp,rb,t,xml,yaml,go} :%s/\s\+$//e

"Override tab spacing for specific languages
autocmd Filetype ruby,puppet setlocal ts=2 sw=2

"Jump to the last position when reopening a file
au BufReadPost * if line("'\"") > 1 && line("'\"") <= line("$") |
\ exe "normal! g'\"" | endif

"Add a coloured right margin for recent vim releases
if v:version >= 703
set colorcolumn=80
endif

"Ubuntu suggestions
set showcmd    " Show (partial) command in status line.
set showmatch  " Show matching brackets.
set ignorecase " Do case insensitive matching
set smartcase  " Do smart case matching
set incsearch  " Incremental search
set autowrite  " Automatically save before commands like :next and :make
set hidden     " Hide buffers when they are abandoned
set mouse=v    " Enable mouse usage (all modes)

Install plugins

vim-perl for syntax highlighting:

$ git clone https://github.com/vim-perl/vim-perl.git ~/.vim/bundle/vim-perl

vim-perl

vim-airline and themes for a status bar:
$ git clone https://github.com/vim-airline/vim-airline.git ~/.vim/bundle/vim-airline
$ git clone https://github.com/vim-airline/vim-airline-themes.git ~/.vim/bundle/vim-airline-themes
In vim type :Helptags

In Ubuntu the ‘fonts-powerline’ package (sudo apt-get install fonts-powerline) installs fonts that enable nice glyphs in the statusbar (e.g. a line effect instead of ‘>’; see the screenshots at https://github.com/vim-airline/vim-airline/wiki/Screenshots).

Add this to .vimrc for airline (the complete .vimrc is attached):
"airline statusbar
set laststatus=2
set ttimeoutlen=50
let g:airline#extensions#tabline#enabled = 1
let g:airline_theme='luna'
"In order to see the powerline fonts, adapt the font of your terminal
"In Gnome Terminal: 'use custom font' in the profile. I use Monospace regular.
let g:airline_powerline_fonts = 1

airline

Tabular for aligning text (e.g. blocks):
$ git clone https://github.com/godlygeek/tabular.git ~/.vim/bundle/tabular
In vim type :Helptags

vim-fugitive for Git integration:
$ git clone https://github.com/tpope/vim-fugitive.git ~/.vim/bundle/vim-fugitive
In vim type :Helptags

vim-markdown for markdown syntax support (e.g. the README.md of your module):
$ git clone https://github.com/plasticboy/vim-markdown.git ~/.vim/bundle/vim-markdown
In vim type :Helptags

Add this to .vimrc for markdown if you don’t want folding (the complete .vimrc is attached):
"markdown support
let g:vim_markdown_folding_disabled=1

syntastic-perl6 for Perl 6 syntax checking support. I wrote this plugin to add Perl 6 syntax checking support to syntastic, the leading Vim syntax checking plugin. See the ‘Call for Testers/Announcement’ here. Instructions can be found in the repo, but I’ll paste them here for your convenience:

You need to install syntastic to use this plugin.
$ git clone https://github.com/scrooloose/syntastic.git ~/.vim/bundle/syntastic
$ git clone https://github.com/nxadm/syntastic-perl6.git ~/.vim/bundle/syntastic-perl6

Type “:Helptags” in Vim to generate Help Tags.

Syntastic and syntastic-perl6 vimrc configuration (comments start with "):

"airline statusbar integration if installed. Uncomment if installed
"set laststatus=2
"set ttimeoutlen=50
"let g:airline#extensions#tabline#enabled = 1
"let g:airline_theme='luna'
"In order to see the powerline fonts, adapt the font of your terminal
"In Gnome Terminal: 'use custom font' in the profile. I use Monospace regular.
"let g:airline_powerline_fonts = 1

"syntastic syntax checking
let g:syntastic_always_populate_loc_list = 1
let g:syntastic_auto_loc_list = 1
let g:syntastic_check_on_open = 1
let g:syntastic_check_on_wq = 0
set statusline+=%#warningmsg#
set statusline+=%{SyntasticStatuslineFlag()}
set statusline+=%*
"Perl 6 support
"Optional comma-separated list of quoted paths to be added to -I
"let g:syntastic_perl6_lib_path = [ '/home/user/Code/some_project/lib', 'lib' ]
"Optional perl6 binary (defaults to perl6)
"let g:syntastic_perl6_interpreter = '/home/claudio/tmp/perl6'
"Register the checker provided by this plugin
let g:syntastic_perl6_checkers = [ 'perl6latest' ]
"Enable the perl6latest checker
let g:syntastic_enable_perl6latest_checker = 1

screenshot-perl6

YouCompleteMe fuzzy search autocomplete:

$ git clone https://github.com/Valloric/YouCompleteMe.git ~/.vim/bundle/YouCompleteMe

Read the YouCompleteMe documentation for your OS’s dependencies and for the switches that enable additional, non-fuzzy support for languages like C/C++, Go and so on. If you just want fuzzy completion for Perl 6, the default is fine. If someone is looking for a nice project, a native Perl 6 completer for YouCompleteMe (instead of the fuzzy one) would be a great addition. You can install YouCompleteMe like this:
$ cd ~/.vim/bundle/YouCompleteMe && ./install.py
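If you do want the extra semantic completers, the install script takes switches for them; C-family support, for example, is typically enabled like this (check the YouCompleteMe docs for the current switch names):

$ cd ~/.vim/bundle/YouCompleteMe && ./install.py --clang-completer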

autocomplete

That’s it. I hope my notes are useful to someone. The complete .vimrc can be found here.

 

 


Filed under: Uncategorized Tagged: Perl, perl6, vim

August 20, 2016

I think that Perl 6, as a fairly new language, needs good tooling not only to attract new programmers but also to make the job of Perl 6 programmers more enjoyable. If you’ve worked with an IDE before, you certainly agree that syntax checking is one of those things that we take for granted. Syntastic-perl6 is a plugin that adds Perl 6 syntax checking in Vim using Syntastic. Syntastic is the leading Vim plugin for syntax checking. It supports many programming languages.

If the plugin proves to be useful, I plan on a parallel track for Perl 6 support in Vim. On one hand, this plugin will track the latest Perl 6 Rakudo releases (while staying as backwards compatible as possible) and be the first to receive new functionality. On the other hand, once this plugin is well-tested and feature complete, it will hopefully be added to the main syntastic repo (it has its own branch upstream already) in order to provide out-of-the-box support for Perl 6.

So, what do we need to get there? We need testers and users, so they can make this plugin better by:

  • sending Pull Requests to make the code (vimscript) better where needed.
  • sending Pull Requests to add tests for error cases not yet tested (see the t directory) or, more importantly, not yet caught.
  • posting issues for bugs or errors not-yet-caught. In that case copy-paste the error (e.g. within vim: :!perl6 -c %) and post a sample of the erroneous Perl 6 code in question.

The plugin, with installation instructions, is on its GitHub repo at syntastic-perl6. With a Vim module manager like pathogen you can directly use a clone of the repo.

Keep me posted!

 


Filed under: Uncategorized Tagged: Perl, perl6, vim

August 19, 2016

I published the following diary on isc.sans.org: “Data Classification For the Masses“.

Data classification isn’t a brand new topic. International organizations and the military have been doing “data classification” for a long time. It can be defined as:

A set of processes and tools to help the organization to know what data are used, how they are protected and what access levels are implemented

Military’s levels are well known: Top Secret, Secret, Confidential, Restricted, Unclassified.

But organizations are free to implement their own scheme and they are deviations. NATO is using: Cosmic Top Secret (CTS), NATO Secret (NS), NATO Confidential (NC) and NATO Restricted (NR). EU institutions are using: EU Top Secret, EU Secret, EU Confidential, EU Restricted. The most important is to have the right classification depending on your business… [Read more]

[The post [SANS ISC Diary] Data Classification For the Masses has been first published on /dev/random]

August 17, 2016

flac

You may have backed up your music CDs using a single flac file per disc instead of a file for each track. In case you need to split such a CD flac file, do this:

Install the needed software:

$ sudo apt-get install cuetools shntool

Split the album flac file into separate tracks:

$ cuebreakpoints sample.cue | shnsplit -o flac sample.flac

Copy the flac tags (if present):

$ cuetag sample.cue split-track*.flac

The full howto can be found here (aidanjm).

Update (April 18th, 2009):
In case the cue file is not a separate file, but included in the flac file itself do this as the first step:

$ metaflac --show-tag=CUESHEET sample.flac | grep -v ^CUESHEET > sample.cue

(NB: The regular syntax is “metaflac --export-cuesheet-to=sample.cue sample.flac“; however, the cue file is often embedded in a tag instead of the cuesheet block.)


Posted in Uncategorized Tagged: flac, GNU/Linux, music

August 16, 2016

The post TCP vulnerability in Linux kernels pre 4.7: CVE-2016-5696 appeared first on ma.ttias.be.

This is a very interesting vulnerability in the TCP stack of Linux kernels prior to 4.7. The bad news: there are a lot of systems online running those kernel versions. The bug/vulnerability is as follows.

Red Hat Product Security has been made aware of an important issue in
the Linux kernel's implementation of challenge ACKS as specified in
RFC 5961. An attacker which knows a connections client IP, server IP
and server port can abuse the challenge ACK mechanism
to determine the accuracy of a normally 'blind' attack on the client or server.

Successful exploitation of this flaw could allow a remote attacker to
inject or control a TCP stream contents in a connection between a
Linux device and its connected client/server.

* This does NOT mean that cryptographic information is exposed.
* This is not a Man in the Middle (MITM) attack.
[oss-security] CVE-2016-5389: linux kernel -- challange ack information leak

In short: a successful attack could hijack a TCP session and facilitate a man-in-the-middle attack, allowing the attacker to inject data, i.e. altering the content on websites, modifying responses from webservers, ...

This Stack Overflow post explains it very well.

The hard part of taking over a TCP connection is to guess the source port of the client and the current sequence number.

The global rate limit for sending Challenge ACKs (100/s in Linux), introduced together with Challenge ACKs (RFC 5961), makes it possible to first guess a source port used by the client's connection and then to guess the sequence number. The main idea is to open a connection to the server and send, from the attacker's own address, as many RST packets with the wrong sequence number as possible, mixed with a few spoofed packets.

By counting how many Challenge ACKs get returned to the attacker, and by knowing the rate limit, one can infer how many of the spoofed packets resulted in a Challenge ACK to the spoofed client and thus how many of the guesses were correct. This way one can quickly narrow down which values of port and sequence are correct. This attack can be done within a few seconds.

And of course the attacker needs to be able to spoof the IP address of the client, which is not possible in all environments. It might be possible in local networks (depending on the security measures), but ISPs will often block IP spoofing when done from the usual DSL/cable/mobile accounts.

TCP “off-path” Attack (CVE-2016-5696)

For RHEL (and CentOS derivatives), the following OSes are affected.

cve_2016_5696_tcp_vulnerability_kernel

While it's no permanent fix, the following config will make it a lot harder to abuse this vulnerability.

$ sysctl -w net.ipv4.tcp_challenge_ack_limit=999999999

And make it permanent so it persists on reboot:

$ echo "net.ipv4.tcp_challenge_ack_limit=999999999" >> /etc/sysctl.d/net.ipv4.tcp_challenge_ack_limit.conf

While the attack isn't actually prevented, it is damn hard to reach the ACK limits.
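To check whether a machine needs the workaround and whether the setting took effect, these two commands are enough (compare the kernel version against 4.7 and the sysctl value against what you configured):

$ uname -r
$ sysctl net.ipv4.tcp_challenge_ack_limit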

Further reading:


Rio olympic stadium

As the 2016 Summer Olympics in Rio de Janeiro enters its second and final week, it's worth noting that the last time I blogged about Drupal and the Olympics was way back in 2008 when I called attention to the fact that Nike was running its sponsorship site on Drupal 6 and using Drupal's multilingual capabilities to deliver their message in 13 languages.

While watching some track and field events on television, I also spent a lot of time on my laptop with the NBC Olympics website. It is a site that has run on Drupal for several years, and this year I noticed they took it up a notch and did a redesign to enhance the overall visitor experience.

Last week NBC issued a news release that it has streamed over one billion minutes of sports via their site so far. That's a massive number!

I take pride in knowing that an event as far-reaching as the Olympics is being delivered digitally to a massive audience by Drupal. In fact, some of the biggest sporting leagues around the globe run their websites off of Drupal, including NASCAR, the NBA, NFL, MLS, and NCAA. Massive events like the Super Bowl, Kentucky Derby, and the Olympics run on Drupal, making it the chosen platform for global athletic organizations.

Rio website

Rio press release

Update on August 24: This week, the NBC Sports Group issued a press release stating that the Rio 2016 Olympics was the most successful media event in history! Digital coverage across NBCOlympics.com and the NBC Sports app set records, with 3.3 billion total streaming minutes, 2.71 billion live streaming minutes, and 100 million unique users. According to the announcement, live streaming minutes for the Rio games nearly doubled that of all Olympic games combined, and digital coverage amassed 29 percent more unique users than the London Olympics four years prior. Drupal was proud to be a part of the largest digital sporting event in history. Looking forward to breaking more records in the years to come!

August 12, 2016

The post youtube-dl: download audio-only files from YouTube on Mac appeared first on ma.ttias.be.

I may or may not have become addicted to a particular video on YouTube, and I wanted to download the MP3 for offline use.

(Whether it's allowed or not is up for debate, knowing copyright laws it probably depends per country.)

Luckily, I remember I featured a YouTube downloader once in cron.weekly issue #23 that I could use for this.

So, a couple of simple steps on Mac to download the MP3 from any YouTube video. All further commands assume the Brew package manager is installed on your Mac.

$ brew install ffmpeg youtube-dl

To download and convert to MP3:

$ youtube-dl --extract-audio --audio-format mp3 --prefer-ffmpeg https://www.youtube.com/watch?v=3UOtF4J9wpo
 3UOtF4J9wpo: Downloading webpage
 3UOtF4J9wpo: Downloading video info webpage
 3UOtF4J9wpo: Extracting video information
 3UOtF4J9wpo: Downloading MPD manifest
[download] 100% of 58.05MiB
[ffmpeg] Destination: 3UOtF4J9wpo.mp3

Deleting original file 3UOtF4J9wpo.webm

And bingo, all that remains is the MP3!


August 11, 2016

The post Mark a varnish backend as healthy, sick or automatic via CLI appeared first on ma.ttias.be.

This is a useful little command for when you want to perform maintenance on a Varnish installation and want to dynamically mark backends as healthy or sick via the command line, without restarting or reloading varnish.

See varnish backend health status

To see all backends, there are 2 methods: a debug output and a normalized output.

$ varnishadm -S /etc/varnish/secret -T localhost:6082 backend.list
Backend name                   Refs   Admin      Probe
backend1(127.0.0.1,,80)        1      probe      Sick 0/4
fallback(172.16.80.5,,80)      12     probe      Healthy (no probe)

$ varnishadm -S /etc/varnish/secret -T localhost:6082 debug.health
Backend backend1 is Sick
Current states  good:  0 threshold:  2 window:  4
Average responsetime of good probes: 0.000000
Oldest                                                    Newest
================================================================
---------------------------------------------------------------- Happy

The backend.list shows all backends, even those without a probe (= health check) configured.

The debug.health command will show in-depth statistics on the varnish probes that are being executed, including the IPv4 connect state, whether a send/receive has worked and if the response code was HTTP/200.

For instance, a healthy backend will be shown like this, with each state of the check (IPv4, send, receive & HTTP response code) on a separate line.

$ varnishadm -S /etc/varnish/secret -T localhost:6082 debug.health
Backend backend1 is Healthy
Current states  good:  5 threshold:  4 window:  5
Average responsetime of good probes: 0.014626
Oldest                                                    Newest
================================================================
4444444444444444444444444444444444444444444444444444444444444444 Good IPv4
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit
RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv
HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy

Now, to change backend statuses.

Mark a varnish backend as healthy or sick

In order to mark a particular backend as sick or healthy, thus overriding the probe, you can do so like this.

$ varnishadm -S /etc/varnish/secret -T localhost:6082 backend.set_health backend1 healthy

The above command will mark the backend named backend1 as healthy. Likewise, you can mark a backend as sick to prevent it from getting traffic.

$ varnishadm -S /etc/varnish/secret -T localhost:6082 backend.set_health backend1 sick

If you have multiple Varnish backends and they're configured in a director to load balance traffic, all traffic should gracefully be sent to the other backend(s). (see the examples in mattiasgeniar/varnish-4.0-configuration-templates)

If you mark a backend explicitly as sick, the backend.list output changes and the admin column removes the 'probe' and marks it as 'sick' explicitly, indicating it was changed via CLI.

$ varnishadm -S /etc/varnish/secret -T localhost:6082 backend.list
Backend name                   Refs   Admin      Probe
backend1(127.0.0.1,,80)        1      sick       Sick 0/4
fallback(172.16.80.5,,80)      12     probe      Healthy (no probe)

You can also change it back to let Varnish decide the backend health.

Mark the backend as 'varnish managed', let probes decide the health

To let Varnish decide the health itself, by using its probes, mark the backend as auto again:

$ varnishadm -S /etc/varnish/secret -T localhost:6082 backend.set_health backend1 auto

So to summarise: the backend.set_health command in varnishadm allows you to manipulate the health state of varnish backends, overriding the result of a probe.

This is useful when you're trying to update several backend servers gracefully: you can mark backends as sick one by one, without waiting for the probes to discover that they are sick, and do your maintenance before the update.
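In practice that boils down to repeating a small sequence per backend, roughly like this (a sketch using the same varnishadm parameters as above):

$ varnishadm -S /etc/varnish/secret -T localhost:6082 backend.set_health backend1 sick
# ... perform the maintenance on backend1, then hand it back to the probes ...
$ varnishadm -S /etc/varnish/secret -T localhost:6082 backend.set_health backend1 auto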


August 09, 2016

The post zsh: slow startup for new terminals appeared first on ma.ttias.be.

I couldn't quite put my finger on why, but I was experiencing slower and slower startups of my terminal when using zsh (combined with the oh-my-zsh extension).

In my case, this was because of a rather long history file that gets loaded whenever you start a new terminal.

$  wc -l ~/.zsh_history
   10005 /Users/mattias/.zsh_history

Turns out, loading over 10k lines worth of shell history whenever you launch a new shell is hard for a computer.

This was my fix:

$ cp ~/.zsh_history ~/.zsh_history.1
$ echo '' > ~/.zsh_history

I had to use echo because the shortcut that would normally work in Bash didn't work here:

$ > ~/.zsh_history

Either way, that solved zsh from starting slowly for me.
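If you'd rather keep some recent history instead of wiping it completely, a variation on the same idea (an untested sketch) is to trim the file down to its last few thousand lines:

$ tail -n 2000 ~/.zsh_history > ~/.zsh_history.trimmed
$ mv ~/.zsh_history.trimmed ~/.zsh_history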


The post Docker Cheat Sheet appeared first on ma.ttias.be.

An interesting Docker cheat sheet just got posted on the @Docker Twitter account that's worth sharing. Because it got linked to a strange domain (pdf.investintech.com, really?) I'll mirror it here -- I feel the original link will one day go down.

docker_cheat_sheet

Alternative links:

  • Docker Cheat Sheet: PNG
  • Docker Cheat Sheet: PDF

Good stuff Docker, thanks for sharing!


August 08, 2016

The post Awk trick: show lines longer than X characters appeared first on ma.ttias.be.

Here's a quick little awk trick to have in your arsenal: if you want to search through a bunch of files, but only want to show the lines that exceed X amount of characters, you can use awk's built-in length check.

For instance:

$ awk 'length > 350'
$ awk 'length < 50' 

If you combine this with a grep, you can do things like "show me all the lines that match TXT and that exceed 100 characters in length".

$ grep 'TXT' * | awk 'length > 100'

Super useful to quickly skim through a bunch of logs or text-files.
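And if you also want to know where those long lines live, awk's built-in FILENAME and FNR variables can print the file and line number along with the length (a small example of my own, not from the original post):

$ awk 'length > 100 { print FILENAME ":" FNR ": " length " chars" }' *.log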


The post Podcast: Ansible config management & deploying code with James Cammarata appeared first on ma.ttias.be.

I recorded a fun new episode on the SysCast podcast about Ansible. I'm joined by James Cammarata, head of Ansible core engineering, to discuss Ansible, push vs. pull scenarios, deploying code, testing your config management and much more.

You can find the Episode on the website or wherever you get your podcasts: SysCast #5: Ansible: config management & deploying code with James Cammarata from Red Hat.

Feedback appreciated!


August 04, 2016

August 02, 2016

The post Postfix mail queue: deliver e-mail to an alternate address appeared first on ma.ttias.be.

Have you ever had an e-mail stuck in a postfix queue you'd just like to re-route to a different address? This could be because the original to address has e-mail delivery issues and you really want to have that e-mail delivered to an alternate recipient.

Here's a couple of steps to workaround that. It basically goes:

  1. Find the e-mail
  2. Mark mail as 'on hold' in the Postfix queue
  3. Extract the mail from the queue
  4. Requeue to a different recipient
  5. Delete the original mail from the queue after confirmed delivery

It's less work than it sounds. Let's go.

Find the mail ID in postfix

You need to find the mail ID of the mail you want to send to a different address.

$ postqueue  -p | grep 'm@ttias.be' -B 2 | grep 'keyword'
42DA1C0B84D0    28177 Tue Aug  2 14:52:38  thesender@domain.tld

In this case, a mail was sent to me and I'd like to have that be delivered to a different address. The identifier in the front, 42DA1C0B84D0 is what we'll need.

Mark the postfix queue item as 'on hold'

To prevent Postfix from trying to deliver it in the meanwhile.

$ postsuper -h CF452C1239FB
postsuper: CF452C1239FB: placed on hold
postsuper: Placed on hold: 1 message

Don't worry, your mail isn't deleted.

Extract the mail from the queue

Extract that email and save it to a temporary file. If you're paranoid, don't save to /tmp as everyone can read that mail while it's there.

$ postcat -qbh CF452C1239FB > /tmp/m.eml

Now, to resend.

Send queued mail to different recipient

Now that you've extracted that e-mail, you can have it be sent to a different recipient than the original.

$ sendmail -f $sender $recipient < /tmp/m.eml

Replace $sender and $recipient with real values. The sender should remain the same as the from address you saw with the postqueue -p command, the $recipient can be your modified address. For instance, in my example, I could do this.

$ sendmail -f thesender@domain.tld newrecipient@domain.tld < /tmp/m.eml

After a while, that mail should arrive at the new address.

Delete the 'on hold' mail from the postfix queue

After you've confirmed delivery to your new e-mail address, you can delete the mail from the 'on hold' queue in Postfix.

Warning: after this, the mail is gone forever from the postfix queue!

$ postsuper -d  CF452C1239FB
postsuper: CF452C1239FB: removed
postsuper: Deleted: 1 message

$ rm -f /tmp/m.eml

And you're good: you just resent a mail that got stuck in the postfix queue to a different address!

FYI: there are alternatives using Postfix's smtp_generic_maps, but call me old-fashioned - I still prefer this method.
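For completeness, a rough sketch of that smtp_generic_maps approach (the addresses are placeholders, and note that generic maps rewrite addresses on all outgoing SMTP mail, so read the postconf documentation before using it):

$ postconf -e 'smtp_generic_maps = hash:/etc/postfix/generic'
$ echo 'stuckrecipient@domain.tld newrecipient@domain.tld' >> /etc/postfix/generic
$ postmap /etc/postfix/generic
$ postfix reload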


Autoptimize by default uses WordPress’ internal logic to determine if a URL should be HTTP or HTTPS. But in some cases WordPress may not be fully aware it is on HTTPS, or maybe you want part of your site on HTTP and another part (cart & checkout?) on HTTPS. Protocol-relative URLs to the rescue, except Autoptimize does not do those, right?

Well, not by default, no. But the following code snippet uses AO’s API to output protocol-relative URLs (warning: not tested thoroughly in a production environment, but I’ll be happy to assist in case of problems):

add_filter('autoptimize_filter_cache_getname','protocollesser');
add_filter('autoptimize_filter_base_replace_cdn','protocollesser');
function protocollesser($urlIn) {
  $urlOut=preg_replace('/https?:/i','',$urlIn);
  return $urlOut;
}

August 01, 2016

The post Chrome 52: return old backspace behaviour appeared first on ma.ttias.be.

Remember when you could hit backspace and go back one page in your history? Those were the days!

If you're a Chrome 52 user, you might have noticed that no longer works. Instead, it'll show this screen.

chrome_backspace_return_functionality

This was discussed at length and the consensus was: it's mad to have such functionality, it does more harm than good, let's rethink it. And so, the backspace functionality has been removed.

But to ease our pain, Chrome 52 introduced a new material design, so all is good, right?

chrome_52_material_design

Well, if like me, you miss the old backspace functionality, you can get it back!

Quickest fix: a Chrome extension

You can get the back to back Chrome extension that fixes this for you.

But come on, using an extension for this feels wrong, no?

Add CLI argument to restore backspace

Add the following argument whenever you start Chrome to restore the old backspace functionality: --enable-blink-features=BackspaceDefaultHandler --test-type.
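On Linux that boils down to launching Chrome like this (the binary name may differ per distribution; on macOS you can pass the same switches via open --args):

$ google-chrome --enable-blink-features=BackspaceDefaultHandler --test-type
$ open -a "Google Chrome" --args --enable-blink-features=BackspaceDefaultHandler --test-type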

Because apparently we're the 0.04% of users that want this feature.


Remember when my webserver was acting up? Well, I was so fed up with it, that I took a preconfigured Bitnami WordPress image and ran that on AWS. I don’t care how Bitnami configured it, as long as it works.

As a minor detail, postfix/procmail/dovecot were of course not installed or configured. Meh. This annoyed the Mrs. a bit because she didn’t get her newsletters. But I was so fed up with all the technical problems, that I waited a month to do anything about it.

Doing sudo apt-get -y install postfix procmail dovecot-pop3d and copying over the configs from the old server solved that.

Did I miss email during that month? Not at all. People were able to contact me through Twitter, Facebook, Telegram and all the other social networks. And I had an entire month without spam. Wonderful!

The post Living without email for a month appeared first on amedee.be.

July 30, 2016

The post Google’s QUIC protocol: moving the web from TCP to UDP appeared first on ma.ttias.be.

The QUIC protocol (Quick UDP Internet Connections) is an entirely new protocol for the web developed on top of UDP instead of TCP.

Some are even (jokingly) calling it TCP/2.

I only learned about QUIC a few weeks ago while doing the curl & libcurl episode of the SysCast podcast.

The really interesting bit about the QUIC protocol is the move to UDP.

Now, the web is built on top of TCP for its reliability as a transmission protocol. To start a TCP connection a 3-way handshake is performed. This means additional round-trips (network packets being sent back and forth) for each starting connection which adds significant delays to any new connection.

tcp_3_way_handshake

(Source: Next generation multiplexed transport over UDP (PDF))

If on top of that you also need to negotiate TLS, to create a secure, encrypted, https connection, even more network packets have to be sent back and forth.

tcp_3_way_handshake_with_tls

(Source: Next generation multiplexed transport over UDP (PDF))

Innovations like TCP Fast Open will improve the situation for TCP, but they aren't widely adopted yet.

UDP on the other hand is more of a fire and forget protocol. A message is sent over UDP and it's assumed to arrive at the destination. The benefit is less time spent on the network to validate packets, the downside is that in order to be reliable, something has to be built on top of UDP to confirm packet delivery.

That's where Google's QUIC protocol comes in.

The QUIC protocol can start a connection and negotiate all the TLS (HTTPS) parameters in 1 or 2 packets (depending on whether it's a new server you are connecting to or a known host).

udp_quic_with_tls

(Source: Next generation multiplexed transport over UDP (PDF))

This can make a huge difference for the initial connection and start of download for a page.

Why is QUIC needed?

It's absolutely mind boggling what the team developing the QUIC protocol is doing. It wants to combine the speed and possibilities of the UDP protocol with the reliability of the TCP protocol.

Wikipedia explains it fairly well.

As improving TCP is a long-term goal for Google, QUIC aims to be nearly equivalent to an independent TCP connection, but with much reduced latency and better SPDY-like stream-multiplexing support.

If QUIC features prove effective, those features could migrate into a later version of TCP and TLS (which have a notably longer deployment cycle).

QUIC

There's a part of that quote that needs emphasising: if QUIC features prove effective, those features could migrate into a later version of TCP.

The TCP protocol is rather heavily regulated. Its implementation lives inside the Windows and Linux kernels, it's in each phone OS, ... it's pretty much in every low-level device. Improving on the way TCP works is going to be hard, as each of those TCP implementations needs to follow.

UDP on the other hand is relatively simple in design. It's faster to implement a new protocol on top of UDP to prove some of the theories Google has about TCP. That way, once they can confirm their theories about network congestion, stream blocking, ... they can begin their efforts to move the good parts of QUIC to the TCP protocol.

But altering the TCP stack requires work from the Linux kernel & Windows, intermediary middleboxes, users to update their stack, ... Doing the same thing in UDP is much more difficult for the developers making the protocol but allows them to iterate much faster and implement those theories in months instead of years or decades.

Where does QUIC fit in?

If you look at the layers which make up a modern HTTPs connection, QUIC replaces the TLS stack and parts of HTTP/2.

The QUIC protocol implements its own crypto-layer so does not make use of the existing TLS 1.2.

tcp_udp_quic_http2_compared

It replaces TCP with UDP and on top of QUIC is a smaller HTTP/2 API used to communicate with remote servers. The reason it's smaller is because the multiplexing and connection management is already handled by QUIC. What's left is an interpretation of the HTTP protocol.

TCP head-of-line blocking

With SPDY and HTTP/2 we now have a single TCP connection being used to connect to a server instead of multiple connections for each asset on a page. That one TCP connection can independently request and receive resources.

spdy_multiplexed_assets

(Source: QUIC: next generation multiplexed transport over UDP)

Now that everything depends on that single TCP connection, a downside is introduced: head-of-line blocking.

In TCP, packets need to arrive and be processed in the correct order. If a packet is lost on its way to/from the server, it needs to be retransmitted. The TCP connection needs to wait (or "block") on that TCP packet before it can continue to parse the other packets, because the order in which TCP packets are processed matters.

spdy_multiplexed_assets_head_of_line_blocked

(Source: QUIC: next generation multiplexed transport over UDP)

In QUIC, this is solved by not making use of TCP anymore.

UDP is not dependent on the order in which packets are received. While it's still possible for packets to get lost during transit, they will only impact an individual resource (as in: a single CSS/JS file) and not block the entire connection.

quic_multiplexing

(Source: QUIC: next generation multiplexed transport over UDP)

QUIC is essentially combining the best parts of SPDY and HTTP2 (the multiplexing) on top of a non-blocking transportation protocol.

Why fewer packets matter so much

If you're lucky enough to be on a fast internet connection, you can have latencies between you and a remote server between the 10-50ms range. Every packet you send across the network will take that amount of time to be received.

For latencies < 50ms, the benefit may not be immediately clear.

It's mostly noticeable when you are talking to a server on another continent or via a mobile carrier using Edge, 3G/4G/LTE. To reach a server from Europe in the US, you have to cross the Atlantic ocean. You immediately get a latency penalty of +100ms or higher purely because of the distance that needs to be traveled.

network_round_trip_europe_london

(Source: QUIC: next generation multiplexed transport over UDP)

Mobile networks have the same kind of latency: it's not unlikely to have a 100-150ms latency between your mobile phone and a remote server on a slow connection, merely because of the radio frequencies and intermediate networks that have to be traveled. In 4G/LTE situations, a 50ms latency is easier to get.

On mobile devices and for large-distance networks, the difference between sending/receiving 4 packets (TCP + TLS) and 1 packet (QUIC) can be up to 300ms of saved time for that initial connection.

Forward Error Correction: preventing failure

A nifty feature of QUIC is FEC or Forward Error Correction. Every packet that gets sent also includes enough data of the other packets so that a missing packet can be reconstructed without having to retransmit it.

This is essentially RAID 5 on the network level.

Because of this, there is a trade-off: each UDP packet contains more payload than is strictly necessary, because it accounts for the potential of missed packets that can more easily be recreated this way.

The current ratio seems to be around 10 packets. So for every 10 UDP packets sent, there is enough data to reconstruct a missing packet. A 10% overhead, if you will.

Consider Forward Error Correction as a sacrifice in terms of "data per UDP packet" that can be sent, but the gain is not having to retransmit a lost packet, which would take a lot longer (recipient has to confirm a missing packet, request it again and await the response).

Session resumption & parallel downloads

Another exciting opportunity with the switch to UDP is the fact that you are no longer dependent on the source IP of the connection.

In TCP, you need 4 parameters to make up a connection. The so-called quadruplets.

To start a new TCP connection, you need a source IP, source port, destination IP and destination port. On a Linux server, you can see those quadruplets using netstat.

$ netstat -anlp | grep ':443'
...
tcp6       0      0 2a03:a800:a1:1952::f:443 2604:a580:2:1::7:57940  TIME_WAIT   -
tcp        0      0 31.193.180.217:443       81.82.98.95:59355       TIME_WAIT   -
...

If any of the parameters (source IP/port or destination IP/port) change, a new TCP connection needs to be made.

This is why keeping a stable connection on a mobile device is so hard: you may be constantly switching between WiFi and 3G/LTE.

quic_parking_lot_problem

(Source: QUIC: next generation multiplexed transport over UDP)

With QUIC, since it's now using UDP, there are no quadruplets.

QUIC has implemented its own identifier for unique connections called the Connection UUID. It's possible to go from WiFi to LTE and still keep your Connection UUID, so no need to renegotiate the connection or TLS. Your previous connection is still valid.

This works the same way as the Mosh Shell, keeping SSH connections alive over UDP for a better roaming & mobile experience.

This also opens the doors to using multiple sources to fetch content. If the Connection UUID can be shared over a WiFi and cellular connection, it's in theory possible to use both media to download content. You're effectively streaming or downloading content in parallel, using every available interface you have.

While still theoretical, UDP allows for such innovation.

The QUIC protocol in action

The Chrome browser has had (experimental) support for QUIC since 2014. If you want to test QUIC, you can enable the protocol in Chrome. Practically, you can only test the QUIC protocol against Google services.

The biggest benefit Google has is that it controls both ends: the browser (Chrome) and a significant share of the servers (Google services like YouTube and Google.com). By enabling QUIC on both the client and the server, they can run large-scale tests of new protocols in production.

There's a convenient Chrome plugin that can show the HTTP/2 and QUIC protocol as an icon in your browser: HTTP/2 and SPDY indicator.

You can see how QUIC is being used by opening the chrome://net-internals/#quic tab right now (you'll also notice the Connection UUID mentioned earlier).

quic_net_internals_sessions

If you're interested in the low-level details, you can even see all the live connections and get individual per-packet captures: chrome://net-internals/#events&q=type:QUIC_SESSION%20is:active.

quic_debug_packets_chrome

Similar to how you can see the internals of a SPDY or HTTP/2 connection.

Won't someone think of the firewall?

If you're a sysadmin or network engineer, you probably gave a little shrug at the beginning when I mentioned QUIC being UDP instead of TCP. You've probably got a good reason for that, too.

For instance, when we at Nucleus Hosting configure a firewall for a webserver, those firewall rules look like this.

firewall_http_https_incoming_allow

Take special note of the protocol column: TCP.

Our firewall isn't very different from the one deployed by thousands of other sysadmins. At this time, there's no reason for a webserver to allow anything other than 80/TCP or 443/TCP. TCP only. No UDP.

Well, if we want to allow the QUIC protocol, we will need to allow 443/UDP too.

For servers, this means opening incoming 443/UDP to the webserver. For clients, it means allowing outgoing 443/UDP to the internet.
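
On a typical Linux webserver that's a small change. Here's a minimal sketch, assuming firewalld (or plain iptables) manages the firewall -- adapt it to whatever tooling you actually run:

$ firewall-cmd --permanent --add-port=443/udp
$ firewall-cmd --reload

Or, with plain iptables:

$ iptables -A INPUT -p udp --dport 443 -j ACCEPT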

In large enterprises, I can see this being an issue. Getting it past security to allow UDP on a normally TCP-only port sounds fishy.

I would actually have thought this to be a major problem in terms of connectivity, but Google has already run the experiments -- and it turns out not to be the case.

quic_connection_statistics

(Source: QUIC Deployment Experience @Google)

Those numbers were given at a recent HTTP workshop in Sweden. A couple of key pointers:

  • Since QUIC is only supported on Google Services now, the server-side firewalling is probably OK.
  • These numbers are client-side only: they show how many clients are allowed to do UDP over port 443.
  • QUIC can be disabled in Chrome for compliance reasons. I bet there are a lot of enterprises that have disabled QUIC so those connections aren't even attempted.

Since QUIC is also TLS-enabled, we only need to worry about UDP on port 443. UDP on port 80 isn't very likely to happen soon.

The advantage of doing things encrypted-only is that Deep Packet Inspection middleware (aka intrusion prevention systems) can't decrypt the TLS traffic and modify the protocol: it just sees binary data going over the wire and will -- hopefully -- let it pass through.

Running QUIC server-side

Right now, the only webserver that can serve you QUIC is Caddy, since version 0.9.

Both client-side and server-side support is considered experimental, so it's up to you to run it.
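
As a rough sketch of what that looks like (assuming the experimental QUIC flag that shipped with Caddy 0.9 -- double-check the Caddy documentation for your exact version):

$ caddy -quic

The Caddyfile itself shouldn't need any changes; Caddy keeps serving regular HTTPS next to it.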

Since no one has QUIC support enabled by default in the client, you're probably still safe to run it and enable QUIC in your own browser(s). (Update: since Chrome 52, everyone has QUIC enabled by default, even to non-whitelisted domains)

To help debug QUIC, I hope curl will implement it soon; there certainly is interest.

Performance benefits of QUIC

In a 2015 blogpost, Google shared several results from its QUIC implementation.

As a result, QUIC outshines TCP under poor network conditions, shaving a full second off the Google Search page load time for the slowest 1% of connections.

These benefits are even more apparent for video services like YouTube. Users report 30% fewer rebuffers when watching videos over QUIC.
A QUIC update on Google’s experimental transport (2015)

The YouTube statistics are especially interesting. If these kinds of improvements are possible, we'll see a quick adoption in video streaming services like Vimeo or "adult streaming services".

Conclusion

I find the QUIC protocol to be truly fascinating!

The amount of work that has gone into it, the fact that it's already running on some of the biggest websites out there, and that it simply works, blows my mind.

I can't wait to see the QUIC spec become final and implemented in other browsers and webservers!

Update: comment from Jim Roskind, designer of QUIC

Jim Roskind was kind enough to leave a comment on this blog (see below) that deserves emphasising.

Having spent years on the research, design and deployment of QUIC, I can add some insight. Your comment about UDP ports being blocked was exactly my conjecture when we were experimenting with QUIC’s (UDP) viability (before spending time on the detailed design and architecture). My conjecture was that the reason we could only get 93% reachability was because enterprise customers were commonly blocking UDP (perchance other than what was needed for DNS).

If you recall that historically, enterprise customers routinely blocked TCP port 80 "to prevent employees from wasting their time surfing," then you know that overly conservative security does happen (and usability drives changes!). As it becomes commonly known that allowing UDP:443 to egress will provide better user experience (i.e., employees can get their job done faster, and with less wasted bandwidth), then I expect that usability will once again trump security ... and the UDP:443 port will be open in most enterprise scenarios.

... also ... your headline using the words “TCP/2” may well IMO be on target. I expect that the rate of evolution of QUIC congestion avoidance will allow QUIC to track the advances (new hardware deployment? new cell tower protocols? etc.) of the internet much faster than TCP.

As a result, I expect QUIC to largely displace TCP, even as QUIC provides any/all technology suggestions for incorporation into TCP. TCP is routinely implemented in the kernel, which makes evolutionary steps take 5-15 years (including market penetration!… not to mention battles with middle-boxes), while QUIC can evolve in the course of weeks or months.

-- Jim (QUIC Architect)

Thanks Jim for the feedback, it's amazing to see the original author of the QUIC protocol respond!

Further reading

If you're looking for more information, have a look at these resources:

Many thanks to Google for leading the efforts here!

The post Google’s QUIC protocol: moving the web from TCP to UDP appeared first on ma.ttias.be.

I've been playing with Project Atomic as a platform to run Docker containers for some time now. The reasons I like Project Atomic are something for another blogpost. One of them, however, is that while it's a minimal OS, it does come with Python, so I can use Ansible to do orchestration and configuration management.

Now, running Docker containers on a single host is nice, but the real fun starts when you can run containers spread over a number of hosts. This is easier said than done and requires some extra services like a scheduler, service discovery, overlay networking,... There are several solutions, but one that I particularly like is Kubernetes.

Project Atomic happens to ship with all the pieces needed to deploy a Kubernetes cluster, using Flannel for the overlay networking. The only thing left is the configuration. Now, that happens to be something Ansible is particularly good at.

The following will describe how you can deploy a four-node cluster on top of Atomic hosts using Ansible. Let's start with the Ansible inventory.

Inventory

We will keep things simple here by using a single file-based inventory where we explicitly specify the IP addresses of the hosts, for testing purposes. The important parts here are the two groups k8s-nodes and k8s-master. The k8s-master group should contain only one host, which will become the cluster manager. All hosts under k8s-nodes will become nodes to run containers on.

[k8s-nodes]
atomic02 ansible_ssh_host=10.0.0.2
atomic03 ansible_ssh_host=10.0.0.3
atomic04 ansible_ssh_host=10.0.0.4


[k8s-master]
atomic01 ansible_ssh_host=10.0.0.1
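
Before running anything, a quick sanity check can confirm that Ansible reaches all four hosts. A small sketch, assuming the inventory above is saved as a file named hosts (the same name used with -i later on) and that you log in as the centos user, like the playbook below does:

$ ansible -i hosts all -m ping -u centos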

Variables

Currently these roles don't have many configurable variables, but we do need to provide the variables for the k8s-nodes group. Create a folder group_vars with a file that has the same name as the group. If you checked out the repository, you already have it.

$ tree group_vars/
group_vars/
    k8s-nodes

The file should have the following variables defined.

skydns_enable: true

# IP address of the DNS server.
# Kubernetes will create a pod with several containers, serving as the DNS
# server and expose it under this IP address. The IP address must be from
# the range specified as kube_service_addresses.
# And this is the IP address you should use as address of the DNS server
# in your containers.
dns_server: 10.254.0.10

dns_domain: kubernetes.local

Playbook

Now that we have our inventory we can create our playbook. First we configure the k8s master node. Once this is configured we can configure the k8s nodes.

deploy_k8s.yml

 - name: Deploy k8s Master
   hosts: k8s-master
   remote_user: centos
   become: true
   roles:
     - k8s-master

 - name: Deploy k8s Nodes
   hosts: k8s-nodes
   remote_user: centos
   become: true
   roles:
     - k8s-nodes

Run the playbook.

  ansible-playbook -i hosts deploy_k8s.yml

If everything ran without errors, you should have your Kubernetes cluster running. Let's see if we can connect to it. You will need kubectl; on Fedora you can install the kubernetes-client package.

$ kubectl --server=192.168.124.40:8080 get nodes
NAME              STATUS    AGE
192.168.124.166   Ready     20s
192.168.124.55    Ready     20s
192.168.124.62    Ready     19s

That looks good. Let's see if we can run a container on this cluster.

$ kubectl --server=192.168.124.40:8080 run nginx --image=nginx
replicationcontroller "nginx" created

Check the status:

$ kubectl --server=192.168.124.40:8080 get pods
NAME          READY     STATUS    RESTARTS   AGE
nginx-ri1dq   0/1       Pending   0          55s

If you see the pod status in state Pending, just wait a few moments. If this is the first time you run the nginx container image, it needs to be downloaded first, which can take some time. Once your pod is running, you can try to enter the container.

kubectl --server=192.168.124.40:8080 exec -ti nginx-ri1dq -- bash
root@nginx-ri1dq:/#
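
To actually reach that nginx from outside the cluster, you could expose the replication controller as a service. A hedged sketch using the same kubectl options as above (the NodePort service type is an assumption about your network setup):

$ kubectl --server=192.168.124.40:8080 expose rc nginx --port=80 --type=NodePort
$ kubectl --server=192.168.124.40:8080 get svc nginx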

This is a rather basic setup (no HA masters, no auth, etc.). The idea is to improve these Ansible roles and add more advanced configuration.

If you are interested and want to try it out yourself you can find the source here:

https://gitlab.com/vincentvdk/ansible-k8s-atomic.git

July 29, 2016

Last week I made a comment on Twitter that I'd like to see Pantheon contribute more to Drupal core. I wrote that in response to the announcement that Pantheon has raised a $30 million Series C. Pantheon has now raised $50 to $60 million of working capital (depending on Industry Ventures' $8.5M) and is in a special class of companies. This is an amazing milestone. Though it wasn't meant that way, Pantheon and Acquia compete for business and my tweet could be read as a cheap attack on a competitor, and so it resulted in a fair amount of criticism. Admittedly, Pantheon was neither the best nor the only example to single out. There are many companies that don't contribute to Drupal at all – and Pantheon does contribute to Drupal in a variety of ways such as sponsoring events and supporting the development of contributed modules. In hindsight, I recognize that my tweet was not one of my best, and for that I apologize.

Having said that, I'd like to reiterate something I've said before, in my remarks at DrupalCon Amsterdam and many times on this blog: I would like to see more companies contribute more to Drupal core – with the emphasis on "core". Drupal is now relied upon by many, and needs a strong base of commercial contributors. We have to build Drupal together. We need a bigger and more diverse base of organizations taking on both leadership and contribution.

Contribution to Drupal core is the most important type of contribution in terms of the impact it can make. It touches every aspect of Drupal and all users who depend on it. Long-term and full-time contribution to core is not within everyone's reach. It typically requires larger investment due to a variety of things: the complexity of the problems we are solving, our need for stringent security and the importance of having a rigorous review-process. So much is riding on Drupal for all of us today. While every module, theme, event and display of goodwill in our community is essential, contributions to core are quite possibly the hardest and most thankless, but also the most rewarding of all when it comes to Drupal's overall progress and success.

I believe we should have different expectations for different organizations based on their maturity, their funding, their profitability, how strategic Drupal is for them, etc. For example, sponsoring code sprints is an important form of contribution for small or mid-sized organizations. But for any organization that makes millions of dollars with Drupal, I would hope for more.

The real question that we have to answer is this: at what point should an organization meaningfully contribute to Drupal core? Some may say "never", and that is their Open Source right. But as Drupal's project lead it is also my right and responsibility to encourage those who benefit from Drupal to give back. It should not be taboo for our community to question organizations that don't pull their weight, or choose not to contribute at all.

For me, committing my workdays and nights to Drupal isn't the exhausting part of my job. It's dealing with criticism that comes from false or incomplete information, or tackling differences in ideals and beliefs. I've learned not to sweat the small stuff, but it's on important topics like giving back that my emotions and communication skills get tested. I will not apologize for encouraging organizations to contribute to Drupal core. It's a really important topic and one that I'm very passionate about. I feel good knowing that I'm pushing these conversations from inside the arena rather than from the sidelines, and for the benefit of the Drupal project at large.

The post Enable QUIC protocol in Google Chrome appeared first on ma.ttias.be.

Google has support for the QUIC protocol in the Chrome browser, but it's only enabled for their own websites by default. You can enable it for use on other domains too -- assuming the webserver supports it. At this time, it's a setting you need to explicitly enable.

To start, open a new tab and go to chrome://flags/. Find the Experimental QUIC protocol and change the setting to Enabled. After the change, restart Chrome.
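
Alternatively, you can pass the setting on the command line when launching Chrome. A small sketch, assuming a Linux desktop with the google-chrome binary (the switch should be the same on other platforms):

$ google-chrome --enable-quic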

chrome_quic_support_setting

To find out if QUIC is enabled in your Chrome in the first place, go to chrome://net-internals/#quic.

In my case, it was disabled (which is the "default" value).

chrome_quic_internals_enabled

After changing the setting to enable QUIC support and restarting Chrome, the results were much better.

chrome_quic_internals_status_enabled

On the same page, you can also get a live list of which sessions are using the QUIC protocol. If it's enabled, it'll probably only be Google services for now.

chrome_quic_internals_sessions

I'm working on a blogpost to explain the QUIC protocol and how it compares to HTTP/2, so stay tuned for more QUIC updates!

The post Enable QUIC protocol in Google Chrome appeared first on ma.ttias.be.

July 28, 2016

As we all know, mmap (or, even worse on Windows, CreateFileMapping) needs contiguous virtual address space for a given mapping size. That can become a problem when you want to load a file of a gigabyte with mmap.

The solution is of course to mmap the big file using multiple mappings. For example, by adapting yesterday’s demo this way:

void FileModel::setFileName(const QString &fileName)
{
    ...
    if (m_file->open(QIODevice::ReadOnly)) {
        if (m_file->size() > MAX_MAP_SIZE) {
            m_mapSize = MAX_MAP_SIZE;
            m_file_maps.resize(1 + m_file->size() / MAX_MAP_SIZE, nullptr);
        } else {
            m_mapSize = static_cast(m_file->size());
            m_file_maps.resize(1, nullptr);
        }
        ...
    } else {
        m_index->open(QFile::ReadOnly);
        m_rowCount = m_index->size() / 4;
    }
    m_file_maps[0] = m_file->map(0, m_mapSize, QFileDevice::NoOptions);
    qDebug() << "Done loading " << m_rowCount << " lines";
    map_index = m_index->map(0, m_index->size(), QFileDevice::NoOptions);

    beginResetModel();
    endResetModel();
    emit fileNameChanged();
}

And in the data() function:

QVariant FileModel::data( const QModelIndex& index, int role ) const
{
    QVariant ret;
    ...
    quint32 mapIndex = pos_i / MAX_MAP_SIZE;
    quint32 map_pos_i = pos_i % MAX_MAP_SIZE;
    quint32 map_end_i = end_i % MAX_MAP_SIZE;
    uchar* map_file = m_file_maps[mapIndex];
    if (map_file == nullptr)
        map_file = m_file_maps[mapIndex] = m_file->map(mapIndex * m_mapSize, m_mapSize, QFileDevice::NoOptions);
    position = m_file_maps[mapIndex] + map_pos_i;
    if (position) {
            const int length = static_cast(end_i - pos_i);
            char *buffer = (char*) alloca(length+1);
            if (map_end_i >= map_pos_i)
                strncpy (buffer, (char*) position, length);
            else {
                const uchar *position2 = m_file_maps[mapIndex+1];
                if (position2 == nullptr) {
                    position2 = m_file_maps[mapIndex+1] = m_file->map((mapIndex+1) *
                         m_mapSize, m_mapSize, QFileDevice::NoOptions);
                }
                strncpy (buffer, (char*) position, MAX_MAP_SIZE - map_pos_i);
                strncpy (buffer + (MAX_MAP_SIZE - map_pos_i), (char*) position2, map_end_i);
            }
            buffer[length] = 0;
            ret = QVariant(QString(buffer));
        }
    }
    return ret;
}

You could also skip mmap for the very big source text file and use m_file.seek(map_pos_i) and m_file.read(buffer, length) instead. The most important mapping is of course the index one, as reading the individual lines can also be done fast enough with normal read() calls (as long as you don’t have to do it for each and every line of the very big file, and as long as you know in an O(1) way where the QAbstractListModel’s index.row()’s data is).

But you already knew that. Right?

The post Varnish Agent: an HTML frontend to manage & monitor your varnish installation appeared first on ma.ttias.be.

I've been using Varnish for several years, but I only just recently learned of the Varnish Agent. It's a small daemon that can connect to a running Varnish instance to help manipulate it: load new VCLs, see statistics, watch the varnishlog, flush caches, ...

If you're new to Varnish, this is an easier way of getting started than by learning all the CLI tools.

Installing Varnish Agent

The installation is pretty straightforward, assuming you're already using the Varnish repositories.

$ yum install varnish-agent

If you don't have the package available in your repositories, clone the source from varnish/vagent2 on Github and compile it yourself.

After that, start the service and it will bind on port :6085 by default.

$ systemctl start varnish-agent

By default, the web interface is protected by a simple HTTP authentication requiring username + password. Those get randomly generated during the installation and you can find them in /etc/varnish/agent_secret.

$ cat /etc/varnish/agent_secret
varnish:yourpass

After that, browse to $IP:6085, log in and behold Varnish Agent.

What does Varnish Agent look like?

To give you an idea, here's a screenshot of the Varnish agent running on this server.

(As you can see, it's powered by the Bootstrap CSS framework that I also used on this site.)

varnish_agent_demo

A couple of features are worth diving into even further.

Cache invalidation via Varnish Agent

One of the very useful features is that the Varnish Agent offers you a simple form to purge the cache for certain URLs. In Varnish terminology, this is called "banning".

varnish_agent_cache_invalidation

There are limits though: you can pass the URL, but you can't (yet?) pass the host. So if you want to ban the URL /index.html, you'll purge it for all the sites on that Varnish instance.
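
For comparison, this is roughly what the agent does for you behind the scenes. A minimal sketch using varnishadm directly (ban expression syntax as used in Varnish 4; example.com is just a placeholder):

$ varnishadm "ban req.url ~ ^/index\.html$"
$ varnishadm "ban req.http.host == example.com && req.url ~ ^/index\.html$"

The second form limits the ban to a single host, which is exactly what the web form can't do yet.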

See cache misses

Another useful one is the parsing of varnishtop right in the web frontend.

varnish_agent_varnishtop_cache_misses

It instantly shows you which URLs are being fetched from the backend and are thus cache misses. These are probably the URLs or HTTP calls to focus on and see where cacheability can be improved.

Inline VCL editor

I consider this a very dangerous feature but a lifesaver at the same time: the web frontend allows you to edit the VCL of Varnish and instantly load it in the running Varnish instance (without losing the cache). If you're hit by a sudden traffic spike or need to quickly manipulate the HTTP requests, having the ability to directly modify the Varnish VCL is pretty convenient.

Important to know is that the VCL configs aren't persisted on disk: they are passed to the running Varnish instance directly, but restarting the server (or the Varnish service) will cause the default .vcl file to be loaded again.

Varnishstat: statistics, graphs & numbers

The CLI tool varnishstat shows you the number of hits/misses, connections per second, ... from the command line. But it isn't very useful for looking at historical data. That's usually handled by your monitoring system, which fetches those datapoints and shows them in a timeline.

The Varnish Agent can parse those numbers and show you a (limited) timeline about how they evolved. It looks like this.

varnish_agent_graphs

The use case is limited, but it helps for a quick glance of the state of your Varnish instance.

Conclusion

While I personally still prefer the command line, I see the benefits of a simple web interface to quickly assess the state of your Varnish instance.

Having a built-in form to perform cache invalidation is useful and prevents having to create your own varnish URL purger.

If you're going to run Varnish Agent, make sure to look into firewalling the Varnish Agent port so only you are allowed access.
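
With plain iptables, that could look something like this (203.0.113.10 is a placeholder for your own management IP; use firewalld or whatever tooling already runs on the box):

$ iptables -A INPUT -p tcp --dport 6085 -s 203.0.113.10 -j ACCEPT
$ iptables -A INPUT -p tcp --dport 6085 -j DROP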

The post Varnish Agent: an HTML frontend to manage & monitor your varnish installation appeared first on ma.ttias.be.

July 27, 2016

By popular request...

If you go to the Debian video archive, you will notice the appearance of an "lq" directory in the debconf16 subdirectory of the archive. This directory contains low-resolution re-encodings of the same videos that are available in the toplevel.

The quality of these videos is obviously lower than the ones that have been made available during debconf, but their file sizes should be about 1/4th of the file sizes of the full-quality versions. This may make them more attractive as a quick download, as a version for a small screen, as a download over a mobile network, or something of the sort.

Note that the audio quality has not been reduced. If you're only interested in the audio of the talks, these files may be a better option.

I published the following diary on isc.sans.org: “Analyze of a Linux botnet client source code“.

I like to play active-defense. Every day, I extract attacker’s IP addresses from my SSH honeypots and perform a quick Nmap scan against them. The goal is to gain more knowledge about the compromised hosts. Most of the time, hosts are located behind a residential broadband connection. But sometimes, you find more interesting stuff. When valid credentials are found, the classic scenario is the installation of a botnet client that will be controlled via IRC to launch multiple attacks or scans… [Read more]

[The post [SANS ISC Diary] Analyze of a Linux botnet client source code has been first published on /dev/random]

July 26, 2016

Sometimes people want to do crazy stuff like loading a gigabyte-sized plain text file into a Qt view that can handle a QAbstractListModel, like for example a QML ListView. You know, the kind of file you generate with this command:

base64 /dev/urandom | head -c 100000000 > /tmp/file.txt

But, how do they do it?

FileModel.h

So we will make a custom QAbstractListModel. I will explain its private member fields later:

#ifndef FILEMODEL_H
#define FILEMODEL_H

#include <QObject>
#include <QVariant>
#include <QAbstractListModel>
#include <QFile>

class FileModel: public QAbstractListModel {
    Q_OBJECT

    Q_PROPERTY(QString fileName READ fileName WRITE setFileName NOTIFY fileNameChanged )
public:
    explicit FileModel( QObject* a_parent = nullptr );
    virtual ~FileModel();

    int columnCount(const QModelIndex &parent) const;
    int rowCount( const QModelIndex& parent =  QModelIndex() ) const Q_DECL_OVERRIDE;
    QVariant data( const QModelIndex& index, int role = Qt::DisplayRole ) const  Q_DECL_OVERRIDE;
    QVariant headerData( int section, Qt::Orientation orientation,
                         int role = Qt::DisplayRole ) const  Q_DECL_OVERRIDE;
    void setFileName(const QString &fileName);
    QString fileName () const
        { return m_file->fileName(); }
signals:
    void fileNameChanged();
private:
    QFile *m_file, *m_index;
    uchar *map_file;
    uchar *map_index;
    int m_rowCount;
    void clear();
};

#endif// FILEMODEL_H

FileModel.cpp

We will basically scan the very big source text file for newline characters. We’ll write the offsets of those to a file suffixed with “.mmap”. We’ll use that new file as a sort of “partition table” for the very big source text file, in the data() function of QAbstractListModel. But instead of sectors and files, it points to newlines.

The reason why the scanner itself isn’t using the mmap’s address space is because apparently reading blocks of 4kb is faster than reading each and every byte from the mmap in search of \n characters. Or at least on my hardware it was.

You should probably do the scanning in small QEventLoop iterations (make sure to use nonblocking reads, then) or in a thread, as your very big source text file can be on an unreliable or slow I/O device. Plus it's very big, else you wouldn't be doing this (please promise me to just read the entire text file into memory unless it's hundreds of megabytes in size: don't micro-optimize your silly homework notepad.exe clone).

Note that this is demo code with a lot of bugs, like not checking for \r, and god knows what memory leaks and leftovers remained when it suddenly worked. I leave it to the reader to improve this. An example is that you should check the validity of the “.mmap” file: your very big source text file might have changed since the newline partition table was made.

Knowing that I’ll soon find this all over the place without any of its bugs fixed, here it comes ..

#include "FileModel.h"

#include <QDebug>

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <pthread.h>
#include <unistd.h>

FileModel::FileModel( QObject* a_parent )
    : QAbstractListModel( a_parent )
    , m_file (nullptr)
    , m_index(nullptr)
    , m_rowCount ( 0 ) { }

FileModel::~FileModel() { clear(); }

void FileModel::clear()
{
    if (m_file) {
        if (m_file->isOpen() && map_file != nullptr)
            m_file->unmap(map_file);
        delete m_file;
    }
    if (m_index) {
        if (m_index->isOpen() && map_index != nullptr)
            m_index->unmap(map_index);
        delete m_index;
    }
}

void FileModel::setFileName(const QString &fileName)
{
   clear();
   m_rowCount = 0;
   m_file = new QFile(fileName);
   int cur = 0;
   m_index = new QFile(m_file->fileName() + ".mmap");
   if (m_file->open(QIODevice::ReadOnly)) {
       if (!m_index->exists()) {
           char rbuffer[4096];
           m_index->open(QIODevice::WriteOnly);
           char nulbuffer[4];
           int idxnul = 0;
           memset( nulbuffer +0, idxnul >> 24 & 0xff, 1 );
           memset( nulbuffer +1, idxnul >> 16 & 0xff, 1 );
           memset( nulbuffer +2, idxnul >>  8 & 0xff, 1 );
           memset( nulbuffer +3, idxnul >>  0 & 0xff, 1 );
           m_index->write( nulbuffer, sizeof(quint32));
           qDebug() << "Indexing to" << m_index->fileName();
           while (!m_file->atEnd()) {
               int in = m_file->read(rbuffer, 4096);
               if (in == -1)
                   break;
               char *newline = (char*) 1;
               char *last = rbuffer;
               while (newline != 0) {
                   newline = strchr ( last, '\n');
                   if (newline != 0) {
                     char buffer[4];
                     int idx = cur + (newline - rbuffer);
                     memset( buffer +0, idx >> 24 & 0xff, 1 );
                     memset( buffer +1, idx >> 16 & 0xff, 1 );
                     memset( buffer +2, idx >>  8 & 0xff, 1 );
                     memset( buffer +3, idx >>  0 & 0xff, 1 );
                     m_index->write( buffer, sizeof(quint32));
                     m_rowCount++;
                     last = newline + 1;
                  }
               }
               cur += in;
           }
           m_index->close();
           m_index->open(QFile::ReadOnly);
           qDebug() << "done";
       } else {
           m_index->open(QFile::ReadOnly);
           m_rowCount = m_index->size() / 4;
       }
       map_file= m_file->map(0, m_file->size(), QFileDevice::NoOptions);
       qDebug() << "Done loading " << m_rowCount << " lines";
       map_index = m_index->map(0, m_index->size(), QFileDevice::NoOptions);
   }
   beginResetModel();
   endResetModel();
   emit fileNameChanged();
}

static quint32
read_uint32 (const quint8 *data)
{
    return data[0] << 24 |
           data[1] << 16 |
           data[2] << 8 |
           data[3];
}

int FileModel::rowCount( const QModelIndex& parent ) const
{
    Q_UNUSED( parent );
    return m_rowCount;
}

int FileModel::columnCount(const QModelIndex &parent) const
{
    Q_UNUSED( parent );
    return 1;
}

QVariant FileModel::data( const QModelIndex& index, int role ) const
{
    if( !index.isValid() )
        return QVariant();
    if (role == Qt::DisplayRole) {
        QVariant ret;
        quint32 pos_i = read_uint32(map_index + ( 4 * index.row() ) );
        quint32 end_i;
        if ( index.row() == m_rowCount-1 )
            end_i = m_file->size();
        else
            end_i = read_uint32(map_index + ( 4 * (index.row()+1) ) );
        uchar *position;
        position = map_file +  pos_i;
        uchar *end = map_file + end_i;
        int length = end - position;
        char *buffer = (char*) alloca(length +1);
        memset (buffer, 0, length+1);
        strncpy (buffer, (char*) position, length);
        ret = QVariant(QString(buffer));
        return ret;
    }
    return QVariant();
}

QVariant FileModel::headerData( int section, Qt::Orientation orientation, int role ) const
{
    Q_UNUSED(section);
    Q_UNUSED(orientation);
    if (role != Qt::DisplayRole)
           return QVariant();
    return QString("header");
}

main.cpp

#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QtQml>// qmlRegisterType

#include "FileModel.h"

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);
    qmlRegisterType<FileModel>( "FileModel", 1, 0, "FileModel" );
    QQmlApplicationEngine engine;
    engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
    return app.exec();
}

main.qml

import QtQuick 2.3
import QtQuick.Window 2.2
import FileModel 1.0

Window {
    visible: true

    FileModel { id: fileModel }
    ListView {
        id: list
        anchors.fill: parent
        delegate: Text { text: display }
        MouseArea {
            anchors.fill: parent
            onClicked: {
                list.model = fileModel
                fileModel.fileName = "/tmp/file.txt"
            }
        }
    }
}

profile.pro

TEMPLATE = app
QT += qml quick
CONFIG += c++11
SOURCES += main.cpp \
    FileModel.cpp
RESOURCES += qml.qrc
HEADERS += \
    FileModel.h

qml.qrc

<RCC>
    <qresource prefix="/">
        <file>main.qml</file>
    </qresource>
</RCC>

The post Why do we automate? appeared first on ma.ttias.be.

Well that's a stupid question, to save time obviously!

I know, it sounds obvious. But there's more to it.

The last few years I've been co-responsible for determining our priorities at Nucleus. Deciding what to automate and where to focus our development and sysadmin efforts. Which tasks do we automate first?

That turns out to be a rather complicated question with many if's and but's.

For a long time, I only looked at the time-saved metric to determine what we should do next. I should've been looking at many more criteria.

To save time

This is the most common reason to automate and it's usually the only factor that helps decide whether there should be an effort to automate a certain task.

Example: time consuming capacity planning

Task: every week someone has to gather statistics about the running infrastructure to calculate free capacity, in order to purchase new capacity in time. This task takes an hour, every week.

Efforts to automate: it takes a developer two days of work to gather the info via APIs and create a weekly report to management.

Gain: the development efforts pay themselves back in about 16 weeks. Whether this is worth it or not depends on your organisation.

xkcd-automation

Source: XKCD: Automation

It's an image usually referenced when talking about automation, but it holds a lot of truth.

The "time gained" metric is multiplied by the people affected by it. If you can save 10 people 5 minutes every day, you've practically gained an extra workday every week.

To gain consistency

Sometimes a task is very complicated but doesn't need to happen very often. There are checklists and procedures to follow, but it's always a human (manual) action.

Example: complicated migrations

Task: an engineer sometimes has to move e-mail accounts from one server to another. This doesn't happen very often but consists of a large number of steps where human error is easily introduced.

Efforts to automate: it may take a sysadmin a couple of hours to create a series of scripts to help automate this task.

Gain: the value in automating this is in the quality of the work. It guarantees a consistent method of migrations that everyone can follow and creates a common baseline for clients. They know what to expect and the quality of the results is the same every time.

At the same time, this kind of automation reduces human made mistakes and leads to a combined knowledge set. If everyone who is an expert in his/her own domain contributes to the automation, it can bring together the skill set of very different people to create a much bigger whole: a collection of experiences, knowledge and opinions that ultimately lead to better execution and higher quality.

To gain speed, momentum and velocity

There are times when things just take a really long time in between tasks. It's very easy to lose focus or forget about follow-up tasks because you're distracted in the meanwhile.

Example: faster server setups and deliveries

Task: An engineer needs to install a new Windows server. Traditionally, this takes many rounds of Windows Updates and reboots. Most software installations require even more reboots.

Efforts to automate: a combination of PXE servers or golden templates and a series of scripts or config management to get the software stack to a reasonable state. A sysadmin (or team of) can spend several days automating this.

Gain: the immediate gain is in peace of mind and speed of operations. It reduces the time of go-live from several hours to mere minutes. It allows an operation to move much faster and consider new installations trivial.

This same logic applies to automating deployments of code or applications. By taking away the burden of performing deploys, it becomes much cheaper and easier to deploy very simple changes instead of prolonging deploys and going for big waterfall-like go-lives with lots of changes at once.

To schedule tasks

Some things need to happen at ungodly hours or at such a rapid interval that it's either impossible or impractical for a human to do.

Example: nightly maintenances

Task: Either as a one-time task or a recurring event, a set of MySQL tables needs to be altered. Given the application impact, this needs to happen outside office hours.

Efforts to automate: It will depend on the task at hand, but it's usually more work to automate than it is to do manually.

Gain: No one has to look at the task anymore. The fact that the maintenance can now be scheduled during off hours without human intervention makes it so that all preparations can be done during office hours -- well in advance -- and won't cause anyone to lose sleep over it.
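
As a small sketch of what that scheduling can look like on a Linux box (my_database and alter_tables.sql are made-up names, and the mysql client is assumed to read its credentials from ~/.my.cnf):

$ echo "mysql my_database < /root/alter_tables.sql" | at 03:00

Or, as a recurring crontab entry that runs every Monday at 03:00:

0 3 * * 1  mysql my_database < /root/alter_tables.sql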

It's quite common to spend more time making the script or automation than the time you would spend on it manually. The benefit however is that you no longer need to do things at night and you can prepare things, ask feedback from colleagues and take your time to think about the best possible way to handle it.

There is an additional benefit too: you automate to make things happen when they should, not when you remember they should.

To reduce boring or less fun tasks

If there's a recurring task that no one likes to do but is crucial to the organisation, it's probably worth automating.

Example: combining and merging related support tickets

Task: In a support department, someone is tasked to categorise incoming support tickets, merge the same tickets or link related tickets and distribute tasks.

Efforts to automate: A developer may spend several days writing the logic and algorithms to find and merge tickets automatically, based on pre-defined criteria.

Gain: A task that may be put on hold for too long because no one likes to do it, suddenly happens automatically. While it may not have been time consuming, the fact that it was put on hold too often impacts the organisation.

The actual improvement is to reduce the mental burden of having to perform those tasks in the first place. If your job consists of a thousand little tasks every day, it becomes easy to lose track of priorities.

To keep sysadmins and developers happy

Sometimes you automate things, not necessarily for any of the reasons above, but because your colleagues have signalled that it would be fun to automate it.

The tricky part here is assessing the value for the business. In the end, there should be value for the company.

Example: creating a dashboard with status reports

Task: Create a set of dashboards to be shown on monitors and TVs in the office.

Efforts to automate: Some hardware hacking with Raspberry Pi's, scripts to gather and display data and visualise the metrics and graphs.

Gain: More visibility in open alerts and overall status of the technology in the company.

Everyone that has dashboards knows the value they bring, but assessing whether it's worth the time and energy put into creating them is a very hard thing to do. How much time can you afford to spend creating them?

Improvements like these often come from colleagues. Listen to them and give them the time and resources to help implement them.

Validate your automation

The risk with putting so much faith and trust in your automation is that you may become blind for mistakes.

For instance, if you don't occasionally re-evaluate the rules or parameters your automation is based on, you may well be doing the wrong things. It can be even worse: because automation usually happens behind the scenes, mistakes like these can go on for weeks or months without anyone noticing.

Imagine writing a set of scripts to calculate margins or stock supplies, only to have the business demands shift without updating any of the parameters that are responsible for those decisions.

That'll quickly turn into a ticking time bomb.

Maybe we need to automate the validation & checking of our automation?

When to automate?

Given all these reasons on why to automate, this leaves the most difficult question of all: when to automate?

How and when do you decide whether something is worth automating? The time spent vs. time gained metric is easy to calculate, but how do you define the happiness of colleagues? How much is speed worth in your organisation?

Those are the questions that keep me up.

The post Why do we automate? appeared first on ma.ttias.be.

July 25, 2016

The post A new website layout, focussed on speed and simplicity appeared first on ma.ttias.be.

Out with the old, in with the new!

After a couple of years I felt it was time for a refresh of the design of this blog. It's already been through a lot of iterations, as it usually goes with WordPress websites. It's so easy to download and install a theme you can practically switch every day.

But the downside of WordPress themes is also obvious: you're running the same website as thousands of others.

Not this time, though.

PS: if you're reading this in your RSS feed or mail client, consider clicking through to the website to see the actual results.

Custom theme & design

This time, I decided to do it myself. Well, sort of. The layout is based on the Bootstrap CSS framework by Twitter. The design is inspired by Troy Hunt's site. Everything else I hand-crafted with varying degrees of success.

In the end, it's a WordPress theme that started out like this.

<?php

?>

Pretty empty.

Focus on the content

The previous design was chosen with a single goal in mind: maximise advertisement revenue. There were distinct locations for Google AdSense banners in the sidebar and on the top.

This time, I'm doing things differently: fuck ads.

I'm throwing away around €1,000 per year in advertisement revenue, but what I'm gaining is more valuable to me: peace of mind. Knowing there are ads influences your writing and topic selection. You become all about the pageviews. More views = more money. You choose link-bait titles. You write as quickly as you can just to get the exclusive on a story, not always for the better.

So from now on, it's a much more simple layout: content comes first. No more ads. No more bloat.

Speed improvements

The previous site was -- rather embarrassingly -- loading over 100 resources on every pageview. From CSS to JavaScript to images to remote trackers to ... well, everything.

blog_performance_old_theme

The old design: 110 requests with a total of 1.7MB of content. The page took more than 2 seconds to fully render.

With this new design, I focussed on getting as few requests as possible. And I think it worked.

blog_performance_new_theme

Most pages load with 14 HTTP requests for a total of ~300KB. It also renders a lot faster.

There are still some requests being made that I'd like to get rid of, but they're well hidden inside plugins I use -- even though I don't need their CSS files.

A lot of the improvements came from not including the default Facebook & Twitter widgets but working with the Font Awesome icon set to render the same buttons & links, without 3rd party tools.

Social links

I used to embed the Twitter follow & Facebook share buttons on the site. It had a classic "like this page" box in the right column. But those are loaded from a Twitter/Facebook domain and do all sorts of JavaScript and AJAX calls in the background, all slowing down the site.

Not to mention the tracking: just by including those pieces of JavaScript, I made every visitor involuntarily hand over their browsing habits to those players, all for their advertisement gains. No more.

To promote my social media, you can now find all necessary links in the top right corner -- in pure CSS.

social_follow

Want to share a page on social media? Those links are embedded in the bottom, also in CSS.

social_share

While the main motivator was speed and reducing the number of HTTP requests, not exposing my visitors to tracking they didn't ask for feels like a good move.

Why no static site generator?

If I'm going for speed, why didn't I pick a static site generator like Jekyll, Hugo or Octopress?

My biggest concern were comments.

With a statically generated site, I would have to embed some kind of 3rd party comment system like Disqus. I'm not a big fan for a couple of reasons:

  • Another 3rd party JavaScript/AJAX call that can be used for tracking
  • Comments are no longer a "part of" the website, in terms of SEO
  • I want to "own" the data: if all comments are moved to Disqus and they suddently disappear, I've lost a valuable part of this website

So, no static generator for me.

I do however combine WordPress with a static HTML plugin (similar to Wordfence). For most visitors, this should feel like a static HTML page with fast response times. It also helps me against large traffic spikes so my server doesn't collapse.

Typography

I'm a bit of a font-geek. I was a fan of webfonts for all the freedom they offered, but I'm taking a step back now to focus on speed. You see, webfonts are rather slow.

An average webfont that isn't in the browser cache takes about 150-300ms to load. All that for some typography? Doesn't seem worth it.

Now I'm following Github's font choice.

font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol";

In short: it takes the OS default wherever possible. This site will look slightly different on Mac OSX vs Windows.

For Windows, it looks like this.

font_windows.png

And for Mac OSX:

font_macosx

Despite this being a Linux and open source focussed blog, hardly any of my visitors use a Linux operating system -- so I've decided to ignore those special fonts for now.

Having said that, I do still use the Font Awesome webfonts for all the icons and glyphs you see. In terms of speed, I found it to be faster & more responsive to load a single webfont than to load multiple images and icons. And since I'm no frontend-guru, sprites aren't my thing.

Large, per-post cover images

This post has a cover image at the very top that's unique to this post. I now have the ability to give each post (or page) a unique feeling and design, just by modifying the cover image.

For most of my post categories I have sane defaults in case older posts don't have a custom header image. I like this approach, as it gives a sense of familiarity to each post. For instance, have a look at the design of these pages;

I like how the design can be changed for each post.

At the same time, I'm sacrificing a bit of my identity. My previous layouts all had the same theme for every page, creating -- hopefully -- a sense of familiarity and known ground. I'll have to see how this goes.

There's a homepage

This blog has always been a blog, pur sang. Nothing more.

But as of today, there is an actual homepage! One that doesn't just list the latest blogposts.

I figured it was time for some kind of persona building and having a homepage to showcase relevant projects or activities might persuade more visitors to keep up with my online activities (aka: Twitter followers).

Feedback appreciated!

I'm happy with the current layout, but I want to hear what you think: is it better or worse?

There are a couple of things I'm considering but haven't quite decided on yet:

  • related posts: should they be shown below every post? They clutter the UI and I don't think anyone ever bothers clicking through?
  • cronweekly/syscast "advertisements": the previous layout had a big -- but ugly -- call-to-action for every visitor to sign up for cron.weekly or check out the SysCast podcast. Those are missing now, I'm not yet sure if -- and how -- they should return.

If there are pages that need some additional markup, I'm all ears. Ping me on Twitter with a link!

The post A new website layout, focussed on speed and simplicity appeared first on ma.ttias.be.

July 24, 2016

It’s that time of the year again where I humbly ask Autoptimize’s users to download and test the “beta”-version of the upcoming release. I’m not entirely sure whether this should be 2.0.3 (a minor release) or 2.1.0 (a major one), but I’ll let you guys & girls decide, OK?

Anyway, the following changes are in said new release;

  • Autoptimize now adds a small menu to the admin-toolbar (can be disabled with a filter) that shows the cache size and provides the possibility to purge the cache. A big thanks to Pablo Custo for his hard work on this nice feature!
  • If the cache size becomes too big, a mail will be sent to the site admin (pass `false` to `autoptimize_filter_cachecheck_sendmail` filter to disable or pass alternative email to the `autoptimize_filter_cachecheck_mailto` filter)
  • An extra tab is shown (can be hidden with a filter) with information about my upcoming premium power-ups and other optimization tools and services.
  • Misc. bugfixes & small improvements (see the commit-log on GitHub)

So, if you’re curious about Pablo’s beautiful menu or if you just want to help Autoptimize out, download the beta and provide me with your feedback. If all goes well, we’ll be able to push it (2.1.0?) out in the first half of August!

July 23, 2016

  • visited FOSDEM; giving a lightning talk about Buildtime Trend; meeting Rouslan, Eduard and Ecaterina, and many others
  • attended a Massive Attack concert in Paleis 12.
  • visited Mount Expo, the outdoor fair organised by KBF
  • saw some amazing outdoor films on the BANFF film festival
  • spent a weekend cleaning routes with the Belgian Rebolting Team in Comblain-La-Tour. On Sunday we did some climbing in Awirs, where I finished a 6b after trying a few times.
  • First time donating blood plasma
  • Climbing trip to Gorges du Tarn with Vertical Thinking : climbing 6 days out of 7 (one day of rain), doing multipitch Le Jardin Enchanté, sending a lot of 5, 6a and 6a+ routes, focusing on reading the route, looking for footholds and taking small steps.
  • Some more route cleaning with BRT, this time in Flône, removing loose rocks and preparing to open new routes.
  • went to DebConf16 in CapeTown, talking about 365 days of Open Source and made a first contribution to Debian.
  • Visited South Africa and climbed in Rocklands/Cederberg

July 22, 2016

The post vsftpd on linux: 500 OOPS: vsftpd: refusing to run with writable root inside chroot() appeared first on ma.ttias.be.

The following error can occur when you've just installed vsftpd on a Linux server and try to FTP to it.

Command:	USER xxx
Response: 	331 Please specify the password.
Command:	PASS ******************
Response: 	500 OOPS: vsftpd: refusing to run with writable root inside chroot()
Error:        	Critical error: Could not connect to server

This is caused by the fact that the home directory of the user you're connecting as is writable. In normal chroot() situations, the chroot parent directory needs to be read-only.

This means that in most default useradd situations, where the home directory is created owned by and writable for the user, the above error of "vsftpd: refusing to run with writable root inside chroot()" will be shown.

To fix this, modify the configuration as follows.

$ cat /etc/vsftpd/vsftpd.conf
...
allow_writeable_chroot=YES

If that parameter is missing, just add it to the bottom of the config. Next, restart vsftpd.

$ service vsftpd restart

After that, FTP should run smoothly again.

Alternatively: please consider using sFTP (FTP over SSH) or FTPs (FTP via TLS) with a modified, non-writeable, chroot.
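
If you'd rather keep the chroot read-only than set allow_writeable_chroot, a minimal sketch is to strip the write bit from the home directory itself and give the user a writable subdirectory instead (the xxx user and the uploads directory are just placeholders):

$ chmod a-w /home/xxx
$ mkdir /home/xxx/uploads
$ chown xxx:xxx /home/xxx/uploads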

The post vsftpd on linux: 500 OOPS: vsftpd: refusing to run with writable root inside chroot() appeared first on ma.ttias.be.

July 21, 2016

The before and after of Boston.gov

Yesterday the City of Boston launched its new website, Boston.gov, on Drupal. Not only is Boston a city well-known around the world, it has also become my home over the past 9 years. That makes it extra exciting to see the city of Boston use Drupal.

As a company headquartered in Boston, I'm also extremely proud to have Acquia involved with Boston.gov. The site is hosted on Acquia Cloud, and Acquia led a lot of the architecture, development and coordination. I remember pitching the project in the basement of Boston's City Hall, so seeing the site launched less than a year later is quite exciting.

The project was a big undertaking as the old website was 10 years old and running on Tridion. The city's digital team, Acquia, IDEO, Genuine Interactive, and others all worked together to reimagine how a government can serve its citizens better digitally. It was an ambitious project as the whole website was redesigned from scratch in 11 months; from creating a new identity, to interviewing citizens, to building, testing and launching the new site.

Along the way, the project relied heavily on feedback from a wide variety of residents. The openness and transparency of the whole process was refreshing. Even today, the city made its roadmap public at http://roadmap.boston.gov and is actively encouraging citizens to submit suggestions. This open process is one of the many reasons why I think Drupal is such a good fit for Boston.gov.

Boston gov tell us what you think

More than 20,000 web pages and one million words were rewritten in a more human tone to make the site easier to understand and navigate. For example, rather than organize information primarily by department (as is often the case with government websites), the new site is designed around how residents think about an issue, such as moving, starting a business or owning a car. Content is authored, maintained, and updated by more than 20 content authors across 120 city departments and initiatives.

Boston gov tools and apps

The new Boston.gov is absolutely beautiful, welcoming and usable. And, like any great technology endeavor, it will never stop improving. The City of Boston has only just begun its journey with Boston.gov - I'm excited to see how it grows and evolves in the years to come. Go Boston!

Boston gov launch event
Last night there was a launch party to celebrate the launch of Boston.gov. It was an honor to give some remarks about this project alongside Boston Mayor Marty Walsh (pictured above), as well as Lauren Lockwood (Chief Digital Officer of the City of Boston) and Jascha Franklin-Hodge (Chief Information Officer of the City of Boston).
This is post 39 of 39 in the Printeurs series

Nellio, Eva, Max and Junior flee the sex-doll factory aboard a free automatic taxi.

The taxi whisks us away at full speed.
"Junior, are you sure we won't be traced?"
"Not if we use the free mode. The data is aggregated and anonymised. An old relic of an old law. And since the computer system works, nobody dares to update it or dig around in the databases too much. On the other hand, if we buy anything at all in the tunnel, we'll be spotted immediately!"

While answering, he gazes in wonder at the metal fingers Max grafted onto him.

"Wow, to think I waited all this time to get an ear implant! It's amazing!"
"It was necessary to implant the finger-control software," Max adds. "But the ear implant ships with a mild euphoria to dull the pain."
"By the way, Max, where are we going?"
"I contacted FatNerdz on the network. He gave me the coordinates of the conglomerate's headquarters in the industrial zone."
"Can we really trust this FatNerdz, whom nobody has ever seen or met?"

Max seems to hesitate for a moment.

"To be honest, what worse could happen to us than being taken out by explosive drones? And that's what will happen if we do nothing. There is clearly a fight going on to capture you, Nellio. We might as well clear all this up once and for all…"

I turn to Eva.

"Eva? Talk to me! Help us!"

She pierces me with a cold, cruel look.

"I think I know who FatNerdz is. I have no proof, but I have the deep conviction that I know him well. Too well, even…"

I don't even have time to express my astonishment before the car suddenly slows down. All the windows lower and our seats automatically turn towards the outside. Junior barks an order at us in an incredibly authoritative tone.

"Whatever you do, don't touch anything, don't buy anything! Keep your hands wedged under your backsides."

Before our eyes, vending machines start to scroll past, offering all kinds of products: sugary bars, coloured drinks, alcohol, clothes, accessories…

"Junior," I say, a little ashamed to admit my ignorance, "I've never taken the free tunnels. I've always been able to afford individual rides…"
"Lucky you! The free tunnels are free in name only. Used regularly, they end up costing the user far more than paying directly for individual rides. That's what makes the poor even poorer: they sell the only thing they have left, their personality and their free will, for an illusion of free."

Holograms begin to dance before my eyes: naked women and men wriggle about, sip enticing drinks and languorously hold out spoonfuls of yoghurt or pieces of reconstituted fruit to me. I feel a mixture of appetite, sexual desire and craving rising in me… Instinctively, I reach out towards a deliciously refreshing bottle of juice…

"No!" Junior yells at me, slapping my arm hard. "If you touch a single item, it will be charged to you via a retinal scan. Financial transactions are closely monitored under the anti-terrorism laws, so we'd be pulverised within the second! Hold on!"

The car seems to be going slower and slower. This tunnel is endless.

"As long as we don't buy anything, the car slows down," Junior whispers to me. "But there's a maximum duration. Hold on!"

I close my eyes to ease my urges, but the synthetic pheromones tease my senses. My nerves are raw; I feel assaulted, flayed, violated. Desire rises in me, I want to scream, I bite my hands until they bleed. I…

Light!

"We're out!"

The car picks up speed again. I breathe painfully. Big drops of sweat bead on my forehead. With his cybernetic hand, Junior strokes my shoulder.

"It's true that it must be rough the first time. The problem is that when you're exposed to it as a child, you develop a form of tolerance. Buying reflexes are the ones rooted in early childhood. Advertisers are therefore locked in ever fiercer competition to override those habits."

I turn to Eva, who seems to have remained impassive.

"Eva, yet you told me you hadn't been exposed to advertising either. Even less than me! You told me your parents made enormous sacrifices for that."

She hesitates. Chews her lips. An awkward silence settles in, which Max breaks.
"Eva, maybe it's time to tell him the truth."
"I don't know if he's ready to hear it…"

I scream!

"Damn it, I'm being manipulated, chased and hunted. I have every right to know what's happening to me! Shit, Eva, I sincerely believed I could count on you."
"You have always been able to count on me, Nellio. Always! I only lied to you about one thing: my origin."
"Then tell me everything!"
"I thought what you saw at the Toy & Sex factory was enough."
"Well, it wasn't! It made everything even more confusing for me! Why are those new-generation inflatable dolls made in your likeness?"

Max makes a sound that, if he had a biological larynx, would probably pass for a slight cough.

"Nellio," Eva continues softly, "those dolls are not made in my likeness."
"But…"
"It is I who am…"

A tremendous explosion suddenly rings out. The car is blasted and thrown violently onto its side. The crackle of gunfire can be heard.

"They've spotted us!" I scream.
"No," Junior replies. "If that were the case, we'd already be dead. It's most likely an attack."

The four of us are tangled together, upside down. Max tries to extricate himself from the vehicle. His feet and knees crush my ribs, but the pain remains bearable.

"Oh shit, an attack," I sigh, raising my hand to my bleeding forehead. "Those damned militants of the Islamic sultanates again!"
"Or police officers on assignment," Junior adds with a sly grin.
"Huh?"
"Yes, if there aren't enough attacks, small ones get organised just to justify the budgets. Sometimes they're local initiatives. Sometimes the orders come straight from the top, to push laws through or to justify new measures. Either way, it keeps people consuming news and keeps the télépass busy."

Max's voice reaches us from outside.

"Say, are you getting a move on? They're gunning everyone down on the other side of the street. But they may well come and finish off the survivors of the explosion."
"After you," I tell Junior with a blasé air, happy to finally live through an explosion in which I am not the primary target.

 

Photo by Oriolus.

Thank you for taking the time to read this freely-priced post. Feel free to support me with a few milliBitcoins: 12uAH27PhWepyzZ8kzY9rMAZzwpfsWSxXt, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE licence.

July 20, 2016

A bit of history One thing that never ceases to amaze me is how Activiti is being used in some very large organisations at some very impressive scales. In the past, this has led to various optimizations and refactorings, amongst which was the async executor – replacement for the old job executor. For the uninitiated: these executors handle […]

July 19, 2016

We now invite proposals for main track presentations, developer rooms, stands and lightning talks. FOSDEM offers open source and free software developers a place to meet, share ideas and collaborate. Renowned for being highly developer-oriented, the event brings together some 5000+ geeks from all over the world. The seventeenth edition will take place on Saturday 4th and Sunday 5th February 2017 at the usual location: ULB Campus Solbosch in Brussels. Main tracks: previous editions have featured main tracks centered around security, operating system development, community building, and many other topics. Presentations are expected to be 50 minutes long and…

July 18, 2016

FOSDEM 2017 will take place at ULB Campus Solbosch on Saturday 4 and Sunday 5 February 2017. Further details and calls for participation will be announced in the coming weeks and months. Have a nice summer!

July 15, 2016

I published the following diary on isc.sans.org: “Name All the Things!“.

With our more and more complex environments and processes, we have to handle a huge amount of information on a daily basis. To improve the communication with our colleagues, peers, it is mandatory to speak the same language and to avoid ambiguities while talking to them. A best practice is to apply a naming convention to everything that can be labeled. It applies to multiple domains and not only information security… [Read more]

[The post [SANS ISC Diary] Name All the Things! has been first published on /dev/random]

July 14, 2016

Notes from Implementing Domain Driven Design, chapter 2: Domains, Subdomains and Bounded Contexts (p58 and later only)

  • User interface and service orientated endpoints are within the context boundary
  • Domain concepts in the UI form the Smart UI Anti-Pattern
  • A database schema is part of the context if it was created for it and not influenced from the outside
  • Contexts should not be used to divide developer responsibilities; modules are a more suitable tactical approach
  • A bounded context has one team that is responsible for it (while teams can be responsible for multiple bounded contexts)
  • Access and identity is its own context and should not be visible at all in the domain of another context. The application services / use cases in the other context are responsible for interacting with the access and identity generic subdomain
  • Context Maps are supposedly real cool

I published the following diary on isc.sans.org: “The Power of Web Shells“.

Web shells are not new in the threat landscape. A web shell is a script (written in PHP, ASP, Perl, …, depending on the available environment) that can be uploaded to a web server to enable remote administration. While web shells are sometimes installed for legitimate purposes, many of them are installed on compromised servers. Once in place, the web shell will allow a complete takeover of the victim's server, but it can also be used to pivot and attack internal systems… [Read more]

[The post [SANS ISC Diary] The Power of Web Shells has been first published on /dev/random]

July 13, 2016

The post Highly Critical Remote Code Execution patch for Drupal (PSA-2016-001) appeared first on ma.ttias.be.

Update: patch released, see updates below.

For everyone running Drupal, beware: today a highly critical patch is going to be released.

There will be multiple releases of Drupal contributed modules on Wednesday July 13th 2016 16:00 UTC that will fix highly critical remote code execution vulnerabilities (risk scores up to 22/25). These contributed modules are used on between 1,000 and 10,000 sites. The Drupal Security Team urges you to reserve time for module updates at that time because exploits are expected to be developed within hours/days. Release announcements will appear at the standard announcement locations. PSA-2016-001

Important to know is that Drupal core itself isn't affected.

Drupal core is not affected. Not all sites will be affected. You should review the published advisories on July 13th 2016 to see if any modules you use are affected. PSA-2016-001

The vulnerability is an "Arbitrary PHP code execution" one, meaning anyone could use it to execute PHP code of their own on the server. In most environments, PHP isn't restricted in what it can and cannot do, so allowing arbitrary PHP execution is just as dangerous as a Bash remote code execution exploit. Make sure to keep an eye on the patch!

Update 13/07/2016

3 modules have been updated:

Get patching!

Here's the diff for the Coder module:

$ diff -r coder_upgrade/scripts/coder_upgrade.run.php \
   coder_upgrade/scripts/coder_upgrade.run.php
54,59d53
< if (!script_is_cli()) {
<   // Without proper web server configuration, this script can be invoked from a
<   // browser and is vulnerable to misuse.
<   return;
< }
<
219,227d212
<
< /**
<  * Returns boolean indicating whether script is being run from the command line.
<  *
<  * @see drupal_is_cli()
<  */
< function script_is_cli() {
<   return (!isset($_SERVER['SERVER_SOFTWARE']) && (php_sapi_name() == 'cli' || (is_numeric($_SERVER['argc']) && $_SERVER['argc'] > 0)));
< }

Here's the diff for the RESTWS module:

$ diff -r restws.module restws.module
268c268
<         'page arguments' => array($resource, 'drupal_not_found'),
---
>         'page arguments' => array($resource),
287c287
<         'page arguments' => array($resource, 'drupal_not_found'),
---
>         'page arguments' => array($resource),
308c308
<           'page arguments' => array($resource, 'drupal_not_found'),
---
>           'page arguments' => array($resource),
319,327d318
<  *
<  * @param string $resource
<  *   The name of the resource.
<  * @param string $page_callback
<  *   The page callback to pass through when the request is not handled by this
<  *   module. If no other pre-existing callback is used, 'drupal_not_found'
<  *   should be passed explicitly.
<  * @param mixed $arg1,...
<  *   Further arguments that are passed through to the given page callback.
329c320
< function restws_page_callback($resource, $page_callback) {
---
> function restws_page_callback($resource, $page_callback = NULL) {
431,433c422,427
<   // Fall back to the passed $page_callback and pass through more arguments.
<   $args = func_get_args();
<   return call_user_func_array($page_callback, array_slice($args, 2));
---
>   if (isset($page_callback)) {
>     // Further page callback arguments have been appended to our arguments.
>     $args = func_get_args();
>     return call_user_func_array($page_callback, array_slice($args, 2));
>   }
>   restws_terminate_request('404 Not Found');

The post Highly Critical Remote Code Execution patch for Drupal (PSA-2016-001) appeared first on ma.ttias.be.

July 11, 2016

Earlier today I updated my performance-centric TwentyTwelve child theme to fix a problem with the mobile navigation (due to the fact that TwentyTwelve changed the menu button from an h3 to a button, which required the navigation JS that 2012.FFWD inlines to be updated as well). You can download the update here.

This update "officially" marks the end-of-life of this child theme. Although a lot of optimizations can be done at the theme level, I prefer focusing on tools like my own Autoptimize, which not only optimizes code spit out by the theme but also any CSS/JS introduced by plugins or widgets.

The post How To Get Pokémon Go on iPhone Outside US appeared first on ma.ttias.be.

In case you missed it, the world is going crazy over Pokémon Go. But, it's -- apparently -- only for the US, UK or Australia. That shouldn't stop you from getting the game, though!

These steps get you the game on an iPhone outside of the supported regions.

Important update: the Pokémon Go app will get full access to your Google Account. It can read all your email, see all your browsing history and see all your contacts. If you do not want this, do not install the app.

It gets full access to your google account.


Here's what full access means:


But the iOS app will look harmless.


If you can live with this, follow the steps below to get the game on your phone.

If at any point you change your mind, you can revoke the app's permissions to your Google Account in Google's Security settings.

Get a new e-mail address

One of the steps is to create a new Apple ID, and that requires a unique e-mail address. If you're nerdy enough to have your own domain name, create a new alias that points to your main address.

If you don't have that, use a throwaway e-mail account like on throwawaymail.com, which gives you one-time use e-mail addresses.

Sign out of the App Store

Open the App Store, scroll to the bottom of the Featured tab and sign out of your account by clicking on the "Apple ID: email address" button.

pokemon_1_app_store_log_out

Click your e-mail address, then choose "sign out".

pokemon_2_app_store_log_out

Go to the Australian App Store

Follow this link: itunes.apple.com/au/app/pokemon-go/id1094591345?mt=8.

The easiest way is to open this blogpost on your iPhone and tap the link; it'll prompt you to open the App Store.

pokemon_3_open_store

Alternatively, if you have a Mac with handoff enabled, open the above link in Safari and continue the session on your iPhone.

Once the App Store opens, you'll get a message that the game isn't available in your own store and you should switch to the Australian one, click on Change Store to proceed.

pokemon_4_change_app_store

Now, on to the fun stuff!

Search for Pokémon Go

Once you're in the correct app store, search for Pokémon Go.

pokemon_5_search_pokemon_go

Install the app. It'll prompt you to create a new account.

pokemon_6_log_in_apple_id

Now, on to create your new account.

Create a new App Store ID for Pokémon Go

Create a new Apple ID with the e-mail address you chose in step 1. Choose Australia as the region.

pokemon_7_apple_id_australia

As with everything, accept the terms and conditions.

pokemon_8_apple_id_accept

When you are prompted for your billing information, simply choose None.

pokemon_9_billing_information_none.png

The billing address needs to be valid, so I suggest you go with this address (I randomly chose it and it seems to be valid).

  • Title: Mr.
  • First name: John
  • Last name: Doe
  • Address: 301 Dogville Avenue
  • Postcode: 7000
  • City: Hobart
  • State: TAS
  • Phone: 123-456789

At the next step, Apple will send a validation e-mail to your e-mail address.

pokemon_10_validation_email

Go to it and confirm the address.

Search for Pokémon Go and install it

While still in the App Store, search for the game again, log in with your e-mail address and password, and install away!

Cleanup: log back into your original account

As a last task, you'll want to go back to the Featured page in the app store, scroll all the way to the bottom and log back in to your original account.

Done!

If you have concerns about 'what will happen when the game officially launches in Belgium? Will I lose my state in the game?', I can't say for sure, but since you have to log into your Google account in the game -- same as with Ingress -- it appears to be tied to your Google account, not your Apple device.

If this guide didn't work for you, there's also an excellent Youtube walk-through available.

The post How To Get Pokémon Go on iPhone Outside US appeared first on ma.ttias.be.

July 10, 2016

Apparently there are once again a few detectives who believe that the rule of law is not meant for them; that, like Judge Dredd, they can get to work wiretapping hospitals, doctors and care providers such as psychologists.

Anything goes when it comes to propping up their parallel constructions. The law is not in their way. Who needs that anyway? Laws? Pfft. Dredd doesn't bother with those. Judge Dredd is the law. What does it matter.

Did they have any indication that the doctor was in on the conspiracy? No, there was none. Otherwise, why was the Order of Physicians not informed? It was simply completely illegal to wiretap that hospital.

I sincerely hope these detectives receive a heavy prison sentence and, on top of that, are never allowed to work as detectives again.

We don't need that here. Go play policeman in the UK instead. As long as it still exists. Bunch of bunglers.

July 07, 2016

In one of my recent blog posts, I articulated a vision for the future of Drupal's web services, and at DrupalCon New Orleans, I announced the API-first initiative for Drupal 8. I believe that there is considerable momentum behind driving the web services initiative. As such, I want to provide a progress report, highlight some of the key people driving the work, and map the proposed vision from the previous blog post onto a rough timeline.

Here is a bird's-eye view of the plan for the next twelve months:

  • 8.2 (Q4 2016): New REST API capabilities, Waterwheel initial release
  • 8.3 (Q2 2017): New REST API capabilities, JSON API module
  • Beyond 8.3 (2017+): GraphQL module?, Entity graph iterator?

New REST API capabilities

Wim Leers (Acquia) and Daniel Wehner (Chapter Three) have produced a comprehensive list of the top priorities for the REST module. We're introducing significant REST API advancements in Drupal 8.2 and 8.3 in order to improve the developer experience and extend the capabilities of the REST API. We've been focused on configuration entity support, simplified REST configuration, translation and file upload support, pagination, and last but not least, support for user login, logout and registration. All this work starts to address differences between core's REST module and various contributed modules like Services and RELAXed Web Services. More details are available in my previous blog post.
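
To get a feel for what the core REST module exposes, here is a hedged sketch of a request against a Drupal 8 site; example.com and the node ID are placeholders, and the exact response depends on which resources and formats the site has enabled.

$ curl -H 'Accept: application/json' 'https://example.com/node/1?_format=json'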

Many thanks to Wim Leers (Acquia), Daniel Wehner (Chapter Three), Ted Bowman (Acquia), Alex Pott (Chapter Three), and others for their work on Drupal core's REST modules. Though there is considerable momentum behind efforts in core, we could always benefit from new contributors. Please consider taking a look at the REST module issue queue to help!

Waterwheel initial release

As I mentioned in my previous post, there has been exciting work surrounding Waterwheel, an SDK for JavaScript developers building Drupal-backed applications. If you want to build decoupled applications using a JavaScript framework (e.g. Angular, Ember, React, etc.) that use Drupal as a content repository, stay tuned for Waterwheel's initial release later this year.

Waterwheel aims to facilitate the construction of JavaScript applications that communicate with Drupal. Waterwheel's JavaScript library allows JavaScript developers to work with Drupal without needing deep knowledge of how requests should be authenticated against Drupal, what request headers should be included, and how responses are molded into particular data structures.

The Waterwheel Drupal module adds a new endpoint to Drupal's REST API allowing Waterwheel to discover entity resources and their fields. In other words, Waterwheel intelligently discovers and seamlessly integrates with the content model defined on any particular Drupal 8 site.

A wider ecosystem around Waterwheel is starting to grow as well. Gabe Sullice (Aten Design Group), creator of the Entity Query API module, has contributed an integration of Waterwheel which opens the door to features such as sorts, conditions and ranges. The Waterwheel team welcomes early adopters as well as those working on other REST modules such as JSON API and RELAXed or using native HTTP clients in JavaScript frameworks to add their own integrations to the mix.

Waterwheel is currently the work of Matt Grill (Acquia) and Preston So (Acquia), who are developing the JavaScript library, and Ted Bowman (Acquia), who is working on the Drupal module.

JSON API module

In conjunction with the ongoing efforts in core REST, parallel work is underway to build a JSON API module which embraces the JSON API specification. JSON API is a particular implementation of REST that provides conventions for resource relationships, collections, filters, pagination, and sorting, in addition to error handling and full test coverage. These conventions help developers build clients faster and encourages reuse of code.
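
As a rough illustration of those conventions (my own sketch, not an official example; example.com, the article bundle and the page limit are assumptions), a collection request with sorting and pagination could look like this once the module is installed:

$ curl -H 'Accept: application/vnd.api+json' 'https://example.com/jsonapi/node/article?sort=-created&page[limit]=10'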

Thanks to Mateu Aguiló Bosch, Ed Faulkner and Gabe Sullice (Aten Design Group), who are spearheading the JSON API module work. The module could be ready for production use by the end of this year and included as an experimental module in core by 8.3. Contributors to JSON API are meeting weekly to discuss progress moving forward.

Beyond 8.3: GraphQL and entity graph iterator

While these other milestones are either certain or in the works, there are other projects gathering steam. Chief among these is GraphQL, which is a query language I highlighted in my Barcelona keynote and allows for clients to tailor the responses they receive based on the structure of the requests they issue.

One of the primary outcomes of the New Orleans web services discussion was the importance of a unified approach to iterating Drupal's entity graph; both GraphQL and JSON API require such an "entity graph iterator". Though much of this is still speculative and needs greater refinement, eventually, such an "entity graph iterator" could enable other functionality such as editable API responses (e.g. aliases for custom field names and timestamp formatters) and a unified versioning strategy for web services. However, more help is needed to keep making progress, and in absence of additional contributors, we do not believe this will land in Drupal until after 8.3.

Thanks to Sebastian Siemssen, who has been leading the effort around this work, which is currently available on GitHub.

Validating our work and getting involved

In order to validate all of the progress we've made, we need developers everywhere to test and experiment with what we're producing. This means stretching the limits of our core REST offerings, trying out JSON API for your own Drupal-backed applications, reporting issues and bugs as you encounter them, and participating in the discussions surrounding this exciting vision. Together, we can build towards a first-class API-first Drupal.

Special thanks to Preston So for contributions to this blog post and to Wim Leers for feedback during its writing.

July 06, 2016

The post The Bash For Loop, The First Step in Automation on Linux appeared first on ma.ttias.be.

I believe mastering the for loop in Bash on Linux is one of the fundamentals for Linux sysadmins (and even developers!) that takes your automation skills to the next level. In this post I explain how they work and offer some useful examples.

Update 07/06/2016: lots of critique on Reddit (granted: well deserved), so I updated most of the examples on this page for safer/saner defaults.

Let me first start by saying something embarrassing. For the first 4 or 5 years of my Linux career -- which is nearing 10 years of professional experience -- I never used loops in Bash scripts. Or at the command line.

The thing is, I was a very fast mouse-clicker. And a very fast copy/paster. And a good search & replacer in vim and other text editors. Quite often, that got me to a working solution faster than working out the quirky syntax, testing, bugfixing, ... of loops in Bash.

And, to be completely honest, if you're managing just a couple of servers, I think you can get away with not using loops in Bash. But, once you master it, you'll wonder why you haven't learned Bash for-loops sooner.

The Bash For Loop: Example

First, let me show you the most basic -- and the one you'll see most often -- form of a Bash loop.

#!/bin/bash
for i in 1 2 3 4 5; do
  echo "counter: $i"
done

If you execute such a script, the output looks like this.

$ ./script.sh
counter: 1
counter: 2
counter: 3
counter: 4
counter: 5

Pretty basic, right? Here's what it breaks down to.

bash_loop_explained_1

The first part, #!/bin/bash, is the shebang, sometimes called a hashbang. It indicates which interpreter is going to be used to parse the rest of the script. In short, it's what makes this a Bash script.

The rest is where the Bash for loop actually comes in.

  1. for: indicates that this is a loop, and that you'd like to iterate (or "go over") multiple items.
  2. i: a placeholder for a variable, which can later be referenced as $i. i is often used by developers to loop or iterate over an array or a hash, but this can be anything (*). For clarity, it could also have been named counter, the variable to reference it later on would then be $counter, with a dollar sign.
  3. in: a keyword, indicating the separator between the variable i and the collection of items to run over.
  4. 1 2 3 4 5: whichever comes between the in keyword and the ; delimiter is the collection of items you want to run through. In this example, the collection "1 2 3 4 5" is considered a set of 5 individual items.
  5. do: this keyword defines that from this point on, the loop starts. The code that follows will be executed n times, where n is the number of items in the collection, in this case a set of 5 digits.
  6. echo "counter: $i": this is the code inside the loop, that will be repeated -- in this case -- 5 times. The $i variable is the individual value of each item.
  7. done: this keyword indicates that the code that should be repeated in this loop, has finished.

(*) Technically, the variable can't be anything as there are limitations of what characters can be used in a variable, but that's beyond the scope here. Keep it to alphanumerics without spaces or special chars, and you're probably safe.

Lots of text, isn't it?

Well, look at the screenshot again and just remember the different parts of the for loop. And remember also that this isn't limited to actual "scripts"; it can be condensed into a single line for use at the command line, too.

$ for i in 1 2 3 4 5; do echo "counter: $i"; done

The same breakdown occurs there.

bash_loop_explained_2

There is one important difference, though. Right before the last done keyword, there is a semicolon ; to indicate that the command ends there. In the Bash script, that isn't needed, because the done keyword is placed on a new line, which also ends the command above it.

Actually, that very first example I showed you? It can be rewritten without a single ;, by just new-lining each line.

#!/bin/bash
for i in 1 2 3 4 5
do
  echo "counter: $i"
done

You can pick whichever style you prefer or find most readable/maintainable.

The result is exactly the same: a set of items is looped and for each occurrence, an action is taken.

Values to loop in Bash

Looping variables isn't very exciting in and of itself, but it gets very useful once you start to experiment with the data to loop through.

For instance:

$ for file in *; do echo "$file"; done
file1.txt
file2.txt
file3.txt

Granted, you could just run ls and get the same list, but using the * glob inside your for statement is essentially the same as $(ls) with safe output: it won't mangle filenames the way parsing ls output can. The glob is expanded before the actual for-loop runs, and the resulting list is used as the collection of items to iterate over.
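
For instance, a more specific glob works the same way; quoting "$file" keeps filenames with spaces intact (the /var/log path is just an example):

$ for file in /var/log/*.log; do echo "Found log file: $file"; done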

This opens up a lot of opportunities, especially if you take the seq command in mind. With seq you can generate sequences at the CLI.

For instance:

$ seq 25 30
25
26
27
28
29
30

That generates the numbers 25 through 30. So if you'd like to loop from 1 to 255, you can do this:

$ for counter in $(seq 1 255); do echo "$counter"; done

When would you use that? Maybe to ping a couple of IPs or connect to multiple remote hosts and fire off a few commands.

$ for counter in $(seq 1 255); do ping -c 1 "10.0.0.$counter"; done
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
...

Now we're talking.

Ranges in Bash

You can also use some of the built-in Bash primitives to generate a range, without using seq. The code below does exactly the same as the ping example above.

$ for counter in {1..255}; do ping -c 1 10.0.0.$counter; done

More recent Bash versions (Bash 4.x at least) can also extend this syntax to change the step by which each integer increments. By default it's always +1, but you can make it +5 if you like.

$ for counter in {1..255..5}; do echo "ping -c 1 10.0.0.$counter"; done
ping -c 1 10.0.0.1
ping -c 1 10.0.0.6
ping -c 1 10.0.0.11
ping -c 1 10.0.0.16

Looping items like this can allow you to quickly automate an otherwise mundane task.

Chaining multiple commands in the Bash for loop

You obviously aren't limited to a single command in a for loop, you can chain multiple ones inside the for-loop.

#!/bin/bash
for i in 1 2 3 4 5; do
  echo "Hold on, connecting to 10.0.1.$i"
  ssh root@"10.0.1.$i" uptime
  echo "All done, on to the next host!"
done

Or, at the command line as a one-liner:

$ for i in 1 2 3 4 5; do echo "Hold on, connecting to 10.0.1.$i"; ssh root@"10.0.1.$i" uptime; echo "All done, on to the next host"; done

You can chain multiple commands with the ; semicolon; the last keyword will be done, to indicate you're, well, done.
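
A small variation I find useful (my own sketch, reusing the same assumed 10.0.1.x hosts) chains with && and || instead, so the loop reports which hosts failed:

$ for i in 1 2 3 4 5; do ssh root@"10.0.1.$i" uptime && echo "10.0.1.$i OK" || echo "10.0.1.$i FAILED"; done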

Bash for-loop examples

Here are a couple of "bash for loop" examples. They aren't necessarily the most useful ones, but show some of the possibilities.

For each user on the system, write their password hash to a file named after them

One-liner:

$ for username in $(awk -F: '{print $1}' /etc/passwd); do grep "^$username:" /etc/shadow | awk -F: '{print $2}' > "$username.txt"; done

Script:

#!/bin/bash
for username in $(awk -F: '{print $1}' /etc/passwd)
do
  grep "^$username:" /etc/shadow | awk -F: '{print $2}' > "$username.txt"
done

Rename all *.txt files to remove the file extension

One-liner:

$ for filename in *.txt; do mv "$filename" "${filename%.txt}"; done

Script:

#!/bin/bash
for filename in *.txt
do
  mv "$filename" "${filename%.txt}"
done

Use each line in a file as an IP to connect to

One-liner:

$ for ip in $(cat ips.txt); do ssh root@"$ip" yum -y update; done

Script:

#!/bin/bash
for ip in $(cat ips.txt)
do
  ssh root@"$ip" yum -y update
done
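
A hedged alternative is a while read loop, which reads the file line by line; note the -n flag so ssh doesn't swallow the remaining lines of ips.txt from stdin:

#!/bin/bash
while read -r ip
do
  ssh -n root@"$ip" yum -y update
done < ips.txt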

Debugging for loops in Bash

Here's one way I really like to debug for-loops: just echo everything. This is also a great way to "generate" a static Bash script, by capturing the output.

For instance, in the ping example, you can do this:

$ for counter in {1..255..5}; do echo "ping -c 1 10.0.0.$counter"; done

That will echo each ping statement. Now you can also catch that output, write it to another Bash file and keep it for later (or modify manually if you're struggling with the Bash loop -- been there, done that).

$ for counter in {1..255..5}; do echo "ping -c 1 10.0.0.$counter"; done > ping-all-the-things.sh
$ more ping-all-the-things.sh
ping -c 1 10.0.0.1
ping -c 1 10.0.0.6
ping -c 1 10.0.0.11
...

It may be primitive, but this gets you a very long way!
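
Another option, not covered above, is Bash's built-in tracing: run the script with -x and every expanded command is printed (prefixed with +) before it executes. For the first example script, the output is roughly this.

$ bash -x ./script.sh
+ for i in 1 2 3 4 5
+ echo 'counter: 1'
counter: 1
...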

The post The Bash For Loop, The First Step in Automation on Linux appeared first on ma.ttias.be.

July 05, 2016

I've been tweaking the video review system which we're using here at debconf over the past few days so that videos are being published automatically after review has finished; and I can happily announce that as of a short while ago, the first two files are now visible on the meetings archive. Yes, my own talk is part of that. No, that's not a coincidence. However, the other talks should not take too long ;-)

Future plans include the addition of a video RSS feed, and showing the videos on the debconf16 website. Stay tuned.

July 04, 2016

I'm on the train back from Paris, where I attended the 2016 edition of the RMLL Security Track. The RMLL, or "Rencontres Mondiales du Logiciel Libre", is an annual event around free software. Amongst multiple tracks, there is always one dedicated to information security (around free software, of course). The main event was not scheduled this year, but the team behind the security track are really motivated people and they decided to organize the track despite the cancellation. I already attended the previous editions (2013, 2014, 2015) and came back again for this one.

The organisation of the security track was the same as before: free for everybody, a good size to facilitate networking, interesting talks, live streaming and a nice opportunity to meet good friends again… It was held at the Mozilla offices in the centre of the French capital, a nice place! After the welcome speech and some house-keeping rules, the first half-day started with a keynote by Ange Albertini: "Connecting Communities". Ange started with a fact: it's not always easy to share knowledge, and hackers are no exception. This can be summed up by the following quote: "Rage against the infosec circus". For Ange, it is clear: stop having ideas, try! Conferences are a nice way to share findings and the results of security research. Two good examples of such initiatives:

Ange is an active contributor to PoC||GTFO and he gave more information about the magazine. Being print-first, there are hard deadlines to get things done; the electronic version always comes later. There is one issue per quarter, so there is no rush if you miss one, and they definitely prefer quality over quantity in the articles. They also don't commit to any particular number of pages. An article is often the result of exchanges between people.

The funny part of the electronic version: each issue is a proof-of-concept. It is delivered as a PDF but always contains a hidden part. Some examples from previous editions:

  • a MBR
  • a TrueCrypt container
  • a JPG image
  • a Ruby web server serving the file itself (my preferred one)

About those PoC, Ange’s response is just “because why not?“. The conclusion to this keynote: “We are looking for more people to share more knowledge“.

The first talk was presented by Andrea Barisani about his baby: the USB Armory. This is not the first time I've seen Andrea talk about this awesome project (the last time was at hack.lu), so I won't describe the project again. Just saying: when I saw it for the first time, I ordered one through the crowdfunding campaign and I don't regret it. This is a very cool device. I'm sure anybody could find an interesting way to use it. Some examples are:
  • a Tor proxy
  • an SSH proxy
  • a password manager
  • a safe file container
  • a portable toolbox
Andrea made a nice demo of Interlock. The integration of the Signal application with the safe file container makes the USB Armory a cool device for your privacy.
The next presentation was given by Paul Kocialkowski: "Verified boot and free software: reconciling freedom and security". Paul reviewed the status of open implementations of boot software. He started with a review of the BIOS history:
  • In the 80’s, the BIOS was stored in read-only memory
  • In the 90’s, the hardware became more complex and the BIOS moved to read-write memory (ex: for upgrade reasons)
  • In the 2000’s, we saw run-time services

Paul reviewed how a computer boots and what the security issues are. There are different open source projects like Coreboot, U-Boot, Barebox or Libreboot. About the security issues, there are two different approaches: "to boot or not to boot" (verified boot) or "measured boot" (with a state indication). He reviewed how all the controls are implemented and how complex it is to build a fully open source boot environment. It was a very technical talk with many abbreviations, not always easy to follow if you don't play with such tools every day.

After a short coffee break, the Qubes OS project was presented by its lead developer, Marek Marczykowski-Górecki. The Qubes OS project started from a fact: the current design of operating systems is not safe enough. Current systems are monolithic: all the drivers and services are part of a single environment, and applications (PDF viewers, web browsers, …) have access to all data. For years, people have been fighting for more network segmentation: if today it is normal to have DMZs for public servers, why not implement the same at the OS level? That's what Qubes OS does. The OS is based on a "bare-metal" hypervisor (Xen) which runs multiple virtual machines to handle very specific functions of the operating system (for example, USB devices). Secure inter-VM communication is implemented via qrexec. The project seems interesting, but since it is based on a hypervisor, the hardware must be compatible. There is an HCL ("Hardware Compatibility List") available, but not all computers are able to run Qubes OS. Nice story during the Q&A session after the presentation: it looks like the Let's Encrypt project is using Qubes OS.
To finish the first half-day, Serge Guelton came to present his tool: binmap. This project started from a simple idea: automate the boring stuff that we do regularly. What does binmap do?
  • It walks through a file system and collects binary files
  • It analyses them and saves results in a database
  • It builds dependencies graphs

Besides an inventory of (interesting) binaries, binmap is very helpful to detect whether applications installed on a system are vulnerable. If a component is vulnerable to CVE-xxx and this component is used by multiple applications (a good example is OpenSSL, just saying), you immediately "see" where the vulnerable applications are. Based on multiple scans, it is also possible to track changes and build a (kind of) versioning system. This is a nice tool that you should add to your personal toolbox!

After a nice evening in Paris and the speaker dinner, the second day came with a schedule full of talks. The first speaker was Julien Pivotto, who talked about "DNS & Security", a great cocktail! Indeed, without DNS there is no Internet, and both DNS servers and the protocol itself are great targets. After a quick recap of how DNS works, Julien gave some best practices:
  • Do not mix authoritative and recursive servers
  • Mix different brands of DNS servers
  • Hide your DNS master
  • Do not invent new TLD (this is especially important since the market of “funny” TLD’s expanded)

A lot of information can be stored in DNS: there are many different record types. Amongst them, the TXT records may contain a lot of useful information like SPF and DKIM records, keybase.io validation records or Let's Encrypt DNS challenges. Then Julien switched to more details about DNSSEC. This isn't something brand new (the RFCs date from around 2000) but it took a long time before being implemented; the root DNS servers have only been DNSSEC-aware since 2010. The goal is to prove the origin and integrity of zones through record signing. Julien reviewed how it works. It sounds very interesting, but honestly DNSSEC is not very convenient to implement and mistakes are common. There are also constraints like the regular key renewal.
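
A couple of hedged examples of poking at those records with dig (example.com is a placeholder):

$ dig +short example.com TXT     # SPF, DKIM or ACME challenge data ends up in TXT records
$ dig +dnssec example.com A      # ask for the DNSSEC signatures (RRSIG) along with the answer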

Then, Guiseppe Longo came to speak about mixing IDPS and nftables. nftables is the next generation of iptables: the implementation of IP filters is completely different and a new language was introduced. To interact with the kernel, a new user-space tool is available: nft. Guiseppe reviewed the different concepts behind nftables: tables, chains, types (filter, route, NAT), hooks (prerouting, forward, input, …), expressions, rules, sets and dictionaries. It seems very powerful but completely different from the syntax we have used for years. Then, Guiseppe explained how nftables can be interconnected with the Suricata ID(P)S. There are two modes available:
  • The “IDS” mode: working with pcap, AF_PACKET or NFLOG technologies
  • The “IPS” mode: working with NFQUEUE, IPFW, AF_PACKET.

The different modes and the available options to automatically block malicious traffic were reviewed. It is possible to implement very nice filters, such as dropping SSH connections from suspicious SSH clients. Powerful! The only remark I had was: how many organisations really implement such filters? Many of them can't take the risk of blocking legitimate traffic for business reasons…
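
For readers who have never touched nftables, a rough sketch of the NFQUEUE-based "IPS" wiring described above could look like the commands below; the table and chain names are my own choice, not something from the talk.

$ nft add table inet filter
$ nft add chain inet filter forward '{ type filter hook forward priority 0; }'
$ nft add rule inet filter forward counter queue num 0
$ suricata -q 0 -c /etc/suricata/suricata.yaml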

After a first coffee break, Sébastien Bardin presented a tool called BINSEC: "Binary level semantic analysis to the rescue". It was quite hard for me to follow this technical presentation. What you must keep in mind: the life of a program has the following stages: model > source code > assembler > executable code. To analyse the behaviour of a program, we do not always have access to its source code. Also, can we trust source code coming from external sources? Can we trust the compiler that optimizes the code? To illustrate this, Sébastien reviewed CVE-2016-0777. The compiler decided, for optimisation reasons, to remove a memset() call that was clearing a memory zone containing sensitive information. The presentation was a deep introduction to BINSEC.

Ivan Kwiatkowski presented his tool called Manalyze. The goal of the tool is to analyse PE files (Microsoft Windows executables). The PE format can be very complex and is used by a lot of malware, so it is always interesting to get a deeper view of a file and "prevent annoyance of antivirus' opaque decisions", as Ivan said. The tool is available as a command-line utility, and a website exists to submit your own samples. Note that an intensive testing phase (fuzzing) was performed and a bug bounty organised to ensure the quality of the tool.

After the lunch break, J.C. Jones from Mozilla presented the "Let's Encrypt" project. This was not a technical talk; the project has already been analysed many times. J.C. came back to the challenge of becoming a certificate authority. Netscape introduced HTTPS in 1995 with the 1.0N release of its browser. Not long ago, only 40.01% of web traffic was HTTPS and, since the official launch of Let's Encrypt, it has increased by 8% (in only seven months). The main difficulty was to become trusted. It's binary: you are or you aren't! It is also based on a threat model: if someone issues a bad certificate, Let's Encrypt will not be trusted anymore. J.C. reviewed the constraints they faced during the design and deployment of the platform. As an example, did you know that the data and states must be kept for at least 7.5 years? It was a very interesting talk.

Then, Julien Vehent, also from Mozilla, presented a talk about DevOps & security. It started with a fact: today, speed matters. You must be able to deploy new code to production in 15 minutes. The traditional cycle does not work anymore. In an ideal world, all deployments are automated and instantaneous, which can be an issue for the security people. That's why Julien explained that security must be integrated into DevOps. Security tests must be implemented in the delivery pipeline. For example, a 30-minute meeting can be organised to perform the RRA ("Rapid Risk Assessment"). Some tests can be automated to prevent developers from making common mistakes (e.g. based on the OWASP Top 10 for web applications). As usual, plenty of ideas but, IMHO, not so easy to implement in the real world.

The next talk was epic: Clément Oudot started with… a song! "Imagine SSOng", based on John Lennon's "Imagine". Awesome!
Clément singing "Imagine SSOng"

Clément's presentation was a review of the common authentication mechanisms for web applications. After the classic BasicAuth, DigestAuth and cookies, Clément reviewed some protocols developed by US universities, like:

  • CoSign
  • PubCookie
  • WebAuth
  • CAH
  • CAS
  • WebID
  • BrowserID
  • SAML
  • BrowserID
  • OpenID

As you can see, there is a lack of standardisation. Each protocol was reviewed with more focus on SAML and OpenID.

The last talk was about Ring, by Adrien Béraud. Today, people want to communicate privately. There are already plenty of applications available, but some of them are restricted to a limited set of supported devices, others are obscure, and it is not easy to choose the right one. Based on this fact, Ring was developed as an easy-to-use, free, distributed communication platform. It is secure, robust, built on top of open standards and distributed under a GPLv3 license. A Ring account is a key pair. No account is created on a central server. To communicate with a peer, create a new key, scan the QR code and… talk! Text, video and audio communications are available on multiple platforms. The quick demo was nice; it worked like a charm. Peers can find each other via the OpenDHT protocol. Connections are established using peer-to-peer TLS and calls are placed via SIP. The project is still ongoing and some major features are missing, like using multiple devices per user (with a sub-key for each device) or user name registration. It looks promising, keep an eye on it! Finally a tool available on most platforms? To close the second day, a rump session was organized (read: "lightning talks") with interesting topics.

And finally, the third day! The planning changed slightly: the first talk about Git was cancelled (Anne Nicolas was not present due to illness). Instead, we had a live demo of Qubes OS. Marek demonstrated how VMs can be quickly deployed to perform different tasks and how to safely exchange information between them. The first example was the creation of a quick service to retrieve data from the Internet (the price of a Bitcoin) and to make it available in a VM which has no network connectivity. The different steps were reviewed: creation of the service, creation of the policy, as explained during the talk on Monday. Another demo was based on PDF files: "qvm-convert-pdf <file.pdf>" will spawn a temporary VM, convert the PDF into images and make it safe to view. This was a very interesting demo which definitely helped to better understand the logic of Qubes OS.
The next talk was about the security of Python applications and how to audit them with the Bandit tool, by Michael Scherer from Red Hat. Here is a fact: "It's all about code". Many organisations rely on business-critical applications, and updates often have a cost. If something can go wrong, it will (remember Murphy's law), and the update of a piece of code can have a huge impact if not performed properly or if issues occur. Testing the code is always interesting but, according to Michael, there was a lack of tools focusing on Python. That's why Bandit was developed. The goal is to reveal dangerous pieces of code. Bandit is based on Python's ast module (abstract syntax trees) for parsing.
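
For reference, running Bandit against a code base is as simple as pointing it at a directory (the path below is a placeholder):

$ pip install bandit
$ bandit -r /path/to/python/project
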
With the last minute planning change, an extra rump session was proposed by Julien Vehent from Mozilla about tips and best practices to implement TLS. They provide a tool to generate your TLS config for well-known web servers: https://mozilla.github.io/server-side-tls/ssl-config-generator/.

Then, Antoine Cervoise presented his research about IoT ("Internet of Threats", as he said). His talk was called "Hands-on security for DIY projects". It all started, as usual, with a connected gadget (in this case a smart plug) and it quickly became a nightmare. To control the device, an Android application is available, but it turned out to be malicious! Antoine performed several security checks against the device with classic scanners (Nmap, Nessus, …). With Nessus, the standard profiles did not return interesting information, but a specifically created profile (with longer timeouts) revealed interesting stuff, like an increase of the device temperature (too many CPU cycles?). Scary! Antoine reviewed some classic vulnerabilities and bad practices around the IoT, but also some best practices. Thanks to him for the mention of my presentation from the previous year.
The next time slot was assigned to me. I presented the same talk as last week in Athens: “Building A Poor man’s Fir3Ey3 Mail Scanner“.
After the lunch break, Raphaël Vinot presented the MISP project ("Malware Information Sharing Platform"). He reviewed the most interesting features and also who is (or should be) using it. In fact, MISP addresses requirements from many profiles: malware reversers, incident handlers, law enforcement agencies, risk analysis teams but also fraud analysts (MISP can handle IOCs like Bitcoin addresses, mobile numbers or bank accounts). But no tool is perfect and MISP suffers from some issues in its default setup. For example, it is not possible to merge events (related to the same campaign) or to compare campaigns, and there can be search performance issues in some cases. That's why there are some interesting projects running in parallel to the main one, like:
  • MISP Galaxy
  • MISP Hashstore
  • MISP Workbench
Have a look at them on https://github.com/MISP. Raphaël made an interesting demo with the analysis of PE files.
Then, Sébastien Larinier & Paul Rascagneres presented "Complex malware & forensic investigation". The talk was based on Fast-IR ("Fast Incident Response collector"). The goal of this tool is to quickly collect artefacts from a (potentially) compromised host. It is a standalone tool (easy to execute from a USB key). It collects plenty of artefacts like: filesystems (IE History, Named Pipes, Prefetch, Recycle-bin, health, ARP Table, Drives list, Network drives, Networks Cards, Processes, Routes Tables, Tasks, Scheduled jobs, Services, Sessions, Network Shares, Sockets), the Windows registry (Installer Folders, OpenSaveMRU, Recents Docs, Services, Shellbags, Autoruns, USB History, Userassists, Networks List), memory (Clipboard, dlls loaded, Opened Files), dumps (MFT, MBR, RAM, DISK, Registry, SAM) and FileCatcher (based on interesting MIME types). The goal of the talk was not to describe the tool but to show it in real cases: rootkits, bootkits, user-land RATs. Paul reviewed several cases of real malware:
  • Uroburos
  • ComRAT
  • Babar
  • Casper
  • Powerless (running entirely from the registry)
But the main issue with the Fast-IR collector is that some people can't execute it themselves, or the host to be investigated is simply located somewhere else. The idea was therefore to build a system based on agents and a server. The agent is deployed and runs in the background. Every x minutes, it asks the server whether there is a task to execute. If needed, a Fast-IR collector can be launched remotely. Agents act like a real botnet, in fact, but this one is not malicious! The code is brand new and is available here: the agent and the server.
For the next slot, the "serial speaker", Julien Vehent (as named by the organizers), came back on stage to present a Mozilla project: MIG ("Mozilla InvestiGator"). I already had the chance to attend this presentation last year at HITB Amsterdam. If you're interested, read my wrap-up. In a few words, the idea behind the tool was a lack of time and resources to run investigations across thousands of hosts (the same issue as described by Sébastien and Paul in the previous talk). Here are some practical cases:
  • Finding systems connected to a specific C&C
  • Fixed small mistakes (ex: search for files containing a key or password)
  • Measuring security compliance (ex: search for "^passwordauthentication no$" in /etc/ssh/sshd_config)
It’s a very interesting tool but like the FastIR agent, the main issue for most customers is “how to deploy it across thousands of hosts”. In the case of Mozilla, it’s easy because all the hosts are internal.
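
That last compliance example boils down to something like this when checked locally on a single host; MIG's value is doing it across the whole fleet at once (-i because sshd_config keywords are case-insensitive):

$ grep -iE '^passwordauthentication no$' /etc/ssh/sshd_config && echo "compliant" || echo "not compliant"
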
Finally, the last talk was about MOWR ("More Obvious Web-malware Repository") by Julien Revered and Antide Petit. Honestly, I did not know the tool before the presentation and, based on the abstract, I was expecting to learn about a tool to analyse URLs. In fact, MOWR can easily be deployed on a Linux host and analyses submitted web files (mainly PHP or ASP). Once the analysis is completed, a report is presented with the findings. See it as a "VT-like website dedicated to web malware".
This talk closed the 2016 edition of the stand-alone security track. Next year, the track will be back within the main RMLL organisation and will be held in Saint-Etienne, France. All the slides are already available here and the videos should follow soon…

[The post RMLL Security Track 2016 Wrap-Up has been first published on /dev/random]

July 01, 2016

Consider these rather simple relationships between classes

Continuing on this subject, here are some code examples.

Class1 & Class2: Composition
An instance of Class1 can not exist without an instance of Class2.

A typical example of composition is a Bicycle and its Wheels, Saddle and HandleBar: without these the Bicycle is no longer a Bicycle but just a Frame.

It can no longer function as a Bicycle. A rule of thumb for recognising composition over aggregation is whenever you can say: without the other thing, the first thing can't work in our software.

Note that you must consider this in the context of Class1. You use aggregation or composition based on how Class2 exists in relation to Class1.

Class1 with QScopedPointer:

#ifndef CLASS1_H
#define CLASS1_H

#include <QObject>
#include <QScopedPointer>
#include <Class2.h>

class Class1: public QObject
{
    Q_OBJECT // required so that moc generates the signal and property metadata
    Q_PROPERTY( Class2* class2 READ class2 WRITE setClass2 NOTIFY class2Changed)
public:
    Class1( QObject *a_parent = nullptr )
        : QObject ( a_parent) {
        // Don't give this Class2 a QObject parent: the QScopedPointer owns it
        m_class2.reset( new Class2() );
    }
    Class2* class2() {
        return m_class2.data();
    }
    void setClass2 ( Class2 *a_class2 ) {
        Q_ASSERT (a_class2 != nullptr); // Composition can't set a nullptr!
        if ( m_class2.data() != a_class2 ) {
            m_class2.reset( a_class2 );
            emit class2Changed();
        }
    }
signals:
    void class2Changed();
private:
    QScopedPointer<Class2> m_class2;
};

#endif// CLASS1_H
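
A short usage sketch for this variant (function and variable names are illustrative): setClass2() hands the raw pointer to the QScopedPointer, and reset() deletes the previously owned instance.

#include <Class1.h>

void replaceClass2()
{
    Class1 object;
    Class2 *replacement = new Class2();  // ownership moves to object, don't delete it here
    object.setClass2( replacement );     // reset() deletes the old Class2 and keeps the new one
}   // object goes out of scope and its QScopedPointer deletes the current Class2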

Class1 with QObject parenting:

#ifndef CLASS1_H
#define CLASS1_H

#include <QObject>
#include <Class2.h>

class Class1: public QObject
{
    Q_OBJECT
    Q_PROPERTY( Class2* class2 READ class2 WRITE setClass2 NOTIFY class2Changed)
public:
    Class1( QObject *a_parent = nullptr )
        : QObject ( a_parent )
        , m_class2 ( nullptr ) {
        // Make sure to use QObject parenting here
        m_class2 = new Class2( this );
    }
    Class2* class2() {
        return m_class2;
    }
    void setClass2 ( Class2 *a_class2 ) {
         Q_ASSERT (a_class2 != nullptr); // Composition can't set a nullptr!
         if ( m_class2 != a_class2 ) {
             // Make sure to use QObject parenting here
             a_class2->setParent ( this );
             delete m_class2; // Composition can never be nullptr
             m_class2 = a_class2;
             emit class2Changed();
         }
    }
signals:
    void class2Changed();
private:
    Class2 *m_class2;
};

#endif// CLASS1_H
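
With this variant the QObject parent-child tree does the clean-up. A short usage sketch, with illustrative names:

#include <Class1.h>

void parentedCleanUp()
{
    Class1 *object = new Class1();
    Class2 *child = object->class2();    // owned by object through QObject parenting
    Q_UNUSED( child );
    delete object;                       // QObject's destructor also deletes the Class2 child
}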

Class1 with RAII:

#ifndef CLASS1_H
#define CLASS1_H

#include <QObject>
#include <QScopedPointer>

#include <Class2.h>

class Class1: public QObject
{
    Q_OBJECT
    Q_PROPERTY( Class2* class2 READ class2 CONSTANT)
public:
    Class1( QObject *a_parent = nullptr )
        : QObject ( a_parent ) { }
    Class2* class2()
        { return &m_class2; }
private:
    Class2 m_class2;
};
#endif// CLASS1_H
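
In the RAII variant there is no setter and the Q_PROPERTY is CONSTANT: the Class2 is a plain member whose lifetime equals that of its Class1. A short usage sketch, with illustrative names and assuming Class2 is QObject-based as in the placeholder above:

#include <Class1.h>

void useRaiiVariant()
{
    Class1 object;
    object.class2()->setObjectName( "wheel" );  // always a valid pointer, no ownership to manage
}   // the Class2 member is destroyed together with object, no delete involved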

Class3 & Class4: Aggregation

An instance of Class3 can exist without an instance of Class4. An example of aggregation is typically a Bicycle and its Driver or Passenger: without the Driver or Passenger it is still a Bicycle. It can still function as a Bicycle.

You can stop deliberating between composition and aggregation whenever you can say: without the other thing, the first thing can still work in our software.

Class3:

#ifndef CLASS3_H
#define CLASS3_H

#include <QObject>

#include <QPointer>
#include <Class4.h>

class Class3: public QObject
{
    Q_OBJECT
    Q_PROPERTY( Class4* class4 READ class4 WRITE setClass4 NOTIFY class4Changed)
public:
    Class3( QObject *a_parent = nullptr )
        : QObject ( a_parent ) { }
    Class4* class4() {
        return m_class4.data();
    }
    void setClass4 (Class4 *a_class4) {
         if ( m_class4 != a_class4 ) {
             m_class4 = a_class4;
             emit class4Changed();
         }
    }
signals:
    void class4Changed();
private:
    QPointer<Class4> m_class4;
};
#endif// CLASS3_H
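
The QPointer is what makes this aggregation safe: when the aggregated Class4 is deleted elsewhere, the QPointer resets itself to nullptr instead of dangling. A short usage sketch, with illustrative names and assuming Class4 can be constructed without arguments:

#include <Class3.h>
#include <Class4.h>

void aggregationDoesNotOwn()
{
    Class3 class3;
    Class4 *class4 = new Class4();
    class3.setClass4( class4 );              // Class3 only refers to it, it takes no ownership
    delete class4;                           // deleted by whoever does own it
    Q_ASSERT( class3.class4() == nullptr );  // the QPointer nulled itself automatically
}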

Class5, Class6 & Class7: Shared composition
An instance of Class5 and/or an instance of Class6 cannot exist without an instance of Class7, which is shared by Class5 and Class6. When one of Class5 or Class6 can exist without the shared instance while the other cannot, use a QWeakPointer in the class that can.

Class5:

#ifndef CLASS5_H
#define CLASS5_H

#include <QObject>
#include <QSharedPointer>

#include <Class7.h>

class Class5: public QObject
{
    Q_OBJECT
    Q_PROPERTY( Class7* class7 READ class7 CONSTANT)
public:
    Class5( const QSharedPointer<Class7> &a_class7, QObject *a_parent = nullptr )
        : QObject ( a_parent )
        , m_class7 ( a_class7 ) { }
    Class7* class7()
        { return m_class7.data(); }
private:
    QSharedPointer<Class7> m_class7;
};
#endif// CLASS5_H

Class6:

#ifndef CLASS6_H
#define CLASS6_H

#include <QObject>
#include <QSharedPointer>

#include <Class7.h>

class Class6: public QObject
{
    Q_OBJECT
    Q_PROPERTY( Class7* class7 READ class7 CONSTANT)
public:
    Class6( const QSharedPointer<Class7> &a_class7, QObject *a_parent = nullptr )
        : QObject ( a_parent )
        , m_class7 ( a_class7 ) { }
    Class7* class7()
        { return m_class7.data(); }
private:
    QSharedPointer<Class7> m_class7;
};
#endif// CLASS6_H
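
Because both constructors take a QSharedPointer, Class5 and Class6 share one reference count and the Class7 instance is deleted exactly once, when its last owner goes away. A short usage sketch, with illustrative names and assuming Class7 can be constructed without arguments:

#include <Class5.h>
#include <Class6.h>

void sharedComposition()
{
    QSharedPointer<Class7> class7( new Class7() );
    Class5 class5( class7 );
    Class6 class6( class7 );
    // All three QSharedPointers share one reference count; the Class7 is
    // deleted when the last of them (class5's, class6's or this local one) goes away
}

If, say, Class6 should be allowed to outlive the shared Class7, its member would become a QWeakPointer<Class7> instead, promoted with toStrongRef() before each use.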

Interfaces with QObject

FlyBehavior:

#ifndef FLYBEHAVIOR_H
#define FLYBEHAVIOR_H
#include <QObject>
// Don't inherit QObject here (you'll break multiple-implements)
class FlyBehavior {
    public:
        Q_INVOKABLE virtual void fly() = 0;
        virtual ~FlyBehavior() {}
};
Q_DECLARE_INTERFACE(FlyBehavior , "be.codeminded.Flying.FlyBehavior /1.0") 
#endif// FLYBEHAVIOR_H

FlyWithWings:

#ifndef FLY_WITH_WINGS_H
#define FLY_WITH_WINGS_H
#include <QObject>  
#include <Flying/FlyBehavior.h>
// Do inherit QObject here (this is a concrete class)
class FlyWithWings: public QObject, public FlyBehavior
{
    Q_OBJECT
    Q_INTERFACES( FlyBehavior )
public:
    explicit FlyWithWings( QObject *a_parent = nullptr ): QObject ( a_parent ) {}
    ~FlyWithWings() {}

    virtual void fly() Q_DECL_OVERRIDE;
};
#endif// FLY_WITH_WINGS_H
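
The header only declares fly(), so a .cpp is needed; here is a minimal sketch, in which the include path and the body of fly() are assumptions. It also shows why the Q_DECLARE_INTERFACE and Q_INTERFACES pair matters: qobject_cast can then reach the interface from a plain QObject pointer.

// FlyWithWings.cpp
#include <Flying/FlyWithWings.h>
#include <QDebug>

void FlyWithWings::fly()
{
    qDebug() << "Flapping wings";
}

// Callers only need the abstract FlyBehavior
void letItFly( QObject *a_object )
{
    FlyBehavior *flyer = qobject_cast<FlyBehavior *>( a_object );
    if ( flyer != nullptr )
        flyer->fly();
}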

It’s official, nginx is a heap of donkey dung. I replaced it with ye olde apache:

sudo service nginx stop
sudo apt-get -y purge nginx
sudo apt-get -y install apache2 apachetop libapache2-mod-php5
sudo apt-get -y autoremove
sudo service apache2 restart

AND DONE!

The post Ye Olde Apache appeared first on amedee.be.