Subscriptions

Planet Grep is open to all people who either have the Belgian nationality or live in Belgium, and who actively work with or contribute to Open Source/Free software.

About Planet Grep...

Other planets can be found at the Planet Index.

A complete feed is available in a number of syndication formats: RSS 1.0, RSS 2.0, FOAF, and OPML.

The layout of this site was done by Gregory

November 24, 2014

Mattias Geniar

Debugging Performance Problems With Zabbix Internal Items

Even after all these years, Zabbix remains my monitoring tool of choice. There are plenty of alternatives, but years of investing in the configs, the templates and the automation have kept my love for it. But it's not always easy to

The post Debugging Performance Problems With Zabbix Internal Items appeared first on ma.ttias.be.

by Mattias Geniar at November 24, 2014 08:57 PM

Remove Orphaned Data From Zabbix’s MySQL Tables

A few years ago, I wrote a couple of SQL queries that I put onto Github to clean up a Zabbix database. It'll take items, triggers, events etc. that are no longer attached to a host, and remove them from

The post Remove Orphaned Data From Zabbix’s MySQL Tables appeared first on ma.ttias.be.

by Mattias Geniar at November 24, 2014 06:00 PM

Snakes On A Keyboard

Now this is a very cool hardware mod. You have had this keyboard for all of 24 hours now. The thing has a bunch of LEDs and some arrow keys. I'm disappointed that you haven't got Snake running on it

The post Snakes On A Keyboard appeared first on ma.ttias.be.

by Mattias Geniar at November 24, 2014 04:23 PM

Fabian Arrotin

Switching from Ethernet to Infiniband for Gluster access (or why we had to …)

As explained in my previous (small) blog post, I had to migrate a Gluster setup we have within the CentOS.org infra. As I also said in that post, Gluster is really easy to install, and sometimes it can even "smell" too easy to be true. One thing to keep in mind when dealing with Gluster is that it's a "file-level" storage solution, so don't try to compare it with "block-level" solutions (typically a NAS vs SAN comparison, even though "SAN" itself is the wrong term for such a discussion: a SAN is what sits *between* your nodes and the storage itself, just as a reminder).

Within the CentOS.org infra, we have a multi-node Gluster setup that we use for multiple things at the same time. The Gluster volumes are used to store some files, but also to host (on different Gluster volumes with different settings/ACLs) the KVM virtual disks (qcow2). People who know me will say: "hey, for performance reasons it's faster to just dedicate, for example, a partition or a Logical Volume instead of using qcow2 images sitting on top of a filesystem for Virtual Machines, right?", and that's true. But with our limited number of machines, and a need to "move" Virtual Machines without a proper shared storage solution (and because in our setup those physical nodes *are* both glusterd servers and hypervisors), Gluster was an easy-to-use solution to :

It was working, but not that fast ... I then heard that (obviously) accessing those qcow2 image files through FUSE wasn't efficient at all, but that Gluster has libgfapi, which can be used to "talk" directly to the gluster daemons, completely bypassing the need to mount your gluster volumes locally through FUSE. Thankfully, qemu-kvm from CentOS 6 is built against libgfapi and so can use it directly (which is also why it gets installed automatically when you install the KVM hypervisor components). The results? Better, but still not what we were expecting ...

While trying to find the issue, I talked with some folks in the #gluster IRC channel (irc.freenode.net) and suddenly understood something that is *not* so obvious about Gluster in distributed+replicated mode: people who have dealt with storage solutions at the hardware level (or who have used DRBD, which I did too in the past, and which I also liked a lot) expect the replication to happen automatically on the storage/server side, but that's not true for Gluster. In fact glusterd just exposes metadata to the gluster clients, which then know where to read/write (being "redirected" to the correct gluster nodes). That means that replication happens on the *client* side: in replicated mode, the client itself writes the same data twice, once to each server ...

So back to our example: our nodes have two 1Gb/s Ethernet cards, one used as a bridge for the Virtual Machines and the other "dedicated" to gluster, and each node is itself both a glusterd server and a gluster client. You can guess the maximum performance we could get for a write operation: 1Gbit/s (~125MB/s), divided by two because of the replication, so in theory ~62MB/s (and once you remove the TCP/gluster overhead, that drops to ~55MB/s).

How to solve that? Well, I tested that theory and confirmed it directly: in distributed-only mode, write performance automatically doubled. So yes, running Gluster on Gigabit Ethernet had suddenly become the bottleneck. Upgrading to 10Gb Ethernet wasn't something we could do, but thanks to Justin Clift (and some other Gluster folks) we were able to find some second-hand Infiniband hardware (10Gbps HCAs and a switch).

While Gluster has native/built-in RDMA/Infiniband capabilities (see the "transport" option of the "gluster volume create" command), in our case we had to migrate existing Gluster volumes from plain TCP/Ethernet to Infiniband, while keeping the downtime as small as possible. That was my first experience with Infiniband, but it's not as hard as it seems, especially once you discover IPoIB (IP over Infiniband). From a sysadmin POV, it's just "yet another network interface", but now a 10Gbps one :)

The Gluster volume migration then goes like this (schedule an - obvious - downtime for this):

On all gluster nodes (assuming we start from machines installed with only the @core group, so minimal ones):

yum groupinstall "Infiniband Support"

chkconfig rdma on

<stop your clients or other apps accessing the gluster volumes, as those volumes will be stopped>

service glusterd stop && chkconfig glusterd off &&  init 0

Then install the hardware in each server, connect all the Infiniband cards to the (previously configured) IB switch, and power all the servers back on. When the machines are back online, you "just" have to configure the IB interfaces. As in my case the machines were remote nodes and I couldn't see how they were cabled, I had to use some IB tools to find out which port was connected (a tool like "ibv_devinfo" showed me which port was active/connected, while "ibdiagnet" shows you the topology and the other nodes/devices). In our case it was port 2, so let's create the ifcfg-ib{0,1} config files (ib1 being the one we'll use):

DEVICE=ib1
TYPE=Infiniband
BOOTPROTO=static
BROADCAST=192.168.123.255
IPADDR=192.168.123.2
NETMASK=255.255.255.0
NETWORK=192.168.123.0
ONBOOT=yes
NM_CONTROLLED=no
CONNECTED_MODE=yes

The interesting part here is "CONNECTED_MODE=yes": if you already use iSCSI, you know that Jumbo frames are really important when you have a dedicated VLAN (and an Ethernet switch that supports Jumbo frames too). As stated in the IPoIB kernel doc, there are two operation modes: datagram (default 2044-byte MTU) or connected (up to 65520-byte MTU). It's up to you to decide which one to use, but if you understood the Jumbo frames thing for iSCSI, you already get the point.

An "ifup ib1" on all nodes will bring the interfaces up and you can verify that everything works by pinging each other node, including with larger mtu values :

ping -s 16384 <other-node-on-the-infiniband-network>

If everything's fine, you can then decide to start gluster, *but* don't forget that gluster uses FQDNs (at least I hope that's how you initially configured your gluster setup: already on a dedicated segment, and using different FQDNs for the storage VLAN). You just have to update your local resolver (internal DNS, local hosts files, whatever you want) to make sure gluster will now use the new IP subnet on the Infiniband network. (If you haven't previously defined different hostnames for your gluster setup, you can "just" update them in the various /var/lib/glusterd/peers/* and /var/lib/glusterd/vols/*/*.vol files.)
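
As a minimal sketch of the "local hosts file" option (the hostnames below are made up and the addresses follow the ifcfg example above), every node would get entries like these, pointing the existing storage FQDNs at the new IPoIB subnet:

192.168.123.1   gluster01.storage.example.org
192.168.123.2   gluster02.storage.example.org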

Restart the whole gluster stack (on all gluster nodes) and verify that it works fine:

service glusterd start

gluster peer status

gluster volume status

# and if you're happy with the results :

chkconfig glusterd on

So, in a short summary:

by fabian.arrotin at November 24, 2014 10:37 AM

Mattias Geniar

Remote Code Execution via ‘less’ on Linux Boxes

Mondays, gotta love'm. Many Linux distributions ship with the 'less' command automagically interfaced to 'lesspipe'-type scripts, usually invoked via LESSOPEN. This is certainly the case for CentOS and Ubuntu. Unfortunately, many of these scripts appear to call a rather large

The post Remote Code Execution via ‘less’ on Linux Boxes appeared first on ma.ttias.be.

by Mattias Geniar at November 24, 2014 08:36 AM

November 23, 2014

Mattias Geniar

Presentation: DNSSEC, The Good, The Bad & The Secure

Another set of slides I found that never got published, it seems. The presentation was actually never given, but was prepared for several conferences. It stops abruptly and was never completed, but still contains a lot of useful material (at

The post Presentation: DNSSEC, The Good, The Bad & The Secure appeared first on ma.ttias.be.

by Mattias Geniar at November 23, 2014 09:33 PM

Presentation: Mobile Zabbix, Why Mobile Matters (MoZBX)

Going through some old files, I found a presentation I gave in Riga at the Zabbix Conference in 2012 that I never posted online. Better late than never! The slides are about a Mobile WebUI I made for the Zabbix

The post Presentation: Mobile Zabbix, Why Mobile Matters (MoZBX) appeared first on ma.ttias.be.

by Mattias Geniar at November 23, 2014 09:23 PM

CPU Flame Graphs

I've only heard of CPU Flame Graphs since the article on NodeJS performance issues at Netflix. ... given a performance problem, observability is of the utmost importance. Flame graphs gave us tremendous insight into where our app was spending most

The post CPU Flame Graphs appeared first on ma.ttias.be.

by Mattias Geniar at November 23, 2014 09:03 PM

Enable MySQL’s slow query log without a restart

You're debugging a MySQL server and want to enable the Slow Query Log? You can do so via the MySQL CLI. There's no need to make changes to the my.cnf file and restart your MySQL service -- even though that would
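
A minimal sketch of the CLI approach (these are the standard MySQL server variables; the log file path is only an example):

SET GLOBAL slow_query_log = 'ON';
SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';
SET GLOBAL long_query_time = 1;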

The post Enable MySQL’s slow query log without a restart appeared first on ma.ttias.be.

by Mattias Geniar at November 23, 2014 06:00 PM

November 21, 2014

Xavier Mertens

NoSuchCon Wrap-Up Day #3

NoSuchCon Venue

Here we go with a review of the last day. As usual, the social event had a big impact on some attendees, but after coffee everything was almost back to normal. The day started with Braden Thomas, who presented “Reverse engineering MSP430 devices”, or how to reverse engineer a real-estate lock box.

In the US/Canada, such devices are used by real-estate agencies to store the keys of homes for sale. They allow access to the key when the owner is not present. Why focus on such devices? First, because they are used by many people and, usually, they tend to store crypto secrets in flash. It's cheap and easy, but not necessarily nice. There is a legacy key using cell radio, but more and more users use the eKey (an iOS/Android app). Braden explained in great detail all the steps he performed to access the firmware and then extract the crypto key. Guess what? The presentation ended with a live demo: Braden successfully unlocked a lock. During the presentation, he explained the different attacks that are available, including a special one (that was successful) called the “Paparazzi” attack: the goal is to use a camera flash against a decapped chip to make it behave differently.

The Paparazzi Attack

 

Then, Peter Hlavaty talked about “Attack on the core”. This talk went in the same direction as the one presented yesterday about bypassing security controls in Windows 8.1. On most operating systems, the kernel is a nice place to put malicious code. Why? Because modern operating systems are more and more protected by implementing multiple controls. The talk focused on going from CPL3 to CPL0 (“Current Privilege Level”), level 3 being user mode and level 0 kernel mode. Peter focused not only on Windows but also on Linux and Android. It's clear: the kernel is the new target!

After a welcome coffee break, Jean-Philippe Aumasson, renowned cryptographer, talked about… cryptography, with a talk called “Cryptographic backdooring”. Usually cryptography means a lot of formulas, but Jean-Philippe’s talk was very didactic! Why speak about backdoors? Because they are present in many crypto implementations and there is no official research paper on this topic. A backdoor can be used for surveillance, deception, … and also by terrorists! There are also more and more backdoors in products and applications today.

Jean-Philippe on Stage

Jean-Philippe explained what a backdoor is. His definition is:

A feature or defect that allows surreptitious access to data

It is based on weakened algorithms or covert channels. But what makes a good backdoor? It must be:

  • Undetectable
  • Principle of “NOBUS” (No One But Us, NSA term)
  • Reusable and unmodifiable
  • Simple

Then he reviewed examples of backdoors and how they have been implemented. A very nice talk!

There was no lunch break for me because I attended a workshop about RF hardware: “Fun with RF remotes”, prepared by Damien Cauquil. The goal of the workshop was to build a … RF doorbell brute forcer. After an introduction to RF technology and some demos showing how to capture and analyse signals, it became a hands-on session. All participants received a doorbell pack (a remote controller + doorbell). The challenge was to hack the remote and make Damien’s doorbell ring. It was a first for me. After soldering some components and some stress, it worked! Very nice workshop!

Fun with RF Remote

And the last half-day started with Guillaume Valadon and Nicolas Vivet, who presented “Detecting BGP hijacks in 2014”. I arrived a bit late due to the hardware workshop. The first part was a recap of BGP: how it works, what its features are, etc. BGP hijacks are not new, but they can have a dramatic effect! A hijack is a conflicting BGP announcement: it means that your packets are sent across unauthorised networks (from a BGP point of view). The next part of the talk focused on detecting hijacks. This is a critical step for ISPs. Guillaume and Nicolas explained in detail the platform deployed worldwide to collect BGP messages and store them, which are then processed with OCaml. They can emulate a BGP router via some Python code. By putting all the components together, they are able to analyse BGP announcements and detect issues. But this is offline and consumes a lot of data, so they also presented a real-time detection mechanism. A nice presentation with many details; I recommend reading the slides if you're working with BGP. Their conclusion is that such attacks are a real risk and that traffic must be encrypted and authenticated to prevent it from being read by third parties.

Alex Ionescu came with a “surprise talk”, titled “Unreal mode: Breaking the protected process”. It was a surprise talk because he received a last-minute green light from Microsoft. Windows Vista introduced new protections at the kernel level. In Windows 8.1, that model was extended to protect key processes even from admins and to mitigate attacks like pass-the-hash. Alex explained how digital signatures work in the new versions of the OS. He also explained how process protection works (even with admin rights, some processes can't be killed or accessed by debuggers). A mass of interesting information if you're working with Windows security models.

Alex on Stage

And to close the conference, a keynote was presented by Anthony Zboralski: “No Such Security”. Anthony defines himself as “a bank robber”. When he was young he played with many computers and quickly started to break stuff. After some issues with the justice system, he switched to security consultancy. His keynote was a series of reflections on the security that is implemented by companies today, but also on what consultancy firms recommend.

Anthony on Stage

The second edition of NoSuchCon is over! It is a great event with highly technical and enjoyable presentations. I also met lots of new and old friends. The talks have already been published here: http://www.nosuchcon.org/talks/2014/.

by Xavier at November 21, 2014 09:09 PM

Frank Goossens

I am not religious, but …

… for this statement I gladly give Pope Francis the floor here;

Money is always found to wage war, to buy weapons and to run financial operations without scruples, but money is always lacking when it comes to creating jobs, investing in knowledge and protecting the environment.
(Pope Francis in a video message to a congress on the social teaching of the Church, 21/11/2014 at 19:47:00.)

Source: deredactie.be (albeit without the contextual links)

by frank at November 21, 2014 08:14 PM

Fabian Arrotin

Updating to Gluster 3.6 packages on CentOS 6

Yesterday I had to do some maintenance on our Gluster nodes used within the CentOS.org infra. Basically I had to reconfigure some gluster volumes to use Infiniband instead of Ethernet. (I'll write a dedicated blog post about that migration later.)

While a lot of people consume packages directly from Gluster.org (for example http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/CentOS/epel-6/x86_64/), you'll (soon) also be able to install those packages directly on CentOS, through packages built by the Storage SIG. At the moment I'm writing this blog post, gluster 3.6.1 packages are built and available on our Community Build Server (Koji) setup, but still in testing (and unsigned).

"But wait, there are already glusterfs packages tagged 3.6 in CentOS 6.6, right ? " will you say. Well, yes, but not the full stack. What you see in the [base] (or [updates]) repository are the client packages, as for example a base CentOS 6.x can be a gluster client (through fuse, or libgfapi - really interesting to speed up qemu-kvm instead of using the default fuse mount point ..) , but the -server package isn't there. So the reason why you can either use the upstream gluster.org yum repositories or the Storage SIG one to have access to the full stack, and so run glusterd on CentOS.

Interested in testing those packages? Want to test the update before those packages are released by the Storage SIG? Here we go: http://cbs.centos.org/repos/storage6-testing/x86_64/os/Packages/ (packages are available for CentOS 7 too).
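
As a rough sketch (the repo id and baseurl are my assumptions, derived from the Packages URL above, and gpgcheck is disabled because the packages are still unsigned), a temporary repo definition for testing could look like this:

# /etc/yum.repos.d/storage6-testing.repo
[storage6-testing]
name=CentOS Storage SIG - Gluster 3.6 testing builds
baseurl=http://cbs.centos.org/repos/storage6-testing/x86_64/os/
enabled=0
gpgcheck=0

After that, something like "yum --enablerepo=storage6-testing install glusterfs-server" should pull in the server bits.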

By the way, if you have never tested Gluster, it's really easy to set up and play with, even within Virtual Machines. Interesting reading (quick start): http://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart

by fabian.arrotin at November 21, 2014 03:08 PM

Frank Goossens

I see you baby, purging that spam!

all we are saying, is give ham a chance!

While Akismet does a good job at flagging comments as spam, by default it only purges spam (from the comments and comments_meta tables) after 15 days.

So it’s a good thing Akismet now has a filter to change the number of days after which spam is removed. The code below (in a small plugin or in a child theme’s functions.php) should do the trick.

/** tell akismet to purge spam sooner */
add_filter('akismet_delete_commentmeta_interval','change_akismet_interval');
add_filter('akismet_delete_comment_interval','change_akismet_interval');

function change_akismet_interval($in) {
     // purge spam after 5 days instead of the default 15
     return 5;
}

Happy purging!

by frank at November 21, 2014 06:19 AM

November 20, 2014

Xavier Mertens

NoSuchCon Wrap-Up Day #2

NoSuchCon2014

Here is my wrap-up for the second day of the NoSuchCon conference organised in Paris. Where is the first wrap-up, you may ask? Due to an important last-minute change in my planning, I only drove to Paris yesterday evening and missed the first day! This is the second edition of this French conference, organised in Paris at the same venue. A very nice location, even if the audio/video equipment is not top quality. The format also remains the same: a single track with international speakers and talks oriented towards “offensive” security. This year, I was invited to take part in the selection committee.

The very first talk was presented by Andrea Allievi, who did a wonderful job on the latest Windows operating system kernel patch protection. The full title was “Understanding and defeating Windows 8.1 Kernel Patch Protection”. Andrea designed the first UEFI bootkit in 2012. The first part of the presentation was a review of the most important terms around Windows kernel protection:

The research started with the Snake campaign and the Uroburos bootkit. That bootkit can't affect Windows 8.1, so Andrea reversed it and adapted it to defeat the patch protection. Windows 8.1 has a code integrity feature implemented completely differently than in Windows 7; Andrea's approach was to use a kernel driver. The next step was to explain how kernel patch protection works and how to attack PatchGuard. Finally, Andrea presented new attack types and finished with a demo. An interesting presentation, but the most important info is: how do you protect against this? The exploit is very difficult to implement (it took Andrea three months to achieve it), but use SecureBoot and don't trust any code downloaded from the Internet! (Sounds logical.)

The second talk was presented by Benjamin Delpy, who is well known for his tool Mimikatz, one of the pentester’s best friends in Windows environments! After a brief reminder/introduction to the Windows authentication methods (NTLM, Kerberos) and the associated attacks, Pass-the-Hash (NTLM) and Pass-the-Ticket (Kerberos), Benjamin explained what “Golden Tickets” are: a golden ticket is a homemade ticket, not generated by the KDC.

Benjamin on stage

This means they aren’t limited by GPOs and any data can be put into them. But the nicest feature is their expiration! Once generated, the key does not change for … years! Then Benjamin explained what a “Silver Ticket” is (exactly like a Golden Ticket, except for the krbtgt key). Benjamin gave multiple demos. I especially liked the fact that Kerberos tickets dumped on OS X or Ubuntu (which can be part of a Windows domain) can be reused on Windows with Mimikatz! Who said that Mac computers are safe on a network? Benjamin continues to develop his tool, which is more and more of a must-have!

After a coffee break, the next target was the Google App Engine. Nicolas Collignon investigated the security around this PaaS (“Platform as a Service“) and devkit used by many developers. The supported programming languages are Python, Java, PHP and Go. The architecture is quite common and based on a load balancer, a reverse proxy, an application server and backend services (DB). The first part focused on the application. Nothing really changes and the classic attacks remain valid. Example: developers still manipulate raw SQL queries, control raw HTTP responses and need to implement security features. When security controls are present, they are not always enabled by default. Example: the urlfetch API does not verify SSL certificates by default. Nicolas also explained how to obtain Python RCE (“Remote Code Execution“) via an XMPP service. The next part was about attacking the GAE infrastructure. This is difficult because it can't be reproduced in a lab. The provisioning API is a nice target because developers use weak credentials (hey, what's new here?) and also share credentials between production and development environments. A classic failure is to store the production domain key in an unsafe place! (Accessing this key is as dangerous as compromising a Windows domain admin account.) The next part focused on the sandbox mechanism proposed by Google, with many examples. Nicolas's conclusions are:

After the lunch break, Ezequiel Gutesman presented “Blended web and database attacks on real-time, in-memory platform”. What is an “in-memory” platform? Usually a DBMS relies on disk to store its data, but today there are solutions that store data in memory. Why? Memory is cheap today, there is an increasing amount of data to process and performance is key. Well-known solutions are Oracle, SQL Server and SAP HANA.

Ezequiel on stage

Ezequiel’s research focused on SAP HANA. The solution is based on many components (DB, HTTP server) and provides a nice attack surface. This is a blended architecture: instead of an application using a DB connection with limited (or unrestricted) access, the application user is the same as the database user, so user privileges should be restricted at the DB level. This changes the impact of classic attacks:

After the introduction, some attack vectors against HANA were reviewed. Regarding SQL injection, HANA has a nice feature: history tables. If the user does not delete them, the information remains available! XSS attacks were reviewed, as well as the integration with the R server.

The next talk was the presentation of an awesome hardware project: the USBarmory. Andrea Barisani explained in detail how the project started, how they developed the hardware and what issues they faced. The idea of this device was to provide open hardware running open-source software with:

  • Mass storage device with automatic encryption, virus scanning, host authentication and data wiping
  • OpenSSH client and agent for untrusted hosts (kiosks)
  • Router for end-to-end VPN tunneling, Tor
  • A password manager with integrated web server
  • Electronic wallet
  • A portable penetration testing platform
  • Low-level USB security testing

USBarmory

The development started in January 2014 and the product should be available for sale in December.

And the last talk came from Richard Johnson, who spoke about fuzzing applications with “Fuzzing and patch analysis: SAGEly Advice”. It started with an introduction to automated test generation. The goal is to exercise a program with full coverage of all possible states influenced by external input. This can be done via two approaches:

While fuzzing is very interesting, it has limitations because it cannot cover all possible states (a fuzzing tool is unaware of data constraints). That's where concolic testing can help. Richard explained the concept in detail with many examples. Finally, a tool was presented, Moflow::Fuzzflow, along with some real-life examples where the tool was used to find vulnerabilities in software.

The day ended with a nice social event. Stay tuned for the last set of talks tomorrow!

by Xavier at November 20, 2014 09:03 PM

Dries Buytaert

Weather.com using Drupal

One of the world's most trafficked websites, with more than 100 million unique visitors every month and more than 20 million different pages of content, is now using Drupal. Weather.com is a top 20 U.S. site according to comScore. As far as I know, this is currently the biggest Drupal site in the world.

Weather.com has been an active Drupal user for the past 18 months; it started with a content creation workflow on Drupal to help its editorial team publish content to its existing website faster. With Drupal, Weather.com was able to dramatically reduce the number of steps required to publish content from 14 to just a few. Speed is essential in reporting the weather, and Drupal's content workflow provided much-needed velocity. The success of that initial project is what led to this week's migration of Weather.com from Percussion to Drupal.

The company has moved the entire website to Acquia Cloud, giving the site a resilient platform that can withstand sudden onslaughts of demand as unpredictable as the weather itself. As we learned from our work with New York City's MTA during Superstorm Sandy in 2012, “weather-proofing” the delivery of critical information to ensure the public stays informed during catastrophic events is really important and can help save lives.

The team at Weather.com worked with Acquia and Mediacurrent for its site development and migration.

Weather channel

by Dries at November 20, 2014 04:06 PM

November 19, 2014

Mattias Geniar

The PHP circle: from Apache to Nginx and back

As with many technologies, the PHP community too evolves. And over the last 6 or 7 years, a rather remarkable circle has been made by a lot of systems administrators and PHP developers in that regard. The A in LAMP

The post The PHP circle: from Apache to Nginx and back appeared first on ma.ttias.be.

by Mattias Geniar at November 19, 2014 11:16 PM

Yet Another Microsoft Windows CVE: Local Privilege Escalation MS14-068

As if the SSL/TLS vulnerability dubbed MS14-066 last week wasn't enough, today Microsoft announced an out-of-band patch for a critical Privilege Escalation bug in all Windows Server systems. This time, Kerberos gets patched. A remote elevation of privilege vulnerability exists

The post Yet Another Microsoft Windows CVE: Local Privilege Escalation MS14-068 appeared first on ma.ttias.be.

by Mattias Geniar at November 19, 2014 07:35 AM

Frank Goossens

Music from Our Tube; Hazey by Glass Animals

Heard this in a TV show a couple of days ago, the percussion made me Shazam it;

YouTube Video
Watch this video on YouTube or on Easy Youtube.

Glass Animals with “Hazey” from their debut album “Zaba”.

by frank at November 19, 2014 05:15 AM

November 18, 2014

Mattias Geniar

Make HTTPerf use a proxy for connections

I like HTTPerf. It's a simple tool for a simple job: start HTTP calls and benchmark a remote system. But the CLI syntax for making it work with proxies is ... cumbersome. So, here's how to get it to work.

The post Make HTTPerf use a proxy for connections appeared first on ma.ttias.be.

by Mattias Geniar at November 18, 2014 06:00 PM

A Certificate Authority to Encrypt the Entire Web

Eff.org today announced A Certificate Authority to Encrypt the Entire Web. "The biggest obstacle to HTTPS deployment has been the complexity, bureaucracy, and cost of the certificates that HTTPS requires." (eff.org) Completely agree. Especially the cost, since most certificates are automated

The post A Certificate Authority to Encrypt the Entire Web appeared first on ma.ttias.be.

by Mattias Geniar at November 18, 2014 04:00 PM

Follow-up: 3 years of automation with Puppet

Yesterday I blogged about my lessons learned after 3 years of using Puppet. In reply, @roidelapluie also posted his list of lessons learned. Accidentally, also after 3 years. Go figure. And he touches on topics I didn't think of, but

The post Follow-up: 3 years of automation with Puppet appeared first on ma.ttias.be.

by Mattias Geniar at November 18, 2014 07:27 AM

November 17, 2014

Mattias Geniar

REST API best practices and versioning

This is a short and nice read: Some REST best practices. I especially like the versioning part, which I've been (trying to) tell for years. API versions should be mandatory. This way, you will be futureproof as the API changes

The post REST API best practices and versioning appeared first on ma.ttias.be.

by Mattias Geniar at November 17, 2014 08:11 PM

The Chocolatey Kickstarter: Making Windows More Like Linux

Remember when I said Microsoft has an Open Source strategy? Well, this could fit right in. Except it isn't from Microsoft. The Chocolatey Project is an independent effort of porting the package managers we all love and use in Linux

The post The Chocolatey Kickstarter: Making Windows More Like Linux appeared first on ma.ttias.be.

by Mattias Geniar at November 17, 2014 05:07 PM

Remove a single iptables rule

How do you remove a single iptables rule from a large ruleset? The easiest way is to delete the rule by its chain name and line number. Here's an example. ~# iptables -n -L --line-numbers Chain INPUT (policy ACCEPT) num target
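
A minimal sketch of that approach (the chain name and rule number are only an example): first list the rules with their line numbers, then delete by number:

iptables -L INPUT -n --line-numbers
iptables -D INPUT 3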

The post Remove a single iptables rule appeared first on ma.ttias.be.

by Mattias Geniar at November 17, 2014 04:04 PM

3 Years of Puppet Config Management: lessons learned

A little over 3 years ago, I started using Puppet as a config management system on my own servers. I should've started much sooner, but back then I didn't see the "value" in it. How foolish ... Around 2011, when

The post 3 Years of Puppet Config Management: lessons learned appeared first on ma.ttias.be.

by Mattias Geniar at November 17, 2014 01:31 PM

November 16, 2014

Mattias Geniar

Clear the APC cache in PHP

How do you clear the APC cache? There are basically two methods: as a PHP developer, you can use the built-in PHP functions -- or as a SysAdmin, you can restart the necessary services to flush the APC cache. As

The post Clear the APC cache in PHP appeared first on ma.ttias.be.

by Mattias Geniar at November 16, 2014 08:52 PM

Running Kali Linux as a Vagrant Box (virtual machine)

Here's the simplest way to start a Kali Linux virtual machine on your desktop or laptop: run it as a Vagrant box! First, download the latest version of Vagrant, you need at least version 1.6 or newer. You'll also need

The post Running Kali Linux as a Vagrant Box (virtual machine) appeared first on ma.ttias.be.

by Mattias Geniar at November 16, 2014 05:00 PM

November 15, 2014

Dag Wieers

Couldn't capture screenshot on Cyanogenmod/Android

Today I wanted to create a screenshot on my OnePlus One device with a stock Cyanogenmod 11 M11. Screenshots in recent Android can be made using the Volume-Down + Power buttons simultaneously. However this time it failed with an obscure message "Couldn't capture screenshot". Google was of no direct help either, so this required diving in at the deep end...

I quickly checked the File Manager (enabled Root Access Mode in the Settings in the process) to see if something was up with my /sdcard/Pictures/Screenshots directory and indeed, this directory did not have the "execute" bit set. Could this be the cause ? However the File Manager did not allow me to modify the directory's permissions. Bummer.

I started SSHDroid and logged on to my device using SSH (much more convenient than trying to type directories in the Shell). To my surprise I noticed this:

root@A0001:/ # ls -la /sdcard/Pictures/
drwxrwx--- root sdcard_r 2014-11-11 20:37 Instagram
drwxrwx--- root sdcard_r 2014-10-08 21:40 Paper Pictures
drw-rw---- root sdcard_r 2014-10-08 19:21 Screenshots
root@A0001:/ # chmod 0770 /sdcard/Pictures/Screenshots
root@A0001:/ # ls -ld /sdcard/Pictures/Screenshots
drw-rw---- root sdcard_r 2014-10-08 19:21 Screenshots

Changing the directory's permissions does not work? Hmm, let's remove this empty directory:

root@A0001:/ # rm -rf /sdcard/Pictures/Screenshots/
rm failed for /sdcard/Pictures/Screenshots/, Directory not empty
1|root@A0001:/ # ls -la /sdcard/Pictures/Screenshots/
root@A0001:/ # rm -rf /sdcard/Pictures/Screenshots/*
rm failed for /sdcard/Pictures/Screenshots/Screenshot_2014-10-08-12-21-55.png, Permission denied
rm failed for /sdcard/Pictures/Screenshots/Screenshot_2014-10-08-12-21-57.png, Permission denied
1|root@A0001:/ # echo /sdcard/Pictures/Screenshots/*
/sdcard/Pictures/Screenshots/Screenshot_2014-10-08-12-21-55.png /sdcard/Pictures/Screenshots/Screenshot_2014-10-08-12-21-57.png

So the directory is not empty after all: ls does not list the files, but echo does show them. And I cannot remove the individual files either?

The only solution I could think of to restore capturing screenshots, was to rename this entry and recreate the Screenshots directory:

root@A0001:/ # mv /sdcard/Pictures/Screenshots /sdcard/Pictures/Screenshots2
root@A0001:/ # mkdir -p /sdcard/Pictures/Screenshots

Crisis averted! Screenshot capturing restored.

But the bogus directory and files still remain. Does anyone with a clue know what is going on? Is my root access somehow limited through the SSHDroid and Shell apps? Does the system have some bolted-on capabilities preventing me from doing certain things? Do I need ADB for full capabilities?

It is also unclear to me what has caused this situation in the first place. My OnePlus One originally shipped with Cyanogenmod 11S, but I quickly rooted it and installed Cyanogenmod 11 M10 (which was then updated to M11).

Insights from Android experts are welcome!

PS You can perform the same rename/move operation using the File Manager in Root Access Mode. Then you can recreate the Screenshots directory. However you cannot remove the original Screenshots directory using File Manager either. You'll be stuck with that one :-(

by dag at November 15, 2014 11:50 AM

November 14, 2014

Xavier Mertens

Repression VS. Prevention

Speed Ticket

This morning, I retweeted a link to an article (in Dutch) published by a Belgian newspaper. It looks like Belgian municipalities (small as well as large) which do not properly secure their data could be fined in the near future! Public services manage a huge amount of private data about us. They know almost everything about our lives! Increasing the security around this data looks like a very good idea, but… are fines a good idea? Fines are very repressive.

I’ll make a rough comparison with speeding tickets. I drive a lot, always on the road between two customers. The more kilometers you spend on the road, the more chances you have of being caught by speed cameras. Sometimes I receive a nice gift… a speeding ticket! OK, I admit: it’s frustrating. I always have the feeling of being 0wn3d, but guess what? I just pay the bill and continue to use the roads as before. It does not affect my way of driving, it is “part of the game”. I even know people who set aside a budget to pay their speeding tickets! Just like any other risk, it can be quantified and we are free to take it into account … or not! Where is the breaking point between paying fines and driving more slowly?

Let’s go back to the Belgian municipalities. They could be facing the same issue: invest in information security (tools, services, audits), or cross their fingers and hope not to be caught? They could also reserve some budget to pay the fines. Fortunately, before punishing the bad players, the authorities will perform some checks, as stated in the article:

By the beginning of next year, all municipalities must have a safety plan in place.

IMHO, fines won’t be the solution! Instead of paying fines, why not invest this money in security projects? When a company (read: one “with commercial activities“) has to pay a fine after a security incident, only its customers are affected. In the case of a municipality, all citizens are involved, as the budget is based on taxes! Instead of repression, the authorities must implement more prevention, like forcing municipalities to have a safety plan. Repression always gives a feeling of acting badly, while prevention helps stop something from happening in the first place!

by Xavier at November 14, 2014 03:57 PM

November 13, 2014

Frank Goossens

Almost with a flashing light on my head

After 5 years and 1 serious accident, I have retired my bike helmet. The old one was worn out, and my wife put the money down on the online counter;

abus scraper, the orange version

The Abus Scraper, as seen on a colleague, in bright orange. You really can't stand out enough on a bike, after all. The fit is much better than that of my old Giro Flak, by the way.

by frank at November 13, 2014 11:04 PM

Mattias Geniar

Microsofts Open Source Strategy

If Microsoft ever does applications for Linux it means I've won. (Linus Torvalds) Microsoft appears to be rebooting internally. After years of fighting the "Linux Cancer", as Steve Ballmer called it, they seem to be embracing it now. Satya Nadella

The post Microsofts Open Source Strategy appeared first on ma.ttias.be.

by Mattias Geniar at November 13, 2014 01:33 PM

Frank Goossens

Music from Our Tube; Dorian Concept fooling around on a microKORG

In 2006 Oliver Thomas Johnson uploaded a video to YouTube of himself “Fooling around on a Micro Korg”;

YouTube Video
Watch this video on YouTube or on Easy Youtube.

Two years later he got discovered as Dorian Concept. He released his second album, “Joined Hands” last month on Ninja Tunes. If you’re interested in his technique on the microKORG, be sure to watch this “Studio Science” video by the Red Bull Music Academy.

by frank at November 13, 2014 05:49 AM

November 12, 2014

Mattias Geniar

Benchmarking the performance of ‘Wordfence’, a WordPress plugin

I decided to give Wordfence a try, a plugin for WordPress. It advertises itself as a Security plugin with an additional benefit: performance improvements. Its claim is "to better survive a DDoS attack on your site, your site needs to

The post Benchmarking the performance of ‘Wordfence’, a WordPress plugin appeared first on ma.ttias.be.

by Mattias Geniar at November 12, 2014 04:09 PM

Slow login/DNS via Active Directory on OSX Mavericks and Yosemite (.local domains)

On a Mac OSX (Mavericks or Yosemite) you can experience very slow logins when the Mac is joined to a Windows Domain and you're not on-site. The Offline Accounts will eventually log in, after a 2-3 minute delay on the login

The post Slow login/DNS via Active Directory on OSX Mavericks and Yosemite (.local domains) appeared first on ma.ttias.be.

by Mattias Geniar at November 12, 2014 01:25 PM

Frederic Hornain

[Red Hat Enterprise Linux 7 Atomic Host] Beta Now Available

Red Hat Enterprise Linux 7 Atomic Host Beta

 





What can you expect from the Red Hat Enterprise Linux 7 Atomic Host Beta?[1]

Specifically Designed to Run Containers
Red Hat Enterprise Linux 7 Atomic Host Beta provides a streamlined host platform that is optimized to run application containers. The software components included in Red Hat Enterprise Linux 7 Atomic Host Beta, as well as the default system tunings, have been designed to enhance the performance, scalability and security of containers, giving you the optimal platform on which to deploy and run application containers.

The Confidence of Red Hat Enterprise Linux
Red Hat Enterprise Linux 7 Atomic Host Beta is built from Red Hat Enterprise Linux 7, enabling Red Hat Enterprise Linux 7 Atomic Host Beta to deliver open source innovation built on the stability and maturity of Red Hat Enterprise Linux. It also means that Red Hat Enterprise Linux 7 Atomic Host Beta inherits the hardware certifications of Red Hat Enterprise Linux 7, giving you a vast choice of certified hardware partners.

Atomic Updating and Rollback
Red Hat Enterprise Linux 7 Atomic Host Beta features a new update mechanism that operates in an image-like fashion. Based on rpm-ostree, updates are composed into “atomic” trees, which can be downloaded and deployed in a single step. The previous version of the operating system is retained, enabling you to easily roll back to an earlier state. This simplified upgrade and rollback capability reduces the time you spend “keeping the lights on.”
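
A rough sketch of what that cycle looks like from the command line (using the rpm-ostree client mentioned above; exact invocations may differ in the beta):

rpm-ostree upgrade    # download and deploy the new tree alongside the current one
systemctl reboot      # boot into the new deployment
rpm-ostree rollback   # point the bootloader back at the previous deployment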

Container Orchestration
Through our collaboration with Google, Red Hat Enterprise Linux 7 Atomic Host Beta includes Kubernetes, a framework for managing clusters of containers. Kubernetes helps with horizontal scaling of multi-container deployments across a container host, and interconnecting multiple layers of the application stacks. This enables you to orchestrate services running in multiple containers into unified, large-scale business applications.

Secure Host by Default
Security is paramount when it comes to running applications. Containers alone do not contain, but you can more effectively isolate vulnerable containers with a secure host like Red Hat Enterprise Linux 7 Atomic Host Beta that implements a secure environment by default. First, applications are only run within containers, not directly on the host, creating a clear security boundary. Each container is then confined using a combination of SELinux in enforcing mode, control groups, and kernel namespaces. These technologies prevent a compromised container from affecting other containers or the host and are the same proven technologies that have been delivering military-grade security to Red Hat customers for more than 10 years.

Red Hat Enterprise Linux Container Images and Building Containers
Red Hat Enterprise Linux 7 Atomic Host Beta provides all of the required tools to build and run container images based on Red Hat Enterprise Linux, including Red Hat Enterprise Linux 6 and 7 container images as well as the docker services. This means that applications that run on Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 can be deployed in a container on Red Hat Enterprise Linux 7 Atomic Host Beta, opening access to a vast ecosystem of certified applications. Additionally, Red Hat Enterprise Linux 7 Atomic Host Beta users will have access to the full breadth of their Red Hat subscriptions inside these containers, including the popular programming language stacks and development tools delivered through Red Hat Software Collections.
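
As a rough sketch (the image name is an assumption; on Red Hat systems of that era the base images are pulled from Red Hat's registry), running a RHEL 7 container comes down to:

docker pull rhel7
docker run -it --rm rhel7 /bin/bash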

Deploy Across the Open Hybrid Cloud
Red Hat Enterprise Linux 7 Atomic Host Beta extends container portability across the open hybrid cloud by enabling deployment on physical hardware; certified hypervisors, including Red Hat Enterprise Virtualization and VMware vSphere; private clouds such as Red Hat Enterprise Linux OpenStack Platform; and Amazon Web Services and Google Compute Platform public clouds. This ability to “deploy anywhere, deploy everywhere” enables you to choose the best platform for your container infrastructure.

RHEL 7 AH Beta is available here [2]

[1] http://www.redhat.com/en/about/blog/small-footprint-big-impact-red-hat-enterprise-linux-7-atomic-host-beta-now-available

[2] https://access.redhat.com/products/red-hat-enterprise-linux



Kind Regards
Frederic


by Frederic Hornain at November 12, 2014 10:43 AM

Mattias Geniar

Microsoft SSL/TLS vulnerability MS14-066

It's peanut butter patching time. And it's urgent: MS14-066 Vulnerability in Schannel Could Allow Remote Code Execution (2992611). This security update resolves a privately reported vulnerability in the Microsoft Secure Channel (Schannel) security package in Windows. The vulnerability could allow

The post Microsoft SSL/TLS vulnerability MS14-066 appeared first on ma.ttias.be.

by Mattias Geniar at November 12, 2014 08:56 AM

November 11, 2014

Frank Goossens

On the unknown soldier

Fine article on deredactie: “With great reverence, tribute will once again be paid on Tuesday 11 November, Armistice Day, at the grave of the Unknown Soldier in Brussels. Bruges played a leading role in designating that mysterious war hero. Not only did the designation take place in the Bruges railway station, it was moreover a war-blinded veteran from Assebroek who was allowed to point out the eventual unknown soldier from among five coffins. But what happened to the four others? In search of the four fighters who are even more unknown than the unknown soldier.”

by frank at November 11, 2014 12:48 PM

November 10, 2014

Mattias Geniar

SQL-Injection Test Targets / Websites

Say you wanted to test a bit of SQL-injection. You know the theory, how would you put it in practice? There are numerous Virtual Machine images you can download to host a webapp yourself and attack the VM, but that

The post SQL-Injection Test Targets / Websites appeared first on ma.ttias.be.

by Mattias Geniar at November 10, 2014 09:25 PM

Xavier Mertens

Ninja’s OpenVAS Reporting

OpenVAS Logo

Here is a quick blog post which might be helpful to OpenVAS users. OpenVAS is a free vulnerability scanner maintained by a German company. Initially, it was a fork of Nessus, but today it has nothing in common with the commercial vulnerability scanners. OpenVAS is a good alternative to commercial solutions when you need to deploy a vulnerability management process and you lack a decent budget. But, like many “free” solutions, that does not mean there is no cost associated with it. In particular, OpenVAS lacks good documentation, even if the users mailing list is quite active.

I used OpenVAS for a project which involved scanning multiple IP subnets (multiple /16’s and /24’s – for those who are not familiar with the CIDR notation, a /16 is 65536 IP addresses). For performance reasons, you have to install OpenVAS on a very powerful server. The scan took several hours but completed successfully. The problems came with the generation of the reports. OpenVAS offers multiple report formats:

OpenVAS Reports

When you select “XML” as the output format, the data is simply extracted from the internal database by the “openvasmd” process, which is the manager daemon. The default format being XML, when you select another type of report the XML data must also be processed by external tools to generate the final file. In my case, besides XML, I also needed the TXT and PDF reports. And the problems started… My sessions always reached a timeout and the browser returned the same error message every time. After some investigation, here is how to generate your reports manually. [Requirement: you need root access to the OpenVAS server.]

When you select an alternative file format, OpenVAS creates a temporary directory containing interesting files and scripts. On my standard Debian server running OpenVAS 6 (installation performed using the packages), the temporary directory was:

/usr/share/openvas/openvasmd/global_report_formats/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

If you have multiple directories, check the time stamps and use the most recent one. Depending on the selected file format, the directory will contain:

Once openvasmd has successfully generated the XML output file, it uses another tool called “xsltproc”, via a shell script created by the built-in task manager. This tool applies XSL stylesheets (found in the directory above) to the generated XML file. The process tree looks like:

init-+-...
     +-openvasad
     +-2*[openvasmd───openvasmd───openvasmd───sh───sh───xsltproc]
     +-...

In the case of PDF, a LaTeX file is produced. The next step is to convert that LaTeX file to PDF using “pdflatex“. The problem is the following: when you have a very big XML file to process, the xsltproc tool takes a very long time (in my case anyway) and the PDF file is never returned to the browser! The XML output generated by openvasmd is stored in another temporary directory:

/tmp/openvasmd_xxxxxx/report.xml

The problem is that this directory is deleted when the report generation task is stopped. You need to make a copy of it beforehand, or first generate an XML report via the web console. While xsltproc is running:

# cd /tmp/openvasmd_*
# cp report.xml ..

Once it is killed or has terminated, re-execute the generate script, which is not deleted by the task scheduler (replace the directory name with the one corresponding to your environment):

# cd /usr/share/openvas/openvasmd/global_report_formats/c402cc3e-b531-11e1-9163-406186ea4fc5
# ./generate /tmp/report.xml >/tmp/report.pdf

I did not investigate further what caused the timeout (I did not have the time), but at least I was able to extract useful reports from my OpenVAS server!

by Xavier at November 10, 2014 05:28 PM

Mattias Geniar

Firefox Developer Edition released

Perhaps this can get Firefox back into the picture? They just released their Firefox Developer Edition browser, a semi-fork/clone of the Firefox browser but for developers specifically. It's meant to run next to your current Firefox browser, as a separate

The post Firefox Developer Edition released appeared first on ma.ttias.be.

by Mattias Geniar at November 10, 2014 04:01 PM

A collection of PHP exploit scripts

If you're in the hosting business for a while, you start to see your fair share of PHP exploit code. Code that's been uploaded through a CMS exploit, and then used to further exploit others: attack other servers, send spammails,

The post A collection of PHP exploit scripts appeared first on ma.ttias.be.

by Mattias Geniar at November 10, 2014 10:16 AM

November 09, 2014

Bert de Bruijn

GEM WS2 MIDI System Exclusive structure and checksums

MIDI is the standard for communication between electronic music instruments like keyboards and synthesizers. And computers! While tinkering with an old floppy-less GEM WS2 keyboard, I wanted to figure out the structure of their System Exclusive memory dumps. SysEx is the vendor-specific (and non-standard) part of MIDI. Vendors can use it for real-time instructions (changing a sound parameter in real-time) and for non-real-time instructions (sending or loading a configuration, sample set, etc.).

In the GEM WS2, there are two ways of saving the memory (voices, globals, styles and songs): in .ALL files on floppy, and via MIDI SysEx.

The .ALL files are binary files, 60415 bytes long. The only recognizable parts are the ASCII encoded voice and global names. The SysEx dumps are 73691 bytes long. As always in MIDI, only command start (and end) bytes have MSB 1, and all data bytes have MSB 0. The data is spread out over 576 SysEx packets, preceded by one SysEx packet with header information.

Each SysEx data packet starts with these bytes (decimal representation):



Because the original data (the WS2 memory and the .ALL file) has 8 bits per byte, and MIDI SysEx bytes can only have 7 bits (MSB 0), GEM uses an encoding to go from one to the other:
Seven 8-bit bytes have their LSB stripped, and those seven LSBs form byte number 8: the LSB of the first of the seven bytes goes into the LSB of byte number 8, and the LSB of the last of the seven bytes goes into the bit with decimal value 64.
Using this encoding, a group of 7 bytes from the .ALL format is transformed into a group of 8 SysEx bytes.

The length byte in each data packet indicates how many of those byte groups there are in the current data packet. Data is sent in runs of 15 byte groups, resulting in a 127-byte SysEx packet, with the last data packet containing the remaining 6 byte groups. There are only five bytes in the .ALL format to fill the last byte group of the last data packet, and that byte group is padded with two FF (255) bytes.

The checksum byte is calculated as the XOR of all other bytes in the SysEx data packet, excluding the 240 and 247 start and stop bytes. When receiving a SysEx dump, the total XOR checksum of the bytes between 240 and 247 should therefore always be 0. (NB this is substantially different from the Roland way of doing SysEx checksums).
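
A minimal Python sketch of the byte-group encoding and the checksum check as I read the description above (the exact bit layout is my interpretation of the text, not a verified reference; the author's actual converter is a Perl script):

def encode_group(seven_bytes):
    # turn 7 original 8-bit bytes into 8 SysEx data bytes (all with MSB 0)
    out, lsbs = [], 0
    for i, b in enumerate(seven_bytes):
        out.append(b >> 1)        # the byte with its LSB stripped, as a 7-bit value
        lsbs |= (b & 1) << i      # LSB of the first byte into bit 0, of the last byte into bit 6
    return out + [lsbs]

def checksum_ok(packet):
    # packet is a full SysEx data packet: 240, header/data/checksum bytes, 247
    x = 0
    for b in packet[1:-1]:
        x ^= b
    return x == 0                 # XOR of everything between 240 and 247 must be 0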

With this knowledge, I wrote a Perl script to convert .ALL files to SysEx (known as .syx) bytestreams. Owners of GEM WS1/WS2/WS400 keyboards who find themselves without floppies or without a working floppy drive can now load their .ALL files via a computer (with e.g. MIDI-OX or SysEx Librarian). If interested, send me an e-mail!




by Bert de Bruijn (noreply@blogger.com) at November 09, 2014 09:26 PM

Mattias Geniar

Behind the scenes at World of Warcraft: the datacenter tech

There's an interesting documentary on World of Warcraft to be found on Youtube. It goes on for about an hour and contains quite a bit of "behind the scenes" footage of Blizzard's technology: their datacenters. If you're interested, the video

The post Behind the scenes at World of Warcraft: the datacenter tech appeared first on ma.ttias.be.

by Mattias Geniar at November 09, 2014 12:15 PM

Frederic Hornain

[IT Quotient] Take the 5-Minute Assessment

https://insights.redhat.com/s/it-quotient/


SIMPLIFY THE EQUATION.
Building highly capable IT infrastructures and next-generation applications doesn’t have to be complicated. Take The IT Quotient assessment and answer key questions about the standardization, optimization and automation of your environment. Your IT Quotient is calculated instantly, pinpointing whether you’re tactical, strategic or visionary when it comes to managing IT. A detailed report also presents symptoms you likely face at your current level of maturity and provides useful recommendations on how to take IT to the next level.





Ref :
https://insights.redhat.com/s/it-quotient/

KR
Frederic


by Frederic Hornain at November 09, 2014 11:57 AM

[Red Hat | FeedHenry] MBaaS Services

FH_Redhat_Logo


This demo video shows how MBaaS services help to integrate with backend systems, promoting code reuse. The video also shows how easy it is to create custom MBaaS services for your app projects in FeedHenry 3.



Ref:
http://www.feedhenry.com/

KR
Frederic


by Frederic Hornain at November 09, 2014 11:55 AM

[Antwerp] Red Hat Technical User Group BeNeLux – November 17th 2014

http://www.redhatonline.com/benelux/events.php

Save the date and join us on November 17th at the Antwerp ZOO for the free Red Hat Technical User Group half-day seminar. This technical afternoon is an excellent opportunity to get the latest updates regarding technical topics and new releases. You can also discuss all your technical questions with Red Hat experts and your peers.

During the plenary presentation you will hear more about Red Hat’s involvement in OpenStack and our unique perspective on it. You will learn how to start using OpenStack and how it can be implemented successfully.

You also have the opportunity to attend break-out sessions discussing topics such as Data Virtualization, OpenShift and the release of Satellite 6. We will inform you of updates, new features and how to get the most out of these products. More detailed information about the agenda will follow shortly.

We will end the afternoon with networking drinks, giving you the opportunity to talk to other customers and Red Hat technical experts.

Don’t miss the Red Hat Technical User Group half-day seminar! The number of seats is limited, so register today. Participation in this seminar is free.

Agenda can be found at the following URL : http://www.redhatonline.com/benelux/events.php


by Frederic Hornain at November 09, 2014 11:38 AM

[Assessment] Red Hat Private Cloud Blueprint

https://insights.redhat.com/s/private-cloud-blueprint
Thinking of implementing a private cloud for your organization? Make sure you build one that fits your needs.

With the Red Hat® private cloud blueprint, you can identify your primary drivers for introducing private cloud capabilities into your infrastructure and get recommended options ideally suited to your current environment.

Take the assessment and get your blueprint at https://insights.redhat.com/s/private-cloud-blueprint

KR
Frederic


by Frederic Hornain at November 09, 2014 11:23 AM

[OpenStack] Together we move forward

http://www.redhat.com/en/technologies/linux-platforms/openstack-platform




Red Hat Enterprise Linux OpenStack Platform architecture


Learn more at: redhat.com/openstack

Ref :
http://www.redhat.com/en/technologies/linux-platforms/openstack-platform
http://www.redhat.com/en/insights/openstack?sc_cid=70160000000eBtyAAE

KR
Frederic


by Frederic Hornain at November 09, 2014 11:02 AM

Wouter Verhelst

Thanks, joeyh

Last Friday saw a somewhat distressing email to the debian-devel mailing list, wherein Joey Hess, one of Debian's most valuable contributors, announced his decision to quit the project.

For all of Joey's contributions over the years, this is an unwelcome message; I'd much rather have seen him remain active in Debian, both on a personal and a technical level. As it is, I have a feeling of not just losing a colleague in Debian, but also a friend.

For people not active in Debian, it's easy to miss Joey's contributions to the project, but let me tell you that without Joey Hess, Debian simply would not be where it is today. We would not have the debian-installer and we would not have debhelper (so we would have massively complicated debian/rules files). More than that, though, Joey has been one of those people who, on a technical level, seemed to instinctively sense the right thing to do; as in, whenever Joey Hess disagrees with you on a technical matter, you know you must be doing something wrong.

As sudden and as unwelcome as last Friday's announcement was, however, I can't say that it was totally unexpected. I've noticed Joey reducing his efforts in core areas of the project over the past few years: on the one hand he has been mostly withdrawing from debian-installer development, and on the other hand his two most recent 'large' projects (ikiwiki and git-annex) haven't really been about Debian. Still, it's a painful loss; and while Debian has lost other high-profile contributors in the past over divisive issues, and recovered, that doesn't make it any more fun.

Here's to you, Joey. May you find joy in whatever you decide to do next. May you not disappear from our collective radars. May we meet again, at some conference in the future, so I can buy you a beer. (hint: FOSDEM ;-) )

November 09, 2014 08:28 AM

November 07, 2014

Frederic Hornain

[Red Hat Customer Portal] Reorganized homepage, and updated navigation


Inspired by the new RedHat.com website, the Customer Portal[1] received a complete refresh, including a new design, reorganized homepage, and updated navigation and menus.

[1] https://access.redhat.com

KR
Frederic


by Frederic Hornain at November 07, 2014 05:55 PM

[MBAAS] FeedHenry 3 Mobile Application Platform Overview Demo



The FeedHenry 3 Mobile Application Platform brings agility, visibility and efficiency to enterprise mobility — providing support for a range of toolkits and offering a suite of features that embrace collaborative app development across multiple teams and projects, with centralized control of security & backend integration, and a choice of cloud deployment options.

KR
Frederic


by Frederic Hornain at November 07, 2014 05:27 PM

The Death of 2.0



“The speed with which strategy and business is changing is now fundamentally changing the way IT needs to and can deliver. I mean, I call it the death of version 2.0. Used to be IT would put in a system, relax for six months, catch their breath, and then start collecting a whole new set of requirements for a version 2.0—spend six months doing that, do planning, ultimately do coding, and three months after the first release is out come out with the second release.


Business doesn’t wait for that anymore. These fundamental large releases, whether it’s taking big, packaged applications and going through revisions or developing massive sets of functionality that are standalone and upgrade over multiyear time frames no longer works. What we’re seeing is successful IT organizations today really focus on rapid prototyping, get something up, get it running, and add to it. That’s what we’re seeing happening in Web 2.0 companies that have developed tremendous technologies like big data. We are seeing enterprise companies doing that by saying departmental pilot or let’s do a very specific thing with one set of customers or one specific function for our customers and then build out functionality from there because demands are changing much too quickly.”

Agree with that.

KR
Frederic


by Frederic Hornain at November 07, 2014 10:16 AM

[CIO] The Enterprisers Project








The Enterprisers Project[1] is a community built to discuss the evolving role of the CIO and how they can maximize their impact on the business.
The goal of that project is to bring together business-minded IT leaders to discuss how they drive business strategy and inspire enterprise-wide innovation.


The Enterprisers Project is a collaborative effort between CIO magazine, Harvard Business Review, and Red Hat [2].



[1] https://enterprisersproject.com
[2] http://www.redhat.com


Kind Regards
Frederic


by Frederic Hornain at November 07, 2014 10:02 AM

November 06, 2014

Wim Leers

Drupal 8's render pipeline

In Drupal 8, we’ve significantly improved the way pages are rendered. This talk explains the entire render pipeline, in some detail. But it also covers:

Besides that, I also cover a few of the most interesting new possibilities in Drupal 8:

Update (November 14, 2014): since I gave this talk, https://www.drupal.org/node/2352155 was committed, so this talk is now indeed a comprehensive, correct talk about the finalized Drupal 8 render pipeline!

by Wim Leers at November 06, 2014 11:59 PM

Frank Goossens

Message for Jo Libeer (Voka) and Luc Coene (NBB/OpenVLD)

I may not be walking along from the North Station to the South Station today, but that is absolutely not a political signal; in my heart (and, along with mine, the hearts of quite a few other working people) I am marching along!

by frank at November 06, 2014 12:11 PM

November 05, 2014

Frank Goossens

Music from Our Tube; Arsenal ft. Lydmor

Belgian band Arsenal ft. Lydmor with “Temul (Lie Low)” and part of the movie “Dance Dance Dance”:
Watch this video on YouTube or on Easy Youtube.

Kind of reminds me of “It’s the end of the world” and “Subterranean Homesick Blues” somehow, but in a good way :-)

by frank at November 05, 2014 03:21 PM

Dries Buytaert

The job of Drupal initiative lead

Drupal 8 is the first release for which we introduced the concept of formal initiatives and initiative leads. Over the course of these Drupal 8 initiatives we learned a lot, and people are floating several ideas to increase the initiatives' success and provide Drupal initiative leads with more support. As we grow, it is crucial that we evolve our tools, our processes, and our organizational design based on these learnings. We've done so in the past and we'll continue to do so going forward.

But let's be honest, no matter how much support we provide, leading a Drupal initiative will unquestionably remain difficult and overwhelming. As a Drupal initiative lead, you are asked to push forward some of the most difficult and important parts of Drupal.

You will only succeed if you are able to build a strong team of volunteers that is willing to be led by you. You have to learn how to inspire and motivate by articulating a vision. You establish credibility by setting clear objectives and roadmaps in partnership with others. You have to motivate, guide and empower people to participate. You have to plan and over-communicate.

Not only do you have to worry about building and leading a team, you also have to make sure the rest of the community has shared goals and that everyone impacted has a shared understanding of why those decisions are being made. You use data, ideas and feedback from different sources to inform and convince people of your direction. Your "soft skills" are more important than your "hard skills". Regardless, you will lose many battles. You only "win" when you remain open to feedback and value change and collaboration. To lead a community, you need both a thick skin and a big heart.

Success is never a coincidence. You put in long hours to try and keep your initiative on track. You need relentless focus on doing whatever is necessary to succeed; to be the person who fills all the gaps and helps others to be successful. Instead of just doing the things you love doing most, you find yourself doing mundane tasks like updating spreadsheets or planning a code sprint to help others be successful. In fact, you might need to raise money for your code sprint. And if you succeed, you still don't have enough money to achieve what is possible and you feel the need to raise even more. You'll be brushing aside or knocking down obstacles in your path, and taking on jobs and responsibilities you have never experienced before.

Your objectives will constantly shift as Drupal itself iterates and evolves. You will want to go faster and you will struggle with the community processes. Imagine working on something for a month and then having to throw it out completely because you realize it doesn't pass. Frustration levels will be off the charts. Your overall goal of achieving the perfect implementation might never be achieved and that feeling haunts you for weeks or months. You will feel the need to vent publicly, and you probably will. At the worst moments, you'll think about stepping down. In better times, you realize that if most of your initiative succeeds it could take years of follow-up work. You will learn a lot about yourself; you learn that you are bad at many things and really good at other things.

Leading is incredibly hard, and yet it will be one of the best things you ever did. You work with some of the finest, brightest, and most passionate people in the world. You will see tangible results of your hard work and you will impact and help hundreds of thousands of people for the next decade. There is no better feeling than when you inspire or when you help others succeed. Leading is hard, but many of you will look back at your time and say this was the most gratifying thing you ever did. You will be incredibly proud of yourself, and the community will be incredibly proud of you. You will become a better leader, and that will serve you for the rest of your life.

by Dries at November 05, 2014 02:18 PM

Joram Barrez

How to create an Activiti pull request

Every once in a while, the question of how to create a pull request for Activiti is asked in our Forum. It isn't hard to do, but especially for people who don't know git it can be daunting. So, I created a short movie that shows you how easy it is and what you need to do […]

by Joram Barrez at November 05, 2014 01:53 PM

October 31, 2014

Dries Buytaert

Michael Skok


Some of you picked up that Michael Skok is leaving North Bridge, Acquia's lead investor. A number of people asked me if Michael is leaving Acquia's Board of Directors as part of that. I'm pleased to say that Michael is staying on as a Director on Acquia's Board.

I first met Michael in the summer of 2007. From the moment I met Michael I knew that he was someone that I could trust and learn from. From the day we started Acquia, we had big dreams -- many of which we have realized today. In large part because Michael went all-in and helped us every step of the way. From his operational experience, to his relevant domain expertise, to his passion for Open Source and focus on building great teams and products, Michael has been an incredible asset to our Board. Fast forward 8 years and I'm as excited as ever to work with Michael to realize even bigger dreams with Acquia.

by Dries at October 31, 2014 09:12 PM

Joram Barrez

Model your BPMN 2.0 processes in the Activiti Cloud

Last month, we released the very first version of our cloud offering around Activiti. If you missed it, there's a short press release here (scroll to the section about Activiti). Doing it as a cloud-first release has provided us with very good feedback, not only on the deployment and setup side, but also from partners and potential customers […]

by Joram Barrez at October 31, 2014 09:42 AM