I published the following diary on isc.sans.edu: “Simple but Efficient VBScript Obfuscation“:
Today, it’s easy to guess whether a piece of code is malicious or not. Many security solutions automatically detonate it in a sandbox. This remains a quick and (most of the time, still) efficient way to get a first idea of the code’s behavior. In parallel, many obfuscation techniques exist to avoid detection by AV products and/or make the life of malware analysts more difficult. Personally, I like to find new techniques and discover how imaginative malware developers can be when implementing new obfuscation techniques… [Read more]
I published the following diary on isc.sans.edu: “Quick Analysis of an Encrypted Compound Document Format“:
We like it when our readers share interesting samples! Even if we have our own sources to hunt for malicious content, it’s always interesting to get fresh meat from third parties. Robert shared an interesting Microsoft Word document that I quickly analysed. Thanks to him! The document was delivered via an email that also contained the password to decrypt the file… [Read more]
For a few months, I’ve been writing less often on this blog, except to publish my conference wrap-ups and cross-post my SANS Internet Storm Center diaries. But today, I decided to write a quick post after spending a few hours debugging a problem with my mail server…
It started with a notification from a contact who found one of my emails in the junk folder of his Office365 account. I had already had the same issue a few times in the past but did not pay attention to it. It could always be blamed on a “borderline” expression in the message. I asked him to provide all the headers of my blocked email and I found this:
To explain why it was categorised as spam, Microsoft provides a page with all the details. Apparently, Microsoft was not able to check the SPF record (“SPF:TempError” means a temporary DNS failure occurred). I don’t know why, because the DNS servers hosting the zone with the SPF records are available! Anyway…
In the meantime, I started to investigate on my mail server what could flag a mail as “suspicious” or “malicious”. To achieve this, I reviewed all the security controls related to SMTP:
DKIM? Oops… It was not working; outgoing emails were not signed.
And so it began… I dived into my Postfix config, my OpenDKIM config, checked the keys, the configured domains, the selectors, the DNS records… For most of us, Google is the first step to get some support. Let’s search for potentially useful keywords. Ouch, the first hit suggests stackoverflow.com… Not the most reliable place on the Internet. Finally, the problem was fixed and DKIM is now working properly again!
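When debugging this kind of setup, it helps to sanity-check the published DKIM key record (the TXT record at selector._domainkey.domain) before digging further into Postfix or OpenDKIM. Here is a minimal, illustrative Python sketch of such a check; the record string is a made-up example, and in practice you would fetch the real one with a DNS query first:

```python
# Parse a DKIM key record (an RFC 6376 tag=value list), fetched beforehand
# with e.g.:  dig +short TXT selector._domainkey.example.com
def parse_dkim_record(txt):
    """Split a DKIM TXT record into its tags; raise if the public key is missing."""
    tags = {}
    for part in txt.split(";"):
        part = part.strip()
        if not part:
            continue
        key, _, value = part.partition("=")
        tags[key.strip()] = value.strip()
    if not tags.get("p"):
        # An empty or absent 'p' tag means the key was revoked or never published
        raise ValueError("DKIM record has no public key ('p' tag)")
    return tags

record = "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQ"
tags = parse_dkim_record(record)
print(tags["v"], tags["k"])  # DKIM1 rsa
```

A record that parses but has an empty “p=” tag is a classic silent failure mode: the DNS entry exists, yet signatures can never validate.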
If you step back for a few seconds, this is a good example of why the security of many systems keeps failing… Everything is way too complex and many people are afraid when they have to implement more security controls. Their motto sounds like:
If it works, don’t break it! It has been running like this for a long time, and nobody knows exactly how…
Here are some examples (feel free to share yours via comments):
SPF records are kept in “safe mode” to avoid outdated filters blocking email delivery
Certificates expire without notification (always on December 31st at 23:59)
DNSSEC is a pain to maintain with the regular renewal of keys (Did you read the latest story about DNSSEC?)
Docker credentials are difficult to integrate into some applications with Docker secrets.
Let’s Encrypt SSL certificates are nice, but certbot may fail for multiple reasons… The most classic one being a too-strict redirect from HTTP to HTTPS.
Product documentation is incomplete and does not mention which ports/IPs/domains to allow. People configure “any to any” to be certain that the application will work.
Mail encryption with PGP is out of the competition (people will never understand the difference between public and private keys).
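The SPF “safe mode” example above can even be checked mechanically: the qualifier on the trailing “all” mechanism tells you whether the policy actually rejects anything. A small illustrative Python check (the record strings are invented examples):

```python
def spf_policy(record):
    """Return the effective policy of the trailing 'all' mechanism in an SPF record."""
    qualifiers = {"+": "pass (useless)", "~": "softfail (safe mode)",
                  "-": "hard fail", "?": "neutral"}
    for term in record.split():
        if term.lstrip("+~-?") == "all":
            # No explicit qualifier means '+'
            q = term[0] if term[0] in qualifiers else "+"
            return qualifiers[q]
    return "no all mechanism"

# A record left in "safe mode": mail failing SPF is only soft-failed
print(spf_policy("v=spf1 mx ip4:192.0.2.10 ~all"))  # softfail (safe mode)
print(spf_policy("v=spf1 mx -all"))                 # hard fail
```

A domain that stays on “~all” forever never gets the protection SPF was deployed for in the first place.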
Finally, while working on customers’ premises, I see a trend to make platforms more and more complex. It seems that today everything must be “agile”, must be “orchestrated”, … In the end, we lose control!
I published the following diary on isc.sans.edu: “Keep an Eye on Command-Line Browsers“:
For a few weeks, I have been searching for suspicious files that make use of a command-line browser like curl.exe or wget.exe in Windows environments. Wait, you were not aware of this? Just open a cmd.exe and type ‘curl.exe’ on your Windows 10 host… [Read more]
I've been using Icinga2 at various customers and on our own infrastructure since it was released. I never released the roles on Ansible Galaxy, since I didn't consider the individual roles a useful product on their own, and it would have required quite some effort to show how they're supposed to work. One of the disadvantages of that is that after a few years I had at least three different versions: one for Debian only, one that also supported CentOS, and one for enterprise customers with a Satellite server. All of them had their own checks and tweaks, and it was not always easy (or allowed) to keep them in sync.
At my previous enterprise customer, management was open minded enough to allow an open source license for our automation work and we have been able to add molecule testing and a few neat features. Pieter Avonts had the brilliant idea to support flexible Icinga host vars based on Ansible inventory variables, which solved one of the important issues we had with Ansible variable inclusion in host.conf and checks (you could only use variables defined in group_vars/all before).
Once Ansible collections became a thing with Ansible 2.9, the time was right to get started. Now I finally consider it to be in a ready-to-release state. Check the Icinga2 collection on Ansible Galaxy and the code on GitHub. It supports Debian, Ubuntu, CentOS and RHEL (and it should still support Satellite-based repos with a few configuration tweaks). The molecule tests have some examples to get you started, but I'll try to write a few blog posts with practical examples in the future.
The title says it all; I just pushed the first beta of Autoptimize 2.7, which has some small fixes/improvements but which, most importantly, finally sees the “Autoptimize CriticalCSS.com power-up” fully integrated.
Next to the actual integration and switching most (but not all) of the AOCCSS files to object-oriented code, there are some minor functional changes as well, the most visible ones being buttons to clear all rules and to clear all jobs from the queue.
I hope to release AO 2.7 officially in April, but for that I need the beta thoroughly tested, of course. The most important part to test is obviously the critical CSS logic, so if you have the power-up running, download/install the beta and simply disable the power-up to have Autoptimize fill that void (if the power-up is active, AO refrains from loading its own critical CSS functionality).
I published the following diary on isc.sans.edu: “Sandbox Detection Tricks & Nice Obfuscation in a Single VBScript“:
I found an interesting VBScript sample that is a perfect textbook case for training or learning purposes. It implements a nice obfuscation technique as well as many classic sandbox detection mechanisms. The script is a dropper: it extracts from its code a DLL that will be loaded if the script is running outside of a sandbox. Its current VT score is 25/57 (SHA256: 29d3955048f21411a869d87fb8dc2b22ff6b6609dd2d95b7ae8d269da7c8cc3d)… [Read more]
Location of this session: Université de Mons, Campus Sciences Humaines, FWEG building, 17 place Warocqué, Auditorium 236L, 2nd floor. See this map on the UMONS website, or the OSM map, as well as the parking access, rue Lucidel.
Participation is free and only requires your registration, preferably in advance, or at the entrance to the session. Please indicate your intention by registering via the page http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.
If you are interested in this monthly series, feel free to consult the agenda and to subscribe to the mailing list in order to systematically receive the announcements.
As a reminder, the Jeudis du Libre are intended as spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organized on the premises of, and in collaboration with, the Mons colleges and university faculties involved in computer science education (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, active in the promotion of free software.
Description: For most people, the word wiki refers to the online encyclopedia Wikipedia. More generally, a wiki is a content management system for creating, editing and presenting information in many pages linked together by hyperlinks. Editing is often collaborative and uses a simplified markup language, and the dedicated software handles content indexing, searching, access control and the preservation of successive versions. Well-known wikis offer extensions for additional features and graphical themes to change the site's appearance.
DokuWiki is free software written in PHP, with the advantage of running without a database, unlike MediaWiki (the engine behind Wikipedia). DokuWiki is one of the best-known wiki engines. It is aimed at development teams, group work and small businesses, but its ease of maintenance, backup and integration, as well as the many extensions and graphical themes available, make it a very effective Swiss Army knife of the web, both for individual use and for large communities.
The presentation will mainly be an experience report, and will help you understand the installation, configuration and operation of DokuWiki, and discover its many possible uses.
In 2014, I declared that companies would need more than a website or a CMS to deliver great customer experiences. Instead, they would need a DXP that could integrate lots of different marketing and customer success technologies, and operate across more digital channels (e.g. mobile, chat, voice, and more).
For five years, Acquia has diligently been building towards an "open DXP". As part of that, we expanded our product portfolio with solutions like Lift (website personalization), and acquired two marketing companies: Mautic (marketing automation + email marketing), and AgilOne (Customer Data Platform).
Being named a Leader by Gartner in the Digital Experience Platforms Magic Quadrant validates years and years of hard work, including last year's acquisitions. We also join an amazing cohort of leaders, like Salesforce and Adobe, who are among the very best software companies in the world. All of this makes me incredibly proud of the Acquia team.
This recognition comes after six years as a Leader in the Gartner Magic Quadrant for Web Content Management (WCM). We learned recently that Gartner is deprecating its Web Content Management Magic Quadrant as Gartner clients' demand has been shifting to the broader scope of DXPs. We also see growing interest in DXPs from our clients, though we believe WCM and content will continue to play a foundational and critical role.
You can read the full Gartner report on Acquia's website. Thank you to the entire Acquia team who made this incredible accomplishment possible!
Mandatory disclaimer from Gartner
Gartner, Magic Quadrant for Digital Experience Platforms, Irina Guseva, Gene Phifer, Mike Lowndes, 29 January 2020.
This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Acquia.
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
Everything that emits light hypnotizes us. We can spend whole minutes staring at the crackling flames of a campfire. When we enter a room with a television on, our gaze is immediately drawn in, sucked in.
Giving up social networks, as I did with my disconnection? That is only a small part of the problem. We still permanently carry a screen in our pocket, a screen we want to check at the slightest sign of boredom or idle moment. A screen that satisfies us more than reading a few pages of a book or taking a few seconds to meditate.
I realized that I was not addicted to the apps on my phone or computer, but to the screen itself. The moment I stop an activity, I find myself, almost consciously, trying to imagine what I could do to justify spending time on a screen.
A phenomenon I do not have at all with my e-reader. Its screen is essentially reflective. There are no fluid, hypnotic, moving images.
Logically, I started looking for an e-ink phone, a phone without a luminous, colorful screen. Several models appeared around 2013-2014, but none seems to have been successful. The only model that still seems to exist is the Hisense A5, which I decided to buy.
Initially announced at $100, the Hisense A5 actually costs more than $200 on most import sites. Add customs fees and you are closer to €250. But I had to replace my broken phone anyway.
The Hisense A5 runs Android, but ships in two languages: English and Chinese. No other choice. Note that the translation is incomplete; Chinese ideograms appear almost everywhere. It is not dramatic, but some gestures launch applications in Chinese that I have not managed to uninstall. A bit annoying.
Another point of attention: Google services are not installed on the phone. That suits me, since I wanted to get rid of Google anyway, but it comes with a few surprises that I will get back to.
A bit of tinkering is therefore needed to install your applications: install F-Droid and, from F-Droid, install Aurora Store, which lets you install just about any application from the Google app store.
The e-ink screen in everyday use
Having an e-ink screen is a surprising experience. You get used to black and white very quickly. The screen has two display modes (a sharper but slower one for photos, and a fast one for everyday interaction). Switching between them via a shortcut is easy, and it is really very useful.
After a few days, seeing the screens of other phones feels incredibly aggressive. They move, they are colorful, they are violent. The e-ink phone is incredibly zen.
Another particularity of e-ink: it is quite slow. No scrolling through feeds or news at full speed. Far from being a problem, this is actually an incredible feature. During an idle moment, I reflexively grab my phone, then catch myself thinking: "Why am I picking this thing up? What do I actually want to do with it?" The e-ink phone is therefore a real detox!
Pocket, in page-turning mode, is in its element. It is just about the only application that makes you want to use it on this phone.
For everyday applications, being on e-ink is not particularly crippling. You get used to it very quickly, except for badly designed applications in which you cannot tell activation from deactivation, dark grey (off) and dark blue (on) rendering identically. It is particularly striking in Garmin Connect, and it makes me think that those applications are unusable for people with vision impairments such as color blindness.
Photos and videos
The camera works very well and takes perfectly decent, even beautiful, pictures… once displayed on another device! It is actually quite amusing not to be able to judge a photo's result immediately. It brings back some of the spirit of film cameras: take fewer pictures (because the interface is less responsive and discourages machine-gunning) and do not look at them until you are back home.
The fear of missing a good photo is also one of the problematic addictions of our smartphones. Just look at the crowd at any important event: everybody watches the world through their smartphone! An e-ink screen, without solving the problem, makes it less pressing. A kind of balance I find acceptable: you can take photos, but they do not replace reality.
That feeling is also there when traveling, as I receive photos from my family. The choppy black-and-white video call reinforces the impression of remoteness, of distance. I personally find that a good thing, because it makes you realize the importance of real presence, and that chat and video are only substitutes.
The marvel of the e-ink screen is that it consumes almost nothing! On a normal day, connected to wifi with light usage, I use between 10 and 15% of the battery. While traveling, after a very long day permanently on 4G, with heavy GPS use, video calls and photo taking, I never managed to get below 50% battery. By being careful (limited use, airplane mode when idle), you can easily last more than a week.
In short, this phone is the ultimate traveler's weapon: enormous battery life and the obligation to stay permanently connected to the real world around you!
As we discussed with Thierry Crouzet: this phone could well be the ultimate companion of the bikepacker. Long battery life, no way to spend the evening sorting the day's photos, and it can replace the e-reader.
For everyday use, I still much prefer my Vivlio e-reader, with its large screen, physical buttons and lack of distractions. But when bikepacking, where fatigue makes reading very sporadic, the Hisense A5 would be enough, saving a good hundred grams of luggage.
Life without Google
As I mentioned, this phone does not allow installing Google services. What are Google services? A software layer that links an application to your Google account.
This means you can install apps through the Aurora Store, but only free apps. It also means that it is impossible to sign in with your Google account. Google Maps works very well, but not with your account (so you cannot share your position with your friends or use your favorites). It also means no easy import of my contacts from my previous phone.
Personally, the most painful gaps, for which I have no solution yet, are Google Calendar and Google Music. I also curse the lack of Google Photos, which let me access my photos from my computer immediately. I use Tresorit, but its photo-import feature is not really mature and does not seem to work on this phone (nor does Dropbox's).
Another gap that surprised me: Strava refuses to launch without Google services! Brain.fm also warns that it will not work but, once you dismiss the error, it launches without a problem. I admit I could not do without Brain.fm, even though I have the impression that the sound quality in my Bluetooth headphones is lower than on my previous phone.
For the rest, everything works rather well, including Google Maps and my Bluetooth gadgets. Some applications can have small issues, though: the weather app I use cannot obtain a position. Signal sometimes only notifies me of certain messages when I launch the application manually. Brain.fm sometimes crashes if I use other applications while it is running. And Strava does not launch at all!
From the above, it is obvious that this phone is reserved for geeks ready to tinker and make sacrifices.
However, for a phone that requires tinkering and an Android system not designed at all for an e-ink screen, it behaves incredibly well. Unlike a Light Phone, very pretty on paper but incredibly limited in everyday life (no banking app, no two-factor authentication, no way to quickly find an address), the Hisense A5 is a real phone.
I am convinced that interfaces specifically designed for e-ink will completely change our relationship with our screens. I already see it with the Freewrite which, despite all its software problems, changes my relationship with writing.
I think I could no longer go back to a traditional phone. I am just waiting for Protonmail to release its calendar app for Android, and for a replacement solution to stream my music, and this phone will be almost perfect!
It also means that when I do not need my laptop, I now walk around without a single color screen. A true liberation!
I am @ploum, speaker and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and recognition. Thank you!
I’m just back from Lille (France), where the “FIC”, or “International Cybersecurity Forum”, is organized today and tomorrow. This event is very popular with some people but not technical at all. Basically, you find all the vendors in one big place trying to convince you that their solution, based on blockchain, machine learning and running in the cloud, is the one that you MUST have! Anyway… Besides this mega-vendor party, a smaller event has been organized in parallel for a few years now (I think this was already the 4th edition): “CoRIIN” or, in French, “Conférence sur la réponse aux incidents et l’investigation numérique”. 400 people registered and came to follow presentations around forensics, investigations, etc. Here is my quick wrap-up, as usual!
The conference started with an overview of the WhatsApp application on mobile phones: “L’investigateur, le Smartphone, et l’application WhatsApp” by Guénaëlle De Julis. She had the opportunity to investigate a mobile phone and was asked to find data in the WhatsApp application. Given the lack of documentation available, she decided to have a look at the application herself. Her talk covered both iOS & Android devices. First, Guénaëlle explained how to access the data: some is available via backups, some needs root access (jailbreak), depending on what you need to extract. The data is stored in an SQLite database. Once extracted, how do you search for juicy information? You can, of course, use one of the many SQLite viewers available, but Guénaëlle decided to use Python with the Pandas library to write powerful queries. Some examples:
Export messages for a defined period
Try to find the number of missing messages for a defined period
Groups: identify members and associated messages
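The first use case, exporting messages for a defined period, boils down to a date-range query on the extracted SQLite database. Here is an illustrative Python sketch using the standard sqlite3 module (the schema below is invented; real WhatsApp tables and columns differ per platform, and the speaker used Pandas, whose read_sql_query would load the same result into a DataFrame):

```python
import sqlite3

# Build a tiny mock of an extracted messages database. The schema is
# hypothetical, purely for illustration of the querying technique.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE messages (sender TEXT, body TEXT, ts TEXT)")
db.executemany("INSERT INTO messages VALUES (?, ?, ?)", [
    ("alice", "hello", "2020-01-10 09:00:00"),
    ("bob",   "hi",    "2020-01-15 18:30:00"),
    ("alice", "later", "2020-02-01 08:00:00"),
])

# Export messages for a defined period (January 2020 here).
rows = db.execute(
    "SELECT sender, body FROM messages WHERE ts BETWEEN ? AND ? ORDER BY ts",
    ("2020-01-01 00:00:00", "2020-01-31 23:59:59"),
).fetchall()
print(rows)  # [('alice', 'hello'), ('bob', 'hi')]
```

Gaps in message IDs or timestamps found with similar queries are what hint at deleted (missing) messages, the second use case above.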
I don’t have experience with mobile forensics and learned interesting stuff with this first presentation.
Then, Philippe Baumgart and Fahim Hasnaoui presented their point of view on investigations in cloud environments. They started by reviewing common issues that organizations face when using the cloud: cloud resources are easy to order (just a credit card is required) and “test” platforms often become production platforms. Welcome to shadow IT! Then, they explained the challenges of collecting information from cloud providers. An interesting point of view: providers offer powerful monitoring tools that can also be used to detect suspicious activities. Example: a peak of traffic on a database server could be related to some data exfiltration! I liked their mention of “intra-cloud” attacks, where people rent virtual machines inside a cloud infrastructure and scan for servers with non-public IP addresses.
Then, Sébastien Mériot, head of the OVH CSIRT, came to discuss data leaks and credential stuffing. Data leaks are everywhere, and people like to know what has been done with the stolen data. Though, according to Sébastien, it’s just as interesting to look at what happened before the leak. Often, the data leak is not the primary goal of the attackers; it’s an opportunity. Leaked data is not immediately released because passwords are often hashed and salted. Attackers need time to brute-force the passwords. But it’s a fact that the time between the leak and the public release tends to shrink due to… the power of GPUs! Passwords can be cracked very quickly today. Then they are used for credential stuffing, meaning they are tested against many services with dedicated tools that automate the process.
Sébastien explained that OVH, as a major cloud player, is also a prime target for credential stuffing. He also described some counter-measures they implemented, like dynamic forms (that change all the time to prevent automation). From a business point of view, phishers are moving to credential stuffing… many of them operating from Africa.
The next presentation was given by François Normand about Pastebin and all the malicious content that can be found on this service. He reviewed some examples of abuse, like storing malicious code in encoded form. The encoding can vary; XOR can be applied to defeat security controls. It’s worth keeping in mind that attackers use legitimate services to distribute malicious content, and those services are not easy to block.
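As a toy illustration of the XOR trick mentioned above (the key and payload here are invented), decoding such content is trivial once the key is recovered, because XOR with a repeating key is its own inverse:

```python
def xor_bytes(data, key):
    """XOR data with a repeating key; the same function encodes and decodes."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

payload = b"powershell -enc ..."   # hypothetical malicious snippet
key = b"K3y"                       # hypothetical XOR key
encoded = xor_bytes(payload, key)  # what would be pasted on the service

# Applying the same operation again recovers the original payload
assert xor_bytes(encoded, key) == payload
print(encoded.hex())
```

This is why simple signature matching fails against such pastes: the stored bytes share nothing with the original payload until the key is applied.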
After a lunch break, “A year hunting in a bamboo forest” was presented by Aranone Zarkan and Sébastien Larinier. Sorry, no information because the presentation was under TLP:Amber.
Giovanni Rarrato came to present his baby: “Tsurugi Linux”. He is the core developer of this Ubuntu-based Linux distribution dedicated to forensic investigations. It contains a lot of preinstalled tools that speed up the processing of data.
Solal Jacob, from the ANSSI agency, came to present his research: memory analysis of a Cisco router running IOS-XR. This type of router runs a modified version of QNX. Solal explained step by step how he learned the way the platform runs, then how to access the memory to extract and save it, and finally the framework used to extract artifacts from the memory image. Great research that was not easy to present in only 30 minutes! His framework “Amnesic Sherpa” will be available soon.
After a welcome coffee break, Stéphane Bortzmeyer, from AFNIC, presented “Who’s really the owner of this IP prefix?”. Usually, when you need to learn more about the owner of an IP address, you just use the “whois” command. Easy… but are you confident in the results? Stéphane started with an example: In 1990, the prefix 18.104.22.168/16 was assigned to a company called “Athenix” based in California. It went bankrupt. In 2008, a new company called “Athenix” was created in Massachusetts. They took over the prefix, easy! The business of IPv4 is growing because, as we have run out of addresses, organizations use many techniques to get more IPs. The problem is that techniques exist to authenticate the ownership of prefixes, but people don’t use them (for example, RPKI). Conclusion: investigating is easy because the tools are easy, but what lies behind the data? Business relations, etc… I really liked the presentation!
Jean Gautier presented another tool: DFIR-ORC. “ORC” stands for “Outil de Recherche de Compromission”. It’s a tool dedicated to extracting artifacts from a compromised system. Why reinvent a tool? Because it must be reliable, easy to use, with minimal impact on the analysed system, and highly customizable! Some features are really impressive, like using the NTFS MFT to access locked files, volume shadow copies or deleted files’ metadata! Jean explained in detail how the tool works, and it deserves to be tested. It is available here.
The last presentation was a return of experience by Mathieu Hartheiser and Maxence Duchet. It was based on a real case in which one of their customers was compromised via a vulnerable JunOS Pulse appliance. I expected more technical details, but it looked more like a GRC presentation…
We proudly present the last set of speaker interviews. See you at FOSDEM this weekend!
Chris Aniszczyk, Max Sills and Michael Cheng: Open Source Under Attack. How we, the OSI and others can defend it
Danese Cooper: How FOSS could revolutionize municipal government. With recent real-world examples
James Bottomley and Mike Rapoport: Address Space Isolation in the Linux Kernel
Kris Nova: Fixing the Kubernetes clusterfuck. Understanding security from the kernel up
Krzysztof Daniel: Is the Open door closing? Past 15 years review and a glimpse into the future
Mateusz Kowalski and Kamila Součková: SCION. Future internet that you can use
About a week and a half ago, I wrote that I'd been working on making SReview, my AGPLv3 video review and transcode system, work from inside a Kubernetes cluster. I noted at the time that while I'd made it work inside minikube, it couldn't actually be run from within a real Kubernetes cluster yet, mostly because I misunderstood how Kubernetes works, and assumed you could just mount the same volumes in multiple pods and share data that way (answer: no, you can't).
The way to fix that is to share the data not through volumes, but through something else. That would require that the individual job containers download and upload files somehow.
I had a look at how the Net::Amazon::S3 Perl module works (answer: it's very simple, really) and whether it would be doable to add a transparent file access layer to SReview which would access files either on the local file system or on an S3 service (answer: yes).
So, as of yesterday or so, SReview supports interfacing with an S3 service (only tested with MinIO for now) rather than "just" files on the local file system. As part of that, I also updated the code so it would not re-scan all files every time the sreview-detect job for detecting new files runs, but only when the "last changed" time (or mtime for local file system access) has changed; otherwise it would download far too many files every time.
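The mtime shortcut described here can be sketched as a small cache check. This is an illustrative Python rendition, not SReview's actual Perl code; for the S3 side, the idea is the same but comparing the object's last-modified timestamp instead of a local mtime:

```python
import os
import tempfile

def needs_rescan(path, cache):
    """Return True only if the file is new or its mtime changed since the last run."""
    mtime = os.path.getmtime(path)
    if cache.get(path) == mtime:
        return False  # unchanged: skip the expensive download/scan
    cache[path] = mtime
    return True

# Simulate two consecutive runs of a detect job over the same file
cache = {}
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"recording data")
    name = f.name
print(needs_rescan(name, cache))  # True: never seen before
print(needs_rescan(name, cache))  # False: mtime unchanged, no re-download
os.unlink(name)
```

The cache itself would have to live somewhere persistent between job runs (in SReview's case, the database), otherwise every run starts cold again.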
This turned out to be a lot easier than I anticipated, and I have now successfully managed, using MinIO, to run a full review cycle inside Kubernetes, without using any volumes except for the "ReadWriteOnce" ones backing the database and MinIO containers.
Additionally, my Kubernetes configuration files are now split up a bit (so you can more easily apply only the parts that make sense for your setup), and are (somewhat) tested.
If you want to try out SReview and you've already got Kubernetes up and running, this may be for you! Please give it a try, and don't forget to send some feedback my way.
Submission has closed for the PGP keysigning event. The list of all submitted keys is now published. The annual PGP keysigning event at FOSDEM is one of the largest of its kind. With more than one hundred participants every year, it is an excellent opportunity to strengthen the web of trust. If you are participating in the PGP keysigning, you must now: Download the list of keys; Verify that your key fingerprint is correct; Optionally, verify the integrity of the file using the detached signature; Print the list of keys on paper; Calculate the checksums, as detailed in the…
Thanks, Mark Van Damme. Well done. I suppose I will just pay that 60 euro municipal tax next year, then. Those politically responsible should not expect to be elected again.
To everyone who thinks these are all invisible paths: nonsense. Come and have a look here. These are walking paths between nature and people's houses. They are marked with yellow signs. And if you ask most people from the area around here, they are in favour. Except maybe a few who have to make part of their garden available. Oh well.
I suspect it also brings in tourism, which the local shopkeepers probably don't mind.
I published the following diary on isc.sans.edu: “Why Phishing Remains So Popular?“:
Probably, some phishing emails get delivered into your mailbox every day and you ask yourself: “Why do they continue to spam us with so many emails? We are aware of phishing and it will not affect my organization!”
First of all, email remains a very popular way to get in contact with the victim. Then, sending massive phishing campaigns does not cost a lot of money. You can rent a bot to send millions of emails for a few bucks. Hosting the phishing kit is also very easy. There are tons of compromised websites that deliver malicious content. But phishing campaigns are still valuable from an attacker's perspective when some conditions are met… [Read more]
I published the following diary on isc.sans.edu: “Complex Obfuscation VS Simple Trick“:
Today, I would like to make a comparison between two techniques applied to malicious code to try to bypass AV detection. The Emotet malware family does not need to be presented. Very active for years, new waves of attacks are always fired using different infection techniques. Yesterday, an interesting sample was spotted at a customer. The security perimeter is quite strong with multiple lines of defenses based on different technologies/vendors. This one passed all the controls! A malicious document was delivered via a well-crafted email. The document (SHA256:ff48cb9b2f5c3ecab0d0dd5e14cee7e3aa5fc06d62797c8e79aa056b28c6f894) has a low VT score of 18/61 and is not detected by some major AV players… [Read more]
We have just published the second set of interviews with our main track and keynote speakers. The following interviews give you a lot of interesting reading material about various topics, from the forgotten history of early Unix to the next technological shift of computing: Daniel Riek: How Containers and Kubernetes re-defined the GNU/Linux Operating System. A Greybeard's Worst Nightmare David Stewart: Improving protections against speculative execution side channel James Bottomley: The Selfish Contributor Explained Jon 'maddog' Hall: FOSSH - 2000 to 2020 and beyond!. maddog continues to pontificate Justin W. Flory and Michael Nolan: Freedom and AI: Can Free Software…
Nineteen years ago today, I released Drupal 1.0.0. Every day, for the past nineteen years, the Drupal community has collaborated on providing the world with an Open Source CMS and making a difference on how the web is built and run.
It's easy to forget that software is written one line of code at a time. And that adoption is driven one website at a time. I look back on nearly two decades of Drupal, and I'm incredibly proud of our community, and how we've contributed to a more open, independent internet.
Today, many of us in the Drupal community are working towards the launch of Drupal 9. Major releases of Drupal only happen every 3-5 years. They are an opportunity to bring our community together, create something meaningful, and celebrate our collective work. But more importantly, major releases are our best opportunity to re-engage past users and attract new users, explaining why Drupal is better than closed CMS platforms.
As we mark Drupal's 19th year, let's all work together on a successful launch of Drupal 9 in 2020, including a wide-spread marketing strategy. It's the best birthday present we can give to Drupal.
I spent the last week or so building Docker images and a set of YAML
that allows one to run a
99%-automated video review and transcode system, inside
minikube, a program that sets
up a mini Kubernetes cluster inside a VM for development purposes.
I wish the above paragraph would say "inside Kubernetes", but alas,
unless your Kubernetes implementation has a ReadWriteMany volume that
can be used from multiple nodes, this is not quite the case yet. In
order to fix that, I am working on adding an abstraction layer that will
transparently download files from an S3-compatible object store; but
until that is ready, this work is not yet useful for large conferences.
But that's fine! If you're wanting to run SReview for a small
conference, you can do so with minikube. It won't have the redundancy
and reliability things that proper Kubernetes provides you, but then
you don't really need that for a conference of a few days.
Here's what you do:
Download minikube (see the link above)
Run minikube start, and wait for it to finish
Run minikube addons enable ingress
Clone the SReview git repository
From the toplevel of that repository, run perl -I lib
scripts/sreview-config -a dump|sensible-pager to see an overview of
the available configuration options.
Edit the file dockerfiles/kube/master.yaml to add your configuration
variables, following the instructions near the top
Once the file is configured to your liking, run kubectl apply -f
master.yaml -f storage-minikube.yaml
Add sreview.example.com to /etc/hosts, and have it point to the
output of minikube ip.
Create preroll and postroll templates, and download them to minikube
in the location that the example config file suggests. Hint: minikube
ssh has wget.
Store your raw recorded assets under /mnt/vda1/inputdata, using the
format you specified for the $inputglob and $parse_re configuration variables.
This doesn't explain how to add a schedule to the database. My next big
project (which probably won't happen until after the next
FOSDEM) is to add a more advanced
administrator's interface, so that you can just log in and add things
from there. For now though, you have to run kubectl port-forward
svc/sreview-database 5432, and then use psql to localhost to issue
SQL commands. Yes, that sucks.
Having said that, if you're interested in trying this out, give it a go.
(many thanks to the people on the #debian-devel IRC channel for helping
me understand how Kubernetes is supposed to work -- wouldn't have worked
nearly as nicely without them)
I’ve been studying the Rust Programming Language
over the holidays; here are some of my first impressions. My main interest in
Rust is compiling performance-critical code to WebAssembly for usage in the
browser.
Rust is an ambitious language: it tries to eliminate broad categories of
errors by detecting them during compilation. This requires more help
from the programmer: reasoning about what a program does exactly is famously
impossible (the halting problem),
but that doesn’t mean we can’t think about some aspects, provided that we give
the compiler the right inputs. Memory management is the big thing in Rust where
this applies. By indicating where a value is owned and where it is only
temporarily borrowed, the compiler is able to infer the life-cycle of values.
Similar things apply for type safety, handling errors, multi-threading and
preventing null references.
All very cool of course, but nothing in life is for free: it requires a much
higher level of precise input with regards to what exactly you’re trying to
achieve. So programming in Rust is less carefree than in other languages, but
the end result is guaranteed correctness. I’d say that’s worth it.
This very strict mode of compilation also means that the compiler is very
picky about what it accepts. You can expect many error messages and much
fighting (initially) to even get your program to compile. The error messages
are very good though, so usually (but not always) they give a pretty good
indication of what to fix. And once it compiles you’re rather certain that the
result is good.
Another consequence is that Rust is by no means a small language. Compared to
the rather succinct Go, there’s an enormous amount of
concepts and syntax. All needed, but it certainly doesn’t make things easier
to learn.
Other random thoughts:
It’s a mistake to see a reference as a pointer. They’re not the same thing,
but it’s very easy to confuse them while learning Rust. Thinking about
moving ownership takes some adaptation.
Lifetimes are hard and confusing at first. This is one of the points where I
feel you spend more attention on getting the language right than on the actual
functionality of your code.
Rust has the same composable IO abstractions
(Read/Write) as in the Go io
package. These are excellent and a joy to work with.
My main worry is the complexity of the language: each new corner-case of
correctness will lead to the addition of more language complexity. Have we
reached the end or will things keep getting harder? One example of where the
model already feels like it’s reaching the limits is RefCell.
In all, I’d say Rust is a good addition to the toolbox, for places where it
makes sense. But I don’t foresee it replacing Go yet as my go-to language on
the backend. It all comes down to the situation, finding the right balance
between the need for performance/correctness and productivity: the right tool
for the job. To be continued.
At the beginning of every year, I like to publish a retrospective to look back and take stock of how far Acquia has come over the past 12 months. I take the time to write these retrospectives because I want to keep a record of the changes we've gone through as a company and how my personal thinking is evolving from year to year.
If you'd like to read my previous retrospectives, they can be found here: 2018, 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009. This year marks the publishing of my eleventh retrospective. When read together, these posts provide a comprehensive overview of Acquia's growth and trajectory.
Our product strategy remained steady in 2019. We continued to invest heavily in (1) our Web Content Management solutions, while (2) accelerating our move into the broader Digital Experience Platform market. Let's talk about both.
Acquia's continued focus on Web Content Management
We continued to invest heavily in Acquia Cloud in 2019. As a result, Acquia Cloud remains the most secure, scalable and compliant cloud for Drupal. An example and highlight was the successful delivery of Special Counsel Robert Mueller's long-awaited report. According to Federal Computer Week, by 5pm on the day of the report's release, there had already been 587 million site visits, with 247 million happening within the first hour — a 7,000% increase in traffic. I'm proud of Acquia's ability to deliver at a very critical moment.
Time-to-value and costs are big drivers for our customers; people don't want to spend a lot of time installing, building or upgrading their websites. Throughout 2019, this trend has been the primary driver for our investments in Acquia Cloud and Drupal.
In September, we announced that Acquia acquired Cohesion, a Software-as-a-Service visual Drupal website builder. Cohesion empowers marketers, content authors and designers to build Drupal websites faster and cheaper than ever before.
We launched a multitude of new features for Acquia Cloud which enabled our customers to make their sites faster and more secure. To make our customer's sites faster, we added a free CDN for all Cloud customers. All our customers also got a New Relic Pro subscription for application performance management (APM). We released Acquia Cloud API v2 with double the number of endpoints to maximize customer productivity, added single-sign on capabilities, obtained FIPS compliance, and much more.
We rolled out many "under the hood" improvements; for example, thanks to various infrastructure improvements our customers' sites saw performance improvements anywhere from 30% to 60% at no cost to them.
Making Acquia Cloud easier to buy and use by enhancing our self-service capabilities has been a major focus throughout all of 2019. The fruits of these efforts will start to become more publicly visible in 2020. I'm excited to share more with you in future blog posts.
At the end of 2019, Gartner announced it is ending its Magic Quadrant for Web Content Management. We're proud of our six-year leadership streak, right up to this Magic Quadrant's end. Instead, Gartner is going to focus on the broader scope of Digital Experience Platforms, leaving stand-alone Web Content Management platforms behind.
Gartner's decision to drop the Web Content Management Magic Quadrant is consistent with the second part of our product strategy; a transition from Web Content Management to Digital Experience Management.
Acquia's expansion into Digital Experience Management
We started our expansion from Web Content Management to the Digital Experience Platform market five years ago, in 2014. We believed, and still believe, that just having a website is no longer sufficient: customers expect to interact with brands through their websites, email, chat and more. The real challenge for most organizations is to drive personalized customer experiences across all these different channels and to make those customer experiences highly relevant.
For five years now, we've been patient investors and builders, delivering products like Acquia Lift, our web personalization tool. In June, we released a completely new version of Acquia Lift. We redesigned the user interface and workflows from scratch, added various new capabilities to make it easier for marketers to run website personalization campaigns, added multi-lingual support and much more. Hands down, the new Acquia Lift offers the best web personalization for Drupal.
In addition to organic growth, we also made two strategic acquisitions to accelerate our investment in becoming a full-blown Digital Experience Platform:
In May, Acquia acquired Mautic, an open marketing automation platform. Mautic helps open up more channels for Acquia: email, push notifications, and more. Like Drupal, Mautic is Open Source, which helps us deliver the only Open Digital Experience Platform as an alternative to the expensive, closed, and stagnant marketing clouds.
In December, we announced that Acquia acquired AgilOne, a leading Customer Data Platform (CDP). To make customer experiences more relevant, organizations need to better understand their customers: what they are interested in, what they purchased, when they last interacted with the support organization, how they prefer to consume information, etc. Without a doubt, organizations want to better understand their customers and use data-driven decisions to accelerate growth.
We have a clear vision for how to redefine a Digital Experience Platform such that it is free of any silos.
In 2020, expect us to integrate the data and experience layers of Lift, Mautic and AgilOne, but still offer each capability on its own aligned with our best-of-breed approach. We believe that this will benefit not only our customers, but also our agency partners.
Demand for our Open Digital Experience Platform continued to grow among the world's most well-known brands. New customers include Liverpool Football Club, NEC Corporation, TDK Corporation, L'Oreal Group, Jewelers Mutual Insurance, Chevron Phillips Chemical, Lonely Planet, and GOL Airlines among hundreds of others.
We ended the year with more than 1,050 Acquians working around the globe with offices in 14 locations. The three acquisitions we made during the year added an additional 150 new Acquians to the team. We celebrated the move to our new and bigger India office in Pune, and ended the year with 80 employees in India. We celebrated over 200 promotions or role changes showing great development and progression within our team.
We continued to introduce Acquia to more people in 2019. Our takeover of the Kendall Square subway station near MIT in Cambridge, Massachusetts, in April, for instance, helped introduce more than 272,000 daily commuters to our company. In addition to posters on every wall of the station, the campaign — in which photographs of fellow Acquians were prominently featured — included Acquia branding on entry turnstiles, 75 digital live boards, and geo-targeted mobile ads.
Last but not least, we continued our tradition of "Giving back more", a core part of our DNA or values. We sponsored 250 kids in the Wonderfund Gift Drive (an increase of 50 from 2018), raised money to help 1,000 kids in India to get back to school after the floods in Kolhapur, raised more than $10,000 for Girls Who Code, $10,000 for Cancer Research UK, and more.
Some personal reflections
With such a strong focus on product and engineering, 2019 was one of the busiest years for me personally. We grew our R&D organization by about 100 employees in 2019. This meant I spent a lot of time restructuring, improving and scaling the R&D organization to make sure we could handle the increased capacity, and to help make sure all our different initiatives remain on track.
It feels a bit surreal that we crossed 1,000 employees in 2019.
There were also some low-lights in 2019. On Christmas, Acquia's SVP of Engineering, Mike Aeschliman, unexpectedly passed away. Mike was one of the three people I worked most closely with and his passing is a big loss for Acquia. I miss Mike greatly.
If I have one regret for 2019, it is that I was almost entirely internally focused. I missed hitting the road — either to visit employees, customers or Drupal and Mautic community members around the world. I hope to find a better balance in 2020.
2019 was a busy year, but also a very rewarding year. I remain very excited about Acquia's long-term opportunity, and believe we've steered the company to be positioned at the right place at the right time. All of this is not to say 2020 will be easy. Quite the contrary, as we have a lot of work ahead of us in 2020, including the release of Drupal 9. 2020 should be another exciting year for us!
If your non-geek partner and/or kids are joining you to FOSDEM, they may be interested in spending some time exploring Brussels while you attend the conference. Like previous years, FOSDEM is organising sightseeing tours.
We have performed some interviews with main track speakers from various tracks. To get up to speed with the topics discussed in the main track talks, you can start reading the following interviews: Amanda Brock: United Nations Technology and Innovation Labs. Open Source isn't just eating the world, it's changing it Daniel Stenberg: HTTP/3 for everyone. The next generation HTTP is coming Free Ekanayaka: dqlite: High-availability SQLite. An embeddable, distributed and fault tolerant SQL engine Geir Høydalsvik: MySQL Goes to 8! James Shubin: Over Twenty Years Of Automation Joe Conway: SECCOMP your PostgreSQL Ludovic Courtès: Guix: Unifying provisioning, deployment, and…
On 7 January, 2020, the Drupal module JSON:API 1.x was officially marked unsupported. This date was chosen because it is exactly 1 year after the release of JSON:API 2.0, the version of JSON:API that was eventually committed to core. Since then, the JSON:API maintainers have been urging users to upgrade to the 2.x branch and then to switch to the Drupal core version.
We understand that there are still users remaining on the 1.x branch. We will maintain security coverage of the 8.x-1.x branch for 90 days. That is, on 6 April, 2020, all support for JSON:API outside of Drupal core will end. Please upgrade your sites accordingly.
I decided to use the holiday break to do a link audit for my personal blog. I found hundreds of links that broke and hundreds of links that now redirect. This wasn't a surprise, as I haven't spent much time maintaining links in the 13 years I've been blogging.
Some of the broken links were internal, but the vast majority were external.
"Internal links" are links that go from one page on https://dri.es to a different page on https://dri.es. Fixing broken links feels good so I went ahead and fixed all internal links.
It's a different story for external links. "External links" are links that point to domains not under my control.
Some sites that I link to have since been hijacked by porn sites. The URL used for Hillary Clinton's 2008 campaign website now points to a porn site, for example. This is unfortunate so I simply removed those links.
Some of the external links now have URL redirects. I found what I call "obvious redirects" and "less obvious redirects".
An example of an "obvious redirect" was a link to Apple's pressroom. In my 2015 Acquia retrospective I linked to an Apple press release, https://www.apple.com/pr/library/2015/12/03Apple-Releases-Swift-as-Open-Source.html, to highlight that large organizations like Apple were starting to embrace Open Source. Today, that link automatically redirects to https://www.apple.com/newsroom/2015/12/03Apple-Releases-Swift-as-Open-Source. A slightly different URL, but ultimately the same content. One day, that redirect might cease to exist, so it felt like a good idea to update my blog post to use the new link instead. I went ahead and updated hundreds of "obvious redirects".
The more interesting case is what I call "less obvious redirects". For example, in 2012 I blogged about how the White House contributed to Drupal. It was the first time in history that the White House contributed to Open Source, and Drupal in particular. It's something that many of us in the Drupal community are very proud of. In my blog post, I linked to http://www.whitehouse.gov/blog/2012/08/23/open-sourcing-we-the-people, a page on whitehouse.gov explaining their decision to contribute. That link now redirects to http://obamawhitehouse.archives.gov/blog/2012/08/23/open-sourcing-we-the-people, a permanent archive of the Obama administration's White House website. For me, it is less obvious what to do about this link: updating the link future proofs my site, but at the cost of losing some of its significance and historic value. For now, I left the original whitehouse.gov link in place.
How to best care for old links?
I'm not entirely sure why I picked the Wikipedia link over archive.org when I updated the Sun Fire X4200 blog post, or why I left the original whitehouse.gov link in place. I also left many broken links in place because I'm undecided about what to do with them.
It is important that we care for old links. Before I continue my link clean up, I'd like to come up with a more structured, and possibly automatable, approach for link maintenance. I'm curious to learn how others care for old links, and if you know of any best practices, guidelines, or even automations.
With FOSDEM just around the corner, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers make FOSDEM happen and make it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially for heralding during the conference, during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch. Food will be provided. Would you like to be part of the team that makes FOSDEM tick? Sign up here! You…
I just released AO26, which comes with a bunch of new features, improvements and bugfixes.
New: Autoptimize can be configured at network level or at individual site-level when on multisite.
Extra: new option to specify what resources need to be preloaded.
Extra: add display=swap to Autoptimized (CSS-based) Google Fonts.
Images: support for lazyloading of background-images when set in the inline style attribute of a div.
Images: updated to lazysizes 5.2.
CSS/ JS: no longer add type attributes to Autoptimized resources.
Improvement: cache clearing now also integrates with Kinsta, WP-Optimize & Nginx helper.
Added “Critical CSS” tab to highlight the criticalcss.com integration, which will be fully included in Autoptimize 2.7.
Batch of misc. smaller improvements & fixes, more info in the GitHub commit log.
The release has been tested extensively (automated unit testing, manual testing on several of my own sites and testing by users of the beta-version on Github), but as with all software it is very unlikely to be bug-free. Feel free to post any issue with the update here or to create a separate topic in this forum.
To make live backups of a guest from an ESXi host’s SSH UNIX shell, you can exploit the fact that when a snapshot of a VMDK file is made, the original VMDK file becomes read-only, releasing the locks that would otherwise prevent vmkfstools from creating a clone.
This means that once you have made a snapshot, you can use vmkfstools on the non-snapshot VMDK files from which the snapshot was made.
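A minimal sketch of such a live backup might look like this (the VM name, snapshot arguments and datastore paths are all illustrative; run on the ESXi host's shell):

```shell
# Look up the VM's id, snapshot it (making the base disk read-only),
# clone the base disk with vmkfstools, then drop the snapshot again.
VMID=$(vim-cmd vmsvc/getallvms | awk '/myguest/ {print $1}')
vim-cmd vmsvc/snapshot.create "$VMID" backup "live backup" 0 0
vmkfstools -i /vmfs/volumes/datastore1/myguest/myguest.vmdk \
           /vmfs/volumes/backup/myguest-backup.vmdk -d thin
vim-cmd vmsvc/snapshot.removeall "$VMID"
```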
For a Jenkins environment I had to automate the creation of a lot of identical build agents. Identical up until of course the network configuration. Sure I could have used Docker or what not. But the organization standardized on VMWare ESXi. So I had to work with the tools I got.
A neat trick that you can do with VMWare is to write so called guestinfo variables in the VMX file of your guests.
You can get SSH access to the UNIX-like environment of a VMWare ESXi host. In that environment you can do typical UNIX scripting.
First we prepare a template that has VMWare guest tools installed. We punch the zeros of the vmdk file and all that stuff. So that it’s nicely packaged and quick to make clones from. On the guest you do:
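The zero-punching inside the guest might look like this. It is a hedged sketch: the demo caps dd with count= so it is safe to run anywhere; on a real guest you would omit count= so dd fills all free space, then run vmkfstools -K against the template's disk on the host afterwards.

```shell
# Inside the guest: fill free space with zeros, then delete the file.
# count= is capped here so the demo is harmless; drop it on a real guest.
dd if=/dev/zero of=/tmp/zerofill bs=1M count=8 2>/dev/null
sync
rm -f /tmp/zerofill
# Afterwards, on the ESXi host (shown for reference only):
#   vmkfstools -K /vmfs/volumes/datastore1/TEMPLATE/DISK.vmdk
```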
Now you can for example do this (on the ESXi host’s UNIX environment):
mkdir -p "$DST/$1"
# Don't use cp to make copies of vmdk files. It'll just
# take ages longer as it will copy the 0x0 bytes too.
# vmkfstools is what you should use instead
vmkfstools -i "$SRC/DISK.vmdk" "$DST/$1/DISK.vmdk" -d thin
# Replace some values in the destination VMX file
sed "s/TEMPLATE/$1/g" "$SRC/TEMPLATE.vmx" > "$DST/$1/$1.vmx"
And now of course you add the guestinfo variables:
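For example (HOSTN and EXTRA are just this post's key names, guestinfo keys are free-form; DST defaults to a temp directory here so the sketch runs anywhere):

```shell
# Append the guestinfo variables to the freshly cloned VMX file.
NAME="${1:-agent01}"
DST="${DST:-/tmp/esxi-demo}"   # on a real host: a datastore path
mkdir -p "$DST/$NAME"
cat >> "$DST/$NAME/$NAME.vmx" <<EOF
guestinfo.HOSTN = "$NAME"
guestinfo.EXTRA = "provision"
EOF
```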
Now when the guest boots, you can make a script to read those guestinfo things out and let it for example configure itself (on the guest):
HOSTN=`vmtoolsd --cmd "info-get guestinfo.HOSTN"`
EXTRA=`vmtoolsd --cmd "info-get guestinfo.EXTRA"`
if test "$EXTRA" = "provision"; then
    echo "$HOSTN" > /etc/hostname
fi
Some other useful VMWare ESXi commands:
# Register the VMX as a new virtual machine
VIMID=`vim-cmd /solo/register $DST/$1/$1.vmx`
# Turn it on
vim-cmd /vmsvc/power.on $VIMID &
# Answer 'Copied' on the question whether it got
# copied or moved
VMMSG=`vim-cmd /vmsvc/message $VIMID | grep "Virtual machine message" | cut -d : -f -1 | cut -d " " -f 4`
if [ -n "$VMMSG" ]; then
    vim-cmd /vmsvc/message $VIMID $VMMSG 2
fi
That should be all you need. I’m sure we can adapt the $1.vmx file such that the question doesn’t get asked. But my solution with answering the question also worked for me.
Next thing we know you’re putting a loop around this and you just ‘programmed’ creating a few hundred Jenkins build agents on some powerful piece of ESXi equipment. Imagine that. Bread on the table and the entire flock of programmers of your customer happy.
But! Please don’t hire me to do your DevOps. I’ve been there before several times. It sucks. You get to herd brogrammers. They suck the blood out of you with their massive ignorance of almost all really simple standard things (like versioning, building, packaging, branching, etc. Anything that must not be invented here). Instead of people who take the time to be professional about their job and read five lines of documentation, they’ll waste your time with their nonsense self-invented crap. Which you end up having to automate. Which they always make utterly impossible and (of course) non-standard. While the standard techniques are ten million times better and easier.
Opening up the thing it was, then. Because in a video the guy explained that the water flow sensor is a magnetic switch, I decided to take the sensor itself out of the component. Then I tried with an external magnet to get the detached switch to close. The error was gone and I could make the motor run without any water flowing. That’s probably not a great idea if you don’t want to damage anything. So, of course, I didn’t do that for too long.
However, when I reinserted the sensor into the component and closed the valve myself, the ER02 error still occurred. I figured the magnet that gets pushed to the ceiling of the component had somehow weakened.
Then I noticed a little notch on it. I marked it in a red circle:
I decided to take a flat file and file it off. When I now closed the valve myself, I could, just like with the magnet, make the motor run without any water flowing.
I reassembled it all. Reattached the device to the bath tube. It all works. Warm water this evening! I hope there will be stars outside.
Our keyserver is now accepting submissions for the FOSDEM 2020 keysigning event. The annual PGP keysigning event at FOSDEM is one of the largest of its kind. With more than one hundred participants every year, it is an excellent opportunity to strengthen the web of trust. For instructions on how to participate in this event, see the keysigning page. Key submissions close on Wednesday 22 January, to give us some time to generate and distribute the list of participants. Remember to bring a printed copy of this list to FOSDEM.
As part of our Oh Dear! monitoring service, we run workers. A lot of workers. Every site uptime check is a new worker, every broken links job is a new worker, every certificate check is a- you get the idea.
I published the following diary on isc.sans.edu: “Code & Data Reuse in the Malware Ecosystem“:
In the past, I already had the opportunity to give some “security awareness” sessions to developers. One topic that was always debated is the reuse of existing code. Indeed, for a developer, it’s tempting to not reinvent the wheel when somebody already wrote a piece of code that achieves the expected results. From a gain of time perspective, it’s a win for the developers who can focus on other code. Of course, this can have side effects and introduce bugs, backdoors, etc… but it’s not today’s topic. Malware developers are also developers and have the same behavior. Code reuse has been already discussed several times… [Read more]
CDPs pull customer data from multiple sources, clean it up and combine it to create a single customer profile. That unified profile is then made available to marketing and business systems to improve the customer experience.
For the past 12 months, I've been watching the CDP space closely and have talked to a dozen CDP vendors. I believe that every organization will need a CDP (although most organizations don't realize it yet).
According to independent research firm The CDP Institute, CDPs are a part of a rapidly growing software category that is expected to exceed $1 billion in revenue in 2019. While the CDP market is relatively new and small, a plethora of CDPs exist in the market today.
One of the reasons we really liked AgilOne is their machine learning capabilities — they will give our customers a competitive advantage. AgilOne supports machine learning models that intelligently segment customers and predict customer behaviors (e.g. when a customer is likely to purchase something). This allows for the creation and optimization of next-best action models to optimize offers and messages to customers on a 1:1 basis.
For example, lululemon, one of the most popular brands in workout apparel, collects data across a variety of online and offline customer experiences, including in-store events and website interactions, commerce transactions, email marketing, and more. AgilOne helped them integrate all those systems and create unified customer data profiles. This unlocked a lot of data that was previously siloed. Once lululemon better understood its customers' behaviors, they leveraged AgilOne's machine learning capabilities to increase attendance to local events by 25%, grow revenue from digital marketing campaigns by 10-15%, and increase site visits by 50%.
Another example is TUMI, a manufacturer of high-end suitcases. TUMI turned to AgilOne and AI to personalize outbound marketing (like emails, push notifications and one-to-one chat), smarten its digital advertising strategy, and improve the customer experience and service. The results? TUMI sent 40 million fewer emails in 2017 and made more money from them. Before AgilOne, TUMI's e-commerce revenue decreased. After they implemented AgilOne, it increased sixfold.
Fundamentally improving the customer experience
Having a great customer experience is more important than ever before — it's what sets competitors apart from one another. Taxis and Ubers both get people from point A to B, but Uber's customer experience is usually superior.
Building a customer experience online used to be pretty straightforward; all you needed was a simple website. Today, it's a lot more involved.
I've long maintained that the two fundamental building blocks to delivering great digital experiences are (1) content and (2) user data. This is consistent with the diagram I've been using in presentations and on my blog for many years where "user profile" and "content repository" represent two systems of record (though updated for the AgilOne acquisition).
To drive results, wrangling data is not optional
To dramatically improve customer experiences, organizations need to understand their customers: what they are interested in, what they purchased, when they last interacted with the support organization, how they prefer to consume information, etc.
But as an organization's technology stack grows, user data becomes siloed within different platforms:
When an organization doesn't have a 360º view of its customers, it can't deliver a great experience to them. We have all interacted with a help desk agent who didn't know what we recently purchased, asked us questions we had answered multiple times before, or wasn't aware that we had already received troubleshooting help through social media.
Hence, the need for integrating all your backend systems and creating a unified customer profile. AgilOne addresses this challenge, and has helped many of the world's largest brands understand and engage better with their customers.
Acquia's strategy and vision
It's easy to see how AgilOne is an important part of Acquia's vision to deliver the industry's only open digital experience platform. Together with Drupal, Lift and Mautic, AgilOne will allow us to redefine the customer experience stack. Everything is based on Open Source and open APIs, and designed from the ground up to make it easier for marketers to create relevant, personal campaigns across a variety of channels.
Welcome to the team, AgilOne! You are a big part of Acquia's future.
The country in crisis: all the country's editors-in-chief are writing opinion pieces!
Hundreds of women heading to court. Christine Mussche is already dreaming herself rich: three hundred invoices of a few thousand euros each! That's a villa. It will probably end up as a usufruct, because her health resort for disgruntled women can later be converted into a sauna and massage parlour, or a veritable pilgrimage site for Belgian feminism.
With what the lawyer will probably earn from this, we could also have covered the budget deficit of a mid-sized village, or built a school or a typical municipal swimming pool.
Meanwhile: we have also been without a government for months. Nobody cares.
Just sent in my vote. After carefully weighing what I consider to be
important, and reading all the options, I ended up with 84312756.
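Debian tallies General Resolutions with a Condorcet-style method, where a ballot like this is a full ranking of the options. As a rough illustration only (the digit encoding below, "the digit at position i is the rank given to option i+1, lower is better", is my assumption, not a claim about the actual ballot format), such a string can be expanded into the pairwise preferences that a Condorcet tally works from:

```python
# Sketch: expand a ranking string such as "84312756" into pairwise
# preferences, the building block of Condorcet-style tallies.
# Assumption (mine, not from the post): digit at position i is the
# rank assigned to option i+1; a lower rank means more preferred.
from itertools import combinations

def pairwise_preferences(ballot: str) -> dict:
    """Return {(a, b): True} when option a is ranked above option b."""
    ranks = {opt + 1: int(digit) for opt, digit in enumerate(ballot)}
    prefs = {}
    for a, b in combinations(ranks, 2):
        if ranks[a] < ranks[b]:
            prefs[(a, b)] = True
        elif ranks[b] < ranks[a]:
            prefs[(b, a)] = True
    return prefs

prefs = pairwise_preferences("84312756")
```

Under that assumed encoding, the option ranked 1 beats every other option on this single ballot, and the option ranked last (choice 1 here) loses every pairwise comparison.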
There are two options that rule out any compromise position; choice 1,
"Focus on systemd", essentially says that anything not systemd is
unimportant and we should just drop it. At the same time, choice 6,
"support for multiple init systems is required", essentially says that
you have to keep supporting other systems no matter what the rest of the
world is doing lalala I'm not listening mom he's stealing my candy
Debian has always been a very diverse community; as a result,
historically, we've provided a wide range of valid choices to our users.
The result of that is that some of our derivatives have chosen Debian
to base their very non-standard distribution on. It is therefore, to me,
no surprise that Devuan, the very strongly no-systemd distribution, was
based on Debian and not Fedora or openSUSE.
Personally I consider this variety in our users to be a good thing, and
something to treasure; so any option that essentially throws that away,
like choice 1, is not something I could live with. At the same time, the
world has evolved, and most of our users do use systemd these days,
which does provide a number of features over and above the older
sysvinit system. Ignoring that fact, like option 6 does, is unreasonable
in today's world.
So neither of those two options are acceptable to me.
That just leaves the other available options. All of those accept the
fact that systemd has become the primary option, but differ in the
amount of priority given to alternate options. The one outlier is option
5; it tries to give general guidance on portability issues, rather than
commenting on the systemd situation specifically. While I can understand
that desire, I don't agree with it, so it definitely doesn't get first
position for me. At the same time, I don't think it's a terrible option,
in that it still provides an opinion that I agree with. All in all, it
barely made the cutoff above further discussion -- but really only
It’s a classic issue for BotConf attendees: the last day is always a bit tougher due to the social event organized every Thursday night. This year, we are in a French region where good wines are produced, and the event took place at the “Cité du Vin”. The night was short but I was present at the first talk! Ready as usual!
The first talk was “End-to-end Botnet Monitoring with Automated Config Extraction and Emulated Network Participation” by Kevin O’Reilly and Keith Jarvis. Sometimes, you hesitate to attend the first conference of the day due to a lack of motivation and you have to force yourself.
Today, it was really worth waking up on time to attend this wonderful talk. It started with a comparison of two distinct approaches to analyzing malware samples: emulator vs. sandbox. For a sandbox, you provide a sample and the results can be used as input for an emulator: C2s, IPs, encryption keys, etc… (and vice versa). The next part of the talk was dedicated to CAPE (“Config And Payload Extraction”), a fork of Cuckoo with many decoding features that help to unpack and analyze samples via the browser. It has an API monitor, a debugger, a dumper, an import reconstructor, etc. Many demos were performed. I’ll definitely install it to replace my old Cuckoo instance! The project is here.
The next talk was presented by Suguru Ishimaru, Manabu Niseki and Hiroaki Ogawa: “Roaming Mantis: A melting pot of Android bots“. This campaign started in Japan in 2018 and compromised residential routers. A classic attack: DNS settings were changed to redirect victims to rogue websites that… delivered malware samples! The talk covered the malware samples that were used in the campaign:
Each of them was analyzed in the same way: how it was delivered, how it compromised devices, and how it connected to the C2. The relation between them? They used the same technique to steal money. The money laundering process was also reviewed.
After a welcome coffee break, “The Cereals Botnet” was presented by Robert Neumann and Gergely Eberhardt. This time, no Android, no Windows but… NASes! (storage devices). This makes it a pretty unique botnet of approximately 10K bots. The target was mainly D-Link NAS devices (consumer models).
They started by explaining how the NAS was compromised using a vulnerability in a CGI script. To better understand how it worked, they simply connected a NAS to the wild Internet (note: don’t try this at home!) and were able to observe the behavior of the attacker (unusual HTTP requests, outgoing traffic and suspicious processes). The exploit came via the SMS notification system in system_mgr.cgi. Once compromised, a backdoor was installed, along with many software components (a VPN client), and persistence was implemented too. I like the way the bot communicated with the C2: via an RSS feed! Easy! Finally, they explained that the vulnerability was patched by the vendor (years later!) but many devices remain unpatched…
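Fetching orders from an RSS feed requires nothing more than an HTTP client and an XML parser, which is probably why it appealed to the botnet author. As a purely illustrative sketch (the feed layout and tag usage below are invented, not taken from the Cereals analysis), commands could simply ride in item descriptions:

```python
# Illustrative sketch of RSS-feed-based C2 polling. The feed structure
# here is hypothetical; the real Cereals feed format was not disclosed.
import xml.etree.ElementTree as ET

def extract_commands(rss_xml):
    """Pull one 'command' per <item> out of an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    return [item.findtext("description", default="").strip()
            for item in root.iter("item")]

sample_feed = """<rss version="2.0"><channel>
  <title>updates</title>
  <item><title>u1</title><description>update_payload</description></item>
  <item><title>u2</title><description>report_status</description></item>
</channel></rss>"""
```

A bot would periodically download such a feed over plain HTTP and execute whatever new "descriptions" appeared, which also makes the channel easy to spot and sinkhole once known.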
The next talk was the result of a student’s research who tried to develop a tool to improve the generation of YARA rules. “YARA-Signator: Automated Generation of Code-based YARA Rules” presented by Felix Bilstein and Daniel Plohmann.
No need to present YARA. A fact is that most public rules (73%) are based on text strings, but code-based rules are nice: they are robust, harder for attackers to circumvent and well-suited to automation, yet painful to write manually! A tool can be used for this purpose: YARA-Signator. The approach was to create signatures based on families from Malpedia. First, they explained how the tool works, then the results. Interesting if you’re a big fan of YARA. The code is here.
After the lunch, the scheduled talk was “Using a Cryptographic Weakness for Malware Traffic Clustering and IDS Rule Generation” by Matthijs Bomhoff and Saskia Hoogma. This is the kind of talk that I fear after a nice lunch… The idea of the talk was to show that bad encryption implementations by attackers can be used to track them.
There was no coffee break foreseen this afternoon, so we continued with three talks in a row. “Zen: A Complex Campaign of Harmful Android Apps” by Lukasz Siewierski. A common question related to malware samples is: are all apps coming from the same author (or group)? Lukasz explained the different techniques used by “Zen” to perform malicious activities:
Repacking with a custom ad SDK
Rooting the device
Fake Google account creation
It was a review of another Android malware…
Martijn Grooten continued with “Malspam is Different Spam“. Martijn explained that, even if spam is not a new topic, it remains a common infection vector. Why are some emails more likely to pass through our filters? A few words about spam filtering; we rely on:
Block lists (IP, domains)
Sender verification (SPF, DKIM, DMARC, etc.)
Content filtering (viagra, casino, …)
Link & attachments scanning
Some emails go to the same traps and the results are sent to the spam filter (which can also update itself). Spam scales badly. Then, Martijn showed some better examples that stand a good chance of not being blocked.
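The layers listed above are typically combined into an additive score. This is a toy illustration of the principle only (the weights, threshold, word list and IP are all invented), not how any production filter is tuned:

```python
# Toy layered spam scoring: block list + sender verification result +
# content filtering. All weights and thresholds below are invented.
BLOCKED_IPS = {"203.0.113.66"}        # block-list layer (example IP)
SPAM_WORDS = {"viagra", "casino"}     # content-filtering layer

def spam_score(sender_ip, spf_pass, body):
    score = 0
    if sender_ip in BLOCKED_IPS:
        score += 5                    # listed sender: strong signal
    if not spf_pass:
        score += 2                    # failed sender verification
    score += sum(2 for w in SPAM_WORDS if w in body.lower())
    return score

def is_spam(sender_ip, spf_pass, body, threshold=4):
    return spam_score(sender_ip, spf_pass, body) >= threshold
```

Malspam often beats exactly this kind of scoring by being low-volume and well-formed: sent from clean IPs, with valid SPF, and with a benign-looking body carrying a malicious attachment.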
Finally, the conference ended with “Demystifying Banking Trojans from Latin America” by Juraj Hornák, Jakub Soucek and Martin Jirkal. They presented the LATAM banking malware landscape (targeting Spanish- and Portuguese-speaking people). They explained the techniques used by the different malware families with the final goal of seeing the relations between them:
I liked the “easter egg and human error” part where they explained some mistakes made by the attackers, like… keeping a command in the code to spawn a calc.exe!
This edition could be called the “Android Malware Edition” given the number of presentations related to Android! (Like we had the “DGA Edition” a few years ago.) This wrap-up closes these three days of BotConf 2019! Here are some stats provided by the organization:
CFP: 73 submissions
1730 minutes of shared knowledge.
As usual, they also disclosed the location of the 2020 edition: we will be back to Nantes!
The second day is over. Here is my daily wrap-up. Today was a national strike day in France and a lot of problems were expected with public transports. However, the organization provided buses to help attendees travel between the city center and the venue. Great service as always!
After some coffee, the day started with another TLP:AMBER talk: “Preinstalled Gems on Cheap Mobile Phones” by Laura Guevara. So, to respect the TLP, nothing will be disclosed. Just keep in mind that if you use cheap Android smartphones, you get what you paid for…
Then, Brian Carter presented “Honor Among Thieves: How Stealer Malware Fuels an Underground Economy of Compromised Accounts”. It started with facts: attackers make mistakes and leak information online like panels, configuration files or logs. You can find this information just via a browser, no need to hack them back.
Brian’s research goals were:
Understand the value of stolen data
Develop tools to collect, parse data
Collect info to help identify malware families based on their C2
Develop detection rules to catch them
But, very importantly, researchers cannot commit crimes, contribute to malicious activities or harm victims of malware. The golden rule here is: be ethical! The research also had limitations: it’s impossible to collect everything and the data may not be representative! What was the collection process? Terabytes of data, 1M stealer log archives, many panels, builders, chat logs, forum posts, and market listings. Again, “stolen data” was collected from ethical sources like VT, shares, open directories. Brian also commented on the economics of stealers: the market is relatively small today, only a few criminals are successful in their business, and stealer malware is low-volume. Here is an example of prices:
Then, once you have collected such a huge amount of data, what do you do with it? Warn users if they are compromised, research adversaries, develop countermeasures, take down? This was an interesting talk.
The next speakers were Alexander Eremin and Alexey Shulmin, who presented “Bot with Rootkit: Update and Mine!”. The research started with a “nice” sample that installed a Microsoft patch. Cool, isn’t it? Of course, the dropper was obfuscated and, once decoded, it used the classic VirtualAlloc() and GetModuleHandleA() to decrypt the data.
When they were able to extract strings, they continued to investigate and discovered interesting behaviors. First, the developer was nice enough to leave debugging comments via a write_to_log() function. The malware checked the OS version, the keyboard layout and created a MUTEX. So far, nothing fancy, but the malware also installed a Microsoft patch, if not already present: KB3033929. Why? The patch was required for check_crypt32_version() and SHA-2 support! Then, they explained how C2 communications are performed, based on HTTP and encrypted using RC4 and Base64 encoding. Finally, a rootkit is installed to deploy a cryptominer. The rootkit is similar to Necurs and uses IOCTL-like registry keys. Examples of commands supported:
0x220004 – update rootkit
0x220008 – update payload
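The RC4-plus-Base64 scheme described above is trivial to reimplement, which is exactly what makes such C2 traffic easy to decode once the key is recovered from the sample. A minimal sketch (the key and message below are placeholders, not real IOCs from this bot):

```python
# Minimal RC4 + Base64 decoder for C2 traffic of the kind described
# above. Key and ciphertext used here are placeholders, not real IOCs.
import base64

def rc4(key, data):
    S = list(range(256))
    j = 0
    for i in range(256):                      # key scheduling (KSA)
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                         # keystream generation (PRGA)
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def decode_beacon(b64_blob, key):
    """Undo the Base64 layer, then the RC4 layer (RC4 is symmetric)."""
    return rc4(key, base64.b64decode(b64_blob))
```

Because RC4 is its own inverse, the same routine both encrypts and decrypts, so an analyst only needs the key and this handful of lines to read the traffic.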
After a caffeine refill, it was the turn of Tom Ueltschi who presented “DESKTOP-Group – Tracking a Persistent Threat Group (using Email Headers)”. Tom is a regular speaker at Botconf and always provides good content. This talk was tagged as TLP:GREEN. Good idea: he will try to make a “white” version of his slides as soon as possible. Remember that TLP:GREEN means “Limited disclosure, restricted to the community.”
Before the lunch break, two short presentations were scheduled. “The Bagsu Banker Case” by Benoît Ancel. Sorry, TLP:AMBER again.
The following talk was “Tracking Samples on a Budget” by &lt;redacted&gt;. It covered a personal project, running for two years now, which explained how to acquire a collection of malware samples… but as a student, with no budget! What are the (free) sources available? Open-source repositories, honeypots, pivots, feeds, sandboxes, existing malware zoos like malshare or malc0de, etc. You can also run your own honeypot but it’s difficult to deploy a lot of them. The talk covered the components put in place to crawl the web and how to avoid some stupid limitations that you’ll face. Then comes the post-processing:
Upload new samples to VT
Enrichment (metadata, strings, …)
A good idea is also to search recursively for open directories that remain a goldmine. Here is the sample tracking lifecycle as implemented:
What about the results? Running for 2 years, 270K unique samples have been discovered, 25% of them not on VT, 600GB of data collected, and 78% of the samples have a 5+ score on VT. I had the opportunity to talk with the speaker later: it’s a great project… The code has been running fine for 2 years and just does the job! Amazing project!
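At that scale, "270K unique samples" implies deduplication by cryptographic hash at the heart of the pipeline. A sketch of the idea (the class and storage layout are my own illustration, not the speaker's code):

```python
# Sketch of hash-based deduplication for a malware-sample pipeline.
# The class and in-memory storage here are illustrative only.
import hashlib

class SampleStore:
    def __init__(self):
        self.seen = {}                        # sha256 hex digest -> bytes

    def add(self, blob):
        """Store a sample; return True only if it was not seen before."""
        digest = hashlib.sha256(blob).hexdigest()
        if digest in self.seen:
            return False
        self.seen[digest] = blob
        return True

store = SampleStore()
first = store.add(b"MZ\x90\x00fake-sample-bytes")
second = store.add(b"MZ\x90\x00fake-sample-bytes")  # duplicate, rejected
```

The same digests double as lookup keys against services like VT, which is how a crawler can cheaply measure what fraction of its haul is genuinely new.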
After the lunch, we had another restricted presentation (sorry, interesting content can’t be disclosed) – TLP:RED – by Thomas Dubier and Christophe Rieunier: “Botnet Tracking Story: from Spam Mail to Money Laundering”.
The next talk was “Finding Neutrino Botnet: from Web Scans to Botnet Architecture” by Kirill Shipulin and Alexey Goncharov. This research started with some interesting hits on a honeypot. They adapted the honeypot to respond positively and mimic a webshell. The scan was brute-forcing different webshells with a strange command: “die(md5(Ch3ck1ng))”. The malware they found had a classic behavior: check if already installed, exfiltrate system information and download/execute a payload (a cryptominer in this case). It implemented persistence via WMI, was fileless and also killed its competitors.
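Mimicking the webshell mostly means answering that probe correctly: the scanner sends die(md5(Ch3ck1ng)) and expects back the MD5 digest of the literal string, just as a real PHP webshell would print it. A sketch of such a responder (the parsing regex is mine, not from the actual honeypot):

```python
# Sketch of a honeypot responder for the "die(md5(...))" webshell probe
# described above. The parsing regex is my own, not the honeypot's code.
import hashlib
import re
from typing import Optional

PROBE = re.compile(r"die\(md5\((?P<token>[^)]*)\)\)")

def fake_webshell(command: str) -> Optional[str]:
    """Answer the probe exactly like a real PHP webshell would."""
    m = PROBE.search(command)
    if not m:
        return None                      # not the probe: stay silent
    return hashlib.md5(m.group("token").encode()).hexdigest()
```

Answering with the right digest convinces the scanner it found a working shell, so the honeypot then receives the follow-up payloads that made this research possible.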
The next malware to be analyzed was BackSwap: “BackSwap Malware Campaign Evolution” by Carlos Rubio Ricote and David Pastor Sanz. This malware was found by ESET in May 2018 via trojanized apps (OllyDbg, Filezilla, ZoomIt, …). It used PIC (Position Independent Code) and shellcodes hidden in BMP images (see the DKMC tool). They explained how the malware worked, how the configuration was encrypted and, once decoded, what the parameters were, like the statistics URL, the C2, the User-Agent to use, and the injection delimiter. They decrypted the web inject and explained the technique used: via the navigation bar or the developer console. Then, they reviewed some discovered campaigns targeting banks in Poland and Spain.
After the afternoon coffee break, Mathieu Tartare came on stage to talk about “Winnti Arsenal: Brand-new Supplies“. Winnti is a group that is often cited in the news and that compromised multiple organizations (telcos, software vendors, healthcare, gambling, etc…). They are specialized in supply-chain attacks. They appear to have been active since 2013 (first public report) and in 2017… there was the famous CCleaner case! In 2018, the gaming industry was compromised. The first part of the talk covered this. The technique used by the malware was CRT patching: a function is called to unpack/execute the malicious code before returning to the original code. The payload was packed using a custom packer based on RC4. The first stage is a downloader that gets its C2 config and then gathers information about the victim’s computer. Amongst this, the C: drive name and volume ID are exfiltrated. The second stage will decrypt a payload that was encrypted using… the volume ID! Finally, a cryptominer is installed. Mathieu also covered two backdoors: PortReuse, which works like good old port-knocking (it sniffs the network traffic and waits for a specific magic packet), and ShadowPad.
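Keying the second stage to the victim's volume ID is a classic environmental-keying trick: the payload only decrypts on the machine it was built for, which frustrates sandboxes and analysts working from a copied binary. An illustrative sketch of the idea (XOR and SHA-256 stand in here for the sample's real cipher and key derivation, which the talk summary doesn't detail):

```python
# Illustrative environmental keying: derive the decryption key from the
# victim's volume serial number. XOR + SHA-256 are stand-ins for the
# sample's actual (undisclosed) cipher and key-derivation scheme.
import hashlib
import itertools

def derive_key(volume_id: int) -> bytes:
    """Turn a 32-bit volume serial number into a keystream seed."""
    return hashlib.sha256(volume_id.to_bytes(4, "little")).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Symmetric XOR with a repeating key (same call encrypts/decrypts)."""
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

def decrypt_stage2(blob: bytes, volume_id: int) -> bytes:
    return xor_crypt(blob, derive_key(volume_id))
```

Run with the wrong volume ID, the routine produces garbage; this is why analysts need the exfiltrated volume ID (or the original machine) to recover the second stage.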
The last talk for today was “DFIR & Crisis Management – Post-mortems & Lessons Learned in the Pain from the Field” by Vincent Nguyen. He was involved in many security incidents and explained some interesting facts about them: how they worked, what they found (or not) and also, very importantly, the lessons learned to improve the IR process.
The schedule ended with a set of lightning talks: 19 talks of 3 minutes with lots of interesting information, tools, stories, etc.
Hello from Bordeaux, France where I’m attending the 7th edition (already!) of the BotConf security conference dedicated to fighting botnets. After Nantes, Nancy, Paris, Lyon, Montpellier, Toulouse and now Bordeaux, their “tour de France” is almost complete. What will be the next location? I attended all the previous editions and many wrap-ups are available on this blog. So, let’s start with the 2019 edition!
The conference was kicked off by Eric Freyssinet. This year, they received 73 submissions to the call for papers and 3 workshops (organized yesterday). The selection process was difficult and, according to Eric, we can expect interesting talks.
After the introduction, the first talk was “DeStroid – Fighting String Encryption in Android Malware” presented by Daniel Baier. He worked with Martin Lambertz on this topic.
Analyzing Android malware is not that hard because most developers use the standard API and applications can easily be decompiled. So, malware authors have to use alternative techniques to obfuscate their interesting content. One of these is the use of encryption to hide strings, which are then decrypted at run time. As you can imagine, doing this process manually is a pain. To test the DeStroid tool, they used the data set provided by Malpedia (also presented at BotConf previously). They identified three techniques used to encrypt strings: using a string repository, passing strings to a decryption routine, and using native libraries. Each technique was reviewed by Daniel. From a statistics point of view, 52% of the tested Android samples use string encryption and 56% of those use the “direct pass” method. The DeStroid tool was also compared to other solutions like JMD, Deobfuscator or Dex-Oracle. The tool is available here if you’re interested.
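The “direct pass” pattern (every string literal wrapped in a call to a decryption routine) is easy to emulate offline once the routine has been reverse engineered. A toy example, using a single-byte XOR routine purely as a stand-in for whatever scheme a real sample ships:

```python
# Toy emulation of the "direct pass" string-encryption pattern found in
# Android malware: re-implement the decryption routine, then decode
# every literal. Single-byte XOR stands in for the real routine.
def decrypt_string(blob: bytes, key: int = 0x5A) -> str:
    """Stand-in for the APK's own decryption routine."""
    return bytes(b ^ key for b in blob).decode()

# Literals as they would appear in decompiled code (pre-encrypted here
# with the same key, so the example is self-contained).
encrypted_literals = [bytes(c ^ 0x5A for c in b"http://c2.example"),
                      bytes(c ^ 0x5A for c in b"send_sms")]
decoded = [decrypt_string(e) for e in encrypted_literals]
```

Tools like DeStroid automate exactly this step across a whole APK, which is what makes bulk statistics such as the 52%/56% figures above possible.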
The next speaker was Marco Riccardi, who presented “Golden Chickens: Uncovering A Malware-as-a-Service (MaaS) Provider and Two New Threat Actors Using It“. The talk was flagged as TLP:AMBER so I won’t disclose details about it. Just the same remark as usual with this kind of “sensitive” slides: can you trust a room full of 500 people? I liked a slide (without any confidential information) that explained how NOT to do attribution:
IR > Found sample > Use technique “X” > Google for “technique X” > Found references to “China” > We are attacked by China > Case closed
The next talk focused on the gaming industry. I’m definitely not a game addict, so I’m always curious about what happens in this field. Basically, as in any domain, if some profit can be made, bad guys are present! With huge profits being made and millions of players targeted, it is normal to see attacks against big names like Counter-Strike. Ivan Korolev and Igor Zdobnov presented “Unrevealing the Architecture Behind the Counter-Strike 1.6 Botnet: Zero-Days and Trojans”. Do you remember that Counter-Strike was born in 1999? What’s the business behind CS 1.6? People want to play with more and more people. Servers need to be “promoted” to attract more gamers. “Boosting” is the technique used to attract new players. While it can be performed with good & clean techniques, it can also be performed via malware. Ivan & Igor reviewed the different vulnerabilities found in the CS client and how they were (ab)used to build the botnet. Vulnerabilities were found in the parsing of SAV files (saved game files) and BMP files. By exploiting vulnerabilities in the game, the client becomes part of a botnet. A rogue server can issue commands to a client like:
redirecting players to another server
tampering with configs
changing the behavior of the game
In the next part of the presentation, the Belonard trojan was described. It has been active since 05/2017 and constantly uses new exploits & modules. Finally, they explained the take-down process.
After the lunch break, the keynote was presented by Gilles Schwoerer and Michal Salat. Michal is working for an AV vendor and Gilles is working for the French law enforcement services. Their keynote was called “Putting an end to Retadup“.
What is Retadup? Michal started the technical part and explained the facts around this malware: it’s a worm that affected 1.3M+ computers via malicious LNK files. It’s not only a botnet but also a malware platform used by others to distribute more malware through it. The original version was developed in AutoIt and implements persistence, extraction of the victim’s details and a lot of anti-analysis techniques. Communications with the C2 are performed via a hardcoded list of domains and simple HTTP GET requests, Base64- or hex-encoded. When the malware was discovered and analyzed, the C2 was found to be located in France. That was the second part of the keynote, presented by Gilles, who explained the take-down process. The plan was to serve an empty update to the bots to make them inactive. This method did not remove the infection but was less dangerous for the end-user. US agencies were also active in the process to redirect the DNS to the rogue C2 server deployed by the French police. This keynote was a great example of collaboration between a vendor and LE services.
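Simple HTTP GET beacons with Base64- or hex-encoded values are trivial to decode once spotted in traffic. An illustrative decoder (the URL, parameter names and values below are invented, not real Retadup indicators):

```python
# Illustrative decoder for Base64/hex-encoded C2 GET parameters, the
# kind of scheme described for Retadup. All names/values are invented.
import base64
import binascii
from urllib.parse import urlparse, parse_qs

def decode_param(value: str) -> str:
    """Try hex first, then Base64; fall back to the raw value."""
    for decoder in (binascii.unhexlify, base64.b64decode):
        try:
            return decoder(value).decode()
        except (binascii.Error, ValueError, UnicodeDecodeError):
            continue
    return value

def decode_c2_url(url: str) -> dict:
    """Decode every query-string parameter of a beacon URL."""
    qs = parse_qs(urlparse(url).query)
    return {k: decode_param(v[0]) for k, v in qs.items()}
```

Such thin encodings hide nothing from an analyst, but they do keep casual greps and naive IDS string matches from firing on the plaintext.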
The next presentation was “Android Botnet Analysis – Shaoye Botnet” by Min-Chun Tsai, Jen-Ho Hsiao and Ding-You Hsiao. Like the previous talk, it was a review of another malware family targeting Android devices. In Chinese, “Shaoye” means “Young master”. They analyzed two versions of the malware. The first one used DNS hijacking in residential routers to redirect victims to a rogue website, while the second version used compromised websites to spread malware. In the first version, a fake Facebook app was installed; in the second, a fake Sagawa Express application (Sagawa is a major transportation company in Japan). The malware samples were completely analyzed to explain how they work.
After a welcome coffee break, a very interesting talk was presented by Piotr Bialczak and Adrian Korczak: “Tracking Botnets with Long Term Sandboxing”. The idea behind the research was to improve the analysis of bots in a sandbox over a long period of time. We know that reverse engineering malware costs a lot of time, so we use sandboxes as much as possible. The biggest constraint is the time allowed for execution, usually a few minutes. Malware developers know this and make their malware wait longer than the sandbox timeout, increasing the chances that the sandbox will give up by itself. They created an “LTS” (Long Term Sandboxing) system that is optimized to let a bot run for a long time without the usual technical constraints. Based on Qemu and a system of external snapshots, combined with other tools like an ELK stack and Moloch, they are able to reduce CPU usage, network bandwidth, etc. They analyzed 20 families of well-known botnets and showed interesting information learned at the network level, from SMTP traffic or DGAs. For example, it was possible to detect unusual protocols and traffic to non-standard ports.
The next talk was “Insights and Trends in the Data-Center Security Landscape” by Daniel Goldberg and Ophir Harpaz. There were some changes but I covered this talk last week at DeepSec (see my wrap-up here).
Then, Dimitris Theodorakis and Ryan Castellucci presented “The Hunt for 3ve“. Here again, another family of malware was dissected. This one targeted online ads. Yes, ads remain a business. A few numbers: 1.8M+ infected computers, 3B+ ad requests/day, 10K+ spoofed domains, 60K+ accounts. 3ve used three techniques that were reviewed by Dimitris and Ryan:
Boaxxe (residential proxy)
Here again, after the technical details, they explained the take-down process.
Finally, the first day ended with “Guildma: Timers Sent from Hell” presented by Adolf Streda, Luigino Camastra, and Jan Vojtešek. This time, the analyzed malware was not only a RAT but also spyware, a password stealer and banking malware. It was spread through spam campaigns and initially targeted mainly Brazil, before expanding to more countries.
That’s the end of day 1! Many botnets were covered, always with the same approach: hunting, finding samples, analyzing them, learning how they work and organizing the take-down. See you tomorrow for the second wrap-up!
Here we go for the second wrap-up! DeepSec is over, flying back tomorrow to Belgium. My first choice today was to attend: “How To Create a Botnet of GSM-devices” by Aleksandr Kolchanov. Don’t forget that GSM devices are not only “phones”. Aleksandr covered nice devices like alarm systems, electric sockets, smart-home controllers, industrial controllers, trackers and… smartwatches for kids!
They all have features like sending notifications via SMS and calling pre-configured numbers, but they can also be configured or polled via SMS. Examples of attacks? Brute-forcing the PIN code, spoofing calls, using “hidden” SMS commands. Ok, but what are the reasons to hack them? We have direct attacks (unlock the door, steal stuff) or spying: abusing the built-in microphone. Attacks on property are also interesting: switching off electric devices (a water pump, a heating system). Also terrorism or political actions? Financial attacks (calling or sending SMS to premium numbers)? Why a botnet? To get some money! Just use it to send huge amounts of SMS, but also to DoS or for political/terrorism actions: can you imagine thousands of alarms going off at the same time? Thanks to powerful marketing, people buy them, so we have many devices in the wild:
Not properly installed
Insecure by default
Absence of certification
After the introduction, Aleksandr explained how he performed attacks against different devices. It’s easy to hack them but the real challenge is to find targets. How? You can do a mass scan and call all numbers but it will cost money and some operators will detect you (“Why are you calling xxx times per day?”). How to search without making a call? There are web services provided by some operators that help to get info about numbers in use, there are open APIs, databases, leaked data, etc… Once you have enough valid devices, it’s time to build the botnet:
The next talk was about… pacemakers! Wait, everything has been said about those devices, right? A lot of material has already been published. The big story dates back to 2017, when a major flaw was discovered. The talk presented by Tobias Zillner was called “500.000 Recalled Pacemakers, 2 Billion $ Stock Value Loss – The Story Behind”.
When you need to assess such medical devices, where do you get one? On a second-hand webshop! Just have a look at dotmed.com, their stock of medical devices is awesome! The ecosystem tested was: pacemakers / programmers / home monitors and the “Merlin Net”, alias “the cloud”. The first attack vector covered by Tobias was the new generation of devices that use wireless technology: low power, short range (2m), 401-406MHz, easily reachable with SDR. How to find technical specs? Just check the FCC ID and search for it. Google always remains your best friend. The vulnerabilities found were an energy depletion attack (draining the battery) and a… crash of the pacemaker! The next target was the “Merlin@Home” device, a home monitoring system. They are easy to find on eBay:
Just perform an attack like against any embedded Linux device: connect a console, boot it, press a key to get the bootloader, change the boot command to add “init=/bin/bash” like on any Linux, and boot in single-user mode! Once inside the box, it’s easy to find a lot of data left by developers (source code, SSH keys, encryption keys, …). The second part of the talk was dedicated to the full-disclosure process.
After a short coffee break, Fabio Nigi presented “IPFS As a Distributed Alternative to Logs Collection”. The idea behind this talk was to try to solve a classic headache for people who are involved in log management tasks. This can quickly become a nightmare due to the ever-changing topologies, the number of assets, amount of logs to collect and process. Storage is a pain to manage.
So, Fabio had the idea to use IPFS. IPFS means “InterPlanetary File System” and is a P2P distributed file system that helps to store files in multiple locations. He introduced the tool and how it works (it looks interesting, I wasn’t aware of it). Then he demonstrated how to interconnect it with a log collection solution using different tools like an IPFS gateway, React, Brig or Minerva. It’s an interesting approach, however, the project is still in the development phase (as stated on the website)…
There were many interesting talks today and, with a dual-track conference, it’s not always easy to choose the one that will be the most entertaining or interesting. My next choice was “Extracting a 19-Year-Old Code Execution from WinRAR” by Nadav Grossman.
WinRAR is a well-known tool to handle many archive formats. As the tool is very popular, it’s a great target for attackers because it is installed on many computers! After a very long part about fuzzing (the techniques and tools like WinAFL), Nadav explained how the vulnerability was found. It was located in a DLL used to process ACE files. Many details were disclosed and, if you are interested, there is a blog post available here. Note that since the vulnerability has been found and disclosed, the support of ACE archives has been removed from the latest versions of WinRAR!
After the lunch break, I attended “Setting up an Opensource Threat Detection Program” by Lance Buttars (Ingo Money). This was an interesting talk about tools that you can deploy to protect your web services but also to counterattack the bad guys. Many tools are used in Lance’s arsenal (ModSecurity, reverse proxies, Fail2ban, etc.)
Lance also explained what honeypots are and the different types of data that you can collect: domains, files, ports, SQL tables or databases. For each type, he gave some examples. Note that “active defense” is not allowed in many countries!
And the day continued with “Once Upon a Time in the West – A story on DNS Attacks” by Valentina Palacín and Ruth Esmeralda Barbacil. They reviewed well-known DNS attack techniques (DNS tunneling, hijacking, and poisoning) then they presented a timeline of major threats that affected DNS services and that abused the protocols like:
DNSChanger (Operation Ghost Click)
Syrian Electronic Army
OilRig (suspected Iranian)
Project Sauron (suspected USA)
DarkHydrus (
For each of them, they mapped the techniques to the MITRE ATT&CK framework. Nothing really new but a good recap which concludes that DNS is a key protocol and that it must be carefully controlled.
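One reason DNS tunneling is detectable at all (my own illustration, not from the talk): exfiltrated data encoded in subdomains tends to produce long, high-entropy labels. A minimal scoring sketch, with an arbitrary threshold chosen for illustration:

```python
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy (bits/char) of a DNS label."""
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_tunneled(fqdn: str, threshold: float = 3.5) -> bool:
    # Toy heuristic: real detection needs baselining per environment.
    first = fqdn.split(".")[0]
    return len(first) > 30 and label_entropy(first) > threshold

print(looks_tunneled("www.example.com"))                                # False
print(looks_tunneled("dGhpcyBpcyBleGZpbHRyYXRlZCBkYXRhCg0x.evil.com"))  # True
```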
The two next talks focused more on penetration testing: “What’s Wrong with WebSocket APIs? Unveiling Vulnerabilities in WebSocket APIs” by Mikhail Egorov. He has already published a lot of research around WebSockets and started with a review of the protocol. Then he described different types of attacks. The second one was “Abusing Google Play Billing for Fun and Unlimited Credits!” by Guillaume Lopes. Guillaume explained how Google provides a payment framework for developers. Like the previous talk, he started with a review of the framework, then showed how it can be abused. He tested 50 apps, 29 were vulnerable to this attack. All developers were contacted and only 1 replied!
To close the day, Robert Sell presented “Techniques and Tools for Becoming an Intelligence Operator“. Open-source intelligence can be used in many fields: forensics, research, etc. Robert defines it as “Information that is hard to find but freely available”.
He explained how to prepare yourself to perform investigations, which tools to use, network connections, creating profiles on social networks and many more. The list of tools and URLs provided by Robert was amazing! Don’t forget that good OpSec is important: if you’re excited to search for information about your target, (s)he probably won’t be as excited as you! Also, keep in mind that all these techniques can also be used against you!
That’s all Folks! DeepSec is over! Thanks again to the organizers for a great event!
Hello from Vienna where I’m at the DeepSec conference. Initially, I was scheduled to give my OSSEC training but it was canceled due to a lack of students. Anyway, the organizers invited me to join (huge thanks to them!). So, here is a wrap-up of the first day!
After the short opening ceremony by René Pfeiffer, the DeepSec organizer, the day started with a keynote. The slot was assigned to Raphaël Vinot and Quinn Norton: “Computer security is simple, the world is not”.
I was curious based on the title but the idea was very interesting. Every day, as security practitioners, we deal with computers but we also have to deal with the people who use the computers we are protecting! We have to take them into account. Based on different stories, Raphaël and Quinn gave their view of how to communicate with our users. First principle: listen to them! Let them explain with their own words and shut up. Even if it’s technically incorrect, they could have interesting information for us. The second principle is the following: if you don’t listen to your users, you don’t know how to do your job! The next principle is to learn how they work because you must adapt to them. The closer you are to your users, the better you can understand the risks they are facing. Also, don’t just say “Do this, then this, …” but explain what is behind the action and why they have to do it. Don’t go too technical if people don’t ask for details. Don’t scare your users! The classic example is the motivated user who has to finish his/her presentation for tomorrow morning. She/he must transfer files home but how? If you block all the classic file transfer services, be sure that the worst one will be used. Instead, promote a tool that you trust and that is reliable! Very interesting keynote!
The first regular talk was presented by Abraham Aranguren: “Chinese Police & Cloudpets”. If you don’t know Cloudpets, have a look at this video. What could go wrong? A lot! The security of this connected toy was so bad that major resellers like Walmart or Amazon decided to stop selling it. It’s a connected toy linked to mobile apps to exchange messages between parents and kids. Nice, isn’t it? But they made a lot of mistakes regarding the security of the product. Abraham reviewed them:
Bad Bluetooth implementation: no control over pairing or pushing/fetching data from the toy
Unprotected MongoDB indexed by Shodan, attacked multiple times by ransomware
All recordings available in an S3 bucket (800K customers!)
The next part of the talk was about mobile apps used by Chinese police to track people, especially the Muslim population in Xinjiang: IJOP & BXAQ. IJOP means “Integrated Joint Operations Platform” and is an application used to collect private information about people and to perform big data analysis. The idea is to collect unusual behaviors and report them to central services for further investigations. The app was analyzed, reverse-engineered and what they found is scary. The collected data include:
Recording of height & blood type
Abnormal electricity use
Problematic tools -> to make explosives?
Whether someone stopped using their phone
The BXAQ app is a trojan that is installed even on tourists’ phones to collect “interesting” data about them:
It scans the device on which it is installed
Collected info: calendar, contacts, calls, SMS, IMEI, IMSI, hardware details
Scan files on SD card (hash comparison)
A ZIP file is created (without any password) and uploaded to a police server
After a welcome coffee break, I came back to the same track to attend “Mastering AWS pen testing and methodology” by Ankit Giri. The idea behind this talk was to give a better idea of how to pentest an AWS environment.
The talk was full of tips & tricks but also references to tools. The idea is to start by enumerating the AWS accounts used by the platform as well as the services. To achieve this, you can use aws-inventory. Then check for CloudWatch, CloudTrail or Billing Alerts. Check the configuration of the services being used. Make notes of services interacting with each other. S3 buckets are, of course, juicy targets. Another tool presented was S3Scanner. Then keep an eye on IAM: how accounts are managed, what are the access rights, keys and roles. In this case, PMapper can be useful. EC2 instances must be reviewed to find open ports, ingress/egress traffic, and their security groups! If you are interested in testing AWS environments, have a look at this arsenal. To complete the presentation, a demo of Prowler was performed by Ankit.
Then Yuri Chemerkin presented “Still Secure. We Empower What We Harden Because We Can Conceal”. To be honest with you, I did not understand the goal of the presentation, the speaker was not very engaging and much of the content was in Russian… Apparently, while discussing with other people who attended the talk, it was related to the leak of information from many tools and how to use them in security testing…
The next one was much more interesting: “Android Malware Adventures: Analyzing Samples and Breaking into C&C” presented by Kürşat Oğuzhan Akıncı & Mert Can Coşkuner. The talk covered the hunt for malware in the mobile apps ecosystem, mainly Android (>70% of new malware targets Android phones). Even if Google implemented checks for all apps submitted to the Play store, the solution is not bullet-proof and, like on Windows systems, malware developers have techniques to bypass sandbox detection… They explained how they spotted a campaign targeting Turkey. They analyzed the malware and successfully exploited the C2 server which was vulnerable to:
Lack of encryption keys
Password found in source code
A weak upload feature that allowed them to upload a webshell
In the end, they uncovered the campaign, hacked back (with proper authorization!), restored the stolen data and prevented further incidents. Eight threat actors were arrested.
My next choice was again a presentation about the analysis of a known campaign: “The Turtle Gone Ninja – Investigation of an Unusual Crypto-Mining Campaign” presented by Ophir Harpaz and Daniel Goldberg.
The campaign was “Nansh0u” and it’s not a classic one. Ophir & Daniel gave many technical details about the malware and how it infected thousands of MSSQL servers to deploy a crypto-miner. Why servers? Because they require less interaction, they have better uptime, they have a lot of resources and are maintained by poor IT teams ;-). The infection path was: scanning for MSSQL servers, brute-forcing them, enabling code execution (via xp_cmdshell), dropping files and executing them.
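The xp_cmdshell step works because the stored procedure ships with SQL Server but is disabled by default; once a login is brute-forced, re-enabling it takes only a few standard T-SQL statements (shown here so defenders know what to alert on):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;
-- From here on, OS commands run as the SQL Server service account:
EXEC xp_cmdshell 'whoami';
```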
Then, Tim Berghoff and Hauke Gierow presented “The Daily Malware Grind – Looking Beyond the Cybers”. They performed a review of the threat landscape: ransomware, crypto-miners, RATs, etc. Interesting fact: old malware remains active.
Lior Yaari talked about a hot topic these days: “The Future Is Here – Modern Attack Surface On Automotive”. Did you know that IDS are coming to connected cars today? It’s a fact, cars are ultra-connected today and it will be worse in the future. If, in 2005, cars had an AUX connector and USB ports, today they have GPS, 4G, BT, WiFi and a lot of telemetry data sent to the manufacturer! By 2025, cars will be part of clouds, be connected to PLCs, talk to electric chargers, gas stations, etc. Instead of using OBD2 connections, we will use regular apps to interact with them. Lior gave multiple examples of potential issues that people will face with their connected cars. A great topic!
To close the first day, I attended “Practical Security Awareness – Lessons Learnt and Best Practices” by Stefan Schumacher. He explained in detail why awareness training is not always successful.
It’s over for today! Stay tuned for the next wrap-up tomorrow! I’m expecting a lot from some presentations!