Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

April 10, 2020

I published the following diary on isc.sans.edu: “PowerShell Sample Extracting Payload From SSL“:

Another diary, another technique to fetch a malicious payload and execute it on the victim host. I spotted this piece of PowerShell code this morning while reviewing my hunting results. It implements a very interesting technique. As usual, all the code snippets below have been beautified. First, it implements a function to reverse obfuscated strings… [Read more]

[The post [SANS ISC] PowerShell Sample Extracting Payload From SSL has been first published on /dev/random]

Last week, the first Release Candidate of Caddy 2 saw the light of day. I don’t usually like to run production environments on beta software, but for Caddy I wanted to make an exception.

April 09, 2020

One of my sources of threat intelligence is a bunch of honeypots that I’m operating here and there. They are facing the wild Internet and, as you can imagine, they get hit by many “attackers”. But are they really bad people? Of course, the Internet is full of bots tracking vulnerabilities 24 hours a day and 7 days a week. But many IP addresses that hit my honeypots are servers or devices that have just been compromised.

To get an idea of who’s behind these IP addresses, why not try to grab a screenshot of the web services running on common ports (mainly 80, 81, 8080, 8088, 443, 8443, 8844, etc.)? Based on a list of 60K+ IP addresses, I used Nmap to collect screenshots of the detected web services. Nmap has a nice script to automate this task.
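The scan itself is little more than a wrapper around Nmap. Here is a minimal Python sketch of that step; it assumes the community http-screenshot NSE script (which relies on wkhtmltoimage) is installed in Nmap’s script path, and the file names are illustrative:

import subprocess

PORTS = "80,81,8080,8088,443,8443,8844"

def screenshot_webservices(ip_file):
    # Probe the common web ports on every target; the NSE script
    # writes one PNG per web service it manages to render.
    subprocess.run(
        ["nmap", "-p", PORTS,
         "--script", "http-screenshot",
         "-iL", ip_file,      # read targets from a file
         "-oN", "scan.log"],  # keep a normal-format scan log
        check=True,
    )

screenshot_webservices("targets.txt")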

Once the huge pile of screenshots had been generated, I looked for a way to display them nicely: one big “patchwork” of all the images. Here is the resulting image:

The final picture is quite big:

  • Contains 20,599 screenshots
  • 223 MB
  • 60,000 x 41,100 pixels

It can be zoomed in and out to browse the screenshots for fun:

To generate this patchwork image, I used a few lines of Python together with the Pillow library. Images were resized and “white” images removed (the Nmap script does not work well with technologies like Flash).
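The core of that Python/Pillow logic fits in a few lines. This is only a sketch of the idea, not the actual script from the repo; the tile size, the grid width and the “mostly white” threshold are assumptions:

from pathlib import Path
from PIL import Image, ImageStat

THUMB = (200, 150)  # size of one tile in the patchwork (assumed)
COLUMNS = 300       # tiles per row (assumed)

def is_mostly_white(img, threshold=250):
    # Screenshots that failed to render come out almost entirely white.
    return ImageStat.Stat(img.convert("L")).mean[0] >= threshold

def build_patchwork(src_dir, out_file):
    tiles = []
    for path in sorted(Path(src_dir).glob("*.png")):
        img = Image.open(path).convert("RGB")
        if is_mostly_white(img):
            continue  # drop blank/failed screenshots
        img.thumbnail(THUMB)
        tiles.append(img)
    if not tiles:
        return
    rows = (len(tiles) + COLUMNS - 1) // COLUMNS
    canvas = Image.new("RGB", (COLUMNS * THUMB[0], rows * THUMB[1]), "black")
    for i, tile in enumerate(tiles):
        canvas.paste(tile, ((i % COLUMNS) * THUMB[0], (i // COLUMNS) * THUMB[1]))
    canvas.save(out_file)

build_patchwork("screenshots", "patchwork.png")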

I won’t share the full picture because it contains sensitive information like file listings, admin interfaces and much more. Judging by the screenshots, many IP addresses run open interfaces, interfaces with default credentials or unconfigured applications (like CMSes). Some screenshots reveal corporate organizations, probably due to NAT being in place for egress traffic generated by compromised internal hosts. As you can imagine, some interfaces expose critical services. Some examples?

If you’re interested in generating this kind of big image, the script I wrote is available in my GitHub repo. Don’t blame me for the quality of the code 😉

[The post Hey Scanners, Say “Cheese!” has been first published on /dev/random]

April 03, 2020

Bad guys are always trying to use “exotic” file extensions to deliver their malicious payloads. While common dangerous extensions are often blocked by mail security gateways, there are plenty of less common ones. These days, with the COVID19 pandemic, we are facing a peak of phishing and scams trying to lure victims. I spotted one that uses such an exotic extension: “DAA”.

“DAA” stands for “Direct-Access-Archive” and is a file format developed by Power Software for its toolbox PowerISO. This is not a brand new way to distribute malware; my friend Didier Stevens already wrote an Internet Storm Center diary about this file format. A DAA file can normally only be processed by PowerISO, which greatly restricts the number of potential victims because, today, Microsoft Windows is able to handle ISO files natively. So, how do you handle a suspicious DAA file?

Fortunately, PowerISO provides a command-line tool for free (and statically compiled!) that helps to extract the content of DAA files. Let’s do it in a Docker container so as not to mess with the base OS…

xavier : /Volumes/MalwareZoo/20200401 $ ls Covid-19.001.daa
xavier : /Volumes/MalwareZoo/20200401 $ docker run -it --rm -v $(pwd):/data ubuntu bash
root@0c027d353187:/# cd /data
root@0c027d353187:/data# wget -q -O - http://poweriso.com/poweriso-1.3.tar.gz|tar xzvf -
poweriso
root@0c027d353187:/data# chmod a+x poweriso
root@0c027d353187:/data# ./poweriso extract Covid-19.001.daa / -od . 

PowerISO   Copyright(C) 2004-2008 PowerISO Computing, Inc
           Type poweriso -? for help

Extracting to ./Covid-19.001.exe ...   100%

root@0c027d353187:/data# file Covid-19.001.exe
Covid-19.001.exe: PE32 executable (GUI) Intel 80386, for MS Windows

Now you’ve got the PE file and can go further with the analysis…

As you can see from the copyright message, the tool is old (2008), but it works pretty well and deserves to be added to your personal reverse-engineering arsenal!

[The post Handling Malware Delivered Into .daa Files has been first published on /dev/random]

I published the following diary on isc.sans.edu: “Obfuscated with a Simple 0x0A“:

With the current Coronavirus pandemic, we continue to see more and more malicious activity around this topic. Today, we got a report from a reader who found a nice malicious Word document as part of a Coronavirus phishing campaign. I don’t know how the URL was distributed (probably via email) but the landing page is a fake White House-themed page. So, it is probably targeting US citizens… [Read more]

[The post [SANS ISC] Obfuscated with a Simple 0x0A has been first published on /dev/random]

March 29, 2020

March 27, 2020

When cached HTML links to deleted Autoptimized CSS/JS, the page is badly broken… no more, with a new (experimental) option in AO 2.7 to use fallback CSS/JS, which I just committed to the beta branch on GitHub.

For this purpose Autoptimize hooks into template_redirect and will redirect to fallback Autoptimized CSS/JS if a request for autoptimized files 404’s.

For cases where 404’s are not handled by WordPress but by Apache, AO adds an ErrorDocument directive to the .htaccess file redirecting to wp-content/autoptimize_404_handler.php. Users on NGINX or MS IIS or … might have to configure their webserver to redirect to wp-content/autoptimize_404_handler.php themselves, but those are smart cookies anyway, no?
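For reference, the Apache directive boils down to a single line; this is a sketch of the idea, the exact rule AO writes may differ:

ErrorDocument 404 /wp-content/autoptimize_404_handler.php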

If you want to test, you can download Autoptimize 2.7 beta here and replace 2.6 with it.

I published the following diary on isc.sans.edu: “Malicious JavaScript Dropping Payload in the Registry“:

When we speak about “fileless” malware, it means that the malware does not use the standard filesystem to store temporary files or payloads. But they need to write data somewhere in the system for persistence or during the infection phase. If the filesystem is not used, the classic way to store data is to use the registry. Here is an example of a malicious JavaScript code that uses a temporary registry key to drop its payload (but it also drops files in a classic way)… [Read more]

[The post [SANS ISC] Malicious JavaScript Dropping Payload in the Registry has been first published on /dev/random]

March 26, 2020

I published the following diary on isc.sans.edu: “Very Large Sample as Evasion Technique?“:

Security controls have a major requirement: they can’t (or at least they try not to) interfere with normal operations of the protected system. It is known that antivirus products do not scan very large files (or just the first x bytes) for performance reasons. Can we consider a very big file a technique to bypass security controls? Yesterday, while hunting, I spotted a very interesting malware sample. The malicious PE file was delivered via multiple stages but the final dropped file was large… very large… [Read more]

[The post [SANS ISC] Very Large Sample as Evasion Technique? has been first published on /dev/random]

March 25, 2020

A blue heart with the Drupal icon in it

Today, I'm asking for your financial support for the Drupal Association. As we all know, we are living in unprecedented times, and the Drupal Association needs our help. With DrupalCon being postponed or potentially canceled, there will be a significant financial impact on our beloved non-profit.

Over the past twenty years, the Drupal project has weathered many storms, including financial crises. Every time, Drupal has come out stronger. As I wrote last week, I'm confident that Drupal and Open Source will weather the current storm as well.

While the future of Drupal and Open Source is not in doubt, the picture is not as clear for the Drupal Association.

Thirteen years ago, six years after I started Drupal, the Drupal Association was formed. As an Open Source non-profit, the Drupal Association's mission was to help grow and sustain the Drupal community. It still has that same mission today. The Drupal Association plays a critical role in Drupal's success: it manages Drupal.org, hosts Open Source collaboration tools, and brings the community together at events around the world.

The Drupal Association's biggest challenge in the current crisis is to figure out what to do about DrupalCon Minneapolis. The Coronavirus pandemic has caused the Drupal Association to postpone or perhaps even cancel DrupalCon Minneapolis.

With over 3,000 attendees, DrupalCon is not only the Drupal community's main event — it's also the most important financial lever to support the Drupal Association and the staff, services, and infrastructure they provide to the Drupal project. Despite efforts to diversify its revenue model, the Drupal Association remains highly dependent on DrupalCon.

No matter what happens with DrupalCon, there will be a significant financial impact on the Drupal Association. The Drupal Association is now in a position where it needs to find between $400,000 and $1.1 million USD, depending on whether we postpone or cancel the event.

In these trying times, the best of Drupal's worldwide community is already shining through. Some organizations and individuals proactively informed the Drupal Association that it could keep their sponsorship dollars or ticket price whether or not DrupalCon North America happens this year: Lullabot, Centarro, FFW, Palantir.net, Amazee Group and Contegix have come forward to pledge that they will not request a refund of their DrupalCon Minneapolis sponsorship, even if the event is cancelled. Acquia, my company, has joined in this campaign as well, and will not request a refund of its DrupalCon sponsorship either.

These are great examples of forward-thinking leadership and action, and this is what makes our community so special. Not only do these long-time Drupal Association sponsors understand that the entire Drupal project benefits from the resources the Drupal Association provides for us — they also anticipated the financial needs the Drupal Association is working hard to understand, model and mitigate.

In order to preserve the Drupal Association, not just DrupalCon, more financial help is needed:

  • Consider making a donation to the Drupal Association.
  • Other DrupalCon sponsors can consider this year's sponsorship as a contribution and not seek a refund should the event be cancelled, postponed or changed.
  • Individuals can consider becoming a member, increasing their membership level, or submitting an additional donation.

I encourage everyone in the Drupal community, including our large enterprise users, to come together and find creative ways to help the Drupal Association and each other. All contributions are highly valued.

The Drupal Association is not alone. This pandemic has wreaked havoc not only on other technology conferences, but on many organizations' fundamental ability to host conferences at all moving forward.

I want to thank all donors, contributors, volunteers, the Drupal Association staff, and the Drupal Association Board of Directors for helping us work through this. It takes commitment, leadership and courage to weather any storm, especially a storm of the current magnitude. Thank you!

March 24, 2020

March 23, 2020

March 22, 2020

Unbound

In previous blog posts, I described how to set up stubby as a DNS-over-TLS resolver. I used stubby on my laptop(s) and unbound on my internal network.

But I’m migrating away from stubby in favour of unbound.

Unbound is a popular DNS resolver; it’s less well known that you can also use it as an authoritative DNS server.

I created a docker container that can serve both purposes, although you can use the same logic without docker.

It’s available at https://github.com/stafwag/docker-stafwag-unbound.

docker-stafwag-unbound

Dockerfile to run unbound inside a docker container. The unbound daemon will run as the unbound user. The uid/gid is mapped to 5000153.

Installation

Clone the git repo:

$ git clone https://github.com/stafwag/docker-stafwag-unbound.git
$ cd docker-stafwag-unbound

Configuration

Port

The default DNS port is set to 5353; this port is mapped by the docker run command to the default port 53 (see below). If you want to use another port, you can edit etc/unbound/unbound.conf.d/interface.conf.
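That interface declaration amounts to something like this (a sketch; check the actual file in the repo):

server:
  interface: 0.0.0.0@5353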

Use unbound as an authoritative DNS server

To use unbound as an authoritative DNS server - a DNS server that hosts DNS zones - add your zone files to etc/unbound/zones/. Alternatively, you can also use a docker volume to mount your zone files at /etc/unbound/zones/.

The entrypoint script will create a zone.conf file to serve the zones.

You can use subdirectories. Each zone file needs to have $ORIGIN set to your zone origin.
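Under the hood, unbound can serve such zones with auth-zone stanzas; the generated zone.conf presumably contains something along these lines (a sketch, names and paths assumed):

auth-zone:
  name: "stafnet.local"
  zonefile: "/etc/unbound/zones/stafnet/stafnet.zone"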

Use DNS-over-TLS

The default configuration uses quad9 to forward DNS queries over TLS. If you want to use another vendor, or you want to query the root DNS servers directly, you can remove this file.
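For reference, a quad9 forwarding configuration in unbound looks roughly like this (a sketch; the file name and exact options in the repo may differ):

server:
  tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt

forward-zone:
  name: "."
  forward-tls-upstream: yes
  forward-addr: 9.9.9.9@853#dns.quad9.net
  forward-addr: 149.112.112.112@853#dns.quad9.net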

Build the image

$ docker build -t stafwag/unbound . 

Run

Recursive DNS server with DNS-over-TLS

Run

$ docker run -d --rm --name myunbound -p 127.0.0.1:53:5353 -p 127.0.0.1:53:5353/udp stafwag/unbound

Test

$ dig @127.0.0.1 www.wagemakers.be

Authoritative DNS server

If you want to use unbound as an authoritative DNS server, you can follow the steps below.

Create a directory with your zone files:

[staf@vicky ~]$ mkdir -p ~/docker/volumes/unbound/zones/stafnet
[staf@vicky ~]$ 

Create the zone files

Zone files

stafnet.zone:

$TTL	86400 ; 24 hours
$ORIGIN stafnet.local.
@  1D  IN	 SOA @	root (
			      20200322001 ; serial
			      3H ; refresh
			      15 ; retry
			      1w ; expire
			      3h ; minimum
			     )
@  1D  IN  NS @ 

stafmail IN A 10.10.10.10

stafnet-rev.zone:

$TTL    86400 ;
$ORIGIN 10.10.10.IN-ADDR.ARPA.
@       IN      SOA     stafnet.local. root.localhost.  (
                        20200322001; Serial
                        3h      ; Refresh
                        15      ; Retry
                        1w      ; Expire
                        3h )    ; Minimum
        IN      NS      localhost.
10      IN      PTR     stafmail.

Make sure that the volume directory and the zone files have the correct permissions.

$ chmod 755 ~/docker/volumes/unbound/zones/stafnet/
$ chmod 644 ~/docker/volumes/unbound/zones/stafnet/*

Run the container

$ docker run -d --rm --name myunbound -v ~/docker/volumes/unbound/zones/stafnet:/etc/unbound/zones/ -p 127.0.0.1:53:5353 -p 127.0.0.1:53:5353/udp stafwag/unbound

Test

[staf@vicky ~]$ dig @127.0.0.1 soa stafnet.local

; <<>> DiG 9.16.1 <<>> @127.0.0.1 soa stafnet.local
; (1 server found)
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 37184
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;stafnet.local.			IN	SOA

;; ANSWER SECTION:
stafnet.local.		86400	IN	SOA	stafnet.local. root.stafnet.local. 3020452817 10800 15 604800 10800

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Sun Mar 22 19:41:09 CET 2020
;; MSG SIZE  rcvd: 83

[staf@vicky ~]$ 

Have fun


March 19, 2020

Abstract image representing an infinite wave

The world is experiencing a scary time right now. People feel uncertain about the state of the world: we're experiencing a global pandemic, OPEC is imploding, the trade war between the US and China could escalate, and stock markets around the world are crashing. Watching the impact on people's lives, including my own family's, has been difficult.

People have asked me how this could impact Open Source. What is happening in the world is so unusual, it is hard to anticipate what exactly will happen. While the road ahead is unknown, the fact is that Open Source has successfully weathered multiple recessions.

While recessions present a difficult time for many, I believe Open Source communities have the power to sustain themselves during an economic downturn, and even to grow.

Firstly, large Open Source communities consist of people all around the world who believe in collective progress, building something great together, and helping one another. Open Source communities are not driven by top-line growth -- they're driven by a collective purpose, a big heart, and a desire to build good software. These values make Open Source communities both resilient and recharging.

Secondly, during an economic downturn, organizations will look to lower costs, take control of their own destiny, and strive to do more with less. Adopting Open Source helps these organizations survive and thrive.

Open Source continues to grow despite recessions

I looked back at news stories and data from the last two big recessions — the dot-com crash (2000-2004) and the Great Recession (2007-2009) — to see how Open Source fared.

According to an InfoWorld article from 2009, 2000-2001 (the dot-com crash) was one of the largest periods of growth for Open Source software communities.

Twenty years ago, Open Source was just beginning to challenge proprietary software in the areas of operating systems, databases, and middleware. According to Gartner, the dot-com bust catapulted Linux adoption into the enterprise market. Enterprise adoption accelerated because organizations looked into Open Source as a way to cut IT spending, without compromising on their own pace of innovation.

Eight years later, during the Great Recession, we saw the same trend. As Forrester observed in 2009, more companies started considering, implementing, and expanding their use of Open Source software.

Red Hat, the most prominent public Open Source company in 2009, was outperforming proprietary software giants. As Oracle's and Microsoft's profits dropped in early 2009, Red Hat's year-over-year revenue grew by 11 percent.

Anecdotally, I can say that starting Acquia during the Great Recession was scary, but ended up working out well. Despite the economic slump, Acquia continued to grow year-over-year revenues in the years following the Great Recession from 2009 to 2011.

I also checked in with some long-standing Drupal agencies and consultancies (Lullabot, Phase2 and Palantir.net), who all reported growing during the Great Recession. They attribute that growth directly to Drupal and the bump that Open Source received as a result of the recession. Again, businesses were looking at Open Source to be efficient without sacrificing innovation or quality.

Why Open Source will continue to grow and win

Fast forward another 10 years, and Open Source is still less expensive than proprietary software. In addition, Open Source has grown to be more secure, more flexible, and more stable than ever before. Today, the benefits of Open Source are even more compelling than during past recessions.

Open Source contribution can act as an important springboard for individuals in their careers as well. Developers who are unemployed often invest their time and talent back into Open Source communities to expand their skill sets or build out their resumes. People can both give and get from participating in Open Source projects.

That is true for organizations as well. Organizations around the world are starting to understand that contributing to Open Source can give them a competitive edge. By contributing to Open Source, and by sharing innovation with others, organizations can engage in a virtuous and compounding innovation cycle.

No one wants to experience another recession. But if we do, despite all of the uncertainty surrounding us today, I am optimistic that Open Source will continue to grow and expand, and that it can help many individuals and organizations along the way.

If you planned to attend some security conferences in the coming weeks, there is a risk they’ll be canceled… Normally, I should be in Germany right now to attend TROOPERS… Canceled! SAS2020 (“Security Analyst Summit”)… Canceled! FIRST TC Amsterdam… Canceled! And more will probably be added to the long list. And, from an organizer’s perspective, it’s a hassle trying to decide what to do with upcoming conferences in the coming months…

We all like infosec events! It’s nice to meet good old friends and make new ones in a relaxed atmosphere. We learn a lot, we exchange ideas, we kick off new projects or crazy ideas, etc… Some people might be disappointed to have to cancel some trips and, trust me, I’m in the same mood! Every time I attend an infosec event, my reading list keeps growing with new articles, papers, and slide decks “that I’ll read later”. Today, most conferences have live streaming and publish recordings of all talks when authorized by the speakers (please always respect the TLP!).

Let’s be positive: why not clean up your reading list and bookmarks? We now have plenty of time, stuck at $HOME, to watch hours of recordings!

Here is a list of some useful resources (based on my interests of course):

There are many more; Google is your best friend! Also, don’t forget the books! On the other side, if you submitted a talk to a conference that has been canceled, use your time to improve your research, or start new projects and submit more crazy papers! 🙂

Stay safe and keep learning!

[The post InfoSec Conferences Canceled? We’ve Hours Of Recordings! has been first published on /dev/random]

I published the following diary on isc.sans.edu: “COVID-19 Themed Multistage Malware“:

More and more countries are closing their borders and asking citizens to stay at home. The COVID-19 virus is everywhere and is also used in campaigns to lure more victims who are looking for information about the pandemic. I found a malicious email that delivers multi-stage malware. It spoofs a World Health Organisation email and pretends to provide recommendations to the victim… [Read more]

[The post [SANS ISC] COVID-19 Themed Multistage Malware has been first published on /dev/random]

March 18, 2020

GNU Savannah

The CGIpaf project has a new home at GNU savannah: https://savannah.nongnu.org/projects/cgipaf/

The source code was, and still is, also hosted on GitHub.

There are a few reasons for the move:

  • I was looking for an easy way to store binary releases. Binary releases aren’t supported by GitHub. There might be a solution for this at GitLab but scp to upload a release is more convenient.
  • GitHub is becoming too dominant.
  • I prefer a solution that is based on Free Software.
  • I was already using GNU savannah for another project, lookat.

Have fun

March 17, 2020

Tips for working from home (luc, Tue, 17/03/2020 - 21:19)

Now that the Corona measures are getting stricter and stricter, working from home is gradually becoming the norm for the average office worker. For many people this is new and takes some figuring out. There is also no work-from-home culture at all in Belgium, which is a bit of a shame in a country with so many traffic jams.

Over the past years I have worked from home for three years (and since the beginning of this year I am back at the customer's office). For me it was a very positive and productive experience. Here are some tips based on my experience...

  1. You work at home, but try, as far as possible, to keep work and private life somewhat separated. A separate desk, or another spot where you sit down to work, helps to create a work feeling and to give the start and end of the day some structure. (And I realise that in these Corona times this is certainly not always easy when several people are working from home and/or there are children in the house.)
  2. You are in a house where other people live too. Try to make clear to them that you are really working and that they should leave you alone, but on the other hand don't lose too much sleep over it when the kids walk through your meeting or make noise. I have done many meetings with children's noise in the background. Mute yourself when you are not speaking to limit background noise as much as possible.
  3. Make time for your colleagues. You don't run into people at the coffee machine, so you speak with each other less informally and lose contact more easily. Try to book a regular slot with the people you work with most, to catch up, to work on something together, etc.
  4. Keep moving. Working from home means you cover a lot less distance in a day, certainly now that we are also being asked to stay inside. So set aside some time to walk or to exercise in another way. (I have an exercise bike in my home office...) During meetings where you don't have to look at a screen, you can also simply walk around.
  5. Make sure your workplace is comfortable enough. Working from the sofa may sound attractive, but if you do it all day you have a good chance of back pain and RSI (I speak from experience, from when I was forced to work from the sofa...). A good office chair and some ergonomic measures are just as highly recommended at home. For example, I bought a Varidesk when I started working from home, to alternate between sitting and standing work.
  6. Go outside now and then. You are inside all day, since you don't have to go to the office. Get a breath of fresh air from time to time, or have lunch in the garden. (One of my favourite work-from-home perks: sitting in the sun for half an hour at noon.)
  7. See working from home as a positive thing too. You get more out of your day: no traffic jams or packed public transport. There are plenty of tools available to stay in touch, and you can organise your day more flexibly, quietly keep working on things...

March 16, 2020

With the COVID19 pandemic ongoing, more and more countries are taking strong decisions to limit the movement of people. This is one of the best behaviours to prevent more and more people from being infected. It has a big impact on many organizations that are now facing a business continuity issue: how to allow people stuck at home to perform their day-to-day job? In some cases, it’s impossible to work from a remote location (ex: when your organization has to “produce” some goods). In others, like cybersecurity, it’s much more convenient. I’ve been working remotely for some of my customers for years! With some exceptions… In very sensitive (classified) environments, you, as a contractor, have to be onsite.

But, for a week now, things have been moving fast! Many organizations are implementing remote access solutions to ensure their business continuity. And things done in an emergency are rarely the best ones! Here is a recap of problems or “don’t do this” situations that I have faced or heard about over the last few days:

  • The bandwidth dedicated to the remote access is not properly dimensioned and people can’t work or simply can’t log in. This is especially true when you use VDI (“Virtual Desktop Infrastructure”) or remote control tools.
  • The remote access solution is used by regular AND power users. The administrators can’t simply access the infrastructure to fix users’ rights, close accounts, etc…
  • There is a lack of hardware MFA tokens: all of them have been assigned (and no provision has been made for software tokens)
  • There is a license issue on the remote access platform that allows only a third of the users to connect.
  • The bandwidth for remote access is shared with critical services like corporate web servers, API servers, …
  • The channel to safely exchange critical data is not properly defined. If an employee can’t be met physically, how do you exchange credentials, certificates, etc.?
  • It’s tempting to disable security controls to reduce the issues faced by users who flood the helpdesk. Is it a good idea to disable the host-checker feature of your VPN client to allow more people to connect from personal computers? Risks must be evaluated.
  • No, “Any to Any: Allow” is not an emergency rule in your firewall…

And if you don’t have a remote solution already in place? It’s tempting to deploy services quickly…

  • Don’t expose in the wild services running on top of critical protocols like RDP, SMB, HTTP, … (use them behind a VPN)
  • Avoid low-cost remote access protocols like VNC, TeamViewer or ZoHo, … (same advice, use them behind a VPN)
  • It’s tempting to quickly share data over the Internet. Be 200% sure to use very-very-very-strong passwords!
  • Using the cloud might be your survival solution but keep in mind that, in many cases, you lose control of your data.

Keep this in mind and update your BCP (“Business Continuity Plan”) with these lessons learned once we get rid of COVID19! Stay safe at home!

[The post Remote Access Bad Stories has been first published on /dev/random]

There will be a before and an after Coronavirus

What a strange moment we are living through. A brief page of humanity that we experience individually, confined, and, at the scale of the planet, connected.

For a few days now, we have not left home except to get some air, to go for a walk while greeting passers-by from as far away as we can. We stay at home. No more meetings, no more appointments. The children no longer go to school. A weekend like any other. But a weekend that will last for weeks, a month, perhaps more.

No more going to the restaurant. No more dropping the children off at the grandparents', no more checking the results of the cycling races after a day's work. Time seems to have stopped.

Hard to think of anything but the epidemic. Hard to do anything but try to convince those who think we are overreacting, to worry about our parents and, even if the statistics are in their favour, about our children. Hard not to think about how to manage the children if we both fall ill, unable to call on the grandparents, and if supplies run out.

But hard, too, not to realise that, in our privileged position, this situation is a very small price to pay if it saves lives.

To those who speak of catastrophism, of paranoia, I can only answer: "what is the cost of being wrong?" Is it better to take too many measures against a benign disease or, on the contrary, to underestimate a deadly scourge? We will never know whether we did too much, but we could spend our whole lives regretting not having done enough.

In any case, the speed of humanity's reaction convinces me that nothing will ever be the same again.

In a few days, wearing a mask in public, a typically Asian habit, has become an almost worldwide norm. Working from home and teleconferencing suddenly prove possible even among the most reluctant. A few doddering old men with a cough have finally succeeded where twenty years of world summits failed: reducing the world's production of CO2 and NO2.

Suddenly, the thousands of planes permanently in the sky turn out not to be so indispensable after all. Suddenly, the millions of tonnes of plastic gadgets can stay in their containers a few months longer. Suddenly, we can live without the new iPhone.

The drop in pollution brought by this soon-to-be-worldwide quarantine will probably save more lives than it will protect from the Coronavirus.

When the threat recedes, we will first have to fight governments that will struggle to hand back the extreme power they acquired in a few weeks. For decades, the struggles for our privacy and our freedoms will have to face the pandemic argument. Abuses will be numerous; totalitarian regimes will take hold insidiously, seizing the opportunity, camouflaging themselves behind public health measures.

But even without that, we will never go back to "normal".

For many, working from home will now have been proven effective, and two hours of commuting every day will no longer be justifiable. For others, it will now be impossible to hide that the world runs better without their hole-digging, their bullshit job. Professions too often forgotten will finally be honoured again: care workers, garbage collectors, delivery people, postal workers... We will discover how hard it is to do without teachers, restaurant owners and cleaning staff. Perhaps we will have learned, under duress, to live with our families, to adopt a schedule and a way of life imposed by our loved ones rather than by a boss obsessed with the time clock.

We will start thinking seriously about paying people a basic income to stay at home, realising that it will not destroy the world but, on the contrary, save it. We will realise that when our children accuse us of having done nothing about global warming and we answer that it was impossible, they will point at us and say: "Yet in 2020 you did it for the Coronavirus!"

We are all waiting, slumped in our living rooms, for the return to normal life. A normal life that will not come back, that will be forever different.

Will we ever again dare to kiss each other on the cheek when we cross paths in the street, that custom which seems so strange, even repugnant, to some Asians? Will we still mock the person wearing a mask in the street? Will we finally be convinced that health is not a commodity and that the sector does not have to be "profitable"? Will we finally stop hearing those criminal idiots who refuse all vaccines and who are the ticking time bombs of the next epidemics?

Because, yes, there will be others. Whether in a year, two years, ten years or a hundred. A future epidemic that we will no longer be able to stop ourselves from expecting. From watching for. In anticipation of which we will always keep a reasonable stock of toilet paper, masks and hand sanitiser.

Nor will we be able to stop realising that we live with our loved ones, that we love them, and that, for the vast majority of our lives, we merely cross paths with them in the kitchen and the bathroom. We will finally realise that those we care about are not eternal, that we called them several times during the quarantine when it had perhaps been three weeks, six months or a year since we last had the time to talk to them.

That spreadsheet to fill in for a client, that report to finish, that meeting to organise. Those daily traffic jams to go and sit in front of a screen, that ulcer narrowly avoided. That top-flight football match on TV. They were indispensable and yet, suddenly, we managed to do without them for weeks. To what trivialities do we devote our energy, our time, our lives? From now on, it will be impossible not to ask ourselves the question.

Nothing will ever be the same again.

Photo by Daniel Tafjord on Unsplash

I am @ploum, speaker and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and recognition. Thank you!

This text is published under the CC-By BE licence.

March 12, 2020

Important note: this post is written by me, an IT professional. I’m no virologist or medical professional, but I have been obsessively following the news and believe I am reasonably literate in the matter.

March 11, 2020

I published the following diary on isc.sans.edu: “Agent Tesla Delivered via Fake Canon EOS Notification on Free OwnCloud Account“:

For a few days now, new waves of Agent Tesla have been landing in our mailboxes. I found one that uses two new “channels” to deliver the trojan. Today, we can potentially receive notifications and files from many types of systems or devices. I found a phishing sample that tries to hide behind a Canon EOS camera notification. Not very well designed, but it’s uncommon to see this. It started with a simple email… [Read more]

[The post [SANS ISC] Agent Tesla Delivered via Fake Canon EOS Notification on Free OwnCloud Account has been first published on /dev/random]

March 10, 2020

March 09, 2020

I just lost an hour or two of my life to this, so I figure I’ll do a small write-up to save future me from having to do the same dance.

March 08, 2020

I’m not a big fan of AMP but I do have it active here on this blog using the official AMP plugin for WordPress in “Reader” (aka “classic”) mode. That’s as far as I want to take it, but suppose (as was the case for an Autoptimize/Critical CSS user I was helping out) you want to redirect all mobile traffic to AMP; then you could use the code snippet below to do just that.

add_action( 'init', 'amp_if_mobile' );
function amp_if_mobile() {
	$_request = $_SERVER['REQUEST_URI'];
	// Only act on front-end requests from mobile devices that are not already AMP.
	if ( strpos( $_request, '?amp' ) === false && strpos( $_request, '&amp' ) === false && ! is_admin() && wp_is_mobile() ) {
		// Append the AMP query variable, reusing '?' or '&' as appropriate.
		if ( strpos( $_request, '?' ) === false ) {
			$_amp = '?amp';
		} else {
			$_amp = '&amp';
		}
		$amp_url = home_url() . $_request . $_amp;
		wp_redirect( $amp_url );
		exit;
	}
}

It might be a little rough around the edges, but it (mostly) gets the job done ;-)

March 06, 2020

I published the following diary on isc.sans.edu: “A Safe Excel Sheet Not So Safe“:

I discovered a nice sample yesterday. This Excel sheet was found in a mail flagged as “suspicious” by a security appliance. The recipient asked to release the mail from quarantine because “it was sent from a known contact”. Before releasing such a mail from quarantine, the process in place is to have a quick look at the file to ensure that it is safe to be released. The file, called ‘Info01.xls’ (SHA256: 89e6e635c1101a6a89d3abbb427551fd9b0c1e9695d22fa44dd480bf6026c44c), has a VT score of 0/59. Yes, you read that correctly: it remains undetected by antivirus solutions… [Read more]

[The post [SANS ISC] A Safe Excel Sheet Not So Safe has been first published on /dev/random]

March 05, 2020

I published the following diary on isc.sans.edu: “Will You Put Your Password in a Survey?“:

Thanks to one of our readers who submitted this interesting piece of phishing. Personally, I was not aware of this technique, which is interesting to bypass common anti-spam filters and reputation systems. The idea is to create a fake survey on a well-known online service. In this case, the attacker used surveygizmo.com, which lets you build an online presence for surveys or feedback forms. Most of these websites are paid services but offer free trials. Enough to build a phishing campaign… [Read more]

[The post [SANS ISC] Will You Put Your Password in a Survey? has been first published on /dev/random]

March 02, 2020

March 01, 2020

February 28, 2020

I published the following diary on isc.sans.edu: “Show me Your Clipboard Data!“:

Yesterday I read an article about the clipboard on iPhones and how it can disclose sensitive information about the device owner. At the end of the article, the author gave a reference to an iPhone app that discloses the metadata of pictures copied to the clipboard (like GPS coordinates).

This is true: our clipboards may contain sensitive information like… passwords, private URLs, confidential text, and many more! Speaking of passwords, many password managers use the clipboard to make the user’s life easier. You just need to paste the data exported by the password manager into the password field. Some of them implement a technique to restore the previous content of the clipboard afterwards. This is very convenient but it is NOT a security feature. Once data has been copied into the clipboard, it can be considered “lost” if other applications are monitoring its content… [Read more]

[The post [SANS ISC] Show me Your Clipboard Data! has been first published on /dev/random]

February 27, 2020

There’s an update for Async JavaScript that needs your urgent attention. Update asap!

[Update] I was warned by WordFence about a vulnerability in Async JavaScript that was being actively exploited. Based on their input I updated the plugin to fix the bug. WordFence in the meantime published a post about this and other affected plugins and with regard to AsyncJS writes:

Async JavaScript’s settings are modified via calls to wp-admin/admin-ajax.php with the action aj_steps. This AJAX action is registered only for authenticated users, but no capabilities checks are made. Because of this, low-privilege users including Subscribers can modify the plugin’s settings.

Similar to Flexible Checkout Fields above, certain setting values can be injected with a crafted payload to execute malicious JavaScript when a WordPress administrator views certain areas of their dashboard.

I published the following diary on isc.sans.edu: “Offensive Tools Are For Blue Teams Too“:

Many offensive tools can be very useful for defenders too. Indeed, if they can help gather more visibility into the environment that must be protected, why not use them? The more information you get, the more proactive you can be, and visibility is key. A good example is combining a certificate transparency list with a domain monitoring tool like Dnstwist: you could spot domains that have been registered and associated with an SSL certificate. It’s a good indicator that an attack is being prepared (like a phishing campaign)… [Read more]

[The post [SANS ISC] Offensive Tools Are For Blue Teams Too has been first published on /dev/random]

Have you noticed how this page doesn’t have a cookie pop-up? Nothing to accept before you can read the content?

February 26, 2020

Bram Van Damme recently shared a really cool method of showing a table of contents on a page and I’ve wanted to implement it on this blog ever since.

February 24, 2020

This is not a structured essay. By letting ideas flow, I want you to share my pain and enlightenment, to travel from the stupid administrative inefficiency of our time to Bitcoin, from the fake marketed revolutions to the real social innovation which is happening right in front of us and might be our only hope to save us from ourselves.

Rambling on my own academic condition

In his essay « The Utopia of Rules: On Technology, Stupidity, and the Secret Joys of Bureaucracy » (a must-read), David Graeber brings an incredible insight into administrative societies and describes the root of all problems in two simple yet powerful sentences: administration used to be a tool to conduct poetic endeavours; today, poetry and imagination are there to serve the administration.

As I’m struggling with my PhD, not because of the research itself but because of the administrative stupidity, the book shed a new light on my suffering.

Yes, my PhD subject had been accepted by some committee. But, for some reason, I was not a PhD student, because I had missed the « student » part of it. Nobody told me that I also had to become a student. And when I understood it, the absurdity of the procedure made me throw my laptop in rage after two weeks of effort.

Suffice it to say that I had to find a certificate of employment for every job I held between my master's thesis and now. That I had to dig up every possible certificate, scan them and upload them into a buggy web interface. That this took me several weeks (given the number of documents) and that, on completion, it basically sent me an email with my uploads attached, telling me that I had to go to an office to hand over a list of documents, on paper, which was basically the list I had uploaded (with several unexplainable differences).

Previously, I had already had to present the original version of my diploma to an administrative service that… gave me the diploma 12 years before. I had to present it not just once but on three different occasions, because I was a lecturer, a researcher and a PhD student: three different persons called at three different times. The best part of the joke is that I asked what I could do if I had lost the paper. The answer was straightforward: « As you are from this university, our service would be able to give you a copy. »

This deep and unexplainable absurdity was the inspiration behind my short story « Ticket to Hell », in which an atheist man realises that, after our death, we go to the hell we feared the most. Being a true materialist atheist, he never feared any kind of hell and finds himself in a maze of administrative procedures. Every sequence in the story is something I experienced first-hand. I had so many anecdotes that I could not even put them all in the story. I removed some as « too unbelievable ».

But what strikes me the most is not that we have completely absurd and stupid procedures. It’s the fact that nobody seems to realise it. When I pinpoint absurdities, it looks like I’m always the one not understanding them. Or, when I really have a point, I get that terrible yet universal answer: « Well, that’s how it is. You will not change it, better get used to it. »

As Graeber points out, all human structures are now huge marketing bullshit machines. Everything has to be sold; no new idea can even be envisaged if we are not sure we can sell it. And selling became an administrative procedure in itself.

Most academics now have a full-time job: selling their research to some anonymous bureaucrat in exchange for a grant.

Grants are so complex and need so much paperwork that, in my country, there are grants to fund the work needed to get a grant. I’m not making this up. I was really on a project to get a 25k€ grant that would enable us to work on getting a larger grant. The small grant was part of the project from the start because even the bureaucrats realised how hard it was to get the big one. I simply refused to start filling out paperwork to receive money to be able to fill out even more paperwork.

Universities employ staff paid full time to help academics get grants. I met one, and he suggested a grant that would perfectly fit my area of research. There was a catch: a sixty-page form to fill in.

Well, what could take so many pages to fill in, I asked?

Not the research description, because nobody cared. But I had to explain what product my research would bring to the market in 2 to 4 years. I had to already give the names of the companies with whom I would be partnering to distribute the product. And I had to give an estimate of how many jobs the product would create. A product that required a scientific breakthrough I had yet to make.

As I was in shock, I looked the man in front of me in the eyes.
— How did you arrive in this office? I don’t understand…
— Well, it’s not by choice. I’m an astrophysicist.
— What? But why are you not doing cool astrophysical stuff?
— Because there’s no public money for astrophysics. So I had to get this job, there’s money for that.

Let that sink in. There’s not enough public money to pay scientists, but there’s public money to help scientists fill out paperwork to get public money? Getting a grant is hard, by design. But, as I realised, having a grant is even worse!

The grant I had for one year was subject to a rule which plagues most of the people in my situation: timesheets.

It means that, besides several regular reports (which seems sensible), I had to fill out a form on a very old and buggy platform. The kind of platform that requires Internet Explorer 6 and ten clicks to edit a single field.

How often?

Every day!

Every damn day, you have to tell the platform exactly what you did, on an hourly basis. As the platform was a pain to use, it was agreed to do it once a month. Filling in your whole month was supposed to take « only a few hours ».

That’s silly in essence, but the worst is yet to come: those timesheets are carefully studied by some bureaucrats. If they consider that what you filled in is not really relevant to your research, they don’t pay the grant for that day. The fact that they have no way to understand your research is irrelevant.

But what’s a bit more fun is that the grant is paid to the university, which pays you a salary based on a contract. So if you don’t fill in the timesheet, you are paid but the university is losing money.

That’s why there are multiple levels of supervision. First, for every 3 to 10 researchers, there’s now a project manager. His job is to make sure every researcher does their administrative tasks. He also often carries out some of the administrative tasks that are common to the team but, in the end, he can’t help the researchers beyond telling them what to do. And, more often than not, he doesn’t know. When you are conducting research across multiple departments, you have to deal with multiple project managers who often have contradictory visions.

Then you have the department secretaries, checking that all boxes are green for every researcher. Which makes things funnier because it means, from time to time, filling in an Excel sheet to send them the data you already entered on other platforms.

To summarise: it’s hard to find money to do research, but there’s money to pay people to help you get the grant, to pay people to manage the grant, and to pay people who spend their time trying not to pay out the grant or, at least, making sure that being paid is not easy. Money from your taxes goes to Europe, which distributes it, in my case, to the Walloon region, which gives it to the university, which gives it to me, with each layer taking a cut and trying to make things harder for the layer below.

Seems silly? It’s only the beginning…

To ensure that a timesheet is « good », several categories of tasks are agreed upon before the research project starts. When doing your timesheet, you must choose one category for every day, then explain what you did in that category on that specific day. And it should make some sense. My project initially had 12 very specific categories. None of them were even remotely adapted to my work, because the grant was a generic one given before I arrived. Remember that sixty-page form to get a grant? It’s just for fun. A grant is very often obtained by a professor who then spreads it around his team. In my case, the grant had gone unused for several years, so I had to write reports for a period before I arrived to make it look better.

None of that would be an issue by itself, except that I was instructed several times that, in the timesheets, I was not allowed to read, to attend conferences or lectures, or to write articles during my time. I had to « discover », « build » or « test ». Teaching was not an option either.

I was part of a meeting where the chief bureaucrat responsible for my grant and several others met with all the professors and researchers involved. Professors travelled from the whole southern part of the country for this meeting. For one afternoon, those very bright minds (I was the only non-PhD in the room) had mainly one subject of discussion: the format of the timesheets.

At one point, as the bureaucrat was complaining about us not filling in the timesheets correctly, the professor responsible for everything told her: « Nobody here denies the importance of timesheets but… ». I loudly said: « Yes, I do! », but I was quickly shushed by my colleague, and I decided I would stay calm as it was my very first month on the job.

This is not simply stupid or a waste of time. It’s barbaric, a torture of the mind. I was suffering like I never had before. My wife made me quit the job because I was always in a terrible mood. The job didn’t want me either.

When I asked my PhD supervisor, « but when do you do research? », he answered: « Weekends. During the week, you buy the legitimacy to research during your free time. »

To add insult to injury, nobody seemed to agree with me. It was seen as an inevitable duty which takes less time if you don’t complain. I ended up writing a timesheet generator that would produce random sentences that appeared to be linked to my project. When I told my colleagues, thinking they would appreciate it, they were angry. Some were even afraid for the reputation of the team. They told me it was lying.

« It’s lying anyway. You can’t get sick, you can’t read, you can’t write, you can’t attend lectures or conferences. You even have to put yourself on holiday when you are at a public conference, in case the bureaucrats realise that you were not in your office that day. But that’s OK, because you can timesheet some fake work when you are on a private holiday granted by the university. All of that is OK for you, but generating it randomly is not? »

Apparently, it was not. Publishing this piece alone is very stressful, because I fear that most people will see it as an exaggeration. It sounds like a ramble from an eccentric lunatic who was unable to comply with rules that others have accepted for decades.
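For the curious, such a generator needs nothing more than gluing random project-sounding fragments together. A toy sketch of the idea (the vocabulary below is invented for illustration, not taken from the actual generator):

import random

VERBS = ["Refactored", "Benchmarked", "Validated", "Instrumented"]
OBJECTS = ["the ingestion pipeline", "the test harness", "the module interfaces"]
GOALS = ["to improve reproducibility", "for the next milestone", "as planned in WP2"]

def timesheet_line():
    # One plausible-looking activity, assembled at random.
    return f"{random.choice(VERBS)} {random.choice(OBJECTS)} {random.choice(GOALS)}."

# Print one line per working day of the month.
for day in range(1, 21):
    print(f"Day {day:02}: {timesheet_line()}")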

For David Graeber, every groundbreaking invention comes from eccentric individuals who can’t fit well in a rigid structure. I’m probably one of those.

Not that I will ever make anything groundbreaking but, as Graeber said, the only way to foster innovation is to get ten eccentrics and let them do whatever they want. Nine of them will waste your time, but the tenth will invent something worth everything else.

My suffering in this system probably makes me one of the nine others. But if eccentrics cannot become researchers, what will you get from research? The answer is simple: papers. Lots of papers. Academics are not trying to invent or discover anything; they are mostly trying to get papers published. The word « paper » itself is an ode to an anti-poetic administrative society.

From Bitcoin to blockchain

It’s no accident that Bitcoin was invented out of nowhere by the anonymous Satoshi Nakamoto. Bitcoin could not have been invented by a private company, not to mention academia. It had to spring out of its open source roots. But even open source projects quickly tend to become bureaucratic structures, with processes and acclaimed leaders, where marketing your contribution is nearly as important as the quality of your code.

By being anonymous and then disappearing, Satoshi Nakamoto was truly groundbreaking. Bitcoin is now part of the common good. Everyone can use the code, improve it, try to convince people to use it. Through Bitcoin, the sociological concept of a « fork », where a team splits and each side continues the project with its own vision, became purely technological and functional. The only remaining centralised bit is the name: who can use the name « Bitcoin »? That’s why we had Bitcoin Cash, Bitcoin Gold, Bitcoin SV and so on.

By replacing fiat money with a decentralised open source project, Bitcoin was set to destroy governments and bureaucracies. Other projects such as Aragon or Colony.io clearly try to replace a centralised structure with a lean, adaptive, decentralised one.

Then came the term « blockchain ».

While Bitcoin was mostly seen as a shady scam, a way to buy drugs, even by academics in the field, blockchain quickly carried an impression of apolitical neutrality. Blockchain is boring. Blockchain is heavily studied, theorised. Blockchain is exciting because it can carry smart contracts. It can be used to track land ownership and pieces of art. It can be used for logistics. Let me underline it once again: who finds « contracts » and « logistics » exciting?

While Bitcoin was set to destroy bureaucracy, blockchain is set to serve it.

Bureaucratic capitalism, an oxymoron according to David Graeber, has always used the same strategy: if you can’t destroy it, embrace it. Even the word « revolution » is now used to describe a smartphone with a bigger screen. The picture of Che Guevara is one of the most popular images printed on t-shirts produced in China by underage children, then sold at a high price to western consumers.

The revolution has been marketed.

The same happened to Bitcoin. From an anonymous subversive project with a clear political message (block zero includes a clear reference to the bailout of the banks), it has become an anonymous commercial technology developed by IBM and studied by scholars. All that was needed was a change of name to « blockchain ». Blockchain, which is supposed to represent a decentralised technology, even evolved to allow a distinction between permission-less and permissioned blockchains, the latter meaning that a central authority has to give permission to the users of that specific blockchain. Yes, you read that right: blockchain decentralisation can now be centralised. It’s just a cool new name for « distributed database ».

The remaining true chaotic believers in Bitcoin or other cryptocurrencies have simply been bought. How can you make the revolution if you are insanely rich and profiting of it? Would you try to subvert a system with your bitcoins if the system offers you literally millions of dollars for them? Better sell at least some of them on an exchange and buy a Lambo while it’s the good timing.

By the way, those crypto-exchanges are heavily centralised and regulated. The cryptocurrency fanatics themselves are now impatiently waiting for regulations on their assets in the hope that this would send a green light to other investors and make the price skyrockets.

Another anecdote illustrates this heavy trend. The Counterparty project aimed to build a decentralised and unregulated exchange based on the Bitcoin protocol itself. Technically, this is simply awesome. Unlike the many ICOs (Initial Coin Offerings, the launch of a new cryptocurrency) that were simply scams or weak projects designed to get easy money from naive investors, Counterparty was built in such a way that the developers themselves would not have any economic advantage over other users. Most projects offer tokens that you can buy with bitcoins or dollars that go to the developers. To buy XCP, the Counterparty token, you have to destroy bitcoins, ensuring they can’t be used by anyone else later.

This is deeply groundbreaking.

Interestingly enough, the XCP token was, for some time, listed on some centralised exchanges before being removed. When Counterparty or a similar project succeeds and blacklisting or banning it is no longer enough, there will still be the option of corrupting the project’s believers by making them rich.

After all, the strategy works well with artists and researchers. Silence them with stupid bullshit jobs and complex administrative procedures. If they still manage to make their voices heard, turn them into a product; make them rich and popular so they no longer want to change the system. Or, at the very least, so they lose the credibility to do it.

Killing innovation in the name of fostering it

The vast complexity of our society becomes understandable as soon as you accept that any system tries to survive by avoiding and killing any change. While the naive version is to actively prevent any change, which ultimately leads to revolution, bureaucratic capitalism pretends to foster change, announcing that we are living in an age of unparalleled progress. But, to protect the children, this progress should be regulated. A bit like religious inquisitors burning and killing people in the name of love, pretending that this is the only way to create heaven on earth.

Patents are a particularly crystal-clear instance of such hypocritical nonsense. In most industries, patents are the very metric of innovation. The words « patent pending » are, for most consumers, synonymous with an innovative solution to a problem.

Yet the very purpose of a patent is to prevent someone else from making use of an invention. Bitcoin itself uses an algorithm called ECDSA while a better solution existed for the same task: Schnorr signatures. Unfortunately, Schnorr was patented, demonstrating that a patent is a way to actively kill innovation or, at least, to slow it down. The work to migrate Bitcoin from ECDSA to Schnorr is an order of magnitude bigger than simply starting with Schnorr would have been. All that energy and time could have been saved without patents.

Getting a patent involves a nightmare of paperwork and, besides counsel fees, means paying quite a lot of money. This ensures that nobody but big bureaucratic corporations can get a patent.

Having a patent validated doesn’t mean anything. In fact, it only means that some anonymous bureaucrat at the patent office didn’t find clear prior art in other patents. A patent can still be invalidated in court. But who can afford the legal fees to go to court? Big administrative corporations, of course.

In my career, I once worked for a multinational company in the automotive industry. As a creative R&D engineer, I was given carte blanche to come up with a prototype for a new feature. In only a few weeks, I came up with an algorithm that would automatically guess your destination based on your habits. This simple tool could automatically set up the navigation system of the car as soon as you entered it, greatly improving the experience and warning you about traffic on roads for which you would not take the time to enter a destination (there were no smartphones at the time).

My prototype worked remarkably well and was tested with the data of multiple colleagues. But I was instructed to turn it into a patent. I even received training on how to make my patent as vague as possible, so it would sound like it covered things already patented elsewhere. That way, the company could use it to attack other companies or to defend against such attacks. Finding this unethical, I could not do the required work, and my algorithm was never released. I was told explicitly that nothing could be released unless patented; that was company policy.

Twelve years later, I have yet to see a single in-car navigation system that can do what a young engineer invented in a few weeks, something that was highly praised by its testers.

This says a lot about the state of innovation: through patents, innovation has been transformed into a bureaucratic process mastered by lawyers. There are even academic master’s degrees in « innovation » to ensure that no idea can spring up spontaneously. The inventors and engineers work for nothing unless they can find the tiny bit of innovation that pleases the marketing department, the legal department and the whole hierarchy at once.

Innovation in academia?

Let me tell you, again, two anecdotes. When I was a student, one of my philosophy professors asked me to take a philosophical stance on a given subject. I explained it successfully. He followed up by asking which philosopher said that. I said I did. He was angry and told me that the whole point of the course was not to take my own stance but to know what true philosophers think. My answer was immediate:
— Can you prove that I’m not a true philosopher?
This ended the examination with the lowest passing grade.

Another professor asked me to analyse a 20-page text that had been handed to us two weeks before. As I took the text out of my bag, he told me I was supposed to have read it beforehand.
— Yes, of course, but I did not learn it by heart. Did you really expect me to analyse a text without even looking at it?
— Well, that’s how we have always done it.
— I’m not asking what you did, but whether you find it smart or not.
— That’s not the point.
— As this is a philosophy class, I think this is the very essence of the point.
— Forget it, tell me what you remember about the text.

So I did, but I pointed out that the text was really badly written and that some sentences were unclear. I cited a sentence I remembered as notable, which seemed to contradict another part of the text.

— This sentence is not in the text. It’s not what the text is about.
— Well, let me show you, I underlined it (I made the gesture of taking the text out of my bag).
— No! You can’t take out the text! You could have notes on it! It’s forbidden! said the professor in a panicked voice.
— Well, then let me show you the sentence on your copy of the text.
He handed me the paper. I was surprised: it was in English, while the text he gave us was in French.
— I don’t understand, mine was in French.
— Indeed, replied the professor. I gave you a translation I found. I thought it would be easier for you. But I prefer to read it in the original version myself.
— Excuse me but… did you ever read the translation before giving it to us?
— Not really…
— So, basically, you are trying to judge how well I memorised a text you didn’t even read yourself?
— …
— I think there’s nothing to add.

I was so angry that I walked out of the room without a glance at the professor. I was 20 and willing to learn philosophy. Once again, I received the lowest passing grade. A way to never see me in their classes again.

Those anecdotes are not only about two professors. They are the very essence of our system. When I write a blog post detailing some ideas, the negative reactions are always about the fact that I’m not original, just saying the same thing as someone else. Or that I’m saying the opposite of some famous people. Or that, if I can’t cite who said it before me, then I can’t say anything.

All academic social sciences seem plagued with the same disease. You can’t have any opinion before reading and knowing all the opinions of the « great predecessors ». And once you know them, you are not allowed to differ from what they say. And you are not allowed to think like them because that would be plagiarism.

The summary is clear and is what my philosophy professors were trying to teach me: you are not allowed to think at all. You are only allowed to recite the « great ancients » and to analyse what they said. This is, of course, reminiscent of every decadent civilisation.

Academia has been turned upside down. People no longer go to university to learn but to get an administrative form (called a « diploma ») that will, they hope, allow them to land a useless but prestigious clock-punching job with a good salary in an administrative society. They may also use the opportunity to meet other people and build a network. Learning? When some skill becomes necessary, it will be learned on the fly, day to day, at the cost of losing depth and the big picture of the knowledge. If really needed, reading a book is worth more than spending hours on a chair listening to someone who has a full-time job getting grants for his or her PhD students, and who was selected for that job for being among the best at finding grants and memorising answers to arbitrary exam questions.

We are not doing innovation, we are doing marketing everywhere.

It’s hard to convince ourselves that we live in an incredible age of innovation. After all, all the great science-fiction achievements stopped with the 20th century. We have yet to go back to the moon. We are excited to follow a small robot on Mars, something we already did in the seventies. Sure, we have a better camera, but that’s all. We are excited by electric cars, something that has existed since the inception of cars themselves, only because we had been holding back any innovation in that field. Imagine telling someone from 1970 that, 50 years later, we would get excited by electric cars, by rockets that barely put satellites in orbit and by having a phone in our pocket. A phone that takes all our attention, all the time.

As Graeber pointed out, huge bureaucratic infrastructures have historically served what he calls a « poetic purpose ». Poetic is neither good nor bad. It’s basically irrational. Like building pyramids, cathedrals or rockets to kill other countries or to send a handful of men to the moon.

But the bureaucracy is so efficient, so strong, that in order to preserve itself it now kills any means of innovation, any sense of purpose. Something as seemingly important as « saving the planet so we can live on it » seems impossible because, well, we have to put the administrative procedures in place first. The Kyoto, Rio and Paris agreements are nothing but the transformation of the « saving the planet » purpose into a bunch of administrative forms. It simply can’t work.

The administration doesn’t want to change and, thus, doesn’t want to save the planet. It is quite simple to demonstrate: vastly increasing taxes on fuel looks like a sensible way to reduce the consumption of that fuel. It has never been done at scale, not because of rational arguments but because « it might cost jobs, it might slow the economy ». Well, the whole point is that jobs and growth are what is killing the planet.

We have built a machine that creates meaningless jobs in order to boost an arbitrary virtual metric, and nobody, by design of the machine, can shut it down, even though keeping it running means destroying the whole planet.

As administrative structures seem to go hand in hand with centralised power, our only hope to get back to innovation and to poetic technology is having a look at what can be decentralised.

And this is why Bitcoin is so exciting. Bitcoin is truly decentralised. The defunct Moonshot Express project aimed at sending a bitcoin wallet to the moon. Everybody could then decide to send money to that wallet. Claiming the money would involve going to the moon and bringing back a small metal piece. That would be innovative, groundbreaking. Poetic.

In my lengthy introduction, I tried to demonstrate that even « scientific taxes » do not pay researchers. They are used to build a huge bullshit factory around the word « research ». The same applies to entrepreneurship and innovation. There are even grants to help startups get patents!

Thanks to projects like Moonshot Express, citizens could instead send their money directly to the moon, fostering informal technological cooperation. As the r.loop project demonstrated, open source enthusiasts are ready to move from purely software projects to literal rocket science. All in a purely decentralised way. Decentralisation is becoming the politically correct word for anarchism.

This will be heavily disruptive. As Ladrière and Simondon explored, a technology is disruptive by essence. If it does not disrupt society, it is not innovation but product marketing.

As the visionary Vinay Gupta pointed out, technologies like Bitcoin and Ethereum, once you remove the « blockchain » marketing crap, enable a new kind of project where decentralised investors provide resources and direction, hoping to get benefits or simply for the sake of supporting the project. Transparency and collaboration become the new default. Which is the opposite of an administration, which can survive only thanks to its opacity.

Unfortunately, we are still living in a bureaucratic world dedicated to marketing. Today’s innovation has been transformed into a pure marketing machine. A company doesn’t sell innovation anymore. It innovates in order to serve the marketing.

Kickstarter and other crowdfunding websites are the epitome of this 100% marketing, no product approach. On Kickstarter, the customer is sold products that don’t exist yet and that have never been tested.

All Kickstarter pages are basically the same with a clean-looking video, some sketches of the product and pictures of smiling people using a fake version of the product. The customer is then asked to pay in advance and to wait.

Which, of course, means that the product is often not completed successfully. And when it is, one quickly realises that what looked good on video is, in fact, unusable in real life. The software is crappy, with an unfinished proprietary cloud platform and no iteration. But that’s not a real problem because, most of the time, we have forgotten that we paid for the project two years earlier. The product is less important than looking cool on social networks. I like to call these « false good ideas ». It often ends with the following discussion:

— Wow, look at that cool project.
— Well, it looks cool at first glance but, think about it. This would be unusable in real life.
— Oh, you are so negative. The idea is cool.

The marketing approach is so systematic that there are marketing agencies specialised in Kickstarter marketing to get you contributors. They even go as far as producing your « product » with your logo on it.

We could see Kickstarter as a contribution to research: giving money without expecting any return. But the worst of all is probably that, given free rein for imaginative products, all we see on Kickstarter are plastic smartphone holders, laptop backpacks or GPS chips to spy on our kids.

We are so used to the imaginary world of marketers that we pay for the right to dream of using a product we have no need for (otherwise, we would not accept waiting months or years for it). Driven by carefully designed dopamine rushes, we tend to spend every waking moment in the curated world of those targeted ads.

Can we still have creative ideas?

As Graeber points out, the Internet, like patents, kills every inspiration because « it has already been done and it failed. Or, worse, because it succeeded. » Or, as I learned throughout my career: « If nobody has done it, there’s probably a good reason. »

Inventing is impossible. Only selling is acceptable.

Blockchain was no exception, with the ICO frenzy. While ICOs were at first seen as a sensible way for a project to raise money, the ICO bubble made people realise that you only need to create a token (a few lines of code on the Ethereum blockchain) and a good website to raise millions of dollars.

As I explained this to my circles, everybody asked me why I was not launching my own cryptocurrency. ICOs are not about the underlying project anymore anyway. It’s purely about selling yours as the next Bitcoin that will hit $10,000, despite all the economic fallacies.

Given all those gigantic scams, guess what the public asked?

More regulation of ICOs! More administrative procedures.

ICOs are still an incredible way to fund a project and to bring new kinds of governance to decentralised systems. If we get it right, Bitcoin and subsequent technologies could become the administrative layers that render bureaucracy obsolete.

Bitcoin itself is a pure piece of poetic technology.

But do we really want to break free of our virtual dream world?

After all, the rise of administrative bureaucracy as a goal in itself is heavily correlated with the rise of television which, as pointed out by Michel Desmurget in TV Lobotomie, is highly correlated with an education crisis.

Now that we constantly carry in our pocket a television tailored to our short-term desires and tastes, now that we have forgotten how to focus, how to think for ourselves, how to be creative (we grow up surrounded by screens and, as Desmurget points out, this has severe and long-lasting effects on our brains), are we even willing to make the new revolution?

Not by cutting off heads but by throwing away our ad-filled screens, by no longer giving our hours to meaningless work, by refusing to do things that don’t make sense, even if we are paid to do them (our jobs) or forced to do them (legal forms)?

Don’t count on « democracy ». Elections are only used to choose some public bureaucrats. A ballot is only one kind of administrative form. The whole field of social choice theory demonstrated that there’s no objective election system. Arrow’s theorem can be oversimplified as « voters are not deciding anything, it’s the voting system that makes the result » or « give me the result you want and I will design a voting system that makes it appear as the legitimate result for every voter ». Nevertheless, academic discussions about voting systems are still limited to « how can we vote using the Internet » or, worse, « how can we build a social media platform to make people vote » (I’m not making this up; those were real discussions I was involved in with academics and high-ranking European politicians).

The future cannot be administratively reduced to votes. As Vinay Gupta likes to say, the future is like another country. We should have a diplomatic relationship with the future instead of only sending it our trash. We should have a system that envisions the future as a part of ourselves, something fluid, lean, permanently changing. Unlike the « let’s leave that for after the elections » philosophy.

We like to criticise politicians. But they are doing exactly what the system asks them to do. You can throw them all out; nothing will change because they are puppets of the system. A good way to illustrate this: if you shut down all media, it would be impossible for the vast majority of the population to know which ideology won the latest elections and is running the system. Except for some edge cases (which might be very important, like being an immigrant without a passport in a country suddenly ruled by the extreme right), the system mostly runs by itself. The reaction time between a political action and its effect (when there is one) is simply longer than the time between two elections. And after an election, the winner usually spends time undoing everything the previous incumbent did. In the end, nothing changes. I’m from a small country called Belgium, where the electoral game is so complex and forming a government requires agreement between so many parties that it is borderline impossible.

This has left the country officially without a government for very long stretches, sometimes more than a year. It is, for example, currently the case.

In reality, nobody notices. Except for the media, which could just as well be describing something happening on another continent. There’s one more exception: bureaucrats, who end up taking decisions, or waiting for decisions, about which form should go where. If you are not a bureaucrat and not waiting for money from them, life is in fact easier without a government.

Advertising and media, which are in fact the same stuff, are building a world where politicians are supposed to take important decisions, where we should vote and fill out forms to create jobs in order to buy more stuff that will create even more jobs and make us happy.

Screens are made addictive to keep us from disconnecting from this virtuality, to prevent us from dreaming about innovations and seeing the world with naked eyes. Resistance looks futile; our future is doomed. Our only hope lies in poetry.

If we throw away our screens, if we start boycotting every single medium, if we refuse to do absurd jobs, if we hang marketers and advertisers, then we will be able to see the poetry. A poetry which could, maybe, save us all.

Photo by Dmitry Ratushny on Unsplash

I am @ploum, speaker and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real source of motivation and recognition. Thank you!

This text is published under the CC-By BE license.

February 23, 2020

The “diff” command is a very nice tool in *NIX environments to compare the content of two files. But there are some situations where diff is a pain to use. The classic case is when you need to compare many files from two different directory trees (for example, two different releases of a tool) located in the following directories:

/data/tool-1.0/
/data/tool-2.0/

Diff has a feature to display differences between files in a recursive way:

diff -r /data/tool-1.0/ /data/tool-2.0/

Nice, but not convenient when you just want to compare a few files while browsing the directory tree. For example, if I need to compare foo.h between the two trees:

$ cd /data/tool-2.0/src/include
$ diff foo.h /data/tool-1.0/src/include/foo.h
1c1
< Line1
---
> Line2 

When you need to compare many files, it’s a pain to always type long paths (yes, I’m a lazy guy). I wrote this quick & dirty script to let me compare files between two directory trees without re-typing paths all the time:

#!/bin/bash
# mydiff: compare the file given as last argument against its counterpart
# in a second directory tree. Any other arguments are forwarded to diff.
test -z "$SOURCEPATH" && echo "\$SOURCEPATH not defined" && exit 1
test -z "$DESTPATH" && echo "\$DESTPATH not defined" && exit 1
# Map the current directory from the source tree onto the destination tree
NEWPATH=$(echo "$PWD" | sed -e "s~^$SOURCEPATH~$DESTPATH~")
FILENAME=${@: -1}    # last argument: the file to compare
NARGS=$(($#-1))
ARGS=${@:1:$NARGS}   # remaining arguments: forwarded to diff
diff $ARGS "$PWD/$FILENAME" "$NEWPATH/$FILENAME"

Create your two environment variables and just type the filename you want to compare:

$ export SOURCEPATH=/data/tool-2.0
$ export DESTPATH=/data/tool-1.0
$ cd $SOURCEPATH/src/include
$ mydiff foo.h
1c1
< Line1
---
> Line2 

Note that extra arguments from the command line are forwarded to diff by the script. As I said, quick & dirty…

[The post Diff’ing Some Files Across Similar Directory Trees has been first published on /dev/random]

February 22, 2020

I published the following diary on isc.sans.edu: “Simple but Efficient VBScript Obfuscation“:

Today, it’s easy to guess if a piece of code is malicious or not. Many security solutions automatically detonate it in a sandbox. This remains quick and (most of the time still) efficient to get a first idea about the code behaviour. In parallel, many obfuscation techniques exist to avoid detection by AV products and/or make the life of malware analysts more difficult. Personally, I like to find new techniques and discover how imaginative malware developers can be when implementing new obfuscation techniques… [Read more]

[The post [SANS ISC] Simple but Efficient VBScript Obfuscation has been first published on /dev/random]

I recently got a PuTTY private key sent to me that I wanted to use to log into a remote server.

February 21, 2020

I published the following diary on isc.sans.edu: “Quick Analysis of an Encrypted Compound Document Format“:

We like when our readers share interesting samples! Even if we have our own sources to hunt for malicious content, it’s always interesting to get fresh meat from third parties. Robert shared an interesting Microsoft Word document that I quickly analysed. Thanks to him! The document was delivered via an email that also contained the password to decrypt the file… [Read more]

[The post [SANS ISC] Quick Analysis of an Encrypted Compound Document Format has been first published on /dev/random]

February 14, 2020

For a few months, I’ve been writing less often on this blog, except to publish my conference wrap-ups and to cross-post my SANS Internet Storm Center diaries. But today, I decided to write a quick post after spending a few hours debugging a problem with my mail server…

It started with a notification from a contact who found one of my emails stored in the junk folder of his Office365 account. In the past, I had the same issue a few times but did not pay attention. This can always happen due to a “borderline” expression in the message. I asked him to provide all the headers of my blocked email and I found this:

Incoming X-Forefront-Antispam-Report: CIP:x.x.x.x;IPV:;CTRY:DE;EFV:NLI;SFV:SPM;SFS:(10001)(199004)(189003)(7596002)(86362001)(7636002)(1096003)(54906003)(63350400001)(26005)(8676002)(36756003)(336012)(53546011)(58800400005)(2616005)(966005)(5660300002)(6966003)(4326008)(33656002)(246002)(22186003)(356004)(6266002)(6862004)(107886003)(33964004)(6666004)(112026008);DIR:INB;SFP:;SCL:5;SRVR:PS1PR0302MB2507;
H:mail.xxxxxxxx.xx;FPR:;SPF:TempError;LANG:en;
PTR:mail.xxxxxxxx.xx;A:1;MX:3;CAT:SPOOF;

To decode why it was categorised as spam, Microsoft provides a page with all the details. Apparently, Microsoft was not able to check the SPF record (“SPF:TempError” means a temporary DNS failure occurred). I don’t know why, because the DNS servers hosting the zone with the SPF records are available! Anyway…
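Before opening that page, you can already eyeball the interesting fields by splitting the header yourself; a minimal sketch in Python (the header value below is abbreviated):

header = "CIP:x.x.x.x;CTRY:DE;SFV:SPM;SCL:5;CAT:SPOOF"  # abbreviated example
fields = dict(part.split(":", 1) for part in header.split(";") if ":" in part)

# SFV:SPM means the filter marked the message as spam, SCL 5 is the
# resulting spam confidence level, and CAT:SPOOF is the detection category.
print(fields.get("SFV"), fields.get("SCL"), fields.get("CAT"))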

In the meantime, I started to investigate on my mail server what could cause a message to be flagged as “suspicious” or “malicious”. To achieve this, I reviewed all the security controls related to SMTP:

  • SPF? ✔
  • DMARC? ✔
  • DKIM? Oops… It was not working, outgoing emails were not signed.

And it started… I dived into my Postfix config and my OpenDKIM config; I checked the keys, the configured domains, the selectors, the DNS records… For most of us, Google is the first step to get some support. Let’s search for potentially useful keywords. Ouch, the first hit suggests stackoverflow.com… Not the most reliable place on the Internet 😉 Finally, the problem was fixed and DKIM is now working properly again!
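For the record, the quickest sanity check is to verify that the DKIM public key is actually visible in DNS. A minimal sketch with the dnspython library; the selector “mail” and the domain are placeholders for your own values:

import dns.resolver  # pip install dnspython

selector, domain = "mail", "example.com"  # placeholders: use your own values
answer = dns.resolver.resolve(f"{selector}._domainkey.{domain}", "TXT")
for rdata in answer:
    record = b"".join(rdata.strings).decode()
    print(record)  # a healthy record looks like: v=DKIM1; k=rsa; p=MIIB...

The opendkim-tools package also ships opendkim-testkey, which performs the same lookup plus a validation of the key pair.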

If you step back a few seconds, this is a good example of why the security of many systems keeps failing… Everything is way too complex, and many people are afraid of implementing more security controls. Their motto sounds like:

If it works, don’t break it! It has been running like this for a long time, nobody knows exactly how…

Here are some examples (feel free to share yours via comments):

  • SPF records are kept in “safe mode” to avoid outdated filters blocking email delivery
  • Certificates expire without notification (always on December 31st at 23:59)
  • DNSSEC is a pain to maintain with the regular renewal of keys (did you read the latest story about DNSSEC?)
  • Docker credentials are difficult to integrate into some applications with Docker secrets
  • Let’s Encrypt SSL certificates are nice, but certbot may fail for multiple reasons… the most classic one being an overly strict redirect from HTTP to HTTPS
  • Products’ documentation is incomplete and does not mention which ports/IPs/domains to allow; people configure “any to any” to be certain the application will work
  • Mail encryption with PGP is out of the competition (people will never understand the difference between public and private keys 😉)
  • Etc…

Finally, while working at customers’ premises, I see a trend towards making platforms more and more complex. It seems that today everything must be “agile”, must be “orchestrated”… In the end, we lose control!

Feel free to share your thoughts!

[The post Wondering Why Security Keeps Failing? I’ve One Idea… has been first published on /dev/random]

I published the following diary on isc.sans.edu: “Keep an Eye on Command-Line Browsers“:

For a few weeks, I’ve been searching for suspicious files that make use of a command-line browser like curl.exe or wget.exe in Windows environments. Wait, you were not aware of this? Just open cmd.exe and type ‘curl.exe’ on your Windows 10 host… [Read more]

[The post [SANS ISC] Keep an Eye on Command-Line Browsers has been first published on /dev/random]

February 13, 2020

Icinga2 Ansible collection

I've been using Icinga2 at various customers and on our own infrastructure since it was released. I never released the roles on Ansible Galaxy, since I didn't consider the individual roles a useful product on their own, and it would have required quite some effort to show how they're supposed to work. One of the disadvantages is that, after a few years, I had at least three different versions: one for Debian only, one that also supported CentOS, and one for enterprise customers with a Satellite server. All of them had their own checks and tweaks, and it was not always easy (or allowed) to keep them in sync.

At my previous enterprise customer, management was open-minded enough to allow an open source license for our automation work, and we were able to add molecule testing and a few neat features. Pieter Avonts had the brilliant idea of supporting flexible Icinga host vars based on Ansible inventory variables, which solved one of the important issues we had with Ansible variable inclusion in host.conf and checks (previously you could only use variables defined in group_vars/all).

Once Ansible collections became a thing with Ansible 2.9, the time was right to get started. Now I finally consider it ready to release. Check out the Icinga2 collection on Ansible galaxy and the code on Github. It supports Debian, Ubuntu, CentOS and RHEL (and it should still support Satellite-based repos with a few configuration tweaks). The molecule tests have some examples to get you started, but I'll try to write a few blog posts with practical examples in the future.
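To give a flavour of what inventory-driven host vars could look like, here is a hypothetical sketch; the variable names below are illustrative only, the real interface is documented with the collection:

# host_vars/web01.yml -- hypothetical variable names, for illustration only
icinga2_custom_host_vars:
  os: Linux
  http_vhosts:
    "/": {}
  notification_period: "24x7"

The point of the feature is that per-host monitoring configuration can live next to the rest of the host's inventory data, instead of being limited to group_vars/all.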

The title says it all; I just pushed the first beta of Autoptimize 2.7, which has some small fixes and improvements but which, most importantly, finally sees the “Autoptimize CriticalCSS.com power-up” fully integrated.

Next to the actual integration and a switch to object-oriented code for most (but not all) of the AOCCSS files, there are some minor functional changes as well, the most visible ones being buttons to clear all rules and to clear all jobs from the queue.

I hope to release AO27 officially in April, but for that I need the beta thoroughly tested, of course. The most important part to test is obviously the critical CSS logic, so if you have the power-up running, download/install the beta and simply disable the power-up to have Autoptimize fill that void (if the power-up is active, AO refrains from loading its own critical CSS functionality).

February 07, 2020

I published the following diary on isc.sans.edu: “Sandbox Detection Tricks & Nice Obfuscation in a Single VBScript“:

I found an interesting VBScript sample that is a perfect textbook case for training or learning purposes. It implements a nice obfuscation technique as well as many classic sandbox detection mechanisms. The script is a dropper: it extracts from its code a DLL that will be loaded if the script is running outside of a sandbox. Its current VT score is 25/57 (SHA256: 29d3955048f21411a869d87fb8dc2b22ff6b6609dd2d95b7ae8d269da7c8cc3d)… [Read more]

[The post [SANS ISC] Sandbox Detection Tricks & Nice Obfuscation in a Single VBScript has been first published on /dev/random]

February 05, 2020

Ever since I became independent, I’ve wanted to write about the financial aspects of running a business.

February 03, 2020

On 20 February 2020 at 7 p.m., the 80th Mons session of the Jeudis du Libre de Belgique will take place.

The topic of this session: DokuWiki, a “one size fits all” wiki

Theme: Web | communities

Audience: everyone

Speaker: Didier Villers (UMONS & ASBL LoLiGrUB)

Venue: Université de Mons, Campus Sciences Humaines, FWEG building, 17 place Warocqué, Auditoire 236L, 2nd floor. See the map on the UMONS website or the OSM map, as well as the parking access on rue Lucidel.

Attendance is free and only requires registering by name, preferably in advance, or at the door. Please announce your attendance by registering via http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons are also supported by our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, feel free to check the agenda and subscribe to the mailing list to automatically receive the announcements.

As a reminder, the Jeudis du Libre aim to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organised in collaboration with, and on the premises of, higher-education institutions in Mons involved in computer science curricula (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which promotes free software.

Description: For most people, the word wiki refers to the online encyclopedia Wikipedia. More generally, a wiki is a content management system for creating, editing and presenting information in numerous pages connected by hypertext links. Editing is often collaborative and uses a simplified markup language, while the software handles content indexing, search, access control and the preservation of successive revisions. Well-known wikis offer extensions for additional functionality and graphical themes to change the site's appearance.

DokuWiki is free software written in PHP, with the advantage of working without a database, unlike MediaWiki (the engine behind Wikipedia). DokuWiki is one of the best-known wiki engines. It targets development teams, group work and small businesses, but its ease of maintenance, backup and integration, together with the many extensions and themes on offer, make it a very effective Swiss Army knife of the web, for individual use as much as for a large community.
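To give an idea of the simplified markup mentioned above, a short DokuWiki sample:

====== Page title ======
Some **bold** and //italic// text, plus an [[namespace:page|internal link]].
  * a bullet list item
  * another item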

The talk will essentially be a hands-on feedback session; it will help you understand the installation, configuration and operation of DokuWiki, and discover its many possible uses.

January 31, 2020

Today, for the first time in our company's history, Acquia was named a Leader in Gartner's new Magic Quadrant for Digital Experience Platforms (DXPs). This is an incredible milestone for us.

Gartner Magic Quadrant for Digital Experience Platforms

In 2014, I declared that companies would need more than a website or a CMS to deliver great customer experiences. Instead, they would need a DXP that could integrate lots of different marketing and customer success technologies, and operate across more digital channels (e.g. mobile, chat, voice, and more).

For five years, Acquia has diligently been building towards an "open DXP". As part of that, we expanded our product portfolio with solutions like Lift (website personalization), and acquired two marketing companies: Mautic (marketing automation + email marketing), and AgilOne (Customer Data Platform).

Being named a Leader by Gartner in the Digital Experience Platforms Magic Quadrant validates years and years of hard work, including last year's acquisitions. We also join an amazing cohort of leaders, like Salesforce and Adobe, who are among the very best software companies in the world. All of this makes me incredibly proud of the Acquia team.

This recognition comes after six years as a Leader in the Gartner Magic Quadrant for Web Content Management (WCM). We learned recently that Gartner is deprecating its Web Content Management Magic Quadrant as Gartner clients' demand has been shifting to the broader scope of DXPs. We also see growing interest in DXPs from our clients, though we believe WCM and content will continue to play a foundational and critical role.

You can read the full Gartner report on Acquia's website. Thank you to the entire Acquia team who made this incredible accomplishment possible!

Mandatory disclaimer from Gartner

Gartner, Magic Quadrant for Digital Experience Platforms, Irina Guseva, Gene Phifer, Mike Lowndes, 29 January 2020.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Acquia.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

In case you’ve missed it: the cron.weekly newsletter is back! I’m about 6 issues into the new season and I feel I’ve gotten a good flow going.

January 29, 2020

Living day to day with the Hisense A5 phone

Everything that emits light hypnotises us. We can spend whole minutes staring at the crackling flames of a campfire. When we enter a room where a television is on, our gaze is immediately drawn to it, sucked in.

Giving up social networks, as I did during my disconnection? That is only a small part of the problem. We still carry a screen in our pocket at all times, a screen we want to check at the slightest sign of boredom or dead time. A screen that satisfies us more than reading a few pages of a book or taking a few seconds to meditate.

I realised I was not addicted to the applications on my phone or my computer, but to the screen itself. The moment I stop an activity, I catch myself, almost consciously, trying to imagine what I could do to justify spending time on a screen.

A phenomenon I do not experience at all with my e-reader. Its screen is essentially reflective. There are no fluid, hypnotic, moving images.

Logically, I went looking for an e-ink phone: a phone without a luminous screen, without colour. Several models appeared around 2013-2014, but none seems to have been successful. The only model that still seems to exist is the Hisense A5, which I decided to buy.

First contact

Initially announced at $100, the Hisense A5 actually costs more than $200 on most import sites. Add customs fees and you are closer to €250. But I had to replace my defective phone anyway.

The Hisense A5 runs Android but ships in only two languages: English and Chinese. No other choice. Note that the translation is incomplete: Chinese ideograms appear almost everywhere. It is not dramatic, but some gestures launch applications in Chinese that I have not managed to uninstall. A bit annoying.

Another point of attention: Google services are not installed on the phone. That suits me fine, since I wanted to get rid of Google anyway, but it comes with a few surprises I will come back to.

Some tinkering is therefore needed to install your applications: install F-Droid and, from F-Droid, install Aurora Store, which lets you install any application from the Google app store.

The e-ink screen in daily use

Having an e-ink screen is a surprising experience. You get used to black and white very quickly. The screen has two display modes (a sharper but slower one for photos, and a fast one for everyday interaction). Switching between the two is easy thanks to a shortcut, and it is really very useful.

After a few days, seeing the screens of other phones feels incredibly aggressive. They move, they are colourful, they are violent. The e-ink phone is incredibly zen.

Another peculiarity of e-ink: it is rather slow. No scrolling through feeds or news at full speed. Far from being a problem, this is actually an incredible feature. During a dead moment, I reflexively grab my phone, then catch myself thinking: “Why am I picking this thing up? What do I actually want to do with it?”. The e-ink phone is a genuine detox!

Pocket, in page-turning mode, is in its element. It is about the only application that makes you want to use it on this phone.

Pocket on the Hisense A5

For everyday applications, being on e-ink is not particularly crippling. You get used to it very quickly, except for badly designed applications where you cannot tell an enabled setting from a disabled one, dark grey (disabled) and dark blue (enabled) rendering exactly the same. It is particularly striking in Garmin Connect, and it makes me think such applications are unusable for people with visual impairments such as colour blindness.

Photos and videos

The camera works very well and takes perfectly decent, even beautiful, photos… once they are displayed on another device! It is actually quite amusing not to be able to judge the result of a photo immediately. It brings back something of the film-camera spirit: take fewer photos (because the interface is less responsive and discourages machine-gunning) and do not look at them before getting home.

The fear of missing a good photo is another problematic addiction of our smartphones. Just look at the crowd at any major event: everyone is watching the world through their smartphone! An e-ink screen does not solve the problem, but it makes it less pressing. A kind of balance I find acceptable: you can take photos, but they do not replace reality.

The same feeling is there when travelling, as I receive photos from my family. The jerky black-and-white video call reinforces the impression of remoteness, of distance. I personally find this a good thing, because it makes you realise the importance of real presence, and that chat and video are only substitutes.

Battery life

The marvel of the e-ink screen is that it consumes almost nothing! On a normal day, connected to wifi with light usage, I use between 10 and 15% of the battery. While travelling, after a very long day permanently on 4G, with intensive GPS usage, video calls and photo taking, I never managed to get below 50% of battery. By being careful (limited usage, airplane mode when idle), you can easily last more than a week.

In short, this phone is the ultimate weapon of the globetrotter: enormous battery life and the obligation to stay permanently connected to the real world around you!

We discussed it with Thierry Crouzet: this phone could well be the ultimate companion of the bikepacker. Long battery life, no way to spend the evening sorting the day's photos, and it can replace the e-reader.

In everyday use, I still much prefer my Vivlio e-reader, with its large screen, physical buttons and absence of distraction. But when bikepacking, where fatigue makes reading very sporadic, the Hisense A5 would be enough, saving a good hundred grams of luggage.

Life without Google

As I mentioned, this phone does not allow installing Google services. What are Google services? A software layer that links an application with your Google account.

This means you can install apps through the Aurora Store, but only free apps. It also means it is impossible to sign in with your Google account. Google Maps works very well, but not with your account (so you cannot share your position with your friends or use your favourites). It also means no easy import of my contacts from my previous phone.

Personally, the gaps I find hardest, and for which I have no solution yet, are Google Calendar and Google Music. I also curse the absence of Google Photos, which gave me immediate access to my photos from my computer. I use Tresorit, but its photo import feature is not really mature and does not seem to work on this phone (nor does Dropbox's).

Another gap that surprised me: Strava refuses to start without Google services! Brain.fm also warns me that it will not work but, once the error is dismissed, launches without a problem. I admit I could not do without Brain.fm, even though the sound quality in my Bluetooth headphones seems lower than on my previous phone.

For the rest, everything works rather well, including Google Maps and my Bluetooth gadgets. Some applications can still have small issues: the weather app I use cannot get a position fix. Signal sometimes only notifies me of certain messages when I launch the application manually. Brain.fm sometimes crashes if I use other applications while it is running. And Strava does not start at all!

Conclusion

From the above, it is obvious that this phone is reserved for geeks ready to tinker and make sacrifices.

However, for a phone that needs tinkering, running an Android system not designed at all for an e-ink display, it behaves incredibly well. Unlike a LightPhone, very pretty on paper but incredibly limited in daily life (no banking app, no two-factor authentication, no way to quickly look up an address), the Hisense A5 is a real phone.

I am convinced that interfaces specifically designed for e-ink will completely change the relationship we have with our screens. I already see it with the Freewrite which, despite all its software problems, is changing my relationship with writing.

I think I could no longer go back to a traditional phone. I am just waiting for Protonmail to release its calendar app for Android, and for a replacement solution to stream my music, and this phone will be almost perfect!

It also means that when I do not need my laptop, I now walk around without a single colour screen. A true liberation!

I am @ploum, speaker and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real source of motivation and recognition. Thank you!

This text is published under the CC-By BE license.

January 28, 2020

I’m just back from Lille (France), where the “FIC” or “International Cybersecurity Forum” is organised today and tomorrow. This event is very popular for some people but not technical at all. Basically, you find all the vendors in one big place, trying to convince you that their solution, based on blockchain, machine learning and running in the cloud, is the one you MUST have! Anyway… Besides this mega vendor party, a smaller event has been organised in parallel for a few years (I think this was already the 4th edition): “CoRIIN” or, in French, “Conférence sur la réponse aux incidents et l’investigation numérique”. 400 people registered and came to follow presentations around forensics, investigations, etc. Here is my quick wrap-up, as usual!

The conference started with an overview of the WhatsApp application on mobile phones: “L’investigateur, le Smartphone, et l’application WhatsApp” by Guénaëlle De Julis. She had the opportunity to investigate a mobile phone and was asked to find data in the WhatsApp application. Given the lack of documentation available, she decided to have a look at the application herself, covering both iOS and Android devices. First, Guénaëlle explained how to access the data: some is available via backups, some requires root access (jailbreak), depending on what you need to extract. The data is stored in SQLite databases. Once extracted, how do you search for juicy information? You can, of course, use one of the many SQLite viewers available, but Guénaëlle decided to use Python with the Pandas library to write powerful queries (see the sketch after the list). Some examples:

  • Export messages for a defined period
  • Try to find the number of missing messages for a defined period
  • Groups: identify members and associate messages
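As an illustration of that approach, here is a minimal sketch; the database file, table and column names (msgstore.db, messages, key_remote_jid, timestamp) are assumptions that vary across WhatsApp versions:

import sqlite3
import pandas as pd

# Load the extracted (and already decrypted) WhatsApp database.
# Table and column names are illustrative and differ between versions.
con = sqlite3.connect("msgstore.db")
df = pd.read_sql_query("SELECT key_remote_jid, timestamp, data FROM messages", con)

# WhatsApp stores timestamps in milliseconds since the epoch.
df["timestamp"] = pd.to_datetime(df["timestamp"], unit="ms")

# First example from the talk: export all messages for a defined period.
period = df[(df["timestamp"] >= "2020-01-01") & (df["timestamp"] < "2020-02-01")]
period.to_csv("messages_january.csv", index=False)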

I don’t have experience with mobile forensics and learned interesting stuff with this first presentation.

Then, Philippe Baumgart and Fahim Hasnaoui presented their point of view on investigations in cloud environments. They started by reviewing common issues organisations face when using the cloud: cloud services are easy to order (just a credit card is required) and “test” platforms often quietly become production platforms. Welcome to shadow IT! Then they explained the challenges of collecting information from cloud providers. An interesting point of view: providers offer powerful monitoring tools that can also be used to detect suspicious activities. Example: a peak of traffic on a database server could be related to data exfiltration! I liked their mention of “intra-cloud” attacks, where people rent virtual machines inside a cloud infrastructure and scan for servers with non-public IP addresses.

Then, Sébastien Mériot, head of the OVH CSIRT, came to discuss data leaks and credential stuffing. Data leaks are everywhere and people like to know what has been done with the stolen data. Yet, according to Sébastien, it’s just as interesting to look at what happens before the leak. Often, the data leak is not the primary goal of the attackers; it’s an opportunity. Leaked data is not released immediately because passwords are often hashed and salted: attackers need time to brute-force them. But it’s a fact that the time between the leak and the public release tends to shrink due to… the power of GPUs! Passwords can be cracked very quickly today. Then they are used for credential stuffing, meaning they are tested against many services. To perform this, there exist tools like:

  • SentyMBA
  • OpenBullet
  • Storm
  • Snipr
  • Private Keeper
  • Woxy

Sébastien explained that OVH, as a major cloud player, is also a prime target for credential stuffing. He also described some countermeasures they implemented, like dynamic forms (that change all the time to prevent automation). From a business point of view, phishers are moving to credential stuffing… and many come from Africa.
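To make the hashing point concrete, here is a minimal sketch of a salted, iterated password hash; the password and parameters are purely illustrative:

import hashlib, os

# A random per-user salt defeats precomputed (rainbow) tables,
# and a high iteration count makes every single guess expensive.
salt = os.urandom(16)
digest = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 100_000)
print(digest.hex())

The defender pays this cost once per login; an attacker pays it for every guess on every leaked hash, which is exactly the delay that GPUs keep eroding.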

The next presentation, by François Normand, covered Pastebin and all the malicious content that can be found on this service. He reviewed some examples of abuse, like storing encoded malicious code. The encoding can vary; XOR can be applied to defeat security controls. It is interesting to keep in mind that attackers use legitimate services to distribute malicious content, and those are not easy to block.

After a lunch break, “A year hunting in a bamboo forest” was presented by Aranone Zarkan and Sébastien Larinier. Sorry, no information, because the presentation was under TLP:AMBER.

Giovanni Rarrato came to present his baby: “Tsurugi Linux”. He’s the core developer of this Ubuntu-based Linux distribution dedicated to forensic investigations. It contains a lot of preinstalled tools that speed up the processing of data.

Solal Jacob, from the ANSSI agency, came to present his research: memory analysis of a Cisco router running IOS-XR. This type of router runs a modified version of QNX. Solal explained step by step how he learned the way the platform works, then how to access the memory in order to extract and save it, and finally the framework used to extract artifacts from the memory image. Great research that was not easy to present in only 30 minutes! His framework “Amnesic Sherpa” will be available soon.

After a welcome coffee break, Stéphane Bortzmeyer, from AFNIC, presented “Who’s really the owner of this IP prefix?”. Usually, when you need to learn more about the owner of an IP address, you just use the “whois” command. Easy… but are you confident in the results? Stéphane started with an example: in 1990, the prefix 143.95.0.0/16 was assigned to a company called “Athenix”, based in California. It went bankrupt. In 2008, a new company called “Athenix” was created in Massachusetts. They took over the prefix, easy! The business of IPv4 is growing because, as we are out of addresses, organisations use many techniques to get more IPs. The problem is that techniques exist to authenticate the ownership of prefixes, but people don’t use them (for example: RPKI). Conclusion: investigating is easy because the tools are easy, but what lies behind the results? Business relations, etc. I really liked this presentation!
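If you want to script that first lookup instead of reading raw whois output, a minimal sketch; it shells out to the whois CLI (which must be installed) and field names vary from one registry to another:

import subprocess

# Ask the registries who currently holds the prefix from Stéphane's example.
out = subprocess.run(["whois", "143.95.0.0"], capture_output=True, text=True).stdout
for line in out.splitlines():
    # Keep only the lines naming the holder; labels differ per RIR.
    if line.lower().startswith(("orgname", "org-name", "netname", "descr")):
        print(line)

Of course, as the talk showed, a clean-looking answer only tells you what the database claims, not who really controls the prefix.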

Jean Gautier presented another tool: DFIR-ORC. “ORC” stands for “Outil de Recherche de Compromission”. It’s a tool dedicated to extracting artifacts from a compromised system. Why reinvent a tool? Because it must be reliable, easy to use, with minimal impact on the tested system, and highly customisable! Some features are really impressive, like using the NTFS MFT to access locked files, volume shadow copies or deleted files’ metadata! Jean explained in detail how the tool works, and it deserves to be tested. It is available here.

The last presentation was a return of experience by Mathieu Hartheiser and Maxence Duchet. It was based on a real case in which one of their customers was compromised via a vulnerable JunOS Pulse appliance. I expected more technical details, but it looked more like a GRC presentation…

[The post CoRIIN 2020 Wrap-Up has been first published on /dev/random]

Running Linux and noticed that ng serve or ng test don’t observe changes and thus won’t reload/test automatically?

This might simply be caused by having too many files in your tree, making your system hit an internal limit. This limit can be raised using the following command:

echo 524288 | sudo tee /proc/sys/fs/inotify/max_user_watches

You’ll have to do this after every boot. Want it to persist? Use the following:

echo fs.inotify.max_user_watches=524288 | sudo tee /etc/sysctl.d/40-max-user-watches.conf
sudo sysctl --system

Comments | More on rocketeer.be | @rubenv on Twitter

January 27, 2020

We proudly present the last set of speaker interviews. See you at FOSDEM this weekend!

  • Chris Aniszczyk, Max Sills and Michael Cheng: Open Source Under Attack. How we, the OSI and others can defend it
  • Danese Cooper: How FOSS could revolutionize municipal government. With recent real-world examples
  • James Bottomley and Mike Rapoport: Address Space Isolation in the Linux Kernel
  • Kris Nova: Fixing the Kubernetes clusterfuck. Understanding security from the kernel up
  • Krzysztof Daniel: Is the Open door closing? Past 15 years review and a glimpse into the future
  • Mateusz Kowalski and Kamila Součková: SCION. Future internet that you can use

January 26, 2020

About a week and a half ago, I mentioned that I'd been working on making SReview, my AGPLv3 video review and transcode system, work from inside a Kubernetes cluster. I noted at the time that while I'd made it work inside minikube, it couldn't actually be run from within a real Kubernetes cluster yet, mostly because I misunderstood how Kubernetes works, and assumed you could just mount the same Kubernetes volume from multiple pods and share data that way (answer: no, you can't).
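If you want to check what the volumes in your own cluster actually support, a quick (and generic, nothing SReview-specific) way is to list their access modes:

# ReadWriteMany is what shared mounting would require; most volumes are ReadWriteOnce
kubectl get pv -o custom-columns=NAME:.metadata.name,ACCESS:.spec.accessModes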

The way to fix that is to share the data not through volumes, but through something else. That would require that the individual job containers download and upload files somehow.

I had a look at how the Net::Amazon::S3 perl module works (answer: it's very simple really) and whether it would be doable to add a transparent file access layer to SReview which would access files either on the local file system, or an S3 service (answer: yes).

So, as of yesterday or so, SReview supports interfacing with an S3 service (only tested with MinIO for now) rather than "just" files on the local file system. As part of that, I also updated the code so it would not re-scan all files every time the sreview-detect job for detecting new files runs, but only when the "last changed" time (or mtime for local file system access) has changed -- otherwise it would download far too many files every time.
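The underlying idea is generic enough to sketch in shell. This is only an illustration of the concept, not SReview's actual code, and the paths are made up:

# on the first run, create the timestamp file far in the past
[ -f /var/lib/sreview/last-scan ] || touch -d '@0' /var/lib/sreview/last-scan
# re-process only files changed since the previous scan...
find /srv/sreview/incoming -type f -newer /var/lib/sreview/last-scan
# ...and record when this scan happened
touch /var/lib/sreview/last-scan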

This turned out to be a lot easier than I anticipated, and I have now successfully managed, using MinIO, to run a full run of a review cycle inside Kubernetes, without using any volumes except for the "ReadWriteOnce" ones backing the database and MinIO containers.
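If you want to poke at the MinIO side of such a setup yourself, the MinIO client is handy (the hostname and credentials below are placeholders; recent versions of mc use "alias set", older ones "config host add"):

mc alias set local http://minio.example:9000 ACCESSKEY SECRETKEY
mc mb local/sreview
mc ls local/sreview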

Additionally, my kubernetes configuration files are now split up a bit (so you can apply the things that make sense for your configuration somewhat more easily), and are (somewhat) tested.

If you want to try out SReview and you've already got Kubernetes up and running, this may be for you! Please give it a try and don't forget to send some feedback my way.

January 25, 2020

Submission has closed for the PGP keysigning event. The list of all submitted keys is now published. The annual PGP keysigning event at FOSDEM is one of the largest of its kind. With more than one hundred participants every year, it is an excellent opportunity to strengthen the web of trust. If you are participating in the PGP keysigning, you must now: Download the list of keys; Verify that your key fingerprint is correct; Optionally, verify the integrity of the file using the detached signature; Print the list of keys on paper; Calculate the checksums, as detailed in the…

January 24, 2020

Wonderful that these exist.

Thanks, Mark Van Damme. Well done. I suppose I'll just pay that 60 euro municipal tax next year, then. The politicians responsible shouldn't expect to be re-elected.

Anyone who thinks these are all invisible roads: nonsense. Come and take a look here. These are walking paths between nature and people's houses. They are marked with yellow signs. And if you ask most people from around here, they are in favour of them. Except maybe the few who have to make part of their garden available. Oh well.

I suspect it also brings in tourism, which the local merchants probably don't mind.

I published the following diary on isc.sans.edu: “Why Phishing Remains So Popular?“:

Probably, some phishing emails get delivered into your mailbox every day and you ask yourself: “Why do they continue to spam us with so many emails? We are aware of phishing, and it will not affect our organization!”

First of all, email remains a very popular way to get in contact with the victim. Then, sending massive phishing campaigns does not cost a lot of money. You can rent a botnet to send millions of emails for a few bucks. Hosting the phishing kit is also very easy: there are tons of compromised websites that deliver malicious content. But phishing campaigns are still valuable from an attacker’s perspective when some conditions are met… [Read more]

[The post [SANS ISC] Why Phishing Remains So Popular? has been first published on /dev/random]

January 23, 2020

I published the following diary on isc.sans.edu: “Complex Obfuscation VS Simple Trick“:

Today, I would like to make a comparison between two techniques applied to malicious code to try to bypass AV detection. The Emotet malware family needs no introduction. Very active for years, it constantly launches new waves of attacks using different infection techniques. Yesterday, an interesting sample was spotted at a customer. The security perimeter is quite strong, with multiple lines of defense based on different technologies/vendors. This one passed all the controls! A malicious document was delivered via a well-crafted email. The document (SHA256:ff48cb9b2f5c3ecab0d0dd5e14cee7e3aa5fc06d62797c8e79aa056b28c6f894) has a low VT score of 18/61 and is not detected by some major AV players… [Read more]

[The post [SANS ISC] Complex Obfuscation VS Simple Trick has been first published on /dev/random]

January 17, 2020

We have just published the second set of interviews with our main track and keynote speakers. The following interviews give you a lot of interesting reading material about various topics, from the forgotten history of early Unix to the next technological shift of computing:

Daniel Riek: How Containers and Kubernetes re-defined the GNU/Linux Operating System. A Greybeard's Worst Nightmare
David Stewart: Improving protections against speculative execution side channel
James Bottomley: The Selfish Contributor Explained
Jon 'maddog' Hall: FOSSH - 2000 to 2020 and beyond! maddog continues to pontificate
Justin W. Flory and Michael Nolan: Freedom and AI: Can Free Software…

Here’s a little trick when using mysqldump: you don’t have to dump an entire database, you can dump individual tables too.
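For example (the database and table names below are made up), appending table names after the database name dumps just those tables:

# dump only the users and orders tables from mydatabase
mysqldump -u root -p mydatabase users orders > two-tables.sql

The resulting file restores the same way as a full dump, via the mysql client.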