AWS Support and leaked credentials

Once you have enough people each working in multiple accounts, it becomes a waiting game until you eventually receive the dreaded “Your AWS account 666 is compromised.” email. As someone who’s been using AWS since S3 arrived, this is the first time I’ve encountered one of these, so I thought I’d write up some notes about what actually happens.

First comes the easy, but not recommended, part of the whole experience: push some credentials to GitHub. You get bonus points, well, faster discovery anyway, if you use the perfect combination of export AWS_ACCESS_KEY_ID and export AWS_SECRET_ACCESS_KEY, or a literal add of your .aws/credentials file. As all AWS access key IDs begin with ‘AKIA’, I assume AWS scans the GitHub commit fire hose for that string.
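
For illustration, this is roughly what the fatal combination looks like, along with a crude local check you could run before pushing. The key values below are AWS's well-known documentation placeholders, and the grep pattern is just my assumption about what the scanners are matching on:

    # The kind of thing that should never end up in a commit
    export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
    export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

    # A crude pre-push check: flag anything that looks like an access key ID
    grep -rnE 'AKIA[A-Z0-9]{16}' . --exclude-dir=.git \
      && echo 'Possible AWS access key found, do not push'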

Once the AWS scanning systems detect an access key in the wild, which in our case took them mere minutes, you’ll receive an email to your account’s contact address and, if you’ve got Enterprise Support, a phone call too. The email looks like this:

Amazon Web Services has opened case 111111111 on your behalf.

The details of the case are as follows:

Case ID: 111111111
Subject: Your AWS account 6666666666666 is compromised
Severity: Urgent
Correspondence: Dear AWS Customer,

Your AWS Account is compromised! Please review the following notice and
take immediate action to secure your account.

Your security is important to us. We have become aware that the AWS
Access Key AKIATHEEVILHASESCAPED (belonging to IAM user
"ohgodpleasenotme") along with the corresponding Secret Key is publicly
available online at
https://github.com/deanwilson/aws-creds-test-repo/blob/11b1111d1

If, like me, it’s the first time you’ve seen one of these, you might experience a physical flinch. Once the initial “Oh god, think of the Lambdas!” has passed and you read on, you’ll find the email is both clear and comprehensive. You’ll note a line informing you that your ability to create some AWS resources (I’d assume the expensive GPU-based instances) is temporarily limited until you resolve the issue to their satisfaction. It then guides you through checking your account’s activity, deleting the root account’s access keys (which you don’t have, right?) and all AWS access keys in that account that were created before the breach happened. Because we have quite good role and account isolation, we made the repository private for future investigation, confirmed we don’t have root access keys, and forced a rotation (read: deleted all existing keys in the account).
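
If you’re wondering what the forced rotation looks like in practice, something along these lines with the AWS CLI covers it. Treat it as a sketch: the user name is lifted from the example above, the key ID is a placeholder, and you’d want to check each key’s CreateDate against the time of the leak before deleting anything:

    # List every access key for a given IAM user, along with its creation date
    aws iam list-access-keys --user-name ohgodpleasenotme

    # Delete a specific key once you've confirmed it predates the leak
    aws iam delete-access-key --user-name ohgodpleasenotme \
      --access-key-id AKIAIOSFODNN7EXAMPLE

    # Confirm the root account has no access keys at all
    aws iam get-account-summary --query 'SummaryMap.AccountAccessKeysPresent'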

A little while later we received a follow-up phone call and email from AWS Support to check in, make sure we were OK, and confirm we’d actioned the advice given. The second email reiterates the recommendation to rotate both the root password / access key and all AWS access keys created before the breach happened. You’ll also get some handy recommendations for awslabs’ git-secrets and a Trusted Advisor / CloudWatch Events based monitoring mechanism to help detect this kind of leak.
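
If you want to act on the git-secrets suggestion, the basic setup is only a couple of commands. This is a sketch of the per-repository install, which wires the built-in AWS credential patterns into pre-commit checks:

    # Install the git-secrets hooks into the current repository
    git secrets --install

    # Register the built-in AWS patterns (access key IDs, secret keys, account IDs)
    git secrets --register-aws

    # Scan the existing history for anything that has already slipped through
    git secrets --scan-history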

All in all, this is actually a well-handled process for what could be an amazingly painful and stress-filled event. While some of this communication may vary for people without Enterprise Support, I’d assume at the very least you’d still get the emails. If you have good key rotation practices this kind of event becomes just another Trello card rather than an outage, and now we’ve seen one of these it’s an easy scenario to mock up and add to our internal game day scenarios.