
Building container images using Dockerfile best practices

March 15, 2021 by Srinivas

Docker images are built from build instructions written in a configuration file named Dockerfile. Once an image is built from this Dockerfile, containers are launched from it. It is important for developers to maintain the security hygiene of their Dockerfiles by following best practices in order to avoid security pitfalls. This article covers some of the best practices that can be used when building container images using a Dockerfile.

Use of non-root user accounts

When a container is started from a Docker image, a root account is available within the container by default. Even though the root account in a Docker container does not come with all the capabilities that a Linux root user has, it is still recommended to use a non-root user within a container. The following Dockerfile shows how the alpine image is modified to create a new user and use this low-privileged user for all operations within the container.

FROM alpine:latest

RUN addgroup -S user && adduser -S user -G user

USER user

Build the image using the following command.

$ docker build . -t lowpriv

When we run a container from this Docker image and open a shell, we should see a low-privileged user instead of the root user.

$ docker run -it lowpriv sh

/ $ id

uid=100(user) gid=101(user) groups=101(user)

/ $

As we can observe in the preceding excerpt, we got a shell as a non-root user named user, as specified in the Dockerfile.
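The alpine image ships BusyBox's addgroup/adduser; on Debian-based images the equivalent tools are groupadd and useradd. The following is a hedged sketch of the same pattern for a Debian base (the user and group names are arbitrary examples):

```dockerfile
FROM debian:buster-slim

# groupadd/useradd are the Debian equivalents of alpine's addgroup/adduser
RUN groupadd --system user && useradd --system --gid user --no-create-home user

USER user
```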

Avoid the use of ADD, use COPY instead

Docker provides the ADD and COPY instructions to achieve a similar goal: getting content into the image. While COPY simply copies files from the local build context into the image, ADD comes with additional features: it can download content from a remote URL at build time and automatically extract local tar archives. This can lead to unwanted behaviour, especially if the URL loads content from an untrusted source.
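As an illustration (the URL and file name below are placeholders), both instructions get a file into the image, but only the first reaches out to the network at build time:

```dockerfile
# Risky: fetches remote content at build time, with no integrity check
ADD https://example.com/install.sh /tmp/install.sh

# Preferred: copy a vetted local file from the build context instead
COPY install.sh /tmp/install.sh
```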

COPY only what is needed

The COPY command can be used to get content into the image. One commonly seen practice with the COPY command is to copy everything from the current directory, as shown in the following excerpt.

COPY . /app

Everything available in the local directory will be copied into the image. This can be risky, especially if the current directory contains sensitive files such as secrets or backup files. It is also possible that we copy files that are not needed in the container, the Dockerfile itself for instance. This also leads to a larger image size. So, it is recommended to copy only what is needed, as shown in the following excerpt.

COPY api.py /app

As we can see in the preceding excerpt, we are copying a single file instead of copying the entire directory.
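A complementary control is a .dockerignore file, which excludes files from the build context even when a broad COPY is used. A minimal sketch (the entries are examples):

```
# .dockerignore -- keeps secrets and build artifacts out of the build context
.git
.env
*.bak
Dockerfile
```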

Avoid building secrets into images

Hardcoded credentials, environment variables and build arguments are some of the common places where secrets end up in Docker images. In some cases, developers leave SSH keys in the image so Docker can pull source code from repositories during the build phase. Never put secrets into these places, as they remain available at several stages. For instance, environment variables can be read from both the built image and the running containers.
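When a secret genuinely is needed at build time, BuildKit's secret mounts expose it to a single RUN step without storing it in any image layer. A hedged sketch (the secret id and URL are placeholders), built with BuildKit enabled via `docker build --secret id=repo_token,src=./token.txt .`:

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine:latest

# The secret is mounted at /run/secrets/repo_token only for this RUN step
# and does not end up in any image layer
RUN --mount=type=secret,id=repo_token \
    wget --header="Authorization: Bearer $(cat /run/secrets/repo_token)" \
    https://example.com/private/src.tar.gz
```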

Use of multi-stage builds

Multi-stage Docker builds are a common pattern, especially in large Dockerfiles. They are useful to anyone who wishes to optimize Dockerfiles while keeping them easy to read and maintain. One of the primary benefits of multi-stage builds is reducing the overall size of the final image, and this has a positive side effect on security, because the final image contains only what is necessary. Multi-stage builds reduce size by performing build operations in an intermediate container and copying only the libraries and output binaries that are needed into the final image. The final image will not contain pieces like build tools (e.g., gcc), which would otherwise increase both image size and attack surface.

Let us review the following Dockerfile, which produces a single-stage build.

FROM python:3.9.1-alpine

RUN python -m venv /opt/venv

ENV PATH="/opt/venv/bin:$PATH"

RUN mkdir /app

COPY api.py /app

COPY requirements.txt /app

WORKDIR /app

RUN pip3 install -r requirements.txt

RUN addgroup -S user && adduser -S user -G user --no-create-home

RUN chmod -R 755 /app

USER user

ENTRYPOINT ["python3"]

CMD ["api.py"]

In the preceding excerpt, we set up a Python virtual environment and install all the requirements described in requirements.txt. Once done, we run the Python code in a non-root user's context. The following is the size of the final image produced using this Dockerfile.

$ docker images

REPOSITORY   TAG   IMAGE ID       CREATED         SIZE
api          v2    99b179b08646   2 minutes ago   64.2MB

Now, let us examine the following Dockerfile, which builds the same application as a Docker image, but this time as a multi-stage build.

FROM python:3.9.1-alpine AS compile-image

RUN python -m venv /opt/venv

ENV PATH="/opt/venv/bin:$PATH"

COPY requirements.txt .

RUN pip3 install -r requirements.txt

FROM python:3.9.1-alpine AS build-image

COPY --from=compile-image /opt/venv /opt/venv

ENV PATH="/opt/venv/bin:$PATH"

RUN mkdir /app

COPY api.py /app

WORKDIR /app

RUN addgroup -S user && adduser -S user -G user --no-create-home

RUN chmod -R 755 /app

USER user

ENTRYPOINT ["python3"]

CMD ["api.py"]

We install all the dependencies in an intermediate image and copy only those dependencies from it into the final image. This leaves everything else behind in the intermediate image, and the final image contains only what is needed to run the file api.py. The following is the size of the final Docker image produced.

$ docker images

REPOSITORY   TAG   IMAGE ID       CREATED         SIZE
api          v2    048ced8bf6a7   2 minutes ago   60.2MB

As we can notice, the size is reduced by 4MB. Our example is very simple, so there is not much of a difference in image sizes, but when multi-stage builds are used with compiled languages like Go, the savings are far more dramatic.
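To illustrate the point about compiled languages, the following hedged sketch (file and stage names are placeholders) builds a Go binary with the full toolchain in one stage and copies only the static binary into a scratch final image:

```dockerfile
# Build stage: full Go toolchain, several hundred MB
FROM golang:1.16-alpine AS build
WORKDIR /src
COPY main.go .
# CGO_ENABLED=0 produces a static binary that can run on scratch
RUN CGO_ENABLED=0 go build -o /out/app main.go

# Final stage: contains only the binary, a few MB in total
FROM scratch
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```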

Avoid the use of untrusted base images

Most Docker images use a base image of some sort, specified with the FROM instruction in the Dockerfile. As a best practice, we should always pull images from trusted sources. Docker images are no different from any other software: they can easily be backdoored, and pulling images from untrusted sources is risky. Docker Hub provides official images for the most popular operating system bases such as Ubuntu and CentOS. These images can be trusted to be free of unwanted or malicious behaviour, and the official images also receive security updates in a timely manner.

https://docs.docker.com/docker-hub/official_images/

Use minimal base images

When images are built using a Dockerfile, it is recommended to use base images that ship with minimal tools and utilities. Where possible, it is even better to build from the scratch image. This minimizes the attack surface, since a large number of unused libraries and tools will not be present in the container. The following excerpt shows how wget is present in an alpine container even though the application running in it does not require it.

$ docker exec -it fd65 sh
/app $ wget
BusyBox v1.32.1 () multi-call binary.

Usage: wget [-c|--continue] [--spider] [-q|--quiet] [-O|--output-document FILE]
        [-o|--output-file FILE] [--header 'header: value'] [-Y|--proxy on/off]
        [-P DIR] [-S|--server-response] [-U|--user-agent AGENT] [-T SEC] URL...

Retrieve files via HTTP or FTP

        --spider    Only check URL existence: $? is 0 if exists
        -c          Continue retrieval of aborted transfer
        -q          Quiet
        -P DIR      Save to DIR (default .)
        -S          Show server response
        -T SEC      Network read timeout is SEC seconds
        -O FILE     Save to FILE ('-' for stdout)
        -o FILE     Log messages to FILE
        -U STR      Use STR for User-Agent header
        -Y on/off   Use proxy

/app $

Tools like these can be useful to an attacker during the post-exploitation phase on a compromised container, and thus it is recommended to use base images that do not contain unused utilities like wget, curl, etc.
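Google's distroless images sit between full distributions and scratch: they contain a language runtime but no shell or package manager, so tools like wget are simply absent. A hedged sketch for the api.py example, assuming it has no third-party dependencies:

```dockerfile
# gcr.io/distroless/python3 ships a Python runtime but no shell, wget or curl
FROM gcr.io/distroless/python3
COPY api.py /app/api.py
WORKDIR /app
# The image's entrypoint runs python3, so CMD supplies the script to execute
CMD ["api.py"]
```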

Avoid the use of latest tag for base images

Most Docker images use a base image of some sort, specified with the FROM instruction in the Dockerfile. As a best practice, it is recommended to avoid pulling images using the latest tag. The latest tag is a rolling tag: the image it points to can change over time, making it hard to track the exact version we pulled earlier. It can also leave us exposed to vulnerabilities, as we assume we are on the latest version when in reality we are not. Instead of the latest tag, use base images of a specific version, as shown in the following example.

FROM python:3.9.1-alpine
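For even stricter reproducibility, an image can be pinned by its content digest, which stays immutable even if the tag is re-pushed. A sketch (the digest is a placeholder; the real value can be obtained from `docker images --digests`):

```dockerfile
# Pinning by digest guarantees the exact same image bytes on every build
FROM python:3.9.1-alpine@sha256:<digest>
```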

Make binaries and scripts writable only by root

In one of the previous sections, we discussed how a non-root user can be configured in Docker containers. Even after configuring a non-root user, it is often seen that this user can modify the binaries and scripts in the container, because it is given ownership of them, as highlighted in the following excerpt.

RUN addgroup -S user && adduser -S user -G user --no-create-home

RUN chown -R user:user /app && chmod -R 755 /app

USER user

When a container is started from an image built with the preceding Dockerfile, the permissions on the /app directory look as follows.

/ $ ls -ld /app

drwxr-xr-x    1 user     user          4096 Mar 22 08:24 /app

/ $ 

This shows that the /app directory is owned by, and writable by, this user. If the container is compromised, an attacker can modify the contents of files within this directory.

This can be prevented by ensuring that all files are owned by root, with the rest of the world given only read and execute permissions (mode 755). The following excerpt shows how this can be achieved.

RUN addgroup -S user && adduser -S user -G user --no-create-home

RUN chmod -R 755 /app

USER user

As we can notice, we change the folder permissions as the root user before switching context to the low-privileged user. After spinning up the container, the permissions look as follows.

$ ls -ld /app/

drwxr-xr-x    1 root     root          4096 Mar 22 08:24 /app/

/ $ 

With these permissions, the binaries and scripts can still be executed by the low-privileged user, but they can be modified only by the root user.

Do not expose administration services like SSH

When building Docker images, do not run administration services such as SSH inside the container. Containers are not meant to be administered this way, and exposing such services can let unwanted visitors into the containers.


Conclusion

The Docker image build process is the first step in creating containers, so it is crucial to ensure that the images we build are safe to use in both production and non-production environments. This article has provided some of the best practices developers can follow to avoid security-related errors in Dockerfiles.

Sources

https://docs.docker.com/docker-hub/official_images/

https://docs.docker.com/develop/dev-best-practices/

https://docs.docker.com/engine/reference/builder/

Srinivas

Srinivas is an Information Security professional with 4 years of industry experience in Web, Mobile and Infrastructure Penetration Testing. He is currently a security researcher at Infosec Institute Inc. He holds the Offensive Security Certified Professional (OSCP) certification. He blogs at www.androidpentesting.com. Email: srini0x00@gmail.com