
CI/CD container security considerations

March 17, 2021 by Srinivas

Containers are heavily used in CI/CD environments, and security needs to be addressed at several phases of a CI/CD pipeline: on the developer's laptop before the code is committed to a repository, while committing the code, while pulling the code from the repository, during the build process, while pushing built images to a registry and when deploying the containers.

This article provides an overview of some of the security considerations when using containers in CI/CD environments.


Reproducible builds 

According to https://reproducible-builds.org, reproducible builds are a set of software development practices that create an independently-verifiable path from source to binary code.

Let us go through an example to understand how reproducible builds can be produced. The assumption here is that a team has dockerized their Python application using the following Dockerfile. At the time of writing this Dockerfile, the latest version of Python is 3.9.1.

FROM python:alpine

RUN python -m venv /opt/venv

# Make sure we use the virtualenv:

ENV PATH="/opt/venv/bin:$PATH"

RUN mkdir /app

COPY api.py /app

COPY requirements.txt /app

WORKDIR /app

RUN pip3 install -r requirements.txt

RUN addgroup -S user && adduser -S -G user -H user

RUN chmod -R 755 /app

USER user

ENTRYPOINT ["python3"]

CMD ["api.py"]

We can run a container from the image built using the preceding Dockerfile and check the Python version.

$ docker exec -it 295 sh

/app $ python3 --version

Python 3.9.1

/app $ 

As shown in the preceding excerpt, the Python version available in the container is 3.9.1. If the same Dockerfile is used in a CI/CD pipeline to build a new image a few days or weeks later, there is no guarantee that the same version of Python will be installed. The following excerpt shows the Python version available in a container built a few weeks later.

$ docker exec -it 9ce sh

/app $ python3 --version

Python 3.9.2

/app $ 

Even though the Dockerfile has not changed, the image is no longer the same. These kinds of unexpected changes can break application functionality in the future, because the dependencies the application was originally tested with and the dependencies it currently runs on end up being completely different, even though the build instructions were never modified.

Established practices for producing reproducible builds help avoid such problems: the build instructions produce the exact same build, bit for bit, at any point in time. Coming back to our Dockerfile example, the following Dockerfile will always build an image with Python 3.9.1 regardless of the latest Python version available. This is achieved by pinning the version, as shown in the following excerpt.

FROM python:3.9.1-alpine

RUN python -m venv /opt/venv

# Make sure we use the virtualenv:

ENV PATH="/opt/venv/bin:$PATH"

RUN mkdir /app

COPY api.py /app

COPY requirements.txt /app

WORKDIR /app

RUN pip3 install -r requirements.txt

RUN addgroup -S user && adduser -S -G user -H user

RUN chmod -R 755 /app

USER user

ENTRYPOINT ["python3"]

CMD ["api.py"]

As we can observe in our example, reproducible builds give teams the ability to consistently create the same image (and thus container) in all environments for all versions of an application. With reproducible images, the underlying operating system and application environment are always consistent, and only the application code changes whenever there is a new release.

The following are the areas to consider when we want to build reproducible images:

  • A specific version of the base image, preferably one with long-term support, so we get timely updates and patches.
  • Application dependencies such as libraries and other packages should be installed with specific versions. Some of these libraries/dependencies may be pulled using a file such as requirements.txt for Python or pom.xml for Java; explicit versions must be used in those files as well (see the sketch after this list).
  • Optionally, installing specific versions of OS packages such as curl, nginx and apache.
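
As a minimal sketch of the pinned-dependency point above, a requirements.txt for the hypothetical api.py application could look like the following (the package names and versions are illustrative, not taken from the original project):

# requirements.txt -- every dependency pinned to an exact version
flask==1.1.2
requests==2.25.1
gunicorn==20.0.4

With every version pinned, the RUN pip3 install -r requirements.txt step installs the same packages on every build. Pip's hash-checking mode (adding --hash=sha256:... entries and installing with pip3 install --require-hashes -r requirements.txt) can additionally verify that the downloaded artifacts themselves have not changed.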

From a security standpoint, binary reproducible builds also establish a trust factor for the end users as they allow verification that no vulnerabilities or backdoors have been introduced during the build process.
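
Base images can also be pinned by digest rather than by tag for stronger guarantees, since a tag such as 3.9.1-alpine can be re-pushed by the publisher while a digest identifies exactly one image. The following is a sketch of the approach; the digest value is a placeholder that would come from your own pull/inspect output:

$ docker pull python:3.9.1-alpine
$ docker inspect --format '{{index .RepoDigests 0}}' python:3.9.1-alpine
python@sha256:<digest-from-your-registry>

The Dockerfile can then reference that exact digest:

FROM python:3.9.1-alpine@sha256:<digest-from-your-registry>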

Public vs internal image repositories 

Images can be pulled from both public and private repositories in CI/CD pipelines. Public repositories come with their own restrictions and issues, such as hosting untrusted images. Using a private image registry is the recommended approach for better pipeline security, as we have more control over which images can be pushed to the repository and who can push them, in addition to controls such as automatic image vulnerability scanning.
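
As a simple illustration, pushing a locally built image to a private registry typically looks like the following; the registry host and repository path are hypothetical placeholders:

$ docker build -t api:1.0.0 .
$ docker tag api:1.0.0 registry.internal.example.com/payments/api:1.0.0
$ docker login registry.internal.example.com
$ docker push registry.internal.example.com/payments/api:1.0.0

Once the image lives in the private registry, the registry's controls (who may push, who may pull, mandatory scanning and so on) apply to every pipeline that consumes it.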

Private image repositories let us define specific policies and prevent deployment of images that do not comply with them. For instance, we can set a policy stating that an image with high-risk vulnerabilities cannot be deployed, or that a library with a particular license cannot be used in our environment because it can lead to legal complications.
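
The same kind of policy can also be enforced inside the pipeline itself. As one illustrative option (not prescribed by this article), an open-source scanner such as Trivy can fail a build stage when high-risk vulnerabilities are found; the image name below is a placeholder:

$ trivy image --severity HIGH,CRITICAL --exit-code 1 registry.internal.example.com/payments/api:1.0.0

A non-zero exit code stops the stage, so images with high or critical findings never reach the deployment step.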

Public registry services such as Docker Hub come with their own downsides. Anyone with a Docker Hub account can upload a malicious image, and it is the users' responsibility to decide whether or not to use these images. On the other hand, private registries provide the flexibility to control who can push and pull images. In addition, role-based access control can be enforced for granular control over users, for example by specifying which department or environment a specific user has access to.

Private image repositories can enforce image signing so that only signed images are pushed and used in a given environment. One of the key features of private repositories is vulnerability scanning of Docker images. Most private registries support policy-driven vulnerability scanning, so automated vulnerability scans can be performed at various stages. For instance, we can enforce a vulnerability scan immediately after a new image is uploaded to the private registry. Similarly, we can trigger a scan whenever there is an update to the image or whenever a new CVE is released.
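
On the signing side, one widely available mechanism is Docker Content Trust. Assuming the registry supports it (via Notary), enabling it is a one-line change on the client that builds and pushes images; this is a sketch against the same hypothetical registry used above:

# Sign images on push and refuse unsigned images on pull
$ export DOCKER_CONTENT_TRUST=1
$ docker push registry.internal.example.com/payments/api:1.0.0

With DOCKER_CONTENT_TRUST=1 set, docker push signs the pushed tag and docker pull refuses tags that lack valid signature data.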

Hardening the build infrastructure 

Hardening the build infrastructure is crucial to prevent software supply chain attacks. Access to the build infrastructure should be granted only to authorized users. A malicious user gaining access to the build infrastructure can do massive damage, both to the infrastructure itself and to the users and systems that consume the builds produced on it. Traditional hardening, such as updating the operating system, installing security patches and limiting the network-level exposure of services, should also be done.

The build system should preferably not contain any secrets that may give access to other systems; secrets should always be retrieved at runtime from a secrets vault. In addition, user access and the code used for the build process should be regularly audited. Another common issue on build systems is running build tools with root privileges. This should be avoided, and least privilege should be applied wherever possible.
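
For build-time secrets specifically, one hedged illustration is Docker BuildKit's secret mounts, which expose a secret to a single RUN instruction without writing it into any image layer; the secret id, token file and internal package index URL below are hypothetical:

# syntax=docker/dockerfile:1
# Dockerfile fragment: the token is only visible during this RUN instruction
RUN --mount=type=secret,id=pip_token \
    PIP_INDEX_URL="https://user:$(cat /run/secrets/pip_token)@pypi.internal.example.com/simple" \
    pip3 install -r requirements.txt

The image is then built with the secret supplied from outside the Dockerfile:

$ DOCKER_BUILDKIT=1 docker build --secret id=pip_token,src=./pip_token.txt -t api:1.0.0 .

The token never appears in the image layers or in docker history output.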

Code should be committed to a repository from a developer machine with appropriate controls, and strict restrictions should be applied to code commits. For instance, pre-commit security checks should be performed on the code being committed, and developers should sign their commits using a hardware token so each commit is trusted and verifiable. Developers should only be able to commit to their own projects, and authentication controls should be implemented for access to the source code repositories. When code is pulled from the repository, the same rules apply: the user, process or system should be appropriately authenticated and authorized. Similarly, any tasks performed on the build server should be verifiable.
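
A minimal sketch of signed commits with Git and GPG follows; the key ID is a placeholder and, in practice, the private key would live on a hardware token such as a smart card:

# Tell Git which key to use and sign every commit by default
$ git config --global user.signingkey ABCD1234DEADBEEF
$ git config --global commit.gpgsign true

# Commits are now signed; reviewers and CI jobs can verify the signatures
$ git commit -m "Add input validation to api.py"
$ git log --show-signature -1

Many repository platforms can additionally be configured to reject unsigned or unverifiable commits.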

On top of all the measures discussed, immutable logs should be maintained so that regular audits can be performed on the entire process.


Conclusion

Hardening the build infrastructure is key to securing CI/CD environments. After doing all the heavy lifting of securing application code, images and container runtimes, leaving the build infrastructure open to attacks can waste all of these efforts.

Use of internal image repositories can also add great value in controlling the images built and used in the environment.

Sources

https://reproducible-builds.org/

https://www.csoonline.com/article/3601508/solarwinds-supply-chain-attack-explained-why-organizations-were-not-prepared.html

https://www.bankinfosecurity.com/improving-supply-chains-verified-reproducible-builds-a-15816

Srinivas

Srinivas is an Information Security professional with 4 years of industry experience in Web, Mobile and Infrastructure Penetration Testing. He is currently a security researcher at Infosec Institute Inc. He holds the Offensive Security Certified Professional (OSCP) certification. He blogs at www.androidpentesting.com. Email: srini0x00@gmail.com