Docker: A brief history and security considerations for modern environments
Blog post 29 November 2018, by Dave Wurtz, security specialist at Secura
Some of our technical security specialists attended the security conference Hack.lu from 16 to 18 October, to gain knowledge and to learn some new tricks and tools. In this blog post, Dave Wurtz shares a brief history of container technology and his insights from the workshop given by Paul Amar on Docker security and best practices.
During the development of Unix in 1979, a new system call was introduced. This 'chroot' system call changed the root directory of a process and its children to a new location in the file system. With the introduction of this call, process isolation was born.
In the years that followed, the technique gradually evolved. For example, FreeBSD jails were introduced in the year 2000, followed by the Linux-VServer virtualisation environment in 2001.
The name 'containers' was first introduced in 2004, during the first public beta of 'Solaris Containers'. 'LinuX Containers' (LXC), introduced in 2008, was the first Linux container manager that did not need a custom kernel. It was also the most complete container manager software of its time.
In 2013 Docker emerged. This container manager had its roots in LXC, but a custom library was developed and replaced LXC in later releases. Docker set itself apart from competitors by creating a complete ecosystem for container management.
As containers are executed by the container management engine (as opposed to a hypervisor), they are not fully isolated. The trade-off, however, is a small footprint: unlike full virtualisation products, container technology does not create an entire virtual operating system. Instead, all components which are required by the application, and are not already running on the host machine, are packaged up inside the container with the application. Since the host kernel is shared among containers, applications ship only with what they need to run: no more, no less. This makes Docker applications easier and more lightweight to deploy, and faster to start up, than virtual machines.
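To illustrate how little a container image needs to ship, the following is a minimal sketch. The file names, the example application and the image tag are made up for this post; building the image only happens when a Docker daemon is actually available.

```shell
# Hypothetical minimal image definition: the container ships only the
# application and its own dependencies; the kernel comes from the host.
printf 'print("hello from the container")\n' > app.py

cat > Dockerfile.example <<'EOF'
FROM alpine:3.8
RUN apk add --no-cache python3
COPY app.py /app.py
CMD ["python3", "/app.py"]
EOF

# Building requires a Docker daemon, so only attempt it when one is present.
if command -v docker >/dev/null 2>&1; then
    docker build -f Dockerfile.example -t demo-app .
fi
```

The resulting image contains Alpine's userland, Python and one script, typically a few tens of megabytes, rather than a full guest operating system.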
The need for securing container technology
As container technology sees ever wider use, securing it becomes more important than ever. Vulnerabilities such as 'Dirty Cow' gave attackers inside a container the ability to change protected files on the host system. This demonstrated the need for adequate security.
For example, the National Institute of Standards and Technology (NIST) has released a container security guide with practical recommendations for addressing the container environment's specific security challenges. A copy of this document can be retrieved from https://nvlpubs.nist.gov/nistpubs/specialpublications/nist.sp.800-190.pdf
There is also a CIS benchmark available at: https://docs.docker.com/compliance/cis/docker_ce/
Workshop by Paul Amar on Docker security and best practices for implementing Docker
During the Hack.lu 2018 conference a workshop was given by Paul Amar, who is a Security System Engineer at Michelin. Paul explained the fundamentals of Docker security and best practices for implementing Docker.
Paul started with an introduction to process isolation, where namespaces provide the first and most straightforward form of isolation. One of the possibilities of namespaces, for example: a root user created within the Docker container can be remapped to an ordinary user on the host system. This mapped user functions as root within the namespace, but has no (root) privileges on the host machine itself. Namespaces are used in combination with control groups. These allow Docker to share available hardware resources with containers and optionally enforce limits and constraints. For example, a limit can be set on the memory available to a specific container. This limits the impact on the host system of a denial-of-service attack against that container.
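Both mechanisms can be sketched briefly. The daemon configuration below is written to a local example file for illustration (the real daemon reads /etc/docker/daemon.json), and the memory-capped container only runs when a Docker daemon is available.

```shell
# User-namespace remapping: root (UID 0) inside containers maps to an
# unprivileged UID range on the host. The value "default" tells Docker
# to create and use the 'dockremap' user for this.
cat > daemon.json.example <<'EOF'
{
  "userns-remap": "default"
}
EOF

# Control groups: cap a container's memory so a denial-of-service inside
# it cannot exhaust the host's resources.
if command -v docker >/dev/null 2>&1; then
    docker run --rm --memory=256m --memory-swap=256m alpine:3.8 \
        sh -c 'echo "running with a 256 MiB memory cap"'
fi
```

With the cap in place, a process inside the container that tries to allocate beyond 256 MiB is killed by the kernel, while the host keeps running normally.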
Paul then moved gradually towards more complex protection mechanisms like 'Secure Computing Mode' profiles, or 'seccomp' for short. A seccomp profile is a whitelist which denies access to system calls by default, then allows specific actions (system calls). This feature can be used to restrict which system calls a container application may make. A seccomp profile can be made by hand, though the configuration of such a profile is complex, as a regular application uses a broad collection of system calls to support its functionality. Docker therefore provides a sane default profile for running containers with seccomp, which blocks around 44 of the 300+ available system calls. It is moderately protective while providing wide application compatibility.
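A hand-written profile looks roughly as follows. This is an illustrative sketch, not Docker's default profile, and the short allow-list shown here is far too small for a real application; it only demonstrates the deny-by-default structure.

```shell
# Minimal seccomp profile: deny every system call by default
# (SCMP_ACT_ERRNO), then explicitly allow a handful of calls.
# A real program needs considerably more than these six.
cat > seccomp-minimal.json <<'EOF'
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "exit", "exit_group", "execve", "brk"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
EOF

# Usage (assuming a Docker daemon is available):
#   docker run --security-opt seccomp=seccomp-minimal.json alpine:3.8 <cmd>
```

For most workloads, starting from Docker's default profile and removing what is not needed is more practical than building a profile from scratch.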
Paul then moved on to the network segmentation options available in Docker. Docker uses different techniques to create separate networks in which containers communicate with each other or with the underlying host system. An overlay network, for example, creates an additional layer of network abstraction running on top of a physical network, and can be used in a multi-host environment. Containers can communicate with the outside world using port mappings, where an internal port inside the container is mapped to an external port on the host system.
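The segmentation and port-mapping options can be sketched as below. The network and container names are invented for this example, and the docker commands only run when a daemon is present.

```shell
# Host port 8080 will forward to port 80 inside the container.
HOST_PORT=8080
CONTAINER_PORT=80
PORT_MAPPING="${HOST_PORT}:${CONTAINER_PORT}"

if command -v docker >/dev/null 2>&1; then
    # A user-defined bridge network isolates these containers from
    # others on the same host.
    docker network create --driver bridge backend-net

    # Attach a container to that network and publish one port.
    docker run -d --name web --network backend-net \
        -p "$PORT_MAPPING" nginx:alpine

    # A multi-host overlay network would instead be created on a
    # swarm manager:
    #   docker network create --driver overlay --attachable cluster-net
fi
```

Containers on backend-net can reach each other by name, while only the explicitly published port 8080 is reachable from outside the host.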
The next point on the list of the workshop was Docker auditing, where Paul described the use of 'Docker Bench for Security' (https://github.com/docker/docker-bench-security.git) and Clair (https://github.com/coreos/clair). Docker Bench for Security checks a host's Docker configuration against best practices, while Clair scans container images for known vulnerabilities.
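Running the first of the two tools is straightforward; the sketch below assumes git, docker and sudo are available and skips the audit otherwise.

```shell
# Audit the local Docker host with Docker Bench for Security.
BENCH_REPO="https://github.com/docker/docker-bench-security.git"

if command -v docker >/dev/null 2>&1 && command -v git >/dev/null 2>&1; then
    git clone "$BENCH_REPO"
    cd docker-bench-security
    # -l writes the findings (PASS/INFO/WARN per CIS check) to a log file.
    sudo sh docker-bench-security.sh -l bench-report.log
    cd ..
fi
```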
As a last point of the workshop, Paul explained how to enable encryption for the Docker socket. If this socket needs to be reachable via the network in a safe manner, it should only accept connections from clients or servers authenticated by a certificate signed by a trusted certificate authority.
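A TLS-protected daemon configuration can be sketched as follows. The certificate paths are placeholders, and generating the CA, server and client certificates (for example with openssl) is a separate step; the fragment is written to a local example file here rather than to /etc/docker/daemon.json.

```shell
# Daemon-side TLS settings: require clients to present a certificate
# signed by the trusted CA before they may use the TCP socket.
cat > daemon-tls.json.example <<'EOF'
{
  "tlsverify": true,
  "tlscacert": "/etc/docker/certs/ca.pem",
  "tlscert": "/etc/docker/certs/server-cert.pem",
  "tlskey": "/etc/docker/certs/server-key.pem",
  "hosts": ["tcp://0.0.0.0:2376", "unix:///var/run/docker.sock"]
}
EOF

# A client then authenticates with its own signed certificate:
#   docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem \
#          --tlskey=key.pem -H tcp://docker-host.example:2376 info
```

Port 2376 is the conventional port for the TLS-protected Docker socket; the plain-text port 2375 should never be exposed.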
Guidelines to use Docker
Paul ended the workshop with the following guidelines regarding the use of Docker:
- Use minimal and certified images, for example 'alpine'.
- Use images pulled with content trust, for example via Black Duck, Artifactory, DTR and others.
- Scan images nightly with Clair-scanner and generate reports of the scan results.
- Check your host implementation with Docker CIS.
- Push to your consumers with content trust (Artifactory).
- Analyse results from Docker Security Scanning.
- TLS encrypt everything and authenticate with certificates.
- Use read-only volumes and containers where possible (container policies).
- Separate networks whenever possible.
- Drop root privileges / unused system calls (seccomp).
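Several of these guidelines can be combined in a single run command. The sketch below is hypothetical (the network name and image are examples); it prints the command and would be executed where a Docker daemon is available.

```shell
# Hardened run: minimal base image, read-only filesystem, no privilege
# escalation, all capabilities dropped, a memory cap and a dedicated
# network.
RUN_CMD="docker run --rm \
  --read-only \
  --security-opt no-new-privileges \
  --cap-drop ALL \
  --memory=128m \
  --network backend-net \
  alpine:3.8 echo hardened"

echo "$RUN_CMD"
```

Individual capabilities or write-mounts can then be added back one by one, only where the application demonstrably needs them.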
At Secura we keep a keen eye on the news regarding such technologies in order to incorporate them into our expertise, to ensure you get the best service we can offer.