Applications on any operating system need input to function, and containers are no exception: they need instructions to run and perform their intended operations. However, given the scale at which containers are deployed within organizations, managing them manually is impractical.
This is where container runtimes come in. While runtimes solve that operational challenge, they also manage the interface between the container and the host system, which introduces new security challenges. This blog post will examine container runtimes and their function within a container architecture, as well as key considerations for security teams.
What is a Container Runtime?
A container runtime is a software program designed to unpack a container image and translate it into a running process on a computer. The runtime interacts with its environment – whether in the cloud, on a bare-metal server, or on a Linux host – to perform its functions, and its core job is to pull the container image and turn everything within it into a functioning application.
Container runtimes are thus essential when using containerized workloads. Without a runtime, containers remain static, non-functional images of applications. Runtimes in all their variants ensure containers operate effectively throughout their entire lifecycle.
Container Runtime Functions and Responsibilities
Container runtimes are designed to facilitate the execution and management of containers. They have three core responsibilities within the container ecosystem:
- Container Execution
- Interaction with the host OS
- Resource allocation and management
Container Execution
Container runtimes execute containers and manage them throughout their entire lifecycle. This includes monitoring container health and restarting the container if it fails during normal operation. Runtimes also clean up once the container completes its tasks.
Interacting with the Host Operating System
Container runtimes use features of the operating system, like namespaces and cgroups, to isolate and manage resources for container workloads. The goal is to isolate containers so that the processes inside them cannot disrupt the host or other containers, keeping the environment secure.
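As a rough illustration of what this looks like at the kernel level, the sketch below launches a shell in new UTS, PID, and mount namespaces using Go's syscall package. It is a simplification of the isolation step a low-level runtime performs, not how any particular runtime is implemented, and it assumes a Linux host and root privileges.

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"syscall"
)

// Launch a shell inside new UTS, PID, and mount namespaces.
// This mirrors, in miniature, the isolation step a low-level
// runtime performs when it creates a container.
func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		// Each flag requests a separate namespace for the child process.
		Cloneflags: syscall.CLONE_NEWUTS | // hostname isolation
			syscall.CLONE_NEWPID | // process-ID isolation
			syscall.CLONE_NEWNS, // mount isolation
	}
	if err := cmd.Run(); err != nil {
		log.Fatalf("failed to start isolated process: %v", err)
	}
}
```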
Container Resource Allocation
Container runtimes allocate and regulate CPU, memory, and I/O for each container. In doing so, they prevent any single container from monopolizing resources, a potential major stumbling block in multi-tenant environments. The efficient management of these resources, in coordination with the host OS, is one of the reasons containerization is so widely adopted in modern software development.
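Under the hood, these limits typically end up as entries in the cgroup filesystem. The sketch below is a simplified illustration only: it assumes cgroup v2 mounted at /sys/fs/cgroup, root privileges, and that the memory and cpu controllers are enabled in the parent's cgroup.subtree_control. The group name "demo" is hypothetical, and real runtimes use dedicated libraries rather than writing these files directly.

```go
package main

import (
	"log"
	"os"
	"path/filepath"
	"strconv"
)

// Create a cgroup and apply memory and CPU limits to the current process.
func main() {
	cg := "/sys/fs/cgroup/demo"
	if err := os.MkdirAll(cg, 0o755); err != nil {
		log.Fatal(err)
	}
	limits := map[string]string{
		"memory.max": "268435456",    // 256 MiB memory ceiling
		"cpu.max":    "50000 100000", // 50% of one CPU (quota/period in microseconds)
	}
	for file, value := range limits {
		if err := os.WriteFile(filepath.Join(cg, file), []byte(value), 0o644); err != nil {
			log.Fatal(err)
		}
	}
	// Move the current process into the cgroup so the limits apply to it.
	pid := strconv.Itoa(os.Getpid())
	if err := os.WriteFile(filepath.Join(cg, "cgroup.procs"), []byte(pid), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("resource limits applied to cgroup", cg)
}
```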
How Does a Container Runtime Work?
Back in 2015, the Open Container Initiative (OCI) was launched to define common standards for container formats and container runtimes. Although Docker created the OCI, it has since been supported by major cloud-native companies like Amazon Web Services, IBM, Microsoft, Alibaba Cloud, and OpenStack.
OCI, also backed by the Linux Foundation, defines standards that runtimes must follow, based on three key specifications:
1. What the actual container image includes
This defines the contents of a container image, including application code, dependencies, libraries, and configurations that make the container functional (a simplified example appears after this list).
2. How runtimes can retrieve container images
Container runtimes must follow specific protocols to fetch container images from registries or repositories, ensuring images can be distributed and retrieved consistently across platforms.
3. How container images are unpacked, layered, mounted, and executed
Runtimes must adhere to rules for how images are structured, layered, and executed, ensuring they can be efficiently decompressed, mounted, and run on any OCI-compliant platform.
Following standards ensures interoperability within the container management ecosystem.
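To make the image side of this concrete, the sketch below decodes a trimmed-down OCI image manifest into a minimal Go struct. The struct covers only a few fields of the actual specification, and the digests in the example JSON are placeholders; this is illustrative, not a complete implementation of the image spec.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Minimal, illustrative subset of an OCI image manifest.
// The real spec defines many more fields (annotations, artifact types, etc.).
type Descriptor struct {
	MediaType string `json:"mediaType"`
	Digest    string `json:"digest"`
	Size      int64  `json:"size"`
}

type Manifest struct {
	SchemaVersion int          `json:"schemaVersion"`
	MediaType     string       `json:"mediaType"`
	Config        Descriptor   `json:"config"` // image configuration (entrypoint, env, ...)
	Layers        []Descriptor `json:"layers"` // filesystem layers, applied in order
}

func main() {
	// Example manifest body, shortened and with placeholder digests.
	raw := []byte(`{
	  "schemaVersion": 2,
	  "mediaType": "application/vnd.oci.image.manifest.v1+json",
	  "config": {
	    "mediaType": "application/vnd.oci.image.config.v1+json",
	    "digest": "sha256:0000000000000000000000000000000000000000000000000000000000000000",
	    "size": 1470
	  },
	  "layers": [
	    {
	      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
	      "digest": "sha256:1111111111111111111111111111111111111111111111111111111111111111",
	      "size": 2811321
	    }
	  ]
	}`)

	var m Manifest
	if err := json.Unmarshal(raw, &m); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("config digest: %s, layers: %d\n", m.Config.Digest, len(m.Layers))
}
```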
In terms of process, a runtime follows this basic flow based on the OCI standards (a minimal sketch of the same lifecycle follows the list):
- The runtime is asked to create a container instance, given a unique identifier and the location of the unpacked image (the bundle).
- The runtime reads and verifies the container image’s configuration.
- The container’s root filesystem is mounted, and namespaces ensure isolation from the host and other containers.
- Resource limits are enforced by cgroups so that the container operates within defined quotas.
- A start command is issued, launching the main process specified in the configuration with its root filesystem set to the mount point created in the previous steps. The container can therefore only see its own files and must operate within the defined quotas and security policies.
- Once the workload finishes, a stop is issued to shut down the container instance.
- The container instance is deleted, which removes all references to it and cleans up the filesystem.
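The same create/start/kill/delete sequence can be driven by hand against a low-level runtime. The sketch below shells out to the runc CLI from Go; it assumes runc is installed, the program runs as root, and an OCI bundle (a root filesystem plus a config.json with terminal set to false) already exists at the hypothetical path ./bundle.

```go
package main

import (
	"log"
	"os/exec"
	"time"
)

// run executes a runc subcommand and aborts on failure.
func run(args ...string) {
	if out, err := exec.Command("runc", args...).CombinedOutput(); err != nil {
		log.Fatalf("runc %v failed: %v\n%s", args, err, out)
	}
}

func main() {
	const id = "demo" // hypothetical container ID

	// Create: set up namespaces, cgroups, and the root filesystem described
	// by ./bundle/config.json, without starting the workload yet.
	run("create", "--bundle", "./bundle", id)

	// Start: launch the container's main process.
	run("start", id)

	// Give the workload a moment to run (a real caller would wait on it).
	time.Sleep(2 * time.Second)

	// Kill: send SIGTERM to the container's init process.
	run("kill", id, "SIGTERM")

	// Delete: remove the stopped container and its state.
	run("delete", "--force", id)
}
```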
Most of the time, users have no direct interaction with this process. Instead, they will use an orchestration platform like Kubernetes or a higher-level container engine, which handles container scheduling, scaling, and monitoring. Regardless, the above process is how runtimes interact with containers, even if not directly exposed to users.
What are the Types of Container Runtimes?
There are three general types of container runtimes, separated out primarily based on how close they are to the individual containers themselves. They are:
- Low-level runtimes
- High-level runtimes
- Sandboxed and virtualized runtimes
Low-level Container Runtimes
Low-level container runtimes are the closest to the Linux kernel. They’re responsible for launching the containerized process and have the most direct interaction with containers. Low-level runtimes are also the ones that implement the OCI runtime specification, as their focus is container lifecycle management. These are the basic building blocks that make containers possible and do the actual unpacking, creating, starting, and stopping of container instances.
The types of low-level container runtimes are:
- runc — The de facto standard low-level container runtime. Docker originally created it and donated it to the OCI in 2015.
- runhcs — A fork of runc that Microsoft created to run containers on Windows machines.
- crun — A runtime focused on being small and efficient, with a binary of roughly 300 KB.
- containerd — This runtime straddles the border between low-level and high-level because it includes an API layer. Users often interact with it through container engines like Docker or orchestrators like Kubernetes, although it can also be accessed directly through its API for advanced use cases (as sketched below).
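For example, containerd's documented Go client can drive the container lifecycle directly. The sketch below follows the pattern of containerd's getting-started example (1.x import paths; names and paths may differ between versions) and assumes a containerd daemon is listening on its default socket with permission to pull docker.io/library/redis:alpine.

```go
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the containerd daemon over its default socket.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// containerd scopes all resources by namespace; "example" is arbitrary.
	ctx := namespaces.WithNamespace(context.Background(), "example")

	// Pull and unpack an image (the image and distribution specs in action).
	image, err := client.Pull(ctx, "docker.io/library/redis:alpine", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create a container backed by a snapshot of the image, with an OCI
	// runtime spec generated from the image's configuration.
	container, err := client.NewContainer(ctx, "redis-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("redis-demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// A task is the running instance of the container.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	// Register for the exit event before starting, then start the workload.
	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}

	// Let it run briefly, then stop it and wait for the exit status.
	time.Sleep(3 * time.Second)
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}
	<-exitCh
}
```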
High-level Container Runtimes
High-level container runtimes, as a general rule, offer greater abstraction than their low-level counterparts. They’re responsible for transporting and managing container images, unpacking them, and handing off to a low-level runtime to run the container. These are the runtimes users interact with directly. A few examples include:
- Docker (containerd) — Docker, which uses containerd under the hood, is the leading container system and was long the most common Kubernetes container runtime. It provides image specifications, a command-line interface, and a container image-building service, among other components.
- CRI-O — An open-source implementation of the Kubernetes container runtime interface (CRI) and an alternative to rkt and Docker. It runs pods through OCI-compatible runtimes, primarily runc and Kata Containers, although any OCI-compliant runtime can be used.
- Windows Containers and Hyper-V Containers — Two alternatives to Windows Virtual Machines (VMs), available on Windows Server. Windows containers offer process-level abstraction, much like Docker containers on Linux, while Hyper-V containers add a layer of virtualization. Hyper-V containers offer additional security benefits because each has its own kernel, which also lets companies run applications that are incompatible with the host system. However, they can introduce performance overhead compared to regular Windows containers due to that virtualization.
- Podman — Red Hat built Podman as a more secure alternative to Docker’s original daemon-based design. Along with Buildah and Skopeo, Podman aims to provide the same high-quality experience as Docker. Unlike Docker, which uses a centralized daemon, Podman has no long-running process and runs containers as individual processes.
Sandboxed or Virtualized Runtimes
The OCI standards also provide guidance for sandboxed and virtualized runtimes:
- Sandboxed runtimes — These runtimes offer greater isolation between the containerized process and the host because they don’t share a kernel. The runtime process runs on a unikernel or kernel proxy layer, which interacts with the host kernel, thus reducing the attack surface. Some examples are gVisor and nabla-containers.
- Virtualized runtimes — The container process runs in a virtual machine through a VM interface rather than directly on the host kernel. This offers greater host isolation but can add overhead compared to a native runtime. Kata Containers is one example of this class (the sketch below shows how an alternative runtime is selected).
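Selecting one of these runtimes is usually a configuration choice rather than a code change: container engines let you pick an alternative OCI runtime per container. As a simple illustration, assuming gVisor's runsc is installed and registered under "runtimes" in Docker's daemon configuration, the sketch below shells out to Docker and requests runsc for a single container.

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	// Ask Docker to run this container under gVisor's runsc instead of the
	// default runc. Inside the sandbox, system calls are handled by gVisor's
	// user-space kernel rather than the host kernel.
	out, err := exec.Command("docker", "run", "--rm",
		"--runtime=runsc", // select the sandboxed runtime
		"alpine", "echo", "hello from a sandboxed container",
	).CombinedOutput()
	if err != nil {
		log.Fatalf("docker run failed: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}
```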
Common Container Runtime Tools: Engines vs. Orchestrators
Within the container management universe, there are three classes of tools and processes used to operate containers no matter where they are hosted.
- Container runtime — Operates the container directly, as has been discussed.
- Container engine — A software program that accepts user requests through a command-line interface, pulls images, and, from the user’s perspective, runs the container.
- Container orchestrator — A piece of software that manages sets of containers across different computing resources, handles network and storage configurations, and delegates to different runtimes.
It’s important to note that container engines can sometimes operate like runtimes and can be used from within other tools like orchestrators. The difference among these is a matter of scale: runtimes operate directly on individual containers, engines provide user-facing interfaces and hand individual containers off to runtimes, and orchestrators manage sets of containers across multiple environments.
What CISOs Need to Understand About Container Runtimes
As containers become more common in more contexts, security teams and CISOs need to pay closer attention to keeping the runtime phase protected against attack and unintentional error. Much like with traditional applications, it’s easy for developers and DevOps teams to introduce mistakes or misconfigurations within containers that are not found until they try to unpack and run the image.
As a result, CISOs need to ensure that mandatory scanning for vulnerabilities is included in container deployment processes, such as static scanning prior to deployment, sandboxing for dynamic testing, and behavior monitoring for runtime environments. Ensuring that this testing occurs can prevent the addition of unintentional errors and protect containers from known vulnerabilities.
Runtime is an especially critical phase for testing, whether the workload involves containers, APIs, or traditional applications. After all, static code or image scanning can only surface a limited set of issues and known vulnerabilities; dynamic testing is thus vital for catching the rest.
A few of the common best practices CISOs and security teams can implement include:
- Securing container registries — Implementing access control for registries and image signing to ensure trackability and security.
- Securing container deployment — Hardening the host operating system, using strong firewall rules, implementing role-based access control, and ensuring that containers are deployed with the least privilege necessary (see the sketch after this list).
- Monitoring container activity — Understanding how containers act and monitoring them for any irregular activity is critical for ensuring the security of the container as well as the underlying host OS.
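To make least privilege concrete, the sketch below launches a container with a hardened baseline by shelling out to the Docker CLI from Go. The flags shown (dropped capabilities, a read-only root filesystem, no-new-privileges, a non-root user, and resource caps) are one common starting point rather than a complete policy, and the image name is a placeholder.

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Run a container with a least-privilege baseline.
	cmd := exec.Command("docker", "run", "--rm",
		"--cap-drop=ALL", // drop all Linux capabilities
		"--security-opt", "no-new-privileges:true", // block privilege escalation (e.g. setuid binaries)
		"--read-only",         // immutable root filesystem
		"--user", "1000:1000", // run as a non-root user
		"--memory", "256m", "--cpus", "0.5", // enforce resource quotas
		"example.com/my-app:latest", // placeholder image reference
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("hardened container failed to start: %v", err)
	}
}
```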
The runtime phase is a potentially risky part of container deployment, especially if the underlying architecture is not monitored or kept secure. CISOs need to ensure that their teams implement strong controls and testing for containers before and during runtime so that they remain protected against potential compromise.
Upwind Secures Container Runtimes
Cloud-native deployments are becoming more common and, therefore, more critical to secure. As the security implications of container runtimes become better understood and accounted for, vendors operating in the space will likely add more security features to better protect containerized workloads. In the interim, developers, operations professionals, DevOps teams, and security professionals alike need to ensure they can protect their containers against threats.
That’s why cloud-native application protection platforms like Upwind are so vital. Adding container runtime security to track behavior, mitigate risk, and monitor workloads means that developers and DevOps teams can protect their applications and the underlying host more readily. A comprehensive CNAPP also gives CISOs and security teams greater visibility for improved detection and response, along with alerts for potential incidents.
To learn more about Upwind’s container runtime protection solution and get advice on best practices, schedule a demo.