Understanding the container runtime interface (CRI) in Kubernetes

Kubernetes is a complex system whose components work together to manage hundreds or thousands of containers efficiently. It is important to understand these components and how they interact with each other.

The two major categories of Kubernetes components are:

  • Control plane components: These consist of the kube-apiserver, etcd, kube-scheduler, kube-controller-manager, and cloud-controller-manager.
  • Node components: These consist of the kubelet, kube-proxy, the CNI plugin, and the container runtime.

DaemonSets complement these components by running an agent on every node for tasks such as collecting logs and metrics and managing secrets and configurations.

Over time, many components have been decoupled from the Kubernetes core through standard interfaces: the container network interface (CNI), the container runtime interface (CRI), the container storage interface (CSI), and more.

This post will explain CRI and how it helps Kubernetes decouple the functionality of launching and managing containers.

What is CRI?

The container runtime interface (CRI) is an API specification that Kubernetes uses to talk to the runtime managing containers. Any runtime that implements the CRI specification can be plugged into Kubernetes to launch and manage containers in its ecosystem.

Kubernetes pulls images, launches containers, kills containers, and manages other container lifecycle actions via these APIs.

The kubelet communicates with the container runtime over gRPC using CRI, the standard protocol between the kubelet and the runtime. The kubelet calls the CRI APIs and acts as a gRPC client of the container runtime.
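You can speak the same protocol yourself with crictl, a CLI client for CRI-compatible runtimes; each subcommand maps onto a CRI RPC. A minimal sketch, assuming a runtime is listening on containerd's default socket (use unix:///var/run/crio/crio.sock for CRI-O):

# List pod sandboxes the runtime knows about (RuntimeService.ListPodSandbox).
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods

# List all containers (RuntimeService.ListContainers).
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a

# List images (ImageService.ListImages).
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images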

CRI was introduced as an alpha API in Kubernetes version 1.5 (released in late 2016), and its v1 API became stable in version 1.23. The Kubernetes community continuously maintains CRI on GitHub.

Why CRI?

CRI is an interface specification: any container runtime that implements it can work with Kubernetes without changes to Kubernetes itself.

CRI decouples the container runtime from Kubernetes, allowing you to swap the underlying runtime in the future for whichever best fits your needs, such as lighter, faster runtimes like containerd and CRI-O.

How does CRI in K8s work?

CRI provides the interface between the kubelet, the Kubernetes node agent running on each worker node or VM, and the container runtime that runs alongside it. The kubelet acts whenever the desired state of the containers on its node changes.

The kubelet fetches the desired state for its node from the kube-apiserver and then calls the container runtime using the CRI definitions. In response to these kubelet requests, the container runtime makes the necessary changes.
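You can check which runtime a node's kubelet is wired to; a quick sketch, assuming kubectl access to the cluster:

# The CONTAINER-RUNTIME column shows each node's runtime and version,
# e.g., containerd://1.7.2 or cri-o://1.30.0.
kubectl get nodes -o wide

The kubelet locates the runtime through its CRI socket, set with the --container-runtime-endpoint flag (or the containerRuntimeEndpoint field in the kubelet configuration file), for example unix:///var/run/crio/crio.sock.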

Benefits of CRI and why Kubernetes introduced it

Early on, Kubernetes relied on Docker as its default container runtime. This eventually became problematic: Docker evolved into an enterprise-focused platform, and because Docker Engine does not implement CRI, Kubernetes had to maintain a shim to manage the runtime instead of interacting with containers directly. The Docker stack is also heavier than purpose-built runtimes, and sending changes to Docker was not straightforward due to the platform's complexity.

CRI brings some major benefits to K8s users, allowing you to:

  • Change the underlying container runtime easily, without major changes to the Kubernetes cluster
  • Run multiple container runtimes side by side in the same cluster (see the RuntimeClass sketch below)
  • Switch to a more performant runtime easily
  • Keep your cluster running optimally with minimal resources
  • Adopt new advancements in container runtimes as they appear, since the runtime is decoupled from the K8s components

The last point is key: it means container runtimes can evolve separately from core Kubernetes.
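For running multiple runtimes side by side, Kubernetes offers the RuntimeClass resource. Below is a minimal sketch; the handler name crun is an assumption and must match a runtime actually configured on the nodes:

cat <<EOF | kubectl apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: fast-runtime
# "handler" must match a runtime handler configured in your CRI runtime.
handler: crun
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  # Run this pod under the alternate runtime.
  runtimeClassName: fast-runtime
  containers:
  - name: demo
    image: nginx
EOF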

Kubernetes versions and support for CRI

CRI became stable in version 1.23, while support for the older Docker Engine runtime (via dockershim) was deprecated in version 1.20 and removed in version 1.24.

Since version 1.23, the kubelet has preferred the v1 version of the CRI API, falling back to the older v1alpha2 API when v1 was unavailable.

From version 1.26 onward (including current releases such as 1.31), the kubelet requires the container runtime to support CRI v1.

What is CRI-O?

CRI-O is a container runtime implementation of the CRI specification for Kubernetes. This open-source project’s primary objective is to provide an optimized container runtime for Kubernetes clusters.

As with any CRI runtime, the kubelet talks to CRI-O via the CRI APIs and instructs it to perform container operations such as pulling images, creating containers, and managing their lifecycle.

CRI-O support for the Open Container Initiative (OCI)

OCI specifications and standards guide the development of container runtimes. They define how container images are packaged and how containers should be created, run, and managed on the Linux operating system, including the container lifecycle.

CRI-O uses an OCI-compliant runtime, which by default is runc, to launch and run containers. This means that, in the future, any OCI-compliant runtime can be used via CRI-O.
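As a sketch of what that pluggability looks like in practice, CRI-O reads TOML drop-in files from /etc/crio/crio.conf.d/. The snippet below registers a hypothetical alternate runtime under the handler name crun; the binary path is an assumption and should match whatever OCI runtime you actually install:

cat <<EOF | sudo tee /etc/crio/crio.conf.d/10-alt-runtime.conf
[crio.runtime]
# Handler used when a pod does not request one via RuntimeClass.
default_runtime = "runc"

# An additional OCI-compliant runtime, selectable with a RuntimeClass
# whose handler is "crun".
[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"
runtime_type = "oci"
EOF

# Restart CRI-O to load the new configuration.
sudo systemctl restart crio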

Steps in a CRI-O container launch

  1. Whenever a request changes the desired state of containers, the Kubernetes API server conveys the information for a specific node to the kubelet running on that node.
  2. The kubelet passes this information along to CRI-O via the CRI APIs.
  3. CRI-O pulls the specified image from a registry and unpacks it.
  4. CRI-O then creates the container's root filesystem (rootfs) and generates an OCI runtime specification for the container in JSON format.
  5. CRI-O launches an OCI-compatible runtime (the default being runc) to run the container process.
  6. Each container is monitored by a conmon process; conmon is the OCI container runtime monitor.
  7. A CNI plugin then makes sure an IP is allocated to the pod and its networking configuration is in place.
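You can drive this same flow by hand with crictl, assuming CRI-O is running and crictl's runtime endpoint points at it (for example, via /etc/crictl.yaml). The file contents and names below are illustrative:

# Pod sandbox definition.
cat <<EOF > pod-config.json
{
  "metadata": { "name": "demo-pod", "namespace": "default", "uid": "demo-uid-1", "attempt": 1 },
  "log_directory": "/tmp",
  "linux": {}
}
EOF

# Container definition.
cat <<EOF > container-config.json
{
  "metadata": { "name": "demo" },
  "image": { "image": "docker.io/library/nginx:latest" },
  "log_path": "demo.log",
  "linux": {}
}
EOF

# Pull the image (ImageService.PullImage).
sudo crictl pull docker.io/library/nginx:latest

# Create the pod sandbox (RuntimeService.RunPodSandbox); prints a pod ID.
POD_ID=$(sudo crictl runp pod-config.json)

# Create and start the container inside the sandbox.
CTR_ID=$(sudo crictl create "$POD_ID" container-config.json pod-config.json)
sudo crictl start "$CTR_ID"

# Inspect the result.
sudo crictl pods
sudo crictl ps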

Installation of CRI-O as a container runtime

Before installing CRI-O, you will need to add a Kubernetes repository and a CRI-O repository. First, though, a couple of system settings must be enabled.

Enable IPv4 packet forwarding:

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF

Apply the sysctl settings without a reboot:

sudo sysctl --system
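
Verify that the setting took effect (it should report net.ipv4.ip_forward = 1):

sysctl net.ipv4.ip_forward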

The Kubernetes documentation covers other configuration options as well.

Add the Kubernetes repo (set $KUBERNETES_VERSION to the desired release stream first, for example v1.30):

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/$KUBERNETES_VERSION/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/$KUBERNETES_VERSION/rpm/repodata/repomd.xml.key
EOF

Add the CRI-O repo (set $CRIO_VERSION similarly, for example v1.30):

cat <<EOF | sudo tee /etc/yum.repos.d/cri-o.repo
[cri-o]
name=CRI-O
baseurl=https://pkgs.k8s.io/addons:/cri-o:/stable:/$CRIO_VERSION/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/addons:/cri-o:/stable:/$CRIO_VERSION/rpm/repodata/repomd.xml.key
EOF

Install dependencies:

sudo dnf install -y container-selinux

Install CRI-O along with the Kubernetes tools:

sudo dnf install -y cri-o kubelet kubeadm kubectl

Start CRI-O:

sudo systemctl start crio.service
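
Verify that CRI-O is up and serving the CRI API, assuming crictl is available (it ships in the cri-tools package, pulled in by the Kubernetes packages). The socket path below is CRI-O's default:

sudo systemctl status crio.service

# RuntimeApiVersion in the output should read v1.
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version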

The CRI-O documentation also covers installation steps for Debian-based and other systems.

Why CRI-O?

There are multiple reasons to choose CRI-O, a major one being its large community and its close relationship with Kubernetes. Following the OCI runtime specifications, CRI-O is optimized for Kubernetes and is lighter weight than many other runtimes available. It also works with any registry and image that follows the OCI specifications.

Backed by Red Hat, Intel, SUSE, IBM, and other major players, CRI-O is highly reliable. It is also a graduated project of the Cloud Native Computing Foundation (CNCF), which puts another large community behind it and boosts its trustworthiness.

Conclusion

CRI, together with OCI, has decoupled container runtime implementations from Kubernetes; this has enabled the community to experiment with runtimes independently and adopt better options as they emerge. With standards such as CSI, CRI, CNI, and OCI, developers can modify and run their own flavor of each layer.

This results in a faster release cycle for each component and the removal of hard dependencies on tools such as Docker.

To stay on top of new developments in this continuously evolving space, make sure to follow the CRI community on GitHub.
