Kubernetes is a complex system whose components work together to handle hundreds or thousands of containers efficiently. It is important to understand these components and how they interact with each other.
Kubernetes components fall into two major categories: control plane components and node components. On top of these, workloads such as DaemonSets enable Kubernetes to perform different actions on every node, such as collecting logs and metrics and managing secrets and configurations.
We have seen many components getting decoupled from Kubernetes via interfaces, with multiple interfaces coming into the picture: container network interface (CNI), container runtime interface (CRI), container storage interface (CSI), and more.
This post will explain CRI and how it helps Kubernetes in decoupling the functionality of launching and managing containers.
A container runtime interface is an API specification that Kubernetes uses to talk to the runtime managing containers. If anyone implements this CRI specification, Kubernetes can use it to launch containers in its ecosystem.
Kubernetes pulls images, launches containers, kills containers, and manages other container life cycle actions via these APIs.
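As a sketch, the CRI specification defines two gRPC services. The fragment below is simplified from the upstream `k8s.io/cri-api` protobuf definitions; most RPCs and all request/response message types are omitted here:

```proto
// Simplified sketch of the CRI gRPC services (abridged from cri-api's api.proto).
service RuntimeService {
  rpc RunPodSandbox(RunPodSandboxRequest) returns (RunPodSandboxResponse) {}
  rpc CreateContainer(CreateContainerRequest) returns (CreateContainerResponse) {}
  rpc StartContainer(StartContainerRequest) returns (StartContainerResponse) {}
  rpc StopContainer(StopContainerRequest) returns (StopContainerResponse) {}
  rpc RemoveContainer(RemoveContainerRequest) returns (RemoveContainerResponse) {}
}

service ImageService {
  rpc PullImage(PullImageRequest) returns (PullImageResponse) {}
  rpc ImageStatus(ImageStatusRequest) returns (ImageStatusResponse) {}
  rpc RemoveImage(RemoveImageRequest) returns (RemoveImageResponse) {}
}
```

Any runtime that implements these services can act as the container runtime for a Kubernetes node.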
The kubelet uses CRI over gRPC to drive the container lifecycle; CRI is the standard protocol for communication between the kubelet and the container runtime. The kubelet calls these APIs, acting as a client of the container runtime.
CRI was introduced with Kubernetes version 1.5, released in December 2016, and its v1 API became stable in version 1.23; support for the Docker runtime was removed soon after, in version 1.24. The Kubernetes community continuously maintains CRI on GitHub.
CRI provides an interface, and any tool/code/script that implements these interfaces can work easily with CRI.
CRI decouples the container runtime, allowing you to replace the underlying runtime with whichever best fits your needs, including lightweight runtimes such as containerd and CRI-O.
CRI works by providing an interface between the kubelet and the container runtime that runs on each node. The kubelet is a node component running on every worker node or VM. Whenever the desired state of any container on a node changes, the kubelet fetches that information from the kube-apiserver and calls the container runtime using the CRI definitions. In response to these kubelet requests, the container runtime makes the necessary changes.
Kubernetes relied on Docker as its default container runtime early on. This eventually became problematic as Docker turned into an enterprise-focused organization. The Docker runtime is also heavier: because it does not implement CRI directly, Kubernetes needed a shim (dockershim) to manage it rather than interacting with containers directly. Sending changes upstream to Docker was also not straightforward due to the platform's complexity.
CRI brings some major benefits to K8s users, allowing you to:
- Replace the underlying container runtime with the one that best fits your needs
- Let container runtimes evolve separately from core Kubernetes
The last bullet is key, as it means container runtimes can improve and release independently of the Kubernetes release cycle.
The CRI v1 API became stable in version 1.23, while support for the older Docker runtime (dockershim) was deprecated in version 1.20 and removed in version 1.24.
In version 1.26, the kubelet began connecting with v1 of CRI by default. Version 1.31 likewise prefers the v1 version of CRI if present; if not, it falls back to older supported versions.
CRI-O is a container runtime implementation of the CRI specification for Kubernetes. This open-source project’s primary objective is to provide an optimized container runtime for Kubernetes clusters.
As before, the kubelet talks to CRI-O via the CRI APIs and instructs it to perform container operations such as pulling images, creating containers, and managing their lifecycle.
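You can issue the same kinds of CRI calls manually with crictl, the CLI client for CRI-compatible runtimes. A minimal sketch, assuming CRI-O's default socket path (adjust the endpoint for other runtimes):

```shell
# Talk CRI directly to the runtime with crictl; the socket check keeps the
# script from erroring on machines where CRI-O is not installed.
RUNTIME_ENDPOINT="unix:///var/run/crio/crio.sock"
if [ -S /var/run/crio/crio.sock ]; then
  crictl --runtime-endpoint "$RUNTIME_ENDPOINT" version  # runtime name and version
  crictl --runtime-endpoint "$RUNTIME_ENDPOINT" ps       # list running containers
else
  echo "CRI socket not found at $RUNTIME_ENDPOINT"
fi
```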
OCI specifications and standards guide the development of container runtimes. They define how containers interact with the Linux operating system and how the container lifecycle should be managed on Linux.
CRI-O uses an OCI-compliant runtime, which by default is runc, to launch and run the containers. This makes sure that in the future, any OCI-compliant runtime can be used via CRI-O.
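For example, CRI-O's configuration (typically /etc/crio/crio.conf) lets you select which OCI runtime it launches containers with. A sketch of the relevant section, assuming the stock runc entry:

```toml
# /etc/crio/crio.conf (excerpt) -- the OCI runtime CRI-O hands containers to
[crio.runtime]
default_runtime = "runc"

[crio.runtime.runtimes.runc]
runtime_path = ""        # empty means look up "runc" on $PATH
runtime_type = "oci"
```

Pointing `default_runtime` at another OCI-compliant runtime entry is how you would swap runc out later.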
To install CRI-O, you will need to add a Kubernetes repository and a CRI-O repository. Before that, a couple of kernel settings must be enabled.
Enable IPv4 packet forwarding:
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF
Apply the sysctl setting:
sudo sysctl --system
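To confirm the change took effect, read the parameter back; on a correctly configured node it reports a value of 1:

```shell
# Read back the forwarding setting; /proc serves as a fallback when the
# sysctl binary is not on PATH.
sysctl net.ipv4.ip_forward 2>/dev/null || cat /proc/sys/net/ipv4/ip_forward
```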
Kubernetes documentation features other configuration options as well.
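Note that the repository definitions below expand $KUBERNETES_VERSION and $CRIO_VERSION, so set both in your shell first. Hypothetical example values (match the CRI-O stream to your Kubernetes minor version):

```shell
# Example version streams -- substitute the releases you actually target.
export KUBERNETES_VERSION=v1.31   # Kubernetes package stream
export CRIO_VERSION=v1.31         # CRI-O stream matching the Kubernetes minor
echo "Kubernetes: $KUBERNETES_VERSION, CRI-O: $CRIO_VERSION"
```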
Add the Kubernetes repo:
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/$KUBERNETES_VERSION/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/$KUBERNETES_VERSION/rpm/repodata/repomd.xml.key
EOF
Add the CRI-O repo:
cat <<EOF | sudo tee /etc/yum.repos.d/cri-o.repo
[cri-o]
name=CRI-O
baseurl=https://pkgs.k8s.io/addons:/cri-o:/stable:/$CRIO_VERSION/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/addons:/cri-o:/stable:/$CRIO_VERSION/rpm/repodata/repomd.xml.key
EOF
Install dependencies:
sudo dnf install -y container-selinux
Install CRI-O:
sudo dnf install -y cri-o kubelet kubeadm kubectl
Start CRI-O:
sudo systemctl start crio.service
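Once started, you can check the service status (assumes systemd; the fallback keeps the command from failing on nodes where CRI-O is not yet active):

```shell
# Report whether the CRI-O service is running; prints "active" when it is.
systemctl is-active crio.service 2>/dev/null || echo "crio not active yet"
```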
You can also find installation steps for Debian-based and other systems in the official CRI-O installation documentation.
There are multiple reasons for choosing CRI-O, a major one being its massive community support and its close relationship with Kubernetes. Following OCI runtime specifications, CRI-O is optimized for Kubernetes and is lightweight versus other runtimes available. It also works with any registry and image that follows OCI specifications.
Backed by Red Hat, Intel, SUSE, IBM, and other major players, CRI-O is highly reliable. Plus, as a project of the Cloud Native Computing Foundation (CNCF), it has another huge community behind it, which boosts its trustworthiness.
CRI, together with OCI, has decoupled container runtime implementations from Kubernetes; this has enabled the community to experiment with runtimes independently and adopt better options. With standards such as CSI, CRI, CNI, and OCI, developers can make modifications and run their own flavor of implementation.
This results in a faster release cycle for each component and the removal of dependencies from tools such as Docker.
To stay on top of new developments in this continuously evolving space, make sure to follow the CRI community on GitHub.