containerd: The Runtime Underpinning Kubernetes
Updated: 2026-05-03
containerd[1] is probably the most widely deployed container runtime on the planet, and at the same time one of the least understood. When Kubernetes 1.24 finalised the removal of dockershim in May 2022, thousands of clusters ended up running containerd as their default runtime. Few operators noticed — which is precisely the signal that the migration went well: a properly integrated runtime is supposed to be invisible. This article makes it visible just long enough to understand what runs underneath each pod.
Key takeaways
- containerd manages the full lifecycle of a container on a node: pulling images, creating, starting, pausing, and stopping containers, mounting filesystem snapshots, and coordinating networking with the CNI.
- The layered stack (Kubernetes → CRI → containerd → OCI → runc) allows swapping pieces without breaking anything above.
- Docker has used containerd internally since 2017; dockershim was just a translator that Kubernetes 1.24 removed.
- containerd dominates in cloud (EKS, GKE, AKS); CRI-O is standard in OpenShift.
- crictl, ctr, and nerdctl are the three tools for operating containerd directly when kubectl isn’t enough.
What containerd Is and Isn’t
containerd is a high-level container runtime. Its responsibility is the full lifecycle of a container on a single machine: pulling images from a registry, storing them, creating the container, starting, pausing and stopping it, mounting filesystem snapshots, and coordinating network namespaces with whatever CNI plugin the node has configured. All of that happens inside a daemon, without a GUI and without developer-experience niceties.
What it doesn’t do: it doesn’t build images (it has no Dockerfile concept), doesn’t orchestrate (that’s Kubernetes’ job), doesn’t compose multiple containers, and doesn’t try to replace an interactive tool like the Docker CLI.
containerd was originally an internal component of Docker Engine. In March 2017 it was donated to the CNCF; in February 2019 it reached graduated status. Since then it has become the common substrate on which Docker, Kubernetes, BuildKit, and many other tools build.
The Layered Stack
Understanding containerd means looking at the full stack:
Kubernetes (orchestration)
│
└── CRI (Container Runtime Interface) — gRPC API
│
▼
containerd (high-level runtime)
│
└── OCI spec → runc (or crun, gVisor, Kata Containers)
│
        └── Linux kernel (namespaces, cgroups, seccomp)

This layering allows swapping pieces without breaking anything above them. You can replace runc with crun (a C reimplementation), with gVisor (a userspace sandbox for strong multi-tenant isolation), or with Kata Containers (a microVM per container), and Kubernetes neither knows nor cares. You can replace containerd with CRI-O with little more than a kubelet flag.
CRI and OCI are the two contracts that make all of this possible.
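The CRI side of that contract is visible in how kubelet selects its runtime: a single socket path picks the implementation. A minimal sketch; the socket paths shown are the common defaults and may differ on your distribution:

```shell
# kubelet dials whatever runtime sits behind the CRI socket it is given
--container-runtime-endpoint=unix:///run/containerd/containerd.sock
# swapping to CRI-O is the same flag with a different socket
--container-runtime-endpoint=unix:///var/run/crio/crio.sock
```

Nothing above kubelet changes when the socket does, which is the whole point of the contract.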
Two important internal concepts. containerd namespaces are not kernel namespaces: they are logical partitions separating images and containers belonging to different consumers. Kubernetes uses k8s.io; Docker uses moby. Querying the daemon without specifying a namespace makes it look empty on a node packed with pods. Snapshotters mount container filesystems as overlays: the default is overlayfs, with alternatives (btrfs, zfs, stargz for lazy pulling) configurable in the daemon.
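The namespace partitioning is easy to see with ctr on a live node. A quick sketch, assuming root access on a node where containerd is running:

```shell
# list containerd's logical namespaces (k8s.io on a Kubernetes node, moby under Docker)
sudo ctr namespaces ls
# the default namespace looks empty on a busy node; k8s.io holds the pods' images
sudo ctr --namespace k8s.io images ls
sudo ctr --namespace k8s.io containers ls
```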
containerd and Docker: The Real Relationship
The simple truth is that containerd and Docker have been the same thing underneath for years. Docker Engine has used containerd internally since 2017: when you type docker run, Docker builds the OCI spec, calls containerd over a local socket, and containerd invokes runc. The command is a convenient abstraction on top of the real runtime.
That’s why Kubernetes clusters said to “use Docker” were already running their containers through containerd; dockershim was a translator between CRI and the Docker Engine API. Removing dockershim in Kubernetes 1.24 eliminated that translation layer. Kubernetes now talks directly to containerd (or CRI-O), without routing through Docker Engine. For a cluster, the change amounted to reconfiguring kubelet and restarting nodes. For a development server with Docker, nothing changes.
containerd vs CRI-O
CRI-O[2] is the main alternative. It was born at Red Hat with an explicit goal: a minimal runtime built exclusively for Kubernetes, with nothing that CRI doesn’t require.
- containerd: more general — Docker uses it, development environments use it, BuildKit uses it, Kubernetes uses it.
- CRI-O: more specific — made only for Kubernetes, with nothing extra.
Both implement CRI and OCI, both use runc by default, and performance is comparable in serious benchmarks. The choice is more about ecosystem and support than capability: containerd dominates upstream Kubernetes and cloud distributions (EKS, GKE, AKS); CRI-O is standard in OpenShift and the Red Hat ecosystem.
Operating containerd Directly
Day to day with Kubernetes you never touch containerd: kubelet handles it. When something breaks on a node and kubectl describe stops being enough, three tools are essential:
- ctr: containerd’s native CLI. Raw and unfriendly, but it lets you see exactly what the daemon holds.
- crictl: the official Kubernetes project tool for debugging CRI runtimes. It speaks to containerd through the CRI socket (unix:///run/containerd/containerd.sock) and offers pod- and container-oriented subcommands. Configuration lives in /etc/crictl.yaml.
- nerdctl[3]: a Docker-compatible CLI that talks to containerd directly. See the full article at nerdctl-alternativa-docker.
# Typical debug from a Kubernetes node running containerd
sudo crictl pods --namespace kube-system
sudo crictl ps -a
sudo crictl logs --tail 200 <container-id>
sudo crictl exec -it <container-id> sh
sudo crictl inspecti registry.k8s.io/pause:3.9
# If you miss Docker syntax
sudo nerdctl --namespace k8s.io ps
sudo nerdctl --namespace k8s.io images
# Watch the daemon itself
sudo journalctl -u containerd -f

Four concrete cases where this pays off:
- A pod that won’t start whose kubelet logs don’t explain why: containerd often has the detail in journalctl -u containerd.
- A node with a full disk needing crictl rmi --prune because Kubernetes garbage collection is falling behind.
- An ImagePullBackOff resolved by running crictl pull directly to isolate whether the problem is credentials or networking.
- Auditing which runtime each node runs with kubectl get nodes -o wide.
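The disk and image-pull cases above reduce to two direct commands. A sketch; the image reference is a placeholder:

```shell
# reclaim disk by removing images no container references
sudo crictl rmi --prune
# pull straight through the CRI to separate credential failures from network failures
sudo crictl pull registry.example.com/team/app:1.4.2
```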
Typical Configuration
The main configuration file lives at /etc/containerd/config.toml. The most common edits:
- Registry mirrors: to avoid saturating Docker Hub or depending on it in production.
- Alternative runtimes: configuring gVisor or Kata Containers for sensitive workloads.
- Snapshotter: choosing btrfs or zfs if the environment justifies it.
- Cgroup driver set to systemd: aligning with kubelet. This is the classic installation mistake: kubelet on systemd and containerd on cgroupfs produces mysterious pod restarts that are painful to diagnose.
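Those edits map onto a handful of TOML keys. A sketch for containerd 1.x (config version 2); the mirror endpoint is a placeholder, and newer releases prefer a registry config_path directory over inline mirrors:

```toml
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd]
  # default snapshotter; swap for btrfs/zfs if the environment justifies it
  snapshotter = "overlayfs"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  # align the cgroup driver with kubelet to avoid mysterious pod restarts
  SystemdCgroup = true

# registry mirror (placeholder endpoint)
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
  endpoint = ["https://mirror.example.com"]
```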
Any configuration change requires systemctl restart containerd and, in production, doing it node by node with cordon and drain. The node observability patterns that support containerd incident diagnosis integrate well with the Grafana Stack observability setup.
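The node-by-node procedure can be sketched as follows, assuming a node named node-1:

```shell
kubectl cordon node-1
kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data
sudo systemctl restart containerd   # run this on the node itself
kubectl uncordon node-1
```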
Conclusion
containerd is the quiet piece on which most of Kubernetes runs. Understanding its architecture isn’t academic vanity: it’s what turns an opaque incident into a diagnosable one, what lets you make informed decisions about alternative runtimes when security or isolation demand them, and what lets you read release notes about CRI, snapshotters, or shims with actual comprehension. Under ordinary operation it will remain an invisible layer; the day you have to descend into it, you need to know the route.