Understanding CRI Shim: How Kubernetes Manages Containers and Streaming APIs
A CRI shim is a gRPC server that implements the Container Runtime Interface (CRI), letting kubelet manage pod and container lifecycles, configure networking via CNI, and serve streaming APIs such as Exec, Attach, and PortForward. This article walks through RuntimeService, ImageService, and the CRI‑containerd architecture.
What is a CRI shim?
A CRI shim is a container runtime that implements the CRI interface; it runs as a gRPC server listening on a local Unix socket, while kubelet acts as the gRPC client to manage pod, container, and image lifecycles. The runtime also handles container networking, typically using CNI.
kubelet does not call the Docker API directly; it uses the CRI gRPC interfaces, which abstract away differences between underlying runtimes. For Docker compatibility, Kubernetes provides a dockershim that translates gRPC requests into Docker REST API calls. This extra layer isolates Kubernetes from runtime variations.
The design reflects the classic adage that any problem in computer science can be solved by adding another layer of indirection.
CRI shim server interface diagram
CRI defines two services: RuntimeService and ImageService, which can be implemented in a single gRPC server or split into separate servers. Most runtimes implement both in one server.
ImageService provides five interfaces for managing container images:
List images
Pull an image to the local host
Inspect image status
Remove a local image
Query image usage and storage size
Container image operations are straightforward, so we focus on RuntimeService.
RuntimeService offers many interfaces, grouped into four functional categories:
PodSandbox management: the CRI API deliberately exposes no Pod object; kubelet translates a Pod into one PodSandbox plus its containers.
PodSandbox abstraction: a PodSandbox provides the isolated environment (cgroups, namespaces, network) shared by a Pod's containers, usually realized as a pause container or, for VM-based runtimes, a lightweight VM.
Container management: create, start, stop, and delete containers within a specified PodSandbox.
Streaming API
CRI shim implements the Streaming API via an independent streaming server. Exec, Attach, and PortForward do not stream data over the CRI connection itself; each RPC returns a URL pointing at the streaming server, and the caller opens a long-lived connection to that URL for the actual data transfer.
If every streamed byte passed through kubelet, kubelet would become a network bottleneck. CRI therefore requires the runtime to run a streaming server, return a per-request URL to kubelet, and let kubelet hand that URL back to the API server, which then connects to the streaming server directly.
The Exec workflow proceeds as follows:
kubectl exec -i -t … (client)
kube-apiserver sends a streaming request to kubelet
kubelet asks the CRI shim for an Exec URL via the CRI interface
CRI shim returns the Exec URL
kubelet redirects the response to kube-apiserver
kube-apiserver redirects the request to the Exec URL, establishing a data exchange between the streaming server and the client.
The Streaming Server is started together with the CRI shim and can be implemented by the shim maintainer; for Docker, dockershim simply forwards to Docker’s Exec API.
CRI‑containerd architecture and main interfaces
The architecture consists of the Meta, Runtime, and Storage services provided by containerd, which expose generic container operations such as image and runtime management. The CRI plugin wraps these in a gRPC service; on the implementation side, an OCI runtime and containerd-shim carry out the actual operations. A Pod comprises a PodSandbox plus one or more containers.
containerd offers richer interfaces accessible via the ctr tool, beyond the standard CRI.
CRI implements two gRPC APIs: ImageService and RuntimeService.
type grpcServices interface {
	runtime.RuntimeServiceServer
	runtime.ImageServiceServer
}

type CRIService interface {
	Run() error
	io.Closer
	plugin.Service
	grpcServices
}

Key components in the CRI implementation include CNI for network configuration and a containerd client for container creation.
type criService struct {
	config             criconfig.Config
	imageFSPath        string
	os                 osinterface.OS
	sandboxStore       *sandboxstore.Store
	sandboxNameIndex   *registrar.Registrar
	containerStore     *containerstore.Store
	containerNameIndex *registrar.Registrar
	imageStore         *imagestore.Store
	snapshotStore      *snapshotstore.Store
	netPlugin          cni.CNI
	client             *containerd.Client
	streamServer       streaming.Server
	eventMonitor       *eventMonitor
	initialized        atomic.Bool
	cniNetConfMonitor  *cniNetConfSyncer
	baseOCISpecs       map[string]*oci.Spec
}

Kubernetes operates in a desired-state loop: kubelet fetches pod specifications, ensures images are present, creates the PodSandbox (including CNI network setup), and then creates and starts the application containers using the pulled images.
The workflow includes:
kubelet calls CRI runtime service to create a pod
CRI uses CNI to set up pod network and namespaces
CRI creates and starts a pause container as the sandbox
kubelet retrieves container images via CRI image service
CRI fetches images from containerd
kubelet launches the application container in the pod’s namespace
CRI creates/starts the container, completing pod startup
Summary
CRI serves Kubernetes and reports status upward; it was not designed as a general front end for every OCI runtime. Integrating CRI with alternative runtimes such as gVisor or Kata can be awkward because of mismatched assumptions. Moreover, multiple CRI implementations (e.g., cri-o, containerd) each ended up wrapping runC with their own shim, duplicating shim code; containerd ShimV2 was introduced to eliminate this duplication.