
How PouchContainer Implements the Kubernetes CRI: Architecture and Deep Dive

This article explains the motivation behind the Kubernetes Container Runtime Interface (CRI), outlines its design, and provides a detailed walkthrough of PouchContainer’s CRI manager architecture, including pod creation, network configuration, and I/O stream handling, with code examples and configuration snippets.

Alibaba Cloud Native

1. Introduction to CRI

Every Kubernetes node runs a low‑level program that creates and deletes containers; this is called the container runtime. Docker is the best‑known example, but other runtimes such as rkt, runV, gVisor, and the focus of this article, PouchContainer, also exist. Early Kubernetes versions hard‑coded calls to Docker (and later rkt) directly into the core, which caused two problems:

New runtimes could not be integrated without deep knowledge of the Kubelet code base.

Hard‑coding each runtime made the core code bulky and fragile whenever a runtime's interface changed.

To solve these problems, Kubernetes 1.5 introduced the Container Runtime Interface (CRI), an abstract set of gRPC APIs that decouple the core from any specific runtime.

2. CRI Design Overview

Kubelet (the node agent) monitors container state on the node and drives it toward the desired state by repeatedly calling CRI APIs. A CRI shim translates these calls into the concrete runtime's native API. Docker's shim (dockershim) lives outside the Docker daemon, so every call crosses a process boundary; PouchContainer instead builds the shim into the pouchd binary itself, where it is called the CRI manager.

CRI defines two gRPC services: ImageService for image management and RuntimeService for container lifecycle and exec/attach/port‑forward operations.
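
Abridged as a Go sketch, the two services look roughly like this (only a few representative methods are shown; the placeholder structs stand in for the real request/response messages defined in the CRI's protobuf API):

// Abridged sketch of the two CRI services as Go interfaces. Method names
// follow the CRI; the placeholder structs below stand in for the real
// protobuf-generated request/response messages.
package cri

import "context"

type (
    PullImageRequest        struct{ Image string }
    PullImageResponse       struct{ ImageRef string }
    RunPodSandboxRequest    struct{ Name, Namespace string }
    RunPodSandboxResponse   struct{ PodSandboxID string }
    CreateContainerRequest  struct{ PodSandboxID string }
    CreateContainerResponse struct{ ContainerID string }
    ExecRequest             struct{ ContainerID string; Cmd []string; TTY bool }
    ExecResponse            struct{ URL string }
)

// ImageService handles image management.
type ImageService interface {
    PullImage(ctx context.Context, req *PullImageRequest) (*PullImageResponse, error)
    // ListImages, ImageStatus, and RemoveImage elided.
}

// RuntimeService handles sandbox and container lifecycle plus the streaming calls.
type RuntimeService interface {
    RunPodSandbox(ctx context.Context, req *RunPodSandboxRequest) (*RunPodSandboxResponse, error)
    CreateContainer(ctx context.Context, req *CreateContainerRequest) (*CreateContainerResponse, error)
    Exec(ctx context.Context, req *ExecRequest) (*ExecResponse, error)
    // Start/Stop/Remove/List and Attach/PortForward elided.
}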

3. CRI Manager Architecture in PouchContainer

The CRI manager implements all CRI interfaces and acts as the shim. When Kubelet calls a CRI method, the request travels via Kubelet’s gRPC client to the CRI manager’s gRPC server, which dispatches to the appropriate internal module.

Key modules (all compiled into the same binary) are:

Image Manager – handles ImageService calls.

Container Manager – creates, starts, and stops containers.

CNI Manager – configures pod networking via CNI plugins.

Stream Server – processes exec/attach/port‑forward streams.

These modules call each other directly, avoiding the overhead of inter‑process communication used by Docker’s shim.
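
As a rough illustration of that wiring (identifiers here are assumptions for the sketch, not PouchContainer's actual names):

// Illustrative sketch: one process, four modules, plain method calls.
// A gRPC handler can touch several modules without ever crossing a
// process boundary.
package main

type ImageManager struct{}     // serves ImageService calls
type ContainerManager struct{} // container lifecycle
type CniManager struct{}       // pod networking via CNI plugins
type StreamServer struct{}     // exec/attach/port-forward streams

// CriManager implements both CRI services and dispatches each call to
// the module that owns it.
type CriManager struct {
    imageMgr     *ImageManager
    containerMgr *ContainerManager
    cniMgr       *CniManager
    streamServer *StreamServer
}

func main() {
    _ = &CriManager{
        imageMgr:     &ImageManager{},
        containerMgr: &ContainerManager{},
        cniMgr:       &CniManager{},
        streamServer: &StreamServer{},
    }
}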

4. Implementation of the Pod Model

A Pod is the smallest scheduling unit in Kubernetes, consisting of one or more tightly coupled containers that share a network namespace, IP address, and optionally storage volumes.

Pod creation proceeds as follows:

Kubelet calls RunPodSandbox. The CRI manager creates an “infra container” (a normal container based on the pause-amd64:3.0 image) whose sole purpose is to provide shared Linux namespaces for the pod.

For each additional container, Kubelet calls CreateContainer and StartContainer. The CRI manager translates the CRI container spec into a PouchContainer spec and forwards it to the Container Manager. Namespace sharing is achieved by setting the container's PidMode, IpcMode, and NetworkMode to container mode with the infra container's ID as the target (see the sketch after this list).

To distinguish infra containers from regular containers, the CRI manager adds a special label during creation. The ListPodSandbox and ListContainers calls filter containers based on this label.

The overall flow is: create the infra container first, then create the remaining containers and attach them to the infra container’s namespaces.
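
A hedged sketch of the namespace-sharing translation (the HostConfig field names follow the Docker-compatible API that PouchContainer exposes; the label name and container ID are illustrative):

// Sketch of how a pod member joins the infra container's namespaces.
// The "container:<id>" mode string follows the Docker-compatible API;
// the label name and IDs here are illustrative.
package main

import "fmt"

// A label along these lines (name assumed) marks infra containers so
// ListPodSandbox and ListContainers can filter on it.
const containerTypeLabel = "io.kubernetes.pouch.type"

type HostConfig struct {
    PidMode     string
    IpcMode     string
    NetworkMode string
}

// sandboxModes points a container's PID, IPC, and network namespaces at
// the pod's infra (sandbox) container.
func sandboxModes(infraID string) HostConfig {
    ref := "container:" + infraID
    return HostConfig{PidMode: ref, IpcMode: ref, NetworkMode: ref}
}

func main() {
    fmt.Printf("%+v\n", sandboxModes("c0ffee"))
}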

5. Pod Network Configuration

All containers in a pod share a network namespace, so configuring the network for the infra container configures the whole pod.

The CNI manager loads plugin configuration files from /etc/cni/net.d. An example 10-mynet.conflist file:

{
    "cniVersion": "0.3.0",
    "name": "mynet",
    "plugins": [{
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": {
            "type": "host-local",
            "subnet": "10.22.0.0/16",
            "routes": [{"dst": "0.0.0.0/0"}]
        }
    }]
}

When creating the infra container, NetworkMode is set to None to allocate a private network namespace. The CRI manager then calls the CNI manager’s SetUpPodNetwork with the namespace path (e.g., /proc/{pid}/ns/net) to attach the namespace to the chosen CNI network.

For host‑networked pods, NetworkMode is set to Host and the CNI step is skipped. All other containers use NetworkMode=Container to join the infra container’s namespace.
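
A minimal sketch of that setup step using the reference libcni library (an approximation of what a CNI manager does, not PouchContainer's actual code; recent libcni versions take a context, and the paths, container ID, and PID here are illustrative):

// Attach an infra container's network namespace to the "mynet" network
// from the conflist above, using the reference libcni library.
package main

import (
    "context"
    "fmt"
    "log"

    "github.com/containernetworking/cni/libcni"
)

func main() {
    // Load the network list shown earlier from /etc/cni/net.d.
    netList, err := libcni.ConfListFromFile("/etc/cni/net.d/10-mynet.conflist")
    if err != nil {
        log.Fatal(err)
    }

    // Plugin binaries (bridge, host-local, ...) are looked up here.
    cni := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)

    rt := &libcni.RuntimeConf{
        ContainerID: "infra-container-id", // illustrative ID
        NetNS:       "/proc/12345/ns/net", // the infra container's netns path
        IfName:      "eth0",               // interface to create inside the pod
    }

    // ADD the namespace to the network; the result carries the IP that
    // the host-local IPAM plugin assigned.
    result, err := cni.AddNetworkList(context.Background(), netList, rt)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(result)
}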

6. I/O Stream Handling

Commands such as kubectl exec/attach/port-forward allow users to interact with a container. The flow is:

Kubectl sends an ExecRequest (containing container ID, command, TTY flag, and stream booleans) to the API server, which forwards it to the node’s Kubelet.

Kubelet calls the CRI Exec method. The CRI manager forwards the request to its internal Stream Server’s GetExec method, which stores the request in a cache and returns a token.

Kubelet wraps the token into a URL and returns it to the API server, which then connects to the Stream Server at that URL and upgrades the HTTP connection to a streaming protocol (e.g., SPDY).

The Stream Server validates the token, establishes separate streams for stdin, stdout, and stderr, and then invokes the Container Manager’s CreateExec and StartExec methods.

Data flows from the container through the Stream Server back to the API server and finally to the user’s kubectl client.
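
The token handshake in this flow can be sketched with a minimal one-shot request cache (pure illustration; the real logic lives in the Kubernetes streaming server library that CRI shims reuse, and the port in the URL is made up):

// Minimal sketch of the Stream Server's token handshake: GetExec caches
// the request under a random token; when the API server reconnects with
// that token, the cached request drives the actual exec. Illustrative
// only, not PouchContainer's real code.
package main

import (
    "crypto/rand"
    "encoding/hex"
    "fmt"
    "sync"
)

type ExecRequest struct {
    ContainerID string
    Cmd         []string
    TTY         bool
}

type requestCache struct {
    mu       sync.Mutex
    requests map[string]*ExecRequest
}

// Insert stores a pending exec request and returns its token.
func (c *requestCache) Insert(req *ExecRequest) (string, error) {
    buf := make([]byte, 8)
    if _, err := rand.Read(buf); err != nil {
        return "", err
    }
    token := hex.EncodeToString(buf)
    c.mu.Lock()
    defer c.mu.Unlock()
    c.requests[token] = req
    return token, nil
}

// Consume redeems a token exactly once.
func (c *requestCache) Consume(token string) (*ExecRequest, bool) {
    c.mu.Lock()
    defer c.mu.Unlock()
    req, ok := c.requests[token]
    delete(c.requests, token)
    return req, ok
}

func main() {
    cache := &requestCache{requests: map[string]*ExecRequest{}}
    token, _ := cache.Insert(&ExecRequest{ContainerID: "c0ffee", Cmd: []string{"sh"}, TTY: true})
    fmt.Println("exec URL:", "http://node:10010/exec/"+token) // port illustrative
    if req, ok := cache.Consume(token); ok {
        fmt.Printf("start exec in %s: %v\n", req.ContainerID, req.Cmd)
    }
}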

Before CRI, Kubelet performed the exec directly, which placed heavy I/O load on the node agent. The CRI‑based design offloads this work to the Stream Server, improving scalability.

7. Conclusion

This article traced the motivation for introducing the CRI, described its gRPC‑based architecture, and detailed how PouchContainer implements each CRI component: image management, container lifecycle, pod networking, and I/O streaming. By conforming to the CRI, PouchContainer can serve as a drop‑in runtime for Kubernetes, enriching the ecosystem.
