
How Flannel and Containerd Enable Pod IP Allocation in Kubernetes

This article explains how Kubernetes assigns unique IP addresses to Pods using Flannel as the CNI network provider and Containerd as the container runtime, covering underlying concepts like Linux bridges, VXLAN encapsulation, node IPAM, and the interactions among kubelet, CRI, and CNI plugins.


Background Concepts

Container Network: Brief Overview

Containers on the same host communicate via a Linux bridge using veth pairs; each veth connects the container’s network namespace to the host bridge, which also acts as a gateway for traffic leaving the pod.

Containers on Different Hosts

Cross-host communication relies on packet encapsulation. Flannel uses VXLAN to wrap the original Ethernet frame in a UDP packet, forward it to the node hosting the destination pod, and unwrap it there.
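The encapsulation is not free: the extra headers eat into the usual 1500-byte Ethernet MTU, which is why Flannel-managed pod interfaces typically run at an MTU of 1450. A quick back-of-the-envelope check:

```python
# VXLAN carries the pod's Ethernet frame inside outer IP/UDP headers plus
# an 8-byte VXLAN header. On a standard 1500-byte link, the pod-side MTU
# must shrink by the total overhead.
OUTER_IP = 20    # outer IPv4 header
OUTER_UDP = 8    # outer UDP header
VXLAN_HDR = 8    # VXLAN header (flags + 24-bit VNI)
INNER_ETH = 14   # inner Ethernet header carried inside the tunnel

overhead = OUTER_IP + OUTER_UDP + VXLAN_HDR + INNER_ETH  # 50 bytes
pod_mtu = 1500 - overhead

print(pod_mtu)  # 1450
```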

What is CRI?

The Container Runtime Interface (CRI) is a plugin interface that lets kubelet use different container runtimes.

What is CNI?

CNI defines a standard plugin‑based networking model for Linux containers and provides various plugins to configure pod networks.

Assigning Subnets to Nodes for Pod IPs

Each node receives a unique subnet (podCIDR) from the cluster CIDR, ensuring every pod gets a unique IP.

Node IPAM Controller

When the nodeipam controller is enabled, kube-controller-manager allocates a non-overlapping podCIDR to each node. A node's podCIDR can be listed with:

<code>$ kubectl get node <nodeName> -o json | jq '.spec.podCIDR'</code>
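Conceptually, the controller is just slicing the cluster CIDR into fixed-size per-node chunks. A minimal sketch with Python's ipaddress module, assuming Flannel's common defaults (a 10.244.0.0/16 cluster CIDR and /24 node subnets; the node names are hypothetical):

```python
import ipaddress

# Carve non-overlapping per-node /24 podCIDRs out of the cluster CIDR,
# the way the nodeipam controller hands them out as nodes register.
cluster_cidr = ipaddress.ip_network("10.244.0.0/16")
subnets = cluster_cidr.subnets(new_prefix=24)

nodes = ["node-a", "node-b", "node-c"]  # hypothetical node names
pod_cidrs = {node: str(next(subnets)) for node in nodes}

print(pod_cidrs)
# {'node-a': '10.244.0.0/24', 'node-b': '10.244.1.0/24', 'node-c': '10.244.2.0/24'}
```

Because every node's range is disjoint, any pod IP drawn from a node's podCIDR is automatically unique cluster-wide.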

Kubelet, Container Runtime, and CNI Plugin Interaction

When a pod is scheduled, kubelet invokes the container runtime’s CRI plugin, which in turn calls the appropriate CNI plugin to configure the pod’s network.
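The handoff from runtime to CNI plugin follows the CNI specification's exec protocol: the runtime runs the plugin binary, passing call parameters through CNI_* environment variables and the network configuration on stdin. A minimal sketch of that contract (the container ID, namespace path, and config values below are hypothetical):

```python
import json

# The runtime (containerd's CRI plugin here) execs the plugin binary from
# /opt/cni/bin, with parameters in CNI_* environment variables.
env = {
    "CNI_COMMAND": "ADD",                  # ADD / DEL / CHECK / VERSION
    "CNI_CONTAINERID": "abc123",           # pod sandbox ID (hypothetical)
    "CNI_NETNS": "/var/run/netns/abc123",  # pod's network namespace
    "CNI_IFNAME": "eth0",                  # interface to create in the pod
    "CNI_PATH": "/opt/cni/bin",            # where plugin binaries live
}

# The network config (from /etc/cni/net.d/) goes to the plugin's stdin.
stdin_config = json.dumps({
    "cniVersion": "0.3.1",
    "name": "cbr0",
    "type": "flannel",
})

# A real runtime would now run something like:
#   subprocess.run(["/opt/cni/bin/flannel"], env=env, input=stdin_config, ...)
print(env["CNI_COMMAND"], json.loads(stdin_config)["type"])  # ADD flannel
```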

The CNI configuration files reside in /etc/cni/net.d/ and the plugin binaries in /opt/cni/bin. With containerd, both paths are configured in the plugins."io.containerd.grpc.v1.cri".cni section of containerd's config.toml.

Flannel's daemon (flanneld) installs a CNI configuration file (/etc/cni/net.d/10-flannel.conflist) and creates a VXLAN device on each node.
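flanneld also records the node's lease in /run/flannel/subnet.env, which the CNI plugin reads in the next step. On a node leased 10.244.0.0/24, the file might look like this (an illustrative example, not taken from the original article):

```
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
```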

CNI Plugin Interactions

Flannel's CNI plugin reads /run/flannel/subnet.env for network details and then invokes the bridge CNI plugin with a configuration like:

<code>{
  "name": "cni0",
  "type": "bridge",
  "mtu": 1450,
  "ipMasq": false,
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/24"
  }
}</code>

The bridge plugin creates a Linux bridge, establishes veth pairs for each pod, and hands the IP address allocation to the host‑local IPAM plugin:

<code>{
  "name": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/24",
    "dataDir": "/var/lib/cni/networks"
  }
}</code>
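host-local's allocation scheme is straightforward: take the next free address in the node's subnet and record which container holds it. The real plugin persists each reservation as a file named after the IP under dataDir (here /var/lib/cni/networks/cni0/), so allocations survive restarts; the sketch below stands in a dict for those files and is a simplified illustration, not the plugin's actual code:

```python
import ipaddress

def allocate(subnet, reservations, container_id):
    """Reserve the next free IP in `subnet` for `container_id`."""
    net = ipaddress.ip_network(subnet)
    gateway = str(net.network_address + 1)  # .1 belongs to the cni0 bridge
    for ip in net.hosts():
        ip = str(ip)
        if ip == gateway:
            continue  # never hand the gateway address to a pod
        if ip not in reservations:
            reservations[ip] = container_id
            return ip
    raise RuntimeError("subnet exhausted")

# Stand-in for the per-IP files under /var/lib/cni/networks/cni0/.
reservations = {}
print(allocate("10.244.0.0/24", reservations, "pod-a"))  # 10.244.0.2
print(allocate("10.244.0.0/24", reservations, "pod-b"))  # 10.244.0.3
```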

The host‑local IPAM plugin returns the allocated IP, e.g.:

<code>{
  "ip4": {
    "ip": "10.244.4.2",
    "gateway": "10.244.4.1"
  },
  "dns": {}
}</code>

Summary

Kube‑controller‑manager assigns a unique podCIDR to each node; pods obtain IPs from their node’s subnet, guaranteeing uniqueness across the cluster. The kubelet calls the container runtime’s CRI plugin, which invokes the CNI plugin (Flannel in this case) to set up the pod’s network, resulting in a functional IP address for every pod.

Reference: https://ronaknathani.com/blog/2020/08/how-a-kubernetes-pod-gets-an-ip-address/

Tags: kubernetes · Networking · containerd · CNI · Pod IP · Flannel
Written by

Ops Development Stories

Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.
