
How Kubernetes Enables Seamless Container Networking: From Docker0 to CNI

This article explains how Kubernetes ensures container-to-container communication using network namespaces, veth pairs, bridges like docker0, and advanced CNI plugins such as Flannel and Calico, detailing their underlying Linux networking components, routing mechanisms, and deployment considerations for both intra‑host and inter‑host scenarios.


Kubernetes does not implement its own container network; it relies on a plug‑in architecture whose implementations must satisfy a few basic principles: pods on any node can communicate directly without NAT, node agents (such as the kubelet) can talk to pods, and a pod sees itself with the same IP address that other pods use to reach it.

Container Network Fundamentals

A Linux container’s network stack resides in its own network namespace, which includes interfaces, a loopback device, routing tables, and iptables rules. Implementing container networking depends on several Linux features:

Network Namespace : isolates a full network protocol stack per namespace.

Veth Pair : a pair of virtual Ethernet devices that connect two namespaces; traffic sent on one appears on the other.

Iptables/Netfilter : kernel‑level packet filtering and manipulation; iptables userspace tool manages Netfilter rule tables.

Bridge : a layer‑2 virtual switch that forwards frames based on learned MAC addresses.

Routing : Linux routing tables determine packet forwarding at the IP layer.
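These primitives can be wired together by hand to see how they fit; a minimal sketch (requires root on a Linux host; the namespace, bridge, and interface names are all made up for illustration) that mimics what Docker does per container:

```shell
# Create a namespace, a bridge, and a veth pair, then wire them up.
# Requires root; all names and addresses here are illustrative.
ip netns add demo                      # new, isolated network namespace
ip link add br-demo type bridge        # layer-2 bridge (analogous to docker0)
ip link set br-demo up
ip addr add 172.18.0.1/24 dev br-demo  # the bridge acts as the gateway

ip link add veth-host type veth peer name veth-ctr   # veth pair
ip link set veth-host master br-demo                 # host end -> bridge
ip link set veth-host up
ip link set veth-ctr netns demo                      # "container" end -> namespace

ip netns exec demo ip link set lo up
ip netns exec demo ip link set veth-ctr up
ip netns exec demo ip addr add 172.18.0.2/24 dev veth-ctr
ip netns exec demo ip route add default via 172.18.0.1

ping -c 1 172.18.0.2                   # host can now reach the namespace
```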

On a single host, Docker creates the docker0 bridge and a veth pair for each container. The container's eth0 is one end of the veth pair; the other end appears on the host as a virtual interface (e.g., veth20b3dac) attached to docker0. Inside the container, the routing table sends traffic for the 172.17.0.0/16 bridge network directly out eth0 and everything else via the bridge's gateway address (172.17.0.1 by default), so all of the container's traffic passes through docker0.

<code>docker run -d -it --name c1 hub.pri.ibanyu.com/devops/alpine:v3.8 /bin/sh</code>
<code>docker exec -it c1 /bin/sh
ifconfig
route -n</code>

Running a second container and pinging its IP demonstrates that packets are forwarded by the bridge without NAT, using ARP broadcasts to resolve MAC addresses.

<code>docker run -d --name c2 -it hub.pri.ibanyu.com/devops/alpine:v3.8 /bin/sh
docker exec -it c1 /bin/sh
ping 172.17.0.3</code>
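The same path can be confirmed from the host side; the following inspection commands are illustrative (interface names and addresses will differ on your machine):

```shell
# Host: list the interfaces enslaved to docker0 (one veth per container)
bridge link show | grep docker0

# Host: the bridge subnet is reached directly -- no NAT on this path
ip route | grep 172.17.0.0

# Container: after the ping, c2's MAC address sits in c1's ARP cache
docker exec c1 arp -n
```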

Cross‑Host Networking

Default Docker networking cannot reach containers on different hosts. Kubernetes introduces the CNI (Container Network Interface) API, allowing plug‑ins such as Flannel, Calico, Weave, and Contiv to provide cross‑node connectivity. A CNI plug‑in typically creates its own bridge (cni0) and configures each pod's network namespace itself.

CNI supports three implementation modes:

Overlay : encapsulates pod traffic in tunnels (e.g., VXLAN, IPIP) independent of the underlying network.

Layer‑3 Routing : relies on routing tables without tunnels, requiring all nodes to be on the same L2 segment.

Underlay : uses the physical network directly, with each node participating in routing (often via BGP).
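Which mode a plug‑in uses is usually a configuration choice. Flannel, for instance, selects its backend in the net-conf.json section of its ConfigMap; a sketch (the network value is an example):

```shell
# Excerpt of flannel's net-conf.json: "vxlan" gives an overlay,
# "host-gw" gives pure layer-3 routing. Values are illustrative.
cat <<'EOF'
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "host-gw"
  }
}
EOF
```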

Flannel’s host‑gw mode configures a static route on each node pointing to the pod CIDR of other nodes, avoiding encapsulation overhead but requiring L2 connectivity.

<code>10.244.1.0/24 via 10.168.0.3 dev eth0</code>
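With this route installed, a packet destined for 10.244.1.5 matches the /24 prefix rather than the default route. The kernel's longest‑prefix‑match decision can be sketched with a toy lookup (hypothetical helper functions for illustration; the kernel uses optimized trie lookups, not this):

```shell
#!/usr/bin/env bash
# Toy longest-prefix match: given a destination IP and a list of
# "CIDR via gateway" routes, print the most specific matching route.

ip_to_int() {            # dotted quad -> 32-bit integer
  local IFS=.
  local a b c d
  read -r a b c d <<<"$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

lookup() {               # usage: lookup DEST ROUTE...
  local dest best_len=-1 best_route=""
  dest=$(ip_to_int "$1"); shift
  local route cidr net len mask
  for route in "$@"; do
    cidr=${route%% *}
    net=$(ip_to_int "${cidr%/*}")
    len=${cidr#*/}
    mask=$(( len == 0 ? 0 : (0xffffffff << (32 - len)) & 0xffffffff ))
    if (( (dest & mask) == (net & mask) && len > best_len )); then
      best_len=$len
      best_route=$route
    fi
  done
  echo "$best_route"
}

lookup 10.244.1.5 \
  "0.0.0.0/0 via 10.168.0.1" \
  "10.244.1.0/24 via 10.168.0.3"
# -> 10.244.1.0/24 via 10.168.0.3  (the more specific host-gw route wins)
```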

Calico, in contrast, does not create a bridge; it installs a veth pair per pod and programs routing rules directly. Calico's felix component maintains these rules, while the BIRD daemon distributes routes using BGP, forming a full mesh of node‑to‑node peers.

<code>10.92.77.163 dev cali93a8a799fe1 scope link</code>
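On a Calico node this shows up as one cali* interface and one /32 route per local pod, plus BGP‑learned routes for remote pods; these inspection commands are illustrative (interface names vary per pod):

```shell
# One veth (cali*) per local pod, no bridge in between
ip link | grep cali

# Local pods: /32 scope-link routes programmed by felix
ip route | grep cali

# Remote pods: routes learned from BGP peers via BIRD
ip route show proto bird
```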

When nodes are not on the same L2 network, Calico can fall back to an IPIP tunnel, encapsulating pod packets in a tunnel device (tunl0) before routing them to the remote node.

<code>10.92.203.0/24 via 10.100.1.2 dev tunl0</code>
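The tunnel device itself can be inspected with ip; in IPIP mode the pod packet becomes the payload of an outer IP packet addressed to the remote node (output varies by environment):

```shell
# Show tunnel details; the "ipip" type confirms IP-in-IP encapsulation
ip -d link show tunl0

# Remote pod CIDRs now route via the tunnel device
ip route | grep tunl0
```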

For large clusters, Calico’s default full mesh scales quadratically in the number of BGP sessions; a Route Reflector (RR) topology, in which each node peers only with a few reflectors, reduces the session count and makes the control plane more manageable.
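With calicoctl, the default full mesh can be disabled so nodes peer only with the reflectors; a sketch (the peer address and AS number are examples, not recommendations):

```shell
# Turn off the N^2 node-to-node mesh
calicoctl patch bgpconfiguration default \
  --patch '{"spec": {"nodeToNodeMeshEnabled": false}}'

# Peer nodes with a route reflector instead (address/AS are illustrative)
cat <<'EOF' | calicoctl apply -f -
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: rr-peer
spec:
  peerIP: 10.100.0.10
  asNumber: 64512
EOF
```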

In summary, Kubernetes networking can be built on simple host‑level bridges for single‑node clusters, or on sophisticated CNI plug‑ins that leverage Linux networking primitives, routing protocols, and tunneling to provide scalable, cross‑node pod communication.

Tags: kubernetes, BGP, CNI, Container Networking, Calico, Flannel, Network Namespace
Written by Efficient Ops

This public account is maintained by Xiaotianguo and friends and regularly publishes widely read original technical articles. We focus on operations transformation and aim to accompany you throughout your operations career.
