How Does Kubernetes Enable Container Networking? A Deep Dive into CNI, Veth, and Bridges
This article explains the fundamental principles of Kubernetes container networking, covering network namespaces, veth pairs, bridges, iptables, and routing, and compares intra‑host communication with cross‑host solutions such as CNI plugins, overlay, host‑gw, and Calico’s BGP‑based approaches.
Container Network Basics
Kubernetes relies on plugins to provide container networking and imposes a simple model: pods can communicate directly without NAT, nodes and pods can reach each other, and the containers inside a pod share a single network stack (and therefore a single IP address).
Fundamentals of Linux Container Networking
A Linux container’s network stack lives in its own network namespace, which includes network interfaces, a loopback device, routing tables, and iptables rules. Key Linux networking features used are:
Network Namespace – isolates independent network stacks.
Veth Pair – a pair of virtual Ethernet devices that connect different namespaces.
Iptables/Netfilter – the user‑space tool (iptables) and kernel framework (Netfilter) for packet filtering and NAT.
Bridge – a layer‑2 virtual switch that forwards frames based on MAC addresses.
Routing – uses routing tables to determine packet destinations.
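These building blocks can be wired together by hand to mimic what Docker and CNI plugins do for a container. The sketch below is a demo under stated assumptions: all names (<code>demo-br0</code>, <code>demo-ns1</code>, the <code>10.200.0.0/24</code> subnet) are made up, and it needs root with CAP_NET_ADMIN; elsewhere it degrades to a message.

```shell
# Hand-build the single-host container network: a bridge, a namespace
# standing in for the container, and a veth pair linking the two.
if ip link add demo-br0 type bridge 2>/dev/null; then
  ip link set demo-br0 up
  ip addr add 10.200.0.1/24 dev demo-br0   # host-side gateway address

  ip netns add demo-ns1                    # the "container's" network stack

  # veth pair: one end joins the bridge, the other moves into the namespace
  ip link add demo-veth0 type veth peer name demo-eth0
  ip link set demo-veth0 master demo-br0 up
  ip link set demo-eth0 netns demo-ns1
  ip netns exec demo-ns1 ip link set lo up
  ip netns exec demo-ns1 ip addr add 10.200.0.2/24 dev demo-eth0
  ip netns exec demo-ns1 ip link set demo-eth0 up

  # host -> namespace traffic now crosses the bridge and the veth pair
  ping -c 1 -W 1 10.200.0.2 || echo "ping failed"

  # cleanup
  ip netns del demo-ns1
  ip link del demo-br0
  VETH_DEMO=ran
else
  echo "skipping: requires root and CAP_NET_ADMIN"
  VETH_DEMO=skipped
fi
```

Docker performs essentially these steps at <code>docker run</code> time, which is why the container's <code>eth0</code> and the host's <code>veth*</code> device always come in pairs.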
On a single host, containers communicate via the <code>docker0</code> bridge and a veth pair that links each container to the bridge, similar to two physical hosts connected by a cable.
<code>docker run -d --name c1 hub.pri.ibanyu.com/devops/alpine:v3.8 /bin/sh</code>
Inspecting the container shows an <code>eth0</code> interface (one end of the veth pair) and a routing table that directs traffic for the <code>172.17.0.0/16</code> subnet through <code>eth0</code>.
<code>docker exec -it c1 /bin/sh</code>
<code>ifconfig</code>
<code>route -n</code>
The host side of the veth pair appears as <code>veth20b3dac</code>, attached to <code>docker0</code>, as confirmed with <code>brctl show</code>.
<code># brctl show</code>
<code>docker0 8000.02426a4693d2 no veth20b3dac</code>
Launching a second container and pinging it demonstrates successful intra‑host communication via the bridge.
<code>docker run -d --name c2 -it hub.pri.ibanyu.com/devops/alpine:v3.8 /bin/sh</code>
<code>docker exec -it c1 ping 172.17.0.3</code>
Cross‑Host Networking
Default Docker networking cannot reach containers on different hosts, so Kubernetes uses CNI plugins (e.g., Flannel, Calico, Weave) that create a dedicated bridge (<code>cni0</code>) on each node.
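On a node running a bridge-based CNI plugin, the same primitives from the Docker example reappear: a <code>cni0</code> bridge with one host-side veth per pod. A small guarded sketch (it only does anything on such a node; elsewhere it just prints a message):

```shell
# Inspect the CNI bridge and per-pod veth interfaces on a Kubernetes node.
# Guarded: runs only where a cni0 bridge actually exists.
if ip link show cni0 >/dev/null 2>&1; then
  ip addr show cni0        # the node's pod-subnet gateway address lives here
  ip link show type veth   # one host-side veth per running pod
  CNI_DEMO=ran
else
  echo "skipping: no cni0 bridge on this machine"
  CNI_DEMO=skipped
fi
```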
CNI supports three modes:
Overlay – encapsulates traffic in tunnels (Flannel UDP/VXLAN, Calico IPIP).
Host‑gw (layer‑3 routing) – uses routing tables without tunnels, requiring the hosts to be on the same L2 network (Flannel host‑gw).
Underlay – relies on the underlying network and BGP for routing (Calico).
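For context, a node's CNI plugin is driven by a JSON config under <code>/etc/cni/net.d/</code>. The following is a minimal example for the standard <code>bridge</code> plugin, written to a temporary file for inspection; the name <code>demo-net</code> is made up, and <code>10.244.1.0/24</code> is the example per-node pod range used in this article.

```shell
# A minimal CNI configuration for the standard "bridge" plugin.
cat > /tmp/10-demo-bridge.conf <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.1.0/24",
    "routes": [ { "dst": "0.0.0.0/0" } ]
  }
}
EOF
# A kubelet configured for CNI reads such files from /etc/cni/net.d/.
grep '"bridge"' /tmp/10-demo-bridge.conf
```

The <code>ipam</code> section is what gives each node its own pod subnet; the cross-host question is then only how traffic for the other nodes' subnets gets routed, which is exactly where the three modes differ.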
In host‑gw mode, a pod on node1 reaches a pod on node2 via a static route whose next hop is node2's IP, and the <code>cni0</code> bridge on node2 forwards the packet to the destination pod.
<code>10.244.1.0/24 via 10.168.0.3 dev eth0</code>
Calico avoids a bridge entirely: it creates a veth pair for each pod and installs per‑pod routing rules directly on the host. The BIRD daemon distributes these routes via BGP, forming a full mesh of node‑to‑node peerings.
<code>10.92.77.163 dev cali93a8a799fe1 scope link</code>
When nodes are not on the same L2 segment, Calico can fall back to IPIP encapsulation, adding a tunnel device (<code>tunl0</code>) for the encapsulated traffic.
<code>10.92.203.0/24 via 10.100.1.2 dev tunl0</code>
For large clusters, Calico recommends a Route Reflector (RR) topology to reduce the number of BGP peer connections.
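With a route reflector, the node-to-node full mesh is switched off and every node peers only with the RR. A sketch of the corresponding Calico v3 resources, written out as a manifest rather than applied; the RR address <code>10.100.1.10</code> and AS number 64512 are made-up example values.

```shell
# Calico v3 resources for an RR topology: disable the full mesh globally,
# then add an explicit BGP peering to the route reflector.
cat > /tmp/bgp-rr.yaml <<'EOF'
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  nodeToNodeMeshEnabled: false
  asNumber: 64512
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: peer-to-rr
spec:
  peerIP: 10.100.1.10
  asNumber: 64512
EOF
# On a real cluster this would be applied with: calicoctl apply -f /tmp/bgp-rr.yaml
# and the resulting BGP sessions checked with: calicoctl node status
grep nodeToNodeMeshEnabled /tmp/bgp-rr.yaml
```

This reduces the peer count from O(N²) sessions in the full mesh to O(N) sessions against the reflector.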
In summary, Kubernetes container networking can be implemented with simple bridge‑based setups for single‑host clusters, while production environments typically adopt CNI plugins such as Flannel host‑gw for simplicity or Calico for BGP‑driven scalability, choosing the solution that best fits the underlying infrastructure.
Efficient Ops
This public account is maintained by Xiaotianguo and friends and regularly publishes original technical articles. We focus on operations transformation and aim to accompany you throughout your operations career, growing together.