How Containers Talk in Kubernetes: Network Namespaces, Veth Pairs & CNI Explained
This article explains how Kubernetes enables container-to-container communication using network namespaces, veth pairs, bridges, and CNI plugins such as Flannel and Calico, covering both intra‑host networking with docker0 and cross‑host networking models, routing rules, and BGP‑based solutions.
Container Network Basics
In Kubernetes, container networking is provided by plug‑in modules rather than a built‑in implementation. The basic principles are that any pod can communicate directly with any other pod without NAT, nodes can talk to pods, and each pod shares a single network stack across its containers.
A Linux container’s network stack lives in its own network namespace, which contains network interfaces, a loopback device, a routing table, and iptables rules. Implementing a container network relies on several Linux features:
Network Namespace : isolates a full network stack.
Veth Pair : a pair of virtual Ethernet devices that connect two namespaces.
Iptables/Netfilter : provides packet filtering and NAT.
Bridge : a layer‑2 virtual switch that forwards frames based on MAC addresses.
Routing : kernel routing tables decide where packets are sent.
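These building blocks can be exercised directly with the `ip` tool from iproute2. The sketch below wires two network namespaces together with a veth pair and pings across it; the namespace names `ns1`/`ns2` and the 10.0.0.0/24 addresses are arbitrary illustrations, and the script needs root (it prints "skipped" otherwise). It shows the mechanism, not what any CNI plug-in literally runs:

```shell
# Wire two network namespaces together with a veth pair (requires root).
# Namespace names and 10.0.0.0/24 addresses are arbitrary illustrations.
STATUS=skipped
if [ "$(id -u)" -eq 0 ] && ip netns add ns1 2>/dev/null; then
  ip netns add ns2
  ip link add veth1 type veth peer name veth2   # create the pair
  ip link set veth1 netns ns1                   # move one end into each namespace
  ip link set veth2 netns ns2
  ip -n ns1 addr add 10.0.0.1/24 dev veth1
  ip -n ns2 addr add 10.0.0.2/24 dev veth2
  ip -n ns1 link set veth1 up
  ip -n ns2 link set veth2 up
  ip netns exec ns1 ping -c 1 -W 1 10.0.0.2 >/dev/null && STATUS=ok
  ip netns del ns1; ip netns del ns2            # clean up
fi
echo "veth demo: $STATUS"
```

This is exactly the pattern container runtimes automate: a pod's network namespace plays the role of `ns1`, and the host (or a bridge on it) holds the other end of the pair.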
On a single host, Docker creates a docker0 bridge. Containers are attached to this bridge via a veth pair; one end appears inside the container as eth0, the other end appears on the host (e.g., veth20b3dac). The container’s default route points to the bridge, enabling direct L2 forwarding.
```shell
# Start a container (-it keeps /bin/sh alive under -d) and open a shell in it.
docker run -dit --name c1 hub.pri.ibanyu.com/devops/alpine:v3.8 /bin/sh
docker exec -it c1 /bin/sh
# Inside the container: inspect interfaces and the routing table.
ifconfig
route -n
```

Running a second container and pinging its IP (e.g., 172.17.0.3) demonstrates the path: packets leave the first container's eth0, cross the veth pair to the docker0 bridge, which forwards them by MAC address (after an initial ARP broadcast) to the second container's veth peer.
Cross‑Host Network Communication
By default, containers on different hosts cannot reach each other by IP. Kubernetes solves this with the CNI (Container Network Interface) API, allowing plug‑ins such as Flannel, Calico, Weave, and Contiv to provide cross‑host networking.
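A plug-in is selected by dropping a configuration file into `/etc/cni/net.d/` on each node. As a rough illustration, a minimal configuration for the standard `bridge` plug-in with `host-local` IPAM might look like the following; the network name, bridge device, and subnet here are illustrative values, not Kubernetes defaults:

```json
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/24",
    "routes": [ { "dst": "0.0.0.0/0" } ]
  }
}
```

The kubelet invokes the named plug-in binary for every pod sandbox, passing this configuration plus the pod's network namespace path.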
CNI plug‑ins create their own bridge (e.g., cni0) and support three implementation modes:
Overlay : encapsulates container traffic in tunnels (Flannel UDP/VXLAN, Calico IPIP).
Layer‑3 Routing : uses host routing tables without tunnels, requiring the hosts to be on the same L2 network (Flannel host‑gw, Calico BGP).
Underlay : relies on the underlying network, with pods and hosts in the same L3 space, also using BGP for route distribution.
For example, Flannel’s host‑gw mode adds a route like 10.244.1.0/24 via 10.168.0.3 dev eth0 on each node, directing pod traffic to the remote node’s IP.
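In host-gw mode, flanneld effectively turns its view of (pod subnet → node IP) mappings into plain kernel routes. The sketch below only prints the `ip route` commands such a mapping implies; the subnets and node IPs are made up for illustration, and nothing here talks to flannel itself:

```shell
# Print the host routes implied by a (pod subnet -> node IP) table,
# in the form flannel's host-gw backend would program them. Illustrative values.
ROUTES=""
while read -r subnet gw; do
  ROUTES="${ROUTES}ip route add ${subnet} via ${gw} dev eth0
"
done <<'EOF'
10.244.1.0/24 10.168.0.3
10.244.2.0/24 10.168.0.4
EOF
printf '%s' "$ROUTES"
```

Because the next hop is a plain node address, no encapsulation is needed; this is also why host-gw requires all nodes to share a layer-2 segment, so that each next hop is directly reachable.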
Calico Architecture
Calico consists of a CNI plug-in, the Felix agent (which programs routes and iptables policy on each host), the BIRD BGP daemon (which distributes routes between nodes), and confd (configuration management). Instead of a bridge, Calico creates a veth pair for each pod and installs a host-side /32 route, e.g., 10.92.77.163 dev cali93a8a799fe1 scope link. Packets travel from the pod through the veth pair to the host, are forwarded according to the BGP-learned routing table, and reach the destination pod.
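Concretely, a cross-node packet is steered by two tables: the sending host holds a subnet route toward the destination node learned over BGP, while the receiving host holds the /32 device route to the pod's veth. A minimal sketch of the two entries (pod address and interface name copied from the article's example; the 10.100.1.3 node IP is made up):

```shell
# The two route entries involved in a Calico (BGP, no-tunnel) pod-to-pod path.
# Pod IP and cali interface follow the article's example; node IP is illustrative.
SRC_ROUTE="10.92.77.0/24 via 10.100.1.3 dev eth0 proto bird"   # on the sending node
DST_ROUTE="10.92.77.163 dev cali93a8a799fe1 scope link"        # on the receiving node
printf '%s\n%s\n' "$SRC_ROUTE" "$DST_ROUTE"
```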
Calico’s default “node‑to‑node mesh” mode creates a full BGP mesh among all nodes, which scales poorly (O(N²)). For larger clusters, a Route Reflector (RR) topology is recommended, reducing the number of BGP sessions.
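The scaling difference is easy to quantify: in a full mesh every node peers with every other node, so N nodes need N(N-1)/2 sessions, while with a single route reflector each node peers only with the reflector. A quick arithmetic check (plain shell, no BGP involved):

```shell
# Compare BGP session counts: full node-to-node mesh vs. one route reflector.
N=100
MESH=$(( N * (N - 1) / 2 ))   # every pair of nodes peers directly
RR=$(( N - 1 ))               # every node peers only with the reflector
echo "nodes=$N mesh=$MESH rr=$RR"
```

In practice at least two reflectors are deployed for redundancy, so the session count grows as a small multiple of N rather than quadratically.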
When hosts are not on the same L2 segment, Calico can fall back to IPIP encapsulation, adding routes such as 10.92.203.0/24 via 10.100.1.2 dev tunl0. The tunnel device wraps each pod packet in an outer IP header addressed to the remote host, which decapsulates it and delivers it to the destination pod.
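Encapsulation also costs header space, which is why tunneled modes run pod interfaces at a reduced MTU. Assuming a 1500-byte host MTU (an assumption for illustration), the usual adjustments are 20 bytes for IPIP (one extra IPv4 header) and 50 bytes for VXLAN:

```shell
# MTU headroom consumed by common encapsulations, for a 1500-byte host MTU.
HOST_MTU=1500
IPIP_MTU=$(( HOST_MTU - 20 ))    # extra outer IPv4 header (20 bytes)
VXLAN_MTU=$(( HOST_MTU - 50 ))   # outer IPv4 (20) + UDP (8) + VXLAN (8) + inner Ethernet (14)
echo "ipip=$IPIP_MTU vxlan=$VXLAN_MTU"
```

These are the familiar 1480/1450 pod MTUs seen in Calico IPIP and Flannel VXLAN clusters respectively.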
Choosing a Solution
In public‑cloud environments, using the cloud provider’s CNI or Flannel host‑gw is often sufficient. In private data‑center or bare‑metal setups, Calico’s BGP‑based routing provides better performance and flexibility. Select the network plug‑in that matches your infrastructure and scalability requirements.