
Tracing the Path of Network Traffic in Kubernetes

This article provides a comprehensive guide to Kubernetes networking, covering pod network requirements, Linux network namespaces, the role of the pause container, IP allocation, veth pairs, bridge connections, inter‑pod traffic on same and different nodes, CNI plugins, and how services use iptables and conntrack for traffic routing.

Cloud Native Technology Community

Kubernetes Network Requirements

Kubernetes defines three core networking rules: every pod can communicate with any other pod without NAT, any process on a node can reach any pod on that node without NAT, and each pod receives its own unique IP address (IP‑per‑Pod). These rules are agnostic to the underlying implementation.

Linux Network Namespaces in Pods

When a pod is created, the container runtime creates a Linux network namespace for the pod. All containers in the pod share this namespace, receiving the same IP address and being able to see each other's network interfaces.

apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  containers:
  - name: container-1
    image: busybox
    command: ['/bin/sh','-c','sleep 1d']
  - name: container-2
    image: nginx

The pod gets its own namespace, an IP address, and the containers share the same network stack.

Pause Container

Each pod also contains a hidden pause container. The pause container creates and holds the pod’s network namespace; the other containers join this namespace. Because the pause container does almost nothing after startup, it provides a stable anchor for the network.

$ docker ps | grep pause
fa9666c1d9c6   k8s.gcr.io/pause:3.4.1   "/pause"   k8s_POD_kube-dns-599484b884-sv2js…

IP Allocation

The pod’s IP can be inspected with kubectl get pod … -o jsonpath={.status.podIP}. On the node, the corresponding network namespace appears under /var/run/netns with a name like cni-0f226515-e28b-df13-9f16-dd79456825ac. Inside that namespace the pod’s interface eth0 holds the assigned IP.

$ ip netns exec cni-0f226515-e28b-df13-9f16-dd79456825ac ip a
3: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 ...
    inet 10.244.4.40/32 brd 10.244.4.40 scope global eth0

Inter‑Pod Traffic on the Same Node

Pods on the same node communicate via a pair of virtual Ethernet (veth) devices. One end of the veth pair resides in the pod’s namespace, the other end in the root namespace and is attached to a Linux bridge that acts as a virtual switch.

$ ip link add veth1 netns pod-namespace type veth peer name veth2

Here veth1 is placed directly into the pod’s namespace, while veth2, created without a netns argument, remains in the root namespace.

Pod‑A resolves pod‑B’s MAC address via ARP, and the bridge forwards the frame out the port on which it has learned that MAC, allowing pod‑A to reach pod‑B on the same node.
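The bridge’s forwarding behavior can be sketched as a MAC learning table. This is a simplified Python model for illustration only; the real data path lives in the kernel, and the port names stand in for the root-namespace veth ends attached to the bridge.

```python
# Simplified model of a Linux bridge's MAC learning (illustrative only).
class LearningBridge:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}  # learned MAC address -> port

    def handle_frame(self, src_mac, dst_mac, in_port):
        """Return the set of ports the frame is forwarded out of."""
        self.mac_table[src_mac] = in_port            # learn the sender's port
        if dst_mac in self.mac_table:
            return {self.mac_table[dst_mac]}         # unicast to the known port
        return self.ports - {in_port}                # unknown destination: flood

br = LearningBridge({"veth-a", "veth-b", "veth-c"})
br.handle_frame("aa:aa", "bb:bb", "veth-a")   # unknown dst: flooded to b and c
br.handle_frame("bb:bb", "aa:aa", "veth-b")   # reply: bridge now knows aa:aa
```

After the first exchange, frames between the two pods are forwarded directly rather than flooded, which is exactly why same-node pod traffic never leaves the bridge.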

Inter‑Pod Traffic Across Nodes

When the destination pod is on a different node, the kernel bitwise-ANDs the destination IP with the subnet mask and compares the result against the local pod subnet. Because they differ, the destination is off-network, and the packet is routed toward the node’s default gateway, leaving through the node’s physical eth0 interface.
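That off-network check can be illustrated with Python’s ipaddress module (the addresses and the /24 prefix below are illustrative, matching the 10.244.x.x pod CIDRs used earlier):

```python
import ipaddress

def same_subnet(src, dst, prefix_len):
    """Bitwise-AND each address with the netmask; equal results mean the
    destination is on the local network, otherwise the packet is sent to
    the default gateway."""
    mask = int(ipaddress.ip_network(f"0.0.0.0/{prefix_len}").netmask)
    return (int(ipaddress.ip_address(src)) & mask) == \
           (int(ipaddress.ip_address(dst)) & mask)

print(same_subnet("10.244.4.40", "10.244.4.41", 24))  # True  -> local delivery
print(same_subnet("10.244.4.40", "10.244.5.12", 24))  # False -> route to gateway
```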

CNI (Container Network Interface)

CNI plugins automate the steps required to set up pod networking: creating the namespace, veth pair, bridge, IP allocation, routing, and NAT rules. Popular plugins include Calico, Cilium, Flannel, and Weave Net.

{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "datastore_type": "kubernetes",
      "mtu": 0,
      "log_level": "Info",
      "ipam": {"type": "calico-ipam", "assign_ipv4": "true", "assign_ipv6": "false"},
      "kubernetes": {"k8s_api_root": "https://10.96.0.1:443", "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"}
    },
    {"type": "bandwidth", "capabilities": {"bandwidth": true}},
    {"type": "portmap", "snat": true, "capabilities": {"portMappings": true}}
  ]
}

CNI supports four operations: ADD (attach a container to the network), DEL (detach it and release its resources), CHECK (verify the container’s networking is still configured as expected), and VERSION (report the CNI spec versions the plugin supports).
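A CNI plugin is simply an executable the runtime invokes with CNI_COMMAND set in the environment and the network configuration as JSON on stdin. The dispatcher below is a hypothetical sketch of that contract; the function name and return shapes are illustrative, not those of any real plugin.

```python
import json

# Hypothetical sketch of a CNI plugin's command dispatcher (illustrative).
def dispatch(command, conf):
    if command == "VERSION":
        # Report which CNI spec versions this plugin supports.
        return {"cniVersion": "0.3.1", "supportedVersions": ["0.3.0", "0.3.1"]}
    if command == "ADD":
        # A real plugin would create the namespace wiring here (veth pair,
        # bridge attachment, IP, routes) and describe the result.
        return {"cniVersion": conf.get("cniVersion", "0.3.1"),
                "interfaces": [], "ips": []}
    if command in ("DEL", "CHECK"):
        return {}  # DEL tears the wiring down; CHECK verifies it
    raise ValueError(f"unsupported CNI_COMMAND: {command}")

conf = json.loads('{"name": "k8s-pod-network", "cniVersion": "0.3.1"}')
print(dispatch("ADD", conf)["cniVersion"])  # -> 0.3.1
```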

Service Traffic, Netfilter, and iptables

Kubernetes Services allocate a stable virtual IP (VIP). When a pod sends traffic to a Service VIP, a DNAT rule in Netfilter’s PREROUTING chain rewrites the destination IP to the selected backend pod’s IP. Conntrack records the translation so that the reply’s source address is rewritten back to the Service VIP before it reaches the client.
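The DNAT/conntrack interplay can be modeled as a small translation table. This is a toy model for illustration; the real state lives in the kernel’s conntrack table, and the VIP and pod IPs below are made up.

```python
import random

# Toy model of DNAT plus conntrack (illustrative addresses).
SERVICE_VIP = "10.96.0.10"
BACKENDS = ["10.244.4.40", "10.244.5.12"]

conntrack = {}  # (client_ip, vip) -> backend chosen when the flow began

def dnat(client_ip, dst_ip):
    """PREROUTING: rewrite the Service VIP to one backend pod IP."""
    if dst_ip != SERVICE_VIP:
        return dst_ip
    key = (client_ip, dst_ip)
    if key not in conntrack:
        conntrack[key] = random.choice(BACKENDS)  # pick a backend once per flow
    return conntrack[key]

def reverse_nat(client_ip, src_ip):
    """Reply path: conntrack restores the backend's IP to the VIP."""
    for (client, vip), backend in conntrack.items():
        if client == client_ip and backend == src_ip:
            return vip
    return src_ip

backend = dnat("10.244.1.5", SERVICE_VIP)
print(reverse_nat("10.244.1.5", backend))  # -> 10.96.0.10 (the VIP)
```

Note that subsequent packets of the same flow hit the conntrack entry and go to the same backend, which is what gives a Service connection-level stickiness.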

$ iptables-save

These NAT transformations hook into the standard Netfilter chains (PREROUTING, INPUT, FORWARD, OUTPUT, POSTROUTING); running iptables-save on a node dumps the rules kube-proxy installs to implement them.
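kube-proxy’s iptables mode spreads traffic across N backends with a chain of rules using the statistic match: rule i (0-based) matches with probability 1/(N−i), and unmatched packets fall through to the next rule. A quick check that this arithmetic yields a uniform 1/N per backend:

```python
def rule_probabilities(n):
    """Per-backend selection probability when rule i matches with
    probability 1/(n - i) and unmatched traffic falls through to the
    next rule, as kube-proxy's statistic-mode rules do."""
    chosen, remaining = [], 1.0
    for i in range(n):
        p_match = 1.0 / (n - i)
        chosen.append(remaining * p_match)   # reach this rule AND match it
        remaining *= 1.0 - p_match           # fall through to the next rule
    return chosen

print(rule_probabilities(3))  # each of the three backends gets ~1/3
```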

Review

How containers communicate within a pod.

Pod‑to‑Pod communication on the same and different nodes.

Pod‑to‑Service traffic and the role of iptables/NAT/conntrack.

The essential networking components in Kubernetes: namespaces, veth pairs, bridges, CNI plugins, overlay networks, and more.

Written by

Cloud Native Technology Community

The Cloud Native Technology Community, part of the CNBPA Cloud Native Technology Practice Alliance, focuses on evangelizing cutting‑edge cloud‑native technologies and practical implementations. It shares in‑depth content, case studies, and event/meetup information on containers, Kubernetes, DevOps, Service Mesh, and other cloud‑native tech, along with updates from the CNBPA alliance.
