How kube-proxy and CNI Collaborate to Enable Pod‑to‑Service Communication in Kubernetes
This article explains how kube-proxy and CNI plugins cooperate within a Kubernetes cluster to translate Service ClusterIP requests into real Pod IPs, detailing the Netfilter, iptables/ipvs rule configuration, overlay networking modes, and the emerging proxy‑free approaches using eBPF.
In a Kubernetes cluster, the kube-proxy component and the CNI (Container Network Interface) plugin work together to ensure that Pods can communicate with Services across nodes.
When a Pod (e.g., Pod A) accesses a Service of type ClusterIP, the request follows these steps:
Pod A sends traffic to the Service name, which resolves via CoreDNS to the Service's virtual IP (ClusterIP).
The Linux kernel’s Netfilter performs DNAT, selecting a real backend Pod IP.
If the selected backend Pod resides on a different node, the CNI plugin forwards the packet according to its network mode, either directly via the host network or by encapsulating it with VXLAN or IPIP.
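The first step above can be observed from inside a Pod. A minimal sketch (the Service name and the returned ClusterIP are illustrative):

```shell
# From inside Pod A, resolve a Service name via CoreDNS:
nslookup my-service.default.svc.cluster.local
# CoreDNS answers with the Service's ClusterIP, e.g. 10.96.0.42 —
# a virtual IP that exists only in Netfilter rules, not on any
# network interface, which is why pinging it often fails even
# though TCP connections to it succeed.
```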
kube-proxy runs as a DaemonSet on every worker node, watches Service and EndpointSlice objects, and programs the kernel's Netfilter rules using either iptables or ipvs. Both modes rely on Netfilter to perform packet filtering, DNAT/SNAT, and forwarding.
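In iptables mode, the programmed rules can be inspected on any node with `iptables-save`. A simplified excerpt; the `KUBE-SERVICES`/`KUBE-SVC-*`/`KUBE-SEP-*` chain naming is kube-proxy's real convention, but the hash suffixes, IPs, and ports below are illustrative:

```shell
# iptables-save | grep KUBE   (abridged, illustrative addresses)
# Traffic to the ClusterIP jumps to a per-Service chain:
-A KUBE-SERVICES -d 10.96.0.42/32 -p tcp --dport 80 -j KUBE-SVC-XYZ123
# The Service chain picks one endpoint chain at random:
-A KUBE-SVC-XYZ123 -m statistic --mode random --probability 0.5 -j KUBE-SEP-AAA111
-A KUBE-SVC-XYZ123 -j KUBE-SEP-BBB222
# Each endpoint chain DNATs to a real Pod IP:
-A KUBE-SEP-AAA111 -p tcp -j DNAT --to-destination 10.244.1.5:8080
-A KUBE-SEP-BBB222 -p tcp -j DNAT --to-destination 10.244.2.7:8080
```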
The Service’s ClusterIP is a virtual IP (VIP) with no physical network interface behind it; kube-proxy creates iptables/ipvs rules that map this VIP to the actual Pod IPs. In iptables mode, a backend Pod is selected randomly (via the statistic match module), while ipvs supports multiple load‑balancing algorithms such as round‑robin (rr), least‑connection (lc), and source‑hashing (sh).
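When kube-proxy runs in ipvs mode, the virtual servers it programs can be inspected with `ipvsadm` on a node. A sketch; the output shape matches `ipvsadm`, but the addresses are illustrative:

```shell
# List ipvs virtual servers (requires kube-proxy in ipvs mode):
ipvsadm -Ln
# Illustrative output: the ClusterIP appears as a virtual server
# with its scheduler (here "rr") and the real Pod IPs as backends:
# TCP  10.96.0.42:80 rr
#   -> 10.244.1.5:8080    Masq    1    0    0
#   -> 10.244.2.7:8080    Masq    1    0    0
```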
CNI plugins (e.g., Flannel, Calico) provide Pod networking using Linux kernel features such as VXLAN or IPIP. After Netfilter finishes DNAT/SNAT, if the destination Pod IP is directly routable from the host (as in Calico's BGP mode), the packet is forwarded natively; otherwise the CNI encapsulates it and sends it across the overlay to the destination node.
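The overlay device itself is visible on each node. For Flannel's default VXLAN backend, the interface is named `flannel.1` and uses VXLAN ID 1 on UDP port 8472:

```shell
# Show VXLAN details of Flannel's overlay device:
ip -d link show flannel.1
# Illustrative output includes the encapsulation parameters, e.g.:
#   vxlan id 1 local 192.168.1.10 dev eth0 dstport 8472
# Packets to Pods on other nodes are wrapped in UDP with this
# VXLAN header before leaving the host.
```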
While kube-proxy works well, it has scalability limits: iptables rules are matched sequentially, so lookup cost grows with the number of Services and Pods, and frequent endpoint changes cause rule re‑application latency. Modern CNI solutions such as Cilium use eBPF to implement proxy‑free service traffic forwarding, bypassing the kernel’s Netfilter path entirely.
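As a concrete example of the proxy-free approach, Cilium can be installed with its kube-proxy replacement enabled via Helm. A sketch assuming Cilium's Helm chart; flag names follow Cilium's documented values, but the API server address and chart version will differ per environment:

```shell
# Install Cilium with eBPF-based service handling replacing kube-proxy;
# <API_SERVER_IP> is a placeholder for your control-plane endpoint:
helm install cilium cilium/cilium --namespace kube-system \
  --set kubeProxyReplacement=true \
  --set k8sServiceHost=<API_SERVER_IP> \
  --set k8sServicePort=6443
# With this enabled, Service VIP-to-Pod translation happens in eBPF
# programs attached at the socket/XDP layer, not in iptables chains.
```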
In summary, kube-proxy and CNI plugins configure Linux kernel networking components (Netfilter, VXLAN, etc.) to enable seamless Pod‑to‑Service communication, and emerging eBPF‑based approaches can further optimize performance by eliminating the need for a traditional kube‑proxy.