
Understanding kube-proxy Service Implementation and Packet Flow with Calico in Kubernetes

This guide explains how kube-proxy in iptables mode, together with Calico BGP routing, translates service IPs to pod IPs, creates custom iptables chains, and routes traffic from external or node ports to the correct pod across a Kubernetes cluster.


This article explains how the kube-proxy service is implemented, focusing on how traffic is forwarded to pods in iptables mode with Calico BGP routing.

In our production environment we expose services via ExternalIPs, ClusterIP, and NodePort. ExternalIPs let us assign fixed worker-node IPs as load-balancing VIPs, which is more efficient than NodePort because traffic is directed to specific nodes rather than sprayed across all workers.

When a packet reaches a worker node (e.g., node A) via node_ip:port or cluster_ip:port, the kernel’s DNAT rule rewrites the destination to the pod IP. Calico’s BGP deployment ensures that the pod IP’s CIDR is advertised to the upstream switches, so the packet is routed to the worker node that actually hosts the pod (e.g., node B). Inside node B, Calico creates a virtual interface (veth pair) and routing rules that move the packet from the host network namespace to the pod’s network namespace.
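The DNAT step can be sketched with a representative kube-proxy endpoint rule. This is an illustrative sketch, not output from a real cluster: the chain-name hash is made up, and the pod IP 10.217.120.72 is borrowed from the example later in this article.

```shell
# A hypothetical KUBE-SEP rule of the kind kube-proxy installs per endpoint;
# the hash suffix and addresses are illustrative.
rule='-A KUBE-SEP-XF72NDVZ -p tcp -m tcp -j DNAT --to-destination 10.217.120.72:80'

# The kernel rewrites the packet's destination to the value after --to-destination:
echo "$rule" | grep -o -- '--to-destination [0-9.:]*'
```

Once this rule has fired, every subsequent hop (BGP routing, the veth pair) deals only with the pod IP, not the service IP.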

To reproduce the scenario locally, you can start a Minikube cluster with Calico:

minikube start --network-plugin=cni --cni=calico
# or
minikube start --network-plugin=cni
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Then deploy a sample nginx workload with two replicas and create a ClusterIP Service with ExternalIPs and a NodePort Service:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo-1
  labels:
    app: nginx-demo-1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo-1
  template:
    metadata:
      name: nginx-demo-1
      labels:
        app: nginx-demo-1
    spec:
      containers:
      - name: nginx-demo-1
        image: nginx:1.17.8
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            port: 80
            path: /index.html
          failureThreshold: 10
          initialDelaySeconds: 10
          periodSeconds: 10
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo-1
spec:
  selector:
    app: nginx-demo-1
  ports:
  - port: 8088
    targetPort: 80
    protocol: TCP
  type: ClusterIP
  externalIPs:
  - 192.168.64.57
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo-2
spec:
  selector:
    app: nginx-demo-1
  ports:
  - port: 8089
    targetPort: 80
  type: NodePort
---

After deployment you can access the services via the ExternalIP, the ClusterIP, or the NodePort. kube-proxy creates custom iptables chains in the nat table (e.g., KUBE-SERVICES, KUBE-SVC-*, KUBE-SEP-*) to perform DNAT/SNAT and load-balances between pod replicas using the statistic module (random selection with equal weight per endpoint).

To view the rules you can run:

sudo iptables -v -n -t nat -L PREROUTING | grep KUBE-SERVICES
sudo iptables -v -n -t nat -L KUBE-SERVICES
sudo iptables -v -n -t nat -L KUBE-NODEPORTS

Each KUBE-SVC-xxx chain forwards to one or more KUBE-SEP-xxx chains; because we have two pod replicas, each connection is sent to one of the two pods with 50% probability.
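The probabilities work as follows: with n endpoints, kube-proxy emits n jump rules, where rule i matches with probability 1/(n-i+1) and the last rule matches unconditionally, so each endpoint is selected uniformly overall. A quick sketch of the per-rule probabilities for our two replicas:

```shell
# Per-rule match probabilities used with the iptables statistic module:
# rule i out of n jumps with probability 1/(n-i+1); the last rule always matches.
n=2
for i in $(seq 1 "$n"); do
  awk -v n="$n" -v i="$i" 'BEGIN { printf "rule %d: probability %.1f\n", i, 1/(n-i+1) }'
done
```

With two replicas this prints 0.5 for the first rule and 1.0 for the second: half of the connections jump at rule 1, and everything that falls through is caught by rule 2, giving a 50/50 split.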

Calico's BGP deployment also creates routing entries on the upstream switches so that a pod CIDR (e.g., 10.20.30.0/26) is reachable with the worker node's IP as the next hop:

# example BGP entry
Network                 NextHop         ...
10.20.30.0/26           10.203.30.40    ...
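On the node that hosts the pod, Calico in turn programs a per-pod /32 route pointing at the pod's cali* interface, plus a blackhole route for the node's own block. A minimal sketch that parses sample ip route output; the interface name cali1087c975dd9 comes from the example further down, while the addresses and the blackhole line are illustrative:

```shell
# Sample Calico routes as they might appear in `ip route` on the hosting node
# (addresses and the blackhole block are illustrative):
routes='10.217.120.72 dev cali1087c975dd9 scope link
blackhole 10.217.120.64/26 proto bird'

# Find which host interface owns the pod IP:
echo "$routes" | awk '$1 == "10.217.120.72" { print $3 }'
```

This is the last routing decision: matching the /32 route hands the packet to the cali* end of the veth pair, which delivers it into the pod's network namespace.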

Once DNAT has rewritten the destination to the pod IP (e.g., 10.217.120.72:80), the packet is routed to the node that owns that IP. You can inspect the host's network interfaces and the veth pair index with a netshoot container:

# the nginx image lacks ip/ifconfig, so launch a netshoot container sharing its namespaces
docker ps -a | grep nginx
export CONTAINER_ID=f2ece695e8b9
docker run -it --network=container:$CONTAINER_ID --pid=container:$CONTAINER_ID --ipc=container:$CONTAINER_ID nicolaka/netshoot:latest ip -c addr

The veth pair index (13 in this example) links the host interface cali1087c975dd9 to the pod's eth0, allowing packets to cross from the host network namespace into the pod's network namespace.
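A common way to find this pairing without netshoot: inside the pod's network namespace, /sys/class/net/eth0/iflink holds the interface index of the host-side peer, which you can match against ip -o link output on the host. A minimal sketch against sample output; the link lines are illustrative, mirroring the index-13 / cali1087c975dd9 pairing above:

```shell
# Sample `ip -o link` output on the host (abridged, illustrative):
links='1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
13: cali1087c975dd9@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500'

# Value the pod would report from /sys/class/net/eth0/iflink:
iflink=13

# Match the index to the host-side veth name, stripping the @ifN suffix:
echo "$links" | awk -F': ' -v idx="$iflink" '$1 == idx { sub(/@.*/, "", $2); print $2 }'
```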

In summary, regardless of whether you access the service via cluster_ip:port, external_ip:port, or node_ip:port, the packet is first processed by kube‑proxy‑generated iptables rules, then routed by Calico BGP to the appropriate worker node, and finally delivered to the target pod through the host’s routing table and virtual interface.


Written by 360 Tech Engineering

Official tech channel of 360, building the most professional technology aggregation platform for the brand.