
Demystifying Kubernetes Networking: From HTTP Request Journey to Load Balancer, kube-proxy, iptables, and Pod Network

This article walks through the complete path of an HTTP request in a GKE Kubernetes cluster, explaining the role of the cloud load balancer, Service and ReplicaSet resources, kube-proxy operation modes, iptables rules, pod networking, and security considerations.

Cloud Native Technology Community

Kubernetes networking can be confusing even for engineers with virtual-networking experience. This article uses a two‑node Google Kubernetes Engine (GKE) cluster to trace an HTTP request from a user's browser to a service running inside the cluster.

1. Request Journey – A user clicks a link, the request travels over the Internet to the cloud provider, reaches the provider’s load balancer, and is forwarded to the Kubernetes Service, which then routes it to a ReplicaSet pod.

2. Load Balancer – A LoadBalancer‑type Service provisions a cloud‑provider load balancer; GCP's network load balancer forwards traffic to a NodePort opened on every node, and kube‑proxy on the receiving node then routes it to a backend pod.
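Once the load balancer is provisioned, you can inspect the resulting external IP and the auto‑allocated NodePort. This is an illustrative session (it requires access to the cluster; the Service name matches the hello-world manifest shown later):

```
# Show the load balancer's external IP once provisioned; the PORT(S) column
# shows the Service port mapped to the auto-allocated NodePort, e.g. 80:3xxxx/TCP
kubectl get service hello-world

# Print only the NodePort that the cloud load balancer targets on every node
kubectl get service hello-world -o jsonpath='{.spec.ports[0].nodePort}'
```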

3. kube‑proxy – Each node runs a kube‑proxy container that forwards traffic destined for a Service's virtual IP (VIP) to the appropriate backend pods. Three operation modes are described: userspace (deprecated), iptables (the default on most platforms), and IPVS (generally available since Kubernetes 1.11).
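The active mode is selected through kube‑proxy's configuration. A minimal sketch of the relevant fragment is below (on managed platforms such as GKE this is controlled by the provider rather than edited by hand):

```yaml
# KubeProxyConfiguration fragment selecting the proxy mode (illustrative)
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "iptables"   # alternatives: "ipvs" (Kubernetes >= 1.11); userspace is deprecated
```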

4. iptables – In the iptables mode, kube‑proxy programs Netfilter chains. The article shows example iptables rules, including chains KUBE‑FW‑…, KUBE‑SVC‑…, and KUBE‑SEP‑… that perform DNAT, SNAT, and packet marking for the hello‑world Service.
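The shape of those rules can be sketched as an `iptables-save` excerpt. The chain hash suffixes and IPs below are invented placeholders, not values from the article; real chain names are hashes derived from the Service's namespace and name:

```
# Traffic to the Service VIP is matched in KUBE-SERVICES and handed to the
# Service chain; for a LoadBalancer Service a KUBE-FW- chain marks packets
# for SNAT before dispatching to the same Service chain.
-A KUBE-SERVICES -d 10.0.171.239/32 -p tcp --dport 80 -j KUBE-SVC-XXXXXXXX
-A KUBE-FW-XXXXXXXX -j KUBE-MARK-MASQ
-A KUBE-FW-XXXXXXXX -j KUBE-SVC-XXXXXXXX

# The Service chain picks a backend endpoint at random (50/50 for two pods)
-A KUBE-SVC-XXXXXXXX -m statistic --mode random --probability 0.5 -j KUBE-SEP-AAAAAAAA
-A KUBE-SVC-XXXXXXXX -j KUBE-SEP-BBBBBBBB

# Each endpoint chain DNATs to one pod's IP and target port
-A KUBE-SEP-AAAAAAAA -p tcp -j DNAT --to-destination 10.4.1.7:8080
-A KUBE-SEP-BBBBBBBB -p tcp -j DNAT --to-destination 10.4.2.9:8080
```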

5. Pod Network – Pods receive IPs from a dedicated CIDR block separate from the node IPs, with each node assigned its own slice of that range. GKE's network plugin creates a Linux bridge on each node, and routes for each node's pod CIDR allow pods to address each other directly across the cluster.
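The per‑node slice of the pod range is recorded on the Node object itself, which can be inspected directly (illustrative; requires cluster access):

```
# List each node alongside the pod CIDR it allocates pod IPs from
kubectl get nodes -o custom-columns=NAME:.metadata.name,POD_CIDR:.spec.podCIDR
```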

6. Completing the Request – By following the iptables chains, the request is DNAT‑translated to the pod's IP and port; the reply is un‑NATed on the return path, and the user receives an HTTP 200 response.

7. Security – The article notes that cloud load balancers may not honor loadBalancerSourceRanges on all providers, recommends using NetworkPolicy (e.g., with Calico) for pod‑level firewalling, and cautions against using hostNetwork or privileged containers without proper controls.
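As a sketch of pod‑level firewalling, the following NetworkPolicy restricts ingress to the hello-world pods. It assumes a NetworkPolicy‑capable CNI (such as Calico) is enabled on the cluster; the 203.0.113.0/24 source range is a documentation placeholder, not a value from the article:

```yaml
# Allow TCP/8080 ingress to hello-world pods only from a trusted CIDR
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: hello-world-ingress
spec:
  podSelector:
    matchLabels:
      app: hello-world   # matches the pods created by the ReplicaSet below
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 203.0.113.0/24   # placeholder trusted source range
    ports:
    - protocol: TCP
      port: 8080               # the containerPort, not the Service port
```

Without a matching `ingress` rule, a pod selected by any NetworkPolicy denies all other inbound traffic by default.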

The following YAML creates the example ReplicaSet and Service used throughout the walkthrough:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  selector:
    matchLabels:
      app: hello-world
  replicas: 2
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: gcr.io/google-samples/node-hello:1.0
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  selector:
    app: hello-world
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  type: LoadBalancer
  externalTrafficPolicy: Cluster

Understanding these components provides a solid foundation for monitoring, troubleshooting, and securing Kubernetes networking in production environments.

Tags: cloud native, Kubernetes, networking, iptables, kube-proxy, load balancer, Pod Network
Written by

Cloud Native Technology Community

The Cloud Native Technology Community, part of the CNBPA Cloud Native Technology Practice Alliance, focuses on evangelizing cutting‑edge cloud‑native technologies and practical implementations. It shares in‑depth content, case studies, and event/meetup information on containers, Kubernetes, DevOps, Service Mesh, and other cloud‑native tech, along with updates from the CNBPA alliance.
