
Demystifying Kubernetes Networking: Services, IPs, and Ports Explained

This article breaks down Kubernetes' internal networking model, explaining key concepts such as network namespaces, veth pairs, iptables, services, ClusterIP, NodePort, and Ingress, and illustrates how pods communicate within a node, across nodes, and how external traffic reaches the cluster.


Terminology

1. Network namespace: Linux isolates network stacks into separate namespaces, preventing communication between them; Docker uses this for container network isolation.

2. Veth pair: A virtual Ethernet pair that enables communication between different network namespaces.

3. Iptables/Netfilter: Netfilter runs in the kernel to apply packet filtering rules; Iptables runs in user space to manage those rules.

4. Bridge: A virtual Layer‑2 device that connects multiple network interfaces, functioning like a switch.

5. Routing: Linux uses routing tables to decide where to forward IP packets.
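The first two concepts can be seen in action with a few commands. This is a minimal sketch (requires root on a Linux host; the namespace names, interface names, and addresses are arbitrary): two network namespaces are created and connected with a veth pair, after which they can reach each other even though each has an isolated network stack.

<code># Create two isolated network namespaces
$ ip netns add ns1
$ ip netns add ns2
# Create a veth pair and move one end into each namespace
$ ip link add veth1 type veth peer name veth2
$ ip link set veth1 netns ns1
$ ip link set veth2 netns ns2
# Assign addresses and bring the links up
$ ip netns exec ns1 ip addr add 10.0.0.1/24 dev veth1
$ ip netns exec ns2 ip addr add 10.0.0.2/24 dev veth2
$ ip netns exec ns1 ip link set veth1 up
$ ip netns exec ns2 ip link set veth2 up
# The namespaces can now communicate over the veth pair
$ ip netns exec ns1 ping -c 1 10.0.0.2</code>

This is essentially what Docker and Kubernetes do for every Pod: one end of a veth pair lives in the Pod's namespace, the other is attached to a bridge on the host.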

Complex Network Model

Kubernetes abstracts the cluster network to achieve a flat network topology, allowing us to reason about networking without physical node constraints.

One Service

A Service abstracts a set of Pods, providing a stable access point and load‑balancing. It is typically bound to a Deployment and uses label selectors to map to backend Pods.

Service types (ClusterIP, NodePort, LoadBalancer) determine how the Service is exposed.
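A Service like the one examined below can be created in two steps. This sketch assumes a stock nginx image; `kubectl create deployment` labels the Pods `app=nginx`, and `kubectl expose` reuses that selector for the Service:

<code># Create two nginx Pods managed by a Deployment (labeled app=nginx)
$ kubectl create deployment nginx --image=nginx --replicas=2
# Expose them behind a ClusterIP Service on port 80
$ kubectl expose deployment nginx --port=80 --type=ClusterIP</code>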

<code>$ kubectl get svc --selector app=nginx
NAME   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
nginx  ClusterIP  172.19.0.166   <none>        80/TCP    1m
$ kubectl describe svc nginx
Name:         nginx
Namespace:    default
Labels:       app=nginx
Selector:     app=nginx
Type:         ClusterIP
IP:           172.19.0.166
Port:         <unset>  80/TCP
Endpoints:   172.16.2.125:80,172.16.2.229:80</code>

The Service routes traffic to the two backend Pods at 172.16.2.125:80 and 172.16.2.229:80.
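The Service‑to‑Pod mapping is stored in an Endpoints object that Kubernetes keeps in sync as Pods are created and deleted; it can be inspected directly:

<code>$ kubectl get endpoints nginx
NAME    ENDPOINTS                          AGE
nginx   172.16.2.125:80,172.16.2.229:80    1m</code>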

Two IPs

Pod IP: Each Pod receives an IP address from the node's container network (the docker0 bridge network in this setup); Pods can communicate directly via these IPs.

Cluster IP: A virtual IP that exists only in Service forwarding rules; it is not bound to any network interface (so it typically cannot be pinged) but forwards traffic to backend Pods, usually round‑robin. It is implemented by kube‑proxy using iptables or IPVS.
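In iptables mode, the ClusterIP's "virtualness" is visible on any node: kube‑proxy programs NAT rules that match the Service IP and DNAT traffic to a backend Pod. A way to inspect this (run as root on a node; chain names and the exact output vary by cluster, and the rule shown is illustrative):

<code># Show the NAT rules kube-proxy generated for the nginx ClusterIP
$ iptables -t nat -L KUBE-SERVICES -n | grep 172.19.0.166</code>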

Three Ports

port: The port on which the Service is exposed inside the cluster (e.g., 3306 for a MySQL Service); other Pods reach the Service at clusterIP:port.

nodePort: Exposes the Service on each Node’s IP at a static port, allowing external access (e.g., http://node:30001).

targetPort: The port the container actually listens on, as defined in the Pod spec (e.g., a Dockerfile's EXPOSE 80). The Service forwards traffic arriving on port (or nodePort) to targetPort on a backend Pod.

<code>kind: Service
apiVersion: v1
metadata:
  name: mallh5-service
  namespace: abcdocker
spec:
  selector:
    app: mallh5web
  type: NodePort
  ports:
  - protocol: TCP
    port: 3017
    targetPort: 5003
    nodePort: 31122</code>

Cluster Internal Communication

Single‑Node Communication

Within a node, communication occurs between containers in the same Pod via the shared network namespace (localhost) and between Pods via the docker0 bridge and veth pairs.

<code>root@node-1:/opt/bin# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.23.100.1    0.0.0.0         UG    0      0        0 eth0
10.1.0.0        0.0.0.0         255.255.0.0     U     0      0        0 flannel.1
10.1.1.0        0.0.0.0         255.255.255.0   U     0      0        0 docker0</code>

Pod‑to‑Pod Communication on the Same Node

Pods share the docker0 bridge; traffic uses the Pod IP and is forwarded via veth pairs.
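On a Docker‑based node this wiring can be confirmed directly: each Pod's veth endpoint appears as an interface attached to docker0 (the `brctl` tool comes from the bridge-utils package; interface names vary per node):

<code># List the veth interfaces attached to the docker0 bridge
$ brctl show docker0
# Show the bridge's own address, the gateway for local Pods
$ ip addr show docker0</code>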

Cross‑Node Communication

CNI (Container Network Interface) plugins provide standardized networking across nodes. Common approaches include overlay/encapsulation networks (Flannel with its VXLAN or UDP backends, Weave) and L3 routing solutions (Calico, optionally with IP‑in‑IP encapsulation).

Flannel creates a flannel.1 VXLAN interface on each node, assigns each node a unique Pod subnet, and encapsulates cross‑node Pod traffic (VXLAN by default; an older UDP backend also exists).
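This matches the routing table shown earlier: Pod subnets on other nodes fall under 10.1.0.0/16 and are routed via flannel.1, while the local Pod subnet 10.1.1.0/24 goes out docker0. On a Flannel node the setup can be verified with:

<code># flannel.1 is a VXLAN device, not a plain bridge
$ ip -d link show flannel.1
# Remote nodes' Pod subnets are routed through flannel.1
$ ip route | grep flannel.1</code>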

External Access to the Cluster

NodePort

Setting a Service's type to NodePort exposes it on a static port on every node, allowing access via nodeIP:nodePort.

<code>kind: Service
apiVersion: v1
metadata:
  name: influxdb
spec:
  type: NodePort
  ports:
  - port: 8086
    nodePort: 31112
  selector:
    name: influxdb</code>

Ingress

Ingress provides HTTP layer (L7) load balancing and routing based on host and path, consolidating external access to multiple Services behind a single endpoint.

<code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: test.name.com
    http:
      paths:
      - path: /test
        pathType: Prefix
        backend:
          service:
            name: service-1
            port:
              number: 8118
      - path: /name
        pathType: Prefix
        backend:
          service:
            name: service-2
            port:
              number: 8228</code>
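Once an Ingress controller (e.g., ingress-nginx) is deployed, the host/path routing above can be exercised by sending requests with the matching Host header; the controller address below is a placeholder:

<code># Routed to service-1:8118
$ curl -H "Host: test.name.com" http://<ingress-controller-ip>/test
# Routed to service-2:8228
$ curl -H "Host: test.name.com" http://<ingress-controller-ip>/name</code>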

Conclusion

This article illustrated Kubernetes networking through the lens of a Service, two IP concepts, and three port types, covering both intra‑cluster communication and external access methods. Future posts will dive deeper into each networking detail.

Tags: Kubernetes, networking, Service, Ingress, CNI, Pod, NodePort
Written by

Efficient Ops

This public account is maintained by Xiaotianguo and friends, regularly publishing widely-read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.
