
Demystifying Kubernetes Networking: Services, IPs, and Ports Explained

This article breaks down Kubernetes' networking model: it defines key terms, explains the Service abstraction and the related IP and port concepts, details intra‑cluster communication, and outlines external access methods such as NodePort and Ingress, all supported by practical YAML and command examples.


In a previous article we introduced Kubernetes; now we explore its core networking concepts.

Term Definitions

1. Network Namespace : Linux isolates network stacks into separate namespaces, preventing communication between them; Docker uses this for container network isolation.

2. Veth Pair : A virtual Ethernet pair that enables communication between different network namespaces.

3. Iptables/Netfilter : Netfilter runs in kernel mode to apply packet‑filtering rules; Iptables runs in user space to manage those rules.

4. Bridge : A layer‑2 device that connects multiple Linux ports, functioning like a switch.

5. Routing : Linux uses routing tables to decide where IP packets are forwarded.
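The first two terms can be demonstrated directly with iproute2. The following is a minimal sketch, not from the article: it requires root, and the namespace names (ns1, ns2), interface names (veth0, veth1), and the 10.0.0.0/24 range are arbitrary examples.

```shell
# Create two isolated network namespaces
ip netns add ns1
ip netns add ns2
# Create a veth pair and move one end into each namespace
ip link add veth0 type veth peer name veth1
ip link set veth0 netns ns1
ip link set veth1 netns ns2
# Assign addresses and bring the interfaces up
ip netns exec ns1 ip addr add 10.0.0.1/24 dev veth0
ip netns exec ns2 ip addr add 10.0.0.2/24 dev veth1
ip netns exec ns1 ip link set veth0 up
ip netns exec ns2 ip link set veth1 up
# The previously isolated namespaces can now reach each other
ip netns exec ns1 ping -c 1 10.0.0.2
```

This is essentially the wiring Docker sets up for each container, with the host-side veth ends attached to a bridge instead of a second namespace.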

Network Model

Kubernetes abstracts the cluster network to achieve a flat topology, allowing us to ignore physical node details.

Kubernetes network model diagram

Service

A Service abstracts a set of Pods, providing a stable endpoint and load‑balancing. It uses label selectors to map to backend Pods. Service types (ClusterIP, NodePort, LoadBalancer) determine visibility and external exposure.

<code>$ kubectl get svc --selector app=nginx</code>
<code>NAME    TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE</code>
<code>nginx   ClusterIP  172.19.0.166   <none>        80/TCP    1m</code>
<code>$ kubectl describe svc nginx</code>
<code>Name:         nginx</code>
<code>Namespace:    default</code>
<code>Labels:       app=nginx</code>
<code>Selector:     app=nginx</code>
<code>Type:         ClusterIP</code>
<code>IP:           172.19.0.166</code>
<code>Port:         <unset>  80/TCP</code>
<code>Endpoints:    172.16.2.125:80,172.16.2.229:80</code>

The Service routes traffic to the two backend Pods shown above.
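For reference, a manifest that would produce a Service like the one in the example might look as follows. This is a sketch: the name and `app=nginx` labels match the output above, but the `targetPort` of 80 is assumed from the Nginx default.

```yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: ClusterIP
  selector:
    app: nginx        # matches Pods labeled app=nginx
  ports:
  - protocol: TCP
    port: 80          # the Service port other Pods connect to
    targetPort: 80    # the port the Nginx containers listen on
```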

IP Concepts

Pod IP is assigned to each Pod by Docker's bridge network and enables direct Pod‑to‑Pod communication.

Cluster IP is a virtual IP used only by a Service; it cannot be pinged directly but forwards traffic to backend Pods via kube‑proxy (iptables or IPVS).
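You can observe how kube-proxy implements this on a node running in iptables mode. A sketch, run on a cluster node: `KUBE-SERVICES` is the standard kube-proxy entry chain, while the grep pattern and the per-Service chain name are placeholders that vary per cluster.

```shell
# Traffic to a ClusterIP is matched in KUBE-SERVICES and DNAT'ed to a Pod IP
sudo iptables -t nat -L KUBE-SERVICES -n | grep nginx
# Each Service gets a KUBE-SVC-<hash> chain that load-balances across endpoints
# (substitute the hash shown in the output above)
sudo iptables -t nat -L KUBE-SVC-<hash> -n
```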

Port Concepts

Port refers to the Service port exposed to other Pods (e.g., MySQL default 3306) and is not reachable from outside the cluster.

NodePort exposes a Service on a static port on each Node, allowing external access via nodeIP:nodePort.

TargetPort is the port the container itself listens on, typically the one declared in its Dockerfile (e.g., 80 for Nginx).

<code>kind: Service</code>
<code>apiVersion: v1</code>
<code>metadata:</code>
<code>  name: mallh5-service</code>
<code>  namespace: abcdocker</code>
<code>spec:</code>
<code>  selector:</code>
<code>    app: mallh5web</code>
<code>  type: NodePort</code>
<code>  ports:</code>
<code>  - protocol: TCP</code>
<code>    port: 3017</code>
<code>    targetPort: 5003</code>
<code>    nodePort: 31122</code>
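With this manifest applied, the three ports can be exercised as follows. A sketch: the node and Pod IPs are placeholders, and the test Pod is hypothetical.

```shell
# From outside the cluster: any node's IP on the nodePort
curl http://<node-ip>:31122/
# From another Pod in the cluster: the Service's ClusterIP on port
kubectl run -n abcdocker test --rm -it --image=busybox -- \
  wget -qO- http://mallh5-service:3017/
# Traffic path: nodePort 31122 -> Service port 3017 -> targetPort 5003
```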

In‑Cluster Communication

Single‑Node Communication

docker0 bridge; containers within a Pod share the network namespace and can reach each other via 127.0.0.1.

Pod intra‑node communication diagram

Pod‑to‑Pod Communication on Same Node

Pods share the same docker0 bridge, so traffic is forwarded via veth pairs directly between Pods using their Pod IPs.

Same‑node pod communication
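This wiring can be inspected directly on a Docker-networked node. A sketch: interface names and addresses vary per host, and docker0 only exists where Docker's default bridge network is in use.

```shell
# List the host-side veth interfaces attached to the docker0 bridge
ip link show master docker0
# Show the bridge's own address, i.e. the gateway of the local Pod subnet
ip addr show docker0
```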

Cross‑Node Communication (CNI)

The Container Network Interface (CNI) standard allows Kubernetes to plug in various network solutions (e.g., Flannel, Calico, Weave). These plugins create additional network devices (e.g., Flannel's flannel.1 VXLAN interface) and use encapsulation (VXLAN, IPIP) to route traffic between nodes.

Flannel cross‑node communication
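With Flannel in VXLAN mode, the encapsulation device and the per-node routes are visible on each host. A sketch, run on a Flannel node; the exact output depends on the cluster's Pod CIDR layout.

```shell
# Show the VXLAN device Flannel creates for cross-node traffic
ip -d link show flannel.1
# Routes to other nodes' Pod subnets point at flannel.1
ip route | grep flannel.1
```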

External Access to the Cluster

NodePort

Setting a Service's type to NodePort exposes it on a static port on every Node, allowing access via nodeIP:nodePort from outside the cluster.

<code>kind: Service</code>
<code>apiVersion: v1</code>
<code>metadata:</code>
<code>  name: influxdb</code>
<code>spec:</code>
<code>  type: NodePort</code>
<code>  ports:</code>
<code>  - port: 8086</code>
<code>    nodePort: 31112</code>
<code>  selector:</code>
<code>    name: influxdb</code>

Ingress

Ingress provides HTTP‑level load balancing and path‑based routing, exposing multiple Services behind a single external URL.

<code>apiVersion: extensions/v1beta1</code>
<code>kind: Ingress</code>
<code>metadata:</code>
<code>  name: test</code>
<code>  annotations:</code>
<code>    ingress.kubernetes.io/rewrite-target: /</code>
<code>spec:</code>
<code>  rules:</code>
<code>  - host: test.name.com</code>
<code>    http:</code>
<code>      paths:</code>
<code>      - path: /test</code>
<code>        backend:</code>
<code>          serviceName: service-1</code>
<code>          servicePort: 8118</code>
<code>      - path: /name</code>
<code>        backend:</code>
<code>          serviceName: service-2</code>
<code>          servicePort: 8228</code>
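The manifest above uses the legacy extensions/v1beta1 API, which has since been removed from Kubernetes. On current clusters the same routing is written against networking.k8s.io/v1; a sketch mirroring the example (the rewrite annotation shown assumes the ingress-nginx controller):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: test.name.com
    http:
      paths:
      - path: /test
        pathType: Prefix
        backend:
          service:
            name: service-1
            port:
              number: 8118
      - path: /name
        pathType: Prefix
        backend:
          service:
            name: service-2
            port:
              number: 8228
```

Note that `pathType` is now required, and `serviceName`/`servicePort` have become a structured `backend.service` block.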

Summary and Outlook

This article illustrated Kubernetes networking through the lens of a Service, two IP types, and three Port concepts, covering both intra‑cluster and external access mechanisms. Future posts will dive deeper into each networking detail.

Tags: Kubernetes, networking, Service, Ingress, CNI, NodePort
Written by

Efficient Ops

This public account is maintained by Xiaotianguo and friends, regularly publishing widely-read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.
