
Understanding Kubernetes Service: Definition, Purpose, and Working Mechanism

This article explains what a Kubernetes Service is, why it is needed, and how to create one with a Deployment, then details the internal workings of the Endpoints controller and kube-proxy in its iptables and ipvs modes.

Aikesheng Open Source Community

1. What is a Service?

A Service is a Kubernetes resource that provides a stable entry point to a group of Pods offering the same service. Each Pod has its own IP and port, but clients access the Service's IP and port instead, without needing to know where individual Pods run; this allows Pods to be rescheduled anywhere in the cluster.

First, we create three Pods with the label app=nginx using a Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

The above creates three Pods:

NAME                                READY   STATUS    RESTARTS   AGE    IP          NODE
nginx-deployment-6b474476c4-hc6k4   1/1     Running   0          7d2h   10.42.1.3   node8
nginx-deployment-6b474476c4-mp8vw   1/1     Running   0          7d2h   10.42.0.7   node10
nginx-deployment-6b474476c4-wh8xd   1/1     Running   0          7d2h   10.42.1.4   node8
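Assuming the manifest above is saved as nginx-deployment.yaml, it can be applied and the resulting Pods listed as follows (these commands require access to a running cluster):

```shell
# Create the Deployment, then list its Pods with their IPs and nodes.
kubectl apply -f nginx-deployment.yaml
kubectl get pods -l app=nginx -o wide
```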

Next, we define a Service named my-service with the same selector, app=nginx:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: nginx
  ports:
  - name: default
    protocol: TCP
    port: 80   # service port
    targetPort: 80

The Service is created with type ClusterIP by default, and a virtual IP (e.g., 10.109.163.26) is automatically assigned from the cluster's service CIDR. Clients can now reach the Pods via this IP and port, with traffic load-balanced across the Pods.

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE    SELECTOR
my-service   ClusterIP   10.109.163.26   <none>        80/TCP    4d1h   app=nginx
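A ClusterIP is only routable from inside the cluster, so a quick way to verify load balancing is to curl the Service from a throwaway client Pod (the IP below is the example ClusterIP shown above; substitute your own):

```shell
# Start a one-off client Pod, curl the Service a few times, then clean up (--rm).
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  sh -c 'for i in 1 2 3; do curl -s -o /dev/null -w "%{http_code}\n" http://10.109.163.26:80; done'
```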

2. Why Do We Need a Service?

Pods have mutable IPs; when managed by ReplicaSet or Deployment they can be destroyed and recreated, causing IP changes.

Pod IPs are assigned only after scheduling, so clients cannot know them beforehand.

Horizontal scaling creates multiple Pods with the same functionality; clients should not need to track each Pod’s IP but should use a single stable IP.

3. How Service Works

Creating a Service involves two main components: the Endpoints controller in kube-controller-manager, which creates the corresponding Endpoints object, and kube-proxy, which updates the network rules on every node.

1. The kube-apiserver validates the Service object and stores it in etcd.

2. The Endpoints controller notices the new Service and creates an Endpoints object listing the Pods that match the Service's selector.

3. kube-proxy on each node watches Service and Endpoints changes and updates the local iptables/ipvs rules.

Endpoints

When a Service is created, an Endpoints object with the same name is generated, e.g.:

NAME         ENDPOINTS                                AGE
my-service   10.42.0.7:80,10.42.1.3:80,10.42.1.4:80   7d3h

The Endpoints object stores the IP and port of every Pod selected by the Service. The Endpoints controller watches Service and Pod events, creating, updating, or deleting Endpoints objects accordingly.

kube-proxy

kube-proxy runs on every worker node and maintains the network rules that implement Service load balancing. In iptables mode, creating a Service adds a series of iptables rules.
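Before digging into the rules, it can help to confirm which mode kube-proxy is actually running in; kube-proxy reports this on its metrics port (10249 by default), so on a node:

```shell
# Prints "iptables" or "ipvs" depending on the configured proxy mode.
curl -s http://localhost:10249/proxyMode
```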

On a node, run:

iptables -nvL OUTPUT -t nat

to see the jump to the KUBE-SERVICES chain. Inspect that chain:

iptables -nvL KUBE-SERVICES -t nat

and then the chain for this particular Service (e.g., KUBE-SVC-KEAUNL7HVWWSEZA6) to observe the random-probability rules that distribute traffic across the individual Pod IPs:

iptables -nvL KUBE-SVC-KEAUNL7HVWWSEZA6 -t nat

Finally, inspect a per-Pod chain to see the DNAT rule that rewrites the destination to the Pod's IP and port:

iptables -nvL KUBE-SEP-SKMF2UJJQ24AYOPG -t nat
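The per-rule probabilities in the Service chain are not uniform: iptables evaluates rules in order, so for N backends rule i (0-based) is given probability 1/(N-i), which works out to an equal 1/N share per backend overall. The arithmetic can be sketched with plain shell and awk (no cluster required):

```shell
# Reproduce the --probability values kube-proxy programs for N=3 backends:
# rule 0 matches 1/3 of all traffic, rule 1 matches 1/2 of the remainder,
# and rule 2 catches everything left, giving each Pod a 1/3 share.
n=3
for i in 0 1 2; do
  awk -v n="$n" -v i="$i" 'BEGIN { printf "rule %d: --probability %.5f\n", i, 1/(n-i) }'
done
# rule 0: --probability 0.33333
# rule 1: --probability 0.50000
# rule 2: --probability 1.00000
```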

When many Services exist, the number of iptables rules grows dramatically, and both rule updates and packet processing slow down. The ipvs mode, which became generally available in Kubernetes 1.11, addresses this scalability issue by storing forwarding state in kernel hash tables [1].
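If kube-proxy is running in ipvs mode, the equivalent forwarding table can be inspected on a node with the ipvsadm tool (assuming the package is installed):

```shell
# -L lists IPVS virtual servers and their real servers (the Pod backends);
# -n keeps addresses and ports numeric instead of resolving names.
ipvsadm -Ln
```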

Reference

[1] ipvs mode: https://kubernetes.io/blog/2018/07/09/ipvs-based-in-cluster-load-balancing-deep-dive
