Kubernetes (K8s) Overview: Architecture, Components, Probes, Rolling Updates, Image Policies, and Persistent Storage
This article provides a comprehensive introduction to Kubernetes, covering its origin, master‑node and worker‑node architecture, pod health‑checking probes, rolling‑update controls, image pull policies, service concepts, external access, persistent storage options, label selectors, and common kubectl commands, all illustrated with practical YAML examples.
Kubernetes (K8s) is an open‑source system for automating deployment, scaling, and management of containerized applications, originally derived from Google’s internal Borg platform.
A Kubernetes cluster consists of at least one master (control‑plane) node and multiple worker nodes. The master hosts the API Server (the cluster's unified entry point), controller‑manager, scheduler, and etcd (the state store); administrators interact with it through the kubectl CLI. Each worker node runs a container runtime (e.g., Docker), the kubelet agent, and kube‑proxy for service networking, and hosts the actual pods.
Containers differ from traditional host deployments by offering near‑instant startup, immutable packaging, and isolated execution, while requiring careful handling of data persistence.
Pod health is monitored through three probe types: livenessProbe, readinessProbe, and startupProbe. All probes share parameters such as initialDelaySeconds, periodSeconds, timeoutSeconds, successThreshold, and failureThreshold. Probes can be implemented via exec, httpGet, or tcpSocket methods. Example exec probe YAML:
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600"
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5

Example httpGet probe YAML:
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    livenessProbe:
      httpGet:
        scheme: HTTP
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3

Example tcpSocket probe YAML:
spec:
  containers:
  - name: goproxy
    image: k8s.gcr.io/goproxy:0.1
    ports:
    - containerPort: 8080
    readinessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20

Rolling updates can be tuned with maxSurge (how many extra pods may be created above the desired count) and maxUnavailable (how many pods may be down at once), viewable via kubectl explain deploy.spec.strategy.rollingUpdate.
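A minimal sketch of how these two knobs sit inside a Deployment (the name, labels, and image are illustrative, not from the article):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # illustrative name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most 1 pod above the desired count during the update
      maxUnavailable: 1  # at most 1 pod may be unavailable at any moment
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25   # assumed example image
```

With replicas: 4, this update proceeds with between 3 and 5 pods alive at every step.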
Image pull policies are Always, Never, and IfNotPresent. By default, images tagged latest use Always; other tags use IfNotPresent.
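To override the default, set the policy explicitly on the container (a hedged sketch; the container name and image are placeholders):

```yaml
spec:
  containers:
  - name: app                 # illustrative name
    image: nginx:1.25         # non-latest tag, so the implicit default would be IfNotPresent
    imagePullPolicy: Always   # override: always pull, even if the image is cached on the node
```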
Pod status values include Pending, Running, Succeeded, Failed, and Unknown. Restart policies are Always (the default), OnFailure, and Never.
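For example, a one‑shot task pod that should be restarted only when its container exits with a non‑zero code can be declared like this (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: one-shot             # illustrative name
spec:
  restartPolicy: OnFailure   # restart only on non-zero exit; a clean exit ends the pod
  containers:
  - name: task
    image: busybox
    command: ["sh", "-c", "echo done"]
```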
A Service provides a stable virtual IP and DNS name for a set of pods, enabling load‑balanced access and service discovery. External traffic can reach pods through a Service of type NodePort, which opens the same port on every node.
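A NodePort Service of this kind might look as follows (the service name, label, and port numbers are assumptions for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc          # illustrative name
spec:
  type: NodePort
  selector:
    app: web             # pods carrying this label receive the traffic
  ports:
  - port: 80             # cluster-internal service port
    targetPort: 8080     # container port on the backing pods
    nodePort: 30080      # opened on every node (default range 30000-32767)
```

Any node's IP at port 30080 then forwards to a healthy backing pod.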
Persistent storage options are:
emptyDir: a temporary directory shared among containers in the same pod; deleted when the pod is removed.
hostPath: mounts a host‑machine directory into the container (tight node‑pod coupling).
PersistentVolume (PV): with access modes ReadWriteOnce, ReadOnlyMany, and ReadWriteMany, and reclaim policies Retain, Recycle (deprecated), and Delete. A PersistentVolumeClaim (PVC) requests storage that matches a PV's size, access mode, and storage class.
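A sketch of a matching PV/PVC pair, assuming a hostPath‑backed volume purely for demonstration (names, path, and sizes are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo             # illustrative name
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/pv-demo     # assumed host directory, demo only
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo            # illustrative name
spec:
  accessModes:
  - ReadWriteOnce           # must be satisfiable by the PV
  resources:
    requests:
      storage: 1Gi          # must fit within the PV's capacity
```

The claim binds to the PV because the requested size and access mode match; a pod then mounts the PVC by name.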
Labels and label selectors are used to group and query resources. Equality‑based selectors use =, ==, and !=; set‑based selectors use in, notin, and exists. Common kubectl commands:
# view pod labels
kubectl get pod --show-labels
# show the values of the given label keys as extra columns
kubectl get pod -L env,tier
# add, modify, delete a label on a pod
kubectl label pod my-pod env=prod
kubectl label pod my-pod env=staging --overwrite
kubectl label pod my-pod env-
# similar commands for nodes
kubectl label nodes node01 disk=ssd
kubectl label nodes node01 disk=ssd --overwrite
kubectl label nodes node01 disk-

Other resource objects covered include:
DaemonSet: ensures exactly one pod runs on every (eligible) node; it has no replicas field.
Job: runs batch tasks to completion; parallelism controls how many pods run concurrently and completions how many successful runs are required.
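A hedged sketch of such a Job, computing digits of pi five times with two pods in flight (the name and command are the classic documentation example, used here as an assumption):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-job              # illustrative name
spec:
  completions: 5            # total successful runs required
  parallelism: 2            # pods running concurrently
  template:
    spec:
      restartPolicy: OnFailure   # Jobs require OnFailure or Never, not Always
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(100)"]
```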
Pod creation flow: client submits YAML → API server stores object in etcd → controller‑manager creates the pod → scheduler selects a suitable node → kubelet on that node launches the pod. Deletion follows a graceful termination sequence: removal from Service endpoints, execution of pre‑stop hooks, SIGTERM to containers, and finally SIGKILL after the grace period.
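The graceful‑termination window and the pre‑stop hook in that sequence can both be configured on the pod; a minimal sketch, with illustrative names and a placeholder sleep standing in for real drain logic:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-demo       # illustrative name
spec:
  terminationGracePeriodSeconds: 30   # SIGKILL is sent once this window expires
  containers:
  - name: app
    image: nginx
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 5"]   # runs before SIGTERM reaches the container
```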
For further reading, the article links to the official Kubernetes documentation on probes and provides additional references on related technologies.