Understanding Kubernetes: Architecture, Core Components, and Deployment Workflow
This article explains how Kubernetes serves as a cloud‑native middle layer that abstracts servers and application services, details its control‑plane and node components and the roles of Pods, containers, and the kubectl tool, and walks through a typical service deployment and request flow.
Developers often face challenges when manually managing multiple application services across several servers, such as frequent restarts, manual updates, and resource constraints. Adding an intermediate layer can automate these tasks, and Kubernetes (k8s) is the open‑source solution that provides this capability.
What is Kubernetes? Kubernetes (commonly abbreviated k8s, from the eight letters between the "K" and the "s") is an open‑source, cloud‑native platform, originally released by Google, that coordinates and manages multiple application services using declarative YAML configurations.
Architecture Overview Kubernetes splits the cluster into a control plane (the brain) and nodes (the workers). The control plane includes components such as API Server, Scheduler, Controller Manager, and etcd for state storage. Nodes run the actual workloads and contain components like kubelet, container runtime, kube‑proxy, and host one or more Pods.
Control‑Plane Components
API Server – exposes the Kubernetes API used by users and internal components.
Scheduler – decides which node should run a newly created Pod based on resource availability.
Controller Manager – implements higher‑level logic (e.g., replication, node health) by interacting with the API Server.
etcd – a distributed key‑value store that persists cluster state.
Node Components
Pod – the smallest deployable unit, which may contain one or more containers (application, logging, monitoring, etc.).
kubelet – an agent that ensures containers in a Pod are running as described.
Container runtime – pulls container images and runs them (e.g., Docker, containerd).
kube‑proxy – maintains network rules on each node so that traffic addressed to a Service is forwarded to the correct Pod.
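The Pod described above is itself declared in YAML. A minimal sketch of a single‑container Pod; the name, labels, and nginx image are placeholders, not taken from the article:

```yaml
# Minimal Pod with one application container.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod        # placeholder name
  labels:
    app: demo
spec:
  containers:
    - name: web         # main application container
      image: nginx:1.25 # image pulled by the container runtime
      ports:
        - containerPort: 80
```

In practice Pods are rarely created directly; a higher‑level workload such as a Deployment manages them so they are recreated automatically if a node fails.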
Cluster and Ingress The control plane and nodes together form a Cluster. Multiple clusters can be used for different environments (test, production). An Ingress controller (often Nginx) exposes services to the outside world.
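An Ingress resource tells the controller how to route external traffic to a Service inside the cluster. A sketch only; the hostname and the Service name `demo-service` are hypothetical, and it assumes an NGINX Ingress controller is installed:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: nginx      # assumes the NGINX Ingress controller
  rules:
    - host: demo.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service   # hypothetical backing Service
                port:
                  number: 80
```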
kubectl is the command‑line client that talks to the API Server. Users typically write a YAML manifest describing a Pod or a higher‑level workload and apply it with:
kubectl apply -f xx.yaml
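The manifest passed to `kubectl apply` might look like the following Deployment, which asks for three replicas of a Pod template. This is a sketch; the names, labels, and nginx image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 3                # desired number of Pods
  selector:
    matchLabels:
      app: demo
  template:                  # Pod template the controller stamps out
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```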
The API Server validates the manifest and persists the desired state in etcd, the Controller Manager creates any Pods the workload implies, the Scheduler assigns each Pod to a suitable node, and that node’s kubelet instructs the container runtime to pull the image and start the containers.
Service Invocation Flow An external HTTP request first reaches the Ingress controller, which forwards it to the matching Service; the kube‑proxy rules on the node then route it to an appropriate Pod, which runs the containerized service. The response follows the reverse path back to the client.
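The forwarding step relies on a Service object, which provides a stable virtual address in front of a set of Pods; kube‑proxy programs the rules that realize it. A sketch matching the placeholder `app: demo` label used in the examples above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-service
spec:
  selector:
    app: demo          # routes to Pods carrying this label
  ports:
    - port: 80         # port the Service exposes
      targetPort: 80   # container port traffic is sent to
```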
Summary Kubernetes acts as a middle layer between applications and servers, offering APIs that simplify deployment, scaling, and self‑healing of containerized workloads. Its modular architecture—control plane plus nodes—makes it a foundational cloud‑native technology for modern infrastructure.