Master Kubernetes Architecture: From Master Nodes to Pods Explained
This article provides a comprehensive overview of Kubernetes architecture, detailing the control plane components, worker node services, and the end‑to‑end workflow that enables automated deployment, scaling, and management of containerised applications in cloud‑native environments.
Kubernetes (K8s) is the core technology of the cloud-native ecosystem and a foundation of modern cloud computing.
K8s is an open‑source container orchestration platform originally developed by Google and open‑sourced in 2014. It provides a powerful platform for automating the deployment, scaling, and management of containerised applications.
The K8s cluster follows a master‑node (control plane) and worker‑node architecture.
The control plane (master) manages the cluster state and consists of the API Server, Scheduler, Controller Manager, and etcd.
API Server
The kube-apiserver is the central API server of the control plane and the entry point for all cluster operations. It provides a REST API used by kubectl, UIs, and controllers, handling authentication, authorization, admission control, and data validation.
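Every cluster object is ultimately a REST resource submitted to the kube-apiserver. A minimal Pod manifest makes this concrete (the name and image below are illustrative, not from any particular cluster):

```yaml
# Minimal Pod object. kubectl sends this to the kube-apiserver, which
# authenticates the request, validates the object, runs admission control,
# and persists the result in etcd.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod        # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.25   # illustrative image
```

Applying it with `kubectl apply -f pod.yaml` is, under the hood, an authenticated REST call against the API server (a POST to `/api/v1/namespaces/default/pods`).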
Scheduler
The Scheduler selects suitable nodes for Pods. Its scheduling process includes filtering nodes that lack resources or violate constraints, scoring the remaining nodes based on priority functions, and binding the Pod to the chosen node.
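The filtering and scoring phases are driven by fields in the Pod spec. A sketch (names and values are illustrative) showing a hard constraint and resource requests:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sched-demo          # illustrative name
spec:
  nodeSelector:
    disktype: ssd           # hard constraint: nodes without this label are filtered out
  containers:
  - name: app
    image: nginx:1.25       # illustrative image
    resources:
      requests:
        cpu: "500m"         # nodes with less than 0.5 CPU free are filtered out
        memory: "256Mi"     # likewise for memory
```

Nodes that survive filtering are scored (for example, by remaining free resources), and the Pod is bound to the highest-scoring node.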
Controller Manager
The kube-controller-manager runs various controllers that monitor the cluster and ensure the desired state. Examples include the ReplicaSetController for replica management, the NodeController for node health detection, and other controllers such as the JobController and DaemonSetController.
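A ReplicaSet illustrates the desired-state model that controllers reconcile (names and image are illustrative):

```yaml
# A ReplicaSet declares a desired state: "3 replicas of this Pod template".
# The ReplicaSetController in kube-controller-manager continuously compares
# the actual Pod count against .spec.replicas and creates or deletes Pods
# to close the gap.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs              # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: app
        image: nginx:1.25   # illustrative image
```

Delete one of the three Pods and the controller notices the discrepancy and creates a replacement; that reconciliation loop is the core pattern behind every controller.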
etcd
etcd is the distributed key‑value store that holds the entire cluster state. It supports high availability through multiple replicas and the Raft consensus algorithm.
Worker Nodes
Each worker node runs the kubelet, kube‑proxy, and a container runtime.
kubelet
The kubelet is an agent that runs on each node, managing Pods and containers to ensure they run in the desired state.
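The kubelet's role is easiest to see through health probes. In this sketch (names, image, and probe values are illustrative), the kubelet restarts the container whenever the probe fails:

```yaml
# The kubelet enforces desired state at the container level: it starts the
# containers of each Pod bound to its node and runs the configured probes.
# If this livenessProbe fails repeatedly, the kubelet restarts the container.
apiVersion: v1
kind: Pod
metadata:
  name: probed-pod          # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.25       # illustrative image
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5   # wait before the first probe
      periodSeconds: 10        # probe interval
```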
kube-proxy
kube-proxy implements network proxying and load balancing for Services, maintaining network rules that forward traffic to the appropriate Pods.
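A ClusterIP Service shows what kube-proxy actually implements (the names and ports below are illustrative):

```yaml
# kube-proxy watches Services and their endpoints and programs
# iptables/IPVS rules so that traffic sent to the Service's virtual IP
# on port 80 is load-balanced across Pods matching the selector.
apiVersion: v1
kind: Service
metadata:
  name: web-svc             # illustrative name
spec:
  selector:
    app: web                # traffic is forwarded to Pods with this label
  ports:
  - port: 80                # port exposed on the Service's virtual IP
    targetPort: 8080        # port the backing containers listen on
```

The Service IP is purely virtual: no process listens on it; the forwarding rules maintained by kube-proxy on every node do the work.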
Container Runtime
The container runtime (e.g., containerd or CRI-O; Docker Engine via the cri-dockerd adapter) is responsible for actually running the containers.
Kubernetes Workflow
1. The user interacts with the cluster via kubectl, which communicates with the API Server.
2. The API Server validates the request and stores the desired state in etcd.
3. The Controller Manager watches the cluster state and takes action to maintain the desired state.
4. The Scheduler assigns unscheduled Pods to suitable nodes.
5. The kubelet on each node starts and manages the Pods and their containers.
6. kube-proxy provides network proxying and load balancing for Services.
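A single Deployment exercises this entire workflow end to end (names and image are illustrative):

```yaml
# kubectl sends this Deployment to the API server, which stores it in etcd;
# the DeploymentController creates a ReplicaSet, the ReplicaSetController
# creates 2 Pods, the Scheduler binds each Pod to a node, the kubelet on
# that node starts the containers, and kube-proxy routes any Service
# traffic to them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: app
        image: nginx:1.25   # illustrative image
```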
Through the coordinated operation of these components, Kubernetes achieves automated deployment, scaling, and management of containerised applications.
Mike Chen's Internet Architecture
Over ten years of BAT architecture experience, shared generously!