
Inside Kubernetes Control Plane: API Server, Scheduler, and Controller Manager Explained

An in‑depth look at Kubernetes’ control plane reveals how the API Server, Scheduler, and Controller Manager work together to manage cluster state, handle authentication, schedule pods, and ensure convergence, with practical HA tips, advanced features, and real‑world deployment workflows.

API Server (kube-apiserver)

The front door of the cluster: every request passes through it.

Core functions

Single entry point: Scheduler, Controller Manager, Kubelet, and external clients (kubectl, SDKs) interact only via the API Server.

Authentication, Authorization, Admission Control

Authentication: certificates, tokens, OIDC, etc.

Authorization: RBAC, ABAC, etc.

Admission control: defaulting, quota, sidecar injection, etc.

RESTful API: HTTP CRUD for Pods, Services, Deployments, and other resources (see the example after this list).

Persistence: writes desired state to etcd and reads current state.
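
Because everything is exposed as a REST resource, the API can be exercised directly. A minimal sketch (namespace and proxy port are just examples):

  # List Pods in the default namespace through the raw REST path
  kubectl get --raw /api/v1/namespaces/default/pods

  # The same endpoint through kubectl proxy and plain curl
  kubectl proxy --port=8001 &
  curl http://localhost:8001/api/v1/namespaces/default/pods

  # Deployments live under the apps API group
  kubectl get --raw /apis/apps/v1/namespaces/default/deployments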

Workflow

Client sends a request (e.g., kubectl create -f pod.yaml) to the API Server.

Request passes through authentication → authorization → admission control.

On success the object is stored in etcd.

Other control‑plane components receive the change via the Watch mechanism.
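
A minimal end-to-end sketch of that flow (file name and image are illustrative):

  # pod.yaml - the desired state submitted to the API Server
  apiVersion: v1
  kind: Pod
  metadata:
    name: web
  spec:
    containers:
    - name: web
      image: nginx:1.27

  # Authentication → authorization → admission control, then persisted in etcd
  kubectl create -f pod.yaml

  # Other components pick up the new object via their watches
  kubectl get pod web --watch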

Production tips

Deploy multiple replicas behind a load balancer for high availability.

Extend the API with aggregation or CustomResourceDefinitions (a minimal CRD sketch follows this list).

Secure all inter‑component traffic with TLS and automate certificate rotation.
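
As an illustration of the CRD route, a minimal definition (group and kind are made up for this example) registers a brand-new resource that the API Server then serves like any built-in type:

  # crontab-crd.yaml - minimal CustomResourceDefinition
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: crontabs.stable.example.com   # must be <plural>.<group>
  spec:
    group: stable.example.com
    scope: Namespaced
    names:
      plural: crontabs
      singular: crontab
      kind: CronTab
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string

  kubectl apply -f crontab-crd.yaml
  kubectl get crontabs        # the new resource is now served by the API Server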

Scheduler (kube-scheduler)

Selects a suitable Node for each newly created Pod.

Core functions

Watch: Detect Pods with an empty nodeName.

Filter: Exclude nodes that lack resources, violate taints, or fail affinity rules.

Score: Rank remaining nodes and pick the optimal one.

Bind: Update the Pod’s nodeName via the API Server.
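
The Filter and Score inputs come straight from the Pod spec. A sketch (labels, taint values, and zones are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: api-worker
  spec:
    containers:
    - name: worker
      image: busybox:1.36
      command: ["sleep", "3600"]
      resources:
        requests:
          cpu: "500m"         # nodes without 500m free CPU are filtered out
          memory: "256Mi"
    tolerations:
    - key: "dedicated"        # tolerates nodes tainted dedicated=batch:NoSchedule
      operator: "Equal"
      value: "batch"
      effect: "NoSchedule"
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values: ["zone-a", "zone-b"]   # hard affinity: only these zones pass Filter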

Workflow

Pod object is stored in etcd by the API Server.

Scheduler watches for unscheduled Pods.

Executes Filter → Score to choose a node.

Calls the API Server to bind the Pod.

Kubelet on the selected node starts the containers.
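
Once the bind succeeds, the decision is visible on the Pod itself (the Pod name carries over from the sketch above):

  # The chosen node is recorded in .spec.nodeName
  kubectl get pod api-worker -o jsonpath='{.spec.nodeName}'

  # The decision also shows up as a "Scheduled" event from the scheduler
  kubectl describe pod api-worker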

Advanced features

Scheduling framework with plug‑in points for custom Filter, Score, and Bind logic.

Multiple schedulers via the schedulerName field (see the sketch after this list).

Preemption: evict lower‑priority Pods to make room for higher‑priority ones.

Strategies: bin‑packing (pack Pods densely onto fewer nodes) or spreading (distribute Pods evenly across nodes).
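
A sketch combining a custom scheduler name with preemption via a PriorityClass (all names are illustrative, and my-custom-scheduler is assumed to be deployed separately):

  apiVersion: scheduling.k8s.io/v1
  kind: PriorityClass
  metadata:
    name: critical-batch
  value: 100000                # Pods using this class can preempt lower-priority Pods
  globalDefault: false
  description: "For workloads that must schedule even on a full cluster"
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: important-job
  spec:
    schedulerName: my-custom-scheduler   # handled by the alternative scheduler, not the default one
    priorityClassName: critical-batch
    containers:
    - name: job
      image: busybox:1.36
      command: ["sleep", "600"]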

Controller Manager (kube-controller-manager)

Runs a set of control loops that continuously drive the actual cluster state toward the desired state.

Built‑in controllers

Deployment Controller: creates and updates ReplicaSets for Deployments.

ReplicaSet Controller: ensures the number of Pods matches the replica count.

Node Controller: monitors node health and handles failures.

Service Controller: provisions cloud load balancers for Service objects.

Endpoint Controller: maintains Service‑to‑Pod endpoint mappings.

Namespace Controller: manages namespace lifecycle.

Example loop (ReplicaSet)

Watch for changes to ReplicaSets and Pods.

Compare desired replica count with actual Pods.

Create or delete Pods to reconcile the difference.

Repeat continuously until the state converges.
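
The loop is easy to watch in action (names and image are illustrative):

  # replicaset.yaml - desired state: three identical Pods
  apiVersion: apps/v1
  kind: ReplicaSet
  metadata:
    name: web-rs
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: web
    template:
      metadata:
        labels:
          app: web
      spec:
        containers:
        - name: web
          image: nginx:1.27

  kubectl apply -f replicaset.yaml
  kubectl delete pod -l app=web --wait=false   # remove the matching Pods out from under the controller
  kubectl get pods -l app=web                  # new Pods appear until the count is back at 3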

Production considerations

Leader election ensures only one instance acts as the active manager in multi‑replica deployments (see the flag sketch after this list).

Cloud‑specific controllers can be split out into a separate cloud‑controller‑manager.

Custom controllers (Operators) can be built with CRDs to manage external systems.
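
For reference, these behaviours map to flags on the controller manager; the excerpt below assumes a kubeadm-style static Pod manifest (the path and exact flag set are illustrative):

  # Excerpt from /etc/kubernetes/manifests/kube-controller-manager.yaml
  spec:
    containers:
    - name: kube-controller-manager
      command:
      - kube-controller-manager
      - --leader-elect=true            # only the election winner runs the control loops
      - --cloud-provider=external      # hand cloud-specific loops to cloud-controller-manager
      - --controllers=*,bootstrapsigner,tokencleaner   # which built-in controllers to enable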

Coordinated workflow (Deployment creation)

API Server

Receives kubectl apply -f deployment.yaml, runs authentication, authorization, and admission checks, then stores the Deployment in etcd.

Controller Manager

Deployment controller creates a ReplicaSet.

ReplicaSet controller creates the required Pods.

Scheduler

Finds the unscheduled Pods, selects nodes, and binds them via the API Server.

Kubelet (node)

Observes the bound Pods, pulls images, and starts the containers.
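
Tracing the whole chain with a Deployment (names and image are illustrative):

  # deployment.yaml - desired state handed to the API Server
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: web
    labels:
      app: web
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: web
    template:
      metadata:
        labels:
          app: web
      spec:
        containers:
        - name: web
          image: nginx:1.27

  kubectl apply -f deployment.yaml
  kubectl get deploy,rs,pods -l app=web -o wide   # Deployment -> ReplicaSet -> Pods, with assigned nodes
  kubectl rollout status deployment/web           # completes once the Kubelets report the Pods ready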

Relationship with etcd

API Server is the only component that talks directly to etcd.

Scheduler and Controller Manager access cluster state exclusively through the API Server.

etcd is the source of truth for all cluster objects.
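
For reference, objects land under the /registry prefix; with direct access to an etcd member (the certificate paths below follow the kubeadm layout and are illustrative), the raw keys can be listed:

  # Values are stored as protobuf, so only the keys are human-readable
  ETCDCTL_API=3 etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    get /registry/pods/default/ --prefix --keys-only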

High‑availability best practices

Run multiple API Server replicas behind a load balancer.

Deploy an odd number of etcd members (3 or 5) to maintain Raft consensus.

Run multiple Scheduler and Controller Manager replicas with leader election.
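
With kubeadm, for example, a stacked HA control plane is bootstrapped roughly like this (the endpoint is illustrative; <token>, <hash>, and <key> are printed by the init step):

  # First control-plane node: clients and components target the load balancer, not one API Server
  kubeadm init --control-plane-endpoint "lb.example.com:6443" --upload-certs

  # Additional control-plane nodes join as control-plane members
  kubeadm join lb.example.com:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <key>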

Control‑plane summary

The API Server, Scheduler, and Controller Manager form a declarative control loop: they watch for changes through the API Server (backed by etcd), compute the difference between desired and actual state, and act to converge the cluster.

