
Understanding Kubernetes Master Components: API Server, etcd, Scheduler, and More

This article explains the key components running on a Kubernetes master node (the API Server, etcd, kube‑scheduler, kube‑controller‑manager, and cloud provider integration), covering their roles, how they interact, and practical curl and kubectl commands for common operations.


Kubernetes Master Node Components

The master node (control plane) runs the core components that manage the cluster state, schedule workloads, and interact with underlying infrastructure. Typical components are:

Kubernetes API Server

etcd

kube‑scheduler

kube‑controller‑manager

Cloud Provider integration

Kubernetes API Server

The API Server is the central entry point for all client requests (kubectl, UIs, controllers). It authenticates and authorises each request, validates the submitted object, and then writes the desired state to etcd; other components react to the resulting change events through the API Server's watch mechanism.

Common curl commands (development environment)

curl -X POST -H "Content-Type: application/json" http://localhost:8080/api/v1/namespaces/default/pods \
  -d '{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "my-pod"},
    "spec": {"containers": [{"name": "my-container", "image": "nginx:latest"}]}
  }'

curl http://localhost:8080/api/v1/nodes/my-node

curl -X DELETE http://localhost:8080/api/v1/namespaces/default/services/my-service

curl -X PATCH -H "Content-Type: application/merge-patch+json" \
  http://localhost:8080/apis/apps/v1/namespaces/default/deployments/my-deployment \
  -d '{"spec":{"replicas":3}}'

In production the API Server must be accessed over TLS (typically on port 6443) with proper authentication credentials (client certificates, bearer tokens, etc.); note that recent Kubernetes releases have removed the insecure localhost port the examples above rely on.
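A sketch of an authenticated request, assuming client-certificate credentials; the paths and host are placeholders for your cluster's values:

curl --cacert /path/to/ca.crt \
  --cert /path/to/client.crt \
  --key /path/to/client.key \
  https://<apiserver-host>:6443/api/v1/namespaces/default/pods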

etcd – Distributed Key‑Value Store

etcd stores the entire cluster state as a consistent, highly‑available key‑value store. It uses the Raft consensus algorithm to guarantee that writes are replicated to a majority of nodes before being committed.

Basic etcd operations (HTTP API)

curl -L http://localhost:2379/v2/keys/my-key -XPUT -d value=my-value
curl -L http://localhost:2379/v2/keys/my-key
curl -L http://localhost:2379/v2/keys/my-key -XDELETE
while true; do curl -L http://localhost:2379/v2/keys/my-key; sleep 1; done

Production deployments should enable TLS (--cert-file, --key-file, and --trusted-ca-file on the server; --cert, --key, and --cacert for clients) and may require client certificate authentication (--client-cert-auth). Watch operations can exhibit slight latency because writes must first be committed through Raft log replication.
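Note that the v2 keys API shown above is deprecated; etcd v3 serves a gRPC API that is normally driven with etcdctl. Equivalent operations, assuming an etcdctl v3 binary on the same host:

ETCDCTL_API=3 etcdctl --endpoints=localhost:2379 put my-key my-value
ETCDCTL_API=3 etcdctl --endpoints=localhost:2379 get my-key
ETCDCTL_API=3 etcdctl --endpoints=localhost:2379 del my-key
ETCDCTL_API=3 etcdctl --endpoints=localhost:2379 watch my-key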

kube‑scheduler

The scheduler watches for newly created Pods that have no node assigned and selects an appropriate worker node based on resource requests, node labels, affinity/anti‑affinity rules, taints, and other constraints.

Deploy a three‑replica Deployment

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
EOF

kubectl get deployments
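To see which node the scheduler assigned each replica to:

kubectl get pods -l app=nginx -o wide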

List available nodes with kubectl get nodes. To pin a Pod to a specific node, add a nodeSelector (e.g., kubernetes.io/hostname: node1) to the pod spec.
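A minimal sketch of such pinning (the Pod and node names here are illustrative; substitute a node name from kubectl get nodes):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod
spec:
  nodeSelector:
    kubernetes.io/hostname: node1
  containers:
  - name: nginx
    image: nginx:latest
EOF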

kube‑controller‑manager

This binary runs a collection of controllers that continuously reconcile the desired state (as stored in etcd) with the actual state of the cluster. Controllers include ReplicationController, Endpoint, Namespace, Node, PersistentVolumeClaim, ServiceAccount, ResourceQuota, Token, and CronJob controllers.

Create a ReplicationController with three replicas

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-rc
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: busybox
        command: ["sleep", "infinity"]
EOF

kubectl get rc
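To watch the reconciliation loop in action, delete one of the managed Pods; the controller notices the deviation from the desired replica count and creates a replacement (the Pod name below is a placeholder; substitute one from the first command's output):

kubectl get pods -l app=my-app           # note one Pod's name
kubectl delete pod <pod-name>
kubectl get pods -l app=my-app --watch   # a replacement appears within seconds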

Service accounts can be listed with kubectl get serviceaccounts. To create one:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-account
EOF

kubectl get serviceaccounts
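A Pod runs under a service account by setting serviceAccountName in its spec; a minimal sketch (the Pod name is illustrative):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: sa-demo
spec:
  serviceAccountName: my-account
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
EOF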

Cloud Provider Integration

The cloud‑provider module enables Kubernetes to provision cloud‑specific resources such as LoadBalancer services, PersistentVolumes, and automatic node registration on platforms like AWS, GCP, Azure, and Alibaba Cloud.

Example on Google Cloud Platform (GKE)

# Create a GKE cluster with three nodes
gcloud container clusters create cluster-name \
  --num-nodes=3 \
  --zone us-central1-a \
  --machine-type=n1-standard-1

# Expose a Deployment as a LoadBalancer service
kubectl expose deployment my-deployment --type=LoadBalancer --port=80
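
# The cloud provider provisions the external load balancer asynchronously;
# watch until EXTERNAL-IP changes from <pending> to an assigned address
kubectl get service my-deployment --watch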

# Create a 1 GiB PersistentVolumeClaim
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

kubectl get pvc
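On GKE the cluster's default StorageClass dynamically provisions a persistent disk to satisfy the claim. A minimal sketch of a Pod mounting it (the Pod and volume names are illustrative):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo
spec:
  containers:
  - name: app
    image: nginx:latest
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc
EOF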

Inspect node status and labels with kubectl get nodes --show-labels. In production, ensure the cloud controller manager (or, for the legacy in-tree integration, the control-plane components started with the --cloud-provider flag) is configured with the appropriate credentials.
