
Understanding Kubernetes: Core Concepts Explained Through Q&A

This article provides a concise, question‑driven overview of Kubernetes, covering the roles of master and worker nodes, pod scheduling, data storage with etcd, service exposure, dynamic scaling, and how the various control‑plane components collaborate to manage a distributed container cluster.


Readers familiar with Docker will find Kubernetes (K8S) a more complex, distributed container orchestration system built on Google’s extensive experience with large‑scale container deployments.

Question 1: How do the master and worker nodes communicate?

The master node runs the `kube-apiserver` process, providing the central API for cluster management and security. Each worker node runs a `kubelet` process that reports node status to the master and receives commands to create Pods.

A Pod is the basic unit in K8S and may contain one or more containers that share a network namespace, allowing them to communicate via `localhost`. Network sharing is achieved by launching a special `pause` container that holds the shared network settings.
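As an illustration, a minimal Pod manifest with two containers sharing one network namespace might look like the sketch below (names and images are hypothetical, not from the article). Because both containers join the same namespace held by the `pause` container, the sidecar can reach the web server on `localhost:80`:

```yaml
# Hypothetical two-container Pod; both containers share one network
# namespace, so "sidecar" can reach "web" via localhost:80.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar    # example name
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox:1.36
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 10; done"]
```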

Question 2: How does the master schedule Pods onto specific nodes?

The `kube-scheduler` process runs scheduling algorithms to select an optimal node for each Pod, using strategies such as round-robin. To force placement on a particular node, you can match a node's Labels with a Pod's `nodeSelector` attribute.
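For example, label-based placement can be sketched as follows (the node name and label values here are illustrative assumptions):

```yaml
# Hypothetical Pod pinned to nodes labeled disktype=ssd.
# The node must first carry the label, e.g.:
#   kubectl label nodes worker-1 disktype=ssd
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod
spec:
  nodeSelector:
    disktype: ssd        # only nodes with this label are eligible
  containers:
  - name: app
    image: nginx:1.25
```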

Question 3: Where is the cluster’s state stored and who maintains it?

Kubernetes uses `etcd`, a highly available, strongly consistent key-value store, to keep all configuration and state data. All read/write operations on this data are performed through the `kube-apiserver`, which also exposes a RESTful API for internal components and external users (e.g., via `kubectl`).
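To make this concrete: every `kubectl` call is just a REST request to the `kube-apiserver`, which in turn reads from or writes to `etcd`. The equivalence can be seen by asking `kubectl` for the raw API route it would otherwise use:

```
# List Pods: kubectl issues GET /api/v1/namespaces/default/pods under the hood
kubectl get pods

# The same data, fetched via the raw REST path
kubectl get --raw /api/v1/namespaces/default/pods
```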

Question 4: How do external users access Pods running inside the cluster?

Instead of the simple port mapping used in single-host Docker, Kubernetes introduces the Service abstraction. A Service groups Pods with the same label selector, stores its definition in `etcd` via the API server, and relies on a `kube-proxy` process on each node to route traffic and perform load balancing.
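A minimal sketch of such a Service (names and port numbers are illustrative):

```yaml
# Hypothetical Service load-balancing across all Pods labeled app=web.
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web           # groups Pods carrying this label
  ports:
  - port: 80           # port exposed by the Service
    targetPort: 8080   # port the Pod containers listen on
  type: NodePort       # exposes the Service on each node's IP for external access
```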

Question 5: How are Pods dynamically scaled up or down?

Scaling is achieved by the Replication Controller. You specify a desired replica count for a Pod; the controller continuously compares the actual number of running Pods with the desired count and creates or deletes Pods until they match.
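A sketch of a ReplicationController declaring three replicas (all names are illustrative):

```yaml
# Hypothetical ReplicationController: Kubernetes keeps exactly 3 Pods
# matching the selector running at all times.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 3          # desired count; the controller reconciles toward it
  selector:
    app: web
  template:            # Pod template used when new replicas are needed
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

(Newer clusters typically express the same intent with a Deployment, which manages ReplicaSets, but the reconciliation idea is identical.)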

Question 6: How do the various control‑plane components cooperate?

The `kube-controller-manager` runs multiple controllers (e.g., Node Controller, ResourceQuota Controller, Namespace Controller) that watch the cluster state via the API server and act to reconcile the actual state with the desired state. It also hosts controllers such as the Service Controller and Replication Controller.
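The reconcile pattern shared by all of these controllers can be sketched in a few lines of Python (a toy model, not Kubernetes code: `desired` and `actual` stand in for state a real controller would read from the API server):

```python
def reconcile(desired: int, actual: list[str]) -> list[str]:
    """One reconciliation pass: return the actions needed to move the
    actual set of replicas toward the desired count."""
    actions = []
    if len(actual) < desired:
        # Too few replicas: plan creations for the shortfall.
        for i in range(desired - len(actual)):
            actions.append(f"create pod-{len(actual) + i}")
    elif len(actual) > desired:
        # Too many replicas: plan deletions of the surplus.
        for name in actual[desired:]:
            actions.append(f"delete {name}")
    return actions  # empty list means actual already matches desired

# A real controller runs this in a watch loop, re-reading state each pass.
print(reconcile(3, ["pod-0"]))           # plans two creations
print(reconcile(1, ["pod-0", "pod-1"]))  # plans one deletion
```

The point of the sketch is the level-triggered design: each pass compares observed state with declared state and emits only the delta, so a crashed controller simply picks up where reality diverges.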

Summary

This Q&A style overview introduces the fundamental Kubernetes concepts without deep implementation details, covering:

- Node
- Pod
- Label
- Selector
- Replication Controller
- Service Controller
- ResourceQuota Controller
- Namespace Controller
- Node Controller

Key processes include:

- `kube-apiserver`
- `kube-controller-manager`
- `kube-scheduler`
- `kubelet`
- `kube-proxy`
- `pause`

The author hopes this concise summary helps newcomers navigate the extensive official documentation.

Tags: cloud-native, kubernetes, Container Orchestration, Services, Pods
Written by

Efficient Ops

This public account is maintained by Xiaotianguo and friends and regularly publishes widely read original technical articles. We focus on operations transformation and aim to accompany you throughout your operations career.
