
Understanding Kubernetes: Architecture, Components, and Deployment Workflow

This article explains how Kubernetes serves as a middle layer between applications and servers: its control plane and node components, the roles of pods and containers, the deployment workflow using YAML and kubectl, and how it simplifies scaling, restarting, and exposing services.

IT Services Circle

You built a blog service and deployed it on a cloud platform, but the popularity caused frequent crashes due to high traffic.

To keep the service alive you used tools to automatically restart the crashed instances and deployed the service across several servers.

Later you added more services such as a mall and a voice service, each with different requirements (e.g., external access restrictions, minimum memory). Manually logging into each server to update them became error‑prone and time‑consuming.

The solution is to introduce a middle layer: Kubernetes, an open‑source container orchestration platform originally developed by Google, often abbreviated as k8s.

Kubernetes sits between application services and servers. By writing a YAML configuration file you can define deployment order, resource requirements, and other policies, enabling automatic deployment, restart, and scaling of services.
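As a sketch of such a configuration (the names, image, and values here are hypothetical), a Deployment manifest might declare the container image, replica count, and minimum resource requirements:

```yaml
# blog-deployment.yaml — hypothetical manifest for the blog service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog
spec:
  replicas: 3                 # run three instances across the cluster
  selector:
    matchLabels:
      app: blog
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
        - name: blog
          image: example/blog:1.0   # placeholder image name
          resources:
            requests:
              memory: "256Mi"       # minimum memory, as in the scenario above
              cpu: "250m"
```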

Internally, Kubernetes splits the infrastructure into two parts: the control plane (formerly called the master) and the worker nodes. The control plane acts as the brain, issuing commands, while the nodes execute the workloads.

The control plane consists of several components:

API Server: exposes the Kubernetes API used by clients and internal components.

Scheduler: selects a node with sufficient CPU and memory for each new pod.

Controller Manager: implements higher‑level logic such as replication and lifecycle management.

etcd: a key‑value store that persists the cluster state.

Each node can be a bare‑metal server or a virtual machine. A node runs pods , which are the smallest scheduling units. A pod may contain one or more containers, for example an application container together with a logging or monitoring container. The node includes a container runtime to pull and run container images.
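A pod that pairs an application container with a logging sidecar could look like this (a minimal sketch; the image names and paths are placeholders):

```yaml
# Hypothetical pod with an app container and a log-collecting sidecar
apiVersion: v1
kind: Pod
metadata:
  name: blog-with-logging
spec:
  containers:
    - name: app
      image: example/blog:1.0          # the application container
      volumeMounts:
        - name: logs
          mountPath: /var/log/app      # the app writes its logs here
    - name: log-collector
      image: example/log-agent:1.0     # sidecar that ships the app's logs
      volumeMounts:
        - name: logs
          mountPath: /var/log/app      # sidecar reads from the same volume
  volumes:
    - name: logs
      emptyDir: {}                     # scratch volume shared by both containers
```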

Pods are the basic unit that the scheduler moves between nodes, restarts, and scales dynamically.

To receive commands from the control plane, a node runs a kubelet that manages pods, and a kube‑proxy that handles network traffic, forwarding external requests to the appropriate pod.

A set of control‑plane and node components forms a cluster . Organizations typically run multiple clusters (e.g., separate test and production clusters) and expose services to the outside world via an Ingress controller such as Nginx.

Interaction with the cluster is done through the command‑line tool kubectl , which internally calls the Kubernetes API.

To deploy a service you write a YAML manifest describing the pod, its container image, and resource limits, then execute kubectl apply -f xx.yaml . The command sends the manifest to the API Server, which persists it in etcd; the Controller Manager creates the pod object, the Scheduler assigns it to a suitable node, and the kubelet on that node pulls the container image via the container runtime and starts the pod.
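For example, a minimal pod manifest with resource limits might look like this (the file name, pod name, and image are hypothetical):

```yaml
# pod.yaml — submitted with: kubectl apply -f pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: blog
spec:
  containers:
    - name: blog
      image: example/blog:1.0
      resources:
        limits:
          memory: "512Mi"   # pod is killed if it exceeds this
          cpu: "500m"       # half a CPU core
```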

When an external user sends an HTTP request, it first reaches the cluster's Ingress controller, which forwards it via the kube‑proxy on a node to the appropriate pod, where a container processes the request.
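The routing described above is typically declared with an Ingress resource that points at a Service fronting the pods (Services are the abstraction kube‑proxy actually load‑balances for). A hedged sketch, with placeholder hostname and names:

```yaml
# Hypothetical Ingress routing external traffic to the blog pods
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blog-ingress
spec:
  rules:
    - host: blog.example.com           # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: blog-service     # Service that load-balances across blog pods
                port:
                  number: 80
```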

In summary:

Kubernetes (k8s) is an open‑source platform, originally from Google, for managing large numbers of containerized services.

A cluster consists of a control plane (brain) and nodes (workers).

The control plane includes API Server, Scheduler, Controller Manager, and etcd; nodes run pods, kubelet, container runtime, and kube‑proxy.

Deploying services is simplified to writing a single YAML file and running one kubectl command.

External traffic flows through an Ingress controller, then kube‑proxy, and finally the target pod.

Tags: cloud native, deployment, Kubernetes, DevOps, container orchestration
Written by

IT Services Circle

Delivering cutting-edge internet insights and practical learning resources. We're a passionate and principled IT media platform.
