What Happens When You Deploy an App on Kubernetes? A Deep Dive
This article walks through the entire lifecycle of deploying an application on Kubernetes, explaining how Docker containers differ from virtual machines, the role of Pods, ReplicationControllers, Deployments, and how automatic scaling with HPA and VPA keeps services reliable and efficient.
Introduction
After studying Kubernetes for a few months, the author organizes notes around the theme “What happens when we deploy an application?” to connect scattered concepts and help beginners understand the scheduling process.
Docker vs. Virtual Machine
Docker is an application container engine whose containers share the host OS kernel, while a virtual machine runs a full, independent guest OS. Docker containers are lightweight and share image layers, but they are less portable than VMs because they depend on the host kernel.
Docker shares the host kernel; VM has its own kernel.
Docker isolates resources per container with cgroups and namespaces, but all containers still share the host's kernel and hardware; a VM gets its own virtualized hardware, so its isolation is stronger.
Docker images are built from layers that can be shared across images; VM images are monolithic and cannot share layers.
Docker portability depends on kernel version; VMs are fully portable.
Containers
Containers run multiple services on the same machine using Linux cgroups and namespaces for isolation, including CPU, memory, network, and file system isolation.
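To make the cgroup side concrete: cgroup v2 expresses a CPU cap as a quota/period pair written to the `cpu.max` file. The helper below is a hypothetical illustration (not a Kubernetes or Docker API) of how a fractional CPU limit maps to that pair:

```python
def cpu_max_line(cpus, period_us=100_000):
    """Render a cgroup v2 cpu.max line for a fractional CPU limit.

    cgroup v2 caps CPU with two numbers: a quota of runnable
    microseconds per scheduling period. With the default 100ms
    period, a limit of 1.5 CPUs becomes a quota of 150000us.
    """
    if cpus <= 0:
        return f"max {period_us}"  # "max" means unlimited
    quota_us = int(cpus * period_us)
    return f"{quota_us} {period_us}"

print(cpu_max_line(1.5))   # -> 150000 100000
print(cpu_max_line(0.25))  # -> 25000 100000
```

A container runtime performs essentially this translation when it turns a container's CPU limit into kernel settings.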
Kubernetes Overview
Kubernetes (K8s) orchestrates containers. The control plane (master) includes API server, etcd, scheduler, controller manager, and the nodes run kubelet, kube-proxy, and the containers.
Control Plane Components
Kubernetes API server: the cluster's communication hub; every other component reads and writes state through it.
Scheduler: decides which node a Pod runs on.
Controller Manager: handles ReplicationController, ReplicaSet, etc.
etcd: persistent storage of cluster state.
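The scheduler's core decision can be sketched as a filter step over nodes. The real scheduler filters nodes ("predicates") and then scores the survivors ("priorities"); the toy version below, with made-up node data, keeps only a first-fit filter on CPU and memory requests:

```python
def schedule(pod_requests, nodes):
    """Return the first node whose free capacity fits the pod's requests.

    A sketch of the scheduler's filtering step only; the real scheduler
    also scores the surviving nodes before picking one.
    """
    for name, free in nodes.items():
        if all(free.get(res, 0) >= need for res, need in pod_requests.items()):
            return name
    return None  # no fit: the Pod stays Pending until capacity appears

nodes = {
    "node-1": {"cpu": 0.5, "memory_mi": 256},
    "node-2": {"cpu": 2.0, "memory_mi": 4096},
}
print(schedule({"cpu": 1.0, "memory_mi": 512}, nodes))  # -> node-2
```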
Node Components
Container runtime (Docker, rkt, …).
Kubelet: talks to the API server and manages Pods on the node.
Kube-proxy: provides service load‑balancing.
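Conceptually, a Service spreads connections across its Pod endpoints. The round-robin picker below is only an illustration of that idea — real kube-proxy programs iptables or IPVS rules in the kernel rather than running userspace code, and its default iptables mode picks endpoints randomly:

```python
import itertools

def endpoint_picker(endpoints):
    """Cycle through a Service's Pod endpoints, one per connection."""
    ring = itertools.cycle(endpoints)
    return lambda: next(ring)

pick = endpoint_picker(["10.0.0.4:8080", "10.0.0.7:8080"])
print([pick() for _ in range(4)])  # alternates between the two endpoints
```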
Pods
A Pod is the smallest deployable unit in K8s, grouping one or more containers that share the same network namespace and storage.
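One practical consequence of that shared network namespace: all containers in a Pod share a single IP, so their listening ports must not collide. A small sketch of that constraint, using hypothetical container specs:

```python
def find_port_collisions(containers):
    """Containers in a Pod share one IP, so duplicate ports clash."""
    seen, clashes = {}, []
    for c in containers:
        for port in c["ports"]:
            if port in seen:
                clashes.append((seen[port], c["name"], port))
            else:
                seen[port] = c["name"]
    return clashes

pod = [
    {"name": "app",     "ports": [8080]},
    {"name": "sidecar", "ports": [8080, 9090]},
]
print(find_port_collisions(pod))  # -> [('app', 'sidecar', 8080)]
```

The flip side of sharing is convenience: containers in the same Pod can reach each other on localhost.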
ReplicationController and ReplicaSet
ReplicationController (RC) ensures a desired number of Pod replicas. It defines a selector, replica count, and a Pod template. RC has been superseded by ReplicaSet, which adds richer label selectors.
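The RC/ReplicaSet control loop boils down to diffing the desired replica count against the Pods it observes. A minimal sketch (names and return values are illustrative, not controller-manager internals):

```python
def reconcile(desired, running):
    """Decide how many Pods to create, or which surplus Pods to delete."""
    if len(running) < desired:
        return ("create", desired - len(running))
    if len(running) > desired:
        return ("delete", running[desired:])  # surplus pods
    return ("noop", None)

print(reconcile(3, ["pod-a"]))                    # -> ('create', 2)
print(reconcile(2, ["pod-a", "pod-b", "pod-c"]))  # -> ('delete', ['pod-c'])
```

The controller runs this comparison every time the observed state changes, which is why a deleted Pod reappears moments later.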
StatefulSet
StatefulSet manages stateful Pods that require stable network IDs and persistent storage, unlike the stateless Pods managed by RC/ReplicaSet.
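The stable identity a StatefulSet guarantees follows fixed naming rules: each Pod gets an ordinal name (`web-0`, `web-1`, …) and, via a governing headless Service, a predictable DNS name. A sketch of those rules:

```python
def stateful_pod_identity(set_name, ordinal, service, namespace="default"):
    """Derive the stable Pod name and DNS name a StatefulSet assigns."""
    pod = f"{set_name}-{ordinal}"
    dns = f"{pod}.{service}.{namespace}.svc.cluster.local"
    return pod, dns

print(stateful_pod_identity("web", 0, "nginx"))
# -> ('web-0', 'web-0.nginx.default.svc.cluster.local')
```

Because the names survive rescheduling, peers in a database cluster can keep addressing each other even as Pods move between nodes.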
Deployment – Declarative Application Updates
Deployment abstracts away manual rolling updates. When you declare a new Pod template, K8s creates a new ReplicaSet, scales it up while scaling the old one down, and lets you roll back to a previous revision if the update misbehaves.
Rolling speed is controlled by maxUnavailable and maxSurge parameters.
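Those two parameters bound how far the rollout may swing from the desired replica count at any instant. A sketch of the bounds with absolute values (the real API also accepts percentages, which are omitted here):

```python
def rollout_bounds(desired, max_unavailable, max_surge):
    """Availability floor and total-Pod ceiling during a rolling update."""
    floor = desired - max_unavailable   # min Pods that must stay available
    ceiling = desired + max_surge       # max Pods (old + new) at once
    return floor, ceiling

print(rollout_bounds(10, 2, 3))  # -> (8, 13)
```

Setting maxUnavailable=0 forces a surge-only rollout (never below full capacity); setting maxSurge=0 forces a replace-in-place rollout (never above it).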
Automatic Scaling
Horizontal Pod Autoscaler (HPA) scales the number of Pods based on metrics such as CPU usage, originally pulling data from Heapster (since deprecated in favor of the metrics-server and the Metrics API). Vertical Pod Autoscaler (VPA) adjusts CPU and memory requests for individual Pods, though applying a new recommendation requires restarting the Pod.
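The HPA's core rule is documented as desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), clamped to the configured min/max. A direct sketch of that calculation:

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    """HPA scaling rule: ceil(current * observed / target), clamped."""
    raw = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, raw))

# 4 replicas at 90% CPU against a 60% target -> scale to 6
print(desired_replicas(4, current_metric=90, target_metric=60))  # -> 6
```

When the observed metric sits at the target, the ratio is 1 and the replica count holds steady, which is what makes the loop stable.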
Cluster Autoscaler adds or removes nodes when overall resource demand changes.
Declarative vs. Imperative APIs
K8s primarily uses declarative APIs: users state the desired state, and the system works toward it. Imperative commands require manual sequencing and are more error‑prone.
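That declarative model is a convergence loop: observe the actual state, diff it against the declared state, act, and repeat. A toy controller (illustrative only, not client-go) that nudges replica counts toward the declared values one step at a time:

```python
def converge(desired, actual, max_steps=10):
    """Drive actual replica counts toward the declared desired state."""
    for _ in range(max_steps):
        if actual == desired:
            return actual
        for name, want in desired.items():
            have = actual.get(name, 0)
            # One step per iteration, like a controller draining its work queue.
            actual[name] = have + (1 if want > have else -1 if want < have else 0)
    return actual

print(converge({"web": 3}, {"web": 0}))  # -> {'web': 3}
```

The user never issues "create three Pods"; they declare "three replicas should exist," and the loop closes the gap however the world drifts.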
Conclusion
When a user clicks “redeploy” in a platform like Aone, the API creates a Deployment, which creates a ReplicaSet, which creates Pods that are scheduled onto nodes and run via Docker. Understanding the components and their interactions demystifies the deployment process.
Alibaba Cloud Developer
Alibaba's official tech channel, featuring all of its technology innovations.