Kubernetes Overview, Architecture, and Hands‑On Deployment with Minikube
This article introduces Kubernetes fundamentals, explains its production‑grade nature, container concepts, orchestration features, core architecture, and provides step‑by‑step commands for installing Minikube, creating a cluster, deploying an Nginx application, exposing it as a service, scaling, updating, and deleting the deployment.
Preface
Kubernetes (abbreviated k8s) is a production‑grade container orchestration system. The name comes from the Greek word for "helmsman" or "pilot". The official website describes it as a "production‑grade container orchestration system".
Production‑grade container orchestration system
From this definition three key concepts can be extracted: production‑grade, containers, and orchestration system.
1. Production‑grade
Reasons why k8s is considered production‑grade:
k8s was open‑sourced by Google; its design draws on Borg, Google’s internal cluster management system, which has run production workloads reliably for many years.
k8s is the first graduated project of the Cloud Native Computing Foundation (CNCF).
2. Containers
Key characteristics of containers:
Portability – containers run on any host with a compatible container runtime, regardless of the underlying distribution.
Inclusiveness – they can package many types of software.
Standard format.
Co‑existence – multiple containers can run on the same host.
Isolation – each container’s software is isolated from others.
Most importantly: without containers, micro‑service architecture as practiced today would not be feasible.
Benefits of container‑based micro‑services include faster independent deployment and release, and the ability to customize isolated runtime environments for each module.
3. Orchestration System
An orchestration system efficiently manages containers across a fleet of hosts, covering:
Network and access management.
Tracking container status.
Scaling services up or down.
Load balancing.
Re‑allocation of containers when a host becomes unresponsive.
Service discovery.
Storage management for containers, etc.
Main Functions
Data Volumes
Containers in a pod can share data through volumes.
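As a minimal sketch (the pod, container, and volume names here are illustrative), two containers in one pod can share an `emptyDir` volume:

```shell
# Two containers in one pod mount the same emptyDir volume, so files
# written by one are visible to the other. Requires a running cluster.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo     # illustrative name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}               # scratch space that lives as long as the pod
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
EOF
```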
Application Health Checks
Health‑check policies can be set to detect blocked processes inside containers.
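A health check can be sketched with a liveness probe (values below are illustrative); if the probe keeps failing, the kubelet restarts the container:

```shell
# Pod with an HTTP liveness probe: the kubelet performs GET / on port 80
# and restarts the container after repeated failures.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo          # illustrative name
spec:
  containers:
  - name: web
    image: nginx
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5   # wait before the first probe
      periodSeconds: 10        # probe every 10 seconds
EOF
```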
Replica Management
Controllers maintain the desired number of pod replicas to ensure availability.
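For example, a Deployment (names illustrative) declares the desired replica count, and its controller keeps that many pods running, replacing any that fail:

```shell
# Deployment that keeps three nginx pods running at all times.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: replica-demo           # illustrative name
spec:
  replicas: 3                  # controller maintains three pod replicas
  selector:
    matchLabels:
      app: replica-demo
  template:
    metadata:
      labels:
        app: replica-demo
    spec:
      containers:
      - name: web
        image: nginx
EOF
```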
Horizontal Autoscaling
Pod replica counts can be automatically adjusted based on defined metrics.
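Autoscaling can be enabled with `kubectl autoscale` (deployment name and thresholds here are illustrative; the cluster needs a metrics source, e.g. Heapster in this era):

```shell
# Scale between 1 and 5 replicas, targeting 80% average CPU utilization.
kubectl autoscale deployment replica-demo --min=1 --max=5 --cpu-percent=80
```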
Service Discovery
Environment variables or DNS plugins allow containers to discover pod entry points.
Load Balancing
A set of pod replicas is grouped behind a service with a private virtual ClusterIP; requests to the ClusterIP are load‑balanced across the backend containers, so other pods can reach the service without knowing individual pod addresses.
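Such a service can be sketched as follows (service name and label selector are illustrative); the cluster assigns it a ClusterIP and balances traffic across the pods matching the selector:

```shell
# ClusterIP service fronting all pods labeled app=replica-demo.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web-svc                # illustrative name
spec:
  selector:
    app: replica-demo          # matches pods carrying this label
  ports:
  - port: 80                   # port exposed on the ClusterIP
    targetPort: 80             # container port receiving the traffic
EOF
```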
Rolling Updates
Updates are performed without downtime by updating one pod at a time.
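Such a rollout can be driven and observed from the command line (the deployment and container names below are illustrative):

```shell
# Trigger a rolling update by changing the pod template's image,
# watch its progress, and roll back if something goes wrong.
kubectl set image deployment/replica-demo web=nginx:1.10
kubectl rollout status deployment/replica-demo
kubectl rollout undo deployment/replica-demo
```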
Service Orchestration
Declarative files describe service deployments, making application rollout efficient.
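For example, with the desired state kept in a file (a hypothetical `app.yaml`), kubectl reconciles the cluster toward it:

```shell
kubectl apply -f app.yaml      # create or update resources to match the file
kubectl get -f app.yaml        # list the resources described in the file
kubectl delete -f app.yaml     # tear them all down again
```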
Resource Monitoring
Node components integrate cAdvisor for resource collection, Heapster aggregates data, stores it in InfluxDB, and visualizes it with Grafana.
Authentication and Authorization
RBAC mechanisms are supported.
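A minimal RBAC sketch (role, binding, and user names are hypothetical) grants read‑only access to pods in one namespace:

```shell
# Role allowing read access to pods, bound to a hypothetical user.
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: alice                  # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF
```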
Design Architecture
Functional Components
Kubernetes clusters consist of control‑plane (master) nodes and worker (node) nodes.
The master node manages the cluster, handles inter‑node communication, schedules tasks, and controls the lifecycle of containers, pods, namespaces, persistent volumes, etc.
Worker nodes provide compute resources for containers and pods; the kubelet on each node communicates with the master to manage container lifecycles.
Master Components
kube‑apiserver
The sole entry point for Kubernetes API operations, coordinating all components and providing HTTP APIs with authentication and authorization.
kube‑controller‑manager
Runs background control loops; each resource has a controller, and this manager maintains cluster state.
kube‑scheduler
Assigns pods to appropriate nodes based on scheduling policies.
Node Components
kubelet
The agent running on each node, managing the lifecycle of containers, volumes, and network settings.
kube‑proxy
Implements network proxying for pod/service traffic, providing internal service discovery and layer‑4 load balancing.
docker
The container runtime that actually runs containers.
etcd cluster
A distributed key‑value store that persists cluster state such as pods and services.
Layered Architecture
Core layer – provides the Kubernetes API.
Application layer – handles stateless application deployment and routing.
Management layer – offers metrics, automation, and RBAC.
Interface layer – includes kubectl and client SDKs.
Ecosystem layer – covers external logging/monitoring and internal image registries.
Installation
The examples below use Minikube, which runs a single‑node Kubernetes cluster locally, together with kubectl, the Kubernetes command‑line client.
Creating a Cluster
Check the Minikube version:
$ minikube version
minikube version: v0.25.0
Start Minikube:
$ minikube start
Starting local Kubernetes v1.9.0 cluster...
... (output omitted for brevity) ...
Loading cached images from config file.
After the start completes, a single‑node Kubernetes cluster is running.
Check cluster version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", ...}
Server Version: version.Info{GitVersion:"v1.9.0", ...}
Get detailed cluster info:
$ kubectl cluster-info
Kubernetes master is running at https://172.17.0.77:8443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
List nodes:
$ kubectl get node
NAME      STATUS    ROLES     AGE       VERSION
host01    Ready               20m       v1.9.0
In this environment the same host acts as both master and node.
Deploying an Application
Deploy an Nginx example:
$ kubectl run first-app --image=nginx --port=80
deployment "first-app" created
Check the deployment:
$ kubectl get deployment
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
first-app   1         1         1            1           1m
List pods:
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
first-app-6db44b474-dbbtp   1/1       Running   0          4m
Describe the pod for detailed information:
$ kubectl describe pod first-app-6db44b474-dbbtp
Name: first-app-6db44b474-dbbtp
Namespace: default
Node: host01/172.17.0.77
... (additional details omitted) ...
Exposing the Service
Create a NodePort service to expose the deployment:
$ kubectl expose deployment/first-app --type="NodePort" --port=80
service "first-app" exposed
Verify the service:
$ kubectl get svc first-app
NAME        TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
first-app   NodePort   10.102.0.12   <none>        80:30491/TCP   1m
Access the service via the host IP and the allocated NodePort (30491):
$ curl 172.17.0.77:30491
... (Nginx welcome page) ...
Scaling the Application
Scale the deployment to three replicas:
$ kubectl scale deployment/first-app --replicas=3
deployment "first-app" scaled
List pods again to confirm three instances are running.
Updating the Application
Check the current Nginx version by requesting a non‑existent page:
$ curl 172.17.0.77:30491/abc
... 404 Not Found ... nginx/1.13.9 ...
Update the deployment to use an older Nginx tag (1.10):
$ kubectl set image deployment/first-app first-app=nginx:1.10
deployment "first-app" image updated
Verify the new version:
$ curl 172.17.0.77:30491/abc
... 404 Not Found ... nginx/1.10.3 ...
Deleting the Application
Delete the deployment, which removes all associated pods:
$ kubectl delete deployment/first-app
deployment "first-app" deleted
Confirm no pods remain:
$ kubectl get pod
No resources found.
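Note that deleting the deployment does not remove the NodePort service created earlier with `kubectl expose`; it can be cleaned up separately:

```shell
# The service is a separate object from the deployment; delete it too
# to fully clean up.
kubectl delete service first-app
```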