Mastering Kubernetes: From Container Operations to Advanced Networking
This article provides a comprehensive overview of Kubernetes, covering its core container‑operation capabilities, component architecture, two‑site three‑center design, multi‑layer service discovery, pod shared resources, common CNI plugins, layered load balancing, isolation dimensions, network model principles, and IP address classifications.
Container Operations
Kubernetes (K8s) is an open‑source platform for automated container operations, including deployment, scheduling and cluster‑wide scaling.
Specific functions:
Automated container deployment and replication.
Real‑time elastic scaling of container workloads.
Container orchestration into groups with built‑in load balancing.
Scheduling: deciding on which node a container runs.
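The scheduling decision above can be sketched as a simple scoring loop. This is only an illustrative model, not the real kube-scheduler (which runs configurable filter and score plugins); the node names and resource figures are made up:

```python
# Illustrative sketch of scheduling: pick the node with the most free
# CPU that can still fit the pod's request. NOT the real kube-scheduler
# algorithm; names and numbers are hypothetical.

def schedule(pod_cpu_request, nodes):
    """Return the name of the node with the most spare CPU, or None."""
    best_name, best_free = None, -1
    for name, (capacity, used) in nodes.items():
        free = capacity - used
        if free >= pod_cpu_request and free > best_free:
            best_name, best_free = name, free
    return best_name

nodes = {
    "node-a": (4000, 3500),   # (CPU capacity, CPU used) in millicores
    "node-b": (4000, 1000),
    "node-c": (2000, 1900),
}
print(schedule(500, nodes))   # node-b has the most headroom
```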
Components:
kubectl : client command‑line tool, the entry point for operating the system.
kube-apiserver : REST API server that provides the control plane interface.
kube-controller-manager : runs background tasks such as node status, pod counts, and service‑pod associations.
kube-scheduler : assigns newly created pods to suitable nodes based on resource availability.
etcd : highly available, strongly consistent key‑value store used for configuration sharing and service discovery.
kube-proxy : runs on each node and implements the Service abstraction by programming the node's networking (iptables or IPVS rules) based on Service and Endpoint information watched from the API server.
kubelet : node‑level agent that receives pod assignments, manages containers, and reports status to the apiserver.
DNS (optional) : provides DNS records for each Service so pods can resolve services by name.
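kube-proxy's core job of mapping a Service to its backend pods can be sketched as a round-robin chooser. This is a toy model only, since the real kube-proxy programs iptables or IPVS rules rather than proxying in user space; the pod addresses are made up:

```python
import itertools

# Toy model of kube-proxy's job: a Service fronts several pod
# endpoints, and each new connection goes to the next endpoint in turn.
# The real kube-proxy implements this with iptables/IPVS rules.

class ServiceProxy:
    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)

    def pick_backend(self):
        return next(self._cycle)

proxy = ServiceProxy(["10.244.1.5:8080", "10.244.2.7:8080"])
print(proxy.pick_backend())  # 10.244.1.5:8080
print(proxy.pick_backend())  # 10.244.2.7:8080
print(proxy.pick_backend())  # back to 10.244.1.5:8080
```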
Two‑Site Three‑Center Architecture
The architecture consists of a local production center, a local disaster‑recovery center, and a remote disaster‑recovery center, addressing data‑consistency challenges.
K8s uses etcd as its highly available, strongly consistent store for configuration sharing and service discovery. Inspired by ZooKeeper and Doozer, etcd offers simplicity, security (optional SSL client-certificate authentication), speed (≈1 000 writes per second per instance), and reliability (Raft-based consensus).
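etcd exposes its consistency guarantees to clients through primitives such as compare-and-swap transactions. The sketch below mimics only the client-visible semantics in memory; real etcd replicates every write through Raft before acknowledging it, which this toy omits entirely:

```python
# In-memory mimic of an etcd-style compare-and-swap primitive.
# Real etcd replicates writes via Raft; this only shows the semantics.

class TinyKV:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

    def compare_and_swap(self, key, expected, new_value):
        """Write new_value only if the current value matches expected."""
        if self._data.get(key) == expected:
            self._data[key] = new_value
            return True
        return False

kv = TinyKV()
kv.put("/registry/services/web", "10.0.0.1")
ok = kv.compare_and_swap("/registry/services/web", "10.0.0.1", "10.0.0.2")
print(ok, kv.get("/registry/services/web"))  # True 10.0.0.2
```

Two controllers racing to update the same key will see exactly one compare-and-swap succeed, which is how consistent coordination is built on top of the store.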
Four‑Layer Service Discovery
K8s provides two service‑discovery mechanisms:
Environment Variables : when a pod is created, kubelet injects environment variables for every active Service in the pod's namespace. This method requires the service to be created before the pod, limiting its practicality.
DNS : a cluster add‑on (e.g., KubeDNS) creates DNS records for services, enabling name‑based discovery.
The "four-layer" in the name refers to the transport layer (OSI layer 4): a client ultimately connects to a resolved IP and port over TCP or UDP. DNS queries themselves are typically carried over UDP port 53 (falling back to TCP for large responses), while environment-variable discovery involves no network lookup at all, since the values are injected once at pod creation.
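The two discovery conventions can be made concrete with small helpers. The `{NAME}_SERVICE_HOST`/`{NAME}_SERVICE_PORT` env-var naming and the `<service>.<namespace>.svc.<cluster-domain>` DNS form are standard Kubernetes conventions; the service and namespace values below are made up:

```python
# Helpers showing the two discovery conventions described above.

def env_var_names(service_name):
    """Kubernetes upper-cases the service name and replaces '-' with '_'."""
    base = service_name.upper().replace("-", "_")
    return f"{base}_SERVICE_HOST", f"{base}_SERVICE_PORT"

def dns_name(service, namespace, cluster_domain="cluster.local"):
    """Fully qualified cluster DNS name for a Service."""
    return f"{service}.{namespace}.svc.{cluster_domain}"

print(env_var_names("redis-master"))
# ('REDIS_MASTER_SERVICE_HOST', 'REDIS_MASTER_SERVICE_PORT')
print(dns_name("web", "prod"))  # web.prod.svc.cluster.local
```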
Five Pod Shared Resources
A pod is the basic unit in K8s and can contain one or more tightly coupled containers. Containers within the same pod share the following resources:
PID namespace – containers can see one another's process IDs (in current Kubernetes this sharing is opt-in via the pod's shareProcessNamespace setting).
Network namespace – containers share the same IP address and port range.
IPC namespace – containers can communicate via SystemV IPC or POSIX message queues.
UTS namespace – containers share a hostname.
Volumes – containers can access volumes defined at the pod level.
Pod lifecycle is managed by a controller such as a Replication Controller (in modern clusters usually a Deployment/ReplicaSet); each pod receives its own IP address, and its hostname defaults to the pod name.
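Because containers in a pod share one network namespace, they share a single IP and port space, so two containers cannot bind the same port. A quick check of that constraint, with hypothetical container names and ports:

```python
from collections import Counter

# Containers in one pod share a single IP and port range, so any port
# claimed by two containers is a conflict. Names/ports are illustrative.

def port_conflicts(containers):
    """containers: {name: [ports]} -> sorted list of ports claimed twice."""
    counts = Counter(p for ports in containers.values() for p in ports)
    return sorted(p for p, n in counts.items() if n > 1)

pod = {"app": [8080], "sidecar-proxy": [8080, 9901]}
print(port_conflicts(pod))  # [8080] -- both containers want 8080
```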
Six Common CNI Plugins
CNI (Container Network Interface) defines a standard for configuring container networking, and a number of plugins implement it; widely used examples include Flannel, Calico, Weave Net, and Cilium.
Seven‑Layer Load Balancing
Load balancing in a data center (IDC) involves multiple network devices:
Top‑of‑Rack (TOR) switches – connect servers to the network, typically 40‑48 servers per switch with a /24 subnet.
Core switches – forward traffic between TOR switches and across data centers.
MGW/NAT – MGW (LVS) provides load balancing; NAT translates internal to external addresses.
External core routers – connect the data center to the Internet via static or BGP links.
Load‑balancing layers:
Layer 2 – MAC‑based balancing.
Layer 3 – IP‑based balancing.
Layer 4 – IP + port balancing.
Layer 7 – Application‑level balancing based on URLs and other HTTP attributes.
K8s can expose services via NodePort, which opens the same static port (by default in the 30000‑32767 range) on every node and forwards traffic to the Service's pods. This approach has drawbacks: host ports are a scarce resource, and firewall rules become hard to manage at scale. An external load balancer (e.g., Nginx) or an Ingress controller provides a more flexible, layer‑7 solution.
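The practical difference between layer-4 and layer-7 balancing above can be sketched in a few lines. The backend addresses and pool names are made up for illustration:

```python
import hashlib

# Layer 4 sees only addresses and ports, so it can at best hash the
# connection tuple to pick a backend; layer 7 sees the HTTP request,
# so it can route on the URL path. Backends/pools are hypothetical.

BACKENDS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def l4_pick(src_ip, src_port, dst_ip, dst_port):
    """Hash the connection 4-tuple onto a backend (layer-4 style)."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return BACKENDS[digest % len(BACKENDS)]

def l7_pick(path):
    """Route by URL path (layer-7 style): assets vs. API traffic."""
    return "static-pool" if path.startswith("/assets/") else "api-pool"

print(l7_pick("/assets/logo.png"))  # static-pool
print(l7_pick("/v1/orders"))        # api-pool
```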
Eight Isolation Dimensions
K8s scheduling must consider isolation from coarse‑grained to fine‑grained dimensions, ensuring appropriate placement of workloads.
Nine Network Model Principles
K8s networking follows four basic principles, three network‑requirement principles, one architectural principle, and one IP principle. Key points include:
Each pod receives a unique IP address and can communicate directly with any other pod, regardless of node location.
The IP‑per‑Pod model treats a pod like an independent VM or physical machine.
Containers within the same pod share the network stack and can reach each other via localhost.
Ten IP Address Classes
Class A: 1.0.0.0‑126.255.255.255, default mask /8 (255.0.0.0).
Class B: 128.0.0.0‑191.255.255.255, default mask /16 (255.255.0.0).
Class C: 192.0.0.0‑223.255.255.255, default mask /24 (255.255.255.0).
Class D: 224.0.0.0‑239.255.255.255, used for multicast.
Class E: 240.0.0.0‑255.255.255.255, reserved for research (255.255.255.255 is the limited-broadcast address).
0.0.0.0 – default route; represents an unknown host or network.
127.0.0.1 – loopback address (the 127.0.0.0/8 block is reserved for loopback).
169.254.x.x – link‑local addresses, self‑assigned when DHCP fails.
10.x.x.x, 172.16.x.x‑172.31.x.x, 192.168.x.x – private address spaces.
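The classification rules above can be written out directly. Classful addressing has long been superseded by CIDR, so this is purely illustrative:

```python
# Classifier mirroring the historical address classes and special
# ranges listed above. Classful addressing is obsolete (replaced by
# CIDR); this is for illustration only.

def classify(ip):
    a, b, *_ = (int(p) for p in ip.split("."))
    if a == 0:
        return "default route"
    if a == 127:
        return "loopback"
    if a == 169 and b == 254:
        return "link-local"
    if a == 10 or (a == 172 and 16 <= b <= 31) or (a == 192 and b == 168):
        return "private"
    if 1 <= a <= 126:
        return "class A"
    if 128 <= a <= 191:
        return "class B"
    if 192 <= a <= 223:
        return "class C"
    if 224 <= a <= 239:
        return "class D (multicast)"
    return "class E (reserved)"

print(classify("8.8.8.8"))       # class A
print(classify("192.168.1.10"))  # private
print(classify("224.0.0.5"))     # class D (multicast)
```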
Efficient Ops
This public account is maintained by Xiaotianguo and friends and regularly publishes widely read original technical articles. We focus on the transformation of operations work and aim to accompany you throughout your operations career.