
Master Kubernetes: Core Concepts, Architecture, and Advanced Networking Explained

This comprehensive guide demystifies Kubernetes by covering its core principles, component architecture, service discovery mechanisms, pod resource sharing, CNI plugins, multi‑layer load balancing, and IP addressing models, providing engineers with the knowledge needed to design and operate robust cloud‑native clusters.

Raymond Ops

One Goal: Container Operations

Kubernetes (K8s) is an open‑source platform for automating container operations such as deployment, scheduling, and scaling across node clusters.

Key Functions

Automated container deployment and replication.

Real‑time elastic scaling of container workloads.

Container grouping with built‑in load balancing.

Scheduling: deciding on which machine a container runs.

Core Components

kubectl : command‑line client that serves as the entry point for all operations.

kube‑apiserver : exposes a REST API for controlling the entire system.

kube‑controller‑manager : runs background tasks such as node health, pod counts, and service‑pod associations.

kube‑scheduler : assigns newly created pods to appropriate nodes based on resource availability.

etcd : a highly available, strongly consistent key‑value store used for configuration sharing and service discovery.

kube‑proxy : runs on each node, proxying and load‑balancing Service traffic to pods; it watches the API server for Service and Endpoint changes.

DNS : optional DNS service that creates records for each Service, enabling pods to resolve services by name.

K8s Architecture Diagram

Kubernetes architecture diagram

Two‑Site Three‑Center Model

This model consists of a local production center, a local disaster‑recovery center, and a remote disaster‑recovery center, addressing data‑consistency challenges.

Kubernetes uses the etcd component as a highly available, strongly consistent store for service discovery and configuration sharing. Inspired by ZooKeeper and Doozer, etcd offers four notable characteristics:

Simple: HTTP + JSON API usable via curl.

Secure: optional SSL client authentication.

Fast: each instance supports ~1,000 writes per second.

Trustworthy: Raft‑based distributed consensus.
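That "trustworthy" property rests on Raft's majority rule: a write commits only once more than half of the cluster members acknowledge it. A minimal sketch of the quorum arithmetic (not etcd's actual implementation):

```python
def quorum(members: int) -> int:
    """Smallest number of acknowledgements that forms a majority."""
    return members // 2 + 1

def tolerated_failures(members: int) -> int:
    """How many members can fail while the cluster still commits writes."""
    return members - quorum(members)

# A 3-node etcd cluster commits with 2 acks and survives 1 failure;
# a 5-node cluster commits with 3 acks and survives 2 failures.
for n in (1, 3, 5):
    print(n, quorum(n), tolerated_failures(n))
```

This is also why etcd clusters are deployed with an odd member count: going from 3 to 4 members raises the quorum without increasing the number of tolerated failures.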

Four‑Layer Service Discovery

Kubernetes provides two service‑discovery methods:

Environment Variables : kubelet injects environment variables for all Services into a pod at creation time; however, a Service must be created before the pod, limiting practical use.

DNS : a cluster add‑on (kube-dns, today typically CoreDNS) creates DNS records for Services, allowing pods to resolve them by name.

Both methods rely on TCP/UDP transport and operate at the fourth OSI layer.
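For the environment-variable method, kubelet derives the variable names from the Service name: upper-cased, with dashes turned into underscores. A small sketch of that naming rule, assuming a hypothetical Service named redis-master with cluster IP 10.0.0.11 on port 6379:

```python
def service_env_vars(name: str, cluster_ip: str, port: int) -> dict:
    """Reproduce the {SVCNAME}_SERVICE_HOST/_PORT variables kubelet injects."""
    prefix = name.upper().replace("-", "_")
    return {
        f"{prefix}_SERVICE_HOST": cluster_ip,
        f"{prefix}_SERVICE_PORT": str(port),
    }

env = service_env_vars("redis-master", "10.0.0.11", 6379)
print(env["REDIS_MASTER_SERVICE_HOST"])  # 10.0.0.11
print(env["REDIS_MASTER_SERVICE_PORT"])  # 6379
```

Because these variables are rendered once at container start, a pod created before its Service simply never sees them, which is the ordering limitation noted above.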

Five Shared Resources in a Pod

A pod is the smallest deployable unit in K8s, containing one or more tightly coupled containers that share the following resources:

PID namespace – containers can see each other’s process IDs (in current Kubernetes this requires enabling process‑namespace sharing on the pod).

Network namespace – containers share the same IP address and port range.

IPC namespace – containers can communicate via SystemV IPC or POSIX message queues.

UTS namespace – containers share the same hostname.

Volumes – containers can access volumes defined at the pod level.

Pod lifecycle is managed by a replication controller (in modern clusters, typically a Deployment backed by a ReplicaSet), which defines a pod template, keeps the desired number of replicas scheduled onto nodes, and replaces pods whose containers terminate.
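Volume sharing is declared at the pod level and referenced per container. A minimal two-container pod spec, sketched here as a Python dict (the images, names, and mount paths are illustrative, not from the article):

```python
# Hypothetical pod: a writer sidecar and a web server share one emptyDir volume.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "shared-volume-demo"},
    "spec": {
        "volumes": [{"name": "shared-data", "emptyDir": {}}],
        "containers": [
            {
                "name": "writer",
                "image": "busybox",
                "volumeMounts": [{"name": "shared-data", "mountPath": "/data"}],
            },
            {
                "name": "web",
                "image": "nginx",
                "volumeMounts": [{"name": "shared-data", "mountPath": "/usr/share/nginx/html"}],
            },
        ],
    },
}

# Every container mounts the pod-level volume by the same name.
mounted = {m["name"] for c in pod["spec"]["containers"] for m in c["volumeMounts"]}
assert mounted == {v["name"] for v in pod["spec"]["volumes"]}
```

Files the writer drops into /data appear under the web container's document root, because both mounts resolve to the same pod-level volume.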

Six Common CNI Plugins

CNI (Container Network Interface) is a specification, with accompanying libraries, for configuring network interfaces in Linux containers. Six widely used plugins are illustrated in the diagram below.

CNI plugins diagram

Seven‑Layer Load Balancing

Load balancing in data centers involves various network devices:

Access switches (Top‑of‑Rack) connecting 40‑48 servers per switch.

Core switches handling intra‑data‑center traffic.

MGW/LVS for load balancing and NAT for outbound traffic.

External core routers connecting to ISP or BGP networks.

Load‑balancing layers include:

Layer 2 – MAC‑based balancing.

Layer 3 – IP‑based balancing.

Layer 4 – IP + port balancing.

Layer 7 – URL or application‑layer balancing, typically implemented with Ingress controllers in Kubernetes.
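The practical difference between layer 4 and layer 7 is what the balancer inspects: layer 4 sees only addresses and ports, while layer 7 parses the request itself. A toy sketch of both decisions (backend names are made up; real balancers use consistent hashing and connection tracking, not Python's salted hash):

```python
# Layer 4: choose a backend from (client_ip, client_port) alone --
# the balancer never looks inside the HTTP request.
def l4_route(client_ip: str, client_port: int, backends: list) -> str:
    return backends[hash((client_ip, client_port)) % len(backends)]

# Layer 7: route on the URL path, the way a Kubernetes Ingress rule does.
def l7_route(path: str, rules: dict, default: str) -> str:
    for prefix, backend in rules.items():
        if path.startswith(prefix):
            return backend
    return default

rules = {"/api": "api-svc", "/static": "cdn-svc"}
print(l7_route("/api/v1/users", rules, "web-svc"))  # api-svc
print(l7_route("/index.html", rules, "web-svc"))    # web-svc
```

In Kubernetes terms, kube-proxy gives you the layer-4 behavior for Services, while an Ingress controller adds the layer-7 path and host rules on top.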

Eight Isolation Dimensions

Isolation dimensions diagram

Kubernetes scheduling must respect these isolation dimensions when placing pods.

Nine Network Model Principles

K8s networking must satisfy four basic principles, three network‑requirement principles, one architectural principle, and one IP principle. Each pod receives a unique IP address, assuming a flat, directly reachable network space.

The IP‑per‑Pod model means all containers in a pod share the same network stack, similar to processes on a single VM.

Ten IP Address Classes

Beyond the traditional A‑E classes, there are special‑purpose ranges:

A class: 1.0.0.0–126.255.255.255 (default mask /8)

B class: 128.0.0.0–191.255.255.255 (default mask /16)

C class: 192.0.0.0–223.255.255.255 (default mask /24)

D class: 224.0.0.0–239.255.255.255 (multicast)

E class: 240.0.0.0–255.255.255.255 (research)

0.0.0.0 – default route (unspecified)

127.0.0.1 – loopback address

224.0.0.1 – multicast example

169.254.x.x – link-local address (APIPA)

10.x.x.x, 172.16–31.x.x, 192.168.x.x – private address space
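The special-purpose ranges above can be checked with the standard-library ipaddress module; only the first-octet class test has to be done by hand, since classful addressing predates the library. A small sketch:

```python
import ipaddress

def classify(addr: str) -> str:
    """First-octet class, after screening the special-purpose ranges."""
    ip = ipaddress.ip_address(addr)
    if ip.is_loopback:
        return "loopback"
    if ip.is_link_local:          # check before is_private: 169.254/16 is both
        return "link-local (APIPA)"
    if ip.is_private:
        return "private"
    first = int(addr.split(".")[0])
    if first < 128:
        return "class A"
    if first < 192:
        return "class B"
    if first < 224:
        return "class C"
    if first < 240:
        return "class D (multicast)"
    return "class E (research)"

print(classify("8.8.8.8"))        # class A
print(classify("192.168.1.5"))    # private
print(classify("169.254.10.1"))   # link-local (APIPA)
print(classify("224.0.0.1"))      # class D (multicast)
```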
Written by

Raymond Ops

Linux ops automation, cloud-native, Kubernetes, SRE, DevOps, Python, Golang and related tech discussions.
