
Kubernetes Architecture Overview: Components, Service Discovery, Networking, and IP Addressing

This article provides a comprehensive overview of Kubernetes architecture, covering its core components, multi‑center deployment, service discovery mechanisms, pod resource sharing, common CNI plugins, load‑balancing layers, isolation dimensions, network model principles, and detailed IP address classifications.


Kubernetes Overview

Kubernetes (k8s) is an open‑source platform for automating container operations, including deployment, scaling, and scheduling across a cluster of nodes.

Core Components

kubectl: command‑line client used as the entry point for cluster operations.

kube‑apiserver: exposes the REST API that serves as the entry point to the control plane.

kube‑controller‑manager: runs background control loops such as node status monitoring, replica counting, and service‑to‑pod association.

kube‑scheduler: assigns newly created pods to suitable nodes based on resource availability.

etcd: a highly available, strongly consistent key‑value store used for configuration sharing and service discovery.

kube‑proxy: runs on each node and implements Service‑level network proxying by watching Service and Endpoint information through the apiserver.

kubelet: node‑level agent that receives pod assignments, manages container lifecycles, and reports status to the apiserver.

DNS (optional): provides DNS records for each Service so pods can resolve services by name.

Architecture diagram: (original diagram omitted for brevity)

Two‑Location Three‑Center Deployment

The model comprises a local production center, a local disaster‑recovery center, and a remote disaster‑recovery center; the cross‑center data‑consistency challenge is addressed by using etcd as a highly available store.

etcd Features

Simple: HTTP+JSON API, usable with nothing more than curl.

Secure: optional SSL client authentication.

Fast: supports up to 1,000 writes per second per instance.

Reliable: uses the Raft consensus algorithm.

Service Discovery

Kubernetes offers two native service‑discovery methods:

Environment variables: kubelet injects the IP and port of every existing Service into a pod's environment at creation time. Because a Service must exist before the pod does, this limits its practicality.

DNS: deploying the KubeDNS add‑on (succeeded by CoreDNS) creates DNS records for Services, enabling standard name lookups. DNS queries typically travel over UDP port 53, falling back to TCP for large responses; either way, the result of discovery is a layer‑4 endpoint (IP + port).
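Environment‑variable discovery follows a fixed naming convention: for a Service named `redis-master`, kubelet injects `REDIS_MASTER_SERVICE_HOST` and `REDIS_MASTER_SERVICE_PORT`. A minimal sketch of reading those variables from inside a pod (the `redis-master` Service name and its address are example values, not part of any real cluster):

```python
import os

def discover_service(name):
    """Resolve a Service's ClusterIP and port from the environment
    variables kubelet injects at pod creation time, following the
    {NAME}_SERVICE_HOST / {NAME}_SERVICE_PORT naming convention."""
    prefix = name.upper().replace("-", "_")
    host = os.environ.get(prefix + "_SERVICE_HOST")
    port = os.environ.get(prefix + "_SERVICE_PORT")
    if host is None or port is None:
        return None  # Service did not exist when this pod started
    return host, int(port)
```

This also illustrates the ordering limitation mentioned above: a Service created after the pod starts is simply absent from the environment, and `discover_service` returns `None`.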

Pod Shared Resources

Containers within the same pod share five resources, enabling efficient intra‑pod communication:

PID namespace – processes in different containers can see each other's process IDs (in Kubernetes this requires enabling shared process namespaces via the shareProcessNamespace pod setting).

Network namespace – containers share the same IP address and port space.

IPC namespace – use SystemV IPC or POSIX message queues.

UTS namespace – share a common hostname.

Volumes – access storage volumes defined at the pod level.

Pod lifecycle is managed by a Replication Controller (today more commonly a Deployment backed by a ReplicaSet), defined via a pod template and scheduled onto a node.

Common CNI Plugins

The Container Network Interface (CNI) provides a standard for configuring container networking. Six widely used plugins are illustrated in the original diagram (e.g., bridge, host‑local, macvlan, etc.).

Load Balancing Layers

Kubernetes supports load balancing at multiple OSI layers:

Layer 2 – MAC‑address based balancing.

Layer 3 – IP‑address based balancing.

Layer 4 – IP + port based balancing (e.g., NodePort).

Layer 7 – Application‑level balancing using Ingress (HTTP/HTTPS, URL routing).

Layer‑4 service exposure via NodePort has drawbacks: every Service consumes a port from the node port range, and firewall rules proliferate. An external load balancer (e.g., Nginx) combined with Ingress provides a more flexible layer‑7 solution.
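The layer‑4 vs. layer‑7 distinction above can be sketched in a few lines: a layer‑4 proxy only sees (IP, port), while an Ingress‑style layer‑7 router also inspects the HTTP path. A toy illustration of longest‑prefix path routing (the rules and backend Service names are invented for the example, not a real Ingress API):

```python
# Illustrative layer-7 routing table in the spirit of an Ingress resource:
# the longest matching path prefix selects a backend Service.
INGRESS_RULES = {
    "/api": "api-service:8080",
    "/static": "static-service:80",
    "/": "web-service:80",  # catch-all backend
}

def route(path):
    """Pick the backend whose path prefix matches longest (a layer-7 decision;
    a layer-4 balancer could not make this choice, since it never sees the URL)."""
    best = max((p for p in INGRESS_RULES if path.startswith(p)), key=len)
    return INGRESS_RULES[best]
```

A real Ingress controller adds host matching, TLS termination, and health checking on top of this basic prefix dispatch.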

Isolation Dimensions

Kubernetes defines eight isolation dimensions (e.g., namespace, node, pod, container, network, storage, etc.) that influence scheduling strategies.

Network Model Principles

The network model follows the IP‑per‑Pod principle: each pod receives a unique IP address, and all pods are assumed to be in a flat, directly reachable network space.

Pod IPs are allocated from the cluster's pod CIDR (e.g., via the docker0 bridge).

Containers within a pod share the network stack, allowing localhost communication.

No NAT is required for intra‑pod traffic.
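Under IP‑per‑Pod, each node is typically handed a slice of the cluster pod CIDR, and its pods draw addresses from that slice. A sketch of the allocation using Python's standard ipaddress module (the 10.244.0.0/16 cluster CIDR and /24 per‑node size are assumptions mirroring common defaults, e.g. Flannel's):

```python
import ipaddress

# Assumed cluster-wide pod CIDR; every pod IP in the cluster comes from here.
CLUSTER_CIDR = ipaddress.ip_network("10.244.0.0/16")

def node_pod_cidrs(n_nodes, prefix=24):
    """Carve one non-overlapping /24 pod subnet per node out of the cluster CIDR."""
    subnets = CLUSTER_CIDR.subnets(new_prefix=prefix)
    return [next(subnets) for _ in range(n_nodes)]

# Each pod then takes a host address from its node's subnet. Because every
# pod IP is unique and routable inside the cluster, no NAT is needed.
```

With a /16 cluster CIDR split into /24 slices, the cluster supports up to 256 nodes with about 254 usable pod addresses each.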

IP Address Classifications

Traditional classful addressing (A‑E) and special‑purpose ranges are described below.

Class A: 1.0.0.0–126.255.255.255, default mask /8 (255.0.0.0)
Class B: 128.0.0.0–191.255.255.255, default mask /16 (255.255.0.0)
Class C: 192.0.0.0–223.255.255.255, default mask /24 (255.255.255.0)
Class D: 224.0.0.0–239.255.255.255, used for multicast
Class E: 240.0.0.0–255.255.255.255, reserved for research
0.0.0.0 – default route (destination unknown)
127.0.0.1 – loopback address
224.0.0.1 – multicast address (used by IRDP)
169.254.x.x – APIPA (link‑local) address assigned when DHCP fails
10.x.x.x, 172.16.x.x–172.31.x.x, 192.168.x.x – private ranges for internal networks
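The special‑purpose ranges above can be checked programmatically with Python's standard ipaddress module; a small sketch (the category labels are this example's own naming):

```python
import ipaddress

def classify(addr):
    """Return the special-purpose category of an IPv4 address, if any.
    Order matters: loopback and link-local addresses also test as
    'private', so the more specific checks come first."""
    ip = ipaddress.ip_address(addr)
    if ip.is_loopback:
        return "loopback"            # 127.0.0.0/8
    if ip.is_link_local:
        return "link-local (APIPA)"  # 169.254.0.0/16
    if ip.is_multicast:
        return "multicast"           # 224.0.0.0/4 (class D)
    if ip.is_private:
        return "private"             # 10/8, 172.16/12, 192.168/16
    return "public"
```

This is the same logic an operator applies when choosing a non‑overlapping pod CIDR: cluster networks are carved from the private ranges so they never collide with public addresses.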
Tags: Kubernetes, service discovery, networking, CNI, container orchestration, IP addressing
Written by

Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.
