
How to Build Your Own Kubernetes‑Style Container Orchestration System

This article walks through the evolution from a single‑machine Java monolith to a distributed, container‑based platform, detailing master‑worker roles, core Kubernetes‑like components, networking, scheduling, and plug‑ins for a complete cloud‑native orchestration solution.


1. The Story Begins

With rising consumer purchasing power, users own more electronic fast‑moving consumer goods. Xiao Li predicts rapid expansion of the second‑hand electronics market in five years and decides to launch a platform called “XX”.

2. The Story Develops

Initially the “Qiyu” system was an all‑in‑one Java application on a single physical server. As traffic grew the server was upgraded from 64C‑256G to 160C‑1920G, but eventually even that could not keep up, prompting a service‑oriented, distributed refactor using middleware such as HSF, TDDL, Tair, Diamond and MetaQ.

After splitting the monolith into many small services, the number of managed servers increased, each with different hardware and OS versions, leading to operational problems.

Virtual machines masked hardware differences but introduced performance overhead.

Container technology (e.g., Docker/Podman) provides the same isolation without the overhead and simplifies CI/CD delivery.

As the number of containers grew to thousands, a scheduler and network management solution became necessary.
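At its core, the scheduler the team needed solves a placement problem: given a container's resource request, pick a worker with enough free capacity. The sketch below illustrates one common strategy ("least allocated": prefer the node with the most capacity remaining); the `Worker` class and `schedule` function are illustrative names, not part of any real orchestration API.

```python
# Toy scheduler sketch. Worker and schedule are hypothetical names
# for illustration; real schedulers filter and score nodes in stages.
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    free_cpu: float   # cores
    free_mem: float   # GiB

def schedule(workers, cpu_req, mem_req):
    """Return the name of the worker to place the container on, or None."""
    # Filter: keep only workers that can fit the request.
    candidates = [w for w in workers
                  if w.free_cpu >= cpu_req and w.free_mem >= mem_req]
    if not candidates:
        return None
    # Score: "least allocated" prefers the node with most free capacity.
    best = max(candidates, key=lambda w: w.free_cpu + w.free_mem)
    best.free_cpu -= cpu_req
    best.free_mem -= mem_req
    return best.name
```

Real schedulers add many more scoring criteria (affinity, spreading, taints), but the filter-then-score shape stays the same.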

The team decided to build a container orchestration system.

3. Container Orchestration System

The system distinguishes master nodes (running the orchestration components) and worker nodes (running business containers).

The master exposes management APIs via kube-apiserver. Two clients interact with it: kubectl for administrators and kubelet on each worker.
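The key design point is that the API server is the single front door to cluster state: both the admin tool and the node agent read and write objects only through it. A minimal in-memory sketch of that role, with an illustrative `ApiServer` class that is not any real API, might look like this:

```python
# Sketch of the API-server role: one component owns all reads/writes of
# cluster state. ApiServer and its methods are hypothetical names;
# a real server persists to etcd and serves HTTPS endpoints.
class ApiServer:
    def __init__(self):
        self._objects = {}  # (kind, name) -> spec

    def create(self, kind, name, spec):
        """Store a desired-state object (what kubectl submits)."""
        self._objects[(kind, name)] = spec
        return spec

    def get(self, kind, name):
        """Fetch one object (what a kubelet asks for)."""
        return self._objects.get((kind, name))

    def list(self, kind):
        """List all object names of a kind (what controllers iterate over)."""
        return [n for (k, n) in self._objects if k == kind]
```

Funneling every interaction through one API makes authentication, auditing, and consistency far easier than letting components talk to each other directly.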

Workers periodically report resource usage and container status to the master, which stores the data in etcd, a consistent and highly available key-value store.
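The worker-side report is essentially a heartbeat: a periodic payload describing the node and its containers. A sketch of assembling one such payload follows; the `build_heartbeat` function and the payload shape are assumptions for illustration (a real agent would send this to the API server over HTTPS on a timer).

```python
# Sketch of the worker status report (kubelet analogue).
# build_heartbeat and the payload fields are illustrative assumptions.
import time

def build_heartbeat(node_name, containers, cpu_used, mem_used):
    """Assemble the status payload a worker sends to the master."""
    return {
        "node": node_name,
        "timestamp": time.time(),
        "resources": {"cpu_used": cpu_used, "mem_used": mem_used},
        "containers": [{"id": c, "state": "running"} for c in containers],
    }
```

If heartbeats from a node stop arriving for long enough, the master can mark the node unhealthy, which is exactly the signal the controllers below react to.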

When a worker fails, the master's controllers (node controller, replica controller, endpoint controller), run by kube-controller-manager, reschedule the affected containers and adjust network routes.
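Each controller follows the same pattern: compare desired state with observed state and emit whatever actions close the gap (a reconcile loop). A minimal sketch for a replica controller, with illustrative function names, could look like this:

```python
# Reconcile-loop sketch for a replica controller: converge the observed
# set of running containers on the desired replica count.
# reconcile and new_id are hypothetical names for illustration.
def reconcile(desired_replicas, running_ids, new_id):
    """Return (action, container_id) pairs needed to reach desired state."""
    actions = []
    diff = desired_replicas - len(running_ids)
    if diff > 0:
        # Too few replicas (e.g., a worker died): create replacements.
        actions += [("create", new_id(i)) for i in range(diff)]
    elif diff < 0:
        # Too many replicas: delete the surplus.
        actions += [("delete", cid) for cid in running_ids[:-diff]]
    return actions
```

Because the loop only compares states, it handles node failure, scale-up, and scale-down with the same code path, which is what makes this pattern so robust.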

Network communication between containers is achieved through unique container IPs, with routing handled on workers via iptables or ipvs. kube-proxy watches for IP or container count changes and updates the rules.
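What "updating the rules" means in the iptables case is regenerating DNAT rules that spread traffic for a service VIP across the current backend IPs. The sketch below renders such rules as strings; the statistic-match flags mirror how iptables random load balancing is commonly written, but the chain name and addresses are illustrative.

```python
# Sketch of a kube-proxy analogue: render iptables DNAT rules that
# load-balance a service VIP over backend container IPs.
# The MYPROXY chain name and all addresses are illustrative.
def render_rules(service_vip, port, backend_ips):
    """Render one DNAT rule per backend, with random selection."""
    rules = []
    n = len(backend_ips)
    for i, ip in enumerate(backend_ips):
        rule = f"-A MYPROXY -d {service_vip} -p tcp --dport {port}"
        if i < n - 1:
            # statistic match: pick this backend with probability 1/(n-i),
            # so overall traffic splits evenly across all n backends.
            rule += f" -m statistic --mode random --probability {1/(n-i):.4f}"
        rule += f" -j DNAT --to-destination {ip}:{port}"
        rules.append(rule)
    return rules
```

Whenever a container is added or removed, the proxy re-renders and reapplies the full rule set, which is why ipvs (with in-kernel hash tables) scales better than iptables at very large backend counts.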

Higher‑level services are exposed via a Service abstraction (cluster VIP or DNS name) backed by an internal DNS service.
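The point of the Service abstraction is indirection: callers resolve a stable name to a stable VIP, while the set of backend container IPs churns underneath. A small registry sketch, with an illustrative `ServiceRegistry` class that stands in for the internal DNS service, shows the idea:

```python
# Sketch of the Service abstraction: a registry mapping a stable DNS
# name to a cluster VIP plus a mutable set of backend container IPs.
# ServiceRegistry and its methods are hypothetical names.
class ServiceRegistry:
    def __init__(self):
        self._services = {}  # dns name -> (vip, set of backend IPs)

    def register(self, name, vip):
        """Create a service with a stable virtual IP."""
        self._services[name] = (vip, set())

    def add_backend(self, name, container_ip):
        """Backends come and go; the VIP and name never change."""
        self._services[name][1].add(container_ip)

    def resolve(self, name):
        """DNS-style lookup: always returns the same VIP for a name."""
        return self._services[name][0]
```

Clients only ever see the name and VIP; kube-proxy's rules (above the VIP) absorb all backend churn.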

Additional plug‑ins such as a web UI, DNS, resource monitoring, and log aggregation improve the user experience.

Summary of Components

Master components: kube-apiserver, kube-scheduler, etcd, kube-controller-manager

Node components: kubelet, kube-proxy

Plug‑ins: DNS, web UI, container resource monitoring, cluster logging

Written by

Efficient Ops

This public account is maintained by Xiaotianguo and friends and regularly publishes widely read original technical articles. We focus on operations transformation and aim to accompany you throughout your operations career.
