
Kubernetes Architecture Analysis and Comparison of Scheduling Models with Mesos

This article explains the Kubernetes architecture, details each core component, demonstrates how a Deployment is created, and critically compares Kubernetes' two‑layer scheduling model with Mesos, evaluating resource utilization, scalability, flexibility, performance, and scheduling latency while discussing why cluster schedulers struggle with horizontal scaling.

Cloud Native Technology Community

The article begins by introducing the latest Kubernetes architecture, noting its dominance in cloud‑native environments and its role in managing both infrastructure and applications.

It then presents a simplified view of the official architecture, describing the key components:

etcd – the distributed key‑value store that holds all cluster state and provides event watching/subscription and leader election.

API Server – a RESTful front end to etcd that adds authentication, caching, and related functions.

Controller Manager – handles task‑level scheduling for objects such as Deployments, DaemonSets, and Jobs.

Scheduler – performs resource‑level scheduling by assigning Pods to nodes based on cluster state.

Kubelet – the agent on each node that watches (via the API Server) for Pods bound to its node and runs them.

kubectl – the command‑line client that interacts with the API Server.
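The controller manager, scheduler, and kubelet all follow the same watch‑and‑reconcile pattern: compare desired state (stored in etcd) against observed state and act on the difference. A minimal, library‑free sketch of that pattern (the function and variable names here are illustrative, not real Kubernetes APIs):

```python
# Toy reconcile step shared by Kubernetes control-loop components.
# Desired state comes from etcd via the API Server; observed state is
# whatever the component can actually see (e.g. running containers).

def reconcile(desired, observed):
    """Return the actions needed to drive observed state toward desired state."""
    to_create = [name for name in desired if name not in observed]
    to_delete = [name for name in observed if name not in desired]
    return to_create, to_delete

desired_pods = {"nginx-1", "nginx-2", "nginx-3"}   # recorded in etcd
observed_pods = {"nginx-1", "nginx-9"}             # running on the node

create, delete = reconcile(desired_pods, observed_pods)
print(sorted(create))  # pods the kubelet must start
print(sorted(delete))  # pods the kubelet must stop
```

Real components react to watch events rather than polling full snapshots, but each event handler reduces to this same diff‑and‑act step.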

A concrete example walks through creating a multi‑instance Nginx Deployment with kubectl: the Deployment controller creates a ReplicaSet, the ReplicaSet creates the Pods, the Scheduler binds each Pod to a node, and the Kubelet on that node launches it.
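The chain of ownership in that example can be sketched as plain data transformations. This is a toy model under simplifying assumptions: real controllers run asynchronously against the API Server, and the real scheduler scores nodes rather than binding round‑robin.

```python
# Toy model of the Deployment -> ReplicaSet -> Pod -> node-binding chain.
# Direct function calls stand in for asynchronous watch-driven controllers.

def deployment_controller(deployment):
    # The Deployment controller creates a ReplicaSet carrying the pod template.
    return {"name": deployment["name"] + "-rs",
            "replicas": deployment["replicas"],
            "template": deployment["template"]}

def replicaset_controller(rs):
    # The ReplicaSet controller stamps out one pending Pod per replica.
    return [{"name": f"{rs['name']}-{i}", "spec": rs["template"], "node": None}
            for i in range(rs["replicas"])]

def scheduler(pods, nodes):
    # The Scheduler binds each pending Pod to a node (round-robin here).
    for i, pod in enumerate(pods):
        pod["node"] = nodes[i % len(nodes)]
    return pods

deployment = {"name": "nginx", "replicas": 3, "template": {"image": "nginx"}}
pods = scheduler(replicaset_controller(deployment_controller(deployment)),
                 nodes=["node-a", "node-b"])
for pod in pods:
    print(pod["name"], "->", pod["node"])  # kubelet on that node starts the Pod
```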

The article then raises two questions: whether Kubernetes is a two‑layer scheduler and how its scalability compares to Mesos. It references Google’s Omega paper, classifying Borg as a monolithic scheduler, Mesos as two‑layer, and Omega as a shared‑state model.

It argues that Kubernetes, like Mesos, separates task scheduling (the Controller Manager) from resource scheduling (the Scheduler), which makes it a two‑layer scheduler as well. Contrasting Mesos' push‑based (offer) model with Kubernetes' pull‑based model then leads to an analysis along five criteria:
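The two models can be contrasted in a short sketch. In the push (offer) model, the resource manager proposes offers that a framework may accept or decline; in the pull model, a single scheduler with a global view drains a queue of pending pods itself. All names below are illustrative, not real Mesos or Kubernetes APIs.

```python
# Toy contrast of push-based (Mesos offers) vs pull-based (Kubernetes) scheduling.

def mesos_style(offers, framework_accepts):
    # Push: the master sends each resource offer; the framework decides
    # whether to take it, seeing only the offers it receives.
    placed = [o for o in offers if framework_accepts(o)]
    declined = [o for o in offers if not framework_accepts(o)]
    return placed, declined

def kubernetes_style(pending_pods, pick_node):
    # Pull: one scheduler walks the pending-pod queue and binds every pod,
    # choosing nodes from its global view of cluster state.
    return {pod: pick_node(pod) for pod in pending_pods}

offers = [{"node": "n1", "cpu": 4}, {"node": "n2", "cpu": 1}]
placed, declined = mesos_style(offers, lambda o: o["cpu"] >= 2)
bindings = kubernetes_style(["web-0", "web-1"], lambda pod: "n1")
print(len(placed), len(declined), bindings)
```

Declined offers in the push model go back to the master and may be re‑offered later, which is the source of the utilization and latency trade‑offs discussed below.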

Resource utilization: Kubernetes wins because a single Scheduler has a global view of resources.

Scalability: Mesos wins; its push model makes it easier to migrate existing workloads.

Flexible task‑scheduling strategies: Mesos wins, supporting “all‑or‑nothing” semantics that Kubernetes lacks.

Performance: Mesos wins; historical data shows Mesos clusters managing far more nodes than early Kubernetes versions.

Scheduling latency: Kubernetes wins due to the inefficiency of Mesos’ Offer cycle.
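The latency point can be made with a back‑of‑the‑envelope model: every offer a framework declines costs another round trip before a fit is found, while a scheduler with a global view binds in one pass. The millisecond figures below are assumptions chosen purely for illustration, not measurements.

```python
# Rough latency model for the Offer cycle vs direct binding.
# Both constants are illustrative assumptions, not benchmark data.

OFFER_ROUND_TRIP_MS = 50   # master offers -> framework replies (assumed)
DIRECT_BIND_MS = 10        # scheduler reads global state and binds (assumed)

def mesos_latency(offers_declined_before_fit):
    # Each declined offer adds a full round trip before placement succeeds.
    return (offers_declined_before_fit + 1) * OFFER_ROUND_TRIP_MS

def kubernetes_latency():
    # Global view: the scheduler picks a suitable node in a single pass.
    return DIRECT_BIND_MS

print(mesos_latency(offers_declined_before_fit=3))  # grows with declines
print(kubernetes_latency())                         # constant per pod
```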

The discussion then shifts to why most cluster schedulers cannot scale out horizontally. It introduces the concept of an “independent resource pool” and uses an e‑commerce analogy to illustrate that horizontal scalability is limited by the number of such pools.

In cluster scheduling, the entire cluster's resources form a single pool, so the scheduler has exactly one independent resource pool and cannot be split into multiple concurrently active instances. The article concludes that because the scheduler's decision space cannot be partitioned, cluster schedulers remain single‑active despite the desire for horizontal expansion.
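The analogy can be made concrete by counting independent pools: e‑commerce orders partition cleanly by user, so many active servers can each own a shard, whereas every cluster scheduling decision may touch any node, leaving a single global pool. A toy illustration (all data is made up):

```python
# Counting "independent resource pools": the ceiling on multi-active scaling.

def pool_count(keys, partition):
    # Number of independent pools a workload splits into under a partition
    # function; each pool could, in principle, get its own active server.
    return len({partition(k) for k in keys})

orders = ["user-1:o1", "user-2:o7", "user-1:o9", "user-3:o2"]
# Orders partition by user: each user's shard is an independent pool.
print(pool_count(orders, partition=lambda k: k.split(":")[0]))

cluster_nodes = ["node-1", "node-2", "node-3"]
# A scheduling decision may claim resources on any node, so no partition
# function can split the cluster: everything maps to one global pool.
print(pool_count(cluster_nodes, partition=lambda n: "whole-cluster"))
```

With only one pool, adding scheduler replicas yields standby copies for availability, not extra active throughput, which is the article's closing point.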

Tags: Cloud Native, Architecture, Kubernetes, Resource Management, Scheduling, Mesos
Written by

Cloud Native Technology Community

The Cloud Native Technology Community, part of the CNBPA Cloud Native Technology Practice Alliance, focuses on evangelizing cutting‑edge cloud‑native technologies and practical implementations. It shares in‑depth content, case studies, and event/meetup information on containers, Kubernetes, DevOps, Service Mesh, and other cloud‑native tech, along with updates from the CNBPA alliance.
