
Running a Kubernetes Cluster Across Multiple Zones

This article explains how Kubernetes 1.2 enables a single cluster to operate across multiple zones within the same cloud provider, detailing automatic zone labeling, pod and volume scheduling behavior, and the key limitations and considerations for multi‑zone deployments.

Architects Research Society

Introduction

Kubernetes 1.2 added support for running a single cluster across multiple failure zones (called “zones” in GCE, “availability zones” in AWS). This lightweight multi‑zone capability, sometimes nicknamed “Ubernetes Lite”, allows a more highly‑available cluster within a single cloud provider.

Limitations: a single cluster can span multiple zones only if they belong to the same cloud provider; currently only GCE and AWS are automatically supported, though other providers can be added by labeling nodes and volumes appropriately.

Features

When a node starts, kubelet automatically adds a label with its zone information.
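Once nodes carry zone labels, a workload can be pinned to a particular zone with an ordinary nodeSelector. A minimal sketch, using the beta label key from the Kubernetes 1.2 era (newer releases use `topology.kubernetes.io/zone`; the Pod name, zone name, and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: zone-pinned-pod      # example name
spec:
  # Match the zone label that the kubelet added at node registration
  nodeSelector:
    failure-domain.beta.kubernetes.io/zone: us-central1-a
  containers:
  - name: app
    image: nginx
```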

Kubernetes automatically spreads the Pods of a ReplicationController or Service across the nodes of a cluster. In a multi‑zone cluster, this spreading extends across zones (implemented via the SelectorSpreadPriority scheduler priority), reducing the impact of a zone failure. Spreading is best‑effort: if zones are heterogeneous (for example, different numbers or types of nodes), a perfectly even distribution is not guaranteed.
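No special configuration is needed to get this spreading; an ordinary replicated workload benefits automatically once nodes exist in several zones. An illustrative ReplicationController (names and image are examples):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3           # with nodes in three zones, the scheduler's
  selector:             # SelectorSpreadPriority prefers one replica per zone
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```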

When a PersistentVolume is created, the PersistentVolumeLabel admission controller adds a zone label to it. The scheduler's VolumeZonePredicate then ensures that Pods claiming the volume are scheduled into the same zone, because volumes cannot be attached across zones.
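As a sketch of how this plays out (claim and Pod names are hypothetical): the dynamically provisioned PV below is labeled with its zone by the admission controller, and the Pod that claims it is then constrained to that zone by the scheduler.

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
kind: Pod
apiVersion: v1
metadata:
  name: pod-with-claim
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - mountPath: /data
          name: data
  volumes:
    - name: data
      # VolumeZonePredicate places this Pod in the zone of claim1's PV
      persistentVolumeClaim:
        claimName: claim1
```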

Limitations

The design assumes zones are network‑proximate to each other. No zone‑aware routing is performed, so traffic to a Service may cross zones, adding latency and cost.

Zone affinity applies only to PersistentVolumes; a Pod that specifies an EBS volume directly in its spec bypasses the mechanism and may be scheduled in a zone where the volume cannot be attached.

A cluster cannot span different clouds or regions; full federation is required for that.

By default kube‑up creates a single master in one zone; for high‑availability control planes users must follow HA instructions.

Volume Limitations

Topology‑aware volume binding addresses several constraints:

StatefulSet volume zone spreading with dynamic provisioning is not compatible with pod affinity or anti‑affinity policies.

If a StatefulSet name contains a hyphen (“-”), volume zone spreading may not distribute storage uniformly across zones.

When multiple PVCs are specified in a Deployment or Pod spec, the StorageClass must be configured for a specific single zone, or the PVs must be statically provisioned in a specific zone; alternatively, a StatefulSet ensures that all the volumes for a given replica are provisioned in the same zone.
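One way to configure a StorageClass for a specific zone is the `zone` parameter of the GCE PD provisioner; a sketch assuming GCE (the class name and zone are examples, and one such class would be created per target zone):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard-us-central1-a   # example: one class per zone
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-central1-a            # pin dynamic provisioning to this zone
```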

Tags: high-availability, Kubernetes, Cluster, Persistent Volume, Multi-Zone, Zone Awareness
Written by

Architects Research Society

A daily treasure trove for architects, expanding your view and depth. We share enterprise, business, application, data, technology, and security architecture, discuss frameworks, planning, governance, standards, and implementation, and explore emerging styles such as microservices, event‑driven, micro‑frontend, big data, data warehousing, IoT, and AI architecture.
