
Mastering Kubernetes Pod Topology Spread Constraints: Why Pods Stay Pending

This article explains Kubernetes pod topology spread constraints, breaks down the key fields, shows how to combine them with affinity rules, and provides troubleshooting steps for pods stuck in the Pending state due to unsatisfiable scheduling constraints.

MaGe Linux Operations

While deploying a component, I noticed its pods stayed Pending because the scheduler could not satisfy the pod topology spread constraints that had been added to the spec.

What is topologySpreadConstraints?

It is a field in the Pod spec (spec.topologySpreadConstraints) that controls how pods are distributed across topology domains such as zones, regions, or individual nodes.

spec:
  topologySpreadConstraints:
  - maxSkew: <integer>
    topologyKey: <string>
    whenUnsatisfiable: <string>
    labelSelector: <object>

For beginners, focus on the four basic fields:

labelSelector: selects the pods to count; the scheduler counts matching pods in each topology domain when computing skew.

topologyKey: the node label that defines a topology domain, e.g. topology.kubernetes.io/zone or kubernetes.io/hostname.

maxSkew: the maximum allowed difference in the number of matching pods between any two domains; a smaller value enforces a more even spread.

whenUnsatisfiable: what the scheduler does when placing the pod would exceed maxSkew. Options are DoNotSchedule (the default, which leaves the pod Pending) or ScheduleAnyway (a best-effort preference).
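The four fields above can be seen working together in a minimal manifest. The following is an illustrative sketch, not taken from the original article; the app: web label, replica count, and image are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # hypothetical example workload
spec:
  replicas: 4
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      topologySpreadConstraints:
      - maxSkew: 1                                # zone pod counts may differ by at most 1
        topologyKey: topology.kubernetes.io/zone  # spread across availability zones
        whenUnsatisfiable: DoNotSchedule          # leave the pod Pending rather than violate the skew
        labelSelector:
          matchLabels:
            app: web                              # count only pods carrying this label
      containers:
      - name: web
        image: nginx
```

Assuming two zones with schedulable nodes, the four replicas land as a 2/2 split; a 3/1 placement would have a skew of 2, which exceeds maxSkew: 1 and is rejected.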


Topology spread constraints can be combined with affinity/anti-affinity rules for richer scheduling behavior. Example:

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app.kubernetes.io/name: app-server
      topologyKey: kubernetes.io/hostname
schedulerName: default-scheduler
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      app.kubernetes.io/instance: app-server
      app.kubernetes.io/name: app-server

This configuration prevents pods with matching labels from being placed on the same node (anti‑affinity) while trying to spread them evenly across zones (topology spread). In a two‑node cluster where both nodes belong to the same zone, the constraints cannot be satisfied, so both pods remain Pending.

To resolve the issue, either increase maxSkew to 2 or change whenUnsatisfiable to ScheduleAnyway.
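The second fix can be sketched against the same constraint from the example above; only the whenUnsatisfiable value changes, turning the hard requirement into a best-effort preference:

```yaml
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: ScheduleAnyway   # prefer an even spread, but schedule even if skew is exceeded
  labelSelector:
    matchLabels:
      app.kubernetes.io/instance: app-server
      app.kubernetes.io/name: app-server
```

With ScheduleAnyway the scheduler still favors nodes that reduce skew, but it will no longer leave pods Pending when no placement satisfies the constraint; the node-level anti-affinity rule remains hard and still keeps the pods on separate nodes.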
Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Kubernetes · Affinity · Pod Scheduling · Topology Spread Constraints
Written by

MaGe Linux Operations

Founded in 2009, MaGe Education is a top Chinese high‑end IT training brand. Its graduates earn 12K+ RMB salaries, and the school has trained tens of thousands of students. It offers high‑pay courses in Linux cloud operations, Python full‑stack, automation, data analysis, AI, and Go high‑concurrency architecture. Thanks to quality courses and a solid reputation, it has talent partnerships with numerous internet firms.
