
Master Kubernetes Pod Anti-Affinity to Distribute Replicas Across Nodes

Learn how to prevent multiple replica pods from landing on the same node by using Kubernetes Pod Anti-Affinity, understand hard and soft scheduling rules, apply a ready‑to‑use YAML example, and verify that each replica runs on a separate host for high availability.


Why Pod Anti‑Affinity Matters

In many Kubernetes deployments, raising the replica count for a critical service (e.g., nginx) improves availability on paper, but the default scheduler may still place all replicas on a single node. If that node fails, every replica goes down at once, causing a complete service outage. The root cause is the absence of a scheduling policy that enforces distribution.
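You can check whether this is happening in your own cluster by looking at the NODE column for the workload's pods; a quick check, assuming the pods carry an app=nginx label:

$ kubectl get pods -l app=nginx -o wide

If every pod reports the same node, the replicas share a single point of failure.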

Pod Affinity vs. Anti‑Affinity

Kubernetes provides two complementary scheduling concepts:

Pod Affinity – attracts pods toward nodes (or other topology domains) that already run pods with matching labels, co-locating related workloads.

Pod Anti‑Affinity – repels pods from nodes that already run pods with matching labels, keeping replicas apart.
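For contrast, a co-location rule uses the same building blocks; a minimal podAffinity sketch (the app: cache label is purely illustrative) that schedules a pod onto a node already running a matching cache pod:

affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: cache        # illustrative label; match your real co-located workload
      topologyKey: "kubernetes.io/hostname"

Swapping podAffinity for podAntiAffinity turns the same term from "put these pods together" into "keep these pods apart".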

Typical Scenario: Automatic Replica Distribution

Goal: Deploy two nginx replicas such that each runs on a different node, ensuring the service stays up even if one node crashes.

Configuration Options

Hard requirement (recommended) – Use requiredDuringSchedulingIgnoredDuringExecution. The scheduler must satisfy the rule; if it cannot, the pod stays Pending rather than being placed.

Soft requirement – Use preferredDuringSchedulingIgnoredDuringExecution. The scheduler tries to satisfy the rule but will still place the pod if it cannot, which can let replicas pile up on one node; see the sketch after this list.
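A minimal sketch of the soft variant for comparison; unlike the hard form, each preferred term wraps its selector in a podAffinityTerm and carries a weight from 1 to 100:

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100                     # higher weight = stronger preference
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: nginx
        topologyKey: "kubernetes.io/hostname"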

Practical Example: Hard Anti‑Affinity for an Nginx Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          # Hard rule: never place this pod on a node that already runs
          # a pod matching the selector below.
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: "kubernetes.io/hostname"   # spread across individual hosts
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: http
          protocol: TCP

Key Configuration Items Explained

labelSelector: selects pods carrying the app=nginx label.

topologyKey: set to kubernetes.io/hostname so that matching pods are spread across host machines, at most one per node.

requiredDuringSchedulingIgnoredDuringExecution: a hard rule that must be satisfied at scheduling time; pods that are already running are left alone if labels change later (that is the "IgnoredDuringExecution" part).

⚠️ Ensure the cluster has at least as many schedulable nodes as the replica count; otherwise the surplus pods will stay Pending.
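A quick pre-flight check; every replica needs its own Ready, schedulable node:

$ kubectl get nodes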

Expected Outcome

Apply the manifest, then confirm that the two nginx pods landed on different nodes.
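Assuming the manifest is saved as nginx-anti-affinity.yaml (the filename is illustrative):

$ kubectl apply -f nginx-anti-affinity.yaml
deployment.apps/nginx created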

$ kubectl get pods -o wide
NAME                    READY   STATUS    NODE
nginx-7f46d78f9-abcde   1/1     Running   node1
nginx-7f46d78f9-xyz12   1/1     Running   node2

(Output trimmed to the relevant columns.)

If one node goes down, the other pod continues to serve traffic, providing high availability for the service.
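To rehearse that failover, you can simulate an outage by draining one node; a sketch (node1 matches the output above), not a production runbook:

$ kubectl drain node1 --ignore-daemonsets

The evicted replica's replacement will stay Pending in a two-node cluster, because the hard rule forbids a second nginx pod on node2; the pod on node2 keeps serving throughout. Run kubectl uncordon node1 to let the Pending pod schedule again.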

Conclusion

Using Pod Anti‑Affinity lets you enforce replica distribution at the pod level, making your Kubernetes workloads resilient to node failures. The technique is essential for high‑availability services such as databases, web front ends, and critical microservice components.

Tags: high availability · Kubernetes · scheduling · YAML · Pod Anti-Affinity
Written by

Full-Stack DevOps & Kubernetes

Focused on sharing DevOps, Kubernetes, Linux, Docker, Istio, microservices, Spring Cloud, Python, Go, databases, Nginx, Tomcat, cloud computing, and related technologies.
