
Mastering Kubernetes Taints and Tolerations: A Practical Guide

This guide explains the structure, effects, default settings, and management of Kubernetes taints, demonstrates how to view, add, delete, and modify them, and shows how to validate and use tolerations to control pod scheduling across nodes.


Kubernetes Taints and Tolerations Overview

Official documentation: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/

What Is a Taint?

A taint is a property applied to a node (worker or control-plane) that repels pods: it prevents pods without a matching toleration from being scheduled onto the node and, depending on the effect, can even evict pods already running there.

A taint's syntax resembles a label's, but it is attached to a node and carries the opposite semantics: rather than being used to select or attract pods, it pushes them away.

Taint Structure

A taint consists of three parts, written as key=value:effect

key: the taint key (custom, e.g., node-type).

value: the taint value (optional, e.g., special).

effect: determines how pods are affected. Possible values are:

NoSchedule – new pods that do not tolerate the taint are not scheduled onto the node; pods already running there are unaffected.

PreferNoSchedule – a soft version of NoSchedule; the scheduler tries to avoid the node but may still use it when necessary.

NoExecute – new pods are not scheduled, and pods already on the node that do not tolerate the taint are evicted.

[Figure: Taint diagram]
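For example, a taint built from the key and value above (the node name is a placeholder):

# Only pods tolerating node-type=special may be scheduled on node01
kubectl taint node node01 node-type=special:NoSchedule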

Core Uses of Taints

Taints are node-level attributes that repel pods: they keep pods without a matching toleration from being scheduled onto a node. They are typically used together with tolerations to build fine-grained scheduling strategies.

Isolating Nodes

Mark nodes for special purposes (e.g., GPU or high‑memory nodes) and block regular pods from being scheduled there, ensuring dedicated resources for critical workloads.
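For example, a GPU node could be reserved like this (the node name and key are illustrative):

# Only pods tolerating dedicated=gpu can be scheduled here
kubectl taint node gpu-node01 dedicated=gpu:NoSchedule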

Evicting Unexpected Pods

Using a NoExecute taint forces eviction of pods that do not tolerate the taint, which is useful during node maintenance, upgrades, or failure handling.
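For example, before starting maintenance (the node name and key are illustrative):

# Evict all pods that do not tolerate the maintenance taint
kubectl taint node node01 maintenance=true:NoExecute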

Hierarchical Scheduling Strategies

Combine tolerations with taints to achieve “node grouping + pod‑directed scheduling”, preventing resource contention and improving security across environments (development, testing, production).
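As a sketch of that pattern, each environment's nodes receive an environment taint, and only pods carrying the matching toleration (shown later in this guide) can land on them; node names and keys here are illustrative:

# Reserve nodes per environment
kubectl taint node prod-node01 env=prod:NoSchedule
kubectl taint node test-node01 env=test:NoSchedule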

Default Taints in a Kubernetes Cluster

node-role.kubernetes.io/control-plane:NoSchedule – prevents regular pods from being scheduled on control-plane nodes while allowing system components that tolerate it.

node.kubernetes.io/not-ready:NoExecute – added dynamically when a node is NotReady; evicts non-tolerating pods after a timeout.

node.kubernetes.io/unreachable:NoExecute – added when a node loses contact with the control plane; evicts pods after the eviction timeout.

node.kubernetes.io/memory-pressure:NoSchedule – added under memory pressure; keeps new pods off the node until the pressure clears.

node.kubernetes.io/disk-pressure:NoSchedule – added under disk pressure, before the disk is completely full; blocks new pods.

node.kubernetes.io/pid-pressure:NoSchedule – added when PID resources run low; blocks new pods to protect the node.

node.kubernetes.io/out-of-disk – formerly added when disk usage exceeded a threshold; this taint has been removed in recent Kubernetes versions.
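For reference, control-plane system pods typically carry a toleration like the following so the first taint above does not block them (a minimal sketch; actual manifests vary by component and distribution):

tolerations:
- key: "node-role.kubernetes.io/control-plane"
  operator: "Exists"
  effect: "NoSchedule"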

Managing Taints

Viewing Taints

# View taints on all nodes (-C 2 shows two lines of context around each match)
kubectl describe nodes | grep -C 2 Taints
# View a specific node's taints
kubectl describe node <node-name> | grep -C 2 Taints
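Alternatively, a JSONPath one-liner prints every node together with its taints without grepping through describe output (a sketch; the taint list appears as raw JSON):

# Print each node name followed by its taint list
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'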

Adding a Taint

# Add a taint (value is optional)
kubectl taint node <node-name> key[=value]:effect
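For example (the node name and keys are illustrative):

# Taint with key node-type, value special, effect NoSchedule
kubectl taint node node01 node-type=special:NoSchedule
# The value may be omitted entirely
kubectl taint node node01 dedicated:NoSchedule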

Removing a Taint

# Remove a taint
kubectl taint node <node-name> key[=value]:effect-
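The trailing hyphen marks the taint for removal. For example, removing the taints added above (removal by key alone strips every taint with that key):

# Remove a specific taint
kubectl taint node node01 node-type=special:NoSchedule-
# Remove all taints with a given key
kubectl taint node node01 dedicated-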

Modifying a Taint

Modifying a taint is performed by deleting the existing taint and adding a new one.

# Delete old taint
kubectl taint node <node-name> oldKey:oldEffect-
# Add new taint
kubectl taint node <node-name> newKey:newEffect
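When only the value changes while the key and effect stay the same, kubectl also accepts --overwrite to update the taint in place:

# Replace the value of an existing key/effect pair
kubectl taint node node01 node-type=general:NoSchedule --overwrite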

Validating the Three Taint Types

PreferNoSchedule

This is the weakest restriction. Pods are preferably not scheduled on the node, but the scheduler will still place them there if no other nodes are available.

# Add PreferNoSchedule taint to node01
kubectl taint node node01 name=zhangsan:PreferNoSchedule
# Verify
kubectl describe node node01 | grep Taints

When a Deployment with 10 replicas is created, all pods are scheduled to the other node: the scheduler avoids node01 while other nodes have capacity, but would still fall back to it if no untainted node could host the pods.
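For reference, a minimal Deployment of the kind used in these validations (the name and image are assumptions; the article's original deploy.yaml is not shown):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 10
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx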

NoSchedule

This taint outright blocks pod scheduling onto the node unless the pod has a matching toleration. Existing pods are not affected.

# Add NoSchedule taint to node01 and node02
kubectl taint node node01 app:NoSchedule
kubectl taint node node02 app:NoSchedule
# Deploy 10 pods
kubectl apply -f deploy.yaml
# Pods remain pending because they lack tolerations
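The stuck state can be confirmed with:

# Every replica shows STATUS Pending and no assigned node
kubectl get pods -o wide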

NoExecute

This taint not only blocks scheduling but also evicts pods that do not tolerate it.

# Remove the NoSchedule taints left over from the previous test
kubectl taint node node01 app:NoSchedule-
kubectl taint node node02 app:NoSchedule-
# Add NoExecute taint to node02
kubectl taint node node02 app:NoExecute
# Pods are evicted from node02 and rescheduled to node01
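Watching the pods during this step shows the eviction and rescheduling as it happens:

# Watch pods leave node02 and reappear on node01
kubectl get pods -o wide -w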

Tolerations

To allow pods to be scheduled onto tainted nodes, define tolerations in the pod spec using spec.tolerations.

tolerations:
- key: "env"
  operator: "Equal"
  value: "prod"
  effect: "NoExecute"
  # tolerationSeconds is only valid with effect NoExecute: the pod may
  # remain on the tainted node for 3600 seconds before being evicted
  tolerationSeconds: 3600

Toleration Matching Rules

Full match (key + operator: Equal + value + effect) tolerates one specific taint.

operator: Exists with a key but no value matches any taint with that key; if effect is also omitted, every effect for that key is tolerated.

Omitting the key (with operator: Exists) matches all taints on all nodes (use with caution).
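As an illustration of the last rule, this toleration matches every taint and effectively opts the pod out of taint-based repulsion entirely:

tolerations:
- operator: "Exists"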

Example: Tolerate a Specific Taint

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "web"
    effect: "NoSchedule"

Example: Tolerate Multiple Taints

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "web"
    effect: "NoSchedule"
  - key: "env"
    operator: "Equal"
    value: "prod"
    effect: "NoSchedule"
[Figure: Toleration diagram]