
Boost Kubernetes Efficiency: 5 Practical Pod Scheduling Techniques

This article explains how to improve Kubernetes resource utilization by addressing node fragmentation, pod misconfiguration, and HPA settings, and then details five concrete scheduling methods—including nodeName, nodeSelector, nodeAffinity, taints, and pod priority—to optimize pod placement and reduce waste.


Background

During the containerization of QuTouTiao services, some pod scheduling configurations were found to be unreasonable and caused resource waste. As the company grows, higher resource utilization is required. The inefficiencies stem from four sources: node resource fragmentation, pod resource misconfiguration, improper HPA replica settings, and idle business periods.

Typical Solution Approach

Compress pods (ensure limits and requests are set appropriately), overcommit nodes (allow more pods per node), optimize VPA/HPA configurations, and make effective use of idle business periods. All four steps rely on pod scheduling to maintain service availability.

POD Scheduling in Kubernetes

In Kubernetes, a pod is the basic unit of deployment and serves as a carrier for containers. Scheduling and automatic control are usually achieved through controller objects such as ReplicationController, Deployment, DaemonSet, and Job. The kube‑scheduler implements the scheduling mechanism, which consists of a filtering phase (predicates) and a scoring phase (priorities).

Filtering Predicates

CheckNodeConditionPred – node is in Ready condition

CheckNodeMemoryPressure – node has sufficient memory

CheckNodeDiskPressure – node has sufficient disk space

CheckNodePIDPressure – node has sufficient PID resources

GeneralPred – general checks, including HostName (matches pod.spec.nodeName to the node's name)

MatchNodeSelector – matches pod.spec.nodeSelector labels

PodFitsResources – checks if requested resources are available

PodToleratesNodeTaints – tolerates node taints

CheckNodeLabelPresence

CheckServiceAffinity

CheckVolumeBinding

NoVolumeZoneConflict

The node conditions these predicates check (Ready, MemoryPressure, DiskPressure, PIDPressure) can be inspected with kubectl describe node <node-name>.

Scoring Algorithms

least_requested – node with minimal resource consumption

balanced_resource_allocation – node with most balanced resource usage

node_prefer_avoid_pods – honors the node's preferAvoidPods annotation

taint_toleration – lower score for nodes with taints the pod does not tolerate

selector_spreading – spreads pods of the same Service or ReplicaSet across nodes

interpod_affinity – scores nodes by inter‑pod affinity and anti‑affinity rules

most_requested – node with maximal resource consumption

node_label – node label matching

Method 1: Specify nodeName

Set pod.spec.nodeName to bind a pod to a specific node. If the field is set, the scheduler skips the pod entirely and the kubelet on the named node runs it directly.
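A minimal sketch of such a manifest, assuming a node named node-1 and an nginx image (both are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-nodename
spec:
  nodeName: node-1          # scheduler is bypassed; kubelet on node-1 runs the pod
  containers:
  - name: nginx
    image: nginx:1.21
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
```

Note that with nodeName the pod stays Pending forever if the named node does not exist, and it is not rescheduled if the node fails.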

Apply the YAML with kubectl apply -f <file.yaml> and verify the pod runs on the designated node.

Method 2: Use nodeSelector

Label target nodes, e.g., kubectl label nodes <node> k8szone=test, then add a matching nodeSelector to the pod spec.
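A sketch of a pod spec matching that label (pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-nodeselector
spec:
  nodeSelector:
    k8szone: test           # schedule only onto nodes labeled k8szone=test
  containers:
  - name: nginx
    image: nginx:1.21
```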

After applying the pod definition, kubectl get pods -o wide shows the pod on the selected node.

Method 3: Apply nodeAffinity

Node affinity offers richer expressions (In, NotIn, Exists, DoesNotExist, Gt, Lt) and supports required and preferred rules. Example: requiredDuringSchedulingIgnoredDuringExecution for zone=test and preferredDuringSchedulingIgnoredDuringExecution for priority=true.
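A sketch of a pod spec combining both rules (pod name, image, and the weight value are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-affinity
spec:
  affinity:
    nodeAffinity:
      # hard requirement: node must carry zone=test
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: In
            values: ["test"]
      # soft preference: prefer nodes labeled priority=true
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: priority
            operator: In
            values: ["true"]
  containers:
  - name: nginx
    image: nginx:1.21
```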

Apply the YAML with kubectl apply -f <affinity.yaml> and verify the pod is scheduled to the node that satisfies both conditions.

Method 4: Use Taints and Tolerations

Taints mark nodes to repel pods that do not tolerate them. Commands:

kubectl taint nodes <node> key=value:NoSchedule
kubectl taint nodes <node> key=value:NoExecute
kubectl taint nodes <node> key=value:PreferNoSchedule

Pods declare tolerations to allow scheduling onto tainted nodes.
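A sketch of a toleration matching the key=value:NoSchedule taint above (pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-toleration
spec:
  tolerations:
  - key: "key"              # must match the taint's key
    operator: "Equal"
    value: "value"          # must match the taint's value
    effect: "NoSchedule"    # must match the taint's effect
  containers:
  - name: nginx
    image: nginx:1.21
```

A toleration only permits scheduling onto the tainted node; it does not force the pod there. Combine it with node affinity to both allow and attract the pod.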

Method 5: Set Pod Priority

When resources are scarce, high‑priority pods can preempt lower‑priority ones. Define a PriorityClass and reference it in the pod spec via priorityClassName. The scheduler queues pods by priority, and if a high‑priority pod remains pending, it may evict lower‑priority pods from a node to make room.
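A sketch of a PriorityClass and a pod referencing it (the class name, value, and image are assumptions):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000              # larger value = higher priority
globalDefault: false        # do not apply to pods that omit priorityClassName
description: "For latency-critical services"
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-priority
spec:
  priorityClassName: high-priority
  containers:
  - name: nginx
    image: nginx:1.21
```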

Conclusion

By using the above five methods—nodeName, nodeSelector, nodeAffinity, taints/tolerations, and pod priority—engineers can fine‑tune pod placement, improve resource utilization, and reduce waste in Kubernetes clusters. Additional techniques such as DaemonSet, Job, and AI‑driven scheduling also exist.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Kubernetes · Resource Optimization · Pod Scheduling · Priority · Node Affinity · Taints
Written by

Qu Tech (Qutoutiao technology sharing)