
How Kubernetes Extended Resources Enable Custom Scheduling (and Their Limits)

This article explains how Kubernetes Extended Resources let you define custom resource types, describes the creation, synchronization, and scheduling workflow, highlights the non‑real‑time allocatable status behavior, and discusses practical limitations and the role of Device Plugins and Operators.

System Architect Go

Kubernetes not only schedules CPU and memory resources but also supports custom Extended Resource types, allowing the cluster to manage various non‑standard resources.

Extended Resource

The creation and usage flow is illustrated below:

Define the resource: a user or administrator adds a custom extended resource to a node via the Kubernetes API, e.g., example.com/dongle with 4 units.
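Following the official extended-resource workflow, this step can be performed with an HTTP PATCH against the node's status subresource through `kubectl proxy`; the node name is a placeholder:

```shell
# In another terminal, start a proxy to the kube-apiserver:
#   kubectl proxy
# Then advertise 4 units of example.com/dongle on the node.
# Note: "~1" is the JSON-Patch escape for "/" in "example.com/dongle".
curl --header "Content-Type: application/json-patch+json" \
  --request PATCH \
  --data '[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "4"}]' \
  http://localhost:8001/api/v1/nodes/<your-node-name>/status
```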

Node syncs the resource: the kubelet periodically queries GET /api/v1/nodes/<nodeName> on the kube-apiserver to retrieve the node’s resource information and keeps the extended resource data up to date.

Scheduling and usage: a pod requesting example.com/dongle is scheduled by the Scheduler onto a node that satisfies the condition; the node’s kubelet then creates and starts the pod.
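A pod consumes the resource through its container resource requests. A minimal sketch (pod name and image are illustrative) might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: extended-resource-demo
spec:
  containers:
  - name: demo
    image: registry.k8s.io/pause:3.9
    resources:
      requests:
        example.com/dongle: 3   # extended resources only accept whole integers
      limits:
        example.com/dongle: 3   # requests and limits must be equal (no overcommit)
```

The scheduler only places this pod on a node whose remaining `example.com/dongle` count is at least 3.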

Once an Extended Resource is added to a node, Kubernetes automatically manages its allocation during pod scheduling and creation without manual intervention.

However, the status.allocatable field on a node does not update in real time as pods are created or deleted. For example, a pod may consume three units of example.com/dongle, but status.allocatable still reports four units. This design avoids frequent updates to the kube-apiserver and etcd, reducing system load.
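One way to observe this stale value, assuming the dongle example above, is to inspect the node object directly:

```shell
# Capacity and allocatable both keep reporting the advertised total (4),
# even while running pods already have 3 units committed to them.
kubectl describe node <your-node-name> | grep dongle
```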

Even though the allocatable information is stale, scheduling is not affected because both the scheduler and kubelet keep their own records of resource usage.

Limitations of Extended Resources

While the concept appears simple—just add a resource via the API—practical use faces several challenges:

Resource configuration and consumption: declaring a GPU as an Extended Resource enables the scheduler to place pods, but the container still needs compatible drivers and runtimes (e.g., NVIDIA container runtime) to actually use the GPU.

Automation needs: manually adding or modifying extended resources on each node does not scale for large clusters or complex hardware requirements.

In practice, administrators typically rely on Device Plugin and Operator mechanisms. The Device Plugin framework automatically discovers and manages hardware such as GPUs, while an Operator can further automate deployment and configuration.

For deeper coverage, see the author’s earlier article on Kubernetes GPU scheduling, Device Plugin, CDI, NFD, and GPU Operator.

Conclusion

Extended Resource gives Kubernetes a flexible extension point, enabling clusters to handle diverse resource types. However, it only solves the declaration and scheduling part; full lifecycle management still requires additional tooling.

References:

https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/

https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: Kubernetes, Operator, Cluster Management, Device Plugin, Custom Scheduling, Extended Resource
Written by

System Architect Go

Programming, architecture, application development, message queues, middleware, databases, containerization, big data, image processing, machine learning, AI, personal growth.
