Why Does kube-scheduler Fail to Start? A Step‑by‑Step Fix
In a Kubernetes cluster the kube-scheduler pod can fail to start because containerd still holds a container-name reservation for a stale container; this guide shows how to locate the offending container with ctr, delete it, restart kubelet, and verify that the pod returns to the Running state.
Problem
In a Kubernetes cluster the kube-scheduler pod fails to start. The kubelet log (and the pod's events) contains an error indicating that the container name is already reserved for a specific container ID, which prevents the pod from reaching the Running state.
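The exact wording varies with the containerd and kubelet versions, but the event typically looks something like the line below; the node name, pod UID, and container ID here are illustrative placeholders:
Error: failed to reserve container name "kube-scheduler_kube-scheduler-<node-name>_kube-system_<pod-uid>_0": name "kube-scheduler_kube-scheduler-<node-name>_kube-system_<pod-uid>_0" is reserved for "<CONTAINER_ID>"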
Solution
1. Identify the conflicting container
Use the containerd CLI ctr against the k8s.io namespace (the containerd namespace kubelet uses for its containers) to confirm that the container ID reported in the log still exists.
# List containers in the k8s.io namespace and filter by the ID
ctr -n=k8s.io containers list --quiet | grep <CONTAINER_ID>
Replace <CONTAINER_ID> with the ID from the log, for example 830ab0f0a4e17a39e2b1d254038f05bf84aa586e39da300a2a4cd97c77bab4f0.
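If crictl is configured to talk to the same containerd socket, it offers an alternative way to find the container by pod name rather than by ID (a convenience, not part of the original steps):
# Alternative: list all CRI containers, including dead ones, and filter by name
crictl ps -a | grep kube-scheduler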
2. Clean up the conflicting container
2.1 Stop the task
# Kill the task associated with the container
ctr -n=k8s.io tasks kill 830ab0f0a4e17a39e2b1d254038f05bf84aa586e39da300a2a4cd97c77bab4f0
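Before deleting the container, it can help to confirm the task is actually gone; assuming the same container ID, an empty result from the lookup below means the kill succeeded (if the task lingers, re-running the kill with -s SIGKILL is an option):
# Confirm no task remains for the container; no output means it stopped
ctr -n=k8s.io tasks list | grep 830ab0f0a4e17a39e2b1d254038f05bf84aa586e39da300a2a4cd97c77bab4f0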
2.2 Delete the container
# Delete the container
ctr -n=k8s.io containers delete 830ab0f0a4e17a39e2b1d254038f05bf84aa586e39da300a2a4cd97c77bab4f0
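Re-running the lookup from step 1 should now return nothing, which confirms the name reservation has been released:
# Re-run the step 1 lookup; no output confirms the container is gone
ctr -n=k8s.io containers list --quiet | grep 830ab0f0a4e17a39e2b1d254038f05bf84aa586e39da300a2a4cd97c77bab4f0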
3. Restart kubelet
# Restart the kubelet service to refresh node state
systemctl restart kubelet
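After the restart, it is worth a quick check that kubelet came back healthy before looking at the pod:
# Confirm kubelet is active (running)
systemctl status kubelet
# Optionally follow the kubelet log to watch it recreate the static pod
journalctl -u kubelet -f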
4. Verify the pod status
# List pods in the kube-system namespace
kubectl get pod -n kube-system
# Show detailed information for the kube-scheduler pod
kubectl describe pod kube-scheduler-<node-name> -n kube-system
Replace <node-name> with the control-plane node's name; static pod names include it. When the kube-scheduler pod reports Running, the issue is resolved.
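If the pod is still settling, watching the namespace shows the transition as it happens:
# Watch pod status updates until kube-scheduler reaches Running (Ctrl-C to exit)
kubectl get pod -n kube-system -w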