Debug Kubernetes Pods Instantly with kubectl‑debug: A Practical Guide
This article introduces kubectl‑debug, a kubectl plugin that launches a sidecar debugging container sharing the target pod's namespaces. This lets you use familiar tools such as netstat, tcpdump, and iftop on the fly without bloating the original image. The article covers installation, usage examples, and advanced configuration options for efficient Kubernetes pod troubleshooting.
Background
A best practice for container images is to keep them as minimal as possible, but this often removes essential debugging tools and even a shell, making troubleshooting difficult. Traditional solutions involve pre‑installing tools like procps, net‑tools, tcpdump, and vim in the business pod image, which violates the minimal image principle and introduces security risks.
kubectl‑debug is a simple, easy‑to‑use kubectl plugin that launches a debugging container and joins it to the target pod's PID, network, user, and IPC namespaces, allowing you to use familiar tools without modifying the business container.
The project consists of two components:
kubectl‑debug: the command‑line tool
debug‑agent: an agent deployed on each node that starts the debugging container
How It Works
Containers are isolated processes with cgroup limits and namespaces. By starting a process and joining it to the target container's namespaces, the process can "enter" the container, similar to docker exec or kubectl exec.
The debugging approach packages the required tools into a separate container image. This tool container is launched and attached to the target container's namespaces, effectively bringing a full toolbox into the minimal business container.
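Namespace membership is observable under /proc, where each namespace a process belongs to appears as a symlink with a unique inode. Below is a minimal sketch using only plain host processes (no containers required), showing that two processes report the same network namespace when nothing has unshared them:

```shell
# Each /proc/<pid>/ns/* entry is a symlink whose target encodes the
# namespace inode; processes with identical targets share that namespace.
ns_self=$(readlink /proc/self/ns/net)

sleep 3 &                     # spawn a child process to compare against
child=$!
ns_child=$(readlink "/proc/$child/ns/net")

echo "self:  $ns_self"
echo "child: $ns_child"
if [ "$ns_self" = "$ns_child" ]; then
  echo "same network namespace"
fi
kill "$child" 2>/dev/null || true
```

A debugging container attached with --network=container:&lt;id&gt; would show the same net inode as the target container, while an unrelated container would show a different one.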
<code>export TARGET_ID=666666666
# Join the target container's network, pid, and ipc namespaces
docker run -it --network=container:$TARGET_ID \
--pid=container:$TARGET_ID \
--ipc=container:$TARGET_ID busybox</code>
The kubectl‑debug command follows this workflow: query the API server for the target pod, create a Debug Agent pod on the pod's node, have the agent launch a debugging container that joins the target pod's namespaces, and establish a tty connection for interactive debugging.
Steps:
Plugin queries the API server to verify the pod exists and locate its node.
API server returns the node information.
Plugin requests creation of a Debug Agent pod on the target node.
Kubelet creates the Debug Agent pod.
Plugin detects the Debug Agent is ready and opens a long‑lived connection.
Debug Agent receives the request, creates the debugging container, joins the target namespaces, and connects the tty.
After debugging, the Debug Agent cleans up the debugging container and the plugin removes the Debug Agent.
Installation
GitHub repository: https://github.com/aylei/kubectl-debug
Mac:
brew install aylei/tap/kubectl-debug
Or download binaries:
<code>export PLUGIN_VERSION=0.1.1
# Linux x86_64
curl -Lo kubectl-debug.tar.gz https://github.com/aylei/kubectl-debug/releases/download/v${PLUGIN_VERSION}/kubectl-debug_${PLUGIN_VERSION}_linux_amd64.tar.gz
# macOS
curl -Lo kubectl-debug.tar.gz https://github.com/aylei/kubectl-debug/releases/download/v${PLUGIN_VERSION}/kubectl-debug_${PLUGIN_VERSION}_darwin_amd64.tar.gz
tar -zxvf kubectl-debug.tar.gz kubectl-debug
sudo mv kubectl-debug /usr/local/bin/</code>
Windows users can download the Windows binary from the release page and add it to the PATH.
Note: Deploying the debug‑agent as a DaemonSet pre‑installs an agent pod on every node, which consumes resources continuously; for low‑frequency debugging, the agentless mode is more efficient.
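For clusters where the DaemonSet mode is preferred, the agent can be deployed ahead of time. The sketch below illustrates the shape of such a manifest; the repository ships an official manifest, and details here (image tag, hostPort matching the default agentPort 10027, Docker socket mount) are assumptions for illustration, not the canonical file:

```yaml
# Sketch of a debug-agent DaemonSet; verify against the manifest in the
# kubectl-debug repository before applying to a real cluster.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: debug-agent
  namespace: default
spec:
  selector:
    matchLabels:
      app: debug-agent
  template:
    metadata:
      labels:
        app: debug-agent
    spec:
      hostPID: true                    # agent needs host PID visibility
      containers:
      - name: debug-agent
        image: aylei/debug-agent:latest
        ports:
        - containerPort: 10027         # matches agentPort in ~/.kube/debug-config
          hostPort: 10027
        volumeMounts:
        - name: docker-socket          # agent drives the container runtime
          mountPath: /var/run/docker.sock
      volumes:
      - name: docker-socket
        hostPath:
          path: /var/run/docker.sock
```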
Typical Usage
Simple Usage
kubectl 1.12+ automatically discovers plugins:
<code># Show help
kubectl debug -h</code>
If a debug‑agent DaemonSet is installed, you can omit --agentless for faster startup:
<code>kubectl debug POD_NAME --daemonset-ns=default --daemonset-name=debug-agent</code>
Agentless mode creates the agent pod and debugging container on demand and removes them after exit:
<code>kubectl debug POD_NAME --agentless --port-forward</code>
Use --port-forward when the node has no public IP or a firewall blocks direct access.
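If agentless mode with port forwarding is your usual workflow, these flags can be persisted as defaults in the ~/.kube/debug-config file described later in this article. The portForward key name here is an assumption, so verify it against the project README:

```yaml
# Assumed keys; cross-check against the kubectl-debug README before relying on them.
agentless: true
portForward: true
```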
Advanced Usage
Debug an init‑container:
<code>kubectl debug POD_NAME --container=init-pod</code>
When a pod is in CrashLoopBackOff, duplicate it for debugging:
<code>kubectl debug POD_NAME --fork</code>
Custom Image Configuration
<code># Specify a custom debugging tool image (default: nicolaka/netshoot:latest)
--image <custom-image>
# Specify a custom debug‑agent image for agentless mode (default: aylei/debug-agent:latest)
--agent-image <custom-agent-image></code>
Configuration File
Place a ~/.kube/debug-config file to set defaults, e.g.:
<code>agentPort: 10027
agentless: false
namespace: default
agentPodNamespace: default
daemonset: debug-agent
image: nicolaka/netshoot:latest</code>
Typical Cases
Using iftop to view pod network traffic
<code>kubectl debug kube-flannel-ds-amd64-2xwqp -n kube-system
# Inside the debug container
iftop -i eth0</code>
Using drill to diagnose DNS
<code>kubectl debug kube-flannel-ds-amd64-2xwqp -n kube-system
# Inside the debug container
drill any www.baidu.com</code>
Using tcpdump to capture packets
<code># Capture a single packet
kubectl debug kube-flannel-ds-amd64-2xwqp -n kube-system
tcpdump -i eth0 -c 1 -Xvv
# Save to a pcap file
tcpdump -i eth0 -vvv -w /tmp/kube-flannel-ds-amd64-2xwqp.pcap</code>
Copy the pcap file from the node for analysis with Wireshark.
Diagnosing CrashLoopBackOff
Use --fork to create a copy of the problematic pod without its labels, probes, or original command, then debug the new pod:
<code>kubectl-debug srv-es-driver-7445f6cf48-ff7bq -n devops --agentless --port-forward --fork</code>
Inside the debug container, you can chroot /proc/1/root to explore the original filesystem.
Custom sidecar image for containers lacking package managers
<code>kubectl debug srv-es-driver-7445f6cf48-ff7bq -n devops --agentless --port-forward --image centos
# Inside the centos sidecar
yum install -y redis</code>
Reference: https://aleiwu.com/post/kubectl-debug-intro/
Ops Development Stories
Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.