How to Pinpoint CPU‑Hogging Processes in Under 2 Minutes
When a production server suddenly hits 100% CPU, this guide shows a systematic two‑minute workflow—using top, thread inspection, hexadecimal conversion, and jstack—to quickly identify the offending process or Java thread and restore service stability.
CPU Full‑Utilization Troubleshooting
In production, a sudden CPU spike can degrade service. The following method identifies the responsible Java process and thread within about two minutes.
Step‑by‑step workflow
1. Identify the CPU‑hungry process (≈30 s). Run top, press P (uppercase) to sort by CPU usage, and note the PID of the process with the highest %CPU.
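If an interactive top session is inconvenient (for example, when capturing output for a ticket), the same ranking can be taken non‑interactively. A minimal sketch, assuming the procps ps on a Linux host:

```shell
# Header line plus the five processes consuming the most CPU, highest first
ps -eo pid,pcpu,comm --sort=-pcpu | head -n 6
```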
2. Find the offending thread inside that process (≈30 s). Run top -Hp <PID> (-H shows threads, -p limits output to that process) and record the TID of the top‑ranked thread, which is displayed in decimal.
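The per‑thread view can also be captured non‑interactively; a sketch assuming the procps ps, where the LWP column is the decimal TID:

```shell
PID=$$   # stand-in PID for illustration; use the PID found in step 1
# -L lists threads: LWP is the decimal thread ID, %CPU is per-thread usage
ps -L -o lwp,pcpu,comm -p "$PID"
```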
3. Convert the thread ID to hexadecimal (≈10 s). Java stack traces identify threads by hexadecimal IDs (the nid field), so convert the decimal TID with printf "%x\n" <TID>. The output is the hex ID used in the next step.
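For example, converting a hypothetical decimal TID of 12345:

```shell
# 12345 in decimal prints as 3039 in hex; substitute the real TID from step 2
printf '%x\n' 12345
# → 3039
```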
4. Locate the code in the Java stack trace (≈50 s). Run jstack <PID> | grep <hexTID> -A 30 and examine the surrounding lines:
- If the thread state is RUNNABLE, the displayed code line is actively executing (e.g., an infinite loop or heavy computation).
- If the thread name is VM Thread or GC task thread, frequent Full GC is likely driving the CPU surge.
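For orientation, the thread header line in a HotSpot jstack dump looks roughly like this (the values shown are illustrative, not from a real run); the nid field is the hex ID from step 3:

```
"worker-pool-3" #27 prio=5 os_prio=0 tid=0x00007f... nid=0x3039 runnable [0x00007f...]
   java.lang.Thread.State: RUNNABLE
```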
Following this disciplined approach lets you pinpoint the exact process or Java thread responsible for a CPU overload quickly, enabling rapid remediation.
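Once the Java PID is known, steps 2 through 4 can be scripted end to end. A minimal sketch, assuming a Linux host with procps ps and jstack on the PATH (to_hex and busiest_thread_stack are hypothetical helper names, not part of any standard tool):

```shell
#!/bin/sh

# Decimal thread ID -> hex, matching the nid field in jstack output
to_hex() {
  printf '%x' "$1"
}

# Print ~30 lines of stack trace for the busiest thread of a Java process
busiest_thread_stack() {
  pid="$1"
  # -L lists threads; sort by per-thread %CPU and take the hottest LWP (TID)
  tid=$(ps -L -o lwp,pcpu --no-headers -p "$pid" | sort -k2 -nr | head -n 1 | awk '{print $1}')
  jstack "$pid" | grep -A 30 "nid=0x$(to_hex "$tid")"
}

# Usage: busiest_thread_stack <java-pid>
```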
Mike Chen's Internet Architecture
Over ten years of BAT architecture experience, shared generously!
