Operations · 10 min read

How to Diagnose and Fix 900% CPU Spikes in MySQL and Java Processes

This guide explains why MySQL or Java processes can consume 700‑900% CPU in production, walks through step‑by‑step diagnosis using Linux tools, and provides concrete remediation techniques such as indexing, caching, thread analysis, and code adjustments to restore normal performance.


Scenario 1 – MySQL CPU spikes to 900%+

High concurrency combined with poorly indexed or heavy SQL statements can drive MySQL CPU usage far beyond normal limits, especially when slow‑query logging is enabled.

Diagnosis steps

Run top to confirm mysqld is the culprit.

Execute show processlist; to locate long‑running sessions.

Identify the offending SQL, examine its execution plan, and check for missing indexes.
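
The diagnosis steps above can be sketched as the following SQL session; `user_code` is the column named in this article, while the table name `orders` and the literal value are illustrative assumptions:

```sql
-- Locate long-running sessions; the Time column is seconds spent so far.
SHOW FULL PROCESSLIST;

-- Inspect the execution plan of the suspect statement
-- (the table name `orders` is an illustrative assumption).
EXPLAIN SELECT * FROM orders WHERE user_code = 'U1001';
-- type = ALL with a large `rows` estimate means a full table scan,
-- i.e. the index on user_code is missing.
```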

Remediation process

Kill the offending threads and observe CPU drop.

Add missing indexes (e.g., on user_code).

Disable slow‑query logging in high‑load periods.

Introduce a cache layer (Redis) to offload read traffic.

Iteratively apply adjustments and re‑measure.
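
The first three remediation steps translate into SQL roughly as follows; the session Id and table name are placeholders, not values from the original incident:

```sql
-- Kill the runaway session using the Id from SHOW PROCESSLIST
-- (4321 is a placeholder).
KILL 4321;

-- Add the missing index on user_code (table name is illustrative).
ALTER TABLE orders ADD INDEX idx_user_code (user_code);

-- Temporarily disable slow-query logging during the load spike.
SET GLOBAL slow_query_log = 'OFF';
```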

Real‑world MySQL case

The author observed CPU at 900% caused by a missing index on user_code. After creating the index and disabling slow‑query logging, CPU fell to 70‑80%; adding Redis caching offloaded read traffic and kept usage stable at that level.
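
The caching step follows the cache-aside pattern: check the cache first, and only on a miss query MySQL and populate the cache. This minimal sketch uses a HashMap in place of a real Redis client so it is self-contained; class and key names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class CacheAside {
    private final Map<String, String> cache = new HashMap<>(); // stand-in for Redis
    private int dbHits = 0; // counts fall-throughs to MySQL

    String get(String key, Function<String, String> loadFromDb) {
        String value = cache.get(key);
        if (value == null) {               // cache miss: query the database once...
            value = loadFromDb.apply(key);
            cache.put(key, value);         // ...then populate the cache (set a TTL in real Redis)
            dbHits++;
        }
        return value;
    }

    public static void main(String[] args) {
        CacheAside c = new CacheAside();
        Function<String, String> db = k -> "row-for-" + k; // pretend MySQL lookup
        c.get("user_code:U1001", db);      // miss: hits the "database"
        c.get("user_code:U1001", db);      // hit: served from cache
        System.out.println(c.dbHits);      // prints 1
    }
}
```

With a real Redis client the map operations become GET/SETEX calls, and hot reads stop reaching MySQL at all.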

Scenario 2 – Java process CPU spikes to 700‑900%

Java processes normally stay below 200% CPU, but high‑concurrency loops, excessive object creation, or selector spin can push usage dramatically.

Diagnosis steps

Use top to find the Java PID.

Run top -Hp <PID> to list threads and locate the one consuming the most CPU.

Convert the thread ID to hex with printf "%x\n" <tid>.

Extract the stack trace with jstack -l <PID> > jstack_result.txt and grep for the hex thread ID.
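
Put together, the thread-level diagnosis looks like this shell session; the PID and thread id 12345 are placeholders:

```shell
PID=12345                        # placeholder: the Java PID found via top

# List the process's threads sorted by CPU; note the hottest TID.
top -Hp "$PID" -b -n 1 | head -15 || true

# jstack prints native thread ids (nid=...) in hex, so convert the TID.
TID=12345
printf '%x\n' "$TID"             # 12345 -> 3039

# Dump all stacks and find the hot thread by its hex nid.
jstack -l "$PID" > jstack_result.txt 2>/dev/null || true
grep -A 20 "nid=0x$(printf '%x' "$TID")" jstack_result.txt || true
```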

Remediation process

If the thread is spinning in an empty loop, add a short Thread.sleep() back‑off or switch to a proper blocking wait.

If massive object creation triggers GC, reduce allocations or use an object pool.

If an NIO selector spins without ready events (the well‑known JDK epoll bug), rebuild the selector, as the Netty source does.
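
The first remediation can be sketched as follows; the worker and fetchTask() are hypothetical stand-ins for a polling loop that finds no work:

```java
// Hypothetical worker: an empty busy loop fixed by sleeping briefly
// whenever there is no work, instead of spinning at 100% CPU.
public class BackoffLoop {
    public static void main(String[] args) throws InterruptedException {
        int processed = 0;
        for (int i = 0; i < 3; i++) {
            Object task = fetchTask();   // null when no work is queued
            if (task == null) {
                Thread.sleep(10);        // yield the CPU instead of spinning
                continue;
            }
            processed++;
        }
        System.out.println(processed);   // prints 0: the queue stayed empty
    }

    static Object fetchTask() { return null; } // stub: queue always empty here
}
```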

Real‑world Java case

A Java service showed 700% CPU; thread analysis pointed to ImageConverter.run() looping on an empty LinkedBlockingQueue. Replacing poll() with take() (blocking until data arrives) eliminated the spin, dropping CPU usage below 10%.

while (isRunning) {
    try {
        // take() blocks until data arrives, so the loop no longer spins
        // on an empty queue the way poll() did.
        byte[] buffer = device.getMinicap().dataQueue.take();
        // process buffer …
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // restore the interrupt flag
        break;                              // and let the worker exit cleanly
    }
}
Tags: Java · Linux · MySQL · Troubleshooting · CPU
Written by Go Development Architecture Practice

Daily sharing of Golang-related technical articles, practical resources, language news, tutorials, real-world projects, and more. Looking forward to growing together. Let's go!
