
How to Diagnose and Fix 900% CPU Spikes in MySQL and Java Processes

This guide explains why MySQL or Java services can suddenly consume 900% CPU, outlines step‑by‑step diagnostics using Linux tools, and provides concrete optimization actions such as killing offending queries, adding indexes, tuning caches, and fixing Java thread loops.


Problem Overview

In production environments, the CPU usage of a MySQL or Java process can easily exceed 200% and, in extreme cases, reach 900% (top reports usage per core, so 900% means roughly nine cores fully busy), causing service degradation or crashes.

Scenario 1 – MySQL Process CPU at 900%

Typical causes include missing indexes, heavy concurrent queries, and unnecessary slow‑log collection.

Use top to confirm that mysqld is the culprit.

Run show processlist to identify long‑running or resource‑heavy sessions.

Inspect the execution plan of the offending SQL and check for missing indexes or large scans (see the EXPLAIN sketch below).
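
For example, using the user/user_code query from the case below (a minimal sketch, not the article's exact session; output columns omitted):

-- List running sessions; long Time values on SELECTs usually mark the offenders
SHOW FULL PROCESSLIST;

-- Check the plan: type = ALL means a full table scan,
-- and an empty key column means no index was used
EXPLAIN SELECT id FROM user WHERE user_code = 'xxxxx';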

Resolution steps (a SQL sketch follows the list):

Kill the offending threads and observe CPU drop.

Add missing indexes, rewrite inefficient SQL, or adjust MySQL memory parameters.

Limit connection counts if a sudden surge of sessions is observed.

Avoid enabling slow‑log during high‑load periods, as it can further degrade performance.
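
A minimal SQL sketch of the first and third steps; the thread id and connection cap are placeholders, not values from the article:

-- Kill an offending session using the Id column from SHOW PROCESSLIST
KILL 12345;

-- Cap concurrent sessions during a connection surge
SET GLOBAL max_connections = 500;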

Real‑world case: a query without an index on user_code caused CPU to stay at 900%; adding the index reduced usage to 70‑80%.

-- Spot the long-running sessions
show processlist;
-- The offending query: no index on user_code, so every call scans the table
select id from user where user_code = 'xxxxx';
-- Confirm which indexes exist on the table
show index from user;
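
The article does not show the fix statement itself; a plausible form, with an assumed index name, is:

-- idx_user_code is an illustrative name; the column comes from the query above
ALTER TABLE user ADD INDEX idx_user_code (user_code);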

After disabling the slow‑log and moving frequent reads to a Redis cache, CPU stabilized around 70‑80%.

Do not enable slow‑log when CPU is already high.

Use show processlist to pinpoint problematic queries (commonly missing indexes, lock contention, or full‑table scans).

Introduce a caching layer (e.g., Redis) to reduce MySQL query frequency.
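
A minimal cache-aside sketch of that idea, assuming the Jedis client; the key format, TTL, and the queryUserIdFromMysql helper are illustrative, not from the article:

import redis.clients.jedis.Jedis;

public class UserCache {
    private final Jedis jedis = new Jedis("localhost", 6379);

    public String getUserId(String userCode) {
        String cacheKey = "user:id:" + userCode;
        String cached = jedis.get(cacheKey);
        if (cached != null) {
            return cached; // served from Redis; MySQL is not touched
        }
        // Cache miss: fall through to MySQL, then populate the cache
        String id = queryUserIdFromMysql(userCode);
        if (id != null) {
            jedis.setex(cacheKey, 300, id); // 5-minute TTL keeps hot reads off MySQL
        }
        return id;
    }

    private String queryUserIdFromMysql(String userCode) {
        // Placeholder for the JDBC lookup:
        // SELECT id FROM user WHERE user_code = ?
        return null;
    }
}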

Consider memory tuning as an additional lever.
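
For illustration, these are the kinds of my.cnf knobs involved; the values are placeholders, not recommendations from the article:

[mysqld]
# Main InnoDB cache; often sized to 50-70% of RAM on a dedicated DB host
innodb_buffer_pool_size = 4G
# Cap sessions so a connection surge cannot exhaust CPU and memory
max_connections = 500
# Keep the slow query log off during high-load incidents
slow_query_log = 0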

Scenario 2 – Java Process CPU at 900% (or 700%)

Common reasons are infinite loops, excessive garbage collection, or busy selectors.

Diagnostic steps (a combined command walkthrough follows the list):

Run top to find the high‑CPU Java PID.

Use top -Hp <PID> to list threads and identify the hottest thread.

Convert the thread ID to hexadecimal (e.g., printf "%x\n" 30309) to match the nid in a thread dump.

Generate a thread dump with jstack -l <PID> > jstack_result.txt and grep for the hexadecimal nid.

Locate the corresponding Java method in the stack trace and analyze the code.
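
Put together, the walkthrough looks like this (the PID 30308 is illustrative; 30309 is the thread id from the example above):

top                                        # find the high-CPU java PID, e.g. 30308
top -Hp 30308                              # per-thread view; note the hottest thread, e.g. 30309
printf "%x\n" 30309                        # -> 7665, the hex nid used in the dump
jstack -l 30308 > jstack_result.txt        # capture a full thread dump
grep -A 20 'nid=0x7665' jstack_result.txt  # stack frames of the hot thread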

Typical fixes:

If the thread is stuck in an empty busy loop, add a Thread.sleep, a blocking call, or proper locking.

Reduce object allocation in tight loops or use an object pool to lessen GC pressure.

For Netty selector spin loops, rebuild the selector after a threshold of empty polls.
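
A hand-rolled sketch of that rebuild pattern using plain java.nio (the threshold and migration logic are illustrative; Netty's own implementation differs in detail):

import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class SelectorRebuild {
    // Assumed threshold of consecutive empty polls before rebuilding
    private static final int SPIN_THRESHOLD = 512;

    // Call after each select() that returned 0 immediately; returns the
    // selector to keep using (the old one, or a fresh replacement).
    static Selector rebuildIfSpinning(Selector selector, int emptyPolls) throws IOException {
        if (emptyPolls < SPIN_THRESHOLD) {
            return selector;
        }
        Selector newSelector = Selector.open();
        // Migrate every valid key to the new selector, keeping interest ops
        // and attachments, then discard the (presumed broken) old selector.
        for (SelectionKey key : selector.keys()) {
            if (key.isValid() && key.channel().keyFor(newSelector) == null) {
                int ops = key.interestOps();
                key.cancel();
                key.channel().register(newSelector, ops, key.attachment());
            }
        }
        selector.close();
        return newSelector;
    }
}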

Example code before fix (busy‑loop using poll() on an empty queue):

while (isRunning) {
    // Busy-wait: when the queue is empty this loop spins at full speed,
    // pinning a core on isEmpty() checks without ever yielding.
    if (device.getMinicap().dataQueue.isEmpty()) {
        continue;
    }
    byte[] buffer = device.getMinicap().dataQueue.poll();
    // process buffer
}

Improved version using take() to block until data arrives (assuming dataQueue is a java.util.concurrent.BlockingQueue), eliminating the empty‑loop CPU burn:

while (isRunning) {
    try {
        // take() blocks until an element is available, so the thread
        // sleeps instead of spinning while the queue is empty.
        byte[] buffer = device.getMinicap().dataQueue.take();
        // process buffer
    } catch (InterruptedException e) {
        // Restore the interrupt flag so callers can observe it
        Thread.currentThread().interrupt();
    }
}

After applying the fix, the Java process CPU dropped below 10%.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: Java, Performance, Linux, MySQL, CPU
Written by

Code Ape Tech Column

Former Ant Group P8 engineer, pure technologist, sharing full‑stack Java, job interview and career advice through a column. Site: java-family.cn
