Backend Development 15 min read

JVM Performance Tuning: Diagnosing High CPU Usage, Deadlocks, and Memory Leaks

This article explains practical JVM tuning scenarios—including how to identify and resolve excessive CPU consumption, thread deadlocks, and memory leaks—by using Linux tools such as top, jstack, jps, jstat, and jmap, and by analyzing heap dumps with Eclipse MAT.

Top Architect

Many developers learn JVM tuning theory but are unsure how to apply it in practice. This article walks through three common JVM performance problems: high CPU usage, deadlocks, and memory leaks, and shows step by step how to locate and fix each one using standard tools.

High CPU Usage

If CPU spikes only during a traffic surge (e.g., a promotion), the increase is normal; otherwise, persistent high CPU often indicates a tight loop or runaway thread. The troubleshooting steps are:

(1) Use top to view CPU usage.

The process ID shown by top matches the VM ID reported by jps. Next, identify the offending thread:

(2) Use top -Hp <pid> to list threads.

Suppose thread ID 7287 is constantly consuming CPU. Convert the decimal thread ID to hexadecimal:

[root@localhost ~]# printf "%x" 7287
1c77

Then use jstack to dump the stack of the process and grep for the hex ID:

[root@localhost ~]# jstack 7268 | grep 1c77 -A 10
"http-nio-8080-exec-2" #16 daemon prio=5 os_prio=0 tid=0x00007fb66ce81000 nid=0x1c77 runnable [0x00007fb639ab9000]
   java.lang.Thread.State: RUNNABLE
   at com.spareyaya.jvm.service.EndlessLoopService.service(EndlessLoopService.java:19)
   ...

The stack shows the thread stuck in EndlessLoopService.service at line 19, confirming a runaway loop.
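The offending class is not shown in the original, but a runaway loop typically spins on a condition that no other code ever changes, keeping the thread permanently RUNNABLE. A hypothetical sketch (the class name, the done flag, and the iteration cap are all assumptions; the cap exists only so the demo terminates):

```java
public class EndlessLoopDemo {
    // In the real bug, nothing ever sets this to true
    private static boolean done = false;

    static long spin(long maxIterations) {
        long i = 0;
        // The real bug has no iteration cap; it is added here only so the
        // demo finishes. No sleep/wait means the thread pins one CPU core.
        while (!done && i < maxIterations) {
            i++;
        }
        return i;
    }

    public static void main(String[] args) {
        System.out.println(spin(1_000_000)); // prints 1000000
    }
}
```

Loops like this are exactly what shows up as a RUNNABLE thread at a fixed line number in consecutive jstack dumps.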

Deadlock

Deadlocked threads block while waiting to enter a monitor held by another thread (jstack reports them as BLOCKED, "waiting for monitor entry") and consume no CPU, but the application hangs. Use jps -l to find the Java process ID, then jstack <pid> to dump thread information; any detected deadlock is reported at the end of the output.

[root@localhost ~]# jps -l
8737 sun.tools.jps.Jps
8682 jvm-0.0.1-SNAPSHOT.jar

[root@localhost ~]# jstack 8682
... (output omitted) ...
"Thread-4":
   at com.spareyaya.jvm.service.DeadLockService.service2(DeadLockService.java:35)
   - waiting to lock <0x00000000f5035ae0> (a java.lang.Object)
   - locked <0x00000000f5035af0> (a java.lang.Object)
"Thread-3":
   at com.spareyaya.jvm.service.DeadLockService.service1(DeadLockService.java:27)
   - waiting to lock <0x00000000f5035af0> (a java.lang.Object)
   - locked <0x00000000f5035ae0> (a java.lang.Object)

Found 1 deadlock.

The dump clearly shows the two threads waiting on each other's locks, confirming a classic deadlock.
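The source of DeadLockService is not given, but the lock pattern in the dump can be reproduced with a minimal sketch (class and thread names are assumptions; the latch forces both threads to hold their first lock before either tries the second, making the deadlock deterministic):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import java.util.concurrent.CountDownLatch;

public class DeadLockDemo {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch bothHeld = new CountDownLatch(2);

        Thread t1 = new Thread(() -> {
            synchronized (LOCK_A) {          // locks A first...
                bothHeld.countDown();
                awaitQuietly(bothHeld);
                synchronized (LOCK_B) { /* never reached */ }
            }
        }, "service1");
        Thread t2 = new Thread(() -> {
            synchronized (LOCK_B) {          // ...while the other locks B first
                bothHeld.countDown();
                awaitQuietly(bothHeld);
                synchronized (LOCK_A) { /* never reached */ }
            }
        }, "service2");

        // Daemon threads let the JVM exit even though they stay deadlocked
        t1.setDaemon(true);
        t2.setDaemon(true);
        t1.start();
        t2.start();
        Thread.sleep(500); // give both threads time to block on the second lock

        // Programmatic equivalent of jstack's deadlock report
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long[] ids = mx.findDeadlockedThreads();
        System.out.println("Deadlocked threads: " + (ids == null ? 0 : ids.length));
    }

    private static void awaitQuietly(CountDownLatch latch) {
        try {
            latch.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

The standard fix is to acquire the locks in the same order on both code paths, which makes the circular wait impossible.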

Memory Leak

Even though Java has automatic garbage collection, applications can still leak memory when unused objects remain reachable. The following simple program repeatedly creates thread pools without shutting them down, eventually exhausting the heap.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Main {
    public static void main(String[] args) {
        Main main = new Main();
        while (true) {
            try {
                Thread.sleep(1);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            main.run();
        }
    }

    private void run() {
        // Bug: a new pool is created on every call and never shut down, so
        // its worker threads (and everything they reference) stay reachable
        ExecutorService executorService = Executors.newCachedThreadPool();
        for (int i = 0; i < 10; i++) {
            executorService.execute(() -> { /* do something... */ });
        }
    }
}

Running with -Xms20m -Xmx20m -XX:+PrintGC produces frequent GC logs and eventually an OutOfMemoryError:

... 
[GC (Allocation Failure) 12776K->10840K(18432K), 0.0309510 secs]
... 
java.lang.OutOfMemoryError: Java heap space

Using Eclipse MAT on a heap dump reveals thousands of Thread and ThreadPoolExecutor objects that were never shut down. The fix is to reuse a singleton thread pool or call shutdown() after tasks complete.
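A sketch of the corrected pattern, combining both fixes the source mentions (the class name, pool size, and iteration counts are illustrative, not from the original):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FixedMain {
    // Fix 1: create the pool once and reuse it, instead of allocating a
    // new ExecutorService on every call
    private static final ExecutorService POOL = Executors.newFixedThreadPool(10);

    private static void run() {
        for (int i = 0; i < 10; i++) {
            POOL.execute(() -> { /* do something... */ });
        }
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 100; i++) {
            run();
        }
        // Fix 2: shut the pool down once no more tasks will be submitted,
        // so its threads (and everything they reference) become collectable
        POOL.shutdown();
        boolean finished = POOL.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("terminated: " + finished);
    }
}
```

Either fix alone stops the leak; reuse avoids unbounded thread creation, and shutdown() releases the worker threads when a short-lived pool is genuinely needed.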

When a full heap dump is too large to capture or analyze conveniently, the following alternative diagnostics can narrow down the problem:

(1) Use jps to locate the process ID.

C:\Users\spareyaya\IdeaProjects\maven-project\target\classes\org\example\net>jps -l
24836 org.example.net.Main
...

(2) Use jstat -gcutil to monitor GC activity. In the output, S0/S1 are survivor-space utilization, E is eden, O is the old generation, M is metaspace, and YGC/FGC count young and full collections. A steadily climbing O column combined with a rising FGC count that fails to free space is a strong sign of a leak.

C:\...>jstat -gcutil -t -h8 24836 1000
Timestamp   S0   S1   E   O   M   CCS   YGC  YGCT  FGC  FGCT  GCT
...

(3) Use jmap -dump:live,format=b,file=heap.bin <pid> to capture a live heap snapshot without waiting for OOM.

jmap -dump:live,format=b,file=heap.bin 24836

Summary

The three cases illustrate how to use JVM tools to locate performance problems; they are not full JVM tuning recipes but essential steps before adjusting any JVM flags. Proper analysis, incremental tuning, and understanding default parameters are crucial for improving throughput and reducing pause times.

Tags: Java, JVM, deadlock, performance tuning, memory leak, CPU, jstack
Written by

Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.
