How to Quickly Diagnose and Fix CPU & JVM Memory Hotspots in Java Apps

This article explains common CPU and JVM memory bottlenecks in Java applications, shows how to detect abnormal usage with monitoring and alerts, and provides step‑by‑step methods—including top, jstack, flame‑graph analysis, and APM tools—to pinpoint and resolve performance hotspots efficiently.

CPU Performance Optimization

CPU (Central Processing Unit) is the core execution unit of a computer system. When CPU usage stays high, the system behaves like an overloaded brain, leading to reduced efficiency or crashes. Production systems should keep total CPU usage below 70%.

High user‑mode CPU usage often indicates problems in the application code. The typical manual investigation flow is:

1. top – find the process ID (PID) that consumes the most CPU.
2. top -Hp <pid> – locate the thread ID (TID) with the highest CPU consumption inside that process.
3. printf "%x\n" <tid> – convert the thread ID to hexadecimal (jstack reports native thread IDs in hex as the nid field).
4. jstack <pid> | grep <hex_tid> -A 10 – retrieve the stack trace of the hot thread.

This manual approach is tedious: the hot thread must still be running when you attach, it often takes several jstack runs to catch it, and no historical snapshots are recorded for after‑the‑fact analysis.
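The same idea can be sketched in‑process with the JDK's built‑in ThreadMXBean, which reports per‑thread CPU time: spawn a deliberately busy thread, then scan all threads for the largest accumulated CPU time, mirroring the top -Hp step. A minimal sketch, assuming CPU time measurement is supported on the platform (the class name and the "hot-worker" workload are hypothetical):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class HotThreadFinder {

    // Spawns a deliberately CPU-hungry thread, then scans all threads for the
    // one with the highest accumulated CPU time -- the in-process analogue of
    // `top -Hp <pid>` followed by `jstack <pid>`.
    public static String findHottest() throws InterruptedException {
        Thread hot = new Thread(() -> {
            long x = 0;
            while (!Thread.currentThread().isInterrupted()) {
                x = x * 31 + 1; // busy spin to burn CPU
            }
        }, "hot-worker");
        hot.setDaemon(true);
        hot.start();
        Thread.sleep(1000); // let the spinner accumulate CPU time

        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long maxCpu = -1;
        String hottest = null;
        for (long id : mx.getAllThreadIds()) {
            long cpu = mx.getThreadCpuTime(id); // nanoseconds; -1 if unsupported
            ThreadInfo info = mx.getThreadInfo(id);
            if (cpu > maxCpu && info != null) {
                maxCpu = cpu;
                hottest = info.getThreadName();
            }
        }
        hot.interrupt();
        return hottest;
    }

    public static void main(String[] args) throws InterruptedException {
        // Typically reports "hot-worker" as the busiest thread
        System.out.println("hottest thread: " + findHottest());
    }
}
```

Unlike jstack, this stays inside the application, but it shares the same limitation: it only sees the moment you sample, which is exactly the gap continuous profiling closes.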

Modern APM products (e.g., Alibaba Cloud ARMS) provide continuous thread‑level CPU profiling, allowing you to view flame graphs for any time period and instantly identify hot methods such as CPUPressure.runBusiness() with 99.7% CPU consumption. They also support differential flame graphs and AI‑driven diagnostics to compare CPU usage across releases.

JVM Memory Performance Optimization

Memory is a critical system component that determines program speed and multitasking capability. In the JVM, memory is divided into heap, stack, and method area. The heap is the most common source of memory hotspots and can trigger OutOfMemoryError.

Typical JVM memory hotspot causes include:

Frequent object creation – many short‑lived objects cause excessive GC cycles.

Large object allocation – big objects are promoted to the old generation, leading to Full GC pauses.

Memory leaks – static collections, unclosed resources, or off‑heap allocations that accumulate over time.

Improper heap size settings – an Xms/Xmx that is too low causes frequent GC; one that is too high wastes physical memory.

Too many or overly large classes – class metadata overflows Metaspace, resulting in OutOfMemoryError.
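The memory‑leak case above is worth making concrete, since it is the cause that monitoring alone catches last. A minimal sketch of the static‑collection pattern (class and method names are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class LeakExample {
    // Classic leak pattern: a static, ever-growing collection. Every entry is
    // reachable from the class itself, so the GC can never reclaim it.
    static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        // Hypothetical per-request buffer that is cached but never evicted
        CACHE.add(new byte[1024]);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10_000; i++) {
            handleRequest();
        }
        // ~10 MB is now permanently retained and will eventually be promoted
        // to the old generation, showing up as a slowly rising heap baseline.
        System.out.println("retained buffers: " + CACHE.size());
    }
}
```

In an allocation flame graph, handleRequest() would dominate; in a heap snapshot, the retained byte[] entries would all trace back to the static CACHE reference.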

To investigate JVM memory hotspots, follow a process similar to CPU analysis:

Monitor JVM metrics and alerts to detect abnormal memory or GC behavior.

Use continuous memory profiling to generate flame graphs of object allocation (e.g., AllocMemoryAction.runBusiness() consuming 99.92% of allocations).

Take memory snapshots; ARMS offers one‑click snapshot creation and analysis, combined with ATP tools for deep object‑reference inspection.
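Before reaching for an APM product, the first step of this process — watching heap usage and GC behavior — can be prototyped with the JDK's standard management beans. A minimal sketch (the class name is hypothetical; getMax() may return -1 when no limit is defined):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class JvmMemoryProbe {

    // Builds a one-line-per-metric report of heap usage and per-collector
    // GC counts/times -- the raw numbers an APM dashboard charts over time.
    public static String report() {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        StringBuilder sb = new StringBuilder();
        sb.append(String.format("heap used=%dMB max=%dMB%n",
                heap.getUsed() >> 20, heap.getMax() >> 20));
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            sb.append(String.format("gc %s: count=%d timeMs=%d%n",
                    gc.getName(), gc.getCollectionCount(),
                    gc.getCollectionTime()));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(report());
    }
}
```

Sampling this report periodically and alerting when GC time per interval spikes is the hand‑rolled version of the monitoring‑and‑alerts step; the APM tooling adds the allocation flame graphs and snapshot analysis on top.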

Summary

The article introduces common CPU and JVM memory hotspot causes in Java applications, explains how to detect resource anomalies via monitoring and alerts, and demonstrates method‑level CPU/memory flame‑graph analysis with APM tools to quickly locate and resolve performance issues, ensuring stable operation under high load.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: Java, APM, CPU optimization, JVM Memory
Written by Alibaba Cloud Observability