
How to Diagnose and Resolve JVM Out‑Of‑Memory Errors: A Practical Checklist

This article walks through a step‑by‑step troubleshooting process for JVM Out‑Of‑Memory (OOM) incidents, covering symptom identification, monitoring tools like top, jstat, and jmap, root‑cause analysis, and actionable recommendations to prevent future memory leaks.

Su San Talks Tech

When faced with a JVM OOM, you might feel nervous or helpless. This article shares a systematic approach to diagnosing JVM OOM incidents.

Typical OOM symptoms include sustained high CPU usage after the service has been running for a while, and a process that appears hung, producing no log output at all.

First, observe the process with

top -p <pid>

to check its resident memory usage and CPU load.

Next, use

jstat -gcutil <pid> 1000

to examine GC behavior, sampling once per second. If both the young and old generations are nearly full and full GCs keep firing without reclaiming space, massive object allocation is driving the JVM toward OOM.
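When you cannot shell into the host, roughly the same signal jstat provides can be read from inside the JVM through the standard management beans. A minimal sketch (the class name and log format are my own, not from the article):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Prints collection counts and accumulated pause time for each collector.
// A rapidly climbing count on the old-generation collector is the
// in-process equivalent of a near-full O column in jstat -gcutil output.
public class GcStats {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: count=%d, time=%dms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

Logging these counters periodically from a background thread gives you a crude jstat substitute inside the application itself.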

Monitoring data often reveals a sudden spike in memory usage, suggesting a large‑object creation pattern.
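Such spikes usually come from materializing an entire data set in one go. A sketch of the difference between the two shapes (the "rows" here are synthetic stand-ins for whatever the real query returns):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ChunkedProcessing {
    // Spike-prone: builds the full result in memory before doing any work,
    // so peak heap usage scales with the total row count.
    static List<int[]> loadAll(int totalRows) {
        List<int[]> all = new ArrayList<>();
        for (int i = 0; i < totalRows; i++) {
            all.add(new int[256]); // each "row" holds roughly 1 KB
        }
        return all;
    }

    // Flat memory profile: handle a bounded batch at a time; each chunk
    // becomes collectible garbage as soon as the handler returns.
    static void processInChunks(int totalRows, int chunkSize, Consumer<List<int[]>> handler) {
        for (int start = 0; start < totalRows; start += chunkSize) {
            List<int[]> chunk = new ArrayList<>();
            int end = Math.min(start + chunkSize, totalRows);
            for (int i = start; i < end; i++) {
                chunk.add(new int[256]);
            }
            handler.accept(chunk);
        }
    }
}
```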

Further investigation with

jmap -heap <pid>

provides a detailed heap snapshot, confirming whether the allocated heap size is insufficient.
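The same numbers jmap reports are also exposed programmatically, which is handy for wiring into alerting. A minimal sketch using the standard MemoryMXBean (the class name is my own):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapCheck {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        long usedMb = heap.getUsed() / (1024 * 1024);
        long maxMb = heap.getMax() / (1024 * 1024);
        // If used stays near max even right after full GCs, the heap really
        // is too small; otherwise suspect a leak or an allocation burst.
        System.out.printf("heap used %d MB of %d MB max%n", usedMb, maxMb);
    }
}
```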

From the analysis, the following conclusions can be drawn:

Check for code that performs massive one‑time queries producing large objects.

Identify any long‑lived shared objects that are not being released.

For a small, low‑concurrency service on an internal network, a 12 GB heap is usually more than ample, so an undersized heap is rarely the real cause.
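The classic shape of the second problem is an unbounded static cache. A sketch of the leak and one common fix, a size-capped LRU map built on LinkedHashMap (the names are illustrative, not from the article):

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class Caches {
    // Leak: entries are added forever and never evicted, so the old
    // generation slowly fills until the JVM throws OutOfMemoryError.
    static final Map<String, byte[]> LEAKY = new HashMap<>();

    // Fix: a LinkedHashMap in access order with an eviction rule caps the
    // number of live entries; evicted values become collectible garbage.
    static <K, V> Map<K, V> boundedLru(int maxEntries) {
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
    }
}
```

In production code a real cache library with weak references or TTLs is usually a better choice, but the bounded map illustrates the principle: every long-lived shared structure needs an eviction story.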

A key step is to reconstruct exactly what the application was doing at the moment the logs stopped; that is the incident scene.

The root cause often matches expectations: a poorly constructed SQL query causing a full‑table scan and massive object creation.
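The usual fix is to never pull the whole table at once: add a selective, indexed WHERE clause and page the reads. A hypothetical sketch of keyset (seek) pagination; the table and column names are invented for illustration, and a real implementation would use bind parameters rather than string formatting:

```java
// Keyset pagination: instead of OFFSET, remember the last id seen and ask
// for the next batch after it. Each query touches only a small index range,
// so no single batch can allocate a huge result set on the heap.
public class KeysetPager {
    static String nextPageSql(String table, long lastSeenId, int pageSize) {
        return String.format(
                "SELECT * FROM %s WHERE id > %d ORDER BY id LIMIT %d",
                table, lastSeenId, pageSize);
    }
}
```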

In summary, the JVM OOM troubleshooting workflow is:

1. Analyze the incident scene (CPU, memory, logs).

2. Use top -p <pid> to determine whether memory growth is explosive or gradual.

3. Run jstat -gcutil <pid> 1000 to assess GC frequency and detect large‑object generation.

4. Execute jmap -heap <pid> to inspect actual JVM memory usage and verify that the heap size is adequate.

5. Identify the actions performed at the time of the incident, looking for spikes in memory or CPU usage.

6. If spikes are present, pinpoint the exact operations causing them.

7. If growth is gradual instead, take a heap dump and analyze it in MAT with a divide‑and‑conquer approach to find what is consuming memory.

This practical, albeit rough, process has helped resolve memory‑overflow issues in various systems, such as order‑query services and user‑center modules.

Tags: Java, JVM, monitoring, performance, troubleshooting, OutOfMemory
Written by Su San Talks Tech

Su San, former staff at several leading tech companies, is a top creator on Juejin and a premium creator on CSDN, and runs the free coding practice site www.susan.net.cn.
