How to Detect and Fix Memory Leaks in Spring Boot Applications

This guide explains the fundamentals of memory leaks in Java, outlines common causes in Spring Boot, and provides step‑by‑step techniques—including GC log analysis, JConsole, VisualVM, MAT, Actuator, custom endpoints, jstack, BTrace, and best‑practice recommendations—to identify, diagnose, and prevent memory leaks for stable long‑running services.

Code Ape Tech Column

Introduction

Memory leaks are common and tricky problems in project development, causing performance degradation, slow response, and even OutOfMemoryError crashes.

Compared with traditional Java applications, Spring Boot apps can suffer from more hidden and complex memory leaks because of their rich component ecosystem and dependency-injection container.

This article introduces several practical methods to investigate memory leaks in applications.

Memory Leak Basics

Before diving into investigation methods, let's briefly review the basic concepts:

Memory leak: memory allocated by the program that can no longer be released; it stays occupied and is never reclaimed by the garbage collector.

In Java, memory leaks usually appear as objects still referenced but no longer needed, preventing GC.

Common causes in Spring Boot:

Static collection references: static Map or List fields that keep accumulating objects and never remove them.

Singleton bean references: Spring singleton beans live as long as the application, so any collection they hold can grow indefinitely.

Unclosed resources: database connections, file streams, network sockets.

Improper cache usage: unbounded caches or misconfigured expiration policies.

Thread pool mismanagement: task queues that grow without bound.

JNI native memory that is never released.

ClassLoader leaks: e.g., a WebappClassLoader not released on hot redeploy.
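As a concrete (and deliberately artificial) sketch of the first pattern, consider a static collection pinned by a GC root; the class name is illustrative, not from the original article:

```java
import java.util.ArrayList;
import java.util.List;

// Demo of the "static collection reference" pattern: objects added to
// a static list remain reachable from a GC root (the class itself),
// so the garbage collector can never reclaim them.
public class StaticLeakDemo {
    // GC root: lives as long as the class is loaded
    private static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        // Each call pins another 1 MB; nothing ever removes it.
        CACHE.add(new byte[1024 * 1024]);
    }

    static int retainedCount() {
        return CACHE.size();
    }
}
```

Every call to handleRequest permanently grows the heap footprint; in a real service the same shape often hides behind a "cache" or "registry" that is only ever written to.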

Memory Leak Investigation Methods

1. JVM startup parameters and GC log analysis

Configure appropriate JVM parameters to record detailed GC logs for memory usage analysis.

Steps:

Add JVM parameters to enable GC logs (Java 8 and earlier):

-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-XX:+PrintGCTimeStamps
-Xloggc:/path/to/gc.log

On Java 9 and later these flags were removed in favor of unified logging, e.g. -Xlog:gc*:file=/path/to/gc.log:time.

Note that JVM flags cannot be set in application.properties; they must be passed on the java command line, or, when launching through the Maven plugin, via:

mvn spring-boot:run -Dspring-boot.run.jvmArguments="-XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/path/to/gc.log"

Use tools like GCViewer to analyze GC logs, focusing on:

Full GC frequency spikes

Insignificant memory recovery after GC

Old generation memory continuously growing

Example GC log snippet analysis:

2023-08-10T14:15:30.245+0800: [GC (Allocation Failure) [PSYoungGen: 786432K->9437K(917504K)] 786432K->9445K(3014656K), 0.0088311 secs]
2023-08-10T14:16:30.377+0800: [GC (Allocation Failure) [PSYoungGen: 795869K->8941K(917504K)] 795877K->23757K(3014656K), 0.0102321 secs]
2023-08-10T14:17:30.502+0800: [GC (Allocation Failure) [PSYoungGen: 795373K->10022K(917504K)] 810189K->54038K(3014656K), 0.0143901 secs]

Here the whole-heap occupancy after each collection keeps climbing (9445K → 23757K → 54038K); if that trend continues across many GCs without leveling off, it points to a possible leak.
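A rising post-GC trend can also be checked mechanically. A minimal sketch that assumes the classic ParallelGC line format shown in the snippet (other collectors and JDK versions format lines differently):

```java
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Extracts the post-GC whole-heap occupancy from log lines in the
// "before->after(total)" format. The last such group on a ParallelGC
// line is the whole-heap figure (earlier groups are per-generation).
public class PostGcTrend {
    private static final Pattern USAGE = Pattern.compile("(\\d+)K->(\\d+)K\\((\\d+)K\\)");

    // Returns the "after" value (in KB) of the last usage group on the
    // line, or -1 if the line contains no usage group.
    static long postGcHeapKb(String gcLogLine) {
        Matcher m = USAGE.matcher(gcLogLine);
        long after = -1;
        while (m.find()) {
            after = Long.parseLong(m.group(2));
        }
        return after;
    }

    // True when every sample is higher than the previous one,
    // i.e. the heap never shrinks back after a collection.
    static boolean strictlyRising(List<Long> samples) {
        for (int i = 1; i < samples.size(); i++) {
            if (samples.get(i) <= samples.get(i - 1)) return false;
        }
        return true;
    }
}
```

Feeding the three sample lines above through postGcHeapKb yields 9445, 23757, 54038 — a strictly rising series, matching the leak suspicion.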

2. Real‑time monitoring with JConsole

JConsole is a built‑in JDK GUI tool for monitoring JVM memory, threads, and class loading.

Steps:

Start Spring Boot with JMX parameters:

-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9010
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false

Run jconsole and connect to the target application.

In the “Memory” tab watch:

Heap usage trend (steady rise suggests leak)

Metaspace usage

GC activity frequency

Inspect “MBeans” for Spring bean information.
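The heap figures JConsole charts come from the platform MXBeans, so the same numbers can be read in-process as well — a minimal sketch (class name is illustrative):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Reads the same heap usage JConsole displays, via java.lang.management.
public class HeapProbe {
    static MemoryUsage heapUsage() {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        // MemoryUsage carries init/used/committed/max, all in bytes
        return memory.getHeapMemoryUsage();
    }

    public static void main(String[] args) {
        MemoryUsage heap = heapUsage();
        System.out.printf("heap used: %d MB of %d MB committed%n",
                heap.getUsed() / (1024 * 1024),
                heap.getCommitted() / (1024 * 1024));
    }
}
```

Sampling this periodically and logging it gives a crude leak detector even on hosts where no JMX port can be opened.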

3. Advanced heap analysis with VisualVM

VisualVM can generate heap dumps and analyze memory usage.

Steps:

Download and launch VisualVM.

Connect to the target application and select it.

Watch memory usage in the “Monitor” tab.

Create a heap dump via the “Heap Dump” button.

In the “Classes” view, sort by instance count to find unusually growing objects.

Examine reference chains of suspicious objects.

Analysis tips:

Compare multiple heap dumps to spot objects with abnormal growth.

Use OQL for advanced queries, e.g. SELECT s FROM java.util.HashMap s WHERE s.size > 1000.

4. Detailed heap analysis with MAT

Eclipse Memory Analyzer (MAT) specializes in analyzing heap dump files.

Steps:

Obtain a heap dump (e.g., jmap -dump:format=b,file=heap.hprof <PID>).

Open the dump in MAT.

Run “Leak Suspects Report” to automatically locate potential leaks.

Use “Dominator Tree” to view objects consuming most memory.

Check GC Roots and reference paths of suspicious objects.

Key points:

Focus on the “Retained Heap” column.

Use “Path to GC Roots” to find why objects are not reclaimed.

Inspect collection classes for excessive elements.

5. Monitoring with Spring Boot Actuator

Actuator provides rich endpoints for monitoring application memory.

Steps:

Add the Actuator dependency.

Enable endpoints in application.properties:

management.endpoints.web.exposure.include=health,metrics,heapdump
management.endpoint.health.show-details=always

Access metrics such as /actuator/metrics/jvm.memory.used, /actuator/metrics/jvm.gc.memory.promoted, and download heap dumps via /actuator/heapdump.

Optionally integrate with Prometheus/Grafana for long‑term monitoring and alerts.

Custom memory endpoint example:

import java.util.HashMap;
import java.util.Map;

import org.springframework.boot.actuate.endpoint.annotation.Endpoint;
import org.springframework.boot.actuate.endpoint.annotation.ReadOperation;
import org.springframework.stereotype.Component;

// Endpoint IDs must be alphanumeric, so an id like "memory-status"
// would be rejected at startup; remember to add "memorystatus" to
// management.endpoints.web.exposure.include as well.
@Component
@Endpoint(id = "memorystatus")
public class MemoryStatusEndpoint {

    @ReadOperation
    public Map<String, Object> memoryStatus() {
        Map<String, Object> status = new HashMap<>();
        Runtime runtime = Runtime.getRuntime();
        long totalMemory = runtime.totalMemory();
        long freeMemory = runtime.freeMemory();
        long maxMemory = runtime.maxMemory();
        long usedMemory = totalMemory - freeMemory;
        status.put("total", bytesToMB(totalMemory));
        status.put("free", bytesToMB(freeMemory));
        status.put("used", bytesToMB(usedMemory));
        status.put("max", bytesToMB(maxMemory));
        status.put("usagePercentage", usedMemory * 100.0 / maxMemory);
        return status;
    }

    private double bytesToMB(long bytes) {
        return bytes / (1024.0 * 1024.0);
    }
}

6. Thread‑stack analysis with jstack

Thread issues can also cause memory leaks, e.g., thread‑pool misuse.

Steps:

Generate a thread dump: jstack <PID> > thread_dump.txt.

Look for many BLOCKED threads (possible deadlock), abnormal thread count (thread leak), or deep stack traces.

Combine with jmap -histo:live <PID>, which prints a live-object class histogram, to correlate abnormal thread counts with heap growth (note it reports per-class object counts, not per-thread memory).
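Thread counts and deadlocks can also be checked from inside the application via ThreadMXBean, the same source jstack reads externally — a small sketch (class name is illustrative):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// A thread leak typically shows up as a monotonically rising
// live-thread count; a deadlock shows up as non-null deadlocked IDs.
public class ThreadProbe {
    static int liveThreads() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        return threads.getThreadCount();
    }

    static long[] deadlockedThreadIds() {
        // findDeadlockedThreads returns null when no deadlock exists
        long[] ids = ManagementFactory.getThreadMXBean().findDeadlockedThreads();
        return ids == null ? new long[0] : ids;
    }
}
```

Exporting liveThreads() as a metric makes a slow thread leak visible long before the process hits its OS thread limit.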

7. Runtime tracing with BTrace

BTrace allows dynamic tracing without restarting the app.

Steps:

Install BTrace.

Write a script to trace suspicious methods, e.g., cache additions.

Attach the script: btrace <PID> MemoryLeakTracer.java.

Analyze output for abnormal object growth.

8. Detecting DB connection and resource leaks

Unclosed DB connections or file handles are common leak sources.

Steps:

Enable HikariCP MBeans: spring.datasource.hikari.register-mbeans=true.

Monitor pool metrics via JMX (active, idle, total connections).

Ensure all resources are used in try‑with‑resources blocks.

Check open file handles with lsof -p <PID> | wc -l.
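The try-with-resources advice can be sketched as follows; the helper class is hypothetical, added here for illustration:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// try-with-resources guarantees close() runs even when an exception
// is thrown mid-read, eliminating the stream/handle leaks above.
public class SafeRead {
    static String firstLine(Path file) throws IOException {
        try (BufferedReader reader = Files.newBufferedReader(file)) {
            return reader.readLine();
        } // reader.close() is invoked here automatically
    }
}
```

The same shape applies to JDBC: Connection, Statement, and ResultSet all implement AutoCloseable and belong in the try header.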

9. Stress testing to expose leaks

Load testing can quickly reveal memory problems.

Steps:

Create scripts with JMeter or Gatling that simulate real business scenarios.

Run loops and monitor memory trends.

Observe GC activity and memory allocation.

Increase load until abnormal memory growth appears.

Collect heap dumps for analysis.

10. Code‑review patterns that cause leaks

Common patterns to look for include static collections, unclosed resources, non-static inner classes that pin their enclosing instance, unbounded caches, and thread pools with unbounded task queues.
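As one illustration (a sketch added here, not from the source material), the unbounded-cache pattern has a minimal in-JDK fix: a LinkedHashMap in access order with an eviction cap behaves as a small LRU cache:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// An unbounded HashMap used as a cache grows forever; capping it with
// LinkedHashMap's eviction hook turns it into a bounded LRU cache.
public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedCache(int maxEntries) {
        super(16, 0.75f, true); // access-order iteration => LRU behavior
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least-recently-used entry once the cap is exceeded,
        // so the cache can never grow without bound.
        return size() > maxEntries;
    }
}
```

This is suitable for small, single-threaded caches; for anything shared or sizable, a dedicated cache library with explicit size and expiration policies is the better choice (see the next section).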

11. Best practices to prevent memory leaks

Key recommendations:

Prefer bounded collections (e.g., ArrayBlockingQueue), and consider WeakHashMap when cached entries should not keep their keys alive.

Always close I/O resources with try-with-resources; for beans that own resources, release them in a @PreDestroy method (or implement AutoCloseable/DisposableBean).

Use professional cache frameworks (Caffeine, Ehcache) with size limits and expiration policies.

Perform memory checks during development: small heap, unit tests for resource release, static analysis tools.

In production, set memory usage alerts, regularly analyze GC logs, and automate periodic heap dumps.
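The bounded-resource recommendations above can be sketched for thread pools as well; the sizes below are illustrative, not prescriptive:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// A bounded queue plus an explicit rejection policy keeps the task
// backlog from growing without limit (the thread-pool leak pattern).
public class BoundedPool {
    static ThreadPoolExecutor newBoundedPool() {
        return new ThreadPoolExecutor(
                2, 4,                       // core and max pool size
                60, TimeUnit.SECONDS,       // idle-thread keep-alive
                new ArrayBlockingQueue<>(100),              // bounded backlog
                new ThreadPoolExecutor.CallerRunsPolicy()); // back-pressure
    }
}
```

CallerRunsPolicy makes the submitting thread execute the task when the queue is full, which throttles producers instead of silently accumulating work — contrast this with Executors.newFixedThreadPool, whose queue is unbounded.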

Conclusion

Memory leaks are a common challenge for Java and especially long‑running Spring Boot applications. In practice, a combination of the methods above is usually required to pinpoint the root cause. A solid monitoring system also helps detect and resolve issues early, ensuring long‑term stability.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: JVM, Performance Monitoring, Memory Leak, Spring Boot, Heap Dump
Written by

Code Ape Tech Column

Former Ant Group P8 engineer, pure technologist, sharing full‑stack Java, job interview and career advice through a column. Site: java-family.cn
