Why Did My Spring Boot Service Consume 7 GB? Uncovering Native Memory Leaks

This article walks through a real‑world investigation of excessive native memory usage in a Spring Boot application, detailing JVM settings, Linux‑level tracing, custom memory allocators, and the root cause in Spring Boot’s ZipInflaterInputStream, ultimately providing a fix and best‑practice recommendations.

Programmer DD

Background

After migrating a project to the MDP framework (based on Spring Boot), the system began reporting high Swap usage. Although the JVM was configured with a 4 GB heap, the process’s resident memory reached 7 GB, which was abnormal.

Key JVM parameters were:

-XX:MetaspaceSize=256M
-XX:MaxMetaspaceSize=256M
-XX:+AlwaysPreTouch
-XX:ReservedCodeCacheSize=128m
-XX:InitialCodeCacheSize=128m
-Xss512k -Xmx4g -Xms4g -XX:+UseG1GC
-XX:G1HeapRegionSize=4M
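Summing the configured regions shows why 7 GB was surprising. A rough back-of-envelope estimate (the thread count of 500 is an assumed figure for illustration, not from the original investigation):

```java
public class FootprintEstimate {
    // Rough sum of the fixed regions configured above; the thread count is an assumption.
    static long estimateMb(int threads) {
        long heapMb = 4 * 1024;               // -Xmx4g / -Xms4g (AlwaysPreTouch commits it all)
        long metaspaceMb = 256;               // -XX:MaxMetaspaceSize=256M
        long codeCacheMb = 128;               // -XX:ReservedCodeCacheSize=128m
        long stackMb = threads * 512L / 1024; // -Xss512k per thread
        return heapMb + metaspaceMb + codeCacheMb + stackMb;
    }

    public static void main(String[] args) {
        // With an assumed 500 threads: ~4730 MB expected, far below the 7 GB observed.
        System.out.println("expected RSS ~" + estimateMb(500) + " MB");
    }
}
```

Even with generous allowances for GC structures and direct buffers, the configured regions account for well under 5 GB, leaving roughly 2 GB unexplained.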

The physical memory usage over time was visualized in a chart in the original article (not reproduced here).

Investigation Process

1. Locate the memory region from the Java side

We added -XX:NativeMemoryTracking=detail, restarted the service, and ran jcmd <pid> VM.native_memory detail. The output showed the JVM's view of its memory, but the total committed memory was lower than the physical usage, indicating additional native allocations outside the JVM's accounting (e.g., made directly by native code).

Using pmap revealed many 64 MB regions not reported by jcmd, suggesting native allocations.

2. System‑level tracing of native memory

We first tried gperftools to monitor malloc usage; its graph showed a peak of 3 GB followed by a drop to ~800 MB while physical usage stayed high, suggesting that the mysterious memory was not being allocated through malloc at all (gperftools only intercepts malloc, not direct mmap calls).

Next, strace -f -e"brk,mmap,munmap" -p pid was executed, but no suspicious allocation calls appeared.

3. Dumping memory with GDB

We attached GDB with gdb -pid <pid>, then dumped one of the suspicious regions: dump memory mem.bin <startAddress> <endAddress>. Running strings mem.bin on the dump revealed JAR-related data, indicating that the allocations happened while JARs were being read during startup.

4. Re‑tracing startup with strace

Running strace again during application startup captured numerous 64 MB mmap calls, and the returned address ranges matched the regions seen earlier in pmap.

5. Identifying the responsible thread

Using jstack <pid> and matching thread ids from the strace output pinpointed the thread responsible for the allocations.
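One detail that trips people up when matching: strace and top report thread ids as decimal LWP ids, while jstack prints them as hexadecimal nid values, so a base conversion is needed. A minimal sketch (the LWP id 28279 is an invented example):

```java
public class NidMatch {
    // strace/top show decimal LWP ids; jstack shows the same ids as hex "nid=0x..." values.
    static String toNid(long lwpId) {
        return "0x" + Long.toHexString(lwpId);
    }

    public static void main(String[] args) {
        // An LWP id of 28279 from strace corresponds to nid=0x6e77 in jstack output.
        System.out.println(toNid(28279));
    }
}
```

The same conversion is often done on the shell side with printf "%x" <tid>.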

The culprit turned out to be MCC (Meituan Configuration Center), which used the Reflections library to scan all JARs on the classpath. Spring Boot's ZipInflaterInputStream decompresses nested JARs through java.util.zip.Inflater, which allocates native memory, and that memory was not released promptly.

Why Native Memory Was Not Released

Spring Boot relied on Inflater's finalize() method to free the native buffers, meaning release depended on garbage collection actually running. Even after GC, the underlying glibc allocator (glibc 2.12 in this environment) retained the freed pages in its per-thread arenas (allocated in 64 MB chunks), so the OS-visible memory footprint remained high.
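The lifecycle issue can be illustrated with java.util.zip.Inflater directly: the native zlib buffer is allocated when the object is created and, unless end() is called explicitly, it survives until the finalizer eventually runs. A minimal sketch (not Spring Boot's actual code):

```java
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class InflaterLifecycle {
    static byte[] roundTrip(byte[] input) throws Exception {
        // Compress the input first so there is something to inflate.
        Deflater def = new Deflater();
        def.setInput(input);
        def.finish();
        byte[] buf = new byte[input.length * 2 + 64];
        int clen = def.deflate(buf);
        def.end();                      // frees the deflater's native buffer immediately

        Inflater inf = new Inflater();  // allocates native (off-heap) memory via zlib
        inf.setInput(buf, 0, clen);
        byte[] out = new byte[input.length];
        inf.inflate(out);
        // Without this call the native buffer lingers until GC runs the finalizer --
        // exactly the behavior that kept memory resident in this investigation.
        inf.end();
        return out;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(new String(roundTrip("hello native memory".getBytes())));
    }
}
```

Each leaked Inflater is small on the Java heap, so it exerts little GC pressure, while its native buffer is invisible to the collector; that asymmetry is what lets the native side grow.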

Testing with a custom allocator (built from zjbmalloc.c and preloaded via LD_PRELOAD) confirmed that the process consistently allocated ~800 MB of native memory, while the OS reported ~1.7 GB due to page rounding and lazy allocation.
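The gap between the ~800 MB actually requested and the ~1.7 GB the OS reported is consistent with page-granularity rounding: an mmap-based allocator hands out whole 4 KiB pages, so small requests inflate the OS-visible total. A toy illustration (the request size and count are invented, chosen only to show the effect):

```java
public class PageRounding {
    static final long PAGE = 4096;

    // Round a request size up to a whole number of 4 KiB pages,
    // as an mmap-per-allocation allocator effectively does.
    static long roundToPages(long bytes) {
        return (bytes + PAGE - 1) / PAGE * PAGE;
    }

    public static void main(String[] args) {
        long requested = 0, committed = 0;
        for (int i = 0; i < 400_000; i++) {
            long size = 2048;                 // invented uniform request size
            requested += size;
            committed += roundToPages(size);  // each request consumes a full page
        }
        // requested ~781 MB, committed ~1562 MB: roughly the 2x gap observed
        System.out.println(requested / (1 << 20) + " MB requested, "
                + committed / (1 << 20) + " MB committed");
    }
}
```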

Resolution

Configuring MCC to scan only specific packages eliminated the excessive native allocations. Later versions of Spring Boot (2.0.5.RELEASE) added an explicit release of the native buffer in ZipInflaterInputStream, removing the reliance on finalization.
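The same discipline can be applied in application code. Note that InflaterInputStream.close() only calls end() on an Inflater it created itself; when you pass in your own Inflater (as Spring Boot's ZipInflaterInputStream does), you must release it explicitly. A sketch of deterministic release (not Spring Boot's actual patch):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.Inflater;
import java.util.zip.InflaterInputStream;

public class ExplicitRelease {
    static byte[] decompress(byte[] compressed, int rawSize) throws Exception {
        Inflater inf = new Inflater();  // native zlib buffer allocated here
        try (InflaterInputStream in = new InflaterInputStream(
                new ByteArrayInputStream(compressed), inf)) {
            byte[] out = new byte[rawSize];
            int off = 0, n;
            while (off < rawSize && (n = in.read(out, off, rawSize - off)) > 0) {
                off += n;
            }
            return out;
        } finally {
            // close() does NOT end a caller-supplied Inflater; release it ourselves
            // instead of waiting for finalization.
            inf.end();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] raw = "jar entry bytes".getBytes();
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (DeflaterOutputStream dos = new DeflaterOutputStream(bos)) {
            dos.write(raw);  // this stream owns its Deflater and ends it on close
        }
        System.out.println(new String(decompress(bos.toByteArray(), raw.length)));
    }
}
```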

Summary Diagram

(The summary flowchart from the original article is not reproduced here.)

References

GNU C Library (glibc)

Native Memory Tracking

Spring Boot

gperftools

BTrace

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: JVM, Native Memory, Linux tracing, spring-boot, gperftools, memory-leak
Written by

Programmer DD

A tinkering programmer and author of "Spring Cloud Microservices in Action"
