Why Did My Spring Boot Service Consume 7 GB? Uncovering Native Memory Leaks
This article walks through a real‑world investigation of excessive native memory usage in a Spring Boot application, detailing JVM settings, Linux‑level tracing, custom memory allocators, and the root cause in Spring Boot’s ZipInflaterInputStream, ultimately providing a fix and best‑practice recommendations.
Background
After migrating a project to the MDP framework (based on Spring Boot), the system began reporting high swap usage. Although the JVM was configured with a 4 GB heap, the process's resident memory reached about 7 GB, far more than the heap alone could explain.
Key JVM parameters were:
-XX:MetaspaceSize=256M
-XX:MaxMetaspaceSize=256M
-XX:+AlwaysPreTouch
-XX:ReservedCodeCacheSize=128m
-XX:InitialCodeCacheSize=128m
-Xss512k -Xmx4g -Xms4g -XX:+UseG1GC
-XX:G1HeapRegionSize=4M

[Figure: physical memory usage of the process over time]
Investigation Process
1. Locate the memory region from the Java side
We added -XX:NativeMemoryTracking=detail to the startup parameters, restarted the service, and ran jcmd pid VM.native_memory detail. The output showed how memory was distributed across the JVM's own categories, but the committed total was noticeably lower than the process's physical memory usage, indicating additional native allocations made outside the JVM's accounting (for example, by C code called through JNI).
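The number to compare NMT's committed total against is the OS-visible resident set size. As a small illustration we add here (Linux-only, not part of the original investigation), it can be read straight out of /proc from within the JVM:

```java
import java.nio.file.Files;
import java.nio.file.Paths;

// Linux-only sketch: print this process's resident set size (RSS), the
// OS-level figure to compare against NMT's "committed" total.
public class RssReader {
    public static void main(String[] args) throws Exception {
        for (String line : Files.readAllLines(Paths.get("/proc/self/status"))) {
            if (line.startsWith("VmRSS")) {
                System.out.println(line); // e.g. "VmRSS:   7340032 kB"
            }
        }
    }
}
```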
Running pmap against the process then revealed a large number of roughly 64 MB anonymous regions that did not appear in the jcmd report, pointing to native allocations.
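The same regions can also be spotted from inside the process by reading its memory map. The sketch below is our own illustration (Linux-only): it lists anonymous mappings of about 64 MB, the characteristic reservation size of glibc's per-thread malloc arenas on 64-bit systems.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Linux-only sketch: list anonymous mappings of ~64 MB in this process,
// the characteristic size of glibc per-thread malloc arenas.
public class ArenaSpotter {
    public static void main(String[] args) throws IOException {
        for (String line : Files.readAllLines(Paths.get("/proc/self/maps"))) {
            String[] fields = line.trim().split("\\s+");
            String[] bounds = fields[0].split("-");
            long size = Long.parseLong(bounds[1], 16) - Long.parseLong(bounds[0], 16);
            boolean anonymous = fields.length < 6; // no backing file path
            if (anonymous && size >= (60L << 20) && size <= (66L << 20)) {
                System.out.printf("%s ~%d MB anonymous%n", fields[0], size >> 20);
            }
        }
    }
}
```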
2. System‑level tracing of native memory
We first tried gperftools to profile malloc usage; its graph showed malloc'd memory peaking at 3 GB and then dropping back to roughly 800 MB, raising the question of whether the mysterious 64 MB regions were being allocated through malloc at all, or mapped directly with mmap.
Next we attached strace -f -e "brk,mmap,munmap" -p pid to the running process, but no suspicious allocation calls appeared.
3. Dumping memory with GDB
We attached to the process with gdb -pid pid and, inside GDB, dumped one of the suspicious regions with dump memory mem.bin startAddress endAddress. Inspecting the dump with strings mem.bin revealed large amounts of JAR-related data, indicating that the allocations had happened during application startup.
4. Re‑tracing startup with strace
Running strace from the very start of the application (rather than attaching afterwards) captured numerous mmap calls requesting 64 MB regions, and the returned address ranges matched the anonymous regions previously seen in pmap.
5. Identifying the responsible thread
Using jstack pid and matching the thread IDs logged by strace against the thread dump pinpointed the thread responsible for the 64 MB allocations.
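One practical detail (our own note, with a made-up thread ID): strace logs thread IDs in decimal, while jstack prints each thread's native ID as a hexadecimal nid field, so a quick conversion is needed to match them up.

```java
public class NidConverter {
    public static void main(String[] args) {
        long tid = 27225; // hypothetical decimal thread id from strace
        // jstack prints the same id as a hex "nid" field, e.g. nid=0x6a59
        System.out.println("nid=0x" + Long.toHexString(tid));
    }
}
```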
The culprit turned out to be MCC (Meituan Configuration Center), which used the Reflections library to scan every JAR on the classpath. Spring Boot's ZipInflaterInputStream decompresses nested JAR entries through java.util.zip.Inflater, which allocates native memory, and that memory was not released promptly.
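The underlying pattern is easy to reproduce outside Spring Boot. The following is a minimal sketch of our own (not code from the investigation): it drives java.util.zip.Inflater in a loop without ever calling end(), so each instance's native zlib buffers linger until garbage collection eventually triggers the finalizer (a Cleaner on newer JDKs), and the process's resident memory grows even though the Java heap stays small.

```java
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Minimal leak reproducer: Inflater holds native zlib buffers that are
// only freed by end(); omitting the call leaves release to finalization.
public class InflaterLeakDemo {
    public static void main(String[] args) throws DataFormatException, InterruptedException {
        byte[] compressed = compress(new byte[1 << 20]); // 1 MB of zeros
        byte[] out = new byte[1 << 20];
        for (int i = 0; i < 10_000; i++) {
            Inflater inflater = new Inflater();
            inflater.setInput(compressed);
            while (!inflater.finished()) {
                if (inflater.inflate(out) == 0 && inflater.needsInput()) {
                    break; // defensive: avoid spinning if input is exhausted
                }
            }
            // BUG: inflater.end() is never called, so the native memory
            // stays allocated until GC happens to finalize this instance.
        }
        System.out.println("done; compare the process RSS with the Java heap");
        Thread.sleep(60_000); // keep the process alive for inspection
    }

    static byte[] compress(byte[] data) {
        Deflater deflater = new Deflater(Deflater.BEST_SPEED);
        deflater.setInput(data);
        deflater.finish();
        byte[] buf = new byte[data.length + 64];
        int n = deflater.deflate(buf);
        deflater.end();
        return Arrays.copyOf(buf, n);
    }
}
```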
Why Native Memory Was Not Released
Spring Boot relied on Inflater's finalize method to free the native buffers, meaning the release only happened after garbage collection ran and finalizers executed. Even after GC, the underlying glibc memory allocator (glibc 2.12 in this environment) retained the freed pages in its per-thread arenas, which are reserved in 64 MB chunks on 64-bit systems, so the OS-visible memory footprint remained high.
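The deterministic alternative is to release the native memory when the stream is closed rather than waiting for finalization. The sketch below is our own illustration of that pattern, not the actual Spring Boot patch: a thin InflaterInputStream subclass that calls end() on its Inflater in close().

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.Inflater;
import java.util.zip.InflaterInputStream;

// Release native zlib buffers eagerly on close() rather than relying on
// the Inflater finalizer; this mirrors the idea behind the later fix.
public class EagerInflaterStream extends InflaterInputStream {
    public EagerInflaterStream(InputStream in) {
        super(in, new Inflater(true), 512); // raw deflate, as in zip entries
    }

    @Override
    public void close() throws IOException {
        try {
            super.close();
        } finally {
            inf.end(); // free native memory immediately, no GC needed
        }
    }
}
```

Because the Inflater is supplied explicitly, the base class's close() does not end it on its own, which is why the subclass does so in a finally block.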
Testing with a custom allocator (built from zjbmalloc.c and preloaded via LD_PRELOAD) confirmed that the application consistently requested only about 800 MB of native memory, while the OS reported roughly 1.7 GB; the difference comes from each allocation being rounded up to whole pages and from mappings that are reserved but only lazily backed by physical memory.
Resolution
Configuring MCC to scan only the specific packages it actually needed eliminated the excessive native allocations. Later versions of Spring Boot (from 2.0.5.RELEASE) also release the Inflater's native buffer explicitly when a ZipInflaterInputStream is closed, removing the reliance on finalization.
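For the scanning side, narrowing a Reflections-based scan to known packages looks roughly like the sketch below (assuming the Reflections 0.10.x API; com.example.myapp is a placeholder, and the MCC configuration itself may differ):

```java
import org.reflections.Reflections;
import org.reflections.scanners.Scanners;
import org.reflections.util.ConfigurationBuilder;

public class ScopedScan {
    public static void main(String[] args) {
        // Restrict classpath scanning to a known package so the scanner
        // does not open and inflate every jar on the classpath.
        Reflections reflections = new Reflections(new ConfigurationBuilder()
                .forPackages("com.example.myapp") // placeholder package name
                .setScanners(Scanners.TypesAnnotated));
        System.out.println(reflections.getTypesAnnotatedWith(Deprecated.class));
    }
}
```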
Summary Diagram

[Figure: summary diagram of the investigation]
References
GNU C Library (glibc)
Native Memory Tracking
Spring Boot
gperftools
Btrace
Programmer DD
A tinkering programmer and author of "Spring Cloud Microservices in Action"
