Root Cause Analysis of Excessive Native Memory Usage in a Spring Boot Application after Migrating to MDP Framework
After a project was migrated to the MDP framework (built on Spring Boot), the system repeatedly reported high swap usage. The author investigated the JVM settings with tools such as jcmd, pmap, gperftools, strace, and GDB, traced the excessive native memory to Spring Boot’s Reflections-based scanning and its use of Inflater, and resolved the issue by restricting the scan paths and fixing the Inflater release logic.
To better manage a project, the team migrated a module to the MDP framework (based on Spring Boot) and soon encountered frequent swap‑area usage exceptions.
The author was called to investigate and discovered that although the JVM was configured with a 4 GB heap, the actual physical memory consumption reached 7 GB.
JVM parameters used:
```
-XX:MetaspaceSize=256M -XX:MaxMetaspaceSize=256M -XX:+AlwaysPreTouch
-XX:ReservedCodeCacheSize=128m -XX:InitialCodeCacheSize=128m
-Xss512k -Xmx4g -Xms4g
-XX:+UseG1GC -XX:G1HeapRegionSize=4M
```

Output from the top command confirmed the high physical memory usage.
Investigation Process
1. Locate memory regions from the Java side
The author added `-XX:NativeMemoryTracking=detail` to the JVM options, restarted the service, and ran `jcmd <pid> VM.native_memory detail` to view the memory distribution. The committed total it reported was smaller than the process’s physical memory, because it did not include native allocations made via `Unsafe.allocateMemory` or `DirectByteBuffer`.
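As a minimal illustration of allocations that live outside the `-Xmx` heap, the hypothetical sketch below (the class name `DirectAllocDemo` is mine, not from the investigation) allocates direct buffers, the kind of native memory the author notes is missing from the jcmd totals:

```java
import java.nio.ByteBuffer;

public class DirectAllocDemo {
    // Allocate `count` direct buffers of `bytes` each and return the total
    // capacity. The buffer storage is carved out of native memory; only the
    // small ByteBuffer wrapper objects live on the Java heap.
    static long allocateDirect(int count, int bytes) {
        ByteBuffer[] keep = new ByteBuffer[count]; // hold references so the buffers stay alive
        long total = 0;
        for (int i = 0; i < count; i++) {
            keep[i] = ByteBuffer.allocateDirect(bytes);
            total += keep[i].capacity();
        }
        return total;
    }

    public static void main(String[] args) {
        // 8 buffers of 1 MiB each: 8 MiB of off-heap memory
        System.out.println(allocateDirect(8, 1 << 20));
    }
}
```

Memory allocated this way grows the process RSS without showing up in heap-centric accounting, which is exactly the gap the author was chasing.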
Running `pmap` then revealed many 64 MB address ranges that `jcmd` had not reported, suggesting allocations made from native code.
2. Use system‑level tools to locate off‑heap memory
Since Java‑level tools could not trace the issue, the author turned to system tools.
gperftools (https://github.com/gperftools/gperftools) was used to profile allocations: its monitor showed memory allocated via malloc spiking to 3 GB and then stabilising at around 700–800 MB.
Because gperftools did not capture the suspicious allocations, `strace -f -e "brk,mmap,munmap" -p <pid>` was run instead. The trace showed many 64 MB mmap requests during application start-up.
Next, GDB was used to dump the suspect memory region: attach with `gdb -p <pid>`, then run `dump memory mem.bin <startAddress> <endAddress>`. The dumped binary contained JAR information, indicating that the memory was allocated while unpacking JAR files.
Further `strace` runs during start-up confirmed the large mmap allocations, and `jstack <pid>` identified the thread IDs responsible.
The analysis pointed to Meituan’s Unified Configuration Center (MCC), which uses Reflections to scan all JAR packages. During scanning, Spring Boot loads nested JARs via `Inflater`, which allocates off-heap memory; the wrapper class `ZipInflaterInputStream` never releases its `Inflater` instance explicitly, relying on the `finalize` method for cleanup.
Because GC-triggered finalization was the only release mechanism, the freed native memory stayed in the process’s memory pool even after GC, giving the impression of a leak.
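The leak pattern and its remedy can be sketched with plain `Inflater`/`Deflater` usage. This is an illustrative example (the `roundTrip` helper is mine, not Spring Boot code) showing the explicit `end()` call that the wrapper class omits:

```java
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class InflaterRelease {
    // Compress and decompress `input`, releasing the native zlib state eagerly.
    static byte[] roundTrip(byte[] input) {
        // Compress with Deflater (also backed by native zlib memory).
        Deflater def = new Deflater();
        byte[] compressed = new byte[input.length * 2 + 64];
        def.setInput(input);
        def.finish();
        int clen = def.deflate(compressed);
        def.end(); // release native state immediately, not at finalize()

        // Decompress with Inflater; the end() in finally is the crucial part.
        Inflater inf = new Inflater();
        try {
            inf.setInput(compressed, 0, clen);
            byte[] out = new byte[input.length];
            int n = inf.inflate(out);
            return java.util.Arrays.copyOf(out, n);
        } catch (DataFormatException e) {
            throw new IllegalStateException(e);
        } finally {
            inf.end(); // without this, the off-heap buffers live until GC finalization
        }
    }
}
```

Relying on `finalize` instead of `end()` ties native memory lifetime to GC timing, which is why the process footprint kept growing even though the Java heap was healthy.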
3. Why the off‑heap memory was not released
Inspection of the C implementation behind `Inflater` showed that it allocates its buffers with malloc and frees them when `end()` is called (here, from `finalize`). However, the underlying glibc allocator, like the tcmalloc used by gperftools, keeps freed blocks in per-thread arenas (up to 64 MB each on 64-bit systems), so the memory is not returned to the OS.
To verify the arena hypothesis, a custom allocator without arenas was built:
```c
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

/* A minimal drop-in allocator (for LD_PRELOAD) that backs every allocation
 * with its own mmap, bypassing glibc's per-thread arenas. The requested size
 * is stashed in a long just before the payload so free/realloc can find it. */
void* malloc(size_t size) {
    long* ptr = mmap(0, size + sizeof(long), PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (ptr == MAP_FAILED) return NULL;
    *ptr = size;                      /* remember the size for realloc/free */
    return (void*)(&ptr[1]);
}

void* calloc(size_t n, size_t size) {
    void* ptr = malloc(n * size);
    if (ptr == NULL) return NULL;
    memset(ptr, 0, n * size);         /* fresh anonymous pages are already zeroed,
                                         but keep the calloc contract explicit */
    return ptr;
}

void* realloc(void* ptr, size_t size) {
    if (size == 0) { free(ptr); return NULL; }
    if (ptr == NULL) return malloc(size);
    long* plen = (long*)ptr;
    plen--;
    long len = *plen;
    if ((long)size <= len) return ptr; /* shrink: reuse the existing mapping */
    void* rptr = malloc(size);
    if (rptr == NULL) return NULL;     /* on failure, leave the original block valid */
    memcpy(rptr, ptr, len);
    free(ptr);
    return rptr;
}

void free(void* ptr) {
    if (ptr == NULL) return;
    long* plen = (long*)ptr;
    plen--;
    long len = *plen;
    munmap((void*)plen, len + sizeof(long)); /* return the pages to the OS directly */
}
```

Tests with this allocator showed that although the custom malloc requested only about 800 MB, physical memory usage grew to roughly 1.7 GB, because mmap rounds every request up to whole pages and the OS allocates those pages lazily.
Finally, the author modified the MCC configuration to limit scanning to the specific JARs of interest, replaced Spring Boot’s `ZipInflaterInputStream` with a version that explicitly releases its `Inflater`, and upgraded to Spring Boot 2.0.5.RELEASE, in which the issue has been fixed.
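A sketch of the stream-side fix, assuming the standard JDK behaviour that `InflaterInputStream.close()` calls `end()` only on an `Inflater` it created itself, never on one supplied by the caller. The wrapper class and `decompress` helper below are hypothetical names, not the actual MDP or Spring Boot code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.util.zip.Inflater;
import java.util.zip.InflaterInputStream;

public class ReleasingInflaterStream extends InflaterInputStream {
    private final Inflater owned;

    ReleasingInflaterStream(InputStream in, Inflater inf) {
        super(in, inf);
        this.owned = inf;
    }

    @Override
    public void close() throws IOException {
        try {
            super.close();   // closes the underlying stream
        } finally {
            owned.end();     // free the native zlib state now, not at finalize()
        }
    }

    // Helper: inflate a compressed byte array through the releasing wrapper.
    static byte[] decompress(byte[] compressed) {
        try (InputStream in = new ReleasingInflaterStream(
                new ByteArrayInputStream(compressed), new Inflater())) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[512];
            int n;
            while ((n = in.read(buf)) > 0) out.write(buf, 0, n);
            return out.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Ending the `Inflater` in `close()` returns its native memory to the allocator as soon as the stream is done, instead of waiting for a full GC and a finalization pass.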
Summary
The excessive native memory consumption was caused by Spring Boot’s default Reflections‑based package scanning, which uses Inflater to unpack JARs and relies on GC finalization to free the off‑heap buffers. The underlying memory allocator keeps freed blocks in per‑thread arenas, so the memory is not returned to the OS, appearing as a leak. By restricting the scan path, fixing the Inflater release logic, and upgrading Spring Boot, the problem was eliminated.