
Investigation of Excessive Native Memory Usage in a Spring Boot Application

This article details a step‑by‑step investigation of unusually high native memory consumption in a Spring Boot service, covering JVM configuration, system‑level diagnostics with jcmd, pmap, gperftools, strace, GDB, and jstack, and explains how the MCC component’s default package scanning caused the leak and how configuring scan paths or upgrading Spring Boot resolved the issue.


Background – After migrating a project to the MDP framework (based on Spring Boot), the system repeatedly reported high swap usage. Although the JVM heap was capped at 4 GB, the process's physical memory footprint reached 7 GB.

JVM parameters used were:

-XX:MetaspaceSize=256M
-XX:MaxMetaspaceSize=256M
-XX:+AlwaysPreTouch
-XX:ReservedCodeCacheSize=128m
-XX:InitialCodeCacheSize=128m
-Xss512k
-Xmx4g
-Xms4g
-XX:+UseG1GC
-XX:G1HeapRegionSize=4M

Physical memory usage was inspected with top and pmap, which revealed many large (roughly 64 MB) anonymous memory regions not accounted for by jcmd.
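For illustration, the same hunt for suspicious 64 MB regions can be done from inside the process by scanning /proc/self/maps. This is a Linux-only sketch; the class name and the 60–68 MB acceptance window are illustrative, not part of the original investigation:

```java
import java.nio.file.Files;
import java.nio.file.Paths;

// Linux-only sketch: scan this process's address space for mappings around
// 64 MB, the size glibc uses for per-thread malloc arenas on 64-bit systems.
public class SixtyFourMbScan {
    public static void main(String[] args) throws Exception {
        int count = 0;
        for (String line : Files.readAllLines(Paths.get("/proc/self/maps"))) {
            // Each line starts with "startAddr-endAddr perms offset dev inode [path]".
            String[] range = line.split("\\s+")[0].split("-");
            long size = Long.parseUnsignedLong(range[1], 16)
                      - Long.parseUnsignedLong(range[0], 16);
            // Accept anything between 60 MB and 68 MB as a candidate arena.
            if (size >= (60L << 20) && size <= (68L << 20)) {
                System.out.println((size >> 20) + " MB  " + line);
                count++;
            }
        }
        System.out.println("candidate 64 MB regions: " + count);
    }
}
```

In the real investigation the equivalent information came from `pmap -x pid` sorted by region size.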

Investigation Process

1. Java‑level memory inspection

Added -XX:NativeMemoryTracking=detail to the startup flags and ran jcmd pid VM.native_memory detail. The committed memory reported by jcmd was smaller than the actual physical usage because Native Memory Tracking only covers allocations made through the JVM itself, not those made directly by native (C) code.

2. System‑level diagnostics

Used gperftools to monitor malloc activity, then strace -f -e brk,mmap,munmap -p pid to trace memory-related system calls, but no suspicious allocations were found. Finally, gdb was used to dump the suspect memory regions and inspect their contents with strings.

Repeating strace during application startup captured many 64 MB mmap requests, which matched the regions shown by pmap .

3. Thread analysis

Identified the thread responsible for the large allocations using jstack pid. The stack trace revealed that MCC (Meituan Configuration Center) used Reflections to scan all JARs, and Spring Boot's ZipInflaterInputStream allocated off-heap memory via Inflater without releasing it promptly.

Further investigation showed that the JDK's Inflater relies on finalization to release its native memory, so that memory is freed only when the finalizer eventually runs, not as soon as the object becomes unreachable. On top of that, glibc's malloc retains freed memory in per-thread arenas (64 MB blocks of address space on 64-bit systems) instead of returning it to the OS, so the process's reported memory stays high even after GC.
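A minimal sketch of the failure mode: each Inflater (and Deflater) holds native zlib buffers that are released by end(), which the finalizer calls only eventually. Calling end() explicitly frees the native memory deterministically. The class name and data below are illustrative:

```java
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class InflaterEndDemo {
    public static void main(String[] args) throws Exception {
        byte[] input = new byte[64 * 1024];
        java.util.Arrays.fill(input, (byte) 'a');

        // Compress: Deflater also holds native zlib state.
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        byte[] compressed = new byte[input.length];
        int clen = deflater.deflate(compressed);
        deflater.end(); // release native buffers now, not at finalization

        // Decompress.
        Inflater inflater = new Inflater();
        inflater.setInput(compressed, 0, clen);
        byte[] output = new byte[input.length];
        int n = inflater.inflate(output);
        inflater.end(); // without this, native memory waits for finalize()

        System.out.println(n == input.length); // prints "true"
    }
}
```

When many streams are opened during JAR scanning and end() is never called, the unfreed native buffers accumulate exactly as described above.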

Solution: limit MCC's scan path to the specific JARs it needs, or upgrade Spring Boot to 2.0.5.RELEASE or later, where ZipInflaterInputStream releases the Inflater's native memory explicitly.
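The fix in newer Spring Boot amounts to tying Inflater.end() to stream close. A sketch of that pattern follows; the class names here are hypothetical (the real fix lives in Spring Boot's ZipInflaterInputStream). Note that when InflaterInputStream is constructed with a caller-supplied Inflater, its close() does not call end() on it — that is precisely the leak:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.Inflater;
import java.util.zip.InflaterInputStream;

// Hypothetical sketch: release the Inflater's native memory deterministically
// when the stream closes, instead of waiting for finalization.
class EagerEndInflaterStream extends InflaterInputStream {
    EagerEndInflaterStream(InputStream in) {
        super(in, new Inflater()); // caller-supplied Inflater: close() won't end() it
    }
    @Override
    public void close() throws IOException {
        try {
            super.close(); // closes the underlying stream
        } finally {
            inf.end();     // free native zlib buffers now (safe to call twice)
        }
    }
}

public class EagerEndDemo {
    public static void main(String[] args) throws Exception {
        // Round-trip some data through deflate/inflate to exercise the stream.
        byte[] data = "native memory should be freed on close".getBytes();
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (DeflaterOutputStream dos = new DeflaterOutputStream(bos)) {
            dos.write(data);
        }
        ByteArrayOutputStream result = new ByteArrayOutputStream();
        try (InputStream in = new EagerEndInflaterStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            in.transferTo(result);
        }
        System.out.println(new String(result.toByteArray()).equals(new String(data)));
    }
}
```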

Additional Experiments

A custom allocator (compiled with gcc zjbmalloc.c -fPIC -shared -o zjbmalloc.so and preloaded via LD_PRELOAD) demonstrated that mmap-based allocations only reserve address space; the OS commits physical pages lazily, on first access.
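The same reserve-versus-commit behavior can be observed from Java on Linux: mapping a file reserves address space immediately, but resident memory (VmRSS) grows only as pages are actually touched. This is a Linux-only sketch; the 256 MB size and the 100 MB threshold are arbitrary choices for the demonstration, not values from the original experiment:

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReserveVsCommit {
    // Read this process's resident set size (in kB) from /proc (Linux only).
    static long rssKb() throws Exception {
        for (String line : Files.readAllLines(Path.of("/proc/self/status"))) {
            if (line.startsWith("VmRSS:")) {
                return Long.parseLong(line.replaceAll("\\D+", ""));
            }
        }
        return -1;
    }

    public static void main(String[] args) throws Exception {
        Path f = Files.createTempFile("mmap-demo", ".bin");
        try (RandomAccessFile raf = new RandomAccessFile(f.toFile(), "rw");
             FileChannel ch = raf.getChannel()) {
            long size = 256L << 20; // map 256 MB of address space
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, size);
            long before = rssKb();
            // Touch one byte per 4 KB page so the OS must commit every page.
            for (long i = 0; i < size; i += 4096) {
                buf.put((int) i, (byte) 1);
            }
            long after = rssKb();
            // RSS should have grown by roughly 256 MB (here: > 100 MB).
            System.out.println(after - before > 100_000);
        } finally {
            Files.deleteIfExists(f);
        }
    }
}
```

Before the touch loop runs, the 256 MB shows up only in virtual size, mirroring what the LD_PRELOAD experiment showed for malloc-backed mmap regions.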

The memory-pool behavior of both glibc and tcmalloc (used by gperftools) was identified as the root cause of the apparent “memory leak”: freed memory is retained by the allocator rather than returned to the OS.

Overall, the investigation shows how native memory, JVM settings, and underlying allocator policies interact, and provides practical steps to diagnose and fix similar issues in Java backend services.

JVM · Memory Leak · Spring Boot · Native Memory · Performance Debugging · Linux tools
Written by

Architect's Tech Stack

Java backend, microservices, distributed systems, containerized programming, and more.
