
Diagnosing and Resolving a Java Application Memory Leak During Load Testing

During a load test of a Java 1.6 application on a CentOS server, memory usage climbed from 20% to 100% after an hour of 300‑concurrent requests, prompting a detailed investigation that identified off‑heap leaks caused by GZIP compression in a JimDB client and led to fixes such as upgrading the JDK and avoiding unnecessary compression.

JD Retail Technology

Recently, a Java application was load tested on a CentOS server with 4 CPU cores, 8 GB RAM, JDK 1.6.0_25, and the JVM options -server -Xms2048m -Xmx2048m. After one hour at 300 concurrent users, memory usage rose from 20% to 100% and TPS dropped from roughly 1100 to 600.

The investigation started with the top command to view memory consumption, followed by checks of the heap memory distribution and the garbage-collection logs, both of which appeared normal. Suspicion then shifted to an off-heap memory leak, likely related to the application's many JSF interfaces, which rely on Netty.

DirectByteBuffer from the java.nio package was considered first; the -XX:MaxDirectMemorySize=1g flag was set, but memory continued to grow beyond the combined 2 GB heap and 1 GB direct-memory limits, with no OOM exceptions and no frequent Full GCs. This indicated the leak was not caused by DirectByteBuffer.
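The reasoning here rests on how the JVM tracks direct buffers: they live off-heap but are accounted against -XX:MaxDirectMemorySize, so a genuine DirectByteBuffer leak would eventually trip an OutOfMemoryError ("Direct buffer memory") rather than grow silently. A minimal probe illustrates the mechanism:

```java
import java.nio.ByteBuffer;

public class DirectBufferProbe {
    public static void main(String[] args) {
        // Direct buffers are allocated outside the Java heap, but the JVM
        // tracks their total size; exceeding -XX:MaxDirectMemorySize raises
        // OutOfMemoryError("Direct buffer memory") instead of leaking silently.
        ByteBuffer buf = ByteBuffer.allocateDirect(1024 * 1024); // 1 MB off-heap
        System.out.println(buf.isDirect());  // prints true
        System.out.println(buf.capacity());  // prints 1048576
    }
}
```

Because the leaking process exceeded the configured limit without any such error, the growth had to come from native allocations the JVM does not track this way.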

To pinpoint the source, Google Perftools (tcmalloc) was installed. The installation steps were:

wget http://download.savannah.gnu.org/releases/libunwind/libunwind-0.99-beta.tar.gz
tar xzf libunwind-0.99-beta.tar.gz && cd libunwind-0.99-beta
./configure
make
sudo make install   # requires root
cd ..
wget http://google-perftools.googlecode.com/files/google-perftools-1.8.1.tar.gz
tar xzf google-perftools-1.8.1.tar.gz && cd google-perftools-1.8.1
./configure --prefix=/home/admin/tools/perftools --enable-frame-pointers
make
sudo make install   # requires root
# Add the libunwind library path to /etc/ld.so.conf.d/usr-local_lib.conf, then refresh the cache:
sudo /sbin/ldconfig

Before launching the Java program, the following environment variables were added:

export LD_PRELOAD=/home/admin/tools/perftools/lib/libtcmalloc.so
export HEAPPROFILE=/home/admin/heap/gzip

The application was then started, producing heap files such as gzip_pid.xxxx.heap in /home/admin/heap. These files were analyzed with pprof:

/home/admin/tools/perftools/bin/pprof --text $JAVA_HOME/bin/java gzip_22366.0005.heap > gzip-0005.txt

The profiling output showed that nearly all native allocations came from zcalloc, the zlib allocator, which is invoked from Java_java_util_zip_Inflater_init:

Total: 4504.5 MB
  4413.9  98.0%  zcalloc
    60.0   1.3%  os::malloc
    16.4   0.4%  ObjectSynchronizer::omAlloc
     8.7   0.2%  Java_java_util_zip_Inflater_init
  ...

Inspecting the JDK source revealed the relevant code:

public GZIPInputStream(InputStream in, int size) throws IOException {
    // new Inflater(true) allocates a native zlib stream off-heap,
    // which is only freed when the stream is closed (or finalized).
    super(in, new Inflater(true), size);
    usesDefaultInflater = true;
    readHeader(in);
}

The root cause was traced to the JimDB client's SerializationUtils class, which GZIP-compresses serialized objects. Under high concurrency, the repeated creation of GZIPInputStream and GZIPOutputStream instances exhausted off-heap memory, and that memory was not released after the test ended.
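The leak pattern can be illustrated with a minimal sketch (this is not the JimDB code itself): every GZIP stream owns a native zlib state, and on older JDKs that off-heap memory is only reliably freed when close() runs, which calls Inflater.end() or Deflater.end(). Streams that are never closed leave native allocations behind until finalization eventually catches up, which under high concurrency it may not.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipRoundTrip {

    // Compress with an explicit close() so the native Deflater state
    // is released promptly instead of waiting for finalization.
    static byte[] gzip(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        GZIPOutputStream gz = new GZIPOutputStream(bos);
        try {
            gz.write(data);
        } finally {
            gz.close(); // frees the off-heap zlib buffer; skipping this leaks native memory
        }
        return bos.toByteArray();
    }

    static byte[] gunzip(byte[] data) throws IOException {
        GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(data));
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = gz.read(buf)) != -1) {
                bos.write(buf, 0, n);
            }
            return bos.toByteArray();
        } finally {
            gz.close(); // frees the native Inflater created in Java_java_util_zip_Inflater_init
        }
    }
}
```

The try/finally style matches the JDK 6 era of the article; on JDK 7 and later, try-with-resources achieves the same guarantee more concisely.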

Two remedial actions were taken:

1. Upgrading the JDK to version 7u71, which reduced the rate of memory growth and prevented the server from running out of memory.

2. Avoiding JimDB.getObject and JimDB.setObject with compression enabled; instead, implementing custom serialization or applying compression only when the object size exceeds a defined threshold.
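The threshold approach can be sketched as follows. This is an illustration, not the JimDB API: the COMPRESS_THRESHOLD value and the one-byte marker header are hypothetical choices. Small values are stored raw, so only large payloads pay the GZIP cost (and allocate native zlib state).

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class ThresholdCodec {
    // Hypothetical threshold: only payloads larger than this are gzipped.
    static final int COMPRESS_THRESHOLD = 1024;
    // One-byte header marking whether the stored bytes are compressed.
    static final byte RAW = 0, GZIPPED = 1;

    static byte[] encode(byte[] value) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        if (value.length <= COMPRESS_THRESHOLD) {
            out.write(RAW);      // small value: store as-is, no zlib allocation
            out.write(value);
        } else {
            out.write(GZIPPED);
            GZIPOutputStream gz = new GZIPOutputStream(out);
            try {
                gz.write(value);
            } finally {
                gz.close();      // always release the native zlib state
            }
        }
        return out.toByteArray();
    }

    static byte[] decode(byte[] stored) throws IOException {
        InputStream in = new ByteArrayInputStream(stored, 1, stored.length - 1);
        if (stored[0] == GZIPPED) {
            in = new GZIPInputStream(in);
        }
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        } finally {
            in.close();
        }
    }
}
```

With this scheme, the common case of small serialized objects never touches the zlib native allocator at all, which directly removes the allocation pressure the profiler attributed to zcalloc.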

These steps resolved the memory‑leak issue observed during the load test.
