Why G1GC Slowed Down During Load Test and How ParallelRefProc Fixed It

During a full‑chain load test, latency spikes were traced to long G1GC pauses caused by single‑threaded processing of a large number of Finalizer references. The finalizers originated from an over‑aggressive Jedis connection pool configuration; enabling the JVM flag -XX:+ParallelRefProcEnabled eliminated the bottleneck and restored performance.

Yanxuan Tech Team

Background

On November 6, a full‑chain load test of the inventory system revealed occasional latency spikes on several machines, noticeably longer than normal response times.

Problem Identification

Initial analysis with the internal monitoring tool ruled out the database as the bottleneck. GC logs from the affected machines showed unusually long pauses, in particular entries such as [GC ref-proc, 0.6261676 secs] and Finalize Marking during the remark phase, pointing to garbage‑collection overhead.

2019-11-05T01:27:25.579+0800: 380439.759: [GC remark]
2019-11-05T01:27:25.579+0800: 380439.759: [Finalize Marking, 0.0013365 secs]
2019-11-05T01:27:25.580+0800: 380439.761: [GC ref-proc, 0.6261676 secs]
...
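Log lines like the one above can be scanned programmatically rather than by eye; a minimal sketch (the RefProcLogParser name and the regex are my own, matched to the JDK 8 log format shown above):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Tiny helper that pulls the ref-proc pause out of a G1 remark log line. */
public class RefProcLogParser {
    private static final Pattern REF_PROC =
        Pattern.compile("\\[GC ref-proc, (\\d+\\.\\d+) secs\\]");

    /** Returns the ref-proc pause in seconds, or -1 if the line does not match. */
    public static double parseRefProcSeconds(String line) {
        Matcher m = REF_PROC.matcher(line);
        return m.find() ? Double.parseDouble(m.group(1)) : -1.0;
    }

    public static void main(String[] args) {
        String line = "2019-11-05T01:27:25.580+0800: 380439.761: "
                    + "[GC ref-proc, 0.6261676 secs]";
        System.out.println(parseRefProcSeconds(line)); // prints 0.6261676
    }
}
```

Feeding the whole log through such a parser makes it easy to see how often ref-proc dominates the remark pause.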

Finalize Marking and ref-proc

In the JVM version used, ordinary objects and Reference objects (e.g., SoftReference, WeakReference, PhantomReference, Finalizer) are handled differently. The reference‑processing logic can run either single‑threaded or multi‑threaded, controlled by the ParallelRefProcEnabled flag, which is disabled by default.

// STW ref processor
_ref_processor_stw = new ReferenceProcessor(mr,     // span covered by the processor
    ParallelRefProcEnabled && (ParallelGCThreads > 1), // mt processing: off by default
    MAX2((int)ParallelGCThreads, 1),                   // degree of mt processing
    (ParallelGCThreads > 1),                           // mt discovery
    MAX2((int)ParallelGCThreads, 1),                   // degree of mt discovery
    true,                                              // reference discovery is atomic
    &_is_alive_closure_stw);                           // is-alive closure

When ParallelRefProcEnabled is off, all reference objects are processed sequentially, which becomes a bottleneck if many Finalizer objects exist.
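The reference objects the GC has to process here are ordinary java.lang.ref types; their lifecycle can be observed at the application level. A minimal sketch, assuming a HotSpot-style collector that honors explicit System.gc() (the bounded retry loop hedges against GC being asynchronous):

```java
import java.lang.ref.WeakReference;

/** Minimal demo: a WeakReference is cleared once its referent becomes unreachable. */
public class WeakRefDemo {

    /** GCs in a bounded loop until the collector clears the reference. */
    public static boolean awaitCleared(WeakReference<?> ref) {
        for (int i = 0; i < 100 && ref.get() != null; i++) {
            System.gc();
            try { Thread.sleep(10); } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return ref.get() == null;
    }

    public static void main(String[] args) {
        Object referent = new Object();
        WeakReference<Object> ref = new WeakReference<>(referent);
        referent = null; // drop the only strong reference
        System.out.println("cleared: " + awaitCleared(ref));
    }
}
```

Every such reference that the collector clears or enqueues is work for the ref-proc phase; with the default single-threaded processing, thousands of them add up directly to pause time.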

Why So Many Finalizer Objects?

A heap histogram taken with jmap -histo showed a large number of java.lang.ref.Finalizer instances:

 num     #instances         #bytes  class name
----------------------------------------------
  38:         25031        1001240  java.lang.ref.Finalizer
...

The root cause was the commons‑pool eviction timer thread used by the Jedis connection pool. The eviction policy periodically closes idle connections and creates replacements to maintain minIdle, so Socket objects are constantly created and discarded; each socket is backed by an AbstractPlainSocketImpl, which overrides finalize(), so every socket registers a java.lang.ref.Finalizer reference.
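This mechanism is easy to reproduce: any class that overrides finalize() causes the JVM to register one Finalizer reference per instance, which is exactly the pressure the socket churn created. A minimal sketch (the class name and count are illustrative; finalization timing is non-deterministic, hence the bounded retry loop):

```java
import java.util.concurrent.atomic.AtomicInteger;

/** Demo: overriding finalize() makes the JVM register a java.lang.ref.Finalizer
 *  for every instance, as AbstractPlainSocketImpl does for each Socket. */
public class FinalizerDemo {
    static final AtomicInteger FINALIZED = new AtomicInteger();

    static class Finalizable {
        @Override
        protected void finalize() { FINALIZED.incrementAndGet(); }
    }

    /** Allocates and drops count finalizable objects, then nudges GC and finalization. */
    public static int churn(int count) {
        for (int i = 0; i < count; i++) {
            new Finalizable(); // immediately unreachable, but tracked by a Finalizer
        }
        for (int i = 0; i < 100 && FINALIZED.get() < count; i++) {
            System.gc();
            System.runFinalization();
            try { Thread.sleep(10); } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return FINALIZED.get();
    }

    public static void main(String[] args) {
        System.out.println("finalized: " + churn(1000));
    }
}
```

A jmap -histo taken while such objects churn shows java.lang.ref.Finalizer climbing, just as in the histogram above.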

Jedis Pool Configuration

<bean id="jedisPoolConfig" class="redis.clients.jedis.JedisPoolConfig">
  <property name="minIdle" value="100"/>
  <property name="maxIdle" value="200"/>
  <property name="maxTotal" value="500"/>
</bean>

This configuration allowed a large number of idle connections, causing the eviction timer to create many sockets, which in turn produced a flood of finalizer references.
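A less aggressive pool configuration reduces the churn at the source. The values below are illustrative rather than prescriptive; timeBetweenEvictionRunsMillis and minEvictableIdleTimeMillis are standard commons‑pool properties exposed by JedisPoolConfig:

```xml
<bean id="jedisPoolConfig" class="redis.clients.jedis.JedisPoolConfig">
  <!-- keep far fewer idle connections so the evictor churns fewer sockets -->
  <property name="minIdle" value="10"/>
  <property name="maxIdle" value="50"/>
  <property name="maxTotal" value="200"/>
  <!-- run eviction less often and only retire connections idle for a long time -->
  <property name="timeBetweenEvictionRunsMillis" value="60000"/>
  <property name="minEvictableIdleTimeMillis" value="300000"/>
</bean>
```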

Solution

To mitigate the single‑threaded reference‑processing bottleneck, the JVM flag -XX:+ParallelRefProcEnabled was added and the service restarted. After the change, subsequent load tests showed a significant reduction in GC pause times and overall latency.
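For reference, the flag goes on the service's launch command; on JDK 8, -XX:+PrintReferenceGC additionally logs per-reference-type processing times, which makes the improvement visible in subsequent GC logs (the jar name here is a placeholder):

```shell
java -XX:+ParallelRefProcEnabled \
     -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
     -XX:+PrintReferenceGC \
     -jar inventory-service.jar
```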

Summary

Excessive idle Jedis connections caused frequent socket creation.

The commons‑pool eviction timer repeatedly created sockets, each adding a Finalizer reference.

Single‑threaded reference processing (default) struggled with the large number of finalizers, extending GC pauses.

Enabling parallel reference processing with -XX:+ParallelRefProcEnabled resolved the issue.

Adjusting Redis/Jedis pool settings to reasonable values further prevents recurrence.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.
