
Druid Connection Pool Performance Optimization: Fair Lock vs Unfair Lock

The article explains how a Druid 1.1.20 connection pool was throttled to ~10k RPS by fair locking in recycle(), and how enabling the unfair lock (setUseUnfairLock(true)) nearly doubled single-machine TPS, pushed CPU usage to near 100%, and boosted cluster throughput by about 70%.

Youzan Coder

This article details the performance optimization of a database connection pool in application T using Druid 1.1.20. During stress testing, the system hit a bottleneck with throughput stuck at approximately 10,000 requests per second despite having capacity for more.

Investigation Process:

The team used tcpdump to capture network packets and discovered that while database response times were under 1ms, the interval between requests averaged 4-5ms. Using Alibaba's Arthas diagnostic tool, they monitored connection return times and found the bottleneck was in the recycle operation—specifically the lock acquisition time.
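The measurement Arthas surfaced can be reproduced in miniature with plain JDK code. The sketch below (a stand-in, not Druid source) times how long a caller waits just to acquire a contended fair lock, which is the quantity the team saw dominating the connection-return path:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockWaitProbe {
    // Returns how long (in microseconds) a caller waited to acquire the lock
    // while another thread held it -- the quantity Arthas exposed in recycle().
    static long measureWaitMicros() throws InterruptedException {
        ReentrantLock poolLock = new ReentrantLock(true); // fair, as in the incident
        Thread holder = new Thread(() -> {
            poolLock.lock();
            try {
                Thread.sleep(50); // hold the lock to create contention
            } catch (InterruptedException ignored) {
            } finally {
                poolLock.unlock();
            }
        });
        holder.start();
        Thread.sleep(10); // let the holder grab the lock first

        long t0 = System.nanoTime();
        poolLock.lock();  // the recycle() path parked at the equivalent of this call
        long waited = (System.nanoTime() - t0) / 1_000;
        poolLock.unlock();
        holder.join();
        return waited;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("waited ~" + measureWaitMicros() + " µs for the pool lock");
    }
}
```

In the real incident this wait was not caused by long lock hold times (the database answered in under 1ms) but by the queue discipline of the fair lock, explored next.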

Further analysis revealed the system was using fair locking (ReentrantLock$FairSync), which forces threads to acquire the lock in queue order. The stack trace showed threads blocked at sun.misc.Unsafe.park(Native Method) waiting to acquire the lock in DruidDataSource.recycle().
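The cost of that queue discipline is easy to see outside of Druid. This minimal sketch (plain JDK, not Druid code) counts how many lock hand-offs a group of threads completes in a fixed window under a fair versus an unfair ReentrantLock; the fair variant typically completes far fewer, because every release must hand off to the head of the FIFO queue via a park/unpark cycle:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockFairnessDemo {
    // Count total lock acquisitions completed by `threads` workers in `millis` ms.
    static long handOffs(boolean fair, int threads, long millis) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock(fair);
        long[] counts = new long[threads];
        Thread[] ts = new Thread[threads];
        long deadline = System.currentTimeMillis() + millis;
        for (int i = 0; i < threads; i++) {
            final int id = i;
            ts[i] = new Thread(() -> {
                while (System.currentTimeMillis() < deadline) {
                    lock.lock();
                    try {
                        counts[id]++; // trivial critical section, like a fast recycle()
                    } finally {
                        lock.unlock();
                    }
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        long total = 0;
        for (long c : counts) total += c;
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        long fair = handOffs(true, 8, 500);
        long unfair = handOffs(false, 8, 500);
        // The unfair lock usually wins by a wide margin on this microbenchmark.
        System.out.println("fair=" + fair + " unfair=" + unfair);
    }
}
```

The gap comes from barging: an unfair lock lets a running thread grab the lock immediately on release, skipping the scheduler round-trip that fair hand-off requires. That scheduler round-trip is exactly the 4-5ms inter-request gap the tcpdump capture revealed.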

Solution:

The fix was remarkably simple—enabling unfair lock mode:

// Enable unfair-lock mode on the Druid connection pool
dataSource.setUseUnfairLock(true);

Druid defaults to unfair locking, but automatically switches to fair locking when maxWait is configured.
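That fairness rule can be mirrored in plain JDK terms. The sketch below is an illustration of the described behavior, not Druid's source: an explicit setUseUnfairLock setting wins, otherwise a configured maxWait flips the pool to a fair lock.

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessChoice {
    // Mirror of the rule described above: explicit setting wins;
    // otherwise maxWait > 0 implies a fair lock.
    static ReentrantLock chooseLock(long maxWaitMillis, Boolean useUnfairLock) {
        boolean fair = (useUnfairLock != null) ? !useUnfairLock : maxWaitMillis > 0;
        return new ReentrantLock(fair);
    }

    public static void main(String[] args) {
        System.out.println(chooseLock(0, null).isFair());      // no maxWait: unfair
        System.out.println(chooseLock(60_000L, null).isFair());// maxWait set: fair
        System.out.println(chooseLock(60_000L, true).isFair());// explicit override: unfair
    }
}
```

This is why the application hit the slow path in the first place: setting maxWait, a perfectly reasonable safety timeout, silently traded away lock throughput until the explicit setUseUnfairLock(true) restored it.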

Results:

Under 300 concurrent connections, unfair locking achieved 9k+ TPS compared to fair locking's 5k TPS. After deployment, single-machine performance nearly doubled, and cluster throughput improved by approximately 70%. CPU utilization reached nearly 100%, indicating the CPU resources were now fully utilized.

The team also tested HikariCP as an alternative, which performed even better in a separate comparison (~3k TPS versus Druid's ~1.5k with fair locking), but decided to stick with Druid because of its existing monitoring infrastructure.

Tags: performance optimization, Database, Connection Pool, Arthas, Druid, Java concurrency, Fair Lock, Unfair Lock
Written by Youzan Coder

Official Youzan tech channel, delivering technical insights and occasional daily updates from the Youzan tech team.
