Which Database Connection Pool Performs Best? A Real-World Multi-Threaded Test
This article presents a multi‑threaded performance and safety evaluation of four popular database connection pools (Druid, JNDI, Tomcat JDBC Pool, and Oracle UCP) across Tomcat, WebSphere, and JWS application servers, covering the test setup, methodology, results, and best‑practice recommendations for proper pool usage.
Test Background
The organization experienced transaction errors when a single database connection obtained from a pool was accessed concurrently by multiple threads. The environment includes Java (JVM) and .NET applications that use Oracle and DB2 databases, with several connection‑pool implementations in use.
Test Objectives
Determine whether common connection‑pool implementations exhibit multi‑thread safety issues.
Compare the performance characteristics of the pools under increasing concurrency.
Test Subjects
Druid – open‑source pool from Alibaba, used in the internal framework.
JNDI – container‑managed pool obtained through a J2EE JNDI lookup, typical in WebSphere deployments.
Tomcat JDBC Pool – widely adopted open‑source pool.
Oracle UCP – Oracle‑provided pool with official support.
Test Environment
Application servers: JWS (PAAS), Apache Tomcat 9, IBM WebSphere.
Databases: Oracle 12c RAC (12.0.0.12) and Oracle 19c RAC.
Virtual machines: 2 CPU cores, 8 GB RAM each.
Test Methodology
Test program: A Java web application was developed that can switch among the four pools and performs CRUD operations from multiple threads. Each pool was configured with initialSize=10, minIdle=10, maxTotal=100, and default values for all other parameters.
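For reference, the sizing above can be expressed as pool properties. The snippet below uses Druid‑style names as an illustration; the actual cap parameter varies by pool (Druid and Tomcat JDBC Pool call it maxActive, commons‑dbcp2 calls it maxTotal, and UCP uses initialPoolSize/minPoolSize/maxPoolSize):

```properties
# Pool sizing applied to every pool in the test (Druid-style names shown;
# other pools use equivalents such as maxActive / maxPoolSize)
initialSize=10
minIdle=10
maxActive=100
# all remaining parameters were left at their defaults
```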
Deployment: Installed Oracle 12c RAC and 19c RAC, set up WebSphere on SUSE 12, Tomcat 9 on RHEL 7.5, and JWS on the internal container platform.
Performance test: Used Apache JMeter to generate concurrent load at 20, 40, 80, and 160 threads for 120 seconds per run, recording CPU usage, memory consumption, and JMeter‑reported TPS.
Multi‑thread safety test: The same environment was kept while 2, 4, 8, and 16 threads repeatedly accessed a **single** connection obtained from the pool. Each concurrency level was repeated 3–5 times; any exception was treated as a safety failure.
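The sharing pattern this safety test exercises can be sketched with JDK‑only code. The stub below is an assumption for illustration (a dynamic proxy standing in for a real pooled connection, since no database or pool library is included here); it simply records whether two threads were ever inside a call on the one shared connection at the same time, which is the condition a non‑thread‑safe connection cannot tolerate:

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

public class SharedConnectionTest {

    // Stub java.sql.Connection built with a dynamic proxy. It stands in for a
    // real pooled connection and only records whether two threads were ever
    // inside a method call simultaneously -- the access pattern under test.
    static Connection stubConnection(AtomicBoolean overlapSeen) {
        AtomicInteger callers = new AtomicInteger();
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[]{Connection.class},
                (proxy, method, args) -> {
                    if (callers.incrementAndGet() > 1) overlapSeen.set(true);
                    try {
                        Thread.sleep(1); // simulate work inside the driver
                    } finally {
                        callers.decrementAndGet();
                    }
                    return method.getReturnType() == boolean.class ? false : null;
                });
    }

    // Run `threads` workers that all hammer ONE shared connection, as in the
    // safety test; returns true if overlapping (unsynchronized) access occurred.
    static boolean detectOverlap(int threads, int callsPerThread) throws InterruptedException {
        AtomicBoolean overlapSeen = new AtomicBoolean(false);
        Connection shared = stubConnection(overlapSeen); // one connection for everybody
        CountDownLatch start = new CountDownLatch(1);
        ExecutorService workers = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            workers.submit(() -> {
                try {
                    start.await(); // make all workers begin together
                } catch (InterruptedException e) {
                    return;
                }
                for (int i = 0; i < callsPerThread; i++) {
                    try {
                        shared.createStatement(); // would be a CRUD call in the real test
                    } catch (Exception ignored) {
                        // with a real pool, any exception here counts as a safety failure
                    }
                }
            });
        }
        start.countDown();
        workers.shutdown();
        workers.awaitTermination(60, TimeUnit.SECONDS);
        return overlapSeen.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("overlapping access on shared connection: " + detectOverlap(8, 50));
    }
}
```

With a real pool, the same harness would fetch one connection from the DataSource and share it across the workers; the overlap the stub flags is the condition that produced exceptions for Druid and JNDI in these results.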
Performance Test Results
Tomcat 9 server:
WebSphere server:
JWS (PAAS) server:
Multi‑Thread Safety Test Results
Conclusions
Increasing the number of concurrent threads raises TPS but also lengthens request latency and increases CPU contention. CPU usage for Druid is slightly higher, yet overall TPS differences among the four pools are minimal.
In the safety tests, Oracle UCP and Tomcat JDBC Pool showed no exceptions under any concurrency level. Druid and JNDI produced exceptions in most scenarios, indicating they are not safe for sharing a single connection across threads.
Given Oracle’s official support for UCP, it is the preferred choice when Oracle services are procured.
Correct Usage of Database Connection Pools
The typical lifecycle of a connection pool consists of three phases:
1) Pool Creation
When the application starts, the pool creates the minimum number of connections defined in the configuration (e.g., 10) and stores them internally.
2) Pool Usage
When a thread requests a connection, the pool returns an idle connection if available. If none are idle and the pool has not reached maxTotal, a new connection is created; otherwise the request waits up to the configured timeout before throwing an exception.
3) Connection Release
After the operation, the thread must close (return) the connection to the pool. If the pool already contains the maximum number of connections, the returned connection may be discarded.
Proper usage requires each thread to obtain its own connection from the pool and return it promptly, avoiding long‑held connections that can cause transaction errors.
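The three phases above, plus the correct per‑thread borrow/return pattern, can be illustrated with a toy pool. This is a minimal sketch, not any of the tested pools: the class name MiniPool is invented, strings stand in for real connections, and a production pool additionally validates, evicts, and physically closes connections.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Toy pool illustrating the three lifecycle phases (illustration only).
public class MiniPool {
    private final BlockingQueue<String> idle = new LinkedBlockingQueue<>();
    private final int maxTotal;
    private int created;

    // Phase 1: pre-create the configured minimum number of connections on startup.
    MiniPool(int minIdle, int maxTotal) {
        this.maxTotal = maxTotal;
        for (int i = 0; i < minIdle; i++) idle.offer(newConn());
    }

    private synchronized String newConn() { return "conn-" + (++created); }

    // Phase 2: hand out an idle connection; grow up to maxTotal if none is idle;
    // otherwise wait for the timeout and fail, mirroring a pool-exhausted exception.
    String borrow(long timeoutMs) throws InterruptedException {
        String c = idle.poll();
        if (c != null) return c;
        synchronized (this) {
            if (created < maxTotal) return newConn();
        }
        c = idle.poll(timeoutMs, TimeUnit.MILLISECONDS);
        if (c == null) throw new IllegalStateException("timed out waiting for a connection");
        return c;
    }

    // Phase 3: return the connection promptly so other threads can reuse it.
    void release(String c) { idle.offer(c); }

    public static void main(String[] args) throws Exception {
        MiniPool pool = new MiniPool(2, 4);
        // Correct pattern: each thread borrows its OWN connection and returns it.
        Runnable worker = () -> {
            try {
                String conn = pool.borrow(1000);
                try {
                    // ... perform CRUD with this thread's private connection ...
                } finally {
                    pool.release(conn); // always return the connection, even on error
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < 8; i++) threads.add(new Thread(worker));
        threads.forEach(Thread::start);
        for (Thread t : threads) t.join();
        System.out.println("all workers finished and returned their connections");
    }
}
```

Note the `try/finally` in the worker: releasing in `finally` is what keeps a connection from being held across an error, which is exactly the long‑held‑connection problem the paragraph above warns about.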
dbaplus Community
Enterprise-level professional community for Database, BigData, and AIOps. Daily original articles, weekly online tech talks, monthly offline salons, and quarterly XCOPS&DAMS conferences—delivered by industry experts.
