
How to Size Database Connection Pools: Lessons from HikariCP and Real‑World Performance Tests

This article explains why a smaller database connection pool often yields better latency and throughput, illustrates the effect with Oracle and PostgreSQL benchmarks, and provides a practical formula (connections = (CPU cores × 2) + effective disks) for sizing pools in production systems.


The author translates a HikariCP wiki article that dispels common misconceptions about database connection‑pool sizing and shares the key insights.

Configuring a connection pool is a frequent source of pitfalls; several counter‑intuitive principles must be understood before setting pool sizes.

For a web service handling roughly 10,000 concurrent users (≈20,000 TPS), the correct question is not "how large should the pool be?" but "how small should it be?" The author references an Oracle Real World Performance video that demonstrates this.

The video shows a test with 9,600 concurrent threads, a 2,048‑connection pool, and a 550 ms sleep between DB calls. Results: each request waited ~33 ms in the pool queue and ~77 ms to execute SQL, with high CPU usage and many buffer‑busy waits.

When the pool size was halved to 1,024 connections (threads unchanged), the SQL execution time dropped roughly by half while queue wait time stayed similar, indicating reduced contention.

Further reducing the pool to 96 connections yielded average queue wait of 1 ms and SQL execution of 2 ms, cutting overall response time from about 100 ms to 3 ms without any other changes.

The author explains why: a CPU core executes only one thread at a time, so surplus threads buy nothing but context‑switch overhead. The contended resources are CPU, disk, and network. Because I/O‑bound workloads block threads, running more threads than cores can help, but the optimal number depends on the storage medium and network bandwidth: a fast SSD blocks threads for less time than a spinning disk, so it actually favors a pool closer to the core count.

A practical formula borrowed from the PostgreSQL project is presented: connections = ((core_count × 2) + effective_spindle_count). The core count should exclude hyper‑threaded logical cores, and the effective spindle count reflects how many disks are actually servicing concurrent I/O waits.
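The sizing rule above can be sketched as a tiny helper (class and method names are hypothetical; `physicalCores` deliberately excludes hyper‑threaded logical cores, as the formula requires):

```java
// Sketch of the PostgreSQL-derived sizing formula:
// connections = (core_count * 2) + effective_spindle_count
public class PoolSizing {
    static int poolSize(int physicalCores, int effectiveSpindles) {
        // physicalCores: real cores only, not hyper-threaded logical ones
        // effectiveSpindles: disks that actually cause concurrent I/O waits
        return physicalCores * 2 + effectiveSpindles;
    }

    public static void main(String[] args) {
        // A 4-core machine with a single effective disk:
        System.out.println(poolSize(4, 1)); // prints 9
    }
}
```

For the 4‑core i7 discussed next, this lands at 9, matching the 9–10 connections the article reports.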

Applying the formula to a 4‑core i7 yields about 9–10 connections; testing confirms that exceeding this number degrades latency and TPS.
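As a concrete illustration, HikariCP can read its settings from a `hikari.properties` file; the sketch below caps the pool near the formula's result for the 4‑core example (the data source and connection details are hypothetical):

```properties
# Cap the pool near the formula's result for a 4-core box.
maximumPoolSize=10
# HikariCP recommends a fixed-size pool: minimumIdle = maximumPoolSize.
minimumIdle=10
# Fail fast (30 s) if no connection frees up, instead of piling up threads.
connectionTimeout=30000
# Hypothetical PostgreSQL data source settings:
dataSourceClassName=org.postgresql.ds.PGSimpleDataSource
dataSource.serverName=localhost
dataSource.databaseName=appdb
```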

The author generalizes the rule: use a small pool sized to the number of concurrent queries the DB can handle (often ≤ 2 × CPU cores) and let the remaining application threads queue, possibly separating pools for long‑running and short‑running transactions.
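Letting surplus application threads queue rather than growing the pool can be modeled with a bounded gate in front of the data‑access layer; a minimal JDK‑only sketch (class and method names are hypothetical, not HikariCP API):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.Semaphore;

// Hypothetical gate: at most `poolSize` threads run a query at once;
// the rest block here (queue) instead of demanding more DB connections.
public class QueryGate {
    private final Semaphore permits;

    public QueryGate(int poolSize) {
        // Fair mode: queued threads are served in arrival order.
        this.permits = new Semaphore(poolSize, true);
    }

    public <T> T run(Callable<T> query) throws Exception {
        permits.acquire();       // wait if all "connections" are busy
        try {
            return query.call(); // the actual SQL would execute here
        } finally {
            permits.release();
        }
    }
}
```

Two such gates with different sizes would realize the article's suggestion of separate pools for long‑running and short‑running transactions.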

Finally, the pool size must be tuned to the specific system characteristics, as mixed workloads may require multiple pools or other adjustments.

Tags: Backend, Performance Tuning, HikariCP, Database Connection Pool, Connection Sizing
Written by Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.
