Why Redis’s Single‑Threaded Design Beats Multithreading in Real‑World Performance
This article explains how Redis achieves lightning‑fast performance with a single‑threaded architecture by leveraging in‑memory operations, IO multiplexing, and event‑driven design, and it details the four key advantages of this approach as well as the evolution introduced in Redis 6.0.
1 Why Single‑Threaded Design Can Be Lightning Fast
Even though modern hardware offers many CPU cores, Redis deliberately uses a single‑threaded model and still handles hundreds of thousands of requests per second. The key insight is that Redis’s bottleneck is not the CPU but memory and network I/O.
All data resides in memory, so a single core can process the vast majority of commands. Adding threads would introduce lock contention and context‑switch overhead, which outweigh any parallelism gains. As Redis creator Salvatore Sanfilippo said, “Redis is an in‑memory database; most operations finish in 100 ns, and the coordination cost of multithreading can be larger than the operation itself.”
Redis uses IO multiplexing to efficiently manage a large number of concurrent connections. IO multiplexing is an event‑driven technique that lets one thread monitor many sockets, dramatically improving throughput.
2 Five I/O Models Explained
2.1 Blocking I/O
In the blocking model, a process waits synchronously until the requested I/O operation completes.
2.2 Non‑blocking I/O
Non‑blocking I/O returns immediately after a request, even if the data is not ready, requiring the process to poll for readiness.
2.3 Signal‑driven I/O
The kernel sends an asynchronous signal when data is ready, eliminating the need for active polling.
2.4 Asynchronous I/O
After issuing a request, the process returns immediately while the kernel completes the operation in the background and notifies the process upon completion.
2.5 I/O Multiplexing
I/O multiplexing allows a single process to monitor many file descriptors (sockets) and be notified when any become ready for reading or writing.
3 Core Mechanism: How I/O Multiplexing Handles Thousands of Connections
Traditional one‑thread‑per‑connection models cannot scale to tens of thousands of connections. Redis solves this with I/O multiplexing, using epoll on Linux, kqueue on BSD/macOS, or select as a fallback.
3.1 Restaurant Analogy
Blocking I/O: one waiter serves one table and must wait while the customer orders.
Multithreaded model: one dedicated waiter per table, which is costly and leads to interference.
I/O multiplexing: a super-waiter watches all tables and serves any that need attention.
Redis adopts the third approach. It uses the operating system’s epoll/kqueue/select mechanisms so a single thread can monitor thousands of sockets.
Multiplex: many socket connections are monitored at once.
Reuse: a single thread is reused to check the readiness of all of their file descriptors.
Implementations: select, poll, and the modern, high-performance epoll.
3.2 epoll – Efficient Linux Implementation
epoll’s event‑driven design provides three main benefits:
Efficient registration: epoll_ctl() registers each socket with the kernel once; unlike select, the full descriptor set does not have to be re-passed on every wait.
Smart waiting: epoll_wait() puts the thread to sleep, consuming no CPU until an event occurs.
Precise notification: when sockets become readable or writable, the kernel wakes the thread with a list containing only those ready connections.
The cost therefore scales with the number of active connections, not the total: even with 50,000 registered sockets, only the few that are actually communicating are processed.
3.3 Event Loop – Redis’s Heartbeat
The core of Redis is a never‑ending event loop that repeatedly performs three steps:
while (server_is_running) {
    // 1. Wait for events (the thread sleeps here when idle)
    int numevents = aeApiPoll(event_loop, timeout);

    // 2. Process file events (network I/O)
    for (int i = 0; i < numevents; i++) {
        aeFiredEvent *event = &event_loop->fired[i];
        if (event->mask & AE_READABLE) {
            // Read client request
            readQueryFromClient(connection);
        }
        if (event->mask & AE_WRITABLE) {
            // Write response to client
            writeToClient(connection);
        }
    }

    // 3. Process time events (e.g., key expiration, persistence)
    processTimeEvents();
}
This simplified loop handles all client requests, network I/O, and periodic tasks such as key expiration checks.
4 Four Advantages of the Single‑Threaded Model
4.1 Eliminates Lock Contention
Without multiple threads, there is no need for locks; every operation executes atomically, removing synchronization overhead.
4.2 Removes Context‑Switch Overhead
Thread switches require saving and restoring CPU registers and stacks, which is costly when frequent. A single thread avoids this entirely, dedicating almost all CPU cycles to useful work.
4.3 Simplifies Memory Access Patterns
Continuous memory access in a single thread maximizes CPU cache hit rates, avoiding the cache thrashing that can occur with many threads accessing disparate data.
4.4 Simplifies Development and Maintenance
Developers do not need to reason about thread safety, deadlocks, or race conditions, resulting in a more stable and maintainable code base.
5 Evolution in Redis 6.0: Network Becomes the New Bottleneck
With the advent of 10 Gbps and faster networks, the single‑threaded network I/O path can become a limitation. Redis 6.0 introduced optional multithreaded network I/O to parallelize packet processing while keeping the core event loop single‑threaded.
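With the threaded I/O path enabled, reading requests and writing responses can be spread across threads while every command still executes on the single main thread. A sketch of the relevant redis.conf directives (available since Redis 6.0; the values here are illustrative):

```
# redis.conf (Redis 6.0+)
io-threads 4              # number of I/O threads; 1 disables the feature (default)
io-threads-do-reads yes   # also perform reads/parsing on the I/O threads (off by default)
```

The shipped redis.conf advises enabling this only on machines with four or more cores, leaving at least one core spare.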
6 Summary
Redis’s single‑threaded architecture is not a relic but a purposeful design that leverages in‑memory data, I/O multiplexing, and an event‑driven loop to achieve extreme performance. By avoiding the overhead of locks, context switches, and complex concurrency, it remains simple, fast, and reliable. Redis 6.0’s optional multithreaded network I/O demonstrates a balanced evolution: retaining the core’s simplicity while addressing emerging network bottlenecks.