How Redis Achieves Million-Connection Performance: Memory, Single Thread, and Epoll
This article explains why Redis can handle massive concurrent traffic by storing all data in RAM, using a single‑threaded event loop, leveraging Linux epoll for I/O multiplexing, and employing highly optimized in‑memory data structures that keep latency in the microsecond range.
Pure In‑Memory Operations
All Redis data resides in RAM, so normal reads and writes never touch the disk. Typical DRAM access latency is on the order of 100 ns versus roughly 10 ms for a spinning-disk seek, a gap of about 100,000×, so commands complete in microseconds. Persistence (RDB snapshots and the AOF log) runs asynchronously: RDB snapshots are written by a forked child process, and AOF fsyncs run on a background thread, so persistence rarely blocks the main event loop.
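A rough illustration of that gap, using Python's `timeit` to measure lookups against an in-memory dict as a stand-in for Redis's keyspace (the key names and sizes here are arbitrary; absolute numbers will vary by machine):

```python
import timeit

# Build an in-memory key-value store (a stand-in for Redis's keyspace).
store = {f"user:{i}": f"value-{i}" for i in range(100_000)}

# Time one million in-memory reads; each is a hash-table probe in RAM.
n = 1_000_000
elapsed = timeit.timeit(lambda: store["user:54321"], number=n)
per_read_us = elapsed / n * 1e6
print(f"~{per_read_us:.3f} us per in-memory read")

# One spinning-disk seek (~10 ms = 10,000 us) costs as much as
# thousands of these lookups.
print(f"one 10 ms disk seek = ~{10_000 / per_read_us:,.0f} dict reads")
```

Even with Python's interpreter overhead, a single lookup lands well under a microsecond; Redis's C implementation is faster still.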
Single‑Threaded Event Loop
Redis adopts a minimalist single-threaded event-loop model: every client command is processed sequentially in one thread. This removes context switches, lock contention, and thread-synchronisation overhead, maximizes CPU cache locality, and lets a single core handle on the order of 100,000 operations per second on commodity hardware. (Since Redis 6.0, optional I/O threads can offload socket reads and writes, but command execution remains single-threaded.)
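The model can be sketched in a few lines. This is an illustration only, not Redis's actual code: one loop drains queued commands and applies them to a shared dict, and because only that loop ever touches the data, no locks are needed (the class and command names are invented for the sketch):

```python
from collections import deque

class MiniEventLoop:
    """Toy single-threaded command processor in the spirit of Redis."""

    def __init__(self):
        self.data = {}          # the keyspace, owned by the single thread
        self.pending = deque()  # commands waiting to be processed

    def submit(self, command, *args):
        self.pending.append((command, args))

    def run_once(self):
        """Process every queued command sequentially; no locks required."""
        results = []
        while self.pending:
            command, args = self.pending.popleft()
            if command == "SET":
                key, value = args
                self.data[key] = value
                results.append("OK")
            elif command == "GET":
                results.append(self.data.get(args[0]))
        return results

loop = MiniEventLoop()
loop.submit("SET", "counter", "1")
loop.submit("GET", "counter")
print(loop.run_once())  # → ['OK', '1']
```

Sequential execution is also what makes every Redis command atomic from the client's point of view: no other command can interleave mid-operation.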
I/O Multiplexing with epoll
To support thousands of concurrent connections on that single thread, Redis relies on Linux's epoll API (in its default level-triggered mode, accessed through Redis's ae event library, which falls back to kqueue or select on other platforms). epoll monitors a large set of socket descriptors and notifies the event loop only when a descriptor becomes readable or writable, eliminating blocking waits and the need for a thread per connection. The result is non-blocking, high-concurrency I/O: the server behaves as if it were multithreaded while remaining single-threaded.
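The readiness-notification pattern can be demonstrated with Python's `selectors` module, which uses epoll on Linux. A minimal sketch (socketpairs stand in for client connections; only one "client" sends, and the selector reports only that socket):

```python
import selectors
import socket

# One selector watches several sockets; select() returns only those with
# data waiting, so a single thread can serve all of them without ever
# blocking on an idle connection.
sel = selectors.DefaultSelector()

a, b = socket.socketpair()  # stand-ins for two client connections
c, d = socket.socketpair()
for server_side in (b, d):
    server_side.setblocking(False)
    sel.register(server_side, selectors.EVENT_READ)

a.sendall(b"PING")  # only the first "client" sends anything

ready = sel.select(timeout=1)  # wakes with exactly the readable socket
messages = [key.fileobj.recv(64) for key, _events in ready]
print(messages)  # → [b'PING']

for s in (a, b, c, d):
    s.close()
```

A real event loop would run `select()` repeatedly, dispatching each ready descriptor to a read/write handler, which is essentially what Redis's ae loop does around epoll.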
Highly Optimized Data Structures
Redis implements its core data types with custom, memory‑efficient structures designed for O(1) or O(log N) operations:
String : Simple Dynamic String (SDS) that automatically expands, tracks length, and prevents buffer overflows.
List : Historically a doubly‑linked list, or a ziplist for small elements; since Redis 3.2 a quicklist, a linked list of compact nodes (listpacks in recent versions), providing O(1) head/tail operations.
Hash : A hash table that switches to a compact flat encoding (listpack in current versions; zipmap/ziplist historically) when fields are few and small, reducing per‑field overhead.
Set : A hash table offering O(1) membership checks and insert/delete, with a compact intset encoding when all members are small integers.
ZSet : Combination of a skiplist (for ordered range queries) and a hash table (for O(1) lookups), enabling fast sorted‑set operations.
These structures are tuned for both time complexity and memory footprint, allowing Redis to maintain sub‑millisecond latency even with large data volumes.
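The idea behind SDS can be shown in a toy Python sketch (illustration only; real SDS is a C struct with a more nuanced growth policy, and the class below is invented for this article): the buffer carries its own length and capacity, so `len` is O(1), appends are bounds-checked rather than overflow-prone, and spare capacity makes repeated appends cheap.

```python
class SDS:
    """Toy Simple Dynamic String: tracked length + preallocated buffer."""

    def __init__(self, initial: bytes = b""):
        self.length = len(initial)        # O(1) length, unlike C's strlen
        self.buf = bytearray(initial)     # backing buffer, may hold slack

    def append(self, data: bytes):
        needed = self.length + len(data)
        if needed > len(self.buf):            # not enough free space:
            self.buf.extend(b"\0" * needed)   # grow with extra slack
        self.buf[self.length:needed] = data   # safe, bounds-checked copy
        self.length = needed                  # O(1) bookkeeping

    def value(self) -> bytes:
        return bytes(self.buf[:self.length])

s = SDS(b"hello")
s.append(b", redis")
print(s.value())          # → b'hello, redis'
print(len(s.buf) > s.length)  # buffer kept slack for the next append
```

The over-allocation is the key trade-off: a little extra memory per string buys amortized O(1) appends, the same time/space tuning the paragraph above describes.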
In summary, Redis achieves its high‑throughput, low‑latency performance through pure in‑memory storage, a single‑threaded event‑driven architecture, epoll‑based non‑blocking I/O, and purpose‑built data structures that minimize both CPU and memory overhead.
Mike Chen's Internet Architecture
Over ten years of BAT architecture experience, shared generously!