Why Redis Achieves Millisecond Latency and Million QPS: Core Design Principles
Redis delivers millisecond‑level response and millions of operations per second by storing data entirely in memory, using a single‑threaded event‑driven model, efficient I/O multiplexing with epoll, and highly optimized data structures such as strings, hashes, lists, sets and sorted sets.
Pure Memory Operations: The Foundation of Speed
Redis achieves millisecond‑level response times and million‑level QPS because it keeps most data in RAM, eliminating frequent disk I/O. Random access to RAM is orders of magnitude faster than disk (roughly 100 nanoseconds versus milliseconds for a mechanical disk seek), allowing individual operations to complete in microseconds or less, which is critical for latency‑sensitive scenarios such as caching, session management, and real‑time analytics.
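A rough illustration of that gap, using a plain Python dict as a stand‑in for an in‑memory keyspace and a per‑read file open as a stand‑in for disk‑backed access (a sketch only: the OS page cache makes the file path far faster than a real cold disk seek, so the true gap is much larger than what this measures):

```python
import os
import tempfile
import time

N = 100_000

# In-memory store: a plain dict stands in for Redis's keyspace.
store = {f"key:{i}": f"val:{i}" for i in range(N)}

t0 = time.perf_counter()
for i in range(N):
    _ = store[f"key:{i}"]
mem_per_op = (time.perf_counter() - t0) / N

# Disk-backed store: one file read per access, paying a syscall
# round-trip each time (still cached by the OS -- a best case for disk).
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "val")
    with open(path, "w") as f:
        f.write("val")
    t0 = time.perf_counter()
    for _ in range(1_000):
        with open(path) as f:
            _ = f.read()
    disk_per_op = (time.perf_counter() - t0) / 1_000

print(f"memory: {mem_per_op * 1e9:.0f} ns/op, "
      f"disk-backed: {disk_per_op * 1e6:.2f} us/op")
```

Even with the page cache absorbing the reads, the in‑memory lookup wins by a wide margin; against an uncached mechanical disk the ratio stretches into the tens of thousands.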
Single‑Threaded Architecture: Simplicity and Efficiency
Redis primarily uses a single‑threaded event‑driven model. Although Redis 6.0+ adds optional multi‑threaded I/O, command execution remains single‑threaded. This design avoids context switches, lock contention, and memory‑visibility issues common in multi‑threaded systems. The result is simpler program logic, easier state consistency, and reduced runtime overhead because most code paths do not require locks.
Commands are processed sequentially, which guarantees per‑command atomicity and still yields throughput of over 100,000 QPS on a single CPU core.
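The core idea can be sketched as a toy command loop (a hypothetical `MiniRedis` class, nothing like Redis's actual C implementation): all commands funnel into one queue and execute one at a time, so a read‑modify‑write like INCR never needs a lock because nothing else can touch the keyspace mid‑command:

```python
from collections import deque

class MiniRedis:
    """Toy single-threaded store: commands execute strictly in order."""

    def __init__(self):
        self.data = {}
        self.queue = deque()

    def submit(self, *cmd):
        # Clients only enqueue; they never touch self.data directly.
        self.queue.append(cmd)

    def run(self):
        replies = []
        while self.queue:
            op, key, *args = self.queue.popleft()
            if op == "SET":
                self.data[key] = args[0]
                replies.append("OK")
            elif op == "INCR":
                # Read-modify-write is atomic: no other command can
                # interleave, so no lock is needed.
                self.data[key] = int(self.data.get(key, 0)) + 1
                replies.append(self.data[key])
            elif op == "GET":
                replies.append(self.data.get(key))
        return replies

store = MiniRedis()
for _ in range(3):
    store.submit("INCR", "hits")
store.submit("GET", "hits")
replies = store.run()
print(replies)  # [1, 2, 3, 3]
```

Because execution is sequential, the counter can never be lost to a race, which is exactly why Redis commands are atomic without any locking.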
I/O Multiplexing: The Key to Concurrency
To serve thousands of simultaneous client connections on a single thread, Redis relies on efficient I/O multiplexing. On Linux it uses epoll (or the platform‑specific equivalent) to implement non‑blocking I/O and an event loop. This allows the thread to monitor many sockets and react only when a socket becomes readable or writable, avoiding thread blocking and frequent context switches.
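The same pattern is easy to demonstrate with Python's `selectors` module, which picks epoll on Linux (kqueue on BSD/macOS). This is a minimal sketch, not Redis's actual event loop: one thread registers a listening socket plus every client socket, and acts only on whichever sockets the kernel reports as ready:

```python
import selectors
import socket

sel = selectors.DefaultSelector()  # epoll on Linux

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen()
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ, data="accept")

def serve_once():
    """One event-loop turn: react only to sockets the kernel says are ready."""
    for key, _mask in sel.select(timeout=1):
        if key.data == "accept":
            conn, _addr = key.fileobj.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ, data="client")
        else:
            payload = key.fileobj.recv(1024)
            if payload:
                key.fileobj.sendall(payload)  # echo back
            else:
                sel.unregister(key.fileobj)
                key.fileobj.close()

# Two clients, served concurrently by the single server thread.
port = listener.getsockname()[1]
c1 = socket.create_connection(("127.0.0.1", port))
c2 = socket.create_connection(("127.0.0.1", port))
c1.sendall(b"PING-1")
c2.sendall(b"PING-2")
for _ in range(4):  # a few loop turns: accepts first, then echoes
    serve_once()
r1, r2 = c1.recv(1024), c2.recv(1024)
print(r1, r2)
c1.close(); c2.close(); listener.close()
```

No thread ever blocks on a single connection: `sel.select()` sleeps until any registered socket has work, which is how one thread can juggle thousands of clients.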
Optimized Data Structures: The Performance Weapon
Redis provides a set of core data types—String, Hash, List, Set, Sorted Set (ZSet)—implemented in C with memory‑layout and algorithm optimizations tailored to common use cases. Each structure balances speed and memory usage:
- Hash tables back O(1) average‑case key lookups.
- Listpacks (formerly ziplists) shrink the memory footprint of small collections, while skip lists give sorted sets O(log n) ordered range queries.
- Linked lists and quicklists make double‑ended push/pop operations cheap.
These specialized implementations allow typical operations to run at very low cost, contributing to Redis’s overall high performance.
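The sorted set's dual‑index idea is worth a sketch. This hypothetical `TinyZSet` class is not Redis's C implementation (which pairs a hash table with a skip list); it uses a dict plus a bisect‑maintained sorted list to show the same trick: O(1) score lookup by member and ordered range queries on score, from one structure:

```python
import bisect

class TinyZSet:
    """Toy sorted set: two indexes over the same members, like Redis's ZSet."""

    def __init__(self):
        self.scores = {}    # member -> score (O(1) ZSCORE, like the hash table)
        self.ordered = []   # sorted [(score, member)] (ordered range queries)

    def zadd(self, member, score):
        if member in self.scores:
            # Re-adding a member must update BOTH indexes consistently.
            old = (self.scores[member], member)
            self.ordered.pop(bisect.bisect_left(self.ordered, old))
        self.scores[member] = score
        bisect.insort(self.ordered, (score, member))

    def zrangebyscore(self, lo, hi):
        # Binary search to the window edges, then walk the slice in order.
        i = bisect.bisect_left(self.ordered, (lo, ""))
        j = bisect.bisect_right(self.ordered, (hi, "\uffff"))
        return [member for _score, member in self.ordered[i:j]]

z = TinyZSet()
z.zadd("alice", 300)
z.zadd("bob", 150)
z.zadd("carol", 450)
z.zadd("bob", 500)  # score update hits both indexes
print(z.zrangebyscore(200, 500))  # ['alice', 'carol', 'bob']
```

Redis's real skip list makes the insert and range‑search steps O(log n) without Python's list‑shifting cost, but the design lesson is the same: pick the data structure for the access pattern, and keep two cheap indexes rather than one expensive one.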
Mike Chen's Internet Architecture