Why Can Redis Handle Over 100,000 QPS? A Deep Technical Breakdown
Redis can sustain over 100,000 queries per second thanks to four key pillars—memory‑first storage, highly optimized data structures like SDS and skip lists, a single‑threaded event loop with epoll multiplexing, and multi‑core I/O threading—each explained with benchmarks, code samples, and real‑world comparisons.
Introduction
During a recent interview at Xiaomi, a candidate was asked a classic question: "Why can Redis support 100k+ QPS?" A superficial answer of "because it is an in‑memory database" was rejected, prompting a deeper technical exploration.
1. What Does 100k+ QPS Mean?
Official benchmark results on a typical laptop show:
GET: ~103,504 QPS
SET: ~100,894 QPS
INCR: ~99,662 QPS
When pipelining INCR requests, QPS can surge to 1,061,301, breaking the million-ops barrier.
The interviewer expects more than the generic "memory is fast" answer.
2. Pillar One: Memory Is King
All Redis data resides in RAM. A memory access costs about 0.1 µs, whereas a random disk I/O takes ~10 ms—making memory roughly 100,000× faster than disk.
Example Comparison: Querying a user ID among 10 million users.
MySQL (even with indexes) typically needs 2–3 disk I/Os (20–30 ms).
Redis performs an in-memory hash lookup in ~0.1 ms.
This illustrates the magnitude of the speed gap.
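As a rough illustration of the in-memory side, here is a minimal Java sketch (the class name and data sizes are ours, not from the article) that builds a large hash table and times a single lookup:

```java
import java.util.HashMap;
import java.util.Map;

public class MemoryLookup {
    // Build a map of many user IDs, then time one hash lookup.
    public static long lookupNanos(int users, int target) {
        Map<Integer, String> table = new HashMap<>(users * 2);
        for (int i = 0; i < users; i++) {
            table.put(i, "user-" + i);
        }
        long start = System.nanoTime();
        String value = table.get(target);       // the in-memory hash lookup
        long elapsed = System.nanoTime() - start;
        if (value == null) throw new IllegalStateException("missing key");
        return elapsed;
    }

    public static void main(String[] args) {
        long nanos = lookupNanos(10_000_000, 7_654_321);
        System.out.println("in-memory lookup took " + nanos + " ns");
    }
}
```

On typical hardware the lookup completes in well under a microsecond — several orders of magnitude below a single random disk I/O.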
3. Pillar Two: Extreme Data Structures
Redis offers five core structures—strings, hashes, lists, sets, and sorted sets—each finely tuned for specific workloads.
3.1 Simple Dynamic String (SDS)
```c
struct sdshdr {
    int len;     // used length
    int free;    // unused length
    char buf[];  // byte array
};
```
O(1) Length: Directly read the len field.
Buffer Overflow Prevention: Checks free space before modification and expands automatically.
Pre-allocation: Allocates extra space on growth to reduce reallocations.
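The same ideas can be sketched in Java (a toy model, not Redis's actual C implementation): O(1) length via a stored field, bounds-checked append, and pre-allocation that doubles the needed size, capped the way Redis caps extra space at 1 MB:

```java
import java.util.Arrays;

// Toy SDS-style buffer: O(1) length, bounds-checked append,
// and pre-allocation on growth to reduce reallocations.
public class SimpleDynamicString {
    private byte[] buf;  // backing byte array
    private int len;     // used length; free space = buf.length - len

    private static final int PREALLOC_CAP = 1024 * 1024; // mimic Redis's 1 MB cap

    public SimpleDynamicString(String s) {
        byte[] b = s.getBytes();
        this.buf = Arrays.copyOf(b, b.length);
        this.len = b.length;
    }

    public int length() {   // O(1): just read the field
        return len;
    }

    public void append(String s) {
        byte[] extra = s.getBytes();
        int needed = len + extra.length;
        if (needed > buf.length) {  // not enough free space: grow with pre-allocation
            int newCap = needed <= PREALLOC_CAP ? needed * 2 : needed + PREALLOC_CAP;
            buf = Arrays.copyOf(buf, newCap);
        }
        System.arraycopy(extra, 0, buf, len, extra.length);
        len = needed;
    }

    @Override
    public String toString() {
        return new String(buf, 0, len);
    }
}
```

Because growth over-allocates, a sequence of appends triggers far fewer reallocations than growing exactly to size each time.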
3.2 Ziplist (Compressed List)
For small hashes or lists, Redis stores elements in a contiguous memory block called a ziplist, eliminating pointer overhead and improving cache utilization.
Conditions: applied automatically while the collection stays small — element counts and value lengths below configurable thresholds (e.g. values under 64 bytes; entry-count limits such as 128 or 512 depending on type and version).
3.3 Skip List
Used as the underlying implementation for sorted sets (ZSET). A multi‑level linked list provides O(log N) lookups with simpler code than balanced trees.
Search starts from the highest level and jumps forward, achieving high efficiency.
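A minimal Java skip list (illustrative only — Redis's zskiplist additionally stores spans and backward pointers for rank queries) shows the level-by-level jump:

```java
import java.util.Random;

// Minimal skip list: insert and O(log N) expected search,
// sketching how ZSET lookups jump forward level by level.
public class SkipList {
    private static final int MAX_LEVEL = 16;
    private final Node head = new Node(Integer.MIN_VALUE, MAX_LEVEL);
    private final Random rnd = new Random(42);
    private int level = 1;

    private static final class Node {
        final int score;
        final Node[] next;
        Node(int score, int levels) { this.score = score; this.next = new Node[levels]; }
    }

    public void insert(int score) {
        Node[] update = new Node[MAX_LEVEL];
        Node x = head;
        for (int i = level - 1; i >= 0; i--) {   // descend from the top level
            while (x.next[i] != null && x.next[i].score < score) x = x.next[i];
            update[i] = x;
        }
        int lvl = 1;                             // randomized node height
        while (lvl < MAX_LEVEL && rnd.nextDouble() < 0.25) lvl++;
        if (lvl > level) {
            for (int i = level; i < lvl; i++) update[i] = head;
            level = lvl;
        }
        Node n = new Node(score, lvl);
        for (int i = 0; i < lvl; i++) {
            n.next[i] = update[i].next[i];
            update[i].next[i] = n;
        }
    }

    public boolean contains(int score) {
        Node x = head;
        for (int i = level - 1; i >= 0; i--) {   // jump forward at each level
            while (x.next[i] != null && x.next[i].score < score) x = x.next[i];
        }
        x = x.next[0];
        return x != null && x.score == score;
    }
}
```

Each level skips over runs of nodes below it, so a search touches only O(log N) nodes on average while the code stays far simpler than a balanced tree's rebalancing logic.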
3.4 Incremental Rehash
When a hash table expands, Redis migrates entries gradually instead of moving all keys at once, spreading the cost across subsequent operations and avoiding service stalls.
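A toy Java version of the two-table scheme (names and sizing policy are ours, not Redis's) migrates one old bucket per operation, so no single request pays for the whole resize:

```java
import java.util.ArrayList;
import java.util.List;

// Toy dictionary with Redis-style incremental rehash: two bucket arrays
// coexist during a resize, and each operation migrates one old bucket.
public class IncrementalDict {
    private static final class Entry {
        final int key;
        final String value;
        Entry(int key, String value) { this.key = key; this.value = value; }
    }

    private List<Entry>[] oldBuckets;  // non-null only while a rehash is in progress
    private List<Entry>[] buckets;
    private int rehashIndex;           // next old bucket to migrate
    private int size;

    @SuppressWarnings("unchecked")
    public IncrementalDict() { buckets = new List[4]; }

    public void put(int key, String value) {
        stepRehash();
        if (oldBuckets == null && size >= buckets.length) startRehash(buckets.length * 2);
        bucket(buckets, key).add(new Entry(key, value));
        size++;
    }

    public String get(int key) {
        stepRehash();
        if (oldBuckets != null) {      // during rehash, check the old table first
            for (Entry e : bucket(oldBuckets, key)) if (e.key == key) return e.value;
        }
        for (Entry e : bucket(buckets, key)) if (e.key == key) return e.value;
        return null;
    }

    @SuppressWarnings("unchecked")
    private void startRehash(int newCapacity) {
        oldBuckets = buckets;
        buckets = new List[newCapacity];
        rehashIndex = 0;
    }

    private void stepRehash() {
        if (oldBuckets == null) return;
        // Migrate a single old bucket per call, spreading the cost out.
        while (rehashIndex < oldBuckets.length && oldBuckets[rehashIndex] == null) rehashIndex++;
        if (rehashIndex < oldBuckets.length) {
            for (Entry e : oldBuckets[rehashIndex]) bucket(buckets, e.key).add(e);
            oldBuckets[rehashIndex] = null;
            rehashIndex++;
        }
        if (rehashIndex >= oldBuckets.length) oldBuckets = null;  // rehash finished
    }

    private List<Entry> bucket(List<Entry>[] table, int key) {
        int i = Math.floorMod(key, table.length);
        if (table[i] == null) table[i] = new ArrayList<>();
        return table[i];
    }
}
```

Reads consult the old table first and writes always land in the new one — the same invariants Redis's dict maintains while `rehashidx` advances.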
4. Pillar Three: Single Thread + I/O Multiplexing
Redis processes all commands on a single main thread.
CPU Not a Bottleneck: Commands are memory-bound and complete in microseconds, so a single core is rarely saturated.
No Lock Contention: With one thread there is no multi-thread synchronization overhead.
I/O Multiplexing: The thread uses epoll to monitor thousands of connections and handles events only as they occur.
This design yields high concurrency while keeping the core logic simple and reliable.
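The same pattern can be sketched with java.nio's Selector, which is backed by epoll on Linux — one thread watching many connections and touching only those with ready events. This is a bare-bones echo loop for illustration, not Redis's actual ae event library:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// Single-threaded event loop: one Selector multiplexes every connection.
public class EchoEventLoop implements Runnable {
    private final Selector selector;
    private final ServerSocketChannel server;
    private volatile boolean running = true;

    public EchoEventLoop(int port) throws IOException {
        selector = Selector.open();
        server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(port));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
    }

    public void stop() { running = false; selector.wakeup(); }

    @Override
    public void run() {
        try {
            while (running) {
                selector.select(100);            // wait until some connection is ready
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isAcceptable()) {    // new client connection
                        SocketChannel c = server.accept();
                        c.configureBlocking(false);
                        c.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {  // client sent data: echo it back
                        SocketChannel c = (SocketChannel) key.channel();
                        ByteBuffer buf = ByteBuffer.allocate(1024);
                        int n = c.read(buf);
                        if (n < 0) { key.cancel(); c.close(); continue; }
                        buf.flip();
                        c.write(buf);
                    }
                }
            }
            server.close();
            selector.close();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The thread sleeps until the kernel reports readiness, then services only the ready sockets — no per-connection threads, no locks.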
5. Pillar Four: Multi‑Core I/O Utilization
Before Redis 6.0, both network read and write were handled by the single thread, potentially becoming a bottleneck under heavy traffic.
Since 6.0, Redis introduces I/O threading for network read/write, while command execution remains single‑threaded to preserve atomicity.
IO Read: Multiple threads concurrently read client requests and parse the protocol.
Command Execution: The main thread executes commands sequentially, guaranteeing atomicity.
IO Write: Multiple threads concurrently write responses back to clients.
This fully exploits multi‑core CPUs for network handling while keeping the core logic simple.
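A simplified Java model of this split (an illustrative sketch with invented names, not Redis source): a pool of I/O workers parses raw requests concurrently, while every parsed command is handed to one single-threaded executor, so execution stays atomic without locks:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Parse in parallel on I/O threads; execute serially on one command thread.
public class ThreadedIoModel {
    private final ExecutorService ioThreads = Executors.newFixedThreadPool(4);
    private final ExecutorService commandThread = Executors.newSingleThreadExecutor();
    private final ConcurrentHashMap<String, Long> store = new ConcurrentHashMap<>();

    // e.g. submit("INCR counter") — parsing happens on an I/O thread,
    // the increment itself on the single command thread.
    public CompletableFuture<Long> submit(String rawRequest) {
        CompletableFuture<Long> result = new CompletableFuture<>();
        ioThreads.execute(() -> {
            String key = rawRequest.trim().split(" ")[1];  // protocol parsing (parallel)
            commandThread.execute(() ->                    // command execution (serial)
                result.complete(store.merge(key, 1L, Long::sum)));
        });
        return result;
    }

    public void shutdown() {
        ioThreads.shutdown();
        commandThread.shutdown();
    }
}
```

Because only one thread ever mutates the store, concurrent INCRs never interleave — the same atomicity guarantee Redis preserves by keeping command execution single-threaded.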
6. Other Performance Boosters
Pipeline Batch Operations
```java
Jedis jedis = new Jedis("localhost");
Pipeline p = jedis.pipelined();
for (int i = 0; i < 1000; i++) {
    p.incr("counter");
}
p.sync();
```
Sending many commands in one round-trip reduces network latency.
Avoid Large Keys
Oversized keys (values in the tens of kilobytes and beyond) can stall the single-threaded server. Use redis-cli --bigkeys to detect them.
Reasonable Persistence
During load testing, disabling persistence (RDB/AOF) avoids interference.
7. Advantages, Disadvantages, and Suitable Scenarios
Advantages: Extremely high performance (100k+ QPS), rich data structures, persistence guarantees, high-availability clustering.
Disadvantages: High memory cost, single-threaded command path can block, limited single-node capacity, risk of large keys.
Suitable Scenarios: Cache acceleration, real-time counters, distributed locks, leaderboards/social feeds.
Conclusion
Redis achieves >100 k QPS through the combined effect of four pillars:
Memory-First Storage: Bypasses disk I/O.
Optimized Data Structures: SDS, ziplist, skip list, incremental rehash, etc.
Single-Threaded Event Loop with epoll: Eliminates lock contention.
Multi-Core I/O Threading: Boosts network throughput while keeping core logic simple.
Understanding both the "what" and the "why" equips engineers to harness Redis’s performance effectively.
This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact us and we will review it promptly.
Su San Talks Tech
Su San, former staff at several leading tech companies, is a top creator on Juejin and a premium creator on CSDN, and runs the free coding practice site www.susan.net.cn.
