Why Redis Added Multithreaded I/O: Deep Dive into Architecture and Performance
This article explains the shift from Redis's traditional single‑threaded model to the multithreaded I/O architecture introduced in Redis 6.0, detailing core design principles, performance benchmarks, Go code simulation, interview questions, and practical tuning tips for high‑concurrency workloads.
Understanding Redis’ “Single‑Threaded” Myth
“Single‑threaded” in Redis refers only to the command‑execution path; the rest of the process may use multiple threads.
Limitations of the Pre‑Redis 6.0 Model
Before Redis 6.0 a single main thread handled connection establishment, network I/O, and command execution, causing I/O bottlenecks under high concurrency.
Single‑Threaded Core Flow (Redis ≤ 5.x)
I/O multiplexing (select/epoll/kqueue) lets one thread listen to many sockets. The flow is: accept → read request → execute command → write response, all performed by the main thread.
When many clients or large payloads are involved, the main thread spends most CPU time in network I/O, leading to latency spikes and QPS saturation.
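The pre‑6.0 flow can be sketched in Go as a toy model (not Redis source): one goroutine performs read, execute, and write strictly in sequence, and the in‑memory requests slice stands in for epoll‑ready sockets:

```go
package main

import "fmt"

// singleThreadedLoop models the Redis <= 5.x flow: one thread does
// read -> execute -> write for every client, strictly in sequence.
// requests stands in for sockets that epoll reported as readable.
func singleThreadedLoop(requests []string) []string {
	responses := make([]string, 0, len(requests))
	for _, req := range requests { // epoll hands us ready sockets one by one
		// Read and execute share the same thread, so a slow read
		// on one socket delays every later client.
		resp := execute(req)
		responses = append(responses, resp) // write phase, same thread again
	}
	return responses
}

func execute(cmd string) string {
	switch cmd {
	case "PING":
		return "PONG"
	default:
		return "ERR"
	}
}

func main() {
	fmt.Println(singleThreadedLoop([]string{"PING", "PING"}))
}
```

Because every phase shares one thread, total latency grows linearly with the number of ready sockets, which is exactly the bottleneck described below.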
Performance Bottleneck Example
CPU usage >90 %, with ~60 % spent on network I/O.
Latency rises from 1 ms to >15 ms, causing time‑outs.
With >5 000 connections, I/O multiplexing efficiency drops and QPS declines.
Redis 6.0+ Multithreaded I/O Model
Redis 6.0 introduces a pool of I/O threads (disabled by default; enabled via the io-threads config) that handle only network I/O: reading requests and writing responses. Command execution remains single‑threaded, preserving atomicity.
Design Principles
Command execution stays single‑threaded to avoid lock overhead.
I/O threads process network operations; the number of threads can be tuned via io-threads.
Work distribution follows “I/O thread reads → main thread executes → I/O thread writes”.
I/O threads use lock‑free task queues to avoid contention.
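To illustrate the lock‑free queue idea, here is a minimal single‑producer/single‑consumer ring in Go built on atomics. This is an illustrative sketch, not Redis's actual data structure (Redis pairs per‑thread job lists with atomic pending counters), but it shows how a reader and a writer can hand off work without a mutex:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// spscQueue is a minimal single-producer/single-consumer lock-free ring.
// One thread may call push, another may call pop; no mutex is needed
// because each index is written by exactly one side.
type spscQueue struct {
	buf        [8]int
	head, tail atomic.Uint64 // head: next read slot, tail: next write slot
}

func (q *spscQueue) push(v int) bool {
	t := q.tail.Load()
	if t-q.head.Load() == uint64(len(q.buf)) {
		return false // full
	}
	q.buf[t%uint64(len(q.buf))] = v
	q.tail.Store(t + 1) // publish only after the slot is written
	return true
}

func (q *spscQueue) pop() (int, bool) {
	h := q.head.Load()
	if h == q.tail.Load() {
		return 0, false // empty
	}
	v := q.buf[h%uint64(len(q.buf))]
	q.head.Store(h + 1)
	return v, true
}

func main() {
	var q spscQueue
	q.push(1)
	q.push(2)
	v, _ := q.pop()
	fmt.Println(v)
}
```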
Core Flow (Sequence Diagram)
Client request → main thread accepts → task dispatched to an I/O thread → I/O thread reads request → request handed to main thread → command executed → response returned to I/O thread → I/O thread writes response.
Configuration and Go Example
The io-threads option (default 1, which keeps the extra I/O threads disabled; values above 8 rarely help in practice) controls the number of I/O threads. The following Go program simulates the core logic: a main goroutine accepts connections and distributes them to a pool of I/O workers, which read requests, forward them to executeCommand, and write responses.
```go
package main

import (
	"net"
	"strings"
	"sync"
	"time"
)

// Simulate Redis multithreaded I/O: main thread listens, I/O threads handle read/write
func main() {
	// 1. Configure I/O thread count (simulating Redis io-threads)
	ioThreadNum := 4
	// 2. Create task queue: stores client connections awaiting I/O
	taskQueue := make(chan net.Conn, 1000)
	// 3. Start I/O thread pool
	var wg sync.WaitGroup
	wg.Add(ioThreadNum)
	for i := 0; i < ioThreadNum; i++ {
		go ioThread(i, taskQueue, &wg)
	}
	// 4. Main thread: listen for client connections, dispatch tasks to I/O threads
	listener, err := net.Listen("tcp", ":6379")
	if err != nil {
		panic(err)
	}
	defer listener.Close()
	println("Redis server started, listening on :6379")
	for {
		// Main thread accepts client connections
		conn, err := listener.Accept()
		if err != nil {
			time.Sleep(100 * time.Millisecond)
			continue
		}
		// Dispatch the connection to the I/O thread queue
		taskQueue <- conn
	}
	// Unreachable in this sketch: the accept loop never exits.
	// A real server would close(taskQueue) and wg.Wait() on shutdown.
}

// ioThread: I/O thread, processes client connections: reads requests, writes responses
func ioThread(threadId int, taskQueue chan net.Conn, wg *sync.WaitGroup) {
	defer wg.Done()
	println("IO thread", threadId, "started")
	for conn := range taskQueue {
		// I/O thread handles the client connection: read, execute, write
		handleClient(conn, threadId)
	}
}

// handleClient: simulate an I/O thread handling one client request (read + write response)
func handleClient(conn net.Conn, threadId int) {
	defer conn.Close()
	buf := make([]byte, 1024)
	// 1. I/O thread reads the client request (simulating Redis read I/O)
	n, err := conn.Read(buf)
	if err != nil {
		println("IO thread", threadId, "read error:", err.Error())
		return
	}
	request := string(buf[:n])
	println("IO thread", threadId, "received request:", request)
	// 2. Hand the request to the main thread for command execution
	// (simplified to a direct call; real Redis queues it for the main thread)
	response := executeCommand(request)
	// 3. I/O thread writes the response (simulating Redis write I/O)
	if _, err = conn.Write([]byte(response)); err != nil {
		println("IO thread", threadId, "write error:", err.Error())
		return
	}
	println("IO thread", threadId, "sent response:", response)
}

// executeCommand: simulate the main thread executing a Redis command (single-threaded)
func executeCommand(request string) string {
	// Simulate command execution (GET, SET); real Redis executes commands serially
	time.Sleep(10 * time.Millisecond)
	switch {
	case strings.HasPrefix(request, "GET"):
		return "OK: value123" // simulated return value
	case strings.HasPrefix(request, "SET"):
		return "OK"
	}
	return "ERR: unknown command"
}
```

Key Optimizations
Parallelize request reading across multiple I/O threads.
Parallelize response writing across multiple I/O threads.
Parallelize connection establishment to reduce wait time under high concurrency.
Note: I/O threads never execute commands, so there is no multithreaded contention; command execution remains single‑threaded, guaranteeing atomicity.
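Putting the three optimizations together, one event‑loop round can be sketched in Go. eventLoopRound and the in‑memory clients slice are stand‑ins for sockets, not Redis internals; the point is the shape of the round: parallel read, serial execute, parallel write:

```go
package main

import (
	"fmt"
	"sync"
)

// eventLoopRound sketches one Redis 6.0-style event-loop iteration:
// reads fan out across I/O workers, execution stays on one goroutine,
// writes fan out again. clients stands in for sockets with pending data.
func eventLoopRound(clients []string) []string {
	// Phase 1: parallel read - each I/O worker reads its share of clients.
	requests := make([]string, len(clients))
	var wg sync.WaitGroup
	for i, c := range clients {
		wg.Add(1)
		go func(i int, c string) { // worker goroutine = I/O thread
			defer wg.Done()
			requests[i] = c // stands in for draining the socket buffer
		}(i, c)
	}
	wg.Wait() // main thread waits until every read finishes

	// Phase 2: serial execute - single thread, so no locks on the keyspace.
	responses := make([]string, len(requests))
	for i, req := range requests {
		responses[i] = "OK:" + req
	}

	// Phase 3: parallel write - fan the responses back out to the workers.
	for i := range responses {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			_ = responses[i] // stands in for writing to the socket
		}(i)
	}
	wg.Wait()
	return responses
}

func main() {
	fmt.Println(eventLoopRound([]string{"GET k1", "SET k2 v"}))
}
```

The barrier between phases is why atomicity survives: no worker ever touches the keyspace, only its own socket.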
Performance Comparison
Benchmarks on identical hardware (4‑core, 8 GB RAM, Intel Xeon E5‑2670) compare Redis 5.0.14 (single‑threaded) with Redis 6.2.10 (4 I/O threads). The test suite uses redis‑benchmark for three scenarios: small strings (1 KB), large strings (10 KB), and 10 000 concurrent clients.
Results Overview
String read/write (1 KB): QPS ↑ 78 % (1.8 → 3.2 M/s), latency ↓ 44 % (0.9 ms → 0.5 ms).
Large data (10 KB): QPS ↑ 217 % (0.6 → 1.9 M/s), latency ↓ 68 % (15.3 ms → 4.8 ms).
High‑concurrency connections (10 000 clients): QPS ↑ 133 % (1.2 → 2.8 M/s), latency ↓ 63 % (8.7 ms → 3.2 ms).
Real‑world cases confirm these gains: an e‑commerce service saw QPS rise from 5 k to 18 k and latency drop from 18 ms to <5 ms after enabling four I/O threads; a social platform reduced CPU usage from 98 % to 90 % and doubled QPS.
However, setting the I/O thread count higher than the number of CPU cores introduces context‑switch overhead. The Redis documentation suggests leaving headroom: roughly 2 or 3 I/O threads on a 4‑core machine and about 6 on 8 cores, with more than 8 threads rarely helping.
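A typical redis.conf fragment for enabling the feature looks like the following (the thread count is workload‑dependent; note that by default the extra threads handle only writes):

```conf
# redis.conf: enable multithreaded I/O (Redis 6.0+)
io-threads 4             # e.g. on an 8-core machine; default is 1 (feature off)
io-threads-do-reads yes  # also parallelize reads; default no (writes only)
```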
Interview Hot‑Spot Questions
Why did Redis 6.0 add multithreading?
To eliminate the I/O bottleneck while preserving the single‑threaded command execution model, which provides lock‑free, low‑overhead processing for memory‑bound operations.
Why does command execution remain single‑threaded?
Because executing commands concurrently would require locking shared data structures, destroying the atomicity and consistency guarantees that make Redis fast.
What is the workflow of the multithreaded I/O model?
Main thread creates an I/O thread pool and listens with epoll.
When a client event occurs, the main thread dispatches the task to an idle I/O thread.
I/O threads handle connection, read request, and forward it to the main thread.
Main thread executes the command and returns the result.
I/O threads write the response back to the client.
How to tune the model for best performance?
Set io-threads below your CPU core count (for example, 2 or 3 threads on 4 cores), leaving at least one core free for the main thread.
Enable TCP_NODELAY and adjust socket buffers.
Avoid very large payloads (>100 KB) on a single instance; use clustering or data sharding.
Conclusion
The multithreaded I/O model does not discard Redis’ “single‑threaded” myth; it augments it by parallelizing only the I/O path. This design keeps command execution atomic while dramatically improving throughput and latency in high‑concurrency or large‑payload workloads.
Architecture & Thinking
🍭 Frontline tech director and chief architect at top-tier companies 🥝 Years of deep experience in internet, e‑commerce, social, and finance sectors 🌾 Committed to publishing high‑quality articles covering core technologies of leading internet firms, application architecture, and AI breakthroughs.