Understanding Redis 6.0 Multithreading Model and Configuration
The article explains Redis 6.0's new features, especially the multithreaded network I/O model: why earlier versions were single-threaded, how the I/O threads cooperate with the main thread, how to configure the feature, recommended thread counts, and the model's limitations and performance implications.
Redis 6.0, officially released in May 2020, brings many exciting features:
Multithreaded handling of network I/O.
Client‑side caching.
Fine‑grained access control lists (ACL).
Support for the RESP3 protocol.
Obsolete RDB replication files are removed.
Faster RDB file loading.
The most discussed features are the multithreaded I/O model and client-side caching; using Redis 6.0 effectively requires understanding how they work.
Earlier Redis versions were single-threaded because the CPU was rarely the bottleneck; Redis is limited by memory and network. With pipelining, a single thread can handle up to one million requests per second, and a single thread simplifies maintenance and avoids concurrency complexities such as locks, context switches, and potential deadlocks.
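To make the pipelining claim concrete, here is a toy Python sketch (not Redis code; the newline-based "protocol" and the echo_server helper are invented for illustration) that contrasts one network round trip per command with sending the whole batch in a single write, which is exactly what pipelining buys:

```python
import socket
import threading

def echo_server(conn, n):
    # Reply "+OK\r\n" to each newline-terminated command, then close.
    buf = b""
    replies = 0
    while replies < n:
        buf += conn.recv(4096)
        while b"\n" in buf:
            _, buf = buf.split(b"\n", 1)
            conn.sendall(b"+OK\r\n")
            replies += 1
    conn.close()

def run(pipelined, n=100):
    client, server = socket.socketpair()
    t = threading.Thread(target=echo_server, args=(server, n))
    t.start()
    replies = 0
    if pipelined:
        client.sendall(b"PING\n" * n)      # one write carries all n commands
        buf = b""
        while replies < n:
            buf += client.recv(4096)
            replies = buf.count(b"\r\n")   # each "+OK\r\n" reply ends in \r\n
    else:
        for _ in range(n):
            client.sendall(b"PING\n")      # one round trip per command
            client.recv(4096)
            replies += 1
    t.join()
    client.close()
    return replies

print(run(pipelined=False), run(pipelined=True))
```

Both modes receive the same 100 replies, but the pipelined variant pays the syscall and round-trip cost once per batch instead of once per command, which is why a single Redis thread can keep up with enormous request rates.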
Redis processes a client request in several sequential stages—socket read, command parsing, execution, and socket write—all performed by the main thread, which is why the term “single‑threaded” refers to this entire pipeline.
With the rise of faster network hardware, the network read/write speed can outpace a single thread, making I/O the new bottleneck. To address this, Redis 6.0 introduces multiple I/O threads that handle only the network read/write part, while command execution remains on the main thread.
Typical ways to improve network I/O include techniques such as zero-copy or kernel-bypass frameworks like DPDK, or leveraging multithreading to parallelize socket handling, as Memcached does.
The multithreaded I/O workflow is as follows:
The main thread accepts connections and places the sockets into a global pending queue.
It polls the queue and assigns readable sockets to I/O threads.
The main thread waits until all I/O threads have finished reading their assigned sockets.
The main thread parses the received Redis commands.
The main thread waits again while the I/O threads write the responses back to the sockets.
The main thread clears the global queue and waits for the next client request.
This design splits the network I/O work into parallel threads while keeping command execution single‑threaded, preserving compatibility and simplicity.
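The cooperation above can be sketched in a few lines of Python. This is a toy model, not Redis's actual C implementation: the names are illustrative, "sockets" are just strings, and the busy-wait flags mimic how Redis's main thread and I/O threads synchronize without blocking on locks.

```python
import threading

NUM_IO = 2
jobs = [[] for _ in range(NUM_IO)]   # per-thread assignment lists
counts = [0] * NUM_IO                # outstanding work per I/O thread
results = []                         # data "read" by the I/O threads
lock = threading.Lock()

def io_thread(tid):
    # Each I/O thread busy-waits for work, mirroring Redis's spin-wait design.
    while True:
        if counts[tid] == 0:
            continue
        for sock in jobs[tid]:
            with lock:               # simulate reading the socket's bytes
                results.append(sock.upper())
        jobs[tid].clear()
        counts[tid] = 0              # signal the main thread this batch is done

for tid in range(NUM_IO):
    threading.Thread(target=io_thread, args=(tid,), daemon=True).start()

def main_loop(clients):
    # Steps 1-2: the main thread distributes readable "sockets" round-robin.
    for i, client in enumerate(clients):
        jobs[i % NUM_IO].append(client)
    for tid in range(NUM_IO):
        counts[tid] = len(jobs[tid])
    # Step 3: the main thread spin-waits until every I/O thread has drained its list.
    while any(counts):
        pass
    # Steps 4-6: parsing and command execution stay on the main thread, serially.
    out, results[:] = sorted(results), []
    return out

print(main_loop(["ping", "get k", "set k v"]))
```

Note how only the byte-level I/O is parallel; everything after the spin-wait runs on one thread, which is why no locking is needed around the actual data store.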
Multithreading is disabled by default. To enable it, edit redis.conf and set:

io-threads-do-reads yes

Then specify the number of I/O threads, for example:

io-threads 4

By default the extra threads are used only for writing responses; io-threads-do-reads yes makes them handle reads as well. Official recommendations suggest using 2–3 I/O threads on a 4-core machine and up to 6 threads on an 8-core machine; more than 8 threads usually provides no additional benefit.
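For reference, the two settings combined in a redis.conf fragment look like this (the value 4 is just an example suited to a machine with roughly six to eight cores):

```
# Enable extra I/O threads (in Redis 6.0 this cannot be changed at runtime)
io-threads 4
# Use the I/O threads for socket reads as well, not only for writes
io-threads-do-reads yes
```

After restarting Redis, the effective values can be checked with redis-cli CONFIG GET io-threads.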
Key limitations: only network I/O is multithreaded; all command execution still runs on the main thread, so the model is not a full Multi‑Reactors/Master‑Workers architecture and does not fully exploit multi‑core CPUs.
To improve Redis performance, two directions are highlighted: optimizing network I/O (zero‑copy, DPDK, multithreading) and accelerating memory read/write, the latter being dependent on hardware advances.
[Figure: architecture diagram of the Redis multithreaded I/O model]
[Figure: cooperation flow between the main thread and the I/O threads]
Overall, Redis 6.0’s multithreaded I/O model offers a pragmatic compromise: it retains the simplicity of the original single‑threaded command execution while allowing parallel network processing to better utilize modern multi‑core servers.