Redis Lazy Free and Multi‑Threaded I/O: Architecture, Implementation, and Limitations
This article explains how Redis has evolved from a purely single-threaded in-memory store: Redis 4.0's Lazy Free mechanism removes blocking on large-key deletions, and Redis 6.0's multi-threaded I/O parallelizes network reads and writes. It walks through the underlying event model, the relevant source code, the performance trade-offs, and a comparison with Tair's threading approach.
Redis, a high‑performance in‑memory cache, traditionally runs a single‑threaded event loop, which limits it to one CPU core, can block for seconds when deleting large keys, and caps QPS.
To address these issues, Redis 4.0 introduced the Lazy Free mechanism and Redis 6.0 added multi‑threaded I/O, gradually moving the server toward a multi‑threaded architecture.
Single‑Threaded Principle
Redis operates as an event‑driven program handling two kinds of events:
File events: socket operations such as accept, read, write, and close.
Time events: scheduled tasks such as key expiration and server statistics.
The server uses a reactor pattern with I/O multiplexing, processing ready file events before time events, all within a single thread.
Lazy Free Mechanism
When a slow operation (e.g., deleting a Set with millions of members, or FLUSHALL) would block the main thread, Redis 4.0 can instead delete asynchronously: the key is unlinked from the keyspace immediately, and the expensive memory reclamation is delegated to a background thread. This behavior is exposed through the UNLINK command (and, in later versions, through DEL itself when lazyfree-lazy-user-del is enabled).
Example of the DEL command entry point:
void delCommand(client *c) {
    delGenericCommand(c, server.lazyfree_lazy_user_del);
}

/* This command implements DEL and LAZYDEL. */
void delGenericCommand(client *c, int lazy) {
    int numdel = 0, j;

    for (j = 1; j < c->argc; j++) {
        expireIfNeeded(c->db, c->argv[j]);
        /* Delete synchronously or lazily depending on configuration. */
        int deleted = lazy ? dbAsyncDelete(c->db, c->argv[j]) :
                             dbSyncDelete(c->db, c->argv[j]);
        if (deleted) {
            signalModifiedKey(c, c->db, c->argv[j]);
            notifyKeyspaceEvent(NOTIFY_GENERIC, "del", c->argv[j], c->db->id);
            server.dirty++;
            numdel++;
        }
    }
    addReplyLongLong(c, numdel);
}

The asynchronous path computes a "free effort" for the value (roughly, the number of allocations that must be released); if the effort exceeds LAZYFREE_THRESHOLD and the object is not shared (refcount == 1), the actual free is queued as a background job:
#define LAZYFREE_THRESHOLD 64
int dbAsyncDelete(redisDb *db, robj *key) {
    /* Removing the expire entry is cheap: it only holds a shared pointer. */
    if (dictSize(db->expires) > 0) dictDelete(db->expires, key->ptr);
    /* Unlink the entry from the keyspace without freeing it yet. */
    dictEntry *de = dictUnlink(db->dict, key->ptr);
    if (de) {
        robj *val = dictGetVal(de);
        size_t free_effort = lazyfreeGetFreeEffort(val);
        /* Large, non-shared values are handed to a background thread. */
        if (free_effort > LAZYFREE_THRESHOLD && val->refcount == 1) {
            atomicIncr(lazyfree_objects, 1);
            bioCreateBackgroundJob(BIO_LAZY_FREE, val, NULL, NULL);
            dictSetVal(db->dict, de, NULL);
        }
    }
    if (de) {
        /* Frees the entry; the value is NULL if it was handed off above. */
        dictFreeUnlinkedEntry(db->dict, de);
        if (server.cluster_enabled) slotToKeyDel(key->ptr);
        return 1;
    } else {
        return 0;
    }
}

As part of this work, Redis also stopped using shared objects inside aggregate data types, which eliminates refcount contention on their elements and prepares the codebase for true multi-threading.
Multi‑Threaded I/O and Its Limitations
Redis 6.0 adds dedicated I/O threads that handle read/write system calls, while the main event thread continues to process commands. The workflow is:
The event thread distributes ready read events to a pool of I/O threads.
All I/O threads complete their reads.
The event thread processes the commands.
Write events are again handed to the I/O threads for sending responses.
Sample code for distributing pending reads:
int handleClientsWithPendingReadsUsingThreads(void) {
    if (!server.io_threads_active || !server.io_threads_do_reads) return 0;
    int processed = listLength(server.clients_pending_read);
    if (processed == 0) return 0;

    /* Distribute the clients across the I/O thread lists, round robin. */
    listIter li;
    listNode *ln;
    listRewind(server.clients_pending_read, &li);
    int item_id = 0;
    while ((ln = listNext(&li))) {
        client *c = listNodeValue(ln);
        int target_id = item_id % server.io_threads_num;
        listAddNodeTail(io_threads_list[target_id], c);
        item_id++;
    }

    /* Give the start condition to the waiting threads by setting the
     * pending counters; thread 0 is the main thread itself, which
     * processes its own slice (io_threads_list[0]) in between. */
    io_threads_op = IO_THREADS_OP_READ;
    for (int j = 1; j < server.io_threads_num; j++)
        io_threads_pending[j] = listLength(io_threads_list[j]);

    /* Busy-wait until all I/O threads finish their reads. */
    while (1) {
        unsigned long pending = 0;
        for (int j = 1; j < server.io_threads_num; j++)
            pending += io_threads_pending[j];
        if (pending == 0) break;
    }
    /* ... clients_pending_read is then drained and commands executed ... */
    return processed;
}

The I/O thread main loop spins until work is assigned, then performs either reads or writes for its clients, depending on the global io_threads_op flag:
void *IOThreadMain(void *myid) {
    /* Each thread is passed its numeric id at creation time. */
    long id = (unsigned long) myid;
    listIter li;
    listNode *ln;

    while (1) {
        /* Spin-wait until the event thread assigns work by setting
         * io_threads_pending[id] to a non-zero value. */
        for (int j = 0; j < 1000000; j++) {
            if (io_threads_pending[id] != 0) break;
        }
        if (io_threads_pending[id] == 0) continue;

        /* Process the assigned clients, performing the single
         * operation (read or write) chosen for this round. */
        listRewind(io_threads_list[id], &li);
        while ((ln = listNext(&li))) {
            client *c = listNodeValue(ln);
            if (io_threads_op == IO_THREADS_OP_WRITE) {
                writeToClient(c, 0);
            } else if (io_threads_op == IO_THREADS_OP_READ) {
                readQueryFromClient(c->conn);
            } else {
                serverPanic("io_threads_op value is unknown");
            }
        }
        listEmpty(io_threads_list[id]);
        io_threads_pending[id] = 0;
    }
}

Despite the added parallelism, the design is not a full pipeline: the event thread busy-waits until every I/O thread has finished, burning CPU while polling the pending counters, and in any given round the I/O threads can all perform only reads or only writes, never a mix.
Tair’s Multi‑Threaded Design
Tair separates responsibilities into a Main Thread (connection handling), I/O Threads (network I/O and command parsing), and Worker Threads (command execution). Communication between I/O and Worker threads uses lock‑free queues and pipes, achieving higher throughput than Redis’s native multi‑threaded I/O.
Conclusion
Redis 4.0’s Lazy Free solves blocking on large‑key deletions, while Redis 6.0’s I/O threading introduces limited multi‑threaded processing with modest performance gains (≈2×). Compared with Tair’s more elegant threading (≈3×), Redis’s approach remains constrained, and the Redis author favors cluster scaling and slow‑operation threading over full I/O threading.
References
Lazy Redis is better Redis
An update about Redis developments in 2019
An Analysis of Alibaba Cloud Redis's Multi-Threaded Performance Improvements (阿里云Redis多线程性能提升思路解析)
Top Architect