
What Happens Under the Hood When a Redis Command Executes? – A Deep Dive into Redis’s Reactor Model

This article explains the three‑stage execution flow of a Redis command—connection establishment, command processing, and result return—detailing how the single‑threaded reactor pattern drives event handling, how commands are parsed and dispatched, and how the output is sent back, with code snippets and a comparison to Netty’s multithreaded reactor.

Tech Freedom Circle

Redis command execution lifecycle

When a client issues a command, Redis processes it in three tightly coupled stages, all driven by a single-threaded event loop (aeEventLoop).

Stage 1 – Connection establishment: The reactor detects a readable event on the listening socket and invokes acceptTcpHandler. This function calls anetTcpAccept (which wraps the OS accept call), creates a client object via createClient, configures the socket as non-blocking, disables Nagle's algorithm (TCP_NODELAY), enables keep-alive, and then registers a readable file event that points to readQueryFromClient.

void acceptTcpHandler(aeEventLoop *el, int fd, void *privdata, int mask) {
    char cip[NET_IP_STR_LEN];
    int cport;
    int cfd = anetTcpAccept(server.neterr, fd, cip, sizeof(cip), &cport);
    if (cfd == ANET_ERR) return;
    acceptCommonHandler(cfd,0,cip);
}

Stage 2 – Command processing: When the client socket becomes readable, readQueryFromClient reads raw bytes into the client's querybuf. It checks for buffer overflow, updates statistics, and, for master-replication clients, stores the raw data in pending_querybuf. After the read, it calls processInputBuffer.

void readQueryFromClient(aeEventLoop *el, int fd, void *privdata, int mask) {
    client *c = (client*)privdata;
    int nread, readlen = PROTO_IOBUF_LEN;
    size_t qblen = sdslen(c->querybuf);
    c->querybuf = sdsMakeRoomFor(c->querybuf, readlen);   /* reserve space */
    nread = read(fd, c->querybuf+qblen, readlen);
    if (nread <= 0) { /* error and EOF handling omitted */ return; }
    sdsIncrLen(c->querybuf,nread);
    /* protect against oversized input buffers */
    if (sdslen(c->querybuf) > server.client_max_querybuf_len) {
        freeClient(c);
        return;
    }
    if (!(c->flags & CLIENT_MASTER)) {
        processInputBuffer(c);
    } else {
        /* replication handling omitted for brevity */
    }
}

processInputBuffer determines the request type (inline vs. multibulk), parses the arguments into argv and argc, and finally calls processCommand.

void processInputBuffer(client *c) {
    while(sdslen(c->querybuf)) {
        if (!c->reqtype) {
            c->reqtype = (c->querybuf[0] == '*') ?
                PROTO_REQ_MULTIBULK : PROTO_REQ_INLINE;
        }
        if (c->reqtype == PROTO_REQ_INLINE) {
            if (processInlineBuffer(c) != C_OK) break;
        } else {
            if (processMultibulkBuffer(c) != C_OK) break;
        }
        if (c->argc == 0) {
            resetClient(c);
        } else {
            if (processCommand(c) == C_OK) {
                if (!(c->flags & CLIENT_BLOCKED) || c->btype != BLOCKED_MODULE)
                    resetClient(c);
            }
        }
    }
}

processCommand performs the following checks in order:

1. Special-case QUIT (reply OK and mark the client for closure).

2. Look up the command in redisCommandTable via lookupCommand.

3. Validate the argument count (arity).

4. Enforce authentication, cluster redirection, maxmemory limits, write-command restrictions on slaves, read-only mode, PUB/SUB state, loading state, Lua script state, and transaction rules.

5. If the command passes all checks, invoke call(c, CMD_CALL_FULL) to execute the command's proc function (e.g., setCommand, getCommand).

int processCommand(client *c) {
    if (!strcasecmp(c->argv[0]->ptr,"quit")) {
        addReply(c,shared.ok);
        c->flags |= CLIENT_CLOSE_AFTER_REPLY;
        return C_ERR;
    }
    c->cmd = lookupCommand(c->argv[0]->ptr);
    if (!c->cmd) {
        addReplyErrorFormat(c,"unknown command '%s'",(char*)c->argv[0]->ptr);
        return C_OK;
    }
    if ((c->cmd->arity > 0 && c->cmd->arity != c->argc) ||
        (c->argc < -c->cmd->arity)) {
        addReplyErrorFormat(c,"wrong number of arguments for '%s' command",
                            c->cmd->name);
        return C_OK;
    }
    /* authentication, cluster, maxmemory, slave-write, PUB/SUB, loading,
     * Lua, and transaction checks omitted for brevity */
    if (c->flags & CLIENT_MULTI &&
        c->cmd->proc != execCommand /* ... other MULTI-exempt commands ... */)
    {
        queueMultiCommand(c);
        addReply(c,shared.queued);
    } else {
        call(c, CMD_CALL_FULL);
    }
    return C_OK;
}

The call function records slow-log entries, updates command statistics, and propagates write commands to the AOF and replicas when the command has dirtied the dataset.

void call(client *c, int flags) {
    long long dirty, start, duration;
    /* MONITOR feed omitted */
    dirty = server.dirty;
    start = ustime();
    c->cmd->proc(c);                 /* the actual command logic */
    duration = ustime() - start;
    dirty = server.dirty - dirty;
    if (flags & CMD_CALL_SLOWLOG)
        slowlogPushEntryIfNeeded(c,c->argv,c->argc,duration);
    if (flags & CMD_CALL_STATS) {
        c->lastcmd->microseconds += duration;
        c->lastcmd->calls++;
    }
    if (flags & CMD_CALL_PROPAGATE && dirty)
        propagate(c->cmd, c->db->id, c->argv, c->argc,
                  PROPAGATE_AOF|PROPAGATE_REPL);
}

Stage 3 – Data return: The command's result is placed into the client's output buffer via addReply. addReply first calls prepareClientToWrite to decide whether the client should be added to the pending-write list (clients_pending_write). If the reply fits into the fixed buffer (c->buf) it is copied there; otherwise it is appended to the reply list (c->reply).

void addReply(client *c, robj *obj) {
    if (prepareClientToWrite(c)!=C_OK) return;
    if (sdsEncodedObject(obj)) {
        if (_addReplyToBuffer(c,obj->ptr,sdslen(obj->ptr))!=C_OK)
            _addReplyObjectToList(c,obj);
    } else if (obj->encoding == OBJ_ENCODING_INT) {
        char buf[32]; size_t len = ll2string(buf,sizeof(buf),(long)obj->ptr);
        if (_addReplyToBuffer(c,buf,len)!=C_OK) _addReplyObjectToList(c,obj);
    } else {
        serverPanic("Wrong obj->encoding in addReply()");
    }
}

During each loop iteration, aeMain invokes the beforeSleep hook, which calls handleClientsWithPendingWrites. That function iterates over clients_pending_write, attempts to write the buffered data to the socket with writeToClient, and, if the write would block, registers a writable file event (sendReplyToClient) so the kernel notifies Redis when the socket becomes writable again.

int handleClientsWithPendingWrites(void) {
    listIter li; listNode *ln;
    listRewind(server.clients_pending_write,&li);
    while((ln = listNext(&li))) {
        client *c = listNodeValue(ln);
        c->flags &= ~CLIENT_PENDING_WRITE;
        listDelNode(server.clients_pending_write,ln);
        if (writeToClient(c->fd,c,0) == C_ERR) continue;
        if (clientHasPendingReplies(c)) {
            int ae_flags = AE_WRITABLE;
            if (server.aof_state == AOF_ON && server.aof_fsync == AOF_FSYNC_ALWAYS)
                ae_flags |= AE_BARRIER;
            aeCreateFileEvent(server.el, c->fd, ae_flags, sendReplyToClient, c);
        }
    }
    return 1;
}

Reactor implementation in Redis

Redis ships its own lightweight event library ( ae) that supports two kinds of events:

File events – network I/O (readable / writable). The library abstracts the underlying OS multiplexer ( epoll on Linux, kqueue on macOS, etc.) via aeApiPoll.

Time events – periodic tasks such as serverCron (expiration, statistics, background saving). Time events are stored in a linked list and processed after I/O.

The main loop is:

void aeMain(aeEventLoop *eventLoop) {
    eventLoop->stop = 0;
    while (!eventLoop->stop) {
        if (eventLoop->beforesleep) eventLoop->beforesleep(eventLoop);
        aeProcessEvents(eventLoop, AE_ALL_EVENTS|AE_CALL_AFTER_SLEEP);
    }
}
aeProcessEvents works as follows:

Find the nearest time event to compute the poll timeout.

Block in aeApiPoll (e.g., epoll_wait) for I/O readiness.

For each ready file descriptor, invoke the registered read/write callbacks.

After I/O, iterate the time‑event list and execute any events whose scheduled time has arrived.

int aeProcessEvents(aeEventLoop *eventLoop, int flags) {
    /* Simplified: the real code converts the timeout into a struct timeval
     * before handing it to aeApiPoll. */
    aeTimeEvent *nearest = aeSearchNearestTimer(eventLoop);
    long long ms = (nearest) ? computeTimeout(nearest) : -1;
    int numevents = aeApiPoll(eventLoop, ms);
    for (int j = 0; j < numevents; j++) {
        aeFileEvent *fe = &eventLoop->events[eventLoop->fired[j].fd];
        int mask = eventLoop->fired[j].mask;
        if (fe->mask & mask & AE_READABLE) fe->rfileProc(eventLoop,fe->fd,fe->clientData,mask);
        if (fe->mask & mask & AE_WRITABLE) fe->wfileProc(eventLoop,fe->fd,fe->clientData,mask);
    }
    processTimeEvents(eventLoop);
    return numevents;
}

Key data structures

client: holds fd, querybuf (input buffer), buf (fixed output buffer), the reply list (for large replies), flags, and per-client statistics.

redisCommand: maps a command name to its implementation function (proc), its arity, and its flag bits. Example entries:

struct redisCommand redisCommandTable[] = {
    {"get", getCommand, 2, "rF", 0, NULL, 1,1,1,0,0},
    {"set", setCommand, -3, "wm", 0, NULL, 1,1,1,0,0},
    // ... other commands
};
aeEventLoop: maintains the registered file-event array, the fired-event array filled by aeApiPoll, and the linked list of time events. (The pending-write list itself lives on the server struct, not the event loop.)

Comparison with Netty’s multithreaded reactor

Both Redis and Netty use the reactor pattern, but their designs differ:

Thread model: Redis runs a single thread that handles accept, read, write, and command execution, eliminating lock contention. Netty uses a boss thread for accept and a pool of worker threads for I/O and pipeline processing, allowing scaling across many CPU cores.

Handler chain: Redis dispatches directly to C function pointers (e.g., readQueryFromClient). Netty builds a ChannelPipeline of ChannelHandler objects, providing flexible composition.

Scalability: Redis is optimized for extremely fast in-memory operations on a single core; Netty can handle long-running tasks and high concurrency by offloading work to executor groups.

Summary of the execution flow

Server start → initServer creates aeEventLoop, registers the listening socket with acceptTcpHandler, and schedules periodic serverCron time events.

Client connects → acceptTcpHandler creates a client and registers readQueryFromClient for readable events.

Client sends data → readQueryFromClient reads into querybuf and calls processInputBuffer.

Parsing → processInputBuffer builds argv/argc and invokes processCommand.

Command execution → processCommand performs validation, looks up the command in redisCommandTable, and calls call which finally runs the command’s proc (e.g., setCommand, getCommand).

Reply generation → command implementation uses addReply to place the result into the client’s output buffer.

Write back → When the event loop reaches the beforeSleep hook, handleClientsWithPendingWrites attempts to flush the buffer; if the socket is not ready, a writable event is registered and the kernel notifies Redis later.

This tightly coupled, event-driven pipeline enables Redis to achieve sub-millisecond latency while keeping the implementation simple and lock-free.

Tags: Redis, I/O Multiplexing, Reactor Pattern, Event Loop, Database Internals, Command Execution, aeEventLoop, Netty Comparison
Written by Tech Freedom Circle

Crazy Maker Circle (Tech Freedom Architecture Circle): a community of tech enthusiasts, experts, and high-performance fans. Many top-level masters, architects, and hobbyists here have achieved tech freedom; another wave of go-getters is hustling hard toward it.