
Mastering Codis: Seamless Redis Scaling and High‑Availability Strategies

This comprehensive guide details how Codis extends Redis with a proxy‑based architecture to achieve transparent horizontal scaling, smooth data migration, high availability, and fault tolerance, along with operational best practices, common Redis pitfalls, and performance tuning.

Abstract: As the king of NoSQL KV databases, Redis is favored for its high performance, low latency, and rich data structures, but its horizontal scalability is limited. Codis, an open‑source Redis distributed solution, addresses this by enabling seamless horizontal expansion without impacting the business. This article explains how Codis achieves business‑transparent migration, high migration performance, exception handling, and high availability, and covers common Redis pitfalls. Although Codis is nearing end‑of‑life internally, it remains widely used externally, and its principles are still of interest.

Background

With the rise of live‑streaming, many products rely on ranking lists stored in Redis. In 2017 the system was rebuilt using Redis, and after evaluating various open‑source solutions, Codis was selected. The team has operated 15 Codis clusters (≈2 TB total, >100 billion daily accesses) supporting interactive video, activity systems, and thousands of leaderboards.

Redis Overview

2.1 Redis Introduction

Redis (Remote Dictionary Server) is an open‑source (BSD‑licensed), in‑memory data structure store with optional persistence, used as a database, cache, and message broker. It supports data structures such as strings, hashes, lists, sets, and sorted sets with range queries.

2.2 Redis Features

Single‑threaded asynchronous architecture with multiplexed I/O.

KV model with rich value types (string, hash, list, set, sorted set).

High performance and low latency (on the order of 10⁵ GET/SET operations per second), with persistence via RDB/AOF.

Versatile use cases: caching, message queue, TTL expiration.

Transactional support with atomic execution.

2.3 Redis Use Cases

Redis use case diagram

2.4 Relationship Between Codis and Redis

Codis adds a routing layer on top of multiple Redis instances, each handling a shard of data.

2.5 Learning Resources

For deeper Redis knowledge, refer to:

Redis Development and Operations (Fu Lei)

Redis Design and Implementation (Huang Jianhong)

Redis Distributed Solutions: Internal vs External Comparison

Based on our requirements (low cost, smooth scaling, non‑intrusive to business), Codis stands out as the most suitable open‑source option.

Data safety is ensured by 48‑hour rolling backups on the host plus an additional backup system, and monitoring is handled via internal tools and alerting.

Codis Architecture Design

4.1 Overall Architecture

Codis architecture diagram

Codis follows a two‑layer design: proxy + storage. Compared with a proxy‑less design, it simplifies scaling because only the proxy layer needs to be expanded as client connections grow.

Proxy registration via ZK

The proxy is the core component handling routing, shard migration, and request processing.

4.2 Codis Proxy Architecture and Implementation

4.2.1 Routing Mapping Details

Routing mapping diagram

Key mapping uses CRC32 % 1024 to obtain a slot, which is then mapped to a virtual group (one master + multiple slaves). This indirection makes slot reassignment transparent to the application.
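The mapping can be sketched in a few lines of Python; the four‑group routing table below is a made‑up illustration (real Codis loads its slot‑to‑group assignments from ZooKeeper and also honors hash tags):

```python
import zlib

NUM_SLOTS = 1024  # Codis uses a fixed 1024-slot space

def key_to_slot(key: bytes) -> int:
    """Map a key to its slot via CRC32 % 1024, as described above."""
    return zlib.crc32(key) % NUM_SLOTS

# Hypothetical routing table: slot -> group id, here 4 groups round-robin.
slot_to_group = {s: s % 4 for s in range(NUM_SLOTS)}

def route(key: bytes) -> int:
    """Return the group (master + replicas) that owns this key's slot."""
    return slot_to_group[key_to_slot(key)]
```

Because the application only ever sees keys, reassigning a slot to a different group changes `slot_to_group` without touching client code.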

Key terms:

slot : logical shard index.

group : virtual node composed of a master and its replicas.

4.2.2 Proxy Request Handling Details

Proxy request flow

The proxy, written in Go, creates a session per client connection with separate reader and writer goroutines. The reader parses requests, splits multi‑key commands, routes them via the router, and forwards responses to the writer, which sends them back to the client.
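The multi‑key split step can be sketched as follows (in Python rather than the proxy's Go; `slot` and the reply plumbing are simplified stand‑ins for the real router and backend connections):

```python
import zlib
from collections import defaultdict

NUM_SLOTS = 1024

def slot(key: str) -> int:
    return zlib.crc32(key.encode()) % NUM_SLOTS

def split_mget(keys):
    """Group an MGET's keys by slot so each backend group receives one
    sub-request, remembering original positions for reassembly."""
    by_slot = defaultdict(list)
    for pos, k in enumerate(keys):
        by_slot[slot(k)].append((pos, k))
    return by_slot

def merge_replies(keys, replies_by_slot):
    """Reassemble per-slot replies into the client's original key order.
    replies_by_slot: slot -> list of values, aligned with split order."""
    out = [None] * len(keys)
    for s, pairs in split_mget(keys).items():
        for (pos, _), val in zip(pairs, replies_by_slot[s]):
            out[pos] = val
    return out
```

The writer goroutine's job corresponds to `merge_replies`: responses arrive per backend, but the client must see values in the order it asked for them.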

Data Reliability, High Availability, Disaster Recovery, and Split‑Brain Handling

5.1 Data Reliability

Reliability relies on Redis persistence (RDB + AOF) for local durability, master‑slave replication for hot standby, and periodic cold backups (48‑hour rolling snapshots plus an external backup system).
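The RDB‑plus‑AOF combination maps to a redis.conf fragment like the following (the thresholds shown are Redis's classic defaults, for illustration only):

```conf
# RDB: snapshot when at least N changes occur within M seconds
save 900 1
save 300 10
save 60 10000

# AOF: log every write, fsync once per second
appendonly yes
appendfsync everysec
```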

5.2 High Availability & Disaster Recovery

Codis combines a proxy cluster (HA via ZK or L5) with a Redis cluster (HA via Sentinel). The proxy listens to Sentinel’s +switch‑master events to update its routing information and persist the new master to storage.
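Sentinel publishes +switch‑master events with the documented payload format `<master-name> <old-ip> <old-port> <new-ip> <new-port>`. A minimal sketch of the proxy‑side handler (the routing table and persistence step are simplified stand‑ins):

```python
def parse_switch_master(payload: str):
    """Parse Sentinel's +switch-master payload:
    '<master-name> <old-ip> <old-port> <new-ip> <new-port>'."""
    name, old_ip, old_port, new_ip, new_port = payload.split()
    return name, (old_ip, int(old_port)), (new_ip, int(new_port))

def on_switch_master(routing: dict, payload: str) -> None:
    """Update the proxy's group -> master table; a real proxy would also
    persist the new master to its config storage (ZK in the text)."""
    name, _old, new = parse_switch_master(payload)
    routing[name] = new
```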

5.2.3 Split‑Brain Mitigation

To avoid split‑brain, a quorum of at least four out of five Sentinel instances must agree before a master is considered down. Additionally, an agent on each Redis node checks ZK connectivity; if lost, the node is forced into read‑only mode to preserve consistency.
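The two safeguards above reduce to a quorum vote plus agent‑side fencing; a minimal sketch:

```python
def master_is_down(votes, quorum: int = 4) -> bool:
    """Objective-down decision: at least `quorum` Sentinels (4 of 5 in
    the text) must report the master as subjectively down."""
    return sum(bool(v) for v in votes) >= quorum

def node_mode(zk_connected: bool) -> str:
    """Agent-side fencing from the text: a Redis node that loses its
    ZooKeeper connection is forced read-only to avoid split-brain writes."""
    return "read-write" if zk_connected else "read-only"
```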

Codis Horizontal Scaling and Migration Exception Handling

6.1 Expansion and Migration Details

Codis scaling diagram

Impact during migration: in the preparation phase, proxy read/write requests are briefly blocked while slot state is synchronized. During migration, reads are allowed, writes to migrating slots are blocked, and completed slots incur two network I/Os.

Migration consists of three parts: preparation (locking slots), execution (batch-wise key transfer using the custom SLOTSMGRT-EXEC-WRAPPER command), and performance guarantees (optimized for large ZSET migrations).
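The three parts can be sketched as a driver loop (Python, with plain dicts standing in for the source and target Redis nodes; locking and routing updates are reduced to a flag):

```python
def migrate_slot(src: dict, dst: dict, batch_size: int = 100) -> int:
    """Sketch of one slot's migration: prepare (mark the slot migrating),
    execute (batch-wise key transfer), finish (hand the slot to the target).
    Returns the number of batches shipped."""
    migrating = True                       # preparation: lock/mark the slot
    keys = list(src)
    batches = 0
    for i in range(0, len(keys), batch_size):
        for k in keys[i:i + batch_size]:   # execution: move one batch
            dst[k] = src.pop(k)
        batches += 1
    migrating = False                      # finish: slot now owned by dst
    return batches
```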

6.2 Migration Exception Handling

Key points:

Large keys are split into batches; on failure, the batch is retried after deleting the target key to ensure idempotence.

During migration, keys receive a temporary 90‑second TTL to avoid stale data if a node crashes.

If the source node crashes, the replica is promoted and migration continues from the replica to avoid data loss.

Disabling AOF/RDB on the source is prohibited because it would lead to data loss on restart.
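The batch‑plus‑retry idempotence rule above can be sketched like this (Python stand‑ins for the large key's fields, the target node, and a flaky network; the 90‑second TTL on the partially written key is noted but not simulated):

```python
def send_batch(dst: dict, batch: dict, fail: bool = False) -> None:
    """Stand-in for shipping one batch over the network."""
    if fail:
        raise ConnectionError("simulated mid-batch failure")
    dst.update(batch)

def migrate_big_key(src_value: dict, dst: dict, key: str,
                    batch_size: int, flaky_first: bool = True) -> int:
    """Transfer one large key in batches. On failure the whole target key
    is deleted and the transfer restarts, so retries are idempotent
    (matching the bullet above). A real system would also put a ~90 s TTL
    on the partially written key. Returns the attempt count."""
    items = list(src_value.items())
    attempt = 0
    while True:
        attempt += 1
        staging = {}
        try:
            for i in range(0, len(items), batch_size):
                # fail once, mid-transfer, on the first attempt
                send_batch(staging, dict(items[i:i + batch_size]),
                           fail=(flaky_first and attempt == 1 and i == batch_size))
            dst[key] = staging          # publish only a complete copy
            return attempt
        except ConnectionError:
            staging.clear()             # delete the partial target key, retry
```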

Operational Handbook and Pitfall Guide

8.1 Master‑Slave Switch

After a master‑slave switch, verify that the failover actually rewrote the instance's configuration (Redis marks rewritten lines with a "Generated by CONFIG REWRITE" comment):

<code>grep "Generated by CONFIG REWRITE" -C 10 {redis_conf_path}/*.conf</code>

8.2 Data Migration

Always back up data and shard information before migration. Use slotsmgrt-async-status to monitor large-key migration progress.

8.3 Exception Handling

After a Redis crash, monitor for error reports while the server reloads keys.

8.4 Client Timeouts

Check network congestion with the NOC assistant.

Inspect the listen queue length (effective depth is min(backlog, somaxconn)); adjust net.core.somaxconn if needed.

Investigate slow queries via slowlog get (default threshold 10 ms).
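Two of the numbers above, made concrete (illustrative helpers, not part of any real tooling):

```python
def effective_accept_queue(listen_backlog: int, somaxconn: int) -> int:
    """The kernel caps a listening socket's accept queue at
    net.core.somaxconn, so the effective depth is min(backlog, somaxconn)."""
    return min(listen_backlog, somaxconn)

# slowlog-log-slower-than is configured in microseconds,
# so the 10 ms threshold mentioned above corresponds to:
SLOWLOG_THRESHOLD_US = 10 * 1000
```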

8.5 Fork Overhead

Fork is used for RDB/AOF rewriting; its duration scales with memory size. Keep instance memory ≤10 GB and limit concurrent forks.

8.6 AOF Persistence Details

The everysec policy balances performance and safety by fsync-ing once per second; note that heavy disk I/O can still block the main thread.

8.7 Accidental FLUSHDB

If appendonly no, increase the RDB trigger thresholds and back up the RDB file immediately. If appendonly yes, enlarge the AOF rewrite thresholds, avoid manual bgrewriteaof, back up the AOF file, and strip the FLUSHDB command before restoration.
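Stripping FLUSHDB before replay can be sketched as below. The AOF stores each command as a RESP array (e.g. `*1\r\n$7\r\nFLUSHDB\r\n`); this illustrative walker drops FLUSHDB/FLUSHALL entries. It assumes a plain, non‑rewritten AOF and is a sketch, not a substitute for careful manual recovery:

```python
def strip_flushdb(aof: bytes) -> bytes:
    """Walk an AOF command by command (RESP arrays of bulk strings)
    and drop FLUSHDB/FLUSHALL so replaying it skips the wipe."""
    out, i = [], 0
    while i < len(aof):
        start = i
        assert aof[i:i + 1] == b"*", "expected RESP array header"
        j = aof.index(b"\r\n", i)
        nargs = int(aof[i + 1:j]); i = j + 2
        args = []
        for _ in range(nargs):
            assert aof[i:i + 1] == b"$", "expected RESP bulk string"
            j = aof.index(b"\r\n", i)
            blen = int(aof[i + 1:j]); i = j + 2
            args.append(aof[i:i + blen]); i += blen + 2
        if args[0].upper() not in (b"FLUSHDB", b"FLUSHALL"):
            out.append(aof[start:i])    # keep everything except the wipes
    return b"".join(out)
```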

8.8 Switching from RDB to AOF in Production

Do not edit the config and restart directly. Instead, back up the RDB file, enable AOF via CONFIG SET appendonly yes, rewrite the config, and run BGSAVE to persist data.

References

Redis Development and Operations (Fu Lei)

Redis Design and Implementation (Huang Jianhong)

Large‑Scale Codis Cluster Governance and Practice

Tags: distributed systems, proxy, high availability, Redis, scaling, Codis
Written by Efficient Ops

This public account is maintained by Xiaotianguo and friends, regularly publishing widely read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.
