
Understanding Redis High Availability: Master‑Slave, Sentinel, and Cluster Explained

This article explains why single‑node Redis is a single point of failure, describes the master‑slave replication model, details Sentinel's automatic failover mechanism, compares sharding solutions such as client‑side sharding, Twemproxy, and Codis, and outlines the features and deployment considerations of Redis Cluster.


Background

In a single‑server deployment a crash makes the service unavailable. High availability therefore requires deploying the service on multiple machines, forming a distributed system.

Master‑Slave Replication

Redis can persist data with RDB or AOF, but on a single node all data still lives on one machine, creating an I/O bottleneck and offering no read/write separation. Replication copies updates from a master to one or more slaves, enabling read scaling and data backup.

Master‑Slave architecture diagram

A master may have many slaves; slaves can be cascaded.
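
As a minimal sketch of this wiring, the following uses the Python redis-py client; the two local instances on ports 6379 and 6380 are illustrative assumptions, not part of the original article.

import redis

master = redis.Redis(host="127.0.0.1", port=6379)   # assumed master instance
replica = redis.Redis(host="127.0.0.1", port=6380)  # assumed slave instance

# Point the second instance at the first (sends SLAVEOF, aliased REPLICAOF in Redis 5+).
replica.slaveof("127.0.0.1", 6379)

master.set("greeting", "hello")
# Replication is asynchronous, so a just-written key may briefly lag on the slave.
print(replica.get("greeting"))
print(replica.info("replication")["role"])  # -> 'slave'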

Pros and Cons of Master‑Slave

Advantages: read/write separation, higher read throughput, data backup, multiple replicas.

Disadvantages: no automatic failover; a slave must be promoted manually when the master fails, reducing availability.

Manual failover steps

On the chosen slave execute SLAVEOF NO ONE to promote it to master.

Restart the original master and run SLAVEOF <new‑master‑ip> <port> so it becomes a slave of the new master.
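
The same two steps, sketched with redis-py (the addresses are illustrative):

import redis

new_master = redis.Redis(host="127.0.0.1", port=6380)  # the slave chosen for promotion
old_master = redis.Redis(host="127.0.0.1", port=6379)  # the recovered original master

# Step 1: calling slaveof() with no arguments sends SLAVEOF NO ONE.
new_master.slaveof()

# Step 2: demote the recovered node under the new master.
old_master.slaveof("127.0.0.1", 6380)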

Sentinel Mode

Sentinel, stable since Redis 2.8, adds an automatic election mechanism that promotes a slave to master when the master crashes, eliminating manual intervention.

Sentinel architecture diagram

Sentinel functions

Periodically sends PING commands to all monitored servers to check their health, and INFO commands to discover the replication topology.

When a master is deemed down, the elected leader selects the best‑candidate slave and promotes it to master, then notifies other nodes via Pub/Sub.

Configuration example

sentinel monitor <master-name> <ip> <port> <quorum>
# master-name: logical name of the master
# ip & port: address of the master
# quorum: number of sentinels that must agree the master is unreachable before failover starts
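
From the application side, clients ask Sentinel for the current master instead of hard‑coding its address. A sketch using redis-py's Sentinel support; the sentinel address and the master name "mymaster" are illustrative and must match the configuration above.

from redis.sentinel import Sentinel

sentinel = Sentinel([("127.0.0.1", 26379)], socket_timeout=0.5)

master = sentinel.master_for("mymaster", socket_timeout=0.5)   # connection for writes
replica = sentinel.slave_for("mymaster", socket_timeout=0.5)   # connection for reads

master.set("counter", 1)
print(replica.get("counter"))
# After a failover, master_for resolves to the newly promoted master automatically.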

Failover process

The sentinels first elect a leader sentinel to carry out the failover.

The leader chooses the slave with the highest priority, i.e., the lowest slave-priority value (a value of 0 excludes a slave from promotion entirely).

If priorities tie, the slave with the larger replication offset (more up‑to‑date data) wins.

If still tied, the slave with the smallest run ID is chosen.

The chosen slave receives SLAVEOF NO ONE to become master; all other slaves are reconfigured with SLAVEOF <new‑master‑ip> <port>.
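
The three tie‑breakers collapse into a single sort key. The following is an illustrative model in Python, not Sentinel's actual source code:

# Lowest slave-priority wins, then the largest replication offset,
# then the lexicographically smallest run ID.
def rank(replica: dict) -> tuple:
    return (replica["priority"], -replica["offset"], replica["run_id"])

candidates = [
    {"priority": 100, "offset": 5000, "run_id": "b3f0c2"},
    {"priority": 100, "offset": 7200, "run_id": "a91c4d"},
]
best = min(candidates, key=rank)  # picks the second: same priority, larger offset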

Pros and Cons of Sentinel

Pros: automatic master promotion without manual steps.

Cons: writes still funnel through a single master, every node stores the full dataset (wasting memory), and writes are unavailable while a failover is in progress.

Industry Redis Sharding Solutions

Client‑side sharding

Clients embed sharding logic (e.g., Jedis ShardedJedis) and use consistent hashing to route keys to specific Redis instances.

Client‑side sharding diagram
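
A minimal consistent-hash router, sketched in Python (Jedis's ShardedJedis implements the equivalent logic in Java); the node addresses and virtual-node count are illustrative:

import bisect
import hashlib

class ConsistentHashRouter:
    def __init__(self, nodes, vnodes=160):
        # Place each physical node at many virtual points on the hash ring
        # so keys spread evenly and only ~1/N of keys move when a node changes.
        self.ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )
        self.points = [h for h, _ in self.ring]

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        # First virtual point clockwise from the key's position on the ring.
        idx = bisect.bisect(self.points, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]

router = ConsistentHashRouter(["redis-1:6379", "redis-2:6379", "redis-3:6379"])
print(router.node_for("user:42"))  # every client must route this key identically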

Advantages

Full control over sharding and routing; no external middleware.

Each Redis instance operates independently, enabling linear scaling.

Disadvantages

Static sharding requires code changes when adding or removing nodes.

Higher operational cost due to coordination between developers and ops.

Duplicate routing logic must be maintained across language clients.

Proxy sharding (Twemproxy)

Twemproxy, an open‑source proxy from Twitter, sits between clients and Redis instances, routing requests according to predefined rules.

Twemproxy architecture diagram

Pros

Clients connect to Twemproxy without code changes.

Failed Redis instances are automatically removed.

Reduces the number of client‑to‑Redis connections.

Cons

Additional latency introduced by the proxy layer.

Lacks a built‑in monitoring UI.

Adding or removing nodes requires manual reconfiguration.

Codis

Codis provides a proxy layer with the concept of a Redis Server Group. Each group contains one master and one or more slaves, enabling high availability.

Codis architecture diagram

Codis pre‑creates 1024 virtual slots. A key is mapped to a slot with crc32(key) % 1024. Slots are assigned to Redis Server Groups. Administrators can rebalance slots manually with codis-config or automatically via its rebalance feature.
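
The mapping itself is a one‑liner, reproduced here in Python for illustration:

import zlib

def codis_slot(key: str) -> int:
    # crc32(key) % 1024 assigns every key to one of Codis's 1024 slots.
    return zlib.crc32(key.encode()) % 1024

print(codis_slot("user:42"))  # slot id in [0, 1023], owned by one server group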

When a new server group is added, slots are redistributed either:

Manually, by specifying slot ranges for each group with codis-config.

Automatically, using codis-config rebalance, which moves slots based on each group's memory usage.

Redis Cluster

Redis Cluster (official since Redis 3.0) implements true data sharding across multiple nodes, eliminating the memory waste of Sentinel mode. It uses a decentralized multi‑master, multi‑slave topology with 16384 hash slots.

Redis Cluster diagram

Key characteristics

Fully decentralized; each master owns a subset of hash slots.

Clients can connect to any node; a node that does not own the requested key's slot replies with a MOVED redirect pointing at the owning master.

All nodes maintain the complete slot map, enabling any node to route queries.

Typical deployment: at least three masters (recommended three masters with three slaves) for fault tolerance.
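
From the client side, a cluster-aware client fetches the slot map and follows MOVED redirects automatically, so any reachable node works as an entry point. A sketch using redis-py's RedisCluster client (available in redis-py 4.x); the seed address is illustrative:

from redis.cluster import RedisCluster

# Any single cluster node can bootstrap the client; it discovers the rest.
rc = RedisCluster(host="127.0.0.1", port=7000)

rc.set("user:42", "alice")  # routed to the master owning CRC16("user:42") % 16384
print(rc.get("user:42"))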

Redis Cluster is suited for scenarios with massive data volume, high concurrency, and strict availability requirements. For smaller workloads, Sentinel may be sufficient, but Cluster offers superior performance and scalability.

Tags: database, sharding, high availability, Redis, Sentinel, Cluster
Written by Senior Brother's Insights, a public account focused on workplace topics, career growth, team management, and self-improvement. The author has written books including 'SpringBoot Technology Insider' and 'Drools 8 Rule Engine: Core Technology and Practice'.
