Choosing the Right Redis Architecture: From Single Node to Cluster
This article reviews the main Redis deployment options—single node, master‑slave with Sentinel, sharding via consistent hashing, and Redis Cluster—explaining the advantages, high‑availability mechanisms, and scalability limits of each, and recommending the scenarios where each architecture fits best.
Redis is a popular in‑memory data store prized for sub‑millisecond response times, rich data structures, persistence mechanisms, and flexible deployment topologies.
1. Single‑node deployment
The simplest setup runs Redis on a single server, offering low cost and easy maintenance. However, it lacks high availability; if the node crashes, the entire system may become unavailable, and data loss risk remains despite persistence options.
2. Master‑slave mode with Sentinel
In this mode, a master node handles writes while one or more replicas serve reads. Replicas continuously synchronize data from the master, providing read‑write separation. High availability comes from Sentinel: Sentinel processes monitor the master with periodic heartbeats, and when a quorum of them agrees the master is unresponsive, they promote a replica to master, keeping the service available.
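The quorum logic can be sketched in plain Python. This is a toy model of the voting step, not the real Sentinel implementation; the `SentinelQuorum` class and its method names are illustrative:

```python
class SentinelQuorum:
    """Toy model of Sentinel's 'objectively down' decision."""

    def __init__(self, sentinel_count: int, quorum: int):
        self.sentinel_count = sentinel_count
        self.quorum = quorum
        self.down_votes = set()  # sentinels that saw the master time out

    def report_down(self, sentinel_id: str) -> None:
        """A sentinel whose heartbeat to the master timed out votes 'down'."""
        self.down_votes.add(sentinel_id)

    def master_objectively_down(self) -> bool:
        # Failover may start only once the number of agreeing sentinels
        # reaches the configured quorum.
        return len(self.down_votes) >= self.quorum


q = SentinelQuorum(sentinel_count=3, quorum=2)
q.report_down("sentinel-1")
print(q.master_objectively_down())  # False: only one vote so far
q.report_down("sentinel-2")
print(q.master_objectively_down())  # True: quorum reached, failover can begin
```

A single vote is treated as "subjectively down" (one sentinel's opinion); only a quorum-wide agreement triggers the actual promotion, which prevents a single network blip from causing an unnecessary failover.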
3. Sharding with consistent hashing
When the data volume exceeds a single machine’s memory, sharding distributes data across multiple Redis nodes. Each node is mapped onto a hash ring (typically by hashing its IP or name); keys are hashed onto the same ring and stored on the first node encountered moving clockwise. Adding a new node only requires moving a subset of keys, minimizing rebalancing overhead.
When a new server (e.g., redis_04) is added, it lands at its own hash position on the ring; only the keys falling between its predecessor node and the new node are reassigned to it and migrated, which can temporarily impact performance.
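A minimal consistent-hash ring can be sketched in Python. This is illustrative only; production implementations typically place many virtual nodes per server on the ring to smooth out the key distribution:

```python
import bisect
import hashlib


class HashRing:
    """Toy consistent-hash ring: nodes and keys share one hash space,
    and a key is served by the first node clockwise from its hash."""

    def __init__(self, nodes=()):
        self._ring = []  # sorted list of (hash, node) pairs
        for node in nodes:
            self.add_node(node)

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def add_node(self, node: str) -> None:
        bisect.insort(self._ring, (self._hash(node), node))

    def get_node(self, key: str) -> str:
        idx = bisect.bisect(self._ring, (self._hash(key), ""))
        if idx == len(self._ring):  # wrap around past the last node
            idx = 0
        return self._ring[idx][1]


ring = HashRing(["redis_01", "redis_02", "redis_03"])
keys = [f"user:{i}" for i in range(100)]
before = {k: ring.get_node(k) for k in keys}

ring.add_node("redis_04")
after = {k: ring.get_node(k) for k in keys}

# Only the keys that fall in redis_04's new arc change owner;
# everything else stays where it was.
moved = [k for k in keys if before[k] != after[k]]
print(f"{len(moved)} of {len(keys)} keys moved, all to redis_04")
```

Contrast this with naive `hash(key) % node_count` sharding, where changing the node count remaps almost every key.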
4. Redis Cluster
Redis Cluster provides both high availability and horizontal scaling by partitioning data across multiple nodes using 16384 hash slots. A key’s slot is computed as CRC16(key) mod 16384, and each master node owns a subset of the slots. Clients may contact any node; if the key’s slot belongs to a different node, the contacted node returns a MOVED redirect pointing the client to the correct owner.
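The slot computation is easy to reproduce: Python’s `binascii.crc_hqx` implements the same CRC16 variant (XMODEM: polynomial 0x1021, initial value 0) that Redis Cluster uses. A sketch, including the hash-tag rule that lets related keys share a slot:

```python
import binascii


def key_slot(key: str) -> int:
    """Return the Redis Cluster hash slot (0..16383) for a key."""
    # Hash-tag rule: if the key contains a non-empty {...} section,
    # only the part between the first "{" and the next "}" is hashed,
    # so related keys can be forced into the same slot.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    # CRC16/XMODEM modulo the fixed slot count of 16384.
    return binascii.crc_hqx(key.encode(), 0) % 16384


print(key_slot("user:1"))  # some slot in 0..16383
print(key_slot("{user:1}.cart") == key_slot("{user:1}.orders"))  # True
```

Because `{user:1}.cart` and `{user:1}.orders` share the tag `user:1`, they land in the same slot, which is what makes multi-key operations on them possible in a cluster.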
Nodes exchange health information through a gossip protocol. If a majority of master nodes agrees that a node has failed, it is marked offline, and a replica of a failed master takes over its slots, preserving cluster stability. Adding a node involves assigning it hash slots and migrating the keys in those slots.
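Slot ownership is ultimately just bookkeeping over the 16384 slots. A toy sketch of what a balanced contiguous assignment looks like before and after a node joins (illustrative only; this is not how redis-cli computes its reshard plan, and the node names are made up):

```python
SLOT_COUNT = 16384  # fixed total number of hash slots in Redis Cluster


def assign_slots(nodes):
    """Split the 16384 slots into contiguous, roughly equal ranges."""
    per_node = SLOT_COUNT // len(nodes)
    ranges, start = {}, 0
    for i, node in enumerate(nodes):
        # The last node absorbs the remainder so every slot has an owner.
        end = SLOT_COUNT - 1 if i == len(nodes) - 1 else start + per_node - 1
        ranges[node] = (start, end)
        start = end + 1
    return ranges


print(assign_slots(["node_a", "node_b", "node_c"]))
# When a fourth node joins, each range shrinks; the keys in the slots
# that changed owner are what actually gets migrated over the network.
print(assign_slots(["node_a", "node_b", "node_c", "node_d"]))
```

The key point the sketch makes concrete: rebalancing moves slots, not individual keys one by one, and every slot always has exactly one owner.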
Recommendation Summary
For personal projects or testing, use single‑node deployment for simplicity and low cost.
For small‑to‑medium projects with modest cache size, adopt master‑slave with Sentinel to gain read scalability and automatic failover.
For large‑scale applications requiring massive cache capacity and high concurrency, choose Redis Cluster to benefit from sharding, fault tolerance, and online scaling.
Lobster Programming
Sharing technical insights and discussion—using technology to make life better.
