
Understanding Redis Cluster Architecture: High Availability, Data Partitioning, and Proxy Strategies

This article explains the fundamental concepts of Redis cluster architecture, covering high‑availability with Sentinel, data partitioning methods, proxy‑based sharding techniques, the mechanics of Redis Cluster without a central node, and practical considerations for multi‑key operations in a distributed environment.

Aikesheng Open Source Community

1. Basic Concepts of Cluster Architecture

When only a single Redis instance is used (single-instance architecture), several practical problems must be considered: total service outage when the node fails, limited memory capacity, and performance bottlenecks at one process.

The three primary goals of a cluster architecture are high availability, alleviating resource‑limit bottlenecks, and improving network throughput.

1.1 High Availability – Sentinel

Redis Sentinel is a distributed system that can run multiple Sentinel processes. These processes use gossip protocols to receive information about master failures and an agreement protocol to decide whether to perform automatic failover and which replica should become the new master.
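As a sketch of how the quorum-based agreement is configured, the fragment below shows the core directives of a `sentinel.conf` monitoring one master (host, port, and timeout values here are hypothetical examples, not recommendations):

```conf
# Monitor a master named "mymaster"; at least 2 Sentinels must agree
# that it is down before a failover is triggered (the quorum).
sentinel monitor mymaster 127.0.0.1 6379 2
# Consider the master subjectively down after 5s without a valid reply.
sentinel down-after-milliseconds mymaster 5000
# Abort and retry a failover that takes longer than 60s.
sentinel failover-timeout mymaster 60000
# Reconfigure replicas one at a time after a failover.
sentinel parallel-syncs mymaster 1
```

Running three or more Sentinel processes with such a file gives the gossip and voting behavior described above.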

1.2 Alleviating Resource Limits – Data Partitioning (Sharding)

Data is automatically split across different nodes according to an algorithm, with each node acting as a master for its portion of the data.

Even if some nodes fail or become unreachable, the cluster can continue processing commands.

The data split can follow the AKF principle, allowing flexible partitioning along different dimensions.

1.3 Improving Network Throughput

Redis uses the epoll I/O model, which provides excellent single‑node throughput, but when a single entry point cannot handle traffic, load‑balancing strategies are needed.

Typical approaches include adding slave nodes, using a proxy as the traffic entry point, deploying Redis Cluster, or employing LVS.

A flexible architecture lets the business side ignore which specific node handles a request; a unified traffic entry point abstracts away node‑level resource constraints.

2. Client‑Side Partitioning

In this context, the client refers to the business side, which maintains a mapping between keys and Redis nodes or uses a service‑discovery mechanism.

While simple scenarios work fine, drawbacks include the need for unified access rules, difficulty understanding node bottlenecks, and the requirement for each client to connect to all Redis nodes.

3. Proxy‑Based Partitioning

Several Redis proxies provide a unified traffic entry point and useful features such as data sharding.

Below are the sharding algorithms commonly used by Redis proxies:

3.1 Modula (Algorithm + Modulo Access)

Keys are hashed and the modulo result determines the target node.

Drawback: data distribution may be uneven, and scaling requires adjusting the modulo strategy.
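A toy sketch of this routing, in pure Python: `crc32` stands in for whatever hash function the proxy uses, and the node names are hypothetical. The `moved` count illustrates the scaling drawback, since changing the node count remaps most keys.

```python
import zlib

# Hash + modulo routing: the proxy hashes the key and picks a node.
NODES = ["redis-0", "redis-1", "redis-2"]

def route(key: str) -> str:
    return NODES[zlib.crc32(key.encode()) % len(NODES)]

# Scaling drawback: going from 3 to 4 nodes changes the modulo result
# for most hashes, so most keys would have to migrate.
moved = sum(
    1 for i in range(1000)
    if zlib.crc32(f"k{i}".encode()) % 3 != zlib.crc32(f"k{i}".encode()) % 4
)
```

With 1000 sample keys, roughly three quarters land on a different node after the resize, which is why pure modulo sharding scales poorly.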

3.2 Random Access

When used as a message queue, multiple Redis instances can form a topic; producers push data (LPUSH) and consumers pop data (RPOP).

Drawback: data distribution may be uneven.
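The queue pattern can be simulated in a few lines; here three in-memory lists stand in for three Redis instances holding the same topic (a toy model, not the redis client API):

```python
import random

# Three stand-in Redis instances, each holding one list for the topic.
instances = [[] for _ in range(3)]

def lpush(msg):
    # Producer: LPUSH to a randomly chosen instance.
    random.choice(instances).insert(0, msg)

def rpop():
    # Consumer: RPOP, polling every instance for pending messages.
    for queue in instances:
        if queue:
            return queue.pop()
    return None

for i in range(5):
    lpush(f"job-{i}")
```

Every message is eventually consumed, but ordering is only preserved per instance, not globally, which is acceptable for many queue workloads.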

3.3 Ketama (Consistent Hashing)

Consistent hashing maps both servers and keys onto a fixed hash ring; adding or removing a server changes the mapping for only a small fraction of keys.

It solves dynamic scaling issues of simple hash algorithms in distributed hash tables.

Advantage: Adding a node takes over only part of a neighboring node's key range, distributing storage pressure without a global rehash, because there is no modulo step.

Disadvantage: New nodes may cause a small portion of keys to miss (requiring a lookup of nearby nodes).

Operational Steps

Hash each physical node's identifier to one or more positions on the ring; the extra positions per node act as virtual nodes, which smooth out the key distribution.

Mark all node positions, physical and virtual, on the ring.

When a key is added, hash it to find its position, then walk clockwise to the nearest node position; if that position is a virtual node, store the key on the physical node it belongs to.
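The steps above can be sketched as a minimal ring with virtual nodes (node names and the choice of MD5 as ring hash are illustrative assumptions):

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring with virtual nodes."""

    def __init__(self, nodes, vnodes=100):
        # Sorted (ring position, physical node) pairs; each physical
        # node contributes `vnodes` virtual positions.
        self._ring = []
        for node in nodes:
            for i in range(vnodes):
                self._ring.append((self._hash(f"{node}#{i}"), node))
        self._ring.sort()

    @staticmethod
    def _hash(s: str) -> int:
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        pos = self._hash(key)
        # Walk clockwise: first virtual node at or after the key's position.
        i = bisect.bisect(self._ring, (pos,))
        if i == len(self._ring):
            i = 0  # wrap around the ring
        return self._ring[i][1]
```

Adding a fourth node to a three-node ring reclaims only the keys that fall into the new node's arcs, roughly a quarter of them, instead of remapping almost everything as modulo sharding does.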

4. Redis Cluster (Leader‑less Architecture)

Redis Cluster does not use consistent hashing; instead it introduces the concept of hash slots (16384 in total). Each master is assigned a subset of the slots and carries both data and routing logic, so there is no central coordinating node.

Clients can send a request to any instance; if that instance does not own the key's slot, it replies with a MOVED redirection pointing at the correct node, and the client retries there.
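The slot mapping itself is `CRC16(key) mod 16384`, where Redis uses the CRC-16/XMODEM variant (its well-known check value for `b"123456789"` is `0x31C3`). A pure-Python sketch:

```python
def crc16(data: bytes) -> int:
    # CRC-16/XMODEM: polynomial 0x1021, initial value 0.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # Redis Cluster assigns each key to one of 16384 hash slots.
    return crc16(key.encode()) % 16384
```

Cluster-aware clients cache the slot-to-node map, compute the slot locally like this, and contact the owning node directly, so MOVED redirections only occur while the map is stale.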

Advantage: Easy scaling; Redis provides built‑in tools and scripts for cluster management.

Disadvantage: Clients connect directly to the instances, and redirections add extra round trips; placing a proxy layer in front can mitigate this.


5. Additional Considerations

When using Redis Cluster:

Operations involving multiple keys are generally unsupported (e.g., intersecting two sets) because the keys may reside on different nodes.

Multi‑key transactions and Lua scripts only work when all of the keys involved map to the same hash slot.
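Redis offers hash tags as a way around this limitation: when a key contains a non-empty `{...}` section, only the substring inside the braces is hashed, so keys sharing a tag are guaranteed to land in the same slot and can be used together in transactions or set operations. A self-contained sketch of the rule (the CRC is the CRC-16/XMODEM variant Redis uses):

```python
def crc16(data: bytes) -> int:
    # CRC-16/XMODEM: polynomial 0x1021, initial value 0.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # Hash-tag rule: if the key contains a non-empty {...} section,
    # only the substring between the first "{" and the next "}" is hashed.
    s = key.find("{")
    if s != -1:
        e = key.find("}", s + 1)
        if e > s + 1:
            key = key[s + 1:e]
    return crc16(key.encode()) % 16384
```

With this rule, `{user:1000}.following` and `{user:1000}.followers` share a slot, so an intersection of the two sets becomes possible on that one node.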

Keywords: #RedisCluster #RedisSharding #Proxy #HighAvailability #ConsistentHashing #DataPartitioning
Written by

Aikesheng Open Source Community

The Aikesheng Open Source Community provides stable, enterprise‑grade MySQL open‑source tools and services, releases a premium open‑source component each year (1024), and continuously operates and maintains them.
