
Deploying Redis Master‑Slave Architecture and Sentinel Cluster for High Availability

This guide walks through upgrading a single‑node Redis deployment to a high‑availability setup: building a master‑slave cluster, configuring Sentinel services, testing replication and failover, and enabling clients to detect master changes automatically, all with Docker containers and configuration files.


In the project we upgraded a single‑node MySQL, Redis, Elasticsearch, and microservices deployment to a high‑availability architecture; this article focuses on the Redis portion.

We first prepared three servers, each running a Redis instance (one master, two slaves) and a Sentinel service, with ports 6379 for Redis and 26379 for Sentinel.

We backed up the Redis Docker image, loaded it on the new servers, created configuration files (redis.conf, sentinel.conf) with authentication (requirepass, masterauth) and started the containers using Docker commands.

# Export the image on the source server, then load it on each new server
sudo docker save -o redis.tar redis:0.1
sudo chmod 777 redis.tar
sudo docker load -i redis.tar

# Create the config directory and edit redis.conf
mkdir /home/redis
vim /home/redis/redis.conf
# ... set requirepass and masterauth ...

# Start Redis, mounting the config file and data directory
# (301 is the truncated image ID; note that redis-server must be told
# explicitly to load the mounted config, or it will be ignored)
docker run -p 6379:6379 --restart=always --name redis \
 -v /home/redis/redis.conf:/usr/local/etc/redis/redis.conf \
 -v /home/redisdata:/data/ -d 301 \
 redis-server /usr/local/etc/redis/redis.conf
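For reference, a minimal redis.conf for one of the slave nodes might look like the following sketch (the IP address and password are placeholders, not values from the original setup; the master node would omit the replicaof line):

```conf
# Listen on all interfaces so the other nodes and Sentinel can connect
bind 0.0.0.0
port 6379
# Password clients must supply to this node
requirepass your_password
# Password this node uses when authenticating to the master as a replica
masterauth your_password
# Slaves only: point at the master (Redis 5+; older versions use slaveof)
replicaof 192.168.1.10 6379
```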

After launching the master and slave containers we verified replication with info replication, confirming the master role and the connected slaves.
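The check can be run from redis-cli on each node; the password and the output sketched below are illustrative, and exact fields vary by Redis version:

```shell
redis-cli -a your_password info replication
# On the master, expect roughly:
#   role:master
#   connected_slaves:2
#   slave0:ip=...,port=6379,state=online,...
# On each slave, role:slave together with master_link_status:up
# indicates healthy replication.
```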

Sentinel configuration (sentinel.conf) defines monitoring, ports, timeouts and authentication. We mounted the config and log files into Sentinel containers and started three Sentinel instances.
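As a sketch, the core of such a sentinel.conf might contain the following (the master name, IP, and password are placeholders):

```conf
port 26379
# Monitor the master; the trailing "2" is the quorum: two Sentinels
# must agree the master is down before failover can start
sentinel monitor mymaster 192.168.1.10 6379 2
# Password Sentinel uses when talking to the monitored Redis nodes
sentinel auth-pass mymaster your_password
# Consider the master subjectively down after 30s without a valid reply
sentinel down-after-milliseconds mymaster 30000
# Abort a failover attempt that takes longer than 3 minutes
sentinel failover-timeout mymaster 180000
logfile "/var/log/sentinel-26379.log"
```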

# Prepare the Sentinel config and an empty log file to mount
mkdir /home/redis/sentinel
vim /home/redis/sentinel.conf
vim /home/redis/sentinel/sentinel-26379.log

# Start a Sentinel container, mounting the config and log file
# (9a2f is the truncated image ID)
docker run --name mysentinel1 --restart=always -p 26379:26379 \
 -v /home/redis/sentinel.conf:/usr/local/etc/redis/sentinel.conf \
 -v /home/redis/sentinel/sentinel-26379.log:/var/log/sentinel-26379.log \
 -d 9a2f redis-sentinel /usr/local/etc/redis/sentinel.conf

When the master node was stopped, Sentinel performed a failover: it detected the subjective down (sdown) and objective down (odown) states, elected a leader Sentinel, promoted a former slave to master, and reconfigured the remaining nodes to replicate from it.
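After a failover, the current master can be queried from any Sentinel; the master name mymaster below is an assumption matching the sketched configuration, not necessarily the article's value:

```shell
redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
# Returns the IP and port of the current master,
# which changes once failover completes.
```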

Client applications using Spring Redis with Sentinel configuration automatically switched to the new master without restart, as demonstrated by the JedisPool logs and successful read/write operations after failover.
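A Spring Boot client configured against Sentinel rather than a fixed master address typically looks something like this (hosts, master name, and password are placeholders; these keys are for Spring Boot 2.x, while Spring Boot 3 moved them under spring.data.redis):

```yaml
spring:
  redis:
    password: your_password
    sentinel:
      master: mymaster
      nodes:
        - 192.168.1.10:26379
        - 192.168.1.11:26379
        - 192.168.1.12:26379
```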

The article also covers common issues, such as “READONLY You can't write against a read‑only replica” and missing passwords; the fixes come down to setting requirepass and masterauth consistently on every node and directing writes to the current master.
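The READONLY error means a write was sent to a replica. Two illustrative checks (paths and password are placeholders):

```shell
# 1. Confirm which node is currently master before writing
redis-cli -a your_password info replication | grep role

# 2. Ensure every node sets both requirepass and masterauth, so any node
#    can authenticate to any other after a failover swaps roles
grep -E 'requirepass|masterauth' /home/redis/redis.conf
```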

Overall, the guide provides a practical, Docker‑based method to deploy a real multi‑server Redis master‑slave and Sentinel cluster, verify replication, and achieve automatic client failover.

Tags: Docker, High Availability, Redis, Master‑Slave, Sentinel, Failover
Written by

Wukong Talks Architecture

Explaining distributed systems and architecture through stories. Author of the "JVM Performance Tuning in Practice" column, open-source author of "Spring Cloud in Practice PassJava", and independently developed a PMP practice quiz mini-program.
