
Understanding Redis Master‑Slave Replication: Principles, Configuration, and Common Issues

This article explains the fundamentals of Redis master‑slave replication, covering CAP theory, the replication workflow, configuration methods, heartbeat mechanisms, and typical problems with practical solutions to ensure high availability and data consistency.


Redis replication is a core mechanism for achieving high availability by copying data from a master node to one or more slave nodes. The article begins with a brief recap of Redis persistence and introduces the CAP theorem, emphasizing the trade‑off between consistency and availability during network partitions.

CAP Theory – In a distributed system, a network partition forces a choice between consistency and availability; Redis favors availability (AP), accepting eventual consistency through asynchronous replication.

Redis Master‑Slave Replication Overview – The master handles writes, slaves handle reads, enabling read‑write separation, load balancing, fault recovery, data redundancy, and forming the basis for sentinel and cluster high‑availability solutions.

Configuration Methods

Client command: slaveof <masterip> <masterport>

Server start‑up parameter: redis-server --slaveof <masterip> <masterport>

Configuration file (redis.conf): slaveof <masterip> <masterport>

Additional commands cover authentication and disconnection: slaveof no one (promote a slave back to a standalone master), requirepass <password> (set on the master), masterauth <password> (set on the slave so it can authenticate), and auth <password> (manual client authentication).
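Putting the three configuration methods together, a slave's redis.conf for replicating from a password‑protected master might look like the following sketch (the IP, port, and password are placeholders; Redis 5+ also accepts the replicaof/replica-* spellings):

```conf
# redis.conf on the slave
slaveof 192.168.1.10 6379    # address and port of the master
masterauth s3cret            # must match the master's requirepass
slave-read-only yes          # reject writes on the slave (the default)
```

The same relationship can be established at runtime with redis-cli slaveof 192.168.1.10 6379 and torn down with slaveof no one.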

Replication Workflow

1. Connection Establishment

After slaveof <ip> <port> is issued on the slave, the slave records the master's address, establishes a socket connection to it, and optionally authenticates using masterauth.

2. Data Synchronization

The slave requests a full data dump with psync ? -1. The master replies +FULLRESYNC <runid> <offset>, creates an RDB snapshot using a copy‑on‑write fork, and streams the file to the slave. After loading the RDB, the slave records the master's runid and offset; on later reconnects it requests incremental updates with psync <runid> <offset>. If the runid does not match, or the requested offset has fallen out of the replication backlog, a full resync is triggered instead.
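The recorded runid and offsets can be observed on a live deployment with redis-cli info replication. The field names below come from the INFO replication section; the values are invented for illustration:

```shell
$ redis-cli info replication
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.1.11,port=6379,state=online,offset=105874,lag=0
master_replid:8fdc0b...      # the runid that slaves record
master_repl_offset:105874    # the master's current replication offset
```

When slave0's offset matches master_repl_offset, the slave is fully caught up.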

3. Command Propagation

Subsequent write commands are propagated from the master to slaves, with each side tracking progress via the replication offset. If a slave's offset falls outside the replication backlog, or the master's runid changes, a full resync occurs.

Heartbeat Mechanism

The master sends PING to each slave (every 10 s by default, configurable via repl-ping-slave-period) to check slave liveness.

Each slave sends REPLCONF ACK <offset> every second, reporting its current replication offset.

If fewer healthy slaves than min-slaves-to-write are connected, or their reported lag exceeds min-slaves-max-lag seconds, the master refuses writes to protect data integrity (e.g. min-slaves-to-write 2 with min-slaves-max-lag 8).
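The heartbeat behavior above maps to three redis.conf directives; the values shown here are examples, not defaults (Redis 5+ spells the latter two min-replicas-*):

```conf
repl-ping-slave-period 10   # master pings slaves every 10 s
min-slaves-to-write 2       # refuse writes with fewer than 2 healthy slaves
min-slaves-max-lag 8        # slaves lagging more than 8 s don't count as healthy
```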

Common Issues and Solutions

Master restart: Since Redis 4.0, the replication ID (master_replid) and offset are persisted in the RDB snapshot, so a restarted master can avoid triggering unnecessary full resyncs.

Small replication backlog: Increase repl-backlog-size based on write volume and the expected duration of disconnects.

Blocking commands on slaves: Set repl-timeout to a reasonable value so that genuinely unresponsive slaves are dropped instead of stalling replication.

Ping frequency and packet loss: Lower repl-ping-slave-period to ping more often, and set repl-timeout to roughly 5–10× the ping interval.

Network latency causing data divergence: Optimize the network topology, and use slave-serve-stale-data yes|no to control whether possibly stale reads are served while a resync is in progress.
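The fixes above reduce to a handful of directives. The values in this sketch are illustrative starting points, not recommendations; they must be sized to the actual workload:

```conf
repl-backlog-size 64mb       # larger backlog tolerates longer disconnects
repl-timeout 60              # drop a replication link after 60 s of silence
slave-serve-stale-data yes   # serve possibly stale reads during a resync
```

A useful rule of thumb from the article: keep repl-timeout well above the ping interval (5–10×) so transient packet loss does not sever healthy links.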

Summary

The article consolidates the implementation details of Redis master‑slave replication, highlighting its role as a high‑availability cornerstone and preparing readers for subsequent topics such as sentinel and cluster deployment.

Tags: Database, High Availability, Redis, Master‑Slave, Replication, CAP Theory
Written by

Architect

Professional architect sharing high‑quality architecture insights. Topics include high‑availability, high‑performance, high‑stability architectures, big data, machine learning, Java, system and distributed architecture, AI, and practical large‑scale architecture case studies. Open to ideas‑driven architects who enjoy sharing and learning.
