
Distributed Consistency Algorithms: CAP, BASE, Paxos, and Raft

From CAP and BASE trade‑offs to the rigorous Paxos consensus and the more approachable Raft protocol, this article explains how modern distributed systems achieve consistency despite partitions, failures, and latency, detailing roles, phases, and safety guarantees that underpin reliable micro‑service architectures.

Tencent Cloud Developer

Introduction

Backend service architecture has evolved from centralized systems through SOA and microservices to service mesh. While microservices and service mesh are now widely adopted, distributed architectures amplify problems such as communication failures, request timeouts, and node crashes, all of which can lead to data inconsistency. This article introduces the core theories and classic algorithms for achieving distributed consistency.

1. CAP Theory and BASE Theory

CAP Theory

Proposed by Eric Brewer in 2000, CAP describes three properties of a distributed system:

Consistency : every read returns the latest write.

Availability : every non‑failed node returns a response within a bounded time.

Partition Tolerance : the system continues to operate despite network partitions.

At most two of the three can be satisfied simultaneously. Since network partitions are unavoidable in practice, the real trade-off is between consistency and availability while a partition lasts. The article illustrates this with a five‑node example and a diagram.
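To make the trade-off concrete, here is a minimal sketch of a CP-style write path: the system rejects a write (sacrificing availability) rather than commit without a majority. The function name and quorum rule are illustrative assumptions, not a real client API.

```python
# Hypothetical CP-style write decision: commit only with a majority.
# During a partition, this preserves consistency at the cost of
# availability (writes on the minority side are rejected).

def quorum_write(acks: int, total_replicas: int) -> str:
    """Commit a write only if a majority of replicas acknowledged it."""
    majority = total_replicas // 2 + 1
    return "committed" if acks >= majority else "rejected"

# Five-node cluster; a partition leaves only 2 replicas reachable:
print(quorum_write(acks=2, total_replicas=5))  # rejected
print(quorum_write(acks=3, total_replicas=5))  # committed
```

An AP-style system would instead accept the write on both sides of the partition and reconcile later, which is exactly the behavior BASE (below) formalizes.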

BASE Theory

Introduced by Dan Pritchett (eBay) as a pragmatic alternative to strong consistency. BASE stands for:

Basically Available : the system tolerates partial loss of availability during failures.

Soft State : intermediate states are allowed and may be overwritten.

Eventual Consistency : all replicas converge to the same state after a period of synchronization.

BASE targets large‑scale, highly available, and scalable systems.
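A common way to realize eventual consistency is last-write-wins (LWW) reconciliation: each replica keeps a (timestamp, value) pair, and anti-entropy merges converge to the same state regardless of merge order. This is a hedged sketch with illustrative names, not a specific product's behavior.

```python
from functools import reduce

# Eventual-consistency sketch: last-write-wins merge. Soft state
# (older values) is overwritten until all replicas converge.

def lww_merge(a, b):
    """Return whichever (timestamp, value) pair has the newer timestamp."""
    return a if a[0] >= b[0] else b

r1 = (3, "v-newer")
r2 = (1, "v-old")
r3 = (2, "v-mid")

# Merging in any order converges to the newest write:
assert reduce(lww_merge, [r1, r2, r3]) == (3, "v-newer")
assert reduce(lww_merge, [r3, r2, r1]) == (3, "v-newer")
```

Real systems refine this with vector clocks or CRDTs when wall-clock timestamps are unreliable, but the convergence idea is the same.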

2. Classic Distributed Algorithms

2.1 Paxos Algorithm

Paxos, devised by Leslie Lamport around 1990 and published in 1998 as "The Part‑Time Parliament", is a complete and rigorous consensus algorithm. Its goal is to have a set of nodes reach agreement on a single value, even when some nodes or messages fail.

Roles

Proposer: proposes a value.

Acceptor: votes on proposals and promises not to accept lower numbers.

Learner: learns the chosen value.

Constraints (P0‑P4)

P0: a value chosen by a majority is considered chosen.

P1: an Acceptor must accept the first proposal it receives.

P2/P2a/P2b: once a value is chosen, all higher‑numbered proposals must carry the same value.

P3: a proposal must obtain a majority of promises before being issued.

P4: an Acceptor that promises a number N must reject any proposal with a smaller number.

The article visualizes these constraints with several diagrams and explains how they prevent split‑brain and rollback scenarios.

Protocol Flow

Phase 1 (Prepare) : Proposer sends a Prepare(N) to a majority of Acceptors. Acceptors respond with the highest numbered proposal they have accepted (if any) and promise not to accept lower numbers.

Phase 2 (Accept) : If a majority of promises are received, the Proposer sends an Accept(N, V) where V is the value from the highest prior proposal (or its own value if none). Acceptors accept the proposal if they have not promised a higher number.
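The two phases above can be sketched from the Acceptor's side. This is a minimal single-decree illustration; message shapes and field names are assumptions, and a real implementation would persist state and notify Learners.

```python
# Minimal single-decree Paxos acceptor, following the Prepare/Accept
# phases described above.

class Acceptor:
    def __init__(self):
        self.promised_n = -1      # highest proposal number promised
        self.accepted_n = -1      # number of the last accepted proposal
        self.accepted_v = None    # value of the last accepted proposal

    def prepare(self, n):
        """Phase 1: promise not to accept proposals numbered below n."""
        if n > self.promised_n:
            self.promised_n = n
            # Report any previously accepted proposal so the Proposer
            # must carry its value forward (constraint P2).
            return ("promise", self.accepted_n, self.accepted_v)
        return ("reject", self.promised_n, None)

    def accept(self, n, v):
        """Phase 2: accept unless a higher number was promised (P4)."""
        if n >= self.promised_n:
            self.promised_n = n
            self.accepted_n, self.accepted_v = n, v
            return ("accepted", n, v)
        return ("reject", self.promised_n, None)

a = Acceptor()
assert a.prepare(1)[0] == "promise"
assert a.accept(1, "x")[0] == "accepted"
assert a.prepare(0)[0] == "reject"   # lower-numbered Prepare is refused
assert a.prepare(2) == ("promise", 1, "x")  # prior value reported back
```

Note how the promise response returns the previously accepted value: this is the mechanism that forces higher-numbered proposals to converge on an already-chosen value.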

Variants such as Multi‑Paxos, Fast Paxos, and EPaxos extend the basic protocol to sequences of values and lower latency.

2.2 Raft Algorithm

Raft, introduced by Diego Ongaro and John Ousterhout of Stanford in 2014, aims to be an understandable consensus algorithm. According to its authors, the name loosely evokes "Reliable, Replicated, Redundant, And Fault‑Tolerant" (and a raft built out of logs).

Key Concepts

Roles: Leader, Follower, Candidate.

Term: monotonically increasing identifier for election cycles.

Randomized election timeouts (typically 150‑300 ms) to avoid split votes.

Heartbeat messages from Leader to maintain authority.

Protocol Flow

a. Leader Election

Follower times out → increments its term, becomes Candidate, votes for itself.

Candidate sends RequestVote RPCs to other nodes.

Followers grant their vote if the candidate's term is at least as large as their own, they have not already voted for another candidate in that term, and the candidate's log is at least as up‑to‑date as theirs.

If a Candidate receives votes from a majority, it becomes Leader and starts sending heartbeats.
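The vote-granting rule above can be sketched as a single check. Field names here are illustrative; Raft compares logs by the (term, index) of their last entry, which Python's tuple comparison models directly.

```python
# Sketch of a Follower's RequestVote decision, per the election
# rules above. my_last_log and cand_last_log are (term, index) tuples.

def grant_vote(my_term, voted_for, my_last_log,
               cand_term, cand_id, cand_last_log):
    """Grant the vote iff the candidate's term is current, we have not
    already voted for someone else this term, and the candidate's log
    is at least as up-to-date as ours."""
    if cand_term < my_term:
        return False            # stale candidate
    if cand_term == my_term and voted_for not in (None, cand_id):
        return False            # already voted this term
    return cand_last_log >= my_last_log   # (term, index) comparison

# A candidate with a stale log is refused even in a newer term:
assert not grant_vote(2, None, (2, 5), 3, "c1", (1, 9))
# An up-to-date candidate in a newer term gets the vote:
assert grant_vote(2, None, (2, 5), 3, "c1", (2, 7))
```

The log up-to-dateness check is what enforces leader completeness: a node missing committed entries can never assemble a majority of votes.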

b. Log Replication

Leader appends the client command to its local log.

Leader sends AppendEntries RPCs in parallel to Followers.

Leader waits for acknowledgments from a majority; once received, the entry is considered committed.

Leader applies the entry to its state machine and replies to the client.

Leader continues retrying AppendEntries until every Follower has eventually stored the entry.
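The commit rule in the steps above (step 3) reduces to a majority computation over replication progress. This is a sketch with illustrative names; real Raft additionally requires the entry at the commit index to belong to the Leader's current term.

```python
# Sketch: deriving the Leader's commit index from replication progress.
# An entry is committed once it is stored on a majority of nodes;
# the Leader itself counts as one replica.

def commit_index(leader_last_index, follower_match_indexes):
    """Highest log index replicated on a majority of the cluster."""
    indexes = sorted(follower_match_indexes + [leader_last_index],
                     reverse=True)
    majority = len(indexes) // 2 + 1
    # The majority-th largest index is stored on at least `majority` nodes.
    return indexes[majority - 1]

# Five-node cluster: Leader at index 7, Followers at 7, 5, 3, 2.
# Index 5 is stored on 3 of 5 nodes, so everything up to 5 is committed:
assert commit_index(7, [7, 5, 3, 2]) == 5
```

Sorting and taking the majority-th entry is a compact way to express "the largest N such that a majority of matchIndex values are ≥ N".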

Raft guarantees several safety properties: election safety, log matching, leader completeness, and state‑machine safety. The article includes diagrams illustrating these properties and links to interactive Raft visualizations.

Conclusion

Distributed consistency algorithms have evolved over nearly four decades. Paxos remains the foundational theory, while Raft provides a more approachable implementation for modern systems. Ongoing research continues to push the boundaries of distributed consensus.

Tags: distributed systems, BASE theory, CAP theorem, consistency, Raft, Paxos
Written by

Tencent Cloud Developer

Official Tencent Cloud community account that brings together developers, shares practical tech insights, and fosters an influential tech exchange community.
