Understanding CAP, BASE, Paxos and Raft: Core Distributed Consistency Algorithms

This article explains the evolution of backend service architectures, introduces the CAP and BASE theories, and provides detailed walkthroughs of the Paxos and Raft consensus algorithms, covering their roles, constraints, protocol steps, and practical considerations for building reliable distributed systems.

Tencent Cloud Middleware

CAP Theorem

Eric Brewer (2000) identified three desirable properties of distributed systems:

Consistency : every read returns the most recent write.

Availability : every non‑faulty node responds within a bounded time.

Partition Tolerance : the system continues operating despite network partitions.

At most two of the three can be guaranteed simultaneously. Since network partitions cannot be ruled out in practice, the real trade-off during a partition is between consistency and availability: in a five-node cluster split 3/2 by the network, the minority side must either reject requests (sacrificing availability) or serve them and risk returning stale data (sacrificing consistency).
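The trade-off can be made concrete with a toy sketch (not a real database): two write policies over the same replica set, where a "CP" policy refuses writes that cannot reach a majority and an "AP" policy accepts them and lets replicas diverge. All names here are illustrative.

```python
# Toy illustration of the CP vs AP choice during a network partition.

class Replica:
    def __init__(self, name):
        self.name = name
        self.data = {}

def write(replicas, reachable, key, value, mode):
    """Attempt a write when only `reachable` replicas can be contacted."""
    if mode == "CP" and len(reachable) <= len(replicas) // 2:
        return False  # no majority: refuse the write to preserve consistency
    for r in reachable:  # AP: accept the write; unreachable replicas go stale
        r.data[key] = value
    return True

a, b, c = Replica("a"), Replica("b"), Replica("c")
cluster = [a, b, c]

# Partition: only replica `a` is reachable from the client.
assert write(cluster, [a], "k", 1, mode="CP") is False  # availability lost
assert write(cluster, [a], "k", 1, mode="AP") is True   # consistency lost
assert b.data.get("k") is None                          # b is now stale
```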

BASE Theory

Dan Pritchett (eBay) proposed BASE as a relaxation of CAP for large‑scale, highly available systems. It consists of:

Basically Available : the system tolerates temporary loss of availability (e.g., degraded latency or functionality) during failures.

Soft State : data may exist in intermediate states; updates propagate asynchronously.

Eventual Consistency : all replicas converge to the same state after some time.
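The three BASE properties can be seen in a minimal sketch of asynchronous replication: a write lands on one replica (soft state, others temporarily stale), and a later anti-entropy pass converges all replicas (eventual consistency). Timestamps stand in for a real versioning scheme; the names are illustrative, not from any particular system.

```python
# Minimal eventual-consistency sketch: last-writer-wins anti-entropy.

replicas = [{}, {}, {}]

def write(replica, key, value, ts):
    replica[key] = (value, ts)  # soft state: other replicas are now stale

def anti_entropy(replicas):
    """Propagate the newest version of every key to every replica."""
    merged = {}
    for r in replicas:
        for k, (v, ts) in r.items():
            if k not in merged or ts > merged[k][1]:
                merged[k] = (v, ts)
    for r in replicas:
        r.update(merged)

write(replicas[0], "x", "new", ts=2)
assert "x" not in replicas[1]   # intermediate state: replica 1 is stale
anti_entropy(replicas)
assert all(r["x"] == ("new", 2) for r in replicas)  # replicas converged
```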

Paxos Algorithm

Leslie Lamport’s Paxos (first described in 1990, published in 1998) achieves consensus on a single value across distributed nodes. Roles:

Proposer : initiates a proposal (value + unique, monotonically increasing number).

Acceptor : votes on proposals and promises not to accept lower‑numbered ones.

Learner : learns the chosen value.

Key safety constraints (P0‑P4):

P0: a value chosen by a majority of acceptors is considered selected.

P1: an acceptor must accept the first proposal it receives.

P2/P2a/P2b: once a value is chosen, any higher‑numbered proposal must carry the same value.

P3: a proposal’s number must be known to a majority before it can be submitted.

P4: acceptors must reject proposals with numbers lower than the highest they have promised.

The protocol proceeds in two phases:

Prepare (Phase 1) : a proposer sends Prepare(N) to a majority; acceptors respond with the highest-numbered proposal they have accepted (if any) and promise not to accept proposals < N.

Accept (Phase 2) : if the proposer receives a majority of promises, it sends Accept(N, V) where V is the value from the highest prior proposal (or its own value if none). Acceptors accept if they have not promised a higher number.
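The two phases above can be sketched with in-process "acceptors". A real deployment would use RPCs, stable storage, and retries; this only illustrates the promise/accept bookkeeping, with all names chosen for the example.

```python
# Sketch of single-decree Paxos: Prepare/Promise, then Accept.

class Acceptor:
    def __init__(self):
        self.promised = -1                  # highest proposal number promised
        self.acc_n, self.acc_v = -1, None   # highest accepted proposal, if any

    def prepare(self, n):
        if n > self.promised:
            self.promised = n
            return True, self.acc_n, self.acc_v
        return False, None, None

    def accept(self, n, v):
        if n >= self.promised:              # nothing higher was promised
            self.promised = n
            self.acc_n, self.acc_v = n, v
            return True
        return False

def propose(acceptors, n, value):
    # Phase 1: Prepare(n); collect promises from a majority.
    promises = [(an, av) for ok, an, av in (a.prepare(n) for a in acceptors) if ok]
    if len(promises) <= len(acceptors) // 2:
        return None
    # If any acceptor already accepted a value, adopt the highest-numbered one.
    prior_n, prior_v = max(promises, key=lambda p: p[0])
    v = prior_v if prior_n >= 0 else value
    # Phase 2: Accept(n, v); the value is chosen once a majority accepts.
    acks = sum(a.accept(n, v) for a in acceptors)
    return v if acks > len(acceptors) // 2 else None

acceptors = [Acceptor() for _ in range(5)]
assert propose(acceptors, 1, "A") == "A"  # first proposal chooses "A"
assert propose(acceptors, 2, "B") == "A"  # later proposals must keep "A"
```

The second call demonstrates the P2 constraint: proposal 2 wanted "B" but is forced to carry the already-chosen "A".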

Variants and related protocols such as Multi‑Paxos, Fast Paxos, EPaxos, and PBFT extend the basic idea for performance or Byzantine fault tolerance.

Raft Algorithm

Raft (2014, Stanford) was designed for understandability while providing the same safety guarantees as Paxos. It decomposes consensus into three sub‑problems:

Leader election : a candidate becomes leader after obtaining votes from a majority of nodes.

Log replication : the leader appends client commands to its log and replicates them to followers.

Safety : once a log entry is committed, it appears in the logs of all future leaders.

Key Concepts

Roles : Leader, Follower, Candidate.

Term : a monotonically increasing identifier; each election starts a new term.

Random election timeout : followers start an election if they do not receive a heartbeat within a random interval (e.g., 150‑300 ms).

Heartbeat : periodic AppendEntries RPCs from the leader to maintain authority.

Election Process

Follower increments its currentTerm, becomes a Candidate, votes for itself, and sends RequestVote RPCs to all other nodes.

Followers grant at most one vote per term, and only to candidates whose log is at least as up‑to‑date as theirs (the candidate's last entry has a higher term, or the same term and an index at least as high).

If a candidate receives votes from a majority, it becomes Leader and immediately sends heartbeats (empty AppendEntries) to assert leadership.

If no majority is reached before the election timeout, the candidate increments its term and starts a new election; randomized timeouts make repeated split votes unlikely.
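The voting rule a follower applies can be sketched as below, assuming each node tracks its current term, who it voted for, and its last log entry's (term, index). The field names are illustrative, not taken from any particular Raft implementation.

```python
# Sketch of a follower's RequestVote handling.

def grant_vote(state, req_term, candidate_id, cand_last_term, cand_last_index):
    """Return True if this follower grants its vote for `req_term`."""
    if req_term < state["current_term"]:
        return False                       # stale candidate
    if req_term > state["current_term"]:
        state["current_term"] = req_term   # newer term: forget our old vote
        state["voted_for"] = None
    if state["voted_for"] not in (None, candidate_id):
        return False                       # at most one vote per term
    # Candidate's log must be at least as up-to-date as ours
    # (compare last entry's term first, then index).
    up_to_date = (cand_last_term, cand_last_index) >= (
        state["last_log_term"], state["last_log_index"])
    if up_to_date:
        state["voted_for"] = candidate_id
    return up_to_date

follower = {"current_term": 1, "voted_for": None,
            "last_log_term": 1, "last_log_index": 3}
assert grant_vote(follower, 2, "n1", 1, 3) is True   # equal log, newer term
assert grant_vote(follower, 2, "n2", 1, 5) is False  # already voted this term
assert grant_vote(follower, 3, "n2", 1, 2) is False  # log less up-to-date
```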

Log Replication

Client request arrives at the Leader.

Leader appends the command as a new log entry locally.

Leader sends AppendEntries RPCs (in parallel) to all followers.

When the entry is stored on a majority of servers (the leader plus enough followers), the leader marks it committed and applies it to its state machine.

The leader then responds to the client.

Followers apply committed entries to their state machines; if a follower's log has diverged, the leader's AppendEntries consistency check locates the last matching entry, and the follower overwrites the conflicting suffix with the leader's entries until it is up to date.
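The follower side of this flow can be sketched as below: the consistency check on the preceding entry's (index, term), truncation of any conflicting suffix, and commit-index advancement. Entries are (term, command) pairs and all names are illustrative.

```python
# Sketch of a follower handling an AppendEntries RPC.

def append_entries(log, prev_index, prev_term, entries, leader_commit, commit_index):
    """Return (success, new_commit_index); `log` is mutated in place."""
    # Reject if our log has no entry matching (prev_index, prev_term);
    # the leader would then retry with an earlier prev_index.
    if prev_index >= 0:
        if prev_index >= len(log) or log[prev_index][0] != prev_term:
            return False, commit_index
    # Overwrite any conflicting suffix with the leader's entries.
    del log[prev_index + 1:]
    log.extend(entries)
    # Advance our commit index up to what the leader has committed.
    new_commit = min(leader_commit, len(log) - 1)
    return True, max(commit_index, new_commit)

follower_log = [(1, "a"), (1, "b"), (2, "stale")]  # last entry conflicts
ok, ci = append_entries(follower_log, 1, 1, [(3, "c"), (3, "d")],
                        leader_commit=2, commit_index=1)
assert ok and follower_log == [(1, "a"), (1, "b"), (3, "c"), (3, "d")]
assert ci == 2
# Mismatched prev_term: the follower rejects the RPC outright.
ok, _ = append_entries(follower_log, 3, 2, [], leader_commit=3, commit_index=2)
assert ok is False
```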

Raft guarantees several safety properties:

Election safety : at most one leader per term.

Log matching : if two logs contain an entry with the same index and term, all preceding entries are identical.

Leader completeness : a leader’s log contains all entries that were committed in previous terms.

State machine safety : if a server has applied a log entry at a given index to its state machine, no other server will ever apply a different entry at that index.

Practical Resources

Raft step‑by‑step animation: http://thesecretlivesofdata.com/raft/

Official Raft site: https://raft.github.io/

Reference papers and tutorials include the original Paxos paper, the Raft paper ( https://web.stanford.edu/~ouster/cgi-bin/papers/raft-atc14 ), and various online articles.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: BASE theory, Raft, Consensus Algorithm, Paxos, CAP theory
Written by

Tencent Cloud Middleware

Official account of Tencent Cloud Middleware. Focuses on microservices, messaging middleware and other cloud‑native technology trends, publishing product updates, case studies, and technical insights. Regularly hosts tech salons to share effective solutions.