
How Multi-Paxos Achieves Global Order Without Frequent Prepare Steps

This article explains the Multi-Paxos algorithm, showing how independent Paxos instances can be combined into a globally ordered sequence, how the Prepare phase can be omitted in certain cases, and why a leader emerges naturally to maximize efficiency.

WeChat Backend Team
Previous article: Paxos Theory Introduction (1): Naïve Paxos Algorithm Derivation and Proof. Understanding naïve Paxos is a prerequisite for reading this article.

Multi-Paxos

Naïve Paxos determines a value through multiple rounds of Prepare/Accept, forming an Instance. Multi-Paxos uses Paxos to decide many values whose order is identical across all nodes, establishing a global order.

How do multiple Instances operate? First, we consider the simplest model where each Instance runs independently.

Each Instance runs an independent naïve Paxos; we ensure that Instance i’s value is decided before starting Instance i+1, guaranteeing ordered Instances.
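As a minimal sketch of this sequential model, the loop below decides one value per instance and only then moves to the next. The `run_basic_paxos` helper is hypothetical, standing in for a full Prepare/Accept round of naïve Paxos:

```python
# Sketch: sequential Multi-Paxos where Instance i must be decided
# before Instance i+1 starts. run_basic_paxos is a hypothetical
# stand-in for a complete Prepare/Accept round of naive Paxos.

def run_basic_paxos(value):
    # In a real system this would involve a Prepare/Promise exchange
    # followed by Accept/Accepted against a quorum of acceptors.
    return value  # assume the proposed value is chosen

def append_log(log, values):
    """Decide values one instance at a time, preserving global order."""
    for v in values:
        chosen = run_basic_paxos(v)   # high latency: a full round each time
        log.append(chosen)            # Instance len(log) is now decided
    return log

log = append_log([], ["set x=1", "set y=2", "del x"])
# Every node that runs the same instances in the same order
# ends with an identical log.
```

Because every instance pays the full Prepare/Accept latency, this model is correct but slow, which is exactly what the prepare-skipping optimization below addresses.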

This approach is inefficient because naïve Paxos has high latency. Multi-Paxos seeks to link multiple Instances to eliminate the Prepare step in some cases.

Below is a sample evolution illustrating the algorithm’s key points, which are simple and require no further proof.

First, we define the elements of Multi-Paxos:

Three participating nodes A, B, C.

Prepare(b): a node initiates a Prepare with proposal number b.

Promise(b): a node promises proposal number b, rejecting any Prepare or Accept with a lower number.

Accept(b): a node initiates an Accept with proposal number b.

1(A) means number 1 generated by node A; 2(B) means number 2 generated by node B. Green indicates Accept succeeded, red indicates rejection.
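Proposal numbers like 1(A) and 2(B) can be modeled, under stated assumptions, as (counter, node id) pairs compared lexicographically, so that numbers from different nodes never tie. A minimal sketch (the `Ballot` class is illustrative, not from the original article):

```python
# Sketch: proposal numbers such as 1(A) or 2(B) modeled as
# (counter, node_id) pairs compared lexicographically, so
# 2(B) > 1(A), and equal counters are broken by node id.

from functools import total_ordering

@total_ordering
class Ballot:
    def __init__(self, counter, node):
        self.counter, self.node = counter, node

    def _key(self):
        return (self.counter, self.node)

    def __eq__(self, other):
        return self._key() == other._key()

    def __lt__(self, other):
        return self._key() < other._key()

    def __repr__(self):
        return f"{self.counter}({self.node})"

assert Ballot(2, "B") > Ballot(1, "A")   # higher counter wins
assert Ballot(1, "B") > Ballot(1, "A")   # node id breaks ties
```

This tie-breaking is what lets distinct nodes generate distinct, totally ordered numbers without coordination.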

Figure: Parallel submission evolution of nodes A/B/C

In this scenario, node A keeps receiving Prepare messages from the other nodes, so its promised number keeps growing, and A must continually raise its own Prepare number to compete. There is no opportunity for optimization.

Figure: Evolution when only node A submits

Here, without interference from other nodes, each Prepare uses the same number. This suggests making Promised(b) global: if the effective range of Promise(b) is Instance [i, ∞), we can skip further Prepares after Instance i.

If no other node submits after Instance 1, Node A’s subsequent Accepts will succeed. The Prepare‑less window exists while only one node is submitting.

When node B submits at Instance 4, Promised(b) becomes 2(B), causing Node A’s Accept to be rejected. Node A must raise its Prepare number to compete, revealing a clear Prepare‑less window (Instances 1‑3) where only one node submits.

Key point: skipping Prepare and going straight to Accept is safe because the Accept's proposal number has already been promised.
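The evolution above can be sketched as an acceptor whose single promise covers all future instances, so a proposer holding the promised number goes straight to Accept. This is an illustrative single-process model (ballots are plain integers; real systems would use node-qualified numbers and quorums):

```python
# Sketch: an acceptor whose promise applies globally to all future
# instances, so a proposer holding the promised number can skip
# Prepare and go straight to Accept. Single-threaded illustration.

class Acceptor:
    def __init__(self):
        self.promised = 0      # one promise covering Instance [i, inf)
        self.accepted = {}     # instance -> (ballot, value)

    def prepare(self, ballot):
        if ballot > self.promised:
            self.promised = ballot
            return True        # Promise(ballot)
        return False           # reject: already promised a higher number

    def accept(self, instance, ballot, value):
        if ballot >= self.promised:   # safe: this ballot was promised
            self.promised = ballot
            self.accepted[instance] = (ballot, value)
            return True
        return False           # reject: a higher Prepare intervened

acc = Acceptor()
acc.prepare(1)                    # node A: one Prepare(1)
assert acc.accept(1, 1, "a")      # Instances 1-3: Accept only,
assert acc.accept(2, 1, "b")      # no further Prepare needed
assert acc.accept(3, 1, "c")
acc.prepare(2)                    # node B interferes with Prepare(2)
assert not acc.accept(4, 1, "d")  # node A's Accept(1) is now rejected
```

The final rejection mirrors the Instance 4 scenario: node A must raise its Prepare number and compete again, ending the prepare-less window.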

Summary

Multi-Paxos expands the effective range of Promised(b) to the global Instance space, allowing consecutive submissions from a single node to bypass the Prepare phase.

Note: A common misconception is that Multi-Paxos requires a leader to eliminate Prepare. In fact, Multi-Paxos optimizes under specific request patterns and does not mandate a leader; it simply benefits from parallel submissions.

Leader

Although Multi-Paxos permits parallel submissions, competing Prepares degrade its efficiency back to that of naïve Paxos. To preserve the optimization, we prefer a leader, so that only one node submits most of the time.

Obtaining a leader is simple; Lamport's paper mentions it only briefly. In Multi-Paxos, an Accept(b) can occur only if b has been promised. A run of continuous Accepts is interrupted when the promised number is raised by another node's submission, which always begins with a Prepare. So to keep other nodes from submitting, it suffices to reject submission requests for a period after receiving an Accept from another node.

When an Accept is received from another node, reject new submissions for a short time.

This implicit leader emerges without explicit election; nodes naturally avoid breaking the continuous Accept state, causing other nodes to reject requests.
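The back-off rule above can be sketched as a small timer per node. The window length `t_reject` and the method names are assumptions for illustration; the article only prescribes "reject new submissions for a short time":

```python
# Sketch: implicit leadership via a back-off window. After seeing an
# Accept from another node, a node refuses client submissions for a
# short period, letting the current submitter keep its prepare-less
# run. The window length t_reject is an assumed tuning parameter.

class Node:
    def __init__(self, name, t_reject=1.0):
        self.name = name
        self.t_reject = t_reject
        self.last_foreign_accept = float("-inf")  # never seen one yet

    def on_accept(self, sender, now):
        # Record Accepts from other nodes; our own do not count.
        if sender != self.name:
            self.last_foreign_accept = now

    def may_submit(self, now):
        # Submit only if no other node has accepted recently.
        return now - self.last_foreign_accept >= self.t_reject

b = Node("B", t_reject=1.0)
b.on_accept("A", now=10.0)     # A is actively accepting: A "leads"
assert not b.may_submit(10.5)  # within the window, B defers to A
assert b.may_submit(11.5)      # window elapsed, B may compete again
```

Note there is no election message anywhere: leadership is just the absence of competing submissions, and it lapses automatically once the window expires.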

Many complex leader election algorithms online misuse Paxos for leader selection, often stemming from a misunderstanding of Multi-Paxos and the belief that a leader is required.

Using Paxos for leader election is meaningful, but it should not be conflated with the leader concept in Multi-Paxos. Paxos also supports strong consistency for reads by electing a Master, which differs from a Leader. Implementing a Master involves strong consistency algorithms and lease mechanisms, which will be discussed later.

Reading the source code is the best way to understand; see the open‑source Paxos library: https://github.com/tencent-wechat/phxpaxos
Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: algorithm, leader election, Multi-Paxos, Paxos, distributed consensus
Written by

WeChat Backend Team

Official account of the WeChat backend development team, sharing their experience in large-scale distributed system development.
