
How Google Cloud Spanner Achieves Global Scale with Paxos and TrueTime

This article explains how Google Cloud Spanner combines relational database features with NoSQL scalability, using multi‑version storage, TrueTime, Paxos consensus, and dynamic sharding to deliver a globally distributed, strongly consistent, high‑availability database solution.


Google Cloud Spanner is a revolutionary database system that blends the strengths of traditional relational databases with the scalability of NoSQL systems, designed for massive workloads across multiple regions.

Key Features of Cloud Spanner

Multi‑Version Storage with Synchronous Replication – Keeps timestamped versions of each row, while synchronous replication ensures durability and availability even during regional failures.

TrueTime Technology – Combines GPS and atomic clocks to provide a globally consistent timeline.

Simplified Data Management – Offers a familiar SQL interface while handling distributed complexity behind the scenes.

Data Splitting and Dynamic Sharding – Partitions data into continuous key ranges (splits) and automatically adjusts shards based on load.

Overall, Spanner provides enterprises with a competitive database solution that supports global operations while retaining the robustness of traditional relational systems.

Spanner Architecture Overview

Spanner is organized into a logical "Universe" spanning multiple zones. Each zone contains dedicated spanservers that store data and process transactions, building on concepts from Google’s early distributed storage system, Bigtable.

[Figure: Spanner architecture]

Key Architectural Components

Data is managed as smaller units called tablets, distributed across spanservers.

Tablets – Store key‑value pairs with timestamps for version control, enabling multi‑version reads.

Colossus File System – Distributed storage that provides fault tolerance and high‑performance scaling.

Splits – Continuous key ranges that are dynamically re‑sharded when they become too large or hot.

Cross‑Region Replication – Replicates each split across multiple regions for redundancy.
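The split mechanism above can be sketched in a few lines. This is a purely illustrative simulation, not Spanner's implementation: the `Split` class, the key-count threshold, and the median-key split rule are all assumptions (real splits divide based on size and load).

```python
# Illustrative sketch: a "split" as a contiguous key range [start, end)
# that is divided in two when it grows too large.
from dataclasses import dataclass, field

@dataclass
class Split:
    """A contiguous key range [start, end) holding row keys."""
    start: str
    end: str
    keys: list = field(default_factory=list)

MAX_KEYS_PER_SPLIT = 4  # hypothetical threshold; real limits are size/load based

def divide(s):
    """Re-shard a hot/large range at its median key into two smaller splits."""
    s.keys.sort()
    mid = s.keys[len(s.keys) // 2]
    left = Split(s.start, mid, [k for k in s.keys if k < mid])
    right = Split(mid, s.end, [k for k in s.keys if k >= mid])
    return [left, right]

def insert_key(splits, key):
    """Route a key to the split owning its range, re-sharding if it grows too big."""
    for i, s in enumerate(splits):
        if s.start <= key < s.end:
            s.keys.append(key)
            if len(s.keys) > MAX_KEYS_PER_SPLIT:
                splits[i:i + 1] = divide(s)
            return
    raise KeyError(key)
```

Because each split stays a contiguous key range, routing a key to its split is a simple range lookup, and re-sharding only touches the one split being divided.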

Spanner uses the Paxos consensus algorithm to manage cross‑region replication, ensuring consistency among replicas.

Leader Election – One replica acts as the leader for a split, handling all writes; followers can serve reads, improving scalability.

[Figure: Paxos leader election]

Spanner instances span multiple zones within a region, with replicas distributed across zones to maintain availability even if a zone fails. Data resides in Colossus, a distributed, replicated file system that separates storage from compute for independent scaling.

[Figure: data replication across zones]

Paxos Consensus Mechanism

Paxos allows a group of replicas to reach agreement on values such as transaction commits or leader assignment.

Leader Assignment

Each split is associated with a Paxos group spanning multiple regions.

One replica is designated leader to handle all writes for that split.

Followers assist with reads and contribute to scalability.

The leader’s responsibilities include processing writes, maintaining order via TrueTime timestamps, and communicating proposals to followers.
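The majority-acknowledgment idea behind Paxos replication can be sketched as follows. This is a heavily simplified, hypothetical model (real Paxos also handles ballots, competing proposers, and recovery), but it shows why a write survives minority failures.

```python
# Minimal sketch of majority-quorum replication: the leader proposes an
# entry to every replica and commits only if a majority acknowledges it.

class Replica:
    def __init__(self, up=True):
        self.up = up
        self.log = []

    def append(self, entry):
        """Acknowledge and store the entry, unless this replica is down."""
        if self.up:
            self.log.append(entry)
            return True
        return False

def replicate_write(replicas, entry):
    """Leader-side replication: committed iff a majority of replicas ack."""
    acks = sum(1 for r in replicas if r.append(entry))
    quorum = len(replicas) // 2 + 1
    return acks >= quorum
```

With five replicas the quorum is three, so the group tolerates two simultaneous failures without losing the ability to commit.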

Transaction Processing

Spanner provides robust, strongly consistent transaction handling for both writes and reads.

Write Transactions

Locking – The Paxos leader locks rows before modification.

Timestamp Assignment – TrueTime assigns a globally consistent timestamp.

Majority Replication – Details are sent to a majority of replicas; the transaction commits only after acknowledgment.

Commit Wait – The leader waits briefly until the commit timestamp is guaranteed to be in the past on all nodes before reporting the commit.

For multi‑split writes, Spanner uses a two‑phase commit with one split acting as the coordinator.

Read Transactions

Strongly Consistent Reads – Always return the latest committed data, verified via TrueTime.

Stale Reads – May return slightly out-of-date data (e.g., up to 10 seconds old) in exchange for lower latency.

[Figures: strong read vs. stale read]
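Both read modes fall out of multi-version storage: each row keeps timestamped versions, and a read simply picks the newest version at or before its read timestamp. The sketch below is illustrative only (plain floats stand in for TrueTime timestamps).

```python
# Strong vs. stale reads over multi-version storage: a read at timestamp
# `read_ts` sees the latest version committed at or before that time.

def read(versions, read_ts):
    """Return the newest value whose commit timestamp is <= read_ts."""
    visible = [(ts, v) for ts, v in versions if ts <= read_ts]
    return max(visible)[1] if visible else None

# Three committed versions of one row, oldest to newest.
versions = [(10.0, "v1"), (20.0, "v2"), (30.0, "v3")]

strong = read(versions, read_ts=35.0)  # reads at "now": latest committed data
stale = read(versions, read_ts=25.0)   # reads 10s in the past: older snapshot
```

Because old versions are retained, a stale read never blocks on in-flight writes; it just answers from an earlier, already-consistent snapshot.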

To avoid deadlocks, Spanner employs the wound‑wait algorithm: younger transactions wait for older ones, and older transactions abort younger ones holding needed locks.
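The wound-wait rule reduces to one comparison of transaction start timestamps, sketched below as a toy decision function (purely illustrative; real lock managers track queues and abort machinery around this rule).

```python
# Toy sketch of wound-wait: an older transaction (smaller start timestamp)
# "wounds" (aborts) a younger lock holder; a younger requester waits.
# Because priority is fixed by age, the waits-for relation can never cycle,
# so deadlock is impossible.

def wound_wait(requester_ts, holder_ts):
    """Decide what a requester does when it hits a lock held by another txn."""
    if requester_ts < holder_ts:  # requester is older -> wound the holder
        return "abort_holder"
    return "wait"                 # requester is younger -> wait its turn
```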

TrueTime Technology

TrueTime is a key innovation that provides a globally synchronized time interval rather than a single point, using atomic clocks and GPS.

Atomic Clocks – Offer high‑precision timekeeping with minimal drift.

GPS Clocks – Provide global synchronization, supplemented by atomic clocks when GPS is unavailable.

TrueTime exposes a time interval TTInterval [earliest, latest] and synchronizes with time masters roughly every 30 seconds, keeping uncertainty under ~10 ms.
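The interval API and its interaction with commit wait can be sketched directly. The 7 ms uncertainty below is an assumed constant under the ~10 ms bound; real TrueTime's uncertainty varies between master syncs.

```python
# Sketch of TrueTime's interval semantics: TT.now() returns an interval
# [earliest, latest] rather than a point, and commit wait holds a commit
# until `latest` has passed, so the chosen timestamp is guaranteed to be
# in the past on every node regardless of clock skew within the bound.
import time

EPSILON = 0.007  # assumed clock uncertainty in seconds

def tt_now():
    """Return a TrueTime-style interval [earliest, latest] around real time."""
    t = time.time()
    return (t - EPSILON, t + EPSILON)

def commit_wait(commit_ts):
    """Block until TT.now().earliest > commit_ts, i.e. commit_ts is past."""
    while tt_now()[0] <= commit_ts:
        time.sleep(0.001)
```

Waiting out the uncertainty is what turns interval-valued clocks into external consistency: once `commit_wait` returns, any later transaction anywhere will pick a strictly larger timestamp.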

Global External Consistency – Guarantees a total order of transactions across all replicas.

Lock‑Free Reads – Enables read‑only transactions to access a consistent snapshot without locking.

Atomic Schema Updates – Treats schema changes as timestamped transactions.

Historical Reads – Allows reading data as of a specific timestamp for auditing.

Conclusion

Google Spanner represents a major breakthrough in database engineering, merging relational reliability with NoSQL scalability. Its architecture, built on Paxos consensus and TrueTime, delivers globally distributed, strongly consistent transactions while maintaining high performance and availability.

Spanner is redefining the possibilities of distributed databases, setting new standards for scalability, reliability, and innovation.

Tags: scalability, distributed databases, Paxos, TrueTime, Google Cloud Spanner
Written by Radish, Keep Going!