
Mastering Distributed Transactions: 2PC, TCC, and Message Queue Solutions

This article explains the fundamentals of distributed transactions, covering ACID properties, the CAP theorem, two‑phase commit, TCC compensation, and a message‑queue based eventual consistency approach, while highlighting their advantages, drawbacks, and practical application scenarios.

What Is a Distributed System

A distributed system consists of components deployed on different nodes that cooperate via a network, such as a recharge‑and‑points service where two independent systems must work together.

What Is a Transaction

A transaction is a unit of work with four ACID properties:

Atomicity : all operations succeed or all are rolled back.

Consistency : data moves from one correct state to another.

Isolation : changes are invisible to other transactions until committed.

Durability : once committed, changes survive crashes.

What Is a Local Transaction

Local transactions are controlled by a relational database that provides ACID guarantees, typical in monolithic applications.
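The all-or-nothing guarantee a database gives a local transaction can be sketched in plain code. The following in-memory transfer is illustrative only (the class and method names are invented for this example, not a real API): it validates everything before mutating any state, so a failed transfer is an automatic "rollback" that leaves both accounts untouched.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal in-memory sketch of a local transaction's atomicity: a transfer
// either applies both the debit and the credit, or changes nothing.
public class LocalTx {
    private final Map<String, Integer> balances = new HashMap<>();

    public void open(String account, int balance) { balances.put(account, balance); }
    public int balance(String account) { return balances.get(account); }

    // All-or-nothing: validate everything before mutating any state,
    // so a failed transfer leaves both accounts untouched (the "rollback").
    public boolean transfer(String from, String to, int amount) {
        Integer src = balances.get(from), dst = balances.get(to);
        if (src == null || dst == null || amount < 0 || src < amount) return false;
        balances.put(from, src - amount); // debit
        balances.put(to, dst + amount);   // credit
        return true;                      // "commit"
    }
}
```

A real database enforces this with write-ahead logging and locking; the point here is only the contract the application can rely on.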

What Is a Distributed Transaction

When a single logical operation spans multiple systems (or multiple databases) that must coordinate over a network, it is a distributed transaction.

Another form occurs when an application uses several data sources that each connect to different databases; after database sharding this situation is common.

Application Scenarios for Distributed Transactions

CAP Theory

The CAP theorem states that a distributed system can simultaneously satisfy at most two of three guarantees: Consistency, Availability, and Partition Tolerance.

Consistency : all nodes see the same data at the same time.

Availability : the system continues to serve requests even if some nodes fail.

Partition Tolerance : the system tolerates network partitions that cause nodes to be unable to communicate.

Can a System Achieve All Three?

In practice, network partitions cannot be ruled out, so a distributed system cannot guarantee both strong consistency and availability at all times; when a partition occurs, designers must choose which of the two to sacrifice.

CAP Combination Modes

1. CA : sacrifice partition tolerance, emphasize consistency and availability (typical of relational databases).

2. AP : sacrifice strong consistency, favor availability and partition tolerance, achieving eventual consistency (common in many NoSQL systems).

3. CP : sacrifice availability, prioritize consistency and partition tolerance (used in systems like cross‑bank transfers).

Most modern distributed systems adopt the AP model, accepting eventual consistency for higher availability.

Distributed Transaction Solutions (Three Examples)

Two‑Phase Commit (2PC)

2PC coordinates multiple nodes to ensure atomicity across a distributed transaction.

The protocol consists of a prepare phase and a commit phase .

Prepare Phase

The transaction manager sends a Prepare request to each participant. Participants write redo/undo logs locally and respond with “yes” (ready) or “no” (abort).

Commit Phase

If all participants reply “yes”, the manager sends a Commit message; otherwise it sends Rollback. Participants then finalize or revert their work and release locks.
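The two phases can be sketched as a small coordinator loop. Everything below is illustrative (the `Participant` interface and its method names are invented for this sketch, not a real transaction-manager API): phase one collects votes, phase two commits only on a unanimous "yes" and rolls everyone back otherwise.

```java
import java.util.List;

// Sketch of a two-phase commit coordinator over in-memory participants.
public class TwoPhaseCommit {
    public interface Participant {
        boolean prepare();   // write redo/undo logs, vote yes (true) or no (false)
        void commit();
        void rollback();
    }

    // Phase 1: ask every participant to prepare. Phase 2: commit only if
    // all voted yes; otherwise roll everyone back and release locks.
    public static boolean run(List<Participant> participants) {
        boolean allYes = true;
        for (Participant p : participants) {
            if (!p.prepare()) { allYes = false; break; }
        }
        for (Participant p : participants) {
            if (allYes) p.commit(); else p.rollback();
        }
        return allYes;
    }

    // Simple stub participant for demonstration.
    public static class Stub implements Participant {
        final boolean vote;
        public String state = "init";
        public Stub(boolean vote) { this.vote = vote; }
        public boolean prepare() { return vote; }
        public void commit() { state = "committed"; }
        public void rollback() { state = "rolledback"; }
    }
}
```

Note how the sketch also exposes the protocol's weaknesses: participants hold their locks between `prepare` and the second phase, and if the coordinator dies in between, everyone blocks.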

Drawbacks of 2PC include synchronous blocking, a single point of failure at the coordinator, and potential data inconsistency if failures occur after the commit message is sent.

Transaction Compensation (TCC)

TCC extends 2PC with three explicit steps: Try (reserve resources), Confirm (commit), and Cancel (rollback). It is illustrated with an order‑and‑inventory example.

Advantages: strong consistency and flexible business‑level control. Disadvantages: higher development effort and the need for idempotent Try/Confirm/Cancel interfaces.
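The three TCC steps map naturally onto the order-and-inventory example. The sketch below is illustrative only (class and method names are invented, not any TCC framework's API): Try freezes stock without committing the sale, Confirm makes the deduction permanent, and Cancel compensates by returning the frozen quantity.

```java
// Sketch of TCC for the order-and-inventory example.
public class TccInventory {
    private int available;
    private int frozen;

    public TccInventory(int stock) { this.available = stock; }
    public int available() { return available; }
    public int frozen() { return frozen; }

    // Try: reserve resources without committing the business effect.
    public boolean tryReserve(int qty) {
        if (qty <= 0 || available < qty) return false;
        available -= qty;
        frozen += qty;
        return true;
    }

    // Confirm: make the reservation permanent (the sale goes through).
    public void confirm(int qty) { frozen -= qty; }

    // Cancel: compensate by releasing the frozen stock.
    public void cancel(int qty) { frozen -= qty; available += qty; }
}
```

A production implementation must additionally make all three methods idempotent and handle the case where Cancel arrives before Try, which is where most of TCC's development cost lies.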

Idempotency

An operation is idempotent if repeated executions produce the same result. Common implementations include pre‑check flags, request caching, or status fields in database rows.
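The status-field approach mentioned above can be sketched as follows; the names are invented for illustration. The consumer records each message id before acting, so a redelivered message is detected and becomes a no-op.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of idempotent message handling via a status record keyed by message id:
// a duplicate delivery is detected and skipped, so stock is decreased only once.
public class IdempotentConsumer {
    private final Map<String, String> processed = new HashMap<>(); // msgId -> status
    private int stock;

    public IdempotentConsumer(int stock) { this.stock = stock; }
    public int stock() { return stock; }

    // Returns true only the first time a given message id is handled.
    public boolean decreaseStock(String msgId, int qty) {
        if (processed.containsKey(msgId)) return false; // already handled: no-op
        stock -= qty;
        processed.put(msgId, "DONE");
        return true;
    }
}
```

In a real system the status record lives in the same database as the stock row and is written in the same local transaction, so the check and the update cannot diverge.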

Message‑Queue Based Eventual Consistency

This approach breaks a distributed transaction into multiple local transactions coordinated asynchronously via a message queue.

Steps:

Order and inventory services reserve resources.

The order service records the order and a “decrease‑stock” message within a local transaction.

A scheduled task reads the message table and sends the message to the MQ.

The inventory service consumes the message, reduces stock, and records the message status to ensure idempotency.

The inventory service replies via MQ when the stock reduction succeeds.

The order service removes the pending message after receiving the confirmation.
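The steps above can be condensed into one runnable sketch. Everything here is illustrative and in-memory: a queue stands in for the MQ, a method call stands in for the scheduled task, and all names are invented. The key properties survive the simplification: the order and its outgoing message are recorded together, the relay may safely re-send, and the consumer deducts stock at most once per message.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// End-to-end sketch of the local-message-table pattern: the order service writes
// the order and a "decrease-stock" message in one local step, a relay drains the
// message table into a queue (standing in for the MQ), and the inventory service
// consumes it idempotently before the pending message is deleted.
public class LocalMessageTable {
    record Msg(String id, int qty) {}

    private final Map<String, Msg> messageTable = new HashMap<>(); // pending messages
    private final Queue<Msg> mq = new ArrayDeque<>();              // stands in for the MQ
    private final Map<String, Boolean> consumed = new HashMap<>();
    private int stock;

    public LocalMessageTable(int stock) { this.stock = stock; }
    public int stock() { return stock; }
    public int pending() { return messageTable.size(); }

    // Step 2: record the order and the message in one local transaction.
    public void placeOrder(String msgId, int qty) {
        messageTable.put(msgId, new Msg(msgId, qty));
    }

    // Step 3: the scheduled task forwards pending messages to the MQ.
    // It may run repeatedly, so duplicates are possible downstream.
    public void relay() {
        mq.addAll(messageTable.values());
    }

    // Steps 4-6: consume, decrease stock idempotently, then acknowledge so
    // the order service can delete the pending message.
    public void consumeAll() {
        Msg m;
        while ((m = mq.poll()) != null) {
            if (consumed.putIfAbsent(m.id(), true) == null) stock -= m.qty();
            messageTable.remove(m.id()); // confirmation received
        }
    }
}
```

The design choice to tolerate duplicate sends rather than prevent them is what keeps the relay simple: at-least-once delivery plus an idempotent consumer is far cheaper than exactly-once delivery.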

Pros: higher performance, no need for Try/Confirm/Cancel interfaces, lower development cost. Cons: increased read/write load on the database and less suitable for extremely high‑concurrency scenarios.

In summary, the article introduces the concepts and solutions for distributed transactions, laying the groundwork for a future deep dive into the message‑queue based eventual consistency pattern using RabbitMQ, Spring Task, and Spring Cloud.

Tags: distributed systems, CAP theorem, message queue, transactions, 2PC, TCC
Written by

Architect's Must-Have

Professional architects sharing high‑quality architecture insights, covering high‑availability, high‑performance, and high‑stability design, big data, machine learning, Java, systems, distributed architecture, and AI, along with internet‑scale architectural evolution and large‑scale practice. Open to exchange and learning with like‑minded architects.
