How to Resolve Account Transaction Conflicts with Split Writes and Lazy Integration

This article explains the "account conflict" problem caused by massive concurrent transactions, proposes splitting read/write operations and using lazy or timed data integration to improve performance and consistency, and discusses how to ensure atomicity and isolation with transactions or distributed locks.

NiuNiu MaTe

Problem: Account Conflict

When a huge number of transactions (e.g., 1 million orders) target the same account key simultaneously, the system must serialize updates to keep the balance correct, which dramatically slows down processing.

Idea: Split the Amount Data

Separate read and write paths: a read request simply sums the account balance and the transaction log, while a write request appends a new transaction record to the log. Because each new record uses a unique key, write conflicts disappear.
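The split can be sketched in a few lines. This is a minimal in-memory model (the dict, list, and function names are hypothetical; a real system would keep both structures in a database):

```python
import uuid

balances = {"acct_1": 0}   # last merged balance per account
tx_log = []                # append-only transaction log

def write_transaction(account: str, amount: int) -> str:
    """Write path: append a record under a fresh unique key, so no two
    writers ever touch the same key and no write conflict can occur."""
    tx_id = str(uuid.uuid4())
    tx_log.append({"id": tx_id, "account": account, "amount": amount})
    return tx_id

def read_balance(account: str) -> int:
    """Read path: merged balance plus the sum of unmerged log entries."""
    pending = sum(t["amount"] for t in tx_log if t["account"] == account)
    return balances[account] + pending
```

Writers never contend because every append creates a new key; only readers pay the cost of summing the log.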

Consequences of a Growing Log

Two drawbacks appear as the log grows:

Storage cost: 100 bytes per record × 100 million records ≈ 10 GB.

Performance: each balance query must scan the entire log, so 100 million records mean 100 million additions per query.

Solution: Data Integration

Periodically merge the transaction log into the account balance, similar to Redis’s lazy and timed key expiration.

Lazy Integration

When the number of unmerged log entries for an account exceeds a configurable threshold, the request that crosses it triggers an immediate merge. The threshold should be large enough not to slow normal requests, yet small enough to keep the log short and reads cheap.
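A sketch of the lazy trigger, reusing the same hypothetical in-memory structures (the threshold value is illustrative):

```python
MERGE_THRESHOLD = 1000  # illustrative; tune so normal traffic rarely triggers it

balances = {"acct_1": 0}
tx_log = []

def merge(account: str) -> None:
    """Fold all pending log entries for the account into its balance."""
    balances[account] += sum(
        t["amount"] for t in tx_log if t["account"] == account)
    tx_log[:] = [t for t in tx_log if t["account"] != account]

def write_transaction(account: str, amount: int) -> None:
    tx_log.append({"account": account, "amount": amount})
    # Lazy integration: the write that crosses the threshold pays for the merge,
    # just as Redis lets the access that finds an expired key delete it.
    pending = sum(1 for t in tx_log if t["account"] == account)
    if pending >= MERGE_THRESHOLD:
        merge(account)
```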

Timed Integration

A dedicated process runs at a fixed interval (e.g., every 5 seconds) and merges a batch of log entries (e.g., 1,000 records) on each run. That throughput is 86,400 s ÷ 5 s × 1,000 ≈ 17 million records per day, comfortably enough to keep up with 1 million daily transactions.
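A sketch of the timed integrator on the same hypothetical structures, run from a background thread (interval and batch size mirror the figures above):

```python
import threading

INTERVAL_SECONDS = 5
BATCH_SIZE = 1000

balances = {"acct_1": 0}
tx_log = []

def merge_batch() -> None:
    """Fold up to BATCH_SIZE of the oldest log entries into the balances."""
    batch = tx_log[:BATCH_SIZE]
    tx_log[:] = tx_log[BATCH_SIZE:]
    for t in batch:
        balances[t["account"]] += t["amount"]

def run_integrator(stop: threading.Event) -> None:
    # wait() returns False on timeout, True once stop is set
    while not stop.wait(INTERVAL_SECONDS):
        merge_batch()

# e.g. threading.Thread(target=run_integrator, args=(stop_event,),
#                       daemon=True).start()
```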

Ensuring Atomicity & Isolation

The merge consists of two steps: deleting processed log entries and adding the summed amount to the account. Both steps must succeed together or fail together, without exposing intermediate states to users.

Use a transactional storage (e.g., MySQL) to wrap the two operations in a single transaction, guaranteeing atomicity.

To prevent concurrent merges from interfering, acquire a lock on the account before starting the merge. This can be done with the database’s native row‑level lock or with a distributed lock implemented in the same storage system.
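The two-step merge can be sketched as follows. The article suggests MySQL; this example uses Python's `sqlite3` only so that it runs standalone, and the table and column names are hypothetical. In MySQL you would additionally take the row lock with `SELECT ... FOR UPDATE` before summing; here the connection's context manager provides the commit-or-rollback guarantee for the transaction itself:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER NOT NULL);
    CREATE TABLE tx_log (tx_id INTEGER PRIMARY KEY, account TEXT, amount INTEGER);
    INSERT INTO accounts VALUES ('acct_1', 0);
""")

def merge(account: str) -> None:
    # `with conn` wraps one transaction: the DELETE and the UPDATE either
    # both commit or both roll back, so readers never see a half-merged state.
    with conn:
        total = conn.execute(
            "SELECT COALESCE(SUM(amount), 0) FROM tx_log WHERE account = ?",
            (account,),
        ).fetchone()[0]
        conn.execute("DELETE FROM tx_log WHERE account = ?", (account,))
        conn.execute(
            "UPDATE accounts SET balance = balance + ? WHERE id = ?",
            (total, account),
        )
```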

Conclusion

By first splitting the data to avoid write conflicts and then lazily or periodically merging it back, you can achieve both high concurrency and data correctness. Applying transactions or distributed locks ensures the merge is atomic and isolated, making the system reliable and performant.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Data Consistency · Distributed Lock
Written by

NiuNiu MaTe

Joined Tencent (nicknamed "Goose Factory") through campus recruitment at a second‑tier university. Career path: Tencent → foreign firm → ByteDance → Tencent. Started as an interviewer at the foreign firm and hopes to help others.
