How Alibaba Scaled Double 11: Backend Strategies for Billions of Transactions

Alibaba's Double 11 festival broke sales records with 120.7 billion RMB in sales, and this article details the backend engineering challenges and solutions, such as database sharding, SQL optimization, multi-level caching, and modular architecture, that enabled the platform to handle hundreds of thousands of orders per second at peak while preserving data consistency and performance.

Alibaba Cloud Developer

Overview of Double 11 Scale

Alibaba's eighth Double 11 shopping festival broke sales records with 120.7 billion RMB in sales, processing hundreds of thousands of orders per second across diverse interactive experiences.

Transaction Core Challenges

At peak, the system handled 175,000 orders per second, requiring precise budget control, order-level discount calculations, and reliable red-envelope processing.

Red‑Envelope System Solutions

Issuance: Budget stored in a single DB row caused a hotspot bottleneck; the team split the budget into multiple sub-buckets, routed requests by user ID, and added SQL optimizations (combined statements, conditional updates, COMMIT_ON_SUCCESS, TARGET_AFFECT_ROW), achieving roughly 30,000 QPS per shard and up to 300,000 QPS overall.
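As a rough illustration of the bucketing idea, here is a minimal Python sketch (all names hypothetical, and an in-memory list standing in for the sub-bucket DB rows): the budget is split across sub-buckets, each user is routed to a stable bucket by hashing their ID, and a deduction succeeds only while the bucket holds enough balance, mirroring the conditional UPDATE described above.

```python
import hashlib

NUM_BUCKETS = 8  # illustrative; real bucket counts are tuned per campaign

class BudgetBuckets:
    def __init__(self, total_budget: int):
        # Spread the single hotspot budget row across sub-bucket "rows".
        base, rem = divmod(total_budget, NUM_BUCKETS)
        self.buckets = [base + (1 if i < rem else 0)
                        for i in range(NUM_BUCKETS)]

    def bucket_for(self, user_id: str) -> int:
        # Stable routing: the same user always lands on the same bucket.
        digest = hashlib.md5(user_id.encode()).hexdigest()
        return int(digest, 16) % NUM_BUCKETS

    def deduct(self, user_id: str, amount: int) -> bool:
        # Mirrors a conditional UPDATE: succeed only while the bucket
        # still holds enough budget, so the total never goes negative.
        i = self.bucket_for(user_id)
        if self.buckets[i] >= amount:
            self.buckets[i] -= amount
            return True
        return False

buckets = BudgetBuckets(800)
granted = buckets.deduct("user-42", 100)
remaining = sum(buckets.buckets)
```

Because each bucket is an independent row, contention drops by roughly the bucket count, which is how per-shard throughput multiplies into the overall QPS figure.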

Display: Cached user-level red-envelope summaries; cache invalidation was driven by comparing the last update timestamp with the cache generation time.
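The timestamp comparison above can be sketched as follows (a hypothetical in-process cache; the production system presumably uses a distributed store): an entry is served only if it was generated after the user's last red-envelope update, otherwise the summary is recomputed.

```python
class SummaryCache:
    def __init__(self):
        self.cache = {}        # user_id -> (generated_at, summary)
        self.last_update = {}  # user_id -> timestamp of last envelope change

    def record_update(self, user_id, ts):
        self.last_update[user_id] = ts

    def get(self, user_id, compute, now):
        entry = self.cache.get(user_id)
        updated = self.last_update.get(user_id, 0.0)
        # Stale if the entry was generated before the latest update.
        if entry is None or entry[0] < updated:
            entry = (now, compute(user_id))
            self.cache[user_id] = entry
        return entry[1]

calls = []
def compute(uid):
    calls.append(uid)
    return f"summary@{len(calls)}"

c = SummaryCache()
c.record_update("u1", ts=1.0)
first = c.get("u1", compute, now=2.0)   # miss: generated at t=2
second = c.get("u1", compute, now=3.0)  # hit: no update since t=1
c.record_update("u1", ts=4.0)
third = c.get("u1", compute, now=5.0)   # stale entry is regenerated
```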

Usage: Supported up to 80,000 QPS; employed batch INSERT and multi-row UPDATE with row-level locking and TARGET_AFFECT_ROW to ensure atomicity.
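A sketch of the all-or-nothing semantics, using SQLite purely as a stand-in for the production database: records are batch-inserted, a multi-row UPDATE runs in one transaction, and a rowcount check plays the role the article assigns to TARGET_AFFECT_ROW, committing only if every targeted row was actually changed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE envelope (id INTEGER PRIMARY KEY, balance INTEGER)")
# Batch INSERT: many rows in one statement round-trip.
conn.executemany("INSERT INTO envelope VALUES (?, ?)", [(1, 50), (2, 80)])

# Multi-row UPDATE in one transaction; commit only if both rows changed.
with conn:
    cur = conn.execute(
        "UPDATE envelope SET balance = balance - 10 "
        "WHERE id IN (1, 2) AND balance >= 10"
    )
    if cur.rowcount != 2:
        raise RuntimeError("affected rows != expected")

# A deduction that cannot cover every row is rolled back as a unit.
rolled_back = False
try:
    with conn:
        cur = conn.execute(
            "UPDATE envelope SET balance = balance - 100 "
            "WHERE id IN (1, 2) AND balance >= 100"
        )
        if cur.rowcount != 2:
            raise RuntimeError("affected rows != expected")
except RuntimeError:
    rolled_back = True

balances = [r[0] for r in conn.execute(
    "SELECT balance FROM envelope ORDER BY id")]
```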

Reliability: Implemented a lightweight cross-database consistency mechanism (hjbus) that writes a message record within the same transaction as the business update, enabling sub-second propagation and consumption.
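The hjbus internals are not public, but the description matches the general transactional-outbox pattern, sketched here with SQLite as a stand-in: the message record is inserted in the same local transaction as the business update, so either both persist or neither, and a consumer later propagates and marks the message.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE envelope (user_id TEXT PRIMARY KEY, used INTEGER)")
conn.execute(
    "CREATE TABLE outbox (id INTEGER PRIMARY KEY AUTOINCREMENT,"
    " payload TEXT, consumed INTEGER DEFAULT 0)")
conn.execute("INSERT INTO envelope VALUES ('u1', 0)")

# Business update and message record in ONE local transaction:
# either both persist or neither does.
with conn:
    conn.execute("UPDATE envelope SET used = used + 5 WHERE user_id = 'u1'")
    conn.execute("INSERT INTO outbox (payload) VALUES ('u1 used 5')")

# A separate consumer polls the outbox, propagates each message to the
# other database, then marks it consumed.
row = conn.execute(
    "SELECT id, payload FROM outbox WHERE consumed = 0").fetchone()
conn.execute("UPDATE outbox SET consumed = 1 WHERE id = ?", (row[0],))

used = conn.execute(
    "SELECT used FROM envelope WHERE user_id = 'u1'").fetchone()[0]
pending = conn.execute(
    "SELECT COUNT(*) FROM outbox WHERE consumed = 0").fetchone()[0]
```

Because the outbox write shares the business transaction, a crash before commit leaves no orphaned message, and a crash after commit leaves the message for the consumer to retry, which is what makes cross-database propagation reliable.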

Performance‑Oriented Architecture Enhancements

Shipping‑fee calculations were moved from remote calls to local cache lookups, reducing network latency and dependency on downstream services.
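A minimal sketch of that trade-off, with hypothetical names: instead of one remote call per order, the fee templates are pulled in bulk and refreshed on a TTL, so the per-order path is a local dictionary lookup.

```python
class ShippingFeeCache:
    def __init__(self, fetch_all, ttl=60.0):
        self.fetch_all = fetch_all  # bulk pull of all fee templates
        self.ttl = ttl
        self.data = {}
        self.loaded_at = None

    def fee(self, seller_id, now):
        # Refresh the whole template set at most once per TTL window;
        # every order in between is served from local memory.
        if self.loaded_at is None or now - self.loaded_at > self.ttl:
            self.data = self.fetch_all()
            self.loaded_at = now
        return self.data.get(seller_id, 0)

remote_calls = []
def fetch_all():
    remote_calls.append(1)  # counts round-trips to the downstream service
    return {"seller-1": 8, "seller-2": 12}

cache = ShippingFeeCache(fetch_all, ttl=60.0)
fees = [cache.fee("seller-1", now=0.0), cache.fee("seller-2", now=1.0)]
```

The design choice is staleness for latency: fees may lag by up to one TTL, but the order path no longer blocks on a downstream service.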

The TMF2 framework separated platform capabilities from business logic, exposing reusable ability models and configuration models, which accelerated multi‑team development.
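TMF2's internals are not detailed in the summary, but the separation it describes can be sketched as an extension-point pattern (all names here are illustrative): the platform defines a stable ability interface, business lines register implementations, and a configuration model selects which one runs.

```python
# Platform side: a stable extension point (the "ability model").
class DiscountAbility:
    def apply(self, price: float) -> float:
        return price  # platform default: no discount

# Business side: each business line plugs in its own logic without
# touching platform code.
class FlashSaleDiscount(DiscountAbility):
    def apply(self, price: float) -> float:
        return round(price * 0.8, 2)

REGISTRY = {"default": DiscountAbility(), "flash_sale": FlashSaleDiscount()}

def checkout(price: float, biz_config: dict) -> float:
    # The configuration model decides which implementation runs.
    ability = REGISTRY[biz_config.get("discount", "default")]
    return ability.apply(price)
```

Teams ship by adding registry entries and configuration, not by editing shared checkout code, which is the property that accelerates multi-team development.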

Marketing Platform Improvements

Unified UMP and PromotionCenter handled discount calculations and coupon distribution; a generic data‑reconciliation platform (DataCheck) built on JStorm ensured cross‑system consistency.
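At its core, reconciliation of the DataCheck kind reduces to comparing the same logical records across two systems and flagging divergence; a toy sketch (the real platform streams comparisons over JStorm rather than holding both sides in memory):

```python
def reconcile(system_a: dict, system_b: dict) -> list:
    """Return keys whose values differ or that exist on only one side."""
    mismatches = [
        key
        for key in set(system_a) | set(system_b)
        if system_a.get(key) != system_b.get(key)
    ]
    return sorted(mismatches)

diffs = reconcile(
    {"order-1": 100, "order-2": 250},
    {"order-1": 100, "order-2": 260, "order-3": 30},
)
```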

A three-tier cache ("狼烟") combined pre-heat, hot, and full-layer caches, providing unified APIs, fine-grained flow control, and solid hit rates during the event (≈20% overall, >40% for hot keys).
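The layered lookup can be sketched as follows (layer contents and names are illustrative): a read tries the pre-heated layer, then the hot-key layer, then the full layer, and only a total miss falls through to the origin, which then backfills the full layer.

```python
class TieredCache:
    def __init__(self, preheat, hot, full, source):
        self.layers = [preheat, hot, full]  # checked fastest-first
        self.source = source
        self.hits = 0
        self.misses = 0

    def get(self, key):
        for layer in self.layers:
            if key in layer:
                self.hits += 1
                return layer[key]
        # Total miss: fall through to the origin and backfill the
        # full layer so the next read is served from cache.
        self.misses += 1
        value = self.source(key)
        self.layers[-1][key] = value
        return value

cache = TieredCache(
    preheat={"banner": "v1"},   # warmed before the event starts
    hot={"top-coupon": "v2"},   # promoted hot keys
    full={},                    # general-purpose layer
    source=lambda key: f"db:{key}",
)
values = [cache.get("banner"), cache.get("top-coupon"),
          cache.get("sku-9"), cache.get("sku-9")]
```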

Conclusion

The combined optimizations in database sharding, SQL tuning, caching strategies, and modular architecture enabled Alibaba to sustain massive traffic, maintain data consistency, and deliver a seamless shopping experience during Double 11.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

backend · e-commerce · database · caching · high-concurrency · system-architecture
Written by

Alibaba Cloud Developer

Alibaba's official tech channel, featuring all of its technology innovations.
