Mask Reservation Mini‑Program: From Perfect Experience to Lossy Service – Architecture and Design
During the COVID‑19 pandemic, Tencent and the Guangzhou municipal government built the “Suikang” mask‑reservation mini‑program in two days. It handled 1.7 billion visits by shifting from real‑time inventory checks to a four‑layer “lossy” buffering architecture—CDN caching of static data, batched request release, cache‑based status checks, and asynchronous queue processing backed by Kafka and Redis—trading consistency for high availability and rapid response.
During the COVID‑19 pandemic, Tencent collaborated with the Guangzhou municipal government to launch the "Suikang" mini‑program mask reservation feature within two days. The first day attracted 1.7 billion visits and over 14 million mask reservation attempts.
The project began on January 30 with three basic functions (health reporting, epidemic clue reporting, medical supplies reporting) and quickly added mask reservation, online consultation, and health code features. The high public interest and strict stability requirements posed significant challenges for the system.
Business flow: each day before 19:00 the Guangzhou Pharma partner provides the list of pharmacies and mask inventory, which is imported into the system. Reservations open at 20:00, and results are sent back to the partner before 24:00 for distribution to stores. Users receive a reservation code to pick up masks the next day.
Initially the system offered a “perfect” experience with real‑time inventory checks, but the unexpected traffic surge forced a redesign toward a “lossy” service.
Four‑layer buffering strategy:
Layer 1 – Move static data (pharmacy list, mask inventory) to CDN and stop real‑time stock verification.
Layer 2 – Implement batch random release on the frontend with configurable thresholds, similar to staggered entry in a subway during peak hours.
Layer 3 – Use a cache to determine whether the reservation period has ended (time limit or stock exhausted) and mark the status accordingly.
Layer 4 – Queue reservation requests and process them asynchronously in FIFO order, returning the final result later.
Backend design: Kafka is used for peak‑shaving queues, sequential DB inserts maximize write speed, and Redis caches frequent queries. Two services are defined:
Preorder Services – handle submission, registration, and query interfaces, avoiding direct DB hits under high concurrency.
Spoorder Services – consume Kafka messages, write results to the database, and update the cache, providing an asynchronous processing path.
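A minimal sketch of the two‑service split described above, with a deque standing in for the Kafka topic and plain Python containers for the database and Redis cache (all names are hypothetical):

```python
from collections import deque

kafka_topic = deque()   # stand-in for the Kafka peak-shaving queue
database = []           # stand-in for sequential-insert storage
result_cache = {}       # stand-in for Redis

def preorder_submit(user_id: str, pharmacy_id: str) -> None:
    """Preorder service: accept the request and enqueue it; no direct DB hit."""
    kafka_topic.append({"user": user_id, "pharmacy": pharmacy_id})

def preorder_query(user_id: str) -> str:
    """Preorder query path: served from cache only, never from the database."""
    return result_cache.get(user_id, "processing")

def spoorder_consume() -> None:
    """Spoorder-style consumer: drain messages in FIFO order, insert
    sequentially for fast writes, then update the cache for queries."""
    while kafka_topic:
        msg = kafka_topic.popleft()
        database.append(msg)                    # sequential insert maximizes write speed
        result_cache[msg["user"]] = "reserved"  # queries now hit the cache
```

The key property is that submission and querying never touch the database under load; only the single sequential consumer writes to it.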
Frontend design: CDN configuration with a 10‑minute cache, user self‑report guidance to offload traffic, WeChat Autofill for pre‑filled user data, batch release parameters that can be tuned dynamically, and all configurable items stored in configuration files for rapid operational changes.
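The tunable items mentioned above might be kept in an override file merged onto safe defaults, so operators can retune without a redeploy. A hypothetical sketch (keys and values are illustrative, not the project's real config):

```python
import json

# Hypothetical operational defaults; every value can be overridden at runtime.
DEFAULT_CONFIG = {
    "cdn_cache_ttl_seconds": 600,   # 10-minute CDN cache for static data
    "batch_release_ratio": 0.2,     # fraction of requests admitted per batch
    "reservation_open_hour": 20,    # reservations open at 20:00
}

def load_config(path: str) -> dict:
    """Merge file overrides onto defaults so missing keys keep safe values."""
    config = dict(DEFAULT_CONFIG)
    try:
        with open(path) as fh:
            config.update(json.load(fh))
    except FileNotFoundError:
        pass  # no override file yet; run with defaults
    return config
```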
First‑day performance: the system sustained 1.7 billion visits and more than 100,000 QPS.
Post‑launch issues included misleading UI text that caused users to think reservations were successful before processing completed. The team responded with SMS notifications, later replaced by WeChat subscription messages, and added online payment and delivery features.
The discussion then broadened to the concept of “lossy service” as a trade‑off between consistency and availability (CAP theorem). It contrasted ACID (strong consistency) with BASE (eventual consistency) and illustrated the principles with a voting system and QQ album case studies.
Key practices for building lossy services:
Give up absolute consistency to achieve high availability and fast response.
Design for retry mechanisms when failures occur.
Implement scalable, modular components with graceful degradation (elastic scaling, tiered service levels).
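The retry practice above can be sketched as a small exponential‑backoff helper; this is an illustrative pattern, not code from the article:

```python
import time

def with_retry(operation, max_attempts: int = 3, base_delay: float = 0.01):
    """Retry a failing operation with exponential backoff between attempts."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt))  # back off before retrying
```

In a lossy service, a failed attempt degrades to "please retry" rather than blocking on strong consistency, which keeps the system responsive under overload.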
The Q&A section covered technical details such as CDN static files, batch release implementation, security testing, dead‑letter queue handling, server count (16 servers), core team size (7 members), and the balance between cost and performance.
Tencent Cloud Developer