
From Chengdu COVID Testing System Crash to High‑Concurrency Architecture: Lessons for Backend Engineers

The article examines the Chengdu COVID‑19 testing system failure, analyzes root causes such as oversized MySQL tables and insufficient load handling, and then walks through a step‑by‑step high‑concurrency roadmap (single machine, service‑database separation, caching, load balancing, read/write splitting, sharding, and hardware and DNS load balancing) to help backend developers design scalable systems.

IT Services Circle

Recently, Chengdu made headlines again due to a COVID‑19 testing system collapse that left thousands of citizens waiting for hours, exposing severe performance bottlenecks in the underlying software.

Investigation revealed that the system relied on a single MySQL instance without sharding, producing massive tables that grew by tens of millions of rows per day and crippling query latency, while the web tier could not absorb the surge of concurrent requests.

Experts identified two main reasons for the failure: (1) database performance degradation caused by huge tables, and (2) inability of the server farm to sustain high concurrent traffic, even with Nginx load balancing.

High‑Concurrency Roadmap

1. Single‑Machine Era: Small traffic handled by one server running both the web service and the MySQL database.

2. Service‑Database Separation: Deploy separate machines for the web layer and the database to allocate dedicated CPU and memory.

3. Caching Layer: Introduce a cache (e.g., Redis) to reduce repetitive database reads and lower response time.
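The caching step above typically follows the cache-aside pattern. Here is a minimal in-process sketch, where a plain dict with TTL entries stands in for Redis and `fetch_user_from_db` is a hypothetical placeholder for a real MySQL query:

```python
import time

cache = {}            # key -> (value, expiry_timestamp); stands in for Redis
TTL_SECONDS = 60      # a short TTL bounds how stale a cached read can be

def fetch_user_from_db(user_id):
    # Placeholder for a real query, e.g. SELECT * FROM users WHERE id = %s
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    entry = cache.get(user_id)
    if entry and entry[1] > time.time():
        return entry[0]                    # cache hit: no database round trip
    user = fetch_user_from_db(user_id)     # cache miss: read from MySQL
    cache[user_id] = (user, time.time() + TTL_SECONDS)
    return user
```

With hot keys served from memory, repeated reads never touch the database until the TTL expires, which is exactly the pressure relief the roadmap is after.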

4. Software Load Balancing: Replicate web services and use Nginx to distribute requests across multiple instances.
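Nginx's default policy for an `upstream` group is round-robin. The sketch below models that policy in Python so the behavior is easy to see; the backend addresses are made up for illustration:

```python
from itertools import cycle

# Three identical web-service replicas behind the balancer (illustrative).
backends = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]
rotation = cycle(backends)

def pick_backend():
    # Each incoming request goes to the next instance in turn,
    # spreading load evenly across the replicas.
    return next(rotation)
```

In production this decision lives inside Nginx itself, but the effect is the same: no single web instance absorbs the whole surge.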

5. Read/Write Splitting: Add read replicas, directing reads to the replicas and writes to the primary, improving I/O throughput.
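Read/write splitting is usually just a routing decision per SQL statement. A minimal, hypothetical router sketch (connection handling omitted; endpoint names are invented) might look like this:

```python
import random

PRIMARY = "primary:3306"
REPLICAS = ["replica-1:3306", "replica-2:3306"]

def route(sql: str) -> str:
    # Reads can be spread across any replica; everything else
    # (INSERT/UPDATE/DELETE/DDL) must go to the primary.
    if sql.lstrip().upper().startswith("SELECT"):
        return random.choice(REPLICAS)
    return PRIMARY
```

Because most workloads are read-heavy, adding replicas lets read throughput scale horizontally while the primary handles only the write stream.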

6. Database Sharding: Partition large tables across multiple databases or tables to keep query latency low as data grows to billions of rows.
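A common sharding scheme hashes the shard key to pick one of N physical tables, so no single table ever holds the full dataset. A sketch under that assumption (table name and shard count are illustrative):

```python
import zlib

NUM_SHARDS = 16

def shard_for(record_id: str) -> str:
    # A stable checksum of the shard key maps each row deterministically
    # to one of NUM_SHARDS physical tables.
    shard = zlib.crc32(record_id.encode()) % NUM_SHARDS
    return f"test_records_{shard:02d}"
```

The key property is determinism: the same ID always lands on the same shard, so both writes and later lookups agree on where the row lives.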

7. Hardware Load Balancing: Deploy dedicated hardware (e.g., F5) to evenly spread traffic among several Nginx clusters.

8. DNS Load Balancing: Use geographic DNS resolution to route users to the nearest data center, further distributing load.

These steps illustrate the evolution from a simple single‑server setup to a complex, multi‑layered architecture capable of handling massive concurrent workloads, and they provide practical guidance for backend engineers facing similar scalability challenges.

Tags: backend architecture, load balancing, high concurrency, MySQL, database sharding, nginx, system scalability
Written by

IT Services Circle

Delivering cutting-edge internet insights and practical learning resources. We're a passionate and principled IT media platform.