
Design and Implementation of a High‑QPS Spring Festival Red Envelope System Simulating 10 Billion Requests

This article describes the design, hardware and software setup, implementation, testing phases, and performance analysis of a Go‑based backend system that simulates a Spring Festival red‑envelope service capable of handling up to 10 billion requests with peak loads of 60 k QPS per server.

IT Architects Alliance

1. Introduction

The author was inspired by a 2015 article about building a reliable Spring Festival red‑envelope system and decided to recreate a similar high‑QPS backend to validate the concepts and gain practical experience.

2. Background Knowledge

Key terms: QPS (queries per second), "shake red envelope" (client requests a random red envelope), and "send red envelope" (server creates a red envelope for a set of users).

3. Goals

The target system should support roughly 1 million concurrent users, handle at least 30 k QPS per server (ideally up to 60 k QPS), and process about 10 billion shake‑red‑envelope requests within the four‑hour Spring Festival broadcast window.
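A quick back-of-the-envelope check of what these targets imply (the figures below are derived from the stated goals, not taken from the original article):

```go
package main

import "fmt"

func main() {
	const (
		totalRequests = 10_000_000_000 // 10 billion shake requests
		windowSeconds = 4 * 60 * 60    // four-hour broadcast window
		perServerQPS  = 30_000         // minimum per-server target
	)
	aggregateQPS := totalRequests / windowSeconds
	servers := (aggregateQPS + perServerQPS - 1) / perServerQPS // round up
	fmt.Printf("aggregate: %d QPS, servers needed at 30k QPS each: %d\n",
		aggregateQPS, servers)
}
```

So the goals translate to roughly 694 k QPS in aggregate, which a few dozen servers at the 30 k QPS floor could in principle absorb; the larger fleet in the next section leaves ample headroom.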

4. Software and Hardware

Software: Go 1.8r3, shell, and Python. OS: Ubuntu 12.04 on the servers and Debian 5.0 on the clients. Hardware: 600 Dell R2950 servers (8 cores, 16 GB RAM each) and 17 ESXi 5.0 virtual machines (4 cores, 5 GB RAM each) simulating 1 million clients.

5. Technical Analysis and Implementation

5.1 Single‑machine 1 million connections

Using Go’s goroutine model and SET-based connection partitioning, the author achieved the required scalability; the source code is available on GitHub.

5.2 Achieving 30 k QPS

Clients synchronize via NTP and use a simple modulo algorithm to decide when each user should send a request, ensuring an even distribution of load.
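The modulo idea can be sketched as follows: with NTP-synchronized clocks, every client computes the same schedule independently, and each tick fires an equal slice of the user population. The function names here are illustrative, not from the original code.

```go
package main

import "fmt"

// tickBucket assigns each user to one of ticksPerCycle send slots;
// a user fires only on ticks whose slot matches its own.
func tickBucket(userID, ticksPerCycle int) int {
	return userID % ticksPerCycle
}

// usersForTick returns the users scheduled to send on a given tick.
func usersForTick(tick, totalUsers, ticksPerCycle int) []int {
	slot := tick % ticksPerCycle
	var ids []int
	for u := 0; u < totalUsers; u++ {
		if tickBucket(u, ticksPerCycle) == slot {
			ids = append(ids, u)
		}
	}
	return ids
}

func main() {
	// 12 users spread over 4 slots: each tick fires exactly 3 users.
	for tick := 0; tick < 4; tick++ {
		fmt.Println(tick, usersForTick(tick, 12, 4))
	}
}
```

Because the mapping depends only on the user ID and the shared clock, no coordination traffic between client VMs is needed to keep the load flat.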

5.3 Shake‑red‑envelope business

Red envelopes are produced at a fixed rate; clients request them, and the server returns success or failure. Lock contention is reduced by sharding users into buckets; a Disruptor queue is suggested for further optimization.
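The bucket-sharding idea can be sketched as below: each bucket carries its own mutex, so goroutines serving users in different buckets never contend, and contention drops roughly in proportion to the shard count. Names and the shard count are illustrative assumptions, not the original code.

```go
package main

import (
	"fmt"
	"sync"
)

const shards = 16

type bucket struct {
	mu        sync.Mutex
	envelopes int // red envelopes currently available in this bucket
}

type shardedPool struct{ buckets [shards]bucket }

// deposit adds n envelopes to the bucket owning userID.
func (p *shardedPool) deposit(userID, n int) {
	b := &p.buckets[userID%shards]
	b.mu.Lock()
	b.envelopes += n
	b.mu.Unlock()
}

// shake tries to claim one envelope for userID; reports success.
// Only the user's own bucket lock is taken, never a global one.
func (p *shardedPool) shake(userID int) bool {
	b := &p.buckets[userID%shards]
	b.mu.Lock()
	defer b.mu.Unlock()
	if b.envelopes == 0 {
		return false
	}
	b.envelopes--
	return true
}

func main() {
	var p shardedPool
	p.deposit(7, 1)
	fmt.Println(p.shake(7), p.shake(7)) // first claim succeeds, second finds the bucket empty
}
```

A Disruptor-style ring buffer, as the article suggests, would go further by replacing the per-bucket mutex with a lock-free queue, but the sharding alone already removes the single global hot lock.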

5.4 Send‑red‑envelope business

Servers randomly generate envelopes and assign them to users; clients then request to claim them. Payment logic is omitted for simplicity.

5.5 Monitoring

A lightweight monitoring module (derived from another project) aggregates client counters and displays them; screenshots are included in the original article.

6. Code Implementation and Analysis

The architecture splits 1 million connections into multiple independent SETs, each managing a few thousand connections, allowing horizontal scaling by adding more SETs. Goroutine usage is minimized to roughly one per connection plus a few workers per SET.

7. Practice

Three testing phases were performed:

1. Start the servers and the 17 client VMs and establish 1 million connections, verified with ss -ant | grep 1025 | grep EST | awk -F: '{print $8}' | sort | uniq -c.

2. Raise client load to 30 k QPS and observe stable network behavior and red‑envelope distribution.

3. Raise client load to 60 k QPS and note increased jitter and occasional performance degradation.

8. Data Analysis

Client and server QPS graphs show three distinct load regions (baseline, 30 k, 60 k). The 60 k region exhibits more fluctuation due to goroutine scheduling, network latency, and packet loss. Overall, the system meets the design goals.

9. Conclusion

The prototype successfully simulates a 10‑billion‑request red‑envelope scenario with up to 60 k QPS per server, demonstrating that a horizontally scalable Go backend can handle massive concurrent loads, though real‑world production systems would require additional features such as encryption, payment integration, and sophisticated monitoring.

Tags: Backend, Distributed Systems, Simulation, Golang, Performance Testing, High Concurrency
Written by

IT Architects Alliance

Discussion and exchange on system, internet, large‑scale distributed, high‑availability, and high‑performance architectures, as well as big data, machine learning, AI, and architecture adjustments with internet technologies. Includes real‑world large‑scale architecture case studies. Open to architects who have ideas and enjoy sharing.
