
Design and Implementation of a High‑Throughput 10‑Billion Red‑Envelope System Simulation

This article describes how to design, implement, and evaluate a scalable backend that simulates 10 billion WeChat red‑envelope requests, supporting up to 1 million concurrent users and handling 30 k–60 k QPS per server, using Go, Linux tools, and custom monitoring.

IT Xianyu

Inspired by a 2015 article on handling 100 billion red‑envelope requests, the author builds a practical prototype to explore whether a similar system can be reproduced locally, focusing on backend design, performance, and scalability.

Background knowledge: QPS (queries per second) measures request load; a “shake red‑envelope” is a client request that returns a red envelope if one is available, while “send red‑envelope” creates a red envelope for a set of users.

Goals: determine the target load per server, estimate user capacity (≈228 k users per server from 14.3 billion total users across 638 servers), and calculate per‑server QPS (≈2.3 k–6.6 k) and red‑envelope issuance rates (≈83 shake requests per second, scaled to 200 per second for testing).

Hardware & software: server – Dell R2950 (8 cores, 16 GB RAM, Ubuntu 12.04); client – 17 ESXi 5.0 VMs (4 cores, 5 GB RAM each), each establishing 60 k connections; development stack – Go 1.8r3, shell, Python.

Technical analysis & implementation: the prototype uses Go goroutines to manage 1 million connections, grouping them into independent SETs to bound per‑SET goroutine count and lock contention. Red‑envelope generation uses simple queues; high‑performance Disruptor‑style queues are mentioned as an option for future scaling.
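The article does not include the SET code itself; a minimal sketch of the idea could look like the following, where the names (`set`, `setFor`, `register`) and the SET count of 1024 are illustrative assumptions, not the author's implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// conn stands in for a client connection; the real prototype would hold a net.Conn.
type conn struct {
	id int
}

// set groups a slice of connections behind its own lock, so lock contention
// is limited to one SET's connections rather than all one million.
type set struct {
	mu    sync.Mutex
	conns map[int]*conn
}

const numSets = 1024 // hypothetical SET count; the article does not state one

// sets partitions connections by id so each SET is managed independently.
var sets [numSets]*set

func init() {
	for i := range sets {
		sets[i] = &set{conns: make(map[int]*conn)}
	}
}

// setFor maps a connection id to its SET index.
func setFor(id int) int {
	return id % numSets
}

// register adds a connection to its SET; only that SET's lock is taken.
func register(c *conn) {
	s := sets[setFor(c.id)]
	s.mu.Lock()
	s.conns[c.id] = c
	s.mu.Unlock()
}

func main() {
	for i := 0; i < 10000; i++ {
		register(&conn{id: i})
	}
	total := 0
	for _, s := range sets {
		total += len(s.conns)
	}
	fmt.Println("registered:", total)
}
```

With this partitioning, one goroutine (or a small pool) can serve each SET, which is how the prototype keeps the goroutine count and lock scope bounded.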

Code snippet for connection counting:

alias ss2='ss -ant | grep 1025 | grep EST | awk -F: "{print $8}" | sort | uniq -c'

Practice phases:

Phase 1 – launch the server and monitoring, then start 17 client VMs to create 1 million connections; verify the connection count with the ss command.

Phase 2 – increase client QPS to 30 k, run a red‑envelope generator at 200 per second, observe stable QPS and successful envelope distribution.

Phase 3 – raise client QPS to 60 k, repeat generation and distribution, noting increased network jitter and QPS fluctuation.

Data analysis: Python scripts and gnuplot visualize client and server QPS over time, showing stable 30 k QPS and more variance at 60 k QPS due to goroutine scheduling, network latency, and packet loss. Additional graphs show envelope generation rates and client receipt rates, confirming the prototype meets its design targets.
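The author's analysis scripts are in Python; the core aggregation they perform — bucketing request timestamps into one‑second windows to obtain QPS for plotting — can be sketched in Go as follows, with hypothetical log data:

```go
package main

import (
	"fmt"
)

// qpsPerSecond buckets request timestamps (Unix milliseconds) into
// one-second windows and returns the request count per window,
// the series the article feeds to gnuplot.
func qpsPerSecond(tsMillis []int64) map[int64]int {
	counts := make(map[int64]int)
	for _, ts := range tsMillis {
		counts[ts/1000]++
	}
	return counts
}

func main() {
	// three requests in second 1, one in second 2 (made-up sample data)
	logs := []int64{1000, 1200, 1900, 2100}
	for sec, n := range qpsPerSecond(logs) {
		fmt.Printf("second %d: %d req/s\n", sec, n)
	}
}
```

Plotting this per‑second series for both client and server logs is what makes the 30 k stability and the 60 k jitter visible in the graphs.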

Conclusion: the prototype successfully simulates a system supporting 1 million users and 30 k–60 k QPS per server, demonstrating that 10 billion shake requests could be processed in roughly 7 minutes with 600 servers. Differences from a production system (complex protocols, payment integration, advanced monitoring, security, hot updates) are acknowledged.

Tags: distributed systems, backend architecture, simulation, Go, performance testing, high QPS
Written by

IT Xianyu

We share common IT technologies (Java, Web, SQL, etc.) and practical applications of emerging software development techniques. New articles are posted daily. Follow IT Xianyu to stay ahead in tech. The IT Xianyu series is being regularly updated.
