Backend Development · 17 min read

Implementing and Testing a High‑Throughput WeChat Red‑Packet System: 1M Connections and Up to 60k QPS

This article details a practical reproduction of a large‑scale WeChat red‑packet service, describing the design goals, hardware and software setup, concurrency architecture, monitoring tools, and performance results that demonstrate a single machine handling one million connections and up to sixty thousand queries per second.

Architecture Digest

Introduction: The author was inspired by a 2015 article about handling 10 billion WeChat red‑packet requests and decided to reproduce a similar load on a single machine.

Background: Definitions of QPS, PPS, shake‑red‑packet and send‑red‑packet are given, and the target metrics are outlined: support 1 million connections, peak QPS 60k, shake‑red‑packet rate 83/s, send‑red‑packet rate 200/s.

System capacity calculations: Based on 638 servers and 540 million users, a single server should handle about 90k users and roughly 2.3k–6k QPS.

Implementation: The prototype uses Go 1.8.3 on Ubuntu 12.04 with Dell R2950 servers (8 CPUs, 16 GB RAM) and 17 Debian 5.0 VMs as clients (each holding 6k connections). Connections are partitioned into multiple SETs, each managing a few thousand connections, which reduces the goroutine count and lock contention. A simple NTP‑based time‑synchronization scheme spreads client requests evenly over time.
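The SET partitioning described above can be sketched as follows. This is a minimal illustration, not the article's actual code: the `Conn`, `Set`, and `partition` names are hypothetical, and the real system would wrap `net.Conn` and run event loops per shard.

```go
package main

import "fmt"

// Conn stands in for a client connection; in a real system this would
// wrap a net.Conn. All identifiers here are illustrative.
type Conn struct{ ID int }

// Set owns one shard of connections and processes their events on a
// single goroutine, so lock contention stays inside the shard and the
// total goroutine count is bounded by the number of SETs, not by the
// number of connections.
type Set struct {
	conns  []*Conn
	events chan int // incoming work for this shard
}

func (s *Set) run() {
	for range s.events {
		// handle one event for this shard
	}
}

// partition distributes n connections across numSets shards round-robin.
func partition(n, numSets int) []*Set {
	sets := make([]*Set, numSets)
	for i := range sets {
		sets[i] = &Set{events: make(chan int, 1024)}
	}
	for id := 0; id < n; id++ {
		s := sets[id%numSets]
		s.conns = append(s.conns, &Conn{ID: id})
	}
	return sets
}

func main() {
	// e.g. 1M connections split into 200 SETs of 5k connections each
	sets := partition(1000000, 200)
	fmt.Println(len(sets), len(sets[0].conns)) // 200 5000
}
```

The shard count is a tuning knob: fewer SETs mean fewer goroutines but coarser contention domains; the article's "a few thousand connections per SET" suggests a few hundred shards at the 1M scale.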

Monitoring: Custom Python scripts using ethtool and interface counters record per‑second request numbers; logs are visualized with gnuplot. Example alias used to count established connections:

alias ss2="ss -ant | grep 1025 | grep EST | awk -F: '{print \$8}' | sort | uniq -c"
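The article's counting scripts are in Python; the same per‑second counting idea can be sketched on the server side in Go with an atomic counter that a ticker goroutine resets once per second. The names (`reqCount`, `handleRequest`, `logQPS`) are illustrative, not from the article.

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// reqCount is incremented once per handled request; logQPS swaps it to
// zero every second and reports the delta as that second's QPS.
var reqCount int64

func handleRequest() {
	atomic.AddInt64(&reqCount, 1)
}

func currentCount() int64 {
	return atomic.LoadInt64(&reqCount)
}

func logQPS(stop <-chan struct{}) {
	t := time.NewTicker(time.Second)
	defer t.Stop()
	for {
		select {
		case now := <-t.C:
			qps := atomic.SwapInt64(&reqCount, 0)
			fmt.Printf("%s qps=%d\n", now.Format("15:04:05"), qps)
		case <-stop:
			return
		}
	}
}

func main() {
	stop := make(chan struct{})
	go logQPS(stop)
	for i := 0; i < 1000; i++ {
		handleRequest()
	}
	time.Sleep(1500 * time.Millisecond)
	close(stop)
}
```

The per-second log lines can then be fed to gnuplot directly, matching the article's visualization workflow.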

Results: The system achieved 1M connections, sustained 30k QPS stably, and reached 60k QPS with some fluctuation caused by goroutine scheduling, network latency, and packet loss. Graphs show client and server QPS, red‑packet generation and consumption, and Go pprof data.

Conclusion: The prototype meets the design goals of supporting 1M users and up to 60k QPS, demonstrating that a single machine can simulate a large‑scale WeChat red‑packet service and that the architecture can be scaled horizontally.

backend · distributed systems · Go · high concurrency · stress testing · WeChat · QPS
Written by

Architecture Digest

Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.
