Design Practices for a Billion‑Scale User Center
This article presents a comprehensive set of design practices for building a highly available, high‑performance, and secure user‑center system that can handle hundreds of millions of users, covering service architecture, API design, token degradation, data sharding, security, asynchronous processing, and monitoring.
The user center is a core subsystem for managing users in internet companies, providing login, registration, password changes, profile updates, token generation, and verification. To support hundred‑million‑scale traffic, the article proposes a practical solution based on a microservice architecture.
1. Service Architecture – The system is split into three independent microservices: a gateway service that aggregates business logic and external calls, a core service that handles simple logic and data storage (relying only on Redis or a database), and an asynchronous consumer service for processing messages. This separation keeps the core and consumer services stable while allowing rapid iteration on the gateway.
2. Interface Design – APIs are split into Web and App variants, with cross‑origin single sign‑on for Web and distinct encryption, signing, and token verification for App. Core interfaces (e.g., login) receive special treatment: user tables are vertically split into a core table (userId, username, phone, password, salt) and a profile table (avatar, nickname, etc.). Short‑circuit login paths rely only on read‑only database replicas and degrade automatically to fallback strategies when dependent services fail.
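The automatic‑degradation idea on the login path can be sketched as a wrapper that short‑circuits a failing dependency call to a local fallback instead of failing the whole login. The class and method names below are illustrative, not from the article:

```java
import java.util.function.Supplier;

// Illustrative sketch: wrap a dependency call so that any failure falls back
// to a degraded local strategy (e.g. skip a risk check, use a cached profile)
// rather than propagating and failing the login.
public class Degradable {
    static <T> T callWithFallback(Supplier<T> primary, Supplier<T> fallback) {
        try {
            return primary.get();
        } catch (RuntimeException e) {
            return fallback.get();   // degraded path keeps the core flow alive
        }
    }
}
```

In practice the fallback would also record a metric so that degradations are visible to monitoring.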
3. Database Sharding – With user growth exceeding 100 million, vertical and horizontal sharding are applied. Core user fields stay in a vertically split table, while large event tables are moved to separate databases. For high‑frequency queries, a MySQL master‑slave setup is used; for analytical queries, Elasticsearch provides scalable, replicated search capabilities.
The horizontal sharding methods include an index‑table approach (mapping phone or username to UID) and a “gene” method that encodes phone or username bits into the UID, allowing deterministic routing to specific shards.
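The gene method can be sketched in a few lines: the low bits of a stable hash of the phone number are embedded into the generated UID, so a later lookup by either UID or phone routes to the same shard without an index table. The bit widths and names below are illustrative assumptions, not the article's exact scheme:

```java
import java.util.zip.CRC32;

// Sketch of the "gene" sharding method: embed a shard-determining hash of the
// phone number into the low bits of the UID at generation time.
public class GeneSharding {
    static final int SHARD_BITS = 4;                  // 16 shards (illustrative)
    static final long SHARD_MASK = (1L << SHARD_BITS) - 1;

    // The shard "gene": a stable hash of the phone number, masked to shard bits.
    static long geneOf(String phone) {
        CRC32 crc = new CRC32();
        crc.update(phone.getBytes());
        return crc.getValue() & SHARD_MASK;
    }

    // Embed the gene into the low bits of a sequence-generated id.
    static long makeUid(long sequence, String phone) {
        return (sequence << SHARD_BITS) | geneOf(phone);
    }

    // Routing by UID just masks the low bits; routing by phone recomputes the gene.
    static long shardOfUid(long uid)       { return uid & SHARD_MASK; }
    static long shardOfPhone(String phone) { return geneOf(phone); }
}
```

Both routes agree by construction, which is exactly why the gene method avoids the extra lookup that the index‑table approach requires.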
4. Flexible Token Degradation – Tokens for Web (cookie‑based) and App (custom) are generated from an encrypted combination of userId, phone, a random code, and an expiration time. When Redis is unavailable, a special token format is produced; during validation, the token is decrypted and, if Redis is down, the system falls back to database verification with rate‑limiting to protect performance.
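A self‑contained token of this kind can be sketched as follows. The field layout, the degraded‑mode flag, and the use of an HMAC (standing in for the encryption the article mentions) are all assumptions for illustration:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.util.Base64;

// Illustrative sketch of a degradable token: the payload carries userId, phone,
// a mode flag, and an expiry, protected by an HMAC so validation needs no Redis.
public class DegradableToken {
    static final byte[] KEY = "demo-secret-key".getBytes();  // illustrative key

    static String sign(String payload) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(KEY, "HmacSHA256"));
        return Base64.getUrlEncoder().withoutPadding()
                     .encodeToString(mac.doFinal(payload.getBytes()));
    }

    // redisUp=false produces the "special" degraded token format.
    static String issue(long userId, String phone, long expiresAt, boolean redisUp)
            throws Exception {
        String payload = userId + "|" + phone + "|" + (redisUp ? "R" : "D") + "|" + expiresAt;
        return Base64.getUrlEncoder().withoutPadding().encodeToString(payload.getBytes())
                + "." + sign(payload);
    }

    // Returns the userId if the token verifies and is unexpired, else -1.
    // With Redis down, a real system would additionally confirm against the
    // database here, behind a rate limiter, as the article describes.
    static long validate(String token, long now) throws Exception {
        String[] parts = token.split("\\.");
        String payload = new String(Base64.getUrlDecoder().decode(parts[0]));
        if (!sign(payload).equals(parts[1])) return -1;   // tampered
        String[] fields = payload.split("\\|");
        if (now > Long.parseLong(fields[3])) return -1;   // expired
        return Long.parseLong(fields[0]);
    }
}
```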
5. Data Security – Sensitive data is stored separately; passwords are hashed with per‑user salts, checked against a blacklist of weak passwords, and protected with strong algorithms such as bcrypt or scrypt. Multi‑layer encryption and key separation increase resistance to rainbow‑table attacks while balancing performance.
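The salted, deliberately slow hashing the article recommends can be sketched with the JDK's built‑in PBKDF2 as a dependency‑free stand‑in for bcrypt or scrypt; the iteration count and parameter names are illustrative:

```java
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

// Sketch of salted, slow password hashing: a per-user random salt plus an
// intentionally expensive derivation defeats precomputed rainbow tables.
public class PasswordStore {
    static final int ITERATIONS = 100_000;   // cost factor, tuned to latency budget

    static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    static String hash(char[] password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, 256);
        byte[] dk = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                                    .generateSecret(spec).getEncoded();
        return Base64.getEncoder().encodeToString(dk);
    }

    static boolean verify(char[] password, byte[] salt, String stored) throws Exception {
        return hash(password, salt).equals(stored);  // use a constant-time compare in production
    }
}
```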
6. Asynchronous Consumption – After login or registration, user events are written to a database and published to a message queue. Downstream services consume these events for rewards, profiling, etc., decoupling the user center from dependent systems and enabling compensation when the queue is unavailable.
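The write‑then‑publish pattern with compensation can be sketched in memory; the class names are illustrative, with lists standing in for the events table and the message queue:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal in-memory sketch of the outbox pattern: the event is durably recorded
// first, so a failed publish can be retried by a compensation job, not lost.
public class EventOutbox {
    static class Event {
        final long userId;
        final String type;
        Event(long userId, String type) { this.userId = userId; this.type = type; }
    }

    final List<Event> pending = new ArrayList<>();    // stands in for an events table
    final List<Event> delivered = new ArrayList<>();  // stands in for the message queue
    boolean queueUp = true;                           // simulated MQ availability

    void record(Event e) {
        pending.add(e);        // step 1: persist before publishing
        publishPending();      // step 2: best-effort publish
    }

    // Compensation job: periodically re-drain anything that failed to publish.
    void publishPending() {
        if (!queueUp) return;
        delivered.addAll(pending);
        pending.clear();
    }
}
```

When the queue comes back, the compensation job drains the backlog, which is the decoupling benefit the section describes.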
7. Monitoring – Comprehensive monitoring covers QPS, memory usage, GC time, service latency, database binlog, front‑end metrics, and Zipkin‑based end‑to‑end tracing. Alerts trigger on abnormal drops, enabling rapid response to issues.
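The "alert on abnormal drop" check reduces to comparing the current window's rate against the previous one; the threshold ratio below is an illustrative assumption:

```java
// Tiny sketch of a drop-alert predicate: flag when the current window's QPS
// falls below a configured fraction of the previous window's.
public class DropAlert {
    static boolean abnormalDrop(double previousQps, double currentQps, double ratio) {
        return previousQps > 0 && currentQps < previousQps * ratio;
    }
}
```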
Conclusion – The article outlines a holistic design covering architecture, API design, token degradation, sharding, security, asynchronous processing, and monitoring for a user center capable of supporting hundred‑million‑level traffic, while acknowledging ongoing challenges such as auth service separation, monitoring granularity, and continuous performance improvements.
Architecture Digest
Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.