How Meiyou Scaled Overseas Messaging with TiDB Architecture
Meiyou, a leading women's-health platform, migrated its overseas messaging system and other core services from MySQL to TiDB. This article covers the selection process, performance testing, and deployment configurations, along with the resulting gains in scalability, latency, high availability, and operational cost.
Meiyou, an internet company focused on women’s health management, has grown since its 2013 founding to host multiple applications with daily active users exceeding ten million. Rapid business expansion, especially overseas, put pressure on its underlying database architecture.
Early TiDB Adoption (2020)
In 2020 the team first tried TiDB (version 4.0) for offline query workloads, using TiDB's Data Migration (DM) tool to sync both full and incremental MySQL data to a TiDB cluster. Queries that previously took minutes on MySQL completed in seconds on TiDB, a several-dozen-fold performance improvement.
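The full-plus-incremental sync described here corresponds to DM's `task-mode: all`. A minimal task file might look like the following sketch; all hostnames, credentials, and schema names are placeholders, not Meiyou's actual configuration:

```yaml
# DM task sketch: full dump followed by incremental binlog replication
# from one MySQL source into TiDB. All identifiers are illustrative.
name: "mysql-to-tidb-offline"
task-mode: "all"              # "full" + "incremental" in one task
target-database:
  host: "tidb.internal"       # placeholder TiDB endpoint
  port: 4000
  user: "dm_writer"
  password: "${DM_PASSWORD}"
mysql-instances:
  - source-id: "mysql-sg-01"  # a source registered beforehand via dmctl
    block-allow-list: "app-tables"
block-allow-list:
  app-tables:
    do-dbs: ["app_db"]        # replicate only the application schema
```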
Deepening TiDB Experience (2022‑2023)
From 2022 to 2023 the team invested heavily in learning TiDB—studying documentation, watching video courses, obtaining certifications, and sharing knowledge internally through bi‑weekly tech talks. They also built a TiDB knowledge base by conducting fault‑injection tests, SQL throttling tests, and multi‑region deployment experiments.
Pain Points Leading to a Major Migration (2024)
The overseas messaging system (version 1.0) relied on three IDC sites, with a MySQL master in Singapore replicating to São Paulo and Frankfurt. This architecture caused complex query paths, untraceable push status, and poor user experience. When evaluating TiDB versus PolarDB, performance replay tests showed TiDB superior in latency and total cost, especially given high overseas server prices.
The new architecture decommissioned MySQL, deploying a centralized TiDB cluster in Singapore that serves both São Paulo and Frankfurt application layers, simplifying data sync and achieving several‑fold query speed improvements.
Production Deployment Details
Three main systems run on TiDB:
Messaging push system: 2 TiDB nodes, 3 PD nodes, 4 TiKV nodes, plus 2 TiFlash nodes; data volume ~3 TB.
Order aggregation system: 3 TiDB nodes, one dedicated to offline analytics.
Internal systems: mixed deployment to save cost; data volume ~140 GB.
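As a rough illustration, the messaging cluster's shape (2 TiDB / 3 PD / 4 TiKV / 2 TiFlash) maps to a tiup topology along these lines; the hosts are placeholders, not Meiyou's deployment file:

```yaml
# tiup cluster topology sketch matching the node counts described above.
pd_servers:
  - host: 10.0.1.1
  - host: 10.0.1.2
  - host: 10.0.1.3
tidb_servers:
  - host: 10.0.2.1
  - host: 10.0.2.2
tikv_servers:
  - host: 10.0.3.1
  - host: 10.0.3.2
  - host: 10.0.3.3
  - host: 10.0.3.4
tiflash_servers:
  - host: 10.0.4.1
  - host: 10.0.4.2
```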
Configuration Optimizations Before Launch
1. PD (Placement Driver) Settings
Labels for cloud provider, availability zone, and hostname were set, and the isolation level was configured to zone so that replicas always land in different zones. This keeps the cluster available through zone-wide incidents such as the Alibaba Cloud availability-zone fire.
Replica count kept at the default of 3, log retention set to 7 days, and disk‑usage threshold adjusted from 80 % to 90 % to prevent over‑scheduling.
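In tiup configuration terms, these PD settings might be expressed as below. The label names and values are illustrative assumptions; the article does not publish the exact keys used:

```yaml
server_configs:
  pd:
    replication.location-labels: ["provider", "zone", "host"]
    replication.isolation-level: "zone"   # replicas must span distinct zones
    replication.max-replicas: 3           # default of 3 replicas kept
    schedule.low-space-ratio: 0.9         # raised from the 0.8 default
    log.file.max-days: 7                  # 7-day log retention
tikv_servers:
  - host: 10.0.3.1                        # placeholder; one entry per TiKV node
    config:
      server.labels: { provider: "alicloud", zone: "ap-southeast-1a", host: "tikv-01" }
```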
2. TiDB Server Settings
Temporary directories were placed under the data directory to avoid filling the root partition. Log retention remained 7 days, and max connections were limited to 5000 to protect the cluster from traffic spikes.
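A sketch of the corresponding tidb-server configuration, with an assumed data-directory path:

```yaml
server_configs:
  tidb:
    tmp-storage-path: "/data/tidb/tmp"  # keep spill files off the root partition
    log.file.max-days: 7                # 7-day log retention
```

The connection cap can be applied with the MySQL-compatible `max_connections` system variable (e.g. `SET GLOBAL max_connections = 5000`), which in TiDB limits connections per tidb-server instance.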
3. TiKV Settings
TiKV logs are also retained for 7 days. gRPC traffic is gzip-compressed to reduce cross-zone bandwidth and latency. The region size was increased from the default of 96 MiB to 128 MiB (TiDB v8.4+ defaults to 256 MiB) to limit the region count and reduce PD pressure.
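These TiKV settings correspond to the following configuration keys; a sketch, assuming the values described above:

```yaml
server_configs:
  tikv:
    log.file.max-days: 7                   # 7-day log retention
    server.grpc-compression-type: "gzip"   # compress cross-zone gRPC traffic
    coprocessor.region-split-size: "128MB" # larger regions, fewer of them, less PD load
```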
4. System Variable Adjustments
Lock timeout and idle connection timeout were unified. The maximum SQL execution time was set to 600 seconds; queries exceeding this are terminated to protect the cluster. Statistics collection was shifted to midnight to avoid peak‑time impact. SQL mode was aligned with MySQL to ensure compatibility during migration.
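Expressed as SQL, the adjustments might look like this; the specific values are illustrative assumptions except for the 600-second execution cap stated above:

```sql
-- Sketch of the system-variable changes; values other than the
-- execution cap are placeholders, not Meiyou's exact settings.
SET GLOBAL max_execution_time = 600000;                 -- 600 s, in milliseconds
SET GLOBAL tidb_auto_analyze_start_time = '00:00 +0800';-- stats collection window
SET GLOBAL tidb_auto_analyze_end_time   = '06:00 +0800';--   shifted to off-peak hours
SET GLOBAL wait_timeout = 3600;                         -- idle connection timeout
SET GLOBAL innodb_lock_wait_timeout = 50;               -- lock wait timeout
SET GLOBAL sql_mode = 'STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION';
```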
Key Benefits Achieved
Scalability: Linear read/write scaling by adding TiDB and TiKV nodes.
Performance & Load Handling: High concurrency and low latency; HTAP capability consolidates OLTP and OLAP workloads.
High Availability: Multi-replica storage and zone-level isolation guarantee continuity even during data-center failures.
Reduced Operational Cost: Online DDL enables schema changes in seconds, eliminating lengthy manual MySQL DDL operations.
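For example, a column addition that would need careful orchestration (or external tools such as gh-ost) on a large MySQL table is a single online statement in TiDB; the table and column names here are illustrative:

```sql
-- Runs online in TiDB: reads and writes continue while the change applies.
ALTER TABLE push_message ADD COLUMN delivery_status TINYINT NOT NULL DEFAULT 0;
```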
Practical Advice for TiDB Users
Leverage official documentation and video courses for authoritative knowledge.
Participate actively in community forums to exchange solutions.
Prepare comprehensive failure simulations and detailed runbooks before production rollout.
Maintain consistent TiDB major versions across the fleet to avoid upgrade friction.
Deploy production components separately to prevent resource contention.
Conclusion
Meiyou’s journey with TiDB illustrates a successful transition from exploratory use to core business enablement, delivering measurable improvements in scalability, performance, availability, and operational efficiency for its overseas services.
Wukong Talks Architecture
Explaining distributed systems and architecture through stories. Author of the "JVM Performance Tuning in Practice" column, open-source author of "Spring Cloud in Practice PassJava", and independently developed a PMP practice quiz mini-program.