Design and Evolution of the Price‑Increase Coupon Service for a C2B Recycling Platform
This article details the design, evolution, and scaling strategies of a price‑increase coupon system for a C2B digital product recycling platform, covering its initial experimental phase, platformization, sharding‑JDBC implementation, intelligent coupon recommendation, Elasticsearch integration, and operational optimizations for high‑throughput stability.
Business Introduction
Zhuanzhuan's C2B recycling business centers on buying back 3C digital products from users: a user requests a valuation, ships the product, receives a quality inspection and a concrete quote, and, if the price is acceptable, confirms the recycling order and receives payment.
To increase order volume, cash subsidies are offered: "full‑reduction coupons" for buyers and "full‑addition coupons" for sellers, collectively called "price‑increase coupons". When sellers meet a certain sales amount, the platform provides an additional price subsidy.
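The mechanics of a full-addition coupon can be sketched in a few lines. This is a minimal illustration, not the platform's actual pricing model; the function name, the cent-based amounts, and the single-threshold rule are all assumptions.

```python
# Sketch of applying a full-addition ("price-increase") coupon to a seller's
# quote. Threshold semantics and names are illustrative assumptions.

def apply_full_addition_coupon(quote_cents: int,
                               threshold_cents: int,
                               addition_cents: int) -> int:
    """Return the seller payout: the inspected quote plus the subsidy
    once the quote reaches the coupon's threshold."""
    if quote_cents >= threshold_cents:
        return quote_cents + addition_cents
    return quote_cents

# A 1000.00 quote with an "add 50.00 when over 800.00" coupon:
print(apply_full_addition_coupon(100_000, 80_000, 5_000))  # 105000
```

Keeping amounts in integer cents avoids floating-point rounding in money calculations.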
Terminology
Coupon Configuration: stores the configuration information of price‑increase coupons.
Coupon Relation Table: stores the binding relationship between coupons and users.
Evolution Process
1.0 Experiment Phase
Keywords: Exploration, Uncertainty
In the early stage, the data model of price‑increase coupons was undefined and the product was unsure which mechanisms would effectively incentivize users, leading to frequent changes in the data structure.
The configuration was written to the Apollo Config Center for rapid experimentation, but Apollo offered no strong validation rules, and configuring coupons there was costly for the product team.
The coupon relation table was a single table without sharding because data volume and growth metrics were unclear.
2.0 Platform Construction
Keywords: Standardization
After the trial phase, the product team clarified the coupon mechanics and data model, and the coupons began serving other channels such as door‑to‑door and offline store recycling.
Apollo's drawbacks (no validation, high cost of hand‑editing JSON, no visual tooling) led the team to abandon it and build a dedicated marketing backend.
3.0 Splitting Large Table into Small Tables
As the number of user‑coupon bindings grew to billions, a sharding strategy was required.
Two sharding approaches were considered:
1. A JDBC‑layer (client‑side) solution, requiring only backend code changes.
2. A database‑level proxy, requiring DBA or operations involvement.
The team chose the JDBC‑layer approach and adopted the widely used sharding-jdbc framework (now part of Apache ShardingSphere).
Based on growth modeling, the single table was split into 8 databases with 8 tables each, using the user UID hash for partitioning.
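The 8‑database, 8‑table layout can be sketched as a simple routing function. The modulo scheme, database and table naming are assumptions for illustration; in practice the rules live in the sharding-jdbc configuration.

```python
# Sketch of routing a user UID to one of 8 databases x 8 tables (64 slots).
# Naming and the exact hash rule are illustrative assumptions.

DB_COUNT = 8
TABLE_COUNT = 8

def route(uid: int) -> tuple[str, str]:
    """Map a user UID to (database, table) for the coupon relation table."""
    slot = uid % (DB_COUNT * TABLE_COUNT)     # 64 slots in total
    db = f"coupon_db_{slot // TABLE_COUNT}"   # slots 0-7 -> db 0, 8-15 -> db 1, ...
    table = f"coupon_relation_{slot % TABLE_COUNT}"
    return db, table

print(route(1234567))  # ('coupon_db_0', 'coupon_relation_7')
```

Routing by UID keeps all of one user's coupons on a single shard, so the common "list my coupons" query never fans out across databases.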
How to select appropriate coupons for a user?
Coupon selection involves many criteria (category, brand, model). To avoid heavy joins, coupon configuration is cached locally; updates are broadcast via MQ, and a periodic task ensures consistency. The system first filters configuration IDs in memory, then fetches the corresponding coupons from the database.
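The two‑step lookup above can be sketched as follows. The configuration fields and cached entries are invented for illustration; only the shape of the approach (filter IDs in memory, then hit the database) reflects the text.

```python
# Sketch of step 1 of the lookup: filter cached coupon configurations in
# memory before touching the database. Fields and data are assumptions.

from dataclasses import dataclass

@dataclass
class CouponConfig:
    config_id: int
    category: str
    brand: str

# Local cache, kept fresh via MQ broadcasts plus a periodic reconciliation task.
CONFIG_CACHE = [
    CouponConfig(1, "phone", "Apple"),
    CouponConfig(2, "phone", "Huawei"),
    CouponConfig(3, "laptop", "Apple"),
]

def matching_config_ids(category: str, brand: str) -> list[int]:
    """Narrow down candidate configuration IDs purely in memory; the
    caller then fetches only those coupons from the database."""
    return [c.config_id for c in CONFIG_CACHE
            if c.category == category and c.brand == brand]

print(matching_config_ids("phone", "Apple"))  # [1]
```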
4.0 Intelligent Construction
Intelligence means analyzing user behaviors (orders, reviews, browsing) to push suitable coupons instead of static activity‑based distribution.
Process:
Behavior logs are collected via Kafka, processed by Flink for rule filtering, and sent to the search‑recommendation service, which returns the most suitable coupon when a user accesses the activity page.
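The rule‑filtering step the Flink job performs can be sketched in plain Python. The event shape and the "browsed several times without ordering" rule are assumptions; the real job would express this as a Flink operator over the Kafka stream.

```python
# Plain-Python sketch of behavior-stream rule filtering; event fields and
# the threshold are illustrative assumptions.

def eligible_events(events):
    """Keep behaviors suggesting purchase intent, e.g. a user who browsed
    the same model several times without placing an order."""
    for e in events:
        if e["action"] == "browse" and e["times"] >= 3 and not e["ordered"]:
            yield e

stream = [
    {"uid": 1, "action": "browse", "times": 5, "ordered": False},
    {"uid": 2, "action": "browse", "times": 1, "ordered": False},
    {"uid": 3, "action": "browse", "times": 4, "ordered": True},
]
print([e["uid"] for e in eligible_events(stream)])  # [1]
```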
5.0 Introducing Elasticsearch Middleware
When data volume exceeds the capacity of traditional databases for complex queries, Elasticsearch is introduced for full‑text search and flexible data structures.
After the intelligent phase, the growing number of coupons caused memory pressure and stability issues; Elasticsearch provides efficient retrieval with acceptable eventual consistency and sub‑second latency.
ES Cluster Optimization Techniques
Compress data transfer and index only required fields, fetching only primary keys.
Shard by user UID to limit search scope.
Archive expired coupons; MySQL data can be archived to TiDB, while Elasticsearch retains only active data.
Use appropriate field types (e.g., map numeric fields that are only queried by exact match to keyword, which is typically faster than numeric types for term queries).
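Several of the optimizations above show up directly in the query body. The sketch below builds such a body as a plain dict; the index and field names are assumptions, but `_source: false` and `docvalue_fields` are standard Elasticsearch options for skipping the document body and returning only selected fields.

```python
# Sketch of an Elasticsearch query reflecting the optimizations above:
# fetch only the primary key, no document body, and match a uid stored as
# keyword. Field and index names are illustrative assumptions.

import json

def build_coupon_query(uid: int, status: str) -> dict:
    return {
        "query": {"bool": {"filter": [
            {"term": {"uid": str(uid)}},   # uid indexed as keyword
            {"term": {"status": status}},
        ]}},
        "_source": False,                   # do not transfer document bodies
        "docvalue_fields": ["coupon_id"],   # return only the primary key
    }

# Sent with ?routing=<uid> so only that user's shard is searched.
print(json.dumps(build_coupon_query(42, "active")))
```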
6.0 Introducing NoSQL, Read‑Replicas, and Other Stability Enhancements
Peak traffic can exceed 10,000 QPS, stressing services and databases.
Stability improvements include:
Read‑write separation to handle read‑heavy scenarios and tolerate replica lag.
Redis caching for frequently accessed interfaces.
Circuit‑breaker fallback when Elasticsearch or services experience timeouts.
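The cache‑plus‑fallback pattern behind these measures can be sketched briefly. The in‑memory dict stands in for Redis, and the failure threshold is an invented value; a production circuit breaker would also re‑close after a cool‑down period.

```python
# Sketch of a hot-read cache in front of the query path, with a
# circuit-breaker fallback when the backend keeps timing out.
# The dict stands in for Redis; thresholds and names are assumptions.

CACHE = {}
FAILURES = {"count": 0}
BREAK_AFTER = 3  # open the circuit after 3 consecutive failures

def get_coupons(uid, query_backend):
    if uid in CACHE:                       # serve hot reads from cache
        return CACHE[uid]
    if FAILURES["count"] >= BREAK_AFTER:   # circuit open: degrade gracefully
        return []                          # fallback: show no coupons
    try:
        result = query_backend(uid)
        FAILURES["count"] = 0
        CACHE[uid] = result
        return result
    except TimeoutError:
        FAILURES["count"] += 1
        return []

print(get_coupons(1, lambda uid: ["coupon-100"]))  # ['coupon-100']
```

Returning an empty coupon list on failure keeps the activity page rendering even when the search layer is down.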
Insights & Summary
1. Avoid over‑design.
2. Coordinate the right people for the right tasks.
3. Share technical insights with colleagues to solve problems faster and gain inspiration.
4. Architectural principles:
Rollback design – ensure forward compatibility and version rollback capability.
Feature toggle design – allow quick disabling of malfunctioning features.
Monitoring design – embed observability from the design phase.
Adopt mature technologies – avoid untested open‑source components lacking commercial support.
Resource isolation – prevent a single business from monopolizing resources.
Horizontal scalability – design for scaling out to avoid bottlenecks.
Rapid iteration – develop small features quickly, validate early, and reduce delivery risk.
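The feature‑toggle principle above is worth a concrete sketch. The flag store here is a plain dict and the feature name is invented; in practice the flag would live in a config center so it can be flipped without a deploy.

```python
# Sketch of a feature toggle gating a risky feature; names are assumptions.

FLAGS = {"smart_coupon_push": True}

def push_coupon(uid: int) -> str:
    if not FLAGS.get("smart_coupon_push", False):
        return "feature disabled, falling back to static distribution"
    return f"pushed personalized coupon to user {uid}"

print(push_coupon(7))
FLAGS["smart_coupon_push"] = False   # operator disables a misbehaving feature
print(push_coupon(7))
```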
Author Introduction
Wang Ruigang, backend developer in the Innovation Technology Department, responsible for the online recycling business.
Zhuanzhuan Tech
A platform for Zhuanzhuan R&D and industry peers to learn and exchange technology, regularly sharing frontline experience and cutting‑edge topics. We welcome practical discussions and sharing; contact waterystone with any questions.