Redis Cache Optimization and Architecture Evolution in JD Daojia Coupon System
This article details the JD Daojia coupon system's high‑traffic architecture, describing its multi‑layer design, Redis cache challenges such as large‑key and hot‑key issues, and practical optimization techniques including key redesign, expiration strategies, and active‑expire algorithms to improve performance and scalability.
1. Content Introduction
The author works in JD Daojia's coupon team, responsible for high‑traffic ToC interfaces and coupon settlement, with Redis source‑code experience and notable performance contributions.
The coupon system supports O2O one‑hour delivery promotions and relies heavily on Redis for high‑flow scenarios.
2. Cache Status
1) Quantity: 5 cache clusters covering ~200 business scenarios.
2) Traffic: terabyte‑level memory usage; peak traffic reaches tens of millions of queries per second.
3) Complex I/O: Multiple exposure points on the store homepage, each with high request rates and extensive inventory checks.
3. Architecture Evolution
3.1 Layered Architecture
As business grew, the monolithic system faced slow feature delivery, performance tuning difficulty, and tight coupling, prompting a split into user, access, application, service, data‑cache, data‑storage, and infrastructure layers.
Application layer includes coupon creation, display, acquisition, settlement, middle‑platform, gateway, and closed‑loop business.
Service layer separates B‑end and C‑end services, with dedicated modules for display, settlement, and middle‑platform functions.
Data‑cache layer consists of five clusters: coupon‑tag, store‑coupon, user‑coupon, activity, and used‑coupon.
Data‑storage layer uses MySQL and Elasticsearch, handling billions of daily records.
Infrastructure relies on JD private cloud and self‑developed middleware.
3.2 System Split
Before: a single codebase handled creation, acquisition, usage, and query, deployed as open (C‑end), inner (B‑end), and web (operations) projects sharing one database, which led to DB connection exhaustion, mutual impact under high concurrency, and tight coupling.
After: split into C‑end (store, product, red‑packet, core transaction), B‑end (creation, middle‑platform, acquisition, subsidy rules), and upstream closed‑loop business.
4. Cache Optimization
The C‑end uses Redis for high‑traffic; optimization focuses on large‑key, hot‑key, and expiration strategies.
4.1 Eliminate Large Keys
Large keys (e.g., a String value over 8,000 bytes or a collection with more than 8,000 elements) block Redis's single‑threaded event loop, causing latency spikes and, in the worst case, node failure.
Case study: coupon‑store applicability cache with millions of store IDs per coupon caused high outbound traffic; solution was to redesign the key to include store dimension, reducing per‑second traffic from gigabytes to megabytes.
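The redesign above can be sketched with a minimal in‑memory model (key names and helper functions here are illustrative, not the production schema): instead of one giant Set mapping a coupon to millions of store IDs, the store dimension is folded into the key, so each applicability check reads a tiny per‑(coupon, store) flag.

```python
# In-memory stand-in for Redis, to keep the sketch self-contained.
cache = {}

def old_key(coupon_id):
    # Anti-pattern: one Set key whose value holds millions of store IDs.
    return f"coupon:stores:{coupon_id}"

def new_key(coupon_id, store_id):
    # Redesigned key includes the store dimension; each value is a
    # one-byte flag instead of a multi-million-element collection.
    # Short segments ("c:s") and integer IDs keep keys compact.
    return f"c:s:{coupon_id}:{store_id}"

def mark_applicable(coupon_id, store_id):
    cache[new_key(coupon_id, store_id)] = "1"

def is_applicable(coupon_id, store_id):
    # O(1) lookup on a tiny key, instead of SMEMBERS on a huge Set.
    return new_key(coupon_id, store_id) in cache
```

With per‑store keys, outbound payloads shrink from the full store list to a single flag per request, which is what drove the drop from gigabytes to megabytes per second.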
Key takeaways: split large keys by business logic, shorten key names, use integers, and separate hot fields.
4.2 Solve Hot Key Issues
Hot keys (e.g., flash‑sale spikes) can exceed 100k+ QPS per node; mitigation includes business throttling, cache isolation, sharding, and front‑end pre‑caching.
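One of the mitigations listed above, key sharding, can be sketched as follows (the replica count and key‑suffix scheme are assumptions for illustration): the hot key is duplicated across N suffixed replicas so reads scatter across cluster nodes, while writes fan out to keep every replica consistent.

```python
import random

N_REPLICAS = 8  # assumed shard count; tune to the cluster's node count

# In-memory stand-in for Redis, to keep the sketch self-contained.
cache = {}

def shard_keys(key):
    # Each replica gets a numeric suffix so it hashes to a different slot.
    return [f"{key}#{i}" for i in range(N_REPLICAS)]

def hot_set(key, value):
    # Writes fan out to every replica so all shards stay consistent.
    for k in shard_keys(key):
        cache[k] = value

def hot_get(key):
    # Reads pick a random replica, spreading QPS across nodes.
    return cache.get(f"{key}#{random.randrange(N_REPLICAS)}")
```

The trade‑off is N× write amplification and memory, which is usually acceptable for a handful of flash‑sale keys.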
4.3 Emphasize Expiration Policies
Redis expires keys through two complementary mechanisms: lazy (passive) deletion, which removes an expired key only when it is next accessed, and active (periodic) deletion, which scans for expired keys in the background. Code examples illustrate the expireIfNeeded and activeExpireCycle implementations.
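The lazy path can be modeled with a short sketch (a simplified analogue of Redis's expireIfNeeded, not the actual C implementation): every read first checks the key's deadline and deletes it on the spot if it has passed.

```python
import time

# In-memory stand-in for a Redis database plus its expires dict.
store = {}
expires = {}  # key -> absolute deadline (monotonic seconds)

def set_with_ttl(key, value, ttl):
    store[key] = value
    expires[key] = time.monotonic() + ttl

def expire_if_needed(key):
    # Mirrors the idea of expireIfNeeded: the key is only deleted
    # when something touches it and its deadline has passed.
    deadline = expires.get(key)
    if deadline is not None and time.monotonic() >= deadline:
        store.pop(key, None)
        expires.pop(key, None)
        return True  # key was expired and removed
    return False

def get(key):
    expire_if_needed(key)
    return store.get(key)
```

Lazy deletion alone is not enough (keys that are never read again would linger forever), which is why Redis pairs it with the periodic active scan.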
Best practices: set appropriate TTLs, add random jitter so large batches of keys do not expire at the same instant, and schedule any full‑scan cleanup during low‑traffic periods.
Overall, activeExpireCycle uses an adaptive algorithm to balance CPU usage and expiration workload.
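The adaptive idea behind activeExpireCycle can be sketched like this (sample size and the 25% repeat threshold follow Redis defaults; the time budget here is an assumed constant, and the real C implementation is considerably more involved): each round samples a handful of keys with TTLs, deletes the expired ones, and only keeps scanning while the expired ratio stays high and a per‑call CPU budget remains.

```python
import random
import time

SAMPLE_SIZE = 20          # keys sampled per round (Redis default)
REPEAT_THRESHOLD = 0.25   # keep scanning while >25% of samples were expired
TIME_BUDGET = 0.025       # assumed per-call CPU budget, in seconds

def active_expire_cycle(store, expires, now=None):
    """Delete expired keys adaptively; returns how many were removed."""
    now = time.monotonic() if now is None else now
    start = time.monotonic()
    total_expired = 0
    while True:
        keys = list(expires)
        if not keys:
            break
        sample = random.sample(keys, min(SAMPLE_SIZE, len(keys)))
        expired = 0
        for k in sample:
            if expires[k] <= now:
                store.pop(k, None)
                del expires[k]
                expired += 1
        total_expired += expired
        # Adaptive part: if few sampled keys were expired, the backlog is
        # small, so stop and yield the CPU; also stop if over budget.
        if expired / len(sample) <= REPEAT_THRESHOLD:
            break
        if time.monotonic() - start > TIME_BUDGET:
            break
    return total_expired
```

This is the balance the article refers to: when many keys are expiring, the loop works harder; when few are, it exits quickly and leaves the rest to lazy deletion.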
Dada Group Technology
Sharing insights and experiences from Dada Group's R&D department on product refinement and technology advancement, connecting with fellow geeks to exchange ideas and grow together.