
Design and Implementation of a Cache Access Component and Update Platform for High‑QPS Scenarios

This article describes a backend architecture for a high‑traffic e‑commerce project, detailing a cache access component and a cache update platform that use asynchronous messaging, hotspot‑key handling, versioned cache entries, and Redis to achieve low latency, high QPS support and strong data consistency.

Ctrip Technology

The "爆款" ("bestseller") project at Ctrip aggregates all travel products into a single channel, which gives the system three defining characteristics: extremely high traffic, a subset of items that become hot sellers, and the need to handle order placement. The system must therefore sustain high QPS while ensuring users always see the latest data.

To address high QPS, the solution relies on Redis caching, but the key challenge is using the cache effectively. A comparative table evaluates four caching strategies on criteria such as QPS handling, hotspot‑key support, update latency, and complexity, concluding that the proposed approach offers the best overall trade‑offs despite higher initial complexity.

The architecture consists of a business layer (light green) and the proposed caching layer (dark green). The cache access component encapsulates all cache interactions and performs two main tasks: asynchronously sending add/update/delete operations to a cache update platform via messages, and locally caching hotspot keys so that a surge of requests for a single key does not overwhelm Redis (a cache-avalanche effect).

Asynchronous cache operations are introduced to avoid stale data caused by concurrent updates. A detailed thread‑timeline example shows how synchronous updates can leave the cache with outdated values, motivating the message‑driven approach that serializes operations per key.
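The race described above can be replayed deterministically. The following sketch (not from the article; the interleaving is scripted rather than produced by real threads) shows how a late-scheduled synchronous update writes a stale value over a newer one:

```python
# Deterministic replay of the stale-write race: request A reads an old DB
# value, request B then updates both DB and cache, and finally A's delayed
# write pushes the stale value back into the cache.

db = {"item:1": "v1"}
cache = {}

# Step 1: request A reads the DB (sees the old value).
a_read = db["item:1"]            # "v1"

# Step 2: request B updates the DB and refreshes the cache.
db["item:1"] = "v2"
cache["item:1"] = db["item:1"]   # cache now holds "v2"

# Step 3: request A, scheduled late, writes its stale read into the cache.
cache["item:1"] = a_read         # cache regresses to "v1"

print(cache["item:1"], db["item:1"])  # v1 v2 — cache and DB disagree
```

Serializing all operations for a key through one message consumer makes step 3 impossible, because A's write is processed before B's rather than interleaved with it.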

Hotspot‑key handling involves three steps: identifying hotspot keys (dynamic detection and pre‑configuration), storing them in application memory for fast access, and updating them via a broadcast mechanism from the cache update platform. This ensures rapid response to traffic spikes while keeping data fresh.
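A minimal sketch of that hotspot path follows. The class and method names (`HotspotCache`, `on_broadcast`) are illustrative assumptions, not from the article:

```python
# Hot keys are served from application memory; ordinary keys fall through
# to Redis. A broadcast from the update platform refreshes the local copy.

class HotspotCache:
    def __init__(self, hotspot_keys, redis_get):
        self.hotspot_keys = set(hotspot_keys)  # pre-configured + detected keys
        self.local = {}                        # in-process copies of hot keys
        self.redis_get = redis_get             # fallback lookup for cold keys

    def get(self, key):
        # Hot keys never touch Redis on the read path.
        if key in self.hotspot_keys:
            return self.local.get(key)
        return self.redis_get(key)

    def on_broadcast(self, key, value):
        # Every application instance receives the broadcast and
        # refreshes its own in-memory copy, keeping hot data fresh.
        if key in self.hotspot_keys:
            self.local[key] = value

redis_data = {"cold": 1, "hot": 2}   # stand-in for Redis
cache = HotspotCache({"hot"}, redis_data.get)
cache.on_broadcast("hot", 99)
print(cache.get("hot"), cache.get("cold"))  # 99 1
```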

The cache update platform provides two core functions: executing actual cache mutations and notifying business services of changes. It guarantees ordered, single‑threaded processing of messages for the same key by hashing the key and routing messages to the same processing thread, enabling high‑throughput consumption (tens of thousands of messages per minute) without bottlenecks.
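The routing rule can be sketched as follows. The worker count and hash choice are assumptions for illustration; the key property is only that the same key always maps to the same single-threaded consumer:

```python
import hashlib

NUM_WORKERS = 4  # illustrative; the article does not state a count

def worker_for(key: str) -> int:
    # A stable hash guarantees every message for a given key lands on the
    # same worker queue, which serializes all mutations for that key.
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % NUM_WORKERS

queues = [[] for _ in range(NUM_WORKERS)]
for msg in [("sku:1", "v1"), ("sku:2", "v1"), ("sku:1", "v2")]:
    queues[worker_for(msg[0])].append(msg)

# Both "sku:1" messages sit in one queue, in arrival order, while other
# keys spread across workers for parallel throughput.
```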

Version numbers are attached to each cache entry to prevent older messages from overwriting newer data. Deletion messages mark entries as deleted rather than removing them outright, avoiding cache penetration and accidental data resurrection. Add/modify messages compare version numbers and only update the cache when the incoming version is newer.
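The versioning and tombstone rules can be condensed into one guard. Field names in this sketch are assumptions:

```python
# Each entry carries a version; deletes become tombstones rather than hard
# removals, so an out-of-order add can never "resurrect" deleted data.

cache = {}  # key -> {"version": int, "value": ..., "deleted": bool}

def apply(key, version, value=None, delete=False):
    entry = cache.get(key)
    if entry is not None and version <= entry["version"]:
        return False  # stale message: an older write never overwrites newer data
    cache[key] = {"version": version,
                  "value": None if delete else value,
                  "deleted": delete}
    return True

apply("item:1", 2, "new")
apply("item:1", 1, "old")        # ignored: version 1 <= 2
apply("item:1", 3, delete=True)  # tombstone at version 3, entry kept
```

Keeping the tombstone also avoids cache penetration: a read finds the marked entry instead of missing the cache and hitting the database.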

To ensure no message is lost, the system records each message in a business‑side database table; the cache update platform periodically polls this table to recover any missed messages, providing fault tolerance against message‑queue failures.
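The recovery loop amounts to polling for unapplied rows and replaying them. The table layout and names below are assumptions for illustration:

```python
# The business side logs every cache message in a table; the update
# platform periodically scans for rows not yet marked applied and
# replays them, covering messages the queue dropped.

message_log = [
    {"id": 1, "key": "item:1", "version": 5, "applied": True},
    {"id": 2, "key": "item:2", "version": 3, "applied": False},  # lost by MQ
]

def poll_and_replay(log, replay):
    recovered = []
    for row in log:
        if not row["applied"]:
            replay(row)            # re-run the cache mutation for this row
            row["applied"] = True  # mark the row so it is not replayed twice
            recovered.append(row["id"])
    return recovered

print(poll_and_replay(message_log, lambda row: None))  # [2]
```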

In summary, the combined cache access component and update platform achieve fast consistency between cache and database, maintain low latency (single‑digit milliseconds) under near‑10k QPS, reduce Redis request volume and memory usage dramatically, and provide a scalable solution for future hotspot‑key and performance enhancements.

Backend, distributed systems, Redis, caching, Message Queue, high QPS, hotspot key
Written by Ctrip Technology

Official Ctrip Technology account, sharing and discussing growth.