Backend Development

Distributed Real-Time Local Cache Practice in iQIYI TV Backend

This article explains iQIYI TV's backend distributed real-time local cache solution: it compares local and centralized caches, details the solution's advantages and drawbacks, and describes a message-driven update mechanism that improves high-concurrency read performance and reduces cache-related risks.


High‑concurrency systems rely on caching, and shifting more reads onto local caches boosts throughput and stability — but keeping distributed local caches consistent in real time is challenging.

The article presents iQIYI TV's backend distributed real‑time local cache practice as a reference for solving high‑concurrency problems.

Background: Most internet services are read‑heavy, so they split read and write paths to improve stability and throughput. iQIYI stores large amounts of metadata (a few KB per entry) in a microservice that serves many downstream services, which drives high QPS against centralized caches.

Local Cache vs. Centralized Cache:

Local Cache Advantages

Hotspot caching: each instance effectively carries its own store of hot data.

High hit rate.

Custom expiration policies.

Fast business logic with low machine overhead (no network round trip per read).

Strong fault tolerance.

Local Cache Disadvantages

Updates are generally passive (expire-and-reload), so real‑time freshness is poor.

Limited storage (2‑4 GB per instance).

Centralized Cache Advantages

Easy real‑time updates.

Strong consistency.

Centralized Cache Disadvantages

Heavy reliance on large clusters.

IO bottlenecks under high concurrency.

Sensitive to network jitter between application and cache nodes.

Hot‑key traffic can saturate bandwidth, requiring multiple cache clusters.
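To make the local-cache trade-offs above concrete, here is a minimal sketch (in Python, with illustrative names — not iQIYI's implementation) of a bounded local cache combining two of the listed traits: custom per-entry expiration and a hard size cap with LRU eviction:

```python
import time
from collections import OrderedDict

class LocalCache:
    """Bounded local cache with per-entry TTL and LRU eviction."""

    def __init__(self, max_entries=10_000):
        self._store = OrderedDict()   # key -> (value, expires_at)
        self._max_entries = max_entries

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        item = self._store.get(key)
        if item is None:
            return None
        value, expires_at = item
        if now >= expires_at:          # lazy expiration on read
            del self._store[key]
            return None
        self._store.move_to_end(key)   # mark as recently used
        return value

    def put(self, key, value, ttl_seconds, now=None):
        now = time.monotonic() if now is None else now
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = (value, now + ttl_seconds)
        while len(self._store) > self._max_entries:
            self._store.popitem(last=False)  # evict least recently used
```

The size cap reflects the "limited storage" disadvantage: a few GB per instance translates into a bounded entry count that must evict under pressure.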

Most teams choose local hotspot caches, but their real‑time freshness is insufficient. The proposed solution uses a unified messaging mechanism to trigger real‑time updates of local caches, with optional business‑side filtering for personalized updates.
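The message-driven update path can be sketched as follows (all names are illustrative, not iQIYI's actual interfaces): a subscriber receives change events from the message bus, applies an optional business-side filter, and then upserts or invalidates the matching local entry:

```python
class CacheUpdater:
    """Applies change messages from a message bus to a local cache (dict stand-in)."""

    def __init__(self, cache, accept=None):
        self.cache = cache
        # Business-side filter: only messages it accepts touch this instance's cache.
        self.accept = accept or (lambda msg: True)

    def on_message(self, msg):
        # msg is assumed to look like {"op": "upsert"|"delete", "key": ..., "value": ...}
        if not self.accept(msg):           # personalized filtering per business
            return False
        if msg["op"] == "delete":
            self.cache.pop(msg["key"], None)
        else:
            self.cache[msg["key"]] = msg["value"]
        return True
```

The filter is what makes the mechanism "unified": every service subscribes to the same change stream, but each keeps only the slice of updates relevant to its own cache.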

Solution Overview:

1. Management console to control cache instances and policies.
2. Data change source.
3. Message bus as the distribution hub.
4. Business filter for custom handling.
5. Monitoring via iQIYI's unified logging system to track cache hit rates and other metrics.
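Component 5 reports hit rates to the logging system. A minimal counter of the kind that would feed such reporting (illustrative only) could look like:

```python
class HitRateMeter:
    """Tracks cache hits and misses for periodic hit-rate reporting."""

    def __init__(self):
        self.hits = 0
        self.misses = 0

    def record(self, hit):
        if hit:
            self.hits += 1
        else:
            self.misses += 1

    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```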

Extension: When a local cache's hit rate falls below a threshold (e.g., 70%) and memory cannot be expanded, the cache can be split into lightweight logical shards to improve hit rates.
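The splitting described above can be sketched as hash-based routing across lightweight logical shards (a hypothetical layout, not the article's exact scheme): each key deterministically maps to one shard, so each shard holds a smaller, hotter working set.

```python
import hashlib

class ShardedCache:
    """Routes keys across N lightweight logical shards of one local cache."""

    def __init__(self, num_shards=4):
        self.shards = [dict() for _ in range(num_shards)]

    def _shard_for(self, key):
        # Stable hash so the same key always lands on the same shard.
        digest = hashlib.md5(key.encode()).digest()
        return self.shards[digest[0] % len(self.shards)]

    def get(self, key):
        return self._shard_for(key).get(key)

    def put(self, key, value):
        self._shard_for(key)[key] = value
```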

Effect Summary:

Reduced risk of cluster avalanche.

Resolved high‑concurrency read issues.

Decreased network penetration of hotspot data, easing pressure on centralized caches.

Tags: distributed systems, backend architecture, cache, high concurrency, real-time updates
Written by High Availability Architecture, the official account for High Availability Architecture.