
Cache Strategies: From Local Page Cache to Distributed Multi‑Level Caching

This article shares a senior architect's eleven‑year journey with caching, covering local page and object caches, refresh policies, distributed solutions like Redis and Memcached, pagination caching techniques, multi‑level cache architectures, common pitfalls, and practical optimization lessons for high‑performance backend systems.

Qunar Tech Salon

Author Introduction: Zhang Yong, a senior architect at iFlytek with 11 years of backend experience, shares practical insights on caching.

Core Idea: "Nginx + business logic layer + database + cache layer + message queue" fits most scenarios, which led the author to focus on cache technologies.

1. Local Cache

1.1 Page‑Level Cache: Early use of OSCache in JSP pages, with pseudo‑code such as <cache:cache key="foobar" scope="session"> ... </cache:cache>. Page‑level caching is now rare on the server side but still popular on the front end.

1.2 Object Cache: Adopting Ehcache for order status caching reduced a batch task's runtime from 40 minutes to 5–10 minutes. An object cache stores rarely changing data (e.g., global configuration, completed orders) at finer granularity than a page cache.
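The order‑status use case above can be sketched as a minimal TTL‑based object cache. This is an illustrative stdlib‑only stand‑in for Ehcache, not the author's actual configuration; the class and parameter names (OrderStatusCache, ttlMillis) are assumptions.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal TTL object cache in the spirit of the Ehcache usage described above.
public class OrderStatusCache {
    private static final class Entry {
        final Object value;
        final long expiresAt;
        Entry(Object value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final Map<String, Entry> store = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public OrderStatusCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    public void put(String key, Object value) {
        store.put(key, new Entry(value, System.currentTimeMillis() + ttlMillis));
    }

    // Returns null when absent or expired; expired entries are evicted lazily.
    public Object get(String key) {
        Entry e = store.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() > e.expiresAt) {
            store.remove(key, e);
            return null;
        }
        return e.value;
    }
}
```

A completed order's status rarely changes, so even a long TTL stays safe, which is exactly what makes such data a good fit for object caching.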

1.3 Refresh Strategies: A local cache built on Guava sits behind a configuration center and is updated either by scheduled pulls or by push notifications (RocketMQ Remoting). Zookeeper watches, WebSocket push, and HTTP long polling are also discussed as cache‑invalidation mechanisms.
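The "scheduled pull" strategy can be sketched as follows. This is a stdlib‑only illustration, not the author's code: the loader, interval, and class name are assumptions; a push‑based design (RocketMQ, Zookeeper watch) would instead invoke refresh() from a listener callback.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Scheduled-pull refresh: the local cache re-pulls from the configuration
// center on a fixed interval.
public class PullRefreshedConfig<T> {
    private final AtomicReference<T> current = new AtomicReference<>();
    private final Supplier<T> loader;
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public PullRefreshedConfig(Supplier<T> loader, long intervalSeconds) {
        this.loader = loader;
        refresh();                                   // initial load
        scheduler.scheduleAtFixedRate(this::refresh, // periodic pull
                intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
    }

    public void refresh() { current.set(loader.get()); }
    public T get() { return current.get(); }
    public void shutdown() { scheduler.shutdownNow(); }
}
```

The trade‑off the article alludes to: pulls are simple and self‑healing but bound staleness by the interval, while pushes are fresher but need a reliable notification channel.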

2. Distributed Cache

2.1 Object Size & Read Strategy: Large cache entries (300 KB–500 KB) caused frequent Young GC; the fix was to shrink the JSON payloads into compact arrays, cutting the average entry size from ~300 KB to ~80 KB.
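The payload‑shrinking idea can be illustrated by replacing repeated field names with a positional array whose schema is agreed out of band. The field names and values below are invented for the demo, not taken from the original system.

```java
// Compact-array serialization demo: the verbose form repeats field names in
// every record; the compact form carries only values.
public class CompactJson {
    // Verbose form: every record repeats its field names.
    public static String verbose(String id, String city, int price) {
        return "{\"orderId\":\"" + id + "\",\"city\":\"" + city + "\",\"price\":" + price + "}";
    }

    // Compact form: a positional array; field order is the implicit schema.
    public static String compact(String id, String city, int price) {
        return "[\"" + id + "\"," + "\"" + city + "\"," + price + "]";
    }
}
```

Across thousands of records per cache entry, dropping the per‑record field names is what turns ~300 KB entries into ~80 KB ones, at the cost of a less self‑describing format.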

2.2 Pagination List Cache: Two approaches: cache the whole page (keyed by page number and size) or cache individual items. Recommended pattern: fetch the IDs from the DB, batch‑get the cached items, query the DB only for the missing IDs, then store those back in the cache.
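The recommended pattern can be sketched like this. A plain Map stands in for Redis batch get/set, and the loader for the DB query; all names here are illustrative, not from the article.

```java
import java.util.*;
import java.util.function.Function;

// Pagination pattern: the DB page query returns only IDs; items are
// batch-fetched from the cache, and only the misses go back to the DB.
public class PageItemCache {
    private final Map<Long, String> cache = new HashMap<>();      // stand-in for Redis
    private final Function<List<Long>, Map<Long, String>> dbLoader;

    public PageItemCache(Function<List<Long>, Map<Long, String>> dbLoader) {
        this.dbLoader = dbLoader;
    }

    public List<String> getPage(List<Long> pageIds) {
        List<Long> missing = new ArrayList<>();
        Map<Long, String> found = new HashMap<>();
        for (Long id : pageIds) {                  // batch-get cached items
            String item = cache.get(id);
            if (item != null) found.put(id, item); else missing.add(id);
        }
        if (!missing.isEmpty()) {                  // query only the cache misses
            Map<Long, String> loaded = dbLoader.apply(missing);
            cache.putAll(loaded);                  // store them back for next time
            found.putAll(loaded);
        }
        List<String> result = new ArrayList<>();
        for (Long id : pageIds) result.add(found.get(id)); // preserve page order
        return result;
    }
}
```

Compared with caching whole pages, this keeps one copy of each item regardless of how many pages it appears on, so an item update invalidates one key instead of every page that contains it.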

3. Multi‑Level Cache: Combines a fast but capacity‑limited local cache (Guava) with a scalable distributed cache (Redis). Benefits include higher throughput and reduced pressure on the remote cache. The example architecture shows the local cache pre‑warming from Redis or RPC, periodic Guava refresh, and fallback to the remote cache.
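The two‑level lookup can be sketched with a bounded LRU map standing in for Guava and a plain Map standing in for Redis. Sizes, names, and the loader are assumptions made for the sketch.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

// Two-level lookup: small local LRU tier in front of a larger remote tier,
// with a loader (DB / RPC) as the final fallback.
public class TwoLevelCache {
    private final Map<String, String> local =
            new LinkedHashMap<String, String>(16, 0.75f, true) { // access-order LRU
                protected boolean removeEldestEntry(Map.Entry<String, String> e) {
                    return size() > 100;                         // bounded local tier
                }
            };
    private final Map<String, String> remote;        // stand-in for Redis
    private final Function<String, String> loader;   // DB / RPC fallback

    public TwoLevelCache(Map<String, String> remote, Function<String, String> loader) {
        this.remote = remote;
        this.loader = loader;
    }

    public String get(String key) {
        String v = local.get(key);                   // 1. local tier
        if (v != null) return v;
        v = remote.get(key);                         // 2. remote tier
        if (v == null) {
            v = loader.apply(key);                   // 3. source of truth
            remote.put(key, v);                      // warm the remote tier
        }
        local.put(key, v);                           // warm the local tier
        return v;
    }
}
```

Hot keys are served entirely from the local tier, which is how a multi‑level design reduces pressure on the remote cache; the cost is the cross‑node staleness discussed in the lessons below.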

Lessons Learned: Lazy loading alone can leave nodes with inconsistent data, and thread‑pool sizing for a LoadingCache matters. Solutions: combine lazy loading with message‑driven updates, and monitor and tune the thread‑pool parameters.

Conclusion: Caching is essential for performance; mastering its principles and practical patterns — from local to distributed and multi‑level — yields significant gains. Future articles will explore high‑availability cache mechanisms and Codis internals.

Tags: Backend, Java, Performance, Redis, Caching, Distributed Cache, Multi‑Level Cache
Written by

Qunar Tech Salon

Qunar Tech Salon is a learning and exchange platform for Qunar engineers and industry peers. We share cutting-edge technology trends and topics, providing a free platform for mid-to-senior technical professionals to exchange and learn.
