Designing a Universal Cache for Static Data in Microservices
Static business data in microservice systems, such as vehicle models and user profiles, change rarely yet demand high accuracy and real‑time access; this article proposes a universal caching architecture using a business service, persistent queue, Redis cache, and consistency checks to achieve scalable, reliable reads.
In distributed systems, especially microservice architectures, static business data such as vehicle models, user profiles, and vehicle information change infrequently but require high accuracy and real‑time availability.
Because relational databases have limited I/O and cannot sustain high read concurrency, an in‑memory cache is introduced to accelerate queries. Simple process‑local caches are insufficient due to memory limits and cache‑avalanche risks.
Proposed universal caching mechanism
The solution consists of six core components:
Business service: provides CRUD APIs for static data.
Relational database: persists the data (MySQL, SQLServer, Oracle, etc.).
Persistent queue: decouples services (RabbitMQ, RocketMQ, Kafka).
Cache writer: consumes queue messages and writes to cache.
Data‑consistency checker: periodically compares DB and cache, updating stale entries.
Cache database (Redis): a durable, horizontally scalable in‑memory store.
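To make the data flow through these components concrete, here is a minimal, self-contained Python sketch that simulates the pipeline with in-memory stand-ins: a dict for the relational database, a deque for the persistent queue, and a dict for the Redis cache. All function and key names are illustrative assumptions, not taken from the original article.

```python
from collections import deque

db = {}          # stand-in for the relational database
queue = deque()  # stand-in for a persistent queue (e.g. RabbitMQ)
cache = {}       # stand-in for Redis

def business_service_update(key, value):
    """Business service: persist to the DB, then publish a change event."""
    db[key] = value
    queue.append({"op": "set", "key": key, "value": value})

def cache_writer_drain():
    """Cache writer: consume queued change events and apply them to the cache."""
    while queue:
        msg = queue.popleft()
        if msg["op"] == "set":
            cache[msg["key"]] = msg["value"]

def consumer_read(key):
    """Data consumer: read from the cache, falling back to the DB on a miss."""
    if key in cache:
        return cache[key]
    value = db.get(key)
    if value is not None:
        cache[key] = value  # warm the cache for subsequent reads
    return value

business_service_update("vehicle:42", {"model": "X200"})
cache_writer_drain()
print(consumer_read("vehicle:42"))
```

In a real deployment the queue consumer and the business service run as separate processes, so the cache update is asynchronous; the fallback read in `consumer_read` covers the window before the change event has been consumed.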
Two external roles are defined:
Data producer: the source of static data (frontend app, web module, etc.).
Data consumer: services that need the static data (e.g., alarm system).
Rationale for each component
Why a business service? It centralises data operations, avoids duplication across terminals, and provides a single high‑performance interface.
Why not process‑local cache? Memory limits and cache‑avalanche risks make it unsuitable for large or high‑concurrency data.
Why Redis? Independent deployment, clustering, persistence, O(1) access, and rich data structures make it the de‑facto cache choice.
Why a queue? It decouples services, improves fault tolerance, and supports asynchronous processing such as awarding points or sending emails.
Why a persistent queue? Persistence prevents data loss during network glitches or crashes; RabbitMQ is recommended.
Why a consistency‑check program? It recovers from failures where updates to cache are missed, ensuring eventual consistency.
Why not rely solely on cache expiration? Fixed TTLs either cause stale data or increase cache‑penetration risk; dynamic TTLs add complexity.
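The consistency check described above can be sketched as a periodic full comparison between the database and the cache. This is a deliberate simplification: a production checker would typically compare version numbers or checksums in batches rather than full values row by row. Both stores are assumed here to be simple key-value mappings.

```python
def check_and_repair(db, cache):
    """Compare every DB row against the cache and repair stale or missing
    entries. Returns the list of keys that were repaired or evicted."""
    repaired = []
    for key, value in db.items():
        if cache.get(key) != value:
            cache[key] = value  # overwrite a stale entry or fill a missed update
            repaired.append(key)
    # evict cache entries whose source row no longer exists
    for key in list(cache):
        if key not in db:
            del cache[key]
            repaired.append(key)
    return repaired
```

Run on a schedule (for example, every few minutes), this job bounds how long the cache can remain inconsistent after a lost queue message, giving the eventual consistency the article calls for.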
Summary
The mechanism uses a business service to abstract data access, a queue to propagate changes to a Redis cache, and a periodic consistency checker to handle edge cases. It provides near‑real‑time, high‑concurrency reads for static data while mitigating cache‑avalanche and ensuring durability.
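Where TTLs are used at all, the cache-avalanche risk mentioned above (many keys expiring in the same instant) is commonly mitigated by adding random jitter to each key's expiry. A minimal sketch, assuming a base TTL in seconds; the function name and jitter fraction are illustrative choices:

```python
import random

def jittered_ttl(base_seconds, jitter_fraction=0.2):
    """Spread expirations by adding up to +/- jitter_fraction of random
    jitter to the base TTL, so keys written together do not all expire
    at the same moment."""
    jitter = base_seconds * jitter_fraction
    return base_seconds + random.uniform(-jitter, jitter)
```

The returned value would be passed as the expiry when writing to the cache, for example via Redis's `SET` command with the `EX` option.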
This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact us and we will review it promptly.
Programmer DD
A tinkering programmer and author of "Spring Cloud Microservices in Action"
