Universal Cache Mechanism for Business Static Data in Microservice Architecture

This article examines a universal caching solution for business static data in microservice systems, detailing the definition of static data, the need for caching, a six‑component architecture involving Redis, queues, and consistency checks, and the trade‑offs of various design choices.

Big Data Technology & Architecture

What Is Static Data

Static data refers to information that changes infrequently or has a low update frequency, such as vehicle model libraries, basic user profiles, or vehicle details. Although updates occur (e.g., monthly model releases or occasional user edits), the data must remain highly accurate and timely.

Why Cache Static Data

In user‑oriented and vehicle‑networking scenarios, static data is queried heavily. Serving those reads from a relational database yields poor I/O performance under high concurrency. An in‑memory cache such as Redis provides sub‑millisecond reads, dramatically increasing throughput for read‑heavy workloads.

Alternative techniques such as read‑write splitting or sharding improve overall database throughput, but neither addresses a purely read‑heavy workload as effectively as a dedicated cache.

General Cache Mechanism

The proposed universal mechanism consists of six core components:

Business Service: Exposes CRUD APIs for a specific domain (e.g., a vehicle service).

Relational Database: Persists business data (MySQL, SQL Server, Oracle, etc.).

Persistent Queue: Decouples services and guarantees durability (RabbitMQ, RocketMQ, Kafka).

Cache Processor: Consumes queue messages and writes them to the cache.

Data Consistency Checker: Periodically verifies that cache and database are synchronized and corrects discrepancies.

Cache Database (Redis): A durable, clustered in‑memory store that serves as the primary read source.

Two external roles are also defined:

Data Producer: The source of static data changes (frontend app, web module, etc.).

Data Consumer: Services that need the static data (e.g., an alarm system).
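The interaction between these components can be sketched end to end in a few lines. This is a minimal in‑process illustration only: plain dicts stand in for the relational database and Redis, `queue.Queue` stands in for the persistent queue, and all function and field names are assumptions, not the article's actual API.

```python
# In-process sketch of the six-component flow. Dicts stand in for the
# relational database and Redis; queue.Queue stands in for RabbitMQ.
import json
import queue

database = {}                  # stands in for the relational database
cache = {}                     # stands in for Redis
change_queue = queue.Queue()   # stands in for the persistent queue

def business_service_update(vehicle_id, payload):
    """Business service: persist the change, then publish a change event."""
    database[vehicle_id] = payload
    change_queue.put(json.dumps({"op": "upsert", "id": vehicle_id, "data": payload}))

def cache_processor_drain():
    """Cache processor: consume queued events and apply them to the cache."""
    while not change_queue.empty():
        event = json.loads(change_queue.get())
        if event["op"] == "upsert":
            cache[event["id"]] = event["data"]
        elif event["op"] == "delete":
            cache.pop(event["id"], None)

def consumer_read(vehicle_id):
    """Data consumer: read from the cache only, never from the database."""
    return cache.get(vehicle_id)

# Producer triggers an update; the processor propagates it to the cache.
business_service_update("v-100", {"model": "Model X", "year": 2023})
cache_processor_drain()
```

Note that the consumer never touches the database: the cache is the sole read path, which is what makes the consistency checker (below) necessary.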

Why a Business Service?

Encapsulating data operations in a service avoids duplication, reduces inconsistency, and provides a single, high‑performance entry point for all consumers.

Why Not In‑Process Cache?

In‑process caches are limited by the host's memory, and when many instances restart simultaneously their cold caches trigger a thundering‑herd ("cache snowball") effect that overwhelms the database.

Why Redis?

Redis offers independent deployment, clustering, persistence, excellent read/write performance, and rich data structures, making it the de‑facto standard for distributed caching.

Why a Queue?

A persistent queue decouples the business service from cache updates, improves scalability, and follows the asynchronous message‑passing style (similar to the Actor model) common in microservice designs.

Why Persistent Queue?

Durability prevents message loss caused by network glitches or broker crashes; RabbitMQ is recommended for its balance of reliability and concurrency.
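A change event intended for a durable queue might look like the sketch below. The field names are assumptions for illustration; with RabbitMQ specifically, durability also requires declaring the queue as durable and publishing messages with `delivery_mode=2` (persistent) so they survive a broker restart.

```python
# Sketch of a durable change-event envelope. Field names are illustrative;
# with RabbitMQ, pair this with a durable queue and delivery_mode=2.
import json
import time
import uuid

def build_change_event(entity, entity_id, op, data):
    return json.dumps({
        "event_id": str(uuid.uuid4()),  # lets the processor deduplicate redeliveries
        "entity": entity,               # e.g. "vehicle"
        "id": entity_id,
        "op": op,                       # "upsert" or "delete"
        "data": data,
        "ts": time.time(),              # helps the checker order events
    })

msg = build_change_event("vehicle", "v-100", "upsert", {"model": "Model X"})
decoded = json.loads(msg)
```

Carrying an `event_id` matters because persistent queues typically guarantee at‑least‑once delivery, so the cache processor must tolerate duplicates.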

Why a Data Consistency Checker?

If a service crashes before updating the cache or if Redis experiences a failover, the checker reconciles differences, ensuring near‑real‑time consistency.
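The reconciliation pass can be sketched as a two‑way diff, again with dicts standing in for the database and Redis. A production checker would scan in batches and compare version numbers or checksums rather than full values; this sketch only shows the repair logic.

```python
# Sketch of one reconciliation pass: make the cache match the database.
# Dicts stand in for the database and Redis; returns the number of fixes.
def reconcile(database, cache):
    fixes = 0
    for key, row in database.items():
        if cache.get(key) != row:    # missing or stale cache entry
            cache[key] = row
            fixes += 1
    for key in list(cache):
        if key not in database:      # orphaned cache entry
            del cache[key]
            fixes += 1
    return fixes

db = {"v-1": {"model": "A"}, "v-2": {"model": "B"}}
stale = {"v-1": {"model": "OLD"}, "v-9": {"model": "GONE"}}
print(reconcile(db, stale))  # → 3: refreshes v-1, adds v-2, removes v-9
```

Running the pass again immediately afterwards should report zero fixes, which is a cheap invariant to assert in tests.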

Why Not Rely Solely on Cache Expiration?

Expiration introduces trade‑offs: short TTL reduces staleness but increases cache‑penetration risk; long TTL improves hit rate but may violate the high‑accuracy requirement of static data.
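The TTL trade‑off is easy to see with a tiny expiring cache. This is a generic sketch (not Redis itself) using an injected clock so the expiry can be demonstrated without sleeping; the class and parameter names are assumptions.

```python
# Sketch of TTL expiry with an injected clock. A short TTL forces reads
# back to the database sooner; a long TTL serves stale data longer.
class TTLCache:
    def __init__(self, ttl_seconds, clock):
        self.ttl = ttl_seconds
        self.clock = clock
        self.store = {}

    def set(self, key, value):
        self.store[key] = (value, self.clock() + self.ttl)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if self.clock() >= expires_at:  # expired: next read must hit the database
            del self.store[key]
            return None
        return value

now = [0.0]                                      # fake, controllable clock
ttl_cache = TTLCache(ttl_seconds=60, clock=lambda: now[0])
ttl_cache.set("v-100", {"model": "Model X"})
hit = ttl_cache.get("v-100")    # fresh: served from cache
now[0] = 61.0
miss = ttl_cache.get("v-100")   # past the TTL: treated as a miss
```

The queue‑driven mechanism avoids this dilemma entirely: entries are updated on change rather than expired on a timer.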

Summary

1. Business services encapsulate data operations, shielding consumers from underlying storage details.

2. A persistent queue decouples cache writes from business logic, enabling asynchronous updates.

3. For most scenarios, queue‑driven cache updates achieve real‑time performance comparable to direct writes.

4. In extreme failure cases, the consistency checker quickly restores cache correctness.

5. Consumers query Redis for fast, high‑concurrency access; missing keys simply indicate absent data, avoiding cache‑penetration.
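Point 5 implies a distinctive read path: because the full static dataset is mirrored into the cache, a missing key means "no such record", so reads never fall through to the database. A minimal sketch of that convention, with illustrative names and a dict standing in for Redis:

```python
# Sketch of the consumer read path implied by point 5: the cache is
# authoritative for existence, so a miss is an answer, not a fallback.
NOT_FOUND = object()  # sentinel, in case None is ever a legal cached value

def read_vehicle(cache, vehicle_id):
    value = cache.get(vehicle_id, NOT_FOUND)
    if value is NOT_FOUND:
        return None  # absent data, not a signal to query the database
    return value

mirror = {"v-100": {"model": "Model X"}}
```

This convention is what prevents cache penetration: unknown keys cannot be used to hammer the database, because lookups stop at the cache.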

The mechanism leverages common microservice patterns—queues for decoupling, Redis for durable caching, and periodic consistency checks—to provide a broadly applicable solution for static data while acknowledging the added operational complexity.

Postscript

Redis Coupling Issue: Direct Redis access from services can be abstracted via AOP or OpenResty to achieve transparent data handling.

Service Availability: Each component should be deployed redundantly (load‑balanced or active‑passive) to maintain high availability.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: microservices, Redis, caching, data consistency, queue, static-data
Written by

Big Data Technology & Architecture

Wang Zhiwu, a big data expert, dedicated to sharing big data technology.
