A General Cache Handling Mechanism for Static Business Data in Microservice Architecture

The article proposes a comprehensive microservice‑based caching solution for low‑frequency static data such as vehicle models and user profiles, detailing why caching is needed, why Redis and persistent queues are chosen, how consistency checks work, and the trade‑offs compared with simple expiration strategies.

IT Architects Alliance

What is static data

Static data refers to information that changes infrequently, such as vehicle model libraries, basic user information, and vehicle details. It may be updated only monthly or when a user registers, yet it must remain accurate and available in near real time.

Why caching is needed

In user-facing and connected-vehicle scenarios, this data lives in relational databases whose disk I/O limits read throughput. An in-memory cache such as Redis offers O(1) access and dramatically higher read throughput, whereas alternatives like read-write splitting or sharding do not fully remove the read-only bottleneck.
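The read path this implies is the classic cache-aside pattern. A minimal sketch follows, using plain dicts as stand-ins for Redis and the relational database (the key `model:42` and its fields are hypothetical):

```python
# In-memory stand-ins for Redis and the relational database (hypothetical data).
cache = {}
db = {"model:42": {"name": "Model X", "updated_at": 1700000000}}

def get_static_data(key):
    """Cache-aside read: serve from the cache when possible, fall back to the DB."""
    if key in cache:          # O(1) in-memory hit
        return cache[key]
    row = db.get(key)         # slow relational-DB read on a cache miss
    if row is not None:
        cache[key] = row      # populate the cache for subsequent reads
    return row
```

In a real deployment the dict lookups would be a Redis `GET`/`SET` and a SQL query; the control flow is the same.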

General cache mechanism

The proposed mechanism consists of six core components:

Business service: provides CRUD APIs for static data.

Relational database: persists the data (e.g., MySQL, SQL Server, Oracle).

Persistent queue: decouples services and guarantees delivery (e.g., RabbitMQ, RocketMQ, Kafka).

Cache processing program: consumes queue messages and writes them to the cache.

Data-consistency checker: periodically verifies and reconciles data between the relational database and the cache.

Cache database (Redis): a persistent, clustered in-memory store.

Two external definitions are also introduced: the data producer (source of static data) and the data consumer (services that need the data, such as an alarm system).
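The write path through these components can be sketched as follows. This is a simplified model, with a `deque` standing in for the persistent queue and dicts for the database and cache; the function names are illustrative, not from the article:

```python
from collections import deque

db = {}          # stands in for the relational database
cache = {}       # stands in for Redis
queue = deque()  # stands in for the persistent queue (RabbitMQ/RocketMQ/Kafka)

def update_static_data(key, value, ts):
    """Business service: persist to the DB first, then publish a change message."""
    db[key] = {"value": value, "updated_at": ts}
    queue.append({"key": key, "value": value, "updated_at": ts})

def cache_processor():
    """Cache processing program: drain queue messages and write them to the cache."""
    while queue:
        msg = queue.popleft()
        cache[msg["key"]] = {"value": msg["value"], "updated_at": msg["updated_at"]}
```

The key ordering guarantee is that the database write happens before the message is published, so the queue never advertises data the database does not yet hold.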

Why a business service?

Encapsulating data operations in a service avoids redundancy, improves performance, and reduces inconsistency across multiple terminals (PC, mobile, etc.).

Why not use in‑process cache?

Process-local caches are bounded by the host's memory, and can trigger a cache avalanche when many entries expire at once or the process restarts; they are therefore unsuitable for large-scale static data.

Why Redis?

Redis can be deployed independently, scaled horizontally, persists data across restarts, offers excellent read/write performance, and supports rich data structures, making it the de‑facto choice for a general cache.

Why a queue?

A queue decouples services, simplifies failure handling, and supports messaging patterns such as the Actor model; it also enables asynchronous side effects, such as awarding points or sending a welcome email after user registration.
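The asynchronous fan-out pattern mentioned above can be sketched as a publisher plus a dispatch loop. The event type, handler names, and the 100-point award are all illustrative assumptions, and a `deque` again stands in for the real broker:

```python
from collections import deque

events = deque()   # stands in for the persistent queue
points = {}        # hypothetical points ledger (consumer-side state)
emails = []        # hypothetical email outbox (consumer-side state)

def award_points(evt):
    points[evt["user_id"]] = points.get(evt["user_id"], 0) + 100

def send_welcome_email(evt):
    emails.append(evt["user_id"])

# Each event type fans out to every registered handler.
HANDLERS = {"user.registered": [award_points, send_welcome_email]}

def publish(evt):
    """Producer side: registration returns immediately after enqueueing."""
    events.append(evt)

def dispatch():
    """Asynchronous worker: drain events and fan out to the handlers."""
    while events:
        evt = events.popleft()
        for handler in HANDLERS.get(evt["type"], []):
            handler(evt)
```

The producer returns as soon as the event is enqueued; the slow work happens out of band, which is exactly the decoupling the queue buys.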

Why persistent queues?

Persistence prevents data loss due to network glitches or crashes; acknowledgments ensure reliable delivery, though they may introduce duplicate messages and slightly reduce throughput.
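Because acknowledgment-based delivery is at-least-once, the consumer must tolerate redelivered messages. A minimal idempotent-consumer sketch, deduplicating on an assumed per-message `id` field (in practice the seen-id set would live in Redis or a database, not process memory):

```python
processed_ids = set()   # ids of messages already applied (stand-in for durable store)
cache = {}              # stands in for Redis

def handle_message(msg):
    """At-least-once delivery can redeliver; deduplicate by message id.

    Returns True when the message was applied, False when it was a duplicate.
    Either way the caller should acknowledge it so the broker stops redelivering.
    """
    if msg["id"] in processed_ids:
        return False                 # duplicate: ack without reapplying
    cache[msg["key"]] = msg["value"]
    processed_ids.add(msg["id"])     # record only after the write succeeded
    return True
```

Acknowledging only after successful processing is what trades a little throughput for the no-loss guarantee the section describes.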

Why a data‑consistency check program?

This program handles edge cases where the business service crashes before updating the cache or Redis experiences failover, ensuring eventual consistency by comparing timestamps and updating stale cache entries.
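The timestamp comparison at the heart of the checker can be sketched as a single reconciliation pass. The `updated_at` field name and dict shapes are assumptions mirroring the earlier examples:

```python
def reconcile(db, cache):
    """Refresh any cache entry whose timestamp lags the database row.

    Returns the list of keys that were repaired, for monitoring/alerting.
    """
    repaired = []
    for key, row in db.items():
        entry = cache.get(key)
        if entry is None or entry["updated_at"] < row["updated_at"]:
            cache[key] = dict(row)   # overwrite stale or missing entry
            repaired.append(key)
    return repaired
```

Run periodically (e.g., from a scheduler), this closes the window left by a crash between the database write and the queue publish, giving eventual consistency.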

Why not rely solely on cache expiration?

Expiration can cause cache penetration or avalanche, and choosing an appropriate TTL is a trade‑off between freshness and cache hit rate; dynamic TTLs may be needed for fluctuating traffic.
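One common mitigation for mass expiry, hinted at above, is to add random jitter to each key's TTL so entries do not all expire in the same instant. A sketch, with the base TTL and ±10% spread as illustrative values:

```python
import random

BASE_TTL = 3600  # one hour; an assumed baseline, tuned per dataset

def ttl_with_jitter(base=BASE_TTL, spread=0.1):
    """Randomize the TTL by ±spread so keys written together expire apart."""
    return int(base * (1 + random.uniform(-spread, spread)))
```

The jittered value would be passed as the expiry when writing each key (e.g., Redis `SET key value EX <ttl>`), spreading expirations over a window instead of a single moment.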

Summary

The solution wraps data operations in a business service, uses a persistent queue to feed updates to a Redis cache, and employs a consistency checker for extreme failure scenarios, achieving high‑concurrency, near‑real‑time access to static data while mitigating cache avalanche, data loss, and consistency issues.

It also acknowledges that for very small static datasets a simple in‑process cache may suffice, and that the added complexity must be justified by business requirements and available resources.

Tags: backend architecture, microservices, Redis, data consistency, queue, static data
Written by IT Architects Alliance

Discussion and exchange on system, internet, large-scale distributed, high-availability, and high-performance architectures, as well as big data, machine learning, AI, and architecture evolution with internet technologies. Includes real-world large-scale architecture case studies. Open to architects who have ideas and enjoy sharing.
