
How a Three‑Tier Cache (Nginx + Redis + Ehcache) Boosts High‑Concurrency Systems

This article explains how a three‑layer caching architecture—combining Nginx, Redis, and Ehcache—along with template engines, double‑Nginx routing, persistence mechanisms, cluster setups, and various cache‑update strategies can dramatically improve hit rates, reduce database pressure, and prevent cache‑related failures in high‑traffic applications.


Nginx

Nginx is commonly used for traffic distribution and also provides a limited‑size cache that can store hot data, allowing user requests to be served directly from cache and reducing load on upstream servers.

Template Engine

FreeMarker or Velocity can be combined with Nginx to handle massive request volumes. Small systems may render every page server‑side and cache the resulting HTML, while large systems keep templates in an Nginx+Lua (OpenResty) cache with expiration times so the cached pages stay fresh.

Two‑Layer Nginx for Higher Hit Rate

Deploying a double‑layer Nginx setup improves cache hit rates. The front‑layer Nginx distributes traffic based on rules (e.g., hash of productId) and routes requests for a specific product to a dedicated back‑layer Nginx, which caches hotspot data.
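The hash‑based routing rule can be sketched in a few lines; the node names and the choice of MD5 here are illustrative, not part of the original design:

```python
import hashlib

def pick_backend(product_id: str, backends: list[str]) -> str:
    """Route requests for the same product to the same back-layer
    Nginx node so its local cache stays hot for that product."""
    digest = hashlib.md5(product_id.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

backends = ["nginx-app-1", "nginx-app-2", "nginx-app-3"]
# The same productId always lands on the same node, so each node
# only ever caches its own slice of the product catalogue.
assert pick_backend("p1001", backends) == pick_backend("p1001", backends)
```

In a real deployment the same computation would live in the front‑layer Nginx configuration (e.g., a `hash` directive or a Lua snippet) rather than in application code.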

Redis

If Nginx does not have the requested data, the request falls back to Redis, which can cache the full dataset and scale horizontally to handle high concurrency and high availability.

Persistence Mechanism

Redis persists in‑memory data to disk through point‑in‑time snapshots (RDB) and by logging every write operation (AOF). Using both mechanisms together gives compact snapshots for fast restarts (RDB) plus near‑complete recovery after a crash (AOF replays each write). A common pitfall is that AOF takes precedence during restart, so to restore from an RDB snapshot you may need to disable AOF temporarily.
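In redis.conf the two mechanisms are controlled by separate directives; a common combination looks like this (the snapshot thresholds are illustrative):

```conf
# RDB: snapshot if at least 1 key changed in 900 s, 10 in 300 s, etc.
save 900 1
save 300 10
save 60 10000

# AOF: log every write; fsync once per second as a speed/safety balance
appendonly yes
appendfsync everysec
```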

Redis Cluster

Redis supports master‑slave replication for read‑write separation and high availability. Sentinel monitors instances and performs automatic failover when a master crashes. Redis Cluster adds multiple masters with slaves, allowing horizontal scaling and automatic slave promotion.

Ehcache

Ehcache, an in‑heap cache running inside each Tomcat JVM, acts as a safety net when Redis fails: it absorbs part of the traffic and prevents every request from hitting the database.

Cache Data Update Strategies

For highly time‑sensitive data, use a DB‑and‑Redis dual‑write approach. For less critical data, employ asynchronous MQ notifications to update Tomcat JVM cache and Redis after the DB change, and let Nginx local cache expire before pulling fresh data from Redis.
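The two strategies can be contrasted in a minimal sketch; the class and method names are illustrative stand‑ins, with a dict for the database, a dict for Redis, and a list for the message queue:

```python
class DualWriteService:
    """Time-sensitive data: write the DB and Redis in the same request."""
    def __init__(self, db: dict, redis: dict):
        self.db, self.redis = db, redis

    def update_price(self, product_id, price):
        self.db[product_id] = price     # 1. persist to the database
        self.redis[product_id] = price  # 2. write through to Redis immediately

class AsyncUpdateService:
    """Less critical data: write the DB, then notify caches via MQ."""
    def __init__(self, db: dict, mq: list):
        self.db, self.mq = db, mq

    def update_detail(self, product_id, detail):
        self.db[product_id] = detail
        # Consumers of this message refresh the JVM cache and Redis later;
        # Nginx simply lets its local cache expire and re-pulls from Redis.
        self.mq.append(("product_changed", product_id))
```

The trade‑off: dual‑write keeps Redis fresh at the cost of latency and coupling in the write path, while the MQ route tolerates brief staleness in exchange for a simpler, faster write.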

Cache‑Aside Pattern

Read operations first check the cache; on a miss, the database is queried, the result is cached, and the response is returned. Updates delete the cache first, then modify the database, allowing lazy re‑caching on the next read.
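The pattern fits in a few lines; here plain dicts stand in for Redis and the database:

```python
# In-memory stand-ins for the cache tier and the database (illustrative).
cache: dict = {}
db = {"p1": "Widget"}

def read(key):
    if key in cache:            # 1. try the cache first
        return cache[key]
    value = db.get(key)         # 2. miss: query the database
    if value is not None:
        cache[key] = value      # 3. backfill the cache
    return value

def update(key, value):
    cache.pop(key, None)        # 1. delete the cache entry first
    db[key] = value             # 2. then modify the database;
                                #    the next read lazily re-caches
```

The lazy backfill on read is what makes the pattern robust: even if an invalidation message is lost, the entry is rebuilt the next time someone asks for it.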

Cache Inconsistency Issues and Solutions

Simple inconsistency arises when cache deletion fails after a DB update. A safer order is delete‑then‑update. More complex scenarios require serializing update and read operations via an internal JVM queue, ensuring that reads see the latest data after updates complete.
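The serialization idea can be sketched with a single worker draining a FIFO queue; in practice there would be one in‑JVM queue per key hash, and the names here are illustrative:

```python
from queue import Queue
from threading import Thread

ops = Queue()                       # all operations on a key go through here
cache, db, results = {}, {"stock": 10}, []

def worker():
    """Drain the queue in order, so a read enqueued after an update
    is guaranteed to observe the updated value."""
    while True:
        op, key, value = ops.get()
        if op == "update":
            cache.pop(key, None)    # delete cache first, then write the DB
            db[key] = value
        elif op == "read":
            results.append(cache.get(key, db.get(key)))
        elif op == "stop":
            break

t = Thread(target=worker)
t.start()
ops.put(("update", "stock", 9))
ops.put(("read", "stock", None))    # queued behind the update, so it sees 9
ops.put(("stop", None, None))
t.join()
```

The cost of this approach is throughput on hot keys, so it is usually reserved for data where stale reads are genuinely unacceptable.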

Distributed Cache Rebuild Conflict Resolution

When multiple machines detect expired Redis/Ehcache entries simultaneously, a distributed lock (e.g., Redis or Zookeeper) prevents concurrent rebuilds; the latest data wins based on timestamps.
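A simulation of the lock‑plus‑timestamp rule, using a local `threading.Lock` as a stand‑in for a Redis or Zookeeper distributed lock:

```python
import threading

# In production this would be a distributed lock (e.g., Redis SETNX or
# a Zookeeper ephemeral node); a local lock illustrates the logic.
lock = threading.Lock()
cache = {}  # key -> (timestamp, value)

def rebuild(key, value, ts):
    with lock:                       # only one rebuilder runs at a time
        current = cache.get(key)
        if current is None or ts > current[0]:
            cache[key] = (ts, value)  # newer data wins
        # else: a rebuild with fresher data already ran; drop this one
```

The timestamp check matters because lock acquisition order is arbitrary: a machine holding older data may win the lock last, and without the comparison it would overwrite fresher data.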

Cache Cold Start and Warm‑up Solutions

To avoid DB overload during system startup or after a full cache loss, pre‑populate Redis with hot data identified by real‑time access statistics collected via Kafka and Storm, using Zookeeper locks to coordinate parallel warm‑up across instances.
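One simple way to parallelize the warm‑up is to shard the hot‑key list across instances; the function below is an illustrative sketch (in the real setup each shard would be claimed under a Zookeeper lock):

```python
def shard_hot_keys(hot_keys: list, num_instances: int) -> list:
    """Round-robin the hot-key list so each warming instance loads
    its own disjoint shard of data from the DB into Redis."""
    shards = [[] for _ in range(num_instances)]
    for i, key in enumerate(hot_keys):
        shards[i % num_instances].append(key)
    return shards
```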

Hot‑Data Overload Mitigation

For sudden spikes of identical requests, identify hot keys (e.g., those whose access count exceeds the 95th percentile) and switch the routing strategy: normally each key hashes to a single application‑layer Nginx, but once a key is flagged hot its data is pushed to all Nginx nodes so the load is spread; Storm keeps tracking access counts and removes the hot flag when traffic returns to normal.
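The percentile threshold can be computed from a window of per‑key access counts; this detector is an illustrative sketch of what the Storm topology would evaluate:

```python
def hot_keys(counts: dict) -> set:
    """Flag keys whose access count in the current window exceeds
    the 95th percentile of all observed counts."""
    values = sorted(counts.values())
    idx = int(len(values) * 0.95)
    threshold = values[min(idx, len(values) - 1)]
    return {key for key, c in counts.items() if c > threshold}
```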

Cache Avalanche Solutions

Prevent a full Redis cluster failure from cascading to DB overload by deploying a highly available Redis cluster (multi‑master, multi‑region), adding an Ehcache layer, isolating resources for Redis access, and applying circuit‑breaker and degradation strategies.

Cache Penetration Solutions

When a request misses all cache levels and the DB returns no data, store a placeholder (e.g., empty marker) in each cache tier to avoid repeated DB hits; asynchronous listeners will refresh the cache when the real data appears.

Nginx Cache Expiration Impact on Redis Load

Randomize Nginx cache TTLs to prevent simultaneous expiration, which would otherwise cause a sudden surge of requests to Redis.
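The jitter itself is a one‑liner; the 20% spread below is an illustrative default, not a recommendation from the original text:

```python
import random

def jittered_ttl(base_seconds: int, spread: float = 0.2) -> int:
    """Randomize a cache entry's TTL by +/- spread so entries written
    at the same moment do not all expire at the same instant."""
    delta = int(base_seconds * spread)
    return base_seconds + random.randint(-delta, delta)
```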

backend architecture · Redis · caching · high concurrency · Nginx · Ehcache
Written by

Efficient Ops

This public account is maintained by Xiaotianguo and friends and regularly publishes original technical articles. We focus on operations transformation and aim to accompany you throughout your operations career, growing together along the way.
