Using Redis as a High‑Performance Cache Layer for MySQL‑Backed Services
This article explains how to alleviate MySQL bottlenecks in high-traffic product services by introducing a cache layer, first in-process and then as a shared remote service, covering data structures, expiration policies, eviction strategies, persistence mechanisms, and a lightweight TCP protocol, in effect reconstructing Redis from first principles.
You are a programmer maintaining a product service that directly connects to a MySQL database.
Assume the service must handle 10,000 queries per second (QPS) at peak, while MySQL can sustain only about 5,000 QPS, so the database is overwhelmed during traffic spikes such as flash sales or ticket releases.
To prevent MySQL overload while still supporting 10k QPS, we add an intermediate caching layer: Redis.
Local Cache
Since memory access is much faster than disk access, moving frequently accessed MySQL data into memory dramatically improves query speed. A simple in-process dictionary (e.g., a Python dict or a Java Map) can store the product ID as the key and the product data as the value. Queries first check this dictionary; on a miss, the request falls back to MySQL and the result is cached for future accesses.
This in-process cache is referred to as a local cache. With it, the number of queries hitting MySQL drops dramatically, making 10k QPS feasible.
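A minimal sketch of this read-through local cache, with fetch_product_from_mysql as a hypothetical stand-in for the real database query:

```python
# In-process cache: product_id -> product row.
product_cache = {}

def fetch_product_from_mysql(product_id):
    # Placeholder for a real "SELECT ... WHERE id = %s" query.
    return {"id": product_id, "name": f"product-{product_id}"}

def get_product(product_id):
    # Cache hit: serve from memory, skipping MySQL entirely.
    if product_id in product_cache:
        return product_cache[product_id]
    # Cache miss: fall back to MySQL, then populate the cache.
    row = fetch_product_from_mysql(product_id)
    product_cache[product_id] = row
    return row
```

Only the first request for a given product pays the cost of a MySQL round trip; subsequent requests are served from memory.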
Remote Cache
When multiple service instances run for high availability, each maintaining its own local cache wastes memory, and the copies can drift out of sync. Extract the dictionary into a separate service: a remote cache service. All instances read and write through this single service, eliminating duplicate caches.
Concurrency issues are avoided by funneling all read/write commands through a single thread inside the remote cache, so there is no lock contention and no thread-switch overhead.
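The single-threaded design can be sketched by funneling every command through a queue consumed by one worker thread; all names here are illustrative, not Redis internals:

```python
import queue
import threading

store = {}                 # the shared dictionary
commands = queue.Queue()   # all SET/GET requests line up here

def worker():
    # The only thread that ever touches `store`, so no locks are needed.
    while True:
        op, key, value, reply = commands.get()
        if op == "STOP":
            break
        if op == "SET":
            store[key] = value
            reply.put("OK")
        elif op == "GET":
            reply.put(store.get(key))

t = threading.Thread(target=worker)
t.start()

def execute(op, key, value=None):
    # Callers (service instances) enqueue a command and block on the reply.
    reply = queue.Queue()
    commands.put((op, key, value, reply))
    return reply.get()
```

Because only the worker thread touches the store, commands are serialized naturally, which mirrors how Redis's single-threaded command loop sidesteps locking.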
Support for Multiple Data Types
The cache service is extended beyond simple strings to support FIFO queues (List), deduplication sets (Set), and sorted sets for leaderboards (ZSet), making it more versatile.
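Continuing the build-it-yourself sketch, the dictionary's values can hold richer structures; the function names below merely echo the Redis commands they imitate:

```python
store = {}

def rpush(key, value):
    # FIFO queue (List-like): append to the tail.
    store.setdefault(key, []).append(value)

def lpop(key):
    # Pop from the head; None if empty or missing.
    return store[key].pop(0) if store.get(key) else None

def sadd(key, member):
    # Deduplication set (Set-like): repeated adds are no-ops.
    store.setdefault(key, set()).add(member)

def zadd(key, score, member):
    # Leaderboard (ZSet-like): each member carries a score.
    store.setdefault(key, {})[member] = score

def zrevrange(key, start, stop):
    # Members ordered by descending score, inclusive range.
    ranked = sorted(store.get(key, {}).items(), key=lambda kv: -kv[1])
    return [member for member, _ in ranked[start:stop + 1]]
```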
Memory Expiration Strategy
To control memory growth, each cached entry can be assigned an expiration time (TTL). The client sets the appropriate TTL via an EXPIRE command, and expired entries are removed from memory.
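A sketch of lazy expiration, one common way to implement TTLs (Redis combines lazy checks with periodic scans; this toy version only checks on read):

```python
import time

store = {}
expirations = {}  # key -> absolute deadline in monotonic seconds

def set_value(key, value):
    store[key] = value

def expire(key, ttl_seconds):
    # Record when the key should stop being served.
    expirations[key] = time.monotonic() + ttl_seconds

def get_value(key):
    # Lazy expiration: drop the key if its deadline has passed.
    deadline = expirations.get(key)
    if deadline is not None and time.monotonic() >= deadline:
        store.pop(key, None)
        expirations.pop(key, None)
        return None
    return store.get(key)
```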
Cache Eviction
When memory approaches its limit, an eviction policy such as Least Recently Used (LRU) removes the entries that have gone longest without access, keeping hot data in memory.
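An LRU policy can be sketched with Python's OrderedDict, evicting the least recently used key once a fixed capacity is exceeded:

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU cache: when full, drop the least recently used key."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the coldest entry
```

Real Redis offers several policies (and uses an approximated LRU by sampling keys rather than tracking exact order), but the principle is the same.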
Persistence
To avoid data loss on restart, the cache periodically writes its full state to disk using Redis Database Backup (RDB). Additionally, an Append‑Only File (AOF) logs every write operation, allowing reconstruction of most data after a crash. AOF files are compacted regularly to keep size manageable.
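For reference, these are standard redis.conf directives controlling both mechanisms; the values shown are common defaults used for illustration, not tuning advice:

```
# RDB: snapshot if >=1 key changed in 900s, >=10 in 300s, >=10000 in 60s
save 900 1
save 300 10
save 60 10000

# AOF: log every write command; fsync to disk once per second
appendonly yes
appendfsync everysec

# Compact (rewrite) the AOF once it doubles in size past 64 MB
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
```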
Simplified Network Protocol
Instead of HTTP, the cache service communicates over raw TCP with a minimal command set (e.g., SET key value, GET key). The official redis-cli tool speaks this protocol, and client libraries for most languages implement it as well.
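On the wire, commands are encoded in RESP (the REdis Serialization Protocol): an array of bulk strings, each prefixed by its byte length. A tiny encoder sketch (assuming ASCII-only arguments, so character length equals byte length):

```python
def encode_resp(*parts):
    """Encode a command as a RESP array of bulk strings.

    "*N" gives the number of arguments; each argument is sent as
    "$<byte-length>" followed by the raw bytes, all CRLF-terminated.
    """
    out = [f"*{len(parts)}\r\n"]
    for part in parts:
        out.append(f"${len(part)}\r\n{part}\r\n")
    return "".join(out).encode()
```

Sending these bytes to a Redis server over a plain TCP socket (port 6379 by default) is exactly what redis-cli and client libraries do; the simplicity of the framing is part of why clients exist for nearly every language.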
What Is Redis?
Redis (Remote Dictionary Server) is a high‑performance, in‑memory data store that acts as a remote dictionary. It accelerates MySQL by caching queries and offers extensions such as RedisJSON, RediSearch, RedisGraph, and RedisTimeSeries for advanced use cases.
Summary
Redis is essentially a remote dictionary service where all core read/write logic runs in a single thread, eliminating concurrency problems.
It supports multiple data types, expiration policies, and eviction strategies, exposing a simple TCP‑based protocol.
Persistence is provided via RDB snapshots and AOF logs, ensuring data survives service restarts.
Rich extensions like RediSearch and RedisJSON enable advanced functionalities comparable to dedicated databases.
Future topics will cover Redis high‑availability setups such as master‑slave replication, Sentinel, and clustering.