Redis Interview Topics: Persistence, Caching Issues, Data Types, Cluster Architecture, and More
This article provides a comprehensive overview of Redis interview questions, covering persistence mechanisms, cache avalanche and penetration, hot vs. cold data, differences from Memcached, single‑threaded performance, data structures, internal architecture, expiration policies, clustering options, distributed locks, and transaction handling.
Redis Persistence Mechanism
Redis is an in‑memory database that supports persistence by synchronizing data to disk files, allowing data recovery after a restart.
Snapshot implementation: Redis forks a child process; through copy-on-write the child shares the parent's memory pages, writes the dataset to a temporary file, and then atomically replaces the previous snapshot file.
RDB: the default snapshot persistence; writes binary dump files (dump.rdb) at configurable save intervals.
AOF: appends every write command to a log file, similar to the MySQL binlog; on restart, Redis replays the log to rebuild the dataset.
When both are enabled, Redis prefers AOF for recovery.
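As a rough sketch, the two mechanisms might be configured like this in redis.conf (the values shown are illustrative defaults, not recommendations):

```conf
# RDB: snapshot if at least 1 key changed in 900 s,
# 10 keys in 300 s, or 10000 keys in 60 s
save 900 1
save 300 10
save 60 10000
dbfilename dump.rdb

# AOF: log every write, fsync once per second
# (a common durability/latency trade-off)
appendonly yes
appendfsync everysec
```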
Cache Avalanche, Penetration, Warm‑up, Update, and Degradation
1. Cache Avalanche
Occurs when many cached keys expire simultaneously, causing a sudden surge of database requests that can overwhelm the DB.
Solutions include using locks or queues to throttle requests and staggering expiration times.
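Staggering expiration times is often done by adding random jitter to a base TTL. A minimal sketch (the function name and the 20% spread are assumptions, not a Redis API):

```python
import random

BASE_TTL = 3600  # base expiry of one hour, illustrative

def ttl_with_jitter(base=BASE_TTL, spread=0.2):
    """Return a TTL randomized by +/- spread so keys written together
    do not all expire in the same instant (avalanche mitigation)."""
    return int(base * (1 + random.uniform(-spread, spread)))

ttl = ttl_with_jitter()
assert 0.8 * BASE_TTL <= ttl <= 1.2 * BASE_TTL
```

Each key written in the same batch then expires at a slightly different moment, spreading the reload over a window instead of a single spike.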
2. Cache Penetration
When a request queries a non‑existent key, it bypasses the cache and hits the database each time.
Common mitigations are Bloom filters to filter out impossible keys, or caching empty results for a short TTL (e.g., five minutes).
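The "cache the empty result" mitigation can be sketched as follows, with a plain dict standing in for Redis (the sentinel object, TTL values, and function names are all illustrative assumptions):

```python
import time

SENTINEL = object()   # marker meaning "key known not to exist"
NULL_TTL = 300        # cache misses for five minutes, as suggested above
cache = {}            # key -> (value, expires_at)

def db_lookup(key):
    """Stand-in for a database query; returns None for missing rows."""
    return None

def get(key):
    entry = cache.get(key)
    if entry is not None and entry[1] > time.time():
        # Cache hit: a cached SENTINEL means "known miss", so skip the DB.
        return None if entry[0] is SENTINEL else entry[0]
    value = db_lookup(key)
    ttl = NULL_TTL if value is None else 3600
    cache[key] = (SENTINEL if value is None else value, time.time() + ttl)
    return value

print(get("ghost"))   # None, and the miss is now cached
```

After the first lookup, repeated requests for the same non-existent key are absorbed by the cache for the duration of the short TTL instead of hitting the database.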
3. Cache Warm‑up
Load essential data into the cache proactively, either at system startup or afterwards via manual refresh pages or scheduled refresh jobs.
4. Cache Update
Two strategies: scheduled expiration cleanup or lazy update on request when the cached data is stale.
5. Cache Degradation
When the system is under heavy load or a service fails, degrade gracefully by returning default values or disabling non‑critical cache reads.
Degradation levels can be configured based on log severity (info, warning, error, critical).
Hot Data vs. Cold Data
Hot data is frequently accessed and benefits from caching; cold data is rarely accessed and may be evicted quickly.
For caching to pay off, data should be read at least twice before it expires; otherwise writing it to the cache costs more than it saves.
Differences Between Memcached and Redis
Memcached stores data only in memory and does not persist; Redis can persist data to disk.
Redis supports richer data types (list, set, sorted set, hash) while Memcached only stores strings.
Redis provides built‑in replication, higher value size limits (up to 512 MB), and generally better performance.
Why Redis Is Single‑Threaded and Fast
Pure in‑memory operations.
Single thread eliminates context switches.
Uses non‑blocking I/O multiplexing (select/epoll/kqueue).
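The single-threaded multiplexing loop can be illustrated in miniature with Python's selectors module and an in-process socket pair (this is a toy sketch of the pattern, not Redis's actual event loop):

```python
import selectors
import socket

sel = selectors.DefaultSelector()
a, b = socket.socketpair()      # two connected in-process sockets
sel.register(b, selectors.EVENT_READ)

a.sendall(b"PING")
# One loop iteration: block until a socket is readable, then dispatch
# its event to a handler; Redis's file-event loop works the same way.
for key, _ in sel.select(timeout=1):
    data = key.fileobj.recv(16)
    if data == b"PING":
        key.fileobj.sendall(b"PONG")

reply = a.recv(16)
print(reply)  # b'PONG'
a.close(); b.close(); sel.close()
```

One thread can serve many connections this way because it only touches sockets that are already ready, never blocking on any single client.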
Redis Data Types and Typical Use Cases
String : simple key/value, often used for counters.
Hash : stores structured objects; useful for session data.
List : ordered collection; can implement simple queues or pagination.
Set : unordered unique values; ideal for deduplication and set operations.
Sorted Set : elements with scores; perfect for leaderboards and top‑N queries.
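To make the leaderboard use case concrete, here is a dict-based sketch of sorted-set semantics (in Redis this would be ZINCRBY and ZREVRANGE; the Python function names are illustrative stand-ins):

```python
scores = {}   # member -> score, mimicking a sorted set's score map

def zincrby(member, delta):
    """Increment a member's score, creating it at 0 if absent."""
    scores[member] = scores.get(member, 0) + delta

def top_n(n):
    """Return the n highest-scoring members, best first."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:n]

zincrby("alice", 50)
zincrby("bob", 30)
zincrby("alice", 10)
print(top_n(2))   # [('alice', 60), ('bob', 30)]
```

Redis keeps the members permanently ordered by score (via a skiplist), so the top-N query is cheap even for large sets, unlike the full sort done here.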
Redis Internal Structures
dict: hash table used for the main key-value mapping.
sds: simple dynamic string; binary-safe and stores its own length.
skiplist: high-performance ordered list backing large sorted sets.
quicklist: linked list of ziplist nodes for efficient list storage.
ziplist: compact sequential encoding for small lists or hashes.
Expiration Strategies and Memory Eviction
Redis uses a combination of periodic (every 100 ms) and lazy deletion to reclaim expired keys.
If memory pressure remains, Redis applies the eviction policy configured via maxmemory-policy:
volatile-lru: evicts least-recently-used keys that have an expiration set.
volatile-ttl: evicts keys closest to expiration.
volatile-random: evicts random keys that have an expiration set.
allkeys-lru: evicts least-recently-used keys regardless of expiration.
allkeys-random: evicts random keys.
noeviction: disables eviction; writes fail when memory is full.
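The allkeys-lru policy can be sketched with an OrderedDict serving as the recency list (for simplicity this toy treats maxmemory as a key count; the names MAXKEYS, set_key, and get_key are assumptions):

```python
from collections import OrderedDict

MAXKEYS = 3          # stand-in for maxmemory, counted in keys
store = OrderedDict()   # oldest entries first, newest last

def set_key(k, v):
    if k in store:
        store.move_to_end(k)
    store[k] = v
    if len(store) > MAXKEYS:
        store.popitem(last=False)     # evict the least recently used key

def get_key(k):
    if k in store:
        store.move_to_end(k)          # touching a key refreshes its recency
        return store[k]
    return None

for k in "abc":
    set_key(k, k.upper())
get_key("a")           # 'a' becomes most recently used
set_key("d", "D")      # evicts 'b', the least recently used
print(list(store))     # ['c', 'a', 'd']
```

Real Redis approximates LRU by sampling a few keys per eviction rather than maintaining an exact recency list, trading accuracy for memory and speed.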
Why Redis Is Single‑Threaded (FAQ)
Because Redis operations are memory‑bound, CPU is not the bottleneck; a single thread simplifies design and avoids concurrency issues.
Redis Cluster Solutions
twemproxy: a proxy that shards with consistent hashing; it does not automatically rebalance when nodes are added or removed.
codis: similar to twemproxy but supports data migration when scaling.
Redis Cluster (3.0+): native clustering that partitions keys across 16384 hash slots, with built-in replication.
Multi‑Machine Deployment and Data Consistency
Typical master‑slave replication with read‑write separation; a master handles writes and propagates to slaves.
Common Performance Issues and Solutions
Avoid persistence on the master (RDB/AOF) to reduce load.
Enable AOF on a slave for reliable backups.
Keep master and slaves in the same LAN for low latency.
Limit the number of slaves attached to a heavily loaded master.
Prefer linear replication chains (master ← slave1 ← slave2 …) for stability.
Redis Thread Model
Redis uses an event‑driven architecture: a file‑event dispatcher monitors sockets, and an I/O multiplexing layer (select/epoll/kqueue) delivers events to handlers sequentially.
Atomicity of Redis Operations
Individual commands are atomic because Redis executes them one at a time on a single thread; MULTI/EXEC transactions additionally guarantee that a queued batch runs without other commands interleaving, though Redis does not roll back commands that fail inside the batch.
Redis Transactions
Implemented via MULTI, EXEC, DISCARD, and WATCH. Commands are queued after MULTI and executed atomically on EXEC. WATCH provides optimistic locking.
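The WATCH pattern is optimistic check-and-set: re-verify the watched value at EXEC time and retry the whole transaction if it changed. An in-memory sketch of that control flow (a dict stands in for Redis; in real code the compare-and-set step is performed atomically by the server, as the comments note):

```python
store = {"balance": 100}

def transfer_out(amount, max_retries=5):
    for _ in range(max_retries):
        watched = store["balance"]        # WATCH balance, then read it
        if watched < amount:
            return False                  # business check before queuing
        new_value = watched - amount      # computation queued after MULTI
        # EXEC: in Redis the server aborts atomically if the watched key
        # changed; this Python check-then-set is only an illustration.
        if store["balance"] == watched:
            store["balance"] = new_value
            return True
        # Watched value changed: loop and retry the transaction.
    return False

assert transfer_out(40)
print(store["balance"])   # 60
```

On a real deployment the retry loop is what distinguishes optimistic locking from a blocking lock: no one waits, but a losing writer must redo its work.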
Distributed Lock with Redis
Use SETNX to create a lock key (in modern Redis, SET key value NX PX sets the key and its expiration in one atomic command); release with DEL. To avoid deadlocks when a holder crashes, always give the lock an expiration, and verify the lock still holds your token before deleting it; the older pattern combined SETNX with GETSET to renew expired locks safely.
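The acquire/release protocol can be sketched like this, with a dict plus expiry timestamps standing in for Redis (key names and function names are illustrative; in real Redis the token check plus DEL should run atomically, typically via a Lua script):

```python
import time
import uuid

locks = {}   # key -> (owner_token, expires_at)

def acquire(key, ttl=10.0):
    """SET key token NX PX semantics: succeed only if absent or expired."""
    now = time.time()
    entry = locks.get(key)
    if entry is None or entry[1] <= now:
        token = uuid.uuid4().hex
        locks[key] = (token, now + ttl)   # expiry prevents deadlock
        return token
    return None

def release(key, token):
    """Delete the lock only if we still own it (token still matches)."""
    entry = locks.get(key)
    if entry is not None and entry[0] == token:
        del locks[key]
        return True
    return False   # expired and re-acquired by someone else: do not delete

t = acquire("job:42")
assert t is not None
assert acquire("job:42") is None    # a second caller is blocked
assert release("job:42", t)
```

The token check on release matters: without it, a client whose lock expired could delete a lock that another client has since acquired.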
Thank you for reading, hope this helps :) Source: https://blog.csdn.net/Butterfly_resting
