
Analyzing Redis Latency Issues and How to Diagnose Them

This article explains common causes of Redis latency spikes—including slow commands, large keys, concentrated expirations, memory limits, fork overhead, CPU binding, AOF settings, swap usage, and network saturation—and provides step‑by‑step diagnostic commands and practical mitigation techniques.

Architect's Tech Stack

Redis is an in‑memory database with extremely high QPS, but users often encounter sudden latency spikes; understanding Redis internals and proper operation is essential for effective troubleshooting.

1. Using the slowlog – Set a threshold (e.g., 5 ms) and limit the log length, then query recent entries:

# Command execution over 5 ms will be logged
CONFIG SET slowlog-log-slower-than 5000
# Keep only the latest 1000 entries
CONFIG SET slowlog-max-len 1000

After configuration, retrieve the last five slowlog records:

127.0.0.1:6379> SLOWLOG get 5
1) 1) (integer) 32693   # slowlog entry ID
   2) (integer) 1593763337  # Unix timestamp when the command ran
   3) (integer) 5299   # execution time (µs)
   4) 1) "LRANGE"   # command and arguments
      2) "user_list_2000"
      3) "0"
      4) "-1"
2) 1) (integer) 32692
   ...

If your workload frequently runs O(n) commands such as SORT, SUNION, or ZUNIONSTORE, or feeds large data sets to these commands, they can cause noticeable latency.
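
The raw SLOWLOG reply shown above can be decoded in a client script. A minimal sketch, assuming the entry layout from the sample output (the helper name and sample values are our own; redis-py may return the arguments as bytes):

```python
from datetime import datetime, timezone

def parse_slowlog_entry(entry):
    """Decode one raw SLOWLOG GET entry: (id, timestamp, duration in µs, argv)."""
    log_id, ts, duration_us, argv = entry[:4]
    return {
        "id": log_id,
        "time": datetime.fromtimestamp(ts, tz=timezone.utc),
        "duration_ms": duration_us / 1000,
        "command": " ".join(a.decode() if isinstance(a, bytes) else a for a in argv),
    }

# Entry shaped like the reply above
entry = [32693, 1593763337, 5299, ["LRANGE", "user_list_2000", "0", "-1"]]
record = parse_slowlog_entry(entry)
print(record["duration_ms"], record["command"])
```

Feeding each entry from `SLOWLOG GET` through a helper like this makes it easy to sort by duration or group by command name.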

2. Large keys – Detect big keys with the built‑in scanner:

redis-cli -h $host -p $port --bigkeys -i 0.01

The scanner iterates over all keys with SCAN and measures each one's size using STRLEN, LLEN, HLEN, SCARD, or ZCARD. When scanning a live instance, throttle the scan with -i (seconds to sleep between SCAN batches) to avoid QPS spikes.
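
The same idea can be scripted. A minimal sketch using redis-py-style calls (`scan_iter`, `type`, and the per-type size commands exist as redis-py methods; the size threshold and sleep interval are illustrative):

```python
import time

# Size probe per value type, mirroring what --bigkeys uses
SIZE_CMD = {
    "string": "strlen",
    "list": "llen",
    "hash": "hlen",
    "set": "scard",
    "zset": "zcard",
}

def find_big_keys(client, threshold, sleep=0.01):
    """Walk the keyspace with SCAN and report keys whose element count
    (or string length) exceeds `threshold`. `sleep` throttles the scan,
    like redis-cli --bigkeys -i."""
    big = []
    for key in client.scan_iter(count=100):
        cmd = SIZE_CMD.get(client.type(key))
        if cmd is None:
            continue
        size = getattr(client, cmd)(key)
        if size > threshold:
            big.append((key, size))
        time.sleep(sleep)  # rate-limit so the scan doesn't hog the server
    return big
```

Because the client is passed in, the probes can be pointed at a replica instead of the primary, which keeps the scan load off the serving instance entirely.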

3. Concentrated expirations – A burst of keys expiring at the same moment triggers Redis's active expiration cycle, which runs in the main thread and may block requests for up to 25 ms per cycle. Search your code for EXPIREAT or PEXPIREAT calls and add jitter to expiration times:

# Randomly expire within 5 minutes after the scheduled time
redis.expireat(key, expire_time + random.randint(0, 300))

Also watch the expired_keys counter in INFO stats; a sudden jump there often coincides with a latency spike.
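
A runnable form of the jitter idea, assuming a 5-minute window (the helper name and window size are our own choices):

```python
import random

def jittered_expire_at(base_ts, jitter=300):
    """Return an expiration timestamp spread uniformly over `jitter`
    seconds after the scheduled time, so keys written in one batch
    do not all expire in the same expiration cycle."""
    return base_ts + random.randint(0, jitter)

# With redis-py this would be used as:
#   r.expireat(key, jittered_expire_at(expire_time))
```

The window should be sized relative to how many keys share a deadline: a few hundred keys need little jitter, while millions written in one batch may warrant spreading expirations over hours.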

4. Memory limit reached – When maxmemory is hit, Redis must evict keys before accepting new writes. The eviction policy (e.g., allkeys-lru, volatile-lru, allkeys-random) determines the overhead; avoiding large keys and choosing a cheaper policy such as random eviction can reduce latency.
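
Two settings worth checking here (the lazy-free option exists since Redis 4; whether it suits your durability and memory profile is workload-dependent):

```
# See which eviction policy is active
CONFIG GET maxmemory-policy
# Free evicted keys asynchronously so eviction doesn't block the main thread
CONFIG SET lazyfree-lazy-eviction yes
```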

5. Fork overhead – RDB/AOF persistence and full‑sync operations fork a child process. Forking copies page tables, which can be costly for large instances and may block the main thread. Check info for latest_fork_usec to see fork duration.
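
latest_fork_usec comes back as one line of the INFO stats text. A small helper to pull it out (the sample payload is illustrative; on a live instance you would pass in the output of `redis-cli INFO stats`):

```python
def parse_info(text):
    """Parse Redis INFO output (key:value lines) into a dict."""
    info = {}
    for line in text.splitlines():
        if ":" in line and not line.startswith("#"):
            k, _, v = line.partition(":")
            info[k] = v.strip()
    return info

sample = "# Stats\r\ntotal_forks:87\r\nlatest_fork_usec:58231\r\n"
fork_ms = int(parse_info(sample)["latest_fork_usec"]) / 1000
print(f"last fork took {fork_ms:.1f} ms")
```

As a rough rule, fork time grows with instance memory; values approaching a second on a large instance are a sign to shrink the instance or move persistence to a replica.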

6. CPU binding – Binding Redis to specific CPUs can cause the forked persistence process to compete for the same cores, increasing latency; avoid CPU pinning when using persistence.
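
If you do want core isolation, Redis 6+ exposes it in redis.conf directly, so the main thread and the persistence children can be given disjoint cores instead of taskset-pinning the whole process (core numbers below are illustrative):

```
# redis.conf (Redis 6+)
server_cpulist 0-3
bgsave_cpulist 4-5
aof_rewrite_cpulist 4-5
```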

7. AOF configuration – Choose appendfsync everysec for a good balance of durability and performance; appendfsync always incurs high I/O latency on every write, while appendfsync no leaves flushing to the OS and risks losing data on a crash.
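
A typical low-latency AOF setup looks like this (whether skipping fsync during rewrites is acceptable depends on your durability requirements):

```
# redis.conf
appendonly yes
appendfsync everysec
# Don't fsync in the main process while an AOF rewrite is running,
# trading a little durability for lower write latency
no-appendfsync-on-rewrite yes
```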

8. Swap usage – If the host runs out of RAM and starts swapping, Redis latency can rise to seconds. Monitor memory and swap, and restart or failover instances to reclaim RAM.
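
On Linux, per-process swap shows up as Swap: lines in /proc/&lt;pid&gt;/smaps, so you can total exactly how much of the Redis process is swapped out. A small helper (the sample text is illustrative; point it at the real smaps file of your redis-server PID):

```python
def swapped_kb(smaps_text):
    """Sum the Swap: entries of a /proc/<pid>/smaps dump, in kB."""
    total = 0
    for line in smaps_text.splitlines():
        if line.startswith("Swap:"):
            total += int(line.split()[1])
    return total

# e.g. swapped_kb(open(f"/proc/{redis_pid}/smaps").read())  # redis_pid: your PID
sample = "Size: 4 kB\nSwap: 0 kB\nSize: 2048 kB\nSwap: 512 kB\n"
print(swapped_kb(sample))
```

Anything beyond a few MB swapped is a problem for an in-memory store; at that point freeing host RAM matters more than any Redis-side tuning.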

9. Network saturation – High NIC load causes packet loss and increased RTT, directly affecting Redis latency. Identify the offending instance, and consider scaling out or increasing bandwidth.

Conclusion – Redis performance depends on careful command selection, memory management, persistence tuning, and system‑level monitoring (CPU, memory, swap, network). Understanding these factors enables developers and DBAs to keep latency low and maintain stable service.

Tags: memory management, Redis, Performance Tuning, latency, Database Operations, slowlog
Written by

Architect's Tech Stack

Java backend, microservices, distributed systems, containerized programming, and more.
