
Is Redis Really Slowing Down? A Comprehensive Diagnosis and Optimization Guide

This article explains how to determine whether Redis is truly the source of rising latency, covers benchmarking methods, and walks through common causes, including network problems, high-complexity commands, big keys, concentrated expirations, memory limits, fork overhead, AOF configuration, swap usage, and fragmentation, with practical troubleshooting and optimization steps for each.

政采云技术

Before concluding that Redis has become slower, you must first verify whether the latency increase originates from the Redis layer. If your API response time grows, start by tracing the internal service flow and measuring the latency of each external dependency.

When the Redis call itself is identified as the bottleneck, the cause usually falls into one of two categories:

Network issues (poor link quality, packet loss, etc.).

Redis‑specific problems.

This guide focuses on the second category.

Benchmarking Redis performance

Use redis-cli with the --intrinsic-latency option to measure the baseline latency of the host itself (scheduling, kernel, and hypervisor delays) over a given number of seconds:

$ redis-cli -h 127.0.0.1 -p 6379 --intrinsic-latency 60
Max latency so far: 1 microseconds.
... 
1428669267 total runs (avg latency: 0.0420 microseconds / 42.00 nanoseconds per run).
Worst run took 1429x longer than the average latency.

Another useful command shows latency history sampled every second:

$ redis-cli -h 127.0.0.1 -p 6379 --latency-history -i 1
min: 0, max: 1, avg: 0.13 (100 samples) -- 1.01 seconds range
...

To decide if Redis is truly slower, compare the latency of the instance under investigation with the baseline latency of a normal Redis instance on identical hardware. If the observed latency exceeds the baseline by more than two‑fold, the instance can be considered slow.
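The two-fold rule of thumb can be expressed as a small check. A minimal Python sketch; the function name and threshold parameter are illustrative, not part of Redis:

```python
def is_instance_slow(observed_avg_ms: float, baseline_avg_ms: float,
                     factor: float = 2.0) -> bool:
    """Flag an instance as slow when its average latency exceeds the
    baseline measured on identical hardware by more than `factor`."""
    return observed_avg_ms > baseline_avg_ms * factor

# Baseline instance averages 0.13 ms; the suspect instance averages 0.35 ms.
print(is_instance_slow(0.35, 0.13))  # True
```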

1. Slow‑log analysis

Redis provides a MySQL‑like slow‑log. Set the threshold (e.g., 5 ms) and keep the most recent 500 entries:

# Record commands slower than 5 ms
CONFIG SET slowlog-log-slower-than 5000
# Keep only the latest 500 entries
CONFIG SET slowlog-max-len 500

Query the recent slow logs:

127.0.0.1:6379> SLOWLOG get 5
1) 1) (integer) 32693   # log id
   2) (integer) 1593763337 # timestamp
   3) (integer) 5299      # execution time (µs)
   4) 1) "LRANGE"
      2) "user_list:2000"
      3) "0"
      4) "-1"
...

Slow‑log helps pinpoint which commands are taking unusually long.
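Each SLOWLOG GET entry is a nested array: log id, Unix timestamp, execution time in microseconds, and the command with its arguments (newer Redis versions append client address and name). A hypothetical helper that renders the four fields shown above:

```python
from datetime import datetime, timezone

def format_slowlog_entry(entry):
    """Render one SLOWLOG GET entry (id, timestamp, microseconds, args)
    as a single human-readable line."""
    log_id, ts, micros, args = entry
    when = datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()
    return f"#{log_id} {when} {micros / 1000:.1f} ms: {' '.join(args)}"

entry = (32693, 1593763337, 5299, ["LRANGE", "user_list:2000", "0", "-1"])
print(format_slowlog_entry(entry))
```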

2. High‑complexity commands

Commands with O(N) or higher complexity (e.g., SORT, SUNION, ZUNIONSTORE), or those operating on a very large N, can keep the single-threaded Redis main loop busy on CPU, forcing all other requests to queue behind them. Mitigation:

Avoid such commands; perform aggregation on the client side.

If unavoidable, keep N ≤ 300 and fetch data in small chunks.
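For example, instead of LRANGE key 0 -1 on a large list, compute inclusive index windows and issue one LRANGE per window. A minimal sketch; the chunk_ranges helper is illustrative:

```python
def chunk_ranges(total: int, chunk: int = 300):
    """Yield (start, stop) index pairs covering a list of `total` items
    in chunks of at most `chunk` elements (inclusive stops, as LRANGE expects)."""
    for start in range(0, total, chunk):
        yield start, min(start + chunk - 1, total - 1)

# Instead of LRANGE key 0 -1 on a 1000-element list:
print(list(chunk_ranges(1000, 300)))
# [(0, 299), (300, 599), (600, 899), (900, 999)]
```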

3. Big keys

Large values make memory allocation and release expensive. Detect big keys with:

$ redis-cli -h 127.0.0.1 -p 6379 --bigkeys -i 0.01
... 
-------- summary -------
Sampled 829675 keys in the keyspace!
Biggest string found 'key:291880' has 10 bytes
Biggest list found 'mylist:004' has 40 items
...

Recommendations:

Avoid writing big keys.

On Redis ≥ 4.0 use UNLINK instead of DEL to free memory asynchronously.

On Redis ≥ 6.0, set lazyfree-lazy-user-del yes so that DEL itself frees memory asynchronously.
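Prevention works better than cleanup: one oversized hash can be split into many small hashes by mapping each field name onto a fixed number of buckets. A sketch of the key-mapping step (the bucketed_key helper and bucket count are assumptions for illustration, not a Redis feature):

```python
import zlib

def bucketed_key(base: str, field: str, buckets: int = 100) -> str:
    """Map a field of one oversized hash to one of `buckets` smaller hashes
    by CRC of the field name (deterministic, so reads find the same bucket)."""
    return f"{base}:{zlib.crc32(field.encode()) % buckets}"

# HSET user_profile uid:42 ... becomes HSET user_profile:<bucket> uid:42 ...
print(bucketed_key("user_profile", "uid:42"))
```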

4. Concentrated expirations

When many keys expire at the same moment, Redis’s active expiration task (run in the main thread) can block client requests, especially if the expired keys are big. Mitigation:

Stagger expiration times randomly.

Enable lazy‑free eviction on Redis ≥ 4.0.
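Staggering can be as simple as adding random jitter when the TTL is set, so a batch of keys written together does not expire together. A minimal sketch; the function name and 5-minute window are illustrative:

```python
import random

def ttl_with_jitter(base_ttl: int, max_jitter: int = 300) -> int:
    """Spread expirations by adding up to `max_jitter` random seconds
    to the base TTL before calling EXPIRE/SET ... EX."""
    return base_ttl + random.randint(0, max_jitter)

# Keys that would all have expired at base_ttl now expire over a 5-minute window.
print(ttl_with_jitter(3600))
```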

5. Memory limit and eviction policies

When maxmemory is reached, Redis evicts keys according to the configured policy (e.g., allkeys-lru, volatile-lru, noeviction). Eviction runs in the main thread before a write command is processed, so evicting many keys, or big ones, at the limit adds latency to every write. Choose a policy that matches your workload.
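To make the policies concrete, here is a toy model of allkeys-lru behavior. Note that real Redis uses approximate LRU based on sampling (maxmemory-samples), not an exact recency order; this sketch only illustrates the eviction idea:

```python
from collections import OrderedDict

class TinyLRU:
    """Conceptual model of allkeys-lru: at capacity, evict the least
    recently used key. Not how Redis implements it internally."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)   # touching a key makes it most recent
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used key

    def get(self, key):
        if key in self.data:
            self.data.move_to_end(key)
            return self.data[key]
        return None

cache = TinyLRU(2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")        # touch "a" so "b" becomes least recently used
cache.set("c", 3)     # exceeds capacity: "b" is evicted
print(sorted(cache.data))  # ['a', 'c']
```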

6. Fork overhead (RDB/AOF rewrite)

RDB snapshots and AOF rewrites spawn a child process via fork(). On a large instance, copying the parent's page tables during fork() can block the main thread for hundreds of milliseconds or more, and copy-on-write afterward duplicates every memory page touched by writes. Monitor with INFO → latest_fork_usec. Reduce impact by:

Keeping instance size < 10 GB.

Running heavy persistence tasks on replicas or during off‑peak hours.

Avoiding virtual machines for large instances.

Increasing repl-backlog-size to reduce full sync frequency.
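latest_fork_usec can be tracked by parsing the INFO output your monitoring already collects. A minimal sketch against a sample dump (the field name is real Redis output; the helper itself is illustrative):

```python
def latest_fork_ms(info_text: str) -> float:
    """Extract latest_fork_usec from an INFO dump and convert to milliseconds."""
    for line in info_text.splitlines():
        if line.startswith("latest_fork_usec:"):
            return int(line.split(":", 1)[1]) / 1000
    raise KeyError("latest_fork_usec not found in INFO output")

sample = "rdb_last_bgsave_status:ok\r\nlatest_fork_usec:59477\r\n"
print(latest_fork_ms(sample))  # 59.477 ms for the last fork
```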

7. AOF configuration

Three fsync policies:

appendfsync always – safest but highest latency.

appendfsync no – best performance, risk of data loss.

appendfsync everysec – balanced choice.

Even with everysec, a busy disk can block the background fsync thread, which in turn blocks the main thread when it issues the next write system call. To avoid this, you can tell Redis to skip fsync while a rewrite is in progress, at the cost of losing data buffered during the rewrite if the server crashes:

# Disable AOF fsync during rewrite
no-appendfsync-on-rewrite yes

8. Swap usage

If Redis starts swapping, latency spikes to hundreds of milliseconds. Check swap with:

# Find the Redis PID
ps aux | grep redis-server
# Inspect swap usage for that PID
cat /proc/<redis-pid>/smaps | egrep '^(Swap|Size)'

Solutions: add RAM, free memory, or restart the instance after a controlled failover.

9. Memory fragmentation

Fragmentation ratio = used_memory_rss / used_memory (both reported by INFO memory). A ratio > 1.5 means more than 50 % overhead. Mitigate by upgrading to Redis ≥ 4.0 and enabling automatic defragmentation (activedefrag yes), or by restarting older versions during a maintenance window.
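The ratio is simple arithmetic over two INFO memory fields; a sketch with illustrative numbers:

```python
def fragmentation_ratio(used_memory_rss: int, used_memory: int) -> float:
    """mem_fragmentation_ratio as reported by INFO memory:
    resident set size divided by logical memory usage."""
    return used_memory_rss / used_memory

# 6 GB resident for 4 GB of logical data: ratio 1.5, i.e. 50% overhead.
ratio = fragmentation_ratio(6 * 1024**3, 4 * 1024**3)
print(ratio)  # 1.5
```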

10. Network bandwidth saturation

When a Redis instance consumes the whole network bandwidth, packet loss and latency increase. Monitor traffic and scale out or migrate heavy instances.

Conclusion

The article enumerates typical Redis latency sources—network, command complexity, big keys, expiration bursts, memory limits, fork overhead, AOF settings, swap, fragmentation, and bandwidth—and provides concrete commands and configuration tweaks to diagnose and resolve each issue.

Tags: Monitoring, Performance Optimization, Database, Redis, Latency, Troubleshooting
Written by

政采云技术

ZCY Technology Team (Zero), based in Hangzhou, is a growth-oriented team passionate about technology and craftsmanship. With around 500 members, we are building comprehensive engineering, project management, and talent development systems. We are committed to innovation and creating a cloud service ecosystem for government and enterprise procurement. We look forward to your joining us.
