Mastering Redis Big Key Issues: Detection, Analysis, and Efficient Deletion
This article explains how to identify oversized Redis keys, analyze their impact with built‑in commands and third‑party tools, and safely remove them using UNLINK and lazy‑free configurations to prevent performance bottlenecks.
Hello everyone. Work has kept me busy and this article was delayed, but today I'm sharing our team's write-up, "Redis Big Key Issue Handling Summary", packed with practical tips.
Redis Big Key Issue Handling Summary
What you will gain from this article:
How large is considered a big key?
Alibaba Cloud's Redis best-practice guide recommends keeping individual values under 10 KB. Oversized values can cause data skew and hot keys, and can saturate an instance's bandwidth or CPU, so they should be avoided at the design stage.
Thus, a value larger than 10 KB is a reasonable working threshold for judging a big key.
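As a design-stage guard, the threshold above can be enforced at write time. The sketch below assumes a redis-py-style client with a set(key, value) method; the function name and interface are illustrative, not part of Redis itself.

```python
# Design-stage guard: refuse to write values above the ~10 KB reference
# threshold instead of silently creating a big key. The client interface
# (a set(key, value) method, redis-py style) is an assumption.

BIG_VALUE_THRESHOLD = 10 * 1024  # bytes

def safe_set(client, key, value):
    """SET that rejects oversized values instead of creating a big key."""
    data = value.encode("utf-8") if isinstance(value, str) else value
    if len(data) > BIG_VALUE_THRESHOLD:
        raise ValueError(
            f"refusing to write {len(data)} bytes to {key!r}: "
            "split, compress, or restructure values above 10 KB"
        )
    return client.set(key, data)
```

Rejecting the write loudly is usually preferable to logging and proceeding, since a big key created once can degrade the instance for every subsequent reader.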
How to discover big keys
1. For string type, use the --bigkeys option.
The --bigkeys option scans all keys and reports the largest key of each common data type (string, list, set, zset, hash). For strings it reports the value size in bytes; for the other types it reports element counts, which may not directly reflect byte size.
<code>root@vm1:~# redis-cli -h 127.0.0.1 -p 6379 -a "password" --bigkeys</code>
--bigkeys uses SCAN-based iteration, so it does not block Redis, but on instances with many keys it can take a long time to finish; running it against a replica is recommended.
Sample output shows the instance has 52,992 keys occupying 1,470,203 bytes. The largest string key uses 157,374 bytes, the largest list key has 153,462 elements, etc.
If the top key for a type is under 10 KB, it indicates no big key for that type, but to list all keys larger than 10 KB you need third‑party tools that scan the RDB file.
2. For non‑string types, two common methods are:
2.1 Use the MEMORY USAGE command (Redis 4.0+).
<code>root@vm1:~# redis-cli -h 127.0.0.1 -p 6379 -a "password"
127.0.0.1:6379> MEMORY USAGE keyname1
(integer) 157481
127.0.0.1:6379> MEMORY USAGE keyname2
(integer) 312583
</code>
The MEMORY USAGE command returns an estimate (by default it samples a subset of elements for aggregate types), so the figure may differ slightly from the --bigkeys summary.
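Combining SCAN-style iteration with MEMORY USAGE gives a way to list every key above a threshold, rather than only the per-type maximum that --bigkeys reports. The sketch below assumes a redis-py-style client exposing scan_iter() and memory_usage(); it is an illustration, not a command from the article.

```python
# Sketch: enumerate all keys whose estimated memory exceeds a threshold
# by iterating with SCAN and sizing each key with MEMORY USAGE (4.0+).
# The client interface (scan_iter, memory_usage) follows redis-py and is
# an assumption; run it against a replica to keep load off the primary.

def find_big_keys(client, threshold_bytes=10 * 1024, scan_count=100):
    """Yield (key, estimated_bytes) for keys larger than the threshold."""
    for key in client.scan_iter(count=scan_count):
        size = client.memory_usage(key)  # estimate; None if the key vanished mid-scan
        if size is not None and size > threshold_bytes:
            yield key, size

# Against a live server (requires the redis-py package):
#   import redis
#   r = redis.Redis(host="127.0.0.1", port=6379, password="password")
#   for key, size in find_big_keys(r):
#       print(key, size)
```

Note that this issues one MEMORY USAGE call per key, so on instances with millions of keys the RDB-based approach in the next section is cheaper.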
2.2 Use the third-party Rdbtools utility (Python) to parse Redis snapshot (RDB) files and list keys larger than a threshold.
<code># Install
git clone https://github.com/sripathikrishnan/redis-rdb-tools
cd redis-rdb-tools && sudo python setup.py install
</code>
Run the tool to export all keys larger than 10 KB (10,240 bytes) to a CSV:
<code># rdb -c memory --bytes 10240 -f live_redis.csv dump.rdb</code>
How to gracefully delete big keys
Before Redis 4.0, deleting a large key with DEL could block the server, so scripts were needed to drain big lists or hashes incrementally, or to batch-delete keys.
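The pre-4.0 workaround mentioned above can be sketched for a big hash: drain it in small batches with HSCAN + HDEL so that no single call holds the main thread for long, then delete the emptied key (lists are handled analogously with repeated LTRIM). The helper name is illustrative and the client is assumed to expose redis-py-style hscan/hdel/delete methods.

```python
# Sketch of pre-4.0 incremental deletion for a big hash: each HSCAN/HDEL
# round touches only batch_size fields, keeping individual calls short.
# Assumes a redis-py-style client (hscan returns a (cursor, {field: value})
# pair and signals completion with cursor 0).

def delete_big_hash(client, key, batch_size=100):
    """Incrementally empty a large hash, then delete the (now small) key."""
    cursor = 0
    while True:
        cursor, fields = client.hscan(key, cursor, count=batch_size)
        if fields:
            client.hdel(key, *fields.keys())
        if cursor == 0:  # full iteration finished
            break
    client.delete(key)  # the emptied key is cheap to delete
```

The same pattern applies to sets (SSCAN + SREM) and sorted sets (ZSCAN + ZREM); from 4.0 onward a single UNLINK replaces all of this.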
Since Redis 4.0, the lazy‑free feature allows asynchronous deletion without blocking.
1. Proactive deletion
<code>127.0.0.1:6379> UNLINK mykey</code>
The UNLINK command is the asynchronous counterpart of DEL, implemented via lazy-free: the key is removed from the keyspace immediately, while the actual memory reclamation happens in a background thread, so the main thread is not blocked even when deleting many big keys.
2. Passive deletion
Redis can automatically evict or expire big keys. In versions prior to 4.0 this could block the main thread. From 4.0 onward, lazy‑free can be enabled for expiration, eviction, and server‑side deletion:
<code>lazyfree-lazy-expire yes      # free expired keys asynchronously
lazyfree-lazy-eviction yes    # free evicted keys asynchronously when maxmemory is reached
lazyfree-lazy-server-del yes  # free keys implicitly deleted by commands (e.g. RENAME) asynchronously
</code>
Summary
Use Redis 4.0 or newer.
Employ --bigkeys, MEMORY USAGE, and Rdbtools to identify big keys.
For big keys, use UNLINK for proactive deletion and enable the lazy-free options for passive deletion.
Fundamentally, avoid creating big keys in the first place.
Ops Development Stories
Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.