
Strategies for Splitting Large Keys and Values in Redis to Reduce Memory Usage and Latency

This article explains how to handle Redis scenarios with oversized keys, massive collections, billions of keys, and large bitmaps or Bloom filters by partitioning data into multiple keys or hashes, using bucketization, multi‑get, and careful bitmap splitting to improve performance and lower memory consumption.

Code Ape Tech Column

In many business scenarios, Redis may hold a very large value under a single key, millions of elements in a single hash/set/zset/list, or billions of keys overall. All of these inflate memory consumption and degrade response time, because Redis executes commands on a single thread: one slow operation on a big key blocks every other client.

For a single large key, the value can be split into several key-value pairs and retrieved in one round trip with MGET, or stored as fields of a hash so that only the needed parts are accessed with HGET/HMGET and updated with HSET/HMSET.
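The chunk-and-reassemble flow can be sketched as follows. This is a minimal illustration, not the article's exact code: the key names (`key:0`, `key:1`, `key:count`), the 8-byte chunk size, and the plain dict standing in for Redis are all assumptions made to keep the example self-contained and runnable.

```python
CHUNK_SIZE = 8  # illustrative; in practice something like 64 KB per chunk

store = {}  # a plain dict stands in for Redis string keys


def set_large(key, value):
    """Store each fixed-size chunk under its own key (SET key:i chunk)."""
    chunks = [value[i:i + CHUNK_SIZE] for i in range(0, len(value), CHUNK_SIZE)]
    store[key + ":count"] = str(len(chunks))
    for i, chunk in enumerate(chunks):
        store[f"{key}:{i}"] = chunk


def get_large(key):
    """Reassemble via one multi-get (MGET key:0 key:1 ... in real Redis)."""
    count = int(store[key + ":count"])
    return "".join(store[f"{key}:{i}"] for i in range(count))


set_large("bigvalue", "a" * 20)
restored = get_large("bigvalue")
```

With redis-py the dict operations would become `r.set`/`r.mget` calls; the chunking logic stays the same.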

When a hash, set, zset or list contains millions of elements, a common technique is to pre‑define a fixed number of buckets (e.g., 10,000) and compute fieldHash % 10000 to decide which hash key stores the element, thereby distributing the load across many keys and reducing per‑key I/O.
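The bucket-selection step can be sketched like this. The choice of crc32 as the field hash and the `base:bucket` key naming are assumptions for illustration; any deterministic hash works as long as reads and writes use the same one.

```python
import zlib

BUCKETS = 10_000  # pre-defined bucket count from the article


def bucket_key(base, field):
    """fieldHash % 10000 decides which small hash stores this field."""
    return f"{base}:{zlib.crc32(field.encode()) % BUCKETS}"


# In real Redis: HSET bucket_key(...) field value, then HGET to read it back.
k1 = bucket_key("product:tags", "sku-12345")
k2 = bucket_key("product:tags", "sku-12345")
```

Because the hash is deterministic, every operation on a given field lands on the same small key, so no index of bucket assignments is needed.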

If a cluster holds hundreds of millions of keys, consolidating related keys into a single hash can dramatically cut memory usage. For example, the three keys user.zhangsan-id = 123, user.zhangsan-age = 18, and user.zhangsan-country = china become one hash user.zhangsan with fields id, age, and country.
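The consolidation amounts to splitting each key name into a hash name and a field. A minimal sketch of that transformation, using the article's example data and a dict in place of Redis:

```python
# Per-attribute string keys, as in the article's example.
string_keys = {
    "user.zhangsan-id": "123",
    "user.zhangsan-age": "18",
    "user.zhangsan-country": "china",
}

hashes = {}  # stands in for Redis hashes: name -> {field: value}
for key, value in string_keys.items():
    name, field = key.rsplit("-", 1)            # "user.zhangsan", "id"
    hashes.setdefault(name, {})[field] = value  # HSET name field value
```

Small hashes are stored by Redis in a compact encoding (a ziplist/listpack below the configured thresholds), which is where the memory savings come from.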

When keys have no natural correlation, estimate the total key count (e.g., 200 million) and allocate a fixed number of bucket hashes (e.g., 2 million). Each original key is mapped to a bucket via hash(key) % 2000000, then stored with HSET(bucketKey, field, value) and retrieved with HGET(bucketKey, field); with these numbers each bucket holds about 100 fields on average.
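A runnable sketch of this bucketing scheme, under the same assumptions as before (crc32 as the hash, a dict in place of Redis, hypothetical key names):

```python
import zlib

BUCKET_COUNT = 2_000_000  # ~200 million keys -> ~100 fields per bucket


def bucket_of(key):
    """hash(key) % 2000000 chooses the bucket hash for this key."""
    return f"bucket:{zlib.crc32(key.encode()) % BUCKET_COUNT}"


buckets = {}  # stands in for the Redis bucket hashes


def hset(key, value):  # HSET(bucketKey, field, value) in real Redis
    buckets.setdefault(bucket_of(key), {})[key] = value


def hget(key):         # HGET(bucketKey, field) in real Redis
    return buckets.get(bucket_of(key), {}).get(key)


hset("order:42", "pending")
```

The original key is kept as the field name inside the bucket, so lookups need no extra mapping table.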

Large bitmaps or Bloom filters (e.g., a 512 MB bitmap) should be split into many smaller bitmaps (e.g., 1024 pieces of 512 KB). Each piece is stored under a separate Redis key, and a consistent hash assigns incoming keys to one of these pieces, ensuring a request touches only a single key and avoiding cross‑node lookups.
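Routing a member to one of the small bitmaps can be sketched as below. Note this sketch uses a simple modulo in place of the consistent hash the article mentions, and the key naming and offset derivation are illustrative assumptions:

```python
import zlib

PIECES = 1024                 # number of small bitmaps
PIECE_BITS = 512 * 1024 * 8   # 512 KB per piece, in bits


def locate(member):
    """Map a member to (piece key, bit offset within that piece)."""
    h = zlib.crc32(member.encode())
    piece = h % PIECES                   # modulo stands in for the consistent hash
    offset = (h // PIECES) % PIECE_BITS  # offset derived independently of the piece
    return f"bitmap:{piece}", offset


key, off = locate("user:1001")
# In real Redis: SETBIT key off 1 / GETBIT key off -- one key per request.
```

Since both the piece and the offset are derived from the member itself, each lookup touches exactly one small key, which is what keeps requests on a single node.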

The false‑positive rate of a Bloom filter depends on the number of hash functions k , the number of elements n , and the bitmap size m ; splitting does not change the rate as long as the n/m ratio remains constant. It is recommended to keep k = 13 and each Bloom filter under 512 KB.
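This invariance is easy to check numerically with the standard approximation p ≈ (1 − e^(−kn/m))^k. The per-piece load n below is an illustrative assumption, not a figure from the article:

```python
from math import exp


def fp_rate(n, m_bits, k=13):
    """Standard Bloom-filter approximation: p ≈ (1 - e^(-k*n/m))^k."""
    return (1 - exp(-k * n / m_bits)) ** k


m = 512 * 1024 * 8   # one 512 KB piece, in bits
n = m // 32          # elements per piece (illustrative load)

whole = fp_rate(1024 * n, 1024 * m)  # the unsplit 512 MB filter
piece = fp_rate(n, m)                # one piece after splitting
```

Scaling n and m by the same factor leaves kn/m unchanged, so the two rates coincide.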

Note: when using hash modulo in languages whose hash codes can be negative (e.g., Java's hashCode), normalize the result (mask the sign bit or take the absolute value) before the modulo, and keep the number of fields per bucket at roughly 100-512 for good performance.
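One common normalization is masking the sign bit before the modulo, sketched here (Python's own % already returns a non-negative result for a positive modulus, so the mask matters mainly when porting this to Java or C):

```python
def safe_bucket(hash_code, buckets):
    """Mask the sign bit so a negative 32-bit hash code still maps
    to a valid bucket index in [0, buckets)."""
    return (hash_code & 0x7FFFFFFF) % buckets
```

This mirrors the `(h & Integer.MAX_VALUE) % buckets` idiom often seen in Java code.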

Finally, the author invites readers to like, share, and follow the "码猿技术专栏" (Code Ape Tech Column) public account for PDF versions of related Spring Cloud, Spring Boot, and MyBatis advanced articles.

Tags: Backend, Memory Optimization, Redis, bitmap, bloom filter, hash-bucket, key-splitting
Written by Code Ape Tech Column

Former Ant Group P8 engineer, pure technologist, sharing full-stack Java, job-interview, and career advice through this column. Site: java-family.cn
