
Handling Large Keys in Redis: Causes, Impacts, and Optimization Strategies

This article explains what constitutes a large Redis key, the performance problems it can cause, such as CPU spikes, client timeouts, and uneven memory distribution, and presents practical solutions including key cleanup, key splitting, compression, pipelining, and alternative storage options.

JD Retail Technology

Background: The system's cache CPU usage exceeded the 70% alarm threshold, prompting an investigation of large keys in Redis.

Definition: A large key is a key whose value occupies excessive memory, for example a single String key larger than 20KB under high OPS, a String key over 100KB, a collection whose total size exceeds 1MB, or a collection with more than 5,000 elements.
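These thresholds can be enforced proactively before a write ever reaches Redis. The sketch below is not from the original article; the class name and threshold constants are illustrative, mirroring the 100KB and 5,000-element limits stated above:

```java
import java.nio.charset.StandardCharsets;

public class BigKeyGuard {
    // Assumed thresholds, taken from the definition above; tune per system.
    static final int STRING_WARN_BYTES = 100 * 1024;   // 100KB for a String value
    static final int COLLECTION_WARN_ELEMENTS = 5000;  // element count for a collection

    // True when a String value would already count as a large key by byte size.
    public static boolean isLargeStringValue(String value) {
        return value != null
                && value.getBytes(StandardCharsets.UTF_8).length > STRING_WARN_BYTES;
    }

    // True when a Hash/Set/List/ZSet would exceed the element-count threshold.
    public static boolean isLargeCollection(int elementCount) {
        return elementCount > COLLECTION_WARN_ELEMENTS;
    }
}
```

A guard like this catches large keys at write time; for keys that already exist, tools such as redis-cli's --bigkeys scan serve the same diagnostic purpose.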

Impacts: Large keys can block clients until they time out, congest the network with oversized transfers, block Redis worker threads during deletion, and skew memory across cluster shards, potentially leading to OOM.

Solutions:

1. Historical keys: Identify keys that are no longer used, verify that persistent storage (e.g., MySQL, ES) holds the data, and delete the stale keys.

2. Excessive element count: For Set or Hash structures with more than 5,000 elements, split the data across multiple smaller keys.
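One simple way to perform such a split is to route each field deterministically to one of N sub-keys. The helper below is a sketch, not from the original article; the `baseKey + "_" + shard` naming scheme is an assumption:

```java
public class KeySharding {
    // Routes a field to one of shardCount sub-keys, e.g. "orderHash" -> "orderHash_3".
    // The same field always maps to the same shard, so reads stay O(1).
    public static String subKeyFor(String baseKey, String field, int shardCount) {
        // floorMod keeps the shard index non-negative even for negative hash codes.
        int shard = Math.floorMod(field.hashCode(), shardCount);
        return baseKey + "_" + shard;
    }
}
```

With this scheme, a Hash of 50,000 fields sharded 16 ways keeps each sub-key near 3,000 elements, under the 5,000-element threshold.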

3. Large object transformation: Decompose a big object into several key-value pairs, then use mGet/mSet or a pipeline to read the parts in a single round trip and assemble the full object.
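The decompose-and-reassemble step can be sketched without a live Redis instance. In this illustrative example (class and method names are assumptions, not from the article), `toEntries` produces the pairs that would be written with mSet, and `assemble` rebuilds the object from the values mGet would return in key order:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ObjectSplitter {
    // Flattens an object's fields into individual key-value pairs, e.g.
    // ("order", {id=1001}) -> {"order_id"="1001"}. These pairs go to mSet.
    public static Map<String, String> toEntries(String baseKey, Map<String, String> fields) {
        Map<String, String> entries = new LinkedHashMap<>();
        fields.forEach((f, v) -> entries.put(baseKey + "_" + f, v));
        return entries;
    }

    // Rebuilds the object from the keys passed to mGet and the values it
    // returned; mGet preserves the order of the requested keys.
    public static Map<String, String> assemble(String baseKey, List<String> keys, List<String> values) {
        Map<String, String> obj = new LinkedHashMap<>();
        for (int i = 0; i < keys.size(); i++) {
            obj.put(keys.get(i).substring(baseKey.length() + 1), values.get(i));
        }
        return obj;
    }
}
```

Each field then lives in its own small key, and the whole object is still fetched in one round trip.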

4. Compression: Store large values in compressed form using Deflater, GZIP, or zlib to reduce their size, while being mindful of the CPU overhead of compressing and decompressing.

5. Alternative storage: For extremely large data, consider using document stores such as Elasticsearch or MongoDB instead of Redis.

Code examples:

public String refreshHistoryData() {
    try {
        String key = "historyKey";
        Map<String, String> redisInfoMap = redisUtils.hGetAll(key);
        if (redisInfoMap.isEmpty()) {
            return "no data found in cache";
        }
        // Migrate each field of the oversized Hash into its own String key.
        for (Map.Entry<String, String> entry : redisInfoMap.entrySet()) {
            String redisVal = entry.getValue();
            String fieldKey = entry.getKey();
            String newDataRedisKey = "newDataKey" + fieldKey;
            redisUtils.set(newDataRedisKey, redisVal);
        }
        return "success";
    } catch (Exception e) {
        LOG.error("refreshHistoryData exception:", e);
    }
    return "failed";
}
public enum CacheKeyConstant {
    REDIS_ORDER_BASE_INFO("ORDER_BASE_INFO"),
    ORDER_SUB_INFO("ORDER_SUB_INFO"),
    ORDER_PRESALE_INFO("ORDER_PRESALE_INFO"),
    ORDER_INVOICE_INFO("ORDER_INVOICE_INFO"),
    ORDER_TRACK_INFO("ORDER_TRACK_INFO"),
    ORDER_PREMISE_INFO("ORDER_PREMISE_INFO"),
    ORDER_FEE_INFO("ORDER_FEE_INFO");

    public static final String COMMON_PREFIX = "XXX";

    private final String prefix;

    CacheKeyConstant(String prefix) {
        this.prefix = prefix;
    }

    // Builds the full cache key, optionally scoped by a sub-key.
    public String getPrefix(String subKey) {
        if (StringUtil.isNotEmpty(subKey)) {
            return COMMON_PREFIX + prefix + "_" + subKey;
        }
        return COMMON_PREFIX + prefix;
    }

    public String getPrefix() {
        return COMMON_PREFIX + prefix;
    }
}
public void mSetString(Map<String, String> mappings) {
    CallerInfo callerInfo = Ump.methodReg(UmpKeyConstants.REDIS.REDIS_STATUS_READ_MSET);
    try {
        redisClient.getClientInstance().mSetString(mappings);
    } catch (Exception e) {
        Ump.funcError(callerInfo);
    } finally {
        Ump.methodRegEnd(callerInfo);
    }
}
public List<String> mGet(List<String> queryKeys) {
    CallerInfo callerInfo = Ump.methodReg(UmpKeyConstants.REDIS.REDIS_STATUS_READ_MGET);
    try {
        return redisClient.getClientInstance().mGet(queryKeys.toArray(new String[0]));
    } catch (Exception e) {
        Ump.funcError(callerInfo);
    } finally {
        Ump.methodRegEnd(callerInfo);
    }
    // Fall back to an empty list sized to the request when the read failed.
    return new ArrayList<>(queryKeys.size());
}
public static byte[] compressToByteArray(String text) throws IOException {
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    Deflater deflater = new Deflater();
    // try-with-resources closes (and flushes) the stream; end() frees native zlib memory.
    try (DeflaterOutputStream deflaterOutputStream = new DeflaterOutputStream(outputStream, deflater)) {
        deflaterOutputStream.write(text.getBytes(StandardCharsets.UTF_8));
    } finally {
        deflater.end();
    }
    return outputStream.toByteArray();
}
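A decompression counterpart is not shown in the article; the sketch below pairs the same Deflater-based compression with an InflaterInputStream to restore the original String (class name is illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

public class DeflateCodec {
    // Same approach as compressToByteArray above, with an explicit charset.
    public static byte[] compressToByteArray(String text) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (DeflaterOutputStream dos = new DeflaterOutputStream(out, new Deflater())) {
            dos.write(text.getBytes(StandardCharsets.UTF_8));
        }
        return out.toByteArray();
    }

    // Inverse operation: inflate the stored bytes back into the original String.
    public static String decompress(byte[] data) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (InflaterInputStream in = new InflaterInputStream(new ByteArrayInputStream(data))) {
            byte[] buf = new byte[1024];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
        return out.toString(StandardCharsets.UTF_8.name());
    }
}
```

Repetitive values (JSON with recurring field names, for instance) typically shrink substantially, at the cost of the CPU overhead noted in solution 4.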

Best practices: split large values, avoid unnecessary data structures, regularly clean expired keys, compress large objects when appropriate, and consider alternative storage for massive datasets.

Conclusion: Proper identification and handling of large keys can improve Redis stability and overall system performance.

Tags: Backend, Java, Redis, Cache Optimization, large-key
Written by

JD Retail Technology

Official platform of JD Retail Technology, delivering insightful R&D news and a deep look into the lives and work of technologists.
