
Designing Cache for Relational List Data with Redis

This article explains how to design a Redis cache for relational list data such as user timelines or news feeds. It covers fixed-length caching, cache/DB consistency, and resource utilization using ZSET structures, and walks through handling additions, deletions, and queries with strategies such as preloading, retry mechanisms, and asynchronous rebuilding.

Sohu Tech Products

In many daily business scenarios, we deal with relational data such as a blogger's post list or a news channel's article list. Compared with caching single items, designing a cache for relational lists requires attention to three main points: fixed length, consistency between cache and DB, and efficient resource utilization.

A typical cache structure is a Redis ZSET whose key is the list's unique ID, whose members are item IDs, and whose scores are millisecond timestamps. When the list is shorter than the fixed cache length, a tail marker (a sentinel member with score -1) is added to indicate that the cache holds the entire list.
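As a concrete illustration, the structure above can be sketched in Python, modeling one list's ZSET as a plain dict of member → score. The sentinel name `__tail__`, the fixed length of 100, and the function name are assumptions for illustration, not from the original article:

```python
# Model of one list's ZSET: member (item ID) -> score (millisecond timestamp).
TAIL_MARKER = "__tail__"  # hypothetical sentinel member; its score is always -1
FIXED_LEN = 100           # assumed fixed cache length

def build_list_cache(rows):
    """Build the cache from DB rows of (item_id, ts_ms), newest first."""
    zset = {item_id: ts_ms for item_id, ts_ms in rows[:FIXED_LEN]}
    if len(rows) < FIXED_LEN:
        # The whole list fits in the cache: add the tail marker with score -1.
        zset[TAIL_MARKER] = -1
    return zset
```

With a real Redis client, each entry maps to `ZADD key ts_ms item_id`, and the sentinel to `ZADD key -1 __tail__`; because -1 sorts below every timestamp, the marker always sits at the tail of the ZSET.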

When adding a new relational item, the process is: write to the DB first (the cache is built on the next read), extend the expiration time on each zadd, employ a retry mechanism for cache writes, optionally preload hot lists, and optimize the truncation logic to avoid excessive write pressure.
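The write path can be sketched as follows, continuing the dict-based ZSET model. The retry count and the "truncate only when twice over the fixed length" threshold are assumptions chosen to illustrate the deferred-truncation idea:

```python
def add_item(db_rows, zset, item_id, ts_ms, fixed_len=100, retries=3):
    """DB-first write; the cache is updated best-effort with retries."""
    db_rows.insert(0, (item_id, ts_ms))        # 1. write the DB first
    if zset is None:                           # 2. no cache yet: it will be
        return True                            #    rebuilt on the next read
    for _ in range(retries):                   # 3. retry the cache write
        try:
            zset[item_id] = ts_ms              # ZADD equivalent (a real client
                                               # would also refresh the TTL here)
            # 4. truncate lazily: only when well past the fixed length, so we
            #    don't pay for a ZREMRANGEBYRANK on every single write
            if len(zset) > fixed_len * 2:
                keep = sorted(zset.items(), key=lambda kv: -kv[1])[:fixed_len]
                zset.clear()
                zset.update(keep)
            return True
        except Exception:
            continue
    return False  # cache write failed after retries; the TTL bounds staleness
```

Note that lazy truncation naturally drops the tail marker (score -1) once the list outgrows the fixed length, which is correct: an over-length cache no longer holds the full list.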

When deleting a relational item, a zrem that removes nothing may indicate stale cache data, so the DB must also be cleaned; however, if the last member of the zset is the tail marker, the cache already contains the full list, so a cache miss means the item does not exist and no DB deletion is necessary.
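That branching can be captured in a short sketch under the same in-memory model (the sentinel name and function signature are illustrative assumptions):

```python
def delete_item(db_rows, zset, item_id, tail="__tail__"):
    """Delete from the cache and, when necessary, from the DB as well."""
    removed_from_cache = zset.pop(item_id, None) is not None  # ZREM equivalent
    if not removed_from_cache and tail in zset:
        # Tail marker present: the cache holds the full list, so a miss means
        # the item does not exist -- skip the DB deletion entirely.
        return False
    # Item was cached, or the cache is partial/stale: clean the DB too.
    db_rows[:] = [row for row in db_rows if row[0] != item_id]
    return True
```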

Querying a list involves more scenarios. The typical flow fetches n+1 items from cache to determine if the tail marker is present, handles cache miss or out‑of‑range cases, and may trigger asynchronous cache rebuilding with optional distributed locks. During rebuilding, an extra DB query fetches incremental data to avoid inconsistency caused by concurrent writes.
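The read path above can be sketched as one function. The return convention (a tuple of IDs plus a source tag) is an illustrative assumption; the incremental re-query during rebuilding and the distributed lock are noted in comments rather than implemented:

```python
def query_list(zset, db_fetch, n, tail="__tail__"):
    """Read n items; fetch n+1 from cache to check for the tail marker."""
    if zset is None:
        # Cache miss: serve this request from the DB and trigger an async
        # rebuild elsewhere, ideally behind a distributed lock so only one
        # worker rebuilds. The rebuilder should re-query the DB for items
        # written during the rebuild to avoid concurrent-write inconsistency.
        return db_fetch(n), "rebuild"
    # ZREVRANGE 0..n equivalent: take the n+1 highest-scored members.
    top = sorted(zset.items(), key=lambda kv: -kv[1])[:n + 1]
    ids = [member for member, _ in top if member != tail]
    if len(ids) >= n:
        return ids[:n], "cache"   # cache fully covers the requested page
    if tail in zset:
        return ids, "cache"       # list is short but complete in cache
    # Requested range exceeds the cached window: fall back to the DB.
    return db_fetch(n), "db"
```

Fetching n+1 rather than n is the key trick: the extra slot reveals whether the tail marker (or more data) follows the requested page, distinguishing "complete short list" from "out of cached range".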

Resource utilization is improved by assigning different cache length tiers based on access hotness (e.g., 100, 200, 500 entries for normal users, power users, and VIPs). This balances memory usage while reducing DB load for frequently accessed lists.
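The tiering policy is simple to express; the traffic thresholds below are assumptions (the article specifies only the tier lengths 100/200/500):

```python
def cache_len_for(daily_reads):
    """Pick a cache length tier from access hotness; thresholds are assumed."""
    if daily_reads >= 10_000:
        return 500   # VIP / very hot lists
    if daily_reads >= 1_000:
        return 200   # power users
    return 100       # normal users
```

Hot lists get longer caches so more page requests are absorbed before falling through to the DB, while the long tail of cold lists stays cheap in memory.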

The article concludes with a thought question: how to implement a generic code framework that can handle different business data without duplicating logic?

Backend · Performance · Redis · ZSET · Cache Design · Relational Data
Written by

Sohu Tech Products

A knowledge-sharing platform for Sohu's technology products. As a leading Chinese internet brand with media, video, search, and gaming services and over 700 million users, Sohu continuously drives tech innovation and practice. We’ll share practical insights and tech news here.
