Tendis Hybrid Storage Architecture: Design, Features, and Implementation Details
This article introduces the pain points of using Redis as a cache, presents Tencent's Tendis solution with its three product variants, and provides an in‑depth explanation of the hybrid storage version’s overall architecture, component functions, version control, cold‑hot data interaction, eviction policies, and scaling mechanisms.
Redis is widely used as a high‑performance cache, but it has several pain points: high memory cost, weak data reliability, cumbersome cache‑versus‑database consistency, the memory that must be reserved for fork during persistence, and data loss under asynchronous replication.
Tendis, jointly developed by Tencent CROS DBA and Cloud Database teams, offers three product forms—Cache, Hybrid Storage, and Storage—each 100% compatible with the Redis protocol and Redis 4.0 data models.
The Hybrid Storage version integrates a cache layer (Redis Cluster), a storage layer (Tendis Storage built on RocksDB), a Proxy for request routing and monitoring, and a stateless synchronization layer (Redis‑sync) that imports RDB and AOF data into the storage layer.
Component Overview
Proxy: Routes client requests to the appropriate shard, collects monitoring data, and can disable high‑risk commands.
Cache Layer (Redis Cluster): Based on community Redis 4.0; adds version control, automatic cold‑data eviction, and a Cuckoo Filter representing the full key set, and supports efficient RDB+AOF scaling.
Storage Layer (Tendis Cluster): Uses RocksDB as the storage engine; provides horizontal scalability, automatic failover, and full compatibility with Redis commands.
Sync Layer (Redis‑sync): Simulates Redis slave behavior; receives RDB/AOF streams and imports them into Tendis in parallel, while ensuring correct ordering, handling special commands, and supporting fault‑tolerant resume.
Version Control
Each key and AOF entry receives a monotonically increasing 64‑bit version stored in the redisObject structure, enabling incremental RDB generation and idempotent AOF execution.
typedef struct redisObject {
    unsigned type:4;
    unsigned encoding:4;
    unsigned lru:LRU_BITS;
    int refcount;
    unsigned flag:4;                           /* OBJ_FLAG_... */
    unsigned reserved:4;
    unsigned counter:8;                        /* for cold-data-cache-policy */
    unsigned long long revision:REVISION_BITS; /* value version */
    void *ptr;
} robj;

Cold‑Hot Data Interaction
When a key is missing from the cache, the system first checks the Cuckoo Filter; if the key may exist in storage, the cache issues dumpx dbid key withttl to fetch the value (with its TTL) from the storage layer and then writes it back with RESTOREEX dbid key ttl value. This exchange runs over a dedicated connection pool so that cold reads do not become a bottleneck for normal traffic.
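The read path above can be illustrated with a minimal, self-contained sketch. The cache, filter, and storage layers are reduced to toy single-entry stand-ins; all names here (lookup, filter_may_contain, storage_dumpx) are illustrative, not real Tendis APIs.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Toy single-entry stand-ins for the cache, filter, and storage layers. */
static const char *cached_key, *cached_val;

static bool filter_may_contain(const char *k) { return strcmp(k, "user:1") == 0; }
static bool storage_dumpx(const char *k, const char **v, long long *ttl) {
    if (strcmp(k, "user:1")) return false;
    *v = "alice"; *ttl = 3600;
    return true;
}

/* Read path: cache hit -> return; filter says absent -> stop;
   otherwise fetch from storage (dumpx) and restore into the cache
   (RESTOREEX), over a dedicated connection in the real system. */
const char *lookup(const char *key) {
    if (cached_key && strcmp(cached_key, key) == 0) return cached_val;
    if (!filter_may_contain(key)) return NULL;       /* definitely absent */
    const char *v; long long ttl;
    if (!storage_dumpx(key, &v, &ttl)) return NULL;  /* filter false positive */
    cached_key = key; cached_val = v;                /* warm the cache */
    return v;
}
```

Note that a filter miss short-circuits the whole flow: no storage round trip is made for keys that were never written, which is what prevents cache penetration.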
Key Cooling and Cuckoo Filter
To reduce memory usage, keys and values are evicted together from the cache (key cooling). The Cuckoo Filter, implemented as a Dynamic Cuckoo Filter, represents the full key set to prevent cache penetration while using minimal space.
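The idea can be shown with a minimal cuckoo filter in C: 4-way buckets, 8-bit fingerprints, and two candidate buckets per key. This is a simplified fixed-size sketch, not the Dynamic Cuckoo Filter Tendis actually uses, and the FNV-1a hash is only illustrative.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 1024   /* must be a power of two */
#define SLOTS    4
#define MAXKICKS 500

static uint8_t table[NBUCKETS][SLOTS];   /* 0 marks an empty slot */

static uint32_t hash32(const void *p, size_t n) {    /* FNV-1a */
    uint32_t h = 2166136261u;
    const uint8_t *s = p;
    while (n--) { h ^= *s++; h *= 16777619u; }
    return h;
}
static uint8_t fingerprint(const char *key) {
    uint8_t f = (uint8_t)(hash32(key, strlen(key)) >> 24);
    return f ? f : 1;                    /* never 0, since 0 means empty */
}
/* XOR trick: alt(alt(i, f), f) == i, so either bucket finds the other. */
static uint32_t alt_bucket(uint32_t i, uint8_t f) {
    return (i ^ hash32(&f, 1)) & (NBUCKETS - 1);
}
static bool bucket_insert(uint32_t i, uint8_t f) {
    for (int s = 0; s < SLOTS; s++)
        if (!table[i][s]) { table[i][s] = f; return true; }
    return false;
}
bool cf_add(const char *key) {
    uint8_t f = fingerprint(key);
    uint32_t i = hash32(key, strlen(key)) & (NBUCKETS - 1);
    if (bucket_insert(i, f) || bucket_insert(alt_bucket(i, f), f)) return true;
    for (int k = 0; k < MAXKICKS; k++) {   /* evict a victim and relocate it */
        int s = rand() % SLOTS;
        uint8_t tmp = table[i][s];
        table[i][s] = f; f = tmp;
        i = alt_bucket(i, f);
        if (bucket_insert(i, f)) return true;
    }
    return false;                          /* filter full */
}
bool cf_may_contain(const char *key) {
    uint8_t f = fingerprint(key);
    uint32_t i = hash32(key, strlen(key)) & (NBUCKETS - 1);
    uint32_t j = alt_bucket(i, f);
    for (int s = 0; s < SLOTS; s++)
        if (table[i][s] == f || table[j][s] == f) return true;
    return false;
}
```

A lookup probes at most two buckets, so a negative answer is O(1) and costs only a few bytes per key, which is why the filter can represent the full key set while the values themselves stay on disk.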
Intelligent Eviction / Loading Strategies
Two eviction policies are employed: maxmemory‑policy for immediate memory pressure and value‑eviction‑policy for periodic eviction of keys not accessed for N days. Loading policies ensure only frequently accessed keys are promoted to the cache.
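The second policy can be pictured as a periodic scan that drops entries idle for more than N days. The entry_t layout and evict_idle name below are hypothetical, chosen only to illustrate the rule; Tendis tracks access recency inside the redisObject header shown earlier.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <time.h>

#define SECONDS_PER_DAY 86400

/* Hypothetical in-memory view of a cached entry (not Tendis internals). */
typedef struct {
    const char *key;
    time_t      last_access;   /* last read/write time */
    bool        evicted;
} entry_t;

/* value-eviction-policy sketch: periodically evict every entry idle for
   more than max_idle_days; returns how many entries were evicted. */
size_t evict_idle(entry_t *tab, size_t n, time_t now, int max_idle_days) {
    size_t evicted = 0;
    for (size_t i = 0; i < n; i++) {
        if (tab[i].evicted) continue;
        double idle_days = difftime(now, tab[i].last_access) / SECONDS_PER_DAY;
        if (idle_days > max_idle_days) {
            tab[i].evicted = true;   /* key and value leave the cache together */
            evicted++;
        }
    }
    return evicted;
}
```

Because the data remains in the storage layer, eviction here only frees cache memory; a later access simply reloads the key through the cold-hot interaction path.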
RDB+AOF Scaling
Traditional Redis Cluster scaling suffers from non‑atomic slot importing/migrating states and inefficient key‑by‑key migration. Tendis instead implements a slot‑sync protocol: a new node is added, slots are synchronized via the cluster slotsync command, a full snapshot (RDB) is transferred followed by the incremental AOF stream, and a failover is performed once the target node has caught up.
Sync Layer Details
Slot‑level serialization with inter‑slot parallelism ensures correct ordering.
Serial‑parallel conversion handles special commands (e.g., FLUSHDB) by waiting for preceding commands to finish.
Periodic version persistence enables fault‑tolerant resume after crashes.
Dynamic slot‑to‑node mapping allows correct routing of client requests to the appropriate Tendis node.
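The first three points above can be sketched as a per-slot queue dispatcher: commands for the same slot are appended in order to one FIFO, independent slots can be drained by separate workers, and a barrier command waits for every queue to empty. The names and the djb2 hash are illustrative only; the real system shards by Redis slots.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define NSLOTS 16384
#define QCAP   128

/* One FIFO per slot: commands on the same slot keep their order, while
   different slots can be drained by independent workers in parallel. */
typedef struct { const char *cmds[QCAP]; int len; } slot_queue_t;
static slot_queue_t queues[NSLOTS];

/* Illustrative hash (djb2); the real system maps keys to Redis slots. */
static unsigned slot_of(const char *key) {
    unsigned h = 5381;
    while (*key) h = h * 33 + (unsigned char)*key++;
    return h % NSLOTS;
}

/* Append a command to its slot queue; per-slot order is preserved. */
int dispatch(const char *key, const char *cmd) {
    slot_queue_t *q = &queues[slot_of(key)];
    if (q->len == QCAP) return -1;   /* backpressure: queue is full */
    q->cmds[q->len++] = cmd;
    return 0;
}

/* Serial-parallel conversion: a barrier command such as FLUSHDB must
   wait until every slot queue has drained before it executes. */
bool all_drained(void) {
    for (int i = 0; i < NSLOTS; i++)
        if (queues[i].len) return false;
    return true;
}
```

Draining each queue serially guarantees per-key ordering, while the number of concurrently drained queues bounds the import parallelism.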
Storage Layer Features
Full Redis protocol compatibility.
Persistent storage with RocksDB, supporting petabyte‑scale data.
Decentralized architecture using gossip for node communication and hashtag‑based data distribution.
Horizontal scalability up to 1,000 nodes with transparent scaling for operators.
Automatic failover and master‑slave promotion.
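Hashtag-based distribution follows the standard Redis Cluster rule: if a key contains a non-empty {...} section, only that section is hashed, so related keys can be pinned to one slot. A sketch assuming the standard keyHashSlot rule with the CRC16 (XMODEM) variant Redis Cluster uses:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* CRC16-CCITT (XMODEM): poly 0x1021, init 0 -- the Redis Cluster variant. */
static uint16_t crc16(const char *buf, size_t len) {
    uint16_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)((unsigned char)buf[i]) << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Hashtag rule: hash only the first non-empty "{...}" body if present,
   otherwise hash the whole key. */
unsigned key_hash_slot(const char *key) {
    const char *open = strchr(key, '{');
    if (open) {
        const char *close = strchr(open + 1, '}');
        if (close && close > open + 1)   /* non-empty tag */
            return crc16(open + 1, (size_t)(close - open - 1)) % 16384;
    }
    return crc16(key, strlen(key)) % 16384;
}
```

With this rule, {user}.follow and {user}.fans hash to the same slot, so multi-key operations on them stay on a single node.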
For more details, refer to the original article and the open‑source project at Tencent/Tendis.