Engula: Redis‑Compatible In‑Memory Database Cutting Memory Use by 50%
Engula is a Redis-compatible, high-performance in-memory database that cuts memory usage by up to 50% through compression and metadata optimization, while incurring only about 10% performance overhead. This article details its architecture, testing methodology, and benchmark results.
1. Engula Overview
Engula, developed by Yunai (Beijing) Technology Development Co., is a high‑performance in‑memory database that is 100% compatible with Redis 7.2. It aims to dramatically lower memory costs while preserving the full Redis feature set and ecosystem.
2. Core Advantages
Full Redis 7.2 compatibility: supports all data types, commands, clustering, transactions, and scripting.
Memory savings of roughly 50%: achieved through a compressed storage structure and metadata optimizations.
Minimal performance overhead: only about 10% extra cost compared with native Redis.
Future Valkey support: planned support for Valkey 8.0+ and 9.0+.
3. Architecture Details
The system is organized into five layers, each optimized for its role:
Network Access Layer
Connection & Network – manages client connections and data transmission.
Command Processor – parses RESP protocol, schedules and executes commands.
ACL Controller – provides access control and permission checks.
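To make the command processor's role concrete, here is a minimal sketch of RESP2 parsing, the wire format every Redis-compatible server speaks. This is a generic illustration, not Engula's actual parser; the function name and its `(value, bytes_consumed)` return convention are invented for this example.

```python
def parse_resp(data: bytes):
    """Parse one RESP2 value from `data`; returns (value, bytes_consumed)."""
    prefix, rest = data[:1], data[1:]
    line, _, remainder = rest.partition(b"\r\n")
    consumed = 1 + len(line) + 2
    if prefix == b"+":                 # simple string, e.g. +OK\r\n
        return line.decode(), consumed
    if prefix == b"-":                 # error reply
        return Exception(line.decode()), consumed
    if prefix == b":":                 # integer, e.g. :42\r\n
        return int(line), consumed
    if prefix == b"$":                 # bulk string: $<len>\r\n<bytes>\r\n
        length = int(line)
        if length == -1:               # null bulk string
            return None, consumed
        return remainder[:length], consumed + length + 2
    if prefix == b"*":                 # array: *<n>\r\n followed by n values
        items, offset = [], consumed
        for _ in range(int(line)):
            item, used = parse_resp(data[offset:])
            items.append(item)
            offset += used
        return items, offset
    raise ValueError("unknown RESP prefix: %r" % prefix)
```

For instance, the client command `GET foo` arrives as the array `*2\r\n$3\r\nGET\r\n$3\r\nfoo\r\n`, which this parser turns into `[b"GET", b"foo"]`.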
Memory Database Engine
Modern hash tables and compact metadata improve CPU cache hit rates and reduce fragmentation.
Supports native Redis data types such as String, Hash, List, etc.
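The cache-locality point can be illustrated with an open-addressing hash table: entries live in one contiguous array, so a lookup probes adjacent memory instead of chasing per-bucket pointers. A minimal sketch only, under the assumption that this is the general technique meant; `CompactHashTable` is a hypothetical name, not Engula's engine.

```python
class CompactHashTable:
    """Open-addressing hash table: keys and values sit in one flat array,
    so probing walks contiguous memory (better cache locality than chaining)."""
    def __init__(self, capacity=8):
        self.slots = [None] * capacity   # each slot: None or (key, value)

    def _probe(self, key):
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)   # linear probing
        return i

    def put(self, key, value):
        # grow at 2/3 load factor so probe sequences stay short
        if sum(s is not None for s in self.slots) * 3 >= len(self.slots) * 2:
            old, self.slots = self.slots, [None] * (len(self.slots) * 2)
            for entry in old:
                if entry is not None:
                    self.slots[self._probe(entry[0])] = entry
        self.slots[self._probe(key)] = (key, value)

    def get(self, key, default=None):
        entry = self.slots[self._probe(key)]
        return entry[1] if entry is not None else default
```

Compact metadata works the same way at a smaller scale: the less per-entry bookkeeping each slot carries, the more entries fit in each CPU cache line.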
Stable Integration Layer
Bridges the engine with persistence and logging, offering consistent interfaces for data flow and recovery.
Compressed HybridBlock
Performs real‑time memory compression and includes an asynchronous background compression engine.
Provides a high‑performance memory management mechanism.
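The idea of an asynchronous background compression engine can be sketched as a store that serves hot values uncompressed while a background sweep compresses values that have gone cold. This is a toy illustration with hypothetical names and thresholds, not HybridBlock's internals.

```python
import threading
import time
import zlib

class CompressedStore:
    """Toy key-value store: reads decompress on demand; a background
    sweep compresses values with no recent access."""
    def __init__(self, cold_after=1.0):
        self.data = {}   # key -> (payload, is_compressed, last_access)
        self.lock = threading.Lock()
        self.cold_after = cold_after   # seconds of inactivity before compressing

    def put(self, key, value: bytes):
        with self.lock:
            self.data[key] = (value, False, time.monotonic())

    def get(self, key) -> bytes:
        with self.lock:
            payload, compressed, _ = self.data[key]
            value = zlib.decompress(payload) if compressed else payload
            # touching a key re-heats it, so hot data stays uncompressed
            self.data[key] = (value, False, time.monotonic())
            return value

    def compress_cold(self):
        """One sweep of the background engine; run this from a worker thread."""
        now = time.monotonic()
        with self.lock:
            for key, (payload, compressed, ts) in list(self.data.items()):
                if not compressed and now - ts > self.cold_after:
                    self.data[key] = (zlib.compress(payload), True, ts)
```

In a real engine the sweep runs off the request path (e.g. `threading.Thread(target=..., daemon=True)` calling `compress_cold` periodically), which is what keeps the overhead on foreground commands small.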
Persistence
Supports both RDB and AOF persistence.
Version 3.0 will introduce a private format that speeds up BGSAVE and RDB loading by 5–10×.
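In general Redis terms, AOF persistence appends every write command to a log and rebuilds the dataset by replaying that log at startup. A minimal in-memory sketch of the mechanism follows; it is not Engula's on-disk format, and the class and method names are invented.

```python
class AppendOnlyLog:
    """Minimal AOF-style persistence: each write is appended to a log,
    and state is rebuilt on restart by replaying the log in order."""
    def __init__(self):
        self.log = []   # in a real engine: an fsync'd append-only file

    @staticmethod
    def _apply(store, command, args):
        if command == "SET":
            store[args[0]] = args[1]
        elif command == "DEL":
            store.pop(args[0], None)
        else:
            raise ValueError("unsupported command: " + command)

    def execute(self, store, command, *args):
        self._apply(store, command, args)
        self.log.append((command, args))   # log only successful writes

    def replay(self):
        """Rebuild the dataset from the log, as a server does on restart."""
        store = {}
        for command, args in self.log:
            self._apply(store, command, args)
        return store
```

RDB, by contrast, snapshots the whole dataset at a point in time, which is why a faster private snapshot format can speed up BGSAVE and load so dramatically.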
Scalability & Availability
Fully compatible with Redis v7.2 replication, cluster, and sentinel modes.
Future work will optimize full‑sync speed.
Module System
Allows users to load or develop custom modules as plugins without altering the core engine.
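A module system of this kind typically exposes a registration hook so a plugin can add commands without the core dispatch loop changing. A generic sketch under that assumption; `CommandRegistry` and the module-loading convention are hypothetical, not Engula's actual module API.

```python
class CommandRegistry:
    """Core engine keeps a command table; modules register handlers
    into it at load time without modifying the dispatcher itself."""
    def __init__(self):
        self.handlers = {}

    def register(self, name, handler):
        self.handlers[name.upper()] = handler

    def dispatch(self, name, *args):
        handler = self.handlers.get(name.upper())
        if handler is None:
            return "ERR unknown command '%s'" % name
        return handler(*args)

# A "module" is just a callable that registers its commands at load time.
def load_echo_module(registry: CommandRegistry):
    registry.register("ECHO", lambda msg: msg)
    registry.register("PING", lambda: "PONG")
```

The core never needs to know which commands exist ahead of time; it only looks names up in the table, so modules can be added or swapped independently.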
4. Testing and Verification
Test environment: Redis 7.2.4 and Engula 2.1.2 (compatible with Redis/Valkey 7.2.4). Memory usage was measured with Engula's built-in comparison tool and validated against Redis's INFO output and the system top command.
Key memory‑saving results:
String type: 62.87% reduction.
Hash type: 49% reduction (key size compressed 54%, value size 61%).
Set type: 37.53% reduction (key size compressed 52%; Set has no value component).
Overall conclusion: String compression exceeds 60%, Hash approaches 50%, and Set achieves close to 40% savings.
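The reduction figures above follow the usual formula, savings = (baseline − optimized) / baseline. The helper below makes the arithmetic explicit; the byte counts in the example are illustrative only, not measurements from the test.

```python
def memory_reduction(baseline_bytes, optimized_bytes):
    """Percentage of memory saved relative to the baseline."""
    return 100.0 * (baseline_bytes - optimized_bytes) / baseline_bytes

# Illustrative numbers: a 62.87% String reduction means roughly 371 MB
# used where the baseline needed 1000 MB for the same dataset.
print(round(memory_reduction(1000, 371.3), 2))
```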
5. Performance Comparison
Benchmarks were run with redis-benchmark: 1 million requests, an average value size of 3 KB, at concurrency levels of 20, 50, and 100. At 20 concurrent connections, Engula's throughput is essentially on par with Redis, and on some commands it even outperforms the open-source implementation.
6. Conclusion
Engula targets large‑scale Redis deployment scenarios, offering a drop‑in compatible engine that reduces memory cost by about 50% through metadata optimization, compression, and modern hash tables. It maintains 100% Redis compatibility, provides a full suite of testing, migration, and evaluation tools, and is well‑suited for enterprise‑grade caching and storage upgrades.
This article has been distilled and summarized from source material and republished for learning and reference. If you believe it infringes your rights, please contact us and we will review it promptly.
Xiaolei Talks DB
Sharing daily database operations insights, from distributed databases to cloud migration. Author: Dai Xiaolei, with 10+ years of DB ops and development experience. Your support is appreciated.