
Benchmark Comparison of Redis 7.0 and Dragonfly Memory Cache Systems

This article presents a detailed performance benchmark of the new open‑source memory cache Dragonfly against Redis 7.0, describing test methodology, hardware setup, command configurations, and the resulting throughput and latency differences across various GET and SET workloads.


In mid‑2022, a former Google and Amazon engineer released Dragonfly, an open‑source in‑memory data cache written in C/C++ under the Business Source License, claiming it to be the fastest Redis‑compatible store, with higher throughput and lower memory usage than Redis.

Redis's co‑founder and CTO, together with Redis Labs architects, responded with an article titled "13 Years Later – Does Redis Need a New Architecture?" and published their own benchmark results.

Speed Comparison

Dragonfly's own benchmarks compared a single‑process Redis instance (one core) against a multi‑threaded Dragonfly instance using all available cores. To make the comparison fair, the Redis team instead tested Redis 7.0 as a 40‑shard cluster on an AWS c6gn.16xlarge instance, so that both systems could use the machine's full core count.
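The article does not spell out how the 40‑shard cluster was built. A minimal sketch of one way to do it is below: the script only prints the launch commands rather than executing them. The port range starting at 30001 matches the --port 30001 used in the memtier command later in the article; everything else (host IP, flags) is an assumption for illustration.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: emit the commands for a 40-shard Redis cluster.
# Ports 30001..30040; host IP taken from the article's memtier command.
HOST=10.3.1.88
BASE_PORT=30001
SHARDS=40

NODES=""
for ((i = 0; i < SHARDS; i++)); do
  port=$((BASE_PORT + i))
  # One single-threaded redis-server process per shard, cluster mode on,
  # persistence disabled since this is a pure cache benchmark.
  echo "redis-server --port ${port} --cluster-enabled yes --appendonly no --save ''"
  NODES="${NODES} ${HOST}:${port}"
done

# Once all shards are up, assign hash slots (0 replicas: cache only).
echo "redis-cli --cluster create${NODES} --cluster-replicas 0"
```

Each shard remains a single-threaded redis-server process; scaling comes from running one process per core, which is exactly the architectural contrast with Dragonfly's single multi-threaded process.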

Results showed Redis achieving 18‑40% higher throughput than Dragonfly under the same conditions.

Architectural Differences

The article discusses Redis’s multi‑process, multi‑node design versus Dragonfly’s multi‑threaded approach, emphasizing horizontal scalability, resource utilization, and NUMA considerations.

Test Setup

Both the client and server VMs were AWS c6gn.16xlarge instances (aarch64, 64 cores, 126 GB RAM). memtier_benchmark was used for load generation.

Benchmark Commands

Single GET (latency < 1 ms):

Redis (2×, i.e. the command run as two parallel memtier_benchmark instances): memtier_benchmark --ratio 0:1 -t 24 -c 1 --test-time 180 --distinct-client-seed -d 256 --cluster-mode -s 10.3.1.88 --port 30001 --key-maximum 1000000 --hide-histogram

Dragonfly: memtier_benchmark --ratio 0:1 -t 55 -c 30 -n 200000 --distinct-client-seed -d 256 -s 10.3.1.6 --key-maximum 1000000 --hide-histogram
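In memtier_benchmark, -t is the number of threads and -c the number of clients per thread, so their product gives the total connection count driven at each server. Reading the flags above (this arithmetic is mine, not the article's):

```shell
# Total client connections = threads (-t) x clients per thread (-c).
# Redis cluster run: -t 24 -c 1, per memtier instance.
redis_conns=$((24 * 1))
# Dragonfly run: -t 55 -c 30.
dragonfly_conns=$((55 * 30))
echo "Redis: ${redis_conns} connections per memtier instance"
echo "Dragonfly: ${dragonfly_conns} connections"
```

Dragonfly was thus driven with far more concurrent connections, which fits its multi-threaded design: a single heavily threaded process absorbs many connections, while each single-threaded Redis shard is driven by comparatively few.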

Similar commands were used for the 30‑pipeline GET and SET workloads, with the appropriate --pipeline 30 flag added.

Results Summary

Single GET: Redis 4.43 M ops/s (0.383 ms latency) vs Dragonfly ~3.8 M ops/s (0.390 ms).

30‑GET: Redis 22.9 M ops/s (2.239 ms) vs Dragonfly ~15.9 M ops/s (3.99 ms).

Single SET: Redis 4.74 M ops/s (0.391 ms) vs Dragonfly ~4.0 M ops/s (0.500 ms).

30‑SET: Redis 19.85 M ops/s (2.879 ms) vs Dragonfly ~14.0 M ops/s (4.203 ms).
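Taking the figures above at face value, Redis's throughput advantage per workload can be recomputed; because the Dragonfly numbers are rounded ("~"), these percentages are approximate and differ slightly from the 18–40% range quoted earlier:

```shell
# Recompute Redis's relative throughput advantage from the results above.
# Dragonfly's figures are rounded, so the output is approximate.
margins=$(awk 'BEGIN {
  printf "Single GET: %.1f%%\n", (4.43  / 3.8  - 1) * 100;
  printf "30-GET: %.1f%%\n",     (22.9  / 15.9 - 1) * 100;
  printf "Single SET: %.1f%%\n", (4.74  / 4.0  - 1) * 100;
  printf "30-SET: %.1f%%\n",     (19.85 / 14.0 - 1) * 100;
}')
echo "$margins"
```

The pipelined workloads show the widest gap, which is consistent with the latency figures: Dragonfly's p50 latency under 30-deep pipelining was roughly 1.3–1.8× that of the Redis cluster.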

The analysis concludes that while Dragonfly introduces interesting ideas, Redis’s architecture remains robust, offering superior performance, scalability, and flexibility for real‑time in‑memory data workloads.

Tags: Performance, Redis, AWS, Benchmark, databases, memory cache, dragonfly
Written by Architect's Tech Stack

Java backend, microservices, distributed systems, containerized programming, and more.