Redis No Longer Dominates: Discover the Best Python Caching Alternatives

A benchmark of Redis, Memcached, DragonflyDB, and Cashews using the same FastAPI workload reveals that Redis falls behind on latency, throughput, and memory efficiency, while DragonflyDB and Cashews offer superior performance and developer experience for Python caching.

Code Mala Tang

Test Environment

The author evaluated four caching tools with an identical FastAPI service that required response caching for expensive API endpoints, session storage for authenticated users, and rate limiting on public endpoints. Four metrics were recorded under 500 concurrent requests: throughput (ops/sec), average latency per operation, memory usage per 100 K cached items, and time required to get the cache running from scratch.
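As a rough illustration of how numbers like these can be collected (a minimal sketch, not the author's actual harness), an asyncio load generator only needs a semaphore to cap in-flight operations and a per-operation timer:

```python
import asyncio
import time

async def bench(op, concurrency: int = 500, total: int = 5000):
    """Run `op` (a zero-arg coroutine function) `total` times with at most
    `concurrency` in flight; return (ops/sec, average latency in seconds).

    Illustrative harness only; the function name and defaults are
    assumptions, not taken from the article.
    """
    latencies = []
    gate = asyncio.Semaphore(concurrency)  # cap concurrent operations

    async def one():
        async with gate:
            t0 = time.perf_counter()
            await op()
            latencies.append(time.perf_counter() - t0)

    start = time.perf_counter()
    await asyncio.gather(*(one() for _ in range(total)))
    wall = time.perf_counter() - start
    return total / wall, sum(latencies) / len(latencies)
```

Pointing `op` at a cache round-trip (get, miss, set) against each backend yields the throughput and latency columns reported below.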

Redis

import json

import redis.asyncio as redis
from fastapi import FastAPI

app = FastAPI()
cache = redis.from_url("redis://localhost", decode_responses=True)

@app.get("/products/{product_id}")
async def get_product(product_id: int):
    # Serve the cached copy when present...
    cached = await cache.get(f"product:{product_id}")
    if cached:
        return json.loads(cached)
    # ...otherwise hit the database (fetch_from_db is the app's expensive
    # query, defined elsewhere) and cache the result for 5 minutes.
    product = await fetch_from_db(product_id)
    await cache.setex(f"product:{product_id}", 300, json.dumps(product))
    return product

Performance (500 concurrent requests): average latency 0.8 ms, throughput 142 000 ops/sec, memory 48 MB per 100 K items.

Pros: reliable across all three scenarios, mature Python ecosystem, extensive documentation.

Cons: requires a separate server process, adds operational overhead for small or dev environments, connection‑pool tuning needed, encountered connection‑limit issues under very high concurrency.
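The connection-pool tuning mentioned above amounts to a few lines of configuration. A hedged sketch with illustrative sizing, not the article's values:

```python
import redis.asyncio as redis

# Illustrative sizing: an explicit pool with headroom above the
# 500-client test load avoids connection-limit errors under high
# concurrency, at the cost of more server-side connection memory.
pool = redis.ConnectionPool.from_url(
    "redis://localhost",
    max_connections=600,
    decode_responses=True,
)
cache = redis.Redis(connection_pool=pool)
```

The right ceiling depends on the server's `maxclients` setting and how many worker processes share it.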

Setup time: 15 minutes.

Conclusion: fast and battle‑tested; operational cost is the only real drawback.

Memcached

import json

from aiomcache import Client

cache = Client("127.0.0.1", 11211)  # reuses the FastAPI app from above

@app.get("/products/{product_id}")
async def get_product(product_id: int):
    # Memcached keys and values must be bytes, hence the manual encoding.
    key = f"product:{product_id}".encode()
    cached = await cache.get(key)
    if cached:
        return json.loads(cached)
    product = await fetch_from_db(product_id)
    await cache.set(key, json.dumps(product).encode(), exptime=300)
    return product

Performance (500 concurrent requests): average latency 0.6 ms, throughput 168 000 ops/sec, memory 31 MB per 100 K items.

Pros: fastest on pure key‑value operations, lowest memory footprint.

Cons: no native support for complex data structures, requires manual serialization for sessions, rate‑limiting logic must be implemented by the user, lacks persistence (data lost on restart), unsuitable for two of the three test scenarios.
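The rate limiting that must be implemented by hand can be sketched as a fixed-window counter built on Memcached's atomic increment. Everything here (class names, the limit and window values, the in-memory stand-in) is illustrative, not the article's code:

```python
import time

class MemcachedRateLimiter:
    """Fixed-window rate limiter: the logic Memcached makes you hand-roll.

    `cache` needs memcached-style async add()/incr() (e.g. aiomcache).
    """

    def __init__(self, cache, limit: int = 100, window: int = 60):
        self.cache, self.limit, self.window = cache, limit, window

    async def allow(self, client_id: str) -> bool:
        # One counter per client per time window; exptime evicts old windows.
        key = f"rl:{client_id}:{int(time.time() // self.window)}".encode()
        if await self.cache.add(key, b"1", exptime=self.window):
            return True  # first request in this window
        # add() failed -> counter exists; increment atomically server-side.
        return await self.cache.incr(key) <= self.limit


class DictCache:
    """In-memory stand-in with the same add/incr contract, for local dev."""

    def __init__(self):
        self.data = {}

    async def add(self, key, value, exptime=0):
        if key in self.data:
            return False
        self.data[key] = 1
        return True

    async def incr(self, key, increment=1):
        self.data[key] += increment
        return self.data[key]
```

Redis-compatible stores give you the same pattern with INCR plus EXPIRE, which is why two of the three scenarios favor them.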

Setup time: 20 minutes (including hand-writing the rate-limiting logic).

Conclusion: wins on simple workloads but is too limited for production apps with diverse caching needs.

DragonflyDB

DragonflyDB is a Redis‑compatible in‑memory store that advertises up to 25× the performance of Redis along with substantially better memory efficiency.

import redis.asyncio as redis

# The Redis client code is unchanged; only the server listening on
# port 6379 is now DragonflyDB.
cache = redis.from_url("redis://localhost:6379", decode_responses=True)

Performance (500 concurrent requests): average latency 0.5 ms, throughput 198 000 ops/sec, memory 19 MB per 100 K items.

Pros: beats Redis on every metric (≈38 % lower latency, ≈39 % higher throughput, ≈60 % less memory), fully compatible with Redis commands, works in all three test scenarios.

Cons: newer project with a smaller ecosystem (fewer monitoring tools, limited documentation), an edge‑case in cluster mode required direct support.

Setup time: 12 minutes (faster than Redis because the client code is unchanged).

Conclusion: outperforms Redis on all measurable dimensions; maturity gap is the only reason not to switch immediately.

Cashews

Cashews is an async‑Python caching library that wraps any backend (Redis, in‑memory, disk) and provides a decorator‑based API.

from cashews import cache

cache.setup("redis://localhost")  # any backend: Redis, in-memory, disk

# Reuses the FastAPI app from above; the decorator handles the
# get-check-set-TTL cycle that the earlier examples wrote by hand.
@app.get("/products/{product_id}")
@cache(ttl="5m", key="product:{product_id}")
async def get_product(product_id: int):
    return await fetch_from_db(product_id)

Performance (500 concurrent requests): average latency 0.9 ms, throughput 128 000 ops/sec, memory depends on chosen backend.

Pros: eliminates roughly 80 % of caching boilerplate, caches an endpoint with a single decorator line, and offers the friendliest cache‑invalidation API of the four tools: await cache.delete_match("product:*") invalidates every cached product in one line, where raw Redis needs a SCAN followed by multiple DEL commands.
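For comparison, the manual equivalent of delete_match against a raw redis.asyncio client might look like this (a sketch; the helper name and the count hint of 500 are illustrative):

```python
async def delete_pattern(cache, pattern: str) -> int:
    """Delete all keys matching `pattern`; return how many were removed.

    `cache` is assumed to be a redis.asyncio client.
    """
    deleted = 0
    # scan_iter walks the keyspace incrementally (unlike the blocking
    # KEYS command); UNLINK reclaims memory asynchronously server-side.
    async for key in cache.scan_iter(match=pattern, count=500):
        deleted += await cache.unlink(key)
    return deleted
```

Cashews wraps exactly this kind of loop behind one call, which is most of the boilerplate savings.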

Cons: adds an abstraction layer; still requires a backend such as Redis, and the extra overhead makes it slightly slower than direct Redis usage.

Setup time: 8 minutes (fastest among the four).

Conclusion: best developer experience; when paired with DragonflyDB it gives both performance and ergonomics.

Results

Raw performance numbers:

Throughput: DragonflyDB 198K, Memcached 168K, Redis 142K, Cashews 128K ops/sec

Latency: DragonflyDB 0.5 ms, Memcached 0.6 ms, Redis 0.8 ms, Cashews 0.9 ms

Memory per 100 K items: DragonflyDB 19 MB, Memcached 31 MB, Redis 48 MB, Cashews varies

Developer experience: Cashews best, Redis good, DragonflyDB good, Memcached poor.

Production maturity: Redis best, Memcached good, DragonflyDB growing, Cashews new.

Why Redis Didn't Win

Redis is not slow or broken; it lost because DragonflyDB is objectively faster and more memory‑efficient while remaining compatible, and Cashews provides a superior async‑Python developer experience.

Current Choice

For new projects the author uses DragonflyDB as the backend and Cashews as the Python interface, combining performance and ergonomics. Existing Redis deployments can evaluate a switch to DragonflyDB with minimal code changes, picking up measurable speed and memory improvements at the cost of adopting a newer tool.
