
Understanding Key Redis Performance Metrics: Memory Usage, Command Processing, Latency, and Key Eviction

This article explains how to use Redis INFO and related commands to monitor critical performance metrics such as memory usage, memory fragmentation, total commands processed, latency, and key eviction, and provides practical tips for interpreting these metrics and optimizing Redis deployments.

Top Architect

Performance‑Related Data Metrics

Access a Redis server via redis-cli and run the INFO command to retrieve a wealth of information. The output is divided into ten sections (server, clients, memory, persistence, stats, replication, cpu, commandstats, cluster, keyspace), but the most relevant for performance are the memory and stats sections.

You can limit the output to a single category, e.g. INFO memory, to focus on memory-related fields.
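The raw INFO output is a flat text block of `# Section` headers and `key:value` lines. As a rough illustration of how a monitoring script might consume it, here is a minimal sketch (the function name `parse_info` and the sample values are my own, not part of any Redis client library):

```python
def parse_info(raw: str) -> dict:
    """Parse raw INFO output ('# Section' headers, 'key:value' lines) into nested dicts."""
    sections, current = {}, None
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("#"):
            current = line.lstrip("# ").lower()
            sections[current] = {}
        elif ":" in line and current is not None:
            key, _, value = line.partition(":")
            sections[current][key] = value
    return sections

# Sample fragment of what `redis-cli INFO memory` / `INFO stats` returns:
sample = """# Memory
used_memory:1048576
used_memory_human:1.00M
mem_fragmentation_ratio:1.12
# Stats
total_commands_processed:2500
"""
info = parse_info(sample)
print(info["memory"]["used_memory"])  # -> '1048576' (values arrive as strings)
```

Feeding this into a time-series store at a regular interval is the usual basis for the trend analysis described in the rest of this article.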

Memory Usage (used_memory)

The used_memory field shows the total bytes allocated by Redis' allocator. used_memory_human displays the same value in a readable format (e.g., MB).

used_memory_rss: total memory the OS reports as allocated to the process.

mem_fragmentation_ratio: memory fragmentation ratio (used_memory_rss divided by used_memory).

used_memory_lua: memory used by the Lua engine.

mem_allocator: allocator chosen at compile time (libc, jemalloc, tcmalloc).

If Redis's memory footprint grows beyond the physical RAM available on the host, the OS will start swapping pages to disk, causing severe latency spikes because disk I/O is orders of magnitude slower than RAM.

Tracking Memory Usage

When persistence (RDB/AOF) is disabled, keep memory usage below roughly 95 % of available RAM; beyond that, swapping or an out-of-memory kill can degrade the instance or lose data outright. When snapshots are enabled, use a much lower threshold (e.g., 45 %), because the fork used for background saves can briefly double memory consumption.
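The 45 %/95 % rule of thumb above is simple arithmetic; a small sketch makes the sizing explicit (the helper name `safe_maxmemory` is illustrative, not a Redis setting):

```python
def safe_maxmemory(total_ram_bytes: int, snapshots_enabled: bool) -> int:
    """Suggested maxmemory value per the 45%/95% rule of thumb.

    With RDB snapshots enabled, the background-save fork can briefly
    double memory use, hence the much lower 45% ceiling.
    """
    ratio = 0.45 if snapshots_enabled else 0.95
    return int(total_ram_bytes * ratio)

ram = 8 * 1024**3  # an 8 GiB host
print(safe_maxmemory(ram, snapshots_enabled=True))   # ~45% of 8 GiB
print(safe_maxmemory(ram, snapshots_enabled=False))  # ~95% of 8 GiB
```

The resulting value would be applied with `config set maxmemory <bytes>` or the `maxmemory` directive in redis.conf.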

Practical ways to reduce memory pressure:

Use a 32‑bit Redis instance for caches smaller than 4 GB (smaller pointer size reduces overhead).

Prefer HASH structures for many small fields instead of many separate keys.

Set expiration times on keys (e.g., EXPIRE key seconds).

Configure maxmemory and an appropriate eviction policy (e.g., volatile-ttl or allkeys-lru).
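The "prefer HASH structures" tip amounts to a key-naming transformation: instead of one string key per field (`SET user:1234:name ...`), store all of a record's small fields in a single hash (`HSET user:1234 name ...`), which cuts per-key overhead. A sketch of the mapping (the helper `to_hash_slot` is my own illustration):

```python
def to_hash_slot(flat_key: str):
    """Split a flat key like 'user:1234:name' into (hash_key, field).

    Many small string keys sharing a prefix can be consolidated into
    one HASH keyed by the prefix, with the last segment as the field.
    """
    prefix, _, field = flat_key.rpartition(":")
    return prefix, field

# SET user:1234:name Alice  ->  HSET user:1234 name Alice
print(to_hash_slot("user:1234:name"))  # ('user:1234', 'name')
```

Redis stores small hashes in a compact encoding (controlled by the hash-max-* configuration directives), which is where the memory savings come from.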

Command Processing Count (total_commands_processed)

The total_commands_processed field is a monotonically increasing counter of all commands executed by the server. A sudden drop or slowdown in this metric often indicates command‑queue buildup or slow commands blocking the single‑threaded event loop.

Analyzing Command Count to Diagnose Latency

Because Redis processes commands sequentially, a high command‑queue length can increase response latency (typical network latency on a 1 Gbps link is ~200 µs). Monitoring the trend of this counter helps identify whether latency spikes are caused by command backlog or slow commands.
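Since total_commands_processed is a monotonically increasing counter, the useful signal is its rate of change between two INFO samples. A minimal sketch of that derivation (the function name is illustrative):

```python
def commands_per_second(prev_count: int, prev_ts: float,
                        curr_count: int, curr_ts: float) -> float:
    """Derive throughput from two successive total_commands_processed samples."""
    elapsed = curr_ts - prev_ts
    if elapsed <= 0:
        raise ValueError("samples must be taken at distinct times")
    return (curr_count - prev_count) / elapsed

# Two INFO samples taken 10 seconds apart:
print(commands_per_second(2_500_000, 0.0, 2_512_000, 10.0))  # 1200.0 ops/s
```

A sudden dip in this rate while client traffic stays constant is the "command backlog or slow command" signature the article describes.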

Ways to Reduce Latency Related to Command Processing

Use multi-argument commands (e.g., RPUSH key val1 val2 … valN) instead of issuing many single-argument commands.

Employ pipelining to batch multiple commands in a single network round‑trip.

Avoid high‑complexity commands on large collections; refer to the “high‑time‑complexity command” table for alternatives.
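To make the pipelining point concrete at the wire level: Redis speaks the RESP protocol, in which each command is an array of bulk strings. A pipeline is simply several encoded commands concatenated into one buffer, sent with a single write, with all replies read back together. A minimal sketch of that encoding (helper names are my own):

```python
def encode_command(*parts: str) -> bytes:
    """Encode one command as a RESP array of bulk strings."""
    out = [f"*{len(parts)}\r\n".encode()]
    for p in parts:
        data = p.encode()
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(out)

def encode_pipeline(commands) -> bytes:
    """Concatenate several encoded commands into one buffer.

    Sending this in a single write pays one network round-trip
    instead of one per command, which is the whole point of pipelining.
    """
    return b"".join(encode_command(*c) for c in commands)

buf = encode_pipeline([("RPUSH", "mylist", "a", "b"), ("LLEN", "mylist")])
print(buf)
```

In practice a client library (e.g., redis-py's pipeline object) handles this buffering for you; the sketch only shows why the round-trip savings exist.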

Latency

Redis does not expose latency via INFO. To measure it from a client machine, run:

redis-cli --latency -h 127.0.0.1 -p 6379

Typical latency on a 1 Gbps NIC is around 200 µs; values significantly higher indicate performance problems.

Using Slowlog to Identify Slow Commands

Enable the slow‑log (default threshold 10 ms) to capture commands that exceed a configurable execution time:

config set slowlog-log-slower-than 5000   # log commands slower than 5 ms

Query the log with SLOWLOG GET (or SLOWLOG GET 10 for the last ten entries). The log shows an ID, timestamp, execution time (µs), and the command array.
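Each slow-log entry has the shape `[id, unix_timestamp, duration_in_microseconds, [command, args...]]`. A small sketch of turning those entries into readable rows (the helper `slow_entries` and the sample values are illustrative):

```python
def slow_entries(entries, threshold_us: int = 10_000):
    """Render SLOWLOG GET entries at or above threshold_us as (id, ms, command) rows."""
    rows = []
    for entry_id, ts, micros, cmd in entries:
        if micros >= threshold_us:
            rows.append((entry_id, micros / 1000.0, " ".join(cmd)))
    return rows

# One entry as returned by SLOWLOG GET: a KEYS scan that took ~12.5 ms.
sample_log = [[14, 1700000000, 12453, ["KEYS", "user:*"]]]
print(slow_entries(sample_log))  # [(14, 12.453, 'KEYS user:*')]
```

Commands like KEYS on a large keyspace are exactly the kind of high-complexity operation the slow log tends to surface.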

Monitoring Client Connections

Because Redis is single-threaded, a surge in client connections can increase per-connection latency. View the current count with INFO clients (field connected_clients). The default limit is 10,000; values above 5,000 may degrade performance.

Adjust the limit via config set maxclients or the maxclients directive in redis.conf. Set it to 110-150 % of the expected peak.

Memory Fragmentation Ratio

The mem_fragmentation_ratio field is used_memory_rss divided by used_memory, i.e., OS-allocated memory over Redis-allocated memory. A ratio slightly above 1 is normal; values above 1.5 indicate severe fragmentation (the OS holds far more memory than Redis is actually using), while a ratio below 1 means part of Redis's memory has been swapped out to disk.
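A monitoring check on this metric reduces to one division and two thresholds; a minimal sketch (the function name and status labels are my own):

```python
def fragmentation_status(used_memory_rss: int, used_memory: int) -> str:
    """Classify mem_fragmentation_ratio = used_memory_rss / used_memory."""
    ratio = used_memory_rss / used_memory
    if ratio < 1.0:
        return "swapping"    # part of Redis memory has been paged to disk
    if ratio > 1.5:
        return "fragmented"  # OS holds far more memory than Redis uses
    return "healthy"

print(fragmentation_status(1_200_000, 1_000_000))  # healthy (ratio 1.2)
```

Alerting on the "swapping" case is arguably more urgent than the "fragmented" case, since swapping directly inflates command latency.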

To mitigate fragmentation:

Restart Redis to release fragmented memory.

Limit memory usage (set maxmemory to 45 % of physical RAM when snapshots are enabled, otherwise up to 95 %).

Consider switching the allocator (jemalloc, tcmalloc, or libc) – note that changing the allocator requires recompiling Redis.

Key Eviction (evicted_keys)

The evicted_keys field counts keys removed because the maxmemory limit was reached. Eviction policies are set with maxmemory-policy (e.g., volatile-ttl or allkeys-lru).

Frequent evictions increase latency because Redis must both serve client commands and perform eviction work.

Ways to reduce evictions:

Increase maxmemory (respecting the 45 %/95 % guidelines based on persistence).

Shard the dataset across multiple Redis instances (hash‑sharding, proxy‑sharding, consistent hashing, virtual buckets).
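Of the sharding schemes listed above, consistent hashing is the one whose benefit is least obvious from the name: unlike naive `hash(key) % n`, adding or removing an instance only remaps a small fraction of keys. A minimal, self-contained sketch (class name, virtual-node count, and instance addresses are all illustrative):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring with virtual nodes.

    Each instance owns many points on the ring, so removing one
    instance only reassigns the keys that hashed to its points.
    """

    def __init__(self, nodes, vnodes: int = 64):
        self._ring = []  # sorted list of (point, node)
        for node in nodes:
            for i in range(vnodes):
                self._ring.append((self._point(f"{node}#{i}"), node))
        self._ring.sort()
        self._points = [p for p, _ in self._ring]

    @staticmethod
    def _point(s: str) -> int:
        # MD5 is fine here: we need uniform spread, not cryptographic strength.
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        """Return the instance responsible for this key (clockwise successor)."""
        idx = bisect.bisect(self._points, self._point(key)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["redis-a:6379", "redis-b:6379", "redis-c:6379"])
print(ring.node_for("user:1234"))  # one of the three instances, deterministically
```

Client-side rings like this, or a proxy such as twemproxy, let each instance stay comfortably under its maxmemory limit so evictions become rare.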

Summary

Redis is a high‑performance in‑memory key‑value store, but its speed depends on careful monitoring of metrics such as memory usage, fragmentation, command count, latency, and key eviction. By regularly collecting INFO data, interpreting trends, and applying the optimization techniques described above, developers can prevent common performance pitfalls and keep Redis running efficiently.

The original content was translated from an e‑book available at https://www.datadoghq.com/wp-content/uploads/2013/09/Understanding-the-Top-5-Redis-Performance-Metrics.pdf .

Tags: Optimization, Redis, Latency, Performance Metrics, Memory Usage, Database Monitoring, Key Eviction
Written by

Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.
