How Redis Stream Revolutionized Real‑Time Traffic Processing and Cut Costs by 90%

This article explains how the traffic team replaced a costly MQ system with Redis Stream, covering its concepts, design, implementation details such as load balancing and cross‑region deployment, monitoring metrics, performance benchmarks, practical lessons learned, and the scenarios where Redis Stream is most suitable.

Alibaba Cloud Developer

The traffic team, responsible for AMAP trajectory collection and real‑time calculations, needed a cheaper, lower‑latency messaging middleware to replace the existing MQ solution. After evaluating options, they chose Redis Stream and have now upgraded the entire traffic pipeline, achieving significant cost and latency reductions.

Redis Stream Concept

Redis Stream, introduced in Redis 5.0, provides an append-only FIFO log in which each entry is stored as an auto-generated ID plus field–value content. It supports automatic length trimming, lazy deletion, consumer groups (enabling both broadcast and clustered consumption), a per-group last_delivered_id cursor, and an ACK mechanism for tracking processed entries.
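These primitives map onto a handful of commands. A minimal illustrative redis-cli session (the stream, group, and consumer names are made-up examples, and the entry ID in XACK is a placeholder):

```
XADD traffic * plate "A12345" speed "62"       # append an entry; the server assigns the ID
XTRIM traffic MAXLEN ~ 100000                  # approximate automatic length trimming
XGROUP CREATE traffic calc $ MKSTREAM          # create consumer group "calc"
XREADGROUP GROUP calc worker1 COUNT 10 STREAMS traffic >   # clustered consumption within a group
XACK traffic calc 1526569495631-0              # acknowledge a processed entry by ID
```

Within one group, each entry is delivered to a single consumer (clustered consumption); creating multiple groups on the same stream gives each group a full copy of the flow (broadcast consumption).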

Design and Implementation

Load Balancing

Because Redis Stream has no native tag concept and a Redis cluster contains multiple shards, the design splits a logical topic into per-tag keys (topic_tag) and maps each tag to a shard with simple modular arithmetic: global_idx = tag % total_shards, instance_idx = global_idx / shards_per_instance, local_idx = global_idx % shards_per_instance. The full Redis key takes the form topic_tag_{hash}, where the braced {hash} suffix is a Redis cluster hash tag that controls which slot (and therefore which shard) the key lands on.
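The mapping above can be sketched in a few lines of C++. The names (ShardLocation, locate_tag) are ours, and using global_idx as the hash-tag payload inside the braces is an assumption for illustration; the article specifies only the arithmetic.

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Illustrative sketch of the tag-to-shard mapping described above.
struct ShardLocation {
    uint32_t instance_idx;  // which Redis instance in the cluster
    uint32_t local_idx;     // shard index within that instance
    std::string key;        // full Redis key for this topic_tag
};

ShardLocation locate_tag(const std::string& topic, uint32_t tag,
                         uint32_t total_shards, uint32_t shards_per_instance) {
    uint32_t global_idx = tag % total_shards;
    ShardLocation loc;
    loc.instance_idx = global_idx / shards_per_instance;
    loc.local_idx = global_idx % shards_per_instance;
    // The {...} part is a Redis cluster hash tag: only the text inside the
    // braces is hashed for slot placement, pinning the key to one shard.
    // Using global_idx here is a placeholder assumption.
    loc.key = topic + "_" + std::to_string(tag) + "_{" +
              std::to_string(global_idx) + "}";
    return loc;
}
```

With 4 instances of 64 shards each (256 shards total, matching the production setup described later), tag 200 would land on instance 3, local shard 8.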

Cross‑Datacenter Read/Write

Two approaches were compared: performing cross-region reads and writes through an asynchronous hiredis client, and deploying a globally active (multi-region) Redis setup. Tests showed the asynchronous mode achieved average latencies of 22–23 ms, while the globally active mode ranged from 51–57 ms and required additional Redis instances.

Engineering Implementation

The Redis Stream SDK (C++) allows producers and consumers to specify only a topic and tag. It supports multiple Redis instances, configurable producer/consumer thread counts, load‑balanced thread pools, consumer‑group tag subscriptions, per‑thread tag limits, and flexible processing thread pools for heavy callbacks.

Real‑Time Monitoring

Monitoring focuses on production/consumption message rates, batch pull size, and latency statistics to detect back‑pressure and network‑induced delays.
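The per-interval counters behind these metrics can be kept very simple; a toy sketch (field names are ours, not the team's schema):

```cpp
#include <cassert>
#include <cstdint>

// Counters collected per reporting interval: message counts (to derive
// produce/consume rates), pull counts (to derive average batch size, a
// back-pressure signal), and summed end-to-end latency.
struct IntervalStats {
    uint64_t messages = 0;
    uint64_t pulls = 0;
    uint64_t latency_ms_total = 0;

    void on_batch(uint32_t batch_size, uint32_t batch_latency_ms_sum) {
        ++pulls;
        messages += batch_size;
        latency_ms_total += batch_latency_ms_sum;
    }
    // A rising average batch size suggests consumers are falling behind.
    double avg_batch_size() const {
        return pulls ? static_cast<double>(messages) / pulls : 0.0;
    }
    // A rising average latency with normal batch sizes points at the network.
    double avg_latency_ms() const {
        return messages ? static_cast<double>(latency_ms_total) / messages : 0.0;
    }
};
```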

Performance Testing

Single‑threaded benchmarks show that TPS decreases as message size grows: >3000 TPS for messages under 10 KB, dropping to ~1500 TPS for 100 KB messages.

Practical Experience

Online Performance

After migration, the traffic pipeline processes roughly 20 million messages per minute (average message size about 1 KB) on four 64 GB, 64-shard Redis instances, cutting both cost and latency by more than 90 % compared with the previous MQ solution.
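A quick sanity check of these figures (the input values are from the article; the per-shard derivation is ours):

```cpp
#include <cassert>
#include <cstdint>

// 20M messages/minute spread across 4 instances x 64 shards = 256 shards.
constexpr uint64_t kMsgsPerMinute = 20'000'000;
constexpr uint64_t kInstances = 4;
constexpr uint64_t kShardsPerInstance = 64;

constexpr uint64_t per_shard_msgs_per_sec() {
    return (kMsgsPerMinute / 60) / (kInstances * kShardsPerInstance);
}
```

That works out to roughly 1,300 messages per second per shard, comfortably within the >3,000 TPS single-threaded benchmark result for small messages reported above.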

Applicable Scenarios

Redis Stream is ideal for high‑volume, cost‑sensitive workloads where occasional message loss is acceptable and large‑scale persistent storage is not required.

Pros and Cons

Pros: extremely low cost, high availability, high throughput, low latency, supports cluster/broadcast consumption and offset reset.

Cons: potential data loss on server failure, no dedicated operations platform, limited C++ client support.

Pitfalls & Tips

Use the latest hiredis (1.2.0) for async connections with timeout support.

Keep message size below 100 KB to avoid TPS degradation.

Ensure sufficient tag diversity to prevent data skew across shards.

CPU, not memory or bandwidth, is the primary factor when sizing Redis instances.

Tags: backend architecture, Performance Testing, Message Queue, cross-region replication, Redis Stream
Written by Alibaba Cloud Developer

Alibaba's official tech channel, featuring all of its technology innovations.