7 Real-World Message Queue Patterns Every Backend Engineer Should Know
This article shares seven classic use cases of message queues—including asynchronous decoupling, traffic smoothing, bus architecture, delayed tasks, broadcast consumption, distributed transactions, and data hub integration—illustrated with real-world experiences and code examples to help engineers design robust high‑concurrency systems.
Hello everyone, I'm Sanyou.
In my view, message queues, caching, and sharding are the three pillars of high‑concurrency system design.
Throughout my career I have used ActiveMQ, RabbitMQ, Kafka, and RocketMQ.
This article combines my real experiences to share seven classic message‑queue application scenarios.
1 Asynchronous & Decoupling
In the user service of an e‑commerce site, an SMS must be sent after a user registers. When registration and SMS sending were tightly coupled, an unstable or changing SMS channel caused long response times and forced risky modifications to the core registration flow.
Refactoring with a message queue solves this:
Asynchronous: After successfully saving user info, the service sends a message to the queue and immediately returns to the front end, avoiding long latency.
Decoupling: The SMS service consumes the message and sends the text, separating core and non‑core functions and reducing coupling.
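The pattern can be sketched with an in-memory queue standing in for the broker (the queue, topic, and SMS call below are placeholders; a real system would use RabbitMQ, RocketMQ, etc.):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class RegistrationDemo {
    // Stand-in for the message broker: registration events waiting for the SMS consumer.
    static final BlockingQueue<String> SMS_QUEUE = new LinkedBlockingQueue<>();
    static final List<String> SENT_SMS = new ArrayList<>();

    // Core path: save the user, publish an event, return immediately.
    static String register(String phone) throws InterruptedException {
        // ... save user record to the database ...
        SMS_QUEUE.put(phone);   // fire-and-forget; SMS latency no longer blocks this path
        return "OK";            // respond to the front end right away
    }

    // Non-core path: the SMS consumer drains the queue asynchronously.
    static void consumeOne() throws InterruptedException {
        String phone = SMS_QUEUE.take();
        SENT_SMS.add("welcome:" + phone);   // stand-in for calling the SMS gateway
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(register("13800000000"));
        consumeOne();
        System.out.println(SENT_SMS);
    }
}
```

The registration path only enqueues; swapping the SMS channel later touches only the consumer.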
2 Traffic Smoothing
In high‑concurrency scenarios, sudden request spikes can overload databases or exhaust CPU/IO resources.
At Shenzhou Ride‑hailing, order updates first modify the cache, then send a message to MetaQ. The order persistence service consumes the message, validates order integrity, and finally writes to the database. This limits consumer concurrency, smooths consumption speed, and protects the database while keeping the front‑end order system stable.
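The smoothing effect comes from capping consumer concurrency rather than the request rate. A minimal sketch (the queue and counter stand in for MetaQ and the database; names are illustrative):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class SmoothingDemo {
    // Consume a burst of order updates with a bounded number of consumer threads,
    // so the "database" (here just a counter) never sees more than that concurrency.
    static int drain(int orders, int consumerThreads) throws Exception {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
        for (int i = 0; i < orders; i++) queue.put(i);   // sudden traffic spike
        AtomicInteger persisted = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(consumerThreads);
        for (int i = 0; i < consumerThreads; i++) {
            pool.submit(() -> {
                Integer order;
                while ((order = queue.poll()) != null) {
                    persisted.incrementAndGet();         // stand-in for the DB write
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return persisted.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(drain(1000, 4) + " updates persisted by 4 consumers");
    }
}
```

However many updates arrive at once, the database only ever handles as many writes as there are consumer threads.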
3 Message Bus Architecture
A message bus works like the data bus on a motherboard: it carries data between components without requiring them to communicate with each other directly.
At a lottery company, a scheduling center service maintains order information and communicates with downstream services (ticket gateway, prize calculation) via a message queue, keeping systems decoupled and each responsible for its own domain.
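The bus idea reduces to topic-based publish/subscribe: publishers and subscribers know only topic names, never each other. A toy sketch (topic names and services are hypothetical, loosely modeled on the lottery example):

```java
import java.util.*;
import java.util.function.Consumer;

public class BusDemo {
    // Minimal topic-based bus: handlers are registered per topic.
    static final Map<String, List<Consumer<String>>> SUBSCRIBERS = new HashMap<>();

    static void subscribe(String topic, Consumer<String> handler) {
        SUBSCRIBERS.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    static void publish(String topic, String payload) {
        SUBSCRIBERS.getOrDefault(topic, List.of()).forEach(h -> h.accept(payload));
    }

    public static void main(String[] args) {
        List<String> ticketGateway = new ArrayList<>();
        List<String> prizeCalc = new ArrayList<>();
        subscribe("order.created", ticketGateway::add);  // ticket gateway prints tickets
        subscribe("order.drawn", prizeCalc::add);        // prize service computes winnings
        publish("order.created", "order-1001");          // scheduling center only publishes
        System.out.println(ticketGateway + " " + prizeCalc);
    }
}
```

The scheduling center never calls the ticket gateway or prize service directly; each side depends only on the topics.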
4 Delayed Tasks
When a user places an order on Meituan but does not pay immediately, a countdown is shown and the order should be cancelled after the timeout.
The elegant solution is to use delayed messages in the queue: the order service sends a delayed message; when the delay expires, the consumer checks the order status and cancels unpaid orders.
RocketMQ 4.x delayed message example:
<code>DefaultMQProducer producer = new DefaultMQProducer("delay_producer_group");
producer.setNamesrvAddr("127.0.0.1:9876");
producer.start();

Message msg = new Message();
msg.setTopic("TopicA");
msg.setTags("Tag");
msg.setBody("this is a delay message".getBytes());
// set delay level 5, which corresponds to 1 minute
msg.setDelayTimeLevel(5);
producer.send(msg);

producer.shutdown();
</code>RocketMQ 4.x supports 18 fixed delay levels configured via the broker's messageDelayLevel property. RocketMQ 5.x lifts this restriction and allows arbitrary timestamps, with three APIs for specifying delay or schedule time.
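The default 4.x messageDelayLevel mapping is "1s 5s 10s 30s 1m 2m 3m 4m 5m 6m 7m 8m 9m 10m 20m 30m 1h 2h", which is why level 5 above means one minute. Parsing it makes the 1-indexed mapping explicit:

```java
import java.util.HashMap;
import java.util.Map;

public class DelayLevels {
    // Default broker messageDelayLevel config; levels are 1-indexed in setDelayTimeLevel().
    static final String DEFAULT_LEVELS =
        "1s 5s 10s 30s 1m 2m 3m 4m 5m 6m 7m 8m 9m 10m 20m 30m 1h 2h";

    // Convert a level such as 5 into its delay in seconds.
    static long delaySeconds(int level) {
        String spec = DEFAULT_LEVELS.split(" ")[level - 1];
        long n = Long.parseLong(spec.substring(0, spec.length() - 1));
        Map<Character, Long> unit = new HashMap<>();
        unit.put('s', 1L); unit.put('m', 60L); unit.put('h', 3600L);
        return n * unit.get(spec.charAt(spec.length() - 1));
    }

    public static void main(String[] args) {
        System.out.println("level 5  = " + delaySeconds(5) + "s");   // the example above
        System.out.println("level 18 = " + delaySeconds(18) + "s");  // the maximum, 2h
    }
}
```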
5 Broadcast Consumption
Broadcast consumption means each message is delivered to every consumer instance in the group, so every instance processes every message (in contrast to clustering mode, where each message is handled by only one instance).
It is mainly used for message push and cache synchronization.
01 Message Push
In the ride‑hailing driver app, after a user places an order, the dispatch system pushes the order to drivers via a TCP service that acts as a consumer with broadcast consumption.
02 Cache Synchronization
In high‑concurrency scenarios, many applications use local caches (HashMap, ConcurrentHashMap, Guava, Caffeine). When dictionary data changes, a message is sent to RocketMQ; each node consumes the message and refreshes its local cache.
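The cache-sync flow can be sketched without a broker: in broadcast mode every node receives the update message and refreshes its own local cache (the Node class and dictionary key below are illustrative):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class BroadcastCacheDemo {
    // Each application node keeps its own local cache (HashMap/Caffeine in practice).
    static class Node {
        final Map<String, String> localCache = new ConcurrentHashMap<>();
        void onMessage(String key, String value) { localCache.put(key, value); }  // refresh
    }

    // Broadcast mode: every node receives every message, unlike clustering mode
    // where only one consumer in the group would get it.
    static void broadcast(List<Node> nodes, String key, String value) {
        for (Node n : nodes) n.onMessage(key, value);
    }

    public static void main(String[] args) {
        List<Node> cluster = List.of(new Node(), new Node(), new Node());
        broadcast(cluster, "dict:country:CN", "China");
        cluster.forEach(n -> System.out.println(n.localCache));
    }
}
```

After one dictionary change, all three nodes hold the same refreshed entry; no node's cache is left stale.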
6 Distributed Transactions
In an e‑commerce transaction, a user’s payment triggers downstream actions such as logistics, points, and cart clearing.
Traditional XA transactions ensure consistency but suffer from low concurrency and performance.
Plain message solutions struggle with consistency because messages lack commit/rollback capabilities.
RocketMQ distributed transaction messages add a two‑phase commit, binding the second‑phase commit with the local transaction to achieve global consistency.
Interaction flow:
Producer sends message to broker.
Broker persists the message and returns ACK; the message is marked as temporarily undeliverable (a half‑transaction message, invisible to consumers).
Producer executes local transaction logic.
Producer sends a second‑phase commit (Commit or Rollback) to the broker.
If the broker does not receive the commit (e.g., network loss), it performs a message check and the producer re‑evaluates the local transaction result before resubmitting the commit.
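The steps above can be simulated with a toy broker that keeps half messages invisible until the second-phase decision arrives (all names below are illustrative, not the RocketMQ API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class HalfMessageDemo {
    // Toy broker: half messages are persisted but invisible to consumers.
    static class Broker {
        final Map<String, String> half = new HashMap<>();
        final List<String> deliverable = new ArrayList<>();

        String sendHalf(String body) {          // phase 1: persist, ACK, keep invisible
            String txId = "tx-" + (half.size() + deliverable.size() + 1);
            half.put(txId, body);
            return txId;
        }
        void commit(String txId) {              // phase 2: make it visible to consumers
            String body = half.remove(txId);
            if (body != null) deliverable.add(body);
        }
        void rollback(String txId) { half.remove(txId); }          // phase 2: discard
        boolean needsCheck(String txId) { return half.containsKey(txId); }  // check-back
    }

    public static void main(String[] args) {
        Broker broker = new Broker();
        String txId = broker.sendHalf("points+100");
        // ... local transaction (e.g. deduct payment) runs here ...
        boolean localTxOk = true;
        if (localTxOk) broker.commit(txId); else broker.rollback(txId);
        System.out.println(broker.deliverable);
    }
}
```

If the commit never arrives, `needsCheck` stays true and the broker's check-back asks the producer to re-evaluate the local transaction, which is exactly the recovery path in step 5.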
7 Data Hub Integration
Specialized systems like HBase, Elasticsearch, Storm, Spark, OpenTSDB, etc., often need the same data set.
Using Kafka as a data hub, the same log data can be ingested into multiple systems.
Key components: log collection client, Kafka queue, and downstream processing applications (e.g., Logstash, Hadoop). The client batches and compresses logs, Kafka persists them, and processing apps consume the messages for search or further analysis.
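What makes Kafka work as a hub is that each downstream system is an independent consumer group with its own committed offset over the same log. A toy single-partition sketch (group names are placeholders):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DataHubDemo {
    // Toy log: an append-only list, like a single Kafka partition.
    static final List<String> LOG = new ArrayList<>();
    // Each downstream system (consumer group) tracks its own offset, so the same
    // records feed Elasticsearch, Hadoop, etc. independently.
    static final Map<String, Integer> OFFSETS = new HashMap<>();

    static void produce(String record) { LOG.add(record); }

    static List<String> poll(String group) {
        int from = OFFSETS.getOrDefault(group, 0);
        List<String> batch = new ArrayList<>(LOG.subList(from, LOG.size()));
        OFFSETS.put(group, LOG.size());   // commit the new offset for this group
        return batch;
    }

    public static void main(String[] args) {
        produce("login user=42");
        produce("click item=7");
        System.out.println("es     -> " + poll("elasticsearch"));
        System.out.println("hadoop -> " + poll("hadoop"));   // same records, own offset
    }
}
```

Ingesting the data once and letting every system consume at its own pace is what distinguishes the hub pattern from point-to-point pipelines.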