
Why Use Message Queues? Pain Points, Challenges, and Practical Solutions

This article explains the drawbacks of traditional synchronous architectures and why adopting message queues improves latency, coupling, and peak handling. It then details the problems an MQ introduces, such as duplicate messages, data inconsistency, message loss, ordering, backlog, and added operational complexity, along with concrete mitigation strategies.

Wukong Talks Architecture

Preface

Message queues (MQ) have become increasingly popular in many companies, but developers often wonder why they should be used, what new issues they introduce, and how to solve those issues.

1. Pain Points of Traditional Synchronous Architecture

1.1 Latency

A complex business system may need to synchronously call N downstream services for a single user request, so total response time grows with each call; under unstable network conditions this leads to time-outs and a poor user experience.

1.2 Tight Coupling

When a request flows through multiple subsystems (order, payment, inventory, points, logistics, etc.), high coupling means that a failure in any subsystem propagates to the whole request, threatening system stability.

1.3 Request Peaks

Sudden traffic spikes (e.g., flash sales) can overwhelm the database, causing slow responses or crashes, and the system cannot guarantee stability under such peak loads.

2. Why Use MQ?

Introducing an MQ can address the three problems above.

2.1 Asynchrony

By converting synchronous calls into asynchronous messages, the producer can return immediately after publishing, so overall response time no longer grows with the number of downstream calls.
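The difference can be sketched with an in-process queue standing in for the broker (a minimal, hypothetical illustration; real producers would publish to Kafka, RabbitMQ, etc.):

```python
import queue
import threading
import time

# Sketch: the producer publishes and returns immediately, while a
# background consumer does the slow downstream work at its own pace.
mq = queue.Queue()
processed = []

def consumer():
    while True:
        msg = mq.get()
        if msg is None:          # shutdown signal
            break
        time.sleep(0.01)         # simulate a slow downstream call
        processed.append(msg)
        mq.task_done()

worker = threading.Thread(target=consumer, daemon=True)
worker.start()

start = time.time()
for i in range(5):
    mq.put({"order_id": i})      # publish and return immediately
publish_latency = time.time() - start

mq.join()                        # the demo waits only so it can verify results
print(f"publish took {publish_latency * 1000:.1f} ms for 5 messages")
```

Publishing five messages takes microseconds even though processing them takes tens of milliseconds; the user-facing request path only pays the publish cost.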

2.2 Decoupling

Subsystems only depend on the MQ instead of each other, eliminating strong inter‑service dependencies and dramatically lowering coupling.

2.3 Peak Shaving

MQ buffers bursty traffic; when request peaks occur, messages are queued and processed at the consumer’s pace, preventing overload of downstream services.
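The buffering effect can be shown with a toy simulation (hypothetical numbers; the point is that the downstream service only ever sees the consumer's fixed drain rate, not the burst):

```python
from collections import deque

CONSUMER_RATE = 50               # messages the consumer handles per tick

buffer = deque()
for i in range(1000):            # traffic spike: all requests land in the MQ at once
    buffer.append(i)

downstream_load_per_tick = []
while buffer:
    batch = [buffer.popleft() for _ in range(min(CONSUMER_RATE, len(buffer)))]
    downstream_load_per_tick.append(len(batch))   # what downstream actually sees

print(max(downstream_load_per_tick))   # never exceeds the consumer's rate
print(len(downstream_load_per_tick))   # the spike is spread over 20 ticks
```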

3. New Problems Introduced by MQ

3.1 Duplicate Messages

Duplicate consumption can happen due to producer duplication, offset rollback, consumer ack failures, time‑outs, or manual retries, potentially causing duplicate business data.

3.2 Data Consistency

If a consumer fails after the upstream transaction commits (e.g., order created but points not awarded), the system ends up with inconsistent data; MQ typically provides eventual consistency.

3.3 Message Loss

Network failures, server disk errors, offset rollbacks, or consumer crashes before processing can cause messages to be lost.

3.4 Message Ordering

Stateful workflows (order → payment → completion) require ordered processing; however, Kafka partitions or RabbitMQ queues may not guarantee order across multiple consumers or partitions.

3.5 Message Backlog

If consumer throughput is lower than producer rate, messages accumulate, leading to delayed business actions (e.g., users waiting long to become members).

3.6 Increased System Complexity

Adding an MQ introduces extra components (producer, broker, consumer) and operational overhead, raising the learning curve and troubleshooting difficulty.

4. How to Solve These Problems

4.1 Duplicate Message Handling

Implement idempotent consumers: persist each message's unique messageId in a table protected by a unique index, and skip any message whose ID has already been recorded.
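A minimal sketch of this check-and-mark pattern, using SQLite in place of the real business database (table and function names are illustrative):

```python
import sqlite3

# Dedupe table: the PRIMARY KEY acts as the unique index on message_id.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE consumed (message_id TEXT PRIMARY KEY)")

handled = []

def handle_once(message_id, payload):
    """Run business logic only if this message ID has not been seen before."""
    try:
        # The unique constraint makes check-and-mark atomic: a concurrent
        # duplicate insert fails instead of silently slipping through.
        db.execute("INSERT INTO consumed (message_id) VALUES (?)", (message_id,))
        db.commit()
    except sqlite3.IntegrityError:
        return False             # duplicate delivery: skip business logic
    handled.append(payload)      # business logic runs exactly once per ID
    return True

handle_once("msg-1", {"points": 10})   # processed
handle_once("msg-1", {"points": 10})   # redelivered: rejected by the index
print(len(handled))  # 1
```

Relying on the database's unique constraint, rather than a read-then-write check in application code, keeps the dedupe correct even when two consumers receive the same message concurrently.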

4.2 Data Consistency

Adopt retry mechanisms—synchronous retries for low‑volume scenarios and asynchronous retry tables/jobs for high‑volume cases—to achieve eventual consistency when consumer processing fails.
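The asynchronous variant can be sketched as a retry table plus a periodic job (names and the deliberately flaky `process` are illustrative; a real system would persist the table and dead-letter rows that exhaust their retries):

```python
MAX_RETRIES = 3
retry_table = []                 # rows: {"msg": ..., "retries": int}
attempts = {}

def process(msg):
    """Flaky consumer: for this demo, succeeds only on the third attempt."""
    attempts[msg] = attempts.get(msg, 0) + 1
    if attempts[msg] < 3:
        raise RuntimeError("downstream unavailable")

def consume(msg):
    try:
        process(msg)
    except RuntimeError:
        retry_table.append({"msg": msg, "retries": 0})  # park it for the job

def retry_job():
    """Periodic job: replay parked messages until success or MAX_RETRIES."""
    still_failing = []
    for row in retry_table:
        try:
            process(row["msg"])
        except RuntimeError:
            row["retries"] += 1
            if row["retries"] < MAX_RETRIES:
                still_failing.append(row)   # else: alert / dead-letter queue
    retry_table[:] = still_failing

consume("award-points-42")        # first attempt fails, row is parked
retry_job()                       # second attempt fails, retries -> 1
retry_job()                       # third attempt succeeds, row removed
print(len(retry_table))  # 0
```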

4.3 Message Loss Prevention

Maintain a "message send" table recording each produced message with a pending status; a periodic job checks for messages that remain unconfirmed after a timeout and republishes them.
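A sketch of this "message send" table, with the broker interaction stubbed out (field names and the sweep job are illustrative assumptions):

```python
import time

TIMEOUT = 5.0                    # seconds before a pending message is resent

send_table = {}                  # msg_id -> {"status": ..., "sent_at": ...}
republished = []

def publish(msg_id):
    """Record the message as PENDING, then hand it to the broker."""
    send_table[msg_id] = {"status": "PENDING", "sent_at": time.monotonic()}
    # ... actual broker send would happen here ...

def on_broker_ack(msg_id):
    """Broker confirmed receipt: the message can no longer be lost in transit."""
    send_table[msg_id]["status"] = "CONFIRMED"

def sweep_job(now):
    """Periodic job: republish anything still unconfirmed past the timeout."""
    for msg_id, row in send_table.items():
        if row["status"] == "PENDING" and now - row["sent_at"] > TIMEOUT:
            republished.append(msg_id)
            row["sent_at"] = now             # resend and reset the clock

publish("order-1"); on_broker_ack("order-1")   # normal path: confirmed
publish("order-2")                             # ack lost somewhere
sweep_job(time.monotonic() + 10)               # pretend 10 s have passed
print(republished)  # ['order-2']
```

Note that republishing implies the same message may be delivered more than once, which is exactly why the idempotent-consumer pattern from 4.1 is needed alongside this table.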

4.4 Ordering Guarantees

Route all messages of the same logical key (e.g., order ID) to the same Kafka partition or RabbitMQ queue, ensuring they are consumed in order; if strict ordering is not required, consider redesigning the workflow to rely only on final state.
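The routing rule is simply a stable hash of the logical key. A toy sketch (Kafka's default partitioner uses murmur2 on the key; the hash below is a stand-in):

```python
NUM_PARTITIONS = 4
partitions = [[] for _ in range(NUM_PARTITIONS)]

def partition_for(key):
    """Stable key -> partition mapping: same key always lands in the same partition."""
    return sum(key.encode()) % NUM_PARTITIONS   # toy hash, stable per key

def send(key, event):
    partitions[partition_for(key)].append((key, event))

# Events for order-7 interleaved with another order's events:
for event in ["created", "paid", "completed"]:
    send("order-7", event)
    send("order-8", event)

order7 = [e for k, e in partitions[partition_for("order-7")] if k == "order-7"]
print(order7)  # ['created', 'paid', 'completed']
```

Because a partition is consumed by a single consumer at a time, all of order-7's events are processed in publish order, while different orders still spread across partitions for parallelism.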

4.5 Backlog Mitigation

When ordering is not required, use multithreaded consumers to increase processing speed; when ordering is required, dispatch messages to multiple single‑threaded queues to preserve order while scaling.
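The second approach can be sketched with in-process worker queues: messages are hashed by key onto one of several single-threaded workers, so throughput multiplies while per-key order is preserved (worker counts and names are illustrative):

```python
import queue
import threading

NUM_WORKERS = 3
queues = [queue.Queue() for _ in range(NUM_WORKERS)]
results = [[] for _ in range(NUM_WORKERS)]

def worker(idx):
    while True:
        item = queues[idx].get()
        if item is None:                     # shutdown signal
            break
        results[idx].append(item)            # single thread => per-key order kept

threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_WORKERS)]
for t in threads:
    t.start()

for seq in range(9):                         # backlog of keyed messages
    key = f"order-{seq % 3}"
    queues[hash(key) % NUM_WORKERS].put((key, seq))   # same key -> same worker

for q in queues:
    q.put(None)
for t in threads:
    t.join()
```

Each worker processes its queue serially, so messages sharing a key never race each other; only messages with different keys run in parallel.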

4.6 Reducing Complexity

Start with a minimal MQ setup, use well‑documented brokers (e.g., RabbitMQ, Kafka), and encapsulate MQ interactions behind libraries to hide implementation details from business services.

Tags: Backend Development, System Design, Asynchronous, Message Queue, Reliability, Decoupling
Written by Wukong Talks Architecture

Explaining distributed systems and architecture through stories. Author of the "JVM Performance Tuning in Practice" column, open-source author of "Spring Cloud in Practice PassJava", and independently developed a PMP practice quiz mini-program.