How to Solve Common RocketMQ Issues: Duplicates, Throttling, Retries, and Loss

This article examines frequent RocketMQ problems such as duplicate sending, flow‑control throttling, message retries, duplicate consumption, backlog, and loss, and provides practical configuration tweaks, scaling strategies, batch sending, idempotent handling, and retry mechanisms to ensure reliable message delivery.



1. Duplicate Send

Network timeouts, broker exceptions, or a missing ACK may trigger producer retries, causing the same message to be sent more than once. The parameters retryTimesWhenSendFailed (synchronous sends, default 2) and retryTimesWhenSendAsyncFailed (asynchronous sends, default 2) control the retry counts. To avoid duplicates, either attach a unique message key and deduplicate on the consumer side, or set the retry count to 0 on the producer side and handle failed sends in application code; in all cases, ensure idempotent handling on the consumer side.

rocketmq:
  name-server: 192.168.1.200:9867
  producer:
    group: order-group
    retry-times-when-send-async-failed: 2
    retry-times-when-send-failed: 2

In multi‑master deployments, if the current broker fails, asynchronous send retries will not automatically switch to another master; the application must resend the message manually.
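Following the suggestion above, producer retries can be disabled in the same Spring Boot configuration (a sketch reusing the address and group from the example above; with retries off, failed sends must be handled in application code):

```yaml
rocketmq:
  name-server: 192.168.1.200:9867
  producer:
    group: order-group
    # 0 disables automatic retries, so the broker never
    # receives the same message twice from this producer
    retry-times-when-send-async-failed: 0
    retry-times-when-send-failed: 0
```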

2. Send Throttling

When the producer’s instantaneous sending rate exceeds the broker’s processing capacity, the broker throws a flow‑control exception such as:

org.springframework.messaging.MessagingException
CODE: 2 DESC: [TIMEOUT_CLEAN_QUEUE] broker busy, start flow control

The broker checks the head of the send‑request queue every 10 ms; if a request has waited longer than the broker‑side setting waitTimeMillsInSendQueue (default 200 ms), it is rejected and the producer does not retry it.
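Since the broker rejects these requests without retrying them, the application can retry on its own with backoff to ease pressure on the broker. A minimal, broker‑independent sketch (the class, method, and parameter names are illustrative, not part of the RocketMQ API; in practice the supplier would wrap producer.send and treat a flow‑control exception as a failed attempt):

```java
import java.util.function.Supplier;

public class BackoffSender {
    /**
     * Retries a send attempt with exponential backoff.
     * Returns true as soon as an attempt succeeds,
     * false once maxRetries extra attempts have failed.
     */
    public static boolean sendWithBackoff(Supplier<Boolean> attempt,
                                          int maxRetries,
                                          long baseDelayMs) {
        long delay = baseDelayMs;
        for (int i = 0; ; i++) {
            if (attempt.get()) {
                return true;                // send accepted by the broker
            }
            if (i >= maxRetries) {
                return false;               // give up after the last retry
            }
            try {
                Thread.sleep(delay);        // back off before the next attempt
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
            delay *= 2;                     // exponential growth eases broker pressure
        }
    }
}
```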

3. Message Retry

Unstable networks may cause the producer to miss the broker's ACK, leading to retransmission and, when the consumer is not idempotent, duplicate consumption. In one production example, sending 1.5 million orders produced three duplicate messages.

4. Duplicate Consumption

If deserialization or business logic fails, the consumer does not ACK the message, so the broker redelivers it and the message is consumed again. Solutions include manual retry with delayed messages, or relying on the broker's built‑in retry and dead‑letter queues; either way, the consumer must tolerate redelivery.
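Idempotent handling on the consumer side can be sketched with an in‑memory dedup set keyed by a unique message key (a sketch only; in production the seen‑key check would typically live in Redis or a database unique constraint so it survives restarts, and the class and method names here are illustrative):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class IdempotentConsumer {
    // keys of messages whose business logic has already run
    private final Set<String> processed = ConcurrentHashMap.newKeySet();

    /**
     * Runs the business logic only if this message key has not been seen.
     * Returns true when the logic actually ran, false for a duplicate
     * delivery (which should still be ACKed so the broker stops retrying).
     */
    public boolean consume(String messageKey, Runnable businessLogic) {
        if (!processed.add(messageKey)) {
            return false;           // duplicate delivery: skip, but still ACK
        }
        businessLogic.run();        // runs at most once per key
        return true;
    }
}
```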

5. Message Backlog

When producers send faster than consumers can process, messages accumulate. Monitor TPS and consumer delay via the console, then increase consumer instances, raise the consumer's consumeThreadMin / consumeThreadMax settings, or apply producer‑side throttling.
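Sizing the consumer side is back‑of‑the‑envelope math: if producers write P messages/s and one consumer instance handles C messages/s, at least ceil(P / C) instances are needed just to keep up, plus spare capacity to drain the existing backlog. A small sketch of that arithmetic (class and method names are illustrative; the numbers in the usage below are made up):

```java
public class BacklogSizing {
    /** Minimum consumer instances so consumption keeps up with production. */
    public static int instancesNeeded(int producerTps, int perInstanceTps) {
        // ceiling division: round up any fractional instance
        return (producerTps + perInstanceTps - 1) / perInstanceTps;
    }

    /** Seconds to drain a backlog given spare consuming capacity (msgs/s). */
    public static long drainSeconds(long backlog, long spareTps) {
        if (spareTps <= 0) {
            return -1; // the backlog never shrinks without spare capacity
        }
        return (backlog + spareTps - 1) / spareTps;
    }
}
```

For example, producers at 5,000 msg/s against consumers that each handle 800 msg/s need at least 7 instances, and a 1,000,000‑message backlog with 2,000 msg/s of spare capacity takes about 500 seconds to drain.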

6. Message Loss

Broker failures, power loss, or disk damage can cause loss. Mitigate by choosing an appropriate flush strategy (asynchronous flush covers most cases; synchronous flush guarantees durability at a throughput cost), configuring master‑slave replication, or using synchronous double‑write for critical scenarios.
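For the critical scenarios above, the relevant settings live in the broker's broker.conf (a sketch; both settings trade throughput for durability):

```properties
# flush every message to disk before acknowledging the producer
flushDiskType=SYNC_FLUSH
# wait for the slave to replicate before acknowledging (synchronous double-write)
brokerRole=SYNC_MASTER
```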

These measures together form a comprehensive approach to ensure reliable RocketMQ operation.

Tags: distributed-systems, Java, Performance Tuning, Message Queue, RocketMQ
Written by Lin is Dream

Sharing Java developer knowledge, practical articles, and continuous insights into computer engineering.
