
How RocketMQ 5.0’s New Proxy Layer Enables Compute‑Storage Separation and Cloud‑Native Scaling

RocketMQ 5.0 replaces the monolithic Broker with a stateless Proxy layer that decouples compute from storage. This article explains how the new architecture addresses scalability, multi‑protocol support, and cloud‑native adaptation, illustrated with architecture comparisons, Java code samples, and two real‑world case studies (IoT and finance) showing significant performance and cost benefits.

Architecture & Thinking

1. Evolution Background

In RocketMQ 4.x and earlier, the Broker handled both message storage and compute (protocol handling, permission control, consumption management). This tight coupling works in small‑scale scenarios but creates four major bottlenecks as traffic grows: storage‑compute coupling limits scaling, multi‑protocol support becomes complex, client SDKs are heavyweight, and cloud‑native features (containerization, serverless) are poorly supported.

Storage‑compute coupling: when device connections surge, compute pressure impacts storage stability, and vice versa.

Multi‑protocol complexity: embedding MQTT, AMQP, and CloudEvents parsers in the Broker leads to code bloat and hard‑to‑optimize performance.

Heavy client SDK: rich features such as load balancing and retry increase upgrade difficulty and hinder sidecar deployments.

Poor cloud‑native fit: tight node coupling makes elastic scaling and lossless upgrades hard.

RocketMQ 5.0 addresses these issues by introducing a “compute‑storage separation” architecture with a Proxy layer that takes over all compute responsibilities, leaving the Broker (now called Store) to focus solely on persistence.

2. Core Architecture Upgrade

The new stack consists of four layers from top to bottom: SDK, NameServer, Proxy (compute), and Store (storage). The most critical change is the Proxy layer and the split of Broker duties.

2.1 Architecture Comparison

RocketMQ 5.0 architecture diagram

The diagram shows that all compute duties (protocol adaptation, permission control, consumption management, load balancing) are moved to Proxy, while Store retains only persistence, multi‑replica sync, and indexing. This enables independent deployment and elastic scaling of compute and storage resources.

2.2 Layer Responsibilities

SDK layer: a lightweight, language‑agnostic gRPC SDK replaces the heavyweight 4.x SDK, exposing native, CloudEvents, MQTT, and AMQP APIs.

NameServer layer: remains a stateless service‑discovery component; it now also registers Proxy nodes.

Proxy layer (core evolution): a stateless compute node handling protocol conversion, permission control, consumption management, load balancing, and routing. It supports two deployment modes:

LOCAL – Proxy runs in the same process as Store, for backward compatibility.

CLUSTER – Proxy runs as an independent cluster, allowing compute to scale on its own.

Store layer (Broker): dedicated to message persistence, multi‑replica sync, indexing, and optional cloud‑storage integration.

2.3 Why Proxy Instead of Direct Broker Splitting

Compatibility: Proxy can serve 4.x clients without code changes, ensuring smooth migration.

Unified protocol entry: all protocol parsing happens in Proxy, keeping Store simple and high‑performance.

Elastic scaling: Proxy’s stateless design lets it scale independently of Store, reducing operational cost.

3. Key Technical Implementations

3.1 Proxy Startup and Deployment Modes

The entry class org.apache.rocketmq.proxy.ProxyStartup parses command‑line arguments (NameServer address, deployment mode, config file) and performs four steps: initialize configuration, initialize thread pools, initialize handlers and services, then start the service and register a shutdown hook.

import org.apache.rocketmq.proxy.config.ProxyConfig;
import org.apache.rocketmq.proxy.server.ProxyServer;
import org.apache.rocketmq.proxy.server.ProxyMode;

public class ProxyStartup {
    public static void main(String[] args) {
        try {
            // 1. Parse args
            ProxyConfig proxyConfig = parseCommandLineArgs(args);
            // 2. Init server with mode
            ProxyServer proxyServer = new ProxyServer(proxyConfig);
            // Honor the mode parsed from --proxyMode, defaulting to CLUSTER
            ProxyMode mode = proxyConfig.getProxyMode() != null ? proxyConfig.getProxyMode() : ProxyMode.CLUSTER;
            proxyServer.setProxyMode(mode);
            // 3. Init gRPC and processors
            proxyServer.initGrpcServer();
            proxyServer.initMessagingProcessor();
            // 4. Start and add shutdown hook
            proxyServer.start();
            Runtime.getRuntime().addShutdownHook(new Thread(proxyServer::shutdown));
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(-1);
        }
    }
    private static ProxyConfig parseCommandLineArgs(String[] args) {
        ProxyConfig config = new ProxyConfig();
        for (String arg : args) {
            if (arg.startsWith("--namesrvAddr=")) {
                config.setNamesrvAddr(arg.split("=")[1]);
            } else if (arg.startsWith("--proxyMode=")) {
                config.setProxyMode(ProxyMode.valueOf(arg.split("=")[1]));
            }
        }
        return config;
    }
}

3.2 Multi‑Protocol Conversion (MQTT → RocketMQ)

Proxy converts MQTT messages to RocketMQ native messages, preserving payload, adding tags, and mapping QoS as a user property.

public class Mqtt2RocketMQProcessor implements MessagingProcessor {
    public Message convertMqttToRocketMessage(String mqttTopic, MqttMessage mqttMsg) {
        String rocketTopic = mqttTopic.replace("/", "_");
        Message rocketMsg = new Message();
        rocketMsg.setTopic(rocketTopic);
        rocketMsg.setBody(mqttMsg.getPayload());
        rocketMsg.setTags("mqtt_proxy");
        rocketMsg.putUserProperty("mqtt_qos", String.valueOf(mqttMsg.getQos()));
        return rocketMsg;
    }
    public void processMqttPublish(String clientId, String mqttTopic, MqttMessage mqttMsg) {
        Message rocketMsg = convertMqttToRocketMessage(mqttTopic, mqttMsg);
        sendToStore(rocketMsg);
        sendMqttAck(clientId, mqttMsg.getId());
    }
    // sendToStore and sendMqttAck are placeholders for the actual broker communication.
}

3.3 Stateless Pop Consumption Model

Proxy implements a Pop API where the client fetches messages without maintaining offsets; after processing, the client calls Delete to confirm consumption.

public class PopMessageProcessor {
    private MessageService messageService;
    public PopMessageResult popMessage(String topic, String consumerGroup, long invisibleTime, int maxNum) {
        PopMessageResult result = messageService.popMessage(topic, consumerGroup, invisibleTime, maxNum);
        recordMessageInvisibleTime(result.getMessages(), invisibleTime);
        return result;
    }
    public boolean deleteMessage(String topic, String consumerGroup, String messageId) {
        boolean success = messageService.deleteMessage(topic, consumerGroup, messageId);
        clearMessageInvisibleRecord(messageId);
        return success;
    }
    private void recordMessageInvisibleTime(List<Message> messages, long invisibleTime) {
        for (Message msg : messages) {
            long expireTime = System.currentTimeMillis() + invisibleTime;
            cache.put(msg.getMsgId(), expireTime);
        }
    }
    private void clearMessageInvisibleRecord(String messageId) {
        cache.remove(messageId);
    }
}
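The invisibility window tracked by the cache above can be sketched in isolation: a popped message stays hidden from other consumers until it is either deleted (acknowledged) or its invisible time elapses, at which point it becomes eligible for redelivery. The class below is a minimal, hypothetical model of that bookkeeping, not the actual Proxy implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch (not the real Proxy code) of Pop's invisibility bookkeeping:
// a popped message is hidden until acked or until its invisible time expires.
public class InvisibleTimeTracker {
    private final Map<String, Long> invisibleUntil = new HashMap<>();

    // Called when a message is popped: hide it for invisibleMillis.
    public void markInvisible(String msgId, long invisibleMillis, long nowMillis) {
        invisibleUntil.put(msgId, nowMillis + invisibleMillis);
    }

    // Called on Delete: consumption is confirmed, so no redelivery is needed.
    public void ack(String msgId) {
        invisibleUntil.remove(msgId);
    }

    // A message becomes redeliverable once its window elapses without an ack.
    public boolean isRedeliverable(String msgId, long nowMillis) {
        Long deadline = invisibleUntil.get(msgId);
        return deadline != null && nowMillis >= deadline;
    }
}
```

Because the client holds no offsets, a consumer that crashes mid‑processing simply lets the window expire; the message reappears for another consumer, which is what makes the model stateless on the client side.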

3.4 Service Discovery and Load Balancing

After startup, each Proxy registers its address, supported protocols, and topic partitions to the NameServer. Clients obtain Proxy addresses from NameServer and then rely on Proxy to balance traffic across Store nodes.

Service discovery sequence diagram
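As a rough illustration of that registration flow (the real NameServer protocol differs), the NameServer side can be modeled as a map from protocol to Proxy addresses: each Proxy registers under every protocol it speaks, and clients look up addresses for the protocol they use. The class and method names here are hypothetical.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical in-memory sketch of NameServer-side Proxy registration.
public class ProxyRegistry {
    private final Map<String, List<String>> proxiesByProtocol = new ConcurrentHashMap<>();

    // A Proxy registers its address under each protocol it supports.
    public void register(String address, List<String> protocols) {
        for (String p : protocols) {
            proxiesByProtocol.computeIfAbsent(p, k -> new CopyOnWriteArrayList<>()).add(address);
        }
    }

    // Clients look up Proxy addresses by the protocol they speak.
    public List<String> lookup(String protocol) {
        return proxiesByProtocol.getOrDefault(protocol, List.of());
    }
}
```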

Load balancing defaults to a round‑robin algorithm for normal messages; for ordered messages, a hash of the business key routes them to a fixed queue on a Store node to preserve order.
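That routing policy can be sketched as a simple queue selector (a simplified stand‑in for the Proxy's actual implementation, with hypothetical names): a counter cycles through queues for normal messages, while ordered messages hash a sharding key so the same key always lands on the same queue.

```java
import java.util.concurrent.atomic.AtomicLong;

// Simplified sketch of Proxy-side queue selection (not the actual implementation).
public class RouteSelector {
    private final AtomicLong counter = new AtomicLong();

    // Normal messages: round-robin across the topic's queues.
    public int selectNormal(int queueCount) {
        return (int) (counter.getAndIncrement() % queueCount);
    }

    // Ordered messages: hash the business key so the same key
    // always maps to the same queue, preserving per-key order.
    public int selectOrdered(String shardingKey, int queueCount) {
        return Math.floorMod(shardingKey.hashCode(), queueCount);
    }
}
```

Math.floorMod keeps the result non‑negative even when hashCode is negative, which a plain `%` would not.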

4. Real‑World Cases

Case 1: IoT – Massive Device Connections

Design: 12 Proxy nodes (8 CPU × 16 GB) in CLUSTER mode, scalable to 20 nodes; 6 Store nodes (16 CPU × 32 GB, NVMe) with 3‑replica.

Metrics: average latency 8 ms (peak 28 k TPS), latency drops to 6 ms after scaling; Proxy CPU 75 % peak, Store CPU 45 %; resource utilization ↑ 32 % vs 4.x; operational cost ↓ 50 % thanks to independent Proxy scaling.

Case 2: Finance – Distributed Transaction Messages

Design: 8 Proxy nodes (16 CPU × 32 GB) handling AMQP ↔ RocketMQ conversion and two‑phase commit; 4 Store nodes (24 CPU × 64 GB, RAID5) with 4‑replica for high availability.

Transaction processor code (simplified):

public class TransactionMessageProcessor implements TransactionProcessor {
    @Override
    public LocalTransactionState executeLocalTransaction(Message msg, Object arg) {
        try {
            Message rocketTxMsg = convertAmqpToTransactionMessage(msg);
            sendHalfMessageToStore(rocketTxMsg);
            boolean localTxSuccess = executeLocalTx(arg);
            return localTxSuccess ? LocalTransactionState.COMMIT_MESSAGE : LocalTransactionState.ROLLBACK_MESSAGE;
        } catch (Exception e) {
            return LocalTransactionState.UNKNOW;
        }
    }
    @Override
    public LocalTransactionState checkLocalTransactionState(CheckTransactionStateRequestHeader requestHeader, Message msg) {
        boolean txSuccess = checkLocalTxStatus(requestHeader.getTransactionId());
        return txSuccess ? LocalTransactionState.COMMIT_MESSAGE : LocalTransactionState.ROLLBACK_MESSAGE;
    }
    // conversion, execution, and check methods are placeholders for actual logic.
}

Metrics: transaction commit rate 99.992 %, average latency 11 ms, peak throughput 180k TPS (throughput ↑ 50 %, latency ↓ 40 % vs 4.x); AMQP compatibility achieved without client code changes.

5. Summary

Compute‑storage decoupling: Proxy's stateless compute scales independently, raising overall resource utilization by more than 30 %.

Multi‑protocol support: a unified Proxy entry point handles MQTT, AMQP, and CloudEvents, expanding use‑case coverage.

Lightweight SDK: the gRPC‑based SDK aligns APIs across languages, cutting client‑side complexity by roughly 60 %.

Cloud‑native readiness: container‑friendly deployment, automatic scaling, and lossless upgrades lower operational cost by roughly 50 %.

Backward compatibility: LOCAL mode preserves full 4.x behavior, enabling low‑risk migration.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Cloud Native · Proxy · Scalability · Message Queue · RocketMQ · Multi-Protocol · Compute-Storage Separation
Written by

Architecture & Thinking

🍭 Frontline tech director and chief architect at top-tier companies 🥝 Years of deep experience in internet, e‑commerce, social, and finance sectors 🌾 Committed to publishing high‑quality articles covering core technologies of leading internet firms, application architecture, and AI breakthroughs.