
Understanding Disruptor: High‑Performance In‑Memory Queue, Core Concepts, Demo, and Source Code Analysis

The article explains the Disruptor—an intra‑process, lock‑free, array‑based queue that achieves millions of operations per second—by covering its core concepts, demo code, source‑code mechanics, performance optimizations such as pre‑allocation and false‑sharing avoidance, and real‑world Vivo iTheme applications with best‑practice tips.

vivo Internet Technology

This article introduces the Disruptor, a high‑performance in‑memory queue developed by LMAX, and explains its basic concepts, usage demo, performance principles, and source‑code analysis. It also shows two real‑world applications of Disruptor in the iTheme business of Vivo.

Background: Disruptor differs from distributed message queues such as RocketMQ or Kafka; it is an intra‑process, lock‑free, array‑based queue that can handle up to 6 million orders per second on a single thread. It is used in projects like Apache Storm, Camel, Log4j 2, and internally at vivo for monitoring data and iTheme metrics.

Comparison with JDK queues: Unlike ConcurrentLinkedQueue and LinkedTransferQueue (unbounded) or ArrayBlockingQueue and LinkedBlockingQueue (bounded but lock‑based and cache‑unfriendly), Disruptor is a bounded, lock‑free queue built on a circular array, which benefits from spatial locality and avoids false sharing.
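The circular‑array layout relies on the buffer size being a power of two: an ever‑increasing sequence number maps to an array slot with a cheap bitmask rather than a modulo. A minimal sketch of that mapping, with illustrative names (not Disruptor's own classes):

```java
// Illustrative sketch: mapping an ever-growing sequence onto a fixed
// circular array, as a power-of-two-sized ring buffer does.
public class RingIndex {
    private final int mask; // size - 1; valid only when size is a power of two

    public RingIndex(int size) {
        if (Integer.bitCount(size) != 1) {
            throw new IllegalArgumentException("size must be a power of two");
        }
        this.mask = size - 1;
    }

    // sequence & mask is equivalent to sequence % size, but cheaper,
    // and the sequence itself never needs to wrap or reset.
    public int indexOf(long sequence) {
        return (int) (sequence & mask);
    }

    public static void main(String[] args) {
        RingIndex ring = new RingIndex(8);
        System.out.println(ring.indexOf(5));  // 5
        System.out.println(ring.indexOf(8));  // wraps to 0
        System.out.println(ring.indexOf(13)); // wraps to 5
    }
}
```

Because slots are contiguous in one array, sequential consumption walks memory in order, which is the spatial-locality benefit the comparison above refers to.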

Core concepts (illustrated in the article): RingBuffer, Producer, Event, EventHandler, WaitStrategy, EventProcessor, Sequence, Sequencer, SequenceBarrier.

Demo code:

import java.nio.ByteBuffer;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import com.lmax.disruptor.BlockingWaitStrategy;
import com.lmax.disruptor.EventFactory;
import com.lmax.disruptor.EventHandler;
import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.dsl.ProducerType;

// The event carried through the ring buffer; slots are pre-allocated and reused.
public class OrderEvent {
    private long value;
    public long getValue() { return value; }
    public void setValue(long value) { this.value = value; }
}

// Factory the RingBuffer uses to pre-create every slot's event.
public class OrderEventFactory implements EventFactory<OrderEvent> {
    @Override
    public OrderEvent newInstance() {
        return new OrderEvent();
    }
}

// Producer: claim a sequence, fill the slot's event in place, then publish.
public class OrderEventProducer {
    private final RingBuffer<OrderEvent> ringBuffer;

    public OrderEventProducer(RingBuffer<OrderEvent> ringBuffer) {
        this.ringBuffer = ringBuffer;
    }

    public void sendData(ByteBuffer data) {
        long sequence = ringBuffer.next();               // claim the next slot
        try {
            OrderEvent event = ringBuffer.get(sequence); // reuse the pre-allocated event
            event.setValue(data.getLong(0));
        } finally {
            ringBuffer.publish(sequence);                // make the slot visible to consumers
        }
    }
}

// Consumer: invoked on the event-processing thread Disruptor manages.
public class OrderEventHandler implements EventHandler<OrderEvent> {
    @Override
    public void onEvent(OrderEvent event, long sequence, boolean endOfBatch) {
        System.out.println("Consumer: " + event.getValue());
    }
}

// Driver (each public class above lives in its own source file).
public class DisruptorDemo {
    public static void main(String[] args) {
        OrderEventFactory factory = new OrderEventFactory();
        int ringBufferSize = 4; // must be a power of two
        ExecutorService executor = Executors.newFixedThreadPool(1);

        Disruptor<OrderEvent> disruptor = new Disruptor<>(
                factory, ringBufferSize, executor,
                ProducerType.SINGLE, new BlockingWaitStrategy());
        disruptor.handleEventsWith(new OrderEventHandler());
        disruptor.start();

        RingBuffer<OrderEvent> ringBuffer = disruptor.getRingBuffer();
        OrderEventProducer producer = new OrderEventProducer(ringBuffer);
        ByteBuffer bb = ByteBuffer.allocate(8);
        for (long i = 0; i < 5; i++) {
            bb.putLong(0, i);
            producer.sendData(bb);
        }

        disruptor.shutdown();
        executor.shutdown();
    }
}

Source‑code highlights include the Disruptor constructor, RingBuffer creation, sequence allocation, and publishing logic, demonstrating how the library achieves lock‑free behavior using CAS, padding to avoid false sharing, and batch processing.

Performance principles:

Space pre‑allocation: the RingBuffer pre‑creates all Event objects, eliminating runtime allocation and GC pressure.
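The pre-allocation idea can be sketched in plain Java, with illustrative names rather than Disruptor's own classes: the array is filled with event objects once at construction time, and producers mutate slots in place instead of allocating per message.

```java
import java.util.function.Supplier;

// Sketch of event pre-allocation: every slot is created up front and
// reused forever, so steady-state publishing allocates nothing and
// generates no garbage.
public class PreallocatedRing<T> {
    private final Object[] entries;
    private final int mask;

    public PreallocatedRing(int size, Supplier<T> factory) {
        entries = new Object[size];      // size assumed to be a power of two
        mask = size - 1;
        for (int i = 0; i < size; i++) {
            entries[i] = factory.get();  // all events created once, up front
        }
    }

    @SuppressWarnings("unchecked")
    public T get(long sequence) {
        return (T) entries[(int) (sequence & mask)]; // reuse, never allocate
    }
}
```

A producer would do `ring.get(seq).setValue(...)` on the claimed slot, exactly as `sendData` does in the demo above; the same object instance is returned every time the sequence wraps around to that slot.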

Avoiding false sharing: padding fields (e.g., LhsPadding, RhsPadding) separate critical counters across cache lines.
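The padding trick can be sketched in plain Java, mirroring the LhsPadding/Value/RhsPadding layering named above; the class and field names below are illustrative. Unused long fields on both sides of the hot counter push any neighbouring data onto different 64-byte cache lines, so two threads updating adjacent counters do not invalidate each other's cache lines.

```java
// Sketch of cache-line padding. Seven longs (56 bytes) on each side of
// the 8-byte `value` guarantee no other field can share its cache line.
class LhsPad {
    protected long p1, p2, p3, p4, p5, p6, p7; // left-hand padding
}

class Value extends LhsPad {
    protected volatile long value;             // the hot counter
}

public class PaddedCounter extends Value {
    protected long p9, p10, p11, p12, p13, p14, p15; // right-hand padding

    public long get() { return value; }
    public void set(long v) { value = v; }
}
```

The superclass layering discourages the JVM from reordering or eliminating the padding fields. On modern JDKs the `@Contended` annotation can achieve a similar effect, though it is an internal API and typically needs `-XX:-RestrictContended` outside the JDK itself.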

Lock‑free algorithms: CAS on the cursor, careful gating sequence checks, and minimal volatile writes.
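The claim step can be sketched with a JDK AtomicLong standing in for Disruptor's cursor. All names here are illustrative, and a real sequencer waits (per its WaitStrategy) rather than throwing when the ring is full, but the shape is the same: check the gating sequence, then CAS the cursor forward in a spin loop instead of taking a lock.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of lock-free sequence claiming in the spirit of the
// multi-producer path: spin on compareAndSet instead of locking,
// gated on the slowest consumer so the ring is never overrun.
public class CasSequencer {
    private final AtomicLong cursor = new AtomicLong(-1);
    private final int bufferSize;

    public CasSequencer(int bufferSize) {
        this.bufferSize = bufferSize;
    }

    // In Disruptor, minConsumerSequence comes from the gating sequences.
    public long next(long minConsumerSequence) {
        long current, next;
        do {
            current = cursor.get();
            next = current + 1;
            if (next - minConsumerSequence > bufferSize) {
                // Real code would spin/park until consumers catch up.
                throw new IllegalStateException("ring buffer full");
            }
        } while (!cursor.compareAndSet(current, next));
        return next;
    }
}
```

If two producers race, one CAS fails, that producer re-reads the cursor and retries; no thread is ever blocked on a lock.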

Batch consumption: consumers fetch the highest available sequence and process a batch before updating their own sequence.
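That batching can be sketched as follows, with illustrative names (in Disruptor the highest available sequence comes from the SequenceBarrier, and `endOfBatch` is the same flag the EventHandler receives):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of batch consumption: read the highest published sequence
// once, process every event up to it, and advance the consumer's own
// sequence a single time for the whole batch.
public class BatchConsumer {
    private final AtomicLong consumed = new AtomicLong(-1);

    public interface Handler {
        void onEvent(long sequence, boolean endOfBatch);
    }

    // Returns the number of events handled in this batch.
    public long drainTo(long highestPublished, Handler handler) {
        long next = consumed.get() + 1;
        if (next > highestPublished) {
            return 0; // nothing available yet
        }
        for (long seq = next; seq <= highestPublished; seq++) {
            handler.onEvent(seq, seq == highestPublished);
        }
        consumed.set(highestPublished); // one write per batch, not per event
        return highestPublished - next + 1;
    }
}
```

Publishing the consumer sequence once per batch, instead of once per event, is what keeps the volatile-write traffic low when a consumer is catching up.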

Real‑world usage in iTheme:

Monitoring data buffering: multiple producers feed metric events into a Disruptor queue, and a scheduled task drains and reports them periodically.
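The buffer-then-drain pattern can be sketched with a ConcurrentLinkedQueue standing in for the Disruptor ring buffer, purely to keep the example self-contained; the names are illustrative. Producers enqueue cheaply from any thread, and a periodic task (e.g. on a ScheduledExecutorService) drains everything accumulated so far and reports it in one shot.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch of buffer-then-drain metric reporting. A ConcurrentLinkedQueue
// stands in for the Disruptor ring buffer so the sketch is self-contained.
public class MetricBuffer {
    private final ConcurrentLinkedQueue<String> buffer = new ConcurrentLinkedQueue<>();

    // Producer side: called from many threads, never blocks.
    public void record(String metric) {
        buffer.offer(metric);
    }

    // Consumer side: called by a scheduled task; drains the whole backlog.
    public List<String> drain() {
        List<String> batch = new ArrayList<>();
        String m;
        while ((m = buffer.poll()) != null) {
            batch.add(m);
        }
        return batch; // caller reports the entire batch at once
    }
}
```

Swapping in Disruptor buys the bounded, pre-allocated, lock-free buffer the rest of the article describes, but the producer/drain split is the same.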

Local cache key statistics: multi‑threaded producers record cache accesses; a single‑threaded consumer aggregates distinct keys using HyperLogLog and regex‑based categorisation.
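A sketch of that aggregation step, with two stand-ins to keep it self-contained: a HashSet replaces HyperLogLog (exact rather than approximate distinct counting), and the regex assumes an illustrative "category:id" key shape that is not from the original article.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Pattern;

// Sketch of the single-threaded consumer's aggregation: categorise each
// cache key via a regex and count distinct keys. A HashSet stands in for
// HyperLogLog, which the real pipeline uses to bound memory at the cost
// of an approximate count.
public class KeyStats {
    // Assumed "category:id" key shape, for illustration only.
    private static final Pattern KEY = Pattern.compile("^([a-z]+):.*");

    private final Set<String> distinctKeys = new HashSet<>();

    public void onKey(String key) {
        distinctKeys.add(key); // single-threaded consumer, so no locking needed
    }

    public long distinctCount() {
        return distinctKeys.size();
    }

    public static String categoryOf(String key) {
        var m = KEY.matcher(key);
        return m.matches() ? m.group(1) : "other";
    }
}
```

Because Disruptor funnels all events through one consumer thread, the aggregation state needs no synchronisation at all, which is a common reason to prefer this topology over a shared concurrent map.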

Best‑practice recommendations:

Do not perform long‑running tasks inside Disruptor consumers; keep handling lightweight.

Prefer one thread per EventHandler; let Disruptor create its own thread pool.

Use tryPublishEvent instead of next to avoid producer blocking.

Select an appropriate WaitStrategy (e.g., YieldingWaitStrategy for low latency, BlockingWaitStrategy for low CPU usage).

In summary, the article provides a comprehensive overview of Disruptor’s architecture, source‑code details, performance mechanisms, and practical deployment scenarios, equipping readers with both theoretical understanding and hands‑on examples.

Tags: Java, Concurrency, Disruptor, High Performance, lock-free, producer-consumer, Ring Buffer