
Understanding RxJava2 Backpressure Strategies and Flowable Implementation

This article explains RxJava2's backpressure mechanisms, compares the five Flowable strategies (MISSING, ERROR, BUFFER, DROP, LATEST), demonstrates their behavior with practical experiments, and shows how to use Subscription and FlowableEmitter to build a demand‑driven, memory‑safe data pipeline.

Sohu Tech Products

RxJava2 introduces backpressure support through Flowable, whose default internal queue size is 128; every operator in a Flowable chain must honor downstream demand. The article first describes the five built-in strategies (MISSING, ERROR, BUFFER, DROP, and LATEST), detailing their semantics and typical use cases.
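The three buffering strategies differ mainly in what happens once the 128-slot queue is full. A minimal plain-Java sketch of those semantics (this models the behavior with an ArrayDeque; it is not RxJava's actual queue implementation, and the class name is made up for illustration):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative model (not RxJava source): what BUFFER, DROP and LATEST
// do when the downstream queue (default capacity 128) is already full.
public class StrategySketch {
    enum Strategy { BUFFER, DROP, LATEST }

    static final int CAPACITY = 128; // RxJava2's default internal queue size

    static Deque<Integer> offerAll(Strategy strategy, int itemCount) {
        Deque<Integer> queue = new ArrayDeque<>();
        for (int i = 1; i <= itemCount; i++) {
            if (queue.size() < CAPACITY) {
                queue.addLast(i);                     // room left: every strategy enqueues
            } else switch (strategy) {
                case BUFFER: queue.addLast(i); break; // keeps growing -> OOM risk
                case DROP:   /* discard item i */ break; // newest item is thrown away
                case LATEST: queue.pollLast();        // overwrite the last slot so
                             queue.addLast(i); break; // the newest value survives
            }
        }
        return queue;
    }

    public static void main(String[] args) {
        System.out.println(offerAll(Strategy.BUFFER, 500).size());     // 500
        System.out.println(offerAll(Strategy.DROP, 500).size());       // 128
        System.out.println(offerAll(Strategy.LATEST, 500).peekLast()); // 500
    }
}
```

Under this model BUFFER keeps all 500 items (unbounded memory), DROP keeps only the first 128, and LATEST keeps 128 items but guarantees the most recent one is among them.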

The experiments emit 500 items at a rate of one item every 100 ms while the downstream consumes one item every 300 ms. The observed behavior for each strategy is summarized with screenshots (omitted here) and explanations of why certain strategies cause MissingBackpressureException, data loss, or OOM.
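At these rates the producer runs three times faster than the consumer, so with the ERROR strategy the 128-slot queue must eventually overflow. A small virtual-time sketch (assuming perfectly regular 100 ms and 300 ms ticks, which real schedulers only approximate; the class and method names are made up) estimates when:

```java
// Virtual time in 100 ms ticks: the producer emits one item per tick,
// the consumer drains one item every third tick (i.e. every 300 ms).
public class OverflowEstimate {
    // Returns the tick at which the backlog first exceeds `capacity`,
    // or -1 if it never does within `emissions` ticks.
    static int firstOverflowTick(int emissions, int capacity) {
        int backlog = 0;
        for (int tick = 1; tick <= emissions; tick++) {
            backlog++;                     // producer: one item per 100 ms
            if (tick % 3 == 0) backlog--;  // consumer: one item per 300 ms
            if (backlog > capacity) return tick;
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(firstOverflowTick(500, 128)); // 193
    }
}
```

By this arithmetic the backlog first exceeds 128 at tick 193, i.e. under ideal timing a MissingBackpressureException would fire roughly 19 seconds into the run.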

Next, the article explores the Subscription interface, which allows a downstream Subscriber to request a specific number of items via s.request(n). A simple test demonstrates that without an explicit request the upstream emits data but the downstream receives none, because the initial outstanding demand is zero.
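That contract can be modeled in a few lines of plain Java (the class and field names here are illustrative, not RxJava's): the upstream may invoke onNext only while outstanding demand is positive, and demand starts at zero.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative model of Subscription demand accounting (not RxJava code).
public class DemandSketch {
    final AtomicLong requested = new AtomicLong(); // starts at zero
    final List<Integer> received = new ArrayList<>();

    void request(long n) {              // models Subscription.request(n)
        requested.addAndGet(n);
    }

    void emit(int count) {              // models the upstream emission loop
        for (int i = 1; i <= count; i++) {
            if (requested.get() == 0) return; // no demand: stop emitting
            requested.decrementAndGet();
            received.add(i);                  // models downstream onNext(i)
        }
    }

    public static void main(String[] args) {
        DemandSketch noRequest = new DemandSketch();
        noRequest.emit(10);
        System.out.println(noRequest.received.size()); // 0: nothing requested

        DemandSketch three = new DemandSketch();
        three.request(3);
        three.emit(10);
        System.out.println(three.received); // [1, 2, 3]
    }
}
```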

To control the flow more precisely, the FlowableEmitter is introduced. Unlike ObservableEmitter, FlowableEmitter provides a requested() method that returns the current outstanding demand. The following code snippet illustrates a custom Flowable that emits data only when the downstream has pending demand:

public void flowableEmitTest(final int emitCount, final int requestCount) {
    Flowable.create(new FlowableOnSubscribe<Integer>() {
        @Override
        public void subscribe(FlowableEmitter<Integer> e) throws Exception {
            for (int i = 1; i <= emitCount; i++) {
                Log.d(TAG, "current pending requests -> " + e.requested());
                while (e.requested() == 0) {
                    // busy-wait until the downstream requests more; skipping
                    // the iteration instead would silently lose item i
                }
                Log.d(TAG, "emit -> " + i);
                e.onNext(i);
            }
            e.onComplete();
        }
    }, BackpressureStrategy.MISSING)
    .subscribeOn(Schedulers.newThread())
    .observeOn(Schedulers.newThread())
    .subscribe(new Subscriber<Integer>() {
        Subscription mSubscription;

        @Override
        public void onSubscribe(Subscription s) {
            mSubscription = s;
            s.request(requestCount); // initial demand
        }

        @Override
        public void onNext(Integer v) {
            Log.d(TAG, "receive -> " + v);
            mSubscription.request(1); // ask for the next item once this one is handled
        }

        @Override
        public void onError(Throwable t) { }

        @Override
        public void onComplete() { }
    });
}

This pattern ensures that the upstream never emits more items than the downstream can handle, preventing both MissingBackpressureException and OOM.

The article then dives into the RxJava source code to explain how the default buffer size (128) is defined, how observeOn creates a prefetch queue, and how the internal BaseObserveOnSubscriber manages the requested count and the limit (default 96) that triggers a request for more items from upstream.
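The replenishment rule can be sketched as plain demand accounting (field and method names here are illustrative, not the actual BaseObserveOnSubscriber members; RxJava computes the limit as prefetch - (prefetch >> 2), i.e. 96 for the default prefetch of 128):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of observeOn's replenishment accounting: prefetch 128 items on
// subscribe, then after every 96 consumed items request 96 more, so the
// queue is topped up before it runs completely dry.
public class PrefetchSketch {
    static final int PREFETCH = 128;                     // default buffer size
    static final int LIMIT = PREFETCH - (PREFETCH >> 2); // 128 - 32 = 96

    // Returns the sequence of request(n) calls made upstream
    // while the downstream consumes `total` items.
    static List<Integer> requestsWhileConsuming(int total) {
        List<Integer> requests = new ArrayList<>();
        requests.add(PREFETCH);            // initial request on subscribe
        int consumedSinceRequest = 0;
        for (int i = 0; i < total; i++) {
            if (++consumedSinceRequest == LIMIT) {
                requests.add(LIMIT);       // replenish after 96 consumed
                consumedSinceRequest = 0;
            }
        }
        return requests;
    }

    public static void main(String[] args) {
        System.out.println(requestsWhileConsuming(300)); // [128, 96, 96, 96]
    }
}
```

Consuming 300 items thus produces an initial request of 128 followed by three replenishing requests of 96, keeping the queue from both emptying and overflowing.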

Finally, a complete solution is presented that combines the MISSING strategy with explicit demand checks and incremental requests, providing a robust, backpressure‑aware Flowable that can process an infinite upstream stream without losing data or exhausting memory.

Java, Concurrency, Reactive Programming, RxJava, Backpressure, Flowable
Written by

Sohu Tech Products

A knowledge-sharing platform for Sohu's technology products. As a leading Chinese internet brand with media, video, search, and gaming services and over 700 million users, Sohu continuously drives tech innovation and practice. We’ll share practical insights and tech news here.
