
Common Pitfalls and Solutions When Building an Asynchronous Redis Client with Netty

This article shares five practical pitfalls encountered while building a pure‑async Redis client with Netty, covering thread‑model changes, pooled ByteBuf allocation across threads, ByteBuf expansion, connection timeout handling, and flow‑control in asynchronous writes, and offers concrete code‑based solutions.


When developing a pure asynchronous Redis client, the author discovered several subtle issues caused by Netty 4’s changes compared to Netty 3, especially around threading, memory allocation, and timeout handling.

Pitfall 1: Netty 4 thread‑model shift – In Netty 3, inbound events ran on the I/O thread while outbound events ran on the calling (business) thread. Netty 4 executes both inbound and outbound operations on the channel's EventLoop (I/O thread), so channel.write(user) merely schedules serialization: mutating the object after the call is unsafe because the encoder may not have run yet. The solution is to deep‑copy the object before writing, or to serialize it on the calling thread and write the resulting bytes.

Example of the unsafe pattern:

User user = new User();
user.setName("admin");
channel.write(user);   // serialization is scheduled on the EventLoop, not done here
user.setName("guest"); // may run before the encoder – "guest" could be sent instead
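The fix described above – capture the state on the calling thread before the asynchronous hand‑off – can be demonstrated without Netty. The sketch below uses a single‑thread executor to stand in for the EventLoop; the mutable `StringBuilder` stands in for the User object:

```java
import java.util.concurrent.*;

public class SnapshotDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService eventLoop = Executors.newSingleThreadExecutor();
        StringBuilder user = new StringBuilder("admin"); // mutable "User"

        // Copy the state on the calling thread before the async hand-off,
        // mirroring "deep-copy (or serialize) before channel.write(user)".
        final String snapshot = user.toString();
        Future<String> written = eventLoop.submit(() -> snapshot);

        user.setLength(0);
        user.append("guest"); // later mutation cannot leak into the write

        System.out.println(written.get()); // prints admin
        eventLoop.shutdown();
    }
}
```

Because the snapshot is taken synchronously, the asynchronous consumer always sees "admin" regardless of when the mutation happens.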

Pitfall 2: Using PooledByteBufAllocator across different threads – PooledByteBufAllocator keeps thread‑local caches, so a direct buffer is cheapest when allocated and released on the same thread. Allocating in a business thread and releasing in an I/O thread bypasses those caches, defeating the pooling and leading to memory growth and frequent Full GCs.

// Business thread
PooledByteBufAllocator allocator = PooledByteBufAllocator.DEFAULT;
ByteBuf buffer = allocator.buffer();   // allocated from this thread's cache
User user = new User();
serialization.serialize(buffer, user);
channel.writeAndFlush(buffer);         // released later on the I/O thread – cache miss

Pitfall 3: ByteBuf expansion issues – When a buffer allocated by the pool grows beyond its default size (256 bytes), Netty pulls a larger buffer from the pool and releases the old one, and this expansion occurs on whichever thread is writing – the business thread – again defeating any scheme that confines allocation to one thread. One workaround is to pre‑allocate buffers on a dedicated allocator thread and hand them to business threads, as in the Allocator below:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.util.ReferenceCountUtil;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class Allocator {
    public static final ByteBufAllocator allocator = PooledByteBufAllocator.DEFAULT;
    private static final BlockingQueue<ByteBuf> bufferQueue = new ArrayBlockingQueue<>(100);
    private static final BlockingQueue<ByteBuf> toCleanQueue = new LinkedBlockingQueue<>();
    private static final int TO_CLEAN_SIZE = 50;
    private static final long CLEAN_PERIOD = 100;
    private static class AllocThread implements Runnable {
        @Override
        public void run() {
            long lastCleanTime = System.currentTimeMillis();
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    // allocate on this dedicated thread; retain() so one reference
                    // survives Netty's release after the write and is freed here
                    ByteBuf buffer = allocator.buffer();
                    buffer.retain();
                    bufferQueue.put(buffer);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                if (toCleanQueue.size() > TO_CLEAN_SIZE || System.currentTimeMillis() - lastCleanTime > CLEAN_PERIOD) {
                    List<ByteBuf> toClean = new ArrayList<>(toCleanQueue.size());
                    toCleanQueue.drainTo(toClean);
                    for (ByteBuf buffer : toClean) {
                        ReferenceCountUtil.release(buffer);
                    }
                    lastCleanTime = System.currentTimeMillis();
                }
            }
        }
    }
    static {
        Thread thread = new Thread(new AllocThread(), "qclient-redis-allocator");
        thread.setDaemon(true);
        thread.start();
    }
    public static ByteBuf alloc() {
        try {
            return bufferQueue.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException("alloc interrupt");
        }
    }
    public static void release(ByteBuf buf) {
        toCleanQueue.add(buf);
    }
}

Usage from a business thread:

ByteBuf buffer = Allocator.alloc();
// serialize object into buffer
ChannelPromise promise = channel.newPromise();
promise.addListener(future -> {
    // after write completes, return buffer to allocator thread
    Allocator.release(buffer);
});
channel.write(buffer, promise);
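The retain()/release() pairing in the Allocator and the usage snippet above balances out to zero. The plain‑Java trace below simulates that reference‑count arithmetic (it is an illustration, not Netty's ByteBuf API):

```java
public class RefCountTrace {
    public static void main(String[] args) {
        int refCnt = 1;             // allocator.buffer() starts at refCnt 1
        refCnt++;                   // AllocThread's retain() -> survives the write
        System.out.println(refCnt); // 2: buffer handed to the business thread

        refCnt--;                   // Netty releases once when the write completes
        System.out.println(refCnt); // 1: still owned by the allocator

        refCnt--;                   // clean loop's ReferenceCountUtil.release()
        System.out.println(refCnt); // 0: memory returns to the pool on the allocator thread
    }
}
```

The key property is that the final release – the one that actually returns memory to the pool – always happens on the allocator thread, preserving the thread‑local cache benefit.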

Pitfall 4: Connection timeout inconsistencies – Awaiting the connect future for only 1000 ms while ChannelOption.CONNECT_TIMEOUT_MILLIS is set to a longer value (e.g. 3000 ms) produces false negatives: the caller declares the connect failed and opens a second connection while the first attempt is still in flight and may yet succeed. The recommendation is to rely on a single timeout – the CONNECT_TIMEOUT_MILLIS option – and react to the future's completion instead of a shorter await.

// Inconsistent: the option allows 3 s, but the caller gives up after 1 s
bootstrap.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 3000);
bootstrap.connect(address).await(1000, TimeUnit.MILLISECONDS);
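The false‑negative effect can be reproduced without Netty. In the sketch below, a CompletableFuture stands in for the connect attempt: the attempt succeeds within its own budget, but a shorter caller‑side wait misreports it as failed – which is exactly how duplicate connections arise:

```java
import java.util.concurrent.*;

public class TimeoutDemo {
    public static void main(String[] args) throws Exception {
        // The "connect" succeeds after ~500 ms, within its own timeout budget...
        CompletableFuture<String> connect = CompletableFuture.supplyAsync(() -> {
            try { Thread.sleep(500); } catch (InterruptedException e) { /* ignore */ }
            return "connected";
        });

        // ...but the caller waits only 50 ms and declares failure:
        boolean callerSawSuccess;
        try {
            connect.get(50, TimeUnit.MILLISECONDS);
            callerSawSuccess = true;
        } catch (TimeoutException e) {
            callerSawSuccess = false; // false negative: caller would now open a duplicate
        }

        System.out.println(callerSawSuccess); // prints false
        System.out.println(connect.get());    // prints connected – the first attempt completed anyway
    }
}
```

With a single consistent timeout (the option value) and a completion listener, the caller and the transport can never disagree about whether the connect failed.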

Pitfall 5: Asynchronous processing and flow control – Netty's ChannelOutboundBuffer queues pending writes without bound. When the remote end is slow, the buffer keeps growing, consumes memory, and can eventually trigger an OOM or the Linux OOM killer. Configuring high and low water marks makes the channel report itself unwritable once the high mark is crossed; combined with checking channel.isWritable() before writing, this bounds the risk.

.option(ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK, 64 * 1024) // unwritable above 64 KB pending
.option(ChannelOption.WRITE_BUFFER_LOW_WATER_MARK, 32 * 1024)  // writable again below 32 KB
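The water‑mark mechanism amounts to hysteresis on the pending‑byte count. The plain‑Java sketch below mirrors that accounting (a simulation of the idea, not Netty's ChannelOutboundBuffer internals): crossing the high mark flips writability off, and draining below the low mark flips it back on.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class WaterMarkDemo {
    static final int HIGH = 64 * 1024, LOW = 32 * 1024;
    static int pendingBytes = 0;
    static final AtomicBoolean writable = new AtomicBoolean(true);

    // Producer side: queue a write; going above the high mark pauses writes.
    static void enqueue(int bytes) {
        pendingBytes += bytes;
        if (pendingBytes > HIGH) writable.set(false);
    }

    // Transport side: bytes flushed to the socket; dropping below the low
    // mark resumes writes (the gap between marks prevents flip-flopping).
    static void drain(int bytes) {
        pendingBytes -= bytes;
        if (pendingBytes < LOW) writable.set(true);
    }

    public static void main(String[] args) {
        enqueue(70 * 1024);                 // slow peer, writes pile up
        System.out.println(writable.get()); // false – producer should pause
        drain(40 * 1024);                   // socket drains
        System.out.println(writable.get()); // true – safe to resume writing
    }
}
```

In a real Netty application the equivalent producer‑side check is channel.isWritable(), with channelWritabilityChanged as the resume signal.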

By understanding these pitfalls and applying the demonstrated solutions—deep copying objects, keeping allocation and release on the same thread, avoiding uncontrolled buffer expansion, using consistent timeout settings, and configuring write‑buffer water marks—developers can build a more stable and high‑performance asynchronous Redis client on top of Netty.

Written by

Qunar Tech Salon

Qunar Tech Salon is a learning and exchange platform for Qunar engineers and industry peers. We share cutting-edge technology trends and topics, providing a free platform for mid-to-senior technical professionals to exchange and learn.
