Master TCP Packet Framing: Build a Mini Netty with Java NIO to Fix Sticky/Partial Packets

This article explains the root causes of TCP sticky and half‑packet issues, designs a length‑field message protocol and a generic decoder, provides complete Java NIO code for a mini Netty framework, and discusses its startup process and current limitations.

Lin is Dream

In the previous article we upgraded from a single‑thread NIO server to a multi‑reactor model where the Boss thread accepts connections and Worker threads handle read/write events, enabling thousands of concurrent connections without blocking.

However, the read/write logic still issues a single channel.read(buffer) or channel.write(buffer) per event, and TCP does not guarantee that a message sent in one write() arrives in one read(). This leads to the classic sticky/half‑packet problem.

1. Causes of Sticky/Half Packets

TCP treats data as a byte stream with no message boundaries: a complete message may be split across several TCP segments (a half packet), or multiple messages may be coalesced into one segment (a sticky packet). For example, when the receiver reads only 1 KB at a time but the client sends 2 KB, the message is split across multiple reads, and each read delivers only part of it.
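
The byte-stream behavior can be simulated without a real socket. In this hypothetical sketch, two logical messages written back to back arrive as one undifferentiated byte run, just as a single channel.read() might deliver them; the original boundary between the two sends is gone:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Illustrative demo (names are assumptions, not part of the framework):
// two separate "sends" collapse into one byte run on the receiver side.
public class StickyPacketDemo {
    public static void main(String[] args) {
        ByteBuffer wire = ByteBuffer.allocate(64);
        wire.put("HELLO".getBytes(StandardCharsets.UTF_8)); // first send
        wire.put("WORLD".getBytes(StandardCharsets.UTF_8)); // second send
        wire.flip();
        byte[] received = new byte[wire.remaining()];
        wire.get(received);
        // The receiver sees one run of bytes; where does one message end?
        System.out.println(new String(received, StandardCharsets.UTF_8)); // prints HELLOWORLD
    }
}
```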

2. Designing a Message Protocol

To define message boundaries we use a “length + body” protocol: a 4‑byte integer indicating the body length followed by the body bytes. The receiver first reads the length field, then reads the specified number of bytes, handling half packets by storing incomplete data until the next read.

ByteBuffer buffer = ByteBuffer.allocate(1024); // fixed 1 KB buffer
int len = channel.read(buffer);                // a single read may return part of one
                                               // message, or several messages at once
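
To make the "length + body" layout concrete, here is a minimal sketch (class name is illustrative) that builds one frame for the body "hi" and prints its wire bytes: a 4‑byte big‑endian length header followed by the UTF‑8 body:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Minimal sketch of the "length + body" frame layout described above.
public class LengthFieldLayout {
    public static void main(String[] args) {
        byte[] body = "hi".getBytes(StandardCharsets.UTF_8);
        ByteBuffer frame = ByteBuffer.allocate(4 + body.length);
        frame.putInt(body.length); // header: 0x00 0x00 0x00 0x02 (big-endian)
        frame.put(body);           // body:   'h' 'i'
        frame.flip();
        // Wire bytes: 0 0 0 2 104 105
        while (frame.hasRemaining()) System.out.print(frame.get() + " ");
    }
}
```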

3. Designing a Generic Decoder

We define a MessageCodec interface with encode and decode methods. The implementation LengthFieldMessageCodec reads the 4‑byte length header, extracts complete messages, stores remaining bytes with compact(), and returns a list of decoded strings.

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public interface MessageCodec {
    ByteBuffer encode(String msg);
    List<String> decode(ByteBuffer buffer);
}

public class LengthFieldMessageCodec implements MessageCodec {
    private static final int LENGTH_HEADER_SIZE = 4;

    @Override
    public ByteBuffer encode(String msg) {
        byte[] bytes = msg.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buffer = ByteBuffer.allocate(LENGTH_HEADER_SIZE + bytes.length);
        buffer.putInt(bytes.length); // 4-byte length header
        buffer.put(bytes);           // message body
        buffer.flip();               // switch to read mode for writing out
        return buffer;
    }

    @Override
    public List<String> decode(ByteBuffer buffer) {
        List<String> messages = new ArrayList<>();
        buffer.flip(); // buffer arrives in write mode; switch to read mode
        while (buffer.remaining() >= LENGTH_HEADER_SIZE) { // need at least a full header
            buffer.mark(); // remember position in case the body is incomplete
            int len = buffer.getInt();
            if (buffer.remaining() < len) {
                buffer.reset(); // half packet: rewind to before the header
                break;
            }
            byte[] msgBytes = new byte[len];
            buffer.get(msgBytes);
            messages.add(new String(msgBytes, StandardCharsets.UTF_8));
        }
        buffer.compact(); // keep leftover bytes for the next read
        return messages;
    }
}
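
The decode() algorithm can be exercised standalone. In this sketch (the decode logic is inlined so the demo compiles on its own; the helper names are illustrative), two frames arrive stuck together plus the first 2 bytes of a third frame: one decode call returns both complete messages, and compact() preserves the half packet for the next read:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Standalone exercise of the length-field decode algorithm described above.
public class CodecDemo {
    static List<String> decode(ByteBuffer buffer) {
        List<String> messages = new ArrayList<>();
        buffer.flip();
        while (buffer.remaining() >= 4) {
            buffer.mark();
            int len = buffer.getInt();
            if (buffer.remaining() < len) { buffer.reset(); break; }
            byte[] body = new byte[len];
            buffer.get(body);
            messages.add(new String(body, StandardCharsets.UTF_8));
        }
        buffer.compact();
        return messages;
    }

    static void put(ByteBuffer conn, String msg) { // encode one frame into conn
        byte[] b = msg.getBytes(StandardCharsets.UTF_8);
        conn.putInt(b.length);
        conn.put(b);
    }

    public static void main(String[] args) {
        ByteBuffer conn = ByteBuffer.allocate(1024); // per-connection buffer
        put(conn, "hello");                          // frame 1
        put(conn, "world");                          // frame 2 (sticky)
        conn.putShort((short) 0);                    // 2 stray bytes of a future frame
        List<String> msgs = decode(conn);
        System.out.println(msgs);            // prints [hello, world]
        System.out.println(conn.position()); // prints 2: leftover bytes kept
    }
}
```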

The decoder is used in a read loop that accumulates bytes in a per‑connection buffer, invokes the codec, and forwards each complete message to the business handler.
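
A sketch of that read loop, driven here by a java.nio.channels.Pipe instead of a real socket so it runs standalone (the class name and buffer wiring are assumptions; the decode logic mirrors the codec above). The sender deliberately splits one frame across two writes, and the per‑connection buffer reassembles it across the two reads:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Read-loop sketch: accumulate channel bytes in a per-connection buffer,
// then hand the buffer to the decoder after every read.
public class ReadLoopDemo {
    static List<String> decode(ByteBuffer buffer) { // same algorithm as the codec
        List<String> messages = new ArrayList<>();
        buffer.flip();
        while (buffer.remaining() >= 4) {
            buffer.mark();
            int len = buffer.getInt();
            if (buffer.remaining() < len) { buffer.reset(); break; }
            byte[] body = new byte[len];
            buffer.get(body);
            messages.add(new String(body, StandardCharsets.UTF_8));
        }
        buffer.compact();
        return messages;
    }

    public static void main(String[] args) throws IOException {
        Pipe pipe = Pipe.open();
        byte[] body = "ping".getBytes(StandardCharsets.UTF_8);
        ByteBuffer frame = ByteBuffer.allocate(4 + body.length);
        frame.putInt(body.length);
        frame.put(body);
        frame.flip();

        ByteBuffer conn = ByteBuffer.allocate(1024); // per-connection buffer
        // First write delivers only 3 of the 8 frame bytes.
        pipe.sink().write((ByteBuffer) frame.duplicate().limit(3));
        pipe.source().read(conn);
        System.out.println(decode(conn)); // prints []: half packet retained
        // Second write delivers the rest of the frame.
        pipe.sink().write((ByteBuffer) frame.position(3));
        pipe.source().read(conn);
        System.out.println(decode(conn)); // prints [ping]
    }
}
```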

4. Client & Server Startup Process

Both client and server are started with a custom event‑loop group, a codec instance, and a packet handler. The server binds to a port, while the client connects to the server. The shutdown is performed gracefully via shutdownGracefully().

public static void main(String[] args) throws Exception { // sync() may be interrupted
    PacketEventLoopGroup workGroup = new PacketEventLoopGroup(4); // 4 worker event loops
    MessageCodec codec = new LengthFieldMessageCodec();
    PacketHandler handler = new ServerPacketHandler();
    try {
        PacketServerBootstrap bootstrap = new PacketServerBootstrap();
        bootstrap.group(workGroup)
                 .childHandler(codec, handler) // codec frames bytes, handler gets whole messages
                 .bind(10020)
                 .sync();                      // block until the server socket is bound
    } finally {
        workGroup.shutdownGracefully();        // release event-loop threads on exit
    }
}

The mini‑Netty framework now supports selector‑driven I/O, buffer management that solves TCP sticky/half‑packet issues, and decoupled event handling via handlers. Remaining shortcomings include blocking writes, lack of heartbeat/keep‑alive, and a single‑handler pipeline, which will be addressed in the next article by introducing a Netty‑style pipeline.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: TCP, network programming, Java NIO, sticky packet, message decoder