
Implementing a High‑Concurrency Netty Server and Client with WebSocket, Heartbeat, and Reconnection

This article demonstrates how to build a Netty‑based server and client that support long‑lived WebSocket connections, high concurrency, heartbeat monitoring, and automatic reconnection, with complete code examples and detailed explanations of each component.

Hujiang Technology

Background

To support CCtalk video live streaming and real‑time chat on the web, a short‑lived connection is insufficient; a long‑lived connection is required, which demands a framework capable of handling massive concurrency. Netty combined with WebSocket is presented as an effective solution.

Overview

The previous article covered Netty fundamentals; this piece focuses on practical usage, implementing both server‑side and client‑side components through a simple Echo program.

Server Implementation

The server design addresses four key aspects: an efficient thread model for high concurrency, fault tolerance, business‑logic handling, and heartbeat monitoring. The complete server code is shown below.

public class NormalNettyServer {
    private int serverPort = 9000;
    private String serverIp = "192.168.2.102";

    public NormalNettyServer(int port) {
        serverPort = port;
    }

    public void start() throws Exception {
        // (1) Boss group: accepts incoming connections (one thread is usually
        // enough when binding a single port)
        EventLoopGroup bossGroup = new NioEventLoopGroup(10);
        // (2) Worker group: handles I/O for the accepted connections
        EventLoopGroup workGroup = new NioEventLoopGroup(10);
        try {
            // (3) ServerBootstrap
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workGroup)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 public void initChannel(SocketChannel ch) throws Exception {
                     // (4) Add heartbeat handler (reader idle 60s, writer idle 10s, all idle 10s)
                     ch.pipeline().addLast("timeout", new IdleStateHandler(60, 10, 10, TimeUnit.SECONDS));
                     // (5) Add business handler
                     ch.pipeline().addLast("echo", new EchoHandler());
                 }
             })
             .option(ChannelOption.SO_BACKLOG, 128)
             .childOption(ChannelOption.SO_KEEPALIVE, true);

            // (6) Bind and start accepting connections
            ChannelFuture f = b.bind(serverIp, serverPort).sync();
            f.channel().closeFuture().sync();
        } finally {
            workGroup.shutdownGracefully();
            bossGroup.shutdownGracefully();
        }
    }
}

Key explanations:

EventLoopGroup provides multithreaded I/O event loops; the boss group accepts new connections while the worker group handles I/O on accepted channels, each configured with 10 threads here.

ServerBootstrap is a helper class for configuring and starting NIO servers.

Both thread pools are registered with the bootstrap.

NioServerSocketChannel is the NIO‑based server socket implementation.

SocketChannel represents a TCP connection; handlers are added to the pipeline for heartbeat and business logic.

ChannelOption.SO_BACKLOG and SO_KEEPALIVE configure socket parameters.

Binding returns a ChannelFuture; its asynchronous nature is discussed later.
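That asynchronous nature can be seen by attaching a listener to the bind future instead of blocking with sync(); the sketch below is illustrative (the class name and message strings are not from the article), with the callback text factored into a small pure helper:

```java
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;

public final class BindListenerSketch {
    // bind() returns immediately; the listener fires once the bind
    // actually completes or fails.
    public static void listen(ChannelFuture bindFuture) {
        bindFuture.addListener((ChannelFutureListener) f -> {
            System.out.println(message(f.isSuccess()));
            if (!f.isSuccess()) {
                f.cause().printStackTrace();
            }
        });
    }

    // Pure helper so the callback text is easy to verify in isolation.
    static String message(boolean success) {
        return success ? "server bound" : "server bind failed";
    }
}
```

Calling sync() on the returned future, as the server code above does, simply blocks the calling thread until that same completion event arrives.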

The server also defines two handlers:

public class EchoHandler extends SimpleChannelInboundHandler<ByteBuf> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, ByteBuf buf) throws Exception {
        byte[] packet = new byte[buf.readableBytes()];
        buf.readBytes(packet);
        // PB protocol parsing (example)
        HelloTest.Hello hello = HelloTest.Hello.parseFrom(packet);
        System.out.println(hello.getContent());
        // readBytes() consumed the buffer, so echo a fresh copy of the payload
        ctx.channel().writeAndFlush(Unpooled.wrappedBuffer(packet));
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        cause.printStackTrace();
        ctx.close();
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        super.channelActive(ctx);
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        super.channelInactive(ctx);
    }
}

Notes: Netty handlers typically extend SimpleChannelInboundHandler or ChannelInboundHandlerAdapter; data is transferred using ByteBuf; this example uses a protobuf (PB) protocol and simply echoes received packets.
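As an alternative to parsing protobuf by hand in channelRead0, Netty ships ready‑made protobuf codecs that can be placed in the pipeline; a sketch of such an initializer, assuming the same generated HelloTest.Hello message used above (the framing choice of varint32 length prefixes is an assumption, not stated in the article):

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.protobuf.ProtobufDecoder;
import io.netty.handler.codec.protobuf.ProtobufEncoder;
import io.netty.handler.codec.protobuf.ProtobufVarint32FrameDecoder;
import io.netty.handler.codec.protobuf.ProtobufVarint32LengthFieldPrepender;

public class PbChannelInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline()
          // inbound: split the byte stream into varint-length-prefixed frames,
          // then decode each frame into a HelloTest.Hello instance
          .addLast(new ProtobufVarint32FrameDecoder())
          .addLast(new ProtobufDecoder(HelloTest.Hello.getDefaultInstance()))
          // outbound: prepend the length prefix, then serialize messages
          .addLast(new ProtobufVarint32LengthFieldPrepender())
          .addLast(new ProtobufEncoder())
          // the business handler now receives decoded messages, not ByteBufs
          .addLast(new SimpleChannelInboundHandler<HelloTest.Hello>() {
              @Override
              protected void channelRead0(ChannelHandlerContext ctx, HelloTest.Hello msg) {
                  ctx.writeAndFlush(msg); // echo the decoded message back
              }
          });
    }
}
```

This keeps framing and serialization out of the business handler, which is the usual division of labor in a Netty pipeline.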

Client Implementation

The client must support reconnection, heartbeat, and business processing. Important functions are highlighted below.

public boolean connect() {
    Bootstrap b = new Bootstrap();
    final HeartBeatHandler hearthandler = new HeartBeatHandler(this);
    final ClientHandler handler = new ClientHandler(this);
    // NOTE: in production, reuse one EventLoopGroup across reconnect attempts;
    // creating a new group per connect() call leaks threads
    EventLoopGroup loop = new NioEventLoopGroup();
    b.group(loop).channel(NioSocketChannel.class);
    b.handler(new ChannelInitializer<Channel>() {
        @Override
        protected void initChannel(Channel ch) throws Exception {
            ChannelPipeline pipeline = ch.pipeline();
            // (3) IdleStateHandler for heartbeat (read idle 60s, write idle 20s)
            pipeline.addLast(new IdleStateHandler(60, 20, 0, TimeUnit.SECONDS));
            pipeline.addLast("hearthandler", hearthandler);
            // Business handler
            pipeline.addLast("handler", handler);
        }
    });
    b.option(ChannelOption.SO_KEEPALIVE, true);
    b.option(ChannelOption.TCP_NODELAY, true);
    ChannelFuture future = b.connect(host, port);
    future.addListener(new ConnectionListener(this)); // (4) listen for connection result
    return true;
}

Supporting classes:

public class ConnectionListener implements ChannelFutureListener {
    // the enclosing client class (type name illustrative)
    private final NettyClient client;

    public ConnectionListener(NettyClient client) {
        this.client = client;
    }

    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        if (!future.isSuccess()) {
            System.out.println("connect failed, scheduling reconnect");
            this.client.reconnect(future.channel());
        } else {
            System.out.println("connect success");
            this.client.setChannel(future.channel());
        }
    }
}
public class ClientHandler extends ChannelInboundHandlerAdapter {
    // the enclosing client class (type name illustrative)
    private final NettyClient client;

    public ClientHandler(NettyClient client) {
        this.client = client;
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        System.out.println("SuperServer disconnected: " + ctx.channel().remoteAddress());
        client.reconnect(ctx.channel());
        super.channelInactive(ctx);
    }
}
public void reconnect(final Channel ch) {
    final EventLoop eventLoop = ch.eventLoop();
    eventLoop.schedule(new Runnable() {
        @Override
        public void run() {
            connect();
            System.out.println("reconnect server:" + host + ", Port:" + port);
        }
    }, 10L, TimeUnit.SECONDS);
}

The reconnection logic is triggered either by a failed ChannelFuture or by the handler’s channelInactive event. Heartbeat handling mirrors the server side, ensuring both ends can detect idle connections.
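The HeartBeatHandler registered in connect() is not shown in the article; a simplified sketch of what such a handler typically does, reacting to the IdleStateEvent fired by IdleStateHandler (the "PING" payload and the close‑on‑read‑idle policy are assumptions standing in for the real protocol's heartbeat message):

```java
import java.nio.charset.StandardCharsets;

import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.timeout.IdleState;
import io.netty.handler.timeout.IdleStateEvent;

public class HeartBeatHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof IdleStateEvent) {
            IdleState state = ((IdleStateEvent) evt).state();
            if (state == IdleState.WRITER_IDLE) {
                // nothing written for 20s: send a heartbeat so the server's
                // reader-idle timer does not expire
                ctx.writeAndFlush(Unpooled.copiedBuffer("PING", StandardCharsets.UTF_8));
            } else if (state == IdleState.READER_IDLE) {
                // nothing received for 60s: assume the peer is gone; closing
                // fires channelInactive, which triggers the reconnect path above
                ctx.close();
            }
        } else {
            super.userEventTriggered(ctx, evt);
        }
    }
}
```

Note the division of labor: IdleStateHandler only detects idleness and fires events; deciding whether to ping or to close is left to this handler.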

Summary

The article provides a concise yet complete Netty server and client implementation covering thread models, fault tolerance, business processing, heartbeat monitoring, and automatic reconnection, with all source code available on GitHub.
