Implementing High‑Performance Long‑Connection Services with Netty: Challenges and Optimizations
This article explains how to build a scalable long‑connection service with Netty, covering the underlying concepts, Linux kernel tuning, code examples for Java NIO and Netty, bottlenecks in connection count and QPS, data‑structure and GC issues, and practical optimization techniques for reaching hundreds of thousands of concurrent connections.
About a year and a half ago the author needed an Android push service and discovered that the Android ecosystem lacks a unified push solution; most projects resorted to polling. After evaluating JPush’s long‑connection service (supporting 500k‑1M connections), the team adopted its free plan and later decided to optimize their own long‑connection server.
The article summarizes the difficulties and optimization points when implementing a long‑connection service with Netty.
What is Netty
Netty is an asynchronous event‑driven network application framework for rapid development of maintainable high‑performance protocol servers & clients.
Netty offers high performance, zero‑copy buffers, a native epoll transport on Linux, compatibility with Java NIO/NIO.2, pooled buffers, and more. The author recommends reading Netty in Action for deeper insight.
Bottlenecks
The two main goals are increasing the number of concurrent connections and raising QPS (queries per second). The real bottlenecks lie not in Netty itself but in Linux kernel limits (maximum open file descriptors, per‑process limits) and in data‑structure design.
More Connections
Both Java NIO and Netty can handle millions of connections because they use non‑blocking I/O. Sample Java NIO code:
ServerSocketChannel ssc = ServerSocketChannel.open();
Selector sel = Selector.open();
ssc.configureBlocking(false);
ssc.socket().bind(new InetSocketAddress(8080));
ssc.register(sel, SelectionKey.OP_ACCEPT);
while (true) {
    sel.select();
    Iterator<SelectionKey> it = sel.selectedKeys().iterator();
    while (it.hasNext()) {
        SelectionKey skey = it.next();
        it.remove();
        if (skey.isAcceptable()) {
            // accept the new connection; it must also be made non-blocking
            SocketChannel ch = ssc.accept();
            ch.configureBlocking(false);
        }
    }
}

Equivalent Netty bootstrap code:
NioEventLoopGroup bossGroup = new NioEventLoopGroup();
NioEventLoopGroup workerGroup = new NioEventLoopGroup();
ServerBootstrap bootstrap = new ServerBootstrap();
bootstrap.group(bossGroup, workerGroup);
bootstrap.channel(NioServerSocketChannel.class);
bootstrap.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
protected void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline pipeline = ch.pipeline();
// todo: add handlers
}
});
bootstrap.bind(8080).sync();

Linux kernel parameters (e.g., the maximum number of open file descriptors) must be increased; otherwise the connection count is limited well below what Netty itself can handle.
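As a concrete illustration of the kernel tuning mentioned above (the exact values are deployment‑specific assumptions, not figures from the article), the per‑process and system‑wide file‑descriptor limits might be raised like this on Linux:

```shell
# Raise the per-process open-file limit for the current shell
# (and any JVM launched from it).
ulimit -n 1000000

# System-wide ceilings; values are illustrative, tune for your workload.
# fs.file-max: total open files allowed across the whole system.
# fs.nr_open: per-process hard ceiling that ulimit -n cannot exceed.
sysctl -w fs.file-max=2000000
sysctl -w fs.nr_open=2000000

# To make the per-user limit persistent, add to /etc/security/limits.conf:
#   *  soft  nofile  1000000
#   *  hard  nofile  1000000
```

Each established TCP connection consumes one file descriptor in the server process, so these limits directly cap the achievable connection count.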
Higher QPS
Because Netty uses non‑blocking I/O, QPS does not degrade as connections grow, as long as memory suffices. The real QPS bottleneck often lies in data‑structure choices: ConcurrentLinkedQueue.size(), for example, traverses the entire queue, so it is O(n) and becomes expensive when called frequently on large queues.
Maintaining a separate AtomicInteger counter alongside the queue reduces the size check to an O(1) read, provided an eventually consistent count is acceptable.
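A minimal sketch of this counter optimization (the class and method names are mine, not from the article's codebase): wrap the queue and track the count in an AtomicInteger, so reading the size becomes an O(1) read instead of an O(n) traversal. The counter can momentarily disagree with the queue's true contents, which is fine when only an approximate size is needed.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical wrapper: ConcurrentLinkedQueue.size() walks the whole
// linked list (O(n)), so we maintain the count separately. The count
// is eventually consistent with the queue's actual contents.
public class CountedQueue<E> {
    private final ConcurrentLinkedQueue<E> queue = new ConcurrentLinkedQueue<>();
    private final AtomicInteger size = new AtomicInteger();

    public void offer(E e) {
        queue.offer(e);          // ConcurrentLinkedQueue.offer never fails
        size.incrementAndGet();
    }

    public E poll() {
        E e = queue.poll();
        if (e != null) {
            size.decrementAndGet();
        }
        return e;
    }

    public int size() {
        return size.get();       // O(1); may briefly lag the real queue state
    }
}
```

Between the queue operation and the counter update another thread can observe a slightly stale size, which is exactly the eventual‑consistency trade‑off the article accepts.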
CPU and GC Bottlenecks
Use a profiling tool such as VisualVM (in sampler mode) to locate hot spots. In the author's case, ConcurrentLinkedQueue.size() showed up as a major hotspot.
GC pressure can be reduced by raising -XX:NewRatio, which shrinks the young generation in favor of a larger old generation; this suits production workloads where long‑lived connections keep many objects alive.
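For illustration (the heap sizes and jar name are hypothetical, not the article's actual launch command), such a GC adjustment might look like:

```shell
# -XX:NewRatio=3 sizes the old generation at 3x the young generation,
# leaving more room for long-lived per-connection objects.
# Heap sizes and the server jar name are illustrative assumptions.
java -Xms8g -Xmx8g -XX:NewRatio=3 -jar push-server.jar
```

With long‑connection workloads, most per‑connection state survives minor collections anyway, so a larger old generation reduces promotion pressure and full‑GC frequency.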
Other Optimizations
Refer to the “Netty Best Practices” site and the Netty in Action book for additional tweaks that can further boost QPS.
Running on a 16‑core machine with 120 GB of RAM and Java 1.6, the author reached 600,000 concurrent connections and 200,000 QPS, with headroom for further scaling.
Top Architect
Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, as well as evolving architectures with internet technologies. Idea‑driven, sharing‑minded architects are welcome to exchange and learn together.