Inside Netty: How Clients Send Requests and Receive Responses After a Connection Is Established
This article explains Netty's internal mechanisms for buffering outbound data, flushing it to the network, and handling inbound responses with NIO selectors; it also outlines the overall Netty workflow and clarifies the roles of the Backlog and KeepAlive parameters in TCP connections.
How does Netty send client requests after a connection is established?
After a connection is successfully created, Netty buffers outbound messages in a linked list. The write() method places the message into the unflushedEntry list rather than sending it immediately.
When flush() is invoked, the channel's ChannelOutboundBuffer.addFlush() moves entries from the unflushedEntry list to the flushedEntry list, marking them ready for actual network transmission. This is why the outbound buffer maintains two linked lists: one for messages that have merely been written, and one for messages that are eligible to go out on the wire.
Finally, Netty delegates the actual write to Java NIO's low‑level socket write operation.
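The two-queue idea above can be illustrated with a minimal stdlib sketch. This is not Netty's actual ChannelOutboundBuffer (which uses intrusive linked Entry nodes); the class and method names here are illustrative stand-ins:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch of the two-queue outbound buffer: write() only appends to the
// "unflushed" queue; flush() moves entries to the "flushed" queue, and only
// flushed entries are ever handed to the underlying socket write.
class OutboundBufferSketch {
    private final Deque<String> unflushed = new ArrayDeque<>();
    private final Deque<String> flushed = new ArrayDeque<>();

    void write(String msg) {      // like Channel.write(): buffer only, no I/O
        unflushed.add(msg);
    }

    void flush() {                // like addFlush(): mark buffered entries writable
        flushed.addAll(unflushed);
        unflushed.clear();
    }

    String nextForSocket() {      // what the event loop would pass to the NIO write
        return flushed.poll();    // unflushed entries are never visible here
    }
}
```

A message written but not yet flushed is invisible to the socket-writing side, which is exactly why calling write() alone sends nothing.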
How does the Netty client receive responses from the server?
The client’s NioEventLoop registers an OP_READ event on the selector. When the server pushes data into the SocketChannel, the selector triggers the read event, the event loop reads the bytes, and the user‑defined handler processes them.
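The same register-select-read cycle can be demonstrated with plain java.nio, which is what Netty's NioEventLoop builds on. A Pipe stands in for the SocketChannel here so the sketch needs no real network:

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.charset.StandardCharsets;

// Stdlib-only sketch of the read path: a channel is registered for OP_READ,
// the selector blocks until data arrives, and the "event loop" then reads it.
public class ReadEventSketch {
    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);

        Selector selector = Selector.open();
        // like the NioEventLoop registering OP_READ for the connection
        pipe.source().register(selector, SelectionKey.OP_READ);

        // the "server" side pushes a response into the channel
        pipe.sink().write(ByteBuffer.wrap("response".getBytes(StandardCharsets.UTF_8)));

        selector.select();                      // wakes up when the read event fires
        for (SelectionKey key : selector.selectedKeys()) {
            if (key.isReadable()) {
                ByteBuffer buf = ByteBuffer.allocate(64);
                pipe.source().read(buf);
                buf.flip();
                // a user-defined handler would process the bytes here
                System.out.println(StandardCharsets.UTF_8.decode(buf));
            }
        }
    }
}
```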
Netty workflow and design overview
Netty’s processing can be summarized in the following steps:
Server opens a listening port.
Client initiates a connection request.
Boss thread pool performs the TCP three‑way handshake via ServerSocketChannel.
A SocketChannel instance is created to represent the connection.
The connection is assigned to a Worker thread, and its read events are registered on the selector.
When data arrives, the Worker thread reads it.
The read data is handed to a custom business thread pool for processing.
After business logic finishes, the response is returned to the Worker thread.
The Worker thread writes the response back through the SocketChannel.
Netty uses three types of threads:
Boss thread: handles the TCP handshake and assigns new connections to workers.
Worker thread: monitors read/write events and performs I/O.
Custom business thread: executes non‑I/O tasks such as encoding, decoding, business logic, and database access.
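The boss/worker division above can be sketched with plain java.nio. This is a deliberately simplified stand-in for Netty's event-loop groups (single boss, single worker, echo as the "business logic"); the class name and hand-off queue are illustrative:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch of the workflow: the boss accepts the connection, hands it to the
// worker via a queue, and the worker serves its OP_READ events.
public class BossWorkerSketch {
    public static void main(String[] args) throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        Queue<SocketChannel> handoff = new ConcurrentLinkedQueue<>();
        Selector workerSelector = Selector.open();

        // Boss thread: completes accept() after the TCP handshake, then hands off.
        Thread boss = new Thread(() -> {
            try {
                SocketChannel conn = server.accept();
                conn.configureBlocking(false);
                handoff.add(conn);
                workerSelector.wakeup();       // let the worker register it
            } catch (IOException ignored) { }
        });
        boss.start();

        // Worker thread: registers handed-off channels, then echoes on OP_READ.
        Thread worker = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    workerSelector.select(200);
                    SocketChannel c;
                    while ((c = handoff.poll()) != null) {
                        c.register(workerSelector, SelectionKey.OP_READ);
                    }
                    Iterator<SelectionKey> it = workerSelector.selectedKeys().iterator();
                    while (it.hasNext()) {
                        SelectionKey key = it.next();
                        it.remove();
                        if (key.isReadable()) {
                            SocketChannel ch = (SocketChannel) key.channel();
                            ByteBuffer buf = ByteBuffer.allocate(256);
                            if (ch.read(buf) > 0) {
                                buf.flip();
                                ch.write(buf); // "business logic": echo the request
                            }
                        }
                    }
                }
            } catch (IOException ignored) { }
        });
        worker.setDaemon(true);
        worker.start();

        // Client: connect, send a request, block until the response arrives.
        try (SocketChannel client = SocketChannel.open(new InetSocketAddress("127.0.0.1", port))) {
            client.write(ByteBuffer.wrap("ping".getBytes(StandardCharsets.UTF_8)));
            ByteBuffer resp = ByteBuffer.allocate(16);
            client.read(resp);
            resp.flip();
            System.out.println(StandardCharsets.UTF_8.decode(resp)); // echoed request
        }
    }
}
```

In real Netty the boss and worker roles are played by NioEventLoopGroup instances, and the business logic would run in a separate thread pool rather than inside the worker loop.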
BackLog and KeepAlive parameters
BackLog controls the size of the accept queue, which holds connections that have completed the three‑way handshake but have not yet been accepted by the application. Setting the backlog too large lets connections pile up faster than the application can accept them, which can degrade performance.
The TCP three‑way handshake process determines how connections move from syn_queue to accept_queue.
KeepAlive is a TCP keep‑alive probe that checks whether an idle connection is still alive. Netty disables it by default to avoid unnecessary bandwidth consumption and accidental termination of short‑lived connections.
TCP KeepAlive and HTTP Keep‑Alive are unrelated; HTTP Keep‑Alive simply reuses an existing TCP connection for multiple HTTP requests, while TCP KeepAlive is a low‑level probe to detect dead peers.
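Both parameters surface directly in the stdlib socket API, shown below; in Netty they correspond to ChannelOption.SO_BACKLOG (a server option) and ChannelOption.SO_KEEPALIVE (a per‑connection child option):

```java
import java.net.InetSocketAddress;
import java.net.StandardSocketOptions;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

// Stdlib sketch of where backlog and keep-alive plug in.
public class TcpOptionsSketch {
    public static void main(String[] args) throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        // backlog: a hint for the length of the accept queue holding
        // handshake-complete, not-yet-accepted connections
        server.bind(new InetSocketAddress("127.0.0.1", 0), 128);

        SocketChannel client = SocketChannel.open(server.getLocalAddress());
        // TCP keep-alive probing is off by default, matching Netty's default
        client.setOption(StandardSocketOptions.SO_KEEPALIVE, true);
        System.out.println(client.getOption(StandardSocketOptions.SO_KEEPALIVE)); // true

        client.close();
        server.close();
    }
}
```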
Summary
This article covered how Netty buffers and flushes outbound data, how the client reads inbound responses, the overall Netty workflow with its Boss, Worker, and business threads, and the significance of the BackLog and KeepAlive configuration parameters.
Java Captain
Focused on Java technologies: SSM, the Spring ecosystem, microservices, MySQL, MyCat, clustering, distributed systems, middleware, Linux, networking, multithreading; occasionally covers DevOps tools like Jenkins, Nexus, Docker, ELK; shares practical tech insights and is dedicated to full‑stack Java development.