
Deep Dive into Netty’s Asynchronous Model, Epoll, IO Multiplexing, and JNI with Hands‑On C Code

This article explains Netty’s asynchronous architecture, compares classic multithread, Reactor, select, poll and epoll models, clarifies level‑triggered versus edge‑triggered event handling, and provides step‑by‑step JNI and hand‑written epoll server examples in C to illustrate high‑performance backend development.

Code Ape Tech Column

Netty’s Asynchronous Model

Netty uses a purely asynchronous, event-driven model driven by the bossGroup and workerGroup configured on ServerBootstrap, allowing it to handle many connections efficiently without dedicating a thread to each client.

Classic Multithread Model

Each client gets a dedicated thread; this works for few clients but quickly exhausts memory and crashes under heavy load.

Classic Reactor Model

The Reactor dispatches I/O events to handlers, similar to a telephone exchange, separating the Reactor (event dispatcher) from Handlers (business logic).

Reactor Evolution in Netty

Netty implements several Reactor variants: single‑thread, thread‑pooled, and multi‑Reactor (main and sub reactors), improving scalability on multi‑CPU machines.

JNI – Calling Native C from Java

JNI (Java Native Interface) lets Java invoke C functions. The article provides a minimal Java class DataSynchronizer with a native method syncData, and shows how to generate the header, implement the C side, compile a shared library, and run the program.

public class DataSynchronizer {
    static { System.loadLibrary("synchronizer"); }
    private native String syncData(String status);
    public static void main(String... args) {
        String rst = new DataSynchronizer().syncData("ProcessStep2");
        System.out.println("The execute result from C is : " + rst);
    }
}
#include <jni.h>
#include <stdio.h>
#include "DataSynchronizer.h"
JNIEXPORT jstring JNICALL Java_DataSynchronizer_syncData(JNIEnv *env, jobject obj, jstring str) {
    const char *inCStr = (*env)->GetStringUTFChars(env, str, NULL);
    if (inCStr == NULL) return NULL;
    printf("In C, the received string is: %s\n", inCStr);
    (*env)->ReleaseStringUTFChars(env, str, inCStr);
    char outCStr[128];
    printf("Enter a String: ");
    scanf("%127s", outCStr); /* bounded read: avoid overflowing outCStr[128] */
    return (*env)->NewStringUTF(env, outCStr);
}

IO Multiplexing Models

The article reviews the select, poll, and epoll system calls, showing their signatures, usage patterns, and limitations (e.g., the fixed fd_set bitmap size, O(n) scanning of all registered descriptors, and the kernel-user copy overhead on every call).

Select Example

int select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *errorfds, struct timeval *timeout);

Poll Example

int poll(struct pollfd *fds, nfds_t nfds, int timeout);

epoll creates an epoll instance (epoll_create), registers fds with epoll_ctl, and waits for ready events with epoll_wait. It keeps registered fds in a kernel-side red-black tree and delivers ready fds via a ready list, eliminating the 1024-fd limit and avoiding the O(n) scan over all registered descriptors.

Level‑Triggered vs Edge‑Triggered

Level‑triggered (LT) repeatedly notifies while a buffer is readable/writable; edge‑triggered (ET) notifies only on state changes, requiring the application to drain the buffer until read returns EAGAIN.

Examples show how a single write of many bytes generates one ET notification but multiple LT notifications.

Hand‑Written epoll Server (C)

The article provides a complete ~200‑line epoll server with functions to set non‑blocking mode, add fds, and separate LT and ET processing loops. Key snippets are:

#define MAX_EVENT_NUMBER 1024
#define BUFFER_SIZE 10
#define ENABLE_ET 0

/* Put fd into non-blocking mode; return its old flags. */
int SetNonblocking(int fd) {
    int old_option = fcntl(fd, F_GETFL);
    fcntl(fd, F_SETFL, old_option | O_NONBLOCK);
    return old_option;
}

/* Register fd with the epoll instance, optionally in edge-triggered mode. */
void AddFd(int epoll_fd, int fd, bool enable_et) {
    struct epoll_event event;
    event.data.fd = fd;
    event.events = enable_et ? (EPOLLIN | EPOLLET) : EPOLLIN;
    epoll_ctl(epoll_fd, EPOLL_CTL_ADD, fd, &event);
    SetNonblocking(fd);  /* ET mode requires non-blocking fds */
}

void lt_process(struct epoll_event* events, int number, int epoll_fd, int listen_fd) { ... }
void et_process(struct epoll_event* events, int number, int epoll_fd, int listen_fd) { ... }
int main(int argc, char* argv[]) { ... }

Running the server demonstrates both LT and ET behavior with sample client input.

Performance Note

The author mentions plans to benchmark the server on a VM cluster to achieve “single‑machine million‑connections” using ET mode and kernel tuning.

Promotional Closing

Finally, the author asks readers to like, share, and follow, and advertises a paid “knowledge planet” with Spring, MyBatis, DDD, and other premium content.

Tags: backend development, C++, Netty, epoll, JNI, IO multiplexing
Written by

Code Ape Tech Column

Former Ant Group P8 engineer, pure technologist, sharing full‑stack Java, job interview and career advice through a column. Site: java-family.cn
