Understanding the C10K Problem and I/O Models: BIO, NIO, select, poll, epoll, and AIO
This article explains the historic C10K problem of handling ten thousand concurrent connections, compares traditional BIO with modern I/O models such as NIO, select, poll, epoll, and AIO, and provides Java and C++ code examples illustrating how each model improves scalability and performance.
In 1999 Dan Kegel introduced the C10K problem, which challenges a single machine to handle 10,000 concurrent connections efficiently. Solving this requires an I/O model that can schedule connections without excessive thread or process creation.
BIO (Blocking I/O) creates a dedicated thread for each client, leading to massive resource consumption and context‑switch overhead when scaling to thousands of connections.
public void serverStartBIO() throws IOException {
    // Create server socket listening on port 8080
    ServerSocket server = new ServerSocket(8080);
    while (true) {
        // Block until a client connects
        Socket client = server.accept();
        new Thread(new Runnable() {
            public void run() {
                try {
                    InputStream in = client.getInputStream();
                    BufferedReader reader = new BufferedReader(new InputStreamReader(in));
                    while (true) {
                        // Block waiting for a client message
                        String dataline = reader.readLine();
                        if (null != dataline) {
                            // Process data
                        } else {
                            // End of stream: client closed the connection
                            client.close();
                            break;
                        }
                    }
                    System.out.println("Client disconnected");
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }).start();
    }
}

NIO (Non‑blocking I/O) introduces channels and buffers, allowing the application to continue processing while I/O operations are pending. It reduces the need for a thread per connection.
public void serverStartNIO() throws Exception {
    // List to maintain all client connections
    LinkedList<SocketChannel> clients = new LinkedList<>();
    ByteBuffer buffer = ByteBuffer.allocate(1024);
    ServerSocketChannel ss = ServerSocketChannel.open();
    ss.bind(new InetSocketAddress(8080));
    ss.configureBlocking(false);
    while (true) {
        Thread.sleep(1000);
        // Non-blocking accept: returns null when no connection is pending
        SocketChannel client = ss.accept();
        if (client != null) {
            client.configureBlocking(false);
            clients.add(client);
        }
        // Poll every known connection for readable data
        for (SocketChannel c : clients) {
            buffer.clear();
            int num = c.read(buffer);
            if (num > 0) {
                // Data read
            }
        }
    }
}

The non‑blocking approach still requires the program to poll each connection on every pass, which wastes CPU time on idle connections and increases complexity.
Select is an I/O multiplexing mechanism that lets a single process monitor multiple file descriptors. It reduces thread count but copies fd sets between user and kernel space and is limited to 1024 descriptors on many systems.
int select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, struct timeval *timeout);

Here nfds is the highest-numbered descriptor plus one, and timeout may be modified by the call. Implementation example:
public void serverStartSelect() throws IOException {
    ServerSocketChannel server = ServerSocketChannel.open();
    server.configureBlocking(false);
    server.bind(new InetSocketAddress(8080));
    Selector selector = Selector.open();
    server.register(selector, SelectionKey.OP_ACCEPT);
    while (true) {
        while (selector.select() > 0) {
            Set<SelectionKey> selectionKeys = selector.selectedKeys();
            Iterator<SelectionKey> iter = selectionKeys.iterator();
            while (iter.hasNext()) {
                SelectionKey key = iter.next();
                iter.remove();
                if (key.isAcceptable()) {
                    ServerSocketChannel ssc = (ServerSocketChannel) key.channel();
                    SocketChannel client = ssc.accept();
                    client.configureBlocking(false);
                    // Attach a per-connection buffer to the key
                    client.register(selector, SelectionKey.OP_READ, ByteBuffer.allocate(1024));
                } else if (key.isReadable()) {
                    // Read data from the client using the attached buffer
                }
            }
        }
    }
}

Poll works similarly to select but removes the 1024‑descriptor limit by taking an array of pollfd structures. It still copies the descriptor array between user and kernel space and scans it linearly on every call.
epoll (Linux‑specific) overcomes the select/poll limitations with an event‑driven design: epoll_create builds an epoll instance, epoll_ctl registers descriptors once, and epoll_wait returns only the descriptors that are ready. The kernel keeps registered descriptors in a red‑black tree and collects ready events on a ready list, so the cost of each wait depends on the number of ready events rather than the total number of connections.
#include <sys/epoll.h>

int epoll_create(int size);
int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event);
int epoll_wait(int epfd, struct epoll_event *events, int maxevents, int timeout);

Example C++ epoll server:
#include <sys/socket.h>
#include <netinet/in.h>
#include <sys/epoll.h>
#include <unistd.h>
#include <cstring>
#include <cstdio>
#include <cstdlib>
#include <cerrno>
int create_and_bind(int port) {
int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
struct sockaddr_in server_addr{};
memset(&server_addr, 0, sizeof(server_addr));
server_addr.sin_family = AF_INET;
server_addr.sin_addr.s_addr = htonl(INADDR_ANY);
server_addr.sin_port = htons(port);
bind(listen_fd, (struct sockaddr*)&server_addr, sizeof(server_addr));
return listen_fd;
}
int main() {
int listen_fd = create_and_bind(8080);
listen(listen_fd, 10);
int epoll_fd = epoll_create1(0);
struct epoll_event event{};
event.data.fd = listen_fd;
event.events = EPOLLIN;
if (epoll_ctl(epoll_fd, EPOLL_CTL_ADD, listen_fd, &event) == -1) {
perror("epoll_ctl");
exit(EXIT_FAILURE);
}
struct epoll_event events[64];
while (true) {
    // Block until at least one registered descriptor is ready
    int num_events = epoll_wait(epoll_fd, events, 64, -1);
    for (int i = 0; i < num_events; ++i) {
        if (events[i].data.fd == listen_fd) {
            // Accept the new client and register it with epoll
            int client_fd = accept(listen_fd, nullptr, nullptr);
            if (client_fd != -1) {
                struct epoll_event client_event{};
                client_event.data.fd = client_fd;
                client_event.events = EPOLLIN;
                epoll_ctl(epoll_fd, EPOLL_CTL_ADD, client_fd, &client_event);
            }
        } else {
            // Handle client data; a read of 0 means the peer closed
            char buf[1024];
            ssize_t n = read(events[i].data.fd, buf, sizeof(buf));
            if (n <= 0) {
                epoll_ctl(epoll_fd, EPOLL_CTL_DEL, events[i].data.fd, nullptr);
                close(events[i].data.fd);
            }
        }
    }
}
close(listen_fd);
return 0;
}

AIO (Asynchronous I/O) lets the OS perform I/O in the background and notifies the application upon completion. It offers high concurrency without per‑connection threads, but its programming model is more complex.
Java provides AIO via AsynchronousServerSocketChannel and AsynchronousSocketChannel. A simple example:
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;
public class AioServer {
public static void main(String[] args) throws IOException {
AsynchronousServerSocketChannel serverChannel = AsynchronousServerSocketChannel.open()
.bind(new InetSocketAddress(8080));
serverChannel.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
@Override
public void completed(AsynchronousSocketChannel clientChannel, Void attachment) {
serverChannel.accept(null, this);
handleClient(clientChannel);
}
@Override
public void failed(Throwable exc, Void attachment) {
// Handle accept failure
}
});
try { Thread.currentThread().join(); } catch (InterruptedException e) { e.printStackTrace(); }
}
private static void handleClient(AsynchronousSocketChannel clientChannel) {
ByteBuffer buffer = ByteBuffer.allocate(1024);
clientChannel.read(buffer, null, new CompletionHandler<Integer, Void>() {
@Override
public void completed(Integer bytesRead, Void attachment) {
if (bytesRead > 0) {
// Process data
} else if (bytesRead == -1) {
try { clientChannel.close(); } catch (IOException e) { e.printStackTrace(); }
}
}
@Override
public void failed(Throwable exc, Void attachment) {
// Read failed
}
});
}
}

In summary, the C10K problem sparked a progression of I/O models, from blocking BIO to non‑blocking NIO, then select/poll multiplexing, and finally epoll and AIO. Each step reduced per‑connection threads, user–kernel copying, and scanning overhead, enabling modern servers to handle tens or hundreds of thousands of concurrent connections and paving the way toward the emerging C10M challenge.
ZCY Technology
ZCY Technology Team (Zero), based in Hangzhou, is a growth-oriented team passionate about technology and craftsmanship. With around 500 members, we are building comprehensive engineering, project management, and talent development systems. We are committed to innovation and creating a cloud service ecosystem for government and enterprise procurement. We look forward to your joining us.