Comparison of select, poll, and epoll: Time Complexity, Advantages, and Implementation Details
This article compares the three I/O multiplexing mechanisms—select, poll, and epoll—by analyzing their time complexities, limitations, trigger modes, performance characteristics, and implementation details, helping developers choose the most suitable method for different Linux networking scenarios.
Overview
select has a time complexity of O(n) because it must linearly poll all file descriptors to find ready ones.
poll
poll also operates with O(n) complexity; it copies the user‑provided pollfd array to kernel space and checks each fd. Because the caller sizes that array itself rather than using a fixed bitmap, poll has no hard limit on the number of monitored connections.
epoll
epoll provides O(1) complexity by using an event‑driven model: the kernel notifies the application only about fds that have generated I/O events, eliminating the need for full scans.
All three mechanisms are synchronous I/O: they only report readiness, and the application must still perform the actual read/write itself, blocking during that data copy. Asynchronous I/O, by contrast, offloads the data transfer to the kernel and notifies the application once it has completed.
select
select monitors fds via a bitmap, which leads to several drawbacks:
Limited number of fds per process (FD_SETSIZE, 1024 by default on Linux).
Linear scanning of all fds on each call, causing performance degradation as the number of fds grows.
Large memory copy between user and kernel space for the fd set.
poll
poll behaves similarly to select but takes a caller-allocated pollfd array instead of a fixed-size bitmap, removing the hard fd limit. Its disadvantages include:
Entire fd array is copied between user and kernel space regardless of readiness.
"Level‑triggered" behavior: if an fd is reported but not processed, it will be reported again on the next poll.
epoll
epoll supports two trigger modes: LT (default, level‑triggered) and ET (edge‑triggered). In ET mode, an event is reported only once until new data arrives, requiring the application to read until EAGAIN or a short read.
Advantages of epoll:
No practical limit on concurrent connections; thousands of fds can be handled on modest hardware.
Higher efficiency because only active fds generate callbacks, avoiding linear scans.
Lower copy overhead: each fd is registered with the kernel only once via epoll_ctl, rather than being copied in on every wait call as with select and poll. (The often-repeated claim that epoll uses mmap to share memory with user space does not hold for modern kernels; epoll_wait simply copies out only the ready events.)
Key Differences Summary
Maximum connections per process
select is limited by FD_SETSIZE; poll removes this limit; epoll can handle tens of thousands of connections depending on system memory.
Performance with many fds
select suffers linear degradation; poll behaves similarly; epoll maintains performance as long as only a subset of fds are active.
Message transfer method
select and poll copy the entire fd set between kernel and user space on every call; epoll registers each fd with the kernel once, and epoll_wait returns only the ready events, so the per-call copy is small.
Implementation Details
select implementation
The call flow includes copying the fd_set from user to kernel, registering a callback via __pollwait, iterating over all fds, invoking their poll methods, and finally copying the result back to user space.
poll implementation
poll uses the pollfd structure instead of an fd_set, but still copies the entire array to kernel space and traverses it for readiness, incurring similar linear overhead.
epoll implementation
epoll introduces three system calls: epoll_create, epoll_ctl, and epoll_wait. Each fd is copied into the kernel only once, during epoll_ctl, and readiness is tracked via a kernel‑maintained ready list, so the cost of epoll_wait scales with the number of ready events rather than the total number of watched fds.
Conclusion
Choosing between select, poll, and epoll depends on the specific use case: epoll offers the best performance for large numbers of connections, but for a small set of highly active connections, select or poll may be simpler and equally efficient.