Understanding IO Models: Blocking, Non‑blocking, Multiplexing and Asynchronous
This article explains the fundamental I/O models—blocking, non‑blocking, multiplexing, and asynchronous—covering their mechanisms, trade‑offs, Python code examples, and practical optimization strategies for high‑concurrency backend systems.
In computer programming, I/O (Input/Output) is the bridge between a program and external devices such as disks, networks, keyboards, and displays. Different I/O models—blocking, non‑blocking, multiplexing, and asynchronous—determine how a program interacts with the kernel and how efficiently it can handle concurrent operations.
Common I/O models:
Blocking I/O: the call blocks the process until data is ready.
Non‑blocking I/O: the call returns immediately; the program must poll for readiness.
Multiplexing I/O: functions like select, poll, or epoll monitor many file descriptors and notify the process when any become ready.
Asynchronous I/O: the kernel notifies the process via signals or callbacks when the operation completes, without any blocking.
A useful way to analyze these models is to ask two questions: who initiates the I/O request (user‑space synchronous vs. kernel‑initiated asynchronous), and how the process waits for data (blocking, polling, or signal‑driven).
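To make the baseline concrete, here is a minimal blocking echo server sketch (the `blocking_echo_server` helper, its port, and the `max_clients` bound are illustrative additions, not from the original article). Both `accept` and `recv` suspend the whole thread until the kernel has data ready, so a second client must wait until the first one disconnects.

```python
import threading
from socket import socket, AF_INET, SOCK_STREAM

def blocking_echo_server(host="127.0.0.1", port=8800, max_clients=1):
    """Serve connections one at a time using fully blocking calls."""
    server = socket(AF_INET, SOCK_STREAM)
    server.bind((host, port))
    server.listen(5)
    for _ in range(max_clients):
        conn, addr = server.accept()   # blocks until a client connects
        while True:
            data = conn.recv(1024)     # blocks until data arrives (or EOF)
            if not data:
                break
            conn.send(data.upper())    # may block if the send buffer is full
        conn.close()
    server.close()
```

Because every call blocks, the whole process does nothing while waiting; this is the behavior the later models are designed to avoid.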
Non‑blocking I/O example (Python sockets):
# A single thread polls every connection itself, catching BlockingIOError
from socket import *

server = socket(AF_INET, SOCK_STREAM)
server.bind(("192.168.2.209", 8800))
server.listen(5)
server.setblocking(False)  # default is True (blocking); False makes all calls non-blocking
print("starting...")

rlist = []
wlist = []
while True:
    try:
        conn, addr = server.accept()
        rlist.append(conn)
        print(rlist)
    except BlockingIOError:
        # receive messages
        del_rlist = []  # connections to remove
        for conn in rlist:
            try:
                data = conn.recv(1024)
                if not data:
                    del_rlist.append(conn)
                    continue
                wlist.append((conn, data.upper()))
            except BlockingIOError:
                continue
            except Exception:
                conn.close()
                del_rlist.append(conn)
        # send messages
        del_wlist = []
        for item in wlist:
            try:
                conn, data = item
                conn.send(data)
                del_wlist.append(item)
            except BlockingIOError:
                pass
        for item in del_wlist:
            wlist.remove(item)
        for conn in del_rlist:
            rlist.remove(conn)
server.close()

Using a thread per connection (or a thread pool) is another way to alleviate blocking, but it consumes significant resources under high concurrency; the polling loop above stays single‑threaded at the cost of constantly spinning on the CPU.
Multiplexing I/O example with select (Python sockets):
import select
from socket import *

server = socket(AF_INET, SOCK_STREAM)
server.bind(("192.168.2.209", 9900))
server.listen(5)
server.setblocking(False)

rlist = [server]
wlist = []
wdata = {}
while True:
    # select blocks (here for at most 1 second) until a monitored socket is ready
    rl, wl, xl = select.select(rlist, wlist, [], 1)
    for sock in rl:
        if sock == server:
            conn, addr = sock.accept()
            rlist.append(conn)
        else:
            try:
                data = sock.recv(1024)
                if not data:
                    sock.close()
                    rlist.remove(sock)
                    continue
                wlist.append(sock)
                wdata[sock] = data.upper()
            except Exception:
                sock.close()
                rlist.remove(sock)
    for sock in wl:
        sock.send(wdata[sock])
        wlist.remove(sock)
        wdata.pop(sock)
server.close()

Multiplexing with select allows a single thread to handle many connections, reducing the number of system calls compared with pure non‑blocking polling.
With asynchronous I/O (e.g. Linux AIO), the kernel returns immediately after the request is submitted and later signals the process once the data has been copied into user space, enabling true non‑blocking behavior without explicit polling.
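Python's standard library does not expose Linux AIO directly, but `asyncio` delivers the same programming style: a call hands control back immediately and the event loop resumes the coroutine when its data is ready. This is a sketch only—under the hood asyncio uses readiness‑based multiplexing such as epoll, not kernel AIO—and the port number is an arbitrary placeholder.

```python
import asyncio

async def handle(reader, writer):
    # The coroutine suspends here instead of blocking the thread;
    # the event loop resumes it when data has arrived.
    data = await reader.read(1024)
    writer.write(data.upper())
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 8900)
    async with server:
        await server.serve_forever()

# asyncio.run(main())  # runs until cancelled
```

While one connection is waiting in `reader.read`, the loop freely serves every other connection in the same thread.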
Comparison highlights that blocking vs. non‑blocking differs in whether the system call itself blocks, while synchronous vs. asynchronous differs in whether the process is blocked during the actual data transfer. Non‑blocking I/O still requires the process to poll, whereas asynchronous I/O lets the kernel handle notification.
Practical optimization case: an online education platform switched from synchronous blocking I/O to an epoll‑based multiplexing model for network handling and asynchronous I/O for file reads, dramatically reducing latency and improving resource utilization.
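As a rough illustration of what such an epoll‑based loop can look like, the standard‑library `selectors` module wraps the best available multiplexer (epoll on Linux, kqueue on BSD/macOS) behind one API. The `serve` helper and its bounded loop below are illustrative, not the platform's actual code:

```python
import selectors
from socket import socket, AF_INET, SOCK_STREAM

sel = selectors.DefaultSelector()  # epoll on Linux, kqueue on BSD/macOS

def accept(server):
    conn, addr = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, read)  # callback stored as data

def read(conn):
    data = conn.recv(1024)
    if data:
        conn.send(data.upper())
    else:
        sel.unregister(conn)
        conn.close()

def serve(port, rounds):
    """Run a bounded number of event-loop iterations (bounded for demonstration)."""
    server = socket(AF_INET, SOCK_STREAM)
    server.bind(("127.0.0.1", port))
    server.listen(5)
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)
    for _ in range(rounds):
        for key, mask in sel.select(timeout=0.1):
            key.data(key.fileobj)  # dispatch to the registered callback
    sel.unregister(server)
    server.close()
```

Unlike raw `select`, epoll does not rescan the full descriptor list on every call, which is what makes this model scale to thousands of connections.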
Overall, the choice of I/O model depends on concurrency requirements, expected load, and platform capabilities; multiplexing and asynchronous techniques are preferred for high‑throughput, low‑latency backend services.