Fundamentals

Understanding Synchronous vs Asynchronous, Blocking vs Non-Blocking, and Linux I/O Models

This article explains the concepts of synchronous and asynchronous execution, blocking and non‑blocking operations, user and kernel space, process switching, file descriptors, cache I/O, and compares various Linux I/O models such as blocking, non‑blocking, multiplexing, signal‑driven and asynchronous I/O, including the differences among select, poll and epoll.

Architects' Tech Alliance

The article begins by defining **synchronous** execution, where a task must wait for its dependent task to finish, and **asynchronous** execution, where a task proceeds without waiting for the dependent task’s completion.
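The distinction can be sketched in a few lines of Python (a minimal illustration, not code from the article; it uses a thread pool as the stand-in "background worker"):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def dependent_task():
    time.sleep(0.1)          # simulate work the caller depends on
    return "done"

# Synchronous: the caller waits until the dependent task finishes.
start = time.monotonic()
result_sync = dependent_task()
sync_elapsed = time.monotonic() - start   # the caller was idle the whole time

# Asynchronous: the caller submits the task and keeps working,
# collecting the result only when it actually needs it.
with ThreadPoolExecutor() as pool:
    future = pool.submit(dependent_task)
    other_work = sum(range(1000))         # caller proceeds immediately
    result_async = future.result()        # pick up the result later
```

The synchronous call pins the caller down for the full duration; the asynchronous version lets `other_work` run while the dependent task is still in flight.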

It then distinguishes **blocking** (the calling thread is suspended until the operation completes) from **non‑blocking** (the function returns immediately and the caller must poll for completion), noting that non‑blocking can increase CPU usage due to busy‑waiting.
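A small Python sketch makes the non-blocking behavior concrete (an illustration written for this summary, using a pipe in place of a socket):

```python
import os

# Create a pipe; nothing has been written yet, so a read would block.
r, w = os.pipe()
os.set_blocking(r, False)     # switch the read end to non-blocking mode

try:
    os.read(r, 1024)          # returns immediately instead of suspending
    data = None
except BlockingIOError:       # kernel reported EAGAIN: nothing to read yet
    data = "would block"      # the caller must retry (poll) later

os.write(w, b"hello")
got = os.read(r, 1024)        # data is ready now, so the read succeeds
os.close(r)
os.close(w)
```

In a real non-blocking loop, the caller would repeat the read until it stops failing with `EAGAIN`; that repeated polling is exactly the extra CPU cost the article mentions.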

Next, the text describes the division of a 32‑bit virtual address space into **kernel space** (top 1 GB) and **user space** (lower 3 GB) in Linux, emphasizing the kernel’s privileged access.

The process‑switching steps are outlined: saving CPU context, updating the PCB, moving the PCB to the appropriate queue, selecting another process, updating memory structures, and restoring context.

It explains **file descriptors** as integer indexes into the kernel’s per‑process open‑file table, primarily used in Unix‑like systems.
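The "small integer index" nature of a descriptor is easy to observe from Python (a demonstration sketch, not from the article; `tempfile.mkstemp` conveniently returns a raw descriptor):

```python
import os
import tempfile

# mkstemp returns a (file descriptor, path) pair: the descriptor is a
# small non-negative integer indexing this process's open-file table.
fd, path = tempfile.mkstemp()
assert fd >= 0                # 0, 1, 2 are conventionally stdin/stdout/stderr

dup = os.dup(fd)              # duplicate: a second index to the same open file
os.write(dup, b"via dup\n")   # a write through either descriptor reaches the file

os.close(dup)
os.close(fd)
with open(path, "rb") as f:
    content = f.read()
os.unlink(path)
```

`dup` illustrates that the integer is only a handle: two different indices can refer to the same underlying open-file entry in the kernel.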

The article introduces **cache I/O**, where data is first copied to the kernel page cache before being transferred to user space, and notes the overhead of multiple data copies.

In the second part, several **I/O models** are presented:

**Blocking I/O** – the caller waits until data is ready and copied.

**Non‑blocking I/O** – the kernel returns an error if the operation cannot complete immediately, requiring the caller to poll.

**I/O multiplexing** – uses system calls like select, poll, and epoll to monitor multiple sockets simultaneously, reducing the need for many threads.
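The multiplexing idea can be shown with `select` in a few lines (a minimal sketch written for this summary; two pipes stand in for two sockets):

```python
import os
import select

# Two pipes stand in for two client sockets; a single call watches both.
r1, w1 = os.pipe()
r2, w2 = os.pipe()

os.write(w2, b"ping")         # only the second channel has data

# select blocks until at least one watched descriptor is readable,
# then returns exactly which ones are ready.
readable, _, _ = select.select([r1, r2], [], [], 1.0)
data = os.read(readable[0], 4)

for fd in (r1, w1, r2, w2):
    os.close(fd)
```

One thread services many descriptors: instead of one blocked thread per connection, the single `select` call reports which connections need attention.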

**Signal‑driven I/O** – the process receives a SIGIO signal when data is ready.

**Asynchronous I/O** – the kernel notifies the process via callbacks after the operation finishes, allowing the process to perform other work in the meantime.
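Python's `asyncio` offers a user-space analogy of this model (an illustration only: it is an event loop in the process, not kernel-level AIO such as `io_uring` or POSIX `aio_*`, but the caller-side pattern is the same):

```python
import asyncio

async def fetch():
    # Stand-in for an I/O operation completing "in the background".
    await asyncio.sleep(0.05)
    return b"payload"

async def main():
    task = asyncio.create_task(fetch())   # initiate the operation
    side_work = sum(range(100))           # do other work while it runs
    data = await task                     # resume when it completes
    return data, side_work

data, side_work = asyncio.run(main())
```

As with kernel asynchronous I/O, the initiating code is not parked waiting for the result; it is resumed only once the operation has finished.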

The article compares these models, highlighting that non‑blocking I/O still requires active polling, whereas asynchronous I/O offloads the waiting to the kernel.

Finally, it compares **select**, **poll**, and **epoll** on three points:

- The maximum number of file descriptors each can handle (select is capped by FD_SETSIZE, typically 1024; poll and epoll are limited only by system resources).
- How performance scales as the number of descriptors grows (select and poll scan the entire set linearly on every call; epoll's cost depends mainly on the number of ready descriptors).
- How readiness information is passed between kernel and user space (select and poll copy the full descriptor set on each call; epoll keeps the interest set inside the kernel).

It notes that epoll generally offers the best performance for large numbers of connections, but with only a few active connections select or poll may be faster. It also explains the level-triggered and edge-triggered modes of epoll, and that select and poll are level-triggered, while signal-driven I/O is edge-triggered.
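Level-triggered semantics are easy to demonstrate with `poll` (a sketch written for this summary; it works on any Unix-like system, whereas `select.epoll` is Linux-only):

```python
import os
import select

r, w = os.pipe()
os.write(w, b"x")

poller = select.poll()
poller.register(r, select.POLLIN)

# Level-triggered: as long as unread data remains in the pipe,
# every poll call reports the descriptor as ready again.
first = poller.poll(100)
second = poller.poll(100)     # still ready: the data was never read

os.read(r, 1)                 # drain the pipe
third = poller.poll(100)      # nothing is ready any more

os.close(r)
os.close(w)
```

An edge-triggered epoll registration (`EPOLLET`) would instead report readiness only once per arrival of new data, which is why edge-triggered code must read until it sees `EAGAIN` before waiting again.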

Written by

Architects' Tech Alliance

Sharing project experiences, insights into cutting-edge architectures, focusing on cloud computing, microservices, big data, hyper-convergence, storage, data protection, artificial intelligence, industry practices and solutions.
