
Introduction to Two IO Model Architectures: Thread‑Based and Event‑Driven Designs

This article explains the two main network I/O architectures—thread‑based designs such as one‑connection‑per‑thread, pre‑forked processes, and their pros and cons, as well as event‑driven designs like the Reactor pattern, thread pools, and multiple reactors—helping readers choose the appropriate model for different server workloads.

58 Tech

Two I/O Model Architectures

Network I/O can be implemented in many ways, but the two primary architectures are thread‑based designs and event‑driven designs.

1. Thread‑Based Design

The thread‑based approach follows the idea of one connection per thread (One‑Connection‑Per‑Thread): each connection is served by a dedicated thread or process for its entire lifetime. It suits applications that depend on non‑thread‑safe libraries and want to keep per‑connection state confined to a single thread, avoiding contention.

1.1 Iterative Server

This is the most primitive network model: a single main process loops, accepts a connection, processes the request, closes the socket, and repeats. It works for short, write‑only services such as daytime, but it cannot serve long‑lived connections or multiple clients simultaneously.
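As a rough sketch (Python; the helper names and the fixed connection count are mine, for demonstration only), the whole server is one loop, so a second client is not served until the first connection is closed:

```python
import socket
import threading
import time

def run_iterative_server(srv, n_clients):
    # One process, one loop: accept, serve, close, repeat.
    # A second client must wait until the first has been handled.
    for _ in range(n_clients):
        conn, _ = srv.accept()
        conn.sendall(time.ctime().encode() + b"\r\n")  # daytime-style reply
        conn.close()
    srv.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # ephemeral port for the demo
srv.listen(5)
port = srv.getsockname()[1]
threading.Thread(target=run_iterative_server, args=(srv, 2)).start()

replies = []
for _ in range(2):
    with socket.create_connection(("127.0.0.1", port)) as c:
        replies.append(c.recv(1024))
```

A real daytime server would loop forever; the `n_clients` bound just lets the sketch terminate.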

1.2 Fork per Connection (Process‑Per‑Connection)

When a connection is created, the server forks a child process to handle the request while the parent immediately accepts new connections. Each child serves a long‑lived connection, allowing the server to handle many clients concurrently, limited only by the OS's maximum number of processes.

Using threads instead of processes yields a thread‑per‑connection model, which was common in Java before NIO. Scalability is limited by the OS thread count.

Note: Closing the connected socket in the parent after fork() does not terminate the connection. The descriptor is shared by parent and child, and the underlying connection is closed only after both have closed their copies.
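A minimal fork-per-connection sketch (Python, Unix only; a forked child stands in for the remote client, which is my testing device, not part of the pattern). Note the parent closing its copy of the connected socket after forking the worker:

```python
import os
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(5)
port = srv.getsockname()[1]

client_pid = os.fork()
if client_pid == 0:
    # Child process standing in for a remote client.
    srv.close()
    with socket.create_connection(("127.0.0.1", port)) as c:
        data = c.recv(64)
    os._exit(0 if data == b"hello\n" else 1)

# Parent = server: fork one worker per accepted connection.
conn, _ = srv.accept()
worker_pid = os.fork()
if worker_pid == 0:
    srv.close()               # the worker does not need the listening socket
    conn.sendall(b"hello\n")  # serve the (potentially long-lived) connection
    conn.close()
    os._exit(0)
conn.close()  # parent drops its descriptor; the connection stays open
              # until the worker closes its copy as well
os.waitpid(worker_pid, 0)
_, status = os.waitpid(client_pid, 0)
client_ok = os.WEXITSTATUS(status) == 0
srv.close()
```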

1.3 Pre‑Forked Processes with Accept in Children

If the workload consists mainly of short connections, frequent forking becomes costly. Pre‑creating several child processes that each call accept() reduces runtime fork overhead.

Advantages

Reduces fork overhead by creating processes at initialization.

Disadvantages

Creating more processes than clients wastes resources and increases context‑switch overhead.

Creating fewer processes than clients leads to missed connections and higher response latency.

The “thundering herd” problem: many processes block on the same listen socket, and when a connection arrives all are awakened, but only one handles it, causing performance loss.
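The pre-fork pattern can be sketched as follows (Python `multiprocessing` on a Unix fork start method is assumed; each child serves a single connection here, where a real server would loop in `accept()`):

```python
import socket
from multiprocessing import Process

def prefork_worker(srv):
    # Pre-created child: blocks in accept() on the shared listening
    # socket, then serves one connection (real servers loop here).
    conn, _ = srv.accept()
    conn.sendall(b"ok\n")
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(8)
port = srv.getsockname()[1]

# Fork happens once, at initialization, not per connection.
workers = [Process(target=prefork_worker, args=(srv,)) for _ in range(2)]
for w in workers:
    w.start()

replies = []
for _ in range(2):
    with socket.create_connection(("127.0.0.1", port)) as c:
        replies.append(c.recv(64))
for w in workers:
    w.join()
srv.close()
```

With many more workers than this, a single incoming connection wakes every process blocked in accept(), which is exactly the thundering-herd cost described above.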

1.4 Pre‑Forked Processes with Lock‑Protected Accept

Placing a lock around the accept() call ensures that only one child process blocks in accept() at a time, eliminating the thundering herd. Nginx used this technique (its accept_mutex, implemented as a spinlock or file lock) before Linux 4.5 introduced EPOLLEXCLUSIVE, which moves the exclusive wakeup into the kernel.
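The same prefork sketch with a cross-process lock around accept() (Python `multiprocessing.Lock` stands in for nginx's accept_mutex; this is an illustration, not nginx's actual implementation):

```python
import socket
from multiprocessing import Lock, Process

def locked_worker(lock, srv):
    # The lock ensures only one child blocks inside accept() at a time,
    # so a new connection wakes exactly one process (no thundering herd).
    with lock:
        conn, _ = srv.accept()
    conn.sendall(b"ok\n")   # serve the request outside the lock
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(8)
port = srv.getsockname()[1]

lock = Lock()
workers = [Process(target=locked_worker, args=(lock, srv)) for _ in range(2)]
for w in workers:
    w.start()

replies = []
for _ in range(2):
    with socket.create_connection(("127.0.0.1", port)) as c:
        replies.append(c.recv(64))
for w in workers:
    w.join()
srv.close()
```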

1.5 Pre‑Forked Processes with Parent Accept

An alternative to the thundering herd is for the parent to accept connections and then distribute the data to child processes via a pipe. Experiments by W. Richard Stevens show this adds extra data‑copy overhead and complexity, so it is generally not recommended.
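The extra copies are easy to see in a sketch (Python, Unix only; one pre-forked child and a client thread are my scaffolding): every byte crosses from the socket into the parent, into a pipe, back through a pipe, and out the socket again.

```python
import os
import socket
import threading

# Pipes between the parent (which owns all sockets) and one pre-forked child.
to_child_r, to_child_w = os.pipe()
from_child_r, from_child_w = os.pipe()

pid = os.fork()
if pid == 0:
    # The child never touches the socket; it only reads/writes the pipes.
    req = os.read(to_child_r, 64)
    os.write(from_child_w, req.upper())
    os._exit(0)

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(5)
port = srv.getsockname()[1]

out = []
def client():
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(b"hi")
        out.append(c.recv(64))
t = threading.Thread(target=client)
t.start()

conn, _ = srv.accept()
req = conn.recv(64)               # copy 1: socket -> parent
os.write(to_child_w, req)         # copy 2: parent -> pipe
resp = os.read(from_child_r, 64)  # copy 3: pipe -> parent
conn.sendall(resp)                # copy 4: parent -> socket
conn.close()
os.waitpid(pid, 0)
t.join()
srv.close()
```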

Thread vs. Process Design Bottlenecks

Threads share an address space, so sharing data between them is cheap, while processes share code but not data unless they set up explicit shared memory. Threads are faster to create and switch between, but a crash in one thread can bring down the whole process, and hot upgrades are harder (nginx uses a process model precisely to support reload and binary upgrade). Both threads and processes are limited by the OS scheduler: hundreds of threads are manageable, but thousands become burdensome.

Each connection typically occupies a thread for its lifetime; using Keep‑Alive reduces connection creation cost but can lead to many idle threads, consuming large stack memory.

2. Event‑Driven Design

The event‑driven approach separates threads from connections; threads only handle callbacks or business logic.

2.1 Reactor

The Reactor pattern uses a single thread (the acceptor) to listen on a port, accept multiple connections, and process their events sequentially from an event loop. It can handle many connections, but a single thread cannot exploit multiple CPU cores.
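A single-threaded reactor can be sketched with Python's `selectors` module (the callback names and the one-request shutdown condition are mine): one thread demultiplexes accept and read events and runs a callback for each.

```python
import selectors
import socket
import threading

sel = selectors.DefaultSelector()

def on_accept(srv):
    conn, _ = srv.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, on_read)

def on_read(conn):
    data = conn.recv(1024)
    if data:
        conn.sendall(data.upper())  # echo back, uppercased
    sel.unregister(conn)
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(5)
srv.setblocking(False)
sel.register(srv, selectors.EVENT_READ, on_accept)
port = srv.getsockname()[1]

out = []
def client():
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(b"ping")
        out.append(c.recv(1024))
t = threading.Thread(target=client)
t.start()

# The reactor loop: one thread dispatches every event to its callback.
handled = 0
while handled < 1:
    for key, _ in sel.select(timeout=1):
        callback = key.data
        callback(key.fileobj)
        if callback is on_read:
            handled += 1
t.join()
sel.close()
srv.close()
```

Because every callback runs on the same thread, a slow handler stalls all other connections, which motivates the variants below.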

2.2 Reactor + Thread‑per‑Task

After the acceptor reads tasks, a new thread is created for each task. This utilizes CPU cores better but adds thread‑creation overhead and can cause out‑of‑order execution for requests belonging to the same connection.

2.3 Reactor + Thread‑Pool

To avoid creating threads on the fly, a pool of worker threads is pre‑created. The acceptor dispatches tasks to the pool, which efficiently handles CPU‑intensive work. However, for I/O‑bound services a single thread reading may become a bottleneck.
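A sketch of the dispatch (Python; the request format, a decimal count whose prefix sum is returned, is an arbitrary stand-in for CPU-bound work): the reactor thread does all the reading, and a pre-created `ThreadPoolExecutor` runs the handlers.

```python
import selectors
import socket
import threading
from concurrent.futures import ThreadPoolExecutor

sel = selectors.DefaultSelector()
pool = ThreadPoolExecutor(max_workers=4)  # workers created up front

def handle(conn, data):
    # CPU-bound work runs in a pool thread, not in the reactor thread.
    conn.sendall(str(sum(range(int(data)))).encode())
    conn.close()

def reactor(srv, n_requests):
    served = 0
    while served < n_requests:
        for key, _ in sel.select(timeout=1):
            if key.fileobj is srv:
                conn, _ = srv.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ)
            else:
                conn = key.fileobj
                data = conn.recv(64)             # the single reader thread
                sel.unregister(conn)
                pool.submit(handle, conn, data)  # dispatch to the pool
                served += 1

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(5)
srv.setblocking(False)
sel.register(srv, selectors.EVENT_READ)
port = srv.getsockname()[1]

t = threading.Thread(target=reactor, args=(srv, 2))
t.start()

answers = []
for n in (b"10", b"5"):
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(n)
        answers.append(c.recv(64))
t.join()
pool.shutdown()
srv.close()
```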

2.4 Multiple Reactors

The main thread accepts connections and distributes them to several sub‑reactors. Each sub‑reactor handles read/write for its assigned connections, improving I/O throughput by leveraging multiple CPU cores.
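A sketch of the hand-off (Python; using a queue per sub-reactor to transfer accepted connections is my simplification of the pattern): the main thread accepts and distributes round-robin, and each sub-reactor runs its own selector over its assigned connections.

```python
import queue
import selectors
import socket
import threading

def sub_reactor(q, n_conns):
    # Each sub-reactor owns its selector and the connections assigned to it.
    sel = selectors.DefaultSelector()
    done = 0
    while done < n_conns:
        try:
            conn = q.get_nowait()   # connection handed over by the acceptor
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        except queue.Empty:
            pass
        for key, _ in sel.select(timeout=0.05):
            c = key.fileobj
            data = c.recv(64)
            c.sendall(data[::-1])   # echo the bytes reversed
            sel.unregister(c)
            c.close()
            done += 1
    sel.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(5)
port = srv.getsockname()[1]

queues = [queue.Queue(), queue.Queue()]
subs = [threading.Thread(target=sub_reactor, args=(q, 1)) for q in queues]
for s in subs:
    s.start()

def acceptor():
    # Main reactor: accept, then distribute connections round-robin.
    for i in range(2):
        conn, _ = srv.accept()
        queues[i % len(queues)].put(conn)

a = threading.Thread(target=acceptor)
a.start()

replies = []
for msg in (b"abc", b"xyz"):
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(msg)
        replies.append(c.recv(64))
a.join()
for s in subs:
    s.join()
srv.close()
```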

2.5 Multiple Reactors + Thread‑Pool

This combines multiple sub‑reactors with a thread pool to handle both I/O‑intensive and CPU‑intensive workloads efficiently.

Summary

Software such as Nginx, Apache, Tomcat, Netty, Muduo, and Java NIO employs one or more of the above designs. When choosing an implementation, consider factors such as long‑ vs. short‑lived connections, ordering requirements, transactionality, I/O intensity, and CPU intensity.

Tags: Reactor Pattern, event-driven, Server Architecture, IO model, thread-per-connection
Written by 58 Tech

Official tech channel of 58, a platform for tech innovation, sharing, and communication.
