Backend Development

Comparing BIO, NIO, and Asynchronous Models Using a Bank Process Analogy

The article uses a simple bank workflow with ten staff members to illustrate the differences in throughput between BIO (blocking I/O), NIO (non‑blocking I/O), and asynchronous processing, showing how task decomposition and specialized threads dramatically improve performance in backend systems.

Architecture Digest

Assume a bank with ten employees and a four-step service process: the customer fills out a form (5 min), an employee reviews it (1 min), the employee calls security to fetch the cash (3 min), and the employee prints a receipt and hands over the cash (1 min). The total time per customer is 10 minutes.

1. BIO (blocking I/O) model: Each arriving customer is handled by a single employee who performs all four steps. At 10 minutes per customer, each employee serves 6 customers per hour, so ten employees serve 60 customers per hour in total. Each employee sits idle while the customer fills out the form and while security fetches the cash, so resource utilization is low.
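The BIO model corresponds to the classic one-thread-per-connection server. The sketch below is illustrative only; the echo-style protocol, the `demo()` helper, and the port handling are invented for the demo so the example is self-contained:

```java
import java.io.*;
import java.net.*;

// Minimal BIO sketch: each accepted connection gets its own thread that
// blocks on I/O, just as one teller walks a customer through all four
// steps before taking the next one.
public class BioServer {
    public static void main(String[] args) throws IOException {
        System.out.println(demo()); // prints "done: hello"
    }

    // Starts a blocking server on an ephemeral port, then acts as a single
    // client against it so the example runs on its own.
    static String demo() throws IOException {
        ServerSocket server = new ServerSocket(0);
        Thread acceptor = new Thread(() -> {
            try (server) {
                Socket socket = server.accept();     // blocks: waiting for a customer
                try (socket;
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(socket.getInputStream()));
                     PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                    out.println("done: " + in.readLine()); // blocks on read, then write
                }
            } catch (IOException ignored) { }
        });
        acceptor.start();

        try (Socket client = new Socket("localhost", server.getLocalPort());
             PrintWriter out = new PrintWriter(client.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()))) {
            out.println("hello");
            return in.readLine();
        }
    }
}
```

With many concurrent clients this design needs one thread per connection, which is exactly the idle-employee problem the analogy describes.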

2. NIO (non-blocking I/O) model: One employee (A) only collects completed forms and distributes them to the remaining nine employees, who perform the subsequent steps. Each of the nine handles a customer in 5 minutes (review + cash + receipt), yielding 9 × 12 = 108 customers per hour. This introduces a thread-pool-like separation of concerns, similar to a mainReactor accepting connections and subReactor threads handling I/O, while worker threads perform the business logic.
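The NIO-style division of labor can be sketched as a single acceptor thread feeding a fixed pool of nine workers. All names here (`NioBank`, `serve`, `process`) are hypothetical, and the per-step timings are compressed so the demo finishes instantly:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// One acceptor ("employee A") hands work to a pool of nine workers, like a
// mainReactor dispatching accepted connections to worker threads.
public class NioBank {
    public static void main(String[] args) throws InterruptedException {
        System.out.println(serve(18) + " customers served"); // prints "18 customers served"
    }

    static int serve(int customers) throws InterruptedException {
        ExecutorService workers = Executors.newFixedThreadPool(9); // the nine employees
        CountDownLatch done = new CountDownLatch(customers);
        AtomicInteger served = new AtomicInteger();

        // Employee A only collects forms and hands them off; workers do steps 2-4.
        Thread employeeA = new Thread(() -> {
            for (int i = 0; i < customers; i++) {
                String form = "form-" + i;        // step 1: collect the completed form
                workers.submit(() -> {
                    process(form);                // steps 2-4: review, cash, receipt
                    served.incrementAndGet();
                    done.countDown();
                });
            }
        });
        employeeA.start();

        done.await();          // wait until every customer has been served
        workers.shutdown();
        return served.get();
    }

    static void process(String form) {
        // review (1 min) + security run (3 min) + receipt (1 min), compressed for the demo
    }
}
```

The key design point is that the acceptor never performs business logic, so slow customers in steps 2–4 never block new arrivals.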

3. Asynchronous model: A dedicated employee (B) handles the third step (the security interaction). When the cash is ready, B notifies the customer, allowing the teller to proceed immediately to the final step. This cuts the teller's hands-on time to 2 minutes per customer (review + receipt), enabling eight tellers to serve 8 × 30 = 240 customers per hour. The pattern mirrors asynchronous RPC or HTTP calls where long-running operations are off-loaded, e.g., Jetty Continuations.
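The asynchronous hand-off maps naturally onto `CompletableFuture`: the slow security step runs on its own executor and the teller registers a callback instead of blocking. The class and method names below are invented for illustration:

```java
import java.util.concurrent.*;

public class AsyncTeller {
    // "Employee B": a dedicated thread for the slow security step
    // (daemon, so the demo JVM can exit without an explicit shutdown).
    static final ExecutorService security = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r, "employee-B");
        t.setDaemon(true);
        return t;
    });

    public static void main(String[] args) throws Exception {
        System.out.println(serveCustomer("alice").get()); // prints "receipt: cash-for-alice"
    }

    static CompletableFuture<String> serveCustomer(String name) {
        review(name);                                         // step 2 (1 min), done inline
        return CompletableFuture
                .supplyAsync(() -> fetchCash(name), security) // step 3 (3 min) off-loaded to B
                .thenApply(cash -> printReceipt(cash));       // step 4 (1 min) runs when B is done
    }

    static void review(String name) { /* check the form */ }
    static String fetchCash(String name) { return "cash-for-" + name; }
    static String printReceipt(String cash) { return "receipt: " + cash; }
}
```

Because `serveCustomer` returns a future rather than blocking on the 3-minute step, the calling thread is free to start serving the next customer at once.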

All three approaches demonstrate the "divide-and-conquer" principle: breaking a request into independent tasks handled by specialized workers dramatically increases system throughput, a concept applicable both in software engineering and in broader organizational workflows.
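As a sanity check, the three throughput figures quoted above follow directly from the per-customer times:

```java
// Throughput arithmetic for the three models, collected in one place.
public class Throughput {
    public static void main(String[] args) {
        int bio   = 10 * (60 / 10); // BIO: 10 employees, 10 min per customer -> 60/hour
        int nio   = 9  * (60 / 5);  // NIO: 9 workers, 5 min per customer    -> 108/hour
        int async = 8  * (60 / 2);  // async: 8 tellers, 2 min hands-on each -> 240/hour
        System.out.println(bio + " / " + nio + " / " + async); // 60 / 108 / 240
    }
}
```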

Tags: backend, concurrency, asynchronous, NIO, thread pool, BIO
Written by

Architecture Digest

Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.
