Node.js TCP Connections Explained: Event Loop & Libuv Architecture

This article walks through a complete TCP connection example in Node.js, detailing server and client code, the three-layer Node.js architecture (JS, C++, C), the initialization process, Libuv’s event‑loop phases, task scheduling, I/O models, and how Node.js maintains non‑blocking, concurrent connections.

ELab Team

Node.js TCP Connection Example

TCP Server

const net = require('net');

const server = new net.Server();

server.listen(9999, '127.0.0.1', () => {
  console.log(`server is listening on ${server.address().address}:${server.address().port}`);
});

server.on('connection', (socket) => {
  server.getConnections((err, connections) => {
    console.log('current connection count:', connections);
  });

  socket.on('data', (data) => {
    console.log(`received data: ${data.toString()}`);
  });
});

TCP Client

const net = require('net');

const client = new net.Socket();

client.connect({
  port: 9999,
  host: '127.0.0.1', // the option is named `host`, not `address`
});

client.on('connect', () => {
  console.log('connect success');
  client.write(`Hello Server!, I'm ${Math.round(Math.random() * 100)}`);
});

Key questions raised by the example:

How does the Node.js code actually run?

How does a TCP connection stay listening without the process exiting?

How does Node.js handle concurrent connections and avoid blocking the main thread on blocking calls?

Core Architecture

Node.js Architecture

Node.js source code is divided into three layers: JS, C++, and C.

JS Layer

The JS layer provides user‑facing APIs that wrap native modules such as net, http, fs, dns, and path.

C++ Layer

The C++ layer uses V8 to bridge the JS layer to the underlying C layer; it both executes JavaScript and extends its capabilities with native bindings.

C Layer

The C layer mainly contains libuv, the cross‑platform asynchronous I/O library, and other third‑party C libraries.

Startup Process

Analysis

Register C++ modules

void RegisterBuiltinModules() {
  _register_async_wrap();
  _register_buffer();
  _register_fs();
  _register_url();
  // ...
}

The function registers built‑in C++ modules into an internal list so that native JS modules can locate them by name.

Create Environment object

The Environment object represents the runtime context; after creation it is bound to a V8 Context, allowing V8 to retrieve the environment later.

Initialize loader and execution context

Node loads lib/internal/bootstrap/loader.js to expose a binding function that lets JS load C++ modules, then runs lib/internal/bootstrap/node.js to set up the global object and process properties.

Initialize libuv

void Environment::InitializeLibuv(bool start_profiler_idle_notifier) {
  HandleScope handle_scope(isolate());
  Context::Scope context_scope(context());

  CHECK_EQ(0, uv_timer_init(event_loop(), timer_handle()));
  uv_unref(reinterpret_cast<uv_handle_t*>(timer_handle()));
  // ... other libuv initializations ...
}

Execute user JS code

// src/node.cc
MaybeLocal<Value> StartExecution(Environment* env, StartExecutionCallback cb) {
  if (!first_argv.empty() && first_argv != "-") {
    return StartExecution(env, "internal/main/run_main_module");
  }
  // ...
}

The entry point loads internal/main/run_main_module.js, which finally calls

require('internal/modules/cjs/loader').Module.runMain(process.argv[1]);

Enter libuv event loop

// src/node_main_instance.cc
do {
  uv_run(env->event_loop(), UV_RUN_DEFAULT);
  per_process::v8_platform.DrainVMTasks(isolate_);
  more = uv_loop_alive(env->event_loop());
  if (more && !env->is_stopping()) continue;
  if (!uv_loop_alive(env->event_loop())) {
    EmitBeforeExit(env.get());
  }
  more = uv_loop_alive(env->event_loop());
} while (more == true && !env->is_stopping());

The loop runs until there are no active handles or requests.
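The same exit condition is visible from JavaScript (an illustrative sketch): an active handle keeps uv_run looping, while unref'ing it lets uv_loop_alive return false.

```javascript
// An interval timer is an active libuv handle, so it keeps the loop alive.
const timer = setInterval(() => console.log('tick'), 1000);

// unref() removes the handle from the loop's active count; with no other
// active handles left, the loop drains and the process exits right after
// the main script finishes, before the interval ever fires.
timer.unref();
```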

Source Code Overview

// src/node_main.cc
int main(int argc, char* argv[]) {
  return node::Start(argc, argv);
}
// src/node.cc (excerpt)
namespace node {
  int Start(int argc, char** argv) {
    InitializationResult result = InitializeOncePerProcess(argc, argv);
    // ...
    NodeMainInstance main_instance(&params, uv_default_loop(),
                                   per_process::v8_platform.Platform(),
                                   result.args, result.exec_args, indexes);
    result.exit_code = main_instance.Run();
    // ...
  }
}

Libuv Architecture

Libuv is the core asynchronous I/O library used by Node.js. It splits work into two groups: network I/O, handled with platform-specific readiness mechanisms (epoll on Linux, kqueue on macOS, IOCP on Windows), and file I/O, DNS lookups, and user-submitted work, handled by a thread pool.

Event Loop

After Node starts, it enters libuv’s event loop. Each iteration processes a series of phase queues (timer, pending, idle, prepare, poll, check, close) and interleaves micro‑tasks such as process.nextTick and Promise callbacks.

Check if the loop is alive; exit if no active handles.

Update the timestamp.

Execute expired timers.

Run pending callbacks (e.g., I/O success/error).

Run idle callbacks.

Run prepare callbacks.

Poll for I/O events (blocking up to the next timer).

Run check callbacks (e.g., setImmediate).

Run close callbacks.

Return to step 1.

Task Scheduling

Libuv manages five primary queues:

Timer queue – tasks added by setTimeout / setInterval.

Pending queue – callbacks for successful or failed I/O.

I/O event queue – callbacks when an I/O operation completes.

Immediates queue – tasks added by setImmediate (executed in the check phase).

Close queue – callbacks for close events.

Two additional Node‑specific queues are:

Next‑tick queue – tasks added by process.nextTick (drained as soon as the current operation completes, before the event loop moves on to the next phase).

Micro‑task queue – Promise callbacks and other micro‑tasks.

Next‑tick has higher priority than other micro‑tasks.

Example

// timer → pending → idle → prepare → poll io → check → close

// timer phase
setTimeout(() => {
  Promise.resolve().then(() => {
    console.log('promise resolve in timeout');
    process.nextTick(() => {
      console.log('tick task in timeout promise');
    });
  });
  process.nextTick(() => {
    console.log('tick task in timeout');
    process.nextTick(() => {
      console.log('tick task in timeout->tick');
    });
  });
  console.log('timer task');
}, 0);

// check phase
setImmediate(() => {
  process.nextTick(() => {
    console.log('immediate->tick task');
  });
  console.log('immediate task');
});

Promise.resolve().then(() => {
  console.log('promise resolve');
});

process.nextTick(() => {
  console.log('tick task');
});

console.log('run main thread');
Output:

run main thread
tick task
promise resolve
timer task
tick task in timeout
tick task in timeout->tick
promise resolve in timeout
tick task in timeout promise
immediate task
immediate->tick task

I/O Models

Synchronous Blocking

In this model the process blocks on recvfrom until data arrives, then blocks again while copying data from kernel to user space. It is simple but stalls the process if data is not ready.

// blocking socket example (C)
int serv_sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
bind(serv_sock, ...);
listen(serv_sock, ...);
while (1) {
  int clnt_sock = accept(serv_sock, ...); // blocks until a client connects
  recvfrom(clnt_sock, ...);               // blocks until data is ready
  handle(data);
}

Synchronous Non‑Blocking

The socket is put in non‑blocking mode; accept and recvfrom return -1 immediately (with errno set to EAGAIN/EWOULDBLOCK) when nothing is ready, allowing the program to poll.

// non‑blocking socket example (C)
int serv_sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
int flags = fcntl(serv_sock, F_GETFL, 0);
fcntl(serv_sock, F_SETFL, flags | O_NONBLOCK); // accept() now returns immediately
bind(serv_sock, ...);
listen(serv_sock, ...);
while (1) {
  int clnt_sock = accept(serv_sock, ...);
  if (clnt_sock == -1 && errno == EAGAIN) {
    continue; // no pending connections; poll again
  }
  // put the client socket in non-blocking mode as well
  fcntl(clnt_sock, F_SETFL, fcntl(clnt_sock, F_GETFL, 0) | O_NONBLOCK);
  while (1) {
    int ret = recvfrom(clnt_sock, ...);
    if (ret == -1 && errno == EAGAIN) {
      continue; // no data yet; poll again
    }
    handle(data);
    break; // done with this client; go back to accept()
  }
}

I/O Multiplexing

Multiplexing lets a single thread monitor many file descriptors. When any descriptor becomes ready, the kernel notifies the process.

// select‑based multiplexing (C)
int serv_sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
bind(serv_sock, ...);
listen(serv_sock, ...);
fd_set master, readfds;
FD_ZERO(&master);
FD_SET(serv_sock, &master);
int maxfd = serv_sock;
while (1) {
  readfds = master; // select() mutates the set, so pass it a copy each time
  int res = select(maxfd + 1, &readfds, NULL, NULL, NULL);
  if (res > 0) {
    for (int i = 0; i <= maxfd; ++i) {
      if (FD_ISSET(i, &readfds)) {
        if (i == serv_sock) {
          int clnt_sock = accept(serv_sock, ...);
          FD_SET(clnt_sock, &master);
          if (clnt_sock > maxfd) maxfd = clnt_sock;
        } else {
          int ret = recvfrom(i, ...);
          if (ret > 0) handle(data);
        }
      }
    }
  }
}

epoll (Linux)

epoll improves on select by returning only ready descriptors, supporting a much larger number of fds, and using a red‑black tree for efficient add/remove operations.

// epoll‑based event‑driven server (C); serv_sock is already bound and listening
int epoll_fd = epoll_create(5); // size hint, ignored by modern kernels
struct epoll_event ev, events[5];
ev.data.fd = serv_sock;
ev.events = EPOLLIN;
epoll_ctl(epoll_fd, EPOLL_CTL_ADD, serv_sock, &ev);
while (1) {
  int nfds = epoll_wait(epoll_fd, events, 5, -1);
  for (int i = 0; i < nfds; ++i) {
    if (events[i].data.fd == serv_sock) {
      int connfd = accept(serv_sock, ...);
      ev.data.fd = connfd;
      ev.events = EPOLLIN;
      epoll_ctl(epoll_fd, EPOLL_CTL_ADD, connfd, &ev);
    } else {
      int ret = recvfrom(events[i].data.fd, ...);
      if (ret > 0) handle(data);
    }
  }
}

Server Architectures

Single‑process Single‑thread Serial Model

One process serves one connection at a time; while accept or recvfrom blocks, every other client waits.

// blocking server (C)
int serv_sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
bind(serv_sock, ...);
listen(serv_sock, ...);
while (1) {
  int clnt_sock = accept(serv_sock, ...); // blocks
  recvfrom(clnt_sock, ...);               // blocks
  handle(data);
}

Multi‑process / Multi‑thread

Each connection is handed to a forked child (or a thread), so one blocked client no longer stalls the others, at the cost of per‑connection process or thread overhead.

// fork per connection (C)
while (1) {
  int clnt_sock = accept(serv_sock, ...);
  pid_t pid = fork();
  if (pid == 0) { // child: serve this client, then exit
    recvfrom(clnt_sock, ...);
    handle(data);
    exit(0);
  }
  close(clnt_sock); // parent closes its copy and goes back to accept()
}

Single‑process Event‑driven (Node.js)

Node.js runs all JavaScript on a single thread, uses libuv’s event loop for I/O multiplexing, and offloads CPU‑intensive or blocking file I/O to a thread pool, keeping the main thread responsive.

Answers

How does Node.js code run? Node initializes V8, registers built‑in C++ modules, creates an Environment, loads the module loader, sets up the global context, initializes libuv, then executes the user script which registers tasks into the event loop.

Why does a TCP connection stay listening? The server registers a listening TCP handle with libuv; the event loop polls it for new connections on every iteration, and the process cannot exit while active handles remain.

How does Node.js handle concurrent connections without blocking? It uses a single‑threaded event‑driven model backed by libuv’s I/O multiplexing (epoll/kqueue/IOCP). Network I/O is non‑blocking; file I/O that would block is delegated to a thread pool.

