
Understanding Localhost (127.0.0.1) Network I/O in the Linux Kernel

This article explains in detail how Linux handles local network I/O for 127.0.0.1, comparing it with cross‑machine communication, describing the routing, device subsystem, driver code, and soft‑interrupt processing, and concluding with performance considerations and a discussion on eBPF acceleration.


Hello, I am Fei Ge! After finishing the analysis of Linux network packet receive and send processes, a reader asked: "How does 127.0.0.1 local network I/O work?" This article answers that question.

Local network I/O is widely used in PHP (Nginx + php‑fpm) and in micro‑service side‑car patterns, so understanding its inner workings is valuable.

1. Cross‑machine network communication process

Before discussing local communication, we briefly review cross‑machine networking.

1.1 Cross‑machine data sending

From the send system call to the NIC transmitting the packet, the overall flow is:

The user data is copied to kernel space, processed by the protocol stack, placed into a ring buffer, and finally the NIC driver sends it out. Completion is signaled by a hardware interrupt that cleans the ring buffer.

1.2 Cross‑machine data receiving

When a packet arrives on another host, the NIC raises an interrupt, the kernel’s soft‑interrupt handler processes the packet, and finally wakes the user process.

1.3 Summary of cross‑machine communication

In both directions the physical NIC sits on the critical path: sending requires copying the data to kernel space, queuing it on the ring buffer, and a completion hardware interrupt; receiving requires a hardware interrupt followed by soft‑interrupt processing before the user process is woken.

2. Local (127.0.0.1) sending process

The cross‑machine flow is already known; for local I/O we focus on the two places where it differs: routing and the device driver.

2.1 Network‑layer routing

The entry point is ip_queue_xmit. For local traffic the routing lookup finds an entry in the local routing table, which points to the loopback device lo.

// file: net/ipv4/ip_output.c
int ip_queue_xmit(struct sk_buff *skb, struct flowi *fl) {
    struct sock *sk = skb->sk;
    // check the socket-cached route first
    rt = (struct rtable *)__sk_dst_check(sk, 0);
    if (rt == NULL) {
        // no cached route: look it up and cache the result
        rt = ip_route_output_ports(...);
        sk_setup_caps(sk, &rt->dst);
    }
    ...
}

The lookup uses fib_lookup, which searches the local routing table first and falls back to the main table.

// file: include/net/ip_fib.h
static inline int fib_lookup(struct net *net, const struct flowi4 *flp,
                            struct fib_result *res) {
    struct fib_table *table;
    table = fib_get_table(net, RT_TABLE_LOCAL);
    if (!fib_table_lookup(table, flp, res, FIB_LOOKUP_NOREF))
        return 0;
    table = fib_get_table(net, RT_TABLE_MAIN);
    if (!fib_table_lookup(table, flp, res, FIB_LOOKUP_NOREF))
        return 0;
    return -ENETUNREACH;
}

Running ip route list table local on a Linux host shows entries such as:

# ip route list table local
local 10.143.x.y dev eth0 proto kernel scope host src 10.143.x.y
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1

Thus the packet is handed to net->loopback_dev, the lo virtual NIC.

2.2 Network device subsystem

The entry point is dev_queue_xmit. For a physical NIC the packet is enqueued on a queuing discipline and eventually reaches dev_hard_start_xmit. The loopback device has no queuing discipline (q->enqueue is NULL), so the packet goes directly to dev_hard_start_xmit and from there to the loopback driver's loopback_xmit.

// file: net/core/dev.c
int dev_queue_xmit(struct sk_buff *skb) {
    q = rcu_dereference_bh(txq->qdisc);
    if (q->enqueue) {
        rc = __dev_xmit_skb(skb, q, dev, txq);
        goto out;
    }
    // loopback device: enqueue is NULL
    if (dev->flags & IFF_UP) {
        dev_hard_start_xmit(skb, dev, txq, ...);
        ...
    }
}

The hard‑start function calls the device’s ndo_start_xmit operation.

// file: net/core/dev.c
int dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev,
                       struct netdev_queue *txq) {
    const struct net_device_ops *ops = dev->netdev_ops;
    rc = ops->ndo_start_xmit(skb, dev);
    ...
}

2.3 Loopback “driver”

The loopback driver is pure software, located in drivers/net/loopback.c. Its loopback_xmit function simply orphans the skb (detaching it from the sending socket) and hands it to netif_rx.

// file: drivers/net/loopback.c
static netdev_tx_t loopback_xmit(struct sk_buff *skb,
                                 struct net_device *dev) {
    // detach from original socket
    skb_orphan(skb);
    // deliver to the receive path
    if (likely(netif_rx(skb) == NET_RX_SUCCESS)) {
        ...
    }
}

netif_rx enqueues the skb into the current CPU's input_pkt_queue and raises the NET_RX_SOFTIRQ soft interrupt.

3. Local (127.0.0.1) receiving process

Because no hardware interrupt is involved, the soft interrupt raised by netif_rx is what triggers processing: the handler net_rx_action runs and polls the backlog.

// file: net/core/dev.c
static void net_rx_action(struct softirq_action *h) {
    while (!list_empty(&sd->poll_list)) {
        work = n->poll(n, weight);
    }
}

The per‑CPU softnet_data structure’s poll function is set to process_backlog during initialization.

// file: net/core/dev.c
static int __init net_dev_init(void) {
    for_each_possible_cpu(i) {
        sd->backlog.poll = process_backlog;
    }
}

process_backlog moves skbs from input_pkt_queue to process_queue and finally calls __netif_receive_skb, which hands the packet to the IP stack (ip_rcv). From there the receive path is the same as in the cross‑machine case: the protocol stack processes the packet and wakes the user process.

4. Summary of local network I/O

Key conclusions:

127.0.0.1 traffic does not pass through a physical NIC; it uses the loopback virtual device.

The kernel still traverses the full network stack (system call → IP layer → neighbor subsystem → device subsystem → driver → soft‑interrupt → protocol stack).

Loopback avoids hardware‑related overhead (no DMA, no hardware interrupt), but most software processing remains.

eBPF can be used to short‑circuit much of the kernel stack for side‑car communication, offering further performance gains.

Local network I/O may still involve IP fragmentation if the skb exceeds the MTU, but the loopback MTU is typically 65536 on modern kernels, far larger than Ethernet's 1500.

Discussion question: When accessing a local server, is using 127.0.0.1 faster than using the host’s own IP address (e.g., 192.168.x.x)? Share your thoughts in the comments.

Written by

IT Services Circle

Delivering cutting-edge internet insights and practical learning resources. We're a passionate and principled IT media platform.
