How Linux Handles Network Packets: From NIC to Kernel and Back
This article explains the complete Linux network packet lifecycle, detailing how a UDP packet is received from a physical NIC, processed through driver interrupts, soft‑interrupts, the kernel network stack, and finally transmitted back out, covering key functions, queues, and netfilter hooks.
Linux Network Packet Receiving and Sending Process
Packet Reception Flow
To simplify, we describe the process of receiving and sending Linux network packets on a physical NIC using UDP as an example, ignoring irrelevant details.
From NIC to Memory
Each network device has a driver loaded at kernel boot; the driver connects the device to the kernel network stack. When a new packet arrives, the NIC triggers an interrupt handled by the driver.
The diagram shows how a packet moves from the NIC into system memory and is processed by the driver and network stack.
Packet enters the physical NIC; if the destination address does not match and the NIC is not in promiscuous mode, the packet is dropped.
The NIC uses DMA to write the packet to a memory address allocated by the driver.
The NIC raises a hardware interrupt (IRQ) to notify the CPU of the new packet.
The CPU invokes the registered interrupt handler, which calls the driver.
The driver disables further NIC interrupts, so subsequent packets are written straight to memory by DMA without interrupting the CPU again (avoiding an interrupt storm).
A soft interrupt (NET_RX_SOFTIRQ) is raised to continue processing, because the hard‑interrupt handler cannot be pre‑empted and must return quickly.
Kernel Packet Processing
The driver triggers a soft‑interrupt handler in the kernel network module to process the packet.
The ksoftirqd process calls net_rx_action. net_rx_action calls the driver’s poll function to handle packets one by one. poll reads the packet data written by the NIC; only the driver knows the memory format.
The driver converts the memory data into an skb (socket buffer) and calls napi_gro_receive. napi_gro_receive may merge related packets (GRO) and, if RPS is enabled, calls enqueue_to_backlog. enqueue_to_backlog places the packet into a CPU's input_pkt_queue and returns; if the queue is full the packet is dropped (the queue size is configurable via net.core.netdev_max_backlog).
The CPU processes the queue in soft‑interrupt context by calling __netif_receive_skb_core.
If RPS is not enabled, napi_gro_receive calls __netif_receive_skb_core directly.
If a raw socket of type AF_PACKET exists, the packet is copied to it (e.g., for tcpdump).
Finally the packet is handed to the kernel TCP/IP stack.
When all packets have been processed (poll finishes), the NIC interrupt is re‑enabled.
Kernel Network Protocol Stack
The packet now resides at the IP layer and proceeds to the transport layer.
IP Layer
ip_rcv checks whether the packet should be dropped and then invokes the NF_INET_PRE_ROUTING netfilter hook.
Routing: if the destination IP is not local and IP forwarding is disabled, the packet is dropped; otherwise ip_forward handles forwarding. ip_forward runs the NF_INET_FORWARD hook and then calls dst_output_sk.
If the packet is for the local host, ip_local_deliver runs the NF_INET_LOCAL_IN hook and passes the packet to the transport layer.
Transport Layer
udp_rcv is the entry point for UDP; it looks up the matching socket with __udp4_lib_lookup_skb. If no socket matches, the packet is dropped. sock_queue_rcv_skb checks that the socket receive queue has room, applies any BPF filter via sk_filter, and enqueues the packet: __skb_queue_tail adds it to the socket's receive queue, and sk_data_ready notifies the socket that data is ready.
All the above runs in soft‑interrupt context.
Packet Sending Flow
The sending path mirrors the receiving path, illustrated with UDP.
Application Layer
The application creates a socket and calls sendto, which invokes inet_sendmsg.
socket(...) creates and initializes a socket structure. sendto(sock, ...) triggers inet_sendmsg. inet_sendmsg ensures the socket has a source port, calling inet_autobind if needed. inet_autobind obtains a free port via get_port.
Transport Layer
udp_sendmsg obtains routing info with ip_route_output_flow, builds an skb via ip_make_skb, and fills in the UDP headers.
ip_route_output_flow selects the outgoing device and source IP; it may drop the packet if no route reaches the destination. ip_make_skb constructs the skb and calls __ip_append_data, which may return ENOBUFS if the send buffer is exhausted. udp_send_skb(skb, fl4) adds the UDP header and checksum, then passes the packet to the IP layer.
IP Layer
The IP layer sends the packet using ip_send_skb → __ip_local_out_sk → dst_output_sk → ip_output → NF_INET_POST_ROUTING → ip_finish_output → ip_finish_output2 → neighbor lookup → dev_queue_xmit.
ip_send_skb is the entry point for sending. __ip_local_out_sk sets the length and checksum fields, then runs the NF_INET_LOCAL_OUT hook. ip_output sets the output device on the skb and runs the NF_INET_POST_ROUTING hook (where SNAT may occur). ip_finish_output checks whether the route has changed and, if so, re-enters dst_output_sk. ip_finish_output2 looks up the next‑hop neighbor entry, creating one if it is missing. dst_neigh_output fills in the MAC address and calls dev_queue_xmit.
Kernel Transmission
dev_queue_xmit obtains the device's queueing discipline (qdisc) and enqueues the skb; devices without a qdisc (such as loopback) go straight to dev_hard_start_xmit. Traffic control may reorder, filter, or drop packets at this point.
dev_hard_start_xmit copies the skb for packet‑capture tools, then calls the driver's ndo_start_xmit to transmit.
If ndo_start_xmit fails, the kernel schedules a soft‑interrupt NET_TX_SOFTIRQ handled by net_tx_action for retry. ndo_start_xmit is driver‑specific; it places the skb into the NIC’s transmit queue, notifies the NIC, and the NIC raises an interrupt on completion.
Source
https://www.sobyte.net/post/2022-10/linux-net-snd-rcv/
This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact us and we will review it promptly.
Open Source Linux
Focused on sharing Linux/Unix content, covering fundamentals, system development, network programming, automation/operations, cloud computing, and related professional knowledge.
