Virtual Network Fundamentals: Virtual NIC, Virtio Backend, KVM Interaction, and Node-Level Switching
This article explains how virtual networks operate in cloud computing by covering virtual NIC implementation, Virtio frontend/backend mechanisms, KVM signal interaction, and both layer‑2 and layer‑3 forwarding within a node using OVS and related technologies.
1. Virtual NIC
In virtual machine scenarios, KVM virtualizes the CPU and memory while Qemu emulates hardware devices, including the virtual NIC. Modern IaaS clouds use Virtio as the standard virtual NIC; earlier implementations relied on fully emulated devices such as the E1000, which required software handling of every register access, interrupt, and bus operation, causing frequent VM exits, I/O copies, and kernel/user mode switches.
Virtio avoids this overhead through paravirtualization: a frontend (running inside the VM as a driver) and a backend (running on the host node) communicate via shared memory, turning the NIC into a producer‑consumer model.
Virtio device initialization on a Linux VM proceeds in three stages: virtio‑pci (the PCI transport), the virtio bus, and the virtio device itself. The VM reads the device's PCI BAR space, each access triggers a VM exit that Qemu handles, and the device is registered on the virtio bus so that drivers can share common initialization code such as vring setup.
During initialization the VM and host negotiate the supported Virtio feature bits and set up the virtqueue (vring): the guest writes the ring's guest‑physical address to the device, and the backend translates it to a host virtual address, ensuring both sides reference the same memory region.
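The producer‑consumer relationship over the shared ring can be illustrated with a minimal Python sketch. This is a toy model, not the real split‑virtqueue layout (which uses descriptor, available, and used rings in guest memory); all class and method names here are invented for illustration.

```python
from collections import deque

class Virtqueue:
    """Toy model of a virtqueue: the guest driver produces buffers
    into an 'available' ring; the host backend consumes them and
    hands them back through a 'used' ring. Real vrings live in guest
    memory shared with the host; plain deques stand in here."""

    def __init__(self, size=256):
        self.size = size
        self.avail = deque()   # descriptors posted by the guest
        self.used = deque()    # descriptors completed by the host

    # --- guest (frontend) side ---
    def driver_add(self, buf):
        if len(self.avail) >= self.size:
            raise BufferError("ring full")
        self.avail.append(buf)

    def driver_reap(self):
        return self.used.popleft() if self.used else None

    # --- host (backend) side ---
    def device_poll(self):
        while self.avail:
            buf = self.avail.popleft()
            # a real backend would forward the packet here
            self.used.append(buf)

vq = Virtqueue()
vq.driver_add(b"packet-1")
vq.device_poll()
print(vq.driver_reap())  # b'packet-1'
```

The key property the sketch preserves is that neither side copies the buffer to hand it over; both operate on the same shared structure.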
2. VirtIO Backend
The backend processes packets in two directions: (a) forwarding packets sent by the VM to the physical network, and (b) delivering received packets from the host to the VM. Three backend implementations exist, increasing in performance: virtio‑net (user‑mode Qemu), vhost‑net (kernel‑mode), and vhost‑user (user‑mode with a socket).
2.1. virtio‑net
In this mode the backend runs in user‑mode Qemu; the VM writes packets to a virtqueue, KVM notifies Qemu, which then pushes the packets through a TAP interface into the host kernel stack. The reverse path also requires copying between kernel and user space, resulting in relatively low forwarding performance.
2.2. vhost‑net
vhost‑net moves the data path into the Linux kernel. Qemu opens the /dev/vhost‑net device, the kernel spawns a vhost worker thread per queue, and that thread consumes packets directly from the virtqueue, eliminating the extra user/kernel context switches and memory copies, thus greatly improving efficiency.
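The per‑queue worker model can be sketched in user‑space Python: one thread blocks on a queue standing in for the virtqueue and forwards whatever arrives, without any round‑trip through another process. The function and variable names are hypothetical; the real vhost worker is a kernel thread, not a Python thread.

```python
import queue
import threading

def vhost_worker(vq: queue.Queue, forwarded: list, stop: threading.Event):
    """Stand-in for a vhost kernel thread: it blocks on its queue and
    forwards packets as they appear, never bouncing through user space."""
    while not stop.is_set():
        try:
            pkt = vq.get(timeout=0.1)
        except queue.Empty:
            continue
        forwarded.append(pkt)  # real vhost-net hands the frame to the TAP device
        vq.task_done()

vq, out, stop = queue.Queue(), [], threading.Event()
t = threading.Thread(target=vhost_worker, args=(vq, out, stop), daemon=True)
t.start()
for i in range(3):
    vq.put(f"pkt-{i}")
vq.join()        # wait until the worker has drained everything
stop.set()
print(out)       # ['pkt-0', 'pkt-1', 'pkt-2']
```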
2.3. vhost‑user
vhost‑user places both packet processing and virtqueue memory sharing in user space, typically using OVS‑DPDK or Tungsten Fabric as the backend. The VM’s virtio frontend negotiates with the backend over a Unix domain socket, allowing the backend to act as a client or server; the common practice is to let Qemu be the server so that the backend can reconnect after crashes.
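The feature negotiation between frontend and backend can be sketched with a Unix socket pair: one side advertises its feature bits, the other answers with the subset it accepts. The bit values and message framing here are invented for illustration and do not follow the actual vhost‑user protocol messages.

```python
import socket
import struct

# Hypothetical feature bitmasks, loosely in the spirit of VIRTIO_NET_F_* flags.
HOST_FEATURES  = 0b1011   # what the backend supports
GUEST_FEATURES = 0b1110   # what the guest driver supports

def negotiate(conn_backend, conn_qemu):
    """Toy negotiation over a Unix socket pair: the backend advertises
    its feature bits; Qemu replies with the intersection it accepts."""
    conn_backend.send(struct.pack("<Q", HOST_FEATURES))
    offered, = struct.unpack("<Q", conn_qemu.recv(8))
    accepted = offered & GUEST_FEATURES
    conn_qemu.send(struct.pack("<Q", accepted))
    final, = struct.unpack("<Q", conn_backend.recv(8))
    return final

a, b = socket.socketpair()
print(bin(negotiate(a, b)))  # 0b1010
a.close(); b.close()
```

Both sides end up operating with only the features both understand, which is what allows frontends and backends of different versions to interoperate.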
3. KVM Signal Interaction
Packet transmission between VM and host relies on two signaling mechanisms: ioeventfd for the transmit direction and irqfd for the receive direction. Both are eventfd file descriptors registered with KVM via ioctls: the backend waits on the ioeventfd for transmit notifications, while writes to the irqfd prompt KVM to inject an interrupt into the guest.
3.1. ioeventfd
The VM driver writes to a memory‑mapped address; KVM intercepts the write, triggers a VM exit, and writes to the associated ioeventfd. The backend polls this fd and, upon notification, reads the packet from the virtqueue for forwarding.
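A pipe can stand in for the eventfd to show the shape of this path: the guest's doorbell write becomes one byte on the pipe, and the backend wakes on it and drains the ring. This is a simulation of the signaling pattern only; real ioeventfds are registered with KVM via the KVM_IOEVENTFD ioctl, and all names here are made up.

```python
import os
import select

# Pipe as a stand-in for KVM's ioeventfd: one byte = one doorbell ring.
r, w = os.pipe()
tx_ring = []

def guest_kick(packet):
    """Guest driver: enqueue the packet, then 'write the doorbell'.
    In KVM this MMIO write causes a lightweight exit that signals the fd."""
    tx_ring.append(packet)
    os.write(w, b"\x01")

def backend_poll():
    """Backend: wait for the doorbell, then drain the shared ring."""
    ready, _, _ = select.select([r], [], [], 1.0)
    if ready:
        os.read(r, 1)  # consume the notification
        drained, tx_ring[:] = tx_ring[:], []
        return drained
    return []

guest_kick(b"frame-0")
print(backend_poll())  # [b'frame-0']
```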
3.2. irqfd
When the backend has placed a received packet into the virtqueue, it writes to irqfd. KVM receives the signal, injects an interrupt into the appropriate vCPU, and the VM’s virtio‑net driver handles the interrupt, ultimately delivering the packet to the guest network stack.
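The receive direction mirrors this: the backend signals, and the guest's interrupt handler drains the ring. Again a pipe simulates the irqfd (registered in reality via the KVM_IRQFD ioctl), and the function names are hypothetical.

```python
import os
import select

r_irq, w_irq = os.pipe()   # stand-in for the irqfd
rx_ring = []               # guest-visible receive ring
delivered = []             # packets that reached the guest network stack

def backend_receive(packet):
    """Backend: place the packet in the guest-visible ring, then signal
    the irqfd; KVM would inject a virtual interrupt on this write."""
    rx_ring.append(packet)
    os.write(w_irq, b"\x01")

def guest_irq_handler():
    """Guest virtio-net driver: woken by the injected interrupt,
    it drains the ring into the guest network stack."""
    ready, _, _ = select.select([r_irq], [], [], 1.0)
    if ready:
        os.read(r_irq, 1)
        delivered.extend(rx_ring)
        rx_ring.clear()

backend_receive(b"frame-in")
guest_irq_handler()
print(delivered)  # [b'frame-in']
```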
4. In‑Node Layer‑2 Forwarding
Using OVS as the virtual switch, ports are tagged with VLAN IDs to isolate VPCs. A layer‑2 forwarding table maps MAC addresses to OVS ports; when a packet arrives, OVS matches the VLAN tag and destination MAC, then forwards the frame to the correct port.
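The lookup itself is a table keyed on (VLAN, destination MAC). A minimal sketch, with made‑up MAC addresses and port names:

```python
# Toy layer-2 forwarding table: (vlan, dst_mac) -> OVS port.
fdb = {
    (100, "52:54:00:aa:bb:01"): "vnet0",
    (100, "52:54:00:aa:bb:02"): "vnet1",
    (200, "52:54:00:cc:dd:01"): "vnet2",
}

FLOOD = "flood"

def l2_forward(vlan, dst_mac):
    """Match VLAN tag + destination MAC; unknown unicast floods
    within the VLAN, as a real switch would."""
    return fdb.get((vlan, dst_mac), FLOOD)

print(l2_forward(100, "52:54:00:aa:bb:02"))  # vnet1
print(l2_forward(100, "52:54:00:cc:dd:01"))  # flood (MAC exists, wrong VLAN)
```

Because the VLAN ID is part of the key, a MAC learned in one VPC can never match traffic from another, which is exactly the isolation the tagging provides.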
5. In‑Node Layer‑3 Forwarding
Two approaches are described:
5.1. IPForward + OVS
Linux network namespaces provide a gateway virtual NIC per subnet, OVS tags traffic with VLAN IDs, and the host kernel's IP forwarding routes packets between subnets: OVS forwards a packet to the gateway port, the namespace routes it and rewrites the MAC addresses, and the packet re-enters OVS on its way to the destination port.
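The routing step inside the gateway namespace is an ordinary table lookup: find the subnet containing the destination IP, then hand the packet to that subnet's port. A sketch using the standard `ipaddress` module, with all addresses and port names invented:

```python
import ipaddress

# Hypothetical per-VPC routing state as the gateway namespace sees it:
# subnet -> (OVS port toward that subnet, gateway MAC on that subnet).
routes = {
    ipaddress.ip_network("10.0.1.0/24"): ("gw-sub1", "fe:ff:00:00:01:01"),
    ipaddress.ip_network("10.0.2.0/24"): ("gw-sub2", "fe:ff:00:00:02:01"),
}

def route(dst_ip):
    """Pick the subnet containing dst_ip, as the kernel's IP forwarding
    would via its routing table."""
    dst = ipaddress.ip_address(dst_ip)
    for net, (port, gw_mac) in routes.items():
        if dst in net:
            return port, gw_mac
    raise LookupError("no route")

port, gw_mac = route("10.0.2.15")
print(port)  # gw-sub2
```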
5.2. OVS‑Only
All routing logic resides in OVS. VLAN‑based routing tables match destination IPs, rewrite source MAC to the subnet gateway MAC, decrement TTL, then use a MAC‑rewrite table to set the final destination MAC before the packet is forwarded via the layer‑2 table.
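The three‑table pipeline described above — route match, source‑MAC rewrite plus TTL decrement, destination‑MAC rewrite, then layer‑2 output — can be sketched as one pass over Python dicts. All table contents are illustrative, not real OVS flow syntax:

```python
import ipaddress

routing_table = {"10.0.2.0/24": "fe:ff:00:00:02:01"}   # subnet  -> gateway MAC
mac_table     = {"10.0.2.15": "52:54:00:cc:dd:01"}     # dest IP -> VM MAC
l2_table      = {"52:54:00:cc:dd:01": "vnet2"}         # MAC     -> OVS port

def route_packet(pkt):
    """One pass through the three tables: match the destination subnet,
    rewrite the source MAC to the gateway MAC and decrement TTL, then
    set the final destination MAC and forward via the layer-2 table."""
    dst = ipaddress.ip_address(pkt["dst_ip"])
    for cidr, gw_mac in routing_table.items():
        if dst in ipaddress.ip_network(cidr):
            pkt["src_mac"] = gw_mac        # frame now appears to come from the gateway
            pkt["ttl"] -= 1                # routed hop
            pkt["dst_mac"] = mac_table[pkt["dst_ip"]]
            return l2_table[pkt["dst_mac"]]
    raise LookupError("no route")

pkt = {"dst_ip": "10.0.2.15", "src_mac": "orig", "dst_mac": "gw", "ttl": 64}
print(route_packet(pkt), pkt["ttl"])  # vnet2 63
```

Compared with the namespace approach, every rewrite here happens inside the switch's own tables, so no packet ever leaves OVS for the kernel routing stack.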
6. Node‑Level Packet Forwarding Process
The article concludes with a diagram (not reproduced here) that shows the end‑to‑end flow: packets enter the host kernel, are processed by OVS (or OVS‑DPDK), handed to the virtio backend, and finally delivered to the VM via KVM. The next part of the series will cover inter‑node (Overlay) forwarding and SDN control/forwarding planes.
360 Smart Cloud
Official service account of 360 Smart Cloud, dedicated to building a high-quality, secure, highly available, convenient, and stable one‑stop cloud service platform.