
Understanding QEMU Vhost‑User Backend for Virtio‑Net Devices

This article explains the architecture and implementation of the virtio/vhost device model, details how QEMU creates and initializes virtio‑net‑pci devices, describes the vhost‑user communication protocol and its integration with DPDK and the Linux vhost‑net kernel driver, and provides practical command‑line examples and code snippets.

Deepin Linux

Vhost/virtio is a paravirtualized device abstraction used by QEMU and KVM to achieve high‑performance I/O: the front‑end driver (virtio) runs in the guest, while the back‑end (vhost) runs on the host. The Linux kernel provides the virtio‑net and vhost‑net drivers for networking.

The article first introduces virtio as a generic family of emulated devices that a hypervisor exposes through a common API, and explains the two‑layer communication model in which virtual queues (virtqueues) connect the front‑end and back‑end drivers.

QEMU Back‑End Driver

QEMU implements the control plane of virtio devices, while the data plane is handed off to the vhost framework (user‑mode vhost‑user or kernel‑mode vhost‑net). The creation of a virtio‑net‑pci device is illustrated with the following command line:

gdb --args ./x86_64-softmmu/qemu-system-x86_64 \
    -machine accel=kvm -cpu host -smp sockets=2,cores=2,threads=1 -m 3072M \
    -object memory-backend-file,id=mem,size=3072M,mem-path=/dev/hugepages,share=on \
    -hda /home/kvm/disk/vm0.img -mem-prealloc -numa node,memdev=mem \
    -vnc 0.0.0.0:00 -monitor stdio --enable-kvm \
    -netdev type=tap,id=eth0,ifname=tap30,script=no,downscript=no \
    -device e1000,netdev=eth0,mac=12:03:04:05:06:08 \
    -chardev socket,id=char1,path=/tmp/vhostsock0,server \
    -netdev type=vhost-user,id=mynet3,chardev=char1,vhostforce,queues=$QNUM \
    -device virtio-net-pci,netdev=mynet3,id=net1,mac=00:00:00:00:00:03,disable-legacy=on

The -device option creates the virtio‑net‑pci front‑end, which depends on a QEMU netdev object. The netdev in turn depends on a character device that provides the vhost‑user socket.

QEMU parses these options in main(), stores them in internal structures, and processes them with qemu_opts_foreach(). For the netdev, net_init_netdev() calls net_init_vhost_user(), which matches the character device and finally invokes net_vhost_user_init() to create the back‑end.

Device Creation Flow

During device creation QEMU calls qdev_device_add(), which allocates a DeviceState instance via object_new(), then runs the class instance_init function (e.g., virtio_net_pci_instance_init()) to set up the common virtio structures.

static void virtio_net_pci_instance_init(Object *obj)
{
    VirtIONetPCI *dev = VIRTIO_NET_PCI(obj);

    /* Embed and initialize the virtio-net device inside the PCI proxy. */
    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
                                TYPE_VIRTIO_NET);
    /* Forward the "bootindex" property to the embedded virtio-net device. */
    object_property_add_alias(obj, "bootindex", OBJECT(&dev->vdev),
                              "bootindex");
}

After instance initialization, QEMU invokes the class realize methods layer by layer (PCI → virtio‑pci → virtio‑net) to complete the device setup.

QEMU and Vhost‑User Interface

The vhost‑user back‑end communicates with QEMU over a Unix socket. The protocol consists of a fixed‑size header followed by a payload:

typedef struct {
    VhostUserRequest request; /* message type, e.g. VHOST_USER_GET_FEATURES */
    uint32_t flags;           /* protocol version plus reply/ack bits */
    uint32_t size;            /* payload size */
} VhostUserHeader;

typedef union {
    uint64_t u64;
    struct vhost_vring_state state;
    struct vhost_vring_addr addr;
    VhostUserMemory memory;
    VhostUserLog log;
    struct vhost_iotlb_msg iotlb;
    VhostUserConfig config;
    VhostUserCryptoSession session;
    VhostUserVringArea area;
} VhostUserPayload;

typedef struct VhostUserMsg {
    VhostUserHeader hdr;
    VhostUserPayload payload;
} VhostUserMsg;

Typical requests include VHOST_USER_GET_FEATURES, VHOST_USER_SET_MEM_TABLE, VHOST_USER_SET_VRING_ADDR, and VHOST_USER_SET_VRING_KICK. When the guest driver writes VIRTIO_CONFIG_S_DRIVER_OK to the device status register, QEMU starts the vhost back‑end via vhost_net_start(), which triggers VHOST_USER_SET_MEM_TABLE and the subsequent queue‑configuration messages; the back‑end finally logs “virtio is now ready for processing”.

Vhost Framework Design (DPDK)

DPDK implements the vhost‑user back‑end in lib/librte_vhost. The initialization sequence is:

rte_vhost_driver_register(path, flags) – allocates a vhost_user_socket and opens the socket file.

rte_vhost_driver_callback_register(path, ops) – registers the vhost_device_ops callbacks.

rte_vhost_driver_start(path) – creates a connection, registers a read callback, and waits for QEMU messages.

Each incoming message is dispatched via the vhost_message_handlers table, updating the local virtio_net structures and memory tables.

vhost‑net Kernel Driver

The kernel side provides /dev/vhost-net. QEMU opens this character device and uses a series of ioctl calls (e.g., VHOST_SET_OWNER, VHOST_SET_MEM_TABLE, VHOST_SET_VRING_KICK) to associate the guest’s memory, configure the queues, and exchange eventfd descriptors (kick and call). A dedicated kernel thread (vhost-$pid) polls these eventfds, keeping data‑plane processing in the kernel and out of the QEMU process.

Virtio‑Net Device Operation

Virtio‑net uses multiple virtqueues: one or more for transmission (TX), one or more for reception (RX), and one for control. The guest driver fills descriptors in the available ring and notifies the device via an MMIO write; the back‑end (vhost‑user or vhost‑net) reads the buffers, forwards the packets (through a TAP interface in the vhost‑net case), writes completions to the used ring, and raises an interrupt for the guest.

The article also lists practical QEMU command‑line examples for creating a virtio‑net device and for attaching an OVS port via the vhost‑user client interface.

Limitations and Considerations

Performance overhead compared with bare‑metal NICs.

Complexity of QEMU configuration and the need for deep knowledge of its device model.

Potential driver compatibility issues on guest operating systems.

Dependency on QEMU/KVM limits deployment on some embedded platforms.

Overall, the article provides a comprehensive walkthrough of how virtio‑net devices are built, initialized, and connected to high‑performance back‑ends using the vhost‑user protocol and the Linux vhost‑net kernel driver.

Written by

Deepin Linux

Research areas: Windows & Linux platforms, C/C++ backend development, embedded systems and Linux kernel, etc.
