
Unlock High‑Performance VM Networking: Deep Dive into Virtio, Vhost‑net, and OVS

This article explains how modern data‑center virtualization is shifting from one‑to‑many to many‑to‑one VM models, then walks through the virtio and vhost‑net architectures, their kernel and user‑space components, and provides a step‑by‑step guide to set up a nested KVM environment with OVS for external network access.


Introduction

Cloud data centers are evolving from the traditional "one VM on many CPUs" (one-to-many) model toward a "many VMs on one virtual NIC" (many-to-one) approach. The shift is driven by HPC and AI workloads, and it echoes the old prediction that the world may one day need only five computers.

Virtualization, a core technology for cloud computing, relies on three key techniques: CPU virtualization, memory virtualization, and I/O virtualization.

Virtio and Vhost‑net Architecture

Key Components

KVM – the kernel‑based virtual machine that turns Linux into a hypervisor.

QEMU – the virtual machine monitor that emulates hardware devices.

Libvirt – the management daemon that converts XML configurations into QEMU commands.

These components combine to expose virtual NICs to guests via virtio devices.

Virtio Specification

Virtio provides a standard, efficient, and extensible interface for virtual devices. It separates the control plane (capability negotiation) from the data plane (packet forwarding). A virtio device consists of a front‑end (guest) and a back‑end (host) component, typically exposed through PCI.
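From inside a guest you can see both halves of this split: the PCI device the host exposes and the front-end driver bound to it. A quick check, assuming a virtio-net NIC attached over PCI (device and module names vary):

lspci | grep -i virtio    # the emulated "Virtio network device" on the guest PCI bus
lsmod | grep virtio       # front-end modules such as virtio_net, virtio_pci, virtio_ring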

Virtqueues are the mechanism for batch data transfer. Each virtqueue contains buffers allocated by the guest; the host reads or writes these buffers. Notifications inform the driver when buffers are available or have been used.

struct virtqueue {
    /* descriptor table, available ring, used ring */
};
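The guest exposes some of this state through sysfs and ethtool, which helps confirm what a device negotiated. A hedged example from inside a virtio-net guest (the paths assume the NIC is the first virtio device; ring-size reporting requires a reasonably recent driver):

ls /sys/bus/virtio/devices/                    # one entry per virtio device, e.g. virtio0
cat /sys/bus/virtio/devices/virtio0/features   # negotiated feature bits as a 0/1 string
ethtool -g enp1s0                              # RX/TX ring (virtqueue) sizes for the NIC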

Vhost Protocol

The vhost protocol offloads the data plane from QEMU to a separate component (in user space or in the kernel) to avoid costly context switches. QEMU hands that component the guest's memory layout and a pair of file descriptors used for notifications, so the data plane can be processed directly by, for example, the vhost-net driver.
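With plain QEMU this is a single flag on the tap back-end. A minimal, illustrative invocation (disk path, interface name, and MAC address are placeholders) that asks QEMU to hand the virtio-net data plane to vhost-net:

qemu-system-x86_64 -enable-kvm -m 2048 \
  -drive file=vhost-net.qcow2,if=virtio \
  -netdev tap,id=net0,ifname=tap0,script=no,downscript=no,vhost=on \
  -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56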

vhost‑net Kernel Driver

Loading the vhost_net module exposes the /dev/vhost-net character device. When QEMU opens it for a VM, the kernel spawns a per-VM worker thread (vhost-$pid, named after the QEMU process ID) that polls for I/O events and forwards packets. QEMU registers eventfd and ioeventfd descriptors with vhost-net and KVM, so notifications flow in both directions without stopping the vCPU.
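A quick way to confirm the module and its character device are present on the host before starting any guest:

lsmod | grep vhost      # expect vhost_net along with its dependency vhost
ls -l /dev/vhost-net    # the character device QEMU opens to set up the back-end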

Integration with Open vSwitch (OVS)

To connect VMs to external networks, OVS is used as a software switch. OVS runs a kernel datapath and a user-space daemon (ovs-vswitchd) that manages ports, including the TAP interfaces created for each VM. The bridge forwards traffic between physical NICs and vhost-net back-ends.
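A minimal sketch of wiring this up by hand (bridge, uplink, and tap names here are assumptions; in a libvirt setup the port attachment is usually handled for you):

sudo ovs-vsctl add-br ovsbr0            # create the OVS bridge
sudo ovs-vsctl add-port ovsbr0 eno1     # attach the physical uplink NIC
sudo ovs-vsctl add-port ovsbr0 vnet0    # attach the tap interface created for the VM
sudo ovs-vsctl show                     # verify the bridge and its ports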

Practical Setup Guide

1. Prepare Host Environment

Enable nested virtualization in VMware Workstation (Intel VT‑x/EPT or AMD‑V).

Verify KVM support with egrep -c '(vmx|svm)' /proc/cpuinfo and kvm‑ok.
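If both checks pass, the KVM modules and device node should also be visible inside this nested host (the module name depends on the CPU vendor):

lsmod | grep kvm    # kvm plus kvm_intel or kvm_amd
ls -l /dev/kvm      # the device node QEMU uses for hardware acceleration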

2. Install Required Packages

apt install -y qemu-kvm libvirt-daemon-system virtinst libosinfo-bin bridge-utils
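After installation, it is worth confirming the daemon is running and that your user can reach it without sudo (group names can differ slightly between distributions):

sudo systemctl enable --now libvirtd    # make sure the libvirt daemon is up
sudo usermod -aG libvirt,kvm $USER      # non-root access; log out and back in afterwards
virsh list --all                        # should print an empty table, not a permission error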

3. Create a Cloud Image and Template

wget http://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img
qemu-img create -f qcow2 -F qcow2 -b focal-server-cloudimg-amd64.img vhost-net.qcow2 20G
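Before installing anything on top of the overlay, qemu-img can confirm it was created correctly; it should report a 20G virtual size with the cloud image as its backing file:

qemu-img info vhost-net.qcow2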

4. Define and Start a Libvirt Network

virsh net-define /etc/libvirt/qemu/networks/default.xml
virsh net-start default
virsh net-list
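To see what was actually defined, and to have the NAT network come up automatically with libvirtd:

virsh net-dumpxml default     # bridge virbr0, NAT forwarding, 192.168.122.0/24 DHCP range
virsh net-autostart default   # optional, but convenient across reboots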

5. Create the VM with Virtio NIC

virt-install --import --name vhost-net --ram 2048 --vcpus 1 \
    --network network:default,model=virtio \
    --disk /home/ubuntu/vhost-net/vhost-net.qcow2,bus=virtio \
    --os-variant ubuntu20.04 --noautoconsole
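Once virt-install returns, the domain can be inspected from the host; the tap interface listed here is what the bridge and vhost-net actually forward to:

virsh list --all              # vhost-net should be listed as running
virsh domiflist vhost-net     # shows the vnet/tap interface, its bridge, and the virtio model
virsh console vhost-net       # attach to the guest serial console (exit with Ctrl+])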

6. Configure Guest Networking

After boot, obtain an IP via DHCP: dhclient -v enp1s0. Optionally, install network-manager or use Netplan so the interface is configured automatically on every boot (see the sketch below).
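A minimal Netplan configuration that persists across reboots, assuming the interface is named enp1s0 (adjust the name, and the renderer if NetworkManager is preferred):

sudo tee /etc/netplan/01-virtio-net.yaml <<'EOF'
network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: true
EOF
sudo netplan apply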

7. Verify Connectivity

Check the guest IP and default route.

Ping the gateway (e.g., 192.168.122.1) and an external host (e.g., baidu.com).

Inspect the iptables NAT and filter rules that libvirt installs for SNAT; the commands below cover these checks.
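One command per check (the gateway address comes from libvirt's default network; run the iptables command on the host, the rest inside the guest):

ip addr show enp1s0 && ip route            # guest: confirm the lease and default route
ping -c 3 192.168.122.1                    # guest: reach the libvirt NAT gateway
ping -c 3 baidu.com                        # guest: reach an external host through SNAT
sudo iptables -t nat -L POSTROUTING -n -v  # host: libvirt's MASQUERADE rule for 192.168.122.0/24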

8. Observe Performance Threads

On the host, ps -ef | grep vhost shows the QEMU process and the per‑VM vhost‑$pid kernel thread handling the data plane.

The combination of virtio, vhost‑net, and OVS provides a high‑performance, low‑overhead networking stack for virtual machines.
