Mastering Libvirt & QEMU‑KVM: From Setup to Live Migration
This comprehensive guide explains Libvirt’s architecture, installation, network configurations, bridge and NAT modes, optional Open vSwitch usage, and detailed steps for KVM live migration, providing command‑line examples and XML snippets for effective virtualization management.
Libvirt Overview
Libvirt is a widely used heterogeneous virtualization management tool composed of the libvirt API library and the libvirtd daemon, with the virsh CLI as the default command‑line manager.
libvirt API Library
It provides a unified northbound API for managing virtual resources, while southbound drivers interface with hypervisors such as KVM, VMware, and Xen. Language bindings exist for C, Python, Java, Perl, Ruby, and others; OpenStack, for example, uses this API to support multiple hypervisors.
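As a quick illustration, here is a minimal sketch using the Python binding (libvirt-python): it connects to a local QEMU/KVM host and lists the defined domains. The qemu:///system URI assumes a system-mode libvirtd.
import libvirt

# Open a read-only connection to the local system-mode libvirtd
conn = libvirt.openReadOnly('qemu:///system')
for dom in conn.listAllDomains():
    state = 'running' if dom.isActive() else 'shut off'
    print(dom.name(), state)
conn.close()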
libvirtd Daemon
The daemon follows a multi‑driver architecture offering management of compute, storage, network, security, and monitoring for virtual machines.
Virtual machine management: create, delete, pause, resume, migrate, and monitor VMs (see the Python sketch after this list).
Virtual network management: create, delete, and modify virtual networks, including Bridge/OvS, NAT, and VLAN.
Storage management: manage VM QCOW2 images and virtual disks.
Cluster management: manage VMs across multiple hosts running libvirtd.
Security policies: control access permissions for the host and VMs.
Monitoring and statistics: retrieve CPU, memory, network, and I/O usage.
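For instance, the VM-lifecycle operations above map directly onto the Python binding. A minimal sketch, assuming a defined domain named myvm:
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('myvm')   # 'myvm' is a placeholder domain name
dom.suspend()    # pause: vCPUs stop, guest memory is retained
dom.resume()     # continue execution
dom.shutdown()   # request a graceful (ACPI) shutdown
conn.close()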
Software Architecture
The core components are the Listener (accepts client connections; plain TCP uses port 16509), the Drivers (which interact with the underlying hypervisors), and persistent state (libvirtd stores VM and network definitions as XML files under /etc/libvirt rather than in a database).
Permission Modes
libvirtd can run in two permission modes: system mode (root privileges) and session mode (non‑root user).
Running Modes
Local control (application and libvirtd on the same host) or remote control (application and libvirtd on different hosts, communicating over TCP, TLS, or an SSH tunnel).
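Both the permission mode and the running mode are selected purely through the connection URI. A sketch (host names and user are placeholders):
import libvirt

conn_system  = libvirt.open('qemu:///system')                    # local, system mode (root-managed)
conn_session = libvirt.open('qemu:///session')                   # local, session mode (per-user)
conn_tcp     = libvirt.open('qemu+tcp://kvm-host1/system')       # remote over plain TCP (port 16509)
conn_ssh     = libvirt.open('qemu+ssh://root@kvm-host1/system')  # remote, tunnelled through SSH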
VM XML Example
libvirtd stores VM configuration in XML. Example:
<domain type='kvm'>
<name>myvm</name>
<memory unit='KiB'>1048576</memory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-2.9'>hvm</type>
<boot dev='hd'/>
</os>
<devices>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/libvirt/images/myvm.qcow2'/>
<target dev='vda' bus='virtio'/>
</disk>
<interface type='network'>
<mac address='52:54:00:9b:08:fa'/>
<source network='default'/>
<model type='virtio'/>
</interface>
</devices>
</domain>

domain element: type='kvm' indicates the KVM hypervisor.
name element: specifies the VM name.
memory element: defines the VM memory size in KiB.
vcpu element: defines the number of virtual CPUs; placement='static' means static allocation.
os element: defines the OS type and boot order.
devices element: defines VM devices such as disk and interface.
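The same XML can be fed to the API instead of virsh define. A minimal sketch, assuming the XML above has been saved to /root/myvm.xml:
import libvirt

with open('/root/myvm.xml') as f:
    xml = f.read()

conn = libvirt.open('qemu:///system')
dom = conn.defineXML(xml)   # persist the config (like `virsh define`)
dom.create()                # boot the VM (like `virsh start`)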
virsh CLI
virsh list # list running VMs
virsh list --all # list all VMs
virsh console centos72 # connect to VM console
virsh start centos72 # start VM
virsh reboot centos72 # reboot VM
virsh shutdown centos72 # graceful shutdown
virsh destroy centos72 # force shutdown
virsh suspend centos72 # pause VM
virsh resume centos72 # resume VM
virsh undefine centos72 # remove VM XML config
virsh autostart centos72 # enable autostart
virsh autostart --disable centos72 # disable autostart
virsh dumpxml centos72 # view VM XML
virsh edit centos72 # edit VM XML
virsh setvcpus ... # change the number of vCPUs
virsh setmaxmem ... # change the maximum memory

Libvirt + QEMU‑KVM Environment Deployment
HostOS Configuration Optimization
Configure nearby yum and EPEL mirror repositories for faster package downloads.
Upgrade the HostOS packages (yum update).
(Optional) Enable KVM Nested Virtualization
If the host OS itself runs inside a VM, enable nested virtualization to expose the same CPU hardware assistance to the inner KVM VMs.
# Check if nested is enabled
cat /sys/module/kvm_intel/parameters/nested
# Enable nested
echo 'options kvm_intel nested=1' > /etc/modprobe.d/kvm-nested.conf
modprobe -r kvm_intel
modprobe kvm_intel
# For existing VMs, edit XML to set CPU mode to host-passthrough
virsh shutdown <domain>
virsh edit <domain>
<cpu mode='host-passthrough'/>
virsh start <domain>

Install CentOS GNOME GUI
yum groupinstall -y "X Window System"
yum groupinstall -y "GNOME Desktop" "Graphical Administration Tools"
init 5

Install Libvirt + QEMU‑KVM
# Disable SELinux enforcement
setenforce 0
sed -i 's/=enforcing/=disabled/g' /etc/selinux/config
# Disable firewall
systemctl disable firewalld && systemctl stop firewalld
# Install packages
yum install -y qemu-kvm libvirt virt-manager virt-install bridge-utils
# Allow libvirtd to run as root
vi /etc/libvirt/qemu.conf # set user="root" and group="root"
systemctl start libvirtd && systemctl enable libvirtd

Key packages and their roles: qemu‑kvm (KVM hypervisor), qemu‑img (QCOW2 tool), libvirt (management API), libvirt‑client (client utilities), virt‑manager (GUI), virt‑install (CLI creation), libvirt‑python (Python API), python‑virtinst, virt‑viewer, virt‑top, virt‑clone, libguestfs‑tools, bridge‑utils.
Libvirt Virtual Network Modes
Default Linux Bridge (NAT)
On first start, libvirtd creates a vSwitch named virbr0 as a Linux Bridge operating in NAT mode, allowing VMs to access external networks while remaining invisible from outside.
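You can confirm this from the API; the sketch below dumps the default network's XML (equivalent to virsh net-dumpxml default):
import libvirt

conn = libvirt.openReadOnly('qemu:///system')
net = conn.networkLookupByName('default')
print(net.XMLDesc(0))   # look for <forward mode='nat'/> and <bridge name='virbr0' .../>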
Bridge Mode
Acts as an L2 switch, placing VMs on the same LAN as the host, enabling mutual access and external visibility.
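On the VM side, bridge mode means an interface of type='bridge' pointing at a host bridge. A hedged sketch that hot-plugs such an interface, assuming a running domain named myvm and the br-mgmt bridge configured below:
import libvirt

iface_xml = """
<interface type='bridge'>
  <source bridge='br-mgmt'/>
  <model type='virtio'/>
</interface>
"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('myvm')   # placeholder domain name
# Attach to the live VM and persist the change in its config
dom.attachDeviceFlags(iface_xml,
                      libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)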
Configure Bridge on HostOS
Management Bridge
# Create bridge br-mgmt
vi /etc/sysconfig/network-scripts/ifcfg-br-mgmt
TYPE=Bridge
DEVICE=br-mgmt
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.2
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=114.114.114.114
DNS2=8.8.8.8
# Bind physical NIC to bridge
vi /etc/sysconfig/network-scripts/ifcfg-enp2s0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
BRIDGE=br-mgmt
systemctl restart network
ip a

LAN Bridges
# Create LAN bridges
brctl addbr br-lan01
brctl addbr br-lan02
# Configure bridge devices
vi /etc/sysconfig/network-scripts/ifcfg-br-lan01
TYPE=Bridge
DEVICE=br-lan01
ONBOOT=yes
BOOTPROTO=static
vi /etc/sysconfig/network-scripts/ifcfg-br-lan02
TYPE=Bridge
DEVICE=br-lan02
ONBOOT=yes
BOOTPROTO=static
systemctl restart network
brctl show
ifup br-lan01
ifup br-lan02

NAT Mode
Uses iptables MASQUERADE to let VMs use the host IP for outbound traffic while remaining invisible from external networks.
Routed Mode
The vSwitch functions as an L3 router, providing VMs with a default gateway and enabling L3 routing to external networks.
Isolated Mode
The vSwitch does not connect to external networks; VMs can communicate with each other and the host but not beyond.
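These three modes differ only in the network XML's <forward> element: <forward mode='nat'/> for NAT, <forward mode='route'/> for routed, and no <forward> element at all for isolated. A sketch defining an isolated network via the API (name, bridge, and subnet are placeholders):
import libvirt

net_xml = """
<network>
  <name>isolated0</name>
  <bridge name='virbr10'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'/>
</network>
"""
# No <forward> element: isolated mode. Add <forward mode='nat'/> or
# <forward mode='route'/> to get the other two behaviours.

conn = libvirt.open('qemu:///system')
net = conn.networkDefineXML(net_xml)
net.create()          # start the network now
net.setAutostart(1)   # start it whenever libvirtd starts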
(Optional) Use Open vSwitch (OvS) Instead of Linux Bridge
# Define OvS network XML
vi ovs-net.xml
<network>
<name>ovs-net</name>
<forward mode="bridge"/>
<bridge name="ovs-br0"/>
<virtualport type="openvswitch"/>
</network>
# Apply OvS network
virsh net-destroy default
virsh net-define ovs-net.xml
virsh net-start ovs-net
virsh net-autostart ovs-net
virsh net-list --all
# Create VM using OvS network
virt-install \
--virt-type kvm \
--name vm01 \
--ram 128 \
--boot hd \
--disk path=cirros.qcow2 \
--network network=ovs-net,mac=52:54:00:aa:69:cc \
--graphics vnc,listen=0.0.0.0 \
--noautoconsole

Libvirt Live Migration
Live migration consists of a data transfer layer and a control layer.
Data Transfer Layer
Hypervisor‑based transfer: direct connection between hypervisors, low data volume but requires network configuration and may lack encryption.
libvirtd‑tunnel transfer: RPC tunnel between libvirtd daemons, easier firewall setup (single port) and encrypted, but higher data copying overhead.
Control Layer
Libvirt client controls migration, receiving feedback from source and destination libvirtd.
Source libvirtd controls migration, client only initiates the command.
Hypervisor controls migration directly, assuming the hypervisor supports live migration; the sketch after this list shows how these schemes surface as libvirt migration flags.
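These transfer and control choices surface as migration flags in the API. A sketch of two common combinations (the flag names are real libvirt constants; the pairing with the schemes above is illustrative):
import libvirt

# Hypervisor-native transfer, client-controlled migration:
flags_native = libvirt.VIR_MIGRATE_LIVE

# libvirtd tunnel (one firewall port, encrypted), peer-to-peer control:
flags_tunnel = (libvirt.VIR_MIGRATE_LIVE
                | libvirt.VIR_MIGRATE_PEER2PEER
                | libvirt.VIR_MIGRATE_TUNNELLED)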
KVM Pre‑Copy Live Migration Process
Step 1: Verify target host resources and reserve necessary CPU, memory, and network settings.
Step 2: While the VM runs on the source, copy all memory to the target.
Step 3: Iteratively copy dirty pages detected after each cycle.
Step 4: Continue cycles until the dirty‑page rate falls below a threshold, then pause the source VM.
Step 5: With both source and target VMs stopped, copy the final dirty pages and VM state.
Step 6: Unlock storage on the source, lock it on the target, start the target VM and reconnect network and storage.
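While the iterative copy runs, the source can report its progress. A sketch that polls the job statistics until the migration job ends, assuming a domain named myvm is mid-migration:
import time
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('myvm')   # placeholder domain name
while True:
    job = dom.jobInfo()   # [type, timeElapsed, ..., memTotal, memProcessed, memRemaining, ...]
    if job[0] != libvirt.VIR_DOMAIN_JOB_UNBOUNDED:
        break             # no unbounded job running: migration finished or aborted
    print('guest memory still to copy: %d bytes' % job[8])
    time.sleep(1)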
Migration Programming Example
import libvirt

# Connect to the source and destination libvirtd daemons (TCP transport)
conn_src = libvirt.open('qemu+tcp://username@src_server/system')
conn_dest = libvirt.open('qemu+tcp://username@dest_server/system')

vm_domain = conn_src.lookupByName('instance_name')
# VIR_MIGRATE_LIVE keeps the guest running while memory is copied
vm_domain.migrate(conn_dest, libvirt.VIR_MIGRATE_LIVE, 'instance_name', None, 0)

help(vm_domain.migrate)   # inspect the binding's migrate() signature and flags