How Kube-OVN Enables Seamless Live Migration for KubeVirt VMs
This article explains the challenges of live-migrating KubeVirt virtual machines, how Kube-OVN works around bridge-mode limitations and keeps the VM's IP unchanged, the annotation required to enable the feature, step-by-step migration commands, and the multi-stage migration mechanism that keeps network interruption under 0.5 seconds without breaking TCP connections.
Background and challenges
Live migration of VMs moves a VM from one node to another for maintenance, upgrades, or failover. KubeVirt has the following limitations:
Bridge network mode is not supported for live migration by default.
KubeVirt only migrates memory and disks; it lacks network‑specific optimizations.
If the VM's IP changes during migration, seamless migration fails.
Network interruptions during migration also break seamless migration.
Kube‑OVN solution
Kube‑OVN adds real‑time multi‑host port binding to keep the VM's IP unchanged and mirrors traffic during migration. Tests show network interruption under 0.5 seconds and no TCP connection loss.
Enable seamless migration
Add the annotation kubevirt.io/allow-pod-bridge-network-live-migration: "true" under spec.template.metadata.annotations in the VirtualMachine manifest. Kube-OVN then manages the pod's bridge network automatically during migration.
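For a VM that already exists, the same annotation can be applied with a strategic-merge patch. The sketch below only constructs the patch body; the kubectl invocation shown in the comment is one way to apply it:

```python
import json

# Strategic-merge patch that adds the Kube-OVN live-migration annotation to
# the VM's pod template. Apply it with, for example:
#   kubectl patch vm testvm --type merge -p "$PATCH"
patch = {
    "spec": {
        "template": {
            "metadata": {
                "annotations": {
                    "kubevirt.io/allow-pod-bridge-network-live-migration": "true"
                }
            }
        }
    }
}
print(json.dumps(patch))
```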
Example VM manifest
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  runStrategy: Always
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: testvm
      annotations:
        kubevirt.io/allow-pod-bridge-network-live-migration: "true"
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              bridge: {}
        resources:
          requests:
            memory: 64M
      networks:
        - name: default
          pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
        - name: cloudinitdisk
          cloudInitNoCloud:
            userDataBase64: SGkuXG4=
Migration procedure
Create the VM using the manifest above.
SSH into the VM (default password gocubsgo) and verify connectivity, e.g., ping 8.8.8.8.
In another terminal run virtctl migrate testvm. The SSH session stays alive; only occasional packet loss may be observed.
Migration mechanism
Kube‑OVN follows a “real‑time migration – multi‑host port binding” workflow:
KubeVirt initiates migration and creates the target pod.
Kube‑OVN detects the target pod and reuses the source pod's port information.
Kube‑OVN configures traffic mirroring so packets are sent to both source and target pods, reducing interruption.
KubeVirt synchronizes VM memory.
While memory is being synchronized, Kube-OVN keeps the target pod's port inactive so that traffic is never handled by both pods at once.
When memory synchronization completes, the source pod stops handling network traffic.
Kube-OVN activates the target pod's port, and a RARP packet sent from the target node refreshes the network's MAC/ARP tables so traffic switches to the target pod. Once KubeVirt marks the migration complete, Kube-OVN, watching the Migration custom resource, removes the traffic mirroring.
The brief interruption occurs between the source pod stopping and the target port being activated; the delay is dominated by waiting for libvirt's RARP emission (≈ 0.5 s). TCP retransmission bridges the gap, so established connections are not lost.
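For illustration, the gratuitous RARP announcement emitted on the target node is a standard 42-byte Ethernet frame (EtherType 0x8035) that carries the VM's unchanged MAC. The sketch below reconstructs such a frame; it is not qemu's or libvirt's actual code:

```python
import struct

def rarp_announce_frame(mac: bytes) -> bytes:
    """Build a gratuitous RARP request announcing `mac`.

    Switches that see this frame relearn the MAC on the port of the
    destination node, which is what steers traffic to the target pod.
    """
    assert len(mac) == 6
    broadcast = b"\xff" * 6
    # Ethernet header: dst = broadcast, src = VM MAC, EtherType 0x8035 (RARP)
    eth_header = broadcast + mac + struct.pack("!H", 0x8035)
    # RARP body: htype=1 (Ethernet), ptype=0x0800 (IPv4), hlen=6, plen=4,
    # opcode=3 (reverse request). Sender/target hardware addresses are the
    # VM MAC; the protocol addresses are zero, since only the MAC matters.
    body = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 3)
    body += mac + b"\x00" * 4 + mac + b"\x00" * 4
    return eth_header + body

frame = rarp_announce_frame(b"\x52\x54\x00\x12\x34\x56")
print(len(frame))  # 42: 14-byte Ethernet header + 28-byte RARP body
```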
Cloud Native Technology Community
The Cloud Native Technology Community, part of the CNBPA Cloud Native Technology Practice Alliance, focuses on evangelizing cutting‑edge cloud‑native technologies and practical implementations. It shares in‑depth content, case studies, and event/meetup information on containers, Kubernetes, DevOps, Service Mesh, and other cloud‑native tech, along with updates from the CNBPA alliance.