
Why Use KubeVirt and How to Deploy It on a Kubernetes Cluster

This article explains the motivations for adopting KubeVirt, introduces its concepts and architecture, details the component design, and provides step‑by‑step instructions—including code snippets—for deploying KubeVirt, CDI, Ceph‑CSI, external snapshotter, creating VMs and exposing VNC, while outlining future directions for cloud‑native virtualization.


1. Why Use KubeVirt?

Kubernetes offers fault tolerance, high availability, openness, and scalability, and the platform's core business has been fully containerized. However, legacy Windows workloads with high migration costs cannot be moved into containers directly. KubeVirt covers this last mile toward full containerization by running those workloads as virtual machines on the same cluster.

2. What Is KubeVirt?

2.1 Concept

KubeVirt is an open-source project initiated by Red Hat that runs and manages virtual machines inside a Kubernetes cluster, allowing containers and VMs to be managed and operated in a unified way.

2.2 Architecture Design

The design draws on OpenStack's rapid-delivery model: the OS disk uses external-snapshotter for snapshot cloning (delivery in seconds), data disks use PVs/PVCs managed by Ceph-CSI, and a custom finite state machine (FSM) bridges the Kubernetes API and KubeVirt CRUD operations for fast resource provisioning.

2.3 Component Description

virt-api: Cluster-level RESTful API server; the entry point for all KubeVirt VM-related requests.

virt-controller: Watches VM resources cluster-wide and reconciles them, e.g., creating the virt-launcher pods that host VM instances.

virt-handler: Runs on each node (as a DaemonSet) to manage and monitor the VMs on that node.

virt-launcher: Runs one per VM pod; starts and supervises the VM process.

virt-vnc: Provides VNC remote access to VMs.

CDI (Containerized Data Importer): Handles import and management of VM disk images.

Ceph-CSI: CSI driver that integrates Ceph storage into Kubernetes.

external-snapshotter: Manages volume snapshot resources for CSI storage plugins.

FSM: Custom distributed state machine driving resource creation, scheduling, and cross-system delivery.

3. How to Implement KubeVirt

3.1 Business Requirements

These include third-party private services requiring special license migration, legacy Windows scenarios, network-specific private services, and third-party machine-learning services.

3.2 Process Flow

User → Ticket → Tiered Approval → Resource Confirmation → Scheduling → Delivery.

3.3 Image Customization

virt-install -n windows2012 -f /data/centos74.img -s 120 -r 16384 -v --vcpus=16 --vnc --cdrom=/mnt/windows2012

Standardize system services, partitions, security, kernel parameters, and auto‑generate network configuration via init scripts. Data disks are formatted and auto‑mounted after VM delivery.
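As a minimal sketch of that post-delivery step for a Linux guest (the helper name, XFS filesystem, and mount options are assumptions, not from the original setup), a data-disk init script might generate fstab entries like this:

```shell
#!/bin/sh
# Sketch of a data-disk init helper: format the disk and emit its fstab line.
# The destructive mkfs/mkdir step is commented out so the sketch is safe to run.
fstab_entry() {
  dev="$1"; mnt="$2"
  # mkfs.xfs -f "$dev" && mkdir -p "$mnt"
  printf '%s %s xfs defaults,noatime 0 0\n' "$dev" "$mnt"
}

fstab_entry /dev/vdb /data
# → /dev/vdb /data xfs defaults,noatime 0 0
```

Appending that line to /etc/fstab makes the mount persist across reboots, so delivered VMs come up with their data disks ready.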

3.4 Component Standardization

Upload pinned, stable component versions to an internal Harbor registry to avoid unintended upgrades.

3.5 CDI Deployment (v1.55.0)

wget https://github.com/kubevirt/containerized-data-importer/releases/download/v1.55.0/cdi-cr.yaml
wget https://github.com/kubevirt/containerized-data-importer/releases/download/v1.55.0/cdi-operator.yaml

Modify the CDI operator to run on master nodes with appropriate node selectors and tolerations (see original YAML for details).
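A sketch of the kind of scheduling patch meant here, applied to the pod template inside the cdi-operator Deployment. It assumes the legacy `node-role.kubernetes.io/master` label and taint used at the time (newer clusters use `control-plane` instead), so adjust the keys to match your nodes:

```yaml
# Fragment of the cdi-operator Deployment (spec.template.spec)
nodeSelector:
  node-role.kubernetes.io/master: ""
tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
```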

3.6 KubeVirt Deployment (v0.55.2)

wget https://github.com/kubevirt/kubevirt/releases/download/v0.55.2/kubevirt-cr.yaml
wget https://github.com/kubevirt/kubevirt/releases/download/v0.55.2/kubevirt-operator.yaml

Adjust the CR to run core pods on control-plane nodes and workloads on nodes labeled node-role.kubernetes.io/BIZ-Kubevirt. Apply the manifests:
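One way to express that placement in the KubeVirt CR, using the `spec.infra` and `spec.workloads` nodePlacement fields available in v0.55.x. The control-plane label/taint keys and the empty value on the BIZ-Kubevirt label are assumptions; match them to your cluster:

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  infra:                      # virt-api, virt-controller, etc.
    nodePlacement:
      nodeSelector:
        node-role.kubernetes.io/control-plane: ""
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
  workloads:                  # virt-handler and virt-launcher pods
    nodePlacement:
      nodeSelector:
        node-role.kubernetes.io/BIZ-Kubevirt: ""
```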

kubectl apply -f kubevirt-operator.yaml
kubectl apply -f kubevirt-cr.yaml

3.7 Ceph‑CSI Deployment

Create a Ceph pool for KubeVirt:

ceph osd pool create kubevirt 1024

Create a client key for RBD access:

ceph auth get-or-create client.kubevirt mon 'profile rbd' osd 'profile rbd pool=kubevirt' mgr 'profile rbd pool=kubevirt'

Deploy the CSI ConfigMap, StorageClass, and Secret (see original YAML snippets).
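A condensed sketch of those three objects for the RBD driver. The cluster ID, monitor address, key, and namespace are placeholders that must come from your own Ceph cluster (e.g., `ceph mon dump` and the `ceph auth get-or-create` output above); the StorageClass name matches the `csi-rbd-sc` used later for the image upload:

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-config
  namespace: ceph-csi-rbd
data:
  config.json: |-
    [{"clusterID": "<ceph-fsid>", "monitors": ["<mon-ip>:6789"]}]
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: ceph-csi-rbd
stringData:
  userID: kubevirt
  userKey: <key from ceph auth get-or-create>
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <ceph-fsid>
  pool: kubevirt
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-rbd
reclaimPolicy: Delete
allowVolumeExpansion: true
```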

3.8 External‑Snapshotter Deployment (v6.1.0)

wget https://github.com/kubernetes-csi/external-snapshotter/archive/refs/tags/v6.1.0.tar.gz
tar -xzvf v6.1.0.tar.gz
cd external-snapshotter-6.1.0/client/config/crd
kubectl apply -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml
kubectl apply -f snapshot.storage.k8s.io_volumesnapshots.yaml
kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
cd ../../../deploy/kubernetes/
kubectl create -f csi-snapshotter/
kubectl create -f snapshot-controller/

Obtain the CDI upload‑proxy address and upload a Windows image:

kubectl get all -n kubevirt | grep 'cdi-uploadproxy' | grep 'ClusterIP' | awk '{print $3}'
virtctl image-upload --namespace=vm-windows-12 --pvc-size=129G --storage-class=csi-rbd-sc --pvc-name=windows2012 --image-path=/data/windows2012.qcow2 --uploadproxy-url=https://<cdi-uploadproxy-cluster-ip> --insecure

3.9 Create VM Snapshot and VM

Define a VolumeSnapshotClass and a VolumeSnapshot for the Windows OS, then create PVCs for the OS and data disks, and finally a VirtualMachine manifest that references those PVCs (see original YAML for full details).
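A condensed sketch of that manifest chain, snapshotting the uploaded `windows2012` PVC and cloning the OS disk from it. The object names, disk size, and SATA bus are illustrative, the snapshotter secret is assumed to match the Ceph-CSI Secret, and the data-disk PVC is omitted for brevity:

```yaml
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-rbd-snapclass
driver: rbd.csi.ceph.com
deletionPolicy: Delete
parameters:
  csi.storage.k8s.io/snapshotter-secret-name: csi-rbd-secret
  csi.storage.k8s.io/snapshotter-secret-namespace: ceph-csi-rbd
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: windows2012-snap
  namespace: vm-windows-12
spec:
  volumeSnapshotClassName: csi-rbd-snapclass
  source:
    persistentVolumeClaimName: windows2012   # the uploaded golden-image PVC
---
# OS disk cloned from the snapshot (seconds-level delivery)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: win12-snap2-os
  namespace: vm-windows-12
spec:
  storageClassName: csi-rbd-sc
  dataSource:
    name: windows2012-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 129Gi
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: win12-snap2
  namespace: vm-windows-12
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 16
        resources:
          requests:
            memory: 16Gi
        devices:
          disks:
            - name: osdisk
              disk:
                bus: sata
      volumes:
        - name: osdisk
          persistentVolumeClaim:
            claimName: win12-snap2-os
```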

Apply the manifests:

kubectl apply -f win12-snap2-data.yaml
kubectl apply -f win12-snap2-os.yaml
kubectl apply -f win12-snap2.yaml

3.10 VNC Deployment

wget https://github.com/wavezhang/virtVNC/raw/master/k8s/virtvnc.yaml
kubectl apply -f virtvnc.yaml

Expose the VNC service via an Ingress (example YAML shown in the source).
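A sketch of such an Ingress, assuming the `virtvnc` Service from virtvnc.yaml lives in the `kubevirt` namespace, an nginx ingress controller is installed, and a placeholder hostname; verify the Service name, namespace, and port against your deployed virtvnc.yaml:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: virtvnc
  namespace: kubevirt
spec:
  ingressClassName: nginx
  rules:
    - host: vnc.example.internal      # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: virtvnc
                port:
                  number: 8001
```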

4. Future Outlook

Continue deepening Kubernetes expertise, promote KubeVirt in atypical containerization scenarios, and move toward a unified virtualization stack for full platform containerization.

Written by HomeTech