
Integrating Virtualization and Containerization with KubeVirt: Practices and Architecture at Yiche

This article describes Yiche's cloud‑native transformation toward unified management of virtual machines and containers: the evaluation of Kata Containers, Virtlet, and KubeVirt; the reasons for selecting KubeVirt; its architecture and storage integration; custom enhancements and deployment challenges; and future work on a hybrid infrastructure.

Yiche Technology

As cloud computing has matured, enterprises demand greater deployment efficiency and flexibility, prompting Yiche to move to cloud‑native solutions. At the same time, legacy services that cannot be containerized must still be supported, so virtual machines and containers have to coexist.

The need for unified management stems from several problems with the early VMware vSphere setup: closed source code, no distributed file system, slow VM provisioning, and separately maintained host images, all of which raise operational costs. Meanwhile, most applications had already moved to containers, making a standalone virtualization platform undesirable.

Yiche evaluated three mainstream solutions—Kata Containers, Virtlet, and KubeVirt. Kata was rejected because it does not support Windows workloads; Virtlet suffered from limited CRI compatibility, lack of CSI support, and low community activity; KubeVirt, an open‑source Red Hat project, offers Linux/Windows VM support, vm‑import‑operator for VMware migration, active community, and flexible storage options, and was therefore selected.

KubeVirt's architecture consists of several components:

- virt-api: serves VM-related API requests (e.g. kubectl virt / virtctl) and communicates with the Kubernetes API server;
- VirtualMachineInstance (VMI): the custom resource that defines a VM's specification;
- virt-controller: watches VMI custom resources and manages the Pods associated with them;
- virt-launcher: runs inside each VM Pod, providing the cgroups and namespaces for the VM process;
- virt-handler: runs as a DaemonSet on every node, synchronizing VMI state with libvirtd and handling storage and network plugins.
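To make the flow concrete, here is a minimal VMI sketch (names and sizes are hypothetical, not Yiche's actual manifests) that virt-controller would turn into a virt-launcher Pod:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: demo-vmi                     # hypothetical name
spec:
  domain:
    cpu:
      cores: 2
    resources:
      requests:
        memory: 2Gi
    devices:
      disks:
        - name: rootdisk
          disk:
            bus: virtio
  volumes:
    - name: rootdisk
      persistentVolumeClaim:
        claimName: demo-rootdisk-pvc # hypothetical PVC holding the root image
```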

For storage, KubeVirt uses PVCs combined with CDI (Containerized Data Importer). CDI includes a Deployment that runs a controller scanning PVCs with specific annotations, creates temporary importer Pods (Golden Pods) to transfer image data into a Persistent Volume (Golden PV), and supports dynamic provisioners and secret handling. The workflow involves creating a CDI Deployment, optional secrets in the “golden” namespace, and PVCs that trigger import Pods.
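As a sketch of the import trigger (endpoint URL and names are placeholders), a PVC carrying CDI's import annotation causes the controller to launch an importer Pod that writes the image into the bound PV:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: golden-centos-pvc            # hypothetical name
  annotations:
    # CDI's controller watches for this annotation and spawns an importer Pod
    cdi.kubevirt.io/storage.import.endpoint: "https://example.com/images/centos.qcow2"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```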

The deployment environment comprises three Kubernetes master nodes and multiple worker nodes, a Ceph cluster with dual‑network architecture for data and cluster traffic, and the kube‑ovn CNI plugin providing direct VLAN connectivity to the physical network. External‑snapshotter 4.1.0 is deployed to enable snapshot and CDI smart‑clone capabilities, and an internal ticketing system is built using the Kubernetes Go SDK.
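For example, enabling snapshots for a Ceph RBD storage class requires a VolumeSnapshotClass; the following sketch assumes a typical ceph-csi deployment (driver, secret, and namespace names are assumptions):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-rbd-snapclass            # hypothetical name
driver: rbd.csi.ceph.com             # ceph-csi RBD driver
deletionPolicy: Delete
parameters:
  clusterID: <ceph-cluster-id>       # placeholder for the Ceph cluster ID
  csi.storage.k8s.io/snapshotter-secret-name: csi-rbd-secret
  csi.storage.k8s.io/snapshotter-secret-namespace: ceph-csi
```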

Custom enhancements to KubeVirt include a fast smart‑clone implementation that reduces VM start‑up time to under 10 seconds, a bug fix for virt‑handler startup when limiting VM count per node, node‑selection via annotations for vm‑import, and a temporary workaround to force Block volume type for vm‑import resize issues. The following ConfigMap illustrates one of the changes:

apiVersion: v1
data:
  importWithoutTemplate: "true" # add this configuration
kind: ConfigMap
metadata:
  name: vm-import-controller-config
  namespace: kubevirt-hyperconverged
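For context, the ConfigMap above tunes the vm-import-operator, which consumes VirtualMachineImport resources. A sketch of such a resource for a VMware source follows (secret and VM names are placeholders):

```yaml
apiVersion: v2v.kubevirt.io/v1beta1
kind: VirtualMachineImport
metadata:
  name: import-legacy-vm             # hypothetical name
  namespace: kubevirt-hyperconverged
spec:
  providerCredentialsSecret:
    name: vmware-credentials         # secret holding vCenter endpoint and credentials
  source:
    vmware:
      vm:
        name: legacy-app-01          # VM name in vSphere
  targetVmName: legacy-app-01
```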

Operational challenges encountered include the requirement of Kubernetes ≥ 1.20 for GA snapshot support, incompatibility between Ceph CSI images and the deployed Ceph version, rbd‑plugin duplicate operation errors, strict conditions for CDI smart‑clone (matching storage classes, snapshot classes, and volume types), and limitations of vm‑import for certain storage configurations.
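To illustrate the smart‑clone constraint: a CDI DataVolume that clones an existing PVC only takes the snapshot path when source and target share the storage class, a matching snapshot class, and the same volume mode. A sketch (all names are placeholders):

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: cloned-disk                  # hypothetical name
spec:
  source:
    pvc:
      namespace: default
      name: golden-centos-pvc        # source PVC to clone
  pvc:
    accessModes:
      - ReadWriteOnce
    volumeMode: Block                # must match the source for smart-clone
    storageClassName: ceph-rbd       # must match the source's storage class
    resources:
      requests:
        storage: 20Gi
```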

Future work focuses on developing a dedicated KubeVirt management system, refining monitoring, and integrating CI/CD pipelines with the existing container cloud platform.

Reference: https://kubevirt.io/

Tags: cloud-native, Kubernetes, containerization, storage, virtualization, KubeVirt
Written by

Yiche Technology

Official account of Yiche Technology, regularly sharing the team's technical practices and insights.
