
How to Secure KubeSphere Virtualization with Kasten K10: A Step‑by‑Step Guide

This article walks through using Kasten K10 to protect the KubeSphere Virtualization (KSV) platform, covering environment setup, project and VM creation, K10 installation, backup policy configuration, and restoration of virtual machines with QingStor object storage, demonstrating a complete cloud‑native data protection workflow.

Qingyun Technology Community

1. Background and Goal

When looking at the CNCF Landscape, the rich cloud‑native ecosystem feels like a Cambrian explosion of solutions that are lightweight, flexible, and easy to deliver, often built on Kubernetes and capable of integrating with many external systems.

Although containerization and Serverless are the current trends, many enterprise workloads still run on virtual machines (VMs). Projects such as Red Hat's KubeVirt and Mirantis's Virtlet make it possible to run VMs as containers, so developers can manage VMs alongside containers and Serverless workloads on a single platform.

This article explores how to use Kasten K10 to protect the QingCloud KubeSphere Virtualization (KSV) platform. KSV is a lightweight VM management platform derived from KubeSphere that meets enterprise-grade virtualization needs; our focus here is protecting its data with Kasten.

2. Kasten K10 and KubeSphere Virtualization (KSV)

Kasten K10 is Veeam's data management solution for Kubernetes, providing backup, restore, disaster recovery, and migration for cluster resources and persistent volumes.

KubeSphere Virtualization (KSV) is a lightweight VM management platform derived from KubeSphere, supporting single‑node and multi‑node deployments, with a front‑end web console and a back‑end built on KubeVirt, MinIO, Multus, Calico, and other cloud‑native components.

3. Validation Objectives

Log in to the KSV console and view/manage nodes and resource pools.

Create projects and VMs in KSV and write data.

Install Kasten K10 on the K3s cluster hosting KSV.

Use Kasten to protect the created projects and VMs.

Restore KSV VMs and data.

By invoking Ceph RBD CSI snapshots, K10 achieves backups and restores with low RPO/RTO, while QingStor object storage provides off-site disaster recovery and long-term retention.

4. Creating Projects and VMs in KSV

4.1 Log in to the KSV console

Open a web browser and navigate to the KSV web console IP and port.

After logging in, the dashboard shows cluster resources, virtual resources, IOPS, and throughput.

4.2 Create Project

Click Project, then Create, and enter the project name, alias, and description. A new namespace matching the project name (e.g., ksv4) is created.

Verify the namespace via CLI:

$ kubectl get ns
NAME                     STATUS   AGE
cdi                      Active   4d19h
default                  Active   4d21h
kasten-io                Active   45h
ksv2                     Active   4h50m
ksv3                     Active   4h33m
... (other namespaces omitted)

4.3 Create VM

Click VM to start VM creation. Choose a system image (e.g., Ubuntu), configure resources, network, authentication, hostname, and replica count.
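Under the hood, each VM created in the KSV console becomes a KubeVirt VirtualMachine resource in the project's namespace. The following is a hedged sketch of what such a manifest might look like; the VM name, CPU/memory sizing, and PVC name are illustrative assumptions, not values generated by this cluster:

```yaml
# Hypothetical KubeVirt manifest approximating a console-created KSV VM.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: ubuntu-demo          # illustrative; KSV generates IDs like i-cyh3xmet
  namespace: ksv3
spec:
  running: true              # start the VM immediately
  template:
    metadata:
      labels:
        kubevirt.io/vm: ubuntu-demo
    spec:
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 4Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: tpl-vol-example   # illustrative; KSV binds its own PVCs
```

Because the VM is just a namespaced Kubernetes object backed by PVCs, it falls naturally within the scope of a namespace-level Kasten backup.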

After creation, verify VM status with CLI:

# View running VMs
$ kubectl get vmi -n ksv3
NAME        AGE    PHASE    IP    NODENAME
i-cyh3xmet  4h26m  Running        ksvnode1

# View defined VMs
$ kubectl get vm -n ksv3
NAME        AGE    VOLUME
i-cyh3xmet  4h42m

# View the pod (VM runs as a pod)
$ kubectl get po -n ksv3
NAME                                 READY   STATUS    RESTARTS   AGE
virt-launcher-i-cyh3xmet-xtvtv        1/1     Running   0          4h27m

# View PVCs to locate disks
$ kubectl get pvc -n ksv3
NAME               STATUS   VOLUME                                 CAPACITY   ACCESS MODES   STORAGECLASS   AGE
img-2yndvxw0       Bound    pvc-6ae3bbac-5b74-4559-a863-19cebfa884ca   24Gi       RWO            rook-ceph-block   4h46m
tpl-vol-z6tlpkzl   Bound    pvc-6e35ad24-1251-44ee-ae17-8bec73945a56   32Gi       RWO            rook-ceph-block   4h46m

4.4 Simulate Data Inside VM

Open VNC or terminal and write a file:

root@ubuntu1:~# touch mars
root@ubuntu1:~# echo "mars 9:40" > mars
root@ubuntu1:~# cat mars
mars 9:40

5. Install Kasten K10 on the KSV Cluster

5.1 Verify Kubernetes and Container Runtime Versions

The KSV cluster runs K3s v1.21.6+ with containerd 1.4.11‑k3s1, which is supported by Kasten.

$ kubectl get node -o wide
NAME       STATUS   ROLES                         AGE     VERSION        INTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
ksvnode1   Ready    control-plane,master,worker   4d22h   v1.21.6+k3s1   172.16.10.2   Ubuntu 20.04.3 LTS   5.4.0-91-generic   containerd://1.4.11-k3s1

5.2 Verify StorageClass and SnapshotClass

# Ceph RBD StorageClass
$ kubectl get sc
NAME                        PROVISIONER                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block (default)   rook-ceph.rbd.csi.ceph.com    Delete          Immediate           true                   4d22h

# Ceph RBD SnapshotClass
$ kubectl get volumesnapshotclass
NAME                      DRIVER                       DELETIONPOLICY   AGE
csi-rbdplugin-snapclass   rook-ceph.rbd.csi.ceph.com   Delete           4d22h

# Annotate SnapshotClass for K10
kubectl annotate volumesnapshotclass csi-rbdplugin-snapclass \
    k10.kasten.io/is-snapshot-class="true"

# Verify annotation
$ kubectl get volumesnapshotclass csi-rbdplugin-snapclass -o yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  annotations:
    k10.kasten.io/is-snapshot-class: "true"
    snapshot.storage.kubernetes.io/is-default-class: "true"
  name: csi-rbdplugin-snapclass
...

# K10‑created snapshot class
$ kubectl get volumesnapshotclass
NAME                                DRIVER                       DELETIONPOLICY   AGE
csi-rbdplugin-snapclass             rook-ceph.rbd.csi.ceph.com   Delete           4d22h
k10-clone-csi-rbdplugin-snapclass   rook-ceph.rbd.csi.ceph.com   Retain           43h

5.3 Install Kasten K10

Fetch Helm chart

helm repo add kasten https://charts.kasten.io/
helm repo update
helm fetch kasten/k10 --version=4.5.4

Create namespace

kubectl create ns kasten-io

Install K10

helm install k10 k10-4.5.4.tgz --namespace kasten-io \
  --set global.airgapped.repository=ccr.ccs.tencentyun.com/kasten-k10 \
  --set auth.tokenAuth.enabled=true \
  --set metering.mode=airgap \
  --set global.persistence.storageClass=rook-ceph-block

Check installation

$ kubectl get po -n kasten-io
NAME                                 READY   STATUS    RESTARTS   AGE
aggregatedapis-svc-57d7c44b9d-gwmtq   1/1     Running   3          46h
... (other pods omitted)

5.4 Expose Kasten UI

# Expose gateway as NodePort
kubectl expose service gateway -n kasten-io --type=NodePort --name=gateway-nodeport

# Get NodePort and node IP
kubectl get svc -n kasten-io
# (output shows gateway-nodeport with port 8000:32387/TCP)

kubectl get node -o wide
# (output shows internal IP 172.16.10.2)

# Access UI
http://<external‑ip>:32387/k10/#/

5.5 Retrieve Token for UI Login

sa_secret=$(kubectl get serviceaccount k10-k10 -o jsonpath="{.secrets[0].name}" --namespace kasten-io) && \
  kubectl get secret $sa_secret --namespace kasten-io -o jsonpath="{.data.token}{'\n'}" | base64 --decode

6. Configure QingStor Object Storage as Backup Repository

Set up an S3‑compatible repository in Kasten Settings → Locations → New Profile, using QingStor credentials. This provides off‑site backup following the 3‑2‑1‑1‑0 rule.
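Besides the UI, location profiles can be managed declaratively as Kasten Profile custom resources. The sketch below assumes an S3-compatible QingStor bucket; the profile name, bucket, endpoint, region, and credential Secret are all illustrative assumptions, not values from this environment:

```yaml
# Hedged sketch of a K10 location profile pointing at QingStor (S3-compatible).
apiVersion: config.kio.kasten.io/v1alpha1
kind: Profile
metadata:
  name: qingstor-profile        # illustrative name
  namespace: kasten-io
spec:
  type: Location
  locationSpec:
    type: ObjectStore
    objectStore:
      objectStoreType: S3
      name: k10-backups                          # assumed bucket name
      endpoint: https://s3.pek3b.qingstor.com    # assumed QingStor endpoint
      region: pek3b                              # assumed region
    credential:
      secretType: AwsAccessKey
      secret:
        apiVersion: v1
        kind: Secret
        name: k10-qingstor-secret   # Secret holding access/secret keys (assumed)
        namespace: kasten-io
```

The referenced Secret would hold the QingStor access key and secret key in the AWS-style key fields that K10 expects for S3-compatible stores.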

7. Backup and Restore with Kasten

7.1 Application Discovery

Kasten automatically discovers applications in the cluster and lists them under Application. Unprotected apps show a Create Policy button.

7.2 Create Backup Policy

Click Create Policy and configure a policy that creates local snapshots and copies data to QingStor for long-term retention. Elements such as VM images can be excluded.
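A policy built in the UI corresponds to a Kasten Policy custom resource, so the same configuration can be version-controlled. This is a hedged sketch only; the policy name, schedule, retention counts, target namespace label, and the referenced profile name are illustrative assumptions:

```yaml
# Hedged sketch of a K10 policy: daily snapshot plus export to object storage.
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: ksv3-backup            # illustrative name
  namespace: kasten-io
spec:
  frequency: "@daily"          # snapshot once a day
  retention:
    daily: 7                   # keep 7 daily restore points (assumed)
  actions:
    - action: backup           # local CSI snapshot
    - action: export           # copy to the S3-compatible location profile
      exportParameters:
        frequency: "@daily"
        profile:
          name: qingstor-profile   # assumed profile name
          namespace: kasten-io
        exportData:
          enabled: true
  selector:
    matchLabels:
      k10.kasten.io/appNamespace: ksv3   # protect the ksv3 project namespace
```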

Run the policy once.

The dashboard shows the backup completed.

7.3 Restore

In the Applications view, click Restore, select a restore point, and create a new namespace (e.g., kvs3) for the restored resources.

The restored VM appears in the KSV console and starts normally, confirming successful data restoration.

8. Conclusion

As the cloud‑native ecosystem matures, KubeSphere has evolved into a multi‑core platform. KSV leverages Kubernetes orchestration to meet virtualization needs, while Kasten K10 provides reliable data management, enabling backup, disaster recovery, migration, and DevOps for cloud‑native virtualized workloads.

9. References

KubeSphere Virtualization Introduction: https://kubesphere.cloud/docs/ksv/management/web-console-introduction/
Kasten Support List: https://docs.kasten.io/latest/install/requirements.html#prerequisites
OpenShift Virtualization: https://cloud.redhat.com/learn/topics/virtualization/
kubevirt GitHub: https://github.com/kubevirt
Mirantis – virtlet, run VMs as Kubernetes pods: https://www.mirantis.com/blog/virtlet-run-vms-as-kubernetes-pods/
virtlet GitHub: https://github.com/Mirantis/virtlet
Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.
