
Deploying LiteIO Cloud‑Native Block Storage Service on Kubernetes

This guide explains how to set up the high‑performance, cloud‑native LiteIO block storage service on a Kubernetes cluster, covering prerequisite VM preparation, kernel upgrade, Docker and Kubernetes installation, CRI configuration, LiteIO component deployment for both LVM and SPDK engines, and verification of Pods and PVCs.


LiteIO is a high‑performance, cloud‑native block storage service that supports SPDK and LVM storage engines and is designed for hyper‑converged Kubernetes environments.

Basic concepts include StoragePool (a per‑node storage pool), AntstorVolume (a volume allocated from a StoragePool), AntstorSnapshot, AntstorMigration, AntstorDataControl, and AntstorVolumeGroup.
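To illustrate how these objects relate, a minimal AntstorVolume manifest might look like the following. This is a hypothetical sketch only: the API group, version, and field names below are assumptions, not the project's exact CRD schema, so consult the CRDs shipped under hack/deploy/base for the real spec.

```yaml
# Hypothetical sketch: an AntstorVolume is carved out of a node-local
# StoragePool. Field names are illustrative assumptions; see the CRDs
# in hack/deploy/base for the actual schema.
apiVersion: volume.antstor.alipay.com/v1   # assumed group/version
kind: AntstorVolume
metadata:
  name: demo-volume
  namespace: obnvmf
spec:
  size: 1Gi              # capacity requested from the pool
  targetNodeId: node-1   # node whose StoragePool backs the volume
```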

Prerequisites: an x86 VM with root access (e.g., CentOS 7 on AWS Lightsail, Alibaba Cloud, or a local Linux VM).

VM preparation involves enabling the ELRepo repository, installing a newer long‑term support kernel, and setting it as the default:

# Enable the ELRepo repository
sudo rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
sudo rpm -Uvh https://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

# List available kernels
sudo yum list available --disablerepo='*' --enablerepo=elrepo-kernel

# Install the latest LTS kernel
sudo yum -y --enablerepo=elrepo-kernel install kernel-lt

# Set the default kernel and rebuild GRUB
sudo grub2-set-default "CentOS Linux (5.4.261-1.el7.elrepo.x86_64) 7 (Core)"
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo reboot

Install Docker:

sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo systemctl start docker

Install Kubernetes components (kubelet, kubeadm, kubectl) and configure SELinux:

# Add the Kubernetes yum repository (standard repo configuration from the
# official Kubernetes installation docs)
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

# Set SELinux to permissive mode
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Install and enable the components
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet

Configure containerd CRI by commenting out disabled_plugins = ["cri"] in /etc/containerd/config.toml and restarting containerd:

# comment out the line: `disabled_plugins = ["cri"]`
sudo vi /etc/containerd/config.toml
sudo systemctl restart containerd
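If you prefer a non-interactive edit, the same change can be scripted with sed. The sketch below demonstrates the substitution on a throwaway file first, so you can verify the pattern before pointing it (with sudo) at /etc/containerd/config.toml:

```shell
# Demonstrate the comment-out substitution on a temporary file; once
# verified, run the same sed against /etc/containerd/config.toml and
# restart containerd.
tmp=$(mktemp)
echo 'disabled_plugins = ["cri"]' > "$tmp"
sed -i 's/^disabled_plugins = \["cri"\]/#&/' "$tmp"
cat "$tmp"   # prints: #disabled_plugins = ["cri"]
rm -f "$tmp"
```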

Initialize the cluster (replace the IP with your VM’s address):

sudo kubeadm init --ignore-preflight-errors Swap --apiserver-advertise-address=172.26.10.67 --pod-network-cidr=10.244.0.0/16

# Copy the kubeconfig to $HOME/.kube/config (the standard post-init steps
# printed by kubeadm)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install the Flannel CNI plugin
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/v0.22.0/Documentation/kube-flannel.yml

If using the SPDK engine, enable hugepages before starting kubelet:

# set hugepages
sudo bash -c "echo 256 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages"
sudo systemctl restart kubelet

# confirm the node now advertises 2 MiB hugepages
kubectl get nodes -oyaml | grep hugepages-2Mi
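For sizing, note that this reservation is modest: 256 hugepages at 2048 kB each works out to 512 MiB, which you can sanity-check with shell arithmetic:

```shell
# 256 hugepages x 2048 kB each = 512 MiB reserved for SPDK
pages=256
page_kb=2048
total_mib=$(( pages * page_kb / 1024 ))
echo "${total_mib} MiB"   # prints: 512 MiB
```

Increase nr_hugepages if your SPDK workload needs more buffer memory.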

Simple deployment with Kind (single‑node cluster):

# Install kind
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/bin/kind

# Create a cluster
kind create cluster --image kindest/node:v1.24.6

# Verify
kubectl get nodes

Deploy LiteIO components by applying the manifests in the repository:

kubectl create -f hack/deploy/base

The base directory contains namespace, secret, webhook, RBAC, storage‑class, CRDs, disk‑operator, csi‑controller, disk‑agent, and csi‑node manifests.

LVM engine deployment:

kubectl create -f hack/deploy/lvm

The LVM configmap defines a storage pool named test‑vg with a 1 GiB PV located at /local-storage/pv01 .

Create a test Pod and PVC:

kubectl create -f hack/deploy/example/pod.yaml
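For reference, the PVC inside that manifest follows the usual dynamic-provisioning pattern. A hedged sketch is shown below; the storageClassName is a placeholder, so substitute the StorageClass actually installed by hack/deploy/base.

```yaml
# Sketch of a PVC served by the LiteIO CSI driver. The storageClassName
# here is a placeholder, not the project's real class name; use the
# StorageClass created by hack/deploy/base.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  namespace: obnvmf
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: antstor-class   # placeholder name
```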

Verify that the PVC is bound, the AntstorVolume is ready, and the Pod can see the block device mounted at /test-data :

# verify the PVC is bound
kubectl -n obnvmf get pvc

# check the AntstorVolume
kubectl -n obnvmf get antstorvolume

# list Pods
kubectl -n obnvmf get pods

# exec into the Pod and check the mount
kubectl -n obnvmf exec -it test-pod -- df -h

Delete the PVC and Pod when finished:

kubectl delete -f hack/deploy/example/pod.yaml
kubectl -n obnvmf get antstorvolume   # should show no resources

SPDK engine deployment:

kubectl create -f hack/deploy/aio-lvs

The aio‑lvs configmap creates an AIO bdev of 1 GiB at /local-storage/aio-lvs and a corresponding LVS storage pool.
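The backing file for such an AIO bdev is nothing exotic: conceptually it is just a sparse 1 GiB file. The sketch below demonstrates the idea on a temporary path (the deployment itself uses /local-storage/aio-lvs):

```shell
# Create a sparse 1 GiB file like the AIO bdev's backing file,
# demonstrated on a temp path; the deployment uses /local-storage/aio-lvs.
backing=$(mktemp)
truncate -s 1G "$backing"
stat -c %s "$backing"   # prints: 1073741824
rm -f "$backing"
```

Because the file is sparse, it consumes almost no disk space until blocks are actually written.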

After deployment, verify the storage pool status and create another test Pod as above.

Finally, readers are invited to join the LiteIO open-source community through the project's GitHub repository.

Tags: kubernetes, cloud-native storage, CSI, SPDK, LVM, LiteIO, Kind
Written by

AntData

Ant Data leverages Ant Group's leading technological innovation in big data, databases, and multimedia, with years of industry practice. Through long-term technology planning and continuous innovation, we strive to build world-class data technology and products.
