
Dynamic Local Disk Allocation & Resource Overcommit in KubeVirt using OpenEBS‑LVM

This guide explains how to replace KubeVirt's local‑storage with OpenEBS‑LVM for dynamic PV allocation, configure CPU/memory overcommit ratios, perform hot‑plug upgrades, expand disks online, and set node affinity and fixed IPs, providing full YAML examples and reference links.


In the previous articles we covered KubeVirt installation, basic usage, and migration from oVirt, leaving two outstanding tasks: dynamic local-disk allocation and fixed IP assignment. This article resolves the first by replacing local-storage with OpenEBS-LVM, which adds capacity limits, dynamic PV provisioning, and resize capabilities.

Enable Overcommit Ratios

When creating a StorageClass, enable volume expansion (allowVolumeExpansion: true) and set the binding mode to WaitForFirstConsumer.
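As a sketch, such a StorageClass might look like the following; the volume-group name "lvmvg" and the parameter values are assumptions to adjust for your environment:

```yaml
# Assumed OpenEBS LVM LocalPV StorageClass; the volume group "lvmvg" is a placeholder
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvm
provisioner: local.csi.openebs.io
allowVolumeExpansion: true               # required for online disk expansion
volumeBindingMode: WaitForFirstConsumer  # bind only once the Pod is scheduled
parameters:
  storage: "lvm"
  volgroup: "lvmvg"                      # LVM volume group present on each node
```

WaitForFirstConsumer matters for local volumes because the PV must be carved out of the volume group on the node where the VM Pod is actually scheduled.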

Configure the KubeVirt overcommit settings by editing the KubeVirt custom resource:

<code>kubectl -n kubevirt edit kubevirts.kubevirt.io kubevirt</code>
<code>spec:
  configuration:
    developerConfiguration:
      cpuAllocationRatio: 2   # CPU overcommit 2x
      memoryOvercommit: 200   # Memory overcommit 2x
</code>

Example: set a VM spec to 8 CPU cores and 16 GiB memory without declaring a request. After creation, the virt-launcher Pod's request values are half of the specified amounts, confirming the overcommit behavior.

<code>spec:
  domain:
    cpu:
      cores: 8
    memory:
      guest: 16Gi
</code>

After the VM starts, the resources observed inside the guest match the expected 8 cores / 16 GiB.
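The expected Pod requests can be sanity-checked with simple shell arithmetic; the 2x ratios below mirror the configuration shown earlier, and virt-launcher overhead is ignored:

```shell
# Expected virt-launcher Pod requests under the 2x overcommit ratios configured above.
# Note: real memory requests also include some virt-launcher overhead, ignored here.
cores=8; cpu_ratio=2
mem_gi=16; mem_overcommit=200   # percent

echo "cpu request: $((cores / cpu_ratio)) cores"            # → cpu request: 4 cores
echo "memory request: $((mem_gi * 100 / mem_overcommit))Gi" # → memory request: 8Gi
```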

If a request value is declared explicitly in the VM spec, it takes precedence over the overcommit calculation.

CPU/Memory Upgrade

Modify the CPU and memory values in the VM spec and restart the VM:

<code>virtctl restart &lt;vmi&gt;</code>

KubeVirt also supports hot‑plug upgrades (CPU/Memory) without downtime, subject to kernel version and storage mode constraints.
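As a sketch of what opting into hotplug looks like, following the KubeVirt hotplug guides (the headroom values below are arbitrary examples, and depending on your KubeVirt version additional feature gates may be required):

```yaml
# KubeVirt CR: roll out spec changes to running VMs instead of requiring a restart
spec:
  configuration:
    vmRolloutStrategy: LiveUpdate
---
# VM spec: declare headroom so CPU/memory can grow while the VM is running
spec:
  template:
    spec:
      domain:
        cpu:
          sockets: 2
          maxSockets: 8      # hotplug ceiling for CPU sockets (example value)
        memory:
          guest: 16Gi
          maxGuest: 64Gi     # hotplug ceiling for guest memory (example value)
```

With this in place, editing `sockets` or `guest` on a running VM triggers a live update rather than a restart.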

Online Disk Expansion

Enable the ExpandDisks feature gate in the KubeVirt configuration:

<code>spec:
  configuration:
    developerConfiguration:
      featureGates:
        - ExpandDisks
</code>

Check the current data‑disk size (e.g., 500 Gi) and expand it to 600 Gi by editing the associated PVC size; OpenEBS‑LVM will automatically resize the volume.

<code>spec:
  resources:
    requests:
      storage: "600Gi"
</code>
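Instead of editing the PVC interactively, the same change can be made with kubectl patch. The PVC name demo-1-data matches the data disk in the full example at the end of this article; the command below is only printed, not executed, so the snippet is safe to run anywhere:

```shell
# Print (rather than run) the patch command so this snippet is cluster-independent.
# "demo-1-data" is the PVC created by the dataVolumeTemplates example; adjust as needed.
pvc=demo-1-data
patch='{"spec":{"resources":{"requests":{"storage":"600Gi"}}}}'
echo "kubectl patch pvc $pvc --type merge -p '$patch'"
```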

After expanding the PVC, grow the filesystem from inside the VM guest:

<code>resize2fs /dev/vdb    # ext4 filesystem; the data disk appears as vdb inside the guest
# For an XFS filesystem, grow via the mount point instead: xfs_growfs &lt;mount-point&gt;</code>

Fixed IP and Node Affinity

While fixed IP is not yet implemented, node affinity can be used to keep a VM on a specific node to avoid IP changes: annotate the VMI with the desired pod IP and set nodeSelector to the target node hostname.

<code># View node block allocations
kubectl get blockaffinities.crd.projectcalico.org
# Check that a candidate IP is unused
calicoctl ipam show --ip=&lt;ip-addr&gt;
# Pod IP annotation on the VMI template
cni.projectcalico.org/ipAddrs: '["pod-ip"]'
# Node selector
nodeSelector:
  kubernetes.io/hostname: "nodename"
</code>

Complete VM YAML Example

<code>apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: <vmi-name>
  name: <vmi-name>
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: <vmi-name>
      annotations:
        cni.projectcalico.org/ipAddrs: '["pod-ip"]'
    spec:
      nodeSelector:
        kubernetes.io/hostname: "nodename"
      domain:
        cpu:
          cores: 8
        memory:
          guest: 16Gi
        devices:
          disks:
          - disk:
              bus: virtio
            name: datavolumedisk1-sys
          - disk:
              bus: virtio
            name: datavolumedisk1-data
          interfaces:
          - name: default
            bridge: {}
      networks:
      - name: default
        pod: {}
      volumes:
      - dataVolume:
          name: demo-1-sys
        name: datavolumedisk1-sys
      - dataVolume:
          name: demo-1-data
        name: datavolumedisk1-data
  dataVolumeTemplates:
  - metadata:
      name: demo-1-sys
    spec:
      storage:
        storageClassName: openebs-lvm
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 80Gi
      source:
        http:
          url: <img_url>
  - metadata:
      name: demo-1-data
    spec:
      storage:
        storageClassName: openebs-lvm
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 500Gi
      source:
        blank: {}
</code>

Reference Links

Node overcommit: https://kubevirt.io/user-guide/compute/node_overcommit/#node-overcommit

CPU Hotplug: https://kubevirt.io/user-guide/compute/cpu_hotplug/#cpu-hotplug

Memory Hotplug: https://kubevirt.io/user-guide/compute/memory_hotplug/#memory-hotplug

Disk expansion: https://kubevirt.io/user-guide/storage/disks_and_volumes/#disk-expansion

Node assignment: https://kubevirt.io/user-guide/compute/node_assignment/#node-assignment

Tags: Kubernetes, LVM, Resource Overcommit, KubeVirt, OpenEBS, Dynamic Storage
Written by

Ops Development Stories

Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.
