Kubernetes 1.20 Brings Volume Snapshots to GA – What Changed and How to Use Them
Kubernetes v1.20 promotes the Volume Snapshot feature to GA. This article covers the default enablement, the new validation webhook, CSI driver support, deployment steps, usage patterns, and current limitations for creating and restoring snapshots in a Kubernetes cluster.
Kubernetes volume snapshot functionality reached GA in Kubernetes v1.20. Introduced as an Alpha in v1.12, it progressed through Alpha2, Beta, and now GA, bringing several enhancements and making the feature always enabled.
What is a volume snapshot?
Many storage systems (e.g., Google Cloud Persistent Disks, Amazon Elastic Block Store, and many on‑premises solutions) can create snapshots of persistent volumes. A volume snapshot is a point‑in‑time copy of a volume that can be used to pre‑populate a new volume or restore an existing volume to a previous state.
Why add volume snapshots to Kubernetes?
Kubernetes aims to provide an abstraction layer between distributed applications and the underlying cluster, allowing apps to be independent of specific cluster details. The Storage SIG identifies snapshot operations as a critical feature for many stateful workloads, such as databases that need a snapshot before performing operations.
By offering a standard way to trigger snapshot operations, Kubernetes enables portable snapshot usage across any Kubernetes environment without worrying about the underlying storage.
Snapshots also serve as a building block for advanced enterprise storage management, including application‑ or cluster‑level backup solutions.
What changed since the beta?
With the upgrade to GA, the feature is enabled by default and cannot be disabled. Numerous enhancements were made to raise the quality to production‑grade:
Snapshot API and client libraries moved to a separate Go module.
A snapshot validation webhook was added to enforce object validation (see the Volume Snapshot Validation Webhook KEP for details).
The controller now labels existing invalid snapshot objects so users can identify and delete them and correct their workflows; once the API storage version switches to v1, invalid objects will no longer be deletable.
Initial operational metrics were added to the snapshot controller.
More end‑to‑end tests on GCP, including hostPath‑based stress tests, were introduced to verify stability.
Aside from stricter validation, there is no functional difference between the v1beta1 and v1 APIs in this release. Both versions are served, but the stored API version remains v1beta1; future releases will switch storage to v1 and drop v1beta1 support.
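You can check which snapshot API versions a cluster serves and which version its objects are stored as; a quick sketch (output varies by cluster and installed CRD version):

```shell
# List the snapshot API group versions served by the API server
kubectl api-versions | grep snapshot.storage.k8s.io

# Show the version(s) used for storage in etcd for VolumeSnapshot objects
kubectl get crd volumesnapshots.snapshot.storage.k8s.io \
  -o jsonpath='{.status.storedVersions}{"\n"}'
```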
Which CSI drivers support volume snapshots?
Only CSI drivers support snapshots; in‑tree or FlexVolume drivers do not. Ensure that deployed CSI drivers implement the snapshot interface. Over 50 CSI drivers currently support snapshots, with GCE Persistent Disk CSI driver already GA‑qualified.
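To see which snapshot-capable drivers are wired up in a given cluster, you can list the VolumeSnapshotClass objects and the CSI driver each one maps to (a sketch; class names are cluster-specific):

```shell
# Each VolumeSnapshotClass names the CSI driver that will cut its snapshots
kubectl get volumesnapshotclass -o custom-columns=NAME:.metadata.name,DRIVER:.driver
```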
How to deploy volume snapshots?
The snapshot feature consists of:
Kubernetes Volume Snapshot CRDs
Volume snapshot controller
Snapshot validation webhook
CSI driver with CSI Snapshotter sidecar
It is strongly recommended that Kubernetes distributors bundle and deploy the controller, CRDs, and webhook as part of cluster management.
Warning: The snapshot validation webhook is essential for a smooth transition from v1beta1 to v1 APIs. Without it, invalid VolumeSnapshot objects can be created/updated, blocking their removal in future upgrades.
If your cluster lacks the proper components, you can install them manually; see the CSI Snapshotter documentation for details.
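As a rough sketch of a manual installation (the release tag v4.0.0 is an example; pick the external-snapshotter release that matches your cluster version):

```shell
# Fetch the external-snapshotter release matching your cluster (tag is an example)
git clone --branch v4.0.0 https://github.com/kubernetes-csi/external-snapshotter.git
cd external-snapshotter

# Install the VolumeSnapshot, VolumeSnapshotContent, and VolumeSnapshotClass CRDs
kubectl apply -f client/config/crd

# Deploy the common snapshot controller into the cluster
kubectl apply -f deploy/kubernetes/snapshot-controller
```

The CSI driver itself, with its snapshotter sidecar, is deployed separately according to that driver's own instructions.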
How to use volume snapshots?
Assuming all required components (including a CSI driver) are deployed, you can create a snapshot with the VolumeSnapshot API object, or restore a PVC by specifying a VolumeSnapshot as the data source.
Note: The Kubernetes Snapshot API does not provide any application‑consistent guarantees. You must ensure data consistency yourself (e.g., pause the application, freeze the filesystem) before taking a snapshot.
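For a filesystem-backed volume, one crude way to get consistent data is to freeze the filesystem around snapshot creation; a sketch with placeholder pod, container, and mount-path names:

```shell
# Flush dirty pages and block new writes on the volume's mount point
kubectl exec my-db-pod -c my-db-container -- fsfreeze --freeze /var/lib/data

# Create the snapshot while writes are blocked
kubectl apply -f snapshot.yaml

# Unfreeze once the storage system has cut the point-in-time copy
kubectl exec my-db-pod -c my-db-container -- fsfreeze --unfreeze /var/lib/data
```

In practice you would wait for the VolumeSnapshot to report a creation time (or readyToUse) before unfreezing, since the actual cut happens asynchronously.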
Dynamic snapshot provisioning
To provision snapshots dynamically, first create a VolumeSnapshotClass object:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: test-snapclass
driver: testdriver.csi.k8s.io
deletionPolicy: Delete
parameters:
  csi.storage.k8s.io/snapshotter-secret-name: mysecret
  csi.storage.k8s.io/snapshotter-secret-namespace: mysecretnamespace

Then create a VolumeSnapshot that references the class:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: test-snapshot
  namespace: ns1
spec:
  volumeSnapshotClassName: test-snapclass
  source:
    persistentVolumeClaimName: test-pvc

Importing an existing snapshot into Kubernetes
First manually create a VolumeSnapshotContent object that represents the external snapshot:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: test-content
spec:
  deletionPolicy: Delete
  driver: testdriver.csi.k8s.io
  source:
    snapshotHandle: 7bdd0de3-xxx
  volumeSnapshotRef:
    name: test-snapshot
    namespace: default

Then create a VolumeSnapshot that points to this content:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: test-snapshot
  namespace: default
spec:
  source:
    volumeSnapshotContentName: test-content

Pre-populating a volume from a snapshot
A bound VolumeSnapshot can be used as a data source for a new PVC in the same namespace, pre-populating the volume with snapshot data:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-restore
  namespace: demo-namespace
spec:
  storageClassName: test-storageclass
  dataSource:
    name: test-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

What are the limitations?
The GA implementation of Kubernetes volume snapshots has the following limitation:
Restoring an existing PVC to an earlier state represented by a snapshot is not supported; only new volumes can be created from a snapshot.
Other resources
The snapshot API and controller code repository is located at https://github.com/kubernetes-csi/external-snapshotter.
Additional documentation can be found at https://k8s.io/docs/concepts/storage/volume-snapshots and https://kubernetes-csi.github.io/docs/.
