How PouchContainer Volumes Provide Persistent Storage for Containers
This article explains PouchContainer's volume architecture and the supported volume types (local, tmpfs, and Ceph), shows how to create and manage volumes with CLI commands, compares volumes with bind mounts, and outlines future CSI integration for container storage.
1. Overview of PouchContainer Volume Architecture
PouchContainer is an open‑source, lightweight enterprise‑grade container engine from Alibaba that uses a layered image model similar to Docker. To persist data beyond a container’s lifecycle, PouchContainer introduces a volume mechanism that stores data on the host filesystem, independent of the container’s read‑write layer.
VolumeManager: entry point for all volume operations.
Core: implements the business logic for creating, removing, attaching, and detaching volumes.
Store: persists volume metadata locally in a BoltDB file (future support may include etcd).
Driver: abstracts the driver interface required by each volume backend.
Modules: concrete drivers such as local, tmpfs, volume plugin, and Ceph.
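The layering above can be sketched in a few lines of Python. This is an illustrative model only (PouchContainer itself is written in Go, and its real Store persists to a BoltDB file rather than a dict); the class and method names below mirror the components just described but are otherwise invented:

```python
class Driver:
    """Interface each volume backend must implement (the Driver layer)."""
    def create(self, name, options): raise NotImplementedError
    def remove(self, name): raise NotImplementedError
    def path(self, name): raise NotImplementedError

class LocalDriver(Driver):
    """Toy stand-in for the 'local' driver: volumes are host subdirectories."""
    root = "/var/lib/pouch/volume"
    def create(self, name, options):
        # A real driver would create the directory; we only compute its path.
        return f"{self.root}/{name}"
    def remove(self, name): pass
    def path(self, name): return f"{self.root}/{name}"

class Store:
    """In-memory metadata store (the real one persists to a BoltDB file)."""
    def __init__(self): self._meta = {}
    def put(self, name, meta): self._meta[name] = meta
    def delete(self, name): self._meta.pop(name, None)
    def get(self, name): return self._meta.get(name)

class Core:
    """Business logic: invokes the Driver, records metadata in the Store."""
    def __init__(self, drivers):
        self.drivers, self.store = drivers, Store()
    def create_volume(self, name, driver="local", options=None):
        mountpoint = self.drivers[driver].create(name, options or {})
        self.store.put(name, {"driver": driver, "mountpoint": mountpoint})
        return mountpoint
    def remove_volume(self, name):
        meta = self.store.get(name)
        if meta:
            self.drivers[meta["driver"]].remove(name)
            self.store.delete(name)

class VolumeManager:
    """Entry point that the engine's API layer would call."""
    def __init__(self): self.core = Core({"local": LocalDriver()})
    def create(self, name, **kw): return self.core.create_volume(name, **kw)
    def remove(self, name): self.core.remove_volume(name)
    def inspect(self, name): return self.core.store.get(name)
```

The point of the layering is that Core never touches a backend directly: adding a new volume type means registering another Driver implementation, with no change to the manager or metadata store.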
2. Supported Volume Types
2.1 Local Volume
The default driver, suitable for persisting stateful data. A local volume is created as a subdirectory under /var/lib/pouch/volume on the host.
mount: specify a host directory to use as the volume's backing location.
size: optionally limit the volume size (requires ext4 or xfs with appropriate kernel quota support).
pouch volume create --driver local --option mount=/mnt/mysql_data --name mysql_data
pouch volume create --driver local --option size=10G --name test_quota
2.2 Tmpfs Volume
Data lives only in memory (or swap when memory is insufficient). It disappears when the container stops, making it ideal for temporary or sensitive data.
pouch volume create --driver tmpfs --name tmpfs_test
2.3 Ceph Volume
Stores data in a Ceph RBD cluster, enabling cross‑node migration. Currently unavailable to external users; it relies on Alibaba’s internal storage controller that bridges Ceph, Pangu, NAS, etc.
2.4 Volume Plugin
Provides a generic extension mechanism compatible with Docker’s volume‑plugin protocol. Implementations expose a web server that handles POST requests for standard driver operations.
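As an illustrative sketch of such a plugin server, the following Python program answers the `/VolumeDriver.*` POSTs listed below with JSON responses in Docker's `{"Err": ""}` convention. The in-memory volume table and the `/mnt/plugin/<name>` mountpoint scheme are invented here; a real plugin would provision actual storage:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

volumes = {}  # name -> mountpoint; stand-in for real provisioned storage

class VolumePluginHandler(BaseHTTPRequestHandler):
    """Dispatches Docker-style /VolumeDriver.* POST requests."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        req = json.loads(self.rfile.read(length) or b"{}")
        name = req.get("Name", "")
        if self.path == "/VolumeDriver.Create":
            volumes[name] = "/mnt/plugin/" + name  # illustrative path scheme
            resp = {"Err": ""}
        elif self.path == "/VolumeDriver.Remove":
            volumes.pop(name, None)
            resp = {"Err": ""}
        elif self.path == "/VolumeDriver.Path":
            resp = {"Mountpoint": volumes.get(name, ""), "Err": ""}
        elif self.path == "/VolumeDriver.List":
            resp = {"Volumes": [{"Name": n, "Mountpoint": p}
                                for n, p in volumes.items()], "Err": ""}
        else:
            resp = {"Err": "unsupported endpoint: " + self.path}
        body = json.dumps(resp).encode()
        self.send_response(200)
        self.send_header("Content-Type",
                         "application/vnd.docker.plugins.v1+json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the sketch quiet
        pass

def start_plugin_server():
    """Serve the plugin on an ephemeral localhost port in a daemon thread."""
    server = HTTPServer(("127.0.0.1", 0), VolumePluginHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Because the protocol is plain HTTP plus JSON, a plugin can be written in any language; the engine only needs to know the socket it listens on.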
/VolumeDriver.Create // Volume creation service
/VolumeDriver.Remove // Volume deletion service
/VolumeDriver.Mount // Volume mount service
/VolumeDriver.Path // Retrieve volume path
/VolumeDriver.Unmount // Volume unmount service
/VolumeDriver.Get // Get volume details
/VolumeDriver.List // List volumes
/VolumeDriver.Capabilities // Report driver capabilities
3. Bind Mounts vs. Volumes
PouchContainer also supports bind mounts, which directly mount a host directory into a container.
pouch run -d -t -v /hostpath/data:/containerpath/data:ro ubuntu sh
Compared with bind mounts, volumes offer several advantages:
Simpler backup and lifecycle management.
Dedicated CLI and API for volume operations.
Secure sharing between containers.
Plugin mechanism for third‑party storage integration.
4. Future Development
The Container Storage Interface (CSI) v0.2 has already been released. PouchContainer plans to add a generic driver that can interface with any storage system implementing the CSI specification.
5. Summary
PouchContainer’s volume subsystem addresses container data persistence by decoupling storage from the container’s writable layer. It currently supports local, tmpfs, and Ceph drivers, and can extend functionality through the volume‑plugin mechanism, with future CSI integration on the roadmap.
Alibaba Cloud Native
