
What Is Cloud‑Native Storage and How QingStor NeonIO Solves Its Challenges?

This article explains what cloud‑native storage is and why modern containerized workloads need it, outlines key challenges such as usability, performance, high availability, and multi‑cloud management, compares popular solutions such as OpenEBS and Rook‑Ceph, and details QingStor NeonIO's architecture, features, performance results, and typical use cases.

Storage systems are a core part of the infrastructure that supports business applications and have continuously evolved from physical servers to virtualized environments and now large‑scale cloud deployments.

What Is Cloud‑Native Storage?

Cloud‑native storage refers to storage solutions that meet the requirements of cloud‑native applications, which emphasize containerization, service mesh, declarative APIs, elastic scaling, automated DevOps, fault tolerance, and platform independence.

Why Do We Need Cloud‑Native Storage?

According to CNCF surveys, storage remains a major challenge in container adoption, with about 29% of users citing it as a difficulty comparable to security.

Key challenges include:

Usability: Complex deployment and poor integration with orchestration platforms.

Performance: High IOPS and low latency requirements for intensive workloads.

High Availability: Need for fault‑tolerant, non‑single‑point‑of‑failure designs.

Agility: Fast creation, deletion, and scaling of Persistent Volumes (PVs) that follow pod migrations.

Stateful Applications Are Now the Main Workload for Containers

With the rise of Kubernetes, IoT, 5G, and AI, more than half of surveyed users now run stateful applications in containers, and a further 23% plan to do so.

Typical storage requirements for stateful apps include:

Volume lifecycle tied to the pod.

Persistent data retention.

Stable pod‑to‑storage relationships across upgrades.

Different workloads have distinct I/O characteristics, such as databases (OLTP), AI/ML, big‑data analytics (OLAP), HPC/rendering, and DevOps pipelines.
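To make the "stable pod‑to‑storage relationship" requirement concrete, the sketch below builds a minimal StatefulSet manifest with a volumeClaimTemplate, so each replica gets its own PV that stays bound to the same pod identity across restarts and upgrades. The storage class name neonio-rwo and the MySQL image are illustrative assumptions, not values taken from this article.

```python
# Minimal sketch: a StatefulSet whose volumeClaimTemplates give each replica
# a dedicated PV that stays bound to the same pod identity across upgrades.
# The storage class "neonio-rwo" and the mysql:8.0 image are assumed examples.
import yaml

statefulset = {
    "apiVersion": "apps/v1",
    "kind": "StatefulSet",
    "metadata": {"name": "mysql"},
    "spec": {
        "serviceName": "mysql",
        "replicas": 3,
        "selector": {"matchLabels": {"app": "mysql"}},
        "template": {
            "metadata": {"labels": {"app": "mysql"}},
            "spec": {
                "containers": [{
                    "name": "mysql",
                    "image": "mysql:8.0",
                    "volumeMounts": [{"name": "data", "mountPath": "/var/lib/mysql"}],
                }],
            },
        },
        # One PVC per replica; the data is retained even if the pod is rescheduled.
        "volumeClaimTemplates": [{
            "metadata": {"name": "data"},
            "spec": {
                "accessModes": ["ReadWriteOnce"],
                "storageClassName": "neonio-rwo",  # assumed class name
                "resources": {"requests": {"storage": "20Gi"}},
            },
        }],
    },
}

# Render to YAML and apply with: python statefulset.py | kubectl apply -f -
print(yaml.safe_dump(statefulset, sort_keys=False))
```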

Challenges in Multi‑Cloud Environments

Managing storage across hybrid and multi‑cloud setups introduces issues like multiple APIs, compatibility problems, and complex management and visibility.

Common Cloud‑Native Storage Solutions

OpenEBS is a Container‑Attached Storage (CAS) implementation that runs directly on Kubernetes and offers three storage types: cStor, Jiva, and LocalPV.

Each volume is managed by a lightweight controller pod, often co‑located with the application pod (sidecar model).

Controllers are independent per volume, providing isolation.
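As a hedged illustration of how an OpenEBS volume is consumed, the snippet below renders a StorageClass for OpenEBS Local PV; the provisioner string openebs.io/local matches OpenEBS's Local PV provisioner, but the class name is made up, and annotations and defaults should be checked against the OpenEBS release you actually run.

```python
# Hedged sketch: a StorageClass for OpenEBS Local PV.
# "openebs.io/local" is OpenEBS's Local PV provisioner; verify annotations
# and defaults against your installed OpenEBS version.
import yaml

storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {
        "name": "openebs-hostpath-demo",          # assumed name
        "annotations": {"openebs.io/cas-type": "local"},
    },
    "provisioner": "openebs.io/local",
    # Local volumes are bound only once a pod is scheduled onto a node.
    "volumeBindingMode": "WaitForFirstConsumer",
    "reclaimPolicy": "Delete",
}

print(yaml.safe_dump(storage_class, sort_keys=False))  # pipe to kubectl apply -f -
```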

Rook‑Ceph brings Ceph into Kubernetes, providing block, file, and object storage interfaces via operators, and also supports EdgeFS, CockroachDB, Cassandra, NFS, and YugabyteDB.

Comparison of OpenEBS and Rook‑Ceph

Both support CSI for seamless container volume integration.

Active communities with abundant resources.

Deep integration with cloud‑native orchestration for deployment, upgrade, and scaling.

However, they share drawbacks such as limited I/O performance and higher operational complexity.

QingStor NeonIO Cloud‑Native Storage and Practice

NeonIO is an enterprise‑grade distributed block storage system designed for containerized deployment, offering dynamic provisioning, cloning, snapshots, restoration, and resizing of PVs.
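These PV operations map onto standard Kubernetes/CSI objects. As a minimal sketch (assuming a bound PVC named mysql-data, a VolumeSnapshotClass named neonio-snapclass exposed by the CSI driver, and a StorageClass that allows expansion; all names are hypothetical), the snippet below takes a snapshot and then resizes the volume with the official Kubernetes Python client.

```python
# Minimal sketch of two CSI-backed operations against an existing PVC.
# Assumes: a bound PVC "mysql-data" in "default", a VolumeSnapshotClass
# named "neonio-snapclass", and a StorageClass with allowVolumeExpansion: true.
from kubernetes import client, config

config.load_kube_config()          # or load_incluster_config() inside a pod
core = client.CoreV1Api()
crds = client.CustomObjectsApi()

# 1) Create a point-in-time snapshot via the snapshot.storage.k8s.io CRD.
snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "mysql-data-snap"},
    "spec": {
        "volumeSnapshotClassName": "neonio-snapclass",   # hypothetical name
        "source": {"persistentVolumeClaimName": "mysql-data"},
    },
}
crds.create_namespaced_custom_object(
    group="snapshot.storage.k8s.io", version="v1",
    namespace="default", plural="volumesnapshots", body=snapshot)

# 2) Expand the PVC by patching its requested size.
core.patch_namespaced_persistent_volume_claim(
    name="mysql-data", namespace="default",
    body={"spec": {"resources": {"requests": {"storage": "50Gi"}}}})
```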

NeonIO Architecture

The architecture includes:

ZooKeeper/etcd for cluster discovery and coordination.

MySQL for metadata storage.

Center service for logical management (PV creation, snapshots).

Monitor service exposing metrics to Prometheus.

Store service handling application I/O.

Portal providing a UI.

CSI driver for standard storage interface.
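From an application team's point of view, the CSI driver is consumed through an ordinary StorageClass and PVC. The sketch below shows that wiring; the provisioner string and class name are placeholders, not documented NeonIO values, so substitute whatever your NeonIO release specifies.

```python
# Hedged sketch: dynamic provisioning through a CSI driver.
# The provisioner string and class name are placeholders; substitute the
# values documented for your NeonIO release.
import yaml

storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "neonio-rwo"},              # assumed name
    "provisioner": "csi.neonio.example.com",         # placeholder driver name
    "allowVolumeExpansion": True,
    "reclaimPolicy": "Delete",
}

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "neonio-rwo",
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

# Render both objects as one multi-document YAML stream for kubectl apply -f -
print(yaml.safe_dump_all([storage_class, pvc], sort_keys=False))
```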

Key Features of NeonIO

Full‑stack component containerization (services, CSI, portal).

Complete CSI implementation with static and dynamic PV creation.

Zero‑ops UI for management, alerts, and monitoring at PV granularity.

Deep integration with cloud‑native ecosystems (Prometheus, etcd, CRDs).

One‑click deployment via an Operator.

Performance Highlights

NeonIO uses an all‑flash distributed architecture with NVMe SSDs and optional RDMA, providing linear IOPS scaling with node count, ultra‑short I/O paths, and HostNetwork mode to reduce latency.

Benchmark results on a 3‑node cluster (NVMe SSDs, 1 TiB volumes) show NeonIO outperforming competitors in both IOPS and latency for single‑replica and three‑replica configurations.
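The article does not publish the exact benchmark commands, so the following is only a generic way to measure 4K random‑write IOPS and latency on a mounted PV with fio; the mount path, job sizes, and runtime are assumptions and should be tuned to match whichever methodology you want to compare against.

```python
# Generic 4K random-write benchmark on a mounted volume using fio.
# The mount path, job sizes, and runtime are assumptions, not the article's
# published test parameters. Requires fio to be installed in the pod/host.
import json
import subprocess

cmd = [
    "fio",
    "--name=randwrite-4k",
    "--filename=/mnt/neonio-pv/fio.test",  # assumed mount point of the PV
    "--ioengine=libaio", "--direct=1",
    "--rw=randwrite", "--bs=4k",
    "--iodepth=32", "--numjobs=4",
    "--size=10G", "--runtime=60", "--time_based",
    "--group_reporting", "--output-format=json",
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
job = json.loads(result.stdout)["jobs"][0]

# Report aggregate write IOPS and mean completion latency (microseconds).
print("write IOPS:", round(job["write"]["iops"]))
print("write clat mean (us):", round(job["write"]["clat_ns"]["mean"] / 1000))
```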

High Availability Design

Service components run with three replicas by default, using probes for health checks and automatic restarts. Data is sharded, with each shard replicated across three distinct nodes, ensuring strong consistency and fault tolerance.
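The article does not describe NeonIO's placement algorithm, so the snippet below is only a conceptual illustration of the stated property, namely that every data shard is copied to three distinct nodes, using a simple deterministic hash‑based selection.

```python
# Conceptual illustration only: place each shard on three distinct nodes.
# This is not NeonIO's actual placement algorithm, just a sketch of the
# "every shard has replicas on three different nodes" property.
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d", "node-e"]
REPLICAS = 3

def place_shard(shard_id: str, nodes=NODES, replicas=REPLICAS):
    """Deterministically pick `replicas` distinct nodes for a shard."""
    ranked = sorted(
        nodes,
        key=lambda n: hashlib.sha256(f"{shard_id}:{n}".encode()).hexdigest(),
    )
    return ranked[:replicas]

for shard in ("pv-42/shard-0", "pv-42/shard-1"):
    placement = place_shard(shard)
    assert len(set(placement)) == REPLICAS  # replicas land on distinct nodes
    print(shard, "->", placement)
```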

Agility

NeonIO can rebuild pods across nodes quickly (mounting or unmounting 2,000 PVs in 16 seconds) and can create 2,000 PVs in about 5 minutes.
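To sanity‑check agility claims like these in your own cluster, a rough measurement loop such as the one below creates a batch of PVCs and waits for them all to bind; the class name, namespace, and batch size are arbitrary choices, not the vendor's benchmark harness.

```python
# Rough agility check: create N PVCs and time how long until all are Bound.
# Class name, namespace, and N are arbitrary choices, not the article's setup.
import time
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
N, NS, SC = 50, "default", "neonio-rwo"   # assumed storage class name

start = time.time()
for i in range(N):
    core.create_namespaced_persistent_volume_claim(
        namespace=NS,
        body={
            "apiVersion": "v1",
            "kind": "PersistentVolumeClaim",
            "metadata": {"name": f"agility-test-{i}"},
            "spec": {
                "accessModes": ["ReadWriteOnce"],
                "storageClassName": SC,
                "resources": {"requests": {"storage": "1Gi"}},
            },
        },
    )

# Poll until every claim reports phase Bound (assumes an immediate-binding class).
while True:
    pvcs = core.list_namespaced_persistent_volume_claim(NS).items
    bound = sum(1 for p in pvcs
                if p.metadata.name.startswith("agility-test-")
                and p.status.phase == "Bound")
    if bound >= N:
        break
    time.sleep(2)

print(f"{N} PVCs bound in {time.time() - start:.1f}s")
```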

Typical Application Scenarios

DevOps: rapid bulk PV creation/deletion.

Databases: stable, high‑IOPS storage for MySQL, etc.

Big‑Data Analytics: support for massive capacities (up to 100 TB per PV).

Compute‑Storage Separation: one Kubernetes cluster runs NeonIO, another consumes its storage via CSI.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

cloud-native, Kubernetes, OpenEBS, NeonIO, Rook-Ceph
Written by

Qingyun Technology Community

Official account of the Qingyun Technology Community, focusing on tech innovation, supporting developers, and sharing knowledge. Born to Learn and Share!
