
Why Observability 2.0 Is Redefining Cloud‑Native Monitoring and Data Pipelines

Observability 2.0 unifies logs, metrics, and traces on a single platform, leveraging an event-centric wide-event (wide key-value) data model, AI-driven anomaly detection, and cloud-native elastic architecture to deliver faster system insight, less downtime, scalable data pipelines, lower cost, and a better developer experience across SLS services.

Alibaba Cloud Observability

Observability 2.0 (o11y 2.0) has become a hot topic in the DevOps community, extending the classic log/metric/trace model into a unified, event‑centric platform.

It breaks down the silos between logs, metrics, and traces, providing a complete view of system health on a single platform.

It lets engineers understand system behavior faster, diagnose issues more quickly, and reduce downtime, aided by AI-assisted anomaly detection and fault localization.

It uses logs as the core data source, enriching them with wide key-value structures (Wide Events) to reconstruct the facts of what happened.

It leverages cloud-native elastic architecture for scalable, low-cost querying and analysis of massive event streams.

Observability 2.0 promotes a wide, structured event model that supports high-dimensional, high-granularity data, making it possible to describe and query system state precisely. When selecting an observability solution, key considerations include storage capacity for massive real-time streams, low-latency high-throughput writes, and the ability to run instant analytics over billions of events.
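To make the wide-event idea concrete, here is a minimal sketch of one such record. All field names and values are illustrative assumptions, not an SLS schema: the point is that a single structured event carries many high-cardinality dimensions, so detailed questions can be answered without joining separate log, metric, and trace stores.

```python
# Illustrative sketch of a "wide event": one structured record per request,
# carrying many high-granularity dimensions in a single key-value structure.
wide_event = {
    "timestamp": "2024-05-01T12:00:03.214Z",
    "service": "checkout",
    "region": "cn-hangzhou",
    "trace_id": "ab12cd34",
    "status": 502,
    "latency_ms": 1840,
    "client_version": "3.9.1",
    "deploy_build": "f3a9c2e",
    "user_tier": "enterprise",
}

def matches(event, **criteria):
    """Return True if the event carries every requested dimension/value."""
    return all(event.get(k) == v for k, v in criteria.items())

# High-dimensional questions become simple predicates over one event stream.
print(matches(wide_event, service="checkout", region="cn-hangzhou"))
```

Because every dimension lives on the same record, narrowing an incident down to "enterprise users on build f3a9c2e in cn-hangzhou" is one predicate rather than a cross-store correlation.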

SLS, a core product of Alibaba Cloud's observability family, uses o11y 2.0 as the foundation for evolving its data-pipeline services. SLS has offered data-processing capabilities since 2019; in 2024 the pipeline was upgraded to three service shapes:

Write‑processor: users maintain the client, SLS stores and computes the data.

Consume‑processor: users maintain the consumer, SLS reads, computes and returns results.

Full‑managed data‑processing: SLS reads source data, processes it, and writes to the target store.
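The division of responsibility in the three shapes above can be summarized in a small sketch. The names and strings here are descriptive paraphrases of the text, not SLS API identifiers:

```python
# Hedged sketch: who maintains what in each SLS pipeline service shape,
# as described above. Keys/values are descriptive, not an SLS API.
PIPELINE_SHAPES = {
    "write-processor": {
        "user_maintains": "the writing client",
        "sls_side": "stores and computes the data",
    },
    "consume-processor": {
        "user_maintains": "the consumer",
        "sls_side": "reads, computes, and returns results",
    },
    "fully-managed": {
        "user_maintains": "only the configuration",
        "sls_side": "reads source data, processes it, writes to the target store",
    },
}

def shape_summary(name):
    """One-line description of a pipeline shape."""
    s = PIPELINE_SHAPES[name]
    return f"{name}: user maintains {s['user_maintains']}; SLS {s['sls_side']}"

print(shape_summary("fully-managed"))
```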

The upgraded pipeline delivers higher performance, better experience and lower cost. Performance gains come from a columnar SPL engine written in C++, SIMD acceleration, and fine‑grained elastic scaling at the DataBlock level, allowing sub‑second latency even during burst traffic.

Experience improvements stem from low‑code SPL syntax that reuses familiar Linux‑style commands and provides hundreds of built‑in functions, reducing the learning curve compared with Python‑DSL.

Cost reductions include a 66.7 % price cut for the new processing service, lower storage fees (raw data and processed results stored together), and decreased operational overhead for shard management and resource scaling.

Integration with third-party tools (Flink, Spark, Flume, DataWorks, OSS, etc.) is simplified by SPL-based processors. For example, a Function Compute job that reads SLS data, filters error logs, extracts fields, and writes to a database can be replaced by a concise SPL statement, cutting execution time from seconds to milliseconds and reducing Function Compute charges.
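A minimal sketch of the filter-and-extract logic such a job performs, assuming a hypothetical `status=... uri=...` log format (not an SLS schema). With the SPL-based processor, this work moves into SLS itself, so the downstream function only receives already-shaped rows:

```python
import re

# Hedged stand-in for the kind of logic a Function Compute job might run:
# keep only 5xx error lines and extract two fields. The log format and
# field names are illustrative assumptions.
ERROR_RE = re.compile(r"status=(?P<status>5\d\d)\s+uri=(?P<uri>\S+)")

def process(lines):
    """Filter 5xx error lines and extract status + uri as row dicts."""
    rows = []
    for line in lines:
        m = ERROR_RE.search(line)
        if m:
            rows.append({"status": int(m.group("status")), "uri": m.group("uri")})
    return rows

logs = [
    "status=200 uri=/index",
    "status=502 uri=/checkout",
    "status=503 uri=/cart",
]
print(process(logs))  # only the two 5xx rows survive
```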

Network bandwidth, a major cost in cross‑region pipelines, can be optimized by compressing traffic (e.g., ZSTD) and by transmitting only required columns or rows using SPL projections and filters:

* | project time_local, request_uri, status, user_agent, client_ip
* | where status != '200'
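The bandwidth effect of filtering rows and projecting columns before transfer can be sketched as follows. This uses Python's standard-library `zlib` purely as a stand-in for ZSTD, and synthetic records with assumed field names matching the SPL example above:

```python
import json
import zlib

# Sketch: shrink cross-region transfer by (1) dropping rows and columns the
# consumer does not need -- the effect of the SPL where/project above -- and
# (2) compressing what remains. zlib stands in for ZSTD since it ships with
# the Python standard library; the records are synthetic.
events = [
    {
        "time_local": f"t{i}",
        "request_uri": "/api",
        "status": "502" if i % 5 == 0 else "200",
        "user_agent": "curl",
        "client_ip": "10.0.0.1",
        "body": "x" * 200,  # bulky column the consumer never reads
    }
    for i in range(1000)
]

wanted = ("time_local", "request_uri", "status", "user_agent", "client_ip")
slimmed = [{k: e[k] for k in wanted} for e in events if e["status"] != "200"]

raw = zlib.compress(json.dumps(events).encode())
slim = zlib.compress(json.dumps(slimmed).encode())
print(f"full payload: {len(raw)} bytes; filtered+projected: {len(slim)} bytes")
```

Filtering and projection compound with compression: the less data crosses the region boundary in the first place, the less there is to compress and pay for.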

Overall, Observability 2.0 and the SPL‑driven SLS pipeline provide a scalable, high‑performance, cost‑effective foundation for modern cloud‑native monitoring and big‑data analytics, and they are poised to support emerging AI workloads.

Tags: Performance, cloud-native, cost optimization
Written by

Alibaba Cloud Observability

Driving continuous progress in observability technology!
