
Blaze: Kuaishou’s Rust‑Based Vectorized Execution Engine for Spark SQL

Blaze is a Rust‑implemented, DataFusion‑based vectorized execution engine created by Kuaishou to accelerate Spark SQL queries. In benchmarks it cuts computation time by up to 60%, and in production it delivers an average 30% compute‑power gain. Its architecture comprises a native engine, a protobuf‑based operator protocol, a JNI bridge, and a Spark extension. Blaze is open‑source and compatible with Spark 3.0‑3.5.

Kuaishou Tech

Blaze is Kuaishou’s self‑developed Spark vectorized execution engine built with Rust and the DataFusion framework, designed to speed up Spark SQL processing through native vectorized execution.

In performance tests, Blaze reduced computation time by 60% compared with Spark 3.3 and by 40% compared with Spark 3.5 on a 1 TB TPC‑DS benchmark, and achieved an average 30% compute‑power improvement in Kuaishou’s production data‑warehouse jobs.

Blaze is now open‑source and fully compatible with Spark 3.0‑3.5; users can integrate it by simply adding the provided JAR.
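Integration amounts to shipping the Blaze JAR with the job and enabling the session extension. A hedged sketch of what a spark-submit invocation might look like (the jar path, configuration keys, and class names below are assumptions based on typical Blaze setups, not verbatim from the project; check the Blaze README for the current values):

```shell
# Sketch only: jar path and config class names are assumptions;
# verify against the Blaze repository's README for your release.
spark-submit \
  --jars /path/to/blaze-engine-spark3.jar \
  --conf spark.blaze.enable=true \
  --conf spark.sql.extensions=org.apache.spark.sql.blaze.BlazeSparkSessionExtension \
  --conf spark.shuffle.manager=org.apache.spark.sql.execution.blaze.shuffle.BlazeShuffleManager \
  your_job.py  # hypothetical job script
```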

1. Research Background

Vectorized execution shifts the processing granularity from a single row to a batch of rows (a row group), reducing per‑row function‑call overhead and enabling SIMD‑based optimizations.
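The difference can be sketched in a few lines of Rust. This is purely illustrative (the function names are hypothetical, not Blaze or DataFusion code): the row‑at‑a‑time path pays one function call per row, while the batch path makes one call per row group, and its tight loop over slices is easy for the compiler to auto‑vectorize with SIMD.

```rust
// Illustrative contrast of row-at-a-time vs. vectorized (batch)
// evaluation of the expression `price * qty`. Hypothetical names;
// not Blaze/DataFusion code.

// Row-at-a-time: one function call per row.
fn eval_row(price: f64, qty: f64) -> f64 {
    price * qty
}

// Vectorized: one call per batch; the tight slice loop is a good
// candidate for compiler auto-vectorization (SIMD).
fn eval_batch(prices: &[f64], qtys: &[f64], out: &mut [f64]) {
    for i in 0..out.len() {
        out[i] = prices[i] * qtys[i];
    }
}

fn main() {
    let prices = vec![1.5, 2.0, 3.0];
    let qtys = vec![2.0, 4.0, 1.0];

    // Row-at-a-time path: one call per row.
    let row_result: Vec<f64> = prices
        .iter()
        .zip(&qtys)
        .map(|(&p, &q)| eval_row(p, q))
        .collect();

    // Vectorized path: one call for the whole batch.
    let mut batch_result = vec![0.0; prices.len()];
    eval_batch(&prices, &qtys, &mut batch_result);

    assert_eq!(row_result, batch_result);
    println!("{:?}", batch_result); // [3.0, 8.0, 3.0]
}
```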

2. Overall Architecture and Core Components

The architecture consists of four core modules: Native Engine (Rust‑based DataFusion implementation), ProtoBuf (operator description protocol), JNI Bridge (communication between Spark and native engine), and Spark Extension (translation of Spark operators to native operators).

The Spark‑on‑Blaze flow inserts a Blaze session extension after Spark generates the physical plan, translates that plan into a native plan, and executes it in the native engine.

3. Technical Advantages for Production

Fine‑grained fallback mechanism that reverts to Spark execution for unsupported operators or UDFs, using Arrow FFI to exchange columnar data between the two engines.

Optimized vectorized data transfer format that removes redundant metadata and applies byte‑transpose columnar serialization, cutting Shuffle data volume by ~30%.
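The byte‑transpose idea can be shown with a pure‑std Rust sketch (hypothetical code, not Blaze's actual wire format): instead of storing each value's bytes together, all first bytes are grouped, then all second bytes, and so on. For small values this produces long runs of identical bytes (often zeros), which downstream compression handles far better.

```rust
// Hypothetical sketch of byte-transpose columnar serialization for an
// i32 column: store all byte-0s contiguously, then all byte-1s, etc.
// Small values then yield long zero runs that compress well.
// Illustration only; Blaze's real shuffle format may differ.

fn byte_transpose_i32(values: &[i32]) -> Vec<u8> {
    let n = values.len();
    let mut out = vec![0u8; n * 4];
    for (i, v) in values.iter().enumerate() {
        let bytes = v.to_le_bytes();
        for b in 0..4 {
            // The b-th byte of every value is stored in plane b.
            out[b * n + i] = bytes[b];
        }
    }
    out
}

fn main() {
    // Small positive values: the three high-byte planes are all zeros.
    let col = vec![1i32, 2, 3, 4];
    let t = byte_transpose_i32(&col);
    assert_eq!(&t[..4], &[1, 2, 3, 4]); // low-byte plane
    assert!(t[4..].iter().all(|&b| b == 0)); // high-byte planes
    println!("{:?}", t);
}
```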

Adaptive memory management across heap and off‑heap, automatically adjusting allocation based on vectorization coverage.

Improved aggregation algorithm using bucketed merge with radix sort, maintaining O(n) complexity and boosting resource utilization.
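A radix sort is what keeps the bucketing pass linear: it groups equal keys with a fixed number of counting passes instead of an O(n log n) comparison sort. The following pure‑std Rust sketch (hypothetical, not Blaze's implementation) shows an LSD radix sort over u32 hash keys of the kind a bucketed‑merge aggregation could use.

```rust
// Illustrative LSD radix sort over u32 keys: four O(n) counting passes,
// one per byte, least-significant first. Hypothetical sketch, not
// Blaze's actual aggregation code.

fn radix_sort_u32(keys: &mut Vec<u32>) {
    let mut buf = vec![0u32; keys.len()];
    for shift in (0..32).step_by(8) {
        // Count occurrences of each byte value.
        let mut counts = [0usize; 256];
        for &k in keys.iter() {
            counts[((k >> shift) & 0xff) as usize] += 1;
        }
        // Prefix-sum counts into starting offsets per bucket.
        let mut offset = 0;
        for c in counts.iter_mut() {
            let n = *c;
            *c = offset;
            offset += n;
        }
        // Stable scatter into the buffer, then swap for the next pass.
        for &k in keys.iter() {
            let bucket = ((k >> shift) & 0xff) as usize;
            buf[counts[bucket]] = k;
            counts[bucket] += 1;
        }
        std::mem::swap(keys, &mut buf);
    }
}

fn main() {
    let mut keys = vec![42u32, 7, 999_999, 7, 0, 42];
    radix_sort_u32(&mut keys);
    assert_eq!(keys, vec![0, 7, 7, 42, 42, 999_999]);
    println!("{:?}", keys);
}
```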

Whole‑stage codegen‑style expression reuse, merging operators to eliminate duplicate calculations and doubling execution speed for complex UDF workloads.
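The reuse idea behind that last point can be sketched as a per‑batch expression cache (hypothetical names and structure, assuming a string expression id as the cache key): when merged operators share a subexpression such as an expensive UDF call, it is evaluated once and later references are served from the cache.

```rust
use std::collections::HashMap;

// Hypothetical sketch of expression reuse across merged operators:
// evaluate a shared subexpression once per batch, then serve repeat
// references from a cache instead of recomputing.

struct ExprCache {
    results: HashMap<String, Vec<f64>>, // expression id -> column result
    evals: usize,                       // number of real evaluations run
}

impl ExprCache {
    fn new() -> Self {
        Self { results: HashMap::new(), evals: 0 }
    }

    fn eval(&mut self, expr_id: &str, input: &[f64], f: fn(f64) -> f64) -> Vec<f64> {
        if let Some(cached) = self.results.get(expr_id) {
            return cached.clone(); // reuse: no recomputation
        }
        self.evals += 1;
        let out: Vec<f64> = input.iter().map(|&x| f(x)).collect();
        self.results.insert(expr_id.to_string(), out.clone());
        out
    }
}

fn main() {
    let col = vec![1.0, 4.0, 9.0];
    let mut cache = ExprCache::new();
    // Two merged operators both reference sqrt(col).
    let a = cache.eval("sqrt(col)", &col, f64::sqrt);
    let b = cache.eval("sqrt(col)", &col, f64::sqrt);
    assert_eq!(a, b);
    assert_eq!(cache.evals, 1); // evaluated once, reused once
    println!("{:?}", a);
}
```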

4. Current Progress and Future Plans

Parquet vectorized read/write support.

Comprehensive operator and expression coverage with fine‑grained fallback.

Integration of Remote Shuffle Service and upcoming support for Apache Celeborn.

Outstanding TPC‑H/TPC‑DS benchmark results (3× and 2.5× speedups respectively).

Future work includes continuous performance iteration, expanding to data‑lake scenarios, and strengthening open‑source community engagement.

Project repository: https://github.com/kwai/blaze

Tags: Performance Optimization, Big Data, Rust, Spark, Vectorized Execution, DataFusion
Written by Kuaishou Tech

Official Kuaishou tech account, providing real-time updates on the latest Kuaishou technology practices.
