ChunkFlow: Accelerating Long‑Context Model Fine‑Tuning Up to 4.5× Faster

The paper introduces ChunkFlow, an efficient training framework for variable-length and ultra-long sequence datasets that powers the Qwen models. By reorganizing data into fixed-size chunks and employing a state-aware scheduler, it achieves up to 4.53× speedup over Megatron-LM and more than 2× end-to-end performance gains in production workloads.


Recently, the Alibaba Cloud PAI team, Tongyi Lab, and the UCAS Frontier Interdisciplinary Science Institute co-authored a paper titled "Efficient Long Context Fine-tuning with Chunk Flow", which was presented at ICML 2025.

ChunkFlow is a solution for efficient training on variable‑length and ultra‑long sequence datasets, supporting the Qwen series of large language models. It delivers more than 2× end‑to‑end performance gain in internal Alibaba Cloud workloads and up to 4.53× speedup compared with other frameworks.

Research Background

Long-context capability is a core ability of language models and is essential for many downstream tasks. Continued pre-training and long-context fine-tuning are the main ways to extend it. Real-world datasets exhibit a heavy-tailed length distribution: most samples are short while a few are extremely long, which causes GPU under-utilisation and pipeline bubbles.

Existing Problems

Fixed memory and parallelism strategies conflict with varying sequence lengths, leading to load imbalance and performance degradation.

Ultra‑long sequences create huge activation memory pressure, requiring recomputation or off‑loading and further reducing pipeline efficiency.

Paper Contributions

ChunkFlow reorganises data into fixed‑size chunks. Short sequences are concatenated, long sequences are split, and a state‑aware scheduler guarantees correct attention computation while controlling memory usage.

Algorithm 1 in the paper formalises how training chunks are built.
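As a rough Python sketch of the chunk-building idea (the function name, the token-ID-list input format, and the greedy packing order are illustrative assumptions, not the paper's exact Algorithm 1):

```python
from typing import List

def build_training_chunks(sequences: List[List[int]], chunk_size: int) -> List[List[List[int]]]:
    """Group variable-length sequences into chunks of at most `chunk_size`
    tokens: short sequences are packed together, long ones are split."""
    chunks: List[List[List[int]]] = []
    current: List[List[int]] = []
    current_len = 0
    for seq in sequences:
        if len(seq) <= chunk_size:
            # Short sequence: concatenate into the current chunk if it fits,
            # otherwise close the current chunk and start a new one.
            if current_len + len(seq) > chunk_size:
                chunks.append(current)
                current, current_len = [], 0
            current.append(seq)
            current_len += len(seq)
        else:
            # Long sequence: split into consecutive pieces of chunk_size tokens;
            # these pieces must later be scheduled in order so the state-aware
            # scheduler can keep causal attention correct.
            for start in range(0, len(seq), chunk_size):
                chunks.append([seq[start:start + chunk_size]])
    if current:
        chunks.append(current)
    return chunks
```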

For split long sequences, causal attention requires the resulting chunks to be processed in the correct order; the state-aware scheduler (Algorithm 2 in the paper) keeps memory usage in check with recomputation and off-loading.
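For intuition, here is a minimal sketch of processing the ordered chunks of one split sequence while carrying the key/value state of earlier chunks forward. The model interface (a Hugging-Face-style `past_key_values` cache that returns a loss) and the CPU off-loading step are assumptions for illustration, not ChunkFlow's actual scheduler:

```python
import torch

def forward_split_sequence(model, ordered_chunks, offload_kv_to_cpu=True):
    """Run the chunks of one long sequence in order, threading the KV cache
    through so each chunk attends to all earlier tokens (hedged sketch)."""
    kv_cache = None
    losses = []
    for chunk in ordered_chunks:
        if kv_cache is not None and offload_kv_to_cpu:
            # Bring the cached keys/values back to the GPU just before use.
            kv_cache = [(k.cuda(), v.cuda()) for k, v in kv_cache]
        # The cache supplies the causal history, so the result matches
        # attention over the full, unsplit sequence.
        loss, kv_cache = model(chunk, past_key_values=kv_cache, use_cache=True)
        if offload_kv_to_cpu:
            # Park the cache in host memory between chunks so GPU memory
            # stays bounded by the chunk size rather than the sequence length.
            kv_cache = [(k.cpu(), v.cpu()) for k, v in kv_cache]
        losses.append(loss)
    return torch.stack(losses).mean()
```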

Experimental Results

End-to-end performance tests on Qwen2.5 models with context lengths of 32K and 256K show ChunkFlow achieving up to 4.53× speedup over Megatron-LM.

Varying the ChunkSize shows that peak memory usage is determined by the preset ChunkSize rather than by the longest sequence in the dataset, which makes training more robust.
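As a back-of-envelope illustration of that claim (the numbers below are purely illustrative and not from the paper):

```python
def peak_resident_tokens(seq_lengths, chunk_size=None):
    """Tokens processed in one forward/backward step: without chunking it is
    the longest sequence; with chunking it is capped at chunk_size
    (illustrative reasoning only, not a measured memory model)."""
    longest = max(seq_lengths)
    return longest if chunk_size is None else min(longest, chunk_size)

# A dataset mixing many 2K samples with one 256K outlier.
lengths = [2048] * 1000 + [262144]
print(peak_resident_tokens(lengths))                       # the 256K outlier drives the peak
print(peak_resident_tokens(lengths, chunk_size=32768))     # bounded by the preset ChunkSize
```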

ChunkFlow now powers supervised fine-tuning (SFT) and long-sequence continued pre-training (CPT) for all Qwen models, delivering more than 2× performance gains and substantial GPU cost savings.

Paper Details

Title: Efficient Long Context Fine‑tuning with Chunk Flow

Authors: Xiulong Yuan, Hongtao Xu, Wenting Shen, Ang Wang, Xiafei Qiu, Jie Zhang, Yuqiong Liu, Bowen Yu, Junyang Lin, Mingzhen Li, Weile Jia, Yong Li, Wei Lin

Link: https://arxiv.org/pdf/2503.02356

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: large language models, distributed training, AI performance, ChunkFlow, GPU efficiency, long-context training
Written by

Alibaba Cloud Big Data AI Platform

The Alibaba Cloud Big Data AI Platform builds on Alibaba’s leading cloud infrastructure, big‑data and AI engineering capabilities, scenario algorithms, and extensive industry experience to offer enterprises and developers a one‑stop, cloud‑native big‑data and AI capability suite. It boosts AI development efficiency, enables large‑scale AI deployment across industries, and drives business value.
