Data Party THU
Oct 25, 2025 · Artificial Intelligence

How InfLLM‑V2 Delivers Fast, Low‑Cost Sparse Attention for Long‑Context LLMs

InfLLM‑V2 introduces a zero‑parameter, training‑efficient sparse‑attention framework that dramatically accelerates long‑sequence processing while requiring only 5 B training tokens; the open‑source MiniCPM4.1 model built on it matches dense attention on both long‑text understanding and deep‑thinking benchmarks.

Tags: InfLLM-V2 · MiniCPM4.1 · efficiency