Lao Guo's Learning Space
Apr 30, 2026 · Artificial Intelligence

How DeepSeek V4’s CSA + HCA Break the Million‑Token Barrier

Traditional full attention cannot handle million‑token contexts because compute and memory grow quadratically with sequence length. DeepSeek V4’s Compressed Sparse Attention (CSA) and Heavily Compressed Attention (HCA) compress tokens, sparsely index them, and compute precise attention only where it matters, cutting the KV cache to 10% and FLOPs to 27% while enabling a 1M‑token window on a single GPU.
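The compress → index → compute pipeline described above can be illustrated with a toy sketch. Note that CSA and HCA are not publicly specified in this teaser, so the function below (`sparse_block_attention`, with its `block` and `topk` parameters) is a hypothetical, generic compressed‑then‑sparse attention illustration, not DeepSeek V4’s actual method:

```python
import numpy as np

def sparse_block_attention(q, K, V, block=4, topk=2):
    """Toy sketch: compress keys into block summaries, index the
    most relevant blocks, then run exact attention on those only."""
    n, d = K.shape
    nb = n // block
    # Compress: mean-pool keys within each block to get one summary per block.
    Kc = K[:nb * block].reshape(nb, block, d).mean(axis=1)
    # Index: score the query against block summaries, keep the top-k blocks.
    keep = np.argsort(Kc @ q)[-topk:]
    idx = np.concatenate([np.arange(b * block, (b + 1) * block) for b in keep])
    # Compute: exact softmax attention over the selected tokens only,
    # so FLOPs scale with topk * block rather than the full length n.
    s = (K[idx] @ q) / np.sqrt(d)
    w = np.exp(s - s.max())
    w /= w.sum()
    return w @ V[idx]

rng = np.random.default_rng(0)
q = rng.normal(size=8)
K = rng.normal(size=(16, 8))
V = rng.normal(size=(16, 8))
out = sparse_block_attention(q, K, V)   # attends to 8 of 16 tokens
```

Only `topk * block` of the `n` keys ever enter the exact attention step, which is the structural reason a scheme like this can shrink both the KV cache and the attention FLOPs.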

Attention Mechanism · CSA · HCA
0 likes · 12 min read
Network Intelligence Research Center (NIRC)
Dec 23, 2025 · Artificial Intelligence

ClusterAttn: Compressing KV Cache with Intrinsic Attention Clustering

ClusterAttn tackles the KV‑cache bottleneck of large language models by exploiting the natural clustering of attention scores. It achieves up to 92% compression without accuracy loss, boosts throughput 2.6–4.8×, handles 128K‑token sequences on a single GPU, and outperforms existing training‑free compression methods.
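The core idea, clustering the cache and keeping one representative per cluster, can be sketched as follows. ClusterAttn's actual density‑clustering algorithm is not described in this teaser, so `cluster_compress_kv` below is a hypothetical stand‑in that uses plain k‑means over key vectors purely to illustrate how clustering shrinks a KV cache:

```python
import numpy as np

def cluster_compress_kv(K, V, n_clusters=4, iters=10, seed=0):
    """Toy sketch: replace n cached (key, value) pairs with one
    representative pair per cluster of similar keys."""
    n, d = K.shape
    rng = np.random.default_rng(seed)
    # Initialize centroids from randomly chosen keys (fancy indexing copies).
    C = K[rng.choice(n, n_clusters, replace=False)]
    for _ in range(iters):
        # Assign each key to its nearest centroid.
        assign = np.argmin(((K[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        # Update centroids as the mean of their assigned keys.
        for j in range(n_clusters):
            mask = assign == j
            if mask.any():
                C[j] = K[mask].mean(axis=0)
    # Compressed cache: centroid keys plus the mean value per cluster.
    Kc = C
    Vc = np.stack([V[assign == j].mean(axis=0) if (assign == j).any() else V[0]
                   for j in range(n_clusters)])
    return Kc, Vc

rng = np.random.default_rng(1)
K = rng.normal(size=(16, 8))
V = rng.normal(size=(16, 8))
Kc, Vc = cluster_compress_kv(K, V)   # 16 cached pairs -> 4 representatives
```

Here 16 entries shrink to 4, a 75% reduction; the 92% figure quoted above is the paper's own result, not something this toy reproduces.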

KV Cache Compression · Attention Clustering · Density Clustering
0 likes · 8 min read