DeepSeek V4 Unveiled: How Its Million-Token Context Redefines Open-Source LLMs
DeepSeek has released a preview of V4 in two variants: V4‑Pro (1.6 T total parameters, 49 B activated per token, trained on 33 T tokens) and V4‑Flash (284 B total parameters, 13 B activated per token, trained on 32 T tokens). Both models support a 1 M‑token context window and introduce DSA, a novel sparse‑attention mechanism that reduces compute and memory costs. On agentic‑coding, world‑knowledge, and reasoning benchmarks they rival top closed‑source models, and the API is compatible with both the OpenAI and Anthropic client formats.
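OpenAI compatibility means existing client code can target the new models by swapping the endpoint and model name. A minimal sketch of an OpenAI-style `/chat/completions` request body; the model identifier and parameter values here are placeholder assumptions, not confirmed names:

```python
import json

# Placeholder model name -- the actual V4 identifier is an assumption here.
MODEL = "deepseek-v4-flash"

def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build a request body following the OpenAI chat-completions schema."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

body = build_chat_request("Summarize sparse attention in one sentence.")
print(json.dumps(body, indent=2))
```

Because the schema matches, this payload could be sent to any OpenAI-compatible endpoint without changes beyond the base URL and API key.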
