Qwen 3.5 Emerges; ByteDance and DeepSeek Set to Release Flagship LLMs for Spring Festival
The LMSYS Chatbot Arena now shows Qwen 3.5 (codenamed Karp-001/002) alongside ByteDance's Pisces‑llm models and DeepSeek‑V4, with new Transformers configs and hints of an Active‑3B MoE architecture, suggesting a fresh wave of flagship large language models arriving for the Spring Festival.
Recent activity on the LMSYS Chatbot Arena – the de facto venue for blind tests of large models – has surfaced several previously unseen models.
Qwen 3.5 appears: codename "Karp"
Two entries, Karp-001 and Karp-002, answer the identity question by stating that they are Qwen 3.5, the next generation of Alibaba's Tongyi Qianwen series.
The Hugging Face transformers repository has quietly added configuration files for these models:
Qwen3.5-9B-Instruct
Qwen3.5-35B-A3B-Instruct
The "A3B" suffix in the 35B variant likely denotes an Active-3B MoE (Mixture-of-Experts) architecture: roughly 3 billion of the 35 billion total parameters are active per token, a notable attempt by Alibaba to balance capability against inference cost.
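The "active parameters" idea can be sketched in a few lines: a router scores all experts per token, but only the top-k actually run, so per-token compute scales with the active subset rather than the full model. The expert count, logits, and top_k below are illustrative assumptions, not Qwen's actual configuration.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a small list of logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route(logits, top_k=2):
    """Pick the top_k experts for one token; only those experts
    execute, so compute tracks active (not total) parameters."""
    ranked = sorted(range(len(logits)), key=lambda i: -logits[i])
    chosen = ranked[:top_k]
    weights = softmax([logits[i] for i in chosen])
    return list(zip(chosen, weights))

# Hypothetical setup: 8 experts, 2 active per token.
token_routing = route([0.1, 2.0, -1.0, 0.5, 1.5, 0.0, -0.5, 0.3])
print(token_routing)  # experts 1 and 4 win for this token

# Back-of-envelope cost: a 35B-total / 3B-active model does
# per-token compute comparable to a ~3B dense model.
active_fraction = 3e9 / 35e9
print(f"active fraction: {active_fraction:.1%}")
```

The routing weights are renormalized over the chosen experts so their contributions still sum to one; real MoE implementations add load-balancing losses and batched expert dispatch on top of this basic scheme.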
ByteDance's new move: codename "Pisces"
ByteDance also surfaces two models, Pisces-llm-0206a and Pisces-llm-0206b, which claim to belong to the "Seed" series. The "0206" suffix is likely a date stamp (February 6), suggesting a very recent checkpoint.
Karp-001: "I'm Qwen3.5, developed by Tongyi Lab."
Pisces series: "I'm Seed, a large language model developed by ByteDance."
In a side‑by‑side demonstration, the Pisces model draws a polished Xbox controller SVG with fewer than 100 lines of code, whereas Karp requires over 600 lines and produces a less refined image, hinting that Pisces may be stronger in code and creative generation.
With the Spring Festival approaching, the large‑model community’s "spring recruitment" appears hotter than the job market. Both Alibaba and ByteDance are testing new models on the arena, signaling the start of another round of the "thousand‑model war".
Upcoming releases mentioned include:
ByteDance Seed 2.0
Seed 2.0 Flash
Seed Code 2
Qwen3.5 models (Alibaba)
DeepSeek‑V4
DeepSeek‑V4‑Lite
Thus, a wave of flagship large language models is set to debut around the Spring Festival.
This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact us and we will review it promptly.
Old Zhang's AI Learning
AI practitioner specializing in large-model evaluation and on-premise deployment, agents, AI programming, Vibe Coding, general AI, and broader tech trends, with daily original technical articles.
