PaperAgent
Apr 22, 2026 · Artificial Intelligence

Alibaba Unveils Four New Open‑Source Qwen3.6 Models: 27B Dense and 35B‑A3B MoE

Alibaba has added four new open‑weight models to its Qwen3.6 series, headlined by the 27‑billion‑parameter dense multimodal model Qwen3.6‑27B and the 35‑billion‑parameter sparse mixture‑of‑experts model Qwen3.6‑35B‑A3B, both designed for stable, real‑world coding tasks and outperforming their Qwen3.5 predecessors.

AI agents · Alibaba · Dense Model
4 min read
HyperAI Super Neural
Apr 21, 2026 · Artificial Intelligence

Qwen3.6-35B-A3B Boosts Agent Programming: 3B Activation Beats Gemma4-31B

Qwen3.6‑35B‑A3B, the first open‑source model in the Qwen3.6 series, scores markedly higher than Qwen3.5‑35B‑A3B and Gemma4‑31B on Terminal‑Bench2.0, NL2Repo, and QwenClawBench, adds an option to retain its thought process, and is accessible via HyperAI's ready‑to‑run notebook with free compute credits.

Agent Programming · HyperAI · Large Language Model
4 min read
Old Zhang's AI Learning
Apr 19, 2026 · Artificial Intelligence

Qwen3.6-35B: 4‑bit Quantization, DFlash Speedup, Claude Opus Distillation

The article reviews three optimization paths for the Qwen3.6‑35B model — four‑bit AWQ quantization variants, the DFlash speculative‑decoding accelerator, and a Claude Opus‑based distillation — detailing the implementation steps and benchmark results for each, along with guidance on choosing the best version for different hardware and performance needs.

AI · DFlash · Distillation
11 min read
AI Large-Model Wave and Transformation Guide
Apr 18, 2026 · Artificial Intelligence

Does Qwen3.6‑35B‑A3B Really Outclass All AI Coding Models? Inside the Benchmark Breakdown

Qwen3.6‑35B‑A3B, a mixture‑of‑experts model that activates only 3B of its parameters per token, outperforms leading AI systems on SWE‑bench, Terminal‑Bench, NL2Repo, and several agentic coding benchmarks, while also achieving top scores on GPQA, HMMT, and RealWorldQA — prompting a reassessment of domestic LLM capabilities.

AI coding · Agentic Coding · Chinese AI
7 min read