How KAG-Thinker Boosts Structured Reasoning in Large Language Models

The KAG-Thinker model, a collaborative effort by Ant Group, Zhejiang University, and Tongji University, introduces a hierarchical "breadth splitting + depth solving" framework that enhances logical stability, knowledge utilization, and retrieval robustness for complex multi‑hop reasoning tasks across general and specialized domains.


Ant Group's Knowledge Engine team, together with Zhejiang University and Tongji University, has released the KAG-Thinker model, a major upgrade of the KAG framework focused on stable and explainable structured reasoning for both general and domain‑specific complex tasks.

In 2025, OpenAI's Deep Research demonstrated strong multi-round retrieval and planning capabilities in large models. Subsequent model-centric approaches such as Search-R1 and ReSearch use reinforcement learning to train models to retrieve and apply external knowledge, but their natural-language-only reasoning often lacks rigor and stability.

Inspired by how human experts decompose problems, KAG-Thinker establishes a clear, layered "scaffold" that improves logical consistency and stability in complex reasoning.

The model retains the KAG framework's Logical Form bilingual representation (natural language and logical expression) and enhances it with a "breadth splitting + depth solving" strategy, a knowledge‑boundary determination mechanism, and a noise‑resistant retrieval module.
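As a rough illustration of this strategy (not the framework's actual API; all function names below are invented), the "breadth splitting + depth solving" loop, combined with the knowledge-boundary check, can be sketched as:

```python
# Hypothetical sketch of "breadth splitting + depth solving":
# a question is first split in breadth into sub-questions, and each
# sub-question that falls outside the model's knowledge boundary is
# recursively solved in depth. Names are illustrative, not KAG's API.

def split_breadth(question: str) -> list[str]:
    # Placeholder: a real system would prompt the model to decompose
    # the question into logical-form sub-questions.
    return [question]

def within_knowledge_boundary(sub_question: str) -> bool:
    # Placeholder for knowledge-boundary determination: decide whether
    # the model can answer directly or must retrieve / decompose further.
    return True

def answer_directly(sub_question: str) -> str:
    # Placeholder for a direct model answer.
    return f"answer({sub_question})"

def solve(question: str, max_depth: int = 3) -> list[str]:
    """Breadth-split the question, then depth-solve each sub-question."""
    if max_depth == 0:
        return [answer_directly(question)]
    answers = []
    for sub in split_breadth(question):
        if within_knowledge_boundary(sub):
            answers.append(answer_directly(sub))
        else:
            # Depth solving: recurse on the harder sub-question.
            answers.extend(solve(sub, max_depth - 1))
    return answers
```

The key design point the article emphasizes is the layering itself: decomposition in breadth keeps the plan explicit and inspectable, while recursion in depth is bounded by the knowledge-boundary check rather than applied indiscriminately.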

The architecture includes four Logical Form solvers—Retrieval, Deduce, Math, and Output—which handle knowledge retrieval, logical deduction, mathematical reasoning, and answer aggregation, respectively.
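A minimal dispatcher over these four solver types might look like the following sketch. The solver names come from the article; the implementations are stand-ins, not KAG's code:

```python
# Toy dispatch over the four Logical Form solvers named in the article:
# Retrieval, Deduce, Math, Output. Bodies are illustrative stand-ins.
from typing import Callable

def solve_retrieval(args: dict) -> str:
    return f"retrieved:{args['query']}"            # knowledge retrieval

def solve_deduce(args: dict) -> str:
    return f"deduced:{args['premise']}"            # logical deduction

def solve_math(args: dict) -> str:
    # Evaluate a plain arithmetic expression with builtins disabled.
    return str(eval(args["expr"], {"__builtins__": {}}))

def solve_output(args: dict) -> str:
    return " | ".join(args["parts"])               # answer aggregation

SOLVERS: dict[str, Callable[[dict], str]] = {
    "Retrieval": solve_retrieval,
    "Deduce": solve_deduce,
    "Math": solve_math,
    "Output": solve_output,
}

def execute(step: tuple[str, dict]) -> str:
    """Route one logical-form step to its solver by name."""
    solver_name, args = step
    return SOLVERS[solver_name](args)
```

Routing each sub-task to a dedicated solver, rather than one free-form generation pass, is what gives the logical-form layer its stability: each step has a narrow contract that can be checked independently.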

Experiments on seven single‑ and multi‑hop QA datasets show that KAG-Thinker 7B outperforms state‑of‑the‑art reinforcement‑learning methods (e.g., ReSearch) by an average of 4.1% EM, surpasses HippoRAG V2 and PIKE‑RAG, and demonstrates strong performance in medical QA.

The KAG V0.8 upgrade expands knowledge‑base capabilities, supporting both private and public sources via the MCP protocol and offering diverse index types (Outline, Summary, KnowledgeUnit, AtomicQuery, Chunk, Table) for customizable retrieval.
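As a hedged illustration only (KAG V0.8's real configuration schema may differ, and every key name here is invented), a knowledge base mixing these sources and index types could be declared along these lines:

```python
# Illustrative (invented) configuration for a knowledge base combining
# private and public sources with the six index types the article lists.
knowledge_base = {
    "name": "demo_kb",
    "sources": [
        {"type": "private", "uri": "file:///data/docs"},
        {"type": "public", "protocol": "MCP"},  # public source via MCP
    ],
    # Index types from the article; a deployment would enable a subset
    # matched to its retrieval needs.
    "indexes": [
        "Outline", "Summary", "KnowledgeUnit",
        "AtomicQuery", "Chunk", "Table",
    ],
}
```

The point of exposing multiple index types is customization: a table-heavy corpus might enable only Table and Chunk indexes, while a QA-oriented one leans on AtomicQuery and KnowledgeUnit.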

Stability tests reveal that KAG-Thinker 7B achieves higher answer consistency on HotpotQA, 2Wiki, and Musique than previous KAG versions, especially at temperature settings of 0.6 and 0.8.

In the medical domain, the adapted KAG‑Med‑Thinker improves accuracy over IRCoT, ReAct, and Naive RAG by 3.8%–4.4% on MedQA benchmarks.

Tags: AI, LLM, Knowledge Retrieval, Structured Reasoning, KAG-Thinker
Written by

AntTech

Technology is the core driver of Ant Group's future.
