MiroThinker 1.5: 30B Model Beats 1T‑Scale LLMs via Interactive Scaling
Released by the MiroMind team, MiroThinker 1.5 demonstrates that a 30‑billion‑parameter model can match or surpass 1‑trillion‑parameter LLMs by leveraging Interactive Scaling. The model achieves top rankings on multiple search benchmarks at a dramatically lower inference cost, and is released as open source for developers.
Model Overview
The MiroMind team, founded by Chen Tianqiao and AI scholar Dai Jifeng, has announced its flagship search‑agent model, MiroThinker 1.5. The 30‑billion‑parameter version reportedly delivers performance comparable to 1‑trillion‑parameter models, while the 235‑billion‑parameter version leads several search‑agent benchmarks.
Discovery Intelligence & Interactive Scaling
The model embodies a new paradigm called “Discovery Intelligence,” which shifts the focus from merely increasing parameters to enabling the model to actively seek evidence, verify hypotheses, and correct itself through deep interaction with external tools such as search engines. This “Interactive Scaling” adds a third dimension to the traditional scaling axes of model size and training data, allowing the model to amplify its intelligence by interacting with its environment.
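As a rough conceptual illustration (not the MiroMind implementation), such interaction can be pictured as an agent loop: at each step the model either issues another tool call to gather evidence or commits to an answer. The `model.decide` interface and `search` helper below are hypothetical stand‑ins.

```python
# Hypothetical sketch of an interactive-scaling loop; not MiroThinker's code.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # "search" or "answer"
    text: str = ""     # final answer when kind == "answer"
    query: str = ""    # search query when kind == "search"

def search(query: str) -> str:
    """Stand-in for a real search-engine tool call."""
    raise NotImplementedError("wire up a real search API here")

def answer_interactively(model, question: str, max_tool_calls: int = 600) -> str:
    evidence: list[str] = []
    for _ in range(max_tool_calls):
        # The model decides whether it has enough evidence to answer
        # or needs another round of interaction with the environment.
        action: Action = model.decide(question, evidence)  # assumed API
        if action.kind == "answer":
            return action.text
        evidence.append(search(action.query))
    return "I don't know"  # abstain once the tool-call budget is exhausted
```

Each pass through the loop is one step along the interaction axis: more tool calls buy the model more evidence without adding a single parameter.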
Benchmark Performance
Key benchmark results include:
BrowseComp (web‑retrieval benchmark): 69.8% – surpassing the ChatGPT‑Agent record.
BrowseComp‑ZH (Chinese adaptation): 71.5%.
GAIA‑Val‑165: 80.8%.
HLE‑Text (text‑only subset of Humanity’s Last Exam): 39.2%.
These scores are on par with top frontier models such as GPT‑5‑High, Gemini‑3‑Pro, and DeepSeek‑V3.2. Compared with the 1‑trillion‑parameter Kimi‑K2‑Thinking model, MiroThinker 1.5 leads on BrowseComp‑ZH by 4.5 percentage points, costs only $0.07 per call (roughly 1/20 of Kimi’s cost), and offers faster inference.
Technical Highlights
Evidence‑Seeking: Decomposes questions into verifiable hypotheses and actively searches the web for answers.
Iterative Verification: Performs multi‑turn self‑checking and correction to avoid error accumulation.
Anti‑Hallucination: Filters out reasoning unsupported by evidence, responding with “I don’t know” when appropriate.
Temporal‑Sensitive Training: Reduces future‑information leakage, yielding more reliable predictions.
The model supports 400–600 tool calls per session, enabling it to think like a scientist; a minimal sketch of the verify‑or‑abstain pattern follows.
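To make the anti‑hallucination behavior concrete, here is a minimal hypothetical sketch. The substring‑based `verify` check and the pre‑decomposed claim list are illustrative assumptions, not MiroThinker internals.

```python
# Hypothetical verify-or-abstain sketch; the claim decomposition and the
# substring-based support check are illustrative, not MiroThinker internals.
def verify(claim: str, evidence: list[str]) -> bool:
    """Crude support check: does any retrieved passage mention the claim?"""
    return any(claim.lower() in passage.lower() for passage in evidence)

def answer_with_evidence(claims: list[str], evidence: list[str]) -> str:
    # Keep only claims that at least one piece of retrieved evidence supports.
    supported = [c for c in claims if verify(c, evidence)]
    if not supported:
        return "I don't know"  # abstain rather than hallucinate
    return " ".join(supported)
```

A real system would replace the substring check with a model‑based entailment judgment, but the control flow, answer only what the evidence supports, otherwise abstain, is the same.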
Open‑Source Release
Model weights are available on Hugging Face (https://huggingface.co/miromind-ai/MiroThinker-v1.5-235B) and the codebase is hosted on GitHub (https://github.com/MiroMindAI/MiroThinker). The accompanying MiroFlow framework allows developers to deploy the model with minimal effort.
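For reference, loading the published weights typically looks like the following Hugging Face transformers sketch. The precision, device mapping, and generation settings are assumptions to verify against the model card, and production deployments would go through MiroFlow instead.

```python
# Minimal load sketch with Hugging Face transformers. dtype, device_map,
# and generation settings are assumptions; check the model card for the
# recommended configuration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "miromind-ai/MiroThinker-v1.5-235B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # shard across available GPUs
)

messages = [{"role": "user", "content": "Who proposed the theory of general relativity?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```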
Conclusion
MiroThinker 1.5 proves that parameter count is not the sole path to AI advancement; interactive scaling offers a cost‑effective route to high‑performance, trustworthy AI assistants. The open‑source nature and active community (Discord, WeChat assistant) make it a valuable resource for researchers and developers.