Can a 30B Open‑Source Model Match Closed‑Source Giants? MiroThinker 1.5 Review

MiroThinker 1.5 adopts a "scientist" mode built on Interactive Scaling: it runs a hypothesis‑evidence loop, scores 56.1 on the BrowseComp benchmark (close to Gemini DeepSearch's 59.2), supports up to 400 tool calls and a 256K context window, and delivers detailed research reports, all as an open‑source project on GitHub.


Recently the MiroMind team released MiroThinker 1.5, an open‑source search‑oriented AI that follows a “Discovery Intelligence” strategy instead of the usual massive‑parameter approach. The model is described as operating in a “scientist mode” that relies on Interactive Scaling rather than memorizing knowledge in parameters.

Interactive Scaling Loop

Form a hypothesis: build an initial logical model of the question.

Seek evidence: call external search tools to retrieve real‑time information.

Self‑negate: compare the hypothesis with the evidence and identify contradictions.

Refine the hypothesis: iterate until the evidence converges on a reliable conclusion.

This loop gives the model high logical density when tackling complex queries.
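The loop above can be sketched in a few lines of Python. This is a minimal illustration, assuming a caller‑supplied `search` callback and toy hypothesis strings; it is not MiroThinker's actual interface.

```python
# Sketch of the Interactive Scaling loop. The `search` callback and the
# hypothesis/contradiction representations are placeholders, not
# MiroThinker's real API.

def interactive_scaling(question, search, max_rounds=4):
    """Hypothesis -> evidence -> self-negation -> refine, until convergence."""
    hypothesis = f"H0 for {question!r}"                   # 1. form an initial hypothesis
    for round_no in range(max_rounds):
        evidence = search(question, round_no)             # 2. call an external search tool
        contradictions = [e for e in evidence if e.get("contradicts")]  # 3. self-negation
        if not contradictions:
            return hypothesis, round_no + 1               # evidence converged
        hypothesis = f"H{round_no + 1} for {question!r}"  # 4. refine and try again
    return hypothesis, max_rounds
```

The key property is that the model's answer is shaped by retrieved evidence each round rather than fixed by what was memorized in its parameters.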

Benchmark Results

On the BrowseComp benchmark, which emphasizes search‑agent capabilities, MiroThinker‑v1.5‑30B achieved a score of 56.1, only slightly below Google’s closed‑source Gemini DeepSearch (59.2). The result demonstrates that a 30‑billion‑parameter open‑source model can approach top‑tier performance.

BrowseComp performance comparison

Real‑World Scenario: AI‑Glasses Research

To test deep‑thinking ability, the author issued the prompt “Give me a deep research report on AI glasses.” The model responded with a multi‑step process:

Task decomposition: broke the request into sub‑tasks such as market size, core players (Meta/Ray‑Ban, Apple, Chinese vendors), technical bottlenecks, and future trends.

Multi‑source verification: repeatedly invoked tool calls to fetch the latest conference information and in‑depth industry analyses.

Noise filtering: eliminated marketing fluff by cross‑checking data from different sources, producing a rigorously structured report with citations.

The final report included detailed comparisons of domestic manufacturers’ waveguide technologies and provided source links for every claim.
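The cross‑checking step can be illustrated with a toy quorum filter: a claim survives only if multiple independent sources agree on it. The source names and claims below are invented for the example, not taken from the actual report.

```python
# Toy illustration of multi-source cross-checking: keep a claim only when
# at least `quorum` distinct sources state it. All data here is made up.
from collections import Counter

def cross_check(claims_by_source, quorum=2):
    """Return the set of claims confirmed by at least `quorum` sources."""
    counts = Counter()
    for source, claims in claims_by_source.items():
        for claim in set(claims):      # count each claim at most once per source
            counts[claim] += 1
    return {claim for claim, n in counts.items() if n >= quorum}

reports = {
    "vendor blog":    ["shipments up", "best display ever"],  # marketing fluff
    "analyst note":   ["shipments up", "waveguide yield low"],
    "industry paper": ["waveguide yield low"],
}
verified = cross_check(reports)
# "best display ever" appears in only one source, so it is dropped
```

A single‑source claim like the vendor's "best display ever" never reaches the final report, which is exactly the behavior the author observed.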

AI glasses research report

Technical Highlights from the GitHub Repository

The open‑source code reveals three key innovations:

400 tool calls per task: far exceeds typical agent limits, enabling very long‑running complex tasks.

Temporally sensitive training sandbox: the model is constrained to see only past data during training, preventing future‑information leakage.

256K context window: allows massive document ingestion and analysis.

Developers can fork the repository and run the model locally with SGLang or vLLM, or extend it via the MCP protocol.
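A per‑task tool‑call budget like the 400‑call limit above can be enforced with a simple wrapper around tool invocations. This is a hypothetical sketch of the pattern, not code from the MiroThinker repository.

```python
# Sketch of a hard per-task cap on tool calls. The agent/tool interfaces
# are hypothetical; only the budgeting pattern is the point.

class ToolBudgetExceeded(RuntimeError):
    pass

class ToolBudget:
    """Counts tool invocations and raises once the per-task cap is reached."""
    def __init__(self, max_calls=400):
        self.max_calls = max_calls
        self.used = 0

    def call(self, tool, *args, **kwargs):
        if self.used >= self.max_calls:
            raise ToolBudgetExceeded(f"task exceeded {self.max_calls} tool calls")
        self.used += 1
        return tool(*args, **kwargs)

# Tiny demo with a cap of 3 and a trivial echo "tool".
budget = ToolBudget(max_calls=3)
echo = lambda x: x
results = [budget.call(echo, i) for i in range(3)]  # three calls: within budget
# a fourth budget.call(...) would raise ToolBudgetExceeded
```

Raising a dedicated exception lets the agent loop end a runaway task gracefully and still return whatever evidence it has gathered so far.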

GitHub architecture diagram

Conclusion

MiroThinker 1.5 shows that intelligent search performance does not require trillion‑parameter scaling; Interactive Scaling lets a modest 30B model deliver near‑top search and reasoning capabilities. It offers a practical “external brain” for decision‑makers and a rich platform for developers to experiment with open‑source AI.

Tags: benchmark, open-source LLM, tool calls, interactive scaling, MiroThinker, search AI
Written by

ShiZhen AI

Tech blogger with over 10 years of experience at leading tech firms; AI efficiency and delivery expert focused on AI productivity. Covers tech gadgets, AI-driven efficiency, and leisure, and runs an AI leisure community. 🛰 szzdzhp001
