How AI Agents Overcome Context Window Limits: Gemini vs Manus Deep Research
The article analyzes the context‑window bottleneck of large language models, compares two architectural strategies—strengthening the model (Gemini Deep Research) and parallel agent decomposition (Manus Wide Research)—and details a wind‑power investment case study, technical implementation, and future directions.
Context Window Trap
Traditional AI assistants struggle when asked to analyze many items at once: the fixed context window saturates, quality drops sharply after roughly 8-10 items, and hallucinations follow.
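The saturation effect can be illustrated with a back-of-the-envelope token budget. The window size and overhead below are illustrative assumptions, not published figures for any particular model:

```python
# Illustrative only: assumed numbers, not measured values.
CONTEXT_WINDOW = 128_000   # assumed model window, in tokens
OVERHEAD = 8_000           # assumed tokens for instructions and reasoning scratch space

def tokens_per_item(num_items: int) -> int:
    """Tokens left for each analyzed item once fixed overhead is paid."""
    return (CONTEXT_WINDOW - OVERHEAD) // num_items

for n in (5, 10, 25, 50):
    print(n, "items ->", tokens_per_item(n), "tokens each")
```

With 50 items, each company gets only a few thousand tokens of window, which is why per-item depth collapses long before the model runs out of reasoning ability.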
Two Architectural Routes
Route 1: Strengthen the model – exemplified by Google Gemini Deep Research, which scales up the context window and reasoning capability (Gemini 2.5 Pro, and Gemini 3 Pro in 2025) to sustain deep reasoning across many sources.
Route 2: Make the architecture clever – exemplified by Manus Wide Research, which decomposes a large task into hundreds of parallel agents, each with its own context, coordinated by a master agent.
Practical Comparison: Wind‑Power Industry Investment Research
Gemini Deep Research (deep reasoning)
Steps:
(1) Understand the task and plan research directions (seven sub-directions).
(2) Run multi-round searches for authoritative sources: industry reports, company filings, policy documents, and academic papers.
(3) Search while thinking, adjusting direction based on intermediate results.
(4) Synthesize all information into a logically structured report.
The process takes several minutes and can surface insights such as policy impacts on upstream versus downstream players, but it still hits the context limit when handling 50 companies.
Manus Wide Research (parallel execution)
Steps: (1) Task decomposition – the master agent splits “analyze 50 wind‑power companies” into 50 independent subtasks, each specifying fields to collect and evaluation criteria.
(2) Parallel launch – all 50 sub‑agents run simultaneously in isolated sandboxes, each receiving a fresh context.
(3) Independent execution – each agent searches, reads filings, crawls news, and analyzes data, producing equally deep results for every company.
(4) Result aggregation – the master agent gathers structured outputs (tables, reports) and merges them.
This design avoids context‑window collapse because no agent ever exceeds its window.
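The fan-out/fan-in pattern above can be sketched with standard Python concurrency. This is a toy sketch, not Manus's implementation: `analyze_company` stands in for a full sub-agent run, and threads stand in for isolated sandboxes:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical company list; the real task would supply 50 actual tickers.
COMPANIES = [f"WindCo-{i:02d}" for i in range(1, 51)]

def analyze_company(name: str) -> dict:
    # Stub for a sub-agent: in Manus each of these would run in its own
    # sandbox with a fresh context window, searching and reading filings.
    return {"company": name, "score": len(name) % 10}  # placeholder analysis

def master_agent(companies: list) -> list:
    # Fan out: one independent subtask per company, all launched in parallel.
    with ThreadPoolExecutor(max_workers=50) as pool:
        results = list(pool.map(analyze_company, companies))
    # Fan in: aggregate the structured outputs into one merged report.
    return sorted(results, key=lambda r: r["score"], reverse=True)

report = master_agent(COMPANIES)
```

The key property mirrored here is that no single worker ever sees more than one company's material, so per-item depth does not degrade as the item count grows.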
Technical Foundations
The architecture draws on the “CodeAct” paper, using a ReAct loop (Observation → Thought → Action → …) implemented as:
┌─────────────────────────────────────────────┐
│ Observation → Thought → Action → Observation │
└─────────────────────────────────────────────┘
Agents run in sandboxed virtual machines with isolated file systems, networks, and process spaces, so a compromised agent cannot affect the others.
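A minimal version of that Observation → Thought → Action cycle might look like the following. The `think` and `act` functions are placeholders for an LLM call and a tool invocation, respectively:

```python
def think(observation: str) -> str:
    # Placeholder for an LLM call that reasons over the latest observation.
    return f"plan based on: {observation}"

def act(thought: str) -> str:
    # Placeholder for a tool call (search, shell command, browser, ...).
    return f"result of ({thought})"

def react_loop(task: str, max_steps: int = 3) -> list:
    """Run Observation -> Thought -> Action until the step budget is spent."""
    trace = []
    observation = task                       # initial Observation is the task
    for _ in range(max_steps):
        thought = think(observation)         # Thought
        observation = act(thought)           # Action produces a new Observation
        trace.append((thought, observation))
    return trace

steps = react_loop("analyze wind-power policy")
```

A real agent would also include a stopping condition (task judged complete) rather than a fixed step budget.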
Task decomposition follows a directed acyclic graph (DAG), for example:
[Master Agent: Decompose]
↓
┌───────┬───────┬───────┬───────┐
↓ ↓ ↓ ↓ ↓
[Agent1] [Agent2] … [Agent50]
↓ ↓ ↓ ↓ ↓
└───────┴───────┴───────┴───────┘
↓
[Master Agent: Aggregate]
Manus provides 29 tools, grouped into command execution (Shell, Python), file operations (txt/md/pdf/xlsx), network capabilities (search, browser, deployment), and system management (process control, software installation).
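The DAG above can be represented as a simple mapping from each node to its prerequisites, with a scheduler that only runs a node once its dependencies are done. The node names are illustrative, not Manus's internal identifiers:

```python
# Illustrative DAG: decompose -> 50 parallel agents -> aggregate.
dag = {"decompose": []}
for i in range(1, 51):
    dag[f"agent{i}"] = ["decompose"]          # every agent waits on decomposition
dag["aggregate"] = [f"agent{i}" for i in range(1, 51)]  # aggregation waits on all

def topo_order(graph: dict) -> list:
    """Return nodes in an order that respects the dependency edges."""
    done, order = set(), []
    while len(order) < len(graph):
        for node, deps in graph.items():
            if node not in done and all(d in done for d in deps):
                done.add(node)
                order.append(node)
    return order

order = topo_order(dag)
```

Because the graph is acyclic, the 50 agent nodes have no edges between them, which is exactly what licenses running them in parallel.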
Dynamic quality checking is performed after each step:
def quality_check(result):
    # Re-run the step before reporting if its confidence is too low.
    if result.confidence < 0.7:
        trigger_self_correction()
    return generate_validation_report()
If confidence falls below the threshold, the agent self-corrects: it retries commands or changes search keywords.
State is tracked in a todo.md file, enabling pause‑and‑resume and visibility of progress.
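Checklist-based state tracking is straightforward to reproduce. The todo.md layout below is an assumed format for illustration, not Manus's actual schema:

```python
# Assumed todo.md contents; Manus's real format may differ.
TODO = """\
- [x] Decompose task into 50 subtasks
- [x] Launch sub-agents
- [ ] Aggregate results
- [ ] Write final report
"""

def progress(todo_md: str) -> tuple:
    """Count done vs. total checklist items so a paused run knows where to resume."""
    items = [line for line in todo_md.splitlines() if line.startswith("- [")]
    done = sum(1 for line in items if line.startswith("- [x]"))
    return done, len(items)

done, total = progress(TODO)
```

Because the state lives in a plain file rather than in the model's context, it survives restarts and is directly inspectable by the user.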
Broader Insight on “Research” Agents
Effective research agents need five capabilities: planning, tool use, iterative refinement, verification, and synthesis. Gemini emphasizes deep reasoning, while Manus emphasizes breadth.
Own Implementation – Insight Platform
The authors built the Insight platform using the Go library https://github.com/smallnest/langgraphgo, GLM‑4.7 as the backend model, and host it at https://insight.rpcx.io. It demonstrates the techniques described above.
Future Directions
Planned improvements include richer data visualization, better image selection, and expanded toolsets.
These analyses are based on publicly available product information and the authors' own speculation; actual internal implementations may differ.
BirdNest Tech Talk
Author of the rpcx microservice framework, original book author, and chair of Baidu's Go CMC committee.