Building an Autonomous AI Research Team for Paper Writing
The author details how they assembled a multi‑agent AI research team on the Moxt platform, defined specific roles, ran a full literature‑review workflow for Harness Engineering, compared it with a prior OpenClaw‑Discord setup, and highlighted observability and scheduling benefits.
Lu Gong, an AI veteran, explains that scientific research in engineering fields is increasingly code‑driven, prompting the integration of AI programming with AI‑assisted research. After earlier experiments with OpenClaw + Discord, he tests Moxt, an agent‑native workspace where human and AI colleagues share a common work area.
Moxt AI teammates and role definition
In Moxt, AI teammates are created directly under the "AI Teammates" section, each with a name, avatar, and assigned responsibility. For a literature‑review project the author defines five roles: Research Lead, Literature Scout, Paper Analyst, Writer, and Quality‑Check/Final‑Delivery.
"@document" interaction
The platform supports an "@document" command that lets an AI read the referenced file directly in the conversation, mirroring the way AI IDEs such as Cursor provide context. This enables seamless human‑AI and AI‑AI collaboration around a shared document.
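Moxt's internal implementation is not public, but as a rough mental model, an @document resolver can be pictured as inlining the referenced file into the message before the AI sees it. A minimal sketch under that assumption (the function name and mention syntax are hypothetical, not Moxt's real API):

```python
import re
from pathlib import Path

def resolve_document_mentions(message: str, workspace: Path) -> str:
    """Replace each @file mention (e.g. '@notes.md') with the file's
    contents so the AI receives the document inline as context.
    Hypothetical sketch -- Moxt's actual mechanism is not public."""
    def inline(match: re.Match) -> str:
        doc = workspace / match.group(1)
        if not doc.is_file():
            return match.group(0)  # leave unresolvable mentions untouched
        body = doc.read_text(encoding="utf-8")
        return f"\n--- {doc.name} ---\n{body}\n--- end ---\n"
    # Match @ followed by a filename with an extension.
    return re.sub(r"@([\w./-]+\.\w+)", inline, message)
```

This mirrors how AI IDEs such as Cursor splice @-mentioned files into the model's context window rather than passing only the filename.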
Workflow for a Harness Engineering review
The author reuses a previously open‑sourced GitHub skill (medical‑research‑review) and adapts it for a Harness Engineering survey, targeting arXiv and top‑conference OpenReview papers. The workflow proceeds as follows:
The author talks to the Research Lead to assign the review task.
The Research Lead creates a project space ("harness‑engineering‑ai‑agents") and dispatches the Literature Scout, Analyst, Writer, and Quality‑Check agents.
Each AI teammate deposits its artifacts in the shared directory.
Role outputs
The Literature Scout scans roughly 200 sources (≈150 arXiv papers, 40 top‑conference papers, 10 industry blog posts) and produces a literature matrix.
The Paper Analyst spends about an hour generating a 50‑page deep‑analysis report based on the matrix.
The Writer then drafts the manuscript, which the Quality‑Check/Final‑Delivery agent polishes before submission.
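The hand-offs above amount to a simple artifact pipeline through the shared project directory: each role reads its predecessor's output and deposits its own. A toy sketch of that flow (role order from the article; file names and structure are illustrative, not Moxt's real layout):

```python
from pathlib import Path

def run_review_pipeline(shared: Path) -> None:
    """Toy model of the role hand-off: each stage reads the previous
    artifact from the shared directory and writes its own."""
    shared.mkdir(parents=True, exist_ok=True)

    # Literature Scout: deposit a literature matrix.
    matrix = shared / "literature_matrix.md"
    matrix.write_text("| paper | venue | topic |\n", encoding="utf-8")

    # Paper Analyst: deep-analysis report built from the matrix.
    report = shared / "deep_analysis.md"
    report.write_text(f"# Analysis\nBased on: {matrix.name}\n", encoding="utf-8")

    # Writer: manuscript draft built from the report.
    draft = shared / "draft.md"
    draft.write_text(f"# Survey Draft\nSource: {report.name}\n", encoding="utf-8")

    # Quality-Check / Final-Delivery: polished final version.
    final = shared / "final.md"
    final.write_text(draft.read_text(encoding="utf-8") + "<!-- QC passed -->\n",
                     encoding="utf-8")
```

The point of the sketch is the design choice it illustrates: coordination happens through shared artifacts rather than a conversation log, which is what makes each stage's output inspectable and replayable.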
Observations
Two notable details emerged: (1) full session observability—each AI action, tool usage, and reasoning step is logged and replayable, providing essential traceability for academic claims; (2) cron‑based scheduling—e.g., the Literature Scout automatically scans arXiv each morning at 8 am and @‑mentions the team with new candidates.
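The 8 am daily scan corresponds to the standard cron expression `0 8 * * *`. How Moxt schedules jobs internally isn't documented; a minimal sketch of computing the next fire time for such a daily job, using only plain datetime arithmetic:

```python
from datetime import datetime, timedelta

def next_daily_run(now: datetime, hour: int = 8, minute: int = 0) -> datetime:
    """Next fire time of a daily cron job like '0 8 * * *':
    today at hour:minute if that is still ahead, otherwise tomorrow."""
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(days=1)
    return candidate
```

A scheduler loop would sleep until `next_daily_run(...)`, trigger the Literature Scout's arXiv scan, and repeat.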
Comparison with OpenClaw + Discord
OpenClaw + Discord runs multiple AI bots in a shared chat channel, which suits lightweight monitoring or daily reports but demands more deployment effort. Moxt, by contrast, offers a low‑friction web UI where AI agents sit in the same workspace as the human, making complex deliverable‑focused tasks such as literature reviews smoother.
Conclusion
The author emphasizes that AI teammates amplify human capabilities without replacing the researcher; the human remains the decision‑maker, steering the direction while AI handles repetitive or data‑intensive subtasks. The workflow can be adapted to other writing‑heavy tasks such as media content creation by redefining roles.