How MiroFish Turns Documents into Parallel AI Worlds for Future Simulation
MiroFish is an open‑source multi‑agent platform that automatically builds high‑fidelity digital societies from any text. It supports realistic opinion, policy, literary, and crisis simulations through a five‑step temporal GraphRAG workflow, deploys from source or via Docker, and produces detailed analysis reports.
What Is MiroFish?
MiroFish is a general‑purpose crowd‑intelligence prediction engine. Upload a TXT, MD, or PDF document (news, a policy draft, or a literary excerpt) and the system automatically extracts entities, relationships, and events, then builds a parallel digital world populated by AI agents that mimic real‑world behavior.
Core Capabilities
Fully automated high‑fidelity digital world construction – No manual role configuration is required; the system parses the input, generates agents with personalities, memories, and behavior logic, and builds a realistic social environment.
Human‑like AI agents – Each agent has an independent persona, background story, and long‑term memory, enabling authentic posting, commenting, debating, and information propagation that closely mirrors real crowd dynamics.
God‑view intervention and comprehensive analysis reports – Users can inject new events or policy changes during simulation, observe outcomes in real time, and receive a detailed report summarizing event trajectories, sentiment shifts, and key turning points.
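The god‑view intervention loop can be sketched as below. Note that `Agent`, `observe`, and `run_simulation` are illustrative assumptions about how mid‑run event injection might look, not MiroFish's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # Illustrative stand-in for a MiroFish agent: a persona name plus running memory.
    name: str
    memory: list = field(default_factory=list)

    def observe(self, event: str) -> None:
        self.memory.append(event)

def run_simulation(agents, steps, interventions=None):
    """Toy god-view loop: at chosen steps, inject an event into every agent's memory.

    `interventions` maps a step index to an event string -- an assumed interface
    for mid-run injection, not the platform's real one.
    """
    interventions = interventions or {}
    log = []
    for step in range(steps):
        if step in interventions:
            event = interventions[step]
            for agent in agents:
                agent.observe(event)
            log.append((step, event))
    return log

agents = [Agent("journalist"), Agent("netizen")]
log = run_simulation(agents, steps=5, interventions={2: "new policy announced"})
print(log)               # [(2, 'new policy announced')]
print(agents[0].memory)  # ['new policy announced']
```

The point of the sketch is the separation of concerns: the scheduler decides *when* an event lands, and every agent independently decides how it reacts based on its own memory.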
Real‑World Use Cases
Opinion evolution simulation – Upload a hot‑topic article and the system generates agents representing journalists, netizens, and influencers, reproducing the full lifecycle of public opinion and identifying critical platforms and sentiment inflection points.
Policy impact forecasting – Provide a policy draft; the engine simulates public acceptance, potential controversies, and the effect of different communication strategies, helping governments and enterprises reduce implementation risk.
Literary continuation and character evolution – Input classic literature excerpts (e.g., the first 80 chapters of *Dream of the Red Chamber*) and the system creates intelligent agents for core characters, generating plausible plot continuations for writers and IP designers.
Corporate crisis response – Model various crisis‑management actions (apology, recall, remediation) and see which strategy most quickly lowers negative sentiment and protects brand reputation.
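The crisis‑response comparison can be illustrated with a deliberately simple toy model (geometric decay of negative sentiment; the decay rates per strategy are made‑up numbers, not MiroFish output):

```python
def simulate_sentiment(initial_negative, decay_rate, steps):
    """Toy model: negative sentiment decays geometrically each step."""
    trajectory = [initial_negative]
    for _ in range(steps):
        trajectory.append(trajectory[-1] * (1 - decay_rate))
    return trajectory

# Hypothetical per-step decay rates for three crisis-management strategies.
strategies = {"apology": 0.10, "recall": 0.25, "remediation": 0.18}
for name, rate in strategies.items():
    final = simulate_sentiment(0.8, rate, steps=10)[-1]
    print(f"{name}: residual negative sentiment {final:.3f}")
```

In a real MiroFish run the trajectories emerge from agent interactions rather than a fixed decay rate, but the comparison question is the same: which intervention drives residual negative sentiment down fastest.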
Technical Architecture
The prediction capability relies on an enhanced Temporal GraphRAG + OASIS engine, executed in five tightly coupled steps:
Seed extraction – Structured parsing of unstructured data, coreference resolution, alias unification, and extraction of entities, relations, and timelines.
Graph construction (core technology) – A graph of “entity‑relationship‑time” captures complex temporal memory, solving traditional RAG’s context fragmentation issues.
Environment setup – Automatic generation of agent personas, behavior rules, and initial discussion topics (e.g., tax‑reduction debates for policy simulations).
Parallel simulation – Powered by the OASIS framework from the CAMEL‑AI team, with Zep Cloud providing persistent memory, achieving an average of 5 seconds per simulation step.
Report generation & deep interaction – A dedicated Report Agent produces insight‑rich summaries, while an Interview Sub‑Agent enables virtual interviews with agents for qualitative feedback.
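The "entity‑relationship‑time" graph at the heart of step 2 can be illustrated with a minimal temporal triple store (class and method names are illustrative, not MiroFish internals):

```python
from collections import defaultdict

class TemporalGraph:
    """Minimal 'entity-relationship-time' store: edges are (subject, relation, object, t).

    Keeping a timestamp on every edge lets a query return an entity's full
    history in order -- the property that chunk-based RAG retrieval loses.
    """
    def __init__(self):
        self.edges = []
        self.by_entity = defaultdict(list)

    def add(self, subj, rel, obj, t):
        edge = (subj, rel, obj, t)
        self.edges.append(edge)
        self.by_entity[subj].append(edge)
        self.by_entity[obj].append(edge)

    def history(self, entity):
        # All facts touching `entity`, sorted by time.
        return sorted(self.by_entity[entity], key=lambda e: e[3])

g = TemporalGraph()
g.add("Ministry", "publishes", "draft policy", t=1)
g.add("Journalist A", "criticizes", "draft policy", t=2)
g.add("Ministry", "revises", "draft policy", t=3)
print(g.history("draft policy"))
```

Querying `history("draft policy")` returns the publish, criticism, and revision edges in temporal order, which is the kind of coherent timeline an agent needs to reason about how a situation evolved.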
Deployment Options
Two simple deployment methods are provided:
Source deployment (recommended)
Copy .env.example to .env and fill in your LLM and Zep Cloud API keys.
Run npm run setup:all to install dependencies.
Start the service with npm run dev.
Access the UI at http://localhost:3000.
Docker deployment (beginner‑friendly)
Configure the .env file as above.
Pull the image and launch with docker compose up -d.
Open http://localhost:3000 and follow the same workflow.
Required environment: Node.js ≥ 18, Python 3.11‑3.12, uv package manager, and optionally Docker.
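A small helper for checking the version requirements above; the sample strings mirror what `node --version` and `python3 --version` typically print, and the code only parses and compares (it does not invoke the tools):

```python
import re

def parse_version(s):
    """Extract (major, minor) from a version string like 'v18.17.1' or 'Python 3.11.4'."""
    m = re.search(r"(\d+)\.(\d+)", s)
    if not m:
        raise ValueError(f"no version found in {s!r}")
    return int(m.group(1)), int(m.group(2))

def node_ok(version_output):    # requirement: Node.js >= 18
    major, _ = parse_version(version_output)
    return major >= 18

def python_ok(version_output):  # requirement: Python 3.11-3.12 (a range, not a minimum)
    return parse_version(version_output) in {(3, 11), (3, 12)}

print(node_ok("v20.11.0"))          # True
print(python_ok("Python 3.10.8"))   # False: outside the 3.11-3.12 range
```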
Best‑Practice Tips & Pitfalls
Simulation quality depends heavily on the completeness of the seed material; include detailed characters, events, and relationships.
When injecting variables, add only one change at a time to isolate its impact.
For large‑scale runs (e.g., millions of agents), consider self‑hosting LLMs to reduce API costs.
Results are advisory; combine them with domain data and user research before making real decisions.
All usage must comply with the AGPL‑3.0 license; derivative works, including network‑hosted services, must make their source code available.
Target Audience
Policy makers and public‑affairs professionals seeking scenario planning.
Corporate strategists and PR teams needing crisis simulation.
Content creators and writers looking for AI‑assisted plot generation.
Developers and researchers interested in multi‑agent systems, GraphRAG, and social simulation.
Avoiding Common Mistakes
Ensure seed documents contain rich entity and relationship information for accurate outcomes.
Introduce only one variable per simulation step to maintain clear causal analysis.
Leverage Zep Cloud’s free tier for small experiments; scale privately for massive simulations.
Conclusion
MiroFish lowers the barrier to multi‑agent technology by turning any document into a realistic, interactive digital world. It offers actionable insights across governance, business, and creative domains, and it shows how open‑source AI can move from research labs into practical applications.
AI Architecture Path
Focused on AI open-source practice, sharing AI news, tools, technologies, learning resources, and GitHub projects.