How ACoder Achieved Up to 24× Faster Multi‑Platform Development with AI
The ACoder platform combines multi‑model AI, a panoramic code‑understanding engine, and a layered knowledge‑management system to automate the entire software‑development lifecycle, delivering 5‑20× overall efficiency gains, up to 24× speed‑up for cross‑platform code migration, and substantially higher code‑retrieval recall and accuracy.
Background and Core Challenges
AI‑assisted coding in large engineering teams faces three fundamental problems: (1) massive domain‑specific knowledge and huge codebases exceed LLM token limits, making it hard to convey business logic and keep knowledge current; (2) coding is only a small slice of the R&D workflow, while requirements clarification, architecture design, integration, and deployment dominate the effort; (3) fragmented AI‑generated code causes architectural decay, hindering long‑term maintenance.
ACoder Platform Overview
ACoder is a full‑lifecycle R&D platform that integrates multiple large models (Qwen, Claude, Gemini, GPT‑5) for code generation, test‑case creation, crash analysis, bug fixing, and end‑to‑end task automation. It collapses the workflow from "find people → find tools → find platforms" into a single "find AI" step.
Key Achievements
Multi‑platform code translation (iOS, Android, HarmonyOS) achieved an average 6.8× speed‑up and a peak 24× improvement, generating thousands of lines of code with zero manual edits or bugs.
AI‑driven workflow automation (automatic crash analysis with one‑second resolution, PRD review, effort estimation, design, code generation, testing, and bug fixing) delivered a 5‑20× overall efficiency gain.
On a 2.67 M‑line, 25 k‑file codebase, code‑recall rose from 65 % to 92 % and accuracy from 58 % to 87 %.
Technical Foundations
1. DeepDiscovery – panoramic code understanding
DeepDiscovery mimics expert reasoning in two stages:
Location: identify high‑level anchors (project domain, architecture, entry points) using heuristic rules (e.g., file‑naming patterns, directory structures, business terminology) and generate a lightweight project index that serves as a "thumbnail" for the LLM (see the sketch after this list).
Inference: from those anchors, perform fine‑grained exploration of dependencies, implementation details, and implicit relationships (file references, message buses, data flows). The engine injects only metadata summaries into the LLM context, achieving high information density while staying within token limits.
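To make the Location stage concrete, here is a minimal sketch, assuming a Python codebase; the heuristics and all names (ENTRY_POINT_HINTS, ProjectIndex, to_thumbnail) are illustrative inventions, not ACoder's actual API. It scans a repository for anchor files and directory-structure signals, then serializes a compact index to inject into the LLM context.

```python
# Hypothetical sketch of the Location stage: build a lightweight project
# index ("thumbnail") from heuristic anchors, small enough for an LLM prompt.
from dataclasses import dataclass, field
from pathlib import Path

ENTRY_POINT_HINTS = ("main", "app", "index", "application")        # assumed
ARCHITECTURE_DIRS = ("controllers", "services", "models", "views")  # assumed

@dataclass
class ProjectIndex:
    entry_points: list[str] = field(default_factory=list)
    layers: dict[str, int] = field(default_factory=dict)  # layer -> file count

def build_index(root: Path) -> ProjectIndex:
    """Apply file-naming and directory heuristics to locate anchors."""
    index = ProjectIndex()
    for path in root.rglob("*.py"):
        rel = path.relative_to(root)
        if rel.stem.lower() in ENTRY_POINT_HINTS:
            index.entry_points.append(str(rel))
        for part in rel.parts[:-1]:
            if part.lower() in ARCHITECTURE_DIRS:
                index.layers[part] = index.layers.get(part, 0) + 1
    return index

def to_thumbnail(index: ProjectIndex, max_chars: int = 2000) -> str:
    """Serialize the index as a compact summary injected into the prompt."""
    lines = ["# Project thumbnail"]
    lines += [f"entry point: {p}" for p in index.entry_points[:10]]
    lines += [f"layer {name}: {n} files" for name, n in sorted(index.layers.items())]
    return "\n".join(lines)[:max_chars]
```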
The process produces a project‑wide graph (explicit, implicit, and physical relationships) that can be traversed with tools such as FindAllReferences, TraverseGraph, SearchEntity, and RetrieveEntity.
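The traversal tools can be pictured as a thin interface over that graph. The sketch below keeps the article's tool names but invents the signatures, and uses an in‑memory networkx graph purely for illustration; how ACoder actually stores the graph is not stated.

```python
# Illustrative interface over the project-wide relationship graph. Tool names
# come from the article; signatures and storage are assumptions.
import networkx as nx

graph = nx.MultiDiGraph()  # nodes: code entities; edges carry a "kind" attr
graph.add_edge("OrderService", "PaymentClient", kind="explicit")      # import
graph.add_edge("OrderService", "order.created", kind="implicit")      # message bus
graph.add_edge("OrderService", "services/order.py", kind="physical")  # file location

def SearchEntity(name_fragment: str) -> list[str]:
    """Fuzzy-match entities by name."""
    return [n for n in graph.nodes if name_fragment.lower() in n.lower()]

def RetrieveEntity(entity: str) -> dict:
    """Return the stored metadata summary for one entity."""
    return dict(graph.nodes[entity]) if entity in graph else {}

def FindAllReferences(entity: str) -> list[str]:
    """Entities that point at the given one (callers, subscribers, ...)."""
    return [src for src, _dst in graph.in_edges(entity)]

def TraverseGraph(entity: str, kind: str, depth: int = 2) -> set[str]:
    """Walk outgoing edges of one relationship kind up to a fixed depth."""
    frontier, seen = {entity}, set()
    for _ in range(depth):
        frontier = {dst for src in frontier
                    for _s, dst, data in graph.out_edges(src, data=True)
                    if data.get("kind") == kind}
        seen |= frontier
    return seen
```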
2. AI‑Friendly Knowledge System
A closed‑loop chain (Production → Storage → Retrieval → Freshness) spans three knowledge carriers plus a retrieval layer:
Unstructured DingTalk knowledge base (requirements, design docs, manuals).
Structured knowledge base (API docs, terminology, test cases) stored with version control.
File‑based knowledge base (wiki, specs, coding standards) for content unsuitable for chunking.
Multi‑modal retrieval (vector similarity, multi‑level index, keyword extraction).
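As a rough illustration of how those retrieval modes might be fused, the sketch below blends vector similarity with keyword overlap. The scoring functions and the alpha weight are my assumptions, not ACoder's documented formula.

```python
# Minimal sketch of multi-modal retrieval: blend embedding similarity with a
# keyword-overlap score. Weights and keyword extraction are assumptions.
import numpy as np

def vector_score(query_vec: np.ndarray, doc_vec: np.ndarray) -> float:
    """Cosine similarity between query and document embeddings."""
    return float(query_vec @ doc_vec /
                 (np.linalg.norm(query_vec) * np.linalg.norm(doc_vec)))

def keyword_score(query: str, doc: str) -> float:
    """Fraction of extracted query keywords that appear in the document."""
    keywords = {w for w in query.lower().split() if len(w) > 3}
    return sum(w in doc.lower() for w in keywords) / max(len(keywords), 1)

def blended_score(query, query_vec, doc, doc_vec, alpha=0.7):
    # alpha trades off semantic similarity against exact keyword hits
    return alpha * vector_score(query_vec, doc_vec) \
         + (1 - alpha) * keyword_score(query, doc)
```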
Knowledge freshness is maintained by Git hooks that trigger automatic updates. Knowledge is prioritized (P0‑project, P1‑personal, P2‑business domain, P3‑shared) and versioned to match project releases.
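One plausible shape for that freshness loop is a post‑commit hook that re‑indexes whichever knowledge files a commit touched. The script below is a sketch; the reindex function is a stand‑in for whatever update pipeline ACoder actually runs, and the file suffixes are assumed.

```python
#!/usr/bin/env python3
# Sketch of a .git/hooks/post-commit script that keeps the knowledge base
# fresh by re-indexing docs changed in the latest commit.
import subprocess

KNOWLEDGE_SUFFIXES = (".md", ".rst", ".yaml")  # assumed doc formats

def changed_files() -> list[str]:
    """List files touched by the commit that just landed (HEAD)."""
    out = subprocess.run(
        ["git", "diff-tree", "--no-commit-id", "--name-only", "-r", "HEAD"],
        capture_output=True, text=True, check=True)
    return out.stdout.splitlines()

def reindex(path: str) -> None:
    print(f"re-indexing knowledge file: {path}")  # placeholder for real pipeline

if __name__ == "__main__":
    for path in changed_files():
        if path.endswith(KNOWLEDGE_SUFFIXES):
            reindex(path)
```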
3. Multi‑Agent Architecture – SubAgent‑as‑Tool
Each SubAgent runs with an independent context window and a dedicated toolset. The Main Agent invokes SubAgents as regular tools, eliminating context contamination, simplifying scaling, and allowing seamless addition of new capabilities.
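The pattern can be sketched as follows; the class shapes are illustrative assumptions, but they show the key property: the sub‑agent's context lives inside the sub‑agent, and the main agent only ever sees a callable, so adding a capability is one registry entry.

```python
# Sketch of SubAgent-as-Tool: each sub-agent keeps a private context window
# and toolset; the main agent calls it like any other tool, so no context
# leaks between agents. All names are illustrative, not ACoder's classes.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SubAgent:
    name: str
    tools: dict[str, Callable[..., str]]
    context: list[str] = field(default_factory=list)  # private, never shared

    def run(self, task: str) -> str:
        self.context.append(task)  # only this agent ever sees its own history
        # ... plan, call self.tools, iterate ...
        return f"[{self.name}] result for: {task}"

class MainAgent:
    def __init__(self) -> None:
        self.tool_registry: dict[str, Callable[[str], str]] = {}

    def register(self, agent: SubAgent) -> None:
        # The sub-agent is exposed as an ordinary tool.
        self.tool_registry[agent.name] = agent.run

    def call(self, tool: str, task: str) -> str:
        return self.tool_registry[tool](task)

main = MainAgent()
main.register(SubAgent("crash_analyzer", tools={}))
print(main.call("crash_analyzer", "triage the latest stack trace"))
```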
4. Generation → Selection Workflow
Multiple models collaborate:
Claude – code consistency.
GPT‑5 – creative fixes.
Gemini‑2.5‑Pro – mathematical programming.
Qwen3‑Coder‑Plus – cross‑file logical coherence.
During the Generation stage, models iteratively produce patches, self‑test, and refine. The Selection stage uses an “LLM‑as‑a‑Judge” panel that scores candidates on correctness, safety, minimal change, and regression risk, dynamically adjusting reviewer weights based on historical performance. This pipeline yields roughly a 10 % improvement over the best single model.
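A minimal sketch of the Selection stage is below. The four criteria come from the pipeline described above; the scoring arithmetic and the weight‑update rule are my assumptions, not the published mechanism.

```python
# Sketch of the LLM-as-a-Judge panel: each reviewer scores every candidate
# patch on four criteria; reviewer weights drift toward judges whose verdicts
# historically matched real outcomes. Update rule is an assumption.
from dataclasses import dataclass
from typing import Callable

CRITERIA = ("correctness", "safety", "minimal_change", "regression_risk")

@dataclass
class Reviewer:
    name: str                            # e.g. one judging model in the panel
    score: Callable[[str, str], float]   # (patch, criterion) -> 0..1

def panel_select(candidates: list[str],
                 reviewers: list[Reviewer],
                 weights: dict[str, float]) -> str:
    """Return the candidate patch with the highest weighted panel score."""
    def total(patch: str) -> float:
        return sum(weights[r.name] * sum(r.score(patch, c) for c in CRITERIA)
                   for r in reviewers)
    return max(candidates, key=total)

def update_weight(old_weight: float, reviewer_was_right: bool,
                  lr: float = 0.1) -> float:
    # Nudge a reviewer's weight up when its verdict matched the eventual
    # outcome (patch shipped without regressions), down otherwise.
    return old_weight * (1 + lr if reviewer_was_right else 1 - lr)
```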
Evaluation on Large‑Scale Codebase
On a 2.67 M‑line, 25 k‑file repository, DeepDiscovery‑enhanced retrieval raised related‑code recall from 65 % to 92 % and accuracy from 58 % to 87 %, effectively elevating LLM understanding from shallow semantic search to expert‑level exploration.
Conclusion
ACoder demonstrates that a well‑engineered AI‑centric platform can overcome token limits, knowledge decay, and fragmented code generation, delivering multi‑platform efficiency gains up to 24× while preserving architectural integrity. Human engineers remain responsible for final verification and quality assurance.
Example of Knowledge Retrieval Code
README_ACODER.md // project preview, file‑path index, model usage instructions