How AI + Rule Engines Revolutionize Frontend Code Generation for Complex C‑End Apps
This article examines the challenges of pixel‑perfect UI reconstruction and semantic code generation for consumer‑facing front‑end projects, and presents a hybrid rule‑algorithm and large‑language‑model architecture that streamlines design‑to‑code (D2C) and requirement‑to‑code (P2C) workflows, integrates IDE plugins, and dramatically improves developer productivity.
Background
In front‑end development, B‑end (business‑facing) projects typically rely on mature UI component libraries, so code can be generated from mockups with simple prompts. C‑end (consumer‑facing) projects, by contrast, demand 100% pixel‑level fidelity, highly customized components, and complex business logic, making AI‑assisted coding a "deep‑water" problem.
Key C‑End Pain Points
Pixel‑level restoration and semantic, component‑ready code pull in opposite directions and are hard to achieve together.
Pure AI models struggle with precise style replication, while rule‑based algorithms produce redundant, non‑semantic markup.
Logic code generation (P2C) is blocked by fragmented PRD documents, permission constraints, and lack of stable inputs.
Code generation platforms are separated from IDEs, breaking developer workflow continuity.
Solution Overview
We built a hybrid "rule + AI" pipeline that addresses three core stages: D2C (design‑to‑code) for UI fidelity, P2C (requirement‑to‑code) for logical implementation, and an IDE plugin for seamless integration.
D2C: Rule + AI Fusion Architecture
The rule engine first cleans and simplifies raw Figma JSON, removing ~60 redundant fields (e.g., blendMode, absoluteRenderBounds) and flattening deep nesting from six to three levels, reducing field count by 75%.
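The cleaning pass can be sketched roughly as follows; this is a minimal illustration, and `REDUNDANT_FIELDS` is an assumed subset of the ~60 fields actually removed (flattening of deep wrapper nodes works analogously and is omitted here):

```typescript
// Illustrative subset of the fields the rule engine strips from raw Figma JSON
const REDUNDANT_FIELDS = new Set(["blendMode", "absoluteRenderBounds", "exportSettings"]);

type FigmaNode = { [key: string]: unknown; children?: FigmaNode[] };

// Recursively remove fields the downstream AI model never needs
function cleanNode(node: FigmaNode): FigmaNode {
  const cleaned: FigmaNode = {};
  for (const [key, value] of Object.entries(node)) {
    if (REDUNDANT_FIELDS.has(key)) continue; // drop redundant metadata
    cleaned[key] =
      key === "children" && Array.isArray(value)
        ? (value as FigmaNode[]).map(cleanNode)
        : value;
  }
  return cleaned;
}
```

The same traversal is a natural place to collapse purely structural wrapper nodes, which is how the six‑to‑three level flattening would be applied.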
For example, a PRD parsed into structured JSON (the input format consumed by the P2C pipeline described below):

```json
{
  "title": "Spring Festival Travel Product Requirement Document",
  "background": "1. Requirement background...",
  "detail": [
    [
      "Scenario\n**Enter activity page – not logged in**",
      "Demo\n",
      "Description\nThe app popup needs an updated background image"
    ]
  ]
}
```

Returning to the Figma data, key conversion steps include:
Color values: {r:0.22, g:0.48, b:1, a:1} → #3878FF
Style attributes: rectangleCornerRadii:[8,8,8,8] → border-radius: 8px; shadows → box-shadow
Layout consolidation: padding and margin merged into a unified layout object
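Two of these conversions can be sketched directly; this assumes Figma's normalized 0–1 color channels, and alpha handling (rgba output) is omitted for brevity:

```typescript
// Figma color channels are floats in [0, 1]
interface FigmaColor { r: number; g: number; b: number; a: number }

// Convert a Figma color object to a CSS hex string
function figmaColorToHex({ r, g, b }: FigmaColor): string {
  const toHex = (channel: number) =>
    Math.round(channel * 255).toString(16).padStart(2, "0");
  return `#${toHex(r)}${toHex(g)}${toHex(b)}`;
}

// Collapse rectangleCornerRadii into the border-radius shorthand
function cornerRadiiToCss(radii: [number, number, number, number]): string {
  const uniform = radii.every((r) => r === radii[0]);
  const value = uniform ? `${radii[0]}px` : radii.map((r) => `${r}px`).join(" ");
  return `border-radius: ${value}`;
}
```

For instance, `cornerRadiiToCss([8, 8, 8, 8])` yields `border-radius: 8px`, while mixed radii fall back to the four-value shorthand.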
After data cleaning, the AI model receives a concise, standardized input and generates semantic, component‑oriented code. A templated prompt system enforces:
95%+ pixel‑level compliance
Mandatory semantic tags (e.g., <button>, <nav>) and prohibition of generic <div> containers
Technology‑stack‑specific templates for React, React Native, Taro, Shark, etc.
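A templated prompt assembly along these lines might look as follows; the template text and stack names here are illustrative, not the production prompts:

```typescript
// Hypothetical stack-specific prompt headers
const STACK_TEMPLATES: Record<string, string> = {
  react: "Generate a React function component from the design data below.",
  taro: "Generate a Taro page component from the design data below.",
};

// Assemble a prompt that enforces fidelity and semantic-tag rules
function buildPrompt(stack: string, designJson: string): string {
  const header = STACK_TEMPLATES[stack] ?? STACK_TEMPLATES["react"];
  return [
    header,
    "Constraints:",
    "- Reproduce the design with at least 95% pixel fidelity.",
    "- Use semantic tags (<button>, <nav>); never emit generic <div> containers.",
    "Design data:",
    designJson,
  ].join("\n");
}
```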
P2C: Logic Generation Pipeline
We first transform PRD documents into structured data:
An authorization workflow extracts Feishu document links, parses tables into key‑value pairs, and stores them in a database.
Tables are serialized as objects, preserving hierarchy for downstream agents.
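The table-serialization step can be sketched as below; the row shape (first cell is the field name, remaining cells are values) is an assumption for illustration:

```typescript
// A parsed PRD table row: ["Scenario", "Enter activity page", ...]
type TableRow = string[];

// Serialize table rows into a key-value record, preserving cell order within each row
function rowsToRecord(rows: TableRow[]): Record<string, string> {
  const record: Record<string, string> = {};
  for (const [key, ...values] of rows) {
    record[key.trim()] = values.join(" ").trim();
  }
  return record;
}
```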
To handle long or complex PRDs, a multi‑agent concurrency framework splits the document into semantic chunks, assigns each chunk to a dedicated sub‑agent, and aggregates the generated code.
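The fan-out/aggregate step reduces to a small concurrency pattern; `Agent` here is a placeholder for a real LLM-backed sub-agent call:

```typescript
// A sub-agent turns one semantic chunk of the PRD into code
type Agent = (chunk: string) => Promise<string>;

// Run sub-agents concurrently; Promise.all preserves the original chunk order
async function generateFromChunks(chunks: string[], agent: Agent): Promise<string> {
  const results = await Promise.all(chunks.map((chunk) => agent(chunk)));
  return results.join("\n\n");
}
```

Order preservation matters: the aggregated code must follow the document's structure even though the chunks complete out of order.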
Knowledge‑base enhancement replaces vector‑search RAG with a direct SDK knowledge store (MCP + Feishu KB), ensuring API calls match internal definitions and eliminating hallucinations.
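The anti-hallucination property comes from exact lookup rather than similarity search; in a sketch (API names and signatures made up for illustration), unknown names fail fast instead of letting the model invent a definition:

```typescript
// Direct knowledge store keyed by exact internal API name (entries are hypothetical)
const apiKnowledge = new Map<string, string>([
  ["user.login", "login(options: { redirect?: string }): Promise<Session>"],
  ["user.profile", "getProfile(uid: string): Promise<Profile>"],
]);

// Exact-match lookup: throw on unknown names rather than guessing a signature
function lookupApi(name: string): string {
  const definition = apiKnowledge.get(name);
  if (definition === undefined) throw new Error(`Unknown internal API: ${name}`);
  return definition;
}
```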
IDE Plugin Integration
The plugin embeds the code‑generation platform inside the IDE, closing the workflow gap. Its layered architecture includes:
Material layer – local storage of parsed design data and PRDs.
Specification layer – automatic selection of stack‑specific coding standards.
Extension layer – MCP capabilities such as image processing and knowledge‑base lookup.
Prompt‑assembly layer – injects material and specs into templated prompts.
Interaction layer – UI for selecting designs, tweaking specs, and sending prompts to the AI.
During execution, the plugin runs a local service that the MCP tool calls, allowing the IDE to share context and retrieve generated code without leaving the development environment.
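A minimal sketch of such a local service is shown below; the route, payload shape, and the idea of serving IDE context over HTTP are assumptions about the mechanism, not the plugin's actual protocol:

```typescript
import * as http from "http";

// Context the IDE plugin exposes to the MCP tool (assumed fields)
interface PluginContext { project: string; stack: string }

// Pure request handler, kept separate from the server for testability
function handleRequest(url: string, ctx: PluginContext): { status: number; body: string } {
  if (url === "/context") return { status: 200, body: JSON.stringify(ctx) };
  return { status: 404, body: "not found" };
}

// An MCP tool would GET http://localhost:<port>/context to read shared IDE state
const server = http.createServer((req, res) => {
  const { status, body } = handleRequest(req.url ?? "", { project: "demo", stack: "react" });
  res.writeHead(status, { "Content-Type": "application/json" });
  res.end(body);
});
void server; // server.listen(port) is invoked by the plugin at startup; omitted to keep the sketch inert
```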
Results and Summary
Code usability jumped dramatically: AI now emits semantic, component‑ready markup (<button> instead of <div>), reducing manual refactoring by over 80%.
Development speed increased as generated code aligns with project architecture and supports multiple tech stacks.
Accurate API usage and parameter structures are guaranteed by the internal knowledge base, making the output directly runnable.
The combined rule + AI approach balances precise visual restoration with high‑quality, maintainable code, paving the way for future “engineering‑mind” agents that can review, iterate, and fix code autonomously.
Qunar Tech Salon
Qunar Tech Salon is a learning and exchange platform for Qunar engineers and industry peers. We share cutting-edge technology trends and topics, providing a free platform for mid-to-senior technical professionals to exchange and learn.