Can AI Turn UI Images into High‑Quality Front‑End Code? A Step‑by‑Step Case Study
This article details how, starting from only UI screenshots and no design files, a team used imgcook, an Aone Agent with MCP service, and prompt engineering to automatically generate mobile front‑end code, iteratively improving usability from 40% to 80% and documenting the full workflow.
Background
The challenge was to develop a mobile front‑end page when only a few UI images were available, with no Sketch files or design specifications. The goal was to see whether AI techniques could automatically produce high‑quality code from these images.
Initial AI‑Driven Code Generation
Using the imgcook plugin together with the Aone Agent that calls an MCP service, the team generated code directly from the images. The first version, based solely on the Agent, achieved roughly 40% usable code.
Optimization with imgcook and Component Library
Integrating imgcook improved design fidelity.
Adding the @alife/cook-unify-mobile component library enabled automatic recognition of Button and Icon components.
Three optimization stages raised the proportion of usable code from 40% to 70%.
Step 2.1 – Input
Images of the target UI were fed into the system. Example image:
Step 2.1.2 – Output Description
The Agent produced a prompt describing the desired file structure:
```
src/pages/index/
├── index.tsx              # Main page component
├── style.module.less      # Main page styles
└── components/
    ├── Header/
    │   ├── index.tsx
    │   └── style.module.less
    └── ServiceCard/
        ├── index.tsx
        ├── style.module.less
        ├── ServiceTypeSelector/
        │   ├── index.tsx
        │   └── style.module.less
        ├── BenefitList/
        │   ├── index.tsx
        │   └── style.module.less
        └── SubmitButton/
            ├── index.tsx
            └── style.module.less
```
Step 2.1.3 – Actual Code
Generated code screenshots (omitted for brevity) demonstrated the layout and component usage.
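Directory trees like the one the Agent proposed can be rendered programmatically from a nested spec. The following is a minimal sketch (the `TreeNode` type and `renderTree` helper are hypothetical, and the sample spec is truncated to the `Header` branch for brevity):

```typescript
// Render a nested directory spec into an ASCII tree,
// matching the layout the Agent proposed for the page.
interface TreeNode {
  name: string;
  children?: TreeNode[];
}

function renderTree(node: TreeNode, prefix = ""): string[] {
  const lines: string[] = [];
  const children = node.children ?? [];
  children.forEach((child, i) => {
    const last = i === children.length - 1;
    lines.push(prefix + (last ? "└── " : "├── ") + child.name);
    // Recurse with a continuation prefix so deeper levels line up.
    lines.push(...renderTree(child, prefix + (last ? "    " : "│   ")));
  });
  return lines;
}

const pageSpec: TreeNode = {
  name: "src/pages/index/",
  children: [
    { name: "index.tsx" },
    { name: "style.module.less" },
    {
      name: "components/",
      children: [
        {
          name: "Header/",
          children: [{ name: "index.tsx" }, { name: "style.module.less" }],
        },
      ],
    },
  ],
};

const tree = [pageSpec.name, ...renderTree(pageSpec)].join("\n");
console.log(tree);
```

Emitting the structure from a spec keeps the prompt's description and the scaffolded directories in sync.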
Further Tuning – Improving Design Restoration
To increase design accuracy, the team used imgcook to parse the UI and then manually refined the prompt. The prompt included:
```
Generate code with imgcook. Module link:
http://tao-d2c.fc.alibaba-inc.com/modules/621/preview
Based on it, generate a React component directory with semantic naming.
Use semantic class names with CSS Modules; each component directory
contains index.tsx and index.modules.css. Place the component under the
current components directory.
```
They also added component library details:
```
- Button (button)
- Icon (icon)
```
along with step-by-step instructions for the model.
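A prompt like this is easiest to keep consistent when it is assembled from its parts (module link, naming rules, component library notes). A hypothetical sketch, with `PromptParts` and `buildPrompt` invented for illustration:

```typescript
// Compose an imgcook code-generation prompt from structured parts.
interface PromptParts {
  moduleLink: string; // imgcook module preview URL
  rules: string[]; // naming / CSS Modules conventions
  components: { name: string; desc: string }[]; // library components to recognize
}

function buildPrompt(p: PromptParts): string {
  return [
    `Generate code with imgcook. Module link: ${p.moduleLink}`,
    ...p.rules,
    "Available components:",
    ...p.components.map((c) => `- ${c.name}: ${c.desc}`),
  ].join("\n");
}

const prompt = buildPrompt({
  moduleLink: "http://tao-d2c.fc.alibaba-inc.com/modules/621/preview",
  rules: [
    "Generate a React component directory with semantic naming.",
    "Use CSS Modules; each component has index.tsx and index.modules.css.",
  ],
  components: [
    { name: "Button", desc: "button" },
    { name: "Icon", desc: "icon" },
  ],
});
console.log(prompt);
```

Structuring the prompt this way makes it trivial to swap in a different module link or component library without rewriting the whole text.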
Large‑Model Output and Component Creation
Using a codebase_search tool, the model retrieved implementations from node_modules and built a new component DiningBenefitsCard with the following structure:
```
src/components/DiningBenefitsCard/
├── index.tsx          # React component
└── index.module.css   # CSS Modules styles
```
The component leveraged the @alife/cook-unify-mobile library:
Button component for the "Go to Authorization" action.
Icon components (PayCircleOutline, ExclamationCircleOutline, ClockCircleOutline) for visual cues.
Semantic class names such as .benefitsCard, .cardTitle, .benefitsGrid, etc.
Implementation used TypeScript, React, and CSS Modules, supporting proper prop passing and callbacks.
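The contract of such a component can be modeled in plain TypeScript. The sketch below is illustrative only: the `Benefit` type, the props interface, and the benefit-to-icon mapping are assumptions, though the icon names come from the article:

```typescript
// Hypothetical model of the DiningBenefitsCard contract. The real
// component renders with @alife/cook-unify-mobile; here we only type
// the props and the icon mapping for illustration.
type BenefitKind = "payment" | "notice" | "schedule";

interface Benefit {
  kind: BenefitKind;
  label: string;
}

interface DiningBenefitsCardProps {
  title: string;
  benefits: Benefit[];
  onAuthorize: () => void; // fired by the "Go to Authorization" Button
}

// Icons named in the article, keyed by benefit kind (mapping assumed).
const ICON_BY_KIND: Record<BenefitKind, string> = {
  payment: "PayCircleOutline",
  notice: "ExclamationCircleOutline",
  schedule: "ClockCircleOutline",
};

function iconFor(b: Benefit): string {
  return ICON_BY_KIND[b.kind];
}

const sample: DiningBenefitsCardProps = {
  title: "Dining benefits",
  benefits: [{ kind: "payment", label: "Pay with points" }],
  onAuthorize: () => {},
};
```

Typing the props up front gives the model (and reviewers) a fixed contract to generate against, which is part of why prop passing and callbacks came out correctly.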
Script (Story) Assembly
To make the process repeatable, the team created a story.md script that assembled all required prompts and snippets. The configuration JSON listed prompts and code snippets, for example:
```json
[{
  "type": "prompt",
  "path": "./naposMarketing/snippet/story.md",
  "desc": ""
}, {
  "type": "prompt",
  "path": "../knowledge-graph/cookUnifyMobile.md",
  "desc": ""
}, {
  "type": "snippet",
  "path": "../code-base/src/services/NaposMarketingController.ts",
  "desc": "src/services/NaposMarketingController.ts API definitions"
}]
```
Script Execution Workflow
Open the metaStack platform and create a new front‑end generation task.
Upload the UI mockup images and a brief business description.
Generate the prompt script, which the AI uses to produce code.
Images illustrating each step are included in the original article.
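The story configuration shown earlier can be given a small TypeScript schema so the script runner can split prompt files from code snippets before feeding them to the model. The `StoryEntry` type and `partition` helper below are hypothetical:

```typescript
// Sketch of the story configuration schema (field names from the
// article) and a loader that separates prompts from code snippets.
interface StoryEntry {
  type: "prompt" | "snippet";
  path: string;
  desc: string;
}

function partition(entries: StoryEntry[]) {
  return {
    prompts: entries.filter((e) => e.type === "prompt").map((e) => e.path),
    snippets: entries.filter((e) => e.type === "snippet").map((e) => e.path),
  };
}

const config: StoryEntry[] = JSON.parse(`[
  {"type": "prompt", "path": "./naposMarketing/snippet/story.md", "desc": ""},
  {"type": "prompt", "path": "../knowledge-graph/cookUnifyMobile.md", "desc": ""},
  {"type": "snippet", "path": "../code-base/src/services/NaposMarketingController.ts",
   "desc": "src/services/NaposMarketingController.ts API definitions"}
]`);

const { prompts, snippets } = partition(config);
```

Validating the config at load time keeps malformed entries from silently corrupting the assembled prompt.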
Results
The script‑driven approach produced five changed files, 379 lines of code, and achieved an 80% code usability rating—higher than the previous 70% achieved with only Agent + imgcook.
Conclusion
Combining imgcook, an Agent with MCP service, and a well‑crafted script dramatically improves AI‑generated front‑end code quality. The workflow demonstrates that, with proper prompt engineering and component library integration, AI can effectively mimic human coding behavior and accelerate mobile UI development.
Alibaba Cloud Developer
Alibaba's official tech channel, featuring all of its technology innovations.