How Frontend Code Is Auto‑Generated: Inside Alibaba’s Design‑to‑Code System
This article details Alibaba’s front‑end intelligent project that automatically transforms Sketch design files into production‑ready HTML/CSS/JS code, covering the design‑to‑code workflow, layer processing, mask handling, text calibration, layer merging, unused‑layer detection, testing, visual‑restoration metrics, and future enhancements.
Overview
As one of the four technical directions of Alibaba’s Front‑End Committee, the Front‑End Intelligent Project proved its value during the 2019 Double‑11 event, automatically generating 79.34% of the code for new modules. This series shares the technologies and insights behind the automatic front‑end code generation.
Design‑to‑Code (D2C) Workflow
In a typical development cycle, designers create visual mockups which front‑end engineers then hand‑code. The D2C project replaces manual analysis by automatically parsing design files (Sketch, Photoshop, XD) via official APIs, extracting structured layout and style information for downstream layout algorithms.
Layer Hierarchy
The pipeline consists of three layers: material identification (recognizing components and controls), image processing (using a Sketch plugin to extract a JSON description conforming to the imgcook specification, with absolute positions and CSS‑compatible attributes), and image re‑processing (applying computer‑vision techniques to clean up or merge layers).
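To make the intermediate format concrete, here is a hypothetical node of the kind the plugin might emit. The field names are illustrative only, not the actual imgcook schema:

```javascript
// Hypothetical example of an exported layer node (illustrative field names).
// Every node carries an absolute rect plus directly usable CSS properties,
// which is what the downstream layout algorithms consume.
const layerNode = {
  type: "Text",
  rect: { x: 24, y: 160, width: 120, height: 22 }, // absolute position in px
  style: {                                          // CSS-compatible attributes
    fontSize: "16px",
    color: "#333333",
    lineHeight: "22px",
  },
  children: [],
};

console.log(layerNode.rect.x, layerNode.style.fontSize);
```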
Technology Selection
The plugin is built with Sketch’s JavaScript API; where the API lacks coverage, CocoaScript calls the underlying Objective‑C interfaces. The UI is rendered with a WebView, and the development scaffolding is provided by the Skpm toolchain.
Plugin Layer Processing
The plugin traverses the Sketch document depth‑first, extracting each layer’s basic information (position, size). Symbol layers are resolved to their master symbols. Layers affected by masks or occlusion are flagged, and each layer type (Shape, Image, Text, etc.) is converted to CSS‑compatible data. Designers can mark groups or components via naming conventions.
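The traversal step can be sketched as a plain depth-first walk. This is a minimal sketch over a mock layer tree; the real plugin uses the Sketch JS API (document and layer objects), which is not reproduced here:

```javascript
// Sketch (hedged): depth-first traversal over a layer tree, collecting
// each layer's basic information. `root` is a mock of the document
// structure, not an actual Sketch API object.
function traverse(layer, out = []) {
  out.push({
    name: layer.name,
    type: layer.type,
    frame: layer.frame, // position and size
  });
  for (const child of layer.layers || []) {
    traverse(child, out); // recurse depth-first
  }
  return out;
}

const root = {
  name: "Page",
  type: "Group",
  frame: { x: 0, y: 0, width: 375, height: 667 },
  layers: [
    { name: "title", type: "Text", frame: { x: 16, y: 20, width: 200, height: 24 } },
    { name: "banner", type: "Image", frame: { x: 0, y: 60, width: 375, height: 160 } },
  ],
};

const flat = traverse(root);
console.log(flat.length); // 3 nodes, in document order
```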
Mask Handling
Masks have no direct HTML/CSS counterpart, so they cannot be exported as CSS properties.
A mask influences both its own layer and any layers above it; multiple layers must be processed together.
Irregular mask shapes require geometric calculations to determine the clipped region.
The system computes mask regions, applies CSS clipping where possible, and screenshots the visible area for non‑CSS‑representable cases. Unused layers outside a mask are discarded, while layers fully inside a mask are left unchanged.
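For the common rectangular case, computing the visible region reduces to a rectangle intersection; irregular mask shapes need real geometry, which this sketch does not attempt:

```javascript
// Sketch (hedged, rectangular masks only): the visible part of a layer
// under a mask is the intersection of the two rectangles. A null result
// means the layer lies fully outside the mask and can be discarded.
function intersect(a, b) {
  const x = Math.max(a.x, b.x);
  const y = Math.max(a.y, b.y);
  const right = Math.min(a.x + a.width, b.x + b.width);
  const bottom = Math.min(a.y + a.height, b.y + b.height);
  if (right <= x || bottom <= y) return null; // no overlap with the mask
  return { x, y, width: right - x, height: bottom - y };
}

const mask = { x: 0, y: 0, width: 100, height: 100 };

const clipped = intersect({ x: 50, y: 50, width: 100, height: 100 }, mask);
console.log(clipped); // partially clipped: { x: 50, y: 50, width: 50, height: 50 }

const outside = intersect({ x: 200, y: 0, width: 40, height: 40 }, mask);
console.log(outside); // null: layer is fully outside, so it is dropped
```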
Smart Text Position Calibration
Text layers may contain multiple styles, requiring splitting into separate nodes.
Fixed‑width text boxes in Sketch lose their width information when naively exported to HTML; the plugin therefore exports accurate width and line‑count data.
SVG export from Sketch can be inaccurate for rich text; a computer‑vision algorithm using OpenCV detects baselines and corrects positions.
The calibration steps include detecting rich‑text boxes, capturing screenshots, applying Canny edge detection, extracting font contours, measuring baseline differences, and adjusting positions based on the largest font size.
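The final adjustment step can be illustrated in isolation. This is a hedged sketch of only the last stage, assuming edge detection has already produced a measured baseline; the names and the 0.8 ascent ratio are illustrative assumptions, not the pipeline's actual values:

```javascript
// Sketch (hedged): position adjustment once OpenCV (Canny + contours) has
// measured the actual baseline in a screenshot. `measuredBaseline` and the
// ~0.8 ascent ratio are illustrative assumptions.
function calibrate(node, measuredBaseline) {
  // Estimate the expected baseline from the largest font in the rich text.
  const maxFont = Math.max(...node.runs.map(r => r.fontSize));
  const expectedBaseline = node.y + maxFont * 0.8;
  const delta = measuredBaseline - expectedBaseline;
  return { ...node, y: node.y + delta }; // shift so the baselines coincide
}

const node = { y: 100, runs: [{ fontSize: 14 }, { fontSize: 20 }] };
const fixed = calibrate(node, 118); // expected baseline 116, measured 118
console.log(fixed.y); // 102
```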
Layer Re‑processing
Smart Layer Merging
Designers often assemble icons or decorative graphics from multiple small layers. The plugin automatically detects groups that should be merged and exports them as a single layer (screenshot), simplifying downstream processing.
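One plausible detection heuristic is to flag groups whose children are all small vector shapes. This is a hedged sketch; the 32px threshold and the rule itself are illustrative, not the plugin's actual logic:

```javascript
// Sketch (hedged heuristic): flag groups whose children are all small
// Shape layers as merge candidates, to be exported as a single screenshot.
// The 32px threshold is an illustrative assumption.
function shouldMerge(group, maxChildSize = 32) {
  return (
    group.layers.length > 1 &&
    group.layers.every(
      l =>
        l.type === "Shape" &&
        l.frame.width <= maxChildSize &&
        l.frame.height <= maxChildSize
    )
  );
}

const icon = {
  layers: [
    { type: "Shape", frame: { width: 12, height: 12 } },
    { type: "Shape", frame: { width: 20, height: 8 } },
  ],
};
console.log(shouldMerge(icon)); // true: export as one merged image
```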
Unused Layer Detection
Duplicate images with different URLs are unified.
Layers without backgroundImage, backgroundColor, or borderWidth are removed.
Layers that do not affect the pixel matrix are filtered out.
Layers fully covered by opaque layers are discarded.
Images with transparent areas below a threshold are removed.
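The second rule above lends itself to a simple filter. This sketch covers only the "no paintable style" check; occlusion and pixel-matrix checks require rendering and are out of scope here:

```javascript
// Sketch (hedged): drop layers that carry no backgroundImage,
// backgroundColor, or non-zero borderWidth, per the rules above.
function isPaintable(node) {
  const s = node.style || {};
  return Boolean(
    s.backgroundImage ||
    s.backgroundColor ||
    (s.borderWidth && s.borderWidth !== "0px")
  );
}

const nodes = [
  { name: "bg", style: { backgroundColor: "#fff" } },
  { name: "empty-group", style: {} },
  { name: "card", style: { borderWidth: "1px" } },
];

const kept = nodes.filter(isPaintable).map(n => n.name);
console.log(kept); // ["bg", "card"]
```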
Plugin Testing and Metrics
Unit Test System
Before each release, the Sketch plugin is tested with Skpm‑Test, a Jest‑like framework for the Skpm ecosystem, achieving around 95% test coverage.
Absolute Position Layout Viewing
The exported JSON contains absolute positions and CSS properties for each node, which can be directly transformed into HTML + CSS for visual verification (e.g., via https://imgcook.taobao.org/edit).
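The JSON-to-HTML transformation is straightforward because every node is absolutely positioned. A minimal sketch, assuming the illustrative node shape from earlier (not the actual imgcook schema):

```javascript
// Sketch (hedged): render an exported node tree as absolutely positioned
// HTML for visual verification. Field names are illustrative assumptions.
function toHtml(node) {
  const { x, y, width, height } = node.rect;
  // Convert camelCase style keys to CSS property names.
  const extra = Object.entries(node.style || {})
    .map(([k, v]) => `${k.replace(/[A-Z]/g, c => "-" + c.toLowerCase())}:${v}`)
    .join(";");
  const style =
    `position:absolute;left:${x}px;top:${y}px;` +
    `width:${width}px;height:${height}px;` + extra;
  const children = (node.children || []).map(toHtml).join("");
  return `<div style="${style}">${children}</div>`;
}

const html = toHtml({
  rect: { x: 0, y: 0, width: 375, height: 64 },
  style: { backgroundColor: "#ff5000" },
  children: [],
});
console.log(html);
```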
Visual Restoration Metric System
Using OpenCV, the system measures how closely the rendered output matches the original design. The process includes resizing images, converting to grayscale, performing template matching for each element, and calculating similarity scores, positional offsets (x, y), and overall restoration score P based on total layer count, similarity, and displacement.
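The final aggregation can be sketched as follows. This is one plausible weighting of similarity against displacement; the actual formula for P used by the metric system is not documented here, and the 100px penalty scale is an assumption:

```javascript
// Sketch (hedged): combine per-element template-matching similarity and
// positional offsets (dx, dy) into an overall restoration score P.
// The (|dx|+|dy|)/100 penalty scale is an illustrative assumption.
function restorationScore(elements) {
  if (elements.length === 0) return 1;
  const per = elements.map(({ similarity, dx, dy }) => {
    const offsetPenalty = Math.min(1, (Math.abs(dx) + Math.abs(dy)) / 100);
    return similarity * (1 - offsetPenalty);
  });
  return per.reduce((a, b) => a + b, 0) / elements.length; // average over layers
}

const score = restorationScore([
  { similarity: 1.0, dx: 0, dy: 0 },  // perfect match
  { similarity: 0.9, dx: 10, dy: 0 }, // slight horizontal shift
]);
console.log(score); // 0.905
```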
Future Outlook
Continued Standard Upgrades
Current design guidelines contain over 20 rules; intelligent layer processing has already eliminated most, leaving only three. Future work aims to remove the remaining constraints for a zero‑constraint workflow.
Restoration Capability Upgrade
Current average visual restoration accuracy is about 95%; upcoming plugin versions aim to push this closer to 100%.
Restoration Efficiency Upgrade
Performance bottlenecks appear when processing many layers or large image uploads; ongoing research targets significant speed improvements.
In the era of rapid technological advancement, front‑end intelligence will increasingly replace repetitive tasks, and machines will gain a deeper understanding of design intent, driving higher automation and smarter front‑end development.