Canvas Engine for AIGC‑Enabled Product Design
TMIC’s Canvas Engine pairs a Konva‑based front‑end canvas framework with AIGC model interfaces. It gives merchants undoable, low‑level editing tools such as template loading, intelligent image adjustment, magic eraser, filters, and watermarking, and it exposes an SDK and a JSON protocol for straightforward integration and future AI‑driven extensions.
With the rapid development of AIGC technology, TMIC launched a Canvas Engine to empower merchants in the new product design stage. The engine is a Canvas‑based UI framework that integrates front‑end rendering and AIGC model interfaces, providing low‑level operations on canvas elements.
The design balances professionalism and ease of use. It is built on the Konva library, defines a canvas protocol for abstract state description, and supports undo/redo, save, and other functions. Open components and an SDK enable quick integration or customization for various business scenarios.
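Because the canvas state is fully described by a JSON protocol object, undo/redo can be implemented as a stack of protocol snapshots rather than by reversing individual Konva operations. The sketch below illustrates that idea only; the `History` class and its method names are assumptions for illustration, not the engine’s actual API.

```typescript
// Minimal undo/redo history over JSON protocol snapshots.
// All names here are hypothetical, for illustration only.
type Protocol = Record<string, unknown>;

class History {
  private past: string[] = [];
  private future: string[] = [];
  private present: string;

  constructor(initial: Protocol) {
    this.present = JSON.stringify(initial);
  }

  // Record a new canvas state; any redo branch is discarded.
  push(next: Protocol): void {
    this.past.push(this.present);
    this.present = JSON.stringify(next);
    this.future = [];
  }

  undo(): Protocol | null {
    const prev = this.past.pop();
    if (prev === undefined) return null;
    this.future.push(this.present);
    this.present = prev;
    return JSON.parse(prev);
  }

  redo(): Protocol | null {
    const next = this.future.pop();
    if (next === undefined) return null;
    this.past.push(this.present);
    this.present = next;
    return JSON.parse(next);
  }
}
```

Storing serialized snapshots keeps the history immune to later in-place mutation of the live state object, at the cost of re-serializing on every edit.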
Background: Since 2022, projects such as DALL‑E 2, Midjourney and Stable Diffusion have lowered the barrier for ordinary users to generate images with AIGC. TMIC, which covers the whole product lifecycle, identified the design stage as a natural fit for AIGC and began exploring its application.
To avoid duplicated effort, TMIC abstracted common low‑level capabilities—front‑end canvas functions and AI algorithms—into the Canvas Engine.
Product features include file management, template loading, intelligent image adjustment (crop, expand, clarity, local color change), magic eraser, background generation, watermarking, filters, text, frames, and material insertion. Most functions are inspired by consumer‑friendly tools such as Meitu XiuXiu while retaining professional capabilities.
Technical implementation: The core component is the Canvas Component, a pure‑front‑end rendering and interaction layer built with Konva and React. A JSON‑based canvas protocol describes size, background, and atomic components. The Canvas Engine adds a toolbox that binds UI controls to AIGC model calls, updating the protocol in real time. The architecture separates the canvas protocol from AI services, allowing the engine to be used as a standalone component or embedded in other projects.
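The binding between toolbox controls and the protocol can be sketched as a pure update function: each tool emits an action, and a reducer returns the next protocol state without mutating the previous one, which is what makes real-time updates and snapshot-based undo possible. The field names below follow the appendix example; the reducer itself is a hypothetical sketch, not the engine’s actual code.

```typescript
// Hypothetical slice of the canvas protocol, following the appendix example.
interface CanvasState {
  width: number;
  height: number;
  scale: number;
  objList: { type: string; src?: string }[];
}

// Two illustrative tool actions; a real toolbox would define many more.
type ToolAction =
  | { type: "scale"; value: number }
  | { type: "addObject"; obj: { type: string; src?: string } };

// Pure update: a tool call never mutates the current state; it returns
// a new protocol object, so every prior state remains a valid snapshot.
function applyTool(state: CanvasState, action: ToolAction): CanvasState {
  switch (action.type) {
    case "scale":
      return { ...state, scale: action.value };
    case "addObject":
      return { ...state, objList: [...state.objList, action.obj] };
  }
}
```

Keeping the reducer free of any AI-service or rendering calls mirrors the architecture described above: the same protocol transitions work whether the engine runs standalone or embedded in another project.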
Quality assurance: Unit tests cover >85% of the code base, with 35 test cases across six scenarios (component rendering, atomic elements, transformation & drag protection, tool implementation, GUI interaction, undo/redo). Jest and React Testing Library are used; protocol snapshots serve as the primary verification artifact because canvas content cannot be captured by HTML snapshots.
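Since canvas pixels are invisible to HTML snapshots, the tests compare serialized protocol states instead. For such snapshots to be deterministic, the serialization must not depend on property insertion order; the helper below sketches one way to achieve that (sorted keys) and is an assumption, not the project’s actual test utility.

```typescript
// Serialize a protocol object with sorted keys so that snapshot
// comparisons do not depend on property insertion order.
function stableStringify(value: unknown): string {
  if (Array.isArray(value)) {
    return "[" + value.map(stableStringify).join(",") + "]";
  }
  if (value !== null && typeof value === "object") {
    const entries = Object.keys(value as object)
      .sort()
      .map(
        (k) =>
          JSON.stringify(k) +
          ":" +
          stableStringify((value as Record<string, unknown>)[k])
      );
    return "{" + entries.join(",") + "}";
  }
  return JSON.stringify(value);
}
```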
Future work: Expand functionality (style transfer, local repair, scene synthesis), support custom backend model interfaces, and improve performance for non‑product images.
Appendix – example canvas configuration (JSON):
{
  "theme": "dark",
  "backend": "bizB",
  "content": {
    "canvas": {
      "maxSize": 512,
      "width": 512,
      "height": 512,
      "scale": 1.2,
      "backgroundImage": {
        "src": "backgroundImageUrl"
      },
      "objList": []
    },
    "toolBox": {
      "list": [
        {
          "type": "fileEntry",
          "title": "File",
          "children": [
            { "type": "file", "title": "File" },
            { "type": "template", "title": "Template" }
          ]
        },
        {
          "type": "adjust",
          "title": "Adjust",
          "children": [
            { "type": "cutImage" },
            { "type": "expandImage" },
            { "type": "clarity" },
            { "type": "localChangeColor" }
          ]
        },
        {
          "type": "brush",
          "title": "Brush",
          "children": [
            { "type": "magicEraser" }
          ]
        },
        { "type": "filter", "title": "Filter" },
        { "type": "text", "title": "Text" },
        { "type": "frame", "title": "Frame" },
        {
          "type": "bgAdjust",
          "title": "Background",
          "children": [
            { "type": "background" }
          ]
        },
        { "type": "material", "title": "Material" },
        {
          "type": "waterMark",
          "title": "Watermark",
          "children": [
            { "type": "patternLocalPrint", "logoUrl": "logoUrl" },
            { "type": "patternFullPrint", "patternImageUrl": "patternImageUrl" }
          ]
        }
      ]
    }
  }
}

DaTaobao Tech
Official account of DaTaobao Technology