How NetEase Cloud Music’s Front‑End Team Built an AI‑Powered Low‑Code Copilot

NetEase Cloud Music’s front‑end team integrated large language models into their internal low‑code platform, building an AI Copilot that supports smart page creation, page editing, component configuration, code snippet generation, and Q&A. This article details the underlying architecture, prompt engineering, and mixed‑mode development workflow.

NetEase Cloud Music Tech Team

Overview

With the rapid emergence of large language models (LLMs) such as ChatGPT, generative AI is expanding into many product scenarios. NetEase Cloud Music’s front‑end team applied LLM capabilities to their internal low‑code platform, Tango Studio, to improve developer productivity and user experience.

Low‑Code Copilot Features

Smart Page Creation

Developers can describe a page in natural language (e.g., “Create an approval form with fields for approver, time, and comments”) and the system instantly generates a complete form, table, detail view, or dashboard, requiring only minor logical adjustments.
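The article does not show the generated artifact, but conceptually a prompt like the one above could yield a declarative page schema along these lines (the field names and schema shape here are illustrative, not Tango Studio’s actual format):

```typescript
// Hypothetical page schema an LLM-backed generator might emit for
// "Create an approval form with fields for approver, time, and comments".
interface FormField {
  name: string;
  label: string;
  component: "Select" | "DatePicker" | "TextArea" | "Input";
  required: boolean;
}

interface PageSchema {
  type: "form";
  title: string;
  fields: FormField[];
}

const approvalForm: PageSchema = {
  type: "form",
  title: "Approval Form",
  fields: [
    { name: "approver", label: "Approver", component: "Select", required: true },
    { name: "time", label: "Time", component: "DatePicker", required: true },
    { name: "comments", label: "Comments", component: "TextArea", required: false },
  ],
};

console.log(approvalForm.fields.map((f) => f.name).join(","));
```

Because the output is structured rather than free‑form text, the platform can validate it before rendering, which is what makes “only minor logical adjustments” realistic.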

Smart Page Editing

If the generated page needs changes, users can issue natural‑language commands (e.g., “Add a field for agreement”) to modify titles, add or remove components, or adjust styles, providing a low‑cost, high‑tolerance interaction.
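One way to make such commands safe is to resolve them into a small set of structured edit operations before touching the page. The op shape below is illustrative (the real platform maps edits onto the source AST, as described later):

```typescript
// A natural-language edit such as "Add a field for agreement" is first
// resolved by the LLM into a discriminated-union edit operation, then
// applied deterministically by the platform.
type EditOp =
  | { kind: "addField"; name: string }
  | { kind: "removeField"; name: string };

function applyEdit(fields: string[], op: EditOp): string[] {
  switch (op.kind) {
    case "addField":
      return [...fields, op.name]; // append the new field
    case "removeField":
      return fields.filter((f) => f !== op.name); // drop the named field
  }
}

const fields = applyEdit(["approver", "time", "comments"], {
  kind: "addField",
  name: "agreement",
});
console.log(fields.join(","));
```

Constraining the model to a fixed operation vocabulary is what gives the interaction its “high tolerance”: a malformed command fails validation instead of corrupting the page.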

Smart Component Property Configuration

When a component is selected, an AI input box can be invoked to fine‑tune properties such as background color, layout direction, size, or state via simple natural‑language prompts.
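Translating “make the background red” into a concrete property patch is the LLM’s job; applying that patch to the selected component can then be a plain shallow merge (a minimal sketch, with hypothetical prop names):

```typescript
// Apply an AI-suggested property patch to a selected component's props.
type Props = { [key: string]: string };

function patchProps(props: Props, patch: Props): Props {
  return { ...props, ...patch }; // patch values win over existing ones
}

const button: Props = { backgroundColor: "white", flexDirection: "row" };
const updated = patchProps(button, { backgroundColor: "red" });
console.log(updated.backgroundColor);
```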

Smart Code Snippet Generation

The platform also applies LLM code generation to expression writing. For example, a prompt like “filter out records where name is not alice” instantly yields the corresponding JavaScript/TypeScript snippet.
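A snippet an LLM would plausibly return for that prompt, applied here to illustrative data (“filter out records where name is not alice” means keeping only the records whose name is “alice”):

```typescript
interface Row {
  name: string;
  score: number;
}

const records: Row[] = [
  { name: "alice", score: 1 },
  { name: "bob", score: 2 },
  { name: "alice", score: 3 },
];

// Generated expression: keep only rows whose name is "alice"
const filtered = records.filter((r) => r.name === "alice");
console.log(filtered.length); // 2
```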

Intelligent Q&A and Programming Assistant

Documentation is chunked and stored in a vector database. When users ask questions, the system retrieves relevant context, assembles a prompt, and leverages the LLM to provide accurate answers, surpassing simple keyword search.
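The retrieval step can be sketched as follows. This uses a toy bag‑of‑words embedding purely for illustration; the production system described above uses a real embedding model and a vector database:

```typescript
// Toy retrieval: embed the query and each documentation chunk, rank
// chunks by cosine similarity, and keep the top k for the prompt.
type Vec = { [word: string]: number };

function embed(text: string): Vec {
  const v: Vec = {};
  for (const w of text.toLowerCase().split(/\s+/)) v[w] = (v[w] || 0) + 1;
  return v;
}

function cosine(a: Vec, b: Vec): number {
  let dot = 0, na = 0, nb = 0;
  for (const w of Object.keys(a)) {
    dot += a[w] * (b[w] || 0);
    na += a[w] * a[w];
  }
  for (const w of Object.keys(b)) nb += b[w] * b[w];
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

function topK(chunks: string[], query: string, k: number): string[] {
  const q = embed(query);
  return chunks
    .map((c) => ({ c, s: cosine(embed(c), q) }))
    .sort((x, y) => y.s - x.s)
    .slice(0, k)
    .map((x) => x.c);
}

const docs = [
  "the approval form supports custom fields",
  "pricing for cloud storage plans",
];
console.log(topK(docs, "how do I add fields to an approval form", 1)[0]);
```

Because the retrieved chunks are placed into the prompt as context, the model answers from the team’s own documentation rather than from keyword matches alone.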

Architecture and Implementation

The core idea is to treat source code as the primary artifact. The platform parses source files into an Abstract Syntax Tree (AST), builds file and node models on top of the AST, and maps visual drag‑and‑drop actions to AST manipulations. Modified ASTs are then rendered back into source code.
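The round trip can be illustrated with a minimal node model: a drag‑and‑drop action becomes a tree mutation, and the tree is then printed back to source. (This is a sketch; the real platform parses actual source files into a full AST rather than using a toy model like this.)

```typescript
// Minimal node model: a visual "drop a Table after the Form" action
// becomes an insertChild on the tree, which is then re-rendered to code.
interface Node {
  tag: string;
  children: Node[];
}

function insertChild(parent: Node, child: Node, index: number): void {
  parent.children.splice(index, 0, child); // the drag-and-drop edit
}

function print(node: Node, indent = 0): string {
  const pad = " ".repeat(indent);
  if (node.children.length === 0) return `${pad}<${node.tag} />`;
  const body = node.children.map((c) => print(c, indent + 2)).join("\n");
  return `${pad}<${node.tag}>\n${body}\n${pad}</${node.tag}>`;
}

const page: Node = { tag: "Page", children: [{ tag: "Form", children: [] }] };
insertChild(page, { tag: "Table", children: [] }, 1);
console.log(print(page));
```

Keeping source code as the single artifact is what lets visual edits, AI edits, and hand‑written edits coexist: they are all just AST manipulations over the same files.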

When a user issues a natural‑language command, the system constructs a standardized prompt template, extracts intent, performs similarity matching against stored documentation in the vector DB, merges relevant information into the prompt, and invokes a pre‑trained GPT model to generate code. The generated code is injected into the current project, triggering a view re‑render.
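The standardized prompt assembly described above might look like the following sketch; the field names and template wording are hypothetical, not the platform’s actual format:

```typescript
// Assemble the standardized prompt from the user instruction, the
// documentation matches retrieved from the vector DB, and the current
// project context, before invoking the model.
interface PromptParts {
  instruction: string;    // the user's natural-language command
  matchedDocs: string[];  // similarity matches from the vector DB
  projectContext: string; // source context of the current page/component
}

function buildPrompt(p: PromptParts): string {
  return [
    "You generate code for a low-code platform.",
    `Project context:\n${p.projectContext}`,
    `Relevant documentation:\n${p.matchedDocs.join("\n---\n")}`,
    `Instruction: ${p.instruction}`,
    "Return only the generated code.",
  ].join("\n\n");
}

const prompt = buildPrompt({
  instruction: "Add a field for agreement",
  matchedDocs: ["Form fields are declared as child nodes of the Form component."],
  projectContext: '<Form><Field name="approver" /></Form>',
});
console.log(prompt.includes("Instruction: Add a field for agreement"));
```

Returning code (rather than free text) is what allows the last step of the pipeline, injecting the result into the project and re‑rendering the view, to be automatic.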

Prompt Engineering and Mixed‑Mode Development

The platform isolates natural‑language reasoning as a dedicated service. The prompt includes both the user instruction and the current project context. This enables a mixed development workflow where developers can work in a low‑code IDE, a chatbot, or a local IDE while sharing the same front‑end assets and codebase.

Future Outlook

LLM‑driven low‑code platforms can dramatically lower development costs and accelerate feature delivery. However, proprietary DSLs and closed protocols hinder extensibility and community contributions. Embracing open‑source ecosystems and open standards will be essential for sustainable growth.

Natural‑language interfaces are poised to become the primary human‑computer interaction mode for low‑code products, shifting focus from visual drag‑and‑drop to expressive, fault‑tolerant language commands.

Tags: frontend development, prompt engineering, large language model, mixed development, AI Copilot