Frontend Development · 15 min read

AIGCDesign: A Cross‑Platform Frontend AI Component Library and Its Technical Implementation

This article introduces AIGCDesign, a cross-platform frontend component library built around AI generation capabilities. It covers the motivation for the library, a survey of existing solutions, the layered architecture, lifecycle hooks, configuration examples, multi-framework support, business integration cases, and planned stream-processing enhancements.

JD Tech Talk

Definition of AIGC

AIGC (Artificial Intelligence Generated Content) refers to content such as text, images, audio, and video that is created by AI models, especially deep‑learning models that learn patterns from large datasets and then generate original material.

Why Build an AIGC Component Library?

Following the release of ChatGPT on 30 Nov 2022, AI‑driven content creation surged, prompting a need for reusable, low‑code/no‑code components that can quickly deliver AI‑enhanced applications across multiple systems and scenarios.

To meet this demand, the team surveyed six major AIGC component libraries, identified their common capabilities, aligned them with JD's internal platform, and launched the open-source-style project AIGCDesign.

Research of Existing Frontend AI Component Libraries

The survey evaluated open-source libraries on extensibility, component coverage, and platform and framework support, yielding a list of key capability items: lightweight footprint, fast development, out-of-the-box usage, support for Tailwind, React, Vue, and Native, conversational components, multimodal input, and responsive design.

Consequently, the target library must serve Native, Web, MP (Mini Program), and PC platforms and support frameworks like React, Vue, Android, and iOS.

Core Positioning and Goals

The solution is positioned as a multi‑endpoint AI component library that provides ready‑to‑use AI chat interfaces while allowing deep customization through extensive APIs.

Technical Implementation

Overall Architecture

The library builds on Taro’s cross‑platform capabilities to output MP and H5 components, while the web side uses a responsive approach that also supports PC. The architecture is divided into three layers:

Core Layer : AI platform integration, basic modules, and APIs that can be used directly in containers or standalone projects.

Container Layer : Multi‑platform, multi‑framework containers that handle large‑model integration, basic AI conversation, and highly customizable conversation areas.

Component Layer : A collection of basic, business, and custom components rendered by the container.

Native implementation uses JDHybrid’s hybrid architecture: frequently changing business modules are H5‑based, while stable container and base components are native, and Taro components can be reused in native projects.

Application Lifecycle

The container component exposes lifecycle hooks such as beforeLaunch, onLaunch, onSubmit, onLLMResult, and onChatUpdate, enabling developers to listen to each stage, inject custom logic, and integrate private large-model services.
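The hook flow can be sketched as follows. This is an illustrative simulation rather than the library's implementation: the hook names come from the article, but the signatures, the ChatMessage shape, and the simulateTurn driver are assumptions.

```typescript
type ChatMessage = { role: "user" | "assistant"; content: string };

// Assumed hook signatures; only the hook names are taken from the article.
interface LifecycleHooks {
  beforeLaunch?: () => void;                        // before the container initializes
  onLaunch?: () => void;                            // container is ready
  onSubmit?: (input: string) => string;             // intercept/transform user input
  onLLMResult?: (chunk: string) => void;            // each chunk from the model
  onChatUpdate?: (history: ChatMessage[]) => void;  // conversation state changed
}

// Tiny driver that simulates the container invoking the hooks in order,
// so the flow is visible without the real library.
function simulateTurn(hooks: LifecycleHooks, input: string): ChatMessage[] {
  hooks.beforeLaunch?.();
  hooks.onLaunch?.();
  const prompt = hooks.onSubmit?.(input) ?? input;
  const history: ChatMessage[] = [{ role: "user", content: prompt }];
  // Pretend the model streams two chunks before the final message lands.
  for (const chunk of ["Hello, ", "world"]) hooks.onLLMResult?.(chunk);
  history.push({ role: "assistant", content: "Hello, world" });
  hooks.onChatUpdate?.(history);
  return history;
}
```

In a real integration, onSubmit is the natural place to route the prompt to a private large-model service, and onLLMResult is where streamed chunks reach the UI.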

User Interaction and Data Flow

Interaction diagrams (omitted for brevity) illustrate how UI events trigger API calls, how data flows between the container and AI backend, and how custom rendering slots can be used.

Minimal and Custom Configuration

With a few props, developers can instantiate the AI container and obtain a full chat UI. Customization is possible via lifecycle events and component properties.

import AiContainer from "@aigcui/container";
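Continuing the import above, a minimal configuration might look like the following sketch. The prop names are assumptions that mirror the UMD renderAiChatBubble call shown later; the agentId and token values are placeholders from the article.

```typescript
// Hypothetical props for the AiContainer component; the exact prop names
// are assumptions based on the UMD renderAiChatBubble call shown later.
const containerProps = {
  width: 500,
  height: 500,
  chatInfo: {
    agentId: "xxx",   // agent identifier issued by the AI platform
    token: "xxxxxx",  // access token (placeholder)
  },
};
// Rendered roughly as: <AiContainer {...containerProps} />
```

Deeper customization would then layer the lifecycle hooks on top of this minimal prop set.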

Web Multi‑Framework Support

Beyond Taro, the library offers a UMD bundle that can be loaded via a script tag and rendered into any DOM node. If a project already includes React, the pure component package can be used; otherwise the full bundle brings its own React runtime.

<!-- Load the UMD bundle -->
<script src="https://storage.jd.com/taro/aigc-ui/1.0.6/aigcjdfe-autobots-full.umd.js"></script>

<!-- Render the component into the element with id 'app' -->
<script>
  window['autobots-full'].renderAiChatBubble({
      width: 500,
      height: 500,
      chatInfo: {
        agentId: 'xxx',
        token: 'xxxxxx',
      }
  }, 'app');
</script>

Business Integration Cases

Multiple multi‑endpoint demos have been integrated, covering MP, Web, Hybrid, and Android scenarios, with a total of 8 MP conversation components and 14 Web business components in version 1.0.6.

Long‑Term Direction and Value

The roadmap focuses on continuously enhancing core capabilities (flexible configuration, multi‑platform/ multi‑framework output, and integration of OCR, ASR/TTS, agents, knowledge bases) and expanding 2B/2C scenarios with low‑cost, highly configurable AI interaction components.

Appendix: Stream Processing Techniques

1. Stream Data Reception and Handling

Front-end streaming is achieved with the fetch API and ReadableStream, allowing chunk-by-chunk processing and abort control via AbortController.

async function fetchStream(url, params) {
    const { onmessage, onclose, ...fetchOptions } = params;
    try {
        const response = await fetch(url, fetchOptions);
        const reader = response.body.getReader();
        const decoder = new TextDecoder();
        while (true) {
            const { done, value } = await reader.read();
            if (done) { onclose?.(); break; }
            // stream: true keeps multi-byte characters intact across chunk boundaries
            onmessage?.(decoder.decode(value, { stream: true }));
        }
        console.log('Stream complete');
    } catch (err) {
        // controller.abort() rejects the pending fetch/read with an AbortError
        if (err.name === 'AbortError') onclose?.();
        else throw err;
    }
}

const controller = new AbortController();
fetchStream('https://example.com/stream-endpoint', {
    signal: controller.signal,
    onmessage: (text) => { console.log(text); },
    onclose: () => { console.log('Stream closed'); }
});
// Abort when needed
controller.abort();

Two processing modes are supported: true streaming (handling each chunk as it arrives) and batch processing (waiting for the final chunk before rendering).
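The two modes can be contrasted as pure functions. This is an illustrative sketch assuming a render callback, not the library's API:

```typescript
type ChunkHandler = (text: string) => void;

// True streaming: re-render with everything received so far on each chunk.
function streamMode(chunks: string[], render: ChunkHandler): void {
  let acc = "";
  for (const chunk of chunks) {
    acc += chunk;
    render(acc);
  }
}

// Batch processing: wait for the final chunk, then render once.
function batchMode(chunks: string[], render: ChunkHandler): void {
  render(chunks.join(""));
}
```

Streaming gives faster perceived feedback at the cost of more renders; batch mode avoids intermediate renders when the answer is short or the UI cannot update incrementally.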

2. Streamed Markdown Rendering

Using react-markdown together with rehype-highlight, streamed markdown content can be rendered on the fly, with custom component mapping for links and other elements.

import React from "react";
import ReactMarkdown from "react-markdown";
import rehypeHighlight from "rehype-highlight";

const BubbleMarkdown: React.FC<{ children?: string }> = ({ children }) => {
  return (
    <ReactMarkdown
      rehypePlugins={[rehypeHighlight]}
      components={{
        a: (aProps) => {
          const href = aProps?.href || "";
          // Internal links (site-relative paths) open in the same tab;
          // everything else defaults to a new tab.
          const isInternal = /^\//.test(href);
          const target = isInternal ? "_self" : aProps?.target ?? "_blank";
          return <a {...aProps} target={target} />;
        },
      }}
    >
      {children}
    </ReactMarkdown>
  );
};

Both streaming and non‑streaming responses support typewriter‑style or full‑text rendering, with additional UI optimizations to avoid unnecessary parent re‑renders, smooth input pacing, throttled scrolling, and input disabling during active streaming.
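As one illustration of the pacing optimization, a typewriter effect can be driven by precomputing frames of the accumulated text. This is a sketch of the general technique rather than the library's code; typewriterSteps and charsPerTick are invented names.

```typescript
// Reveal the text a few characters per tick so that fast-arriving chunks
// do not make the bubble jump; the last frame always contains the full text.
function typewriterSteps(fullText: string, charsPerTick: number): string[] {
  const frames: string[] = [];
  for (let i = charsPerTick; i < fullText.length + charsPerTick; i += charsPerTick) {
    frames.push(fullText.slice(0, i));
  }
  return frames;
}
// In a real UI each frame would be flushed on a timer (e.g. setInterval),
// with input disabled until the final frame has rendered.
```

The same frame list works for both streaming and non-streaming responses: for streaming, frames are generated incrementally as chunks arrive; for non-streaming, they are generated once from the full answer.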


Written by JD Tech Talk, the official JD Tech public account delivering best practices and technology innovation.