How AI‑Native Turns Front‑End Development into a Spec‑Code‑Quality Closed Loop

This article details how Tencent Advertising’s AI‑Native approach restructures front‑end development by centering on spec‑driven repositories, automating code generation, visual verification, and quality checks, ultimately achieving a fully automated AI‑driven development pipeline that reduces coding effort by up to 90 percent.

Tencent Advertising Technology

Background

In the intelligent advertising ("Smart Delivery") project, Tencent’s ad‑tech team applied AI throughout the product delivery chain, rebuilding the front‑end with an AI‑Native methodology that links product requirements to final delivery.

Key Problems

Missing business and system context in existing projects makes AI‑driven development difficult.

Quality assurance still relies heavily on manual testing, leading to high error risk.

Solution Overview

Define an AI‑Native R&D specification centered on spec/code/uat directories.

Build a visual‑check tool that creates a spec → code → effect verification loop, allowing a Coding Agent to refine generated code based on visual feedback.

Develop four quality tools covering functional completeness and correctness.

Emphasize the importance of a closed‑loop AI chain that automates the entire flow from requirement description to functional delivery.

Spec‑Centric Organization

All development artifacts are organized under three top‑level folders: spec, code, and uat. Each folder contains common and module sub‑folders to share resources across modules.

**spec**/module/smartdelivery/{feature_name}/
├── requirement.md   # Feature requirement document
├── syscontract.md   # System contract: external dependency interfaces, etc.
├── *.png            # Design mockups / screenshots
└── ui-demo/         # UI prototype (optional)

**code**/module/smartdelivery/   # Code generated by AI from the spec

**uat**/module/smartdelivery/    # AI-generated acceptance test cases, reviewed by humans

Code Generation Process

The project first creates a top‑level architecture spec that defines module structure, framework, and dependencies. The spec includes environment (Node.js 16 LTS, npm 8+), tech stack (React 18, Vite, TypeScript, Redux Toolkit, etc.), naming conventions, and core implementation principles such as strict TypeScript, functional components with hooks, and BEM‑styled Less.
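As a small illustration of the naming conventions, a BEM class-name helper could enforce the `block__element--modifier` pattern in components. This is a hypothetical sketch, not a helper from the Tencent spec:

```typescript
// Minimal BEM class-name builder: block__element--modifier.
// Hypothetical helper illustrating the convention described in the
// architecture spec; not taken from the actual project.
export function bem(
  block: string,
  element?: string,
  modifiers: string[] = []
): string {
  const base = element ? `${block}__${element}` : block;
  return [base, ...modifiers.map((m) => `${base}--${m}`)].join(" ");
}
```

A call like `bem("ad-card", "title", ["active"])` then yields both the base element class and its modifier class for use in a component's `className`.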

Generation follows the speckit workflow:

- /speckit.specify – convert handwritten markdown requirements into a spec file.
- /speckit.clarify – interactively clarify ambiguous spec items.
- /speckit.plan – produce the technical design and data model.
- /speckit.tasks – break the plan down into tasks.
- /speckit.implement – generate code directly in the code directory.

A simplified variant skips the full speckit steps and uses manual requirement writing followed by AI clarification, planning, and execution.

Visual Feedback Loop

A visual‑check tool based on Playwright captures UI screenshots, compares generated pages with the live site, and feeds differences back to the AI for automatic correction.

The loop supports three UI versions: AI‑generated, design‑guided demo, and visual‑check‑driven auto‑repair, enabling precise alignment with the production UI.
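The core comparison step of such a loop can be sketched as a pixel diff over two same-sized RGBA buffers (for example, decoded Playwright screenshots). The function below is a simplified illustration of the idea, not the team's actual tool; production diffing libraries also handle anti-aliasing and layout shifts:

```typescript
// Fraction of pixels whose RGBA channels differ by more than `tolerance`.
// A visual-check loop would keep re-prompting the Coding Agent while this
// ratio stays above some acceptance threshold. Simplified sketch only.
export function diffRatio(
  a: Uint8ClampedArray,
  b: Uint8ClampedArray,
  tolerance = 8
): number {
  if (a.length !== b.length) throw new Error("buffers must be the same size");
  let differing = 0;
  const pixels = a.length / 4; // 4 bytes (RGBA) per pixel
  for (let i = 0; i < a.length; i += 4) {
    for (let c = 0; c < 4; c++) {
      if (Math.abs(a[i + c] - b[i + c]) > tolerance) {
        differing++;
        break; // count each pixel at most once
      }
    }
  }
  return differing / pixels;
}
```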

Quality Assurance

The QA system consists of two pillars:

Functional completeness: ensure the spec fully describes all business features of the live system.

Functional correctness: verify that AI‑generated code implements the spec accurately.

Both are complemented by manual testing for final validation.

Testing Strategies

Two complementary testing approaches are used:

Property‑Based Testing (PBT)

Properties are extracted from the spec and stored in a structured form; fast‑check then generates thousands of random inputs to catch mismatches between spec and code.

You are a code-review assistant. Based on the requirement.md requirement description in the current directory, complete the following tasks:
1. **Property extraction and structured storage**
   - Systematically extract the key properties from the requirement document
   - Record them by property name, requirement source, definition, natural-language description, verification basis, test cases, test strategy, and other dimensions
2. **Property-based test implementation**
   - Use the fast‑check framework to run property tests verifying that the code implementation is consistent with the requirement spec

Comparison surfaced more than 1,000 rules in the code versus roughly 100 in the spec, revealing about 30 missing business rules.
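To illustrate the idea, a property extracted from a spec, such as "a normalized daily budget is always a whole number and never below a minimum", can be checked against many random inputs. fast‑check automates this with smarter generators and shrinking; the hand-rolled sketch below (with a hypothetical rule and function names) shows the underlying principle:

```typescript
// Hypothetical business rule, as it might be extracted from a spec:
// a normalized daily budget is an integer and never below a 50-yuan minimum.
export function normalizeBudget(input: number): number {
  return Math.max(50, Math.round(input));
}

// Minimal property check: run the rule against many random inputs and
// report whether the property ever fails. fast-check does the same thing
// with typed generators and automatic counterexample shrinking.
export function checkBudgetProperty(runs = 1000): boolean {
  for (let i = 0; i < runs; i++) {
    const input = Math.random() * 10000; // random candidate budget
    const out = normalizeBudget(input);
    if (!Number.isInteger(out) || out < 50) return false; // property violated
  }
  return true;
}
```

The value of this style of test is exactly the mismatch-hunting described above: any input for which the generated code disagrees with the spec-derived property surfaces as a failing run.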

E2E Delivery Validation

Specs are transformed into Gherkin scenarios, reviewed by humans, and executed automatically by Claude + Playwright agents. Example prompts generate native test cases covering core ad‑delivery flows, audience targeting, and version selection.

# Role
You are a professional test engineer responsible for writing P0-level executable Web UI automation test cases.
# Objective
Convert functional requirements into natural-language test cases for execution by automated testing tools.
# Preliminary steps
1. Read and understand all existing spec files
2. Map out the dependency relationships and call hierarchy
3. Identify the core business flows and key functional modules
4. Analyze the existing test coverage
# Output spec
Generate 5 P0_5_webUI.feature files using the Given‑When‑Then structure.

Execution via Claude + Playwright created over 100 test cases, uncovering 10+ issues.
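A minimal sketch of how Given-When-Then lines can be dispatched to executable step handlers is shown below. The step texts and the plain state object are hypothetical; in the real pipeline a Claude agent drives Playwright, so each handler would receive a browser page rather than mutate local state:

```typescript
// Map Gherkin step text to handlers via regexes. A real runner would pass
// a Playwright page into each handler instead of a plain state object.
type World = { page: string; delivered: boolean };

const steps: Array<[RegExp, (w: World, ...args: string[]) => void]> = [
  [/^Given I am on the "(.+)" page$/, (w, p) => { w.page = p; }],
  [/^When I submit the delivery form$/, (w) => {
    w.delivered = w.page === "smart-delivery";
  }],
  [/^Then the campaign is created$/, (w) => {
    if (!w.delivered) throw new Error("campaign was not created");
  }],
];

export function runScenario(lines: string[]): World {
  const world: World = { page: "", delivered: false };
  for (const line of lines) {
    const match = steps.find(([re]) => re.test(line.trim()));
    if (!match) throw new Error(`no step matches: ${line}`);
    const args = line.trim().match(match[0])!.slice(1); // captured groups
    match[1](world, ...args);
  }
  return world;
}
```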

Future Plans

The roadmap includes:

Embedding the AI‑Native flow into the product lifecycle (Product → Spec → Code → UAT).

Extending the approach to backend modules and multi‑module systems by describing module contracts in the spec.

Gradually retrofitting existing systems: create spec directories, generate specs with AI assistance, and close the spec‑code‑validation loop.
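A module contract described in a spec file such as syscontract.md might, for example, translate into a typed interface that both sides of a module boundary agree on. The names below are hypothetical, a sketch of the idea rather than the team's actual contract format:

```typescript
// Hypothetical contract for a backend module, as it might be derived from
// a syscontract.md spec: the front-end codes against this interface, and
// the backend module promises to satisfy it.
export interface AudienceServiceContract {
  listSegments(
    advertiserId: string
  ): Promise<Array<{ id: string; name: string }>>;
}

// An in-memory stub satisfying the contract, usable in generated E2E
// tests before the real module exists.
export const stubAudienceService: AudienceServiceContract = {
  async listSegments(advertiserId) {
    return [{ id: `${advertiserId}-default`, name: "All visitors" }];
  },
};
```

Expressing contracts this way lets AI-generated modules on either side of the boundary be validated independently against the same spec-derived interface.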

Fully closing the AI loop reduces manual effort by up to 60% in current projects, with greater gains expected as the models stabilize.
