
How AI Can Automate the Entire Software Delivery Pipeline from Requirement to Deployment

This article outlines a multi-stage, AI-driven automation roadmap that extends from requirement gathering through technical solution generation, code creation, testing, and deployment. It details challenges such as workflow standardization, knowledge-base construction, skill reuse, and quality assurance, and presents concrete metrics showing efficiency gains of up to 80%.

Tencent Technical Engineering

Background and Challenges

The current end-to-end software delivery chain still relies heavily on manual effort for requirement analysis, solution design, code review, and test verification, causing frequent context switches and collaboration delays that cap R&D efficiency. The team therefore aims to extend AI-assisted automation upstream to requirements and design, and downstream to testing and deployment, ultimately achieving fully automated delivery from requirement to production.

The automation evolution is divided into three levels: L1 (fully manual), L2 (human‑AI collaboration), and L3 (full automation). In 2025 the organization is at L2, focusing on "technical‑solution‑to‑code" while expanding to cover the entire chain. The goal is an 80% overall efficiency boost.

Key Challenges

Standardizing the delivery workflow: Define a uniform process for the large model to follow, with rollback capability.

Normalizing and structuring requirements: Enforce a template covering overview, body, versioning, and external dependencies, with clarification and scoring rules to ensure quality (see the sketch after this list).

Building a high-quality, searchable knowledge base: Create business-domain and code knowledge bases to provide context for the model.

Standardizing core Skills: Capture and reuse capabilities such as technical-solution generation, code generation, code review, and test-case execution.

Ensuring code-quality safeguards: Introduce gates for requirement review, solution review, code review, and security checks to prevent quality decay.

Online issue governance: Develop an automated monitoring and self-repair framework for runtime logs, metrics, and performance.
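
The article does not show the requirement template or scoring rules in code form. The sketch below, in Python, illustrates how such a gate might look, assuming a 100-point scale and an 80-point clarification threshold (both inferred from the scores reported later); the field weights are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """Structured requirement following the template sections named above."""
    overview: str = ""
    body: str = ""
    versioning: str = ""
    external_dependencies: list[str] = field(default_factory=list)

def score_requirement(req: Requirement) -> int:
    """Score template completeness on a 0-100 scale (weights are illustrative)."""
    score = 0
    if req.overview.strip():
        score += 25
    if len(req.body.split()) >= 50:  # the body must carry real detail
        score += 40
    if req.versioning.strip():
        score += 15
    if req.external_dependencies:
        score += 20
    return score

def needs_clarification(req: Requirement, threshold: int = 80) -> bool:
    """Gate: requirements scoring below the threshold go back for clarification."""
    return score_requirement(req) < threshold
```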

Detailed Practice – Human‑AI Collaboration (L2)

Technical Solution to Code

During the past year the team focused on the coding stage, combining AI technical specifications (rules), templated prompts, MCP tool integration, and AI self‑summarization to create a new human‑AI collaborative productivity model. The process is illustrated in the "AI‑Agent R&D workflow" diagram.

Figure: AI Agent R&D workflow

To bridge the gap from requirement to deployment, the team extended the technical solution with an execution checklist and integrated it with CodeBuddy through templated solutions, test-execution checklists, and deployment prompts; a control-flow sketch follows the phase list and diagram below.

Development phase: Generate code and unit-test coverage based on the layered architecture (controller, service, persistence).

Testing phase: Use MCP tools (Qicai-Stone configuration, DDL/DML tickets, BlueShield pipelines, Lego environment) to prepare test environments and run automated test cases.

Deployment phase: Leverage the same MCP tools for pre-deployment preparation and trigger unattended deployment via AI-generated release tickets.

Figure: Technical solution to deployment flow
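
The orchestration code is not published in the article; the following minimal sketch shows one way the three-phase checklist could be driven, assuming each step is a callable that reports success. Every step name here is illustrative rather than taken from the real workflow.

```python
from typing import Callable

# Phase checklist mirroring the three phases above; step names are invented.
PIPELINE: dict[str, list[str]] = {
    "development": ["generate_layered_code", "generate_unit_tests"],
    "testing": ["provision_test_env", "run_automated_cases"],
    "deployment": ["prepare_release_ticket", "unattended_deploy"],
}

def run_pipeline(steps: dict[str, Callable[[], bool]]) -> None:
    """Execute the checklist phase by phase, aborting on the first failed
    step so a rollback can be triggered (rollback itself is omitted here)."""
    for phase, names in PIPELINE.items():
        for name in names:
            if not steps[name]():
                raise RuntimeError(f"step '{name}' failed in phase '{phase}'")
```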

Workflow Optimizations

Working with the testing and operations teams, the team built LegoMCP/Skills, DDLMCP/Skills, and interface-automation-test MCP/Skills, enabling end-to-end automation from code generation to release. A Lego test environment can now be provisioned in 5-10 minutes without switching platforms.

Figure: Lego environment provisioning
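
LegoMCP itself is internal to Tencent, but a tool of this shape can be sketched with the official MCP Python SDK; the server name, tool signature, and return value below are all hypothetical.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("lego-env")

@mcp.tool()
def provision_test_env(service: str, branch: str) -> str:
    """Provision an isolated Lego test environment for a service branch.
    The real tool would call Lego's internal provisioning API; this
    placeholder just returns an environment identifier."""
    return f"lego-env://{service}/{branch}"

if __name__ == "__main__":
    mcp.run()  # expose the tool to the coding agent over stdio
```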

DDL application time was reduced to seconds by building DDLSkill and DDLMCP tools.

Figure: DDL request automation
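
A companion sketch for the DDL path, again using the public MCP SDK with invented names; the real DDLMCP presumably submits to an internal ticketing system.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ddl-tickets")

@mcp.tool()
def apply_ddl(database: str, statement: str) -> str:
    """Validate a DDL statement and file an automated change ticket."""
    if not statement.lstrip().upper().startswith(("CREATE", "ALTER", "DROP")):
        raise ValueError("only DDL statements are accepted")
    # A real implementation would wait for the ticket to be auto-approved.
    return f"ticket://{database}/auto-approved"

if __name__ == "__main__":
    mcp.run()
```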

Interface-automation testing was piloted on the appeal-review scenario: the AI identifies the affected code, generates test cases for both existing and new interfaces, and triggers one-click execution via the CodeBuddy MCP.

Figure: Interface automation test
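
The generated cases are not reproduced in the article; this pytest-style sketch shows the shape they might take, with a hypothetical endpoint and payload standing in for the internal appeal-review API.

```python
import requests

BASE_URL = "http://lego-test-env.internal"  # the provisioned test environment

def test_submit_appeal_review():
    # New-interface case: submit a review verdict and check the response.
    resp = requests.post(
        f"{BASE_URL}/api/appeal/review",
        json={"appeal_id": "A-1001", "verdict": "approved"},
        timeout=5,
    )
    assert resp.status_code == 200
    assert resp.json()["status"] == "ok"

def test_existing_interface_still_works():
    # Regression case for an already-shipped interface.
    resp = requests.get(f"{BASE_URL}/api/appeal/A-1001", timeout=5)
    assert resp.status_code == 200
```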

Architecture Upgrade Benefits

Cross-platform reduction: testing and deployment now form a closed loop inside CodeBuddy, removing the need to visit six external platforms.

Step reduction: workflow steps dropped from 12 to 5, with a path to 2.

Time savings: test‑environment creation saved ~1 hour (≈60%); deployment saved ~0.5 day (≈50%).

Figure: Technical solution to deployment flow

From Requirement to Code Generation

After linking technical‑solution‑to‑deployment, the focus shifted to the upstream requirement‑to‑code chain. The resulting end‑to‑end loop is depicted in the "Requirement to Code Generation" diagram.

Figure: Requirement to code generation flow

Capability Accumulation

The practice has produced a complete capability system: 1 PRD‑Agent, 5 standardized templates, 3 knowledge bases, and 3 core skills. This system is generic and extensible, supporting most engineering scenarios.

Figure: Capability overview

AI Full‑Automation Delivery (L3)

Building on CodeBuddy, the team introduced the "Harness Engineering for AI Coding" concept: a standardized delivery framework that automates the entire pipeline from requirement and design through development, test, and release, while embedding quality gates (requirement review, solution review, MR checks) and a closed-loop feedback-and-repair mechanism.

Figure: AI full-automation architecture
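
A minimal sketch of the gated control flow, assuming each gate is a predicate over the work-in-progress artifact and failed gates feed a repair step; the 80-point thresholds are inferred from the scores reported in the next section, and the stage and gate names follow the article.

```python
from typing import Callable

STAGES = ["requirement", "design", "development", "test", "release"]

# Quality gates named in the article; thresholds and fields are assumptions.
GATES: dict[str, Callable[[dict], bool]] = {
    "requirement": lambda a: a["req_score"] >= 80,   # requirement review
    "design": lambda a: a["solution_score"] >= 80,   # solution review
    "development": lambda a: a["mr_checks_passed"],  # MR checks
}

def deliver(artifact: dict, repair: Callable[[str, dict], dict],
            max_rounds: int = 3) -> bool:
    """Advance stage by stage; loop a failed gate through repair, and
    escalate to a human once the repair budget is exhausted."""
    for stage in STAGES:
        gate = GATES.get(stage)
        rounds = 0
        while gate and not gate(artifact):
            if rounds >= max_rounds:
                return False  # hand off to a human reviewer
            artifact = repair(stage, artifact)  # closed-loop feedback repair
            rounds += 1
    return True
```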

Six pilot requirements were selected, and OpenClaw was used to experiment with end-to-end generation. The pilot yielded a standardized delivery framework and reusable capabilities for broader rollout.

Figure: Pilot results

Effectiveness Data

Across six pilot requirements and three iteration cycles, the team achieved:

Requirement and technical‑solution scores consistently above 80.

Average dialogue rounds per requirement: 2.

Code line adoption rate: >90%.

AI‑generated code proportion: >80%.
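
The article does not define the two code metrics formally; a plausible reading, stated here as an assumption, is sketched below.

```python
def adoption_rate(ai_lines_kept: int, ai_lines_generated: int) -> float:
    """Share of AI-generated lines that survive review into the final change."""
    return ai_lines_kept / ai_lines_generated

def ai_proportion(ai_lines_kept: int, total_lines: int) -> float:
    """Share of the final change that was AI-written."""
    return ai_lines_kept / total_lines

# Invented figures consistent with the reported thresholds (>90% and >80%):
assert adoption_rate(1840, 2000) > 0.90
assert ai_proportion(1840, 2200) > 0.80
```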

Practice Summary

The L2‑to‑L3 journey resolved most identified challenges, establishing a solid foundation for center‑wide promotion. Typical issues on the requirement side and coding side were catalogued and addressed, as shown in the summary diagrams.

Figure: Summary of typical issues

Future Outlook

The envisioned AI full‑automation platform consists of two parallel, collaborative frameworks: an AI delivery framework that automates the full pipeline with built‑in quality gates and self‑repair, and an AI governance framework that provides runtime observability, automatic incident remediation, and continuous knowledge‑base and skill updates. Together they form a "delivery‑driven efficiency, governance‑ensured quality" dual‑wheel system.

Figure: AI full-automation platform architecture
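
As a thought experiment only (the governance framework is still in planning, per the conclusion below), the dual-loop idea reduces to a sketch like this, with every callable hypothetical.

```python
import time

def governance_loop(observe, diagnose, remediate, update_knowledge_base):
    """Observe runtime signals, remediate incidents automatically, and fold
    each fix back into the knowledge base so skills stay current."""
    while True:
        signals = observe()  # logs, metrics, performance data
        for incident in diagnose(signals):
            fix = remediate(incident)  # automatic incident remediation
            update_knowledge_base(incident, fix)
        time.sleep(60)  # illustrative polling interval
```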

Conclusion

While the current OpenClaw/CodeBuddy implementation provides a prototype of the AI delivery framework, the workflow and skill management remain partially manual and decentralized. The governance framework is still in planning. The next step is to integrate both frameworks into the AMS one‑stop R&D efficiency platform for centralized, online management.

Tags: R&D management, AI coding, DevOps, Software Delivery, AI automation
Written by Tencent Technical Engineering

Official account of Tencent Technology. A platform for publishing and analyzing Tencent's technological innovations and cutting-edge developments.