Can AI Self‑Test and Fix Its Own Code? A Test‑Driven AI Programming Workflow

This article introduces a test‑driven AI programming loop that tackles the “last‑mile” problem of AI‑generated code by adding automated acceptance, self‑testing, bug fixing, and continuous iteration. The loop is demonstrated through a favorite‑count feature repair case, and future enhancements are outlined.

DaTaobao Tech

Problem: The "Last Mile" of AI‑Assisted Programming

Even when AI correctly understands requirements, designs solutions, and generates code, the result often contains minor bugs that force developers to either edit line by line (manual post‑editing) or negotiate fixes through lengthy conversation (dialogue‑based repair), both of which defeat the purpose of AI acceleration.

Proposed Test‑Driven AI Programming Workflow

The workflow closes the loop by introducing automated acceptance and feedback mechanisms. Clear test cases serve as quality gates, allowing the AI to evaluate its output, iterate on failures, and ultimately behave like a competent programmer capable of self‑repair.
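
The closed loop itself is small: run the tests, feed failures back to the model, apply its patch, and re‑run. Below is a minimal sketch in Java, assuming hypothetical TestRunner, ModelClient, and Patch abstractions; none of these names come from the article or from iFlow CLI.

```java
/** Minimal sketch of the test-driven loop: the quality gate (tests) decides
 *  whether the AI's code is accepted or sent back for another attempt.
 *  All types here are illustrative assumptions, not real iFlow CLI APIs. */
public final class TestDrivenLoop {

    interface TestRunner { TestReport run(); }               // runs the acceptance tests
    interface ModelClient { Patch proposeFix(String log); }  // asks the model for a fix
    interface Patch { void apply(); }                        // applies the model's change
    record TestReport(boolean allPassed, String failureLog) {}

    static void closeTheLoop(TestRunner tests, ModelClient model, int maxAttempts) {
        TestReport report = tests.run();                      // initial quality gate
        for (int i = 0; i < maxAttempts && !report.allPassed(); i++) {
            model.proposeFix(report.failureLog()).apply();    // AI revises its own code
            report = tests.run();                             // re-evaluate against the gate
        }
        if (!report.allPassed()) {
            throw new IllegalStateException("Gate still red after " + maxAttempts + " attempts");
        }
    }
}
```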

Overall Architecture

The core stack includes:

Tool: iFlow CLI

Model: qwen3‑coder‑plus

Core Components

Deployment Agent: java-dev-project-deploy – automates pre‑release environment deployment and status polling.

When a user wants to deploy code to a project environment, call this agent. Steps:
1. Get project env info from .iflow/dev/progressInfo.json.
2. Retrieve app env ID via group_env_apres_list.
3. Deploy with apre_deploy.
4. Poll every 50 s using apre_get until selfStatus changes from DEPLOYING to RUNNING (or timeout after 10 min).
5. Log deployment details to .iflow/dev/codingLog.md.
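
Step 4 is essentially a bounded wait loop. Below is a minimal sketch, assuming a hypothetical DeployClient wrapper around the apre_get call; the real agent drives these calls as iFlow CLI tools rather than Java code.

```java
import java.time.Duration;
import java.time.Instant;

/** Sketch of step 4: poll apre_get every 50 s until the status becomes
 *  RUNNING, giving up after 10 minutes. DeployClient is an assumed wrapper. */
final class DeployStatusPoller {

    interface DeployClient { String selfStatus(String apreEnvId); } // wraps apre_get

    static boolean waitUntilRunning(DeployClient client, String apreEnvId)
            throws InterruptedException {
        Instant deadline = Instant.now().plus(Duration.ofMinutes(10)); // timeout protection
        while (Instant.now().isBefore(deadline)) {
            String status = client.selfStatus(apreEnvId);
            if ("RUNNING".equals(status)) return true;       // deployment succeeded
            if (!"DEPLOYING".equals(status)) return false;   // unexpected terminal state
            Thread.sleep(Duration.ofSeconds(50).toMillis()); // 50 s poll interval
        }
        return false; // timed out after 10 minutes
    }
}
```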

Prompt Design for the Agent

---
name: java-dev-project-deploy
description: Use this agent when the user wants to deploy code to a project environment. The agent handles the complete deployment workflow including environment validation, application environment retrieval, deployment execution, and status monitoring.
---
You are an expert Java deployment automation agent specialized in managing project environment deployments. Your primary responsibility is to orchestrate the complete deployment workflow with precision and reliability.
## Core Responsibilities
1. Validate project environment configuration
2. Retrieve application environment details
3. Execute deployment process
4. Monitor deployment status until completion
5. Log deployment results for audit trail
## Deployment Workflow
### Step 1: Project Environment Validation
- Check for the existence of `.iflow/dev/progressInfo.json`
- Extract `groupEnvId`
- Prompt user if missing
### Step 2: Application Environment Retrieval
- Call `group_env_apres_list` with the project env ID
- Extract `apreEnvId` and update the JSON file
### Step 3: Deployment Execution
- Call `apre_deploy`
- Record start time and metadata
### Step 4: Status Monitoring
- Poll `apre_get` every 50 s
- Continue while `selfStatus` is DEPLOYING
- Stop when it becomes RUNNING or after 10 min timeout
### Step 5: Deployment Logging
- Append timestamp, env info, branch, and result to `.iflow/dev/codingLog.md`
---
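
For orientation, the progress file read in Step 1 and updated in Step 2 might look like the sketch below. Only groupEnvId and apreEnvId are named in the prompt; the values and any other fields are assumptions.

```json
{
  "groupEnvId": "env-12345",
  "apreEnvId": "apre-67890"
}
```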

Key Features

Status Polling: Checks selfStatus every 50 s and treats the transition from DEPLOYING to RUNNING as confirmation of success.

Timeout Protection: 10‑minute timeout prevents endless waiting.

Logging: Each deployment records timestamp, environment, branch, and outcome.
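
A logged entry in .iflow/dev/codingLog.md could take a simple append‑only form. The recorded fields (timestamp, environment, branch, outcome) come from the article; the layout and values below are assumptions.

```markdown
## Deploy 2025-01-01 12:00
- Environment: apre-67890 (group env-12345)
- Branch: feature/favorite-count-fix
- Outcome: RUNNING (success)
```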

Experimental Case: Automatic Fix of Favorite‑Count Feature

A simple requirement, removing Feizhu (Fliggy) product counts from the favorite‑count service, was used to validate the workflow.

Steps executed automatically:

1. Problem discovery via test failure.
2. The AI locates the faulty code in HsfFavoriteCountService.getFavoriteCount and modifies it.
3. The AI commits the change and triggers the java-dev-project-deploy agent.
4. Post‑deployment verification repeats the test until it passes.

The entire cycle required no human intervention.
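
The acceptance test that drove this cycle could be as small as the sketch below. Only the HsfFavoriteCountService.getFavoriteCount name and the Feizhu exclusion come from the article; the Favorite record, the stub counting logic, and the fixture are assumptions standing in for the real service.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import java.util.List;

/** Hypothetical acceptance test for the favorite-count fix. The stub below
 *  stands in for the real HsfFavoriteCountService.getFavoriteCount; the
 *  record, product types, and signature are illustrative assumptions. */
class FavoriteCountTest {

    record Favorite(String itemId, String productType) {}

    /** Stub with the expected post-fix behavior: Feizhu items are not counted. */
    static long getFavoriteCount(List<Favorite> favorites) {
        return favorites.stream()
                .filter(f -> !"FEIZHU".equals(f.productType())) // exclude Feizhu products
                .count();
    }

    @Test
    void countExcludesFeizhuProducts() {
        List<Favorite> favorites = List.of(
                new Favorite("i1", "TAOBAO"),
                new Favorite("i2", "TAOBAO"),
                new Favorite("i3", "FEIZHU")); // must not be counted after the fix
        assertEquals(2L, getFavoriteCount(favorites));
    }
}
```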

Conclusion and Future Directions

The experiment shows that providing the AI with explicit acceptance criteria and a feedback loop enables self‑validation and iterative improvement. Future work includes richer test generation, stronger bug diagnosis, task decomposition, faster hot‑deployment APIs, and additional agents for code review and performance optimization.

Tags: code generation, deployment, continuous integration, AI programming, test‑driven development