TestAgent: Open-Source 7B LLM That Supercharges Automated Test Generation

TestAgent is an open-source test-domain LLM built on a 7B model. It delivers multi-language test-case generation, automatic assert completion, and a rapid deployment framework, and it offers industry-leading pass@1 scores, a ChatBot UI, and detailed setup instructions for diverse hardware environments.

Ant R&D Efficiency

What is TestAgent?

TestAgent is an open-source “agent” for software testing that combines a 7-billion-parameter large language model (TestGPT-7B) with engineering tools to automate test-case generation and assert completion, acting as an around-the-clock testing assistant.

Key Features

Multi-language test case generation: supports Java, Python, and JavaScript, with Go and C++ planned. Generates readable, scenario-rich test cases, outperforming traditional tools such as EvoSuite, Randoop, and SmartUnit.

Assert completion: automatically adds missing assert statements to existing test cases, enabling batch improvement of test suites.

Engineering framework: includes a local model deployment pipeline, a ChatBot UI, rapid model startup, and options for private, on-premise deployment.
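To make the assert-completion feature concrete, here is a hypothetical before/after pair. The function and test names are illustrative, not actual TestGPT-7B output; they only show the kind of transformation the feature performs:

```python
def slugify(title: str) -> str:
    """Code under test: turn a title into a URL slug."""
    return "-".join(title.lower().split())

def test_slugify_before():
    # Before assert completion: the behavior is exercised but never checked.
    slugify("Hello World")

def test_slugify_after():
    # After assert completion: the model fills in the missing assertions.
    result = slugify("Hello World")
    assert result == "hello-world"
    assert " " not in result

test_slugify_before()
test_slugify_after()
print("both tests passed")
```

Batch-running this transformation over an existing suite is what turns "smoke tests" that merely execute code into tests that actually verify it.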

Performance Highlights

TestGPT-7B achieves industry-leading pass@1 rates and higher average test-scenario coverage than existing open-source models. Benchmark figures cover Java, Python, and JavaScript test generation as well as Java assert completion.

Architecture

The system couples a pre‑trained LLM with domain‑specific tools to overcome the limitations of generic models in complex integration test generation and domain‑specific knowledge.
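The coupling of model and tools can be sketched as a minimal generate-then-verify loop. This is an assumption-laden illustration, not Test-Agent's real pipeline: the model call is stubbed out, and `agent_generate_and_verify` is a hypothetical name:

```python
import textwrap

SOURCE = textwrap.dedent("""\
    def add(a, b):
        return a + b
""")

def llm_generate_test(source: str) -> str:
    """Stub standing in for a TestGPT-7B call; a real agent would prompt the model."""
    return "assert add(2, 3) == 5"

def agent_generate_and_verify(source: str) -> bool:
    """Couple the LLM with a verification tool: generate a test, execute it
    against the code under test, and report whether it passes."""
    namespace: dict = {}
    exec(source, namespace)               # load the code under test
    test_code = llm_generate_test(source)
    try:
        exec(test_code, namespace)        # run the generated test
        return True
    except AssertionError:
        return False

print(agent_generate_and_verify(SOURCE))  # True: the generated test passes
```

The verification step is the "domain-specific tool" half of the architecture: it grounds the model's output in actual execution rather than trusting the generated text.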

Quick Start Guide

Prerequisites

Download the TestGPT-7B model from ModelScope or Hugging Face, then clone the repository and install its dependencies:

git clone https://github.com/codefuse-ai/Test-Agent
cd Test-Agent
pip install -r requirements.txt

Ensure at least 14 GB of GPU memory is available.
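A quick sanity check before starting the worker can save a failed launch. `enough_gpu_memory` is a hypothetical helper (not part of Test-Agent), and the optional `torch.cuda.mem_get_info` probe applies only to CUDA devices:

```python
REQUIRED_GIB = 14  # per the docs: ~14 GB of GPU memory for TestGPT-7B

def enough_gpu_memory(free_bytes: int, required_gib: int = REQUIRED_GIB) -> bool:
    """Return True if the free GPU memory covers the model's requirement."""
    return free_bytes >= required_gib * 1024**3

try:
    import torch  # optional: probe the first CUDA device if one is present
    if torch.cuda.is_available():
        free, total = torch.cuda.mem_get_info()
        print(f"free: {free / 1024**3:.1f} GiB, sufficient: {enough_gpu_memory(free)}")
    else:
        print("no CUDA device detected; check MPS/NPU memory manually")
except ImportError:
    print("torch not installed; skipping the automatic check")
```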

Start Services

Run the controller:

python3 -m chat.server.controller

Run the model worker (example for Apple Silicon):

python3 -m chat.server.model_worker --model-path models/testgpt --device mps

Launch the web UI:

python3 -m chat.server.gradio_testgpt

Access the UI at http://0.0.0.0:7860. Additional device flags (--device xpu, --device npu, --device cpu) and --num-gpus allow deployment on Intel, Huawei, or CPU-only environments.
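The device options above can be wrapped in a small launcher helper. This is a hypothetical convenience sketch, not part of the project; the mapping of NVIDIA hardware to a default `cuda` device is my assumption, while mps/xpu/npu/cpu come from the flags listed above:

```python
# Hypothetical mapping from a hardware label to the worker's --device flag.
DEVICE_FLAGS = {
    "nvidia": "cuda",   # assumed default for NVIDIA GPUs
    "apple": "mps",     # Apple Silicon
    "intel": "xpu",     # Intel GPUs
    "huawei": "npu",    # Huawei NPUs
    "none": "cpu",      # CPU-only fallback
}

def worker_command(hardware: str, model_path: str = "models/testgpt") -> str:
    """Build the model-worker command line for the given hardware label."""
    device = DEVICE_FLAGS.get(hardware, "cpu")
    return (f"python3 -m chat.server.model_worker "
            f"--model-path {model_path} --device {device}")

print(worker_command("apple"))
```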

Future Roadmap

Expand test‑domain applications such as knowledge Q&A and scenario analysis.

Open copilot‑style frameworks for test‑knowledge embedding, generic tool APIs, and intelligent test agents.

Scale the model family to 13 B and 34 B parameters.

[Figure: TestAgent overview]
[Figure: Performance benchmark]
[Figure: System architecture]
Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: model deployment, software testing, Open Source, large language model, AI testing, test generation
Written by

Ant R&D Efficiency

We are the Ant R&D Efficiency team, focused on fast development, experience-driven success, and practical technology.
