TestAgent: Open-Source 7B LLM That Supercharges Automated Test Generation
TestAgent is an open-source 7B test-domain LLM offering multi-language test-case generation, automatic assert completion, and a rapid deployment framework. It reports industry-leading pass@1 scores and ships with a ChatBot UI plus setup instructions for diverse hardware environments.
What is TestAgent?
TestAgent is an open-source "agent" for software testing that pairs a 7-billion-parameter large language model (TestGPT-7B) with engineering tools to automate test-case generation and assert completion, acting as an around-the-clock testing assistant.
Key Features
Multi-language test case generation: Supports Java, Python, and JavaScript, with Go and C++ planned. Generates readable, scenario-rich test cases, outperforming traditional tools such as EvoSuite, Randoop, and SmartUnit.
Assert completion : Automatically adds missing assert statements to existing test cases, enabling batch improvement of test suites.
Engineering framework : Includes a local model deployment pipeline, a ChatBot UI, rapid model startup, and options for private, on‑premise deployment.
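To make the assert-completion feature concrete, here is a hypothetical before/after pair. The function, test names, and assertions are invented for illustration; TestAgent's actual output will differ, but the idea is the same: the model takes a test skeleton that exercises code without checking results and fills in the missing assert statements.

```python
import unittest

def divide(a: float, b: float) -> float:
    return a / b

# A skeleton test as it might exist before completion: it calls the
# code under test but asserts nothing, so it can never fail.
class TestDivideBefore(unittest.TestCase):
    def test_divide(self):
        result = divide(10, 4)  # no assert statement yet

# The same test after assert completion fills in the missing checks,
# including an edge case the skeleton did not cover.
class TestDivideAfter(unittest.TestCase):
    def test_divide(self):
        result = divide(10, 4)
        self.assertEqual(result, 2.5)

    def test_divide_by_zero(self):
        with self.assertRaises(ZeroDivisionError):
            divide(10, 0)
```

Running the completed suite in batch over an existing codebase is how the "batch improvement of test suites" described above would look in practice.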
Performance Highlights
TestGPT-7B achieves industry-leading pass@1 rates and higher average test-scenario coverage than existing open-source models. Benchmark figures in the original project materials report results for Java, Python, and JavaScript generation as well as Java assert completion.
Architecture
The system couples a pre‑trained LLM with domain‑specific tools to overcome the limitations of generic models in complex integration test generation and domain‑specific knowledge.
Quick Start Guide
Prerequisites
Download the model from ModelScope or HuggingFace.
git clone https://github.com/codefuse-ai/Test-Agent
cd Test-Agent
pip install -r requirements.txt

Ensure at least 14 GB of GPU memory is available.
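As a quick pre-flight check for the 14 GB requirement, a small helper like the following can be run before loading the model. This is a minimal sketch assuming a PyTorch CUDA environment; the function name is invented here, and on Apple Silicon or other backends the check simply returns False and you would verify memory by other means.

```python
REQUIRED_GIB = 14  # the README's stated minimum for TestGPT-7B

def has_enough_gpu_memory(required_gib: int = REQUIRED_GIB) -> bool:
    """Return True if the first CUDA device has at least `required_gib` GiB."""
    try:
        import torch  # imported lazily so the helper degrades gracefully
    except ImportError:
        return False
    if not torch.cuda.is_available():
        return False
    total_bytes = torch.cuda.get_device_properties(0).total_memory
    return total_bytes >= required_gib * 1024**3
```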
Start Services
Run the controller:

python3 -m chat.server.controller

Run the model worker (example for Apple Silicon):

python3 -m chat.server.model_worker --model-path models/testgpt --device mps

Launch the web UI:

python3 -m chat.server.gradio_testgpt

Access the UI at http://0.0.0.0:7860. Additional device flags (--device xpu, --device npu, --device cpu) and --num-gpus allow deployment on Intel, Huawei, or CPU-only environments.
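Once all three services are up, a quick stdlib-only smoke test can confirm the web UI is reachable. This is a sketch assuming the default port 7860 from the command above; the helper name is invented for illustration.

```python
from urllib.request import urlopen
from urllib.error import URLError

def ui_is_up(url: str = "http://127.0.0.1:7860", timeout: float = 3.0) -> bool:
    """Return True if the Gradio web UI answers an HTTP GET with 200 OK."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False
```

If this returns False, check that the controller and model worker started without errors before restarting the UI.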
Future Roadmap
Expand test‑domain applications such as knowledge Q&A and scenario analysis.
Open copilot‑style frameworks for test‑knowledge embedding, generic tool APIs, and intelligent test agents.
Scale the model family to 13B and 34B parameters.
Ant R&D Efficiency
We are the Ant R&D Efficiency team, focused on rapid development, a strong developer experience, and practical technology.