Generate Complete API Test Cases with AI and Python in Seconds
Learn how to automate API test data creation by leveraging a large language model to parse OpenAPI specifications and generate comprehensive pytest functions covering valid, missing, type‑error, boundary, and format scenarios, with a step‑by‑step Python script that produces ready‑to‑run test files.
In API automation testing, designing test data is often more time‑consuming than writing code. Manually enumerating scenarios such as valid inputs, missing required fields, type mismatches, out‑of‑range values, format errors, and boundary conditions is inefficient and error‑prone, especially when APIs change.
Core Idea
The approach combines two key pieces of information: the structured API definition (e.g., OpenAPI/Swagger) that describes field types, required status, and constraints, and the reasoning power of a large language model (LLM) to automatically generate both valid and invalid test data combinations. The goal is to input an OpenAPI spec and output a set of pytest test functions, each representing a typical scenario.
Preparation
Install the required Python packages:

pip install dashscope pyyaml requests pytest

dashscope provides access to the Qwen model API; pyyaml parses OpenAPI YAML files (in this minimal script the spec is passed to the model as raw text); requests and pytest send the requests and run the tests.
Create an OpenAPI document for the target endpoint. The example file user_api.yaml defines a POST /register endpoint with fields username (string, 3-20 chars, required), age (integer, 0-150, optional), and email (string, email format, required). A sketch of such a file follows.
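This is a minimal sketch of what user_api.yaml could look like, reconstructed from the field descriptions above; your actual document may differ in metadata and response details:

openapi: 3.0.0
info:
  title: User API
  version: 1.0.0
paths:
  /register:
    post:
      summary: Register a new user
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required: [username, email]
              properties:
                username:
                  type: string
                  minLength: 3
                  maxLength: 20
                age:
                  type: integer
                  minimum: 0
                  maximum: 150
                email:
                  type: string
                  format: email
      responses:
        "201":
          description: User created
        "400":
          description: Validation error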
Constructing the Prompt
The prompt guides the LLM to produce the desired test code. It explicitly lists the interface rules, the test scenarios to cover, and the code formatting requirements.
PROMPT = """
You are an experienced test engineer. Based on the following OpenAPI definition, generate pytest test functions covering these scenarios:
1. Normal case: all fields valid
2. Missing required field `username`
3. `username` shorter than 3 characters
4. `username` longer than 20 characters
5. `age` less than 0
6. `age` greater than 150
7. Invalid email format
Requirements:
- Use `requests.post` to call "http://localhost:8000/register"
- Each test function starts with `test_`
- Assert status code 201 for the normal case, 400 for error cases
- Output only Python code, no explanations
{spec}
"""

Generating the Script
The following Python script reads the OpenAPI file, formats the prompt, calls the Qwen model via DashScope, cleans the returned code, and writes it to test_register.py.
import os
import yaml  # available for structured parsing of the spec; here the raw text is sent to the model
from dashscope import Generation

# Read the OpenAPI spec
with open("user_api.yaml", "r", encoding="utf-8") as f:
    spec = f.read()

# Build the prompt
prompt = PROMPT.format(spec=spec)

# Call the Qwen model
api_key = os.getenv("DASHSCOPE_API_KEY")
if not api_key:
    raise EnvironmentError("Please set the DASHSCOPE_API_KEY environment variable")

response = Generation.call(
    model="qwen-max",
    prompt=prompt,
    api_key=api_key,
    temperature=0.1,  # reduce randomness for stable output
)

if response.status_code == 200:
    code = response.output.text.strip()
    # Remove possible markdown fences around the model output
    if code.startswith("```python"):
        code = code[9:]
    if code.endswith("```"):
        code = code[:-3]
    code = code.strip()  # drop whitespace left behind by the fences
    with open("test_register.py", "w", encoding="utf-8") as f:
        f.write(code)
    print("Test cases generated: test_register.py")
else:
    print(f"Generation failed: {response.code} - {response.message}")

Resulting Test File
Running the script produces a test_register.py file similar to the example below. Each function sends a request with specific data and asserts the expected HTTP status.
import requests

BASE_URL = "http://localhost:8000"

def test_normal_case():
    data = {"username": "alice", "age": 25, "email": "alice@example.com"}
    resp = requests.post(f"{BASE_URL}/register", json=data)
    assert resp.status_code == 201

def test_missing_username():
    data = {"age": 25, "email": "test@example.com"}
    resp = requests.post(f"{BASE_URL}/register", json=data)
    assert resp.status_code == 400

def test_username_too_short():
    data = {"username": "ab", "email": "test@example.com"}
    resp = requests.post(f"{BASE_URL}/register", json=data)
    assert resp.status_code == 400

def test_username_too_long():
    data = {"username": "a" * 21, "email": "test@example.com"}
    resp = requests.post(f"{BASE_URL}/register", json=data)
    assert resp.status_code == 400

def test_age_negative():
    data = {"username": "bob", "age": -1, "email": "bob@example.com"}
    resp = requests.post(f"{BASE_URL}/register", json=data)
    assert resp.status_code == 400

def test_age_too_high():
    data = {"username": "charlie", "age": 200, "email": "charlie@example.com"}
    resp = requests.post(f"{BASE_URL}/register", json=data)
    assert resp.status_code == 400

def test_invalid_email():
    data = {"username": "david", "email": "not-an-email"}
    resp = requests.post(f"{BASE_URL}/register", json=data)
    assert resp.status_code == 400

Why This Beats Manual Effort
- Rule-driven generation: the AI uses explicit metadata (minLength, maximum, required) from the OpenAPI spec, avoiding guesswork.
- Full coverage: the prompt enumerates every edge case explicitly, preventing human omission.
- Reproducible: when the API changes, re-run the script to obtain updated tests instantly.
- Time-saving: what used to take ~30 minutes of manual design is produced in seconds.
Precautions
- Human review remains essential; the model may misinterpret business-specific rules.
- Never upload sensitive data to external LLM services; use a locally hosted model for confidential APIs (see the first sketch below).
- Use unique identifiers and fixture-based cleanup to keep test environments stable (see the second sketch below).
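As one way to keep confidential specs in-house, the DashScope call can be swapped for a locally hosted, OpenAI-compatible server. This is a minimal sketch, assuming a server such as Ollama or vLLM listening on localhost:11434; the URL and model name are placeholders to adapt to your setup.

import requests

# Placeholder endpoint: Ollama and vLLM both expose an OpenAI-compatible
# /v1/chat/completions route; adjust host, port, and model to your server.
LOCAL_LLM_URL = "http://localhost:11434/v1/chat/completions"

def generate_with_local_model(prompt: str) -> str:
    resp = requests.post(
        LOCAL_LLM_URL,
        json={
            "model": "qwen2.5",  # whichever model your local server serves
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.1,
        },
        timeout=120,
    )
    resp.raise_for_status()
    # Standard OpenAI-compatible response shape
    return resp.json()["choices"][0]["message"]["content"]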
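For the cleanup point, a pytest fixture can generate a unique username per test and tear it down afterwards. A minimal sketch, assuming a hypothetical DELETE /users/{username} endpoint; replace it with whatever teardown mechanism your API actually provides:

import uuid

import pytest
import requests

BASE_URL = "http://localhost:8000"

@pytest.fixture
def unique_username():
    # A fresh username per test avoids collisions across repeated runs
    username = f"user_{uuid.uuid4().hex[:8]}"
    yield username
    # Hypothetical cleanup endpoint -- swap in your API's real teardown
    requests.delete(f"{BASE_URL}/users/{username}")

def test_register_isolated(unique_username):
    data = {
        "username": unique_username,
        "age": 25,
        "email": f"{unique_username}@example.com",
    }
    resp = requests.post(f"{BASE_URL}/register", json=data)
    assert resp.status_code == 201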
Conclusion
Manual test‑data construction is becoming obsolete. By harnessing AI’s ability to understand structured API definitions, teams can shift focus from repetitive data creation to designing richer test scenarios and validating business logic. Python’s flexibility makes it an ideal glue language for connecting AI outputs with existing test pipelines.
Quick‑Start Checklist
1. Prepare your OpenAPI document.
2. Set the DASHSCOPE_API_KEY environment variable.
3. Run generate_test_cases.py.
4. Review the generated test_register.py and execute it with pytest.