Build an AI-Powered API Test Framework on Mac with Ollama and Python
This guide shows how to combine a locally deployed Ollama LLM with Python Requests to build an AI-driven automated API testing framework. The framework generates test data, performs smart assertions, and produces markdown reports, dramatically reducing manual effort while improving test quality.
Most of the time in traditional API testing goes into writing test data and assertions, and scripts become costly to maintain whenever an interface changes. By integrating a locally hosted LLM (Ollama) with Python, you can give test scripts a "brain" that reads API definitions, generates data, evaluates responses, and writes human-readable reports.
Framework Architecture
The framework consists of three layers that mimic a human tester’s workflow:
Execution Layer (Python + Requests): Sends HTTP requests and captures raw responses.
Brain Layer (Ollama): Reads the API spec, creates test cases, and analyzes JSON responses to decide if business logic passes.
Report Layer (Markdown): Compiles results into a readable analysis report.
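The three layers can be wired into one perception-decision-feedback loop. The sketch below is illustrative only: the function name run_pipeline is my own, and every callable is a stub standing in for the real pieces built in the steps that follow.

```python
# Illustrative data flow through the three layers; every callable is a stub.
def run_pipeline(generate_cases, send_request, judge, report):
    logs = []
    for case in generate_cases():           # Brain layer: produce test data
        response = send_request(case)       # Execution layer: HTTP call
        passed, reason = judge(response)    # Brain layer: smart assertion
        logs.append({"case": case, "pass": passed, "reason": reason})
    return report(logs)                     # Report layer: summarize results

# Stub wiring just to show the shape of the loop:
result = run_pipeline(
    generate_cases=lambda: [{"username": "admin", "password": "123456"}],
    send_request=lambda case: {"code": 200, "msg": "ok"},
    judge=lambda resp: (resp.get("code") == 200, "status ok"),
    report=lambda logs: f"{sum(l['pass'] for l in logs)}/{len(logs)} passed",
)
print(result)  # → 1/1 passed
```

Keeping the layers behind plain callables like this also makes the framework easy to test without a live model or service.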
Step 1 – Environment Setup and Brain Wrapper
Ensure Ollama is running on your Mac and download a suitable model such as llama3 or qwen2.5. Create a simple OllamaBrain class to handle communication with the local LLM.
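Assuming a default Ollama install (desktop app or Homebrew), preparing the model from a terminal might look like this; the model name mirrors the MODEL_NAME constant used below.

```shell
# Pull a model (one-time download)
ollama pull llama3

# Start the server if it is not already running
# (the desktop app starts it automatically)
ollama serve

# Sanity check that the REST API answers on the default port
curl http://localhost:11434/api/tags
```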
import requests
import json
import time
OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL_NAME = "llama3"
class OllamaBrain:
    def __init__(self, model):
        self.model = model

    def ask(self, prompt):
        """Send a prompt to the local AI and return the plain-text reply."""
        payload = {"model": self.model, "prompt": prompt, "stream": False}
        try:
            response = requests.post(OLLAMA_URL, json=payload, timeout=120)
            if response.status_code == 200:
                return response.json()['response']
            else:
                return f"AI call failed: {response.text}"
        except Exception as e:
            return f"Connection error: {e}"

Step 2 – AI-Generated Test Cases
Instead of hard‑coding data, feed the API definition to the LLM and let it produce a set of JSON test cases covering normal and error scenarios.
def generate_test_cases(ai, api_doc):
    """Use AI to create test data from an API spec."""
    prompt = f"""
You are an experienced test engineer. Based on the following API definition, generate 5 JSON test cases.
Requirements:
1. Two normal cases (expected success).
2. Three error cases (missing params, type errors, expected failures).
Return a JSON array only.
API definition:
{api_doc}
"""
    response = ai.ask(prompt)
    try:
        return json.loads(response.strip())
    except json.JSONDecodeError:
        print("AI returned malformed JSON, using fallback data")
        return [{"username": "test", "password": "123"}]
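In practice, local models often wrap the array in markdown fences or surround it with commentary, so a bare json.loads fails even when the cases themselves are usable. A hedged hardening step (the helper name below is my own) is to cut the first JSON array out of the reply before parsing:

```python
import json
import re

def extract_json_array(text):
    """Best-effort extraction of the first JSON array from an LLM reply."""
    # Strip markdown code fences the model may have added
    text = re.sub(r"```(?:json)?", "", text)
    # Grab everything from the first '[' to the last ']'
    match = re.search(r"\[.*\]", text, re.DOTALL)
    if not match:
        raise ValueError("no JSON array found in reply")
    return json.loads(match.group(0))

reply = 'Here are the cases:\n```json\n[{"username": "admin"}]\n```'
print(extract_json_array(reply))  # → [{'username': 'admin'}]
```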
swagger_doc = """
POST /login
Parameters: username (string), password (string)
Logic: admin/123456 succeeds, returns code:200
"""

Step 3 – Smart Assertions
Replace static assert response.status_code == 200 with an AI‑driven check that evaluates business logic based on the full response payload.
def smart_assert(ai, response_data, expected_status):
    """Ask AI to verify whether the actual response matches expectations."""
    prompt = f"""
Interface test assertion analysis:
Expected: {expected_status}
Actual JSON: {json.dumps(response_data)}
Does the response meet the expectation? Reply "PASS" if it does, otherwise point out the error fields and reasons.
"""
    result = ai.ask(prompt)
    if "PASS" in result.upper():
        return True, "AI validation passed"
    else:
        return False, result
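One caveat: checking for "PASS" as a substring will misfire whenever a failure explanation happens to contain the word "pass" (e.g. "the password check did not pass"). A stricter variant, which is my own convention rather than part of the original design, is to instruct the model to start its reply with a verdict and accept only PASS as the first token:

```python
def parse_verdict(reply):
    """Treat the reply as passing only when its first word is exactly PASS."""
    first_token = reply.strip().split()[0].strip(".:") if reply.strip() else ""
    return first_token.upper() == "PASS"

print(parse_verdict("PASS: all fields match"))              # → True
print(parse_verdict("The password check did not pass."))    # → False
```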
ai_brain = OllamaBrain(MODEL_NAME)
mock_resp = {"code": 200, "msg": "Login successful", "data": "token_xyz"}
is_pass, reason = smart_assert(ai_brain, mock_resp, "Login successful")
print(f"Test result: {'PASS' if is_pass else 'FAIL'} - {reason}")

Step 4 – Automatic Markdown Report
After execution, collect logs and let the LLM generate a concise markdown test report, including summary statistics, risk analysis, and improvement suggestions.
def generate_report(ai, test_logs):
    """Summarize logs into a markdown daily report."""
    prompt = f"""
Here are today's API automation logs:
{json.dumps(test_logs, indent=2)}
Produce a markdown report containing:
1. Overview (total cases, passes, failures).
2. Potential risk analysis.
3. Recommendations.
"""
    report = ai.ask(prompt)
    with open("api_test_report.md", "w", encoding="utf-8") as f:
        f.write("### API Automation Daily Test Report\n")
        f.write(f"Generated at: {time.strftime('%Y-%m-%d %H:%M:%S')}\n")
        f.write(report)
    print("Report saved as api_test_report.md")
logs = [
    {"case": "login_normal", "status": "PASS", "time": "0.5s"},
    {"case": "login_wrong_password", "status": "FAIL", "error": "200 returned but msg indicates system error"}
]
# generate_report(ai_brain, logs)

Conclusion and Outlook
By coupling a local Ollama model with Python, the framework achieves a perception‑decision‑feedback loop:
Efficiency: No manual data‑construction code.
Quality: AI catches business‑logic errors that simple status‑code checks miss.
Documentation: Real‑time AI‑generated reports replace static, form‑filled summaries.
This approach demonstrates how LLMs can augment traditional testing pipelines, turning scripts into intelligent agents that understand specifications, generate inputs, evaluate outcomes, and communicate results.