Unified Interface Automation Testing Tool: Design, Implementation, and Real‑World Practice
This article is a comprehensive guide to building and applying a unified API automation testing tool: its background, framework design, core features, data and configuration management, public functions, test case handling, logging, execution workflow, CI integration, and monitoring in a search service environment.
The tool was developed after the 360 Technology Carnival to address common QA challenges: choosing a testing tool, writing automation code, and covering diverse test scenarios.
Background and Framework Design – It explains the concept of interfaces in a search service architecture, illustrating how various modules communicate via JSON, text, or HTML payloads, and why API testing is a cost‑effective way to ensure stability and rapid feedback.
Core Features of Auto_ApiTest – The tool supports multiple kinds of requests (GET, POST, HTTPS, encrypted, and authenticated), separates test data from code, provides online monitoring and alerting, offers flexible execution (single or batch calls), and generates clear, structured test reports.
Data Management – Variable input parameters, scenario values, and expected results are stored in CSV files (comma‑separated columns). Examples of normal and special data construction are shown, including online log extraction for dynamic data such as video interfaces.
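To make the data layout concrete, here is a minimal sketch of such a CSV test-data file, with comment lines filtered out the same way the tool does. The column names (`query`, `scene`, `expected_status`) and row values are illustrative assumptions, not the tool's actual schema:

```python
import csv
import io

# Hypothetical CSV test-data file: each row holds input parameters and
# an expected result; lines starting with '#' are treated as comments.
SAMPLE_CSV = """\
# query,scene,expected_status
query,scene,expected_status
weather beijing,normal,200
movie tickets,video,200
"""

# Skip comment lines, then let DictReader map each row to a dict
# keyed by the header line.
rows = list(csv.DictReader(
    filter(lambda line: not line.startswith('#'), io.StringIO(SAMPLE_CSV))))
print(rows[0]["query"])            # -> weather beijing
print(rows[0]["expected_status"])  # -> 200
```

Keeping data in plain CSV like this is what lets non-programmers add scenarios without touching the test code.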
Configuration Layer – Uses per‑API config.ini files for general settings (threads, log paths, ports) and per‑test customizations. Additional JSON and Python configuration files handle node mappings, report titles, and IP mappings for monitoring.
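A per-API config.ini can be consumed with Python's standard configparser. The section and option names below (`threads`, `log_path`, `port`) are illustrative assumptions based on the settings the article lists, not the tool's actual schema:

```python
import configparser
import io

# Hypothetical per-API config.ini with general execution settings.
SAMPLE_INI = """\
[general]
threads = 8
log_path = ./logs/mip
port = 8080
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE_INI)  # in practice: config.read("config.ini")
print(config.getint("general", "threads"))  # -> 8
print(config.get("general", "log_path"))    # -> ./logs/mip
```

Separating these knobs into per-API files lets each interface tune thread counts and log paths without code changes.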
Public Function Layer
```python
# encoding: utf-8
import csv


class DataReader:
    def __init__(self, data_file_path):
        self.data_file_path = data_file_path
        self.data_file = open(self.data_file_path)
        # Skip comment lines starting with '#'
        self.data_reader = csv.DictReader(
            filter(lambda row: row[0] != '#', self.data_file))
        self.datas = [row for row in self.data_reader]

    def getDatas(self):
        return self.datas
```

Functions for fetching encrypted/POST requests and generic GET requests are also provided:
```python
import base64
import json

import requests
import urllib2


def getPostDate(base_url, data):
    """POST a base64-decoded payload to an encrypted interface."""
    base64data = base64.b64decode(data)
    req_header = {'Host': 'tip.f.360.cn'}
    return_dict = {"url": base_url}
    try:
        req = urllib2.Request(base_url, data=base64data, headers=req_header)
        res = urllib2.urlopen(req, timeout=3)
        doc = res.read()
        status = res.getcode()
    except Exception:
        # On failure, return only the URL so the caller can log it.
        return return_dict
    else:
        return_dict["data"] = doc
        return_dict["status"] = status
    return return_dict


def getDataforNlp(base_url, **args):
    """Issue a generic GET request, recording status and response time."""
    urlargs = ''
    for key in args:
        urlargs += "&" + key + "=" + args[key]
    url = base_url + urlargs
    return_dict = {"url": url}
    try:
        req = requests.get(url, timeout=1)
        response_time = req.elapsed.microseconds
        return_dict["status"] = req.status_code
        return_dict["response_time"] = "%sms" % (response_time / 1000)
    except Exception as e:
        print(e)
    else:
        return_dict["data"] = json.loads(req.text)
    return return_dict
```

Test Case Management – Test cases are organized per API using Python's unittest framework. Each API has its own test_*.py file containing multiple test functions that validate responses and log results.
```python
from common import commonMethod, CommonTestCase
import unittest
from conf import data


class Mip(CommonTestCase.CommonTestCase):
    def test_result(self):
        def fuc(row):
            query = row["query"].strip()
            # ... (call the API, parse the response, compare with the
            # expected result, and log the outcome)
        commonMethod.pool_map(fuc, self.datas, self.thread_num)
```

Logging and Reporting – Logs are structured and stored, enabling flexible report generation. Visual examples show how logs are formatted for both success and failure cases.
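The `commonMethod.pool_map` helper used in the test above is not shown in the article; a minimal sketch under the assumption that it maps a function over the data rows with a fixed-size thread pool might look like this:

```python
from multiprocessing.dummy import Pool  # thread-backed Pool


def pool_map(func, datas, thread_num):
    """Hypothetical sketch: run func over every data row using
    thread_num worker threads, preserving input order."""
    pool = Pool(thread_num)
    try:
        return pool.map(func, datas)
    finally:
        pool.close()
        pool.join()


# Usage: process four "rows" with four worker threads.
results = pool_map(lambda n: n * n, [1, 2, 3, 4], 4)
print(results)  # -> [1, 4, 9, 16]
```

Threads (rather than processes) fit here because the per-row work is I/O-bound HTTP calls.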
Execution Workflow – The tool is triggered via shell scripts, supporting parameters such as port, hostname, recipients, and working directory. It integrates with Jenkins for CI and can be invoked by monitoring jobs.
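The parameters the article mentions map naturally onto a command-line entry point. The flag names below are illustrative assumptions, not the tool's actual interface:

```python
import argparse


def parse_args(argv=None):
    """Hypothetical CLI mirroring the shell-script parameters
    (port, hostname, recipients, working directory)."""
    parser = argparse.ArgumentParser(description="Run Auto_ApiTest")
    parser.add_argument("--port", type=int, default=8080)
    parser.add_argument("--hostname", default="localhost")
    parser.add_argument("--recipients", help="comma-separated emails")
    parser.add_argument("--workdir", default=".")
    return parser.parse_args(argv)


args = parse_args(["--port", "9090", "--recipients", "qa@example.com"])
print(args.port)      # -> 9090
print(args.hostname)  # -> localhost (default)
```

A Jenkins job or crontab entry can then invoke the same entry point with different parameter sets.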
Practical Cases in Search Service – The tool is applied to continuous integration pipelines for the Merger interface, automated email notifications after builds, and online monitoring using crontab. Features like query white‑listing prevent duplicate alerts, and special URL tags avoid polluting production data.
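The query white-listing idea can be sketched as a simple dedup check before alerting: once a failing query has fired an alert, it is recorded so later monitoring runs do not re-alert on it. This is an assumed illustration of the mechanism, not the tool's actual code:

```python
# Queries that have already triggered an alert in this monitoring cycle.
alerted = set()


def should_alert(query, whitelist=alerted):
    """Return True only the first time a failing query is seen."""
    if query in whitelist:
        return False  # duplicate alert suppressed
    whitelist.add(query)
    return True


print(should_alert("weather beijing"))  # -> True  (send alert)
print(should_alert("weather beijing"))  # -> False (suppressed)
```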
Quantitative Impact – Over 70 interfaces across more than ten business lines have been automated, accumulating 8,000+ test cases, embedding into multiple CI pipelines, and enabling developers to debug APIs locally without QA involvement.
The article concludes by encouraging readers to adapt these practices to their own business scenarios and invites them to follow the 360 Technology public account for more resources.
360 Tech Engineering
The official technology channel of 360, building a professional platform that aggregates the brand's technical content.