Backend Development · 5 min read

Using HiPlot for Visualizing API Test Results

This article demonstrates how to use HiPlot in automated API testing to efficiently visualize and analyze large sets of test data, covering single-run results, version comparisons, parameter impact studies, long-running test sequences, and multi-environment performance evaluations.

Test Development Learning Exchange

Applied to automated API testing, HiPlot provides efficient visual analysis of large result datasets: for example, how different parameter configurations, request sequences, or test environments affect response time and success rate.

Basic usage – recording a single test result

import requests
import hiplot as hip

def test_api(url):
    response = requests.get(url, timeout=10)  # timeout avoids hanging on a dead endpoint
    return {
        "response_time_ms": response.elapsed.total_seconds() * 1000,
        "status_code": response.status_code
    }

url = "https://api.example.com/data"

test_result = test_api(url)
exp = hip.Experiment.from_iterable([test_result])  # each dict becomes one datapoint
exp.display()

This example shows how to load a single test's response time and status code into HiPlot via Experiment.from_iterable. In a Jupyter notebook, exp.display() renders the parallel-coordinates view inline; outside a notebook, exp.to_html("results.html") saves a standalone page you can open in a browser.

Comparing performance of different API versions

versions = ["v1", "v2", "v3"]
results = []
for version in versions:
    url = f"https://api.example.com/data/{version}"
    result = test_api(url)
    result["version"] = version
    results.append(result)
exp = hip.Experiment.from_iterable(results)
exp.display()

This code collects test results from multiple API versions, adds a "version" dimension, and visualizes the comparison in HiPlot.
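A single request per version can be misleading, since one slow sample may dominate. The sketch below records several iterations per version so HiPlot's parallel coordinates show the spread rather than a single point; fake_test_api is a purely illustrative stand-in for the article's network-backed test_api, so the example runs without a real endpoint.

```python
import random

# Stand-in for the real test_api; returns a plausible result dict
# without making any network call (illustrative only).
def fake_test_api(url):
    return {"response_time_ms": random.uniform(50, 200), "status_code": 200}

versions = ["v1", "v2", "v3"]
results = []
for version in versions:
    for iteration in range(5):  # five samples per version
        row = fake_test_api(f"https://api.example.com/data/{version}")
        row["version"] = version      # comparison axis
        row["iteration"] = iteration  # distinguishes repeated runs
        results.append(row)
```

With an "iteration" column present, filtering or coloring by "version" in the HiPlot view makes per-version variance visible at a glance.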

Analyzing the impact of request parameters on response time

params_combinations = [
    {"param1": 1, "param2": "A"},
    {"param1": 2, "param2": "B"},
    # more combinations...
]
results_with_params = []
for params in params_combinations:
    url = f"https://api.example.com/data?param1={params['param1']}&param2={params['param2']}"
    result = test_api(url)
    result.update(params)  # add parameters to the result
    results_with_params.append(result)
exp = hip.Experiment.from_iterable(results_with_params)
exp.display()

The example runs tests for each parameter combination, merges the parameters with the results, and visualizes how parameter changes affect response time.
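Hand-writing every combination gets tedious as parameters multiply. A minimal sketch of generating the full grid with the standard library, assuming the same base endpoint as above; build_param_grid is a hypothetical helper, and urlencode also handles escaping that the article's manual f-string would miss:

```python
from itertools import product
from urllib.parse import urlencode

BASE_URL = "https://api.example.com/data"  # endpoint used throughout the article

def build_param_grid(**axes):
    """Expand keyword axes into every combination, each as a dict."""
    keys = list(axes)
    return [dict(zip(keys, combo)) for combo in product(*axes.values())]

# 2 x 2 = 4 combinations, ready to feed into the test loop above
params_combinations = build_param_grid(param1=[1, 2], param2=["A", "B"])
urls = [f"{BASE_URL}?{urlencode(p)}" for p in params_combinations]
```

Each generated dict can be merged into its result row exactly as the article does with result.update(params).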

Tracking long‑running test sequences

import time

long_run_results = []
for _ in range(24):  # simulate hourly testing for 24 hours
    result = test_api(url)
    result["timestamp"] = time.strftime("%Y-%m-%d %H:%M:%S")
    long_run_results.append(result)
    time.sleep(3600)  # wait one hour
exp = hip.Experiment.from_iterable(long_run_results)
exp.display()

This snippet simulates hourly testing over a day, collecting each timestamped result and loading the full series into HiPlot to observe performance trends over time.
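Beyond eyeballing the trend in HiPlot, a long-running series can be checked programmatically. A minimal sketch, assuming results arrive in chronological order; detect_regression and its window sizes are hypothetical choices, not part of HiPlot:

```python
from statistics import mean

def detect_regression(times_ms, baseline_n=6, recent_n=6, threshold=1.5):
    """Return True when the mean of the most recent window exceeds
    `threshold` times the mean of the earliest (baseline) window."""
    if len(times_ms) < baseline_n + recent_n:
        return False  # not enough samples to compare yet
    baseline = mean(times_ms[:baseline_n])
    recent = mean(times_ms[-recent_n:])
    return recent > threshold * baseline
```

Fed with the response_time_ms values collected above, this flags a sustained slowdown that a single noisy sample would not.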

Comparing performance across different test environments

environments = ["dev", "test", "prod"]
env_results = []
for env in environments:
    url = f"https://{env}.api.example.com/data"
    result = test_api(url)
    result["environment"] = env
    env_results.append(result)
exp = hip.Experiment.from_iterable(env_results)
exp.display()

This example gathers and compares API test results from development, testing, and production environments, helping to identify environment‑related performance differences.
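Alongside the interactive view, it can help to reduce each environment to headline numbers. A small aggregation sketch over the same result-dict shape the article uses; summarize_by is a hypothetical helper, and treating only HTTP 200 as success is a simplifying assumption:

```python
from collections import defaultdict
from statistics import mean

def summarize_by(results, key):
    """Group result rows by `key`; report mean latency and success rate."""
    groups = defaultdict(list)
    for row in results:
        groups[row[key]].append(row)
    return {
        name: {
            "mean_response_time_ms": mean(r["response_time_ms"] for r in rows),
            "success_rate": sum(r["status_code"] == 200 for r in rows) / len(rows),
        }
        for name, rows in groups.items()
    }

# Illustrative rows in the same shape test_api returns
sample = [
    {"environment": "dev",  "response_time_ms": 120.0, "status_code": 200},
    {"environment": "dev",  "response_time_ms": 80.0,  "status_code": 500},
    {"environment": "prod", "response_time_ms": 60.0,  "status_code": 200},
]
summary = summarize_by(sample, "environment")
```

The same helper works for any grouping column collected earlier, such as "version" or the request parameters.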

Tags: performance analysis, visualization, API testing, HiPlot