
Automating Web Performance Tests with WebPageTest and AI

This guide shows how to use WebPageTest as a precise data collector and an AI service as an analysis engine. It covers API key setup, three integration options, a Python script that runs tests, an AI‑driven reporting script, and a full GitHub Actions CI/CD workflow that posts the performance report back to pull requests.

Advanced AI Application Practice

Overview

The workflow combines WebPageTest for automated web‑performance data collection with a large‑language‑model AI service to analyze the results and generate actionable reports.

Stage 1 – Basic Preparation

Obtain a WebPageTest API key

Register at WebPageTest.org and copy the key from the “Account → API Key” section.

Choose an AI analysis tool

Option A – Use an off‑the‑shelf AI‑driven platform (e.g., SpeedCurve, Calibre/Treo) by entering the API key.

Option B – Use a large language model API such as OpenAI GPT‑4 or Anthropic Claude (requires custom development).

Option C – Build a custom anomaly‑detection pipeline with open‑source ML libraries like TensorFlow, PyTorch or Prophet.
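To give a sense of what Option C involves, here is a minimal, dependency‑free sketch of the core idea: flag a new measurement that deviates from the recent baseline by more than a few standard deviations. This is a deliberate simplification of what libraries like Prophet do (they also model trend and seasonality); the metric values below are hypothetical.

```python
from statistics import mean, stdev

def is_regression(history, latest, threshold=3.0):
    """Flag `latest` as anomalous if its z-score against `history`
    exceeds `threshold` standard deviations."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > threshold

# Example: daily LCP measurements in milliseconds (hypothetical data)
lcp_history = [2400, 2450, 2380, 2420, 2410, 2440, 2390]
print(is_regression(lcp_history, 2430))  # typical value -> False
print(is_regression(lcp_history, 4800))  # sudden spike  -> True
```

A real pipeline would persist the history (one data point per test run) and tune the threshold to balance false alarms against missed regressions.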

Advanced integration

Integrate the workflow into a CI/CD pipeline (GitHub Actions is used as an example).

Stage 2 – Core Workflow – Automated Testing & AI Analysis (Option B)

Monitor a critical page and detect performance regressions on every code change.

Step 1 – Run WebPageTest via script

A Python script run_wpt_test.py submits a test, polls for completion, and saves the JSON result.

import json
import os
import time

import requests

# 1. Configuration (read the API key from the environment so the same
#    script works locally and in CI; falls back to a placeholder)
WPT_API_KEY = os.environ.get("WPT_API_KEY", "YOUR_WPT_API_KEY")
WPT_TEST_URL = "https://your-website.com/your-key-page"
WPT_API_URL = "https://www.webpagetest.org/runtest.php"

# 2. Test parameters
params = {
    'url': WPT_TEST_URL,
    'k': WPT_API_KEY,
    'location': 'Dulles:Chrome.Cable',  # agent location, browser, connection profile
    'runs': 3,      # run the test three times for stability
    'fvonly': 1,    # first view only (skip the repeat view)
    'f': 'json'     # the WebPageTest API's format parameter is 'f'
}

print("🚀 Submitting WebPageTest request...")
response = requests.get(WPT_API_URL, params=params)
result = response.json()

if response.status_code == 200 and result['statusCode'] == 200:
    test_id = result['data']['testId']
    print(f"✅ Test submitted! Test ID: {test_id}")
    status_code = 100  # WPT returns 1xx while the test is queued or running
    while status_code < 200:
        print("⏳ Waiting for test to finish...")
        time.sleep(30)
        status_response = requests.get(
            f"https://www.webpagetest.org/jsonResult.php?test={test_id}"
        )
        status_data = status_response.json()
        status_code = status_data['statusCode']
    print("✅ Test completed! Fetching results...")
    final_result = status_data
    with open(f"wpt_result_{test_id}.json", 'w') as f:
        json.dump(final_result, f, indent=2)
    print(f"✅ Result saved to: wpt_result_{test_id}.json")
    # Persist the test ID so a follow-up step can pick it up
    with open("wpt_test_id.txt", 'w') as f:
        f.write(test_id)
else:
    print(f"❌ Test submission failed: {result.get('statusText', 'unknown error')}")
    exit(1)
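A note on chaining the two scripts: because they run as separate processes in the CI workflow later, the analysis step needs to discover which test to load. One minimal approach, assuming the wpt_result_<id>.json naming used above, is to locate the newest result file:

```python
import glob
import os
import re

def latest_test_id(directory="."):
    """Return the test ID from the most recently written
    wpt_result_<id>.json file in `directory`, or None if none exists."""
    files = glob.glob(os.path.join(directory, "wpt_result_*.json"))
    if not files:
        return None
    newest = max(files, key=os.path.getmtime)
    match = re.match(r"wpt_result_(.+)\.json$", os.path.basename(newest))
    return match.group(1) if match else None
```

This keeps the handoff robust even if several result files accumulate in the working directory.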

Step 2 – AI Analysis Script

A second script analyze_with_ai.py loads the JSON, extracts core Web Vitals, builds a prompt, calls the LLM, and writes the report.

import json
import os

import openai  # or: from anthropic import Anthropic

# 1. Configure AI client (read the key from the environment)
client = openai.OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
# or: client = Anthropic(api_key=os.environ.get("ANTHROPIC_API_KEY"))

# 2. Determine the test ID: read the handoff file if one exists,
#    otherwise fall back to an environment variable or placeholder
try:
    with open("wpt_test_id.txt") as f:
        test_id = f.read().strip()
except FileNotFoundError:
    test_id = os.environ.get("WPT_TEST_ID", "YOUR_TEST_ID")

with open(f"wpt_result_{test_id}.json", 'r') as f:
    wpt_data = json.load(f)

# The result JSON includes the URL that was tested
WPT_TEST_URL = wpt_data.get('data', {}).get('url', 'unknown')

# 3. Extract key metrics. Field names can vary between WebPageTest
#    versions; adjust them to match your result JSON if needed.
summary = wpt_data['data']['average']['firstView']
core_web_vitals = {
    'LCP': summary.get('chromeUserTiming.LargestContentfulPaint', 'N/A'),
    'FCP': summary.get('firstContentfulPaint', 'N/A'),
    'CLS': summary.get('chromeUserTiming.CumulativeLayoutShift', 'N/A'),
    'TBT': summary.get('TotalBlockingTime', 'N/A'),
    'Speed Index': summary.get('SpeedIndex', 'N/A')
}
requests = summary.get('requests', 'N/A')
bytes_in = summary.get('bytesIn', 'N/A')
domains = summary.get('domains', {})

# 4. Build prompt
prompt = f"""
You are a senior web‑performance expert. Analyze the following WebPageTest results and produce a detailed, actionable report.

Test URL: {WPT_TEST_URL}
Core Web Vitals:
- LCP: {core_web_vitals['LCP']}
- FCP: {core_web_vitals['FCP']}
- CLS: {core_web_vitals['CLS']}
- TBT: {core_web_vitals['TBT']}
- Speed Index: {core_web_vitals['Speed Index']}

Other data:
- Total requests: {requests}
- Total bytes: {bytes_in} bytes
- Domain distribution: {', '.join(f"{k} ({v.get('requests', '?')} requests)" if isinstance(v, dict) else f"{k} ({v} requests)" for k, v in domains.items()) or 'N/A'}

Please answer using the following structure:
1. Overall assessment
2. Key issues (2‑3)
3. Root‑cause analysis
4. Concrete optimization suggestions (3‑5)
5. Next steps for engineers
"""

# 5. Call AI API
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful web‑performance expert who explains technical details in plain language."},
        {"role": "user", "content": prompt}
    ],
    max_tokens=1500
)

ai_report = response.choices[0].message.content
print("🤖 AI Performance Report")
print("="*50)
print(ai_report)

# 6. Save report
with open(f"ai_report_{test_id}.txt", 'w') as f:
    f.write(ai_report)

Stage 3 – Advanced Integration – CI/CD with GitHub Actions

Create .github/workflows/performance.yml to run both scripts on every pull request to main, store the AI report as a PR comment, and optionally fail the build on regression.

name: Performance Regression Test with AI
on:
  pull_request:
    branches: [ main ]

jobs:
  performance-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.x'

      - name: Install dependencies
        run: |
          pip install requests openai  # or anthropic

      - name: Run WebPageTest and AI Analysis
        env:
          WPT_API_KEY: ${{ secrets.WPT_API_KEY }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        run: |
          python run_wpt_test.py
          python analyze_with_ai.py > ai_report.txt

      - name: (Optional) Regression check
        run: |
          echo "TODO: add regression detection logic"

      - name: Create PR comment with AI Report
        uses: actions/github-script@v6
        with:
          script: |
            const fs = require('fs');
            const report = fs.readFileSync('ai_report.txt', 'utf8');
            const body = [
              '## 🤖 Automated Performance Report',
              '',
              'WebPageTest + AI analysis:',
              '',
              '```',
              report,
              '```',
              '',
              `*Generated at: ${new Date().toUTCString()}*`
            ].join('\n');
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: body
            });

Configure the required secrets WPT_API_KEY and OPENAI_API_KEY in the repository Settings → Secrets and variables → Actions.
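The regression-check step in the workflow above is left as a TODO. One way to fill it in, assuming you keep performance budgets in the repository (the budget values and metric names here are illustrative and should match your own result JSON), is a small script that exits non‑zero when a budget is exceeded, which fails the build:

```python
import json
import sys

# Illustrative budgets: milliseconds for timing metrics
BUDGETS = {
    'firstContentfulPaint': 1800,
    'SpeedIndex': 3400,
    'TotalBlockingTime': 200,
}

def check_budgets(summary, budgets):
    """Return (metric, value, budget) tuples for every exceeded budget."""
    failures = []
    for metric, budget in budgets.items():
        value = summary.get(metric)
        if isinstance(value, (int, float)) and value > budget:
            failures.append((metric, value, budget))
    return failures

if __name__ == "__main__" and len(sys.argv) > 1:
    # Usage: python check_regression.py wpt_result_<id>.json
    with open(sys.argv[1]) as f:
        summary = json.load(f)['data']['average']['firstView']
    failures = check_budgets(summary, BUDGETS)
    for metric, value, budget in failures:
        print(f"❌ {metric}: {value} exceeds budget {budget}")
    sys.exit(1 if failures else 0)
```

Wired into the workflow as `python check_regression.py wpt_result_<id>.json`, a non‑zero exit marks the job red, so reviewers see the regression before merging.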

Full Workflow Summary

Trigger : Developer opens a pull request to main.

Automated test : GitHub Actions runs run_wpt_test.py, which submits a WebPageTest run and stores the JSON result.

AI analysis : The same workflow runs analyze_with_ai.py, which extracts metrics, prompts the LLM, and produces a human‑readable performance report.

Result feedback : The AI report is posted as a comment on the PR.

Decision : Reviewers see the performance impact directly; if a regression is flagged, they can request fixes before merging.

By chaining WebPageTest, a Python automation layer, and an LLM, teams obtain precise, repeatable performance data and actionable insights without manual effort.
