How a Non‑Coder Test Engineer Can Measure Page Performance in 2 Minutes

This guide shows how to capture a page's HAR file with Chrome DevTools, analyze it using a Python script, and generate an Excel report that lists each menu's access time, highlighting requests that exceed a configurable timeout threshold.


Capture network data

Open the target page in Chrome.

Open Chrome DevTools (F12 or Ctrl+Shift+I).

Switch to the Network panel.

Check “Preserve log”.

Clear existing records with the Clear network log button (the circle-with-slash icon).

Interact with each menu on the page.

After the interactions, click the export icon (the downward arrow, "Export HAR...") in the Network toolbar and save the log as a .har file.
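Before running the full analysis, it can help to see what an exported HAR file actually contains. It is a plain JSON document (per the HAR 1.2 format), and the script below reads just four fields per entry: request.url, request.method, response.content.mimeType, and time (in milliseconds). A minimal sketch with two illustrative entries (normally you would json.load the exported .har file instead):

```python
import json

# A minimal HAR document; the two entries are illustrative samples.
har = json.loads("""
{
  "log": {
    "entries": [
      {
        "request": {"method": "GET", "url": "https://example.com/index.html"},
        "response": {"content": {"mimeType": "text/html"}},
        "time": 142.5
      },
      {
        "request": {"method": "POST", "url": "https://example.com/api/menu"},
        "response": {"content": {"mimeType": "application/json"}},
        "time": 63000.0
      }
    ]
  }
}
""")

entries = har["log"]["entries"]
print(f"Captured {len(entries)} requests")
for entry in entries:
    req = entry["request"]
    print(req["method"], req["url"], f"{entry['time']:.0f} ms")
```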

Python analysis script

The following script parses the HAR file, extracts page navigation requests (identified by MIME type text/html) and API calls (paths containing /api/, /w/ or /t/), filters requests whose duration exceeds a configurable threshold (default 60 000 ms), and writes the filtered records to an Excel file.

import json
import pandas as pd
from urllib.parse import urlparse
import re

def analyze_menu_access(har_file_path, output_excel_path, timeout_threshold=60000):
    """Analyze menu access times from a HAR file.

    Args:
        har_file_path (str): Path to the HAR file.
        output_excel_path (str): Path for the generated Excel report.
        timeout_threshold (int): Duration threshold in milliseconds (default 60000).
    """
    try:
        with open(har_file_path, 'r', encoding='utf-8') as f:
            har_data = json.load(f)
    except Exception as e:
        print(f"Failed to load HAR file: {e}")
        return

    page_navigations = {}
    api_requests = []

    for entry in har_data.get('log', {}).get('entries', []):
        request = entry.get('request', {})
        response = entry.get('response', {})
        url = request.get('url', '')
        method = request.get('method', '')
        time_ms = entry.get('time', 0)

        # Identify page navigation requests (HTML documents)
        if method == 'GET' and response.get('content', {}).get('mimeType', '').startswith('text/html'):
            parsed_url = urlparse(url)
            page_name = parsed_url.path.split('/')[-1] or 'index'
            page_navigations[page_name] = {'url': url, 'time_ms': time_ms, 'type': 'page', 'method': method}
        # Identify API requests (customizable pattern)
        elif '/api/' in url or re.search(r'/w/|/t/', url):
            api_requests.append({'url': url, 'time_ms': time_ms, 'type': 'api', 'method': method})

    all_requests = list(page_navigations.values()) + api_requests
    slow_requests = [req for req in all_requests if req['time_ms'] > timeout_threshold]

    if not slow_requests:
        print("No requests exceed the threshold.")
        return

    df = pd.DataFrame(slow_requests)
    # Sort on the numeric column first; the formatted string would sort lexicographically.
    df = df.sort_values('time_ms', ascending=False)
    df['是否超时(>1分钟)'] = '是'
    df['耗时(秒)'] = df['time_ms'].apply(lambda x: f"{x/1000:.2f}s")
    output_df = df[['type', 'url', 'method', '耗时(秒)', '是否超时(>1分钟)']]
    output_df.to_excel(output_excel_path, index=False)
    print(f"Analysis complete. Results saved to: {output_excel_path}")

if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser(description='Analyze menu access times from a HAR file.')
    parser.add_argument('--har', required=True, help='Path to the HAR file')
    parser.add_argument('--output', default='menu_access_report.xlsx', help='Path for the Excel report')
    parser.add_argument('--threshold', type=int, default=60000, help='Timeout threshold in milliseconds')
    args = parser.parse_args()

    analyze_menu_access(args.har, args.output, args.threshold)

Run the analysis

Save the script as har_menu_analyzer.py.

Install required libraries: pip install pandas openpyxl.

Execute the script, providing the HAR file path and desired Excel output path, e.g.

python har_menu_analyzer.py --har "D:/your_file.har" --output "D:/menu_report.xlsx"


Generated Excel report

The Excel file contains the following columns:

type: request type (page or api)

url: request URL

method: HTTP method

耗时(秒): request duration in seconds (the column name means "duration (s)")

是否超时(>1分钟): flag indicating whether the duration exceeds the 1-minute threshold (the column name means "timed out (>1 min)")

Key features

Intelligent page navigation detection: uses the text/html MIME type to identify HTML page requests.

API request filtering: customizable patterns such as /api/, /w/, /t/ can be added.

Duration conversion: automatically converts milliseconds to a human-readable seconds format.

Configurable threshold: the default timeout is 60 000 ms (1 minute) but can be changed via the timeout_threshold argument.

Overall page load time: can be derived as "latest request end time – earliest request start time".
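The last point can be sketched directly from the HAR data: each entry records a startedDateTime (ISO 8601) alongside its time in milliseconds, so the overall span falls out of one pass over the entries. A minimal sketch (the two sample entries are illustrative):

```python
import json
from datetime import datetime

def overall_load_time_ms(har: dict) -> float:
    """Overall span: latest request end time minus earliest request start time."""
    starts, ends = [], []
    for entry in har["log"]["entries"]:
        # startedDateTime is ISO 8601; normalize a trailing 'Z' for fromisoformat.
        started = datetime.fromisoformat(entry["startedDateTime"].replace("Z", "+00:00"))
        start_ms = started.timestamp() * 1000
        starts.append(start_ms)
        ends.append(start_ms + entry.get("time", 0))
    return max(ends) - min(starts)

# Two overlapping requests: 0-200 ms and 100-600 ms, so roughly 600 ms overall.
har = {"log": {"entries": [
    {"startedDateTime": "2024-01-01T00:00:00.000Z", "time": 200.0},
    {"startedDateTime": "2024-01-01T00:00:00.100Z", "time": 500.0},
]}}
print(overall_load_time_ms(har))
```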

Python · Chrome DevTools · web testing · HAR analysis · page performance
Written by Advanced AI Application Practice