How We Built an Automated H5 Performance Testing Platform with mitmproxy and Lighthouse

This article describes the design and implementation of an automated H5 performance testing platform that integrates UI automation, mitmproxy traffic capture, waterfall visualization, and Lighthouse scoring to continuously monitor, evaluate, and improve web page performance in mobile applications.

Huolala Tech

Background and Challenges

H5 development is popular for its cross‑platform compatibility, leading to an increasing proportion of H5 pages in mobile apps. As functionality grows, page complexity rises, demanding higher performance and quality assurance.

Performance measurement: an automated solution to measure load speed, response time, and resource usage, identify bottlenecks, and guide optimization of the user experience.

Automated verification: reduce manual testing effort during rapid iteration.

Degradation prevention: benchmark against competitor pages to establish a baseline and continuously enforce it.

Solution and Goals

Existing tools like Firebug, Fiddler, and HttpWatch were insufficient for managing and analyzing results, prompting the creation of a platform where users submit a URL (or trigger a UI click) and receive a comprehensive performance report.

Solution flow diagram

Key functionalities include:

Automation tasks combined with mitmproxy: UI automation drives mitmproxy to capture resource request streams while isolating data per app and device.

Transforming request streams into waterfall charts: .har files are converted to readable waterfall visualizations.

Scoring pages via URL: URLs are fed to Lighthouse for comprehensive performance scores, enabling comparison with targets and competitors.

Capability Building

The platform leverages an existing mobile testing framework’s UI automation to collect performance data in parallel with functional tests. The overall architecture is illustrated below.

Platform architecture

Automation and mitmproxy Integration

mitmproxy, written in Python, offers cross‑platform command‑line operation and scriptable output processing, making it ideal for capturing request streams. The platform wraps a custom keyword WebClick that triggers page clicks and automatically records each request’s data.

```
WebClick()
    .by(id)
    .with(com.xiaolachuxing.user:id/nav_coupon_list);
```

Data from multiple apps and devices are segregated by AppId and Device ID, stored in separate .har files, and cleared before each new page load to prevent cross‑contamination.
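The per-app, per-device segregation can be sketched with the standard library alone; the names here (`HarStore`, `har_path`) are illustrative, not the platform's actual API:

```python
import json
import os


def har_path(base_dir, app_id, device_id):
    """Build a per-app, per-device .har path so captures never mix.
    (Hypothetical helper; naming is an assumption.)"""
    safe = lambda s: s.replace("/", "_").replace(":", "_")
    return os.path.join(base_dir, f"{safe(app_id)}_{safe(device_id)}.har")


class HarStore:
    """Keeps one entry list per (app_id, device_id) key and resets it
    before each new page load to prevent cross-contamination."""

    def __init__(self):
        self._entries = {}

    def reset(self, app_id, device_id):
        # Called before each new page load.
        self._entries[(app_id, device_id)] = []

    def add(self, app_id, device_id, entry):
        self._entries.setdefault((app_id, device_id), []).append(entry)

    def dump(self, app_id, device_id, base_dir="."):
        # Write entries out in HAR 1.2 shape for downstream tooling.
        path = har_path(base_dir, app_id, device_id)
        log = {"log": {"version": "1.2",
                       "creator": {"name": "mitmproxy-addon", "version": "0.1"},
                       "entries": self._entries.get((app_id, device_id), [])}}
        with open(path, "w") as f:
            json.dump(log, f, indent=2)
        return path
```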

Sample request/response entry structure:

```python
entry = {
    "startedDateTime": started_date_time,
    "time": full_time,
    "request": {
        "method": flow.request.method,
        "url": flow.request.url,
        "headers": name_value(flow.request.headers),
        "bodySize": len(flow.request.content)
    },
    "response": {
        "status": flow.response.status_code,
        "headers": name_value(flow.response.headers),
        "content": {"size": response_body_size}
    },
    "timings": timings
}
```
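The `timings` and `full_time` values can be derived from mitmproxy's flow timestamps (`timestamp_start`/`timestamp_end` on the request and response objects); a simplified sketch of one possible calculation:

```python
def har_timings(req_start, req_end, resp_start, resp_end):
    """Derive HAR-style timings (in ms) from mitmproxy flow timestamps.
    A sketch; the platform's actual calculation may differ."""
    ms = lambda a, b: round((b - a) * 1000, 2)
    timings = {
        "send": ms(req_start, req_end),       # uploading the request
        "wait": ms(req_end, resp_start),      # waiting for the first response byte
        "receive": ms(resp_start, resp_end),  # downloading the body
    }
    # HAR's top-level "time" is the sum of the phase timings.
    full_time = round(sum(timings.values()), 2)
    return timings, full_time
```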

Request Stream to Waterfall Visualization

.har files are fed into a webapp that uses harviewer to render interactive waterfall charts, showing request timing, size, and status, along with a summary pie chart of resource distribution.
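The summary pie chart amounts to a tally of entries by resource type; a minimal sketch grouping by URL extension (illustrative — real grouping might use the Content-Type header instead):

```python
import os
from collections import Counter
from urllib.parse import urlparse


def resource_breakdown(entries):
    """Tally .har entries by file extension to feed a summary chart.
    Entries follow the request/response structure shown earlier."""
    counts = Counter()
    for e in entries:
        path = urlparse(e["request"]["url"]).path
        ext = os.path.splitext(path)[1].lstrip(".").lower() or "other"
        counts[ext] += 1
    return dict(counts)
```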

Waterfall view

URL‑Based Page Scoring

Using Appium, the platform extracts page URLs during UI automation and invokes headless Lighthouse to obtain performance scores. Multiple runs are aggregated, taking the median to improve reliability.
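A sketch of that invocation, assuming the Lighthouse CLI (Node) is installed on PATH; the flags and report fields follow the standard CLI, while `runs=5` and the function names are arbitrary choices:

```python
import json
import os
import statistics
import subprocess
import tempfile


def run_lighthouse(url):
    """One headless Lighthouse run; returns the performance score (0-100)."""
    with tempfile.TemporaryDirectory() as tmp:
        out = os.path.join(tmp, "report.json")
        subprocess.run(
            ["lighthouse", url, "--output=json", f"--output-path={out}",
             "--chrome-flags=--headless", "--quiet"],
            check=True,
        )
        with open(out) as f:
            report = json.load(f)
    # Lighthouse reports category scores in 0-1; scale to 0-100.
    return report["categories"]["performance"]["score"] * 100


def median_score(scores):
    """Aggregate repeated runs by taking the median, which resists one-off
    network or device hiccups better than the mean."""
    return statistics.median(scores)


def score_url(url, runs=5):
    return median_score([run_lighthouse(url) for _ in range(runs)])
```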

Lighthouse scoring

Application and Practice

Degradation Prevention Testing

Performance targets are defined using metrics such as First Contentful Paint, Largest Contentful Paint, Total Blocking Time, Cumulative Layout Shift, and Speed Index, with thresholds for fast, medium, and slow categories.
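A possible bucketing function for those categories; the cut-offs below follow web.dev's published "good" / "needs improvement" thresholds and are assumptions, since the article does not list its exact numbers:

```python
def classify(metric, value):
    """Bucket a metric into fast / medium / slow.
    Thresholds are web.dev's published cut-offs (an assumption here);
    CLS is unitless, all other limits are in ms."""
    limits = {
        "FCP": (1800, 3000),
        "LCP": (2500, 4000),
        "TBT": (200, 600),
        "CLS": (0.1, 0.25),
        "SI": (3400, 5800),
    }
    good, medium = limits[metric]
    if value <= good:
        return "fast"
    if value <= medium:
        return "medium"
    return "slow"
```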

After optimization, the goal is to keep all resource load times under 3 seconds and maintain Lighthouse scores above 60.
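That gate can be expressed directly over the captured .har entries and the Lighthouse score; a sketch (`meets_baseline` is a hypothetical helper name, with the article's 3-second and 60-point targets as defaults):

```python
def meets_baseline(entries, score, max_resource_ms=3000, min_score=60):
    """Degradation gate: pass only if every captured resource loaded in
    under max_resource_ms and the Lighthouse score exceeds min_score.
    Entries follow the .har structure shown earlier; returns the list of
    offending URLs so alerts can name them."""
    slow = [e["request"]["url"] for e in entries if e["time"] > max_resource_ms]
    passed = score > min_score and not slow
    return passed, slow
```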

Performance trend

Performance Optimization Practices

Interaction Optimization

Initial LCP was 3601 ms and TBT 700 ms, yielding a score of around 45. Removing a blocking location popup and streamlining API calls brought LCP down to 2667 ms, FCP to 956 ms, and TBT to 400 ms, raising the score above 60.

Optimization results

Resource Optimization

JavaScript and CSS assets exceeding 700 ms were identified. Pre‑fetching resources and compressing files reduced JS size from 691 KB to 574 KB (post‑compression 222 KB to 183 KB) and CSS from 103 KB to 9.3 KB (post‑compression 40.7 KB to 3.6 KB), yielding 7‑16 % load‑time reductions across multiple pages.
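The raw-versus-compressed comparisons above can be reproduced with a quick gzip check (illustrative only; build tooling typically reports these sizes during bundling):

```python
import gzip


def transfer_sizes(raw):
    """Compare raw vs gzip-compressed byte counts for an asset.
    Returns (raw bytes, compressed bytes, percent saved)."""
    compressed = gzip.compress(raw, compresslevel=9)
    saved = 1 - len(compressed) / len(raw)
    return len(raw), len(compressed), round(saved * 100, 1)
```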

Future Outlook

Continuous Integration and Alerting: Automated tests run on each code change; performance regressions trigger alerts for rapid remediation.

AI‑Driven Optimization Guidance: Collected data feeds machine‑learning models that detect common performance issues and suggest code changes.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: frontend, Automation, Lighthouse, mitmproxy, H5