
Turn Test Reports into Actionable Health Checks with Python & pytest

This guide shows how to automatically archive pytest HTML/Allure reports, extract key metrics, compare them against historical data, generate a Markdown summary, and optionally push alerts, turning each test run into a valuable health check for your software project.

Test Development Learning Exchange

Why archive test reports?

Historical test reports are a treasure trove: they reveal failure trends, flaky test cases, and version-level quality changes, enabling teams to spot regressions early and drive data-based improvements instead of discarding results after each run.

Core goals

Automatically compress and archive HTML/Allure reports with timestamps after each test run.

Extract key indicators such as total cases, pass rate, failure list, and duration.

Compare with historical data to identify new failures and recovered cases.

Generate a Markdown analysis summary that can be pushed to enterprise chat tools.

Support branch‑ or environment‑based storage (e.g., main vs feature/login).

Architecture overview

pytest run
   │
   ├── Execute all test cases
   │
   └── On completion → trigger @archive_report decorator (or pytest hook)
        │
        ├── 1. Collect current results (passed/failed/skipped)
        ├── 2. Load historical baseline (JSON file)
        ├── 3. Generate comparative analysis
        ├── 4. Archive HTML/Allure report
        └── 5. Output summary.md and optionally push it

Implementation: ReportArchiver

# utils/report_archiver.py
import json
import os
import shutil
from datetime import datetime
from pathlib import Path
from typing import Any, Dict

class ReportArchiver:
    def __init__(self, report_dir: str = "report", archive_root: str = "archived_reports"):
        self.report_dir = Path(report_dir)
        self.archive_root = Path(archive_root)
        self.branch = os.getenv("CI_COMMIT_BRANCH", "local")
        self.timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        self.archive_dir = self.archive_root / self.branch / self.timestamp
        self.history_file = self.archive_root / self.branch / "history.json"

    def archive(self, test_results: Dict[str, Any]):
        """Main entry: archive report and generate analysis"""
        self.archive_dir.mkdir(parents=True, exist_ok=True)
        # Copy or zip the report
        if self.report_dir.exists():
            if (self.report_dir / "index.html").exists():
                shutil.copytree(self.report_dir, self.archive_dir / "html_report")
            elif (self.report_dir / "allure-results").exists():
                shutil.make_archive(str(self.archive_dir / "allure_report"), 'zip', self.report_dir)
        # Save current results
        result_file = self.archive_dir / "result.json"
        with open(result_file, "w", encoding="utf-8") as f:
            json.dump(test_results, f, indent=2, ensure_ascii=False)
        # Update history
        history = self._load_history()
        history[self.timestamp] = {
            "total": test_results["total"],
            "passed": test_results["passed"],
            "failed": test_results["failed"],
            "skipped": test_results["skipped"],
            "duration": test_results["duration"],
            "failures": test_results["failures"]
        }
        self._save_history(history)
        # Generate summary
        summary = self._generate_summary(history)
        summary_file = self.archive_dir / "summary.md"
        with open(summary_file, "w", encoding="utf-8") as f:
            f.write(summary)
        print(f"✅ Report archived to: {self.archive_dir}")
        print(f"📊 Analysis summary:\n{summary}")

    def _load_history(self) -> Dict[str, Any]:
        if self.history_file.exists():
            with open(self.history_file, "r", encoding="utf-8") as f:
                return json.load(f)
        return {}

    def _save_history(self, history: Dict[str, Any]):
        self.history_file.parent.mkdir(parents=True, exist_ok=True)
        with open(self.history_file, "w", encoding="utf-8") as f:
            json.dump(history, f, indent=2, ensure_ascii=False)

    def _generate_summary(self, history: Dict[str, Any]) -> str:
        timestamps = sorted(history.keys())
        latest = history[timestamps[-1]]
        prev = history[timestamps[-2]] if len(timestamps) > 1 else None
        total, passed, failed = latest["total"], latest["passed"], latest["failed"]
        pass_rate = passed / total * 100 if total > 0 else 0
        lines = []
        lines.append(f"# Test Report Analysis - {self.branch} - {self.timestamp}\n")
        lines.append(f"- **Total**: {total}")
        lines.append(f"- **Pass Rate**: {pass_rate:.1f}% ({passed}/{total})")
        lines.append(f"- **Failures**: {failed}")
        lines.append(f"- **Duration**: {latest['duration']:.2f}s\n")
        if prev:
            new_failures = set(latest["failures"]) - set(prev.get("failures", []))
            recovered = set(prev.get("failures", [])) - set(latest["failures"])
            if new_failures:
                lines.append("## 🔴 New Failures")
                for case in sorted(new_failures):
                    lines.append(f"- `{case}`")
                lines.append("")
            if recovered:
                lines.append("## 🟢 Recovered Cases")
                for case in sorted(recovered):
                    lines.append(f"- `{case}`")
                lines.append("")
        if failed > 0:
            lines.append("## ❌ Current Failure List")
            for case in sorted(latest["failures"]):
                lines.append(f"- `{case}`")
        return "\n".join(lines)
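The comparison in `_generate_summary` boils down to two set differences over failure node IDs. A standalone illustration, using made-up test IDs:

```python
# Made-up node IDs for illustration.
prev_failures = {"test_a.py::test_x", "test_b.py::test_y"}
curr_failures = {"test_b.py::test_y", "test_c.py::test_z"}

new_failures = sorted(curr_failures - prev_failures)  # failing now, not before
recovered = sorted(prev_failures - curr_failures)     # failing before, not now
print(new_failures)  # ['test_c.py::test_z']
print(recovered)     # ['test_a.py::test_x']
```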

Integration with pytest

# conftest.py
import pytest, time
from utils.report_archiver import ReportArchiver

def pytest_configure(config):
    config._start_time = time.time()

@pytest.fixture(scope="session", autouse=True)
def archive_test_report(request):
    yield
    duration = time.time() - request.config._start_time
    tr = request.config.pluginmanager.get_plugin("terminalreporter")
    passed = len(tr.stats.get("passed", []))
    failed = len(tr.stats.get("failed", []))
    skipped = len(tr.stats.get("skipped", []))
    test_results = {
        "total": passed + failed + skipped,
        "passed": passed,
        "failed": failed,
        "skipped": skipped,
        "duration": duration,
        "failures": [report.nodeid for report in tr.stats.get("failed", [])]
    }
    archiver = ReportArchiver(report_dir="report")
    archiver.archive(test_results)

If you use Allure, generate the Allure report first (e.g., allure generate) before archiving the allure-results directory.
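Generating the Allure report can itself be scripted from Python. A minimal sketch, assuming the allure CLI is on PATH; the function name and directory defaults are illustrative:

```python
import shutil
import subprocess

def generate_allure_report(results_dir="allure-results", output_dir="report"):
    """Build the allure CLI command and run it when the CLI is available."""
    cmd = ["allure", "generate", results_dir, "-o", output_dir, "--clean"]
    if shutil.which("allure") is None:
        print("allure CLI not found; skipping report generation")
        return cmd  # return the command for inspection/logging
    subprocess.run(cmd, check=True)
    return cmd
```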

Archive directory layout (example)

archived_reports/
└── main/
    ├── 20260124_213000/
    │   ├── html_report/          # or allure_report.zip
    │   ├── result.json          # raw test results
    │   └── summary.md           # analysis summary
    ├── 20260124_220000/
    │   └── ...
    └── history.json            # complete history for this branch
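Once a few runs accumulate, the per-branch history.json supports simple trend analysis. A minimal sketch (the helper name is illustrative) that computes the pass rate for each archived run:

```python
import json
from pathlib import Path

def pass_rate_trend(history_file):
    """Return (timestamp, pass_rate_percent) tuples from a branch's history.json."""
    history = json.loads(Path(history_file).read_text(encoding="utf-8"))
    trend = []
    for ts in sorted(history):  # timestamps sort chronologically
        run = history[ts]
        rate = run["passed"] / run["total"] * 100 if run["total"] else 0.0
        trend.append((ts, round(rate, 1)))
    return trend
```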

Sample summary.md output

# Test Report Analysis - main - 20260124_213000
- **Total**: 120
- **Pass Rate**: 97.5% (117/120)
- **Failures**: 3
- **Duration**: 45.23s

## 🔴 New Failures
- `test_payment/test_refund.py::test_partial_refund`

## 🟢 Recovered Cases
- `test_user/test_profile.py::test_update_avatar`

## ❌ Current Failure List
- `test_order/test_create.py::test_concurrent_order`
- `test_payment/test_refund.py::test_partial_refund`
- `test_api/test_rate_limit.py::test_exceed_limit`

Advanced integration: automatic alerts

After generating the summary, you can add a conditional push:

if failed > 0 and os.getenv("REPORT_NOTIFY") == "1":
    send_dingtalk_message(summary)  # implement your own notification logic

Supported channels include DingTalk/WeChat work bots, email, and automatic Jira ticket creation for new failures.
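As a sketch of the notification side: DingTalk custom robots accept a JSON markdown payload posted to a webhook URL. The helpers below assume that standard webhook format; the title and function names are illustrative, and other channels (WeChat Work, email) need their own payload shapes:

```python
import json
import urllib.request

def build_dingtalk_payload(summary, title="Test Report Analysis"):
    """Shape the markdown summary as a DingTalk robot message."""
    return {"msgtype": "markdown", "markdown": {"title": title, "text": summary}}

def send_dingtalk_message(summary, webhook_url):
    """POST the summary to a DingTalk group-bot webhook."""
    data = json.dumps(build_dingtalk_payload(summary)).encode("utf-8")
    req = urllib.request.Request(
        webhook_url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status
```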

Conclusion

Archiving test reports transforms a one-off test run into a continuous health check. With historical data you can see trends, quickly spot regressions, centralise scattered reports, and drive data-guided quality improvements.

Tags: Python, CI, pytest, test reporting, report archiving