Modularize Pytest E2E Flows with a Dependency‑Managing Decorator

This guide shows how to use a custom Python @depends_on decorator to declare test dependencies, automatically skip downstream tests on failure, pass context data between tests, and enforce a topological execution order, making end‑to‑end pytest suites more modular and reliable.

Problem Statement

In API or end-to-end (E2E) automation testing, test steps are often chained (e.g., register → login → create order → pay). Writing all steps in a single large test case makes debugging difficult, duplicates code, and lets an early failure cascade through every later step.

Desired Solution

Introduce a declarative dependency system using a Python decorator that registers dependencies, performs topological sorting, skips dependent tests when a prerequisite fails, and passes return values via a shared context.
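
The target usage, sketched up front (the function names match the full example later in this guide):

# test_login runs only if test_register passed, and receives its return value
@depends_on("test_register")
def test_login(dependencies):
    reg = dependencies["test_register"]
    ...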

Core Implementation

Global structures store test context, status, and dependencies:

# Global state
_test_context = {}      # {"test_register": {"user_id": 123}}
_test_status = {}       # {"test_register": "passed" | "failed" | "skipped"}
_test_dependencies = {} # {"test_login": ["test_register"]}

The @depends_on decorator registers dependencies, checks their status before execution, extracts returned data, runs the test, and records its result:

import functools
import pytest

def depends_on(*test_names: str):
    """Decorator to declare that the current test depends on other tests."""
    def decorator(func):
        _test_dependencies[func.__name__] = list(test_names)
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # 1. Verify all dependencies passed; otherwise record "skipped" and skip
            for dep in test_names:
                status = _test_status.get(dep)
                if status != "passed":
                    _test_status[func.__name__] = "skipped"
                    pytest.skip(f"Skipping {func.__name__} because dependency {dep} did not pass (status: {status})")
            # 2. Gather dependency results from the shared context
            dep_results = {dep: _test_context[dep] for dep in test_names if dep in _test_context}
            # 3. Execute the test, injecting dependency data when available
            try:
                if dep_results:
                    result = func(*args, dependencies=dep_results, **kwargs)
                else:
                    result = func(*args, **kwargs)
                _test_status[func.__name__] = "passed"
                _test_context[func.__name__] = result
                return result
            except Exception:
                _test_status[func.__name__] = "failed"
                raise
        return wrapper
    return decorator

def reset_test_dependencies():
    _test_context.clear()
    _test_status.clear()
    _test_dependencies.clear()

Usage Example

import requests  # URLs below are placeholders; point them at your service's base URL

# 1. Register user (decorated with no arguments so its status and result are recorded)
@depends_on()
def test_register():
    resp = requests.post("/api/register", json={"username": "auto_user"})
    user_id = resp.json()["user_id"]
    return {"user_id": user_id, "username": "auto_user"}

# 2. Login (depends on registration)
@depends_on("test_register")
def test_login(dependencies):
    reg = dependencies["test_register"]
    resp = requests.post("/api/login", json={"username": reg["username"]})
    token = resp.json()["token"]
    return {"token": token, "user_id": reg["user_id"]}

# 3. Create order (depends on login)
@depends_on("test_login")
def test_create_order(dependencies):
    login = dependencies["test_login"]
    resp = requests.post("/api/orders", json={"user_id": login["user_id"], "items": ["A"]}, headers={"Authorization": f"Bearer {login['token']}"})
    order_id = resp.json()["order_id"]
    return {"order_id": order_id}

# 4. Pay order (depends on order creation)
@depends_on("test_create_order")
def test_pay_order(dependencies):
    order = dependencies["test_create_order"]
    resp = requests.post(f"/api/pay/{order['order_id']}")
    assert resp.status_code == 200

Execution Effects

Scenario 1 – All succeed: Each test runs in order, returns data, and the final report shows all tests passed.

Scenario 2 – Registration fails: Subsequent tests are automatically skipped, and the report clearly indicates the root cause without inflating the failure count.
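
To see the cascade concretely, here is a minimal sketch (in its own test module, reusing the decorator above) that forces the first step to fail:

# Force the root step to fail and watch the skip cascade
@depends_on()
def test_register():
    assert False, "simulated registration failure"

@depends_on("test_register")
def test_login(dependencies):
    ...  # never executes; skipped because test_register has status "failed"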

Pytest Integration Tips

Ensure deterministic order by overriding pytest_collection_modifyitems in conftest.py to sort items according to the dependency graph.

Use a session-scoped autouse fixture to reset the global dictionaries at the start of the session (with an optional teardown after the run).

# conftest.py
import pytest

# Import the shared registry from wherever the decorator module lives
# (the module name here is illustrative; adjust it to your project):
from depends_on_util import _test_dependencies, reset_test_dependencies

def pytest_collection_modifyitems(items):
    # Reorder collected items so every test runs after its dependencies.
    # The registry is already populated at this point: decorators run at
    # import time, which happens during collection.
    name_to_item = {item.name: item for item in items}
    ordered = []
    visited = set()
    def dfs(name):
        if name in visited:
            return
        visited.add(name)
        for dep in _test_dependencies.get(name, []):
            if dep in name_to_item:
                dfs(dep)
        if name in name_to_item:
            ordered.append(name_to_item[name])
    for item in items:
        if item.name not in visited:
            dfs(item.name)
    items[:] = ordered

@pytest.fixture(scope="session", autouse=True)
def clean_dependencies():
    # Start the session from a clean slate
    reset_test_dependencies()
    yield
    # optional: clean test data here

Best Practices & Safety

Avoid circular dependencies; add detection logic if needed (a minimal detection sketch follows this list).

Return structured dictionaries so downstream tests can easily extract needed values.

Keep core verification tests independent; use dependencies only for workflow‑like chains.

Combine real calls with mocks where appropriate (e.g., mock the payment service; a mocking sketch also appears below).
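
Since the dependency registry is a plain dictionary, cycle detection is a short depth-first search. A minimal sketch (find_dependency_cycle is a hypothetical helper you might call once after collection):

def find_dependency_cycle():
    """Return the first dependency cycle found as a list of test names, or None."""
    done = set()
    def dfs(name, stack):
        if name in stack:
            # The slice from the first occurrence of `name` is the cycle
            return stack[stack.index(name):] + [name]
        if name in done:
            return None
        for dep in _test_dependencies.get(name, []):
            cycle = dfs(dep, stack + [name])
            if cycle:
                return cycle
        done.add(name)
        return None
    for test in _test_dependencies:
        cycle = dfs(test, [])
        if cycle:
            return cycle
    return None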
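
And a sketch of mocking the payment call with unittest.mock, so the chain can be exercised without a real payment backend (the test name and patch target are illustrative):

from unittest.mock import patch

@depends_on("test_create_order")
def test_pay_order_mocked(dependencies):
    order = dependencies["test_create_order"]
    # Patch requests.post so the payment call never leaves the process
    with patch("requests.post") as mock_post:
        mock_post.return_value.status_code = 200
        resp = requests.post(f"/api/pay/{order['order_id']}")
        assert resp.status_code == 200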

Conclusion

By splitting large end‑to‑end tests into small, dependent units and managing them with the @depends_on decorator, you gain clearer failure diagnostics, reusable test components, and a maintainable test suite that behaves like modular building blocks.
