
Comprehensive Guide to Pytest: Features, Commands, Marks, and Fixtures

This article provides an in‑depth overview of the Pytest testing framework, covering its advantages, execution principles, essential command‑line options, frequently used functions, mark annotations, and fixture usage with scope and return‑value details for effective Python testing.


Pytest is a third‑party Python testing framework that is more concise and efficient than unittest: it uses plain assert statements instead of special assertion methods, can run existing unittest and nose test cases, and offers a rich plugin ecosystem, including pytest-rerunfailures for rerunning failed tests and pytest-xdist for parallel execution.

Execution works by running pytest (or variants such as pytest -q ) in the project directory. Pytest recursively discovers files named test_*.py or *_test.py and, within them, collects functions and methods whose names start with test as test cases.
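As an illustration of these discovery rules, a file with a hypothetical name like test_math.py is picked up automatically, and every function prefixed with test runs with no registration step:

```python
# test_math.py -- discovered because the filename starts with "test_"
# and the function names start with "test".

def add(a, b):
    return a + b

def test_add_positive():
    # plain assert statements, no assertEqual-style methods needed
    assert add(2, 3) == 5

def test_add_negative():
    assert add(-1, -1) == -2
```

Running pytest in the containing directory collects and executes both functions.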

Key command‑line utilities include:

pip install -U pytest – installs or upgrades to the latest version.

pytest --version – shows the installed version.

pytest --fixtures – lists available built‑in fixtures.

pytest -h – displays help for command‑line options.

pytest -x – stops after the first failure; pytest --maxfail=2 stops after two failures.

pytest test_mod.py and pytest testing/ – run a specific module or directory.

pytest -k "MyClass and not method" – selects tests whose names match a keyword expression.

pytest test_mod.py::test_func and pytest test_mod.py::TestClass::test_method – target an individual test function or method.
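The same selections can also be driven from Python via pytest.main, which accepts the command‑line arguments as a list and returns an exit code (a sketch; test_mod.py is the placeholder module name from the examples above and need not exist on your machine):

```python
import pytest

# Programmatic equivalent of:
#   pytest -q -k "MyClass and not method" test_mod.py
# Returns an exit code: 0 means all selected tests passed,
# nonzero signals failures or collection/usage errors.
exit_code = pytest.main(["-q", "-k", "MyClass and not method", "test_mod.py"])
```

This is useful for embedding a test run inside a larger script or CI entry point.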

Commonly used Pytest functions are:

pytest.approx(expected, rel=None, abs=None, nan_ok=False) – asserts numerical values are within a tolerance.

pytest.fail(reason='', pytrace=True) – forces a test to fail with a message (the parameter was named msg before pytest 7.0).

pytest.skip(reason='', allow_module_level=False) – skips a test with an optional reason.

pytest.xfail(reason='') – marks the current test as expected to fail.

pytest.main(args=None, plugins=None) – runs tests programmatically and returns an exit code.
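As a small sketch of the most commonly used helper, pytest.approx keeps floating‑point assertions readable without hand‑rolled tolerance checks:

```python
import pytest

def test_float_tolerance():
    # 0.1 + 0.2 is not exactly 0.3 in binary floating point, so a plain
    # equality check fails; approx compares within a tolerance instead.
    assert 0.1 + 0.2 != 0.3
    assert 0.1 + 0.2 == pytest.approx(0.3)
    # rel= and abs= tune the tolerance (the default is a relative
    # tolerance of 1e-6)
    assert 1.001 == pytest.approx(1.0, abs=0.01)
```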

Marks allow annotating tests for selective execution (e.g., pytest -m red ) or behavior modification. Simple decorators such as @pytest.mark.red can be stacked on one test alongside @pytest.mark.green , etc.; registering custom marks in pytest.ini avoids unknown‑mark warnings. The third‑party pytest-marks plugin condenses multiple marks into a single decorator: @pytest.marks('red', 'green', 'blue', 'black', 'orange', 'pink') .

Mark utilities also include:

pytest.mark.skip(*, reason=None) – unconditionally skips a test.

pytest.mark.skipif(condition, *, reason=None) – skips when a condition is true.

pytest.mark.usefixtures(*names) – declares that a test uses specified fixtures.

pytest.mark.xfail(condition=None, *, reason=None, raises=None, run=True, strict=False) – marks a test as expected to fail, optionally conditioned on factors such as Python version.
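A minimal sketch of the two conditional marks above (the test bodies and reasons are illustrative):

```python
import sys
import pytest

# skipif: the condition is evaluated at collection time; the test is
# skipped with the given reason when it is true.
@pytest.mark.skipif(sys.version_info < (3, 8),
                    reason="walrus operator requires Python 3.8+")
def test_walrus():
    assert (n := 4) == 4

# xfail: the test runs, but a failure is reported as "expected to fail"
# rather than as an error; strict=False tolerates an unexpected pass.
@pytest.mark.xfail(reason="demonstrates a known, expected failure",
                   strict=False)
def test_known_bug():
    assert 1 == 2
```

Under pytest, test_known_bug shows up as xfailed instead of failing the run.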

Fixtures provide reusable test resources. They are defined as functions decorated with @pytest.fixture and injected into tests by declaring the fixture name as a parameter. Fixtures support several scopes: function (the default, once per test), class (per test class), module (per file), package (per package), and session (once for the entire test run). A fixture may return or yield a value (default None ) that tests can use; with yield, the code after the yield statement runs as teardown. A fixture can also accept the built‑in request object to introspect the requesting test.
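A sketch of a module‑scoped fixture with yield‑based teardown (the fake in‑memory "database" and all names here are illustrative):

```python
import pytest

# scope="module": set up once per test file and shared by every test
# in it that requests the fixture by name.
@pytest.fixture(scope="module")
def fake_db():
    # Setup runs before the first test that requests the fixture.
    db = {"connected": True, "rows": []}
    yield db          # the yielded value is what tests receive
    # Teardown runs after the last test in the module finishes.
    db["connected"] = False

# The fixture is injected simply by naming it as a parameter.
def test_insert(fake_db):
    fake_db["rows"].append("alice")
    assert fake_db["connected"]
    assert "alice" in fake_db["rows"]
```

Because the scope is module, a second test in the same file would see the same dict, including the row inserted by test_insert.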

Overall, the article serves as a practical reference for leveraging Pytest’s capabilities, from basic commands to advanced mark and fixture techniques, to streamline and enhance Python testing workflows.

Tags: Python, Automation testing, pytest, fixtures, marks
Written by

360 Quality & Efficiency

360 Quality & Efficiency focuses on seamlessly integrating quality and efficiency in R&D, sharing 360’s internal best practices with industry peers to foster collaboration among Chinese enterprises and drive greater efficiency value.
