Pytest: Getting started with automated testing for Python
Testing Python applications is critical to ensuring they function as intended in real-world scenarios. Among the numerous testing frameworks available for Python, pytest stands out as a primary choice for many developers. Renowned for its straightforward yet powerful features, pytest makes writing and running tests efficient and effective.
In this guide, we’ll explore how to get started with pytest, from installation to writing your first tests. We’ll also discuss how automating pytest within a continuous integration (CI) environment can simplify your testing process, making it more consistent, reliable, and efficient. By integrating pytest with CI, teams can catch bugs early and often, enhancing the overall quality of their software products.
What is pytest?
Pytest is a popular, flexible, and easy-to-use software testing framework for Python. It offers a broad set of features that simplify testing, making it ideal for both simple unit tests and complex functional testing.
With pytest, teams can write concise and readable test cases using Python’s familiar syntax. Additionally, pytest’s large collection of plugins extends its capabilities and allows for customized testing workflows.
Python’s simple syntax and readability make writing test cases easy. This allows teams to create thorough test suites to ensure the quality and reliability of their software projects.
Key features of pytest
Pytest’s unique features make it one of the most popular testing tools for Python code. These features include:
- Simple syntax: Pytest offers an easy approach to writing tests, making it accessible to beginners.
- Assertion introspection: Unique to pytest, assertion introspection provides easier debugging by highlighting the exact point of failure in a test.
- Compatibility: Pytest integrates with existing unittest test suites, so teams can migrate to pytest without rewriting tests.
- Plugin-based architecture: The framework’s extensibility through plugins supports the customization and enhancement of its core features.
While pytest is particularly adept at handling unit tests, its plugins and fixtures can also handle more complex test scenarios at higher levels of the testing pyramid for a more thorough testing strategy.
Benefits of pytest
Let’s review some additional benefits of using pytest to automate Python testing:
- Reduced boilerplate code: Pytest reduces boilerplate code, making it possible to write concise and expressive tests. This improves test readability and maintainability, leading to more efficient debugging and code comprehension.
- Extensive fixture support: Pytest also provides extensive support for fixtures, simplifying setup and teardown tasks and promoting code reuse. This adds flexibility and scalability to test automation.
- Active community: Pytest's active community continuously develops plugins that extend its functionality, catering to diverse testing needs.
Installing pytest
The following sections walk through the hands-on steps to start using pytest for automated Python testing. Before getting started, ensure Python 3.10 or later is installed on the local machine. Python installations include pip, which is needed to install pytest.
First, create a virtual environment to keep project dependencies isolated:
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
To install pytest, open a terminal or command prompt and run the following command, which downloads and installs pytest and its dependencies:
pip install pytest
Next, verify the installation by running the following command, which displays the installed version of pytest:
pytest --version
Now, run a basic test to ensure pytest is functioning properly. Create a simple test file named test_example.py with the following code. This checks if a simple arithmetic operation (addition) is correct:
def test_answer():
    assert 5 == 2 + 3
To run this test, navigate to the directory containing test_example.py in a terminal or command prompt and run the following command:
pytest
pytest will discover and run the test, then report the results. If the test passes, pytest is installed and ready for use.
Pytest fixtures
In pytest, fixtures help set up and tear down the test environment or state before and after tests run. Fixtures help to ensure that tests run under consistent conditions, making test outcomes reliable and repeatable.
Fixtures promote modularity by allowing test setup and teardown logic to be abstracted into reusable components. This abstraction means that the same setup can be shared across multiple tests, reducing code duplication and resulting in a cleaner, more maintainable test suite.
The following example tests a simple caching system, using fixtures to manage setup and teardown automatically.
To get started with pytest fixtures, create a file named test_example.py with this content:
# test_example.py
import pytest

class SimpleCache:
    def __init__(self):
        self.store = {}

    def set(self, key, value):
        self.store[key] = value

    def get(self, key):
        return self.store.get(key, None)

@pytest.fixture
def cache():
    """Fixture to provide a fresh SimpleCache instance to each test."""
    test_cache = SimpleCache()
    yield test_cache  # Test runs with this instance of SimpleCache.
    # Teardown can be more complex if needed, but here we just clear the cache.
    test_cache.store.clear()

def test_cache_set_and_get(cache):
    cache.set("test_key", "test_value")
    assert cache.get("test_key") == "test_value", "Value should be retrievable after being set."

def test_cache_miss_returns_none(cache):
    assert cache.get("nonexistent_key") is None, "Cache miss should return None."
This example introduces a SimpleCache class that represents a simple key-value cache implementation and uses a pytest fixture to provide a fresh instance of this cache to each test. Consequently, tests for setting and retrieving values and handling cache misses run with a clean state.
This approach enhances test modularity by centralizing the cache initialization and cleanup within the fixture, ensuring that cache-related tests are easily maintainable and extensible. It also promotes reusability by allowing different tests to use the same setup.
To verify that the caching system works as expected, navigate to the directory containing test_example.py from a terminal or command prompt. Then, run the tests using the following command:
pytest test_example.py
Writing tests in pytest
Thanks to its minimalistic syntax and powerful features, writing tests in pytest is straightforward. That said, there are some guidelines to follow when writing basic tests effectively.
Test discovery conventions
Pytest automatically discovers tests by following naming conventions. Test files should be named test_*.py or *_test.py. Test functions should start with test_, and test classes should start with Test (with no __init__ method). Pytest searches the current directory and subdirectories for files matching these patterns, then collects all matching functions and methods within them.
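For instance, pytest would collect both test files in a layout like this (the file and directory names are illustrative — one matches the test_*.py pattern, the other *_test.py):

```
project/
    src/
        calculator.py
    tests/
        test_calculator.py
        addition_test.py
```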
Within each test function, use names that clearly describe what is being tested — for example, test_addition() or test_file_reading().
To assert outcomes, use Python's built-in assert statements. Pytest's assertion introspection provides detailed failure messages, highlighting the exact point of failure. When an assert condition is false, the test raises an AssertionError, providing clear feedback on the test outcome.
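As a sketch of what introspection buys you, consider a test like this (the helper function is illustrative):

```python
# A small helper; the name and logic are illustrative.
def total(items):
    return sum(items)

def test_total():
    # If this assertion failed, pytest would rewrite it and report the
    # intermediate values (e.g. the actual return value of total()),
    # rather than raising a bare AssertionError.
    assert total([1, 2, 3]) == 6
```

On a failure, pytest prints the computed value alongside the expected one, so there is no need to craft custom failure messages or memorize specialized assertion methods.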
Organize tests logically, focusing on one aspect per test function, and run them using pytest’s command-line tool or integrated development environment (IDE) integration for efficient testing.
The next two sections describe how to write tests in pytest:
- Testing a sum function
- Testing a swap function
Testing a sum function
Suppose there is a simple function, sum_two_numbers, that needs testing:
# sum.py
def sum_two_numbers(a, b):
    return a + b
To write a pytest test for this function, use the code below:
# test_sum.py
from sum import sum_two_numbers
def test_sum_two_numbers():
    assert sum_two_numbers(1, 2) == 3, "Expected sum of 1 and 2 to be 3"
This test checks that sum_two_numbers correctly calculates the sum of 1 and 2. The assert statement verifies the function’s output against the expected result.
Testing a swap function
As another example, say we want to test a function that swaps the values of two variables using a temporary variable:
# swap.py
def swap_values(a, b):
    tmp = a
    a = b
    b = tmp
    return a, b
The pytest test to check this function is:
# test_swap.py
from swap import swap_values
def test_swap_values():
    assert swap_values(3, 4) == (4, 3), "Values should be swapped"
Running specific tests
To run tests from a single file, pass the file path directly:
pytest test_sum.py
To run a single test function, append :: and the function name:
pytest test_sum.py::test_sum_two_numbers
The -k flag filters tests by name using a substring match:
pytest -k "swap"
Add -v for verbose output that lists each test result individually.
Parameterizing pytest
Parameterized tests in pytest allow a single test function to run multiple times with different input arguments. Through parameterization, each set of arguments effectively becomes a separate test case, providing broad test coverage with minimal code. Using decorators like @pytest.mark.parametrize, parameters and corresponding values can be specified for each test iteration.
For instance, @pytest.mark.parametrize('input, expected', [(1, 2), (2, 4), (3, 6)]) defines input-output pairs. During test execution, pytest will run the test function for each parameter set, reporting individual outcomes.
Parameterized tests consolidate similar tests into a single function, promoting code readability and maintainability while ensuring thorough test coverage.
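The doubling example above can be written out in full (the double function is illustrative):

```python
import pytest

def double(n):
    # Illustrative function matching the input/expected pairs above.
    return n * 2

@pytest.mark.parametrize('input, expected', [(1, 2), (2, 4), (3, 6)])
def test_double(input, expected):
    assert double(input) == expected
```

Running pytest -v on this file reports three separate test results, one per parameter tuple.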
Returning to the swap function from the previous section, use the code below to write a parameterized test:
# test_swap_parametrized.py
import pytest
from swap import swap_values
@pytest.mark.parametrize("input_a, input_b, expected_a, expected_b", [
    (1, 2, 2, 1),                # Test with integers
    ('a', 'b', 'b', 'a'),        # Test with strings
    (True, False, False, True),  # Test with boolean values
])
def test_swap_values(input_a, input_b, expected_a, expected_b):
    result_a, result_b = swap_values(input_a, input_b)
    assert result_a == expected_a and result_b == expected_b
The @pytest.mark.parametrize decorator specifies parameters (input_a, input_b, expected_a, expected_b) and their values for each test case, with each tuple representing a unique set of arguments. Pytest then runs the test_swap_values function with these tuples, creating multiple test cases from a single function to cover various input scenarios.
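Pytest's pytest.param helper lets individual cases carry readable ids, which appear in verbose test output instead of the raw argument values. A sketch, with a self-contained stand-in for swap_values:

```python
import pytest

def swap_values(a, b):
    # Self-contained stand-in for the swap.py function above.
    return b, a

@pytest.mark.parametrize(
    "a, b, expected",
    [
        pytest.param(1, 2, (2, 1), id="ints"),
        pytest.param("x", "y", ("y", "x"), id="strings"),
        pytest.param(None, 0, (0, None), id="falsy-values"),
    ],
)
def test_swap_values_ids(a, b, expected):
    assert swap_values(a, b) == expected
```

With pytest -v, the cases show up as test_swap_values_ids[ints], test_swap_values_ids[strings], and so on, which makes failures easier to pinpoint.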
Pytest markers
Markers label test functions so teams can control which tests run. Pytest includes several built-in markers, and custom markers can be defined per project.
The @pytest.mark.skip marker tells pytest to skip a test entirely. @pytest.mark.xfail marks a test as expected to fail — it still runs, but a failure won’t count against the test suite:
import pytest
@pytest.mark.skip(reason="not implemented yet")
def test_future_feature():
    assert some_new_function() == True

@pytest.mark.xfail
def test_known_bug():
    assert flaky_function() == 42
Custom markers work well for categorizing tests. Register them in pyproject.toml to avoid warnings, then apply them with decorators:
# pyproject.toml
[tool.pytest.ini_options]
markers = [
    "slow: marks tests as slow",
    "integration: marks integration tests",
]
@pytest.mark.slow
def test_large_dataset():
    assert process_large_file("big_data.csv") is not None
To run only tests with a specific marker, use the -m flag:
pytest -m slow
To exclude them:
pytest -m "not slow"
Pytest plugins
Pytest’s functionality can be significantly extended by using plugins. Plugins allow the testing environment to be customized to meet the specific needs of a project. These plugins can add new fixtures, hooks, command-line options, and even integrate with other frameworks.
Some popular pytest plugins:
- pytest-django — Tailored for Django applications, pytest-django simplifies testing by integrating pytest with Django’s test framework. It supports the efficient testing of Django models, views, and templates.
- pytest-asyncio — pytest-asyncio helps test asyncio coroutines and provides support for asynchronous programming in Python.
- pytest-cov — This plugin offers detailed coverage reporting for pytest tests, integrating with the coverage.py tool. It also helps identify untested code segments.
- pytest-mock — This plugin enhances pytest with support for the mock library, making it easy to mock classes, objects, and functions in tests.
Continuous integration with pytest
Testing Python apps with pytest is an important step toward ensuring software operates correctly and efficiently. But running tests manually after every change can be cumbersome and error-prone. Teams may overlook critical test cases, test in inconsistent environments, or forget to test changes altogether, all of which can jeopardize the stability of an application.
CI ensures that every piece of code is tested automatically before it is integrated into the main branch. This continuous testing loop makes it faster and easier to detect and resolve bugs. It also makes collaboration more efficient, providing team-wide visibility into the health of a codebase.
CircleCI provides first-class support for Python developers and makes automating pytest jobs straightforward and efficient. This support includes dependency caching to speed up build times, easy configuration for parallel test execution, and the ability to quickly set up different Python environments to ensure compatibility across multiple versions.
To get started running Python tests in a CI pipeline in 15 minutes or less, follow our Python documentation. For more specific applications, check out the following resources that guide teams through setting up and optimizing pytest configurations:
- Testing a Flask framework with pytest — This resource provides detailed instructions on setting up pytest within a CI pipeline for Flask applications, ensuring that automated CI processes thoroughly test web applications built with Flask.
- Testing a PyTorch machine learning model with pytest — Designed for developers working with PyTorch machine learning (ML) models, this guide explains how to incorporate pytest into the testing workflow. It demonstrates how to automate the testing of ML code to verify model behavior and performance.
By using pytest in CI pipelines, teams can automate their testing process, improve software quality, and speed up the development workflow — all things that make it easier to maintain and evolve complex applications.
Conclusion
This article explored the ways pytest can automate testing for Python applications. From installation to advanced features like fixtures, markers, and plugins, the tools and knowledge covered here should help with implementing effective automated testing strategies.
For developers seeking to strengthen their testing methodologies, pytest can make a real difference. Not only does it simplify test creation and execution, but it also supports practices like continuous integration and test-driven development. Sign up for a free CircleCI account and get started building more reliable Python applications today.