Hello! Welcome to part 2 of this pytest tutorial. 👋
In the previous part, we increased the code coverage for the earth project from 53% to 98% by developing a series of automated tests with pytest, based on a usage example from the project's README file. 🎉
There is still quite some room for improvement though, especially with regard to developer productivity: running all of the new tests takes more than 20 seconds. This is because test_large_group uses every function in the adventurers module, and when pandas get ready for an event, they first need to eat, and that takes time: four helpings at five seconds each! 🐼
def new_panda(name, **kwargs):
    def eat(panda):
        for i in range(4):
            print(f"{panda.profile} {panda.name} is eating... 🍱")
            time.sleep(5)

    kwargs.setdefault("location", "Asia")
    return Adventurer(
        name=name, profile="🐼", getting_ready=[eat, pack], **kwargs
    )
Also, the three new tests are almost identical. Only their respective parameters and markers are different. That’s a great use case for pytest’s parametrization features.
Like in the previous part, you can follow along with the code changes that we’ll make throughout this tutorial by checking out the commits on the write-pytest-plugins branch of the earth repository.
These are the tests and fixtures from the previous part:
tests/test_earth.py
import pytest

from earth import adventurers, Event, Months


@pytest.fixture(name="event")
def fixture_event():
    return Event("PyCon US", "North America", Months.MAY)


@pytest.fixture(name="small_group")
def fixture_small_group():
    return [
        adventurers.new_frog("Bruno"),
        adventurers.new_lion("Michael"),
        adventurers.new_koala("Brianna"),
        adventurers.new_tiger("Julia"),
    ]


@pytest.fixture(name="large_group")
def fixture_large_group():
    return [
        adventurers.new_frog("Bruno"),
        adventurers.new_panda("Po"),
        adventurers.new_fox("Dave"),
        adventurers.new_lion("Michael"),
        adventurers.new_koala("Brianna"),
        adventurers.new_tiger("Julia"),
        adventurers.new_fox("Raphael"),
        adventurers.new_fox("Caro"),
        adventurers.new_bear("Chris"),
        # Bears in warm climates don't hibernate 🐻
        adventurers.new_bear("Danny", availability=[*Months]),
        adventurers.new_bear("Audrey", availability=[*Months]),
    ]


@pytest.fixture(name="no_pandas_group")
def fixture_no_pandas_group():
    return [
        adventurers.new_frog("Bruno"),
        adventurers.new_fox("Dave"),
        adventurers.new_lion("Michael"),
        adventurers.new_koala("Brianna"),
        adventurers.new_tiger("Julia"),
        adventurers.new_fox("Raphael"),
        adventurers.new_fox("Caro"),
        adventurers.new_bear("Chris"),
        # Bears in warm climates don't hibernate 🐻
        adventurers.new_bear("Danny", availability=[*Months]),
        adventurers.new_bear("Audrey", availability=[*Months]),
    ]


@pytest.mark.wip
@pytest.mark.happy
def test_small_group(event, small_group):
    for adventurer in small_group:
        event.invite(adventurer)

    for attendee in event.attendees:
        attendee.get_ready()
        attendee.travel_to(event)

    event.start()


@pytest.mark.wip
@pytest.mark.slow
@pytest.mark.happy
@pytest.mark.xfail(reason="Problems with TXL airport")
def test_large_group(event, large_group):
    for adventurer in large_group:
        event.invite(adventurer)

    for attendee in event.attendees:
        attendee.get_ready()
        attendee.travel_to(event)

    event.start()


@pytest.mark.wip
@pytest.mark.happy
@pytest.mark.xfail(reason="Problems with TXL airport")
def test_no_pandas_group(event, no_pandas_group):
    for adventurer in no_pandas_group:
        event.invite(adventurer)

    for attendee in event.attendees:
        attendee.get_ready()
        attendee.travel_to(event)

    event.start()
Add a custom marker
Let’s start off this tutorial by adding a custom marker for tests that involve TXL airport. We will use it as an alias for the xfail marker and add documentation for it later.
tests/test_earth.py
pytest.mark.txl = pytest.mark.xfail(reason="Problems with TXL airport")
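For illustration only, here is a hypothetical test (not part of the earth test suite) decorated with the new alias; it behaves exactly as if we had applied the xfail marker with the TXL reason directly:

# Hypothetical example: pytest.mark.txl applies the aliased xfail marker,
# so a failure caused by the TXL airport problems is reported as xfailed.
@pytest.mark.txl
def test_travel_via_txl(event, small_group):
    for adventurer in small_group:
        event.invite(adventurer)

    for attendee in event.attendees:
        attendee.get_ready()
        attendee.travel_to(event)

    event.start()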
Write a new test that combines all scenarios
The three tests above each depend on a different group of adventurers:
- small_group: 🐸🦁🐨🐯
- large_group: 🐸🐼🦊🦁🐨🐯🦊🦊🐻🐻🐻
- no_pandas_group: 🐸🦊🦁🐨🐯🦊🦊🐻🐻🐻
The quickest way to switch between different groups, based on the individual test item, is to create a fixture that returns a specific group for a given parameter.
We can generate multiple test items from a single test function by using parametrize. We’ll use the pytest.param function and pass markers via the marks keyword argument to add markers to specific test parameters. Note that markers which apply to every test item are still added to the test function itself.
tests/test_earth.py
@pytest.fixture(name="group")
def fixture_group(request, small_group, large_group, no_pandas_group):
    group_name = request.param
    groups = {
        "small_group": small_group,
        "large_group": large_group,
        "no_pandas_group": no_pandas_group,
    }
    return groups[group_name]


@pytest.mark.wip
@pytest.mark.happy
@pytest.mark.parametrize(
    "group",
    [
        pytest.param("small_group"),
        pytest.param(
            "large_group",
            marks=[pytest.mark.txl, pytest.mark.slow],
        ),
        pytest.param(
            "no_pandas_group",
            marks=[pytest.mark.txl],
        ),
    ],
    indirect=True,
)
def test_earth(group, event):
    for adventurer in group:
        event.invite(adventurer)

    for attendee in event.attendees:
        attendee.get_ready()
        attendee.travel_to(event)

    event.start()
Delete now redundant tests
Now that we've created a new test that covers all three scenarios, we can delete the original tests. Be sure to check that the code coverage is still at 98% and that we didn't miss a scenario.
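If the project measures coverage with the pytest-cov plugin (a quick sketch, assuming the setup from part 1 used it), we can confirm the number and spot any lines we no longer hit:

pytest --cov=earth --cov-report=term-missing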
Write plugin to deselect slow tests by default
Let’s create a conftest.py file and add a custom CLI option. We then modify the collection of test items and automatically deselect tests with the slow marker:
tests/conftest.py
def pytest_addoption(parser):
    group = parser.getgroup("earth")
    group.addoption(
        "--slow",
        action="store_true",
        default=False,
        help="Include slow tests in test run",
    )


def pytest_collection_modifyitems(items, config):
    """Deselect tests marked as slow unless --slow is given."""
    if config.option.slow is True:
        return

    selected_items = []
    deselected_items = []

    for item in items:
        if item.get_closest_marker("slow"):
            deselected_items.append(item)
        else:
            selected_items.append(item)

    config.hook.pytest_deselected(items=deselected_items)
    items[:] = selected_items
By default, pytest now deselects slow tests:
pytest
Alternatively, if we want to run all tests, we can pass the custom CLI option:
pytest --slow
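And if we ever want to run only the slow tests, we can combine the new option with pytest's built-in marker expression, since our hook skips the deselection logic entirely when --slow is given:

pytest --slow -m slow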
Test with multiple events
We currently run our tests against a single event taking place in North America in May:
@pytest.fixture(name="event")
def fixture_event():
    return Event("PyCon US", "North America", Months.MAY)
That’s a risk we shouldn't take: the library might break for other combinations of event location and month! Code coverage is a good indicator of which lines of your code base are executed when you run your tests, but it cannot measure the quality of your test scenarios. ⚠️
It makes sense to run our happy path tests for several probable scenarios. We could use information about actual Python conferences to generate more events for our tests. The fantastic pytest-variables plugin loads test data from a JSON file and makes that data available in the variables fixture.
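If it isn't installed yet, we can add the plugin to our development environment with pip:

pip install pytest-variables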
Let’s create a new JSON file:
conferences.json
{
    "events": {
        "EuroPython": {
            "location": "Europe",
            "month": "JUL"
        },
        "PyCon US": {
            "location": "North America",
            "month": "MAY"
        },
        "PyCon AU": {
            "location": "Australia",
            "month": "AUG"
        },
        "PyCon Namibia": {
            "location": "Africa",
            "month": "FEB"
        },
        "Python Brasil": {
            "location": "South America",
            "month": "OCT"
        }
    }
}
Now we modify the event fixture to take additional parameters and create Event instances based on the information from the loaded JSON file:
tests/test_earth.py
@pytest.fixture(
    name="event",
    params=[
        "EuroPython",
        "PyCon AU",
        "PyCon Namibia",
        "PyCon US",
        "Python Brasil",
    ],
)
def fixture_event(request, variables):
    map_to_month = {month.name: month for month in Months}
    event_name = request.param
    event_info = variables["events"][event_name]
    event_location = event_info["location"]
    event_month = map_to_month[event_info["month"]]
    return Event(event_name, event_location, event_month)
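With three parametrized groups and five parametrized events, pytest now generates fifteen test items from the single test_earth function. We can inspect the generated test IDs without running anything:

pytest --collect-only -q --variables conferences.json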
Apply marker to specific fixture parameters
There is one edge case, though, that the current fixture implementation doesn’t account for. We need to apply the custom txl marker (our xfail alias) to events taking place in Europe, since attendees will have to land at TXL airport to get to the event. ⚠️
tests/test_earth.py
@pytest.fixture(
    name="event",
    params=[
        "EuroPython",
        "PyCon AU",
        "PyCon Namibia",
        "PyCon US",
        "Python Brasil",
    ],
)
def fixture_event(request, variables):
    map_to_month = {month.name: month for month in Months}
    event_name = request.param
    event_info = variables["events"][event_name]
    event_location = event_info["location"]
    event_month = map_to_month[event_info["month"]]

    # Apply marker for conferences in Europe
    if event_location == "Europe":
        request.applymarker(pytest.mark.txl)

    return Event(event_name, event_location, event_month)
When running the tests we now need to specify the file to load variables from:
pytest --variables conferences.json
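To see which of the parametrized tests were expected to fail (and which unexpectedly passed), we can also ask pytest for an extra report summary:

pytest --variables conferences.json -r xX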
Add variables file to pytest config
Rather than passing this CLI option manually every time, we can specify options that pytest adds by default in our configuration file. This is also where we register our custom markers:
pytest.ini
[pytest]
markers =
    slow: tests that take a long time to complete.
    txl: tests that involve TXL airport.
addopts = --variables conferences.json
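With the markers registered in the configuration, pytest can list them alongside its built-in ones, which is a quick way to check that the descriptions show up as intended:

pytest --markers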
Write plugin that caches test durations
I finished up the previous part of this tutorial by saying that we would be doing some really cool stuff with pytest. Let’s write a custom pytest plugin that automatically adds a turtle marker to tests that take a long time to complete! 🐢
We can’t possibly know how long a test will take before we’ve run it. However, what we do know is how long it took the last time we ran it. We can keep track of this information for the next time we run the tests.
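Before we write our own tracking, note that pytest's built-in durations report already shows the slowest test phases of a run, which helps with picking a sensible threshold:

pytest --durations=3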
Here’s what our plugin needs to do:

- keep track of test durations in the pytest_runtest_logreport hook
- write the test durations data to the cache in the pytest_sessionfinish hook

Additionally, we need the plugin to:

- try to load test durations data from the cache on start
- add a marker for slow tests in the pytest_collection_modifyitems hook
We can register local plugins from the conftest.py file:
tests/conftest.py
from collections import defaultdict

import pytest


class Turtle:
    """Plugin for adding markers to slow running tests."""

    def __init__(self, config):
        self.config = config
        self.durations = defaultdict(dict)
        self.durations.update(
            self.config.cache.get("cache/turtle", defaultdict(dict))
        )
        self.slow = 5.0

    def pytest_runtest_logreport(self, report):
        self.durations[report.nodeid][report.when] = report.duration

    @pytest.mark.tryfirst
    def pytest_collection_modifyitems(self, session, config, items):
        for item in items:
            duration = sum(self.durations[item.nodeid].values())
            if duration > self.slow:
                item.add_marker(pytest.mark.turtle)

    def pytest_sessionfinish(self, session):
        cached_durations = self.config.cache.get(
            "cache/turtle", defaultdict(dict)
        )
        cached_durations.update(self.durations)
        self.config.cache.set("cache/turtle", cached_durations)

    def pytest_configure(self, config):
        config.addinivalue_line(
            "markers", "turtle: marker for slow running tests"
        )


def pytest_configure(config):
    config.pluginmanager.register(Turtle(config), "turtle")
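After a test run, the recorded durations end up in pytest's cache directory (.pytest_cache). We can inspect the cached values with the built-in cache plugin:

pytest --cache-show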
Update hook for deselecting tests
We now need to update our pytest_collection_modifyitems hook implementation to additionally deselect tests with the new turtle marker.
tests/conftest.py
def pytest_collection_modifyitems(items, config):
    """Deselect tests marked with "slow" or "turtle" by default."""
    if config.option.slow is True:
        return

    selected_items = []
    deselected_items = []

    for item in items:
        if item.get_closest_marker("slow") or item.get_closest_marker("turtle"):
            deselected_items.append(item)
        else:
            selected_items.append(item)

    config.hook.pytest_deselected(items=deselected_items)
    items[:] = selected_items
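If we want to forget the recorded durations, for example after speeding up a test, we can clear the cache at the start of a run:

pytest --cache-clear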
Write plugin to run tests using a specific fixture
When working on fixtures, it’s not uncommon to run tests to see if the fixtures are working as expected, modify the fixture code, and run the tests again. For these situations, it makes sense to only run tests that use the new fixture and deselect all other tests.
Let’s write a plugin for that! 🦉
tests/conftest.py
def pytest_addoption(parser):
    group = parser.getgroup("earth")
    group.addoption(
        "--slow",
        action="store_true",
        default=False,
        help="Include slow tests in test run",
    )
    group.addoption(
        "--owl",
        action="store",
        type=str,
        default=None,
        metavar="fixture",
        help="Run tests using the fixture",
    )


class Owl:
    """Plugin for running tests using a specific fixture."""

    def __init__(self, config):
        self.config = config

    def pytest_collection_modifyitems(self, items, config):
        if not config.option.owl:
            return

        selected_items = []
        deselected_items = []

        for item in items:
            if config.option.owl in getattr(item, "fixturenames", ()):
                selected_items.append(item)
            else:
                deselected_items.append(item)

        config.hook.pytest_deselected(items=deselected_items)
        items[:] = selected_items


def pytest_configure(config):
    config.pluginmanager.register(Turtle(config), "turtle")
    config.pluginmanager.register(Owl(config), "owl")
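Now, while iterating on the event fixture for example, we can run only the tests that request it and let the plugin deselect everything else:

pytest --owl event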
Summary
Awesome, you made it to the end of this part! 🎉
We’ve covered how to run tests with different parameters, how to deselect specific tests, how to load test data from a JSON file, how to use the pytest cache to store and load test durations, and finally how to run only the tests that use a specific fixture.
I hope you enjoyed and learned something from my tutorial!
(Sincere thanks to my co-worker Chris Hartjes for proofreading this article.)