Last week, I released pytest-md 📝, a new pytest plugin for generating Markdown reports. My goal for this project is to make it more convenient to generate test summaries for GitHub issue and pull request comments.
pytest-md also integrates with another pytest plugin: pytest-emoji 😁
```
$ pytest --emoji -v --md report.md
```

```
# Test Report

*Report generated on 25-Feb-2019 at 17:18:29 by [pytest-md]* 📝

[pytest-md]: https://github.com/hackebrot/pytest-md

## Summary

8 tests ran in 0.06 seconds ⏱

- 1 failed 😰
- 3 passed 😃
- 1 skipped 🙄
- 1 xfailed 😞
- 1 xpassed 😲
- 1 error 😡
```
## Test Coverage for Plugins
As pytest plugin developers we have a real responsibility. pytest’s hook-based plugin architecture allows us to alter almost any aspect of a test harness, which means a bug in our plugin can cause undesired and unexpected test outcomes in projects that have installed our plugin. ⚠️
That’s why I think it’s important that pytest plugin authors reduce blind spots and increase test coverage for their plugin code by developing meaningful and robust tests.
pytest comes with the pytester plugin, which aims to make it easier to develop automated tests for plugin projects. If you’re new to this subject, that’s perfectly fine! 🙂
I suggest you check out the corresponding section in the pytest documentation and come back to this article later, as I will be using pytester in my code examples throughout this blog post. 💻
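To give you a rough idea of what that looks like, here is a minimal sketch of a pytester-based test (the file name and the inner example test are made up for illustration and are not part of pytest-md): the `testdir` fixture writes a throwaway test module into an isolated directory, runs pytest against it in-process, and lets us assert on the outcomes.

```python
# tests/test_sketch.py (hypothetical example, not part of pytest-md)
# Assumes the test suite's conftest.py sets: pytest_plugins = ["pytester"]


def test_plugin_happy_path(testdir):
    # Write a throwaway test module into an isolated temporary directory
    testdir.makepyfile(
        """
        def test_example():
            assert 2 + 2 == 4
        """
    )

    # Run pytest in-process against that module
    result = testdir.runpytest("-v")

    # Assert on the outcomes of the inner test run
    result.assert_outcomes(passed=1)
```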
## Happy Path Testing
When I set up a test suite for a new project, I make sure to start with what is sometimes referred to as “happy path testing”. Usually that means I write one or more tests for code examples from the README file of my project to verify that my plugin’s default usage is free of bugs and produces the expected outputs.
For pytest plugins that change their behavior based on whether another plugin is installed and enabled or not, I recommend writing additional tests to verify that the integration works correctly and your plugin produces the expected output for every scenario.
I came up with the following test suite design that has worked quite well for me. If you have any questions about it or end up adopting a similar approach for your pytest plugin, I’d love to hear from you on Twitter! 😃
## Example Project
The pytest-md plugin adds the capability to pytest to generate Markdown test reports and can optionally add emojis to the report using pytest-emoji. Depending on whether pytest is running in verbose mode or not, the generated Markdown report then includes either the outcome descriptions (`failed`, `passed`, `skipped`, `xfailed`, `xpassed`, `error`) of the test results along with emojis, or only emojis.
```
$ pytest --md report.md
```

```
# Test Report

*Report generated on 25-Feb-2019 at 17:18:29 by [pytest-md]*

[pytest-md]: https://github.com/hackebrot/pytest-md

## Summary

8 tests ran in 0.05 seconds

- 1 failed
- 3 passed
- 1 skipped
- 1 xfailed
- 1 xpassed
- 1 error
```
```
$ pytest --emoji -v --md report.md
```

```
# Test Report

*Report generated on 25-Feb-2019 at 17:18:29 by [pytest-md]* 📝

[pytest-md]: https://github.com/hackebrot/pytest-md

## Summary

8 tests ran in 0.06 seconds ⏱

- 1 failed 😰
- 3 passed 😃
- 1 skipped 🙄
- 1 xfailed 😞
- 1 xpassed 😲
- 1 error 😡
```
## Isolating Test Environments
We can use the fantastic tox tool to create different test environments for our automated tests and other code quality checks. 🕵
tox creates a new virtual environment for each test environment with a particular Python version and installs our package into that environment (just like a user would do). This ensures that our Python package is set up correctly and we run our tests against the packaged distribution instead of the source code files in our working directory.
The tox configuration for pytest-md defines 6 test environments:
- Python 3.6, pytest-emoji not installed
- Python 3.7, pytest-emoji not installed
- Python 3.6, pytest-emoji installed
- Python 3.7, pytest-emoji installed
- static type checks with mypy
- code style and quality checks with flake8
tox.ini
```ini
[tox]
envlist = py36,py37,{py36,py37}-emoji,mypy,flake8

[testenv]
deps =
    freezegun
    emoji: pytest-emoji
commands = pytest -v {posargs:tests}

[testenv:flake8]
deps = flake8
commands = flake8

[testenv:mypy]
deps = mypy
commands = mypy {toxinidir}/src/
```
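If you want to run only some of these environments locally, you can select them by name with tox’s `-e` option (example invocations based on the environment names defined above):

```
# Run only the Python 3.7 environment with pytest-emoji installed
$ tox -e py37-emoji

# Run only the code quality checks
$ tox -e flake8,mypy
```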
## Test Implementation
Let’s start with an autouse fixture that uses testdir from the pytester plugin to create a temporary test file for each test:
tests/conftest.py
```python
import textwrap

import pytest

pytest_plugins = ["pytester"]


@pytest.fixture(name="emoji_tests", autouse=True)
def fixture_emoji_tests(testdir):
    """Create a test module with several tests that produce all the different
    pytest test outcomes.
    """
    emoji_tests = textwrap.dedent(
        """\
        import pytest

        def test_failed():
            assert "emoji" == "hello world"

        @pytest.mark.xfail
        def test_xfailed():
            assert 1234 == 100

        @pytest.mark.xfail
        def test_xpass():
            assert 1234 == 1234

        @pytest.mark.skip(reason="don't run this test")
        def test_skipped():
            assert "pytest-emoji" != ""

        @pytest.mark.parametrize(
            "name, expected",
            [
                ("Sara", "Hello Sara!"),
                ("Mat", "Hello Mat!"),
                ("Annie", "Hello Annie!"),
            ],
        )
        def test_passed(name, expected):
            assert f"Hello {name}!" == expected

        @pytest.fixture
        def number():
            return 1234 / 0

        def test_error(number):
            assert number == number
        """
    )
    testdir.makepyfile(test_emoji_tests=emoji_tests)
```
Now let’s write our happy path test. The following test runs pytest on the temporary test file that we just created (`runpytest`) and generates a Markdown report at a temporary location (`report_path`). We then read the file content and compare it with what we know is the correct output (`report_content`).
tests/test_generate_report.py
```python
def test_generate_report(testdir, cli_options, report_path, report_content):
    """Check the contents of a generated Markdown report."""
    # Run pytest with the CLI options for the current test scenario
    result = testdir.runpytest(*cli_options, "--md", f"{report_path}")

    # Make sure that we get a '1' exit code,
    # as we have at least one failure
    assert result.ret == 1

    # Check the generated Markdown report
    assert report_path.read_text() == report_content
```
Note how we add extra CLI options (`cli_options`) when calling `runpytest`. We will use this mechanism later to run our tests with `--verbose` and `--emoji` respectively.
## Test Scenarios
There are 4 scenarios that we need test coverage for:
- pytest-emoji not installed, normal mode
- pytest-emoji not installed, verbose mode
- pytest-emoji installed and enabled, normal mode
- pytest-emoji installed and enabled, verbose mode
We can codify the scenarios by creating a Python Enum with member values that represent the test scenarios:
tests/conftest.py
```python
import enum


class Mode(enum.Enum):
    """Enum for the several test scenarios."""

    NORMAL = "normal"
    VERBOSE = "verbose"
    EMOJI_NORMAL = "emoji_normal"
    EMOJI_VERBOSE = "emoji_verbose"
```
I recommend implementing the following pytest hook for changing the default test parameter IDs, so that pytest prints the `Mode` member values instead of their names. This is completely optional, but I think it makes the CLI report more readable.
tests/conftest.py
```python
def pytest_make_parametrize_id(config, val):
    """Return a custom test ID for Mode parameters."""
    if isinstance(val, Mode):
        return val.value
    return f"{val!r}"
```
## CLI Options
Next we specify a mapping from test scenarios to lists of CLI options. Note how the return value of `cli_options` is based on a `mode` fixture. I will explain how we create this fixture later in this blog post. For now, assume that `mode` will be one of the Enum members.
tests/conftest.py
```python
@pytest.fixture(name="cli_options")
def fixture_cli_options(mode):
    """Return CLI options for the different test scenarios."""
    cli_options = {
        Mode.NORMAL: [],
        Mode.VERBOSE: ["--verbose"],
        Mode.EMOJI_NORMAL: ["--emoji"],
        Mode.EMOJI_VERBOSE: ["--verbose", "--emoji"],
    }
    return cli_options[mode]
```
## Markdown Report Location
We also need a temporary file path to write our Markdown report to. We can use pytest’s built-in `tmp_path` fixture for that:
tests/conftest.py
```python
@pytest.fixture(name="report_path")
def fixture_report_path(tmp_path):
    """Return a temporary path for writing the Markdown report."""
    return tmp_path / "emoji_report.md"
```
## Expected Markdown Reports
The Markdown report generated by pytest-md includes the date and time of when the test session finished and also the duration of the test session. This means we have to mock the current time to be able to compare the generated Markdown report against a text representation of the expected report.
I recommend the freezegun library for mocking all calls to retrieve the current time. 🕖
tests/conftest.py
```python
import datetime

import freezegun


@pytest.fixture(name="now")
def fixture_now():
    """Patch the current time for reproducible test reports."""
    freezer = freezegun.freeze_time("2019-01-21 18:30:40")
    freezer.start()
    yield datetime.datetime(2019, 1, 21, 18, 30, 40)
    freezer.stop()
```
Next we define a fixture (`report_content`) that returns an expected Markdown report for each test scenario (similar to how `cli_options` works).
```python
@pytest.fixture(name="report_content")
def fixture_report_content(mode, now):
    """Return the expected Markdown report for the different
    test scenarios.
    """
    rdate = now.strftime("%d-%b-%Y")
    rtime = now.strftime("%H:%M:%S")

    if mode is Mode.EMOJI_NORMAL:
        return textwrap.dedent(
            f"""\
            # Test Report

            *Report generated on {rdate} at {rtime} by [pytest-md]* 📝

            [pytest-md]: https://github.com/hackebrot/pytest-md

            ## Summary

            8 tests ran in 0.00 seconds ⏱

            - 1 😿
            - 3 🦊
            - 1 🙈
            - 1 🤓
            - 1 😜
            - 1 💩
            """
        )

    if mode is Mode.EMOJI_VERBOSE:
        return textwrap.dedent(
            f"""\
            # Test Report

            *Report generated on {rdate} at {rtime} by [pytest-md]* 📝

            [pytest-md]: https://github.com/hackebrot/pytest-md

            ## Summary

            8 tests ran in 0.00 seconds ⏱

            - 1 failed 😿
            - 3 passed 🦊
            - 1 skipped 🙈
            - 1 xfailed 🤓
            - 1 xpassed 😜
            - 1 error 💩
            """
        )

    # Return the default report for Mode.NORMAL and Mode.VERBOSE
    return textwrap.dedent(
        f"""\
        # Test Report

        *Report generated on {rdate} at {rtime} by [pytest-md]*

        [pytest-md]: https://github.com/hackebrot/pytest-md

        ## Summary

        8 tests ran in 0.00 seconds

        - 1 failed
        - 3 passed
        - 1 skipped
        - 1 xfailed
        - 1 xpassed
        - 1 error
        """
    )
```
## Generating Tests
With the `Mode` Enum in place, we now generate one test for each of its members by creating a parametrized `mode` fixture. We also add the custom `pytest.mark.emoji` marker to the test scenarios that require the pytest-emoji plugin:
tests/conftest.py
```python
def pytest_generate_tests(metafunc):
    """Generate several values for the "mode" fixture and add the "emoji"
    marker for certain test scenarios.
    """
    if "mode" not in metafunc.fixturenames:
        return

    metafunc.parametrize(
        "mode",
        [
            Mode.NORMAL,
            Mode.VERBOSE,
            pytest.param(Mode.EMOJI_NORMAL, marks=pytest.mark.emoji),
            pytest.param(Mode.EMOJI_VERBOSE, marks=pytest.mark.emoji),
        ],
    )
```
## Document Markers
While this step is not strictly required for implementing the test suite design, I would encourage you to do it anyway. Documenting markers certainly helps developers who are just getting started with working on your test suite. 📝
We add a brief description for the marker to the pytest config, which will be printed on the CLI when running `pytest --markers`:
pytest.ini
```ini
[pytest]
markers =
    emoji: tests which are skipped if pytest-emoji is not installed.
```
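With this in place, the marker and its description show up when we ask pytest to list the available markers, roughly like the following (output abridged; pytest also lists its built-in markers):

```
$ pytest --markers
@pytest.mark.emoji: tests which are skipped if pytest-emoji is not installed.
...
```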
## Skipping Marked Tests
The final piece is to instruct pytest to skip tests marked with `emoji` if the pytest-emoji plugin is not installed. That means we run all 4 test scenarios if pytest-emoji is installed, but only 2 if it is not (calling pytest with `--emoji` in an environment that doesn’t have pytest-emoji installed would result in an error).
tests/conftest.py
```python
def pytest_collection_modifyitems(items, config):
    """Skip tests marked with "emoji" if pytest-emoji is not installed."""
    if config.pluginmanager.hasplugin("emoji"):
        return

    for item in items:
        if item.get_closest_marker("emoji"):
            item.add_marker(
                pytest.mark.skip(reason="pytest-emoji is not installed")
            )
```
## Custom Emojis
The pytest-md test suite has one additional fixture, which overwrites the default emojis for tests marked with `emoji`. I mention this here for the sake of completeness and just in case you have been copying the code from this blog post and are wondering why the tests are failing. 😉
If we did this for all of the tests, regardless of whether they are marked with `emoji`, we would get a pytest error, because we would be implementing hooks that are not registered.
tests/conftest.py
```python
@pytest.fixture(name="custom_emojis", autouse=True)
def fixture_custom_emojis(request, testdir):
    """Create a conftest.py file for emoji tests, which implements the
    pytest-emoji hooks.
    """
    if "emoji" not in request.keywords:
        # Only create a conftest.py for emoji tests
        return

    conftest = textwrap.dedent(
        """\
        def pytest_emoji_passed(config):
            return "🦊 ", "PASSED 🦊 "

        def pytest_emoji_failed(config):
            return "😿 ", "FAILED 😿 "

        def pytest_emoji_skipped(config):
            return "🙈 ", "SKIPPED 🙈 "

        def pytest_emoji_error(config):
            return "💩 ", "ERROR 💩 "

        def pytest_emoji_xfailed(config):
            return "🤓 ", "XFAILED 🤓 "

        def pytest_emoji_xpassed(config):
            return "😜 ", "XPASSED 😜 "
        """
    )
    testdir.makeconftest(conftest)
```
## Test Run
Wow, you made it! You’re awesome! 😁
It’s time to finally run our test suite and see whether our harness works as expected:
```
$ tox

=========================== test session starts ============================
plugins: md-0.1.1
collecting ... collected 4 items

tests/test_generate_report.py::test_generate_report[normal] PASSED
tests/test_generate_report.py::test_generate_report[verbose] PASSED
tests/test_generate_report.py::test_generate_report[emoji_normal] SKIPPED
tests/test_generate_report.py::test_generate_report[emoji_verbose] SKIPPED

=================== 2 passed, 2 skipped in 0.25 seconds ====================

=========================== test session starts ============================
plugins: md-0.1.1
collecting ... collected 4 items

tests/test_generate_report.py::test_generate_report[normal] PASSED
tests/test_generate_report.py::test_generate_report[verbose] PASSED
tests/test_generate_report.py::test_generate_report[emoji_normal] SKIPPED
tests/test_generate_report.py::test_generate_report[emoji_verbose] SKIPPED

=================== 2 passed, 2 skipped in 0.22 seconds ====================

=========================== test session starts ============================
plugins: md-0.1.1, emoji-0.2.0
collecting ... collected 4 items

tests/test_generate_report.py::test_generate_report[normal] PASSED
tests/test_generate_report.py::test_generate_report[verbose] PASSED
tests/test_generate_report.py::test_generate_report[emoji_normal] PASSED
tests/test_generate_report.py::test_generate_report[emoji_verbose] PASSED

========================= 4 passed in 0.42 seconds =========================

=========================== test session starts ============================
plugins: md-0.1.1, emoji-0.2.0
collecting ... collected 4 items

tests/test_generate_report.py::test_generate_report[normal] PASSED
tests/test_generate_report.py::test_generate_report[verbose] PASSED
tests/test_generate_report.py::test_generate_report[emoji_normal] PASSED
tests/test_generate_report.py::test_generate_report[emoji_verbose] PASSED

========================= 4 passed in 0.40 seconds =========================

_________________________________ summary __________________________________
  py36: commands succeeded
  py37: commands succeeded
  py36-emoji: commands succeeded
  py37-emoji: commands succeeded
  mypy: commands succeeded
  flake8: commands succeeded
  congratulations :)
```
## Summary
We have developed a set of happy path tests for our example plugin using a variety of pytest features. If you feel overwhelmed, that’s understandable. I know pytest can be quite confusing, but it’s also very powerful and flexible. I hope this blog post demonstrates how we can use it to build better test suites! 💻
Let’s wrap up with a summary of what we’ve learned:
- it’s a good idea to codify test scenarios (for example with Python Enums)
- we can then generate a test for each of these test scenarios
- we can add custom markers to tests that require extra plugins
- we can skip marked tests if a plugin is not installed
- tox is a great tool for running our tests in separate virtual environments
If you have any questions, feel free to send me a message on Twitter and I’ll try my best to answer your questions or find someone else on the pytest core team who can help! 😃