How to get coverage reporting when testing a pytest plugin?


Solution 1

Instead of using the pytest-cov plugin, use coverage to run pytest:

coverage run -m pytest ...

That way, coverage will be started before pytest.
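The order matters because a module's top-level statements (imports, class definitions, constants) execute once, at import time; anything started afterwards never sees them. A minimal pure-stdlib sketch of this effect, using `sys.settrace` as a stand-in for a coverage tracer (the `demo_plugin` module and its contents are hypothetical):

```python
import sys
import tempfile
import textwrap
from pathlib import Path

# A throwaway "plugin" module with one module-level line and one function.
src = textwrap.dedent("""
    CONSTANT = 42          # module-level: executes at import time

    def func():
        return CONSTANT    # executes only when called
""")

tmp = Path(tempfile.mkdtemp())
(tmp / "demo_plugin.py").write_text(src)
sys.path.insert(0, str(tmp))

import demo_plugin            # top-level lines run HERE, before any tracing

traced = set()

def tracer(frame, event, arg):
    # record every line number executed in demo_plugin.py
    if frame.f_code.co_filename.endswith("demo_plugin.py"):
        traced.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)          # measurement starts only now
demo_plugin.func()            # only the function body is observed
sys.settrace(None)

# Line 2 (CONSTANT = 42) ran but was never traced -- a "false miss";
# line 5 (return CONSTANT) ran after tracing started, so it was seen.
assert 2 not in traced
assert 5 in traced
```

This is exactly why `pytest-cov` under-reports for a pytest plugin: pytest imports the plugin during start-up, before the coverage machinery is active, whereas `coverage run -m pytest` starts measuring before the import happens.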

Solution 2

You can achieve what you want without pytest-cov.


❯ coverage run --source=<package> --module pytest --verbose <test-files-dirs> && coverage report --show-missing

Or, shorter:

❯ coverage run --source=<package> -m pytest -v <test-files-dirs> && coverage report -m

Example (for your directory structure):

❯ coverage run --source=plugin_module -m pytest -v tests && coverage report -m
======================= test session starts ========================
platform darwin -- Python 3.9.4, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /Users/johndoe/.local/share/virtualenvs/plugin_module--WYTJL20/bin/python
cachedir: .pytest_cache
rootdir: /Users/johndoe/projects/plugin_module, configfile: pytest.ini
collected 1 item

tests/test_my_plugin.py::test_my_plugin PASSED               [100%]

======================== 1 passed in 0.04s =========================
Name                                 Stmts   Miss  Cover   Missing
------------------------------------------------------------------
plugin_module/supporting_module.py       4      0    100%
plugin_module/plugin.py                  6      0    100%
------------------------------------------------------------------
TOTAL                                   21      0    100%

For an even nicer output, you can use:

❯ coverage html && open htmlcov/index.html

[coverage HTML report screenshot]
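If you drive the build with tox, as in the question, the same command can replace the `pytest-cov` options in `tox.ini`. A sketch, assuming the question's layout (env name and deps are assumptions; note this measures the source tree, so paths may need adjusting for an installed-package or `src/` layout):

```ini
[tox]
envlist = py3

[testenv]
deps =
    pytest
    coverage
commands =
    coverage run --source=plugin_module -m pytest -v tests
    coverage report -m
```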


Documentation

❯ coverage -h
❯ pytest -h

coverage

run -- Run a Python program and measure code execution.

-m, --module --- The given argument is an importable Python module, not a script path, to be run as 'python -m' would run it.

--source=SRC1,SRC2, --- A list of packages or directories of code to be measured.

report -- Report coverage stats on modules.

-m, --show-missing --- Show line numbers of statements in each module that weren't executed.

html -- Create an HTML report.

pytest

-v, --verbose -- increase verbosity.

Author by

Thomas Thorogood

Updated on June 18, 2022

Comments

  • Thomas Thorogood
    Thomas Thorogood almost 2 years

    Context

    I am updating an inherited repository which has poor test coverage. The repo itself is a pytest plugin. I've changed the repo to use tox along with pytest-cov, and converted the "raw" tests to use pytester as suggested in the pytest documentation when testing plugins.

    The testing and tox build, etc. works great. However, the coverage is reporting false misses with things like class definitions, imports, etc. This is because the code itself is being imported as part of pytest instantiation, and isn't getting "covered" until the testing actually starts.

    I've read pytest docs, pytest-cov and coverage docs, and tox docs, and tried several configurations, but to no avail. I've exhausted my pool of google keyword combinations that might lead me to a good solution.

    Repository layout

    pkg_root/
        .tox/
            py3/
                lib/
                    python3.7/
                        site-packages/
                            plugin_module/
                                supporting_module.py
                                plugin.py
                                some_data.dat
        plugin_module/
            supporting_module.py
            plugin.py
            some_data.dat
        tests/
            conftest.py
            test_my_plugin.py
        tox.ini
        setup.py
        
    

    Some relevant snippets with commentary:

    tox.ini

    [pytest]
    addopts = --cov={envsitepackagesdir}/plugin_module --cov-report=html
    testpaths = tests
    

    This configuration gives me an error that no data was collected; no htmlcov is created in this case.

    If I just use --cov, I get (expected) very noisy coverage, which shows the functional hits and misses, but with the false misses reported above for imports, class definitions, etc.

    conftest.py

    pytest_plugins = ['pytester']  # Entire contents of file!
    

    test_my_plugin.py

    def test_a_thing(testdir):
        testdir.makepyfile(
            """
                def test_that_fixture(my_fixture):
                    assert my_fixture.foo == 'bar'
            """
        )
        result = testdir.runpytest()
        result.assert_outcomes(passed=1)
    

    How can I get an accurate report? Is there a way to defer the plugin loading until it's demanded by the pytester tests?

  • Thomas Thorogood
    Thomas Thorogood almost 4 years
    Oh, thanks. I'll give this some looking into. It didn't work out of the box, but that sounds like a much better solution than my own (stackoverflow.com/a/62224965/677283) so is probably worth getting right.
  • Thomas Thorogood
    Thomas Thorogood almost 4 years
    Yes, I'm deleting my own answer. It's terrible and so very brittle, and wreaks havoc on lots of things! I'll spend more time on this next week and try to get it working this way.
  • Thomas Thorogood
    Thomas Thorogood almost 4 years
    Yes, I was able to get this working with commands = coverage run --source=plugin_module -m pytest
  • Cees Timmerman
    Cees Timmerman over 3 years
    That creates a .coverage SQLite database file, which can be read using coverage report.