Pytest: The Python framework for unit testing
Pytest is a framework that makes building simple and scalable tests easy. Tests are expressive and readable, with no boilerplate code required.
Most functional tests follow the four steps (AAAC) mentioned earlier. Testing frameworks typically hook into your test's assertions so that they can provide information when an assertion fails. However, in some frameworks (unittest, for example) even a small set of tests requires a fair amount of boilerplate code.
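To make the phases concrete, here is a minimal, generic sketch (not part of the case study, and assuming AAAC stands for Arrange-Act-Assert-Cleanup, as in the pytest documentation):

def test_append():
    items = []              # Arrange: prepare the object under test
    items.append(1)         # Act: perform the behaviour being tested
    assert items == [1]     # Assert: verify the outcome
    items.clear()           # Cleanup: leave no state behind for other tests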
We'll be using the same files described in A brief introduction of the case study. Let's begin with the necessary tests that Point needs to implement.
For a list of all the command-line options:
$ pytest -h
usage: pytest [options] [file_or_dir] [file_or_dir] [...]

positional arguments:
  file_or_dir

general:
  -k EXPRESSION         only run tests which match the given substring expression. An expression is a python evaluatable expression where all names are substring-matched against test names and their parent classes. Example: -k 'test_method or test_other' matches all test functions and classes whose name contains 'test_method' or 'test_other', while -k 'not test_method' matches those that don't contain 'test_method' in their names. -k 'not test_method and not test_other' will eliminate the matches. Additionally keywords are matched to classes and functions containing extra names in their 'extra_keyword_matches' set, as well as functions which have names assigned directly to them. The matching is case-insensitive.
  -m MARKEXPR           only run tests matching given mark expression. example: -m 'mark1 and not mark2'.
  --markers             show markers (builtin, plugin and per-project ones).
  -x, --exitfirst       exit instantly on first error or failed test.
  --maxfail=num         exit after first num failures or errors.
  --strict-markers, --strict
                        markers not registered in the `markers` section of the configuration file raise errors.
  -c file               load configuration from `file` instead of trying to locate one of the implicit configuration files.
  --continue-on-collection-errors
                        Force test execution even if collection errors occur.
  --rootdir=ROOTDIR     Define root directory for tests. Can be relative path: 'root_dir', './root_dir', 'root_dir/another_dir/'; absolute path: '/home/user/root_dir'; path with variables: '$HOME/root_dir'.
  --fixtures, --funcargs
                        show available fixtures, sorted by plugin appearance (fixtures with leading '_' are only shown with '-v')
  --fixtures-per-test   show fixtures per test
  --import-mode={prepend,append}
                        prepend/append to sys.path when importing test modules, default is to prepend.
  --pdb                 start the interactive Python debugger on errors or KeyboardInterrupt.
  --pdbcls=modulename:classname
                        start a custom interactive Python debugger on errors. For example: --pdbcls=IPython.terminal.debugger:TerminalPdb
  --trace               Immediately break when running each test.
  --capture=method      per-test capturing method: one of fd|sys|no|tee-sys.
  -s                    shortcut for --capture=no.
  --runxfail            report the results of xfail tests as if they were not marked
  --lf, --last-failed   rerun only the tests that failed at the last run (or all if none failed)
  --ff, --failed-first  run all tests but run the last failures first. This may re-order tests and thus lead to repeated fixture setup/teardown
  --nf, --new-first     run tests from new files first, then the rest of the tests sorted by file mtime
  --cache-show=[CACHESHOW]
                        show cache contents, don't perform collection or tests. Optional argument: glob (default: '*').
  --cache-clear         remove all cache contents at start of test run.
  --lfnf={all,none}, --last-failed-no-failures={all,none}
                        which tests to run with no previously (known) failures.
  --sw, --stepwise      exit on test failure and continue from last failing test next time
  --stepwise-skip       ignore the first failing test but stop on the next failing test

reporting:
  --durations=N         show N slowest setup/test durations (N=0 for all).
  -v, --verbose         increase verbosity.
  -q, --quiet           decrease verbosity.
  --verbosity=VERBOSE   set verbosity. Default is 0.
  -r chars              show extra test summary info as specified by chars: (f)ailed, (E)rror, (s)kipped, (x)failed, (X)passed, (p)assed, (P)assed with output, (a)ll except passed (p/P), or (A)ll. (w)arnings are enabled by default (see --disable-warnings), 'N' can be used to reset the list. (default: 'fE').
  --disable-warnings, --disable-pytest-warnings
                        disable warnings summary
  -l, --showlocals      show locals in tracebacks (disabled by default).
  --tb=style            traceback print mode (auto/long/short/line/native/no).
  --show-capture={no,stdout,stderr,log,all}
                        Controls how captured stdout/stderr/log is shown on failed tests. Default is 'all'.
  --full-trace          don't cut any tracebacks (default is to cut).
  --color=color         color terminal output (yes/no/auto).
  --pastebin=mode       send failed|all info to bpaste.net pastebin service.
  --junit-xml=path      create junit-xml style report file at given path.
  --junit-prefix=str    prepend prefix to classnames in junit-xml output
  --result-log=path     DEPRECATED path for machine-readable result log.

collection:
  --collect-only, --co  only collect tests, don't execute them.
  --pyargs              try to interpret all arguments as python packages.
  --ignore=path         ignore path during collection (multi-allowed).
  --ignore-glob=path    ignore path pattern during collection (multi-allowed).
  --deselect=nodeid_prefix
                        deselect item (via node id prefix) during collection (multi-allowed).
  --confcutdir=dir      only load conftest.py's relative to specified dir.
  --noconftest          Don't load any conftest.py files.
  --keep-duplicates     Keep duplicate tests.
  --collect-in-virtualenv
                        Don't ignore tests in a local virtualenv directory
  --doctest-modules     run doctests in all .py modules
  --doctest-report={none,cdiff,ndiff,udiff,only_first_failure}
                        choose another output format for diffs on doctest failure
  --doctest-glob=pat    doctests file matching pattern, default: test*.txt
  --doctest-ignore-import-errors
                        ignore doctest ImportErrors
  --doctest-continue-on-failure
                        for a given doctest, continue to run after the first failure

test session debugging and configuration:
  --basetemp=dir        base temporary directory for this test run. (warning: this directory is removed if it exists)
  -V, --version         display pytest version and information about plugins.
  -h, --help            show help message and configuration info
  -p name               early-load given plugin module name or entry point (multi-allowed). To avoid loading of plugins, use the `no:` prefix, e.g. `no:doctest`.
  --trace-config        trace considerations of conftest.py files.
  --debug               store internal tracing debug information in 'pytestdebug.log'.
  -o OVERRIDE_INI, --override-ini=OVERRIDE_INI
                        override ini option with "option=value" style, e.g. `-o xfail_strict=True -o cache_dir=cache`.
  --assert=MODE         Control assertion debugging tools. 'plain' performs no assertion debugging. 'rewrite' (the default) rewrites assert statements in test modules on import to provide assert expression information.
  --setup-only          only setup fixtures, do not execute tests.
  --setup-show          show setup of fixtures while executing tests.
  --setup-plan          show what fixtures and tests would be executed but don't execute anything.

pytest-warnings:
  -W PYTHONWARNINGS, --pythonwarnings=PYTHONWARNINGS
                        set which warnings to report, see -W option of python itself.

logging:
  --no-print-logs       disable printing caught logs on failed tests.
  --log-level=LEVEL     level of messages to catch/display. Not set by default, so it depends on the root/parent log handler's effective level, where it is "WARNING" by default.
  --log-format=LOG_FORMAT
                        log format as used by the logging module.
  --log-date-format=LOG_DATE_FORMAT
                        log date format as used by the logging module.
  --log-cli-level=LOG_CLI_LEVEL
                        cli logging level.
  --log-cli-format=LOG_CLI_FORMAT
                        log format as used by the logging module.
  --log-cli-date-format=LOG_CLI_DATE_FORMAT
                        log date format as used by the logging module.
  --log-file=LOG_FILE   path to a file when logging will be written to.
  --log-file-level=LOG_FILE_LEVEL
                        log file logging level.
  --log-file-format=LOG_FILE_FORMAT
                        log format as used by the logging module.
  --log-file-date-format=LOG_FILE_DATE_FORMAT
                        log date format as used by the logging module.
  --log-auto-indent=LOG_AUTO_INDENT
                        Auto-indent multiline messages passed to the logging module. Accepts true|on, false|off or an integer.

[pytest] ini-options in the first pytest.ini|tox.ini|setup.cfg file found:
  markers (linelist): markers for test functions
  empty_parameter_set_mark (string): default marker for empty parametersets
  norecursedirs (args): directory patterns to avoid for recursion
  testpaths (args): directories to search for tests when no files or directories are given in the command line.
  usefixtures (args): list of default fixtures to be used with this project
  python_files (args): glob-style file patterns for Python test module discovery
  python_classes (args): prefixes or glob names for Python test class discovery
  python_functions (args): prefixes or glob names for Python test function and method discovery
  disable_test_id_escaping_and_forfeit_all_rights_to_community_support (bool): disable string escape non-ascii characters, might cause unwanted side effects (use at your own risk)
  console_output_style (string): console output: "classic", or with additional progress information ("progress" (percentage) | "count").
  xfail_strict (bool): default for the strict parameter of xfail markers when not given explicitly (default: False)
  enable_assertion_pass_hook (bool): Enables the pytest_assertion_pass hook. Make sure to delete any previously generated pyc cache files.
  junit_suite_name (string): Test suite name for JUnit report
  junit_logging (string): Write captured log messages to JUnit report: one of no|log|system-out|system-err|out-err|all
  junit_log_passing_tests (bool): Capture log information for passing tests to JUnit report
  junit_duration_report (string): Duration time to report: one of total|call
  junit_family (string): Emit XML for schema: one of legacy|xunit1|xunit2
  doctest_optionflags (args): option flags for doctests
  doctest_encoding (string): encoding used for doctest files
  cache_dir (string): cache directory path.
  filterwarnings (linelist): Each line specifies a pattern for warnings.filterwarnings. Processed after -W/--pythonwarnings.
  log_print (bool): default value for --no-print-logs
  log_level (string): default value for --log-level
  log_format (string): default value for --log-format
  log_date_format (string): default value for --log-date-format
  log_cli (bool): enable log display during test run (also known as "live logging").
  log_cli_level (string): default value for --log-cli-level
  log_cli_format (string): default value for --log-cli-format
  log_cli_date_format (string): default value for --log-cli-date-format
  log_file (string): default value for --log-file
  log_file_level (string): default value for --log-file-level
  log_file_format (string): default value for --log-file-format
  log_file_date_format (string): default value for --log-file-date-format
  log_auto_indent (string): default value for --log-auto-indent
  faulthandler_timeout (string): Dump the traceback of all threads if a test takes more than TIMEOUT seconds to finish. Not available on Windows.
  addopts (args): extra command line options
  minversion (string): minimally required pytest version

environment variables:
  PYTEST_ADDOPTS                   extra command line options
  PYTEST_PLUGINS                   comma-separated plugins to load during startup
  PYTEST_DISABLE_PLUGIN_AUTOLOAD   set to disable plugin auto-loading
  PYTEST_DEBUG                     set to enable debug tracing of pytest's internals

to see available markers type: pytest --markers
to see available fixtures type: pytest --fixtures
(shown according to specified file_or_dir or current dir if not specified; fixtures with leading '_' are only shown with the '-v' option)
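To pick out a few of the options above, some typical invocations might look like this (a hypothetical session):

$ pytest -k "get_x or get_y"   # run only tests whose names match the expression
$ pytest -x -l                 # stop at the first failure, showing locals in tracebacks
$ pytest --lf -v               # rerun only the last run's failures, verbosely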
Just like with unittest, we should follow all the steps mentioned in Test Driven Development (TDD) fundamentals, but to avoid repeating the already explained method, we'll jump right to the end result.
# Point.py
class Point:
    def __init__(self, x: float, y: float):
        self.x = x
        self.y = y

    def get_x(self) -> float:
        return self.x

    def get_y(self) -> float:
        return self.y

# test_point.py
import pytest
from src.Point import Point

@pytest.fixture
def setup():
    point = Point(3.0, 4.0)
    yield point

def test_get_x(setup):
    assert setup.get_x() == 3.0

def test_get_y(setup):
    assert setup.get_y() == 4.0

We can test these using the pytest command:
$ poetry run pytest tests/test_point.py
======================================= test session starts ========================================
platform linux -- Python 3.8.0, pytest-5.4.3, py-1.10.0, pluggy-0.13.1
rootdir: path_to_file/
collected 2 items

tests/test_point.py ..                                                               [100%]

======================================== 2 passed in 0.03s =========================================

With this example, we can further delve into some of the features pytest provides.
Firstly, we can observe the absence of different assert methods. Thanks to pytest's detailed assertion introspection, plain assert statements are all that is needed: there are no imports or classes, and no need to learn or remember unittest's many self.assert* methods. If you can write an expression that you expect to evaluate to True, pytest can test it for you.
What pytest provides is insight into the assert execution, without the need for specialized assert methods. If we were to change the previous example to the following code:
# test_point.py
import pytest
from src.Point import Point

@pytest.fixture
def setup():
    point = Point(3.0, 4.0)
    yield point

def test_get_x(setup):
    assert setup.get_x() == 0.0

def test_get_y(setup):
    assert setup.get_y() == 4.0

The resulting output would be:
================================================================================= test session starts ==================================================================================
platform linux -- Python 3.8.0, pytest-5.4.3, py-1.10.0, pluggy-0.13.1
rootdir: path_to_file/
collected 2 items
tests/test_point.py F. [100%]
======================================================================================= FAILURES =======================================================================================
______________________________________________________________________________________ test_get_x ______________________________________________________________________________________
setup = <src.Point.Point object at 0x7f9c55d9ac40>
def test_get_x(setup):
> assert setup.get_x() == 0.0
E assert 3.0 == 0.0
E + where 3.0 = <bound method Point.get_x of <src.Point.Point object at 0x7f9c55d9ac40>>()
E + where <bound method Point.get_x of <src.Point.Point object at 0x7f9c55d9ac40>> = <src.Point.Point object at 0x7f9c55d9ac40>.get_x
tests/test_point.py:13: AssertionError
=============================================================================== short test summary info ================================================================================
FAILED tests/test_point.py::test_get_x - assert 3.0 == 0.0
============================================================================= 1 failed, 1 passed in 0.06s ==============================================================================

Here we can easily read more information about what caused the assertion to fail.
As in unittest, we can handle the arrangement step with a set of fixtures. In the previous example, we saw a function preceded by the @pytest.fixture decorator. With this, we can use the function (setup in our case) as an argument of a test function. By default, setup is instantiated once for every test function, ensuring clean starting conditions.
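As a quick sketch (hypothetical names, not part of the case study), we can verify that a function-scoped fixture produces a fresh object for each test:

import pytest

_seen = []

@pytest.fixture
def fresh_point():
    return object()  # stands in for Point(3.0, 4.0)

def test_first(fresh_point):
    _seen.append(fresh_point)

def test_second(fresh_point):
    _seen.append(fresh_point)
    # each test received its own instance (relies on test_first running first)
    assert _seen[0] is not _seen[1]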
Let's see what else this feature has to offer:
import pytest
from src.Point import Point

points = [
    {"x": 3.0, "y": 4.0},
    {"x": 1.0, "y": 2.0},
]

@pytest.fixture(params=points)
def setup(request):
    point = Point(request.param["x"], request.param["y"])
    yield point
    # tearDown the point

@pytest.fixture
def set_origin():
    return Point(0, 0)

def test_get_x(setup):
    assert setup.get_x() in [3.0, 1.0]

def test_get_y(setup):
    assert setup.get_y() in [4.0, 2.0]

def test_distanceTo(setup, set_origin):
    assert pytest.approx(setup.distanceTo(set_origin), 1.0) in [5, 2.23]

def test_equals(setup, set_origin):
    assert set_origin != setup
    assert setup in [Point(_["x"], _["y"]) for _ in points]

def test_str(setup):
    assert f"{setup}" in ["Point [x:3.00, y:4.00]", "Point [x:1.00, y:2.00]"]

def test_is(setup):
    assert isinstance(setup, Point)

The arguments passed to the first fixture configure its scope and the parameters it receives.
But here we have a problem: setup and set_origin are invoked anew for every test in the module, and creating a different Point instance each time does not make much sense in this scenario. This can be avoided by altering the scope parameter.
In some tests, repeatedly instantiating the same object can introduce significant time delays (for example, when a connection to a server is needed).
Extending the previous example, we can add a scope="module" parameter to the @pytest.fixture invocation so that the setup and set_origin fixture functions are invoked only once per test module. The same instances will then be shared by all the test functions inside the module, resulting in a time improvement.
import pytest
from src.Point import Point

points = [
    {"x": 3.0, "y": 4.0},
    {"x": 1.0, "y": 2.0},
]

@pytest.fixture(scope="module", params=points)
def setup(request):
    point = Point(request.param["x"], request.param["y"])
    yield point
    # tearDown the point

@pytest.fixture(scope="module")
def set_origin():
    return Point(0, 0)

def test_get_x(setup):
    assert setup.get_x() in [3.0, 1.0]

def test_get_y(setup):
    assert setup.get_y() in [4.0, 2.0]

def test_distanceTo(setup, set_origin):
    assert pytest.approx(setup.distanceTo(set_origin), 1.0) in [5, 2.23]

def test_equals(setup, set_origin):
    assert set_origin != setup
    assert setup in [Point(_["x"], _["y"]) for _ in points]

def test_str(setup):
    assert f"{setup}" in ["Point [x:3.00, y:4.00]", "Point [x:1.00, y:2.00]"]

def test_is(setup):
    assert isinstance(setup, Point)

Despite this improvement, we should be wary of tests mutating the shared instance, since a change made in one test will be visible in the tests that follow.
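The risk can be demonstrated with a small sketch (hypothetical, not part of the case study):

import pytest

@pytest.fixture(scope="module")
def shared_list():
    return []

def test_mutates_fixture(shared_list):
    shared_list.append(1)
    assert shared_list == [1]

def test_sees_leftover_state(shared_list):
    # the same instance is reused, so the previous test's mutation is visible here
    assert shared_list == [1]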
Fixtures are created when first requested by a test, and are destroyed based on their scope:
- function: the default scope; the fixture is destroyed at the end of the test.
- class: the fixture runs once per test class and is destroyed during teardown of the last test in the class.
- module: the fixture runs once per module and is destroyed during teardown of the last test in the module.
- package: the fixture runs once per package and is destroyed during teardown of the last test in the package.
- session: the fixture is executed only once per session (every time you run pytest is considered one session) and is destroyed at the end of the test session.
When a test requests several fixtures, higher-scoped fixtures (such as session) are instantiated before lower-scoped fixtures (such as function or class).
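For instance (a sketch with hypothetical fixture names), a session-scoped fixture requested alongside a function-scoped one is instantiated first:

import pytest

@pytest.fixture(scope="session")
def connection():
    return object()  # stands in for an expensive shared resource

@pytest.fixture
def client(connection):
    # function-scoped, so it is set up after the session-scoped connection
    return {"conn": connection}

def test_client(client, connection):
    assert client["conn"] is connection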
One great advantage of pytest is its flexible fixture system: you can request one fixture, or several, from inside another fixture.
For example, you can do something like this:
import pytest

# Arrange
@pytest.fixture
def first_entry():
    return "a"

# Arrange
@pytest.fixture
def order(first_entry):
    return [first_entry]

# a test requesting order receives ["a"]
def test_order(order):
    assert order == ["a"]

And expect to get the list ["a"].
When a fixture requests another fixture, the other fixture is executed first. So if fixture a requests fixture b, fixture b will execute first, because a depends on b and can’t operate without it. Even if a doesn’t need the result of b, it can still request b if it needs to make sure it is executed after b.
For example:
import pytest
@pytest.fixture
def order():
return []
@pytest.fixture
def a(order):
order.append("a")
@pytest.fixture
def b(a, order):
order.append("b")
@pytest.fixture
def c(a, b, order):
order.append("c")
@pytest.fixture
def d(c, b, order):
order.append("d")
@pytest.fixture
def e(d, b, order):
order.append("e")
@pytest.fixture
def f(e, order):
order.append("f")
@pytest.fixture
def g(f, c, order):
order.append("g")
def test_order(g, order):
assert order == ["a", "b", "c", "d", "e", "f", "g"]Lets map the dependencies:
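Reading the fixture signatures above, each fixture's direct dependencies are:

order <- (nothing)
a     <- order
b     <- a, order
c     <- a, b, order
d     <- c, b, order
e     <- d, b, order
f     <- e, order
g     <- f, c, order

Resolving these dependencies depth-first, each fixture after everything it requests, yields the instantiation order a, b, c, d, e, f, g, which is exactly what test_order asserts.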
Pytest offers a list of decorators that can be used to apply metadata to test functions. Among them we can find:
- @pytest.mark.skip(*, reason=None)

- @pytest.mark.skipif(condition, *, reason=None)

  Example:

  import sys

  @pytest.mark.skipif(sys.version_info < (3, 7), reason="requires python3.7 or higher")
  def test_function():
      ...

- @pytest.mark.xfail(condition=None, *, reason=None, raises=None, run=True, strict=False)

- @pytest.mark.usefixtures(*names)

- @pytest.mark.filterwarnings(filter)

  Example:

  @pytest.mark.filterwarnings("ignore:.*usage will be deprecated.*:DeprecationWarning")
  def test():
      ...

- @pytest.mark.parametrize(...)

  Example:

  @pytest.mark.parametrize("test_input,expected", [("3+5", 8), ("2+4", 6), ("6*9", 42)])
  def test_eval(test_input, expected):
      assert eval(test_input) == expected

  Note that pytest reports each parameter set as a separate test, and that the last set, ("6*9", 42), will fail, since eval("6*9") evaluates to 54.
Pytest also provides the option to define custom marks, which can be given their own behaviour; more information can be found in the pytest documentation.
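As a sketch (the mark name slow here is just an illustration), a custom mark is registered in the configuration file and then applied like any built-in mark:

# pytest.ini
[pytest]
markers =
    slow: marks tests as slow (deselect with '-m "not slow"')

# test module
import pytest

@pytest.mark.slow
def test_heavy_computation():
    ...

Running pytest -m "not slow" would then deselect the marked test.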
By default, pytest searches for all test_*.py and *_test.py files inside the current directory and, recursively, its subdirectories.
A testpaths variable can also be configured, or explicit paths can be passed as command-line arguments.
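For instance, a minimal configuration (assuming the tests live in a tests/ directory) could look like this:

# pytest.ini
[pytest]
testpaths = tests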
Inside those files, pytest will run all test-prefixed functions defined outside of classes, as well as those inside Test-prefixed classes.
# alternative to test_point.py
import pytest
from src.Point import Point

class TestPoint:
    @pytest.fixture(autouse=True)
    def setup(self):
        point = Point(3.0, 4.0)
        yield point

    # tests receive the yielded Point by requesting the fixture by name
    def test_get_x(self, setup):
        assert setup.get_x() == 3.0

    def test_get_y(self, setup):
        assert setup.get_y() == 4.0

Tests can be grouped in a class to:
- Help with test organization
- Define fixtures with class scope
- Apply marks at the class level, and consequently to all of its test functions (see the sketch below)
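As a closing sketch (hypothetical test names), a mark applied at class level is inherited by every test inside the class:

import pytest

@pytest.mark.filterwarnings("ignore::DeprecationWarning")
class TestGroupedExample:
    # both tests inherit the class-level filterwarnings mark
    def test_one(self):
        assert True

    def test_two(self):
        assert True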