Making tests for your Python tools with pytest¶
TL;DR: Testing lets you make tools that tell you when they're breaking. Tests are both a fantastic way to develop and a fantastic resource for maintaining your tools.
You can click here to open this page in Google Colab, which will let you execute this tutorial!
What are unit tests?¶
Unit tests are small, simple functions which test (ideally) a single part of your tool. Tests check that all parts of your code work correctly ("pass") and can be run on different machines and architectures so you know your users won't have issues.
Do I have to make tests?¶
You don't have to, but it will make your package much more usable, and much easier to maintain and fix. It is recommended.
Seriously, do I have to?¶
I have lots of untested functions in my packages, and ultimately they end up being a pain further down the line. You don't have to test everything you write! But if you're going to have other people contribute to your code, or even use it extensively, you'll benefit from writing unit tests.
You can have a cute badge on your package that shows your tests are passing!
How should I make tests?¶
This is really up to you, but there are a few sorts of approaches:
- Build the code, and then test it. This lets you be flexible when designing, but can be a headache to actually test in the end.
- Build the tests, and then write the code. This is known as test driven development, and can be very powerful. You start with tests that should pass, and write code to make sure they do.
- Test as you go. This is a pretty good approach: as you write functions and tools, add tests for each new piece of functionality. This means that as you develop, you'll know whether your new functionality breaks your established functionality.
If you find that you have a bug in your code, or something is not behaving in the way that you expected, starting by writing a test that should pass (once the bug is fixed) can be a great way to fix that problem and ensure that it doesn't happen again.
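For example, here is a minimal sketch of that workflow, using a made-up mean function that was found to crash on an empty list (both the function and the test are hypothetical, written just for illustration). You write the test first, watch it fail, then change the function until the test passes, and the test sticks around so the bug can't quietly come back.
# A hypothetical regression-test sketch: mean and the test below are made up for illustration
def mean(values):
    if len(values) == 0:  # the fix: the original version raised ZeroDivisionError here
        return 0.0
    return sum(values) / len(values)

def test_mean_handles_empty_list():
    # Before the fix, mean([]) crashed; now it should return 0.0
    assert mean([]) == 0.0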
But really how should I write a test?¶
We'll do some demos below.
Tests and merging pull requests¶
Sometimes you or one of your users will open a pull request against your tool, or will try to add functionality. Having tests is an excellent way to ensure that features added by new contributors don't break your original tool. It's good practice to make sure that any pull request opened against your tool includes tests.
Directory structures¶
There are a few ways to structure your directories, but this is the one I like:
my-package/
... src/
....... my-package/
........... __init__.py
........... package.py
........... data/
............... package_data.fits
... tests/
....... __init__.py
....... test_package.py
....... data/
........... test_data.fits
Coverage¶
There are some tools (e.g. codecov) which can measure the "coverage" of your tests, i.e. how many lines of your code are actually executed when your tests run. Having high coverage means less room for error.
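If you want to check coverage locally, the pytest-cov plugin (installed separately) adds a --cov option to pytest; the package name below is just a placeholder for your own.
pip install pytest-cov
pytest --cov=my_package tests/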
Demonstration¶
Here we'll go over a quick demonstration of how to test some functions.
First, we're going to make a very simple function, which adds 10 to a value we pass in.
We're using the %%writefile Jupyter magic to write this function to a file called func.py.
%%writefile func.py
# Everything below here will get written
import numpy as np
def func(x):
    return x + 10
Overwriting func.py
Now we're going to write a test for this function. To do this we'll make a new file called test_func.py. This file will have our test in it. We'll:
- Import the function that we want to test
- Use the function
- Make assert statements that should be true
%%writefile test_func.py
# Everything below here will get written
from func import func
def test_func():
    y = func(10)
    assert y == 20
Overwriting test_func.py
Then we can execute the test. Usually you'd do this from the terminal command line, but here we'll do it from within the notebook so that you can see it in this tutorial.
To run the test we use pytest test_func.py, but we could also have run pytest . to run every test in this directory, or pytest test_func.py::test_func to run a specific test in the file.
! pytest test_func.py
============================= test session starts ==============================
platform darwin -- Python 3.8.7, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
rootdir: /Users/ch/repos/astronomy_workflow/docs/notebooks/2.0-testing
plugins: anyio-2.2.0
collected 1 item

test_func.py .                                                           [100%]

============================== 1 passed in 0.09s ===============================
When we run the test we see first some information about how the tests were run, and then a progress bar next to the file. A green dot means a passing test, a yellow result means a test that passes but with a warning, and a red F means a failing test. Let's try making a failing test and see what that looks like.
%%writefile test_func.py
# Everything below here will get written
from func import func
def test_func():
    y = func(10)
    assert y == 23
Overwriting test_func.py
! pytest test_func.py
============================= test session starts ==============================
platform darwin -- Python 3.8.7, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
rootdir: /Users/ch/repos/astronomy_workflow/docs/notebooks/2.0-testing
plugins: anyio-2.2.0
collected 1 item

test_func.py F                                                           [100%]

=================================== FAILURES ===================================
__________________________________ test_func ___________________________________

    def test_func():
        y = func(10)
>       assert y == 23
E       assert 20 == 23

test_func.py:5: AssertionError
=========================== short test summary info ============================
FAILED test_func.py::test_func - assert 20 == 23
============================= 1 failed in 0.07s ================================
This time the test failed, and pytest shows us exactly where it failed. Let's try writing a second test.
%%writefile test_func.py
# Everything below here will get written
from func import func
def test_func_int():
    y = func(10)
    assert y == 20

def test_func_str():
    y = func('christina')
    assert y == 20
Overwriting test_func.py
! pytest test_func.py
============================= test session starts ==============================
platform darwin -- Python 3.8.7, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
rootdir: /Users/ch/repos/astronomy_workflow/docs/notebooks/2.0-testing
plugins: anyio-2.2.0
collected 2 items

test_func.py .F                                                          [100%]

=================================== FAILURES ===================================
________________________________ test_func_str _________________________________

    def test_func_str():
>       y = func('christina')

test_func.py:8:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

x = 'christina'

    def func(x):
>       return x + 10
E       TypeError: can only concatenate str (not "int") to str

func.py:3: TypeError
=========================== short test summary info ============================
FAILED test_func.py::test_func_str - TypeError: can only concatenate str (not...
========================= 1 failed, 1 passed in 0.17s ==========================
This time, the first test passes, and the second one fails. If we wanted to inspect the variables at the point of failure, we could drop into the Python debugger by running pytest with the --pdb flag:
pytest --pdb test_func.py
Let's make this a passing test. This time, we'll tell the test that if we pass a string to func, it should raise an error.
%%writefile test_func.py
# Everything below here will get written
from func import func
import pytest
def test_func_int():
    y = func(10)
    assert y == 20

def test_func_str():
    with pytest.raises(TypeError):
        y = func('christina')
Overwriting test_func.py
! pytest test_func.py
============================= test session starts ==============================
platform darwin -- Python 3.8.7, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
rootdir: /Users/ch/repos/astronomy_workflow/docs/notebooks/2.0-testing
plugins: anyio-2.2.0
collected 2 items

test_func.py ..                                                          [100%]

============================== 2 passed in 0.08s ===============================
Now we've told pytest that if you pass a string to func, it should raise a TypeError.
We can write more elaborate tests for more functions, but this should be enough to get you started (one small sketch of a more elaborate test is included below). Once you've finished building tests, if you have your code archived on GitHub, you can set up GitHub Actions to automatically run these tests and check that your code is always passing!
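As a parting example, pytest's parametrize marker lets you run the same test over several inputs without copying and pasting it. Here is a minimal sketch reusing the func from above; the specific input values are just illustrative.
# test_func_parametrized.py: a sketch using pytest.mark.parametrize
import pytest
from func import func

@pytest.mark.parametrize("value, expected", [(10, 20), (0, 10), (-10, 0)])
def test_func_adds_ten(value, expected):
    assert func(value) == expected
Running pytest on this file runs test_func_adds_ten once per (value, expected) pair and reports each case separately.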