Using Simple Code Coverage in Python

  1. Practicing Python: Quick ISBN-10 Validation
  2. Basic ISBN-10 Validation in Python: Part 2
  3. ISBN Validation: Adding Simple Python Unit Tests
  4. Reliable ISBN-13 Validation
  5. ISBN Validator: Making it General Purpose
  6. Simple ISBN-10 to ISBN-13 Conversion
  7. Testing Exceptions in Python with unittest
  8. Using Simple Code Coverage in Python

I think that I’ve done a good job writing my unit tests, but how do I know for sure? The goal of unit tests is to make sure that code properly fulfills requirements. All of the code. So, basically I want to make sure that every line of code I have written is doing what it’s supposed to do. Enter the concept of code coverage.

The basic idea behind code coverage is to run our tests and then measure how much of our code actually executes. This lets us ensure that our tests are “exercising” all of the code that we’ve written. If they aren’t, we should write more tests! It should be noted, though, that in a complex, real-world application, tests very rarely exercise ALL of the code. In any case, code coverage is a great concept to understand because it can reveal “blind spots” in a test plan.

How do we do this in Python?

  1. Install the Coverage package using your favorite package manager.
  2. Run your unit tests while monitoring coverage.
  3. Look at the report and fix any “gaps” that you see by writing more or modifying existing tests.
  4. Go to Step #2.

I am a fan of Miniconda, and this is what I used to install the coverage package. pip works just as well. Also, many Python IDEs like PyCharm and others have support for coverage built in.

$ pip install coverage
Collecting coverage
  Downloading coverage-5.3.1-cp38-cp38-macosx_10_9_x86_64.whl (208 kB)
     |████████████████████████████████| 208 kB 4.3 MB/s
Installing collected packages: coverage
Successfully installed coverage-5.3.1
$

Once the package is installed, it’s just a matter of running it with our test suite. On the command line, we actually run the coverage application and then pass in the Python command line that we’d like to execute; this lets our unit tests run under the “umbrella” of the coverage application. Here’s what that looks like:

$ coverage run -m unittest tests.test_small_exercises.TestISBNValidator
....
----------------------------------------------------------------------
Ran 4 tests in 0.001s

OK
$ 

So, where are our results? It turns out that they’ve been stored in our working directory in a “database” file called .coverage. However, the coverage application gives us a couple of good ways to view them. The easiest way is to just run a report using the coverage report command from the directory where the .coverage file resides. Another cool way is to run coverage html, which will generate an HTML report, complete with source code, that you can view in a web browser. For now though, we’ll stick with the “simple” approach:
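Incidentally, the same run-and-report cycle can be driven from Python through coverage’s API (Coverage, start(), stop(), save(), and report()). Here’s a minimal, self-contained sketch; the throwaway module cov_demo.py is invented purely so there’s something to measure:

```python
# Minimal sketch of coverage's Python API; cov_demo.py is a made-up
# throwaway module created here so the example is self-contained.
import io
import pathlib
import runpy

import coverage  # pip install coverage

# Write a tiny module to measure.
demo = pathlib.Path("cov_demo.py")
demo.write_text("def double(x):\n    return 2 * x\n\nprint(double(21))\n")

cov = coverage.Coverage(include=[str(demo.resolve())])
cov.start()
runpy.run_path(str(demo))  # run the module under coverage's "umbrella"
cov.stop()
cov.save()                 # writes the .coverage data file

buf = io.StringIO()
total = cov.report(file=buf)  # same table that `coverage report` prints
print(buf.getvalue())
```

Here report() also returns the total coverage percentage, which is handy for failing a CI build when coverage drops below a threshold.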

$ coverage report
Name                            Stmts   Miss  Cover
---------------------------------------------------
small_exercises.py                 59      1    98%
tests/__init__.py                   0      0   100%
tests/test_small_exercises.py      26      0   100%
---------------------------------------------------
TOTAL                              85      1    99%
$ 

99% coverage is pretty awesome! We should be happy, but what if we’re perfectionists? Where is that one line that we are missing? We can run coverage report with the -m option and it will tell us exactly which lines are missing:

$ coverage report -m
Name                            Stmts   Miss  Cover   Missing
-------------------------------------------------------------
small_exercises.py                 59      1    98%   57
tests/__init__.py                   0      0   100%
tests/test_small_exercises.py      26      0   100%
-------------------------------------------------------------
TOTAL                              85      1    99%
$ 

Now we can see that line 57 in our small_exercises.py file is not being executed in any of our tests. If we look at our source code, line 57 corresponds to this:

    def calculate_isbn_13_checkdigit(isbn13_first12_numbers: str) -> str:
        if len(isbn13_first12_numbers) != 12 or not isbn13_first12_numbers.isnumeric():
            raise ISBNValidator.FormatException("Improper format in first 12 numbers of ISBN13")
        checksum = 0

We have no test code that is exercising our format error handling in the calculate_isbn_13_checkdigit() method! There are a couple of ways we can handle this, but the most straightforward is to introduce some direct unit tests for our helper method. Here’s what the code looks like when we add this additional test method:

class TestISBNValidator(TestCase):
...

    test_data_isbn_13_checkdigit = {
        "978186197876": "9",
        "978156619909": "4",
        "978129210176": "7"
    }
...
    def test_calculate_isbn_13_checkdigit(self):
        # Check valid conversions
        for (first_12_digits, checkdigit) in TestISBNValidator.test_data_isbn_13_checkdigit.items():
            result = ISBNValidator.calculate_isbn_13_checkdigit(first_12_digits)
            self.assertEqual(result, checkdigit)
        # Test invalid input - Improper length
        with self.assertRaises(ISBNValidator.FormatException):
            ISBNValidator.calculate_isbn_13_checkdigit("123")
        # Test invalid input - Code with dashes
        with self.assertRaises(ISBNValidator.FormatException):
            ISBNValidator.calculate_isbn_13_checkdigit("978-1-86197-")

And now, when we run our unit tests with coverage, what do we get?

$ coverage run -m unittest tests.test_small_exercises.TestISBNValidator
.....
----------------------------------------------------------------------
Ran 5 tests in 0.001s

OK
$ coverage report -m
Name                            Stmts   Miss  Cover   Missing
-------------------------------------------------------------
small_exercises.py                 59      0   100%
tests/__init__.py                   0      0   100%
tests/test_small_exercises.py      35      0   100%
-------------------------------------------------------------
TOTAL                              94      0   100%
$ 

Full coverage. Woo hoo! I do want to mention that complete code coverage != perfect or well-functioning code. There can still be problems with logic even though every line executes during testing. However, having this level of coverage should give us confidence that, in most situations, our ISBN capabilities will perform admirably.
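To see how 100% line coverage can coexist with a bug, consider this contrived function (not from the series’ code):

```python
# Contrived example: two tests reach 100% line coverage, yet a logic bug remains.
def clamp_percentage(value: int) -> int:
    # Bug: values below 0 should be clamped to 0, but never are.
    if value > 100:
        return 100
    return value

# These two assertions execute every line of clamp_percentage...
assert clamp_percentage(150) == 100
assert clamp_percentage(50) == 50
# ...but clamp_percentage(-5) still returns -5 instead of 0.
```

Coverage tells us which lines ran, not whether every input was considered; that part is still on us.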

Here is the current state of this exercise.
