Delayed Assertions in Python Testtools

I recently contributed a number of new features to the python testtools project, which together add up to a feature I like to call 'delayed assertions'. In this post I'll explain why I needed this feature, what it's good for, and how you can use it yourself.

Testtools: More than just a Unit Test Framework

The first sentence on the python testtools website says:

testtools is a set of extensions to the Python standard library's unit testing framework. These extensions have been derived from many years of experience with unit testing in Python and come from many different sources. testtools supports Python versions all the way back to Python 2.6.

Unfortunately, the language we use around testing is totally borked. Ask five developers what a 'unit test' is, and you'll get at least three different answers. However you define 'unit test', the additional features found in testtools are useful for all sorts of tests. In particular, testtools matcher objects are a fabulous alternative to custom assertion methods.

Matchers allow us to make more complex assertions than the standard assertion methods, and to do so without extending the TestCase class every time. The example from the testtools website shows this in action:

def test_response_has_bold(self):
    # The response has bold text.
    response = self.server.getResponse()
    self.assertThat(response, HTMLContains(Tag('bold', 'b')))
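
It's worth seeing how little is needed to write your own matcher. A matcher is just an object with a __str__ method and a match method that returns None on success, or a Mismatch describing the failure. Here's a minimal sketch; IsEven is a made-up example, but Matcher and Mismatch really do live in testtools.matchers:

from testtools.matchers import Matcher, Mismatch

class IsEven(Matcher):
    """Matches any even integer (a made-up example matcher)."""

    def __str__(self):
        # Used when describing the matcher in failure output.
        return 'IsEven()'

    def match(self, actual):
        # Return None on success, or a Mismatch describing the failure.
        if actual % 2 == 0:
            return None
        return Mismatch('%r is not an even number' % (actual,))

Once written, the matcher composes with assertThat like any other: self.assertThat(4, IsEven()).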

Quite often I'll find myself looking at code that makes several assertions on different, related variables. Yes, this is bad practice for 'real' unit tests, but it's often required for higher-level tests. For example, a test might run some external executable, and inspect its standard output, standard error, and return code. Such a test might look something like this:

def test_external_process(self):
    stdout, stderr, retcode = run_some_external_process()

    self.assertThat(stdout, Contains("The widgets are being frosted"))
    self.assertThat(stderr, Equals(""))
    self.assertThat(retcode, Equals(0))
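
run_some_external_process isn't shown here, but a plausible implementation using the standard library's subprocess module might look like this (the 'frost-widgets' executable is made up for illustration):

import subprocess

def run_some_external_process():
    # Launch the (made-up) executable and capture both output streams.
    process = subprocess.Popen(
        ['frost-widgets'],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    stdout, stderr = process.communicate()
    return stdout, stderr, process.returncode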

On the face of it, this seems reasonably straightforward. When we run these tests, we get the usual output:

Tests running...

Ran 1 test in 0.000s
OK

However, what happens when something goes wrong? We might see output like this instead:

Tests running...
======================================================================
FAIL: test_foo.MyTests.test_external_process
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_foo.py", line 13, in test_external_process
    self.assertThat(stdout, Contains("The widgets are being frosted"))
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 406, in assertThat
    raise mismatch_error
MismatchError: 'The widgets are being frosted' not in ''

Ran 1 test in 0.001s
FAILED (failures=1)

OK, we can tell that standard output was an empty string. But why? This should work! Now imagine that this test is being run on some remote machine, which may have a unique environment that may (or may not) be contributing to the failure. What a headache!

The problem here is that when the process's standard output does not contain "The widgets are being frosted", you'd still like to run the assertions on standard error and the return code. We have three inter-dependent variables, and we want to test each one in turn. There are a few ways around this (you could, for example, combine the three checks with MatchesListwise, as sketched below), but they're all rather inelegant.
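
For the record, the MatchesListwise version might look something like this sketch. It does report every mismatch, but it forces the three variables into an artificial list and buries the intent of each check:

from testtools.matchers import Contains, Equals, MatchesListwise

def test_external_process(self):
    stdout, stderr, retcode = run_some_external_process()

    # MatchesListwise checks each element against the matcher in the
    # same position, reporting all mismatches rather than just the first.
    self.assertThat(
        [stdout, stderr, retcode],
        MatchesListwise([
            Contains("The widgets are being frosted"),
            Equals(""),
            Equals(0),
        ]))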

The fundamental issue here is that a failed assertion prevents the rest of the test from running. Most of the time this is what we want, but sometimes we'd like the option to continue running further assertions before bailing.

Delayed Assertions

The delayed assertion feature introduces a new method on testtools.TestCase: expectThat. The docs say:

Help on method expectThat in module testtools.testcase:

expectThat(self, matchee, matcher, message='', verbose=False) unbound testtools.testcase.TestCase method
    Check that matchee is matched by matcher, but delay the assertion failure.

    This method behaves similarly to ``assertThat``, except that a failed
    match does not exit the test immediately. The rest of the test code will
    continue to run, and the test will be marked as failing after the test
    has finished.

    :param matchee: An object to match with matcher.
    :param matcher: An object meeting the testtools.Matcher protocol.
    :param message: If specified, show this message with any failed match.

This method is identical to assertThat, except that a failed assertion does not prevent the rest of the test from running. We can now rewrite the previous example as:

def test_external_process(self):
    stdout, stderr, retcode = run_some_external_process()

    self.expectThat(stdout, Contains("The widgets are being frosted"))
    self.expectThat(stderr, Equals(""))
    self.assertThat(retcode, Equals(0))

Now, when we run the tests, we get much more information:

Tests running...
======================================================================
FAIL: test_foo.MyTests.test_external_process
----------------------------------------------------------------------
Failed expectation: {{{
File "test_foo.py", line 13, in test_external_process
    self.expectThat(stdout, Contains("The widgets are being frosted"))
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 447, in expectThat
    postfix_content="MismatchError: " + str(mismatch_error)
MismatchError: 'The widgets are being frosted' not in ''
}}}

Failed expectation-1: {{{
File "test_foo.py", line 14, in test_external_process
    self.expectThat(stderr, Equals(""))
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 447, in expectThat
    postfix_content="MismatchError: " + str(mismatch_error)
MismatchError: '' != 'Error: could not load widget database!'
}}}

traceback-1: {{{
Traceback (most recent call last):
AssertionError: Forced Test Failure
}}}

Traceback (most recent call last):
  File "test_foo.py", line 15, in test_external_process
    self.assertThat(retcode, Equals(0))
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 406, in assertThat
    raise mismatch_error
MismatchError: 0 != 2

Ran 1 test in 0.018s
FAILED (failures=1)

You can clearly see two "failed expectation" content blocks: one for each failed expectThat call. The content of each block is identical to a standard testtools traceback. We also get a regular assertion failure from the final assertThat. Note that whenever at least one expectThat call fails, you will see this additional content object in the output:

traceback-1: {{{
Traceback (most recent call last):
AssertionError: Forced Test Failure
}}}

This is the testtools test runner manually failing the test because at least one expectThat call failed.
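
If you want to try this yourself, here's a complete module that reproduces the output above. The stand-in for run_some_external_process is contrived to fail in exactly the way shown:

from testtools import TestCase
from testtools.matchers import Contains, Equals

def run_some_external_process():
    # Contrived stand-in that produces the failing values shown above.
    return '', 'Error: could not load widget database!', 2

class MyTests(TestCase):

    def test_external_process(self):
        stdout, stderr, retcode = run_some_external_process()

        self.expectThat(stdout, Contains("The widgets are being frosted"))
        self.expectThat(stderr, Equals(""))
        self.assertThat(retcode, Equals(0))

Save it as test_foo.py and run it with 'python -m testtools.run test_foo'.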

Conclusion

The ability to continue running assertions after the first failure is a powerful enhancement to testtools. It might not be very useful for folks doing strict unit testing, but testtools gets used for so much more than that. I expect to see this pattern getting increased adoption inside autopilot, for example. This feature was released in version 0.9.35, which is already in Ubuntu Trusty Tahr!

