Making the most of Subunit

This post is an introduction to subunit, a tool for serializing test results to a binary format. We plan on adopting it at Canonical, and this is the first post in a series.

The Problem

We have a lot of tests that we need to run on a regular basis. These include:

  • Unit tests.
  • Integration tests between different components.
  • Acceptance tests for UI components.
  • Memory leak analysis tests.
  • Performance tests, including things like FPS, power draw, wakeups etc.

Different test categories are run on different machines at different times. For example, unit tests can be run inside a virtual machine, and are run when a package is built. On the other hand, performance tests must be run on "real" hardware, and are typically run on a regular basis.

Different test categories are also written using totally different technologies. Unit tests are typically written using either google-test or python's testtools, while power monitoring tests are typically implemented as bash scripts.

Given the huge gulf between all these different scenarios, how do we manage the test result information? We'd like to be able to show pass/fail rates, as well as other associated information, on the QA dashboard. This means we need the result information to be in some standard format that we can interpret later.

Current Situation

Currently we use junitxml as a standard data store for test results. If you haven't encountered junitxml before, it's about the simplest possible way to encode test results in XML. It looks a lot like this:

<testsuite errors="4" failures="0" name="" tests="7" time="55.051">
    <testcase classname="sudoku_app.tests.test_sudoku.TestMainWindow" name="test_settings_tab(with mouse)" time="9.320">
        <error type="testtools.testresult.real._StringException">
        </error>
    </testcase>
    <testcase classname="sudoku_app.tests.test_sudoku.TestMainWindow" name="test_new_game_button(with mouse)" time="7.985">
        <error type="testtools.testresult.real._StringException">
            Traceback (most recent call last):
              File "/usr/lib/python2.7/dist-packages/sudoku_app/tests/test_sudoku.py", line 215, in test_settings_tab
                self.main_view.switch_to_tab("settingsTab")
              File "/usr/lib/python2.7/dist-packages/ubuntuuitoolkit/emulators.py", line 172, in switch_to_tab
                return self.switch_to_tab_by_index(tab.index)
              File "/usr/lib/python2.7/dist-packages/ubuntuuitoolkit/emulators.py", line 142, in switch_to_tab_by_index
                'The tab with index {0} was not selected.'.format(index))
            ToolkitEmulatorException: The tab with index 2 was not selected.
        </error>
    </testcase>
    <testcase classname="sudoku_app.tests.test_sudoku.TestMainWindow" name="test_best_scores_tab(with mouse)" time="4.459"/>
    <testcase classname="sudoku_app.tests.test_sudoku.TestMainWindow" name="test_enter_and_cancel(with mouse)" time="7.384"/>
    <testcase classname="sudoku_app.tests.test_sudoku.TestMainWindow" name="test_enter_and_clear_number(with mouse)" time="8.104"/>
    <testcase classname="sudoku_app.tests.test_sudoku.TestMainWindow" name="test_hint_button(with mouse)" time="9.084">
        <error type="testtools.testresult.real._StringException">
        </error>
    </testcase>
    <testcase classname="sudoku_app.tests.test_sudoku.TestMainWindow" name="test_about_tab(with mouse)" time="8.702">
        <error type="testtools.testresult.real._StringException">
        </error>
    </testcase>
</testsuite>

There are many reasons to use junitxml: It's a very simple format, and it's supported by a huge number of tools, including jenkins and all the test runners we care about.

As you can see, it's possible to store the test status ("passed", "failed", or "skipped"), along with some textual information - for failures, we include the output from the test (note: I've trimmed the actual output to make the XML snippet above more readable).

The Problems

junitxml has served us well for a very long time. However, a growing number of issues with it have caused us to start looking for a new result storage format.

Limited Test Statuses

Several test tools (including testtools) add the concept of an "expected failure" or an "unexpected success" to the list of test statuses. Currently, we have to translate these into "success" and "failure" respectively. This loss of information is annoying, to say the least.
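
To make this concrete, here's a minimal sketch (not taken from any of our real suites) of a test that produces the "expected failure" status, using testtools' expectFailure helper. When the result is written out as junitxml, that richer status is flattened into a plain "success":

from testtools import TestCase


class ExpectedFailureExample(TestCase):

    def test_known_broken_behaviour(self):
        # expectFailure asserts that the given check fails. testtools
        # reports this as an "expected failure" (xfail), but junitxml has
        # no such status, so the distinction is lost in translation.
        self.expectFailure(
            "demonstration of a known failure",
            self.assertEqual, 5, 2 * 2)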

Lack of Test Attachments

Almost all our python-based test suites are built on top of testtools, including autopilot. One of the many things testtools gives us is the ability for a test author to add arbitrary content attachments to a test case.

For example, if you're running a test that runs some command-line application, and expects it to exit with a return code of 0, you might run something like this:

import subprocess

from testtools import TestCase


class MyTests(TestCase):

    def test_ls_exits_zero(self):
        proc = subprocess.Popen(
            ["ls", "/some/nonexistant/file"],
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        proc.communicate()  # wait for ls to exit so returncode is populated

        self.assertEqual(0, proc.returncode)

Of course, in reality the test would be a little bit more complex than this. If we run this, we get the following output:

Tests running...
======================================================================
FAIL: test_foo.MyTests.test_ls_exits_zero
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_foo.py", line 16, in test_ls_exits_zero
    self.assertEqual(0, proc.returncode)
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 322, in assertEqual
    self.assertThat(observed, matcher, message)
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 417, in assertThat
    raise MismatchError(matchee, matcher, mismatch, verbose)
MismatchError: 0 != 2

Ran 1 test in 0.006s
FAILED (failures=1)

Now imagine that you're looking at this output on a jenkins instance somewhere: you have no idea why your command failed. Not only do you lack the information you need to debug the problem at hand, but often the failure is due to some specific environment setup, contained on a virtual machine that has long since been destroyed. Clearly, this is not a good situation!

Luckily, testtools allows you to attach data to the test result. Now we can update that test to look like this:

import subprocess

from testtools import TestCase
from testtools.content import text_content


class MyTests(TestCase):

    def test_ls_exits_zero(self):
        proc = subprocess.Popen(
            ["ls", "/some/nonexistant/file"],
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        stdout, stderr = proc.communicate()
        self.addDetail("process-stdout", text_content(stdout))
        self.addDetail("process-stderr", text_content(stderr))

        self.assertEqual(0, proc.returncode)

This gives us a much more helpful output:

Tests running...
======================================================================
FAIL: test_foo.MyTests.test_ls_exits_zero
----------------------------------------------------------------------
Empty attachments:
  process-stdout

process-stderr: {{{ls: cannot access /some/nonexistant/file: No such file or directory}}}

Traceback (most recent call last):
  File "test_foo.py", line 20, in test_ls_exits_zero
    self.assertEqual(0, proc.returncode)
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 322, in assertEqual
    self.assertThat(observed, matcher, message)
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 417, in assertThat
    raise MismatchError(matchee, matcher, mismatch, verbose)
MismatchError: 0 != 2

Ran 1 test in 0.005s
FAILED (failures=1)

Now at last we have the information we need, contained in the test result. However, if we show that result information in junitxml, we get:

<testsuite errors="0" failures="1" name="" tests="1" time="0.003">
<testcase classname="test_foo.MyTests" name="test_ls_exits_zero" time="0.003">
<failure type="testtools.testresult.real._StringException">_StringException: Empty attachments:
  process-stdout

process-stderr: {{{ls: cannot access /some/nonexistant/file: No such file or directory}}}

Traceback (most recent call last):
  File "/home/thomi/Documents/blog/pelican/tech-foo/content/test_foo.py", line 20, in test_ls_exits_zero
    self.assertEqual(0, proc.returncode)
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 322, in assertEqual
    self.assertThat(observed, matcher, message)
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 417, in assertThat
    raise MismatchError(matchee, matcher, mismatch, verbose)
MismatchError: 0 != 2

</failure>
</testcase>
</testsuite>

As you can see, these data attachments are all lumped together in the XML, and become very hard to parse. This is annoying for textual content, but completely unusable for binary content: imagine trying to attach core dumps or screenshots to a test result (in that case you just get the file name and no content at all).
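
For reference, this is roughly what attaching binary content looks like on the testtools side - the data is easy to attach, junitxml just can't carry it. This is only a sketch: the screenshot path and content type are illustrative, not something our suites actually use.

from testtools import TestCase
from testtools.content import content_from_file
from testtools.content_type import ContentType


class ScreenshotExample(TestCase):

    def test_attaches_a_screenshot(self):
        # In a real suite the screenshot would be captured from the
        # application under test; the path here is purely illustrative.
        # content_from_file reads the file lazily, when the result is
        # serialised.
        self.addDetail(
            "screenshot",
            content_from_file(
                "/tmp/example-screenshot.png",
                ContentType("image", "png")))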

File Handling

Since we cannot (easily) attach binary files to a junitxml result, we end up introducing workarounds to make sure we get the data we want. For example, when running our autopilot test suite, we need to instruct jenkins to retrieve several files from the machine that ran the tests, including:

  • The junitxml file that contains the test results (jenkins reads this and gives us our nice test result trend graphs).
  • Any crash dumps from /var/crash/. These get retraced, and tell us what went wrong during the test run.
  • Several log files from /var/log/.
  • Any screenshots that were taken as part of the test.
  • Video recordings of failed tests.
  • Captured output from top and several other performance monitoring applications.
  • LTTNG trace logs, if the application supports tracing.

Ideally jenkins would grab these files and associate each with a test that ran. However, jenkins can only archive build artifacts for a job run, not for an individual test. What's more, jenkins is not the final destination for these files: All this information needs to end up on the Ubuntu QA dashboard. This means copying multiple files several times over... what a pain!

Crashed Tests

Tests occasionally crash or hang. When this happens, we don't get a junitxml file at all, since the file only gets written when the test runner finishes (the file contains counts of errors and failures, which can't be calculated until the test run has ended).

It would be nice if, when a test runner crashes, we could still see results for the tests that had already run.

What we Want

Ideally, we're looking for a result format that:

  • Is a streaming format, so a crashed or hung test runner doesn't prevent us from reading the results of the tests that have already run.
  • Contains the full set of test statuses that testtools supports.
  • Allows us to embed arbitrary data in the stream - both binary and text-based.

Enter Subunit

Thankfully, subunit meets all these needs, and a few more. This blog post will outline the basics of how to create and consume a subunit result stream.

Creating a Subunit Result Stream

There are several different ways to generate subunit:

Use the subunit.run test runner

If your test suite is already based on python's unittest library (or a compatible library like testtools), then generating subunit output is simple: run your tests with python -m subunit.run instead of python -m unittest or python -m testtools.run.

For example, if you have a file named test_foo.py that contains the following:

from unittest import TestCase


class MySimpleTests(TestCase):

    def test_something_that_passes(self):
        self.assertEqual(4, 2*2)

    def test_something_that_fails(self):
        self.assertEqual(5, 2*2)

You can use the subunit runner to run these tests like so:

python -m subunit.run test_foo

However, doing so will emit the subunit result stream to stdout. This is a binary format, so it's not very pretty to look at. If you want something human readable, you can convert it to a more agreeable format (see below).

Use autopilot

Autopilot now supports producing subunit output; simply add the -f subunit option to the autopilot command line. For example:

autopilot run -f subunit test_foo

Note: At the time of writing, this support has landed in autopilot trunk, but has not yet been released to Ubuntu Trusty. That should happen shortly.

Consuming a Subunit Result Stream

We've covered how to create a subunit result stream, now let's look at how to consume it.

Converting to a Different Format

If you want to convert subunit to some other standard format, there's probably a conversion tool already. For example, you can get junitxml back out of subunit (this is useful for being able to upgrade from junitxml to subunit in stages) by piping your subunit stream through the subunit2junitxml tool, like so:

python -m subunit.run test_foo | subunit2junitxml

Of course, this will re-introduce all the problems we already have with the junitxml format, but it demonstrates the use of the subunit conversion tools.

Extracting Meta-data

There are several scripts that are bundled with subunit that extract meta-data from a subunit stream. For example, if you just want to get a list of all the tests present in the subunit stream, you can pipe the stream through subunit-ls, like so:

python -m subunit.run test_foo | subunit-ls

This will print all the test ids to stdout, one per line. Similarly, we can extract the aggregate statistics from a subunit stream by using the subunit-stats tool. At the time of writing, this is what the autopilot unit test suite looks like, when run through this tool:

$ python -m subunit.run discover autopilot.tests.unit | subunit-stats
Total tests:     183
Passed tests:    182
Failed tests:      0
Skipped tests:     1
Seen tags:
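
If you'd rather gather those aggregate numbers from Python than from the command line, testtools ships a StreamSummary result that can be fed from a subunit stream. Here's a minimal sketch, assuming the stream arrives on stdin just like it does for the command-line tools above:

#!/usr/bin/env python
# A sketch: summarise a subunit stream from Python using StreamSummary.
# Usage: python -m subunit.run test_foo | python summarise.py

import sys

from subunit.v2 import ByteStreamToStreamResult
from testtools import StreamSummary


if __name__ == '__main__':
    summary = StreamSummary()
    summary.startTestRun()
    # Parse the binary stream and feed each event to the summary object.
    ByteStreamToStreamResult(source=sys.stdin).run(summary)
    summary.stopTestRun()
    print("Tests run: %d" % summary.testsRun)
    print("Successful: %s" % summary.wasSuccessful())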

Writing Your Own

Writing your own consumer in python is pretty trivial. The main piece of code that's required is to implement the testtools.StreamResult interface. This interface looks like this:

class StreamResult(object):
    """A test result for reporting the activity of a test run.

    General concepts
    ----------------

    StreamResult is built to process events that are emitted by tests during a
    test run or test enumeration. The test run may be running concurrently, and
    even be spread out across multiple machines.

    All events are timestamped to prevent network buffering or scheduling
    latency causing false timing reports. Timestamps are datetime objects in
    the UTC timezone.

    A route_code is a unicode string that identifies where a particular test
    run. This is optional in the API but very useful when multiplexing multiple
    streams together as it allows identification of interactions between tests
    that were run on the same hardware or in the same test process. Generally
    actual tests never need to bother with this - it is added and processed
    by StreamResult's that do multiplexing / run analysis. route_codes are
    also used to route stdin back to pdb instances.

    The StreamResult base class does no accounting or processing, rather it
    just provides an empty implementation of every method, suitable for use
    as a base class regardless of intent.
    """

    def startTestRun(self):
        """Start a test run.

        This will prepare the test result to process results (which might imply
        connecting to a database or remote machine).
        """

    def stopTestRun(self):
        """Stop a test run.

        This informs the result that no more test updates will be received. At
        this point any test ids that have started and not completed can be
        considered failed-or-hung.
        """

    def status(self, test_id=None, test_status=None, test_tags=None,
        runnable=True, file_name=None, file_bytes=None, eof=False,
        mime_type=None, route_code=None, timestamp=None):
        """Inform the result about a test status.

        :param test_id: The test whose status is being reported. None to
            report status about the test run as a whole.
        :param test_status: The status for the test. There are two sorts of
            status - interim and final status events. As many interim events
            can be generated as desired, but only one final event. After a
            final status event any further file or status events from the
            same test_id+route_code may be discarded or associated with a new
            test by the StreamResult. (But no exception will be thrown).

            Interim states:
              * None - no particular status is being reported, or status being
                reported is not associated with a test (e.g. when reporting on
                stdout / stderr chatter).
              * inprogress - the test is currently running. Emitted by tests when
                they start running and at any intermediary point they might
                choose to indicate their continual operation.

            Final states:
              * exists - the test exists. This is used when a test is not being
                executed. Typically this is when querying what tests could be run
                in a test run (which is useful for selecting tests to run).
              * xfail - the test failed but that was expected. This is purely
                informative - the test is not considered to be a failure.
              * uxsuccess - the test passed but was expected to fail. The test
                will be considered a failure.
              * success - the test has finished without error.
              * fail - the test failed (or errored). The test will be considered
                a failure.
              * skip - the test was selected to run but chose to be skipped. E.g.
                a test dependency was missing. This is purely informative - the
                test is not considered to be a failure.

        :param test_tags: Optional set of tags to apply to the test. Tags
            have no intrinsic meaning - that is up to the test author.
        :param runnable: Allows status reports to mark that they are for
            tests which are not able to be explicitly run. For instance,
            subtests will report themselves as non-runnable.
        :param file_name: The name for the file_bytes. Any unicode string may
            be used. While there is no semantic value attached to the name
            of any attachment, the names 'stdout' and 'stderr' and 'traceback'
            are recommended for use only for output sent to stdout, stderr and
            tracebacks of exceptions. When file_name is supplied, file_bytes
            must be a bytes instance.
        :param file_bytes: A bytes object containing content for the named
            file. This can just be a single chunk of the file - emitting
            another file event with more later. Must be None unless a
            file_name is supplied.
        :param eof: True if this chunk is the last chunk of the file, any
            additional chunks with the same name should be treated as an error
            and discarded. Ignored unless file_name has been supplied.
        :param mime_type: An optional MIME type for the file. stdout and
            stderr will generally be "text/plain; charset=utf8". If None,
            defaults to application/octet-stream. Ignored unless file_name
            has been supplied.
        """

I have trimmed some of the docstrings to make this a little more concise. With that interface implemented, we then use the subunit.v2.ByteStreamToStreamResult class to do the actual conversion.

A simple debug script might look like this (I called this subunit-debug.py):

#!/usr/bin/env python


from testtools import StreamResult
from subunit.v2 import ByteStreamToStreamResult
import sys


class DebugStreamResult(StreamResult):

    def startTestRun(self):
        print("startTestRun called.")

    def stopTestRun(self):
        print("stopTestRun called.")

    def status(self, test_id=None, test_status=None, test_tags=None,
            runnable=True, file_name=None, file_bytes=None, eof=False,
            mime_type=None, route_code=None, timestamp=None):

        print("status called with args: test_id=%r, test_status=%r, "
            "test_tags=%r, runnable=%r, file_name=%r, file_bytes=%r, "
            "eof=%r, mime_type=%r, route_code=%r, timestamp=%r" % (
            test_id, test_status, test_tags, runnable, file_name, file_bytes,
            eof, mime_type, route_code, timestamp))


if __name__ == '__main__':
    debug_result = DebugStreamResult()
    converter = ByteStreamToStreamResult(source=sys.stdin)
    debug_result.startTestRun()
    converter.run(debug_result)
    debug_result.stopTestRun()

As you can see, all our debug implementation does is print things to stdout. It's not very useful, but it should help to show the various method calls. Running this script gives us output like so:

$ python -m subunit.run test_foo | python subunit-debug.py
startTestRun called.
status called with args: test_id=u'test_foo.MyTests.test_ls_exits_zero', test_status='exists', test_tags=None, runnable=True, file_name=None, file_bytes=None, eof=False, mime_type=None, route_code=None, timestamp=None
status called with args: test_id=u'test_foo.MyTests.test_ls_exits_zero', test_status='inprogress', test_tags=None, runnable=True, file_name=None, file_bytes=None, eof=False, mime_type=None, route_code=None, timestamp=datetime.datetime(2013, 12, 4, 19, 53, 33, 699151, tzinfo=<subunit.iso8601.Utc object at 0x219da50>)
status called with args: test_id=u'test_foo.MyTests.test_ls_exits_zero', test_status=None, test_tags=None, runnable=True, file_name=u'traceback', file_bytes='Traceback (most recent call last):\n  File "test_foo.py", line 20, in test_ls_exits_zero\n    self.assertEqual(0, proc.returncode)\n  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 322, in assertEqual\n    self.assertThat(observed, matcher, message)\n  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 417, in assertThat\n    raise MismatchError(matchee, matcher, mismatch, verbose)\nMismatchError: 0 != 2\n', eof=False, mime_type=u'text/x-traceback; charset="utf8"; language="python"', route_code=None, timestamp=datetime.datetime(2013, 12, 4, 19, 53, 33, 701592, tzinfo=<subunit.iso8601.Utc object at 0x219da50>)
status called with args: test_id=u'test_foo.MyTests.test_ls_exits_zero', test_status=None, test_tags=None, runnable=True, file_name=u'traceback', file_bytes='', eof=True, mime_type=u'text/x-traceback; charset="utf8"; language="python"', route_code=None, timestamp=datetime.datetime(2013, 12, 4, 19, 53, 33, 701592, tzinfo=<subunit.iso8601.Utc object at 0x219da50>)
status called with args: test_id=u'test_foo.MyTests.test_ls_exits_zero', test_status=None, test_tags=None, runnable=True, file_name=u'process-stderr', file_bytes='ls: cannot access /some/nonexistant/file: No such file or directory\n', eof=False, mime_type=u'text/plain; charset="utf8"', route_code=None, timestamp=datetime.datetime(2013, 12, 4, 19, 53, 33, 701592, tzinfo=<subunit.iso8601.Utc object at 0x219da50>)
status called with args: test_id=u'test_foo.MyTests.test_ls_exits_zero', test_status=None, test_tags=None, runnable=True, file_name=u'process-stderr', file_bytes='', eof=True, mime_type=u'text/plain; charset="utf8"', route_code=None, timestamp=datetime.datetime(2013, 12, 4, 19, 53, 33, 701592, tzinfo=<subunit.iso8601.Utc object at 0x219da50>)
status called with args: test_id=u'test_foo.MyTests.test_ls_exits_zero', test_status=None, test_tags=None, runnable=True, file_name=u'process-stdout', file_bytes='', eof=False, mime_type=u'text/plain; charset="utf8"', route_code=None, timestamp=datetime.datetime(2013, 12, 4, 19, 53, 33, 701592, tzinfo=<subunit.iso8601.Utc object at 0x219da50>)
status called with args: test_id=u'test_foo.MyTests.test_ls_exits_zero', test_status=None, test_tags=None, runnable=True, file_name=u'process-stdout', file_bytes='', eof=True, mime_type=u'text/plain; charset="utf8"', route_code=None, timestamp=datetime.datetime(2013, 12, 4, 19, 53, 33, 701592, tzinfo=<subunit.iso8601.Utc object at 0x219da50>)
status called with args: test_id=u'test_foo.MyTests.test_ls_exits_zero', test_status='fail', test_tags=None, runnable=True, file_name=None, file_bytes=None, eof=False, mime_type=None, route_code=None, timestamp=datetime.datetime(2013, 12, 4, 19, 53, 33, 701592, tzinfo=<subunit.iso8601.Utc object at 0x219da50>)
stopTestRun called.

This output shows the results of a test that failed; the last status call tells us that:

status called with args: test_id=u'test_foo.MyTests.test_ls_exits_zero', test_status='fail'

But before the test is marked as failed, several attachments are sent. These include a traceback, which is sent in two chunks:

status called with args: test_id=u'test_foo.MyTests.test_ls_exits_zero', test_status=None, test_tags=None, runnable=True, file_name=u'traceback', file_bytes='Traceback (most recent call last):\n  File "test_foo.py", line 20, in test_ls_exits_zero\n    self.assertEqual(0, proc.returncode)\n  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 322, in assertEqual\n    self.assertThat(observed, matcher, message)\n  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 417, in assertThat\n    raise MismatchError(matchee, matcher, mismatch, verbose)\nMismatchError: 0 != 2\n', eof=False, mime_type=u'text/x-traceback; charset="utf8"; language="python"', route_code=None, timestamp=datetime.datetime(2013, 12, 4, 19, 53, 33, 701592, tzinfo=<subunit.iso8601.Utc object at 0x219da50>)
status called with args: test_id=u'test_foo.MyTests.test_ls_exits_zero', test_status=None, test_tags=None, runnable=True, file_name=u'traceback', file_bytes='', eof=True, mime_type=u'text/x-traceback; charset="utf8"; language="python"', route_code=None, timestamp=datetime.datetime(2013, 12, 4, 19, 53, 33, 701592, tzinfo=<subunit.iso8601.Utc object at 0x219da50>)

And then the 'process-stderr' content, also sent in two chunks:

status called with args: test_id=u'test_foo.MyTests.test_ls_exits_zero', test_status=None, test_tags=None, runnable=True, file_name=u'process-stderr', file_bytes='ls: cannot access /some/nonexistant/file: No such file or directory\n', eof=False, mime_type=u'text/plain; charset="utf8"', route_code=None, timestamp=datetime.datetime(2013, 12, 4, 19, 53, 33, 701592, tzinfo=<subunit.iso8601.Utc object at 0x219da50>)
status called with args: test_id=u'test_foo.MyTests.test_ls_exits_zero', test_status=None, test_tags=None, runnable=True, file_name=u'process-stderr', file_bytes='', eof=True, mime_type=u'text/plain; charset="utf8"', route_code=None, timestamp=datetime.datetime(2013, 12, 4, 19, 53, 33, 701592, tzinfo=<subunit.iso8601.Utc object at 0x219da50>)

And then the 'process-stdout' content, also sent in two chunks (though this one is empty):

status called with args: test_id=u'test_foo.MyTests.test_ls_exits_zero', test_status=None, test_tags=None, runnable=True, file_name=u'process-stdout', file_bytes='', eof=False, mime_type=u'text/plain; charset="utf8"', route_code=None, timestamp=datetime.datetime(2013, 12, 4, 19, 53, 33, 701592, tzinfo=<subunit.iso8601.Utc object at 0x219da50>)
status called with args: test_id=u'test_foo.MyTests.test_ls_exits_zero', test_status=None, test_tags=None, runnable=True, file_name=u'process-stdout', file_bytes='', eof=True, mime_type=u'text/plain; charset="utf8"', route_code=None, timestamp=datetime.datetime(2013, 12, 4, 19, 53, 33, 701592, tzinfo=<subunit.iso8601.Utc object at 0x219da50>)

Of course, instead of printing this information to stdout, you'd probably store files on disk somewhere, and record test result statuses in a database.
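
To give a flavour of what that might look like, here's a rough sketch of a StreamResult that appends attachment chunks to per-test directories and records final statuses in a SQLite database. The directory layout and schema are my own inventions for illustration - they aren't prescribed by subunit or testtools:

import os
import sqlite3

from testtools import StreamResult

# The final states we record, as described in the status() docstring above.
FINAL_STATES = frozenset(['success', 'fail', 'skip', 'xfail', 'uxsuccess'])


class ArchivingStreamResult(StreamResult):

    def __init__(self, archive_dir, db_path):
        self.archive_dir = archive_dir
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS results (test_id TEXT, status TEXT)")

    def status(self, test_id=None, test_status=None, test_tags=None,
            runnable=True, file_name=None, file_bytes=None, eof=False,
            mime_type=None, route_code=None, timestamp=None):
        # Append each attachment chunk to a file in a per-test directory.
        if test_id is not None and file_name is not None:
            test_dir = os.path.join(self.archive_dir, test_id)
            if not os.path.isdir(test_dir):
                os.makedirs(test_dir)
            with open(os.path.join(test_dir, file_name), 'ab') as f:
                f.write(file_bytes)
        # Record the final status of each test.
        if test_status in FINAL_STATES:
            self.db.execute(
                "INSERT INTO results VALUES (?, ?)", (test_id, test_status))
            self.db.commit()

    def stopTestRun(self):
        self.db.close()

Feeding a stream through this (via ByteStreamToStreamResult, exactly as in the debug script above) would leave you with a directory of attachments per test and a small table of results you can query later.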

Conclusion

Subunit is a modern test result format that allows us to make the most of the features offered by toolkits like testtools. In particular, for acceptance test suites, being able to attach arbitrary content to the test result, and to have that data follow the result around, is invaluable. It allows us to have a single file that contains all the information the test author thought was important to include. This data can be moved between machines, and unpacked at will.

Hopefully by now I've convinced you that subunit is a powerful tool. We're just starting to use it, so you can expect more blog posts about our specific implementation. This post has already grown much longer than I initially anticipated, so I'll leave it here for now.