March 11, 2016
By: Jeff Terrell

The benefits of writing a separate test suite for your app

Abstract: Keeping a test suite separate from the application itself has several benefits: it adopts the user's perspective, allows (potentially) identical tests in development and production, enables load testing, and preserves the flexibility to rewrite the app entirely.

We Altometrics engineers were recently discussing the nature of tests and test suites. Why are they useful? What assumptions are they founded on? What about them is lacking? In the course of the discussion, we stumbled upon an idea that was new to us: a test suite that isn't fundamentally tied to the language and framework you're using for your application. No doubt others have considered and used this approach, but it seems far enough outside the way things are done, at least in our programming subculture, to be worth publishing.

These days, it's common for tests to live in the same project as the application itself, e.g. in a test/ directory. The tests typically have direct access to the application code, which runs in the same runtime and is (implicitly or explicitly) required using the module mechanisms the language provides. The application code can sometimes (though hopefully not often) recognize that it is being tested and behave differently in a testing context than in production. Also, various aspects of the application's configuration change in a testing context, which adds complexity to the project and allows inconsistencies between the test and production environments.

This article proposes an alternative: keep test code separate from the application code. By "separate", I don't mean that it must live in a separate repository. (In fact, I recommend it live in the same repository, so that changes to the application and tests can be tracked together.) Rather, I mean a separate project. This test project would have its own entry point, probably launched from the command line. It would access the application the same way any user would, using the same API; it would have no privileged access to source code and couldn't "backdoor" past the API. (It also should not, as a matter of principle, access the database used by the application.)
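
To make this concrete, here is a minimal sketch of what such a separate test project might look like in Python. The base URL, the /users endpoint, and the response fields are all illustrative assumptions, not part of any real application.

    # test_users.py -- part of a separate test project. It talks to the
    # application only through its public HTTP API, never through its
    # source code or its database.
    import os

    import requests  # third-party HTTP client: pip install requests

    # The application under test is located purely by URL, the same way
    # any user would locate it. (APP_BASE_URL is a hypothetical name.)
    BASE_URL = os.environ.get("APP_BASE_URL", "http://localhost:8000")

    def test_create_and_fetch_user():
        # Create a user through the public API...
        created = requests.post(BASE_URL + "/users", json={"name": "Ada"})
        assert created.status_code == 201
        user_id = created.json()["id"]

        # ...then read it back the same way any client would.
        fetched = requests.get(BASE_URL + "/users/" + str(user_id))
        assert fetched.status_code == 200
        assert fetched.json()["name"] == "Ada"

A generic test runner such as pytest can run this; notice that nothing in the suite imports the application's own code.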

This approach is certainly less convenient in many ways. Testing modules directly is faster and simpler, since it skips the full stack of processes between the user-accessible endpoint and the module. For an API, for example, it's convenient not to deal with authentication, serialization to and from HTTP, and related issues. A separate test suite loses all of that. And because every test must penetrate that stack anew, a separate test suite is slower and more wasteful of resources.
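
To illustrate that overhead: in such a suite, even a trivial test must first authenticate through the public API, for instance via a shared fixture like the sketch below. The /login endpoint and the bearer-token scheme are hypothetical.

    # conftest.py -- shared pytest fixture. Every test authenticates
    # through the public API before doing anything else, just as a real
    # client would have to.
    import os

    import pytest
    import requests

    BASE_URL = os.environ.get("APP_BASE_URL", "http://localhost:8000")

    @pytest.fixture
    def session():
        s = requests.Session()
        resp = s.post(BASE_URL + "/login",
                      json={"user": "tester", "password": "secret"})
        resp.raise_for_status()
        # Attach the returned token to all subsequent requests.
        s.headers["Authorization"] = "Bearer " + resp.json()["token"]
        return s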

Nevertheless, there are some definite advantages. First, because the tests' access to the application is limited to the same endpoint(s) that users use, the developers are forced to adopt a user-centric perspective. This can uncover many issues that, while not usually bugs per se, make life inconvenient for users. For example, maybe an API action doesn't return all of the information that might be useful, so a user has to look up important information with another request. These sorts of issues are more easily noticed when developers write tests this way.
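
As a small illustration, continuing with the hypothetical API and session fixture sketched above: a test written from the user's point of view makes it obvious when a response forces a follow-up request.

    # test_orders.py -- a user-perspective check that an order response
    # is self-contained. The /orders endpoint and its fields are
    # hypothetical; `session` is the authenticated fixture from above.
    import os

    BASE_URL = os.environ.get("APP_BASE_URL", "http://localhost:8000")

    def test_order_response_is_self_contained(session):
        resp = session.get(BASE_URL + "/orders/42")
        assert resp.status_code == 200
        order = resp.json()
        # If each item carried only an opaque product ID, this would
        # fail, surfacing the usability gap as the tests are written.
        assert all("product_name" in item for item in order["items"])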

Second, a separate test suite allows for the possibility of running identical tests in development and production. These days, most projects run tests locally on a developer's workstation and sometimes on a test server (a.k.a. continuous integration server) as well. If the tests pass, the code is considered good enough for production. Ideally, this works without a hitch, but occasionally the production server's configuration differs in some relevant way, and something breaks despite the passing tests. Because a separate test suite only needs a user-level endpoint to test against, the exact same suite can run in development, against a test server, or against the production server. Thus, the tests encompass the server configuration as well as the code.
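
In practice this can be as simple as pointing the same suite at different base URLs, for instance with the hypothetical APP_BASE_URL variable from the sketches above (the URLs here are placeholders):

    # The identical suite, aimed at three environments; the tests
    # themselves do not change.
    APP_BASE_URL=http://localhost:8000 pytest          # development
    APP_BASE_URL=https://staging.example.com pytest    # test/CI server
    APP_BASE_URL=https://api.example.com pytest        # production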

Third, a separate test suite means that you can load-test your application. Just run multiple test processes, perhaps from multiple machines, against the same endpoint, and see how much load it can bear before performance degrades.
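
Here is a crude sketch of that idea, reusing the hypothetical endpoint from above: spawn several worker threads against the public URL and report rough latencies. Dedicated load-testing tools do this far better, but the point is that nothing beyond the user-facing URL is required.

    # load_test.py -- naive load test: WORKERS concurrent clients, each
    # issuing REQUESTS_EACH GET requests, then a rough latency report.
    import os
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    BASE_URL = os.environ.get("APP_BASE_URL", "http://localhost:8000")
    WORKERS = 20        # concurrent clients; tune to taste
    REQUESTS_EACH = 50  # requests issued by each client

    def worker(_):
        latencies = []
        for _ in range(REQUESTS_EACH):
            start = time.time()
            requests.get(BASE_URL + "/users")
            latencies.append(time.time() - start)
        return latencies

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=WORKERS) as pool:
            results = pool.map(worker, range(WORKERS))
        latencies = sorted(t for ts in results for t in ts)
        n = len(latencies)
        print("requests:        %d" % n)
        print("median latency:  %.3fs" % latencies[n // 2])
        print("99th percentile: %.3fs" % latencies[int(n * 0.99)])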

Fourth, you have more freedom to refactor and rewrite large pieces of your application. If your tests are tied to modules, then moving functions from one module to another means making a parallel move in the test code, which can hinder agility. (Though it's worth saying that tests in general enhance agility, since they lend confidence that if a change breaks something, you'll know.) Ultimately, you can even rewrite your application in a completely different language, and a separate test suite won't break or treat it any differently.

At a high level, tests lie on a spectrum from integration tests, which test a whole system, to component (or unit) tests, which test a single module or unit of code. Both are useful. The testing approach outlined here falls solidly on the "integration" side of the spectrum; in fact, because these tests encompass configuration as well as code, they are even more end-to-end than the so-called integration tests of many frameworks. That said, it's important to note that a separate test suite may not be a good fit in many cases. If you have complicated logic, you may want to write component tests for that logic, and those are probably best written in the conventional style, with tests requiring the module and calling its functions directly.
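
For contrast, a conventional component test of such logic imports the module directly and never touches a server. Here compute_discount and the myapp.pricing module are hypothetical, standing in for the "complicated logic":

    # test_pricing.py -- a conventional component test, living alongside
    # the application code and importing it directly.
    from myapp.pricing import compute_discount  # hypothetical module

    def test_bulk_discount_starts_at_ten_items():
        assert compute_discount(quantity=9) == 0.0
        assert compute_discount(quantity=10) == 0.10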

What do you think? Agree or disagree? I'd be especially interested in reports from anyone who has used this approach before. Let me know in the comments below.

Tags: development testing coding