Should I Separate Unit Tests from Integration Tests?
When does it make sense to keep integration tests separate from your unit tests, and when is it OK to make no distinction?
Well, it's all about getting fast feedback.
Of course separate things should be kept separate! But separating different kinds of tests is just a way to get quicker feedback, and there are many ways to do that.
If you do Test-Driven Development (TDD), you'll be running your test suite very, very often. The red-green-refactor cycle may be as short as one minute. That requires that your build process and test suite run very quickly. If your tests take more than eight or ten seconds, you're going to start losing attention. And even if you don't do TDD: fast tests are more enjoyable than slow tests.
Quick and easy tests are key to improving and maintaining software quality: it doesn't matter whether your test suite has perfect coverage if it is never run. Programmers might avoid running tests if they are difficult to set up, require manual interaction, or simply run so slowly that they start doing something else in the meantime.
To get quicker results, programmers must be able to select which tests are currently important. For example, if I am fixing a bug in a component, I'll probably write a regression test that reproduces the bug and then fix the component until the test passes. During that work, I only care about that single test case. Afterwards, I may run other tests to verify that I didn't accidentally break anything, but I don't want them running all the time: that would just slow me down.
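Most test runners let you select a single test by name while you work on the fix. Here is a minimal sketch, using pytest as the runner; the module, test name, and bug are invented for illustration:

```python
# tests/test_parser.py -- a hypothetical regression test; the module,
# function, and bug are invented for illustration.
from myproject.parser import parse_price  # assumed code under test


def test_parse_price_handles_trailing_whitespace():
    # Reproduces the reported bug: trailing whitespace used to raise ValueError.
    assert parse_price("4.99 ") == 4.99

# While working on the fix, run only this one test:
#   pytest tests/test_parser.py::test_parse_price_handles_trailing_whitespace
# Once it passes, run the whole suite to check for accidental breakage:
#   pytest
```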
So we need a way to categorize the tests. A simple approach is to create a separate test suite for each component. This is useful for larger applications that have clear components with low coupling. We would then have a master test suite that combines all tests from all components. As a developer, I can run a focussed test suite related to the component I am working on, and leave the rest to the CI server.
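As a sketch of what such a master suite can look like (using Python's standard unittest module; the component names are invented), each component keeps its tests in its own directory and the master suite simply collects all of them:

```python
# run_all.py -- a minimal sketch of a master suite that combines
# per-component suites; component names are invented for illustration.
import unittest

loader = unittest.TestLoader()
suite = unittest.TestSuite()
for component in ("billing", "inventory", "reporting"):
    # Each component has its own test directory, e.g. tests/billing/.
    suite.addTests(loader.discover(f"tests/{component}"))

if __name__ == "__main__":
    # The CI server runs this; a developer can instead run a single
    # component's suite, e.g.: python -m unittest discover tests/billing
    unittest.TextTestRunner(verbosity=2).run(suite)
```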
Some test frameworks allow us to annotate tests as slow tests. Others have a concept of “author tests” that take a long time or need special setup. These tests are not run by default when the test suite is executed, and need to be requested explicitly. These are good categorizations: Fast feedback by default, but full coverage when required.
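To make the idea concrete, here is a minimal sketch with pytest markers; the marker name, tests, and timings are invented:

```python
# test_reports.py -- a sketch of annotating slow tests with pytest markers.
# The "slow" marker would be registered in pytest.ini, roughly:
#   [pytest]
#   markers = slow: tests that need a database, network, or just lots of time
import time

import pytest


@pytest.mark.slow
def test_nightly_report_generation():
    # Pretend this talks to a real database and takes a while.
    time.sleep(5)
    assert True


def test_report_formatting():
    # A fast unit test that always runs.
    assert f"{3.14159:.2f}" == "3.14"

# Fast feedback by default:
#   pytest -m "not slow"
# Full coverage when required (e.g. on the CI server):
#   pytest
```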
But why do tests take so long? No one is intentionally making tests slow. Well, some things you might be testing are inherently slow:
- Database queries
- Network requests
- Time-dependent behaviour
- Large amounts of data
The good news is that these only need to be covered by integration tests, never by unit tests. The point of unit tests is that we test the system under test in isolation: any dependencies and external services are replaced by mocks or stubs. Since we are testing only a small piece of code, and only that code, unit tests tend to be quite fast.
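As a sketch of that isolation (the class and its behaviour are invented for illustration), the slow dependency is replaced by a stub from the standard library's unittest.mock, so the unit test never touches the network:

```python
# A minimal sketch of isolating the system under test with a stub.
# WeatherReporter and its API are assumed/invented for this example.
from unittest.mock import Mock

from myproject.weather import WeatherReporter  # assumed code under test


def test_reporter_formats_temperature_without_network():
    # Stand-in for the real HTTP client; no network request is made.
    fake_client = Mock()
    fake_client.current_temperature.return_value = 21.5

    reporter = WeatherReporter(client=fake_client)

    assert reporter.summary("Berlin") == "Berlin: 21.5 °C"
    fake_client.current_temperature.assert_called_once_with("Berlin")
```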
Since integration tests tend to be slower, keeping them separate from unit tests from the start is a good idea. If the integration tests need a lot of supporting infrastructure (like lots of files with test data, extra utility programs to set up the environment, …), then putting them into a separate project may be sensible. However, they should share the version-control repository with the main code: completely separating tests from the system they are testing tends to be very painful, in particular when the tested interfaces change.
Categorizing your tests isn't as important for small projects that don't have significant sub-components, or when the tests are so simple that they run reasonably quickly (less than ten seconds for the complete suite). Introducing some categorization would be a waste of effort here.
Separating automated integration tests from unit tests is not important in itself; it is just a performance hack to get faster feedback from your tests.
As hinted above, using a CI server (e.g. Buildbot or Jenkins) is a good idea. CI servers are useful for regularly running the complete automated test suite. If something fails, the server will send you an error report via email. That allows you to continue with more important work while the tests run in the background on the server, instead of having to wait for them to complete on your machine.
However, a requirement for continuous integration (CI) is a high-quality build process. When the CI process fails because it requires some manual setup, this breeds a culture of ignoring the test results. But once everything is automated and reproducible, regular execution of even the slowest tests will have a positive impact on your project, and will help avoid a “but it works on my machine!” attitude to problems.
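To make that concrete, a fragment of a fully automated pipeline might look roughly like this (a Buildbot sketch; the repository URL, step names, and test commands are placeholders, not a complete master.cfg):

```python
# Fragment of a hypothetical Buildbot master.cfg: check out the code and run
# the full automated test suite, slow tests included. All names are placeholders.
from buildbot.plugins import steps, util

factory = util.BuildFactory()
factory.addStep(steps.Git(repourl="https://example.org/my-project.git",
                          mode="incremental"))
factory.addStep(steps.ShellCommand(name="fast tests",
                                   command=["pytest", "-m", "not slow"]))
factory.addStep(steps.ShellCommand(name="slow tests",
                                   command=["pytest", "-m", "slow"]))
```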