Essential XP: Unit Tests at 100
Debugging can be time-consuming and hard to predict. Once found, most defects are easy to fix, but programmers often spend a long time digging through the code to find the cause. The longer you wait to find out about a defect, the harder it can be to find. Conversely, if the programmer knows that something he just did caused the problem, he can usually find and fix it very quickly. We all know plenty of good ways to delay finding out about problems: waiting for Testing to turn around a release, or waiting for the weekly build, for example. Let’s explore instead how quickly we can find out about defects.
Suppose it only took a tenth of a second to compile, link, and test your entire system. OK, I admit I’m extreme, but even I know we can’t do that. But go with me here – let’s imagine what would happen if we could. And while we’re at it, let’s imagine that our tests are comprehensive: they test everything that needs to work. When our tests run, we’re sure the system is right.
If things were like this, what might we do?
Well, for sure, as soon as we think we have a feature done, we’ll run the tests. A tenth of a second later, we’ll know whether we have it right. If we’ve slipped up, we’ll fix the problem and test again until we do get it right.
If our new feature happens to break something else, we’ll know right away: those tests are comprehensive. We’ll fix the problem and test again.
But wait, don’t answer yet. We’ll probably change the way we work. Rather than wait a day or so before knowing whether there’s a bug or ten in there, we’ll break features down into smaller pieces. We’ll do a bit, make sure everything is still OK, then go on. We’ll be running those instant tests several times a day, maybe even more often.
What will happen to reliability? It’ll be high and it will stay high. These tests are comprehensive: bugs don’t slip through easily.
What will happen to functionality? It will just grow, inch by inch, step by step. Every day, every hour, we’ll make the system just a little bit better.
What about design? It always seems there’s some part of the system that needs work, no matter how carefully we design and try to keep things clean. We used to be afraid, sometimes, to clean that up, for fear it would break something. But now, we can test any time we want. We’ll improve the code a little every day, then test to be sure everything is still OK.
While we’re dreaming, let’s dream big. With functionality ratcheting up and reliability staying high, we can ship the software more often. We know it doesn’t break anything, so as soon as a feature is ready, we can cut a new CD and give it to anyone who needs it or who will pay for it. Happier customers, faster return on investment. Wow!
But it’s just a dream, isn’t it? We couldn’t write software that way – could we?
The answer is that, for all practical purposes, we can write software that way. As XP teams program, they write comprehensive unit tests for everything they build. The really wise teams write a test first, then a little code, then another test, and so on. Teams report 100% coverage from this technique alone, and that level of coverage is quite good in a lot of environments. More tests than these may be needed: as an XP team goes forward, it learns which areas need more testing or different kinds of testing, and it learns how to write just the right tests, neither too few nor too many.
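To make that rhythm concrete, here’s a minimal sketch of one test-first step, written with JUnit 4 annotations. The Account class, its methods, and the test name are hypothetical, invented just for illustration: the test comes first, it fails, and then we write just enough code to make it pass.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical example: this test is written before Account exists.
    // At first it fails (it won't even compile), and that failure tells
    // us exactly what code to write next.
    public class AccountTest {

        @Test
        public void depositIncreasesBalance() {
            Account account = new Account();   // balance starts at zero
            account.deposit(50);
            assertEquals(50, account.getBalance());
        }
    }

    // The simplest Account that makes the test above pass.
    class Account {
        private int balance = 0;

        public void deposit(int amount) {
            balance += amount;
        }

        public int getBalance() {
            return balance;
        }
    }

The next test, perhaps for withdrawals or overdrafts, would drive out the next little bit of code, and so on.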
XP teams automate all their tests, usually using JUnit or an equivalent package. They release code frequently: pairs release to the repository once or twice a day. And whenever they release, they make sure that all the tests run correctly: every single test at 100%.
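One lightweight way to check “every single test at 100%” before releasing is to gather the tests into one suite and run it every time. Here’s a minimal sketch, again in JUnit 4 style; the class names are ours, not from any particular project.

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    // Hypothetical suite that gathers every test class in the system.
    // A pair runs it just before releasing code to the repository,
    // and releases only when every test passes.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({
        AccountTest.class
        // ... plus every other test class in the system
    })
    public class AllTests {
    }

However the tests are launched, through a suite like this or through whatever runner the environment provides, the rule stays the same: release only when every test passes.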
Each team adds refinements to make the release of clean code smoother in its environment. And all the teams that take this approach report smooth addition of features, great confidence in quality, and very few problems cropping up.
Your team could gain these advantages too. Just write automated unit tests and keep them passing at 100%. We’re sure that if you do, you’ll see improvement right away.