From an ongoing discussion on comp.software-eng:

Testing can only show the presence of errors, not their absence. You XP guys advocate testing, not inspections or program proofs. Everyone knows that testing can only show that errors exist, not that they don't.

This famous saying, originally made up by a programmer who hated to write tests[1], is one of those logic games we all like to play when we're in school. It's formally correct, and completely misleading. The fact is that NOTHING, not inspection, not formal proof, not testing, can give 100% certainty of no errors. Yet all these techniques, at some cost, can in fact reduce the errors to whatever level you wish.

We teach our customers that they don't have to test anything unless they want it to work. What happens in the real world is that a large network of customer-specified functional tests in fact generates confidence[2]. Not 100% confidence, but very great confidence. When we run all our tests, the programmers and the customers are very confident that we haven't screwed up anything important. Why? Because we have tested the hell out of everything important!

One final thought experiment. You're going to have to work with one of two word processors for the next six months. All I'll tell you is that the one in box A has been extensively tested and all the errors found were removed. The one in box B has not been tested. Which one do you choose, and why?
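A customer-specified functional test can be as small as an automated check that encodes an example the customer wrote down. A minimal sketch in Python's standard `unittest`, with a hypothetical rule (a 5% discount on orders over $100) and hypothetical names standing in for real application code:

```python
import unittest

# Hypothetical code under test: the customer specified that an order
# totaling more than $100 gets a 5% discount.
def order_total(prices):
    subtotal = sum(prices)
    return round(subtotal * 0.95, 2) if subtotal > 100 else subtotal

class CustomerFunctionalTest(unittest.TestCase):
    """Encodes the customer's own examples as executable checks."""

    def test_discount_applies_over_100(self):
        # Customer's example: $120 of goods should cost $114.
        self.assertEqual(order_total([60, 60]), 114.0)

    def test_no_discount_at_or_under_100(self):
        # Exactly $100 gets no discount.
        self.assertEqual(order_total([50, 50]), 100)
```

Run the suite with `python -m unittest`. Each green run of tests like these is one more reason for the confidence described above; each red run is an important screwup caught early.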


[1] A reader points out that E. W. Dijkstra actually first observed that testing can only show the presence of errors, not their absence. We knew that, but thought our fake attribution was more amusing. No disrespect was intended either to Professor Dijkstra or to the humor-challenged.

[2] Earlier versions of this article said "certainty" rather than "confidence". Confidence expresses the notion better, I believe.