I've been struggling for years with notions like having empathy with our mistakes, Kerth's Prime Directive, and the like. Springing from a couple of notes on the extremeprogramming group and a blog entry from Dale Emery, here's my latest rant.

The Prime Directive

In Project Retrospectives, Norm Kerth would have us focus on what he calls the Prime Directive:

Regardless of what we discover, we understand and truly believe that everyone did the best job they could, given what they knew at the time, their skills and abilities, the resources available, and the situation at hand.

My concern is this: I’m afraid that if we adhere to this belief then in a very important sense, where we “end up”, where we are at any given time, is a kind of “best possible” place. It is a place that we should perhaps accept, even joyfully, because we have all done our best.

And I do adhere to this belief, yet with two caveats:

First, while it is good that we have done our best, and therefore we do need to embrace the goodness of where we are, where we are might still not be a very good place. Second, while our then-current skills, abilities, and values might have gotten us there, some different skills, abilities, and values might get us to a better kind of place next time.

It All Started, Doctor ...

… in a couple of notes among Dale Emery, Doug Swartz, and me. We were talking about my YAGNI article and whether simple design can lead to things like data-driven code. Dale made a point: “You might never get there, but if you never get there, it isn’t where you need to be. That’s not something to fear; it’s something to rejoice about.”

Dale was really just saying that if my refactoring didn’t lead to data-driven code, it would still be OK. But I took him to be saying more.

Doug Swartz commented further: “It doesn’t mean that ‘it doesn’t matter what you do’. It doesn’t mean you shouldn’t care where you end up.”

After a couple of exchanges, Dale blogged on the subject, and on the Yahoo group he said:

“I think those decisions tell us about important internal values conflicts that we haven’t yet resolved in practice. Those conflicts are our richest source of learning. And I think we can learn from them only if we can empathize with what we were dealing with, inside us and outside us, at the time. The moment we decide that our intentions were “wrong,” we turn the light of day from warm to cold. We become reluctant to expose the beliefs and intentions that we have judged as “not okay,” which limits what we can learn from them. We lose courage, feedback, and communication. We might even lose simplicity sometimes.”

I suppose there is some truth here. I have great admiration for Dale, and maybe people need to feel comfortable in order to ask themselves questions. Or to answer them in public.

Further, lots of people are really into this kind of thinking. While I really suspect that it’s a disease, and very likely at or near the root of the obvious dissolution of our society, it’s important to embrace change and all that, so I’ll think about a recent real example:

This Just Happened ...

I’m coding this program in C#, and I’m on deadline. People are asking me where the heck it is. The tests are supporting me pretty well, but there are parts of the user interface where they don’t reach. I don’t know enough about the details of events and the guts of Windows to see how to throw characters at the GUI, and I really don’t want to bring up a GUI during my unit tests anyway because I fear it will take too long.
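Looking back, the kind of test I couldn’t see how to write can often be sidestepped rather than written: pull the interesting logic out of the event wiring, so a plain unit test can drive it without ever showing a window. Here’s a minimal sketch of that move, assuming NUnit and Windows Forms; the form, its names, and its behavior are invented for illustration, not taken from the actual program.

    using System.Windows.Forms;
    using NUnit.Framework;

    // Hypothetical form: the KeyPress handler just delegates to a
    // public method, so tests can "throw characters" at the logic
    // without touching the guts of Windows events.
    public class EntryForm : Form
    {
        public string Display { get; private set; } = "";

        public EntryForm()
        {
            // The event wiring is one trivial line; nothing here needs testing.
            KeyPress += (sender, e) => AcceptKey(e.KeyChar);
        }

        // All the interesting behavior lives here, outside the GUI plumbing.
        public void AcceptKey(char c)
        {
            if (char.IsDigit(c))
                Display += c;
        }
    }

    [TestFixture]
    public class EntryFormTests
    {
        [Test]
        public void DigitsAppearInDisplay()
        {
            // The form is constructed but never shown, so the test stays fast.
            var form = new EntryForm();
            form.AcceptKey('4');
            form.AcceptKey('2');
            Assert.AreEqual("42", form.Display);
        }
    }

The point is the design move, not the details: once the logic is reachable without bringing up the GUI, the “it will take too long” objection mostly evaporates.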

I can make it work by testing it manually. I’m confident that when I find the defects, I can fix them, and I can remember not to put them back in.

To build the tests will take me way off my current track, for at least a couple of days, which is a long time on this project. Once they are built, they will help me find the same couple of defects. They will help me be sure that I haven’t put them back, but I don’t believe that I’m going to put them back.

If my manual debugging goes well – essentially a matter of luck, as nearly as I have ever been able to understand it – I’ll find the bugs faster. If it goes poorly, it will have been better to write the tests. Manual debugging will leave the system in a poorer state: the code will be just as good, but the test suite will be weaker, leaving the code subject to reintroduced errors when it changes. But those changes won’t happen.

My principles say to write the tests. I teach that you should write the tests. I believe that in some perfect world, writing the tests is the right thing. This time, hoping to save time, I choose not to write the tests. Debugging takes a bit longer than I would have wished, but I make it work.

The bottom line is that the program works, and it’s good enough. It took as long as it took. I already knew I wasn’t perfect, and I totally understand that I did the best job I could, given what I knew at the time, my skills and abilities, the resources available, and the situation at the time.

Great. Prime Directive fulfilled, the program works. I love myself and you love me. Kiss kiss.

Well, no, I’m sorry, but I don’t think so. I think Jeffries screwed up.

Debugging took longer than it should have – so long that the tests would have been a better decision all around. Going ahead without the tests was, by any reasonable definition of the term, a mistake.

Writing the tests was the better idea. We wound up, not in a “wonderful place”, but in an OK place. Seeing the difference between the OK place and the wonderful place is what lets me see that I should bump up my counter on “Should Write Tests”, and tick down my counter for “Assume that You’re a Good Debugger”.

I don’t get it. I don’t get how accepting badness as “we did our best” leads to learning. There must be something after the Prime Directive. What’s the Second Directive?