CFP, Practices, Kindness
This morning, I plan to write about COSMIC Function Points, Agile Practices, Civility, and Kindness. I can’t wait to find out what I have to say.
Today’s thoughts are inspired by a few different communications:
In a long Twitter thread about metrics, ScopeMaster (C Hammond) joined the thread and got us talking about COSMIC Function Points.
Also on Twitter, Adam Dymitruk said “TDD is ‘make it up as you go along’ design. No wonder it introduces so much rework that you end up with very little quality code. TDD success stories are the best example of survivor bias.” A brief discussion ensued.
Yet again on Twitter, Steven Pinker posted a link to some “rules of civil conversation”. On a private Slack to which I belong, some folks took issue with the notion of civility (and to some degree with Pinker, who is not universally well thought of).
It’s not impossible that I’ll tie these together. It’s also not impossible that pigs will fly.
COSMIC Function Points
I freely grant that I didn’t know anything about CFP until a week or so ago, the first time I noticed ScopeMaster tweeting into some thread. Function points are not a new idea, originating around 1980. I took a short course about them at some point in my dark and mysterious past. The COSMIC variant is somewhat simpler than other schemes, which, to my not entirely unbiased eye, offers the advantage of simplicity at the cost of some accuracy compared to approaches that consider more detail.
CFP, like all function point schemes, estimates the “functional size” of an information system. It does so, in essence, by counting four types of “data movements”, Entry, Exit, Read, and Write. To estimate an entire system, “all you have to do” is count up these four types for the entire specification of the system. There are even some ideas about CFP for Agile, which I have not yet reviewed.
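To make the counting idea concrete, here’s a toy sketch in Python. The process names and movement lists are invented, and a real COSMIC measurement follows a detailed measurement manual; the point is only that the score is a simple tally of Entries, Exits, Reads, and Writes.

```python
# Toy illustration, not an official COSMIC tool: tally the four data-movement
# types for a few hypothetical functional processes. One movement = one CFP.
from collections import Counter

processes = {
    "create_customer": ["E", "W", "X"],       # Entry, Write, Exit
    "show_customer":   ["E", "R", "X"],       # Entry, Read, Exit
    "monthly_report":  ["E", "R", "R", "X"],  # Entry, two Reads, Exit
}

def cfp(movements):
    """COSMIC functional size: one point per data movement."""
    return len(movements)

total = sum(cfp(m) for m in processes.values())
by_type = Counter(m for moves in processes.values() for m in moves)
print(f"Total CFP: {total}")   # Total CFP: 10
print(dict(by_type))           # {'E': 3, 'W': 1, 'X': 3, 'R': 3}
```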
I have personally never once worked on a software effort where we had a true specification for the entire product, which, I hope you’ll forgive me, kicks any form of function points right out the door. But possibly, if you did have a complete spec, you could add up its points, and they might tell you something. Certainly a similar program with a substantially smaller number of points would require less effort. It might even be proportional, though I find that hard to believe.
Now I think ScopeMaster would offer another use of CFP to us iterative and incremental folks. I think he would suggest that if we were to CFP our small stories as we go, we would quickly develop a CFP-to-time ratio, and I think he would argue that two stories with the same CFP count would probably take the same amount of time to implement. So we could CFP a bunch of stories, and we’d know how many to dump on the team each week, and we’d have a good estimate of how long it would take to get all those stories done.
It’s even possible—if I’m not mistaken—to get some other agency to CFP the stories, because all trained CFPers are quite close in the counts they generate, which would mean that we could have a few people go ahead and CFP everything. With that and the team history, we’d have a standard of performance to hold our team to.
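If I’m reading the suggestion correctly, the arithmetic behind that forecast would be about this simple. A sketch, with every number invented:

```python
# Hypothetical CFP-to-time forecast. Every number here is made up.
completed = [(5, 3.0), (8, 4.5), (3, 2.0)]  # (story CFP, days it took)

cfp_done   = sum(points for points, _ in completed)
days_spent = sum(days for _, days in completed)
days_per_cfp = days_spent / cfp_done        # the team's observed ratio

backlog_cfp = 120                           # someone has CFP'd the remaining stories
forecast = backlog_cfp * days_per_cfp
print(f"{days_per_cfp:.2f} days per CFP; forecast {forecast:.0f} days for the backlog")
# -> 0.59 days per CFP; forecast 71 days for the backlog
```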
Are you starting to cringe about now? Does the notion of some other agency telling the team exactly what to implement and exactly how long they have to implement it make your hair curl? Yeah, well, me too.
For my dear friends who are concerned about extractive management styles, here we have a perfect example. We “just have to” spec the program, calculate its CFP, observe the team for a while, and voila! we know exactly how hard to whip the ponies to get the work done. Nothing could be better.
Well, there is one drawback. We have taken an entire team, or many teams, and converted them from intelligent, creative, valuable contributors to the company’s knowledge and success, into a mindless machine for cranking out COSMIC Function Points. Not only is that inhumane, it’s ineffective. It wastes most of the value that an expert software developer can provide to the company.
Now ScopeMaster would be among the first to say that that would be a misuse of CFP. CFP doesn’t tell you to whip your ponies harder to get more CFPs done. Not his problem.
And it isn’t his problem. It’s just an often inevitable effect of providing this kind of razor blade to a very common kind of baby, the extractive manager.
My current thoughts on CFP
I have never used CFP, nor met anyone who has admitted using them. My assessment is therefore superficial. I could be wrong.
My guess is that there is a strong correlation between CFP well done and some notion of “size” (is that where Dan got the idea of story points being about size?). And in a sufficiently consistent set of features, I would expect to see a good correlation between CFP count and time to implement.
I’d also expect to see two kinds of variance from that correlation, one good and one bad. The good one would arise, occasionally, if the team managed to create a tool, or a library, or some reusable objects to speed things up. However, in a CFP-driven extractive effort, it’s unlikely that they’d build such things.
The bad variance would be quite common. As a software system grows, it often starts to take longer to do things. Changes affect more code, there’s more to review before you can plug something in, and so on. This is especially common in systems where there isn’t ongoing refactoring to improve the code. CFP would see the team slowing down. Since CFP is “correct”, the team is wrong. Whip the ponies harder.
But the big mistake of CFP, in my view, is that it is about prediction. Its sole purpose, it seems to me, is to predict and compare the cost of future efforts. Agile software development is not about predicting. It is about choosing, day to day, the next feature that will best increase the value of the product. As soon as you start creating new feature ideas based on the response to the previous one, you’ve blown all of CFP’s longer-range predictive ability away. You’re left with the whip.
As a practice, I think CFP, or any predictive scheme, including story points by the way, is a poor fit for iterative, incremental, team-focused software development.
TDD
Adam said:
TDD is “make it up as you go along” design. No wonder it introduces so much rework that you end up with very little quality code. TDD success stories are the best example of survivor bias.
In the thread, Adam refers to rewriting tests, and defends his rather radical view against a number of TDD aficionados. You can cruise the thread for entertainment. I want to talk about how TDD works, and doesn’t work, for me.
TDD, done as I learned it at the hands of the author, is a very small-scale practice of writing a test for one tiny thing that the system doesn’t yet do, a thing far smaller than a “feature”. The core process is a bit ritualistic, but it’s not mindless, as we’ll discuss. We run the test and observe (RED) that it doesn’t yet work. (Once in a while, we’re surprised and the test does run. Learning ensues.)
We then implement the minimum code to make the test work (GREEN). Our tiny thing works, so we look at the impact of that feature on our design understanding, and we improve the design, then and there (REFACTOR), keeping the design good as we build up tiny things into bigger things.
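To show the scale I mean, here is a sketch of one micro-cycle in Python with pytest. The example is mine, invented for illustration, not taken from any particular session.

```python
# RED: write a test for one tiny thing first, run it, and watch it fail
# because frame_score does not exist yet.
def test_spare_scores_ten_plus_bonus():
    assert frame_score(first=6, second=4, bonus=3) == 13

# GREEN: the minimum code that makes the test pass.
def frame_score(first, second, bonus):
    return first + second + bonus if first + second == 10 else first + second

# REFACTOR: with the test passing, improve the design then and there.
# Here that might mean giving the spare rule a name (this version
# replaces the one above).
def is_spare(first, second):
    return first + second == 10

def frame_score(first, second, bonus=0):
    pins = first + second
    return pins + bonus if is_spare(first, second) else pins
```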
Probably a better mantra would have been something like
THINK red THINK green THINK refactor THINK
TDD is not some mindless practice that replaces thinking with tiny tests and tiny code improvement. It is a mindful practice embedded in a lot of thinking. In particular, design thinking.
Now, as we’ve talked over the years about TDD and the very many closely related activities, like ATDD, Acceptance Test Driven Development, we’ve observed a common structure, kind of nested. We set out to do some big thing, and we might write a “big” test for it, or we might just think about it. We think of some smaller step for it … think … finally choose some tiny step, TDD, pop up, choose another step … pop up … and the feature works. (Or doesn’t, in which case there’s a missing or mistaken test somewhere.)
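A sketch of that nesting, again in Python with pytest and with invented names: an outer, feature-sized test that stays red for a while, and the tiny inner tests that drive out the pieces.

```python
# Outer "big" test: the feature we set out to build.
def test_receipt_totals_and_formats_an_order():
    order = [("tea", 2, 1.50), ("scone", 1, 2.25)]
    assert format_receipt(order) == "tea 3.00\nscone 2.25\nTOTAL 5.25"

# Inner TDD steps, each one tiny, each driving out one piece.
def test_line_total():
    assert line_total(("tea", 2, 1.50)) == 3.00

def test_format_line():
    assert format_line(("tea", 2, 1.50)) == "tea 3.00"

# Implementation grown a step at a time as the inner tests go green.
def line_total(item):
    _, quantity, price = item
    return quantity * price

def format_line(item):
    name, *_ = item
    return f"{name} {line_total(item):.2f}"

def format_receipt(order):
    lines = [format_line(item) for item in order]
    lines.append(f"TOTAL {sum(line_total(item) for item in order):.2f}")
    return "\n".join(lines)
```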
Statement
I recall a bit in Stranger in a Strange Land, I think it was, where someone says to Jubal “So you admit …” and Jubal says something like “I do not admit …, I state it.”
I state here something I’ve stated many times. In my personal practice with TDD, I intentionally do less “big” design than I could. I intentionally do small bits of design. I intentionally support my few features with a design that is too small and not general enough for the whole system.
I do that because I want to show what happens. What usually happens is that when I discover that the existing design isn’t good enough, I can refactor it so that it is good enough. And I show my work, so you can decide whether it was easy or hard, whether the path I took was winding but efficient, or winding and far too slow.
Now, of course it seems likely that if I were to “just figure out the right design and implement it”, that would be the best thing to do. And since we are smart, we can always figure out the right design. The problem is, that idea wouldn’t work even if we could do it.
The big design will take longer to implement than the small one. That means that features will start to come out later. Possibly, once they start coming out, they’ll come out faster. But with small steps toward the big design, even if they are a bit wandering, we’re comparing this feature flow with incremental design:
... A ... B ... C ... ... D ... E ... F
To this one with up front design and building:
... ... ... ... ... ... ... A B C D E F
Maybe.
Now of course this may not always happen. In my work, I like to throw hard problems, in the form of new stories, into the mix. I demanded Hex maps in the Dungeon program, and as readers will have noticed, I had trouble with the details and have backed away from that feature (for now).
It could also not work if my small design steps are wrong. But there’s a catch. If my small design choices are wrong, what are the chances that my big design is somehow right? That’s not how I’d bet.
Adam also mentions rewriting tests as an excessive cost of TDD. I do not experience that. I do sometimes rewrite tests, but it’s not difficult. More commonly, my existing tests work just fine, and in fact even find problems in my design improvements. So I can’t really speak to Adam’s problem. I do not experience it.
But I can speak to his conclusion, which seems to be that TDD doesn’t work. I do TDD, and it works quite nicely for me. I know tens, maybe hundreds of people for whom it does work. So I have to conclude that there is something different between where Adam is and where we successful TDDers are. Is it his problem space? His programming language? Is it a variance between how he does TDD and how we do it? Is it a variance in his notion of good design and ours?
I do not know. But I know that TDD, as many people do it, works quite nicely, and we wouldn’t work another way.
As a practice, I find TDD to be a good fit for iterative and incremental software development.
Civil Conversation
I am reliably informed that I am an old, white, well-off male, and that as such I hold a level of power that would let me demand civility even when I am in fact on the wrong side of this notion, which I found in a tweet from Rachel Thomas:
Sometimes people use “respect” to mean “treating someone like a person” and sometimes to mean “treating someone like an authority”.
For some, “if you don’t respect me, I won’t respect you” means “if you don’t treat me like an authority, I won’t treat you like a person”.
I hope I never do that, but certainly people do it. We see it in bosses, police, judges, politicians, who stand on a platform of power and demand respect, decorum, civility, while dispensing injustice at will.
I am also reliably informed that as a Jesuit-trained, math- and computer science-educated computer programmer, I may value “reason” and “logic” more than they deserve. I am informed, and I suppose reliably so, that feelings are more important than reasons. And in the context where I was most recently informed of that, namely the “rules of civility”, the notion seemed to be that those rules rely too strongly on reason, not enough on empathy and feelings, and can readily be used by those in power to silence those who are justly upset at not being heard.
I am sure that some of the backlash came from individuals who do not think highly of Pinker and, who knows, may have let their feelings about him contaminate their feelings about those rules. Pinker is associated with the organization, so the concern is not entirely unfair.
Despite all the discussion that I’ve had offline about this topic, I still rather like the rules. I think that a conversation that goes too far away from them almost certainly turns unproductive. But such a conversation might have a longer-term good effect.
Suppose I were to say, with utmost civility, something that, to you, is an aggression or micro-aggression. And suppose that you chose to step well outside those rules of civility and call me on it: “That remark is sexist, racist, cruel, unkind, and I won’t have it, you sexist, racist, cruel, unkind #$%^33@!”
It might well end the conversation. But if I were to think about it, if I were to ask my friends about it, maybe … just maybe … I’d see the point, learn, and improve. So I accept that incivility can be a valid choice even to attain the same ends as a civil conversation might have.
And I accept that on a given day, #$@% it, someone might just be fed up and choose incivility because they feel like it, results be damned. It’s a choice. You pays your money, you takes your choice, you gets what you gets. I’m OK with that.
I see clearly that empathy and kindness are not listed in those rules, and freely state that they should be, should perhaps be paramount.
And yet … I feel that as a practice, something near the space of those rules is a good place to operate, and I am renewing my intention to operate at or near those rules. And to add in a good dose of sensitivity to feelings and emotions.
Summary
I am old enough, white enough, Jesuit-trained enough, to think that what we get depends on what we do. Not solely on what we do, but what we get is influenced by what we do.
Now what we feel, and what we think, influences what we do. Depending on your position and views on free will and autopoiesis and the like, you might apply some multiplier K between zero and one to the equation:
what i do = K*f(thoughts, feelings)
But I choose to think that my thoughts and feelings influence what I do, and that what I do influences my results. So I try to adjust my thoughts and feelings to increase the chance that I do the things that will move my results in the direction I would like them to go. Because I think that my actions influence my outcomes.
I call the things that I do “practices”, and as regards practices, yesterday and today cause me to want to think and feel so that:
- I continue to remain open to ideas like CFP, and to consider them as fairly as I can;
- I never find myself in a position to be required to use CFP, because at this point I consider CFP to be razor blades to babies;
- I continue to try to engage opponents of things like TDD, in as kind and rational a fashion as I can, so that I can better encompass what goes right and wrong;
- I continue to improve my own use of TDD and refactoring, because I enjoy them and they seem to help;
- I continue to try to empathize with, and take on board, the feelings and thoughts of others who are not quite the same as me;
- I continue to treat people with kindness, not just reason, to the very best of my ability.
And to the extent that I do those quite imperfectly, I want to think and feel so that I do those practices better than I do now. I feel sure that I’ll get better results when I do.
Thanks for reading, if you did. If not … I wonder what I could have done so that you would have.