Yesterday, I received a very interesting letter from Mr Stuart Kretch, asking me some questions about the role of science and mathematics in Agile Software Development.

I quote his email here with his permission:

I’ve been involved in planning and delivering software consulting projects since the 90s. Software and project plans are mathematical objects and ought to be amenable to more formal analyses. This leads to more agile plans. This does not mean that human factors are unimportant, but rather that there are fundamental limitations inherent in the implementation process. By carefully separating them, we gain better control over work.

I have a couple of general questions that I’d like to pose:

  1. Do you know of any references of work involving the formal theoretical foundations for Agile approaches that you could share?
  2. Do you have any thoughts on what the usefulness of such approaches might be? I have no illusions that there are silver bullets, but I’d be extremely interested in your thoughts. The reason that I ask is that there seems to be a long history of software implementation being problematic.

Anything - even just observations - that you can suggest would be very much appreciated. Thanks very much.

I replied with remarks that seem to me to fit my current thinking, but that rather surprised even me as I wrote them. I’ll quote myself here, and then sometimes elaborate.

I emphasize here that these are my thoughts based on the questions Stuart asked. I’m not trying to put words into his mouth, nor to judge him. These are just the things that come to my mind.

Perhaps what you mean is something like “it is possible to model software, and project plans, using mathematical modeling”. It seems clear to me that Java is not mathematics, and neither is a random demand from a manager to “build it all by November”.

I’m thinking here that software is in fact not a mathematical object, but one that can be somewhat modeled using mathematics. Similarly, a plan is not a mathematical object but again might be somewhat usefully modeled. I think the distinction is important in that it means (to me) that mathematics cannot capture all the details we might want to capture in our plans or in our code.

And we certainly cannot anticipate the random demand of an impatient manager, or the sudden opportunity offered by a large prospective client, or the change in our ability to build software caused by someone moving to another company. We might be able to add terms to our model to adjust when these things happen, but overall the model’s predictive capacity is limited by the things we don’t know at the time we use it to predict.

I would go on to add that almost all interesting software is incredibly difficult to model mathematically, and that the mathematics is at least as hard as writing the software. Furthermore, the math is not amenable to testing even to the extent that software might be.

An observation we made during the Chrysler Payroll project comes to mind here. We were looking at the rules for some of the more complicated aspects of a real payroll. For example, there were a handful of employees whose union dues were different from everyone else’s. We finally found out that long ago, a small group of employees threatened to go on strike if their union dues were raised, so rather than fight them, the union granted them different dues from everyone else. Since the payroll program collects union dues, this random decision had to be programmed into the system and carried forward for years.

Now it’s certainly true that we can write math to describe these rules, just as we wrote code. But that’s not at all in the spirit of what we expect a mathematical model to be. We expect our model to be simpler than the reality, and more visibly “correct”. I can test the Smalltalk code that implements that union dues rule. All I can do with the mathematical version is subject it to inspection and convince myself that it’s probably correct.
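To make the testability point concrete, here’s a minimal sketch of a rule like the union-dues one, in Python rather than Smalltalk. Every name, ID, and dollar amount is invented for illustration; only the shape of the rule comes from the story above.

```python
# Hypothetical sketch of a "grandfathered dues" rule: a small group of
# employees keeps an older rate after a long-ago dispute. All IDs and
# amounts here are invented for illustration.

STANDARD_DUES = 25.00         # assumed current per-pay-period dues
GRANDFATHERED_DUES = 18.50    # assumed rate frozen after the old dispute
GRANDFATHERED_IDS = {"E1041", "E1042", "E1057"}  # the special handful

def union_dues(employee_id: str) -> float:
    """Return the dues to deduct for this employee."""
    if employee_id in GRANDFATHERED_IDS:
        return GRANDFATHERED_DUES
    return STANDARD_DUES

# The point: the rule is directly, mechanically testable.
assert union_dues("E1041") == 18.50
assert union_dues("E9999") == 25.00
```

Those two assertions are the whole argument in miniature: the code version of the rule can be checked by running it, while the mathematical version can only be inspected.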

As for projects and their plans, I’d argue that a mathematical model of a software project would be about as accurate as a weather model, would require about the same amount of processing and data collection, and, like weather modeling, would still be essentially inaccurate beyond about three days into the future.

Weather prediction is a marvelous example of complexity in action. We’ve all heard of the “butterfly effect”, the notion that a butterfly flapping its wings in Brazil might cause a tornado in Nebraska a few weeks later. In our own lives, we know that weather predictions are inaccurate even a few days out, sometimes even a few hours out, even though there are more sensors, more reporting stations, and more computers dedicated to weather forecasting than to nearly any other enterprise.

We can model the statistical distribution of defects somewhat accurately. The models come down to “There are a lot of known defects in this module, and few in that one. Probably we’ll find more new ones in this one and fewer in that one.” Whoopee. That tells us next to nothing about where the next defect will be, nor what to do about it.

The same with plans. We might be able to create a half-decent model that would tell us that we’ll probably have 80% of the features done by September. To model whether Feature 123 will be done? Not gonna happen. But a good Agile team could do Feature 123 next, if you’re that excited about it. We don’t predict what’s going to happen: we choose what’s going to happen next.
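A toy Monte Carlo sketch can make that distinction concrete. Everything here is invented for illustration (the feature count, the durations, the capacity): the aggregate “fraction done by the deadline” comes out in a narrow, usable band, while whether any one feature makes the cut stays genuinely uncertain.

```python
import random

# Toy Monte Carlo sketch: aggregate progress is fairly predictable,
# but whether any particular feature makes the cut is not.
# All numbers (feature count, durations, capacity) are invented.
random.seed(1)
N_FEATURES = 40
CAPACITY = 30.0  # assumed team-weeks available before the deadline

def simulate():
    """One simulated release: which features are done at the deadline?"""
    durations = [random.uniform(0.5, 1.5) for _ in range(N_FEATURES)]
    order = random.sample(range(N_FEATURES), N_FEATURES)  # random build order
    finished, spent = set(), 0.0
    for i in order:
        if spent + durations[i] <= CAPACITY:
            spent += durations[i]
            finished.add(i)
    return [i in finished for i in range(N_FEATURES)]

runs = [simulate() for _ in range(2000)]
fractions = [sum(r) / N_FEATURES for r in runs]
avg = sum(fractions) / len(fractions)              # stable, roughly 3/4
p_feature_7 = sum(r[7] for r in runs) / len(runs)  # uncertain per feature
```

The aggregate estimate (“about three quarters of the features by the deadline”) is usable; the per-feature number is just a probability, which is exactly why a team chooses Feature 123 next rather than trying to predict it.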

If either of these entities were in fact capable of being usefully modeled, by now we’d have seen it happen, frequently. The reality is a lot closer to “it has never happened”.

As I read this now, it may seem flip. But I’m serious: lots of effort has been burned on turning software development into mathematics, or even into engineering. Some useful work has arisen, sure. But by and large, software product development is a complex dance of human discovery, and as such, almost all the interesting concerns will not appear in our models.

We could probably create an interesting descriptive paper by analyzing some large number of projects and fitting curves and models to the data. In fact, people have already done that. What would we learn? In my view we’d learn little that was useful for predicting details about our next project, though we would probably learn some things not to do, like “Don’t put testing off till the end.” However, we already knew that and didn’t need a bunch of math to tell us.

Have you looked at [Dave Snowden’s] Cynefin model?1 I think it is quite applicable to software development as well as to many other things humans try to do. Software development mostly takes place in the Cynefin Complex domain, and rarely in the Chaotic domain (not the mathematical complex domain, of course).

In my view, most of software development takes place in Snowden’s Complex and Chaotic domains. In the article pointed to above, Snowden asks us to move from Complex to Complicated as often as we can, because in Complicated, we can set up constraints that keep things more or less on track.

In that same article, Snowden shows his purple curve, where we are grazing the surface of the Chaotic domain. He describes stability as transitory. We dip into chaos and then need to quickly experiment and move back toward at least the relative stability of the Complex domain.

Snowden has done some marvelous work and people using his approach have produced some amazing results. Read his work in detail for more. I am, however, not as optimistic as he is.

My view is that absent a clear dedication from all stakeholders to working some kind of model like Snowden’s, we are always cycling between what he calls Complex and Chaotic. It’s like the weather: there’s just too much going on to allow a decent use of mathematics or other models to predict the future.

In these domains, long- or even medium-range planning simply does not work. It literally cannot work. Think in terms of predicting the position of a double pendulum. It’s not merely difficult: it is in every practical sense impossible. And yet the double pendulum is far simpler than the simplest software project one can imagine.
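That sensitivity is easy to demonstrate with an even simpler chaotic system, the logistic map. This sketch is a stand-in for the double pendulum, not a simulation of it: two trajectories start one part per billion apart and soon disagree completely.

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x' = 4 * x * (1 - x), a standard chaotic system.
# (A stand-in for the double pendulum, which behaves the same way.)

def trajectory(x0: float, steps: int) -> list:
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.2, 50)
b = trajectory(0.2 + 1e-9, 50)   # a one-part-per-billion nudge

# The tiny initial difference is amplified until the two runs
# are no longer meaningfully related.
divergence = max(abs(x - y) for x, y in zip(a, b))
assert divergence > 0.1
```

Fifty steps is all it takes; no amount of extra measurement precision changes the outcome, it only delays the divergence by a few steps.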

I’ve been reading a lot about complexity, in fact and fiction, and talking with wise and intelligent friends about it. What comes out for me is that all this apparent order we see is almost accidental. We’re sort of dancing among all kinds of forces, some that we can see and some that we cannot even see. Surprisingly often, everything is stable enough that we manage to stand on our feet, cross the street without getting run over, and even show up at the office and do what is accepted as productive work.

Even so, a butterfly flapping its wings in Brazil can cause our project to go down the tubes, leaving us dazed and confused.2

We cannot predict the future. We can, however, create it.

Agile done well is exactly [not about] planning. Agile is about steering. It is not about “controlling” work, it is about choosing work in small slices, doing the work, and seeing what happens.

Here’s my real point. (Talk about burying the lede!) Even if mathematically modeling software and plans were possible, which it isn’t, it would be the wrong thing to do. Agile is about steering, about choosing, not about predicting. The better such a model might be, the more we’d be tempted to use it to do the wrong thing.

I would say that Agile approaches are founded in real observation. [Not in theoretical foundations.] Few if any of the Manifesto authors are theoreticians: they are practical real-world software developers.

Agile thinking, by and large, is relentlessly pragmatic. “Inspect and adapt” while working in very short cycles, delivering concrete working software.

The main theory underlying why we say what we say is complex systems theory, and, more recently, the aforementioned Cynefin.

Some authors, myself included, have tried to explain Agile ideas in terms of these theories. I’m sure that the ones who are smarter than I am had some of those theories in their heads as they worked and wrote. But even then, I’m confident that what was in the heads of the Manifesto authors was relentlessly pragmatic. Like it says in the Manifesto: “We are uncovering better ways of developing software by doing it and helping others do it.”

Software implementation is problematic not because of a lack of theory, but because it is inherently a process of learning, discovery, and complex interaction among people. Software, like science itself, can be pursued in an orderly, sensible way, but we cannot fully model it, because it is fundamentally a process of creation.

I agree entirely with Stuart when he says that software implementation is problematic. Hell, it’s worse than that in many places; cf. Dark Scrum. And certainly we can gain insights about it with math and modeling. Many (I hope most) of my comments above could be modeled and verified with math.

My point is that the math will show us that most of what we do is aided by trying to be orderly and sensible, but that in the end it will come down to our adaptability and our creativity, not to better models.

I think we’ll let this be enough for today. Stuart is going to write a response to my email, and I’ll respond to that here in a day or so. Thanks for listening, and thanks to Stuart for the priming questions!


  1. I don’t claim to understand Snowden’s Cynefin model well, perhaps not at all. What I’ll write here is what I think about when I read his writings or watch his videos. Thus far, I’ve not had the privilege of sitting down with him and learning. 

  2. Here’s one actual case, names changed to protect the innocent: Butterfly flaps wings. Bird sees butterfly and eats it. Bird, now out of position from where she was, builds her nest in a different tree. Her offspring thrive and their excretions fertilize a banana plant. In that plant, a deadly spider builds her nest. Bananas are harvested, and shipped to your town. Your colleague Sam’s wife, Sam, goes to the store and buys the bananas. A spider hatches in your kitchen, and Sam’s cat, Persimmon, sees it and chases it about. Persimmon knocks over Sam’s half-empty beer glass that Sam has always told Sam he should rinse and put away if he’s not going to finish it. Persimmon tracks beer all over the floor, and Sam, to preserve domestic tranquility, stays home from work for an hour, to clean things up. Knowing Sam will be in soon, you go ahead without a pair to fix just a few little things. It’ll be OK. When Sam comes in, you show him what you’ve done and he glances at it and nods. Neither of you notices the spider on the back of Sam’s shirt. At a critical moment, the spider drops in front of you and you have no choice but to destroy your cubicle with fire, losing the morning’s work. This requires that you both go out to Staples at lunch time to buy a new desk, and while you’re out you notice a “Help Wanted” sign at Staples. You put in your application and they hire you as Regional Vice President. You leave the project and that bug you fixed never gets committed to Git. An analyst for the VC who’s going to do the project’s second-tier funding sees the bug, thinks it’s horrible, and the organization doesn’t get the money it anticipated. Damn that butterfly!