Thoughts: XP Revisited
Started these notes in early September, before the New Framework article. Lightly revised, as this is now late September. The better-formed ideas are in New Framework…
I was thinking vaguely, in that early AM way that one has, about the XP practices, and what practices I might apply today instead. I had in mind the XP Practices as shown here, namely Whole Team, Planning Game, Small Releases, Customer Tests, Simple Design, Pair Programming, Test-Driven Development, Design Improvement, Continuous Integration, Collective Code Ownership, Coding Standard, Metaphor, and Sustainable Pace. What would I say – more accurately, no doubt, what would Chet and I say[1] – today, if we were to say something?
Once I sat down at my computer, I decided to start with a quick review of three of the XP books, Extreme Programming Explained (Beck), Extreme Programming Installed (Jeffries, Anderson, Hendrickson), and Extreme Programming Explained Second Edition (Beck, Andres). I scanned the tables of contents, opened at random locations, and took some notes.
In doing that, I was struck by the fact that there’s some great material in those books, which is worth reading again, because while it’s built into our thinking, it’s often not made explicit. There are rich elements, supporting ideas, lines of reasoning, that are worth thinking about. I recommend that you purchase several new copies of each of those books, especially ours. Place them conveniently around your home and office, and read from them occasionally, picking a passage that seems to you particularly apt.
While I’m thinking about books, don’t forget Nature, which is also pretty wonderful if I do say so myself. But I digress …
Right now, as I write this, I have a number of lists extracted from those XP books, down at the bottom of this article. I’m not sure if I’ll leave them there or not. If I don’t, I’ll try to remember to delete this paragraph. But right now, I think they’ll make for some interesting comparisons after I write this article and the several others it will probably take to sum up my thinking on the subject: Extreme Programming Revisited.
I was thinking about how I might express the practices of XP if I were to express them today. What would I say about Pair Programming, for example? I’d certainly want to include “mob programming” as a similar practice. I might generalize to something about “work together”. One concern that I’d want to address is whether that practice, or any of them, is “mandatory”. A common question we used to get was “Is it still XP …”, where what followed was something like “if we only pair program on hard stuff”, or “if we don’t have an on-site customer”.
On the one hand, I don’t see the point as “doing XP” versus not doing it. The point is to be effective, and maybe to work “in an Agile fashion”, whatever that might mean. I do believe strongly in the values and principles expressed in the Agile Manifesto, and so I’m always writing to be consistent with those v’s and p’s.[2] But back in our early days with XP, we were really pretty serious about the practices. For myself, I think I had seen the C3 team work so effectively within those practices, and I was afraid that if you strayed very far from them, things would go wrong.
Now, I have some reason to think that I wasn’t quite that cowardly, because when that hateful book Extreme Programming Refactored came out, it described the XP practices as a circle of snakes such that if you killed any snake, the whole thing fell apart, and I wasn’t all that worried by that argument. Be that as it may, I think most people new to a process are fearful to move much outside its bounds, and I certainly was.
I’ll proceed here – and I’m really winging this, stick with me – by starting with some of the practices and thinking about what I’d say today. That will take more than one article and quite possibly I’ll interleave Revisited topics from other angles as well. Where will we wind up? I have no idea, but I hope we’ll pass through some interesting areas along the way.
Today …
Small Releases
I think I might start my focus on the practice of Small Releases. This should be considered a “beginner” practice. I have in mind that every week, maybe two, we’ll have another release. Perhaps early on, we’ll just release to our internal stakeholders, but we are building a product that is fully integrated, fully tested, and ready to be delivered to the people who will use the software. We aren’t satisfied until every Small Release is actually delivered to our users.
Small Releases are small steps on the way to Continuous Delivery, a step beyond the old Continuous Integration.
Continuous Delivery
This practice means that the team always has the product fully integrated, tested, and ready to be delivered, and that the overall process isn’t satisfied unless it actually is delivered. This is Small Releases at the limit. The team always has a ready-to-go Increment, better today than yesterday.
The Increment
Kent Beck once said something to the effect that all software processes are based on fear, and while I’d like to hope I’ve got more than that going for me here, I freely grant that I believe this:
Many, perhaps most of the problems I’ve had with software development would have been lessened had my team always had a running tested version of the software ready to go.
Frequent readers will remember my focus on Scrum’s Increment as critical to success with Scrum. With real working software in hand, conversations between the team and the leadership become more concrete and focused. There’s a much better chance to turn the focus from doing it all to doing the few remaining things that keep people from being able to use the product. It’s not perfect, but we have a better chance.
Customer and Programmer Tests
If the product is capable of Continuous Delivery, then we need to be sure that it works. The practice of Customer Tests amounts to a series of examples for all the features we undertake to do. Typically, these examples can be automated so that we have an ongoing record confirming that the system does what it is expected to do. When a feature is first built, the Customer Tests serve to communicate between customer and developers, first, what is to be done, and second, confirmation that it has been done. Running these checks at every build thereafter gives us confidence that we haven’t broken things.
It’s probably worth mentioning that these tests are not sufficient to find all concerns, and that we surely need other ways, such as exploratory testing, to find out what else we should be thinking about. There’s surely a big area to explore, maybe called Rich Testing or the like. But, to my thinking, the Customer Tests practice is a powerful tool and is likely to be a valuable one in most situations.
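To make that a little more concrete, here is a minimal sketch of what I mean by an automated customer example. The feature, the shipping_cost function, and the numbers are all made up for illustration; the shape is the point: the customer supplies the examples, and they run as checks at every build.

```python
# A few customer-supplied examples, captured as an automated check.
# shipping_cost and the sample values are hypothetical, for illustration only.

def shipping_cost(weight_kg: float, expedited: bool) -> float:
    """Stand-in for the real feature under discussion."""
    base = 5.00 + 1.50 * weight_kg
    return round(base * 2 if expedited else base, 2)

# Each row is an example the customer wrote down: inputs and the expected answer.
CUSTOMER_EXAMPLES = [
    (1.0, False, 6.50),
    (2.0, False, 8.00),
    (2.0, True, 16.00),
]

def test_customer_examples():
    for weight_kg, expedited, expected in CUSTOMER_EXAMPLES:
        assert shipping_cost(weight_kg, expedited) == expected
```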
Programmer Tests fit into having a working Increment a bit differently. Led by the Test-Driven Development practice, Programmer Tests are automated checks that tell the programmers that their code does what it’s intended to do, in a way similar to Customer Tests telling the customer that the code does what was asked for. Programmers write many small modules of code that add up to the product, and it’s important that each module does what was intended. Since the product is growing over time, the modules are often changed and enhanced, and the Programmer Tests help us be sure that we’ve not broken something in the internals of the system.
The Programmer Tests serve to protect quality, as a second tier of checking. When Customer Tests do break, as they sometimes will, one or more Programmer Tests should probably also break[3]. When they do, they tend to point more precisely to the cause of the problem. When they do not break, we’ve discovered a place where our double layer of checks has a gap in it, and we should probably consider improving our tests.
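Again as a purely illustrative sketch, a Programmer Test for that same imaginary feature would pin down one internal piece, so that when something breaks, the failure points at the piece rather than at the feature as a whole. The base_rate helper here is hypothetical.

```python
# A finer-grained Programmer Test for one internal piece of the same imaginary feature.
# base_rate is a hypothetical internal helper; the narrower focus is the point,
# so a failure here points more precisely at what broke.

def base_rate(weight_kg: float) -> float:
    """Internal helper: cost before any expedited surcharge."""
    return 5.00 + 1.50 * weight_kg

def test_base_rate_scales_linearly_with_weight():
    assert base_rate(0.0) == 5.00
    assert base_rate(1.0) == 6.50
    assert base_rate(4.0) == 11.00
```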
I should mention the word “tests”. Michael Bolton has long objected to the word “tests” in these and other practices, on the sensible grounds that testing is a lot more than just automated checking. He has suggested the word “checks” for these automated, well, checks, and I am somewhat in agreement. So I might, if I were revisiting XP, call them Customer and Programmer Checks instead of tests.
Cadence
XP (and Scrum) are founded on the notion of iterations (or Sprints), fixed length time boxes within which a chunk of development is done. Their creators believed that a short fixed time box is an enabling constraint, in that teams learn to slice out just enough work to fit in the time box, and that they learn to ship a running increment that’s ready to go. I do agree with that, at least in part. The iteration time box often does enable teams to learn to build an Increment.
There are two issues that I’d like to touch on, however.
First, iteration time boxes don’t always work. Sometimes the team never seems to learn to get done inside the box, and this leads to all kinds of problems. The worst one, in my view, is that they never get an Increment, and that’s bad. Without the Increment, the whole process struggles.
It is commonly argued that a better approach, even for beginners, is to take just one thing from the list of things to do and do it to completion. Then take another, and so on. This scheme, when it works, does lead the team to produce an increment. What it may not do, however, is lead them to produce increments frequently. One loses the good aspects of the time box as well as the bad.
Second, however, in my strong opinion, iterations are not the optimal way for a well-functioning team to work. The reason is that slicing things to fit into a fixed timebox leaves gaps and possibilities of overruns. If you’re just learning, those gaps can be useful, and the overruns help you learn to choose how much work to do. But if you’re chugging right along, finishing small features and getting them ready to go every day or so, the iteration time box just gets in the way.
So I conclude that the iteration time box is sometimes not ideal for beginners, and is never ideal for experts (although it is quite close).
However, I would still strongly recommend Cadence. In most cultures and organizations, we have regular cadences, time-sequenced events and intervals. We have days, weeks, and months. We have daily checkins, weekly or monthly meetings, and so on. Things happen on quarterly and annual boundaries as well.
I think it is wise to make use of some of these cadences. I’d likely recommend Daily Standups for many teams, though teams with enough pairing or mobbing might not need them. I think that even for a team who are pulling stories one at a time, a regular planning meeting (à la Iteration Planning) to look out a few weeks, and perhaps another (à la Release Planning) to look out a few months, might be worth considering.
And I would very much recommend a Product Review at frequent intervals, every two weeks or at least monthly, where all stakeholders look at the product and provide feedback and guidance on next steps. Certainly, I’d recommend Retrospectives at regular intervals, though XP did not include them originally.
Summing all that up, I think I’d say – at least today – that time boxes are quite frequently useful and that consistent intervals should be considered for planning, making sure everyone’s on the same page, reviewing the product, and reviewing the process.
What’s next?
Some of the other XP Practices are still untouched. I’ll skip Metaphor, if you don’t mind, although I continue to think it’s a brilliant practice. I just don’t know how to explain or teach it. We also have:
Whole Team, Small Releases, Simple Design, Pair Programming, Design Improvement, Collective Code Ownership, Coding Standard and Sustainable Pace.
Extracts from the three XP books
Things I might write about if this series continues …
Values (XP Explained)
- Simplicity
- Communication
- Feedback
- Courage
- Respect (XP Explained 2)
Variables (XP Explained)
- Cost
- Time
- Quality
- Scope
External forces get to pick three; the team picks the fourth.
Roles
- Programmer
- Customer
- Tester
- Tracker
- Coach
- Consultant
- Big Boss (XP Explained), Manager (XP Installed)
Rights and Responsibilities (XP Installed)
Manager and Customer
- You have the right to an overall plan, to know what can be accomplished, when, and at what cost.
- You have the right to get the most value out of every programming week.
- You have the right to see progress in a running system, proven to work by passing repeatable tests that you specify.
- You have the right to change your mind, to substitute functionality, and to change priorities without paying exorbitant costs.
- You have the right to be informed of schedule changes, in time to choose how to reduce scope to restore the original date. You can cancel at any time and be left with a useful working system reflecting investment to date.
Programmer
- You have the right to know what is needed, with clear declarations of priority.
- You have the right to produce quality work at all times.
- You have the right to ask for and receive help from peers, superiors, and customers.
- You have the right to make and update your own estimates.
- You have the right to accept your responsibilities rather than having them assigned to you.
Circle of Life (XP Installed)
- Customer defines
- Programmer builds
- Customer defines value
- Programmer estimates cost
- Customer chooses value considering cost
- Programmer builds value
Define, estimate, choose, build, learn
Practices (XP Explained)
- Planning Game
- Small Releases
- Metaphor
- Simple Design
- Testing
- Refactoring
- Pair Programming
- Collective Ownership
- Continuous Integration
- 40-Hour Week
- On-Site Customer
- Coding Standard
Practices (XP Explained 2)
- Sit Together
- Whole Team
- Informative Workspace
- Energized Work
- Pair Programming
- Stories
- Weekly Cycle
- Quarterly Cycle
- Slack
- Ten-Minute Build
- Continuous Integration
- Test-First Programming
- Incremental Design
Corollary Practices (XP Explained 2)
- Real Customer Involvement
- Incremental Deployment
- Team Continuity
- Shrinking Teams
- Root-Cause Analysis
- Shared Code
- Code and Tests
- Single Code Base
- Daily Deployment
- Negotiated Scope Contract
- Pay-Per-Use
Chapters (XP Installed)
- On-Site Customer
- User Stories
- Acceptance Tests
- Story Estimation
- Small Releases
- Customer Defines Release
- Iteration Planning
- Quick Design Session
- Programming
- Collective Code Ownership
- Simple Design
- Refactoring
- Continuous Integration
- Coding Standard
- Forty Hour Week (Sustainable Pace)
- Pair Programming
- Unit Tests
- Test-First, by Intention
- Releasing Changes
Simple Design (XP Explained p 57)
- Runs all the tests.
- Has no duplicated logic.
- States every intention important to the programmers.
- Has the fewest possible classes and methods.
What is simplest? (XP Explained p 109)
Have the simplest design that runs all the tests. What is simplest?
- The system must communicate everything you want to communicate.
- The system must contain no duplicate code (1+2 = Once and Only Once)
- The system should have the fewest possible classes.
- The system should have the fewest possible methods.
Code Quality (XP Installed p 83)
- Run all the tests
- Express every idea
- Say everything once and only once
- Minimize number of classes and methods
[1] Chet Hendrickson and I hang out, every chance we get, at the “BAR”, Brighton Agile Roundtable, more popularly known as the Brighton Michigan Barnes & Noble coffee shop. We talk about things. Mostly we have no clear understanding of whose idea anything is any more. Collective Idea Ownership, I guess.

[2] By my life, this is my lady’s hand: these be her very C’s, her U’s, and her T’s; and thus makes she her great P’s. (Malvolio, Twelfth Night, Wm. Shakespeare)

[3] There are two major kinds of errors to consider here. Some Customer Test has broken. That customer feature calls many internal functions. Since the customer feature is broken, either one of the internal functions is broken, or the customer feature has called an internal function incorrectly. I’ve found that in a well-factored system, the more common case is that one or more Programmer Tests fail whenever a Customer Test fails. Why? My guess is that since we presumably weren’t working on that feature, we’ve made an internal change that breaks something. If no Programmer Tests break, we should think about whether we’re missing some.