The XP 2002 Conference

This year the XP conference was again in Sardinia, this time in the town of Alghero.

At the end of May 2002, the XP community once again descended on the Mediterranean island of Sardinia. In this article I look at the plenary speeches from Ken Schwaber, David Parnas, Enrico Zaninotto, Bill Wake, and the Standish Group's Jim Johnson. They lead me into some thoughts on the essence of agile development, the role of mathematical specifications, the complexity of irreversibility, metaphor, and the best way to drastically cut software costs.

02 July 2002

Remember the Spirit

The conference proper began with a talk from Ken Schwaber. Ken opened with the theme that agile development is not just a repackaging of a bunch of old techniques, but something new and revolutionary. Articles that saw agile as a variation on an old theme, such as Barry Boehm's recent IEEE Computer article, just didn't get what made agile special.

Schwaber characterized Scrum as a product development methodology that went beyond just software. He said that XP provided engineering practices that were missing in so many IT organizations, and the resulting blend of XP and Scrum was the most pure of the agile processes.

In his experience there were two kinds of projects interested in agile: those that were in trouble and those that were curious. He found the former better because they would follow agile advice, while the latter would mostly say "but" to various ideas. He criticized the common approach of taking just some ideas from agile methods, saying it was like taking a fine spring out of a well-crafted Swiss watch.

Schwaber suggested several markers for true agility:

  • Manager feels it's okay to proceed without all the requirements
  • Business people are excited about delivering software
  • Engineers are heard when they have problems
  • A sense of buzz in the building
  • The team makes the decision of how to do the work
  • People don't talk about the usual roles

He said that agile is a power shift, one that returns IT to the business. In business terms ROI (return on investment) is the only real measure of success. The great danger to agile was that people focused too much on the practices and forgot the bigger picture, the revolutionary aspect of agile development.

The interplay between practices and philosophy is one that's dogged the agile and XP communities for a while. I think this is particularly the case with XP, since it contains both some important philosophical approaches to the way software is developed and some very concrete practices for doing it. This raises a real question: what is the essence of XP - the practices or the underlying philosophy? The question is particularly noticeable because, as you get more experienced in XP, actually following the practices becomes less important. Yet at the same time you have to do the practices to get any sense of what XP is really about (a conundrum I explored in Variations on a Theme of XP).

In many ways I think the "agile" label helps us think about the separation more easily. The real value of the get-together at Snowbird was that those there found so much agreement on philosophy even though there were differences in the practices. That's why the values in the manifesto speak so well. I agree with Ken that the shift to Agile in its essential elements (predictive to adaptive, process-oriented to people-oriented) is more of a discontinuity than a progression, although it can be hard to see the boundary.

But I'm not too concerned about focusing on practices - indeed I think it's essential. Good development practices are vital to fulfilling the promise of agile development. XP's offerings here, such as its techniques for evolutionary software design, are vital if we are to maintain flexibility in the future. Without all the parts, there isn't a Swiss watch to admire.

Produce Good Documentation

Agile conferences have never been shy to invite those who many would consider to be opponents of the agile cause, such as Mark Paulk of the CMM giving a talk at XP Universe last year. This year XP 2002 welcomed David Parnas, who took a more critical stand against XP.

He said that a big problem with XP was that it focused on problems that have plagued software development for forty years. XP's focus on code had its benefits, but failed to address other important software development problems. Software's biggest problems were the poor response to unexpected events and poor definition and control of interfaces.

A focus of his criticism was XP's approach to design, which he said would lead to delayed problems. This is an issue of economics, saving money now at the expense of costs later on. It was good to focus on simplicity, but there's a difference between short-sighted simplicity and visionary simplicity. True long-term simplicity requires planning, planning requires reviews, and reviews require documentation because feedback from code comes too late.

But he wasn't advocating the kind of documentation that most projects produce. He considered most project documentation to be worse than useless and panned the "Undefined Modeling Language". XP's test cases are not enough because they cannot be complete. He favored a well-defined mathematical approach, but said the formal methods community isn't on the right path: these mathematical specifications must be easier to read than code, which can't be said of common formal methods such as Z or VDM.

He said such a mathematical approach needs to be based on functions and relations: identify the variables that need to be monitored and controlled, and form derivations of the controlled variables from the monitored variables. These derivations can then be placed in tabular form that domain experts can read, with one table per controlled variable. He cited an avionics project where pilots found several hundred errors in such a specification. Naturally time didn't allow him to explain the system in depth.
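The scheme can be sketched in code. This is only a loose illustration of the idea as I understood it, not Parnas's actual tabular notation: each controlled variable gets a table of condition/result rows over the monitored variables, and a well-formed table is complete and unambiguous - exactly one row applies in any state. The names and the avionics-flavored example here are made up.

```python
# A rough sketch of a tabular specification: the value of one
# controlled variable is derived from monitored variables via a
# table of (condition, result) rows. Illustrative names only.

def make_table(rows):
    """rows: list of (condition, result) pairs, where condition is a
    predicate over the monitored variables."""
    def lookup(**monitored):
        matches = [result for cond, result in rows if cond(**monitored)]
        # A well-formed table is complete and unambiguous: exactly one
        # row should apply for any state of the monitored variables.
        assert len(matches) == 1, f"table is not well-formed for {monitored}"
        return matches[0]
    return lookup

# One table per controlled variable - here a hypothetical warning
# light driven by altitude and landing-gear state.
warning_light = make_table([
    (lambda altitude, gear_down: altitude < 500 and not gear_down, "ON"),
    (lambda altitude, gear_down: altitude >= 500 or gear_down,     "OFF"),
])

print(warning_light(altitude=300, gear_down=False))  # ON
print(warning_light(altitude=300, gear_down=True))   # OFF
```

The point of the tabular form is that a domain expert can review each row against their own knowledge, and the completeness check catches gaps in the specification before any code exists.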

The debate about XP's approach to design is a heated one, and one that I used as the theme for my talk at XP 2000. The idea of a rigorous yet readable specification was a theme through much of my early career in software. My first paper at a software conference was about how a graphical design technique could be used in a mathematically rigorous way, which was nudging in the same direction.

My view, and I think Parnas would agree with me on this, is that the value of such a formal specification lies in its ability to surface errors so that they can be fixed more effectively than with other techniques. If your development approach is planned design, where you expect the design to be mostly complete before coding begins, this is essential, since otherwise errors in the specification can't be found until much later in the process.

XP attacks this problem from several directions. Its use of short iterations means you get rapid feedback from code, since users start getting a system they can work with in a few weeks. Its use of refactoring, supported by aggressive testing and Continuous Integration, allows short-sighted simplifications to be fixed at a lower cost than would otherwise be true.

The crux of the question is whether a mathematical specification of this form is more cost-effective than XP's approach, and under what conditions. I've never believed that a particular approach is necessarily appropriate for all the different kinds of software development, so what works best for avionics may not be the right route for enterprise applications. My experiments with more formal approaches in the enterprise world led me to believe that domain experts did not work with them well enough to find errors as effectively as they did with iterative delivery. I was also surprised by how effective evolutionary design was in the context of XP, although I still favor a smidgen of planned design. Having said all that, I haven't tried Parnas's particular techniques. I hope some XP people give them a try when working with customers and see whether they are worth the cost. Although I think Parnas views these techniques as ones to be used in a predictive setting, I see no reason why they couldn't be used in an iterative environment.

Using Practices Discarded Twenty Years Ago

If David Parnas is an outsider to the XP community, Enrico Zaninotto is an outsider to the entire world of software development, being a professor of economics. His interest was in comparing agile approaches to recent developments in manufacturing, triggered by seeing a software development project use techniques that had been discarded by leading manufacturers twenty years earlier.

Zaninotto outlined four main drivers that added complexity to a manufacturing or software development effort.

  • the number of states that the system can get into
  • the interdependencies between the various parts
  • the irreversibility of the various actions you can take
  • the uncertainties in the environment

He described the Taylor and Ford models as handling complexity by focusing on the number of states the system could get into. They tackled this with division of labor, standardized parts, and an assembly line that fixed the product that could be built, thus limiting the number of states. Zaninotto views the waterfall view of process as following this model. The limitation of the Taylor/Ford model appeared when you ran into unforeseen consequences: the approach optimizes the situation when you know what's going to happen, but is vulnerable to uncertainties.

The paradigm shift for manufacturing is the Toyota system. Of the four complexity drivers, he said the Toyota system attacks irreversibility, allowing decisions to be changed. While Taylor/Ford controls the flows of information, embedding it into the system, the Toyota system pushes decision making to the point of action.

The biggest difficulty with flexible systems like the Toyota system is that they may not converge. To get things to converge the Toyota system keeps the variations within bounds and uses a high level of quality control. Zaninotto saw this issue of convergence as the limiting point of XP. The true bounds of XP are the bounds of where convergence still occurs.

I wasn't at my best for Zaninotto's talk: the conference dinner featured a really good band and lots of Mirto, which led to a late and exhausting evening. And 8.30 is never easy for me at the best of times. But I was fascinated by the talk. Speaking in a non-native language, Zaninotto read his text, but in many ways the simplicity of the delivery amplified the quality of the content. The links between the Toyota system and agile processes are territory that Mary Poppendieck has already mapped out, which led me to explore it further, but Zaninotto added new ideas to what I'd already learned. (This is one of the few cases where it's really worth reading the full text.)

The issue of reversibility, and the notion that irreversibility is a cause of complexity, was one point that particularly tickled me. One of the interesting things about DSDM is that one of its nine core principles is that "All changes during development are reversible". Putting reversibility right at the top of a methodology like that is unusual. One argument is that reversibility is a necessary protection with incremental change: it means the worst that can happen is a temporary problem and a reversion to a previous state. Of course those temporary problems could be quite serious, as with any software defect, but once the failure is spotted you can always recover by reverting, even if you can't find the underlying defect immediately.
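This reversion safety-net can be sketched as a simple pattern (a hypothetical Python illustration of the principle, not anything from DSDM itself): snapshot the state before a change, and if a check detects a failure afterwards, restore the snapshot - even though the underlying defect hasn't yet been diagnosed.

```python
import copy

def apply_reversibly(state, change, check):
    """Apply a change to a mutable state dict; if the post-change
    check fails, revert to the snapshot taken beforehand.
    All names here are illustrative."""
    snapshot = copy.deepcopy(state)
    change(state)
    if check(state):
        return True
    # We may not know the underlying defect yet, but we can still
    # get back to a known-good state by reverting.
    state.clear()
    state.update(snapshot)
    return False

config = {"timeout": 30}
ok = apply_reversibly(config,
                      change=lambda s: s.update(timeout=-1),  # a bad change
                      check=lambda s: s["timeout"] > 0)
print(ok, config)  # False {'timeout': 30}
```

The same shape shows up at every scale: an undo stack, a database transaction, or reverting a bad check-in from version control.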

When I read the recent FDD book, I noticed that in their list of core practices they included configuration management. Configuration management was left out of XP, not because it isn't necessary, but because XPers assume configuration management is part of basic competence. Yet it's not uncommon to run into people who don't seem to understand how to use configuration management properly.

Another element that struck me was the interrelationship between reversibility and iterative development. If you make a wrong decision and detect it rapidly, then you can use some form of reversibility to undo your mistake, or you can refactor before the mistake gets out of hand. But if you can't detect it quickly, then it gets embedded in the system and is difficult to reverse. The iterative cycle gives you the ability to spot errors earlier, before they are hard to reverse, and refactoring further reduces the cost of removing the error. The net result, that you aren't tied into getting things right the first time, removes a major source of complexity. Much of the thinking of rigorous process is about coping with irreversible processes, and techniques aimed at coping with irreversibility become excess baggage once that complexity is removed.

Metaphors for XP Practices

Bill Wake was the only established XP leader on the plenary speaking program this year, and his contribution was lighter in tone than many of the other plenary speeches. His theme was metaphor - not the XP practice of metaphor, but metaphors for a variety of XP practices. Here's a rundown of those I noted.

  • Test first is like math problems where they only tell you the even answers - the point isn't the answer, it's the doing
  • Spike is like carpentry where you build one just to see how it works. Test first also plays here as you figure out how things should fit before you make the item.
  • Collective Code Ownership is like the situation room in films and TV shows like West Wing. The point is that everyone has access to all the information and is therefore able to contribute to the problem solving
  • Pair Programming is like four handed piano music, or perhaps like tag team wrestling
  • Iterations are like the escapement on a clock
  • Learning XP is like learning a foreign language
  • Continuous Integration is like balancing a check book. If you only do it every year it's a pain, but if you do it every week it's easy.
  • Going home clean (checked in with a green bar) is like day trading where you leave your portfolios balanced at the end of the day.
  • Stand up meetings are like the starter for a race; after that everyone is running in the same direction
  • Iteration retrospectives are like a sports team watching a video of their last game

Metaphor is a controversial subject in XP. I've found that when a metaphor works well it can be incredibly satisfying, but coming up with one that works well is very hard. Making this kind of metaphoric thinking a mandatory practice is limiting, because it asks you to do something hard when simpler things often do enough of the job. As a result I don't worry too much about metaphor when working with XP, yet I don't let that stop me using a good one if it comes up.

Like most metaphors, I found a few of Bill's interesting but most just didn't work for me. Continuous Integration == balancing a check book worked pretty well, and I would extend it by pointing out that balancing a check book is even easier when you have automated support (using Quicken has been very helpful). I also like the situation room / collective code ownership analogy - although I've never watched West Wing.

But I've never much liked four-hand piano music - and as for tag team wrestling? At least I can play with the mental image of Ron "The Hammer" Jeffries and Ann "Wild Woman" Anderson versus Kent "Alpha Chimp" Beck and Robert "Black Hole" Martin. (I'd just want to make sure that Rob Mee is my tag partner.)

Build only the Features You Need

Jim Johnson is the chairman of the Standish Group and as such there's little surprise that much of his talk included the statistics that the Standish Group have been tracking for the last few years. The oft-quoted headline is that only 28% of IT projects actually succeed fully: 49% are "challenged" and 23% fail.

Of course this kind of statistic rests very heavily on the definition of success, and it's significant that the Standish Chaos report defines success as on-time, on-budget and with most of the expected features. Like most of the agile community I question this. To me a project that is late and over-budget is a failure of the estimate. A project can be late, way over budget and yet still a big success - as Windows 95 was for Microsoft. Project success is more about whether the software delivers value that's greater than the cost of the resources put into it - but that's very tricky to measure.

Another interesting statistic Jim quoted was the large proportion of features that aren't used in a software product. He cited two studies: a DuPont study found that only 25% of a system's features were really needed, while a Standish study found that 45% of features were never used and only 20% were used often or always.

This certainly fits the anecdotal evidence of many agilists, who feel that traditional approaches to requirements gathering end up collecting far more requirements than are really needed for a usable release. I think it also suggests that most projects should be about a quarter of the size they currently are. If you also factor in the diseconomies of scale for software, this argues that many teams should be a lot smaller too.

This point really came out in Johnson's comparison of two SACWIS systems. SACWIS is a child welfare system that all the states in the US must implement. He stated that Florida began its SACWIS system in 1990 with an original cost estimate of $32 million, for delivery in 1998 with a development team of around 100 people. When they last looked, the cost had grown to $170 million with shipment due in 2005. He pointed out that the state of Minnesota had to build the same capability with pretty much the same demographic issues as Florida - leading to very much the same broad system requirements. Minnesota started work in 1999 and finished in 2000, with eight people costing $1.1 million.

The comparison is a startling illustration of Chet Hendrickson's statement that "inside every large system there's a small system trying to get out". I can't claim to really judge the details of what happened to make these two systems so different in cost and effort. The Standish Group attributed the difference primarily to Minnesota being more successful at minimizing requirements, and also to its use of a standard infrastructure. If nothing else it's very suggestive of how much impact you can get by paying attention to boiling down requirements to their essence. The agile contention, of course, is that short value-based iterations are more likely to minimize requirements than an up-front requirements gathering exercise. It'll take a while before we get any real data on that.

Whether you buy this agile contention or not, I think the SACWIS story suggests that the software industry faces a very real challenge in understanding how little to build and still provide value. It suggests that just by building only the features we need, we can drastically cut the costs of a software system. My view is that up-front requirements processes work against you here because too often people look at requirements without costs. Requirements books I've examined hardly ever mention costs, or the importance of understanding them as part of gathering requirements. In my view a requirements document without costs is almost a guarantee of over-building, particularly if it's tied to a fixed-scope/fixed-price contract that cements the unnecessary expenses into place.

You can find many of the slides from the talks I mention here.

Significant Revisions

02 July 2002: First publication