There is an intriguing question that pops up frequently in organizations developing software in projects: when is a project successful? For sure, one of the most (mis)used resources on the subject is the Standish Group. In their regularly updated CHAOS Report they define a project as successful if it delivers on time, on budget, and with all planned features. For a number of reasons this is, in my opinion, a rather vague definition.
First of all, how do you measure whether a project has finished on budget? You would need to compare the actual costs to the originally planned budget. This planned budget is of course based on some estimate. At project start. An estimate at project start isn’t called an estimate for nothing. It’s not a calculation. At best, it’s an educated guess of the size of the project, and of the speed at which the project travels.
Estimation bias
As we know from experience, estimates at project start are often incorrect, or at least highly biased. For instance, what if the people who create the estimate will not be doing the actual work in the project? Is that fair? Also, we often think we know the technical complexity of the technology and application landscape. In reality, it usually turns out to be much more complex during the execution of the project than we originally thought.
Second, when is a project on time? Again, this part of the definition depends on a correct estimate of how long it will take to realize the software. At project start. Even when a project has a fixed deadline, you could well debate the value of such an on-time delivery comparison. How do we know that the time available to the project presents us with a realistic schedule? Once again, it boils down to how much software we need to produce, and how fast we can do it.
All planned features
But the biggest issue I have with the Standish Group definition is the all planned features part of it. The big assumption here is that we know all the planned features up-front. Otherwise, there is nothing to compare the actually delivered features to. Much research has been done on changes to requirements during projects. Most of it shows that, on average, requirements change between twenty and twenty-five percent during a project, independent of whether the project is traditional or agile. Leaving aside the accuracy of this research, these are percentages to take into account. And we do. In agile projects we allow requirements to change, basically because changes in requirements are based on new and improved insights, and will enhance the usefulness of the software. In short, we consider changes to requirements to increase the value of the delivered software.
So much for software development project success rates. Back to reality. In 2003 the company I worked for engaged in an interesting project with our client. The project set out to unify a large number of small systems, written in now-exotic environments such as Microsoft Access, traditional ASP, Microsoft Excel, SQL Windows and PowerBuilder, into one stable back-end system. This new back-end system was then going to support the front-end software used in each of their five thousand shops.
Preparation sprints
As usual, we started our agile project with a preliminary preparation sprint, during which we worked on backlog items such as investigating the goals of the project and the business processes to support. Using these business processes, we modeled the requirements for the project in (smart) use cases. We outlined a baseline software architecture and came up with a plan for the project. The scope and estimates for the project were based on the modeled smart use cases.
Due to the high relevance of this project for the organization, all twenty-two departments had high expectations of it, and were considered stakeholders on the project. Given the very mixed interests of the different departments, we decided to talk to the stakeholders directly, instead of appointing a single product owner. We figured that the organization would never be able to appoint a single representative anyway. During the preparation sprint, we modeled eighty-five smart use cases in a short series of workshops with all stakeholders present.
The outcome of the workshops looked very promising and, adhering to the customer collaboration statement in the Agile Manifesto, the client’s project manager, their software architect and I jointly wrote the plan for the project. We included the smart use case model, created an estimate based on the model, listed the team, and planned a series of sixteen two-week sprints to implement the smart use cases on the backlog. Of course we did not fix the scope, as we welcomed changes to the requirements and even the addition of new smart use cases to the backlog by the stakeholders.
No tilt
However, we did add an unusual clause to the project plan. We included a go/no-go decision on continuing the project, to be made at the end of each sprint, during the retrospective. We allowed the project sponsor to stop the project for two reasons:
- When the backlog items for the next sprint no longer add enough value compared to the costs of developing them.
- In the rare case the requirements grew exceptionally in size between sprints – we set this threshold at twenty percent of the total scope (see the sketch below).
Not that we expected the latter to happen, but given the diversity of interests among the stakeholders, we just wanted to make sure that the project wouldn’t tilt over into a totally new direction, or become abundantly more expensive than originally expected.
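To make this concrete, here is a minimal sketch of such a go/no-go check, assuming scope is approximated by the number of smart use cases on the backlog. The twenty-percent threshold comes from our project plan; the function, its names and the value/cost inputs are purely illustrative, not taken from the actual project.

```python
# Minimal sketch of the per-retrospective go/no-go check (illustrative only;
# the 20% scope-growth threshold is from the project plan, everything else
# is a hypothetical approximation).

SCOPE_GROWTH_LIMIT = 0.20  # stop when scope grows more than 20% between sprints

def go_no_go(baseline_use_cases: int, added_use_cases: int,
             next_sprint_value: float, next_sprint_cost: float) -> str:
    """Apply the two stop criteria from the project plan."""
    # Criterion 1: the next sprint's backlog items must still be worth their cost.
    if next_sprint_value <= next_sprint_cost:
        return "no-go"
    # Criterion 2: requirements may not grow beyond the agreed share of total scope.
    if added_use_cases / baseline_use_cases > SCOPE_GROWTH_LIMIT:
        return "no-go"
    return "go"

# Two hypothetical retrospectives against the eighty-five modeled use cases:
print(go_no_go(85, 10, 2.0, 1.0))  # -> "go"    (about 12% growth)
print(go_no_go(85, 25, 2.0, 1.0))  # -> "no-go" (about 29% growth)
```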
And, as you might expect, it did tilt over. After having successfully implemented a dozen or so smart use cases during the first two sprints, somewhere during the third sprint the software architect and I sat down with the representative of the accounting department. Much to our surprise, our accountant came up with thus far unforeseen functionality: he required an extensive set of printable reports from the application. Based on this single one-hour conversation, we had to add forty-something new smart use cases to the model and the backlog – on top of the original eighty-five, a scope change of almost fifty percent.
There we were at the next retrospective. I can tell you, it sure was quiet. All eyes were on the project sponsor. Then and there, she weighed the pros and cons of continuing or stopping the project, and the necessity of the newly discovered functionality. In the end, after about thirty minutes of intense discussion, she made a brave decision. She cancelled the project and said: “It’s better to fail early at low cost than to spend a lot of money and fail later.”
“We can’t stop now!”
The question you could try to answer here is this: was this project successful, or was it a total failure? For sure it didn’t implement all planned features, so to the Standish Group this is a failed project. But, on the other hand, at least it failed really early. This to me is one of the big advantages of agile projects: it’s not that you will avoid all problems; they just reveal themselves much earlier. Just think of all the painful projects that should have been euthanized a long time ago, but continue to struggle onwards, just because management says: “We already invested so much money in this project, we can’t stop now!” And in that sense, given the client’s long history of failing software development projects, the project sponsor actually considered this project successful after it was stopped.
What is there to learn from this story? I would say it’s always a good thing to have one or two preliminary preparation sprints that allow you to reason about the project at hand. I also consider it a good thing to develop an overall model of the requirements of your project at this stage – knowing that the model is neither fixed nor complete, and without going into a big up-front design. And last but not least: if you must fail, fail fast.