Failing fast

There is an intriguing question that pops up frequently in organizations developing software in projects: when is a project successful? For sure, one of the most (mis)used resources on the subject is the Standish Group. In their frequently renewed CHAOS Report, they define a project as successful if it delivers on time, on budget, and with all planned features. For a number of reasons this is, in my opinion, a rather vague definition.

First of all, how do you measure if a project has finished on budget? You would need to compare the actual budget to an originally planned budget. This originally planned budget is of course based on some estimate. At project start. An estimate at project start isn’t called an estimate for nothing. It’s not a calculation. At best, it’s an educated guess of the size of the project, and of the speed at which the project travels.

Estimation bias

As we know from experience, estimates at project start are often incorrect, or at least highly biased. For instance, what if the people who create the estimate will not do the actual work in the project? Is that fair? Also, we often think we know the technical complexity of the technology and application landscape. In reality these appear to be much more complex during execution of the project than we originally thought.

Second, when is a project on time? Again this part of the definition depends on a correct estimate of how long it will take to realize the software. At project start. Even when a project has a fixed deadline, you could well debate the value of such an on-time delivery comparison. How do we know that the time available to the project presents us with a realistic schedule? Once again, it boils down to how much software do we need to produce, and how fast can we do this.

All planned features

But the biggest issue I have with the Standish Group definition is the all planned features part of it. The big assumption here is that we know all the planned features up-front. Otherwise, there is nothing to compare the actually delivered features to. Much research has been done on changes to requirements during projects. Most research shows that, on average, requirements change between twenty and twenty-five percent during a project, independent of whether a project is traditional or agile. Leaving aside the accuracy of this research, these are percentages to take into account. And we do. In agile projects we allow requirements to change, basically because changes in requirements are based on new and improved insights, and help enhance the usefulness of the software. In short, we consider changes to requirements to increase the value of the delivered software.

So much for software development project success rates. Back to reality. In 2003, the company I worked for engaged in an interesting project with a client. The project set out to unify a large number of small systems, written in now-exotic environments such as Microsoft Access, classic ASP, Microsoft Excel, SQL Windows and PowerBuilder, into one stable back-end system. This new back-end system was then going to support the front-end software used in each of the client’s five thousand shops.

Preparation sprints

As usual, we started our agile project with a preliminary preparation sprint, during which we worked on backlog items such as investigating the goals of the project and the business processes to support. Using these business processes, we modeled the requirements for the project in (smart) use cases. We outlined a baseline software architecture and came up with a plan for the project. The scope and estimates for the project were based on the modeled smart use cases.

Due to the high relevance of this project for the organization, all twenty-two departments had high expectations of it, and were considered stakeholders on the project. Given the very mixed interests of the different departments, we decided to talk to the stakeholders directly, instead of appointing a single product owner. We figured that the organization would never be able to appoint a single representative anyway. During the preparation sprint, we modeled eighty-five smart use cases in a short series of workshops with all stakeholders present.

The outcome of the workshops looked very promising and, adhering to the customer collaboration statement in the Agile Manifesto, the client’s project manager, their software architect and I jointly wrote the plan for the project. We included the smart use case model, created an estimate based on the model, listed the team, and planned a series of sixteen two-week sprints to implement the smart use cases on the backlog. Of course we did not fix the scope, as we welcomed changes to the requirements and even the addition of new smart use cases to the backlog by the stakeholders.

No tilt

However, we did add an unusual clause to the project plan. We included a go/no-go decision for further continuation of the project, to be made at the end of each sprint, during the retrospective. We allowed the project sponsor to stop the project for two reasons:

  • When the backlog items for the next sprint no longer add enough value compared to the costs of developing them.
  • In the rare case that the requirements would grow exceptionally in size between sprints – we set this threshold at twenty percent of the total scope.

Not that we expected the latter to happen, but given the diversity of interests among the stakeholders, we just wanted to make sure that the project wouldn’t tilt over into a totally new direction, or become abundantly more expensive than originally expected.

And, as you might expect, it did tilt over. After having successfully implemented a dozen or so smart use cases during the first two sprints, somewhere during the third sprint the software architect and I sat down with the representative of the accounting department. Much to our surprise, our accountant came up with thus far unforeseen functionality. He required an extensive set of printable reports from the application. Based on this single one-hour conversation, we had to add forty-something new smart use cases for this functionality to the model and the backlog. A scope change of almost fifty percent.

There we were at the next retrospective. I can tell you it sure was quiet. All eyes were on the project sponsor. There and then, she weighed all the pros and cons of continuing or stopping the project, and the necessity of the newly discovered functionality. In the end, after about thirty minutes of intense discussion, she made a brave decision. She cancelled the project and said: it’s better to fail early at low cost than to spend a lot of money and fail later.

“We can’t stop now!”

The question you could try to answer here is this: was this project successful, or was it a total failure? For sure it didn’t implement all planned features, so to the Standish Group this is a failed project. But, on the other hand, at least it failed really early. This to me is one of the big advantages of agile projects: it’s not that you will avoid all problems, it’s that they reveal themselves much earlier. Just think of all the painful projects that should have been euthanized a long time ago, but continue to struggle onwards, just because management says: “We already invested so much money in this project, we can’t stop now!” And in that sense, given the client’s long history of failing software development projects, the project sponsor afterwards actually considered this project to be successful.

What is there to learn from this story? I would say it’s always a good thing to have one or two preliminary preparation sprints that allow you to reason about the project at hand. I also consider it a good thing to develop an overall model of the requirements of your project at this stage – knowing that the model is neither fixed nor complete, and without going into a big up-front design. And last but not least: if you fail, fail fast.

Agile anti-patterns at CodeMotion Madrid

Many organizations turn towards agile to escape failing traditional software development. Due to this increase in popularity, many newcomers enter the field without the necessary real-life experience, but proudly waving certificates from two days of training.

During a challenging talk I did at the CodeMotion conference in Madrid, in October 2013, I tried to show what happens to projects that are coached by inexperienced coaches, and how to maneuver around anti-patterns such as Scrumdamentalism, Dogmatic Agile, Bob-the-Builder, Agile Hippies, Kindergarten Agile and Scrumman. Basically the message is: don’t be dogmatic, and assemble the agile approach that suits your project.


The slides!

A lot of people asked me about the slide deck, so here it is. Please note that I did a shorter version of the talk (which is basically 90 minutes of material) at CodeMotion. However, I thought I’d make the whole thing available here.


Last but not least, thanks for all the great feedback, guys. A short summary:

  • @daviddezglez No doubt. Yesterday @aahoogendoorn ‘s talk was the best #codemotion #es. True agilism. We have to use the thinks that works (not to be cool).
  • @Codekai @aahoogendoorn You killed it yesterday! Great talk.
  • @daviddiazgismer @aahoogendoorn awesome presentation today, kudos! and the interesting discussion afterwards…
  • @aitorTheRed @aahoogendoorn #codemotion #es only way it can be improved, would be for you to play the guitar while giving the speech.
  • @gfcalderon @aahoogendoorn Best talk of the day, everything is clearer now, thanks so much.
  • @sbgermanm @aahoogendoorn also great fun attending it. Great talk, thanks #codemotion.
  • @manuelcr Great talk of @aahoogendoorn about agile in #codemotion #es
  • @RellikCC @aahoogendoorn Really cool talk, very enjoyable.
  • @wnohang Great talk on #agile by @aahoogendoorn at #codemotion #es. Let’s do do some code!
  • @gogomca Excellent talk about “Agile Antipatterns” by @aahoogendoorn un #codemotion. First one that I really enjoy.
  • @odracirnumira Great speech of @aahoogendoorn at @codemotion_es . Agile antipatterns.
  • @RellikCC Hey @aahoogendoorn, my friend @kaseyo23 is really angry because he missed your talk, what can I do about it?

Offshore Agile Software Development: A Practical Guide to Making It Work

In my previous post, I explored how offshore Agile software development offers many benefits over more traditional, Waterfall style approaches, but only if some of the obvious difficulties in communication, overheads and language issues are addressed. So how do organizations overcome those difficulties to make offshore Agile work?

Over many years at Capgemini, we have gained experience with distributed Agile projects, whether onshore or offshore, and have learned a great deal about the dynamics of Agile software development teams. Based on our experience, this article outlines a number of key recommendations for making Agile work across distributed teams.

Cultural exchange

It is highly recommended to facilitate a cultural exchange of the people involved in the project. Prior to starting any implementation, it is good practice to ensure a preliminary stage where the basics for the project, such as an overall model of the requirements, estimates, plan, and a baseline architecture are set. This is an ideal moment to have the client and everybody in the team meet face-to-face, and get acquainted. Despite the obvious costs of arranging this meeting, teams will connect much more easily and collaborate more smoothly later on in the project. This is especially beneficial for long-running Agile projects.


Facilitate continuous communication

In Agile projects, communication is key. Distributed projects need to facilitate the ability to communicate continuously. Given the distance between team members in offshore projects, online communication methods are essential. Where possible, use phone, and preferably video conferencing, for kick-offs, retrospectives and stand-up meetings. Many teams also use simple chat programs for asking questions and sharing knowledge.

Solve language issues early

Many organizations, especially in public service, rely on communication and documentation in their native language. Even code is often written in the native language. To the offshore team members, these languages are unfamiliar and awkward. Even with team members sent to language courses, non-native languages leave room for misinterpretation. It is vital to offshore projects, especially when using Agile approaches, to set up a workflow for translating documentation and code before coding starts. Solving language issues should even be part of the contract.

Standardize requirements

User stories are an immensely popular technique to gather requirements in Agile projects. However, as with traditional use cases, user stories can appear at different levels of granularity and suffer greatly from ambiguity. In many of our projects we therefore successfully apply smart use cases, a more standardized technique for defining requirements. By nature, smart use cases are defined at the same level of granularity, and take a much more standardized approach, facilitating easier distributed communication on individual work items.

Standardize work item workflow

At the start of iterations, many Agile projects spend a lot of time breaking down user stories into individual tasks, and on estimating the required effort in hours, with the goal of negotiating the amount of work that can be handled during each iteration. These iteration kick-off workshops are costly and, even worse, require the whole team to be present. It is good practice to minimize them. Work item breakdowns and estimates become far less necessary when work is aligned to a standardized work item workflow across all work items – with steps such as design, coding, developer testing, testing and acceptance.

Visualize work item workflow

Once a standardized number of steps in the work item workflow are defined, the actual status of the individual work items can be visualized easily on a dashboard. Where co-located teams usually stick post-its on a whiteboard, distributed teams will need a distributed dashboard. Usually this is a website that is accessible to all team members, including the client, or aligned with bug tracking or source control tools.

Work item teams

Rather than the traditional divide between analysis and design, onshore and development, and testing offshore, there are great benefits in working in teams that consist of team members on either side of the line working jointly to implement individual work items. By operating in such work item teams, or feature teams, there is a much more implicit focus on getting the work done. Work item teams tend to be more coherent and much more motivating and stimulating.

Standardize architecture and technology

A much-heard complaint in offshore development is that “they” don’t understand the software architecture and the technology that is used. However, it is important to realize that aspects such as software architectures, frameworks, complex domains and service oriented architectures are complicated by nature, and that for any team, whether co-located or distributed, it will take time to get used to any proposed solutions.

Obtain stability

Offshore projects have a bad reputation for team instability. There are cases where members of the offshore team quit over the course of a weekend, or are replaced by new members who don’t have the required skills and knowledge. It is vital to keep teams stable, especially in long-running projects. As working in Agile teams is experienced as much more motivating and pro-active, Agile helps to reduce team instability.

In conclusion, whether offshore Agile projects can be as successful as onshore Agile projects depends on a great number of factors in addition to the ones outlined above. But once the obvious difficulties in communication, overheads and language issues are reduced, offshore Agile projects can actually work particularly well, given a collaborative and standardized approach.

This post was also published at IDG Connect at:
Offshore Agile Software Development: A Practical Guide to Making It Work

Validating sending mail messages in smart use case unit tests

When building applications with the Adf framework, smart use cases are implemented in task classes. Quite regularly, mail messages are sent from tasks. To do so, we use the MailManager class. Using this class, mail messages are usually built up as in the following code example.
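The original code example was not preserved here, so the following is a minimal sketch of what building and sending a message from a task might look like. The MailManager.Send call and the surrounding task members are assumptions for illustration, not the verbatim Adf API.

```csharp
// Sketch only: MailManager.Send and the task context are assumptions,
// not taken from the Adf documentation.
using System.Net.Mail;

public class VerifyAccountTask : Task
{
    private Account account;

    public void SendVerificationMail()
    {
        // Compose the message; the MailManager hands it to the configured
        // IMailProvider (the SmtpMailProvider in production).
        var message = new MailMessage(
            "noreply@example.com",           // from
            account.Email,                   // to
            "Please verify your account",    // subject
            "Click the link to verify.");    // body

        MailManager.Send(message);
    }
}
```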


To send mail messages, the MailManager plugs in an implementation of the IMailProvider interface. Currently, Adf provides two mail providers: the obvious implementation SmtpMailProvider, and the DummyMailProvider, which creates the mail messages and dumps them into a specified directory.

Smart use cases, implemented in task classes, are always unit tested. Here, each of the public methods of the tasks is tested using the Adf.Test libraries from the framework. A number of test methods are usually implemented to test all possible scenarios of going through the smart use cases. In the following code example, such a method is shown.
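As the original example is missing here, the sketch below reconstructs the shape of such a test method from the description that follows. The concrete task class, the Account domain object usage, and the TestManager helper names (other than ViewIsActivated) are assumptions.

```csharp
// Names such as CreateAccountTask and AllValidationsSucceeded are
// assumptions; the shape follows the description in the text.
[TestMethod]
public void Init_ActivatesViewAndSetsAccountToUnverified()
{
    var account = new Account();
    var task = new CreateAccountTask();

    task.Init(account);   // run the scenario under test

    TestManager.AllValidationsSucceeded();                      // no validation errors occurred
    Assert.AreEqual(AccountStatus.Unverified, account.Status);  // post-condition on the domain object
    TestManager.ViewIsActivated();                              // the accompanying page or form is presented
}
```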


In this example, the Init() method of the task is called. After it is called, the test framework can perform any number of checks. Here, all validations should succeed, and the Status property of the Account domain object should be set to Unverified. The last validation, ViewIsActivated, will check whether the accompanying (web, Windows RT) page or Windows form is presented to the user.

Recently, we have added new features to also validate whether a task has sent the mail message it was supposed to send. To provide this functionality, we’ve added a TestMailProvider to the Adf.Test libraries. In your test project, this mail provider needs to be plugged into the MailManager, as follows in code (or in the app.config file).
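The code for this step was lost from the post, so as a sketch, plugging in the provider could look like the single line below. The name of the registration method is an assumption; only MailManager and TestMailProvider are taken from the text.

```csharp
// Assumed registration call; in Adf this can alternatively be
// configured in the app.config file.
MailManager.Register(new TestMailProvider());
```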


This ensures that any mail message sent by the tasks goes through the TestMailProvider. The test mail provider will, similar to the DummyMailProvider, place mail messages in a specified directory, but more importantly, it will also notify the TestManager that a message was sent.

Next, when unit testing a method that either sends or doesn’t send mail messages, you will be able to verify this, as in the following code example.
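Since the example itself is missing, here is a sketch of such a verification. The task instances and the exact call shapes are assumptions; the MailIsSent and MailIsNotSent helpers are the ones named in the text.

```csharp
// Sketch: the task instances and surrounding test code are assumptions.
task.Ok();                    // scenario that is supposed to send the mail
TestManager.MailIsSent();     // fails if no message reached the TestMailProvider

otherTask.Cancel();           // scenario that should not send any mail
TestManager.MailIsNotSent();
```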


You can use the MailIsSent or MailIsNotSent methods for this purpose.

Reaching post-conditions in tasks

Implementing use cases in Adf.Net is covered by the task pattern. Each smart use case in the model is implemented as a descendant of the Task class in Adf.Net.

The task pattern consists of three major parts:

  • Starting the task, either using a parameterized Init() method, or the default Start() method.
  • After a task calls another task, and this second task is finished, control goes back to the calling task, using any of the specific ContinueFrom() methods, or the default Continue() method.
  • When a task reaches any of its post-conditions, it needs to be ended, usually through the Ok() or Cancel() method.
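Put together, a task descendant following this pattern could be sketched as below. Everything beyond the Task base class and the Init(), Continue() and Ok() methods named above is a hypothetical example, not the Adf.Net API.

```csharp
// Hypothetical task following the pattern; member names beyond
// Task, Init(), Continue() and Ok() are assumptions.
public class SelectPersonTask : Task
{
    public void Init(Criteria criteria)   // 1. start the task, parameterized
    {
        // set up the task using the criteria passed in
    }

    public override void Continue()       // 2. a task called by this task has finished
    {
        // pick up the result of the called task and carry on
    }

    public void Select(Person person)     // a positive post-condition is reached
    {
        Ok(person);                       // 3. end the task, passing back the result
    }
}
```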

In this specific post I’ll look at reaching the post-conditions. In the pattern, the Task class itself acts as the layer supertype and implements a number of methods to end itself. A use case can have multiple post-conditions. Some of them are positive, some can be negative. And sometimes a specific case needs to be addressed.

When a task reaches its positive post-condition, it is best practice to end it using the Ok() method. This method takes a params object[] p parameter, which allows you to pass results from the task back to the calling task, using the following signature.

public virtual void OK(params object[] p);

A similar method Cancel() exists for tasks that end in a negative post-condition, usually because the user cancels the interaction:

public virtual void Cancel(params object[] p);

But Adf.Net also supplies a similar method, Error(), that allows tasks to finish with an error, possibly a technical one.

Under the covers, these three methods call another method, Finish(). This method takes care of passing back an instance of TaskResult. This result is also posted back to the calling task, so it knows how the called task ended. As in the following example, you could also use Finish() yourself.

if (persoon.IsNullOrEmpty())
{
    // Body assumed: end the task yourself through Finish().
    Finish(TaskResult.Cancel);
}

Please note that the code above is equal to the following code.

if (persoon.IsNullOrEmpty())
{
    // Assumed equivalent shorthand: Cancel() calls Finish(TaskResult.Cancel).
    Cancel();
}

By default, TaskResult has the following values in Adf.Net: Ok, Cancel, Error.

However, as TaskResult is implemented using the descriptor pattern, additional project-specific values can be added easily by inheriting from TaskResult and adding the specific values. Thus you are able to end your task in more project-specific ways, such as in the code example below.

if (persoon.IsNullOrEmpty())
{
    // Hypothetical project-specific result value.
    Finish(ProjectTaskResult.PersonNotFound);
}
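Defining such a project-specific result could look like the sketch below. Only the descriptor pattern and inheriting from TaskResult are taken from the text; the class name, value name and constructor shape are assumptions.

```csharp
// Sketch of extending TaskResult (descriptor pattern); the
// constructor details are assumptions.
public class ProjectTaskResult : TaskResult
{
    public static readonly ProjectTaskResult PersonNotFound =
        new ProjectTaskResult("PersonNotFound");

    protected ProjectTaskResult(string name) : base(name) { }
}

// A task could then end with, for instance:
//   Finish(ProjectTaskResult.PersonNotFound);
```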

Agile business intelligence

Cutting costs is a frequently cited motive for business intelligence (BI) projects. A well-known government agency, for example, wanted to know how effective its efforts against benefit fraud were. Investigating possible fraud costs the agency money, but finding fraudsters directly brings money in. So the agency went looking for the optimal ratio between the number of investigations and the number of fraud cases detected. Put briefly, it wanted to find as many fraudsters as possible with as few investigations as possible.

This goal is archetypal for BI projects. Although the goals can often be defined concretely, realizing them is not always straightforward. Which reports need to be developed? What should they show? Which source systems need to be consulted? Often, once the project is under way, new insights keep presenting themselves, for instance about required information missing from the source systems. And otherwise the client will find inspiration for new wishes and requirements in feedback on delivered reports and analyses. BI projects are typically characterized by incomplete requirements and continuously advancing insight.

In the field of systems development approaches, enormous changes have taken place in recent years. More and more organizations and projects are switching to a new generation of approaches that no longer shy away from advancing insight, but embrace it. This new generation is also characterized by multidisciplinary collaboration and by delivering software frequently, in short iterations. In one word: agile. The question is whether, and how, these approaches can also contribute positively to running BI projects.

What characterizes BI projects?

The goals of BI projects are always directly related to the business. Think, for example, of minimizing insurance fraud or retaining customers. During a project, analyses and reports are defined that support controlling and optimizing the client’s business processes. These reports and analyses are fed, usually daily, from a data warehouse. Characteristic of this type of project is that it is often hard to establish what concrete contribution the analyses and reports ultimately make. Take the government agency mentioned earlier, where it could not be expressed up-front how much money could be saved by finding the optimal ratio between the number of investigations and the number of fraudsters found. In the end, it only became apparent after this project had finished that the results were even better than estimated beforehand.

Although it is possible to establish early on in a project which analyses and reports are needed, it is hard to formulate concretely what exactly they should contain. What does the client really want to see in his reports? Take, as an example, a report on the ratio between incoming and outgoing messages at a telecom operator. Only when the client got to see the report did it turn out that there were several types of incoming messages, which in turn were linked to several types of outgoing messages. A typical example of advancing insight.

Another interesting phenomenon is the extract, transform and load (ETL) process. In a number of steps, data from the source systems is collected, integrated and aggregated into the format required by the reports and analyses. In most BI projects, this type of work takes up roughly eighty percent of the development time.


Yet, unfortunately, this work usually remains invisible to the client, who, rightly, concentrates mainly on the reports and analyses to be delivered. But because the bulk of the work in a BI project focuses on ETL, the phases of such a project are usually organized around ETL as well. That is to say, first all extractions are developed, then the transformations, and then all data is loaded. Only after this has succeeded are the reports and analyses defined, as depicted in the figure below.


The consequence is that only in this final phase does the project deliver something the client can actually work with. This has an important drawback: for the client, the project stays under water for a long time. Only after a lot of time and money has been invested can the client give feedback on the reports and analyses produced. Moreover, by then the ETL has already been delivered more or less completely, so changes to reports and analyses are hard to realize. Nor can it be ruled out that, as a result of this feedback, some steps in the ETL turn out to be superfluous; in that case, work has even been done for nothing.

Finally, it happens that the desired reports and analyses require data that cannot be derived directly from the source systems. Additional data is then often added manually during the ETL, and small administrative applications are frequently developed along the way for this purpose. Quite apart from the fact that these applications are built by the wrong developers (BI developers instead of software developers), such gaps are usually discovered only late in the BI project, with all the consequences that entails. Executed this way, many BI projects are more expensive than strictly necessary.

What characterizes agile?

In software development, a new generation of approaches has emerged in recent years, coupling the best practices of earlier generations to a strongly iterative and cooperative character. These approaches, such as DSDM, extreme programming (XP), Scrum and Smart, are characterized by:

  • Short iterations. Projects are executed in short iterations, varying from two weeks to a month. During each iteration, a small part of the software is analyzed, designed, built, tested and even delivered to the client. Only at the start of an iteration is it decided which functionality will be realized during that iteration. Projects thus shorten the feedback loop with their client, which rapidly improves the quality of the software being developed. This is in contrast to traditional projects, where the software is delivered in a big bang at the end of the project.
  • A compact unit of work. To achieve this, projects use an unambiguous and small unit of work. Multiple work items are delivered in every iteration, and individual work items deliver value to the client directly.
  • Delivering software fast and frequently. In agile projects, work items are delivered to the client from the very first iterations onwards, whether or not straight into production. This ensures that potential problems, for instance around architecture or infrastructure, surface early in the project.
  • Incorporating advancing insight. Unlike traditional projects, where advancing insight is banned as much as possible, in agile projects it is possible and even customary to take new and changing requirements on board immediately. This works because at the start of every new iteration it is decided which work items will be realized; new work items can be taken on right away, in preference to work items identified earlier.
  • Close collaboration between client and supplier. Delivering software fast and frequently in short iterations demands intensive collaboration between client and supplier. Preferably, they consult on a daily basis, for instance to analyze the work items to be realized next.
  • Integrated testing. Because software is delivered frequently and early in projects, testing the work items is of crucial importance from day one.


Of all agile approaches, Scrum is by far the most popular. Not infrequently, Scrum is simply equated with agile. The clarity of the approach makes Scrum a good starting point for projects.

A Scrum project starts as soon as the list of work items has been established. This list is called the product backlog, and usually contains a collection of user stories. The backlog is established by the client’s representative, the product owner.


Iterations, here called sprints, as a rule last two to four weeks. At the start of a sprint, during the sprint planning meeting, the product owner and the team together establish the user stories to be realized. The team breaks these down into tasks and estimates the amount of work for each in hours. Based on these estimates, on the velocity in previous sprints, and on the composition of the team for the coming iteration, it is determined how many stories fit into the sprint. These stories are placed on the sprint backlog. At the end of a sprint, the sprint review meeting takes place, in which the realized work items are evaluated. This is followed by the retrospective, in which the team evaluates and improves its way of working.

Scrum has only a limited number of roles. The work is done by the team, which as a rule counts five to nine people and in which individual roles are not described. The product owner represents the client in the project. A Scrum master coaches the product owner and the team. Progress in the project is tracked on a simple dashboard.

The simplicity and popularity of Scrum make the approach a good framework for starting projects. The terminology used in the approach is infectious. Scrum is easy to apply and, where needed, can easily be extended with techniques from other agile approaches.


In our experience, Scrum can be structured well for BI projects with elements from other agile approaches, in this case Smart. This originally Dutch agile approach distinguishes multiple types of iterations. At the start of a project, the backlog is filled with work items that are prerequisites for realizing software. Think of identifying the stakeholders and goals, modeling the business processes, reports and analyses in smart use cases, creating an estimate based on these smart use cases, drawing up a baseline architecture, setting up the development environment, and writing a project plan. These work items are realized during the preparatory iterations Propose and Scope. The Propose iteration results in an initial project proposal; Scope ends with the delivery of the project plan.


After these introductory iterations, the backlog is filled with the modeled smart use cases, which are the standard unit of work in Smart. The smart use cases are realized during one or more releases. A release consists of a series of Realize iterations, followed by a Finalize iteration. During the Realize iterations, the smart use cases are realized, tested and accepted. Every release is closed with a Finalize iteration, in which the emphasis lies even more strongly on testing and on stabilizing the code.

In Smart, every iteration has the same structure. The iteration starts with a kick-off called Plan and ends with the retrospective Evaluate. In between, the work items are realized during Build.


In Smart, the team has several roles. The most important are project sponsor, user, domain expert, developer, tester and coach. Smart does well in long-running, often somewhat more complex projects such as BI projects, in which the more structured modeling of reports and analyses in smart use cases is a better fit than the aforementioned user stories. Smart use cases relate directly to the business processes of the customer. Moreover, they are modeled and estimated based on a series of standard types, so-called stereotypes. Stereotypes have already been described for, for example, steps in ETL, reports and analyses.

Delivering fast and frequently

Delivering software that is relevant to the customer fast and frequently is an aspect of agile that comes in particularly handy in BI. Instead of the vertical phasing of more traditional BI projects, we choose to approach the realization of analyses and reports horizontally. No longer are all extractions delivered first, followed by the transformations and the loading of the data, with the reports and analyses developed only after that.

Rather, we develop per report or analysis. The customer repeatedly chooses which reports or analyses have the highest priority, and the team works exclusively on the minimal set of smart use cases needed to realize them. These represent the required extractions and transformations and, of course, the report itself. This pivot in the work enables direct feedback from the customer. An important additional advantage is that the reports and analyses can immediately be used to improve the customer's business processes, sometimes only a few weeks after the start of the BI project. The project thus shows benefits at very short notice.

In this way, progressive insight is for once not banned, as traditional projects attempt, but new insights are incorporated immediately. Although in principle the scope does not change during a running iteration, new requirements can be scheduled for the very next iteration. As soon as the customer sees the first version of a report or analysis, he immediately formulates his feedback, which is then implemented almost right away; think, for example, of realizing data entry for missing data.

A compact unit of work

In short, a BI project can be expressed in three types of development work: data modeling and ETL, defining analyses and reports, and developing additional data entry applications. In Smart, the smart use case is the leading unit of work, both for modeling business processes and requirements, and for estimating, realizing and testing functionality that is relevant to the user. For modeling smart use cases we have guidelines at our disposal that help ensure smart use cases have a low granularity and can be modeled as early as Propose and Scope. In addition, a large number of stereotypes have been described that standardize and simplify modeling, estimating and realization. Examples are manage for data entry, search for looking up records, and file import.

Although they originate from regular software development, smart use cases also prove perfectly applicable to agile BI projects. For defining reports and developing additional data entry this is obvious, because such work does not differ essentially from regular software development. But use case stereotypes have also been established for defining analyses and even for performing ETL, such as collect, integrate and aggregate.

Smart use cases are modeled early in an agile BI project, during Propose and Scope. At that point the smart use cases are merely identified and estimated. Details are elaborated only during later iterations. Subsequently, Realize and Finalize iterations are planned. During these iterations the focus is on realizing individual reports, based on the smart use cases needed for ETL, any data entry, and the definition of the report itself.


The required smart use cases are realized as quickly as possible and validated with the customer. An additional advantage is that the underlying data flows, now expressed as smart use cases, can be tested individually, and modeling the use cases even makes reuse of data flows easy to identify.

In BI projects, delivering new reports and analyses fast and frequently is of great importance to the customer. After all, every new report can be put to use immediately and thus directly adds value to optimizing the customer's business processes. In addition, BI projects benefit greatly from the progressive insight that arises thanks to the short iterations in agile projects. Rather than trying to ban these insights and this feedback, as traditional projects attempt, it is better to make effective use of them.

Dashboards and burn-down charts

To control agile projects and track their progress, two pragmatic tools are typically used: an agile dashboard or taskboard, and a burn-down chart. All work items to be realized go through a lifecycle that describes the steps in realizing them, usually over the course of a few days. For smart use cases this lifecycle includes steps such as New, In Iteration, Working, Testing, Rework and Accepted. Projects typically extend this cycle as their specific way of working requires. The steps in the lifecycle of work items or smart use cases form the columns on the dashboard or taskboard of an agile project.
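As an illustration, the lifecycle steps described above map directly onto taskboard columns. A minimal sketch, in which the work item names and statuses are invented for the example:

```python
# Lifecycle steps for smart use cases; each step becomes a taskboard column.
LIFECYCLE = ["New", "In Iteration", "Working", "Testing", "Rework", "Accepted"]

# Hypothetical work items with their current status.
work_items = [
    ("Collect customer data", "Working"),
    ("Aggregate sales per region", "Testing"),
    ("Manage missing entries", "New"),
    ("Define revenue report", "Accepted"),
]

def taskboard(items):
    """Group work items into one column per lifecycle step."""
    columns = {step: [] for step in LIFECYCLE}
    for name, status in items:
        columns[status].append(name)
    return columns

# Print the board, one column per line.
for step, names in taskboard(work_items).items():
    print(f"{step:12} | {', '.join(names)}")
```

A project that adds its own lifecycle steps simply adds columns to the list.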


Most agile projects use post-its on a wall for such dashboards, or an online tool. At a glance, the progress of the smart use cases is visible, also to the customer.

Because the size of smart use cases is estimated in points, every status change makes it easy to determine how much work is still needed to complete the reports and smart use cases in progress. As soon as a smart use case is accepted, the team is awarded the corresponding points. In agile BI projects it is important that the "back end" smart use cases representing steps in ETL or data entry are also accepted by the customer. This can be done, for example, by demonstrating the results of such use cases to the customer in the reporting tool.

A burn-down chart shows a daily snapshot of these points, plotted over time. With a simple extrapolation, the expected end date of the project can now be calculated.
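The extrapolation itself can be sketched in a few lines, assuming a linear burn rate derived from the daily snapshots; the dates and point totals below are invented for the example:

```python
from datetime import date, timedelta

# Daily snapshots of points remaining, as plotted on a burn-down chart
# (illustrative numbers; weekends have no snapshot).
snapshots = [
    (date(2012, 3, 1), 120),
    (date(2012, 3, 2), 116),
    (date(2012, 3, 3), 110),
    (date(2012, 3, 5), 104),
    (date(2012, 3, 6), 99),
]

def forecast_end_date(snapshots):
    """Linearly extrapolate the burn-down to the day the points reach zero."""
    (first_day, first_pts), (last_day, last_pts) = snapshots[0], snapshots[-1]
    burn_rate = (first_pts - last_pts) / (last_day - first_day).days
    days_left = last_pts / burn_rate
    return last_day + timedelta(days=round(days_left))

print(forecast_end_date(snapshots))  # 2012-03-30
```

The same calculation works unchanged for a per-report burn-down: simply feed it the snapshots of only the points belonging to that report.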


In agile BI projects, by the way, it is not only very useful to keep a burn-down chart for the project as a whole, but also to project this progress per report or analysis to be realized. Precisely because the reports are delivered individually, these latter burn-down charts give the customer direct information about when the new report can be put to use in managing his business processes.

Fast results

Agile approaches such as Scrum and Smart respond particularly well to the rapidly changing and expanding wishes and requirements that customers have of BI projects. In short iterations, the team works on delivering individual reports and analyses that are "good enough". This way the customer benefits from his BI project much earlier, and progressive insight can lead to an optimal end result faster and cheaper. Applying smart use cases also offers BI projects a structured, but above all pragmatic, way to operate with a single unit of estimating, realizing and testing, one that can moreover be related directly to the customer's business processes. The pragmatic tools used in agile projects to measure progress, such as agile dashboards and burn-down charts per report, also offer direct insight into the realization of the reports and analyses. Agile BI thus delivers fast results with rapidly growing customer satisfaction. And that's what it was all about, wasn't it?

This article appears in the Agile Datawarehousing theme issue of the monthly magazine Informatie.

Sander Hoogendoorn
Principal Technology Officer and Agile Thought Leader at Capgemini, author of the books Dit Is Agile (agile, Scrum, Smart) and Pragmatisch Modelleren met UML (smart use cases).

Sandra Wennemers
Principal Consultant and Data Warehouse Architect at Capgemini


Agile anti-patterns. Yes, your agile projects can and will fail too

Over the years I have noticed a lot of agile anti-patterns during projects. Wrongly used agile approaches, dogmatic use of agile approaches, agile-in-name-only. Recently I presented a talk at a number of agile and software development conferences that demonstrates patterns of agile misuse. These conferences include Agile Open Holland (Dieren), Camp Digital (Manchester), GIDS (Bangalore), ACCU (Oxford) and Jazoon (Zurich). Anyway, here's the slide deck. Enjoy.

How to kill your estimates

It must have been about twenty-five years ago. I was working for a large international consultancy firm. One of the reliable ones. The ones you would think had everything worked out. But I guess this was merely a product of my imagination.

At one time, two colleagues and I were working on an estimate for a bid on a software development project. Now the three of us together, despite the fact that this occurred a long time ago, had quite some years of experience in the field. So you would reckon we could come up with a decent estimate. And we did. In fact, we created two estimates. One by counting screens, based on a point scale and a real-life velocity; a very popular and suitable technique, as we were building desktop applications. Next to that, we created a work breakdown structure. Using both techniques we estimated ALL the work in the project, not just coding, or coding and testing. Not much to our surprise, the results from both estimates were comparable. Happy and confident with the result, we presented our estimates to the account manager in charge.

It was at this particular point in time, at that very instant, that my faith and trust in our industry vanished into thin air, never to return. Barely glancing at the work we had invested in our estimates, the account manager merely looked at the end result and added 20% to the overall price for the project. For contingency, he claimed.

Up to this deciding moment in my career I had never even heard of the word. Contingency. And to this day I shudder to think what the impact of that little word is on this industry, and probably on others too. To sum it up: every confident estimate can be killed by (account) managers adding seemingly random percentages. Just because they don't trust your estimates. Or because they still remember the court cases from previous projects. Frightening altogether.

Needless to say we lost the bid. Our competitors were less expensive.

History revisited

Now I wouldn’t write about this unfortunate event if I hadn’t run into a similar situation only a few weeks ago. You might imagine we’ve learned a thing or two in the twenty-five years since. Alas. We didn’t. And it’s industry-wide, I assume.


With a small team we had worked on an estimate for re-engineering a client/server desktop application into a .NET web application. We estimated based on smart use case points, a technique that we have used and refined over the years; a reliable estimation technique. The estimate came to just under 500 smart use case points, which is a measure of the size and complexity of the system. Given the fact that we have executed multiple similar projects successfully, we were able to establish that it takes about 8 hours per smart use case point to realize the software. And yes, we have actually gathered metrics! These 8 hours include ALL work on the project, including project management, documentation, testing, setting up the development environment, and so on. A simple calculation tells us that we would need 4,000 hours to do the project. So this is what we handed to the account manager in charge.
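The arithmetic behind the estimate is as simple as it sounds. A sketch, using only the figures from this story (500 points, 8 hours per point, and the account manager's 20% markup):

```python
# Effort estimate based on smart use case points. The 8-hours-per-point rate
# covers ALL project work: management, documentation, testing, and so on.
def project_hours(points, hours_per_point=8):
    """Total project effort in hours."""
    return points * hours_per_point

total = project_hours(500)        # 4000 hours, as handed to the account manager
with_contingency = total * 1.20   # the 20% "contingency" markup
print(total, with_contingency)
```

Note that the rate itself comes from measured metrics on comparable projects; the 20% markup, as the next paragraphs argue, does not.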

Without so much as the blink of an eye, an email came back from the account manager stating that he had added 20% to our estimate, and that he would communicate this new number to the client. Leaving us, of course, with a lot of questions.

So the interesting question is: where does the 20% come from? You would expect that it originated from evaluating software development projects for years and years, and comparing the original estimates in those projects with the final outcome – given the fact that requirements change, teams change and technology changes. But to be quite honest, I unfortunately suspect this is not the case. Only a few organizations actually measure like this, I assume. And even if he had done that, would it be exactly 20%? Why not 17.6? Or 23.17? Exactly 20%? Or maybe the account manager knows his statistics. Statistics claim that developers estimate 20% too optimistically, as we developers are optimists. But this was a team estimate, covering all work on the smart use cases on the backlog.

Yesterday’s weather

To cut to the chase: if a team estimates the amount of work in a project, especially on a relative scale, then as an account manager this is what you should trust, as it is the best estimate available. This is the yesterday’s weather principle. The best way to predict today’s weather is to look at the weather from the day before. No more, no less.
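A minimal sketch of the yesterday's weather principle, with purely illustrative iteration numbers: the forecast for the next iteration is simply what the team actually accepted in the previous ones, with no percentage bolted on:

```python
def yesterdays_weather(accepted_points, window=3):
    """Forecast the next iteration's velocity as the average of the points
    actually accepted in the last few iterations; no gut-feeling markup."""
    recent = accepted_points[-window:]
    return sum(recent) / len(recent)

# Points accepted in the last four iterations (illustrative numbers).
forecast = yesterdays_weather([18, 22, 20, 21])
print(forecast)  # 21.0
```

The point of the design is what it leaves out: there is no `* 1.20` anywhere, because the measured history already contains the team's real-world optimism and friction.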

Adding 20% – or 10% or 15% – just as a rule of thumb is not based on any real-life experience. In my opinion such actions show that the account manager puts no trust in his own people, in his own organization. This would be similar to saying that if it was 10 degrees yesterday, we’ll just state that it will be 7 degrees today, without having looked at the weather forecast.

Restore trust

So what’s the message here? Please, dear account managers, put some trust in the people who try to create an estimate as accurate as they can. Don’t add 20%, or 10% for that matter, just based on gut feeling. And more importantly, dear software developing organizations, start measuring your projects! Gather metrics so future estimates become much more accurate and realistic, and future project proposals and projects will be less off target than so many of the projects I witness. And, on the fly, restore some of the trust so many people have lost in our industry. Let’s learn.

This post is published in SDN Magazine.

Death by Dogma versus Agile Assembly

On November 3, 2011 I presented the keynote of the Agile Open Holland Conference in Dieren. During this challenging talk I discussed the current state of affairs in agile organizations and projects and the effects of the recent strong rise in popularity of agile approaches. Let’s put it mildly: there’s a lot of work to be done.

Death by dogma

Almost all organizations, large and small, are turning towards agile to escape failing traditional software development projects. Due to this strong increase in popularity of agile approaches and techniques, many newcomers will enter the field of agile coaching. Of course without the very necessary real-life experience but proudly waving their Certified Professional Scrum Master Sensei Trainer Certificate proving they at least had two days of training.

Going through the hardship of two whole intense days of training becoming a Certified Agile Jedi Knight is worthwhile!

In my opinion, as a result many organizations and projects in the next couple of years will be coached by well-meaning consultants who have barely made it through boot camp, and who, in the famous Shu-Ha-Ri learning model, haven’t yet made it beyond copying their teacher. This will lead to very dogmatic applications of the more popular agile approaches, mostly Scrum, especially when the so-called leaders in the field themselves turn to dogma. This dogmatic thinking will block the use of more mature techniques and technology in agile projects, even when these would really improve projects, or would prevent agile projects from failing. “No, you can not do modeling in Scrum” and “Burn-down charts are mandatory” are two such simple real-life statements that I’ve witnessed certified agile Jedi Knights make. Due to this lack of experience and the growing dogmatism in the agile beliefs, more and more agile projects will fail. Death by dogma.

During my keynote I discussed many examples of dogmatic Scrum implementations and the drawbacks from being dogmagile, building the story up from my previous posts Scrumdamentalists and Crusaders and Flower-Power Agile Fluffiness.

But maybe even more important, during the keynote I also show that there is no such thing as one-size-fits-all agile. Different organizations and different projects require different agile approaches. Sometimes lightweight agile, for instance implemented using Scrum, user stories, simple planning, simple estimation, and one co-located team using post-its on a wall, is just fine. But in many projects the way of working should rather be built up from slightly more enterprise-ready approaches, for example using Smart, smart use cases, standardized estimation, multiple distributed teams and online dashboards.

What is agile anyway?

Implementing agile in your projects starts with establishing what it means to be in an agile project. As I demonstrate in the keynote, I consider short iterations, collaborative teams, a small unit of work, continuous planning, delivering early and often, and simplified communication to be crucial for a project to be considered to work in an agile way.

From there you can pick and choose from a wide variety of approaches, techniques and technology. Most of them stem from the agile era, but some can also be traced back to older (or more mature) eras. In conclusion, you might say that to be successful in implementing agile in your organization, you will need to assemble your agile approach from everything that helps you implement these six agile bare necessities. Anyway, enjoy!

Flower-Power Agile Fluffiness

To all the dear people in the agile community and to the faint-hearted: this will not be an easy blog post. There was a time when being a software developer was a decent craft, requiring decent craftsmanship and, yes, also a lot of creativity, some communication, some collaboration. Still it was a decent craft. The waterfall-ish methodologies we used weren’t extremely optimal, but at least software development was a craft. Similar to a carpenter who uses his tools to craft new furniture, or an industrial designer using his tools to come up with a new model Toyota – I know this is not the best example, but at least I now have the attention of the kind folks in the lean community. And then came agile.

Now believe me, I don’t have anything against agile. I’ve been promoting agile and iterative approaches to software development since the mid-nineties, and haven’t done traditional projects ever since. Agile used to be about engineering. We were improving ourselves by using better techniques, continuous integration or continuous deployment, writing unit tests, pair programming, writing smart use cases or user stories, using a bug tracker, burn-down charts and even post-its on the wall. So far, it’s still all in a day’s work. There was a time when, as an analyst, a developer, or a tester, you could be proud of being in an agile project.

But these days, if I look at what’s going on at agile conferences, on Twitter, in blog posts, literature and discussions on agile, Scrum, Lean, Kanban and whatever new flavors of the month are passing by, I get the feeling I’m no longer talking about craftsmanship but rather ending up in Disneyland, or in San Francisco in the late sixties. I’ve got a feeling we’re not in Kansas anymore.

Agile coach at work.

Agile community anti-patterns

Certainly it’s a good thing that everybody can join the agile community. But I witness a lot of repetitive behavior I strongly discourage. Let’s name this repetitive behavior agile community anti-patterns – not to be confused with agile anti-patterns. The latter merely describe failures in agile projects, and yes, these do occur, while the former describe community failures. Let me sum some up for you – while on the fly breaking my first anti-pattern:

  • Metaphorizing
  • Zenify
  • Kindergarten Agile
  • Open Door Wisdom
  • Scrumdamentalism

Allow me to elaborate a bit on these agile community anti-patterns.

Metaphorizing
Although I’m not sure metaphorizing is even a good English word, I’m quite sure you get the meaning of it. Everything anybody comes up with these days about agile – or about what people think is agile – is immediately turned into a metaphor or given a silly name.

Can we please stop talking about the Gemba Walk when we mean that it’s a good thing our manager stops by every now and then? This shouldn’t even have a name.

Japanese manager stopping by.

What does it mean when an agile specialist states that “you should verify the five why’s with the reverse loop”? And what about Feature Injection? According to a recent tweet, “Feature Injection is more about using examples to discover the relationships you need and missed or got wrong.” Call me old-fashioned, but I totally miss what this is about.

Zenify
Yes, I know lean manufacturing started in Japan at Toyota. So there is a link between agile and Japan. But is that an argument to zenify software development?

Our new Feng Shui office space.

Why do we need to explain roles in a software development project as samurai, sensei or roshi? I thought product owner and agile coach were already abstract enough. What about re-arranging our office in a Feng Shui manner? Also, the word kaizen seems to be becoming very popular. Quoting a recent tweet: “Just write down small things on small papers. It’s your kaizen.” Although I’m all for small things, what does this mean and why do I need to introduce it in my project?

Kindergarten Agile

Not sure about the average level of maturity of people in agile projects around the world, but in the projects I’ve been in over the last decade or so, people were pretty mature. So why is it that groups of sensible people at unconferences are discussing the Happiness Index of projects? To me this sounds much more like a weekly column in a women’s magazine than a solid engineering practice in a software development project.

And I for one certainly don’t want to have to pass a baton if the flow in my stand-up meeting or scrum isn’t achieved automagically. Still this was seriously recommended by one of the authorities in the agile community during a recent presentation at a conference.

Some tweets to further illustrate my point? If someone says “Add Ready for Celebration before the Done column on your wall board”, should I start decorating the office? It makes you wonder how often his projects get things done. Even worse is “Make sure you don’t miss the agile elephant versus the waterfall elephant in the lobby”, which was tweeted from a recent agile conference. Where was this conference held? At the Toys-R-Us?

Participants at a recent agile conference strolling down the exhibition hall.

Open Door Wisdom

Often I see quotes coming from the agile community that are no more than open doors, and have been open doors in projects for decades and perhaps centuries, but that are treated by others in the community as sources of new and ultimate wisdom.

Recently a speaker at an agile conference claimed that “if your retrospectives don’t add value to your project, you should change your retrospectives.” Duh. The speaker got a loud applause from the audience, he is now considered an absolute guru, and his quote got tweeted and re-tweeted all over the world. This is not even an open door, it’s an open gate.

Scrumdamentalism
Now I’ve blogged about Scrumdamentalism before, but with the newer generations of agile converts, some communities are getting more and more populated by religious zealots who treat their newly gained faith with deep fundamentalism. Any best practice from their own belief is treated as mandatory, while followers of other beliefs are often considered heretics. A recent blog post states that “A technical project manager can be a good product owner if he sticks to managing the product backlog and abiding by the rules of Scrum.” I wasn’t aware that Scrum even had rules. You learn something new every day.

And if Scrumdamentalism alone isn’t bad enough already, it is even enforced by the so-called leaders themselves, as proven by the following horrible quote from one of the Great Leaders in this particular community: “the team needs to listen to god talk and follow the commandments”. Dear agilists, there is no one true belief. There’s value in all of them. And can we please also abolish the function title Agile (or Scrum or Kanban or Lean) Evangelist? Moreover, people calling themselves Agile Sensei should be banned from conferences and projects, if you ask me.

Flower-Power Agile Fluffiness

Please, people, can we stop adding all this new-age Flower-Power fluffiness to agile? In my opinion the agile community, with all its great ideals and best practices, is slowly degrading into a worldwide luna park. My guess is that it won’t be long before someone somewhere suggests adding a psychotherapist to every software development project. “How do you feel about not being able to get your user story to the Ready for Celebration column?” Or planning a clown’s visit during retrospectives to increase the Happiness Index.

Agile retrospective with product owners present.

We are slowly becoming the laughing stock of the engineering world. I long for the time when we re-introduce engineering into our trade, and all go back to work again.