Your thought process seems logical to me. You seem to be driving towards the answer to the question "Did the project team adapt to changing requirements and respond in a timely manner by embracing the change?" I just don't agree with some of your intermediate steps for getting to that result.
I also agree that, for the most part, you can make an Agile project deliver on time and on budget if you do a good job of working the backlog with the highest-priority items first. That way, when you come near the end of the allocated time or budget, you hopefully have at least the highest-priority core working and can release it. So yes, we may need a metric at the portfolio level above and beyond story points, though "story point trending" can be an apples-to-apples metric too. (Why we need a metric at that level to compare teams, and what we actually do with it, is still somewhat of a puzzle for me. I guess senior management wants to see how the projects are doing or have done and wants something apples-to-apples to compare, sometimes to intervene where necessary and sometimes from a performance-appraisal perspective. Like I said, still a puzzle.)
We went through this struggle at my current organization and realized that we really needed only two metrics to give us a good picture of how our projects are doing.
1. Cycle time, or time to market - measured from the start of the project to the first release, and then from the start of every subsequent release to its implementation. All we are really trying to get at is: "How long do we take to turn a set of usable requirements into an actual working product increment?"
2. Customer satisfaction rating - measured by asking the customer/product owner how the project team is doing. The frequency can be every sprint review, every release, or something like that. The format is structured and a scoring model is followed. Since the same questions are asked for each project, the data can be compared and trended. Basically, we are trying to find out: "Did the project deliver what the customer needed, in a timely manner, with good quality, at a reasonable cost?"
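To make the cycle-time metric concrete, here is a minimal sketch; the release dates and the per-release bookkeeping are made up purely for illustration:

```python
from datetime import date

# Hypothetical release log: (start of work, implementation date) per release.
releases = [
    (date(2008, 1, 7), date(2008, 3, 3)),   # project start -> first release
    (date(2008, 3, 4), date(2008, 4, 14)),  # next release start -> implementation
]

# Cycle time per release: calendar days from usable requirements
# to a working product increment in production.
cycle_times = [(done - start).days for start, done in releases]

print(cycle_times)                          # [56, 41] days per release
print(sum(cycle_times) / len(cycle_times))  # 48.5 - the average, useful for trending
```

Tracking only the two dates per release keeps the bookkeeping trivial, which is much of the appeal of this metric over heavier scope-based calculations.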
This has proved extremely useful to us and has really shown two things as we moved from waterfall to agile: 1. our cycle time was cut in half, and 2. our customers were twice as satisfied. A great story to tell, and one of the reasons for the high adoption of agile here. As for the usefulness of the above metrics: with cycle time, for example, a project with a 300-day projected cycle time may not have thought about delivering the highest value first; they may be in all-or-nothing mode. They may need some consultation. Of course, without talking to the teams we can't jump to conclusions, but at least the metrics point us in the right direction and give us some clue as to where a team may need help.
So why did I say your thinking is on the right track? Because the above metrics pretty much agree with your thought process. Here's how:
You said uncertainty = new scope / total scope. So you are saying uncertainty answers "How much scope change did we have in our project?"
You said adaptability = implemented new scope/new scope and finally, you said absolute adaptability = implemented new scope/total scope. That is the same as asking "How much of the scope change could be implement or how responsive were we in embracing the change?". Now, here is the kicker, who better to answer this question than the customer him/herself? So if your customer satisfaction scores have these specific areas and they are addressed by the customer, you have what you need in
a much simpler agile manner.
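For concreteness, your definitions reduce to simple arithmetic. Here is a minimal sketch with made-up scope figures (the numbers are purely illustrative):

```python
# Made-up scope figures in man-days, illustrating the proposed metrics.
initial_scope = 100.0          # estimated at project start
new_scope = 40.0               # changes + hidden scope, known only at project end
implemented_new_scope = 30.0   # the portion of new scope actually delivered

total_scope = initial_scope + new_scope          # (1) 140 man-days
uncertainty = new_scope / total_scope            # (2) ~0.29
adaptivity = implemented_new_scope / new_scope   # (3) 0.75
absolute_adaptivity = adaptivity * uncertainty   # (4) ~0.21

# Sanity check: (4) equals implemented new scope / total scope by construction.
assert abs(absolute_adaptivity - implemented_new_scope / total_scope) < 1e-9

print(f"uncertainty={uncertainty:.0%}, adaptivity={adaptivity:.0%}, "
      f"absolute adaptivity={absolute_adaptivity:.0%}")
```

The arithmetic is trivial; the hard, time-consuming part is quantifying the scope inputs, which is exactly why I'd rather ask the customer.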
Why is it better to ask the customer rather than to calculate it ourselves in a complicated, time-consuming manner? First, because it is easier to ask the customer than to do the heavy lifting (EVM, function points, your proposed absolute adaptivity with its scope-quantification challenges, etc.). Second, because the worst thing you could have is internal metrics that show you are doing great while your customer thinks you are horrible. Why do car manufacturers care so much about the JD Power survey? Because it doesn't matter that Ford's internal metrics show its quality matches Toyota's. All that matters is whether the buyers think Ford's quality matches Toyota's.
----- Original Message -----
From: Chaehan So
To: scrumdevelopment@yahoogroups.com
Sent: Wednesday, April 23, 2008
Subject: [scrumdevelopment] Measuring Agile Project Effectiveness: Adaptivity vs. Uncertainty? Suggestion
Dear agile community,
I am requesting your help in finding good agile measurements for my Ph.D. work on how agile practices lead to project success through socio-psychological mechanisms.
More specifically, I am currently looking for:
- measurements of project success
- robust enough to withstand scientific scrutiny (e.g. velocity is not appropriate to compare different teams, because story points are team-relative)
Traditional measures of project success (time and cost targets) are not adapted to the agile context, since they are theoretically always met due to time-boxing. Moreover, they do not reflect major aspects of agile projects, in particular uncertainty.
For capturing uncertainty, I suggest the new effectiveness measurement "adaptivity". Here's how I derive it:
==================================================
(1) total scope = initial scope + new scope [man-days]
For feasible data collection:
a) scope is measured at the beginning of the project (initial scope)
b) all changes to initial requirements and all new requirements are summarized in the category 'new scope'
==================================================
(2) uncertainty = new scope / total scope [%]
Uncertainty can be quantified by the quantity of new scope. Note that new scope also contains the 'unknown' or 'hidden' scope which was not considered in the initial scope estimation. Therefore, new scope can only be measured at project end.
==================================================
(3) adaptivity := implemented new scope / new scope
Adaptivity in this definition measures how much of the new, changed or unexpected requirements (new scope) the team is able to implement. This measurement must therefore be evaluated in relation to uncertainty:
(4) absolute adaptivity := adaptivity * uncertainty = implemented new scope / total scope
==================================================
Now, I would really be grateful to receive your feedback on these measures!
Yet, I must emphasize that I absolutely need a _concrete_ suggestion of a better measurement and corresponding metric if you disagree.
In other words, your feedback should take the form:
<measurement> : <metric>
e.g. quality : SLOC / number of bugs during 1 month after production
Last but not least, please contact me if you are interested in participating in my field study!
Thanks a lot for your help!