Re: Concurrent work in product backlog
On Jan 5, 2005, Heidi (the_heidster) wrote:
> Folks - I am creating my first product backlog. I know that the
> estimate for each requirement/feature should be done in days.

The important thing is that the relative sizes hold up. We
need to make sure a 4 is really twice as big as a 2. Most of
the teams I've worked with seem to have difficulties estimating
in nebulous units, just working with relative sizes. So we
end up correlating the units to something we can have a better
feel for, which is typically ideal days (if I were doing nothing
else, had no interruptions, no unexpected issues, etc.).
If we want a first-cut SWAG at the overall schedule, in
addition to size estimates, we also need to estimate velocity.
This seems easiest to do when correlated to days, and if we are
also estimating in ideal days we can correlate size to velocity. We
can take the available working days in a sprint, add them up
and then take into consideration that just about every day is
not ideal by multiplying by anywhere from 40% to 80%. Just
about every team I work with wants to use from 75% to 100%, and
it never plays out that way.
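As a rough sketch of that first-cut velocity SWAG (the team size, sprint length, and focus factor here are illustrative assumptions, not numbers from any real team):

```python
# First-cut SWAG at sprint velocity in ideal days.
# Assumed example numbers: 6 people, 10 working days in the sprint,
# and a focus factor of 0.6 (days are about 60% "ideal", within the
# 40%-80% range suggested above).
team_size = 6
working_days_in_sprint = 10
focus_factor = 0.6

available_days = team_size * working_days_in_sprint   # 60 person-days
ideal_velocity = available_days * focus_factor        # 36 ideal days/sprint
print(ideal_velocity)
```

Note that using a focus factor of 1.0 (the 100% many teams want to claim) would inflate this to 60 ideal days, which is exactly the over-commitment described above.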
Once we get some feedback from a couple of sprints, however, it
doesn't matter if the estimating units correlate to time. We
will learn that the team can average 38 estimated ideal days per
sprint. Maybe that means an ideal day maps to 1.75 elapsed days,
but we don't really care. We have 423 total estimated ideal
days left, so figure another 11 or 12 sprints.
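The remaining-sprints arithmetic above can be sketched like this, using the same example numbers (38 ideal days per sprint measured, 423 ideal days left):

```python
import math

# Measured average velocity after a few sprints, and remaining work,
# both in estimated ideal days (numbers from the example above).
measured_velocity = 38
remaining_work = 423

sprints_left = remaining_work / measured_velocity  # ~11.13
print(math.ceil(sprints_left))                     # round up: 12 sprints
```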
> Questions, do I just add up the days to estimate the completion
> date? This would indicate that each day is consecutive. How does
> this take into account concurrent work (or does it?)

As mentioned above, we use the total work size and the team's
estimated velocity per sprint to estimate a completion date.
There are two types of concurrent work. The first is when the
team is working on several backlog items at the same time. The
second is when we have a cross-functional team, where a backlog
item needs several skill sets to get it done (which is pretty
common -- it's especially typical that we have developers and
testers, and usually types of developers as well, DBAs, etc.),
and some of that work happens in parallel.
I use a concept I call a "work stream." A work stream is the
collection of people we gather together to get a backlog item
done. For example, we use a pair of developers, a DBA and a
tester. It's not necessary that these people are dedicated to
one work stream -- a DBA may timeshare across them. For
estimating at the backlog item level, we assume a minimal work
stream size. When we actually task the backlog item out for a
sprint, we may put more than one work stream on it, reducing
the elapsed time but still taking up the same total ideal time.
So as an example, I have a backlog item for an e-commerce
application feature. My minimal work stream is one interface
designer, a pair of developers and a tester. I estimate how
long it takes this group to get the backlog item done. The
interface designer needs a day before the developers can get
started, then everyone works in parallel for 3 days, and then
the tester and developers need a day for final test and fix.
The overall estimate for this backlog item would be 5 days.
This is not exact. The interface designer could very well go
off and start working on another backlog item while the tester
and developers are in final test. But it's close enough, and
the actual measured velocity will rise if the team learns how
to overlap backlog item work like this. The alternative is to
map out a complex set of tasks and dependencies, in other words
something like a Gantt chart, and we know how well that works.
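The work-stream arithmetic for the e-commerce example boils down to summing the elapsed time of each phase (the phase durations are the ones from the example above):

```python
# Elapsed days for each phase of the example work stream:
# design lead time, parallel work, then final test and fix.
phases = {
    "interface design lead time": 1,
    "parallel design/dev/test work": 3,
    "final test and fix": 1,
}
elapsed_estimate = sum(phases.values())
print(elapsed_estimate)  # 5 days for the backlog item
```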
Some folks would say the work stream concept I mention above is
trying to be too exact. An easier approach is to consider what
your critical path resource would be, typically the developers,
and then just estimate their time. This wouldn't consider the
work that has to happen prior to or after the developers, and
thus the actual velocity would tend to fall if there is enough
of it happening.
In any case, we take the number of available work streams times
the number of work days and that gives us our ideal velocity for
a sprint. Divide the total work size in ideal days by this
velocity and we get the number of sprints remaining, and hence
the estimated completion date.
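Those two steps can be put together in a minimal sketch (the work-stream count, sprint length, and total work size are illustrative assumptions):

```python
import math

# Illustrative assumptions: 3 available work streams, 10 work days
# per sprint, and 150 ideal days of total remaining work.
work_streams = 3
work_days_per_sprint = 10
total_work = 150  # ideal days

# Ideal velocity: work streams times work days in a sprint.
ideal_velocity = work_streams * work_days_per_sprint        # 30
# Sprints remaining: total work divided by velocity, rounded up.
sprints_remaining = math.ceil(total_work / ideal_velocity)  # 5
print(sprints_remaining)
```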
> Also, by what criteria should I decide the "adjustment factor"?
> How important is the adjustment factor?

There are adjustments that can be applied to both size and
velocity estimates. Earlier I mentioned adjusting the velocity
to accommodate less than ideal days.
Ken Schwaber has a set of velocity adjustment factors that he
teaches as part of the Certified ScrumMaster course. I'm not
sure if he considers these proprietary to the course, so I won't
detail them here. But some of the more important criteria I use
for adjusting velocity are if the team is distributed, if we
have multiple teams coordinating work, if it's a new team, if
they're new to Scrum, if they have some unpredictable legacy
support on their plate, etc. These things may slow them down.
We can also apply adjustments to size estimates. If we're using
new technology or building in some new domain, the amount of work
may be larger than we might estimate. We may even adjust certain
types of backlog items, for example the web services features may
be proving troublesome so we want to adjust them a bit.
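One way to picture adjusting certain types of backlog items: pad the size estimates of the troublesome type by a factor. The item names, types, and the 25% factor here are all hypothetical, chosen just to show the mechanics:

```python
# Hypothetical size adjustment: pad the estimates of backlog items
# of a troublesome type (here, web services) by an assumed 25%.
backlog = [
    {"name": "checkout page", "type": "ui", "size": 4},
    {"name": "payment gateway", "type": "web service", "size": 8},
]
ADJUSTMENT = {"web service": 1.25}  # assumed factor, not a standard value

for item in backlog:
    item["adjusted_size"] = item["size"] * ADJUSTMENT.get(item["type"], 1.0)

print([item["adjusted_size"] for item in backlog])  # [4.0, 10.0]
```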
The adjustment factor is important if the team is not taking it
into consideration when estimating. It seems to be the nature
of estimating software projects that we tend to be optimistic.
More experienced teams may have realized their tendencies and
apply their own adjustment factors. Less experienced teams may
not consider them. A ScrumMaster should watch the team and
suggest applying adjustments if they feel they're needed. I
would not take the team's estimates and apply an adjustment
after the fact without their understanding and agreement.
But, after a few iterations, regardless of whether we adjust we
will learn our actual velocity and converge on more accurate
overall estimates. Initially, however, in order to avoid
establishing overly optimistic expectations, we may want to
apply some adjustment factors. Just be sure to establish the
understanding that we're SWAG'ing these things, and the only
realistic numbers are those that result from concrete feedback.
I hope all that helps a bit. There is a Certified ScrumMaster
course that I'm helping Ken teach coming up in San Diego
February 7-8, where you can learn all about planning and the
rest of Scrum.
Paul Hodgetts -- CEO, Principal Consultant
Agile Logic -- www.agilelogic.com
Consulting, Coaching, Training -- On-Site & Out-Sourced Development
Agile Processes/Scrum/Lean/XP -- Java/J2EE, C++, OOA/D, UI/IA, XML
XP San Diego User Group - Thursday, January 6, 2005
"Can RUP Be Agile? Can RUP Be Extreme?"
Orange County Rational Users Group - Thursday, January 20, 2005
Certified ScrumMaster Training, San Diego, CA - February 7-8, 2005