- In my experience, utilisation, as measured by asking people to book
time to various codes in a time system, some of which are seen as
productive/revenue-generating and some as unproductive/non-revenue-generating,
simply encourages people to book time to codes that keep
managers happy. Sure, you get higher utilisation. But you don't get
more productive or valuable code or design. Moreover, just because
something is coded as revenue generating doesn't mean that the codes
that are non revenue generating aren't important.
Utilisation is a measure that finance or IT managers who aren't
software literate like to look at. It makes them feel happy.
--- In firstname.lastname@example.org, "Steven Gordon"
> On 5/1/06, Mary Poppendieck <maryp@...> wrote:
> > Deb,
> > I still don't understand what you are trying to measure.
> > Utilization is a poisonous measurement and attempting to achieve
> > high utilization is one of the most sub-optimizing practices there
> > is. Slack time IS NOT WASTE, it is required for rapid delivery, and
> > because of this it underlies the ability to deliver high quality.
> > This is not to say that you need to have low utilization - it is
> > only to say that attempts to maximize utilization are virtually
> > guaranteed to decrease it.
> > If you were an operations manager and tried to optimize the
> > utilization of your servers, you'd get fired. Development managers
> > who try to optimize the utilization of their people have no sense of
> > queueing theory, or perhaps think that the laws of mathematics do
> > not apply to them. They are wrong.
> I have been dismissing utilization as a valid metric for intuitive
> reasons that resemble yours.
> However, your server example makes me question that assumption now. It
> is indeed standard practice to measure utilization of servers - in
> order to make sure that utilization is not too close to 100%.
> Maybe we should be measuring utilization, but with a target of
> something like 70-80% rather than 100%. Surely, < 50% utilization of
> resources is an indication of potential waste, just as more than 80%
> would be an indication of potential systemic inefficiency and
> unameliorated risk.
> Steven Gordon
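Mary's server analogy has a standard queueing-theory reading. The thread doesn't name a specific model, so as an illustrative assumption, here is the simplest textbook one (M/M/1): as utilization approaches 100%, the time a piece of work spends in the system grows without bound.

```python
# Sketch only: M/M/1 is an assumption for illustration, not a claim
# about any particular team or server. It shows the shape of the curve
# behind "don't run too close to 100%".

def mm1_response_factor(utilization):
    """Average time in system relative to raw service time for an
    M/M/1 queue: at utilization rho, the factor is 1 / (1 - rho)."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return 1.0 / (1.0 - utilization)

for rho in (0.5, 0.7, 0.8, 0.9, 0.95, 0.99):
    factor = mm1_response_factor(rho)
    print(f"{rho:.0%} busy -> work takes about {factor:.1f}x its raw time")
```

Note how the 70-80% target suggested above sits just before the knee of the curve: at 80% busy, work already takes roughly 5x its raw time; at 95%, roughly 20x.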
- On 1 May 2006 at 15:25, Mary Poppendieck wrote:
> Utilization is a poisonous measurement and attempting to achieve
> high utilization is one of the most sub-optimizing practices there
> is. Slack time IS NOT WASTE, it is required for rapid delivery, and
> because of this it underlies the ability to deliver high quality.
So I just got Mike Cohn's book today on agile estimating (it's a great book, Mike). I haven't
read it completely, but leafing through it I came to a section that talked about Critical Chain.
Specifically, it talked about how to introduce resource buffers into tasks.
Let's say I have 3 tasks:
T1 is estimated at 10 hours (+/- 2 hours)
T2 is estimated at 20 hours (+/- 10 hours)
T3 is estimated at 5 hours (+/- 1 hour).
There are three ways to look at the confidences. I can ignore them (what most of us do :-), I
can extend the estimates for each task (T1 becomes 12 hours, T2 becomes 30 hours, and
T3 becomes 6 hours), or I can take the confidence factors and collapse them together into a
resource buffer that is used by the entire project (35 hours of tasks, with 13 hours of buffer).
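A rough sketch of those options, using the numbers above. The root-sum-square line at the end is an addition of mine: Critical Chain texts often aggregate pooled uncertainty that way, on the assumption that independent errors partly cancel, which is why a shared buffer can be smaller than the sum of the paddings.

```python
import math

# (estimate, plus/minus) in hours, from the message above
tasks = [(10, 2), (20, 10), (5, 1)]

base = sum(est for est, _ in tasks)             # 35 hours of tasks

# Option 2: pad each task individually (12 + 30 + 6)
padded = sum(est + unc for est, unc in tasks)   # 48 hours total

# Option 3: pool the uncertainty into one project buffer,
# summed as in the message (35 hours of tasks + 13 of buffer)
pooled_buffer = sum(unc for _, unc in tasks)    # 13 hours

# Common Critical Chain variant (my assumption, not from the thread):
# aggregate by root-sum-square, since independent errors partly cancel
rss_buffer = math.sqrt(sum(unc ** 2 for _, unc in tasks))

print(base, padded, pooled_buffer, round(rss_buffer, 1))
```

Under root-sum-square the shared buffer comes to about 10.2 hours instead of 13, i.e. cheaper than padding every task and cheaper even than the simple pooled sum.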
The question (and I apologize if it is covered in your book, Mike, I just haven't gotten to it
yet) is: it seems that the way to apply this to Scrum is to reduce the amount of "capacity" of
the team by the size of the buffer.
So if I have 10 stories, covering 35 story points, with 10 story points of potential error, then I
should make sure the team is capable of completing at least 45 story points in that iteration.
The idea of reducing the overall "capacity" of the iteration to factor in probable estimation
error isn't something we've considered.
Do others do something like that? Or do we just adjust velocity over time based on actuals,
rather than try to deal with confidence numbers?
- mpkirby@... wrote:
> T1 is estimated at 10 hours (+/- 2 hours)
> T2 is estimated at 20 hours (+/- 10 hours)
> T3 is estimated at 5 hours (+/- 1 hour).
Personally I always thought that people are off by a certain
"percentage" and not by a fixed factor? I thought that is also the
reason in planning optimization to reduce the batch size? Did I misread
- On 2 May 2006 at 10:08, David H. wrote:
> Personally I always thought that people are off by a certain
> "percentage" and not by a fixed factor?
In practice, we use a Delphi process for doing the estimates. Depending on the spread, we
calculate the "error". Typically we add 1/2 a standard deviation to the estimates. It's
spreadsheet magic. It works pretty well, except for larger features, where we can't seem to
estimate right no matter what we do.
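The "spreadsheet magic" described above can be sketched as follows. This is my reading of it, and the sample numbers are invented: collect the Delphi-round estimates for one item, then pad the mean by half the sample standard deviation of the spread.

```python
import statistics

def padded_estimate(delphi_estimates):
    """One reading of 'add 1/2 a standard deviation to the estimates':
    mean of the Delphi-round numbers plus half their sample std dev.
    Hypothetical helper; the thread doesn't give the exact formula."""
    mean = statistics.mean(delphi_estimates)
    spread = statistics.stdev(delphi_estimates)
    return mean + 0.5 * spread

# Invented example: four estimators give hours for the same story
estimates = [8, 10, 12, 14]
print(round(padded_estimate(estimates), 1))
```

A wide spread (disagreement among estimators) automatically produces a bigger pad, which is the point: the padding scales with how uncertain the group actually is.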
--- In email@example.com, mpkirby@... wrote:
> It works pretty well, except for larger features, where we can't seem to
> estimate right no matter what we do.
Mike - there are two things that make my estimating a lot more
accurate. First is to make sure that you never estimate something
that you can't get your arms around. Typically, I say that anything
less than 40 hours is going to be pretty accurate because you can
easily comprehend what it is going to take to do the work. However,
this number will vary depending on the people and the environment.
The second thing that I do is to estimate the accuracy of my estimates
based on the unknowns in the estimating process. It doesn't take long
to master the technique and it doesn't have to be applied to every
estimate. Just those that are larger than you feel comfortable
accepting the risk of missing the estimate by X% (whatever that number
may be for you).
Good estimating to all.