
Re: FW: [scrumdevelopment] Metrics

  • Deb
    Message 1 of 22, May 1 9:58 AM
      --- In scrumdevelopment@yahoogroups.com, "MaryP" <mpoppendieck@...> wrote:
      >
      > >>From: mpkirby@...
      >
      > >>I've got a manager who asks the very simple question:
      >
      >
      >
      > >>"What metrics do I collect that tells me this agile stuff is
      > >>actually doing my group any good. If there is no benefit (positive
      > >>or negative), I'm much less inclined to make a change"
      >
      >
      >
      > >>I thought it was a good question, and I was wondering if anyone
      > >>else had similar questions.
      >
      >
      >
      > Mike,
      >
      >
      >
      > Here are the three metrics I recommend:
      >
      >
      >
      > 1) Cycle time
      >
      > 2) Business Case Realization
      >
      > 3) Net Promoter Score
      >
      >
      >
      > I'll take them in reverse order.
      >
      >
      >
      > Net Promoter Score - this is a measure of customer satisfaction. See
      > details in the book "The Ultimate Question" by Fred Reichheld. You ask
      > a simple question: on a scale of 0-10, how likely are you to recommend
      > this team/organization to a friend or colleague? There are three kinds
      > of customers: promoters (10s and 9s) who would recommend the product,
      > detractors (6s through 0s) who would recommend that people avoid the
      > product, and neutral customers (7s and 8s) who have no bias one way or
      > the other. With this data you can calculate a "net promoter" score:
      > subtract the percentage of detractors from the percentage of promoters.
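Mary's "net promoter" arithmetic is easy to sketch in a few lines of Python (a toy illustration with made-up survey answers, not anything from the thread):

```python
def net_promoter_score(scores):
    """NPS from a list of 0-10 answers: % promoters (9s and 10s)
    minus % detractors (0 through 6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# 4 promoters, 2 detractors, 4 neutrals -> 40% - 20% = 20
print(net_promoter_score([10, 9, 9, 10, 8, 7, 7, 8, 6, 3]))  # 20.0
```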

      I like this! I'm going to suggest it to my current client. Right now
      customer sat is a major goal but they wanted to do it anecdotally
      (small company). After public kudos last week, the time seems ripe to
      start asking this more formally.
      >
      >
      > Business Case Realization - Every product or project should be based
      > on a business case of some sort - Profit & Loss, ROI, whatever. Even
      > non-profits and government organizations have business cases. I
      > propose that measuring the *realization* of the business case is the
      > best way to make the appropriate tradeoffs between cost, schedule,
      > etc. Yes, this is a long-term measurement, but aren't we trying to
      > create long-term value?
      >
      At a former client there was the whole "cost centre" thing... there
      are few business cases made at the customer level, so they know what a
      project costs but not what it's *worth* to the business. This is
      deadly for the reputation of a team - they can be perceived as a dead
      weight, when really they are committed and working hard! Talk about
      hiding your light under a bushel! Ambushelled, to coin a word :-)

      >
      >
      > Cycle Time - Here's an example of how to measure cycle time:
      >
      >
      >
      > 1. When a defect goes on your defect list, give it a date. When it
      > gets resolved, calculate how long it was on the list. Keep track of
      > two numbers: average cycle time of resolved defects and average age
      > of items on the defect list.
      >
      > 2. When an item goes on the product backlog, give it a date. If it is
      > broken out into smaller pieces, each piece keeps the original date.
      > If two items are combined, the new item gets the older date. When an
      > item is deployed, subtract the original date from the deployment
      > date. This is its cycle time. Compute the average cycle time and the
      > standard deviation for the items in each release. Also compute the
      > average waiting time of each item still in the backlog.
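The dating scheme in item 2 might be mechanized roughly like this (a sketch with invented items and dates):

```python
from datetime import date
from statistics import mean

# Hypothetical backlog: (date added, date deployed or None if still open)
items = [
    (date(2006, 1, 10), date(2006, 3, 1)),
    (date(2006, 1, 20), date(2006, 4, 15)),
    (date(2006, 2, 5), None),  # still waiting in the backlog
]

today = date(2006, 5, 1)
cycle_times = [(done - added).days for added, done in items if done]
waiting = [(today - added).days for added, done in items if done is None]

print("average cycle time (days):", mean(cycle_times))      # resolved items
print("average wait of open items (days):", mean(waiting))  # still queued
```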
      >
      > I have heard objections to this approach, particularly: I put big
      > items on the product backlog; no one has spent any time on them and
      > customers don't consider them an "order."
      >
      > 3. In this case you might divide backlog items into three
      > categories:
      >
      > A) Features customers think they have "ordered" or items that have
      > had any measurable investment of time.

      Mary, how would you relate this to the Scrum product backlog
      maintenance cycle? Current and next N releases? (I know, it will
      differ by client, just trying to get a feel)

      >
      > B) Stuff we need to break out in order to think about architecture,
      > estimate overall schedule, etc.

      This feels like "too big to put into a sprint as-is"
      or "requirement hasn't been well thought out yet" - definitely not
      stuff at the top of the prioritized backlog.
      >
      > C) Really big bullet product roadmap items
      >
      bottom-of-the-backlog stuff, "Replace General Ledger" etc.

      > Items go into the highest category that fits - so if there are customers
      > that are waiting for a roadmap item, it's an A.
      >
      > Date everything as it goes on the backlog. Add a new date if an item
      > moves from C to B. Add another date when it moves to A.
      >
      > When an item is deployed, compute three categories of cycle times:
      >
      > 1) Elapsed time since being assigned Category A
      >
      > 2) Elapsed time since being assigned Category B
      >
      > 3) Elapsed time since being assigned Category C
      >
      > Every release, compute the average cycle time and standard deviation
      > for each of the three categories. See what the numbers tell you. If
      > you need different categories, create them.
      >
      >
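One way to sketch the category dating and the per-category cycle times (hypothetical items and dates; `category_cycle_times` is an illustrative name, not anything from the thread):

```python
from datetime import date
from statistics import mean, pstdev

def category_cycle_times(items):
    """Per category, average and std dev of days from the date an item
    entered that category to its deployment date."""
    stats = {}
    for category in ("A", "B", "C"):
        times = [(it["done"] - it[category]).days
                 for it in items if it[category]]
        if times:
            stats[category] = (mean(times), pstdev(times))
    return stats

# Hypothetical items: dates of entering categories C, B and A
# (None = skipped that category), plus the deployment date.
deployed = [
    {"C": date(2006, 1, 1), "B": date(2006, 2, 1),
     "A": date(2006, 3, 1), "done": date(2006, 4, 1)},
    {"C": None, "B": None,
     "A": date(2006, 3, 15), "done": date(2006, 4, 20)},
]
print(category_cycle_times(deployed))
```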
      This feels like drawing a line on the product backlog: real
      requirements, potential requirements, nice-to-haves. Am I wrong?

      > While you are at it, measure the average age of the A items left in
      > the backlog. Is it within a release or two of the average cycle time
      > of the A items in the release? What does this tell you?

      I need to think more about this one... :-)
      Thanks Mary.

      >
      >
      >
      > Best of luck!
      >
      >
      >
      > Mary Poppendieck
      >
      > www.poppendieck.com
      >
    • Steven Gordon
      Message 2 of 22, May 1 11:39 AM
        On 5/1/06, Mary Poppendieck <maryp@...> wrote:
        > Deb,
        >
        > I still don't understand what you are trying to measure.
        >
        > Utilization is a poisonous measurement and attempting to achieve
        > high utilization is one of the most sub-optimizing practices there
        > is. Slack time IS NOT WASTE, it is required for rapid delivery, and
        > because of this it underlies the ability to deliver high quality.
        >
        > This is not to say that you need to have low utilization - it is
        > only to say that attempts to maximize utilization are virtually
        > guaranteed to decrease it.
        >
        > If you were an operations manager and tried to optimize the
        > utilization of your servers, you'd get fired. Development managers
        > who try to optimize the utilization of their people have no sense of
        > queueing theory, or perhaps think that the laws of mathematics do
        > not apply to them. They are wrong.

        I have been dismissing utilization as a valid metric for intuitive
        reasons that resemble yours.

        However, your server example makes me question that assumption now.
        It is indeed standard practice to measure utilization of servers - in
        order to make sure that utilization is not too close to 100%.

        Maybe we should be measuring utilization, but with a target of
        something like 70-80% rather than 100%. Surely, < 50% utilization of
        resources is an indication of potential waste, just as more than 80%
        would be an indication of potential systemic inefficiency and
        unameliorated risk.

        Steven Gordon
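Steven's suggested band turns into a one-line check; a toy sketch (the 50% and 80% thresholds are his suggested numbers above, not an established rule):

```python
def utilization_flag(busy_hours, available_hours):
    """Classify utilization against the suggested 50-80% healthy band."""
    u = busy_hours / available_hours
    if u < 0.5:
        return "possible waste"
    if u > 0.8:
        return "possible overload / queueing risk"
    return "ok"

print(utilization_flag(30, 40))  # 75% utilization -> ok
```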
      • leigh_mullin
        Message 3 of 22, May 1 3:09 PM
          In my experience, utilisation - as measured by asking people to book
          time to various codes in a time system, some of which are seen as
          productive/revenue generating and some as unproductive/non revenue
          generating - simply encourages people to book time to codes that
          keep managers happy. Sure, you get higher utilisation. But you don't
          get more productive/valuable code or design. Moreover, just because
          one code is flagged as revenue generating doesn't mean the
          activities booked to non revenue generating codes aren't important.

          Utilisation is a measure that finance or IT managers who aren't
          software literate like to look at. It makes them feel happy.

        • mpkirby@frontiernet.net
          Message 4 of 22, May 1 6:24 PM
            On 1 May 2006 at 15:25, Mary Poppendieck wrote:

            > Utilization is a poisonous measurement and attempting to achieve
            > high utilization is one of the most sub-optimizing  practices there
            > is.  Slack time IS NOT WASTE, it is required for rapid delivery, and
            > because of this it underlies the ability to deliver high quality.

            So I just got Mike Cohn's book today on agile estimating (it's a great book Mike). I haven't
            read it completely, but leafing through it I came to a section that talked about Critical Chain
            planning.

            Specifically, it talked about how to introduce resource buffers into tasks.

            Let's say I have 3 tasks:

            T1 is estimated at 10 hours (+/- 2 hours)
            T2 is estimated at 20 hours (+/- 10 hours)
            T3 is estimated at 5 hours (+/- 1 hour).

            There are three ways to look at the confidences. I can ignore them (what most of us do :-), I
            can extend the estimates for each task (T1 becomes 12 hours, T2 becomes 30 hours, and
            T3 becomes 6 hours), or I can take the confidence factor and collapse it together into a
            resource buffer that is used by the entire project (35 hours of tasks, with 13 hours of buffer).
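Mike's arithmetic can be sketched as follows (hypothetical task list; this pools the +/- ranges into one shared buffer by straight addition, matching his 35 + 13 example - critical chain texts often pool them as a square root of summed squares instead):

```python
# Hypothetical task list: estimate and +/- uncertainty, both in hours
tasks = {
    "T1": (10, 2),
    "T2": (20, 10),
    "T3": (5, 1),
}

work = sum(est for est, err in tasks.values())    # planned task hours
buffer = sum(err for est, err in tasks.values())  # pooled shared buffer

print(f"plan {work}h of tasks plus a {buffer}h shared buffer")  # 35h + 13h
```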

            The question (and I apologize if it is covered in your book Mike, I just haven't gotten to it
            yet) is: it seems that the way to apply this to scrum is to reduce the amount of "capacity" of
            the team by the size of the buffer.

            So if I have 10 stories, covering 35 story points, with 10 story points of potential error, then I
            should make sure the team is capable of completing at least 45 story points in that iteration.

            The idea of reducing the overall "capacity" of the iteration to factor in probable estimation
            error isn't something we've considered.

            Do others do something like that? Or do we just adjust velocity over time based on actuals,
            rather than try to deal with confidence numbers?

            Mike

            ---
            mpkirby@...
          • David H.
            Message 5 of 22, May 2 1:08 AM
              mpkirby@... wrote:

              >
              > T1 is estimated at 10 hours (+/- 2 hours)
              > T2 is estimated at 20 hours (+/- 10 hours)
              > T3 is estimated at 5 hours (+/- 1 hour).
              >

              Personally I always thought that people are off by a certain
              "percentage" and not by a fixed factor? I thought that is also the
              reason in planning optimization to reduce the batch size? Did I misread
              that?

              -d
            • mpkirby@frontiernet.net
              Message 6 of 22, May 2 4:00 AM
                On 2 May 2006 at 10:08, David H. wrote:

                > Personally I always thought that people are off by a certain
                > "percentage" and not by a fixed factor?

                In practice, we use a delphi process for doing the estimates. Depending on the spread, we
                calculate the "error". Typically we add 1/2 a standard deviation to the estimates. It's
                spreadsheet magic. It works pretty well, except for larger features, where we can't seem to
                estimate right no matter what we do.

                Mike

                ---
                mpkirby@...
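Mike's half-standard-deviation rule might look roughly like this in code (invented estimates; `padded_estimate` is an illustrative name, not his actual spreadsheet):

```python
from statistics import mean, pstdev

def padded_estimate(estimates):
    """Combine one Delphi round: the mean of the estimates, padded by
    half a (population) standard deviation of the spread."""
    return mean(estimates) + 0.5 * pstdev(estimates)

# Four estimators size the same feature, in hours
print(padded_estimate([8, 10, 12, 10]))  # 10 plus half of ~1.41
```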
              • budcookson
                Message 7 of 22, May 3 9:29 AM
                  --- In scrumdevelopment@yahoogroups.com, mpkirby@... wrote:
                  > It works pretty well, except for larger features, where we can't seem to
                  > estimate right no matter what we do.
                  >
                  > Mike
                  >

                  Mike - there are two things that make my estimating a lot more
                  accurate. First is to make sure that you never estimate something
                  that you can't get your arms around. Typically, I say that anything
                  less than 40 hours is going to be pretty accurate because you can
                  easily comprehend what it is going to take to do the work. However,
                  this number will vary depending on the people and the environment.

                  The second thing that I do is to estimate the accuracy of my estimates
                  based on the unknowns in the estimating process. It doesn't take long
                  to master the technique and it doesn't have to be applied to every
                  estimate. Just those that are larger than you feel comfortable
                  accepting the risk of missing the estimate by X% (whatever that number
                  is).

                  Good estimating to all.

                  Bud Cookson
                  www.RidgelineSoftware.biz
                  www.BudCookson.com
                  LinkedIn: www.LinkedIn.com/in/BudCookson