
Re: Metrics to report up

  • Doug
    Message 1 of 57, Nov 1, 2010
      After reading through Alan's response, below, it occurs to me that "most management" will want to have a quantitative, comparative sense of teams' productivity (so they can determine which teams are producing more "business value", with the objective of then trying to improve the productivity of the teams producing less "business value"). Of course, since velocity is relevant only to a particular team, it is the wrong comparative measure. SO: any suggestions for obtaining such a quantitative measure? Yes, we can go to the PO and ask them to assess the "business value" produced, but that seems an ephemeral, vague, and not quantitative measure.
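
      By contrast, here is the one thing velocity does seem good for: the per-team
      projection George describes below. A minimal Python sketch, with purely
      hypothetical numbers:

      remaining_points = 120            # story points left in ONE team's backlog
      recent_velocities = [18, 22, 20]  # that same team's last few sprints

      # Report the forecast range, not the raw velocity: points are
      # team-relative, so only the projected finish means anything
      # outside the team.
      lo, hi = min(recent_velocities), max(recent_velocities)
      worst = -(-remaining_points // lo)  # ceiling division
      best = -(-remaining_points // hi)
      print("Roughly %d to %d sprints remaining" % (best, worst))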

      In scrumdevelopment@yahoogroups.com, Alan Dayley <alandd@...> wrote:
      >
      > I heartily agree with your critique, George. I may have left the
      > wrong impression about velocity. Thank you.
      >
      > Each team has their own velocity and team A may be more productive
      > than team B even with a velocity number less than team B. Don't
      > report velocities up to management. Report what they want to know.
      >
      > Alan
      >
      > On Sun, Oct 31, 2010 at 10:05 PM, George Dinwiddie
      > <lists@...> wrote:
      > > Alan,
      > >
      > > Very well said!  Except when you say
      > >
      > >> The one metric in Scrum is team velocity.  It is important for sprint
      > >> planning and for estimating the amount of work that will be complete by
      > >> a particular date.  I can't think of any other metric defined in Scrum.
      > >>   There may be others that are useful to the managers, you and your
      > >> teams.  You need to write some stories around metrics!
      > >
      > > you may leave the impression that velocity is a valid metric to be
      > > reported to management.  I don't think it is.  Its usefulness is, as
      > > you say, primarily for planning the next sprint. It can also be used for
      > > estimating into the future (as in
      > > http://blog.gdinwiddie.com/2010/04/22/projecting-into-the-future/), but
      > > it's the estimate, not velocity, that should be reported.
      > >
      > > My advice: In general, don't report raw numbers.  Those removed from the
      > > work often won't have the context to interpret them reasonably.  And,
      > > upper managers don't have the time to do the analysis for themselves.
      > > Give them the information they need, and be willing to discuss the
      > > underlying data.  Don't just pass along numbers.
      > >
      > >  - George
      > >
      > > --
      > >  Nov 15-16 Agile Testing Workshop in Orlando
      > >  http://www.sqe.com/AgileDevPracticesEast/Workshop/
      > >  ----------------------------------------------------------------------
      > >   * George Dinwiddie *                      http://blog.gdinwiddie.com
      > >   Software Development                    http://www.idiacomputing.com
      > >   Consultant and Coach                    http://www.agilemaryland.org
      > >  ----------------------------------------------------------------------
      > >
      > >
      > >
      > > ------------------------------------
      > >
      > > To Post a message, send it to:   scrumdevelopment@...
      > > To Unsubscribe, send a blank message to: scrumdevelopment-unsubscribe@...
      > > Yahoo! Groups Links
      > >
      > >
      > >
      > >
      >
    • Hariprakash Agrawal
      Message 57 of 57, Dec 13, 2010
        I have come across this scenario very often (in almost 95% of products/projects): defects escape, and I have seen this irrespective of the methodologies (or practices) used. Humans can make mistakes (at every phase/activity) for various reasons; however, we would like to keep improving ourselves continuously. We measure 'defects escaped' and take it seriously, meaning we get to the root cause and invest in the required training and expectation-setting.
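
        As a rough illustration of that metric (the field names here are made
        up, not our actual tooling), in Python:

        defects = [
            {"id": 1, "found": "in_sprint"},
            {"id": 2, "found": "after_release"},  # escaped past our tests
            {"id": 3, "found": "in_sprint"},
        ]

        escaped = sum(1 for d in defects if d["found"] == "after_release")
        print("Escape rate: %.0f%%" % (100.0 * escaped / len(defects)))

        Each escaped defect then triggers the root-cause review mentioned above.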

        We focus on design and code quality metrics, like cyclomatic complexity, fan-in, fan-out, and depth of inheritance, and we run code quality tools to check coding-standard compliance and other parameters (like memory leaks). We report this to management as well, to keep them in the loop. We also measure test-related metrics, like # of test cases (manual vs. automated), first-time pass ratio, # of defects (open, fixed, closed, postponed), etc.
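
        As one example, cyclomatic complexity can be approximated as decision
        points + 1. A minimal sketch using Python's standard ast module (an
        illustration only, not the tool we actually run):

        import ast

        # Branch-introducing nodes; counting them approximates McCabe's metric.
        DECISIONS = (ast.If, ast.For, ast.While, ast.IfExp,
                     ast.ExceptHandler, ast.And, ast.Or)

        def cyclomatic_complexity(source):
            tree = ast.parse(source)
            return 1 + sum(isinstance(n, DECISIONS) for n in ast.walk(tree))

        src = "def f(x):\n    if x > 0:\n        return x\n    return -x"
        print(cyclomatic_complexity(src))  # -> 2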

        We do not focus much on velocity, thanks to this forum. We track the release burndown, # of stories committed vs. # of stories achieved (to keep improving the team's commitment level), # of demos accepted/rejected by the PO, # of times the team got changes in the middle of a sprint (it is minimal but not zero yet; this helps in deciding sprint length and puts back-pressure on the PO), and a few more (customer satisfaction and employee satisfaction).
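
        A minimal sketch of that committed-vs-achieved tracking (the data
        layout is an assumption, not our real tool):

        sprints = [
            {"committed": 10, "done": 8},
            {"committed": 9,  "done": 9},
            {"committed": 11, "done": 10},
        ]

        for i, s in enumerate(sprints, 1):
            print("Sprint %d: %.0f%% of committed stories done"
                  % (i, 100.0 * s["done"] / s["committed"]))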

        For us, agile is a mix of Scrum and XP practices, hence we focus on both.

        --
        Regards,
        Hariprakash Agrawal (Hari),
        Managing Director | Agile Coach | http://opcord.com | http://www.linkedin.com/in/hariprakash
        Software Testing (QTP, Selenium, JMeter, AutoIT, Sahi, NUnit, VB/Java Script, Manual) || Consulting / Trainings (Agile, CMMi, Six Sigma, Project Management, Software Testing)

        On Mon, Dec 13, 2010 at 9:11 PM, Ron Jeffries <ronjeffries@...> wrote:
         

        Hello, woynam. On Monday, December 13, 2010, at 10:12:25 AM, you
        wrote:



        > Sorry, but I don't see how "defects" can escape. If you're
        > writing automated tests for every story, an "escaped" defect means
        > that you ignored the failing test. Is that really that common?

        It is possible, and not all that unlikely, to miss a test or write
        one incorrectly. It would be possible, I suppose, to define Done as
        "passes whatever tests we wrote" but that strikes me as a bit too
        lax.

        So an escaped defect would be something we didn't like, that we
        agree we understood and somehow failed to get implemented and
        tested.


        Ron Jeffries
        www.XProgramming.com
        Sorry about your cow ... I didn't know she was sacred.




