Re: [scrumdevelopment] Re: Metrics to report up

  • Koti Reddy Bhavanam
    Message 1 of 57, Dec 13, 2010
      Apart from this, I see another good metric: number of escaped defects. This is a very powerful metric when we are delivering a demonstrable piece of code at the end of each sprint.
       
      When we certify something as working fine, any issues that then come back from BAs/Product Owners should be treated as escaped defects. The only exclusion here is feedback like improvements/enhancements which were not originally asked for by the stakeholders but are realized by them when the team demos on the last day of the sprint.
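
      As a rough illustration (Python, with invented field names; nothing here comes from the thread itself), tallying escaped defects under that rule might look like:

        # Hypothetical sketch: count escaped defects for a sprint.
        # "Escaped" = raised by BAs/Product Owners after the team
        # certified the work, excluding enhancement requests.
        from dataclasses import dataclass

        @dataclass
        class Issue:
            reported_after_signoff: bool
            is_enhancement: bool  # demo-inspired ideas are excluded

        def escaped_defects(issues):
            return [i for i in issues
                    if i.reported_after_signoff and not i.is_enhancement]

        issues = [
            Issue(True, False),   # escaped defect
            Issue(True, True),    # enhancement request: excluded
            Issue(False, False),  # caught before sign-off
        ]
        print(len(escaped_defects(issues)))  # -> 1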
       
      Thanks & Regards,
      Koti Reddy Bhavanam
      Ph: 09676863535



      From: Roy Morien <roymorien@...>
      To: scrumdevelopment@yahoogroups.com
      Sent: Sat, November 13, 2010 8:07:42 AM
      Subject: RE: [scrumdevelopment] Re: Metrics to report up

       

      The only usefulness of velocity as a measure somehow indicating 'improvement' is as a relative measure, or comparison. That is the underlying implication of 'improvement'.

      That is, provided everything else is held equal. If stories are estimated in the same rating manner, and the story point scale remains constant (i.e. not suddenly changed from 1,2,4,8 to 4,8,12,16 or something), the only variable left is the completion rate in a sprint. If the team has in fact improved, then the velocity will increase ... but not by much, given that velocity is essentially a moving average calculated over a number of sprints.
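
      As a small sketch of that point (Python, with an arbitrary 3-sprint window and invented numbers), one improved sprint barely moves the average:

        # Sketch: velocity as a moving average of completed story points.
        def velocity(points_per_sprint, window=3):
            recent = points_per_sprint[-window:]
            return sum(recent) / len(recent)

        history = [20, 22, 21]
        print(f"{velocity(history):.2f}")         # 21.00
        print(f"{velocity(history + [25]):.2f}")  # 22.67, not 25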

      There is always the danger, arising from management interference and pressure (and threats), that the team will game the measure and make exactly those changes to its estimating. This is an almost inevitable outcome of succumbing to management pressure to 'improve', or to demonstrate how effective and efficient you are.

      This is why trying to find quantitative metrics to report up is fundamentally irrelevant, and it is about time 'management' started viewing software developers as professionals, and stopped trying to manage them, and measure them, as if they were base-grade clerks filing invoices.

      Regards,
      Roy Morien


      To: scrumdevelopment@yahoogroups.com
      From: wouter@...
      Date: Fri, 12 Nov 2010 23:18:33 +0100
      Subject: Re: [scrumdevelopment] Re: Metrics to report up

       
      Hi Charles,


      On Fri, Nov 12, 2010 at 9:52 PM, Charles Bradley - Scrum Coach, CSM, PSM I <chuck-lists2@...> wrote:

      Wouter,

      I concur with these guys that velocity is NOT an appropriate measure to report up.  Is it ok to be visible? Sure, but to actually "report" on it is inappropriate. 

      If you "must" report up, I do like the idea of a release burndown type of chart being reported up.  Just make sure you reflect scope in the chart as well as progress toward the release.  Cohn(_Agile Estimating and Planning_) and others  have documented how to be able to indicate added/removed scope in a Release burndown chart.

      Well, the only reason I'm including velocity is to be able to give a useful release burndown. The overview I'm working on basically started with me asking them what they really need to do their jobs, which mostly involves being able to discuss possible release dates. I explained that scope should be part of that discussion with a customer, so that there would always be at least two dimensions to discuss: changing the release date(s) and/or changing the scope. This seemed to make them happy enough...
      I did stress that velocity can't be used to compare teams, or judge 'performance', and to be honest I've never really had a problem with people trying that (or dissuading them...). Obviously, other people here have, so I'll learn from their experience and not make that a central part of the overview...
       
      Some of the most practical stuff I've found on Scrum metrics is in Cohn's _Succeeding With Agile_, Chapter 21. 

      Ah, I'll look that up, it's on my nightstand :-)
       
      For most of these, he says to establish a target and measure how close you are to it.
      Some example metrics (I'm paraphrasing a bit here too; a small sketch of checking actuals against targets follows the list):
      1.  # of defects reported within 30 days of a release
      2.  System downtime
      3.  Scores on quarterly customer surveys
      4.  Number of major releases per quarter
      5.  Scores on employee satisfaction surveys (asking things such as "Do you enjoy working here?" and "Would you recommend Scrum to a friend at another company?")
      6.  % of teams that have an updated release burndown chart (target 100%)
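
      As a toy example of that target-based approach (Python; the metric names and numbers are made up), the comparison could be as simple as:

        # Sketch: compare each metric's actual value against its target.
        targets = {
            "defects_within_30_days": 5,     # at most
            "teams_with_burndown_pct": 100,  # at least
        }
        actuals = {
            "defects_within_30_days": 8,
            "teams_with_burndown_pct": 85,
        }

        for metric, target in targets.items():
            actual = actuals[metric]
            print(f"{metric}: actual {actual} vs target {target} "
                  f"(gap {actual - target:+})")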

      One would have to make sure that they categorize/define defects properly (as we've done previously on this list) to make #1 work, I think.
      I wonder if measuring the number of releases that were required due to bug fixes would be worthwhile.

      That, and how quickly bugs were fixed and released, could also be useful. Employee satisfaction is a really nice one, but I'm not too sure it could be very anonymous in such a small company.
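
      Both of those ideas reduce to simple bookkeeping. A sketch (Python; the records and timestamps are invented for illustration):

        # Sketch: count bug-fix-only releases and the mean time from a
        # bug being reported to the release that fixed it.
        from datetime import date

        releases = [
            {"version": "1.1", "bugfix_only": True},
            {"version": "1.2", "bugfix_only": False},
        ]
        bugs = [
            {"reported": date(2010, 11, 1), "released": date(2010, 11, 4)},
            {"reported": date(2010, 11, 2), "released": date(2010, 11, 9)},
        ]

        hotfixes = sum(r["bugfix_only"] for r in releases)
        days = [(b["released"] - b["reported"]).days for b in bugs]
        print(f"bug-fix releases: {hotfixes}")
        print(f"mean days to fix and release: {sum(days) / len(days):.1f}")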

      I certainly have had good feedback, which I'll try to use. I'll have a version of what I'll be proposing ready next week, so it might be a good idea to share that here, and see how you all like it... 

      thanks,

      Wouter


    • Hariprakash Agrawal
      Message 57 of 57, Dec 13, 2010
        I have come across this scenario very often (in almost 95% of products/projects): defects escape, and I have seen this irrespective of the methodologies (or practices) used. Humans can make mistakes (at every phase/activity) for various reasons; however, we would like to keep improving ourselves continuously. We measure 'defects escaped' and take it seriously, meaning we get to the root cause and invest in the required training/expectations.

        We focus on design & code quality metrics, like cyclomatic complexity, fan-in, fan-out, and depth of inheritance, and run some code quality tools to check coding standard compliance and other parameters (like memory leaks). We report this to management as well to keep them in the loop. We measure test-related metrics, like # of test cases (manual vs automated), first-time pass ratio, # of defects (open, fixed, closed, postponed), etc.
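
        As a small sketch of the test-related tallies (Python; the records are invented), the first two might be computed like this:

          # Sketch: manual-vs-automated split and first-time pass ratio.
          tests = [
              {"automated": True,  "passed_first_run": True},
              {"automated": True,  "passed_first_run": False},
              {"automated": False, "passed_first_run": True},
          ]

          automated = sum(t["automated"] for t in tests)
          first_pass = sum(t["passed_first_run"] for t in tests)
          print(f"automated vs manual: {automated} / {len(tests) - automated}")
          print(f"first-time pass ratio: {first_pass / len(tests):.0%}")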

        We do not focus much on velocity, thanks to this forum. We track the release burndown, # of stories committed vs # of stories achieved (to keep improving the team's commitment level), # of demos accepted/rejected by the PO, # of times the team got changes in the middle of a sprint (it is minimal but not zero yet; this helps in deciding sprint length and puts back-pressure on the PO), and a few more (customer satisfaction and employee satisfaction).

        For us, agile is a mix of Scrum and XP practices, hence we focus on both.

        --
        Regards,
        Hariprakash Agrawal (Hari),
        Managing Director | Agile Coach | http://opcord.com | http://www.linkedin.com/in/hariprakash
        Software Testing (QTP, Selenium, JMeter, AutoIT, Sahi, NUnit, VB/Java Script, Manual) || Consulting / Trainings (Agile, CMMi, Six Sigma, Project Management, Software Testing)

        On Mon, Dec 13, 2010 at 9:11 PM, Ron Jeffries <ronjeffries@...> wrote:
         

        Hello, woynam. On Monday, December 13, 2010, at 10:12:25 AM, you
        wrote:



        > Sorry, but I don't see how "defects" can escape. If you're
        > writing automated tests for every story, an "escaped" defect means
        > that you ignored the failing test. Is that really that common?

        It is possible, and not all that unlikely, to miss a test or write
        one incorrectly. It would be possible, I suppose, to define Done as
        "passes whatever tests we wrote" but that strikes me as a bit too
        lax.

        So an escaped defect would be something we didn't like, that we
        agree we understood and somehow failed to get implemented and
        tested.


        Ron Jeffries
        www.XProgramming.com
        Sorry about your cow ... I didn't know she was sacred.




