
Re: [scrumdevelopment] Metrics to report up

  • Alan Dayley
    Message 1 of 57, Nov 1, 2010
      Sounds like a good situation, overall, with good managers.

      I'd suggest that the managers do exactly as you mention: talk to team members and attend sprint reviews.  After a time, they might note the questions that keep coming up, or the things they want to know on a consistent basis that aren't readily visible to them.  Then they can work with the teams to get the information they need, once they know what it is.

      Alan

      On Mon, Nov 1, 2010 at 8:14 AM, Charles Bradley - Scrum Coach, CSM, PSM I <chuck-lists2@...> wrote:
       

      A couple of people have asked what management wants.

      They're not really sure what they want, and they're coming to me for advice on how to report up (beyond the obvious conversations with Scrum Team members, or attending a sprint review).  They are very wary of breaking the principles and spirit of Scrum, and they're not the kind to take a metric and make it the bible.  They just want a set of indicators/dashboard that they can keep an eye on and, from time to time, drill down on by asking questions.  I don't believe they're really looking to compare Team A to Team B.  I've explained that there is no metric that does that well, and they seem to think they can assess this pretty well just by communicating well with the teams and individuals.

      Some folks have asked about DoD and backlog:
      Two of the teams work on the same product from the same backlog; the third works on a different product with a different backlog.

      The teams' definitions of done are essentially the same and are basically what's in the Scrum Guide, with the exception that they do non-functional testing (things like load testing, security testing, etc.) on a pretty ad hoc basis, as they see the need/risk.  The only other major difference in the DoD is that the two teams working on the first product (a big product with a lot of legacy code) don't have good automated test coverage, so they do risk-based spot checks (ad hoc manual testing) for regression, while the third team's product is newer, with much better test coverage, so there's much less need for ad hoc manual regression tests (though their product is much smaller too).  I should also mention that they give the PO the right to approve a story (I'm not sure this is explicitly in the Scrum Guide), so that is in the DoD.

      (By the way, the two teams working on the big product are working toward improving test coverage.)

      Hope this helps clarify.

      Thanks for all of the input so far.

      Charles



      From: Ron Jeffries <ronjeffries@...>
      To: scrumdevelopment@yahoogroups.com
      Sent: Mon, November 1, 2010 3:21:07 AM
      Subject: Re: [scrumdevelopment] Metrics to report up

       

      Hello, Charles. On Sunday, October 31, 2010, at 11:57:46 PM, you
      wrote:

      > What metrics, if any, would you suggest to report from the team to the Sr. Mgr?
      > What metrics, if any, would you suggest to report from the Sr. Mgr up to the VP?

      First:
      Whatever they want reported.
      Senior Manager gets to decide what goes up.

      Second:
      What gets done every Sprint.

      Thereafter:
      Impediments reported; Impediments removed.
      Defects reported; Defects removed; Root cause; Fix for cause.

      Ron Jeffries
      www.XProgramming.com
      Agility is not an inescapable law of purity
      but a pragmatic principle of effectiveness. -- Marc Hamann
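
      A minimal sketch of how the per-sprint counts Ron lists (work done, impediments reported vs. removed, defects reported vs. removed) might be tallied for a simple dashboard.  The SprintReport record and its field names are illustrative assumptions, not from any tool mentioned in the thread, and the numbers are invented:

          from dataclasses import dataclass, field

          @dataclass
          class SprintReport:
              # Per-sprint tallies of the indicators suggested above.
              # All field names here are illustrative, not from a real tool.
              name: str
              stories_done: list = field(default_factory=list)
              impediments_reported: int = 0
              impediments_removed: int = 0
              defects_reported: int = 0
              defects_removed: int = 0

              def summary(self):
                  return (f"{self.name}: {len(self.stories_done)} stories done, "
                          f"impediments {self.impediments_removed}/{self.impediments_reported} removed, "
                          f"defects {self.defects_removed}/{self.defects_reported} removed")

          # One row per sprint on the report that goes up.
          print(SprintReport("Sprint 14", stories_done=["login", "audit log"],
                             impediments_reported=3, impediments_removed=2,
                             defects_reported=5, defects_removed=5).summary())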


    • Hariprakash Agrawal
      Message 57 of 57, Dec 13, 2010
        I have come across this scenario very often (in almost 95% of products/projects): defects escape, and I have seen this irrespective of the methodologies (or practices) used. Humans can make mistakes (at every phase/activity) for various reasons; however, we would like to keep improving continuously. We measure 'defects escaped' and take it seriously, meaning we get to the root cause and invest in the required training/expectations.

        We focus on design and code quality metrics, like cyclomatic complexity, fan-in, fan-out, and depth of inheritance, and run some code quality tools to check coding standard compliance and other parameters (like memory leaks). We report these to management as well, to keep them in the loop. We also measure test-related metrics, like the number of test cases (manual vs. automated), first-time pass ratio, the number of defects (open, fixed, closed, postponed), etc.
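
        As one concrete illustration of a metric named above, a rough cyclomatic complexity counter can be built on Python's standard ast module (complexity is approximately decision points + 1).  This is a sketch of the idea only, not the actual tools the team runs:

            import ast

            def cyclomatic_complexity(source):
                # Approximate McCabe complexity: 1 + number of decision points.
                # Counts branches, loops, exception handlers, and boolean operators.
                decisions = 0
                for node in ast.walk(ast.parse(source)):
                    if isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler)):
                        decisions += 1
                    elif isinstance(node, ast.BoolOp):
                        decisions += len(node.values) - 1  # each extra and/or operand
                return decisions + 1

            print(cyclomatic_complexity(
                "def f(x):\n"
                "    if x > 0 and x < 10:\n"
                "        return x\n"
                "    for i in range(x):\n"
                "        print(i)\n"))  # -> 4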

        We do not focus much on velocity, thanks to this forum. We track the release burndown, the number of stories committed vs. the number of stories achieved (to keep improving the team's commitment level), the number of demos accepted/rejected by the PO, the number of times the team got changes in the middle of a sprint (it is minimal but not zero yet; this helps in deciding sprint length and puts back-pressure on the PO), and a few more (customer satisfaction and employee satisfaction).
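
        The committed-vs-achieved ratio mentioned above is simple arithmetic; a short sketch, with invented sprint numbers:

            # (committed, achieved) story counts per sprint; numbers are invented
            sprints = [(10, 7), (9, 8), (8, 8)]
            for n, (committed, achieved) in enumerate(sprints, start=1):
                print(f"Sprint {n}: {achieved}/{committed} stories "
                      f"= {achieved / committed:.0%} of commitment met")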

        For us, agile is a mix of Scrum and XP practices, hence we focus on both.

        --
        Regards,
        Hariprakash Agrawal (Hari),
        Managing Director | Agile Coach | http://opcord.com | http://www.linkedin.com/in/hariprakash
        Software Testing (QTP, Selenium, JMeter, AutoIT, Sahi, NUnit, VB/Java Script, Manual) || Consulting / Trainings (Agile, CMMi, Six Sigma, Project Management, Software Testing)

        On Mon, Dec 13, 2010 at 9:11 PM, Ron Jeffries <ronjeffries@...> wrote:
         

        Hello, woynam. On Monday, December 13, 2010, at 10:12:25 AM, you
        wrote:



        > Sorry, but I don't see how "defects" can escape. If you're
        > writing automated tests for every story, an "escaped" defect means
        > that you ignored the failing test. Is that really that common?

        It is possible, and not all that unlikely, to miss a test or write
        one incorrectly. It would be possible, I suppose, to define Done as
        "passes whatever tests we wrote" but that strikes me as a bit too
        lax.

        So an escaped defect would be something we didn't like, that we
        agree we understood and somehow failed to get implemented and
        tested.


        Ron Jeffries
        www.XProgramming.com
        Sorry about your cow ... I didn't know she was sacred.




