
RE: [scrumdevelopment] Defect Handling Options in Scrum

  • Tara Santmire
    Message 1 of 347 , Jan 29, 2011

      Ron’s comments make sense to me.

       

      Our implementation of this is that:

      When we plan a user story into a Sprint, part of the activity we plan in is the equivalent of user acceptance testing (UAT), plus defect fixing if necessary.  The PO is involved in UAT.  If UAT is passed, the user story is done and ready to deploy, and it is removed from the sprint backlog and the product backlog.  If UAT is not passed and the team can fix the defects in the time allotted in the sprint, the defects are fixed and tested, and the story is done and removed from the backlogs.  If the team can't fix the defects in the time allotted, the PO decides either to leave the story on the product backlog and finish it in the next sprint, or to call the user story done and add a product backlog item for the unimplemented functionality (in which case that functionality is usually not implemented in the next sprint).  This allows the PO to prioritize by business value.  Whenever UAT is not passed, the entire team, including the PO, does a root cause analysis to understand what could be improved to avoid failing UAT, since failing UAT indicates waste.
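
      A minimal sketch of that decision flow, for illustration only
      (the function name and arguments below are hypothetical, not
      part of any tool we actually use):

          def handle_uat_result(uat_passed, can_fix_in_sprint, po_keeps_story):
              """Illustrative only: what happens to a story after UAT."""
              if uat_passed:
                  return "done: remove from sprint backlog and product backlog"
              if can_fix_in_sprint:
                  return "fix and retest within the sprint, then done"
              # Team cannot fix in the time allotted, so the PO chooses.
              if po_keeps_story:
                  return "leave on product backlog and finish next sprint"
              return "call story done; add a backlog item for the missing piece"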

       

      In our case UAT is performed by analyst/testers in the scrum team and by the PO.

       

      I think that this is arguably a version of Scrum, but would be interested in the opinions of others on that point or possible improvements to the outlined process. 

       

      Ron said “A common situation is separate testing, either by a separate team
      who receives the result of the Sprint, or separate people who test
      stories before the end of the Sprint.”

       

      It seems that Ron's implication is that this is a bad thing.  Is the suggestion that we should not have testers, separate from the developers, on the Scrum team? 

       

      I understand the goal of testers never finding defects, which would indicate low waste in the development process, but my team is certainly not at that point. 

       

      Regards,

      Tara Santmire, CSM, PMP

       

      From: scrumdevelopment@yahoogroups.com [mailto:scrumdevelopment@yahoogroups.com] On Behalf Of Ron Jeffries
      Sent: Friday, January 28, 2011 12:24 PM
      To: scrumdevelopment@yahoogroups.com
      Subject: [scrumdevelopment] Defect Handling Options in Scrum

       

       

      Let's talk about defects and the backlog. I am speaking here about
      "newly created" defects. "Legacy defects" pretty much have to go
      into the backlog if we want them fixed. What is apparently less
      obvious is whether newly created defects go into the backlog. I hold
      that they do. Just how one handles them is, I believe, somewhat
      nuanced.

      If we put defects in the Sprint backlog, they get fixed. That's a
      good thing.

      If we put new defects in the backlog as a matter of course, it
      can avoid recriminations about whose fault it is. That's a good
      thing too.

      If we put new defects in the backlog as a matter of course, it
      can lead to a feeling that defects are no problem, we just put
      them on the backlog. This can lead to a casualness about defects,
      leading to not examining them to see if they could be avoided.
      Since it always (!) takes longer to build a feature and fix it
      than it takes to build it right the first time, this can lead to
      a longer cycle time for getting features right. That's not a good
      thing.

      If we put new defects in the backlog and post them to the burn
      chart, the growth is (of course) made up of both new features and
      fixes for defects put into new features (and of course, legacy
      defects). This means that if we have a finish line in mind, it
      recedes a bit every time we add a new defect to the backlog.
      That's an OK thing, although it may require us to update the
      finish line a lot.

      If we put new defects in the backlog and do not post them to the
      burn chart, the growth shown is new features and legacy defects
      only. This means that the finish line does not recede so often.
      This can be a good thing, as it makes the PO's job a little
      easier, requires redrawing the chart less often, and may provide
      just a little pressure to keep defects out.
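
      A toy example of the difference, with invented numbers: say the
      backlog starts at 100 points, the team completes 10 points per
      Sprint, and new defects add 2, 3, 1, and 4 points over four
      Sprints.

          initial_scope = 100                 # points of planned features (invented)
          velocity = 10                       # points completed per Sprint (invented)
          new_defect_points = [2, 3, 1, 4]    # new-defect points found each Sprint

          scope_posted = initial_scope + sum(new_defect_points)
          print(scope_posted / velocity)      # 11.0 Sprints: finish line recedes
          print(initial_scope / velocity)     # 10.0 Sprints: chart looks stable,
                                              # but the defect work still exists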

      In all cases, defects, new or old, must go on the backlog if we are
      to do Scrum "well", because in Scrum, all work goes on the backlog.

      We should always keep an eye on the backlog and see whether we are
      putting things into it that are wasteful or do not need to be done,
      since we will deliver more value, sooner, if we do not do wasteful
      or unneeded things. In particular, defects are "spoiled work", and
      therefore always constitute waste of some kind. They are often,
      although perhaps not always, capable of being avoided at a reduction
      in overall cost.

      A useful thing to do can be to track the time spent fixing new
      defects, versus time spent putting in new features. A "swim lane"
      chart can be a good way to do this. If the time spent fixing new
      defects is substantial, it can be valuable to retrospect on the
      subject and see if something should be done. It is often valuable to
      consider defects one at a time, looking to the root cause of each
      one, and then summarizing, rather than looking at a mass of defects
      and deciding that we have to toughen up. Not all defects come from
      the same cause, but there are patterns to be found and exploited.
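
      One rough sketch of such tracking, with made-up entries, is
      simply to tally logged hours by lane and look at the ratio:

          from collections import Counter

          # Invented work log of (lane, hours); "new-defect-fix" means
          # fixing defects introduced in this release's new features.
          work_log = [("feature", 6), ("feature", 8), ("new-defect-fix", 3),
                      ("feature", 5), ("new-defect-fix", 4)]

          hours = Counter()
          for lane, spent in work_log:
              hours[lane] += spent

          fix_share = hours["new-defect-fix"] / sum(hours.values())
          print(f"{fix_share:.0%} of tracked time went to fixing new defects")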

      A common situation is separate testing, either by a separate team
      who receives the result of the Sprint, or separate people who test
      stories before the end of the Sprint.

      Passing Sprint output to a separate test group is almost always a
      violation of Scrum's approach: a Scrum team is supposed to produce a
      potentially shippable increment of software. If it has to go through
      "QA", and fails at all commonly, then it isn't potentially
      shippable. Really not a good thing. Arguably not Scrum at all.

      If the software is tested after implementation, but within the
      Sprint, this can still be indicative of problems. If the software
      comes back to the programmer, it's still evidence of waste. If it
      doesn't get released and gets put on the next Sprint, it is again
      waste. Either way, these occurrences are often evidence of important
      opportunities to "inspect and adapt".

      There are common, well-known approaches to these issues. These are
      not necessarily listed in the Scrum Guide but might nonetheless be
      suitable for discussion here, should we choose not to limit
      ourselves to a rehashing of Ken's latest N-page guide, but to
      address how actually to do things well.

      Ron Jeffries
      www.XProgramming.com
      Perhaps this Silver Bullet will tell you who I am ...

    • Vikrama Dhiman
      Message 347 of 347 , Feb 2, 2011
        >>It (and the length of this thread) is a great illustration of why I recommend that teams not get too wrapped up in estimation. They start looking for numerical precision, and that starts consuming the energy that could be put toward accomplishing goals.

        Although this is not Twitter, I really want to do a +1.

        This echoes my thoughts completely. I couldn't have put it better myself.
         
        Thanks

        Vikrama Dhiman
        ================================================================
        Personal Blog : http://www.vikramadhiman.com/
        My Blog about all things Agile : http://agilediary.wordpress.com/



        From: George Dinwiddie <lists@...>
        To: scrumdevelopment@yahoogroups.com
        Sent: Wed, February 2, 2011 11:09:28 PM
        Subject: Re: [scrumdevelopment] Re: Scheduling Defect Fixes

         

        On 2/2/11 5:48 AM, Ron Jeffries wrote:
        > Hello, kbs_kulbhushan. On Wednesday, February 2, 2011, at
        > 12:23:59 AM, you wrote:
        >
        >> Does this make sense?
        >
        > Not really, but it was a delightful demonstration of how many
        > numbers can dance on the head of a pin.

        It (and the length of this thread) is a great illustration of why I
        recommend that teams not get too wrapped up in estimation. They start
        looking for numerical precision, and that starts consuming the energy
        that could be put toward accomplishing goals.

        I suggest that the primary reason for estimating stories & tracking
        velocity is to help the team decide how much work they can do in the
        next iteration. I've found that developing clear acceptance examples
        (a.k.a. tests) helps them do that much better than more time spent
        honing estimates.

        I suggest that the secondary reason for estimating stories & tracking
        velocity is to help the PO predict how much functionality can be done by
        a certain date, or how long it will take to build a certain amount of
        functionality. When doing so, one has to remember that these are just
        estimates, no matter how much work you put into them. You need to allow
        some leeway for the things you don't know and can't predict. You need
        to track actual progress, and give that more weight than any predicted
        progress. And you need to measure actual progress in ways that don't
        mislead you. The more calculations you put in, the more likely you're
        going to fool yourself.
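
        A back-of-the-envelope version of that forecast, with invented
        numbers, keeps itself honest by using a velocity range rather
        than a single figure:

            remaining_points = 120                  # invented
            recent_velocities = [18, 22, 15, 20]    # actual points per iteration

            pessimistic = remaining_points / min(recent_velocities)  # 8.0 iterations
            optimistic = remaining_points / max(recent_velocities)   # about 5.5
            print(f"likely finish: {optimistic:.1f} to {pessimistic:.1f} iterations")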

        - George

        P.S. Remember that the abbreviation for "estimation" is "guess."

        --
        ----------------------------------------------------------
        * George Dinwiddie * http://blog.gdinwiddie.com
        Software Development http://www.idiacomputing.com
        Consultant and Coach http://www.agilemaryland.org
        ----------------------------------------------------------

