
QA overspill to next sprint

  • carl myhill
    Message 1 of 6, Mar 16, 2010
      Hi All,

      I've just joined a new company and am hearing their take on Agile.

      They seem happy for QA to overspill into a subsequent sprint. This doesn't seem quite right to me and probably has something to do with the concept of 'done'. I know some developers who are very diligent and testing doesn't often throw up any problems in their code. I know others who don't even bother to run what they've done when they've finished! It seems that if you allow the testing of either extreme to overspill into the next sprint, you are going to be storing up problems.

      I've had a poke around on the web and read a few of Jeff Patton's articles (thanks Jeff) but can't put my hands on what I really need.

      Any recommendations?

      Thanks

      Carl

      --
      User Experience Design
      (http://www.userexperiencedesign.co.uk)
    • George Dinwiddie
      Message 2 of 6, Mar 16, 2010
        Hi, Carl,

        carl myhill wrote:
        >
        >
        > Hi All,
        >
        > I've just joined a new company and am hearing their take on Agile.
        >
        > They seem happy for QA to overspill into a subsequent sprint. This
        > doesn't seem quite right to me and probably has something to do with the
        > concept of 'done'. I know some developers who are very diligent and
        > testing doesn't often throw up any problems in their code. I know others
        > who don't even bother to run what they've done when they've finished! It
        > seems that if you allow the testing of either extreme to overspill into
        > the next sprint you are going to be storing up problems.

        I think you're right.

        > I've had a poke around on the web and read a few of Jeff Patton's
        > articles (thanks Jeff) but can't put my hands on what I really need.

        You don't mention what you need. I suspect you're looking for an
        article that will convince them, but I think that's better done with
        one-on-one conversations.

        Or perhaps you're asking what the team needs, in which case I'd suggest
        getting the QA people involved in the initial user story discussions,
        and deriving story tests from that.
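
        For a concrete (if hypothetical) picture, a story test derived from such a discussion might look like the pytest sketch below; the AccountService class is a made-up in-memory stand-in so the example runs on its own:

        # Hypothetical stand-in for the system under test; only pytest
        # is assumed as the runner.
        class AccountService:
            def __init__(self):
                self._users = {}

            def register(self, email, password):
                self._users[email] = password

            def log_in(self, email, password):
                return self._users.get(email) == password

        def test_registered_user_can_log_in():
            service = AccountService()
            # Given a visitor who has registered
            service.register("carl@example.com", "s3cret")
            # When they log in with the same credentials,
            # then they are let in.
            assert service.log_in("carl@example.com", "s3cret")

        The Given/When/Then comments are the part the QA people and developers agree on during the story discussion; the code merely automates that agreement.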

        This is a topic that's been on my mind lately, and you may find these
        blog posts helpful:

        http://blog.gdinwiddie.com/2010/03/05/testing-in-depth/
        http://blog.gdinwiddie.com/2010/03/03/more-on-automated-acceptance-testing/
        http://blog.gdinwiddie.com/2010/03/01/the-reality-of-automated-acceptance-testing/
        http://blog.gdinwiddie.com/2010/02/25/a-lingua-franca-between-the-three-or-more-amigos/
        http://blog.gdinwiddie.com/2010/02/16/the-testers-get-behind-at-the-end/

        - George

        --
        ----------------------------------------------------------------------
        * George Dinwiddie * http://blog.gdinwiddie.com
        Software Development http://www.idiacomputing.com
        Consultant and Coach http://www.agilemaryland.org
        ----------------------------------------------------------------------
      • Paul Spencer
        Message 3 of 6, Mar 16, 2010
          Hi Carl,

          The definition of 'done' can be different between organizations, teams, or even separate projects with the same team. So from a development team perspective, a story can be marked done once it has completed unit testing and/or integration testing and has been delivered to the QA environment. At that point the QA team can follow their own process to test the latest release and create new defects or stories for the next iteration. This is not the perfect solution, and you are right that you lose the high-fidelity feedback loop, but it is also in my experience how most large companies operate. They tend to let the QA team work outside of the Agile process. It really is a matter of how flexible the organization is about changing its existing processes.

          If by 'overspill' you mean that their definition of 'done' is developed, unit/integration tested, and acceptance tested, then any stories that do not meet those criteria are not 'done' and will have to be carried over into the next sprint. These partially completed stories should not be made available to users in production.
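
          As a purely illustrative sketch (the story fields and the checklist below are made up, not taken from any real tool), the definition of done can be treated as an explicit checklist, so that 'done' is a computed fact rather than an opinion:

          from dataclasses import dataclass

          @dataclass
          class Story:
              title: str
              developed: bool = False
              unit_tested: bool = False
              integration_tested: bool = False
              acceptance_tested: bool = False

          # One team's hypothetical definition of done.
          DEFINITION_OF_DONE = ("developed", "unit_tested",
                                "integration_tested", "acceptance_tested")

          def is_done(story: Story) -> bool:
              return all(getattr(story, c) for c in DEFINITION_OF_DONE)

          stories = [
              Story("Login page", developed=True, unit_tested=True,
                    integration_tested=True, acceptance_tested=True),
              Story("Password reset", developed=True, unit_tested=True),
          ]

          # Anything failing the checklist carries over instead of shipping.
          carry_over = [s for s in stories if not is_done(s)]
          assert [s.title for s in carry_over] == ["Password reset"]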

          Keep in mind there are no 'right' ways of doing Agile, but there are 'ideal' ways. It always comes down to the people on your team, the organization they work within, and the type of project and technology they are working on. After every sprint the team will adjust to make things better, and hopefully you end up with an ideal Agile solution for that project.

          - Paul


          Paul Spencer
          Agile Software Development, UX Consultant


        • William Pietri
          Message 4 of 6, Mar 16, 2010
            I'm going to be my usual semi-radical self here and suggest that if there is a development team and/or a QA team or a design team, then we're not talking about Agile at all. Agile teams are cross-functional.

            I'd accept "Agile-like", "in transition to Agile", or something akin to that, although teams like that I'd generally call either mini-waterfall or plain waterfall, depending on cycle time. But collaboration and feedback are core Agile values, and in my view giving up on those means we're well outside the area where Agile practices are known to work.

            Turning to Carl's question, my view on story completion is pretty simple. If it's 100% ready to ship, it's done. If something remains (QA, design cleanup, code cleanup, minor changes, anything) then it is not done. Nothing gets carried over, though. The team re-estimates any not-done stories, and the business reps can put those old stories anywhere they like in the backlog, or just tear 'em up.

            The easy way to tell if your team is really getting what "done" means is to ship at the end of every iteration. Maybe just to a couple of alpha testers, maybe to millions.

            William


          • Mouneer
            Message 5 of 6, Mar 16, 2010

              Carl,

              You mentioned that “QA overspills into a subsequent sprint”, which I guess means that not all testing is performed during the sprint, which further means that part of the testing is performed during the sprint and further testing is performed in subsequent sprints.

              If we are looking at a generic agile process, the following may be noticed:

              1. The team may perform only acceptance and integration testing for the stories during the iteration and do more thorough testing (deep testing) in the subsequent iteration, after receiving feedback from the customer/product owner. This usually happens with teams who are new to agile, because the feedback received may result in changes, so there is no need to put effort into deep testing of the stories before then.

              2. The team may apply unit testing as well as acceptance and integration testing for the stories during the iteration, and perform more thorough testing (deep testing) in the subsequent iteration after receiving feedback from the customer/product owner. This can happen with more mature teams, and proper application of unit testing can greatly boost the overall quality of the produced software. The second reason above still applies as well.

              3. In these cases the team includes an extra iteration at the end of every release to wrap up any pending issues or bugs; this is usually referred to as a “stabilization iteration”, after which a release or a “potentially shippable product” is expected.

              4. All of the above is more common with teams that have yet to apply test automation at the different levels: unit, API, functional, acceptance and GUI (a minimal sketch of two of these levels follows below).
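
              To make that concrete, here is a purely illustrative sketch of unit-level and acceptance-level tests; apply_discount is a made-up domain function, and pytest is assumed as the runner:

              # Illustrative only: the function and both tests are hypothetical.
              def apply_discount(price: float, percent: float) -> float:
                  if not 0 <= percent <= 100:
                      raise ValueError("percent must be between 0 and 100")
                  return round(price * (1 - percent / 100), 2)

              # Unit level: one function, one behaviour; cheapest to automate first.
              def test_discount_is_applied():
                  assert apply_discount(100.0, 25.0) == 75.0

              # Acceptance level: phrased in the customer's terms from the story card.
              def test_loyal_customer_pays_discounted_total():
                  # Given a loyal customer entitled to 10% off
                  # When they check out a 50.00 basket
                  total = apply_discount(50.0, 10.0)
                  # Then they pay 45.00
                  assert total == 45.0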


              My definition of “done done” is when a feature is ready for production; yet agile adoption is NOT all-or-nothing; adoption can be phased without losing the principles. If your team has been following an agile method for a while (maybe over six months, or at least for a project or two) and they are still carrying testing and bugs over to subsequent iterations, then this is a sign to stop, reflect, and decide what the team needs to do to get to the next level. The help of an internal or an external coach is a good thing.


              There is no agile silver-bullet process, nor are there ‘best practices’, but there are ‘current good agile/lean practices’ from which we select according to the context of the team, project, and company. And it always comes down to the people on your team. So always Inspect & Adapt.


              -mouneer


            • rasmus4200
              Message 6 of 6, May 15, 2010
                --- In agile-usability@yahoogroups.com, William Pietri <william@...> wrote:
                > If it's 100% ready to ship, it's done. If something remains (QA,
                > design cleanup, code cleanup, minor changes, anything) then it is not
                > done.
                >
                > William
                >

                +1

                This is my definition of done too. Not all teams get there but it's where I like to start.

                With regard to testing spilling over to the next iteration, try doing less. Do less until you have time for both development AND testing.

                Cheers - JR