Separate engineering and QA backlog, burndown and velocity

  • Mark Striebeck
    Message 1 of 7 , May 1, 2006
      We tried for some time to include QA in our iteration planning - added testing tasks to the backlog, estimated them and tracked all tasks (engineering and QA) together in burndown and velocity graphs.

      So far, the teams on all of the projects don't see any benefit in this. The overall engineering burndown (instead of individual burndowns for each engineer) makes sense, as the engineers can shift tasks around depending on who has finished his/her task. But there is no such exchange with QA: an engineer can't do the QA work, and vice versa.

      In our retrospective today we discussed tracking engineering and QA in separate burndown charts and velocities. I know that this does not support the overall team spirit and could lead to finger-pointing ("we didn't finish our iteration because xy isn't done"). But that's not really a concern with the teams I'm working with. They (engineering and QA) really liked the idea and think that it can help with scheduling much better than the overall velocity does.
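
      (For concreteness, here is a rough sketch of the kind of tracking this implies - plain Python over made-up task data; the task names, estimates and the "discipline" field are invented for illustration, not taken from any real tool.)

      # Illustrative sketch only: remaining hours per day, combined vs. split by discipline.
      tasks = [
          # (task, discipline, remaining hours on each day of the iteration)
          ("implement story A", "engineering", [8, 5, 2, 0, 0]),
          ("implement story B", "engineering", [13, 13, 8, 4, 0]),
          ("test story A", "qa", [5, 5, 5, 3, 0]),
          ("test story B", "qa", [8, 8, 8, 8, 5]),
      ]

      def burndown(task_list, discipline=None):
          """Remaining hours per day, optionally filtered to one discipline."""
          selected = [t for t in task_list if discipline is None or t[1] == discipline]
          days = len(selected[0][2])
          return [sum(t[2][day] for t in selected) for day in range(days)]

      print("combined:", burndown(tasks))                    # one chart for the whole team
      print("engineering:", burndown(tasks, "engineering"))  # the separate charts discussed above
      print("qa:", burndown(tasks, "qa"))

      (In this made-up data it is the "qa" series that does not reach zero by the last day - exactly the carry-over that a single combined chart tends to hide.)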

      Does anybody have any experience with this?
    • Tobias Mayer
      Message 2 of 7 , May 2, 2006
        > Does anybody have any experience with this?
         
        I do. I am working at one organization that insists on having QA be a sprint behind Development, for the same reasons you give here. It is a terrible model. It is simply waterfall - worse, it is waterfall masquerading as agile - and it leads to the usual big pile-up of untested crap at the end. I'm striving to turn this around; it is slow, but it will happen.
         
        I suggest that you do whatever you can (and ask the team members to do whatever they can) to avoid this situation and to discover how to work in a cross-functional way, delivering tested software at the end of each iteration. What you are suggesting is a step backwards; it is a short-term solution that will lead to more pain further down the line.
         
        One possible path to take would be to bring in outside help - someone who has faced this challenge and overcome it - to guide the team. So many of us are stuck in old ways of thinking that it seems impossible to make something as crazy as this actually work. Others of us have seen it actually work, and know it is worth striving for.
         
        Tobias



      • Adrian Howard
        Message 3 of 7 , May 2, 2006
          On 2 May 2006, at 07:36, Mark Striebeck wrote:
          [snip]
          > But there is no exchange with QA, an engineer
          > can't do the QA and vice versa.
          [snip]

          Any particular reason why not? If it were me that would be something
          I'd be trying to change.

          [snip]
          > So far, the teams of all the projects don't see any benefit in
          > this. The
          > overall engineering burndown (instead of individual burndowns for each
          > engineer) makes sense as the engineers can shift tasks around
          > depending on
          > who has finished his/her task. But there is no exchange with QA, an
          > engineer
          > can't do the QA and vice versa.
          [snip]

          This may be me being dim - but how does this work?

          How do the programmers tell whether they have completed a piece of
          work without knowing whether it passes the tests? What are the units
          on the chart for the QA team (units assessed? units passed?)?

          Surely a feature can only be marked as completed once both the
          "engineer" and the "QA" person have finished?

          Adrian
        • Ron Jeffries
          Message 4 of 7 , May 2, 2006
            On Tuesday, May 2, 2006, at 8:45:26 AM, Adrian Howard wrote:

            > This may be me being dim - but how does this work?

            My guess is: it doesn't work very well.

            > How do the programmers tell whether they have completed a piece of
            > work without knowing whether it passes the tests? What are the units
            > on the chart for the QA team (units assessed? units passed?) ?

            The programmers have to guess whether they have finished the work if
            they don't have the tests to run until sometime later. Their guesses
            have three possible results, and two of them are bad.

            > Surely a feature can only be marked as completed once the "engineer"
            > and the "QA" person has finished?

            You would think so. Managers and ScrumMasters would do well, I
            think, not to consider work done until it actually is shown to work.
            That means that for the developer, nothing she does in week 1 is
            "done" until week 2 /at the earliest/. Meanwhile, she's working on
            something else, which is also not done ...

            Close out the work within the Sprint, that's my advice. Let done
            mean done.
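
            (A hypothetical illustration of "done means done" in velocity terms - the story names and points below are invented for the example:)

            # Illustrative only: a story counts toward velocity just when it is
            # shown to work, i.e. both programming and testing finished in-Sprint.
            stories = [
                # (story, points, coded?, tested?)
                ("story A", 3, True, True),
                ("story B", 5, True, False),   # coded, but not yet verified by QA
                ("story C", 2, False, False),
            ]

            velocity = sum(points for _, points, coded, tested in stories if coded and tested)
            print(velocity)  # 3 -- story B's 5 points do not count until it is tested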

            Ron Jeffries
            www.XProgramming.com
            How do I know what I think until I hear what I say? -- E M Forster
          • Victor Szalvay
            Message 5 of 7 , May 2, 2006
              Mark,
              Newly formed teams go through less productive stages before they start
              performing. The effects you're not seeing may be secondary or
              tertiary effects that will become evident after several sprints and
              most visible after a release.

              One of the main advantages of a cross-functional team is that the team
              is singularly responsible for the delivery of a whole product
              increment (the team in this sense includes the PO and SM). No single
              role within the Scrum team (coders, QA, designers, etc.) can throw
              work over the fence until, in the end, no one is responsible for the
              actual delivery of the software; rather, the team's work is evaluated
              atomically.

              Your coders and QA folks are probably just going through the normal
              stages of team formation and aren't particularly used to the idea of
              working together. The fact that this surfaced is a good thing and
              shows that Scrum is working by surfacing pain points. If you split
              the team along roles then you have effectively abandoned Scrum.

              The others responding to your question have discussed how the two
              groups can better integrate to become a team by decreasing the latency
              between code and test.

              Best of luck,
              -- Victor

            • Ramon Davila
              Message 6 of 7 , May 3, 2006
                I have been working on cross-functional teams for the last three years, and we have found that the practice of Test-Driven Development has helped the teams get over the artificial need to split along functional lines. TDD has the virtue of including the QA folks earlier in the development process, since they are usually responsible for the creation and maintenance of functional tests. We have learned that once teams have fully embraced TDD, testers become heavily involved in the ongoing development effort, instead of just sitting at the end of the queue, waiting to validate whether the work is done.
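
                (For illustration, a minimal sketch of the kind of tester-owned functional test this describes - the function, its signature and the discount rule are all invented for the example, not anyone's real code.)

                # Hypothetical example: the tester writes the executable specification
                # first; the programmers then write code to make it pass.
                import unittest

                def apply_volume_discount(order_total):
                    """Stand-in production code; a 10% discount at 100 or more is assumed."""
                    return order_total * 0.9 if order_total >= 100 else order_total

                class VolumeDiscountSpec(unittest.TestCase):
                    # The requirement expressed as examples, so "done" is unambiguous
                    # for programmer and tester alike.
                    def test_no_discount_below_threshold(self):
                        self.assertEqual(apply_volume_discount(99), 99)

                    def test_ten_percent_discount_at_threshold(self):
                        self.assertAlmostEqual(apply_volume_discount(100), 90.0)

                if __name__ == "__main__":
                    unittest.main()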

                Ramon Davila

              • Tobias Mayer
                Message 7 of 7 , May 3, 2006
                  This reminds me of a quote I stole from the agile-testing list about 18 months ago, and have used in a number of presentations.
                   
                  "An Agile project begins when testers convert high-level requirements into testable specifications."
                   
                  In other words, testers are at the beginning, not the end, of the process. I cannot locate the source of the quote now, but there is a little more on this subject from Phlip in this post:
                   
                  Tobias

