
Re: Story - QA Score

  • peterskeide
    Message 1 of 14 , Dec 25, 2010
      As others have said, the science is not really in favour of rewarding knowledge workers beyond what is considered fair for the work they do. People in our line of business are often motivated by the work itself. You can avoid limiting this intrinsic motivation by giving workers control over aspects of their work, such as when to work, where to work, how to do the work, and who to work with (important when assembling a team).

      After reading your description of the problem, I ask myself the following questions:

      Have you targeted the quality issues in team retrospectives? If so, and it's still not improving, how can you change the way you run the retrospectives to generate more/different insight and make the team understand that they own the problem?

      Is your team really acting as a team? If some "ace" programmers consistently get stories to Done without bugs, how can they help the others raise the quality of their work?

      Are some of the team members weak on domain knowledge? Things you did not know that you did not know tend to resurface later as bugs.

      I see a lot of people mentioning testing and acceptance criteria. Tests are fine, but not enough. Are you focusing on defect prevention in addition to detection? Try some (slightly) formal code inspections in combination with checklists. Checklists are a great way of sharing knowledge. Code inspections done right are another great way of sharing knowledge. Make sure everyone has their code inspected from time to time (the "aces" as well). The results of the inspections are for the team only; managers do not need to know, and should not.

      How often do the team members pair program? Pairing is effective for knowledge sharing (but not really a substitute for code inspections - use both).

      Are you using static/dynamic code analysis tools (such as Checkstyle for Java)? If not, suggest that your team start using them. Most such tools contain loads of useful rules for code quality, and can often be customized to comply with team coding standards. Junior programmers can learn a lot from such tools.

      A nice side effect of code analysis tools that I have seen firsthand is how they can overcome "social" barriers where some team members do not respond well to comments about their code from other team members. It appears to be easier to take feedback from a purely objective tool.
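
      For anyone unfamiliar with such tools, the fragment below is a small, hypothetical Java example (not from the original thread) annotated with the kind of findings commonly enabled Checkstyle checks report. The check names in the comments (NeedBraces, MagicNumber, EmptyCatchBlock) are standard Checkstyle modules; the code itself is only an illustrative sketch.

      // Hypothetical snippet: each comment names the Checkstyle check that
      // would flag the line, assuming those checks are enabled.
      public class OrderService {

          public double applyDiscount(double price) {
              if (price > 100)             // NeedBraces: "if" body without braces
                  return price * 0.85;     // MagicNumber: unexplained constants 100 and 0.85
              return price;
          }

          public void process(java.util.List<String> orders) {
              try {
                  orders.forEach(System.out::println);
              } catch (RuntimeException e) {
              }                            // EmptyCatchBlock: exception silently swallowed
          }
      }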

      --- In scrumdevelopment@yahoogroups.com, "Brian" <bplawlor34@...> wrote:
      >
      > Our Dev Team is having trouble completing Stories with low bug counts at the end of our Iteration. An Executive suggested, even if we fix the problem, creating a Story - QA Score.
      >
      > We can then utilize this Score to determine how successful the Dev Team is each Iteration, and with each Story. We can also use the Score to monitor the performance of the individual programmers, and maybe reward those who complete Stories with High Grades.
      >
      > Is this a good idea? I'm sure it'll make some programmers nervous, but I doubt my Ace Programmers will be concerned at all. If anything, they'll enjoy the rewards.
      >
      > I would also love some ideas for making this effective, assuming if this is a good idea.
      >
      > Thanks in advance for the feedback.
      > -Brian
      >
    • George Dinwiddie
      Message 2 of 14 , Dec 25, 2010
        Kiran,

        On 12/25/10 2:09 AM, Kiran wrote:
        > Squeezing all the activities like development, deployment, testing and QA
        > into a 3-week sprint is a hard job. This finally hits the quality.

        Yes, that's hard. It's easier with a shorter sprint. Really. You do,
        of course, have to approach the work a little differently, but the
        shorter sprint helps people do that.

        - George

        --
        ----------------------------------------------------------------------
        * George Dinwiddie * http://blog.gdinwiddie.com
        Software Development http://www.idiacomputing.com
        Consultant and Coach http://www.agilemaryland.org
        ----------------------------------------------------------------------
      • George Dinwiddie
        Message 3 of 14 , Dec 27, 2010
          Brian,

          On 12/23/10 12:10 PM, Brian wrote:
          > Our Dev Team is having trouble completing Stories with low bug counts
          > at the end of our Iteration. An Executive suggested, even if we fix
          > the problem, creating a Story - QA Score.
          >
          > We can then utilize this Score to determine how successful the Dev
          > Team is each Iteration, and with each Story. We can also use the
          > Score to monitor the performance of the individual programmers, and
          > maybe reward those who complete Stories with High Grades.

          I've seen the responses to this (with which I agree), but I can't help
          but wonder what the Executive is thinking, here. How would such a score
          be calculated? This seems to presume that people are working as
          individuals rather than as a team. (That, alone, could be a major
          contributor to the problem of either not completing stories or releasing
          them with bugs.) I'm certainly missing much about the context.

          Counting bugs discovered after the end of the sprint is a good measure
          to track, in my opinion. I would graph that over time, and look at the
          shape of the curve. I would also do root-cause-analysis (5 whys) as
          each is discovered to figure out how the /process/ can be improved to
          prevent similar occurrences in the future. (This is much more valuable
          than assigning blame.)
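
          As a rough illustration of how little machinery that kind of tracking needs, here is a hypothetical Java sketch (not something posted in this thread, and using made-up sample data) that tallies bugs found after each sprint from a plain list of records, so the per-sprint counts can be graphed or simply eyeballed:

          import java.util.List;
          import java.util.Map;
          import java.util.TreeMap;

          // Minimal, hypothetical sketch: count escaped bugs per sprint so the
          // trend over time can be graphed or eyeballed.
          public class EscapedBugTrend {
              record Bug(int sprintFound, String summary) {}

              public static void main(String[] args) {
                  List<Bug> bugs = List.of(
                      new Bug(1, "null reference on empty cart"),
                      new Bug(1, "layout broken at 800x600"),
                      new Bug(2, "rounding error in tax calculation"),
                      new Bug(3, "search slow on large result lists"));

                  Map<Integer, Long> perSprint = new TreeMap<>();
                  for (Bug b : bugs) {
                      perSprint.merge(b.sprintFound(), 1L, Long::sum);
                  }
                  perSprint.forEach((sprint, count) ->
                      System.out.println("Sprint " + sprint + ": " + count + " escaped bug(s)"));
              }
          }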

          - George

          --
          ----------------------------------------------------------------------
          * George Dinwiddie * http://blog.gdinwiddie.com
          Software Development http://www.idiacomputing.com
          Consultant and Coach http://www.agilemaryland.org
          ----------------------------------------------------------------------
        • Brian
          Message 4 of 14 , Dec 27, 2010
            I guess I should elaborate a little bit more; however, I think you all have expressed the concern I was looking for, which I can use as ammo back up the ladder.

            We have 4-week Iterations, which include QA. We do Unit and Buddy testing, as well as all the regular QA, integration, and regression testing. We do not have automated scripts yet, but will in a month.

            It does sound like the Execs will push for some sort of "Story Quality Score". I'm not sure I can stop that. But I may be able to sway them towards focusing on the Story itself... avoiding individual grading.

            We'll see...

            Thanks to all of you for taking the time to lend your advice. Our Company has been migrating towards being more Agile after working in a Waterfall environment for 30 years. I guess we still have some growing pains to work through.

            -Brian
          • scrumnoob
            Message 5 of 14 , Dec 29, 2010
              Hi Brian

              As has already been asked/stated, I too would be interested to know how this conversation/issue played out at the retrospective(s).

              It is within the gift of the team to work out how to resolve whatever quality issues there may be; I would say it is their responsibility as well.

              How has the issue manifested itself to the execs if your definition of done/done includes all the levels of QA you mention?

              Best of luck

              Sean


            • Brian
              Message 6 of 14 , Dec 31, 2010
                To be honest, I think there's been a combination of incorrect practices going on:
                - Developers tackling too many Stories
                - Focus on Functionality, not incorporating Visuals/Treatment
                - Dev Team not Project focused
                - Dev Team lacking required WPF coding experience

                The Retrospectives are active discussions, but they keep repeating. Everyone recognizes the problems, but I guess we're still trying to improve our Iterative practices. Many of the discussions about the high bug counts that needed to be eliminated came up again and again. However, consequences for not succeeding in the goals were never implemented, and that area is outside my control.

                Take multiple Iterations of trying to scale back the quantity of Stories but seeing no improvement, plus a frustrated Exec Branch, and you get discussions of a Story Quality Score to grade Dev performance.

                However, I think I have successfully talked them into a different path. We're going to take 1 Iteration and get caught up with all Bugs and lingering issues. The following Iteration, I'm creating a very transparent Bug Burn Down Monitoring Chart.
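
                (For anyone unfamiliar with the term, a Bug Burn Down chart simply plots the number of still-open bugs per day of the Iteration, ideally trending towards zero. The fragment below is a hypothetical Java sketch of that calculation with made-up dates, not the actual chart being set up here.)

                import java.time.LocalDate;
                import java.util.Arrays;
                import java.util.List;

                // Hypothetical sketch: remaining open bugs per day of an iteration,
                // i.e. the number a bug burn down chart plots.
                public class BugBurnDown {
                    record Bug(LocalDate opened, LocalDate closed) {}  // closed == null means still open

                    static long openOn(List<Bug> bugs, LocalDate day) {
                        return bugs.stream()
                                   .filter(b -> !b.opened().isAfter(day))
                                   .filter(b -> b.closed() == null || b.closed().isAfter(day))
                                   .count();
                    }

                    public static void main(String[] args) {
                        List<Bug> bugs = Arrays.asList(
                            new Bug(LocalDate.of(2011, 1, 3), LocalDate.of(2011, 1, 5)),
                            new Bug(LocalDate.of(2011, 1, 3), null),
                            new Bug(LocalDate.of(2011, 1, 4), LocalDate.of(2011, 1, 7)));

                        LocalDate start = LocalDate.of(2011, 1, 3);
                        for (int d = 0; d < 5; d++) {
                            LocalDate day = start.plusDays(d);
                            System.out.println(day + ": " + openOn(bugs, day) + " open bug(s)");
                        }
                    }
                }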

                I've also worked hand-in-hand with the Dev Team, walking them through the Stories. I'm taking it a step further to facilitate proper Story production by only handing over the specific Stories they will start with in the Iteration. Once one is noted as Done (a shippable product), I'll provide the next Story. This should eliminate working on too many at one time. That will happen for the Iteration after next, when they're back to Stories.

                The expertise of the Dev Team has improved, and we have a couple more developers on the way. To be honest, I think we may be in good shape. The proof is in the pudding, of course, but I believe I was able to keep the Execs at bay one last time. If the Dev Team doesn't follow what I've outlined, they will get their butts kicked next time.


                Thanks for everyone's feedback BTW.
                -Brian

              • peterskeide
                Message 7 of 14 , Jan 3, 2011
                  It may be that I'm overreacting to your choice of words, but I get a bit worried when you write things like "consequences for not succeeding in the (retrospective) goals" and "they will get their butts kicked next time".

                  If you are changing your development process from a phased/waterfall type to agile and at the same time introducing a new technology, a lot of people have to relearn how to do parts of their work. To ease the "pain" of such a transition, safety in the workplace is very important. Failure must be acceptable: a source of information to be used for process improvement.

                  I get the impression that some of your managers do not think this way. Consider very carefully whether you have sufficient management support at the right level for the agile initiative. Also, try to gauge the level of management's understanding of what makes Scrum/agile work. If managers are still thinking about individual developer performance benchmarks, it is a clear sign of the need for coaching at a different level than development.
