Re: Role of QA

  • kent.schnaith@westgroup.com
    Message 1 of 20, May 2, 2000
      There seems to be some confusion about the roles of QA and of
      testing. These are two very different things. The role of QA is not
      to test the software, but to verify that the development team
      (including both developers and testers) has really done what they
      said they were going to do.

      SQA, as defined by the CMM at Level 2 - Software Quality Assurance
      (SQA):
      The purpose of Software Quality Assurance is to provide management
      with appropriate visibility into the process being used by the
      software project and of the products being built.
      Software Quality Assurance involves reviewing and auditing the
      software products and activities to verify that they comply with the
      applicable procedures and standards and providing the software
      project and other appropriate managers with the results of these
      reviews and audits.

      Testing - is described as one aspect of CMM Level 3 - Software
      Product Engineering (SPE)

      The purpose of software testing is to verify that the software
      satisfies the specified software requirements.
      Integration testing of the software is performed against the
      designated version of the software requirements document and the
      software design document.
      System testing is performed to ensure the software satisfies the
      software requirements.
      Acceptance testing is performed to demonstrate to the customer and
      end users that the software satisfies the allocated requirements.

      One of the many activities of SQA is to verify that testing has been
      performed properly, that:
      a) Required testing is performed.
      b) System and acceptance testing of the software are performed
      according to documented plans and procedures.
      c) Tests satisfy their acceptance criteria, as documented in the
      software test plan.
      d) Tests are satisfactorily completed and recorded.

      -- Kent


      --- In extremeprogramming@egroups.com, Jen Wu <jen@d...> wrote:
      > I don't know if a lot has been said for the role of QA, but here are
      > some questions ...
      >
      > Some background ... a sophisticated QA team will do most if not all
      > of the following (among other things):
      >
      > * Develop a test plan, including test suites and cases
      > * Structured black box testing -- tested by hand
      > * Ad hoc black box testing
      > * Structured automated functional testing -- testing using automated
      > tools on the UI (no calls to code)
      > * White box and intrusive automated tests (code reviews and tests
      > like the unit tests that the programmers are responsible for in
      > XP)
      > * Code coverage
      > * Bug tracking (correlated with test cases and code coverage)
      > * Multi-user and performance testing using testing tools
      >
      ...
      >
      > Jen
    • Steve Goodhall
      Message 2 of 20, May 3, 2000
        So that you know where I am coming from, I lead Compuware's software
        quality practice in Michigan. Comments interspersed.

        Steve Goodhall
        Principal Architect
        QASolutions
        Compuware Corporation

        mailto:SGoodhall@...
        mailto:steve.goodhall@...
        http://members.home.net/sgoodhall/
        Victory awaits those who have everything in order. People call this
        luck. - Roald Amundsen


        > -----Original Message-----
        > From: Michael D. Hill [mailto:uly@...]
        > Sent: Tuesday, May 02, 2000 12:22 AM
        > To: extremeprogramming@egroups.com
        > Subject: Re: [XP] Role of QA
        >
        >
        > [The following in no way represents the official XP view, which
        > frankly, I don't even know.]
        >
        > Jen...
        >
        > I have no faith in external QA. I believe it is one of those
        > ideas that looks magnificent on paper, like ISO 900X, but is
        > absolutely awful in practice. I've never seen an external test group
        > produce a problem report other than identifying an installation
        > process that doesn't cover all the angles. Possibly, I have only
        > seen crummy QA teams, but that's been my honest experience.
        >

        Like ISO 900X, it looks good on paper, and it works if you do it right.
        Unfortunately, like 900X, few people do it right.
        You have seen crummy QA teams. My approach to this has the QA team
        live with you just like the customer and help with test development.

        > I am *not* in denial about the abysmal quality of most development
        > efforts in our industry. But I believe that quality sucks for little
        > other reason than because underskilled and undercoached development
        > teams are constantly pressed to move faster than they can.
        >

        I agree. Central QA helps with the first problem (underskilled).
        Nothing helps the second one (too fast), although QA can help persuade
        people not to do it.

        > Many externalities affect this situation to bring even lower lows.
        > 1) Heavyweight processes place unrealistic and value-subtracted
        > burdens on developers and their front-line managers. 2) Ludicrous
        > expectations from the money and over-inflated product descriptions
        > from marketing are a major source of customer disappointment.
        > 3) Magic bullet beliefs add to the pain. 4) The worse we get at
        > delivering the more folks want to find a system of 'control', and the
        > heavier the non-development burdens get, destroying many fine sparks
        > of talent and interest in the industry.
        >
        > I would like to see some figures on the cost-benefit analysis of an
        > external QA department. I would even like to hear some anecdotal
        > evidence. My own experience strongly suggests that external QA is
        > simply not a cost-effective route to quality.
        >

        If you read some of Phil Crosby's work on manufacturing quality
        (Quality Is Free, for instance), you will find that he agrees. What he
        suggests is that an external QA group is a step on the road to the
        final state, which has quality built into the processes.

        > Sorry for the rant, but I couldn't stop myself? [Ron? Phlip? How come
        > I'm doing all the ranting around here? Are you guys well? Cough if you
        > can't talk now.]
        >
        > Seeya!
        > Hill
        >
        >
        > +----------------------------------------------------------+
        > |Michael Hill |
        > |Software-> Developer, Consultant, Teacher, Coach |
        > |Lifeware-> Egghead, Romantic, Grandpa, Communitarian |
        > |<uly_REMOVE_THIS_PART_@...> |
        > +----------------------------------------------------------+
        >
        >
        >
        > To Post a message, send it to: extremeprogramming@...
        >
        > To Unsubscribe, send a blank message to:
        > extremeprogramming-unsubscribe@...
        >
        > Ad-free courtesy of objectmentor.com
        >
      • Duncan Gibson
          Message 3 of 20, May 3, 2000
          David Brady wrote:
          DB> My take on XP is that it's ABOUT quality. I don't stop work
          DB> on a unit until I believe that it can't be broken. If you
          DB> have no QA team, that's your only hope. If you have a good
          DB> QA team, it's easy to get lazy and write lots of hopey code,
          DB> because if it doesn't work, QA should catch it. In a better
          DB> world--one I believe *can* exist, but so far haven't seen,
          DB> and therefore am trying to create--the developers write the
          DB> Quality, and the (good) QA team does the Assurance.

          Many of the XP processes balance each other, or provide constant
          tension between them so that things run smoothly. It seems to me
          that XP offers the opportunity for the same type of balance, or
          constructive tension, between the developers and the QA/testing
          group.

          The current methodologies which offer BigBangIntegration and then
          provide [alpha and] beta versions of software, suffer from the
          problem that the overall team accepts and expects that the first
          release(s) will contain proportionally more defects than later
          ones, and that it will be part of the team's task to iron out
          these problems over time. The number of defects in the product
          should decrease with each release. Many developers don't test
          their own code adequately because they [wrongly] subscribe to the
          point of view that it is the task of the QA/testing group to
          catch errors in the code.

          Various authors[*] stress that this isn't the way to produce high
          quality software, and that the individual developer should strive
          to produce defect free software. Some people go as far as to
          consider any defect discovered by the QA/testing group - or worse
          still - the end user or customer, as a failure on the part of the
          developer.

          Some attempts to improve the situation by offering rewards for
          defects detected and removed proved counter-productive. Wily
          developers introduced known defects so that QA/testing would find
          them, and the developers could "solve" them in order to benefit
          from the reward.

          With XP, there is no BigBangIntegration. There is a series of
          smaller development cycles, even down to the internal 3-week
          iterations. If you consider that each of these cycles delivers
          approximately the same amount of new code, and that the defect
          rate is constant at N defects/KLOC (or however you measure it),
          then each cycle will also deliver the same number of new defects.

          In XP, the practice of UnitTest/TestFirst is intended to reduce
          the number of defects which slip through to the release (so N
          should be smaller). In XP, after the first cycle, the QA/testing
          team should be able to give an estimate for N, basically because
          N defects should have been found. After this first cycle, there
          is the possibility of introducing constructive tension between
          the developers and the QA/testing team. The developers should be
          aiming to deliver fewer than N defects per cycle, and the
          QA/testing group should be aiming to discover more than N defects
          per cycle. This would give measurable goals for both sides,
          possibly with some reward structure. N is adjusted after each
          cycle, and defects discovered by the end user/customer count
          against the QA/testing group.
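          The bookkeeping above can be sketched in a few lines. This is an
          illustrative sketch only: the function names and the defect/KLOC
          figures are invented for the example, not taken from the thread or
          from XP itself.

```python
# Hypothetical sketch of the N-defects-per-KLOC scheme described above.
# All names and figures are illustrative.

def defect_rate(defects_found, kloc_delivered):
    """Defects per KLOC for one development cycle."""
    return defects_found / kloc_delivered

def cycle_goals(n_baseline, defects_found, kloc_delivered):
    """Judge a later cycle against the baseline N from the first cycle:
    developers aim to deliver fewer than N defects/KLOC, while QA aims
    to find more than N * KLOC defects -- the 'constructive tension'."""
    rate = defect_rate(defects_found, kloc_delivered)
    return {
        "rate": rate,
        "developers_met_goal": rate < n_baseline,
        "qa_met_goal": defects_found > n_baseline * kloc_delivered,
    }

# First 3-week iteration: 40 defects found in 8 KLOC, so N = 5 defects/KLOC.
n = defect_rate(40, 8)

# Second iteration: 30 defects found in another 8 KLOC of new code.
result = cycle_goals(n, 30, 8)
```

          Note that the same defect count feeds both goals, which is exactly
          the tension described: a low count is good news for the developers
          and bad news for QA, so the two sides keep each other honest.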

          Any comments?

          Cheers
          Duncan

          [*] Writing Solid Code, Maguire, ISBN 1-55615-551-4
          The Pragmatic Programmer, Hunt & Thomas, ISBN 0-201-61622-X
          Introduction to the PSP, Humphrey, ISBN 0-201-54809-7
          The Practice of Programming, Kernighan & Pike, ISBN 0-201-61586-X


          This is my article, not my employer's, with my opinions and my disclaimer!
          --
          Duncan Gibson, ESTEC/TOS/MCV, Postbus 299, 2200AG Noordwijk, Netherlands
          Tel: +31 71 5654013 Fax: +31 71 5656142 Email: duncan@...
        • Jen Wu
            Message 4 of 20, May 3, 2000
            The number of bugs found might be a good measure of how development
            is progressing, but I hate it as a metric for testers. It
            encourages QA folks to file more bug reports than are necessary.
            Instead, I'd suggest that QA be measured on the scope of their
            tests and the accuracy of the results: number of test cases run,
            percentage of code covered, percentage of functionality covered,
            number of platforms tested, etc.

            Bugs that are found by someone other than QA may be a measure of how
            they could improve in the same way that bugs found by QA is a measure
            of how development could improve. Also, bugs in QA code could count
            against them (after all, QA is often a development project in and of
            itself).

            Jen

            Duncan Gibson wrote:
            > ...
          • kent.schnaith@westgroup.com
              Message 5 of 20, May 3, 2000
              The best measure I have found for how development is progressing
              is "the number of open defects that need to be fixed before you
              can release the product". This measure is tracked from week to
              week (or even day to day). The shape of the trend is usually a
              hump with a long tail. Early on, more defects will be discovered
              than are fixed; later, few new defects should be discovered, and
              the developers will catch up and reduce the backlog. Of course,
              if you have trouble fixing a problem without creating another
              problem, then you are in for a very long march.

              Rewards should be based on delivering a quality product, on time.
              The statistically inclined can predict the expected duration of
              the march.

              One attraction of XP is that the focus on finding defects early
              and on refactoring prevents you from falling into a long
              (endless) test-and-fix cycle.
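              The week-to-week trend can be sketched as a running sum; the
              weekly counts below are made up purely to show the
              hump-with-a-long-tail shape, not measured from any project.

```python
# Illustrative sketch of the week-to-week open-defect trend described
# above. The weekly found/fixed counts are invented.

def open_defect_trend(found_per_week, fixed_per_week):
    """Running count of open defects: each week, add what was found
    and subtract what was fixed."""
    open_count, trend = 0, []
    for found, fixed in zip(found_per_week, fixed_per_week):
        open_count += found - fixed
        trend.append(open_count)
    return trend

# Early on more is found than fixed (the hump); later the fixes catch
# up and the backlog drains out along the tail.
found = [12, 15, 10, 6, 3, 1]
fixed = [4, 8, 12, 10, 6, 3]
trend = open_defect_trend(found, fixed)  # [8, 15, 13, 9, 6, 4]
```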

              --- In extremeprogramming@egroups.com, Jen Wu <jen@d...> wrote:
              > ...
            • Steve Goodhall
                Message 6 of 20, May 3, 2000
                My view would be that the best metric for testers is the ratio of total bugs
                to bugs found in production. This will give you a reasonable measurement of
                testing effectiveness and it is easy to maintain.

                Steve Goodhall
                Principal Architect
                QASolutions
                Compuware Corporation
                mailto:SGoodhall@...
                mailto:steve.goodhall@...
                http://members.home.net/sgoodhall/


                Victory awaits those who have everything in order. People call this luck. -
                Roald Amundsen





                > -----Original Message-----
                > From: Jen Wu [mailto:jen@...]
                > Sent: Wednesday, May 03, 2000 12:14 PM
                > To: extremeprogramming@egroups.com
                > Subject: Re: [XP] Role of QA
                >
                >
                > ...