
Re: [XP] Role of QA

  • Michael D. Hill
    Message 1 of 20 , May 1 9:21 PM
      [The following in no way represents the official XP view, which
      frankly, I don't even know.]

      Jen...

      I have no faith in external QA. I believe it is one of those
      ideas that looks magnificent on paper, like ISO 900X, but is
      absolutely awful in practice. I've never seen an external test
      group produce a problem report beyond identifying an installation
      process that doesn't cover all the angles. Possibly I have only
      seen crummy QA teams, but that's been my honest experience.

      I am *not* in denial about the abysmal quality of most development
      efforts in our industry. But I believe that quality sucks for little
      other reason than because underskilled and undercoached development
      teams are constantly pressed to move faster than they can.

      Many externalities affect this situation to bring even lower lows.
      1) Heavyweight processes place unrealistic and value-subtracted
      burdens on developers and their front-line managers. 2) Ludicrous
      expectations from the money and over-inflated product descriptions
      from marketing are a major source of customer disappointment.
      3) Magic bullet beliefs add to the pain. 4) The worse we get at
      delivering the more folks want to find a system of 'control', and the
      heavier the non-development burdens get, destroying many fine sparks
      of talent and interest in the industry.

      I would like to see some figures on the cost-benefit analysis of an
      external QA department. I would even like to hear some anecdotal
      evidence. My own experience strongly suggests that external QA is
      simply not a cost-effective route to quality.

      Sorry for the rant, but I couldn't stop myself! [Ron? Phlip? How come
      I'm doing all the ranting around here? Are you guys well? Cough if you
      can't talk now.]

      Seeya!
      Hill


      +----------------------------------------------------------+
      |Michael Hill |
      |Software-> Developer, Consultant, Teacher, Coach |
      |Lifeware-> Egghead, Romantic, Grandpa, Communitarian |
      |<uly_REMOVE_THIS_PART_@...> |
      +----------------------------------------------------------+
    • Phlip
      Message 2 of 20 , May 1 9:52 PM
        From: Michael D. Hill


        > [The following in no way represents the offical XP view, which
        > frankly, I don't even know.]
        >
        > Jen...
        >
        > I have no faith in external QA. I believe it is one of those
        > ideas that looks magnificent on paper, like ISO 900X, but absolutely
        > awful in practice. I've never seen an external test group produce a
        > problem report other than identifying an installation process that
        > doesn't cover all the angles. Possibly, I have only seem crummy
        > QA teams, but that's been my honest experience.

        I thought in XP QA mapped onto the "parallel development of comprehensive
        functional tests" role.

        Phlip
        ======= http://users.deltanet.com/~tegan/home.html =======
      • Peter D. Padilla
        Message 3 of 20 , May 1 10:13 PM
          My group is still deciding whether or not to implement XP, but if it has an
          inherent animosity for quality assurance, I will be reluctant to go that
          direction. Frankly, from a QA perspective, there need to be checks and
          balances throughout the process - it's better if they're on the same team...
          not really "external."

          I am not an advocate of heavy external processes and metrics; I agree that
          they weigh down the process and contribute to bad code being delivered late.
          I once forced such a process upon a group; while we achieved a high CMM
          rating, we wasted energy on non-productive process work. However, it seems
          that unit and even functional tests, whoever performs them, don't
          necessarily ensure that the developed product meets the client requirements.

          There are valid and essential roles for project management, quality
          assurance, and product acceptance - and you'll be hard-pressed to find an
          organization willing to toss those out the window because some developers
          want to "go extreme."

          Truthfully, do you as developers want to walk through and test every
          business process (with every possible variation), test usability, check
          performance and stress against standards, and do the tracking work of a test
          group? Yes, you should be comfortable enough with your code to believe that
          it works... and the more you know about the business process for which you
          are developing, the better... but is it a good use of resources for you to
          test in detail rather than begin new development?

          I've managed QA groups for ten years, and they've produced a whole lot more
          than a few bugs about details in installation. A lot of code which compiles
          successfully, and works to the satisfaction of a development team, is still
          not ready for release to a client. QA teams are trained to look not only at
          how the software works, but at how it doesn't work. That seems to be an
          important function. Development may assist in covering some basic
          functional tests as they build - hallelujah! But if you think that's the
          extent of quality assurance - even in a RAD environment - you are mistaken.

          Look, I've only been lurking for a short while... and my total experience
          with XP is from reading the book three weeks ago. But if it boils down to
          developers wanting to throw a lot of valuable process out the window because
          it "restricts" them, maybe I'm looking at the wrong alternative to enable
          great applications. I don't want to be offensive, and I apologize if
          anything in this message is perceived as derogatory... but a great team
          knows how to delegate its responsibilities - even those which aren't
          strictly development.

          Thanks,
          Peter Padilla


          ----- Original Message -----
          From: Phlip <phlip@...>
          To: <extremeprogramming@egroups.com>
          Sent: Monday, May 01, 2000 11:52 PM
          Subject: Re: [XP] Role of QA


          > From: Michael D. Hill
          >
          >
          > > [The following in no way represents the offical XP view, which
          > > frankly, I don't even know.]
          > >
          > > Jen...
          > >
          > > I have no faith in external QA. I believe it is one of those
          > > ideas that looks magnificent on paper, like ISO 900X, but absolutely
          > > awful in practice. I've never seen an external test group produce a
          > > problem report other than identifying an installation process that
          > > doesn't cover all the angles. Possibly, I have only seem crummy
          > > QA teams, but that's been my honest experience.
          >
          > I thought in XP QA mapped onto the "parallel development of comprehensive
          > functional tests" role.
          >
          > Phlip
          > ======= http://users.deltanet.com/~tegan/home.html =======
        • Malte Kroeger
          Message 4 of 20 , May 2 2:22 AM
            What XP says (as I understand it):

            XP doesn't officially have a QA-Team. In XP the developers do all the
            tests. They write their own unit tests, of course, and they support the
            customer in working out the functional tests.

            The question is: Is this a good way to do it? Is this enough testing?

            In many development teams now there is a lot less testing than this, so
            doing it the XP way would be an advantage already.

            The objection people most often raise is: how can programmers write
            tests for their own code? They will not find any bugs!
            I think this is mitigated by the customers, who specify the
            functional tests, but the concern is justified.
            The effect is also mitigated by collective code ownership and pair
            programming, since there are several developers writing tests for
            everyone's code.
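            The developer-written, always-run unit tests Malte describes might
            look like the following. This is only an illustrative sketch using
            Python's standard-library unittest; the Stack class is hypothetical
            and not from any project discussed in this thread.

```python
import unittest

# Hypothetical code under test, invented purely for illustration.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

# XP-style unit tests, written by the developers alongside the code
# so the whole suite can run at every integration.
class StackTest(unittest.TestCase):
    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

    def test_pop_on_empty_stack_raises(self):
        self.assertRaises(IndexError, Stack().pop)

if __name__ == "__main__":
    unittest.main()
```

            With collective ownership, any pair can extend either the Stack or
            its tests; the suite, not an external group, guards the behavior.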

            As I see it, XP is a lightweight process with a somewhat minimalistic
            approach. But it's also flexible, so if you notice that it's not
            sufficient, add the parts you need. But as with all changes to your
            development process, you should keep track of the effects and whether
            the change was really worth it.

            For me it's quite possible to have a 'separate' QA-Team that develops
            the functional tests from the customer specifications.

            But what I think is most important about a QA-Team is that it stays in
            very close contact with the development team. This is often not the case.

            QA-Team members often know much better what to test, what problems
            usually occur, what the tests should look like to find the most likely
            bugs, etc. With this knowledge they need to support the developers, and
            they need to transfer this knowledge to the developers.

            What is really needed is for QA-Team members to pair with developers
            when writing tests, to review the tests done so far, and to suggest
            improvements to the tests and implement them while pairing with
            developers.

            In big systems it might be helpful to have some people on the QA-Team
            do some ad-hoc manual testing of the integrated system as well, since
            not all the tests can be fully automated in a reasonable amount of time.

            So from my point of view the QA-Team should do less testing themselves,
            but should give more support to the developers in doing the tests and in
            developing a high quality product in the first place.

            So it's more of a QA-Coaching role.

            just my two cents...

            Malte
          • Michael C. Feathers
            Message 5 of 20 , May 2 4:15 AM
              ----- Original Message -----
              From: Arrizza, John <john_arrizza@...>
              > > From: Malte Kroeger [mailto:kroeger@...]
              >
              > > XP doesn't officially have a QA-Team....
              > > The question is: Is this a good way to do it? Is this enough testing?
              > > ...
              >
              > I think this question stems from two basic meanings of the word
              > quality in QA. If the QA-Team is just doing basic testing then XP
              > functional tests are sufficient. If the QA-Team is doing more, as
              > Peter Padilla says:
              >
              > ...walk through and test every
              > business process (with every possible variation), test usability, check
              > performance and stress against standards, and do the tracking work of a
              > test group?....
              >
              > then XP is not sufficient.

              I think that if we take the words Quality Assurance at
              face value then it is everybody's job. How could
              it not be in a team which is concerned about what they
              do?

              In the more traditional use of the term, I see some
              value in having someone accumulate information
              for a group of developers, but unfortunately, I've
              too often seen this devolve into turf issues where
              QA thinks that because they accumulate info
              and measure adherence to standards, they should
              have veto authority over the standards as well.
              This pollutes the process and undermines QA's
              effectiveness. No system survives well when
              its court reporters act as legislators. Once
              impartiality is gone, you've lost it.

              Michael

              ---------------------------------------------------
              Michael Feathers mfeathers@...
              Object Mentor Inc. www.objectmentor.com
              Training/Mentoring/Development
              -----------------------------------------------------
              "You think you know when you can learn, are more sure when
              you can write, even more when you can teach, but certain when
              you can program. " - Alan Perlis
            • Arrizza, John
              Message 6 of 20 , May 2 4:26 AM
                > -----Original Message-----
                > From: Malte Kroeger [mailto:kroeger@...]

                > XP doesn't officially have a QA-Team....
                > The question is: Is this a good way to do it? Is this enough testing?
                > ...

                I think this question stems from two basic meanings of the word quality in
                QA. If the QA-Team is just doing basic testing then XP functional tests are
                sufficient. If the QA-Team is doing more, as Peter Padilla says:

                ...walk through and test every
                business process (with every possible variation), test usability, check
                performance and stress against standards, and do the tracking work of a test
                group?....

                then XP is not sufficient.

                I think QA has degenerated into doing just basic testing because the code
                coming out of development has been so bad. If, with XP, the quality of that
                code rises to the point where there are no or very few bugs, then QA can
                go on and check the quality of the application, not just the correctness
                of the code.

                QA's role would need slight redefinition, though, since the customer is a
                third party in all of this. The customer can ensure that they are getting
                what they want. QA does not need to "test usability," for example. Perhaps
                QA doesn't need to walk through all business processes; the customer would
                do some or most of that. In short, QA's role would be ensuring that nothing
                slips through the customer's testing and the FTs and UTs that the
                developers do.

                John
              • Merk, John
                Message 7 of 20 , May 2 5:02 AM
                  It's disconcerting that the concepts of testing are so foreign to many
                  developers, myself included. To help introduce testing into the development
                  process we've set aside an open work area into which we're going to move 4
                  developers and 2 QA. Sounds a bit like the premise of a show on MTV...
                  Anyway, the idea is to have the QA people pair with the developers when
                  we're writing unit tests. Also, they will be working on the functional
                  testing. I'll post our experiences from time to time. I'd like to hear
                  from other folks who have tried something similar.

                  John
                  john.merk@...
                  -----Original Message-----
                  From: Malte Kroeger [mailto:kroeger@...]
                  Sent: Tuesday, May 02, 2000 5:22 AM
                  To: extremeprogramming@egroups.com
                  Subject: Re: [XP] Role of QA


                • Lowell Lindstrom
                  Message 8 of 20 , May 2 6:58 AM
                    I am responding to many of the "Role of QA" messages from the Digest
                    this morning:


                    > Date: Mon, 01 May 2000 19:36:43 -0700
                    > From: Jen Wu <jen@...>
                    >Subject: Role of QA
                    >
                    >I don't know if a lot has been said for the role of QA, but here are
                    >some questions ...
                    >

                    Acceptance of the system is the responsibility of the customer. Only
                    the customer can define the cost/quality trade-off. Programmers cannot
                    deliver crap, of course, but given the difference in required quality
                    levels between a life-critical system and release 1.0 of my latest
                    e-commerce experiment, decisions must be made as to how much quality
                    should be designed and tested in. The customer must drive this decision
                    through the conversation with the programmers.

                    XP expands the responsibilities of the customer in this regard. My
                    personal past experience has been that customer acceptance is a cursory
                    check of major functionality. With XP, they write all of the functional
                    tests. This likely means providing resources to the customer that they
                    are not used to having.

                    >Some background ... a sophisticated QA team will do most if not all of
                    >the following (among other things):
                    >
                    >* Develop a test plan, including test suites and cases
                    >* Structured black box testing -- tested by hand
                    >* Ad hoc black box testing
                    >* Structured automated functional testing -- testing using automated
                    > tools on the UI (no calls to code)
                    >* White box and intrusive automated tests (code reviews and tests
                    > like the unit tests that the programmers are responsible for in
                    > XP)
                    >* Code coverage
                    >* Bug tracking (correlated with test cases and code coverage)
                    >* Multi-user and performance testing using testing tools
                    >

                    Most of this should be done by the customer or under the customer's
                    direction. The amount of any of this that you do should be driven by
                    the customer's quality requirements, which will likely be driven by
                    their balance between cost/time pressures to release versus cost of
                    defects after release.

                    I am not sure what best practice is concerning bug tracking in terms
                    of customer or programmer ownership. I have heard of bugs turning into
                    user stories and being dealt with by the planning game in subsequent
                    iterations/releases.

                    A note on code coverage. Test-first programming will ensure high code
                    coverage at the unit test level. The biggest benefit I can see from code
                    coverage analysis at the functional test level is helping to identify
                    code that can be refactored out of the system.


                    >With XP, there is some overlap between what a QA department
                    >would do and
                    >what developers would do. Should the QA people be brought into the
                    >development team?
                    >If so, what about the idea that a product should be thoroughly
                    >tested by
                    >someone who didn't take part in writing it?
                    >If not ...
                    >Should QA and development work on the same testing framework?

                    I believe they should be moved toward the customer, rather than the
                    developers.

                    There are some interesting functional test frameworks emerging from XP
                    projects. I have also heard of goals to extend JUnit for functional
                    testing. Others that are more closely involved will need to add details
                    here.
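                    To make the "functional test framework" idea concrete, here is a
                    minimal sketch of a customer-owned, table-driven acceptance test.
                    Everything in it is hypothetical and invented for illustration:
                    the `calculate_discount` function, the pricing rule (10% off at
                    $1000+, 5% off at $500+), and the expected values the customer
                    supplies as data rather than code.

```python
# Hypothetical system behavior under test (invented for illustration):
# orders of $1000 or more get 10% off, $500-$999 get 5% off, else nothing.
def calculate_discount(order_total):
    if order_total >= 1000:
        return round(order_total * 0.10, 2)
    if order_total >= 500:
        return round(order_total * 0.05, 2)
    return 0.0

# The customer expresses the acceptance criteria as a table of
# (order total, expected discount) pairs -- data, not code.
ACCEPTANCE_TABLE = [
    (1500.00, 150.00),
    (1000.00, 100.00),
    (750.00, 37.50),
    (499.99, 0.00),
]

def run_acceptance_tests():
    # Returns a list of (input, expected, actual) for every failed row;
    # an empty list means the system meets the customer's table.
    failures = []
    for total, expected in ACCEPTANCE_TABLE:
        actual = calculate_discount(total)
        if actual != expected:
            failures.append((total, expected, actual))
    return failures

if __name__ == "__main__":
    print("failures:", run_acceptance_tests())
```

                    The point of the style is that the table stays readable by the
                    customer, while programmers (or a QA person sitting with the
                    customer) maintain the thin harness that runs it.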

                    >When should QA start being involved?

                    Customers begin writing functional tests as soon as the user stories for
                    the iteration have been defined. QA (as part of the customer) must be
                    involved at that time.

                    >Message: 7
                    > Date: Tue, 2 May 2000 00:13:41 -0500
                    > From: "Peter D. Padilla" <padillap@...>
                    >Subject: Re: Role of QA
                    >
                    >
                    >Look, I've only been lurking for a short while... and my total
                    >experience
                    >with XP is from reading the book three weeks ago. But if it
                    >boils down to
                    >developers wanting to throw a lot of valuable process out the
                    >window because
                    >it "restricts" them, maybe I'm looking at the wrong
                    >alternative to enable
                    >great applications. I don't want to be offensive, and I apologize if
                    >anything in this message is perceived as derogatory... but a great team
                    >knows how to delegate its responsibilities - even those which aren't
                    >strictly development.

                    I think Michael's views reflect an experience that many have had of an
                    external QA group not being driven by the customer's requirements.

                    As with many XP practices, the goal is not to throw good practices out
                    the window, but rather to focus them on delivering value to customers
                    through the work of empowered programmers. The system must be accepted,
                    but the customer needs to drive what acceptance means. In many cases
                    this will require lots of QA, driven not by an external QA engineer's
                    or manager's or a process document's definition of quality, but by the
                    customer's definition of quality.


                    >Message: 8
                    > Date: Tue, 02 May 2000 11:22:03 +0200
                    > From: Malte Kroeger <kroeger@...>
                    >Subject: Re: Role of QA
                    >
                    >What XP sais (as I understand it):
                    >
                    >XP doesn't officially have a QA-Team. In XP the developers do all the
                    >tests. They write their own unit tests, of course, and they support the
                    >customer in working out the functional tests.
                    >
                    >The question is: Is this a good way to do it? Is this enough testing?

                    As described, IMHO, this is not XP. In XP, the developers do all the
                    UNIT tests. Acceptance of the system is the responsibility of the
                    customer. In this regard, XP places more responsibility on the customer
                    to say what is acceptable. XP does not describe a QA-team, but the
                    function and responsibilities don't disappear; they become part of the
                    customer's role.

                    >
                    >In many development teams now there is a lot less testing than this, so
                    >doing it the XP way would be an advantage already.
                    >

                    Agreed, we're seeing this in many of the projects that we coach today.
                    The biggest short term benefit is the improved testing.

                    >But what I think is most important about a QA-Team is, that it stays in
                    >very close contact with the development team.

                    Yes, the customer, as well.

                    >What is really needed is, that QA-Team members pair with developers for
                    >writing tests. That they review the tests done so far. That they make
                    >suggestions on how to improve the tests and do the improvements pairing
                    >with developers.
                    >
                    >In big systems it might be helpful to have some people of the
                    >QA-Team do
                    >some adhoc-manual testing of the integrated system as well,
                    >since not all
                    >the tests can be fully automated in a reasonable amount of time.
                    >
                    >So from my point of view the QA-Team should do less testing themselves,
                    >but should give more support to the developers in doing the
                    >tests and in
                    >developing a high quality product in the first place.
                    >
                    >So it's more of a QA-Coaching role.


                    I am not sure I agree with the pairing of QA and developers. XP is
                    trying to maximize the focus of the developers on delivering valuable
                    functionality. But it is an interesting proposal. I'd be interested in
                    others' experience here.

                    Lowell
                  • Bill Caputo
                    Message 9 of 20 , May 2 8:00 AM
                      Hi Peter,

                      I am responding to this because I am genuinely confused. I don't have 10
                      years of experience in anything, so you can look at my questions as those
                      of a naive newbie if you wish:

                      > However, it seems
                      > that unit and even functional tests, whoever performs them, don't
                      > necessarily ensure that the developed product meets the client requirements.

                      I am curious what *would* if these things don't? My understanding is that
                      Unit & Functional Tests (specifically XP style) are written to cover the
                      code (UT) and to cover the User Stories that the *customer* has selected
                      (FT), so if this doesn't get us to meeting client requirements, what will?

                      > There are valid and essential roles for project management, quality
                      > assurance, and product acceptance - and you'll be hard-pressed to find an
                      > organization willing to toss those out the window because some developers
                      > want to "go extreme."

                      In the case of my company, we either don't have these at all, or they are broken. I agree that perhaps an organization
                      that is delivering good code on time and under budget won't desire change, but if they aren't (like us), then what is
                      the reason for keeping all this baggage? If XP works, who cares which *tried and true* techniques go the way of the dodo?

                      > Truthfully, do you as developers want to walk through and test every
                      > business process (with every possible variation)

                      From Code Complete (Steve McConnell):
                      A simple program that takes Name, Address, and Phone Number and stores them in a file:
                      Every Possible Variation =
                      Name 26^20 (20 chars each with 26 possibles)
                      Address 26^20 (20 chars each with 26 possibles)
                      Phone 10^10 (10 digits each with 10 possibles)

                      Total 26^20 * 26^20 * 10^10 = 10^66

                      Your QA department can do that???
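                      The arithmetic above checks out, by the way (Python here purely as a calculator):

```python
# Quick check of the Code Complete arithmetic quoted above: exhaustively
# testing three small input fields already gives an astronomical count.
import math

name = 26 ** 20     # 20 characters, 26 possibilities each
address = 26 ** 20  # same again
phone = 10 ** 10    # 10 digits, 10 possibilities each

total = name * address * phone
print(len(str(total)))              # 67 digits, on the order of 10^66
print(round(math.log10(total), 1))  # 66.6
```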

                      I guess my bottom line is this: If you can get the benefits that XP seems to offer (good quality, good estimates, good
                      customer satisfaction, and good developer morale, etc.) without it, then why do it?

                      If not, then maybe you should find out more about XP before you dismiss it. At my company, we are slowly but surely
                      adding Process to our development work. Since I came to this company, I have seen improved teamwork, a move toward
                      standards, Unit Tests that have *improved* my productivity, and many of the other XP practices seem well suited to solving
                      other problems. If they aren't, we will look elsewhere, but the fact is, XP is lightweight, low risk to try, and seems
                      to solve the problems of poor development, so I am being open-minded. If it flies in the face of what is established,
                      fine by me, since what's established (here at least) is broken. We need something, and the flexibility of XP makes it
                      attractive.

                      Best,
                      Bill
                    • Peter D. Padilla
                      Message 10 of 20 , May 2 8:16 AM
                        Thanks, Bill.

                        I don't consider myself to know everything, and you certainly don't sound
                        like a newbie. I don't want to come off as "old school" or controlling...
                        I've just worked very hard to make sure that the software we develop is of
                        use and does what it's supposed to... and doesn't break too easily.

                        "Function tests" as I read them in XP take the developer's intent for design
                        and verify that the code does what it should. That's valuable, and a great
                        start. However, the role of QA *should* be a bit more comprehensive than
                        that. We have to consider not only the "expected" course of action (the
                        "happy path"), but variations from that. What could go wrong? What could a
                        user do wrong? What will cause the system to break? What is the effect of
                        multiple users on the system? At what load does the system fail? Are all
                        of the business questions satisfied by the design? Particularly if there
                        are multiple developers or multiple development teams working on a large
                        project - did they step on each other's toes, and does a new process "break"
                        an existing one?

                        Also, client "stories" and even use cases provide answers to everything that
                        the client/business analyst/whoever has thought out... but what did they
                        miss? What questions about a system weren't answered up front? Obviously,
                        we have to take these things back to the customer for answers - but in my
                        experience, a lot of these questions don't really get raised until QA begins
                        its review. (My QA team is involved from the moment
                        requirements/storyboards are being written, so hopefully we catch a lot of
                        these questions *before* we get to design.)

                        Given that some of the support functions for development (QA, project
                        management) are dysfunctional at your company, you might take the approach
                        of adding them to XP as a need is demonstrated. I've just seen those
                        functions save companies too many times to want to throw them out - and I've
                        spent too many years justifying QA (risk mitigation, cost savings, customer
                        satisfaction, etc.) to believe that it's an extraneous function. I don't
                        want heavy-duty processes (been down that path), but I do want a framework
                        that responds quickly and helps validate the process.

                        Re: the functional tests and possible variations. Yes, my QA department can
                        handle a lot of variations on business rules. The phone number example you
                        cite is covered very easily by an automated test - SQA Robot, WinRunner,
                        Segue SilkTest... probably even unit testers like JTest can do that.
                        Nobody - QA or developer - wants to sit and try to put every single
                        character into every space in every field. The variations that are more
                        appropriate to consider are: different paths through the system, varying
                        answers, qualifications (such as in a credit app), incomplete answers,
                        required fields, different selections and database interactions, etc. I
                        don't underestimate the ability of a developer to sit and consider every
                        possible course of action... I am just not certain it's the best use of that
                        resource. It's a crass comparison, but the average developer with several
                        years of experience (at least in Denver) makes nearly twice what the average
                        QA Analyst with several years' experience does. Some tasks should be
                        allocated to the less expensive resource, so that the more expensive
                        resource can concentrate on *generating* product.

                        My new question is this: How much *is* XP a flexible framework? Is it an
                        "all or none" proposition? If an organization with a project management
                        group, a quality assurance group, or any other "support" function for
                        development wants to try it, are they supposed to put these other functions
                        "on the bench" to see what happens? Or, should we take advantage of the
                        resources we already have on our team - with their knowledge and expertise,
                        and create a hybrid "process framework"... or should we create a separate
                        "experimental" development group and try to compare their success to the
                        rest of the team?

                        I apologize for the long message...
                        Peter

                      • Bill Caputo
                        Message 11 of 20 , May 2 9:47 AM
                          > Thanks, Bill.

                          Sure, I am just trying to understand this too, and I think you bring formidable arguments to the table. OTOH I have seen
                          them answered here to my satisfaction before, so I am trying to explain my insights to you. Thanks for the feedback! :)

                          > I don't consider myself to know everything, and you certainly don't sound
                          > like a newbie. I don't want to come off as "old school" or controlling...
                          > I've just worked very hard to make sure that the software we develop is of
                          > use and does what it's supposed to... and doesn't break too easily.

                          I wasn't being sarcastic. I have about 2 years in as a full-time programmer, and I come from a non-traditional background
                          (Philosophy!!). I have worked hard to fill in the blanks, and I had a good grounding (great H.S. Comp Sci Dept), but I
                          won't pretend to be an authority. But I *have* been successful these past 2 years, and I spend a *lot* of time with
                          these types of issues, and with programming in general. So I spout off from time to time. If you want more hubris, read
                          on.


                          > "Function tests" as I read them in XP take the developer's intent for design
                          > and verify that the code does what it should. That's valuable, and a great
                          > start.

                          There has been a strong push to change the term from Functional to Acceptance Tests, because of this very thing. I
                          think the key here is that XP FTs do not represent the developer's intent, but the customer's.

                          > However, the role of QA *should* be a bit more comprehensive than
                          > that. We have to consider not only the "expected" course of action (the
                          > "happy path"), but variations from that. What could go wrong? What could a
                          > user do wrong? What will cause the system to break? What is the effect of
                          > multiple users on the system? At what load does the system fail? Are all
                          > of the business questions satisfied by the design? Particularly if there
                          > are multiple developers or multiple development teams working on a large
                          > project - did they step on each other's toes, and does a new process "break"
                          > an existing one?

                          Some of these would qualify as user stories, IMHO. If the customer feels that load limits and other such things are valuable,
                          then they will move them up the list. No one here, I think, is expecting them to discover the need themselves, and as
                          stated here before, some encouragement is to be expected, but in the end, *they* choose what goes in and what doesn't.

                          > Also, client "stories" and even use cases provide answers to everything that
                          > the client/business analyst/whoever has thought out... but what did they
                          > miss?

                          If the customer is part of the team, this omission will be noticed immediately, and the User Story/Iteration/Refactoring
                          triumvirate will make this manageable too.

                          > What questions about a system weren't answered up front?

                          None, but the most immediate, and valuable/risky.

                          I am going to summarize from here, because others (see the archives) have addressed most of this better than I can, but
                          let me just say that it seems that the more experienced/entrenched people have been the biggest skeptics. Now it could
                          be argued, and may very well be, that this is because you all know more than those of us who have not been involved in real
                          Process much, but maybe too it's because of the investment that you feel you may lose (you made allusions to this in the
                          part I snipped). All I can say to that is that it's the results that matter. If heavyweight methods work, keep using
                          them; if not, this one seems easy to implement, and Kent Beck et al. seem to have a lot of experience with this stuff.

                          As for Project Managers, Analysts, QA people, Architects, Data Designers, etc. They have a very real place in an XP
                          shop--they just might have to come down into the trenches more. Or, they may act as consultants to the team. Personally,
                          I like Architecture, and Data Design, and think I would be good at those jobs, XP says I won't get those titles, but if
                          I get to do the work, what difference does it make?

                          Finally, the strength of XP is in its flexibility. No one here (or anywhere) is, can, or should be trying to tell you
                          what you must and must not do. They are simply stating that taken together, the XP practices lead to a successful,
                          lightweight, comprehensive process that covers the concerns you mention, and that these other, heavier structures are
                          meant to address.

                          I will finish with something that has echoed across this list since I joined it. Take the time to learn, and then try
                          *all* of XP before you dismiss it as *impossible*; maybe it's just what the doctor ordered. If not, or if you run into
                          problems, then posting here is a great way to find out if it's the process, your (my) lack of understanding, or other
                          factors that are causing the problem.

                          Remember, the only reason to have an explicit methodology is that you have one regardless, and an explicit one is supposed
                          to improve the chances of success over the default one. If it doesn't, ditch it.

                          Later,
                          Bill
                        • David Brady
                          Message 12 of 20 , May 2 10:07 AM
                            <DISCLAIMER>I'm new to XP myself, so I speak for me, not for
                            XP.</DISCLAIMER>

                            Peter,

                            First let me say that it's good to have a(nother) healthy skeptic around.

                            I have worked only in environments similar to the one mentioned by Michael:
                            where management wants to push code out fast, and QA is what comes up when
                            management doesn't get what it *wanted* in that short a time.

                            My take on XP is that it's ABOUT quality. I don't stop work on a unit until
                            I believe that it can't be broken. If you have no QA team, that's your only
                            hope. If you have a good QA team, it's easy to get lazy and write lots of
                            hopey code, because if it doesn't work, QA should catch it. In a better
                            world--one I believe *can* exist, but so far haven't seen, and therefore am
                            trying to create--the developers write the Quality, and the (good) QA team
                            does the Assurance.

                            In perfect XP, the QA team may be unnecessary. In my XP reality, however, I
                            just don't trust myself enough to write perfect code OR perfect tests. I
                            actually appreciate having someone around with the particular demonic twist
                            of mind required to destroy my happy little programs.

                            I'm looking forward to hearing more from you; you've obviously seen projects
                            where QA was not just a waste of resources or abused as a crutch by
                            development. I think XP is lightweight enough to integrate seamlessly with
                            that kind of QA.

                            > -----Original Message-----
                            > From: Michael D. Hill [mailto:uly@...]
                            >
                            > I've never seen an external test group produce a
                            > problem report other than identifying an installation process that
                            > doesn't cover all the angles.

                            In theory, this is still a good thing. XP does a great job of preventing
                            "holes in code" before they happen. But a good QA department would help you
                            find the holes in your *thinking* as well:

                            What happens if name is null? What happens if the user pastes 400k of
                            garbage into that 10-character field (Overrun exploit)? This is reduced by
                            Pair Programming, but having a full-time devil's advocate on hand--or a team
                            of them--would help me sleep even better at night.
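                            Both of those checks are cheap to write down as ordinary tests. A hypothetical sketch (the `store_name` function
                            and its 10-character limit are invented for illustration):

```python
# Hypothetical "devil's advocate" tests: attack the code with null and
# oversized input instead of only the happy path. store_name is invented.

def store_name(name):
    """Code under test: accepts a name for a 10-character field."""
    if name is None:
        raise ValueError("name is required")
    if len(name) > 10:
        raise ValueError("name too long")
    return name

# Null input should fail with a clear error, not crash mysteriously.
try:
    store_name(None)
    raise AssertionError("expected ValueError for None")
except ValueError:
    pass

# 400k of pasted garbage must be rejected, not overrun anything.
try:
    store_name("x" * 400_000)
    raise AssertionError("expected ValueError for oversized input")
except ValueError:
    pass

print("hostile inputs rejected")
```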

                            Just my $0.02.

                            -dB
                            --
                            David Brady
                            dbrady@...
                            Diagonally parked in a parallel universe
                          • Jen Wu
                            Message 13 of 20 , May 2 10:48 AM
                              Two issues:

                              1. XP does not invalidate QA.

                              I'm not sure about no QA w/ perfect XP ... that's like saying if
                              programmers didn't write bugs there wouldn't be a need to find them.
                              It's never going to happen.

                              Some things that developers probably will not be testing for in XP:

                              Multi-platform testing
                              Configuration testing (sw and hw conflicts, etc.)
                              Stress testing
                              User testing
                              Algorithmic verification
                              Multi-user functional testing
                              Automated user-interface testing

                              If the above is done with automated tools (e.g., Segue, Mercury,
                              Rational), they can be development projects in and of themselves (I'm
                              used to telling people that QA engineers often *are* software
                              engineers). This is not the sort of thing that developers will do.

                              2. QA as part of product management?

                              This is actually not very different from the traditional view. Product
                              management comes up with what the product should be. Development makes
                              it. QA makes sure that what development makes matches what product
                              management wants, making interpretations where necessary. The biggest
                              problem is that QA often does things that product management isn't
                              interested in (the same way that development does), and as a result can
                              slow down the process. This happens when QA exists independent of
                              product management. Of course, this is analogous to what happens when
                              development goes off and does their own thing, too, which is one of the
                              things that XP addresses.

                              Maybe we should apply XP to QA projects the same as development ... QA
                              should produce test plans incrementally and go to product management for
                              verification that they are verifying the right things? After all, in
                              many ways, QA -- especially QA engineering -- is very similar to
                              development projects.

                              I suspect that there will be a role for someone to know what metrics are
                              most useful, but this might be management or product management's
                              responsibility.

                              Jen
                            • kent.schnaith@westgroup.com
                              Message 14 of 20 , May 2 11:41 AM
                                There seems to be some confusion about the roles of QA and of
                                testing. These are two very different things. The role of QA is not
                                to test the software, but to verify that the development team
                                (including both developers and testers) has really done what they
                                said they were going to do.

                                SQA - as defined by the CMM, Level 2 - Software Quality Assurance
                                (SQA):
                                The purpose of Software Quality Assurance is to provide management
                                with appropriate visibility into the process being used by the
                                software project and of the products being built.
                                Software Quality Assurance involves reviewing and auditing the
                                software products and activities to verify that they comply with the
                                applicable procedures and standards and providing the software
                                project and other appropriate managers with the results of these
                                reviews and audits.

                                Testing - is described as one aspect of CMM Level 3 - Software
                                Product Engineering (SPE)

                                The purpose of software testing is to verify that the software
                                satisfies the specified software requirements.
                                Integration testing of the software is performed against the
                                designated version of the software requirements document and the
                                software design document.
                                System testing is performed to ensure the software satisfies the
                                software requirements.
                                Acceptance testing is performed to demonstrate to the customer and
                                end users that the software satisfies the allocated requirements.

                                One of the many activities of SQA is to verify that testing has been
                                performed properly, that:
                                a) Required testing is performed.
                                b) System and acceptance testing of the software are performed
                                according to documented plans and procedures.
                                c) Tests satisfy their acceptance criteria, as documented in the
                                software test plan.
                                d) Tests are satisfactorily completed and recorded.

                                -- Kent


                                --- In extremeprogramming@egroups.com, Jen Wu <jen@d...> wrote:
                                > I don't know if a lot has been said for the role of QA, but here are
                                > some questions ...
                                >
                                > Some background ... a sophisticated QA team will do most if not all
                                of
                                > the following (among other things):
                                >
                                > * Develop a test plan, including test suites and cases
                                > * Structured black box testing -- tested by hand
                                > * Ad hoc black box testing
                                > * Structured automated functional testing -- testing using
                                automated
                                > tools on the UI (no calls to code)
                                > * White box and intrusive automated tests (code reviews and tests
                                > like the unit tests that the programmers are responsible for in
                                > XP)
                                > * Code coverage
                                > * Bug tracking (correlated with test cases and code coverage)
                                > * Multi-user and performance testing using testing tools
                                >
                                ...
                                >
                                > Jen
                              • Steve Goodhall
                                Message 15 of 20 , May 3 12:05 AM
                                  So that you know where I am coming from, I lead Compuware's software
                                  quality practice in Michigan. Comments interspersed.

                                  Steve Goodhall
                                  Principal Architect
                                  QASolutions
                                  Compuware Corporation

                                  mailto:SGoodhall@...
                                  mailto:steve.goodhall@...
                                  http://members.home.net/sgoodhall/
                                  Victory awaits those who have everything in order. People call this
                                  luck. - Roald Amundsen


                                  > -----Original Message-----
                                  > From: Michael D. Hill [mailto:uly@...]
                                  > Sent: Tuesday, May 02, 2000 12:22 AM
                                  > To: extremeprogramming@egroups.com
                                  > Subject: Re: [XP] Role of QA
                                  >
                                  >
                                  > [The following in no way represents the official XP view, which
                                  > frankly, I don't even know.]
                                  >
                                  > Jen...
                                  >
                                  > I have no faith in external QA. I believe it is one of those
                                  > ideas that looks magnificent on paper, like ISO 900X, but absolutely
                                  > awful in practice. I've never seen an external test group produce a
                                  > problem report other than identifying an installation process that
                                  > doesn't cover all the angles. Possibly, I have only seen crummy
                                  > QA teams, but that's been my honest experience.
                                  >

                                  Like ISO 900X, it looks good on paper, and it works if you do it right.
                                  Unfortunately, like 900X, few people do it right.
                                  You have seen crummy QA teams. My approach to this has the QA team
                                  live with the developers, just like the customer, and help with test development.

                                  > I am *not* in denial about the abysmal quality of most development
                                  > efforts in our industry. But I believe that quality sucks for little
                                  > other reason than because underskilled and undercoached development
                                  > teams are constantly pressed to move faster than they can.
                                  >

                                  I agree. Central QA helps with the first problem (underskilled). Nothing
                                  helps the second one (too fast), although QA can help persuade
                                  people not to do it.

                                  > Many externalities affect this situation to bring even lower lows.
                                  > 1) Heavyweight processes place unrealistic and value-subtracted
                                  > burdens on developers and their front-line managers. 2) Ludicrous
                                  > expectations from the money and over-inflated product descriptions
                                  > from marketing are a major source of customer disappointment.
                                  > 3) Magic bullet beliefs add to the pain. 4) The worse we get at
                                  > delivering the more folks want to find a system of 'control', and the
                                  > heavier the non-development burdens get, destroying many fine sparks
                                  > of talent and interest in the industry.
                                  >
                                  > I would like to see some figures on the cost-benefit analysis of an
                                  > external QA department. I would even like to hear some anecdotal
                                  > evidence. My own experience strongly suggests that external QA is
                                  > simply not a cost-effective route to quality.
                                  >

                                  If you read some of Phil Crosby's work on manufacturing quality (Quality Is
                                  Free, for instance), you will find that he agrees. What he suggests is that
                                  an external QA group is a step on the road to the final state, which has
                                  quality built into the processes.

                                  > Sorry for the rant, but I couldn't stop myself! [Ron? Phlip? How come
                                  > I'm doing all the ranting around here? Are you guys well? Cough if you
                                  > can't talk now.]
                                  >
                                  > Seeya!
                                  > Hill
                                  >
                                  >
                                  > +----------------------------------------------------------+
                                  > |Michael Hill |
                                  > |Software-> Developer, Consultant, Teacher, Coach |
                                  > |Lifeware-> Egghead, Romantic, Grandpa, Communitarian |
                                  > |<uly_REMOVE_THIS_PART_@...> |
                                  > +----------------------------------------------------------+
                                  >
                                  >
                                  >
                                  > To Post a message, send it to: extremeprogramming@...
                                  >
                                  > To Unsubscribe, send a blank message to:
                                  > extremeprogramming-unsubscribe@...
                                  >
                                  > Ad-free courtesy of objectmentor.com
                                  >
                                • Duncan Gibson
                                  Message 16 of 20 , May 3 3:07 AM
                                    David Brady wrote:
                                    DB> My take on XP is that it's ABOUT quality. I don't stop work
                                    DB> on a unit until I believe that it can't be broken. If you
                                    DB> have no QA team, that's your only hope. If you have a good
                                    DB> QA team, it's easy to get lazy and write lots of hopey code,
                                    DB> because if it doesn't work, QA should catch it. In a better
                                    DB> world--one I believe *can* exist, but so far haven't seen,
                                    DB> and therefore am trying to create--the developers write the
                                    DB> Quality, and the (good) QA team does the Assurance.

                                    Many of the XP processes balance each other, or provide constant
                                    tension between them so that things run smoothly. It seems to me
                                    that XP offers the opportunity for the same type of balance, or
                                    constructive tension, between the developers and the QA/testing
                                    group.

                                    The current methodologies, which offer BigBangIntegration and then
                                    provide [alpha and] beta versions of software, suffer from the
                                    problem that the overall team accepts and expects that the first
                                    release(s) will contain proportionally more defects than later
                                    ones, and that it will be part of the team's task to iron out
                                    these problems over time. The number of defects in the product
                                    should decrease with each release. Many developers don't test
                                    their own code adequately because they [wrongly] subscribe to the
                                    point of view that it is the task of the QA/testing group to
                                    catch errors in the code.

                                    Various authors[*] stress that this isn't the way to produce high
                                    quality software, and that the individual developer should strive
                                    to produce defect free software. Some people go as far as to
                                    consider any defect discovered by the QA/testing group - or worse
                                    still - the end user or customer, as a failure on the part of the
                                    developer.

                                    Some attempts to improve the situation by offering rewards for
                                    defects detected and removed proved counter-productive. Wily
                                    developers introduced known defects so that QA/testing would find
                                    them, and the developers could "solve" them in order to benefit
                                    from the reward.

                                    With XP, there is no BigBangIntegration. There is a series of
                                    smaller development cycles, even down to the internal 3-week
                                    iterations. If you consider that each of these cycles delivers
                                    approximately the same amount of new code, and that the defect
                                    rate is constant at N defects/KLOC (or however you measure it),
                                    then each cycle will also deliver the same number of new defects.

                                    In XP, the practice of UnitTest/TestFirst is intended to reduce
                                    the number of defects which slip through to the release (so N
                                    should be smaller). In XP, after the first cycle, the QA/testing
                                    team should be able to give an estimate for N, basically because
                                    N defects should have been found. After this first cycle, there
                                    is the possibility of introducing constructive tension between
                                    the developers and the QA/testing team. The developers should be
                                    aiming to deliver fewer than N defects per cycle, and the
                                    QA/testing group should be aiming to discover more than N defects
                                    per cycle. This would give measurable goals for both sides,
                                    possibly with some reward structure. N is adjusted after each
                                    cycle, and defects discovered by the end user/customer count
                                    against the QA/testing group.
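                                    The arithmetic above can be sketched in a few lines. This is only an
                                    illustration of Duncan's scheme; the KLOC figure and defect counts
                                    below are invented, not taken from any real project.

```python
# Back-of-the-envelope model of the per-cycle defect arithmetic above.
# All concrete numbers (10 KLOC/cycle, 4 defects/KLOC, the QA and
# customer counts) are assumptions for illustration only.

def defects_per_cycle(kloc_per_cycle, defects_per_kloc):
    """Expected new defects delivered by one development cycle."""
    return kloc_per_cycle * defects_per_kloc

def updated_estimate(found_by_qa, found_by_customer, kloc_per_cycle):
    """Re-estimate N (defects/KLOC) after a cycle. Customer-found
    defects count against the QA group, but they still count toward
    the true injection rate."""
    return (found_by_qa + found_by_customer) / kloc_per_cycle

# First cycle: 10 KLOC at an assumed N of 4 defects/KLOC.
n = 4.0
expected = defects_per_cycle(10, n)      # 40.0 expected defects

# QA finds 35 and the customer later reports 3, so N is adjusted.
n = updated_estimate(35, 3, 10)          # 3.8 defects/KLOC

# Goals for the next cycle, per the scheme above: developers aim to
# deliver fewer than this, QA aims to discover more than this.
target = defects_per_cycle(10, n)        # 38.0
```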

                                    Any comments?

                                    Cheers
                                    Duncan

                                    [*] Writing Solid Code, Maguire, ISBN 1-55615-551-4
                                    The Pragmatic Programmer, Hunt & Thomas, ISBN 0-201-61622-X
                                    Introduction to the PSP, Humphrey, ISBN 0-201-54809-7
                                    The Practice of Programming, Kernighan & Pike, ISBN 0-201-61586-X


                                    This is my article, not my employer's, with my opinions and my disclaimer!
                                    --
                                    Duncan Gibson, ESTEC/TOS/MCV, Postbus 299, 2200AG Noordwijk, Netherlands
                                    Tel: +31 71 5654013 Fax: +31 71 5656142 Email: duncan@...
                                  • Jen Wu
                                    Message 17 of 20 , May 3 9:13 AM
                                      The number of bugs found might be a good measure of how development is
                                      progressing, but I hate it as a metric for testers. It encourages QA
                                      folks to file more bug reports than are necessary. Instead, I'd suggest
                                      that QA be measured on the scope of their tests and the accuracy of
                                      the results: number of test cases run, percentage of code covered,
                                      percentage of functionality covered, number of platforms tested, etc.
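                                      A scope-based scorecard of the kind suggested here is easy to
                                      sketch. The field names and every number below are invented for
                                      illustration; a real scorecard would pull them from test runners
                                      and coverage tools.

```python
# A sketch of a coverage-style QA scorecard: measure testers on the
# scope and completeness of their testing, not on bug counts.
# All inputs are hypothetical example values.

def qa_scorecard(cases_run, cases_planned, lines_covered, lines_total,
                 features_covered, features_total, platforms_tested):
    """Summarize QA scope as completion and coverage percentages."""
    return {
        "test cases run": cases_run,
        "test case completion %": 100.0 * cases_run / cases_planned,
        "code coverage %": 100.0 * lines_covered / lines_total,
        "functionality coverage %": 100.0 * features_covered / features_total,
        "platforms tested": platforms_tested,
    }

card = qa_scorecard(cases_run=180, cases_planned=200,
                    lines_covered=8500, lines_total=10000,
                    features_covered=42, features_total=48,
                    platforms_tested=3)
# card["code coverage %"] -> 85.0
```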

                                      Bugs that are found by someone other than QA may be a measure of how
                                      they could improve in the same way that bugs found by QA is a measure
                                      of how development could improve. Also, bugs in QA code could count
                                      against them (after all, QA is often a development project in and of
                                      itself).

                                      Jen

                                      Duncan Gibson wrote:
                                      > ...
                                    • kent.schnaith@westgroup.com
                                      Message 18 of 20 , May 3 4:22 PM
                                        The best measure I have found for how development is progressing
                                        is "the number of open defects that need to be fixed before you can
                                        release the product". This measure is tracked from week to week
                                        (or even day to day). The shape of the trend is usually a hump with
                                        a long tail. Early on, more defects will be discovered than
                                        are fixed; later, few new defects should be discovered, and the
                                        developers will catch up and reduce the backlog. Of course, if
                                        you have trouble fixing a problem without creating another problem,
                                        then you are in for a very long march.
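                                        The hump-with-a-long-tail shape falls straight out of the
                                        bookkeeping: the open-defect backlog rises while discovery
                                        outpaces fixing, then drains. The weekly counts below are
                                        invented purely to illustrate the trend.

```python
# Sketch of the open-defect trend: weekly found/fixed counts give a
# backlog curve shaped like a hump with a long tail. The weekly
# numbers are hypothetical.

found = [12, 18, 15, 9, 5, 3, 1, 0]   # new defects discovered each week
fixed = [ 4, 10, 14, 12, 8, 5, 3, 2]  # defects fixed each week

backlog, open_now = [], 0
for f, x in zip(found, fixed):
    open_now = max(0, open_now + f - x)   # backlog can't go negative
    backlog.append(open_now)

# Backlog rises while discovery outpaces fixing, peaks, then tails off:
# [8, 16, 17, 14, 11, 9, 7, 5]
peak_week = backlog.index(max(backlog)) + 1   # the hump is at week 3
```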

                                        Rewards should be based on delivering a quality product, on time.
                                        The statistically inclined can predict the expected duration of the
                                        march.

                                        One attraction of XP is that the focus on finding defects early and
                                        on refactoring prevents you from falling into a long (endless)
                                        test-and-fix cycle.

                                        --- In extremeprogramming@egroups.com, Jen Wu <jen@d...> wrote:
                                        > ...
                                      • Steve Goodhall (Message 19 of 20, May 3 5:42 PM)
                                          My view would be that the best metric for testers is the ratio of total bugs
                                          to bugs found in production. This will give you a reasonable measurement of
                                          testing effectiveness and it is easy to maintain.

                                          Steve Goodhall
                                          Principal Architect
                                          QASolutions
                                          Compuware Corporation
                                          mailto:SGoodhall@...
                                          mailto:steve.goodhall@...
                                          http://members.home.net/sgoodhall/


                                          Victory awaits those who have everything in order. People call this luck. -
                                          Roald Amundsen





                                          > -----Original Message-----
                                          > From: Jen Wu [mailto:jen@...]
                                          > Sent: Wednesday, May 03, 2000 12:14 PM
                                          > To: extremeprogramming@egroups.com
                                          > Subject: Re: [XP] Role of QA
                                          >
                                          >
                                          > The number of bugs found might be a good measure of how development is
                                          > progressing, but I hate it as a metric for testers. It encourages QA
                                          > folks to file more bug reports than are necessary. Instead, I'd suggest
                                          > that QA be measured on the scope of their tests and the accuracy of
                                          > the results. Number of test cases run, percentage of code covered,
                                          > percent of functionality covered, number of platforms tested, etc.
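One way to picture the scope-and-accuracy metrics Jen lists is as a per-cycle record; the class and field names here are my invention, purely illustrative:

```python
# Illustrative record of the scope/accuracy metrics suggested above.
from dataclasses import dataclass

@dataclass
class QaCycleReport:
    test_cases_run: int
    code_coverage_pct: float
    functionality_coverage_pct: float
    platforms_tested: int
    bugs_in_qa_code: int  # counts against QA, per the suggestion above

    def summary(self) -> str:
        return (f"{self.test_cases_run} cases run, "
                f"{self.code_coverage_pct:.0f}% code / "
                f"{self.functionality_coverage_pct:.0f}% functionality covered, "
                f"{self.platforms_tested} platforms, "
                f"{self.bugs_in_qa_code} bugs in QA's own code")

print(QaCycleReport(1200, 85.0, 92.0, 4, 3).summary())
```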
                                          >
                                          > Bugs that are found by someone other than QA may be a measure of how
                                          > they could improve in the same way that bugs found by QA is a measure
                                          > of how development could improve. Also, bugs in QA code could count
                                          > against them (after all, QA is often a development project in and of
                                          > itself).
                                          >
                                          > Jen
                                          >
                                          > Duncan Gibson wrote:
                                          > >
                                          > > David Brady wrote:
                                          > > DB> My take on XP is that it's ABOUT quality. I don't stop work
                                          > > DB> on a unit until I believe that it can't be broken. If you
                                          > > DB> have no QA team, that's your only hope. If you have a good
                                          > > DB> QA team, it's easy to get lazy and write lots of hopey code,
                                          > > DB> because if it doesn't work, QA should catch it. In a better
                                          > > DB> world--one I believe *can* exist, but so far haven't seen,
                                          > > DB> and therefore am trying to create--the developers write the
                                          > > DB> Quality, and the (good) QA team does the Assurance.
                                          > >
                                          > > Many of the XP processes balance each other, or provide constant
                                          > > tension between them so that things run smoothly. It seems to me
                                          > > that XP offers the opportunity for the same type of balance, or
                                          > > constructive tension, between the developers and the QA/testing
                                          > > group.
                                          > >
                                          > > The current methodologies that offer BigBangIntegration and then
                                          > > provide [alpha and] beta versions of software suffer from the
                                          > > problem that the overall team accepts and expects that the first
                                          > > release(s) will contain proportionally more defects than later
                                          > > ones, and that it will be part of the team's task to iron out
                                          > > these problems over time. The number of defects in the product
                                          > > should decrease with each release. Many developers don't test
                                          > > their own code adequately because they [wrongly] subscribe to the
                                          > > point of view that it is the task of the QA/testing group to
                                          > > catch errors in the code.
                                          > >
                                          > > Various authors[*] stress that this isn't the way to produce high
                                          > > quality software, and that the individual developer should strive
                                          > > to produce defect free software. Some people go as far as to
                                          > > consider any defect discovered by the QA/testing group - or worse
                                          > > still - the end user or customer, as a failure on the part of the
                                          > > developer.
                                          > >
                                          > > Some attempts to improve the situation by offering rewards for
                                          > > defects detected and removed proved counter-productive. Wily
                                          > > developers introduced known defects so that QA/testing would find
                                          > > them, and the developers could "solve" them in order to benefit
                                          > > from the reward.
                                          > >
                                          > > With XP, there is no BigBangIntegration. There is a series of
                                          > > smaller development cycles, even down to the internal 3-week
                                          > > iterations. If you consider that each of these cycles delivers
                                          > > approximately the same amount of new code, and that the defect
                                          > > rate is constant at N defects/KLOC (or however you measure it),
                                          > > then each cycle will also deliver the same number of new defects.
                                          > >
                                          > > In XP, the practice of UnitTest/TestFirst is intended to reduce
                                          > > the number of defects which slip through to the release (so N
                                          > > should be smaller). In XP, after the first cycle, the QA/testing
                                          > > team should be able to give an estimate for N, basically because
                                          > > N defects should have been found. After this first cycle, there
                                          > > is the possibility of introducing constructive tension between
                                          > > the developers and the QA/testing team. The developers should be
                                          > > aiming to deliver fewer than N defects per cycle, and the
                                          > > QA/testing group should be aiming to discover more than N defects
                                          > > per cycle. This would give measurable goals for both sides,
                                          > > possibly with some reward structure. N is adjusted after each
                                          > > cycle, and defects discovered by the end user/customer count
                                          > > against the QA/testing group.
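Duncan's scheme can be sketched as a simple loop. His post does not say how N is adjusted, so the update rule below (next N = last cycle's total defect count) is only a guess, and the cycle numbers are made up:

```python
# Sketch of the per-cycle tension Duncan proposes. Defects found by the
# customer count against QA, so they feed into both checks and into N.

def run_cycle(n: int, qa_found: int, customer_found: int):
    total = qa_found + customer_found
    dev_met_goal = total < n    # developers: deliver fewer than N defects
    qa_met_goal = qa_found > n  # QA: discover more than N defects
    next_n = total              # guessed update rule: N tracks last total
    return dev_met_goal, qa_met_goal, next_n

n = 30  # N estimated from the first cycle
for qa_found, customer_found in [(28, 4), (22, 3), (18, 1)]:
    dev_ok, qa_ok, n = run_cycle(n, qa_found, customer_found)
    print(dev_ok, qa_ok, n)
```

Note the built-in tension: when developers hit their goal (fewer than N delivered), QA cannot also hit theirs (more than N found), which is exactly the balance the proposal relies on.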
                                          > >
                                          > > Any comments?
                                          > >
                                          > > Cheers
                                          > > Duncan
                                          > >
                                          > > [*] Writing Solid Code, Maguire, ISBN 1-55615-551-4
                                          > > The Pragmatic Programmer, Hunt & Thomas, ISBN 020161622X
                                          > > Introduction to the PSP, Humphrey, ISBN 0201548097
                                          > > The Practice of Programming, Kernighan & Pike, ISBN 020161586X
                                          > >
                                          > > This is my article, not my employer's, with my opinions and my disclaimer!
                                          > > --
                                          > > Duncan Gibson, ESTEC/TOS/MCV, Postbus 299, 2200AG Noordwijk, Netherlands
                                          > > Tel: +31 71 5654013 Fax: +31 71 5656142 Email: duncan@...
                                          > >
                                          > > To Post a message, send it to: extremeprogramming@...
                                          > >
                                          > > To Unsubscribe, send a blank message to: extremeprogramming-unsubscribe@...
                                          > >
                                          > > Ad-free courtesy of objectmentor.com