
Software test plan

  • Taylan Tokay (Message 1 of 20, Apr 3 11:01 PM)
      Hi,
      Can somebody recommend some resources about building a
      software test plan in Extreme Programming (XP), and how to
      prepare manual test cases and test procedures for XP?

      Thanks,




    • Brian Spears (Message 2 of 20, Apr 4 7:00 AM)
        1) Don't write a plan - write an automated test.
        Minimize documentation - maximize automation.
        You can write clean tests that are better documentation
        anyway.
        2) Avoid manual test cases and procedures whenever
        possible.
        Most things can be automated. Admittedly, our project
        still has a few things we have to test manually - they
        are a pain. We pretty much use old-fashioned manual
        test cases in those rare cases where we cannot
        automate.

      • Brian Marick (Message 3 of 20, Apr 4 7:48 AM)
          Don't forget exploratory testing, which is *not* manual scripted test
          cases.

          As a trivial example, I am writing a rudimentary GUI in Cocoa (the
          Macintosh's UI framework). The tests are heavily automated, using
          Feathers' Humble Dialog Box, aka model-view-presenter. Nevertheless,
          it's relatively easy for me to find bugs simply because I don't know
          Cocoa or GUI programming at all well. For example, I have a unit test
          that looks like this:

          public void testSuccessfulCaseAddition() throws Exception {
              // Have a mock "inpatient view" send the three messages
              // to the presenter that should cause it to add a new case
              // record.

              // Check that the inpatient presenter updates itself....
              assertEquals(...); ...

              // Check that the inpatient view has received
              // the appropriate messages and data.
              assertTrue(inpatientView().aRowWasHighlighted());      // <==
              assertEquals(0, inpatientView().getHighlightedRow());  // <==
              ...

              // And do the same for the patient detail view...
          }

          This unit test passes. My mock view confirms that the presenter tells
          its table view to highlight the just-added case. The actual view is
          very simple, as a humble dialog box should be:

          public void highlightRow(int row) {
              patientTable.selectRowIndexes(new NSIndexSet(row), false);
          }
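
          For readers who haven't seen the Humble Dialog Box pattern, here is a
          minimal, self-contained sketch of the wiring the two fragments above
          imply. All names are hypothetical, loosely following the test above;
          this is not Marick's actual code.

          // The interface is everything the presenter knows about the GUI.
          interface InpatientView {
              void highlightRow(int row);
          }

          // A hand-rolled mock that records what the presenter asked it to do.
          class MockInpatientView implements InpatientView {
              private boolean rowWasHighlighted = false;
              private int highlightedRow = -1;

              public void highlightRow(int row) {
                  rowWasHighlighted = true;
                  highlightedRow = row;
              }

              public boolean aRowWasHighlighted() { return rowWasHighlighted; }
              public int getHighlightedRow() { return highlightedRow; }
          }

          // The presenter carries the logic worth testing; the real view stays
          // "humble" - one-line delegations to the GUI toolkit, like
          // highlightRow() above.
          class InpatientPresenter {
              private final InpatientView view;
              private int caseCount = 0;

              InpatientPresenter(InpatientView view) { this.view = view; }

              public void addCase(String caseRecord) {
                  // ... store the record somewhere ...
                  view.highlightRow(caseCount);  // highlight the just-added case
                  caseCount++;
              }
          }

          With this wiring, the unit test above can pass by asserting against
          what the mock recorded, without any real GUI ever being exercised.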

          However, there is a case in which highlighting doesn't work. Now, that
          particular bug is bloody obvious, but it's a symptom that I don't
          understand what's going on. (As is, probably, my uncertainty about
          whether I should be talking about "highlighting" or "selecting".) There
          are other bugs that are less obvious (but harder to explain here), and
          I'm sure there are bugs that I haven't found.

          Those bugs escape the automated tests for two reasons. Some escape
          because my testing strategy, which avoids the last inch between the
          code and the user, can't find them. (Phlip's might.) The above bug is
          an example. But other bugs - more profoundly - escape because I didn't
          know what to test for programmatically. I learn what to test for by
          finding bugs exploratorily.

          A large class of bugs in shipping software are called "faults of
          omission". They are bugs that are fixed by adding code that handles
          special cases (or by more cleverly generalizing to make the cases
          un-special). Those bugs aren't prevented by writing automated tests up
          front, because you don't anticipate the special case. They're not
          prevented by automated tests written after the fact - again, because
          you don't anticipate the special case. And scripted manual testing is
          brain-deadening: it makes you prone to miss everything except the bugs
          that the script tells you to search for.

          But such bugs can be discovered by skilled exploratory testing, maybe
          even unskilled.

          Note that faults of omission are characteristic of a domain. In my
          example, they're characteristic of a technical domain - how a
          featureful GUI works. But they can also be characteristic of a business
          domain. Ideally, the onsite business expert should be able to express
          the special cases. In practice, experts are notoriously bad at
          explaining why they act and perceive in an expert way. (See Malcolm
          Gladwell's _Blink_ for just the latest exposition of this fact.)
          Experts need to be tricked into expressing themselves. Exploratory
          testing is one such trick.

          ---

          For more about my little Cocoa program, see the "Design-Driven
          Test-Driven Design" series on my blog, latest installment here:
          <http://www.testing.com/cgi-bin/blog/2005/03/30#presenter3>.

          For more about faults of omission, see
          <http://www.testing.com/writings/omissions.html>




          -----
          Brian Marick, independent consultant
          Mostly on agile methods with a testing slant
          www.exampler.com, www.testing.com/cgi-bin/blog
          Book in progress: www.exampler.com/book
        • lisa.crispin@att.net (Message 4 of 20, Apr 4 8:24 AM)
            Hi,
            Do you have a special need to write a test plan - for example, does your client require it as a deliverable?  If so, try to make it as useful to yourself as you can, and don't put more time into it than you have to.  If you aren't required to write one, don't.
             
            Based on my experience as a tester on XP teams, I advise that you try to write test cases in some executable format.  Fit and FitNesse are tools that make this easy (see fit.c2.org and www.fitnesse.org), but there are lots of other tools where this is possible.  Writing test cases in a spreadsheet format will usually make it easy to transfer them to some executable format for use with an automated test tool.
             
            The other advantage of a tool like FitNesse is that you can write additional narrative inline as needed to document the tests (since it uses a wiki).
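
            To make "executable format" concrete, here is a minimal sketch of a Fit column fixture, one common style of table-driven test.  The discount domain, the table values, and the business rule are invented for illustration; only the fit.ColumnFixture convention (header cells bind to public fields; headers ending in "()" call a method and check its result) is Fit's own.  The table, as it might appear on a wiki page:

            |DiscountFixture      |
            |orderTotal|discount()|
            |100.00    |0.00      |
            |1000.00   |50.00     |

            And the fixture class behind it:

            import fit.ColumnFixture;

            // For each data row, Fit sets orderTotal from the first cell,
            // calls discount(), and marks the second cell green or red
            // depending on whether the returned value matches.
            public class DiscountFixture extends ColumnFixture {
                public double orderTotal;     // input column, bound by header name

                public double discount() {    // checked column: header "discount()"
                    // Stand-in business rule: 5% off orders of 1000.00 or more.
                    return orderTotal >= 1000.00 ? orderTotal * 0.05 : 0.00;
                }
            }

            Because the table is plain text (a wiki page, in FitNesse), non-programmers can add rows without touching the fixture code.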
             
            As you know, XP iterations move very fast.  If you depend too much on manual testing, you'll be in a world of hurt by the 3rd or 4th iteration.  I like to make it a rule that all tests should be automated.  If it's a rule, your team will try to do it.  If not, it's easy to say "we're busy and we'll do that later", and you'll pay dearly for this.  Without any automated regression tests, you will never have time for exploratory testing.
             
            It is also a rule of XP that test automation is the responsibility of the whole team.  If you're the tester, make sure the programmers also assist in test automation, even with non-unit level tests.  If you have a test team that is separate from the programming team, make sure you have programmers available to assist with automation tasks. 
            -- Lisa
             
             
            --
            Lisa Crispin
            Co-author,
            Testing Extreme Programming
            http://lisa.crispin.home.att.net

          • Rex Madden (Message 5 of 20, Apr 4 9:26 AM)
              You can write comments inline using Fit as well.  The wiki simply makes it easier to edit for people who don't have access to the source (among other advantages and disadvantages of using a wiki).

              On Apr 4, 2005 11:24 AM, lisa.crispin@... wrote:

              > The other advantage of a tool like FitNesse is that you can write
              > additional narrative inline as needed to document the tests (since
              > it uses a wiki).

            • Phlip (Message 6 of 20, Apr 4 10:12 AM)
                Taylan Tokay wrote:

                > Can somebody recommend some resources about building a
                > software test plan in Extreme Programming (XP), and how to
                > prepare manual test cases and test procedures for XP?

                Each week, the Onsite Customer presents a new list of features to
                implement that week.

                The customer writes them on cards, to make them portable,
                object-oriented, etc. The developers take those.

                You work with the customer to convert each feature request into
                technical specifications, and you write them as failing test cases, in
                a rig like this:

                http://www.c2.com/cgi/wiki?MiniRubyWiki

                You can substitute Excel spreadsheets, raw XML, etc. for the test data
                tables. The above shows YAML (in the big lower edit field), which is
                relatively easy for civilians to edit.

                Mark your test case Disabled if it doesn't pass. (The above doesn't
                show this feature - it would be just a checkbox in each test case.)

                Annotate your test resources. My wiki shows YAML #comments on the rows.
                Also annotate them using the wiki features around the test cases. This
                forms the "documentation" that paperwork-oriented process advocates
                claim XP lacks.
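
                As a sketch of the kind of civilian-editable YAML test-data table
                Phlip describes (the story, fields, and values are invented for
                illustration; the #comments and the enabled/disabled flag follow
                his description):

                # Test data for a hypothetical "convert currency" story.
                # Rows the Onsite Customer can edit directly.
                enabled: true        # flip to false while the feature is unfinished
                conversions:
                  - from: USD        # the plain case
                    to: EUR
                    amount: 100.00
                    expected: 77.50  # rate pinned by a test double, not a live feed
                  - from: USD        # identity conversion - a boundary worth keeping
                    to: USD
                    amount: 100.00
                    expected: 100.00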

                The failing cases are your test plan for this week.

                Programmers will write unit tests for their features (below the level
                of this test-runner clutter). When they finish the feature, the
                acceptance test should pass. Mark it enabled. This adds it to the
                batch of tests that all code must pass.

                Testers help programmers put hard things under test, and testers
                maintain the build chain and the various scripts that test. Developers
                run all relevant unit tests after <10 edits, and they run all the
                tests before integrating.

                The test plan for a week includes the cumulative test cases of all
                prior weeks. That's one of the (many) reasons XP sorts features by
                business priority. The most important features will be the most-tested
                ones, for the remaining duration of the project.

                --
                Phlip
              • Phlip (Message 7 of 20, Apr 4 11:36 AM)
                  Rex Madden wrote:

                  > You can write comments inline using Fit as well. The wiki simply makes it
                  > easier to edit for people who don't have access to the source (among other
                  > advantages and disadvantages of using a wiki).

                  If your product is for farmers, you need an Onsite Customer who is
                  better at farming than other tasks, such as computing. Getting this
                  individual authoring tests is very important, so simplify their GUI by
                  any means necessary.

                  --
                  Phlip
                • Michael Bolton (Message 8 of 20, Apr 4 9:03 PM)
                    What is a test plan?
                     
                    A test plan is a set of risks, answered by a set of test ideas--that set is also known as a test strategy.  When you prepare a strategy, you're identifying the risks that exist and tests that, when performed, will tell you whether there's a problem or not.  That will require that you identify the aspects of the product that you intend to address (coverage) and the principles or mechanisms that will allow you to recognize a problem (oracles).
                     
                    Test strategy (the ideas), combined with test logistics (who's going to act on them, and what they'll need to do that) form your test plan.
                     
                    I'm not sure what resources you believe that you need to do this, but I'll tell you this much:  if you acted only on the answers that you've received so far in this thread, I wouldn't want to touch your product.  Examples:
                     
                    > 1) Don't write a plan - write an automated test.
                    > Minimize documentation - maximize automation.
                    > You can write clean tests that are better documentation anyway.
                     
                    This answer presumes that every test can or should be automated.  Suppose that you were designing Google Maps: could you provide automation that would allow you to recognize all of the problems that it presented to you?  Could you use automation to recognize that Google Maps don't have a legend on them?  Could you use automation to recognize that a map search for a restaurant, "Sera in Mt. Laurel New Jersey", produces complete nonsense from most human perspectives (though maybe not from the perspective of a Google Maps algorithm)?
                     
                    >2) Avoid manual test cases and procedures whenever possible.
                     
                    Better, I think, to suggest, "use manual tests for conscious observation, automated tests for risks for which you can identify an automated oracle".  But this doesn't really address the planning issue.
                     
                    >Based on my experience as a tester on XP teams, I advise that you try to write test cases in some executable format. 
                     
                    This will automatically bias you towards risks that can be identified by automation.  Assuming that your programmers are reasonably competent, and that they're writing tests as part of a test-driven design effort, you won't find many bugs.  That's not to say that there won't BE many bugs--just that you won't find them, because you won't be looking for them.
                     
                    >You work with the customer to convert each feature request into technical specifications, and you write them as failing test cases.
                     
                    If both your developers and your customers are expert critical thinkers, this approach has some possibility of success.  Alas, my observation is that most customers and most developers (and most testers, I might add) are not terribly good critical thinkers.  So, for each request, rather than writing it as a failing test case, identify things that could go wrong and the potential consequence of each.  Be expansive, because the things that could go wrong are legion.  Frame those things in the form of "what if..." questions.  "The product shall obtain a starting point and a destination from the user, and shall render a set of directions and a map showing the shortest path by distance from one to the other."  How many things could go wrong in that simple little requirement?  (Hint: don't stop when you get to thirty.)
                     
                    >The failing cases are your test plan for this week.
                     
                    This will limit your thinking to the tests that are failing.  It's true that those probably represent problems, but if you limit yourself to those, you'll find yourself without much to do while the developers fix the problems; moreover, you'll ignore other risks that have not yet been found by tests.
                     
                    >You can write comments inline using FIT as well.  The wiki simply makes it easier to edit for people that don't have access to the source (among other advantages and disadvantages of using a wiki).
                     
                    I would contend that this would inform development or requirements, rather than testing.
                     
                    > If your product is for farmers, you need an Onsite Customer who is better at farming than other tasks, such as computing. Getting this individual authoring tests is very important, so simplify their GUI by any means necessary.
                     
                    As a professional tester, I found this reply very frustrating - either uncomprehending or dismissive (or both) of testing, testers, and tester skill.  Dumb down the program or some interface to it so that an unskilled person can write tests for it?  By the logic of this suggestion, why not get the farmer to write the code as well?  Why not make the program so simple that farmers can write it?  For that matter, why not get the developers to do the farming?  Why not simplify tractors and feeding schedules so that developers can do farming?  The reasonable answer is that both development and farming require certain sets of skills, skills that do not map onto one another.  The implication remains that testing can be left to the farmers.  This bespeaks naivete about the role of testing and testers.
                     
                    Well, testing can be left to farmers, and it often is--and I might add that testing often gets left to testers with ditch-digger skills.  But that's not to say that it should be so.
                     
                    Here's one way to develop a test plan in any environment, XP or not:
                     
                    1) Learn about the product and the business domain in which it is intended to operate.
                     
                    2) Consider the product from a variety of perspectives, not just the functional.  Consider the structure, the functions, the data, the platforms, and the operations of the product; these are the product elements.  Consider the -ilities: capability, reliability, usability, scalability, performance, installability, compatibility, supportability, testability, maintainability, portability, localizability.  Call these quality criteria.  The intersection of the product elements and the quality criteria produces coverage requirements.
                     
                    3) Identify a set of things that could go wrong with the product or the way in which people use it.  (Be expansive; consider disfavoured or malicious users.)  Call these risks.
                     
                    4) For each item above, identify one or more tests, manual or automated, that would give you some degree of confidence that the risk is non-existent or as low as you can live with.  As part of this process, identify principles or mechanisms that would allow you to recognize a problem if there were one.  Think very critically; use very broad notions of the things that could go wrong.  Brainstorm with others.  Invent a very improbable problem and call it in to an imaginary tech support department.  Ask tech-support-style questions--those will inform lots of tests.  Ask tech support people for help.  Create a table with risks in one column, and one or more tests in the cell next to each risk (a small example appears after this list).
                     
                    5) Call the previous steps your strategy.
                     
                    6) Identify the people and resources that you'll need to fulfill your strategy.  Call these logistical requirements.
                     
                    7) Reconcile the people and resources that you'd like with the people and resources that you can get.  Those are your test logistics.  The intersection of your test strategy and your test logistics is your test plan.
                     
                    8) Perform some part of your plan (that is, use the program, observe it critically, and run tests), and feel free to explore as you do so.  You will immediately note that some of your risks have been addressed already, or are of lower significance than you thought.  You'll also recognize new risks, new oracles, and new ideas about coverage.
                     
                    9) If the plan is for someone else, collaborate with them on how you'll present it to them.  Keep them happy.  If the plan is for you, make it usable to yourself, but expend no unnecessary effort on making it pretty; make it useful enough to track your strategy and logistics.
                     
                    10) Based on the information you've found in prior steps, revise your plan by revisiting this list, starting from (1) above.  Repeat the process until the product ships.
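
                    A hypothetical fragment of the risk table from step (4), using the map-directions requirement above (risks and tests alike are invented for illustration):

                    Risk                                        Tests
                    ----                                        -----
                    Start and destination are the same address  Automated: assert an empty route, not an error or a loop
                    Address is ambiguous ("Springfield")        Manual: confirm the user is asked to disambiguate, not handed a guess
                    Shortest path crosses a closed road         Exploratory: sample areas with known construction; compare with local knowledge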
                     
                    ---Michael B.
                     

                  • Janet Gregory (Message 9 of 20, Apr 4 9:14 PM)
                      I write a test plan, but it is, as Michael pointed out, a strategy for how I am going to test.  I identify risks so that others are aware of some of the issues.  I may write a paragraph about the test environment and how we are going to handle releases to the test team.  I find it helps me to think about the big picture and consider other groups and outside influences.  When I am working on an agile project, this plan may be all of 3 pages.
                       
                      Automated tests, etc. replace manual test scripts, and I never considered those part of the test plan. I think both are necessary but provide different functions.
                       
                      Janet Gregory

                    • Jared Richardson (Message 10 of 20, Apr 4 9:26 PM)
                        "The plan is nothing; the planning is everything."

                        Dwight Eisenhower


                      • Keith Ray (Message 11 of 20, Apr 5 7:27 AM)
                          On Apr 4, 2005 9:03 PM, Michael Bolton <mb@...> wrote:
                          > This answer presumes that every test can or should be automated. Suppose
                          > that you designing Google Maps: could you provide automation that would
                          > allow you to recognize all of the problems that it presented you?

                          My product (and many others) is NOT "Google Maps", and much could be
                          tested without going through the UI.

                          > Could you
                          > use automation to recognize that Google Maps don't have a legend on them?

                          Most likely yes.

                          > Could you use automation to recognize that a map search for a restaurant,
                          > "Sera in Mt. Laurel New Jersey", produces complete nonsense from most human
                          > perspectives (though maybe not from the perspective of a Google Maps algorithm)?

                          Maybe.

                          > This will automatically bias you towards risks that can be identified by
                          > automation. Assuming that your programmers are reasonably competent, and
                          > that they're writing tests as part of a test-driven design effort, you won't
                          > find many bugs.

                          Incorrect assumption for "first-time XP" projects like my current one,
                          where most of the programmers are not consistently doing TDD or unit
                          tests at all, and have done extensive refactoring without the aid of
                          automated tests, and have lots of legacy code.

                          > If your product is for farmers, you need an Onsite Customer who is better at
                          > farming than other tasks, such as computing. Getting this individual
                          > authoring tests is very important, so simplify their GUI by any means
                          > necessary.
                          >
                          > As a professional tester, I found this reply to be very frustrating, either
                          > uncomprehending or dismissive (or both) of testing, testers, and tester
                          > skill. Dumb-down the program or some interface to it so an unskilled person
                          > can write tests for it?

                          You're totally not "getting" this. This is NOT about dumbing down
                          anything. This is about creating an alternative _textual_ testing
                          interface that uses the domain language understood by both the
                          programmers/testers and the domain expert (the farmer if you're
                          writing software for farmers). This textual interface is easier to
                          write automated tests against than a GUI would be.
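
                          A minimal sketch of what such a domain-language testing interface
                          might look like, with invented farm-domain names (an illustration,
                          not anyone's actual project):

                          // Hypothetical textual testing interface in a farm domain.
                          // Nothing here touches a GUI; the test reads in the farmer's
                          // own vocabulary. Run with: java -ea FeedingScheduleTest
                          class FeedingSchedule {
                              private final String ration;
                              private final int feedingsPerDay;

                              FeedingSchedule(String ration, int feedingsPerDay) {
                                  this.ration = ration;
                                  this.feedingsPerDay = feedingsPerDay;
                              }

                              String rationName() { return ration; }
                              int feedingsPerDay() { return feedingsPerDay; }
                          }

                          class Farm {
                              // Production rule under test: milking herds get a richer ration.
                              FeedingSchedule planFeedingFor(String herd) {
                                  return herd.equals("milking")
                                          ? new FeedingSchedule("high-energy ration", 2)
                                          : new FeedingSchedule("maintenance ration", 1);
                              }
                          }

                          public class FeedingScheduleTest {
                              public static void main(String[] args) {
                                  FeedingSchedule s = new Farm().planFeedingFor("milking");
                                  // Assertions the domain expert could read aloud:
                                  assert s.rationName().equals("high-energy ration");
                                  assert s.feedingsPerDay() == 2;
                                  System.out.println("milking-herd feeding schedule: ok");
                              }
                          }

                          The assertions read in the domain's vocabulary, so the expert can
                          review or author them, while the programmers keep the plumbing.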

                          > 1) Learn about the product and the business domain in which it is intended
                          > to operate.

                          This includes learning the domain from those farmers you were just
                          disparaging earlier.

                          --

                          C. Keith Ray
                          <http://homepage.mac.com/keithray/blog/index.html>
                          <http://homepage.mac.com/keithray/xpminifaq.html>
                          <http://homepage.mac.com/keithray/resume2.html>
                        • Lisa Crispin (Message 12 of 20, Apr 5 8:02 AM)
                            --- In agile-testing@yahoogroups.com, "Michael Bolton" <mb@m...> wrote:

                            > A test plan is a set of risks, answered by a set of test ideas--that
                            > set is also known as a test strategy.

                            Test cases should certainly be written to focus on the highest
                            risks. In agile, stories are narrow in focus, and I find this makes
                            the risk assessment easier.

                            I don't have a problem with a short test plan such as Janet
                            described, but it seems a little grand and time-consuming for a
                            2-week iteration.

                            > This answer presumes that every test can or should be automated.

                            Maybe I'm the only tester who had trouble getting help from her team
                            in getting tests automated. It has been a struggle with every team
                            I've been on. With each team, it took months before we had enough
                            regression tests automated and had mastered appropriate test tools
                            to feel like we had time to do a good enough job of testing,
                            including exploratory testing. I feel like we accomplished this by
                            presuming that every test can and should be automated. Of course,
                            we did not automate where we felt something else was more
                            appropriate.

                            > >Based on my experience as a tester on XP teams, I advise that you try to
                            > >write test cases in some executable format.
                            >
                            > This will automatically bias you towards risks that can be identified by
                            > automation.

                            Maybe so, but I see so many teams where they haven't got any
                            automation at all; they struggle each iteration to do some manual
                            testing, and the technical debt just keeps going up.  I choose bias
                            towards automation over insufficient automation.

                            Again, maybe I've just been on bad teams and only talked to bad
                            teams, but I don't think so. I do feel this is a pretty universal
                            struggle, and maybe people who mastered automation a long time ago
                            have forgotten what it's like.

                            -- Lisa
                          • Brian Marick
                            Message 13 of 20 , Apr 5 8:58 AM
                              On Apr 4, 2005, at 11:03 PM, Michael Bolton wrote:

                              > >The failing cases are your test plan for this week.
                              >  
                              > This will limit you to thinking about the tests that are
                              > failing. It's true that those probably represent problems, but if you
                              > limit yourself to those, you'll find yourself without much to do while
                              > the developers fix the problems; moreover, you'll ignore other risks
                              > that have not yet been found by tests.

                              I believe the original person meant that the tests are failing because
                              the code hasn't been written yet (e.g., test-driven design extended to
                              business-facing tests). I think that puts a different slant on things.
                              Rather than trying to find problems in what has been implemented, the
                              tests / tester / person in question is trying to write yes/no
                              statements that will provoke the programmers to solve the next most
                              important problems in a satisfactory way.
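
                              (To make that concrete with an invented example: the test below is
                              written before any of the production code exists -- the Ward class
                              is a deliberately empty skeleton -- so the test fails today, and
                              that red bar is the plan.)

                                  import java.util.ArrayList;
                                  import java.util.List;

                                  import junit.framework.TestCase;

                                  // A business-facing yes/no statement, written test-first. All the
                                  // names are invented; the point is the shape, not the domain.
                                  public class DischargeTest extends TestCase {
                                      public void testDischargedPatientLeavesWardList() {
                                          Ward ward = new Ward();
                                          ward.admit("patient-1");
                                          ward.discharge("patient-1");
                                          assertFalse(ward.currentPatients().contains("patient-1"));
                                      }
                                  }

                                  // Skeleton only: discharge() is deliberately unimplemented, so
                                  // the test above stays red until the programmers give it real
                                  // behavior.
                                  class Ward {
                                      private final List<String> patients = new ArrayList<String>();

                                      public void admit(String id) {
                                          patients.add(id);
                                      }

                                      public void discharge(String id) {
                                          throw new UnsupportedOperationException("not built yet");
                                      }

                                      public List<String> currentPatients() {
                                          return patients;
                                      }
                                  }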

                              A lot of testing skills, techniques, and modes of thought are still
                              valuable. But there's a lot else, besides. For example, I used "provoke
                              the programmers" deliberately. Those yes/no questions are not
                              specifications or requirements that would produce satisfactory
                              solutions no matter who implemented them. They have to be tailored to
                              the specific team with its specific history, traditions, and tacit
                              knowledge. I believe that's a tricky new skill.

                              I think that the habit of thinking of Agile testing as after the fact
                              product critique, rather than a way to focus, direct, and improve the
                              process of laying down lines of code, is profoundly misleading. (Which
                              is why I wish we didn't have the same word for two different things.)

                              For example, exploratory testing (which I advocated in an earlier
                              response) can be seen as a way to find new "next most important things"
                              to do, and train the team so that fewer or more automatable tests
                              provoke the programmers correctly. In my experience, it's difficult for
                              people to shake loose of only looking for bugs, and that affects how
                              they do exploratory testing.

                              (Also, as I have noted frequently before, I have no doubt
                              after-the-fact product critique is still needed, no matter how good the
                              team gets at test-first. Some types of testing - like security testing
                              - are much less suited to TDD.)


                              > > If your product is for farmers, you need an Onsite Customer who is
                              > > better at farming than other tasks, such as computing. Getting this
                              > > individual authoring tests is very important, so simplify their GUI
                              > > by any means necessary.
                              >
                              > As a professional tester, I found this reply to be very frustrating,
                              > either uncomprehending or dismissive (or both) of testing, testers,
                              > and tester skill.  Dumb-down the program or some interface to it so an
                              > unskilled person can write tests for it?  By the logic of this
                              > suggestion, why not get the farmer to write the code as well?  Why not
                              > make the program so simple that farmers can write it?  For that
                              > matter, why not get the developers to do the farming?  Why not
                              > simplify tractors and feeding schedules so that developers can do
                              > farming?  The reasonable answer is that both development and farming
                              > require certain sets of skills, skills that do not map on to one
                              > another.  The implication remains that testing can be left to the
                              > farmers.  This bespeaks naivete about the role of testing and testers.

                              Something flashed across my mind when I read this, which I will try to
                              say in a way that does not offend anyone. (And if I do, remember that
                              at least I tried.)

                              When I hear a statement like the quoted one, I don't have the strong
                              reaction that Michael has. Why not? Part of it is that I read such
                              statements in the context of a team that's attempting to have
                              collective skills. My image is of a bunch of people being tossed into a
                              simmering crockpot, each one having the "flavor" of certain needed
                              skills. Once they're all in the pot, the skills/flavor will diffuse
                              appropriately amongst all the chunks of meat and potatoes and whatever
                              in the pot, driven by the "heat" of colocation, a continuous stream of
                              changes, the pressure to deliver frequently, the extreme visibility
                              that I call "exhibitionist panopticism" when I'm in one of my
                              pseudo-intellectual moods, etc.

                              So I read the original quote as saying that someone who knows farming
                              well should be tossed into the pot. Of course you would want them to be
                              most effective - someone who's floundering, unable to put their skills
                              and knowledge to use, isn't going to spread skills well. So you'd
                              accommodate them by allowing them to work in ways comfortable to them,
                              keeping in mind that what's comfortable will change.

                              Many testers separate out their skills and bundle them in the
                              individual rather than the team. I sometimes get the feeling of the
                              romantic lone hero - maybe even emulation of the canonical hero as in
                              Campbell's _The Hero with a Thousand Faces_. *Some* people who do that
                              sort of thing are defending themselves against contexts that devalue
                              them.

                              I recall once when my wife came home fuming (for her). It turned out
                              she, a veterinarian, had been working on a human physician's llama.
                              That physician complimented her by saying that she was good enough to
                              be a real doctor. Oddly, she didn't feel complimented. Not only were
                              veterinarians as a class "not real", but she - first or second in her
                              vet school class at a time when it was *harder* to get into vet school
                              than medical school, not only board-certified but a board examiner, a
                              professor, working at a referral hospital that gets the hard cases -
                              was supposed to feel honored when compared to any old pill-pusher
                              seeing patients with the flu every ten minutes at some HMO.

                              And that kind of thing happens *all the time* to testers. I'm well
                              aware that a good part of my success is because I can talk to
                              programmers in their own language. Because I can code, they're more
                              likely to credit what I say about testing - which is not fair.

                              So it's not surprising when testers mark the boundaries of their turf
                              and defend it jealously. It's both emotionally comforting and
                              practically necessary - when pushed into an unskilled role, you gotta
                              push back in order to do an honorable job.

                              However, that's not - I believe - the right way forward for testing in
                              Agile projects. What's needed more is (1) a willingness by testers to
                              identify with their team as much as with their profession, not to stand
                              apart as judge, (2) a placid assurance, propagandized by leaders of
                              programmer and manager opinion, that *of course* testers have skills
                              that the team needs, and (3) a higher proportion of skilled testers
                              ready to skip nimbly among coaching, process support, and
                              product-judging activities.

                              -----
                              Brian Marick, independent consultant
                              Mostly on agile methods with a testing slant
                              www.exampler.com, www.testing.com/cgi-bin/blog
                              Book in progress: www.exampler.com/book
                            • Jamie Nettles
                              Message 14 of 20 , Apr 5 10:17 AM
                                Automated tests are usually to be preferred over manual tests, but in some cases the costs of automating are not worth it.
                                 
                                But if you say that, then you've opened the door to being lazy.
                                 
                                I've had a developer tell me, there's no way to automate that.  Then
                                another developer came along, wrote a unit test, and found several
                                bugs in the other developer's code.  The first developer is very
                                clever, even brilliant.  I have to think he just didn't want to write
                                the unit tests.
                                 
                                Automating GUI testing seems to be a challenge.  My testers like to test the first release manually, then write scripts for the second release, using a tool that captures button clicks, etc.
                                 





                              • Ron Jeffries
                                Message 15 of 20 , Apr 5 3:48 PM
                                  Keith,

                                  I have had the privilege of meeting Michael, and despite his
                                  questionable taste in headgear, his head itself is screwed on more
                                  straightly than I might have thought from his emails. (He and I have
                                  had some exciting "discussions" here on this list. I trust that we
                                  understand each other better now.)

                                  On Tuesday, April 5, 2005, at 9:27:56 AM, Keith Ray wrote:

                                  > On Apr 4, 2005 9:03 PM, Michael Bolton <mb@...> wrote:
                                  >> This answer presumes that every test can or should be automated. Suppose
                                  >> that you were designing Google Maps: could you provide automation that would
                                  >> allow you to recognize all of the problems that it presented you?

                                  > My product (and many others) are NOT "google maps", and much could be
                                  > tested without going through the UI.

                                  I believe that both ideas are correct. When I read Michael's note, I
                                  took it a bit differently ...

                                  >> Could you
                                  >> use automation to recognize that Google Maps don't have a legend on them?

                                  > Most likely yes.

                                  Namely that while we could "readily" write a test to check for a
                                  legend, we might never think to do it until someone (a tester for
                                  example) actually looked at the map and said "Hey, does this thing
                                  need a legend?"

                                  Some people might assert that testers are particularly good at
                                  coming up with this kind of unexpected observation.

                                  Now, the discipline I teach would ask that we write a test for the
                                  legend's presence before fixing the bug. In practice, I very likely
                                  would not, unless I believe that the legend was likely to magically
                                  disappear in future.

                                  (One example of how that could happen would be if there were many
                                  different page formats, or legends could appear at many levels. In
                                  that case, there might be different code patches, each of which
                                  "should" contain a legend creation. If that couldn't be
                                  consolidated, I might build a general test for presence of a legend
                                  on each page type.)
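
                                  (For anyone who wants a picture of that general test, here's a
                                  rough sketch in Java -- the page types and the fetchHtml stub are
                                  invented for the example; a real version would render or fetch
                                  the actual pages:)

                                      import java.util.Arrays;
                                      import java.util.List;

                                      import junit.framework.TestCase;

                                      // Sketch of a "legend must be present on every page type" check.
                                      public class LegendPresenceTest extends TestCase {
                                          private static final List<String> PAGE_TYPES =
                                              Arrays.asList("street-map", "satellite", "hybrid");

                                          public void testEveryPageTypeHasALegend() {
                                              for (String type : PAGE_TYPES) {
                                                  String html = fetchHtml(type);
                                                  assertTrue("no legend on page type: " + type,
                                                             html.indexOf("class=\"legend\"") >= 0);
                                              }
                                          }

                                          // Stub so the sketch stands alone; a real test would ask
                                          // the application to render the given page type.
                                          private String fetchHtml(String pageType) {
                                              return "<div class=\"legend\">...</div>";
                                          }
                                      }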

                                  >> Could you use automation to recognize that a map search for a restaurant,
                                  >> "Sera in Mt. Laurel New Jersey", produces complete nonsense from most human
                                  >> perspectives (though maybe not from one Google Maps programming algorithm).

                                  > Maybe.

                                  Again, I took Michael to be suggesting that only a person can decide
                                  that something is complete nonsense, and that therefore the eyes of
                                  a "tester" could be of value in checking out the product.

                                  Folks can probably see that I'm thinking now along the lines of what
                                  Brian et al call "Exploratory Testing", probing around with
                                  intelligence, looking for stuff that just ain't right.

                                  I would not recommend, as I trust everyone knows, that testers
                                  should manually check all pages, on every release, looking for
                                  legends. I'd suggest that the check, once contemplated, should be
                                  automated if it had value, or never done again, other than by
                                  chance, if it was of low value.

                                  >> This will automatically bias you towards risks that can be identified by
                                  >> automation. Assuming that your programmers are reasonably competent, and
                                  >> that they're writing tests as part of a test-driven design effort, you won't
                                  >> find many bugs.

                                  > Incorrect assumption for "first-time XP" projects like my current one,
                                  > where most of the programmers are not consistently doing TDD or unit
                                  > tests at all, and have done extensive refactoring without the aid of
                                  > automated tests, and have lots of legacy code.

                                  I won't essay a guess at what Michael was getting at here. I suspect
                                  you're talking past each other, but I'm not sure what his point was.

                                  >> If your product is for farmers, you need an Onsite Customer who is better at
                                  >> farming than other tasks, such as computing. Getting this individual
                                  >> authoring tests is very important, so simplify their GUI by any means
                                  >> necessary.
                                  >>
                                  >> As a professional tester, I found this reply to be very frustrating, either
                                  >> uncomprehending or dismissive (or both) of testing, testers, and tester
                                  >> skill. Dumb-down the program or some interface to it so an unskilled person
                                  >> can write tests for it?

                                  > You're totally not "getting" this. This is NOT about dumbing down
                                  > anything. This is about creating an alternative _textual_ testing
                                  > interface that uses the domain language understood by both the
                                  > programmers/testers and the domain expert (the farmer if you're
                                  > writing software for farmers). This textual interface is easier to
                                  > write automated tests against than a GUI would be.

                                  I would prefer to see this put another way, as "you're totally not
                                  'getting' this" seems even to me, the master of insensitivity, not
                                  to be likely to set Michael's mind on a track toward agreement. But
                                  your point is valid -- and again, so is Michael's.

                                  The farmer (in some domain expert sense) really needs to have and
                                  take responsibility for the program's rightness. While it might be
                                  that testers are particularly good at getting into the farmer's
                                  head, I think that (a) they are not uniquely qualified to do this
                                  and (b) no one can completely get in there. Therefore, the more
                                  leverage we can give the farmer in testing and checking the product,
                                  the better.

                                  In passing, many farmers are pretty darned computer savvy these
                                  days. I would not count them out on offering some pretty
                                  sophisticated inputs and recognizing some pretty complex issues.
                                  Farming isn't like "Hee Haw".

                                  >> 1) Learn about the product and the business domain in which it is intended
                                  >> to operate.

                                  > This includes learning the domain from those farmers you were just
                                  > disparaging earlier.

                                  Again, I'd have liked to see this put in a more palatable way, and
                                  again it's very true -- and again, on both sides.

                                  A good tester will bring a unique perspective to the product, and
                                  the sooner we get that perspective, the sooner we will profit. The
                                  sooner that tester gets on top of the domain, from learning from the
                                  farmer as well as other sources, the sooner she'll become of high
                                  value. Talent and skill are always valuable.

                                  I would like to see the talent and skill of testers addressing,
                                  primarily, two key dimensions of quality:

                                  1. Noticing things that no one else notices, through sitting in on
                                  requirements, design, code (although I have not yet found this to be
                                  valuable, except in rare cases), and exploration of the product.

                                  2. Translating requirements into automated tests where appropriate
                                  (and my assumption to that is "usually appropriate") and to general
                                  exploration where automation isn't important.

                                  One more thing. If a tester, or anyone, finds something wrong with
                                  the product after the iteration is over, that IS NOT GOOD NEWS.
                                  Agile software development works by shipping running tested
                                  working software every iteration. Finding a bug breaks that cycle
                                  to a degree, and slows things down. Therefore, since testers and
                                  others WILL find defects after the iteration is over, every such
                                  occurrence should be taken as an occasion to make such defects be
                                  found before the iteration is over next time.

                                  At the shop of a client that Brian and I share, they have posted a
                                  quote from him, that goes something like this: "No bug should be
                                  hard to find the second time." That's what I'm talkin' about.

                                  Ron Jeffries
                                  www.XProgramming.com
                                  Learn from yesterday, live for today, hope for tomorrow.
                                  The important thing is to not stop questioning. --Albert Einstein
                                • Keith Ray
                                    Message 16 of 20 , Apr 5 4:00 PM
                                    I've met Michael Bolton as well (at PSL), so I hope he will take my
                                    strong language in the spirit in which it was intended.

                                    On Apr 5, 2005 3:48 PM, Ron Jeffries <ronjeffries@...> wrote:
                                    >
                                    > Keith,
                                    >
                                    > I have had the privilege of meeting Michael,
                                    --

                                    C. Keith Ray
                                    <http://homepage.mac.com/keithray/blog/index.html>
                                    <http://homepage.mac.com/keithray/xpminifaq.html>
                                    <http://homepage.mac.com/keithray/resume2.html>
                                  • Michael Bolton
                                      Message 17 of 20 , Apr 6 4:54 AM
                                      Ron: > Now, the discipline I teach would ask that we write a test for the
                                      > legend's presence before fixing the bug. In practice, I very likely
                                      > would not, unless I believe that the legend was likely to magically
                                      > disappear in future.

                                       I'm curious about the apparent disagreement between the discipline you would teach and the discipline that you would practice.  Could you elaborate?
                                       
                                      > (One example of how that could happen would be if there were many
                                      > different page formats, or legends could appear at many levels. In
                                      > that case, there might be different code patches, each of which
                                      > "should" contain a legend creation. If that couldn't be
                                      > consolidated, I might build a general test for presence of a legend
                                      > on each page type.)
                                       
                                      Now here is, in my opinion, a perfectly reasoned decision on whether to automate a test or not; the decision is based on a well-considered risk, and on things that automation can be anticipated to do fairly well.  It's an excellent example, and a perspective that is a fantastic improvement over "automate every test".  I hereby request permission to steal... uh, recycle... this example.

                                      > I would not recommend, as I trust everyone knows, that testers
                                      > should manually check all pages, on every release, looking for
                                      > legends. I'd suggest that the check, once contemplated, should be
                                      > automated if it had value, or never done again, other than by
                                      > chance, if it was of low value.
                                       
                                      In my observation, it's becoming easier to recognize that this is your perspective.  For me it was not ever thus.
                                        
                                      >> This will automatically bias you towards risks that can be identified by
                                      >> automation.  Assuming that your programmers are reasonably competent, and
                                      >> that they're writing tests as part of a test-driven design effort, you won't
                                      >> find many bugs.

                                      > Incorrect assumption for "first-time XP" projects like my current one,
                                      > where most of the programmers are not consistently doing TDD or unit
                                      > tests at all, and have done extensive refactoring without the aid of
                                      > automated tests, and have lots of legacy code.

                                      I won't essay a guess at what Michael was getting at here. I suspect
                                      you're talking past each other, but I'm not sure what his point was. 
                                       
                                      The OP said "Based on my experience as a tester on XP teams, I advise
                                      that you try to write test cases in some executable format."  If you
                                      write tests in some executable format, your tests will be limited in
                                      terms of their ability to observe, evaluate, conjecture, refute,
                                      restrategize, and imagine, because we don't know how to code those
                                      things very well.  Your tests will provide coverage to the extent that
                                      you can automate those things, which (as 35+ years of AI research
                                      suggests) is non-trivial.

                                      As a tester on a healthy, mature XP team, where developers are already
                                      writing lots of automated tests, your ability to find bugs will be
                                      limited further by the fact that developers have found lots of bugs by
                                      the same means.  Keith's experience is apparently that he's on a
                                      new-ish XP team, and thus the developer tests aren't written yet.  I
                                      don't know if the appropriate workaround for this is to get testers to
                                      write the developer tests.  Somehow it doesn't feel right to me--it
                                      seems to be short-circuiting one of the most valuable parts of XP,
                                      which is to get developers thinking to some degree about risk.
                                      However, testers are there to provide service to the organization.  If
                                      the organization determines that this is the best use of the tester's
                                      time, then rock on.

                                      > The farmer (in some domain expert sense) really needs to have and
                                      > take responsibility for the program's rightness. While it might be
                                      > that testers are particularly good at getting into the farmer's
                                      > head, I think that (a) they are not uniquely qualified to do this
                                      > and (b) no one can completely get in there. Therefore, the more
                                      > leverage we can give the farmer in testing and checking the product,
                                      > the better.

                                      Testers are by no means uniquely qualified to get into the farmer's
                                      head.  However, I think it would be fair to say that we specialize in
                                      finding and bridging the gaps between the customer and developer and
                                      thinking critically about the product and its relationships with its
                                      community.  I acknowledge the significance of making the program
                                      testable; it's the emphasis on doing this /so that testing can be done
                                      by people unskilled at testing/ at which I bridle.  It's as though the
                                      "automation" part of "test automation" were more important than the
                                      "test" part.  ANYTHING done by an unskilled person (assuming that
                                      person's skills are not intended to emerge within the team's context)
                                      on an Agile Team is a waste of time, isn't it?

                                      > A good tester will bring a unique perspective to the product, and
                                      > the sooner we get that perspective, the sooner we will profit. The
                                      > sooner that tester gets on top of the domain, from learning from the
                                      > farmer as well as other sources, the sooner she'll become of high
                                      > value. Talent and skill are always valuable.

                                      > I would like to see the talent and skill of testers addressing,
                                      > primarily, two key dimensions of quality:
                                      >
                                      > 1. Noticing things that no one else notices, through sitting in on
                                      > requirements, design, code (although I have not yet found this to be
                                      > valuable, except in rare cases), and exploration of the product.
                                      >
                                      > 2. Translating requirements into automated tests where appropriate
                                      > (and my assumption to that is "usually appropriate") and to general
                                      > exploration where automation isn't important.
                                       
                                      So would I.
                                       
                                      ---Michael B.
                                    • Michael Bolton
                                        Message 18 of 20 , Apr 6 4:54 AM
                                        Keith>I've met Michael Bolton as well (at PSL), so I hope he will take my
                                        strong language in the spirit in which it was intended.
                                         
                                        Actually, you met me at AYE, and again just last month in Santa Clara.  I don't know the spirit in which you intended these remarks.
                                         
                                        >My product (and many others) are NOT "google maps", and much could be tested without going through the UI.
                                         
                                        I don't know how to respond to this, other than to ask Weinberg-style questions:  "Much" compared to what?  What part is left over?  What risks exist inside or outside of the product's attempt to solve them?   So your product is NOT Google Maps; I'll ask the question again:  could you provide automation that would allow you to recognize all of the problems that could exist in your application? Oh--and note that when I referred to Google Maps, I didn't say anything about the UI.
                                         
                                        >> Could you use automation to recognize that Google Maps don't have a legend on them?

                                        > Most likely yes.
                                         
                                        Cool!  That would be a powerful oracle.  Did your automation ever
                                        notice that, when you grab the bottom right-hand corner of a window on
                                        the Mac (through OS 9, and I believe including OS X), the cursor
                                        doesn't change shape to indicate resizing, as it does under Windows?
                                        Maybe that's not a bug; but did your automation ever bring up the
                                        possibility that a competing product has this feature?
                                         
                                        >> Could you use automation to recognize that a map search for a restaurant,
                                        >> "Sera in Mt. Laurel New Jersey", produces complete nonsense from most human
                                        >> perspectives (though maybe not from one Google Maps programming algorithm).

                                        > Maybe.
                                         
                                        Would it be worth it to try to automate a cognitive task like this?  What would you use for your oracles?
                                         
                                        Automation can sometimes be very good at assisting us in determining
                                        whether there's a problem in existing code; what risks in the product
                                        exist outside the realm of existing code?  Ron put it very nicely:
                                        "...while we could "readily" write a test to check for a legend, we
                                        might never think to do it until someone (a tester for example)
                                        actually looked at the map and said "Hey, does this thing need a
                                        legend?"
                                         
                                        Your specialty, a programmer's specialty, is in writing code that
                                        helps customers to solve their problems; I honour that.  It embraces
                                        an enormous skill set, one for which I've had long and continuously
                                        growing appreciation.  My specialty is in identifying risks, finding
                                        oracles that can help us to recognize problems, and figuring out
                                        appropriate ways in which to cover and model the product.  It's my job
                                        to assist you and the rest of the team in doing that.

                                        It's deeply frustrating to me when "agile" "testing" is seen as
                                        nothing more than a rote, routine task that can be performed by a
                                        machine.  This perspective ignores the deeply challenging, important,
                                        intellectual, human parts of testing.  It ignores the fact that some
                                        tests aren't worth running more than once--and that the time to
                                        automate them takes time away from other, more important tests that we
                                        could invent, run, and automate.  It ignores innate and learned
                                        skills, cognitive and reasoning skills.  It ignores entire categories
                                        of risks, alternative ways of modelling the product, and the notion
                                        that quality is subjective, value to some person.  Moreover, it
                                        ignores the Agile Manifesto, which claims to value people over
                                        processes.
                                         
                                        Now: it's quite possible that you are speaking from experience, and
                                        have dealt with programmers that are capable of identifying risks
                                        better than most testers.  That wouldn't be hard; the testing skills
                                        of many people whose job descriptions say "tester" or "Quality
                                        Assurance" are abysmal.  But just as your product is not Google Maps,
                                        I (and lots of other testers on this list) don't think of ourselves as
                                        slow proxies for FitNesse tests.
                                         
                                        > Incorrect assumption for "first-time XP" projects like my current one,
                                        > where most of the programmers are not consistently doing TDD or unit
                                        > tests at all, and have done extensive refactoring without the aid of
                                        > automated tests, and have lots of legacy code.
                                         
                                        I'm confused by this; what testing IS happening here?
                                         
                                        >> 1) Learn about the product and the business domain in which it is intended
                                        >> to operate.

                                        > This includes learning the domain from those farmers you were just
                                        > disparaging earlier.
                                         
                                        Pardon me; when I suggested that the program be made so simple that farmers could write it, I was presuming farmers without twenty years of development skill such as you have.  Specifically, I said, "Why not make the program so simple that farmers can write it?  ... The reasonable answer is that both development and farming require certain sets of skills, skills that do not map on to one another. "  That's not disparaging farmers; on the contrary:  it's recognizing that they have skills that we don't have, and that we have skills that they don't have.  It's neither wise nor cost-efficient to try to bring everyone on a team to the same level of skill; it's far better, I would argue, to try to use everyone's existing skills to the maximum while educating each other on the parts that will be most useful to the development of the product.
                                         
                                        ---Michael B.
                                         
                                      • Ron Jeffries
                                          Message 19 of 20 , Apr 6 7:28 AM
                                          On Wednesday, April 6, 2005, at 6:54:33 AM, Michael Bolton wrote:

                                          > Ron: > Now, the discipline I teach would ask that we write a test for the
                                          > legend's presence before fixing the bug. In practice, I very likely
                                          > would not, unless I believe that the legend was likely to magically
                                          > disappear in future.

                                          > I'm curious about the apparent disagreement between the discipline you
                                          > would teach and the discipline that you would practice. Could you
                                          > elaborate?

                                          Surely. At the beginning of learning a new discipline (e.g. when to
                                          automate tests), the beginner's expectation is that she already
                                          knows what should be tested and what not, what is hard to test and
                                          what not, what is inevitable and what can be changed. She assumes
                                          she is close to optimal on all those dimensions.

                                          So I suggest that for a while she should make a standard practice of
                                          writing automated tests for everything. I expect that over time she
                                          will learn new settings for her "what to test" dials, and that her new
                                          balance will be better. Then I expect that she'll do what any
                                          well-experienced person should do, apply judgment in the light of
                                          experience.

                                          My intention in suggesting the strict discipline is to provide new
                                          and enlightening experience.

                                          >> (One example of how that could happen would be if there were many
                                          >> different page formats, or legends could appear at many levels. In
                                          >> that case, there might be different code patches, each of which
                                          >> "should" contain a legend creation. If that couldn't be
                                          >> consolidated, I might build a general test for presence of a legend
                                          >> on each page type.)

                                          > Now here is, in my opinion, a perfectly reasoned decision on whether to
                                          > automate a test or not; the decision is based on a well-considered risk, and
                                          > on things that automation can be anticipated to do fairly well. It's an
                                          > excellent example, and a perspective that is a fantastic improvement over
                                          > "automate every test". I hereby request permission to steal... uh,
                                          > recycle... this example.

                                          Permission granted, since you would anyway. :)

                                          >> I would not recommend, as I trust everyone knows, that testers
                                          >> should manually check all pages, on every release, looking for
                                          >> legends. I'd suggest that the check, once contemplated, should be
                                          >> automated if it had value, or never done again, other than by
                                          >> chance, if it was of low value.

                                          > In my observation, it's becoming easier to recognize that this is your
                                          > perspective. For me it was not ever thus.

                                          I can only wonder why. I seem so reasonable and articulate to myself.

                                          > The OP said "Based on my experience as a tester on XP teams, I advise that
                                          > you try to write test cases in some executable format." If you write tests
                                          > in some executable format, your tests will be limited in terms of their
                                          > ability to observe, evaluate, conjecture, refute, restrategize, and imagine,
                                          > because we don't know how to code those things very well.

                                          This seems quite true in theory, yet in practice, those of us who
                                          have gone strongly toward automate everything have discovered a
                                          place where we emit 1/10th or even 1/100th of the defects we used to.

                                          When practice and theory diverge, I choose practice.

                                          > Your tests will
                                          > provide coverage to the extent that you can automate those things, which (as
                                          > 35+ years of AI research suggests) is non-trivial.

                                          Ah, I wasn't aware that you had engaged in so much AI research
                                          before becoming a testing guru, and I congratulate you on your great
                                          success as a child prodigy or on the ownership of a painting which
                                          seems to be getting older. :)

                                          My much shorter experience in AI left me with the general impression
                                          that it wasn't good for much, and I wouldn't expect any AI system to
                                          come along soon that would say "shouldn't there be a legend here?"

                                          But most of the defects that get found and fixed in the software
                                          that I encounter do not seem to be that subtle. They're more of the
                                          nature of "the product of 3 and 5 is 15, not 8".

                                          Actually, I think that might be a "deep" result. I suspect that most
                                          software defects are actually simple at base: a statement or two. If
                                          that's true, then it might not require sophisticated techniques to
                                          find most defects; close-in detail testing might serve quite well.

                                          Since close-in detail testing DOES serve quite well, I offer the
                                          above as an explanation of why.
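
                                          (A toy illustration of close-in detail testing -- the LineItem
                                          class and its off-by-an-operator defect are invented for the
                                          example:)

                                              import junit.framework.TestCase;

                                              // The kind of one-statement defect I mean: someone types +
                                              // where * belongs. A close-in detail test nails it at once.
                                              public class LineItemTest extends TestCase {
                                                  static class LineItem {
                                                      private final int quantity;
                                                      private final double unitPrice;

                                                      LineItem(int quantity, double unitPrice) {
                                                          this.quantity = quantity;
                                                          this.unitPrice = unitPrice;
                                                      }

                                                      double extendedPrice() {
                                                          return quantity * unitPrice; // the buggy version says +
                                                      }
                                                  }

                                                  public void testExtendedPriceIsQuantityTimesUnitPrice() {
                                                      LineItem threeAtFive = new LineItem(3, 5.00);
                                                      assertEquals(15.00, threeAtFive.extendedPrice(), 0.001);
                                                  }
                                              }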

                                          Now that's not to say that I don't want smart people on the team. I
                                          do. But I want to use them wisely.

                                          > As a tester on a
                                          > healthy, mature XP team, where developers are already writing lots of
                                          > automated tests, your ability to find bugs will be limited further by the
                                          > fact that developers have found lots of bugs by the same means. Keith's
                                          > experience is apparently that he's on a new-ish XP team, and thus the
                                          > developer tests aren't written yet. I don't know if the appropriate
                                          > workaround for this is to get testers to write the developer tests. Somehow
                                          > it doesn't feel right to me--it seems to be short-circuiting one of the most
                                          > valuable parts of XP, which is to get developers thinking to some degree
                                          > about risk. Howeer, testers are there to provide service to the
                                          > organization. If the organization determines that this is the best use of
                                          > the tester's time, then rock on.

                                          I would not think that testers writing dev tests is in the best
                                          interest of agility. It creates a slower feedback loop, places
                                          discovery and learning in two different minds, and seems likely to
                                          be dominated in most dimensions by the "dev writes own tests"
                                          approach.

                                          What is hinted at in your words above is that in a top Agile team,
                                          the dev cycle (aided by tester-supported automated acceptance tests,
                                          IMO) might remove so many of the conventional defects we see today,
                                          that the testers could focus on issues at a much higher level of
                                          refinement. I think that would be a good thing.

                                          So far, I've not seen a team that included both testers and devs
                                          attain that high level; the cases I see have testers who are still
                                          locked in on test plans, manual tests, and the like. I hope to see
                                          that changing over time.

                                          > Testers are by no means uniquely qualified to get into the farmer's head.
                                          > However, I think it would be fair to say that we specialize in finding and
                                          > bridging the gaps between the customer and developer and thinking critically
                                          > about the product and its relationships with its community.

                                          I think it would be fair to say that YOU do that, and that many of
                                          the other people here do that. You all are, however, at the top of
                                          your field and learning in order to stay ahead. Testers in general
                                          are, in my experience, not all that special.

                                          Let me go on to say that programmers in general, in my experience,
                                          are also not that special. Our mission as leaders, should we choose
                                          to accept it, is to offer advice that normal, average people can
                                          make use of, while at the same time keeping our eye on the
                                          possibilities when nearly everyone can do what, today, only a few
                                          can do.

                                          > I acknowledge
                                          > the significance of making the program testable; it's the emphasis on doing
                                          > this /so that testing can be done by people unskilled at testing/ at which I
                                          > bridle. It's as though the "automation" part of "test automation" were more
                                          > important than the "test" part. ANYTHING done by an unskilled person
                                          > (assuming that person's skills are not intended to emerge within the team's
                                          > context) on an Agile Team is a waste of time, isn't it?

                                          No, in fact it is not. It is quite worthwhile for an engineer to
                                          automate an assembly line so that a less-skilled person can build
                                          widgets by pressing a button. There may still be judgment required
                                          in pressing the button. One day that, too, may be automated, but
                                          until then, there is value in automating what can be automated.

                                          And we are very far away from automating everything that can be.
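
                                          To make the analogy concrete, here is a minimal sketch in the
                                          style of the earlier JUnit examples. Everything in it -- the
                                          WidgetPricer class, the prices, the discount rule -- is invented
                                          for illustration. The "assembly line" is the harness; the
                                          "button press" is adding a row to the table:

                                              import junit.framework.TestCase;

                                              public class WidgetPricingTest extends TestCase {

                                                  // Hypothetical production code under test: prices a
                                                  // widget order, with assumed bulk discounts at 10
                                                  // and 100 units.
                                                  static class WidgetPricer {
                                                      int priceInCentsFor(int quantity) {
                                                          int unitPrice = 500;  // assumed base price
                                                          if (quantity >= 100) unitPrice = 400;
                                                          else if (quantity >= 10) unitPrice = 450;
                                                          return unitPrice * quantity;
                                                      }
                                                  }

                                                  // The "assembly line": each row is
                                                  // { quantity, expected total in cents }. Adding a
                                                  // case means adding a row, not writing code.
                                                  private static final int[][] CASES = {
                                                      { 1, 500 },
                                                      { 10, 4500 },
                                                      { 100, 40000 },
                                                  };

                                                  public void testPricingTable() {
                                                      WidgetPricer pricer = new WidgetPricer();
                                                      for (int[] row : CASES) {
                                                          assertEquals("quantity " + row[0],
                                                                       row[1],
                                                                       pricer.priceInCentsFor(row[0]));
                                                      }
                                                  }
                                              }

                                          The judgment about which rows are worth adding still rests with a
                                          person; only the button-pressing has been automated.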

                                          >> A good tester will bring a unique perspective to the product, and
                                          >> the sooner we get that perspective, the sooner we will profit.
                                          >> The sooner that tester gets on top of the domain, from learning
                                          >> from the farmer as well as other sources, the sooner she'll
                                          >> become of high value. Talent and skill are always valuable.

                                          >> I would like to see the talent and skill of testers addressing,
                                          >> primarily, two key dimensions of quality:

                                          >> 1. Noticing things that no one else notices, through sitting in on
                                          >> requirements, design, code (although I have not yet found this to be
                                          >> valuable, except in rare cases), and exploration of the product.

                                          >> 2. Translating requirements into automated tests where appropriate
                                          >> (and my assumption to that is "usually appropriate") and to general
                                          >> exploration where automation isn't important.

                                          > So would I.

                                          Cool!
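
                                          As a sketch of what point 2 might look like, again in the style
                                          of the earlier JUnit examples: suppose the farmer says "winter
                                          wheat planted in October comes off the field the following
                                          July." The HarvestCalendar class and the nine-month rule below
                                          are invented for illustration; the point is that the farmer's
                                          sentence becomes an executable check instead of a paragraph in a
                                          test plan:

                                              import junit.framework.TestCase;

                                              public class HarvestWindowTest extends TestCase {

                                                  // Hypothetical domain code: maps a crop and
                                                  // planting month (1 = January .. 12 = December)
                                                  // to a harvest month.
                                                  static class HarvestCalendar {
                                                      int harvestMonthFor(String crop,
                                                                          int plantingMonth) {
                                                          if ("winter wheat".equals(crop)) {
                                                              // Assumed rule: harvested nine
                                                              // months after planting.
                                                              return ((plantingMonth - 1 + 9) % 12) + 1;
                                                          }
                                                          throw new IllegalArgumentException(
                                                              "unknown crop: " + crop);
                                                      }
                                                  }

                                                  public void testWinterWheatPlantedInOctoberIsHarvestedInJuly() {
                                                      HarvestCalendar calendar = new HarvestCalendar();
                                                      // October = 10, July = 7.
                                                      assertEquals(7,
                                                          calendar.harvestMonthFor("winter wheat", 10));
                                                  }
                                              }

                                          When the farmer's rule changes, the failing test says so
                                          immediately, which is exactly the feedback loop we are after.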

                                          Ron Jeffries
                                          www.XProgramming.com
                                          Speak the affirmative; emphasize your choice
                                          by utterly ignoring all that you reject. -- Ralph Waldo Emerson
                                        • David Vydra
                                          Message 20 of 20, Apr 6 9:29 AM
                                            At Agitar we do lots of automation, but we find that we still want to
                                            create and maintain test plans. We started using twiki, but we feel that
                                            we need something more productive on the editing side and the ability to
                                            work disconnected from the network. We are currently looking at TreePad.

                                            What do you use?

                                            Thanks,

                                            David
                                            www.agitar.com
                                            www.testdriven.com