
Automated acceptance tests

  • Phil Lewis
    Message 1 of 18 , Nov 5, 2001
      I've tried searching the archives, because I'm sure this has been covered,
      but the search seems to just always return the full archive contents, so I'm
      having to ask...

How do people deal with the customer having to write acceptance tests, and
those tests having to be automated?

This kind of implies that the customer must be familiar with an automated
testing tool and scripting language.

      This is something we don't have, and quite likely will not get.

      I'd be interested to hear what tools and techniques are being successfully
      applied.


      Phil


      **************************************************************************************************
      The views expressed in this E-mail are those of the author and not necessarily those of Knowledge Management Software.
      If you are not the intended recipient or the person responsible for delivering to the intended recipient, please be advised that you have received this E-mail in error and that any use is strictly prohibited.

      If you have received this E-mail in error, please notify us by forwarding this E-mail to the following address:

      mailadmin@...
      **************************************************************************************************
    • Bryan Dollery
      Message 2 of 18 , Nov 5, 2001
        Phil Lewis wrote
        > How do people deal with the customer having to write acceptance tests, and
        > those tests having to be automated?
        >
        > This kind of implies that the customer must be familiar with an automated
        > testing tool and scripting language.
        >
        > This is something we don't have, and quite likely will not get.
        >
        > I'd be interested to hear what tools and techniques are being successfully
        > applied.

        What do your customers currently do when they need to assess your work? What
        are their current procedures for acceptance?

One of my most recent customers wanted me to provide the tests, and the
knowledge to run them. They had a single expert who reviewed our tests and
ran them (actually he sat around and watched as we ran them, but it was
close enough). He was responsible for performing due diligence on our
acceptance testing strategy. If we were being diligent, then there were no
problems with the tests.

This meant that we had to have testers who acted as if they belonged to the
customer, which was no problem for us and generated more revenue. Our
testers were very good and were always on the customer's side; they worked
closely with the customer to write their tests, and they proved diligent.

Our project was a success. Admittedly it was a relatively small project
(around 6 people in all), running for three months. Our major difficulty was
that the customer had never bought software before, and we had to educate
her from the ground up.

        Bryan
      • Dossy
        Message 3 of 18 , Nov 5, 2001
          On 2001.11.05, Phil Lewis <phil.lewis@...> wrote:
          >
> How do people deal with the customer having to write acceptance tests, and
          > those tests having to be automated?
          >

          In our particular implementation of XP:

          The Customer writes the Acceptance Tests in plain language. We (the
          developers) implement stories towards this goal.

          Since we are only recently getting into automating our acceptance
          tests, the way it's working out is that the Customer and the QA
          Team (well, right now, the QA guy) work together to automate the
          tests using the automated testing tools that the QA Team know.

          -- Dossy

          --
          Dossy Shiobara mail: dossy@...
          Panoptic Computer Network web: http://www.panoptic.com/
          "He realized the fastest way to change is to laugh at your own
          folly -- then you can let go and quickly move on." (p. 70)
        • Phil Lewis
          Message 4 of 18 , Nov 5, 2001
This seems to be pointing at where I was expecting... QA and the customer
            having a close relationship.

The way we have things at the moment, QA write and run the acceptance tests
from the story cards.

We are losing out because when we (developers) stick our hand up and ask the
customer an implementation detail question, the answer is not getting encoded
as an acceptance test.

For this reason, we are moving towards getting the customer to write the
tests, but want them to be automated. We are currently using a fairly hefty
tool for automated testing that gives us load and stress figures, as well as
regression tests. All these things are rather tightly wrapped up together at
present, which I don't like. I'd much rather have load and stress separate
from acceptance/regression.

This gives the opportunity to choose a different tool for the acceptance
testing. I guess I was kind of hoping that there would be a magic tool we
could have the customer use for writing and running acceptance tests that is
simple enough for a non-programmer to use all on his own...



            > -----Original Message-----
            > From: Dossy [mailto:dossy@...]
            > Sent: 05 November 2001 12:35
            > To: 'extremeprogramming@yahoogroups.com'
            > Subject: Re: [XP] Automated acceptance tests
            >
            >
            > On 2001.11.05, Phil Lewis <phil.lewis@...> wrote:
            > >
> > How do people deal with the customer having to write
> acceptance tests, and
> > those tests having to be automated?
            > >
            >
            > In our particular implementation of XP:
            >
            > The Customer writes the Acceptance Tests in plain language. We (the
            > developers) implement stories towards this goal.
            >
            > Since we are only recently getting into automating our acceptance
            > tests, the way it's working out is that the Customer and the QA
            > Team (well, right now, the QA guy) work together to automate the
            > tests using the automated testing tools that the QA Team know.
            >
            > -- Dossy
            >
            > --
            > Dossy Shiobara mail: dossy@...
            > Panoptic Computer Network web: http://www.panoptic.com/
            > "He realized the fastest way to change is to laugh at your own
            > folly -- then you can let go and quickly move on." (p. 70)


          • Dossy
            Message 5 of 18 , Nov 5, 2001
              On 2001.11.05, Phil Lewis <phil.lewis@...> wrote:
              > I guess I was kind of hoping that there would be a magic tool we could
              > have the customer use for writing and running acceptance tests that is
              > simple enough for a non-programmer to use all on his own...

Perhaps you could increase the odds of getting valuable suggestions
for automated testing tools if you specified exactly what your product
is and what its system requirements are.

              Or, if under contractual obligation not to do so, then just be
              as vague as necessary to avoid litigation.

              -- Dossy

              --
              Dossy Shiobara mail: dossy@...
              Panoptic Computer Network web: http://www.panoptic.com/
              "He realized the fastest way to change is to laugh at your own
              folly -- then you can let go and quickly move on." (p. 70)
            • Phil Lewis
              Message 6 of 18 , Nov 5, 2001
                Dossy said:
                > Perhaps you could increase the odds of getting valuable
                > suggestions for automated testing tools if you specified exactly
> what your product is and what its system requirements are.

                Good idea.

Basically, the system is focussed around knowledge management. We have some
fairly sophisticated search tools that depend on a couple of fancy tricks.

                Around all this we are building other components to support knowledge
                management as a whole solution, including authentication/ authorisation,
                knowledge discovery, expert profiling etc.

                The upshot is a set of EJB components that are accessed via JSP pages
                browsed in the normal manner. The final system looks somewhat like a fancy
                internet search engine with a few extra bells and whistles.

Typically, we want acceptance testing to cover things like: making sure all
knowledge items expected to be discovered in a given location are in fact
properly indexed (that is, they appear as solutions to questions when
expected to do so); making sure a given user only gets access to content she
should; and making sure that deleted solutions never show up again. All this
sort of stuff.

                And it all happens in a browser.

The thing is, I know that there are lots of tools that do this. We have a
couple ourselves. Some of them even allow you to 'record' correct behaviour,
then use that for regression testing. The problem with these is that the
recording often requires that generated scripts be hacked, because it is
fairly unreliable; and you can't record until you have the software. That
means we must finish the story before we get the tests for it, which means
we could be halfway through the next story before we discover we haven't
satisfactorily completed the first.

As this is a core part of XP, I was hoping there would be a solution in
general use. However, having just found a couple of papers on the matter, it
seems that when asked, Kent and Ron have usually responded that the best way
is to grow your own...


              • Bill Wake
                Message 7 of 18 , Nov 5, 2001
                  --- In extremeprogramming@y..., Phil Lewis <phil.lewis@k...> wrote:

                  > The way we have things at the moment, QA write and run the
                  > acceptance tests from the story cards.
                  >
                  > We are losing out because when we (developers) stick our hand
                  > up and ask the customer an implementation detail question, the
                  > answer is not getting encoded as an acceptance test.

                  Is QA part of the team (sitting with customers and programmers)? I
                  ask because "tests from the story cards" almost makes it sound like
                  they're not participating in the associated conversations.

                  >
                  > For this reason, we are moving towards getting the customer
                  > to write the tests, but want them to be automated.

                  It's reasonable to want automated, customer-specified tests. But I
                  wouldn't use this as a mechanism to cut QA out of the loop.

                  > I guess I was kind of hoping that there would be a magic tool
                  > we could have the customer use for writing and running
                  > acceptance tests that is simple
                  > enough for a non-programmer to use all on his own...

I know a few groups have used spreadsheets; I've generated tab- or
comma-separated files, and I know someone who has hooked up JDBC drivers
and treated the spreadsheet like a database. This won't help with all
types of testing, though.
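That spreadsheet/CSV idea can be sketched in a few lines. Everything below (the column names, the `discount` rule under test, the sample rows) is invented for illustration, not taken from the thread:

```python
# Table-driven acceptance tests: the customer fills in a spreadsheet of
# inputs and expected outputs, exports it as CSV, and this glue replays
# each row against the system under test.
import csv
import io

def discount(order_total):
    # Stand-in for the real system under test (hypothetical pricing rule).
    return round(order_total * 0.9, 2) if order_total >= 100 else order_total

# In practice this would be read from the customer's exported .csv file.
CUSTOMER_TESTS = """order_total,expected_price
50,50
100,90
200,180
"""

def run_acceptance_tests(table):
    """Return a list of (row, actual) pairs for every row that failed."""
    failures = []
    for row in csv.DictReader(io.StringIO(table)):
        actual = discount(float(row["order_total"]))
        if actual != float(row["expected_price"]):
            failures.append((row, actual))
    return failures

print(run_acceptance_tests(CUSTOMER_TESTS))  # prints [] -- every row passed
```

The point of the design is that the customer never touches a scripting language: they edit rows in a spreadsheet, and the developers own the small piece of glue.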

                  --
                  Bill Wake William.Wake@... www.xp123.com
                • Manfred Lange
                  Message 8 of 18 , Nov 5, 2001
                    Phil,

                    You have made the point: QA and the customer DO have a close
                    relationship!

                    It is the customer who decides whether the quality is sufficient or not.
                    And quality also includes - among other important things - the feature
                    set, e.g. functionality, response time, availability, training
                    requirements, etc.

It sounds to me as if you have a separate QA function. I am not
convinced that a dedicated QA department (or team, or person) fosters
better software quality.

If you compare it with the manufacturing industry, there were times
when 25% of the employees of a plant worked in the QA department.
These people were fixing products that did not match the required
product quality despite the fact that they had just come out of
production!

Today, quality "control" is integrated into the manufacturing lines and
into the teams. The 25% who were formerly QA staff now work on things
that really matter; they are part of the manufacturing process, and every
single worker is responsible for the quality of the (intermediate)
products (s)he delivers. This puts the responsibility back where it
belongs. A side effect is that it lowered costs.

I tend to believe that this concept, which worked for many parts of the
manufacturing industry, should also be adapted to and embraced by the
software development community. Agile methodologies seem well placed to
lead the way.

                    Regards,
                    Manfred.
                    ---
                    E-Mail: Manfred_Lange@...
                    Web: http://www.xpexchange.net/english/index.html

                    -----Original Message-----
                    From: Phil Lewis [mailto:phil.lewis@...]
                    Sent: Monday, November 05, 2001 1:55 PM
                    To: 'extremeprogramming@yahoogroups.com'
                    Subject: RE: [XP] Automated acceptance tests


This seems to be pointing at where I was expecting... QA and the customer
having a close relationship.

The way we have things at the moment, QA write and run the acceptance tests
from the story cards.

                    [...]

                    > -----Original Message-----
                    > From: Dossy [mailto:dossy@...]
                    > Sent: 05 November 2001 12:35
                    > To: 'extremeprogramming@yahoogroups.com'
                    > Subject: Re: [XP] Automated acceptance tests
                    >
                    >
                    > On 2001.11.05, Phil Lewis <phil.lewis@...> wrote:
                    [...]
                    > Since we are only recently getting into automating our acceptance
                    > tests, the way it's working out is that the Customer and the QA Team
                    > (well, right now, the QA guy) work together to automate the tests
                    > using the automated testing tools that the QA Team know.
                    >
                    > -- Dossy
                    >
                    > --
                    > Dossy Shiobara mail: dossy@...
                    > Panoptic Computer Network web: http://www.panoptic.com/
                    > "He realized the fastest way to change is to laugh at your own
                    > folly -- then you can let go and quickly move on." (p. 70)
                    [...]
                    • Erik Meade
                      Message 10 of 18 , Nov 5, 2001
                        > -----Original Message-----
                        > Date: Mon, 5 Nov 2001 10:00:21 -0000
                        > From: Phil Lewis <phil.lewis@...>
                        > Subject: Automated acceptance tests
                        >
                        > I've tried searching the archives, because I'm sure this has been covered,
                        > but the search seems to just always return the full archive
                        > contents, so I'm
                        > having to ask...
                        >
> How do people deal with the customer having to write acceptance tests, and
> those tests having to be automated?
>
> This kind of implies that the customer must be familiar with an automated
> testing tool and scripting language.
                        >
                        > This is something we don't have, and quite likely will not get.
                        >
                        > I'd be interested to hear what tools and techniques are being successfully
                        > applied.
                        >
                        >
                        > Phil

The customer should at least supply the inputs and outputs for the
acceptance tests. The developers should help the customer with automation.
If they don't, I would try adding a task "Automate acceptance test" to
every story.

The best automation tools I have seen have been homegrown by the teams.
There is some talk that Ruby really lends itself to acceptance testing.

                        --
                        Erik Meade emeade@...
                        Senior Consultant Object Mentor, Inc.
                        http://www.junit.org
                      • Robert Watkins
                        Message 11 of 18 , Nov 5, 2001
Phil Lewis wrote:
> The way we have things at the moment, QA write and run the acceptance tests
> from the story cards.
                          >
                          > We are losing out because when we (developers) stick our hand
                          > up and ask the
                          > customer an implementation detail question, the answer is not getting
                          > encoded
                          > as an acceptance test.

                          That's a communication problem, Phil.

                          The approach I'm trying to get accepted here goes like this: Customers
                          (well, Business Analysts, for us) are responsible for specifying tests, and
                          ensuring tests are complete. Our Test group will write the tests in
                          conjunction with the BAs. However, developers can also enhance the test
                          suites.

When developers get technical details, they should write a unit test around
them. A unit test can be marked as an acceptance test, which basically means
it isn't allowed to be changed so easily.
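One minimal sketch of such a marking mechanism, assuming a simple in-process registry (the decorator name and the sample test are hypothetical, not Robert's actual tooling):

```python
# Tagging a unit test as an acceptance test: once promoted, tooling (or team
# convention) can refuse to let it change without the customer's agreement.
ACCEPTANCE_TESTS = set()

def acceptance(test_fn):
    """Mark a unit test as promoted to acceptance status."""
    ACCEPTANCE_TESTS.add(test_fn.__name__)
    return test_fn

@acceptance
def test_deleted_solutions_stay_deleted():
    # A technical detail captured as a test, echoing Phil's example.
    solutions = {"reset-password", "clear-cache"}
    solutions.discard("clear-cache")
    assert "clear-cache" not in solutions

test_deleted_solutions_stay_deleted()
print(sorted(ACCEPTANCE_TESTS))  # prints ['test_deleted_solutions_stay_deleted']
```

A build script could then compare the registry against version control and flag any promoted test that was edited without sign-off.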

> This gives the opportunity to choose a different tool for the
> acceptance
> testing.
                          > I guess I was kind of hoping that there would be a magic tool we could
                          > have the customer use for writing and running acceptance tests that is
                          > simple
                          > enough for a non-programmer to use all on his own...

                          Depending on what your tests look like, you could use a simple spreadsheet.
                          These work well when your tests basically consist of a set of inputs for a
                          component. Then you write some glue around that to drive the spreadsheet.

                          Robert.

                          --
                          "Duct tape is like the Force: it has a light side, a dark side,
                          and it holds the universe together"
                          Robert Watkins Software Architect QSI Payments Inc.
                          robertdw@... robert.watkins@...
                        • Jeff Miller
                          Message 12 of 18 , Nov 5, 2001
                            Phil:

As an example of testing browser screen output: on an XP project, I
broke out a "logical page" layer which created XML pages. The XML
pages were fairly easy to run assertion queries against, to verify the
presence or absence of various strings, field values, error
conditions, or navigation state. [For actual display, the XML was
processed into HTML.]

                            We mostly did unit/integration/regression testing using this
                            facility, but a similar approach would seem reasonable for ATs in a
                            browser-based application.

                            If your application generates XHTML (which can be parsed by an XML
                            parser into a document structure), you might be able to use this
                            technique without a separate layer, perhaps exploiting the "id"
                            attributes of various sections of your page to be able to find
                            information you expect to be present.
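A small sketch of that technique: because XHTML is well-formed XML, a stock XML parser can locate page elements by their "id" attributes and assert on them. The page snippet and the ids below are invented for illustration:

```python
# Asserting on page output: XHTML parses as XML, so an acceptance test
# can load the rendered page and look up elements by their "id" attributes.
import xml.etree.ElementTree as ET

PAGE = """<html>
  <body>
    <span id="result-count">3</span>
    <ul id="solutions"><li>How to reset a password</li></ul>
    <div id="error-panel"></div>
  </body>
</html>"""

def element_text(page, element_id):
    """Return the text of the element with the given id, or None if absent."""
    node = ET.fromstring(page).find(f".//*[@id='{element_id}']")
    return None if node is None else (node.text or "")

assert element_text(PAGE, "result-count") == "3"   # expected result is shown
assert element_text(PAGE, "error-panel") == ""     # no error message present
assert element_text(PAGE, "bogus-id") is None      # absent elements detected
print("page assertions passed")
```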

                            Jeffrey Miller
                            SF Bay Area

                            --- In extremeprogramming@y..., Phil Lewis <phil.lewis@k...> wrote:
                            [system architecture snipped]
> Typically, we want acceptance testing to cover things like making sure all
> knowledge items expected to be discovered in a given location are in fact
> properly indexed (that is, they appear as solutions to questions when
> expected to do so), make sure a given user only gets access to content she
> should. Make sure that deleted solutions never show up again. All this sort
> of stuff.
>
> And it all happens in a browser.
>
> The thing is, I know that there are lots of tools that do this. We have a
> couple ourselves. Some of them even allow you to 'record' correct behaviour,
> then use that for regression testing. The problem with these is that the
> recording often requires that generated scripts be hacked, because it is
> fairly unreliable; and you can't record until you have the software. Which
> means we must finish the story before we get the tests for it, which means
> we could be halfway through the next story before we discover we haven't
> satisfactorily completed the first.
>
> As this is a core part of XP, I was hoping there would be a solution in
> general use. However, having just found a couple of papers on the matter, it
> seems that when asked, Kent and Ron have usually responded that the best way
> is to grow your own...
                          • Hugo Garcia
                            Message 13 of 18 , Nov 5, 2001
                              Jeffrey:

                              Did you unit test XSL in a similar fashion?

                              -H

                              jsmiller@... wrote:

                              >Phil:
                              >
                              >As an example of testing browser screen output, the XP project I
                              >broke out a "logical page" layer which created XML pages. The XML
                              >pages were fairly easy to do assertion queries against, to verify the
                              >presence or absence of various strings, field values, error
                              >conditions, or navigation state. [For actual display, the XML was
                              >processed into HTML.]
                            • Mark Striebeck
                              Message 14 of 18 , Mar 30, 2004
                                Hi,

                                 we are writing HttpUnit tests for all our acceptance tests. Now we've
                                 got a new UI widget for our data lists (the option to select how many
                                 lines you want to see). It's quite an effort to write the automated
                                 tests for this. On the other hand, all of the lists will use the same
                                 custom tag.

                                 We decided to write one elaborate test that tests one list
                                 exhaustively, and to test for the other lists only that the custom tag
                                 is on the list and can be invoked. But we won't do the exhaustive test
                                 for the other lists.

                                 The engineers are 100% convinced that there is no way it could work
                                 for one list and then fail for another.

                                 Sounds technically fine, but I and our QA manager are sort of
                                 quivering at the thought of having some functionality that is 100%
                                 unit tested but only lightly acceptance tested.

                                 Are you normally automating ALL acceptance tests, or do you sometimes
                                 make "educated" decisions like this?

                                MarkS
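The split Mark describes (one exhaustive test, plus a cheap presence check everywhere else) might be sketched like this. The markup matched below (`name="pageSize"`) is a guess at what the custom tag renders, not the project's actual output, and the page strings stand in for responses a real test would fetch with HttpUnit:

```java
// Sketch of the "one exhaustive test, many smoke checks" split.
public class ListSmokeCheck {

    // Shared smoke assertion: is the page-size widget present on the page?
    // The attribute name is an assumption for the example.
    static boolean hasPageSizeWidget(String pageHtml) {
        return pageHtml.contains("name=\"pageSize\"");
    }

    public static void main(String[] args) {
        // Stand-ins for pages a real test would fetch over HTTP.
        String ordersPage   = "<table>rows...</table><select name=\"pageSize\"></select>";
        String invoicesPage = "<table>rows...</table><select name=\"pageSize\"></select>";

        // The exhaustive paging test runs against a single list elsewhere;
        // every other list gets only this cheap presence check.
        for (String page : new String[] { ordersPage, invoicesPage }) {
            if (!hasPageSizeWidget(page)) {
                throw new AssertionError("page-size widget missing");
            }
        }
        System.out.println("smoke checks passed");
    }
}
```

Because the presence check is one shared helper, adding it to the acceptance test of each new list costs a single line, which keeps the "educated" shortcut cheap to revisit if it ever turns out to be wrong.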
                              • Ron Jeffries
                                Message 15 of 18 , Mar 30, 2004
                                  On Tuesday, March 30, 2004, at 4:50:59 PM, Mark Striebeck wrote:

                                   > Are you normally automating ALL acceptance tests, or do you
                                   > sometimes make "educated" decisions like this?

                                  I suspect that everyone makes decisions like this. And I suspect that
                                  sometimes they are right, and sometimes ... not so right.

                                  Ron Jeffries
                                  www.XProgramming.com
                                  Inigo Montoya: You are wonderful!
                                  Man in Black: Thank you. I have worked hard to become so.
                                • J. B. Rainsberger
                                  Message 16 of 18 , Mar 30, 2004
                                    Mark Striebeck wrote:

                                     > Are you normally automating ALL acceptance tests, or do you
                                     > sometimes make "educated" decisions like this?

                                    I just read this in Mike Cohn's User Stories book, and I happen to agree
                                    with it.

                                    Coverage is not the point of acceptance testing. The point is verifying
                                    that features are present. To that end, if you know that the elements of
                                    the list are irrelevant to the feature you're testing, and if you know
                                    that all such lists invoke the same implementation, then there's not
                                    much point in testing them all. Test one.

                                    That said, if your ass be bitten by this problem, then add the tests,
                                    and be more careful -- you obviously assumed something not in evidence.

                                    (Short version: yes; we like to be sensible from time to time.)
                                    --
                                    J. B. Rainsberger,
                                    Diaspar Software Services
                                    http://www.diasparsoftware.com :: +1 416 791-8603
                                    Let's write software that people understand
                                  • Robert C. Martin
                                    Message 17 of 18 , Mar 31, 2004
                                      Mark,

                                      What's the cost of failure? If the cost is high, then find a way to write
                                      the tests. If you can afford an occasional failure and can quickly recover
                                      from it, then it may not be worth writing exhaustive tests.

                                       Having said that, it seems odd to me that you could test it once and
                                       then find it hard to test it many times. Perhaps there is a way to
                                       make the tests simpler.

                                      -----
                                      Robert C. Martin (Uncle Bob)
                                      Object Mentor Inc.
                                      unclebob@...
                                      800-338-6716


                                    • Mark Striebeck
                                      Message 18 of 18 , Apr 1, 2004
                                        Hi Bob,

                                        the cost of failure is not that high in this case. No data corruption,
                                        no access violation or anything serious.

                                        We have just started with automating our acceptance tests and I try to
                                        push everyone to be as rigorous and thorough as possible at this point.
                                        Once we have more experience with these automated tests we might be able
                                        to make more educated decisions but I don't feel that we know enough
                                        about it now.

                                        The tests themselves are complex to write because they need a lot of
                                        data in the system. And because we have only just started to automate
                                        the acceptance tests, we have only a few utility classes available.
                                        Once we have those, I guess it will be much easier to plug a complex
                                        test together.
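The utility classes mentioned here often end up as small fixture builders. Everything named in this sketch (the "solution" domain objects, the builder itself) is invented for illustration, not taken from the actual system:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a test-data utility of the kind described above: each helper
// hides tedious data setup behind an intention-revealing name.
public class SolutionFixture {
    private final List<String> solutions = new ArrayList<String>();

    // Add n canned knowledge items to the fixture.
    SolutionFixture withSolutions(int n) {
        for (int i = 1; i <= n; i++) {
            solutions.add("solution-" + i);
        }
        return this;
    }

    List<String> build() {
        return solutions;
    }

    public static void main(String[] args) {
        // With a few such helpers, plugging a complex test together
        // collapses to a readable one-liner.
        List<String> data = new SolutionFixture().withSolutions(3).build();
        System.out.println(data.size() + " solutions: " + data);
    }
}
```

In a real suite the builder would insert rows through the application's own API or database layer rather than return strings, but the shape (chained, self-describing setup calls) is the part that makes complex acceptance tests cheap to write.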

                                        Thanks
                                        MarkS

                                        Robert C. Martin wrote:

                                        >Mark,
                                        >
                                        >What's the cost of failure? If the cost is high, then find a way to write
                                        >the tests. If you can afford an occasional failure and can quickly recover
                                        >from it, then it may not be worth writing exhaustive tests.
                                        >
                                        >Having said that it seems odd to me that you could test it once, and then
                                        >find it hard to test it many times. Perhaps there is a way to make the
                                        >tests simpler.