
Re: [XP] Acceptance Testing as Negative Impact Testing

  • Ron Jeffries
    Message 1 of 2, Feb 4, 2003
      On Tuesday, February 4, 2003, at 4:14:33 PM, Ed Mostrom wrote:

      > I don't remember it being stated in any of the books (maybe I missed
      > it); but based on the comments on this list, Acceptance Tests are to be
      > kept after the initial acceptance and used as Negative Impact testing.
      > I like the idea; but how do you actually implement this in the real world?

      > I am assuming that most Acceptance Tests run the entire system - perhaps
      > by sending a predetermined set of data, running some type of query
      > against the processed data, and comparing the output values to
      > predetermined values.

      Yes, that's right.
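      The pattern Ed describes — send a predetermined set of data through the
      system, query the processed result, and compare the output values to
      predetermined expected values — can be sketched roughly like this. The
      system under test and every name here are hypothetical stand-ins, not
      anything from a real project:

```python
def process_order(order):
    """Hypothetical system under test: totals an order and adds 10% tax."""
    subtotal = sum(qty * price for qty, price in order["lines"])
    return {"subtotal": subtotal, "total": round(subtotal * 1.10, 2)}

def acceptance_test_210a():
    # Predetermined input data
    order = {"lines": [(2, 3.50), (1, 10.00)]}
    # Run the system end to end on that data
    result = process_order(order)
    # Compare the processed output to predetermined expected values
    assert result["subtotal"] == 17.00
    assert result["total"] == 18.70
    return "pass"

print(acceptance_test_210a())  # prints "pass"
```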

      > At some point in time, a change will not only require new tests; but
      > will also affect older ones. This means you would need some type of
      > smart test engine that says: if test 340 passes, then 210b needs to pass
      > otherwise 210a needs to pass.

      Why would you need this? When we release the code that makes 340 run, why
      don't we "just" replace 210a with 210b?

      One easy way to find out which tests need updating after a code change is
      to run the tests and look. The new failures either indicate a defect in the
      new code, or a test that needs revision.
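      Concretely (a toy sketch, all names and numbers hypothetical), "just
      replace 210a with 210b" can be this simple: the code change makes the old
      expected value fail, and the revised test encodes the new, customer-approved
      expectation:

```python
TAX_RATE = 1.15  # the new story's change: tax went from 10% to 15%

def process_order(order):
    # Hypothetical system under test, after the change
    subtotal = sum(qty * price for qty, price in order["lines"])
    return round(subtotal * TAX_RATE, 2)

def test_210a():
    # The old acceptance test: now fails, flagging itself for revision
    return process_order({"lines": [(1, 10.00)]}) == 11.00

def test_210b():
    # The replacement test, OK'd with the customer before release
    return process_order({"lines": [(1, 10.00)]}) == 11.50

print(test_210a(), test_210b())  # prints "False True"
```

      Running the suite and looking at the failures tells you exactly which
      old tests to retire; no conditional machinery is needed.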

      > This could get worse if multiple developers' changes all affect some of
      > the same older tests. You could end up with:
      > if 340 passes then 210b needs to pass
      > if 341 passes then 210c needs to pass
      > if 340 & 341 pass then 210d needs to pass
      > otherwise 210a needs to pass (the original old Acceptance Test)

      I suppose you could. But I don't see why you would. Think, if you will,
      about releasing your code back to the repository twice a day. Imagine that
      releases are serialized.

      Thus you and I, pairing, put in some new stuff and run the tests. We see
      that 210b is the right new test. We OK that change with the customer and
      release it.

      Later that same day, Dave and Charlie release some other new stuff. They
      see that 210c is the right new test. Rinse, repeat.

      > Has anyone actually put together a system that helps you do this? How
      > hard is it for customers to put in new tests and make changes to older
      > ones?

      I've not seen anyone doing conditional tests. It seems like a nightmare to
      me, but maybe I'm just speaking from fear.

      Creation and updating of tests can be difficult and time-consuming. But
      what's the alternative? Not knowing if the system works. So the team finds
      its own balance between knowing with hard work, and not knowing but not
      working so hard.

      Ron Jeffries
      Discontinue reading if rash, irritation, redness, or swelling develops.
      Especially irritation.