Aids to guiding acceptance test definition?

  • David Bolen
    Message 1 of 32, Jan 22, 2004
      In a new XP project, I'm feeling a little uncertain about how our
      acceptance tests are being defined, and was wondering if anyone might
      be able to share any techniques for how they may have worked with
      customers on such definitions?

      I can't help feeling that the acceptance tests we are ending up with
      are too low level, in that they tend to end up duplicating tests that
      fall out of TDD for a story. Of course, they do define what the
      customer wants out of the story, so it may just be me and there's no
      problem at all. During our planning sessions, the customer often just
      falls back on explaining a story (say, with a few salient features)
      and then when asked about an acceptance test, reiterates each feature
      saying to "test that" (or may be more specific, but it's primarily
      a feature-by-feature test). Thus, in general, the acceptance tests
      are just duplicating (a subset of) the unit tests that will be written
      when TDDing the code for that story.

      For example, part of the system involves interacting with a UI on an
      embedded device that walks the user through a session. Discussion
      with the customer has led to an overall state machine model, which is
      useful for communicating the device's behavior between customer and
      developers.
      Each state has a particular presentation, and business logic for
      controlling state transitions. It may be worth noting that the
      customer in this case is an internal representative of a future,
      anticipated market, and while he is strictly from the business side
      of the house, he also has a technical background.
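
      To make that concrete, here's roughly the shape of the model in
      Python (a sketch with illustrative names, not our real code):

      class State:
          """Each state pairs a presentation with transition logic."""

          def render(self, display):
              # Draw this state's particular presentation.
              raise NotImplementedError

          def on_event(self, event, elapsed):
              # Business logic: given a user event and the seconds spent
              # in this state, return the next State (possibly self).
              raise NotImplementedError

      class Device:
          """Walks the user through a session by delegating to states."""

          def __init__(self, initial_state):
              self.state = initial_state

          def handle(self, event, elapsed=0):
              self.state = self.state.on_event(event, elapsed)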

      One story for the device might be "Invitation", an event in which
      the user is invited to join (register with) our system. During
      discussion of the story, the customer decides that upon reaching the
      state to invite the user to join the system, the user has a period of
      time (15s) during which a particular action can resume an earlier
      state (Monitor), otherwise the same action moves on to a different
      state (Learn). They can also take some other action which in all
      cases moves to a third state (Idle).
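
      Those transition rules translate almost mechanically into the unit
      tests that fall out of TDDing the story; a self-contained sketch
      (again, all names are hypothetical):

      import unittest

      class Monitor: pass
      class Learn: pass
      class Idle: pass

      class Invitation:
          TIMEOUT = 15  # seconds during which the action resumes Monitor

          def on_event(self, event, elapsed):
              if event == "resume":
                  return Monitor() if elapsed < self.TIMEOUT else Learn()
              if event == "dismiss":
                  # The other action moves to Idle in all cases.
                  return Idle()
              return self

      class InvitationTransitionTest(unittest.TestCase):
          def setUp(self):
              self.invitation = Invitation()

          def test_action_before_15s_resumes_monitor(self):
              self.assertIsInstance(
                  self.invitation.on_event("resume", elapsed=10), Monitor)

          def test_action_after_15s_moves_on_to_learn(self):
              self.assertIsInstance(
                  self.invitation.on_event("resume", elapsed=20), Learn)

          def test_other_action_always_goes_to_idle(self):
              self.assertIsInstance(
                  self.invitation.on_event("dismiss", elapsed=3), Idle)

      if __name__ == "__main__":
          unittest.main()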

      When asked for an acceptance test the customer wished to use for the
      story, the answer was stated something like:

      * Have the user initiate the action prior to 15s and verify Monitor
      * Have the user initiate the action after 15s and verify Learn
      * Have the user initiate the other action and verify Idle

      which seemed reasonable. However, after a few similar stories, the
      customer was quickly just saying "test each of the boundary
      conditions" for most stories (in this case, much of the business logic
      was state driven, so many stories had the above sort of scenario).

      Given the above, and that the state transitions were obvious tests for
      us to use in TDD to drive development (so we did write them in the
      act of developing the story), would you expect us to rewrite them in
      the context of the acceptance tests? Should I try to move the
      customer in a higher-level direction for tests, knowing that raw
      state transitions would be unit tested?

      Perhaps I should take this as a smell that the stories are too
      fine-grained? To some extent a higher-level test would probably
      involve a typical overall use of the system (say a user working
      through a single session) but that would be bigger than a single
      story.
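
      For instance, a session-level test might read more like a script
      than a per-transition check. A purely hypothetical sketch (this
      session API is invented, just to show the shape, and would need a
      simulated device behind it):

      def test_new_user_walks_through_a_session(session):
          session.start()
          session.advance_to("Invitation")  # cross several states to get here
          session.wait(20)                  # let the 15s window lapse
          session.user_action("join")
          assert session.current_state() == "Learn"
          session.user_action("finish")
          assert session.current_state() == "Idle"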

      Anyway, I'm sure there's no black-and-white answer, but I'd
      appreciate any thoughts on how others might find acceptance tests
      differing (if at all) in content from unit tests, and any
      suggestions or resources for working with a customer on designing
      such acceptance tests.

      Thanks.

      -- David
    • J. B. Rainsberger
      Message 32 of 32, Feb 3, 2004
        Amir Kolsky wrote:

        > ]I don't know why.
        > ]
        > ]Customer Tests are meant to give the customer confidence that the
        > ]features he has requested are present in the product.
        > ]
        > ]Programmer Tests are meant to give the programmer confidence that the
        > ]code he has written does what he intended it to do.
        > ]
        > ]I honestly don't know what there is to debate here.
        > ]--
        > ]J. B. Rainsberger,

        > Not a single word on TDD and Documentation?

        I consider documentation to be a pleasant side-effect of automated tests,
        in general. It is easy to write tests that are not good documentation,
        so I don't offer it as a key property of the practice.

        If one writes tests that can act as useful documentation, then so much
        the better. First, I'd like to focus on three key properties:

        * PTs force the programmer to use the code they write, driving design
        decisions
        * PTs provide an executable specification, increasing confidence that
        code does the thing right
        * PTs provide a safety net for refactoring
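
        For instance, a trivial programmer test (a toy Stack invented
        here purely as illustration) shows all three at once:

        import unittest

        class Stack:
            """Toy production code, invented for this example."""
            def __init__(self):
                self._items = []

            def push(self, item):
                self._items.append(item)

            def pop(self):
                if not self._items:
                    raise IndexError("pop from empty Stack")
                return self._items.pop()

        class StackTest(unittest.TestCase):
            # Writing this first forces the programmer to use Stack's
            # interface, which drives the design.
            def test_pop_returns_items_last_in_first_out(self):
                stack = Stack()
                stack.push("a")
                stack.push("b")
                # Executable specification of the intended LIFO
                # behaviour, and a safety net when Stack's internals
                # are later refactored.
                self.assertEqual(stack.pop(), "b")
                self.assertEqual(stack.pop(), "a")
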
        --
        J. B. Rainsberger,
        Diaspar Software Services
        http://www.diasparsoftware.com :: +1 416 791-8603
        Let's write software that people understand