
task cases as acceptance tests?

  • jeff621@yahoo.com
    Message 1 of 2, Oct 11, 2001
      I'm in a situation where I need to produce acceptance tests for
      existing applications. I'm burdened with a little understanding of
      usage-centered design, and now have a hard time looking at any
      application without quickly backing up, trying to understand who's
      using the app - their role - and quickly writing a task case for what
      they're trying to accomplish, then comparing this task case to the
      application I'm seeing. Basically I'm using the task case as the
      foundation/starting block for acceptance testing.

      It works for me to just use the task case as an acceptance test.
      Obviously I need to convert the generalities in the task case to
      specifics as I manually execute a test. I'm having a hard time
      getting other folks to make this leap. The temptation is to write a
      long, complex test case that's time-consuming to write and brittle.
      Minor changes in the application result in changes to the acceptance
      test - which often don't get done. The result is acceptance tests
      that don't square with the acceptable product.

      Do I have a question here? Maybe. Does anyone, Larry and Lucy
      included, have experience using usage-centered design methods in
      support of acceptance testing? What components of ucd lend
      themselves well to acceptance testing? What is inappropriate to use?

      thanks,

      -Jeff
    • Larry Constantine
      Message 2 of 2 , Oct 12, 2001
        Jeff,

        > I'm in a situation where I need to produce acceptance tests for
        > existing applications. I'm burdened with a little understanding of
        > usage-centered design, and now have a hard time looking at any
        > application without quickly backing up, trying to understand who's
        > using the app - their role - and quickly writing a task case for what
        > they're trying to accomplish, then comparing this task case to the
        > application I'm seeing. Basically I'm using the task case as the
        > foundation/starting block for acceptance testing.
        >
        > It works for me to just use the task case as an acceptance test.
        > Obviously I need to convert the generalities in the task case to
        > specifics as I manually execute a test. I'm having a hard time
        > getting other folks to make this leap. The temptation is to write a
        > long, complex test case that's time-consuming to write and brittle.
        > Minor changes in the application result in changes to the acceptance
        > test - which often don't get done. The result is acceptance tests
        > that don't square with the acceptable product.
        >
        > Do I have a question here? Maybe. Does anyone, Larry and Lucy
        > included, have experience using usage-centered design methods in
        > support of acceptance testing? What components of ucd lend
        > themselves well to acceptance testing? What is inappropriate to use?

        Good question. We have used task cases to generate inspection scenarios and
        usability test scenarios on a couple of projects. We build each scenario or
        test case as the enactment of a cluster of closely connected task cases,
        based on their relationships (extension, inclusion, precedence, etc.).
        Because test cases and
        inspection/test scenarios are concrete (referring to an actual interface and
        specifying actual expected behavior), they are necessarily more sensitive to
        design changes. Tests must specify correct results ("the user clicks New or
        the Create Project tool, which launches the new project screen with default
        values for name and type; opening the project view tab from the new project
        screen should display an empty list of project elements..."). If the
        architecture, functionality, or method of navigation changes, the test
        case/scenario does have to be rewritten. I don't know of a simple or elegant
        way around this.
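A test like the parenthetical example above could be sketched as code. This is only an illustration of why concrete tests are design-sensitive; the application stub, control names, and default values below are all invented, not any actual interface:

```python
# Hypothetical sketch: the "new project" scenario enacted as a concrete
# acceptance test. FakeApp and all names here are invented for illustration.

class FakeApp:
    """Minimal stand-in for the application under test."""
    def __init__(self):
        self.screen = "main"
        self.project = None

    def click(self, control):
        if control in ("New", "Create Project"):
            self.screen = "new_project"
            self.project = {"name": "Untitled", "type": "Default", "elements": []}
        elif control == "Project View":
            self.screen = "project_view"

def test_create_project_scenario():
    app = FakeApp()
    app.click("New")
    # The test asserts actual expected behavior, so it is tied to the design;
    # renaming a control or changing a default breaks it.
    assert app.screen == "new_project"
    assert app.project["name"] == "Untitled"   # assumed default name
    app.click("Project View")
    assert app.project["elements"] == []       # empty element list

test_create_project_scenario()
print("scenario passed")
```

Every assertion references a concrete control or value, which is exactly what makes such a test brittle under design change.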

        However, traceability and maintainability can be improved if each test
        case/scenario is linked to the cluster of task cases from which it has been
        generated. Of course, these in turn are linked to the interaction contexts
        that support them. A change in a task or the interaction context that
        supports it can then be traced back to the test case/scenario that validates
        that part of the system and the test/scenario can be rewritten and "rerun."
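The traceability linkage described here might be kept as a simple inverted index from task cases to the scenarios that enact them. The scenario and task-case names below are invented for illustration:

```python
# Hypothetical sketch: trace a changed task case back to the test
# scenarios that validate it. All names are invented.

from collections import defaultdict

# scenario -> cluster of task cases it enacts
scenario_tasks = {
    "S1_create_project": {"creating a project", "naming a project"},
    "S2_browse_elements": {"viewing project elements"},
}

# invert: task case -> scenarios that validate it
tasks_to_scenarios = defaultdict(set)
for scenario, tasks in scenario_tasks.items():
    for task in tasks:
        tasks_to_scenarios[task].add(scenario)

def scenarios_to_rerun(changed_task):
    """Scenarios to rewrite and 'rerun' when a task case changes."""
    return sorted(tasks_to_scenarios.get(changed_task, ()))

print(scenarios_to_rerun("naming a project"))  # → ['S1_create_project']
```

A change to a task case (or the interaction context supporting it) then yields the exact list of scenarios to revisit, rather than a full regression of the suite.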

        Furthermore, the number of task cases combined into a test case/scenario can
        be limited, so that test cases/scenarios are kept relatively compact.
        In addition, guided by the task model, you can avoid (most) duplication in
        which one task case is built into a number of different test cases. This
        should help avoid massive rewriting and retesting when some small thing
        changes.
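Guided by the task model, that duplication can even be detected mechanically: flag any task case enacted by more than one scenario. Again, the names below are invented:

```python
# Hypothetical sketch: find task cases built into multiple test
# scenarios, which would force multiple rewrites on a single change.

scenario_tasks = {
    "S1_create_project": {"creating a project", "naming a project"},
    "S2_rename_project": {"naming a project"},
}

duplicated = {
    task
    for tasks in scenario_tasks.values()
    for task in tasks
    if sum(task in t for t in scenario_tasks.values()) > 1
}
print(sorted(duplicated))  # → ['naming a project']
```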

        --Larry Constantine | Director of Research & Development
        Constantine & Lockwood, Ltd. | www.foruse.com
        Winners of the Performance-Centered Design Competition 2001