RE: [agile-testing] Goal-Question Metric and Test Metrics

From: Michael Bolton
Date: Mar 31, 2007

      >Example: You are working on Open Office Spreadsheet. The requirement is to
      >"make it work exactly like excel."

      What does it mean for it to work exactly like Excel--to provide exactly the
      same result for a calculation for a given formula, to a given level of
      precision? To provide exactly the same API (ugh!)? Exactly the same user
      interface? Exactly the same performance?
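
      Even the narrowest reading--same result for the same formula, to a given
      level of precision--already forces a choice that neither product makes
      for you. A minimal sketch in Python, where the tolerance is an
      assumption, not anything Excel or OO documents:

          import math

          def results_match(oo_value, excel_value, rel_tol=1e-9):
              # rel_tol is a stand-in for whatever "exactly like Excel"
              # turns out to mean; a human still has to pick the number.
              return math.isclose(oo_value, excel_value, rel_tol=rel_tol)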

      >You write a little function to create random formulas, execute them in OO
      >and Microsoft Excel, then compare the results. Because of the tight
      >feedback loop, and because excel can be a bit of a black box ("make it work
      >like excel" - would this mean to reproduce errors as well?) - it might
      >actually be less waste to have a computer do the inspection automatically.

      You're talking about randomized high-volume automated testing. Doug Hoffman
      is a big proponent of it (e.g.
      http://www.logigear.com/newsletter/using_oracles_in_testing_and_test_automation_part-1.asp).
      Cem Kaner, Pat Bond, and Pat McGee did a paper on it for
      STAR in 2004 (http://www.kaner.com/pdfs/highvolCSTER.pdf; you'll probably
      find pages 7, 8, and 9 on Extended Random Regression Testing to be very
      interesting, and page 16 describes exactly what you suggest above). This
      stuff can be enormously effective, not only for comparison between two
      products, but on some other unexpected levels too. It usually depends on a
      reliable, high-speed oracle (although sometimes the approach produces
      interesting crashes, too). (And yes, reproducing errors might be required
      for "bug-for-bug compatibility", especially in commercial environments, or
      where legacy systems converse with the application under test.)
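
      Just to make the shape of such a harness concrete, here's a minimal
      sketch in Python. The two evaluate_* hooks are hypothetical stand-ins
      for whatever actually drives OO and Excel (UNO, COM, whatever), and the
      formula generator is deliberately dumb:

          import math
          import random

          OPERATORS = ["+", "-", "*"]

          def random_formula(depth=3):
              # Build a random arithmetic expression, e.g. "((3.5*2.0)+7.25)".
              if depth == 0:
                  return str(round(random.uniform(1, 100), 2))
              op = random.choice(OPERATORS)
              return "(%s%s%s)" % (random_formula(depth - 1), op,
                                   random_formula(depth - 1))

          def run_comparison(trials, evaluate_in_oo, evaluate_in_excel):
              # evaluate_in_* are assumed hooks: feed a formula string to
              # each product and get the computed value back.
              disagreements = []
              for _ in range(trials):
                  formula = "=" + random_formula()
                  oo = evaluate_in_oo(formula)
                  excel = evaluate_in_excel(formula)
                  if not math.isclose(oo, excel, rel_tol=1e-9):
                      # Don't decide here who's wrong; just log the
                      # disagreement for a human to triage later.
                      disagreements.append((formula, oo, excel))
              return disagreements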

      >Clock cycles and electricity are cheap - human time is expensive.

      Absolutely. The machine part is fast, for sure. The human part includes
      the programming, the evaluation of the programming, bug-fixing, and the
      evaluation of the results. If and where there is a disagreement between the
      application under test and the oracle, a human has to make the decision as
      to whether the difference is an error in the AUT, the oracle, or the program
      that compares them, AND whether that difference matters to the degree that
      we're going to change the AUT--"does this mean to reproduce errors as well?"
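
      If you wanted the harness to make that review step explicit, the verdict
      categories fall straight out of the paragraph above; this little Python
      enum is hypothetical, but the choices aren't:

          from enum import Enum

          class Verdict(Enum):
              # A human assigns one of these to each logged disagreement;
              # the harness can only queue the question, not answer it.
              AUT_BUG = "error in the application under test"
              ORACLE_BUG = "error in the oracle (Excel)"
              HARNESS_BUG = "error in the comparison program"
              WONT_FIX = "real difference, but not worth changing the AUT"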

      ---Michael B.