
Re: [XP] Re: Could customer acceptance tests be sufficient? TDD Adoption Rate Survey

  • Rick Mugridge, Nov 2, 2008 (Message 1 of 36)
      I've worked with teams where extensive storytests happened to reduce
      the need for unit tests, and the storytests alone provided high
      coverage of the code (> 98%). I write storytests that specify business
      rules and objects, rather than focussing on integration, so some
      storytests may end up being very specific about the behaviour of a
      single domain-level class (eg, validation rules). I've tended to work
      with complex domains, however. I write unit tests as soon as I get
      beyond the coverage provided by the storytests and want to drive the
      technical design.
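
      As a sketch (all names here are invented for illustration), a
      storytest for a single validation rule can read as a plain statement
      of the business rule, even when written as a JUnit test:

          import org.junit.Test;
          import static org.junit.Assert.*;

          // Hypothetical domain-level class: a single business/validation rule.
          class SeniorDiscountRule {
              boolean qualifies(int age, boolean isMember) {
                  return age >= 65 || isMember;
              }
          }

          public class SeniorDiscountStorytest {
              private final SeniorDiscountRule rule = new SeniorDiscountRule();

              @Test
              public void seniorsQualifyEvenWithoutMembership() {
                  assertTrue(rule.qualifies(65, false));
              }

              @Test
              public void membersQualifyAtAnyAge() {
                  assertTrue(rule.qualifies(30, true));
              }

              @Test
              public void youngerNonMembersDoNotQualify() {
                  assertFalse(rule.qualifies(40, false));
              }
          }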

      My guess is that the ratio of storytests to unit tests depends on the
      complexity of the business domain and the complexity of the technical
      architecture (that is, independent of that domain). So the ratio of unit
      tests to code may rise as the technical complexity rises. Likewise, the
      ratio of storytests to code may rise with the domain complexity. I
      suppose it also depends on the quality of abstractions that are
      available and/or used in each of those forms (ie, code, unit tests,
      storytests).

      The other factor is whether there is a combinatorial issue, where a
      little code has a lot of "emergent behaviour" and needs lots of tests
      of either kind: eg, parsing/compiling, simulations, Ajax across web
      browsers, or checking against different hardware.
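
      As a made-up illustration of that, even a one-line clamp function
      needs a whole grid of cases to pin down its boundary behaviour:

          import org.junit.Test;
          import static org.junit.Assert.assertTrue;

          public class ClampCombinationsTest {
              // A little code with combinatorial behaviour: one line, many cases.
              static int clamp(int value, int low, int high) {
                  return Math.max(low, Math.min(high, value));
              }

              @Test
              public void resultStaysWithinBoundsForAllCombinations() {
                  int[] values = { -10, -1, 0, 1, 5, 10 };
                  int[] bounds = { -5, 0, 3, 7 };
                  for (int value : values) {
                      for (int low : bounds) {
                          for (int high : bounds) {
                              if (low > high) continue; // skip invalid ranges
                              int result = clamp(value, low, high);
                              assertTrue(low <= result && result <= high);
                          }
                      }
                  }
              }
          }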

      Cheers, Rick

      kentb wrote:
      >
      > Dear Jeff,
      >
      > I measured recently and found that 40% of JUnit's tests refer to
      > JUnitCore,
      > the main entry point. When implementing a new feature, we almost always
      > start with a test at the API level, and then support it with tests of
      > smaller-scale objects as necessary.
      >
      > I would guess that this proportion of acceptance/unit tests is unusually
      > high because we are working with a simple API and moderately simple
      > behavior. Maybe not, though. I am working on another project that has been
      > going for ten years and we have learned to drive most development from
      > acceptance-level tests because testing at too low a level and missing
      > something is often more costly than spending extra time writing and
      > running higher-level tests.
      >
      > It'll be interesting to see how this evolves over time in the community.
      >
      > Regards,
      >
      > Kent Beck
      > Three Rivers Institute
      >
      > _____
      >
      > From: extremeprogramming@yahoogroups.com
      > [mailto:extremeprogramming@yahoogroups.com] On Behalf Of Jeff Grigg
      > Sent: Wednesday, October 29, 2008 6:48 AM
      > To: extremeprogramming@yahoogroups.com
      > Subject: [XP] Re: Could customer acceptance tests be sufficient? TDD
      > Adoption Rate Survey
      >
      > > --- "Jeff Grigg" <jeffgrigg@...> wrote:
      > >> (I've never personally seen this happen, but...)
      > >> If the Customer Acceptance Tests specified the desired
      > >> system behavior in sufficient detail, there would be no
      > >> need for Developer Tests.
      >
      > > --- Ron Jeffries wrote:
      > >> This seems theoretically true but false in practice ...
      >
      > --- George Dinwiddie <lists@...> wrote:
      > > I'm not even sure it's theoretically true. Working only
      > > from Customer Acceptance Tests would force me to take
      > > bigger steps. [...]
      >
      > I've gotten close to it on some projects -- typically reusable library
      > code with developers as customers. When the desired behavior of the
      > system (the API) is specified in great detail by the customer, I find
      > that their acceptance tests get remarkably close to the developer
      > integration tests that I'm inclined to write. Coding to a published
      > API standard can get close to this too.
      >
      > But in my (admittedly limited) experience, even published API
      > standards leave room for interpretation, especially in corner cases.
      > So even with the most rigorous of customer specs, I find that I still
      > need additional developer tests to ensure reasonable behavior in all
      > cases.
      >
      > --- "Scott Ambler" <scottwambler@...> wrote:
      > Agreed. Also, I didn't come anywhere close to indicating this
      > in the survey, and certainly didn't insist on it.
      > > - Scott
      >
      > (Oh; sorry! I was thinking about this as an additional talking point
      > -- not a specific comment about the survey.)
      >
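      A minimal sketch of the API-level testing Kent describes above:
      JUnitCore.runClasses really is JUnit 4's main entry point, though the
      nested test class below is invented for illustration:

          import org.junit.Test;
          import org.junit.runner.JUnitCore;
          import org.junit.runner.Result;
          import static org.junit.Assert.*;

          public class ApiLevelTest {
              // A trivial test class to run through the public API.
              public static class OnePassingTest {
                  @Test
                  public void passes() {
                  }
              }

              @Test
              public void reportsResultsThroughTheMainEntryPoint() {
                  Result result = JUnitCore.runClasses(OnePassingTest.class);
                  assertTrue(result.wasSuccessful());
                  assertEquals(1, result.getRunCount());
                  assertEquals(0, result.getFailureCount());
              }
          }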


    • Rick Mugridge, Nov 8, 2008 (Message 36 of 36)
        J. B. Rainsberger wrote:
        > I suggest programmers focus on isolated object tests and testers focus
        > on integration and end-to-end tests. If they do that, then they'll
        > come together pretty well at some point.
        > ----

        I believe that it's not the scale of the test that should determine
        who writes it.

        It's instead whether the test expresses something that's part of the
        problem space or part of the solution space. Of course, where that
        boundary sits depends critically on the project and who is involved.
        And it changes as the problem, and the solution, are better
        understood.

        And there can be several layers, with a solution space at one level
        being a problem space at another. So, for example, I'm happy to use
        storytests for specifying the technical details of communication
        with another system that is managed by another team. And I'm happy
        to have some storytests that mock out that other system, so that our
        "specification"/testing is additive across the two systems rather
        than multiplicative (N + M cases rather than N x M). As always, we
        still need some end-to-end tests to ensure it's all wired together
        correctly and that failure modes across the systems are handled
        correctly.
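
        As a sketch (all names invented), a storytest with the other system
        mocked out might look like this, the fake standing in for the system
        the other team manages:

            import java.util.HashMap;
            import java.util.Map;
            import org.junit.Test;
            import static org.junit.Assert.*;

            // The boundary to the other team's system.
            interface InventoryGateway {
                int stockLevel(String sku);
            }

            // In-memory fake; a shared contract test would check it
            // against the real system.
            class FakeInventoryGateway implements InventoryGateway {
                private final Map<String, Integer> stock =
                        new HashMap<String, Integer>();
                void setStock(String sku, int level) { stock.put(sku, level); }
                public int stockLevel(String sku) {
                    Integer level = stock.get(sku);
                    return level == null ? 0 : level;
                }
            }

            // Our side of the boundary.
            class OrderService {
                private final InventoryGateway inventory;
                OrderService(InventoryGateway inventory) { this.inventory = inventory; }
                boolean accept(String sku, int quantity) {
                    return inventory.stockLevel(sku) >= quantity;
                }
            }

            public class OrderStorytest {
                @Test
                public void rejectsAnOrderTheOtherSystemCannotFill() {
                    FakeInventoryGateway inventory = new FakeInventoryGateway();
                    inventory.setStock("SKU-1", 2);
                    OrderService orders = new OrderService(inventory);
                    assertFalse(orders.accept("SKU-1", 3));
                    assertTrue(orders.accept("SKU-1", 2));
                }
            }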

        So I find the usual distinctions between unit tests, end-to-end
        tests, and X, Y, Z tests unhelpful. As it's too late and too hard to
        refactor the terminology, I try (unsuccessfully) to avoid it.

        I prefer Brian Marick's distinction between customer-facing and
        programmer-facing tests.

        Cheers, Rick