
Re: [XP] Re: Unit vs Integration TDD

  • Michael Feathers
    Message 1 of 36, Nov 3, 2008
      J. B. Rainsberger wrote:
      > On Fri, Oct 31, 2008 at 4:19 PM, Matt <maswaffer@...> wrote:
      >
      >> There are a number of components of quality, two of which are "it does
      >> what we asked it to do" and "it doesn't break".
      >>
      >> Integration or specification tests seem to do a pretty good job of
      >> making sure the software "does what we asked it to do".
      >>
      >> The problem that I see is that it is difficult for these tests to make
      >> sure "it doesn't break" given the number of permutations at higher
      >> levels.
      >>
      >> The response I usually get from BDD guys is "who cares about testing all
      >> the permutations?" and my response usually is "my boss... since he gets
      >> the irate calls about bugs". :)
      >
      > Matt, I'd never thought about this distinction before. I really,
      > really like it. This provides another Competent-level rule on whether
      > to write an isolated object test or an integration test. Tinier tests
      > work better for exhaustive testing to verify the objects don't break,
      > because larger tests run the risk of a combinatoric explosion. Tinier
      > tests can cause trouble for specification testing, because one could
      > lose the forest for the trees. As a result, make tests focused enough
      > (but not too focused) to clarify intended behavior, but make them as
      > focused as possible to verify sensible behavior on the unexpected
      > paths.

      It's hard to articulate a good rule on testing at various levels. I
      tend to push very hard at the isolation level because I think we're
      better off when we really understand the base level... the pieces that
      everything else is built of. When we encounter combinatoric complexity
      at the higher levels, there's nothing that helps more than a good
      understanding of the base level.
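The combinatoric point above can be sketched with numbers (the pipeline and case counts here are hypothetical, not from the thread): when a system is built from independent pieces, isolated tests grow additively with the pieces, while end-to-end coverage of the same input classes grows multiplicatively.

```python
# Hypothetical sketch: a pipeline of three independent steps, each with
# four distinct input classes that need to be exercised.

STEP_CASES = 4   # input classes per step (assumed for illustration)
STEPS = 3        # number of steps in the pipeline (assumed)

# Isolated tests: cover each step's classes on its own -> additive.
isolated = STEP_CASES * STEPS        # 4 + 4 + 4 = 12 tests

# End-to-end only: every combination of classes -> multiplicative.
end_to_end = STEP_CASES ** STEPS     # 4 * 4 * 4 = 64 tests

print(isolated, end_to_end)          # prints: 12 64
```

With more steps or more cases per step, the gap widens quickly, which is why exhaustive checking is usually only practical at the isolated level.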

      Whenever I get into discussions about "how much testing we need" at a
      particular level, I try to reframe the discussion in terms of what the
      team/individual needs to know. To me, testing is an avenue for posing
      questions. The number of questions you need depends upon you and the
      code. Personally, I like the 'Jiminy Cricket' Rule: "Let your
      conscience be your guide." There's no escaping judgment.

      Michael Feathers
    • Rick Mugridge
      Message 36 of 36, Nov 8, 2008
        J. B. Rainsberger wrote:
        > I suggest programmers focus on isolated object tests and testers focus
        > on integration and end-to-end tests. If they do that, then they'll
        > come together pretty well at some point.
        > ----

        I believe that it's not the scale of the test that should determine who
        writes it.

        It's instead whether the test expresses stuff that's part of the
        problem space or part of the solution space. Of course, where that
        boundary sits is critically dependent on the project and who is
        involved. And it changes as the problem, and solution, are better
        understood.

        And there can be several layers, with a solution space at one level
        being a problem space at another. So, for example, I'm happy to use
        storytests for specifying the technical details of communication with
        another system that is managed by another team. And I'm happy to have
        some storytests that mock out that other system so that we can use
        additive "specification"/testing rather than multiplicative across the
        systems. As always, we still need some end-to-end to ensure it's all
        wired together correctly and that failure modes across them are managed
        correctly.
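The "additive rather than multiplicative" idea can be sketched with a fake standing in for the other team's system (all names here are hypothetical, invented for illustration): storytests run against the fake, so our system's behaviors don't have to be tested in combination with every behavior of the remote system.

```python
# Hypothetical fake of a remote system managed by another team.
class FakeInventoryService:
    """Test double for the other system; its protocol is assumed."""

    def __init__(self, stock):
        self.stock = stock

    def reserve(self, item, qty):
        # Succeed only if enough stock remains, then deduct it.
        if self.stock.get(item, 0) >= qty:
            self.stock[item] -= qty
            return True
        return False


def place_order(service, item, qty):
    """Our side of the boundary: confirm only if the reservation succeeds."""
    return "confirmed" if service.reserve(item, qty) else "rejected"


# Storytest against the fake: exercises our behavior without multiplying
# across the remote system's own cases. A separate, smaller set of
# end-to-end tests still checks the real wiring and failure modes.
fake = FakeInventoryService({"widget": 5})
print(place_order(fake, "widget", 3))   # prints: confirmed
print(place_order(fake, "widget", 9))   # prints: rejected
```

The contract with the real system is then specified once, at the boundary, rather than re-verified inside every storytest.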

        So I find the usual distinctions between unit tests and end-to-end tests
        and X, Y, Z tests to be unhelpful. As it's too late and too hard to
        refactor the terminology, I try (unsuccessfully) to avoid it.

        I prefer Brian Marick's distinction between customer-facing and
        programmer-facing tests.

        Cheers, Rick