
RE: [TDD] kind of tests

  • Charlie Poole
    Message 1 of 74 , Jul 7, 2009
      Hi Carlos

      > I would like to draw a picture of the different kind of
      > tests but I am a bit confused with definitions.
      >
      > According to Steve and Nat (www.mockobjects.com/book)
      >         Acceptance
      >                  does the whole system work?
      >         Integration
      >                 does our code work against code we can't change?
      >         Unit
      >                 do our objects do the right thing, are they
      > convenient to work with?

      That's certainly a valid breakdown, but I'm pretty sure that
      Steve and Nat wouldn't say it's the only breakdown.

      Acceptance: I've done acceptance tests at lower levels than
      whole system. The notion that acceptance applies only at the
      system level is a bit old-fashioned - it goes back to when
      we would work in isolation from the customer, finish the
      whole thing and then present it for acceptance.

      If all acceptance tests were system tests, then it would be
      hard for XP projects to have acceptance tests for each story. :-)

      Integration: In the old days, integration took place after
      development, so integration often meant making sure your
      code worked with that of other teams or even other people
      on your team. Now - in agile teams anyway - we integrate
      continuously, so it's tempting to suggest that the only
      thing needing integration is third-party code. However,
      I think it's more useful to refer to tests that use many
      objects as integration tests, particularly if they
      cross module lines - whatever that means in the
      language you are using. For practical purposes, I'm
      usually willing to allow tests of one key object and
      one or two helper objects to be called unit tests. Beyond
      that, it's integration and goes into a separate suite.
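      Charlie's rule of thumb above can be sketched with a small example
      (a hedged illustration only; Order, LineItem, and Money are
      hypothetical names, and Python's unittest stands in for whatever
      xUnit framework you use). The test exercises one key object plus
      two small helpers, so under this convention it still counts as a
      unit test:

```python
import unittest

# Hypothetical domain objects: Order is the key object under test;
# Money and LineItem are the "one or two helper objects".
class Money:
    def __init__(self, amount):
        self.amount = amount

    def __add__(self, other):
        return Money(self.amount + other.amount)

class LineItem:
    def __init__(self, price):
        self.price = price

class Order:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

    def total(self):
        # Sum the prices of all line items, starting from zero.
        total = Money(0)
        for item in self.items:
            total = total + item.price
        return total

class OrderTest(unittest.TestCase):
    # One key object (Order) plus two helpers (LineItem, Money):
    # still a unit test under Charlie's convention.
    def test_total_sums_line_items(self):
        order = Order()
        order.add(LineItem(Money(10)))
        order.add(LineItem(Money(5)))
        self.assertEqual(order.total().amount, 15)
```

      A test that pulled in many more collaborators, or crossed a module
      boundary, would instead go into the separate integration suite.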

      > Where are functional tests in those levels of testing?

      In a sense, they are all functional tests. They test
      that the system matches some specification, at some
      level of detail. Traditionally, functional tests tie
      directly to functional requirements, so I'd tie them
      to whole stories in XP. I would not expect unit tests
      to do this. Personally, I see no practical difference
      between acceptance tests and functional tests in the
      projects I work on.

      > Adam Sroka wrote something that seems to be different:
      >
      > "Traditionally, "functional test" is a synonym for "system
      > test." It refers to testing the entire system in the context
      > of that system's functional requirements. A "unit test"
      > tests a unit in isolation. An "integration test" tests more
      > than one unit in combination, but not necessarily the entire
      > system. A system/functional test is a special case of
      > integration testing where the combination of units being
      > integrated encompasses at least one route through the entire system.
      > None of these definitions considers who owns the components
      > being tested."

      When I read this I agreed with Adam, because I assumed that
      by "traditionally" he meant circa 1970. If we are trying to
      craft definitions for today, I don't agree with him. :-)

      When I first started working in XP, I expended a lot of
      effort trying to reconcile the terms from older testing
      literature with our new terms. After a few years, I stopped
      because it didn't seem to have any more use.

      > According to Adam, an integration test does not mean we
      > can't change the code, it is just a system test.

      That's not what your quotation above says. In fact, I
      read "not necessarily the entire system." :-)

      >
      > Eventually the picture I've got in my mind is:
      >  Developer tests:

      You haven't defined developer tests, so I will:

      Tests specified by a developer. They will also be written by
      a developer, but that's not the essential thing. They are
      used to make sure the system works as the *developer* expects.

      >        - Unit tests :
      >                  Isolated, atomic, innocuous: exercised with xUnit
      >        - Integration tests
      >                 Isolated tests that might change the state
      > of the system, e.g.: saving into a database, writing a file...
      >                 An integration test does not represent a
      > functional requirement as is.
      >                 Can be written for xUnit. They check the
      > integration of our code with a third-party tool or with the
      > different layers of our own code,
      >                 e.g.: the business logic layer requires the
      > data access layer
      >        - Functional tests (also known as System tests):
      >                 A test that exercises part of the system as
      > a whole, some functional requirement. It might change the
      > state of the system.

      Your second-level breakdown is based on two different criteria.
      * Unit versus integration relates to the scope of the test
      * Functional relates to the purpose of the test

      I realize that you have equated functional with system, but
      this too confounds scope with purpose.

      While all tests verify that something functions in a particular
      way, I find it most handy to reserve the term functional for
      story-tests - if I am going to use it at all. In practice,
      I only use it when talking to folks who talk about functional
      requirements rather than stories. :-)

      >   Product Owner tests:
      >        - Acceptance tests:

      As an XPer, I tend to call these Customer Tests. I don't see
      a hierarchy here. That is, I think "Customer" and "Acceptance"
      are the same thing. I find that "Acceptance" works poorly in
      many companies because it is taken to mean that the customer
      must accept the product once the "acceptance" test passes.
      It's especially hard to use the term in safety-critical
      environments where many levels of review take place even
      after all the "acceptance" tests pass.

      In my opinion, the most important distinction is between
      tests of developer intent and tests of customer intent.

      Within the general category of developer tests, the next
      logical breakdown is one of scope: am I testing one object,
      a few, an entire module, a system?
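      To make the scope axis concrete, here is a hedged sketch (FileStore
      and GreetingService are hypothetical names): a test that crosses a
      module line by wiring a service to real file storage. It is wider
      in scope than a single-object test, so by the convention above it
      belongs in a separate integration suite:

```python
import os
import tempfile
import unittest

class FileStore:
    """Trivial persistence module: reads and writes one text file."""
    def __init__(self, path):
        self.path = path

    def save(self, text):
        with open(self.path, "w") as f:
            f.write(text)

    def load(self):
        with open(self.path) as f:
            return f.read()

class GreetingService:
    """Business-logic module that depends on the persistence module."""
    def __init__(self, store):
        self.store = store

    def record_greeting(self, name):
        self.store.save("Hello, " + name)

class GreetingIntegrationTest(unittest.TestCase):
    # Crosses the module line between business logic and storage,
    # and touches the real filesystem: an integration test.
    def test_greeting_is_persisted(self):
        path = os.path.join(tempfile.mkdtemp(), "greeting.txt")
        store = FileStore(path)
        GreetingService(store).record_greeting("Carlos")
        self.assertEqual(store.load(), "Hello, Carlos")
```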

      >                 Functional tests whose input and output can
      > be validated by a non-technical person, the product owner.
      >
      > Would you agree with this?
      >
      > Why do I need to be that precise with these definitions?
      > I am writing a book on TDD in Spanish and would like to be
      > precise. To me, the fact that I am writing a test and I
      > can't tell what kind of test it is, is a code smell, either
      > in the test or in the SUT. So I consider that making clear
      > the type of tests is important.

      If you don't know why you are writing a test, that's
      definitely a process smell. If you don't know what
      various communities of people might call it, that
      seems to be less of a problem.

      IMO, one of the best things you can tell your readers is
      that they will meet many different uses of various terms
      and that they should try to understand but not be
      confused by them.


      Charlie
      > Hopefully the book will be ready before the end of 2009
      >
      > Thanks :-)
      >
      >
    • Adam Sroka
      Message 74 of 74 , Jul 28, 2009
        On Tue, Jul 28, 2009 at 2:03 AM, Nat Pryce<nat.pryce@...> wrote:
        >
        >
        > 2009/7/28 Adam Sroka <adam.sroka@...>:
        >
        >>
        >>
        >> On Mon, Jul 27, 2009 at 5:21 PM, Nat Pryce<nat.pryce@...> wrote:
        >>>
        >>>
        >>> 2009/7/22 Adam Sroka <adam.sroka@...>:
        >>>> 2) The notion that developers care more about how we test than for
        >>>> what we test. I don't think that this is correct. I, for one, care
        >>>> more about what is being tested than how.
        >>>
        >>> I didn't say that or mean to imply that. It would be ridiculous if
        >>> developers cared more about how their tests worked than what they were
        >>> actually testing!
        >>>
        >>
        >> I won't presume to know what you meant. But, you *did* say that
        >> customers care about what was tested and developers care about how it
        >> was tested and the two are orthogonal (Which implies mutual
        >> exclusivity... otherwise it couldn't be "orthogonal.")
        >
        > We're rapidly spiraling into pedantry, but that implication does not hold.
        >

        Perhaps, but I still haven't quite grokked the distinction you are
        making. So, perhaps it's more about thickness than pedantry ;-)

        > Given that customers care about what is tested and do not care about
        > how it is tested, the fact that developers care about how it is tested
        > does not imply that developers do not care about what is
        > tested.
        >

        Okay, but that doesn't sound "orthogonal" to me. There is a shared
        goal (Specifying "customer" level behavior) and a separate goal that
        the customer generally doesn't concern herself with (Verifying
        technical details.) But, we certainly "care" about both, and I'm not
        convinced that verifying technical details is entirely independent -
        there should always be a business purpose underlying every technical
        decision.

        > The aspects of "what" and "how" are orthogonal. I can write a test
        > that expresses system behaviour (the what) and implement the test to
        > drive the system in different ways (the how).
        >
        > But I, as a developer, *care* about both concerns, even though I can
        > consider them independently.
        >

        I see "what" as "what the customer asked for" and "how" as "the
        simplest thing that could possibly work." The two aren't independent.
        The former drives the latter.