
[XP] Re: Is it better to have bad tests than no tests? Understanding growth in a project

  • marty.nelson
    Message 1 of 14, Mar 1, 2008
      --- In extremeprogramming@yahoogroups.com, "Daniel Pupek" <dan@...>
      wrote:
      >
      > Most of the time, I prefer to see tests as a design tool. But I
      > have to admit that I do like tests that will tell me when something
      > has changed (broken?). It's a nice side effect.
      >
      > On Sat, Mar 1, 2008 at 5:07 AM, marty.nelson <noslenytram@...> wrote:
      > > I tend to document via test any assumptions on arbitrary values
      > > or behaviors of framework or third-party components, specifically
      > > for the reason that I want to know if they change. That way I'll
      > > know right away if something goes awry.

      I also like to do small- to medium-scale research spikes and record
      the results as assumptions. For example, just yesterday I was
      writing a method that needed to use .NET reflection to get the type
      calling my method. It turns out that I needed to get the stack frame
      and skip so many entries to get to the caller. I started by writing
      a test that looped from 0 to 5, asserting that I found a match to
      the type of my test. When I got an answer, I changed the test to use
      that value.
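
      A minimal sketch of that spike, assuming NUnit and inventing the
      names (CallerInfo, GetCallingType, and the fixture class) for
      illustration:

      using System;
      using System.Diagnostics;
      using NUnit.Framework;

      public static class CallerInfo
      {
          // Hypothetical method under test: walk up the stack to find
          // the type that called us.
          public static Type GetCallingType(int skipFrames)
          {
              var method = new StackFrame(skipFrames).GetMethod();
              return method == null ? null : method.DeclaringType;
          }
      }

      [TestFixture]
      public class CallingTypeSpikeTest
      {
          [Test]
          public void RecordsHowManyFramesToSkip()
          {
              int foundAt = -1;
              for (int i = 0; i <= 5; i++) // the original research loop
              {
                  if (CallerInfo.GetCallingType(i) == typeof(CallingTypeSpikeTest))
                  {
                      foundAt = i;
                      break;
                  }
              }
              // Once the spike gave an answer (1 here: skip GetCallingType's
              // own frame), the loop was replaced by a direct assertion, so
              // the test documents the assumption and fails if the runtime
              // ever changes the frame layout.
              Assert.AreEqual(1, foundAt);
          }
      }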
    • Paul Campbell
      Message 2 of 14, Mar 7, 2008
        --- In extremeprogramming@yahoogroups.com, Michael Feathers
        <mfeathers@...> wrote:
        >
        > Ola Ellnestam wrote:
        > > Paul Campbell wrote:
        > >
        > >> --- In extremeprogramming@yahoogroups.com, "Pat Maddox" <pergesu@>
        > >> wrote:
        > >>
        > >>> A friend on another list asked "are bad tests better than no
        > >>> tests?"
        > >>
        > >> I've seen plenty of tests that are worse than useless. A common
        > >> theme in such tests is diffing (rather than interpreting) textual
        > >> output that has semantic content. Common examples include diffing
        > >> HTML, diffing XML, and diffing SQL queries. The problem with such
        > >> tests is that in general they only tell you when something has
        > >> changed - not whether it works, and in my opinion that makes them
        > >> worse than useless in many cases.
        > >
        > > I categorize these kinds of tests under "tests holding the code
        > > hostage".
        > >
        > > As said, they are beyond useless. They stop you from making
        > > improvements and are too coupled to the implementation, not
        > > focusing on behavior at all.
        > >
        > The hardest part of dealing with bad tests is knowing when to let go.
        > We've conditioned ourselves to think "more tests: good", "fewer
        > tests: bad." If tests are in the way, it's okay to delete them even
        > though it feels wrong.

        I definitely agree with that; indeed, TDD is so ingrained in some
        environments now that persuading people to do *less* testing is a
        real problem :-)

        Paul.
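
        As a hedged sketch of the diff-versus-interpret distinction raised
        above (the XML shape, the RenderOrderXml method, and the fixture
        are invented for illustration, assuming NUnit and System.Xml.Linq):

        using System.Xml.Linq;
        using NUnit.Framework;

        [TestFixture]
        public class OrderXmlTest
        {
            // Hypothetical system under test.
            private static string RenderOrderXml()
            {
                return "<order><total>42.00</total></order>";
            }

            [Test]
            public void DiffStyle_BreaksOnAnyTextualChange()
            {
                // Fails on any whitespace, element-order, or formatting
                // change, and says nothing about whether the output is
                // correct - the "hostage" pattern described above.
                Assert.AreEqual("<order><total>42.00</total></order>",
                                RenderOrderXml());
            }

            [Test]
            public void SemanticStyle_AssertsOnMeaning()
            {
                // Interprets the output and asserts on its content, so it
                // only fails when the behavior actually changes.
                XDocument doc = XDocument.Parse(RenderOrderXml());
                Assert.AreEqual(42.00m, (decimal)doc.Root.Element("total"));
            }
        }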
      • Paul Campbell
        Message 3 of 14, Mar 7, 2008
          --- In extremeprogramming@yahoogroups.com, "Kent Beck" <kentb@...> wrote:
          >
          > I wouldn't want tests that didn't help either. However, I work
          > with a team that has extensive tests that diff PDF or HTML. They
          > are valuable as change detectors. If one breaks and they didn't
          > expect it, they know to be careful. However, the team have made
          > it easy to say, "That change is ok, this test should pass now."
          > I suppose the next step would be to do a root cause analysis
          > whenever one of the "hostage" tests breaks to see what failing
          > unit tests they should have written.

          Of course such tests sometimes have their uses, but all too often
          people write such tests because "I must have a unit test for
          everything" rather than because they are genuinely useful.

          A good "acid test" for such a test is: can I easily change the
          test first to predict the new expected outcome, or is the new
          output merely copied into the test expectation after the fact? I
          view the latter as a strong indicator of a test that I should
          simply delete.

          We must remember that tests create drag on the code base and reduce
          agility.

          Paul.
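
          To make the "acid test" concrete, a minimal sketch (the Invoice
          class and the tax-rate change are invented for illustration,
          assuming NUnit):

          using NUnit.Framework;

          // Hypothetical system under test; suppose the tax rate was just
          // changed from 1.20m to 1.25m.
          public class Invoice
          {
              private readonly decimal net;
              public Invoice(decimal net) { this.net = net; }
              public decimal TotalWithTax() { return net * 1.25m; }
          }

          [TestFixture]
          public class InvoiceTest
          {
              [Test]
              public void TotalIncludesTax()
              {
                  // The expectation was updated from 120.00m to 125.00m
                  // *before* the code change, because the new outcome could
                  // be predicted. A test whose expectation can only be
                  // refreshed by pasting in whatever the code now produces
                  // fails the acid test above.
                  Assert.AreEqual(125.00m, new Invoice(100.00m).TotalWithTax());
              }
          }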
        • Paul Campbell
          Message 4 of 14, Mar 12, 2008
            --- In extremeprogramming@yahoogroups.com, "Paul Campbell" <yahoo@...>
            wrote:
            >
            > --- In extremeprogramming@yahoogroups.com, "Kent Beck" <kentb@> wrote:
            > >
            > > I wouldn't want tests that didn't help either. However, I
            > > work with a team that has extensive tests that diff PDF or
            > > HTML. They are valuable as change detectors. If one breaks
            > > and they didn't expect it, they know to be careful. However,
            > > the team have made it easy to say, "That change is ok, this
            > > test should pass now." I suppose the next step would be to do
            > > a root cause analysis whenever one of the "hostage" tests
            > > breaks to see what failing unit tests they should have
            > > written.
            >
            > Of course such tests sometimes have their uses, but all too
            > often people write such tests because "I must have a unit test
            > for everything" rather than because they are genuinely useful.
            >
            > A good "acid test" for such a test is: can I easily change the
            > test first to predict the new expected outcome, or is the new
            > output merely copied into the test expectation after the fact?
            > I view the latter as a strong indicator of a test that I
            > should simply delete.
            >
            > We must remember that tests create drag on the code base and
            > reduce agility.
            >
            > Paul.
            >

            Of course I meant "*bad* tests create drag ..." :-).