
RE: [XP] Testing non-deterministic methods

  • kentb
    Message 1 of 34 , Dec 2, 2008
      Rodolfo,

      One more refinement on what you are doing. If you want to unit test code
      that uses random numbers, you can make the random number generator a
      parameter of the code. For unit testing, you can pass in a dummy generator
      that always produces the same sequence of values. In production, you use a
      real generator.
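[A minimal Python sketch of Kent's suggestion; the class and method names here are illustrative, not from the thread:]

```python
import random

class Shuffler(object):
    """Code under test: takes its random source as a constructor parameter."""
    def __init__(self, rng=None):
        # In production the default -- a real generator -- is used.
        self.rng = rng if rng is not None else random.Random()

    def pick(self, items):
        return items[self.rng.randrange(len(items))]

class FixedRandom(object):
    """Dummy generator that replays a fixed sequence of values."""
    def __init__(self, values):
        self.values = list(values)

    def randrange(self, n):
        return self.values.pop(0) % n

# A unit test passes the dummy, so the result is fully deterministic:
shuffler = Shuffler(rng=FixedRandom([2, 0, 1]))
assert shuffler.pick(['a', 'b', 'c']) == 'c'
```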

      Cheers,

      Kent Beck
      Three Rivers Institute

      _____

      From: extremeprogramming@yahoogroups.com
      [mailto:extremeprogramming@yahoogroups.com] On Behalf Of Rodolfo Carvalho
      Sent: Monday, December 01, 2008 5:29 PM
      To: extremeprogramming@yahoogroups.com
      Subject: Re: [XP] Testing non-deterministic methods



      Thanks for your words Kent.
      I more or less managed to do what I wanted, though the tests may be somewhat lacking.

      The tests:

      import unittest
      from StringIO import StringIO
      # GeometricGraph is the class under test (not shown here)

      class TSPTests(unittest.TestCase):
          def testTSPforOneVertexGraph(self):
              gg = GeometricGraph(StringIO('1\n0 0'))
              self.assertEqual(gg.tsp(), ([1], 0))

          def testTSPforTwoVertexGraph(self):
              gg = GeometricGraph(StringIO('2\n0 0\n1 1'))
              path, cost = gg.tsp()
              self.assert_(path in ([1, 2, 1], [2, 1, 2]))
              self.assertAlmostEqual(cost, 2.8284, 4)

          def testTSPforFourVertexGraph(self):
              gg = GeometricGraph(StringIO('4\n1 1\n2 2\n1 0\n2 1'))
              path, cost = gg.tsp()
              self.assert_(len(path) == 5)
              self.assertEqual(path[0], path[-1])
              self.assertAlmostEqual(cost, 4.8284, 4)

          def testTSPforHorizontalLineGraph(self):
              n = 100
              gg = GeometricGraph(StringIO(
                  '%d\n%s' % (n, ''.join('%d 0\n' % x for x in range(n)))))
              path, cost = gg.tsp()
              self.assert_(len(path) == n + 1)
              self.assertEqual(path[0], path[-1])
              self.assertAlmostEqual(cost, 2 * (n - 1), 4)

      (suggestions and comments are welcome)
      The actual implementation of *GeometricGraph.tsp* takes the best path from
      a mix of 1000 repetitions of a random shuffle of the path and a greedy
      algorithm run from every vertex, always moving to the closest unvisited
      one. Neither of those two approaches can really be said to be driven by
      the tests. The tests only assure the very simple cases, plus one case
      where a straight line should be optimal.

      Cheers,

      Rodolfo Carvalho

      On Mon, Dec 1, 2008 at 13:07, kentb <kentb@earthlink.net> wrote:

      > Victor,
      >
      > I read Rodolfo's message differently. The question I heard was, "How do you
      > write assertions when multiple answers are acceptable?" It is exacerbated in
      > his case because calculating by hand for a realistic-sized problem is
      > difficult.
      >
      > The techniques I've used in cases like this (some of which were mentioned
      > already) are:
      > * Work out some small problems by hand--"Given this simplified map, this
      > heuristic should produce Chicago-London-Katmandu as the shortest path".
      > These tests can verify that a heuristic behaves as expected. However, they
      > tend to produce false failures because in such algorithmic development you
      > often tweak heuristics.
      > * Make vaguer assertions about large problems--"Given this realistic map,
      > the algorithm should produce a path of less than 10,000 km". I think this
      > was what Rodolfo was proposing, but he was concerned that they wouldn't drive
      > development in the same way ordinary TDD tests do. They also tend to run
      > slowly (which I suppose is part of their lack of drive).
      > * Run an exhaustive algorithm in parallel with the optimized algorithm on
      > a small data set and compare the results, either for an exact match or a
      > ratio.
      > * Compare the output of versions of the optimized algorithm to make sure,
      > for example, that the answers never get worse.
      >
      > All in all, heuristic algorithm development is not the sweet spot for TDD,
      > although tests can still support such development.
      >
      > Cheers,
      >
      > Kent
      >
      >
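[Kent's third technique -- running an exhaustive reference in parallel on a small data set -- could look something like this in Python; exhaustive_tsp is an illustrative sketch, not code from the thread:]

```python
import itertools
import math

def tour_length(points, path):
    """Total length of a closed tour given as a list of vertex indices."""
    return sum(math.dist(points[path[i]], points[path[i + 1]])
               for i in range(len(path) - 1))

def exhaustive_tsp(points):
    """Reference answer: try every tour starting at vertex 0 (small inputs only)."""
    n = len(points)
    best_path, best_cost = None, float('inf')
    for perm in itertools.permutations(range(1, n)):
        path = [0] + list(perm) + [0]
        cost = tour_length(points, path)
        if cost < best_cost:
            best_path, best_cost = path, cost
    return best_path, best_cost

# A test can then assert an exact match against the heuristic's answer,
# or a ratio bound, e.g.:
#   optimum = exhaustive_tsp(points)[1]
#   self.assert_(heuristic_cost <= 1.1 * optimum)
```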

      [Non-text portions of this message have been removed]






    • Keith Ray
      Message 34 of 34 , Dec 3, 2008
        FWIW, I think one of the sources of bugs in programs relying on
        random numbers is when the numbers are "untypical" but theoretically
        possible -- e.g. all the "random" numbers are zero, or all are a large
        number, or all are even, or all are prime, etc. It's not that
        the numbers are buggy; it's the logic that assumes these untypical
        sequences won't happen.

        On Wed, Dec 3, 2008 at 2:07 PM, Julian Hall <jules@...> wrote:
        > --- In extremeprogramming@yahoogroups.com, "kentb" <kentb@...> wrote:
        >> [snip: the four techniques Kent listed, quoted in full above]
        >
        >
        > As this is something I've been working on recently, I thought I'd
        > share my experiences. The problem domain of my current project is
        > machine learning, specifically neural networks and "simulated
        > annealing" optimization of parameters.
        >
        > Some of the tests I've found useful:
        >
        > * A by-hand calculated example. You can do simple neural networks by
        > hand and work out how an iteration or two should cause the network to
        > be updated.
        > * Tests designed to force the algorithm down specific paths. I've
        > been using implementations of java.util.Random that return a
        > hand-picked sequence that causes the situation I want to test.
        > * "Gold standard" tests: run the algorithm on a particular input with
        > a standard Random seeded to a known value, then check the result
        > manually. If it's acceptable, set up a test that breaks if it changes.
        >
        > A slight variant of this suggestion:
        >
        >> Make vaguer assertions about large problems--"Given this realistic map,
        >> the algorithm should produce a path of less than 10,000 km". I think this
        >> was what Rodolfo was proposing but he was concerned that they wouldn't drive
        >> development in the same way ordinary TDD tests do. They also tend to run
        >> slowly (which I suppose is part of their lack of drive).
        >
        > Because my algorithm is probabilistic and I only expect it to work on
        > 95% of runs, I need to fix the random number generator seed to ensure
        > the test passes every time. If a change breaks it, I have to try a few
        > additional seeds to check that it hasn't just shifted the meaning of the
        > random seed, with the new one being one of the few that doesn't work.
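[Julian's seeded "gold standard" approach can be sketched in Python as well; run_algorithm here is just an illustrative stand-in for the real code under test:]

```python
import random

def run_algorithm(rng, size=5):
    """Illustrative stand-in for the probabilistic algorithm under test."""
    return sorted(rng.random() for _ in range(size))

# Seed the generator, run once, and inspect the result by hand. If it is
# acceptable, pin it: the same seed must reproduce it exactly, so any change
# to the algorithm (or to how the seed is consumed) breaks the test.
first = run_algorithm(random.Random(42))
again = run_algorithm(random.Random(42))
assert first == again  # deterministic given the seed
```

In a real suite the manually inspected result would be stored as a literal in the test rather than recomputed.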
        >
        >



        --
        C. Keith Ray, IXP Coach, Industrial Logic, Inc.
        http://industriallogic.com 866-540-8336 (toll free)
        Groove with our Agile Greatest Hits: http://www.industriallogic.com/elearning/
        http://agilesolutionspace.blogspot.com/