
Re: [XP] Re: [agile-testing] Unit tests in server environments

  • Jared Richardson
    Message 1 of 6, Sep 2, 2006
      On Sep 1, 2006, at 10:02 PM, Arrowood, Paul (ELS-STL) wrote:

      > Thanks for everyone's replies.
      >
      > Jared, to your question below, my "wasting their time" comment
      > referred to time spent investigating why a certain timed test ran
      > longer than normal. Writing and automating the test is not the
      > concern. The question is whether it's prudent to have such timing
      > benchmarks in development/testing environments.
      >
      >
      > I'm beginning to wonder why others don't have this concern (of early
      > identification of a code checkin causing other timed tests to run
      > longer). Are we overthinking this? Is it more prudent to just get
      > your code to the targeted CERT environment (or whatever env is most
      > like Production) ASAP and run standard performance tests there? So
      > far, for us, we've never had that luxury because our CERT boxes are
      > always arriving, installed, and config'd pretty late in the game.
      >
      > Paul
      >


      That makes more sense. :)

      JUnit times every test by default. Ignoring that information seems
      silly to me. On the other hand, not everyone runs their tests on a
      "quiet" machine. I try to run my tests on a dedicated continuous
      integration box using Cruise Control. CC lets you adjust how many
      projects run at the same time (via the system threads setting)... I
      adjust CC to run one project's tests at a time. This means that if a
      given test suite runs slower over time, there's probably a real
      problem to investigate.
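
      If your build doesn't surface those per-test numbers, you can also log
      them yourself with a small base class. This is just a rough, untested
      sketch in JUnit 3 style (the TimedTestCase name is something I made up
      for illustration, not anything built into JUnit):

      import junit.framework.TestCase;

      // Sketch: a base class that logs how long each test took.
      // Subclasses that override setUp/tearDown must call super.
      public abstract class TimedTestCase extends TestCase {
          private long start;

          protected void setUp() throws Exception {
              super.setUp();
              start = System.currentTimeMillis();
          }

          protected void tearDown() throws Exception {
              long elapsed = System.currentTimeMillis() - start;
              System.out.println(getName() + " took " + elapsed + " ms");
              super.tearDown();
          }
      }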

      I helped w/an internal project at one company that read those JUnit
      timings, stored them in a database, and then flagged tests whose
      average times crept upward over time. The hope was that it would spot
      these performance problems early.
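
      The core of it looked roughly like the sketch below. I'm writing this
      from memory, so treat it as an illustration only: the test_times table,
      the JDBC URL, and the 1.5x "slower than average" threshold are
      placeholders I've made up here, and it assumes the Ant-style
      TEST-*.xml report format where each testcase element carries a time
      attribute.

      import java.io.File;
      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;
      import javax.xml.parsers.DocumentBuilderFactory;
      import org.w3c.dom.Document;
      import org.w3c.dom.Element;
      import org.w3c.dom.NodeList;

      // Sketch: read per-test times from one JUnit XML report, compare each
      // against its stored average, and record the new numbers.
      public class TimingRecorder {
          public static void main(String[] args) throws Exception {
              // Placeholder connection; assumes a test_times(test_name, seconds)
              // table already exists in whatever database you point this at.
              Connection db = DriverManager.getConnection(
                      "jdbc:hsqldb:file:timings", "sa", "");
              Document report = DocumentBuilderFactory.newInstance()
                      .newDocumentBuilder().parse(new File(args[0]));
              NodeList cases = report.getElementsByTagName("testcase");

              for (int i = 0; i < cases.getLength(); i++) {
                  Element tc = (Element) cases.item(i);
                  String name = tc.getAttribute("classname") + "."
                          + tc.getAttribute("name");
                  double seconds = Double.parseDouble(tc.getAttribute("time"));

                  // Flag anything running 50% slower than its historical average.
                  PreparedStatement avg = db.prepareStatement(
                          "SELECT AVG(seconds) FROM test_times WHERE test_name = ?");
                  avg.setString(1, name);
                  ResultSet rs = avg.executeQuery();
                  if (rs.next() && rs.getDouble(1) > 0
                          && seconds > rs.getDouble(1) * 1.5) {
                      System.out.println("SLOWER: " + name + " took " + seconds
                              + "s, average " + rs.getDouble(1) + "s");
                  }

                  PreparedStatement insert = db.prepareStatement(
                          "INSERT INTO test_times (test_name, seconds) VALUES (?, ?)");
                  insert.setString(1, name);
                  insert.setDouble(2, seconds);
                  insert.executeUpdate();
              }
              db.close();
          }
      }

      You'd point something like that at each TEST-*.xml after every build
      and watch for the SLOWER lines.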

      As to your comment about it wasting time... if the tests run slower
      in a given environment, you're going to have to fix it before you
      ship, right? You can't ignore the problem forever. Or if you can,
      don't test on that platform. ;) If it matters, it matters during the
      entire product life cycle. If it doesn't matter, then why bother?

      What I'm trying to say is that if a problem exists, then it has to be
      fixed. Developers aren't wasting time by finding the problem early.
      They are finding and fixing the problem.

      Now, here's the Extra Added Bonus For Free. The performance problems
      are due to ~something~ done (or not done) in the code. By the
      developers. If you don't catch the problem early, these bad patterns
      will be coded into your entire system. If you catch them and correct
      them, your team will (hopefully) learn from the experience and not
      continue to code the pattern into the rest of your system.

      For instance, if a certain platform runs really slowly when you do
      operation X one way, but runs respectably when you do it another way,
      find out early and teach the developers to do it the faster way.
      Otherwise the slow code will show up all over your system. If no one
      knows any better, why wouldn't they continue to do things the "bad"
      way?
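
      A made-up example (not tied to any particular platform): say building
      a big report string with += in a loop turns out to be the slow way on
      your box, while appending into one buffer runs fine. Catch that in one
      slow test and you can show the whole team the faster idiom before it
      spreads. The class and method names here are purely illustrative:

      // Hypothetical "operation X" done two ways. The slow version copies
      // the whole string on every pass; the fast one appends into a buffer.
      public class ReportBuilder {
          static String slowJoin(String[] lines) {
              String out = "";
              for (int i = 0; i < lines.length; i++) {
                  out += lines[i] + "\n";   // re-copies everything each time
              }
              return out;
          }

          static String fastJoin(String[] lines) {
              StringBuffer out = new StringBuffer();
              for (int i = 0; i < lines.length; i++) {
                  out.append(lines[i]).append("\n");
              }
              return out.toString();
          }
      }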

      Make sense? Catch it early so that you have a smaller problem to fix.
      Catch it later and you'll have a lot more work to do. "A stitch in
      time saves nine."

      Jared
      http://jaredrichardson.net