Re: [XP] Understanding when

  • Steven Gordon
    Message 1 of 3, Dec 3, 2011
      I saw the following on the refactoring yahoo group recently:

      http://community.devexpress.com/blogs/markmiller/archive/2011/11/29/duplicate-detection-and-consolidation-in-coderush-for-visual-studio.aspx#359730

      I believe it does not apply to C++, a language which was not designed to
      support this kind of analysis.

      The answer to the question "when our unit tests and refactoring are enough"
      is when they allow you to do TDD on the new code you are adding.

      I personally feel queasy about refactoring C++ without good test coverage,
      because refactoring C++ is essentially a manual operation, which can be
      error-prone. Of course, you cannot achieve good unit test coverage without
      refactoring first, so you are seemingly blocked.

      So, one approach that can make sense is to first achieve functional test
      coverage with something like Fit or Cucumber. Then you can refactor much
      more safely, because the functional tests should tell you whenever a
      refactoring breaks the system. Then you can TDD new code (and even
      retroactively unit test some legacy code).
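
      For illustration (my sketch, not something from the thread), here is the kind
      of coarse characterization test that records what the system does today, so
      that later refactoring can be checked against it. It assumes Google Test as
      the harness; generateMonthlyReport() and the data files are made-up stand-ins
      for a legacy entry point and its captured output.

      // Characterization-test sketch (hypothetical example).
      // Assumes Google Test (link against gtest_main); generateMonthlyReport()
      // stands in for the outermost entry point of the legacy system.
      #include <gtest/gtest.h>
      #include <fstream>
      #include <sstream>
      #include <string>

      // Hypothetical legacy entry point, normally declared in an existing header.
      std::string generateMonthlyReport(const std::string& inputCsvPath);

      // Helper: slurp a whole file into a string.
      static std::string readFileToString(const std::string& path) {
          std::ifstream in(path);
          std::ostringstream out;
          out << in.rdbuf();
          return out.str();
      }

      // The "expected" file is captured from the current implementation, not
      // designed: the test pins today's observable behavior rather than
      // specifying new behavior.
      TEST(MonthlyReportCharacterization, KnownInputProducesRecordedOutput) {
          EXPECT_EQ(readFileToString("testdata/march_2011_expected.txt"),
                    generateMonthlyReport("testdata/march_2011.csv"));
      }

      Once a handful of tests like this pass reliably at the system boundary,
      refactoring below that boundary is much less likely to break things silently,
      and TDD can take over for the new code.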

      SteveG

      On Fri, Dec 2, 2011 at 9:53 PM, JulienM <julien@...> wrote:

      >
      >
      > Hi all,
      > Here is the situation.
      > We are adopting TDD on our legacy code base, which has both C++ and C#
      > and used to have very few unit tests...
      > We do not intend to add any new legacy code.
      > So we are trying to systematically use TDD every time we touch the code
      > base to add a new feature or fix a bug.
      > However, the code base very often makes it impossible to use TDD without
      > significant prior refactoring.
      > We are looking for some meaningful measures to see when our unit tests and
      > refactoring are enough (because it can be particularly demanding). There is
      > no magic to it, though.
      > However, we think meaningful metrics that can help are the test
      > coverage/code complexity/code duplication of the newly added/modified code.
      > Are there any tools that can measure this?
      >
      > Comments/ideas are welcome.
      >
      > Thanks,
      > Julien.


    • Steven Smith
      Message 2 of 3, Dec 3, 2011
        If you haven't already, I recommend picking up and reading Working
        Effectively with Legacy Code. For duplicate code detection across any
        language (string-based, not expression-tree-based), check out Atomiq, which
        provides a nice visual representation of the code under analysis and has
        some knobs you can turn to adjust how sensitively it detects duplicate
        code. http://getatomiq.com
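
        For the curious, here is a rough sketch of the kind of string-based matching
        such tools do (my illustration, not Atomiq's actual algorithm): normalize
        each line, hash every window of N consecutive lines, and report any window
        that shows up more than once. The window size is effectively the
        sensitivity knob.

        // Naive string-based duplicate-block finder (illustration only).
        // Usage: dupes <window-size> <file>...
        #include <fstream>
        #include <functional>
        #include <iostream>
        #include <string>
        #include <unordered_map>
        #include <vector>

        // Trim whitespace so indentation differences don't hide duplicates.
        static std::string normalize(const std::string& line) {
            const auto first = line.find_first_not_of(" \t");
            if (first == std::string::npos) return "";
            const auto last = line.find_last_not_of(" \t");
            return line.substr(first, last - first + 1);
        }

        int main(int argc, char** argv) {
            if (argc < 3) {
                std::cerr << "usage: dupes <window-size> <file>...\n";
                return 1;
            }
            const std::size_t window = std::stoul(argv[1]);  // bigger = less sensitive

            // Hash of each window of normalized lines -> the places it was seen.
            std::unordered_map<std::size_t, std::vector<std::string>> seen;

            for (int i = 2; i < argc; ++i) {
                std::ifstream in(argv[i]);
                std::vector<std::string> lines;
                for (std::string line; std::getline(in, line); )
                    lines.push_back(normalize(line));

                for (std::size_t start = 0; start + window <= lines.size(); ++start) {
                    std::string chunk;
                    for (std::size_t k = 0; k < window; ++k)
                        chunk += lines[start + k] + '\n';
                    seen[std::hash<std::string>{}(chunk)]
                        .push_back(std::string(argv[i]) + ":" + std::to_string(start + 1));
                }
            }

            // Any window seen in more than one place is a candidate duplicate block.
            // (A real tool would re-compare the text to rule out hash collisions.)
            for (const auto& entry : seen) {
                if (entry.second.size() < 2) continue;
                std::cout << "possible duplicate block at:";
                for (const auto& place : entry.second) std::cout << ' ' << place;
                std::cout << '\n';
            }
            return 0;
        }

        Raising the window size reports only larger duplicated blocks; lowering it
        flags smaller, noisier matches, which is the same trade-off those
        sensitivity knobs expose.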

        Cheers,
        Steve

        --
        Software Craftsmanship 2012 AntiPatterns Wall Calendar -
        http://bit.ly/SC_2012


        --
        Steve Smith
        http://SteveSmithBlog.com/
        http://twitter.com/ardalis

