Test driven development

  • Massimo Manca
    Message 1 of 3, Mar 2, 2008
      I was reading http://en.wikipedia.org/wiki/Software_metric and I found this:
      "One school of thought on metrics design suggests that metrics
      communicate the real intention behind the goal, and that people should
      do exactly what the metric tells them to do. This is a spin-off of Test
      driven development
      <http://en.wikipedia.org/wiki/Test_driven_development>, where developers
      are encouraged to write the code specifically to pass the test. If
      that’s the wrong code, then they wrote the wrong test. In the metrics
      design process, gaming is a useful tool to test metrics and help make
      them more robust, as well as for helping teams to more clearly and
      effectively articulate their real goals."
      What do you think?
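As an aside, the TDD idea the quoted passage leans on can be sketched concretely. This example is mine, not from the thread, and `leap_year` is a made-up function: the test states the intent first, and the code exists only to make that test pass, so wrong behavior points back at a wrongly stated test.

```python
# Illustrative sketch (not from the thread): the test states the intent,
# and the code below was written only to make that test pass.

def leap_year(year):
    # Just enough logic to satisfy test_leap_year below; if this behavior
    # is wrong, the test (the stated intent) is what should be questioned.
    if year % 400 == 0:
        return True
    if year % 100 == 0:
        return False
    return year % 4 == 0

def test_leap_year():
    # The "specification" in TDD terms: the code exists to make these pass.
    assert leap_year(2000) is True    # divisible by 400
    assert leap_year(1900) is False   # divisible by 100 but not 400
    assert leap_year(2004) is True    # divisible by 4
    assert leap_year(2001) is False   # not divisible by 4

test_leap_year()
```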


    • Ron Jeffries
      Message 2 of 3, Mar 2, 2008
        Hello, Massimo. On Sunday, March 2, 2008, at 5:36:10 PM, you
        wrote:

        > I was reading http://en.wikipedia.org/wiki/Software_metric and I found this:
        > "One school of thought on metrics design suggests that metrics
        > communicate the real intention behind the goal, and that people should
        > do exactly what the metric tells them to do. This is a spin-off of Test
        > driven development
        > <http://en.wikipedia.org/wiki/Test_driven_development>, where developers
        > are encouraged to write the code specifically to pass the test. If
        > that’s the wrong code, then they wrote the wrong test. In the metrics
        > design process, gaming is a useful tool to test metrics and help make
        > them more robust, as well as for helping teams to more clearly and
        > effectively articulate their real goals."
        > What do you think?

        Seems at best confused, to me.

        Ron Jeffries
        www.XProgramming.com
        Ron gave me a good suggestion once. -- Carlton (banshee858)
      • Steven Gordon
        Message 3 of 3, Mar 2, 2008
          On Sun, Mar 2, 2008 at 3:36 PM, Massimo Manca <micronpn@...> wrote:
          >
          > I was reading http://en.wikipedia.org/wiki/Software_metric and I found this:
          > "One school of thought on metrics design suggests that metrics
          > communicate the real intention behind the goal, and that people should
          > do exactly what the metric tells them to do. This is a spin-off of Test
          > driven development
          > <http://en.wikipedia.org/wiki/Test_driven_development>, where developers
          > are encouraged to write the code specifically to pass the test. If
          > that's the wrong code, then they wrote the wrong test. In the metrics
          > design process, gaming is a useful tool to test metrics and help make
          > them more robust, as well as for helping teams to more clearly and
          > effectively articulate their real goals."
          > What do you think?
          >

          The problem is that there is no single metric (or even small
          collection of metrics) whose optimization would by itself result in
          the team behavior that you want. Monitoring some simple metrics can
          help, but is not sufficient.
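Steven's point about metric optimization can be illustrated with a hypothetical sketch (the names and both tests are mine, not from the thread): two test suites can score identically on a line-coverage metric while differing completely in the behavior they actually guarantee.

```python
# Hypothetical illustration (not from the thread): two tests with identical
# line coverage -- the metric -- but very different value to the team.

def apply_discount(price, rate):
    """Return price reduced by the given fractional rate."""
    return price * (1 - rate)

def test_gamed():
    # "Games" a line-coverage metric: every line of apply_discount runs,
    # so coverage reports 100%, but nothing is verified -- a regression
    # in apply_discount would never be caught.
    apply_discount(100.0, 0.2)

def test_meaningful():
    # Identical coverage, but this one actually checks the behavior.
    assert apply_discount(100.0, 0.2) == 80.0

test_gamed()
test_meaningful()
```

Optimizing the coverage number rewards both tests equally, which is exactly why no such metric, by itself, produces the behavior the team wants.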

          Clearly, we want a team to deliver as much value as it can in each
          iteration while introducing the least amount of defects and technical
          debt. While there are a variety of testing and code quality metrics,
          none of them can accurately predict today how many defects and how
          much technical debt will occur in the future due to the code we wrote
          today. Furthermore, any extra effort we put into trying to derive
          metrics that come closer and closer to predicting future defects and
          technical debt is potentially pure waste. It would be much more
          productive to apply that same effort directly to delivering more value
          and reducing defects and technical debt.

          When the team runs into defects and technical debt in the future, the
          team does need to reflect on how those problems occurred and decide on
          ways to improve their own process to reduce those problems in the
          future. However, the team cannot know if those changes were really
          effective until some time in the future. No metrics will substitute
          for the team observing itself, being reflective and making
          improvements over time. Overdependence on some metrics will make the
          team less sensitive to self-reflection, because the magic metric would
          be telling it everything it needs to know.

          Steve