RE: [XP] Metrics to Prove XP Works

  • rmyers@cysive.com
    Message 1 of 20 , Dec 1, 2000
      Bill,

      The _eXtreme Programming Installed_ book by Ron Jeffries et al contains some
      simple metrics for measuring the progress of a project. That may give some
      hints for comparing methodologies.

      IMHO, lines-of-code is truly one of the most horrible measurements of
      productivity ever conceived. (Stronger statements provided upon request.)

      Comparing XP productivity to other methodologies, you could perhaps measure:

      * Stories (or use case scenarios) completed per engineer per month? And,
      yes, to be fair, you do have to count all the months of writing and refining
      use cases, sequence diagrams, et cetera. (I've heard more than one customer
      make a comment like "Why am I paying all this money for a document I didn't
      ask for?!" Hey, if the *customers* are changing their tune, it's probably
      time to change ours...). Be sure to include overtime in the equation, too.

      * Or how about stories per dollar spent on the development team?

      * Unrepaired defects?

      * Customer satisfaction? It's less rigorous, but it is essentially the
      ultimate measurement, is it not? Choose your favorite method for
      determining the Warm-Fuzzy factor. (Take the number of times you see the
      users smiling or laughing, and subtract the number of times they're yelling
      at you?)

      Have fun!

      Rob
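
      (A minimal Python sketch of how the first two of these might be
      tallied; every figure below is invented purely for illustration, not
      taken from any real project.)

          # Stories per engineer-month and per dollar (hypothetical numbers).
          stories_completed = 24        # accepted by the customer this quarter
          engineers = 4
          months = 3
          team_cost_dollars = 180_000   # fully loaded, overtime included

          stories_per_engineer_month = stories_completed / (engineers * months)
          stories_per_dollar = stories_completed / team_cost_dollars

          print(f"{stories_per_engineer_month:.2f} stories per engineer-month")
          print(f"{stories_per_dollar * 10_000:.2f} stories per $10k spent")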

      > -----Original Message-----
      > From: kcdeberg@... [mailto:kcdeberg@...]
      > Sent: Friday, December 01, 2000 4:13 PM
      > To: extremeprogramming@egroups.com
      > Subject: [XP] Metrics to Prove XP Works
      >
      >
      > We are planning on piloting some XP projects at work, but we
      > continually run across a question that we can't seem to answer
      > effectively: "How will you quantitatively show that XP is an
      > improvement over more traditional methodologies (iterative
      > waterfall)?" We are wondering about this both post-mortem as well as
      > in-process. One thought is to use a productivity metric such as
      > KAELOC/Staff-month, but as we know, development methodology is only
      > one of many factors that work into a productivity metric such as this
      > one (others being developer experience, skill level, project
      > resources, development environment, etc, etc). Also, since with XP,
      > the documentation basically resides in the code with easy-to-read
      > refactored code, this could help to inflate those metrics (or
      > deflate if prolonged refactoring has occurred) depending on the type
      > of LOC counter used. After all, I could utilize the CFCO development
      > methodology (Code First, Code Often) and produce phenomenal metrics
      > which would "compellingly demonstrate" that CFCO is far superior to
      > all other methodologies based on KAELOC/Staff-month. Of course, all
      > CFCO would do is likely generate a mess of code that is inefficient
      > and unmaintainable. Has anyone had to work on establishing some
      > measurements that would help to reflect an increase (or decrease) in
      > productivity when using an XP methodology? How could this be done so
      > that any productivity changes could be reasonably attributed to the
      > choice of development methodology, and not to the many other factors
      > that affect productivity?
      >
      > Bill
      >
    • Michael A. Johnson
      Message 2 of 20 , Dec 1, 2000
        > deflate if prolonged refactoring has occurred) depending on the type
        > of LOC counter used. After all, I could utilize the CFCO development
        > methodology (Code First, Code Often) and produce phenomenal metrics
        > which would "compellingly demonstrate" that CFCO is far superior to
        > all other methodologies based on KAELOC/Staff-month. Of course, all
        > CFCO would do is likely generate a mess of code that is inefficient
        > and unmaintainable. Has anyone had to work on establishing some


        KLOCs just aren't a meaningful measurement. I liken it to building the
        heaviest airplane. If, after I have refactored my code, I have fewer
        KLOCs of more efficient code, have I been less productive?

        Some things I would be more interested in:

        The number of requirements, er, uh, stories delivered and accepted by
        the customer per iteration.

        How closely does work performed match work projected?

        What's the regressive error rate? That is, the number of defects found
        in work done in previous iterations that are discovered in the current
        iteration.

        How does the defect find-rate curve look? Flat? Monotonically
        decreasing? Heading for the moon?
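
        (A minimal Python sketch of how the regressive error rate and the
        defect find-rate curve might be tabulated; the per-iteration sample
        data is invented.)

            # Hypothetical records: defects found each iteration, split by
            # whether they were introduced in an earlier iteration.
            iterations = [
                {"name": "it-1", "found": 9, "from_prior": 0},
                {"name": "it-2", "found": 7, "from_prior": 2},
                {"name": "it-3", "found": 4, "from_prior": 1},
            ]

            for it in iterations:
                rate = it["from_prior"] / it["found"] if it["found"] else 0.0
                print(f'{it["name"]}: {it["found"]} defects found, '
                      f'regressive error rate {rate:.0%}')

            # The find-rate "curve" is just the found-counts over time:
            # flat, monotonically decreasing, or heading for the moon.
            print("defect find-rate curve:", [it["found"] for it in iterations])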
      • Malte Kroeger
        Message 3 of 20 , Dec 2, 2000
          KLOC is junk, as everybody knows.

          The biggest problem you will probably have with metrics is that there
          are usually no good metrics for old projects to compare against. And
          comparisons like user stories per month versus heavyweight use cases
          from a RUP process per month won't be very helpful either.

          I guess the only metrics that make sense to apply are soft measures
          like customer satisfaction, visible speed compared to similar
          projects, manageability, and responsiveness.

          If you can do a pet project, maybe you can introduce changing
          requirements during the project and see how the team reacts, etc.

          Malte
        • kjray
          Message 4 of 20 , Dec 2, 2000
            [...]
            >> We are planning on piloting some XP projects at work, but we
            >> continually run across a question that we can't seem to answer
            >> effectively: "How will you quantitatively show that XP is an
            >> improvement over more traditional methodologies (iterative
            >> waterfall)?"[...]

            I think the best metrics for comparative success (from the business
            stand-point) are:

            - features actually implemented vs features planned
            - time from inception to release
            - number of bugs found after release
            - amount of code that has to be changed to fix
            bugs / amount of time required to fix bugs
            - amount of time / amount of code that has to be changed to
            add a new feature after release
            - man-power used

            You could also measure % of code lines tested with
            unit-tests/acceptance-tests.
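
            (A minimal Python sketch of how a few of these could be computed
            from release records; every figure below is invented for
            illustration.)

                from datetime import date

                # Hypothetical release data for the ratios above.
                features_planned = 30
                features_implemented = 26
                inception = date(2000, 6, 1)
                release = date(2000, 11, 15)
                post_release_bugs = 5
                loc_changed_fixing_bugs = 420
                hours_fixing_bugs = 32

                print(f"features delivered: {features_implemented}/{features_planned} "
                      f"({features_implemented / features_planned:.0%})")
                print(f"inception to release: {(release - inception).days} days")
                print(f"per post-release bug: "
                      f"{loc_changed_fixing_bugs / post_release_bugs:.0f} LOC changed, "
                      f"{hours_fixing_bugs / post_release_bugs:.1f} hours to fix")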
          • Lowell Lindstrom
            Message 5 of 20 , Dec 4, 2000
              > From: kcdeberg@... [mailto:kcdeberg@...]
              > "How will you quantitatively show that XP is an
              > improvement over more traditional methodologies (iterative
              > waterfall)?"


              How are you measuring your process today, both post-mortem and in process?
              I would start by looking at those existing metrics, verifying that they are
              aligned to your team's/organization's goals. If they are, then XP should
              yield an improvement in those metrics. If not, then the use of XP may be
              questionable.

              If you do not have some history of metrics, then you'll need to establish
              some. But this is no reason to delay using practices from XP. The short
              iterations of XP give you more data points from the process than other
              methods. This accelerates analysis and therefore corrective action. In my
              experience, teams tend to over analyze and over complicate the metrics and
              never end up measuring. So, keep it simple.

              Like any other process, you must track the metrics, not just define them.
              Have a tracker, post the results, use them to improve.

              Lowell


              ==============================================
              Lowell Lindstrom
              lindstrom@...
              Object Mentor, Inc.
              Services to help software developers and their
              customers deliver better software, faster.
              People, Principles, Patterns, Process
              XP | OO | C++ | Java | Patterns
              www.objectmentor.com | www.xprogramming.com
              ==============================================
            • kcdeberg@excite.com
              Message 6 of 20 , Dec 4, 2000
                All,

                Thanks for the responses. Based on the replies so far, here are my
                responses.

                1). I agree LOC/time is not good. First, it is a poor in-process
                metric because LOC-weighted curves between XP and a "normal" process
                are very different. XP likely forms some sort of a bell curve, or at
                least a "bumpy" curve. Code is produced much earlier, showing a
                rapid rise in productivity, then it will peak, then will appear to
                drop as refactoring takes the mess of code and molds it into a
                design. A literal read of that metric would clearly show a program
                should never leave its early phases of development because that is
                where all the production occurs. That is obviously intuitively
                wrong. A "normal" process would have a parabolic curve. No code is
                produced for the longest time, and then near the end it shoots up to
                its eventual level. Apples and oranges.

                2). No pet projects. Can't be done (commercial environment). Even
                if it could be done, it would be very tough to show XP was the factor
                that changed things. So many other factors exist in a project. It
                would require clones of the developers to be made, and subjected to
                the exact same environment. Maybe, unethically, in the future as
                cloning technology improves, clones could be made and parallel,
                identical projects could be run. Let's rule this out.

                3). The question was asked, "Why pilot XP?" The reason falls under
                continuous improvement. We want to try and evaluate new approaches
                and then use the improved methodologies where they would give the
                best results. That's the motivation. The hard part is doing the
                "experiment" and then being able to conclusively present the
                results. We're trying to avoid the Florida "intention" results. If
                we have to hold our results up to the light, flex the results,
                shake the results, and then make guesses as to what the real
                intention of the results is, then we're really not learning
                anything.

                4). Quantitative analysis can be used in this case. We're not
                looking at XP as a "silver bullet" methodology. The premise of XP
                with communication and simplicity intuitively says it could work with
                small projects and probably would not work with large ones. While
                trying to determine "Did XP work?", we're also trying to
                characterize the types of projects it worked and failed on. Then,
                in the future, when we say, "We have a small project, anticipate
                about 5 developers, and need to hit the marketplace fast. We're
                not sure of the details of the project yet, so requirements will
                fluctuate wildly. What process do we use to develop?", we want to
                be able to look at the available methodologies and say, "XP looks
                like the one to use for this one because of [...]"

                5). "Customer satisfaction" doesn't work well. First, it takes until
                the very end to get any results whatsoever. Second, we're still
                stuck with the fact that we don't know if it was XP that got us to
                finish on time or under budget. Perhaps we would have done that
                well with a traditional process. Ultimately, the customer wants
                whatever the product is to be fast, cheap, and of quality relative
                to the price.
                Generally, a customer won't care how we got to the finished product.
                Maybe a "satisfaction" rating could come from feeling like they are
                part of the process, but that is only one part of XP, and not XP as a
                process. You could include the customer to make them happy in a
                traditional development, too.

                6). With respect to the manuals and tests question: It's generally
                accepted that testing improves quality. Taken to an extreme, no
                testing is viewed as bad. Therefore, a lot of testing must be good.
                Grossly stated, that's generally accepted as a practice, and would
                not require metrics. More user manuals to please a customer would
                be easy to measure via customer satisfaction. A manual is very
                visible, and a lot of interest would be placed on supplying the
                right manuals to the customer, so that would be measured. The
                overall question really is "How will you quantitatively show that
                XP is an improvement over more traditional methodologies
                (iterative waterfall)?"

                I've seen a couple of suggestions in the posts that might prove
                useful. One had to do with planning. Clearly, if we can show that
                estimating was better with XP than the traditional, that would score
                favorably. It would be fairly easy to track plan and actuals for
                that. Another measure might be tracking the number of requirements
                changes and comparing that to how well the project meets its
                milestones. This could show that XP is more responsive. A third
                suggested measure could be the number of stories created to fix
                errors from earlier increments, as compared to total stories.
                This could compare
                to defect containment in a traditional project, but I'm not sure how.

                There probably needs to be some sort of conversion factor so we can
                compare apples to apples, but off the top of my head, I don't know
                what it could be.
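
                (A minimal Python sketch of the plan-vs-actual and rework-story
                measures mentioned above; the iteration log is invented.)

                    # Hypothetical iteration log: estimated vs. actual ideal days,
                    # plus stories that existed only to fix earlier defects.
                    iterations = [
                        {"estimated": 20, "actual": 23, "stories": 8, "fix_stories": 1},
                        {"estimated": 22, "actual": 22, "stories": 9, "fix_stories": 2},
                        {"estimated": 21, "actual": 19, "stories": 9, "fix_stories": 0},
                    ]

                    for n, it in enumerate(iterations, start=1):
                        estimate_error = abs(it["actual"] - it["estimated"]) / it["estimated"]
                        rework_share = it["fix_stories"] / it["stories"]
                        print(f"iteration {n}: estimate error {estimate_error:.0%}, "
                              f"rework stories {rework_share:.0%} of total")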

                Keep the ideas coming. This could really help out in selling XP to
                management levels. If you can go to a boss and say, "I want to do XP
                because I think it would be more efficient," and the boss says why,
                and you can point to specific reasons, and then back it up with how
                you plan to demonstrate when it is over, the boss would be more
                likely to agree. After all, the boss wants cheaper and faster, too.
                If he can be convinced the experiment is worthwhile, and can show
                results afterward, he would more likely be interested in pursuing it
                than just saying, "It looks good, it feels good, it smells good,
                therefore it should be good."
              • Tom Mostyn
                Message 7 of 20 , Dec 5, 2000
                  Keith Paton wrote:
                  >
                  > We are never going to get anywhere with "metrics" until we find a better way
                  > of measuring value than counting lines of code.

                  Yes and no. Too much emphasis on KLOC is definitely a bad thing.
                  However, KLOC can be a viable measure of productivity when comparing
                  iterations of the same project, but there are other important things to
                  measure as well: various defect measurements, effort and schedule (late,
                  early, on-time), etc. Also, if one sets clear goals for taking
                  measurements and then starts taking them, a nice side effect is that
                  what you are measuring tends to get optimized. Check out Chapter 26 of
                  Rapid Development by Steve McConnell.

                  >
                  > Keith
                  >
                  > ----- Original Message -----
                  > From: <kcdeberg@...>
                  > To: <extremeprogramming@egroups.com>
                  > Sent: Friday, December 01, 2000 7:13 PM
                  > Subject: [XP] Metrics to Prove XP Works
                  >
                  > > We are planning on piloting some XP projects at work, but we
                  > > continually run across a question that we can't seem to answer
                  > > effectively: "How will you quantitatively show that XP is an
                  > > improvement over more traditional methodologies (iterative
                  > > waterfall)?" We are wondering about this both post-mortem as well as
                  > > in-process. One thought is to use a productivity metric such as
                  > > KAELOC/Staff-month, but as we know, development methodology is only
                  > > one of many factors that work into a productivity metric such as this
                  > > one (others being developer experience, skill level, project
                  > > resources, development environment, etc, etc). Also, since with XP,
                  > > the documentation basically resides in the code with easy-to-read
                  > > refactored code, this could help to inflate those metrics (or
                  > > deflate if prolonged refactoring has occurred) depending on the type
                  > > of LOC counter used. After all, I could utilize the CFCO development
                  > > methodology (Code First, Code Often) and produce phenomenal metrics
                  > > which would "compellingly demonstrate" that CFCO is far superior to
                  > > all other methodologies based on KAELOC/Staff-month. Of course, all
                  > > CFCO would do is likely generate a mess of code that is inefficient
                  > > and unmaintainable. Has anyone had to work on establishing some
                  > > measurements that would help to reflect an increase (or decrease) in
                  > > productivity when using an XP methodology? How could this be done so
                  > > that any productivity changes could be reasonably attributed to the
                  > > choice of development methodology, and not to the many other factors
                  > > that affect productivity?
                  > >
                  > > Bill

                  --
                  __________________________________________
                  Tom Mostyn - Nortel Networks
                  GSM Surveillance Development
                  P.O. Box 833871 (Mail Stop 992 / 03 / B20 )
                  Richardson, Texas 75083-3871
                  Ph# 972-684-2083 (ESN 444-2083)
                  mailto:tmostyn@...
                • kjray
                  Message 8 of 20 , Dec 6, 2000
                    Tom Mostyn <tmostyn@...> on 12/5/00 7:25 PM wrote:

                    >Keith Paton wrote:
                    >>
                    >> We are never going to get anywhere with "metrics" until we find a better way
                    >> of measuring value than counting lines of code.
                    >
                    >Yes and no. Too much emphasis on KLOC is definitely a bad thing.
                    >However, KLOC can be a viable measure of productivity when comparing
                    >iterations of the same project, [...]

                    Measure lines of code as a form of 'progress', and you discourage
                    refactoring, where the number of lines of code often goes down. You also
                    discourage re-use (internal re-use as well as buying commercial
                    libraries). It may also encourage unnecessary comments.
                  • Tom Mostyn
                    Message 9 of 20 , Dec 6, 2000
                      kjray wrote:
                      >
                      > Tom Mostyn <tmostyn@...> on 12/5/00 7:25 PM wrote:
                      >
                      > >Keith Paton wrote:
                      > >>
                      > >> We are never going to get anywhere with "metrics" until we find a better way
                      > >> of measuring value than counting lines of code.
                      > >
                      > >Yes and no. Too much emphasis on KLOC is definitely a bad thing.
                      > >However, KLOC can be a viable measure of productivity when comparing
                      > >iterations of the same project, [...]
                      >
                      > Measure lines of code as a form of 'progress', and you discourage
                      > refactoring, where the number of lines of code often goes down. You also
                      > discourage re-use (internal re-use as well as buying commercial
                      > libraries). It may also encourage unnecessary comments.

                      If you have those kind of people, the kind that are willing to sacrifice
                      quality, readability and conciseness of code, putting themselves before
                      the team, then no process/methodology/measurements can help you.

                      Also, total LOC is not the only measurement you can make. You can also
                      take changed LOC into account as well.

                      --
                      __________________________________________
                      Tom Mostyn - Nortel Networks
                      GSM Surveillance Development
                      P.O. Box 833871 (Mail Stop 992 / 03 / B20 )
                      Richardson, Texas 75083-3871
                      Ph# 972-684-2083 (ESN 444-2083)
                      mailto:tmostyn@...
                    • kjray
                      Message 10 of 20 , Dec 6, 2000
                        Tom Mostyn <tmostyn@...> on 12/6/00 9:19 AM wrote:

                        [...]
                        >If you have those kind of people
                        [...]

                        I'm sick of people saying these things.

                        People are usually trying to satisfy the goals that are presented to
                        them. (Weinberg's law: most people are trying to be helpful.) If they are
                        asked to increase lines of code, because that is measured, and
                        refactoring is NOT measured in a positive way, then they will do the
                        logical thing, which is to increase lines of code.

                        The same kind of goal-focus has been demonstrated experimentally when
                        groups of programmers were asked to write code with various goals: speed,
                        clarity, robustness, and so on.

                        XP's 12 practices are there to achieve _balance_. "Lines of code" is not
                        one of XP's goals... Writing tests for a task/story and making them pass
                        _is_ one of those goals. That's why trackers are supposed to ask how many
                        tasks have been done this iteration, how many are left to be done, etc.
                        If trackers start asking "How many lines of code did you write today?",
                        guess what the logical person's focus is going to switch to?
                      • Glen Alleman
                        Message 11 of 20 , Dec 6, 2000
                          --- In extremeprogramming@egroups.com, "Tom Mostyn" <tmostyn@N...>
                          wrote:
                          > Keith Paton wrote:
                          > >
                          > > We are never going to get anywhere with "metrics" until we find a
                          better way
                          > > of measuring value than counting lines of code.
                          >
                          > Yes and no. Too much emphasis on KLOC is definitely a bad thing.
                          > However, KLOC can be a viable measure of productivity when comparing
                          > iterations of the same project, but there are other important
                          things to
                          > measure as well: various defect measurements, effort and schedule
                          (late,
                          > early, on-time), etc. However, if one sets clear goals for taking
                          > measurements and then starts taking them, a nice side-effect is that
                          > what you are measuring tends to get optimized. Checkout Chapter 26
                          of
                          > Rapid Development by Steve McConnell.
                          >
                          We use a tool, Krakatau, to produce lots of metrics for each
                          build. Among the "obvious" ones are LOC and source LOC. These are
                          not very interesting on their own, but they show a trend in how
                          the system complexity grows (along with class and method counts).
                          It is also an indication of system stabilization. With nearly
                          378K source lines of Java now, the system is no longer a "simple"
                          development project. This "report" is plotted along with "defects
                          found/fixed" and some other obvious SQA metrics, and posted on a
                          wall where everyone walks by on the way to the lunch room. With a
                          large system integration project (several mainframes and a dozen
                          or so Sun clusters as an "external" integration problem, and the
                          first customer on another continent), this is one way to keep the
                          integration staff focused.

                          Glen Alleman
                          www.niwotridge.com
                        • Robert Crawford
                          Message 12 of 20 , Dec 6, 2000
                            On Wed, Dec 06, 2000 at 01:38:51PM -0600, Tom Mostyn wrote:
                            > I find it odd that some XP'ers say "only the code knows" and "the code
                            > is the design" making XP a very code centric process, but then refuse to
                            > measure productivity in terms of code.

                            Because we're not producing code; we're producing business
                            value. That's done by completing stories and making releases, not
                            by the number of lines of code added/changed/deleted.

                            --
                            crawford@...
                          • Glen Alleman
                            Message 13 of 20 , Dec 6, 2000
                              --- In extremeprogramming@egroups.com, kjray <kjray@i...> wrote:
                              [snip]
                              >
                              > Measure lines of code as a form of 'progress', and you discourage
                              > refactoring, where the number of lines of code often goes down. You
                              also
                              > discourage re-use (internal re-use as well as buying commercial
                              > libraries). It may also encourage unnecessary comments.

                              The LOC measure is a metric, not a management policy. Any
                              development manager who makes decisions on "how much code" is
                              written wouldn't last a week in our org. All metrics provide
                              "trend" information; they are not point measures (to be
                              annoyingly statistical about it). The tools we use provide
                              "data," not information; with trending data we can see where
                              we've been, not where we're going.

                              This argument about LOC is like the argument about BDUF: it
                              comes from a lack of understanding of how to use tools to
                              manage projects. If a manager took a metrics tool and applied
                              it to our project without knowing the context of the past and
                              the underlying complexity issues, then he (or she) would get
                              garbage from the numbers.

                              The same goes for the BDUF argument. No valid product
                              development does BDUF; it's all iterative (maybe not XP's
                              definition of iterative, but iterative nonetheless).

                              Metrics are a fundamental part of good management, from
                              measuring the gardener's progress on cutting the lawn to the
                              number of LOC or classes, or defects per method per class,
                              whatever. LOC "may" not be the best, but it is a foundation
                              number. It is used on our project as a "reduction"
                              measurement. We want the LOC to go down for specific domains,
                              after they have stabilized.

                              Glen Alleman
                              www.niwotridge.com
                            • Tom Mostyn
                              Message 14 of 20 , Dec 6, 2000
                                kjray wrote:
                                >
                                > Tom Mostyn <tmostyn@...> on 12/6/00 9:19 AM wrote:
                                >
                                > [...]
                                > >If you have those kind of people
                                > [...]
                                >
                                > I'm sick of people saying these things.

                                Then why do you say them yourself? To me, your comments implied that
                                some people would sacrifice quality, readability and conciseness of
                                code in favor of producing unnecessary code to fulfill a single
                                measurement.
                                IMO, these people are either bad or ignorant. The latter can be fixed
                                and the former needs to be removed.

                                > People are usually trying to satisfy the goals that are presented to
                                > them. (Weinbergs law: most people are trying to be helpful.) If they are
                                > asked to increase lines of code, because that is measured, and
                                > refactoring is NOT measured in a positive way, then they will do the
                                > logical thing, which is to increase lines of code.

                                If total LOC were the only measure then I agree that that would be bad.

                                > The same kind of goal-focus has been demonstrated experimentally when
                                > groups of programmers were asked to write code with various goals: speed,
                                > clarity, robustness, and so on.

                                >
                                > XP's 12 practices are there to achieve _balance_. "Lines of code" is not
                                > one of XP's goals... Writing tests for a task/story and making them pass
                                > _is_ one of those goals. That's why trackers are supposed to ask how many
                                > tasks have been done this iteration, how many are to left be done, etc.
                                > If trackers start asking "how many lines of code did you write today?"
                                > Guess what the logical person's focus is going to switch to?

                                I find it odd that some XP'ers say "only the code knows" and "the code
                                is the design," making XP a very code-centric process, but then refuse
                                to measure productivity in terms of code. Perhaps the main
                                misperception is that the more LOC the better. This clearly is not
                                true. For example: in iteration N there were a total of M KLOC in the
                                system accomplishing X tasks. In iteration N+1 there were a total of
                                M+O KLOC in the system accomplishing X+Y tasks. If O is small but X+Y
                                is close to 2X, then it looks pretty good because reuse appears to be
                                high. An indication of good design, IMO. If O is close to M, then M+O
                                is close to 2M. This might indicate a problem if the Y tasks in
                                iteration N+1 were closely related or similar to the X tasks in
                                iteration N. However, the Y tasks may be totally unrelated to the X
                                tasks, so O close to or even greater than M is probably justified.
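
                                (One way to put invented numbers through the M/O/X/Y
                                comparison above, as a rough Python sketch.)

                                    # Invented figures for the comparison.
                                    m_kloc, x_tasks = 40, 20   # iteration N
                                    o_kloc, y_tasks = 4, 18    # growth in N+1

                                    code_growth = o_kloc / m_kloc
                                    task_growth = y_tasks / x_tasks

                                    print(f"code grew {code_growth:.0%}, "
                                          f"tasks grew {task_growth:.0%}")
                                    if code_growth < 0.5 * task_growth:
                                        print("small O, large Y: reuse looks high")
                                    else:
                                        print("code growth keeps pace with tasks: "
                                              "check whether the new tasks are related")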

                                --
                                __________________________________________
                                Tom Mostyn - Nortel Networks
                                GSM Surveillance Development
                              • Tom Mostyn
                                Message 15 of 20 , Dec 6, 2000
                                  Robert Crawford wrote:
                                  >
                                  > On Wed, Dec 06, 2000 at 01:38:51PM -0600, Tom Mostyn wrote:
                                  > > I find it odd that some XP'ers say "only the code knows" and "the code
                                  > > is the design" making XP a very code centric process, but then refuse to
                                  > > measure productivity in terms of code.
                                  >
                                  > Because we're not producing code; we're producing business
                                  > value. That's done by completing stories and making releases, not
                                  > by the number of lines of code added/changed/deleted.

                                  XP produces business value by generating code. One is easier to measure
                                  than the other. Given the choice of measuring business value vs. code I
                                  would choose to measure code. It's easier to measure, IMO.

                                  --
                                  __________________________________________
                                  Tom Mostyn - Nortel Networks
                                  GSM Surveillance Development
                                • Robert Crawford
                                  Message 16 of 20 , Dec 6, 2000
                                    On Wed, Dec 06, 2000 at 01:59:55PM -0600, Tom Mostyn wrote:
                                    > Robert Crawford wrote:
                                    > > On Wed, Dec 06, 2000 at 01:38:51PM -0600, Tom Mostyn wrote:
                                    > > > I find it odd that some XP'ers say "only the code knows" and "the code
                                    > > > is the design" making XP a very code centric process, but then refuse to
                                    > > > measure productivity in terms of code.
                                    > > Because we're not producing code; we're producing business
                                    > > value. That's done by completing stories and making releases, not
                                    > > by the number of lines of code added/changed/deleted.
                                    > XP produces business value by generating code. One is easier to measure
                                    > than the other. Given the choice of measuring business value vs. code I
                                    > would choose to measure code. It's easier to measure, IMO.

                                    We're already measuring business value, though. Every story
                                    has business value; completing a story bumps up the "business value
                                    metric".

                                    This could be expressed as a graph of say, "Total Stories" and
                                    "Stories Completed" over time. Total stories gives you the potential
                                    business value of the project; completed stories gives you the present
                                    business value. The slope of the "Stories Completed" value gives you a
                                    very rough velocity (rough because stories have different costs).

                                    Lines of code doesn't mean much to a client -- they want a
                                    system that provides value, not one that had $BIGNUM lines of code
                                    changed last iteration.
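
                                    (A Python sketch of the two series described above,
                                    with the rough velocity taken from the slope; the
                                    weekly snapshots are invented.)

                                        # Invented weekly snapshots of the two series.
                                        weeks = [1, 2, 3, 4, 5]
                                        total_stories = [40, 42, 45, 45, 47]
                                        completed = [3, 7, 12, 16, 21]

                                        # Rough velocity: slope of "Stories Completed",
                                        # i.e. average stories finished per week.
                                        velocity = ((completed[-1] - completed[0])
                                                    / (weeks[-1] - weeks[0]))
                                        remaining = total_stories[-1] - completed[-1]

                                        print(f"rough velocity: {velocity:.1f} stories/week")
                                        print(f"~{remaining / velocity:.0f} weeks remain")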

                                    --
                                    crawford@...
                                  • Tom Mostyn
                                    Message 17 of 20 , Dec 6, 2000
                                      Robert Crawford wrote:

                                      > We're already measuring business value, though. Every story
                                      > has business value; completing a story bumps up the "business value
                                      > metric".
                                      >
                                      > This could be expressed as a graph of say, "Total Stories" and
                                      > "Stories Completed" over time. Total stories gives you the potential
                                      > business value of the project; completed stories gives you the present
                                      > business value. The slope of the "Stories Completed" value gives you a
                                      > very rough velocity (rough because stories have different costs).

                                      Business value can change like the wind. At one point in time C3 had
                                      enough business value to proceed and continue with the project for
                                      years. Then one executive decision terminated the project. Clearly,
                                      somewhere, the business value declined.

                                      > Lines of code doesn't mean much to a client -- they want a
                                      > system that provides value, not one that had $BIGNUM lines of code
                                      > changed last iteration.

                                      Agreed, LOC don't mean much to a client and I never suggested that they
                                      be presented to a client. Metrics and measurement are simply a way to
                                      figure out where you are and indicate how you might improve.

                                      <gripe on>
                                      I have repeated over and over again that LOC in and of itself is
                                      not necessarily a good measure of anything. Its usefulness depends
                                      on your goals and its context with other measurements/metrics. Yet
                                      everyone seems to be focusing on one thing: LOC.
                                      </gripe off>

                                      --
                                      __________________________________________
                                      Tom Mostyn - Nortel Networks
                                    • Dossy
                                      Message 18 of 20 , Dec 6, 2000
                                        On 2000.12.06, Tom Mostyn <tmostyn@...> wrote:
                                        >
                                        > Business value can change like the wind. At one point in time C3 had
                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                                        > enough business value to proceed and continue with the project for
                                        > years. Then one executive decision terminated the project. Clearly,
                                        > somewhere, the business value declined.

                                        Actually, I've wondered about this before. I think if you think
                                        about the actual meaning, business value never changes. What
                                        changes wildly is business itself.

                                        Business may change enough to make the currently implemented stories
                                        irrelevant. They still have business value, just not to your business
                                        anymore.


                                        --
                                        Dossy Shiobara mail: dossy@...
                                        Panoptic Computer Network web: http://www.panoptic.com/
                                      • Ian Hobson
                                        Message 19 of 20 , Dec 11, 2000
                                          In article <3A2E9ABB.1216D94A@...>, Tom Mostyn
                                          <tmostyn@...> writes
                                          >Robert Crawford wrote:
                                          >>
                                          >> On Wed, Dec 06, 2000 at 01:38:51PM -0600, Tom Mostyn wrote:
                                          >> > I find it odd that some XP'ers say "only the code knows" and "the code
                                          >> > is the design" making XP a very code centric process, but then refuse to
                                          >> > measure productivity in terms of code.
                                          >>
                                          >> Because we're not producing code; we're producing business
                                          >> value. That's done by completing stories and making releases, not
                                          >> by the number of lines of code added/changed/deleted.
                                          >
                                          >XP produces business value by generating code. One is easier to measure
                                          >than the other. Given the choice of measuring business value vs. code I
                                          >would choose to measure code. It's easier to measure, IMO.
                                          >
                                          If you want business value, measure business value. (or some close
                                          analog)

                                          If you want code, measure code.

                                          Remember, it was a *drunk* who dropped his keys in the dark but
                                          was looking under the lamp-post because he could see there!

                                          Regards

                                          Ian Hobson

                                          Every time we teach a child something, we prevent him from inventing
                                          it himself. - Jean Piaget