RE: [XP] XP & PSP

  • Bryan Dollery (Message 1 of 23, May 2, 2002)
      Hi John,

      And you thought that your message was long :)

      > The main goal of TSP is to make plans that
      > are based on the team's demonstrated velocity, plan only for the
      > foreseeable future, and track the plan so that accurate status is
      > known and mid-course corrections can be made when needed. Finally,
      > review performance on recently completed work to better estimate the
      > next batch of work.

      Sounds good, but I prefer a process that has as its main goal the
      production of software. Was this just an unfortunate turn of phrase, or
      does it indicate something deeper in your psyche - that you believe in
      process over working software? Your first response will be defensive - that
      it's just an unfortunate phrase - but please pause for a while and consider
      the possibility that there is something deeper going on here.

      > PSP measures
      > quality through functions of work, review, and inspection times, and
      > defect rates, (false: PSP measures quality through minimizing the
      > cost of producing working implementations; the individual
      > measurements are used to determine which activities are cost
      > effective). XP defines quality as the number of tests passed.

      XP doesn't define quality - our customers do. We demonstrate that we meet
      their standards by having those standards formally represented as customer
      tests, and we have a stated aim of passing all of them before we consider
      the job done. Therefore tests are simply a measuring tool, not a definition
      of quality.
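
      To make that concrete, here's a minimal sketch (JUnit 3 style) of what
      such a customer test might look like. Every name in it - Order, its
      methods, the discount rule - is hypothetical, invented purely for
      illustration:

          import junit.framework.TestCase;

          // Sketch only: the customer's standard - "orders over $1000
          // get a 5% discount" - is recorded as an executable check
          // rather than as prose in a specification document.
          public class DiscountAcceptanceTest extends TestCase {
              public void testLargeOrderGetsFivePercentDiscount() {
                  Order order = new Order();
                  order.addItem("widget", 200.00, 6); // $1200 of widgets
                  // 5% off $1200 leaves $1140:
                  assertEquals(1140.00, order.totalAfterDiscount(), 0.001);
              }
          }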

      This is an important distinction. In XP quality means something different -
      it means that we've finished - and we don't deliver software that isn't
      finished. Our systems either pass all their tests (what you would call 100%
      quality) or they aren't yet finished. Therefore quality isn't an important
      concept to us - because it's a binary property that could be called
      'complete'. So the question "is your system high-quality?" becomes "is
      your system complete?". Being able to complete our work is important.

      You can't argue that I've just redefined quality, because I haven't got
      that power - it belongs to the customer.

      > PSP data is meant to be used first on the current
      > project, but be consistent enough to be useful across projects). XP
      > values data that answers specific short term questions.

      I think you're suggesting that XP's data is collected to answer
      questions about near-term events and the near-term properties of the
      project. This is true, but our data is also used to predict the
      longer-term properties and events of the project. However, we recognise
      that in many situations things change quite rapidly, so we trust our
      near-term predictions more than our longer-term ones.

      > PSP/TSP require
      > customer input as often as it is needed). XP requires it every day.

      Which is as often as it is needed.

      The problem with a wishy-washy definition such as "when it's needed" is
      that it's open to interpretation. Most XPers believe that customer input is
      necessary all day, every day - so Kent et al. decided to remove the
      ambiguity and speculation and simply state that the customer should always
      be available.

      This means that instead of spending time writing low-bandwidth documents we
      can gather our requirements live, and provide rapid feedback to the user
      about the impact of their requirements on the system. The time we save by
      not writing and trying to interpret documentation is, on any non-trivial
      project, significant.

      > See sei@... for a wealth of data on
      > improvements in estimating accuracy and reductions in time-to-market
      > using the PSP/TSP. Can anyone refer me to data on XP results on
      > estimating performance and time-to-market for projects of any
      > significant size?

      Well, I can't. But, then again, I've been using the SEI resources for
      years, and generally disagree with everything they say and everything they
      measure. So, if you can point me to acceptable data on PSP (that is, data
      that doesn't include sweeping assumptions) then I'd be willing to view it.

      > More misinformation is included in his posting regarding reviews and
      > inspections and the use of checklists. Many independent sources have
      > concluded that bugs can be found and fixed at least 4 to 5 times
      > faster by reviews and inspections compared to unit testing

      Ah, unit-tests in XP are different. They're not what you would usually call
      unit-tests, despite the fact that you may look at them and think that they
      look just like unit-tests. I'm not aware of any comparison of inspection to
      XP's quality assurance techniques (again, feel free to substitute
      'completion' for quality in this sentence).

      In XP unit-tests are a formal system for encoding design constraints and
      driving design (and are considerably more useful than UML for this
      purpose). They also act as a safety-net for refactoring (which is closely
      related to their ability to drive design). Finally they serve a
      psychological purpose - they help a developer subdivide tasks into small
      steps, and then focus on solving those steps one at a time.
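
      As a concrete illustration, here's a minimal test-first sketch (JUnit 3
      style; Account and InsufficientFundsException are hypothetical names).
      The test is written before any implementation exists, so it both records
      a design constraint - balances may never go negative - and hands the
      developer one small step to make pass:

          import junit.framework.TestCase;

          // Written before Account exists: the test pins down a design
          // decision and doubles as a safety-net for later refactoring.
          public class AccountTest extends TestCase {
              public void testWithdrawalCannotOverdraw() {
                  Account account = new Account(100); // opening balance
                  try {
                      account.withdraw(150); // more than is available
                      fail("expected the overdraft to be rejected");
                  } catch (InsufficientFundsException expected) {
                      // the constraint holds: the balance is untouched
                      assertEquals(100, account.getBalance());
                  }
              }
          }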

      I feel that there can be no meaningful comparison of XP's unit testing
      strategy with inspection. It's comparing apples with pears.

      However, given that we derive the benefits listed above from a tool
      that looks, and acts, just like unit-tests, we also enjoy the usual
      bug-identification benefits that you'd normally associate with
      unit-tests.

      Another issue is that comparisons of inspection with unit-testing don't
      consider large systems wrapped in XP-style tests, and the effects of
      making a change. Inspection will notice any local defects, but it would
      be necessary to understand the full system to recognise any side-effects
      of a change. Unit-tests will identify those side-effects very rapidly.
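
      A hypothetical illustration of that rapid detection (all names invented):
      suppose a ReportFormatter depends on how Money prints itself. A "local"
      change to Money - say, dropping the currency code - can pass an
      inspection of that one class, while this distant test fails on the very
      next run and points straight at the side-effect:

          import junit.framework.TestCase;

          // Fails immediately if anyone changes Money's formatting,
          // however reasonable that change looked in isolation.
          public class ReportFormatterTest extends TestCase {
              public void testTotalLineIncludesCurrencyCode() {
                  ReportFormatter formatter = new ReportFormatter();
                  String line = formatter.totalLine(new Money(42.00, "USD"));
                  assertEquals("Total: USD 42.00", line);
              }
          }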

      IMO: The majority of authors who write studies on Inspection techniques are
      simply trying to justify their belief in the technique. This position alone
      is enough to disqualify their 'scientific' results.

      One last point on this topic. If inspection is good, then 100% inspection
      must be better. XP practices 100% inspection, all the time, through
      pair-programming - and we've got our unit-tests too. It's better to have
      both than only one, isn't it?

      >, and
      > fixing bugs found in acceptance tests takes even longer than in unit
      > test.

      I believe that the general wisdom is that it usually takes 10 times
      longer, in fact. XP reduces this figure by a large amount by keeping the
      feedback loop very short, and by keeping the code as simple as it can
      possibly be, and therefore able to change rapidly to incorporate a fix.

      > The point of the checklist is that you focus it on your
      > individual tendencies. Don't waste time looking for mistakes you
      > don't make, instead look for the ones that you have made before
      > and
      > take a long time to fix. If you modify the code to eliminate the
      > problem, you have fixed one program, [give a man a fish]. Keeping
      > track with a checklist will help eliminate the problem in ALL future
      > programs,[teach a man to fish].

      I have a similar tool - it's called a brain. What happens is that I'll make
      a mistake, learn from it, and not do it again. I find learning to be a much
      more effective tool than a simple checklist. To extend the fishing analogy,
      your checklist (a tool) would be closer to giving the man a fishing rod (a
      tool), not teaching him to fish (a learning experience) - if you give him a
      rod he can fish, but that doesn't mean that he'll be able to apply that
      knowledge to new contexts - that's what learning is for. Learning is
      superior to checklists, which is why nature allowed it to evolve, and
      checklists had to be invented.

      Cheers,

      Bryan
    • Robert Crawford (Message 2 of 23, May 2, 2002)
        J.Ciurczak@... wrote:
        > See below:
        >
        >
        >>psp_tsp_practitioner wrote:
        >>
        >>>What really happened was that the 400% rewrite occurred after the
        >>>team had implemented, re-factored and integrated. That is, AFTER
        >>>THE TEAM HAD TURNED OVER THE CODE FOR FINAL CERTIFICATION TEST,
        >>>(we're talking on the order of 40,000 lines of code changed while
        >>>trying to fix the 10,000 lines of code they threw over the wall to
        >>>Certification). These changes were not for re-factoring, they were
        >>>for bug fixes. During this period the team's velocity was ZERO.
        >>>Imagine how much happier the User, (and management), would have been
        >>>if the team had used those 40,000 lines of code to implement more
        >>>User Stories!
        >>>
        >>Why can't refactoring happen while you're taking care of a bug?
        >>
        > (It certainly can, but that's not what happened here.

        That's not clear from the original message.

        >>Why wasn't testing being done continuously?
        >>
        > It was supposed to be, but it wasn't. You would have to ask Chris. I would not presume to
        > guess at the whys of the behaviour.

        Well, the phrase "AFTER THE TEAM HAD TURNED OVER THE CODE FOR FINAL
        CERTIFICATION TEST" implies to me that there were tests that were not
        available through the course of development. One of the points of XP is
        continuous testing and correction; this phrase implies there was a whole
        set of tests (and attendant corrections) that weren't available until
        the end.

        I'm pretty sure the "official" XP position is that the developers should be
        able to run the acceptance tests during development. It certainly seems
        like a useful idea.
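
        One way to keep them runnable at any time - a sketch, assuming the
        acceptance tests are ordinary JUnit classes (the class names here are
        invented) - is to bundle them into a plain JUnit suite that any
        developer can run locally throughout development, instead of meeting a
        separate certification test set at the end:

            import junit.framework.Test;
            import junit.framework.TestSuite;

            // An ordinary JUnit 3 suite: run it from the IDE or the
            // build script whenever the code changes.
            public class AllAcceptanceTests {
                public static Test suite() {
                    TestSuite suite = new TestSuite("Acceptance tests");
                    suite.addTestSuite(DiscountAcceptanceTest.class);
                    suite.addTestSuite(CheckoutAcceptanceTest.class);
                    return suite;
                }
            }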
      • Kari Hoijarvi (Message 3 of 23, May 2, 2002)
          Since I use both XP and PSP practices, I'd really like to
          know how test-first-DESIGN compares to unit tests and
          reviews. For me, TFD's emphasis is on the design.

          Humphrey recommends writing unit tests before the code,
          which is good, but writing the tests along with the design
          is even better.

          My PSP data shows that reviews find 2 times as many defects
          per hour as TFD. SEI data says that reviews find 4 to 5
          times as many as unit tests. So my TFD is about 2-2.5 times
          (4 to 5 divided by 2) as efficient as the unit tests in
          those studies. But since I have no pre-TFD data on myself,
          and there's only one person involved, I would not declare
          TFD superior to normal unit testing. My data just convinces
          me that both reviews and TFD are vastly superior to hacking.

          I'd really like to see a study, where TFD efficiency is
          measured. Any links?

          Kari

          -----Original Message-----
          From: psp_tsp_practitioner [mailto:J.Ciurczak@...]

          > inspections and the use of checklists. Many independent sources have
          > concluded that bugs can be found and fixed at least 4 to 5 times
          > faster by reviews and inspections compared to unit testing, and
          > fixing bugs found in acceptance tests takes even longer than in unit
          > test.
        • Ron Jeffries (Message 4 of 23, May 2, 2002)
            Around Thursday, May 2, 2002, 11:59:31 AM, Kari Hoijarvi wrote:

            > My PSP data shows that reviews find 2 times as many defects
            > per hour as TFD. SEI data says that reviews find 4 to 5
            > times as many as unit tests. So my TFD is about 2-2.5 times
            > (4 to 5 divided by 2) as efficient as the unit tests in
            > those studies. But since I have no pre-TFD data on myself,
            > and there's only one person involved, I would not declare
            > TFD superior to normal unit testing. My data just convinces
            > me that both reviews and TFD are vastly superior to hacking.

            And reviews vs TDD + Pair Programming?

            Ron Jeffries
            www.XProgramming.com
            The rules are ways of thinking, not ways to avoid thinking.
          • Kari Hoijarvi (Message 5 of 23, May 2, 2002)
              I'm a solo developer, so I can't give the slightest clue.

              Anyway, measuring the effects of a single practice is hard.
              For example, I measured that reviews are twice as
              effective as TFD unit tests in finding bugs. But that
              number does not tell me how much TFD helps in producing
              better designs, or whether it's my review performance
              that's relatively inefficient.

              I just use what I have found useful.

              Kari

              -----Original Message-----
              From: Ron Jeffries [mailto:ronjeffries@...]

              Around Thursday, May 2, 2002, 11:59:31 AM, Kari Hoijarvi wrote:

              > My PSP data shows that reviews find 2 times as many defects
              > per hour as TFD. SEI data says that reviews find 4 to 5
              > times as many as unit tests. So my TFD is about 2-2.5 times
              > (4 to 5 divided by 2) as efficient as the unit tests in
              > those studies. But since I have no pre-TFD data on myself,
              > and there's only one person involved, I would not declare
              > TFD superior to normal unit testing. My data just convinces
              > me that both reviews and TFD are vastly superior to hacking.

              And reviews vs TDD + Pair Programming?