
[extremeprogramming] Re: Elves in the Night [Stupid XP Question Number 6614]

  • Robert C. Martin
    Message 1 of 38 , Jan 3, 2000
      Dave Thomas <Dave@...> wrote in message
      news:m266xbzofr.fsf@zip.local.thomases.com...
      > "Robert C. Martin" <rmartin@...> writes:
      > > XP is not anti-meta. XP simply puts tension in the decision to go
      > > meta. You go meta only if you know you must.
      > >
      > > Dave:
      > > > So I feel that YAGNI is too simplistic--there are times where a
      > > > bit of up-front investment will be rewarded many times over.
      > >
      > > The problem is predicting those times in advance. I have been the
      > > beneficiary of such up front investment -- and it feels good to know
      > > you guessed right. But I've also been burned by investing too much in
      > > generality that wasn't needed.
      > But it's more than 'feeling good' and 'getting burned', isn't it? In
      > the XP risk model, the 'feeling good' actually represents cost
      > savings. In my experience, the right underlying structure can make
      > these substantial--the cost of adding new functions is halved or
      > better. The 'burn' is increased cost--you bet up-front and lost.

      Yes, the right underlying structure can make the savings substantial. As
      you say, the cost of adding functions can be halved. In XP, however, we
      won't create this structure until we have two functions that would have
      benefitted. When we see the second function, we refactor the existing
      design until the second function is easy to add. All subsequent functions
      then benefit from this.

      Thus, we aren't abandoning the right underlying structure; we are just
      demanding that the code show us that it's absolutely needed. We also wait
      until the code shows us exactly what that structure should be.

      > My belief is that for an experienced developer, we're looking at the
      > venture capital success formula here: arithmetic losses, geometric
      > gains. You invest n days up front, on the basis that you're pretty
      > certain to see returns of 10n down the road.

      But at what failure rate? The majority of startups don't experience the
      geometric growth. Their up front guesses were wrong. They struggle for
      years to make a small incremental gain on the initial investment, and then
      fold or are absorbed. Investors still find the model useful because the
      occasional geometric success swamps the preponderance of failures.

      Can we afford this model in building software projects? Does the
      geometric success really provide enough benefit to overcome the times we
      guess wrong? It seems to me that project failure due to overengineering is
      not all that uncommon.

      How do many startup companies really succeed? They stay nimble. They
      change when the market changes. They try not to invest too much into an
      approach until the market begins to pull that solution from them. Then
      they invest like crazy. i.e. they are market driven.

      How can a software project succeed using this formula? By not investing in
      unproven infrastructure. By doing the things that the customer thinks are
      most important, and then optimizing the design within that context. When
      duplication arises because of a lack of infrastructure, add the
      infrastructure and kill the duplication.
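
      The "kill the duplication" step above can be sketched in code. This is a
      hypothetical example (none of these names appear in the thread): two
      export functions arrive, and only then is the shared row-flattening
      "infrastructure" extracted, rather than being designed up front.

```python
# Hypothetical sketch of "add infrastructure at the second use".
# export_csv() existed first; when export_html() arrived and duplicated
# the field-ordering logic, flatten() was extracted to kill the duplication.

def flatten(record):
    # Shared infrastructure, extracted only after two callers needed it.
    return [record["name"], record["qty"], record["price"]]

def export_csv(records):
    return "\n".join(",".join(map(str, flatten(r))) for r in records)

def export_html(records):
    rows = "".join(
        "<tr>" + "".join(f"<td>{v}</td>" for v in flatten(r)) + "</tr>"
        for r in records
    )
    return f"<table>{rows}</table>"

records = [{"name": "widget", "qty": 2, "price": 3.50}]
print(export_csv(records))   # widget,2,3.5
print(export_html(records))
```

      Any third exporter now gets the field ordering for free, which is the
      "all subsequent functions then benefit" point made earlier.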
      > Elsewhere, people have argued that you add this infrastructure when
      > you see the need--effectively when the second example of its use
      > occurs. My experience is that often that would be an expensive
      > proposition--the kind of design I'm talking about here is structural,
      > not just procedural. We're talking about refactoring the metaphor, not
      > just the code.

      Yes, there is certainly rework involved. But since the rework is done at
      the *second* instance, it's just not that much rework. Yes, there will be
      times when some good idea is missed, and a larger refactoring is needed.
      We live with that. We count it better to have the code force us into a
      better infrastructure than to force that infrastructure on a project that
      doesn't need it.

      > So, my problem is that XP as espoused doesn't allow me to use my
      > experience to reduce risk.

      Yes it does. It just asks you to wait until the risk is actually present.
      As you work in an XP project, you will find all kinds of opportunities for
      adding infrastructure. But you wait, rather than immediately adding it.
      You wait until it's clear that the infrastructure is really needed.

      This isn't asking a lot. It is reasonable to ask that you avoid extra
      infrastructure that you aren't sure you need.

      > It says 'add it when you need it', 'the
      > first use only pays what it must'. I'd just like to see a tad more
      > flexibility there, allowing me to say "well, I can't guarantee it, but
      > I strongly suspect we'll need XYZ, and if I'm right, it'll pay for
      > itself 10 times over. Implementing it now will take n days, but adding
      > it retroactively will affect everything written to that date, and will
      > cost at least 3n days. If I'm right 50% of the time, it pays more to
      > do it now".

      That's a lot of 'ifs'. How certain are you that they are correct? How
      confident are you in the 10X benefit, or the 3X cost, or the 50% chance
      of being right? In effect you are gambling with a lot of variables; and
      this increases variance. Now, what does your customer want? Does the
      customer want variability, or predictability? Consider these two options
      (forgive the inappropriate use of a normal distribution):

      1. Mean = 2 man years. Sigma = 2 man years.
      2. Mean = 3 man years. Sigma = 1 man year.

      Which will the customer be more interested in? In my experience, the
      customer will go for option 2. He'll be willing to pay a higher average
      cost for more predictability.
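
      Dave's break-even arithmetic from the quoted message can be sketched
      directly. The n-day cost, 3n retrofit, and 50% hit rate are his numbers;
      the function name and structure here are mine, purely for illustration.

```python
# Expected-cost sketch of the "build now vs. wait" bet described above.
# n, 3n, and the probability of guessing right come from Dave's message;
# this just makes the gamble on p_right explicit.

def expected_cost(p_right, n=1.0):
    # Build now: pay n up front whether or not the guess turns out right.
    build_now = n
    # Wait: pay the 3n retrofit only in the fraction of cases where the
    # infrastructure turns out to be needed.
    wait = p_right * 3 * n
    return build_now, wait

print(expected_cost(0.5))   # at 50% accuracy, building now is cheaper
print(expected_cost(0.3))   # below ~33% accuracy, waiting wins
```

      The whole argument hinges on p_right, which is exactly the variable the
      reply says we cannot estimate with confidence -- and a wrong estimate is
      what drives up the variance the customer is asked to absorb.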

      > I think this is probably a somewhat academic argument. My guess is
      > that in real life, common sense wins out over the strict letter
      > of the method. After all, I suspect XP coders use a manifest constant
      > the first time they need a fixed value, not the second. I just get
      > nervous when I read the somewhat extreme and absolute tone of some
      > of the writings.

      I think you should stay nervous. Using XP, a program is built from one
      failing test case to the next. And the granularity of those test cases is
      remarkably small -- on the order of a few dozen lines of code. Don't
      presume that XPers actually still add all the infrastructure up front that
      "common sense" would dictate -- they don't. Instead, they make each
      test case pass, one at a time. After each test case passes (or before they
      write the next test case) they refactor to remove duplication and improve
      the design. The absolute tone of the writings reflects the behavior of
      XPers. Infrastructure is added after the fact by refactoring something
      that already exists.
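
      The test-by-test granularity described above might look like the
      following. This is an invented micro-example, not code from the thread:
      each test is written first, and only enough implementation follows to
      make it pass.

```python
# A minimal red/green sketch of building from one failing test to the next.
# The tests come first; cart_total() is only as capable as they demand.

def test_empty_cart_totals_zero():
    assert cart_total([]) == 0

def test_cart_sums_item_prices():
    assert cart_total([3, 4]) == 7

# Just enough implementation to pass both tests -- no discounts, currencies,
# or tax infrastructure until a test case actually demands them.
def cart_total(prices):
    return sum(prices)

test_empty_cart_totals_zero()
test_cart_sums_item_prices()
print("ok")
```

      Any speculative infrastructure would show up here as code no test
      exercises, which is precisely what the refactoring step removes.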


      Robert C. Martin | OO Mentoring | Training Courses:
      Object Mentor Inc. | rmartin@... | OOD, Patterns, C++,
      PO Box 85 | Tel: (800) 338-6716 | Extreme Programming.
      Grayslake IL 60030 | Fax: (847) 548-6853 |

      "One of the great commandments of science is:
      'Mistrust arguments from authority.'" -- Carl Sagan
    • Robert C. Martin
      Message 38 of 38 , Jan 5, 2000
        Tom Kreitzberg <Tom.Kreitzberg@...> wrote in message
        news:387364E4.C0A3E6CC@jhuapl.edu...

        > But I think "flexibility" means different things to XP and,
        > shall we say, pre-XP OMA. In XP, doesn't it primarily mean
        > once and only once? In pre-XP OMA, doesn't it primarily mean
        > OCP and low coupling? When I wrote that XP "is structured so
        > that inflexible designs are cheap to change," I meant inflexible
        > in this second sense.

        There is no fundamental difference between pre-XP Object Mentor and
        post-XP Object Mentor except that we have identified XP as the process
        we like to use. Even this is not a big shift for us, since XP is very
        similar in spirit and practice to the unnamed process we have used for
        years. There are differences, certainly -- specifically in the areas of
        pair programming and test-first programming; but these are differences
        in intensity, not philosophy. As for the rules governing simplicity,
        the planning game, iterations, etc., we were very closely aligned.

        Flexibility means the same to me now as it did five years ago: the
        ability to add or change significant amounts of functionality while
        changing a minimum of existing code -- i.e. the OCP. OnceAndOnlyOnce
        leads to this goal just as the OO design principles do. It is my goal
        over the next several months to integrate the principles and XP.