[XP] Re: Success rates of Agile Transitions

  • Joseph Little
    Message 1 of 256, Apr 1, 2008
      Hi George,

      "How are you judging improvement?"

      You are a smart guy, and doubtless have your own answer for the
      question. Or feel the question cannot be answered. So, at the risk
      of setting myself up, I will still try to give you (and really others)
      my answer.

      Before that, I must also say that this is an important topic that has
      been discussed here and elsewhere at length. And yet, I find in teams
      there is often not enough discussion of it. Nor enough coming back to
      it in the course of an effort (I am more and more moving away from the
      project idea).

      OK, so, with those caveats...

      There is the small world and the large. But, as we have learned over
      the centuries, there are linkages and similarities.

      In the small world (the iteration), I have come to favor velocity
      (story points completed). It gives one measure of "productivity",
      and in any case the team needs it (a) to try to make itself better
      by removing impediments (NOT by working harder, usually), and (b)
      to talk to managers who want magic ("I know you guys can get those
      extra features done by the next release...just try harder.")
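
      (A rough sketch of what I mean by velocity, just to make it
      concrete. The numbers and the three-iteration window below are
      made up for illustration, not taken from any real team.)

          # Velocity here is simply story points completed per iteration.
          from statistics import mean

          completed_points = [21, 18, 24, 23, 27]  # one entry per past iteration

          def rolling_velocity(points, window=3):
              """Average of the last `window` iterations; a crude way to see
              whether removing impediments is actually moving the number."""
              recent = points[-window:] if len(points) >= window else points
              return mean(recent)

          print("last iteration:", completed_points[-1])
          print("rolling velocity:", rolling_velocity(completed_points))

      The rolling average is only there to smooth iteration-to-iteration
      noise; the team still has to ask why the number moved.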

      In the larger world (a release, a larger effort), I favor some metric
      around Business Value. There are lots of them, and I think each can
      be good and appropriate in some situations. Examples include NPV,
      ROI, Reduced Cycle Time (e.g., for a process the software will
      improve), etc.

      Let me emphasize this: Most (all?) metrics around BV have both a
      calculation model and some assumptions that go into that model. My
      experience is that the models are crude (subject to further learning)
      and most definitely the assumptions are subject to further learning.
      They both should be revised during the effort, perhaps several times.
      This fact (that we are always learning about BV) makes things even
      more interesting.
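
      (For concreteness, here is what a crude BV calculation model can
      look like. The discount rate, cash flows, and cost below are pure
      assumptions, exactly the kind of inputs that should be revisited
      during the effort; NPV and ROI themselves are just the standard
      formulas.)

          # A crude Business Value model: NPV and simple ROI for a release.
          # All figures here are assumed inputs, to be revised as we learn.

          def npv(rate, cashflows):
              """Net present value: cash flows discounted back to period 0.
              cashflows[0] is the (usually negative) up-front investment."""
              return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

          def roi(gain, cost):
              """Simple return on investment, as a fraction of cost."""
              return (gain - cost) / cost

          cashflows = [-100, 30, 40, 50, 60]  # invest 100 now, expect returns later
          print("NPV at 10%:", round(npv(0.10, cashflows), 2))
          print("ROI:", round(roi(sum(cashflows[1:]), -cashflows[0]), 2))

      Change the assumed rate or cash flows and the "value" of the release
      changes with them, which is why the assumptions deserve at least as
      much attention as the model.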

      COMMON SENSE. Yes, I need to emphasize that, because my impression is
      that people want to think that the measure is reality. The measure is
      always just a crude way of understanding reality. But we are crude
      and unsophisticated creatures, so we need simple measures to
      concentrate our minds. Still, we needn't take any measure too
      seriously, especially if common sense tells us that it is not accurate
      (or it leaves important stuff out).

      In the end, it is more blessed to give than to receive. In the end,
      the only measure is: did we satisfy (or surpass) the customers' needs
      and wants. All else is minor commentary. (As a tease to my vanity, I
      will remind myself that almost all customers don't want software; they
      want a solution to a real problem they have identified. Maybe
      software is part of the solution or improvement.)

      For most of us, this "it's in the eye of the beholder" stuff is too
      vague to live by or act on. (Just as "love your neighbor as yourself"
      is simple and also too hard for me to understand.) It does seem to be
      useful, usually, to be a bit scientific about ourselves. Try an
      experiment (largely on ourselves), use some measures, and test whether
      we get the desired results. Helpful, if we have some sense of humor
      about it. We are not rats for inconsiderate experimentation.

      So, I meant a small improvement in Velocity or in the BV metric. To
      me, 10% is often close to the accuracy tolerance of the metric itself,
      so almost not meaningful.

      Was that helpful at all?

      Regards, Joe

      --- In extremeprogramming@yahoogroups.com, George Dinwiddie
      <lists@...> wrote:
      > Joe,
      > A small improvement in what? How are you judging 'improvement'?
      > I hope this question doesn't seem pedantic. It is, I think, at the
      > heart of any conversation with someone who has no experience with Agile
      > Software Development and is considering a transition.
      > - George
      > Joseph Little wrote:
      > > Hi George,
      > >
      > > +10% to me means a small improvement.
      > >
      > > Thanks, Joe
      > >
      > >
      > > --- In extremeprogramming@yahoogroups.com, George Dinwiddie
      > > <lists@> wrote:
      > >> Joseph Little wrote:
      > >>> We also know that many large firms have tried to do "agile" and
      > >>> failed. Or had mediocre results (+10%, +20%).
      > >> Joe, when you say this, what do you mean by "+10%"?
      > --
      > ----------------------------------------------------------------------
      > * George Dinwiddie * http://blog.gdinwiddie.com
      > Software Development http://www.idiacomputing.com
      > Consultant and Coach http://www.agilemaryland.org
      > ----------------------------------------------------------------------
    • Niraj Khanna
      Message 256 of 256, May 8, 2008
        Hi Unmesh,

        Sorry for not responding in over 1 month. We were away on vacation.
        > So to measure success or failure of "Agile" transition is to measure
        > if people are thinking for themselves, rather than blindly following
        > agile coach's advice and running behind agile buzzword. How can we
        > measure that?

        I think what you're describing may be a symptom or practice of why some
        agile transitions "succeed" over others that "fail". I'm just
        interested in measuring whether it succeeds or fails. A secondary and
        more useful study would be "why do agile transitions succeed/fail".
        Finally, to be quite honest, I wouldn't be surprised to see "blindly
        following the agile manual" in either the "success" or "failure" camp.
        I think Ron has discussed practicing all the XP practices before
        deciding which ones to drop, but I can also see how practicing and
        applying practices without an understanding of the expected benefits
        could lead to adoption failure.
