
Success and failure

  • Dave Nicolette
    Jun 27, 2007
      Having been offline for several days, I haven't been following the
      thread about agile, success, and failure. I'm afraid I'll have to skip
      most of it, but I did want to address a question Ravi Mohan raised
      last week about how agilists look at success and failure, and how they
      examine "failure" with an eye toward improvement.

      To begin, I'd like to offer definitions of "success" and "failure."
      It's very easy to talk in circles when people have different ideas of
      what these words mean.

      Here's a definition of "success":

      1. We predicted X would happen.
      2. X happened.

      Here's a definition of "failure":

      1. We predicted X would happen.
      2. Y happened.

      People who tend to think about things in terms of success vs failure,
      in a binary way, often appear to be working from those definitions of
      the words, even if they don't express it in just those terms. There
      are two fundamental points on which this mindset diverges from agile.

      One is in the implicit assumption that it is possible to predict
      future outcomes accurately. Many people in our profession have tried
      to hone the "process" in ways that will guarantee people's predictions
      come true. Agilists recognize this is not possible, and they have
      sought an alternative.

      The other is that a binary pass/fail mindset tends to draw the eye
      backward; people are concerned with assigning blame for "failure" to
      others, and avoiding it for themselves. The time they spend doing so
      is time they can never recover to use again for forward-looking work.
      They are marching boldly into the future...facing backward.

      Agile planning involves looking ahead to several time horizons (often
      there are five planning horizons: day, iteration, release, project,
      and program or strategic plan), and thinking about issues at different
      levels of detail - more detail for near time horizons, and less detail
      for farther horizons. We have to /try/ to predict things, but we
      understand our predictions can only be approximate, even on our best
      day. As each time interval elapses, agile teams pause and reassess
      where they are. Has the customer's goal changed? Have constraints
      changed? Have priorities changed? What have we done that was effective
      in delivering value, and how can we ensure we keep it up? What have we
      done that impeded progress, and how can we improve it? Now let's
      re-plan for the upcoming time horizons at the appropriate level of
      detail, and march on - possibly with a course correction.

      Constant introspection about what is working and what isn't working is
      the norm for agile teams. The idea that agilists don't analyze failure
      is simply an error. Why would intelligent people make such an error?
      Maybe one reason is the differing concepts of success and failure.

      Here's another definition of "success":

      1. The solution in production delivers more business value than its
      own cost.

      And a complementary definition of "failure":

      1. The solution in production delivers less business value than its
      own cost.

      Okay, so what is "business value?" According to a 2006 survey (sorry,
      I don't recall whether it was Forrester or Gartner), business
      executives want the following four qualities from the IT projects they
      sponsor, in priority order:

      1. Time to market
      2. Alignment with business needs
      3. Low total cost of ownership
      4. High quality

      Based on feedback from executives, then, we can see that an approach
      or process or technique that facilitates the delivery of /some/
      working features quickly, and allows new features and enhancements to
      be deployed regularly on short timelines, would be an advantage for
      satisfying point #1. A process that delivers nothing until all
      documented requirements have been satisfied would be a disadvantage
      for satisfying point #1.

      "Alignment with business needs" doesn't mean the same thing as
      "formally satisfies all documented requirements." It means that the
      highest-value features are deployed first. As iterative development
      and incremental delivery continue through a project, whatever features
      are most valuable (as defined by the customer) will be delivered
      first, to the extent that is logistically possible - even given that
      requirements and priorities will change throughout the project. A
      process that facilitates this would be an advantage for satisfying
      point #2. A
      process that calls for software to be built from requirements defined
      far in advance of the delivery date would be a disadvantage for
      satisfying point #2.

      Cost of ownership of software in a corporate IT environment has
      several components, and the largest tends to be production support,
      bug fixes, and enhancements to the software itself, rather than
      facilities costs. Practices that result in software that is
      well-structured, clean, understandable, maintainable, and that has
      built-in tests to verify its correctness would be advantageous for
      satisfying point #3. (I will be presenting on this topic at Agile
      2007, BTW.) Practices that result in code ridden with technical debt
      and that lacks any repeatable, automated verification would be
      disadvantageous for satisfying point #3.

      Although quality was mentioned by respondents to the survey, it is
      more a "soft" value than a "hard" value. However one might define
      "quality," I think it's safe to say most people would agree that
      high-quality software is more likely to satisfy points #1, #2, and #3
      than low-quality software. A process that includes practices designed
      to keep the defect rate low, to keep the structure of the code clean,
      and to provide automatic testing would be advantageous in maintaining
      high quality. A process that depends on ineffective quality assurance
      methods, such as rigorous documented requirements with detailed
      requirements tracking, visual code reviews without automated tests,
      and post-facto QA testing by a separate group from the project team
      would be disadvantageous.

      Agile methods require the direct participation of the customer
      throughout the project. Therefore, agile methods provide a simpler and
      more reliable means of understanding business needs than any
      documentation-based approach. Agile methods are built around the
      complementary ideas of iterative development and incremental
      delivery, as well as continuous improvement. These elements of
      agile development support point #2 very strongly.

      The close collaboration with the customer, combined with constant
      feedback ("inspect and adapt" cycles, in Scrum terminology), is the
      way agile teams deal with failure, according to the second set of
      definitions I offered. If a project is destined not to deliver more
      value than its cost, the next-best outcome for the enterprise is to
      cancel the project and cut its losses. Agile methods provide the
      information necessary to make that decision /earlier/ in the project
      than any other development process. What this means is that the
      worst-case
      outcome for an agile project is that the organization realizes it's
      been on the wrong track early, and can cut its losses and change
      direction sooner. As "failures" go, that's really not so bad.

      Besides, as Rudyard Kipling wrote, triumph and disaster are merely
      impostors, anyway. What we really have are outcomes that provide some
      combination of positive and negative consequences. It's more
      important to learn something from the outcomes that we can use going
      forward than it is to worry excessively about which projects to label
      "success" or "failure." Both those designations are gameable.

      Quite a few of us have learned, through experience, that agile methods
      tend to do a good job of supporting the four key desires of business
      executives with respect to IT projects. Far from a "religion,"
      agilists would happily learn about practices that can do an even
      better job. Unfortunately, most critics of agile methods don't offer
      any alternatives that they believe would work better. Until they do,
      it seems only prudent to continue to use the best practices we
      currently know.
