
Re: [agile-usability] Agile UCD Velocity Points

  • William Pietri
    May 9, 2008

      leina elgohari wrote:
      I really wanted to keep the maths very very simple (ROI can make the maths complicated)

      Yes. This is why most teams I work with judge the value of features intuitively or through proximate measures, like user activity.

      The only people I have seen successfully use rigorous money-based metrics for business value are large companies with relatively stable business models and one or more people devoted full time to tracking metrics and modeling money flow. And there I think I've only seen it used at the project level, not the story level.

      I imagine you could also use money-based metrics in a casual way, as a way for people to try to quantify their intuitions a little. I think that would only work if your business model is pretty stable, though.
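      To make that casual approach concrete, here's a minimal sketch of what I mean by quantifying intuitions; the story names, values, and point costs are all hypothetical, and the estimates only mean anything if the business model is stable enough to estimate at all:

      ```python
      def casual_value_score(stories):
          """Rank stories by a rough, admittedly speculative value-per-point.

          Each story is (name, estimated_monthly_value, estimated_cost_in_points).
          The point is comparison, not precision.
          """
          return sorted(stories, key=lambda s: s[1] / s[2], reverse=True)

      # Hypothetical stories with gut-feel dollar estimates:
      ranked = casual_value_score([
          ("signup flow tweak", 5000, 2),  # ~$5k/month for 2 points of effort
          ("new admin report", 3000, 5),   # ~$3k/month for 5 points
      ])
      ```

      Even this crude ratio can surface disagreements worth talking about, which is most of the value.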

      (1) My calculations assume that usability involvement peters out towards the tail-end of the project. Is that the case with usability in an agile project?


      Not in my experience. Many of the projects I work on don't end, so I don't have a ton of data for you, but I generally see usability involvement as relatively stable. I see more usability-focused or usability-generated stories later in a project, driven from real-world data.

      For example, just this week I saw a team release a usability-improving feature to reduce people dropping out in the middle of a flow. And now they're working on a feature to give them better metrics that they will then use to drive future usability decisions.
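      The kind of metric such a feature feeds is simple to state; here's a minimal sketch (the function and numbers are mine, not that team's) of per-step drop-off in a multi-step flow:

      ```python
      def dropoff_rates(step_counts):
          """Fraction of users lost at each transition in a flow.

          step_counts: users who reached each step, in order,
          e.g. [started, reached_step_2, completed].
          """
          return [
              (entered - advanced) / entered
              for entered, advanced in zip(step_counts, step_counts[1:])
          ]

      # Hypothetical flow: 1000 start, 700 reach step 2, 650 finish.
      rates = dropoff_rates([1000, 700, 650])
      # The first transition (30% loss) is the leak worth fixing first.
      ```

      Watching where the biggest leak is, release over release, is what drives those later usability stories.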

      (2) Mynott's table refers to the cost of making changes where the project has adopted a Waterfall approach. Is Mynott's table relevant in agile projects?

      If it is, then we have failed, and should go back to waterfall.

      My belief is that the cost-of-change curve has been dramatically lowered, and that other cost curves (e.g., increasing cost of distant research and planning, increasing chance of external change, increasing chance of chained errors, cost of money spent with no return) can more than compensate for the rest.

      At least in my on-the-web world, "ship it and see what happens" isn't just a viable alternative; it's seen as superior in many cases, especially when using techniques like A/B testing. So I think you should avoid using a mathematical model that encourages a lot of work based on speculation.
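      For what "see what happens" amounts to in practice, here's a minimal sketch of the standard two-proportion z-test behind a simple A/B comparison; the counts are made up, and real tools wrap this in more ceremony:

      ```python
      import math

      def ab_test(conversions_a, n_a, conversions_b, n_b):
          """Compare conversion rates of variants A and B.

          Returns (rate_a, rate_b, z, two_sided_p_value) using a
          pooled two-proportion z-test.
          """
          p_a, p_b = conversions_a / n_a, conversions_b / n_b
          p_pool = (conversions_a + conversions_b) / (n_a + n_b)
          se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
          z = (p_b - p_a) / se
          # Two-sided p-value from the standard normal CDF.
          p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
          return p_a, p_b, z, p_value

      # Hypothetical experiment: 200/2000 convert on A, 260/2000 on B.
      rate_a, rate_b, z, p = ab_test(200, 2000, 260, 2000)
      ```

      The point is that a cheap shipped experiment gives you real numbers, where the speculative model only gives you arithmetic on guesses.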


      Hoping that helps,

      William