
Re: [agile-usability] How do you avoid scope creep when integrating user research into the scrums?

Austin Govella
Aug 11, 2011
      On Aug 11, 2011, at 8:16 PM, Jared Spool <jspool@...> wrote:

      I have a client who recently asked me about this problem. I was wondering how you'd handle it?

      They are new to UX and to Agile methods, so they are struggling in interesting ways.

      As part of a recent project, they added some usability testing to their sprints. They'd never done a usability test before and, as a result, discovered many usability issues with the design.

      However, many of the problems they found were outside the original scope of the project. The UX team members wanted desperately to add many of the problems to the project's backlog, but instantly found resistance from the product owners who didn't want to delay the delivery. The result was a sense from the UX folks that the product owners didn't care about a good user experience.

      The most important thing to remember is that agile changes product development into a continuously moving stream, and you have to be willing to jump in and float along, changing what you can when you can.

      Specifically, though, there are lots of small ways to mitigate this problem:

      1. The product backlog should be prioritized. Any new work is added to the product backlog and prioritized accordingly. If the usability fixes get prioritized at the bottom of the pile, then so be it.

      2. UX Bucket. Have the teams begin setting aside a set number of developer hours/points specifically for the UX team to use. These can be used for usability fixes that don't make it high enough on the backlog, or to do research, or to do fixes you notice. (See the first sketch after this list.)

      3. The Shelf. Stick the fixes "on the shelf". Next time you're developing in that area of the product, pull them down and roll the changes into adjacent work. (A lot of changes take negligible effort if a developer is already in that neighborhood of the code.)

      4. Schedule iteration/fix time into sprints. A test requires time for testing and time for fixes. Of course, no one will know what the fixes will be until the tests are done, so they may grouse about scheduling a black box of time (identical to the aforementioned UX bucket), but they need to suck it up. :-)

      5. Define UX quality as part of 'done'. A disagreement about what needs to be completed before you can deliver is really a disagreement about what done means. UX quality should be part of the definition of done. When you schedule a story, everyone on the team should agree whether you're shooting for functional, usable, parity with comparable experiences, or awesome. (See the second sketch after this list.)

      6. Write better stories. Add comparisons. (Registration should be like Twitter's.)

      7. Publicize how bad the usability errors are to everyone at the company. Commonly perceived problems are more quickly addressed than problems only you see. (Chicken Little never gets the sky fixed unless he's the CEO or product manager.)

      8. Design mockups or prototypes of the fix and publicize. It's easier to squeeze in additional scope when it's super clear what the change is. And/or stick these fixes on the walls. Sooner or later, people will wonder why these completed fixes haven't been implemented. For goodness sakes, they're already designed!

      9. Measure key baselines. If the fixes would affect key baselines, then you have a stronger case for getting them prioritized towards the top of the backlog. (See the third sketch after this list.)

      10. Forward (or print and post, drop on desks) any case studies you see where usability fixes and/or UX improvements improved metrics. No one cares about UX. Everyone cares about success.
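
      To make 1 and 2 a little more concrete, here's a rough sketch (plain Python, every story, point value, and number invented) of a prioritized backlog where usability fixes are just more items to rank, plus a fixed UX bucket reserved before the rest of the sprint is planned:

      # Items 1 and 2: usability fixes live in the same prioritized backlog
      # as everything else, and each sprint reserves a fixed "UX bucket".
      backlog = [
          {"story": "Checkout: save credit card", "points": 8, "priority": 1},
          {"story": "Fix: signup error text is unreadable", "points": 2, "priority": 4},
          {"story": "Search filters", "points": 5, "priority": 2},
          {"story": "Fix: users miss the Continue button", "points": 3, "priority": 3},
      ]

      SPRINT_CAPACITY = 20   # team velocity in points (hypothetical)
      UX_BUCKET = 4          # points reserved for UX fixes/research each sprint

      def plan_sprint(backlog, capacity, ux_bucket):
          """Fill the sprint from the top of the backlog, leaving the UX
          bucket untouched for whatever the UX team spends it on."""
          remaining = capacity - ux_bucket
          planned = []
          for item in sorted(backlog, key=lambda i: i["priority"]):
              if item["points"] <= remaining:
                  planned.append(item)
                  remaining -= item["points"]
          return planned

      for item in plan_sprint(backlog, SPRINT_CAPACITY, UX_BUCKET):
          print(item["priority"], item["story"], item["points"])
      print("UX bucket:", UX_BUCKET, "points")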
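
      For 5, the point is that the UX target gets agreed when the story is scheduled, not argued about at delivery. A minimal sketch of what that agreement might look like as data, using the quality levels above (the stories and results are invented):

      # Item 5: each story carries an agreed UX target, so "done" isn't
      # renegotiated at delivery time. Levels come from the list above.
      from enum import IntEnum

      class UXTarget(IntEnum):
          FUNCTIONAL = 1   # it works
          USABLE = 2       # people can actually get through it
          PARITY = 3       # as good as comparable experiences (e.g. Twitter signup)
          AWESOME = 4      # noticeably better than the comparisons

      stories = [
          {"story": "Registration", "target": UXTarget.PARITY, "reached": UXTarget.USABLE},
          {"story": "Password reset", "target": UXTarget.FUNCTIONAL, "reached": UXTarget.FUNCTIONAL},
      ]

      for s in stories:
          done = s["reached"] >= s["target"]
          print(s["story"], "- target:", s["target"].name,
                "reached:", s["reached"].name, "done:", done)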
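
      And for 9, "key baselines" just means numbers you already track for the flows the fixes touch: task success, time on task, support tickets, conversion. A tiny before/after comparison (all figures made up):

      # Item 9: if you have baseline numbers for the flows the usability
      # fixes touch, the case for prioritizing those fixes is arithmetic.
      baseline  = {"checkout_success_rate": 0.62, "median_time_to_checkout_s": 210}
      after_fix = {"checkout_success_rate": 0.74, "median_time_to_checkout_s": 150}

      for metric, before in baseline.items():
          after = after_fix[metric]
          change = (after - before) / before * 100
          print(f"{metric}: {before} -> {after} ({change:+.1f}%)")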
