
Re: [agile-usability] Re: Online Usability Tests

  • Todd Zaki Warfel
    Mar 20, 2008

      On Mar 20, 2008, at 1:17 PM, William Pietri wrote:
      I think we're talking about different kinds of quantitative methods.

      The choice I typically see isn't 12 local vs 100 remote. It's 4 versus 1,000. Or 10,000. Or 100,000. And actual users doing actual tasks versus recruited users doing requested tasks.

      Well, 4 isn't enough. I wouldn't recommend any fewer than 5, and typically 8-12 is best; with fewer participants you don't have enough data to start seeing significant patterns. That's one reason 1,000 will yield better results with web tracking: you don't have enough participants in your qualitative study.
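
      For reference, here's the math behind those sample sizes. Under the standard 1 - (1-p)^n discoverability model (Nielsen and Landauer), a problem affecting a proportion p of users is seen by at least one of n participants with probability 1 - (1-p)^n. A quick Python sketch, with illustrative p values (assumptions, not measured data):

          # Probability that at least one of n participants encounters
          # a problem that affects a fraction p of users.
          def p_detect(p, n):
              return 1 - (1 - p) ** n

          for n in (4, 5, 8, 12):
              # p = 0.31 is Nielsen/Landauer's average problem frequency;
              # p = 0.02 matches the "2% of order attempts" issues below.
              print(n, round(p_detect(0.31, n), 2), round(p_detect(0.02, n), 2))

      With p = 0.31, detection climbs from about 77% at n = 4 to about 99% at n = 12; with p = 0.02, even 12 participants catch the problem only about 22% of the time.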

      [...] It turned out there were a number of minor user interface issues, none affecting more than 2% of order attempts, which is well below the power of a 12-user study to resolve. And several related to different international styles of entering addresses, which we couldn't have solved with a local user study anyhow. The cost-benefit ratio was also much better; from inception to recommendations, it was under two person-days of work.

      A couple of things here: 
      1. I would suspect that the "minor user interface issues" could have been easily corrected simply by having a well-informed interaction designer or usability specialist assess the interface. 

      2. Did you do a 12-user study on this interface? I'll bet that if you had, you would have found the same issues; I've done this literally hundreds of times. If you didn't, how do you know those issues are beyond what a 12-person study can find? We use web metrics to identify key abandonment areas, then in-person field studies to find out why. For example, we had a client with significant abandonment on one of their cart screens, but they didn't know exactly which fields were responsible. They could have spent time instrumenting every single field with JavaScript to figure it out. Instead, we did a quick study with 12 people and found that a combination of two fields was causing the problem on that screen, and learned exactly why they were an issue. Problem fixed. 

      Just a different approach. And yes, we used a mix of qual and quant, something we do quite often.
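
      To make that metrics-then-field-study flow concrete, here's a minimal sketch of the funnel arithmetic that flags which screen to study in person; the step names and counts are hypothetical:

          # Hypothetical checkout funnel counts pulled from page-view metrics.
          funnel = [
              ("cart",     12000),
              ("shipping",  9400),
              ("billing",   8900),
              ("confirm",   5200),
              ("complete",  5000),
          ]

          # Step-to-step drop-off shows where to focus the follow-up
          # qualitative study (here, the billing screen loses the most users).
          for (step, n), (_, n_next) in zip(funnel, funnel[1:]):
              print(f"{step:9s} -> {100 * (1 - n_next / n):5.1f}% abandon")

      The metrics tell you which screen is bleeding users; the 12-person study tells you which fields, and why.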


      I'm also fond of tracking metrics. In-person user testing is good for things you know to ask about, but you need live site metrics to catch things that you didn't even know had changed. 

      Not sure I agree with that. It might be the way you (or whoever is conducting the tests) run them. Our testing format is open-ended, built around a discovery method. We have some predefined tasks based on goals we know, or hope, people are trying to accomplish with the site. This information comes from combined sources (e.g. metrics, sales, marketing, customer service, customer feedback). But that's not all of it: we always include open-ended discovery time to watch for things we don't expect, can't anticipate, or couldn't plan for, i.e. unexpected tasks. We've done this in pretty much every test over the last couple of years, and every time we find a number of new features, functions, and potential lines of revenue for our client. 

      And once you have some metrics, A/B testing can really pay off. Suppose you want to know which of three new landing page versions increases signups by 5%. You can't do that with a 12-person user test. But you can put each version up in parallel for a week with users randomly assigned to each, and get thousands or tens of thousands of data points.

      True. Our method selection is goal driven: what's your goal? That drives the method. Just to provide the counterpoint, though: the downside of A/B testing the way you're suggesting is that while it will tell you that one version increased signups by 5%, it won't tell you why. A quick 12-person study will tell you why, and give you guidance on which version would probably increase signups. You then take that information and challenge/validate it with a quantitative study like the one you suggest. Or do the reverse: run your A/B test, then do a supplemental 12-person study to find out why. 

      Answering the why will give you far more from a design insight perspective than just seeing what happened.
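
      For a sense of the scale William's A/B scenario implies, the standard two-proportion power calculation gives the per-variant sample needed to detect a 5% relative lift. A sketch in Python; the 10% baseline signup rate is an assumption:

          from math import sqrt

          def n_per_arm(p_base, rel_lift, z_alpha=1.96, z_power=0.84):
              # Approximate sample size per variant for a two-proportion
              # z-test at 95% confidence and 80% power.
              p_new = p_base * (1 + rel_lift)
              p_bar = (p_base + p_new) / 2
              delta = p_new - p_base
              num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                     + z_power * sqrt(p_base * (1 - p_base) + p_new * (1 - p_new)))
              return (num / delta) ** 2

          # Assumed 10% baseline, 5% relative lift (10.0% -> 10.5%):
          print(round(n_per_arm(0.10, 0.05)))  # roughly 58,000 per variant

      Which lines up with the "thousands or tens of thousands of data points" a week of live traffic provides, and which no lab study can.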


      Cheers!

      Todd Zaki Warfel
      President, Design Researcher
      Messagefirst | Designing Information. Beautifully.
      ----------------------------------
      Contact Info
      Voice: (215) 825-7423
      Email: todd@...
      ----------------------------------
      In theory, theory and practice are the same.
      In practice, they are not.
