Re: [agile-usability] Re: Online Usability Tests

William Pietri
Mar 20, 2008

      Todd Zaki Warfel wrote:
      > Personally, if I had my choice of observing 12 participants in person
      > or 100 people run through an automated remote process, I'd take the 12
      > in person every single time. In the 15+ years I've been doing this
      > type of research, I've yet to find a pattern identified by survey and
      > remote methods w/100 people that we weren't able to identify with 12
      > in person.
      > The main benefit we get from more quantitative methods is to satisfy
      > the marketing research people who only believe in quantitative methods.

      I think we're talking about different kinds of quantitative methods.

      The choice I typically see isn't 12 local vs 100 remote. It's 4 versus
      1,000. Or 10,000. Or 100,000. And actual users doing actual tasks versus
      recruited users doing requested tasks.

      The simplest version of this is just log analysis combined with some
      basic instrumentation. For example, at one client I looked into failed
      orders. Looking at the data, circa 10% of orders were failing at the
      credit card processing stage. Some of them were legitimate failures, but
      I suspected that not all of them were. So we logged every bit of
      information in every order attempt.
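
      A minimal sketch of that kind of instrumentation, in Python; the
      field names and the shape of the gateway response here are
      assumptions for illustration, not the client's actual code:

      import json
      import logging
      import time

      log = logging.getLogger("order_attempts")

      def log_order_attempt(form_fields, gateway_response):
          # Record everything we know about one attempt as a JSON line,
          # so failures can be grouped and counted later.
          record = {
              "timestamp": time.time(),
              "fields": form_fields,  # raw form input, address lines included
              "gateway_status": gateway_response.get("status"),
              "gateway_reason": gateway_response.get("reason"),
          }
          log.info(json.dumps(record))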

      It turned out there were a number of minor user interface issues, none
      affecting more than 2% of order attempts, which is well below the power
      of a 12-user study to resolve. And several related to different
      international styles of entering addresses, which we couldn't have
      solved with a local user study anyhow. The cost-benefit ratio was also
      much better; from inception to recommendations, it was under two
      person-days of work.
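
      The analysis step can be as simple as counting failure reasons out
      of those logs. A rough sketch, assuming the JSON-lines format above
      and a made-up log file name:

      import json
      from collections import Counter

      reasons = Counter()
      total = 0
      with open("order_attempts.log") as f:
          for line in f:
              record = json.loads(line)
              total += 1
              if record["gateway_status"] != "ok":
                  reasons[record["gateway_reason"]] += 1

      # Even issues hitting only 1-2% of attempts show up clearly here.
      for reason, count in reasons.most_common():
          print(f"{reason}: {count} ({100.0 * count / total:.1f}% of attempts)")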

      I'm also fond of tracking metrics. In-person user testing is good for
      things you know to ask about, but you need live site metrics to catch
      things you didn't even know had changed. One client has a very
      data-driven site, and manually testing all the key pages with every data
      update is impossible. They track dozens of metrics, and significant
      deviations in key numbers get people paged. Good metrics also let you
      catch surprises with what you thought were minor changes to the site.
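
      The deviation check itself needn't be fancy. A sketch of the
      simplest version, with the paging hook left as a stub and the
      window and threshold picked purely for illustration:

      import statistics

      def check_metric(name, history, today, threshold=3.0):
          # Page if today's value sits more than `threshold` standard
          # deviations away from the recent mean.
          mean = statistics.mean(history)
          stdev = statistics.stdev(history)
          if stdev and abs(today - mean) > threshold * stdev:
              page_on_call(f"{name} deviated: {today} vs. recent mean {mean:.1f}")

      def page_on_call(message):
          # Hypothetical hook; wire it to whatever paging system is in use.
          print("PAGE:", message)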

      And once you have some metrics, A/B testing can really pay off. Suppose
      you want to know which of three new landing page versions increases
      signups by 5%. You can't do that with a 12-person user test. But you can
      put each version up in parallel for a week with users randomly assigned
      to each, and get thousands or tens of thousands of data points.
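
      Mechanically, that's stable random assignment plus a comparison of
      signup rates once the week is up. A sketch, with the variant names
      and the statistical test chosen for illustration:

      import hashlib
      from math import sqrt

      def assign_variant(user_id, variants=("a", "b", "c")):
          # Hash the user id so each visitor sees the same landing page
          # on every visit.
          digest = hashlib.sha256(user_id.encode()).hexdigest()
          return variants[int(digest, 16) % len(variants)]

      def signup_rate_z(signups_a, visitors_a, signups_b, visitors_b):
          # Two-proportion z-test: is the difference in signup rate
          # between two variants bigger than chance would explain?
          # An |z| above roughly 2 suggests a real difference.
          p_a = signups_a / visitors_a
          p_b = signups_b / visitors_b
          pooled = (signups_a + signups_b) / (visitors_a + visitors_b)
          se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
          return (p_a - p_b) / se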

      For any of these approaches, the data can lead to additional questions.
      Some of those are best answered with more data, but many can be more
      effectively approached with traditional user testing.
