
Re: [agile-usability] Re: Online Usability Tests

  • TomTullis@aol.com
    Mar 16, 2008
      I've been pretty surprised by the specificity of the usability feedback you can get from an online study.  Although it's not apparent in this particular study, one thing I've often done is write the "distracter" answers for each task in such a way that I can tell, with some degree of certainty, what specific error led a participant to that answer.  Of course, a great deal depends on the tasks.  One thing not apparent from the user's perspective in this particular study is that you saw a randomly selected 4 tasks out of a complete set of 9.  I've done other studies where the total set of tasks was as high as 20.  But with enough participants, each person can do a reasonable number and you still get plenty of data, across all participants, on all the tasks.
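A rough sketch of what that per-participant task rotation might look like (the function and task names here are my illustration under assumed conventions, not the study's actual code):

```python
import random

# Hypothetical sketch of the 4-of-9 task sampling described above;
# assign_tasks and the task names are assumptions, not the study's code.
def assign_tasks(participant_id, all_tasks, n_shown=4):
    """Pick a random subset of tasks for one participant."""
    rng = random.Random(participant_id)  # seed per participant for repeatability
    return rng.sample(all_tasks, n_shown)

tasks = [f"task_{i}" for i in range(1, 10)]  # full set of 9 tasks

# With enough participants, every task still accumulates plenty of data.
coverage = {t: 0 for t in tasks}
for pid in range(30):  # e.g. 30 online participants
    for t in assign_tasks(pid, tasks):
        coverage[t] += 1
```

Seeding per participant just makes the assignment reproducible; a production tool might instead balance assignments so each task gets an equal number of participants.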
      On the other hand, it is true that even though an online study may be good at identifying WHAT usability problems may exist, it's not as good at helping you understand WHY the users are having those problems.  Lab tests are generally much better at getting at the why.
      One thing I have discovered is that participants in an online study usually express their frustration pretty clearly.  Here are some sample comments from this study (I'm not going to identify which site these were for!):
      "The information is horribly organized!"
      "The info architecture seems pathetic"
      The tradeoff between the quality and quantity of information you can get from a traditional lab test vs. an online test is definitely an interesting question.  I haven't done a direct comparison between an online test and a lab test of the same site recently, but we did one a few years ago (http://home.comcast.net/~tomtullis/publications/RemoteVsLab.pdf) and got reasonably consistent results from the two tests.  But we did find that there were certain usability issues that only the lab test "caught" and certain ones that only the online test "caught".
      So I'm certainly not advocating that we replace traditional lab usability testing with online testing-- just that both might be useful tools.  In my User Experience team at Fidelity Investments, we do far more lab tests than we do online tests.  But I was wondering if online usability studies might play a significant role in the agile process.  Being able to get usability data from 20-30 people in one day, or even a few hours, seems to fit in nicely with the goals of agile development.
      Yes, we do commonly ask a variety of demographic questions, including ratings of self-reported web experience, subject-matter knowledge, and many other things in our "real" online studies.  They're generally collected on the starting page.  But I didn't include them in this sample study.  Sometimes these data provide very useful ways of slicing and dicing the rest of the data.
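To make the "slicing and dicing" concrete, here is a minimal sketch of grouping task success by one self-reported field; the record layout and field names are assumptions, not the actual study's data format:

```python
from collections import defaultdict

# Hypothetical response records; "web_experience" is a self-reported
# 1-5 rating collected on the starting page, as described above.
responses = [
    {"web_experience": 2, "task": "task_1", "success": True},
    {"web_experience": 2, "task": "task_1", "success": False},
    {"web_experience": 5, "task": "task_1", "success": True},
    {"web_experience": 5, "task": "task_1", "success": True},
]

def success_rate_by(records, field):
    """Group task outcomes by a demographic field and compute success rates."""
    groups = defaultdict(lambda: [0, 0])  # field value -> [successes, total]
    for r in records:
        groups[r[field]][0] += int(r["success"])
        groups[r[field]][1] += 1
    return {k: s / n for k, (s, n) in groups.items()}

rates = success_rate_by(responses, "web_experience")
```

The same grouping works for any field collected on the starting page, which is what makes those demographic questions useful later.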
      By the way, it's not too late to do the sample online study, if other people are interested: http://www.webusabilitystudy.com/Apollo/.  It should only take about 15 minutes, and if you complete it by 11:00 pm (Eastern US time) Monday night, you get entered in a drawing for a $20 Amazon gift certificate!
      In a message dated 3/16/2008 10:48:44 P.M. Eastern Daylight Time, susan.meszaros@... writes:
      I think you can get a lot of "broad" information doing an on-line study,
      such as whether or not users can get the tasks accomplished at all, if
      so then how accurately/well. You might then know if the site is working
      or if it's not, and maybe even about certain areas of it. But I wonder
      about how you can capture the confusion or frustration of users which is
      most apparent from their body language and/or how they use the site
      (gleaned from watching rather than from a questionnaire).

      Have you tried to work out the quality of information / quantity of
      information tradeoff? Is it better to have a broad user testing base
      (like your 1000) or a narrow base (say 5 - 7), and the cost of getting
      them (e.g. setting up the online test vs. other more traditional methods)?
      Presumably it would be contingent upon what was being tested. And
      probably also upon the type of users you require.

      ps. You might want to include some kind of self-reported rating of
      computer experience and subject-matter knowledge (1 - 5 or something).
