Re: Online Usability Tests

  • meszaros_susan
    Mar 18, 2008
      My motivation for asking those questions about on-line testing comes
      from our own shop being both agile and virtual/remote. The three of
      us work in three different cities, one of them in another country.
      Sometimes it is really hard to get something tested quickly using
      "on-site" or "in-presence" testers who actually have the background
      or knowledge that represents the user base. So I'm interested in
      getting a better handle on when it makes sense to invest in some
      form of remote testing (I prefer "remote" to "on-line", since most
      of what we do even with face-to-face testing is on-line anyway).

      Plucking the low-hanging fruit on a site, whether it exists on-line
      or on paper, is easy: those problems can be identified quickly with
      just a few testers, because they all hit the same issues and the
      issues quickly come to the surface. In contrast, to figure out the
      subtleties, you need to understand in more detail what you are
      testing, focus your testing on and around that, and use the right
      testers (and probably observe them?).

      But I haven't yet convinced myself that remote testing is useless,
      or that it can't be done cost-effectively. In some respects we
      already use a form of remote testing when we analyse our web site
      stats, even though that happens at a rather broad level and isn't
      focused very well (although it probably could be). Has anybody used
      something like Google Analytics to do this kind of testing? For
      example, setting up goals and measuring conversions (isn't this the
      same as setting a task for users, just implicitly rather than
      explicitly?).
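
      To make the goals-and-conversions idea concrete, here is a minimal
      sketch of the kind of thing I mean, using Google Analytics' ga.js
      tracker and "virtual pageviews" (the account ID, the
      "/goals/task-complete" path, and the onTaskComplete function are
      all made-up placeholders, not from an actual site):

          // Assumed setup: the standard ga.js tracking snippet
          // (UA-XXXXXXX-1 is a placeholder account ID).
          declare var _gat: any;
          var pageTracker = _gat._getTracker("UA-XXXXXXX-1");
          pageTracker._trackPageview();

          // When a user finishes the task we care about, record a
          // "virtual pageview" that exists purely for measurement.
          // In the GA admin you define a goal whose URL matches this
          // path; the goal's conversion rate then becomes an implicit
          // task-completion rate.
          function onTaskComplete(): void {
              pageTracker._trackPageview("/goals/task-complete");
          }

      The appeal is that every visitor becomes a "tester" this way,
      although you only see where people drop out of the funnel, not why.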

      I think there could be real value in developing dynamic testing
      methods that run throughout the life of a site/application. Just as
      we use code tests when refactoring code, why couldn't we use
      built-in or dynamic user tests when refactoring a design (or at
      least for monitoring it)?
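
      As a sketch of what such a built-in "design test" might look like,
      treat a task-completion rate the way a unit test treats an
      assertion (the numbers, the FunnelCounts shape, and the
      fetchFunnelCounts stub are all hypothetical; in practice the counts
      would come from your stats package):

          // A hypothetical funnel measurement: how many visitors
          // started the task vs. how many completed it over some period.
          interface FunnelCounts {
              started: number;
              completed: number;
          }

          // Stub standing in for a real query against analytics data.
          function fetchFunnelCounts(): FunnelCounts {
              return { started: 1200, completed: 310 };
          }

          // The user-test analogue of a unit test: fail loudly if a
          // redesign pushes task completion below an agreed baseline.
          function assertConversionAtLeast(baseline: number): void {
              const { started, completed } = fetchFunnelCounts();
              const rate = completed / started;
              if (rate < baseline) {
                  throw new Error(
                      "Task completion " + (rate * 100).toFixed(1) +
                      "% fell below baseline " +
                      (baseline * 100).toFixed(1) + "%");
              }
              console.log("OK: completion rate " +
                          (rate * 100).toFixed(1) + "%");
          }

          assertConversionAtLeast(0.20);

      Run on a schedule, something like this behaves like a regression
      test for the design: it won't tell you why users are confused, but
      it would tell you when a change made things worse.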

      susan

      --- In agile-usability@yahoogroups.com, "Desilets, Alain"
      <alain.desilets@...> wrote:
      >
      > > I think you can get a lot of "broad" information doing an
      > > on-line study, such as whether or not users can accomplish the
      > > tasks at all, and if so, how accurately/well. You might then
      > > know whether the site is working or not, and maybe even about
      > > certain areas of it. But I wonder how you can capture the
      > > confusion or frustration of users, which is most apparent from
      > > their body language and/or how they use the site (gleaned from
      > > watching rather than from a questionnaire).
      >
      > Here's an example.
      >
      > When I went to the URL you provided to test this Apollo site, I clicked
      > on a button to start the study.
      >
      > This opened up a new Firefox window for me to do my work in. But for
      > some reason, this window did not have any of the menus, and in
      > particular, I could find no way to search within a page. I fiddled with
      > this for a good 2 minutes until I eventually decided to just copy the
      > URL to a different Firefox window (one that I opened myself).
      >
      > This is presumably something that would not be observable by your
      > system.
      >
      > >
      > > Have you tried to work out the quality-of-information /
      > > quantity-of-information tradeoff? Is it better to have a broad
      > > user testing base (like your 1000) or a narrow base (say 5-7),
      > > and what is the cost of getting them (e.g. setting up the
      > > online test vs. other more traditional methods)? Presumably it
      > > would be contingent on what was being tested, and probably
      > > also on the type of users you require.
      >
      > That would be a really interesting thing to find out.
      >
      > Alain
      >