
RE: [agile-usability] Asynchronous Remote User Testing was Re: User Acceptance Tests (UAT) versus Usability Evaluation

  • Desilets, Alain
    Jun 19, 2008
      This sounds really great, James! Can't wait to try it!

      Alain

      ----

      From: agile-usability@yahoogroups.com
      [mailto:agile-usability@yahoogroups.com] On Behalf Of James Page
      Sent: June 19, 2008 3:00 PM
      To: agile-usability@yahoogroups.com
      Subject: Re: [agile-usability] Asynchronous Remote User Testing was Re:
      User Acceptance Tests (UAT) versus Usability Evaluation

      The system that I am developing is goal-based, or as you call it,
      "scripted". The challenge that we had with just pure logs is that it is
      difficult to work out the users' intentions. Card and Moran could work
      out the operators' intentions from their behaviour, but with most
      applications this is hard. We will be getting the participants to test
      the system remotely, so they will be in their natural environment, but
      the 'goals' will be set.
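
      For illustration, a minimal sketch of what a scripted task definition
      might look like (the field names here are hypothetical, not our actual
      schema):

          # Hypothetical scripted ("goal-based") task for a remote session.
          from dataclasses import dataclass, field

          @dataclass
          class Task:
              task_id: str
              goal: str              # what the participant is asked to achieve
              start_url: str         # where the remote session begins
              success_urls: list = field(default_factory=list)
              time_limit_s: int = 300  # cutoff before the attempt counts as failed

          task = Task(
              task_id="T1",
              goal="Find the translation of the term 'force majeure'.",
              start_url="http://example.org/search",
              success_urls=["http://example.org/term/force-majeure"],
          )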

      As you said in your email in the other thread:
      Even more interesting is the fact that we were able to generate a
      hypothesis for something that the makers of this tool may have missed
      from their analysis of millions of queries, namely, the possibility that
      translators in the legal domain may use their tool in a completely
      different way.

      Our system logs participants' usage behaviour as well as user feedback,
      and then combines the results. Each user is set a number of tasks, and
      we will measure such things as completion time, completion/failure rate,
      task deviation, etc. We are also trying to add visualisations to help
      the team spot where the issues are. This is the hard bit. There are many
      theories on how to do this, but very few actual experiments backing up
      the theories. The few results that are out there use models which are
      time-consuming to set up.
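
      To give a flavour of the metrics, here is a minimal sketch of how they
      could be computed from an event log (the event format and field names
      are made up for the example):

          # Sketch: task metrics from one participant's event log.
          # Each event is a dict: {"t": seconds, "event": str, "step": str}.

          def task_metrics(events, expected_path):
              completed = any(e["event"] == "task_complete" for e in events)
              steps = [e["step"] for e in events if "step" in e]
              # Task deviation: steps taken that are off the expected path.
              deviation = sum(1 for s in steps if s not in expected_path)
              return {
                  "completion_time_s": events[-1]["t"] - events[0]["t"],
                  "completed": completed,
                  "deviation_steps": deviation,
              }

          def failure_rate(all_attempts, expected_path):
              results = [task_metrics(a, expected_path) for a in all_attempts]
              return sum(1 for r in results if not r["completed"]) / len(results)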

      Hopefully, once some user data has been built up, we can then use that
      data to analyse real user logs. But as you say, "One of the advantages
      of the scripted approach is that the test cases can double up as both
      scenarios for UAT and Usability testing." I totally agree. So the tool
      will have value from day one :)

      Our intention is that a development team will be able to get results
      back in a couple of hours, using more than the normal number of
      participants, and at far lower cost. The challenge is that you lose
      some of the data by moving the testing out of the lab. We are running
      an experiment with two universities to see if the visualisations will
      find the same number of issues as normal lab-based tests.

      Sadly I will not be at Agile 2008, as I will still be in the data
      collection stage of our experiment.

      All the best

      James


      On Thu, Jun 19, 2008 at 2:36 PM, Desilets, Alain
      <alain.desilets@...> wrote:
      > Yes, there have been several developed... It is called Asynchronous
      > Remote User Testing. The famous test by Card and Moran used real
      > phone operators' keystrokes; they then modeled the operators'
      > behaviour using CPM-GOMS to identify the issues. See
      > http://en.wikipedia.org/wiki/GOMS and
      > http://en.wikipedia.org/wiki/GOMS#Success_of_GOMS
      >
      > So they kind of used what you suggest in reverse. They collected real
      > user data, and then used the model to gain insight into where the
      > system was not performing.
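
      (For concreteness: the simplest GOMS variant, the Keystroke-Level
      Model, predicts expert task time by summing standard operator times.
      A toy sketch using the published KLM estimates, rather than the full
      CPM-GOMS critical-path analysis:)

          # Keystroke-Level Model: operator times in seconds, from
          # Card, Moran & Newell. CPM-GOMS additionally models parallel
          # perceptual/cognitive/motor activity.
          KLM = {"K": 0.2,    # keystroke (average skilled typist)
                 "P": 1.1,    # point with a mouse
                 "H": 0.4,    # home hands between keyboard and mouse
                 "M": 1.35}   # mental preparation

          def klm_time(ops):
              return sum(KLM[op] for op in ops)

          # "Think, point at a field, home to keyboard, type 7 characters":
          print(klm_time(["M", "P", "H"] + ["K"] * 7))  # ~4.25 s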
      The systems you describe are interesting, but they are quite different
      from what I was talking about. They can be used to gather *natural*
      usage data from a deployed system (either in actual production, or in
      a pilot study setting), and to then make sense of that data. I'm
      talking about a more traditional paradigm where the system is used to
      collect *scripted* usage data, i.e. usage data in the context of tasks
      that have been pre-established by a UX specialist.

      I think there is value to both approaches. One of the advantages of the
      scripted approach is that the test cases can double up as both scenarios
      for UAT and usability testing. Also, you can carry out scripted
      usability testing even before the system is ready for use in a more
      natural setting. The disadvantage, of course, is that the usage data is
      less realistic.
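
      To make the dual-use point concrete, a small sketch (the result shape
      and threshold are made up for illustration): the same scripted task's
      results can drive a pass/fail UAT check and also yield graded
      usability measurements.

          # Results of one scripted task: (completed, elapsed_seconds).
          results = [(True, 42.0), (True, 58.5), (False, 300.0), (True, 37.2)]

          completion_rate = sum(c for c, _ in results) / len(results)

          # UAT view: a pass/fail acceptance criterion on the scenario.
          uat_passed = completion_rate >= 0.75

          # Usability view: graded measurements, not just pass/fail.
          times = sorted(t for c, t in results if c)
          median_time_s = times[len(times) // 2]
          print(uat_passed, completion_rate, median_time_s)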


      > I think both techniques could be quite useful... In fact, so useful
      > that I am working on Asynchronous Remote User Testing software at the
      > moment! :) And I will invite you, once we are at that stage, to test
      > the visualisations of the user behaviour.

      Cool. Will you be at Agile 2008 in Toronto?

      Alain