
Re: [agile-usability] What sort of information would a UCD person suggest analysts collect?

  • William Pietri
    Message 1 of 110 , Oct 12 10:54 PM
      On Tue, 2004-10-12 at 21:27, Jeff Patton wrote:
      > Seriously, I'd like to help coach analysts on the sort of information they
      > should collect that would be most valuable to a downstream design solution.
      > Assume they don't understand or have time to understand much about UCD. Any
      > advice and experience others might have would be valuable.

      It seems to me that you need to engage the analysts in ways that get
      them to convey raw data without suggesting that you don't respect their
      analytical skills.

      Ages ago, I knew a number of people who worked at ELab, a design
      consulting firm (eventually swallowed whole by a dot-com consultancy).
      Half anthropologists and half designers, they would generally not design
      anything; instead, they would collect a lot of raw material and then
      filter out the snippets of people's lives that would convey important
      insights about the context into which the products had to fit. They were
      very smart and very analytical, but used their abilities in a way that
      didn't greatly impose their judgements on the material.

      Depending on the analysts you have, it might be enough to suggest that
      they act as anthropologists, creating a documentary about that strange
      tribe, the End Users. It would help if this documentary weren't just for
      the designer, but for the developers and the executives, so that they
      can really understand the people that they are devoting months or years
      to serving.

      Or, you might have some success just getting to use some of the tools
      that these folks used. They used a ton of video, both of interviews and
      of people doing whatever was under study. Video is complicated to work
      with, though, so photographs with notes might be a good substitute.

      Also common was the visual diary. They would give a subject a disposable
      camera and a little blank booklet. The subject would be asked to explain
      some aspect of their life with diary entries and references to pictures.
      Not only does this mean the analyst doesn't have to hover, but it also
      brings the user's own experience to the foreground, unfiltered through
      an analytical framework.

      And personally, my favorite thing to collect is photos of handwritten
      documents and notes. Once I was working for a government body helping to
      automate a process. I spent hours looking through the files of forms,
      and it turned out that it made the staff very nervous; they thought I
      was checking their work and looking for problems.

      But eventually I pulled them over and showed them what I was really
      looking at: their handwritten annotations on the forms. They discovered
      the need to capture all sorts of data and process information that the
      designers of the forms had missed, and they had developed their own
      departmental style for doing it. It was a fantastic lesson for me: they
      were the real analysts; my job was mostly figuring out how to express
      their insights to managers and machines.


      P.S. Although I don't know anybody who works for the former ELab
      anymore, I know a number of their former employees do this sort of
      context investigation on a freelance basis. If people would like
      references to some of them, please drop me a line off-list.
    • Chris Pehura
      Message 110 of 110 , Oct 27 12:22 PM
        Jon - understood.
        Had similar experiences - dire consequences. They're the main reason I'm always having those stand up meetings, constantly verifying and validating.
        For that mention of "chaos", I was thinking about systematic change in systems.
        Over the years, I've identified several such patterns for UIs, architecture, user interaction, and source code.
        From playing around, I arrived at a straightforward generic notation and model to explain change.
        I'm wondering whether any of you have built or come across such generic "change models" and "notations".
        -----Original Message-----
        From: Jon Meads [mailto:jon@...]
        Sent: Wednesday, October 27, 2004 11:36 AM
        To: agile-usability@yahoogroups.com
        Subject: RE: [agile-usability] Research on users reaction to changes in an interface

        I wouldn't take the chaos relationship any further than what I said - small changes can have major effects. As for user expectation, here's a real story.
        The engineers were designing a small, text-based command-language UI for managing the recovery CD for a computer system. The objective was to allow the user to reinstall just the operating system without reformatting the target drive, but with the option to reformat the entire drive. The UI looked perfect to me - it made sense, and was straightforward and simple. My expectation was that the user would just follow along naturally and would be successful. I was so sure of it that I came close to recommending that we skip the usability testing and save some money. But that wasn't the professional thing to do.
        During usability testing, 3 out of 4 users failed and ended up reformatting the entire drive. The problem was that my expectations were unrealistic. I was familiar with the need for a CD to take a few seconds to spin up - seemed like waiting just a bit was perfectly natural and I expected the users to do that. They were unfamiliar with the use of the CD and expected it to be immediately accessible just like a floppy drive would be. When they got the DOS response of not being able to read the CD, they immediately went back and took the other option thinking they had done something wrong.
        The moral of the story is that you can't rely on your expectations of what users will do. Your expectations may be right 90% of the time but, just as you wouldn't trust a computer that was right 90% of the time, you don't want to rely on your expectations of what people will do unless there is no other option. It makes sense to study users to understand possible design options and then to test your design to see how right you are.
        -----Original Message-----
        From: Chris Pehura [mailto:chris@...]
        Sent: Wednesday, October 27, 2004 8:54 AM
        To: agile-usability@yahoogroups.com
        Subject: RE: [agile-usability] Research on users reaction to changes in an interface

        My experience with UI changes comes down to user expectation. If the user expects to click a button and you change the button to a field, they will keep clicking on the field until they unlearn the click. If users are used to doing something when they see a red block on the screen and you change that color to blue, they will wait to see red until they unlearn the wait. Even if you tell users which changes were made and where, users still have to unlearn and relearn on their own.
        I've also found that users navigate an interface in a very specific way in sync with their "physical navigation". Minor changes in UI will affect navigation both on the screen and in the "physical environment". Things are used in ways never intended for reasons previously unknown.
        I've found it much faster to make a change and see what happens than to figure out all of that navigation stuff.
        Also, this mention of chaos: is it being used to mean "unpredictability", or is it being used in the scientific sense?
        In science, if usability is chaotic, then there are patterns in the changes in usability (order in chaos).
        Any models come to mind?
        I did chaos experiments with analog computers and motors. Not sure if that stuff is mappable to software though.
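        Since "chaos" in the scientific sense usually means sensitive dependence on initial conditions, here is a minimal sketch in Python (using the logistic map, a standard textbook toy model; nothing specific to usability or to the analog-computer experiments mentioned above) of how a tiny change snowballs:

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n); at r = 4.0 it is chaotic,
# so two trajectories that start almost identically diverge exponentially.

def trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map `steps` times starting from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.2)
b = trajectory(0.2 + 1e-9)  # a "tiny change" to the starting point

early = max(abs(x - y) for x, y in zip(a[:10], b[:10]))
late = max(abs(x - y) for x, y in zip(a[50:], b[50:]))
print(f"max divergence in first 10 steps: {early:.2e}")
print(f"max divergence after step 50:     {late:.2e}")
```

        The early divergence stays tiny while the later divergence is on the order of the whole unit interval - the "small change, major effect" pattern in its purest form.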
        -----Original Message-----
        From: Jon Meads [mailto:jon@...]
        Sent: Wednesday, October 27, 2004 5:21 AM
        To: agile-usability@yahoogroups.com
        Subject: RE: [agile-usability] Research on users reaction to changes in an interface

        Tom Landauer has suggested that usability is chaotic: small changes can have major effects. I have seen this myself with some UIs although, for most, small, non-functional changes have had minimal or no effect.
        But it really takes usability testing to verify the effect a change has on a user. The problem is that, for users, changes to a GUI are not pixel changes but changes in the gestalt of the UI and in how it affects users' perceptions and cognition. We can normally make a good guess as to what effect a change will have, but we can also be surprised on occasion.
        -----Original Message-----
        From: Lauren Berry [mailto:laurenb@...]
        Sent: Tuesday, October 26, 2004 1:58 PM
        To: agile-usability@yahoogroups.com
        Subject: [agile-usability] Research on users reaction to changes in an interface

        Does anyone know of any research done on users' reactions to changes in a GUI?
        I'm looking for things such as:
        - What's the time taken to re-learn a subtle, medium, or substantial change?
        - If you change the UI to improve its usability, how long before the customer is comfortable in the new system?
        - If you improve the usability, is the user happier with the better UI once they have learned it, or does the cost of learning outweigh the benefits of the change?
        Of course, I'm sure these questions have a variety of answers depending on the users...
        Any pointers to work done would be most appreciated.
