
Re: What sort of information would a UCD person suggest analysts collect?

  • tombellman
    Message 1 of 110 , Oct 13, 2004

      I don't think you can do effective requirements gathering without a
      UCD person either leading, or being an integral part of, the site
      visit team.

      At my company, which makes analytical instrumentation used by
      chemists and biologists, I and the other user experience person give
      a half-day course to teams that are going to be doing site visits. We
      talk about how to make good observations, how to interview, and how
      to record information effectively. We also get them thinking about
      UCD deliverables that will be generated: personas, workflow breakdown
      and environment profile so that they will collect information that
      contributes to these. Ideally, one of us will be with the team, but
      if that isn't possible, at least they are primed to collect useful
      information.
      It helps that management is really behind this. There are a lot more
      projects at the company than 2 of us can handle, but increasingly
      teams are approaching us to get our help at an early stage, and the
      course has been well received.


      Tom Bellman
      User Experience Architect
      MDS Sciex
      (905) 660-9006 ext. 2700

      --- In agile-usability@yahoogroups.com, "Jeff Patton" <jpatton@a...>
      wrote:
      > I suspect others doing some variation of user centered design might
      > have run into this: There's one UCD person on the project, and 4 or
      > 5 analysts. The analysts rush out and interview lots of users and
      > record lots of processes. They talk directly to the users, who tell
      > them what the software should look like. They bring back all this
      > information and dump it into a document labeled "requirements."
      > "Requirements" are then handed to the UCD person, who's asked to
      > "do that voodoo that you do." Sadly, there's not enough in the
      > requirements about the users and why they're doing what they do.
      > And worse than that, if the document was actually published with
      > the title of "requirements", there are implicit commitments to
      > what's in it that may or may not be appropriate.
      > In short strokes, there's not enough information to do a good job
      > at user centered design. And there are commitments to specific
      > design decisions already made that can't easily be backed away from.
      > Is it just me that's seen this, or has it happened to you as well?
      > Assuming you can't change the ratio of analysts to UCD people, and
      > the analysts are on their way out the door, can you suggest what
      > you'd like them to gather? [Keep it quick, they have their laptops
      > and they're headed for the airport...]
      > Seriously, I'd like to help coach analysts on the sort of
      > information they should collect that would be most valuable to a
      > downstream design effort.
      > Assume they don't understand or have time to understand much about
      > UCD. Any advice and experience others might have would be valuable.
      > Thanks in advance.
      > -Jeff
      > ---------------------
      > Jeff Patton
      > www.abstractics.com/papers
      > OOPSLA Tutorial: www.oopsla.org/2004/ShowEvent.do?id=105
      > Agile usability news group: http://groups.yahoo.com/group/agile-
      > "Computers are useless. They can only give you answers."
      > -- PABLO PICASSO
    • Chris Pehura
      Message 110 of 110 , Oct 27, 2004
        Jon - understood.
        Had similar experiences - dire consequences. They're the main reason I'm always having those stand up meetings, constantly verifying and validating.
        For that mention of "chaos", I was thinking about systematic change in systems.
        Over the years, I identified several for UIs, architecture, user-interaction and source code.
        From playing around, I got a straightforward, generic notation and model to explain change.
        Wondering if any of you did or came across such generic "change models" and "notations".
        -----Original Message-----
        From: Jon Meads [mailto:jon@...]
        Sent: Wednesday, October 27, 2004 11:36 AM
        To: agile-usability@yahoogroups.com
        Subject: RE: [agile-usability] Research on users reaction to changes in an interface

        I wouldn't take the chaos relationship any further than what I said - small changes can have major effects. As for user expectation, here's a real story.
        The engineers were designing a small text-based, command language UI for managing the recovery CD for a computer system. The objective was to allow the user to just reinstall the operating system without reformatting the target drive but with the option to reformat the entire drive. The UI looked perfect to me - made sense, was straightforward and was simple. My expectations were that the user would just follow along naturally and would be successful. I was so sure of it that I came close to recommending that we skip the usability testing and save some money. But that wasn't the professional thing to do.
        During usability testing, 3 out of 4 users failed and ended up reformatting the entire drive. The problem was that my expectations were unrealistic. I was familiar with the need for a CD to take a few seconds to spin up - seemed like waiting just a bit was perfectly natural and I expected the users to do that. They were unfamiliar with the use of the CD and expected it to be immediately accessible just like a floppy drive would be. When they got the DOS response of not being able to read the CD, they immediately went back and took the other option thinking they had done something wrong.
        The moral of the story is that you can't rely on your expectations of what users will do. Your expectations may be right 90% of the time but, just as you wouldn't trust a computer that was right 90% of the time, you don't want to rely on your expectations of what people will do unless there is no other option. It makes sense to study users to understand possible design options and then to test your design to see how right you are.
        -----Original Message-----
        From: Chris Pehura [mailto:chris@...]
        Sent: Wednesday, October 27, 2004 8:54 AM
        To: agile-usability@yahoogroups.com
        Subject: RE: [agile-usability] Research on users reaction to changes in an interface

        My experience with UI changes comes down to user expectation. If the user expects to click a button and you change the button to a field, they will click on the field until they unlearn the click. If users are used to doing something when they see a red block on the screen and you change that color to blue, they will wait to see red until they unlearn the wait. Even if you tell users which changes were made and where, users still have to unlearn and relearn on their own.
        I've also found that users navigate an interface in a very specific way in sync with their "physical navigation". Minor changes in UI will affect navigation both on the screen and in the "physical environment". Things are used in ways never intended for reasons previously unknown.
        I've found it much faster to make a change and see what happens than to figure out all of that navigation stuff.
        Also, about this mention of chaos: is it being used to mean "unpredictability", or is it being used in the scientific sense?
        In science, if usability is chaotic, then there are patterns in the changes in usability (order in chaos).
        Any models come to mind?
        I did chaos experiments with analog computers and motors. Not sure if that stuff is mappable to software though.
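[Editor's aside, not part of the original thread: the "small changes can have major effects" sensitivity discussed here is the defining property of chaotic maps, and the standard textbook illustration is the logistic map. The sketch below is a toy demonstration of that sensitivity, not a model of usability; all names in it are invented for the example.]

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n).
# At r = 4 the map is chaotic: two trajectories that start a tiny
# distance apart diverge until they are effectively uncorrelated --
# deterministic rules, yet "order in chaos" rather than predictability.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)  # initial condition perturbed by 1e-6

# The gap roughly doubles each step until it saturates at order 1.
gap = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap: {gap[0]:.1e}")
print(f"largest gap: {max(gap):.2f}")
```

Running it shows a perturbation of one part in a million growing to the full size of the interval within a few dozen iterations, which is the sense in which a "chaotic" usability claim would make individual outcomes unpredictable even though the underlying dynamics are patterned.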
        -----Original Message-----
        From: Jon Meads [mailto:jon@...]
        Sent: Wednesday, October 27, 2004 5:21 AM
        To: agile-usability@yahoogroups.com
        Subject: RE: [agile-usability] Research on users reaction to changes in an interface

        Tom Landauer has suggested that usability is chaotic: small changes can have major effects. I have seen this myself with some UIs although, for most, small, non-functional changes have had minimal or no effect.
        But it really takes usability testing to verify the effect a change has on a user. The problem is that, for users, changes to a GUI are not pixel changes but changes in the gestalt of the UI and in how it affects users' perceptions and cognition. We can normally make a good guess as to what effect a change will have, but we can also be surprised on occasion.
        -----Original Message-----
        From: Lauren Berry [mailto:laurenb@...]
        Sent: Tuesday, October 26, 2004 1:58 PM
        To: agile-usability@yahoogroups.com
        Subject: [agile-usability] Research on users reaction to changes in an interface

        Does anyone know of any research done on users' reactions to changes in a GUI?
        I'm looking for things such as:
        - What's the time taken to re-learn a subtle change / medium change / substantial change?
        - If you change the UI to improve the usability, how long before the customer is comfortable in the new system?
        - If you improve the usability, is the user happier with the better UI once they have learned it, or does the cost of learning outweigh the benefits of change?
        Of course, I'm sure these questions have a variety of answers depending on the users...
        Any pointers to work done would be most appreciated.
