Re: [agile-usability] What sort of information would a UCD person suggest analysts collect?
I agree that Jeff's project is best served by UCD people going along and
observing how people work, but this is often easier said than done.
"Analyst" often means "domain specialist who enjoys a passing interest
in software". I've seen this in several different industries. Often, the
analyst is an ex-user (e.g. ex-doctor, ex-trader) who has moved on to
the overlap between IT and their chosen profession. This, in itself,
would be useful, but the problem arises in the management of such a
skill set. Instead of collaborating with others, analysts are often the
"owner" of requirements. To minimise user disruption, and because
management perceive time being wasted by multiple people interacting, it
is often the analysts who interface with the clients.
Catch 22: you have to show it's worthwhile in order to make it possible.
Breaking out of this dilemma requires some pragmatism.
Jon Meads wrote:
> I noticed you stated that "The analysts rush out and interview lots of
> users ..." Does the UCD person go with them? If not, why not? I have
> found that oftentimes the problem is one of culture -- the UCD person is
> not considered to be an "analyst". (Is the UCD person trained in
> analysis?) UCD will remain just "voodoo" until the UCD person is able
> to obtain the user information they require.
> There's also another possible problem with the approach taken.
> Interviewing users is insufficient and can be misleading. You need to
> watch them work. I have many stories of how what was reported in user
> interviews was different from what they were doing. And it gets worse
> when you talk to their managers instead of the hands-on users.
> Bottom line, you can't expect quality work from UCD unless the
> requirements gathering process is properly user-centered.
> Jon Meads
> Usability Architects, Inc.
> PO Box 3222
> Kirkland, WA 98083-3222
> Voice: 425-827-9296
> Cell: 206-409-7548
> Fax: 425-827-6692
> Email: jon@...
> Specialists in User-Centered Design & Engineering
> -----Original Message-----
> *From:* Jeff Patton [mailto:jpatton@...]
> *Sent:* Tuesday, October 12, 2004 9:27 PM
> *To:* firstname.lastname@example.org
> *Subject:* [agile-usability] What sort of information would a UCD
> person suggest analysts collect?
> I suspect others doing some variation of user centered design might
> have run
> into this: There's one UCD person on the project, and 4 or 5
> analysts. The
> analysts rush out and interview lots of users and record lots of
> processes. They talk directly to the users who tell them what the new
> software should look like. They bring back all this information and
> compile it into a document labeled "requirements." "Requirements" are then
> handed to
> the UCD person who's asked to "do that voodoo that you do." Sadly,
> there's not enough in the requirements about the users and why they're
> doing what they do. And worse than that, if the document was actually
> published with
> the title of "requirements", there are implicit commitments for everything
> in it that may or may not be appropriate.
> In short strokes, there's not enough information to do a good job at user
> centered design. And there are commitments to specific design solutions
> already made that can't easily be backed away from.
> Is it just me that's seen this, or has it happened to you as well?
> Assuming you can't change the ratio of analysts to UCD people, and the
> analysts are on their way out the door, can you suggest what information
> you'd like them to gather? [Keep it quick, they have their laptops
> and they're headed for the airport...]
> Seriously, I'd like to help coach analysts on the sort of
> information they
> should collect that would be most valuable to a downstream design process.
> Assume they don't understand or have time to understand much about
> UCD. Any
> advice and experience others might have would be valuable.
> Thanks in advance.
Jon - understood.

Had similar experiences - dire consequences. They're the main reason I'm always having those stand-up meetings, constantly verifying and validating.

As for that mention of "chaos", I was thinking about systematic change in systems. Over the years, I identified several change models for UIs, architecture, user interaction and source code. From playing around, I got a straightforward generic notation and model to explain change. I'm wondering if any of you made or came across such generic "change models" and "notations".

-----Original Message-----
From: Jon Meads [mailto:jon@...]
Sent: Wednesday, October 27, 2004 11:36 AM
Subject: RE: [agile-usability] Research on users reaction to changes in an interface

Chris,

I wouldn't take the chaos relationship any further than what I said - small changes can have major effects. As for user expectation, here's a real story.

The engineers were designing a small text-based, command-language UI for managing the recovery CD for a computer system. The objective was to allow the user to just reinstall the operating system without reformatting the target drive, but with the option to reformat the entire drive. The UI looked perfect to me - it made sense, was straightforward and was simple. My expectations were that the user would just follow along naturally and would be successful. I was so sure of it that I came close to recommending that we skip the usability testing and save some money. But that wasn't the professional thing to do.

During usability testing, 3 out of 4 users failed and ended up reformatting the entire drive. The problem was that my expectations were unrealistic. I was familiar with the need for a CD to take a few seconds to spin up - waiting just a bit seemed perfectly natural, and I expected the users to do that. They were unfamiliar with the use of the CD and expected it to be immediately accessible, just like a floppy drive would be. When they got the DOS response of not being able to read the CD, they immediately went back and took the other option, thinking they had done something wrong.

The moral of the story is that you can't rely on your expectations of what users will do. Your expectations may be right 90% of the time but, just as you wouldn't trust a computer that was right 90% of the time, you don't want to rely on your expectations of what people will do unless there is no other option. It makes sense to study users to understand possible design options and then to test your design to see how right you are.

Cheers,
jon

-----Original Message-----
From: Chris Pehura [mailto:chris@...]
Sent: Wednesday, October 27, 2004 8:54 AM
Subject: RE: [agile-usability] Research on users reaction to changes in an interface

My experience with UI changes is about user expectation. If the user expects to click a button and you change the button to a field, they will click on the field until they unlearn to click. If users are used to doing something when they see a red block on the screen and you change that color to blue, they will wait to see red until they unlearn to wait. Even if you tell users which changes are made and where, users still have to unlearn and relearn on their own.

I've also found that users navigate an interface in a very specific way, in sync with their "physical navigation". Minor changes in the UI will affect navigation both on the screen and in the "physical environment". Things are used in ways never intended, for reasons previously unknown.

I've found it much faster to make a change and see what happens than to figure out all of that navigation stuff.

Also, this mention of chaos. Is it being used to mean "unpredictability", or is it being used in the scientific sense? In science, if usability is chaotic, then there are patterns in the changes in usability (order in chaos). Any models come to mind?

I did chaos experiments with analog computers and motors. Not sure if that stuff is mappable to software, though.

-----Original Message-----
From: Jon Meads [mailto:jon@...]
Sent: Wednesday, October 27, 2004 5:21 AM
Subject: RE: [agile-usability] Research on users reaction to changes in an interface

Tom Landauer has suggested that usability is chaotic: small changes can have major effects. I have seen this myself with some UIs although, for most, small non-functional changes have had minimal or no effect.

But it really takes usability testing to verify the effect a change has on a user. The problem is that, for users, changes to a GUI are not pixel changes but changes in the gestalt of the UI and how it affects users' perceptions and cognition. We can normally make a good guess as to what effect a change will have, but we can also be surprised on occasion.

Cheers,
jon

-----Original Message-----
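[An aside on the "chaotic" point above: in the scientific sense Chris asks about, chaotic means sensitive dependence on initial conditions. This is only an analogy, not anything from Landauer's work, but a minimal Python sketch of the textbook logistic map shows the flavour - a one-in-a-million change to the starting condition produces a completely different trajectory within a few dozen iterations:]

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x) from x0; return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two starting points differing by one part in a million ("a small change").
a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)

# Early on the trajectories agree closely; after enough iterations
# they bear no resemblance to each other.
print(f"step 5:  {a[5]:.6f} vs {b[5]:.6f}")
print(f"step 50: {a[50]:.6f} vs {b[50]:.6f}")
```

[The loose parallel to UIs: when a system is in that regime, no amount of inspecting the small change itself predicts the outcome - you have to observe, which is Jon's argument for usability testing.]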
From: Lauren Berry [mailto:laurenb@...]
Sent: Tuesday, October 26, 2004 1:58 PM
Subject: [agile-usability] Research on users reaction to changes in an interfaceHi,Does anyone know of any research done on users reactions to changes in the GUI?Im looking for things such as- whats the time taken to re-learn a subtle change/medium change/ substantial change.- If you change the UI to improve the usability - how long before the customer is comfortable in the new system.- If you improve the usability - is the user happier with the better UI once they have learned it - or does the cost of learning outweigh the benefits of changeOf course, Im sure these questions have a variety of answers depending on the users...Any pointers to work done most appreciated,Cheers,Lauren.