Message 1 of 110, Oct 22, 2004
--- In firstname.lastname@example.org, Keith Nicholas wrote:
> How do you know it's *right* from a task model / user?

There's no red/green bar to tell you. It's a subjective judgement.
Ron's fabulous post here: http://groups.yahoo.com/group/agile-usability/message/761 says it better than I could. He gave an example about a tired engineer using software in the cramped cab of a pickup truck. A mouse-driven user interface with tiny buttons and lots of detail on the screen isn't *right* for that sort of person. There's no boolean test for that - just an experienced person's best judgement. Without a user model telling Ron about that user and context of use, the odds of him or anyone guessing the correct UI - big buttons, easy to read in dim light, keyboard operable - would be pretty slim.
> unit tests don't tell you if it's right, they just tell you that the
> software works as intended by the developer.

It's similar for personas/roles. They give you a model against which to subjectively judge that the software is appropriate for this kind of user. If the persona/role doesn't accurately represent the person using it, your subjective evaluation may "run green" - but it doesn't prove it's right.
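The contrast above - a unit test that can "run green" versus a persona that only frames a subjective judgement - can be sketched in code. This is a minimal, hypothetical Python sketch; the persona, trait names, and functions are my own illustration (loosely based on Ron's tired-engineer example), not anything from this thread:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: encode a persona and its context of use as data,
# so a design review can walk its constraints as a checklist. Unlike a
# unit test, this cannot prove the design is right -- it only surfaces
# the questions a human reviewer must still judge subjectively.

@dataclass
class Persona:
    name: str
    context: str
    constraints: set = field(default_factory=set)

def unaddressed(design_traits: set, persona: Persona) -> list:
    """Return persona constraints the design does not claim to address."""
    return sorted(c for c in persona.constraints if c not in design_traits)

# Illustrative persona in the spirit of Ron's example.
truck_engineer = Persona(
    name="Tired field engineer",
    context="cramped pickup-truck cab, dim light",
    constraints={"large buttons", "high-contrast display", "keyboard operable"},
)

gaps = unaddressed({"large buttons"}, truck_engineer)
print(gaps)  # constraints left open for the reviewer to judge
```

Note that `unaddressed` deliberately returns open questions rather than a pass/fail verdict - the point being made above is exactly that no boolean test exists for "right".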
> Acceptance testing validates that the real working software is right
> as intended by the customer. Releasing the software for use tests
> whether the intent of the developers, user, and customer was right.

Usability testing validates both the software and the role and task models used to build it. Information learned from usability testing by real users should be fed back into the software to change it - and into the models, to support intelligent addition and validation of future functionality.
> The idea of course is that you release often so you can verify that
> what you asked for is actually what you need. As the users, customer,
> and developers gain this real feedback from each release they make
> better choices about what should go into the software next.

As has been discussed before, sometimes frequent release to genuine
users isn't possible. Today I'm working at a client where we're
writing software that installs into hospitals for patient records.
For lots of reasons, some of them legal, frequent release to actual
users isn't possible. Even validation in labs by actual users is
difficult. Doctors and nurses seem to be busy people - they don't want to commit their time easily. And their testing the software in a comfortable office is very different from using it in a crowded, noisy emergency room. A good model of the users and context of use, along with good subjective validation against it, is very valuable.
By the way, we are developing this software in a very agile way. We develop in 2-week iterations with frequent internal releases validated by actual users. We have daily contact with clinical nurses helping us evaluate what we've done. But even they find it helpful to be reminded of the context of use as they're evaluating what we've built - that a doctor might be using this bit, while the receptionist might be using that bit in a completely different context.
> so "How would I know the UI is right without validating against that
> user and task model?" - By observing whether the intended value is
> gained by having real working software that gets used by a user.

No offense intended - but that opinion seems naive. As I described above, it's difficult in some domains to get actual, in-context, end-user validation. The more software we develop without that validation, the more money and time we put at risk. In those situations, a role model/persona is critical to helping the design be more right out of the chute - it reduces dollars risked.
We all agree that real working software in the hands of real users is
best. But, if that's not practical, what then?
Thanks for your questions/comments!
Message 110 of 110, Oct 27, 2004
Jon - understood. Had similar experiences - dire consequences. They're the main reason I'm always having those stand-up meetings, constantly verifying and validating.

For that mention of "chaos", I was thinking about systematic change in systems. Over the years, I identified several kinds of change - in UIs, architecture, user interaction, and source code. From playing around, I got a straightforward generic notation and model to explain change. Wondering if any of you made or came across such generic "change models" and "notations".

-----Original Message-----
From: Jon Meads [mailto:jon@...]
Sent: Wednesday, October 27, 2004 11:36 AM
Subject: RE: [agile-usability] Research on users' reaction to changes in an interface

Chris,

I wouldn't take the chaos relationship any further than what I said - small changes can have major effects. As for user expectation, here's a real story.

The engineers were designing a small text-based, command-language UI for managing the recovery CD for a computer system. The objective was to allow the user to reinstall the operating system without reformatting the target drive, but with the option to reformat the entire drive. The UI looked perfect to me - made sense, was straightforward, and was simple. My expectation was that the user would just follow along naturally and would be successful. I was so sure of it that I came close to recommending that we skip the usability testing and save some money. But that wasn't the professional thing to do.

During usability testing, 3 out of 4 users failed and ended up reformatting the entire drive. The problem was that my expectations were unrealistic. I was familiar with the need for a CD to take a few seconds to spin up - waiting just a bit seemed perfectly natural, and I expected the users to do that. They were unfamiliar with the use of the CD and expected it to be immediately accessible, just like a floppy drive would be. When they got the DOS response of not being able to read the CD, they immediately went back and took the other option, thinking they had done something wrong.

The moral of the story is that you can't rely on your expectations of what users will do. Your expectations may be right 90% of the time but, just as you wouldn't trust a computer that was right 90% of the time, you don't want to rely on your expectations of what people will do unless there is no other option. It makes sense to study users to understand possible design options and then to test your design to see how right you are.

Cheers,
jon

-----Original Message-----
From: Chris Pehura [mailto:chris@...]
Sent: Wednesday, October 27, 2004 8:54 AM
Subject: RE: [agile-usability] Research on users' reaction to changes in an interface

My experience with UI changes is about user expectation. If the user expects to click a button and you change the button to a field, they will click on the field until they unlearn to click. If users are used to doing something when they see a red block on the screen and you change that color to blue, they will wait to see red until they unlearn to wait. Even if you tell users which changes are made and where, users still have to unlearn and relearn on their own.

I've also found that users navigate an interface in a very specific way, in sync with their "physical navigation". Minor changes in the UI will affect navigation both on the screen and in the "physical environment". Things are used in ways never intended, for reasons previously unknown. I've found it much faster to make a change and see what happens than to figure out all of that navigation stuff.

Also, this mention of chaos. Is it being used to mean "unpredictability", or is it being used in the scientific sense? In science, if usability is chaotic, then there are patterns in the changes in usability (order in chaos). Any models come to mind? I did chaos experiments with analog computers and motors. Not sure if that stuff is mappable to software, though.

-----Original Message-----
From: Jon Meads [mailto:jon@...]
Sent: Wednesday, October 27, 2004 5:21 AM
Subject: RE: [agile-usability] Research on users' reaction to changes in an interface

Tom Landauer has suggested that usability is chaotic: small changes can have major effects. I have seen this myself with some UIs, although for most, small non-functional changes have had minimal or no effect.

But it really takes usability testing to verify the effect a change has on a user. The problem is that, for users, changes to a GUI are not pixel changes but changes in the gestalt of the UI and how it affects users' perceptions and cognition. We can normally make a good guess as to what effect a change will have, but we can also be surprised on occasion.

Cheers,
jon

-----Original Message-----
From: Lauren Berry [mailto:laurenb@...]
Sent: Tuesday, October 26, 2004 1:58 PM
Subject: [agile-usability] Research on users' reaction to changes in an interface

Hi,

Does anyone know of any research done on users' reactions to changes in the GUI? I'm looking for things such as:

- What's the time taken to re-learn a subtle change / medium change / substantial change?
- If you change the UI to improve the usability, how long before the customer is comfortable in the new system?
- If you improve the usability, is the user happier with the better UI once they have learned it, or does the cost of learning outweigh the benefits of change?

Of course, I'm sure these questions have a variety of answers depending on the users... Any pointers to work done would be most appreciated.

Cheers,
Lauren.