Re: Subtle User Interaction Experiences
--- In firstname.lastname@example.org, William Pietri <william@...> wrote:
> For the resistant teams you've seen, could you tell us more about the
> communication flow? E.g., where the product managers sat in relation to
> the developers?

I'll share 2 of these situations (I've got more than that, but that
would be too much writing ;-) )
One of these teams worked on an internal call-center type of
application that was used constantly, and small UI changes actually
had a rather significant impact on productivity. A single extra
confirmation dialog or even an extra mouse-click was a pretty big deal
for the end users of the application.
However, the "customer" was really the call center director, who
rarely suggested specific UI changes to streamline workflow, but
rather asked for additional features to reduce errors, allow better
tracking or distribution, etc. The end users and the director were
all in a remote location and timezone (East/West coast).
As "customer proxy", I spent a lot of time on the phone, in
web-conferences, and periodic cross-country trips to make sure we
understood the user needs as they started to expand. I also included
the development team on relevant phone calls to make sure they were
hearing requests straight from the customer (not just from me).
So the typical flow was:
1. Customer calls me to ask about a new feature: "Can we have a list
that shows X?"
2. I'd try to get at the heart of it: "What will the new list allow
you to do? What problem are you trying to solve?"
3. Customer answers: "Well, we're making errors when doing Y, and I
think if we had a list of X, we'd make fewer errors".
4. I confer with the team to come up with a solution for the issue,
which might or might not be "a list that shows X", and present to the
customer for approval and validation.
5. We schedule the feature for a release and follow normal agile
processes from there.
The other major resistant situation was for a commercial product,
without direct customer contact (startup = no real customers). So we
worked with the VP of Product Development who was effectively the
Product Manager since the Marketing dept. wasn't very helpful.
So naturally most of the input was subjective, and not validated by
real customers. We used an Agile process, minus the real customer
validation. Later on, once we got a few customers, there was still a
gap here, but at least some of the input was from a real user.
Does this shed some light on it? I suppose these aren't typical
"ideal agile" customer scenarios, but they're part of my real-world
experience.
- I just logged in to the PMI site as a member. It had been a while, so I wasn’t sure of my username and password (it’s been a few months since I last used it). Of course, one of the first things you would expect is to know whether the system recognized you or not. Guess what: nowhere on the page that came up is there any mention of my name. As far as I can tell, I don’t even know if I’m really logged in (ok, I see a log out link somewhere, so I’ll assume I’m logged in). My reflex was to look everywhere on the page (I had to scroll, because the first page is long). Because it puzzled me a bit, my next reaction was to look at the left menu and see if I could get my account details. No luck; no menu entry is clearly labelled that way.
- Aha, I just found the problem. There is a "Membership information home" button which I wrongly took for the menu title (it is not underlined, is of a different color and background than all the other menu items, and because it doesn’t look clickable I actually dismissed it as a header and didn’t even read the text). However, it made no sense that I could not see my account info, so I investigated the UI further (and then realized what that "header" actually was). When I clicked on it, it finally got me my account information. I figure that this is normally the first page you get when you log in. For some reason, they decided to put a two-page advertisement there instead (“PMI's 250,000 Member Race”).
- Anyway, it now makes me feel a little bit stupid that I lost so much time figuring this out as a user. A simple "Hi Pascal" on the login page would have avoided the whole freaking thing, and so would a menu item that actually looks clickable… Oh well, maybe it’s just me; I’m probably below their required target user intelligence level…
- Isn’t that pretty basic usability stuff? And we are talking a fairly prestigious site here (I heard the PMI is targeting 250,000 members worldwide)…
Anyway, the point I want to make is that even basic stuff like that is very common in the field. This leads to software that is harder to use than it should be (ever heard of the digital divide? I think stuff like this contributes heavily to it), and it even frustrates and angers people at times. Frankly, I doubt they had even one real user test that part of the site before they put it out there…
Pascal Roy, ing./P.Eng., PMP
Elapse Technologies Inc.
From: agile-usability@... [mailto:agile-usability@...] On Behalf Of Phlip
Sent: 6 September 2006 11:15
To: agile-usability@...
Subject: Re: [agile-usability] Catching usability issues with automated tests
Adrian Howard wrote:
> Some examples:
> * Clean XHTML/CSS validation as a sign that the app will present well
> on all browsers
> * Using the presence of ALT tags as a sign of accessibility.
> * Using a computed "colour contrast" value as a sign of legibility
> * Using the Kincaid formula or similar as a sign of readability

* Use pure XHTML, so all of it is accessible to the testage
* Run the site's pages thru Tidy and ask if it's accessible
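A couple of those signals are cheap to compute yourself. Here is a
minimal sketch in Python, assuming pages arrive as strings of HTML; it
uses only the standard library, and the syllable counting behind the
Kincaid grade is a crude vowel-run estimate, not a dictionary lookup.

import re
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Counts <img> tags that lack an alt attribute (an accessibility signal)."""
    def __init__(self):
        super().__init__()
        self.missing_alt = 0
    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.missing_alt += 1

def kincaid_grade(text):
    """Flesch-Kincaid grade level over plain text."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    if not words:
        return 0.0
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

page = "<html><body><img src='logo.png'><p>Click Save. Then close the window.</p></body></html>"
checker = AltChecker()
checker.feed(page)
print("images missing alt:", checker.missing_alt)   # -> 1
print("grade level:", round(kincaid_grade("Click Save. Then close the window."), 1))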
Those tests sound weak, but some GUIs must internationalize and
localize correctly. Users of some rare language are probably familiar
with, and tired of, the same dumb bugs in their GUIs. So switch to each
language and run all those tests again.
Next, do it even if your GUI is not HTML. MS's RESX files are of
course parsable as XML. I wrote
http://www.c2.com/cgi/wiki?MsWindowsResourceLint
to scan the localized RC files looking for bugs. The program has an
extensible framework so you can add in any kind of test you can think
of.
(At SYSTRAN, I spent a week writing the predecessor of that program. I
didn't notice they didn't nano-manage me during that week because they
were preparing to fire me. So when they did, my last act was to send
to all their executives a complete, automated report describing every
usability issue in every supported locale of every product, with
instructions how to run it again as part of their test server. The
total error count was >4k, in a company that's supposed to do
localization as a core competency!)
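To make that concrete, here is a hedged sketch of the same kind of scan
for .resx-style resources. The sample XML and the two checks (empty
translations, clashing '&' menu accelerators) are my own illustrative
assumptions, not the actual rules from that wiki program.

import xml.etree.ElementTree as ET
from collections import Counter

SAMPLE = """<root>
  <data name="menuFileOpen"><value>&amp;Ouvrir</value></data>
  <data name="menuFileClose"><value>&amp;Obturer</value></data>
  <data name="statusReady"><value></value></data>
</root>"""

def lint_resources(xml_text):
    problems, accels = [], Counter()
    for data in ET.fromstring(xml_text).iter("data"):
        name = data.get("name", "?")
        value = data.findtext("value") or ""
        if not value.strip():
            problems.append(name + ": empty or missing translation")
        if "&" in value:                      # '&' marks the Win32 menu accelerator
            i = value.index("&")
            if i + 1 < len(value):
                accels[value[i + 1].lower()] += 1
    problems += ["accelerator '&%s' used %d times" % (c, n)
                 for c, n in accels.items() if n > 1]
    return problems

for problem in lint_resources(SAMPLE):
    print(problem)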
> 1) The system takes a snapshot of the HTML/CSS of each page in a web
> app whenever somebody commits a change
> 2) Have a flag you can set on each page once you have reviewed them
> 4) Automatically notify you when a reviewed page changes, and have a
> failing test until you mark it as reviewed again

That is a technique under the umbrella I call "Broadband Feedback".
However, marking the test as failing is unfair to programmers, who
just want to check in an innocent change that doesn't break anything.
Move the "reviewed" flag from the bug column to some other column!
To achieve Broadband Feedback, automate the steps. The reviewer should
simply turn on a web interface that displays each changed GUI, and
reviews the change in the website - not necessarily in the target
program. That's why I wrote this:
http://www.zeroplayer.com/cgi-bin/wiki?TestFlea
(Click on a green bar.)
Imagine if you were the Sanskrit linguist for a project. Wherever you
are (even up a mountain in Nepal), you visit the project's web site.
You get a page like that; maybe it contains only unreviewed items, or
maybe unreviewed items have a grey spot next to them.
You inspect each GUI, verifying it uses correct Sanskrit, then you
switch the record to Reviewed.
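The plumbing for that can be tiny. A minimal sketch, assuming pages
arrive as strings and the reviewer's sign-off lives in a JSON file
(my assumptions, not how TestFlea stores things); the point is that a
changed page lands in a review queue instead of failing anyone's build.

import hashlib, json, pathlib

REVIEWED = pathlib.Path("reviewed_hashes.json")  # hypothetical review-state store

def page_hash(html):
    return hashlib.sha256(html.encode("utf-8")).hexdigest()

def review_queue(pages):
    """Return the pages changed since last sign-off; never fails a build."""
    seen = json.loads(REVIEWED.read_text()) if REVIEWED.exists() else {}
    return [name for name, html in pages.items() if seen.get(name) != page_hash(html)]

def mark_reviewed(pages):
    """The reviewer's 'switch the record to Reviewed' action."""
    REVIEWED.write_text(json.dumps({n: page_hash(h) for n, h in pages.items()}))

pages = {"login": "<html>v2</html>", "help": "<html>v1</html>"}
mark_reviewed(pages)
pages["login"] = "<html>v3</html>"            # an innocent checkin changes a page
print("needs review:", review_queue(pages))   # -> ['login']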
For more complex usability needs, a test batch could also upload
animations of the program in use.
> No we cannot make a computer say whether an arbitrary thing is
> usable. However we can make a computer spot many of the instances
> where a usability design decision that we have made is actually being
> implemented correctly.

The adoption of Agile techniques in the game industry, today, is at
about the same place as Agile adoption was in business 6 years ago.
One common FAQ (unanswered even on many game projects) is this:
if the highest business value feature is Fun, how can you
write an acceptance test for that?
The answer is the same as for any other untestable property (security,
robustness, availability, usability, fault tolerance, etc.). Fun is a
non-functional requirement that generates many functional
requirements, each of which can be tested.
In games, that requires designers to occupy the Onsite Customer role,
and author their scenarios as scripts that test a game automatically.
A scenario should run a hero thru a level and ensure they kill every
monster.
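Here's a self-contained sketch of such a scenario script. The Level
class fakes the engine in a few lines so the test can run; in a real
project these objects would come from the game's scripting API, and
every name here is a stand-in, not a real engine's API.

class Level:
    """A few lines faking the engine, just so the test below can run."""
    def __init__(self, monsters):
        self.monsters = set(monsters)
    def hero_attacks(self, monster):
        self.monsters.discard(monster)

def test_scenario_kills_every_monster():
    level = Level({"orc", "troll", "dragon"})
    for monster in list(level.monsters):      # the designer's scripted route
        level.hero_attacks(monster)
    assert not level.monsters, "monsters left alive: %s" % level.monsters

test_scenario_kills_every_monster()
print("scenario passed")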
Next, games are very dynamic and emergent. A change to a Maya file
here can cause a bug, or a lapse of Fun, in game levels over there.
One way to preserve Fun without locking down every file is to use Gold
Master Copy tests on aspects of a game's internal details.
For example, two runs thru the same scenario should generate the same
log file. A programmer could change the code in an innocent way,
changing the log file without afflicting Fun. But these tests should
run as often as possible, so the programmer will revert their change,
then make a _different_ innocent change which might work.
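A sketch of such a Gold Master test over a run log; the log format,
file name, and scenario function are illustrative stand-ins. The first
run records the master, and later runs diff against it.

import difflib, pathlib

GOLD = pathlib.Path("level1.gold.log")        # the checked-in master copy

def run_scenario_and_log():
    # Stand-in for replaying the scripted level with logging turned on.
    return "spawn hero\nkill orc\nkill troll\nlevel complete\n"

def test_log_matches_gold_master():
    actual = run_scenario_and_log()
    if not GOLD.exists():                     # first run records the master
        GOLD.write_text(actual)
    expected = GOLD.read_text()
    diff = list(difflib.unified_diff(expected.splitlines(),
                                     actual.splitlines(),
                                     "gold", "actual", lineterm=""))
    assert not diff, "\n".join(diff)

test_log_matches_gold_master()
print("log matches gold master")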
These kinds of tests can't easily pinpoint bugs, so run them as
often as possible, so that the cause must be the most recent edit. Treat
these tests as seismograph readings, of earthquakes deep beneath the
code.
http://c2.com/cgi/wiki?ZeekLand <-- NOT a blog!!