Re: [agile-usability] user stories - are they mini fixed scope contracts? was: Re: Subtle User Inter..
- Jeff

I think that a "fixed scope user story" is a bit of a misnomer. While we need well-defined user story language and acceptance criteria, I find the variation comes in the acceptance criteria. The key problem the user story describes is fixed; how the solution emerges from the team is negotiable.

One of the key issues faced by many teams is writing good user stories. It can be difficult to describe the problem clearly without invoking some solution in describing the feature. Sometimes this may be appropriate, but usually it weakens the user story by restricting the team's freedom to develop the best solution they can think of. User story writing is definitely a skill, but just as important is good customer engagement to guide the team.

Make no mistake about it: there will be uncomfortable times when the team and the customer do not agree. The key is to recognize the co-dependence and shared responsibility both have in maximizing business value. Customers can love their teams, or they can be frustrated by them. I have seen both scenarios, and whether these relationships are successful often depends on expectations, understanding, and communication.

If the relationship is not working, then looking at how the fundamentals of the process are implemented (or not implemented) will give important clues about how to fix the problem. For example, customers HATE surprises when it comes to scope being removed from an iteration. Make sure they know of all the scope the team will, or may, have to remove by the midpoint of the iteration. If they know work is "out" early, they may work much harder to clear the obstacles or roadblocks to completing that work. The team also benefits from a midpoint review; it helps focus everyone on "getting to done" in the second half of the iteration.
Usability changes seem to arrive in two ways: as part of a user story (initial spec), or as customer feedback (iterative). In either case, it helps the team if the value of the changes can be clearly articulated from both the user and the business perspective. Saying a change will make the feature "easier to use" may be true, but does not capture value. Saying the change will save customer service agents "15 seconds per call", or some other measurable improvement, quantifies the value of the change for the business. Sometimes you can't provide such a hard measure, but often, especially with widely used systems (e.g. a B2C website), a value estimate is possible.

cheers,
Robin Dymond

On 8/1/06, Jeff Patton <jpatton@...> wrote:
--- In firstname.lastname@example.org, Adrian Howard <adrianh@...>
> > a date appears in a table. I've stumbled across those sorts of
> > feature opportunities before and in practice, I routinely see
> > push back from developers on implementing them.
> Wearing my developer hat, if I suggest something go into a
> story I'm not saying "I don't want to implement this", I'm saying
> "I'm more than happy to implement this, but don't consider it part
> of the story I'm working on at the moment".
> When I read "push back from developers on implementing them" I
> hear "the developers don't want to implement them" - was I mishearing
> what you were saying?
No, you're not necessarily. I may be guilty of practicing a very
relaxed form of agile. For very many years I relied on nothing more
than a story written on a 3x5 card and a conversation. What was in
and out of a story was very fluid - and almost always a matter of the
ongoing conversation between developer and customer. Statements
like "I don't consider it part of the story I'm working on" when a
customer might consider it part of the story seem to strike an
adversarial posture I'd characterize as push-back. Possibly a better
statement might be "this seems like a good idea, but my original
estimate hadn't accounted for it. Should we up the estimate -
possibly jeopardizing other things in this iteration? - or should we
defer this?" That's more of the conversation/collaboration that I'd
hope for, and one we were able to cultivate for years. It's a subtle
difference in language. And frankly it's often the body language of
the developers that communicates more than the actual words they say.
Adrian, in a subsequent post you said:
"Are the usability changes are getting a harsher reception than
changes in requirement from the customer?"
It is often the case that usability changes get a harsher
reception - they are often met with skepticism. Usability changes often
don't add functionality - they just make the functionality we have
better. If one team member is pushing towards implementing more, and
another team member is pushing towards implementing better, there
will likely be tension. And, I've observed the same skeptical
reception when the usability changes come directly from the customer.
I think focal to this discussion is the idea of how malleable stories
are: are they "contracts for fixed scope", or are they general goals
to be achieved during an iteration, where the details of those goals
are worked through collaboratively during the iteration? I've
observed both extremes in practice on agile projects - and variations
in between.
Thanks for posting and responding.
- I just logged in to the PMI site as a member. It had been a while, so I wasn’t sure of my username and password (it’s been a few months since I last used it). Of course, one of the first things you would expect is to know whether the system recognized you or not. Guess what: nowhere on the page that came up is there any mention of my name. As far as I can tell, I don’t even know if I’m really logged in (OK, I see "log out" somewhere, so I’ll assume I am). My reflex was to look everywhere on the page (I had to scroll because the first page is long). Because it puzzled me a bit, my next reaction was to look at the left menu and see if I could get my account details. No luck; no menu entry is clearly labelled that way.
- Aha, just found the problem. There is a "Membership information home" button which I wrongly took for the menu title (it was not underlined, is a different color and background than all the other menu items, and because it doesn’t look clickable I dismissed it as a header and didn’t even read the text). However, it made no sense that I could not see my account info, so I investigated the UI further (and then realized what that "header" actually was). When I clicked on it, it finally got me my account information. I figure this is normally the first page you get when you log in. For some reason, they decided to put a two-page advertisement there instead (“PMI's 250,000 Member Race”).
- Anyway, it makes me feel a little stupid that I lost so much time figuring this out as a user. A simple "Hi Pascal" on the login page would have avoided the whole thing, and so would a menu item that actually looks clickable… Oh well, maybe it’s just me; I’m probably below their required target user intelligence level…
- Isn’t that pretty basic usability stuff? And we are talking about a fairly prestigious site here (I heard the PMI is targeting 250,000 members worldwide)…
Anyway, the point I want to make is that even basic stuff like this is very common in the field. It leads to software that is harder to use than it should be (ever heard of the digital divide? I think stuff like this contributes heavily to it) and that even frustrates and angers people at times. Frankly, I doubt they had even one real user test that part of the site before they put it out there…
Pascal Roy, ing./P.Eng., PMP
Elapse Technologies Inc.
From: agile-usability@... [mailto:agile-usability@...] On Behalf Of Phlip
Sent: 6 September 2006 11:15
To: agile-usability@...
Subject: Re: [agile-usability] Catching usability issues with automated tests
Adrian Howard wrote:
> Some examples:
> * Clean XHTML/CSS validation as a sign that the app will present well
> on all browsers
> * Using the presence of ALT tags as a sign of accessibility.
> * Using a computed "colour contrast" value as a sign of legibility
> * Using the Kincaid formula or similar as a sign of readability
* Use pure XHTML, so all that's accessible to the testage
* Run the site's pages thru Tidy and ask if it's accessible
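As a sketch of the second check in Adrian's list (presence of ALT tags as a sign of accessibility), here is a minimal scan using only Python's standard library; the function and variable names are my own, not from any tool mentioned in this thread:

```python
# Flag <img> tags that lack an alt attribute, using Python's stdlib parser.
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Collects the src of every <img> tag that has no alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        attr_map = dict(attrs)
        if tag == "img" and "alt" not in attr_map:
            self.missing_alt.append(attr_map.get("src", "<no src>"))

def check_alt_tags(html):
    checker = AltChecker()
    checker.feed(html)
    return checker.missing_alt

page = ('<html><body><img src="logo.png" alt="Logo">'
        '<img src="chart.png"></body></html>')
print(check_alt_tags(page))  # ['chart.png']
```

A real pipeline would run this over every page at commit time and fail (or flag) when the list is non-empty; it is a signal of accessibility, not proof of it.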
Those tests sound weak, but some GUIs must internationalize and
localize correctly. Users of some rare language are probably familiar
and tired of the same dumb bugs in their GUIs. So switch to each
language and run all those tests again.
Next, do it even if your GUI is not HTML. MS's RESX files are of
course parsable as XML. I wrote
http://www.c2.com/cgi/wiki?MsWindowsResourceLint
to scan the localized RC files looking for bugs. The program has an
extensible framework so you can add in any kind of test you can think of.
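A hypothetical, minimal version of such a resource lint might parse a RESX-style XML file and report entries whose translation is missing or empty. This is an illustration of the idea only; Phlip's actual tool checks far more properties:

```python
# Report <data> entries in a RESX-style document with no usable <value>.
import xml.etree.ElementTree as ET

def lint_resx(xml_text):
    """Return the names of <data> entries whose <value> is missing or empty."""
    root = ET.fromstring(xml_text)
    problems = []
    for data in root.iter("data"):
        value = data.find("value")
        if value is None or not (value.text or "").strip():
            problems.append(data.get("name"))
    return problems

resx = """<root>
  <data name="Greeting"><value>Bonjour</value></data>
  <data name="Farewell"><value></value></data>
  <data name="Error"></data>
</root>"""
print(lint_resx(resx))  # ['Farewell', 'Error']
```

Run over every supported locale, even a check this small catches the "same dumb bugs" class of localization defects before users do.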
(At SYSTRAN, I spent a week writing the predecessor of that program. I
didn't notice they didn't nano-manage me during that week because they
were preparing to fire me. So when they did, my last act was to send
to all their executives a complete, automated report describing every
usability issue in every supported locale of every product, with
instructions for how to run it again as part of their test server. The
total error count was >4k, in a company that's supposed to do
localization as a core competency!)
> 1) The system takes a snapshot of the HTML/CSS of each page in a web
> app whenever somebody commits a change
> 2) Have a flag you can set on each page once you have reviewed them
> 4) Automatically notify you when a reviewed page changes, and have a
> failing test until you mark it as reviewed again
That is a technique under the umbrella I call "Broadband Feedback".
However, marking the test as failing is unfair to programmers, who
just want to check in an innocent change that doesn't break anything.
Move the "reviewed" flag from the bug column to some other column!
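The snapshot-plus-review-flag idea, with the twist suggested here (a changed page lands in a "needs review" column rather than failing the build), could be sketched like this; all names are illustrative, not from any real tool:

```python
# Track which pages changed since a human last reviewed them.
import hashlib

def snapshot(html):
    """Fingerprint a page's rendered HTML."""
    return hashlib.sha256(html.encode("utf-8")).hexdigest()

def pages_needing_review(current_pages, reviewed_hashes):
    """current_pages: {url: html}. reviewed_hashes: {url: hash of the
    last human-reviewed version}. Returns urls whose content changed
    (or was never reviewed) - a report column, not a failing test."""
    return [url for url, html in current_pages.items()
            if reviewed_hashes.get(url) != snapshot(html)]

reviewed = {"/home": snapshot("<h1>Welcome</h1>")}
current = {"/home": "<h1>Welcome!</h1>", "/about": "<p>About us</p>"}
print(pages_needing_review(current, reviewed))  # ['/home', '/about']
```

Programmers keep committing freely; the reviewer works through the report list at their own pace, which is the point of keeping it out of the bug column.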
To achieve Broadband Feedback, automate the steps. The reviewer should
simply turn on a web interface that displays each changed GUI, and
reviews the change in the website - not necessarily in the target
program. That's why I wrote this:
http://www.zeroplayer.com/cgi-bin/wiki?TestFlea
(Click on a green bar.)
Imagine if you were the Sanskrit linguist for a project. Wherever you
are (even up a mountain in Nepal), you visit the project's web site. You get
a page like that; maybe it contains only unreviewed items, or maybe
unreviewed items have a grey spot next to them.
You inspect each GUI, verifying it uses correct Sanskrit, then you
switch the record to Reviewed.
For more complex usability needs, a test batch could also upload
animations of the program in use.
> No we cannot make a computer say whether an arbitrary thing is
> usable. However we can make a computer spot many of the instances
> where a usability design decision that we have made is actually being
> implemented correctly.
The adoption of Agile techniques in the game industry, today, is at
about the same place as Agile adoption was in business 6 years ago.
One common FAQ (unanswered even on many game projects) is this:
if the highest business value feature is Fun, how can you
write an acceptance test for that?
The answer is the same as for any other untestable property (security,
robustness, availability, usability, fault tolerance, etc.). Fun is a
non-functional requirement that generates many functional
requirements, each of which can be tested.
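As a toy sketch of that decomposition: "Fun" itself is untestable, but a functional requirement derived from it can be asserted in a scripted run. The requirement below (the player is never idle more than 10 seconds) is invented for illustration, not taken from any real game project:

```python
# "Fun" generates functional requirements; each one is testable.
# Here: check event timestamps from one scripted scenario run against
# an invented pacing requirement derived from "the game must be fun".
def max_idle_gap(event_times):
    """Longest gap (seconds) between consecutive player-visible events."""
    return max(b - a for a, b in zip(event_times, event_times[1:]))

# Timestamps of player-visible events in one scripted run of a level.
events = [0, 3, 7, 12, 18, 25]
assert max_idle_gap(events) <= 10  # derived, testable pacing requirement
```

A suite of such derived assertions (pacing, reward frequency, difficulty curve) is what lets an automated build say anything at all about Fun.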
In games, that requires designers to occupy the Onsite Customer role,
and author their scenarios as scripts that test a game automatically.
A scenario should run a hero thru a level and ensure they kill every
Next, games are very dynamic and emergent. A change to a Maya file
here can cause a bug, or a lapse of Fun, in game levels over there.
One way to preserve Fun without locking down every file is to use Gold
Master Copy tests on aspects of a game's internal details.
For example, two runs thru the same scenario should generate the same
log file. A programmer could change the code in an innocent way,
changing the log file without afflicting Fun. But these tests should
run as often as possible, so the programmer will revert their change,
then make a _different_ innocent change which might work.
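A Gold Master test over a scenario log can be sketched like this; `run_scenario` is a stand-in for whatever harness actually drives the game through a level, and the log lines are invented:

```python
# Compare a scenario's log to the blessed "gold master" copy and
# surface the differences, so the most recent edit is implicated.
import difflib

def run_scenario():
    # Stand-in: a deterministic scripted run that emits log lines.
    return ["enter level 1", "pick up sword", "defeat boss", "exit level 1"]

def gold_master_check(log_lines, gold_lines):
    """Return [] if the log matches the gold master, else the diff lines."""
    return list(difflib.unified_diff(gold_lines, log_lines, lineterm=""))

gold = ["enter level 1", "pick up sword", "defeat boss", "exit level 1"]
assert gold_master_check(run_scenario(), gold) == []  # matches: test passes
```

When the diff is non-empty, the test cannot say whether Fun was harmed, only that internal behavior changed; as the text says, its value comes from running after every edit so the suspect change is always the latest one.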
These kinds of tests can't even easily pinpoint bugs, so run them as
often as possible; then the cause must be the most recent edit. Treat
these tests as seismograph readings of earthquakes deep beneath the
http://c2.com/cgi/wiki?ZeekLand <-- NOT a blog!!