Message 1 of 12, Aug 15, 2011

I agree, but I would go a step further: UX work is more challenging than most business-domain programming, because usability is somewhat subjective and testing for it is difficult to automate (impossible in some cases). UX is therefore subject to regression from disruptive changes that can't always be anticipated.

The only way I know to deal with this effectively is to iterate rapidly with real customer feedback. However, releasing to production is not always a desirable first step. I would suggest a multi-faceted approach:

1) Make sure that everyone on the team is using the software in realistic scenarios, and not just coding it and throwing it over the wall. This can take the form of exploratory testing or actual use. Many of the most glaring problems can be found very early this way.

2) Involve real users in demos/reviews. You can do this at the end of each Sprint as Scrum suggests, but you could also do it more often... even continuously. The finer the resolution of user involvement (e.g. daily vs. bi-weekly), the more opportunities we have to learn and get it right.

3) Consider some kind of limited alpha/beta release. Google does this really well. Occasionally they will ask you if you want to turn on some new feature and give feedback about it. A lot of customers really like this opportunity, and others simply decline to contribute, but either way you get useful information. (Declining an experimental change is a kind of feedback: "I like it the way it is and don't want you screwing it up.")

4) Formal usability testing intra-Sprint is not a bad idea, but you need to figure out how to do it fast and incrementally, or you are giving up the main benefits of an Agile approach.
On Mon, Aug 15, 2011 at 3:28 PM, desireesy <dsy.agileux@...> wrote:
What Jon said, although I agree with Mike that perhaps we should have made your point -- there is no scope creep, only the current, updated scope -- more explicit in the response.
The question is how to include UX data as a relevant value during iteration planning, because in many situations agile teams are new to this type of input -- and UX practitioners are new to how to participate in higher-level product planning.
--- In firstname.lastname@example.org, Jon Innes <jinnes@...> wrote:
> Exactly. Not disagreeing at all. Just pointing out that if you don't track all the factors you're lacking the full picture. If user research shows users can't use it, then the story isn't "done". To do otherwise is throwing code over the wall in a waterfall way.
> Sent from my iPhone
> > On Aug 15, 2011, at 11:02 AM, "Mike Dwyer" <mdwyer@...> wrote:
> > From: Jon Innes <jinnes@...>
> > ARGH!!!!!! There is no such thing as scope creep in Agile, and particularly Scrum. New stuff, when it appears, must be evaluated by Product Owners to see if it adds enough value to push lower-value items down the product backlog.
> > We need to stop worrying about things in the plan and start focusing on delivering what is currently valuable to the product.
> > Sent via BlackBerry by AT&T
> >> >> -Desirée
> > Sender: email@example.com
> > Date: Mon, 15 Aug 2011 10:34:47 -0700
> > To: <firstname.lastname@example.org>
> > ReplyTo: email@example.com
> > Subject: Re: [agile-usability] Re: How do you avoid scope creep when integrating user research into the scrums?
> > Sorry to jump in so late on this one, but seeing how the thread is going I want to raise a couple of important points here:
> > 1) If the business feels that the usability issues are not important enough to fix then you need to recognize that might be the case--just make sure everyone understands the trade off.
> > I recently presented on this problem at HCII. In immature markets it might be true that UX is less important, because the market overall has not matured to the point where it's a differentiating factor. It's relative UX that matters here. If your competitors have a terrible UX, you might be able to get away with being just as bad, at least until someone ups the ante.
> > 2) The key is to establish clear, objective UX metrics that everyone understands, and make the impact of not addressing UX work or usability problems visible. John and I recently talked about this at UPA where I presented my UXI Matrix method for doing this. I just finished giving a presentation at Agile2011 last week on the UXI Matrix (a method for managing the product backlog and tracking UX impact tailored to Scrum). The problem is most teams doing Agile don't track any UX metrics at all since Agile/Scrum doesn't talk about UX stuff.
> > If you're interested in any of the above, my slides from HCII and Agile2011 are on slideshare at http://www.slideshare.net/jinnes
> > I've been discussing writing a blog post on the UXI Matrix with one of the members of this list. Now that my schedule is more open, I'm looking forward to doing that. I'd be interested in any feedback from the members of this list so I can consider it in future posts/presentations. I got a bunch of good questions at Agile2011 and I'm going to try to cover those in future discussions.
> > Jon
> > On Aug 15, 2011, at 9:12 AM, desireesy wrote:
> >> > When I teach Agile UX processes (with my colleague Desiree Sy), we are
> >> > both big proponents of establishing a 'charter' at the beginning of the
> >> > process. One of the things in the charter is a 'trade-off matrix',
> >> > where you decide what the priorities are going to be in your project.
> >> In case this isn't clear, Jared, the 'charter' is a lightweight way of doing that upfront definition of project scope that you indicated in your original query.
> >> To add a bit more to John's response, another element of a project 'charter' is to articulate the business goals of the release. This helps define what 'done' means from a business perspective (but not how 'done' happens... this is NOT a feature list), and is a critical element in putting user research into an appropriate context.
> >> This is because if you're putting your product in front of users regularly (and... you are, aren't you? :) ) you will see many, many opportunities for product change. Frankly, you'll see more things to do than you will have time to address. So -- how do you prioritize them?
> >> The most important things are those usability problems that will directly impact business goals in the current (or +2 let's say) iteration of work, but I'm assuming from the description that these aren't the usability problems under discussion, because they're not "scope creep" -- they're in scope.
> >> The next most important things are usability problems that will block the business goals, and which have been loosely scheduled for a future iteration. But these too are not strictly scope creep, since as a team you planned to address them. Now you have more data with which to build a minimum-fidelity prototype before development begins coding.
> >> So what we're really talking about are the usability problems which don't fit into any version of the loosely planned future work: the surprises.
> >> These become new issues which fit into the product backlog, and should be discussed at the next iteration planning meeting, and the criteria used to frame the discussion should be drawn from the charter. First (as John mentioned) through the trade off matrix, and then by looking at the business goals for the project.
> >> In other words -- are any of the new problems more important from a business standpoint than the prior problems? If so -- then act accordingly, and pull work off the planning board and replace it with the new work.
> >> The hard HARD part of this from a UX practitioner's point of view is this: you will -- guaranteed! -- uncover (or let's face it, you'll know even before you put the product in front of users) a really important problem that has a reasonable solution. And its time will not come in the current release. The sad truth is that sometimes a correct solution to a non-trivial problem will divert the team's focus from another problem that is even more important.
> >> As a UX researcher and/or designer, you must understand the business focus. Because you will also -- guaranteed! -- uncover really important new problems that *will* block "done" from a business standpoint. And you need to be able to articulate which of these problems are more important than what you currently have on the planning board.
> >> Finally, all those problems that didn't get addressed in the current release? Well, they are forming a distinct picture of usage issues for the next planning session when you draw up a new charter. Very often you will have data that indicates a business impact for poor UX. That will inform work for the next series of iterations. This is also a way of making user research a continuous and integrated part of agile work -- not something that just happens up front.
> >> > On 2011-08-11, at 9:16 PM, Jared Spool wrote:
> >> >
> >> > As part of a recent project, they added some usability testing to their
> >> > sprints. They'd never done a usability test before and, as a result,
> >> > discovered many usability issues with the design.
> >> >
> >> > However, many of the problems they found were outside the original
> >> > scope of the project. The UX team members wanted desperately to add many
> >> > of the problems to the project's backlog, but instantly found resistance
> >> > from the product owners who didn't want to delay the delivery. The
> >> > result was a sense from the UX folks that the product owners didn't care
> >> > about a good user experience.
> >> >
> >> > My question is how might you have avoided this problem? I had initially
> >> > suggested that the team needed more upfront definition of what is and
> >> > isn't within the project's scope and a way to record outside-of-scope
> >> > usability issues for future projects.
> >> >
> >> > What do you think?
> >> >
> >> > Jared
> >> >
> >> > Jared M. Spool
> >> > User Interface Engineering
> >> > 510 Turnpike St., Suite 102, North Andover, MA 01845
> >> > e: jspool@ p: +1 978 327 5561
> >> > http://uie.com Blog: http://uie.com/brainsparks Twitter: @jmspool
> >> >
Message 1 of 12, Aug 16, 2011

On 15 Aug 2011, at 23:28, desireesy wrote:
> What Jon said, although I agree with Mike that perhaps we should have made your point -- there is no scope creep, only the current, updated scope -- more explicit in the response.

It's not a problem unique to usability testing either. I've seen exactly the same prioritisation / value / delivery issue come up when people with a new outlook/skill-set are introduced to a team.
> The question is how to include UX data as a relevant value during iteration planning, because in many situations agile teams are new to this type of input -- and UX practitioners are new to how to participate in higher-level product planning.
For example I've been in the position where, coming into a project late, I've seen systematic security issues (widespread opportunities for SQL injection & CSRF attacks) - and fixing those was going to have a large impact on that lovely regular chugging out of features that had made the business love their agile team so much.
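To make the SQL-injection point concrete, here is a minimal sketch of the kind of fix such stories call for (my illustration, not code from the project Adrian describes; the table and column names are invented). Building SQL by string formatting lets crafted input rewrite the query, while a parameterized query keeps the value out of the SQL text entirely:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name):
    # Vulnerable: user input is pasted directly into the SQL text,
    # so a crafted value can change the meaning of the query.
    return conn.execute(
        "SELECT role FROM users WHERE name = '%s'" % name).fetchall()

def find_user_safe(name):
    # Parameterized: the driver passes the value separately from the
    # SQL text, so it is never parsed as SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row -- injection succeeded
print(find_user_safe(payload))    # returns nothing -- no user has that name
```

Retrofitting this across a codebase with "widespread opportunities" for injection is exactly the kind of large, cross-cutting story that disrupts a steady feature cadence.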
Of course the "agile" response is the same. Responding to feedback is we're supposed to be good at. Write new stories, prioritise the backlog, and move on.
Of course the team shouldn't have been in that situation in the first place. There should have been more security knowledge in the team from the start both on the dev and QA side. Security should have been part of the definition of "done" for stories, and should have been something that the team valued as a whole. There should have been rapid and frequent feedback on security issues throughout the development process.
But those are all *big* changes, and for the team they come completely out of left field. You get pushback, just as Jared's client pushed back.
A team/business accepting that they've been building the wrong thing and/or building the thing wrong for a significant amount of time is a tough pill to swallow - no matter how agile they are.
http://quietstars.com adrianh@... twitter.com/adrianh
t. +44 (0)7752 419080 skype adrianjohnhoward del.icio.us/adrianh
Aug 16, 2011

I think the question of scope creep is real, even for an Agile project. Yeah, in theory the point of Agile is to expand or change the scope whenever your user feedback suggests it would be valuable to do so. In practice, the business has usually made plans based on certain deliverables, and your project has gotten funding based on prioritizing those deliverables over others. That creates an obligation to the business you can't just wish away.
Against this, it's absolutely true that user feedback is enormously seductive. It will cause the team to lose focus if you don't have any way to constrain yourself.
What we do is to split the problem in two. There's user research and concepting that precedes the start of Agile development proper. That ensures we get the right product and the right basic structure. At this point scope can be fairly broad. Referring back to Jared's initial post, because we do this we wouldn't be uncovering basic issues and misconceptions during Agile sprints. (And in fact we don't.)
Before we start sprints, we set scope explicitly by defining what usage cases we cover--what user tasks and situations the design is to support. Then any new extension of the system--which should translate to a new user story--can be evaluated based on whether it enables one of our defining usage cases or not. If not, it's not in scope, and implementing it requires buy-in from the business. If it does enable a usage case, it's in scope and we can prioritize it against our other stories.
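That evaluation rule is almost mechanical, which is what makes it useful in planning meetings. A toy sketch of the idea (the usage cases and story fields here are invented for illustration, not from any real project): every new story names the usage case it enables, and a story that names none is flagged as out of scope pending business buy-in.

```python
# Usage cases agreed before sprints begin -- the explicit scope.
USAGE_CASES = {
    "create-invoice",
    "approve-invoice",
    "search-invoice-history",
}

def triage(story):
    """Classify a new story against the defined usage cases.

    A story is in scope only if it enables one of the agreed usage
    cases; otherwise it needs business buy-in before scheduling.
    """
    if story.get("enables") in USAGE_CASES:
        return "in scope: prioritize against the backlog"
    return "out of scope: requires business buy-in"

print(triage({"title": "Bulk invoice approval", "enables": "approve-invoice"}))
print(triage({"title": "Export to spreadsheet", "enables": None}))
```

The point is not the code but the discipline: the scope question becomes a lookup against a list the whole team agreed to, rather than an argument to re-litigate each iteration.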
In either case, I'd definitely write the stories. That way you don't lose the learnings, and you have the option to prioritize them down the road if it turns out to be important.
And in either case, I'm assuming *real* user testing--with end-users, in their workplace, following their work tasks, not just usability tests on fake tasks in a lab. But doing that in the context of an Agile process is a whole separate topic. :-)