Re: [agile-usability] Inside Steve's Brain
- Is what typical to agile? Big, upfront exploratory design processes? Certainly not. Building one feature at a time w/o much thought to how it will fit into the product as a whole? Yes.
I agree with you that if the will is there, then there's a way. What I'm saying is that agile certainly wasn't built as a "design" process, and by design I mean interaction design, interface design, visual design, industrial design, etc. Most designers have the impression that agile puts a developer and a business person or customer proxy together every day. The business person or customer says "this is what I want" and the development team builds it. Designers & UCD researchers see this process as faulty in many ways, and for good reason. Where are the designers? Where is the research?
"If I had asked people what they wanted, they would have said faster horses." - Henry Ford
I'm playing devil's advocate here - I'm not an "agile hater". Many designers are, however - for the reasons I mention above, and also because of agile's inability to support exploration of an entire design, not just small features one at a time. And moreover, to explore not just one idea, but many. I presented at Interactions 08 on the topic of Agile UCD/interaction design. Alan Cooper, who is presenting at Agile 08, keynoted that conference. The response was interesting, to say the least. See http://www.ixda.org/discuss.php?post=25764#25764. This gives a good idea of how the design community feels about agile. Some of us are practicing it, embracing it and changing. Others are not, and strongly feel that agile is not a good fit for the design process.
I'm not familiar at all w/ XP; all my experience is with Scrum. But I'm assuming you don't build the entire software product and then iterate upon it 52 times. You probably add features each week. Is that correct? If so, you don't see the product as a whole until the end of the year. A traditional designer would say that this really is 1 chance, not 52. And since there were no "big upfront" requirements/design, all the features may or may not fit together to form a strong, cohesive product. Even if the feature done in week 17 totally rocked, it's just one small part of the product as a whole.
I bring this up not to be a pain in the *** (which is rare for me :-) ) but just to point out that when I think of Apple's design process, the very last thing that comes to mind is agile, and I'd guess the design community at large feels the same way. Apple's culture is dominated by big exploratory design phases before making decisions and releasing a product. By *designers* making design decisions, not customers and not engineers. This is very different from agile, at least based on my experience and perception of it.
Again, I agree about the whole "if there is a will" thing. But if you spend years exploring design alternatives before going to production, then hand off that spec to development teams... how is that not waterfall?
Jeff

On Mon, Jun 9, 2008 at 9:42 AM, William Pietri <william@...> wrote:
Jeff White wrote:
> One challenge many designers see in agile is that there is no time for this kind of iterative design
> approach. All of the agile projects I've worked on simply do not allow for this type of upfront
> design/prototyping process. Agile tends to call releases or sprints iterations - but in the true
> design sense, they are not. An iteration means that one idea/concept is refined. Typically in agile,
> things aren't refined as much as simply added onto [...]
>
> True iterative design can (kind of) happen in agile, where a feature is refined based on feedback or
> UCD research. However, the focus seems to be always on adding more features and releasing [...]
Is that typical to agile projects? Or just projects? Kahney believes that Apple is exceptional in that almost nobody pursues this level of refinement, no matter what their process.
I think that agile processes can support this approach very well -- if there is a will for it. Note that their prototypes are fully functional, which I presume includes working software. A team with a waterfall background would resist that, as they would want to plan the software fully before building any of it. But agile teams should be totally happy with fast iteration in support of exploring the design space.
Deciding when to release and to whom is a business decision. So is the amount to invest in design. Agile software development processes are about making good software, and are not really prescriptive with regards to business decisions. And they shouldn't be; low-road and high-road strategies both have their successes.
But agile methods do enable better business decisions. If you use Extreme Programming to develop for a year before public launch, then you will have 52 opportunities to decide whether or not you are on the right track. 52 chances to try the product out. 52 occasions you can see how well it works for your target audience. 52 points to check whether this project will live up to your standards.
However, no process can give you those standards. No process can make you invest in good design. No process can make you brave enough to look at millions of dollars in investment and say, "Good, but not good enough to release."
Still, I think 52 chances to make the right decision -- and more importantly, to learn to make good decisions -- is better than 1.
> But I am against research just being done before the project starts and then that is it, and
> totally against design being fixed before the project starts.

I agree totally. Ethnographic user research should aim at understanding the user THROUGH understanding how they work today. The aim is not to just replicate the way they work today.
> The challenge is that the research is descriptive and not predictive, and is often used in a
> prescriptive manner. I have been involved in some projects where there has been the assumption that
> the consumer's behaviour remains fixed.
Here's an example taken from my contextual inquiries with translators.
We noticed that NOT ONE OF THEM used collaboratively built linguistic resources like ProZ, Wiktionary and OmegaWiki. That's not necessarily to say that collaboratively built resources would not be useful to them, or that you should not build them. It may be that collaboratively built resources just haven't made it into their world yet.
However, the CI observation gives us a lot of useful information about how a collaborative resource should be built to serve the needs of translators. For example, we noticed that translators at least pay a lot of lip service to "trustworthy sources". So you know that with a collaborative resource where pretty much anybody can write content, you are up against a perception of lack of trustworthiness. At the same time, we noticed that when translators can't find a solution in trusted resources (e.g. the terminology database of the Gov of Canada), they have no qualms about looking in less trustworthy resources, for example by doing a search on the internet. So it could be that translators will be willing to use collaborative resources if they have more coverage than "trusted" ones. As the translators use the resource more and more, they will notice that the quality is high, and may get over the lack-of-trust barrier.
But all of that is of course hypothetical. These are hints that help you narrow down the search space. In a case like this, those hints are particularly important, because a wiki cannot be tested with individuals. It can only be realistically tested with a large community. It's like the WWW. You couldn't test the concept of the WWW without already having a network of millions of interlinked pages (although you could test the concept of a web browser on individuals). So the only way to realistically test a wiki is to deploy it and see what happens, which means the turnaround time for validating your decisions is longer. Hence the importance of having good a priori data to guide your initial choices. Of course, if you deploy something and it doesn't work, you should listen to what the community is telling you through their actual use of the real thing.
> There is research that uses sweeping statements, like from Broadbent and Cara's NEW ARCHITECTURES OF
> INFORMATION, Paris 2002:
> "During the past four years, we have carried out hundreds of observations of people using the Web."
> which leads them to a conclusion that
> "Most light users have very stereotypical behaviours: after six months of usage of the Internet they
> stop even trying to do searches through a search engine and consult systematically the same six or
> seven sites."
> Broadbent and Cara's observations tell us about the time before Google.

I agree that behaviour which is specifically tied to technology can change rapidly, hence the need to conduct this type of research continuously.
However, CI will also yield information that is pretty much independent of technology. For example, we have noticed that translators do not blindly trust ANY source (even official ones like the Gov of Canada terminology database), and will systematically consult at least two different resources to resolve any given translation difficulty. The only exception is when the translator has the proper translation at the tip of his tongue, and uses a lookup in a single resource to remember it. I'm told that this behaviour was there already when translators used only paper dictionaries.
> My argument is that any new product changes the user behaviour, and therefore you need to get the new
> product in front of the user as soon as possible, so that you can feed the user's behaviour back
> into the next iteration.

Yes, you should do that.
> Research needs to be carried out before, so that you can set up the goals of the project, and the
> stories. It needs to be done during development, so that you can feed the user behaviour back
> into the product, and then after, to find out when you need to make the next version.

Yep.