Re: [agile-usability] Inside Steve's Brain
- Jeff White wrote:
Excellent. Thank you for clarifying. I believe agile holds a lot of potential for the design community, and I'm going to investigate what PARC was up to, I'm not familiar with it - can you point to any resources?
This is perhaps 5 years in the past, so my memory's a little hazy at this point, but the researcher I talked with most was Nicolas Ducheneaut. They had some ideas about better interfaces to email, based on the observation that many people use their inboxes as to-do lists. Part of their research involved giving different tools to heavy email users. They built the tools on a 1-week iterative cycle, deployed to their test audiences relatively early in the course of the project, and used data and observation of subjects to generate new hypotheses and new design ideas, changing their software development plans as they went.
There were two main points I remember from his presentation on this. One was the bit I mentioned before, their conclusion that "Extreme Programming expands the design space." The other was that some of their test subjects who were also PARC executives got so hooked on their product that they refused to stop using it when the experiment was over.
> But I am against research just been done at the before the project starts and then that is it, and
> totally against design been fixed before the project starts.

I agree totally. Ethnographic user research should aim at understanding the user THROUGH understanding how they work today. The aim is not to just replicate the way they work today.
> The challenge is that the research is descriptive and not predictive, and is often used in a
> prescriptive manner. I have been involved in some projects where there has been the assumption that
> the consumers behaviour remains fixed.
Here's an example taken from my contextual inquiries with translators.
We noticed that NOT ONE OF THEM used collaboratively built linguistic resources like ProZ, Wiktionary and OmegaWiki. That's not necessarily to say that collaboratively built resources would not be useful to them, or that you should not build them. It may be that collaboratively built resources have just not made it into their world yet.
However, the CI observations give us a lot of useful information about how a collaborative resource should be built to serve the needs of translators. For example, we noticed that translators at least pay a lot of lip service to "trustworthy sources". So you know that with a collaborative resource, where pretty much anybody can write content, you are up against a perception of untrustworthiness. At the same time, we noticed that when translators can't find a solution in trusted resources (e.g. the terminology database of the Government of Canada), they have no qualms about looking in less trustworthy ones, for example by doing a search on the internet. So it could be that translators will be willing to use collaborative resources if those have more coverage than the "trusted" ones. As translators use the resource more and more, they will notice that the quality is high, and may get over the lack-of-trust barrier.
But all of that is of course hypothetical. These are hints that help you narrow down the search space. In a case like this, those hints are particularly important, because a wiki cannot be tested with individuals; it can only be realistically tested with a large community. It's like the WWW: you couldn't test the concept of the WWW without already having a network of millions of interlinked pages (although you could test the concept of a web browser on individuals). So the only way to realistically test a wiki is to deploy it and see what happens, which means the turnaround time for validating your decisions is longer. Hence the importance of having good a-priori data to guide your initial choices. Of course, if you deploy something and it doesn't work, you should listen to what the community is telling you through their actual use of the real thing.
> There is research that uses sweeping statements like from Broadbent and Cara's NEW ARCHITECTURES OF
> INFORMATION Paris 2002.
> "During the past four years, we have carried out hundreds of observations of people using the Web."
> which leads them to a conclusion that
> "Most light users have very stereotypical behaviours: after six months of usage of the Internet they
> stop even trying to do searches through a search engine and consult systematically the same six or
> seven sites."
> Broadbent and Cara's observations tell us about the time before google.

I agree that behaviour which is specifically tied to technology can change rapidly, hence the need to conduct this type of research continuously.
However, CI will also yield information that is pretty much independent of technology. For example, we have noticed that translators do not blindly trust ANY source (even official ones like the Government of Canada's terminology database), and will systematically consult at least two different resources to resolve any given translation difficulty. The only exception is when the translator has the proper translation at the tip of his tongue, and uses a lookup in a single resource just to recall it. I'm told that this behaviour was already there when translators used only paper dictionaries.
> My argument is that any new product changes the user behaviour, and therefore you need to get the new
> product in front of the user as soon as possible, so that you can feed back the users behaviour back
> into the next iteration.

Yes, you should do that.
> Research needs to be carried out before so that you can set up the goals of the project, and the
> stories. It needs to be done during development so that you can feed back the user behaviour back
> into the product, and then after to find out when you need to make the next version.

Yep.