Re: [agile-usability] Subtle User Interaction Experiences

  • Adrian Howard
    Message 1 of 91, Aug 1, 2006
      On 31 Jul 2006, at 16:30, Jeff Patton wrote:
      [snip]
      > I think it was Andrew who mentioned that the posture of some
      > developers is to ask if it was part of the original story - or to
      > suggest putting a delight feature in a subsequent story. I work in
      > what should be pretty Agile environments and this happens a lot. I
      > didn't look at the Apple article, but if I get the idea of the
      > feature from reading it, it does seem a rather simple feature. And,
      > if implemented once, it could be easily leveraged wherever a date
      > appears in a table. I've stumbled across those sorts of feature
      > opportunities before and in practice, I routinely see push back from
      > developers on implementing them.
      [snip]

      Wearing my developer hat, if I suggest something go into a subsequent
      story I'm not saying "I don't want to implement this", I'm saying
      "I'm more than happy to implement this, but don't consider it part of
      the story I'm working on at the moment".

      When I read "push back from developers on implementing them" I heard
      "the developers don't want to implement them" - was I mishearing what
      you were saying?

      If not, I'm curious what kind of push back you are getting - since my
      experience is that agile teams are positively eager to implement
      these sorts of feature.

      Is it possibly down to an environment where you're not allowed to add/
      remove/change stories during an iteration?

      Cheers,

      Adrian
    • Pascal Roy
      Message 91 of 91, Sep 9, 2006

        I just thought I should share this experience, because it frustrated me a bit in terms of user experience:

        - I just logged in to the PMI site as a member. It had been a while, so I wasn't sure of my username and password (it's been a few months since I last used it). Of course, one of the first things you would expect is to know whether the system recognized you or not. Guess what: nowhere on the page that came up is there any mention of my name. As far as I can tell, I don't even know if I'm really logged in (OK, I see "log out" somewhere, so I'll assume I am). My reflex was to look everywhere on the page (I had to scroll, because the first page is long). Because it puzzled me a bit, my next reaction was to look at the left menu to see if I could get my account details. No luck: no menu entry is clearly labelled that way.

        - Aha, I just found the problem. There is a "Membership information home" button which I wrongly took for the menu title (it was not underlined, had a different color and background than all the other menu items, and because it didn't look clickable I dismissed it as a header and didn't even read the text). However, it made no sense that I could not see my account info, so I investigated the UI further (and then realized what that "header" actually was). When I clicked on it, it finally got me to my account information. I figure this is normally the first page you get when you log in; for some reason, they decided to put a two-page advertisement there instead ("PMI's 250,000 Member Race").

        - Anyway, it now makes me feel a little bit stupid that I lost so much time figuring this out as a user. A simple "Hi Pascal" on the login page would have avoided the whole freaking thing, and so would a menu item that actually looks clickable... Oh well, maybe it's just me; I'm probably below their required target user intelligence level...

        - Isn't that pretty basic usability stuff? And we are talking about a fairly prestigious site here (I hear the PMI is targeting 250,000 members worldwide)...

         

        Anyway, the point I want to make is that even basic stuff like that is very common in the field. It leads to software that is harder to use than it should be (ever heard of the digital divide? I think stuff like this contributes heavily to it), and it even frustrates and angers people at times. Frankly, I doubt they had even one real user test that part of the site before they put it out there...

         

        Pascal Roy, ing./P.Eng., PMP

        Vice-Président/Vice President

        Elapse Technologies Inc.

         

        [url]    http://www.elapsetech.com

        [email]  pascal.roy@...

        [cell]   514-862-6836

         

         


        From: agile-usability@... [mailto:agile-usability@...] On Behalf Of Phlip
        Sent: 6 September 2006 11:15
        To: agile-usability@...
        Subject: Re: [agile-usability] Catching usability issues with automated tests

         

        Adrian Howard wrote:

        > Some examples:
        > * Clean XHTML/CSS validation as a sign that the app will present well
        > on all browsers
        > * Using the presence of ALT tags as a sign of accessibility.
        > * Using a computed "colour contrast" value as a sign of legibility
        > * Using the Kincaid formula or similar as a sign of readability

        * Use pure XHTML, so all that's accessible to the testage
        * Run the site's pages thru Tidy and ask if it's accessible
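
        A minimal sketch of the ALT-tag check, in Python, using only the
        standard library (the page list is invented for illustration; the
        other checks would hang off the same loop):

            from html.parser import HTMLParser

            class AltChecker(HTMLParser):
                """Collect positions of <img> tags lacking an alt attribute."""
                def __init__(self):
                    super().__init__()
                    self.missing_alt = []

                def handle_starttag(self, tag, attrs):
                    if tag == "img" and "alt" not in dict(attrs):
                        self.missing_alt.append(self.getpos())  # (line, column)

            def check_page(html_text):
                checker = AltChecker()
                checker.feed(html_text)
                return checker.missing_alt  # empty list means the check passes

            # Fail the build if any page ships an image without alt text.
            for path in ["index.html", "members.html"]:  # hypothetical pages
                with open(path, encoding="utf-8") as f:
                    problems = check_page(f.read())
                assert not problems, "%s: img without alt at %s" % (path, problems)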

        Those tests sound weak, but some GUIs must internationalize and
        localize correctly. Users of some rare language are probably familiar
        with, and tired of, the same dumb bugs in their GUIs. So switch to
        each language and run all those tests again.
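
        A sketch of that outer loop, assuming hypothetical render_page and
        check_page hooks standing in for your app and the checks above:

            LOCALES = ["en-US", "fr-FR", "de-DE", "ja-JP"]  # placeholder list

            def check_all_locales(render_page, check_page, pages, locales=LOCALES):
                """Render every page in every locale and re-run the same checks."""
                failures = []
                for locale in locales:
                    for page in pages:
                        html_text = render_page(page, locale)
                        failures.extend((locale, page, hit)
                                        for hit in check_page(html_text))
                return failures  # empty list means every locale passed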

        Next, do it even if your GUI is not HTML. MS's RESX files are of
        course parsable as XML. I wrote

        http://www.c2.com/cgi/wiki?MsWindowsResourceLint

        to scan the localized RC files looking for bugs. The program has an
        extensible framework so you can add in any kind of test you can think
        of.
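
        For flavor, here is a sketch of the same idea in Python - not the
        program above, just a toy lint that diffs a localized .resx against
        the neutral one and flags missing strings and {0}-style placeholder
        mismatches:

            import re
            import xml.etree.ElementTree as ET

            PLACEHOLDER = re.compile(r"\{\d+\}")  # .NET-style {0}, {1} markers

            def resx_strings(path):
                """Map resource name -> string value from a .resx file."""
                root = ET.parse(path).getroot()
                return {d.get("name"): (d.findtext("value") or "")
                        for d in root.iter("data")}

            def lint_locale(neutral_path, localized_path):
                """Flag missing translations and placeholder mismatches."""
                neutral = resx_strings(neutral_path)
                localized = resx_strings(localized_path)
                problems = []
                for name, text in neutral.items():
                    if name not in localized:
                        problems.append((name, "missing translation"))
                    elif (set(PLACEHOLDER.findall(text)) !=
                          set(PLACEHOLDER.findall(localized[name]))):
                        problems.append((name, "placeholder mismatch"))
                return problems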

        (At SYSTRAN, I spent a week writing the predecessor of that program. I
        didn't notice they didn't nano-manage me during that week because they
        were preparing to fire me. So when they did, my last act was to send
        to all their executives a complete, automated report describing every
        usability issue in every supported locale of every product, with
        instructions how to run it again as part of their test server. The
        total error count was >4k, in a company that's supposed to do
        localization as a core competency!)

        > 1) The system takes a snapshot of the HTML/CSS of each page in a web
        > app whenever somebody commits a change
        > 2) Have a flag you can set on each page once you have reviewed them
        > 3) Automatically notify you when a reviewed page changes, and have a
        > failing test until you mark it as reviewed again

        That is a technique under the umbrella I call "Broadband Feedback".
        However, marking the test as failing is unfair to programmers, who
        just want to check in an innocent change that doesn't break anything.
        Move the "reviewed" flag from the bug column to some other column!
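
        A sketch of what I mean, with the review record kept in a little
        JSON ledger (file name and hashing scheme invented for illustration):

            import hashlib, json, pathlib

            LEDGER = pathlib.Path("reviews.json")  # hypothetical review ledger

            def page_hash(html_text):
                return hashlib.sha256(html_text.encode("utf-8")).hexdigest()

            def review_status(page, html_text):
                """Report 'reviewed' or 'needs-review' - never a red bar."""
                ledger = json.loads(LEDGER.read_text()) if LEDGER.exists() else {}
                if ledger.get(page) == page_hash(html_text):
                    return "reviewed"
                return "needs-review"  # its own column, not the bug column

            def mark_reviewed(page, html_text):
                """The reviewer calls this after eyeballing the changed page."""
                ledger = json.loads(LEDGER.read_text()) if LEDGER.exists() else {}
                ledger[page] = page_hash(html_text)
                LEDGER.write_text(json.dumps(ledger, indent=2))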

        To achieve Broadband Feedback, automate the steps. The reviewer should
        simply turn on a web interface that displays each changed GUI, and
        review the change in the website - not necessarily in the target
        program. That's why I wrote this:

        http://www.zeroplayer.com/cgi-bin/wiki?TestFlea

        (Click on a green bar.)

        Imagine if you were the Sanskrit linguist for a project. Wherever you
        are (even up a mountain in Nepal), you visit the project's web site.
        You get a page like that; maybe it contains only unreviewed items, or
        maybe unreviewed items have a grey spot next to them.

        You inspect each GUI, verifying it uses correct Sanskrit, then you
        switch the record to Reviewed.

        For more complex usability needs, a test batch could also upload
        animations of the program in use.

        > No we cannot make a computer say whether an arbitrary thing is
        > usable. However we can make a computer spot many of the instances
        > where a usability design decision that we have made is actually being
        > implemented correctly.

        The adoption of Agile techniques in the game industry, today, is at
        about the same place as Agile adoption was in business 6 years ago.
        One common FAQ (unanswered even on many game projects) is this:

        If the highest business value feature is Fun, how can you
        write an acceptance test for that?

        The answer is the same as for any other untestable property (security,
        robustness, availability, usability, fault tolerance, etc.). Fun is a
        non-functional requirement that generates many functional
        requirements, each of which can be tested.

        In games, that requires designers to occupy the Onsite Customer role,
        and author their scenarios as scripts that test a game automatically.
        A scenario should run a hero thru a level and ensure they kill every
        monster.
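
        A sketch of such a scenario script, in Python; the Level/Hero API
        is invented for illustration, standing in for the engine's real
        scripting hooks:

            def test_hero_clears_level(load_level, make_hero):
                level = load_level("level-3")
                hero = make_hero(level.spawn_point)
                for monster in list(level.monsters):  # copy: list shrinks as they die
                    hero.walk_to(monster.position)
                    hero.attack_until_dead(monster)
                assert not level.monsters, "some monsters survived the scenario"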

        Next, games are very dynamic and emergent. A change to a Maya file
        here can cause a bug, or a lapse of Fun, in game levels over there.
        One way to preserve Fun without locking down every file is to use Gold
        Master Copy tests on aspects of a game's internal details.

        For example, two runs thru the same scenario should generate the same
        log file. A programmer could change the code in an innocent way,
        changing the log file without affecting Fun. But these tests should
        run as often as possible, so the programmer will revert their change,
        then make a _different_ innocent change which might work.
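
        A sketch of such a Gold Master test, written as a pytest test
        (tmp_path is pytest's temporary-directory fixture; the command line
        and file names are placeholders):

            import pathlib
            import subprocess

            def test_scenario_log_matches_gold_master(tmp_path):
                """Replay a recorded scenario and diff its log against the
                blessed copy."""
                log = tmp_path / "run.log"
                subprocess.run(["./game", "--replay", "scenario-3.rec",
                                "--log", str(log)], check=True)
                expected = pathlib.Path("gold/scenario-3.log").read_text()
                assert log.read_text() == expected, "log drifted from the gold master"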

        These kinds of tests can't easily pinpoint bugs, so run them as
        often as possible; that way the cause must be the most recent edit.
        Treat these tests as seismograph readings of earthquakes deep beneath
        the surface.

        --
        Phlip
        http://c2.com/cgi/wiki?ZeekLand <-- NOT a blog!!


