
18581RE: [scrumdevelopment] Re: Scrum illness, symptoms and possible treatments

  • Clinton Keith
    Dec 29, 2006
      -----Original Message-----
      From: Mike Bria

      Clint --

      I find it interesting that you have an "all hands on" review and you're
      observing decreased "done-ness" and quality - I would think that would
      help to improve your teams' desire to deliver high quality software come
      review time, since they will be showing it not only to a "few in the
      room" but to over a hundred people (read: a subtle, positive dose of
      "peer incentive"). Why do you think that is?

      Also, you answered Tobias' question about your 'retrospectives' with a
      description of your latest 'review' strategy - do you still have a
      team-specific 'retrospective' session independent of the review?



      One of the reasons to have the all-hands review was to look at the whole
      product rather than the polished (feature team) parts that didn't quite
      fit together. It does generate a lot of awareness of the current status
      of the entire product, but it has not changed the full team's
      _behavior_ toward delivering high quality, integrated features.

      A number of things contribute to this:

      - Feature teams are not completely independent of one another. There
      are overlaps and occasional hand-off issues.
      - Separate disciplines (art, design, and programming) do not communicate
      as effectively as they could. Pipelining and hot-potato issues arise
      (e.g. does a poorly moving character mean bad animation or a bug in
      the animation technology?).
      - 120 people on one product/14 teams have more of these issues than 30
      people/4 teams.

      With respect to retrospectives, the team retrospectives have tended to
      focus on the practices of the team. We have encouraged the teams to
      address product issues as well during these, but they seldom do. The
      full team (120 people) has tried product retrospectives, but those
      haven't had much success.

      Recently, due to these issues, we have started to have customer reviews
      of the game _during_ the Sprint. I know this is not an ideal Scrum
      practice, but the teams have encouraged it. The reviews (twice a week)
      consist of some customers and senior members of the team playing the
      game after hours and identifying story-related task work that the team
      might consider "done" but that is obviously unpolished. This is a
      band-aid, though. We'd like the teams to learn how to catch these issues
      on their own (most are obvious).

      I can't emphasize enough that these issues are usually artistic and
      subjective. Unit tests don't apply here. They are the kind of things
      that everyone who has ever played a game has seen (characters behaving
      unrealistically, odd physics, etc.).
