
change item description during sprint plus measure teams by velocity

  • kaverjody
    Message 1 of 4, Oct 1, 2008
      I faced a situation where the two behaviours in the title happened
      together. I do not like that; it makes things worse.

      - First, the contents of backlog items sometimes change during
      sprints. The problem is that when we take an item, the target is a
      bit vague, e.g. it says to implement certain functionality on one OS
      delivery (embedded). But there are several versions / revisions of
      the OS: at the beginning we were working on r1, we would be on r2
      during the sprint, and the project expected the functionality to
      work against r3 at the end. Only at the end did we realize that the
      PO's expectation was different from the team's.

      - Second, the PO wants to promote good teams (we scrum masters want
      that too), so an evaluation form was developed to find the star team
      among the teams, and one of the measures is velocity.

      In theory, both situations are fine: the first could be due to
      insufficient communication between the PO and the team, and the
      second should be workable.

      In practice, though, it leads in a wrong direction. The teams
      involved interpret the target differently. Consider that the OS
      contains two different parts, e.g. the startup code and some device
      drivers. The team that relies on the startup part was considered
      DONE with its work against the non-working r3. The other teams were
      not, because the driver part they depend on was not ready, so they
      were not DONE against the target OS r3.

      - My feeling is that this is not fair. The teams work together to
      contribute business value; it is their output combined that provides
      the value, not any single team's alone. An OS r3 with startup
      enabled but broken driver functionality is meaningless and useless
      to users.

      Measuring teams by comparing velocity makes things worse. If we
      cannot fairly calculate their productivity, how can we compare their
      velocity? I do maintain that we should track the velocity change of
      a single team over time, as feedback that helps the team improve,
      but comparing velocity between teams is bad. Promoting star teams in
      such a situation may not achieve the goal of rewarding good teams;
      instead it may lead to non-productive competition over fake velocity
      numbers.
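
      To make that concrete, here is a toy sketch (the team names and
      numbers are invented, not from our project): two teams delivering
      exactly the same real output can report wildly different velocities,
      simply because they calibrate story points differently.

      # Toy numbers (invented): why raw velocity is not comparable
      # across teams. Both teams finish the same three features this
      # sprint, but Team A calls each feature 8 points while Team B
      # calls the same feature 2 points.
      points_per_feature = {"Team A": 8, "Team B": 2}
      features_delivered = 3  # identical real throughput for both teams

      for team, points in points_per_feature.items():
          print(f"{team} velocity: {points * features_delivered}")
      # Team A velocity: 24
      # Team B velocity: 6
      # Same output, a 4x difference in "velocity": the comparison
      # measures estimation calibration, not productivity. And once
      # velocity is rewarded, a team can inflate its estimates and
      # "win" without delivering anything extra.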

      I have some ideas, but I'd like to know yours too.
    • Ron Jeffries
      Message 2 of 4, Oct 1, 2008
        Hello, kaverjody. On Wednesday, October 1, 2008, at 9:43:54 AM,
        you wrote:

        > - Second, the PO wants to promote good teams (we scrum masters
        > want that too), so an evaluation form was developed to find the
        > star team among the teams, and one of the measures is velocity.

        > In theory, both situations are fine: the first could be due to
        > insufficient communication between the PO and the team, and the
        > second should be workable.

        I don't agree. The second is pernicious.

        > - First, the contents of backlog items sometimes change during
        > sprints. The problem is that when we take an item, the target is
        > a bit vague, e.g. it says to implement certain functionality on
        > one OS delivery (embedded). But there are several versions /
        > revisions of the OS: at the beginning we were working on r1, we
        > would be on r2 during the sprint, and the project expected the
        > functionality to work against r3 at the end. Only at the end did
        > we realize that the PO's expectation was different from the
        > team's.

        Card, conversation, confirmation.

        Also consider not working on three revisions at the same time.

        Ron Jeffries
        www.XProgramming.com
        www.xprogramming.com/blog
        The work teaches us. -- Richard Gabriel
      • kaverjody
        Message 3 of 4, Oct 1, 2008
          --- In scrumdevelopment@yahoogroups.com, Ron Jeffries
          <ronjeffries@...> wrote:
          > I don't agree. The second is pernicious.

          Me too. It was hard for me even to write down the word "should".

          > Card, conversation, confirmation.

          Could you give more details about those suggestions?

          We (the team I was scrum master for) saw that some items we had
          marked as not DONE appeared as DONE in the product backlog at
          the next sprint planning. We had said their tests passed against
          OS r2, but r3 was not yet available. The PO marked them as DONE,
          explaining that once r3 is ready we just do regression
          testing ... I think the PO was considering the motivation issue,
          but I disagree with mixing that into the Definition of Done.

          Even if r3 turns out to be so stable that all the tests just
          pass, that does not mean we can mark those items as DONE,
          because the customer cannot use them on the expected OS, and
          that is the business value of those items.
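
          One way to picture the disagreement (a minimal sketch in
          Python; the item fields and revision names are my own
          invention, not our actual backlog tooling): DONE should be a
          check over every OS revision the customer will actually run
          on, not just the revisions that happened to be available
          during the sprint.

          from dataclasses import dataclass, field

          @dataclass
          class BacklogItem:
              name: str
              target_revisions: tuple  # every OS revision the item must work on
              passed_revisions: set = field(default_factory=set)

          def is_done(item: BacklogItem) -> bool:
              # DONE means acceptance tests passed on *every* target revision.
              return all(rev in item.passed_revisions
                         for rev in item.target_revisions)

          item = BacklogItem("startup feature", target_revisions=("r1", "r2", "r3"))
          item.passed_revisions = {"r1", "r2"}  # r3 was not available in the sprint

          print(is_done(item))  # False: the customer runs on r3, so no value yet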

          In this respect, "Card, conversation, confirmation" is useful,
          but it may not be the Mr. Right for this problem.

          - Kaveri
        • chuckspublicprofile
          Message 4 of 4, Oct 1, 2008
            Measuring one team's velocity against another's is EXTREMELY
            error prone, IMO, and as such quite invalid. Just forget about
            that piece, as it's a non-starter and ridiculously inaccurate.
            I think Mike Cohn's book has some material on how no two agile
            teams operate on the same point system and definition of
            "done". He also hits it again when management stupidly tries
            to measure or evaluate individual velocity, which is equally
            ridiculous.

            With regard to your other problem, I hope you will retrospect
            and take away that, from now on...

            a) You should specify the story in greater detail, to include
            not only the OS, but also the exact versions of the OS you
            plan to deliver on.

            b) You should make sure you get the acceptance criteria up
            front, and have your PO do that. In other words, were the
            acceptance criteria "functionality must pass tests on all
            supported platforms", or were they "functionality must pass
            tests on OS r1, OS r2, and OS r3"? Some teams hold a
            "preview" meeting a few days before the upcoming sprint to
            help identify dirty details and allow time to resolve this
            kind of acceptance test question before the sprint planning
            meeting. You might consider that as well; see the sketch
            after this list.

            c) You should counsel your PO that a story is not DONE until
            ALL ACCEPTANCE tests have passed, and thus a story shouldn't
            be removed from the product backlog unless 1) it has passed
            according to your definition of done or 2) the story no
            longer needs to be implemented.
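
            For example (a hypothetical story card, not one from the
            original project), points a) and b) amount to spelling the
            criteria out as data, so nothing is left to interpretation
            during the sprint:

            # Hypothetical story card; the title and criteria are made up
            # to show the level of detail suggested in (a) and (b).
            story = {
                "title": "Implement feature X on the embedded OS",
                # Vague form that invites trouble:
                #   "functionality must pass tests on the OS delivery"
                # Explicit form, agreed with the PO before sprint planning:
                "acceptance_criteria": [
                    "all feature X acceptance tests pass on OS r1",
                    "all feature X acceptance tests pass on OS r2",
                    "all feature X acceptance tests pass on OS r3",
                ],
            }
            print("\n".join(story["acceptance_criteria"]))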

            The idea that a particular story could be considered "done"
            by Team A and "not done" by Team B seems fishy to me. Team A
            should be working on story 1A and Team B on story 1B; as
            such, they would probably come off of *different* product
            backlogs and thus should be managed completely differently.

            It sounds like your PO and organization are lacking some
            fundamental knowledge about Scrum. Have you all had any
            training, by chance?

            I don't mean to offend, just expressing my thoughts and opinions.

            Charles Bradley

