
Re: [XP] Re: iterations - NEWBIE question

  • Curtis Cooley
    Message 1 of 10 , Sep 2, 2003
      Kiel Hodges wrote:
      > For example, a team I coached a couple of years ago abandoned
      > cards in favor of a whiteboard that listed stories and tasks.
      > When a pair finished the last task for a story, they were
      > expected to make sure that the story as a whole really was
      > complete and check it off.
      >
      > The board served as an Information Radiator that provided the
      > current status of the iteration. With the information readily
      > available and the team very aware of the importance of finishing
      > /stories/, we stayed on track rather well from that point forward.
      >
      This is very important, IMHO. We pin the story and task cards to a
      wall/board/cardboard box. The stories across the top in order from left
      to right based on priority with the tasks below the stories:

      Story1 Story2 Story3
      Task Task Task
      Task Task Task

      When a task is completed, it gets a green dot. When all tasks on a story
      are green, the story is green. Anyone walking into the room can
      instantly see our progress.

      So now, if it's the Tuesday of the second week of the iteration and
      Story2 and Story3 are not done, everyone can see it and the team can
      focus on getting Story2 done then getting as much of Story3 done as
      possible.
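      The green-dot rule Curtis describes (a story turns green only when every
      one of its tasks is green) can be sketched in a few lines. The story and
      task layout mirrors his diagram; the particular done/not-done values are
      made up for illustration:

      ```python
      # Each story maps to the done/not-done state of its tasks.
      board = {
          "Story1": [True, True],    # all tasks done -> story is green
          "Story2": [True, False],   # one task still open
          "Story3": [False, False],
      }

      def story_green(tasks):
          """A story is green only when all of its tasks have green dots."""
          return all(tasks)

      # Anyone walking into the room can instantly see the progress:
      status = {story: story_green(tasks) for story, tasks in board.items()}
      for story, green in status.items():
          print(f"{story}: {'green' if green else 'in progress'}")
      ```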
      --
      ======================
      Curtis R Cooley
      RADSoft
      Better software faster
      curtis@...
      ----------------------
      Do what comes naturally. Seethe and fume and throw a tantrum.
    • Arlo Belshee
      Message 2 of 10 , Dec 20, 2003
        > From: "Steve Bate" <steve@...>
        >
        > It's not uncommon for our team to realize we've overlooked some engineering
        > aspect of a story when tasking it. During planning we mostly focus on a
        > high level business-oriented view of the story. During tasking we are
        > looking
        > at it more from a programming perspective. If we missed something when
        > doing the high level analysis we want to incorporate an estimate of that
        > extra work into a more accurate story estimate. The increased accuracy helps
        > us to "better predict the future", therefore the task estimates are /not/
        > extraneous by your definition.

        Hi all.

        I use a method very similar to Bill's, and find it to work quite well.

        The reason is that people tend to be consistent, over time. A team that misses
        one design task on one story tends to miss one on each story. A consistently
        optimistic team is common. A consistently pessimistic team is common. An
        inconsistent team (over a 1 mo. rolling average) is extremely rare.

        Thus, since you are only estimating task size, your inaccuracies make no
        difference in the final number. As long as you _never_ estimate tasks when
        making plans, and never assume that your estimates for story completion have
        any relation to the sum of the estimates for tasks, then everything works out.

        To clarify: we actually use two different units to estimate stories and tasks.
        We estimate tasks for the purpose of deciding what we're going to do this
        week, and in what orders. We estimate stories for the purpose of predicting
        the future. Thus, we actually have two _independent_ velocities - one used for
        release planning, the other for iteration.

        Since we are fairly consistent in what we miss while looking at the big
        stories (and in what we are afraid of and so overestimate), our velocity is
        fairly stable. Likewise, our task velocity is stable. While we can measure the
        difference, and determine a conversion factor between release units and
        iteration units, it's not really necessary: when release planning, sum the
        story estimates and use release velocity; when iteration planning, sum the
        task estimates and use iteration velocity.
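        Arlo's two independent velocities amount to two separate sums that never
        mix units. The numbers below are invented to show the mechanics, not
        taken from his team:

        ```python
        # Release planning: story estimates in "release units" against
        # a release velocity measured in the same units.
        story_estimates = [3, 2, 5, 1, 4]   # hypothetical story sizes
        release_velocity = 5                # release units finished per iteration
        iterations_to_release = sum(story_estimates) / release_velocity

        # Iteration planning: task estimates in "iteration units" against
        # a task velocity. No conversion between the two units is needed.
        task_estimates = [2, 2, 1, 3]       # tasks for this iteration's stories
        iteration_velocity = 8              # iteration units finished per iteration
        fits_this_iteration = sum(task_estimates) <= iteration_velocity
        ```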

        That way, we never have to estimate the individual tasks (which entails
        planning how that part of the system will work, at least a little), until the
        iteration that we do the feature. If we tried to use task estimates for
        release planning, then we would have to plan the system for the future, which
        would lock us in and break simplest thing.

        See ya'alls,
        Arlo
      • Steve Bate
        Message 3 of 10 , Dec 20, 2003
          > From: Arlo Belshee [mailto:xppdx@...]
          > The reason is that people tend to be consistent, over time. A
           > team that misses one design task on one story, tends to miss one on
          > each story. A consistently optimistic team is common. A consistently
          > pessimistic team is common. An inconsistent team (over a 1 mo.
          > rolling average) is extremely rare.

          Wow. Apparently I've only worked on rare teams during the last few
          decades. Every team I've ever worked on overestimated required
          effort sometimes and underestimated it at other times. In "Characterizing
          People as Non-Linear, First-Order Components in Software Development"
          Alistair Cockburn describes inconsistency as a common human failure
          mode.

          The XP literature also states in several places that people tend to
          improve their estimation skills with practice. That matches my experience
          as well. Some people feel that this learning only happens with point
          estimates and not with ideal time. I don't know why there'd be a difference.
          I know I've seen people improve their estimation abilities on our team
          over the years while using ideal time estimates.

          >...
          > To clarify: we actually use two different units to estimate
          > stories and tasks.
          > We estimate tasks for the purpose of deciding what we're going to do this
          > week, and in what orders. We estimate stories for the purpose of
           > predicting the future. Thus, we actually have two _independent_
           > velocities - one used for release planning, the other for iteration.

          Interesting. Are both the units of effort called "points"? How do you
          translate between the two measurement units?

          >...
          > That way, we never have to estimate the individual tasks (which entails
          > planning how that part of the system will work, at least a
          > little), until the
          > iteration that we do the feature. If we tried to use task estimates for
          > release planning, then we would have to plan the system for the
          > future, which would lock us in and break simplest thing.

          This sounds similar to what we do. We only task stories to refine
          estimates for relatively short term activities (an iteration in our case).
          I agree that it's a good idea to do only very limited up front design
          (planning for the future) during release planning. As I've said before,
          we do short releases so that's not much of an issue for us anyway.
        • Ron Jeffries
          Message 4 of 10 , Dec 20, 2003
            On Saturday, December 20, 2003, at 6:03:18 PM, Steve Bate wrote:

            > This sounds similar to what we do. We only task stories to refine
            > estimates for relatively short term activities (an iteration in our case).
            > I agree that it's a good idea to do only very limited up front design
            > (planning for the future) during release planning. As I've said before,
            > we do short releases so that's not much of an issue for us anyway.

            I'm really trying to understand what you do, and I'm just not getting it.

            Let me describe a project using points which happen, through luck or
            design, to be ideal days.

            We have 100 stories. Each has an estimate of 1 point. We have ten
            programmers. We assume that we can do ten stories an iteration, sign up for
            ten, and go.

             At the end of the first iteration, we have five stories done. Using YW
             (Yesterday's Weather), we conclude that we have 19 iterations to go, and
             that we should sign up for five stories in each iteration.

            Over the course of the release, the team's "inherent" velocity remains the
            same. We have about the same overhead in every iteration as in the first.
            Our estimation error is about the same on every story. So in every
            iteration, we get about five done, and after 20 iterations, we're done.
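             Ron's Yesterday's Weather arithmetic checks out directly: plan each
             iteration on what was actually finished last iteration.

             ```python
             total_stories = 100
             done_first_iteration = 5    # points completed in iteration 1

             # Yesterday's Weather: sign up for what you finished last time.
             next_signup = done_first_iteration                 # 5 stories
             remaining = total_stories - done_first_iteration   # 95 stories
             iterations_to_go = -(-remaining // next_signup)    # ceil(95 / 5)
             total_iterations = 1 + iterations_to_go
             ```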

            That's a description of a simple points (or ideal day) project. Now I'm
            going to try to describe what you do with the same situation, not because I
            think I have it but so that you can correct me -- or just throw away what I
            write and do it over:

            Let's suppose that your stories are also estimated at one day each. So your
            customer shows up with ten. You task them, and some show 8 hours, some show
            ten, some show 6. It averages out, so you undertake the ten just like we
            did.

            At the end of the iteration, you have five done. The actuals for the tasks
            are recorded /and overhead, i.e. non-working time is not counted/. (The
            italicized bit is my understanding, correct me if I'm wrong.)

            One possibility is that your estimates are all exactly correct, and the
            team has 50 percent overhead. Another possibility is that your estimates
            are an average of half of what they should be and the team has zero
            overhead. The real case is somewhere in between, most likely. So let's
            suppose for the discussion that you have 25% overhead and the stories are
            50% low in estimate vs actual. You are getting 7.5 days of work out of your
            ten programmers, and since the estimates are 50%, you can get five days of
            stories done, which is what happened.

            Now what happens next time? Do you reestimate all stories at 1.5 days? I
            believe you said that you do not. So what do you do in the next iteration?
            The figures seem to say that you should sign up for 7.5 days of stories.

            If you do, you'll get 5 of them done, and they will take, on the average,
            1.5 days to do.
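             Ron's decomposition of the shortfall into overhead and estimate error
             works out as follows, using the same numbers as above:

             ```python
             estimated_days = 10      # ten one-day stories signed up
             overhead = 0.25          # 25% of capacity lost to non-story work
             estimate_error = 1.5     # actuals run 50% over estimate

             capacity = estimated_days * (1 - overhead)     # days of real work
             stories_done_days = capacity / estimate_error  # days' worth of stories done
             ```

             So 25% overhead and 50%-low estimates together yield five days of
             stories from ten signed-up days, which is exactly what happened.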

            What I'm not getting is how, without reestimating the stories, you ever
            come to the conclusion that (a) you should do five a week, and (b) that the
            project will take 20 weeks.

            Please try to take my situation and explain how it would work. I'm assuming
            a simple though impossible case: all the stories are alike; the team has
            constant "inherent" velocity throughout; it's going to take twenty weeks to
            get done / we can do five a week.

             Given that situation, and trying to estimate in actual, what does the team
            do?

            I'm so confused ...

            Ron Jeffries
            www.XProgramming.com
            Reason is and ought only to be the slave of the passions. -- David Hume
          • Steve Bate
            Message 5 of 10 , Dec 20, 2003
              > From: Ron Jeffries [mailto:ronjeffries@...]
              > Let's suppose that your stories are also estimated at one day
              > each. So your customer shows up with ten. You task them, and
              > some show 8 hours, some show ten, some show 6. It averages out,
              > so you undertake the ten just like we did.
              >
              > At the end of the iteration, you have five done. The actuals for the tasks
              > are recorded /and overhead, i.e. non-working time is not counted/. (The
              > italicized bit is my understanding, correct me if I'm wrong.)
              >
              > One possibility is that your estimates are all exactly correct, and the
              > team has 50 percent overhead. Another possibility is that your estimates
              > are an average of half of what they should be and the team has zero
              > overhead. The real case is somewhere in between, most likely. So let's
              > suppose for the discussion that you have 25% overhead and the stories are
              > 50% low in estimate vs actual. You are getting 7.5 days of work
               > out of your ten programmers, and since the estimates are 50%, you can
              > get five days of stories done, which is what happened.
              >
              > Now what happens next time? Do you reestimate all stories at 1.5 days? I
              > believe you said that you do not. So what do you do in the next iteration?
              > The figures seem to say that you should sign up for 7.5 days of stories.

              I didn't say we never reestimate stories, but that we seldom have to do
              estimation revision on all remaining stories. In this specific example
              (which is quite different from our team's actual experiences) you defined
              all stories to be very alike. I assume that the intent is to force a
               situation where an error in the estimate of one story implies an error in
              the estimate of all future stories. If so, then we'd be forced to
              reestimate all future stories to 1.5 days. Again, we've never had this
              happen IRL.

              > If you do, you'll get 5 of them done, and they will take, on the average,
              > 1.5 days to do.
              >
              > What I'm not getting is how, without reestimating the stories, you ever
              > come to the conclusion that (a) you should do five a week, and
              > (b) that the project will take 20 weeks.

              B follows from A and the 100 story count. A is determined by 1.5 day
              stories and a velocity of 7.5 days/iteration.
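               Steve's two-step answer, in code form, with the reestimated figures
               from Ron's scenario:

               ```python
               remaining_stories = 100
               story_size_days = 1.5    # reestimated from actuals
               velocity_days = 7.5      # days of story work per iteration

               # (a) five stories a week
               stories_per_iteration = velocity_days / story_size_days
               # (b) twenty weeks to finish
               iterations = remaining_stories / stories_per_iteration
               ```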

              > Please try to take my situation and explain how it would work.
              > I'm assuming a simple though impossible case: all the stories are alike;
              > the team has constant "inherent" velocity throughout; it's going to take
              > twenty weeks to get done / we can do five a week.
              >
               > Given that situation, and trying to estimate in actual, what does the team
              > do?

              Does the reestimation clear up the issue?

              > I'm so confused ...

              I'm not sure why it's so confusing. After rereading portions of the Planning
              Extreme Programming book it sounds like we're almost a perfect fit
              to the approaches described by Kent Beck and Martin Fowler. Our stories are
              usually smaller than the examples in the book, but the time-based estimating
               and task-level programmer estimates are described there. What do you think
               is different about the approach I've been describing (possibly not very well)?
            • Ron Jeffries
              Message 6 of 10 , Dec 20, 2003
                On Saturday, December 20, 2003, at 10:10:36 PM, Steve Bate wrote:

                > I'm not sure why it's so confusing. After rereading portions of the Planning
                > Extreme Programming book it sounds like we're almost a perfect fit
                > to the approaches described by Kent Beck and Martin Fowler. Our stories are
                > usually smaller than the examples in the book, but the time-based estimating
                > and task-level programmer estimates are described there. What do you think
                > is
                > different about the approach I've been describing (possibly not very well)?

                Somehow we're not communicating. I haven't been able to formulate a
                question that elicits an answer that I can understand. Enough time spent.
                Thanks.

                Ron Jeffries
                www.XProgramming.com
                If not now, when? -- The Talmud