Re: Burn Up Charts
Since both the Silicon Valley Patterns Group list and the Scrum
Development list have been high traffic, I've consolidated responses to
Mike Cohn, David J. Anderson, Ron Jeffries, and John P. Gilman in this
message.
Mike Cohn wrote:
> I assume the y-axis below is really “time spent” rather than features.

Customers aren't asking to use up staff days--they're asking to
implement features. Mythical man month arguments aside, one benefit of
the burn up chart is that it allows you to track and understand your
velocity. We described the burn-up chart in terms of features (e.g.
rough estimates of cumulative story points) because relative difficulty
of features is more easily (and accurately) estimated than staff days.
(more on this below)
A primary goal of the burn up chart is to illustrate the need (or
opportunity) for scope negotiation, and to make these historical
negotiations clear and distinct from changes in velocity.
Measuring against planned features makes feature adjustment the easiest
and most natural response to a projected problem.
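The projection that motivates such a scope negotiation can be sketched in a few lines of Python. This is only an illustration under a stated assumption (a roughly stable velocity in points per day); all function names and numbers are mine, not from the chart under discussion:

```python
# Extrapolate progress at the observed velocity and compare it to the
# planned feature set at the target date. A positive gap suggests scope
# to renegotiate (or velocity to find). Illustrative names throughout.

def projected_points(completed_points, velocity, days_remaining):
    """Points the team is on track to complete by the target date."""
    return completed_points + velocity * days_remaining

def scope_gap(planned_points, completed_points, velocity, days_remaining):
    """Points that must be cut (if positive) to hit the target date."""
    return planned_points - projected_points(completed_points, velocity,
                                             days_remaining)

# With 60 of 100 planned points done, velocity 1.5 points/day, and
# 20 days left, the team projects 90 points: about 10 to renegotiate.
print(scope_gap(100, 60, 1.5, 20))  # -> 10.0
```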
David J. Anderson wrote:
> Hence, your chart would be a departure for Scrum. It
> shows client-valued output, rather than level of
> effort required to complete a deliverable.

Yes. I think you've drawn a nice distinction. As I read them, the burn
down chart shows how much money (~staff time) the project expected to
spend at a series of historical points. The burn up chart shows
historical progress toward a planned feature set. Both give some idea
of whether the team is on track for delivery, with different emphasis on
what should be done about mismatches (staffing change emphasis vs.
feature scope emphasis).
Given a known stable velocity it's simple arithmetic to transform
between feature points and staff days. But do you know your velocity to
that accuracy early in a project? If you really do, you could use either
label for the y-axis interchangeably. But if you don't (which is
probably the case) I don't know how you would modify a chart labeled in
staff days to reflect your growing understanding.
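That arithmetic, as a minimal Python sketch. It assumes a stable velocity expressed in feature points per staff day; the function names are mine, chosen for illustration:

```python
# Converting between feature points and staff days, given a stable
# velocity. The point of the surrounding discussion is that this only
# works once velocity is known with some accuracy.

def points_to_staff_days(feature_points, velocity_points_per_day):
    """Staff days needed to deliver the given feature points."""
    return feature_points / velocity_points_per_day

def staff_days_to_points(staff_days, velocity_points_per_day):
    """Feature points deliverable in the given staff days."""
    return staff_days * velocity_points_per_day

# E.g. at a stable velocity of 2.5 points per staff day,
# a 100-point feature set implies 40 staff days of work.
print(points_to_staff_days(100, 2.5))  # -> 40.0
```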
Mike Cohn wrote:
> One concern I would have expected developers to have with this chart
> would be the boss looking at it and seeing a high value on the y-axis:
> “You mean we’ve spent 20,000 person-hours on that project!!!!”.

Wouldn't the boss look at a burn down chart and say, "You mean we're
going to spend 20,000 person hours on this project!!!"?
David J. Anderson wrote:
> Does your chart compare to the Cumulative Flow Diagram
> I described here?

Thanks for posting this link. Yes, the simple burn up chart looks like
your Cumulative Flow Diagram. Chris Lopez notes (on the Silicon Valley
Patterns Group list) that it isn't clear whether the Cumulative Flow
Diagram uses a raw feature count or features scaled by difficulty. I
had in mind that our feature axis would indicate scaled "feature
points". I'm not sure what Phil Goodwin had in mind here since we
haven't explicitly discussed this aspect.
Please also have a look at the more complex version of the burn up chart
that Phil posted to the Silicon Valley Patterns Group list. See the
attachment "BurnUp3.jpg" in:
(Note: Phil also addresses Ron's acceptance test chart in this message.)
John P. Gilman wrote:
> Mary, I'm having a little trouble understanding what I'm seeing with
> the horizontal "Expected Feature Set" lines. My best guess is that
> I'm looking at feature creep over time with a reduction in the
> expected features at the end of the sprint, but I'm sure this isn't

Clearly the sample burn up chart we depicted doesn't show an ideal,
smoothly running project. In fact I agree with John: a chart that looked
like this one would probably indicate feature creep that had to be
reined in.
(It's nice that this feature creep could be distinguished from a change
in velocity--can a burn down chart reveal such distinctions?)
We were trying to show a clear case where scope negotiation was required
to meet a target. Unfortunately we show a project that was almost never
on track to deliver its planned features on the target date, but kept
adding features anyway.
It would have been clearer had we not confused the sample chart by
showing such a troubled project.
A simpler case would be a project that didn't add features, but whose
progress pointed out that the target wouldn't be hit--leading to a
planned feature reduction (as a mid-course correction, not last minute).
A more interesting case would show stronger than expected progress,
justifying additional features because the team projects they can
accommodate a more ambitious plan. Then indicate a sharp decline in
velocity with a note on the date that 4 developers left the
team--prompting a reduction in scope to keep the project on track.
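That scenario could be captured as burn up chart data along these lines. All numbers here are invented for illustration; the point is that the scope line and the progress line move independently:

```python
# Illustrative burn up data: planned scope rises when features are
# added, then drops when scope is cut after 4 developers leave, while
# the progress slope (velocity) falls at the same point. On a burn up
# chart these show as separate changes, keeping scope negotiation
# distinguishable from velocity changes.

weeks         = [0, 1, 2, 3, 4, 5, 6]
planned_scope = [80, 80, 100, 100, 100, 85, 85]  # points; +20 added, -15 cut
completed     = [0, 10, 20, 30, 35, 38, 41]      # slope drops after week 3

for week, scope, done in zip(weeks, planned_scope, completed):
    print(f"week {week}: {done:3d} / {scope} points")
```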
> I think it is fine to assume that it is "independent
> thinking". This is a good thing because it confirms
> that at least 2 people can reach the same conclusions
> and can validate their experiences and explain
> the world the same way.

Agreed!
> (You could always
> ask the question the other way: Is the stuff
> from "Growing Software" coming from somewhere else
> since our stuff was published 5-7 years ago
> i.e. PLOP3 proceedings, PLOPD4 book, etc. I think
> it is safe to assume "independent thinking" because
> our industry is famous for not researching
> "previous art". In hard Science this would actually
> be an embarrassment.)

I think we struggle with researching "previous art" because most of the
leading agile thinkers are in the trenches, not in academia. This is why I was
excited when I saw the overlap between the Scrum book and "Growing
Software". I figured that both the Scrum folks and Roy had probably not had
the opportunity to find each other.
I look forward to the outcome of future collaborations between agile thinkers
who find complexity science applicable to software development.