Re: Metrics to report up - and the answer is ..
- .. and here is an excerpt from a Ken Schwaber book to support this approach: Chapter 7, "Project Reporting: Keeping Everything Visible," from the book Agile Project Management with Scrum by Ken Schwaber. Microsoft Press © 2004
Ruth correctly assumed that senior management didn't want to talk about process; it wanted to talk only about results. Introducing a new format for reporting project progress would require teaching management about Scrum. It would require getting the program management office to consider a whole new approach to managing projects. Senior management didn't care about Scrum. It cared about its investment in the project.
Ruth could have continued task-based reporting to senior management. If she had chosen to do so, she would have developed an anticipated task plan and fit each Sprint Backlog into it. She didn't have the time or inclination to do this, but she didn't want to change the reporting format, either. She correctly assessed that she could deliver a new message using the same reporting format, reporting progress on requirements and functionality rather than tasks. By fitting the Product Backlog into Microsoft Project, she was able to normalize the Product Backlog into a known format.
Translating Product Backlog to the Gantt report wasn't a very big effort. Ruth felt that it was certainly a much smaller effort than convincing the program management office that Scrum and Scrum reporting were acceptable. The only report that didn't readily fit was the Product Backlog Burndown report, which became an appendix to regular management reports. As management asked questions about the regular reports, Ruth was able to support her discussion with the Product Backlog Burndown reports. When management wanted to know the impact of the early release, Ruth was able to show it to management on the Burndown reports. Ruth was able to teach management how to manage a Scrum project without having to teach it a whole new vocabulary.
Scrum proves its value as projects succeed. However, it is a radically different approach from traditional project management, expecting change rather than fearing it. Adaptation is a normal part of the project rather than an exception. If these concepts are theoretically discussed, most people agree on the reasonableness of the approach. When these concepts are brought up in the context of a critical project, however, management is often extremely wary. Managers want to know why this approach is being suggested. They ask if the approach is risky, and whether allowing this much change isn't inviting disaster. In this case, Ruth showed the value of the new approach by putting her mouth where management's money was. She showed the benefits and value of Scrum to management without its knowing or caring about the concepts or theory of agile processes. All management knew was that something good was afoot. As the CEO of another company stated at a Sprint review, "I don't know what this Scrum is, but I like it." That's the way it should be.
--- In firstname.lastname@example.org, "karatasfamily" <Karatasfamily@...> wrote:
> Looks like the person who asked the question hasn't gotten an answer so far.
> Regardless of the method you use to deliver a product, Scrum or not, management wants to see a few things:
> - what are you doing,
> - when are you doing it,
> - are you on schedule,
> - are you on budget?
> So this is not really a Scrum question but a question of software management reporting. Therefore, I recommend thinking in terms of the points above, not in terms of Scrum. Depending on the project, you should show the major breakdowns of work, milestones, estimated dates, and how you are tracking against those estimates.
> We use iterative development sprints but do not roll that up to management. As far as they know, we are working on module X and we are Y days ahead of schedule so far.
> I hope this helps - Alan
> --- In email@example.com, "JackM" <jack@> wrote:
> > A couple of comments:
> > The metrics that are important to the team are cycle time and velocity. These numbers cannot be used to compare teams, but they can be used to compare one team's progress across multiple sprints.
> > One should not get too caught up in the metrics, however. What's important is that the team continues to deliver high-quality code in priority order, sprint after sprint after sprint.
> > Rather, measure customer or end-user satisfaction if that's possible.
> > Hope this helps
> > Jack
> > www.agilebuddy.com
> > blog.agilebuddy.com
> > twitter.com/agilebuddy
> > --- In firstname.lastname@example.org, "Charles Bradley - Scrum Coach CSM, PSM I" <chuck-lists2@> wrote:
> > >
> > > What metrics should we report up? Which ones shouldn't we report up?
> > >
> > > An example:
> > > A department has 3 dev teams that all report to one Senior Manager. That Senior
> > > Manager reports along with a few others to a VP.
> > >
> > > What metrics, if any, would you suggest to report from the team to the Sr. Mgr?
> > > What metrics, if any, would you suggest to report from the Sr. Mgr up to the VP?
> > >
> > > This is something I don't have a lot of experience in, so I'd appreciate any
> > > feedback.
> > >
> > > Charles
> > >
- I have come across this scenario very often (in almost 95% of products/projects): defects escape, and I have seen this irrespective of the methodologies (or practices) used. Humans can make mistakes (at every phase/activity) for various reasons, but we would like to keep improving ourselves continuously. We measure 'defects escaped' and take it seriously, meaning we get to the root cause and invest in the required training and expectation-setting.
We focus on design and code quality metrics, such as cyclomatic complexity, fan-in, fan-out, and inheritance depth, and we run code quality tools to check coding-standard compliance and other parameters (such as memory leaks). We report this to management as well to keep them in the loop. We also measure test-related metrics, such as the number of test cases (manual vs. automated), the first-time pass ratio, and the number of defects (open, fixed, closed, postponed).
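For readers unfamiliar with these code-quality metrics, here is a minimal sketch of how one of them, cyclomatic complexity, is commonly approximated (one plus the number of decision points in a function). This is an illustrative toy using Python's standard `ast` module, not the tooling the poster's team uses; the function names and node selection are my assumptions, and real tools (e.g. linters) count more cases.

```python
import ast

# Node types treated as decision points (a rough, commonly used
# approximation; real analyzers handle more constructs).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.With, ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> dict:
    """Return {function_name: complexity} for functions in `source`.

    Sketch only: branches inside nested functions are also counted
    toward the enclosing function.
    """
    tree = ast.parse(source)
    results = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            decisions = sum(isinstance(n, BRANCH_NODES)
                            for n in ast.walk(node))
            results[node.name] = 1 + decisions
    return results

example = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
# Two decision points (the if and the elif), so complexity is 3.
print(cyclomatic_complexity(example))  # {'classify': 3}
```

Fan-in/fan-out and inheritance depth can be gathered from the same AST walk by counting call sites and base classes, but in practice teams get all of these from an off-the-shelf analyzer rather than hand-rolled scripts.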
We do not focus much on velocity, thanks to this forum. We track the release burndown, the ratio of stories committed to stories achieved (to keep improving the team's commitment level), the number of demos accepted or rejected by the PO, the number of times the team got changes in the middle of a sprint (it is minimal but not zero yet; this helps in deciding sprint length and puts back-pressure on the PO), and a few more (customer satisfaction and employee satisfaction).
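The sprint-level counters described above reduce to simple arithmetic over per-sprint records. The sketch below shows one way to compute a commitment ratio and a release burndown series; the `Sprint` fields and the sample numbers are illustrative assumptions, not data from this thread.

```python
from dataclasses import dataclass

@dataclass
class Sprint:
    committed: int    # stories committed at sprint planning
    achieved: int     # stories accepted at the sprint review
    points_done: int  # story points completed (feeds the release burndown)

def commitment_ratio(sprints):
    """Fraction of committed stories actually achieved, across all sprints."""
    committed = sum(s.committed for s in sprints)
    achieved = sum(s.achieved for s in sprints)
    return achieved / committed if committed else 0.0

def release_burndown(total_points, sprints):
    """Remaining points after each sprint, i.e. the burndown chart series."""
    remaining = total_points
    series = []
    for s in sprints:
        remaining -= s.points_done
        series.append(remaining)
    return series

history = [Sprint(10, 8, 21), Sprint(9, 9, 25), Sprint(10, 9, 23)]
print(commitment_ratio(history))       # 26/29, about 0.897
print(release_burndown(120, history))  # [99, 74, 51]
```

A flat or rising burndown series is the early-warning signal such reports are meant to surface; the commitment ratio trending toward 1.0 is what "improving the team's commitment level" looks like in the numbers.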
For us, agile is a mix of Scrum and XP practices, hence we focus on both.
Hariprakash Agrawal (Hari),
Managing Director | Agile Coach | http://opcord.com | http://www.linkedin.com/in/hariprakash
Software Testing (QTP, Selenium, JMeter, AutoIT, Sahi, NUnit, VB/Java Script, Manual) || Consulting / Trainings (Agile, CMMi, Six Sigma, Project Management, Software Testing)

On Mon, Dec 13, 2010 at 9:11 PM, Ron Jeffries <ronjeffries@...> wrote:
Hello, woynam. On Monday, December 13, 2010, at 10:12:25 AM, you wrote:
> Sorry, but I don't see how "defects" can escape. If you're
> writing automated tests for every story, an "escaped" defect means
> that you ignored the failing test. Is that really that common?
It is possible, and not all that unlikely, to miss a test or write one incorrectly. It would be possible, I suppose, to define Done as "passes whatever tests we wrote" but that strikes me as a bit too
So an escaped defect would be something we didn't like, that we agree we understood and somehow failed to get implemented and tested.
Sorry about your cow ... I didn't know she was sacred.