RE: [CMMi Process Improvement] Review efficiency as a metric
- Jul 31, 2013
Efficiency of a process, activity, or task is the ratio of resources actually consumed to resources expected or desired to be consumed in accomplishing that process, activity, or task. As such, efficiency means "doing things right." An efficient behavior, like an effective behavior, delivers results, but keeps the necessary effort to a minimum.
Effectiveness is about having impact. It is the ratio of achieved objectives to defined objectives. Effectiveness means "doing the right things." Effectiveness looks only at whether defined objectives are reached, not at how they are reached.
Review efficiency therefore relates the actual effort to detect defects in a review to the expected effort for detecting those defects.
The actual effort is all effort from preparing, conducting, and finalizing the review, e.g. reading a document, using a checklist, recording defects, and holding a review meeting. It does not include the effort of making corrections, nor project overheads. The expected effort is the planning baseline for doing these activities, derived from a realistic baseline such as industry benchmarks or previous reviews. As an order of magnitude, depending on code or document readability, one should find 0.3-1 critical defects per hour.
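The two quantities above can be put into a small sketch. The function names and the sample hours are illustrative (not from the post); only the 0.3-1 critical-defects-per-hour band is taken from the text:

```python
def review_efficiency(actual_hours: float, expected_hours: float) -> float:
    """Ratio of effort actually consumed to effort expected (per the
    definition above, a value <= 1.0 means the review stayed within
    its planning baseline)."""
    return actual_hours / expected_hours


def detection_rate(critical_defects: int, review_hours: float) -> float:
    """Critical defects found per review hour; as an order of magnitude,
    0.3-1 per hour is a plausible band depending on readability."""
    return critical_defects / review_hours


# Illustrative figures: 12 h spent against a 10 h baseline,
# 5 critical defects found in 8 h of reading and meetings.
print(review_efficiency(12, 10))   # 1.2 -> more effort than planned
print(detection_rate(5, 8))        # 0.625 -> inside the 0.3-1 band
```

Whether the baseline comes from previous reviews or from industry benchmarks, the point is that the denominator must be a realistic expectation, not a guess made after the fact.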
Review effectiveness relates the number of defects found to the total number of defects to be found, again both of comparable type and severity.
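As a minimal sketch of that ratio (the function name and the sample figures are illustrative; the total number of defects present is normally an estimate from size-based models or later field data):

```python
def review_effectiveness(defects_found: int, total_defects: int) -> float:
    """Share of the comparable defects actually present that the
    review caught."""
    if total_defects <= 0:
        raise ValueError("total defects must be positive")
    return defects_found / total_defects


# e.g. 8 of an estimated 10 comparable defects were found in the review
print(f"{review_effectiveness(8, 10):.0%}")  # prints "80%"
```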
A measurement baseline means comparing apples to apples. Often we see reviews that report all types of "defects", independent of severity and type. This is pointless for any forecasting and quality measurement. If, based on code size or document size, you expect a certain number of prio-1 defects in a document, then you need to budget resources for detecting them, and you need the right processes to detect them. If the review later finds none of them, you need to question its effectiveness first. Often a code review can be substituted, or at least made easier, by automatic approaches such as static code analysis tools. Many companies today place a suite of such tools in front of the otherwise costly reviews. Note, though, that even tools incur manual effort, such as analyzing the reports and filtering the results.
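The apples-to-apples point can be sketched as follows: separate the review log by severity before computing any ratio, rather than lumping all findings together. The record fields, the sample log, and the expected prio-1 count are all assumed for illustration:

```python
from collections import Counter

# Hypothetical review log; severity 1 = critical (field names assumed)
defects = [
    {"id": 1, "severity": 1},
    {"id": 2, "severity": 3},
    {"id": 3, "severity": 1},
    {"id": 4, "severity": 2},
]

# Count findings per severity instead of reporting one undifferentiated total
by_severity = Counter(d["severity"] for d in defects)

# Compare like with like: only prio-1 findings against the prio-1 expectation
found_prio1 = by_severity[1]
expected_prio1 = 3  # assumed figure from a size-based estimate
print(f"prio-1 effectiveness: {found_prio1 / expected_prio1:.0%}")
```

If the prio-1 ratio is far below expectation while the overall finding count looks healthy, that is exactly the situation where the review's effectiveness, not the estimate, should be questioned first.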
Concrete examples of such review efficiency for both code and documents, with many industry benchmarks, are summarized in the book:
Ebert, C., Dumke, R.: Software Measurement. Springer, Heidelberg, New York, 2007, ISBN 978-3-540-71648-8.
Could you please elaborate on review efficiency as a metric? How can we use it in the organization? What is the formula for it? Can review efficiency be used for code review or document review?
Thanks & Regards