RE: [CMMi Process Improvement] Gage R&R
- Sep 22, 2013
Applying Gage R&R will be a challenge unless projects are categorized by type, technology, scope, etc. (rational grouping) and are under SPC.
I am an HMLA, not an implementer of high maturity practices, so I will limit my comments to the appraisal context and leave implementation suggestions to those more qualified to provide such input.
I simply wanted to issue a model-based caution with respect to the metrics that you listed: Effort Variance, Schedule Variance, Defect Densities, Rework, Productivity, etc.
From a CMMI high maturity perspective, the objective is to statistically manage subprocess performance and to exploit the stability of your process execution such that you can build predictive models of attributes of future interest. For EXAMPLE, by statistically managing certain key aspects of the requirements and design phases, we may be able to predict a reasonably “tight” range of defects to be found in system testing, the defect density of the fielded product, and customer satisfaction ratings. (Or we may have OTHER future attributes of interest; I am merely providing some examples of what we’re trying to do with the high maturity practices.)
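As an illustration of what “statistically managing subprocess performance” can look like in practice, here is a minimal sketch of an XmR (individuals and moving-range) chart for one subprocess attribute. The subprocess, the data, and the choice of defect density as the charted attribute are my own hypothetical assumptions, not anything from the original discussion:

```python
# Hypothetical sketch: statistically managing a SUBPROCESS attribute
# (defect density from successive peer reviews) with an XmR chart.
import statistics

# Defect density (defects per page) from ten consecutive peer reviews
# of the same subprocess -- made-up illustrative data.
densities = [0.42, 0.55, 0.48, 0.61, 0.50, 0.44, 0.58, 0.52, 0.47, 0.60]

mean = statistics.mean(densities)
moving_ranges = [abs(b - a) for a, b in zip(densities, densities[1:])]
mr_bar = statistics.mean(moving_ranges)

# Standard XmR limits: mean +/- 2.66 * average moving range
# (2.66 is the usual individuals-chart constant).
ucl = mean + 2.66 * mr_bar
lcl = max(0.0, mean - 2.66 * mr_bar)

out_of_control = [x for x in densities if x > ucl or x < lcl]
print(f"mean={mean:.3f}  UCL={ucl:.3f}  LCL={lcl:.3f}")
print("subprocess stable" if not out_of_control else f"signals: {out_of_control}")
```

Only once the subprocess is stable in this sense does it make sense to use its behavior as an input to a predictive model.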
The metrics you listed (Effort Variance, Schedule Variance, etc.) can be captured at multiple levels and, depending on the level of granularity, would serve EITHER as input variables to a predictive model OR as output projections of said model.
For example, if you are talking about Effort Variance for the PROJECT (total effort variance from the start of the project to date), then this is probably NOT an attribute of SUBPROCESS performance, as the total project effort variance is an accumulation of effort variance across many subprocesses. Such a metric is more suitable as the OUTPUT projection of a predictive model. That is, given the effort variance and defect density of the business requirements elicitation subprocess, the model predicts that the effort variance for the requirements phase will be in the x1 – x2 range, and the effort variance for the entire project will be in the y1 – y2 range.
Note that some such predictive models forecast the effort variance for each future project phase (as well as the total project effort variance), and then those phase-level projections are replaced by “actuals” and the predictive models rerun as the project continues to progress – generating new and better forecasts for the upcoming phases and the total project.
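A toy version of such a model can be sketched as follows. The single-predictor regression, the historical data, and the rough “plus or minus two residual standard errors” prediction range are all simplifying assumptions for illustration; a real process performance model would be calibrated on the organization’s own baselines:

```python
# Hypothetical sketch: project total effort variance (y, %) from the
# effort variance of an early subprocess (x, %), with a rough range.
import math
import statistics

# Made-up historical projects: (requirements-elicitation effort
# variance %, total project effort variance %).
history = [(3.0, 5.1), (6.5, 9.8), (2.1, 4.0), (8.0, 12.5),
           (4.4, 6.9), (5.5, 8.4), (7.2, 11.0), (3.8, 6.2)]
xs, ys = zip(*history)

# Ordinary least squares fit, y = intercept + slope * x.
x_bar, y_bar = statistics.mean(xs), statistics.mean(ys)
sxx = sum((x - x_bar) ** 2 for x in xs)
slope = sum((x - x_bar) * (y - y_bar) for x, y in history) / sxx
intercept = y_bar - slope * x_bar

def pred(x):
    return intercept + slope * x

# Residual standard error as a crude width for the prediction range.
residuals = [y - pred(x) for x, y in history]
s = math.sqrt(sum(r * r for r in residuals) / (len(history) - 2))

x_new = 5.0                      # subprocess actual on the current project
y_hat = pred(x_new)
y1, y2 = y_hat - 2 * s, y_hat + 2 * s   # rough y1 - y2 projection
print(f"projected total effort variance: {y1:.1f}% .. {y2:.1f}%")
```

As phase actuals replace the forecasts, the fit is simply rerun with the updated inputs, which is the “rerun as the project progresses” behavior described above.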
As in the example above, if you are speaking about the Effort Variance, Schedule Variance, Defect Density, Rework, and/or Productivity of a given SUBPROCESS (e.g., business requirement elicitation), then you are more aligned with model expectations as far as managing subprocess performance and the construction of process performance baselines and models.
Many folks, including many lead appraisers, had trouble understanding why the SEI (and now the CMMI Institute) took such a strong position against the use of Earned Value’s CPI and SPI as a high maturity practice. Personally, I don’t think either institute had an issue with an organization doing so if it derived value from the practice, but they did have a problem with calling this statistical management of subprocess performance, because project-level CPI and SPI are aggregated measures that cut across many subprocesses.
One strong note of caution: DO NOT allow the CMMI or anything else to stand in the way of doing what helps your projects succeed. If the projects glean value from statistically managing project-level metrics, including those you listed, or CPI and SPI, or the number of pizza boxes in the trash come Monday morning, then by all means use the associated measures to enhance project success. From a CMMI perspective, however, you should not expect to receive “credit” for statistically managing SUBPROCESS performance based on these metrics.
Hope this helps,
1. Some CMMI HMLAs would like to check the institutionalization of Gage R&R in PAs such as M&A, OPP, QPM, etc.
2. The interpretation and meaningful use of Gage R&R as part of MSA in software development and application support projects does not appear to be straightforward, and is hence challenging. The reason: every software development object is unique (not identical, as in manufacturing), and the measurements, including estimations, are
· Mostly based on expert judgment, collected manually or through tools, and
· Often derived rather than direct, unlike in hardware/manufacturing scenarios.
3. In view of the above, may I request you to suggest how to study repeatability, reproducibility, accuracy, precision, etc. in measures (most of them ratios, such as planned against actuals, or defects per KLOC/function points) such as
· Effort Variance
· Schedule Variance
· Defect Densities
· Productivity etc.
Note: You may use your own formulas for the above measures (they may vary from org to org) or any other measure.
4. Request you to give your suggestions/views on the points in Para 3 above.
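For what it’s worth, one hypothetical way to frame such a study for software estimates: treat work items as the “parts” and estimators as the “appraisers,” have each estimator size each item more than once, and decompose the variance into repeatability, reproducibility, and part-to-part components. The sketch below is a simplified variance decomposition, not the full AIAG ANOVA method, and all estimators and numbers are invented:

```python
# Hypothetical sketch: a crossed Gage R&R-style study on effort
# estimates. Three estimators each size five work items twice.
import statistics

# estimates[estimator] -> per-work-item lists of repeated estimates (hours)
estimates = {
    "A": [[40, 42], [18, 20], [65, 63], [30, 31], [55, 54]],
    "B": [[44, 45], [22, 21], [60, 62], [33, 32], [58, 57]],
    "C": [[39, 41], [19, 18], [66, 64], [29, 30], [52, 53]],
}
n_parts = 5

# Repeatability: pooled within-cell variance (same estimator, same item).
cells = [cell for trials in estimates.values() for cell in trials]
repeatability_var = statistics.mean(statistics.pvariance(c) for c in cells)

# Reproducibility: variance of the estimator means (between-estimator effect).
estimator_means = [statistics.mean(x for cell in trials for x in cell)
                   for trials in estimates.values()]
reproducibility_var = statistics.pvariance(estimator_means)

# Part-to-part: variance of the work-item means across all estimators.
part_means = [statistics.mean(x for trials in estimates.values()
                              for x in trials[p]) for p in range(n_parts)]
part_var = statistics.pvariance(part_means)

grr_var = repeatability_var + reproducibility_var
total_var = grr_var + part_var
print(f"%GRR (share of total variance) = {100 * grr_var / total_var:.1f}%")
```

The ratio measures (effort variance, defect density, etc.) could in principle be studied the same way, by having multiple people independently derive the same measure from the same raw data; but as the reply above notes, whether this earns appraisal “credit” is a separate question from whether it is useful.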
Thanks & Regards