
RE: [CMMi Process Improvement] Gage R&R

  • Balamurali L.
    Sep 24, 2013



      Measurement system analysis (MSA) is an experimental and mathematical method of determining how much the variation within the measurement process contributes to overall process variability. A company needs to evaluate its measurement systems effectively, and the type of evaluation depends on the type of data collected. Measurement system analysis can be applied to either discrete or continuous data.

      ·         For continuous data, process output data is measured and re-measured to compare measurement variation to overall process variation. Gage R&R is performed to analyse the measurement system variation in continuous data (see the sketch after this list).

      ·         Attribute Agreement Analysis is a type of Measurement Systems Analysis used when the characteristic is an attribute or discrete data. Attribute agreement assesses the results of decision making by human beings. 
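
      For illustration, here is a minimal, simplified sketch of the variance decomposition behind a Gage R&R study. The operators, parts, and measurement values are all hypothetical, and a real study would use the AIAG ANOVA or average-and-range method rather than this shortcut:

      # Simplified Gage R&R variance decomposition (illustrative only; a real
      # study would use the AIAG ANOVA or average-and-range method). All
      # measurements are hypothetical: 2 operators x 3 parts x 2 trials each.
      from statistics import mean, pvariance

      data = {
          "op1": {"p1": [10.1, 10.2], "p2": [12.0, 11.9], "p3": [9.5, 9.6]},
          "op2": {"p1": [10.4, 10.3], "p2": [12.2, 12.1], "p3": [9.8, 9.7]},
      }

      # Repeatability (equipment variation): pooled variance of repeat readings
      # taken by the same operator on the same part.
      cells = [trials for parts in data.values() for trials in parts.values()]
      repeatability = mean(pvariance(c) for c in cells)

      # Reproducibility (appraiser variation): variance of the operator means.
      op_means = [mean([x for p in parts.values() for x in p]) for parts in data.values()]
      reproducibility = pvariance(op_means)

      gage_var = repeatability + reproducibility

      # Part-to-part variation: variance of the part means across operators.
      part_ids = list(next(iter(data.values())))
      part_means = [mean([x for op in data.values() for x in op[pid]]) for pid in part_ids]
      part_var = pvariance(part_means)

      total_var = gage_var + part_var
      print(f"%GRR = {100 * (gage_var / total_var) ** 0.5:.1f}% of total std. dev.")
      # Common rule of thumb: <10% acceptable, 10-30% marginal, >30% unacceptable.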

      Applying MSA techniques in a software scenario mainly involves attribute agreement analysis. Attribute agreement analysis (AAA) produces key statistics that tell us whether the results are due to random chance, or whether our judgment appears to be better (or worse) than random chance. It is used to assess the agreement between the ratings made by appraisers and the known standards. For attribute agreement analysis, we can use Minitab or Excel add-ins; Minitab displays the percent of absolute agreement between each appraiser and the standard, and between all appraisers and the standard. Kappa and Kendall's correlation statistics are used to evaluate the agreement.

      Kappa is used for nominal data; the measure of the extent of attribute agreement is Cohen's kappa, which ranges from -1 to +1. A value of +1 shows perfect agreement, 0 shows agreement that is purely by chance, and a kappa below 0 means agreement is worse than chance. AIAG recommends kappa > 0.75 for a good system and treats kappa < 0.4 as a poor system; kappa is determined for binary and nominal data. Kendall's coefficient of concordance is determined instead when the data is ordinal, i.e. when there is an ordering of the attribute data categories; for acceptable MSA results it has to be greater than 0.8 (80%). We have tried out attribute agreement analysis to evaluate variation in measurements such as review/testing defect classification and the mapping of non-conformances to Process Areas. When the results were not acceptable, corrective actions were also triggered.
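
      To make the kappa computation concrete, here is a minimal sketch; the defect-type labels and ratings are hypothetical, and in practice Minitab or an Excel add-in would report this alongside the percent-agreement figures:

      # Minimal sketch of Cohen's kappa for one appraiser against the known
      # standard. The review-defect classifications below are hypothetical.
      # kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
      # and p_e is the agreement expected by chance.
      from collections import Counter

      def cohens_kappa(rater, standard):
          n = len(rater)
          assert n == len(standard) and n > 0
          # Observed proportion of agreement.
          p_o = sum(r == s for r, s in zip(rater, standard)) / n
          # Chance agreement from the marginal label frequencies.
          rc, sc = Counter(rater), Counter(standard)
          p_e = sum(rc[label] * sc[label] for label in rc) / (n * n)
          return (p_o - p_e) / (1 - p_e)

      # Hypothetical defect-type ratings (nominal data).
      standard  = ["logic", "ui", "logic", "data", "ui", "logic", "data", "ui"]
      appraiser = ["logic", "ui", "data", "data", "ui", "logic", "logic", "ui"]

      print(f"kappa = {cohens_kappa(appraiser, standard):.2f}")
      # AIAG guideline: > 0.75 indicates a good system, < 0.4 a poor one.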


      With Regards,


      Balamurali L. | Group Manager SQA

      Network Systems & Technologies (P) Ltd. 
      Periyar | Technopark Campus | Thiruvananthapuram 695 581 | India
      Cell 91.98471 80100 | Work 91.471.3068311 | Fax 91.471.270.0442

      balamurali.l@... | http://www.nestsoftware.com




      From: cmmi_process_improvement@yahoogroups.com [mailto:cmmi_process_improvement@yahoogroups.com] On Behalf Of prashant.swaroop@...
      Sent: 23 September 2013 11:43
      To: cmmi_process_improvement@yahoogroups.com
      Subject: RE: [CMMi Process Improvement] Gage R&R




      Applying Gage R&R will be a challenge unless projects are categorized based on their type, technology, scope, etc. (rational grouping) and brought under SPC.
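
      As a rough illustration of the "under SPC" part, assuming a rationally grouped set of projects, a minimal XmR (individuals and moving range) control-limit computation might look like this; the effort-variance values are hypothetical:

      # Minimal XmR (individuals and moving range) chart limits for a
      # rationally grouped set of projects. All data points are hypothetical.
      def xmr_limits(values):
          n = len(values)
          assert n >= 2
          center = sum(values) / n
          # Average moving range between consecutive observations.
          mr_bar = sum(abs(a - b) for a, b in zip(values, values[1:])) / (n - 1)
          # 2.66 = 3 / d2 for subgroups of size 2 (d2 = 1.128).
          return center - 2.66 * mr_bar, center, center + 2.66 * mr_bar

      # Hypothetical effort variance (%) for projects of one type/technology group.
      effort_variance = [4.2, 5.1, 3.8, 6.0, 4.7, 5.5, 4.1, 5.9]
      lcl, cl, ucl = xmr_limits(effort_variance)
      out = [v for v in effort_variance if not lcl <= v <= ucl]
      print(f"LCL={lcl:.2f}, CL={cl:.2f}, UCL={ucl:.2f}, out-of-control points: {out}")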





      From: cmmi_process_improvement@yahoogroups.com [mailto:cmmi_process_improvement@yahoogroups.com] On Behalf Of Pat OToole
      Sent: Saturday, September 21, 2013 5:09 PM
      To: cmmi_process_improvement@yahoogroups.com
      Subject: RE: [CMMi Process Improvement] Gage R&R






      I am an HMLA, not an implementer of high maturity practices, so I will limit my comments to the appraisal context and leave implementation suggestions to those more qualified to provide such input.


      I simply wanted to issue a model-based caution with respect to the metrics that you listed: Effort Variance, Schedule Variance, Defect Densities, Rework, Productivity, etc.


      From a CMMI high maturity perspective, the objective is to statistically manage subprocess performance and to exploit that stability of your process execution such that you can build predictive models of attributes of future interest.  For EXAMPLE, by statistically managing certain key aspects of the requirements and design phases, we may be able to predict a reasonably “tight” range of defects to be found in system testing, the defect density of the fielded product, and customer satisfaction ratings.  (Or we may have OTHER future attributes of interest; I am merely providing some examples of what we’re trying to do with the high maturity practices.)


      The metrics you listed (Effort Variance, Schedule Variance, etc.) can be captured at multiple levels and, depending on the level of granularity, would serve EITHER as input variables to a predictive model OR as output projections of said model.


      For example, if you are talking about Effort Variance for the PROJECT (total effort variance from the start of the project to date), then this is probably NOT an attribute of SUBPROCESS performance, as the total project effort variance would be an accumulation of effort variance across many, many subprocesses.  Such a metric is more suitable as the OUTPUT projection of a predictive model.  I.e., given the effort variance and defect density of the business requirements elicitation subprocess, the model predicts that the effort variance for the requirements phase will be in the x1 – x2 range, and the effort variance for the entire project will be in the y1 – y2 range.
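
      As one possible sketch of such a model (not necessarily how any given organization builds it), a simple linear regression with a prediction interval can turn a subprocess metric into the y1 – y2 range mentioned above. The baseline data is hypothetical, and the sketch assumes numpy and scipy are available:

      # Illustrative predictive-model sketch: regress project effort variance
      # on a subprocess metric from the historical baseline, then produce a
      # point prediction plus a prediction interval for a new project.
      import numpy as np
      from scipy import stats

      # Hypothetical baseline: (requirements-phase defect density,
      # total project effort variance %).
      x = np.array([1.2, 0.8, 1.5, 1.1, 0.9, 1.7, 1.3, 1.0])
      y = np.array([6.5, 4.2, 8.1, 6.0, 4.9, 9.4, 7.0, 5.3])

      n = len(x)
      slope, intercept, r, p, se = stats.linregress(x, y)
      y_hat = intercept + slope * x
      s = np.sqrt(np.sum((y - y_hat) ** 2) / (n - 2))  # residual std. error

      def predict_range(x0, confidence=0.90):
          """Point prediction and prediction interval for a new project."""
          t = stats.t.ppf((1 + confidence) / 2, df=n - 2)
          pred = intercept + slope * x0
          half = t * s * np.sqrt(1 + 1 / n + (x0 - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2))
          return pred - half, pred, pred + half

      y1, pred, y2 = predict_range(1.4)
      print(f"predicted effort variance: {pred:.1f}% (90% PI: {y1:.1f}% - {y2:.1f}%)")
      # As phases complete, replace projections with actuals and refit the model.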


      Note that some such predictive models forecast the effort variance for each future project phase (as well as the total project effort variance),  and then those phase-level projections are replaced by “actuals” and the predictive models rerun as the project continues to progress – generating new and better forecasts for the upcoming phases and the total project.


      As in the example above, if you are speaking about the Effort Variance, Schedule Variance, Defect Density, Rework, and/or Productivity of a given SUBPROCESS (e.g., business requirement elicitation), then you are more aligned with model expectations as far as managing subprocess performance and the construction of process performance baselines and models.


      Many folks, including many lead appraisers, had trouble understanding why the SEI (and now the CMMI Institute) took such a strong position against the use of Earned Value’s CPI and SPI as a high maturity practice.  Personally, I don’t think either Institute had an issue with an organization doing so if it derived value from that practice, but they did have problems with calling this statistical management of subprocess performance, as project-level CPI and SPI are aggregated measures – they cut across many, many subprocesses.


      One strong note of caution: DO NOT allow the CMMI or anything else to stand in the way of doing what helps your projects succeed.  If the projects glean value from statistically managing project-level metrics, including those you listed, or CPI and SPI, or the number of pizza boxes in the trash come Monday morning – then by all means use the associated measures to enhance project success.  From a CMMI perspective, however, you should not expect to receive “credit” for statistically managing SUBPROCESS performance based on these metrics.


      Hope this helps,




      From: cmmi_process_improvement@yahoogroups.com [mailto:cmmi_process_improvement@yahoogroups.com] On Behalf Of Prashant Kanjilal
      Sent: Friday, September 20, 2013 7:39 AM
      To: cmmi_process_improvement@yahoogroups.com
      Subject: [CMMi Process Improvement] Gage R&R

      Dear Professionals

      1. Some CMMI HMLAs would like to check the institutionalization of Gage R&R in PAs such as M&A, OPP, and QPM.

      2. The interpretation and meaningful usage of Gage R&R as part of MSA in software development and application support projects appears to be not straightforward, and hence challenging. The reason is that every software development object is unique (not identical, as in manufacturing) and the measurements, including estimations, are

      ·         mostly based on expert advice, taken either manually or through tools, and

      ·         often derived, not direct as in hardware/manufacturing scenarios.

      3. In view of the above, may I request you to suggest how to study repeatability, reproducibility, accuracy, precision, etc. in measures (most of them ratios, such as planned against actuals, defects per KLOC/Function Points, etc.) such as

      ·         Effort Variance

      ·         Schedule Variance

      ·         Defect Densities

      ·         Rework

      ·         Productivity etc.

      Note: You may use your own formula for the above measures (it may vary from org to org), or any other measure.

      4. Request you to give your suggestions/views on the points given in Para 3 above.

      Thanks & Regards

      Prashant K



