RE: [CMMi Process Improvement] Gage R&R

Message 1 of 7, Sep 24, 2013

Hi,

Measurement system analysis (MSA) is an experimental and mathematical method of determining how much of the variation within the measurement process contributes to overall process variability. A company needs to evaluate its measurement systems effectively, and the type of evaluation depends on the type of data collected: the analysis can be designed to deal with either discrete or continuous data.

·         For continuous data, process output is measured and re-measured to compare measurement variation to overall process variation. A Gage R&R study is performed to analyse the measurement system variation in continuous data.

·         Attribute Agreement Analysis is a type of Measurement Systems Analysis used when the characteristic is an attribute or discrete data. It assesses the results of decision making by human beings.

Applying MSA techniques in a software scenario mainly involves attribute agreement analysis. Attribute agreement analysis (AAA) produces key statistics that tell us whether the results are due to random chance or whether our judgment appears to be better (or worse) than random chance. It is used to assess the agreement between the ratings made by appraisers and the known standards. For performing attribute agreement analysis, we can use Minitab or an Excel add-in. Minitab displays the percentage of absolute agreement between each appraiser and the standard, and between all appraisers and the standard; other statistical programs offer similar analyses.

Kappa and Kendall’s statistics are used to evaluate the agreement. Cohen’s Kappa, used for binary and nominal data, measures the extent of attribute agreement and ranges from –1 to +1: +1 indicates perfect agreement, 0 indicates agreement no better than chance, and values below 0 indicate agreement worse than chance. AIAG recommends Kappa > 0.75 for a good measurement system and treats Kappa < 0.4 as a poor system. Kendall’s Coefficient of Concordance is determined instead when the data is ordinal, i.e., when there is an ordering of the attribute data categories; for acceptable MSA results it should be greater than 80%.

We have tried out attribute agreement analysis to evaluate variation in measurements such as review/testing defect classification and the mapping of non-conformances to Process Areas. When the results were not acceptable, corrective action was also triggered.
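As an illustration of the kappa statistic described above, here is a minimal sketch (plain Python, hypothetical data) of Cohen's kappa for two appraisers classifying the same set of review defects into nominal categories:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items (nominal data).
    Returns a value in [-1, 1]: +1 = perfect agreement, 0 = chance-level."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items where both raters agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two appraisers classifying 10 review defects (hypothetical categories).
a = ["logic", "logic", "doc", "ui", "logic", "doc", "ui", "ui", "logic", "doc"]
b = ["logic", "doc",   "doc", "ui", "logic", "doc", "ui", "logic", "logic", "doc"]
print(round(cohens_kappa(a, b), 3))  # prints 0.697
```

By the AIAG guideline quoted above, a value like 0.697 would sit in the grey zone: well above chance, but short of the 0.75 recommended for a good system.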

With Regards,

Balamurali

Group Manager SQA

Network Systems & Technologies (P) Ltd.
Periyar |Technopark Campus |Thiruvananthapuram  695 581 |India
Cell 91.98471 80100|
(Work 91.471.3068311 |Fax  91.471.270.0442

balamurali.l@... | http://www.nestsoftware.com

From: cmmi_process_improvement@yahoogroups.com [mailto:cmmi_process_improvement@yahoogroups.com] On Behalf Of prashant.swaroop@...
Sent: 23 September 2013 11:43
To: cmmi_process_improvement@yahoogroups.com
Subject: RE: [CMMi Process Improvement] Gage R&R

Hi,

Applying Gage R&R will be a challenge unless projects are categorized by type, technology, scope, etc. (rational grouping) and are under SPC.

thanks

Prashant

From: cmmi_process_improvement@yahoogroups.com [mailto:cmmi_process_improvement@yahoogroups.com] On Behalf Of Pat OToole
Sent: Saturday, September 21, 2013 5:09 PM
To: cmmi_process_improvement@yahoogroups.com
Subject: RE: [CMMi Process Improvement] Gage R&R

Prashant,

I am an HMLA, not an implementer of high maturity practices, so I will limit my comments to the appraisal context and leave implementation suggestions to those more qualified to provide such input.

I simply wanted to issue a model-based caution with respect to the metrics that you listed: Effort Variance, Schedule Variance, Defect Densities, Rework, Productivity, etc.

From a CMMI high maturity perspective, the objective is to statistically manage subprocess performance and to exploit the stability of your process execution such that you can build predictive models of attributes of future interest.  For EXAMPLE, by statistically managing certain key aspects of the requirements and design phases, we may be able to predict a reasonably “tight” range of defects to be found in system testing, the defect density of the fielded product, and customer satisfaction ratings.  (Or we may have OTHER future attributes of interest; I am merely providing some examples of what we’re trying to do with the high maturity practices.)

The metrics you listed (Effort Variance, Schedule Variance, etc.) can be captured at multiple levels and, depending on the level of granularity, would serve EITHER as input variables to a predictive model OR as output projections of that model.

For example, if you are talking about Effort Variance for the PROJECT (total effort variance from the start of the project to date), then this is probably NOT an attribute of SUBPROCESS performance as the total project effort variance would be an accumulation of effort variance across many many subprocesses.  Such a metric is more suitable as the OUTPUT projection of a predictive model.  I.e., given the effort variance and defect density of the business requirements elicitation subprocess, the model predicts that the effort variance for the requirements phase will be in the x1 – x2 range; and the effort variance for the entire project will be in the y1 – y2 range.

Note that some such predictive models forecast the effort variance for each future project phase (as well as the total project effort variance),  and then those phase-level projections are replaced by “actuals” and the predictive models rerun as the project continues to progress – generating new and better forecasts for the upcoming phases and the total project.
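The re-forecasting idea above can be sketched with a toy model: a least-squares fit of total-project effort variance against requirements-phase effort variance over historical projects, with a crude ±2-sigma band. All numbers are hypothetical; a real process performance model would use properly baselined data and a validated regression:

```python
# Hypothetical history: (requirements-phase EV %, final project EV %).
hist = [
    (2.0, 5.1), (4.5, 9.8), (1.0, 3.2), (6.0, 12.5),
    (3.0, 7.0), (5.0, 10.9), (2.5, 6.1), (4.0, 8.8),
]

# Ordinary least squares for y = a + b*x, computed from first principles.
n = len(hist)
mx = sum(x for x, _ in hist) / n
my = sum(y for _, y in hist) / n
b = sum((x - mx) * (y - my) for x, y in hist) / sum((x - mx) ** 2 for x, _ in hist)
a = my - b * mx

# Residual spread gives a crude prediction band (+/- 2 sigma).
resid = [y - (a + b * x) for x, y in hist]
sigma = (sum(r * r for r in resid) / (n - 2)) ** 0.5

def forecast(req_ev):
    """Predict a rough (y1, y2) range for total-project effort variance."""
    center = a + b * req_ev
    return center - 2 * sigma, center + 2 * sigma

# Rerun with actuals as each phase completes to tighten the forecast.
lo, hi = forecast(3.5)
print(f"projected total EV: {lo:.1f}% .. {hi:.1f}%")
```

As phase actuals replace the early estimates, the input to `forecast` changes and the band is recomputed, which is the rolling re-forecast behaviour described above.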

As in the example above, if you are speaking about the Effort Variance, Schedule Variance, Defect Density, Rework, and/or Productivity of a given SUBPROCESS (e.g., business requirement elicitation), then you are more aligned with model expectations as far as managing subprocess performance and the construction of process performance baselines and models.

Many folks, including many lead appraisers, had trouble understanding why the SEI (and now the CMMI Institute) took such a strong position against the use of Earned Value’s CPI and SPI as a high maturity practice.  Personally, I don’t think either Institute had an issue with an organization doing so if it derived value from that practice, but they did have a problem with calling this “statistical management of subprocess performance,” as project-level CPI and SPI are aggregated measures – they cut across many many subprocesses.

One strong note of caution: DO NOT allow the CMMI or anything else to stand in the way of doing what helps your projects succeed.  If the projects glean value from statistically managing project-level metrics, including those you listed, or CPI and SPI, or the number of pizza boxes in the trash come Monday morning – then by all means use the associated measures to enhance project success.  From a CMMI perspective, however, you should not expect to receive “credit” for statistically managing SUBPROCESS performance based on these metrics.

Hope this helps,

Pat

From: cmmi_process_improvement@yahoogroups.com [mailto:cmmi_process_improvement@yahoogroups.com] On Behalf Of Prashant Kanjilal
Sent: Friday, September 20, 2013 7:39 AM
To: cmmi_process_improvement@yahoogroups.com
Subject: [CMMi Process Improvement] Gage R&R

Dear Professionals

1. Some CMMI HMLAs would like to check institutionalization of Gage R&R in PAs such as M&A, OPP, QPM, etc.

2. The interpretation and meaningful usage of Gage R&R as part of MSA in software development and application support projects appears to be not straightforward and hence challenging. The reason is that every software development object is unique (not identical, as in manufacturing), and the measurements, including estimations, are

·         Mostly based on expert advice, and taken either manually or through tools, and

·         Measurements are often derived and not direct as in hardware/manufacturing scenarios.

3. In view of the above, may I request you to suggest how to study repeatability, reproducibility, accuracy, precision, etc. in measures (most of them ratios, like planned against actuals, or defects per KLOC/Function Points) such as

·         Effort Variance

·         Schedule Variance

·         Defect Densities

·         Rework

·         Productivity etc.

Note: You may use your own formula for the above measures (it may vary from org to org) or any other measure.

4. I request you to give your suggestions/views on the points given in Para 3 above.

Thanks & Regards

Prashant K

91-9676855151


______________________________________________________________________
Disclaimer:This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding

This message is for the designated recipient only and may contain privileged, proprietary, or otherwise confidential information. If you have received it in error, please notify the sender immediately and delete the original. Any other use of the e-mail by you is prohibited.

Where allowed by local law, electronic communications with Accenture and its affiliates, including e-mail and instant messaging (including content), may be scanned by our systems for the purposes of information security and assessment of internal compliance with Accenture policy.

______________________________________________________________________________________

www.accenture.com

***** Confidentiality Statement/Disclaimer *****

This message and any attachments is intended for the sole use of the intended recipient. It may contain confidential information. Any unauthorized use, dissemination or modification is strictly prohibited. If you are not the intended recipient, please notify the sender immediately then delete it from all your systems, and do not copy, use or print. Internet communications are not secure and it is the responsibility of the recipient to make sure that it is virus/malicious code exempt.
The company/sender cannot be responsible for any unauthorized alterations or modifications made to the contents. If you require any form of confirmation of the contents, please contact the company/sender. The company/sender is not liable for any errors or omissions in the content of this message.
Message 2 of 7, Sep 25, 2013

Bala,

You have hit the point right on target! Very good information shared with all of us. Thanks.

thanks

Prashant - CSQA, ITIL

AM, CIO India CI Team (Delivery Excellence Team)

“Doing your best is not good enough. You have to know what to do. Then do your best” - W. Edwards Deming


Message 3 of 7, Sep 27, 2013

Dear Friends

Please refer to my initial mail and subsequent mails by practitioners, on the subject.

I am thankful to you all for providing different but correct inputs.

However, I am looking for something else.

The basic premise of MSA is to minimize measurement errors, since whenever we measure something we make errors. These happen due to factors like tool variation, individuals’ skill/experience levels, environmental conditions, etc. As a result, the concepts of accuracy, precision, repeatability, reproducibility, least count, etc. have emerged. So, in order to minimize this error, tools are calibrated, environmental conditions are controlled, and measurements are made periodically by more than one individual. In the manufacturing scenario, practicing the above activities is somewhat easier because measurements are direct (in most cases) and the specifications of the objects measured do not change often.

In the software scenario, the case is somewhat different because 1) each development object is unique in size, functionality, and complexity, 2) measures are often derived and not direct, and 3) measurements are mostly in the form of estimations/judgments.

In view of above, let us consider the following examples of metrics

For each development object/enhancement →

Case 1   Effort Variance: ((Actual Effort – Planned Effort) / Planned Effort) * 100

1) Planned effort (Person Hr.) is determined by either expert judgment or a tool.

2) Actual effort is measured by recording the time taken to complete the task, either manually or by a tool.

Possibility of Error in this case → A) Error in estimation of software objects, B) Error in measurement of time

Question:

1) How could both activities practically be measured (in time) to understand repeatability (same object measured more than once by the same operator) and reproducibility (same object measured by more than one operator) in the process?

2) How could the variability of a software tool be checked, if one is used?
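One common way to attack question 1 for a continuous measure such as task time is the classic average-and-range Gage R&R study: several operators each measure the same tasks more than once, and repeatability and reproducibility are separated out from the observed ranges. Here is a minimal sketch with hypothetical timings; the K constants are the standard AIAG 1/d2* factors for 2 trials, 2 operators, and 5 parts:

```python
# Average-and-range Gage R&R sketch: 2 operators each timing the same
# 5 tasks twice (all numbers hypothetical, in person-hours).
K1, K2, K3 = 0.8862, 0.7071, 0.4030  # AIAG factors: 2 trials, 2 ops, 5 parts

data = {  # operator -> list of [trial1, trial2] per task
    "op1": [[4.1, 4.3], [6.0, 5.8], [3.2, 3.3], [7.9, 8.2], [5.0, 5.1]],
    "op2": [[4.4, 4.2], [6.3, 6.1], [3.5, 3.4], [8.4, 8.1], [5.3, 5.2]],
}
n_parts, n_trials, n_ops = 5, 2, len(data)

# Repeatability (equipment variation): mean range across repeated trials.
ranges = [max(t) - min(t) for trials in data.values() for t in trials]
r_bar = sum(ranges) / len(ranges)
EV = r_bar * K1

# Reproducibility (appraiser variation): range of operator means,
# corrected for the repeatability already contained in those means.
op_means = [sum(sum(t) for t in trials) / (n_parts * n_trials)
            for trials in data.values()]
x_diff = max(op_means) - min(op_means)
AV = max((x_diff * K2) ** 2 - EV ** 2 / (n_parts * n_trials), 0.0) ** 0.5

GRR = (EV ** 2 + AV ** 2) ** 0.5

# Part (task) variation from the range of per-task means.
task_means = [sum(data[op][i][j] for op in data for j in range(n_trials)) /
              (n_ops * n_trials) for i in range(n_parts)]
PV = (max(task_means) - min(task_means)) * K3
TV = (GRR ** 2 + PV ** 2) ** 0.5

print(f"EV={EV:.3f}  AV={AV:.3f}  %GRR={100 * GRR / TV:.1f}%")
```

By the usual AIAG rule of thumb, %GRR under 10% of total variation indicates an acceptable measurement system, 10–30% is marginal, and over 30% is unacceptable. The practical difficulty in software, as the questions above note, is getting the same "object" measured repeatedly at all.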

Case 2 Defect Containment Effectiveness (Design Phase): (No. of defects detected and fixed in Design phase) / ((No. of defects attributed to Design) + (No. of Defects injected into design phase)) * 100
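The Case 2 formula is a straight ratio and can be computed directly; how the denominator terms are counted varies from organization to organization, and the counts below are hypothetical:

```python
def containment_effectiveness(detected_and_fixed, attributed, injected):
    """Defect Containment Effectiveness (%) per the Case 2 formula above."""
    return 100.0 * detected_and_fixed / (attributed + injected)

# Hypothetical counts: 40 defects detected and fixed in the design phase,
# 45 defects attributed to design overall, 5 injected into the design phase.
print(containment_effectiveness(40, 45, 5))  # prints 80.0
```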

Possibility of Error in this case → A) Error in finding defects in a particular phase, B) Error in attributing a defect to a particular phase, C) Error in finding defects injected in a particular phase.

Defects are found and corrected mainly through the knowledge, skill, and process ability of practitioners. And practitioners do make mistakes, which is why effectiveness is often less than 100%.

Question:

1) How could repeatability and reproducibility practically be brought into the above process?

2) How could the variability of a software tool be checked, if one is used?

I have taken two examples to bring out the real issues. The formula used could be different in your organization. Also, one can consider many more metrics which could be used to gain “insight” into the process and its capability, and for all of them Gage R&R can be institutionalized.

Request you to kindly suggest how one can “practically” practice Gage R&R in the above-mentioned scenarios (and additional scenarios, if possible)

·         in SDLC activities such as  reviews, coding, code walk through, testing etc.

·         in tools and

·         limitations of applicability if any

Thanks & Regards

Prashant K

91-9676855151


Hope this helps,

Pat

From: cmmi_process_improvement@yahoogroups.com [mailto:cmmi_process_improvement@yahoogroups.com] On Behalf Of Prashant Kanjilal
Sent: Friday, September 20, 2013 7:39 AM
To: cmmi_process_improvement@yahoogroups.com
Subject: [CMMi Process Improvement] Gage R&R

Dear Professionals

1. Some CMMI HMLAs would like to check institutionalization of Gage R&R in PAs such as M&A, OPP, QPM, etc.

2. The interpretation and meaningful usage of Gage R&R as part of MSA in software development and application support projects is not straightforward, and hence challenging. The reason is that every software development object is unique (not identical, as in manufacturing) and the measurements, including estimations, are:

·         mostly based on expert judgment, made manually or through tools, and

·         often derived, not direct as in hardware/manufacturing scenarios.

3. In view of the above, may I request you to suggest how to study repeatability, reproducibility, accuracy, precision, etc. in measures (most of them ratios, like planned against actuals, defects per KLOC/function points, etc.) such as

·         Effort Variance

·         Schedule Variance

·         Defect Densities

·         Rework

·         Productivity etc.

Note: You may use your own formula for the above measures (it may vary from org to org), or any other measure.

4. Request you to give your suggestions/views on the points given in Para 3 above.

Thanks & Regards

Prashant K

91-9676855151

Message 4 of 7, Nov 7, 2013

Dear Friends

Any inputs on my mail below?

Thanks & Regards

Prashant K

91-9676855151

From: Prashant Kanjilal
Sent: Friday, September 27, 2013 5:24 PM
To: cmmi_process_improvement@yahoogroups.com
Subject: RE: [CMMi Process Improvement] Gage R&R

Dear Friends

Please refer to my initial mail and subsequent mails by practitioners, on the subject.

I am thankful to you all for providing different but correct inputs.

However, I am looking for something else.

The basic premise of MSA is to minimize measurement error, since whenever we measure something, we make errors. These happen due to factors like tool variation, individuals' skill/experience levels, environmental conditions, etc. As a result, the concepts of accuracy, precision, repeatability, reproducibility, least count, etc. have emerged. So, in order to minimize this error, calibration of the tool is done, environmental conditions are controlled, and measurements are made periodically by more than one individual. In the manufacturing scenario, practicing the above activities is somewhat easier because measurements are direct (in most cases) and the objects measured do not change in specification often.

In the software scenario, the case is somewhat different because 1) each development object is unique in size, functionality and complexity, 2) measures are often derived and not direct, and 3) measurements are mostly in the form of estimations/judgments.

In view of the above, let us consider the following examples of metrics.

For each development object/enhancement →

Case 1. Effort Variance = ((Actual Effort – Planned Effort) / Planned Effort) * 100

1) Planned effort (Person Hr.) is determined by either expert judgment or a tool.

2) Actual effort is measured by recording the time to complete the task, either manually or by a tool.

Possibility of error in this case → A) error in estimation of software objects, B) error in measurement of time.

Question:

1) How, practically, could both activities be measured (in time) to understand repeatability (same object measured more than once by the same operator) and reproducibility (same object measured by more than one operator) in the process?

2) How could the variability of a software tool, if used, be checked?
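One way to approach question 1 is a small estimation study: several estimators each size the same, already-completed work items more than once, and the spread is split into a within-estimator (repeatability) and a between-estimator (reproducibility) component. The sketch below uses invented numbers and a crude variance split, not the full ANOVA-based Gage R&R, but it is enough to see whether between-estimator spread dwarfs within-estimator spread:

```python
from statistics import mean, pvariance

# Invented study: three estimators each size the same two completed work
# items twice (person-hours), with the actuals hidden from them.
# estimates[estimator][item] = [trial 1, trial 2]
estimates = {
    "E1": {"itemA": [40, 44], "itemB": [70, 66]},
    "E2": {"itemA": [50, 48], "itemB": [80, 78]},
    "E3": {"itemA": [42, 46], "itemB": [68, 72]},
}

# Repeatability: average within-estimator, within-item variance
# (the same person re-estimating the same object).
within = [pvariance(trials)
          for per_item in estimates.values()
          for trials in per_item.values()]
repeatability_var = mean(within)

# Reproducibility: variance between the estimators' mean estimates of
# the same item, averaged over items (different people, same object).
items = ["itemA", "itemB"]
between = [pvariance([mean(estimates[e][item]) for e in estimates])
           for item in items]
reproducibility_var = mean(between)
```

In this made-up data, the between-estimator component is several times the within-estimator one, which is exactly the signal that the "measurement system" (here, the estimators) rather than the work items drives the variation.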

Case 2. Defect Containment Effectiveness (Design Phase) = (No. of defects detected and fixed in the design phase) / ((No. of defects attributed to design) + (No. of defects injected in the design phase)) * 100

Possibility of error in this case → A) error in finding defects in a particular phase, B) error in attributing a defect to a particular phase, C) error in finding defects injected in a particular phase.

Defects are found and corrected mainly through the knowledge, skill and process ability of practitioners, and practitioners do make mistakes, which is why effectiveness is often less than 100%.

Question:

1) How, practically, could repeatability and reproducibility be brought into the above process?

2) How could the variability of a software tool, if used, be checked?
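To make attribution error (B above) concrete, here is a toy calculation, with all counts invented, showing how two analysts' differing phase attributions move the DCE value even though the underlying defect set is identical:

```python
def dce(found_and_fixed_in_design, attributed_to_design, injected_in_design):
    """Defect Containment Effectiveness per the Case 2 formula above."""
    return found_and_fixed_in_design / (
        attributed_to_design + injected_in_design) * 100

# Invented counts: same release, same defect reports, but phase
# attribution done independently by two analysts.  Analyst 1 attributes
# 4 escaped defects to design; Analyst 2, re-reading the same reports,
# attributes 7.  The attribution disagreement alone moves the metric.
dce_analyst1 = dce(18, 4, 20)   # 18 / 24 * 100 = 75.0
dce_analyst2 = dce(18, 7, 20)   # 18 / 27 * 100, about 66.7
```

Having two analysts independently attribute the same defect reports, and comparing the resulting metric values, is itself a simple reproducibility check on this measurement system.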

I have taken two examples to bring out real issues. The formulas used could be different in your organization. Also, one can consider many more metrics which could be used to get "insight" into the process and its capability, and for all of them Gage R&R could be institutionalized.

Request you to kindly suggest how, "practically", one can practice Gage R&R in the above-mentioned scenarios (and additional scenarios, if possible):

·         in SDLC activities such as reviews, coding, code walkthroughs, testing, etc.,

·         in tools, and

·         limitations of applicability, if any.

Thanks & Regards

Prashant K

91-9676855151

From: cmmi_process_improvement@yahoogroups.com [mailto:cmmi_process_improvement@yahoogroups.com] On Behalf Of prashant.swaroop@...
Sent: Wednesday, September 25, 2013 8:53 PM
To: cmmi_process_improvement@yahoogroups.com
Subject: RE: [CMMi Process Improvement] Gage R&R

Bala,

You have hit the target! Very good information shared with all of us. Thanks.

thanks

Prashant - CSQA, ITIL

AM, CIO India CI Team (Delivery Excellence Team)

“Doing your best is not good enough. You have to know what to do. Then do your best” - W. Edwards Deming

From: cmmi_process_improvement@yahoogroups.com [mailto:cmmi_process_improvement@yahoogroups.com] On Behalf Of Balamurali L.
Sent: Tuesday, September 24, 2013 6:51 PM
To: cmmi_process_improvement@yahoogroups.com
Subject: RE: [CMMi Process Improvement] Gage R&R

Hi,

Measurement system analysis (MSA) is an experimental and mathematical method of determining how much the variation within the measurement process contributes to overall process variability. A company needs to evaluate its measurement systems effectively, and the type of evaluation depends on the type of data collected. The measurement system analysis can be set up to deal with either discrete or continuous data.

·         For continuous data, process output data is measured and re-measured to compare measurement variation to overall process variation. Gage R&R should be performed for analysing the measurement system variation in continuous data.

·         Attribute Agreement Analysis is a type of Measurement Systems Analysis used when the characteristic is an attribute or discrete data. Attribute agreement assesses the results of decision making by human beings.

Applying MSA techniques in the software scenario mainly involves attribute agreement analysis. Attribute agreement analysis (AAA) produces key statistics that tell us whether the results are due to random chance or whether our judgment appears to be better (or worse) than random chance. It is used to assess the agreement between the ratings made by appraisers and known standards. For attribute agreement analysis we can use Minitab or some Excel add-ins. Minitab displays the percent of absolute agreement between each appraiser and the standard, and the percent of absolute agreement between all appraisers and the standard; other statistical programs offer similar analyses.

Kappa and Kendall's correlation statistics are used to evaluate the agreement. Kappa statistics should be determined for binary and nominal data; the measure used for the extent of attribute agreement is Cohen's Kappa, which can range from -1 to +1. A value of +1 shows perfect agreement, 0 shows agreement no better than chance, and a Kappa below 0 means agreement worse than chance. AIAG recommends Kappa > 0.75 for a good system and regards < 0.4 as a poor system. Kendall's Coefficient of Concordance should be determined if the data is ordinal, i.e., when there is a natural ordering of the attribute data categories; for acceptable MSA results it should be greater than 80%.

We have tried out attribute agreement analysis to evaluate measurement variation in areas such as review/testing defect classification and the mapping of non-conformances to Process Areas. When the results were not acceptable, corrective action was also triggered.
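For those without Minitab, Cohen's Kappa for two appraisers is simple enough to compute directly. A minimal sketch, where the defect categories and the ratings are invented purely for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's Kappa for two appraisers rating the same items (nominal data)."""
    n = len(rater_a)
    # Observed agreement: fraction of items on which both appraisers agree.
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each appraiser's marginal category frequencies.
    fa, fb = Counter(rater_a), Counter(rater_b)
    pe = sum(fa[c] * fb[c] for c in set(fa) | set(fb)) / (n * n)
    return (po - pe) / (1 - pe)

# Invented ratings: two appraisers classify ten review defects as
# Logic (L), Standards (S) or Documentation (D).
appraiser1 = ["L", "L", "S", "D", "L", "S", "S", "D", "L", "L"]
appraiser2 = ["L", "L", "S", "D", "S", "S", "S", "D", "L", "D"]

kappa = cohens_kappa(appraiser1, appraiser2)
# kappa comes out around 0.70 here: above the 0.4 "poor system" cutoff
# but short of the 0.75 AIAG recommends for a good system.
```

The same two-rater comparison can be run against a known standard instead of a second appraiser, which is the appraiser-vs-standard agreement Minitab reports.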

With Regards,

Balamurali

Group Manager SQA

Network Systems & Technologies (P) Ltd.
Periyar | Technopark Campus | Thiruvananthapuram 695 581 | India
Cell 91.98471 80100|
Work 91.471.3068311 | Fax 91.471.270.0442

balamurali.l@... | http://www.nestsoftware.com
