
Re: [CMMi Process Improvement] VER SG2 in an agile world

  • Patrick OToole
    Message 1 of 16, Jun 30, 2009
       
      Bruce,
       
      I believe that in my alternative universe - the one where the Super Code Generator eliminated the need for peer review of design, code, and test cases, for traceability of requirements => design => code => test, and for end-of-lifecycle verification - one would find that the Requirements Phase would be the primary focus of VER SG1, SG2, and SG3, and that traceability of customer requirements => product requirements would be absolutely critical to getting the requirements right.  Validation would take on a much more important role, as the customer could be provided with full system functionality as easily as we generate prototypes today (and hopefully even more easily!)
       
      In such a world, other practices would likely emerge to make system generation and maintenance more effective and efficient - and these would eventually find their way into CMMI v4.1.
       
      I agree that "alternative goals" is not provided for in the CMMI and the associated SCAMPI appraisal method - but then the Super Code Generator does not yet exist.  It's fairly safe to say that in the next 10-20 years, software development will undergo yet another paradigm shift that will obviate the need for some of the existing practices (and perhaps even some of the goals and/or process areas).
       
      If there comes a time when a CMMI goal becomes absolutely unnecessary, I suspect that the SEI would go into scramble mode to change the model or method to accommodate it.  However, since adoption of new methods and technologies is relatively slow in our industry (how old are agile methods and what percentage of organizations have adopted them?) this would most likely be handled in the standard evolution of the model.
       
      I would be the first to advise early adopters of such an emerging technology that the continuous representation would be the best way for them to go.  Rather than arguing for alternative goals or, worse yet, performing practices simply to satisfy the model, they would be best served to use the rest of the model to drive their process improvement project.
       
      Regards,
       
      Pat
       
       
       
      ----- Original Message -----
      Sent: Tuesday, June 30, 2009 3:44 PM
      Subject: Re: [CMMi Process Improvement] VER SG2 in an agile world

      Hi Pat,
      Your post included the following: "Where I disagree with Ed and some
      of the others is when they contend that the alternative approach has
      to be "at least as good" as the practices they are replacing - that
      is, you have to demonstrate that pair programming is at least as
      effective as peer reviews in detecting or preventing defects. What
      if a new method was only 75% as effective, but cost 10% as much to
      perform? Conversely, what if a new method was 50% MORE effective
      than peer reviews but cost 10x as much to perform? I'm not venturing
      into the answer to the above rhetorical questions, but they should
      help explain why I'm not in favor of an "equivalence" test. "
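The tradeoff in those rhetorical questions can be made concrete with a quick cost-per-defect sketch (all figures are hypothetical, chosen only to illustrate the arithmetic, not drawn from any study):

```python
# Hypothetical comparison of defect-removal methods, using Pat's two
# rhetorical scenarios. All numbers are illustrative assumptions.

def cost_per_defect(defects_found, total_cost):
    """Cost to remove one defect with a given method."""
    return total_cost / defects_found

# Baseline: a peer review that finds 100 defects at a cost of 100 units.
baseline = cost_per_defect(100, 100)       # 1.0 unit per defect

# Scenario 1: 75% as effective, 10% of the cost.
scenario1 = cost_per_defect(75, 10)        # ~0.13 units per defect

# Scenario 2: 150% as effective, 10x the cost.
scenario2 = cost_per_defect(150, 1000)     # ~6.67 units per defect

print(baseline, round(scenario1, 2), round(scenario2, 2))
```

On a pure cost-per-defect basis the cheaper method wins handily, yet it also leaves 25 more defects in the product than the baseline - which is exactly why the question resists a simple "equivalence" test.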

      I would like to clarify a point or two.

      I firmly believe in alternative practices; I just don't believe in
      alternative goals. VER SG 2 states: "Peer reviews are performed on
      selected work products". Hence, some form of "methodical
      examination" MUST be performed. My "arms-length" example was just
      one situation that could potentially meet this goal requirement.

      In initially stating my belief that it is important to evaluate the
      developer relationship and documentation in an agile environment if
      you are going to rate VER, I was speaking of determining whether this
      situation constituted some "equivalence to" a "methodical
      examination" of code, not whether it was somehow equivalent to a
      formal inspection at removing defects.

      And there we have it: companies are required to perform examinations (SG
      2), which are expected to result in analyzed data (SP 2.3/3.2) that
      presumably will be applied in some undescribed manner (there is no
      informative material on that) to meet a vaguely outlined intention
      (the presumed purpose of the PA).

      The CMMI doesn't explicitly call out defect identification and
      removal; VER is focused instead on determining that requirements
      are met. If the most important thing is that such activities remove
      defects, we should cause CMMI v1.3 to incorporate SW-CMM, PR Goal 2:
      "Defects in the (software) work products are identified and removed".

      Best Regards,
      Bruce
      www.alderonconsulting.com

      At 04:39 AM 6/30/2009, you wrote:

      >
      >Ed, et.al.,
      >
      >Imagine, if you will, that someone develops a software package that
      >inputs well-specified software requirements and outputs working
      >software code. It includes an interface facility that lets you
      >specify the software-to-software, software-to-hardware, and
      >software-to-human interfaces. It includes other facilities that
      >address all of the concerns that you might raise in the conversion
      >of English-based requirements to working software code (AND a
      >Spanish version is due to be released real soon!) It reduces the
      >software maintenance task to altering the requirements and
      >regenerating the code and documentation - which is also self-generated.
      >
      >Furthermore, the code works as specified each and every time! Test
      >though you will, you are NEVER able to discover a case where the
      >generated code varies from the requirements as specified. (The
      >specified requirements may be wrong, but the code and the
      >requirements are 100% aligned).
      >
      >If such a product actually existed, it would essentially eliminate
      >the need to perform some of the Technical Solution practices - those
      >dealing with design, interfaces, and perhaps, documentation. It
      >would also eliminate the need to do many of the Verification
      >activities - at least as applied to the non-existent design and the
      >perfectly aligned code. Traceability from requirements to design to
      >code would no longer be necessary.
      >
      >As such a miracle product emerged, the lead appraiser community
      >would need to align on the acceptance of this approach as
      >"alternative practice" for a host of specific practices - or
      >not. In 10 years, as this new miracle product became embedded as
      >the standard in software engineering, the model authors would find
      >themselves giving consideration to altering the CMMI model to
      >accommodate this new world order directly.
      >
      >Had the CMMI been around in the 1960s, there may very well have been
      >some practices dealing with the sequencing of punch cards, or the
      >security of programs on paper tape. There certainly would NOT have
      >been any practices around peer reviews - as that had not yet been
      >discovered as a "good practice" worthy of a CMMI goal.
      >
      >This is a rather long-winded way of saying that I agree with Ed -
      >that alternative practices may not be a one-for-one mapping from
      >"Prepare for Peer Reviews" to "Prepare for Pair Programming." I
      >think it is essential to really understand the desired outcome of
      >the practice/goal/process area and evaluate whether some alternative
      >approach is targeting the same desired outcome using a different paradigm.
      >
      >Where I disagree with Ed and some of the others is when they contend
      >that the alternative approach has to be "at least as good" as the
      >practices they are replacing - that is, you have to demonstrate that
      >pair programming is at least as effective as peer reviews in
      >detecting or preventing defects. What if a new method was only 75%
      >as effective, but cost 10% as much to perform? Conversely, what if
      >a new method was 50% MORE effective than peer reviews but cost 10x
      >as much to perform?
      >
      >I'm not venturing into the answer to the above rhetorical questions,
      >but they should help explain why I'm not in favor of an
      >"equivalence" test. Rather, I would tend to focus on the intent of
      >the model practices and evaluate whether the same intent is being
      >addressed (or obviated) in the alternative approach.
      >
      >Besides, why try to resolve the issue of whether pair programming is
      >as effective in eliminating or removing defects when there is such a
      >broad set of results from the various activities that organizations
      >call "peer reviews?" Against which "peer review" do we compare pair
      >programming - the formal Fagan Inspection, the "virtual" peer
      >review, or the "buddy" review?
      >
      >Hey, but I'm on vacation, so I'll let the rest of you gnaw on this
      >for a while...
      >
      >Regards,
      >
      >Pat
      >
      >
      >
      >----- Original Message -----
      >From: <mailto:edwardfwelleriii@msn.com>EDWARD F WELLER III
      >To:
      ><mailto:cmmi_process_improvement@yahoogroups.com>cmmi_process_improvement@yahoogroups.com
      >
      >Sent: Monday, June 29, 2009 3:38 PM
      >Subject: Re: [CMMi Process Improvement] VER SG2 in an agile world
      >
      >Bruce and others
      >
      >I think the point of an alternative practice, rather than a
      >modification of the practice, is that a different approach gives you
      >the equivalent result. Even though I am a strong proponent of
      >inspections, I am willing to accept that pair-wise coding, or team
      >development, that **measurably** achieves the same results as peer
      >reviews/inspections, would be an alternative practice. The goal of
      >peer reviews is to remove defects early and efficiently. If you can
      >demonstrate that the requirements, design, and code defects are at
      >or below the results achieved by peer reviews, then you have met the
      >intent of peer reviews, which then should be a satisfaction of the
      >intent of SG2.
      >
      >I recall that when Pat posed one of his ATLAS questions on
      >alternative practices, most of the examples offered were really
      >variations of the practice rather than alternatives. Take as
      >another example the Personal Software Process. With code, some are
      >able to remove 95+% of their defects in personal reviews - is there
      >then a need for peer reviews (probably product and customer dependent)?
      >
      >What I would debate is whether or not for requirements and design
      >you can skip the peer review which has the characteristics Bruce
      >named below - the deliberate focus on defect removal seems to just
      >work better than other types of review. For code, the effect of
      >paired coding has been shown to reduce defect rates to 1/100th
      >(Randy Jensen experiments and reports going back to the 80s) of the
      >individual rate.
      >
      >Part of this is the definition of an alternative practice - what
      >ATLAS showed is there is no consensus on what that means - another
      >area of the SCAMPI process that needs clarification via experience
      >rather than the MDD glossary definition:
      >The CMMI Glossary includes the following definition of "alternative
      >practice": "A practice that is a substitute for one or more generic
      >or specific practices contained in CMMI models that achieves an
      >equivalent effect toward satisfying the generic or specific goal
      >associated with model practices. Alternative practices are not
      >necessarily one-for-one replacements for the generic or specific
      >practices."
      >
      >
      >It would be useful to try to achieve a common understanding - any
      >other thoughts?
      >
      >Ed
      >----- Original Message -----
      >From: <mailto:brduncil@bellsouth.net>Bruce R. Duncil
      >To:
      ><mailto:cmmi_process_improvement@yahoogroups.com>cmmi_process_improvement@yahoogroups.com
      >
      >Sent: Monday, June 29, 2009 9:36 AM
      >Subject: Re: [CMMi Process Improvement] VER SG2 in an agile world
      >
      >Hi Winifred,
      >Since I no longer develop software, I don't consider myself an
      >agilist. However, I am increasingly doing process improvement with
      >clients claiming to be agile and leading appraisals of agile
      >implementations and will therefore offer you my perspective.
      >
      >I don't consider joint development of any work product to
      >automatically include - or to otherwise preclude - the need for peer
      >review or verification.
      >
      >The CMMI goal requirement to satisfy is that peer reviews are
      >performed on selected work products (SG 2). Informative material
      >directly supporting the meaning and implementation of that goal
      >states: "Peer reviews involve a methodical examination of work
      >products by the producers' peers to identify defects for removal and
      >to recommend other changes that are needed." Preparation (SP 2.1),
      >conduct of the review and recording of issues (SP 2.2), and analysis
      >of the data on preparation, conducting and results of the peer
      >reviews (SP 2.3) are the key expectations in meeting this goal.
      >Likewise, the informative material supporting SP 2.2 states: "When
      >issues arise during the peer review, they should be communicated to
      >the primary developer of the work product for correction." Finally,
      >the informative material supporting implementation of SP 3.2 (which
      >also references peer reviews - and SP 2.3) states: "Actual results
      >must be compared to established verification criteria to determine
      >acceptability. The results of the analysis are recorded as evidence
      >that verification was conducted." Peer reviews - in whatever form
      >they may take - must support the overall objective of verifying that
      >selected product requirements are met.
      >
      >I recommend you look hard at the developers' and designers'
      >relationship during development/design and their documentation (such
      >as in developers' notebooks) relative to the CMMI requirement(s),
      >expectations, and intent.
      >
      >In situations where two (or more) people are simply designing and
      >developing everything together and at the same time, and leaving
      >loose notes (at best), rarely is the CMMI effectively implemented. It
      >also tends to cause group-think, blinding those involved to issues or
      >defects and thus not providing meaningful product verification.
      >
      >If, on the other hand, the developers/designers maintained an
      >arms-length relationship, truly planning and reviewing each other's
      >work (in tandem, for example), then the "peer reviews" - while
      >relatively informal - may prove to be effective. Provided their
      >documentation includes an understandable record of their activities,
      >issues found and addressed, and results, as well as some analysis of
      >it by them or others, then you have a basis that may lead you to
      >conclude that the goal is satisfied.
      >
      >In conclusion, I view a project's ability to demonstrate that they
      >have data and have conducted analysis on it (SP 2.3) as necessary but
      >not sufficient evidence that peer reviews were in fact conducted (SP
      >2.2) and for the goal (SG 2) statement to be satisfied.
      >
      >I hope this is helpful to you.
      >Best Regards,
      >Bruce
      >www.alderonconsulting.com
      >
      >At 03:55 PM 6/28/2009, you wrote:
      >
      > >Hello.
      > >
      > >This question is directed to Jeff, Hillel and all other agilists out there.
      > >
      > >One of my clients claims that they use agile methods (SCRUM mainly)
      > >and develop their design documents collaboratively, therefore they
      > >don't need peer reviews in the traditional sense.
      > >
      > >I can see how collaborative development of "anything" could meet the
      > >expectations of VER SP 2.2. I'd "file it under" alternative
      > >implementation of VER SP2.2. I have suggested that they may want to
      > >schedule a more traditional peer review during a sprint if (for
      > >example) some key person, such as one of the main tech leads is
      > >missing during a collaborative design session. So far so good.
      > >
      > >My question is around VER SP 2.3: "Analyze data about preparation,
      > >conduct, and results of the peer reviews." What "alternative
      > >implementation" would you expect to see for this practice in an
      > >agile project?
      > >
      > >Thanks for all advice.
      > >
      > >regards
      > >Winifred
      > >
      > >
      >
      >

    • EDWARD F WELLER III
      Message 2 of 16, Jul 1, 2009
        Pat and Bruce
         
        We are overlooking some words in MDD Appendix C, which I quoted earlier:
         
        identifying which model practices appear to be implemented using an alternative
        practice, and analyzing whether or not the alternative practice does indeed achieve an
        effect equivalent to that achieved by the model practices toward satisfying the associated
        specific or generic goal
         
        I believe the operative words are "effect equivalent" and for peer reviews the intent is to remove defects early and cheaply - not to do peer reviews.
         
        It would be better phrased "satisfying the intent of the associated specific or generic goal", however, to remove all doubt. Should a CR be raised?
         
        Ed
        ----- Original Message -----
        Sent: Tuesday, June 30, 2009 9:26 PM
        Subject: Re: [CMMi Process Improvement] VER SG2 in an agile world

         
        >If, on the other hand, the developers/designer s maintained an
        >arms-length relationship, truly planning and reviewing each other's
        >work (in tandem, for example), then the "peer reviews" -while
        >relatively informal - may prove to be effective. Provided their
        >documentation includes an understandable record of their activities,
        >issues found and addressed, and results, as well as some analysis of
        >it by them or others, then you have a basis that may lead you to
        >conclude that the goal is satisfied.
        >
        >In conclusion, I view an project's ability to demonstrate they have
        >data and have conducted analysis on it (SP 2.3) as the necessary but
        >not sufficient evidence that peer reviews were in fact conducted (SP
        >2.2) and the goal (SG 2) statement to be satisfied.
        >
        >I hope this is helpful to you.
        >Best Regards,
        >Bruce
        >www.alderonconsult ing.com
        >
        >At 03:55 PM 6/28/2009, you wrote:
        >
        > >Hello.
        > >
        > >This question is directed to Jeff, Hillel and all other agilist out there.
        > >
        > >One of my clients claims that they use agile methods (SCRUM mainly)
        > >and develop their design documents collaboratively, therefore they
        > >don't need peer reviews in the traditional sense.
        > >
        > >I can see how collaborative development of "anything" could meet the
        > >expectations of VER SP 2.2. I'd "file it under" alternative
        > >implementation of VER SP2.2. I have suggested that they may want to
        > >schedule a more traditional peer review during a sprint if (for
        > >example) some key person, such as one of the main tech leads is
        > >missing during a collaborative design session. So far so good.
        > >
        > >My question around VER SP 2.3 "Analyze data about preparation,
        > >conduct, and results of the peer reviews." What "alternative
        > >implementation" would you expect to see for this practice in an
        > agile project?
        > >
        > >Thanks for all advice.
        > >
        > >regards
        > >Winifred
        > >
        > >
        >
        >

      • Patrick OToole
        Message 3 of 16 , Jul 1, 2009
           
          Ed,
           
          I see that my little trip into Fantasyland has spawned analysis of an interesting cosmic phenomenon.
           
          I believe you are suggesting that the goal could be rated "Satisfied" in a SCAMPI Class A appraisal if the INTENT of the goal were met, rather than the explicit goal statement itself, correct?
           
          As long as we are talking about MY perception of the intent, then I would tend to agree with you.  The bigger problem is that other lead appraisers may have differing views about the underlying intent of a particular goal - and they would adopt the position: "As long as we are talking about MY perception..."
           
          I'm not disagreeing with you - I'd just need more time to noodle this one through.  This would make for some real interesting (although probably theoretical) debate within the lead appraiser community.  Sounds like an ATLAS study to me!
           
          Regards,
           
          Pat
           
           
          ----- Original Message -----
          Sent: Wednesday, July 01, 2009 9:33 AM
          Subject: Re: [CMMi Process Improvement] VER SG2 in an agile world

          Pat and Bruce
           
          We are overlooking some words in the MDD Appendix C which I stated earlier -
           
          "identifying which model practices appear to be implemented using an alternative practice, and analyzing whether or not the alternative practice does indeed achieve an effect equivalent to that achieved by the model practices toward satisfying the associated specific or generic goal"
           
          I believe the operative words are "effect equivalent" and for peer reviews the intent is to remove defects early and cheaply - not to do peer reviews.
           
          It would be better phrased "satisfying the intent of the associated specific or generic goal", however, to remove all doubt. Should a CR be raised?
           
          Ed
          ----- Original Message -----
          Sent: Tuesday, June 30, 2009 9:26 PM
          Subject: Re: [CMMi Process Improvement] VER SG2 in an agile world

           
          Bruce,
           
          I believe that in my alternative universe - the one where the Super Code Generator eliminated the need for peer review of design, code, and test cases, for traceability of requirements => design => code => test, and for end-of-life-cycle verification - one would find that the Requirements Phase would be the primary focus of VER SG1, SG2, and SG3, and that traceability of customer requirements => product requirements would be absolutely critical, since it would be essential to get the requirements absolutely right.  Validation would take on a much more important role, as the customer could be provided with full system functionality as easily as we generate prototypes today (and hopefully even more easily!)
           
          In such a world, there would likely be other practices that emerge to make system generation and maintenance more effective and efficient - and these would eventually find their way into CMMI v4.1.
           
          I agree that "alternative goals" are not provided for in the CMMI and the associated SCAMPI appraisal method - but then the Super Code Generator does not yet exist.  It's fairly safe to say that in the next 10-20 years, software development will undergo yet another paradigm shift that will obviate the need for some of the existing practices (and perhaps even some of the goals and/or process areas).
           
          If there comes a time when a CMMI goal becomes absolutely unnecessary, I suspect that the SEI would go into scramble mode to change the model or method to accommodate it.  However, since adoption of new methods and technologies is relatively slow in our industry (how old are agile methods and what percentage of organizations have adopted them?) this would most likely be handled in the standard evolution of the model.
           
          I would be the first to tell early adopters of such an emerging technology that the continuous representation would be the best way for them to go.  Rather than arguing for alternative goals or, worse yet, performing practices simply to satisfy the model, they would be best served to use the rest of the model to drive their process improvement project.
           
          Regards,
           
          Pat
           
           
           
          ----- Original Message -----
          Sent: Tuesday, June 30, 2009 3:44 PM
          Subject: Re: [CMMi Process Improvement] VER SG2 in an agile world

          Hi Pat,
          Your post included the following: "Where I disagree with Ed and some
          of the others is when they contend that the alternative approach has
          to be "at least as good" as the practices they are replacing - that
          is, you have to demonstrate that pair programming is at least as
          effective as peer reviews in detecting or preventing defects. What
          if a new method was only 75% as effective, but cost 10% as much to
          perform? Conversely, what if a new method was 50% MORE effective
          than peer reviews but cost 10x as much to perform? I'm not venturing
          into the answer to the above rhetorical questions, but they should
          help explain why I'm not in favor of an "equivalence" test."
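The trade-off posed in the quoted rhetorical questions can be made concrete with a toy calculation. The figures below come straight from the questions themselves and are purely illustrative; peer review is normalized to an effectiveness and cost of 1.0:

```python
# Toy comparison of the hypothetical methods against a peer-review
# baseline. All numbers are illustrative, normalized to peer review = 1.0.
methods = {
    "peer review": {"effectiveness": 1.00, "cost": 1.0},
    "method A":    {"effectiveness": 0.75, "cost": 0.1},   # 75% as effective, 10% the cost
    "method B":    {"effectiveness": 1.50, "cost": 10.0},  # 50% more effective, 10x the cost
}

def defects_removed_per_unit_cost(m):
    """Crude cost-effectiveness: relative defects removed per unit of cost."""
    return m["effectiveness"] / m["cost"]

for name, m in methods.items():
    print(f"{name}: {defects_removed_per_unit_cost(m):.2f}")
```

On pure cost-effectiveness, method A dominates (7.5 vs. 1.0 for peer review) while method B trails badly (0.15) despite removing the most defects - which is exactly why a one-dimensional "equivalence" test is hard to defend.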

          I would like to clarify a point or two.

          I firmly believe in alternative practices; I just don't believe in
          alternative goals. VER SG 2 states: "Peer reviews are performed on
          selected work products". Hence, some form of "methodical
          examination" MUST be performed. My "arms-length" example was just
          one situation that could potentially meet this goal requirement.

          In initially stating my belief that it is important to evaluate the
          developer relationship and documentation in an agile environment if
          you are going to rate VER, I was speaking of determining whether this
          situation constituted some "equivalence to" a "methodical
          examination" of code, not whether it was somehow equivalent to a
          formal inspection at removing defects.

          And there we have it: companies are required to do examinations (SG
          2) which are expected to result in analyzing data (SP 2.3/3.2) that
          presumably will be applied in some undescribed manner (no
          informative material on that) to meet a vaguely outlined intention
          (the presumed purpose of the PA).

          The CMMI doesn't call out defect identification and removal so
          explicitly - VER is focused instead on determining that requirements
          are met. If the most important thing is that such activities remove
          defects, we should cause CMMI v1.3 to incorporate SW-CMM, PR Goal 2:
          "Defects in the (software) work products are identified and removed".

          Best Regards,
          Bruce
          www.alderonconsulting.com

          At 04:39 AM 6/30/2009, you wrote:

          >
          >Ed, et al.,
          >
          >Imagine, if you will, that someone develops a software package that
          >inputs well-specified software requirements and outputs working
          >software code. It includes an interface facility that lets you
          >specify the software-to-software, software-to-hardware, and
          >software-to-human interfaces. It includes other facilities that
          >address all of the concerns that you might raise in the conversion
          >of English-based requirements to working software code (AND a
          >Spanish version is due to be released real soon!) It reduces the
          >software maintenance task to altering the requirements and
          >regenerating the code and documentation - which is also self-generated.
          >
          >Furthermore, the code works as specified each and every time! Test
          >though you will, you are NEVER able to discover a case where the
          >generated code varies from the requirements as specified. (The
          >specified requirements may be wrong, but the code and the
          >requirements are 100% aligned).
          >
          >If such a product actually existed, it would essentially eliminate
          >the need to perform some of the Technical Solution practices - those
          >dealing with design, interfaces, and perhaps, documentation. It
          >would also eliminate the need to do many of the Verification
          >activities - at least as applied to the non-existent design and the
          >perfectly aligned code. Traceability from requirements to design to
          >code would no longer be necessary.
          >
          >As such a miracle product emerged, the lead appraiser community
          >would need to align on the acceptance of this approach as
          >"alternative practice" for a host of specific practices - or
          >not. In 10 years, as this new miracle product became embedded as
          >the standard in software engineering, the model authors would find
          >themselves giving consideration to altering the CMMI model to
          >accommodate this new world order directly.
          >
          >Had the CMMI been around in the 1960s, there may have very well been
          >some practices dealing with the sequencing of punch cards, or the
          >security of programs on paper tape. There certainly would NOT have
          >been any practices around peer reviews - as that had not yet been
          >discovered as a "good practice" worthy of a CMMI goal.
          >
          >This is a rather long-winded way of saying that I agree with Ed -
          >that alternative practices may not be a one-for-one mapping from
          >"Prepare for Peer Reviews" to "Prepare for Pair Programming." I
          >think it is essential to really understand the desired outcome of
          >the practice/goal/process area and evaluate whether some alternative
          >approach is targeting the same desired outcome using a different paradigm.
          >
          >Where I disagree with Ed and some of the others is when they contend
          >that the alternative approach has to be "at least as good" as the
          >practices they are replacing - that is, you have to demonstrate that
          >pair programming is at least as effective as peer reviews in
          >detecting or preventing defects. What if a new method was only 75%
          >as effective, but cost 10% as much to perform? Conversely, what if
          >a new method was 50% MORE effective than peer reviews but cost 10x
          >as much to perform?
          >
          >I'm not venturing into the answer to the above rhetorical questions,
          >but they should help explain why I'm not in favor of an
          >"equivalence" test. Rather, I would tend to focus on the intent of
          >the model practices and evaluate whether the same intent is being
          >addressed (or obviated) in the alternative approach.
          >
          >Besides, why try to resolve the issue of whether pair programming is
          >as effective in eliminating or removing defects when there is such a
          >broad set of results from the various activities that organizations
          >call "peer reviews?" Against which "peer review" do we compare pair
          >programming - the formal Fagan Inspection, the "virtual" peer
          >review, or the "buddy" review?
          >
          >Hey, but I'm on vacation, so I'll let the rest of you gnaw on this
          >for a while...
          >
          >Regards,
          >
          >Pat
          >
          >
          >
          >----- Original Message -----
          >From: <mailto:edwardfwelleriii@msn.com>EDWARD F WELLER III
          >To:
          ><mailto:cmmi_process_improvement@yahoogroups.com>cmmi_process_improvement@yahoogroups.com
          >
          >Sent: Monday, June 29, 2009 3:38 PM
          >Subject: Re: [CMMi Process Improvement] VER SG2 in an agile world
          >
          >Bruce and others
          >
          >I think the point of an alternative practice, rather than a
          >modification of the practice, is that a different approach gives you
          >the equivalent result. Even though I am a strong proponent of
          >inspections, I am willing to see that pair-wise coding, or team
          >development, that **measurably** achieves the same results as peer
          >reviews/inspections, would be an alternative practice. The goal of
          >peer reviews is to remove defects early and efficiently. If you can
          >demonstrate that the requirements, design, and code defects are at
          >or below the results achieved by peer reviews, then you have met the
          >intent of peer reviews, which then should be a satisfaction of the
          >intent of SG2.
          >
          >I recall when Pat had one of his ATLAS questions on alternative
          >practices that most of those offered/commented on were more
          >variations of the practice, rather than an alternative. Take as
          >another example the Personal Software Process. With code, some are
          >able to remove 95+% of their defects in personal reviews - is there
          >then a need for peer reviews (probably product and customer dependent)?
          >
          >What I would debate is whether or not, for requirements and design,
          >you can skip the peer review which has the characteristics Bruce
          >named below - the deliberate focus on defect removal seems to just
          >work better than other types of review. For code, paired coding has
          >been shown to reduce defect rates to 1/100th of the individual rate
          >(Randy Jensen experiments and reports going back to the 80s).
          >
          >Part of this is the definition of an alternative practice - what
          >ATLAS showed is there is no consensus on what that means - another
          >area of the SCAMPI process that needs clarification via experience
          >rather than the MDD glossary definition. The CMMI Glossary includes
          >the following definition of "alternative practice":
          >"A practice that is a substitute for one or more generic or specific
          >practices contained in CMMI models that achieves an equivalent effect
          >toward satisfying the generic or specific goal associated with model
          >practices. Alternative practices are not necessarily one-for-one
          >replacements for the generic or specific practices."
          >
          >
          >It would be useful to try to achieve a common understanding - any
          >other thoughts?
          >
          >Ed
          >----- Original Message -----
          >From: <mailto:brduncil@bellsouth.net>Bruce R. Duncil
          >To:
          ><mailto:cmmi_process_improvement@yahoogroups.com>cmmi_process_improvement@yahoogroups.com
          >
          >Sent: Monday, June 29, 2009 9:36 AM
          >Subject: Re: [CMMi Process Improvement] VER SG2 in an agile world
          >
          >Hi Winifred,
          >Since I no longer develop software, I don't consider myself an
          >agilist. However, I am increasingly doing process improvement with
          >clients claiming to be agile and leading appraisals of agile
          >implementations and will therefore offer you my perspective.
          >
          >I don't consider joint development of any work product to
          >automatically include - or to otherwise preclude - the need for peer
          >review or verification.
          >
          >The CMMI goal requirement to satisfy is that peer reviews are
          >performed on selected work products (SG 2). Informative material
          >directly supporting the meaning and implementation of that goal
          >states: "Peer reviews involve a methodical examination of work
          >products by the producers' peers to identify defects for removal and
          >to recommend other changes that are needed." Preparation (SP 2.1),
          >conduct of the review and recording of issues (SP 2.2), and analysis
          >of the data on preparation, conducting and results of the peer
          >reviews (SP 2.3) are the key expectations in meeting this goal.
          >Likewise, the informative material supporting SP 2.2 states: "When
          >issues arise during the peer review, they should be communicated to
          >the primary developer of the work product for correction." Finally,
          >the informative material supporting implementation of SP 3.2 (which
          >also references peer reviews - and SP 2.3) states: "Actual results
          >must be compared to established verification criteria to determine
          >acceptability. The results of the analysis are recorded as evidence
          >that verification was conducted." Peer reviews - in whatever form
          >they may take - must support the overall objective of verifying that
          >selected product requirements are met.
          >
          >I recommend you look hard at the developers' and designers'
          >relationship during development/design and their documentation (such
          >as in developers' notebooks) relative to the CMMI requirement(s),
          >expectations, and intent.
          >
          >In situations where two (or more) people are simply designing and
          >developing everything together and at the same time, and leaving
          >loose notes (at best), rarely is the CMMI effectively implemented. It
          >also tends to cause group-think, blinding those involved to issues or
          >defects and thus not providing meaningful product verification.
          >
          >If, on the other hand, the developers/designers maintained an
          >arms-length relationship, truly planning and reviewing each other's
          >work (in tandem, for example), then the "peer reviews" -while
          >relatively informal - may prove to be effective. Provided their
          >documentation includes an understandable record of their activities,
          >issues found and addressed, and results, as well as some analysis of
          >it by them or others, then you have a basis that may lead you to
          >conclude that the goal is satisfied.
          >
          >In conclusion, I view a project's ability to demonstrate they have
          >data and have conducted analysis on it (SP 2.3) as necessary but
          >not sufficient evidence that peer reviews were in fact conducted (SP
          >2.2) and that the goal (SG 2) statement is satisfied.
          >
          >I hope this is helpful to you.
          >Best Regards,
          >Bruce
          >www.alderonconsulting.com
          >
          >At 03:55 PM 6/28/2009, you wrote:
          >
          > >Hello.
          > >
          > >This question is directed to Jeff, Hillel and all other agilists out there.
          > >
          > >One of my clients claims that they use agile methods (SCRUM mainly)
          > >and develop their design documents collaboratively, therefore they
          > >don't need peer reviews in the traditional sense.
          > >
          > >I can see how collaborative development of "anything" could meet the
          > >expectations of VER SP 2.2. I'd "file it under" alternative
          > >implementation of VER SP2.2. I have suggested that they may want to
          > >schedule a more traditional peer review during a sprint if (for
          > >example) some key person, such as one of the main tech leads is
          > >missing during a collaborative design session. So far so good.
          > >
          > >My question is around VER SP 2.3 "Analyze data about preparation,
          > >conduct, and results of the peer reviews." What "alternative
          > >implementation" would you expect to see for this practice in an
          > >agile project?
          > >
          > >Thanks for all advice.
          > >
          > >regards
          > >Winifred
          > >
          > >
          >
          >

        • EDWARD F WELLER III
          Message 4 of 16 , Jul 2, 2009
            Pat,
             
            When considering alternative practices, I think it is necessary to look at goal intent as well as the wording, which indeed can be a slippery slope.
             
            The overworked example is pair programming, which has been measured (by some, anyway, rather than by arm-waving claims) to reduce defects in code to well below what is achieved by peer reviews.
             
            So let's hypothesize case A:
            The org peer reviews ONLY code - that is the "selected work product" per VER SP 2.1. We may not like the narrow application, but it does implement the SP.
             
            Now the org implements XP/Agile/etc with pair programming, and no longer performs code peer reviews - therefore no longer does any peer reviews.
             
            They clearly do NOT "Conduct peer reviews" per SG2 and the words in the SPs (unless you twist the pair programming to include the verifying aspects of the person not banging on the keyboard), but they have removed defects early and efficiently - the goal of Peer Reviews.
             
            They have produced a product which clearly does not need the additional step of the formal peer review process (assuming the pair programming delivers as promised/claimed). How rigorously do we hold to the statement "Perform peer reviews" - literally or by looking at the objective of Peer Reviews?
             
            So how do we call this one?
             
            If you remember the one ATLAS where you posed questions on alternative practices - there was considerable variation in the interpretation of what an alternative practice was - from slight modifications of typical work products/subpractices all the way to completely different approaches to solving the problem.
             
            Can this be solved by more words in the MDD, or is it one where team judgment must be applied, with appropriate a priori agreement on the characteristics that qualify an alternative practice?
             
            Not sure I know the answer, but that doesn't stop the philosophical discussion.
             
            Ed
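Ed's measurable-equivalence test for case A can be sketched as a simple comparison: does the escaped-defect density measured under pair programming match or beat the organization's own peer-review baseline? A minimal sketch follows; all figures are invented for illustration, and nothing here comes from the MDD or CMMI itself:

```python
# Hypothetical "equivalent effect" check for case A: has pair programming
# removed defects at least as well as the code peer reviews it replaced?
# Both rates are invented for illustration (defects escaping per KLOC).

baseline_escapes_per_kloc = 0.8   # historical escape rate with code peer reviews
observed_escapes_per_kloc = 0.5   # measured escape rate under pair programming

def equivalent_effect(observed, baseline):
    """True if the alternative practice's escape rate is at or below baseline."""
    return observed <= baseline

if equivalent_effect(observed_escapes_per_kloc, baseline_escapes_per_kloc):
    print("equivalent effect demonstrated - intent arguably met")
else:
    print("no equivalent effect - alternative practice claim is in trouble")
```

The point of the sketch is only that the argument turns on measured data, not on whether the activity is labeled a "peer review".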
             
            2) which are expected to result in analyzing data (SP 2.3/3.2) that
            presumably will be applied in some un-described manner (no
            informative material on that) to meet a vaguely outlined intention
            (the presumed purpose of the PA).

            The CMMI doesn't so explicitly call out defect identification and
            removal - and VER is focused instead on determining that requirements
            are met. If the most important thing is that such activities remove
            defects, we should cause CMMI v1.3 to incorporate SW-CMM, PR Goal 2:
            "Defects in the (software) work products are identified and removed".

            Best Regards,
            Bruce
            www.alderonconsulting.com

            At 04:39 AM 6/30/2009, you wrote:

            >
            >Ed, et.al.,
            >
            >Imagine, if you will, that someone develops a software package that
            >inputs well-specified software requirements and outputs working
            >software code. It includes an interface facility that lets you
            >specify the software-to-software, software-to-hardware, and
            >software-to-human interfaces. It includes other facilities that
            >address all of the concerns that you might raise in the conversion
            >of English-based requirements to working software code (AND a
            >Spanish version is due to be released real soon!) It reduces the
            >software maintenance task to altering the requirements and
            >regenerating the code and documentation - which is also self-generated.
            >
            >Furthermore, the code works as specified each and every time! Test
            >though you will, you are NEVER able to discover a case where the
            >generated code varies from the requirements as specified. (The
            >specified requirements may be wrong, but the code and the
            >requirements are 100% aligned).
            >
            >If such a product actually existed, it would essentially eliminate
            >the need to perform some of the Technical Solution practices - those
            >dealing with design, interfaces, and perhaps, documentation. It
            >would also eliminate the need to do many of the Verification
            >activities - at least as applied to the non-existent design and the
            >perfectly aligned code. Traceability from requirements to design to
            >code would no longer be necessary.
            >
            >As such a miracle product emerged, the lead appraiser community
            >would need to align on the acceptance of this approach as
            >"alternative practice" for a host of specific practices - or
            >not. In 10 years, as this new miracle product became embedded as
            >the standard in software engineering, the model authors would find
            >themselves giving consideration to altering the CMMI model to
            >accommodate this new world order directly.
            >
            >Had the CMMI been around in the 1960s, there may have very well been
            >some practices dealing with the sequencing of punch cards, or the
            >security of programs on paper tape. There certainly would NOT have
            >been any practices around peer reviews - as that had not yet been
            >discovered as a "good practice" worthy of a CMMI goal.
            >
            >This is a rather long-winded way of saying that I agree with Ed -
            >that alternative practices may not be a one-for-one mapping from
            >"Prepare for Peer Reviews" to "Prepare for Pair Programming."  I
            >think it is essential to really understand the desired outcome of
            >the practice/goal/process area and evaluate whether some alternative
            >approach is targeting the same desired outcome using a different paradigm.
            >
            >Where I disagree with Ed and some of the others is when they contend
            >that the alternative approach has to be "at least as good" as the
            >practices they are replacing - that is, you have to demonstrate that
            >pair programming is at least as effective as peer reviews in
            >detecting or preventing defects. What if a new method was only 75%
            >as effective, but cost 10% as much to perform? Conversely, what if
            >a new method was 50% MORE effective than peer reviews but cost 10x
            >as much to perform?
            >
            >I'm not venturing into the answer to the above rhetorical questions,
            >but they should help explain why I'm not in favor of an
            >"equivalence" test. Rather, I would tend to focus on the intent of
            >the model practices and evaluate whether the same intent is being
            >addressed (or obviated) in the alternative approach.
            >
            >Besides, why try to resolve the issue of whether pair programming is
            >as effective in eliminating or removing defects when there is such a
            >broad set of results from the various activities that organizations
            >call "peer reviews?" Against which "peer review" do we compare pair
            >programming - the formal Fagan Inspection, the "virtual" peer
            >review, or the "buddy" review?
            >
            >Hey, but I'm on vacation, so I'll let the rest of you gnaw on this
            >for a while...
            >
            >Regards,
            >
            >Pat
            >
            >
            >
            >----- Original Message -----
            >From: <mailto:edwardfwelleriii@msn.com>EDWARD F WELLER III
            >To:
            ><mailto:cmmi_process_improvement@yahoogroups.com>cmmi_process_improvement@yahoogroups.com
            >
            >Sent: Monday, June 29, 2009 3:38 PM
            >Subject: Re: [CMMi Process Improvement] VER SG2 in an agile world
            >
            >Bruce and others
            >
            >I think the point of an alternative practice, rather than a
            >modification of the practice, is that a different approach gives you
            >the equivalent result. Even though I am a strong proponent of
            >inspections, I am willing to see that pair-wise code, or team
            >development, that **measurably** achieves the same results as peer
            >reviews/inspections, would be an alternative practice. The goal of
            >peer reviews is to remove defects early and efficiently. If you can
            >demonstrate that the requirements, design, and code defects are at
            >or below the results achieved by peer reviews, then you have met the
            >intent of peer reviews, which then should be a satisfaction of the
            >intent of SG2.
            >
            >I recall when Pat had one of his ATLAS questions on alternative
            >practices that most of those offered/commented on were more
            >variations of the practice, rather than an alternative. Take as
            >another example the Personal Software Process. With code, some are
            >able to remove 95+% of their defects in personal reviews - is there
            >then a need for peer reviews (probably product and customer dependent)?
            >
            >What I would debate is whether or not for requirements and design
            >you can skip the peer review which has the characteristics Bruce
            >named below - the deliberate focus on defect removal seems to just
            >work better than other types of review. For code, the effect of
            >paired coding has been shown to reduce defect rates to 1/100th
            >(Randy Jensen experiments and reports going back to the 80s) of the
            >individual rate.
            >
            >Part of this is the definition of an alternative practice - what
            >ATLAS showed is there is no consensus on what that means - another
            >area of the SCAMPI process that needs clarification via experience
            >rather than the MDD glossary definition:
            >The CMMI Glossary includes the following definition of "alternative practice":
            >"A practice that is a substitute for one or more generic or specific
            >practices contained in
            >CMMI models that achieves an equivalent effect toward satisfying the
            >generic or specific
            >goal associated with model practices. Alternative practices are not
            >necessarily one-for-one
            >replacements for the generic or specific practices."
            >
            >
            >It would be useful to try to achieve a common understanding - any
            >other thoughts?
            >
            >Ed
            >----- Original Message -----
            >From: <mailto:brduncil@bellsouth. net>Bruce R. Duncil
            >To:
            ><mailto:cmmi_process_ improvement@ yahoogroups. com>cmmi_process_ improvement@ yahoogroups. com
            >
            >Sent: Monday, June 29, 2009 9:36 AM
            >Subject: Re: [CMMi Process Improvement] VER SG2 in an agile world
            >
            >Hi Winifred,
            >Since I no longer develop software, I don't consider myself an
            >agilist. However, I am increasingly doing process improvement with
            >clients claiming to be agile and leading appraisals of agile
            >implementations and will therefore offer you my perspective.
            >
            >I don't consider joint development of any work product to
            >automatically include - or to otherwise preclude - the need for peer
            >review or verification.
            >
            >The CMMI goal requirement to satisfy is that peer reviews are
            >performed on selected work products (SG 2). Informative material
            >directly supporting the meaning and implementation of that goal
            >states: "Peer reviews involve a methodical examination of work
            >products by the producers' peers to identify defects for removal and
            >to recommend other changes that are needed." Preparation (SP 2.1),
            >conduct of the review and recording of issues (SP 2.2), and analysis
            >of the data on preparation, conducting and results of the peer
            >reviews (SP 2.3) are the key expectations in meeting this goal.
            >Likewise, the informative material supporting SP 2.2 states: "When
            >issues arise during the peer review, they should be communicated to
            >the primary developer of the work product for correction." Finally,
            >the informative material supporting implementation of SP 3.2 (which
            >also references peer reviews - and SP 2.3) states: "Actual results
            >must be compared to established verification criteria to determine
            >acceptability. The results of the analysis are recorded as evidence
            >that verification was conducted." Peer reviews - in whatever form
            >they may take - must support the overall objective of verifying that
            >selected product requirements are met.
            >
            >I recommend you look hard at the developers' and designers'
            >relationship during development/design and their documentation (such
            >as in developer's notebooks) relative to the CMMI requirement(s),
            >expectations, and intent.
            >
            >In situations where two (or more) people are simply designing and
            >developing everything together and at the same time, and leaving
            >loose notes (at best), rarely is the CMMI effectively implemented. It
            >also tends to cause group-think, blinding those involved to issues or
            >defects and thus not providing meaningful product verification.
            >
            >If, on the other hand, the developers/designers maintained an
            >arms-length relationship, truly planning and reviewing each other's
            >work (in tandem, for example), then the "peer reviews" - while
            >relatively informal - may prove to be effective. Provided their
            >documentation includes an understandable record of their activities,
            >issues found and addressed, and results, as well as some analysis of
            >it by them or others, then you have a basis that may lead you to
            >conclude that the goal is satisfied.
            >
            >In conclusion, I view a project's ability to demonstrate they have
            >data and have conducted analysis on it (SP 2.3) as the necessary but
            >not sufficient evidence that peer reviews were in fact conducted (SP
            >2.2) and the goal (SG 2) statement to be satisfied.
            >
            >I hope this is helpful to you.
            >Best Regards,
            >Bruce
            >www.alderonconsulting.com
            >
            >At 03:55 PM 6/28/2009, you wrote:
            >
            > >Hello.
            > >
            > >This question is directed to Jeff, Hillel and all other agilists out there.
            > >
            > >One of my clients claims that they use agile methods (SCRUM mainly)
            > >and develop their design documents collaboratively, therefore they
            > >don't need peer reviews in the traditional sense.
            > >
            > >I can see how collaborative development of "anything" could meet the
            > >expectations of VER SP 2.2. I'd "file it under" alternative
            > >implementation of VER SP2.2. I have suggested that they may want to
            > >schedule a more traditional peer review during a sprint if (for
            > >example) some key person, such as one of the main tech leads is
            > >missing during a collaborative design session. So far so good.
            > >
            > >My question is around VER SP 2.3 "Analyze data about preparation,
            > >conduct, and results of the peer reviews." What "alternative
            > >implementation" would you expect to see for this practice in an
            > agile project?
            > >
            > >Thanks for all advice.
            > >
            > >regards
            > >Winifred
            > >
            > >
            >
            >

          • wihamenezes
            Everyone, Thanks a million for a great and illuminating discussion. It was good to be reminded of the many perspectives. cheers Winifred
            Message 5 of 16 , Jul 4, 2009
            • 0 Attachment
              Everyone,

              Thanks a million for a great and illuminating discussion. It was good to be reminded of the many perspectives.

              cheers
              Winifred




              --- In cmmi_process_improvement@yahoogroups.com, "Patrick OToole" <PACT.otoole@...> wrote:
              >
              >
              > Ed,
              >
              > I see that my little trip into Fantasyland has spawned analysis of an interesting cosmic phenomenon.
              >
              > I believe you are suggesting that the goal could be rated "Satisfied" in a SCAMPI Class A appraisal if the INTENT of the goal were met, rather than the explicit goal statement itself, correct?
              >
              > As long as we are talking about MY perception of the intent, then I would tend to agree with you. The bigger problem is that other lead appraisers may have differing views about the underlying intent of a particular goal - and they would adopt the position: "As long as we are talking about MY perception..."
              >
              > I'm not disagreeing with you - I'd just need more time to noodle this one through. This would make for some real interesting (although probably theoretical) debate within the lead appraiser community. Sounds like an ATLAS study to me!
              >
              > Regards,
              >
              > Pat
              >
              >
              > ----- Original Message -----
              > From: EDWARD F WELLER III
              > To: cmmi_process_improvement@yahoogroups.com
              > Sent: Wednesday, July 01, 2009 9:33 AM
              > Subject: Re: [CMMi Process Improvement] VER SG2 in an agile world
              >
              >
              >
              >
              >
              >
              > Pat and Bruce
              >
              > We are overlooking some words in the MDD Appendix C which I stated earlier -
              >
              > identifying which model practices appear to be implemented using an alternative
              > practice, and analyzing whether or not the alternative practice does indeed achieve an
              > effect equivalent to that achieved by the model practices toward satisfying the associated
              > specific or generic goal
              >
              > I believe the operative words are "effect equivalent" and for peer reviews the intent is to remove defects early and cheaply - not to do peer reviews.
              >
              > It would be better phrased "satisfying the intent of the associated specific or generic goal", however, to remove all doubt. Should a CR be raised?
              >
              > Ed
              > ----- Original Message -----
              > From: Patrick OToole
              > To: cmmi_process_improvement@yahoogroups.com
              > Sent: Tuesday, June 30, 2009 9:26 PM
              > Subject: Re: [CMMi Process Improvement] VER SG2 in an agile world
              >
              >
              >
              >
              > Bruce,
              >
              > I believe in my alternative universe - the one where Super Code Generator eliminated the need for peer review of design, code, and test cases, as well as the elimination of traceability of requirements => design => code => test, as well as the elimination of end-of-life cycle verification, one would find that the Requirements Phase would be the primary focus of VER SG1, SG2, and SG3, and that traceability of customer requirements => product requirements as it would be absolutely critical to get the requirements absolutely right. Validation would take on a much more important role as the customer could be provided with full system functionality as easy as we generate prototypes today (and hopefully even easier!)
              >
              > In such a world, there would likely be other practices that emerge to make system generation and maintenance more effective and efficient - and that these would eventually find their way into the CMMI v4.1.
              >
              > I agree that "alternative goals" is not provided for in the CMMI and the associated SCAMPI appraisal method - but then the Super Code Generator does not yet exist. It's fairly safe to say that in the next 10-20 years, software development will undergo yet another paradigm shift that will obviate the need for some of the existing practices (and perhaps even some of the goals and/or process areas).
              >
              > If there comes a time when a CMMI goal becomes absolutely unnecessary, I suspect that the SEI would go into scramble mode to change the model or method to accommodate it. However, since adoption of new methods and technologies is relatively slow in our industry (how old are agile methods and what percentage of organizations have adopted them?) this would most likely be handled in the standard evolution of the model.
              >
              > I would be the first to encourage early adopters of such an emerging technology that the continuous representation would be the best way for them to go. Rather than arguing for alternative goals or, worse yet, performing practices simply to satisfy the model, they would be best served to use the rest of the model to drive their process improvement project.
              >
              > Regards,
              >
              > Pat
              >
              >
              >
              > ----- Original Message -----
              > From: Bruce R. Duncil
              > To: cmmi_process_improvement@yahoogroups.com
              > Sent: Tuesday, June 30, 2009 3:44 PM
              > Subject: Re: [CMMi Process Improvement] VER SG2 in an agile world
              >
              >
              > Hi Pat,
              > Your post included the following: "Where I disagree with Ed and some
              > of the others is when they contend that the alternative approach has
              > to be "at least as good" as the practices they are replacing - that
              > is, you have to demonstrate that pair programming is at least as
              > effective as peer reviews in detecting or preventing defects. What
              > if a new method was only 75% as effective, but cost 10% as much to
              > perform? Conversely, what if a new method was 50% MORE effective
              > than peer reviews but cost 10x as much to perform? I'm not venturing
              > into the answer to the above rhetorical questions, but they should
              > help explain why I'm not in favor of an "equivalence" test. "
              >
              > I would like to clarify a point or two.
              >
              > I firmly believe in alternative practices; I just don't believe in
              > alternative goals. VER SG 2 states: "Peer reviews are performed on
              > selected work products". Hence, some form of "methodical
              > examination" MUST be performed. My "arms-length" example was just
              > one situation that could potentially meet this goal requirement.
              >
              > In initially stating my belief that it is important to evaluate the
              > developer relationship and documentation in an agile environment if
              > you are going to rate VER, I was speaking of determining whether this
              > situation constituted some "equivalence to" a "methodical
              > examination" of code, not whether it was somehow equivalent to a
              > formal inspection at removing defects.
              >
              > And there we have it: companies are required to do examinations (SG
              > 2) which are expected to result in analyzing data (SP 2.3/3.2) that
              > presumably will be applied in some un-described manner (no
              > informative material on that) to meet a vaguely outlined intention
              > (the presumed purpose of the PA).
              >
              > The CMMI doesn't so explicitly call out defect identification and
              > removal - and VER is focused instead on determining that requirements
              > are met. If the most important thing is that such activities remove
              > defects, we should cause CMMI v1.3 to incorporate SW-CMM, PR Goal 2:
              > "Defects in the (software) work products are identified and removed".
              >
              > Best Regards,
              > Bruce
              > www.alderonconsulting.com
              >
              > At 04:39 AM 6/30/2009, you wrote:
              >
              > >
              > >Ed, et.al.,
              > >
              > >Imagine, if you will, that someone develops a software package that
              > >inputs well-specified software requirements and outputs working
              > >software code. It includes an interface facility that let's you
              > >specify the software-to-software, software-to-hardware, and
              > >software-to-human interfaces. It includes other facilities that
              > >address all of the concerns that you might raise in the conversion
              > >of English-based requirements to working software code (AND a
              > >Spanish version is due to be released real soon!) It reduces the
              > >software maintenance task to altering the requirements and
              > >regenerating the code and documentation - which is also self-generated.
              > >
              > >Furthermore, the code works as specified each and every time! Test
              > >though you will, you are NEVER able to discover a case where the
              > >generated code varies from the requirements as specified. (The
              > >specified requirements may be wrong, but the code and the
              > >requirements are 100% aligned).
              > >
              > >If such a product actually existed, it would essentially eliminate
              > >the need to perform some of the Technical Solution practices - those
              > >dealing with design, interfaces, and perhaps, documentation. It
              > >would also eliminate the need to do many of the Verification
              > >activities - at least as applied to the non-existent design and the
              > >perfectly aligned code. Traceability from requirements to design to
              > >code would no longer be necessary.
              > >
              > >As such a miracle product emerged, the lead appraiser community
              > >would need to align on the acceptance of this approach as
              > >"alternative practice" for a host of specific practices - or
              > >not. In 10 years, as this new miracle product became embedded as
              > >the standard in software engineering, the model authors would find
              > >themselves giving consideration to altering the CMMI model to
              > >accommodate this new world order directly.
              > >
              > >Had the CMMI been around in the 1960s, there may have very well been
              > >some practices dealing with the sequencing of punch cards, or the
              > >security of programs on paper tape. There certainly would NOT have
              > >been any practices around peer reviews - as that had not yet been
              > >discovered as a "good practice" worthy of a CMMI goal.
              > >
              > >This is a rather long-winded way of saying that I agree with Ed -
              > >that alternative practices may not be a one-for-one mapping from
              > >"Prepare for Peer Reviews" to "Prepare for Pair Programming." I
              > >think it is essential to really understand the desired outcome of
              > >the practice/goal/process area and evaluate whether some alternative
              > >approach is targeting the same desired outcome using a different paradigm.
              > >
              > >Where I disagree with Ed and some of the others is when they contend
              > >that the alternative approach has to be "at least as good" as the
              > >practices they are replacing - that is, you have to demonstrate that
              > >pair programming is at least as effective as peer reviews in
              > >detecting or preventing defects. What if a new method was only 75%
              > >as effective, but cost 10% as much to perform? Conversely, what if
              > >a new method was 50% MORE effective than peer reviews but cost 10x
              > >as much to perform?
              > >
              > >I'm not venturing into the answer to the above rhetorical questions,
              > >but they should help explain why I'm not in favor of an
              > >"equivalence" test. Rather, I would tend to focus on the intent of
              > >the model practices and evaluate whether the same intent is being
              > >addressed (or obviated) in the alternative approach.
              > >
              > >Besides, why try to resolve the issue of whether pair programming is
              > >as effective in eliminating or removing defects when there is such a
              > >broad set of results from the various activities that organizations
              > >call "peer reviews?" Against which "peer review" do we compare pair
              > >programming - the formal Fagan Inspection, the "virtual" peer
              > >review, or the "buddy" review?
              > >
              > >Hey, but I'm on vacation, so I'll let the rest of you gnaw on this
              > >for a while...
              > >
              > >Regards,
              > >
              > >Pat
              > >
              > >
              > >
              > >----- Original Message -----
              > >From: <mailto:edwardfwelleriii@...>EDWARD F WELLER III
              > >To:
              > ><mailto:cmmi_process_improvement@yahoogroups.com>cmmi_process_improvement@yahoogroups.com
              > >
              > >Sent: Monday, June 29, 2009 3:38 PM
              > >Subject: Re: [CMMi Process Improvement] VER SG2 in an agile world
              > >
              > >Bruce and others
              > >
              > >I think the point of an alternative practice, rather than a
              > >modification of the practice, is that a different approach gives you
              > >the equivalent result. Even though I am strong proponent of
              > >inspections, I am willing to see that pair-wise code, or team
              > >development, that **measurably** achieves the same results as peer
              > >reviews/inspections, would be an alternative practice. The goal of
              > >peer reviews is to remove defects early and efficiently. If you can
              > >demonstrate that the requirements, design, and code defects are at
              > >or below the results achieved by peer reviews, then you have met the
              > >intent of peer reviews, which then should be a satisfaction of the
              > >intent of SG2
              > >
              > >I recall when Pat had one of his ATLAS questions on alternative
              > >practices that most of those offered/commented on were more
              > >variations of the practice, rather than an alternative. Take as
              > >another example the Personal Software Process. With code, some are
              > >able to remove 95+% of their defects in personal reviews - is there
              > >then still a need for peer reviews (probably product and customer dependent)?
              > >
              > >What I would debate is whether or not, for requirements and design,
              > >you can skip the peer review that has the characteristics Bruce
              > >named below - the deliberate focus on defect removal seems to just
              > >work better than other types of review. For code, the effect of
              > >paired coding has been shown to reduce defect rates to 1/100th of
              > >the individual rate (Randy Jensen experiments and reports going
              > >back to the 80s).
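Ed's "measurably achieves the same results" test can be made concrete. A minimal sketch, assuming hypothetical defect counts (none of these figures come from the thread), of comparing two practices by defect removal efficiency:

```python
# Illustrative sketch: comparing two verification approaches by
# defect removal efficiency (DRE). All names and counts are hypothetical.

def removal_efficiency(found_before_release, found_after_release):
    """DRE = defects removed pre-release / total defects, as a percentage."""
    total = found_before_release + found_after_release
    return 100.0 * found_before_release / total if total else 0.0

# Hypothetical project data: (pre-release defects, post-release defects)
practices = {
    "formal inspections": (190, 10),
    "pair programming":   (185, 15),
}

for name, (pre, post) in practices.items():
    print(f"{name}: DRE = {removal_efficiency(pre, post):.1f}%")
```

If the DRE of the alternative practice is at or above the inspection baseline, that is the kind of measurable equivalence Ed describes.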
              > >
              > >Part of this is the definition of an alternative practice - what
              > >ATLAS showed is there is no consensus on what that means - another
              > >area of the SCAMPI process that needs clarification via experience
              > >rather than the MDD glossary definition:
              > >The CMMI Glossary includes the following definition of "alternative practice":
              > >"A practice that is a substitute for one or more generic or specific
              > >practices contained in
              > >CMMI models that achieves an equivalent effect toward satisfying the
              > >generic or specific
              > >goal associated with model practices. Alternative practices are not
              > >necessarily one-for-one
              > >replacements for the generic or specific practices."
              > >
              > >
              > >It would be useful to try to achieve a common understanding - any
              > >other thoughts?
              > >
              > >Ed
              > >----- Original Message -----
              > >From: Bruce R. Duncil <brduncil@...>
              > >To: cmmi_process_improvement@yahoogroups.com
              > >
              > >Sent: Monday, June 29, 2009 9:36 AM
              > >Subject: Re: [CMMi Process Improvement] VER SG2 in an agile world
              > >
              > >Hi Winifred,
              > >Since I no longer develop software, I don't consider myself an
              > >agilist. However, I am increasingly doing process improvement with
              > >clients claiming to be agile and leading appraisals of agile
              > >implementations and will therefore offer you my perspective.
              > >
              > >I don't consider joint development of any work product to
              > >automatically include - or to otherwise preclude - the need for peer
              > >review or verification.
              > >
              > >The CMMI goal requirement to satisfy is that peer reviews are
              > >performed on selected work products (SG 2). Informative material
              > >directly supporting the meaning and implementation of that goal
              > >states: "Peer reviews involve a methodical examination of work
              > >products by the producers' peers to identify defects for removal and
              > >to recommend other changes that are needed." Preparation (SP 2.1),
              > >conduct of the review and recording of issues (SP 2.2), and analysis
              > >of the data on preparation, conducting and results of the peer
              > >reviews (SP 2.3) are the key expectations in meeting this goal.
              > >Likewise, the informative material supporting SP 2.2 states: "When
              > >issues arise during the peer review, they should be communicated to
              > >the primary developer of the work product for correction." Finally,
              > >the informative material supporting implementation of SP 3.2 (which
              > >also references peer reviews - and SP 2.3) states: "Actual results
              > >must be compared to established verification criteria to determine
              > >acceptability. The results of the analysis are recorded as evidence
              > >that verification was conducted." Peer reviews - in whatever form
              > >they may take - must support the overall objective of verifying that
              > >selected product requirements are met.
              > >
              > >I recommend you look hard at the developers' and designers'
              > >relationship during development/design and their documentation (such
              > >as in developer's notebooks) relative to the CMMI requirement(s),
              > >expectations, and intent.
              > >
              > >In situations where two (or more) people are simply designing and
              > >developing everything together and at the same time, and leaving
              > >loose notes (at best), rarely is the CMMI effectively implemented. It
              > >also tends to cause group-think, blinding those involved to issues or
              > >defects and thus not providing meaningful product verification.
              > >
              > >If, on the other hand, the developers/designers maintained an
              > >arms-length relationship, truly planning and reviewing each other's
              > >work (in tandem, for example), then the "peer reviews" - while
              > >relatively informal - may prove to be effective. Provided their
              > >documentation includes an understandable record of their activities,
              > >issues found and addressed, and results, as well as some analysis of
              > >it by them or others, then you have a basis that may lead you to
              > >conclude that the goal is satisfied.
              > >
              > >In conclusion, I view a project's ability to demonstrate it has
              > >data and has conducted analysis on it (SP 2.3) as necessary but
              > >not sufficient evidence that peer reviews were in fact conducted (SP
              > >2.2) and that the goal statement (SG 2) is satisfied.
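As a hedged illustration of the kind of analysis VER SP 2.3 asks for, the sketch below aggregates peer-review records covering preparation, conduct, and results. The record layout and all values are invented for illustration, not drawn from any appraisal in this thread:

```python
# Hypothetical sketch of VER SP 2.3-style analysis: summarize recorded
# peer-review sessions (preparation time, size reviewed, defects found).
from statistics import mean

# Each record: (prep_minutes, size_loc, defects_found) -- invented data.
reviews = [
    (30, 250, 4),
    (45, 400, 9),
    (20, 150, 1),
]

prep = [r[0] for r in reviews]
density = [1000 * r[2] / r[1] for r in reviews]  # defects per KLOC reviewed

print(f"avg preparation: {mean(prep):.0f} min")
print(f"avg defect density: {mean(density):.1f} defects/KLOC")
```

Even this minimal summary gives an appraisal team evidence that review data exists and has been analyzed, rather than merely that reviews occurred.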
              > >
              > >I hope this is helpful to you.
              > >Best Regards,
              > >Bruce
              > >www.alderonconsulting.com
              > >
              > >At 03:55 PM 6/28/2009, you wrote:
              > >
              > > >Hello.
              > > >
              > > >This question is directed to Jeff, Hillel and all other agilist out there.
              > > >
              > > >One of my clients claims that they use agile methods (SCRUM mainly)
              > > >and develop their design documents collaboratively, therefore they
              > > >don't need peer reviews in the traditional sense.
              > > >
              > > >I can see how collaborative development of "anything" could meet the
              > > >expectations of VER SP 2.2. I'd "file it under" alternative
              > > >implementation of VER SP 2.2. I have suggested that they may want to
              > > >schedule a more traditional peer review during a sprint if (for
              > > >example) some key person, such as one of the main tech leads is
              > > >missing during a collaborative design session. So far so good.
              > > >
              > > >My question is around VER SP 2.3 "Analyze data about preparation,
              > > >conduct, and results of the peer reviews." What "alternative
              > > >implementation" would you expect to see for this practice in an
              > > >agile project?
              > > >
              > > >Thanks for all advice.
              > > >
              > > >regards
              > > >Winifred
              > > >
              > > >
              > >
              > >
              >