
Re: [scrumdevelopment] Completeness definition

  • Chris Brooks
    Message 1 of 15 , Mar 1, 2005
      On Tue, 01 Mar 2005 09:34:49 -0800, Stefan Ahrensdorf
      <sahrensdorf@...> wrote:
      > To get the right mindset into developers' heads who are telling me they
      > are "done with it" as soon as an item is ready for integration and UA
      > testing, I am telling them they are only "done" when it's released and
      > they have received the first mail from a user saying what a great
      > feature it is and how well it works. In other words, it has been
      > delivered and starts to return benefits.
      >
      > The internal definition however reads "Completed = all work including QA
      > has been performed, item is ready for release." This is mainly to account
      > for multiple sprints running up to one release. In most cases we deliver
      > after every sprint, so both are true.

      Thanks Stefan. As an enterprise software company that releases
      software to customers no more often than every 6 months or so, these
      criteria don't help us too much. We can't deploy into production on a
      monthly basis, so we rely on QA and our product owner (generally
      product management) for validation. For those of you on a similar
      product release cycle, what would you recommend?

      There is also a class of testing that we do with every release that is
      mandatory but can't be completed until the entire system is available.
      This includes scalability, performance, and availability testing.
      That's not to say we don't do *any* of this testing before we are
      functionally complete, but the results aren't meaningful to the release
      as a whole until we can test the entire as-built system.

      --
      Chris Brooks
      http://www.chrisbrooks.org
    • David H.
      Message 2 of 15 , Mar 2, 2005
        On Tue, 1 Mar 2005 10:07:06 -0800, Chris Brooks <brookscl@...> wrote:
        >
        <snip>
        >
        First of all, please take everything I say with a grain of salt. I
        realise that you need to make compromises, but please follow my train
        of thought nonetheless.

        > Thanks Stefan. As an enterprise software company that releases
        > software to customers no more often than every 6 months or so, these
        > criteria don't help us too much. We can't deploy into production on a
        > monthly basis, so we rely on QA and our product owner (generally
        > product management) for validation.
        Maybe your production cycle is wrong then. SCRUM quite clearly states
        that you need to be able to release a _complete_ product at the end of
        every Sprint. That does not necessarily mean that you have all the
        Features your Product Owner wanted and it does not mean that
        everything that has been planned in the backlog is available yet, but
        it does mean that I can tell you at the end of _any_ Sprint -->"go
        live".

        That also implies that this portion of the system is:

        Regression Tested
        Unit tested
        User Acceptance tested
        Code has been refactored
        Test Driven Development has been used
        The code is bug free (as opposed to being "believed bug-free")
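
        As a concrete sketch of the "Unit tested" and "Test Driven
        Development" items above, here is what that might look like in
        JUnit 3.x, the common Java tool at the time. The InterestCalculator
        class and its numbers are invented purely for illustration:

        import junit.framework.TestCase;

        public class InterestCalculatorTest extends TestCase {

            // In TDD these tests are written first and drive out the
            // production class below.
            public void testZeroBalanceEarnsNothing() {
                InterestCalculator calc = new InterestCalculator(0.05);
                assertEquals(0.0, calc.monthlyInterestOn(0.0), 0.0001);
            }

            public void testMonthlyInterestIsOneTwelfthOfAnnualRate() {
                InterestCalculator calc = new InterestCalculator(0.12);
                // 12% a year is 1% a month, so $1000 earns $10.
                assertEquals(10.0, calc.monthlyInterestOn(1000.0), 0.0001);
            }
        }

        // Hypothetical production class, kept in the same file for brevity.
        class InterestCalculator {
            private final double annualRate;

            InterestCalculator(double annualRate) { this.annualRate = annualRate; }

            double monthlyInterestOn(double balance) {
                return balance * annualRate / 12.0;
            }
        }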

        > There is also a class of testing that we do with every release that is
        > mandatory but can't be completed until the entire system is available.
        > This includes scalability, performance, and availability testing.
        > Not that we don't do *any* of this testing before we are functionally
        > complete, but the results aren't meaningful to the release as a whole
        > until we can test the entire as-built system.

        Once more, I cannot follow you here. I know that certain
        functionalities are dependent on others, yet given the fact that you
        need to be able to deliver a "releasable" product at the end of every
        sprint, that product per se has to be complete in itself. That means it
        is functional, which means that functionality can be benchmarked and
        load/stress tested. Whether that reflects the actual need to be met by
        the points defined in the backlog is a whole different issue, in my
        humble opinion.

        Maybe your stories are not granular enough, maybe you are trying to
        implement too much and thus you are left behind with too much
        incompleteness.

        Just a few thoughts.

        -d

      • Paul Hodgetts
        Message 3 of 15 , Mar 2, 2005
          Chris Brooks wrote:

          Hi Chris!

          > I'm looking for some examples of "completeness" definitions for
          > functional product backlog items. What do you all use? Areas of
          > expectation that I want to cover include:
          > * Development (of course this should be complete)

          Most teams I've worked with can become very development complete
          within a few sprints. The only development activity I sometimes
          see being deferred is technical documentation. I encourage ways
          to make it lighter and/or automate it, but sometimes some of the
          technical docs are polished up right before release.

          We shouldn't be leaving enough design cruft to require separate
          refactoring activities in subsequent sprints, but especially with
          newer Scrum teams, no team is perfect and sometimes it happens.
          But I don't feel we should make it a habit to defer refactoring.

          > * Unit tests complete & pass

          In terms of engineering unit tests, I would expect all of these
          to be written and running every sprint. During a release sprint,
          if the development team has some bandwidth while QA is testing,
          they could shore up some areas that lack good test coverage, but
          this should be the exception.

          > * The general area of non-unit tests (functional, integration tests,
          > etc.) - I'm struggling with this, especially because at times this
          > sort of testing will span multiple sprints given the complexity of the
          > product work that we do.

          The most common items I see being deferred to release sprints
          are larger testing activities. This is a slippery slope, of
          course, because any testing we defer carries the associated
          risk that we are leaving hidden "debt" in the system with an
          unknown amount of work to complete.

          We should challenge ourselves to find ways to perform smaller
          blocks of testing. It won't be easy at first, but most teams
          will come up with some creative solutions within a few sprints.
          For example, have some smaller suites of performance tests that
          can be run in an hour or two. Run these more frequently. If
          necessary, run them in the sprint immediately following the
          build, but run them as soon as possible after backlog items are
          development complete. Use them as probes to check if some of
          the key areas are showing signs of performance issues.
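
          A minimal sketch of such a probe as a plain JUnit 3.x test; the
          rendering loop and the two-second budget below are invented
          stand-ins for whatever operation a team actually wants to watch:

          import junit.framework.TestCase;

          public class RenderPerformanceProbeTest extends TestCase {

              // Stand-in for production work; a real probe would call
              // the system under test instead of this synthetic loop.
              private String renderStatement(int accountId) {
                  StringBuffer sb = new StringBuffer();
                  for (int line = 0; line < 100; line++) {
                      sb.append("account ").append(accountId)
                        .append(" line ").append(line).append('\n');
                  }
                  return sb.toString();
              }

              public void testRenderingStaysWithinTimeBudget() {
                  final int iterations = 500;
                  final long budgetMillis = 2000; // invented budget
                  long start = System.currentTimeMillis();
                  for (int i = 0; i < iterations; i++) {
                      renderStatement(i);
                  }
                  long elapsed = System.currentTimeMillis() - start;
                  assertTrue("took " + elapsed + "ms, budget "
                          + budgetMillis + "ms", elapsed <= budgetMillis);
              }
          }

          Because it runs in seconds rather than weeks, a probe like this
          can live in the regular suite and act as the early warning
          described above.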

          If we get better at having more modularized tests, we can run
          a subset of all the tests more frequently. If we haven't
          figured out how to run them in the same sprint as development,
          run them in the following sprint(s) in parallel with the new
          work. Yes, this might require more QA testers to perform these
          tests while other are doing the functional testing for the new
          features being added. We can also rotate some of the tests, so
          in sprint 2 we run subset A, sprint 3 we run subset B, etc. In
          this way, although we are leaving some incompleteness after a
          sprint, we are finding it as soon as possible.
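
          One possible shape for that rotation, with the sprint number
          supplied by the build; the subset classes here are placeholders
          for a team's real regression groups:

          import junit.framework.Test;
          import junit.framework.TestCase;
          import junit.framework.TestSuite;

          public class RotatingRegressionSuite {

              // Placeholder subsets; point these at real test classes.
              public static class SubsetATests extends TestCase {
                  public void testPlaceholderA() { assertTrue(true); }
              }

              public static class SubsetBTests extends TestCase {
                  public void testPlaceholderB() { assertTrue(true); }
              }

              public static Test suite() {
                  // Supplied by the build, e.g. -Dsprint.number=3.
                  int sprint = Integer.getInteger("sprint.number", 1).intValue();
                  TestSuite suite = new TestSuite("Regression, sprint " + sprint);
                  if (sprint % 2 == 0) {
                      suite.addTestSuite(SubsetATests.class);
                  } else {
                      suite.addTestSuite(SubsetBTests.class);
                  }
                  return suite;
              }
          }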

          As a last resort, dedicate the last sprint before release as a
          release sprint, where the primary activities are the final
          things that must be done as a single large block. This of
          course introduces risk because we have unpredictable debt
          that may be discovered at the worst possible time, right
          before a release when schedule and deadlines are most acute.

          > * Documentation

          As with testing, I try to challenge the writers to be more
          incremental and parallel in their work. Can they write the
          outlines of the docs early and fill them in incrementally?
          If they can focus on getting the more stable parts of the
          documentation at least in a first draft early, then they will
          flush out areas of missing information earlier. The details
          can be added (perhaps in stages) as the details become more
          stable, with the final, least stable things like screen shots
          being done just before release.

          Many writers are not used to working in this way. They want
          all the information to be ready before writing, so they can
          work heads-down and finish it in one activity. Splitting up
          the writing does indeed introduce some inefficiencies to the
          writing process, but we're trading some local inefficiency
          against the benefits of seeing more continuous progress and
          gaining earlier feedback from the writing process.


          In general, I try to get teams as a minimum to define sprint
          completeness as:
          - All development work - coding, unit testing, check-ins - is
          done. The system is left acceptably clean (refactoring).
          - UI work is finished to the point of completed page designs,
          graphics, and at least some minimal usability testing.
          - We have at minimum drafts of the documentation for anything
          new that was added or changed.
          - The system builds and the full suite of engineering unit
          tests and any automated functional tests run in the
          engineering environment. A nightly build can help make
          sure this happens (see the sketch after this list).
          - The build can be promoted and deployed to a system such as
          a QA or integration system. This means all the necessary
          install and migration scripts are updated as well.
          - Each new feature has been functionally tested in isolation
          to ensure the backlog item's requirements are met.
          - The largest feasible subset of the bigger tests (regression,
          performance, scalability, etc.) have been run against the
          promoted build.
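
          As one small example of wiring the unit-test item above into a
          nightly build: a single JUnit 3.x entry point that a scheduled
          job (cron, CruiseControl, an Ant task, etc.) can run every
          night. The module suite here is a placeholder:

          import junit.framework.Test;
          import junit.framework.TestCase;
          import junit.framework.TestSuite;

          public class NightlyAllTests {

              // Placeholder; a real team would add one line per module
              // so the nightly run covers the whole system.
              public static class ExampleModuleTests extends TestCase {
                  public void testSomethingSmall() { assertEquals(4, 2 + 2); }
              }

              public static Test suite() {
                  TestSuite all = new TestSuite("Nightly engineering unit tests");
                  all.addTestSuite(ExampleModuleTests.class);
                  return all;
              }
          }

          Running "java junit.textui.TestRunner NightlyAllTests" from the
          scheduled build then fails loudly if a sprint leaves the suite
          broken.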

          It's going to vary for each specific team's environment. The
          key thing is we want a safety net of completeness to be drawn
          as soon as possible that prevents unfinished debt from making
          it to the release. The largest part of the net (all of it in
          an ideal world) is cast in the sprint where something is first
          created. If we can't cast the whole net, get more and more of
          it in place as soon as possible in subsequent sprints in
          parallel with the new sprint work. As a last resort, make
          sure the release sprint fills the remaining gaps.

          I think the big key with the larger activities you mention is
          not whether we do performance testing in this sprint or later,
          but how we can be modular with the testing, so we can ask how
          much of it we can do now and how incrementally we can do the
          rest to minimize what's left at the end.

          Hope that helps,
          Paul
          -----
          Paul Hodgetts -- CEO, Coach, Trainer, Consultant
          Agile Logic -- www.agilelogic.com
          Training, Coaching, Consulting -- On-Site & Out-Sourced Development
          Agile Processes/Scrum/Lean/XP -- Java/J2EE, .NET, OOA/D, UED, SOA

          Upcoming Events:

          Certified ScrumMaster Training, Las Vegas, NV - April 25-26, 2005
          http://www.agilelogic.com/CSM.html
        • Chris Brooks
          Message 4 of 15 , Mar 2, 2005
            On Wed, 2 Mar 2005 12:38:11 +0100, David H. <dmalloc@...> wrote:
            > Maybe your production cycle is wrong then.

            You are probably correct! We are just now learning the ropes here...

            > SCRUM quite clearly states
            > that you need to be able to release a _complete_ product at the end of
            > every Sprint. That does not necessarily mean that you have all the
            > Features your Product Owner wanted and it does not mean that
            > everything that has been planned in the backlog is available yet, but
            > it does mean that I can tell you at the end of _any_ Sprint -->"go
            > live".

            To give a concrete example: we supply online banking software for over
            25% of the US population, including 4 of the top 10 US banks. Before
            we can safely ship new platform releases to clients, we need to
            validate in a rather large stress lab (often involving 60+ servers)
            certain levels of scalability and availability. These tests often
            take 2-3 weeks to complete on their own. We can't accelerate this
            time because stress testing requires running the system under load for
            a specified duration. Call me a non-purist, but I'm not willing to pay
            the cost of this sort of testing within each sprint; in fact, we often
            rent lab space from IBM, Microsoft, et al. to achieve this work, so
            travel is required, there are fixed costs, etc. So, when I say we
            really can't call a sprint truly releasable until certain other
            activities are done, that's what I mean. I don't think Scrum is going
            to do anything for me to help address issues like this, and I'm not
            looking for any silver bullets. Our current application of Scrum
            involves planning for 1-2 release sprints at the end of a platform
            release for this very purpose.

            If I can't truly "GO LIVE" at the end of each sprint, does that mean
            that I'm not doing Scrum?

            > That also implies that this portion of the system is:
            >
            > Regression Tested
            > Unit tested
            > User Acceptance tested
            > Code has been refactored
            > Test Driven Development has been used
            > The code is bug free (as opposed to being "believed bug-free")

            This is the sort of stuff I can live with and makes sense. I *have*
            wondered about the "Code has been refactored" requirement. What if
            code is well factored to start with? Do we always need to refactor?

            For those of you out there that provide technical documentation for
            products that you deliver, what requirements do you place on docs
            within the sprint (I know Paul responded to this point in another
            response, but I'm curious about what others do).

            --
            Chris Brooks
            http://www.chrisbrooks.org
          • mike.dwyer1@comcast.net
            Message 5 of 15 , Mar 2, 2005
              Stasis comes to mind.
              Death as well.

              Looking for an adequate definition of "complete" sounds like expecting to find truth using Bayesian logic. You almost get there, but not quite.
              --
              Mike Dwyer

              "I Keep six faithful serving-men
              Who serve me well and true:
              Their names are What and Where and When
              And How and Why and Who." - Kipling
               
            • David Roberts
              Message 6 of 15 , Mar 2, 2005

                Excellent question, I see the same sort of thing at my work. My thought is that Scrum gives a good starting point but can't be a one-size-fits-all methodology where, once you understand the rules, you can't break them.

                I believe that how a methodology scales depends in part on problem size.

                David Roberts

                InnovaSystems

                (619) 368-9621

                 


              • Chamberlain, Eric
                Message 7 of 15 , Mar 2, 2005
                  Personally, I think the journey towards creating a potentially shippable product is what you should be thinking about rather than fulfilling the letter-of-the-rule. My current Scrum team doesn't deliver anything potentially shippable at the end of its sprints--yet. I am working within the confines of the organization, the technology, and all the other factors to get there but I am not there yet.

                  I think if your acceptance testing is as time-consuming and expensive as you say, then maybe you need to lower the bar a bit for end-of-sprint acceptance; then, if the product owner gives the thumbs up (we like it!), you cut your potential release and go through the whole testing deal. But the product owner should be able to see something before the testing starts, so as to make a wise assessment. That "something" is what you deliver at the end of each Sprint--something that the customer (representative) can kick around and evaluate.

                  My 2 cents.

                  == Eric Chamberlain ==

                • David H.
                  Message 8 of 15 , Mar 3, 2005
                    On Wed, 2 Mar 2005 09:26:17 -0800, Chris Brooks <brookscl@...> wrote:
                    >
                    > On Wed, 2 Mar 2005 12:38:11 +0100, David H. <dmalloc@...> wrote:
                    > > Maybe your production cycle is wrong then.
                    >
                    > You are probably correct! We are just now learning the ropes here...
                    >
                    > > SCRUM quite clearly states
                    > > that you need to be able to release a _complete_ product at the end of
                    > > every Sprint. That does not necessarily mean that you have all the
                    > > Features your Product Owner wanted and it does not mean that
                    > everything that has been planned in the backlog is available yet, but
                    > > it does mean that I can tell you at the end of _any_ Sprint -->"go
                    > > live".
                    >
                    > To give a concrete example: we supply online banking software for over
                    > 25% of the US population, including 4 of the top 10 US banks. Before
                    > we can safely ship new platform releases to clients, we need to
                    > validate in a rather large stress lab (often involving 60+ servers)
                    > certain levels of scalability and availability. These tests often
                    > take 2-3 weeks to complete on their own. We can't accelerate this
                    > time because stress testing requires running the system under load for
                    > a specified duration. Call me a non-purist, but I'm not willing to pay
                    > the cost of this sort of testing within each sprint; in fact, we often
                    > rent lab space from IBM, Microsoft, et al. to achieve this work, so
                    > travel is required, there are fixed costs, etc. So, when I say we
                    > really can't call a sprint truly releasable until certain other
                    > activities are done, that's what I mean. I don't think Scrum is going
                    > to do anything for me to help address issues like this, and I'm not
                    > looking for any silver bullets. Our current application of Scrum
                    > involves planning for 1-2 release sprints at the end of a platform
                    > release for this very purpose.

                    That is a good attitude. SCRUM is indeed no silver bullet and you will
                    always have to compromise at some point. I guess you are doing the
                    right thing there. Such large scale tests could _maybe_ be seen as
                    their own Sprint though?
                    >
                    > If I can't truly "GO LIVE" at the end of each sprint, does that mean
                    > that I'm not doing Scrum?

                    Technically speaking you are not doing "perfect" scrum. The scrum
                    definition clearly states that you _have_ to be able to deliver at
                    the end of every sprint. However I do not think that this will ever be
                    possible in your particular setup.
                    >
                    > > That also implies that this portion of the system is:
                    > >
                    > > Regression Tested
                    > > Unit tested
                    > > User Acceptance tested
                    > > Code has been refactored
                    > > Test Driven Development has been used
                    > > The code is bug free (as opposed to being "believed bug-free")
                    >
                    > This is the sort of stuff I can live with and makes sense. I *have*
                    > wondered about the "Code has been refactored" requirement. What if
                    > code is well factored to start with? Do we always need to refactor?
                    >
                    Refactoring is a big part of Scrum and test driven development, in my
                    humble opinion. There should be _no_ code duplication if at all
                    possible. Duplication is exactly what happens in most waterfall
                    setups: you plan and analyse and implement, only to have two
                    departments write 90% of the same code to do two different things. So
                    in my humble opinion refactoring is very important. With all due
                    respect, knowing what a huge application you are building, I do not
                    think you could ever factor it so perfectly that no refactoring can be done.

                    -d
                  • Hubert Smits
                    Message 9 of 15 , Mar 3, 2005
                      Hi guys,

                      > > > Maybe your production cycle is wrong then.
                      > >
                      > > You are probably correct! We are just now learning the ropes here...
                      > >
                      > > > SCRUM quite clearly states
                      > > > that you need to be able to release a _complete_ product at the end of
                      > > > every Sprint. That does not necessarily mean that you have all the
                      > > > Features your Product Owner wanted and it does not mean that
                      > > > everything that has been planned in the backlog is available yet, but
                      > > > it does mean that I can tell you at the end of _any_ Sprint -->"go
                      > > > live".
                      > >
                      > > These tests often
                      > > take 2-3 weeks to complete on their own. We can't accelerate this
                      > > time because stress testing requires running the system under load for
                      > > a specified duration. Call me a non-purist, but I'm not willing to pay
                      >
                      > That is a good attitude. SCRUM is indeed no silver bullet and you will
                      > always have to compromise at some point. I guess you are doing the
                      > right thing there. Such large scale tests could _maybe_ be seen as
                      > their own Sprint though?
                      > >
                      > > If I can't truly "GO LIVE" at the end of each sprint, does that mean
                      > > that I'm not doing Scrum?
                      >
                      > Technically speaking you are not doing "perfect" scrum. The scrum
                      > definition clearly states that you _have_ to be able to deliver at
                      > the end of every sprint. However I do not think that this will ever be
                      > possible in your particular setup.

                      I would define 'ready' as the software being in a state where the
                      development team has no reasonable doubts, and no specific knowledge,
                      that the software would fail the stress testing. I.e. they can demo
                      it, all tests have run (except the stress testing), docs are there,
                      etc. If the company decided to drop the stress tests, the product
                      could be shrink-wrapped.


                      > >
                      > > > That also implies that this portion of the system is:
                      > > >
                      > > > Regression Tested
                      > > > Unit tested
                      > > > User Acceptance tested
                      > > > Code has been refactored
                      > > > Test Driven Development has been used
                      > > > The code is bug free (as opposed to being "believed bug-free")
                      > >
                      > > This is the sort of stuff I can live with and makes sense. I *have*
                      > > wondered about the "Code has been refactored" requirement. What if
                      > > code is well factored to start with? Do we always need to refactor?
                      > >
                      > Refactoring is a big part of Scrum and test driven development in my

                      Scrum defines no engineering practices. That is not to say that
                      refactoring or TDD isn't important, it is just not part of Scrum.

                      > > For those of you out there that provide technical documentation for
                      > > products that you deliver, what requirements do you place on docs
                      > > within the sprint (I know Paul responded to this point in another
                      > > response, but I'm curious about what others do).

                      If it is a requirement for the potentially shippable product then it
                      has to be delivered.

                      --Hubert
                    • Edwin Miller
                      Message 10 of 15 , Mar 3, 2005
                        This is something we struggle with as well. We are near the end of a release cycle that has taken 5 months to complete. We changed an underlying data architecture that broke 20+ modules of our application. We can't ship part of it without all 20 modules being brought into compliance with the new design. Sure, the work can be chunked into 30-day sprints, and we can build the application with localized testing for the module we just completed, but it can't go to production until everything's done. It satisfies the test of being able to show the product owner running code, but it does not equal a releasable product in and of itself.

                        We also have a sprint devoted to what I call "ship-mode" (don't say it out loud or people will look at you funny), which is essentially all the prep work required to move the code to production. This can include final regression, rehearsing the migration scripts (which involves migration to a staging environment and then doing a regression and load testing on bigger hardware), internal communication, and all the things that go with "productifying" the work we've done.

                        I'll be the first to admit that after using Scrum for almost 2 years and with two CSMs on board, we have by no means mastered it. This is just one of many topics that requires careful consideration and thoughtful "inspection and adaptation". Schwaber talks about the "empirical process", which is kind of anathema to following rules blindly (i.e. a defined process).

                        The other major controversial topic around here is what level of requirements detail is brought into the scrum team, but that deserves its own thread.

                        Edwin Miller
                        digiChart, Inc.



                      • David H.
                        Message 11 of 15 , Mar 3, 2005
                          > > >
                          > > > This is the sort of stuff I can live with and makes sense. I *have*
                          > > > wondered about the "Code has been refactored" requirement. What if
                          > > > code is well factored to start with? Do we always need to refactor?
                          > > >
                          > > Refactoring is a big part of Scrum and test driven development in my
                          >
                          > Scrum defines no engineering practices. That is not to say that
                          > refactoring or TDD isn't important, it is just not part of Scrum.
                          >
                          I beg to differ. But I guess that is purely philosophical. To me it
                          is part of Scrum because Scrum defines a methodology that leads to
                          certain processes. Maybe one could argue that it is more part of
                          test driven development than Scrum. But then again, who cares :)

                          -d
                        • Chris Brooks
                          Message 12 of 15 , Mar 3, 2005
                            On Thu, 3 Mar 2005 19:55:23 +0100, David H. <dmalloc@...> wrote:
                            >
                            > > > >
                            > > > > This is the sort of stuff I can live with and makes sense. I *have*
                            > > > > wondered about the "Code has been refactored" requirement. What if
                            > > > > code is well factored to start with? Do we always need to refactor?
                            > > > >
                            > > > Refactoring is a big part of Scrum and test driven development in my
                            > >
                            > > Scrum defines no engineering practices. That is not to say that
                            > > refactoring or TDD isn't important, it is just not part of Scrum.
                            > >
                            > I beg to differ. But I guess that is purely philosophical. To me it
                            > is part of Scrum because Scrum defines a methodology that leads to
                            > certain processes. Maybe one could argue that it is more part of
                            > test driven development than Scrum. But then again, who cares :)

                            I should clarify my original comment. Fowler's definition of
                            refactoring is "the process of changing a software system in such a
                            way that it does not alter the external behavior of the code yet
                            improves its internal structure."

                            My issue was with this statement as a suggested criterion for
                            completeness: "Code has been refactored". Refactoring describes a
                            process, not an end state. I would rather make the criterion
                            something like "Code is well factored" and describe a desirable
                            state. To suggest that all new code written must be refactored is
                            a bit strong, IMHO. Certainly there are cases where the initial
                            implementation of software is reasonably well factored.
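
                            As a small invented illustration of that end state (the fee
                            rules and names here are hypothetical, not from any real
                            system), code that is well factored from the start leaves a
                            later "refactoring" pass with nothing to do:

                            public class FeeSchedule {

                                private static final double WIRE_RATE = 0.01;
                                private static final double ACH_RATE = 0.005;
                                private static final double MINIMUM_FEE = 1.50;

                                public double wireFee(double amount) {
                                    return feeFor(amount, WIRE_RATE);
                                }

                                public double achFee(double amount) {
                                    return feeFor(amount, ACH_RATE);
                                }

                                // One shared rule instead of two near-identical methods;
                                // written this way up front, there is no duplication left
                                // to refactor away later.
                                private double feeFor(double amount, double rate) {
                                    return Math.max(amount * rate, MINIMUM_FEE);
                                }
                            }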

                            --
                            Chris Brooks
                            http://www.chrisbrooks.org
                          • David A Barrett
                            Message 13 of 15 , Mar 4, 2005
                              >Sure, the work can be chunked into 30
                              >day sprints, and we can build the application with localized testing for
                              >the module we just completed, but it can't go to production until
                              >everything's done.

                              I think, just like the "pigs and chickens" thing (which is only about who
                              gets to speak in a daily scrum), people lose track of what the Scrum "rule"
                              about "potentially implementable" features is all about.

                              There isn't any rule that your entire product has to be deployable at the
                              end of every Sprint. IMHO, the rule about making each Sprint Backlog item
                              a "potentially implementable" feature has two purposes:

                              1. To keep the team focused on "functionality". Creating an artifact or
                              investigating an approach is not a valid SB item. You need something
                              that you can demo to the user at the end and show forward movement on
                              functionality.

                              2. To keep the team focused on finishing. Starting something doesn't
                              count. Finishing it does. Size things so that they can be completed, even
                              if this means that the incremental gain in functionality is so small as to
                              be useless to the end user as a practical matter.


                              I don't see any conflict here with scheduling releases to occur some
                              quantity of Sprints in the future, nor do I see any conflict with dealing
                              with necessary pre-release activities. I don't think you need to be filled
                              with angst because some feature that you've developed can't be "released"
                              until the whole system has been stress tested for 2 weeks in a lab. The
                              rules do make you think about what you are doing, and potentially knock out
                              a whole pantload of activities as valid SB items. For instance:

                              1. Documentation
                              2. UAT
                              3. Unit Testing
                              4. User Training
                              5. Investigation
                              6. Bug fixing (as an open-ended, general activity)

                              These wouldn't ordinarily qualify. Instead, if a feature needs those
                              items completed in order to be considered "potentially implementable",
                              then they should be included in the SB item for the development of
                              that feature. Even here, though, you may need to make exceptions. For
                              instance, you may have a separate documentation department, who work
                              on their own schedule and need to see a final version of the product
                              before they will update documentation. The spirit of the thing is the
                              most important: only take on as much stuff as you can finish, and be
                              clear about what "finishing" means.

                              Dave Barrett,
                              Lawyers' Professional Indemnity Company