
Re: [XP] done?

  • Dossy Shiobara
    Message 1 of 27, Jul 5, 2006
      On 2006.07.05, jeff_olfert <jeff.olfert@...> wrote:
      > On your xp team, what does it mean to be "done" with a story?

      It passes all the Customer Acceptance Tests?

      -- Dossy

      --
      Dossy Shiobara | dossy@... | http://dossy.org/
      Panoptic Computer Network | http://panoptic.com/
      "He realized the fastest way to change is to laugh at your own
      folly -- then you can let go and quickly move on." (p. 70)
    • William Pietri
      Message 2 of 27, Jul 5, 2006
        jeff_olfert wrote:
        > On your xp team, what does it mean to be "done" with a story?

        When the customer and the developers agree that the story is done, it's
        done.


        William
      • Ron Jeffries
        Message 3 of 27, Jul 5, 2006
          On Wednesday, July 5, 2006, at 11:05:19 AM, jeff_olfert wrote:

          > On your xp team, what does it mean to be "done" with a story?

          If I had a team, it would mean:

          1. The automated customer acceptance tests for the story are all
          running;

          2. The code is well-tested;

          3. The code is well-factored;

          4. The code is integrated into the delivery stream and we could
          cut a CD tomorrow and ship it.
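A definition of done like Ron's four items can be made explicit and mechanically checkable. A minimal sketch in Python (hypothetical names; nothing here comes from Ron's team):

```python
# Hypothetical sketch: a team's definition of "done" as an explicit,
# checkable list rather than a judgment call.
DEFINITION_OF_DONE = [
    "automated customer acceptance tests all running",
    "code well-tested",
    "code well-factored",
    "integrated into the delivery stream",
]

def is_done(completed):
    """A story is done only when every criterion has been satisfied."""
    return all(criterion in completed for criterion in DEFINITION_OF_DONE)
```

A story missing even one criterion, e.g. `is_done({"code well-tested"})`, reports not done.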

          Ron Jeffries
          www.XProgramming.com
          Curiosity is more powerful than skepticism.
        • Adrian Howard
          Message 4 of 27, Jul 5, 2006
            On 5 Jul 2006, at 16:05, jeff_olfert wrote:

            > On your xp team, what does it mean to be "done" with a story?

            (leaving aside whether my team would count as an "xp team" :-)

            A story is done when:
            * It passes all the customer tests
            * We're at the end of a red-green-refactor TDD cycle
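The red-green-refactor cycle Adrian mentions can be illustrated with a deliberately tiny example (hypothetical; not code from this thread): write a failing test first (red), make it pass with the simplest code (green), then clean up while the test stays green (refactor).

```python
import unittest

# Green: the simplest code that makes the test below pass.
# (In the "red" step this function did not exist yet, so the test failed.)
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    # Red: this test is written first, before add() exists.
    def test_add(self):
        self.assertEqual(add(2, 3), 5)

# Refactor: with the bar green, duplication is removed and names are
# improved, re-running the test after each small change.

if __name__ == "__main__":
    unittest.main()
```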

            Adrian
          • yahoogroups@jhrothjr.com
            Message 5 of 27, Jul 5, 2006
              Fantastic!

              This points up one of the major issues that Kent points
              out in the second version of the White Book: you can
              _say_ you've got a production quality deployable, but
              until you've actually deployed it you don't really _know_
              if it's going to work in the production environment(s).

              You also haven't started to get the R part of your ROI
              until you've actually deployed into production.

              It not only isn't easy getting things smooth enough that
              your users welcome daily deployment, but it also
              raises real questions about basic project management
              concepts such as releases and iterations.

              John Roth



              ----- Original Message -----
              From: "D. André Dhondt"
              <d.andre.dhondt.at.gmail.com@...>
              To: "extremeprogramming@yahoogroups.com"
              <extremeprogramming.at.yahoogroups.com@...>
              Sent: Wednesday, July 05, 2006 9:31 AM
              Subject: Re: [XP] done?


              > It used to mean the card had passed UTs and ATs, and was waiting for the
              > customer to approve a downtime to deploy it. That caused all kinds of flow
              > problems, including days of mass deployment that inevitably would break
              > something and then it was hard to see what change was the underlying cause.
              > Now we deploy as soon as possible (at least daily), in baby steps, and get
              > immediate feedback to see if that worked in the production environment.
              > This has challenged us to find ways to deploy without causing as many
              > downtimes, and this challenge has been interesting and fun (the bulk of our
              > apps are Windows desktop apps!) Now a card is not "done" until it's
              > deployed and we see that the customer agrees that it's doing what they
              > wanted it to do.
              >
              > On 7/5/06, jeff_olfert <jeff.olfert@...> wrote:
              >>
              >> On your xp team, what does it mean to be "done" with a story?
              >>
              >>
              >>
              >
              >
              > [Non-text portions of this message have been removed]
              >
              >
            • Craig Demyanovich
              Message 6 of 27, Jul 5, 2006
                On Jul 5, 2006, at 11:31 AM, D. André Dhondt wrote:

                > It used to mean the card had passed UTs and ATs, and was waiting for the
                > customer to approve a downtime to deploy it. That caused all kinds of flow
                > problems, including days of mass deployment that inevitably would break
                > something and then it was hard to see what change was the underlying cause.
                > Now we deploy as soon as possible (at least daily), in baby steps, and get
                > immediate feedback to see if that worked in the production environment.
                > This has challenged us to find ways to deploy without causing as many
                > downtimes, and this challenge has been interesting and fun (the bulk of our
                > apps are Windows desktop apps!) Now a card is not "done" until it's
                > deployed and we see that the customer agrees that it's doing what they
                > wanted it to do.
                >
                Very thought-provoking idea!

                You write, "Now we deploy as soon as possible (at least daily), in
                baby steps, and get immediate feedback to see if that worked in the
                production environment." You deploy right to production instead of a
                staging area first? If so, how have you been burned by that in the
                past? Furthermore, do you still have problems deploying directly to
                production?

                Let's say that you've created well-tested, well-factored code that
                passes automated customer acceptance tests. However, the feature is
                not deployed before the end of the iteration. It appears that your
                team would not count the points for that feature with those that are
                done (deployed). For example, you report that you finished 12 points
                of work instead of 14. Does the customer then create a deployment
                story, whose cost is very low compared to the cost (2 points) of the
                implementation? (It would seem that the story is no longer a 2 if all
                that remains is deployment.) If so, do they feel that the 2 points of
                work on implementation aren't being tracked, or is there agreement
                that such things average out over the life of the project?

                Regards,
                Craig
              • Kay A. Pentecost
                Message 7 of 27, Jul 5, 2006
                  Hi, André,

                  > -----Original Message-----
                  > From: extremeprogramming@yahoogroups.com
                  > [mailto:extremeprogramming@yahoogroups.com] On Behalf Of D.
                  > André Dhondt
                  > Sent: Wednesday, July 05, 2006 11:32 AM
                  > To: extremeprogramming@yahoogroups.com
                  > Subject: Re: [XP] done?
                  >
                  > It used to mean the card had passed UTs and ATs, and was waiting for the
                  > customer to approve a downtime to deploy it. That caused all kinds of flow
                  > problems, including days of mass deployment that inevitably would break
                  > something and then it was hard to see what change was the underlying cause.
                  > Now we deploy as soon as possible (at least daily), in baby steps, and get
                  > immediate feedback to see if that worked in the production environment.
                  > This has challenged us to find ways to deploy without causing as many
                  > downtimes, and this challenge has been interesting and fun

                  I think this is *awesome*!!! Would you tell us some of the ways you've
                  found to do daily deployments without inconveniencing the customers?? I
                  would love to hear about that!

                  Kay Pentecost
                • Ron Jeffries
                  Message 8 of 27, Jul 5, 2006
                    On Wednesday, July 5, 2006, at 1:11:14 PM, yahoogroups@... wrote:

                    > Fantastic!

                    > This points up one of the major issues that Kent points
                    > out in the second version of the White Book: you can
                    > _say_ you've got a production quality deployable, but
                    > until you've actually deployed it you don't really _know_
                    > if it's going to work in the production environment(s).

                    > You also haven't started to get the R part of your ROI
                    > until you've actually deployed into production.

                    > It not only isn't easy getting things smooth enough that
                    > your users welcome daily deployment, but it also
                    > raises real questions about basic project management
                    > concepts such as releases and iterations.

                    I agree that having done mean "in the hands of the real users" would
                    be a fantastic thing.

                    However, ...

                    I've recently worked with teams that used simpler rules for "done"
                    that will illuminate what I'm concerned about.

                    One such definition of done was "Awaiting Verification: QA has
                    installed it on their machines and it runs".

                    The team had no control over when QA would install their software
                    and determine whether it ran. The result was that more and more
                    stories moved over toward the right hand side of their status board,
                    piling up in "Awaiting Verification". For weeks on end, nothing was
                    ever "done". This was quite demoralizing.

                    I'm inclined to think that the definition of "done" that a team uses
                    needs to be something that is under their control. While I would
                    agree that we really don't know if it's done until users are using
                    it, and while I'd love to see the team working hard to get stuff
                    deployed, so as to get the R in ROI, I'm concerned that it's asking
                    more than many teams can do, and that setting them an impossible
                    goal will be destructive.

                    Must remember to bring this up in the "Fear" workshop. Meanwhile,
                    help me out.

                    Ron Jeffries
                    www.XProgramming.com
                    Hold on to your dream. --ELO
                  • mnbluesguy
                    Message 9 of 27, Jul 5, 2006
                      Jeff,

                      > On your xp team, what does it mean to be "done" with a story?

                      We define the story done when it has been User Acceptance Tested in
                      the QA environment and is ready to be released into production.
                    • Joseph Little
                      Message 10 of 27, Jul 5, 2006
                        This discussion points up the importance of making the meaning of done
                        quite visible. Wouldn't it be nice if it were always a simple answer?
                        And, indeed sometimes it can be (see Ron's post with the 4 quick
                        items listed).

                        Clearly, most teams can improve their practices to move their
                        product/stories closer to the extreme end of done. But for individual
                        stories that are in iterations that end way before a real release,
                        done might be compromised.

                        In our experience, any team that says it has improved its practices
                        (and company practices) as much as possible to get as close to being
                        fully done as is "reasonable" for their environment is, well, not
                        fully done improving their engineering practices.

                        The discussion also points up the /purposes/ of being more done.
                        1. Morale of the team
                        2. Gaining the R in ROI sooner (or at least having a better handle on
                        when it could be gained)
                        3. Making clear to the customer (project sponsor) how done a story
                        (the project) really is
                        4. Getting clear feedback about whether you hit the mark (once done)
                        5. ...and others too, I'm sure.

                        Reminder on ROI: Often the ROI waits to be achieved until well
                        /after/ the software has been deployed. For example, often
                        business/operations things have to be put in place after the software
                        is deployed, but before the full ROI can be achieved. Yet another
                        reason to get the money flow started sooner.

                        Feedback: Of course, you can solicit feedback all along the way, and
                        we should. But when you tell a real user "it's done, this is the final
                        testing before we put this story to bed" they often get more serious
                        in their feedback.

                        Regards, Joe

                        --- In extremeprogramming@yahoogroups.com, Ron Jeffries
                        <ronjeffries@...> wrote:
                        >
                        > [full text of Message 8 snipped]
                      • jeff_olfert
                        Message 11 of 27, Jul 5, 2006
                          --- In extremeprogramming@yahoogroups.com, Ron Jeffries
                          <ronjeffries@...> wrote:
                          >
                          > [text of Message 8 snipped]
                          >
                          > Must remember to bring this up in the "Fear" workshop. Meanwhile,
                          > help me out.

                          What is the "Fear" workshop?
                        • jeff_olfert
                          Message 12 of 27, Jul 5, 2006
                            --- In extremeprogramming@yahoogroups.com, "mnbluesguy" <tannen@...>
                            wrote:
                            >
                            > Jeff,
                            >
                            > > On your xp team, what does it mean to be "done" with a story?
                            >
                            > We define the story done when it has been User Acceptance Tested in
                            > the QA environment and is ready to be released into production.
                            >

                            We've recently come up with a list of working agreements on my agile
                            team, including what "done" means. We are striving for the
                            following items:

                            Customer review
                            Automated customer acceptance tests (Watir)
                            Automated unit tests (usually resulting from TDD)
                            Code review (pairing counts...)
                            Performance tests
                            Feature is integrated into the MSI installer
                            Feature is included in the continuous integration build
                            QA tested (automated and manual)
                            API documented (NDoc) (for public APIs)
                            Feature is documented.
                            Feature is deployed to the demo/integration server.
                            Feature is demoed at the iteration demo.

                            Our application is consumed by other teams in my company, so the
                            ultimate "done" is when those other teams finally deploy the
                            application to production at a customer site. But we are attempting
                            to use the checklist above on my team before handing the stories
                            over to the other teams.

                            The list is long, but it seems that when some of these items are
                            left off, we suffer later.
                          • Ron Jeffries
                            Message 13 of 27, Jul 5, 2006
                              On Thursday, July 6, 2006, at 12:20:51 AM, jeff_olfert wrote:

                              > What is the "Fear" workshop?

                              A workshop that Chet and I are doing at Agile 2006, entitled
                              "Crushing Fear Under the Iron Heel of Action". It's something like
                              3x over-subscribed at the moment, but we're authorizing a few more
                              attendees and offering to give it a second time.

                              Ron Jeffries
                              www.XProgramming.com
                              No one expects the Spanish Inquisition ...
                            • Doug Swartz
                              Message 14 of 27, Jul 6, 2006
                                Wednesday, July 05, 2006, 10:59:21 AM, William Pietri wrote:

                                > jeff_olfert wrote:
                                >> On your xp team, what does it mean to be "done" with a story?

                                > When the customer and the developers agree that the story is done, it's
                                > done.

                                That is a great one-line definition, which I
                                subscribe to wholeheartedly!

                                To add some shades of grey: my current team has four
                                states of doneness for a card. While we could call them rare,
                                medium rare, medium, and well-done, we don't. They are called
                                something like: released, done, in customer acceptance, and in
                                production.

                                Released: The programmer thinks he is done with it. It passes
                                all unit tests and has been released into the repository.

                                Done: The testers agree that it is done and it passes all the
                                automated acceptance tests. This may happen the same day as
                                it is released into the repository or a few days later.

                                In customer acceptance: It is in our final testing environment
                                which is essentially a parallel of production and is
                                available to our end customers on the Internet. Our testers
                                also do additional exploratory testing here. We move new code into
                                the acceptance environment every few days.

                                In production: Hopefully, a self explanatory state. Code is
                                promoted from the customer acceptance environment at least
                                monthly, and as often as every iteration.
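Doug's four states form a simple linear progression, which could be encoded as a small state machine. A hypothetical sketch (the names are invented; the team's actual tracking is on cards):

```python
from enum import Enum

class StoryState(Enum):
    """Doug's four 'doneness' states, ordered least to most done."""
    RELEASED = 1             # programmer done; passes unit tests, in repo
    DONE = 2                 # testers agree; automated acceptance tests pass
    CUSTOMER_ACCEPTANCE = 3  # in the parallel-of-production environment
    IN_PRODUCTION = 4        # promoted to production

def promote(state):
    """Advance a story one state toward production; production is terminal."""
    if state is StoryState.IN_PRODUCTION:
        return state
    return StoryState(state.value + 1)
```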


                                --

                                Doug Swartz
                                daswartz@...
                              • Ilja Preuss
                                Message 15 of 27, Jul 6, 2006
                                  > I've recently worked with teams that used simpler rules for "done"
                                  > that will illuminate what I'm concerned about.
                                  >
                                  > One such definition of done was "Awaiting Verification: QA has
                                  > installed it on their machines and it runs".
                                  >
                                  > The team had no control over when QA would install their software
                                  > and determine whether it ran. The result was that more and more
                                  > stories moved over toward the right hand side of their status board,
                                  > piling up in "Awaiting Verification". For weeks on end, nothing was
                                  > ever "done". This was quite demoralizing.
                                  >
                                  > I'm inclined to think that the definition of "done" that a team uses
                                  > needs to be something that is under their control. While I would
                                  > agree that we really don't know if it's done until users are using
                                  > it, and while I'd love to see the team working hard to get stuff
                                  > deployed, so as to get the R in ROI, I'm concerned that it's asking
                                  > more than many teams can do, and that setting them an impossible
                                  > goal will be destructive.

                                  I have very mixed feelings about this.

                                  On the one hand, I can see how this could be quite demoralizing, and that
                                  would drive me to redefine the meaning of done.

                                  On the other hand, not getting early feedback from QA is certainly a bad
                                  thing, and I'm not fully convinced by the "we have no control over it"
                                  argument. To some extent we might not have control over it because we
                                  accepted having no control, because of an "us versus them" attitude. In
                                  other words, perhaps it would be a Good Thing (TM) if the team made this
                                  issue very visible and every team member did what he could to change it.
                                  I would fear that redefining "done" would only hide a problem that we
                                  should in fact tackle instead.

                                  Perhaps there simply should be two columns on the right hand side of the
                                  status board: "done" and "really done". That might at least trigger some
                                  interesting discussions...

                                  Just my 0.02 Euro cents,

                                  Ilja
                                • D. André Dhondt
                                  Message 16 of 27 , Jul 6, 2006
                                  • 0 Attachment
                                    In order to release features in our desktop apps on a more regular basis,
                                    without causing multiple downtimes per week, we've done the following (feel
                                    free to ask for more detail if you need it):


                                    - deploying a front-end that could automatically detect when the
                                    back-end database changes had been released, and switch modes
                                    accordingly
                                    - deploying a backwards-compatible database change (after verifying in
                                    the staging environment that the deploy only locked resources for a
                                    couple of seconds)
                                    - making a resource-intensive database update on a copy of a table,
                                    taking a BRIEF hard downtime to rename the table and apply the most
                                    recent activity, then letting people back in
                                    - temporarily halting a running middle-tier app, deploying, and
                                    restarting it before end-users could notice the soft downtime
                                    - for an app that runs as a background process: killing it remotely on
                                    users' machines, deploying, and either waiting until the next logon for
                                    it to re-launch or asking users to click a button to restart it at
                                    their leisure
                                    - copy-and-paste deploying some resource-type files, so that the next
                                    time the app is re-loaded it picks up the updates

                                    In general, we seek to take baby steps with our releases, so that only one
                                    part of one subsystem is affected at once.
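The first technique in the list, a front-end that detects when back-end changes have landed and switches modes, can be sketched roughly as below. This is a minimal illustration, not André's actual code; the `schema_version` table and the version-threshold parameter are hypothetical names invented for the sketch.

```python
# Sketch of a front-end that adapts to the deployed back-end schema.
# Hypothetical names throughout: the schema_version table and the
# version threshold are illustrative, not from the thread.

def get_schema_version(fetch_one):
    """fetch_one is any callable that runs a SQL query and returns one row."""
    row = fetch_one("SELECT version FROM schema_version")
    return row[0]

def choose_mode(version, new_feature_min_version=42):
    """Use the new code path only once the backwards-compatible
    database change is actually live; otherwise stay on the old one."""
    return "new" if version >= new_feature_min_version else "legacy"
```

A client that checks this at startup (or on reconnect) can be deployed ahead of the database change and switch itself over once the change lands, which is one way to avoid coordinating a simultaneous downtime.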


                                    On 7/5/06, Kay A. Pentecost <kayp@...> wrote:

                                    > Hi, André,
                                    >
                                    >
                                    > > -----Original Message-----
                                    > > From: extremeprogramming@yahoogroups.com
                                    > > [mailto:extremeprogramming@yahoogroups.com] On Behalf Of D.
                                    > > André Dhondt
                                    > > Sent: Wednesday, July 05, 2006 11:32 AM
                                    > > To: extremeprogramming@yahoogroups.com
                                    > > Subject: Re: [XP] done?
                                    > >
                                    > > It used to mean the card had passed UTs and ATs, and was
                                    > > waiting for the
                                    > > customer to approve a downtime to deploy it. That caused all
                                    > > kinds of flow
                                    > > problems, including days of mass deployment that inevitably
                                    > > would break
                                    > > something and then it was hard to see what change was the
                                    > > underlying cause.
                                    > > Now we deploy as soon as possible (at least daily), in baby
                                    > > steps, and get
                                    > > immediate feedback to see if that worked in the production
                                    > > environment.
                                    > > This has challenged us to find ways to deploy without causing as many
                                    > > downtimes, and this challenge has been interesting and fun
                                    >
                                    > I think this is *awesome*!!! Would you tell us some of the ways you've
                                    > found to do daily deployments without inconveniencing the customers?? I
                                    > would love to hear about that!
                                    >
                                    > Kay Pentecost
                                    >
                                    >
                                    >


                                    [Non-text portions of this message have been removed]
                                  • Ron Jeffries
                                    Message 17 of 27 , Jul 6, 2006
                                      On Thursday, July 6, 2006, at 6:03:45 AM, Doug Swartz wrote:

                                      > To add some shades of grey: My current team has four
                                      > states of doneness for a card: While we could call them rare,
                                      > medium rare, medium, and well-done, we don't. They are called
                                      > something like: released, done, in customer acceptance, and in
                                      > production.

                                      > Released: The programmer thinks he is done with it. It passes
                                      > all unit tests and has been released into the repository.

                                      > Done: The testers agree that it is done and it passes all the
                                      > automated acceptance tests. This may happen the same day as
                                      > it is released into the repository or a few days later.

                                      > In customer acceptance: It is in our final testing environment
                                      > which is essentially a parallel of production and is
                                      > available to our end customers on the Internet. Our testers
                                      > also do additional exploratory testing here. We move new code into
                                      > the acceptance environment every few days.

                                      > In production: Hopefully, a self explanatory state. Code is
                                      > promoted from the customer acceptance environment at least
                                      > monthly, and as often as every iteration.

                                      Doug ... this is a reasonable list, and not all that different from
                                      what I see elsewhere. What would concern me in this context would be
                                      the extent to which things don't just proceed from one to the next,
                                      instead looping back. When something comes back, it interrupts
                                      current work, slows the work down, and takes longer to fix than it
                                      probably would have taken to do it right in the first place.

                                      So long as a team uses reflux from the downstream events to improve
                                      its process and reduce future reflux, things are in pretty good
                                      shape. But when I see things piling up at one of these stations,
                                      it's a good bet that there's going to be trouble.
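The pile-up Ron describes can be made visible mechanically. Here is a tiny sketch using Doug's four state names and an arbitrary work-in-progress limit (the limit and the card data are illustrative, not prescribed anywhere in the thread):

```python
from collections import Counter

# Doug's four states, in pipeline order.
STATES = ["released", "done", "in customer acceptance", "in production"]

def find_bottlenecks(cards, wip_limit=5):
    """cards maps a card name to its current state; return the states
    holding more cards than the (illustrative) WIP limit allows."""
    counts = Counter(cards.values())
    return [s for s in STATES if counts[s] > wip_limit]

# Example: everything stuck awaiting customer acceptance.
cards = {f"story-{i}": "in customer acceptance" for i in range(8)}
print(find_bottlenecks(cards))  # -> ['in customer acceptance']
```

Any station this flags is one where work is accumulating faster than it drains, which is exactly the early-warning signal being described here.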

                                      Ron Jeffries
                                      www.XProgramming.com
                                      If you want to garden, you have to bend down and touch the soil.
                                      Gardening is a practice, not an idea.
                                      -- Thich Nhat Hanh
                                    • Ron Jeffries
                                      Message 18 of 27 , Jul 6, 2006
                                        On Thursday, July 6, 2006, at 6:16:12 AM, Ilja Preuss wrote:

                                        >> One such definition of done was "Awaiting Verification: QA has
                                        >> installed it on their machines and it runs".
                                        >>
                                        >> The team had no control over when QA would install their software
                                        >> and determine whether it ran. The result was that more and more
                                        >> stories moved over toward the right hand side of their status board,
                                        >> piling up in "Awaiting Verification". For weeks on end, nothing was
                                        >> ever "done". This was quite demoralizing.
                                        >>
                                        >> I'm inclined to think that the definition of "done" that a team uses
                                        >> needs to be something that is under their control. While I would
                                        >> agree that we really don't know if it's done until users are using
                                        >> it, and while I'd love to see the team working hard to get stuff
                                        >> deployed, so as to get the R in ROI, I'm concerned that it's asking
                                        >> more than many teams can do, and that setting them an impossible
                                        >> goal will be destructive.

                                        > I have very mixed feelings about this.

                                        > On the one hand, I can see how this could be quite demoralizing, and that
                                        > would drive me to redefine the meaning of done.

                                        > On the other hand, not getting early feedback from QA is certainly a bad
                                        > thing, and I'm not fully convinced by the "we have no control over it"
                                        > argument. To some extent we might not have control over it because we
                                        > accepted having no control, because of an "us versus them" attitude. In
                                        > other words, perhaps it would be a Good Thing (TM) if the team made this
                                        > issue very visible and every team member did what they could to change it.
                                        > To put it another way, I would fear that redefining "done" would only
                                        > hide a problem that we should in fact tackle instead.

                                        > Perhaps there simply should be two columns on the right hand side of the
                                        > status board: "done" and "really done". That might at least trigger some
                                        > interesting discussions...

                                        Well, I have mixed feelings as well. My preferred fix in the
                                        situation I described is for the team to abolish the need for QA, by
                                        learning, over time, to test well enough to [nearly] ensure that
                                        nothing will ever come back broken from QA. That's wasteful, but it
                                        improves the flow of the inner loops of the team, and should make it
                                        clear, again over time, that the QA situation is broken.

                                        It would be ideal, of course, not to have the stages. In order to
                                        help them go away, it can be helpful to track the impact of the
                                        stages, displaying the impact on charts or in reports.

                                        In practice, teams seem not to be willing or able to push things
                                        very far outside their own walls. I wish they could and would do
                                        more, and at the same time I try to advise them on how to work as
                                        well as they can inside the boundaries they choose to accept.

                                        Ron Jeffries
                                        www.XProgramming.com
                                        You don't want to sell me waterfall.
                                        You want to go home and rethink your life.
                                      • D. André Dhondt
                                        Message 19 of 27 , Jul 7, 2006
                                          >> While I would
                                          >> agree that we really don't know if it's done until users are using
                                          >> it, and while I'd love to see the team working hard to get stuff
                                          >> deployed, so as to get the R in ROI, I'm concerned that it's asking
                                          >> more than many teams can do, and that setting them an impossible
                                          >> goal will be destructive.

                                          I've learned from this community that with a bit of courage, and a bit of
                                          creativity, we can achieve goals that seemed impossible. This is not
                                          destructive.

                                          Not every team can do every practice. Admittedly, XP2E doesn't list a
                                          weekly or daily deploy as a practice, but it does talk about flow as a
                                          powerful principle.

                                          In our environment it is difficult, and largely out of our control, to get a
                                          downtime. It used to be impossible to get one at night, difficult during
                                          the day--now it's impossible except at 5am.
                                          So instead of fighting against that barrier, we went around it--and found
                                          ways to make releases without as much downtime. It brought many, many
                                          benefits associated with flow. These benefits easily pay for the energy it
                                          took to invent downtime-free deploys.


                                        • Kent Beck
                                          Message 20 of 27 , Jul 7, 2006
                                            Joe,

                                            My ultimate measure for "done" is when the customer's check clears. I
                                            suppose this is a candidate for number 5 in your list below--make enough
                                            money to stay in business. As I progress towards being able to measure my
                                            engineering work financially, I will accept less comprehensive definitions
                                            of "done" that still give me concrete feedback about the effectiveness of my
                                            work.

                                            One of the stories in this thread was about a team that defined "done" as
                                            "released to QA", even though they could see that QA was backed up months.
                                            That seems to me a dangerously short-sighted definition. The team as a whole
                                            (programmers and QA together) would progress faster if some of the
                                            programmers shifted to QA until the backlog was cleared and enough
                                            automation was put in place to prevent the backlog from reforming. Proposing
                                            such a shift might be welcomed.

                                            As the first XP team said, "End-to-end is further than you think." Once you
                                            have this check from the customer, I suppose you start measuring retention
                                            rates. So, a feature is done when it keeps customers satisfied.

                                            Take care,

                                            Kent Beck
                                            Three Rivers Institute



                                            _____

                                            From: extremeprogramming@yahoogroups.com
                                            [mailto:extremeprogramming@yahoogroups.com] On Behalf Of Joseph Little
                                            Sent: Wednesday, July 05, 2006 7:20 PM
                                            To: extremeprogramming@yahoogroups.com
                                            Subject: Re: [XP] done?



                                            This discussion points up the importance of making the meaning of done
                                            quite visible. Wouldn't it be nice if it were always a simple answer?
                                            And, indeed sometimes it can be (see Ron's post with the 4 quick
                                            items listed).

                                            Clearly, most teams can improve their practices to move their
                                            product/stories closer to the extreme end of done. But for individual
                                            stories that are in iterations that end way before a real release,
                                            done might be compromised.

                                            In our experience, any team that says it has improved its practices
                                            (and company practices) as much as possible to get as close to being
                                            fully done as is "reasonable" for their environment is, well, not
                                            fully done improving their engineering practices.

                                            The discussion also points up the /purposes/ of being more done.
                                            1. Morale of the team
                                            2. Gaining the R in ROI sooner (or at least having a better handle on
                                            when it could be gained)
                                            3. Making clear to the customer (project sponsor) how done a story
                                            (the project) really is
                                            4. Getting clear feedback about whether you hit the mark (once done)
                                            5. ...and others too, I'm sure.

                                            Reminder on ROI: Often the ROI waits to be achieved until well
                                            /after/ the software has been deployed. For example, often
                                            business/operations things have to be put in place after the software
                                            is deployed, but before the full ROI can be achieved. Yet another
                                            reason to get the money flow started sooner.

                                            Feedback: Of course, you can solicit feedback all along the way, and
                                            we should. But when you tell a real user "it's done, this is the final
                                            testing before we put this story to bed" they often get more serious
                                            in their feedback.

                                            Regards, Joe

                                            --- In extremeprogramming@yahoogroups.com, Ron Jeffries
                                            <ronjeffries@...> wrote:
                                            >
                                            > On Wednesday, July 5, 2006, at 1:11:14 PM, yahoogroups@... wrote:
                                            >
                                            > > Fantastic!
                                            >
                                            > > This points up one of the major issues that Kent points
                                            > > out in the second version of the White Book: you can
                                            > > _say_ you've got a production quality deployable, but
                                            > > until you've actually deployed it you don't really _know_
                                            > > if it's going to work in the production environment(s).
                                            >
                                            > > You also haven't started to get the R part of your ROI
                                            > > until you've actually deployed into production.
                                            >
                                            > > It not only isn't easy getting things smooth enough that
                                            > > your users welcome daily deployment, but it also
                                            > > raises real questions about basic project management
                                            > > concepts such as releases and iterations.
                                            >
                                            > I agree that having done mean "in the hands of the real users" would
                                            > be a fantastic thing.
                                            >
                                            > However, ...
                                            >
                                            > I've recently worked with teams that used simpler rules for "done"
                                            > that will illuminate what I'm concerned about.
                                            >
                                            > One such definition of done was "Awaiting Verification: QA has
                                            > installed it on their machines and it runs".
                                            >
                                            > The team had no control over when QA would install their software
                                            > and determine whether it ran. The result was that more and more
                                            > stories moved over toward the right hand side of their status board,
                                            > piling up in "Awaiting Verification". For weeks on end, nothing was
                                            > ever "done". This was quite demoralizing.
                                            >
                                            > I'm inclined to think that the definition of "done" that a team uses
                                            > needs to be something that is under their control. While I would
                                            > agree that we really don't know if it's done until users are using
                                            > it, and while I'd love to see the team working hard to get stuff
                                            > deployed, so as to get the R in ROI, I'm concerned that it's asking
                                            > more than many teams can do, and that setting them an impossible
                                            > goal will be destructive.
                                            >
                                            > Must remember to bring this up in the "Fear" workshop. Meanwhile,
                                            > help me out.
                                            >
                                            > Ron Jeffries
                                            > www.XProgramming.com
                                            > Hold on to your dream. --ELO
                                            >







                                          • Kent Beck
                                            Message 21 of 27 , Jul 7, 2006
                                              André,

                                              It sounds like you've made a lot of progress with your process. I would like
                                              to hear more details/stories about how you achieve daily deployment, both
                                              technical details and the social/business prerequisites and consequences.

                                              Cheers,

                                              Kent Beck
                                              Three Rivers Institute

                                              P.S. "Daily Deployment" is on page 68 of XP2E.


                                              _____

                                              From: extremeprogramming@yahoogroups.com
                                              [mailto:extremeprogramming@yahoogroups.com] On Behalf Of D. André Dhondt
                                              Sent: Friday, July 07, 2006 7:15 AM
                                              To: extremeprogramming@yahoogroups.com
                                              Subject: Re: [XP] done?




                                              Not every team can do every practice. Admittedly, XP2E doesn't list a
                                              weekly or daily deploy as a practice, but it does talk about flow as a
                                              powerful principle.


                                            • Jim Standley
                                              Message 22 of 27 , Jul 8, 2006
                                                Sounds like you were wandering around my team and I didn't see you. Wave
                                                next time.

                                                Ron Jeffries wrote:

                                                >
                                                > I agree that having done mean "in the hands of the real users" would
                                                > be a fantastic thing.
                                                >
                                                > However, ...
                                                >
                                                > I've recently worked with teams that used simpler rules for "done"
                                                > that will illuminate what I'm concerned about.
                                                >
                                                > One such definition of done was "Awaiting Verification: QA has
                                                > installed it on their machines and it runs".
                                                >
                                                > The team had no control over when QA would install their software
                                                > and determine whether it ran. The result was that more and more
                                                > stories moved over toward the right hand side of their status board,
                                                > piling up in "Awaiting Verification". For weeks on end, nothing was
                                                > ever "done". This was quite demoralizing.
                                                >
                                                > I'm inclined to think that the definition of "done" that a team uses
                                                > needs to be something that is under their control. While I would
                                                > agree that we really don't know if it's done until users are using
                                                > it, and while I'd love to see the team working hard to get stuff
                                                > deployed, so as to get the R in ROI, I'm concerned that it's asking
                                                > more than many teams can do, and that setting them an impossible
                                                > goal will be destructive.
                                                >
                                                > Must remember to bring this up in the "Fear" workshop. Meanwhile,
                                                > help me out.
                                                >
                                                > Ron Jeffries
                                                > www.XProgramming.com
                                                > Hold on to your dream. --ELO
                                              • Jim Standley
                                                Message 23 of 27 , Jul 8, 2006
                                                  We're in this position with an independent QA team out of our control.
                                                  I'm pushing for "we know it's done" and "they know it's done" where we
                                                  make things so good that QA becomes irrelevant. Feedback from that late
                                                  part of the cycle becomes so rare we hardly think about it. I think our
                                                  developers still count on QA to catch stuff for them, so it ain't
                                                  happening yet.

                                                  Ilja Preuss wrote:
                                                  >
                                                  > I have very mixed feelings about this.
                                                  >
                                                  > On the one hand, I can see how this could be quite demoralizing, and that
                                                  > would drive me to redefine the meaning of done.
                                                  >
                                                  > On the other hand, not getting early feedback from QA is certainly a bad
                                                  > thing, and I'm not fully convinced by the "we have no control over it"
                                                  > argument. To some amount we might not have control over it because we
                                                  > accepted to have no control, because of an "us versus them" attitude. In
                                                  > other words, perhaps it would be a Good Thing (TM) if the team made this
                                                  > issue very visible and every team member did what he could to change it. To
                                                  > put it another way, I would fear that redefining "done" would only hide a
                                                  > problem that we should in fact tackle instead.
                                                  >
                                                  > Perhaps there simply should be two columns on the right hand side of the
                                                  > status board: "done" and "really done". That might at least trigger some
                                                  > interesting discussions...
                                                  >
                                                  > Just my 0.02 Euro cents,
                                                  >
                                                  > Ilja
                                                  >
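Ilja's two-column idea is easy to sketch as code. The following is a minimal illustrative model, not from any real tool (the `StoryBoard` class and column names are made up for this example); the point is that the "done"-but-not-"really done" pile becomes a queryable quantity the team can watch:

```python
# Minimal sketch of the "done" vs "really done" board idea.
# StoryBoard and its column names are illustrative, not a real tool's API.

class StoryBoard:
    COLUMNS = ["in progress", "done", "really done"]

    def __init__(self):
        self.cards = {}  # story name -> current column

    def add(self, story):
        self.cards[story] = "in progress"

    def move(self, story, column):
        if column not in self.COLUMNS:
            raise ValueError(f"unknown column: {column}")
        self.cards[story] = column

    def pileup(self):
        # Stories the team calls done but that haven't reached real use --
        # the gap meant to trigger the "interesting discussions".
        return [s for s, c in self.cards.items() if c == "done"]

board = StoryBoard()
board.add("login story")
board.add("report story")
board.move("login story", "done")
board.move("report story", "done")
board.move("report story", "really done")
print(board.pileup())  # only the login story is stuck awaiting real use
```

A growing `pileup()` is exactly the demoralizing "Awaiting Verification" wall Ron describes, made visible as data.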
                                                • Ron Jeffries
                                                  Message 24 of 27 , Jul 8, 2006
                                                    On Saturday, July 8, 2006, at 1:30:30 PM, wrote:

                                                    > Sounds like you were wandering around my team and I didn't see you. Wave
                                                    > next time.

                                                    I move silently ... ;->

                                                    Ron Jeffries
                                                    www.XProgramming.com
                                                    In times of stress, I like to turn to the wisdom of my Portuguese waitress,
                                                    who said: "Olá, meu nome é Marisol e eu serei sua garçonete."
                                                    -- after Mark Vaughn, Autoweek.
                                                  • D. André Dhondt
                                                    Message 25 of 27 , Jul 10, 2006
                                                      The prerequisite to our daily deployment has a lot to do with scope. A
                                                      couple months ago the power of scope really clicked for one of our team
                                                      members, and his focus on delivering just what the customer asked for,
                                                      nothing more, made us go faster. We started moving cards at 25-50% more per
                                                      iteration! We nailed several iterations this way, and then we had a failed
                                                      iteration--it failed miserably. We weren't sure why at first, but then we
                                                      realized it had something to do with the fact that this was the first 'big'
                                                      iteration that focused only on one application--every one of the previous
                                                      big iterations included changes to several applications. Even up to the
                                                      last few hours of this failed iteration we had thought we'd be able to
                                                      deliver on the cards, but since they all hinged upon the release of one
                                                      application, all the cards for the iteration had been moved over to the
                                                      'waiting to deploy' section of our board and the customer hadn't yet gotten
                                                      any business benefit from the work we'd done.

                                                      In retrospect, the problem seems so obvious. This huge pile of 'waiting to
                                                      deploy' cards amounted to one BIG SCARY release. On the last day of the
                                                      iteration, there was one minor issue that needed to be fixed, but we felt
                                                      like we could fix it quickly. Instead, that issue covered up another, and
                                                      yet another, and so the last day of our iteration became an unhealthy 'code
                                                      and fix' cycle. This particular application was a mix of legacy
                                                      'untestable' code and newly refactored, test-first coding--and the system
                                                      was too big to write UTs for all the legacy code in the current iteration.
                                                      We thought we could just keep an attitude of 'leave it cleaner than it was
                                                      when you got there' with the legacy code, and that would be sufficient.
                                                      It's embarrassing to admit we fell into a code and fix cycle, but we were
                                                      knee-deep in it, stayed late and came in early to fix the issue, and never
                                                      succeeded. When the customer got in to see how it was going, we admitted
                                                      failure with no end in sight. We broke out the known issues into new cards,
                                                      started the new iteration, and continued to address each issue. It took
                                                      another week to realize our biggest mistake was a lack of flow, and yet
                                                      another week to figure out how to deploy parts of this application without
                                                      deploying the whole thing. All in all, this mess caused us 3 weeks of
                                                      failed iterations (we did move other cards, but this one app plagued us the
                                                      entire time, and as a result represented broken promises to deliver
                                                      functionality and therefore failed iterations). Even though we knew what to
                                                      do (theoretically) by the third week, it took a while for us to figure out
                                                      how to release a small part of it at a time. As soon as we could take baby
                                                      steps in deploying, however, we got immediate feedback about whether they
                                                      were in the right direction or not--and with the amount of legacy code out
                                                      there we really needed the feedback of daily releases to find out if our
                                                      changes worked or not. With baby steps, it was also easy to see which one
                                                      represented a mistake...

                                                      An earlier reply to this thread listed some of the strategies we use for
                                                      daily deployments...

                                                      This concept of flow has been really powerful in making it so that at every
                                                      step of the way during an iteration, the customer is getting business
                                                      benefit out of the code we've written. I had heard that some XP web teams
                                                      do daily deploys (and know that some web servers make this possible without
                                                      downtime) and I thought maybe we could do this as well. After seeing the
                                                      benefit of baby step deploys, we decided to try daily deployments (even
                                                      sometimes multiple deployments) so that we'd never end up with a BIG SCARY
                                                      deployment at the end of an iteration. Sometimes we can't release a day's
                                                      work the same day, but there's usually something waiting to deploy, and
                                                      keeping that 'waiting to deploy' pile as small as possible is very helpful.

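One common way to get the kind of baby-step deploys André describes — shipping part of an application without releasing the whole thing — is a feature toggle. This is a generic sketch, not the strategy the team actually used; the `FeatureFlags` helper and the flag name are hypothetical:

```python
# Minimal feature-toggle sketch: new code deploys "dark" and is
# switched on per feature when ready. FeatureFlags and the flag
# names here are hypothetical examples, not from the thread.

class FeatureFlags:
    def __init__(self, enabled=()):
        self._enabled = set(enabled)

    def is_on(self, name):
        return name in self._enabled

    def enable(self, name):
        self._enabled.add(name)

def render_report(flags):
    # Legacy behavior remains the default, so the daily deploy is
    # safe even while the new path is still being finished.
    if flags.is_on("new-report-engine"):
        return "report (new engine)"
    return "report (legacy)"

flags = FeatureFlags()
print(render_report(flags))        # report (legacy)
flags.enable("new-report-engine")
print(render_report(flags))        # report (new engine)
```

Deploying behind a flag keeps the 'waiting to deploy' pile small: the code is in production daily, and only the switch-flip waits.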
