Re: Shouldn't "done" include everything?

  • xtremenilanjan
    Message 1 of 49, Jun 1, 2010
      This is aside from my original question:

      Isn't it setting the bar high if we expect programmers to keep programming till the end of the iteration? :-) Do let me know if I should expect this.

      I am thinking more from the point of view of a software product team.

      --- In extremeprogramming@yahoogroups.com, Ron Jeffries <ronjeffries@...> wrote:
      >
      > Hello, xtremenilanjan. On Monday, May 31, 2010, at 9:38:43 AM,
      > you wrote:
      >
      > > Some agile teams I have spoken to and a few accounts I have read,
      > > do a certain amount of testing after the iteration is complete.
      > > The idea is that acceptance tests are done, but there are still
      > > minor defects which need to be closed. In some cases people do
      > > exploratory testing, performance testing etc. in the next iteration.
      >
      > > Shouldn't "done" include everything? The purpose from what I
      > > understand is to keep the concept of "complete" simple - done or
      > > not done and get a customer buy-in.
      >
      > > I can understand having performance tests outside the iteration.
      > > However, I don't see why exploratory testing would not fall into a single iteration.
      >
      > Clearly it is difficult to do all the exploratory testing within the
      > iteration, unless programmers stop programming before the end. (They
      > could just "fix bugs" at the end but in that case I would downgrade
      > them for having enough bugs to fix.)
      >
      > However, if exploratory testing finds defects, I would think that
      > one or both of these things is true:
      >
      > 1. Acceptance criteria are not clear;
      > 2. Automated testing is not strong enough.
      >
      > So if exploratory testing is finding defects, the team has some
      > learning to do. If it isn't finding defects, it can still be finding
      > "interesting things" which can be turned into new stories.
      >
      > If exploratory testing is only turning up "interesting things", then
      > it is no longer a problem when it is done. Next iteration can be
      > just fine.
      >
      > Ron Jeffries
      > www.XProgramming.com
      > www.xprogramming.com/blog
      > I could be wrong, but I'm not. --Eagles, Victim of Love
      >
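Ron's two failure modes above (unclear acceptance criteria, automated testing that is not strong enough) both have the same remedy: turn each acceptance criterion into an executable check. A minimal sketch in Python — the story, function names, and criteria are invented for illustration, not taken from the thread:

```python
# Hypothetical story: "A registered user can reset their password."
# Each acceptance criterion becomes one automated check, so exploratory
# testing is free to hunt for "interesting things" instead of
# re-verifying the basics after the iteration.

def reset_password(user: dict, new_password: str) -> dict:
    """Toy stand-in for the application code under test."""
    if len(new_password) < 8:
        raise ValueError("password too short")
    # Returning a new dict keeps the original user record unchanged.
    return dict(user, password=new_password, must_relogin=True)

def test_password_is_updated():
    user = {"name": "alice", "password": "old-secret"}
    assert reset_password(user, "new-secret-123")["password"] == "new-secret-123"

def test_short_password_is_rejected():
    user = {"name": "alice", "password": "old-secret"}
    try:
        reset_password(user, "short")
        assert False, "expected the short password to be rejected"
    except ValueError:
        pass

def test_user_must_log_in_again():
    user = {"name": "alice", "password": "old-secret"}
    assert reset_password(user, "new-secret-123")["must_relogin"]
```

If checks like these run on every build, a defect found by exploratory testing points directly at a criterion that was never written down or never automated — which is exactly the learning Ron describes.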
    • Adam Sroka
      Message 49 of 49, Jun 9, 2010
        Hi Jeff:

        Are you responding to what Tim wrote below? Or to one of the earlier
        messages that I wrote?

        Anyway, thanks ;-)

        On Wed, Jun 9, 2010 at 3:52 PM, Jeff Anderson
        <Thomasjeffreyandersontwin@...> wrote:
        >
        >
        >
        > Adam
        >
        > Your description of your coding life cycle was a breath of fresh air,
        > I sometimes get so surrounded by the old schoolers that I forget how
        > profound and powerful the XP approach is.
        >
        > Bravo.
        >
        > On 6/9/10, Tim Ottinger <linux_tim@...> wrote:
        > > FWIW
        > >
        > > My current company (an awesome place) is two years into agile transition.
        > > They are still releasing by content rather than time, mostly because it
        > > hasn't sunk in to upper levels the way it has been embraced in lower levels.
        > >
        > > There is a large legacy code base still, though it is constantly being
        > > whittled down. It has less coverage than the newer code.
        > >
        > > The ideal we strive for is that someday release will be a nonevent. There
        > > are many versions of our software in git that have had a full batch of
        > > unit and automated acceptance tests. Eventually, we will have sufficient
        > > trust in them that we can release any of them at any time. That's when
        > > we have arrived.
        > >
        > > While the code base and product management haven't fully transitioned, we
        > > have a 'code freeze' (really a branchpoint, after which we continue on) and
        > > there is manual testing and exploratory testing before a release. We are
        > > not really blocked by it, and we are programming on the day of release (on
        > > the next release).
        > >
        > > But someday a release will be a total non-event. Someone will pick a release
        > > package from the CI system and run the automated deploy on it in our big
        > > SAAS farm and nobody will stay up late or worry about it. Until then, we
        > > have the ever-thinning vestiges of an earlier circumstance.
        > >
        > > Tim Ottinger
        > > http://agileinaflash.blogspot.com/
        > > http://agileotter.blogspot.com/
        > >
        > >
        > >
        > >
        >
        > --
        > Sent from my mobile device
        >
        > Jeff Anderson
        >
        > http://agileconsulting.blogspot.com/
        >
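Tim's goal — that "someone will pick a release package from the CI system and run the automated deploy on it" — amounts to a simple release gate: any build whose full batch of unit and acceptance tests is green is releasable, and the newest such build is the candidate. A sketch of that policy in Python; the `Build` record and function names are illustrative, not a real CI system's API:

```python
# Release-gate sketch: a build may ship only when its whole automated
# test batch passed. Picking a release is then a query, not an event.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Build:
    sha: str                       # commit the CI system built
    unit_tests_passed: bool
    acceptance_tests_passed: bool

def releasable(build: Build) -> bool:
    """A build is releasable only when the entire batch is green."""
    return build.unit_tests_passed and build.acceptance_tests_passed

def pick_release(builds: list) -> Optional[Build]:
    """Return the newest green build, or None if nothing can ship."""
    green = [b for b in builds if releasable(b)]
    return green[-1] if green else None
```

Under this policy the code freeze and late-night release vigil disappear: any of the many fully-tested versions sitting in git is a valid release at any moment, which is the "nonevent" Tim is aiming for.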