
Re: [XP] Shouldn't "done" include everything?

  • Adam Sroka (Message 1 of 49, May 31, 2010)
      Bill Wake has an old article on his site that describes the nature of
      good stories and tasks. It says that tasks should be "SMART: Specific,
      Measurable, Achievable, Relevant, and Time-Boxed."
      (http://xp123.com/xplor/xp0308/; also reproduced in Mike Cohn's
      /User Stories Applied/.)

      It seems to me that exploratory testing has none of those
      qualities. Rather, it is more like many other activities that we do
      all the time: building, integrating, running the regression tests,
      refactoring, talking to the customer, etc. These are vital to
      quality and our continued progress, but not necessarily specific to
      the story at hand.

      Exploratory testing is not meant to just catch bugs in the feature you
      were working on. It is meant to find bugs in the cracks between the
      stories. Those cracks don't belong to any story. So, exploring them
      can't really belong to the definition of done for that story.

      We also can't say, "We'll do exploratory testing until we're done,"
      because exploratory testing is never done. It's like refactoring. We
      don't "refactor until we're done." We refactor until we are
      sufficiently confident that we have a clean enough and simple enough
      design /for now./ We could always do more refactoring. We could spend
      weeks and weeks on it. The same thing is true of exploratory testing.
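
      To illustrate the refactoring analogy, here is a made-up fragment
      (Order, ship(), and the threshold are all invented, not from any
      real code base). One extract-method pass gives the condition a
      name:

          // Before: the rule was three raw conditions inline:
          //   if (order.total() > 10000 && !order.isCancelled()
          //           && order.isPaid()) { ship(order); }
          void shipEligibleOrders(List<Order> orders) {
              for (Order order : orders) {
                  if (isShippable(order)) {
                      ship(order);
                  }
              }
          }

          // One pass later the intent is readable. We could keep going
          // (move this onto Order, say), but it is clean enough for now.
          boolean isShippable(Order order) {
              return order.total() > 10000
                      && !order.isCancelled()
                      && order.isPaid();
          }

      We stop there not because nothing is left to improve, but because
      we are sufficiently confident for now. Exploratory testing ends on
      the same kind of judgment call.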

      It is reasonable to say that we will do some exploratory testing
      every time we implement a new feature. It is not reasonable to
      constrain the scope of that testing so tightly that we know in
      advance when we will be done, because then it is no longer
      exploratory testing.

      ...

      The other thing about exploratory testing is that not only does it
      not help us get to done, it actually helps us get undone. As Ron
      says below, the outcome of exploratory testing is either to find
      things we didn't think to test (I don't agree, by the way, that
      mature teams stop finding these; some amount of regression is
      likely for any app of moderate complexity) or to find opportunities
      for new stories by exploring the way the app works. Both of those
      things lead us to discover that we aren't done even though we were
      confident that we were.
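
      To make that concrete: when an exploratory session turns up one of
      those cracks, the usual XP move is to pin the bug down with an
      automated test before fixing it, so the same crack cannot reopen
      unnoticed. A minimal JUnit sketch; ShoppingCart and the bug are
      invented for illustration, not from any real product:

          import static org.junit.Assert.assertEquals;

          import org.junit.Test;

          public class CartDiscountRegressionTest {

              // Found by exploring, not by any story's acceptance tests:
              // emptying the cart and then applying a discount blew up.
              // No story specified that sequence; it lived in the cracks.
              @Test
              public void discountOnEmptiedCartLeavesTotalAtZero() {
                  ShoppingCart cart = new ShoppingCart();
                  cart.add("book", 2000); // price in cents
                  cart.remove("book");
                  cart.applyDiscount(0.10); // the step that used to throw
                  assertEquals(0, cart.getTotalCents());
              }

              // Tiny stand-in so the sketch is self-contained; the real
              // class is whatever your product has.
              static class ShoppingCart {
                  private final java.util.Map<String, Integer> items =
                          new java.util.HashMap<String, Integer>();
                  private double discount = 0.0;

                  void add(String name, int cents) { items.put(name, cents); }
                  void remove(String name) { items.remove(name); }
                  void applyDiscount(double rate) { discount = rate; }

                  int getTotalCents() {
                      int subtotal = 0;
                      for (int cents : items.values()) subtotal += cents;
                      return (int) Math.round(subtotal * (1.0 - discount));
                  }
              }
          }

      Once that test is red for the right reason, fixing it moves us back
      toward done, and the suite is stronger than it was.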

      That is useful, BTW. We want to learn that our product is not as good
      as we think it is before our customers do. It will happen, and the
      important thing is that we recognize it as an opportunity to learn:
      about our product, about our customer, about our domain, about our
      process, about our technical abilities and shortcomings, etc.

      On Mon, May 31, 2010 at 4:03 PM, Ron Jeffries <ronjeffries@...> wrote:
      >
      > Hello, xtremenilanjan. On Monday, May 31, 2010, at 9:38:43 AM,
      > you wrote:
      >
      > > Some agile teams I have spoken to and a few accounts I have read,
      > > do a certain amount of testing after the iteration is complete.
      > > The idea is that acceptance tests are done, but there are still
      > > minor defects which need to be closed. In some cases people do
      > > exploratory testing, performance testing etc. in the next iteration.
      >
      > > Shouldn't "done" include everything? The purpose from what I
      > > understand is to keep the concept of "complete" simple - done or
      > > not done and get a customer buy-in.
      >
      > > I can understand having performance tests outside the
      > > iteration. However, I don't see why exploratory testing would
      > > not fit within a single iteration.
      >
      > Clearly it is difficult to do all the exploratory testing within the
      > iteration, unless programmers stop programming before the end. (They
      > could just "fix bugs" at the end but in that case I would downgrade
      > them for having enough bugs to fix.)
      >
      > However, if exploratory testing finds defects, I would think that
      > one or both of these things is true:
      >
      > 1. Acceptance criteria are not clear;
      > 2. Automated testing is not strong enough.
      >
      > So if exploratory testing is finding defects, the team has some
      > learning to do. If it isn't finding defects, it can still be finding
      > "interesting things" which can be turned into new stories.
      >
      > If exploratory testing is only turning up "interesting things",
      > then it no longer matters when it gets done. Doing it in the next
      > iteration can be just fine.
      >
      > Ron Jeffries
      > www.XProgramming.com
      > www.xprogramming.com/blog
      > I could be wrong, but I'm not. --Eagles, Victim of Love
    • Adam Sroka (Message 49 of 49, Jun 9, 2010)
        Hi Jeff:

        Are you responding to what Tim wrote below? Or to one of the earlier
        messages that I wrote?

        Anyway, thanks ;-)

        On Wed, Jun 9, 2010 at 3:52 PM, Jeff Anderson
        <Thomasjeffreyandersontwin@...> wrote:
        >
        > Adam
        >
        > Your description of your coding life cycle was a breath of
        > fresh air. I sometimes get so surrounded by the old-schoolers
        > that I forget how profound and powerful the XP approach is.
        >
        > Bravo.
        >
        > On 6/9/10, Tim Ottinger <linux_tim@...> wrote:
        > > FWIW
        > >
        > > My current company (an awesome place) is two years into its
        > > agile transition. They are still releasing by content rather
        > > than by time, mostly because the change hasn't sunk in at the
        > > upper levels the way it has been embraced at the lower levels.
        > >
        > > There is still a large legacy code base, though it is
        > > constantly being whittled down. It has less test coverage
        > > than the newer code.
        > >
        > > The ideal we strive for is that someday release will be a
        > > non-event. There are many versions of our software in git
        > > that have passed a full batch of unit and automated
        > > acceptance tests. Eventually, we will have sufficient trust
        > > in those tests that we can release any of those versions at
        > > any time. That's when we will have arrived.
        > >
        > > While the code base and product management haven't fully
        > > transitioned, we have a 'code freeze' (really a branch point,
        > > after which we continue on) and there is manual and
        > > exploratory testing before a release. We are not really
        > > blocked by it, and we are programming on the day of a release
        > > (on the next release).
        > >
        > > But someday a release will be a total non-event. Someone will
        > > pick a release package from the CI system and run the
        > > automated deploy on it in our big SaaS farm, and nobody will
        > > stay up late or worry about it. Until then, we have the
        > > ever-thinning vestiges of an earlier circumstance.
        > >
        > > Tim Ottinger
        > > http://agileinaflash.blogspot.com/
        > > http://agileotter.blogspot.com/
        >
        > --
        > Sent from my mobile device
        >
        > Jeff Anderson
        >
        > http://agileconsulting.blogspot.com/
        >