
Re: Tracking number of passed story acceptance criteria during the sprint

  • davenicolette
    Message 1 of 74, Mar 1, 2009
      Hi Fredrik,

      > Looking back at my post, I can see how you could get the impression
      > that we're coming from a past waterfall process and want to keep some
      > old habit of tracking at a detail level.

      Well, I wasn't picking on you particularly; it seemed as if some
      participants in the discussion were focusing on tactical work-arounds
      to immediate problems and losing sight of the larger problem. I might
      have read too much between the lines in some cases. Anyway...
      >
      > We also struggle with a code base which has a quite ok unit test
      > coverage but is lacking a lot when it comes to proper automated tests
      > at the story acceptance level. This hampers our agility due to costs
      > of manual regression testing, and reduced development speed due to
      > lack of confidence that changes will not break old functionality.

      Sounds like something you can address incrementally. You've already
      identified it as a problem, which is a big step forward in itself.

      >
      > My reasoning for showing progress in terms of passed story acceptance
      > tests was that a single acceptance criteria in a user story should
      > have business value, or it should not be asked for.

      IMHO a single acceptance criterion might be too fine a level of
      granularity to deliver business value. The story as a whole should do
      so, of course.

      >
      > My experience is that it's quite straightforward to split stories
      > vertically to a certain point, but then the tendency is that the team
      > suggests horizontal splitting if further breakdown is needed.

      Sure, that's consistent with most people's experience. It's consistent
      with my experience, too, before I started to get into all this agile
      stuff. The thing is, we can continue to improve our skills in
      decomposing the work vertically. When we hit our personal limit and
      feel as if we "have to" define technical tasks separately, it tells us
      that's the point where we have an opportunity for improvement.

      If the story is small enough, then the various technical activities
      necessary to complete the work can just be a short punch list on a
      piece of scratch paper or a verbal discussion between the pairing
      partners who are playing the story. So I think one of the keys to all
      this is to drive the story size down to a practical minimum. Some of
      the related activities will then be small enough that we don't need
      additional ceremony or formality to keep track of them.

      > In practice we have never accepted a single story that is bigger than
      > half of the average team velocity, and when taking on one of these
      > larger stories we always try to swarm around the big story in the
      > beginning of the sprint to reduce the risk of having a failed sprint
      > with a partially done story.

      To me it sounds as if you're halfway to smaller stories already. It's
      the same approach, basically, except that all the little pieces are
      vertically sliced and defined as individual stories. Progress will be
      visible throughout the iteration because you'll be able to knock out
      the individual stories to completion. So, there's your partial and
      real progress, still keeping the model of using 100% complete stories
      as the unit of measure.

      >
      > When it comes to improving the acceptance criteria for stories
      > accepted by the team I prefer a more pragmatic approach than
      > just refusing the story.

      I understand what you mean; I'd just like to interject that "just
      refusing the story" is pretty pragmatic. ;-) Obviously, on a practical
      level we wouldn't "just refuse" and walk away. We would refuse to
      accept a story that wasn't properly defined, and then collaborate with
      the customer/product-owner/whatever-the-role-is-called to get the
      story into proper shape.

      > The truth is that our current stories are
      > not all that bad, but every once in a while there's a high priority
      > story with fluffy or incomplete acceptance criteria coming up in
      > sprint planning. We typically discuss it on the spot with the PO, we
      > get a fairly good understanding, and someone is appointed as
      > responsible to work out the details with the PO in the beginning of
      > the sprint.

      That sounds pretty normal to me. A possible opportunity for
      improvement is not to wait until the beginning of the sprint, but go
      ahead and hammer out the acceptance criteria right then. If the PO
      isn't able to do so, it might indicate he/she doesn't quite know what
      he/she is asking for. It seems likely that they would take up a lot of
      time at the beginning of the sprint trying to figure it out; maybe
      they need to do some research or some thinking before they pull that
      particular story into play; next sprint, maybe. It's all to the good;
      it's not a question of refusing to work.


      > An acceptance criteria burndown graph would show this
      > situation and could be a reminder to bring up the issue in the
      > retrospective.

      Frankly, this still looks like additional ceremony that doesn't add
      value. Dealing with the issues on the spot would yield better results
      faster, and without any additional project tracking activities.
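      As a rough illustration (not from the thread, and with made-up
      numbers), here is a minimal sketch contrasting the two progress
      metrics being discussed: counting only 100%-complete stories versus
      burning down individual acceptance criteria.

      ```python
      # Hypothetical sketch: two ways of reporting sprint progress.
      # Each story is (story points, criteria passed, criteria total).
      stories = [
          (3, 4, 4),   # done: every acceptance criterion passes
          (5, 2, 6),   # in progress
          (2, 0, 3),   # not started
      ]

      total_points = sum(p for p, _, _ in stories)

      # Story-level view: only fully accepted stories count as progress.
      done_points = sum(p for p, passed, total in stories if passed == total)
      story_progress = done_points / total_points

      # Criteria-level view: partial credit per passed acceptance criterion.
      criteria_progress = (sum(passed for _, passed, _ in stories)
                           / sum(t for _, _, t in stories))

      print(f"story-level progress:    {story_progress:.0%}")    # 30%
      print(f"criteria-level progress: {criteria_progress:.0%}")  # 46%
      ```

      The criteria-level number looks "further along" because it gives
      credit for partially done stories, which is exactly the kind of
      partial progress Dave argues against tracking.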

      >
      > * We need to do manual regression testing
      > Why #1: Why have you not automated the acceptance tests during the
      > sprint? Maybe it is because the acceptance criteria were not defined
      > in the proper way.
      >
      > Why #2: Why were the acceptance criteria not defined in the proper
      > way? Maybe it was because of lack of time/priority from the PO.
      >
      > Why #3: Why did the PO not make time for properly defining the
      > acceptance criteria? Maybe it was because the correlation of proper
      > acceptance criteria and sprint outcome was not clear to him.
      >
      > Why #4: Why was the correlation not clear to him? Maybe it was
      > because it was not brought up in a retrospective.
      >
      > Why #5: Why was the issue never raised in a retrospective? Maybe
      > because it was not really visible to the team either.
      >
      > Any suggestions on how to address this situation?

      Seems like the series of whys you wrote is already pointing to actions
      you could take. At the next retrospective, make the team and the PO
      aware of the relationship between acceptance criteria and sprint
      success. Then use the power of self-organization and the wisdom of
      crowds: Let them come up with an idea for a solution, and let them try
      it out for a couple of sprints. You can always revisit the question in
      a future retrospective, or at any time you feel it's necessary.

      Cheers,
      Dave
    • Ron Jeffries
      Message 74 of 74, Mar 7, 2009
        Hello, Robert. On Saturday, March 7, 2009, at 1:51:27 PM, you
        wrote:

        > What seemed odd to me about your game is that it seemed to involve
        > no decision making during play. I had been expecting to see some kind of
        > evaluation and some kind of decision-making about making changes.

        Maybe next game, with that point. :)

        Ron Jeffries
        www.XProgramming.com
        www.xprogramming.com/blog
        Attend our CSM Plus Course!
        http://hendricksonxp.com/index.php?option=com_eventlist&Itemid=28
        The model that really matters is the one that people have in
        their minds. All other models and documentation exist only to
        get the right model into the right mind at the right time.
        -- Paul Oldfield