
Re: Tracking number of passed story acceptance criteria during the sprint

  • lindgrenf
    Message 1 of 74 , Mar 1, 2009
      --- In scrumdevelopment@yahoogroups.com, George Dinwiddie <lists@...>
      wrote:
      >
      > lindgrenf wrote:
      > > My experience is that it's quite straightforward to split stories
      > > vertically to a certain point, but then the tendency is that the
      > > team suggests horizontal splitting if further breakdown is needed.
      > > At that point I prefer a slightly bigger story than a bunch of
      > > horizontal slices.
      >
      > Why does the team jump to horizontal splitting?

      Because it's easier, I guess. I think it has to do with our vertical
      slices being pretty deep at times. We're developing a distributed
      system with a central server where configuration is done, distributed
      nodes in the network, and several different clients for handhelds. So
      far we've tried to require initial slicing to cover at least one of
      the clients, the distributed network node, and of course the
      configuration. Without these components, there would be no business
      value to the story. Given our separate software components, it is
      just too natural a splitting boundary to resist once the vertical
      slice is thin enough.

      Our default definition of done also includes documenting the added
      feature for both administrators and in general product documentation.
      I know that we could split the documentation parts into separate
      stories by adding a story for "As a novice administrator I want to be
      able to learn how to ...", and a story for "As a solution architect I
      want to read the documentation for the feature, so that I can better
      configure my deployment." However, this IMO would mean that the real
      story is not really done-done.

      >
      > > In practice we have never accepted a single story that is bigger
      > > than half of the average team velocity, and when taking on one of
      > > these larger stories we always try to swarm around the big story
      > > in the beginning of the sprint to reduce the risk of having a
      > > failed sprint with a partially done story. In such a case, I
      > > think that showing the partial, but real, progress could help us
      > > to discover problems earlier.
      >
      > Oof! That's a huge story, to me.

      Well, we're a small team with a majority of junior developers and
      we're running two-week sprints. My gut feeling is that the stories
      are not that big; rather, the velocity is lower than for a larger
      team with a longer sprint. My hope is that we will improve while not
      letting the stories grow bigger, and then the ratio of maximum story
      size to velocity should improve as well.

      >
      > > When it comes to improving the acceptance criteria for stories
      > > accepted by the team, I prefer a more pragmatic approach than
      > > just refusing the story. The truth is that our current stories
      > > are not all that bad, but every once in a while there's a
      > > high-priority story with fluffy or incomplete acceptance criteria
      > > coming up in sprint planning. We typically discuss it on the spot
      > > with the PO, we get a fairly good understanding, and someone is
      > > appointed as responsible for working out the details with the PO
      > > in the beginning of the sprint. An acceptance criteria burndown
      > > graph would show this situation and could be a reminder to bring
      > > up the issue in the retrospective.
      > >
      > > I hope I have provided a better background for my suggestion.
      > >
      > > Now, I would be extremely grateful for comments to the following:
      > >
      > > * We need to do manual regression testing
      > >
      > > Why #1: Why have you not automated the acceptance tests during
      > > the sprint? Maybe it is because the acceptance criteria were not
      > > defined in the proper way.
      > >
      > > Why #2: Why were the acceptance criteria not defined in the
      > > proper way? Maybe it was because of lack of time/priority from
      > > the PO.
      > >
      > > Why #3: Why did the PO not make time for properly defining the
      > > acceptance criteria? Maybe it was because the correlation between
      > > proper acceptance criteria and sprint outcome was not clear to
      > > him.
      > >
      > > Why #4: Why was the correlation not clear to him? Maybe it was
      > > because it was not brought up in a retrospective.
      > >
      > > Why #5: Why was the issue never raised in a retrospective? Maybe
      > > because it was not really visible to the team either.
      > >
      > > Any suggestions on how to address this situation?
      >
      > Do you have any good testers on this team? When the P.O. is
      > describing the story, do they not ask what the acceptance criteria
      > are? Is the crux of the story so hard to discern? If so, how does
      > the P.O. expect the programmers to get it right?
      >

      Good testers or not, the team will ask the P.O. about the acceptance
      criteria. It's just that the level of detail that we get during the
      sprint may not be complete. The way we deal with it is by
      communicating with the P.O. daily throughout the sprint, refining
      the acceptance criteria as we go. Maybe we're just good enough at
      handling it this way that the issue remains hidden. Which brings me
      back to the idea of exposing the issue by visualizing it throughout
      the sprint.
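
      A minimal sketch of what such a visualization could track: count how
      many acceptance criteria remain unverified on each sprint day and
      plot that as a burndown. The story names, criteria, and dates below
      are illustrative assumptions, not from this thread.

      ```python
      from datetime import date

      # Hypothetical sprint data (all names and dates are made up):
      # criteria[story] = list of (criterion, date it passed, or None if
      # it has not passed yet).
      criteria = {
          "configure node": [("saves config", date(2009, 3, 4)),
                             ("rejects bad input", date(2009, 3, 6)),
                             ("syncs to clients", None)],
          "handheld view": [("renders list", date(2009, 3, 5)),
                            ("offline cache", None)],
      }

      def remaining_on(day):
          """Count acceptance criteria not yet passed as of the given day."""
          return sum(1 for tests in criteria.values()
                     for _, passed in tests
                     if passed is None or passed > day)

      # One point per sprint day gives the burndown series to chart.
      sprint_days = [date(2009, 3, d) for d in range(2, 14)]
      burndown = [(day, remaining_on(day)) for day in sprint_days]
      ```

      Plotting `burndown` on the team wall (or feeding it to any charting
      tool) would make a stalled criteria count visible mid-sprint, which
      is the signal Fredrik wants the retrospective to pick up.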

      /Fredrik


      > - George
      >
      > --
      > ----------------------------------------------------------------------
      > * George Dinwiddie * http://blog.gdinwiddie.com
      > Software Development http://www.idiacomputing.com
      > Consultant and Coach http://www.agilemaryland.org
      > ----------------------------------------------------------------------
      >
    • Ron Jeffries
      Message 74 of 74 , Mar 7, 2009
        Hello, Robert. On Saturday, March 7, 2009, at 1:51:27 PM, you
        wrote:

        > What seemed odd to me about your game is that it seemed to involve
        > no decision making during play. I had been expecting to see some kind of
        > evaluation and some kind of decision-making about making changes.

        Maybe next game, with that point. :)

        Ron Jeffries
        www.XProgramming.com
        www.xprogramming.com/blog
        Attend our CSM Plus Course!
        http://hendricksonxp.com/index.php?option=com_eventlist&Itemid=28
        The model that really matters is the one that people have in
        their minds. All other models and documentation exist only to
        get the right model into the right mind at the right time.
        -- Paul Oldfield