Re: Tracking number of passed story acceptance criteria during the sprint
- Hi Fredrik,
> Looking back at my post, I can see how you could get the impression
> that we're coming from a past waterfall process and want to keep some
> old habit of tracking at a detail level.

Well, I wasn't picking on you particularly; it seemed as if some
participants in the discussion were focusing on tactical work-arounds
to immediate problems and losing sight of the larger problem. I might
have read too much between the lines in some cases. Anyway...
> We also struggle with a code base which has a quite ok unit test
> coverage but is lacking a lot when it comes to proper automated tests
> at the story acceptance level. This hampers our agility due to costs
> of manual regression testing, and reduced development speed due to
> lack of confidence that changes will not break old functionality.

Sounds like something you can address incrementally. You've already
identified it as a problem, which is a big step forward in itself.
> My reasoning for showing progress in terms of passed story acceptance
> tests was that a single acceptance criteria in a user story should
> have business value, or it should not be asked for.

IMHO a single acceptance criterion might be too fine a level of
granularity to deliver business value. The story as a whole should do
so, of course.
> My experience is that it's quite straight forward to split stories
> vertically to a certain point, but then the tendency is that the team
> suggest horizontal splitting if further break down is needed.

Sure, that's consistent with most people's experience. It's consistent
with my experience, too, before I started to get into all this agile
stuff. The thing is, we can continue to improve our skills in
decomposing the work vertically. When we hit our personal limit and
feel as if we "have to" define technical tasks separately, it tells us
that's the point where we have an opportunity for improvement.
If the story is small enough, then the various technical activities
necessary to complete the work can just be a short punch list on a
piece of scratch paper or a verbal discussion between the pairing
partners who are playing the story. So I think one of the keys to all
this is to drive the story size down to a practical minimum. Some of
the related activities will then be small enough that we don't need
additional ceremony or formality to keep track of them.
> In practice we have never accepted a single story that is bigger than
> half of the average team velocity, and when taking on one of these
> larger stories we always try to swarm around the big story in the
> beginning of the sprint to reduce the risk of having a failed sprint
> with a partially done story.

To me it sounds as if you're halfway to smaller stories already. It's
the same approach, basically, except that all the little pieces are
vertically sliced and defined as individual stories. Progress will be
visible throughout the iteration because you'll be able to knock out
the individual stories to completion. So, there's your partial and
real progress, still keeping the model of using 100% complete stories
as the unit of measure.
> When it comes to improving the acceptance criteria for stories
> accepted by the team I prefer a more pragmatic approach than
> just refusing the story.

I understand what you mean; I'd just like to interject that "just
refusing the story" is pretty pragmatic. ;-) Obviously, on a practical
level we wouldn't "just refuse" and walk away. We would refuse to
accept a story that wasn't properly defined, and then collaborate with
the customer/product-owner/whatever-the-role-is-called to get the
story into proper shape.
> The truth is that our current stories are
> not all that bad, but every once in a while there's a high priority
> story with fluffy or incomplete acceptance criteria coming up in
> sprint planning. We typically discuss it on the spot with the PO, we
> get a fairly good understanding, and someone is appointed as
> responsible to work out the details with the PO in the beginning of
> the sprint.

That sounds pretty normal to me. A possible opportunity for
improvement is not to wait until the beginning of the sprint, but go
ahead and hammer out the acceptance criteria right then. If the PO
isn't able to do so, it might indicate he/she doesn't quite know what
he/she is asking for. It seems likely that they would take up a lot of
time at the beginning of the sprint trying to figure it out; maybe
they need to do some research or some thinking before they pull that
particular story into play; next sprint, maybe. It's all to the good;
it's not a question of refusing to work.
> An acceptance criteria burndown graph would show this
> situation and could be a reminder to bring up the issue in the

Frankly, this still looks like additional ceremony that doesn't add
value. Dealing with the issues on the spot would yield better results
faster, and without any additional project tracking activities.
> * We need to do manual regression testing
> Why #1: Why have you not automated the acceptance tests during the
> sprint? Maybe it is because the acceptance criteria were not defined
> in the proper way.
> Why #2: Why were the acceptance criteria not defined in the proper
> way? Maybe it was because of lack of time/priority from the PO.
> Why #3: Why did the PO not make time for properly defining the
> acceptance criteria? Maybe it was because the correlation of proper
> acceptance criteria and sprint outcome was not clear to him.
> Why #4: Why was the correlation not clear to him? Maybe it was
> because it was not brought up in a retrospective.
> Why #5: Why was the issue never raised in a retrospective? Maybe
> because it was not really visible to the team either.
> Any suggestions on how to address this situation?

Seems like the series of whys you wrote is already pointing to actions
you could take. At the next retrospective, make the team and the PO
aware of the relationship between acceptance criteria and sprint
success. Then use the power of self-organization and the wisdom of
crowds: let them come up with an idea for a solution, and let them try
it out for a couple of sprints. You can always revisit the question in
a future retrospective, or at any time you feel it's necessary.
- Hello, Robert. On Saturday, March 7, 2009, at 1:51:27 PM, you wrote:
> What seemed odd to me about your game is that it seemed to involve
> no decision making during play. I had been expecting to see some kind of
> evaluation and some kind of decision-making about making changes.

Maybe next game, with that point. :)
The model that really matters is the one that people have in
their minds. All other models and documentation exist only to
get the right model into the right mind at the right time.
-- Paul Oldfield