Re: Scrum and Traceability
If it needs to be done for whatever reason, here is an easy flow to get it:
* All requirements are converted into user stories in a Product Backlog,
* user stories are assigned to a sprint from the Product Backlog,
* for each sprint user story, you create a task.
So, your traceability is basically the answer to the question: are there any user stories not assigned to a sprint?
I have used this simple but effective approach with no arguments from the client side.
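The flow above amounts to a one-way chain (requirement → story → sprint → task), and the traceability question reduces to a single query. As a minimal sketch (the class and field names here are hypothetical, purely for illustration):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class UserStory:
    title: str
    sprint: Optional[str] = None  # None until the story is assigned to a sprint

@dataclass
class ProductBacklog:
    stories: List[UserStory] = field(default_factory=list)

    def unassigned(self) -> List[UserStory]:
        """The traceability check: which stories have no sprint yet?"""
        return [s for s in self.stories if s.sprint is None]

backlog = ProductBacklog([
    UserStory("Login page", sprint="Sprint 1"),
    UserStory("Password reset"),  # not yet assigned
])
print([s.title for s in backlog.unassigned()])  # ['Password reset']
```

Any backlog tool that records a story-to-sprint link can answer the same question with a filter; no extra traceability artifact is needed.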
--- In email@example.com, "Steve Ropa" <theropas@...> wrote:
> Brian: "Why aren't women allowed to go to stoning, mother?"
> Mother: "Because it's *written*, that's why!"
> I see this discussion going the same direction as the baseline discussion. The only reason I've ever seen for either one, and I am not making light of this reason, is because someone else requires it. Whether it be the big boss on Mahogany Row who needs to see where things were and where they are going, or a project manager who has "learned over the years" that these artifacts are required. What I haven't seen is any need for the actual development team to have traceability of anything in particular.
> From: George Dinwiddie
> Sent: Monday, March 01, 2010 8:07 AM
> To: firstname.lastname@example.org
> Subject: Re: [scrumdevelopment] Re: Scrum and Traceability
> Ron Jeffries wrote:
> > Hello, pauloldfield1. On Monday, March 1, 2010, at 2:57:00 AM,
> > you wrote:
> >>> Is there any role for traceability in scrum? How can we handle
> >>> links in scrum?
> >> There are about 14 distinct reasons to have traceability, if you
> >> count them all up. Almost all of these are irrelevant if you are
> >> working in an agile fashion, and most or all of the rest are covered
> >> by having a set of regression tests at 'green' (or not, as the case
> >> may be).
> > I'd love to see a list of about 14 reasons to have traceability,
> > each with a few sentences on why with Agile one does not need them,
> > and/or why they are covered by regression tests ...
> As would I. I've seen many people ask about traceability on this and
> other lists. Not one of them has answered my questions about what they
> want to trace and why. The best answer I can get is that it's "required."
> - George
> * George Dinwiddie * http://blog.gdinwiddie.com
> Software Development http://www.idiacomputing.com
> Consultant and Coach http://www.agilemaryland.org
Couldn't we write the tests such that they don't look like tests, but rather requirements?
With one, and only one formal specification, which also happens to be executable against the actual system, aren't we better off than having to split time between two possibly out-of-sync artifacts?
ThoughtWorks has a testing tool called Twist, which uses something called Business Workflows. And now it has a nestable declarative aggregator called a "Concept" (what a concept!).
Twist is... designed to help you deliver applications fully aligned with your business. It eliminates requirements mismatch as business users directly express intent in their domain language.
I have not used the tool myself. If anyone has, please add some insight.
P.S. I have no affiliation w/ ThoughtWorks.
--- In email@example.com, "woynam" <woyna@...> wrote:
> --- In firstname.lastname@example.org, "pauloldfield1" <PaulOldfield1@> wrote:
> > (responding to George)
> > > I feel like a broken record with my questions.
> > I guess I need to learn to answer you better :-)
> > > pauloldfield1 wrote:
> > > > IMHO Traceability, of itself, has no value. However some of the
> > > > things that we DO value may be achieved readily if we have
> > > > Traceability.
> > >
> > > What are those things?
> > Well, I gave you a list of 15 things that some people value.
> > I guess we could take a lead from Hillel's sig line and say
> > they are all various categories of attempting to use process
> > to cover for us being too stupid to be agile.
> > We value knowing that we are testing to see that our system does
> > what the customer wants (but we're too stupid to write the
> > requirements directly as tests)... etc. etc.
> And this continues to irk the sh*t out of me. Why do we create another intermediate artifact that has to be translated by an error-prone human into a set of tests? What does the requirements document provide that the tests don't? Couldn't we write the tests such that they don't look like tests, but rather requirements?
> With one, and only one formal specification, which also happens to be executable against the actual system, aren't we better off than having to split time between two possibly out-of-sync artifacts?
> If you continue to have a separate requirements document, and your tests don't reflect the entirety of the requirements, what mechanism do you use to verify the uncovered requirements? How is that working for you?
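The idea of a single specification that is also executable can be sketched without any special tool: write the test so it reads as the requirement. This is a generic illustration (the discount rule and function names are invented for the example, not from the thread):

```python
# The system under test -- a hypothetical pricing rule.
def apply_discount(total: float, loyalty_years: int) -> float:
    """Customers loyal for 2+ years pay 90% of the total."""
    return total * 0.9 if loyalty_years >= 2 else total

# The requirement, written directly as executable checks.
# There is no separate requirements document to drift out of sync.
def test_loyal_customers_receive_a_ten_percent_discount():
    assert apply_discount(100.0, loyalty_years=3) == 90.0

def test_new_customers_pay_full_price():
    assert apply_discount(100.0, loyalty_years=0) == 100.0

test_loyal_customers_receive_a_ten_percent_discount()
test_new_customers_pay_full_price()
print("all requirements verified")
```

The test names carry the business intent; running the suite answers "does the system do what the customer wants?" directly, which is the coverage question the paragraph above raises.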
> "A man with one watch knows what time it is; A man with two watches is never quite sure."
> > Paul Oldfield
> > Capgemini