Re: Scrum and Traceability
(responding to Ron)
> I'd love to see a list of about 14 reasons to have traceability,
> each with a few sentences on why with Agile one does not need them,
> and/or why they are covered by regression tests ...

A challenge! I'll do my best. Don't regard what follows as
authoritative, just a starting point for further thought...
BTW, haven't spent time yet catching up on the thread, so I
apologise if I'm duplicating...

1. Trace back to the source of any requirement, so we know who
to go to for clarification.

I guess we get that in who gets the value from a User Story, but in
general we get the requirements and clarifications all at the same time,
so we don't need to go back very often or very far.

2. Trace each requirement back to an owner who is keen to get it
implemented.

As above, we get that in who gets the value from a User Story.

3. Everything we build and ship should be based on a current requirement,
so trace from every line of code back to a requirement.

Test Driven Development means we create every line of code to pass a test.
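
As a minimal sketch of "every line of code exists to pass a test" (the
names and the tax rule here are invented for illustration, not from the
thread):

```python
# Step 1: write the failing test first -- the test *is* the requirement,
# so the trace from code back to requirement is the test itself.
def test_order_total_includes_20_percent_tax():
    assert order_total(net_pence=100) == 120

# Step 2: write just enough code to make the test pass.
def order_total(net_pence):
    return net_pence + net_pence * 20 // 100

test_order_total_includes_20_percent_tax()  # passes
```

Any line that can be deleted without failing a test never had a
requirement behind it in the first place.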

4. Trace forward from every requirement to design elements and the code
that implements them. Among other benefits this can increase customer
confidence in our ability to deliver the product that is wanted.

Create acceptance tests as we code the requirements, then see the tests
pass.

5. Back out a requirement; find all code & design elements that
trace back only to the 'recanted' requirement.

If I had to do this I would delete the tests, then use a good code coverage
tool while running a full set of tests, to see which code is no longer
needed. I guess you need some traceability to know what tests to delete.
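
That "good code coverage tool" step can be sketched with just Python's
standard-library `trace` module (function names and the discount rule are
invented for illustration; a real project would more likely use
coverage.py):

```python
import trace

def calculate_discount(total):   # traces to a live requirement
    return total * 9 // 10

def loyalty_bonus(total):        # traced only to the 'recanted' requirement
    return total + 5

def surviving_tests():
    # The test for loyalty_bonus has already been deleted.
    assert calculate_discount(100) == 90

# Run the remaining tests with line counting on, no per-line printout.
tracer = trace.Trace(count=1, trace=0)
tracer.runfunc(surviving_tests)
executed = tracer.results().counts   # {(filename, lineno): hit count}

fname = calculate_discount.__code__.co_filename
discount_body = calculate_discount.__code__.co_firstlineno + 1
bonus_body = loyalty_bonus.__code__.co_firstlineno + 1

print((fname, discount_body) in executed)  # True: code still needed
print((fname, bonus_body) in executed)     # False: candidate for deletion
```

Lines that no surviving test executes are exactly the code that traced
only to the recanted requirement.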

6. Re-introduce work removed owing to a 'recanted' requirement; follow the
existing trace forward from the reinstated requirement to the 'deleted'
design and code elements.

We might try some fancy work with historic versions and merges, but I'd
be tempted to ask the team to just add the story back onto the backlog
for the next Sprint.

7. Be able to demonstrate that we did everything we should do, and help
find the root cause if we didn't; trace all changes to any elements of
any type to the person who made that change.

IMHO if you want to do this you aren't being agile. I might want to track
the team members into their new jobs in their new organizations, though.

8. Capture what we did to deliver, so we can repeat the steps necessary to
do it again; trace artefacts to or from the defined process in use when
they were created.

I would want these decisions internalised rather than on paper. What we
need to do in this respect gets done at the Retrospective.

9. Trace from requirements and dependent elements (see completeness) to a
'status'.

Look at the test results. Are they all green?

10. Capture what we did to deliver, so we can find the root causes of what
went wrong (see Repeatability).

Hold Retrospectives sufficiently often that the team members don't forget.
Writing the process down inhibits change for the better.

11. Use traceability links to collect together the related parts into a
working system.

We still use "build scripts" but almost completely eliminate intermediate
artefacts. There's nothing left to build, apart from the executable code
from the source code.

12. To estimate the cost of a change to a requirement, trace to all the
design elements and code that might be impacted.

Have all your User Stories independent and 'valuable'. The Story Point
count is a quick and easy estimate; we find out exactly what code needs
to be refactored when we do the work, but unless we have a build-up of
technical debt this will be a relatively constant amount 'per Story
Point' - or at least near enough so that we don't care about the
variability.
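
The 'per Story Point' estimate above amounts to simple arithmetic; a
sketch with invented numbers:

```python
# Derive a cost-per-point rate from recent history (figures invented).
completed_points = 120        # Story Points delivered over recent Sprints
total_cost = 60_000           # spend over the same period
cost_per_point = total_cost / completed_points   # 500.0

# Estimating a change is then just sizing it in points.
change_estimate = 8 * cost_per_point   # an 8-point change
print(change_estimate)                 # 4000.0
```

The constant rate only holds while technical debt stays low, which is
exactly the caveat above.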

13. Function Point count: to quote the cost of a change to a requirement,
trace to all the analysis elements that represent the design and code
that will be changed.

If you really need to do a Function Point count, it can be done on the
Acceptance Tests... I think?

14. To find the design for specific requirements, so one can understand
how parts of the system are implemented.

...or, with a code coverage tool, see what code gets called when you run
the specific tests you're interested in.
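
A sketch of that approach using only the standard library (names are
invented; `trace` with countfuncs=1 lists every function one chosen test
exercises):

```python
import trace

def apply_discount(total):
    return total * 9 // 10

def format_receipt(total):
    return f"Total: {apply_discount(total)}"

def test_receipt_shows_discounted_total():
    assert format_receipt(100) == "Total: 90"

# Record which functions are called while running just this one test.
tracer = trace.Trace(countfuncs=1, trace=0)
tracer.runfunc(test_receipt_shows_discounted_total)

called = {name for (_file, _module, name) in tracer.results().calledfuncs}
print(sorted(called & {"apply_discount", "format_receipt"}))
# ['apply_discount', 'format_receipt'] -- the code behind this requirement
```

The list of called functions is, in effect, the requirement-to-code trace,
computed on demand instead of maintained by hand.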

15. Ensure requirements are tested: trace forward from requirements to
tests that would demonstrate the requirement is delivered, and to the
outcomes of those tests.

If the Acceptance Tests ARE the requirements, then this need evaporates.
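
A sketch of what "the Acceptance Tests ARE the requirements" can look
like in plain Python (the shipping rule and all names are invented for
illustration):

```python
def qualifies_for_free_shipping(order_value):
    return order_value >= 50

# The requirement, stated once, readable as prose, and executable:
def test_orders_of_50_or_more_ship_free():
    assert qualifies_for_free_shipping(50)
    assert qualifies_for_free_shipping(120)
    assert not qualifies_for_free_shipping(49)

test_orders_of_50_or_more_ship_free()
# If the rule changes, this "requirement" fails loudly instead of
# silently drifting out of sync with the code.
```

There is nothing to trace because there is only one artefact.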
I make that 15 'reasons', but you could cut this cake in different ways.

> Couldn't we write the tests such that they don't look like tests, but rather requirements?
> With one, and only one formal specification, which also happens to be executable against the actual system, aren't we better off than having to split time between two possibly out-of-sync artifacts?
ThoughtWorks has a testing tool called Twist, which uses something called Business Workflows. And now it has a nestable declarative aggregator called a "Concept" (what a concept!).
Twist is... designed to help you deliver applications fully aligned with your business. It eliminates requirements mismatch as business users directly express intent in their domain language.
I have not used the tool myself. If anyone has, please add some insight.
P.S. I have no affiliation w/ ThoughtWorks.
--- In firstname.lastname@example.org, "woynam" <woyna@...> wrote:
> --- In email@example.com, "pauloldfield1" <PaulOldfield1@> wrote:
> > (responding to George)
> > > I feel like a broken record with my questions.
> > I guess I need to learn to answer you better :-)
> > > pauloldfield1 wrote:
> > > > IMHO Traceability, of itself, has no value. However some of the
> > > > things that we DO value may be achieved readily if we have
> > > > Traceability.
> > >
> > > What are those things?
> > Well, I gave you a list of 15 things that some people value.
> > I guess we could take a lead from Hillel's sig line and say
> > they are all various categories of attempting to use process
> > to cover for us being too stupid to be agile.
> > We value knowing that we are testing to see that our system does
> > what the customer wants (but we're too stupid to write the
> > requirements directly as tests)... etc. etc.
> And this continues to irk the sh*t out of me. Why do we create another intermediate artifact that has to be translated by an error-prone human into a set of tests? What does the requirements document provide that the tests don't? Couldn't we write the tests such that they don't look like tests, but rather requirements?
> With one, and only one formal specification, which also happens to be executable against the actual system, aren't we better off than having to split time between two possibly out-of-sync artifacts?
> If you continue to have a separate requirements document, and your tests don't reflect the entirety of the requirements, what mechanism do you use to verify the uncovered requirements? How is that working for you?
> "A man with one watch knows what time it is; A man with two watches is never quite sure."
> > Paul Oldfield
> > Capgemini