
Re: [scrumdevelopment] Re: Scrum and Traceability

  • George Dinwiddie
    Message 1 of 137 , Mar 1, 2010
      Hi, Hillel,

      Hillel Glazer wrote:
      > That the testing is assumed to have been created with the ability to be
      > comprehensive and tightly integrated with what's being tested in such a
      > way as to ensure it is aware of changes to the code and can properly
      > eliminate unneeded tests and add new ones, including all hypotheticals
      > and nulls... and...

      Does a traceability document do these things?

      > That the people doing all the work are themselves experienced enough to
      > ensure that the tests they've configured and the code they're creating
      > have accounted for not just testing what *should be* in the code but
      > also what *actually is* in the code, whether it should be there or not.
      > Specifically, say a feature is supposed to go into the code at some
      > point in the future, or, to one user base but not another, at least not
      > yet. It's *supposed* to be in there, or maybe it's slated to be there,
      > or is under consideration, but not now, if ever. Automated tests alone
      > wouldn't catch this and it could end up being embarrassing, if not
      > an actual problem.

      Umm, I think automated tests, if executed, could catch this much better
      than any traceability document.

      > Were Ron or George or many others on this list to assure that
      > traceability exists due to "other things they do", I'm quite sure they
      > have the experience to ensure that, indeed, those other things
      > accomplish the same outcome that traceability practices are meant to
      > perform. If they were working on some life or safety critical
      > application that required some form of assessment that they are doing
      > things that accomplish "traceability", I'd look at what they do that
      > obviate the outcome by whatever it is that they do that make traditional
      > traceability unnecessary.

      When I look at http://en.wikipedia.org/wiki/Traceability_matrix it
      appears that this merely maps requirements to test cases. If the
      requirements are expressed as automated test cases, this is a trivial
      mapping.
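      To make that point concrete, here is a minimal sketch of what "requirements
      expressed as automated test cases" can look like. The requirement IDs
      (REQ-001, REQ-002), the naming convention, and the toy `authorize` function
      are all hypothetical examples, not anything from the thread:

      ```python
      # Sketch: requirements written directly as automated tests.
      # REQ-001 / REQ-002 and the test-naming convention are hypothetical.

      def authorize(balance, amount):
          """Toy system under test: approve a withdrawal if funds cover it."""
          return amount <= balance

      # REQ-001: A withdrawal must be approved when funds are sufficient.
      def test_req_001_sufficient_funds_approved():
          assert authorize(balance=100, amount=40)

      # REQ-002: A withdrawal must be rejected when funds are insufficient.
      def test_req_002_insufficient_funds_rejected():
          assert not authorize(balance=100, amount=140)
      ```

      Each requirement is one executable test, so the requirement-to-test
      "mapping" is the identity: when a test fails, you know exactly which
      requirement is unmet.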

      > Unfortunately, most people in this field today aren't where the experts
      > in the field are, and need more blunt forms of traceability to mitigate
      > the risks of managing code that changes over time and space, with
      > multitudes of people touching it; that is the context in which many
      > systems created by inexperienced people exist.

      I don't understand this, at all. How does a table relating requirements
      documents to test case identifiers help anyone? How can inexperienced
      people be helped more by a document than by executable tests that
      actually tell them when a requirement isn't met?

      How is a traceability matrix actually used? I've never seen anyone
      demonstrate a use. What questions are answered by it?
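      If a team were required to produce such a matrix anyway, it could be
      derived mechanically from tests that embed a requirement ID in their
      name, rather than maintained as a separate document. A sketch, assuming
      a hypothetical `test_req_<id>_...` naming convention:

      ```python
      # Sketch: derive a requirement-to-test matrix from test names.
      # The naming convention (test_req_<id>_...) is a hypothetical example.
      import re

      test_names = [
          "test_req_001_sufficient_funds_approved",
          "test_req_002_insufficient_funds_rejected",
      ]

      def traceability_matrix(names):
          """Map requirement IDs to the tests that cover them."""
          matrix = {}
          for name in names:
              m = re.match(r"test_req_(\d+)_", name)
              if m:
                  matrix.setdefault("REQ-" + m.group(1), []).append(name)
          return matrix

      print(traceability_matrix(test_names))
      ```

      The generated table answers "which tests cover requirement X?" without
      a hand-maintained artifact that can drift out of sync with the tests.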

      - George

      --
      ----------------------------------------------------------------------
      * George Dinwiddie * http://blog.gdinwiddie.com
      Software Development http://www.idiacomputing.com
      Consultant and Coach http://www.agilemaryland.org
      ----------------------------------------------------------------------
    • john_hermann
      Message 137 of 137 , Apr 20 3:14 AM
        @Mark
        <quote>
        Couldn't we write the tests such that they don't look like tests, but rather requirements?

        With one, and only one formal specification, which also happens to be executable against the actual system, aren't we better off than having to split time between two possibly out-of-sync artifacts?
        </quote>

        ThoughtWorks has a testing tool called Twist, which uses something called Business Workflows. And now it has a nestable declarative aggregator called a "Concept" (what a concept!).

        http://www.thoughtworks-studios.com/agile-test-automation
        <snip>
        Twist is... designed to help you deliver applications fully aligned with your business. It eliminates requirements mismatch as business users directly express intent in their domain language.
        </snip>

        I have not used the tool myself. If anyone has, please add some insight.

        -johnny
        P.S. I have no affiliation w/ ThoughtWorks.


        --- In scrumdevelopment@yahoogroups.com, "woynam" <woyna@...> wrote:
        >
        >
        >
        > --- In scrumdevelopment@yahoogroups.com, "pauloldfield1" <PaulOldfield1@> wrote:
        > >
        > > (responding to George)
        > >
        > > > I feel like a broken record with my questions.
        > >
        > > I guess I need to learn to answer you better :-)
        > >
        > > > pauloldfield1 wrote:
        > > > > IMHO Traceability, of itself, has no value. However some of the
        > > > > things that we DO value may be achieved readily if we have
        > > > > Traceability.
        > > >
        > > > What are those things?
        > >
        > > Well, I gave you a list of 15 things that some people value.
        > > I guess we could take a lead from Hillel's sig line and say
        > > they are all various categories of attempting to use process
        > > to cover for us being too stupid to be agile.
        > >
        > > We value knowing that we are testing to see that our system does
        > > what the customer wants (but we're too stupid to write the
        > > requirements directly as tests)... etc. etc.
        >
        > And this continues to irk the sh*t out of me. Why do we create another intermediate artifact that has to be translated by an error-prone human into a set of tests? What does the requirements document provide that the tests don't? Couldn't we write the tests such that they don't look like tests, but rather requirements?
        >
        > With one, and only one formal specification, which also happens to be executable against the actual system, aren't we better off than having to split time between two possibly out-of-sync artifacts?
        >
        > If you continue to have a separate requirements document, and your tests don't reflect the entirety of the requirements, what mechanism do you use to verify the uncovered requirements? How is that working for you?
        >
        > Mark
        >
        > "A man with one watch knows what time it is; A man with two watches is never quite sure."
        >
        >
        > >
        > > Paul Oldfield
        > > Capgemini
        > >
        >