Re: [scrumdevelopment] Re: Use Cases - a "Bridge" between the "what" and the "how"?
I'm sure you saw this coming, but I have to strongly disagree with your definition of User Story.
Both Cohn and our fellow lister Ron Jeffries make it very clear that a User Story is more than a sentence. Oh, you also mentioned the conversations... so you got 2/3 in my book. What you and much of the industry fail to grasp is that User Stories require a 3rd component, the "Confirmations" or "Acceptance Tests" - which are descriptions (documented or not) of what will constitute success for the story. They are also where people put most of the system behavior details (such as logic, error conditions, order of flow, etc.).
In more recent years, Cohn has also failed to mention acceptance tests in his website materials, and I called him out on it recently (of course, he talks a lot about them in his book). His response was that I was right to bring it up, and that he would update his website materials on US's when he got around to it, but that it may be a while because he's slammed.
So, with that context: writing one or more UC's for a US, while not horrible, is not terribly efficient either, and in my view is an attempt to replace Acceptance Tests with Use Cases.
Most of the value of Use Cases comes from documenting the boundary between User and System (this is both Cockburn's advice and my experience), but as you mention, you can do them at differing levels too if that adds value. Further, I would argue that the detailing of a UC is almost exactly analogous to the detailing of a US -- it is meant to be a *result* of a collaboration.
Detailed Use Cases will take more time and include much more detail than a User Story, as you mention. To me, this is the major distinction in terms of efficiency. Sea Level UC's (the ones that document the user/system boundary) require details that specify the user/system interaction down to a fine grain, including order of operations. User Story Acceptance Tests take a less specific road by saying "Just tell me *what* needs to happen for success, not *how* the interaction must play out." The other big difference is that while UC's are supposed to also correspond to TC's to ensure closure (which then may be automated), US Acceptance Tests can be converted directly into automated tests, and that is what gives US's a more efficient mechanism. To me, it's almost as if US's skip the UC part and go directly to a detailed Test Case. Another major distinction is that, in order to be effective at the time they are used to spur development, UC's must document all current behavior plus the new/changed behavior. US's take a more streamlined approach and document just the new/changed behavior.
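To make that concrete, here's a minimal sketch of a Confirmation going straight to an automated test. The story, the ShoppingCart class, and its methods are all hypothetical - invented here purely for illustration, not from any real project. Note the tests state *what* success looks like, never *how* the UI interaction plays out:

```python
class ShoppingCart:
    """Toy implementation, just enough for the acceptance tests below to run."""

    def __init__(self):
        self.items = {}

    def add(self, sku, qty=1):
        # Reject nonsensical quantities; this is the "error condition" detail
        # that lives in the Confirmation, not on the story card itself.
        if qty < 1:
            raise ValueError("quantity must be at least 1")
        self.items[sku] = self.items.get(sku, 0) + qty

    def count(self):
        return sum(self.items.values())


# Story card: "As a shopper, I can add an item to my cart."
# Confirmations, written directly as automated tests:

def test_adding_an_item_increases_the_count():
    cart = ShoppingCart()
    cart.add("SKU-1")
    assert cart.count() == 1


def test_adding_zero_quantity_is_rejected():
    cart = ShoppingCart()
    try:
        cart.add("SKU-1", qty=0)
        assert False, "expected the add to be rejected"
    except ValueError:
        pass  # rejection is the success criterion
```

Contrast that with a Sea Level UC, which would also have to spell out the screens, the order of clicks, and all the existing cart behavior around this change.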
The overall theme difference I see, as someone experienced in both, is that UC's (and their associated Test Case docs) require much more time and detail than US's, and UC's specify system behavior in MUCH more detail than US's. As such, IMO, 99%+ of the time it's a better choice to use US's, but I leave open the possibility that there are extreme corner cases where spending the added money to get the added level of detail *might be* a wise choice. As many have said here, though, generally speaking, it's better to spend that time/money writing tests than going into fine-grained detail in a Use Case. I also leave open the possibility that there are projects where the risks involved (wrt specifying expected system behavior) matter more than the money saved by going the less specified route of US's - and as such, a reason a team might choose UC's over US's. (I also leave open the possibility that a team can do both extensive automated testing *and* UC's, but again, that costs a lot of extra time/money.)
- "Oh, I'm sorry, when I said I wanted it to 'take off like a helicopter,' I meant 'take off like a helicopter from a warzone, where we can carry troops and equipment.'"
And there's the rub. The devil is in the details, and sometimes those details mean we're off the mark (Harrier vs. Osprey), or that things take much longer than anticipated (Osprey).
Someone else made a point here recently about how it's extremely important to a) get to the details by having the CEO appoint a point person, and b) make sure you loop in the CEO iteratively and intelligently (respecting his time) so he can head off any incorrect interpretation of his vision.
I'm not against vision, but a vision is not a "software requirement." Further, neither a vision nor a business requirement is easily testable, because they often lack the key ingredient: system behavior that you can test against.