User Story Traps
- I finally got the first shot at my "User Story Traps" post done. I'm posting the top 3 traps here, along with the link to the full blog post. (Incidentally, I'm not doing that to redirect traffic to my blog; I just didn't think you folks would enjoy an even more ginormous post on here.)
Any comments/complaints/controversies/corrections appreciated. :-)
This article assumes that User Stories are used with Scrum, though much of the advice can be translated to other development processes. User Stories and Scrum are two completely independent practices. Neither requires the other, though they are often used together. In fact, User Stories are suggested in the Scrum Guide as a technique for representing Product Backlog Items.
The traps below are ordered from most impactful to least impactful, as judged by my own knowledge of User Stories and by my experience coaching five development teams on User Story best practices. My intention here is to help other teams maximize their effectiveness with User Stories by making them aware of some of the traps to avoid.
*** Trap: Thinking a user story is a sentence, or a card
The most common misconception, and the most dangerous trap, is thinking that a User Story is a sentence (like "As a user... I want...") or an index card on a Scrum board. A sentence generally captures only the User Story description (aka the User Story title), and the card may include the title and/or tests, but a User Story is more than that. First and foremost, you must understand what a User Story truly is, and that it contains three essential parts. The definition below is my own, but it is based largely on the definition of a User Story described in Mike Cohn's book, User Stories Applied. Another good resource for defining a User Story is Ron Jeffries' article: http://xprogramming.com/articles/expcardconversationconfirmation/
A user story describes new or changed functionality that will be valuable to an external stakeholder of a system or software. User stories are composed of three aspects:
* a written description of the story used for planning and as a reminder
* conversations about the story that serve to flesh out the details of the story
* tests that convey and document details, that can be used to determine when a story is complete, and that are almost always automatable
By external stakeholder here, we mean a stakeholder external to the development team. It could be a user in your company, an executive in your company, another development team inside or outside of your company, your Product Owner, or any other role that is not officially a member of the development team who is implementing the story.
Not understanding the definition of a User Story, and not realizing that every User Story contains these three essential pieces, is the biggest User Story trap of all.
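To make the three parts concrete, here is a minimal sketch in Python of a story captured as data. The class, field names, and story details are my own illustration (not from Cohn's book); the point is simply that the description alone is the smallest of the three parts:

```python
from dataclasses import dataclass, field

@dataclass
class UserStory:
    """Illustrative only: a User Story's three essential parts as data."""
    description: str  # the written description, used for planning and as a reminder
    conversation_notes: list = field(default_factory=list)  # details fleshed out in conversations
    acceptance_tests: list = field(default_factory=list)    # confirmations; ideally automatable

story = UserStory(
    description="As a subscriber, I want to reset my password so that I can regain access.",
    conversation_notes=["Reset link expires after 24 hours (agreed with the PO)."],
    acceptance_tests=["Test that a used reset link cannot be redeemed a second time."],
)

# A sentence alone is not a story: the tests and conversations carry most of the detail.
assert story.description and story.conversation_notes and story.acceptance_tests
```

A card on the board might show only `description`, which is exactly why mistaking the card for the whole story is so dangerous.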
*** Trap: Everything is a Story, aka "A story is just some code that needs to be written..."
A widely believed misconception is that everything a development team does is a User Story. A related view is that a User Story is just a "high-level task" on a project plan, or just a piece of software that needs to be developed. A User Story is very different from either of those descriptions: it delivers clear business value, via a change in the product, to a stakeholder outside of the dev team, and most often directly to a user of the system under development. I've seen teams misuse User Stories to represent the following kinds of tasks: setting up a staging server, testing, regression testing, time to write User Stories, and Scrum ceremony meetings (even assigning Story Points to Sprint Planning Meetings!). These other efforts can be called anything you want, but calling them User Stories, Technical Stories, or any other manner of pseudo User Story or Scrum term is fraught with danger that will defeat 20-80% of the benefits of using true User Stories. One of the biggest dangers is that assigning story points to these kinds of activities will pollute your velocity measurement.

Advice: Scrum meetings and ceremonies are tasks, and can be tracked as such on your Sprint Backlog. Other dev-type tasks that are not directly related to delivering something of value to external stakeholders or users can be called "dev items" or "dev time". Track these dev items completely separately from the Product Backlog, and let the dev team bring them into sprints as tasks as they choose. Only the dev team (remember, in Scrum, the "dev team" role does NOT include the Product Owner) can decide how much Product Backlog work can be taken on during a sprint, so when the dev team decides to prioritize these dev items, they will need to take that effort into account when taking on Product Backlog.
Having said all of that, any development/programming/testing activity that can be directly tied to a Story should be tied to that Story and estimated with it. Dev items are only for work that cannot be tied directly to a Story: setting up a new test server, upgrading your dev tools, researching dev tools or libraries, resolving technical debt, attending training, and so on. If any of these items can be directly tied to one or more User Stories, then they are simply tasks for those stories.
*** Trap: Lack of clear acceptance tests
This one comes in a few flavors. Sometimes it is simply a lack of well-understood acceptance tests. Other times it comes from thinking a User Story is just a sentence, or just a "card" on the board. Many people mistakenly think a User Story is a sentence that begins with "As a <user>, I want <some functionality>, so that ". Focusing too much on the "As a user..." template is a User Story trap of its own; see below.
Acceptance tests should be well understood by everyone on the team. There are numerous techniques for acceptance testing, but the key principle in all of them is that tests should be created such that they can be automated. You don't have to automate all of them, but they should most often be automatable. Some of the more popular techniques for documenting acceptance tests at a conceptual level are "Test that <some expected behavior occurs>", Given/When/Then, and Specification by Example. Any team wanting to do User Stories well should take the time to master one, or preferably more, of these techniques. Also, don't forget to make strategic use of simple tables, bullets, flowcharts, and pictures when a visual conveys meaning better than text. It has been my experience that teams often focus too much on the User Story description and conversations, and very little on acceptance testing. This is backwards, in my opinion: the focus should be mostly on acceptance testing, followed in priority by the conversations, and last by the User Story description/title.
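As a sketch of what "automatable" can mean in practice, the Given/When/Then structure maps naturally onto a plain test function. The `ResetToken` class and its behavior below are hypothetical stand-ins I invented for illustration, not a real API:

```python
# Hypothetical system under test: a one-time password-reset token.
class ResetToken:
    def __init__(self):
        self.used = False

    def redeem(self):
        """Redeem the token once; a second redemption is rejected."""
        if self.used:
            raise ValueError("token already used")
        self.used = True
        return "password updated"

def test_reset_link_cannot_be_reused():
    # Given: a reset token that has already been redeemed once
    token = ResetToken()
    token.redeem()
    # When: the token is redeemed again
    # Then: the attempt is rejected
    try:
        token.redeem()
        assert False, "expected the second redeem to fail"
    except ValueError:
        pass

test_reset_link_cannot_be_reused()
```

A test written at this level of precision documents the story's details just as well as prose, with the added benefit that it can run on every build.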
Rest of article at: