Date: Mon, 01 May 2006 16:58:38 -0000
From: "Deb" < deborah@... >
Subject: Re: FW: Metrics
> items into three categories:
> A) Features customers think they have "ordered" or items that have had
> any measurable investment of time.
Mary, how would you relate this to the Scrum product backlog
maintenance cycle? Current and next N releases? (I know, it will
differ by client, just trying to get a feel)
> B) Stuff we need to break out in order to think about architecture,
> estimate overall schedule, etc.
This feels like "too big to put into a sprint as-is"
or "requirement hasn't been well thought out yet" - definitely not
stuff at the top of the prioritized backlog.
> C) Really big bullet product roadmap items
bottom-of-the-backlog stuff, "Replace General Ledger" etc.
The best process capability or process maturity measurement is cycle time. This is true in operational environments, and time-to-market (a cycle time) is the key measure of a product development organization’s maturity. The only way we can get away from all kinds of sub-optimizing measurements is to switch to a higher level one that works. Cycle time works, because everything that can go wrong in a process will show up as delayed cycle time.
Fundamentally, you measure the time from when an item goes into the Scrum backlog until it is released as deployed software – that is its cycle time. A mature organization will have a repeatable, reliable cycle time, and will work to continually reduce it.
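As a sketch of that measurement, with made-up dates: cycle time is just the elapsed time between the date an item entered the backlog and the date it shipped.

```python
from datetime import date

# Hypothetical backlog records: (item, date added to backlog, date released).
items = [
    ("login page",   date(2006, 1, 9),  date(2006, 2, 20)),
    ("audit report", date(2006, 1, 23), date(2006, 3, 6)),
    ("csv export",   date(2006, 2, 6),  date(2006, 2, 27)),
]

# Cycle time = release date minus the date the item entered the backlog.
for name, added, released in items:
    print(f"{name}: {(released - added).days} days")
```

Tracked over time, the spread of these numbers is what tells you whether the cycle time is repeatable and reliable.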
Sometimes, there is more than one cycle time – for instance, for a maintenance organization, reasonable cycle times might be 2 hours for an emergency, 8 hours for a normal patch, 1 week for most problems.
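For the multi-speed case, a simple lookup of per-class targets is enough to check whether each request met its expected cycle time. The class names and targets below are illustrative, not prescribed:

```python
from datetime import timedelta

# Hypothetical service classes with target cycle times.
targets = {
    "emergency": timedelta(hours=2),
    "patch":     timedelta(hours=8),
    "problem":   timedelta(weeks=1),
}

def within_target(service_class, actual):
    """True if a request's measured cycle time met its class's target."""
    return actual <= targets[service_class]

print(within_target("emergency", timedelta(hours=1, minutes=30)))  # met
print(within_target("patch", timedelta(hours=12)))                 # missed
```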
Either you operate like a maintenance organization, and measure time from customer request to deployment, or you operate like a product development organization and measure cycle time from product concept to launch. That’s the objective.
In Scrum, if everything is in one backlog, then you have different kinds of items which may have different expected cycle times. If that is so, you have to create sensible categories. In the example I gave, people were telling me that the scrum backlog had some things in it that were not really requests, they were just there to help think about the future. Okay then, some of these do not need to have their cycle time measured in the same breath as the ones that have a customer waiting for them or the ones in which you have invested some time.
So what I'm saying is that all Scrum backlog items which have customers waiting for them, or which have had any appreciable amount of time invested in them, belong in category A and should have their cycle time measured. Your objective is to keep the cycle time of category A items short, repeatable, and reliable. With this single measurement, you know your development process is under control, and you will be alerted when something goes wrong. Moreover, everyone concerned will be motivated to collaborate to reduce the cycle time.
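As a sketch of what "alerted when something goes wrong" could look like, one might flag any category A item whose cycle time drifts well above the historical distribution. The numbers are made up, and the two-sigma threshold is just an illustrative choice:

```python
import statistics

# Hypothetical category A cycle times, in days, for recent releases.
history = [18, 21, 19, 23, 20, 22, 19, 21]

mean = statistics.mean(history)
sd = statistics.stdev(history)

def out_of_control(cycle_time, threshold=2.0):
    """Flag an item whose cycle time drifts well above the historical norm."""
    return cycle_time > mean + threshold * sd

print(out_of_control(22))  # within the usual range
print(out_of_control(35))  # something has gone wrong upstream
```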
Now the trick here is that for this to work, you need to limit the Scrum backlog to the capacity of the team to deliver. We know the team's capacity – it is called velocity. Once we know velocity, there is no good PROCESS reason to have more than a month or two worth of velocity in the backlog. There may be political reasons, but that is a different matter. Put the political stuff in one category and the planning stuff in another; what you have left in category A should be aggressively limited to the team's capacity to deliver, and maintained at maybe two iterations or so worth of work (assuming that work arrives at a steady rate).
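A minimal sketch of that limiting rule, with made-up numbers: admit prioritized items into category A only until roughly two iterations' worth of velocity is reached.

```python
# Hypothetical numbers: a team with velocity 30 points per iteration,
# keeping category A at roughly two iterations of work.
velocity = 30
limit = 2 * velocity

backlog = [("story-%d" % i, 5) for i in range(1, 20)]  # (item, points)

category_a, total = [], 0
for item, points in backlog:
    if total + points > limit:
        break  # everything past the limit waits outside category A
    category_a.append(item)
    total += points

print(len(category_a), total)  # 12 items, 60 points
```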
Once you get to the point of having queues which are MANAGED to be short, and teams that pull work at their known velocity, measuring cycle time is the best process maturity measurement I know of. It’s vastly superior to stuff like utilization and attempts at measuring productivity.
I’m really not sure if this makes any sense… but I’ll give it a try.
Author of: Lean Software Development: An Agile Toolkit
Hi Pablo,
> The "agile operations" you seem to propose would, in that sense, be exactly the opposite of the essence of Agile's good engineering practices.
I did not propose any Agile operations in my post. I simply and emphatically wanted to make clear that ITIL and COBIT are neither Lean nor Agile, and have no basis in these principles. Colleagues who have been contracted to implement ITIL have been educating their customer on Lean and are using Lean principles to define the what and how of ITIL. That approach is great for their customer; however, it is quite unusual, as ITIL is implemented to provide centralized control and limits on change.
The analogy to CI does not make sense. Are you equating the work orders we had to complete for db changes to CI?
The Lean model for handling operations is much more efficient and effective than the ITIL set of practices. Interestingly, the Wikipedia entry for ITIL says that it is based on W. Edwards Deming's work; however, I don't see the connection. A Lean solution will involve a lot more automation code, more visibility, less control, and thoughtful responses to outages to mistake-proof operations. ITIL tries to stop outages by imposing control to reduce firedrills. It is treating a symptom – firedrills.
Now having said all that, I would start with lean principles, a value stream map, causal diagrams for key incident areas, and Scrum or Kanban to handle service requests. I would organize a process that pushes decision making down, and allows teams to find solutions to vexing operational problems. I would read ITIL for ideas that I can use to help support Lean principles, and be very cautious about practices that remove decision making away from those with the most information to resolve the issue.
What is interesting is the history of ITIL, and the power struggles around its formation. Why would that be?
Robin.

On Fri, Feb 20, 2009 at 9:27 PM, Pablo Emanuel <pablo.emanuel@...> wrote:

Robin,

First of all, as Paul wrote, there's nothing in ITIL that is essentially contrary to Agility, although no framework is immune to bad implementations. On a deeper level, however, ITIL is about service management, i.e., operation, and Agile methodologies are about projects.

From an operation standpoint, changes are the source of almost all operational risks, while, on the other hand, there's no improvement without changes. So, one of the key issues of any service management framework is how to control changes in a way that doesn't paralyze improvements, yet doesn't let them jeopardize operational stability. The analogy within the software development world is the continuous integration tool, which guarantees that the changes committed to the codebase are under the nightly build's control, so one is able to easily track the offending change and restore the codebase's stability. The "agile operations" you seem to propose would, in that sense, be exactly the opposite of the essence of Agile's good engineering practices.

That said, I can't see the relation between ITIL's change management process and the service metrics that were asked for in the original message.

Regards,
Pablo Emanuel

2009/2/20 Robin Dymond <robin.dymond@...>

Hi Hank,
Don't waste your time with ITIL or COBIT if you value Agility. Neither of these S&M organizational bondage frameworks offers anything other than Grief and Bureaucracy. I know, as I have had the unpleasant experience of navigating the organizational stupidity they brought to a large Fin Serv organization that attempted Agile. Colorful language aside, it is the blind application and locally optimized implementation of these frameworks that really drives waste in the org.
A small example: development teams not allowed to make changes to development databases! Not kidding. There was a change control process and handoff for any server in the CMDB. Just brutal. It caused huge delays, and the db team making the changes was _completely_ disconnected from the development teams. If they made a mistake, back into the work order queue you went.
I would look far and wide for alternatives that are based on Lean Agile, or create one myself based on the lean agile principles.
Hope that saves you some time and pain.
Robin.

On Fri, Feb 20, 2009 at 2:22 PM, Ryan Shriver <ryanshriver@...> wrote:
Hank,

Thanks for the additional detail. I'm not sure that I have all your answers, but perhaps I can point you in some directions.

From an outside-looking-in view, start simply with Availability. This is the most important quality. There are varying ways to measure this; uptime is the most common. Following Availability, I'd argue, is Throughput (workload capacity of the system under defined conditions such as normal or peak). Closely related to Throughput, and third, would be Response Time – what's the latency between request and response under defined conditions?

I wrote a blog post about how to quantify Availability, Throughput and Response Time here:
http://theagileengineer.com/public/Home/Entries/2008/11/7_The_3_Key_Performance_Qualities_for_all_web_systems_(Part_1).html

You can't leave out Security, so it may pre-empt some of the qualities above. Security can be a complex quality with multiple legitimate measures (access, authorization, encryption, integrity, etc.). Other qualities you may want to consider:

- Recoverability – How quickly can your team respond to a defined situation (power outage, hard drive crash)?
- Scalability – How easy or hard (expensive) is it for your system to scale to meet new demand?
- Maintainability – Another complex quality, including the efficiency of adding new features, resolving issues, isolating bugs, etc.
- Reliability – Mean time between faults.
- Usability – How efficient is it for your customers to do their top N transactions? Do they enjoy the experience? How much does it cost to train new users? Likely another complex quality.

For further reading online, try www.gilb.com and search for any of these terms – you'll find a wealth of information.

-ryan

On Feb 20, 2009, at 7:46 AM, Hank Roark wrote:
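A minimal sketch of two of the measures Ryan lists above – availability from uptime, and a percentile response time – using entirely hypothetical measurements:

```python
# Hypothetical measurements: seconds the service answered health checks
# over a 30-day window, and a sample of request latencies in milliseconds.
observed_up = 2_588_544
period = 30 * 24 * 3600  # seconds in the measurement window

availability = 100.0 * observed_up / period
print(f"availability: {availability:.2f}%")

latencies = sorted([120, 95, 310, 150, 88, 480, 132, 101, 260, 175])
p95_index = int(0.95 * (len(latencies) - 1))
print(f"95th percentile response time: {latencies[p95_index]} ms")
```

In practice the latency sample would be far larger and the percentile taken from a monitoring tool, but the arithmetic is the same.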
Robin Dymond, CST
Managing Partner, Innovel, LLC.