
RE: Metrics

  • MaryP
Message 1 of 41, May 1 7:10 PM

Date: Mon, 01 May 2006 16:58:38 -0000
From: "Deb" <deborah@...>
Subject: Re: FW: Metrics

> ...divide backlog items into three categories:
>
> A) Features customers think they have "ordered" or items that have had
> any measurable investment of time.

Mary, how would you relate this to the Scrum product backlog maintenance cycle? Current and next N releases? (I know, it will differ by client; just trying to get a feel.)

>
> B) Stuff we need to break out in order to think about architecture,
> estimate overall schedule, etc.

This feels like "too big to put into a sprint as-is" or "requirement hasn't been well thought out yet" - definitely not stuff at the top of the prioritized backlog.

>
> C) Really big bullet product roadmap items
>

Bottom-of-the-backlog stuff: "Replace General Ledger" etc.

       

      Deb,

       

The best process capability or process maturity measurement is cycle time. This is true in operational environments, and time-to-market (a cycle time) is the key measure of a product development organization's maturity. The only way we can get away from all kinds of sub-optimizing measurements is to switch to a higher-level one that works. Cycle time works because everything that can go wrong in a process will show up as delayed cycle time.

       

      Fundamentally, you measure the time from when an item goes into the Scrum backlog until it is released as deployed software – that is its cycle time.  A mature organization will have a repeatable, reliable cycle time, and will work to continually reduce it.
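A minimal sketch of that measurement in Python, assuming each backlog item records when it entered the backlog and when it shipped; the records and field names ("entered", "deployed") are invented for illustration, not from any particular tool:

    # Cycle time = elapsed time from backlog entry to deployed software.
    # Items and field names are illustrative.
    from datetime import datetime
    from statistics import mean, pstdev

    items = [
        {"id": "A-101", "entered": datetime(2006, 3, 1), "deployed": datetime(2006, 4, 12)},
        {"id": "A-102", "entered": datetime(2006, 3, 8), "deployed": datetime(2006, 4, 20)},
        {"id": "A-103", "entered": datetime(2006, 3, 15), "deployed": datetime(2006, 4, 21)},
    ]

    cycle_days = [(item["deployed"] - item["entered"]).days for item in items]
    print("mean cycle time: %.1f days" % mean(cycle_days))
    # "Repeatable and reliable" shows up as a low spread around the mean.
    print("spread (std dev): %.1f days" % pstdev(cycle_days))

A widening spread, or a creeping mean, is exactly the alert Mary describes below.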

       

Sometimes, there is more than one cycle time – for instance, for a maintenance organization, reasonable cycle times might be 2 hours for an emergency, 8 hours for a normal patch, and 1 week for most problems.
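Such per-category targets can be checked mechanically; a sketch, treating the hypothetical maintenance numbers above as service targets:

    # Compare each completed item's actual cycle time to its category target.
    # Categories and hours are the hypothetical maintenance figures above.
    targets_hours = {"emergency": 2, "patch": 8, "problem": 7 * 24}

    completed = [("emergency", 1.5), ("patch", 11.0), ("problem", 90.0)]  # (category, actual hours)

    for category, actual_hours in completed:
        target = targets_hours[category]
        status = "within target" if actual_hours <= target else "over target"
        print(f"{category}: {actual_hours:.1f}h against {target}h -> {status}")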

       

      Either you operate like a maintenance organization, and measure time from customer request to deployment, or you operate like a product development organization and measure cycle time from product concept to launch.  That’s the objective.

       

In Scrum, if everything is in one backlog, then you have different kinds of items which may have different expected cycle times. If that is so, you have to create sensible categories. In the example I gave, people were telling me that the Scrum backlog had some things in it that were not really requests; they were just there to help think about the future. Okay then, some of these do not need to have their cycle time measured in the same breath as the ones that have a customer waiting for them or the ones in which you have invested some time.

       

So what I'm saying is that all Scrum backlog items which have customers waiting for them or have had any appreciable amount of time invested in them should have their cycle time measured in category A. Your objective is to keep the cycle time of category A items short, repeatable, and reliable. With this single measurement, you know your development process is under control, and you will be alerted when something goes wrong. Moreover, everyone concerned will be motivated to collaborate to reduce the cycle time.

       

Now the trick here is that you need to limit the Scrum backlog to the capacity of the team to deliver in order for this to work. We know the team's capacity – it is called velocity. Once we know velocity, there is no good PROCESS reason to have more than a month or two worth of velocity in the backlog. There may be political reasons, but that is a different matter. Put the political stuff in one category and the planning stuff in another; what you have left in category A should be aggressively limited to the team's capacity to deliver, and maintained at maybe two iterations' worth of work (assuming that work arrives at a steady rate).
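A rough illustration of that cap, with made-up numbers (the velocity, buffer, and point estimates are all invented):

    # Cap category A at roughly two iterations' worth of measured velocity.
    velocity = 30          # story points the team completes per iteration (illustrative)
    buffer_iterations = 2  # "maybe two iterations or so worth of work"
    backlog_cap = velocity * buffer_iterations

    category_a_points = [8, 5, 13, 8, 3, 5, 13, 8, 5]  # invented estimates of current items
    total = sum(category_a_points)
    print(f"category A holds {total} points against a cap of {backlog_cap}")
    if total > backlog_cap:
        print("over capacity - defer the excess to the planning/roadmap categories")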

       

      Once you get to the point of having queues which are MANAGED to be short, and teams that pull work at their known velocity, measuring cycle time is the best process maturity measurement I know of. It’s vastly superior to stuff like utilization and attempts at measuring productivity.

       

      I’m really not sure if this makes any sense… but I’ll give it a try.

       

      Cheers!

       

      Mary Poppendieck

      www.poppendieck.com

      952-934-7998

      Author of:  Lean Software Development: An Agile Toolkit

       

    • Robin Dymond
Message 41 of 41, Feb 23, 2009
        Hi Pablo,

        > The "agile operations" you seem to propose would, in that sense, be
        exactly the opposite of the essence of Agile's good engineering practices.

I did not propose any Agile operations in my post. I simply and emphatically wanted to make clear that ITIL and COBIT are neither Lean nor Agile, and have no orientation toward or basis in those principles. Colleagues who have been contracted to implement ITIL have been educating their customer on Lean and are using Lean principles to define the what and how of ITIL. That approach is great for their customer; however, it is quite unusual, as ITIL is implemented to provide centralized control and limits on change.

        The analogy to CI does not make sense. Are you equating the work orders we had to complete for db changes to CI?

The Lean model for handling operations is much more efficient and effective than the ITIL set of practices. Interestingly, the Wikipedia entry for ITIL says that it is based on W. Edwards Deming's work; however, I don't see the connection. A Lean solution will involve a lot more automation code, more visibility, less control, and thoughtful responses to outages to mistake-proof operations. ITIL tries to stop outages by imposing control to reduce firedrills. It is treating a symptom - firedrills.

Now having said all that, I would start with Lean principles, a value stream map, causal diagrams for key incident areas, and Scrum or Kanban to handle service requests. I would organize a process that pushes decision making down and allows teams to find solutions to vexing operational problems. I would read ITIL for ideas that I can use to help support Lean principles, and be very cautious about practices that move decision making away from those with the most information to resolve the issue.

        What is interesting is the history of ITIL, and the power struggles around its formation. Why would that be?

        cheers,
        Robin.


        On Fri, Feb 20, 2009 at 9:27 PM, Pablo Emanuel <pablo.emanuel@...> wrote:

        Robin,
         
first of all, as Paul wrote, there's nothing in ITIL that is essentially contrary to Agility, although no framework is immune to bad implementations. On a deeper level, however, ITIL is about service management, i.e., operations, while Agile methodologies are about projects.
         
From an operations standpoint, changes are the source of almost all operational risks, while, on the other hand, there is no improvement without changes. So one of the key issues for any service management framework is how to control changes in a way that doesn't paralyze improvement, yet doesn't let changes jeopardize operational stability. The analogy in the software development world is the continuous integration tool, which guarantees that changes committed to the codebase are under the nightly build's control, so one can easily track an offending change and restore the codebase's stability. The "agile operations" you seem to propose would, in that sense, be exactly the opposite of the essence of Agile's good engineering practices.
         
        That said, I can't see the relation between ITIL's change management process and the service metrics that have been asked for on the original message.
         
        Regards,
        Pablo Emanuel

        2009/2/20 Robin Dymond <robin.dymond@...>

        Hi Hank,

Don't waste your time with ITIL or COBIT if you value Agility. Neither of these S&M organizational bondage frameworks offers anything other than Grief and Bureaucracy. I know, as I have had the unpleasant exercise of navigating the organizational stupidity they brought to a large Fin Serv organization that attempted Agile. Colorful language aside, it is the blind application and locally optimized implementation of these frameworks that really drives waste in the org.

A small example: development teams not allowed to make changes to development databases! Not kidding. Change control process and handoff for any server in the CMDB. Just brutal. Caused huge delays, and the db team making the changes was _completely_ disconnected from the teams. If they made a mistake, back into the work order queue you went.

I would look far and wide for alternatives that are based on Lean and Agile, or create one myself based on Lean-Agile principles.

        Hope that saves you some time and pain.
        Robin.


        On Fri, Feb 20, 2009 at 2:22 PM, Ryan Shriver <ryanshriver@...> wrote:

        Hank,

        Thanks for the additional detail. I'm not sure that I have all your answers, but perhaps I can point you in some directions.

From an outside-looking-in view, start simply with Availability. This is the most important quality. There are varying ways to measure it; uptime is the most common. Following Availability, I'd argue, is Throughput (workload capacity of the system under defined conditions such as normal or peak). Closely related to Throughput, and third, would be Response Time - the latency between request and response under defined conditions.
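A small sketch of how these three might be computed from raw observations; the measurement window, counts, and latencies are invented for illustration:

    # Availability, throughput, and response time from one observation window.
    period_seconds = 30 * 24 * 3600   # a 30-day measurement window
    downtime_seconds = 900            # observed downtime in that window
    availability = 1 - downtime_seconds / period_seconds
    print(f"availability: {availability:.4%}")   # uptime as a fraction of the window

    requests_served = 1_200_000
    print(f"throughput: {requests_served / period_seconds:.2f} requests/second")

    # Percentiles resist distortion by outliers better than the mean does.
    latencies_ms = sorted([95, 98, 105, 120, 130, 180, 210, 2050])
    p95 = latencies_ms[int(0.95 * (len(latencies_ms) - 1))]
    print(f"response time (95th percentile): {p95} ms")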

        I wrote a blog post about how to quantify Availability, Throughput and Response Time here:


        You can't leave out Security, so it may pre-empt some of the qualities above. Security can be a complex quality with multiple legitimate measures (access, authorization, encryption, integrity, etc.). Other qualities you may want to consider:

- Recoverability - How quickly your team can respond to a defined situation (power outage, hard drive crash)
- Scalability - How easy or hard (expensive) is it for your system to scale to meet new demand?
- Maintainability - Another complex quality including efficiency of adding new features, resolving issues, isolating bugs, etc.
- Reliability - Mean time between faults (a small MTBF sketch follows this list)
- Usability - How efficient is it for your customers to do their Top N transactions? Do they enjoy the experience? How much does it cost to train new users? Likely another complex quality.
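Here is the small MTBF sketch referenced under Reliability, using invented fault timestamps:

    # Mean time between faults = average gap between consecutive fault times.
    from datetime import datetime

    faults = [                      # illustrative fault timestamps
        datetime(2009, 1, 3, 4, 10),
        datetime(2009, 1, 19, 22, 5),
        datetime(2009, 2, 7, 9, 40),
    ]

    gaps_hours = [
        (later - earlier).total_seconds() / 3600
        for earlier, later in zip(faults, faults[1:])
    ]
    print(f"MTBF: {sum(gaps_hours) / len(gaps_hours):.0f} hours")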

        For further reading online, try www.gilb.com and search for any of these terms - you'll find a wealth of information.

        -ryan


        On Feb 20, 2009, at 7:46 AM, Hank Roark wrote:

        Pablo,

        Thanks for the replies.  Exactly.  The software is only one part of the overall service (notice the focus on customer support readiness, warranty claims, etc.).  As another example, time to resolve issues is already a key part of our manufacturing quality measurements (time to short term resolution, time to permanent resolution) and seems to make sense in the case of any product, be they bits or atoms or a combination of the two.  

        [I will look into ITIL and COBIT work on this.]

Notice that I'm looking for outside-in metrics, things like reliability and responsiveness to issues. Or to put it a different way: the software is in production, it is done (I know, software is only done when we declare it done), we are selling the service... I'm looking for what other companies use as a benchmark for software-intensive product quality. We have a whole slew of internal, design-time metrics, if you will, that teams can use to measure how they are doing or to change behaviour on the team (notice, the team chooses to use these, sometimes with coaching). That's not what I'm looking for.

Roy, in regard to your customer satisfaction comment, I highly recommend that you look into this subject more. Customer satisfaction, if done properly, is a key indicator of future growth (see HBR, _The One Number You Need to Grow_ by Reichheld).

I know it seems weird that I posted this on the scrumdevelopment group... the reason I did is that we use Scrum and believe in the values of agility. I was hoping to get some pointers from this community of like-minded folks.

        Cheers,
        Hank

        On Fri, Feb 20, 2009 at 6:55 AM, Pablo Emanuel <pablo.emanuel@...> wrote:

        Roy,
         
I don't think he's interested in "software development" metrics. Actually, "software development" is usually a (very) small part of the whole service being provided, and none of the metrics he mentioned are directly related to it (unlike, say, velocity, number of unit tests per KLOC, etc.).
         
        Regards,
        Pablo Emanuel

        2009/2/20 Roy Morien <roymorien@...>

Frankly, I think you answered your own question. 'We have equivalent metrics for the manufacturing portion of our business'. First, software development is not manufacturing, so trying to say that because we have manufacturing metrics we can (and should) therefore have software development metrics does not follow.
         
To be a bit more specific, let's look at 'time to resolve defects'. Yep, maybe you can measure the time between receiving the 'defect notice' and starting to resolve it. Maybe you can measure the time between receiving the defect notice and putting the correction into production. Make of that what you can, but the fact is, that does not take into account the difficulty of finding and fixing the defect.
         
Yes, I guess you can measure 'customer satisfaction' on some sort of Likert scale, and many people do 'satisfaction' surveys. But to what end, exactly? Do you have the same customer whom you can measure, and see 'improvement'?
         
Sure, measure it all, if you can. But don't try to say that because it works in manufacturing it should work in software development. Always ask: what are we measuring, why are we measuring it, and what outcome do we wish to achieve by measuring it?
         
        Regards,
        Roy Morien
         


        To: scrumdevelopment@yahoogroups.com
        From: Hank.Roark@...
        Date: Thu, 19 Feb 2009 12:36:29 -0500
        Subject: [scrumdevelopment] Metrics

        Hello -

There is some interest in my company in establishing metrics for software product quality. We have equivalent metrics for the manufacturing portion of our business that include things like returns and allowances, customer support readiness, product first-pass yield, etc.

We are thinking that our software-centric products probably need some different metrics (while still including important metrics like returns and allowances and customer support readiness)... stuff like customer satisfaction, time to resolve 'defects', and, for hosted systems, availability. We are mostly focused on 'operational' metrics at this point, not design-time metrics (but we understand the two are interrelated).

Does anyone have any suggestions for material (books/articles) or benchmarks from other companies that you wouldn't mind passing along?

        Cheers,
        Hank









        Ryan Shriver
My current thoughts on Agile => theagileengineer.com






        --
        Robin Dymond, CST
        Managing Partner, Innovel, LLC.
        www.innovel.net
        www.scrumtraining.com
        (804) 239-4329




