
Re: [kanbandev] Re: Tracking and metrics with Kanban?

  • Richard Hensley
    Message 1 of 14, Mar 2, 2010
      Brad,

      How are you tracking the number of days from start to done? If you are using a start date and a stop date, and some math, you are doing the same thing we are doing. We also do double duty with the stop date to figure out how many things are getting done in a given week, or month, or quarter.
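
      In Python terms, that math is tiny. A minimal sketch (the dates below are made up for illustration, not from our board):

          from collections import Counter
          from datetime import date

          # (start, done) dates for finished cards -- illustrative values
          cards = [
              (date(2010, 1, 4), date(2010, 1, 8)),
              (date(2010, 1, 5), date(2010, 1, 15)),
              (date(2010, 1, 11), date(2010, 1, 13)),
          ]

          # Cycle time: days from start to done, per card.
          cycle_times = [(done - start).days for start, done in cards]
          print("avg cycle time:", sum(cycle_times) / len(cycle_times))

          # Throughput: the stop date doing double duty -- count cards by
          # the ISO week (or month, or quarter) their done date falls in.
          throughput = Counter(done.isocalendar()[:2] for _, done in cards)
          for (year, week), n in sorted(throughput.items()):
              print(f"{year}-W{week:02d}: {n} done")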

      On Sat, Feb 27, 2010 at 6:37 PM, Brad <bradhome@...> wrote:

      Thanks for the additional info - that all makes good sense.

      I'm not sure what the question was, but what we are tracking now is:
      - number of days from when a card on our board starts to when it is done
      - number of engineers on the team each week
      - number of cards finished each week

      We're still ramping up the project, and both our sw eng and test eng team membership, although sw is running further ahead at this point.

      One of the metrics I'm charting is the average and normal distribution of the number of days to complete a card. I'm partly using that to project potential project completion for cards which have yet to be created. I'm also using that to see trends; you have described your use of those more comprehensively than we are doing just yet, Richard.
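
      For illustration, one crude way to run that projection (every number here is invented, and it assumes future cards behave like past ones):

          import math
          import statistics

          durations = [3, 5, 2, 8, 4, 6, 3, 5]  # days per finished card (invented)
          remaining = 40                         # cards not yet created (invented)
          wip = 4                                # cards typically in flight (invented)

          mu = statistics.mean(durations)
          sigma = statistics.stdev(durations)

          # Treat the remaining work as remaining/wip sequential "slots", each
          # an independent draw from the observed duration distribution.
          slots = remaining / wip
          expected = slots * mu
          spread = math.sqrt(slots) * sigma      # std dev of a sum of iid draws
          print(f"~{expected:.0f} days to finish, +/- {spread:.0f} (1 sigma)")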

      Cheers,
      Brad Sherman



      --- In kanbandev@yahoogroups.com, Richard Hensley <hensley99@...> wrote:
      >
      > We use type of work as a dimension when slicing data during analysis. For
      > instance, we can answer the questions: what is the cycle time of a feature
      > overall? What is the cycle time of a design element? We have discovered that
      > examining fast-moving work like design elements gives us short feedback
      > loops.
      >
      > For my status reports, I interpret the metrics and charts to answer these
      > specific questions:
      >
      > - How are we tracking against the plan for the current quarter? (Simple
      > examination of the burn up chart)
      > - What are the problems we are facing?
      > - How do we know?
      > - What are we doing about them?
      > - How will we know we are improving?
      > - What is some good news?
      >
      > Each status report also includes a link to a spreadsheet with all the charts
      > derived from querying the data out of our tracking system.
      >
      > So, a question for you. You indicated you are collecting a very small subset
      > of the metrics I do. Frankly, I think you might be underestimating what you
      > have. I say this because I did for a very long time. So, here are some
      > details about how I collect the hard data that may help.
      >
      >
      > - Staff Days - Look around the bullpen on Friday, do a bit of thinking,
      > maybe ask a question or two, and then enter a number into a spreadsheet.
      > (Semi-automated)
      > - Type of Work - Our tracking system, trac, has a field. So this is
      > captured when the ticket is created. (Automated)
      > - Work Start - When a card is started, the engineer writes the date on
      > the card with a little I= in front of it. We have a defined station on
      > the board. If this is missed, we look at the commit logs for the ticket
      > and figure it out. (Manual, with an automated backup)
      > - Work Done - When a card is dropped into the done bucket (yes, it's a
      > bucket on the board), we put a date on it. (Manual)
      > - Work Start and Done - We enter these dates into a couple of custom
      > fields in trac. (Manual, auditable)
      > - Defect Open and Close - We use the date the ticket was created in trac,
      > and the date it was closed in trac. (Automated)
      >
      > On Fri, Feb 26, 2010 at 7:53 AM, Brad <bradhome@...> wrote:
      >
      > >
      > >
      > > Richard,
      > >
      > > That was a very interesting post with lots of thought-provoking details;
      > > thanks for sending it.
      > >
      > > We are also doing scrumban but collecting a very small (but overlapping)
      > > subset of metrics that you've described.
      > >
      > > - I see you collecting Type of Work, but I can't see where it is used - can
      > > you elaborate on that?
      > > - I'm interested, given the rich set of metrics that are available to you,
      > > how you typically distill them down into weekly status interpretations. Is
      > > that weekly status something graphical, combined with a text description, or
      > > just text bullet points - can you share an interesting, yet
      > > confidential-sanitized version of one?
      > >
      > > Brad Sherman
      > >
      > >
      > > --- In kanbandev@yahoogroups.com, Richard
      > > Hensley <hensley99@> wrote:
      > > >
      > > > We gather the following hard data:
      > > >
      > > > - Staff Days - How many folks were present on a given day (Important to
      > > > smooth out holidays and such)
      > > > - Type of Work - Feature, Design Element, Bug, Infrastructure Work
      > > > - Work Start and Stop Date - The dates that work started and stopped.
      > > > - Defect Open and Close Date
      > > > - Cost of a Staff Day
      > > > - Big Visible Tracking Board
      > > >
      > > > From the hard data, we calculate the following:
      > > >
      > > > - Throughput - How many things per time period
      > > > - Cycle time - How many days a thing took
      > > > - Cost - How many staff days per feature completed in a time period, i.e.
      > > > staff days per feature in Q3FY10
      > > > - Feature Complexity - Combination of feature cycle time, story cycle
      > > > time, and stories per feature per time period. We use this as a monitor
      > > > that our features remain "about the same."
      > > > - Quality - Trend of open defects: going up is BAD, going down is GOOD,
      > > > staying level at an acceptable level is OK.
      > > >
      > > > We use these metrics in our daily business:
      > > >
      > > > - Throughput is used for near term, mid term, and long term planning.
      > > > - Cycle time is used to monitor our process improvements. It is the most
      > > > sensitive metric to change for us.
      > > > - Near term plan burn down is maintained to communicate variance from
      > > > plan.
      > > > - Quality efforts
      > > >
      > > > Some business challenges that metrics have helped us with
      > > >
      > > > - Staff on boarding expense and subsequent process changes
      > > > - Truthful, transparent, and committed near term and mid term planning.
      > > > - Engineering focus on getting things done
      > > > - Identifying and fulfilling learning opportunities
      > > > - Staff readiness evaluation for complex work
      > > > - Staff performance planning and discussions
      > > > - Choosing an acceptable quality level, and committing to maintaining
      > > > that quality level.
      > > >
      > > > In general, we apply the following rules to the hard data we gather.
      > > >
      > > > - Automated
      > > > - Accessible
      > > > - Auditable
      > > > - Actionable
      > > > - Minimal
      > > > - Simple
      > > >
      > > > At the end of the day, the biggest lesson that we have learned about
      > > > metrics is that simple hard data can lead to many insights. The most
      > > > important thing about metrics is to observe the trends. We use the
      > > > metrics to back up simple observations and gut feels, because right now
      > > > all our hard data are trailing edge indicators. We use gut feel as a
      > > > leading edge indicator.
      > > >
      > > > Another piece of advice: metrics can be used to make your business
      > > > better. However, metrics can also be used as a stick by "them". We have
      > > > experienced times when engineering is the only organization with
      > > > analyzed metrics at the table. When things are great, you might get a
      > > > pat on the back. When you are in trouble, folks still ask "how the ...
      > > > did this happen?" and react with utter surprise. So, we are countering
      > > > this by communicating and educating about our metrics often. For
      > > > instance, we create a weekly status report. It's not a typical status
      > > > report that is from the gut. It is a status report that interprets the
      > > > metrics into a very few key bad news and good news bullet points. We
      > > > also have regular education sessions for all staff in our business on
      > > > how to interpret metrics. However, the most important education we do
      > > > around the metrics is a regular exposé on how metrics have been used
      > > > to identify, correct, and monitor the business challenges we face. For
      > > > our business, metrics have been a great builder of trust, even with
      > > > the hiccups and surprises. They are especially helpful and trusted
      > > > while communicating bad news, especially now that we communicate early
      > > > and often.
      > > >
      > > > Richard
      > > >
      > >
      > >
      > >
      >


    • Brad
      Message 2 of 14, Mar 3, 2010
        Richard,

        We declare the card to be started when someone (could be more than one person) takes the card and begins working to better specify the resulting functionality and to add its acceptance criteria. We declare it done when the team has validated that it passes its specified functionality and acceptance criteria.

        I'm using Excel to track the number of days, and I only record the date when a card was started and the date when it was finished. I use the NETWORKDAYS function, so the granularity for the number of days is one whole day.
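
        For anyone without Excel, a rough Python equivalent of that calculation (weekends excluded, no holiday list -- both assumptions of this sketch):

            from datetime import date, timedelta

            def networkdays(start: date, end: date) -> int:
                """Whole working days from start to end, inclusive, roughly
                mirroring Excel's NETWORKDAYS (holidays not handled)."""
                days, d = 0, start
                while d <= end:
                    if d.weekday() < 5:  # Mon=0 .. Fri=4
                        days += 1
                    d += timedelta(days=1)
                return days

            # A card started Mon Mar 1, 2010 and finished Fri Mar 5, 2010 -> 5
            print(networkdays(date(2010, 3, 1), date(2010, 3, 5)))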

        I'm using the snapshot at the end of the week to track the number of cards that are completed each week. I also track that number with a rolling average to see trends (although I'm thinking about a future where we might use some sort of SPC limits to detect positive or negative trends).
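
        A sketch of the rolling average plus one common flavor of SPC limits (an XmR chart); the weekly counts below are invented:

            weekly_done = [4, 6, 5, 3, 7, 5, 6, 4, 8, 5]  # cards/week (invented)

            window = 4
            rolling = [sum(weekly_done[i - window + 1:i + 1]) / window
                       for i in range(window - 1, len(weekly_done))]
            print("rolling avg:", rolling)

            # XmR-style limits: mean +/- 2.66 * average moving range,
            # where 2.66 is the standard XmR chart constant.
            mr = [abs(a - b) for a, b in zip(weekly_done, weekly_done[1:])]
            mr_bar = sum(mr) / len(mr)
            mean = sum(weekly_done) / len(weekly_done)
            ucl = mean + 2.66 * mr_bar
            lcl = max(0.0, mean - 2.66 * mr_bar)
            print(f"mean={mean:.1f}  UCL={ucl:.1f}  LCL={lcl:.1f}")

        A weekly count outside those limits would be signal rather than noise.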

        I think we are on the same track as you, but you're further ahead of us in terms of collecting and using complete metrics. I've learned some good stuff from what you've said your team is doing.

        Cheers,
        Brad Sherman

        --- In kanbandev@yahoogroups.com, Richard Hensley <hensley99@...> wrote:
        >
        > Brad,
        >
        > How are you tracking the number of days from start to done? If you are using
        > a start date and a stop date, and some math, you are doing the same thing we
        > are doing. We also do double duty with the stop date to figure out how many
        > things are getting done in a given week, or month, or quarter.
        >
        > <snip>
      • andrew_ux
        Message 3 of 14, Mar 13, 2010
          --- In kanbandev@yahoogroups.com, Richard Hensley <hensley99@...> wrote:
          >
          > We gather the following hard data:
          >
          > - Staff Days - How many folks were present on a given day (Important to
          > smooth out holidays and such)
          > - Type of Work - Feature, Design Element, Bug, Infrastructure Work
          > - Work Start and Stop Date - The dates that work started and stopped.
          > - Defect Open and Close Date
          > - Cost of a Staff Day
          > - Big Visible Tracking Board

          <snip>

          Thanks Richard - this is great stuff. After a meeting our team had with our manager yesterday, it was clear that there was some unease about Kanban and predictability. I was planning to spend some time this weekend crunching the numbers from the data we've gathered so far to see what kind of reports I can generate to show how much work we have left to complete our various projects.

          Then ... I stumbled across this post and now I'm stoked about creating our own status dashboard with Kanban metrics:

          http://www.panic.com/blog/2010/03/the-panic-status-board/

          Right now I'm trying to figure out all of the useful metrics and charts I can create for our own dashboard and hopefully will have something to show for it by the end of the weekend. I'll report back if I do.
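
          As a starting point, the "work left" numbers for a dashboard can come from something this small (project names and counts are placeholders):

              projects = {  # placeholder names and counts
                  "project-a": {"done": 34, "total": 50},
                  "project-b": {"done": 12, "total": 40},
              }
              weekly_throughput = 5  # finished cards per week (placeholder)

              for name, p in sorted(projects.items()):
                  left = p["total"] - p["done"]
                  pct = 100.0 * p["done"] / p["total"]
                  weeks = left / weekly_throughput
                  print(f"{name}: {pct:.0f}% done, {left} cards left, ~{weeks:.1f} weeks")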

          Andrew
        • Jeff Anderson
          Message 4 of 14, Mar 14, 2010
            Richard

            Thx for sharing this. Any chance you could share or post what your
            dashboard looks like? I'd love to take a look...

            On 3/2/10, Richard Hensley <hensley99@...> wrote:
            > Brad,
            >
            > How are you tracking the number of days from start to done? If you are using
            > a start date and a stop date, and some math, you are doing the same thing we
            > are doing. We also do double duty with the stop date to figure out how many
            > things are getting done in a given week, or month, or quarter.
            >
            > <snip>

            --
            Sent from my mobile device

            Jeff Anderson

            http://agileconsulting.blogspot.com/
          • Raoul Duke
            Message 5 of 14, Mar 15, 2010
              On Sun, Mar 14, 2010 at 9:28 PM, Jeff Anderson
              <Thomasjeffreyandersontwin@...> wrote:
              > Thx for sharing this, any chance you could share or post what your
              > dashboard looks like? I'd love to take a look...

              to interject / hijack a little bit:

              i'd be interested in hearing about When Metrics Go Wrong. i'm the kind
              of person who worries that metrics can all too easily be misused,
              either through good or bad intentions.

              sincerely.