Message 1 of 57, Nov 1, 2010

I often wonder why IT development in organisations is so fraught about productivity measures, such a need to 'compare team productivity', and so on.
Does the Accounts Receivable Department get assessed on their 'productivity'? Does the Marketing Department get assessed on their 'productivity'? In fact, does the executive management team get assessed on their 'productivity'? What measures of performance do these admin and operational areas use to assess performance?
Is it even possible to measure IT productivity? Productivity is a measure of output over time, but output of what? Output of documents? Output of lines of code?
Date: Mon, 1 Nov 2010 14:27:00 +0000
Subject: Re: [scrumdevelopment] Re: Metrics to report up
> Comparing teams is fraught with peril. I'm guessing you're going to wind up causing more harm than good.

People (and teams) do vary, and ignoring this variation is also harmful. How should organisations manage this?
Although comparing teams is problematic, ignoring situations where one team is much worse than others is also problematic. Hypothetically, perhaps one team is too junior or too inexperienced with the technologies they use, but not able to recognise the issues themselves (being unconsciously incompetent in some areas, perhaps). What do people suggest as ways of detecting this?
(Resolving it offers a number of choices, including training, coaching, changing the team membership, or disbanding some teams.)
I've worried about this for a while. While "you can't compare team productivity" has a lot of truth, it's also a very unsatisfactory answer when it comes to trying to get the most practical return for an organisation's buck.
Message 57 of 57, Dec 13, 8:17 PM

I have come across this scenario very often (in almost 95% of products/projects) in which defects escape, and I have seen this irrespective of the methodologies (or practices) used. Humans can make mistakes (at every phase/activity) for various reasons, but we would like to keep improving ourselves continuously. We measure 'defects escaped' and take it seriously: we get to the root cause and invest in the required training/expectations.
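For readers unfamiliar with the metric, here is a minimal sketch of how a 'defects escaped' ratio is often computed: defects reported after release divided by all defects found. The figures below are made up for illustration; the thread gives no real data.

```python
# Hypothetical counts for illustration only (not from the thread).
found_internally = 46   # defects caught before release (reviews, tests)
escaped_to_field = 4    # defects reported by users after release

total = found_internally + escaped_to_field
escape_rate = escaped_to_field / total
print(f"Defect escape rate: {escape_rate:.1%}")  # 4 of 50 defects escaped
```

Tracking this ratio per release is what makes the root-cause work described above measurable over time.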
We focus on design and code quality metrics, such as cyclomatic complexity, fan-in, fan-out, and inheritance depth, and run code quality tools to check coding-standard compliance and other parameters (such as memory leakage). We report this to management as well, to keep them in the loop. We measure test-related metrics, such as number of test cases (manual vs automated), first-time pass ratio, and number of defects (open, fixed, closed, postponed).
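As an aside for readers unfamiliar with cyclomatic complexity: it is roughly the number of independent paths through a piece of code, i.e. the count of branching constructs plus one. A rough sketch of such a counter (my own illustration, not the poster's tooling) using Python's standard `ast` module:

```python
import ast

# Node types that open an extra path through the code. This is a
# simplification: real tools (e.g. radon) count boolean operands
# individually and handle more constructs.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Count branching nodes in the source and add 1 for the entry path."""
    tree = ast.parse(source)
    branches = sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))
    return branches + 1

snippet = """
def classify(x):
    if x < 0:
        return 'negative'
    elif x == 0:
        return 'zero'
    return 'positive'
"""
print(cyclomatic_complexity(snippet))  # 3: the if, the elif, plus entry
```

In practice one would run an established tool rather than hand-roll this, but the sketch shows what the number actually measures.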
We do not focus much on velocity, thanks to this forum. We track release burn-down, number of stories committed vs number of stories achieved (to keep improving the team's commitment level), number of demos accepted or rejected by the PO, number of times the team got changes in the middle of a sprint (it is minimal but not zero yet; this helps in deciding sprint length and puts back-pressure on the PO), and a few more (customer satisfaction and employee satisfaction).
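The sprint-level tracking described above is simple arithmetic; a minimal sketch (field names and numbers are my own, not from the thread) might look like:

```python
# Per-sprint records: stories committed vs achieved, and demo outcomes.
# All figures are invented for illustration.
sprints = [
    {"committed": 10, "achieved": 8, "demos_accepted": 3, "demos_rejected": 1},
    {"committed": 9,  "achieved": 9, "demos_accepted": 4, "demos_rejected": 0},
]

def commitment_ratio(sprint: dict) -> float:
    # Stories achieved over stories committed; 1.0 means the team
    # delivered everything it signed up for.
    return sprint["achieved"] / sprint["committed"]

for i, s in enumerate(sprints, start=1):
    demos = s["demos_accepted"] + s["demos_rejected"]
    print(f"Sprint {i}: commitment {commitment_ratio(s):.0%}, "
          f"demos accepted {s['demos_accepted']}/{demos}")
```

The trend of the ratio across sprints, rather than any single value, is what informs the team's commitment level.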
For us, agile is a mix of Scrum and XP practices, hence we focus on both.
Hariprakash Agrawal (Hari),
Managing Director | Agile Coach | http://opcord.com | http://www.linkedin.com/in/hariprakash
Software Testing (QTP, Selenium, JMeter, AutoIT, Sahi, NUnit, VB/Java Script, Manual) || Consulting / Trainings (Agile, CMMi, Six Sigma, Project Management, Software Testing)

On Mon, Dec 13, 2010 at 9:11 PM, Ron Jeffries <ronjeffries@...> wrote:
Hello, woynam. On Monday, December 13, 2010, at 10:12:25 AM, you wrote:

> Sorry, but I don't see how "defects" can escape. If you're
> writing automated tests for every story, an "escaped" defect means
> that you ignored the failing test. Is that really that common?

It is possible, and not all that unlikely, to miss a test or write one incorrectly. It would be possible, I suppose, to define Done as "passes whatever tests we wrote", but that strikes me as a bit too

So an escaped defect would be something we didn't like, that we agree we understood and somehow failed to get implemented and tested.

Sorry about your cow ... I didn't know she was sacred.