> I'm working on a project wherein we'll change the
> existing V model
> to Agile. However we can't discard every existing
> process; Scrum
> seems suitable as a wrapper for those process
> elements which make
> sense to the team.
If you incrementally adopt, and if you get a practice
(such as RefactorMercilessly) out of balance with its
matching practices (such as TDD), you will invite
trouble. To adopt, either hire a coach, or start with
two practices: Test-Driven Development and Frequent
Releases.
> I expect to see differences, changes, improvements,
> even fear (some
> of the developers are getting Luddite about pair
> programming and we
> haven't even started).
PP without TDD is rough. TDD has the pleasant rhythm
of a participatory game. Without TDD, programmers spend
lots of time hunting bugs, and pairing either helps or
is agonizingly boring.
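To make that rhythm concrete, here's a minimal red-green sketch using Python's unittest. The function and test names are invented for illustration; the point is the cadence: write one small failing test, make it pass, repeat.

```python
import unittest

def fizzbuzz(n):
    # Each branch was added only after a failing test demanded it.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

class FizzBuzzTest(unittest.TestCase):
    # Red: write one small failing test first...
    def test_multiples_of_three(self):
        self.assertEqual(fizzbuzz(3), "Fizz")

    # Green: make it pass, then move to the next case.
    def test_multiples_of_five(self):
        self.assertEqual(fizzbuzz(5), "Buzz")

    def test_multiples_of_both(self):
        self.assertEqual(fizzbuzz(15), "FizzBuzz")

if __name__ == "__main__":
    unittest.main()
```

The tight test-code-test loop is what gives a pair something to take turns on.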
> So there'll be changes...but how do you prove that
> Agile really made
> the difference? For data-driven corporations to
> realise what's
> going on and apply it more widely they have to be
> shown valid
> metrics. For the last ten years I've been hearing,
> "If you can't
> measure it, you can't manage it..."
What metrics have you all collected, in the past, for
your existing systems?
> Can anyone help me answer these questions?
> - which metrics are useful in demonstrating that
> Scrum made a difference?
Read /Agile & Iterative Development: A Manager's
Guide/ by Craig Larman. It reveals the history of
Waterfall, and all the surveys and evidence that show
how poorly it has fared in practice.
It would be nice if Waterfall were a strawman argument
that only Agilistas speak of. Waterfall re-arises
every time a project goes sour, and a manager says,
"Okay, we have to spend more time planning up front
next time." To not have Waterfall, in whatever
disguise, sort features by business priority,
implement the highest ones first, and deliver to live
users as soon as possible.
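That ordering rule is simple enough to sketch. The backlog entries and value numbers below are hypothetical; the only real content is "highest business value first."

```python
# Hypothetical backlog: feature names and value scores are invented
# for illustration.
backlog = [
    {"feature": "audit log", "business_value": 3},
    {"feature": "checkout", "business_value": 9},
    {"feature": "reporting", "business_value": 6},
]

# Sort descending by business value; implement and release from the
# front of this list.
release_order = sorted(
    backlog, key=lambda item: item["business_value"], reverse=True
)
next_to_implement = release_order[0]["feature"]
```

Anything that delays delivering the top of that list to live users is Waterfall creeping back in.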
> - Is it possible to demonstrate Scrum made the
> difference, i.e.
> demonstrate success was not simply due to a change
> from the norm?
Many teams report better productivity. Larman's book
points out that in an industry where so many projects
fail, simply reducing the failure rate is itself a
significant improvement.
Now instead of covering yourselves with metrics, can
your organization honestly assess whether and how any
of your projects have failed?
If you have no samples to draw from, maybe you are
doing the right thing and shouldn't change!
> - whilst data-driven corporations will supply their
> own metric analysis of defect reduction, cycle time,
> etc., who has measured 'softer' improvements?
Some teams find such satisfaction doing Agile
development that they don't bother to measure. ;-)
> For instance, Agile
> makes great
> play of user involvement, team cohesion & learning,
> the kind of
> thing that corporations don't take much notice of
> (until staff leave
> and knowledge goes with them). Have these been
> measured? How? I'm thinking of surveys and
> interviews but that's a
> difficult area to validate.
Tom DeMarco, in his book /Slack/, has a great tale
where a client shows him a pie chart of their
expenditures, and he asks "where's turnover?" He
observed that the cost of retraining people after
others leave had been buried in the numbers, instead
of revealed as a metric.