how do I find bottlenecks?
- Recently, there's been a Google-related discussion going on in the XP
mailing list; that reminded me of something Mary said in one of her
talks at Agile 2006. Namely: optimizing locally is not only not
useful, it can be actively harmful. In that context, she suggested
that Google's policy of letting people spend 20% of their time
pursuing pet projects actually helps their productivity.
I'm a manager of a software team; I'm trying to figure out what, if
any, relevance this has to us. The flip side to Mary's claim is that
I doubt that our extended group (us plus all the other teams
working on the same product) would have a productivity increase if
they let every single person spend 20% of their time working on pet
projects. Maybe it would - people would be more energized, and it
would increase the number of new product ideas floating through the
air - but, strictly from a queuing theory point of view, I'm a little
dubious. Instead (I'm taking this from Theory of Constraints), there
are likely to be bottlenecks in our system that are determining the
extended group's productivity level; if my team is a bottleneck, then
we shouldn't lightly give up on-task time.
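The queuing-theory intuition here can be put in a toy model (the stage names and capacities below are entirely invented for illustration): in a serial pipeline of work stages, steady-state throughput is capped by the slowest stage, so local optimization anywhere else buys nothing.

```python
# Toy illustration of the Theory of Constraints point above: overall
# throughput of a serial pipeline equals the capacity of its slowest
# stage. Stage names and capacities (work items/week) are made up.

def pipeline_throughput(capacities):
    """Steady-state throughput of a serial pipeline (items/week)."""
    return min(capacities)

stages = {"specify": 10, "implement": 6, "test": 4, "release": 8}
print(pipeline_throughput(stages.values()))  # 4 - capped by 'test'

# Doubling a non-bottleneck stage (a local optimization) changes nothing:
print(pipeline_throughput({**stages, "implement": 12}.values()))  # still 4

# Improving the bottleneck is the only change that raises throughput:
print(pipeline_throughput({**stages, "test": 7}.values()))  # 6; 'implement' is next
```

Which is exactly why it matters whether my team is the "test: 4" of the larger system or the "release: 8".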
So: what do I do to determine if we're a bottleneck?
(Maybe this is explained in the new book; it's next in my to-read list.)
Thinking about this, one interesting aspect of XP is that it broadens
potential bottlenecks, eliminating many traditional ones: bottlenecks
happen when there's a scarce resource in high demand, but XP's
knowledge sharing means that individual knowledge is much less likely
to be a scarce resource. So either the team as a whole isn't a
bottleneck, or everybody on the team is part of a bottleneck! And
then a further advantage is that, because of the relative
transparency of the value stream map in an XP context, it's probably
easier to figure out where the bottlenecks are.
Unfortunately, this transparency is an area where my team (and
surrounding groups) isn't doing so well: our Customer interaction
isn't as good as I'd like, and we don't yet have frequent real
(non-internal) releases. Perhaps as a result, my vision is a bit
muddled: it seems very useful to figure out where the bottlenecks are
in the process, but I'm having a hard time figuring that out.
I guess one traditional answer is "look where work is piling up". On
the one hand, there's no shortage of requests for us to do stuff, so
you could say that work is piling up before us. But if there are
further bottlenecks downstream from us, then that's kind of
irrelevant; I'm having a hard time seeing whether or not that is the case.
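One way to make the "work piles up" heuristic concrete is a small simulation: push a steady stream of work through serial stages and watch which queue grows without bound. Everything here (rates, number of stages) is invented for illustration.

```python
# Sketch of the "look where work is piling up" heuristic: the queue
# in front of the bottleneck grows without bound, while queues
# downstream of it stay empty. All rates here are invented.

def simulate(arrival_rate, service_rates, weeks):
    """Deterministic fluid model: each stage finishes at most its
    service rate per week; the excess waits in that stage's queue."""
    queues = [0.0] * len(service_rates)
    for _ in range(weeks):
        incoming = arrival_rate
        for i, rate in enumerate(service_rates):
            queues[i] += incoming
            done = min(queues[i], rate)
            queues[i] -= done
            incoming = done  # finished work flows to the next stage
    return queues

# Requests arrive at 8/week; the third stage can only handle 5/week.
print(simulate(8, [10, 9, 5, 8], weeks=20))
# -> [0.0, 0.0, 60.0, 0.0]: the pile in front of the bottleneck grows
#    by 3 a week, while downstream queues stay empty.
```

The catch, of course, is the one above: a backlog in front of *us* only identifies us as the bottleneck if nothing downstream of us is even slower.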
- On Mon, 09 Oct 2006 09:41:26 +0100, allan kelly <allan@...> said:
> David Carlton wrote:
Some reactions:
>> We have had (monthly) regular internal releases in the past. We've
>> had two problems with those, though:
>> * They're not real releases: nobody in the outside world is keeping us
>> honest as to their quality. We try to do a good job of keeping
>> ourselves honest, but an internal release (or even one for partners)
>> just isn't the same as an external release.
> I find this worrying. It sounds like people only take quality
> seriously when a customer is in sight. So, for the first 11 months
> of a 12 month project quality can take a holiday.
* Yes, it is worrying: I am worried!
* We have a few kinds of quality problems. I think we're getting
better on outright bugs, though there's room to go there.
Specification mismatches seem to be a harder problem to solve.
* I don't think I'm unique in feeling that internal releases are
different from external releases. It's the same sort of reasoning
that recommends that, in XP speak, your Customer be an actual
customer, not a customer representative. Or that your daily
deployments (if you're that far) actually deploy, they're not just
builds that could be deployed. There's a reason why the value
stream map ends when the customer actually gets the value, after all.
>> * They're sometimes not fast enough - if we learn now that a big
>> potential customer wants to start a trial in three weeks, and new
>> feature X is necessary for the trial, then sticking to a monthly
>> heartbeat isn't going to do us much good.
> There are a few ways to tackle this.
> Firstly, this seems to contradict your point above. Surely you want
> to stay as close to release quality at all times so you can just
> take what you're working on, add feature X and ship it over? Provided
> your internal releases were good quality this wouldn't be a problem.
And, indeed, our internal releases are good enough quality that this
isn't a problem: we have successfully created trial releases in short
order. All I'm saying is that it means that we're not on a regular
heartbeat.
> Second, consider what you call your releases. Say your internal
> releases were called Beta, and you had 4 iterations to each release.
> Then, again assuming each iteration completed with high quality, the
> software at the end of each iteration could be called an Alpha.
> Provided you set expectations with your trial customer they should
> be happy to work with a Beta or an Alpha. Trials don't need full
> releases; make it clear to the customer that you can have an Alpha
> with the feature available for the trial, and say that by the time
> the trial is finished you will have a release version available.
That's a good idea - I'll think about that.
> I suspect, from what you say and my own experience, that when your
> customer staff come back with a request you drop your current plans,
> work out how to handle their request, do it, ship something of
> dubious quality and then try to work out where you are.
I don't think this is entirely true. I don't think our quality is as
bad as you're getting a picture of. We do reprioritize work pretty
often; I guess it's not clear to me that this is inherently a problem.
(Isn't our ability to do that supposed to be one of the advantages of being agile?)
What is bad is if either our interim work is of low quality or if we
implement something that ends up ultimately not to be useful. We're
working to avoid the former. I don't really understand how to avoid
the latter; I wish I did.
> Consequently, you are thrashing, changing direction and priorities a
> lot. (This is also a worry for product strategy if you are driven
> by ad hoc customer requests. Do your product managers really know
> what should be in the product?)
In all honesty, I don't think anybody in the world knows what should
be in this product. (And if that worries you, well, it worries me
too.) We're trying to provide new capabilities in an existing space;
we have a compelling dream for the new capabilities, but nobody really
understands how the details will play out. And our product will be
part of a quite complex ecosystem: we need to work with others to
bring the new capabilities to fruition. At the same time, to displace
existing deployments, we have to be able to integrate with systems
that are already in place. And there is a woeful lack of
standardization, which means that, if we want to target five different
deployments, we probably have to do five different integrations.
It's taken us a while to find a good strategy for this. Our current
favorite one is to partner with systems integrators; I think that will
help a lot, because it will let us sell basically the same system to
multiple individual deployments.
> Most likely your customers really don't want it tomorrow.
Yes, that is true.