
Re: [XP] Paradoxes in agile

  • Ron Jeffries
    Message 1 of 206, Apr 3, 2006
      On Monday, April 3, 2006, at 5:12:06 AM, Tim Dugan wrote:

      > Ilja Preuss wrote...
      > <<You can't learn by practice how a by-the-book process works
      > (and consequently needs to be tailored to your situation) without
      > actually *practicing* the process by-the-book for some time, as I think is obvious>>

      > Hmm...maybe you can't learn every detail and every nuance of the
      > by-the-book, sacred process...but software engineers with
      > experience in a variety of areas should be able to get a good
      > idea, without actually having to practice them all...they
      > should be able to judge what aspects are appropriate for them and
      > which are not.

      Tim, my experience and my reasoning both tell me that the above is
      simply not the case.

      I really don't get it: Could you explain how, perhaps with examples,
      a person with experience doing other things, but not some
      recommended thing, could draw valid conclusions about the usefulness
      of that thing?

      > One might also consider that agile methods were not a gift from
      > the gods, but something that some people made up based on their
      > own particular needs and have evolved over time--and hopefully are
      > still evolving--that someone would tailor the agile processes to
      > their own needs is no great sin, but, rather, normal innovation.

      No one is saying not to tailor them. We are saying that people don't
      know what the methods are actually about until they experience them.

      It seems to me that I have a pretty good imagination, but it's not
      likely that I can perfectly imagine how something as complex as
      software development would go if I did things I've never done.

      > It is possible that that tailoring might mean completely violating
      > the spirit of the process, which could lead one to say "that isn't
      > really Agile". But to think a process has to be practiced exactly
      > one way is just naïve. And really is not consistent with human
      > nature.

      No one is saying to do the process exactly forever. We are saying
      that the way to learn the effects of the practices as written is to
      try them as written.

      It might be that people can make up a better process, with better
      practices, without trying the ones we write about. My experience
      visiting project teams suggests that this is rarely the case.

      > <<If you've never practiced a process by-the-book, you simply
      > don't know whether your changes are actual improvements over the
      > original, because you don't have a reference point.>>

      > That's not necessarily true. For example, if the process calls
      > for pair programming but I only have one developer...? Changing
      > the process to meet my needs is clearly a benefit.

      Can you think of an example where it would be obvious that an
      untried process element should be changed but not because it's
      impossible to do?

      I visit teams with eight programmers who have decided, without
      trying it, that pair programming won't help them.

      I visit teams where the programmers write code, maybe test it
      manually, and then let the testers deal with it.

      I visit teams where they work for weeks or months on technical tasks
      that make no sense to the product owners, and then wonder why the
      product owners aren't impressed with progress.

      I visit teams where they don't do customer tests, and they work for
      weeks or months on features and then, surprise! the features aren't
      what the product owners actually wanted.

      In many of these cases I manage to influence them to try things like
      pairing and testing and focusing on customer needs, and things
      improve. They didn't think those things applied "in our situation".
      It turns out that they just didn't know how to apply them.

      Ron Jeffries
      It's easier to act your way into a new way of thinking
      than to think your way into a new way of acting. --Millard Fuller
    • David Carlton
      Message 206 of 206, Apr 13, 2006
        [ Sorry for the slow response on this one - it took a while for me to
        get around to digging through the appropriate archive. ]

        On Mon, 10 Apr 2006 22:26:57 -0700, William Pietri <william@...> said:
        > David Carlton wrote:

        >> I can't quite remember what happened over the next few months, but
        >> at some point our mood improved. We certainly got better at
        >> pairing; it is a skill that has to be learned, and perhaps we were
        >> particularly slow learners.

        > An alternative explanation is that pairing is hard to learn. I think
        > it's the hardest one to learn from a book because it's so
        > intangible.

        It certainly seemed to be taking us longer than the impression I got
        from, say, _Pair Programming Illuminated_.

        > How much is your team pairing now?

        I don't have a good feeling - half the time, maybe, hopefully on tasks
        for which pairing is more important?

        Some situations in which we're not pairing:

        * Some people work at home one day a week, and one team member works
        late a couple of days a week.

        * Some of our tasks involve running long (ten minutes, say) tests,
        e.g. when doing performance measurement or when testing a machine
        failure scenario in the full end-to-end environment. Those tasks
        have a lot of dead time, so we don't pair through all parts of them.
        (We do pair when those tests turn up a surprise.)

        * Sometimes there's an odd number of people in the office.

        >> we wrote down lists of things
        >> we liked and things we didn't like [...] (And
        >> we also made a list of situations where we thought pairing was
        >> especially helpful.)

        > Would you have those lists handy? I'm very interested to see them, and I
        > imagine others would be interested as well.

        Here are my notes from a review meeting. (We'd also introduced daily
        standups at the same time.)

        We liked the organizational effects (of stand-up meetings, not pair
        programming per se), and the effects on knowledge and quality. No
        stunning positive stories, but general positive feelings.

        When it came to productivity, we saw both positive and negative
        effects. The managers who were present didn't think that
        productivity was unacceptably low, so that wasn't problematic.

        We had issues with the mechanics of pair programming; also, none of
        us really look forward to pair programming, even if we basically
        enjoy it while we're doing it.

        Changes we discussed:

        * We'll want to tweak our algorithm for choosing the driver. In
        expert/nonexpert pairs, the nonexpert should drive whenever
        possible. The task owner should drive less often. We should
        switch roles more frequently, and try to have the navigator play a
        more active role.

        (Good advice; also, currently, our tasks are shorter and we don't
        necessarily have "task owners".)

        * We added a 1:30pm official stand-up meeting time. In our stand-up
        meetings, we'll explicitly discuss whether we think each task will
        be helped by pairing. We'll actively work to make sure that some
        time gets scheduled for tasks that won't be helped by pairing.

        (We later got rid of the 1:30 stand-up meeting, returning to just one
        meeting in the morning.)

        * We'll make sure [new hire] gets a monitor and keyboard. [David's
        boss] and David will see if they can find a way to set up some
        experimental pairing stations.

        (Yes, we're a little dysfunctional about providing equipment for new
        hires. He did have a laptop, at least. We never got around to
        setting up experimental pairing stations; using a straight part of a
        desk has worked well enough for us.)

        Tasks that we think pairing is good for:

        * Writing production code. (This is the most important place to pair.)
        * Investigating nontrivial bugs.
        * Doing interesting refactorings.

        Tasks that we think pairing isn't good for:

        * Triaging bugs.
        * Doing boring refactorings (e.g. extract method).
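
        [ As an aside, not part of the original thread: "extract method" is
        mechanical enough that a short sketch shows why it rarely needs a
        pair. This is an illustrative Python example; the function names
        are invented. ]

        ```python
        # Extract method: pull a chunk of logic out of a larger function
        # into a named helper, without changing behavior. The "before" and
        # "after" versions below return identical results.

        # Before: one function mixes price parsing and totaling.
        def report_before(lines):
            total = 0.0
            for line in lines:
                name, _, price = line.partition(",")
                total += float(price)
            return f"{len(lines)} items, total {total:.2f}"

        # After: the parsing step is extracted into its own helper.
        def parse_price(line):
            _, _, price = line.partition(",")
            return float(price)

        def report_after(lines):
            total = sum(parse_price(line) for line in lines)
            return f"{len(lines)} items, total {total:.2f}"
        ```

        The refactoring is a purely local, behavior-preserving transformation,
        which is why a solo programmer (plus tests) handles it fine.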

        Tasks where it has its plusses and minuses:

        * Writing unit tests for untested code.
          * Bad: grunt work.
          * Good: knowledge transfer, keeps you honest, can lead to
            interesting refactorings.

        (These days, we are rarely writing unit tests for untested code except
        in service to another task; if we were, I tend to think that it should
        be done while pairing.)

        * Writing e2e tests.
          * Bad: grunt work, long cycle times.
          * Good: knowledge transfer, typos are more important (because of
            long cycle times).

        (Pairing while the actual tests are running is not very productive,
        but it is useful while writing the tests. And it's hard in my
        experience to pair in small spurts - it messes up time management no
        end.)
        Also, we decided to require code reviews for almost all changes that
        were developed while not pairing. This helped somewhat, but isn't
        nearly as good as pairing; one advantage is that it sensitized us more
        to the benefits of pairing.

        >> This is actually one area where I'm very curious about the XP holy
        >> land - I still can't conceive of going months, years without bugs
        >> escaping. [...] (And other XP practices have helped us
        >> significantly reduce the number of bugs, especially the "I have no
        >> idea how long it will take to fix that" bugs, which is great.)

        > Do you happen to have any data on previous and current bug rates?

        No, sorry...

        David Carlton