
Re: [APBR_analysis] Re: Tendex rating

  • Michael K. Tamada
    Nov 3, 2001
      On Fri, 2 Nov 2001, Dean Oliver wrote:

      > Model analysis can be evaluated on 3 (or 4) philosophies:
      > 1. Simplicity
      > 2. Reality
      > 3. Conservatism
      > (4. Consistency)


      >#2. My favorite example is that people used to think that orbital
      >physics was all built on circles. The sun went around the earth in a
      >big circle. If it wasn't just a circle, it was circles within
      >circles. There was a good name for all of this that fails me right
      >now and I know someone out there can spell this out better. People

      I think the word you're looking for is "epicycles". Unless you're looking
      for the more general term of "adding more and more little adjustments to
      the model to make it fit reality, so much so that the model starts falling
      apart under its own complexity". Thomas Kuhn (referred to in Stuart
      McKibbin's posting) might've had a word for this too, but I forget. When
      the old model gets pushed aside by a new, different one, that's a
      "paradigm shift."

      Hey didn't you go to Caltech and shouldn't you know this stuff? ;) But
      you did nail the correct answer to the ellipse modeller: Kepler.


      > What's the 3rd thing? Well, conservativeness is important when using
      > a model to set policy (something I do at work). Say you're
      > interested in only the good defensive players in the league (for some
      > reason) and you want your stat to get those guys. Well, you want to
      > make sure you don't get the mediocre ones. Your statistic should
      > come out and there should be no argument that the method you chose is
      > only going to get good defensive players. Not sure it matters much
      > for basketball, except if you're helping the league to rewrite rules.

      I'm not sure about this one, because an overly "conservative" list of
      good defensive players will STILL get arguments -- from people who
      complain that the list left out players X, Y, and Z, who are great
      defenders.

      It's analogous to statistics: you can be "conservative" and minimize the
      probability of a Type I error by choosing a small significance level. But
      in doing so, you are automatically raising the probability of a Type II
      error.

      A list of "great defenders" which is conservative will avoid Type I
      errors, but will be making more Type II errors.
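      The tradeoff can be made concrete with a quick sketch. Everything here
      is a made-up illustration (a one-sided z-test with an assumed effect
      size and sample size, nothing from basketball data): as alpha shrinks,
      beta grows.

```python
from statistics import NormalDist

def type2_error(alpha, effect=0.5, n=25):
    """Beta (Type II error probability) for a one-sided z-test of
    H0: mu = 0 vs H1: mu = effect, with known sigma = 1 and sample
    size n.  The effect size and n are illustrative assumptions."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha)             # reject H0 when z > z_crit
    return nd.cdf(z_crit - effect * n ** 0.5)  # P(fail to reject | H1 true)

# Tightening alpha (fewer Type I errors) raises beta (more Type II errors):
for alpha in (0.10, 0.05, 0.01):
    print(f"alpha = {alpha:.2f}  ->  beta = {type2_error(alpha):.3f}")
```

      Running it shows beta climbing as alpha is tightened -- the
      "conservative list misses great defenders" effect in miniature.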

      Decision theory tells us we should look at the relative costs of Type I
      and Type II errors and choose a significance level that balances them
      so as to minimize the expected cost.
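      That cost balancing can be sketched numerically. The costs, the grid,
      and the underlying z-test setup are all assumptions for illustration,
      not anything from the original discussion:

```python
from statistics import NormalDist

def beta(alpha, effect=0.5, n=25):
    """Type II error probability for a one-sided z-test
    (same hypothetical setup as above)."""
    nd = NormalDist()
    return nd.cdf(nd.inv_cdf(1 - alpha) - effect * n ** 0.5)

def best_alpha(cost_type1, cost_type2):
    """Significance level minimizing expected cost
    cost_type1 * alpha + cost_type2 * beta(alpha), by grid search
    over alpha in (0, 0.5)."""
    grid = [a / 1000 for a in range(1, 500)]
    return min(grid, key=lambda a: cost_type1 * a + cost_type2 * beta(a))

# When Type II errors (missed great defenders) are costlier, the optimal
# alpha is larger, i.e. the list should be LESS conservative:
print(best_alpha(cost_type1=1, cost_type2=1))
print(best_alpha(cost_type1=1, cost_type2=5))
```

      The point is only the direction of the result: raising the relative
      cost of a Type II error pushes the cost-minimizing alpha upward.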

      In other words, sometimes we want a "conservative" list of great
      defenders, and other times a not-so-conservative one, depending on the
      purpose of the list.

      > I frankly hate the 4th one. A lot of times, someone has done
      > something stupid before, but because of "consistency", we have to do
      > the same stupid thing again.

      Well there's another kind of consistency, one which is a good thing to
      have: logical self-consistency. E.g. rating systems should avoid
      double-counting (unless there is a reason to put a heavy weight on that
      factor).
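      A tiny made-up example of the double-counting problem in a
      Tendex-style rating (the formula and players are hypothetical): if a
      rating adds points and field goals made, every made basket is rewarded
      twice, so two players with identical scoring output get different
      ratings purely from the redundancy.

```python
def inconsistent_rating(points, fg_made):
    """Hypothetical rating with double-counting: points already
    include every made field goal, so adding fg_made rewards
    the same baskets a second time."""
    return points + fg_made

# Two made-up players with identical scoring output:
player_a = inconsistent_rating(points=20, fg_made=10)  # ten 2-pointers
player_b = inconsistent_rating(points=20, fg_made=6)   # six 3s + two FTs
print(player_a, player_b)  # equal scoring, unequal ratings
```

      A logically self-consistent rating would credit the scoring once,
      through whichever single term it chooses.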
