Re: [ai-philosophy] Re: General relativity and a universe of efficient computations

  • Eray Ozkural
    Message 1 of 7, Sep 1, 2004
      On Wednesday 01 September 2004 06:29, whazateh wrote:
      > It depends on what level you look at. If you want something that is
      > true from pure physics up to interacting with humans, I believe it is
      > nonsense to speak of a universal distribution, as humans could decide to
      > play the No Free Lunch [1] game and create a program that fools
      > the learning algorithm into thinking it is simpler or quicker than it
      > actually is.

      How drastic are the NFL theorems in genetic algorithms?

      > From just a physics point of view, aren't you running into
      > epistemology: how will we actually know whether a certain universal
      > distribution is true? Take for example relativity: since we have shown
      > some relativistic effects in the real world, it suggests that, as
      > simple and speedy as they were, F=ma and all the other Newtonian
      > constructs were not the correct viewpoint, and shouldn't have had a
      > higher probability than the later relativistic ones. So if we ran some
      > experiments using a system that uses a distribution, how would we
      > verify that the experiment got the correct answer? It would be judged
      > by our own probability distributions, or the equivalent of them. To
      > paraphrase, programs that we think are intelligent are the ones that
      > would agree with us.

      Indeed, the Newtonian theory seems to have smaller axiomatic complexity than a
      relativistic one, and in addition it can be simulated faster. To us, before we
      had made any observations, it may indeed have had a higher probability of
      being correct than the relativistic one. I think our induction principle
      should be modified upon observation. How do we identify new axioms? In special
      relativity, it was a postulate that the speed of light is constant. I don't
      think such knowledge can be derived merely from our a priori knowledge of the
      world; it must be derived from a large number of observations and experiments.
      After we draw the borders of our theory, we seek a solution, perhaps again in
      terms of discovering the most efficient or the most elegant model.
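
      To make that weighting concrete, here is a rough Python sketch of a
      Solomonoff-style prior 2^-K over an assumed description length of K bits,
      updated by Bayes' rule once observations arrive. The bit counts and
      likelihoods are invented placeholders, not measured values; only the
      mechanism is the point.

      def universal_prior(description_length_bits):
          """Solomonoff-style prior: weight 2^-K for a K-bit hypothesis."""
          return 2.0 ** -description_length_bits

      # Hypothetical description lengths: the Newtonian model is assumed shorter,
      # so it starts with the higher prior.
      hypotheses = {
          "newtonian":    {"bits": 300, "likelihood_of_data": 1e-6},
          "relativistic": {"bits": 310, "likelihood_of_data": 1e-2},
      }

      priors = {name: universal_prior(h["bits"]) for name, h in hypotheses.items()}

      # Bayes' rule after observations (say, perihelion precession) that the
      # relativistic theory explains far better than the Newtonian one.
      unnormalised = {name: priors[name] * h["likelihood_of_data"]
                      for name, h in hypotheses.items()}
      total = sum(unnormalised.values())
      posteriors = {name: w / total for name, w in unnormalised.items()}

      for name in hypotheses:
          print(name, round(posteriors[name], 3))  # the posterior now favours relativity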

      > However, from a pragmatic point of view, it makes sense to model things
      > with as little computing power as possible to explain the facts, so
      > that we can react to them quickly, apart from some things such as the
      > brain and other learning systems; otherwise we run into the
      > behaviourists' problem.

      That is a good observation, in my opinion. The agent is operating in a dynamic
      environment, and he must be responsive. The same thing can be said about
      reflective thinking: if the processes of reasoning, planning, etc., are
      efficient, this greatly increases our success in the environment. That is why
      we need an on-line learning algorithm that can address the constraints of the
      real world.
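
      As a toy illustration of that constraint, here is a rough Python sketch of an
      on-line learner that updates incrementally per observation and acts within a
      fixed time budget, instead of re-fitting off-line from scratch. All names and
      numbers are made up for illustration.

      import time

      class OnlineMeanPredictor:
          """Keeps a running mean; each update is O(1), with no pass over past data."""
          def __init__(self):
              self.n = 0
              self.mean = 0.0

          def update(self, x):
              self.n += 1
              self.mean += (x - self.mean) / self.n

      def agent_loop(stream, budget_s=0.01):
          """Act after every observation, noting whether we stayed within budget."""
          learner = OnlineMeanPredictor()
          for obs in stream:
              start = time.monotonic()
              learner.update(obs)      # cheap incremental learning step
              action = learner.mean    # act on the current best estimate
              in_budget = (time.monotonic() - start) <= budget_s
              # Any leftover budget could go to deliberation (planning, reflection),
              # cut off when the deadline arrives.
              yield action, in_budget

      print(list(agent_loop([1.0, 0.0, 1.0, 1.0])))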

      > Will Pearson
      >
      > [1] http://www.aic.nrl.navy.mil/~spears/yin-yang.html

      Regards,

      --
      Eray Ozkural (exa) <erayo@...>
      Comp. Sci. Dept., Bilkent University, Ankara
      KDE Project: http://www.kde.org
      http://www.cs.bilkent.edu.tr/~erayo
      Malfunction: http://malfunct.iuma.com
      GPG public key fingerprint: 360C 852F 88B0 A745 F31B EA0F 7C07 AE16 874D 539C
    • whazateh
      Message 2 of 7, Sep 4, 2004
        --- In ai-philosophy@yahoogroups.com, Eray Ozkural <erayo@c...> wrote:
        > On Wednesday 01 September 2004 06:29, whazateh wrote:
        > > It depends on what level you look at. If you want something that is
        > > true from pure physics up to interacting with humans, I believe it is
        > > nonsense to speak of a universal distribution, as humans could decide to
        > > play the No Free Lunch [1] game and create a program that fools
        > > the learning algorithm into thinking it is simpler or quicker than it
        > > actually is.
        >
        > How drastic are the NFL theorems in genetic algorithms?


        They are not that drastic really, because people are generally
        sensible enough not to use genetic algorithms on problems that are
        deceptive.

        Deceptive functions (at least in genetic algorithm terms) are
        functions where a narrow global optimum is a long Hamming distance away
        from wide local optima.

        An extreme case of this is 000000 having the optimal fitness, with
        fitness gradually increasing from 000001 up to 111111. It is very
        unlikely that 000000 will be found by either bit flips or recombination,
        as most of the answers will be clustered around 111111.
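
        Here is a rough Python sketch of one reading of that example, as a
        count-of-ones "trap" function climbed by greedy bit-flip search (standing
        in for the mutation pressure in a GA); the specific fitness values and
        search loop are my own illustration, nothing standard.

        import random

        N = 6

        def fitness(bits):
            """Trap-style deceptive fitness: 000000 is best, otherwise more ones is better."""
            ones = sum(bits)
            return N + 1 if ones == 0 else ones

        def hill_climb(steps=1000, seed=None):
            """Greedy single-bit-flip search over N-bit strings."""
            rng = random.Random(seed)
            current = [rng.randint(0, 1) for _ in range(N)]
            for _ in range(steps):
                candidate = current[:]
                candidate[rng.randrange(N)] ^= 1  # single bit-flip mutation
                if fitness(candidate) >= fitness(current):
                    current = candidate
            return current

        # Count how often the narrow optimum is actually reached.
        hits = sum(1 for t in range(1000) if sum(hill_climb(seed=t)) == 0)
        print(hits, "of 1000 runs found the 000000 optimum")  # only a small fraction reach it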

        But as I said, most people are sensible enough to pick a different
        representation or mutation operator if this is going to be a problem. So
        it is fine when you can tune the problem space to your algorithm, or the
        other way round, but it is worth remembering when there is talk about
        universal learners.

        Will Pearson