
Re: [ai-philosophy] Provable generality?

  • Denis
    Message 1 of 38, Jul 1, 2010
      ________________________________
      From: Bill Modlin <wdmodlin@...>
      To: ai-philosophy@yahoogroups.com
      Sent: Mon, June 28, 2010 9:55:44 PM
      Subject: Re: [ai-philosophy] Provable generality?



      In your casual description of training the system for the early stages of
      processing, it sounds as though you assume that these algorithms can be
      developed once and frozen, that while it may take "a lot of training" to set it
      up in the first place, once it is done it is done.

      In the human brain, it appears the early stages retain some plasticity even
      into adulthood, and that feedback from higher levels of cognitive processing
      can modify
      the details. We can learn new feature detection algorithms and tune our
      discrimination of existing features. In other words, the training is never
      really finished for any part of the system. I'm concerned that even "adaptive"
      Levin search is not suitable for the kind of continuous iterative tuning of
      algorithms that is supported by a network of adjustable connections and
      weights. The granularity of adaptive changes is too coarse, and the
      computational work required for an adaptation is much more than is required
      for a local weight-tuning adjustment.
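
      To make the granularity contrast concrete, here is a minimal Python sketch
      (illustrative names only, not code from the thread): a local weight update
      touches each parameter cheaply, while a program-search adaptation has to
      re-scan a discrete space of candidate programs whenever the data changes.

          def weight_tuning_step(weights, grads, lr=0.01):
              # Fine-grained adaptation: one cheap local update per weight.
              return [w - lr * g for w, g in zip(weights, grads)]

          def program_search_step(training_set, programs):
              # Coarse-grained adaptation: any change to the data means
              # re-searching the discrete program space from the start.
              for prog in programs:  # assumed ordered, e.g. by length
                  if all(prog(x) == y for x, y in training_set):
                      return prog
              return None

          # Toy usage: candidate "programs" are just functions.
          candidates = [lambda x: x, lambda x: x + 1, lambda x: 2 * x]
          print(weight_tuning_step([0.5, -0.2], [0.1, 0.3]))
          print(program_search_step([(1, 2), (3, 6)], candidates))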

      I have similar granularity-related concerns about the HTM methods also under
      discussion here.

      ----------

      Bill Modlin
      ________________________________

      I want to point out that a "changing program" is an ill-defined notion: a
      program, by definition, never changes; a changing program is not a program.
      If you are watching a changing program, you are not watching the real
      program.
      And again, a program can be "not written anywhere": you can have a system
      without an explicit program.
      So the brain changes its network, etc.; this means that you cannot reproduce
      the brain exactly in order to reproduce the "changing brain" functionality.
      Using an Inverse Levin Search, what you build is the correct ultimate
      program, in the same way you would train a human (in a hypothetical
      experiment).

      Denis.
    • Eray Ozkural
      Message 38 of 38, Jan 31, 2011
        On Fri, Jul 2, 2010 at 10:58 AM, Denis <dnsflex@...> wrote:
        >
        >From: Bill Modlin <wdmodlin@...>
        >Sent: Thu, July 1, 2010 6:05:54 PM
        >Subject: Re: [ai-philosophy] Provable generality?
        >
        >
        >--- On Thu, 7/1/10, Denis <dnsflex@...> wrote:
        >
        >>  Using an Inverse Levin Search what you build is the correct ultimate program


        >
        >I think you are missing important points.

        No, I am not missing those points.
        The problem with an Inverse Levin Search is not the arrival of new
        information; the problem is an absurd requirement of resources.

        Let me try to explain:
        if you have an initial training set T1, you can run the I.L.S. and find a
        program P1. Now, if you receive a new training set T2, you can run the
        I.L.S. again on the combined set T1+T2, and there are two possibilities
        for the result: 1) the solution is again P1; 2) the solution is a
        different program P2.
        The second case means that it was impossible to find P2 without more
        information! To find P2 you need more information than T1 provides;
        finding P1 is the best you can do with only T1.
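
        (A minimal sketch of that T1/P1 versus T1+T2/P2 point, treating the
        search as "return the shortest program consistent with the data"; the
        thread never defines "Inverse Levin Search", and real Levin search would
        also weigh running time. All names below are illustrative.)

            # Candidate programs, ordered by length; a stand-in for the
            # exponentially large enumeration that makes the resource cost absurd.
            CANDIDATES = ["x", "x+1", "x*2", "x*x"]

            def shortest_consistent_program(training_set):
                for src in CANDIDATES:
                    f = eval("lambda x: " + src)  # toy "interpreter"
                    if all(f(x) == y for x, y in training_set):
                        return src
                return None

            T1 = [(0, 0), (1, 1)]                      # "x" and "x*x" both fit
            P1 = shortest_consistent_program(T1)       # -> "x" (shortest wins)
            T2 = [(2, 4)]                              # new data rules out "x"
            P2 = shortest_consistent_program(T1 + T2)  # -> "x*x"
            print(P1, P2)

        With only T1, P1 = "x" is the best answer available; P2 = "x*x" only
        becomes findable once T2 arrives.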

        I don't understand what "inverse" Levin search is here.

         

        OK, this is only theoretical, because there are not enough resources to
        do something like this.

        >
        >For one thing, situations change and new information arrives all the time.  So
        >even if your inductive engine could build the "correct ultimate program" based
        >on all the information available today, tomorrow you might need to modify it.
        >
        >For another, it is computationally infeasible to take into account all the
        >information available all at once.  You cannot really build the "correct
        >ultimate program".

        >You must instead spend computational energy developing
        >simplified views of the data, abstracting what seem likely to be important
        >aspects, features and relationships for further examination.

        But this is what the inverse search does!


        That is what Levin search does in fact, but inverse Levin search?
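
        (Since Levin search itself keeps coming up: a rough sketch of the
        standard procedure, simplified, with made-up helper names. In phase k,
        each program p with l(p) <= k is run for about 2**(k - l(p)) steps, so
        shorter programs get exponentially more time and the total work per
        phase stays near 2**k.)

            import itertools

            def levin_search(run, accept, max_phase=16):
                # `run(prog, budget)` executes toy program `prog` for at most
                # `budget` steps, returning its output or None on timeout;
                # `accept(output)` tests the goal. Both are assumptions here.
                for k in range(1, max_phase + 1):
                    for l in range(1, k + 1):
                        budget = 2 ** (k - l)
                        for bits in itertools.product("01", repeat=l):
                            prog = "".join(bits)
                            out = run(prog, budget)
                            if out is not None and accept(out):
                                return prog
                return None

            # Trivial demo: the "interpreter" just echoes the program text.
            print(levin_search(lambda p, b: p, lambda out: out == "101"))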
         
        >  Once you begin to
        >see how it fits together, you may find reasons to modify the abstraction
        >algorithms, to retain more or less detail about various features or to look for

        >new features.

        Again, this is what the inverse search does.

        > At any point in time you can have at best an approximation to the
        >"ultimate" program.

        Yes, we are always working with approximations; the problem is in general
        uncomputable.
         

        To be exact, it is semi-computable. It is computable in the limit. 
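
        (A sketch of what "computable in the limit" means here, with illustrative
        names: an anytime enumeration whose sequence of best-so-far guesses
        eventually stabilizes on the right program, while no step of the loop can
        certify that it already has.)

            def guesses_in_the_limit(programs, score):
                # Report the best candidate seen so far after each step. The
                # reported sequence converges, but the loop cannot tell you
                # when it has converged: you only ever hold an approximation.
                best, best_score, reported = None, float("-inf"), []
                for src in programs:
                    s = score(src)
                    if s > best_score:
                        best, best_score = src, s
                    reported.append(best)
                return reported

            data = [(0, 0), (1, 1), (2, 4)]
            score = lambda src: sum(eval("lambda x: " + src)(x) == y
                                    for x, y in data)
            print(guesses_in_the_limit(["x", "x+1", "x*x"], score))
            # -> ['x', 'x', 'x*x']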

        Best, 

        --
        Eray Ozkural, PhD candidate.  Comp. Sci. Dept., Bilkent University, Ankara
        http://groups.yahoo.com/group/ai-philosophy
        http://myspace.com/arizanesil http://myspace.com/malfunct
