Re: [ai-philosophy] Re: [FOM] The Lucas-Penrose Thesis vs The Turing Thesis

  • Anssi Hyytiäinen
    Message 1 of 32, Oct 31, 2006
      Marvin Minsky wrote:

      >
      >
      > Umm, you have too much respect for evolution, considering that most
      > species end up in restricted niches, or even as simplified parasites.
      > As for your claim that "you cannot really tell the system in any
      > explicit sense how it should perceive 'shadows' and 'solid geometry'
      > etc," David Waltz actually did that rather well in his "Generating
      > Semantic Descriptions From Drawings of Scenes With Shadows"
      > November 1972,
      > ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-271.ps
      > ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-271.pdf
      >
      > and his programs actually worked rather well.


      Thank you for the links to the publication.

      I think I must have been misunderstood though. The program that is being
      described takes a line drawing and turns it into a sensible 3D scene,
      which it expresses in terms of edges and junctions. This could be called
      semantical interpretation in some sense, but the programmer has still
      chosen edges and junctions to be the "things" the program is capable of
      expressing in an explicit sense.
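
      To make that point concrete, the core of such a program can be
      caricatured as constraint propagation over a fixed catalogue of junction
      labelings. Here is a rough Python sketch of the idea (not Waltz's actual
      program; the catalogue below is a made-up toy, and a shared edge is
      simply required to get the same symbol from both of its junctions, which
      simplifies the real label conventions):

      # A rough sketch (not Waltz's actual program) of junction labeling by
      # constraint propagation. JUNCTION_LABELINGS is a made-up toy catalogue:
      # each junction type maps to the edge-label tuples it is allowed to
      # take ('+' convex, '-' concave, '>'/'<' occluding).
      JUNCTION_LABELINGS = {
          "L": {("+", "+"), ("-", "-"), (">", "<"), ("<", ">")},
          "arrow": {("+", "-", "+"), (">", "+", "<"), ("-", "+", "-")},
      }

      def waltz_filter(junctions, shared_edges):
          """junctions: {name: junction type}
          shared_edges: list of (junction_a, slot_a, junction_b, slot_b)
          saying which slot of which junction refers to the same edge."""
          # Start with every labeling the catalogue allows for each junction,
          # then prune labelings that no neighbour can agree with.
          candidates = {name: set(JUNCTION_LABELINGS[jtype])
                        for name, jtype in junctions.items()}
          changed = True
          while changed:
              changed = False
              for a, sa, b, sb in shared_edges:
                  keep_a = {la for la in candidates[a]
                            if any(la[sa] == lb[sb] for lb in candidates[b])}
                  keep_b = {lb for lb in candidates[b]
                            if any(lb[sb] == la[sa] for la in candidates[a])}
                  if keep_a != candidates[a] or keep_b != candidates[b]:
                      candidates[a], candidates[b] = keep_a, keep_b
                      changed = True
          return candidates

      # E.g. two L-junctions sharing an edge (slot 1 of "j1" is slot 0 of "j2"):
      print(waltz_filter({"j1": "L", "j2": "L"}, [("j1", 1, "j2", 0)]))

      The relevant point is that the catalogue itself is chosen by the
      programmer beforehand, so the program can only ever express a scene in
      those terms.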

      This can be useful for something of course, but the original assertion
      "You cannot really tell the system in any explicit sense how it should
      perceive...'" was made in the context of what "should be important to
      grasp by anyone who wants to build an AI that is capable of
      creativity/semantical understanding". This is not what the above
      mentioned program is capable of in the sense that its "elementary
      concepts of reality" are rigidly defined.

      Plainly put, we can hardly say our elementary comprehension of reality
      is edges and junctions and such things. What really are the most
      elementary features that our brain is expressing when it has interpreted
      some visual data? I claim there are no such elementary features in any
      explicit sense, but rather the data is expressed in terms of "semantical
      concepts" that have some meaning by the virtue of our learning
      processes. So far so good, but there's a catch that seems to be often
      times overlooked.

      Let's imagine a cortex that is expressing that a "round" feature exists
      in visual data. Imagine this expression existing physically as a
      spatial/temporal pattern of firing neurons. Can we say there exist such
      patterns in reality that just metaphysically have the meaning of
      "roundness" to whatever system is producing the pattern? Hardly. What
      must happen in the brain, in one sense or another, is that the conception
      of roundness is not any independently definable thing; it is, for
      example, what a straight line is not. A straight line, in turn, is not
      definable without saying it is what roundness is not.

      It is fairly straightforward to conclude that no concept we can be aware
      of has any independent meaning in itself. Rather, it can only have
      meaning when it is related to other concepts. This is what I mean by our
      worldview being a self-supported one, and this is why we can only
      comprehend a semantical description of reality, but not reality itself
      directly. (This is why, when we look at a picture of water and patches
      of land, we are not simply aware of some elementary features of it, but
      rather we wonder whether we are looking at an "ocean" with "islands" on
      it or "land" with "rivers" on it. And this is why we can wonder how deep
      a "cavity" on the side of a mountain has to be before it turns into a
      "cave", etc...)
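
      To caricature this in code, here is a toy sketch (nothing more than an
      illustration of the point) in which a "concept" has no intrinsic content
      at all, and asking for its meaning can only return the relations tying
      it to other concepts:

      # A toy caricature of the "self-supported worldview" point: a concept
      # node has no intrinsic definition; its only available "meaning" is the
      # web of relations connecting it to other concepts.
      from collections import defaultdict

      class ConceptNetwork:
          def __init__(self):
              self.relations = defaultdict(set)   # concept -> {(relation, other)}

          def relate(self, a, relation, b):
              self.relations[a].add((relation, b))
              self.relations[b].add((relation, a))

          def meaning(self, concept):
              # The only "definition" available is the set of relations.
              return self.relations[concept]

      net = ConceptNetwork()
      net.relate("round", "is-what-the-other-is-not", "straight")
      net.relate("island", "is-what-the-other-is-not", "ocean")
      print(net.meaning("round"))   # {('is-what-the-other-is-not', 'straight')}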

      In other words, when you force an AI system to express sensory data in
      terms of any "elementary concepts", you are killing its capability of
      semantical reasoning: its capability of learning new and equally valid
      ways to look at the same systems (when the problem at hand so requires),
      its capability of predicting the behaviour of new systems (composed of
      old components), and thus its capability of creating new systems itself
      by predicting their behaviour in semantical terms.

      Here's a funny related video clip:
      http://www.compfused.com/directlink/833/
      I suspect what is happening here is that there has been glass in the
      door, and the cat has assumed, in its worldview in some sense, that it
      cannot pass through the door frame. But its idea of why this is so is
      not similar to how we think of it in terms of solid matter and
      see-through materials, etc... Its worldview is much simpler than ours,
      but what matters is that it is useful for its behaviour. Of course, in
      this case this particular cat has formed a worldview in which the guy
      passing through the door frame has no bearing on whether the cat itself
      can pass through the frame or not.

      We find this sort of behaviour incredibly dumb, but it reminds me of our
      ideas of quantum mechanics, in that clearly our ideas of what exists are
      radically wrong, just like the ideas of this cat... Anyhow, this method
      of building a semantical idea of reality grants the cat the ability to
      predict dangers and to move its muscles in meaningful ways to get
      around in a dynamic environment (to figure things out), and similarly it
      grants us the ability to build scientific models of reality that can be
      used to predict the behaviour of semantical objects in new situations.


      >
      > You surely are right to some extent when you say, "you will merely
      > force your own incomplete ideas of the world on it and won't allow it
      > to form its own idea about the world, etc." However, then you go on to
      > suggest that every creature will end up doing better than what a
      > community of scientists and thinkers can do.


      I didn't mean to claim every "creature" will end up doing well. The
      comment about retarded monkeys merely said that many less capable
      intelligent beings of natural evolution exhibit such complex behaviour
      just in using their muscles to jump fluently around in trees that one
      can hardly hope to be able to "intelligently design" a system of the
      same capacity. Perhaps it is possible, but for this behaviour we are
      already approaching such complexity that it might be easier to
      intelligently design an evolutionary environment that promotes such
      behaviour. The result would probably be too complex to be
      understandable (at least when we talk about a natural system where
      things truly occur in parallel).

      > In fact, most pre-intelligent, non-social creatures are so bad at this
      > that they need to lay hundreds or thousands of eggs -- because most of
      > them soon make fatal mistakes. Perhaps you should reboot your thinking
      > to see if you can come up with some different conclusions that don't
      > soon get stuck on non-optimal peaks -- which is what most evolutionary
      > searches do. Are they really "much more efficient" than systems that
      > are based on "intelligently designing"? Surely, the best way to deal
      > with a new environment is to ask some expert (or community) about the
      > best way they've found to deal with it.

      I certainly don't want to trivialize the difficulties associated with
      building an evolutionary simulation properly. This you must do by
      intelligently designing it, of course, using semantical concepts to work
      things out; often the system will behave in ways you didn't
      predict, and it probably is not possible to produce the required level
      of dynamics in a virtual environment with current hardware, etc...

      Also, let it be said that there exist a number of "intelligently
      designed" learning systems that seem to show potential as platforms for
      semantical learning, like perhaps Jeff Hawkins' idea of predictive
      memory networks, which can form a hypothesis that some pattern is
      "the same thing" as something experienced before (he describes this in
      the book On Intelligence).
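
      As a very rough illustration only, and not Hawkins' actual algorithm,
      the gist can be caricatured as a memory that stores transitions,
      predicts what should come next, and treats a new pattern as "the same
      thing" as a stored one when its predictions keep succeeding:

      # A minimal sketch, loosely inspired by the memory-prediction idea
      # (my own toy, not the algorithm from On Intelligence): store observed
      # transitions, predict the next token, and score how "familiar" a new
      # sequence is by how often the predictions hold.
      from collections import defaultdict, Counter

      class PredictiveMemory:
          def __init__(self):
              self.transitions = defaultdict(Counter)  # token -> next-token counts

          def learn(self, sequence):
              for current, nxt in zip(sequence, sequence[1:]):
                  self.transitions[current][nxt] += 1

          def predict(self, current):
              counts = self.transitions.get(current)
              return counts.most_common(1)[0][0] if counts else None

          def familiarity(self, sequence):
              # Fraction of transitions predicted correctly: a crude hypothesis
              # score for "this is the same thing I saw before".
              hits = sum(self.predict(a) == b
                         for a, b in zip(sequence, sequence[1:]))
              return hits / max(len(sequence) - 1, 1)

      memory = PredictiveMemory()
      memory.learn("abcabcabc")
      print(memory.familiarity("abcab"))   # close to 1.0: recognized pattern
      print(memory.familiarity("azqxt"))   # close to 0.0: novel pattern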

      Anyhow, when using evolutionary processes to produce complex behaviour,
      it should not be a surprise or a problem that an evolutionary process
      produces a large number of "poor results". And second, when the
      evolutionary process is understood more properly, it becomes quite
      ambiguous what constitutes a "poor result". After all, classifying an
      evolutionary process into "organisms" and different "species" is a
      completely semantical issue,
      as we are always looking at colonies of colonies of colonies (of
      "semantical objects").

      The parts of our genome that don't appear to do anything but are rather
      just parasites getting copied along with the rest of the genes could
      perhaps be called poor results, but then they may play a useful part
      later on in evolution. A lowly spider cannot be thought of as a poor
      result unless we can be sure its evolutionary path won't have a useful
      effect on ours or on that of some other so-called "success". An ant should
      not be thought of as an organism because it is the ant colony that is
      the "survival machine" of the genes, and the ants are more like the
      cells of the "ant colony" survival machine. The single ants are in a
      sense "less intelligent" than the colony as a whole. It is all quite
      ambiguous.

      The natural evolutionary process should be understood as a whole, from
      the quantum level to the complexity of whole ecosystems or human
      societies, and all the ambiguity that comes along with classifying it
      into parts is also a result of our intelligent processes working that
      way: us building
      a semantical worldview; classifying reality into "sensible objects" that
      are not "real" objects, but merely semantical objects...

      Likewise, if we talk about virtual evolution, like Avida, then the
      single "organisms" in it should be viewed as analogous to, say,
      molecules in natural evolution. Intelligent behaviour should be expected
      only if they can form colonies that interact with each other, producing
      new abstraction layers on top of abstraction layers.

      I am using so many words on this issue just to clarify further how
      ambiguous all our ideas about organisms and "things" are, and to clarify
      how restrictive it would be to build an AI system that is, say, trying
      to recognize "an organism" instead of forming its own ideas about how
      reality is. And to underline how difficult it can be for us to design a
      system that can form an expression of sensory data without actually
      defining the elements it can use to express it. And to
      underline how limited all the current simulated evolutionary processes
      really are in their possibility spaces.

      But if you can harness the power of the evolutionary process properly,
      it can be a good method of finding some massively complex stable system
      or, to put it another way, of finding solutions to extremely complex
      problems (in this case "how to build an efficient learning machine").
      The difficult part is in building an environment where such a solution
      can actually be found; where an appropriately complex system can come
      to exist.
      Yet at some complexity level it becomes easier to produce the proper
      evolutionary environment than to produce the complex system(s) directly.
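
      Just to sketch the general shape of the idea (a toy genetic algorithm,
      obviously not a claim about how a real learning machine would be
      evolved): the designer's effort goes into the environment, i.e. the
      fitness measure and the variation operators, while the solutions
      themselves are found rather than designed.

      # A toy genetic algorithm: the "environment" is the fitness function
      # and the mutation operator; the solutions are never written by hand.
      import random

      TARGET = "predictive worldview"          # hypothetical stand-in for fitness
      ALPHABET = "abcdefghijklmnopqrstuvwxyz "

      def fitness(genome):
          # Count positions where the genome matches the target.
          return sum(g == t for g, t in zip(genome, TARGET))

      def mutate(genome, rate=0.05):
          return "".join(random.choice(ALPHABET) if random.random() < rate else g
                         for g in genome)

      def evolve(pop_size=200, generations=300):
          population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                        for _ in range(pop_size)]
          for _ in range(generations):
              population.sort(key=fitness, reverse=True)
              parents = population[:pop_size // 5]          # crude selection
              population = [mutate(random.choice(parents)) for _ in range(pop_size)]
          return max(population, key=fitness)

      print(evolve())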

      -Anssi


    • anssihyytiainen
      Message 32 of 32, Nov 7, 2006
        --- In ai-philosophy@yahoogroups.com, "TARIK ÖZKANLI"
        <tozkanli2023@...> wrote:
        >
        > What is the difference between the semantics of a poem and the
        > semantics of a logical proof?
        > Is there a fundamental difference?

        Not really. The semantics of a logical proof, such as any ontological
        interpretation of quantum physics or relativity, is simply a
        description of semantical objects and their behaviour in a semantical
        sense.

        The logical proof itself, when rid of all semantics, can only say
        that when semantical object X and semantical object Y come together
        in such and such a way, then a semantical phenomenon Z occurs.
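
        To illustrate with a toy example of my own: the "proof" below is
        nothing but uninterpreted symbol manipulation, and any meaning of X,
        Y and Z has to be supplied separately by whatever semantical reading
        we attach to the symbols.

        # A toy derivation over uninterpreted symbols: the rule only says
        # "X and Y together yield Z"; what X, Y and Z *mean* is not part of it.
        RULES = {frozenset({"X", "Y"}): "Z"}

        def derive(facts):
            derived = set(facts)
            for premises, conclusion in RULES.items():
                if premises <= derived:
                    derived.add(conclusion)
            return derived

        # Two different readings of the very same derivation:
        #   physics reading:  X = "electron", Y = "positron",  Z = "photons"
        #   everyday reading: X = "spark",    Y = "dry grass", Z = "fire"
        print(derive({"X", "Y"}))   # {'X', 'Y', 'Z'} under either reading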

        Ontological interpretations of some predictive math are
        basically "poetic descriptions of what occurs".

        What are usually called "poems" are not just pure semantics either.
        They are logical constructions, which we describe in terms of
        semantical objects and behaviour. In other words, a poem must "make
        sense" before it can be called a poem rather than random noise :)