
Re: [ai-philosophy] RE: On mathematics

  • Sergio Navega
    Nov 4, 2013
      > Computational AI theory holds that there is nothing within the
      > space of human level "intelligence" that cannot be captured
      > programmatically by a universal machine.
       
      John, this is important: I agree with all that! The position
      I'm advocating may not be obvious, so let me try to make my
      point clearer. I have nothing against this idea that a "sufficiently
      powerful computer" could capture all organic/computational processes
      happening in a homo sapiens body. Of course this is theoretically
      possible.
       
      The problem I see is regarding how to construct a theoretical
      (scientific, methodological, conceptual) foundation to do that (other
      than the "brute force" approach, which would require a computer
      the size of the solar system).
       
      What all AI researchers are trying to do now is to abstract some
      concepts from the workings of a brain in order to implement an
      equivalent process in a computer. I think this is not working.
       
      Let's think about another complex thing: imagine that we must
      design a computer program to grow a Palm tree. AI researchers
      are doing the equivalent of grabbing the genetic code of
      a Palm tree and "running" it in a huge computer. All I'm
      saying is that this particular theoretical stance will not
      work (and in the case of Palm trees it doesn't, because
      genetic codes are just HALF of the story: without simulating
      the interactions of the genes with the environment one cannot
      grow the whole thing).
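       
      To make the analogy concrete, here is a toy sketch in Python
      (invented names, nothing like real biology) of the difference
      between "running the genome" alone and simulating the
      gene-environment interaction:
       
        # Toy illustration: growth emerges from the interaction of
        # genome and environment, not from the genome alone.
        def grow(genome, environment, steps=100):
            height = 0.0
            for _ in range(steps):
                rate = genome["base_rate"]        # what the genetic code specifies
                rate *= environment["sunlight"]   # modulated by external inputs
                rate *= environment["water"]
                height += rate
                environment["water"] *= 0.99      # the tree also changes its environment
            return height
       
        genome = {"base_rate": 0.5}
        print(grow(genome, {"sunlight": 1.0, "water": 1.0}))  # grows
        print(grow(genome, {"sunlight": 0.0, "water": 1.0}))  # genome alone: 0.0
       
      Running the same "code" in a different (or absent) environment
      gives a completely different organism. That is the half of the
      story the pure-genome approach misses.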
       
      Therefore, I'm saying that AI has not been working (for more
      than 50 years now) because we are stuck at that "genetic level"
      of the Palm trees, and we are not seeing the whole picture:
      the things a suitable theoretical approach would have to capture.
       
      > Question: If we "evolve" a machine capable of human level
      > intelligence (or greater) then is it really fair to call it
      > "artificially" intelligent? I think not. Artificial Intelligence
      > implies a reduction which results in a deep understanding of
      > the methodology employed.
       
      That's a very good question. Let's not forget that intelligence
      on Earth took evolution some hundred million years of
      "experimentation". Could we do it from "first principles"?
      I don't know. I only know that, judging by what we have at
      our hands now, we will not be able to do it. Something very
      deep is missing in our theories. And the worst thing is that
      most of us aren't even aware that we're missing these things.
       
      Sergio Navega
       
       
       
       
      Sent: Monday, November 04, 2013 3:01 PM
      Subject: RE: Re: [ai-philosophy] RE: On mathematics
       
       

       Sergio:


      Thanks for the reply, but I think you miss the point. The fact that cell phones are much more powerful than the computers of the 60's is beside the point. Computational AI theory holds that there is nothing within the space of human level "intelligence" that cannot be captured programmatically by a universal machine. (I hesitate to use the term universal Turing machine because that would tend to place the conversation within the domain of math, and I would much rather keep it within computation as it applies to real machines, like the brain, as opposed to theoretical machines like the UTM.)


      So, Computational AI theory says the space of "human intelligence" is completely within the space defined by, say, lisp-eval (for example). Yes, maybe we need a more powerful machine to demonstrate this... but it could also be that human intelligence is less than what's required to prove that conjecture.
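       
      (For concreteness, here is a minimal sketch in Python of what I
      mean by "lisp-eval". It is a toy, but the space of behaviors
      reachable through an evaluator like this is already the space
      of all computations:)
       
        import operator
       
        GLOBAL_ENV = {"+": operator.add, "-": operator.sub,
                      "*": operator.mul, "<": operator.lt}
       
        def lisp_eval(expr, env=GLOBAL_ENV):
            if isinstance(expr, str):            # variable reference
                return env[expr]
            if not isinstance(expr, list):       # literal
                return expr
            head = expr[0]
            if head == "if":                     # (if test then else)
                _, test, then, alt = expr
                return lisp_eval(then if lisp_eval(test, env) else alt, env)
            if head == "lambda":                 # (lambda (params) body)
                _, params, body = expr
                return lambda *a: lisp_eval(body, {**env, **dict(zip(params, a))})
            fn = lisp_eval(head, env)            # function application
            return fn(*[lisp_eval(arg, env) for arg in expr[1:]])
       
        # ((lambda (n) (* n n)) 7)  =>  49
        print(lisp_eval([["lambda", ["n"], ["*", "n", "n"]], 7]))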


      Question: If we "evolve" a machine capable of human level intelligence (or greater) then is it really fair to call it "artificially" intelligent? I think not. Artificial Intelligence implies a reduction which results in a deep understanding of the methodology employed.


      So maybe we need to evolve such a machine, which would then prove "human intelligence" is completely within that space, and with any luck we will be intelligent enough to understand the proof.


      :)


      John J. Gagne






      ---In ai-philosophy@yahoogroups.com, <ai-philosophy@yahoogroups.com> wrote:

      John, you make some good points, and I agree that our
      mathematical modeling of reality will always be
      "behind" the real thing. What I'm complaining is
      that the equivalent of the newtonian->relativistic
      kind of jump has not appeared in AI. And it's almost
      obvious that we aren't doing things right.
       
      Jeopardy is, no doubt, an impressive achievement.
      But that's not human-like intelligence (not even
      dog-like). I'm also impressed with Wolfram Alpha
      (www.wolframalpha.com), but again that's not human-like
      intelligence either.
       
      Let's take a look at how much computational machinery
      has evolved over the last 50 years. Any conventional
      cell phone today has much more processing power than
      the biggest computers of the sixties. Why didn't
      "computational intelligence" follow the
      same exponential growth? What's holding it back?
       
      Now think of what could happen if we continue with
      the theoretical substrate used in Jeopardy and Wolfram,
      but adding a thousandfold more computational capability
      in the next decade. Will that become the "real AI" we
      want? I would say no. It would be an impressive
      "machine correlator", sort of an "intelligent google".
      But that kind of cognition is just a fraction of what
      makes us intelligent. It's that "other part" that I think
      we're missing.
       
      Sergio Navega
       
       
       
       
      Sent: Monday, November 04, 2013 10:53 AM
      Subject: [ai-philosophy] RE: On mathematics
       
       

       Sergio Navega said" The problem is that when we talk about intelligence

      in a "homo sapiens way", today's computational descriptions
      and theories are just fairy tales."

      This is always the case, Sergio, when mapping theory to reality (reductionist, scientific methods of understanding). As Einstein put it: "As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."

      Sergio Navega said "Let's say that AI was born in that 1956 gathering (McCarthy, Minsky,
      Shannon, Herb Simon, and others). It's been more than fifty years! And we still don't have a
      shadow of AI. What is happening?"

      I certainly disagree. Much has happened in fifty years (though we may have no more than just a "shadow"). If you weren't thoroughly impressed with Watson's display at the Jeopardy challenge, I'm not sure what kind of shadow you're looking for. I'm also of the opinion that the kind of performance displayed by Watson absolutely tells us something about how we perform such tasks. How much it tells us, we are free to argue about...

      The DARPA urban challenge was also some very impressive tech. I would consider both of these "shadows" of what's to come.

      John J. Gagne

       


      ---In ai-philosophy@yahoogroups.com, <snavega@...> wrote:

      John, I understand what you say, and believe me, in
      practical terms I mostly agree. It is in a more
      fundamental ("theoretical") stance that I see it
      differently.
       
      The problem is that when we talk about intelligence
      in a "homo sapiens way", today's computational descriptions
      and theories are just fairy tales. Let's say that
      AI was born in that 1956 gathering (McCarthy, Minsky,
      Shannon, Herb Simon, and others). It's been more than
      fifty years! And we still don't have a shadow of AI.
      What is happening?
       
      Of course, we have advanced a lot in specific (niche)
      segments, but nothing today gives us any hope that
      we will ever achieve AI (at least using today's
      theoretical substrates).
       
      In my way of seeing things, this is because we think
      that "the brain" is the only organ we have to focus on
      to understand intelligence. It's not. Embodiment is
      not only an "architectural necessity", it is also
      an important component in the whole system. And
      here's where I introduce the importance of emotion
      and emotional states. Since the work of Damasio,
      LeDoux and many others we've seen that there's more
      to intelligence than a mere computational simulation
      (or equivalent processing) of the brain (neocortex).
      Anyone who interacts with a friendly dog will
      understand what I mean by that.
       
      Of course, one can say that all this machinery
      (brain, body, hormonal systems, neurotransmitters,
      etc.) all this can be computationally modeled.
      Ok, I'm fine with this. But let's be clear about
      the sheer computational hyper-complexity of this
      whole process. It's daunting! And if we're still
      struggling to simulate just some activities of the
      neocortex, imagine how long it will take to simulate
      (or emulate) the whole thing. Only a thorough
      theoretical revision can tackle this task.
       
      So my final message is that we're on the wrong track,
      in theoretical terms. Without a revision of the whole
      thing I don't see any possible route toward really
      intelligent systems. To put it in Newtonian terms, we're still
      in "pre-Einstein" times. But that's just my feeling...
       
      Sergio Navega 
       
       
       
       
      Sent: Sunday, November 03, 2013 2:27 PM
      Subject: RE: Re: [ai-philosophy] On mathematics
       
       

      Sergio Navega said:
       
      "I used to think that thinking was just
      computation. Now I'm inclined to consider
      that thinking can be modelled computationally
      but it is NOT a computation per se.
      Thinking is a physical process just like a hurricane..."

      But this implies that computation is, somehow, not a physical
      process. It is one, just like a hurricane and, as you point out
      above, just like thinking.

      Certainly, any statement about what "thinking" is suffers
      a drop in precision when compared with the statement "thinking
      is computation". It's all too popular to take an anti-computational
      philosophical position with respect to brain/biological processes.
      But every time this approach is established, the result seems to be
      to replace the precise language of computation/mathematics
      with vague statements about highly non-computational emotional
      states... What does that even mean?

      Any "process" which is "claimed" to be non-computational is also
      certainly a process which can (in the same breath) be said to be
      less than "understood". By the same token, any process which can
      be "modeled computationally" precisely defines not only the language
      but also the level of understanding achieved with respect to the subject
      in question.
       
      I'm not saying computation will never be superseded by a better
      theory (as Newton was superseded by Einstein). I am saying
      no such supersession has presented itself to date.

      John J. Gagne

      ---In ai-philosophy@yahoogroups.com, <ai-philosophy@yahoogroups.com> wrote:

      > You are not "making a mapping" that's the wrong way to think about
      it
       
      So what is the right way to think about it?
      What is the name of the process of discarding
      details from one level of analysis (or grouping
      many of them under a single name) and linking all these
      to a higher-level abstract construct? Because in
      science we do that all the time.
       
      Sergio Navega
       
       
       
      Sent: Tuesday, October 29, 2013 11:00 PM
      Subject: Re: [ai-philosophy] On mathematics
       
       
      You are not "making a mapping" that's the wrong way to think about it, and where I stopped reading.


      On Tue, Oct 29, 2013 at 2:29 PM, Sergio Navega <snavega@...> wrote:


      Eray, I understand what you say, but in order to
      construct a mapping between that bunch of organic
      molecules and those high level constructs of a
      universal computer we have to discard a lot of things.
      That's what I'm afraid of doing!
       
      Take the concept of memory, for instance. An essential
      part of any computer, memory in "brain terms" is a complex
      (and highly distributed) pattern of connections between
      neurons that not only changes through time, but is subject
      to many external influences (such as the state of "arousal"
      of the remainder of the body). The net result is that when
      we say that brains have "memory" we are in fact discarding
      the majority of the biochemical processes that happen in
      that organ.
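       
      To illustrate the distance between the two notions, here is a
      toy sketch (a miniature Hopfield-style network in Python; an
      illustration, not a brain model). "Memory" here is a distributed
      pattern of connection weights, recovered by settling dynamics,
      not a value stored at an address:
       
        import numpy as np
       
        patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                             [1,  1, 1,  1, -1, -1, -1, -1]])
       
        # Hebbian learning: every stored pattern adjusts EVERY weight.
        W = sum(np.outer(p, p) for p in patterns).astype(float)
        np.fill_diagonal(W, 0)
       
        def recall(cue, steps=10):
            s = cue.copy()
            for _ in range(steps):
                s = np.sign(W @ s)   # settle toward a stored pattern
            return s
       
        noisy = patterns[0].copy()
        noisy[0] = -noisy[0]         # corrupt one element of the cue
        print(recall(noisy))         # settles back to patterns[0]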
       
      This mapping usually works, but only because our abstraction
      from "biological chemistry" to the level of "the concept
      of memory" is coarse enough (which means we are discarding
      many things). Sure, much of the stuff we discard is irrelevant.
      But perhaps some of the things we are discarding can be very
      important in a different kind of theoretical construct (a
      different conceptual mapping), one which advances over the
      concept of universal computers.
       
      I may argue that science, in general, does this simplification
      all the time. After all, without doing it we couldn't think
      of Newtonian mechanics anymore, because of quantum details or
      relativistic effects. We just discard quantum and relativistic
      stuff when building bridges and cars. However, maybe there's
      some danger in doing this for any kind of system.
       
      A satellite in orbit around the Earth could be thought of as a
      purely Newtonian system. But in practice we know that there are
      some satellites that require a broader model: GPS, for
      instance, needs relativistic corrections in order
      to work correctly. In other words, if we were constrained to use
      only Newtonian mechanics, GPS would not be possible.
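       
      To put a number on the GPS example, here is a rough first-order
      estimate in Python (textbook approximations, rounded constants)
      of how far a GPS clock drifts relative to a ground clock:
       
        GM  = 3.986004e14    # Earth's gravitational parameter, m^3/s^2
        c   = 2.99792458e8   # speed of light, m/s
        r_e = 6.371e6        # mean Earth radius, m (ground clock)
        r_s = 2.6561e7       # GPS orbital radius, m (~20,200 km altitude)
        day = 86400.0
       
        v2 = GM / r_s                                    # orbital speed squared
        sr = -(v2 / (2 * c**2)) * day * 1e6              # velocity: clock runs slow
        gr = (GM / c**2) * (1/r_e - 1/r_s) * day * 1e6   # weaker gravity: runs fast
       
        print(f"SR {sr:+.1f} us/day, GR {gr:+.1f} us/day, net {sr+gr:+.1f} us/day")
        # about -7.2 + 45.7 = +38.5 microseconds/day; at the speed of
        # light that is roughly 11 km of ranging error per day.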
       
      So I'm not against doing that kind of simplification,
      I just propose that we remember that we're doing this, so
      as not to lose the "big picture" and the eventual possibility
      of doing more things with a new theoretical substrate, maybe
      beyond what we're considering to be the "ultimate model".
       
      Sergio Navega
       
       
       
       
      Sent: Monday, October 28, 2013 10:55 PM
      Subject: Re: [ai-philosophy] On mathematics
       
       
      I think the analogy is misleading, at least for the reason that the brain isn't just a computer, it is a *universal* computer.
       
      Regards,


      On Mon, Oct 28, 2013 at 8:12 PM, Sergio Navega <snavega@...> wrote:


      > It is a computer, it has processing and memory elements, etc
       
      I may agree with that, but in order to do so one has to
      establish a mapping between the theoretical view of what a computer
      is (an abstract definition of memory, processing elements, etc) and
      a physical system (hormones, neurotransmitters, synapses, non-linear
      behavior of neurons, etc.). What I'm saying is that if one is doing
      that for the brain, then using a sufficiently well built mapping one
      can also do that for a hurricane (although it would be a little bit
      odd to determine what exactly a hurricane is computing).
       
      I'm saying all this because there's a big chasm between the
      theoretical definition of what a computer (and computation) is
      and the real implementation of such a device. And IMHO there's
      a big danger in forgetting that distinction: it is too easy to fall
      prey to the "the brain is a computer" simplification.
       
      Let me expand a bit on why I've changed my mind in recent years.
      We know that what makes us human is not restricted to the
      operations being performed on the neocortex. Much of the
      "quality" and peculiarity of our reasoning process is due to
      emotional concerns. This has been the focus of attention of many
      researchers for some decades now. And that's my point: emotional
      processes are terribly non-computational (that's quite an assertion,
      I know!). So modeling what the brain does (in order to be
      capable of building intelligent machines) is a small, tiny
      part of the whole endeavor. There's a non-computational
      part that makes me somewhat skeptical that we will be able to
      build such machines (at least using current strategies). AI keeps giving
      us reasons to believe that we're on a race to build "the fastest
      idiots on earth".
       
      Sergio Navega
       
       
       
      Sent: Monday, October 28, 2013 1:46 PM
      Subject: Re: [ai-philosophy] On mathematics
       
       
      That's the strong C-T thesis, but it is not needed actually. What matters is what the nervous system *is* and it is a biological computer, it doesn't have any other real function. If you think it is not a computer, probably your theory of computation is not wide enough. It is a computer, it has processing and memory elements, etc.


      On Mon, Oct 28, 2013 at 5:40 PM, Sergio Navega <snavega@...> wrote:


      > ... and thinking is MADE UP OF COMPUTATION
       
      I must confess that I'm changing my mind about this
      subject. I used to think that thinking was just
      computation. Now I'm inclined to consider that thinking
      can be modelled computationally, but it is NOT a
      computation per se.
       
      Thinking is a physical process just like a hurricane: it
      could be (in principle) modeled computationally, but it
      is not a computation. Unless, of course, we adhere
      to the idea that all material reality (below the level of
      quarks and gluons) is just an informational substrate,
      being executed by an extra-galactic humongous computer
      of some sort.
       
      But that's too much, isn't it? Or is it?
       
      Sergio Navega
       
       
       
       
      Sent: Monday, October 28, 2013 1:20 AM
      Subject: [ai-philosophy] On mathematics
       
       

      [The return of the positivist]

      Greetings all,

      I don't think that the analytic-synthetic distinction is too relevant any more. The trouble is that mathematics cannot be captured by either platonism or formalism, which many mathematicians like so much. The true answer is through physicalism, of course.

      I think that computation is the gold standard of mathematics. That is because if mathematics is a science, it is the science of thinking, and thinking is MADE UP OF COMPUTATION. That is to say, the mathematics that lies beyond computer science (i.e. graphs with uncountably many edges) isn't science. It's a pretty harsh statement, given how many academic mathematicians make a living out of what is essentially the empty set.

      Let me say it again, mathematics always involves a kind of reasoning, a form of reasoning that is better and more precise than common sense. And as such that WAY OF THINKING underlies all science.

      That is not an UNCANNY COINCIDENCE or anything like that. The abstract representations of mathematics exist at all precisely because they have proven themselves useful. Therefore, if one "liberates" mathematics by turning it into an art form, one essentially makes an assault on the entirety of science, of which mathematics is a mostly reliable foundation.

      That is to say, mathematical statements gain their meaning only by way of a computational interpretation. When such an interpretation is absent, mathematics is nonsense.

      Mathematics is just general purpose computations (e.g., an axiomatic system of geometry).

      Computer science is identical to mathematics.

      Universal induction can answer any valid mathematical question.

      The halting problem essentially encompasses the entirety of mathematical thought, of which there is an infinite variety, limited only by computational resources.
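       
      (The classic diagonalization sketch, here in Python, shows why no
      halting decider can exist; `halts` below is hypothetical by
      construction:)
       
        def halts(program, arg):
            """Hypothetical oracle: True iff program(arg) terminates."""
            raise NotImplementedError("no total, correct version can exist")
       
        def diagonal(program):
            # Do the opposite of whatever the oracle predicts
            # about self-application.
            if halts(program, program):
                while True:
                    pass             # loop forever if predicted to halt
            return "halted"          # halt if predicted to loop
       
        # diagonal(diagonal) contradicts any answer halts could give:
        # if halts says it halts, it loops; if halts says it loops, it halts.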

      Full empiricism explains mathematics: it is just experiments on computer devices. It fully reduces to the physical science of computers.

      A set is just an ordered list of bitstrings.

      These are the scientific facts we know, as they follow from the plain and obvious fact that the brain is a computer. In summary, computational neuroscience proves Gödel's silly spiritual philosophy wrong. (I don't even mention how irrelevant Quine and Putnam are to the science of mathematics.)

      Regards,

      --
      Eray Ozkural, PhD





       

       
      --
      Eray Ozkural, PhD. Computer Scientist
      Founder, Gok Us Sibernetik Ar&Ge Ltd.
      http://groups.yahoo.com/group/ai-philosophy