Re: [ai-philosophy] RE: On mathematics
- Nov 4, 2013

> Computational AI theory holds that there is nothing within the
> space of human level "intelligence" that can not be captured
> programmatically by a universal machine.

John, this is important: I agree with all that! The position I'm advocating may not be obvious, so let me try to make my point clearer. I have nothing against this idea that a "sufficiently powerful computer" could capture all organic/computational processes happening in a homo sapiens body. Of course this is theoretically possible.

The problem I see is how to construct a theoretical (scientific, methodological, conceptual) foundation to do that (other than the "brute force" approach, which would require a computer the size of the solar system).

What all AI researchers are trying to do now is to abstract some concepts from the workings of a brain in order to implement an equivalent process in a computer. I think this is not working. Let's think about another complex thing: imagine that we must design a computer program to grow a palm tree. AI researchers are doing the equivalent of grabbing the genetic code of a palm tree and "running" it in a huge computer. All I'm saying is that this particular theoretical stance will not work (and in the case of palm trees, it doesn't work, because genetic codes are just HALF of the story: without simulating the interactions of the genes with the environment one cannot grow the whole thing).

Therefore, I'm saying that AI is not working (for more than fifty years now) because we are stuck at that "genetic level" of the palm trees, and we are not seeing the whole picture, the things we're missing in a suitable theoretical approach.

> Question: If we "evolve" a machine capable of human level
> intelligence (or greater) then is it really fair to call it
> "artificially" intelligent? I think not. Artificial Intelligence
> implies a reduction which results in a deep understanding of
> the methodology employed.

That's a very good question. Let's not forget that intelligence on Earth took evolution some hundred million years of "experimentation". Could we do it from "first principles"? I don't know. I only know that, judging by what we have at our hands now, we will not be able to do it. Something very deep is missing in our theories. And the worst thing is that most of us aren't even aware that we're missing these things.

Sergio Navega
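[Editorial aside: Sergio's palm-tree point — genes alone are half the story — can be made concrete with a toy L-system, where the same "genetic" rewrite rules produce different structures depending on an environmental input. Everything here (the grow function, the sunlight parameter, the rules themselves) is invented purely for illustration; it is not real botany or anyone's actual model.]

```python
# Toy illustration: identical "genetic code" (rewrite rules chosen by
# the genome) yields different plants under different environments.

def grow(axiom, steps, sunlight):
    """Rewrite `axiom` for `steps` generations.

    'F' = grow a segment, '[' and ']' = open/close a branch.
    Under strong sunlight the plant branches ('F' -> 'F[F]F');
    under weak sunlight it only elongates ('F' -> 'FF').
    """
    rule = "F[F]F" if sunlight > 0.5 else "FF"
    s = axiom
    for _ in range(steps):
        s = "".join(rule if c == "F" else c for c in s)
    return s

sunny = grow("F", 2, sunlight=0.9)   # branched structure
shaded = grow("F", 2, sunlight=0.1)  # straight structure
print(sunny)
print(shaded)
```

Running only the "genome" (the rules) without the environmental input leaves the final structure undetermined, which is the palm-tree argument in miniature.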
Thanks for the reply, but I think you miss the point. The fact that cell phones are much more powerful than the computers of the 60's is beside the point. Computational AI theory holds that there is nothing within the space of human level "intelligence" that can not be captured programmatically by a universal machine. (I hesitate to use the term universal Turing machine because that would tend to place the conversation within the domain of math, and I would much rather keep it within computation as it applies to real machines, like the brain, as opposed to theoretical machines like the UTM.)
So, Computational AI theory says the space "human intelligence" is completely within the space defined by, say, lisp-eval (for example). Yes, maybe we need a more powerful machine to demonstrate this... but it could also be that human intelligence is < what's required to prove that conjecture.
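[Editorial aside: John's "space defined by lisp-eval" can be given a concrete toy form. The sketch below is a deliberately minimal expression evaluator in the spirit of Lisp's eval — not a real Lisp — but with lambda, if, and arithmetic it already expresses self-applying functions, which is what makes the space it defines so large.]

```python
# A minimal "lisp-eval" sketch: parse s-expressions, evaluate them.
import operator

def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # drop the closing ")"
        return expr
    try:
        return int(tok)
    except ValueError:
        return tok  # a symbol

GLOBAL = {"+": operator.add, "-": operator.sub,
          "*": operator.mul, "<": operator.lt}

def evaluate(x, env=GLOBAL):
    if isinstance(x, str):        # symbol lookup
        return env[x]
    if isinstance(x, int):        # literal
        return x
    if x[0] == "if":              # (if test then else)
        _, test, then, alt = x
        return evaluate(then if evaluate(test, env) else alt, env)
    if x[0] == "lambda":          # (lambda (params...) body)
        _, params, body = x
        return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
    f = evaluate(x[0], env)       # application
    return f(*(evaluate(arg, env) for arg in x[1:]))

def lisp_eval(src):
    return evaluate(parse(tokenize(src)))

print(lisp_eval("((lambda (x) (* x x)) 7)"))
```

The claim in the post is exactly that "human intelligence" fits somewhere inside the set of behaviors expressible by an evaluator of this general kind, given enough resources.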
Question: If we "evolve" a machine capable of human level intelligence (or greater) then is it really fair to call it "artificially" intelligent? I think not. Artificial Intelligence implies a reduction which results in a deep understanding of the methodology employed.
So maybe we need to evolve such a machine, which would then prove "human intelligence" is completely within that space, and with any luck we will be intelligent enough to understand the proof.
John J. Gagne
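[Editorial aside: a toy version of John's "evolve a machine" scenario is the standard genetic-algorithm loop. The sketch below, with entirely arbitrary parameters, evolves bitstrings toward a trivial all-ones target; the point is that the result is selected rather than designed, so the winning genome is not "understood" in the reductionist sense John describes.]

```python
# Minimal genetic algorithm: elitist selection plus bit-flip mutation.
import random

random.seed(0)
GENES, POP, GENERATIONS = 32, 40, 200

def fitness(genome):
    return sum(genome)  # count of 1-bits; all-ones is the "target"

def mutate(genome, rate=0.02):
    # flip each bit independently with probability `rate`
    return [bit ^ (random.random() < rate) for bit in genome]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]          # keep the fitter half unchanged
    children = [mutate(random.choice(parents))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best)}/{GENES}")
```

Scaled up from bitstrings to behavior, this is the sense in which an evolved intelligence would arrive without the "deep understanding of the methodology employed" that John says the word "artificial" implies.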
---In firstname.lastname@example.org, <email@example.com> wrote:

John, you make some good points, and I agree that our mathematical modeling of reality will always be "behind" the real thing. What I'm complaining about is that the equivalent of the Newtonian->relativistic kind of jump has not appeared in AI. And it's almost obvious that we aren't doing things right.

Jeopardy is, no doubt, an impressive achievement. But that's not human-like intelligence (not even dog-like). I'm also impressed with Wolfram Alpha (www.wolframalpha.com), but again that's not human-like intelligence either.

Let's take a look at how much computational machinery has evolved over the last 50 years. Any conventional cell phone today has much more processing power than the biggest computers of the sixties. Why didn't "computational intelligence" follow the same exponential growth? What's holding it back?

Now think of what could happen if we continue with the theoretical substrate used in Jeopardy and Wolfram, but add a thousandfold more computational capability in the next decade. Will that become the "real AI" we want? I would say no. It would be an impressive "machine correlator", sort of an "intelligent Google". But that kind of cognition is just a fraction of what makes us intelligent. It's that "other part" that I think we're missing.

Sergio Navega
Sergio Navega said: "The problem is that when we talk about intelligence in a "homo sapiens way", today's computational descriptions and theories are just fairy tales."
This is always the case Sergio, when mapping theory to reality (reductionist, scientific methods of understanding). "As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality".
Sergio Navega said: "Let's say that AI was born in that 1956 gathering (McCarthy, Minsky, Shannon, Herb Simon, and others). It's been more than fifty years! And we still don't have a shadow of AI. What is happening?"
I certainly disagree. Much has happened in fifty years (though we may have no more than just a "shadow"). If you weren't thoroughly impressed with Watson's display at the Jeopardy challenge, I'm not sure what kind of shadow you're looking for. I'm also of the opinion that the kind of performance displayed by Watson absolutely tells us something about how we perform such tasks. How much it tells us, we are free to argue about...
The DARPA urban challenge was also some very impressive tech. I would consider both of these "shadows" of what's to come.
John J. Gagne
---In firstname.lastname@example.org, <snavega@...> wrote:

John, I understand what you say, and believe me, in practical terms I mostly agree. It is in a more fundamental ("theoretical") stance that I see it differently.

The problem is that when we talk about intelligence in a "homo sapiens way", today's computational descriptions and theories are just fairy tales. Let's say that AI was born in that 1956 gathering (McCarthy, Minsky, Shannon, Herb Simon, and others). It's been more than fifty years! And we still don't have a shadow of AI. What is happening?

Of course, we have advanced a lot in specific (niche) segments, but nothing today gives us any hope that we will ever achieve AI (at least using today's theoretical substrates).

In my way of seeing things, this is because we think that "the brain" is the only organ we have to focus on to understand intelligence. It's not. Embodiment is not only an "architectural necessity", it is also an important component of the whole system. And here's where I introduce the importance of emotion and emotional states. Since the work of Damasio, LeDoux and many others we've seen that there's more to intelligence than a mere computational simulation (or equivalent processing) of the brain (neocortex). Anyone who interacts with a friendly dog will understand what I mean by that.

Of course, one can say that all this machinery (brain, body, hormonal systems, neurotransmitters, etc.) can be computationally modeled. OK, I'm fine with this. But let's be clear about the sheer computational hyper-complexity of this whole process. It's daunting! And if we're still struggling to simulate just some activities of the neocortex, imagine how long it will take to simulate (or emulate) the whole thing. Only a thorough theoretical revision can tackle this task.

So my final message is that we're on the wrong track, in theoretical terms. Without a revision of the whole thing I don't see any possible route toward really intelligent systems. In Newtonian words, we're still in "pre-Einstein" times. But that's just my feeling...

Sergio Navega
Sergio Navega said: "I used to think that thinking was just computation. Now I'm inclined to consider that thinking can be modelled computationally but it is NOT a computation per se. Thinking is a physical process just like a hurricane."
But this implies that computation is, somehow, not a physical process (like a hurricane) and, as you point out above, just like thinking. It is...

Certainly, any statement about what "thinking" is suffers a drop in precision when compared with the statement "thinking is computation". It's all too popular to take an anti-computational philosophical position with respect to brain/biological processes. But every time this approach is established, the result seems to be to replace the precise language of computation/mathematics with vague statements about highly non-computational emotional states... What does that even mean?

Any "process" which is "claimed" to be non-computational is also certainly a process which can (in the same breath) be said to be less than "understood". By the same token, any process which can be "modeled computationally" precisely defines not only the language but also the level of understanding achieved with respect to the subject.

I'm not saying computation will never be superseded by a better theory (as Newton was superseded by Einstein). I am saying no such superseding theory has presented itself to date.
John J. Gagne
---In email@example.com, <firstname.lastname@example.org> wrote:
> You are not "making a mapping" that's the wrong way to think about it

So what is the right way to think about it? What is the name of the process of discarding details from one level of analysis (or grouping many of them under a single name) and linking all these to a higher-level abstract construct? Because in science we do that all the time.

Sergio Navega

You are not "making a mapping", that's the wrong way to think about it, and where I stopped reading.

On Tue, Oct 29, 2013 at 2:29 PM, Sergio Navega <snavega@...> wrote:

Eray, I understand what you say, but in order to construct a mapping between that bunch of organic molecules and those high-level constructs of a universal computer we have to discard a lot of things. That's what I'm afraid of doing!

Take the concept of memory, for instance. An essential part of any computer, memory in "brain terms" is a complex (and highly distributed) pattern of connections between neurons that not only changes through time, but is subject to many external influences (such as the state of "arousal" of the remainder of the body). The net result is that when we say that brains have "memory" we are in fact discarding the majority of the biochemical processes that happen in that organ.

This mapping usually works, but only because our abstraction from "biological chemistry" to the level of "the concept of memory" is coarse enough (which means we are discarding many things). Sure, much of what we discard is irrelevant. But perhaps some of the things we are discarding can be very important in a different kind of theoretical construct (a different conceptual mapping), one which advances over the concept of universal computers.

I may argue that science, in general, does this simplification all the time. After all, without doing it we couldn't think of Newtonian mechanics anymore, because of quantum details or relativistic effects. We just discard quantum and relativistic stuff when building bridges and cars. However, maybe there's some danger in doing this for any kind of system.

A satellite in orbit around the Earth could be thought of as a purely Newtonian system. But in practice we know that there are some satellites that require a broader model: GPS systems, for instance, need us to consider relativistic effects in order to work correctly. In other words, if we were constrained to use only Newtonian mechanics, GPS would not be possible.

So I'm not against doing that kind of simplification, I just propose that we remember that we're doing this, so as not to lose the "big picture" and the eventual possibility of doing more things with a new theoretical substrate, maybe beyond what we're considering to be the "ultimate model".

Sergio Navega

I think the analogy is misleading, at least for the reason that the brain isn't just a computer, it is a *universal* computer.

Regards,

On Mon, Oct 28, 2013 at 8:12 PM, Sergio Navega <snavega@...> wrote:

> It is a computer, it has processing and memory elements, etc

I may agree with that, but in order to do so one has to establish a mapping between the theoretical view of what a computer is (an abstract definition of memory, processing elements, etc.) and a physical system (hormones, neurotransmitters, synapses, non-linear behavior of neurons, etc.). What I'm saying is that if one is doing that for the brain, then using a sufficiently well built mapping one can also do that for a hurricane (although it would be a little bit odd to determine what exactly a hurricane is computing).

I'm saying all this because there's a big chasm between the theoretical definition of what a computer (and computation) is and the real implementation of such a device. And IMHO there's a big danger in forgetting that distinction: it is too easy to fall prey to the "the brain is a computer" simplification.

Let me expand a bit on why I've changed my mind in recent years. We know that what makes us human is not restricted to the operations being performed in the neocortex.
Much of the "quality" and peculiarity of our reasoning process is due to emotional concerns. This has been the focus of attention of many researchers for some decades now. And that's my point: emotional processes are terribly non-computational (that's quite an assertion, I know!). So modeling what the brain does (in order to be capable of building intelligent machines) is a small, tiny part of the whole endeavor. There's a non-computational part that makes me somewhat skeptical we will be able to build the rest (at least using current strategies). AI keeps giving us reasons to believe that we're on a race to build "the fastest idiots on Earth".

Sergio Navega

That's the strong C-T thesis, but it is not needed actually. What matters is what the nervous system *is*, and it is a biological computer; it doesn't have any other real function. If you think it is not a computer, probably your theory of computation is not wide enough. It is a computer, it has processing and memory elements, etc.

On Mon, Oct 28, 2013 at 5:40 PM, Sergio Navega <snavega@...> wrote:

> ... and thinking is MADE UP OF COMPUTATION

I must confess that I'm changing my mind about this subject. I used to think that thinking was just computation. Now I'm inclined to consider that thinking can be modelled computationally, but it is NOT a computation per se.

Thinking is a physical process just like a hurricane: it could be (in principle) modeled computationally, but it is not a computation. Unless, of course, we adhere to the idea that all material reality (below the level of quarks and gluons) is just an informational substrate, being executed by an extra-galactic humongous computer of some sort.

But that's too much, isn't it? Or is it?

Sergio Navega
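[Editorial aside: Sergio's GPS remark above is quantitatively right and easy to check. The constants below are rough textbook values, so treat the result strictly as a back-of-the-envelope sketch.]

```python
# Back-of-the-envelope check: relativistic clock drift on a GPS satellite.
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24     # Earth mass, kg
c = 2.998e8      # speed of light, m/s
R = 6.371e6      # Earth radius, m
r = 2.656e7      # GPS orbital radius, m (~20,200 km altitude)
day = 86400.0    # seconds per day

v = math.sqrt(G * M / r)                   # circular orbital speed, ~3.9 km/s

# Special relativity: orbital motion slows the satellite clock.
sr = -(v**2 / (2 * c**2)) * day            # roughly -7 microseconds/day

# General relativity: weaker gravity at altitude speeds it up.
gr = (G * M / c**2) * (1/R - 1/r) * day    # roughly +46 microseconds/day

net = sr + gr                              # roughly +38 microseconds/day
print(f"net drift: {net * 1e6:.1f} microseconds/day")
```

A net drift of a few tens of microseconds per day corresponds to kilometers of ranging error per day if uncorrected, which is why a purely Newtonian GPS would not work.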
[The return of the positivist]
I don't think that the analytic-synthetic distinction is too relevant any more. The trouble is that mathematics can be captured by neither platonism nor formalism, which so many mathematicians like. The true answer is through physicalism, of course.
I think that computation is the gold standard of mathematics. That is because if mathematics is a science, it is the science of thinking, and thinking is MADE UP OF COMPUTATION. That is to say, the mathematics that lies beyond computer science (e.g., graphs with uncountably many edges) isn't science. It's a pretty harsh statement, given how many academic mathematicians make a living out of what is essentially the empty set.
Let me say it again: mathematics always involves a kind of reasoning, a form of reasoning that is better and more precise than common sense. And as such that WAY OF THINKING underlies all science.
That is not an UNCANNY COINCIDENCE or anything like that. It is precisely because the abstract representations of mathematics have proven themselves useful that they exist at all. Therefore, if one "liberates" mathematics by turning it into an art form, one essentially makes an assault on the entirety of science, of which mathematics is a mostly reliable foundation.
That is to say, mathematical statements gain their meaning only by way of a computational interpretation. When such an interpretation is absent, mathematics is nonsense.
Mathematics is just general-purpose computation (e.g., an axiomatic system of geometry).
Computer science is identical to mathematics.
Universal induction can answer any valid mathematical question.
The halting problem essentially encompasses the entirety of mathematical thought, of which there is an infinite variety, limited only by computational resources.
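[Editorial aside: the halting problem invoked here is itself proved undecidable by a diagonal argument, and the finite core of that argument is perfectly mechanical. A small sketch on toy data — not a proof, just the diagonal construction itself:]

```python
# Cantor/Turing diagonalization, finite core: given any list of
# equal-length bitstrings, build one that differs from the i-th
# string at position i, so it cannot appear anywhere in the list.

def diagonal(strings):
    """Return a bitstring differing from strings[i] at position i."""
    return "".join("1" if s[i] == "0" else "0" for i, s in enumerate(strings))

listed = ["0000", "0110", "1010", "0111"]
d = diagonal(listed)
print(d)
```

Applied to an enumeration of programs instead of a finite list, this same construction is what puts the halting function outside the space any single program can compute.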
Full empiricism explains mathematics: it is just experiments on computing devices. It fully reduces to the physical science of computers.
A set is just an ordered list of bitstrings.
These are the scientific facts we know, as they follow from the plain and obvious fact that the brain is a computer. In summary, computational neuroscience proves Gödel's silly spiritual philosophy wrong. (I don't even mention how irrelevant Quine and Putnam are to the science of mathematics.)
Eray Ozkural, PhD

--
Eray Ozkural, PhD. Computer Scientist
Founder, Gok Us Sibernetik Ar&Ge Ltd.