
Re: Turing/AI issues

  • a b
    Message 1 of 22, Jun 1, 2012
      On Tue, May 29, 2012 at 12:38 PM, David Deutsch <david.deutsch@...> wrote:
      >
      >
      >
      > On 29 May 2012, at 9:51am, a b wrote:
      >
      > > On Mon, May 28, 2012 at 11:04 AM, David Deutsch
      > > <david.deutsch@...> wrote:
      > >>
      > >> On 28 May 2012, at 2:43am, a b wrote:
      > >>
      > >>> On Thu, May 24, 2012 at 10:20 AM, David Deutsch
      > >>> <david.deutsch@...> wrote:
      > >>>>
      > >>>> On 24 May 2012, at 9:11am, hibbsa wrote:
      > >>>>
      > >>>>> There are two issues facing both A.I. and the Turing Principle:
      > >>>>>
      > >>>>> Firstly...forget consciousness for a moment and rewrite his
      > >>>>> principle
      > >>>>> more simply "if a computer program accurately emulates another
      > >>>>> computer
      > >>>>> program then any meaningful characteristic that program has, the
      > >>>>> emulation also has".
      > >>>>>
      > >>>>> No particular problem with that.
      > >>>>
      > >>>> It's false. For instance, one program might be exponentially less
      > >>>> efficient than the other. That is a meaningful characteristic.
      > >>>>
      > >>>> That's why the principle does not refer to 'meaningful
      > >>>> characteristics',
      > >>>> but only to 'computations' or 'information processing'.
      > >>
      > >> [...]
      > >>
      > >>
      > >>> in what you say, do you mean that the statement as I worded it
      > >>> was false, or do you mean that there is actually no way to rewrite the
      > >>> turing principle for two computer programs? I mean...can it be stated
      > >>> in terms of information processing?
      > >>
      > >> I meant that the proposition that you stated is false, and I gave a
      > >> counter-example. Certainly, there exist true propositions as well, and
      > >> some
      > >> of them are about information processing. But there is no way to state
      > >> a
      > >> false proposition in such a way as to make it true.
      > >>
      > >> -- David Deutsch
      > >>
      > >
      > > So, among those true propositions involving information processing, do
      > > we find that the Turing Principle is true when another computer
      > > program is substituted for consciousness?
      >
      > Since the Turing principle,
      >
      > "A computer capable of simulating with arbitrary accuracy any physically
      > possible system, is physically possible"

      Is this your proposed strengthened version?

      >
      > does not mention consciousness, it is unchanged by substituting the term
      > "another computer program" for the term "consciousness". So the answer to
      > your question is yes.
      >
      > Moreover the sense of 'simulation' for which the principle holds is a very
      > strong one. Quantum computational networks are physically possible systems
      > whose possible motions include images, with arbitrary accuracy, of all
      > possible motions of all other physical objects that the laws of physics
      > permit to exist. These images can be local (with physically and logically
      > contiguous parts of the network representing contiguous parts of the system
      > being imaged) and efficient (not just polynomially efficient but linearly),
      > and with all the causal relationships corresponding one-to-one. So the
      > 'image' in the computer is really just the original process viewed in a
      > different font, as it were.
      >

      OK I think I get this. So then also a "cup of coffee" could be
      simulated in such a computer. Which doesn't resolve what was
      originally on my mind, but across these two responses you've explained
      a lot (and/or said things that motivated me to google a bit), so
      probably I will want to reflect and pose my question again at a later
      date. That said....something else comes up about simulating a cup of
      coffee, relating to BoI.

      Would the computer be able to simulate the cup of coffee without
      embodying a theoretical answer to the questions raised by you in the
      "Reality of Abstractions" chapter of BoI? I mean...would the computer
      simulate the relative positions and interactions of all the particles,
      and expect the higher levels of abstraction to emerge by themselves?
      Wouldn't this contradict BoI in the sense that the higher levels are
      effectively relegated to secondary consequences?

      On the other hand, if a usable, computable theory resolving the
      layers-of-abstraction issues were a prerequisite for the cup of
      coffee, could it be said that the Church-Turing-Deutsch Principle
      assumes such a theory has been discovered? In which case, does that
      not imply a more general assumption/prerequisite of a complete,
      computable theory of reality? And doesn't that refute BoI at the top
      level, in the sense that we are always at the beginning of infinity?
      Just wondering.
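      [Illustrative aside: Deutsch's counter-example quoted earlier in this message, that one program can be exponentially less efficient than another computing the same thing, can be sketched concretely. The example below is not from the thread; the choice of Fibonacci and the function names are assumptions made purely for demonstration.]

```python
# A hedged sketch: two programs that compute the same function, so they
# agree on every input, yet one takes exponentially more steps.

def fib_naive(n, counter):
    """Exponential-time recursion; counter[0] tallies the calls made."""
    counter[0] += 1
    if n < 2:
        return n
    return fib_naive(n - 1, counter) + fib_naive(n - 2, counter)

def fib_iter(n, counter):
    """Linear-time iteration computing the same function."""
    a, b = 0, 1
    for _ in range(n):
        counter[0] += 1
        a, b = b, a + b
    return a

slow, fast = [0], [0]
assert fib_naive(20, slow) == fib_iter(20, fast)  # identical results...
assert slow[0] > 100 * fast[0]  # ...but vastly different step counts
```

      The two programs are extensionally the same computation, yet efficiency (a meaningful characteristic) differs, which is why the principle is stated in terms of computations rather than "meaningful characteristics".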
    • Peter D
      Message 2 of 22, Jun 6, 2012
        --- In Fabric-of-Reality@yahoogroups.com, Colin Geoffrey Hales <cghales@...> wrote:
        >
        > It is something to behold. Here, for the first time in history, you find people that look at the only example of natural general intelligence - you, the human reading this - accept a model of a brain, put it in a computer and then expect the result to be a brain. This is done without a shred of known physical law, in spite of thousands of years of contrary experience, and despite decades of abject failure to achieve the sacred goal of an artificial intelligence like us.

        Of course most simulations aren't real, and of course
        the claim that simulated brains, as an exception to
        the rules, would be just as real as unsimulated
        ones is a contentious, contentful claim. But the
        thousands of years of experience you appeal to is basically
        an inductive argument, and even the admirers of inductive
        arguments admit that they do not lead to certain truth.
        What's more, the simulability of the mind is not asserted
        uniquely. There seems to be a small subset of really-simulable
        thingies: how could simulated chess differ from real chess,
        or simulated arithmetic from real arithmetic?
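        [Illustrative aside: the closing question above can be made concrete. A toy interpreter that "simulates" arithmetic on a symbolic representation agrees with native arithmetic on every input; the tuple encoding below is an assumption chosen purely for illustration.]

```python
# A hedged sketch: "simulated" arithmetic vs. native arithmetic.
# Expressions are encoded as nested tuples ('+' | '*', left, right);
# plain ints evaluate to themselves.

def evaluate(expr):
    """Recursively evaluate a symbolic arithmetic expression tree."""
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    a, b = evaluate(left), evaluate(right)
    return a + b if op == '+' else a * b

# The simulated computation yields the same value as the native one,
# which is the sense in which simulated arithmetic just *is* arithmetic.
assert evaluate(('+', 2, ('*', 3, 4))) == 2 + 3 * 4  # both are 14
```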
      • Colin Geoffrey Hales
        Message 3 of 22, Jun 25, 2012
          Hi,
          A while ago I got all agitated about computationalism and said I'd write it up and get it published somewhere.

          Well here it is..... (and FoR gets a mention!)

          Hales, C. G. 2012 The modern phlogiston: why 'thinking machines' don't need computers TheConversation. The Conversation media Group.

          http://www.theconversation.edu.au/the-modern-phlogiston-why-thinking-machines-dont-need-computers-7881

          Cheers
          Colin
          P.S. I am done with this issue. I'll just 'Lavoisier' my way through the phlogiston.
        • Elliot Temple
          Message 4 of 22, Jun 26, 2012
            On Jun 25, 2012, at 6:14 PM, Colin Geoffrey Hales wrote:

            > Hi,
            > A while ago I got all agitated about computationalism and said I'd write it up and get it published somewhere.
            >
            > Well here it is..... (and FoR gets a mention!)
            >
            > Hales, C. G. 2012 The modern phlogiston: why 'thinking machines' don't need computers TheConversation. The Conversation media Group.
            >
            > http://www.theconversation.edu.au/the-modern-phlogiston-why-thinking-machines-dont-need-computers-7881

            I agree with you that a fire and a computer simulation of a fire are not the same thing. Maybe Bruno will argue the point and I don't want to 100% dismiss him, but I'm sure not convinced currently.


            But with AGI, it's not completely parallel to that.

            The physical object in question is not a fire but a computer made out of organic (carbon-based) molecules.

            When we simulate it, we're also using a physical object, a computer. Which, while not a fire, is a physical, real world computer.

            So what's the difference?

            Brains are computers and silicon computers are also computers, both are physical real-world computers, so why shouldn't they be able to do the same things?

            Comparing fire-to-computer is one thing, but computer-to-computer is another!


            I don't think computers built out of carbon and silicon are totally different and I don't think that carbon allows intelligence while silicon doesn't. And I don't see what other differences to focus on as important instead.

            -- Elliot Temple
            http://fallibleideas.com/
          • smitra@zonnet.nl
            Message 5 of 22, Jun 26, 2012
              Citeren Elliot Temple <curi@...>:

              >
              > On Jun 25, 2012, at 6:14 PM, Colin Geoffrey Hales wrote:
              >
              >> Hi,
              >> A while ago I got all agitated about computationalism and said I'd
              >> write it up and get it published somewhere.
              >>
              >> Well here it is..... (and FoR gets a mention!)
              >>
              >> Hales, C. G. 2012 The modern phlogiston: why 'thinking machines'
              >> don't need computers TheConversation. The Conversation media Group.
              >>
              >> http://www.theconversation.edu.au/the-modern-phlogiston-why-thinking-machines-dont-need-computers-7881
              >
              > I agree with you that a fire and a computer simulation of a fire are
              > not the same thing. Maybe Bruno will argue the point and I don't want
              > to 100% dismiss him, but I'm sure not convinced currently.
              >
              >
              > But with AGI, it's not completely parallel to that.
              >
              > The physical object in question is not a fire but a computer made out
              > of organic (carbon-based) molecules.
              >
              > When we simulate it, we're also using a physical object, a computer.
              > Which, while not a fire, is a physical, real world computer.
              >
              > So what's the difference?
              >
              > Brains are computers and silicon computers are also computers, both
              > are physical real-world computers, so why shouldn't they be able to
              > do the same things?
              >
              > Comparing fire-to-computer is one thing, but computer-to-computer is another!
              >
              >
              > I don't think computers built out of carbon and silicon are totally
              > different and I don't think that carbon allows intelligence while
              > silicon doesn't. And I don't see what other differences to focus on
              > as important instead.
              >
              > -- Elliot Temple
              > http://fallibleideas.com/
              >

              Also, what we experience is what the model the brain uses to represent
              the physical world generates, not the physical world itself. If this
              model is flawed, we still experience that flawed representation of the
              world.


              Saibal
            • Colin Geoffrey Hales
              Message 6 of 22, Jun 26, 2012
                Elliot Temple wrote:

                >> On Jun 25, 2012, at 6:14 PM, Colin Geoffrey Hales wrote:
                >>
                >> Hi,
                >> A while ago I got all agitated about computationalism and said I'd write it up and get it >> published somewhere.
                >>
                >> Well here it is..... (and FoR gets a mention!)
                >>
                >> Hales, C. G. 2012 The modern phlogiston: why 'thinking machines' don't need computers
                >>TheConversation. The Conversation media Group.
                >>
                >> http://www.theconversation.edu.au/the-modern-phlogiston-why-thinking-machines-dont-need-computers-7881
                >
                >I agree with you that a fire and a computer simulation of a fire are not the same thing. Maybe Bruno will argue the point and I don't want to 100% dismiss him, but I'm sure not convinced currently.
                >
                >
                > But with AGI, it's not completely parallel to that.
                >
                > The physical object in question is not a fire but a computer made out of organic (carbon-based) molecules.
                >
                > When we simulate it, we're also using a physical object, a computer. Which, while not a fire, is a physical, real world computer.
                >
                > So what's the difference?
                >
                > Brains are computers and silicon computers are also computers, both are physical real-world computers, so why shouldn't they be able to do the same things?
                >
                > Comparing fire-to-computer is one thing, but computer-to-computer is another!
                >
                > I don't think computers built out of carbon and silicon are totally different and I don't think that carbon allows intelligence while silicon doesn't. And I don't see what other differences to focus on as important instead.

                Saibal wrote:

                > Also, what we experience is what the model the brain uses to represent
                > the physical world generates, not the physical world itself. If this
                > model flawed, we still experience that flawed representation of the
                > World.

                To Saibal and Elliot (and those of the ilk),

                The comments you have made are exactly the 'phlogiston' thinking of my article.

                The brain is not a computer. Maths of brain physics can be computed. There are abstract models of function that can be computed, there are abstract models of representation that can be computed. Etc etc Pick your ism.

                None of these are what a brain is doing. In a brain, real action potentials are resonating with a real 3D EM field system best described in wave-mechanical/quantum mechanical terms. We _are_ this activity. We are not a computer running a model of it. Until we fully replicate and examine the performance of replicated inorganic tissue that does the same physics, we are underequipped to say any of the simulations mean anything, let alone that the brain is a computer.

                This is the profound map/territory confusion of the kind described in my article, and it is what has caused replication to be sidelined for half a century. I don't know what systemic flaw there is in our education system that has you thinking like this... but I know it exists, and I know that it has done a disservice to progress in AGI and neuro/cognitive science.

                I don't want to squabble about it. It's taken 10 years to work out what's been going on and I can't fix this endemic misdirection overnight.

                Consider this:

                The people at CERN/LHC have spent billions replicating things in the supercollider to understand what is going on, because they admit their models are frayed around the edges and computing can't get at them. What craziness is it that justifies the claim that the most complex thing in the universe can be understood without ever having properly tried replication? That's the damage that this mistake has caused. That's why AGI has failed for 60 years: assumptions like the ones you made.

                All I can ask is that you revisit and challenge your ideas, because they are anomalous in science as a whole.

                Thanks for reading the article.

                Cheers
                Colin





              • brian_scurfield
                Message 7 of 22, Jun 26, 2012
                  On Jun 26, 2012, at 11:46 AM, Elliot Temple wrote:
                  >
                  > On Jun 25, 2012, at 6:14 PM, Colin Geoffrey Hales wrote:
                  >
                  >> Hi,
                  >> A while ago I got all agitated about computationalism and said I'd write it up and get it published somewhere.
                  >>
                  >> Well here it is..... (and FoR gets a mention!)
                  >>
                  >> Hales, C. G. 2012 The modern phlogiston: why 'thinking machines' don't need computers TheConversation. The Conversation media Group.
                  >>
                  >> http://www.theconversation.edu.au/the-modern-phlogiston-why-thinking-machines-dont-need-computers-7881
                  >
                  > I agree with you that a fire and a computer simulation of a fire are not the same thing. Maybe Bruno will argue the point and I don't want to 100% dismiss him, but I'm sure not convinced currently.

                  Do you have a criticism of David Deutsch's post below?

                  http://groups.yahoo.com/group/Fabric-of-Reality/message/24560

                  > [T]he sense of 'simulation' for which the [Turing] principle holds is a very strong one. Quantum computational networks are physically possible systems whose possible motions include images, with arbitrary accuracy, of all possible motions of all other physical objects that the laws of physics permit to exist. These images can be local (with physically and logically contiguous parts of the network representing contiguous parts of the system being imaged) and efficient (not just polynomially efficient but linearly), and with all the causal relationships corresponding one-to-one. So the 'image' in the computer is really just the original process viewed in a different font, as it were.

                  -- Brian Scurfield
                • Brett Hall
                  Message 8 of 22, Jun 27, 2012
                    Colin, on this topic it seems you have more answers than questions. And that is precisely not the approach to take towards hard problems in consciousness like AGI.

                    I find you to be on the same continuum as those who argue that the brain *is* (obviously) a computer. You're at the other end of the spectrum, sure - but you are dogmatic about what you think as much as they are. I explain this below...


                    On 27/06/2012, at 18:16, "Colin Geoffrey Hales" <cghales@...> wrote:

                    > Elliot Temple wrote:
                    >
                    > >> On Jun 25, 2012, at 6:14 PM, Colin Geoffrey Hales wrote:
                    > >>
                    > >> Hi,
                    > >> A while ago I got all agitated about computationalism and said I'd write it up and get it >> published somewhere.
                    > >>
                    > >> Well here it is..... (and FoR gets a mention!)
                    > >>
                    > >> Hales, C. G. 2012 The modern phlogiston: why 'thinking machines' don't need computers
                    > >>TheConversation. The Conversation media Group.
                    > >>
                    > >> http://www.theconversation.edu.au/the-modern-phlogiston-why-thinking-machines-dont-need-computers-7881
                    > >
                    > >I agree with you that a fire and a computer simulation of a fire are not the same thing. Maybe Bruno will argue the point and I don't want to 100% dismiss him, but I'm sure not convinced currently.
                    > >
                    > >
                    > > But with AGI, it's not completely parallel to that.
                    > >
                    > > The physical object in question is not a fire but a computer made out of organic (carbon-based) molecules.
                    > >
                    > > When we simulate it, we're also using a physical object, a computer. Which, while not a fire, is a physical, real world computer.
                    > >
                    > > So what's the difference?
                    > >
                    > > Brains are computers and silicon computers are also computers, both are physical real-world computers, so why shouldn't they be able to do the same things?
                    > >
                    > > Comparing fire-to-computer is one thing, but computer-to-computer is another!
                    > >
                    > > I don't think computers built out of carbon and silicon are totally different and I don't think that carbon allows intelligence while silicon doesn't. And I don't see what other differences to focus on as important instead.
                    >
                    > Saibal wrote:
                    >
                    > > Also, what we experience is what the model the brain uses to represent
                    > > the physical world generates, not the physical world itself. If this
                    > > model flawed, we still experience that flawed representation of the
                    > > World.
                    >
                    > To Saibal and Elliot (and those of the ilk),
                    >
                    > The comments you have made are exactly the 'phlogiston' thinking of my article.
                    >
                    > The brain is not a computer.
                    >
                    You don't have good reasons to believe this. It seems to me that this is just an article of faith, exactly mirrored by people who do believe that the brain is a computer and will not be moved on this point. I don't agree with Saibal either that the brain is a computer. I simply don't think any of you have a good enough theory of how the brain gives rise to the mind (and hence intelligence) to be committed to either position. This isn't a cop-out, because I think that the mind and intelligence and consciousness more generally are special kinds of problem, unlike others in science and philosophy where there are very good reasons to prefer one theory over another. Henceforth I'll just talk about 'consciousness' rather than AGI, as I think it better captures the real mystery at the heart of this.

                    We know something about neuroscience but not enough to conclude either that the brain is a computer or that it is not. I don't see anything in what you have written that falsifies the theory that the brain is a computer.

                    > Maths of brain physics can be computed.
                    >
                    I'm not even sure what this means. Do you know what you mean here? What is "brain physics"? What does it mean to compute it? Isn't computing maths just...maths? I don't get this.

                    It might be that it (brain physics - whatever that is) 'can' in principle be modelled - like any physical system, but it never has been in reality. It's simply not true to say that we have tried and failed.

                    We have not yet tried at all to model a brain in a computer because we do not know how. We do not have a physical theory of how the brain works. If we did, we could model it. We do not. We do not have a clue about how consciousness arises from the unconscious. You seem to both know this and simultaneously rule out avenues of research that you cannot possibly know the outcomes of. I don't see how this is either coherent or consistent.

                    There is no computer that is able to simulate even a fraction of the trillions of synaptic connections estimated to constitute the human brain. Maybe when we can do that, we will simulate - in a silicon computer - a brain. Or not. Who knows? It has not yet been tried. Until it is, there is no reason to believe it either is, or is not, possible. It is just a promising area of research. Elliot, Saibal and 'others of their ilk' might yet turn out to be correct. It's all just computation.

                    Articles like this one:
                    http://www.scientificamerican.com/article.cfm?id=graphic-science-ibm-simulates-4-percent-human-brain-all-of-cat-brain

                    suggest that it won't be until 2019, using Moore's Law and other considerations, that we have the capacity to simulate a human brain. And even then...there's no guarantee. So it's not that it's failed. It hasn't even been tried yet. And if you read these reports, they really do talk up their successes. It's impossible to know what they've really accomplished apart from making silly little AIs...which might have nothing at all to do with how a real, physical brain works. Might.

                    It's disingenuous for you to assert it's failed.

                    Who knows whether consciousness will "emerge" in a simulation like IBM's until it's been tried? We don't know how consciousness emerges from stuff.

                    So you are wrong to assert that "Maths of brain physics can be computed". Indeed that very claim seems to me to be without content. A true simulation of 'brain physics' as you call it may very well require a simulation of quantum effects at the level of individual synapses...of which there are trillions. This is what Roger Penrose thinks. He thinks consciousness is to be found at the quantum level.

                    And this we *know* simply is *not possible* given our current technology. If consciousness is truly a quantum phenomenon then modelling it properly will require a quantum computer. We don't have those yet.

                    Do you have a refutation of Penrose's ideas in "the Emperor's New Mind"?

                    What do you think "brain physics" is? Are you certain that all brain functions can be adequately described by classical physics?

                    Even if it can be, precisely what bits of physics do you think are important? Just mechanics or also thermodynamics? Is electrodynamics important too?

                    It might be that entirely new physics is needed. I don't know this either. Nor do you. Consciousness is just that weird a phenomenon (in the sense, that it's so poorly understood) that it is not ridiculous to presume that none of our current physical theories will be able to adequately capture what's going on. In that case "brain physics" might very well turn out to be dependent upon theories that supplant quantum theory and relativity. This is unlikely - but who knows? More mundane things like explaining "dark energy" might require entirely new physical theories - and I reckon consciousness will be a harder problem than dark energy.
                    > There are abstract models of function that can be computed, there are abstract models of representation that can be computed.
                    >
                    Yes. And they are grossly inadequate to model consciousness - which is, at root, what we are talking about, right? These abstract models are gross simplifications of reality. An abstract model of consciousness has never been proposed. What is your abstract model of consciousness? Consciousness is something the brain does or has, as far as we can tell...we just have no clue how exactly it's involved. We just know it is. Maybe the brain is like an antenna that tunes into consciousness? We just don't know. Maybe consciousness is in everything. Are you aware of panpsychism? Have you read philosopher Galen Strawson on this?

                    Anyone who thinks we know one way or the other on AGI and consciousness is being a dogmatist on the issue. On one of the most interesting scientific and philosophical questions ever. It's the search that is fascinating. Not being blinkered by a pet theory.
                    > Etc etc Pick your ism.
                    >
                    > None of these are what a brain is doing.
                    >

                    Sorry? None of what are? None of the isms? What about dualism? That hasn't been ruled out either. What about monism? There might be *only* consciousness. What about Bruno Marchal's computationalist hypothesis? Consciousness might be found there and general AI might only make sense in that context.

                    To say that "none of these are what the brain is doing" is to assert you know what it *is* doing. You don't.

                    > In a brain, real action potentials are resonating with a real 3D EM field system best described in wave-mechanical/quantum mechanical terms.
                    >
                    Okay so that's something that is going on in a brain. So what?

                    "In a brain, blood vessels called capillaries are transporting dissolved oxygen into neurones where it is able to reach the synapses. There, neurotransmitters, including serotonin are acting on receptors. This interaction between neurotransmitter and receptor seems to be involved in subjective experiences."

                    So I've made some random true statements there about physical processes in the brain at the level of the neurone. I don't get what your point is in picking some arbitrary level of description (namely where action potentials occur) and claiming (as you are about to) that "We are this activity".


                    More to the point...what you have written there sounds like quackery (a bunch of jargon intended to convey some air of authority but which on analysis is no more than scientific *sounding*). In particular that "wave-mechanical" bit; what does that mean? And the assertion that quantum mechanics is involved is not at all mainstream in neuroscience.

                    Yes, some respectable people have made noises of this sort - but they are careful to parse their comments and not be dogmatic about things. Penrose for example. You are pretending that there is a complete theory of this already. That's simply not true. Or are respectable neuroscientists like Sam Harris wrong? See http://www.samharris.org/blog/item/the-mystery-of-consciousness

                    In particular Sam says, "Most scientists are confident that consciousness emerges from unconscious complexity. We have compelling reasons for believing this, because the only signs of consciousness we see in the universe are found in evolved organisms like ourselves. Nevertheless, this notion of emergence strikes me as nothing more than a restatement of a miracle. To say that consciousness emerged at some point in the evolution of life doesn’t give us an inkling of how it could emerge from unconscious processes, even in principle."

                    So Sam admits that we have no clue about how consciousness emerges, despite everything we know about physical processes in the brain.

                    > We _are_ this activity.
                    >
                    No. We are animals. No. We are universal explainers. No. We are individuals in a complex society. No. We are thermodynamic engines. No. We are subjective experience.

                    The list is almost infinitely long. We don't know what we are. And yet we know we are many things at once. David made a great advance in putting forth the idea in BoI that we are Universal Knowledge creators and this separates us from other living things. That is far more useful a description than that we are action potentials. Indeed I don't think we are action potentials. Because there are action potentials that are clearly not people.
                    > We are not a computer running a model of it. Until we fully replicate and examine the performance of replicated inorganic tissue that does the same physics, we are underequipped to say any of the simulations mean anything, let alone that the brain is a computer.
                    >
                    Well now you seem to be saying we don't know; "...we are underequipped..." you say. Well, which is it? If we're underequipped, then I agree. But you have spent the rest of this email and other posts talking about how we are most definitely equipped. We're equipped, apparently, to rule out the "The brain is a computer" hypothesis. You say we know the brain is not a computer. Be consistent at least within one post.
                    >
                    > This profound map/territory confusion of the kind in my article, and that has caused replication to be sidelined for half a century. I don't know what systemic flaw there is in our education system that has you thinking like this... but I know it exists, and I know that it has done a disservice to progress in AGI and neuro/cognitive science.
                    >
                    In the art of debating it's well known that the first person to mention Hitler loses the debate. I think there should be similar common knowledge when it comes to appealing to failures in the education system as the reason your opponents don't agree with you.

                    The map/territory analogy doesn't win the argument. It's already been explained to you that *if* the brain is indeed doing computations that give rise to its intelligence then this is precisely the reason why simulating such a thing will replicate consciousness and intelligence inside software running on silicon hardware.

                    It's completely unlike simulating a fire in a computer and you seem to deliberately want to misunderstand this point.
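To make the computer-to-computer point concrete, here is a minimal sketch. The toy machine and its instruction names are invented purely for illustration (nothing here comes from the thread): a tiny stack machine computes 3*4+5, and a second program that simulates that machine step by step gets the identical answer.

```python
# A simulated computation IS that computation - unlike a simulated fire.
# Toy stack machine (instruction set invented for this sketch).

def run(program):
    """Directly interpret a toy stack-machine program."""
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

def simulate(program):
    """Simulate the machine above one instruction at a time,
    tracking its state (stack and program counter) explicitly -
    an interpreter of the interpreter."""
    state = {"stack": [], "pc": 0}
    while state["pc"] < len(program):
        op, *args = program[state["pc"]]
        if op == "push":
            state["stack"].append(args[0])
        elif op == "add":
            b, a = state["stack"].pop(), state["stack"].pop()
            state["stack"].append(a + b)
        elif op == "mul":
            b, a = state["stack"].pop(), state["stack"].pop()
            state["stack"].append(a * b)
        state["pc"] += 1
    return state["stack"].pop()

prog = [("push", 3), ("push", 4), ("mul",), ("push", 5), ("add",)]
print(run(prog))       # 17
print(simulate(prog))  # 17 - same result, one extra layer of interpretation
```

Adding a layer of interpretation changes efficiency, not the result; that is the sense in which simulating a computation just is performing it.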
                    >
                    > I don't want to squabble about it.
                    >
                    > It's taken 10 years to work out what's been going on and I can't fixed this endemic misdirection overnight.
                    >
                    Constant repetition of how many years of work you have done on this is an appeal to authority, and it actually undermines your position rather than strengthening it. I think you have good arguments for building physical brains.

                    I don't think you have to dismiss other ideas to advance your own. There's enough love on this topic to go round.
                    >
                    > Consider this:
                    >
                    > The people at CERN/LHC have spent billions replicating things in the supercollider to understand what is going on because they admit their models are frayed around the edges, and computing can't get at them.
                    >
                    Huh? CERN doesn't exist just because computers are incapable of doing the simulations!!

                    This is orthogonal to the discussion. Do you actually know what's going on at CERN or are you just making a stab in the dark? This statement of yours also suggests a genuine misconception about the nature of science and the philosophy of science. I suggest you read BoI. If "frayed around the edges" means "incomplete" then that's not a criticism. All theories are "frayed around the edges" in that way. They wouldn't be scientific if they weren't.

                    LHC was built for lots of reasons. But it wasn't because particle physicists were committed to trying to simulate everything in computers and failed...so *had* to resort to building the biggest particle accelerator of all time. Their theories of particle physics actually predicted that such an accelerator was *required* if we wanted to know more about the internal structure of subatomic particles.

                    Are you a justificationist? Do you believe that scientific theories are proven true?

                    > What craziness is it that justifies a case for that the most complex thing in the universe can be understood without ever having properly tried replication?
                    >
                    You are a justificationist. But moving on...

                    You're arguing that we should try to build a brain out of hardware...I agree. That's a nice approach. Good luck to you. Power to progress and creativity. I also get the impression that you seem to be upset that this is not tried enough or at all and you aren't getting funding because people are more interested in simulating things in a computer and you're up against a certain philosophy that is antagonistic towards this approach. You aren't alone. Jaron Lanier is my favorite scientist and philosopher on this point. He is a computer scientist of some renown and disagrees with the philosophy that emanates from silicon valley that seems to be anti-human in many ways. In particular he is unconvinced that consciousness has much hope of being replicated in a silicon computer (i.e: it's not Turing emulable). His book "You are not a gadget" is an amazing defence of this idea. I commend it to you. But even Lanier is not as dogmatic as you are.

                    So, yes, let's try many approaches, good idea.

                    But just because you have *an* approach doesn't mean it's the *right* approach or that the other approaches are necessarily wrong. We cannot know which approach is correct. If we did - we would already have the solution to the hard problems of consciousness - including how intelligence emerges from brain stuff. You have picked a side. Great, good luck. But so? You don't need to go a step further into the close-minded trap of dogmatic adherence to a particular point of view about this issue...to the exclusion of all other possibilities. Like simulating brains in software.

                    I don't see any good reasons to believe you are correct just yet. I don't see good reason to think that any theory currently on offer *actually explains* human intelligence, creativity or consciousness. There is no explanatory theory of these things. Just hypotheses - few of which seem to be testable in practice. Some aren't even testable in principle.
                    > That's the damage that this mistake has caused. That's why AGI has failed for 60 years: assumptions like the ones you made.
                    >
                    That's a dogmatic position. To believe that AGI has failed because of certain attempts to make progress is just wrong.

                    AGI has failed because we do not have a good theory of consciousness, intelligence, creativity and other things. We don't know enough.

                    It has nothing to do with assumptions. We won't know what assumptions were wrong until (and if) we ever have an explanatory theory.
                    >
                    > All I can ask is that you revisit and challenge your ideas, because they are anomalous in science as a whole.
                    >
                    I think it goes for you, more than for them. You need to be more open minded. If you close off the possibility that the brain is a computer then you will close avenues of research. You also seem to betray an ignorance of the theories of computation that people on this list are expert on. I suggest you read the book "The Fabric of Reality" to fill in those gaps. I am no expert, but I at least understand the argument that the brain, as a physical system, can be considered as a Universal Turing Machine and if that's the case then it can be simulated in a computer. Unless you "get" the connection between physics and computation, this will simply fly over your head and you will fly off the handle about how the brain can't be a computer. As you have done. But we have no reasons yet to think that the brain is special when it comes to the computations that it does.
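The physics-computation connection mentioned above has an uncontroversial core that can be sketched in code (the system, parameters and step size here are arbitrary choices of mine, not anything claimed in the thread): the time evolution of a physical system whose laws we know - here a frictionless harmonic oscillator - can be stepped forward on an ordinary computer. Whether *brains* are like this is the open question being argued.

```python
# Sketch: dynamics of a known physical system, stepped forward on a
# computer, using semi-implicit (symplectic) Euler integration of
# x'' = -(k/m) x. All numbers are illustrative choices, not measurements.

def simulate_oscillator(x=1.0, v=0.0, k=1.0, m=1.0, dt=0.001, steps=6283):
    for _ in range(steps):
        v -= (k / m) * x * dt   # update velocity from the force law
        x += v * dt             # then position from the new velocity
    return x, v

# After roughly one full period (2*pi seconds at these parameters)
# the state should return close to its starting point (1, 0).
x, v = simulate_oscillator()
print(x, v)
```

The semi-implicit scheme is chosen because it approximately conserves the oscillator's energy over long runs, which plain Euler does not.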

                    Sam Harris is more correct on this point than either side. Consciousness, as it stands is not the same sort of problem as other scientific questions. It is rivalled only by the question "Why is there something rather than nothing?" The very question seems to buffer itself against solutions, and yet as optimists it is important we continue to try and make progress - especially in philosophical terms on this point.

                    It may very well be the case that the brain is hardware and the mind is software. We have not yet ruled that out.

                    You might be right.

                    But to insist everyone else is wrong given how much we *don't know* about this stuff at the moment seems terribly dogmatic and pessimistic. Especially as we do not yet have either scientific or philosophical refutations of those ideas that you do not like.

                    Brett.
                    >
                    > Thanks for reading the article.
                    >
                    > Cheers
                    > Colin
                    >
                    >
                    >


                  • Bruno Marchal
                    Message 9 of 22 , Jun 27, 2012
                      On 27 Jun 2012, at 04:31, Colin Geoffrey Hales wrote:

                      > Elliot Temple wrote:
                      >
                      > >> On Jun 25, 2012, at 6:14 PM, Colin Geoffrey Hales wrote:
                      > >>
                      > >> Hi,
                      > >> A while ago I got all agitated about computationalism and said
                      >>> I'd write it up and get it published somewhere.
                      > >>
                      > >> Well here it is..... (and FoR gets a mention!)
                      > >>
                      > >> Hales, C. G. 2012 The modern phlogiston: why 'thinking machines'
                      > >> don't need computers
                      > >>TheConversation. The Conversation media Group.
                      > >>
                      > >> http://www.theconversation.edu.au/the-modern-phlogiston-why-thinking-machines-dont-need-computers-7881
                      > >
                      > >I agree with you that a fire and a computer simulation of a fire
                      >> are not the same thing. Maybe Bruno will argue the point and I don't
                      > > want to 100% dismiss him, but I'm sure not convinced currently.
                      > >
                      > >
                      > > But with AGI, it's not completely parallel to that.
                      > >
                      > > The physical object in question is not a fire but a computer made
                      > > out of organic (carbon-based) molecules.
                      > >
                      > > When we simulate it, we're also using a physical object, a
                      > > computer. Which, while not a fire, is a physical, real world computer.
                      > >
                      > > So what's the difference?
                      > >
                      > > Brains are computers and silicon computers are also computers,
                      > > both are physical real-world computers, so why shouldn't they be
                      > > able to do the same things?
                      > >
                      > > Comparing fire-to-computer is one thing, but computer-to-computer
                      > > is another!
                      > >
                      > > I don't think computers built out of carbon and silicon are
                      > > totally different and I don't think that carbon allows intelligence
                      > > while silicon doesn't. And I don't see what other differences to
                      > > focus on as important instead.
                      >
                      > Saibal wrote:
                      >
                      > > Also, what we experience is what the model the brain uses to represent
                      > > the physical world generates, not the physical world itself. If this
                      > model is flawed, we still experience that flawed representation of the
                      > > World.
                      >
                      > To Saibal and Elliot (and those of the ilk),
                      >
                      > The comments you have made are exactly the 'phlogiston' thinking of
                      > my article.
                      >
                      > The brain is not a computer. Maths of brain physics can be computed.
                      > There are abstract models of function that can be computed, there
                      > are abstract models of representation that can be computed. Etc etc
                      > Pick your ism.
                      >
                      > None of these are what a brain is doing. In a brain, real action
                      > potentials are resonating with a real 3D EM field system best
                      > described in wave-mechanical/quantum mechanical terms. We _are_ this
                      > activity. We are not a computer running a model of it. Until we
                      > fully replicate and examine the performance of replicated inorganic
                      > tissue that does the same physics, we are underequipped to say any
                      > of the simulations mean anything, let alone that the brain is a
                      > computer.
                      >
                      > This profound map/territory confusion of the kind in my article,
                      > and that has caused replication to be sidelined for half a century.
                      > I don't know what systemic flaw there is in our education system
                      > that has you thinking like this... but I know it exists, and I know
                      > that it has done a disservice to progress in AGI and neuro/cognitive
                      > science.
                      >
                      > I don't want to squabble about it. It's taken 10 years to work out
                      > what's been going on and I can't fixed this endemic misdirection
                      > overnight.
                      >
                      > Consider this:
                      >
                      > The people at CERN/LHC have spent billions replicating things in the
                      > supercollider to understand what is going on because they admit
                      > their models are frayed around the edges, and computing can't get at
                      > them. What craziness is it that justifies a case for that the most
                      > complex thing in the universe can be understood without ever having
                      > properly tried replication? That's the damage that this mistake has
                      > caused. That's why AGI has failed for 60 years: assumptions like the
                      > ones you made.
                      >
                      > All I can ask is that you revisit and challenge your ideas, because
                      > they are anomalous in science as a whole.

                      It is not a confusion. It is a working hypothesis. It is based on the
                      fact that we simply don't know of any process in nature which is not
                      Turing emulable.

                      You can't simulate a fire capable of burning you, but you might be
                      able to simulate the couple "fire + observer" so that the observer
                      will behave like he is burning, and if we accept the hypothesis, the
                      observer will genuinely feel the burning.

                      If you believe that the brain is not Turing emulable, it is up to you
                      to give us an argument, for that is an extraordinary claim. In
                      particular you have to define such a non computable process, and to
                      refute comp, it should not be of the type of the non-computable
                      elements that comp already predicts to exist in nature.

                      You also describe the comp hypothesis as a mistake, but you did not
                      present any argument. Justifications of that type would, if they made
                      sense, only push the computationalist substitution level very low. So
                      you are making a big extrapolation from the finite to the infinite,
                      leading to a hypothesis which is not supported by what we already
                      know. It seems to me that the non-computable processes you are
                      invoking are far more "phlogiston-like" than the comp hypothesis,
                      which up to now fits remarkably well with the facts, notably through
                      its explanation of the existence of first-person indeterminacy, of
                      local non-locality, of the non-cloning of matter, and of
                      many-worlds-like types of reality, and this without mentioning the
                      computable aspects of chemistry and of particle/wave behaviour.

                      Or ... you believe in wave-packet reduction, and in the idea that
                      consciousness is responsible for it. But this has been refuted by
                      Abner Shimony (and others). It is a nice idea which unfortunately does
                      not work, when on the contrary the non-collapse of the wave and the
                      parallel realities confirm, as I said, what we would expect from
                      comp.

                      So comp is a hypothesis, or a theory, and as such can be refuted, but
                      no refutation has yet appeared on the horizon.

                      Your own article perhaps refutes the naïve idea that humans can build,
                      in some *provable* way, a machine as intelligent as themselves. This is
                      indeed not possible, but that does not make such a machine impossible,
                      and without strong evidence the contrary hypothesis seems a bit like
                      making things artificially complex.

                      Bruno
                    • Bruno Marchal
                      ... We do have a theory of consciousness, computationalism + theoretical computer science, which makes very specific technical points which can be tested, and
                      Message 10 of 22 , Jun 27, 2012
                        On 27 Jun 2012, at 12:19, Brett Hall wrote:

                        > Colin, on this topic it seems you have more answers than questions.
                        > And that is precisely the approach you cannot have towards hard
                        > problems in consciousness like AGI.
                        >
                        > I find you to be on the same continuum as those who argue that the
                        > brain *is* (obviously) a computer. You're at the other end of the
                        > spectrum, sure - but you are dogmatic about what you think as much
                        > as they are. I explain this below...
                        >
                        >
                        > On 27/06/2012, at 18:16, "Colin Geoffrey Hales" <cghales@...
                        > > wrote:
                        >
                        > > Elliot Temple wrote:
                        > >
                        > > >> On Jun 25, 2012, at 6:14 PM, Colin Geoffrey Hales wrote:
                        > > >>
                        > > >> Hi,
                        > > >> A while ago I got all agitated about computationalism and said
                        > >>> I'd write it up and get it >> published somewhere.
                        > > >>
                        > > >> Well here it is..... (and FoR gets a mention!)
                        > > >>
                        > > >> Hales, C. G. 2012 The modern phlogiston: why 'thinking
                        > >>> machines' don't need computers
                        > > >>TheConversation. The Conversation media Group.
                        > > >>
                        > > >> http://www.theconversation.edu.au/the-modern-phlogiston-why-thinking-machines-dont-need-computers-7881
                        > > >
                        > > > I agree with you that a fire and a computer simulation of a fire
                        > >> are not the same thing. Maybe Bruno will argue the point and I don't
                        > >> want to 100% dismiss him, but I'm sure not convinced currently.
                        > > >
                        > > >
                        > > > But with AGI, it's not completely parallel to that.
                        > > >
                        > > > The physical object in question is not a fire but a computer
                        > >> made out of organic (carbon-based) molecules.
                        > > >
                        > > > When we simulate it, we're also using a physical object, a
                        > >> computer. Which, while not a fire, is a physical, real world computer.
                        > > >
                        > > > So what's the difference?
                        > > >
                        > > > Brains are computers and silicon computers are also computers,
                        > >> both are physical real-world computers, so why shouldn't they be
                        > >> able to do the same things?
                        > > >
                        > > > Comparing fire-to-computer is one thing, but computer-to-
                        > >> computer is another!
                        > > >
                        > > > I don't think computers built out of carbon and silicon are
                        > >> totally different and I don't think that carbon allows intelligence
                        > >> while silicon doesn't. And I don't see what other differences to
                        > >> focus on as important instead.
                        > >
                        > > Saibal wrote:
                        > >
                        > > > Also, what we experience is what the model the brain uses to represent
                        > > > the physical world generates, not the physical world itself. If this
                        > > model is flawed, we still experience that flawed representation of the
                        > > > World.
                        > >
                        > > To Saibal and Elliot (and those of the ilk),
                        > >
                        > > The comments you have made are exactly the 'phlogiston' thinking
                        > of my article.
                        > >
                        > > The brain is not a computer.
                        > >
                        > You don't have good reasons to believe this. It seems to me that
                        > this is just an article of faith, exactly mirrored by people who do
                        > believe that the brain is a computer and will not be moved on this
                        > point. I don't agree with Saibal either that the brain is a
                        > computer. I simply don't think any of you have a good enough theory
                        > of how the brain gives rise to the mind (and hence intelligence) to
                        > be committed to either position. This isn't a cop-out, because I
                        > think that the mind and intelligence and consciousness more
                        > generally are special kinds of problem unlike others in science and
                        > philosophy where there are very good reasons to prefer one theory
                        > over another. Henceforth I'll just talk about 'consciousness' rather
                        > than AGI as I think it better captures the real mystery at the heart
                        > of this.
                        >
                        > We know something about neuroscience but not enough to conclude
                        > either that the brain is a computer or that it is not. I don't see
                        > anything in what you have written that falsifies the theory that the
                        > brain is a computer.
                        >
                        > > Maths of brain physics can be computed.
                        > >
                        > I'm not even sure what this means. Do you know what you mean here?
                        > What is "brain physics"? What does it mean to compute it? Isn't
                        > computing maths just...maths? I don't get this.
                        >
                        > It might be that it (brain physics - whatever that is) 'can' in
                        > principle be modelled - like any physical system, but it never has
                        > been in reality. It's simply not true to say that we have tried and
                        > failed.
                        >
                        > We have not yet tried at all to model a brain in a computer because
                        > we do not know how. We do not have a physical theory of how the
                        > brain works. If we did, we could model it. We do not. We do not have
                        > a clue about how consciousness arises from the unconscious. You seem
                        > to both know this and simultaneously rule out avenues of research
                        > that you cannot possibly know the outcomes of. I don't see how this
                        > is either coherent or consistent.
                        >
                        > There is no computer that is able to simulate even a fraction of the
                        > trillions of synaptic connections estimated to constitute the human
                        > brain. Maybe when we can do that, we will simulate - in a silicon
                        > computer - a brain. Or not. Who knows? It has not yet been tried.
                        > Until it is, there is no reason to believe it either is, or is not,
                        > possible. It is just a promising area of research. Elliot, Saibal
                        > and 'others of their ilk' might yet turn out to be correct. It's all
                        > just computation.
                        >
                        > Articles like this one:
                        > http://www.scientificamerican.com/article.cfm?id=graphic-science-ibm-simulates-4-percent-human-brain-all-of-cat-brain
                        >
                        > Suggest that it won't be until 2019, using Moore's Law and other
                        > considerations, before we have the capacity to simulate a human
                        > brain. And even then...there's no guarantee. So it's not that it's
                        > failed. It hasn't even been tried yet. And if you read these
                        > reports, they really do talk up their successes. It's impossible to
                        > know what they've really accomplished apart from making silly little
                        > AI's...which might have nothing at all to do with how a real,
                        > physical brain works. Might.
                        >
                        > It's disingenuous for you to assert it's failed.
                        >
                        > Who knows if consciousness won't "emerge" in a simulation like IBM's
                        > until it's been tried? We don't know how consciousness emerges from
                        > stuff.
                        >
                        > So you are wrong to assert that "Maths of brain physics can be
                        > computed". Indeed that very claim seems to me to be without content.
                        > A true simulation of 'brain physics' as you call it may very well
                        > require a simulation of quantum effects at the level of individual
                        > synapses...of which there are trillions. This is what Roger Penrose
                        > thinks. He thinks consciousness is to be found at the quantum level.
                        >
                        > And this we *know* simply is *not possible* given our current
                        > technology. If consciousness is truly a quantum phenomenon then
                        > modelling it properly will require a quantum computer. We don't have
                        > those yet.
                        >
                        > Do you have a refutation of Penrose's ideas in "the Emperor's New
                        > Mind"?
                        >
                        > What do you think "brain physics" is? Are you certain that all brain
                        > functions can be adequately described by classical physics?
                        >
                        > Even if it can be, precisely what bits of physics do you think are
                        > important? Just mechanics or also thermodynamics? Is electrodynamics
                        > important too?
                        >
                        > It might be that entirely new physics is needed. I don't know this
                        > either. Nor do you. Consciousness is just that weird a phenomenon
                        > (in the sense, that it's so poorly understood) that it is not
                        > ridiculous to presume that none of our current physical theories
                        > will be able to adequately capture what's going on. In that case
                        > "brain physics" might very well turn out to be dependent upon
                        > theories that supplant quantum theory and relativity. This is
                        > unlikely - but who knows? More mundane things like explaining "dark
                        > energy" might require entirely new physical theories - and I reckon
                        > consciousness will be a harder problem than dark energy.
                        > > There are abstract models of function that can be computed, there
                        > are abstract models of representation that can be computed.
                        > >
                        > Yes. And they are grossly inadequate to model consciousness - which
                        > is, at root, what we are talking about right? These abstract models
                        > are gross simplifications of reality. An abstract model of
                        > consciousness has never been proposed. What is your abstract model
                        > of consciousness? Consciousness is something the brain does or has
                        > as far as we can tell...we just have no clue how exactly it's
                        > involved. We just know it is. Maybe the brain is like an antenna
                        > that tunes into consciousness? We just don't know. Maybe
                        > consciousness is in everything. Are you aware of panpsychism? Have
                        > you read philosopher Galen Strawson on this?
                        >
                        > Anyone who thinks we know one way or the other on AGI and
                        > consciousness is being a dogmatist on the issue. On one of the most
                        > interesting scientific and philosophical questions ever. It's the
                        > search that is fascinating. Not being blinkered by a pet theory.
                        >
                        > > Etc etc Pick your ism.
                        > >
                        > > None of these are what a brain is doing.
                        > >
                        >
                        > Sorry? None of what are? None of the isms? What about dualism? That
                        > hasn't been ruled out either. What about monism? There might be
                        > *only* consciousness. What about Bruno Marchal's computationalist
                        > hypothesis? Consciousness might be found there and general AI might
                        > only make sense in that context.
                        >
                        > To say that "none of these are what the brain is doing" is to assert
                        > you know what it *is* doing. You don't.
                        >
                        > > In a brain, real action potentials are resonating with a real 3D
                        > > EM field system best described in wave-mechanical/quantum mechanical
                        > > terms.
                        >
                        > Okay so that's something that is going on in a brain. So what?
                        >
                        > "In a brain, blood vessels called capillaries are transporting
                        > dissolved oxygen into neurones where it is able to reach the
                        > synapses. There, neurotransmitters, including serotonin are acting
                        > on receptors. This interaction between neurotransmitter and receptor
                        > seems to be involved in subjective experiences."
                        >
                        > So I've made some random true statements there about physical
                        > processes in the brain at the level of the neurone. I don't get what
                        > your point is in picking some arbitrary level of description (namely
                        > where action potentials occur) and claiming (as you are about to)
                        > that "We are this activity".
                        >
                        >
                        > More to the point...what you have written there sounds like quackery
                        > (a bunch of jargon intended to convey some air of authority but that
                        > on analysis is no more than scientific *sounding*). In particular that
                        > "wave-mechanical" bit; what does that mean? And the assertion that
                        > quantum mechanics is involved is not at all mainstream in
                        > neuroscience.
                        >
                        > Yes, some respectable people have made noises of this sort - but
                        > they are careful to parse their comments and not be dogmatic about
                        > things. Penrose for example. You are pretending that there is a
                        > complete theory of this already. That's simply not true. Or are
                        > respectable neuroscientists like Sam Harris wrong? See http://www.samharris.org/blog/item/the-mystery-of-consciousness
                        >
                        > In particular Sam says, "Most scientists are confident that
                        > consciousness emerges from unconscious complexity. We have
                        > compelling reasons for believing this, because the only signs of
                        > consciousness we see in the universe are found in evolved organisms
                        > like ourselves. Nevertheless, this notion of emergence strikes me as
                        > nothing more than a restatement of a miracle. To say that
                        > consciousness emerged at some point in the evolution of life doesn't
                        > give us an inkling of how it could emerge from unconscious
                        > processes, even in principle."
                        >
                        > So Sam admits that we have no clue about how consciousness emerges,
                        > despite everything we know about physical processes in the brain.
                        >
                        > > We _are_ this activity.
                        >
                        > No. We are animals. No. We are universal explainers. No. We are
                        > individuals in a complex society. No. We are thermodynamic engines.
                        > No. We are subjective experience.
                        >
                        > The list is almost infinitely long. We don't know what we are. And
                        > yet we know we are many things at once. David made a great advance
                        > in putting forth the idea in BoI that we are Universal Knowledge
                        > creators and this separates us from other living things. That is far
                        > more useful a description than that we are action potentials. Indeed
                        > I don't think we are action potentials. Because there are action
                        > potentials that are clearly not people.
                        >
                        > > We are not a computer running a model of it. Until we fully
                        > > replicate and examine the performance of replicated inorganic tissue
                        > > that does the same physics, we are underequipped to say any of the
                        > > simulations mean anything, let alone that the brain is a computer.
                        >
                        > Well now you seem to be saying we don't know; "...we are
                        > underequipped..." you say. Well which is it? If we're underequipped,
                        > then I agree. But you have spent the rest of this email and other
                        > posts talking about how we are most definitely equipped. We're
                        > equipped apparently to rule out "The Brain is a computer"
                        > hypothesis. You say we know the brain is not a computer. Be
                        > consistent at least in one post.
                        >
                        > > This is the profound map/territory confusion of the kind in my article,
                        > > and it has caused replication to be sidelined for half a century.
                        > > I don't know what systemic flaw there is in our education system
                        > > that has you thinking like this... but I know it exists, and I know
                        > > that it has done a disservice to progress in AGI and neuro/cognitive
                        > > science.
                        >
                        > In the art of debating it's well known that the first person to
                        > mention Hitler loses the debate. I think there should be similar
                        > common knowledge when it comes to appealing to failures in the
                        > education system as the reason your opponents don't agree with you.
                        >
                        > The map/territory analogy doesn't win the argument. It's already
                        > been explained to you that *if* the brain is indeed doing
                        > computations that give rise to its intelligence then this is
                        > precisely the reason why simulating such a thing will replicate
                        > consciousness and intelligence inside software running on silicon
                        > hardware.
                        >
                        > It's completely unlike simulating a fire in a computer and you seem
                        > to deliberately want to misunderstand this point.
                        >
                        > > I don't want to squabble about it.
                        > >
                        > > It's taken 10 years to work out what's been going on and I can't
                        > > fix this endemic misdirection overnight.
                        >
                        > Constant repetition of how many years of work you have done on this
                        > is an appeal to authority and actually undermines your position
                        > rather than strengthens it. I think you have good arguments for
                        > building physical brains.
                        >
                        > I don't think you have to dismiss other ideas to advance your own.
                        > There's enough love on this topic to go round.
                        >
                        > > Consider this:
                        > >
                        > > The people at CERN/LHC have spent billions replicating things in
                        > > the supercollider to understand what is going on because they admit
                        > > their models are frayed around the edges, and computing can't get at
                        > > them.
                        >
                        > Huh? CERN doesn't exist just because computers are incapable of
                        > doing the simulations!!
                        >
                        > This is orthogonal to the discussion. Do you actually know what's
                        > going on at CERN or are you just making a stab in the dark? This
                        > statement of yours also suggests a genuine misconception about the
                        > nature of science and the philosophy of science. I suggest you read
                        > BoI. If "frayed around the edges" means "incomplete" then that's not
                        > a criticism. All theories are "frayed around the edges" in that way.
                        > They wouldn't be scientific if they weren't.
                        >
                        > LHC was built for lots of reasons. But it wasn't because particle
                        > physicists were committed to trying to simulate everything in
                        > computers and failed...so *had* to resort to building the biggest
                        > particle accelerator of all time. Their theories of particle physics
                        > actually predicted that such an accelerator was *required* if we
                        > wanted to know more about the internal structure of subatomic
                        > particles.
                        >
                        > Are you a justificationist? Do you believe that scientific theories
                        > are proven true?
                        >
                        > > What craziness justifies the claim that the most complex thing in
                        > > the universe can be understood without ever having properly tried
                        > > replication?
                        >
                        > You are a justificationist. But moving on...
                        >
                        > You're arguing that we should try to build a brain out of
                        > hardware...I agree. That's a nice approach. Good luck to you. Power
                        > to progress and creativity. I also get the impression that you seem
                        > to be upset that this is not tried enough or at all and you aren't
                        > getting funding because people are more interested in simulating
                        > things in a computer and you're up against a certain philosophy that
                        > is antagonistic towards this approach. You aren't alone. Jaron
                        > Lanier is my favorite scientist and philosopher on this point. He is
                        > a computer scientist of some renown and disagrees with the
                        > philosophy that emanates from silicon valley that seems to be anti-
                        > human in many ways. In particular he is unconvinced that
                        > consciousness has much hope of being replicated in a silicon
                        > computer (i.e: it's not Turing emulable). His book "You are not a
                        > gadget" is an amazing defence of this idea. I commend it to you. But
                        > even Lanier is not as dogmatic as you are.
                        >
                        > So, yes, let's try many approaches, good idea.
                        >
                        > But just because you have *an* approach doesn't mean it's the
                        > *right* approach or that the other approaches are necessarily wrong.
                        > We cannot know which approach is correct. If we did - we would
                        > already have the solution to the hard problems of consciousness -
                        > including how intelligence emerges from brain stuff. You have picked
                        > a side. Great, good luck. But so? You don't need to go a step
                        > further into the close-minded trap of dogmatic adherence to a
                        > particular point of view about this issue...to the exclusion of all
                        > other possibilities. Like simulating brains in software.
                        >
                        > I don't see any good reasons to believe you are correct just yet. I
                        > don't see good reason to think that any theory currently on offer
                        > *actually explains* human intelligence, creativity or consciousness.
                        > There is no explanatory theory of these things. Just hypotheses -
                        > few of which seem to be testable in practice. Some that aren't even
                        > testable in principle.
                        > > That's the damage that this mistake has caused. That's why AGI has
                        > > failed for 60 years: assumptions like the ones you made.
                        > >
                        > That's a dogmatic position. To believe that AGI has failed because
                        > of certain attempts to make progress is just wrong.
                        >
                        > AGI has failed because we do not have a good theory of
                        > consciousness, intelligence, creativity and other things. We don't
                        > know enough.
                        >
                        > It has nothing to do with assumptions. We won't know what
                        > assumptions were wrong until (and if) we ever have an explanatory
                        > theory.
                        >
                        > > All I can ask is that you revisit and challenge your ideas,
                        > > because they are anomalous in science as a whole.
                        >
                        > I think it goes for you, more than for them. You need to be more
                        > open minded. If you close off the possibility that the brain is a
                        > computer then you will close avenues of research. You also seem to
                        > betray an ignorance of the theories of computation that people on
                        > this list are expert on. I suggest you read the book "The Fabric of
                        > Reality" to fill in those gaps. I am no expert, but I at least
                        > understand the argument that the brain, as a physical system, can be
                        > considered as a Universal Turing Machine and if that's the case then
                        > it can be simulated in a computer. Unless you "get" the connection
                        > between physics and computation, this will simply fly over your head
                        > and you will fly off the handle about how the brain can't be a
                        > computer. As you have done. But we have no reasons yet to think that
                        > the brain is special when it comes to the computations that it does.
                        >
                        > Sam Harris is more correct on this point than either side.
                        > Consciousness, as it stands is not the same sort of problem as other
                        > scientific questions. It is rivalled only by the question "Why is
                        > there something rather than nothing?" The very question seems to
                        > buffer itself against solutions, and yet as optimists it is
                        > important we continue to try and make progress - especially in
                        > philosophical terms on this point.
                        >
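The universality argument quoted above — that the brain, considered as a physical computing system, could in principle be emulated step by step by another computer — can be made concrete with a toy sketch. The machine and rule table below are illustrative inventions, not anything proposed in the thread: a minimal Turing machine simulator whose example program increments a binary number.

```python
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a single-tape Turing machine.

    rules maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left), +1 (right) or 0 (stay).
    The machine halts when it enters state 'halt'.
    """
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, blank)
        state, write, move = rules[(state, symbol)]
        tape[pos] = write
        pos += move
    cells = sorted(tape)
    return "".join(tape[i] for i in range(cells[0], cells[-1] + 1)).strip(blank)

# Example program: binary increment, head starting at the leftmost digit.
increment = {
    ("start", "0"): ("start", "0", +1),  # scan right to the end of the number
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),  # step back onto the last digit
    ("carry", "1"): ("carry", "0", -1),  # 1 + carry -> 0, propagate the carry
    ("carry", "0"): ("halt", "1", 0),    # absorb the carry
    ("carry", "_"): ("halt", "1", 0),    # overflow into a new digit
}

print(run_turing_machine(increment, "1011"))  # -> 1100
```

Nothing about the emulation depends on what the host is physically made of; that is the sense in which a universal machine can, in principle, reproduce the computations of any system that is correctly described as a computer. Whether the brain is such a system is, of course, exactly what the thread disputes.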

                        We do have a theory of consciousness, computationalism + theoretical
                        computer science, which makes very specific technical points which can
                        be tested, and most have been tested, retrospectively. The theory
                        predicts most of the quantum weirdness (indeterminacy, many-realities,
                        non-locality and no-cloning, but also symmetries, and perhaps with
                        some more work it explains quantum computation, without postulating
                        QM). The theory explains completely why our knowledge divides into
                        non-sharable qualia and sharable quanta.

                        That theory refutes the Aristotelian idea that *primitive* or
                        *primary* matter exists, again in a constructive way, as it predicts
                        completely the appearance of matter, and that is what makes the theory
                        testable, when we accept the classical definitions of belief and
                        knowledge.

                        The theory explains why there is something material instead of
                        nothing, but has to accept some axioms, like the definition of
                        addition and multiplication. But then the theory explains why this is
                        something not explainable by *any* theory, and in that sense, it makes
                        elementary arithmetic a theory of everything, including quanta and
                        qualia.




                        >
                        > It may very well be the case that the brain is hardware and the mind
                        > is software. We have not yet ruled that out.
                        >
                        > You might be right.
                        >
                        > But to insist everyone else is wrong given how much we *don't know*
                        > about this stuff at the moment seems terribly dogmatic and
                        > pessimistic. Especially as we do not yet have either scientific or
                        > philosophical refutations of those ideas that you do not like.
                        >

                        I agree with many of your critics. Colin seems to have a dogmatic
                        attitude. He talks as if he knew the truth, which is the mark of
                        false prophets and the pseudo-religious people.

                        Bruno
                      • smitra@zonnet.nl
                        ... While we are indeed not a computer running a model of the brain (obviously we are whatever the brain is), the brain itself is de-facto implementing
                        Message 11 of 22, Jun 30, 2012
                          Citeren Colin Geoffrey Hales <cghales@...>:

                          > Elliot Temple wrote:
                          >
                          >>> On Jun 25, 2012, at 6:14 PM, Colin Geoffrey Hales wrote:
                          >>>
                          >>> Hi,
                          >>> A while ago I got all agitated about computationalism and said I'd
                          >>> write it up and get it >> published somewhere.
                          >>>
                          >>> Well here it is..... (and FoR gets a mention!)
                          >>>
                          >>> Hales, C. G. 2012 The modern phlogiston: why 'thinking machines'
                          >>> don't need computers
                          >>> TheConversation. The Conversation media Group.
                          >>>
                          >>> http://www.theconversation.edu.au/the-modern-phlogiston-why-thinking-machines-dont-need-computers-7881
                          >>
                          >> I agree with you that a fire and a computer simulation of a fire are
                          >> not the same thing. Maybe Bruno will argue the point and I don't
                          >> want to 100% dismiss him, but I'm sure not convinced currently.
                          >>
                          >>
                          >> But with AGI, it's not completely parallel to that.
                          >>
                          >> The physical object in question is not a fire but a computer made
                          >> out of organic (carbon-based) molecules.
                          >>
                          >> When we simulate it, we're also using a physical object, a computer.
                          >> Which, while not a fire, is a physical, real world computer.
                          >>
                          >> So what's the difference?
                          >>
                          >> Brains are computers and silicon computers are also computers, both
                          >> are physical real-world computers, so why shouldn't they be able to
                          >> do the same things?
                          >>
                          >> Comparing fire-to-computer is one thing, but computer-to-computer is
                          >> another!
                          >>
                          >> I don't think computers built out of carbon and silicon are totally
                          >> different and I don't think that carbon allows intelligence while
                          >> silicon doesn't. And I don't see what other differences to focus on
                          >> as important instead.
                          >
                          > Saibal wrote:
                          >
                          >> Also, what we experience is what the model the brain uses to represent
                          >> the physical world generates, not the physical world itself. If this
                           >> model is flawed, we still experience that flawed representation of the
                           >> world.
                          >
                          > To Saibal and Elliot (and those of the ilk),
                          >
                          > The comments you have made are exactly the 'phlogiston' thinking of
                          > my article.
                          >
                          > The brain is not a computer. Maths of brain physics can be computed.
                          > There are abstract models of function that can be computed, there are
                          > abstract models of representation that can be computed. Etc etc Pick
                          > your ism.
                          >
                          > None of these are what a brain is doing. In a brain, real action
                          > potentials are resonating with a real 3D EM field system best
                          > described in wave-mechanical/quantum mechanical terms. We _are_ this
                          > activity. We are not a computer running a model of it. Until we fully
                          > replicate and examine the performance of replicated inorganic tissue
                          > that does the same physics, we are underequipped to say any of the
                          > simulations mean anything, let alone that the brain is a computer.
                          >
                           > This is the profound map/territory confusion of the kind in my article,
                           > and it has caused replication to be sidelined for half a century. I
                          > don't know what systemic flaw there is in our education system that
                          > has you thinking like this... but I know it exists, and I know that
                          > it has done a disservice to progress in AGI and neuro/cognitive
                          > science.
                          >
                          > I don't want to squabble about it. It's taken 10 years to work out
                           > what's been going on and I can't fix this endemic misdirection
                          > overnight.
                          >
                          > Consider this:
                          >
                          > The people at CERN/LHC have spent billions replicating things in the
                          > supercollider to understand what is going on because they admit their
                          > models are frayed around the edges, and computing can't get at them.
                           > What craziness justifies the claim that the most complex thing in
                           > the universe can be understood without ever having properly
                          > tried replication? That's the damage that this mistake has caused.
                          > That's why AGI has failed for 60 years: assumptions like the ones you
                          > made.
                          >
                          > All I can ask is that you revisit and challenge your ideas, because
                          > they are anomalous in science as a whole.
                          >
                          > Thanks for reading the article.
                          >
                          > Cheers
                          > Colin
                          >
                           While we are indeed not a computer running a model of the brain
                           (obviously we are whatever the brain is), the brain itself is de facto
                           implementing information-processing methods. We don't experience
                           everything in the brain; in practice we only experience a small part.

                           Suppose someone has their brain removed, the nerves that connect the
                           body to the brain are now hooked up to some supercomputer via a
                           transmitter in the head, and that supercomputer simulates the removed
                           brain in very accurate detail. Then, while the physical nature of the
                           processes in the brain and in the computer would obviously be
                           different, there would be a one-to-one correspondence as far as the
                           relevant processes are concerned.

                          Saibal
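Saibal's closing point — that two physically different systems can stand in exact correspondence at the level of the relevant processes — can be illustrated with a deliberately trivial sketch (my own, not from the thread): two structurally different implementations whose input-output behaviour is identical.

```python
def adder_arithmetic(a: int, b: int) -> int:
    """Compute a + b directly, by arithmetic."""
    return a + b

# A structurally different "implementation": an exhaustive lookup
# table for small inputs, built once in advance.
LOOKUP = {(a, b): a + b for a in range(16) for b in range(16)}

def adder_lookup(a: int, b: int) -> int:
    """Compute a + b by table lookup instead of arithmetic."""
    return LOOKUP[(a, b)]

# The internal processes differ, but at the level that matters here --
# the mapping from inputs to outputs -- the correspondence is exact:
assert all(
    adder_arithmetic(a, b) == adder_lookup(a, b)
    for a in range(16) for b in range(16)
)

print(adder_arithmetic(3, 4), adder_lookup(3, 4))  # -> 7 7
```

Whether the brain's relevant processes are of a kind that admits such a correspondence is exactly what the thread disputes; the sketch only shows that "different physics, same computation" is a coherent idea.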