
Subjective Experience

  • Eray Ozkural
    Message 1 of 1, Jan 14, 2004
      I was looking in my doc/articles directory and I saw this post which I wrote
      on comp.ai and comp.ai.philosophy a long time ago after I reviewed Minsky's
      talk at game developers conference.

      Here is a link for that review on comp.ai:
      http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&safe=off&frame=right&th=d640762f2ddd5a2e&seekm=9f76tf%24mj5%241%40mulga.cs.mu.OZ.AU#link1

      And here is the link for the original "Subjective Experience" post on comp.ai:
      http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&group=comp.ai.*&safe=off&selm=9h6ldp%2444d%241%40mulga.cs.mu.OZ.AU&rnum=1

      I was thinking about how best to assess Jochem's and Ozge's views on
      subjective experience. This post summarizes my earlier post on subjective
      experience.

      [I have seen that my review of Prof. Minsky's talk about the current state of
      AI has spawned quite a big discussion. Especially the part about
      consciousness, as usual, proved to be remarkably controversial. I feel
      obliged to explicate my views on the matter in more detail.]

      Every discipline in the cognitive sciences has offered a different approach
      to the study of high-level psychological phenomena such as consciousness.
      However, a consensus seems almost impossible. In the philosophy of mind,
      where we seek to establish principled thinking about the mind/body problem,
      the computational aspects of thinking, the problem of representation and the
      like, philosophers have engaged in arguments that have lasted for decades
      over even the most basic questions. Some, such as Searle, have tried to deny
      outright that a computer can attain consciousness. And probably some have
      claimed that consciousness does not exist at all.

      My feeling about this matter is that if we abandon the scientific "practice"
      of seeking an answer, we will certainly fail to find one.

      I have seen that many have walked into the shadowy realm of syntax vs.
      semantics discussions, in which the true adversary of AI will grant computers
      every aspect of syntax but will say that they do not necessarily attain
      semantics. In response, the proponent will remark that computers can have as
      much semantics as a person if correctly programmed. That kind of discussion,
      without any frame of reference whatsoever, is hardly of interest to the
      practically minded scientist/philosopher, for we would like to know which
      problem we are thinking about. In the rest of this article, it will be
      accepted that the Berkeley systems reply to Searle's argument is indeed true
      and that Searle's argument is logically flawed: we cannot simply "assert"
      that computation cannot contain consciousness. [If you would like to talk
      about this topic, please start another thread, renaming the subject.]

      Other premises of this discussion are that human brains are machines, and
      that the mind depends solely on the physical. However, we also assume that
      we can think about cognitive phenomena at a high level without reference to
      a purely reductionist account, thus from the standpoint of the philosophy of
      mind. The reasoning behind this position is that
      a) such an account may not be feasible to formulate,
      b) we can safely ignore features that are too low level (such as the
      intricate details of chemical bonds), and
      c) we have reason to believe that most of cognition has a computational
      nature.

      That is, I do not view intelligence as a large-scale quantum effect or as
      something caused by the will of a powerful deity.

      The crux of Minsky's argument was that the term consciousness acts as a
      placeholder for several very difficult problems. It is such a big problem
      that we avoid solving it by saying that it is not solvable, i.e. that it is
      necessarily irreducible and a very important human trait: subjective
      experience. It is certainly not on the agenda of the practically minded to
      regard a cognitive phenomenon in a dogmatic manner, simply asserting a
      superficial explanation that bears no scientific value.

      The more significant part of Minsky's claim has been largely ignored: he
      claims to have a theory that explains what those hard problems are and how
      they could be solved.

      I agree with Minsky that a rigorous treatment of consciousness truly leads
      to many well-defined difficult problems. For instance, it has often been
      said that "awareness" is one part of the puzzle. How can a system be 'aware'
      of its state and of its environment? In what way is "attention" realized?
      Nevertheless, a very aware system (say, a military combat computer) cannot
      be said to be conscious. That is, awareness is not a sufficient condition
      for consciousness. This kind of philosophical analysis, I believe, is most
      beneficial for researchers in other disciplines within the cognitive
      sciences: divide and conquer.

      On the other hand, even building systems that realize these aspects of
      high-level mammalian cognition may not be satisfactory for "implementing"
      consciousness. For we must reckon that there might be a design that makes an
      intelligent system conscious at many levels [or for some other valid
      reason]. In other words, the problem might be bigger than we think, or it
      might be more fundamental than we assume it to be.

      Take, for instance, the idea of retracing the evolutionary process: if we
      build simple minds and then add new layers bringing new features, we should
      eventually reach a human-level mind. Surely, especially for understanding
      the less developed regions of mammalian brains, that is a reasonable course
      of experiment. On the other hand, we cannot guarantee that following such a
      simple methodology will result in success: we can have robots that walk but
      cannot think. Perhaps it is the last layer that is so hard to reproduce, or
      perhaps we have inaccurately modeled a long phase of evolution. From that
      perspective, it might be more than difficult to backtrace the log of the
      evolution of intelligence on earth. It is an experiment too vast to
      replicate.
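
      [As a minimal sketch of this layering idea (my own toy illustration in
      Python, not anything Minsky proposed; the layer names and percept keys are
      made up), consider a subsumption-style controller in which each layer adds
      one capability on top of a simpler mind:]

          # Toy layered controller: a higher layer overrides the lower ones when
          # its trigger applies, and otherwise defers to the simpler layer below.
          class Walk:
              def act(self, percept):
                  return "walk forward"            # the simplest possible "mind"

          class AvoidObstacle:
              def __init__(self, lower):
                  self.lower = lower
              def act(self, percept):
                  if percept.get("obstacle"):
                      return "turn away"           # this layer's new capability
                  return self.lower.act(percept)   # defer to the layer below

          class SeekFood:
              def __init__(self, lower):
                  self.lower = lower
              def act(self, percept):
                  if percept.get("food_visible"):
                      return "approach food"
                  return self.lower.act(percept)

          agent = SeekFood(AvoidObstacle(Walk()))
          print(agent.act({"obstacle": True}))      # -> turn away
          print(agent.act({"food_visible": True}))  # -> approach food
          print(agent.act({}))                      # -> walk forward

      The point of the sketch is only that each such layer is cheap to add, while
      nothing in the construction tells us whether stacking more of them ever
      yields anything like thinking.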

      As a digression, I offer the following thought, which I entertain
      occasionally. In the design of abstract machines, we have come a long way.
      Imagine what the early programmers would think of today's advanced languages
      such as Haskell. They used to program in machine code (not assembly), and
      they would perhaps not be able to imagine what the design of a functional-OO
      program would look like. Likewise, perhaps we are not able to see the design
      behind the architecture of the human mind. Perhaps it is the necessary
      outcome of a very advanced design that we have not yet come to appreciate.

      I would like to present a warm thought experiment in this respect: the poor
      bug. When we are watching a documentary about the jungle, we observe bugs
      swarming with life. Each one of them is a unique individual, with a
      different prospect of life and an optimal behavior function that will lead
      to its goals.

      This thought experiment, as you may have predicted, is one in simulacra.
      Now, let the same (harmless) little bug live in a controlled environment
      such as a glass box. When we observe the bug, we are also observing the
      "psychology" of the bug, which is far inferior to ours. Yet there is solid
      evidence that it has some intelligence. Some bugs, as you may find evidence
      in the literature, have shown behavior that requires learning. And the truth
      is that none of our robots is as good as a real insect. We almost have the
      technology to do fantastic things to a bug. Let us assume that we can "scan"
      the bug and produce a simulation of it so accurate that, when placed in a
      virtual construct of the controlled environment, it demonstrates the same
      cognitive capacity. We would be tempted to ask whether this is the same
      individual as the scanned one. Physically, it is not. Nevertheless, we could
      say that the new entity is the continuation of the original bug and should
      thus be considered the same individual, "teleported" to a different "place".

      My argument is focused on "subjective experience", therefore it does not
      require a human. We can surely accept that "being a bug" involves an
      extensive subjective experience. The experience of a member of a species, it
      is true, is very valuable. Nevertheless, we do not question value here.

      Our simulation physically contains the mind of the replica of the poor bug.
      Something strange has happened. We have transferred a mind to a computer,
      but we do not know how to design and build one. What we did was scanning:
      perhaps dissecting the poor bug with precise devices and then encoding the
      state of the machinery in each of its cells. Yet we have not reverse
      engineered the design. We do not _understand_ how a machine with similar
      capabilities could be built. Nevertheless, we can look at the simulation
      monitors and, using some excellent programs, measure every aspect of the bug
      with sufficient precision. What have we achieved?[*]
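
      [A minimal sketch of what such a scan might amount to, again my own
      illustration in Python; the transition table, states, and stimuli below are
      entirely hypothetical. The scan is nothing more than an opaque
      state-transition table that we replay and observe:]

          # Pretend this dictionary is the raw scan:
          # (internal state, stimulus) -> (next state, action).
          # We can replay it and watch the "monitors", yet it tells us nothing
          # about how such a mind is designed or how to build another one.
          scanned_table = {
              (0, "leaf"): (0, "walk under leaf"),
              (0, "rain"): (1, "hide in hole"),
              (1, "sun"):  (0, "come out and eat"),
          }

          def simulate(initial_state, stimuli):
              state = initial_state
              for stimulus in stimuli:
                  state, action = scanned_table[(state, stimulus)]
                  print(stimulus, "->", action)    # the simulation monitor
              return state

          simulate(0, ["leaf", "rain", "sun"])     # behaves like the bug

      The replica behaves like the bug, and every variable can be measured, yet
      the design remains as opaque as it was before the scan.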

      Since we are now inside the frame of a thought experiment, let us take it
      further. Imagine that after the poor bug was crushed, one of the lab
      assistants, named Tenno, who happened to be a practicing Zen monk at the
      same time, came to study the poor bug. Having perfect control of his mind,
      he read the binary code that represented the bug and re-formed the bug in
      his mind. He closed his eyes, and during his meditation he gave freedom to
      the poor bug in his mind. He watched in his mind's eye how it walked and
      ate, and how it went into a little hole in the ground to avoid the rain. He
      understood the poor bug. He knew what it saw and how it saw, and the crude
      feeling of being a poor bug running for its life underneath a leaf. His
      professor entered the lab and, finding him sitting in the lotus position,
      asked whether he had backed up the simulation. He made no reply, because
      until he opened his eyes he was the bug.

      When he opened his eyes, he said "being a bug is more exhilarating than the
      code". The professor said "to know the bug is more important than to be the
      bug".

      The persuasive component of this thought experiment is that we can diminish
      the difference between subjective experience and objective experience to a
      point where there is no longer anything to distinguish. There is only
      operation.

      This is of course only a long way of saying "Minds are simply what brains
      do" (Minsky, The Society of Mind).

      The moral of the story is that the fact that we do not currently understand
      how a mind works does not necessarily mean that it cannot ever be understood.

      Moving out of the frame of the experiment, the remaining question is then
      why "consciousness" is one of the most powerful suitcase words. It is so
      because it refers to our most precious capabilities. Therefore, saying that
      those precious capabilities do not exist is a fallacy. Nevertheless,
      consciousness as we know it does not exist, for saying that it is 'being
      capable of subjective experience' is a very weak definition. Then one would
      be forced to enquire what makes your subjective experience different from a
      bug's, and with no one to offer an admissible answer that involves "the
      holiness of subjective experience", the ancient definition of consciousness
      would be blown to oblivion.

      I would be delighted to see insightful comments on this humble post.

      Regards,

      [*] This thought experiment was initially intended to support the view that
      activating an intelligent entity without understanding how its mind works
      does not amount to much. It had perhaps arisen from my anti-neural-network
      attitude: that the agenda of AI _must_ include _understanding_ how the mind
      works.

      __
      Eray Ozkural, a.k.a. exa
      CS Dept., Bilkent Univ.
      There is no perfect circle. -- me