
Re: [analytic] Sense of self

Eray Ozkural
Message 1 of 1, Jul 1, 2009
      This is also relevant for ai-philosophy. Comments below,

      On Sun, Feb 1, 2009 at 3:13 AM, jrstern <jrstern@...> wrote:
      > --- In analytic@yahoogroups.com, "Stuart W. Mirsky" <swmaerske@...>
      > wrote:
      >>
      >> (was: Wittgenstein on minds)
      >>
      >> You're probably right that it is not quite as simple as I
      >> described it above. Certainly we have conflicting intuitions
      >> and to some degree what we take to be our intuitions about
      >> this ARE a function of training, acculturation, linguistic
      >> systems adopted, etc. But I would argue that when we set out
      >> to do introspection the first thing we tend to "see" about
      >> our "selves" is that there is an us, and that us is defined
      >> by its history, sense of continuity with that, etc. Kant
      >> captured this nicely, I think, in his reference to the
      >> transcendental "I" in which he recognized Hume's insight
      >> (and it was THAT because it was new) that the self is at
      >> bottom just a bunch of disconnected mental events which
      >> we think are connected. This, by the way, does suggest
      >> that our tendency, until we take up Hume or, perhaps, Zazen,
      >> is to think of our selves as a unity as I suggested,
      >> even if there is something of our forms of life behind it
      >> as well.
      >>
      >> Anyway, Kant went on to suggest that there was at the core of all
      >> the manifestations of what we recognized as the self, a perceiver
      >> that was not itself perceived, a sort of transparent I, etc.
      >> Dennett suggests, of course, that this is part of the illusion
      >> that is produced by something on the order of his
      >> "multiple drafts" system. At the least, I think the "multiple
      >> drafts" proposal an interesting one for accounting for the sense
      >> of a self lying at the core of the known self, etc.
      >
      > Thanks for the reply, it gives me a chance first of all to add in
      > what I meant to include earlier, that I don't give much weight at all
      > to intuitions, not mine nor anyone's, and I strongly suspect (ie,
      > reject) any argument that does otherwise. As does Dennett, who I
      > believe coined the term "intuition pump" for various stories and the
      > way they grab attention beyond their real, defensible value.
      >
      > I'm not sure how novel Hume's insight might be, there seems no
      > generic position that was not already recognized by the ancient
      > Greeks, who at least got some of them down on paper so that they have
      > made it down to us, priority intact, and wasn't it Locke who (re)
      > introduced the term tabula rasa, which I would argue has rather
      > similar implications.
      >
      > http://en.wikipedia.org/wiki/Tabula_rasa
      >
      > I suppose (even) I have some remnant "intuition" about the unity of
      > self, but I'd say my leading, active intuition for some years now has
      > been to immediately question any such statement and look for its
      > components, along the lines of Dennett's multiple drafts, or Freud's
      > hidden motivations, or nominalistic linguistic assumptions, or
      > whatever. My intuition is to wonder at the illusion, not to accept
      > it any more than my "intuition" tells me that the Earth is flat in
      > any way that I am prepared to accept. Indeed, knowing the Earth is
      > not flat, I can reexamine even raw percepts and see that they do
      > contain hints, once one knows what to look for, of the curvature of
      > the Earth. Insofar as there is a unity of self, I want to see how it
      > is brought about!

      I quoted the message in its entirety. The unity of self is without
      doubt an illusion. All senses are themselves illusions, though some
      are illusions that correspond to incorrect predictions about the
      environment. Seeing is itself an illusion, or a hallucination. At any
      rate, the experience associated with unity may actually be caused by
      a physical notion of locality, because my experience is *bounded* by
      physical constraints and architecture: the speed of neural
      transmission, how much computation the brain can perform in a second,
      how much working memory is available for solving a problem, and how
      much long-term memory there is. These are computational facts about
      the brain. This sense of unity, then, is a _computational unity_:
      interpreting the unity illusion does not require appealing to aspects
      of the illusion itself; instead, it reduces this sense to reflective
      statements in a computational mind.

      In what computational sense is there unity? Several, perhaps.
      1) The argument from algorithmic uniformity. The mind is _systematic_:
      it seems to apply the same basic principles of reasoning in all
      domains, and it can bring this problem-solving power to bear on any
      problem. This universality property is thus an essential feature of
      any part of, say, the neo-cortex. (There is also an argument from
      neural structure here.)
      2) A more useful one: the argument from parallel architecture. The
      architecture of the brain is fine-grained parallelism over a large
      number of identical resources. This massive parallelism can be
      described as a model of computation, and executing that model, for
      instance to make predictions about the machine (as with the
      computational capacity questions above), would, I think, be identical
      to _one_ _sense_ of unity.
      3) A probably more common one: the argument from computational space.
      There is no computation without space. The working memory of a
      computer requires physical space, and that space has boundaries; the
      unity can then be facilitated by shared memory.
      4) The argument from shared information. Just as the mutual
      information between two cells corresponds to the genetic code, there
      is mutual information between neurons; some basic code, for instance,
      would be shared. Likewise, in the dynamic picture we expect there to
      be self-representations which add to the mutual information between
      parts.
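      Point 4, at least, can be made concrete, since mutual information
      is a directly computable quantity. A minimal sketch in Python (the
      `mutual_information` helper and the toy spike trains are purely
      illustrative assumptions, not anything from the discussion above)
      estimating I(X;Y) in bits from paired samples:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Estimate I(X;Y) in bits from two equal-length sample sequences,
    using empirical joint and marginal frequencies."""
    n = len(xs)
    joint = Counter(zip(xs, ys))   # empirical joint distribution counts
    px = Counter(xs)               # marginal counts for X
    py = Counter(ys)               # marginal counts for Y
    mi = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n
        mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

# Two "neurons" with identical binary spike trains share one full bit;
# these two particular trains happen to be statistically independent.
a = [0, 1, 0, 1, 0, 1, 0, 1]
b = [0, 0, 1, 1, 0, 0, 1, 1]
print(mutual_information(a, a))  # -> 1.0 (perfectly shared information)
print(mutual_information(a, b))  # -> 0.0 (no shared information)
```

      On this picture, "shared code" between parts of the brain would show
      up as nonzero mutual information between their activity patterns.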

      Does this make sense to you? It does a little to me, but not much yet!

      Best,

      --
      Eray Ozkural, PhD candidate. Comp. Sci. Dept., Bilkent University, Ankara
      Research Assistant, Erendiz Supercomputer Inc.
      http://groups.yahoo.com/group/ai-philosophy
      http://myspace.com/arizanesil http://myspace.com/malfunct