Re: [ai-philosophy] Re: What Searle is really guilty of

  • Eray Ozkural
    Message 1 of 32, Apr 1 2:14 AM
      On Wed, Apr 1, 2009 at 6:00 AM, iro3isdx <xznwrjnk-evca@...> wrote:
      > --- In ai-philosophy@yahoogroups.com, "Joe Legris" <jalegris@...> wrote:
      >
      >> I can imagine that a suitably structured machine could
      >> plausibly claim to have mental states. Searle says in his
      >> Q & A that some machines might obtain consciousness
      >> or intentional states by duplicating the causes and effects
      >> operating in humans, possibly using different chemical
      >> processes, but not by the mere manipulation of formal
      >> symbols.
      >
      > But why does this have to be tied to chemical processes?
      >
      > It seems to me that a cognitive system is following some
      > principles that have to do with information.  And any system
      > capable of implementing those principles should be capable
      > of having mental states.  Whether or not those principles are
      > purely computational, it seems unlikely that they would be such
      > that only chemical implementations are possible.

      Searle confuses subjective experience with consciousness. That's the problem.

      He does not understand that there can be non-informative subjective
      states, for instance.

      Best,


      --
      Eray Ozkural, PhD candidate. Comp. Sci. Dept., Bilkent University, Ankara
      Research Assistant, Erendiz Supercomputer Inc.
      http://groups.yahoo.com/group/ai-philosophy
      http://myspace.com/arizanesil http://myspace.com/malfunct
    • iro3isdx
      Message 32 of 32, Apr 8 6:23 PM
        --- In ai-philosophy@yahoogroups.com, "Joe Legris" <jalegris@...> wrote:

        > I agree that the above seems silly and nitpicking,
        > but shouldn't a commitment to computationalism
        > include a commitment to some physical properties
        > of "mind"

        I can't say what a commitment to computationalism should entail,
        since I don't have such a commitment. I'm committed to following
        the evidence wherever that leads, but not to a particular conclusion.

        I'm not sure what you mean by "physical properties." If that is
        a reference to properties studied by physics, then since physics
        does not study consciousness, I don't see any clear need for
        such properties.

        I see computation, itself, as abstract and non-physical. Of course
        we use physical implementations. But generally we don't want details
        of the implementation to get in the way of the computation, so it
        isn't clear that there should be any particular physical implications
        from assuming computationalism.