
Re: [ai-philosophy] Re: Minsky on consciousness

  • Paul Bramscher
    Message 1 of 100, Feb 1, 2005
      Pei Wang wrote:

      >
      > >
      > > > But if "intelligence" is taken to be a mechanism that produces certain
      > > > behavior from certain experience (which I believe is a better way to see
      > > > it), then it is much more (though not completely) universal, because the
      > > > differences in behavior are mostly due to the difference in experiences,
      > > > not in the mechanism.
      > > >
      > > > Roughly speaking, I take intelligence to be a function that maps
      > > > experience to behavior, that is, B = I(E). According to this opinion,
      > > > different cultures provide different E, which lead to different B, but
      > > > it doesn't mean that the "I" involved is necessarily different.
      > >
      > > This is exceedingly important from an algorithmic perspective, since
      > > what you suggest seems highly plausible, especially when T is not
      > > involved. That is, it describes people as tabula rasa, blank slates (as
      > > infants). We grow into the intelligence which our culture (E, here)
      > > imparts upon us, and is expressed phenomenologically through B.
      >
      > It is not merely "plausible" --- I've had a working demo at
      > http://www.cogsci.indiana.edu/farg/peiwang/NARS/ for years. :-)

      Great work there, I will go into it more deeply.
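
      To make the B = I(E) reading concrete for myself, here is a toy
      Python sketch of my own (purely hypothetical names, and certainly
      not NARS): the mechanism I is held fixed, and differences in
      behavior B arise only from differences in experience E.

          # Toy sketch of B = I(E): same mechanism, different experience,
          # hence different behavior. Purely illustrative.

          class Intelligence:
              """A fixed mechanism mapping experience E to behavior B."""
              def __init__(self):
                  self.memory = {}   # learned stimulus -> response pairs

              def experience(self, stimulus, response):
                  # E: culturally supplied stimulus/response pairs
                  self.memory[stimulus] = response

              def behave(self, stimulus):
                  # B: whatever experience has associated, else nothing
                  return self.memory.get(stimulus, "no learned response")

          # Same I, different E, hence different B:
          culture_a = Intelligence()
          culture_a.experience("greeting", "bow")
          culture_b = Intelligence()
          culture_b.experience("greeting", "shake hands")
          print(culture_a.behave("greeting"))   # bow
          print(culture_b.behave("greeting"))   # shake hands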

      >
      > Of course I'm not claiming that a human infant is really a blank slate
      > --- it needs some minimum capacity to survive, at least. However, what
      > initial capacity (stimulus-response relations) an intelligent system has
      > should be to a large extent independent of its intelligence, that is,
      > the "I" in the above formula. In the current demo, the system can start
      > with a blank slate, but in the future it will be able to start from a
      > pre-loaded knowledge base --- there is no major technical difference.
      >
      > > So in this model, we seek to build a non-acculturated infant -- perhaps
      > > with innate intelligence, or innate capacity to build a set of B outputs
      > > in response to particular learned input E's over time.
      >
      > Exactly.
      >
      > > We could contrast this with "matured" AI which might begin its existence
      > > as such -- without the requisite years of formative (culturally filtered
      > > input) intelligence. It seems that most AI models want this method, or
      > > something like it. I imagine it's not possible financially to let an AI
      > > "grow" for 10-20 years to AI adulthood -- no grant money lasts this
      > > long. So the maturation process of AI seems to be either simplified
      > > (robot pattern recognition in simple environments for short duration) or
      > > sped up (whatever "intelligence" it might glean from a short and furious
      > > data collection period). Is it possible to have intelligence without
      > > instruction (beyond an abstract ruleset)?
      >
      > The exact time for such a training/education won't be identical to that
      > of a human being, and furthermore, you can train one system, then make
      > many copies into other systems.

      That's the curious part. I, also, would not expect the training time to
      be identical to that required for a human, and I, too, would expect the
      result to be fully copyable. But it would seem that if an AI were to
      sample and experience human-range input (whether sound, visual input,
      etc.), it might require a certain time commitment. Yet if it were
      possible to dramatically shortcut that time, I wonder whether this might
      suggest a dramatic "engineering" difference between silicon- and
      carbon-based intelligence, perhaps too great a difference?
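
      The copyability, at least, is easy to picture: train once, then
      duplicate the learned state into as many fresh systems as you like.
      A minimal sketch, treating the learned state as a plain dictionary
      (a deliberate simplification):

          import copy

          # Train one system (its learned state), then copy it wholesale.
          trained_memory = {"greeting": "bow"}   # result of a training run
          clones = [copy.deepcopy(trained_memory) for _ in range(3)]

          # Every clone behaves identically without retraining.
          print(all(c["greeting"] == "bow" for c in clones))   # True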

      When thinking of networks of meaning (as opposed to semantically-blind
      symbolic/math systems) I think that there is a key piece missing between
      -- for example -- a dictionary of definitions (which link to other
      definitions) and sampled input.
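
      A rough sketch of the two disconnected pieces as I picture them
      (hypothetical structures, just to fix ideas): definitions point only
      at other words, while sampled input is bare numbers that nothing in
      the dictionary reaches.

          # A dictionary whose definitions link only to other definitions,
          # and sampled input that no definition points at.

          lexicon = {
              "triangle": ["polygon", "three", "side"],
              "polygon":  ["shape", "side"],
              "shape":    ["form"],
          }

          samples = [
              [(0, 0), (4, 0), (2, 3)],   # sampled vertices: just numbers
          ]

          def related(word, depth=2):
              """Walk definition links; meaning stays inside the network."""
              if depth == 0 or word not in lexicon:
                  return {word}
              out = {word}
              for w in lexicon[word]:
                  out |= related(w, depth - 1)
              return out

          print(related("triangle"))   # words all the way down; the
                                       # samples above are never reached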

      That is, someone might need to be there, at the machine's side. When it
      receives input which can be recognized as the pattern of a triangle,
      then someone might instruct the machine: "This object has the shape of a
      triangle." The machine will have access to a dictionary, and then be
      able to fathom what it is about triangles that makes them triangles, and
      have access to pattern-recognition algorithms that allow it to
      resolve a triangle out of the image sample.
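
      In code, that mentoring step might look something like this (again a
      hypothetical sketch, not any actual system): the recognizer resolves
      the pattern, but only the mentor's instruction ties the pattern to
      the word.

          # The mentor at the machine's side: the recognizer finds a
          # pattern, the human supplies the word, and that instruction --
          # not inference -- bridges sampled data and lexicon.

          def looks_like_triangle(points):
              # Stand-in pattern recognizer: three non-collinear points.
              if len(points) != 3:
                  return False
              (x1, y1), (x2, y2), (x3, y3) = points
              return (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1) != 0

          groundings = {}   # word -> sampled instances (the missing link)

          def mentor_labels(points, word):
              """'This object has the shape of a triangle.'"""
              groundings.setdefault(word, []).append(points)

          sample = [(0, 0), (4, 0), (2, 3)]
          if looks_like_triangle(sample):
              mentor_labels(sample, "triangle")

          print(groundings["triangle"])   # [[(0, 0), (4, 0), (2, 3)]]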

      Without the connection between the semantic network
      (dictionary/thesaurus/lexicon) and the mathematical rules of the sampled
      data, it seems that we might only have zombie intelligence.

      Paul F. Bramscher
      University of Minnesota
      Digital Library Development Lab

      >
      > Whether such research can be funded is independent of whether it is
      > the right way to go, though I agree with your point here.
      >
      > BTW, I prefer "training" or "education" over "maturation", since the
      > latter is usually associated with the growth or change of the hardware/wetware.
      >
      > According to my belief, it is impossible to have intelligence without
      > learning --- as I said elsewhere, "intelligence" should not be about
      > "what it can do", but "what it can learn to do". Even if a system can
      > have a certain capacity by design, it won't know how to revise it for new
      > situations, which is exactly the situation with the conventional,
      > non-intelligent computer systems.
      >
      > > Despite their successes, things like pattern recognition (involving
      > > input of data) seem to differ from input of intelligence (deliberate
      > > instruction, teaching, input of meaning). I'm not aware of many AI
      > > models which involve a (presumably human) mentor to guide the AI along
      > > as we'd expect of "real" I.
      >
      > It is a matter of degree. Any adaptive/learning system needs training or
      > education before it can function properly. The more flexible an infant
      > is, the greater potential it has, and the longer time it takes to learn
      > --- just compare humans and animals.
      >
      > > If we skip this mentoring stage and view intelligence as possible to
      > > achieve in a fully robust and mature manner from the start, then I would
      > > suggest that cultural whimsy plays a specific role. This would not be
      > > variable from the AI's perspective: it was built with the specification
      > > of a particular culture in mind and may not be "adopted" as a young
      > > infant into another culture, given a whole new chance to develop a new
      > > sort of AI. (It would be able to learn a few new tricks, but it would
      > > still be an old dog as it were.)
      >
      > Agree, though I don't think viewing intelligence in that way is very
      > fruitful.
      >
      > Pei Wang
      >
      > >
      > > >
      > > > BTW, for the same reason I don't take passing the Turing Test as the
      > > > goal of my research, because it asks a system to have the same behavior
      > > > as a human being.
      > > >
      > > > Pei Wang
      > > >
      > > > > My inclination is that intelligence is territory-driven. The first,
      > > > > immediate, border is that which connected nerve pathway ends (me vs.
      > > > > it). Note that the Buddhist & Zen philosophies reject territory, they
      > > > > hope to meld the seeking-I with that which is sought, the I with the
      > > > > "they", etc. The distinction between what is me and mine vs. you and
      > > > > yours. All of this is to be overcome.
      > > > >
      > > > > So if we wish to build it in AI, I suggest that we need to capture a
      > > > > system which is deliberately territorial, perhaps greedy and egotistical
      > > > > (not merely ego-istic), since territory doesn't exist without
      > > > > someone/something which claims it as such (via ego, personality, etc.
      > > > > dynamics). This is not just a feature of intelligence, but comes very
      > > > > early in our evolutionary history, basic to animals in the kingdom
      > > > > across the board.
      > > > >
      > > > > Paul F. Bramscher
      > > > > University of Minnesota
      > > > >
      > > > > aiprog wrote:
      > > > >
      > > > > >
      > > > > > Pei,
      > > > > >
      > > > > > Not Dr. Minsky here... sorry!
      > > > > >
      > > > > > My 2 cents on the issue: study depersonalization.
      > > > > >
      > > > > > According to DSM-IV, Depersonalization Disorder, in part, constitutes
      > > > > > the following:
      > > > > >
      > > > > > ... a feeling of detachment or estrangement from one's self. The
      > > > > > individual may feel like an automaton or as if he or she is living in
      > > > > > a dream or a movie. There may be a sensation of being an outside
      > > > > > observer of one's mental processes, one's body, or parts of one's body.
      > > > > >
      > > > > > ... Various types of sensory anesthesia, lack of affective response,
      > > > > > and a sensation of lacking control of one's actions, including speech,
      > > > > > are often present. The individual with Depersonalization Disorder
      > > > > > maintains intact reality testing (e.g., awareness that it is only a
      > > > > > feeling and that he or she is not really an automaton).
      > > > > > Depersonalization is a common experience, and this diagnosis should be
      > > > > > made only if the symptoms are sufficiently severe to cause marked
      > > > > > distress or impairment in functioning.
      > > > > >
      > > > > > I suspect that depersonalization may imply a sort of "privation of
      > > > > > consciousness." And that if the cause of depersonalization could be
      > > > > > found and "reversed" in a machine... voilà! consciousness!
      > > > > >
      > > > > > Mike Archbold
      > > > > >
      > > > > >
      > > > > > --- In ai-philosophy@yahoogroups.com, Pei Wang <peiwang@m...> wrote:
      > > > > > > Dr. Minsky,
      > > > > > >
      > > > > > > First, I changed the subject line, since the following discussion has
      > > > > > > little to do with the Sapir-Whorf hypothesis.
      > > > > > >
      > > > > > > To me, what we call "consciousness" is basically our sensorimotor
      > > > > > > mechanism on our /internal/ world. It shares many properties with our
      > > > > > > sensorimotor mechanism on the /outside/ world, such as the requirement
      > > > > > > for attention, the combination of sequential and parallel processing,
      > > > > > > the associated categorization process, and so on. However, there are
      > > > > > > important differences: (1) one shares the outside world with other
      > > > > > > people, while the inside world, so far, is only accessible to oneself.
      > > > > > > (2) since we have different sensors and operators on these two worlds,
      > > > > > > we have developed different concepts to talk about them. Because of
      > > > > > > these differences, consciousness seems mystical to many people, and
      > > > > > > indeed we haven't achieved it in AI yet. I believe it can be done,
      > > > > > > though it must be based on certain more fundamental mechanisms, like
      > > > > > > general-purpose reasoning and learning on both declarative and
      > > > > > > procedural knowledge. For this reason, I never talked about this topic
      > > > > > > in public before --- I feel that it is hard to be constructive and to
      > > > > > > give enough details in the discussion.
      > > > > > >
      > > > > > > Because of the above beliefs, I can agree with most of your theory as in
      > > > > > > 4-3, so I won't repeat them, but just mention the points that I don't
      > > > > > > fully agree with:
      > > > > > >
      > > > > > > (1) The four constituents: /"recent memories"/, /"serial processes"/,
      > > > > > > /"symbolic depiction"/, and /"self models"/ --- I agree that they are all
      > > > > > > highly relevant to "consciousness", but to me, they are neither
      > > > > > > independent nor exclusive. For example, a "self model" may have many
      > > > > > > recent memories involved, and use a verbal description. If you mean that
      > > > > > > "consciousness is mainly about serial processes on a system's beliefs
      > > > > > > about its own internal activities, which is mostly based on its recent
      > > > > > > memory, and represented using concepts", then I'd fully agree, though
      > > > > > > the above notions are still not forming a partition.
      > > > > > >
      > > > > > > (2) You said "a brain cannot think about what it is thinking /right
      > > > > > > now/", but how about for a mind to "feel" what it is thinking right now?
      > > > > > > Isn't it part (or the starting point) of the thinking process? I agree
      > > > > > > that most thinking-about-thinking is based on recent memory, but your
      > > > > > > conclusion still sounds too strong to me.
      > > > > > >
      > > > > > > (3) You said "/They use abstract, symbolic, or verbal descriptions/".
      > > > > > > These three adjectives mean very different things to me. The conscious
      > > > > > > mental activities are indeed /abstract/ in the sense that they don't
      > > > > > > directly correspond to the details of neural activities, but they are
      > > > > > > not "symbolic" in the usual sense that they get their meaning via an
      > > > > > > interpretation, and finally, I don't think all conscious mental
      > > > > > > activities can be verbally described, in the sense that we have words
      > > > > > > for them in our language.
      > > > > > >
      > > > > > > Thanks for sharing your ideas with us.
      > > > > > >
      > > > > > > Pei Wang
      > > > > > >
      > > > > > >
      > > > > > > Marvin Minsky wrote:
      > > > > > >
      > > > > > > > I agree with Eray's analysis, especially, in his
      > > > > > > > description of an attitude in which a person
      > > > > > > > holds:
      > > > > > > > But the following kind of reasoning I do not consider good enough.
      > > > > > > > - I have this thing called consciousness
      > > > > > > > - I don't know what it means
      > > > > > > > - Nobody else does, either
      > > > > > > > - Nobody seems to even know how to approach the problem
      > > > > > > > - So this thing called consciousness, while
      > > > > > > > obviously "real", cannot ever be explained,
      > > > > > > >
      > > > > > > > So, instead of praising one's lack of
      > > > > > > > imagination, take a look at
      > > > > > > > http://web.media.mit.edu/~minsky/E4/eb4.html
      > > > > > > > which is a theory of what people mean by
      > > > > > > > consciousness. I would especially like comments
      > > > > > > > on the model in section §4-3 of how a brain might
      > > > > > > > come to say things like, "right now I am
      > > > > > > > conscious of ..."
      > > > > > > >
      > > > > > > >
      > > > > > > > The text is already in the process of
      > > > > > > > publication, so I would appreciate comments soon.
      > > > > > > >
      > > > > > > > (A lot of the book is based on discussions
      > > > > > > > carried on in the comp.ai.philosophy forum,
      > > > > > > > which deteriorated more recently because of the
      > > > > > > > strange revival of 'mindless behaviorism.')
      > > > > > > >
      > > > > > > > >Hi Robin,
      > > > > > > > >
      > > > > > > > >I think your post deserves a more detailed reply.
      > > > > > > > >
      > > > > > > > >--- In ai-philosophy@yahoogroups.com, robin <robin@B...> wrote:
      > > > > > > > >> Eray Ozkural wrote:
      > > > > > > > >> I don't think that using the word "mysterious" implies that
      > > > > > > > >> something will always be a mystery - it's just a stronger version
      > > > > > > > >> of "puzzling", often used to indicate that not only have we not
      > > > > > > > >> solved a problem, we have no idea of how we might go about solving it.
      > > > > > > > >
      > > > > > > > >I understand this sense of the word. As I said, lightning must have
      > > > > > > > >looked mysterious to men who knew nothing about electricity.
      > > > > > > > >
      > > > > > > > >> "Mystical" goes one step further in implying that the phenomenon
      > > > > > > > >> in question is not understandable through normal processes of
      > > > > > > > >> reasoning. Consciousness could fall into either of those two categories.
      > > > > > > > >
      > > > > > > > >In the above sense, the "mystery" of consciousness is no problem. It
      > > > > > > > >is what we are trying to unravel, in a sense.
      > > > > > > > >
      > > > > > > > >In the sense you describe, the adjective "mystical" seems problematic.
      > > > > > > > >It is prevalent in theological discourse, but I honestly cannot see
      > > > > > > > >how we might want to use it in a philosophical discussion. If a
      > > > > > > > >philosopher is making such a claim, e.g. that consciousness is
      > > > > > > > >inexplicable, he ought to give an argument. Asserting such is
      > > > > > > > >insufficient. Searle gives an argument for what might be regarded as
      > > > > > > > >a similar position. So, I'd at least demand that one gives an
      > > > > > > > >argument or refers to specific arguments.
      > > > > > > > >
      > > > > > > > >Citing a position is often helpful. One might say things like: I
      > > > > > > > >think epiphenomenalism holds, or anomalous monism, or emergentism
      > > > > > > > >holds, etc.
      > > > > > > > >
      > > > > > > > >But the following kind of reasoning I do not consider good enough.
      > > > > > > > > - I have this thing called consciousness
      > > > > > > > > - I don't know what it means
      > > > > > > > > - Nobody else does, either
      > > > > > > > > - Nobody seems to even know how to approach the problem
      > > > > > > > > - So this thing called consciousness, while obviously "real",
      > > > > > > > >cannot ever be explained; it's in the same knowledge category as
      > > > > > > > >things that do not exist (like God) and cannot be explained (e.g.
      > > > > > > > >you have to take it on faith or it's incomprehensible)
      > > > > > > > >
      > > > > > > > >I think this is bad logical reasoning; the premises do not take us
      > > > > > > > >to the supposed conclusion. Please remember, it took thousands of
      > > > > > > > >years before we could have astronomy or physics that had the kind of
      > > > > > > > >explanatory power we take for granted today.
      > > > > > > > >
      > > > > > > > >Is not there a tenable non-reductionism? Maybe there is, but even if
      > > > > > > > >that is the case (which I seriously doubt) then it is nothing like
      > > > > > > > >the above kind of false reasoning.
      > > > > > > > >
      > > > > > > > >This is the same kind of insufficient reasoning that led Descartes
      > > > > > > > >to consider substances that do not extend in space. We should
      > > > > > > > >observe the higher standards of logic (e.g. method of philosophy)
      > > > > > > > >that succeed Descartes.
      > > > > > > > >
      > > > > > > > >> There is also the question of what it would mean to understand or
      > > > > > > > >> explain consciousness.
      > > > > > > > >
      > > > > > > > >Of course. So, what do you suggest it means?
      > > > > > > > >
      > > > > > > > >Regards,
      > > > > > > > >
      > > > > > > > >--
      > > > > > > > >Eray Ozkural
    • Paul Bramscher
      Message 100 of 100, Feb 10, 2005
        feedbackdroids wrote:

        >
        > --- In ai-philosophy@yahoogroups.com, Paul Bramscher <brams006@u...> wrote:
        >
        >
        > > Yet if we instruct the robot to "go to the east face of the
        > > largest pyramid" it'll fail twice: it fails because without an internal
        > > compass (nature itself will provide the cardinality here) or at least
        > > one reference point (provided by a "mentor") it cannot locate the proper
        > > face. Secondly, it fails because although it has excellent ability to
        > > recognize and categorize patterns, there is nothing about the geometry
        > > of a pyramid which evokes the word "pyramid." And no amount of
        > > self-categorization or self-learning can establish a compass coordinate or
        > > associate the word "pyramid" with the objects it is able to
        > > self-categorize -- however successfully it may self-categorize.
        > >
        >
        >
        > Similar problem for any non-English speaker.

        That's right. In fact, this problem exists independently of the
        language the instruction is given in. Ideally, the "seed knowledge
        literacy" would be categorized against a cross-lingual dictionary, so
        that once the robot had a network of meaning established between word X
        and data pattern Y, we could substitute the commands in any other
        language for X.

        (So while the problem is language independent, perhaps the solution
        could be language independent as well.)
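
        Something like this hypothetical sketch is what I have in mind
        (all entries illustrative): ground one canonical symbol to the
        data pattern once, then let the cross-lingual dictionary route any
        language's word onto that symbol.

            # Ground a canonical symbol to a data pattern once, then let a
            # cross-lingual dictionary map any language's word onto it.

            crosslingual = {                      # surface word -> symbol
                "pyramid":  "PYRAMID",            # English
                "pyramide": "PYRAMID",            # French / German
                "piramide": "PYRAMID",            # Spanish / Italian
            }

            grounded = {"PYRAMID": "pattern-Y"}   # symbol -> data pattern

            def interpret(word):
                """Resolve a command word in any language to the pattern."""
                return grounded.get(crosslingual.get(word.lower()), "ungrounded")

            print(interpret("Pyramide"))   # pattern-Y: same grounding
            print(interpret("ziggurat"))   # ungrounded: no lexicon entry yet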