
Awareness

  • david_dodds_2001
    Message 1 of 1, Sep 9, 2006
      (previously copyrighted)
      My Extreme Markup Languages 2006 presentation was titled
      "Metaprogramming, Ontologies and Still Nobody's Home". The important
      points from that presentation / audience panel will be presented in
      this group posting, starting with 'awareness'.

      Awareness: what is it? What's it good for? Here we discuss awareness
      in a machine. We are not talking about awareness as it exists in a
      human, nor even in any known animal. The awareness discussed here is
      metaphorical; it is not human awareness. Many people use the term
      awareness as a synonym for consciousness. Consciousness, as
      experienced by humans, is not discussed here, and what is discussed
      here is not offered as a model of consciousness in humans.

      I have presented conference papers on computer software technologies
      which implement some of the prime systems needed to support
      metaphorical / analogical processing. I have also presented material
      citing information sources (publications) which describe the general
      belief (Jay Ingram, "Theatre of the Mind") that much of the mental
      processing humans do on a daily basis occurs, by and large, in the
      pre-conscious (some people, incorrectly, use the term subconscious
      for the same processing), and only now and again do the RESULTS of
      all this myriad of processing pop into ("appear" in) consciousness.
      See also Jeff Hawkins's book "On Intelligence", and "Scripts, Plans,
      Goals and Understanding" and other books by Roger Schank, which
      discuss his technology of "Reminding".

      Awareness, the type that does not include consciousness, is very
      valuable in humans. An example of such non-conscious awareness is
      that which occurs in the human visual system. Detailed vision only
      occurs in a rather small cone of coverage oriented directly where
      the eye is pointing, extending only a few degrees to each side of
      straight ahead. The eye performs saccades that are not consciously
      controlled, flitting from one direction of gaze to another. This is
      necessary because the eye responds to change; if an image is kept
      constant (unchanging) on the retina it will automatically disappear.
      There are "rods" in the _periphery_ of the eye which are not colour
      sensitive, nor are they present in large enough numbers to form an
      image, but they are quite sensitive to light. If something moves in
      the input field out in this (visual) periphery we "automatically"
      (and quickly) turn our head and look in that general direction. This
      is a common phenomenon in all humans. It is important that the
      reader understand that at no time was there any kind of "image"
      experienced from out in this peripheral area, nor any kind of
      "light", "flash", or "pulse" (of light) _experienced_! The only
      thing that happens is a sudden "need" to turn one's head and point
      one's face in a certain direction. It is simply a feeling, a
      somewhat pressing one. [Jay Ingram in his recent book, and also
      Antonio Damasio in his books, talk about many such phenomena in
      humans.] One's head usually just darts to one side and a great deal
      of attention is placed upon gaining (conscious) information about
      what exists and what is happening in the new direction we now face.
      A similar thing may happen based on our hearing: some sound
      occurrence, or cessation, has occurred and our attention is strongly
      focussed in the direction of that change or event.

      While we never consciously "see" any image or light from the
      periphery, we do sometimes seem to consciously "hear" the sound that
      grabbed our attention. The almost inaudible snap of a twig in the
      forest (which seems to happen more frequently after the sun has gone
      down) catches our attention.

      Okay, okay, what does all this have to do with computers and
      computational metaphor? Well, now comes the discussion which makes
      the connections. We started off saying "the awareness discussed here
      is metaphorical." We saw how humans daily use obviously complex
      processing (in their brain) to detect things of interest in our
      visual realm even though we don't "see" them. A similar thing
      happens with hearing. The result is that a noticeable "tap on the
      shoulder" is delivered to consciousness, which then dutifully
      provides us with a (conscious) "experience" (of something). In the
      case of vision we discussed, no visual experience has yet occurred;
      all we experience is the need to use "full attention" to quickly get
      (new) information about our surroundings, particularly via vision.
      We also experience that our head is turning, but there is no memory
      of having consciously initiated that head turning, or of any reason
      to do so. We do experience a strong _sense_ of the _direction_ we
      should be attending to.

      Simply put, motion of an object causes a change in the light pattern
      of the (relative) field of vision it occurs in. Theatre marquee
      lights cause the sensation of apparent motion in this way. The
      processing which performs this "detection" is far more than a simple
      (UN*X style) "diff" of two rasters. That all of this processing
      "completes" below the level of consciousness is of interest. There
      is a constant maelstrom of "mental processing" going on "below the
      level of consciousness" (a qualitative and subjective METAPHOR).
      "We" (our "conscious selves"), the part which "experiences", are
      mostly ignorant and clued-out; we initiate strong _conscious_
      attention, but only as a result of "being tapped on the shoulder and
      handed a cue card" [by the "society of processes" which ceaselessly
      toil "beneath" our "awareness"].

      So what!? It is on the one hand cheering (for "computer people") that
      in fact the greater part of our functional intellect is pre-conscious
      (aka (incorrectly) "unconscious", or "below the level of (conscious)
      awareness.")

      Nobody yet knows what consciousness IS. So we can't program computers
      to "do" consciousness, we can only program computers to perform
      (non-random) things which WE already understand. Consciousness is not
      required for most of the intelligent behaviour we do every day. This
      gives hope that the processing "below the level of consciousness" can
      be modelled in a computer, since the spectre of consciousness need not
      be included there.

      A first-order view, a single "alpha-level model" of awareness (the
      kind that does not include consciousness, like the un-"see"n visual
      objects mentioned earlier) is "detection", possibly with an
      associated action triggered by it.

      This is exactly the kind of awareness which a (military or business)
      _Situation Awareness_ system has / uses. It is an unconscious
      DETECTION of stuff, with subsequent "reporting" of it and possibly
      also associated triggered responses. Some database systems have
      "triggers" and, in the sense under discussion, have a primitive (if
      not confined) awareness. Databases and situation-awareness systems
      contain knowledge but do not contain "knowing". These systems do not
      "know what knowledge they have"; in knee-jerk stimulus-response
      fashion they can "report" the results of a query, which smites them
      soundly on the head from outside their system, but they have no
      "knowing" of their own content. That is because they are not
      situated. [Situatedness will be one of the threads of this group's
      discussion across time.]
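
      As a hedged illustration of this "detection plus triggered response"
      sense of awareness, here is a toy Python sketch of a store whose
      inserts fire triggers, in the spirit of a database trigger; the
      table name, row fields and handler are invented for illustration and
      are not from the presentation.

        # A toy "detection plus trigger" primitive: the system detects an
        # insert and fires an associated response, but it has no "knowing"
        # of what it holds. Names and fields are illustrative only.

        from typing import Callable, Dict, List

        class TriggerStore:
            """A tiny store whose inserts can fire triggers."""

            def __init__(self) -> None:
                self.rows: List[dict] = []
                self.triggers: Dict[str, List[Callable[[dict], None]]] = {}

            def on_insert(self, table: str, handler: Callable[[dict], None]) -> None:
                self.triggers.setdefault(table, []).append(handler)

            def insert(self, table: str, row: dict) -> None:
                self.rows.append({"table": table, **row})
                for handler in self.triggers.get(table, []):  # detection -> response
                    handler(row)

        if __name__ == "__main__":
            store = TriggerStore()
            store.on_insert("contacts", lambda row: print("ALERT: new contact", row))
            store.insert("contacts", {"bearing": 270, "speed": 12})  # fires the trigger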

      Is an exploding stick of dynamite an example of a stimulus-response
      system? (Nope.) One doubts that there is any mental processing going
      on to cause a leg to "kick" (or "jerk") when the doctor whacks it
      with his little hammer. The "jerk" or "twitch" is initiated
      "involuntarily" by systems "below the level of consciousness" (some
      think way far below); no thinking, analyzing, or even consciousness
      is required for the twitch response. We all find it convenient that
      our conscious self experiences 'that the leg is twitching', and
      often we also "feel" (experience) the thunk (impact) of the hammer.

      I presented at Extreme Markup Languages 2006 a "starter model of
      awareness". Awareness, I said, may be implemented by having a
      planning program monitor the (set of) activation records of a system
      and perform plan detection with the contents it finds there.

      (For example, see: "A Formal Theory of Plan Recognition and its
      Implementation", Henry A. Kautz; "Using Plan Recognition in
      Human-Computer Collaboration", Neal Lesh, Charles Rich and Candace
      L. Sidner; "Decision-Theoretic Approach to Plan Recognition", Wenji
      Mao and Jonathan Gratch.)

      This is to say that a program "watches" what actions a system
      performs / emits, and also the effects produced by those actions,
      and (attempts to) determine which goals the system might be pursuing
      by (attempting to) determine which plans might be being used by the
      system. Individual actions, and sequences of them, as located via
      the activation record, are members of various plans. The awareness
      so obtained, then, consists of (hypothesized and recognized) goals,
      plans and activities / actions. The actions convey "how knowledge"
      and the plans and goals provide "what knowledge". In SHRDLU-esque
      fashion the latter can be used to convey "why knowledge".

      In this way a system can not only detect what actions the overall
      system is executing, it can use inference to "hypothesize" which
      plans and goals "are being pursued" [yet another spatial metaphor].
      This means that a plan-detecting planner (program) can provide a
      _modicum_ of awareness as we believe it to be in humans, sans the
      "experience" aspect.

      By including metaphorical plans (like riding a motorcycle across the
      country instead of a horse) in this capability, the system is able
      to free itself to some (important) extent from the concreteness and
      circumscription of standard literal planners. Conceptual metaphors
      (black hole, event horizon, gravity well, yaddayadda) and
      metaphorical planning are another thread of this group.
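
      As a rough, hedged sketch of what a metaphorical plan might look
      like mechanically (the analogy mapping and plan steps below are
      invented for illustration, not taken from the presentation), a
      literal plan can be rewritten through an analogy mapping so the
      planner is not confined to the literal concepts:

        # A literal plan rewritten through an analogy mapping (horse ->
        # motorcycle). Mapping and plan steps are illustrative only.

        ANALOGY_MAP = {
            "feed_horse": "refuel",
            "saddle_up": "mount_and_start",
            "stable_horse": "park_and_lock",
        }

        def metaphorical_variant(plan_steps, mapping):
            """Rewrite each step of a literal plan through the analogy mapping."""
            return [mapping.get(step, step) for step in plan_steps]

        if __name__ == "__main__":
            literal_plan = ["feed_horse", "saddle_up", "ride_west", "stable_horse"]
            print(metaphorical_variant(literal_plan, ANALOGY_MAP))
            # -> ['refuel', 'mount_and_start', 'ride_west', 'park_and_lock']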

      Other threads are contexts: what they are, how they are made, and
      how they define a situation. Contexts provide a viewpoint. Another
      thread is conceptual metaphors, from the points of view of George
      Lakoff, myself, Gentner / Forbus and others.

      Other topics covered in my Extreme Markup Languages 2006
      presentation were Posted Agenda, Blackboards, and an SVG program
      that displays a picture of an analog clock with the seconds hand
      sweeping out the seconds and the other hands showing local time.

      It is one thing to have a collection or library of executable
      actions, but it is another thing entirely to know that you have that
      collection and to know what you are able to do using those actions.
      Reflection (including SVG reflection) and Annotation are yet other
      threads that will be covered in this group.
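
      As a small, hedged illustration of that distinction (the action
      names and docstrings below are invented), here is a Python sketch
      that uses reflection to enumerate and describe its own executable
      actions, a first step beyond merely holding them:

        # Reflection lets a system enumerate and describe the actions it
        # holds, rather than merely holding them. Action names and
        # docstrings are invented for illustration.

        import inspect

        class ActionLibrary:
            def open_valve(self):
                """Open the coolant valve."""

            def close_valve(self):
                """Close the coolant valve."""

            def report_capabilities(self):
                """Use reflection to say what actions this library can perform."""
                actions = inspect.getmembers(self, predicate=inspect.ismethod)
                return {name: inspect.getdoc(method)
                        for name, method in actions
                        if not name.startswith("report")}

        if __name__ == "__main__":
            for name, doc in ActionLibrary().report_capabilities().items():
                print(f"{name}: {doc}")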