
Second Order Metaprogramming and 'hasIntrinsic'

  • david_dodds_2001
    Message 1 of 1, Feb 5, 2008
      (previously Copyright David Dodds 2008)

      Second Order Metaprogramming and 'hasIntrinsic'

      (Referring to my previous post.) --- Bob is a person (actually, Bob
      is the name of a Person, so it is more accurate to say that Bob is
      a name: Bob is a string-value instance of name). Exactly the same
      thing could be said about Mary. The only thing that makes the two
      not exactly identical is that those two strings do not have exactly
      the same collating order.
      Perhaps one might want to have a predicate such as 'hasIntrinsic'.

      Bob 'hasIntrinsic' a temperature, a volume/morphology/topography, a
      gender, etc. Bob may be a student or a professor depending on
      whether he has a PhD, but he always has SOME temperature, even if
      he is dead, and until he returns to dust (and even then) Bob has a
      volume/morphology/topography (aka size).
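
      As a minimal sketch of what such a predicate could look like, here
      it is in Python with the rdflib library. The ex: namespace and
      every term in it (hasIntrinsic, hasRole, Person, and so on) are
      invented for illustration, not drawn from any published ontology.

        from rdflib import Graph, Literal, Namespace, RDF, RDFS

        EX = Namespace("http://example.org/ont#")  # hypothetical namespace
        g = Graph()
        g.bind("ex", EX)

        # Declare hasIntrinsic as a property: things a Person always
        # has, regardless of role (student, professor) or even of
        # being alive.
        g.add((EX.hasIntrinsic, RDF.type, RDF.Property))
        g.add((EX.hasIntrinsic, RDFS.domain, EX.Person))

        # Bob is a Person; his name is a string value, and the only
        # difference between the strings "Bob" and "Mary" is collating
        # order.
        g.add((EX.bob, RDF.type, EX.Person))
        g.add((EX.bob, EX.name, Literal("Bob")))
        g.add((EX.bob, EX.hasIntrinsic, EX.temperature))
        g.add((EX.bob, EX.hasIntrinsic, EX.volumeMorphologyTopography))
        g.add((EX.bob, EX.hasIntrinsic, EX.gender))

        # Role-dependent facts go on a separate, mutable predicate;
        # intrinsics never come and go the way roles do.
        g.add((EX.bob, EX.hasRole, EX.Student))

        print(g.serialize(format="turtle"))

      The point of splitting hasIntrinsic from hasRole is exactly the
      Bob-the-student-or-professor distinction above: a reasoner can
      rely on intrinsics being present for every Person, living or dead.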

      Because we are people WE know that Bob and Mary are (typically)
      different in particular ways; the content of this knowledge is
      metadata and is 'commonsense' or 'background knowledge', just as we
      all know / expect that unsupported objects 'fall', and that all
      'living things' eventually cease 'living'. As yet computers do not
      take instrument readings, such as TV camera images, and place
      'knowledge', obtained / derived from such readings of the world,
      into knowledgebases and databases by (means of) themselves (the
      computer selves). Right now computers require humans to interpret
      the instrument outputs and produce the knowledge, which the humans
      then place into knowledgebases such as ontologies. What will the
      world be like once we have figured out how to program computers to
      perform these actions consistently and adequately (for)
      themselves?!

      Parts of this are not so far away from reality already. For
      example, that collection of fancy beach sand in a box (aka
      computer) can already take a streamed TV video signal and, amidst
      all the stuff "on" the screen, pick out a particular face in a
      crowd. Granted, this is not a conscious act on the part of the
      computer, but the point is that the technology is here which allows
      us to describe the visual morphology of things (faces, for example)
      and how to locate / differentiate them in a crowded natural scene.
      By associating ontologies with these programs one can add semantics
      ("meaning") to the things in the picture and to the meaning of the
      scene the picture "shows", such as "the crowd in Grand Central
      Station milling around...". If someone mills in a way deviating
      from the predefined milling pattern, then robo-dobermans can be
      dropped out of the ceiling to deal with it. No deviated preversions
      are permitted in GCS.
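
      For a rough sketch of that face-in-a-crowd capability, here is the
      stock Haar-cascade detector that ships with OpenCV, in Python. The
      video filename is a placeholder, and the ontology hook is only a
      comment, since wiring real semantics to detections is exactly the
      part still open.

        import cv2

        # OpenCV bundles a pretrained frontal-face Haar cascade.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        cap = cv2.VideoCapture("crowd_scene.mp4")  # hypothetical stream
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Each detection is an (x, y, w, h) box: a face picked out
            # amid all the other stuff "on" the screen.
            for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
                cv2.rectangle(frame, (x, y), (x + w, y + h),
                              (0, 255, 0), 2)
                # An ontology lookup could attach meaning here, e.g.
                # "face at (x, y) is part of the crowd milling in GCS".
            cv2.imshow("faces", frame)
            if cv2.waitKey(1) == 27:  # Esc to quit
                break
        cap.release()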

      Using programs like "flocking" and "schooling" algorithms to model
      group traffic dynamics, it is possible to use such models as
      descriptions of 'standard' or 'typical' (transportation) behaviour.
      If there is a reasonably finite set of 'milling around' patterns
      which can be decently described via algorithms (or otherwise), then
      these could be used to evaluate scenes such as GCS crowds,
      detecting 'unusual' or 'atypical' 'milling behaviour', such as
      putting a briefcase or other package onto the floor and then
      walking or otherwise moving away from that location without
      bringing the package with one. Of course the computer would also
      have to have some (perhaps algorithmic) 'understanding' of throwing
      / dropping / putting garbage into a garbage receptacle (or even,
      gurk, onto the floor, 'the ground'). It would also help those
      ontological systems if something like OASIS' HML (Human Markup
      Language) were included, for it provides a focus for interpreted
      actions to be compared against. Equal vs not equal is pretty much
      all the computer can do well, similar(ity) being notoriously
      difficult to "explain to a computer" / program. In this way, using
      HML terms such as "deceit" and associating the term with
      transportation patterns (milling) and "object
      caretaking/stewardship" (abandonment/retention), the computer /
      camera can be operated in 'scene serenity failure mode'.
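
      Here is a toy version of that abandonment rule in Python, assuming
      tracked positions already exist; the tracker itself, the pairing of
      people to packages, the track-record format, and every threshold
      are invented for illustration. Notice how it reduces 'atypical
      milling' to explicit numeric comparisons: as said above, equal vs
      not equal is what the computer does well, and genuine similarity is
      the hard part.

        import math

        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        def abandoned_packages(person_pos, package_pos, pairs,
                               radius=5.0, max_stationary_s=30, dt=1.0):
            """Flag packages whose owner moved more than `radius` metres
            away while the package stayed put longer than
            `max_stationary_s` seconds. All thresholds are made-up
            illustration values."""
            flagged = []
            for person_id, package_id in pairs:
                p_track = person_pos[person_id]   # list of (x, y) per tick
                b_track = package_pos[package_id]
                stationary = sum(
                    1 for i in range(1, len(b_track))
                    if dist(b_track[i], b_track[i - 1]) < 0.1)
                separated = dist(p_track[-1], b_track[-1]) > radius
                if separated and stationary * dt > max_stationary_s:
                    flagged.append(package_id)
            return flagged

        # A person walks away while the briefcase stays on the floor.
        people = {"p1": [(0, 0), (1, 0), (6, 0)]}
        packages = {"b1": [(0, 0), (0, 0), (0, 0)]}
        print(abandoned_packages(people, packages, [("p1", "b1")],
                                 radius=5.0, max_stationary_s=1, dt=1.0))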

      In fact Edward de Bono, in his groundbreaking book "Atlas of
      Management Thinking", clearly has notions in mind (similar to that
      "human taxonomy") in the "working" of his diagrams. The book
      reader's cognitive system must be monitoring / watching for a
      particular set of notions depicted in the diagrams of the book. An
      example of one such notion is motion (*), and that is suggested /
      depicted in a cartoon drawing by having two or three curved lines
      pointing in the direction of motion "behind" the object in the
      cartoon which is "moving". A thrown baseball in the air has those
      curved lines "behind" it (and, yes, sometimes in front too), and we
      viewers (of the cartoon / drawing) cognize those lines as (meaning)
      that the ball is moving, and not by coincidence in the direction
      the curves are pointing. Often this cognition on our part is mostly
      subconscious, but the cues are in the picture to trigger it. (How
      often does one examine (the content / components of) one's own
      consciousness?)

      * In de Bono's book, AoMT, he uses a dashed line with an arrowhead
      to suggest motion along the depicted path. This form of motion
      depiction is of a slightly more abstract nature than the wavy lines
      behind a moving object. The wavy lines are reminiscent of air waves
      behind moving things, which can be seen on bright hot days as
      shimmer, for example. The dashed or dotted line does not evoke air
      waves but rather the higher-level or more abstract visual
      (plan-view) depiction of a succession of locations (points) or
      placements. Sometimes blurring is used to represent motion also,
      such as showing fan blades in one position and (complete or
      partial) outlines of the blades in another position, and/or curved
      lines in between the stationary blades.

      That we use spatial metaphors in daily life and aren't even
      conscious of the (cognitive) metaphor also suggests that much of
      our cognitive life (estimating / recognizing / interpreting, and
      especially initial recollection, that "finding something in memory"
      which is then passed up to the conscious for "realization")
      actually occurs in our subconscious. Witness walking along a
      non-empty sidewalk and (you) not crashing into other people or
      objects there. You are not aware of all the distance, time, and
      occupancy estimates occurring and re-occurring. You don't even have
      to know where everyone / everything actually is; only an analog
      approximation is known (an estimate from subconscious visual
      interpretation, handed to our awareness as a (cognitive, not
      emotional) 'feeling' or 'sense': a 'sense' of the distances). All
      the while you do not see numbers and symbols superimposed on your
      visual input, as in The Terminator, nor do you deliberate on moving
      along a non-clear sidewalk; it just 'happens', 'automatically',
      'without thinking (about it)'.
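
      One way to caricature those continuously refreshed estimates in
      code is a repulsive potential field, where each obstacle
      contributes a summed 'sense' of nearness rather than an exact
      coordinate. A minimal sketch in Python; the weights and positions
      are arbitrary illustration values, not a claim about how cognition
      actually works.

        import math

        def steering(walker, obstacles, goal, repulsion=0.5):
            """Blend attraction toward the goal with repulsion from each
            obstacle. The walker never 'knows' exact positions, only the
            summed analog push/pull."""
            gx, gy = goal[0] - walker[0], goal[1] - walker[1]
            norm = math.hypot(gx, gy) or 1.0
            sx, sy = gx / norm, gy / norm   # unit pull toward the goal
            for ox, oy in obstacles:
                dx, dy = walker[0] - ox, walker[1] - oy
                d = math.hypot(dx, dy) or 0.01
                # Repulsion falls off with distance: near things 'feel'
                # strongly present, far things barely register.
                sx += repulsion * dx / d**3
                sy += repulsion * dy / d**3
            return sx, sy

        # One step along a crowded sidewalk: the result is a direction,
        # not a map of where everyone actually is.
        print(steering((0.0, 0.0), [(1.0, 0.5), (2.0, -0.3)], (10.0, 0.0)))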

      In fact, it would be an intellectually marvelous feat if computers
      could "look at" a cartoon / drawing and "understand" it. Perhaps the
      computer's understanding of a page in "Mad Magazine" would be even
      more entertaining than the page itself.