Computing Sense of Spatiality 5

Posted by david_dodds_2001, Apr 8, 2008 (Message 1 of 1)
      Computing Sense of Spatiality 5

      Copyright 2008 David Dodds


      Remember that we are looking at the concept "near". We previously
      looked at the calculation which computes the direct linear distance
      ("separation") between two points. If we use the two text strings'
      x,y origins we can compute a crisp value of their separation. In the
      bar chart illustration the two bar origins are 40 units apart along
      the x-axis, but because the bars are of different heights their
      origin-to-origin separation depends on those heights. Bars that are
      otherwise close together, quite "near" each other, will be measured
      as farther apart, "not so near", when one bar is "tall" and the
      other is "short", if we are using each rectangle's x,y origin as the
      "location" of the rectangle.

      Look at the bar chart illustration. Bar 3 is tall and Bar 5 is
      rather short. Only one 20-unit-wide bar separates them, so in that
      respect they are relatively close together, even though the
      origin-to-origin distance between the two bars numerically suggests
      that they are not close together at all. The problem lies in how the
      "near" value was computed: it took into account neither the
      morphology of the two (bar) objects nor their orientation with
      respect to the canvas origin axes.

      What we need to do is include both of those sets of information in the
      computation for "near".
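
      One plausible way to fold in the morphology (not necessarily the one
      we will use in the next episode) is to work with each bar's bounding
      box rather than its origin point and to measure the gap between
      their nearest edges. A sketch, again with invented heights:

          import math

          def bbox(x, y, width, height):
              """Axis-aligned bounding box of a rect as (xmin, ymin, xmax, ymax)."""
              return (x, y, x + width, y + height)

          def edge_gap(a, b):
              """Shortest distance between two boxes (0 if they touch or overlap)."""
              dx = max(b[0] - a[2], a[0] - b[2], 0.0)
              dy = max(b[1] - a[3], a[1] - b[3], 0.0)
              return math.hypot(dx, dy)

          # Bar 3 (tall) and Bar 5 (short), each 20 units wide, with one
          # 20-unit-wide bar between them.
          bar3 = bbox(100, 20, 20, 180)
          bar5 = bbox(140, 160, 20, 40)

          print(edge_gap(bar3, bar5))  # 20.0 -- one bar width, whatever the heights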

      In the next episode we will see the programming for how to do this.
      We will also see discussion of the use of meta-programming as a
      means of implementing concepts like "near" ("far"), "above",
      "below", "beside", "in front", "in back" ("behind"). These concepts
      are so often used by us as to seem obvious, and our familiarity
      through frequent use makes them seem nearly trivial. That is because
      these concepts are processed in our subconscious, and hence any
      complexities involved in processing them are not available to our
      consciousness or awareness. We are consciously able to deliberate on
      these concepts, but in their typical day-to-day use we only become
      conscious or aware of the results of this processing; we are not
      privy to the details or innards of the processing itself. In a sense
      our consciousness could be likened to a programmer who submits or
      runs parameterized functions on a processor. The innards and doings
      of the processor are opaque: few users have any knowledge of the
      logic circuitry the computer uses to do addition or other
      arithmetic, but we don't care, as long as it seems to be working OK.
      I have just described the average Windows user's mouse clicks and
      drag-and-drop.

      The user in this scenario runs functions and programs on his PC (the
      subconscious). He may not know the innards of the code associated
      with what he clicks on or drags, but he doesn't need to know, as
      long as activating it produces the results he desires. The Windows
      user has a mental map which associates the things he wants done with
      the things to click on and drag and drop. As long as his clicks and
      drags accomplish what he wanted done, he doesn't care what was
      actually being done or how. This is the (human) metaprogrammer.
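
      To give a flavour of what that might look like in code (a sketch of
      the idea only, not the programming promised for the next episode),
      imagine a table of relations where the "metaprogrammer" asks for a
      relation by name and never looks at how it is computed:

          # Hypothetical relation registry: callers invoke relations by name,
          # just as the user above clicks things without knowing their innards.
          RELATIONS = {}

          def relation(name):
              def register(fn):
                  RELATIONS[name] = fn
                  return fn
              return register

          @relation("leftOf")
          def left_of(a, b):
              # a and b are (xmin, ymin, xmax, ymax) bounding boxes
              return a[2] <= b[0]

          @relation("above")
          def above(a, b):
              # In SVG y grows downward, so "above" means smaller y values.
              return a[3] <= b[1]

          def ask(name, a, b):
              """The 'metaprogram': look up and run a relation by name."""
              return RELATIONS[name](a, b)

          print(ask("leftOf", (100, 20, 120, 200), (140, 160, 160, 200)))  # True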

      In a future episode we will see a model of the human metaprogrammer
      and how "knowing how" to determine how 'near' Bar 3 and Bar 5 are to
      each other is achieved programmatically. We don't have a TV camera
      we can point at the bar chart illustration, nor do we have a machine
      that can take the video data stream from such a camera and "see" the
      illustration. Seeing (and understanding what is seen) is not just
      photography.

      For the computer system to "perceive" near-ness and the other
      spatial relationships mentioned in a way which seems similar or
      familiar to the way we do, we need to have the computer do its
      processing such that the output or result is like our own
      perceptions. That is absolutely not to say that the computer's
      processing is performed in the same way as ours.
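
      One plausible way to get such a graded, perception-like result is to
      report "near" not as a yes/no answer but as a degree between 0 and
      1, in the spirit of a fuzzy membership function. The breakpoints
      below are invented for illustration:

          def nearness(gap, near_limit=30.0, far_limit=120.0):
              """Degree of 'near' in [0, 1]: 1 when the gap is small, falling
              linearly to 0 at far_limit. The limits are invented; in practice
              they would come from the surrounding context."""
              if gap <= near_limit:
                  return 1.0
              if gap >= far_limit:
                  return 0.0
              return (far_limit - gap) / (far_limit - near_limit)

          print(nearness(20.0))   # 1.0  -- Bar 3 and Bar 5, one bar width apart
          print(nearness(75.0))   # 0.5
          print(nearness(150.0))  # 0.0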

      When we look at the bar chart picture we have a) a mental sense of
      the picture, b) a linguistically mediated sense of the picture (i.e.
      'words'), and c) possibly some emotions that are triggered (or
      possibly not). The "a)" sense is the same kind of (mental) stuff
      that occurs when we look at the people in front of us as we walk
      along the sidewalk. Humans are capable of applying linguistic
      processing to this "a)" mental content, and again the details or
      innards of that linguification (turning non-linguistic mental
      content, i.e. thought, into linguistic content) are carried out in
      the subconscious. We are not at all aware of the details of the
      linguification processing itself; we are aware only of its
      (linguistic, i.e. 'words') output.

      In a coming episode we will see discussion of programming which
      processes spatial information in a way that results in analyses
      similar to how we perceive "near-ness", "above", etc. Part of the
      key to such processing is to determine and use "context" in the
      interpretation ("perception") of the spatial relations.
      Metaprogramming / planning is discussed as a means to organize and
      orchestrate the programs which perform the spatial analysis and the
      linguification of the results.
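
      As a small sketch of those two ingredients together (the scaling and
      the word boundaries are invented; the real treatment is left for
      that episode), context can be as simple as normalizing the gap by
      the width of the chart, and linguification as simple as mapping the
      resulting degree to words:

          def nearness_in_context(gap, chart_width):
              """Graded 'near' where the context is the chart width: the same
              gap reads as near on a wide chart and far on a narrow one."""
              ratio = gap / chart_width
              return max(0.0, min(1.0, 1.0 - 4.0 * ratio))  # invented scaling

          def linguify(degree):
              """Turn the numeric degree into words, roughly as we might say it."""
              if degree > 0.8:
                  return "very near"
              if degree > 0.5:
                  return "near"
              if degree > 0.2:
                  return "not far"
              return "far"

          print(linguify(nearness_in_context(20.0, 800.0)))  # "very near"
          print(linguify(nearness_in_context(20.0, 80.0)))   # "far"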

      Next time we also read about Daniel Weld's book on how one decides
      what to say (when talking or writing); about metaprogramming and
      reflection in programming; and a bit about AspectJ, an Aspect
      Oriented Programming language.