
Formalizing World Events

  • red_cell_op
    Jul 4 2:36 PM
      Young: > Actually, film works at 24 frames per second, and TV at
      > 25/sec. Your eyes, however, work at about 1000 fps. Your brain is
      > cognizant of these pictures in a vastly different way, so that a
      > "frames per second" rating becomes meaningless.

      Borgia: I will re-check my notes about 25 shots/sec (of the human
      eye), but this comparison of shots per second is moot; it is only
      tangential to the point I was trying to make, which is: isolating
      each frame and executing algorithms on it in _real time_. I have
      forgotten the name of the super-fast cameras that can take thousands
      of shots per second, much faster, and with better resolution, than
      the human eye. The rate is not as important as the computing power
      that will process the _stills_ in real time. How the brain processes
      this visual data is irrelevant in my view; we don't have to simulate
      the brain. Let us just do something that works, brain imitation or
      no brain imitation, regardless.
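      The frame-by-frame idea above can be sketched in Python. This is a
      toy illustration, not a real vision pipeline: the camera feed is
      simulated with small synthetic frames, and the per-frame "algorithm"
      is just average brightness. All names here are invented for the
      example; a real system would pull frames from a camera API instead.

```python
from typing import Callable, Iterable, List

Frame = List[List[int]]  # a grayscale frame as a 2-D grid of pixel values

def synthetic_frames(n: int, size: int = 4) -> Iterable[Frame]:
    """Stand-in for a high-speed camera: yield n small synthetic frames."""
    for i in range(n):
        yield [[(i + r + c) % 256 for c in range(size)] for r in range(size)]

def mean_brightness(frame: Frame) -> float:
    """A trivial per-frame algorithm: average pixel intensity."""
    pixels = [p for row in frame for p in row]
    return sum(pixels) / len(pixels)

def process_stream(frames: Iterable[Frame],
                   algorithm: Callable[[Frame], float]) -> List[float]:
    """Isolate each frame as it arrives and run the algorithm on it."""
    return [algorithm(f) for f in frames]

results = process_stream(synthetic_frames(3), mean_brightness)
print(results)  # [3.0, 4.0, 5.0]
```

      The point of the sketch is only the loop structure: each still is
      handled independently, so throughput depends on the computing power
      applied per frame, not on matching the eye's frame rate.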

      > > On _depth perception_ ...
      > Maybe the lasers that land surveyors use to judge distance would be
      > easier and more accurate than running an algorithm on an image.

      Borgia: The _depth perception_ part of the post _the search for om_
      has nothing to do with "running an algorithm on an image". I do not
      recall typing "running an algorithm on an image to determine depth
      perception"; what I recall doing was proposing two approaches from
      physics for depth perception. Thanks for the thought, though.

      > > If a robot is ordered to find an exit out of a room;
      > > and the robot circles about looking for a door or
      > > opening and happens to find none--the room is walled
      > > completely. How does the robot improvise on its own?
      > > E.g. Break down a wall.
      Young: > A good question. I guess that it would either be programmed
      > to destroy objects in its way, or it would have to learn to do
      > that by itself. How, though, I'm not sure!

      Borgia: From my own personal experience, I think creativity =
      integration. You see an apple falling and then integrate that visual
      data with other data to get Newton's gravitation. Think about how we
      CREATIVELY solve problems; it seems to happen in one and only one
      way: integrating relevant but seemingly disparate data into a new
      synthesis. Can a robot, on seeing a woman slicing an apple on a
      street corner, break down this visual input into some version of
      this crude formalization: "sharp object (of certain characteristics
      y) + force + an object (of certain characteristics x) --> a split x
      + object y"? Then, how can one write algorithms that will attempt to
      match this _solution pattern_ with a _problem pattern_ (e.g. getting
      out of a walled room)? To find a solution to something, the problem
      first has to be defined. Is an algorithm capable of partially
      formalizing environmental events feasible?
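      One way to make the apple-slicing formalization concrete is a small
      record type that a matching algorithm could search over. A minimal
      Python sketch, with field names and property strings invented purely
      for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EventPattern:
    """Crude formalization of an observed event:
    sharp object (y) + force + object (x) --> a split x + object y."""
    instrument: str   # e.g. "sharp edge" (the knife, y)
    action: str       # e.g. "apply force"
    target: str       # e.g. "solid object" (the apple, x)
    outcome: str      # e.g. "target split"

# The apple-slicing street scene, formalized:
slicing = EventPattern(instrument="sharp edge",
                       action="apply force",
                       target="solid object",
                       outcome="target split")

def matches_goal(pattern: EventPattern, desired_outcome: str) -> bool:
    """Match a stored solution pattern against a problem's desired outcome."""
    return pattern.outcome == desired_outcome

print(matches_goal(slicing, "target split"))  # True
```

      Here the "solution pattern" is just the stored record, and matching
      it to a "problem pattern" reduces to comparing the stored outcome
      with the outcome the problem demands; richer matching would also
      compare the target's characteristics.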
      After the problem of the walled room has been formalized:
      (a) Get from point _A_ (inside the walled enclosure) to point _B_
      (outside the walled enclosure).
      (b) How? Traverse a sectional area without walls.
      (c) There is no such area.
      [The problem has been determined: no such area. Any "~X" that
      frustrates acquiring an objective "Z" is a problem.]

      Can the robot then formalize this scenario into this format:
      X (passage) + Y (robot, moving) = Z (objective: get from A to B)?

      Since the room is walled, there is no X, only ~X, "~X" being
      "walls".

      Can the robot then proceed to this stage: ~X + Y = ~Z? This will
      now be the _state of events_ in the robot's CPU. There are logically
      two options: one, opt for ~Z and not leave the room (that would be
      against its instruction, so the robot can't do that); or two,
      eliminate ~X, which fits its instructions.
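      The two-option decision above can be written as a tiny decision
      function. This is a sketch of the logic only (the names and return
      strings are made up for the example), not a claim about how a real
      controller would be built:

```python
def choose_option(passage_exists: bool, instructed_to_leave: bool) -> str:
    """State of events: X (passage) + Y (moving robot) = Z (get A -> B).
    With no passage (~X), the result is ~Z; the robot must either
    accept ~Z or eliminate ~X by creating a passage."""
    if passage_exists:
        return "traverse passage"   # X + Y = Z
    if instructed_to_leave:
        return "eliminate ~X"       # remove the obstacle, then achieve Z
    return "accept ~Z"              # no instruction forces a way out

print(choose_option(passage_exists=False, instructed_to_leave=True))
# eliminate ~X
```

      Encoding the state this way makes the robot's only admissible move
      explicit: since accepting ~Z contradicts its instruction, the branch
      that eliminates ~X is the one it must take.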
      How do you eliminate ~X?
      Scan ~X (the walls); from the scanning the robot will gather some
      scientific data: the physical and chemical properties of the walls.
      Then, first priority: (i) how do you eliminate walls, or things
      bearing close resemblance to the physical/chemical properties of
      walls as determined by the robot's scans?
      Second priority: (ii) how do you eliminate any object?

      In order for the robot to answer these questions by itself,
      algorithms then prompt it to conduct a memory search for visual data
      involving scenes of any form of separation of physical objects:
      cutting, slicing, dicing, twisting, cracking, burning, chemical
      dissolution, etcetera, in a degree relevantly close to the
      physical/chemical properties of the walls, digressing away from that
      point. Then, formalize these scenes into an X + Y = Z, partly using
      _cause and effect_/physics, and attempt to implement the
      _formalization_ on the walls so as to create a passageway out of the
      walled room.
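      The memory search described above can be sketched as ranking
      remembered separation methods by how closely the materials they
      worked on match the scanned wall properties. The memory entries and
      property tags below are entirely invented stand-ins for whatever the
      robot's scans and stored scenes would actually contain:

```python
from typing import List, Set, Tuple

# Remembered separation scenes: (method, material properties it applied to)
memory: List[Tuple[str, Set[str]]] = [
    ("slicing",    {"soft", "organic"}),
    ("cracking",   {"hard", "brittle"}),
    ("burning",    {"flammable"}),
    ("dissolving", {"soluble"}),
]

def rank_methods(wall_properties: Set[str],
                 scenes: List[Tuple[str, Set[str]]]) -> List[Tuple[str, Set[str]]]:
    """Rank remembered separation methods by overlap with the scanned
    properties of the walls, closest match first."""
    return sorted(scenes,
                  key=lambda scene: len(scene[1] & wall_properties),
                  reverse=True)

best = rank_methods({"hard", "brittle"}, memory)[0][0]
print(best)  # cracking
```

      The top-ranked method would then be formalized into an X + Y = Z
      scheme (force + wall = split wall) and attempted on the walls; the
      ranking also gives the "digressing away" order in which to try
      fallbacks when the closest match fails.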

      Does this make any sense?

      --Borgia, C.