Re: the quest for om.

  • red_cell_op
    Message 1 of 14, Jul 2, 2002
      The human eye takes pictures at approximately 25 shots/second. If
      there is an algorithm that can be perfected to the point where the
      robot will be able to recognize a pencil against different
      backgrounds _A_, _B_, _C_, _D_, etcetera, then why can't the robot
      break a scene containing a moving target into multiple "still" or
      stagnant pictures? That is the reverse of what moviegoers see at
      the cinema: slow the scene down, then execute the algorithm on
      each still. In such a case, we would not have to worry much about
      developing special algorithms for detecting whether the target is
      moving or not. Does this make any sense?
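
      A minimal sketch of that loop, assuming OpenCV for frame capture
      and a hypothetical recognize_pencil() standing in for whatever
      per-image algorithm already exists:

import cv2  # OpenCV, assumed available for grabbing frames

def recognize_pencil(frame):
    # Placeholder: plug in the existing static-image recognizer here.
    return None

capture = cv2.VideoCapture(0)    # camera delivering roughly 25-30 stills/second
for _ in range(250):             # ~10 seconds of footage, frame by frame
    ok, frame = capture.read()   # one "stagnant" picture from the moving scene
    if not ok:
        break
    detection = recognize_pencil(frame)  # same algorithm used on backgrounds A, B, C...
capture.release()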

      (II) On _depth perception_, I hypothesize two approaches: (a) the
      robot calculates the time it takes light to bounce back from each
      isolated target in its visual field into its optic sensors; from
      that, the robot can estimate distances. (b) From physics, the lens
      formula gives the location of the _image_ formed by a lens, given
      the focal length and the distance of the viewed _object_. In
      practice, if one knows how far the _image_ is formed from the
      lens, one can use the lens equation to determine the distance of
      the _object_. (Caveat: how do you know the distance/position of
      the image beforehand?) Does this make any sense?
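
      Both proposals reduce to textbook formulas. A small sketch (the
      example numbers are made up):

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_time_of_flight(round_trip_seconds):
    # (a) Light goes out and bounces back, so the target sits at c * t / 2.
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def object_distance(focal_length_m, image_distance_m):
    # (b) Thin-lens equation 1/f = 1/d_o + 1/d_i, rearranged for the object
    #     distance d_o once the focal length and image distance are known.
    return 1.0 / (1.0 / focal_length_m - 1.0 / image_distance_m)

# distance_from_time_of_flight(20e-9)  -> about 3.0 m (20 ns round trip)
# object_distance(0.050, 0.052)        -> about 1.3 m (50 mm lens, image at 52 mm)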

      (III)
      X + Y = Z is equivalent to Z - Y = X. Of course!
      In my view: if a robot is ordered to find an exit out of a room,
      and the robot circles about looking for a door or opening and
      happens to find none--the room is walled completely--how does the
      robot improvise on its own? E.g. break down a wall. This kind of
      adaptive behaviour is what I think signifies
      intelligence--creativity.
      For the robot to accomplish this, it has to go through these
      stages:
      (a) Determine that "get out of the room" means getting from point
      _A_ (the inside of the room) to point _B_ (outside the walled
      enclosure).
      (b) Determine that in order to get from _A_ to _B_, it (the robot)
      needs to traverse a sectional area not occupied by the wall.
      (c) Determine that there is no such area.

      [From this, can the robot then make the deduction that the only
      way to get from _A_ to _B_ is a sectional removal of the wall?
      That is, Z - Y = X, where Y can be any point in the domain (Z) of
      approximately 6 feet in width.]

      (d) [Can the robot then initiate a search of its memory files to
      isolate data correlating with slicing, cutting, sectioning,
      etcetera--things that will aid in getting rid of the wall? A rough
      sketch of stages (a)-(d) follows below.]
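
      Very roughly, in code (the Robot interface and the keyword list
      are invented purely for illustration):

REMOVAL_KEYWORDS = {"slice", "cut", "section", "break", "drill"}

def plan_exit(robot):
    # (a) "Get out of the room" = get from point A (inside) to point B (outside).
    # (b)/(c) Look for a sectional area not occupied by the wall.
    passage = robot.find_free_passage()      # hypothetical sensor routine
    if passage is not None:
        return ("move_through", passage)
    # (d) No passage exists: search memory for data correlating with slicing,
    #     cutting, sectioning -- anything that removes a section of wall.
    candidates = [m for m in robot.memory
                  if REMOVAL_KEYWORDS & set(m["keywords"])]
    if candidates:
        return ("apply", candidates[0])      # Z - Y = X: remove a section Y
    return ("report_failure", None)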

      I KNOW ALL THIS TO BE VERY CRUDE, any ideas? Does this make any sense?

      --Borgia, C.

      [Common sense in a given population is merely the notion or idea
      that carries the most weight in opinion polls of that population.
      Feed this data into the robot and I will wager the robot will have
      common sense!]
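
      A toy rendering of that bracketed idea, with the poll data
      invented purely for illustration:

from collections import Counter

POLL_ANSWERS = {
    "is fire dangerous?": ["yes"] * 97 + ["no"] * 3,
    "can you walk through walls?": ["no"] * 99 + ["yes"],
}

def common_sense(question):
    # Return the answer carrying the most weight in the opinion poll.
    answers = POLL_ANSWERS.get(question)
    if not answers:
        return "unknown"
    return Counter(answers).most_common(1)[0][0]

# common_sense("can you walk through walls?") -> "no"
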
    • bobdeloyd
      Message 2 of 14, Jul 3, 2002
        --- In artificialintelligencegroup@y..., young_and_benevolent
        <no_reply@y...> wrote:
        > bobdeloyd wrote:
        > > Once we finally get a "real" artificial intelligence
        > > the rest is evolution....
        >
        > Perhaps all we have to do is mimic something relatively
        > simple, like a flatworm, and then set it off on a virtual
        > evolution. The question is: How do we make the virtual
        > reality that our AI lives in and evolves through?
        >
        > y&b

        Dear young_and_benevolent:
        Yes, there's that word "reality" again. Are we talking about
        intelligence or game theory? //bob
      • young_and_benevolent
        Message 3 of 14, Jul 4, 2002
          young_and_benevolent wrote:
          > How do we make the virtual reality that our
          > AI lives in and evolves through?

          bobdeloyd wrote:
          > Dear young_and_benevolent:
          > Yes, there's that word "reality" again. Are we talking about
          > intelligence or game theory? //bob

          An artificial intelligence would have to exist in some form of
          environment that it can act and react within. This is the
          "reality" that the AI is "aware" of. That space could be as
          simple as a command-line interface, or as complex as a
          three-dimensional interactive environment. (Probably the
          former, though.)

          So when I say "virtual reality" I mean the environment that the AI is
          aware of. I am using the word "aware" because we strive to build a
          cognitive and sentient intelligence, in the long run.

          y&b
        • young_and_benevolent
          Message 4 of 14, Jul 4, 2002
            "red_cell_op" <RED_CELLss@H...> wrote:
            > The human eye takes pictures at approximately 25shots/second.

            Actually, film works at 24 frames per second, and TV at 25/sec. Your
            eyes, however, work at about 1000 fps. Your brain is cognitive of
            these pictures in a vastly different way, so that a "frames per
            second" rating becomes meaningless.

            > On _depth perception_ ...

            Maybe the lasers that land surveyors use to judge distance would be
            easier and more accurate than running an algorithm on an image.

            > If a robot is ordered to find an exit out of a room;
            > and the robot circles about looking for a door or
            > opening and happens to find none--the room is walled
            > completely. How does the robot improvise on its own?
            > E.g. Break down a wall.

            A good question. I guess that it would either be programmed
            to destroy objects in its way, or it would have to learn to
            do that by itself. How, though, I'm not sure!


            y&b
          • red_cell_op
            Message 5 of 14, Jul 4, 2002
              http://64.4.22.250/cgi-bin/linkrd?_lang=EN&lah=891ef64a2e5597e1b61127d6d401df26&lat=1025809872&hm___action=http%3a%2f%2fcgi%2ezdnet%2ecom%2fslink%3f182325
            • red_cell_op
              Message 6 of 14, Jul 4, 2002
                Here's a quote from Robert O'Shea's webpage, at:
                http://psy.otago.ac.nz/r_oshea/

                I have so many interests in human visual perception it makes my eyes
                blur. Currently I'm working on projects on: Binocular rivalry
                (including spread of rivalry, the nature of rivalry suppression, and
                rivalry in split-brain observers); Early history of binocular
                vision; Interocular
                transfer of aftereffects; Meteorological optics (including why we
                perceive the bowl of the sky, perception of sun rays and their linear
                perspective, and effects of height on perceived eye level); Size and
                depth perception over large distances; Spatial frequency, blur,
                contrast, and luminance as depth cues; Colour constancy with
                reflected and emitted light; Kinetic depth effect; Perception of
                contrast and blur in the peripheral visual field; Colour spreading in
                the McCollough effect; and Vernier acuity with opposite-contrast and
                dichoptic figures.

                At the same page, you can also download the PDF of an article
                describing some of the work covered by the talk:

                O'Shea, R. P., & Corballis, P. M. (2001).
                Binocular rivalry between complex stimuli in split-brain observers.
                Brain and Mind, 2, 151-160.

                You can check out the webpage of O'Shea's collaborator, Paul
                Corballis, here:
                http://www.dartmouth.edu/~cogneuro/corballis.html

                Raj
                ---------- Forwarded message ----------
                Date: Tue, 02 Jul 2002 16:01:14 -0400
                From: George Alvarez <geoalvarez@w...>
                To: VisionLabTalks <geoalvarez@w...>
                Subject: Harvard Vision Lab Talk Wednesday: Robert P. O'Shea,
                Wednesday, July 3rd

                *************************************************************

                Harvard Vision Lab Seminar Series Announcement

                *************************************************************

                Binocular rivalry in split-brain observers

                Robert P. O'Shea & Paul M. Corballis
                Department of Psychology, University of Otago

                Wednesday, July 3rd
                12:00 noon
                Rm 765, William James Hall, Harvard University
                33 Kirkland Street, Cambridge

                A split-brain observer has had the corpus callosum, the major tract
                between the left and the right hemispheres, cut to relieve epilepsy.
                One can selectively stimulate the left or right hemisphere by
                presenting stimuli to the right or left of fixation respectively.
                Likewise, one can elicit responses from the left or right hemisphere
                by requiring the observer to press keys with the right or left hand
                respectively. On many tasks, these fascinating individuals behave as
                though each hemisphere is acting independently of the other. Are there
                differences in rivalry between the isolated hemispheres? To answer
                this, we have studied two split-brain observers, VP and JW.

                We first trained split-brain and intact-brain observers to respond to
                real alternations between nonrival stimuli by pressing keys with the
                ipsilateral hand. When we presented rival stimuli to the isolated
                hemispheres of split-brain observers, their key presses showed that
                their experiences of rivalry were similar to those of intact-brain
                observers. When we presented stimuli to the left hemisphere of the
                split-brain observers, they were also able to describe the chaotic
                appearance of rivalry alternations.

                Over many experiments, mainly on JW, we conclude that rivalry is
                essentially normal when processed in each isolated hemisphere,
                although periods of dominance are slower from the left hemisphere than
                from the right. Rivalry is normal from stimuli such as sinusoidal
                gratings, coloured faces, random dots, and Diaz-Caneja displays. The
                distributions of periods of dominance follow the classical gamma
                shape. The only case in which lacking a corpus callosum made a
                difference was that the synchronization of rivalry in two regions of
                the visual field did not happen when the two regions were processed by
                different hemispheres. We think that the longer rivalry periods from
                the left hemisphere reflect only its response bias. We conclude from
                the qualitative similarity of rivalry in the two isolated hemispheres
                that the rivalry mechanism is low in the visual system.

                Wednesday, July 3rd
                12:00 noon
                Rm 765, William James Hall, Harvard University
                33 Kirkland Street, Cambridge
              • red_cell_op
                Message 7 of 14, Jul 4, 2002
                  http://64.4.22.250/cgi-bin/linkrd?_lang=EN&lah=74d76f6202fc28863821a712ea0cd4a7&lat=1025813685&hm___action=http%3a%2f%2fcgi%2ezdnet%2ecom%2fslink%3f182325
                • red_cell_op
                  Message 8 of 14, Jul 4, 2002
                    Young:
                    > Actually, film works at 24 frames per second, and TV at 25/sec.
                    > Your eyes, however, work at about 1000 fps. Your brain is
                    > cognitive of these pictures in a vastly different way, so that
                    > a "frames per second" rating becomes meaningless.

                    Borgia: I will re-check my notes about 25 shots/sec (for the
                    human eye), but this comparison of shots per second is moot; it
                    is only tangent to the point I was trying to make, which is:
                    isolating each frame and executing algorithms on it in _real
                    time_. I have forgotten the name of the super-fast cameras that
                    can take thousands of shots per second--much faster, and with
                    better resolution, than the human eye. The rate is not as
                    important as the computing power that will process the _stills_
                    in real time. How the brain processes this visual data is
                    irrelevant in my view--we don't have to simulate the brain. Just
                    let us do something that works, brain-imitation or no
                    brain-imitation.

                    > > On _depth perception_ ...
                    >
                    > Maybe the lasers that land surveyors use to judge distance would be
                    > easier and more accurate than running an algorithm on an image.

                    Borgia: The _depth perception_ part of the post _the quest for
                    om_ has nothing to do with "running an algorithm on an image". I
                    do not recall typing "run an algorithm on an image to determine
                    depth"; what I recall doing was proposing two approaches from
                    physics for depth perception. Thanks for the thought, though.

                    > > If a robot is ordered to find an exit out of a room;
                    > > and the robot circles about looking for a door or
                    > > opening and happens to find none--the room is walled
                    > > completely. How does the robot improvise on its own?
                    > > E.g. Break down a wall.
                    >
                    Young:
                    > A good question. I guess that it would either be programmed to
                    > destroy objects in its way, or it would have to learn to do
                    > that by itself. How, though, I'm not sure!


                    Borgia: From my own personal experience I think creativity =
                    integration. You see an apple falling down and then integrate
                    that visual data with other data to get Newton's gravitation.
                    Think about how we CREATIVELY solve problems; it seems to happen
                    in one and only one way: integrating relevant but seemingly
                    disparate data into a new synthesis. Can a robot, on seeing a
                    woman slicing an apple on a street corner, break down this visual
                    input into some version of this crude formalization: "sharp
                    object (of certain characteristics y) + force + an object (of
                    certain characteristics x) --> a split x + object y"? Then, how
                    can one write algorithms that will attempt to match this
                    _solution pattern_ with a _problem pattern_ (e.g. getting out of
                    a walled room)? To find a solution to something, first the
                    problem has to be defined. Is an algorithm that partially
                    formalizes environmental events feasible?
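
                    One way to picture such a store of _solution patterns_, with the
                    schema format and the example entries invented purely for
                    illustration:

from dataclasses import dataclass

@dataclass
class SolutionPattern:
    # Crude formalization: tool (characteristics y) + force +
    # target (characteristics x) --> a split x.
    tool: str
    action: str
    target_property: str
    effect: str

MEMORY = [
    SolutionPattern("knife", "slice", "soft", "target split"),
    SolutionPattern("saw", "cut", "rigid", "target split"),
    SolutionPattern("hammer", "strike", "brittle", "target split"),
]

def match(problem_effect, observed_property):
    # Match a _problem pattern_ (the effect we need plus what was observed
    # about the target) against the stored _solution patterns_.
    return [p for p in MEMORY
            if p.effect == problem_effect
            and p.target_property == observed_property]

# match("target split", "brittle") -> [the hammer pattern]
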
                    After the problem of the walled room has been formalized:
                    (a) Get from point _A_ (inside the walled enclosure) to point
                    _B_ (outside the walled enclosure).
                    (b) How? Traverse a sectional area without walls.
                    (c) There is no such area.
                    [The problem has been determined: no such area. Any "~X" that
                    frustrates acquiring an objective "Z" is a problem.]

                    Can the robot then formalize this scenario into this format:
                    X (passage) + Y (robot, moving) = Z (objective: get from A to B)?

                    Since the room is walled, there is no X, only ~X, "~X" being
                    "walls".

                    Can the robot then proceed to this stage: ~X + Y = ~Z? This will
                    now be the _state of events_ in the robot's CPU. There are
                    logically two options: one, accept ~Z and not leave the room
                    (that would be against its instruction, so the robot can't do
                    that); or two, eliminate ~X--this fits with its instructions.
                    How do you eliminate ~X?
                    Scan ~X (the walls); from the scanning the robot will gather
                    some scientific data--the physical and chemical properties of
                    ~X.
                    Then, first priority: (i) how do you eliminate walls, or things
                    bearing close resemblance to the physical/chemical properties of
                    walls as determined by the robot's scans?
                    Second priority: (ii) how do you eliminate any object?
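
                    The two options can be written down almost literally; everything
                    below is a toy illustration of the ~X + Y = ~Z bookkeeping, not
                    a real controller:

def choose_option(passage_exists, must_leave=True):
    # State X + Y = Z: a passage exists, so traversing it reaches the goal.
    if passage_exists:
        return "traverse passage"
    # State ~X + Y = ~Z: no passage, so the goal is currently blocked.
    if not must_leave:
        return "accept ~Z and stay inside"   # ruled out by the instruction
    return "eliminate ~X: remove a wall section"

def elimination_queries(wall_scan):
    # wall_scan: physical/chemical properties gathered by scanning ~X.
    return [
        {"priority": 1,
         "question": "how to eliminate things with these properties",
         "properties": wall_scan},
        {"priority": 2,
         "question": "how to eliminate any object at all"},
    ]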

                    In order for the robot to answer these questions by itself, can
                    algorithms prompt it to conduct a memory search for visual data
                    involving scenes of any form of separation of physical
                    objects--cutting, slicing, dicing, twisting, cracking, burning,
                    chemical dissolution, etcetera--starting with those closest to
                    the physical/chemical properties of the walls and digressing
                    outward from that point? Then, can it formalize these scenes
                    into an X + Y = Z, partly using _cause and effect_/physics, and
                    attempt to implement the _formalization_ on the walls so as to
                    create a passage way out of the walled room?
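
                    A sketch of that memory search, ranking remembered separation
                    scenes by how close the remembered target's properties are to
                    the scanned wall's (the records and property scales are invented
                    for illustration):

SEPARATION_SCENES = [
    {"scene": "woman slicing an apple",             "hardness": 1, "thickness": 1},
    {"scene": "carpenter sawing a plank",           "hardness": 3, "thickness": 2},
    {"scene": "sledgehammer cracking a brick wall", "hardness": 7, "thickness": 5},
]

def rank_scenes(wall_properties, scenes=SEPARATION_SCENES):
    # Closest match first: start near the wall's properties and digress
    # outward from that point.
    def distance(scene):
        return (abs(scene["hardness"] - wall_properties["hardness"])
                + abs(scene["thickness"] - wall_properties["thickness"]))
    return sorted(scenes, key=distance)

# rank_scenes({"hardness": 6, "thickness": 5})[0]
#   -> the sledgehammer scene, which is then formalized as X + Y = Z and
#      attempted on the wall.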

                    Does this make any sense?

                    --Borgia, C.
                  • red_cell_op
                    Message 9 of 14, Jul 4, 2002
                      Can a chip help computers see in 3D?
                      09:07 Wednesday 3rd July 2002
                      Stephen Shankland, CNET News.com


                      A Silicon Valley start-up believes it can give stereo vision
                      to video cameras by encoding a processing scheme into a custom
                      chip. It could ready the way for robots with depth perception.

                      A Silicon Valley start-up believes it can improve computer
                      vision by combining a custom-designed chip with the way humans
                      see.

                      Human brains judge how far away objects are by comparing the slightly
                      different view each eye sees. Tyzx hopes to build this stereo vision
                      process into video cameras.
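
                      The geometry behind this is the standard disparity relation (a
                      sketch of the general principle, not of Tyzx's implementation):
                      the bigger the shift between the two views, the closer the
                      point.

def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    # Standard two-camera relation: depth = focal_length * baseline / disparity.
    if disparity_px <= 0:
        return float("inf")   # no measurable shift: effectively at infinity
    return focal_length_px * baseline_m / disparity_px

# e.g. a 700-pixel focal length, cameras 12 cm apart, a 10-pixel shift:
# depth_from_disparity(700, 0.12, 10) -> 8.4 metres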



                      The Palo Alto, California-based start-up has encoded a processing
                      scheme into a custom chip called DeepSea, allowing the processor to
                      determine not only the color of each tiny patch of an image but also
                      how far away that patch is from the camera.

                      The technology could be a boon for surveillance systems,
                      strengthening the ability to track people in banks, stores or
                      airports. But stereo vision could have wider uses as well, helping
                      focus a computer's attention and cutting down on the amount of data
                      that needs to be crunched.

                      For instance, a vacuuming robot trying to discern a table leg through
                      pattern recognition could avoid getting caught up in examining the
                      wallpaper in the background. Similarly, vehicles could use the
                      technology to detect obstacles in their path while filtering out
                      visual noise.

                      "The biggest value is the segmentation. It separates out the portion
                      of the image that interests you," said Takeo Kanade, a stereo vision
                      computing pioneer at Carnegie Mellon University and a member of an
                      independent Tyzx advisory board. "You have not only appearance but
                      also distance to each point. That makes the subsequent processing,
                      such as object detection and recognition, significantly easier."

                      Tyzx's first customers are mostly research labs, with other potential
                      business partners evaluating the technology, chief executive Ron Buck
                      said in an interview. Those who have bought the systems include MD
                      Robotics, the company that makes the robotic arm for the Space
                      Shuttle and, in the future, for the International Space Station. And
                      ChevronTexaco is employing the equipment for "augmented reality"
                      work -- supplementing what ordinary people see with computer imagery
                      for tasks such as operating oil platform cranes in bad weather.

                      The company hopes to win customers in the military and surveillance
                      industries, and, as costs go down, to expand into
                      broader "intelligent environments" where, for example, doors could
                      open automatically or a house could send a medical alert if someone
                      has been sitting still for an unusually long time. But Tyzx faces a
                      solid challenge translating the idea into a workable product.

                      "I believe it's a great idea," Kanade said. "Conceptually it's easy,
                      but computationally it's not."

                      Tyzx is backed by Vulcan Ventures, the investment firm of Microsoft
                      co-founder Paul Allen. It has less than 20 employees, some of whom
                      have years of experience in the field.

                      John Woodfill and Gaile Gordon launched the company in early 2001,
                      but much of their work precedes that date. A key formula used in the
                      custom chip dates back to 1990, and Tyzx has had prototype chips for
                      about a year, Buck said. It's only recently, though, that Tyzx's
                      ideas have become economically feasible.

                      Eyes on the prize
                      Stereo vision may indeed be a leap ahead for computers, but there's
                      still a long way to go before machines can achieve the sophistication
                      of human sight.

                      "Because vision comes so naturally to us, we don't appreciate the
                      problem intuitively," said David Touretzky, a computational
                      neuroscientist at Carnegie Mellon. "I don't think we got that
                      appreciation until people started trying to build computer systems to
                      see."

                      A large fraction of the brains of primates such as monkeys, apes and
                      humans is devoted to processing visual information, Touretzky said.
                      There are more than 20 different specialised areas for tasks such as
                      recognizing motion, color, shapes and spatial relationships between
                      objects.

                      "These areas are all interconnected in ways not fully understood
                      yet," Touretzky said, but together these parts of the brain can
                      discern the difference between the edge of a shadow and the edge of
                      an object or compensate for color shifts that occur when the sun
                      comes out.

                      Tyzx isn't the only company trying to capitalize on stereo computer
                      vision. Microsoft Research is working on technology that extracts 3D
                      information from 2D pictures. Point Grey Research already has cameras
                      on the market, though its processing algorithms require a full-
                      fledged computer.

                      In Japan, a company called ViewPlus is working in collaboration with
                      Point Grey Research. Its products, though, combine as many as 60
                      cameras into a spherical system that produces 20 simultaneous video
                      information streams.

                      These other companies are taking a fundamentally different approach
                      to Tyzx in one respect: Their systems compare more than two images.

                      Carnegie Mellon's Kanade said it might seem that comparing three
                      images would be a harder computational task, but in fact having more
                      data to work with can actually make the process simpler.

                      DeepSea processing
                      The key development at Tyzx is its custom chip, which runs an
                      algorithm called census correspondence that quickly finds
                      similarities across two streams of video images broken up into a
                      square grid of 512 by 512 pixels, or picture elements. The chip can perform
                      this comparison 125 times per second with a video image measuring 512
                      by 512 pixels, but the 33MHz DeepSea consumes much less power than
                      full-fledged processors such as Intel's Pentium.
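
                      The census-correspondence idea can be sketched in ordinary
                      software (an illustration of the general technique only, not
                      the DeepSea chip's actual pipeline):

import numpy as np

def census_transform(img, radius=2):
    # img: 2-D greyscale array.  One bit per neighbour: 1 where the neighbour
    # is darker than the centre pixel; a 5x5 window gives a 24-bit signature.
    h, w = img.shape
    codes = np.zeros((h, w), dtype=np.uint32)
    bit = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            codes |= (neighbour < img).astype(np.uint32) << bit
            bit += 1
    return codes

def census_disparity(left, right, max_disp=32):
    # For each pixel, pick the horizontal shift whose census codes differ in
    # the fewest bits (Hamming distance) -- the correspondence step.
    cl, cr = census_transform(left), census_transform(right)
    h, w = left.shape
    best_cost = np.full((h, w), 255, dtype=np.uint16)
    best_disp = np.zeros((h, w), dtype=np.uint8)
    for d in range(max_disp):
        diff = cl ^ np.roll(cr, d, axis=1)
        cost = np.unpackbits(diff.view(np.uint8).reshape(h, w, 4),
                             axis=-1).sum(axis=-1)
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_disp[better] = d
    return best_disp   # larger disparity = closer surface (border wrap ignored)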

                      "It allows incredibly compute-intensive searching for matching pixels
                      to happen very fast at a very low price. It allows us to bring stereo
                      vision to computers," chief executive Buck said.

                      Another important development needed to reach Tyzx's low-price
                      targets is camera sensors built using the comparatively inexpensive
                      complementary metal-oxide semiconductor (CMOS) technology -- the same
                      process used to build most computer chips, Buck said. Digital cameras
                      today use more elaborate -- but more expensive -- "charge-coupled
                      devices", or CCDs.

                      Kanade has an appreciation for the difficulties involved. About 10
                      years ago he built an expensive but pioneering stereo vision system
                      with many processors that could determine range information by
                      comparing the images from multiple cameras.

                      Since then, more powerful computer processing abilities have elevated
                      the potential of the field, which Kanade believes will take off once
                      stereo cameras are as cheap as today's ordinary video cameras.

                      "I'm very impressed with the various attempts which made real-time
                      stereo possible. I think the Tyzx effort may be one of the eventual
                      successes," Kanade said.


