
Re: More on perception

  • red_cell_op
    Message 1 of 14 , Jul 4, 2002
      http://cgi.zdnet.com/slink?182325
    • red_cell_op
      Message 2 of 14 , Jul 4, 2002
        Young:
        > Actually, film works at 24 frames per second, and TV at 25/sec.
        > Your eyes, however, work at about 1000 fps. Your brain is
        > cognitive of these pictures in a vastly different way, so that
        > a "frames per second" rating becomes meaningless.

        Borgia: I will re-check my notes about 25/sec shots (of the human
        eye)... but this comparison of shots per second is moot--it is
        only tangential to the point I was trying to make, which is:
        isolating each frame and executing algorithms on it in _real
        time_. I have forgotten the name of the super-fast cameras that
        can take thousands of shots per second--much faster, and with
        better resolution, than the human eye. The rate is not as
        important as the computing power that will process the _stills_
        in real time. How the brain processes these visual data is
        irrelevant in my view--we don't have to simulate the brain. Just
        let us do something that works, brain-imitation or no
        brain-imitation.

        > > On _depth perception_ ...
        >
        > Maybe the lasers that land surveyors use to judge distance would be
        > easier and more accurate than running an algorithm on an image.

        Borgia: The _depth perception_ part of the post _the search for om_
        has nothing to do with "running an algorithm on an image". I do
        not recall typing "running an algorithm on an image to determine
        depth perception"; what I recall doing was proposing two
        approaches from physics for depth perception. Thanks for the
        thought, though.

        > > If a robot is ordered to find an exit out of a room;
        > > and the robot circles about looking for a door or
        > > opening and happens to find none--the room is walled
        > > completely. How does the robot improvise on its own?
        > > E.g. Break down a wall.
        >
        Young:
        > A good question. I guess that it would either be programmed to
        > destroy objects in its way, or it would have to learn to do
        > that by itself. How, though, I'm not sure!


        Borgia: From my own personal experience I think creativity =
        integration. You see an apple falling down and then integrate
        that visual data with other data to get Newton's gravitation.
        Think about how we CREATIVELY solve problems; it seems to happen
        in one and only one way: integrating relevant but seemingly
        disparate data into a new synthesis. Can a robot, on seeing a
        woman slicing an apple on a street corner, break down this visual
        input into some version of this crude formalization: "sharp
        object (of certain characteristics y) + force + an object (of
        certain characteristics x) --> a split x + object y"? Then, how
        can one write algorithms that will attempt to match this
        _solution pattern_ with a _problem pattern_ (e.g. getting out of
        a walled room)? To find a solution to something, first the
        problem has to be defined. Is an algorithm capable of partially
        formalizing environmental events feasible?
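
        As a toy illustration of what such a formalization-and-matching
        step might look like, here is a minimal Python sketch; the
        pattern encoding and every name in it are hypothetical, not a
        real system:

            # Hypothetical sketch: encode observed events as crude
            # "instrument + action + target --> effect" patterns and
            # look up a solution pattern whose effect matches a need.
            from dataclasses import dataclass

            @dataclass
            class Pattern:
                instrument: str   # e.g. "sharp object"
                action: str       # e.g. "apply force"
                target: str       # e.g. "object x"
                effect: str       # e.g. "split target"

            # Memory of formalized scenes (the apple-slicing observation).
            memory = [
                Pattern("sharp object", "apply force", "solid object",
                        "split target"),
                Pattern("heat source", "burn", "solid object",
                        "breach target"),
            ]

            def match_solution(problem_need):
                # Return remembered patterns whose effect satisfies the need.
                return [p for p in memory if p.effect == problem_need]

            # Problem: the walled room. Need: split or breach the wall (~X).
            for need in ("split target", "breach target"):
                for p in match_solution(need):
                    print(f"try: {p.action} with {p.instrument} on the wall")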
        After the problem of the walled room has been formalized:
        (a) Get from point _A_ (inside the walled enclosure) to point _B_
        (outside the walled enclosure).
        (b) How? Traverse a sectional area without walls.
        (c) There is no such area.
        [The problem has been determined: no such area. Any "~X" that
        frustrates acquiring an objective "Z" is a problem.]

        Can the robot then formalize this scenario into this format:
        X(passage) + Y(robot; moving) = Z(objective--get from A to B)?

        Since the room is walled, there is no X, only ~X, "~X" being
        "walls".

        Can the robot then proceed to this stage: ~X + Y = ~Z? This will
        now be the _state of events_ in the robot's CPU. There are
        logically two options: one, opt for ~Z and not leave the room
        (that would be against its instruction, thus the robot can't do
        it); or two, eliminate ~X--this fits with its instructions.
        How do you eliminate ~X?
        Scan ~X (the walls); from the scanning the robot will gather some
        scientific data--the physical and chemical properties of ~X.
        Then, first priority: (i) how do you eliminate walls, or things
        bearing close resemblance to the physical/chemical properties of
        walls as determined by the robot's scans?
        Second priority: (ii) how do you eliminate any object?

        In order for the robot to answer these questions by itself,
        algorithms then prompt it to conduct a memory search for visual
        data involving scenes of any form of separation of physical
        objects--cutting, slicing, dicing, twisting, cracking, burning,
        chemical dissolution, etcetera--starting with those closest to
        the physical/chemical properties of the walls and working outward
        from there. Then, formalize these scenes into an X + Y = Z,
        partly using _cause and effect_/physics, and attempt to implement
        the _formalization_ on the walls so as to create a passageway out
        of the walled room.
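
        Purely as a sketch of the control flow being proposed (the
        property profiles and scenes below are invented stand-ins, not a
        real robotics system):

            # Hypothetical walled-room loop: rank remembered separation
            # scenes by how closely the separated object's properties
            # match the scanned wall (~X), and try the closest first.
            WALL_PROPS = {"hardness": 0.7, "flammability": 0.1}

            SCENES = [
                {"action": "slice with sharp object",
                 "target": {"hardness": 0.2, "flammability": 0.2}},
                {"action": "strike with blunt force",
                 "target": {"hardness": 0.8, "flammability": 0.1}},
                {"action": "burn with heat source",
                 "target": {"hardness": 0.3, "flammability": 0.9}},
            ]

            def property_distance(a, b):
                # Crude dissimilarity between two property profiles.
                return sum(abs(a[k] - b[k]) for k in a)

            ranked = sorted(
                SCENES,
                key=lambda s: property_distance(s["target"], WALL_PROPS))
            for scene in ranked:
                d = property_distance(scene["target"], WALL_PROPS)
                print(f"candidate plan: {scene['action']} (distance {d:.2f})")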

        Does this make any sense?

        --Borgia, C.
      • red_cell_op
        Message 3 of 14 , Jul 4, 2002
          Can a chip help computers see in 3D?
          09:07 Wednesday 3rd July 2002
          Stephen Shankland, CNET News.com


          A Silicon Valley start-up believes it can give stereo vision to video
          cameras by encoding a processing scheme into a custom chip. It could
          pave the way for robots with depth perception.
          A Silicon Valley start-up believes it can improve computer vision by
          combining a custom-designed chip with the way humans see.

          Human brains judge how far away objects are by comparing the slightly
          different view each eye sees. Tyzx hopes to build this stereo vision
          process into video cameras.
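
          For reference, the geometry behind this is the standard stereo
          relation: for a rectified pair of cameras, depth falls out of
          the horizontal disparity between matching pixels. A minimal
          sketch with illustrative values (not Tyzx's actual parameters):

              # Depth from stereo disparity for a rectified camera pair:
              # depth = focal_length * baseline / disparity.
              FOCAL_LENGTH_PX = 500.0   # focal length in pixels (assumed)
              BASELINE_M = 0.10         # camera separation in meters (assumed)

              def depth_from_disparity(disparity_px):
                  # Distance to a point whose image shifts by
                  # disparity_px between the left and right views.
                  return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

              # A nearby object shifts more between the two views than a
              # distant one.
              for d in (50.0, 10.0, 2.0):
                  print(f"disparity {d:4.0f} px -> "
                        f"{depth_from_disparity(d):5.2f} m")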



          The Palo Alto, California-based start-up has encoded a processing
          scheme into a custom chip called DeepSea, allowing the processor to
          determine not only the color of each tiny patch of an image but also
          how far away that patch is from the camera.

          The technology could be a boon for surveillance systems,
          strengthening the ability to track people in banks, stores or
          airports. But stereo vision could have wider uses as well, helping
          focus a computer's attention and cutting down on the amount of data
          that needs to be crunched.

          For instance, a vacuuming robot trying to discern a table leg through
          pattern recognition could avoid getting caught up in examining the
          wallpaper in the background. Similarly, vehicles could use the
          technology to detect obstacles in their path while filtering out
          visual noise.

          "The biggest value is the segmentation. It separates out the portion
          of the image that interests you," said Takeo Kanade, a stereo vision
          computing pioneer at Carnegie Mellon University and a member of an
          independent Tyzx advisory board. "You have not only appearance but
          also distance to each point. That makes the subsequent processing,
          such as object detection and recognition, significantly easier."
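
          In practice, the segmentation Kanade describes can be as simple
          as masking a per-pixel depth map, as in this illustrative numpy
          sketch (synthetic data, not Tyzx output):

              import numpy as np

              # Synthetic 512x512 depth map in meters: a background wall
              # at 3 m, with a nearer object occupying a patch at 1 m.
              depth = np.full((512, 512), 3.0)
              depth[200:300, 150:250] = 1.0

              # Segment by distance: keep only pixels closer than 2 m, so
              # later object recognition never examines the background.
              foreground = depth < 2.0
              print(f"foreground pixels: {foreground.sum()} of {foreground.size}")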

          Tyzx's first customers are mostly research labs, with other potential
          business partners evaluating the technology, chief executive Ron Buck
          said in an interview. Those who have bought the systems include MD
          Robotics, the company that makes the robotic arm for the Space
          Shuttle and, in the future, for the International Space Station. And
          ChevronTexaco is employing the equipment for "augmented reality"
          work -- supplementing what ordinary people see with computer imagery
          for tasks such as operating oil platform cranes in bad weather.

          The company hopes to win customers in the military and surveillance
          industries, and, as costs go down, to expand into
          broader "intelligent environments" where, for example, doors could
          open automatically or a house could send a medical alert if someone
          has been sitting still for an unusually long time. But Tyzx faces a
          solid challenge translating the idea into a workable product.

          "I believe it's a great idea," Kanade said. "Conceptually it's easy,
          but computationally it's not."

          Tyzx is backed by Vulcan Ventures, the investment firm of Microsoft
          co-founder Paul Allen. It has fewer than 20 employees, some of whom
          have years of experience in the field.

          John Woodfill and Gaile Gordon launched the company in early 2001,
          but much of their work precedes that date. A key formula used in the
          custom chip dates back to 1990, and Tyzx has had prototype chips for
          about a year, Buck said. It's only recently, though, that Tyzx's
          ideas have become economically feasible.

          Eyes on the prize
          Stereo vision may indeed be a leap ahead for computers, but there's
          still a long way to go before machines can achieve the sophistication
          of human sight.

          "Because vision comes so naturally to us, we don't appreciate the
          problem intuitively," said David Touretzky, a computational
          neuroscientist at Carnegie Mellon. "I don't think we got that
          appreciation until people started trying to build computer systems to
          see."

          A large fraction of the brains of primates such as monkeys, apes and
          humans is devoted to processing visual information, Touretzky said.
          There are more than 20 different specialised areas for tasks such as
          recognizing motion, color, shapes and spatial relationships between
          objects.

          "These areas are all interconnected in ways not fully understood
          yet," Touretzky said, but together these parts of the brain can
          discern the difference between the edge of a shadow and the edge of
          an object or compensate for color shifts that occur when the sun
          comes out.

          Tyzx isn't the only company trying to capitalize on stereo computer
          vision. Microsoft Research is working on technology that extracts 3D
          information from 2D pictures. Point Grey Research already has cameras
          on the market, though its processing algorithms require a full-
          fledged computer.

          In Japan, a company called ViewPlus is working in collaboration with
          Point Grey Research. Its products, though, combine as many as 60
          cameras into a spherical system that produces 20 simultaneous video
          information streams.

          These other companies are taking a fundamentally different approach
          to Tyzx in one respect: Their systems compare more than two images.

          Carnegie Mellon's Kanade said it might seem that comparing three
          images would be a harder computational task, but in fact having more
          data to work with can actually make the process simpler.

          DeepSea processing
          The key development at Tyzx is its custom chip, which runs an
          algorithm called census correspondence that quickly finds
          similarities across two streams of video images broken up into a
          square grid of 512 pixels, or picture elements. The chip can perform
          this comparison 125 times per second with a video image measuring 512
          by 512 pixels, but the 33MHz DeepSea consumes much less power than
          full-fledged processors such as Intel's Pentium.
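
          Census correspondence, as described in the published stereo
          literature (Zabih and Woodfill), replaces each pixel with a bit
          string recording which of its neighbors are darker than it,
          then matches pixels across the two images by minimizing the
          Hamming distance between these bit strings. The DeepSea
          pipeline itself is proprietary; the following is only a rough
          CPU sketch of the textbook form:

              import numpy as np

              def census_transform(img, r=1):
                  # Bit string per pixel: 1 where a neighbor in the
                  # (2r+1)x(2r+1) window is darker than the center.
                  bits = np.zeros(img.shape, dtype=np.uint32)
                  for dy in range(-r, r + 1):
                      for dx in range(-r, r + 1):
                          if dy == 0 and dx == 0:
                              continue
                          nbr = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                          bits = (bits << 1) | (nbr < img).astype(np.uint32)
                  return bits

              def popcount(x):
                  # Hamming weight (number of set bits) per element.
                  c = np.zeros(x.shape, dtype=np.uint8)
                  while x.any():
                      c += (x & 1).astype(np.uint8)
                      x = x >> 1
                  return c

              def disparity_map(left, right, max_disp=16):
                  # For each left pixel, pick the horizontal shift whose
                  # right-image census code is closest in Hamming distance.
                  cl, cr = census_transform(left), census_transform(right)
                  best_cost = np.full(left.shape, 255, dtype=np.uint8)
                  best_disp = np.zeros(left.shape, dtype=np.uint8)
                  for d in range(max_disp):
                      cost = popcount(cl ^ np.roll(cr, d, axis=1))
                      better = cost < best_cost
                      best_cost[better] = cost[better]
                      best_disp[better] = d
                  return best_disp

              # Demo on synthetic images with a uniform 4-pixel disparity.
              rng = np.random.default_rng(0)
              left = rng.integers(0, 256, (64, 64))
              right = np.roll(left, -4, axis=1)
              print("median disparity:", np.median(disparity_map(left, right)))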

          "It allows incredibly compute-intensive searching for matching pixels
          to happen very fast at a very low price. It allows us to bring stereo
          vision to computers," chief executive Buck said.

          Another important development needed to reach Tyzx's low-price
          targets is camera sensors built using the comparatively inexpensive
          complementary metal-oxide semiconductor (CMOS) technology -- the same
          process used to build most computer chips, Buck said. Digital cameras
          today use more elaborate -- but more expensive -- "charge-coupled
          devices", or CCDs.

          Kanade has an appreciation for the difficulties involved. About 10
          years ago he built an expensive but pioneering stereo vision system
          with many processors that could determine range information by
          comparing the images from multiple cameras.

          Since then, more powerful computer processing abilities have elevated
          the potential of the field, which Kanade believes will take off once
          stereo cameras are as cheap as today's ordinary video cameras.

          "I'm very impressed with the various attempts which made real-time
          stereo possible. I think the Tyzx effort may be one of the eventual
          successes," Kanade said.


