hacked low-cost laser range finder?

  • Lucas
    Message 1 of 26, Oct 9, 2008
      Has anyone taken a Bushnell or equivalent laser range finder meant for
      hunting or golf and attempted to get a digital or analog signal out of it?
      So far the most inexpensive range finder I've found with a computer
      interface is from Opti-Logic (
      http://www.opti-logic.com/industrial_rangefinders.htm ) for around $500.
      The sports and low-end home improvement types go for $100-$200; I'd be
      really interested in knowing whether the data can be extracted (without
      having to OCR their displays).

      Earlier I tried a laser dot and camera approach in combination with the
      ARToolKit marker tracking software, with okay results (
      http://vimeo.com/1897078 ), but I'd like to swap out the visible laser for
      something longer range that won't require the laser to be pointed at
      objects in the camera's view.
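The dot-and-camera approach boils down to simple triangulation. A minimal sketch, assuming a pinhole camera with the laser mounted parallel to the optical axis at a known baseline; the function name and default numbers are illustrative, not from ARToolKit:

```python
# Range from a laser dot's position in a camera image.
# Sketch only: assumes a pinhole camera with the laser emitter
# mounted parallel to the optical axis at a fixed baseline.

def dot_range_m(pixel_offset_px, focal_px=800.0, baseline_m=0.10):
    """Distance to the surface the dot lands on.

    pixel_offset_px -- horizontal offset of the dot from the image
                       center (a larger offset means a closer object)
    focal_px        -- focal length expressed in pixels
    baseline_m      -- laser-to-lens separation in meters
    """
    if pixel_offset_px <= 0:
        raise ValueError("dot must be offset from the optical axis")
    return focal_px * baseline_m / pixel_offset_px
```

With these made-up numbers, a dot 80 px off-center corresponds to a surface about 1 m away; accuracy falls off quickly at long range, which is the usual weakness of this method.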

      Thanks,

      Luke


    • Larry Barello
      Message 2 of 26, Oct 9, 2008
        I purchased one of the Opti-Logic rangefinders for a contract project.
        It is the SAME guts as the low-end sports model, literally cut out of
        the plastic molding, with a small AVR-based interface board replacing
        the LCD. The interface they provide is really crummy: you can't send it
        characters too fast or it wedges, there is no integrity check, and the
        calibration routine is funky. Other than that it does what it says it
        does, and it is relatively cheap.
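Since the interface reportedly wedges when characters arrive too fast, one workaround is to pace the bytes yourself. A hypothetical sketch: `write_byte` stands in for whatever single-byte write your serial library provides, and the 20 ms default gap is a guess to tune against the hardware:

```python
import time

def paced_write(write_byte, data, gap_s=0.02):
    """Send bytes one at a time with a delay between them, for
    interfaces that wedge when characters arrive too quickly.

    write_byte -- callable that transmits one byte value
    data       -- bytes to send
    gap_s      -- inter-byte gap in seconds (tune on real hardware)
    """
    for b in data:
        write_byte(b)
        time.sleep(gap_s)
```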


        Visit the SRS Website at http://www.seattlerobotics.org
      • Matthew Tedder
        Message 3 of 26, Oct 9, 2008
          I have no idea about this specifically, but I sometimes wonder how
          well stereo cameras might work for it. I mean, it's simple math to
          figure out distance from the overlap in two 2D images. The only
          problem I can think of is differentiating between objects of more or
          less relevance--such as a brick wall in front of you versus raindrops
          or a paper airplane.
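For rectified stereo cameras, the "simple math" is the standard disparity relation Z = f * B / d. A minimal sketch with illustrative defaults (a real rig needs calibration to get the focal length in pixels):

```python
# Depth from binocular disparity for rectified pinhole cameras:
#   Z = focal_px * baseline_m / disparity_px
# Defaults are illustrative, not from any particular camera.

def stereo_depth_m(disparity_px, focal_px=700.0, baseline_m=0.06):
    """Depth of a point from its horizontal pixel disparity between
    the left and right images."""
    if disparity_px <= 0:
        raise ValueError("non-positive disparity: point at infinity "
                         "or a bad correspondence")
    return focal_px * baseline_m / disparity_px
```

The hard part in practice is not this formula but finding reliable correspondences between the two images, which is exactly where the brick-wall-versus-raindrop problem shows up.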

          Matthew

        • Ray Xu
          Message 4 of 26, Oct 9, 2008
            Wow, was it me who stimulated all these laser range finder topics
            I've been seeing in some forums, because I am making one based on
            phase shift? Just wondering. Consider this "junk" or off-topic if
            you think so.
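For reference, phase-shift ranging recovers distance from the phase delay of an amplitude-modulated beam. A sketch of the basic relation (the modulation frequency in the example is illustrative):

```python
import math

C_M_S = 299_792_458.0  # speed of light, m/s

def phase_shift_range_m(delta_phi_rad, mod_freq_hz):
    """d = c * delta_phi / (4 * pi * f).

    The beam travels out and back (2d), hence the factor 4*pi
    rather than 2*pi."""
    return C_M_S * delta_phi_rad / (4.0 * math.pi * mod_freq_hz)

def ambiguity_range_m(mod_freq_hz):
    """Maximum unambiguous distance, c / (2 * f); beyond this the
    measured phase wraps around."""
    return C_M_S / (2.0 * mod_freq_hz)
```

The wrap-around is why practical phase-shift rangefinders often modulate at several frequencies and combine the results.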



            Ray Xu

            rayxu@...



          • rcrichton@att.net
            Message 5 of 26, Oct 9, 2008
              My ancient Palm Tungsten has died; I'm looking for recommendations for a relatively inexpensive pocket brain.
              Fitting in a pocket/on a belt is a must - an EeePC would be cool to play with, but would not meet my needs.
              "nice to have" features:
              camera
              ebook software
              wireless connectivity
              long battery life
              Built-in/freely available programming environment

              Any suggestions?

              -Bob Crichton

            • David Murphy
              Message 6 of 26, Oct 9, 2008
                iTouch (no camera)/iPhone ?
                SDK is a free download from apple.

                -David
              • Peter Balch
                Message 7 of 26, Oct 10, 2008
                  > From: "David Murphy" <dfm794@...>
                  > iTouch (no camera)/iPhone ?
                  > SDK is a free download from apple.

                  A couple of weeks ago someone suggested I should develop apps for the
                  iPhone - not that I own one.

                  Would I be right in thinking that although the SDK is free, I would have to
                  buy a Mac?

                  Peter
                • Lucas
                  Message 8 of 26, Oct 10, 2008
                    That sounds disappointing. For a little more money I see
                    products like the Leica Disto A6 or Bosch DLE 150, which
                    have Bluetooth interfaces. But there's no Linux support,
                    and I don't know if Bluetooth interfaces in general are
                    easy to reverse engineer.

                    What kind of update rate could you get out of it?

                    -Luke



                    --
                    http://binarymillenium.com


                  • Robert & Elaine
                    Message 9 of 26, Oct 10, 2008
                      I would like to answer your question with another question: how did you learn the difference between these objects?

                      My opinion on AI, and computer intelligence generally, is this. What people are trying to obtain is a machine that thinks like a human. However, I don't think it can be accomplished following the current path. Human brains are wired a certain way, although we still don't really know how; we are only making some predictions. The wiring in human brains was established during gestation at the direction of the code contained in DNA: how to build a human. In that code are the directions for the "basic wiring" of the brain. What follows once that brain becomes functional is learning, and memory of what is learned.

                      Now, in order for a computer to emulate this process, you need a machine that can learn and remember what it learned. Then, just as a human does, the machine starts learning about its environment, about itself, and about how it works (arms, legs, eyes, ears, and touch; the five senses).

                      As an example, think about how you learned to walk! It took some time for your brain to understand all the input. But eventually your brain put its mechanical abilities and vestibular (balance) inputs together and you attempted to stand. What an accomplishment! Eventually you learned to put one foot in front of the other and walk--another great accomplishment.

                      Thus, I feel the same must be done with machines. They need to be able to learn and remember what was learned; whether it was good or bad; what it needed internally to accomplish the task and through memory, be able to repeat the task.

                      We can make machines to do certain functions, such as identifying objects from a camera. The problem we see (pardon the pun) is that the machine has no comprehension of what it sees the way a human would.

                      Another example perhaps will help my point. As a child you got to a point where you discovered a pencil. It was probably made of wood, painted yellow, with a pink thingy on the end that was soft and flexible. How did you learn what it was, and then what you could do with it? It could have been many different ways. For instance, you might have picked it up (another learned task) and noticed that it makes marks on some objects. Or someone taught you that this object makes marks on some objects. You might even have been told how to hold it. It was learned, and by remembering this conglomeration of sensory input and directed mechanical actions, you learned that this was an object that makes marks. Thus, given a machine that had all the necessary mechanical abilities to pick up a pencil and was able to learn, you must still teach the machine what this object is, how to hold it, and that it makes marks.

                      I've been studying AI for some time now and came to this realization recently. As another drill for your head (hmmm, sounds gross put that way): when you see something with your eyes, what is it that you actually remember? I think you will find that you don't really recall the actual "picture," but you do remember things about what you saw.

                      Anyway, those are my thoughts.

                      Bob

                    • larry barello
                      Message 10 of 26, Oct 10, 2008
                        The Opti-Logic has 10 Hz calibrated, filtered output and 200 Hz raw
                        output (range roughly 10 m to 600 m). I ended up writing my own
                        calibration routine (it is a simple slope/offset calibration; raw
                        output is 0-4095 as a decimal number) and filtered the output over
                        10-20 cycles. Nothing fancy. The problem is that the built-in
                        interface is fragile, and spurious characters can really screw
                        things up. The Linux board I happen to use initially spewed junk on
                        the com port at boot time, wedging the interface. So I set it up for
                        raw output and then cut the Rx line. Now it is very stable and I can
                        quickly adjust the calibration via a web page on the product, a GPS
                        guided autonomous parachute delivery system. This is just an
                        experimental investigation at the moment.
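The slope/offset calibration and short filter Larry describes might look something like this sketch; the two calibration points are made up for illustration:

```python
from collections import deque

def make_calibration(raw_a, dist_a, raw_b, dist_b):
    """Two-point slope/offset fit mapping raw 0-4095 counts to
    distance.  Returns a function raw -> distance."""
    slope = (dist_b - dist_a) / (raw_b - raw_a)
    offset = dist_a - slope * raw_a
    return lambda raw: slope * raw + offset

def smooth(samples, window=15):
    """Moving average over the last `window` readings -- i.e. the
    kind of 10-20 cycle filtering described above."""
    buf = deque(maxlen=window)
    out = []
    for s in samples:
        buf.append(s)
        out.append(sum(buf) / len(buf))
    return out
```

Cutting the Rx line and using raw output then makes the host side purely receive-and-convert, with no characters sent that could wedge the unit.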

                        http://www.airborne-sys.com/productlisting.htm#aerialdelivery

                        I am sure anyone with modest skills could come up with a far better
                        computer interface than the one provided with the Opti-Logic. If I
                        were doing this as a hobbyist, I would purchase their low-end sports
                        model and start cutting away plastic, then use a scope to figure out
                        the interface between the LCD display and the rangefinder.
                        Opti-Logic is very protective of the interface: they won't say how
                        it works, but it has to be simple, as the computer interface they
                        provide is nothing more than an AVR chip, level shifters for the
                        RS232, and maybe a 12-bit ADC. In any case I wouldn't spend $300
                        extra for their industrial model with the crappy interface, which,
                        when you unpack it, obviously was cut away from a consumer molding...

                        Cheers!

                      • David Murphy
                        Message 11 of 26, Oct 10, 2008
                          Yes, that seems to be true. I just checked their readme on the
                          SDK and it only mentions Macs. I guess that would make these
                          options for a PDA replacement rather expensive!

                          - David

                        • ed@okerson.com
                          Message 12 of 26, Oct 10, 2008
                            If you are looking for a small device that has an open API,
                            have a look at the Nokia Internet Tablets. Current models are
                            the N800 and N810. While not designed to be PDAs, they are
                            about the same form factor, and an open source SDK is available
                            at www.maemo.org. The devices themselves run Linux, and there
                            are lots of open source projects already for them.

                            Ed

                          • Mike Payson
                            Message 13 of 26, Oct 10, 2008
                              Wait a couple of weeks and buy a T-Mobile G1, based on Google's
                              Android dev environment...

                              http://code.google.com/android/

                              Cheaper than an iPhone, much more usable, and truly open.


                              On Thu, Oct 9, 2008 at 4:13 PM, <rcrichton@...> wrote:
                              >
                              >
                              > My ancient Palm Tungsten has died, I'm looking for recommendations for a relatively inexpensive pocket brain.
                              > Fitting in a pocket/on a belt is a must - an EeePC would be cool to play with, but would not meet my needs.
                              > "nice to have" features:
                              > camera
                              > ebook software
                              > wireless connectivity
                              > long battery life
                              > Built-in/freely available programming environment
                              >
                              > Any suggestions?
                              >
                              > -Bob Crichton
                              >
                              >
                              >
                            • Brian Pitt
                              Message 14 of 26, Oct 11, 2008
                                On Friday 10 October 2008 07:21, Robert & Elaine wrote:
                                > My opinion on AI and basically computer intelligence is this.  What people are
                                > trying to obtain is a machine that thinks like a human. However, I don't think it
                                > can be accomplished following the current path.

                                and if it doesn't think like a human, how would a human be able to tell it was thinking at all?

                                the way I see it, people are optimized for dealing with a physical environment, as your learning-to-walk and figuring-out-what-a-pencil-does examples show, while computers are optimized for a totally alien environment with only vague and often misleading similarities to ours

                                an experiment that might level things out would be to have a computer figure out how to use a computer: suppose you took an old IBM PC (with ROM Basic :) and loaded it with a minimum operating system, just command.com, debug.com and a few other files, and used the ctty command to redirect its console I/O to a serial port; after it boots you don't get to touch it

                                on the other end of the serial cable your megasuperionXXVII quad core has to learn how to use it

                                if it works at all, the 'Brain' program would find ways to use the PC that no person would ever have come up with

                                Brian
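[Editor's note: Brian's serial-console experiment can be sketched in miniature. In the toy below the DOS box is stood in for by a plain function; in the real setup that function would be a write/read pair on a serial port (e.g. via pySerial) aimed at the CTTY-redirected console. The command table, its replies, and the brute-force "Brain" are all invented for the illustration.]

```python
import itertools
import string

def dos_box(command):
    """Toy stand-in for the minimal DOS box on the far end of the
    serial cable. The two commands and their replies are invented
    for this demo; anything else gets the classic error."""
    responses = {
        "dir": "COMMAND  COM     DEBUG    COM",
        "ver": "MS-DOS Version 3.30",
    }
    return responses.get(command, "Bad command or file name")

def explore(max_len=3):
    """Brute-force 'Brain': try every short lowercase string and keep
    whatever does not come back as an error."""
    learned = {}
    for n in range(1, max_len + 1):
        for letters in itertools.product(string.ascii_lowercase, repeat=n):
            cmd = "".join(letters)
            reply = dos_box(cmd)
            if reply != "Bad command or file name":
                learned[cmd] = reply
    return learned

commands = explore()  # discovers 'dir' and 'ver' with no prior knowledge
```

A real Brain would need something smarter than enumeration, but even this blind search finds every working command the box exposes, which is the point of the experiment.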
                              • Matthew Tedder
                                Message 15 of 26, Oct 11, 2008
                                  I sometimes wonder about that, also. I'd use a virtual machine because the
                                  intelligence engine is sure to crash it and wipe it out, numerous times.

                                  The easiest way to conceptualize what many people are thinking of as "true
                                  AI" is that it "thinks like a human", but what does that mean? How do
                                  humans think?

                                  If the goal is essentially practical, namely to make software learn novel
                                  environments, theorize about the partial and unseen elements within
                                  them, and solve problems based on this understanding, then I think
                                  human-like thinking (whatever it is) appears to be the only algorithm we
                                  know works.

                                  What the intelligence engine has to do is: Derive models as accurately as
                                  possible from interaction sequence patterns within its environment and use
                                  those models to engineer and predict novel new interaction sequences aimed
                                  at achieving desired directions and/or goals.
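[Editor's note: about the simplest concrete reading of "derive models from interaction sequence patterns" is a first-order transition model: tally which event follows which, then predict the most common successor. A minimal sketch; the event names and the light-switch history are made up for the example:]

```python
from collections import Counter, defaultdict

def learn_model(sequences):
    # Count observed transitions: model[a][b] = times b followed a.
    model = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            model[a][b] += 1
    return model

def predict_next(model, event):
    # Most frequently observed successor, or None if the event has
    # never been seen leading anywhere.
    followers = model.get(event)
    return followers.most_common(1)[0][0] if followers else None

# Invented interaction history: flipping a light switch.
history = [
    ["press_switch", "light_on", "press_switch", "light_off"],
    ["press_switch", "light_on"],
]
model = learn_model(history)
```

Here `predict_next(model, "press_switch")` returns `"light_on"` (seen twice, versus once for `"light_off"`): a model in exactly the sense above, derived purely from interaction sequences and used to predict new ones.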

                                  There may be many ways to accomplish this, but it's clearly not easy to
                                  figure out. It makes obvious sense to see if we can at least get hints from
                                  how humans might do it.

                                  I do think universal intelligence engines are possible, although fine tuning
                                  for specific applications might also be helpful. Simple, classical
                                  conditioning seems to work. I recently saw an article in popular media
                                  showing that bacteria and paramecia might even learn this way:

                                  http://www.technologyreview.com/biomedicine/21447/
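[Editor's note: for what classical conditioning looks like as an algorithm, the standard textbook formulation is the Rescorla-Wagner rule: on each paired trial, association strength moves a fixed fraction of the remaining distance toward the reinforcement magnitude. A sketch of that generic textbook model, not anything from the linked article:]

```python
def rescorla_wagner(trials, alpha=0.3, reward=1.0):
    """Single-stimulus Rescorla-Wagner update: V <- V + alpha * (reward - V).
    alpha is the learning rate; reward is the asymptote the
    association strength V climbs toward."""
    v = 0.0
    history = []
    for _ in range(trials):
        v += alpha * (reward - v)
        history.append(v)
    return history

strengths = rescorla_wagner(20)  # 0.3, 0.51, 0.657, ... approaching 1.0
```

The familiar negatively accelerated learning curve falls straight out of the update; extinction is the same rule with reward set back to zero.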

                                  Matthew

                                  On Sat, Oct 11, 2008 at 2:34 PM, Brian Pitt <bfp@...> wrote:

                                  > On Friday 10 October 2008 07:21, Robert & Elaine wrote:
                                  > > My opinion on AI and basically computer intelligence is this. What
                                  > people are
                                  > > trying to obtain is a machine that thinks like a human. However, I don't
                                  > think it
                                  > > can be accomplished following the current path.
                                  >
                                  > and if it doesn't think like a human how would a human be able to tell it
                                  > was thinking at all?
                                  >
                                  > the way I see it people are optimized for dealing with a physical
                                  > environment as your learning
                                  > to walk and figuring out what a pencil does examples show ,while computers
                                  > are optimized
                                  > for a totally alien environment with only vague and often misleading
                                  > similaritys to ours
                                  >
                                  > an experiment that might level things out would be to have a computer
                                  > figure out how to use a computer
                                  > suppose you took an old IBM PC (with ROM Basic :) and loaded it with a
                                  > minimum operating system
                                  > just command.com ,debug.com and a few other files and used the ctty
                                  > command to redirect its
                                  > console I/O to a serial port ,after it boots you don't get to touch it
                                  >
                                  > on the other end of the serial cable your megasuperionXXVII quad core has
                                  > to learn how to use it
                                  >
                                  > if it works at all the 'Brain' program would find ways to use the PC that
                                  > no person would
                                  > ever have come up with
                                  >
                                  > Brian
                                  >
                                  >


                                • robert mckee
                                  Message 16 of 26, Oct 11, 2008
                                    --- On Sat, 10/11/08, Matthew Tedder <matthewct@...> wrote:

                                    From: Matthew Tedder <matthewct@...>
                                    Subject: Re: [SeattleRobotics] hacked low-cost AI?
                                    To: SeattleRobotics@yahoogroups.com
                                    Date: Saturday, October 11, 2008, 3:23 PM






                                    I sometimes wonder about that, also. I'd use a virtual machine because the
                                    intelligence engine is sure to crash it and wipe it out, numerous times.

                                    The easiest way to conceptualize what many people are thinking of as "true
                                    AI" is that it "thinks like a human", but what does that mean? How do
                                    humans think?

                                    If the goal is essentially practical, trying to make software learn novel
                                    environments, theorize about the partial and unseen elements within that
                                    environment, and solve problem based on this understanding then I think
                                    human-like thinking (whatever it is) appears to be the only algorithm we
                                    know works.

                                    What the intelligence engine has to do is: Derive models as accurately as
                                    possible from interaction sequence patterns within its environment and use
                                    those models to engineer and predict novel new interaction sequences aimed
                                    at achieving desired directions and/or goals.

                                    There may be many ways to accomplish this, but it's clearly not easy to
                                    figure out. It makes obvious sense to see if we can at least get hints from
                                    how humans might do it.

                                    I do think universal intelligence engines are possible, although fine tuning
                                    for specific applications might also be helpful. Simple, classical
                                    conditioning seems to work. I recently saw an article in popular media
                                    showing that bacteria and parameciums might even learn this way:

                                    http://www.technologyreview.com/biomedicine/21447/

                                    Matthew

                                    On Sat, Oct 11, 2008 at 2:34 PM, Brian Pitt <bfp@earthlink.net> wrote:

                                    > On Friday 10 October 2008 07:21, Robert & Elaine wrote:
                                    > > My opinion on AI and basically computer intelligence is this. What
                                    > people are
                                    > > trying to obtain is a machine that thinks like a human. However, I don't
                                    > think it
                                    > > can be accomplished following the current path.
                                    >
                                    > and if it doesn't think like a human how would a human be able to tell it
                                    > was thinking at all?
                                    >
                                    > the way I see it people are optimized for dealing with a physical
                                    > environment as your learning
                                    > to walk and figuring out what a pencil does examples show ,while computers
                                    > are optimized
                                    > for a totally alien environment with only vague and often misleading
                                    > similaritys to ours
                                    >
                                    > an experiment that might level things out would be to have a computer
                                    > figure out how to use a computer
                                    > suppose you took an old IBM PC (with ROM Basic :) and loaded it with a
                                    > minimum operating system
                                    > just command.com ,debug.com and a few other files and used the ctty
                                    > command to redirect its
                                    > console I/O to a serial port ,after it boots you don't get to touch it
                                    >
                                    > on the other end of the serial cable your megasuperionXXVII quad core has
                                    > to learn how to use it
                                    >
                                    > if it works at all the 'Brain' program would find ways to use the PC that
                                    > no person would
                                    > ever have come up with
                                    >
                                    > Brian
                                    >
                                    >

                                  • Robert & Elaine
                                    Message 17 of 26, Oct 12, 2008
                                      Yes,

                                      Here is something else to contemplate.

                                      If a learning machine eventually becomes sentient (?), i.e. knows of its own existence, would it know it's living in a box? That it is a silicon-based life form? And if it thinks like a human, would it eventually have phobias and display neurotic behavior stemming from living in a box?

                                      Interesting food for thought... (um pardon the pun)

                                      Bob
                                      ----- Original Message -----
                                      From: Matthew Tedder
                                      To: SeattleRobotics@yahoogroups.com
                                      Sent: Saturday, October 11, 2008 3:23 PM
                                      Subject: Re: [SeattleRobotics] hacked low-cost AI?


                                      I sometimes wonder about that, also. I'd use a virtual machine because the
                                      intelligence engine is sure to crash it and wipe it out, numerous times.

                                      The easiest way to conceptualize what many people are thinking of as "true
                                      AI" is that it "thinks like a human", but what does that mean? How do
                                      humans think?

                                      If the goal is essentially practical, trying to make software learn novel
                                      environments, theorize about the partial and unseen elements within that
                                      environment, and solve problem based on this understanding then I think
                                      human-like thinking (whatever it is) appears to be the only algorithm we
                                      know works.

                                      What the intelligence engine has to do is: Derive models as accurately as
                                      possible from interaction sequence patterns within its environment and use
                                      those models to engineer and predict novel new interaction sequences aimed
                                      at achieving desired directions and/or goals.

                                      There may be many ways to accomplish this, but it's clearly not easy to
                                      figure out. It makes obvious sense to see if we can at least get hints from
                                      how humans might do it.

                                      I do think universal intelligence engines are possible, although fine tuning
                                      for specific applications might also be helpful. Simple, classical
                                      conditioning seems to work. I recently saw an article in popular media
                                      showing that bacteria and parameciums might even learn this way:

                                      http://www.technologyreview.com/biomedicine/21447/

                                      Matthew

                                      On Sat, Oct 11, 2008 at 2:34 PM, Brian Pitt <bfp@...> wrote:

                                      > On Friday 10 October 2008 07:21, Robert & Elaine wrote:
                                      > > My opinion on AI and basically computer intelligence is this. What
                                      > people are
                                      > > trying to obtain is a machine that thinks like a human. However, I don't
                                      > think it
                                      > > can be accomplished following the current path.
                                      >
                                      > and if it doesn't think like a human how would a human be able to tell it
                                      > was thinking at all?
                                      >
                                      > the way I see it people are optimized for dealing with a physical
                                      > environment as your learning
                                      > to walk and figuring out what a pencil does examples show ,while computers
                                      > are optimized
                                      > for a totally alien environment with only vague and often misleading
                                      > similaritys to ours
                                      >
                                      > an experiment that might level things out would be to have a computer
                                      > figure out how to use a computer
                                      > suppose you took an old IBM PC (with ROM Basic :) and loaded it with a
                                      > minimum operating system
                                      > just command.com ,debug.com and a few other files and used the ctty
                                      > command to redirect its
                                      > console I/O to a serial port ,after it boots you don't get to touch it
                                      >
                                      > on the other end of the serial cable your megasuperionXXVII quad core has
                                      > to learn how to use it
                                      >
                                      > if it works at all the 'Brain' program would find ways to use the PC that
                                      > no person would
                                      > ever have come up with
                                      >
                                      > Brian
                                      >
                                      >

                                    • Brian Pitt
                                      Message 18 of 26, Oct 12, 2008
                                        On Sunday 12 October 2008 07:04, Robert & Elaine wrote:
                                        > Here is something else to contemplate.
                                        > If a learning machine eventually becomes sentient (?), aka knows of it's own
                                        > existence, would it know it's living in a box?

                                        I'm not so sure it would perceive it as a box
                                        people have their brains stuffed into skulls but they don't all get claustrophobia
                                        they're living on a ball surrounded by vacuum but agoraphobia is fairly rare
                                        and you almost never really think of yourself as being a water-based life form
                                        (sure they say carbon-based but without water you'd be nothing)

                                        of course it may get a little twitchy if you disconnected it from the network
                                        but then again who doesn't ;)

                                        Brian
                                      • Tony Mactutis
                                        Message 19 of 26, Oct 12, 2008
                                          What I wonder is, is it possible for a machine to be (truly) creative,
                                          if it is not also subject to neuroses, phobias, or all of the (less
                                          severe) manifestations of personality?

                                          Tony

                                          Robert & Elaine wrote:
                                          >
                                          > Yes,
                                          >
                                          > Here is something else to contemplate.
                                          >
                                          > If a learning machine eventually becomes sentient (?), aka knows of
                                          > it's own existence, would it know it's living in a box? That it is a
                                          > silicon based life form? And if it thinks like a human, would it
                                          > eventually have phobias and display neurotic behavior stemming from
                                          > living in a box?
                                          >
                                          > Interesting food for thought... (um pardon the pun)
                                          >
                                          > Bob
                                          > ----- Original Message -----
                                          > From: Matthew Tedder
                                          > To: SeattleRobotics@yahoogroups.com
                                          > <mailto:SeattleRobotics%40yahoogroups.com>
                                          > Sent: Saturday, October 11, 2008 3:23 PM
                                          > Subject: Re: [SeattleRobotics] hacked low-cost AI?
                                          >
                                          > I sometimes wonder about that, also. I'd use a virtual machine because the
                                          > intelligence engine is sure to crash it and wipe it out, numerous times.
                                          >
                                          > The easiest way to conceptualize what many people are thinking of as "true
                                          > AI" is that it "thinks like a human", but what does that mean? How do
                                          > humans think?
                                          >
                                          > If the goal is essentially practical, trying to make software learn novel
                                          > environments, theorize about the partial and unseen elements within that
                                          > environment, and solve problem based on this understanding then I think
                                          > human-like thinking (whatever it is) appears to be the only algorithm we
                                          > know works.
                                          >
                                          > What the intelligence engine has to do is: Derive models as accurately as
                                          > possible from interaction sequence patterns within its environment and use
                                          > those models to engineer and predict novel new interaction sequences aimed
                                          > at achieving desired directions and/or goals.
                                          >
                                          > There may be many ways to accomplish this, but it's clearly not easy to
                                          > figure out. It makes obvious sense to see if we can at least get hints
                                          > from
                                          > how humans might do it.
                                          >
                                          > I do think universal intelligence engines are possible, although fine
                                          > tuning
                                          > for specific applications might also be helpful. Simple, classical
                                          > conditioning seems to work. I recently saw an article in popular media
                                          > showing that bacteria and parameciums might even learn this way:
                                          >
                                          > http://www.technologyreview.com/biomedicine/21447/
                                          > <http://www.technologyreview.com/biomedicine/21447/>
                                          >
                                          > Matthew
                                          >
                                          > On Sat, Oct 11, 2008 at 2:34 PM, Brian Pitt <bfp@...
                                          > <mailto:bfp%40earthlink.net>> wrote:
                                          >
                                          > > On Friday 10 October 2008 07:21, Robert & Elaine wrote:
                                          > > > My opinion on AI and basically computer intelligence is this. What
                                          > > people are
                                          > > > trying to obtain is a machine that thinks like a human. However, I
                                          > don't
                                          > > think it
                                          > > > can be accomplished following the current path.
                                          > >
                                          > > and if it doesn't think like a human how would a human be able to
                                          > tell it
                                          > > was thinking at all?
                                          > >
                                          > > the way I see it people are optimized for dealing with a physical
                                          > > environment as your learning
                                          > > to walk and figuring out what a pencil does examples show ,while
                                          > computers
                                          > > are optimized
                                          > > for a totally alien environment with only vague and often misleading
                                          > > similaritys to ours
                                          > >
                                          > > an experiment that might level things out would be to have a computer
                                          > > figure out how to use a computer
                                          > > suppose you took an old IBM PC (with ROM Basic :) and loaded it with a
                                          > > minimum operating system
                                          > > just command.com ,debug.com and a few other files and used the ctty
                                          > > command to redirect its
                                          > > console I/O to a serial port ,after it boots you don't get to touch it
                                          > >
                                          > > on the other end of the serial cable your megasuperionXXVII quad
                                          > core has
                                          > > to learn how to use it
                                          > >
                                          > > if it works at all the 'Brain' program would find ways to use the PC
                                          > that
                                          > > no person would
                                          > > ever have come up with
                                          > >
                                          > > Brian
                                          > >
                                          > >
                                          >
                                          >
                                          >
                                        • David Wyland
                                          Message 20 of 26, Oct 12, 2008
                                            I think that magical thinking sometimes invades AI. It is in the form
                                            of the idea that we set up a structure, give it some starting data,
                                            "and then a miracle occurs," to borrow from the R&D cartoon. The
                                            robot/thing "wakes up" and becomes intelligent. "Frankenstein" is
                                            translated to "Short Circuit". As in the R&D cartoon, I think this
                                            step needs more explaining.

                                            Maybe we should consider how getting to our robot is likely to happen.
                                            We start by accepting that we do not *know* what intelligence is or
                                            how to define it. We are no closer to a definition after 50+ years (or
                                            2,000+ years if we go back to Aristotle). So we are left with behavior.

                                            To make our robot/thing work, we will have to define its behavior
                                            and/or how it will learn that behavior. Either way, any
                                            "intelligence" will have to be reduced to specific behavior that can
                                            be evaluated and criticized.

                                            If we choose the learning path, we will have to be quite explicit as to
                                            what it learns and how it learns it. Optimization is good; wishing is
                                            bad. Open-ended schemes for learning "everything" - often borrowed
                                            from then-current biological work - have not worked as hoped in the
                                            past 50+ years. I believe they foster magical thinking. I also believe
                                            magical thinking leads to relative addressing: "We will have AI in 10
                                            years (from whenever you ask)," a mantra for the last 50+ years.

                                            As others [e.g., Pollack] have pointed out, the behavior of AI to date
                                            has resulted from researchers building-in epistemic knowledge of the
                                            world in which the robot moves and works. Indeed, this knowledge is
                                            what makes it work. This knowledge is typically buried in the
                                            structure of the AI planning program that exhibits it. Although the
                                            system seems to exhibit AI, its behavior is actually driven by the
                                            human supplied knowledge embedded in it.

                                            I favor accepting that we are putting knowledge of the world into the
                                            robot to make it work the way we want. That means we are admitting
                                            that we are making a better conventional automatic machine, not an
                                            "intelligent" one. I think accepting this model will help us to make
                                            progress in autonomous robotics.

                                            Dave


                                            --- In SeattleRobotics@yahoogroups.com, "Matthew Tedder"
                                            <matthewct@...> wrote:
                                            >
                                            > I sometimes wonder about that, also. I'd use a virtual machine
                                            because the
                                            > intelligence engine is sure to crash it and wipe it out, numerous times.
                                            >
                                            > The easiest way to conceptualize what many people are thinking of as
                                            "true
                                            > AI" is that it "thinks like a human", but what does that mean? How do
                                            > humans think?
                                            >
                                            > If the goal is essentially practical, trying to make software learn
                                            novel
                                            > environments, theorize about the partial and unseen elements within that
                                            > environment, and solve problem based on this understanding then I think
                                            > human-like thinking (whatever it is) appears to be the only algorithm we
                                            > know works.
                                            >
                                            > What the intelligence engine has to do is: Derive models as
                                            accurately as
                                            > possible from interaction sequence patterns within its environment
                                            and use
                                            > those models to engineer and predict novel new interaction
                                            sequences aimed
                                            > at achieving desired directions and/or goals.
                                            >
                                            > There may be many ways to accomplish this, but it's clearly not easy to
                                            > figure out. It makes obvious sense to see if we can at least get
                                            hints from
                                            > how humans might do it.
                                            >
                                            > I do think universal intelligence engines are possible, although
                                            fine tuning
                                            > for specific applications might also be helpful. Simple, classical
                                            > conditioning seems to work. I recently saw an article in popular media
                                            > showing that bacteria and parameciums might even learn this way:
                                            >
                                            > http://www.technologyreview.com/biomedicine/21447/
                                            >
                                            > Matthew
                                            >
                                            > On Sat, Oct 11, 2008 at 2:34 PM, Brian Pitt <bfp@...> wrote:
                                            >
                                            > > On Friday 10 October 2008 07:21, Robert & Elaine wrote:
                                            > > > My opinion on AI and basically computer intelligence is this. What
                                            > > people are
                                            > > > trying to obtain is a machine that thinks like a human. However,
                                            I don't
                                            > > think it
                                            > > > can be accomplished following the current path.
                                            > >
                                            > > and if it doesn't think like a human how would a human be able to
                                            tell it
                                            > > was thinking at all?
                                            > >
                                            > > the way I see it people are optimized for dealing with a physical
                                            > > environment as your learning
                                            > > to walk and figuring out what a pencil does examples show ,while
                                            computers
                                            > > are optimized
                                            > > for a totally alien environment with only vague and often misleading
                                            > > similaritys to ours
                                            > >
                                            > > an experiment that might level things out would be to have a computer
                                            > > figure out how to use a computer
                                            > > suppose you took an old IBM PC (with ROM Basic :) and loaded it with a
                                            > > minimum operating system
                                            > > just command.com ,debug.com and a few other files and used the ctty
                                            > > command to redirect its
                                            > > console I/O to a serial port ,after it boots you don't get to touch it
                                            > >
                                            > > on the other end of the serial cable your megasuperionXXVII quad
                                            core has
                                            > > to learn how to use it
                                            > >
                                            > > if it works at all the 'Brain' program would find ways to use the
                                            PC that
                                            > > no person would
                                            > > ever have come up with
                                            > >
                                            > > Brian
                                            > >
                                            > >
                                            >
                                            >
                                            > [Non-text portions of this message have been removed]
                                            >
                                          • dlc
                                            Message 21 of 26, Oct 12, 2008
                                              Dave,

                                              I totally agree. AI research is very useful to us because we are
                                              studying the nature of thought in ourselves. Trying to create an "AI"
                                              for a robot is pretty much a waste of time otherwise, I don't want a
                                              self-motivated, self-aware robot, I want an automaton that will take
                                              over those tasks that I don't want to do. I think that the vast
                                              majority of robotics would be successful this way too.

                                              IMO,
                                              DLC


                                              --
                                              -------------------------------------------------
                                              Dennis Clark TTT Enterprises
                                              www.techtoystoday.com
                                              -------------------------------------------------
                                            • Phil Malone
                                              Message 22 of 26, Oct 13, 2008
                                                > AI research is very useful to us because we are
                                                > studying the nature of thought in ourselves.

                                                There are two things I've never understood about AI research.

                                                1) Don't you have to be able to define Human Intelligence before you
                                                can try to create an artificial version? If you sampled 100 humans
                                                that "thought" they were intelligent, do you think that you'd get a
                                                good baseline for emulation?

                                                2) Why are we being so elitist? If we really want to model the human
                                                thought process, and need lots of good examples, shouldn't we be trying
                                                to create "Artificial Stupidity"?


                                                Just in case you hadn't figured it out....
                                                This is my attempt at humor...
                                                Although I HAVE always wondered this...I'm not looking for an answer :)
                                              • dan michaels
                                                Message 23 of 26, Oct 13, 2008
                                                  --- In SeattleRobotics@yahoogroups.com, "Phil Malone"
                                                  <onlinestoreemail@...> wrote:
                                                  >
                                                  > > AI research is very useful to us because we are
                                                  > > studying the nature of thought in ourselves.
                                                  >
                                                  > There's two things I've never understood about AI research.
                                                  >
                                                  > 1) Don't you have to be able to define Human Intelligence before you
                                                  can try to create an artificial version? If you sampled 100 humans
                                                  that "thought" they were intelligent, do you think that you'd get a
                                                  good baseline for emulation?
                                                  >


                                                  If everyone else in the world is supposed to sit on the "hold" button
                                                  while waiting for the "100" to agree on a common answer, then everyone
                                                  else might better find another field of endeavor.

                                                  The alternative is to do the only reasonable thing. Just keep evolving
                                                  the next generation of better and smarter robots by bootstrapping off
                                                  what we've learned from building the previous generation. All
                                                  engineering works incrementally, after all.

                                                  Also, for my $0.02, Gerald Edelman's ideas are the best I've seen so
                                                  far, but he lives in the world of anti-cognitivist approaches, so
                                                  anyone schooled in classical AI will probably disagree.

                                                  http://www.nsi.edu/index.php?page=ii_brain-based-devices_bbd
                                                • Matthew Tedder
                                                  Message 24 of 26, Oct 13, 2008
                                                    I second the thought that you cannot let the lack of a definition stop
                                                    you from trying to make smarter robots.

                                                    And I'd like to add: it's not that there is no definition of
                                                    intelligence, just no universally recognized one. You could take the
                                                    least common denominator approach and consider all the rest optional
                                                    add-ons. Anyway, that's what I do.

                                                    1. What's common about everyone's definition? That's your basic definition
                                                    for intelligence.
                                                    2. Classify on top of this, everything else it requires.

                                                    Sometimes these discussions wind up asking other questions, such as
                                                    "what is consciousness?" I personally don't think anything is,
                                                    objectively, "consciousness". It's not something to discover so much
                                                    as something to define: a decision to make, not a discovery. If you
                                                    treat it as a discovery, you can only end up in circular reasoning.

                                                    I think a person is making useful progress, each time he/she layers one new
                                                    intelligent feature over the previous. For example, the following
                                                    iterations of robot development:

                                                    Version 1.0: robot reacts in hard-wired ways to specific sensory
                                                    stimulus types.
                                                    These are intelligent features--they react to the environment in
                                                    ways that increase the robot's likelihood of doing what it's designed
                                                    to do and avoiding what it's not.

                                                    Version 2.0: robot identifies certain simultaneous, stimulus patterns that
                                                    lead to an unwanted state and what simultaneous, motor patterns in response,
                                                    make that state less likely. (and/or, vice-versa for a wanted state)

                                                    Version 3.0: robot identifies pattern sequences to enhance its
                                                    Version 2.0 capabilities by also recognizing and producing temporal
                                                    patterns. For example: after I saw x, then y, then z, q (which I
                                                    don't want) happened, while x, then y, then t led to p (which I do
                                                    want). Doing w after x followed by y usually leads to t--so do that
                                                    the next time I see x and then y.

Version 4.0: robot identifies abstract pattern segments and swaps similar
segments into pattern sequences where the usual segments required to reach a
wanted state are not available. I.e., if segment A leads to B leads to C (the
state you want), and G leads to H leads to C, and A occurs but you have
neither B nor H, while J shares the characteristics that B and H have in
common, then try using J in their place.

                                                    So your robot develops from something akin to a single-celled organism to
                                                    one capable of abstract reasoning.
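
The layering above is concrete enough to sketch in code. Below is a minimal, illustrative Python example of versions 1.0 and 3.0--a hard-wired reflex table with a learned map from short stimulus sequences to outcomes layered on top. All class and method names are invented for this sketch, not taken from any actual robot codebase.

```python
# Sketch of "version 1.0" (hard-wired reflexes) layered under
# "version 3.0" (learning which stimulus sequences precede wanted
# or unwanted outcomes). All names are hypothetical.

class ReflexBot:
    """Version 1.0: react in hard-wired ways to specific stimuli."""
    REFLEXES = {"obstacle": "turn", "cliff": "stop"}

    def react(self, stimulus):
        # Fall back to a default behavior for unknown stimuli.
        return self.REFLEXES.get(stimulus, "wander")


class SequenceBot(ReflexBot):
    """Version 3.0: track recent stimuli and tally the outcomes
    that followed each short sequence of them."""

    WINDOW = 3  # how many recent stimuli to remember

    def __init__(self):
        self.history = []   # recent stimuli, oldest first
        self.outcomes = {}  # tuple of stimuli -> cumulative reward

    def observe(self, stimulus):
        self.history.append(stimulus)
        self.history = self.history[-self.WINDOW:]

    def record_outcome(self, reward):
        # Associate the current sequence with the outcome just seen.
        key = tuple(self.history)
        self.outcomes[key] = self.outcomes.get(key, 0) + reward

    def prefers(self, sequence):
        # "Do that next time" when the sequence has paid off before.
        return self.outcomes.get(tuple(sequence), 0) > 0


bot = SequenceBot()
for stimulus in ["x", "y", "t"]:
    bot.observe(stimulus)
bot.record_outcome(+1)  # x, then y, then t led to a wanted state

print(bot.prefers(["x", "y", "t"]))  # True: repeat this sequence
print(bot.react("obstacle"))         # "turn": the v1.0 reflex remains
```

Version 4.0 would add a similarity measure over segments so that J can stand in for B or H, but even this small sketch shows how each version builds on the previous one's machinery rather than replacing it.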

                                                    Matthew

                                                    On Mon, Oct 13, 2008 at 1:11 PM, dan michaels <oric_dan@...> wrote:

> --- In SeattleRobotics@yahoogroups.com, "Phil Malone"
> <onlinestoreemail@...> wrote:
                                                    > >
                                                    > > > AI research is very useful to us because we are
                                                    > > > studying the nature of thought in ourselves.
                                                    > >
                                                    > > There's two things I've never understood about AI research.
                                                    > >
                                                    > > 1) Don't you have to be able to define Human Intelligence before you
                                                    > can try to create an artificial version? If you sampled 100 humans
                                                    > that "thought" they were intelligent, do you think that you'd get a
                                                    > good baseline for emulation?
                                                    > >
                                                    >
                                                    > If everyone else in the world is supposed to sit on the "hold" button
                                                    > while waiting for the "100" to agree on a common answer, then everyone
                                                    > else might better find another field of endeavor.
                                                    >
                                                    > The alternative is to do the only reasonable thing. Just keep evolving
                                                    > the next generation of better and smarter robots by bootstrapping off
                                                    > what we've learned from building the previous generation. All
                                                    > engineering works incrementally, after all.
                                                    >
                                                    > Also, for my 0.02, Gerald Edelman's ideas are the best I've seen so
                                                    > far, but he lives in the world of anti-cognitivist approaches, so
                                                    > anyone schooled in classical AI will probably find disagreement.
                                                    >
                                                    > http://www.nsi.edu/index.php?page=ii_brain-based-devices_bbd
                                                    >
                                                    >
                                                    >


                                                  • Brian Pitt
                                                    Message 25 of 26 , Oct 13, 2008
                                                      On Monday 13 October 2008 07:01, Phil Malone wrote:
> 2) Why are we being so elitist?  If we really want to model the human
> thought process, and need lots of good examples, shouldn't we be trying
> to create "Artificial Stupidity"?

                                                      then it would go into politics... :)

I see modeling human intelligence as a great way to get sidetracked; the environment
is just too alien to use anthropomorphic models

                                                      maybe we need a sort of reverse Turing test where you'd have to convince a computer that you
                                                      were another computer

                                                      Brian
                                                    • dan michaels
                                                      Message 26 of 26 , Oct 14, 2008
                                                        --- In SeattleRobotics@yahoogroups.com, "Matthew Tedder"
                                                        <matthewct@...> wrote:
                                                        >

                                                        Also, Hans Moravec ... First-Generation Universal Robots, etc.

                                                        http://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1993/Robot93.html


>
> I second the thought that you cannot let lack of a definition stop
> you from trying to make smarter robots.
>
> And, I'd like to add: it's not that there is no definition for
> intelligence, but just no universally recognized definition. You
> could take the least common denominator approach and consider all the
> rest optional add-ons. Anyway, that's what I do.
>
> 1. What's common about everyone's definition? That's your basic
> definition for intelligence.
> 2. Classify on top of this, everything else it requires.
>