Yes, ideas set out like Moravec's are, like good SF, inspirational, but they are only ideas, not route maps.

----- Original Message -----
From: oric_dan
Sent: Wednesday, December 02, 2009 10:01 PM
Subject: [SeattleRobotics] Re: The Wozniak Test
--- In SeattleRobotics@yahoogroups.com, "David Buckley" <david@...> wrote:
> I met Hans briefly in 1996 at the Robot Olympics in Glasgow and he seemed a really nice, quiet guy, but what has he been doing for the last 20 years or so? It's easy to say this is how it can be done; the comic books are full of robots like that. The only people who are going to find out how to build intelligent robots are the people building robots, and even then most won't get there.
Personally, I think GOFAI hit the proverbial wall about 25 years ago, but guys like Minsky and Moravec had made their names and stopped innovating. Minsky is still sticking to his old guns [cf. flintlocks], and pooh-poohing the stuff that Brooks and Pfeifer and others are doing, i.e., so-called Nouveau AI, embodied robotics, and anything involving emergence and self-organization - basically anything that isn't overtly symbolic GOFAI. His ideas of how the brain works in Society of Mind haven't been updated since he had them back in 1972. He seems to think Cyc is a great tool to bootstrap off of [from his comments made in the past on the yahoo/ai-philosophy forum].
Nevertheless, as I told Dave.W, I still think Moravec's original idea of a series of universal robots is useful for helping to determine the "next step" for robotics.
> ----- Original Message -----
> From: dcwjobs2004
> To: SeattleRobotics@yahoogroups.com
> Sent: Wednesday, December 02, 2009 7:47 AM
> Subject: [SeattleRobotics] Re: The Wozniak Test
> Or not. He points out the problem in the third paragraph, referring to the robots of the 1970's (e.g. Shakey) versus the logic programs like theorem proving:
> "What a shock! While pure reasoning programs did their jobs about as well and about as fast as college freshmen, the best robot control programs took hours to find and pick up a few blocks on a table. Often these robots failed completely, giving a performance much worse than a six month old child. This disparity between programs that reason and programs that perceive and act in the real world holds to this day. "
> The rest of the article proceeds to ignore this fact, keeping the AI faith in the belief that the problem will be solved by more computing power.
> It is an extrapolation from a flawed analogy. Computing power (CPU speed and memory size) is not a measure of how well you can simulate intelligence in the same sense that horsepower is not a measure of how well you have simulated a horse.
> Further on, the article implies that we will get intelligent robots when our computers are powerful enough to simulate insect, animal and eventually human nervous systems. This presumes that computational power is what is holding us back. This is not really credible at this point [2009 vs 1991]. We have a LOT of compute power available, but we do not know how to simulate intelligence using it. The fact that we do not understand intelligence is shown by the parallel lack of a simple, obvious, widely accepted way of measuring it.
> The article tries to map robotic progress onto the AI paradigm, but it fails to directly do so. It makes good points about developing better sensors and algorithms that use the sensors to get useful tasks done. This looks like continued progress in conventional automation.
> The problem with the AI paradigm was discovered in the 1970's as Hans noted. It has not gone away. Futuristic scenarios where robots magically "wake up" to intelligence when some critical threshold of compute power, etc. is passed all remind me of the R&D cartoon where a blackboard full of equations contains a statement in the middle, "... and then a miracle occurs!" Like the reviewing scientist, I think more explanation is required at that step.
> Dave Wyland
> --- In SeattleRobotics@yahoogroups.com, "oric_dan" <oric_dan@> wrote:
> > --- In SeattleRobotics@yahoogroups.com, "dcwjobs2004" <dcwyland@> wrote:
> > >
> > > Ah, but if you have no direction, how do you define the "next step"? And whether it is a step forward, backward or sideways?
> > >
> > > Dave Wyland
> > >
> > Hans Moravec already did that 20 years ago.
> > http://www.frc.ri.cmu.edu/~hpm/project.archive/robot.papers/1991/Universal.Robot.910618.html
> > [Moravec88] Hans Moravec, Mind Children: The Future of Robot and Human Intelligence, Harvard University Press, Cambridge, Massachusetts, 1988.
> > http://www.frc.ri.cmu.edu/~hpm/book97/index.html
> > http://www.frc.ri.cmu.edu/~hpm/book97/ch4/index.html
oric_dan said: Wednesday, December 02, 2009 12:12 PM
> The netbook becomes the high-level brains, plus useful
> for wifi comms, of course.

Hi Dan,
I really don't think USB to micros is the way to go. I think CAN
is a much better approach. A webpage, or spreadsheet, of
features would probably be a good thing to have done. I'm not
sure I can do it, but here's a first blush. I would like a
network that is multidrop, peer-to-peer, and available on small
micros.
             CAN  Ser  SPI  I2C  USB  Ethernet
Small Sys     x    x    x    x
Multidrop     x    x         x          x
PeerToPeer    x                         x
CSMA/CA/CD    x                         x
Noise Immun   x                         x
Video                              x    x
So CAN is favorable for everything important except video (also,
not all small micros have it, but SPI peripherals can
compensate). Ethernet is suitable for everything except the
reasonably small micros we'd really like to use. USB misses on
almost all desirable traits except video (but that solves the
webcam issue). It requires a host (master-slave) arrangement, it
is not suitable for small systems (particularly the hosting),
and it takes hardware and software at every micro to talk. It is
not multidrop, so every node has to have a hardware channel to
the host or a hub.
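The CSMA/CA row above refers to CAN's non-destructive bitwise arbitration: when nodes transmit simultaneously, a dominant bit (0) overrides a recessive bit (1) on the wired-AND bus, so the lowest message ID always wins and no frame is corrupted. Here is a minimal sketch of that mechanism (the function name and IDs are illustrative, not any real CAN API):

```python
# Sketch of CAN-style non-destructive arbitration (CSMA/CA), assuming the
# usual convention: 0 is the "dominant" bus level, 1 is "recessive", so
# lower message IDs win arbitration.

def arbitrate(ids, width=11):
    """Return the message ID that wins a simultaneous transmission attempt."""
    contenders = list(ids)
    for bit in reversed(range(width)):           # MSB first, like CAN
        levels = [(i >> bit) & 1 for i in contenders]
        bus = min(levels)                        # wired-AND: any 0 pulls the bus dominant
        # Nodes that sent recessive (1) but read dominant (0) back off silently;
        # their frame is simply retried later, unlike a CSMA/CD collision.
        contenders = [i for i, lvl in zip(contenders, levels) if lvl == bus]
    return contenders[0]

print(hex(arbitrate([0x300, 0x0A5, 0x1F0])))  # 0xa5: lowest ID wins
```

The losers detect the mismatch between what they sent and what the bus shows, drop out for that frame, and retry, which is why arbitration wastes no bandwidth even under contention.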
(My experience with this was seeing it in action on the Smart
Car CAN bus, where many controller units offered up their
reports on the bus, and anything in the car that needed that
information could just grab it when it came by. For instance,
both the throttle and the transmission could watch where the
operator put the gear selection and the pedal. The front panel
could read the engine water temperature and know if it should
set a warning. Nobody asked for data. It was always there,
updated at the necessary rate, depending on expected latency.)
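The Smart Car pattern above, where producers broadcast ID-tagged frames and any interested node filters for the IDs it wants, can be sketched as a tiny in-memory bus. All of the names here (Bus, Frame, the message IDs) are made up for illustration, not a real CAN library:

```python
# Sketch of the broadcast/filter pattern: nobody requests data; producers
# put ID-tagged frames on the bus and consumers grab what they care about.
from collections import namedtuple

Frame = namedtuple("Frame", "msg_id data")

GEAR_ID, PEDAL_ID, WATER_TEMP_ID = 0x10, 0x11, 0x20   # made-up IDs

class Bus:
    def __init__(self):
        self.listeners = []                  # (accepted_ids, callback) pairs

    def attach(self, accepted_ids, callback):
        self.listeners.append((set(accepted_ids), callback))

    def broadcast(self, frame):
        # Every node sees every frame; each keeps only the IDs it filters for.
        for ids, callback in self.listeners:
            if frame.msg_id in ids:
                callback(frame)

bus = Bus()
seen = []
# Both the throttle and the transmission watch the same gear/pedal frames.
bus.attach({GEAR_ID, PEDAL_ID}, lambda f: seen.append(("throttle", f.data)))
bus.attach({GEAR_ID}, lambda f: seen.append(("transmission", f.data)))
bus.attach({WATER_TEMP_ID}, lambda f: seen.append(("panel", f.data)))

bus.broadcast(Frame(GEAR_ID, "D"))
bus.broadcast(Frame(WATER_TEMP_ID, 92))
print(seen)  # [('throttle', 'D'), ('transmission', 'D'), ('panel', 92)]
```

On a real CAN bus the filtering happens in the controller hardware by acceptance masks, so a small micro never even wakes up for frames it doesn't care about.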
So I'd think your best solution is, from your netbook, to use
one USB port for the webcam, and one USB port as a bridge to CAN
or something like it.