
Re: [hackers-il] Book Review: The Age of Spiritual Machines

  • Orna Agmon
    Message 1 of 6, Nov 7, 2003
      On Sun, 14 Sep 2003, Nadav Har'El wrote:

      > On Sat, Sep 13, 2003, Orna Agmon wrote about "Re: [hackers-il] Book Review: The Age of Spiritual Machines":
      > > > Here's my review of Ray Kurzweil's 1999 book
      > > >
      > > > "The Age of Spiritual Machines -
      > > > When Computers Exceed Human Intelligence"
      > >
      > > > accelerates at its current pace, is that around 2020, a $1000 computer will
      > > > have the computing power of a human brain. Very quickly afterwards the
      > > > computer "intelligence" will surpass that of humans. In the following decades
      > >
      > > Computers are indeed getting faster and gaining more memory, but is this
      > > really comparable to the human mind? Ever since the 60s (at least), people
      > > have been trying to achieve AI, but I do not think the development in this field
      > > can be compared to the physical development of computers.
      > >
      > > Does he say anything about how he thinks this will happen?
      >
      > People who plan to read this book very soon might not want to read on, because
      > I'll be spoiling some of your surprises ;)
      >
      > Yes. Kurzweil concedes that having a computer as strong computationally as
      > a human brain doesn't necessarily mean that we'll have the software as good
      > as the "software" that "runs" on the human brain.
      >
      > His response to this attack is two-pronged:
      >
      > First, he claims that together with the improvement of computing power, we're
      > going to see improvement in other relevant technologies. One of these
      > technologies is scanning human brains using MRI-like technologies with ever-
      > increasing resolution (what he calls "bandwidth"). By 2020 Kurzweil predicts
      > we could destructively scan the entire structure of a (dead) human brain, and
      > by 2050 we will be able to non-destructively scan a live human brain (I'm
      > writing this from memory, so perhaps I got the dates wrong). Scanning the
      > complete structure of a brain will supposedly allow us to replicate exactly
      > the "mind" of a person on a computer, which is one way of making a computer do
      > what a human can do. Such scanning will also allow scientists to better
      > understand how the brain is built, copying "algorithms" from it (like how to
      > do face recognition, how to read, how to understand language, etc.) into
      > artificial neural networks on a computer.
      > One really interesting observation Kurzweil makes is that the "algorithms"
      > in the human brain are much "smaller" than they appear at first glance. He
      > estimates that the DNA which specifies the human brain contains only 10 MB of data.
      > Yes, something 1/10th the size of Open Office ;) How can 10 MB of code specify
      > something like our brain that contains thousands of times more information?
      > The "trick", Kurzweil says, is that a fetus brain starts out with a lot of
      > random neural connections (this, of course, doesn't need any "data" to specify)
      > and appropriate algorithms to build correct connections based on input data
      > the baby gets in the very first years of its life. Similarly, we might
      > theoretically build a computer that has a few-megabyte program and then
      > goes on to listen, see and read, like a child normally would, until it has built
      > the knowledge of an adult human. But how do we write this 10MB program?
      > Understanding the human brain might give us some ideas. Evolution has worked
      > on it for a lot more years than our human programmers can spare ;)
      >
      > Kurzweil's second response to your "attack" is that "AI" is a moving target:
      > Whenever a computer can do something it couldn't do before, we suddenly say
      > this is not "real" intelligence. For example, a computer can now beat the
      > world chess champion and it couldn't do so in the 60s. Did we conclude that
      > computers have become smarter than humans? No, we concluded that playing chess
      > doesn't require intelligence :) Similarly, computers can now read written
      > text (OCR), understand words spoken to them ("continuous speech recognition"),
      > translate texts (with varying degrees of accuracy), create music and paintings
      > of certain complexity, and other stuff they weren't able to do in the 60s.
      > Right, computers still don't pass the "Turing Test". But the Turing Test
      > basically requires a single computer to have all the faculties of the human
      > brain - understanding and producing language, recognizing patterns, memory
      > and knowledge of the world, the concept of "self", emotions, sense of humor,
      > and so on. Would a computer that knows how to do just one of those things,
      > or just a few of them, be "intelligent" or not?
      >
      > Another thing to remember is that in 2020 computers will be (according to
      > Kurzweil's predictions) as strong (computationally) as the human brain,
      > but in 2030 they will be 1,000 times stronger. Given such huge margins, it is
      > conceivable that even lousily designed software we put on these computers
      > will appear to be as intelligent as a human.
      >
      > You might like to read Kurzweil's original arguments, rather than my "broken
      > telephone" (what do you call that in English?) version of them.
      >
      >
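
      A side note on Kurzweil's few-megabyte-program point above: here is a
      minimal, purely illustrative sketch in Python (my own toy, nothing to do
      with real neural development, and the class name and corpus string are
      made up) of how a learner whose source code is tiny can end up holding
      far more information than the code itself, because the knowledge
      accumulates in state learned from input rather than in the program text:

      import collections
      import random

      class TinyLearner:
          """A learner whose code is tiny; its knowledge lives in learned state."""
          def __init__(self):
              # No knowledge yet: every "connection" below is built from input.
              self.counts = collections.defaultdict(collections.Counter)

          def read(self, text):
              # Strengthen the link between each character and its successor.
              for a, b in zip(text, text[1:]):
                  self.counts[a][b] += 1

          def babble(self, start, n=40):
              # Generate text by following the learned links.
              out = [start]
              for _ in range(n):
                  followers = self.counts.get(out[-1])
                  if not followers:
                      break
                  keys = list(followers)
                  weights = [followers[k] for k in keys]
                  out.append(random.choices(keys, weights=weights)[0])
              return "".join(out)

      learner = TinyLearner()
      # The string below is a stand-in for years of listening, seeing and reading.
      learner.read("how much wood would a woodchuck chuck if a woodchuck could chuck wood " * 5)
      print(learner.babble("h"))

      Obviously the hard part, which this toy skips entirely, is writing a
      learning rule good enough to grow an adult mind rather than character
      statistics.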

      I think, out of all those tasks that a computer needs to be able to do in
      order to pass the Turing test, the humor part is the most difficult. Humor
      changes between places and times. I wonder, what defines humor?

      It ranges from sharp verbal wit, such as word games, to slapstick, and we
      recognize it all as humor.

      Can anyone draw a plan as to how to teach a computer to laugh? Say we
      define laugh as print "LOL", and define smile as print ":)". How would a
      computer know when to print any of those, and when to operate an Eliza
      program?
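
      For what it's worth, here is the degenerate version of such a plan, a toy
      sketch in Python just to show where the difficulty hides. The cue phrases
      and the eliza_reply stub are made-up placeholders, not any real model of
      humor; deciding what belongs on that cue list, for a given place and
      time, is exactly the hard part.

      def eliza_reply(utterance):
          # Stand-in for a real Eliza-style program: deflect with a question.
          return "Why do you say: %s?" % utterance

      def react(utterance):
          # Toy rule: a few hard-coded surface cues stand in for recognizing humor.
          cues = ("knock knock", "walks into a bar", "how many programmers")
          score = sum(cue in utterance.lower() for cue in cues)
          if score >= 2:
              print("LOL")                    # "laugh"
          elif score == 1:
              print(":)")                     # "smile"
          else:
              print(eliza_reply(utterance))   # no joke detected, fall back to Eliza

      react("A string walks into a bar and orders a drink...")   # prints ":)"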

      P.S.

      I do actually classify OCR and speech recognition under AI.

      Orna.
    • Muli Ben-Yehuda
      Message 2 of 6, Nov 8, 2003
        On Fri, Nov 07, 2003 at 06:29:06PM +0200, Orna Agmon wrote:

        > I think, out of all those tasks that a computer needs to be able to do in
        > order to pass the Turing test, the humor part is the most difficult. Humor
        > changes between places and times. I wonder, what defines humor?

        There is an excellent science fiction story in which the main question
        is: what makes humans laugh? In order not to give the story away, I'll
        just mention that it ends with the alien / entity that was looking for
        the answer to the above question laughing, and laughing, and
        laughing... Does anyone know which story I'm thinking of?

        > Can anyone draw a plan as to how to teach a computer to laugh? Say we
        > define laugh as print "LOL", and define smile as print ":)". How would a
        > computer know when to print any of those, and when to operate an Eliza
        > program?

        Judging by IRC or AOL, randomly would do just fine ;-)
        --
        Muli Ben-Yehuda
        http://www.mulix.org | http://mulix.livejournal.com/

        "the nucleus of linux oscillates my world" - gccbot@#offtopic
      • Orna Agmon
        Message 3 of 6, Nov 8, 2003
          On Sat, 8 Nov 2003, Muli Ben-Yehuda wrote:

          > On Fri, Nov 07, 2003 at 06:29:06PM +0200, Orna Agmon wrote:
          >
          > > I think, out of all those tasks that a computer needs to be able to do in
          > > order to pass the Turing test, the humor part is the most difficult. Humor
          > > changes between places and times. I wonder, what defines humor?
          >
          > There is an excellent science fiction story in which the main question
          > is: what makes humans laugh? In order not to give the story away, I'll
          > just mention that it ends with the alien / entity that was looking for
          > the answer to the above question laughing, and laughing, and
          > laughing... Does anyone know which story I'm thinking of?

          I was thinking of a story by Asimov, which involves Multivac, but I don't
          think it is this one.



          >
          > > Can anyone draw a plan as to how to teach a computer to laugh? Say we
          > > define laugh as print "LOL", and define smile as print ":)". How would a
          > > computer know when to print any of those, and when to operate an Eliza
          > > program?
          >
          > Judging by IRC or AOL, randomly would do just fine ;-)
          >