
Re: [Artificial Intelligence Group] - Turing

  • Unmitigated Gall
    Message 1 of 25, Feb 12 9:24 AM
      --- In artificialintelligencegroup@yahoogroups.com, André Clements
      <aclements@i...> wrote:
      > if you're trying to program the responses - sure, you're lost, but
      > if you designed a learning system there is hope; even we humans
      > have to learn how to deal with slang, and are at a loss when
      > confronted with unfamiliar slang, not to mention an unknown language.


      Well, my point is that Turing was an idiot. Convincing a human that
      it is talking to another human is practically pointless. There are
      more important things we could be teaching machines than Ebonics.

      It's like people talking about adding emotions to machines. Even if
      you could, it would probably be dangerous. The great thing about
      them is their predictability. Make them unreliable and you could
      cause serious trouble. I mean, what kind of moron wants to program
      PMS into a machine, other than an idiot or a masochist?

      You can make machines smarter, but you should make them emulate our
      strengths, not our weaknesses.

      I suppose AI is about fuzzy logic in a sense. Creativity, learning,
      spontaneity. But is it really wise to make machines model human
      neuroses?

      > -----Original Message-----
      > From: Unmitigated Gall [mailto:Spammastergrand@a...]
      > Sent: 12 February 2005 10:40
      > To: artificialintelligencegroup@yahoogroups.com
      > Subject: Re: [Artificial Intelligence Group] - Turing
      >
      >
      >
      > --- In artificialintelligencegroup@yahoogroups.com, ARASH ARASH
      > <maziar_teh59@y...> wrote:
      > >
      > > hi all.
      > >
      > > i am from iran and new to your group.
      > >
      > > i am a software engineer, but my english is not so good.
      > >
      > > i have many questions about the philosophy of mind and its
      > > conflict with some problems, like "all nonmaterial philosophy
      > > believes that the mind process is not a material process".
      > >
      > > but some experiments - like the Chinese room or the Turing test -
      > > all say that mental processes can be implemented in the future.
      > >
      > > and i think that the semantic part is special to humans only,
      > > and not machines.
      > >
      > > please reply and respond to my questions.
      > >
      > > sincerely.
      > >
      > > bye
      >
      >
      > The Turing test was a bit absurd to start with. You can luck out
      > and fool someone with basic greetings with a program. But someone
      > trying to determine if he was talking to a machine could guess
      > pretty quickly if he asked in an unusual way.
      >
      > With all the possible topics of conversation and forms of speech,
      > no one could program that much information or understanding into a
      > computer. Even if the computer did a web search for topics, it
      > would not know how to address questions it wasn't prepared for.
      >
      > How is Motorola looking these days on the street?
      >
      > "Motorola is a tech company involved in processors, wireless and
      > other electronic and communications equipment."
      >
      > "But is the smart money on it?"
      >
      > "Smart is another word for intelligent. Money is a form of
      > currency."
      >
      > "But would you bet the farm on it? Or do you like Biotech?"
      >
      > "Betting is illegal in most states. A farm is a place where
      > agricultural produce is grown. Biotech is an industry in which
      > technology is applied to biology and medicine."
      >
      > "What do you think of the new Green Day? And how about them
      > Mets?"
      >
      >
      > A computer cannot possibly be programmed to respond to slang and
      > obscure cultural references, even with the internet as a vast
      > library of resources.
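
      A minimal sketch of the kind of literal keyword-lookup responder
      parodied in that exchange (Python; the dictionary entries come from
      the dialogue above, everything else - names, structure - is assumed):

          # Toy keyword-definition responder. It answers by looking up
          # literal definitions, so idioms like "bet the farm" produce
          # exactly the absurd replies in the dialogue above.
          DEFINITIONS = {
              "motorola": "Motorola is a tech company involved in processors, "
                          "wireless and other communications equipment.",
              "smart": "Smart is another word for intelligent.",
              "money": "Money is a form of currency.",
              "betting": "Betting is illegal in most states.",
              "farm": "A farm is a place where agricultural produce is grown.",
              "biotech": "Biotech is an industry in which technology is "
                         "applied to biology and medicine.",
          }

          def respond(utterance: str) -> str:
              """Reply with the literal definition of every known keyword."""
              words = utterance.lower().replace("?", " ").replace(".", " ").split()
              hits = [DEFINITIONS[w] for w in words if w in DEFINITIONS]
              # No keyword matched: the program has no idea what was said.
              return " ".join(hits) if hits else "I don't know."

          print(respond("But would you bet the farm on it? Or do you like Biotech?"))
          # -> the literal definitions of "farm" and "biotech"; the idiom is lost.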
      >
    • Unmitigated Gall
      Message 2 of 25, Feb 12 9:48 AM
        --- In artificialintelligencegroup@yahoogroups.com, Andre Clements
        <aclements@i...> wrote:
        > thinking about nlp and symbolic modelling techniques, the challenge
        > is to develop a system that does three important things - on the
        > one hand (1) learning and expanding knowledge of and by meaning
        > modelling, constantly (2) exploring, establishing and evaluating
        > relations within the topography of meaning, while (3) at the same
        > time abstracting the meaning in a way that allows functional
        > reduction of data - so it looks like the real challenge isn't so
        > much cognition as meta-cognition. Does this make any sense!? What
        > do the experienced A.I.ers think?
        >
        > A

        Modelling is interesting. I think understanding the limits and
        benefits of human vs machine skills is important. If we could make
        realistic models of human organs with cancer and the effects of many
        chemical substances, the computer could predict the effect of those
        substances. Machines' only advantage over us is speed. They have no
        logic or reasoning of their own. We have to build models and accept
        that they are merely sophisticated calculators, never going much
        beyond number crunching.

        We evolved. And we are only as good as the obstacles we overcame.
        Creating artificial life that evolves involves making an environment
        as complex as the life interacting with it.

        Creating artificial intelligence has the enormous barrier of our not
        even knowing how our own minds work. Synapses, neurons, chemicals.
        Adrenaline, oxytocin, norepinephrine. DNA, RNA.

        It's like asking a caveman to reproduce a computer. Our own thoughts
        are so enigmatic viewed from a materialistic or empirical point of
        view that it will be very hard to digitize information we don't
        really understand in the first place.

        Maybe neural networks and some of the life sciences will advance
        computer science by using computers to model what is there and
        figuring it out once it's in a digital form.

        But lacking awareness, a machine is still a machine. Lacking
        willpower and desires, it can do nothing but what it is told to,
        other than a little programmed random behavior.

        I was thinking of interests. Take ten subjects. Ten programming
        models. Give them interests randomly, as well as abilities numbered
        randomly, like those games where you assign a player 10 magic, 12
        speed, 5 strength, 8 intelligence.

        Then let the computer models grab 8 lines of information from boxes
        titled sociology, psychology, earth science, whatever.

        You have them 'mate', and the offspring take a combination using 80%
        of their 'parents'' 'genes', with the other 20% randomly assigning
        interest or ability in other fields.

        You're evolving knowledge bases, personalities of a sort. But you're
        not telling it anything we don't know - just having it memorize a
        few bits of trivia about earth science. It may evolve in a sense,
        but is it really gaining positive mutations?
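
        That thought experiment is essentially a toy genetic algorithm. A
        minimal sketch (Python; the ten agents, the stat-sheet fields and
        the 80/20 split come from the description above, while the fitness
        function and all names are assumptions):

            import random

            FIELDS = ["sociology", "psychology", "earth science", "magic",
                      "speed", "strength", "intelligence"]

            def random_agent():
                # Roll a character sheet: a random score per interest/ability,
                # like "10 magic, 12 speed, 5 strength, 8 intelligence".
                return {field: random.randint(1, 20) for field in FIELDS}

            def mate(parent_a, parent_b):
                # Offspring keep roughly 80% of their parents' "genes" and get
                # the remaining 20% assigned at random (the mutation step).
                child = {}
                for field in FIELDS:
                    if random.random() < 0.8:
                        child[field] = random.choice([parent_a[field], parent_b[field]])
                    else:
                        child[field] = random.randint(1, 20)
                return child

            def evolve(generations=50, population_size=10):
                population = [random_agent() for _ in range(population_size)]
                for _ in range(generations):
                    # Assumed fitness: total score. Which is the objection made
                    # above - without a richer environment, this "evolution"
                    # only shuffles trivia around.
                    population.sort(key=lambda a: sum(a.values()), reverse=True)
                    survivors = population[:population_size // 2]
                    offspring = [mate(*random.sample(survivors, 2))
                                 for _ in range(population_size - len(survivors))]
                    population = survivors + offspring
                return population[0]

            print(evolve())  # the fittest "personality" after 50 generations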

        And can it be taught to understand anything at all if it is not
        aware, conscious, alive? No.

        Programs can't understand. Understanding is part of consciousness.
        And it might be some divine-intervention type of thing where we
        simply cannot make machines able to understand, because it is a
        facet of real life and of the metaphysical concept of awareness.

        One of the greatest proofs of some kind of god is that it seems
        impossible to manufacture awareness.

      • Valdinei Freire da Silva
        Message 3 of 25, Feb 12 5:45 PM
          > Well, my point is that Turing was an idiot. Convincing a human
          > that it is talking to another human is practically pointless.
          > There are more important things we could be teaching machines
          > than Ebonics.

          I do agree that Turing's Test is not the best test for AI, but
          saying that Turing was an idiot is too much.
          Think about the philosophical implications of a machine doing well
          in such a test:
          1 - What is consciousness?
          It's very hard to define, but somehow everybody knows the meaning,
          at least theirselves.
          2 - Why does everybody have consciousness?
          Because I have it, and since other humans are like me, I'm sure
          they also have it. Few people ask themselves whether an ant has
          consciousness, but many more ask themselves whether an ape or a dog
          has consciousness, only because the latter are more similar to
          humans.
          3 - What does it mean if a machine does well in Turing's Test?
          First we would have to deny the arguments used when answering
          question 2. Besides that, if a machine survives Turing's Test, you
          can be sure it will show lots of signs of intelligence, for
          instance learning. Turing only tried to design a more objective
          test, which I think is a great beginning, for we still cannot
          define what intelligence is.

          > It's like people talking about adding emotions to machines. Even
          > if you could, it would probably be dangerous. The great thing
          > about them is their predictability. Make them unreliable and you
          > could cause serious trouble. I mean, what kind of moron wants to
          > program PMS into a machine, other than an idiot or a masochist?
          What does PMS mean?
          The predictability required of a machine concerns the final result
          (reaching an aim, or maximizing some quantity), not the way it gets
          there. Once we have a good architecture for autonomous agents
          (machines), all we have to worry about is how to define the final
          result objectively, not what the machines will do along the way;
          unpredictability is even desirable, otherwise we wouldn't need
          intelligent and autonomous machines.
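
          A minimal sketch of that idea (Python; the payoff function and all
          names are assumptions): the designer specifies only the objective,
          and the route the agent takes to it is left unpredictable.

              import random

              def objective(x):
                  # A hypothetical payoff to maximize: peaks at x = 3.
                  return -(x - 3.0) ** 2 + 9.0

              def autonomous_search(steps=10_000):
                  # Random search: we state *what* to maximize, never *how*.
                  # The sequence of candidates tried is unpredictable.
                  best_x, best_val = 0.0, objective(0.0)
                  for _ in range(steps):
                      candidate = best_x + random.uniform(-1.0, 1.0)
                      value = objective(candidate)
                      if value > best_val:
                          best_x, best_val = candidate, value
                  return best_x, best_val

              print(autonomous_search())  # ends near x = 3, by an unplanned route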

          > You can make machines smarter, but you should make them emulate
          > our strengths, not our weaknesses.
          Maybe what you call human "weaknesses" are only hardcoded programs
          to avoid bad plans, or even to overcome rational "slowness". So
          fear, pain, happiness, sociability, etc. could be considered
          general plans, while rationality would help with specialized plans.

          > I suppose AI is about fuzzy logic in a sense. Creativity,
          > learning, spontaneity. But is it really wise to make machines
          > model human neuroses?

          This means that you believe the Turing test would imply
          consciousness, because that is the only way machines could have
          human neuroses.

          Well, these are my thoughts.

          Valdinei
        • Unmitigated Gall
          Message 4 of 25, Feb 13 1:58 AM
            --- In artificialintelligencegroup@yahoogroups.com, "Valdinei Freire
            da Silva" <valdinei.silva@p...> wrote:
            > > Well, my point is that Turing was an idiot. Convincing a human
            > > that it is talking to another human is practically pointless.
            > > There are more important things we could be teaching machines
            > > than Ebonics.
            >
            > I do agree that Turing's Test is not the best test for AI, but
            > saying that Turing was an idiot is too much.

            Okay, so I tend to exaggerate.

            > Think about the philosophical implications of a machine doing
            > well in such a test:
            > 1 - What is consciousness?
            > It's very hard to define, but somehow everybody knows the
            > meaning, at least theirselves.

            Is theirselves a real word?

            > 2 - Why does everybody have consciousness?


            Except vegetables. And they could arguably be in a coma. Living in
            dreams.

            > Because I have it, and since other humans are like me, I'm sure
            > they also have it. Few people ask themselves whether an ant has
            > consciousness, but many more ask themselves whether an ape or a
            > dog has consciousness, only because the latter are more similar
            > to humans.

            Ants obviously do. You would have to get smaller.

            I saw a show on reproduction once. It showed sperm cells. When a
            woman had sex with another male, the sperm cells already there
            from the first broke off into different tasks. Some went for the
            egg. Others formed a wall to stop invaders, and still others
            stayed behind to attack the invading sperm carrying another's
            DNA.

            What struck me is how purposefully they moved around. Like they
            could see. Amazing. It's as if they were conscious, aware, which
            begs the question: are all cells like this? Is our brain and body
            composed of billions of separate consciousnesses?

            What does this imply? That the one is an illusion? I am many? The
            many come together to make me feel like one consciousness?


            > 3 - What does it mean if a machine does well in Turing's Test?

            That someone wasn't asking the right questions. Or the program
            got lucky and anticipated basic greetings.

            Just by looking for "?" at the end of an interrogator's question,
            a program could respond "I don't know" - unless it was something
            it did know, like its name, the time, etc.

            If the interrogator ended a statement with a period, the program
            could say:

            "Yup." Or "I know what you mean."

            > First we would have to deny the arguments used when answering
            > question 2. Besides that, if a machine survives Turing's Test,
            > you can be sure it will show lots of signs of intelligence, for
            > instance learning. Turing only tried to design a more objective
            > test, which I think is a great beginning, for we still cannot
            > define what intelligence is.

            A modern programmer would look at the test as absurd. Other than
            basic greetings and very casual introductions like "Who are you?"
            or "What's your name?", a programmer could never anticipate the
            infinite number of questions, phrases, and bits of jargon an
            interrogator could throw at it. Furthermore, we don't know how to
            make a computer grasp anything. As I've said before,
            understanding is a facet of awareness and consciousness.

            > > It's like people talking about adding emotions to machines.
            > > Even if you could, it would probably be dangerous. The great
            > > thing about them is their predictability. Make them
            > > unreliable and you could cause serious trouble. I mean, what
            > > kind of moron wants to program PMS into a machine, other than
            > > an idiot or a masochist?
            >
            > What does PMS mean?

            Premenstrual syndrome. Just a joke.

            > The predictability required of a machine concerns the final
            > result (reaching an aim, or maximizing some quantity), not the
            > way it gets there. Once we have a good architecture for
            > autonomous agents (machines), all we have to worry about is how
            > to define the final result objectively, not what the machines
            > will do along the way; unpredictability is even desirable,
            > otherwise we wouldn't need intelligent and autonomous machines.

            That's true in the sense that a learning machine, one that could
            transcend being a slave, would need autonomy and
            unpredictability. But from a coder's point of view it is
            difficult. The idea of programming is to give a machine
            instructions. To tell it what to do.

            > > You can make machines smarter, but you should make them
            > > emulate our strengths, not our weaknesses.
            >
            > Maybe what you call human "weaknesses" are only hardcoded
            > programs to avoid bad plans, or even to overcome rational
            > "slowness". So fear, pain, happiness, sociability, etc. could
            > be considered general plans, while rationality would help with
            > specialized plans.

            You can't make a program feel, because it is not aware or
            conscious. It is dead. A program is like an algebraic statement,
            proof, or formula. How do you make an algebraic formula that is
            aware of its surroundings? That can feel, or think, or
            understand? That is what programmers are dealing with. It is like
            asking an author's characters to answer questions. They are not
            actually alive. Giving instructions to a machine is like pushing
            its buttons. But it doesn't know it has buttons, because it is no
            more aware than a pencil. It is only a tool. We can try to
            emulate life. But we cannot make code aware of its surroundings.

            We can emulate sight in a sense. Or hearing. But a computer can
            neither see nor hear. It can only take input.

            > > I suppose AI is about fuzzy logic in a sense. Creativity,
            > > learning, spontaneity. But is it really wise to make machines
            > > model human neuroses?
            >
            > This means that you believe the Turing test would imply
            > consciousness, because that is the only way machines could have
            > human neuroses.
            >
            > Well, these are my thoughts.
            >
            > Valdinei

            No, I don't think Turing's test would imply consciousness so much
            as it would require consciousness to grasp the meaning of any
            word in the English language.

            Hey, they have things like computer psychologists. Interesting.
            And in a very real sense, something that achieves Turing's goal.
            But only in a very narrow, predictable way. You could fulfill his
            criteria if the field and interaction were narrow enough. But
            there is no machine in the world that I couldn't determine was a
            machine in under 10 questions.

            You simply ask it very human things. Not predictable things like
            what's your name, what time is it, how far is Saturn from
            Jupiter.

            You would ask,

            "Do you think David Gilmore is a cool guitarist?"

            "Isn't Terry Hatcher a hottie"

            You would ask it idioms. Questions without literal translation.
            I've done a lot of writing. Fiction. Studied speech, jargon,
            lingo, slang.

            I would guess 20% of our statements would be illogical if
            translated literally.

            Go to another usergroup. And really examine phrases:

            "My bio test was a motherfucker."

            Can a biology test have incestuous relationships?

            A lot of human speech defies literal translation.

            "I aced that test. I was all over it."

            "You rock. But I have to grab some shut eye."

            "No shit."

            "Shit."

            "Okay, later dude."