
Re: Defending Chomsky!?!?!?

  • jrstern
    Message 1 of 21 , Mar 1, 2005
      --- In ai-philosophy@yahoogroups.com, Marvin Minsky <minsky@m...>
      wrote:
      > >It's Chomsky's poverty of stimulus story that wins or loses here,
      > >and I say it wins, big.
      >
      > That is certainly the best argument I have heard in favor of it.

      ...

      > >As for looking for machinery and making theories, Chomsky's school
      > >of generative transformational grammar has got to count for
      > > something. Maybe something too ideal and distant from empirical
      > >theories, but even so.
      >
      > Why does it "have to" count? It is a partial
      > description of the corpus of sentences, and needs
      > to be patched to account for thousands of
      > exceptions. (And didn't Harris do it first?)

      I didn't say it *works*, I'm just defending him against the charge
      that he was a black-box behaviorist. Besides poverty of stimulus,
      Chomsky drove a wedge between performance and competence that has
      haunted AI ever since. Who is to say a priori what makes a model one
      of performance versus competence? Or do you want to take issue with
      the distinction? Strangely, I might want to take issue with it, but
      if I do, I'm going to address it very, very carefully.

      > >Chomsky's role in the history and philosophy of twentieth century
      > >science is one of the key movers, however intentionally or
      > >accidentally, away from behaviorism and towards mechanism, innate
      > >or otherwise.
      >
      > I agree that he had a huge influence. He almost
      > singlehandedly retarded semantics for several
      > decades.

      Well now, I wonder.

      ... it's late, and I've just deleted my fourth long answer here.

      What I wonder is if we really missed anything, after all.

      J.
    • Eray Ozkural
      Message 2 of 21 , Mar 1, 2005
        --- In ai-philosophy@yahoogroups.com, Marvin Minsky <minsky@m...> wrote:
        > >Holy Catfish, Batman!
        > >
        > >It's Chomsky's poverty of stimulus story that wins or loses here, and
        > >I say it wins, big.
        >
        > That is certainly the best argument I have heard in favor of it.

        I've seen a lot of people who just started singing. There is poverty
        of stimulus, nobody really told them what voices to make and what
        voices not to make, they were not even told what melodies and styles
        were good, which should mean that there is a Universal Singer in their
        brains, probably evolved from bird-singing-centers.

        More seriously, the problem with "poverty of stimulus" might be that
        we are not yet able to quantify how much linguistic information an
        infant processes during its early development, terabytes, petabytes?
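
        For a sense of the scale involved, here is a back-of-envelope
        sketch; every figure in it is an assumption picked for
        illustration, not a measurement:

```python
# Toy estimate: how much lexical information might a child hear by age 3?
# All numbers below are assumptions for illustration only.

words_per_day = 15_000   # assumed child-directed plus overheard speech
days = 3 * 365           # first three years
bits_per_word = 12       # ~log2 of a few-thousand-word vocabulary,
                         # ignoring redundancy between words

total_words = words_per_day * days
total_bytes = total_words * bits_per_word / 8

print(f"{total_words:,} words heard")                    # 16,425,000
print(f"~{total_bytes / 1e6:.0f} MB of lexical choice")  # ~25 MB
```

        On these assumptions the raw input is megabytes rather than
        terabytes, which only sharpens the question of what the learner
        brings to it.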

        Regards,

        --
        Eray Ozkural
      • Eray Ozkural
        Message 3 of 21 , Mar 1, 2005
          --- In ai-philosophy@yahoogroups.com, Fred Mailhot <fred.mailhot@v...>
          wrote:
          > 1) Children need exposure to particular syntactic
          > constructions/sentence forms in order to learn about them.
          > 2) Some forms are conspicuously absent from the input children
          > typically receive.
          > 3) Children *never* make mistakes on these kinds of forms
          > 4) Therefore, there must be some syntactic properties that aren't
          > learned from the input, i.e. are innate.

          Step 2 is weak, as you say.

          However, I find the logical conclusion just as troublesome,
          because it seems to assume a behaviorist theory of learning: that
          forms are learnt only through exact mimicry, or only as
          input/output pairs, while in reality human learning has at least a
          sophisticated inductive learning mechanism which can deal with
          errors and missing information and creatively explore the
          environment to fill in what is missing.

          The problem as I see it is that we haven't been able to quantify
          how much linguistic information is present in a child's first few
          years. Only then would we be able to make a poverty of stimulus
          argument.

          The problem is, it might be just as difficult to explain how humans
          learn anything from so few trials and so few bits of information. Let
          me try to tell you a personal anecdote. I read my first computer book
          when I was 7 years old. It talked about a BASIC program that printed
          patterns of '*'s on its first page. I understood that instantly, I
          felt as if I knew that already, and I could imagine programs that did
          other things. Does that mean I have a Universal Programmer in my head?

          I am very serious. Programmers can figure out a system while they are
          exposed to only a few details, and by the way the programming
          languages are *languages*. Do you seriously mean that something
          awfully unnatural like machine language should have been inscribed in
          my genes so that I was able to program with it when I was 15? I think
          that's kind of nonsense. Then, perhaps you want to say that learning a
          programming language is an entirely different business than learning a
          natural language, which I don't find myself in agreement with.

          Likewise for mathematics or any other endeavor that requires
          sophisticated encoding in the environment, like painting, driving,
          using a phone, knitting, *writing*, playing an instrument, composing,
          etc. (You can find better examples)

          While I'm prepared to accept the existence of many ways that make it
          especially easy to learn language, and I can also gladly accept that
          there are universal computers that use universal languages, I find it
          hard to believe that such a thing would have the form of natural language.

          Also, if you will excuse me, I will ask you how that universal grammar
          is specified in so few genes in the human genome. I wonder how many
          bits there are to fit that in. Why do our NLP toolkits have
          non-trivial code sizes then?

          Regards,

          --
          Eray Ozkural
        • Fred Mailhot
          Message 4 of 21 , Mar 1, 2005
            Eray Ozkural wrote:

            >
            >The step 2 is weak as you say.
            >
            >However, I see the logical conclusion just as troublesome. Because
            >that seems to assume a behaviorist theory of learning, in that it
            >seems to assume that forms are learnt only through exact mimicry, or
            >learning input/output pairs only, while in reality human learning has
            >at least a sophisticated inductive learning mechanism which can deal
            >with errors/missing information and creatively explore the environment
            >to fill in missing information.
            >
            >
            No, it actually doesn't assume behaviorist learning...I fail to
            see how you can learn something that you've never been exposed
            to. And it seems pretty unlikely that "induction" is an adequate
            answer, because induction carries with it the risk of making a
            mistake (potentially one from which you can't recover, in
            fact)...and like I said (more relevantly, as the literature
            shows), kids simply don't make the HUGE number of mistakes one
            would expect of them if they were inductive learners.

            >The problem as I see it is that we haven't been able to quantify how
            >much linguistic information is present in the few first years of a
            >child. Only then we would be able to make a poverty of stimulus argument.
            >
            >
            Well, there are actually corpora (in particular the CHILDES
            corpus) that document pretty damned well the kind of input that
            kids get from their environment in the first 2 or 3 years of
            their lives...

            >The problem is, it might be just as difficult to explain how humans
            >learn anything from so few trials and so few bits of information.
            >
            This is true, and a valid point, because it's clear that in some
            cases people DO successfully use some kind of inductive
            processing...how we do that is something that needs studying.

            > Let
            >me try to tell you a personal anecdote. I read my first computer book
            >when I was 7 years old. It talked about a BASIC program that printed
            >patterns of '*'s in the first page. I understood that instantly, I
            >felt as if I knew that already, and I could imagine programs that did
            >other things. Does that mean I have a Universal Programmer in my head?
            >
            >
            I know that this is not a serious example, Eray...any more than
            that moronic Universal Driver argument (which, incidentally, is
            NOT Neil Rickert's...I'm pretty sure Hilary Putnam came up with
            it first). There's clearly prior knowledge involved in both
            cases.

            >I am very serious. Programmers can figure out a system while they are
            >exposed to only a few details, and by the way the programming
            >languages are *languages*.
            >
            Once again, there's obviously a tonne of prior knowledge
            here...and the syntax of programming languages is nearly trivial
            compared to human languages.

            >Do you seriously mean that something
            >awfully unnatural like machine language should have been inscribed in
            >my genes so that I was able to program with it when I was 15? I think
            >that's kind of nonsense. Then, perhaps you want to say that learning a
            >programming language is an entirely different business than learning a
            >natural language, which I don't find myself in agreement with.
            >
            >
            Yes, of course that's exactly what I want to say...

            >Likewise for mathematics or any other endeavor that requires
            >sophisticated encoding in the environment, like painting, driving,
            >using a phone, knitting, *writing*, playing an instrument, composing,
            >etc. (You can find better examples)
            >
            >While I'm prepared to accept the existence of many ways that make it
            >especially easy to learn language, and I can also gladly accept that
            >there are universal computers that use universal languages, I find it
            >hard to believe that such a thing would have the form of natural language.
            >
            >
            I never made this claim, so I'm not too clear what it is you're saying
            here...

            >Also, if you will excuse me, I will ask you how that universal grammar
            >is specified in so few genes in the human genome. I wonder how many
            >bits there are to fit that in. Why do our NLP toolkits have
            >non-trivial code sizes then?
            >
            >
            For a smart guy, that's an equally dumb thing to say,
            Eray...Consider that the human genome is only 30000-40000 genes
            long...how could it possibly encode for all the billions of
            neurons in our brain, not to mention the connectivity pattern,
            and all of the billions of other cells in our bodies? Clearly
            there's some kind of mechanism at work here...and a generative
            grammar is a perfect way to get a potential infinitude of
            sentences, and a lot of variety of structure, out of a
            relatively limited set of basic rules for putting things
            together. Obviously, whatever mechanisms generative linguists
            posit will have to be something that brain science can
            eventually meet up with (and whatever brain scientists
            eventually discover about the brain will have to meet up with
            what linguists say is minimally necessary for a system like
            human language)...so whatever mechanism it is that enables 40000
            genes to encode for a brain makes it pretty trivial to encode
            for Universal Grammar.
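
            To make the "limited rules, unbounded output" point concrete,
            here is a throwaway generative grammar, entirely invented for
            illustration: a dozen rules already yield over a hundred
            distinct sentences at a small recursion depth, and the
            recursive NP rule makes the full set infinite.

```python
import itertools

# A toy context-free grammar, invented for illustration: a handful of
# rules yields combinatorially many sentences, and the recursive NP rule
# makes the set unbounded as the depth limit grows.
grammar = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],   # recursive rule
    "VP": [["V", "NP"], ["V"]],
    "N":  [["cat"], ["dog"], ["linguist"]],
    "V":  [["saw"], ["chased"], ["slept"]],
}

def generate(symbol, depth):
    """Yield all word sequences derivable from `symbol` within `depth` expansions."""
    if symbol not in grammar:          # terminal word
        yield [symbol]
        return
    if depth == 0:
        return
    for production in grammar[symbol]:
        # expand each symbol of the production, then take the cross product
        expansions = [list(generate(s, depth - 1)) for s in production]
        for combo in itertools.product(*expansions):
            yield [w for part in combo for w in part]

sentences = {" ".join(s) for s in generate("S", 4)}
print(len(sentences))                                     # 144 distinct sentences
print("the cat that slept chased the dog" in sentences)   # True
```

            Raising the depth bound grows the set without bound, all from
            the same few rules.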


            Thanks for the feedback,

            Fred.
          • Fred Mailhot
            Message 5 of 21 , Mar 1, 2005
              Forgot to mention...

              The point I made in my other msg about the length of the
              genome and the complexity of the things it encodes for is made
              very well by Gary Marcus in his most recent book __The Birth
              of the Mind__. I saw him give a fantastic talk about this
              stuff.


              Cheers,

              Fred.
            • Eray Ozkural
              Message 6 of 21 , Mar 1, 2005
                --- In ai-philosophy@yahoogroups.com, "jrstern" <jrstern@y...> wrote:
                >
                > > I agree that he had a huge influence. He almost
                > > singlehandedly retarded semantics for several
                > > decades.
                >
                > Well now, I wonder.
                >
                > ... it's late, and I've just deleted my fourth long answer here.
                >
                > What I wonder is if we really missed anything, after all.

                There is something that has been missed. Until Montague's extensive
                paper on the semantics of quantifiers in natural language, the logical
                semantics efforts were suppressed by Chomsky's critiques.

                As you know, I'm not a defender of logical semantics. I take
                logical semantics to be an inaccurate reading of Frege. I
                don't think it's right to interpret Frege as saying that the
                objective meaning of a sentence consists of referential
                semantics. (At least that's how I read it, anyway.)

                So, indeed, a proper theory of semantics, I believe, cannot
                be divorced from sense and other properties of language such
                as force, role, etc. In particular, my observation is that
                we have to take these into account to explain any linguistic
                expression longer than one sentence. I have developed some
                of my own arguments to show how there are 'feedback loops'
                between components all the way from morphology to
                pragmatics.

                That is why I don't think it is right to assume a priori
                that some simple syntactic transformations can account for
                semantic analysis. In particular, I believe there is always
                a need for theories that properly explain the
                compositionality of meaning. And for that purpose categorial
                grammars were an excellent tool, but they were hindered,
                somehow.

                So, if more non-syntactic work on semantics, including work
                in logical semantics, had been encouraged, we could be
                further ahead now, I think.

                This is not to say there is a guarantee, but I think a variety of
                theories is always preferable to a dearth of theories from an
                evolutionary point of view.

                The categorial grammar formalism tells us that we should
                look for a syntax->semantics homomorphism, a proper
                mathematical mapping. While this might not exist, taking the
                mathematical route is a good way to start understanding
                Frege's principle of compositionality. You do all that can
                be done about it, and then move on to the
                not-so-compositional constructs in the language.
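
                The homomorphism idea can be sketched in a few lines: give
                each word a meaning, let the single syntactic operation
                (applying a functor category to its argument) correspond to
                the single semantic operation of function application, and
                the meaning of a sentence falls out of its derivation. The
                tiny model and lexicon below are invented for illustration.

```python
# A minimal compositional-semantics sketch in the categorial-grammar
# spirit: word meanings are functions, and the only combination rule is
# function application, mirroring the syntactic derivation.

domain  = {"rex", "fido", "felix"}   # invented toy model
dogs    = {"rex", "fido"}
barkers = {"rex"}

lexicon = {
    # determiners: take a noun predicate, return a function over VP predicates
    "every": lambda p: lambda q: all(q(x) for x in domain if p(x)),
    "some":  lambda p: lambda q: any(q(x) for x in domain if p(x)),
    # nouns and verbs: predicates over individuals in the domain
    "dog":   lambda x: x in dogs,
    "barks": lambda x: x in barkers,
}

def interpret(tree):
    """Meaning of a binary derivation (functor, argument) by application alone."""
    if isinstance(tree, str):
        return lexicon[tree]                      # lexical lookup
    functor, argument = tree
    return interpret(functor)(interpret(argument))

print(interpret((("every", "dog"), "barks")))   # False: fido doesn't bark
print(interpret((("some", "dog"), "barks")))    # True: rex does
```

                The point is structural: one syntactic operation, one
                semantic operation, so the mapping is a homomorphism on
                derivations.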

                So, currently you can find CC tools on the web that generate logical
                semantics directly from text. That's an improvement that wouldn't be
                possible if you subscribed to the view that nothing above syntax exists.

                My problem with a syntax-centric view is this: syntax is
                trivial compared to semantics. That is why you can build a
                syntax analyzer for most of English grammar (a severely
                restricted version, of course), but when it comes to natural
                language understanding it's a horror story. That's what
                computational linguistics has shown us over and over again.
                Try implementing head-phrase grammar formalisms and see if
                they work out the semantics. There is more trouble when it
                comes to languages that are not like English, for instance
                Turkish, which is very close to free word order in small
                sentences. I can swap words all I like, and everybody is
                going to understand what I want to say. Language, it seems,
                can tolerate an extreme amount of noise. Hmmm. Enter signal
                processing.

                And of course, needless to say, syntax does seem to be a compression
                of ideas... Which means that the syntax is merely an encoding of
                messages that are themselves "not there" directly. Enter information
                theory.
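
                The redundancy this points to is easy to exhibit: even a
                crude unigram count shows English letters carrying well
                under the log2(26) ~ 4.7 bits a uniform code would spend,
                and with longer contexts English is usually estimated at
                roughly 1 to 1.5 bits per character. The sample text below
                is just an invented sentence for the measurement.

```python
import math
from collections import Counter

# Crude unigram entropy of a short invented sample; real estimates use
# large corpora and longer contexts, where the figure drops much further.
text = ("language it seems can tolerate an extreme amount of noise "
        "because syntax is a redundant encoding of the message")
counts = Counter(c for c in text if c.isalpha())
total = sum(counts.values())
entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())

print(f"unigram entropy: {entropy:.2f} bits/char "
      f"(uniform over 26 letters: {math.log2(26):.2f})")
```

                The gap between those two numbers is exactly the slack that
                lets a listener recover a scrambled or noisy sentence.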

                When we read a sentence, I think it's pointless to deny that it "opens
                up" a world of ideas, a small virtual world of its own. But that's
                exactly what a behaviorist would deny, that there is anything beyond
                "verbal behavior", so to a behaviorist all that happens is the simple
                manipulation of public symbols. This world of ideas is the context
                created by a discourse. What is a context?

                The trouble is the same trouble with early AI research. Chomsky never
                considered anything further than toy sentences. I cannot
                even calculate how many valid theories you can construct for
                simple enough sentences; there are probably countless such
                theories that don't generalize well or have good enough
                accuracy.

                On the other hand, I think many failed to "connect" the "cognitive
                universals" with "language". When you are reading a sentence, you
                perceive. You predict. You imagine. So, I think it's also pointless to
                deny that generic mechanisms are acting in this modality, as they also
                act in harder modalities like vision or common sense reasoning.

                This is the criticism part. Now, here is the solution part.
                I think it's better to leave human natural language aside
                for fundamental AI research. Instead, we should construct
                artificial
                systems that are in dire need of communication, and then we should try
                to find ways of teaching these machines how to communicate. Naturally,
                things like theorems and bounds follow. Algorithms follow. Systems,
                architectures, follow. The complexity of these systems can be
                *controlled*, they don't have to be pointless toy examples. They can
                perform substantial jobs. That way we can get rid of all the cultural
                baggage that clouds our vision. Also we don't have to deal with the
                actual complexity of natural language that spans a wide range from
                sociology to anthropology, which Chomsky's theory neglects.

                On the machine learning front, my idea is to apply more
                general-purpose cognitive mechanisms to text learning tasks.
                In particular, I plan to apply OpenMind to this end, to
                which I had contributed some data. The "naturality" of the
                input database is very attractive for evaluating whether
                such data can be useful at all.

                Regards,

                --
                Eray
              • Eray Ozkural
                Message 7 of 21 , Mar 1, 2005
                  --- In ai-philosophy@yahoogroups.com, Fred Mailhot <fred.mailhot@v...>
                  wrote:
                  > Forgot to mention...
                  >
                  > The point I made in my other msg about the length of the genome and the
                  > complexity of the things it encodes
                  > for is made very well by Gary Marcus in his most recent
                  > book __The Birth of the Mind__. I saw him give
                  > a fantastic talk about this stuff.

                  In fact, I found it a bad argument. It is not easy to
                  explain, but let me say that you can't compress things
                  indefinitely, and the thing generative linguists are
                  looking for has nothing to do with the human development
                  process.

                  He's got the wrong numbers to start with. What he should
                  be interested in is the number of genes by which we differ
                  from the chimps. How many of those very few genes are
                  devoted to UG? And if the UG is so short, why is it any
                  more difficult to learn than the myriad of other difficult
                  things we learn in our infancy? This shows a lack of
                  information-theoretic analysis on the part of the UG
                  argument.

                  Regards,

                  --
                  Eray Ozkural
                • Paul Bramscher
                  Message 8 of 21 , Mar 1, 2005
                    Fred Mailhot wrote:

                    > Eray Ozkural wrote:
                    >
                    > >I happened to agree with Rossin, so I forwarded the message. I think
                    > >Marvin is right that Chomsky's innateness turned out to be behaviorist.
                    > >
                    > >
                    > Actually, on this point Dr. Minsky is most definitely
                    > wrong, and winds up sounding like he hasn't actually
                    > read anything Chomsky's written. Chomsky has, from the
                    > start, explicitly asserted that a mentalist stance is
                    > the correct one to take, with respect to developing a
                    > descriptive and explanatory theory of human language.
                    > I use the word "mentalist" here with reservation,
                    > because somebody will surely jump in and accuse
                    > Chomsky of being some kind of dualist, and that's not
                    > right, either.
                    >
                    > For ppl who are actually interested in finding out
                    > what he thinks, I highly recommend two recent essays,
                    > "Language from an Internalist Perspective" and
                    > "Language as a Natural Object"...both of which can be
                    > found in the book __New Horizons in the Study of
                    > Language and Mind__ ...they're both quite short.
                    >
                    > Moreover, Chomsky has made amply clear his belief that
                    > looking at the *physical* processes by which humans
                    > "produce" language (i.e. moving their mouths,
                    > vibrating their vocal cords, etc. -- the kinds of
                    > things a behaviorist would be interested in) cannot
                    > reveal anything useful about the properties of human
                    > language.
                    >
                    > >From the POV of cognitive science, I think such grand claims require
                    > >empirical evidence which does not seem to be present in his arguments.
                    > >
                    > >
                    > A lot of C's arguments are logical, and he presents
                    > them as such. Empirical evidence pertaining to the
                    > premises of his arguments is nearly always available
                    > through other linguists' research.
                    >
                    > >A particular argument I definitely disagree for the "innateness
                    > >assumption" is that computational learning is difficult, so all of
                    > >that hard stuff must have been achieved by evolution. Basically all of
                    > >P&P framework seems to be built on this brilliant idea.
                    > >
                    > I think this is a bit of a misconstrual of the
                    > Argument from the Poverty of the Stimulus (APS), since
                    > Chomsky never actually says anything about
                    > "computational learning being hard"...once again, this
                    > is presented as a logical argument, and any of the
                    > premises are open to question.
                    >
                    > 1) Children need exposure to particular syntactic constructions/sentence
                    > forms in order to learn
                    > about them.
                    > 2) Some forms are conspicuously absent from the input children typically
                    > receive.
                    > 3) Children *never* make mistakes on these kinds of forms
                    > 4) Therefore, there must be some syntactic properties that aren't
                    > learned from the input, i.e. are innate.

                    Could you elaborate on #2 and #3? After reading a strain
                    of Eastern philosophy and certain thinkers like Aldous
                    Huxley, I looked at my first son and watched his
                    development in wonderment, only half-jokingly telling my
                    wife that he had innate "baby wisdom". Having seen the
                    crazy and illogical things that children (and many
                    adults) do, I now dismiss this theory. What exactly is
                    this infallible linguistic ability that children seem to
                    have?

                    Certainly there are some things we seem to be hardcoded with (how to
                    blink, swallow, suckle, cry, to keep our heart beating, to breathe).
                    Language seems a much higher-order thing, does not come at birth, and is
                    not needed at birth. Keep in mind, also, that there's been an
                    evolutionary tug of war between skull size and women's hips. Too large
                    a brain at birth, and the odds of difficulty during labor increase.
                    We're born on a certain scale of economics. Since
                    language isn't something we can start practicing at
                    birth, it seems to come, like adult permanent teeth,
                    once something else has developed enough: the jaw in
                    that case, additional brain development in this one.

                    Though, I imagine, this doesn't directly present a
                    challenge to Chomsky. One might retort that yes,
                    (advanced) language/communication
                    may not be available at birth, but that -- indeed -- it becomes
                    available when the physical piping has sufficiently developed.

                    Paul Bramscher
                  • Fred Mailhot
                    Message 9 of 21 , Mar 1, 2005
                      Paul Bramscher wrote:

                      >
                      >Could you elaborate on #2 and #3?
                      >
                      I'm just running out, so I'll comment more later on today...

                      >Though, I imagine, this doesn't directly present a challenge to
                      >Chomsky. One might retort that yes, (advanced) language/communication
                      >may not be available at birth, but that -- indeed -- it becomes
                      >available when the physical piping has sufficiently developed.
                      >
                      >Paul Bramscher
                      >
                      >
                      There are many generative linguists who argue that exactly this kind of
                      maturational process is at work in the development of increasingly
                      sophisticated linguistic structures. Of course, this is in perfect
                      accord with a nativist perspective, the same way someone might say that
                      the changes associated with puberty are innate, i.e. genetically-specified.


                      Cheers,

                      Fred.
                    • Paul Bramscher
                      Message 10 of 21 , Mar 1, 2005
                        Fred Mailhot wrote:

                        > Paul Bramscher wrote:
                        >
                        > >
                        > >Could you elaborate on #2 and #3?
                        > >
                        > I'm just running out, so I'll comment more later on today...
                        >
                        > >Though, I imagine, this doesn't directly present a challenge to
                        > >Chomsky. One might retort that yes, (advanced) language/communication
                        > >may not be available at birth, but that -- indeed -- it becomes
                        > >available when the physical piping has sufficiently developed.
                        > >
                        > >Paul Bramscher
                        > >
                        > >
                        > There are many generative linguists who argue that exactly this kind of
                        > maturational process is at work in the development of increasingly
                        > sophisticated linguistic structures. Of course, this is in perfect
                        > accord with a nativist perspective, the same way someone might say that
                        > the changes associated with puberty are innate, i.e.
                        > genetically-specified.

                        The missing theory seems to be how we link learned behavior to physical
                        changes. There's been enough research done since Sapir-Whorf on
                        language to suggest that there are some significant differences among
                        cultures with regard to describing numbers, time, etc. And across
                        divergent geographical areas, some cultures need a much richer
                        vocabulary for certain terminology (nautical, snow, desert, etc.)
                        than others which may not need it at all.

                        In computers, at least, this suggests a physical state or structural
                        change. That is, outside influences, the storage of memes and
                        behavioral codes of conduct, even the ability to process certain
                        abstractions, all would seem to directly shape the structures of the
                        brain. Perhaps the brain is like a tree or shrub, subject to a
                        topiarist's skill. It has an innate ability to grow, but the (physical)
                        structure is directly malleable.

                        It seems that many theories present a polemic one way or the other (the
                        old nature vs. nurture).

                        Paul F. Bramscher
                      • Jim Whitescarver
                        Message 11 of 21 , Mar 1, 2005
                          We cannot really blame Chomsky for the misapplication of
                          transformational grammars to the semantics of thought. I do not believe he
                          has ever advocated such usage, in that he has denied any relation between
                          grammars and mechanisms of intelligence. He does not think the mind is
                          infinite, he just does not think we understand it very well yet, and
                          denies that any existing theory is even close to being correct.

                          My own view is that Marvin's "Society of Mind" is "close enough", and an
                          extremely simple starting point for thinking about, and modeling, how
                          the mind works. The network of communicating "agents" exhibiting
                          universal computing is sufficient. It is unbiased as to the nature of the
                          particular universal language or any school of AI. Other models that
                          can be shown equivalent to this are also viable. Employing semantic
                          linguistics cannot be excluded and may be more comprehensible than other
                          alternatives, but Marvin's point is well taken that transformational
                          grammars are insufficient, as is consistent with Chomsky's own view, as
                          they do not exhibit universal computing.
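                          To make the distinction concrete, here is a minimal sketch of what
                          a generative grammar is: a set of rewrite rules that describes a
                          set of sentences without saying anything about how a speaker
                          produces or understands them. The grammar and vocabulary below are
                          invented for illustration, not drawn from Chomsky's work.

```python
import random

# Toy context-free grammar; symbols and words are invented for illustration.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["dog"], ["cat"]],
    "V":  [["chased"], ["saw"]],
}

def generate(symbol="S"):
    """Rewrite a symbol into words by randomly choosing among its rules.

    The grammar only *describes* the sentence set; nothing here models
    the mechanism by which a mind would produce or parse a sentence.
    """
    if symbol not in GRAMMAR:  # terminal: an actual word
        return [symbol]
    rule = random.choice(GRAMMAR[symbol])
    words = []
    for part in rule:
        words.extend(generate(part))
    return words

print(" ".join(generate()))  # e.g. "the dog chased the cat"
```

                          Every output has the shape "the N V the N"; patching rules like
                          these to cover the exceptions of real English is the bottomless
                          job Marvin alludes to.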

                          It is harder to forgive Chomsky's Platonic application of algorithmic
                          information theory. But his contribution to language theory and
                          information theory is significant whether you agree with his views or not.

                          I can't believe I am supporting him for a change. But his influence on
                          me is undeniable. Even his politics, which oppose mine on the surface,
                          expose him as a great humanist, not really that far from my own
                          libertarian internationalist views. His criticisms of government, to
                          me, are inconsistent with his expectations of government. He ought to
                          express his minimalist-government views more plainly as being distinct
                          from socialism, as logic would demand, if that indeed is his intended
                          meaning. It is impractical to be a libertarian socialist. His voice
                          ought to summon humanity, not government. But that is for another forum.

                          Jim

                          Marvin Minsky wrote:

                          >>--- In ai-philosophy@yahoogroups.com, Marvin Minsky <minsky@m...>
                          >>wrote:
                          >>
                          >>
                          >>
                          >>> So, of course, as everyone knows, we have lots of innate
                          >>> machinery, but Chomsky looked in the wrong place for it, because
                          >>> he was really a diehard behaviorist who was principally concerned
                          >>> with describing the sentences that people produced without trying
                          >>> to make theories of how they were produced or understood.
                          >>>
                          >>> Many people still maintain that he was basically opposed to the
                          >>> theories of B. F. Skinner; in fact he was, if possible, even more
                          >>> opposed to trying to make theories about internal mental
                          >>> activities.
                          >>>
                          >>>
                          >>Holy Catfish, Batman!
                          >>
                          >>It's Chomsky's poverty of stimulus story that wins or loses here, and
                          >>I say it wins, big.
                          >>
                          >>
                          >
                          >That is certainly the best argument I have heard in favor of it.
                          >
                          >
                          >
                          >>As for looking for machinery and making theories, Chomsky's school of
                          >>generative transformational grammar has got to count for something.
                          >>Maybe something too ideal and distant from empirical theories, but
                          >>even so.
                          >>
                          >>
                          >
                          >Why does it "have to" count. It is a partial
                          >description of the corpus of sentences, and needs
                          >to be patched to account for thousands of
                          >exceptions. (And didn't Harris do it first?)
                          >
                          >
                          >
                          >>Chomsky's role in the history and philosophy of twentieth century
                          >>science is one of the key movers, however intentionally or
                          >>accidentally, away from behaviorism and towards mechanism, innate or
                          >>otherwise.
                          >>
                          >>
                          >
                          >I agree that he had a huge influence. He almost
                          >singlehandedly retarded semantics for several
                          >decades.
                          >
                          >
                          >
                        • jrstern
                          Message 12 of 21 , Mar 1, 2005
                            --- In ai-philosophy@yahoogroups.com, "Eray Ozkural" <erayo@c...>
                            wrote:
                            > I've seen a lot of people who just started singing. There is poverty
                            > of stimulus, nobody really told them what voices to make and what
                            > voices not to make, they were not even told what melodies and styles
                            > were good, which should mean that there is a Universal Singer in
                            > their brains, probably evolved from bird-singing-centers.

                            Chomskian theories have been applied to how birds learn to sing as
                            well.

                            > More seriously, the problem with "poverty of stimulus" might be that
                            > we are not yet able to quantify how much linguistic information an
                            > infant processes during its early development, terabytes, petabytes?

                            There are petabytes of literature on this, starting with Chomsky
                            himself.

                            J.
                          • ray scanlon
                            Message 13 of 21 , Mar 2, 2005
                              In the view of the physiologist, Chomsky in debate, the virtuoso
                              musician in concert, and the ballerina on stage, are exactly the
                              same: these are instances of a central nervous system producing
                              motor acts. The ballerina, for instance, is an example of
                              locomotor behavior. Somatomotor neuron pools in the spinal cord
                              are excited by a locomotor pattern generator in the spinal cord,
                              that is excited by a locomotor pattern initiator in the midbrain
                              locomotor region, that is, in turn, excited by a locomotor
                              pattern controller in the hypothalamic locomotor region.

                              When a debater makes noises with his mouth, he exercises a motor
                              program generator in his parvicellular nucleus in his
                              dorsolateral hindbrain.

                              That both these motor programs pass through a neural pathway en
                              route to the motor cortex, and that the neurons of this pathway
                              are subject to modification by the experiences of the organism,
                              are of interest, but how are they related to the practice of
                              artificial intelligence?

                              We should not be bemused by the multitude of neurons involved.
                              They are all neurons. Each neuron is a bag of some billion
                              protein molecules, selected from possibly ten thousand
                              kinds. Here lies the story of a nervous system.

                              I say that artificial intelligence should be concerned with the
                              motor program generators and the nuclei of the brain that affect
                              the motor programs on their way out.

                              This is reductionism. Anti-reductionism is alive and well. It is
                              happy with innate grammars.

                              Thoughtfully,

                              Ray
                            • Eray Ozkural
                              Message 14 of 21 , Mar 2, 2005
                                --- In ai-philosophy@yahoogroups.com, Fred Mailhot <fred.mailhot@v...>
                                wrote:
                                > No, it actually doesn't assume behaviorist learning...I fail to see how
                                > you can
                                > learn something that you've never been exposed to. And it seems pretty
                                > unlikely
                                > that "induction" is an adequate answer, because induction carries with
                                > it the risk
                                > of making a mistake (potentially one from which you can't recover, in
                                > fact)...and
                                > like I said (more relevantly, as the literature shows), kids simply
                                > don't make a
                                > HUGE amount of mistakes that one would expect them to if they were
                                > inductive learners.

                                This is strange, but then you mean to say that an inductive learner
                                always makes a huge amount of mistakes, which runs counter to the idea
                                of induction.

                                I know it's not easy to see this, but have you thought *why* Occam's
                                razor works at all?

                                > >The problem as I see it is that we haven't been able to quantify how
                                > >much linguistic information is present in the first few years of a
                                > >child. Only then would we be able to make a poverty of stimulus
                                > >argument.
                                > >
                                > >
                                > Well, there are actually corpora (in particular the CHILDES corpus)
                                > that document pretty damned well the kind of input that kids get
                                > from their environment in the first 2 or 3 years of their lives...

                                I wonder if that input includes anything other than text. I remember
                                having heard of such a corpus.

                                A small google search shows that even somewhat straightforward
                                statistical learning methods can extract a high degree of proficiency
                                in syntax from a *subset* of CHILDES corpus:

                                http://kybele.psych.cornell.edu/~edelman/nips03-draft.pdf

                                This subset had 300,000 sentences; is that poverty of stimulus?
                                Edelman's model seems to achieve the performance of a 9th grader,
                                which is kind of impressive, considering that the corpus is of small
                                children!

                                Note that the method used is a far cry from the universal problem
                                solvers (Levin) and universal inductive inference procedures (Solomonoff).
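                                The flavor of such statistical learning can be caricatured in a
                                few lines. The corpus below is invented, not CHILDES data, and
                                the real model is far richer, but the point survives: from raw
                                word-to-word co-occurrence statistics alone, a learner comes to
                                accept novel sentences it was never shown.

```python
from collections import defaultdict

# Invented child-directed "corpus"; illustrative only, not CHILDES data.
corpus = [
    "the dog sees the ball",
    "the cat wants the dog",
    "the dog wants the ball",
]

# Record which words have been observed to follow which (bigrams).
follows = defaultdict(set)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a].add(b)

def plausible(sentence):
    """Accept a sentence iff every adjacent word pair was seen in training."""
    words = sentence.split()
    return all(b in follows[a] for a, b in zip(words, words[1:]))

print(plausible("the cat wants the ball"))  # True, though never seen whole
print(plausible("ball the sees dog"))       # False
```

                                Whether 300,000 sentences plus statistics of this general kind
                                suffice, or whether innate structure is still required, is
                                precisely the poverty-of-stimulus question.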

                                What do you think?

                                Regards,

                                --
                                Eray Ozkural