
Re: Is there a word for this?

  • Gary Shannon
    Message 1 of 29, Jan 27, 2013
      I agree for conlang purposes. Link grammars are kind of fun to play
      with, though. They do have some problems, especially using
      conjunctions. A sentence like "He stole the tarts and ran away." can't
      be parsed with their link grammar because "ran" doesn't have a subject
      that can be linked without crossing lines. So they have to make a
      special "cheat" pass to resolve those kinds of problems. That makes it
      less than elegant as far as I'm concerned.
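To make the "crossing lines" constraint concrete: Link Grammar draws links as arcs above the sentence and forbids arcs that interleave. Below is a minimal sketch of that test; the index pairs are illustrative, not the actual Link Grammar parse of the example sentence.

```python
def crosses(link1, link2):
    """Arcs (a, b) and (c, d) cross when exactly one endpoint
    of each lies strictly inside the other."""
    a, b = sorted(link1)
    c, d = sorted(link2)
    return a < c < b < d or c < a < d < b

print(crosses((0, 2), (1, 3)))  # True  -- interleaved arcs cross
print(crosses((0, 3), (1, 2)))  # False -- nested arcs are allowed
print(crosses((0, 1), (1, 2)))  # False -- arcs may share an endpoint
```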

      --gary

      On Sun, Jan 27, 2013 at 1:59 PM, Jeff Sheets <sheets.jeff@...> wrote:
      > I have never seen Link Grammars before. I've generally approached natural
      > and constructed languages from the linguistics side of things. It's
      > definitely an interesting way of going about parsing, and it definitely has
      > a more computer science-y feel to it than I'm used to (in languages). Seems
      > to work quite well, but I'm inherently wary of anything which doesn't
      > explicitly state the rules of grammar separately from the lexicon. I'm
      > biased, I suppose, but I'd prefer the grammar stand separate for my own
      > conlanging. My reasoning is simple: linguists are fairly certain that
      > grammar and lexicon are separate in the brain. Also, if the grammar is
      > separate, the number of grammatical rules will be minimized, leaving only
      > context clues in the lexicon.
    • Leonardo Castro
      Message 2 of 29, Jan 28, 2013
        How old is the "link grammar" concept? My conlang works this way but I
        wasn't aware this already existed as a concept.

        Até mais!

        Leonardo


        2013/1/27 Gary Shannon <fiziwig@...>:
        > I agree for conlang purposes. Link grammars are kind of fun to play
        > with, though. They do have some problems, especially using
        > conjunctions. A sentence like "He stole the tarts and ran away." can't
        > be parsed with their link grammar because "ran" doesn't have a subject
        > that can be linked without crossing lines. So they have to make a
        > special "cheat" pass to resolve those kinds of problems. That makes it
        > less than elegant as far as I'm concerned.
        >
        > --gary
        >
        > On Sun, Jan 27, 2013 at 1:59 PM, Jeff Sheets <sheets.jeff@...> wrote:
        >> I have never seen Link Grammars before. I've generally approached natural
        >> and constructed languages from the linguistics side of things. It's
        >> definitely an interesting way of going about parsing, and it definitely has
        >> a more computer science-y feel to it than I'm used to (in languages). Seems
        >> to work quite well, but I'm inherently wary of anything which doesn't
        >> explicitly state the rules of grammar separately from the lexicon. I'm
        >> biased, I suppose, but I'd prefer the grammar stand separate for my own
        >> conlanging. My reasoning is simple: linguists are fairly certain that
        >> grammar and lexicon are separate in the brain. Also, if the grammar is
        >> separate, the number of grammatical rules will be minimized, leaving only
        >> context clues in the lexicon.
      • And Rosta
        Message 3 of 29, Jan 28, 2013
          On Jan 28, 2013 10:19 AM, "Leonardo Castro" <leolucas1980@...> wrote:
          >
          > How old is the "link grammar" concept? My conlang works this way but I
          > wasn't aware this already existed as a concept.

          I first encountered Link Grammar in the 90s. It makes use of Dependency
          Grammar, whose origins go back to the Middle Ages (see a study by Michael
          Covington on this), and Lexicalism, which dates back to a famous work by
          Chomsky from the early 70s whose title my ageing brain is not recalling for
          me. (Ah, I remember now: Remarks on Nominalization.)

          > 2013/1/27 Gary Shannon <fiziwig@...>:
          > > I agree for conlang purposes. Link grammars are kind of fun to play
          > > with, though. They do have some problems, especially using
          > > conjunctions. A sentence like "He stole the tarts and ran away." can't
          > > be parsed with their link grammar because "ran" doesn't have a subject
          > > that can be linked without crossing lines. So they have to make a
          > > special "cheat" pass to resolve those kinds of problems. That makes it
          > > less than elegant as far as I'm concerned.

          I don't know that that's a problem with the Link Grammar approach per se.
          Rather, the problem is with a parser that builds structure consisting
          only of nodes each uniquely expressed by a phonological word. If "he" is
          the subject of a word whose complement is "stole the tarts and ran away",
          the crossing links go away.
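The restructuring can be checked mechanically. In the sketch below (my own illustration; taking "and" as the head of the coordination is an assumption, not the Link Grammar dictionary's analysis), the subject attaches once to the coordinated phrase and no two arcs interleave.

```python
# Words: 0 He  1 stole  2 the  3 tarts  4 and  5 ran  6 away
links = [
    (0, 4),  # He    -> subject of the coordinated phrase, headed by "and"
    (1, 4),  # stole -> first conjunct
    (4, 5),  # ran   -> second conjunct
    (1, 3),  # tarts -> object of "stole"
    (2, 3),  # the   -> determiner of "tarts"
    (5, 6),  # away  -> particle of "ran"
]

def crosses(l1, l2):
    # Arcs cross when exactly one endpoint of each lies strictly inside the other.
    a, b = sorted(l1)
    c, d = sorted(l2)
    return a < c < b < d or c < a < d < b

planar = not any(
    crosses(links[i], links[j])
    for i in range(len(links))
    for j in range(i + 1, len(links))
)
print(planar)  # True
```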

          > > On Sun, Jan 27, 2013 at 1:59 PM, Jeff Sheets <sheets.jeff@...> wrote:
          > >> I have never seen Link Grammars before. I've generally approached natural
          > >> and constructed languages from the linguistics side of things. It's
          > >> definitely an interesting way of going about parsing, and it definitely has
          > >> a more computer science-y feel to it than I'm used to (in languages). Seems
          > >> to work quite well, but I'm inherently wary of anything which doesn't
          > >> explicitly state the rules of grammar separately from the lexicon. I'm
          > >> biased, I suppose, but I'd prefer the grammar stand separate for my own
          > >> conlanging. My reasoning is simple: linguists are fairly certain that
          > >> grammar and lexicon are separate in the brain. Also, if the grammar is
          > >> separate, the number of grammatical rules will be minimized, leaving only
          > >> context clues in the lexicon.

          Even if we accept your reasoning, this doesn't entail a rejection of
          lexicalism, because the part of the lexical entry specifying valency
          (subcategorization) might still be located in the grammar zone of the brain.

          It's not at all true that linguists reject lexicalism. Indeed, most
          cognitive linguists probably accept it; I'm thinking particularly of
          usage-based theories.

          --And.
        • Jeff Sheets
          Message 4 of 29, Jan 28, 2013
            On Mon, Jan 28, 2013 at 10:21 AM, And Rosta <and.rosta@...> wrote:

            > > > On Sun, Jan 27, 2013 at 1:59 PM, Jeff Sheets <sheets.jeff@...> wrote:
            > > >> I have never seen Link Grammars before. I've generally approached natural
            > > >> and constructed languages from the linguistics side of things. It's
            > > >> definitely an interesting way of going about parsing, and it definitely has
            > > >> a more computer science-y feel to it than I'm used to (in languages). Seems
            > > >> to work quite well, but I'm inherently wary of anything which doesn't
            > > >> explicitly state the rules of grammar separately from the lexicon. I'm
            > > >> biased, I suppose, but I'd prefer the grammar stand separate for my own
            > > >> conlanging. My reasoning is simple: linguists are fairly certain that
            > > >> grammar and lexicon are separate in the brain. Also, if the grammar is
            > > >> separate, the number of grammatical rules will be minimized, leaving only
            > > >> context clues in the lexicon.
            >
            > Even if we accept your reasoning, this doesn't entail a rejection of
            > lexicalism, because the part of the lexical entry specifying valency
            > (subcategorization) might still be located in the grammar zone of the
            > brain.
            >
            > It's not at all true that linguists reject lexicalism. Indeed, most
            > cognitive linguists probably accept it; I'm thinking particularly of
            > usage-based theories.
            >
            > --And.
            >

            You're correct. I should've added "some" before "linguists are fairly
            certain..." Ultimately, I believe it's useful to distinguish between
            syntactic rules and lexical features, since doing so will tend to greatly
            reduce the number of times a particular syntactic rule needs to be stated.
            Perhaps the brain stores the rule in some fashion multiple times for each
            lexeme that uses it, but I don't feel that an extremely complex, only
            partially understood, microscopic parallel processing device such as the
            brain should be my sole guide to studying language. Especially when a
            simpler grammar of syntactic rules can account for what we see in produced
            speech.

            Then again, this is veering quite a bit off of the original topic, namely,
            what to call a section of a sentence that is missing, and perhaps, how to
            decide what can possibly be in that position. Constituent is the closest I
            can think of to that concept, though it is loaded with additional notions
            that don't quite match what Gary is looking for. Perhaps "node" may make
            more sense, but that presupposes looking at the structure of a sentence as
            a tree, and in Gary's context of programming, doesn't seem particularly
            enlightening to his future-self audience. Maybe "potential node" or
            "potential set"? Also, good luck, Gary. A quick perusal of Google Translate
            (and other automated translator) results will indicate that the state of
            the art of computer translation is still sketchy at best. Here's hoping you
            make a breakthrough!
          • Gary Shannon
            Message 5 of 29, Jan 28, 2013
              On Mon, Jan 28, 2013 at 8:46 AM, Jeff Sheets <sheets.jeff@...> wrote:

              > ...Also, good luck, Gary. A quick perusal of Google Translate
              > (and other automated translator) results will indicate that the state of
              > the art of computer translation is still sketchy at best. Here's hoping you
              > make a breakthrough!

              I doubt I'll make any breakthroughs. I'm still aiming at a conlang
              into which I can auto-translate from English. Since I can engineer the
              language to fit my machine translation needs it makes the problem a
              LOT simpler than machine translations into languages that I can't just
              change to make them easier for the computer. For one thing, my conlang
              will, by design, have NO idioms. Every statement in the conlang will
              be literal.

              --gary
            • Leonardo Castro
              Message 6 of 29, Jan 28, 2013
                2013/1/28 And Rosta <and.rosta@...>:
                > On Jan 28, 2013 10:19 AM, "Leonardo Castro" <leolucas1980@...> wrote:
                >>
                >> How old is the "link grammar" concept? My conlang works this way but I
                >> wasn't aware this already existed as a concept.
                >
                > I first encountered Link Grammar in the 90s. It makes use of Dependency
                > Grammar, whose origins go back to the Middle Ages (see a study by Michael
                > Covington on this), and Lexicalism, which dates back to a famous work by
                > Chomsky from the early 70s whose title my ageing brain is not recalling for
                > me. (Ah, I remember now: Remarks on Nominalization.)

                Interesting!

                I think I was first indirectly exposed to this concept by means of
                Lojban. Then, in my conlang, I decided that each verb could have only
                2 slots ("subject" and "object"). The main goal is to express the
                passive voice of each verb unambiguously.

                In the book "Inferências Lexicais e Interpretação de Redes de
                Predicados" ("Lexical Inferences and Interpretation of Predicate
                Networks"; the second author was my professor), they apply graph
                theory to predicates, treating all language relations as "directed
                graphs". They even define figures of speech by means of
                mathematics-like functions.
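Predicate networks of this kind are easy to prototype as directed graphs. A small sketch follows, with role names and predicates of my own choosing (not taken from the book):

```python
from collections import defaultdict

def add_relation(graph, predicate, role, argument):
    # Record the edge predicate --role--> argument in the directed graph.
    graph[predicate].append((role, argument))

relations = defaultdict(list)
add_relation(relations, "steal", "subject", "he")
add_relation(relations, "steal", "object", "tarts")
add_relation(relations, "run_away", "subject", "he")

# Arguments shared between two predicates fall out of the graph structure.
shared = ({arg for _, arg in relations["steal"]}
          & {arg for _, arg in relations["run_away"]})
print(sorted(shared))  # ['he']
```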
              • Ralph DeCarli
                Message 7 of 29, Jan 28, 2013
                  You might find Apertium helpful in your term search.

                  http://wiki.apertium.org/wiki/Main_Page

                  The Apertium translation system uses an intermediate 'wrapper' layer
                  for translating between Romance languages. Their descriptions of the
                  meta-grammatical categories might help.

                  Their wrapper layer was one of the sources of inspiration for my
                  conlang. They have a similar yet different wrapper for every language
                  pair, and the classic way to avoid the proliferation of wrappers is to
                  create a single intermediate format that can comprehend all languages.

                  I know that's not possible, but it was an influence.
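The proliferation Ralph mentions is easy to quantify: direct pairwise translation needs one translator per ordered language pair, while a single interlingua needs only an encoder and a decoder per language. A quick sketch of the arithmetic:

```python
def direct_pairs(n):
    # One translator for every ordered pair of n languages.
    return n * (n - 1)

def via_interlingua(n):
    # One encoder into and one decoder out of the interlingua per language.
    return 2 * n

for n in (3, 10, 30):
    print(n, direct_pairs(n), via_interlingua(n))
# 3 6 6
# 10 90 20
# 30 870 60
```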

                  Ralph
                  ------------
                  On Mon, 28 Jan 2013 09:03:55 -0800
                  Gary Shannon <fiziwig@...> wrote:

                  > On Mon, Jan 28, 2013 at 8:46 AM, Jeff Sheets
                  > <sheets.jeff@...> wrote:
                  >
                  > > ...Also, good luck, Gary. A quick perusal of Google Translate
                  > > (and other automated translator) results will indicate that the
                  > > state of the art of computer translation is still sketchy at
                  > > best. Here's hoping you make a breakthrough!
                  >
                  > I doubt I'll make any breakthroughs. I'm still aiming at a conlang
                  > into which I can auto-translate from English. Since I can engineer
                  > the language to fit my machine translation needs it makes the
                  > problem a LOT simpler than machine translations into languages
                  > that I can't just change to make them easier for the computer. For
                  > one thing, my conlang will, by design, have NO idioms. Every
                  > statement in the conlang will be literal.
                  >
                  > --gary

                  --

                  Have you heard of the new post-neo-modern art style?
                  They haven't decided what it looks like yet.