
Re: 30-day project and stack depth

  • Gary Shannon
    Message 1 of 22, Jan 4, 2013
      Thanks to everyone for the interesting ideas.

      I do want to stay far away from RPN. It just doesn't feel natural.

      On the other hand, I want something unique, so I'm not going to worry
      about "rules" like "SOV languages usually put adjectives before their
      nouns." I've already broken that rule, and I'm happy with the way it's
      working out so far.

      The problem, as I see it, is that SOV _can_ result in putting a lot of
      distance between the arguments of the verb and the verb itself.

      Consider: The boy saw a dog that had a bone that had cracks that were
      filled with dirt.

      If I just move all the verbs to the end I get:

      The boy a dog that a bone that cracks that with dirt were filled had had saw.

      Or I could try to move the relative clauses before their nouns:

      The boy {that (it bone had) a dog} saw.

      That seems to require a resumptive pronoun, and doesn't seem natural.

      So what I need are strategies to break up the clauses and keep the
      reader from having to wait so long to see the verb. What was it that
      Mark Twain said about German? Something about swimming the Atlantic
      Ocean underwater and finally putting his head above water in New York
      to spit out the long-awaited verb.

      What I think I will do is play around with some bare-bones sentences
      and shuffle and permute the elements, adding whatever markers are
      needed to make sense of them. I may end up inflecting nouns for case.
      That would be interesting.

      So how many ways can I permute the elements of these?

      Boy dog saw.
      Boy dog bone had saw.
      Boy dog bone cracks had had saw.
      Boy dog bone cracks dirt filled had had saw.
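
      Just to get a feel for the search space, here's a throwaway Python
      sketch (the word list is just the bare-bones elements of the third
      sentence above; nothing about it is essential):

      from itertools import permutations

      # Bare roots of "Boy dog bone cracks had had saw."
      elements = ["boy", "dog", "bone", "cracks", "had", "had", "saw"]

      # set() collapses the duplicates caused by the repeated "had".
      orders = set(permutations(elements))
      print(len(orders))  # 2520 distinct orderings of these 7 elements

      for order in sorted(orders)[:5]:  # peek at a few candidates
          print(" ".join(order))

      Of course only a handful of those orderings could ever be made
      parseable, even with markers; the sketch just shows how big the
      space is.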

      This is becoming an interesting project. :-)

      --gary
    • MorphemeAddict
      Message 2 of 22, Jan 4, 2013
        On Fri, Jan 4, 2013 at 12:23 PM, Gary Shannon <fiziwig@...> wrote:

        > Thanks to everyone for the interesting ideas.
        >
        > I do want to stay far away from RPN. It just doesn't feel natural.
        >
        > On the other hand, I want something unique, so I'm not going to worry
        > about "rules" like "SOV languages usually put adjectives before their
        > nouns." I've already broken that rule, and I'm happy with the way it's
        > working out so far.
        >
        > The problem, as I see it, is that SOV _can_ result in putting a lot of
        > distance between the arguments of the verb and the verb itself.
        >
        > Consider: The boy saw a dog that had a bone that had cracks that were
        > filled with dirt.
        >
        > If I just move all the verbs to the end I get:
        >
        > The boy a dog that a bone that cracks that with dirt were filled had had
        > saw.
        >
        > Or I could try to move the relative clauses before their nouns:
        >
        > The boy {that (it bone had) a dog} saw.
        >
        > That seems to require a resumptive pronoun, and doesn't seem natural.
        >

        For relative clauses you could use something like this:

        The boy a bone-having (-hadding?) dog saw.

        stevo

      • Logan Kearsley
        Message 3 of 22, Jan 4, 2013
          On 4 January 2013 08:18, Jörg Rhiemeier <joerg_rhiemeier@...> wrote:
          > Hallo conlangers!
          >
          > On Friday 04 January 2013 07:36:27 Gary Shannon wrote:
          [...]
          >> So I guess my
          >> question is this: In natlangs, how deep does the deferred elements
          >> stack generally go? What depth does it never exceed? Does anybody have
          >> a handle on these questions?
          >
          > At any rate, "stack depth" (I sincerely doubt that "stack" is the
          > right concept here; we are rather dealing with tree structures)
          > in human languages is quite limited, and deep center-embedding is a
          > no-no. Most people feel uncomfortable with clauses embedded more
          > than three deep, I think, though some people are capable of handling
          > more.

          Optimal parsing algorithms like PCKY certainly make no use of a stack
          structure, but aren't 100% cognitively plausible because a) they
          assume unbounded memory and b) it's simple to observe that humans are
          not optimal parsers.
          I have seen one example (though I'm sure there are probably more) of
          research into a general-purpose parser with human-like memory
          constraints (http://www-users.cs.umn.edu/~schuler/paper-jcl08wsj.pdf)
          which assumes that parsing occurs mainly in short-term working memory,
          that you can have only 3-4 "chunks" (containing partial constituents) in
          working memory at any given time, and that memory can be saved by
          transforming partial trees to maximize how much stuff you can put into
          one chunk by ensuring that you never have to store complete but
          unattached constituents. The parser is actually implemented as a
          hierarchical hidden Markov model where short-term memory locations are
          represented by a small finite set of random variables whose values are
          partial syntactic trees, but access patterns look the same as access
          patterns for a stack structure, such that it could be equivalently
          represented by a bounded push-down automaton with a maximum stack
          depth of 3-4.
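          To make the bounded-memory idea concrete, here's a toy sketch
          (mine, not Schuler's actual model; the grammar, lexicon, and
          chunk limit are invented purely for illustration) of a
          shift-reduce recognizer that gives up once more than four
          partial constituents must be held at once:

          # Toy memory-bounded shift-reduce recognizer (illustrative only).
          MAX_CHUNKS = 4

          GRAMMAR = {("Det", "N"): "NP", ("V", "NP"): "VP", ("NP", "VP"): "S"}
          LEXICON = {"the": "Det", "boy": "N", "dog": "N", "saw": "V"}

          def recognize(words):
              stack = []
              for w in words:
                  stack.append(LEXICON[w])  # shift the next word
                  # Reduce eagerly so complete constituents merge into one chunk.
                  while len(stack) >= 2 and tuple(stack[-2:]) in GRAMMAR:
                      right = stack.pop()
                      left = stack.pop()
                      stack.append(GRAMMAR[(left, right)])
                  if len(stack) > MAX_CHUNKS:
                      return "overflow"  # the human-like parse failure
              return "ok" if stack == ["S"] else "incomplete"

          print(recognize("the boy saw the dog".split()))  # -> ok

          A center-embedded clause keeps several half-finished constituents
          alive at once, which is exactly what pushes a recognizer like this
          over its chunk limit.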
          That model can explain why some examples of center-embedded sentences
          cause interpretation problems in humans while other
          structurally-identical sentences don't, because the probability of
          constructing a certain syntactic structure changes in different
          contexts; thus, garden-path constructions that you are very familiar
          with (and thus which have been programmed into the transition
          probabilities of the HHMM) don't feel like garden-path constructions
          anymore.

          -l.
        • Gary Shannon
          Message 4 of 22, Jan 4, 2013
            Here's an idea for a mixed word order.

            My conlang was initially set up to be SAOVI where A is an optional aux
            marking tense/aspect/mood, and I is an optional indirect object. So in
            a sense, my word order is already SVOV where the verb is split into
            its root and its TAM marker.

            The presence of an aux marks both the subject and object by lying
            between them, eliminating the need for an attached (or detached) case
            marker on the noun. But suppose that a relative clause used SVO where
            the aux was assumed to be the same as for the main verb, and so the
            clause verb is promoted to the aux position. Then we would have
            something like:

            Boy did dog see. SAOV
            Boy did dog have bone see. SA(SVO)V

            In case the relative clause had a different tense ("The boy WILL SEE
            the dog that HAD a bone."), then both verbs would have their own aux:

            Boy will dog did bone have see.

            So there are two approaches:

            1) Make nested clauses SAOV, or if no A, SVO.
            2) Require a TAM aux even for present tense indicative.

            Boy did dog see.
            Boy now dog see.
            Boy will dog see.

            Boy did dog will bone have see.
            Boy now dog now bone have see.
            Boy will dog will bone have see.

            It seems like the duplicated TAM aux is redundant, but simply dropping
            it causes ambiguity, or at least difficulty:

            Boy will dog bone have see.

            But if the relative clause is permitted to promote the V to the A slot:

            Boy will dog have bone see.

            which seems perfectly clear.

            But then there's:

            The boy that the dog I just saw barked at was scared.

            Oh dear! What now?

            --gary
          • Charles W Brickner
            Message 5 of 22, Jan 4, 2013
              -----Original Message-----
              From: Constructed Languages List [mailto:CONLANG@...] On Behalf Of MorphemeAddict

              For relative clauses you could use something like this:

              The boy a bone-having (-hadding?) dog saw.
              ==========================

              Senjecas can do that. There is an active and a passive participle for each of the three tenses.

              paútus ósþom e-údante čénem e-óĸ̌a:

              paút-us ósþ-om e-úd-ante čén-em e-óĸ̌-a

              boy-NOM.sg bone-ACC.sg PST-have-PRES.PTCP dog-ACC.sg PST-see-IND

              Charlie
            • R A Brown
              Message 6 of 22, Jan 4, 2013
                On 04/01/2013 15:30, And Rosta wrote:
                > Jörg Rhiemeier, On 04/01/2013 13:18:


                >>
                >> I have been maintaining for long that RPN is a word
                >> order type in itself that is actually very different
                >> from the SOV word order found in about one half of all
                >> human languages.
                >
                > Why is it actually very different? RPN has consistent
                > dependent--head ordering,

                If by head you mean 'operator' and by dependent you mean
                'operand'. Let us be clear that RPN is a method which was
                developed both for unambiguous mathematical expressions and
                for their evaluation. It was to avoid the artificial
                convention of what we used to call BODMAS when I was at
                school and, indeed, the need for ever using brackets within
                an expression.

                Though strictly speaking it was the so-called "Polish
                notation" (PN), devised by Jan Łukasiewicz, that was designed
                to achieve this. In this the operator comes first and the
                operands follow. However, it was noticed by computer
                scientists that if you did things the other way round, then
                the expression could be built up using a stack and
                immediately evaluated, which is why RPN became so widely
                used in computing.

                (Strictly RPN is not the reverse of PN, as the operands are
                still put in the same order, e.g. "5 - 2" is "- 5 2" in PN
                and "5 2 -" in RPN - not "2 5 -").

                So does that mean all SOV languages are expressed as RPN,
                and all VSO languages as PN? Certainly not.

                > and S & O are normally considered dependents of head V.

                Yes. If we spoke with nothing more complicated than "man
                bites dog", then indeed "bites man dog" is PN, and "man dog
                bites" is RPN.

                The trouble is we humans like to add a few adjectives around
                the place, together with maybe the odd determiner or two; we
                even stick in an adverb or two and maybe one or more dratted
                prepositional and/or postpositional phrases; then we have
                the temerity to add relative clauses and various other
                subordinate clauses, etc.

                I cannot think of any natlang that conforms solely to PN or
                RPN ordering.

                But let us take a simple RPN example. First we'll have to
                recast it so that operators are replaced by a verb in such a
                way that the first operand is a grammatical subject and the
                second an object. I must confess I haven't found a neat way
                of doing this. The best I can manage is to have the
                operators as passive verbs; this allows the first operand to
                be the subject of the verb; the second operand is then the
                object of "by" or, if you prefer, the verbal phrase.

                Thus I rephrase "5 2 -" as: five two diminished-by.

                OK. Here is a pretty simple RPN expression:
                three five augmented-by seven two diminished-by multiplied-by.

                Which you easily evaluate as 40! Or not?
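                (Step by step the stack runs: 3 | 3 5 | 8 | 8 7 |
                8 7 2 | 8 5 | 40.)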

                [snip]

                >> Who will ever become fluent in a stack-based language?
                >> Such beasts are outside what the human language
                >> facility can cope with in real time, I think.

                I agree.

                >> Stack-based grammars are very economical with regard to
                >> rules (which is the reason they are sometimes used in
                >> computing), but require a prodigious short- term memory
                >> in order to handle the stack properly (which computers
                >> of course have).

                I also agree with this.

                > I don't want to repeat the several lengthy threads on
                > this topic that appear to have left no impression on
                > Joerg's memory,

                On the contrary, I can assure you that they have left an
                impression both on Jörg's mind and on mine.

                > so let me essay a quick summary for Gary's benefit:
                >
                > The available evidence (i.e. what has been adduced in
                > previous discussions) indicates that humans parse
                > natlangoid lgs using stacks.

                IMO all that has been adduced is that a fairly trivial use
                of a stack is possibly involved in human language processing.

                > So in one sense, a stack-based conlang grammar would just
                > be a grammar formulated in a way that takes into account
                > how sentences will be parsed, and there's nothing
                > obviously unnatural about it.

                I have yet to see convincing examples where sentence
                parsing of a human-usable language can be done solely in ways
                analogous to the use of stacks as a computer data structure.

                > However, previous discussion of Fith, which is a
                > stack-based conlang (in the above sense) revealed that
                > the language was also intended to be parsed in a way that
                > went straight from phonology to semantic interpretation,
                > without a level of syntax:

                Not sure what you mean by this. In any case this part of
                the thread is really about RPN. Is there no syntax in the
                expression "5 2 -" (five two diminished-by)?

                > when the parser combined an operator and operand, the
                > output would be a semantic rather than a syntactic
                > object.

                Obviously - that's what RPN is all about.

                > This is logically independent of the stack-basedness,

                Maybe - but, with respect, you're putting the cart before
                the horse. Stacks are used to evaluate RPN because it's the
                obvious way to do it. By all means use the stack for
                something else if you wish. But, as a computer scientist, I
                use a stack when it is useful to do so, and some other
                appropriate data structure when it is useful to do so. Data
                structures are tools.

                > but the previous discussion revealed that some (Ray and
                > Joerg) were using the term _stack-based_ to mean
                > "stack-based and syntaxless".

                No - we were both using stack-based in the way that computer
                scientists and programmers use the term.

                > To my mind, syntaxfulness is a necessary property of
                > languagehood --

                Have you ever tried writing a natural language parser?

                [snip]

                > In "I gave the place where tomorrow the princess the *
                > bishop will crown instead of the prince a quick once
                > over", by the time you hit *, the stack contains (i)
                > _gave_, waiting for the direct object (_a quick once
                > over_), (ii) _where_, waiting for _will_, (iii)
                > _tomorrow_, waiting for _will_, (iv) _the princess_,
                > waiting for _will_, (v) _the_ waiting for _bishop_ and
                > for _will_.

                Er? Could you evaluate this *as a stack* beginning with "I"
                and proceeding to the next word and so on?

                [snip]
                >
                >>> I want my language to be naturalistic, and I want it
                >>> to be at least theoretically possible to become
                >>> fluent in the language.
                >>
                >> Two strong reasons to forsake a stack-based approach!
                >> Stack-based languages are not naturalistic, and you'll
                >> never become fluent in them!
                >
                > You (Joerg) should find an apter and less misleading term
                > than "stack-based".

                No - stack-based means _based_ on a stack, i.e. the stack is
                the main or, as in the case of RPN, only data structure used.

                > In the most literal and obvious sense of "stack-based",
                > natlangs are "stack-based".

                If only! That has not been my experience with natural
                language processing. Natlangs are rather more complicated.

                > "Stack-based languages" in your extended sense are indeed
                > not naturalistic, and indeed aren't even languages, but
                > because of the syntaxlessness, not the stack-basedness.

                Why is "5 2 -" syntaxless?

                --
                Ray
                ==================================
                http://www.carolandray.plus.com
                ==================================
                There ant no place like Sussex,
                Until ye goos above,
                For Sussex will be Sussex,
                And Sussex won't be druv!
                [W. Victor Cook]
              • Jörg Rhiemeier
                Message 7 of 22, Jan 4, 2013
                  Hallo conlangers!

                  On Friday 04 January 2013 21:38:23 R A Brown wrote:

                  > On 04/01/2013 15:30, And Rosta wrote:
                  > > Jörg Rhiemeier, On 04/01/2013 13:18:
                  > >> I have been maintaining for long that RPN is a word
                  > >> order type in itself that is actually very different
                  > >> from the SOV word order found in about one half of all
                  > >> human languages.
                  > >
                  > > Why is it actually very different? RPN has consistent
                  > > dependent--head ordering,
                  >
                  > If by head you mean 'operator' and by dependent you mean
                  > 'operand'. Let us be clear that RPN is a method which was
                  > developed both for unambiguous mathematical expressions and
                  > for their evaluation. It was to avoid the artificial
                  > convention of what we used to call BODMAS when I was at
                  > school and, indeed, the need for ever using brackets within
                  > an expression.

                  Right.

                  > Though strictly speaking it was the so-called "Polish
                  > notation" (PN), devised by Jan Łukasiewicz, that was designed
                  > to achieve this. In this the operator comes first and the
                  > operands follow. However, it was noticed by computer
                  > scientists that if you did things the other way round, then
                  > the expression could be built up using a stack and
                  > immediately evaluated, which is why RPN became so widely
                  > used in computing.

                  Just that. English-speaking people had their difficulties with
                  "Łukasiewicz notation", so they just called it "Polish notation".
                  In the same way, we call Sun Zi's theorem the "Chinese remainder
                  theorem", even though _Sun Zi_ is actually an easier name for
                  English-speakers than _Łukasiewicz_, if we ignore the tones
                  (which I haven't found on Wikipedia).

                  > (Strictly RPN is not the reverse of PN, as the operands are
                  > still put in the same order, e.g. "5 - 2" is "- 5 2" in PN
                  > and "5 2 -" in RPN - not "2 5 -").
                  >
                  > So does that mean all SOV languages are expressed as RPN,
                  > and all VSO languages as PN? Certainly not.

                  Indeed not!

                  > > and S & O are normally considered dependents of head V.
                  >
                  > Yes. If we spoke with nothing more complicated than "man
                  > bites dog", then indeed "bites man dog" is PN, and "man dog
                  > bites" is RPN.
                  >
                  > The trouble is we humans like to add a few adjectives around
                  > the place, together with maybe the odd determiner or two; we
                  > even stick in an adverb or two and maybe one or more dratted
                  > prepositional and/or postpositional phrases; then we have
                  > the temerity to add relative clauses and various other
                  > subordinate clauses, etc.
                  >
                  > I cannot think of any natlang that conforms solely to PN or
                  > RPN ordering.

                  Nor can I! A conlang that does is Fith, but that one lacks the
                  sophistication of a natlang. It is nothing more than a sketch
                  which covers only the very basics of syntax. I have never seen
                  a longer text in Fith; I am pretty sure that if one were to
                  translate a sophisticated literary text into it, it would show
                  its limits and fall apart.

                  > But let us take a simple RPN example. First we'll have to
                  > recast it so that operators are replaced by a verb in such a
                  > way that the first operand is a grammatical subject and the
                  > second an object. I must confess I haven't found a neat way
                  > of doing this. The best I can manage is to have the
                  > operators as passive verbs; this allows the first operand to
                  > be the subject of the verb; the second operand is then the
                  > object of "by" or, if you prefer, the verbal phrase.
                  >
                  > Thus I rephrase "5 2 -" as: five two diminished-by.
                  >
                  > OK. Here is a pretty simple RPN expression:
                  > three five augmented-by seven two diminished-by multiplied-by.
                  >
                  > Which you easily evaluate as 40! Or not?

                  I arrive at the same result. But this is just arithmetic, and
                  languages can do much more than that. In the same way, human minds
                  can do much more than computers (which has nothing to do with
                  raw computing power - on that score, computers left us in
                  the dust decades ago!).

                  > [snip]
                  >
                  > >> Who will ever become fluent in a stack-based language?
                  > >> Such beasts are outside what the human language
                  > >> facility can cope with in real time, I think.
                  >
                  > I agree.
                  >
                  > >> Stack-based grammars are very economical with regard to
                  > >> rules (which is the reason they are sometimes used in
                  > >> computing), but require a prodigious short- term memory
                  > >> in order to handle the stack properly (which computers
                  > >> of course have).
                  >
                  > I also agree with this.
                  >
                  > > I don't want to repeat the several lengthy threads on
                  > > this topic that appear to have left no impression on
                  > > Joerg's memory,
                  >
                  > On the contrary, I can assure you that they have left an
                  > impression both on Jörg's mind and on mine.

                  Indeed they have left a lasting impression, which is reinforced
                  by the current iteration of this debate. There really is no
                  need to repeat those lengthy threads again, though for different
                  reasons than And assumes ;)

                  > > so let me essay a quick summary for Gary's benefit:
                  > >
                  > > The available evidence (i.e. what has been adduced in
                  > > previous discussions) indicates that humans parse
                  > > natlangoid lgs using stacks.
                  >
                  > IMO all that has been adduced is that a fairly trivial use
                  > of a stack is possibly involved in human language processing.

                  Certainly, some kind of memory is involved here which in some
                  way tags the stored syntax tree nodes according to where they
                  occur; whether it is a "stack" is another matter.

                  > > So in one sense, a stack-based conlang grammar would just
                  > > be a grammar formulated in a way that takes into account
                  > > how sentences will be parsed, and there's nothing
                  > > obviously unnatural about it.
                  >
                  > I have yet to see convincing examples where sentence
                  > parsing of a human-usable language can be done solely in ways
                  > analogous to the use of stacks as a computer data structure.

                  I have yet to see such examples either.

                  > > However, previous discussion of Fith, which is a
                  > > stack-based conlang (in the above sense) revealed that
                  > > the language was also intended to be parsed in a way that
                  > > went straight from phonology to semantic interpretation,
                  > > without a level of syntax:
                  >
                  > Not sure what you mean by this. In any case this part of
                  > the thread is really about RPN. Is there no syntax in the
                  > expression "5 2 -" (five two diminished-by)?

                  Certainly there is syntax in it! There is a rule which states
                  which of the arguments of "-" is to be subtracted from which.

                  > > when the parser combined an operator and operand, the
                  > > output would be a semantic rather than a syntactic
                  > > object.
                  >
                  > Obviously - that's what RPN is all about.

                  Yep. The stack of an RPN calculator never holds anything other
                  than *numbers*, i.e. "semantic objects". The human language
                  faculty, in contrast, certainly stores not only words but also
                  phrases and clauses, i.e. syntactic objects. (But the stack
                  of a Fithian also holds syntactic objects. The language is
                  thus not "syntax-free", its syntax is only vastly simpler
                  - but more taxing on short-term memory - than that of human
                  languages.)

                  > > This is logically independent of the stack-basedness,
                  >
                  > Maybe - but, with respect, you're putting the cart before
                  > the horse. Stacks are used to evaluate RPN because it's the
                  > obvious way to do it.

                  Right.

                  > By all means use the stack for
                  > something else if you wish. But, as a computer scientist, I
                  > use a stack when it is useful to do so, and some other
                  > appropriate data structure when it is useful to do so. Data
                  > structures are tools.

                  And that is the right way of using it. It is a useful tool
                  for some purposes; for others, it is less so, and you'd better
                  use something else.

                  > > but the previous discussion revealed that some (Ray and
                  > > Joerg) were using the term _stack-based_ to mean
                  > > "stack-based and syntaxless".
                  >
                  > No - we were both using stack-based in the way that computer
                  > scientists and programmers use the term.

                  Yes.

                  > > To my mind, syntaxfulness is a necessary property of
                  > > languagehood --

                  A truism - but nobody ever doubted it!

                  > Have you ever tried writing a natural language parser?
                  >
                  > [snip]
                  >
                  > > In "I gave the place where tomorrow the princess the *
                  > > bishop will crown instead of the prince a quick once
                  > > over", by the time you hit *, the stack contains (i)
                  > > _gave_, waiting for the direct object (_a quick once
                  > > over_), (ii) _where_, waiting for _will_, (iii)
                  > > _tomorrow_, waiting for _will_, (iv) _the princess_,
                  > > waiting for _will_, (v) _the_ waiting for _bishop_ and
                  > > for _will_.
                  >
                  > Er? Could you evaluate this *as a stack* beginning with "I"
                  > and proceeding to the next word and so on?

                  I am completely lost in And's example ;)

                  > [snip]
                  >
                  > >> Two strong reasons to forsake a stack-based approach!
                  > >> Stack-based languages are not naturalistic, and you'll
                  > >> never become fluent in them!
                  > >
                  > > You (Joerg) should find an apter and less misleading term
                  > > than "stack-based".
                  >
                  > No - stack-based means _based_ on a stack, i.e. the stack is
                  > the main or, as in the case of RPN, only data structure used.

                  Yes. And I doubt that it is sufficient to parse a human
                  language.

                  > > In the most literal and obvious sense of "stack-based",
                  > > natlangs are "stack-based".
                  >
                  > If only! That has not been my experience with natural
                  > language processing. Natlangs are rather more complicated.

                  I don't have much practical experience with natural language
                  processing (I once tinkered with a parser for Zork-style games,
                  which, however, only understood a restricted subset of a human
                  language), but at any rate, human languages are much more
                  complex than most programming languages!

                  > > "Stack-based languages" in your extended sense are indeed
                  > > not naturalistic, and indeed aren't even languages, but
                  > > because of the syntaxlessness, not the stack-basedness.
                  >
                  > Why is "5 2 -" syntaxless?

                  It can't be syntaxless when a reordering changes the meaning:
                  _2 5 -_ gives a different result, and _5 - 2_ gives again a
                  different result which even depends on what is currently on
                  the stack, or a syntax error if the stack is empty ;)

                  --
                  ... brought to you by the Weeping Elf
                  http://www.joerg-rhiemeier.de/Conlang/index.html
                  "Bêsel asa Éam, a Éam atha cvanthal a cvanth atha Éamal." - SiM 1:1
                • Elena ``of Valhalla''
                  Message 8 of 22, Jan 5, 2013
                    On 2013-01-04 at 23:02:41 +0100, Jörg Rhiemeier wrote:
                    > On Friday 04 January 2013 21:38:23 R A Brown wrote:
                    > > On 04/01/2013 15:30, And Rosta wrote:
                    > > > Jörg Rhiemeier, On 04/01/2013 13:18:
                    > > >> Stack-based grammars are very economical with regard to
                    > > >> rules (which is the reason they are sometimes used in
                    > > >> computing), but require a prodigious short- term memory
                    > > >> in order to handle the stack properly (which computers
                    > > >> of course have).

                    Don't you need lots of short-term memory to parse complex
                    SOV sentences such as those common in *literary* German?

                    Actually, in my youth I was guilty of a few monstrosities in
                    Latin-influenced written Italian, and they did require
                    quite a lot of short-term memory to parse, even if they were
                    "simple" SVO.

                    > Yep. The stack of an RPN calculator never holds anything else
                    > than *numbers*, i.e. "semantic objects". The human language
                    > faculty, in contrast, certainly stores not only words but also
                    > phrases and clauses, i.e. syntactic objects. (But the stack
                    > of a Fithian also holds syntactic objects.

                    the stack of an RPN *programming language* interpreter can hold
                    lists of expressions (used to define functions, for conditional
                    clauses, etc.)

                    e.g. in PostScript (the only RPN language I have used):

                    /Square {            % expects: x y
                    moveto               % begin the path at (x, y)
                    0 1 3 {              % for i = 0..3; "for" pushes i each pass
                    dup 2 mod 0 eq {     % even i: horizontal side
                    2 lt { 100 } { -100 } ifelse 0 rlineto
                    } {                  % odd i: vertical side
                    2 lt { 100 } { -100 } ifelse 0 exch rlineto
                    } ifelse             % "2 lt" consumes i in either branch
                    } for
                    } def

                    0 0 Square stroke

                    (this defines a function that draws a square and calls it.)

                    Once the interpreter gets to the ``def``, the actual procedure is
                    stored elsewhere, but everything else is kept and used on the stack.

                    --
                    Elena ``of Valhalla''
                  • R A Brown
                    Message 9 of 22, Jan 5, 2013
                      On 04/01/2013 22:02, Jörg Rhiemeier wrote:
                      > Hallo conlangers!
                      >
                      > On Friday 04 January 2013 21:38:23 R A Brown wrote:
                      >
                      >> On 04/01/2013 15:30, And Rosta wrote:
                      [snip]
                      >>> I don't want to repeat the several lengthy threads on
                      >>> this topic that appear to have left no impression on
                      >>> Joerg's memory,
                      >>
                      >> On the contrary, I can assure you that they have left
                      >> an impression both on Jörg's mind and on mine.
                      >
                      > Indeed they have left a lasting impression, which is
                      > reinforced by the current iteration of this debate.

                      Amen!

                      > There really is no need to repeat those lengthy threads
                      > again, though for different reasons than what And
                      > assumes to be ;)

                      Yes, indeed - and I will keep my reply short for that reason.

                      [snip]
                      [snip]
                      >
                      >>> To my mind, syntaxfulness is a necessary property of
                      >>> languagehood --
                      >
                      > A truism - but nobody ever doubted it!

                      Indeed not.

                      [snip]
                      >>
                      >>> In "I gave the place where tomorrow the princess the
                      >>> * bishop will crown instead of the prince a quick
                      >>> once over", by the time you hit *, the stack
                      >>> contains (i) _gave_, waiting for the direct object
                      >>> (_a quick once over_), (ii) _where_, waiting for
                      >>> _will_, (iii) _tomorrow_, waiting for _will_, (iv)
                      >>> _the princess_, waiting for _will_, (v) _the_ waiting
                      >>> for _bishop_ and for _will_.
                      >>
                      >> Er? Could you evaluate this *as a stack* beginning
                      >> with "I" and proceeding to the next word and so on?
                      >
                      > I am completely lost in And's example ;)

                      Me too - in any case, as far as I can see, it has nothing
                      whatever to do with RPN.

                      [snip]

                      >>> "Stack-based languages" in your extended sense are
                      >>> indeed not naturalistic, and indeed aren't even
                      >>> languages, but because of the syntaxlessness, not
                      >>> the stack-basedness.
                      >>
                      >> Why is "5 2 -" syntaxless?
                      >
                      > It can't be syntaxless when a reordering changes the
                      > meaning: _2 5 -_ gives a different result, and _5 - 2_
                      > gives again a different result which even depends on
                      > what is currently on the stack, or a syntax error if the
                      > stack is empty ;)

                      Exactly!! I really do not understand what And is on about
                      with all this "syntaxless" business. Of course both PN and
                      RPN must have syntax, particularly with regard to the
                      subtraction and dividing operators!

                      --
                      Ray
                      ==================================
                      http://www.carolandray.plus.com
                      ==================================
                      There ant no place like Sussex,
                      Until ye goos above,
                      For Sussex will be Sussex,
                      And Sussex won't be druv!
                      [W. Victor Cook]
                    • Jan Strasser
                      Message 10 of 22, Jan 5, 2013
                        On Fri, 4 Jan 2013 11:46:16 -0800, Gary Shannon wrote:
                        > From: Gary Shannon<fiziwig@...>
                        > Subject: Re: 30-day project and stack depth
                        >
                        > Here's an idea for a mixed word order.
                        >
                        > My conlang was initially set up to be SAOVI where A is an optional aux
                        > marking tense/aspect/mood, and I is an optional indirect object. So in
                        > a sense, my word order is already SVOV where the verb is split into
                        > its root and its TAM marker.
                        >
                        > The presence of an aux marks both the subject and object by lying
                        > between them, eliminating the need for an attached (or detached) case
                        > marker on the noun. But suppose that a relative clause used SVO where
                        > the aux was assumed to be the same as for the main verb, and so the
                        > clause verb is promoted to the aux position. Then we would have
                        > something like:
                        >
                        > Boy did dog see. SAOV
                        > Boy did dog have bone see. SA(SVO)V
                        >
                        > In case the relative clause had a different tense ("The boy WILL SEE
                        > the dog that HAD a bone."), then both verbs would have their own aux:
                        >
                        > Boy will dog did bone have see.
                        >
                        > So there are two approaches:
                        >
                        > 1) Make nested clauses SAOV, or if no A, SVO.
                        > 2) Require a TAM aux even for present tense indicative.
                        >
                        > Boy did dog see.
                        > Boy now dog see.
                        > Boy will dog see.
                        >
                        > Boy did dog will bone have see.
                        > Boy now dog now bone have see.
                        > Boy will dog will bone have see.
                        >
                        > It seems like the duplicated TAM aux is redundant, but simply dropping
                        > it causes ambiguity, or at least difficulty:
                        >
                        > Boy will dog bone have see.
                        >
                        > But if the relative clause is permitted to promote the V to the A slot:
                        >
                        > Boy will dog have bone see.
                        >
                        > which seems perfectly clear.
                        >
                        > But then there's:
                        >
                        > The boy that the dog I just saw barked at was scared.
                        >
                        > Oh dear! What now?
                        >
                        > --gary

                        This is similar to how my conlang Buruya Nzaysa handles relative
                        clauses, except that the head-initial parts of BNz syntax conform to a
                        VSO pattern rather than an SVO one. BNz syntax can thus be characterised
                        as AuxSOV, with noun modifiers following their head. The semantic verb
                        at the end is uninflected; the initial Aux marks both tense/aspect/mood
                        of the clause and person/number/role of both subject and object. (This
                        polypersonal agreement system surely simplifies parsing, but I believe
                        the syntax would work well without it too.)

                        Like you suggested for your system, the auxiliary is in fact mandatory
                        for all clauses in BNz, including subclauses of any type. Complement
                        clauses are introduced by a subordinating conjunction similar to the
                        English "that" (but note that this conjunction cannot be dropped in
                        BNz). Relative clauses are introduced by a different conjunction which
                        actually acts (and inflects) like an auxiliary in most situations. If
                        the TAM of the subclause is saliently different from that of the matrix
                        clause, an additional aux may be introduced right before the semantic
                        verb (giving A(S)OAV word order for the subclause).

                        Another additional detail is that BNz uses case-marking articles on
                        every noun phrase. Like polypersonal marking on the aux, this makes
                        parsing significantly easier, but it probably wouldn't be entirely
                        necessary for the syntactic system to work.

                        Here's how BNz would handle the example sentences you gave:

                        did.3s>3 the.NOM boy the.ACC dog see
                        AuxSOV
                        "The boy saw the dog."

                        did.3s>3 the.NOM boy the.ACC dog which.3s>3 a.ACC bone have see
                        AuxSO(AuxOV)V
                        "The boy saw the dog that had a bone."

                        will.3s>3 the.NOM boy the.ACC dog which did.3s>3 a.ACC bone have see
                        AuxSO(AuxAuxOV)V
                        "The boy will see the dog that had a bone."


                        The last sentence you gave can either be built according to the same
                        syntax rules, which results in two levels of center-embedding...:

                        did.3s the.NOM boy [which.3s towards.3 him.ACC the.NOM dog [which.1s>3
                        just see] bark] be_scared
                        AuxS(AuxOblS(AuxAuxV)V)V
                        "The boy that the dog I just saw barked at was scared."

                        ...or else, either or both of the heavy subclauses may be postposed to
                        after the verb of their matrix clause:

                        did.3s the.NOM boy be_scared [which.3s towards.3 him.ACC the.NOM dog
                        bark [which.1s>3 just see]]
                        AuxSV(AuxOblSV(AuxAuxV))
                        "The boy that the dog I just saw barked at was scared."


                        -- Jan
                      • Jan Strasser
                        Message 11 of 22, Jan 5, 2013
                          On Sat, 5 Jan 2013 13:39:51 +0100, Jan Strasser wrote:
                          > will.3s>3 the.NOM boy the.ACC dog which did.3s>3 a.ACC bone have see
                          > AuxSO(AuxAuxOV)V
                          > "The boy will see the dog that had a bone."

                          Oops, that's ungrammatical in Buruya Nzaysa! :P The following sentence
                          would be correct:

                          will.3s>3 the.NOM boy the.ACC dog which.3s>3 a.ACC bone did have see
                          AuxSO(AuxOAuxV)V
                          "The boy will see the dog that had a bone."

                          -- Jan
                        • Christophe Grandsire-Koevoets
                          Message 12 of 22, Jan 5, 2013
                            On 4 January 2013 18:23, Gary Shannon <fiziwig@...> wrote:

                            > Thanks to everyone for the interesting ideas.
                            >
                            > I do want to stay far away from RPN. It just doesn't feel natural.
                            >
                            > On the other hand, I want something unique, so I'm not going to worry
                            > about "rules" like "SOV languages usually put adjectives before their
                            > nouns." I've already broken that rule, and I'm happy with the way it's
                            > working out so far.
                            >
                            >
                            That rule is broken by Basque as well, and Basque speakers don't seem any
                            worse for it! :)

                            My Moten is also strictly SOV with adjectives following nouns, something I
                            specifically copied from Basque ;) .


                            > The problem, as I see it, is that SOV _can_ result in putting a lot of
                            > distance between the arguments of the verb and the verb itself.
                            >
                            >
                            True. I did notice that SOV languages tend to be more parsimonious in their
                            use of subclauses than non-verb-final languages. Quite often, subclauses
                            are actually absent, and are replaced by nominalised phrases. And when
                            subclauses do exist, they are kept quite short, and deep embedding is not
                            common in speech. Written text is another matter :) .


                            > Consider: The boy saw a dog that had a bone that had cracks that were
                            > filled with dirt.
                            >
                            > If I just move all the verbs to the end I get:
                            >
                            > The boy a dog that a bone that cracks that with dirt were filled had had
                            > saw.
                            >
                            > Or I could try to move the relative clauses before their nouns:
                            >
                            > The boy {that (it bone had) a dog} saw.
                            >
                            > That seems to require a resumptive pronoun, and doesn't seem natural.
                            >
                            >
                            Actually, that's exactly what Japanese does, and it doesn't use any
                            resumptive pronoun. It doesn't even mark relative subclauses in any special
                            way: the subclause is just put in front of the noun it completes, with the
                            verb in its neutral form. There's no resumptive pronoun, nor any other
                            syntactic marking from which to infer the role of the head in the subclause. Somehow,
                            the Japanese don't seem to have a problem with that.


                            > So what I need are strategies to break up the clauses and keep the
                            > reader from having to wait so long to see the verb.


                            My only question here is: why? Natlangs exist that do exactly
                            that, making the reader wait quite a long time to find the verb, so why
                            are you so intent on avoiding that issue? Having that issue *is* naturalistic.
                            Trying to twist your language to prevent it isn't.
                            --
                            Christophe Grandsire-Koevoets.

                            http://christophoronomicon.blogspot.com/
                            http://www.christophoronomicon.nl/
                          • R A Brown
                            Message 13 of 22, Jan 5, 2013
                              On 05/01/2013 09:26, Elena ``of Valhalla'' wrote:
                              [snip]
                              > Don't you need lots of short-term memory to parse
                              > complex SOV sentences such as those common in *literary*
                              > German?

                              Those long Ciceronian-type periods! Not sure how fluent
                              German readers do it :-)

                              I remember that, many years back, when I was researching for my
                              M.Litt. degree, I had to read quite a bit of source material
                              in German. I recall one particular sentence that went on,
                              and on, and on and on - while I was understanding less and
                              less and less and less. Eventually I got to the full stop
                              (period) half-way down the page. The only thing I could do
                              was to take the sentence apart and analyze it, as we did way
                              back in my schooldays.

                              Oh yes, it was beautifully constructed with balancing
                              clauses etc, worthy of anything Cicero had done. But it
                              certainly was not a stack I used or any similar structure
                              for the analysis. The resultant parse was quite an
                              elaborate *tree* (not a nice neat binary tree).

                              If anything goes on in the human brain analogous to anything
                              that goes on in a Von Neumann machine, it is surely more
                              likely to involve tree structures (or even neural _networks_).

                              [snip]
                              >
                              > the stack of an RPN *programming language* interpreter
                              > can hold list of expressions (used to define functions,
                              > for conditional clauses, etc.)

                              Yep - there's an interesting article about real stack-based
                              or, more properly, stack-oriented languages here:
                              http://en.wikipedia.org/wiki/Stack-oriented_programming_language

                              But while, because of the limitations of the Von Neumann
                              architecture of (home) computers, stack-oriented processing
                              is very convenient, there's no reason to suppose that the
                              human brain, which has evolved over zillions of years, is so
                              limited.

                              --
                              Ray
                              ==================================
                              http://www.carolandray.plus.com
                              ==================================
                              There ant no place like Sussex,
                              Until ye goos above,
                              For Sussex will be Sussex,
                              And Sussex won't be druv!
                              [W. Victor Cook]
                            • Tim Smith
                               Message 14 of 22, Jan 5, 2013
                                On 1/4/2013 2:46 PM, Gary Shannon wrote:
                                > Here's an idea for a mixed word order.
                                >
                                > My conlang was initially set up to be SAOVI where A is an optional aux
                                > marking tense/aspect/mood, and I is an optional indirect object. So in
                                > a sense, my word order is already SVOV where the verb is split into
                                > its root and its TAM marker.
                                >
                                > The presence of an aux marks both the subject and object by lying
                                > between them, eliminating the need for an attached (or detached) case
                                > marker on the noun. But suppose that a relative clause used SVO where
                                > the aux was assumed to be the same as for the main verb, and so the
                                > clause verb is promoted to the aux position. Then we would have
                                > something like:
                                >
                                > Boy did dog see. SAOV
                                > Boy did dog have bone see. SA(SVO)V
                                >
                                > In case the relative clause had a different tense ("The boy WILL SEE
                                 > the dog that HAD a bone."), then both verbs would have their own aux's:
                                >
                                > Boy will dog did bone have see.
                                >
                                > So there are two approaches:
                                >
                                > 1) Make nested clauses SAOV, or if no A, SVO.
                                > 2) Require a TAM aux even for present tense indicative.
                                >
                                > Boy did dog see.
                                > Boy now dog see.
                                > Boy will dog see.
                                >
                                > Boy did dog will bone have see.
                                > Boy now dog now bone have see.
                                > Boy will dog will bone have see.
                                >
                                > It seems like the duplicated TAM aux is redundant, but simply dropping
                                > it causes ambiguity, or at least difficulty:
                                >
                                > Boy will dog bone have see.
                                >
                                > But if the relative clause is permitted to promote the V to the A slot:
                                >
                                > Boy will dog have bone see.
                                >
                                > which seems perfectly clear.
                                >
                                > But then there's:
                                >
                                > The boy that the dog I just saw barked at was scared.
                                >
                                > Oh dear! What now?
                                >
                                > --gary
                                >
                                There's a group of West African languages, including Maninka and its
                                close relatives, that have the same basic SAOV order that yours does.
                                The way they handle relative clauses strikes me as very elegant. The
                                head noun of the relative clause is kept within the relative clause, but
                                the relative clause is not nested within the matrix clause; instead,
                                it's preposed, with a special relative particle marking the head, and a
                                resumptive pronoun marking the position that the head would have
                                occupied in the matrix clause if it hadn't been relativized.

                                So your first example would be (where REL is the relative particle and
                                THAT is the resumptive pronoun):

                                Dog REL did bone have, boy did THAT see.
                                "The boy saw the dog that had the bone."

                                This structure makes it possible to relativize on positions other than
                                subject, which I don't see how either of your alternatives would do
                                without ambiguity, e.g., to relativize on "bone" instead of on "dog":

                                Dog did bone REL have, boy did THAT see.
                                "The boy saw the bone that the dog had."

                                It can also be applied recursively, as in your last example:

                                I did dog REL just see, THAT did boy REL bark-at, THAT was scared.
                                "The boy that the dog I just saw barked at was scared."

                                OR, a little more like real Maninka, which puts only the direct object
                                between the auxiliary and the lexical verb, but puts oblique objects
                                with postpositions after the lexical verb:
                                 I did dog REL just see, THAT did bark boy REL at, THAT was scared.

                                (Or maybe that should be "scared was" instead of "was scared" -- I don't
                                know whether Maninka treats a copula like an auxiliary or like a lexical
                                verb, or even whether it has a copula at all.)
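
                                 For concreteness, here's a little Python sketch of this
                                 linearization (the clause representation and the function
                                 are my own toy notation, just to show the mechanics):

                                     ORDER = ("subj", "aux", "obj", "verb")

                                     def words(clause):
                                         return " ".join(clause[k] for k in ORDER if clause.get(k))

                                     def relativize(rel, rel_pos, matrix, matrix_pos):
                                         """Prepose a head-internal relative clause: mark the
                                         head inside it with REL, and leave the resumptive
                                         pronoun THAT in its slot in the matrix clause."""
                                         rel = dict(rel)
                                         rel[rel_pos] += " REL"
                                         matrix = dict(matrix)
                                         matrix[matrix_pos] = "THAT"
                                         return words(rel) + ", " + words(matrix) + "."

                                     dog_has_bone = {"subj": "dog", "aux": "did",
                                                     "obj": "bone", "verb": "have"}
                                     boy_saw_dog = {"subj": "boy", "aux": "did",
                                                    "obj": "dog", "verb": "see"}

                                     # relativize on the inner subject ("the dog"):
                                     print(relativize(dog_has_bone, "subj", boy_saw_dog, "obj"))
                                     # -> dog REL did bone have, boy did THAT see.

                                     # relativize on the inner object ("the bone"):
                                     print(relativize(dog_has_bone, "obj", boy_saw_dog, "obj"))
                                     # -> dog did bone REL have, boy did THAT see.

                                 Chaining one clause into the next the same way (with lists
                                 of clauses rather than a single string) would give the
                                 recursive three-clause example above.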

                                This system of extraposed head-internal relative clauses is an extremely
                                powerful relativization strategy. But I don't know how compatible it is
                                with your vision of this conlang; I must admit I haven't been following
                                this thread closely.

                                - Tim
                              • Gary Shannon
                                 Message 15 of 22, Jan 5, 2013
                                  On Sat, Jan 5, 2013 at 4:39 AM, Jan Strasser <cedh_audmanh@...> wrote:
                                  > On Fri, 4 Jan 2013 11:46:16 -0800, Gary Shannon wrote:
                                  >>
                                  >> From: Gary Shannon<fiziwig@...>

                                  >> My conlang was initially set up to be SAOVI where A is an optional aux
                                  >> marking tense/aspect/mood, and I is an optional indirect object. So in
                                  >> a sense, my word order is already SVOV where the verb is split into
                                  >> its root and its TAM marker.
                                  [---snip---]
                                  >
                                  > Here's how BNz would handle the example sentences you gave:
                                  >
                                  > did.3s>3 the.NOM boy the.ACC dog see
                                  > AuxSOV
                                  > "The boy saw the dog."
                                  >
                                  > did.3s>3 the.NOM boy the.ACC dog which.3s>3 a.ACC bone have see
                                  > AuxSO(AuxOV)V
                                  > "The boy saw the dog that had a bone."
                                  [---snip---]
                                  >
                                  > -- Jan

                                  @Jan:

                                  I really like the idea of putting a required Aux at the front of the
                                  sentence or clause.

                                  Consider the two pieces of information:

                                  Did boy dog see.
                                  Did dog bone have.

                                  Now if we nest them by replacing the object "dog" with the sentence
                                  that describes the dog we get:

                                  Did boy [did dog bone have] see.
                                  Did boy did dog bone have see.

                                  Somehow that feels a lot easier to parse to me. I understand the two
                                  sequential verbs at the end more readily.
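
                                   Just to convince myself the nesting really is mechanical,
                                   here's a quick Python sketch (my own toy notation) that
                                   linearizes Aux-initial clauses, letting any object slot be
                                   filled by another clause:

                                       def linearize(clause):
                                           """clause = (aux, subject, object, verb); the object
                                           may itself be a clause tuple, nested to any depth."""
                                           aux, subj, obj, verb = clause
                                           obj_words = (linearize(obj) if isinstance(obj, tuple)
                                                        else [obj])
                                           return [aux, subj] + obj_words + [verb]

                                       dog_has_bone = ("did", "dog", "bone", "have")
                                       print(" ".join(linearize(("did", "boy", dog_has_bone, "see"))))
                                       # -> did boy did dog bone have see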

                                  -----------------------------------------------------------------------

                                  On Sat, Jan 5, 2013 at 8:32 AM, Tim Smith <tim.langsmith@...> wrote:
                                  > On 1/4/2013 2:46 PM, Gary Shannon wrote:
                                  [---snip---]

                                  @Tim:

                                  That's very interesting. I'm going to have to study your examples and
                                  see what more I can learn about those languages. It strikes me as a
                                  very elegant solution.

                                  --gary

                                • Jörg Rhiemeier
                                   Message 16 of 22, Jan 5, 2013
                                    Hallo conlangers!

                                    On Saturday 05 January 2013 12:13:43 R A Brown wrote:

                                    > On 04/01/2013 22:02, Jörg Rhiemeier wrote:
                                    > > Hallo conlangers!
                                    > [...]
                                    > > There really is no need to repeat those lengthy threads
                                     > > again, though for different reasons than those And
                                     > > assumes ;)
                                    >
                                    > Yes, indeed - and I will keep my reply short for that reason.

                                    Sure. There is not much to say on this matter any more.

                                    > [...]
                                    >
                                    > > I am completely lost in And's example ;)
                                    >
                                    > Me too - in any case, as far as I can see, it has nothing
                                    > whatever to do with RPN.

                                    Indeed not!

                                    > [snip]
                                    >
                                    > >>> "Stack-based languages" in your extended sense are
                                    > >>> indeed not naturalistic, and indeed aren't even
                                    > >>> languages, but because of the syntaxlessness, not
                                    > >>> the stack-basedness.
                                    > >>
                                    > >> Why is "5 2 -" syntaxless?
                                    > >
                                    > > It can't be syntaxless when a reordering changes the
                                    > > meaning: _2 5 -_ gives a different result, and _5 - 2_
                                    > > gives again a different result which even depends on
                                    > > what is currently on the stack, or a syntax error if the
                                    > > stack is empty ;)
                                    >
                                    > Exactly!! I really do not understand what And is on about
                                    > with all this "syntaxless" business. Of course both PN and
                                    > RPN must have syntax, particularly with regard to the
                                    > subtraction and dividing operators!

                                    Surely, an RPN language has a syntax, even if it is one that can
                                    be parsed very efficiently by von Neumann machines. But I have
                                    a hunch that such a syntax is too simple to cope with the
                                    complexity necessary for a language with the same expressive
                                     power as a human language. As I observed yesterday, I'd
                                    expect Fith to break down when it comes to translating long,
                                    sophisticated literary texts. Talking with people is just not
                                    the same as giving orders to a computer. Confusing the two
                                    vastly overestimates what computers can do, and underestimates
                                    what it means to be sapient. But I have met many IT nerds in
                                    my life who indeed get this wrong ;)
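
                                     To put numbers on the point about _2 5 -_ quoted above,
                                     a bare-bones evaluator (a sketch with one operator and
                                     crude error handling) shows that order *is* the syntax:

                                         def rpn(tokens, stack=()):
                                             """Evaluate an RPN string, optionally on top of
                                             whatever is already on the stack."""
                                             stack = list(stack)
                                             for t in tokens.split():
                                                 if t == "-":
                                                     if len(stack) < 2:
                                                         raise SyntaxError("stack underflow")
                                                     b, a = stack.pop(), stack.pop()
                                                     stack.append(a - b)
                                                 else:
                                                     stack.append(int(t))
                                             return stack

                                         print(rpn("5 2 -"))              # [3]
                                         print(rpn("2 5 -"))              # [-3]: reordering changes the meaning
                                         print(rpn("5 - 2", stack=(9,)))  # [4, 2]: depends on what was on the stack
                                         print(rpn("5 - 2"))              # raises SyntaxError: stack underflow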

                                    On Saturday 05 January 2013 15:09:19 R A Brown wrote:

                                    > On 05/01/2013 09:26, Elena ``of Valhalla'' wrote:
                                    > [snip]
                                    >
                                    > > Don't you need lots of short-term memory to parse
                                    > > complex SOV sentences such as those common in *literary*
                                    > > German?
                                    >
                                    > Those long Ciceronian-type periods! Not sure how fluent
                                    > German readers do it :-)

                                     They balk at overly complex constructions ;) There *are* things
                                    that are syntactically correct in theory but overload one's
                                    short-term memory in practice.

                                    > [...]
                                    >
                                    > If anything goes on in the human brain analogous to anything
                                    > that goes on in a Von Neumann machine, it is surely more
                                    > likely to be tree structures (or even neural _networks_).

                                    Yep.

                                    > [snip]
                                    >
                                    > > the stack of an RPN *programming language* interpreter
                                    > > can hold list of expressions (used to define functions,
                                    > > for conditional clauses, etc.)
                                    >
                                    > Yep - there's an interesting article about real stack-based
                                    > or, more properly, stack-oriented languages here:
                                    > http://en.wikipedia.org/wiki/Stack-oriented_programming_language
                                    >
                                    > But while, because of the limitations of the Von Neumann
                                    > architecture of (home) computers, stack oriented processing
                                    > is very convenient, there's no reason to suppose that the
                                    > human brain, which has evolved over zillions of years, is so
                                    > limited.

                                    Indeed there isn't. I fancy that the human mind is actually a
                                    *quantum* information system of some kind, but I admit that this
                                    idea is sheer speculation. But I seriously doubt that it is a
                                    von Neumann machine!

                                    --
                                    ... brought to you by the Weeping Elf
                                    http://www.joerg-rhiemeier.de/Conlang/index.html
                                    "Bêsel asa Éam, a Éam atha cvanthal a cvanth atha Éamal." - SiM 1:1
                                  • Alex Fink
                                     Message 17 of 22, Jan 7, 2013
                                      On Fri, 4 Jan 2013 14:01:18 -0500, Logan Kearsley <chronosurfer@...> wrote:

                                      >On 4 January 2013 08:18, Jörg Rhiemeier <joerg_rhiemeier@...> wrote:
                                      >> Hallo conlangers!
                                      >>
                                      >> On Friday 04 January 2013 07:36:27 Gary Shannon wrote:
                                      >[...]
                                      >>> So I guess my
                                      >>> question is this: In natlangs, how deep does the deferred elements
                                      >>> stack generally go? What depth does it never exceed? Does anybody have
                                      >>> a handle on these questions?
                                      >>
                                      >> At any rate, "stack depth" (I sincerely doubt that "stack" is the
                                      >> right concept here, we are rather dealing with tree structures here)
                                      >> in human languages is quite limited, and deep center-embedding is a
                                      >> no-no. Most people feel uncomfortable with clauses embedded more
                                      >> than three deep, I think, though some people are capable of handling
                                      >> more.
                                      >
                                      >Optimal parsing algorithms like PCKY certainly make no use of a stack
                                      >structure, but aren't 100% cognitively plausible because a) they
                                      >assume unbounded memory and b) it's simple to observe that humans are
                                      >not optimal parsers.
                                      >I have seen one example (though I'm sure there are probably more) of
                                      >research into a general-purpose parser with human-like memory
                                      >constraints (http://www-users.cs.umn.edu/~schuler/paper-jcl08wsj.pdf)
                                      >which assumes that parsing occurs mainly in short-term working memory,
                                      >you can have only 3-4 "chunks" (containing partial constituents) in
                                      >working memory at any given time, and memory can be saved by
                                      >transforming partial trees to maximize how much stuff you can put into
                                      >one chunk by ensuring that you never have to store complete but
                                      >unattached constituents. The parser is actually implemented as a
                                       >hierarchical hidden Markov model where short-term memory locations are
                                      >represented by a small finite set of random variables whose values are
                                      >partial syntactic trees, but access patterns look the same as access
                                      >patterns for a stack structure, such that it could be equivalently
                                      >represented by a bounded push-down automaton with a maximum stack
                                      >depth of 3-4.
                                      >That model can explain why some examples of center-embedded sentences
                                       >cause interpretation problems in humans while other
                                       >structurally-identical sentences don't, because the probability of
                                      >constructing a certain syntactic structure changes in different
                                      >contexts; thus, garden-path constructions that you are very familiar
                                      >with (and thus which have been programmed into the transition
                                      >probabilities of the HHMM) don't feel like garden-path constructions
                                      >anymore.

                                      On a quick skim, this looks really neat, though I really know nothing about the literature in this area and perhaps it'd seem less comparatively neat once I understood it in more context.

                                      I wonder if this also works well for right-branching languages. They mention work on Japanese so presumably they've thought about it. (Japanese must also have a tagged corpus somewhere, no?)

                                       I don't understand this right-corner transform in actual detail from the meager exemplification given there (at which points am I allowed to generalise the couple of rewritten trees displayed?); all that I managed to get out of it is that it lets them have just one item on the processing stack for each time we switch from being a left child to a right and back to a left as we read from the root to the current node. (Is the number of times that happens the only thing they're claiming a bound on?)

                                      What I'd eventually like to do, in the unutterably distant future, is use something like this in my language generation project, as one has to model parsing to know which structures are subject to replacement for being difficult to parse. But it also seems clear that I won't have just binary trees to work with at that point: I'll have many operations that branch binarily, but some that don't, instead branching ternarily or introducing a big idiom template or doing something more alternation-like or any of the various possibilities that paradigm and/or template based morphological approaches allow (esp. as the dividing line between morphology and syntax can't be assumed to actually exist). I wonder how well this sort of idea of bundling a sequential bunch of partial template-expansions into one stack-consuming operation (and working probabilistically with them) extends to that.
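
                                       To fix intuitions about the bounded push-down behaviour
                                       described above, here's a crude sketch of mine (not the
                                       paper's model - in particular it ignores the right-corner
                                       transform, so it only shows the cost of center-embedding):

                                           MAX_CHUNKS = 4   # the paper's 3-4 working-memory chunks

                                           def center_embed(depth):
                                               """Each level of center-embedding opens a constituent
                                               that can't be closed until everything inside it is."""
                                               stack = []
                                               for level in range(depth):
                                                   if len(stack) == MAX_CHUNKS:
                                                       return "depth %d: overload at level %d" % (depth, level + 1)
                                                   stack.append("S%d" % level)  # one chunk per open clause
                                               while stack:                     # the verbs arrive; unwind
                                                   stack.pop()
                                               return "depth %d: parsed" % depth

                                           for d in (2, 4, 6):
                                               print(center_embed(d))
                                           # depth 2: parsed
                                           # depth 4: parsed
                                           # depth 6: overload at level 5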

                                      Alex