Re: multi-modal language (was: Nika Introduction)
- Parker Glynn-Adey wrote:
>>> Solresol has speaking, singing, color (flags, painting)
>>> inter alia. [...]
>> In this vein I may as well remind people of the project I did once with
>> providing ASL a spoken mode of sorts:
> I feel like Solresol is hackishly multimodal - your project is not. I'll
> come up w. a painted way of writing English, a flagged way, a musical way,
> etc. while on the can. Just come up with some bijection between the roman
> alphabet and the medium of your choice.

This has actually happened, hasn't it? English (and, I'm
sure, other natlangs) have been represented by different
flags; one thinks of Nelson's famous signal before the
battle of Trafalgar: "England expects ......."
I recall very many moons ago as a 'Wolf Cub' (I believe
they're now called 'Cub Scouts') learning semaphore and the
Morse code. Does this make English hackishly multimodal? Maybe.
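That "bijection between the roman alphabet and the medium of your choice" is easy to make concrete. A minimal Python sketch, using the standard Morse codes for a handful of letters (any invertible letter-to-medium mapping would do just as well):

```python
# "Hackish" multimodality as a bijection: map each letter to a code in
# the target medium (here, Morse-style dot-dash strings), and back.
MORSE = {
    "E": ".", "N": "-.", "G": "--.", "L": ".-..",
    "A": ".-", "D": "-..", "S": "...", "O": "---", "T": "-",
}
# The codes are all distinct, so the mapping is invertible.
INVERSE = {code: letter for letter, code in MORSE.items()}

def encode(word):
    # Letters become codes; a space separates letters, as in real Morse.
    return " ".join(MORSE[c] for c in word)

def decode(signal):
    return "".join(INVERSE[tok] for tok in signal.split(" "))

print(encode("ENGLAND"))            # . -. --. .-.. .- -. -..
print(decode(encode("ENGLAND")))    # ENGLAND
```

Nothing about English changes under this transformation; only the medium does, which is exactly why it feels "hackish".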
> Solresol just cheats by having few
> *-memes so the bijections seem more natural.

I think 'cheat' is a little harsh. It has only seven
'phonemes', the seven notes of the major scale from do to
ti/si. Clearly these lend themselves to some fairly obvious
('natural') ways of representation. Personally I would not
call that either hackish or cheating. But, I admit some
representations claimed are hackish and IMO silly; that's
why I put a smiley against the use of gunfire ;)
But I wasn't championing Solresol as a 'multi-modal'
language; I was merely responding to Dana's:
"SASXSEK is intended to eventually be multimodal too.
If/when I ever fully develop it, it will have written
(printed, handscript or tactile), spoken and gestured forms."
It strikes me that any language can be 'multi-modal' if one
merely means that communication in that language can be made
in several different modes, e.g. speech, writing, flags, arm
positions (semaphore), dot-dash sequences, colors, hand &
tactile gesture etc.
I do *not* want to belittle Alex's ASL-spoken and David's
slipa one bit - indeed, I find them both well thought out
and impressive - but I am left wondering what essentially is
the difference between representing signed-language in
written form and providing a spoken language with other
modes of expression?
I can see that representing ASL or any other signed language
in a written representation is on a far greater level of
complexity than that of representing, say, English in other
modes (and representing Solresol in other modes is trivial).
But how does this make one 'truly' multi-modal and the other
not? Is it a question of complexity or is there something
else that I am missing?
These questions are *not* meant to be critical. They are
genuine questions on my part. I feel I must be missing
something and, as my sig says, one is never too old to learn.
Nid rhy hen neb i ddysgu.
There's none too old to learn.
- On Sat, Apr 10, 2010 at 12:37 AM, Nuno-Miguel Raposo wrote:
> I don't see how you can decidedly call it non grammatical. This would be
> like saying written English is not grammatical because the page it is
> written on has no meaning.

No, it's like saying that written English is not 2D-grammatical
because the placement of one letter above another (in a conventional
book) has no meaning. As Alex mentioned, English - like all other
linear writing systems - could just as easily be written on a single
strip of very long, thin paper as on a 2D page.
> My point was never that right means something
> different than left in ASL, but that once ASL has lexicalized space there is
> very much a meaning between pointing left or right.

That's not the same thing at all.
In English for instance, [hi] is an assignable pronoun for male
objects and [Si] is an assignable pronoun for female ones. However,
this is NOT grammaticalized use of consonant space, such that for
example [xi] would be an assignable pronoun for transgendered ones.
They're merely arbitrary points in space that could just as easily be
any other points, and whose meaning is in no way whatsoever related to
their positions in phonological space.
Likewise, ASL pronouns are arbitrarily assigned, and their *positions*
relative to one another in visual space (articulatorily analogous to
phonological space) are completely meaningless.
I think that if you want to argue this you have to accept my analogy
into phonological space, and by reductio ad absurdum your sense is
untenable.

> I didn't actually bring in the ASL mirroring of actual space. But I also
> don't see how just because the language exists in a 3D space all use of
> space is iconic, and why this excludes it from being non-linear.

Where did you get this "all"? I explicitly gave different examples of
grammatical and non-grammatical use of space, supported by
neurological evidence. If damage to Broca's area does not damage it,
it ain't grammar.
Iconicity has nothing to do with linearity; I'm simply excluding it
from the discussion entirely as not grammatical in the first place.
> ASL uses space to describe events in time. When these are being set up, the
> space (time line) is understood, and placement of events, and people also
> place them in time. Things can happen before and after, and do not have to
> be described in any specific order.

This is no different from English. Again, reductio.
> There are specific spaces to produce sign to make clear
> which of these you are talking about. Also you can add more loci (I think
> this most contradicts your point) but the use of this space now adds meaning
> to that loci.

No matter how many loci you add, you cannot change the dimensionality
of Alex's space 2.
> After all this space has been lexicalised, a signer can simply
> sign "banana" and index a space nested in the required meaning. (ie.
> <food><liked>BANANA</liked></food>). Do you still call this linear?

Yes. This is equivalent to having "large" indices LIKED and NONLIKED
which apply to a whole segment of space, and then just assigning new
pronouns/nouns there based on the previous indices.
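One way to picture that reading (my own Python sketch, a hypothetical model rather than a claim about ASL's actual machinery): treat each lexicalised index as a named scope, so signing at a locus amounts to an insertion or lookup under a path of previously assigned indices:

```python
# Hypothetical model: lexicalised indices as nested named scopes.
# Signing BANANA at the locus inside <food><liked> is like inserting
# the sign under the path food -> liked.
space = {"food": {"liked": set(), "nonliked": set()}}

space["food"]["liked"].add("BANANA")   # "banana" indexed at the liked-food locus

print(space["food"]["liked"])          # {'BANANA'}
```

Note that the nesting itself is perfectly expressible in a linear notation (the <food><liked>...</liked></food> string above already does it), which is the point of the "Yes".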
>> One example of something that a NLWS can do that no linear language
>> can do even in principle (AFAICT) is to provide a completely
>> receiver-decided parse order. (Skipping around within an
>> author-created linear order doesn't count.)
> According to your abstract on your web site, this was not a required
> feature, but optional.

You asked for a feature that a NLWS *can* do that something else
cannot. This is such a feature.

It is optional in NLWS in that one *could* grammatically dictate a
particular parse order (or partially so, like a start position).
That's not relevant to the fact that a non-NLWS cannot do it in the
first place.
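The contrast can be made concrete with a toy Python sketch (with "A"/"B"/"C" as stand-in propositions of my own, not anything from the NLWS itself): a linear text fixes exactly one traversal order at authoring time, while an unordered arrangement leaves every order open to the receiver:

```python
import itertools

# Toy contrast: author-fixed order vs. receiver-decided order.
linear_text = ["A", "B", "C"]   # a linear text: exactly one reading order
nlws_page = {"A", "B", "C"}     # an unordered page: no imposed order

# Every order the receiver could choose to parse the unordered page in:
reader_orders = sorted(itertools.permutations(sorted(nlws_page)))

print(len(reader_orders))       # 6: the receiver may pick any of 3! orders
print(linear_text)              # ['A', 'B', 'C']: the author already picked one
```

Skipping around in linear_text is still parasitic on the author's order (you skip *relative to it*), which is what the "doesn't count" caveat above excludes.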
On Sat, Apr 10, 2010 at 1:40 AM, Nuno-Miguel Raposo <nulpoints@...> wrote:
> This is exactly my point, the grammar of ASL is not bound by time in the way
> you are thinking. Even while the signer is *not articulating* spaces are
> actively grammatical.
Both of these statements are borderline absurdist to me.
1. Are you claiming that ASL signers are not time-bound, i.e., they
can somehow articulate such that each of their articulators (each
hand, facial muscles, etc) can be in more than one configuration
(position, handshape, orientation, etc) at a time?
AFAICT it's completely impossible (short of Alien Magic™) for any
language that is expressed by humans (as in the flesh bits) to not be
time-bound. A NLWS is not time-bound because it's not embedded in time
in that way. (Note that a 2+time-D writing system [e.g. what Glide
tries to be] *would* be time-bound and thus necessarily have a 1D
syntax in addition to its 2D syntax.)
2. Are you claiming that the spaces *themselves* are actively
grammatical? As in, a point in space will move of its own accord to
interact with another point in space? This would certainly count (in
that at minimum you could project a NLWS into the space in front of
you), but again requires Alien Magic™ (think e.g. the Dancing Lights
spell from D&D canon).
I think you are seriously confusing two things here: the
existence of persistent variables and the grammatical use of a space.
In English, we have persistent variables too. I can assign "he" at the
beginning of a conversation and still refer back to it several minutes
later (if I haven't overridden it). This doesn't mean that [hi] is in
any way participating in the interim - if you are going to claim that
ASL's persistent pronouns/indexes make it "non-linear", then I think
you are forced to also claim this of all spoken languages with
pronouns. Again, reductio.
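The persistent-variable point fits in a few lines of Python (my analogy: "he" as a key in a discourse context, nothing more):

```python
# Pronoun assignment as persistent variable binding: the binding sits
# inertly in the context until it is overridden; nothing "participates"
# in the interim.
context = {}

context["he"] = "John"   # "John walked in. He ..."
# ... several minutes of unrelated conversation; "he" is not re-bound ...
print(context["he"])     # John - still resolvable, though nothing happened to it

context["he"] = "Bill"   # "Then Bill spoke up. He ..."
print(context["he"])     # Bill - the old binding is simply overridden
```

The dictionary persists between lookups, but it would be odd to say it is "actively grammatical" while no one is speaking; the same goes for [hi], and, on this argument, for ASL's indexed loci.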
> Finger spelling can be removed from the discussion seeing as how it is
> obviously a linear representation of a linear concept. ASL has yet to be
> written down linearly.
Please define "linear" as you use it thrice above. These don't appear
to even be consistent uses.
AFAICT you have not done so, which makes arguing over what is or isn't
linear rather difficult. I accept Alex's description of "space 2" as
my sense - if it's one dimensional, that's linear; if it's not, it's
non-linear.
> "tearing the syntax of L to shreds." I would probably use those words to
> describe most attempts at writing ASL down.
What he described is a drastically more fundamental tearing-apart than
what you are thinking.
For instance, suppose we made FlatASL. In FlatASL, everything works
the same as usual, but you are confined to a 2D plane that intersects
and is parallel to the face. Anything that would be a motion
front-back is instead a motion up-down, and likewise we map the space
assignment (so for instance the YOU index location would be e.g.
above one's forehead, and the ME location would be at one's chest;
I-GIVE-YOU would be the same handshape, movement, and orientation
except that it would go vertically from the new I to the new YOU).
While FlatASL would certainly be severely constrained compared to ASL
(in Alex's sense 1), it would not be fundamentally *broken*. You could
still express everything that you could express in ASL, in about the
same way, with all the same features you've been describing (e.g.
persistence of pronouns/indexes).
Now consider a different case. Take some complex Lego piece which (for
instance) is a functioning wheeled car. Compare it to a description of
how to make that piece.
The instructions, while isomorphic to the actual thing (i.e. they have
exactly the same information, but differ in form), are very seriously
*broken* in comparison. The form is fundamentally relevant, rather
than merely being a more flexible way of encoding it.
As another example, consider the difference between the Mona Lisa and
a pixel-by-pixel description of how to draw the Mona Lisa (e.g. the
contents of a JPG file as written out). Again, they have the same
information, but the form is drastically relevant to the function. It
is simply *not possible* to perceive e.g. her face, or even the
direction in which she's gazing, directly from the latter.
Computer programs that do recognition basically have to reconstruct
the former from the latter - converting a pixel-by-pixel image
description into a 2+D array from which you can extract e.g. neighbor
features. Thus again form is critical.
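That reconstruction step can be sketched in Python (a hypothetical 3x3 grayscale image in row-major order, as a decoded image buffer would be laid out):

```python
# A pixel-by-pixel description is a flat, order-of-mention list; neighbor
# questions ("what is above this pixel?") only make sense after it is
# reshaped into a 2D array - i.e., after the form is restored.
flat = [0, 1, 2,
        3, 4, 5,
        6, 7, 8]   # hypothetical 3x3 image, row-major
width = 3

# Rebuild the 2D form from the 1D description.
grid = [flat[r * width:(r + 1) * width] for r in range(len(flat) // width)]

def neighbor_above(grid, row, col):
    # Defined only once the 2D form exists; the flat list has no "above".
    return grid[row - 1][col] if row > 0 else None

print(grid[1])                     # [3, 4, 5]
print(neighbor_above(grid, 1, 2))  # 2
```

The flat list and the grid are informationally identical; the notion "above" is a fact about the restored form, not about the list.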
There simply aren't very many examples of Alex's space 2 with higher
than one dimension, which is why this is a bit hard to explain by example.