
Re: Verbal Agreements -- Can TC or RC explain them?

  • Jean VALENTIN
    Message 1 of 9
      >I do see this as a problem needing consideration, since various scholars
      >of the synoptic problem in the past, such as William Farmer, view these
      >verbal agreements as proving beyond doubt that Matthew and Luke were not
      >written independently. The assumption is made that the presently accepted
      >Gospel text, and that as of a century ago also, are pretty close to the
      >truth of how the Gospels appeared within a few decades after being
      >written, regarding the presence of these long duplicated strings. I tend
      >to go along with this, but perhaps it places too much reliance upon the
      >work of text critics in deducing the present majority text?
      >
      This is the kind of question I ask myself about work on the Synoptic
      agreements/disagreements - though I have not studied much about this.
      Most of these works, I believe, are made on the basis of the modern
      critical texts which mostly concur with the B-text. What happens if the
      presupposed text is changed for another text closer to D for example, or,
      as you say, a text closer to the Byzantine text? Maybe we would have
      much different theories. Just wondering.

      Jean V.


      _________________________________________________
      Jean Valentin - Bruxelles - Belgique
      e-mail: jgvalentin@... /// netmail: 2:291/780.103
      _________________________________________________
      "Ce qui est trop simple est faux, ce qui est trop complexe est
      inutilisable"
      "What's too simple is wrong, what's too complex is unusable"
      _________________________________________________
      NISUS WRITER - the multilingual word processor for the Macintosh.
      Find more about it at:
      http://www.nisus-soft.com
      http://www.humnet.ucla.edu/humnet/nelc/grads/maschke/nisus_overview/toc.html
      _________________________________________________
    • Timothy John Finney
      Message 2 of 9, Aug 27, 1997
        Jim Deardorff has noticed a statistical pattern in the frequency of
        long parallel strings. I have noticed a similar pattern in the frequency
        of the number of readings in a variation unit.

        Taking the forty-four variation units in Hebrews that are given in the
        UBS edition, there are 22 with two possibilities, 11 with three
        possibilities, 6 with four, 3 with five, and so on.

        Noting this apparent geometric progression, I counted the variation units
        in Romans and added them to the picture. I can't remember the exact
        figures off hand, but they did not form a geometric progression and looked
        a bit like a Fibonacci series. After much trying of distributions, I found
        that the numbers did not fit the Poisson distribution, as one might expect
        for accidental events, but fitted a transformed Gaussian distribution, i.e.
        N = A exp[-(a - bx)^2], where N is the number of variation units with x
        readings. I have worked out A, a, and b, but I haven't got those here either.

        Strange, isn't it? Any statisticians out there who can explain why the
        number of readings in a variation unit appears to obey such a law?
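
        For anyone who wants to play with these figures, here is a minimal Python
        sketch (illustrative only -- not how the constants above were obtained, and
        it assumes numpy is available) that fits both candidate shapes to the
        Hebrews counts quoted above. In log space the transformed Gaussian
        N = A exp[-(a - bx)^2] is a quadratic in x and a geometric progression
        N = A r^x is a straight line, so both can be fitted with a polynomial
        fit on the log counts:

        import numpy as np

        x = np.array([2, 3, 4, 5])     # readings per variation unit (Hebrews, UBS)
        n = np.array([22, 11, 6, 3])   # variation units with that many readings

        quad = np.polyfit(x, np.log(n), 2)   # transformed Gaussian (log-quadratic)
        lin = np.polyfit(x, np.log(n), 1)    # geometric progression (log-linear)

        print("transformed Gaussian:", np.round(np.exp(np.polyval(quad, x)), 1))
        print("geometric:           ", np.round(np.exp(np.polyval(lin, x)), 1))
        print("observed:            ", n)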

        Best regards,

        Tim Finney

        finney@...
        Baptist Theological College
        and Murdoch University
        Perth, W. Australia
      • James R. Adair
        Message 3 of 9, Aug 27, 1997
          On Thu, 28 Aug 1997, Timothy John Finney wrote:

          > Taking the forty four variation units in Hebrews that are given in the
          > UBS edition, there are 22 with two possibilities, 11 with three
          > possibilities, 6 with four, 3 with 5, and so on.

          My initial impression as to why the number of variants in a variation unit
          seems to follow Fibonacci or transformed Gaussian patterns (and I'd have to
          look up the latter!) is that it is largely accidental. If instead of
          analyzing the variation units in the UBS apparatus you look at those in
          the NA-27 apparatus, your numbers will change completely, and I would
          guess that the curve would be different as well. Again, if you look at
          the apparatus of a major critical edition, the curve will change once
          again, although I think the curve that you would get from examining
          Tischendorf or Von Soden or IGNTP would be more meaningful than UBS.
          Although it may be that variation units with small numbers of variants are
          more common than those with large numbers, the curves that are generated
          will depend to a very large extent on the sampling technique: how many mss
          do you collate? do you count purely orthographic variants? how do you
          group clusters of variants? Finally, if we do find that plotting the
          number of variants in the variation units throughout a NT book produces a
          curve that can be approximated by some mathematical formula, what are the
          implications of this discovery?

          Jimmy Adair
          Manager of Information Technology Services, Scholars Press
          and
          Managing Editor of TELA, the Scholars Press World Wide Web Site
          ---------------> http://scholar.cc.emory.edu <-----------------
        • Robert B. Waltz
          Message 4 of 9, Aug 28, 1997
            On Thu, 28 Aug 1997, Timothy John Finney <finney@...>
            wrote:

            [ ... ]

            >Taking the forty four variation units in Hebrews that are given in the
            >UBS edition, there are 22 with two possibilities, 11 with three
            >possibilities, 6 with four, 3 with 5, and so on.
            >
            >Noting this apparent geometric progression, I counted the variation units
            >in Romans and added them to the picture. I can't remember the exact
            >figures off hand, but they did not form a geometric progression and looked
            >a bit like a Fibonacci series. After much trying of distributions, I found
            >that the numbers did not fit the Poisson distribution, as one might expect
            >for accidental events, but fitted a transformed Gaussian distribution i.e.
            >N = A exp -(a - bx)^2, where N is the number of variation units with x
            >readings. I have worked out A, a, and b, but I haven't got those here either.
            >
            >Strange, isn't it? Any statisticians out there who can explain why the
            >number of readings in a variation unit appear to obey such a law?

            And "James R. Adair" <jadair@...> added:

            >My initial impression for why the number of variants in a variation unit
            >seem to follow Fibonacci or transformed Gaussian patterns (and I'd have to
            >look up the latter!) is that it is largely accidental. If instead of
            >analyzing the variation units in the UBS apparatus you look at those in
            >the NA-27 apparatus, your numbers will change completely, and I would
            >guess that the curve would be different as well. Again, if you look at
            >the apparatus of a major critical edition, the curve will change once
            >again, although I think the curve that you would get from examining
            >Tischendorf or Von Soden or IGNTP would be more meaningful than UBS.
            >Although it may be that variation units with small numbers of variants are
            >more common than those with large numbers, the curves that are generated
            >will depend to a very large extent on the sampling technique: how many mss
            >do you collate? do you count purely orthographic variants? how do you
            >group clusters of variants? Finally, if we do find that plotting the
            >number of variants in the variation units throughout a NT book produces a
            >curve that can be approximated by some mathematical formula, what are the
            >implications of this discovery?

            Several comments on this (since it's a subject I've looked at in some
            depth). First, Tim's sample is small, so a pattern might not tell us
            much. But more important is the question of "What is a variant?"
            This has been dealt with elsewhere, but just take the case of
            O (DE/GAR) (QEOS/LOGOS). One variant with four readings? Or two with
            two readings each? The number of readings in a variant will depend
            in large measure on the editors of the edition -- e.g. the users of
            Claremont Profile Method type systems seem very much to prefer
            binary (two-reading) variants.

            Then, too, part of the difference between UBS and NA is that UBS cites
            many more manuscripts. This will naturally increase the number of
            minor variants in lesser manuscripts.

            But let's assume, for the sake of the argument, that we have come up
            with an absolutely hard-and-fast system for deciding "what is a variant"
            (there are several articles in Epp & Fee on this, though I don't think
            they've resolved the matter). Let's say that we have created a fixed
            list of manuscripts. Assume whatever we need to in order to get a set of numbers.
            What do they mean?

            Now we have to remember that there are two sources of readings. One is
            ancient readings, of genetic significance. The other is accident. Based
            on my observation, variants where there are two readings (except for
            haplographic errors) are usually genetic. Variants where there are
            four or more, by contrast, almost never fall along text-type lines.
            An analogy I have used is that of a crystal: Tap it lightly and it
            will break at a facet (a text-type division). To make it break
            into four or more pieces, you have to hit it so hard that the crystal,
            rather than breaking along facets, *shatters.*

            What this shows is that one needs also to examine the nature of variants.
            Is it a reading that invites errors of some sort? Is it a difficult
            reading that invites corrections? Such variants will generally have more readings
            (I think).

            As for Jimmy Adair's question about what this tells us, I think the
            answer is that we need to know the distribution first. Potentially it
            could tell us something about text-types or scribal habits -- but we
            can't base any conclusions simply on the readings in the UBS edition.
            Nor even, I would argue, on the readings in NA27, unless we can add
            many more manuscripts to the collations.

            For what it's worth....


            -*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-

            Robert B. Waltz
            waltzmn@...

            Want more loudmouthed opinions about textual criticism?
            Try my web page: http://www.skypoint.com/~waltzmn
            (A site inspired by the Encyclopedia of NT Textual Criticism)
          • Jim Deardorff
            Message 5 of 9, Aug 28, 1997
              I'm still interested in hearing if any members of this list think that tc
              can explain the verbal agreements between Mt-Lk's "Q" verses in which
              strings of up to 27 consecutive words are duplicated. If so, what
              might be the most likely means?

              Those who work on the synoptic problem but don't buy into the two-document
              hypothesis (2DH: Mark-Q priority) believe that redaction criticism (RC) is
              responsible for it. Those who favor the 2DH, however, must feel that tc
              can explain it, as they assume that Matthew and Luke were written
              independently, and that Matthew was (first) written in Greek, so that
              no translator was involved either. But I've never seen any proposed
              explanation of how scribal assimilation or "harmonistic corruptions," as
              Metzger calls them, could leave behind a frequency distribution that
              follows the exponential or geometric-progression distribution very well
              for Q-verse word strings up to 10 words long, and then deviates radically
              from it for the longer strings. For the one test case I could find in
              which independence of the translator/editor can probably be assumed, the
              distribution followed the geometric progression all the way out to 10 or
              so, with none of these very long strings existing.

              I've checked out the 9 longest duplicate strings of "Q" using both N-S 26
              and N-S 21, and they stay unchanged during that interval of time. If any
              here are interested, you might look into N-S 27 to see if they survive
              there also, unchanged.

              They are:
              27 consecutive, perfectly agreeing, words long:
              in Mt 11:25-27 & Lk 10:21-22
              26, in Mt 24:50-51 & Lk 12:46
              26, in Mt 6:24 & Lk 16:13
              25, in Mt 8:9-10 & Lk 7:8-9
              24, in Mt 7:7-8 & Lk 11:9-10
              24, in Mt 3:8-10 & Lk 3:8-9
              20, in Mt 3:10 & Lk 3:9
              19, in Mt 11:7-8 & Lk 7:24-25
              16, in Mt 12:42 & Lk 11:31

              The excessive word-string lengths continue on down to lengths of 10 or 12
              words, below which there are so many more shorter strings that the normal
              geometric progression prevails.
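
              For what it's worth, these string lengths can be tabulated mechanically.
              Here is a sketch of the idea in Python (the two word lists below are
              placeholders only -- real use would tokenize the Greek text of, say,
              Mt 11:25-27 and Lk 10:21-22 from whichever edition one trusts):

              def longest_common_run(a, b):
                  """Length of the longest run of consecutive identical words
                  shared by the word lists a and b."""
                  best = 0
                  prev = [0] * (len(b) + 1)   # run lengths ending at b[j-1]
                  for i in range(1, len(a) + 1):
                      curr = [0] * (len(b) + 1)
                      for j in range(1, len(b) + 1):
                          if a[i - 1] == b[j - 1]:
                              curr[j] = prev[j - 1] + 1
                              best = max(best, curr[j])
                      prev = curr
                  return best

              # Placeholder word lists; with the real text of this pair the list
              # above says the result should come out to 27.
              matthew = "PLACEHOLDER TEXT OF MT 11 25 27".split()
              luke = "PLACEHOLDER TEXT OF LK 10 21 22".split()
              print(longest_common_run(matthew, luke))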

              From what I read in Metzger (I hope his book doesn't have very many
              serious mistakes; I certainly find it fascinating reading as a beginner in
              this field), there are a lot of mechanisms that can cause variant
              readings: all the errors that come under the "accidental" heading and
              most of the "intentional" scribal errors. Against these you have the
              "harmonistic corruptions." Is it reasonable to explain the nine verbal
              agreements listed above, and another 20 or so, as due to the latter? One
              would then need to explain why a whole host of shorter strings also did
              not get harmonized; i.e., why would the scribal harmonization be so
              selective? Although a few of the above nine verses are among the more
              memorable discourse verses, others are not.

              I do see this as a problem needing consideration, since various scholars
              of the synoptic problem in the past, such as William Farmer, view these
              verbal agreements as proving beyond doubt that Matthew and Luke were not
              written independently. The assumption is made that the presently accepted
              Gospel text, and that as of a century ago also, are pretty close to the
              truth of how the Gospels appeared within a few decades after being
              written, regarding the presence of these long duplicated strings. I tend
              to go along with this, but perhaps it places too much reliance upon the
              work of text critics in deducing the present majority text?

              Jim Deardorff
            • Jim Deardorff
              Message 6 of 9, Aug 28, 1997
                Jean VALENTIN wrote:

                > >I do see this as a problem needing consideration, since various scholars
                > >of the synoptic problem in the past, such as William Farmer, view these
                > >verbal agreements as proving beyond doubt that Matthew and Luke were not
                > >written independently. The assumption is made that the presently accepted
                > >Gospel text, and that as of a century ago also, are pretty close to the
                > >truth of how the Gospels appeared within a few decades after being
                > >written, regarding the presence of these long duplicated strings. I tend
                > >to go along with this, but perhaps it places too much reliance upon the
                > >work of text critics in deducing the present majority text?

                > This is the kind of question I ask myself about work on the Synoptic
                > agreements/disagreements - though I have not studied much about this.
                > Most of these works, I believe, are made on the basis of the modern
                > critical texts which mostly concur with the B-text. What happens if the
                > presupposed text is changed for another text closer to D for example, or,
                > as you say, a text closer to the Byzantine text? Maybe we would have
                > much different theories. Just wondering.
                >
                > Jean V.

                Someone with access to a copy of D could check this out (I think it would
                be too awkward to try to check it out just using the critical apparatus of
                N-S). But it's rather time consuming, and after you're finished someone
                would suggest you ought to check it out on still another manuscript!

                Jim
              • Timothy John Finney
                Message 7 of 9, Sep 1, 1997
                  Here are the figures which I alluded to before:

                  NOS = number of states = number of readings in a variation unit.
                  FRQ = frequency = how often a given number of states occurs in the sampled
                  variation units.
                  FIT = fitted value using the equation F(n) = C x exp[-(a + bn)^2/2], C =
                  399, a = 1.50, b = 0.23.

                  Hebrews + Romans (for variation units listed in the UBS 4th edn apparatus)

                  NOS   1    2    3    4    5    6    7+
                  FRQ   ?   58   36   21   11    5    3
                  FIT  89   58   36   21   12    6    6

                  As you can see, the fit is quite good for 2 to 6 states.
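
                  The FIT row can be checked directly from the equation and constants
                  above; a short Python sketch (not how the values were originally
                  computed, but it reproduces them):

                  import math

                  C, a, b = 399.0, 1.50, 0.23
                  frq = {2: 58, 3: 36, 4: 21, 5: 11, 6: 5, 7: 3}   # observed FRQ row

                  def fit(n):
                      return C * math.exp(-(a + b * n) ** 2 / 2)

                  for n in range(1, 7):
                      print(n, round(fit(n)), frq.get(n, "?"))
                  # prints 89, 58, 36, 21, 12, 6 for n = 1 to 6, matching the FIT row.
                  # The FIT figure of 6 for "7+" is the tail sum 3 + 1.5 + 0.75 + ...,
                  # as explained below, not fit(7) itself.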

                  Jimmy Adair is right to point out that the curves that are generated will
                  depend to a very large extent on the sampling technique. If there is an
                  underlying law which obeys this equation, then using a different edition,
                  or simply widening the scope from Hebrews and Romans to the whole Pauline
                  corpus in the UBS edition, will change the constants of the equation but
                  not its shape. In other words, changing the sample size or sampling
                  technique will generate new members of one family of equations.

                  Bob Waltz is right to say that the definition of a variation unit will
                  also affect the results. This is a sticky problem. (Perhaps someone will
                  one day come up with an indisputable way of defining the density of
                  variation at consecutive places in the text). Nevertheless, no matter how
                  the UBS Committee arrived at the given arrangements of variation units and
                  their readings, it still seems strange to me that they should appear to
                  fit such an equation. Hence my request for a statistician to enlighten us
                  concerning possible causes.

                  On the significance of the predicted 89 units with 1 state, I take this to
                  mean that if 228 sections of the UBS text (89 + 58 + 36 + 21 + 12 + 6 + 6)
                  with a certain standard size were examined, on average 89 (39%) would
                  display no variation, 58 (25%) would have two possible readings, 36 (16%)
                  would have three, and so on. Romans and Hebrews together have about 12,036
                  words, resulting in this standard size being about 53 words.

                  One last note. The figure of 6 I inserted for 7 or more states is the sum
                  to infinity of a geometric progression which starts with half the
                  preceding value of 6 and halves at every step (3 + 1.5 + .75 + ... = 6).


                  Best regards,

                  Tim Finney

                  finney@...
                  Baptist Theological College
                  and Murdoch University
                  Perth, W. Australia
                • Robert B. Waltz
                  Message 8 of 9, Sep 1, 1997
                    On Mon, 1 Sep 1997, Timothy John Finney <finney@...> wrote:

                    >Here are the figures which I alluded to before:
                    >
                    >NOS = number of states = number of readings in a variation unit.
                    >FRQ = frequency = how often a given number of states occurs in the sampled
                    >variation units.
                    >FIT = fitted value using the equation F(n) = C x exp[-(a + bn)^2/2], C =
                    >399, a = 1.50, b = 0.23.
                    >
                    >Hebrews + Romans (for variation units listed in the UBS 4th edn apparatus)
                    >
                    >NOS 1 2 3 4 5 6 7+
                    >
                    >FRQ ? 58 36 21 11 5 3
                    >FIT 89 58 36 21 12 6 6
                    >
                    >As you can see, the fit is quite good for 2 to 6 states.

                    Almost too good to be true. :-) But I'll return to this point below.

                    >Jimmy Adair is right to point out that the curves that are generated will
                    >depend to a very large extent on the sampling technique. If there is an
                    >underlying law which obeys this equation, then using a different edition,
                    >or simply widening the scope from Hebrews and Romans to the whole Pauline
                    >corpus in the UBS edition, will change the constants of the equation but
                    >not its shape. In other words, changing the sample size or sampling
                    >technique will generate new members of one family of equations.
                    >
                    >Bob Waltz is right to say that the definition of a variation unit will
                    >also affect the results. This is a sticky problem. (Perhaps someone will
                    >one day come up with an indisputable way of defining the density of
                    >variation at consecutive places in the text). Nevertheless, no matter how
                    >the UBS Committee arrived at the given arrangements of variation units and
                    >their readings, it still seems strange to me that they should appear to
                    >fit such an equation. Hence my request for a statistician to enlighten us
                    >concerning possible causes.

                    As an experiment, I took a bunch of data which I had on hand --
                    the readings of all uncials and papyri, plus the minuscules 330 1739,
                    in Colossians 1. This proved to be a bit more complicated than
                    it sounds, because of nonsense readings and scribal errors. I did
                    my best to treat these realistically, and came up with the following
                    numbers (out of 71 variants):

                    NOS   1    2    3    4    5    6    7+
                    FRQ   -   49   17    3    2    0    0

                    Let's rewrite the above formula as my calculator understands it:

                    FIT = 399 exp[-(0.23n + 1.5)^2 / 2]

                    This gives a total of 58+36+21+12+6+6 = 139 readings.

                    Normalizing to percents gives us

                    NOS   1    2    3    4    5    6    7+
                    FIT   -   42   26   15    9    4    4

                    Over 71 readings, this gives us

                    NOS        2    3    4    5+
                    Expected  30   18   11   12
                    Actual    49   17    3    2

                    So the fit doesn't work -- although I agree that the data does look
                    exponential. (I'm too lazy to fit my own data. :-)
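
                    Spelled out as a small Python sketch (the FIT row is Tim's, the
                    observed counts are the Colossians 1 figures above; the expected
                    values for 5, 6 and 7+ together make up the 12 shown in the table):

                    fit = {2: 58, 3: 36, 4: 21, 5: 12, 6: 6, 7: 6}   # Tim's FIT row, 7 = "7+"
                    obs = {2: 49, 3: 17, 4: 3, 5: 2, 6: 0, 7: 0}     # Colossians 1

                    total_fit = sum(fit.values())   # 139
                    total_obs = sum(obs.values())   # 71

                    for n in sorted(fit):
                        expected = fit[n] / total_fit * total_obs
                        print(n, round(expected, 1), obs[n])
                    # expected roughly 30, 18, 11, 6, 3, 3 against observed 49, 17, 3, 2, 0, 0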

                    The problem is, we are dealing with *three* variables:

                    1. The definition of a variant
                    2. The method of selecting variants
                    3. The number and nature of the manuscripts in the sample set.

                    Given that (1) has only a vague definition, (2) has as yet no definition
                    at all, and (3) is something that needs to be explored, perhaps we
                    shouldn't expect much at this point.

                    Also keep in mind that we are dealing with very few data points here --
                    in Tim's set, only six (data for 2, 3, 4, 5, 6, and 7+-order variants);
                    in mine, an even smaller 4-point data set (or, arguably, 5; we could
                    throw in the results for 6+).

                    I suspect Tim is right, and there is an exponential fall-off. But
                    with only six data points, and a monotonically decreasing function,
                    we could get a good fit for an exponential even if the actual function
                    were of some other form.

                    Now note: I think this is a very important subject to pursue. The
                    mean number of significant variant readings at each point of
                    variation has an immense impact on the statistics we can use to
                    compare manuscripts. I just think we need a greater degree of
                    rigour here (sorry, Tim. :-)

                    >On the significance of the predicted 89 units with 1 state, I take this to
                    >mean that if 228 sections of the UBS text (89 + 58 + 36 + 21 + 12 + 6 + 6)
                    >with a certain standard size were examined, on average 89 (39%) would
                    >display no variation, 58 (25%) would have two possible readings, 36 (16%)
                    >would have three, and so on. Romans and Hebrews together have about 12,036
                    >words, resulting in this standard size being about 53 words.

                    I think this last is a statement that needs to be clarified. Your
                    actual claim is that 39% of your 53 word samples would show *no variant
                    of interest to the UBS committee*. (The fact is, of course, that there
                    are variants in just about every word of the NT). But this, in turn,
                    gives us some problems. There are instances in the UBS text of as
                    many as 3 variants in a single verse. (UBS3 had four variants in
                    Hebrews 13:21; one of them was dropped in UBS4). Taking Hebrews 13:21
                    as an example, the verse is 30 words long. The variants show
                    3, 2, and 2 readings. Would this be considered a single point
                    of variation with 12 readings (3x2x2) or something else?

                    -*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-

                    Robert B. Waltz
                    waltzmn@...

                    Want more loudmouthed opinions about textual criticism?
                    Try my web page: http://www.skypoint.com/~waltzmn
                    (A site inspired by the Encyclopedia of NT Textual Criticism)
                  • Jim Deardorff
                    Message 9 of 9, Sep 1, 1997
                      On Mon, 1 Sep 1997, Timothy John Finney wrote:

                      > Here are the figures which I alluded to before:
                      >
                      > NOS = number of states = number of readings in a variation unit.
                      > FRQ = frequency = how often a given number of states occurs in the sampled
                      > variation units.
                      > FIT = fitted value using the equation F(n) = C x exp[-(a + bn)^2/2], C =
                      > 399, a = 1.50, b = 0.23.
                      >
                      > Hebrews + Romans (for variation units listed in the UBS 4th edn apparatus)
                      >
                      > NOS 1 2 3 4 5 6 7+
                      >
                      > FRQ ? 58 36 21 11 5 3
                      > FIT 89 58 36 21 12 6 6
                      >
                      > As you can see, the fit is quite good for 2 to 6 states.

                      Tim,

                      You do have an excellent fitting equation there. But with four parameters
                      at your disposal (C, a, b, and the ^2 rather than some other power) and
                      only 5 or 6 data points, it had better fit pretty well!

                      The amount of data you have above is around 50% (though still less in one
                      case) of what the Gospel parallels provide for duplicate word strings. If
                      it were of comparable length, it's quite possible that if you were to
                      require a fit by a two-parameter curve, the simple exponential or
                      geometric progression would work better than anything else, once you
                      decided on just what the rules are for counting variants, etc.
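
                      To illustrate the point, a two-parameter geometric progression can be
                      fitted to the FRQ row with a simple log-linear least-squares fit; a
                      minimal sketch (numpy assumed, and the fit is only illustrative):

                      import numpy as np

                      n = np.array([2, 3, 4, 5, 6, 7])         # 7 stands for "7+"
                      frq = np.array([58, 36, 21, 11, 5, 3])   # Tim's FRQ row

                      slope, intercept = np.polyfit(n, np.log(frq), 1)
                      A, r = np.exp(intercept), np.exp(slope)

                      print("A = %.0f, r = %.2f" % (A, r))
                      print("geometric fit:", np.round(A * r ** n, 1))
                      print("observed:     ", frq)
                      # the two-parameter curve already tracks the observed row fairly closely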

                      But I don't see what gain in knowledge that would produce. Surely those
                      verses or sentences that exhibit an unusually large number of variants will
                      be relatively rare, if for no other reason than by definition of
                      "unusual." So you're almost bound to find some monotonically decreasing
                      curve that will approximate the distribution. In one case the very few
                      rare values of FRQ = 1 or 2 for large n may occur at n = 8, 9, and 11, say,
                      and in another at n = 8, 10, and 13, say -- this I have been referring to as
                      "sampling error."

                      Although a similar statement could be made of the Gospels' duplicate
                      word-string parallels, there we run into the peculiarity that what occurs
                      in the region you labelled "7+" exhibits far too many occurrences to be at
                      all consonant with the monotonic fall-off shown for n = 3, 4, 5, 6.

                      After finding zero occurrences for n = 8 or 10 or 12, do you then notice
                      one or two instances of an occurrence for n = 13 and another for n = 16 and
                      17? If so, I think you'd be interested in knowing why -- was the sentence
                      so difficult to understand, relative to almost all other sentences in the
                      gospel/book, that it caused a huge number of variants? Or was the grammar
                      of the sentence so bad in the earliest ms that it caused later dependent
                      mss to correct it in many different ways? If so, why were these anomalous
                      sentences so much more anomalous than others that they caused a disruption
                      in the monotonic decrease of FRQ with n, with an upturn after many zeroes?
                      Or can such anomalies be simply explained as inevitable sampling error?

                      With the Gospels' duplicate word-string parallels, one can also examine
                      the anomalies; the importance is that one can see whether they fit into
                      any proposed solutions of the Synoptic Problem -- bolstering one and ruling
                      others out. Although this latter problem may not be of particular
                      interest to TC, I still wonder if TC can hazard any guesses as to whether
                      or not harmonistic corruption could have caused the anomalies in the
                      frequency distribution at large values of n or "I," as I called it.

                      Jim Deardorff