
Re: Frequency of variants for different NT books

  • yennifmit
    Message 1 of 11 , May 9, 2013
      Hi David,

      Some responses are interspersed below...

      > I wonder if
      > there's any way you could do the following:
      > 1. Have an 'interactive' version of your rotating GIF, so that people could zoom in and/or slow it down, in order
      > to look at the details;

      To do this you need to install R on your computer. (Download from http://www.r-project.org/.) Then download my R scripts and data files from the Views site. (It would probably be a good idea to use the directory structure found at the Views site, especially data/, dist/, scripts/, cmds/, dc/.) Some instructions which may help are at section 1.2 of tfinney.net/ATV/index.html. Once everything is in place, you will be able to produce CMDS maps which can be manipulated (rotation, zooming) through mouse movements.
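      The core of a CMDS map is classical multidimensional scaling: turning a matrix of pairwise distances between witnesses into coordinates that can be plotted and rotated. Here is a minimal sketch of that computation in Python rather than R, purely for illustration; the witness labels and distance values are invented toy data, not taken from the Views data sets.

```python
import numpy as np

def cmds(D, k=3):
    """Classical multidimensional scaling: embed points in k dimensions
    so that pairwise Euclidean distances approximate the input matrix D."""
    n = D.shape[0]
    # Double-centre the squared distance matrix
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    # Eigendecomposition of the symmetric matrix B; keep the k largest eigenvalues
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# Toy distance matrix for four hypothetical witnesses:
# the first two are close to each other, as are the last two.
D = np.array([[0.0, 0.2, 0.7, 0.7],
              [0.2, 0.0, 0.7, 0.7],
              [0.7, 0.7, 0.0, 0.3],
              [0.7, 0.7, 0.3, 0.0]])
X = cmds(D, k=2)  # one row of coordinates per witness
```

      In R the same result comes from `cmdscale()`, and the interactive rotation and zooming described above is what packages such as `rgl` provide on top of those coordinates.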

      > 2. Create 'blocks' for the different 'sections' of the synoptics i.e. three sondergut blocks, three 'double
      > tradition' blocks, and a 'triple tradition' block (7 in all) and see how the variants in each block group together.

      The data sets can be sliced by, e.g., selecting particular variation units. The data sets based on the INTF's Parallel Pericopes volume (INTF-Parallel) might be amenable. It sounds like an interesting concept but one that someone besides me will have to pursue.
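      Slicing by variation units is mechanically simple: keep only the columns (units) belonging to the block of interest. A sketch in Python, with invented witness sigla and unit labels standing in for whatever labelling scheme a real data set uses:

```python
# Hypothetical readings table: each witness maps variation-unit
# labels to the reading it attests. Labels here are made up.
readings = {
    "P75": {"Mt.3.1/4": "a", "Mk.1.2/6": "b", "Lk.3.4/2": "a"},
    "B":   {"Mt.3.1/4": "a", "Mk.1.2/6": "b", "Lk.3.4/2": "b"},
    "A":   {"Mt.3.1/4": "c", "Mk.1.2/6": "a", "Lk.3.4/2": "b"},
}

def slice_units(readings, keep):
    """Keep only the variation units listed in `keep` for every witness."""
    return {w: {u: r for u, r in units.items() if u in keep}
            for w, units in readings.items()}

# e.g. restrict the analysis to a block of Matthean units
matthew_block = slice_units(readings, {"Mt.3.1/4"})
```

      Running the same distance-and-CMDS pipeline on each sliced block would then show how the grouping differs between, say, triple-tradition and sondergut material.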

      > Then, something else that may or may not show anything. I got to thinking about accidental (as opposed to deliberate)
      > variants. My hypothesis is that accidental variants in documents in any one group are likely to obscure the grouping
      > effect (assuming that accidental variants are essentially random), and that removing them from the analysis would
      > therefore tend to sharpen the results.

      The noise of random effects does obscure relationships. Being able to remove accidental agreements would make relationships clearer. The problem is knowing how to identify which agreements are accidental. One must beware not to introduce bias. I therefore prefer to do the minimum possible amount of vetting before analysis, apart from rejecting distances derived from too few variation sites. There is already a fair bit of selection built into the data sets as many readings are dropped because they are nonsense, orthographical variations, or not thought worth including.
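      The one vetting step mentioned above, rejecting distances derived from too few variation sites, can be sketched as follows. This is an illustrative Python version of a simple percentage-disagreement distance, not the actual code used for the Views data sets; the threshold value and data are invented.

```python
def distance(a, b, min_shared=10):
    """Percentage-disagreement distance between two witnesses.

    a and b map variation-unit labels to readings; units where a
    witness is lacunose are simply absent from its dict. Returns
    None when too few units are shared for the distance to be
    trustworthy."""
    shared = set(a) & set(b)
    if len(shared) < min_shared:
        return None
    disagree = sum(1 for u in shared if a[u] != b[u])
    return disagree / len(shared)

# Toy witnesses: two shared units, one disagreement
a = {"u1": "a", "u2": "b", "u3": "a"}
b = {"u1": "a", "u2": "c"}
d = distance(a, b, min_shared=2)
```

      Distances returned as None would simply be omitted from the matrix fed to the scaling step, rather than guessed at.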

      > Also, if an accidental change produces a nonsense variant, it is likely that the
      > nonsense variant would (in later mss) be changed to one or more sensible variants at the same variation site, possibly
      > all different to the original.

      I agree. I think this mechanism is responsible for the genesis of many variations.

      > So, if it was possible to include the mss date as a factor in the analysis we might be
      > able to see that several different variants at the same site might actually be part of the same group. Am I making any
      > sense?

      Yes. However, there are some complicating factors. (1) Manuscript dating is very rubbery. Plus or minus 50 years is not an unreasonable rule of thumb for palaeographical dating. (I'd like to see some of the NT papyri carbon-dated to see whether the palaeographical dates are any good. It only takes about one square centimetre of papyrus.) (2) The date of a manuscript is not the date of the readings it carries. Some, yes; many, no. (3) There is the survival problem. The further back one goes, the less one has as a proportion of what once existed. My guess is that we have between one hundredth and one thousandth of the NT manuscripts that existed in the second and third centuries. Trying to see patterns in the development of readings given such a sparse sample may be problematic. Nevertheless, readings are tenacious, so we are likely to have many of the most popular ones despite the gaps in the record. The INTF's CBGM (coherence-based genealogical method) works by looking at the readings at every variation site and choosing which readings gave rise to which.

      The thought has occurred to me that if dates were included, one might be able to see some kind of general progression in, say, a time animation of a CMDS map. (Witnesses would fade in and out as the animation clock went through their dates. I can imagine such a thing but wouldn't like to try making it.) There would be general convergence towards the Byzantine cloud later on. The Egyptian cluster would dominate at the beginning, largely due to most early papyri being from Egypt.


      Tim Finney