RE: [textualcriticism] Re: Frequency of variants for different NT books
Tim, thank you. I spent most of my morning (well, 2 hours at least) reading through your methodology and the results. It’s amazing stuff, and must have taken a lot of effort. Thank you for having done it. Having said that, I wonder if there’s any way you could do the following:
1. Have an ‘interactive’ version of your rotating GIF, so that people could zoom in and/or slow it down, in order to look at the details;
2. Create ‘blocks’ for the different ‘sections’ of the synoptics i.e. three sondergut blocks, three ‘double tradition’ blocks, and a ‘triple tradition’ block (7 in all) and see how the variants in each block group together.
Then, something else that may or may not show anything. I got to thinking about accidental (as opposed to deliberate) variants. My hypothesis is that accidental variants in documents in any one group are likely to obscure the grouping effect (assuming that accidental variants are essentially random), and that removing them from the analysis would therefore tend to sharpen the results. Also, if an accidental change produces a nonsense variant, it is likely that the nonsense variant would (in later mss) be changed to one or more sensible variants at the same variation site, possibly all different to the original. So, if it was possible to include the mss date as a factor in the analysis we might be able to see that several different variants at the same site might actually be part of the same group. Am I making any sense?
Anyway, it’s fascinating stuff, and I look forward to any more insights that may come out of it.
David Inglis, Lafayette, CA, 94549, USA
This might interest you:
N.B. this is a draft. I'm only up to Mark in the discussion. The data matrices, distance matrices, CMDS results, and DC results shown in the "Data sets and analysis results" table are not likely to change. (Although more rows are being added to the table as data becomes available.)
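For readers unfamiliar with the pipeline, here is a minimal sketch of how a data matrix of readings becomes a distance matrix. The witness names and readings below are invented for illustration (my actual workflow uses R scripts on the Views site); the distance shown is a simple mismatch proportion computed over variation units where both witnesses are extant.

```python
# Hypothetical data matrix: each witness's reading ("a", "b", ...) at four
# variation units; None marks a lacuna (witness not extant at that unit).
data = {
    "P45": ["a", "b", "a", None],
    "B":   ["a", "a", "a", "a"],
    "D":   ["b", "b", "b", "a"],
}

def distance(x, y):
    """Proportion of disagreements at variation units where both
    witnesses are extant (simple mismatch distance)."""
    shared = [(r1, r2) for r1, r2 in zip(x, y)
              if r1 is not None and r2 is not None]
    if not shared:
        return None  # undefined: no overlapping variation units
    return sum(r1 != r2 for r1, r2 in shared) / len(shared)

witnesses = list(data)
dist = {(w1, w2): distance(data[w1], data[w2])
        for w1 in witnesses for w2 in witnesses}
```

A distance matrix like `dist` is the input to both the CMDS and DC stages; in practice one would also reject any distance derived from too few shared variation units.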
I am convinced that Streeter's theory of local texts is a useful hypothesis for explaining these analysis results. It seems to me that many of the analysis results point to four ancient varieties of the text:
Cf. the CMDS result for Mark based on UBS4 data:
How to label these clusters is problematic. I follow the TC convention of using scare quotes to say "these are just labels, to be taken with a pinch of salt." My preference is to use group medoids (identified through PAM analysis) as labels, because a medoid is the most central member of its group (if the group has more than two members).
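The medoid idea can be shown directly from a distance matrix: the medoid is the member with the smallest summed distance to the other members of its group. The witnesses and distances below are made up for illustration (a real analysis would use PAM, e.g. the R cluster package):

```python
def medoid(members, dist):
    """Return the group member with the smallest summed distance to the
    other members, i.e. the most central witness of the group."""
    return min(members,
               key=lambda w: sum(dist[w][v] for v in members if v != w))

# Illustrative (made-up) distances among three witnesses of one cluster:
dist = {
    "01": {"03": 0.20, "B": 0.10},
    "03": {"01": 0.20, "B": 0.25},
    "B":  {"01": 0.10, "03": 0.25},
}
print(medoid(["01", "03", "B"], dist))  # "01" has the smallest summed distance
```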
Streeter's theory is unpopular these days for a number of reasons:
1. More than one flavour of the text is found in the papyri, showing that there were multiple varieties circulating in Egypt in the second and third centuries. (Or else that there were no distinct varieties early on.)
2. Streeter's "Caesarean" text, as represented by Theta and 565, is now regarded with suspicion.
3. Other arguments that others can supply.
However, despite these things, I think that a theory of local texts should be reconsidered. Here is why:
1. The principle of least effort. Why send for an exotic exemplar (what you make a copy from) when you can get one next door? If the average copyist acted according to the principle of least effort then local texts would tend to arise.
2. There is a distinct cluster that is a good match to what Streeter called "an Eastern type." (E.g., in Mark, the Sinaitic Syriac, Armenian, Georgian, P45, W in chs. 5-16, Family 1, Origen). Other witnesses, such as 28, 700, Family 13, the Palestinian Syriac, have points of contact with this variety. Theta and 565 do too, but have a "Western" component as well.
3. My PhD research on early copies of the Book of Hebrews. That research shows that separate analyses of textual and spelling variation produce similar results -- the same MSS tend to collocate for both types of data. I put that down to scribes typically using local spelling practice and local exemplars.
4. The collocation of early versions (Cop, Syr, Lat) and early varieties ("Alex", "East", "West") in CMDS maps. There is one absentee variety ("Byz") and one absentee early Christian population centre (Asia Minor) if Cop/Alex = Egypt; Syr/East = Eastern end of the Mediterranean; Lat/West = Rome, Gaul, North Africa. I leave it as an exercise for the reader to decide whether those two absences should be connected.
Could what stands behind the Byzantine text be the ancient text of Asia Minor? Conflations and smoothing could be a surface layer of "improvement" on top of an ancient variety. Harnack says that Asia Minor was the major Christian population centre in the second century. I would expect that region to have had many copies of its own textual flavour. If Asia Minor did have its own variety of the text in the second century, what happened to it? Why would Asia Minor's second century text be the only one not to be preserved in later textual streams? One thing to consider: The CMDS map for Mark (UBS4) places Jerome's Vulgate (vg) about midway between a group of Old Latin texts and the "Byzantine" cloud. Jerome says (Letter to Damasus, written about 380) that he used old Greek copies to revise the Latin.
There. I said it.
Hi David,
Some responses are interspersed below...
> I wonder if
> there's any way you could do the following:
> 1. Have an 'interactive' version of your rotating GIF, so that people could zoom in and/or slow it down, in order
> to look at the details;

To do this you need to install R on your computer. (Download from http://www.r-project.org/.) Then download my R scripts and data files from the Views site. (It would probably be a good idea to use the directory structure found at the Views site, especially data/, dist/, scripts/, cmds/, dc/.) Some instructions which may help are at section 1.2 of tfinney.net/ATV/index.html. Once everything is in place, you will be able to produce CMDS maps which can be manipulated (rotation, zooming) through mouse movements.

> 2. Create 'blocks' for the different 'sections' of the synoptics i.e. three sondergut blocks, three 'double
> tradition' blocks, and a 'triple tradition' block (7 in all) and see how the variants in each block group together.

The data sets can be sliced by, e.g., selecting particular variation units. The data sets based on the INTF's Parallel Pericopes volume (INTF-Parallel) might be amenable. It sounds like an interesting concept but one that someone besides me will have to pursue.

> Then, something else that may or may not show anything. I got to thinking about accidental (as opposed to deliberate)
> variants. My hypothesis is that accidental variants in documents in any one group are likely to obscure the grouping
> effect (assuming that accidental variants are essentially random), and that removing them from the analysis would
> therefore tend to sharpen the results.

The noise of random effects does obscure relationships. Being able to remove accidental agreements would make relationships clearer. The problem is knowing how to identify which agreements are accidental. One must beware not to introduce bias. I therefore prefer to do the minimum possible amount of vetting before analysis, apart from rejecting distances derived from too few variation sites. There is already a fair bit of selection built into the data sets, as many readings are dropped because they are nonsense, orthographical variations, or not thought worth including.

> Also, if an accidental change produces a nonsense variant, it is likely that the
> nonsense variant would (in later mss) be changed to one or more sensible variants at the same variation site, possibly
> all different to the original.

I agree. I think this mechanism is responsible for the genesis of many variations.

> So, if it was possible to include the mss date as a factor in the analysis we might be
> able to see that several different variants at the same site might actually be part of the same group. Am I making any
> sense?

Yes. However, there are some complicating factors. (1) Manuscript dating is very rubbery. Plus or minus 50 years is not an unreasonable rule of thumb for palaeographical dating. (I'd like to see some of the NT papyri carbon dated to see whether the palaeographical dates are any good. It only takes about one square centimetre of papyrus.) (2) The date of a manuscript is not the date of the readings it carries. Some, yes; many, no. (3) There is the survival problem. The further back one goes, the less one has as a proportion of what once existed. My guess is that we have between one hundredth and one thousandth of the NT manuscripts that existed in the second and third centuries. Trying to see patterns in the development of readings given such a sparse sample may be problematic. Nevertheless, readings are tenacious, so we are likely to have many of the most popular ones despite the gaps in the record. The INTF's CBGM (coherence-based genealogical method) works by looking at the readings at every variation site and choosing which readings gave rise to which.

The thought has occurred to me that if dates were included, one might be able to see some kind of general progression in, say, a time animation of a CMDS map. (Witnesses would fade in and out as the animation clock went through their dates. I can imagine such a thing but wouldn't like to try making it.) There would be general convergence towards the Byzantine cloud later on. The Egyptian cluster would dominate at the beginning, largely due to most early papyri being from Egypt.
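The fade-in part of such an animation could be prototyped simply by filtering the witness list by the animation clock before each frame's CMDS run. The dates below are hypothetical, and the sketch uses the plus-or-minus-50-year rule of thumb mentioned above: a witness becomes visible once the clock reaches the early end of its dating range.

```python
# Hypothetical palaeographical dates (year CE); not real assignments.
dates = {"P45": 250, "01": 350, "B": 325, "05": 400, "032": 450}
UNCERTAINTY = 50  # +/- 50 years, a rule of thumb for palaeographical dating

def visible(clock, dates, uncertainty=UNCERTAINTY):
    """Witnesses whose dating range has begun by the animation clock:
    a witness fades in once clock >= (assigned date - uncertainty)."""
    return sorted(w for w, d in dates.items() if clock >= d - uncertainty)

for clock in (200, 300, 400):
    print(clock, visible(clock, dates))
```

Each frame would then recompute (or interpolate) a CMDS map from the distances among the currently visible witnesses; the fade-out end of the range could be handled the same way.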