
Re: NA and UBS omission of majority consistent witnesses, including uncials

  • yennifmit
    Message 1 of 11, Apr 7, 2013
      Hi Everyone,

      Much of the reason for the economical presentation of apparatus data, which often makes it difficult to know which manuscript supports what, is a hangover from the print era. Now, with computers, much more comprehensive data can be presented. One useful way to present information on which witnesses support which readings is a data matrix. A number of data matrices are available at my Views site in Table 2, "Data sets and analysis results." E.g.

      http://www.tfinney.net/Views/data/Mark-UBS4.csv

      Each data matrix is extracted from a particular data source (e.g. the UBS4 apparatus). The readings are encoded. That is, a numeral or letter is used to represent each reading in a variation unit. One has to go back to the source of the data matrix to see what the encoded readings represent.

      The advantage of using a data matrix to represent textual information is that it makes subsequent analysis straightforward.
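To illustrate the idea, here is a toy data matrix and a query against it. This is a minimal sketch: the sigla, readings, and encodings are invented for illustration, not drawn from any actual apparatus or from the Mark-UBS4.csv file.

```python
# Toy data matrix: rows are witnesses, columns are variation units,
# and each cell is an encoded reading ('1', '2', ...) or '-' where
# the witness's text is not defined. All sigla and values invented.
matrix = {
    "P45":   ["1", "-", "2"],
    "Aleph": ["1", "2", "2"],
    "B":     ["1", "2", "1"],
    "Byz":   ["2", "1", "1"],
}

def supporters(unit_index):
    """Group witnesses by the reading they support at one variation unit."""
    by_reading = {}
    for witness, readings in matrix.items():
        r = readings[unit_index]
        if r != "-":
            by_reading.setdefault(r, []).append(witness)
    return by_reading

print(supporters(1))  # {'2': ['Aleph', 'B'], '1': ['Byz']}
```

Once the data is in this shape, questions like "which witnesses support which reading here?" become one-line lookups.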

      It is much easier to use the UBS4 apparatus than the NA27/28 print apparatus to construct a data matrix. The UBS4 apparatus has a set of witnesses whose readings are always listed if defined (i.e. able to be discerned). Therefore, I can assume that the reading of such a witness is not defined if it is not in any of the UBS4 attestation lists for a variation unit. It is more difficult to use the NA27 apparatus because one has to do extra work to discover whether a witness not listed at a variation unit is undefined there or has been omitted for some other reason.
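That inference rule can be sketched in a few lines. This is a minimal illustration; the sigla and attestation lists are invented, not taken from UBS4.

```python
# Build one data-matrix row for a variation unit from attestation
# lists, using the convention that an always-cited witness absent
# from every list is undefined there. Sigla and lists are invented.
always_cited = ["Aleph", "A", "B", "D"]
attestation = {
    "1": ["Aleph", "B"],   # witnesses supporting reading 1
    "2": ["A"],            # witnesses supporting reading 2
}

def encode(witness):
    for reading, witnesses in attestation.items():
        if witness in witnesses:
            return reading
    return "-"  # not listed anywhere: treat as undefined

print([encode(w) for w in always_cited])  # ['1', '2', '1', '-']
```

The point is that this inference is only safe when the apparatus guarantees that its always-cited witnesses are listed wherever their text is defined, which is what UBS4 provides and NA27 does not.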

      Hopefully, the field will begin to present apparatus data in a manner which makes subsequent analysis more straightforward. A very helpful step along this road would be to always indicate the reading of a witness if its text is defined at a variation site. It will continue to be desirable to select only a sample of representative witnesses and variation units to include. A comprehensive listing is difficult -- the ECM lists about seven variation units per verse, and there are tens of thousands of witnesses per variation unit. (How a variation unit is defined is a matter of editorial discretion.)

      Happily, a great deal of the inherent relational information can be represented by a small sample of the whole. I have written an article on how to define textual groups which includes sections on how to select representative witnesses and variation units:

      http://www.tfinney.net/Groups/index.xhtml

      One needs to analyse a fairly comprehensive data set to discover which witnesses and variation units give a good representation of the big picture. Once that is done, a relatively small number of variation units (I set a minimum of fifteen for any section that needs to be located in textual space) and witnesses (perhaps 50-100, including representatives of each major early version and Greek family) is enough to provide a fair understanding of the distribution of New Testament witnesses in their textual space.

      As an aside, I am struck by the possible influence of the early versions on the big picture. See, e.g., section 4.5 of the Groups article ("The Random Walk").

      Best,

      Tim Finney
    • Steven Avery
      Message 2 of 11, Apr 8, 2013
        Hi,

        Tim Finney
        Much of the reason for the economical presentation of apparatus data, which often makes it difficult to know which manuscript supports what, is a hangover from the print era.

        Steven
        As I pointed out, this would never have made sense when it comes to the limited data of the uncials.  It is understandable for abbreviating the cursive info.  The amount of space would not be much different from the it-d type listings of Old Latin manuscripts.

        Tim Finney
        > A number of data matrices are available at my Views site at Table 2, "Data sets and analysis results." E.g. http://www.tfinney.net/Views/data/Mark-UBS4.csv  Each data matrix is extracted from a particular data source (e.g. the UBS4 apparatus).... It is much easier to use the UBS4 apparatus rather than the NA27/28 print apparatus to construct a data matrix. The UBS4 apparatus has a set of witnesses whose readings are always listed if defined (i.e. able to be discerned).

        So are you saying that there is a computerized UBS4 listing that does in fact directly include all the uncials?
        If so, do you know what is required to have access to that listing?

        (Do you have it on your home computer?  University library access?)

        Thanks.

        Shalom,
        Steven Avery
        Bayside, NY
      • Jac Perrin
        Message 3 of 11, Apr 8, 2013
          Dear Tim,

          Now THAT is impressive!

          Amazing piece of work. Just imagine when all the minuscules are added, or at least the ones representing groups.

          JP

          Sent from my iPad

          On Apr 7, 2013, at 10:36 PM, "yennifmit" <tjf@...> wrote:

           


        • Steven Avery
          Message 4 of 11, Apr 8, 2013
            Hi,

            JP
            Now THAT is impressive! Amazing piece of work.
            Just imagine when all the minuscules are added, or at least the ones representing groups.

            Tim, I have an additional question for you, separate from my thread-focus question of how to get the full and proper uncial data (below). I am very interested in understanding that question: it seems ultra-foundational to know exactly where one goes to learn the readings of all the uncials, and whether that information is available short of major $ investment or clout.

            Currently my Excel-type spreadsheet software is not reloaded after an OS reimage, and I realize I should work with your .csv for a while to understand your data.  However, allow me to think out loud, since what I am asking could easily be a common question or request, or might spur some ideas.

            Does your current model allow for manuscript-driven analysis?

            Take Codex Alexandrinus, which is more Byzantine in the Gospels and more Alexandrian in Acts and the rest of the NT. (A fascinating question as to how that arose historically, if anyone knows of a good article on that, share away!)

            In your data modeling, can you see the location of Alexandrinus compared to a Critical Text, Byzantine, or Received Text data-point (I am taking three texts that can be treated as discrete, e.g. Stephanus 1550 for the TR, NA-27 for the CT, or Robinson-Pierpont for the Byz), say book by book?  Or the Gospels?  Or the "rest of the NT"?  Looking only at the places where there are significant variant units.

            And then see a number like this, or a data point positioning on a graph:

            Alexandrinus - Acts - 90% agreement CT
                                - 15% agreement TR
                                -  8% agreement Byz

            Those are made-up numbers, designed simply to remind us that occasionally there are unusual agreements like CT-TR vs. Byz, or CT-Byz vs. TR.
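An agreement profile of this kind is a simple computation once readings are encoded. A minimal Python sketch follows; all sigla, readings, and resulting figures are invented for illustration.

```python
# Hypothetical encoded readings at six variation units; 'X' marks
# units where a witness is not defined. All values are invented.
units = {
    "A":   ["1", "2", "1", "X", "2", "1"],
    "CT":  ["1", "2", "2", "1", "2", "1"],
    "TR":  ["2", "1", "1", "1", "1", "2"],
    "Byz": ["2", "1", "1", "1", "1", "2"],
}

def percent_agreement(witness, base):
    """Percentage agreement over units where both texts are defined."""
    pairs = [(a, b) for a, b in zip(units[witness], units[base])
             if a != "X" and b != "X"]
    return 100.0 * sum(a == b for a, b in pairs) / len(pairs)

for base in ("CT", "TR", "Byz"):
    print(f"A vs {base}: {percent_agreement('A', base):.0f}%")
```

Restricting the computation to a chosen book or section is just a matter of which variation units go into the lists.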

            (Theoretically the Clementine Vulgate would be extremely good to have in there too; even the Peshitta could be very helpful, especially if we ever knew when it was first translated; and of course possibly individual manuscripts.)

            Is this concept in your current data modeling?  If not, is it on your radar?

            Ultimately, it would also be interesting to be able to mold the search points.  E.g., in my studies, there are maybe 200-250 "highly" significant omissions in the Critical Text when compared to the Received Text or Byzantine Text (I started with a page called the Westcott-Hort Magic Marker Binge and then added more as I bumped into them.)

            So, taking a full NT, a search on say the 3000 most significant differences overall might, or might not, give comparable results to searches on the 250.  This relates to issues of scribal habits as well, à la the studies of James Royse and Peter Head, where you might be focusing on just inclusion/omission concepts.

            Thanks for any feedback.

            Shalom,
            Steven Avery
            Queens, NY

            ===================

            EARLIER

            Tim Finney
            Much of the reason for the economical presentation of apparatus data, which often makes it difficult to know which manuscript supports what, is a hangover from the print era.

            Steven
            As I pointed out, this would never have made sense when it comes to the limited data of the uncials.  It is understandable for abbreviating the cursive info.  The amount of space would not be much different from the it-d type listings of Old Latin manuscripts.

            Tim Finney
            > A number of data matrices are available at my Views site at Table 2, "Data sets and analysis results." E.g. http://www.tfinney.net/Views/data/Mark-UBS4.csv  Each data matrix is extracted from a particular data source (e.g. the UBS4 apparatus).... It is much easier to use the UBS4 apparatus rather than the NA27/28 print apparatus to construct a data matrix. The UBS4 apparatus has a set of witnesses whose readings are always listed if defined (i.e. able to be discerned).

            So are you saying that there is a computerized UBS4 listing that does in fact directly include all the uncials?
            If so, do you know what is required to have access to that listing?

            (Do you have it on your home computer?  University library access?)
             

          • yennifmit
            Message 5 of 11, Apr 8, 2013
              Hi Jac,

              I'm glad you think it's impressive. It has taken some doing. Others (e.g. the INTF, Maurice Robinson, Richard Mallett) deserve credit for compiling and encoding the data. They can't be blamed for what I do with it, though.

              INTF data sets do include a lot of minuscules, perhaps enough to say that everything interesting among the extant minuscules is represented. Analysis results for INTF data sets can be seen in "INTF" rows of table 2 ("Data sets and analysis results") of my Views article:

              http://www.tfinney.net/Views/index.xhtml

              If one does PAM analysis using a large number of groups (say 100) then the resulting medoids serve as group representatives. The medoid is the most central member of its group. (If only two members, the PAM algorithm chooses one.)
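The medoid idea can be shown in a few lines: given pairwise distances, the medoid of a group is the member with the smallest total distance to the others. A minimal Python sketch with invented distances (not PAM itself, just the medoid criterion it optimizes):

```python
# Medoid of a group: the member whose total distance to the other
# members is smallest. The distances here are invented values.
dist = {
    ("B", "Aleph"): 0.10,
    ("B", "P75"): 0.12,
    ("Aleph", "P75"): 0.18,
}

def d(x, y):
    """Symmetric lookup into the pairwise distance table."""
    if x == y:
        return 0.0
    return dist.get((x, y), dist.get((y, x)))

def medoid(group):
    return min(group, key=lambda m: sum(d(m, other) for other in group))

print(medoid(["B", "Aleph", "P75"]))  # B has the smallest total distance
```

PAM proper searches for the set of k medoids that minimizes the total distance of every member to its nearest medoid; the criterion above is what makes each chosen medoid a natural group representative.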

              I think that the results seen with existing data sets are giving a fairly good representation of the big picture of textual relationships within the New Testament textual tradition. However, there are deficits. For example, the UBS data does not cover as many witnesses as the INTF data. Also, the INTF data sets do not include versional and patristic evidence. This is not a criticism -- these deficits often reflect practical constraints. But I would like to be able to analyse something that includes numerous representatives of all major classes: Greek MSS, MSS of each major version, quotations of Church Fathers, ... If a class of evidence is missing then it affects analysis results. This accounts for why Bezae (05) joins the "Alexandrian" stream when no Old Latins are included in the mix. Wisse was criticised for this. My analysis results confirm his with respect to 05 when no Old Latin evidence is included.

              Best,

              Tim Finney

              --- In textualcriticism@yahoogroups.com, Jac Perrin <jperrin@...> wrote:
              >
              > Dear Tim,
              >
              > Now THAT it impressive!
              >
              > Amazing piece of work. Just imagine when all the minuscules are added, or at least the ones representing groups.
              >
              > JP
              >
              > Sent from my iPad
              >
              > On Apr 7, 2013, at 10:36 PM, "yennifmit" <tjf@...> wrote:
            • yennifmit
              Message 6 of 11, Apr 8, 2013
                Hi Steven,

                [Your comments are in quotes.]

                "As I pointed out, this would never have made sense when it comes to the limited data of the uncials. It is understandable for abbreviating the cursive info. The amount of space would not be much different from the it-d type listings of Old Latin manuscripts."

                Each apparatus has a policy behind it. There are practical limitations and each editor does what seems best to meet those. One consequence is that not all evidence on everything is presented in a typical print apparatus. Take scribal spelling, for instance. Concerning the uncials, they are affected by the constraints as well. One could present all of them, but, then, why not present all minuscules as well? One might end up with many pages of apparatus per line of biblical text.

                "So are you saying that there is a computerized UBS4 listing that does in fact directly include all the uncials?
                If so, do you know what is required to have access to that listing?"

                The UBS data and distance matrices at the Views site are (1) encoded versions of the UBS4 apparatus (done by Richard Mallett), or (2) distance matrices constructed from tables of percentage agreement made by Maurice Robinson using the UBS2 apparatus. All of the UBS data I have is made available as data and distance matrices at the Views site.
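For the second kind of source, a table of percentage agreements has to be turned into distances. One natural conversion (an assumption on my part, not necessarily the one actually used) is distance = 1 - agreement/100:

```python
# Converting a percentage-agreement table into a distance matrix,
# assuming distance = 1 - agreement/100. The figures are invented.
agreement = {
    ("A", "B"): 72.0,
    ("A", "Byz"): 55.0,
    ("B", "Byz"): 48.0,
}

distance = {pair: round(1.0 - pct / 100.0, 3)
            for pair, pct in agreement.items()}

print(distance[("A", "B")])  # 0.28
```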

                Best,

                Tim Finney
              • Steven Avery
                Message 7 of 11, Apr 9, 2013
                  Hi,

                  Subject was: [textualcriticism] Re: NA and UBS omission of majority consistent witnesses, including uncials

                  Friends, please, I'm still trying to have a very basic question answered.

                  What tools can we use, if any, to actually see the position of all the uncials on a variant?

                  Without guessing and extrapolating using the missing entries in UBS and NA.
                  Like this ---> "well these 10 are not listed when they are Byzantine, and these 7 of the 10 have Mark 7:19, and these 2 show up in other alternatives for the variant, so by subtraction that means there are 5 hidden Byz uncials".

                  Surely that is not very satisfactory.
                  Is there a book to buy, a web site to try, fish to fry, dues very high?

                  And if there is not any such tool today, or if it is only available in a very specialized (e.g. $$) manner, let's acknowledge that lack straight-out and, if specialized, please explain how to access it.

                  This is the textual forum with many labourers in the field; can anybody help with this question?

                  ==========================================

                  Related, but distinct:

                  Tim, what do you use to be sure you have all the uncials placed right?
                  Or do you have to calculate and extrapolate as above?  (It sounds like you might use a computer collation, but it is not totally clear whether it is any different from the printed one, which lacks many of the Byzantine majority uncial listings and forces you to do some special checking and guessing.)

                  "... Views site are (1) encoded versions of the UBS4 apparatus (done by Richard Mallett)"
                  http://www.tfinney.net/Views/index.xhtml
                   
                  Now I look at that and I see a matrix of encoded readings ...

                  Is there a nice explanation? Are 1, 2, 3, 4 variant units or the major controls?
                  Can I know whether Alexandrinus at Mark 1:1 has a specific variant?  Did I miss an Intro?

                  ==========================================

                  Steven
                  "As I pointed out, this would never have made sense when it comes to the limited data of the uncials. It is understandable for abbreviating the cursive info. The amount of space would not be much different than the it-d type listings of Old Latin manuscripts."

                  Tim Finney
                  Each apparatus has a policy behind it. There are practical limitations and each editor does what seems best to meet those. One consequence is that not all evidence on everything is presented in a typical print apparatus. Take scribal spelling, for instance. Concerning the uncials, they are affected by the constraints as well. One could present all of them, but, then, why not present all minuscules as well? One might end up with many pages of apparatus per line of biblical text.

                  Steven
                  The answer to that is simple.  Only about 10 uncials are omitted, even in the Gospels, and those only when they support Byzantine readings.  For a full listing, the average number of uncials to add might be about 7, even in the Gospels.  There is barely a space consideration; rarely will it cause a line spillover.

                  With 500 or 1000 cursives, obviously that is a major change, both to space and to appearance.
                  Apples and kumquats.

                  Perhaps I touched a nerve with "none dare call it rigging". :-)

                  Steven
                  "So are you are saying that there is a computerized UBS4 listing that does in fact include directly all the uncials?
                  If so, do you know what is required to have access to that listing."

                  Tim Finney
                  The UBS data and distance matrices at the Views site are (1) encoded versions of the UBS4 apparatus (done by Richard Mallett), or (2) distance matrices constructed from tables of percentage agreement made by Maurice Robinson using the UBS2 apparatus. All of the UBS data I have is made available as data and distance matrices at the Views site.

                  Steven
                  Thanks for your help on this; it looks like you have some of the cutting-edge analysis matrix ideas.
                  However, I am largely still back on wanting to know how we know what we know (on the uncials).

                  Shalom,
                  Steven Avery
                  Queens, NY
                • rslocc
                  Message 8 of 11, Apr 9, 2013
                    Mr. Avery wrote;

                    What tools can we use, if any, to actually see the position of all the uncials on a variant?

                    Without guessing and extrapolating using the missing entries in UBS and NA.
                    Like this ---> "well these 10 are not listed when they are Byzantine, and these 7 of the 10 have Mark 7:19, and these 2 show up in other alternatives for the variant, so by subtraction that means there are 5 hidden Byz uncials".

                    Surely that is not very satisfactory.
                    Is there a book to buy, a web site to try, fish to fry, dues very high ?

                    And if there is not any such tool today, or if it is only available in a very specialized (e.g $$) manner, let's acknowledge that lack straight-out and, if specialized, please explain the access to the specialized manner.

                    This is the textual forum with many labourers in the field, can anybody help on this question?


                    Hi Steven,

                    I agree that the UBS Critical Apparatus can lead one to false impressions and implications on many variants (whether intentionally or not) by not listing the witnesses (especially the Byz uncials) more fully. The UBS 4th edition does (thankfully) list more of the major uncials which back the Byz Text in [brackets]. Before I acquired one (UBS 4) a few years back I would always wonder in awe how easily I could more thoroughly furnish the Critical Apparatus by making simple reference to Scrivener, Tischendorf, Burgon, Legg, Meyer, Olshausen, Godet or Lange's works and/or commentaries on such and such passages. If one adds Text und Textwert, Editio Critica Maior and Swanson to this list they will have nearly all the pertinent apparatus material available. Unfortunately, some of these sources are extremely expensive and rare. Union Theo. Seminary Library is probably the place I would start if I were you.

                    You asked;
                    "What tools can we use, if any, to actually see the position of all the uncials on a variant?"

                    First, UBS-4 more fully divulges the Byz. uncials via brackets; then Swanson chooses a solid group of uncials to collate. His "New Testament Greek Manuscripts" volumes are very inexpensive when purchased used. (See Ebay, abebooks, cheepestbookprice.com, etc.) Then one could reference Tischendorf 8th ed., Scrivener Intro., Burgon, etc. to fill in any blanks. These will be the backbone of the apparatus because "Text und Textwert" does not cover every variant; but when it does, you can basically leave all other sources by the wayside (as far as manuscripts go, because the ECW & Versions are not covered by T&T). The downside is that T&T is extremely hard to find and very expensive when you do light upon a volume for sale. "Editio Critica Maior" is another great apparatus resource which can be purchased at somewhat reasonable prices (somewhat). I apologize for preaching to the choir to some extent, but I truly hope this helps.

                    Peace,

                    M.M.R.
                  • yennifmit
                    Message 9 of 11, Apr 9, 2013
                      Hi Steven,

                      [Your comments in quotes because all I get when I hit "reply" to your email in the Yahoo groups interface is an empty box!]

                      "Does your current model allow for manuscript-driven analysis ?"

                      Yes, if by that you mean "Can I focus analysis on a particular witness?" There are various ways to do that:

                      1. look for that witness in analysis results for a data set that includes the witness
                      2. use the witness as a reference when calculating a distance matrix. (The R scripts I've written to construct distance matrices can be asked to keep a particular witness.)
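As a sketch of what such ranking amounts to, here is a minimal Python version (not the actual rank.r implementation; the figures are a few of the Acts-UBS2 distances from the rank.r listing later in this message):

```python
# Rank witnesses by distance from a reference witness, smallest first.
distances_from_A = {"P74": 0.211, "81": 0.216, "C": 0.283, "D": 0.719}

ranking = sorted(distances_from_A.items(), key=lambda kv: kv[1])
for witness, dist in ranking:
    print(f"{witness} ({dist:.3f})")
```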

                      "In your data modeling, can you see the location of Alexandrinus, compared to either a Critical Text, or Byzantine, or Received Text data-point (I am taking 3 elements that can be taken as discrete, eg. using Stephanus 1550 for the TR or NA-27 for the CT or Robinson-Pierpont for the Byz) say book by book ? Or .. Gospels. Or "rest of NT". Looking only at the places where there are significant variant units."

                      Yes. E.g., look for A in UBS-based data sets or 02 in INTF-based ones. (Beware: A = Ausgangstext (the ECM text) for INTF-Parallel data sets.)

                      http://www.tfinney.net/Views/dc/Mark-UBS4.15.SMD.png
                      http://www.tfinney.net/Views/dc/Mark-INTF-Parallel.15.SMD.png

                      http://www.tfinney.net/Views/cmds/Peter-A-UBS4.15.SMD.gif
                      http://www.tfinney.net/Views/cmds/Peter-A-INTF-General.15.SMD.gif

                      (I used dendrograms for the first two because Alex. is inside the Byzantine cloud -- with Family Pi -- and therefore hard to see in a CMDS map [= whirling cube].)

                      Or, if you want to rank witnesses by distance from Alex., my "rank.r" script does that:

                      > source("rank.r")
                      Rank witnesses by distance from a reference.
                      Asterisked distances are not statistically significant (alpha = 0.05).
                      Distance matrix: ../dist/Acts-UBS2.P45.15.SMD.csv
                      Counts list: ../dist/Acts-UBS2.P45.15.counts.csv
                      Reference witness: A
                      P74 (0.211); 81 (0.216); C (0.283); 33 (0.318); B (0.322); cop-bo (0.349); P45 (0.357*); Aleph-c (0.367*); 181 (0.386); vg (0.404); 1739 (0.405); 945 (0.421); Origen (0.429*); Lucifer (0.437*); cop-sa (0.447*); it-r (0.455*); arm (0.461*); it-ar (0.473*); geo (0.473*); 629 (0.476*); E (0.482*); 630 (0.489*); Psi (0.500*); syr-p (0.511*); it-e (0.531*); 326 (0.533*); 436 (0.548*); eth (0.549*); syr-h (0.559*); 88 (0.564*); Lect (0.570*); 614 (0.580); 1505 (0.580); it-l (0.583*); it-gig (0.584); 104 (0.587); 2412 (0.588); 1241 (0.590); 2492 (0.592); 1877 (0.598); 0142 (0.600); Byz (0.602); 2127 (0.603); 056 (0.606); 2495 (0.608); P (0.613); 451 (0.617); 049 (0.622); 330 (0.622); it-h (0.667*); it-d (0.680); Chrysostom (0.688); D (0.719); it-p (0.825)

                      (The figures are simple matching distances. The smaller the distance, the closer the witness.)
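                      The idea behind "rank.r" can be sketched in plain Python (my scripts are in R; the witness names below are real sigla but the readings matrix is invented for illustration):

```python
# Simple matching distance (SMD): the proportion of disagreements over
# the variation units where both witnesses have a defined reading.
def smd(a, b):
    pairs = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
    if not pairs:
        return None  # no overlap: distance undefined
    return sum(x != y for x, y in pairs) / len(pairs)

# Encoded readings, one per variation unit; None = not defined.
# These values are made up for the sketch.
readings = {
    "A":   ["1", "2", "1", "1", None, "2"],
    "B":   ["1", "1", "1", "2", "1",  "2"],
    "P74": ["1", "2", "1", "1", "1",  None],
    "Byz": ["2", "1", "2", "2", "1",  "1"],
}

# Rank every other witness by distance from the reference witness.
ref = "A"
ranking = sorted(
    ((w, smd(readings[ref], r)) for w, r in readings.items() if w != ref),
    key=lambda t: t[1],
)
print("; ".join(f"{w} ({d:.3f})" for w, d in ranking))
```

                      With these made-up readings the output ranks P74 closest to A and Byz furthest, in the same "witness (distance)" format as the rank.r output above.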

                      One can only get a distance if the witness you want to compare is in the data set. Many of the data sets include items corresponding to the ECM/NA/UBS text (e.g. ECM, A, UBS) or the Majority Text (e.g. Byz, Maj). Some have an item corresponding to the TR.

                      "And then see a number like this, or a data point positioning on a graph:

                      Alexandrinus - Acts - 90% agreement CT
                      - 15% agreement TR
                      - 8% agreement Byz

                      Those are made up numbers, and are designed to simply remind us that occasionally there are unusual agreements like CT-TR vs. Byz, or CT-Byz vs.TR.

                      (Theoretically the Clementine Vulgate would be extremely good to be in there too, even the Peshitta could be very helpful especially if we ever knew when it was first translated, and of course possibly individual manuscripts.)

                      Is this concept in your current data modeling ? If not, is it on your radar ?"

                      A distance matrix contains distances between all pairs of witnesses in the source data (unless a witness is dropped through lack of defined readings). Each distance can be transformed to a percentage agreement using this:

                      agreement (%) = 100 * (1 - distance)

                      The witnesses you are interested in need to be in the source data to end up in the corresponding distance matrix. Some data sets (e.g. Mark, UBS4) even include a row for the Clementine Vulgate (vg-cl)!
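                      For instance, the A vs. P74 distance of 0.211 in the Acts ranking above converts like this (a trivial Python rendering of the formula):

```python
def agreement_pct(distance):
    """Convert a simple matching distance (0..1) to percentage agreement."""
    return 100 * (1 - distance)

print(f"{agreement_pct(0.211):.1f}")  # prints 78.9
```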

                      By the way, there is structure within the mass of Byzantine texts.

                      "Ultimately, it would also be interesting to be able to mold the search points. e.g. In my studies, there are maybe 200-250 "highly" significant omissions in the Critical Text when compared to Received Text or Byzantine Text "

                      A data matrix can be sliced to choose a selection of variation units or of witnesses or of both. See the section titled "Slices of a Data Set" in my Groups article:

                      http://www.tfinney.net/Groups/index.xhtml

                      A bias can be introduced by choosing variation units. Happily, the broad outline of the results seems persistent: it tends to survive no matter how you chop up the data.
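                      A slice is just a row and/or column subset of the data matrix. A minimal Python sketch (the witness and variation-unit names are invented for illustration; a real matrix such as Mark-UBS4.csv has one row per witness and one column per variation unit):

```python
# A data matrix: one row per witness, one encoded reading per variation unit.
matrix = {
    "A":   {"Mark.1.1": "1", "Mark.1.2": "2", "Mark.6.3": "1"},
    "B":   {"Mark.1.1": "1", "Mark.1.2": "1", "Mark.6.3": "1"},
    "Byz": {"Mark.1.1": "2", "Mark.1.2": "1", "Mark.6.3": "2"},
}

keep_witnesses = {"A", "B"}              # row slice
keep_units = {"Mark.1.1", "Mark.6.3"}    # column slice

sliced = {
    w: {u: r for u, r in row.items() if u in keep_units}
    for w, row in matrix.items()
    if w in keep_witnesses
}
print(sliced)
```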

                      "So, taking a full NT, a search on say the 3000 most significant differences overall might, or might not, give comparable results to searches on the 250."

                      One would only know by analysing both data sets.

                      Best,

                      Tim Finney
                    • yennifmit
                      Message 10 of 11, Apr 9, 2013
                        Hi Steven,

                        [Your comments in quotes.]

                        "What tools can we use, if any, to actually see the position of all the uncials on a variant?"

                        One usually has to do a fair bit of digging. The INTF's New Testament Transcripts is a good place to start:

                        http://nttranscripts.uni-muenster.de/AnaServer?NTtranscripts+0+start.anv

                        There is also ITSEE for texts of John's Gospel:

                        http://vmr.bham.ac.uk/about/

                        The data I have comes from a variety of sources, as outlined in the Sources section of my Views site:

                        http://www.tfinney.net/Views/index.xhtml

                        "Tim, what do you use to be sure you have all the uncials placed right?
                        Or do you have to calculate and extrapolate as above? (It sounds like you might use a computer collation, but it is not totally clear whether it differs from the printed one, which lacks many of the Byzantine majority uncial listings and forces you to do some special checking and guessing)"

                        I use a number of modes of multivariate analysis (PAM, CMDS, DC). Each mode operates on a distance matrix. Each mode places the witnesses in its own way. CMDS gives you the best possible representation (according to its stress function) of distances between witnesses with the specified number of dimensions. (I specify three.) All I do is apply established multivariate analysis techniques to distance matrices derived from New Testament textual data. The data sets are always samples of one kind or another. (Even if I had a complete set of all readings of all extant NT witnesses, it would still be a mere sample of what once existed.)
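                        Classical MDS itself is standard linear algebra: double-centre the squared distance matrix and take the eigenvectors for the largest eigenvalues. A Python/NumPy sketch on an invented 4x4 distance matrix of two tight pairs of witnesses (R users would typically reach for cmdscale instead):

```python
import numpy as np

# An invented, symmetric 4x4 distance matrix: two tight pairs of witnesses.
D = np.array([
    [0.0, 0.2, 0.6, 0.6],
    [0.2, 0.0, 0.6, 0.6],
    [0.6, 0.6, 0.0, 0.2],
    [0.6, 0.6, 0.2, 0.0],
])

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
B = -0.5 * J @ (D ** 2) @ J           # double-centred squared distances
vals, vecs = np.linalg.eigh(B)        # eigenvalues in ascending order
top = np.argsort(vals)[::-1][:3]      # indices of the three largest
coords = vecs[:, top] * np.sqrt(np.clip(vals[top], 0, None))
print(coords.shape)  # prints (4, 3)
```

                        Each row of coords is a witness's position in the three-dimensional map; for a matrix like this one that embeds exactly in three dimensions, the pairwise distances between rows reproduce D.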

                        "Is there a nice explanation, are 1, 2, 3, 4 variant units or the major controls?
                        Can I know if Alexandrinus on Mark 1:1 is a specific variant ? Did I miss an Intro?"

                        The introduction to the (draft) Views site is probably the best place to start:

                        http://www.tfinney.net/Views/index.xhtml

                        For more detail, see chapter two of my "Analysis of Textual Variation":

                        http://www.tfinney.net/ATV/

                        But the thing I would really like people to read is my Groups article:

                        http://www.tfinney.net/Groups/index.xhtml

                        Best,

                        Tim Finney