
## Re: [Synoptic-L] Testing the 3ST

Message 1 of 24, Dec 15, 2007
Dave Gentile wrote:

> I was thinking of blocks that would need to be defined by being
> contiguous in both Matthew and Luke. These blocks could be a
> pericope, or a single saying found in both Matthew and Luke, but in
> a different context.

Dave,

Down to pericope or saying level there are I think 73 such blocks.

> Those blocks are then assigned to sQ or xQ in whole or in part. The
> resulting number of blocks in each of sQ and xQ is what we would wish
> to count, I believe (as well as determine their length).

I count 18 sub-blocks in xQ and 57 in sQ (thus indicating that only 2 blocks
were split between xQ and sQ). As for counting the length of each block in
both Matthew and Luke, I could do the counts if and when you actually want
to make use of the information.

Ron Price

Derbyshire, UK

Web site: http://homepage.virgin.net/ron.price/index.htm
Message 2 of 24, Dec 15, 2007
>
> Thanks for the inventory. That is helpful. If we had total word
> length for each of the As, A1-A7, and for each of the Bs, B1-B13,
> then we'd be ready to crunch a few numbers.

Replying to my own post -

We would also need the length of the blocks in sQ and xQ that are not
in A and B, i.e. the blocks which do not contain identical strings of
at least 10 words.
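For what it's worth, a mechanical search for such shared strings can be sketched in a few lines of Python. This is only an illustrative reconstruction, not the procedure actually used in the thread; the function name and the two sample texts are invented for the example, and difflib's matcher is a stand-in for whatever part-computerized method was really employed.

```python
# Illustrative sketch: find non-overlapping runs of identical contiguous
# words shared by two texts.
from difflib import SequenceMatcher

def long_common_word_runs(text_a, text_b, min_words=11):
    """Return non-overlapping word runs of at least min_words words
    that appear verbatim in both texts."""
    a, b = text_a.split(), text_b.split()
    matcher = SequenceMatcher(None, a, b, autojunk=False)
    return [
        " ".join(a[m.a:m.a + m.size])
        for m in matcher.get_matching_blocks()
        if m.size >= min_words
    ]

# Placeholder texts, not Gospel data: they share one 12-word run.
text1 = "one two three four five six seven eight nine ten eleven twelve x y z"
text2 = "start one two three four five six seven eight nine ten eleven twelve end"
print(long_common_word_runs(text1, text2))
```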

Dave Gentile
Riverside IL
Message 3 of 24, Dec 15, 2007
No need to do all the word counting yet. I think we have enough
information for a hand-waving approximate calculation.

I have to take the cat to the vet, but I'll come back to this soon.

Dave Gentile
Riverside, IL

--- In Synoptic@yahoogroups.com, Ron Price <ron.price@...> wrote:
>
> Dave Gentile wrote:
>
> > I was thinking of blocks that would need to be defined by being
> > contiguous in both Matthew and Luke. These blocks could be a
> > pericope, or a single saying found in both Matthew and Luke, but
> > in a different context.
>
> Dave,
>
> Down to pericope or saying level there are I think 73 such blocks.
>
> > Those blocks are then assigned to sQ or xQ in whole or in part.
> > The resulting number of blocks in each of sQ and xQ is what we
> > would wish to count, I believe (as well as determine their length).
>
> I count 18 sub-blocks in xQ and 57 in sQ (thus indicating that only
> 2 blocks were split between xQ and sQ). As for counting the length
> of each block in both Matthew and Luke, I could do the counts if and
> when you actually want to make use of the information.
>
> Ron Price
>
> Derbyshire, UK
>
> Web site: http://homepage.virgin.net/ron.price/index.htm
>
Message 4 of 24, Dec 15, 2007
> BRUCE: Still not clear, and to me, still enigmatic terminologically. If xQ
> means "out of Q" (rather than out of the "logia") then how exactly can it
> "originate with Matthew?" Do we have an equation xQ = xM?

RON: O.K. I see why you're confused. The hypothetical document Q never
existed. *I* took the xQ material out of Q, and assigned it where it really
belonged, i.e. to Matthew.

> BRUCE: This most recent comment might be construed as meaning that there is a
> Q somewhere in the 3ST. But that is evidently not the case;

RON: Indeed. Q is a figment of the imagination resulting from a simplistic
analysis of the Double Tradition.

> BRUCE: ..... the
> conventional Q is being divided into Matthean original material and stuff
> that really IS in an outside written source. We might then gloss
>
> sQ = "still in Q"
> xQ = "taken out of Q; not in an outside source used by aMt"

RON: Phew. I think we may be nearly there.

> BRUCE: Why not pick another [label for the sayings source]?

RON: I have already back-tracked on my use of the label "sQ", which I now
retain only for a certain subset of the Double Tradition. However I can see
the advantage of not using the letter "Q" at all in labels relating to a
theory which dispenses with the document widely known as "Q". The difficulty
is that most folk know about Q. It seemed easier to start by relating what
is new in my proposal to what is known and what it replaces.

Ron Price

Derbyshire, UK

Web site: http://homepage.virgin.net/ron.price/index.htm
Message 5 of 24, Dec 15, 2007
O.K. - a back-of-the-envelope calculation (or really some quick
cutting and pasting with a spreadsheet) -

xQ:

18 blocks
1770 words
average length 98 words
1602 possible 10 word agreements
23 actual agreements
1.5% point estimate of frequency
Low end of 95th percentile credibility range = 1.03%
High end = 2.03%

sQ:
57 blocks
2381 words
average length 42 words
1881 possible 10 word agreements
12 actual agreements
0.69% point estimate of frequency
Low end of 95th percentile credibility range = 0.41%
High end = 1.03%

The edges of the credibility ranges just touch but do not overlap.
So there is something like a 2.5% chance that this finding is due
to random chance.
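As a cross-check, the point estimates above can be reproduced from the quoted figures. The sketch below uses a simple normal-approximation (Wald) 95% interval rather than the Bayesian credibility interval used in the post, so its endpoints differ slightly from those quoted.

```python
# Reproduce the two rates from the figures above (1602 and 1881
# possible agreement positions; 23 and 12 actual agreements), with a
# normal-approximation interval as a rough stand-in for the Bayesian
# credibility range used in the post.
import math

def rate_with_interval(hits, trials, z=1.96):
    p = hits / trials
    half = z * math.sqrt(p * (1 - p) / trials)
    return p, p - half, p + half

for label, hits, trials in [("xQ", 23, 1602), ("sQ", 12, 1881)]:
    p, lo, hi = rate_with_interval(hits, trials)
    print(f"{label}: {p:.2%} (approx. 95% interval {lo:.2%} to {hi:.2%})")
```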

Doing the actual word counts would add very little information to
this picture, since the average block length seems to be quite
adequate for these purposes.

Thus - we seem to have a significant result. And so far, two
suggested explanations for it.

Dave Gentile
Riverside, IL
Message 6 of 24, Dec 15, 2007
--- In Synoptic@yahoogroups.com, "Dave Gentile" <gentile_dave@...>
wrote:
>
> O.K. - a back of the envelop calculation (or really some quick
> cutting and pasting with a spreadsheet) -
>

A correction to the quick calculation - I had the spreadsheet set for
a 90th percentile confidence range, not 95th. I also needed to double
the number I gave, for another reason. As a result, there is more like
a 10% chance these numbers are just random chance (not 2.5% as
previously stated). Apologies for the error.

So the result seems significant at the 90th percentile, but just
barely. However, this (combined with Ron's other observations) still
suggests to me that sQ and xQ, by and large, are the result of two
different processes.

Dave Gentile
Riverside, IL
Message 7 of 24, Dec 16, 2007
Dave Gentile wrote:

> O.K. - a back-of-the-envelope calculation

Dave,

Thanks for your efforts, but you may need to find another envelope - should
be plenty around at this time of year :-)

> (or really some quick cutting and pasting with a spreadsheet) -

> xQ:
>
> 18 blocks
> 1770 words
> average length 98 words
> 1602 possible 10 word agreements
> .......
> sQ:
> 57 blocks
> 2381 words
> average length 42 words
> 1881 possible 10 word agreements
> 12 actual agreements

Firstly, what I found was the set of strings common to Matthew and Luke
having *more than* ten contiguous words, i.e. 11+.
Thus 1602 should be replaced by 1584 and 1881 by 1824.

Secondly you appear to be comparing apples and pears in the agreements. The
numbers 1584 and 1824 represent counts of the number of possible 11-word
strings (some of which will be overlapping). What I had counted were the
numbers and lengths of all the strings having more than ten words (none of
which overlap with each other by definition). The total number of words in
the xQ and sQ strings were 364 and 205 respectively. Therefore my actual
numbers of 11-word strings (some of which will overlap) are 364 - 10*23 =
134 and 205 - 10*12 = 85 respectively. So in xQ there are 134 contiguous
11-word strings out of a possible 1584, and in sQ there are 85 contiguous
11-word strings out of a possible 1824. (All this neglects the fact that the
blocks have different lengths, but I agree that the approximation that they
have equal lengths is unlikely to make much difference to the results.)
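The arithmetic in this correction is easy to check mechanically. A small sketch (the function names are mine, not Ron's):

```python
# Check the window arithmetic above: a block of L words has L - 10
# starting positions for an 11-word string, and S maximal matched
# strings totalling W words contain W - 10*S distinct 11-word substrings.
def possible_starts(num_blocks, avg_block_len, window=11):
    return num_blocks * (avg_block_len - (window - 1))

def contained_windows(total_matched_words, num_strings, window=11):
    return total_matched_words - (window - 1) * num_strings

print(possible_starts(18, 98))     # xQ: 18 * 88 = 1584
print(possible_starts(57, 42))     # sQ: 57 * 32 = 1824
print(contained_windows(364, 23))  # xQ: 364 - 230 = 134
print(contained_windows(205, 12))  # sQ: 205 - 120 = 85
```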

Ron Price

Derbyshire, UK

Web site: http://homepage.virgin.net/ron.price/index.htm
Message 8 of 24, Dec 17, 2007
>
> > xQ:
> >
> > 18 blocks
> > 1770 words
> > average length 98 words
> > 1602 possible 10 word agreements
> > 23 actual agreements
> > .......
> > sQ:
> > 57 blocks
> > 2381 words
> > average length 42 words
> > 1881 possible 10 word agreements
> > 12 actual agreements

Ron:
>
> Firstly, what I found was the set of strings common to Matthew and
> Luke having *more than* ten contiguous words, i.e. 11+
> Thus 1602 should be replaced by 1584 and 1881 by 1824.
>

Dave:
O.K. I'll change the calculation from 10+ to 11+. I'd expect this is
a small effect.

Ron:
> Secondly you appear to be comparing apples and pears in the
> agreements. The numbers 1584 and 1824 represent counts of the number
> of possible 11-word strings (some of which will be overlapping). What
> I had counted were the numbers and lengths of all the strings having
> more than ten words (none of which overlap with each other by
> definition).

Dave:
I had given that some thought. Counting that way seems to greatly
inflate the significance, and I don't think it is correct, although
granted I have not formulated a precise argument as to why. Done the
way you suggest, you get something like 99.999th percentile
significance, which does not seem to be the right order of magnitude
for the numbers we're dealing with. Plus, considering a few extreme
cases leads to absurd-looking conclusions. So, without a precise
argument, I conclude we should not count that way.

Rather, I would put it this way - there are 1824 places a string
could start, and 12 places one actually does start.

Then using the revised numbers, the finding is significant at the
89th percentile, just short of one typical arbitrary cut-off.
Regardless, it still adds something when combined with your other
arguments.

Here I should also note that I used a Bayesian credibility interval,
rather than a traditional confidence interval. They give nearly the
same result, although they say something subtly different. But in
this case if we are looking for that last 1%, the other method might
give results more to our liking, or it might be slightly worse.
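The two kinds of interval can also be compared numerically. A sketch assuming a uniform Beta(1,1) prior (the prior actually used in the calculation is not stated in the thread), with the posterior quantiles found by direct numerical integration of the Beta density:

```python
# With a uniform prior, k agreements in n possible positions give a
# Beta(k+1, n-k+1) posterior for the agreement rate.  Quantiles are
# found here by integrating the posterior density on a fine grid.
import math

def beta_credible_interval(k, n, mass=0.95, steps=200_000):
    a, b = k + 1, n - k + 1
    log_norm = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    dx = 1.0 / steps
    cum, lo, hi = 0.0, None, None
    for i in range(1, steps):
        x = i * dx
        cum += math.exp((a - 1) * math.log(x)
                        + (b - 1) * math.log(1.0 - x) - log_norm) * dx
        if lo is None and cum >= (1.0 - mass) / 2.0:
            lo = x
        if hi is None and cum >= 1.0 - (1.0 - mass) / 2.0:
            hi = x
            break
    return lo, hi

lo, hi = beta_credible_interval(12, 1881)   # sQ figures from the thread
print(f"sQ 95% credible interval: {lo:.2%} to {hi:.2%}")
```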

Finally, one other potential problem - How was the "11+" criterion
selected? Was that the first number you tried, or did you try other
string length cutoffs first?

Dave Gentile
Riverside, IL
Message 9 of 24, Dec 18, 2007
Dave Gentile wrote:

> Then using the revised numbers, the finding is significant at the
> 89th percentile, just short of one typical arbitrary cut-off.
> Regardless, it still adds something when combined with your other
> arguments.

Dave,

Thanks for carrying out this investigation.

> Finally, one other potential problem - How was the "11+" criterion
> selected? Was that the first number you tried, or did you try other
> string length cutoffs first?

Good question. I first tried 18+ and realized there were so few strings that
the result was going to be too sensitive to the choice of cut-off. I wanted
to choose a cut-off which was significantly lower than 18+, yet not so low
as to necessitate too much effort (my procedure being part computerized and
part manual). It also had to be not too near 14, as I had already
observed an apparently more-than-average number of strings of this
length with known assignment and didn't want the result to be biased.
I had also by this
stage determined to use a single computer run, for which (as it happens) an
odd number cut-off was more 'efficient'. Hence the 11+.

Ron Price

Derbyshire, UK

Web site: http://homepage.virgin.net/ron.price/index.htm