## [Synoptic-L] Problems, and new results

Message 1 of 17 , Feb 15, 2002
I've done some more work on the statistics I previously reported, and
happily, besides getting what I believe are much more reliable results, I
now have enough material to use this as a class project.

Let me apologize in advance to the non-mathematically inclined for the
mathematical complications.

First, the problem with the current set of results is this: The tests
involved, particularly the t-test to determine the p-value, assume that the
distributions being correlated are normally distributed. To the extent that
this is not true, the results are unreliable.

Examining the distribution of the frequencies of the words involved reveals
that they are not normally distributed.
There is a large concentration near zero and some extreme outliers in the
tails; the distributions are highly leptokurtic.

The effect is that the outliers dominate the results, we do not fully
utilize all available information, and we overstate the confidence level.

My first attempt at a solution to this problem was to use a non-parametric
method. Rather than using the actual data values, this method ranks the
data, then tries to correlate the ranks. The advantage here is that we can
do a significance test that does not involve any assumptions about the
distribution of the data. The drawback is that we lose even more
information.
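The rank-based approach can be sketched as follows. This is a generic Spearman rank correlation in Python, not the author's actual worksheet, and the word-frequency data shown are made up for illustration:

```python
def ranks(xs):
    # Assign 1-based ranks, averaging the rank over tied values.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    # Ordinary (Pearson) correlation computed on the ranks.
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical frequencies of a few words in two categories:
print(spearman([0, 1, 3, 0, 2], [0, 2, 4, 1, 2]))
```

Because only the ordering of the values enters the calculation, the extreme outliers lose their leverage, at the cost of the information in the actual magnitudes.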

The expected results would be that only the strongest correlations from the
previous attempts will show up. This is indeed what we find.

I'll post the full results later tonight, but here are the values that are
significant at the .0003 level.

Positive
------------
012-112
221-121
221-021
222-220
211-210
112-012
112-102** (new result) (.0003 level exactly)
221-121
221-021
221-220
121-021
121-120
021-121
002-112
002-012
020-120
200-202
200-201

Negative
----------
002-200
002-202
002-221

I note here that since the 102-202 connection has disappeared, and 102-112
has appeared, this can be viewed as rather positive news for the FH and
the 3SH. Influential outliers must have been largely responsible for the
previous results.
AUTON is the biggest offender here. However, in the next method I describe,
all of the above results appear, and more, *except* for 112-102. Also, 102
and 201 remain symmetric with respect to 202 in the next set of results.

Other than to state the obvious, that 102 seems to be related to both Luke
and 202, I've no more insight as to why 112-102 appears in this test and
not the next one. In the next test we can make use of all the zeros, so we

==========

The problem with this non-parametric approach is that we are making poor
use of the data. By ranking the values we lose information, and we still
make no use of the zero values. The next method solves these problems:
we can, effectively and correctly, use all the data, including the zeros.
The results, it turns out, are free of those annoying minor effects too.
While it might be possible for a redactor to preferentially retain a
specific word, it is unlikely that this happens over many words. So, by
effectively using *ALL* the data, we can remove many of these effects.
Whereas before we had to push the confidence levels very high to eliminate
them, I can now go as low as .99 confidence for individual results without
seeing anything bizarre, and I get even more results that seem very
plausible.

The method involves maximum likelihood fitting, and a likelihood ratio test
to determine significance. The first question is: if a normal distribution
is not appropriate, what distribution is? Realizing that we are dealing
with frequencies and integer values leads us quickly to the Poisson
distribution.

An example of a Poisson process is the frequency of customer arrival at a
store. There is an average arrival rate (gamma), and in any given time
interval we can calculate the exact probability that 1 person arrives, 2
people arrive, 0 people arrive, etc., by using a Poisson distribution. The
only parameter we need is the average rate (gamma).

I treat each different word as a separate Poisson process with its own
gamma. I first estimate the gamma by looking at the overall frequency of
the word, and the number of words in a category.

Example:

ABRAAM occurs 18 times in all categories.
There are 25843 words studied in total.
3843 words are in category 200.

The expected number in 200 based on this is 18*3843 / 25843 = about 2.67.
So 2.67 will be our gamma estimate.
We can calculate the probability of 0 occurrences = .06,
of 1 occurrence = .18,
of 2 occurrences = .24,
of 3 = .22,
of 4 = .14,
and so on. (The actual observed count is 3.)
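The Poisson probabilities above can be reproduced directly. A minimal sketch in Python, using the corrected category-200 word count of 3843 given in the follow-up message (the listed .06/.18/.24/.22/.14 figures appear to be truncated rather than rounded):

```python
import math

def poisson_pmf(k, gamma):
    """Probability of exactly k occurrences under a Poisson rate gamma."""
    return math.exp(-gamma) * gamma ** k / math.factorial(k)

# ABRAAM occurs 18 times among 25843 words; category 200 contains 3843
# words (the corrected count from the follow-up message).
gamma = 18 * 3843 / 25843          # about 2.68
for k in range(5):
    print(k, poisson_pmf(k, gamma))
```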

Once we have the probability of the actual observed count for each word,
multiplying these probabilities across words would give the total
probability (the likelihood). Since this would involve multiplying many
small fractions, the result would be vanishingly tiny. So the preferred
method is to take the log of each probability and add the individual
results.
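In code, summing logs instead of multiplying probabilities looks like this; the counts and rate estimates below are hypothetical, not the actual word data:

```python
import math

def log_poisson_pmf(k, gamma):
    """Log of the Poisson probability of observing exactly k occurrences."""
    return -gamma + k * math.log(gamma) - math.log(math.factorial(k))

def total_log_likelihood(counts, gammas):
    """Sum of per-word log-probabilities (the log of the product)."""
    return sum(log_poisson_pmf(k, g) for k, g in zip(counts, gammas))

# Hypothetical observed counts and rate estimates for three words:
print(total_log_likelihood([3, 0, 1], [2.67, 0.4, 1.1]))
```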

The next step is to ask whether information from another category (say 202)
might be useful in predicting 200. A frequency estimate based on 202 would
be calculated in a similar manner to the estimated frequency based on all
categories.

I then assign a variable "beta" to weight the two estimates:
Best estimate = B * (estimate based on other category) + (1-B) * (estimate
based on overall frequency).
I then use Excel's solver feature to find the value of beta that will most
improve the overall calculated likelihood.
If beta is 0 we conclude the category is unrelated. If there is a positive
relation, we need a test for significance.
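Excel's solver is doing a one-dimensional maximization here. A stdlib Python stand-in using a simple grid search over beta (all data hypothetical) might look like:

```python
import math

def log_lik(counts, gammas):
    # Sum of Poisson log-probabilities over all words.
    return sum(-g + k * math.log(g) - math.log(math.factorial(k))
               for k, g in zip(counts, gammas))

def fit_beta(counts, est_other, est_overall, steps=1000):
    """Grid-search the beta in [0, 1] that maximizes the likelihood of
    gamma_i = beta * est_other_i + (1 - beta) * est_overall_i."""
    best_b, best_ll = 0.0, -math.inf
    for i in range(steps + 1):
        b = i / steps
        gammas = [b * o + (1 - b) * v
                  for o, v in zip(est_other, est_overall)]
        ll = log_lik(counts, gammas)
        if ll > best_ll:
            best_b, best_ll = b, ll
    return best_b, best_ll

# Hypothetical counts in 200, with rate estimates from 202 and from the
# overall frequencies:
beta, ll = fit_beta([3, 0, 2], [2.9, 0.3, 1.8], [2.0, 1.0, 1.0])
print(beta, ll)
```

A grid search is cruder than a proper optimizer, but for a single bounded parameter it is transparent and good enough to illustrate the fit.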

The test statistic is -2 * ln( Lr / Lu ), where Lu is the likelihood for the
model with the beta, and Lr is the likelihood of the model based only on
the overall frequency. (Since the fitted model can only improve the
likelihood, Lu >= Lr and the statistic is nonnegative.) The statistic is
distributed chi-squared with n degrees of freedom, where n is the number of
parameters added, in this case 1.
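The significance computation can be sketched in Python. Writing the statistic as 2 * (ln Lu - ln Lr) keeps it nonnegative, and for 1 degree of freedom the chi-squared upper-tail probability reduces to erfc(sqrt(x/2)), so no statistics library is needed (the log-likelihood inputs below are hypothetical):

```python
import math

def lr_test(lnL_fitted, lnL_restricted):
    """Likelihood-ratio statistic and its p-value for 1 degree of freedom.

    The fitted (beta) model nests the restricted (overall-only) model, so
    the statistic 2*(lnLu - lnLr) is nonnegative; for 1 df,
    P(chi2 > x) = erfc(sqrt(x / 2)).
    """
    stat = 2.0 * (lnL_fitted - lnL_restricted)
    p_value = math.erfc(math.sqrt(stat / 2.0))
    return stat, p_value

# Hypothetical log-likelihoods for the beta model and the restricted model:
stat, p = lr_test(-100.0, -105.0)
print(stat, p)
```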

The method does not test for negative relations. Also note that we do not
have absolute symmetry: 002 + overall may predict 112 better than 112 +
overall predicts 002.

The full results will be posted later. Here I will list results significant
at the .99 level.
If the result is not also significant at the .0003 level, I'll mark it with
an *.

Luke group
------------
012-112
112-002
002-012*

Matthew group
--------------
212-211
212-210
210-211

Sayings group
-----------
200-201
200-202
201-202*
202-102*

Central group
-----------
222-220
222-022*

Mark group
----------
020-021
020-120
020-121
020-221*
121-120
121-221
121-122
121-021
021-120
021-221
120-122*

Mark-central connections
---------------
022-021*
220-221
222-221

I'm sure Ron will be happy about the support the 212 results give for
Luke's use of Matthew.

David Gentile
Riverside, Illinois
M.S. Physics
Ph.D. Management Science candidate

Synoptic-L Homepage: http://www.bham.ac.uk/theology/synoptic-l
List Owner: Synoptic-L-Owner@...
Message 2 of 17 , Feb 15, 2002
A small correction.

The word count for 200 as a whole is 3843, not the 1220 I originally gave
in the example. Sorry for making the example confusing.

dgentil@... on 02/15/2002 11:38:59 AM

Sent by: owner-synoptic-l@...

To: Synoptic-L@...
cc: gentdave@...

Subject: [Synoptic-L] Problems, and new results

Message 3 of 17 , Feb 15, 2002
David Gentile wrote on Friday, February 15, 2002:

> I've done some more work on the statistics I previously reported, and
> happily, besides getting what I believe are much more reliable results, I
> now have enough material to use this as a class project.
>
> Let me apologize in advance to the non-mathematically inclined for the
> mathematical complications.

David, I have been following your statistical analysis off and on, but have
not had, and do not have now, the time to pursue the math and the erudite
explanations and grasp fully the implications of your analysis for Synoptic
study. Could you state for me in very simple terms what conclusions you now
draw with respect to Synoptic dependency, interdependency or independence?
Which of the current theories of Synoptic relationships fare well from the
results of your analysis, and which do not fare so well?

Thank you,

Ted Weeden

Message 4 of 17 , Feb 15, 2002
At 11:38 AM 2/15/02 -0600, dgentil@... wrote:
>I treat each different word as a separate Poisson process with its own
>gamma. I first estimate the gamma by looking at the overall frequency of
>the word, and the number of words in a category.

I think it is a good decision to go with the Poisson process.
I wish I had thought of that.

I really wish I knew what was going on with this set of
results.

>Sayings group
>-----------
>200-201
>200-202
>201-202*
>202-102*

Why is there a 202-102, but no 002-102 or 201-102?

Could you list the beta values for all the above and
the test statistic?

Stephen Carlson
--
Stephen C. Carlson mailto:scarlson@...
"Poetry speaks of aspirations, and songs chant the words." Shujing 2.35

Message 5 of 17 , Feb 15, 2002
At 09:39 PM 2/15/02 -0600, Ted Weeden wrote:
>David, I have been following your statistical analysis off and on, but have not,
>and do not have the time now, to pursue the math and the erudite explanations to
>grasp fully the implications of your analysis for Synoptic study. Could you
>state for me in very simple terms what conclusions you now draw with respect to
>Synoptic dependency, interdependency or independency?. Which of the current
>theories of Synoptic relationships fare well from the results of your analysis
>and which do not fare so well?.

As best I understand the results, the Synoptic theories that
do the best are the ones that postulate Markan priority and
Luke's non-priority to either Matthew or Mark. Lukan priority
theories fare abysmally, and Matthean priority is poorly
supported.

The data are more ambiguous about Q, but I'd give a slight
edge to Farrer, in that both the Minor Agreements and the
Double Tradition come out as more Matthean than Lukan.

Stephen Carlson
--
Stephen C. Carlson mailto:scarlson@...
"Poetry speaks of aspirations, and songs chant the words." Shujing 2.35

Message 6 of 17 , Feb 16, 2002
Stephen Carlson wrote on Friday, February 15, 2002:

> As best I understand the results, the Synoptic theories that
> do the best are the ones that postulate Markan priority and
> Luke's non-priority to either Matthew or Mark. Lukan priority
> theories fare abysmally, and Matthean priority is poorly
> supported.
>
> The data are more ambiguous about Q, but I'd give a slight
> edge to Farrer, in that both the Minor Agreements and the
> Double Tradition come out as more Matthean than Lukan.

Thank you, Stephen. I have a question regarding Q. What if we pose the
hypothesis that Mark used Q, how would such a hypothesis fare in this
statistical analysis?

Ted Weeden

Message 7 of 17 , Feb 16, 2002

I uploaded all results to:
http://groups.yahoo.com/group/synoptic-l/files/final%20results/

The "results" spreadsheet contains just the results.
The top number in each pair is the beta, the bottom number is the confidence
level.

I also uploaded the worksheet used to calculate the values. If someone wants
to work with it, I can explain it more. It uses the solver and a small VBA
macro.

Finally, I also uploaded the non-parametric results.

My interpretation of the sayings group is that they represent a Q+, a
document that contained at least large sections of both 200 and 202. 202 is
more tied to 200 than it is to either 201 or 102. So 202 and 200 are the
source. 201 is Matthew editing the source, or the source left after a Luke
edit. 102 is Luke editing the source, or the source left after a Matthew
edit.

Dave Gentile
Riverside, Illinois
M.S. Physics
Ph.D. Management Science candidate

=========

From: "Stephen C. Carlson" <scarlson@...>

> I think it is a good decision to go with the Poisson process.
> I wish I had thought of that.
>
> I really wish I knew what was going on with this set of
> results.
>
> >Sayings group
> >-----------
> >200-201
> >200-202
> >201-202*
> >202-102*
>
> Why is there a 202-102, but no 002-102 or 201-102?
>
> Could you list the beta values for all the above and
> the test statistic?
>
> Stephen Carlson

Message 8 of 17 , Feb 16, 2002
I agree with Stephen's comments. I'd add that the 3SH fares the best of all.
There seems to be evidence that 212 was produced by Matthew and copied by
Luke, and that Luke knew of Sondergut Matthew but omitted it. But there
also seems to be a 4th document, and Matthew and Luke both seem to have used
it. 102 and 201 are symmetric around 202.

I think a face value reading yields the 3SH with a heavy Q. The farther away
from that idea a hypothesis is, the more the results would argue against it,
in general.

There is still the possibility in the results that Mark is not completely
original. It is also quite possible from the results that Mark is completely
original, however.

Dave Gentile
Riverside, Illinois
M.S. Physics
Ph.D. Management Science candidate

----- Original Message -----
From: "Stephen C. Carlson" <scarlson@...>

> At 09:39 PM 2/15/02 -0600, Ted Weeden wrote:
> >David, I have been following your statistical analysis off and on, but have not,
> >and do not have the time now, to pursue the math and the erudite explanations to
> >grasp fully the implications of your analysis for Synoptic study. Could you
> >state for me in very simple terms what conclusions you now draw with respect to
> >Synoptic dependency, interdependency or independency?. Which of the current
> >theories of Synoptic relationships fare well from the results of your analysis
> >and which do not fare so well?.
>
> As best I understand the results, the Synoptic theories that
> do the best are the ones that postulate Markan priority and
> Luke's non-priority to either Matthew or Mark. Lukan priority
> theories fare abysmally, and Matthean priority is poorly
> supported.
>
> The data are more ambiguous about Q, but I'd give a slight
> edge to Farrer, in that both the Minor Agreements and the
> Double Tradition come out as more Matthean than Lukan.
>
> Stephen Carlson
> --

Message 9 of 17 , Feb 16, 2002
I'd say there is no evidence that Mark used Q in the results.
But on the other hand, the result certainly would not argue against some
minimal usage.

Dave Gentile
Riverside, Illinois
M.S. Physics
Ph.D. Management Science candidate

----- Original Message -----
From: "Ted Weeden" <weedent@...>

I have a question regarding Q. What if we pose the
> hypothesis that Mark used Q, how would such a hypothesis fare in this
> statistical analysis?
>
> Ted Weeden
>

Message 10 of 17 , Feb 16, 2002
David Gentile wrote on Saturday, February 16, 2002:

> I agree with Stephen's comments. I'd add that the 3SH fairs the best of all.
> There seems to be evidence that 212 was produced by Matthew and copied by
> Luke, and that Luke knew of sondergut Matthew, but omitted it. But there
> also seems to be a 4th document, and Matthew and Luke both seem to have used
> it. 102 and 201 are symmetric around 202.

Thank you, David, for all the fine work you have done in performing this
statistical analysis.

Ted Weeden

Message 11 of 17 , Feb 16, 2002
At 06:30 AM 2/16/02 -0600, David Gentile wrote:
>I'd say there is no evidence that Mark used Q in the results.
>But on the other hand, the result certainly would not argue against some
>minimal usage.

I'm wondering if this is due to the way the editors of the
Synoptic Concordance discriminated between the Mark/Q overlaps
and triple tradition in assigning words to their categories?

Stephen Carlson
--
Stephen C. Carlson mailto:scarlson@...
"Poetry speaks of aspirations, and songs chant the words." Shujing 2.35

Message 12 of 17 , Feb 16, 2002
Stephen Carlson wrote:

>Why is there a 202-102, but no 002-102 or 201-102?

Stephen,
It may be because Luke tends to preserve the wording in the sayings
source better than Matthew. Thus 102 is closer to the sayings source
than to typical Lukan redaction, whereas 201 is closer to typical
Matthean redaction than to the sayings source.

>The data are more ambiguous about Q,

Thus favouring 2ST or 3ST ?!

> ... but I'd give a slight
>edge to Farrer, in that both the Minor Agreements and the
>Double Tradition come out as more Matthean than Lukan.

Thus favouring Farrer or 3ST ?!

Need I say more? ;-)

P.S. My rationale for the 3ST has now been accepted as a 'short study'
in the current issue of the Journal of Biblical Studies:

http://journalofbiblicalstudies.org

Ron Price

Weston-on-Trent, Derby, UK

e-mail: ron.price@...

Web site: http://homepage.virgin.net/ron.price/index.htm

Message 13 of 17 , Feb 16, 2002
Stephen Carlson wrote:

> At 06:30 AM 2/16/02 -0600, David Gentile wrote:
> >I'd say there is no evidence that Mark used Q in the results.
> >But on the other hand, the result certainly would not argue against some
> >minimal usage.
>
> I'm wondering if this is due to the way the editors of the
> Synoptic Concordance discriminated between the Mark/Q overlaps
> and triple tradition in assigning words to their categories?
>
> Stephen Carlson

The Mark/Q overlap has to be recorded in some Markan category. We might
argue it should have been 222 not 020, but it's still in a category we think
of as "Mark". So if there was substantial use of Q, it should show up as
some Markan category being related to a Q category.

But, if Mark did use some sayings from Q, wouldn't they represent a small
fraction of the total text of Mark? Any contribution they make to Mark's
characteristics might not be visible through Mark's own style.

We were seeing some relation between 020 and 200/202 in the simple
correlations, and the new results have 020 and 202 as a very slight
positive (only 50% confidence), so I suppose we could take this as a
possible hint of Q in Mark.

Dave Gentile
Riverside, Illinois
M.S. Physics
Ph.D. Management Science candidate

Message 14 of 17 , Feb 16, 2002
David Gentile wrote on February 16, 2002 in response to Stephen Carlson:

> The Mark/Q overlap has to be recorded in some Markian category. We might
> argue it should have been 222 not 020, but it's still in a category we think
> of as "Mark". So if there was substantial use of Q it should show up by some
> Markian category being related to a Q category.
>
> But, if Mark did use some sayings from Q, wouldn't they represent a small
> fraction of the total text of Mark? Any contribution they make to Mark's
> characteristics might not be visible through Mark's own style.
>
> We were seeing some relation between 020, and 200/202 in the simple
> correlations, and the new results have 020 and 202 as a very slight
> positive. (only 50% confidence), so I suppose we could take this as a
> possible hint of Q in Mark.

David, the Markan texts in which I think there may be some Markan dependency on
Q are for example:

Q [Lk.] 7:27//Mk.1:2
Q 3:16//Mk.1:7f. (see Mt.3:11; Lk.3:16)
Q 10:4-5a, 7a,10-11//Mk. 6:8-11 (see Mt. 10:9-11; Lk.9:3-5)
Q 11:17b-19a, 21//Mk. 3:22 , 23-27 (see Mt. 12:25b-26, 29, 32; Lk. 11:17f., 21)
Q 12:10//Mk. 3:28 (see Mt.12:32b)
Q 12:11-12//Mk. 13:9,11 (see Mt. 24:17f.; Lk. 21:12)

I am particularly interested in Q [Lk.] 7:27//Mk.1:2. For I see it as a likely
example of Markan intercalation or sandwiching in which Mark frames the Q 7:27
conflation of Ex. 23:20/Mal. 3:1 (a conflation originating with 2Q) within
Isaianic features (Isaiah named as prophet=the initial part of frame; Isa.
40:3=concluding part of frame) in an effort to redirect the Q-conflation's
theological thrust away from the purification of the temple-cult motif of Mal
3:1ff. (which is the direction it was headed in 2Q) to the exodus motif of Ex.
23:30, thereby making the conflation conform to the new exodus motif of Isa.
40:3, which is a Markan theological theme. Note, too, that Mark appropriates
GEGRAPTAI from Q 7:27a and uses it to introduce (1:2a) the Isaianic framed
Q-conflation of Ex.23:20/Mal. 3:1. Of course, Matthew and Luke omit the
conflation at the point Mark includes it (see Mt. 3:3; Lk. 3:4), a fact that
immediately prejudices the analysis toward some possible form of Markan and Q
interdependency, does it not?

Can you (1) design a statistical analysis, using the presupposition that Mark
was dependent upon Q in these passages, and test to see whether the results are,
with respect to the presupposition, positive, negative, or indeterminate, or (2)
any other statistical analysis that would test for Markan dependency on Q?

Ted Weeden

Message 15 of 17 , Feb 16, 2002
Ted,
I suspect the hardest part of what you suggest would be getting the data.
We would need word counts for the specific sections in question, probably by
category, to compare against other sections of text. Then, assuming the
sample of text was large enough, we could probably test it in a similar way.

As for any other tests... I suspect there are probably other things we
could learn from the data we have, given time (a rare commodity at the
moment). But I think data requirements will be an issue for any
investigation of specific passages, because this data is not directly
available at the moment.

Dave Gentile
Riverside, Illinois
M.S. Physics
Ph.D. Management Science candidate

----- Original Message -----
From: "Ted Weeden" <weedent@...>
>
> David, the Markan texts in which I think there may be some Markan dependency on
> Q are for example:
>
> Q [Lk.] 7:27//Mk.1:2
> Q 3:16//Mk.1:7f. (see Mt.3:11; Lk.3:16)
> Q 10:4-5a, 7a,10-11//Mk. 6:8-11 (see Mt. 10:9-11; Lk.9:3-5)
> Q 11:17b-19a, 21//Mk. 3:22 , 23-27 (see Mt. 12:25b-26, 29, 32; Lk. 11:17f., 21)
> Q 12:10//Mk. 3:28 (see Mt.12:32b)
> Q 12:11-12//Mk. 13:9,11 (see Mt. 24:17f.; Lk. 21:12)
>
> I am particularly interested in Q [Lk.] 7:27//Mk.1:2. For I see it as a likely
> example of Markan intercalation or sandwiching in which Mark frames the Q 7:27
> conflation of Ex. 23:20/Mal. 3:1 (a conflation originating with 2Q) within
> Isaianic features (Isaiah named as prophet=the initial part of frame; Isa.
> 40:3=concluding part of frame) in an effort to redirect the Q-conflation's
> theological thrust away from the purification of the temple-cult motif of Mal.
> 3:1ff. (which is the direction it was headed in 2Q) to the exodus motif of Ex.
> 23:30, thereby making the conflation conform to the new exodus motif of Isa.
> 40:3, which is a Markan theological theme. Note, too, that Mark appropriates
> GEGRAPTAI from Q 7:27a and uses it to introduce (1:2a) the Isaianic framed
> Q-conflation of Ex.23:20/Mal. 3:1. Of course, Matthew and Luke omit the
> conflation at the point Mark includes it (see Mt. 3:3; Lk. 3:4), a fact that
> immediately prejudices the analysis toward some possible form of Markan and Q
> interdependency, does it not?
>
> Can you (1) design a statistical analysis, using the presupposition that Mark
> was dependent upon Q in these passages, and test to see whether the results are,
> with respect to the presupposition, positive, negative, or indeterminate, or (2)
> any other statistical analysis that would test for Markan dependency on Q?
>
> Ted Weeden
>

Message 16 of 17 , Feb 16, 2002
David Gentile wrote on Saturday, February 16, 2002, at 7:32 PM:

> Ted,
> I suspect the hardest part of what you suggest would be getting the data.
> We would need word counts for the specific sections in question, probably by
> category, to compare against other sections of text. Then, assuming the
> sample of text was large enough, we could probably test it in a similar way.

Dave, I think it would be a daunting task and there may not be enough text to
provide a useful sample. Maybe it would make a good Ph.D. dissertation project
for someone. In any event, thanks for all your good work.

Ted

Message 17 of 17 , Feb 17, 2002
> You're welcome. I'm just glad, at this point, that I can use it as a class
> project.
> It's due in 2 weeks, and I wasn't having a lot of other inspirations for
> projects.
>
> Dave Gentile
> Riverside, Illinois
> M.S. Physics
> Ph.D. Management Science candidate
>
> >
> > Dave, I think it would be a daunting task and there may not be enough text to
> > provide a useful sample. Maybe it would make a good Ph.D. dissertation project
> > for someone. In any event, thanks for all your good work.
> >
> > Ted
> >
>
>
