Re: Where is Seminar Gray/Black Voting Data?
- Michael Davies wrote:
> I believe you are going through your email systematically, as is your
> custom, and you'll find that Lee has laid some very serious charges
> against the mathematics of the JSem. I've been tempted to send them
> off to Funk, in your absence, but I don't think he's on the WWW. In any
> event, it will be interesting to see what your response is to the whole
> crosstalk discussion, which I'd suggest you read through as a whole.

Thanks for the suggestion, Steve. You are right in surmising that I'm
wading through a landslide of back e-mail. Truth is I've been without a
computer for almost a month. My not-so-old one got zapped by lightning
in that horrendous tornado-spawning thunderstorm on May 6 & I've spent
the past month negotiating with my insurance company for data recovery &
replacement. I finally got back on line today, but it will take me a
week or so to get files reloaded & everything working the way it should.
(I probably will have to rebuild my address books from scratch.) And I
still have 600+ e-notes to sift through (not counting the dozen or so
new ones that were posted since suppertime).
My earlier response to Lee was dashed off on the naive assumption that
it was a simple request for bibliographical information. I could have
saved myself precious time if I'd sorted through the next 300+ notes
before replying. I can't guarantee that I've read the whole subsequent
discussion yet, but I've sorted through enough related messages & reviewed
the mathematics of Lee's "Error in JS Vote Tally" webpage to be able to
make a less redundant & more substantive reply.
1. I agree that Lee's challenge to the objectivity of the JS methodology
is substantial enough to merit the attention of Funk & other JS fellows.
So I plan to add a link to his page on the JS Forum's webpage on recent
reaction to the JS's work. That should get his webpage (currently
accessed only 46 times) some real action (the JSForum has no counter,
but is a major contributor to the 100+ daily hits on the RU religion
website). I hope that someone who is so concerned with generating
accurate statistics will appreciate this move to give his negative vote
on the JS's work greater weight.
2. Lee's challenge, though mathematically impressive, itself suffers
from several serious flaws, the most basic of which is his assumption
that the purpose of the RPGB color scheme was simply to indicate the
statistical mean of fellows voting for or against the authenticity of a
given saying. If that were the case, there would have been no need for
weighting the votes or for using an algorithm to calculate the weighted
average.
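Since not everyone has 5G within reach, the weighting procedure is
simple enough to sketch in a few lines of Python. The weights (R=3,
P=2, G=1, B=0) & the quarter-point cutoffs below are my summary of the
scheme described in the 5G introduction, offered as an illustration
rather than an official recalculation:

    WEIGHTS = {"R": 3, "P": 2, "G": 1, "B": 0}

    def tally(votes):
        # votes is a list of ballot colors, e.g. ["R", "P", "P", "B"].
        # Scale the weighted average to 0..1 by dividing by the top
        # weight (3), then cut at the quarter points (my reading of
        # the 5G boundaries).
        avg = sum(WEIGHTS[v] for v in votes) / (3.0 * len(votes))
        if avg > 0.75:
            return avg, "red"
        elif avg > 0.50:
            return avg, "pink"
        elif avg > 0.25:
            return avg, "gray"
        return avg, "black"

    # E.g. tally(["R", "R", "P", "G"]) -> (0.75, "pink")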
3. Lee totally ignores the published explanation of the meaning of the
RPGB voting schema & assigns his own interpretations of what these
colors should or should not mean. Greg Jenks' assessment of the JS
voting was absolutely correct:
> I consider that the point of the colour coding is not simply to convey
> the results of a vote, but rather to interpret the voting for the
> purposes of assessing the probability that a specific item of tradition
> should be included in the database for Jesus research.

This is made clear in Funk's explanation of the agenda of the JS in
the introductions to both 5G (pp. 35-37) & Acts of Jesus ("Beads &
Boxes" pp. 36-37).
First, note that the primary meaning of the RPGB voting scheme was to
determine consensus on whether a saying should be included in a
historical data base FOR DETERMINING WHO JESUS WAS. The answer to this
question is simply yes (it is in) or no (it is out).
Everyone has reasons for thinking that Jesus probably did or did not say
something. After publicly debating these, the Fellows voted on the
basis of the strength of arguments pro & con. In other words, did the
reasons for ascribing this item to Jesus (rather than the author of this
or that gospel) outweigh the arguments against? If so=R; if not=B. But
historical reality is not simply black or red. The P & G gradations were
introduced to avoid the quandary of what to do with a basically authentic
Jesus saying that had been modified by one or another gospel writer or
an editorial composition that included some genuine echoes of Jesus.
It is obviously harder to determine whether an ambiguous item should be
accepted into or rejected from a data base of information that is useful
for determining Jesus' personal characteristics. Some may be & some may
not. Thus, in proposing to recalculate the boundaries of the pink-gray
categories, Lee is in fact arguing for greater ambiguity in determining
the consensus about what may or may not have originated with Jesus
himself. If we wanted an ambiguous grab bag of material that individual
scholars could select bits & pieces from to construct their personal
images of Jesus, we would not have had to spend 13 years & tens of
thousands of dollars to find it. We could have simply used the extant
gospels as is.
The whole point of the JS project was to try to clarify what material a
broad range of contemporary scholars would accept as historically
reliable info about Jesus & what material they would exclude. One is
always free to argue that excluded material (black) should be included,
but this has to be done by providing cogent reasons for authenticity,
not just by arbitrarily shifting the position of the statistical
markers for determining what is in & what is out.
4. Lee faults the JS for giving the R & B votes too much weight &
concludes his webpage thus:
"Determining the Seminar consensus is exactly analogous to calculating
Grade Point Averages in school."
This should not be such a startling revelation to anyone who has
read the 5G or Acts of Jesus, since Funk explicitly makes the analogy
of a red vote to an A & a black vote to an F. Red is unqualified support;
black is unqualified rejection. When an academic scholar committed to
objective weighing of the evidence makes such an unqualified assessment
it merits extra weight, whether it is in evaluating the work of a
student or in evaluating the historical reliability of a piece of
information. So red votes & black votes were deliberately given more
weight than the ambiguous, mediocre pinks & grays. NB: The JS votes were
not tabulated on a bell curve, because the object was not to determine a
broad middle ground of those items that might pass as historical but to
determine the consensus of what should definitely be accepted or not.
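A hypothetical ballot makes the point concrete (my numbers, chosen
purely for illustration): suppose 10 Fellows cast 6 pink votes & 4
black ones. The weighted average is (6 x 2 + 4 x 0)/(3 x 10) = 12/30 =
0.40, which lands in the gray band, so the saying drops out of the
common data base even though pink was the plurality. The four F-like
blacks pull the average down past the six C-like pinks, exactly as
they would in a GPA.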
The project is akin to the hiring process. When you're looking for a
reliable colleague, do you hire the candidate who is everybody's 2nd or
3rd choice? In my department at least we give greater weight to
unqualified recommendations or objections in order to find someone we
all can live with. That is why the Fellows who voted Pink or Gray did
not object (except on a rare occasion) when a smaller number of black
excluded a saying from the common data base, or when a few red votes
rescued a saying that the majority had reservations about. A's & F's do
generally tell more about reliability than B's & C's.
But this has gotten too long. If I find time this next week or so, I'll
post rebuttals to some of Lee's posts.