## Re: "The Five Gospels" printed in wrong colors

Message 1 of 4, Jun 7, 1998
re missive of 05/06/98 06:30 PM signed -Mahlon H. Smith- :

Stephen Carlson has nailed it on the head:

>Funk's formula does not even measure what he and the rest of the
>Jesus Seminar probably intended the formula to measure.

Dear Mahlon,
I respectfully vote "black" on your defense of the
Jesus Seminar methodology. I hope you won't mind if I cut and
paste from your several messages in order to try to stick to
major points.

>> To generalize this simpler approach:
>>
>> 1. Total up the bead-points.
>> 2. Divide by the number of voters.
>> 3. Simply round the result.

>This approach would be "simpler" only if one were trying to determine
>the statistical mean in the voting range of the fellows.

No.

In steps 1 and 2, what is calculated is the *weighted average*.

(If you don't believe me, consult Robert Funk! That's how "weighted
average" is defined in the glossary of _The Five Gospels_...)

It seems that the definition of "statistical mean" (or what we should
call a "simple average") and the definition of "weighted average" are
being confused here.

A "simple average" or "statistical mean" is calculated by this formula:

simple average of n A's = (A1+A2+A3+...+An)/n

(Whoops. I just lost the "average reader." Hope I haven't lost you
as well, Mahlon!) :-)

A *weighted average*, on the other hand, is a calculation of this type:

(A1*X1+A2*X2+A3*X3+...+An*Xn)/(A1+A2+A3+...+An)

Now, clearly, the Jesus Seminar vote tabulation is a weighted average.
(A's correspond to numbers of people who voted a particular colour,
X's correspond to the numerical values that were assigned to each
colour.)
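That calculation can be sketched in a few lines of Python. (The vote
counts below are hypothetical, purely for illustration; the colour
weights are the Seminar's own, as quoted later in this message.)

```python
# Colour weights as defined by the Jesus Seminar:
# black = 0, gray = 1, pink = 2, red = 3
WEIGHTS = {"black": 0, "gray": 1, "pink": 2, "red": 3}

def weighted_average(votes):
    """votes maps each colour to the number of Fellows who cast it.

    Returns sum(A_i * X_i) / sum(A_i), where the A's are vote counts
    and the X's are the numerical values assigned to each colour.
    """
    total_points = sum(WEIGHTS[colour] * count for colour, count in votes.items())
    total_voters = sum(votes.values())
    return total_points / total_voters

# Hypothetical ballot: 30 Fellows split across the four colours.
votes = {"red": 5, "pink": 10, "gray": 10, "black": 5}
avg = weighted_average(votes)  # (5*3 + 10*2 + 10*1 + 5*0) / 30 = 1.5
```

Note that dividing total bead-points by the number of voters (steps 1
and 2 of the "simpler approach" above) produces exactly this quantity.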

The JS did not go wrong in calculating those weighted averages.
(Not in their arithmetic. Perhaps in their overall approach, as Bruce
says.)
The mistake happened when they used the wrong scale to assign colour codes
to the results.

>To insure that significant minority voices would not quit the seminar
>simply because they were regularly outvoted by the majority an
>objectively neutral statistical means of giving weight to Red (& Black)
>of determining weighted cumulative averages.

Once again, as Dr. Young has outlined on his webpage, the formula which
the Jesus Seminar *actually used* to report their results does *not*
match the formula for determining GPAs. Using the JS method to assign
letter-grades to GPAs would be sure to elicit howls of protest from
any academic constituency, as Dr. Young has explained. Perhaps you
could read that page (it only takes a few minutes) and tell us why it's
wrong?

http://members.aol.com/leeayoung/jseminar/gpa.htm

It seems from a previous post of yours that you read the first page,

http://members.aol.com/leeayoung/jseminar/error.htm

but didn't bother with the second one. Forgive me if I presume
incorrectly.

>Our mathematical formula may not yield
>the statistics that you & Lee want.

Well, speaking for myself, I don't "want" any statistics in particular
from the JSem... The point that I'm making here, rather, is that it doesn't
yield the statistics that *they* want! Rather than the actual weighted
average, what the final colour-coding represents is an *arbitrarily
skewed* weighted average.

>But it satisfied the JS's aim of
>providing an objective measure of what gospel info has enough support to
>have to be take into account in ANY contemporary scholar's account of
>who Jesus really was.

If by "objective" you mean "ridiculously arbitrary," I'm willing to
concede that point!

>most of
>those scholars who declined to participate in the JS project don't think
>there is much of any reliable historical info about Jesus in gospels &
>they continue to publicly say as much (witness the recent PBS special).
>But that is simply to echo a skepticism based on a critique of the
>historical positivism of the last century.

It's interesting that you say that. Here is a quote (again) from
Robert Funk, _The Five Gospels_:

>in particular could readily pull an average down, as students know who
>have one "F" along with several "A"s. Yet this shortcoming seemed
>consonant with the methodological skepticism that was a working principle
>of the Seminar: when in sufficient doubt, leave it out.

Fascinating. Funk implies here that the JSem system was biased toward
skepticism. Yet under that very system, Black and Red have equal weight.
Confusing, no?

>The colors printed in 5G do not represent the evaluation
>of any particular scholar or the statistical means of any group of
>scholars. They are rather a tool for advertising consensus on the
>relative historical value of every gospel pericope.

On the contrary: by the Seminar's own account, the colours represent
the weighted-average consensus of the Fellows.

No one here, including you, has answered the simple question:
Why would the Jesus Seminar *claim* to have determined their results
according to a weighted-average computation analogous to GPAs, and
then use an algorithm that *significantly* differs from that?

I've come to conclude, with many others on this list, that they
simply didn't know what they were doing with the mathematics.

If I can say this without sounding impertinent, to someone whose
Biblical scholarship I have nothing but deep respect for:
By ducking the peskiest of the mathematical questions, you have
done nothing to alleviate that impression.

You made this point to Bruce:

>...Without a weighted vote system the
>JS would have quickly have lost its diversity on both ends of the voting
>spectrum, those who tended to vote red or pink on all but a few sayings
>& those who seldom voted higher than gray on anything.
>...its open forum & weighted
>voting system has encouraged the loyal participation of a broad spectrum
>of scholars with diverse convictions.

Once again, please note that the thrust of Dr. Young's presentation was
*not* that the JSem should *not* use a weighted average (although he
implies some misgivings about that as well). Rather, his claim is that
what the JSem did was take weighted averages, and then skewed those
results in the process of assigning colour-ranges. This is a point that
you seem to be missing.

>> *The Seminar did NOT insist on UNIFORM standards for balloting.*

>This line by itself should serve as a caveat against subjecting the JS
>voting tallies to criteria of pure mathematics.

Well, golly gee. I didn't realize that if the JSem says it's doing
one thing, and then does another, it automatically has the privilege
of being beyond question by bothersome little details like "mathematics"!
If I'd-a known that, I never woulda started this consarned discussion
in the first place. That's fer darn tootin'. Sorry to "subject" the
mathematics of the Seminar's process to, um, the rules of math. :-)

>The Seminar's statistics
>were never intended to measure the exact center of a set of uniform
>ideal digits. They were designed rather to report & *promote* consensus
>among a large group of highly individual scholars...

Unfortunately, the method that was used did more than *promote*
consensus. It arbitrarily constructed a consensus that was not
enacted in the voting. And that's true whether you take Bruce's
assumptions about what should have been done, or Lee's.

>...highly individual scholars, with divergent
>perspectives on ancient history, the value & relationship of primary
>texts, & the criteria for establishing valid historical evidence.

And, it seems, precious little grasp of simple mathematics.
Unfortunately.

>This is where Lee's accusation that the JS is not democratic is totally
>in error. On the contrary, it is the most democratic academic gathering
>I have ever been a part of, since no one claimed of magisterial
>authority over others. A red vote counted the same whether it was cast
>by Funk, Crossan, Borg, Chilton, Kloppenborg or any of us lesser known
>scholars who did much of the research that provided the basis for
>debating the historical value of each item. But a red vote always
>carried more weight than a pink or gray for determining what was IN or
>OUT. It would take 3 gray votes to counteract 1 red vote. But that is
>only proper given the basic definitions of red & gray. Red=IN without
>reservation; Gray=OUT but with some reservation. A Gray item by
>definition contained some information that could be useful for
>determining who Jesus was & therefore it was not clearly OUT unless the
>proportion of those with doubts was significantly greater than those who
>were certain it should be included.

Your presentation of the rationale behind the various weights is
quite satisfactory, as far as I'm concerned. Unfortunately,
the JSem did not report their results according to the scale
that you describe. They introduced an extra step, in which
the weighted averages were divided by three and interpreted
according to ranges that were incorrectly chosen.

If the correct algorithm had been used, all of what you are saying
here would still hold true. All the more so, because if the results
had been reported accurately, they would have corresponded to the
actual voting patterns (albeit, as Bruce notes, and as you protested
years ago, in a rather meaningless way in certain cases).

Unfortunately, the results were arbitrarily skewed. This undercuts
the otherwise democratic nature of the Seminar, as Dr. Young claims.

>> The Jesus Seminar is going to take a credibility
>> hit because of this oversight, I'm afraid. I
>> hope that they don't refuse to acknowledge the
>> error.

>The only error we need acknowledge is in not making clearer the purpose
>& rationale for tabulating the votes so that intelligent mathematicians
>like you & Lee would not have marked us wrong.

I agree that the "purpose & rationale" could have been better stated.
But what really concerns me is that the algorithm that the JSem seem to
have *wanted to use* differs *significantly* from the one that was
*actually used*. And no amount of verbiage concerning "purpose &
rationale" seems to get us closer to explaining that pesky fact.

The nub of it:

>There has been no "error" in representing the JS consensus simply
>because the JS as a whole agreed to this method of determining &
>reporting its consensus & clearly explained the formulae it used to
>arrive at the results. The only error is in the interpretation of the JS
>statistics by those who did not participate in this process & insist on
>trying to make up their own definitions of what the colors should mean &
>where the statistical boundaries between colors should be set.

No.

The Jesus Seminar made the definitions.
black = 0
gray = 1
pink = 2
red = 3

The Jesus Seminar claims to have determined the consensus colours
by using the "weighted average" of the votes.

Yet, in many cases where the weighted average rounded to 1 (=gray),
the Seminar reported it as black (=0). In many cases where the
weighted average rounded to 2 (=pink), it was reported as red (=3).

If the JSem wanted to "clearly explain the formulae it used to
arrive at the results," it should have clearly stated that the
results were arbitrarily skewed using an extra step, so that
more red and black results would be returned than would have
been by the use of weighted average alone.
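To make the skew concrete, here is a short Python sketch contrasting
simple rounding of the weighted average against quarter-bands on the
normalized 0-1 scale. The exact cutpoints below are my reading of the
scheme criticized on Dr. Young's pages, so check them against _The
Five Gospels_ before relying on them:

```python
COLOURS = ["black", "gray", "pink", "red"]  # weights 0..3

def colour_by_rounding(avg):
    """Assign the colour whose weight is nearest the weighted average."""
    return COLOURS[round(avg)]

def colour_by_bands(avg):
    """Divide by three and read off quarter-bands of the 0-1 scale
    (my reading of the scheme described on Dr. Young's pages)."""
    x = avg / 3
    if x > 0.75:
        return "red"
    if x > 0.50:
        return "pink"
    if x > 0.25:
        return "gray"
    return "black"

# A weighted average of 0.6 rounds to gray, but the bands call it black;
# an average of 2.4 rounds to pink, but the bands call it red.
print(colour_by_rounding(0.6), colour_by_bands(0.6))  # gray black
print(colour_by_rounding(2.4), colour_by_bands(2.4))  # pink red
```

The black and red bands each swallow a slice of results that rounding
would have reported as gray or pink, which is precisely why the banded
scheme returns more red and black verdicts than the weighted average
alone warrants.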

To date, I've seen nothing to convince me that the misrepresentation
of the process/results was willful. I believe that it was
simply not well understood by the people concerned.

Or to put it another way, it's a good thing that registrars,
not profs, calculate GPAs!

>As for a "credibility
>hit," I'm not ready to panic. This issue is not akin to the Monika
>Lewinsky matter.

Yes. The Jesus Seminar only blew their math!

James

P.S. As an aside, back to a point that I made...

>> I think that Dr. Young's example of the
>> Good Samaritan is a strong one. It should make
>> intuitive sense that the a singly-attested parable,
>> deeply marked with the authorial style of its
>> single source, would meet with some skepticism
>> among any group of critical scholars as to its
>> originating with Jesus. That this pericope
>> should have come out pink, given the JS's own
>> principles of evidence, seems correct.

You said (in part):

>That depends who is part of the group of scholars who were voting. In
>this case the JS included several scholars who had made their career up
>to that point in meticulous analysis of the parables using a wide range
>of sophisticated scholarly methods (form, redaction, structural &
>rhetorical criticism). Their arguments persuaded most of the rest of the
>Fellows (I missed the Redlands meeting; having read the papers I
>probably would have voted pink). But 11% of the Fellows voted gray or
>black; & I take it you would have added to that percentage of skeptics