> (2) Tabor is correct in noting the logical curiosity in the
> Jay Cost criticism of the patina tests that "it is a waste of
> time to test patinas known to have been from radically
> different environments than the 'James' and Talpiot ossuaries."
> Oh? The environment of the "James" ossuary is known? What
> is that? Granted the data needs to become available and
> looked at critically by more experts. But prima facie there
> is something worth looking at here, *unless the method itself
> is shown false*. There is no reason in principle why a method
> cannot newly be proposed and argued, and then cited as
> argument in a first test case, in the same scientific article.
> It may hold up, it may not, but there is nothing a priori
> improper about this, again contra a common trope in
> the blogosphere.
I agree with you in principle if you mean the following. There is nothing wrong with
proposing a method, arguing that it is valid, "and then" using it in a test case. However,
this is not what was done in this instance. The second and third steps were conflated.
There is no "and then." The argument for the method was also the test case.
This is why statisticians, when they offer a new statistical estimator, run Monte Carlo
simulations. They create a known population via a computer simulation, sample from it,
use their new estimator on the sample to estimate a particular characteristic of that
population, and then they go back to their created population to see if the estimate
matches. They validate their estimator before offering it to the scholarly community for use.
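The validation loop described above can be sketched in a few lines of Python. This is a hypothetical toy example, not anything from the ossuary studies: the "new" estimator here is the sample median used to estimate the mean of a symmetric population, and the population, sample size, and simulation count are all made up for illustration.

```python
import random
import statistics

random.seed(42)

TRUE_MEAN = 5.0       # the known characteristic of the simulated population
N_SIMULATIONS = 2000  # number of simulated samples to draw
SAMPLE_SIZE = 50

estimates = []
for _ in range(N_SIMULATIONS):
    # Step 1: sample from a population we created, so the truth is known.
    sample = [random.gauss(TRUE_MEAN, 2.0) for _ in range(SAMPLE_SIZE)]
    # Step 2: apply the candidate estimator to the sample.
    estimates.append(statistics.median(sample))

# Step 3: go back to the created population and see if the estimates match.
bias = statistics.mean(estimates) - TRUE_MEAN
print(f"mean estimate = {statistics.mean(estimates):.3f}, bias = {bias:.3f}")
```

Only after a check like this shows the estimator behaving well (small bias, acceptable spread) would a statistician offer it for use on real data, where the truth is not known.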
ANY method has to be validated before it is used to draw inferences about other things. It
has to be shown, as Randy said in another thread, to yield low levels of Type 1 error (i.e.
few false positives) and Type 2 error (i.e. few false negatives). This has not happened
here. Thus, any inference that is made about the James ossuary using the patina
"fingerprint" implicitly assumes something about Type 1 and Type 2 error that the
fingerprint creates, which nobody has any business assuming.
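The Type 1 / Type 2 point can be made concrete with a toy simulation. Everything here is invented for illustration (the threshold decision rule, the distributions, the numbers); it stands in for any "match" test, including a patina fingerprint, whose error rates have never been measured.

```python
import random

random.seed(0)

# Assumed toy decision rule: declare a "match" if two measurements
# differ by less than THRESHOLD. This is hypothetical, not the real test.
THRESHOLD = 1.0

def is_match(a, b):
    return abs(a - b) < THRESHOLD

trials = 10000

# Type 1 error: items from genuinely DIFFERENT sources, falsely matched.
false_pos = sum(
    is_match(random.gauss(0, 1), random.gauss(3, 1))
    for _ in range(trials)
) / trials

# Type 2 error: items from the SAME source, falsely declared non-matches.
false_neg = sum(
    not is_match(random.gauss(0, 1), random.gauss(0, 1))
    for _ in range(trials)
) / trials

print(f"estimated Type 1 rate = {false_pos:.3f}, Type 2 rate = {false_neg:.3f}")
```

Until rates like these are actually estimated for a method, any "match" it declares carries an unknown chance of being a false positive, which is precisely the objection being raised here.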
Does this mean that, even though it was not done correctly, there is not "something worth
looking at here"? Of course not. There IS something worth looking at here. The point is
simply that one cannot "match" James to Talpiot by this method. The issue is not whether
these results are interesting. It is not whether these results warrant further study. They
are and they do! The issue is: what can you say about James and Talpiot based upon this
data? Answer: not much.
It is, interestingly enough, the same problem as with the statistical analysis. Have they
miscalculated the statistics? No. They have not. Their mistake was to draw an
unwarranted inference from them. I am not saying that they miscalculated the statistics.
I am not saying that they improperly calibrated the microscope. I am not saying that this
is an unworthy subject of study. I am saying that at no point, so far as I can tell, does
their data yield for them the conclusions that they think it does. All of their hypotheses