[Fwd: Re: At last, a proof that RH is true]
- By the author's request, I'm forwarding the reply (without comment)
from Jeffrey Cook, the author of the Riemann Hypothesis paper discussed
below.
-------- Original Message --------
Subject: Re: At last, a proof that RH is true
Date: Sat, 10 Jan 2009 20:05:07 -0000
From: Jeffrey N Cook <antidyne@...>
To: Alan Eliasen <eliasen@...>
Hi, Alan. This is Jeff Cook, author of the paper and purported proof
in question. What you have found in my paper regarding u(x) is
valid: the function is misprinted there. On 5-21-08 I noticed that the
values of this function as written were not matching my recorded
worksheet values. I had it noted at
http://www.jeffreyncook.com/jeff%20cook%20updates.htm that there was
an error in the paper. I noted that equation (83) had a misprint.
However, in a hurry, I looked over the equation alongside my worksheet
and noticed that a few decimal points on lower values of x were not
matching what I originally had in the paper. I could not find the
problem immediately, ignored it for the time being, and simply changed
the values in the paper that did NOT match my worksheet. This was a
foolish mistake. On 5-28-08 I had it noted that there was no mistake
after all. But the mistake is still there.

Alan Eliasen's "brief points on a couple major places where [Cook's
purported proof is] incorrect":
"Note the x^x in eq. 84 above, which grows rapidly. From these meager
data points, the paper claims (eq. 88) that the maximum error
J[x]-Prime[x] *for all x* is on the order of 12. This is completely
wrong."

"[W]hen you get to even x=206, the error is -19.6006."

"[T]hen the error just starts to increase linearly (actually,
a little worse than linearly.)"

"Thus, the completely unsupported assertions about the error
of this prime-approximating function in eq. 87-89 are wrong, as they
seem to be based on incorrect ideas of how the functions actually
behave. (The leaps of faith from one unsupported equation to the
next in this chain are immense.) There is of course no reason to
believe the extrapolations to infinity, as the assertions are wrong
for even very small numbers."
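The blow-up Eliasen describes is easy to reproduce without special software. The sketch below is my own (not from either email; equation numbers follow Eliasen's summary later in this message) and uses Python's exact big integers so the x^x term never overflows 64-bit floats:

```python
# Sketch (not from the paper): check Eliasen's claim that the error
# J[x] - Prime[x] grows without bound, using exact integers so that
# x^x never overflows.
from math import log, pi

def nth_prime(n):
    """Return the n-th prime, 1-indexed, by trial division (fine for small n)."""
    primes = [2]
    c = 3
    while len(primes) < n:
        if all(c % p for p in primes if p * p <= c):
            primes.append(c)
        c += 2
    return primes[n - 1]

def J(x):
    beta = 6 * x + 1                  # eq. 74, rearranged
    r = 4 * x / 5 - 15                # eq. 82
    neg_A = 6 * x**x - 1              # -A[x], where A[x] := 1 - 6 x^x (eq. 84)
    u = log(pi) + log(beta * neg_A)   # eq. 83: ln(-pi * beta[x] * A[x])
    return r + u                      # eq. 86: J[x] := r[x] + u[x]

for x in (100, 206, 1400):
    print(x, round(J(x) - nth_prime(x), 4))
```

For x=206 this reports an error near -19.6, matching the figure quoted above, and the error keeps growing from there.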
u(x) = log_e (pi * i * y * x^2 * Beta(x) * A(x))
u(y) ~ log_e (-pi * x^2 * Beta(x) * A(x))
Lim: y = 1/pi to infinity
u(x) in the paper is derived from u(y). The only reason I keep
the pi in there is to clear the first few values. Eventually, the
value of pi * i * y = -1, in accordance with the rest of the paper.
However, u(x) as printed is not correct: it is missing a variable.
I thank you for going over my results in order to bring me back to
this. The problem is that there is a missing variable in (83) that I
lost in my notes. The values in my work are accurate (but not
reflected in my paper). I will dig out my old equations and fix this
in the next day or so. It is only a typo, a missing variable.
In any case, this equation is part of the second proof. While ugly
indeed, and needing to be fixed, the first proof still stands.
"In addition, the definitions of things like the log integral
(eq. 40) don't seem to be the definition of the log integral that I'm
familiar with. It appears to be a summation of discrete terms, and
not the integral! I may just not recognize it in this form, but it
doesn't seem right at all. If that's important in the paper, it's
another problem."
This is not a problem, and is commonly understood by those familiar
with the Prime Number Theorem. pi(N) ~ N / log(N) is the older and
obsolete form of the Prime Number Theorem. Now we express it as
pi(x) ~ Li(x). The derivative of log(x) is 1/x, so log(x) is the
integral of 1/x. The inverse of a derivative is an integral, and the
inverse of differentiation is integration. So long as N does not
equal -1, the integral of x^N is x^(N+1)/(N+1). But there is no
common function that can be used to express the integral of
1/log(x); Li(x) takes on the area under the graph of 1/log(x), that
is, the integral of the reciprocal of log(x) with respect to x, from
zero up to x. Integrals and sums are very closely linked.
A good reference apart from my paper is Derbyshire's book.
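The distinction between a discrete sum and the true integral can be made concrete. The sketch below is my own illustration (function names mine, taken from neither email): it approximates Li(x) as the integral of 1/log(t) from 2 to x by Simpson's rule and compares it with an exact prime count:

```python
# Sketch (my own illustration): Li(x) is a continuous integral, not a
# discrete sum. Approximate Li(x) = integral of dt/log(t) from 2 to x
# with composite Simpson's rule, and compare it with an exact prime
# count pi(x) from a sieve of Eratosthenes.
from math import log

def li_from_2(x, steps=100_000):
    """Offset logarithmic integral over [2, x], composite Simpson's rule."""
    if steps % 2:
        steps += 1                       # Simpson's rule needs an even step count
    h = (x - 2) / steps
    total = 1 / log(2) + 1 / log(x)      # endpoint terms
    for k in range(1, steps):
        total += (4 if k % 2 else 2) / log(2 + k * h)
    return total * h / 3

def prime_pi(n):
    """Exact prime-counting function pi(n)."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(sieve)

x = 10_000
print(prime_pi(x), round(li_from_2(x), 2))
```

For x = 10000 this gives pi(x) = 1229 against a Li(x) of roughly 1245; Li(x) overshoots pi(x) slightly at every x within reach of direct computation.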
Remember, there are two proofs of the RH in the paper. The error
involving the second will be fixed soon.
--- In firstname.lastname@example.org, Alan Eliasen <eliasen@...> wrote:
> Alex Petty wrote:
> > My colleague has settled the long outstanding question of the Riemann
> > Hypothesis and shown conclusively that all non-trivial zeros of the
> > zeta function do indeed have Real part one half, i.e. the Riemann
> > Hypothesis has been proven to be true. To review this proof, now in
> > pre-print, please follow the url:
> I don't want to spend a lot of time on this, so I'll be brief and
> point out a couple major places where it's incorrect. The paper claims
> some amazing things, for example, "These functions are put together to
> reveal a new function whose difference from the Prime Number sequence
> (2, 3, 5, 7, 11...) to infinity is zero..."
> This amazing function is very simple and can be summarized as the
> following:
> beta[x] := 6x + 1 (rearrangement of eq. 74)
> r[x] := 4x/5 - 15 (eq. 82)
> A[x] := 1 - 6x^x (eq. 84)
> u[x] := ln[-pi beta[x] A[x]] (eq. 83)
> J[x] := r[x] + u[x] (eq. 86)
> Where J[x] is supposed to be an approximation of the function that
> lists the primes (I'll call it Prime[x]), e.g. 2,3,5,7,11. That is,
> Prime[1]=2, Prime[2]=3, etc.
> From this, I could reproduce the values in table 27. No
> discrepancy there. The paper goes on to graph the first 140 terms of
> this to show how well J[x] matches Prime[x]. (I'll make a note here
> that when you're talking about the primes, looking only at the first
> terms of a sequence and extrapolating beyond that is a sure recipe
> for disaster.)
> But why only the first 140 terms? Probably because 64-bit IEEE-754
> floating-point hardware overflows after this! Note the x^x in eq. 84
> above, which grows rapidly. From these meager data points, the paper
> claims (eq. 88) that the maximum error J[x]-Prime[x] *for all x* is on
> the order of 12. This is completely wrong.
> If you use a real computing environment to evaluate larger values
> of x, the error does bounce around and stay smaller than 12 for very
> small values of x, but when you get to even x=206, the error is
> -19.6006. Then the error just starts to increase linearly (actually, a
> little worse than linearly.) At x=1400, the error is -398.109. At
> the error is -4626.66. At x=20000, the error is -10667.6. And the
> trend continues. That's a lot larger than 12, but the paper says
> "the maximum value of all error terms from 1 to infinity becomes
> clearly" 12. That is no longer clear.
> Thus, the completely unsupported assertions about the error of this
> prime-approximating function in eq. 87-89 are wrong, as they seem to
> be based on incorrect ideas of how the functions actually behave. (The
> leaps of faith from one unsupported equation to the next in this chain
> are immense.) There is of course no reason to believe the
> extrapolations to infinity, as the assertions are wrong for even very
> small numbers.
> I don't know if anything else in the paper relies on this, but it
> shows that the numbers weren't checked even to moderately-sized
> values, and the assertions about the primes are incorrect.
> In addition, the definitions of things like the log integral (eq. 40)
> don't seem to be the definition of the log integral that I'm familiar
> with. It appears to be a summation of discrete terms, and not the
> integral! I may just not recognize it in this form, but it doesn't
> seem right at all. If that's important in the paper, it's another
> problem.
> My projection of the veracity of claims about the Riemann Hypothesis
> is thus approximately epsilon. But this is probably sufficient for
> this list.
> Alan Eliasen | "Furious activity is no substitute
> eliasen@... | for understanding."
> http://futureboy.us/ | --H.H. Williams
Alan Eliasen | "Furious activity is no substitute
eliasen@... | for understanding."
http://futureboy.us/ | --H.H. Williams