> My colleague has settled the long outstanding question of Reimann's [sic]

> Hypothesis and shown conclusively that all non trivial zeros of the

> zeta function do indeed have Real part one half, ie. the hypothesis

> has been proven to be true. To review this proof, now in pre-print,

> please follow the url:

>

> http://www.singularics.com/science/mathematics/OnNeutronicFunctions.pdf

I don't want to spend a lot of time on this, so I'll be brief and

point out a couple of major places where it's incorrect. The paper claims

some amazing things, for example, "These functions are put together to

reveal a new function whose difference from the Prime Number Function

(2, 3, 5, 7, 11...) to infinity is zero..."

This amazing function is very simple and can be summarized as

follows:

beta[x] := 6x + 1 (rearrangement of eq. 74)

r[x] := 4x/5 - 15 (eq. 82)

A[x] := 1 - 6x^x (eq. 84)

u[x] := ln[-pi beta[x] A[x]] (eq. 83)

J[x] := r[x] + u[x] (eq. 86)

Where J[x] is supposed to be an approximation of the function that

lists the primes (I'll call it Prime[x]), e.g. 2, 3, 5, 7, 11. That is,

Prime[1]=2, Prime[2]=3, etc.
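For the record, here is a minimal Python sketch of J[x] as defined above.
The only wrinkle is floating point: 6x^x overflows a double around x=143,
so this splits ln[-pi beta[x] A[x]] into ln(pi) + ln(beta[x]) + ln(6x^x - 1)
and lets Python's arbitrary-precision integers carry the x^x term (the
split is my workaround for evaluating the formula, not anything from the
paper):

```python
import math

def J(x):
    """Eq. 86: J[x] = r[x] + u[x], per the definitions above."""
    r = 4 * x / 5 - 15                  # eq. 82
    beta = 6 * x + 1                    # eq. 74, rearranged
    neg_A = 6 * x**x - 1                # -A[x] from eq. 84, kept as an exact integer
    # eq. 83: u[x] = ln[-pi * beta[x] * A[x]], split into three logs so
    # math.log sees a (huge) exact integer instead of an overflowing float
    u = math.log(math.pi) + math.log(beta) + math.log(neg_A)
    return r + u

print(J(206))   # about 1257.40; Prime[206] = 1277, so the error is already about -19.6
```

With the log split, J(x) can be evaluated far past the paper's x=140
without any overflow.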

From this, I could reproduce the values in Table 27. No

discrepancy there. The paper goes on to graph the first 140 terms of

this to show how well J[x] matches Prime[x]. (I'll make a note, though,

that when you're talking about the primes, looking only at the first 140

terms of a sequence and extrapolating beyond that is a sure recipe for

disaster.)

But why only the first 140 terms? Probably because 64-bit IEEE-754

floating-point arithmetic overflows shortly after this! Note the x^x in eq. 84

above, which grows rapidly. From these meager data points, the paper

claims (eq. 88) that the maximum error J[x]-Prime[x] *for all x* is on

the order of 12. This is completely wrong.

If you use a real computing environment to evaluate larger numbers,

the error does bounce around and stay smaller than 12 for very small

values of x, but already by x=206 the error is -19.6006. And

then the error just starts to increase linearly (actually, a little

worse than linearly.) At x=1400, the error is -398.109. At x=10000,

the error is -4626.66. At x=20000, the error is -10667.6. And the

trend continues. That's a lot larger than 12, but the paper says that

"the maximum value of all error terms from 1 to infinity becomes very

clearly" 12. That is no longer clear.
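Anyone can reproduce this growth in a few lines of Python. The sketch
below re-implements J[x] with the logarithm split over exact integers so
x^x never overflows, and uses a simple sieve with Rosser's bound
p_n < n(ln n + ln ln n) to get the nth prime; the sieve and the bound are
my scaffolding, not anything from the paper:

```python
import math

def J(x):
    # eq. 86, with ln[-pi beta A] split into ln(pi) + ln(6x+1) + ln(6x^x - 1)
    # so the x^x term stays an exact integer instead of overflowing a float
    return (4 * x / 5 - 15) + math.log(math.pi) \
           + math.log(6 * x + 1) + math.log(6 * x**x - 1)

def first_n_primes(n):
    # sieve of Eratosthenes up to Rosser's bound (valid for n >= 6)
    limit = int(n * (math.log(n) + math.log(math.log(n)))) + 3
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, math.isqrt(limit) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(range(i*i, limit + 1, i)))
    return [i for i, flag in enumerate(sieve) if flag][:n]

primes = first_n_primes(20000)
for x in (206, 1400, 10000, 20000):
    # per the text: -19.6006, -398.109, -4626.66, -10667.6 respectively
    print(x, round(J(x) - primes[x - 1], 4))
```

The printed errors reproduce the values quoted above, and the roughly
linear growth is plain to see.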

Thus, the completely unsupported assertions about the error of this

prime-approximating function in eq. 87-89 are wrong, as they seem to be

based on incorrect ideas of how the functions actually behave. (And the

leaps of faith from one unsupported equation to the next in this chain

are immense.) There is of course no reason to believe the extrapolations

to infinity, as the assertions are wrong for even very small numbers.

I don't know if anything else in the paper relies on this, but it

shows that the numbers weren't checked even to moderately-sized values,

and that the paper's assertions about the primes are incorrect.

In addition, the definition of things like the log integral (eq. 40)

doesn't seem to match the definition of the log integral that I'm familiar

with. It appears to be a summation of discrete terms, and not the

integral! I may just not recognize it in this form, but it doesn't seem

right at all. If that's important in the paper, it's another problem.
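For comparison, the standard logarithmic integral is the (Cauchy

principal value) integral

    li(x) = integral from 0 to x of dt/ln(t),

not a discrete sum; the offset version Li(x) = li(x) - li(2) is the one

that usually appears in statements of the Prime Number Theorem.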

My projection of the veracity of claims about the Riemann Hypothesis

is thus approximately epsilon. But this is probably sufficient for

this list.

--

Alan Eliasen | "Furious activity is no substitute

eliasen@... | for understanding."

http://futureboy.us/ | --H.H. Williams