--- In firstname.lastname@example.org, "Kermit Rose" <kermit@p...> wrote:
> Let q_k = the next prime after p_k# + 1.
> Then q_k - p_k# is prime.
> It does seem quite plausible that this is true. So why has
> no one proven it yet?
> Can anyone construct a rating on how computationally useful this
Seems to me that Fortune's conjecture is only slightly better than
using Wilson's theorem to generate the primes, and a bit spotty and
meandering at that. I haven't looked at it too hard yet, but am I to
understand that one needs to find and verify the next prime higher
than a primorial in order to generate a prime about the same order of
magnitude as the largest prime factor of the primorial itself? If so,
all I can say is: that's harsh. It makes it less attractive than using
Wilson's theorem, I should think--but I may be talking out me bum. I
just wanted to chime in with one of my favorites and have no time to
do my Fortune homework at the moment.
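For concreteness, the quoted claim is easy to spot-check for the first few primorials. Here's a minimal pure-Python sketch (naive trial division; all the scaffolding and names below are mine, just for illustration):

```python
# Quick numeric check of the quoted claim (Fortune's conjecture) for the
# first few primorials, using naive trial division -- pure stdlib.
from itertools import count

def is_prime(m):
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    d = 3
    while d * d <= m:
        if m % d == 0:
            return False
        d += 2
    return True

def next_prime(m):
    # smallest prime strictly greater than m
    return next(k for k in count(m + 1) if is_prime(k))

fortunate = []
p, primorial = 1, 1
for _ in range(8):
    p = next_prime(p)                # p_k
    primorial *= p                   # p_k#, the k-th primorial
    q = next_prime(primorial + 1)    # q_k = next prime after p_k# + 1
    fortunate.append(q - primorial)  # q_k - p_k#, conjectured prime

print(fortunate)                     # [3, 5, 7, 13, 23, 17, 19, 23] -- all prime
```

Every difference in the list is indeed prime, as the conjecture predicts.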
"Computationally useful" is subjective and subject to change.
Since we can pretty much make any scheme we currently know of
"computationally useless" simply by choosing numbers large enough, I'd
like to focus on something I find neat and beautiful. It's not a
closed formula but an iterative one; still, it is deterministic,
simple, and spits out all of the primes from 5 on, in increasing
order, with no false positives.
It is based on Wolstenholme's theorem: for every prime p >= 5, the
numerator of the harmonic number H_(p-1) is divisible by p^2.*
* MathWorld does not specifically say "there are no Wolstenholme
pseudoprimes", but I'm pretty sure the theorem is directly intertwined
with, and essentially equivalent to, Wilson's theorem, which is an
if-and-only-if type test. Maybe someone here already knows the answer
to this question.
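The footnote's question is partly empirical, so here's a quick numeric check with exact rational arithmetic (pure stdlib; the helper names are my own): the divisibility holds for small primes >= 5 and fails for every small odd composite.

```python
# For primes p >= 5, p^2 divides the numerator of H_{p-1} (Wolstenholme);
# no small odd composite passes the same test, i.e. no small pseudoprimes.
from fractions import Fraction

def harmonic(n):
    # H_n = 1 + 1/2 + ... + 1/n as an exact fraction
    return sum(Fraction(1, k) for k in range(1, n + 1))

def wolstenholme_passes(m):
    return harmonic(m - 1).numerator % m**2 == 0

primes = [5, 7, 11, 13, 17, 19]
print(all(wolstenholme_passes(p) for p in primes))        # True

odd_composites = [m for m in range(9, 60, 2)
                  if any(m % d == 0 for d in range(3, m))]
print(any(wolstenholme_passes(m) for m in odd_composites))  # False
```

So below 60, at least, the test is exact; whether that holds for all composites is exactly the open question above.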
If one could calculate and store the nth harmonic number H_n for an
arbitrarily large n, the iterative formula could be started from
that point and begin spitting out the primes > n. So, finding a
closed formula for the nth harmonic number would essentially give one
the ability to start anywhere, with due consideration for the storage
and processing power needed to operate on fully stored giganto
numbers.
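To illustrate the "start anywhere" idea, here's a hedged Python sketch (function names are mine; fractions.Fraction keeps the arithmetic exact): given a stored pair (n, H_n) with n even, the recurrence picks up from n and emits the primes above it.

```python
from fractions import Fraction

def primes_after(n, hn, limit):
    # Resume the iteration from a stored pair (n, H_n), n even, collecting
    # the primes p with n < p <= limit via the Wolstenholme divisibility test.
    found = []
    while n + 1 <= limit:
        if hn.numerator % (n + 1) ** 2 == 0:
            found.append(n + 1)
        n += 2
        hn += Fraction(2 * n - 1, n * n - n)  # 1/(n-1) + 1/n, post-increment
    return found

# Stand-in for a "stored" H_100; a closed formula would replace this sum.
h100 = sum(Fraction(1, k) for k in range(1, 101))
print(primes_after(100, h100, 120))   # [101, 103, 107, 109, 113]
```

The point is that the sum building h100 is the only part that depends on history; everything after it is local to the interval being scanned.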
This is also essentially useless, like using Wilson's Theorem directly
to test every odd number, but it's a little faster and requires less
storage (I think). I bring it to your attention, because it's neat!
Here's Mathematica code that includes a factorization step to verify
the absence of pseudoprimes, though I'm pretty sure it should be
relatively easy to prove that there are no Wolstenholme pseudoprimes.
It's optimized to avoid testing the even numbers, and one could break
it into submodules to also skip the numbers = 0 mod 3, = 0 mod
5, etc., to whatever degree is practical (not that any of this is
practical to begin with):
dontstop = 1; n = 2; hn = 3/2;  (* hn holds H_n exactly *)
While[dontstop == 1,
 (* Wolstenholme test: does (n+1)^2 divide the numerator of H_n? *)
 If[Mod[Numerator[hn], (n + 1)^2] == 0,
  Print[Flatten[FactorInteger[n + 1]]]];  (* factor as a pseudoprime check *)
 n += 2;
 hn += (2 n - 1)/(n^2 - n)  (* = 1/(n-1) + 1/n with the incremented n *)
]
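For those without Mathematica, here's a near line-for-line Python rendering of the same loop (my own names; fractions.Fraction stands in for Mathematica's exact rationals):

```python
from fractions import Fraction

def wolstenholme_primes(limit):
    """Yield the primes 5 <= p <= limit by testing whether p^2 divides
    the numerator of H_{p-1}; even candidates are skipped, as above."""
    n = 2
    hn = Fraction(3, 2)                      # hn = H_2
    while n + 1 <= limit:
        if hn.numerator % (n + 1) ** 2 == 0:
            yield n + 1                      # Wolstenholme test passed
        n += 2
        # after the increment, (2n - 1)/(n^2 - n) = 1/(n-1) + 1/n,
        # so hn is again H_n
        hn += Fraction(2 * n - 1, n * n - n)

print(list(wolstenholme_primes(30)))         # [5, 7, 11, 13, 17, 19, 23, 29]
```

Note that 2 and 3 never appear: the loop starts its testing at p = 3, which fails the divisibility (9 does not divide 3, the numerator of H_2), so the output begins at 5.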
Here's a cute problem, for which I think the answer is probably
"never" unless the factor x is ludicrously large and/or the factor y
is ludicrously small.
Assume one starts this iterative algorithm on today's best machine,
that storage never becomes a problem, and that the state of the
iteration is transferred to, and continues to run on, the best
available machine as machines improve in speed by a factor of x per
year. Given that the size of the smallest entry on the top-5000 list
grows by a factor of y each year: how long, if ever, would this have
to run in order to start spitting out primes large enough to make
the list?
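A back-of-envelope sketch of the race, with every constant below (r, x, y, d0) made up purely for illustration: total candidates tested grows geometrically in t, so the digit count of the primes being emitted grows only linearly in t, while the digit count of the list cutoff grows geometrically.

```python
import math

def digits_reached(t, r=1e16, x=2.0):
    # Total candidates tested after t years if throughput improves by a
    # factor x per year, starting at r candidates/year (hypothetical);
    # the primes emitted at time t have roughly this many digits.
    total = r * (x ** t - 1) / (x - 1)
    return math.log10(total)

def digits_needed(t, d0=4e5, y=1.1):
    # Digits of the smallest top-5000 entry if it grows by a factor y
    # per year (d0 = a hypothetical present-day cutoff size).
    return d0 * y ** t

# Emitted primes gain digits linearly in t; the cutoff gains digits
# exponentially, so for any y > 1 the gap only widens.
for t in (10, 50, 100):
    print(t, round(digits_reached(t)), round(digits_needed(t)))
```

Under these toy assumptions the answer is indeed "never" for any y > 1, matching the guess above: only y <= 1 (a stagnant list) or an absurdly large x could change that.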