Examination of Feitsma's ideas for finding all base-2 Fermat pseudoprimes <2^64
- --- In firstname.lastname@example.org, David Cleaver <wraithx@...> wrote:
>--ok, so if I (Warren D. Smith) understand aright, Jan Feitsma did eventually describe his algorithm in a way other than just providing his program (i.e. he stated the ideas behind it, which actually look a lot less complicated than I had feared -- I had thought, based on no evidence, that it was going to be a backtrack search for the factorizations of the pseudoprimes). Then David Cleaver & Jeff Gilchrist coded their own re-implementation of Feitsma's ideas and ran it, with the same output as Feitsma. Thus the computation(s) were correct -- but the question is, were Feitsma's ideas correct?
> On 3/11/2013 8:54 PM, WarrenS wrote:
> >> I have a hunch that BPSW may have been tested up to 2^64, if Jan Feitsma,
> >> and followers, have done their stuff aright?
> > --My claims about infallibility up to 2^64 rely on a database compiled by Jan
> > Feitsma of all strong psp(2) up to 2^64. Actually all fermat(2) pseudoprimes
> > up to 2^64.
> > The question is, is Feitsma correct? Yes in the sense his dataset agrees with
> > previous smaller (but still huge) computations by others. Maybe No, in the
> > sense he might have had a bug that only kicks in above, say, 0.8 * 2^64.
> > Until somebody redoes his computation independently there will remain room
> > for doubt. Nobody wants to since it could take a CPU-year or more, plus a
> > considerable amount of intelligence to rediscover his or comparable ideas.
> Actually, Jeff Gilchrist and I performed the double check on the data. I wrote
> the majority of the double-checking code, which was independent of Jan's code
> since it was only based on my understanding of his algorithms. The majority of
> the double-check computations were performed by Jeff Gilchrist and the
> "big-iron" he had access to. So, the database based on his algorithm has been
> double checked. You can see Jan's latest post about this on the mersenneforum here:
> The hope is that somebody, perhaps someone on this list, will go through his
> algorithm and verify that _that_ is correct. You can find all the details in
> the links Jan provided in the above post.
> -David C.
So that is the status?
In the below I will
(1) examine Feitsma's ideas. I agree they are correct.
(2) suggest a new, simpler approach.
Examining Feitsma's web page http://www.janfeitsma.nl/math/psp2/index
(and you also need to examine the hyperlinks he provides), here are the ideas he uses.
His goal is to enumerate all composite n with 1<n<2^64 and n | (2^(n-1) - 1).
For example n=341=11*31 since 341 divides 2^340 - 1.
Let rho(q) denote the order of 2 in the multiplicative group modulo q.
That is, rho(q) is the least m such that q divides ((2^m)-1).
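To make the definition concrete, here is a tiny brute-force sketch (in Python; the function name and the code are mine, not Feitsma's) that recovers rho and checks the n=341 example:

```python
def rho(q):
    """Least m >= 1 with q | 2^m - 1, i.e. the order of 2 mod odd q > 1.
    Plain brute force -- illustration only, not Feitsma's table-builder."""
    m, x = 1, 2 % q
    while x != 1:
        x = (2 * x) % q
        m += 1
    return m

# 341 = 11*31 is a base-2 Fermat pseudoprime: rho(341) = 10 divides 340.
assert rho(341) == 10
assert (341 - 1) % rho(341) == 0
assert pow(2, 340, 341) == 1
```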
Then Feitsma's goal equivalently is to enumerate composite n with
1<n<2^64 and rho(n)|(n-1).
Feitsma claims that
LEMMA 1: if n's prime factorization is
n = p1^e1 * p2^e2 * ... * pk^ek
then rho(n) = LCM( rho(p1^e1), rho(p2^e2), ..., rho(pk^ek) ).
LEMMA 2: if p is prime then rho(p) divides p-1, by Fermat's little theorem; hence rho(p) is
efficiently computable by factoring p-1 and trying all divisors.
LEMMA 3: if q=p^e is a prime power then rho(p) divides rho(q); hence rho(q) is computable
by brute force, trying multiples of rho(p).
MY PROOFS OF THOSE THREE LEMMAS:
Lemma 2: just Fermat's little theorem.
Lemma 3: p^e divides 2^rho(q) - 1, hence so does p (write the number in radix p: its final digit is 0), and therefore rho(p) divides rho(q).
Lemma 1: writing numbers in a mixed-radix representation using the primes p1 (e1 times),
p2 (e2 times) etc as radices, and considering the final ej-digit-block (in an ordering
where pj is the ej last radices) shows that rho(pj^ej) divides rho(n) for each j,
and also shows rho(pj) divides rho(n) if we instead consider the final digit alone.
Therefore, LCM( rho(p1^e1), rho(p2^e2), ..., rho(pk^ek) ) divides rho(n).
We need to show not mere division, but actual equality.
We know 2^rho(pj^ej) is congruent to 1 mod pj^ej, hence 2^LCM is congruent to 1 mod pj^ej
for every j. Hence by the Chinese remainder theorem, 2^LCM is congruent to 1 mod n,
so rho(n) divides the LCM, and combined with the above we get equality.
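The three lemmas translate directly into a small computation of rho from a factorization. A sketch assuming small inputs (all names here are mine; real code would need fast order-finding, not an O(p) divisor scan):

```python
from math import gcd

def rho_prime(p):
    """rho(p) for an odd prime p: by Lemma 2 it divides p-1."""
    for d in sorted(d for d in range(1, p) if (p - 1) % d == 0):
        if pow(2, d, p) == 1:
            return d

def rho_prime_power(p, e):
    """rho(p^e): by Lemma 3 it is a multiple of rho(p); brute-force the multiplier."""
    r, q = rho_prime(p), p ** e
    m = r
    while pow(2, m, q) != 1:
        m += r
    return m

def rho_factored(factorization):
    """Lemma 1: rho(n) is the LCM over the prime-power components of n."""
    out = 1
    for p, e in factorization:
        r = rho_prime_power(p, e)
        out = out * r // gcd(out, r)
    return out

# 341 = 11 * 31: rho(11) = 10, rho(31) = 5, so rho(341) = lcm(10, 5) = 10.
assert rho_factored([(11, 1), (31, 1)]) == 10
```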
Feitsma uses the above to precompute a table of rho(q) for all odd q<2^32.
Feitsma divides candidate n into two categories:
category S: all prime factors p of n obey p<=sqrt(n)
category E: other.
For category S:
For all odd s with 1<s*q<2^64 and 1<q<2^32 and gcd(s,q)=1,
Feitsma determines whether n=q*s is
a Fermat pseudoprime. It can only be a Fermat pseudoprime if s is congruent to
q^(-1) mod rho(q), because rho(q) divides rho(n), which divides n-1.
For any given q this would allow us to search quickly
by computing L=q^(-1) mod rho(q),
then looping thru s = L, L+rho(q), L+2*rho(q), ... trying them all.
This observation already is a big time-saver.
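A toy version of that inner loop (entirely my own sketch: a naive order computation, and a much smaller bound standing in for 2^64; if gcd(q, rho(q)) > 1 then no admissible s exists at all):

```python
from math import gcd

def order2(m):
    """Multiplicative order of 2 modulo odd m > 1 (brute force)."""
    k, x = 1, 2 % m
    while x != 1:
        x = (2 * x) % m
        k += 1
    return k

def category_S_hits(q, limit=2**20):
    """For fixed odd q > 1, try only s = L, L+rho(q), L+2*rho(q), ... where
    L = q^(-1) mod rho(q); any Fermat psp n = q*s must have s in this
    progression.  'limit' is a small stand-in for 2^64."""
    r = order2(q)
    if gcd(q, r) != 1:
        return []              # q*s = 1 (mod r) has no solution
    L = pow(q, -1, r)          # modular inverse: needs Python 3.8+
    s = L if L > 1 else L + r  # require s > 1
    hits = []
    while q * s < limit:
        n = q * s
        if gcd(s, q) == 1 and pow(2, n - 1, n) == 1:
            hits.append(n)
        s += r
    return hits

# q = 31: the progression quickly finds 341 = 31 * 11.
assert 341 in category_S_hits(31)
```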
Feitsma then gets more time-savings by claiming
"we need only consider q<=x^(2/3)"
unfortunately without saying what the hell "x" is, so I have no idea what he is
talking about? But reading other pages, I deduce he meant x=2^64, so he meant
"we need only consider q<=6981463658331."
He also says "we need only consider s with 1<s<=q."
JUSTIFICATION OF THOSE CLAIMS:
Obviously, we can factor n<2^64 into two factors, call them q and s, with 1<s<=q.
If q>2^32 then q must be composite, since all its prime factors are below 2^32 by assumption.
If q>6981463658331 then s=n/q<=2642245 and
q must have a factor f<=2642245.
Hence we can rename f*s as the new q and q/f as the new s,
in which case the new q obeys 2^32<q<=6981463658331
and hence 1<s<=q still is true.
Feitsma notes his enumeration procedure still is inefficient in the sense that it can enumerate the same pseudoprime many times, but he lives with that.
Feitsma also worries that his category S algorithm could screw up if n is not squarefree,
i.e. contains a repeated prime factor. However, he argues that for such n
the repeated factor must be a Wieferich prime. Others have done heavy searching for
Wieferich primes already, proving the only ones below 10^17 are 1093 and 3511.
Therefore, this is not a problem: any further Wieferich primes exceed 10^17, so their
squares exceed 2^64 and they cannot be repeated factors of n<2^64. Feitsma deals
specially with 1093 and 3511.
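The Wieferich condition is cheap to check directly (a one-liner sketch of my own; the bound 10^17 comes from the searches cited above, not from this code):

```python
def is_wieferich(p):
    """p is Wieferich iff 2^(p-1) = 1 (mod p^2); any repeated prime factor
    of a base-2 Fermat pseudoprime must be of this form."""
    return pow(2, p - 1, p * p) == 1

assert is_wieferich(1093) and is_wieferich(3511)   # the two known examples
assert not any(is_wieferich(p) for p in (3, 5, 7, 11, 13))
```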
For category E:
Each pseudoprime n in this category has a large prime divisor p and n=p*s with 1<s<p.
We know (for valid n) that rho(n) = LCM( rho(p), rho(s) ) divides n-1,
and hence rho(p) and rho(s) each divide n-1.
Equivalently, n is congruent to 1 mod rho(n), and hence also
to 1 mod rho(p) and to 1 mod rho(s).
We also know that p is congruent to 1 mod rho(p).
Therefore s must be congruent to 1 = p^(-1) mod rho(p).
If rho(p) is odd then, since s is odd and s is congruent to 1 mod rho(p),
we conclude more strongly that s is congruent to 1 mod 2*rho(p).
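Under these congruences, a toy pair-search looks like this (entirely my own sketch: the primes with a given r are found by brute force here, whereas Feitsma reads them off the Mersenne factor tables, and the small 'limit' stands in for 2^64):

```python
def order2(m):
    """Multiplicative order of 2 modulo odd m > 1 (brute force)."""
    k, x = 1, 2 % m
    while x != 1:
        x = (2 * x) % m
        k += 1
    return k

def is_prime(m):
    """Naive trial division -- fine for this toy range."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def category_E_hits(r, limit=2**16):
    """Base-2 Fermat psps n = p*s < limit whose large prime factor p has
    rho(p) == r, using p = 1 (mod r) and s = 1 (mod r), with the
    strengthening s = 1 (mod 2r) when r is odd."""
    step = 2 * r if r % 2 else r        # also keeps s odd
    hits = set()
    p = r + 1
    while p * (step + 1) < limit:
        if p % 2 and is_prime(p) and order2(p) == r:
            s = step + 1                # smallest admissible s > 1
            while s < p and p * s < limit:
                n = p * s
                if pow(2, n - 1, n) == 1:
                    hits.add(n)
                s += step
        p += r
    return hits

# r = 5: p = 31 (rho(31) = 5), s = 11 gives the pseudoprime 341.
assert 341 in category_E_hits(5)
```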
Feitsma's plan is simply to examine all possible (s,rho(p)) pairs finding every eligible prime
p (with p*s<2^64) for that rho(p). In order to do that, he uses the fact that every MERSENNE NUMBER 2^r - 1 has been fully factored for r<859 because of
an ENORMOUS amount of work by others over many decades. He consults these factor tables. From them he knows every prime p in the universe enjoying r=rho(p)<859.
He also knows every prime p for many other r also, because all small factors of
Mersenne numbers are known even if the full factorization is not, and also since many other Mersennes 2^r -1 have been fully factored.
Even if we naively were to cut at r=859, that alone would allow Feitsma to restrict attention to primes p with p<(2^64)/859<2^54.26 (since the cofactor s is congruent to 1 mod rho(p) with s>1, forcing s>=rho(p)+1>859).
Note that all primes p with p<10^18 have been found by sieving, and
2^54.26 < 2.2*10^16, so that it ought to be feasible to find all such p
if you are willing to spend many years of computing.
Feitsma says he spent many CPU-decades of computing using assembly language coding to get high speed.
So in conclusion, I agree Feitsma's ideas are valid.
CAN WE DO BETTER?
My alternative, simpler idea for finding all fermat(2) pseudoprimes<2^64 is this.
Enumerate all composite integers n<2^64 in fully prime-factored form by exhaustive backtracking. However, "prune" the backtrack tree using intelligence, i.e. using
theorems of the form "this whole subtree cannot contain a fermat(2) pseudoprime,
so I will not search it."
Because of the Wieferich primes argument we can restrict attention to squarefree n only, which simplifies our lives (at least if we consider 3511 and 1093 specially).
At some point in this recursive tree-search, you will have found some factorization F of
some divisor of n. You want a test which will enable you to proclaim: "every odd n that is a multiple of F with 1<n<2^64, cannot be a Fermat(2) pseudoprime."
What should that test be?
Well, we know n needs to be congruent to 1 mod rho(F). We know rho(F).
We know n=s*F<2^64. After computing G=F^(-1) mod rho(F), we know
s must be congruent to G mod rho(F).
So if floor((2^64)/F)<=G we can instantly prune the search.
We also know F-1 is divisible by rho(s). Using precomputed tables we could
sometimes rapidly know that this cannot be true for any small-enough s, hence prune.
If these tests fail to prune, then we recurse, adjoining a new prime p to our factorization F,
and continue on.
We only adjoin p such that F-1 is divisible by rho(p).
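The first pruning rule might look like this (my own sketch; `order2` is brute force here, and in a real search `limit` really would be 2^64):

```python
from math import gcd

def order2(m):
    """Multiplicative order of 2 modulo odd m > 1 (brute force)."""
    k, x = 1, 2 % m
    while x != 1:
        x = (2 * x) % m
        k += 1
    return k

def prune_subtree(F, limit=2**64):
    """True if no odd multiple n = s*F < limit can be a Fermat(2) psp,
    judged only by the congruence s = F^(-1) (mod rho(F))."""
    r = order2(F)
    if gcd(F, r) != 1:
        return True                # s*F = 1 (mod r) is unsolvable
    G = pow(F, -1, r)              # smallest admissible s is G (mod r); 3.8+
    return limit // F <= G

# 9 = 3^2 is prunable: 9s = 3s (mod 6) is never 1 (mod 6) = 1 (mod rho(9)).
assert prune_subtree(9)
# 31 is not prunable: s = 11 extends it to the pseudoprime 341.
assert not prune_subtree(31)
```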
- On 3/13/2013 5:47 AM, Phil Carmody wrote:
> No I couldn't. That's so overly verbose and redundant it makes me twitch, I can
> barely bring myself to repeat it!
> "lift(Mod(p*s, lift(znorder(Mod(2,s))))) == 1" is just
> "p*s % znorder(Mod(2,s)) == 1"
> Having 3 exit conditions to the loop is overkill too.
Apologies, my Pari/GP script-fu is definitely not on par with DJB's. I just
wanted to provide a script so that people could plug and chug an r,p pair to see
what psp's would be generated. Also, I didn't know that the % (mod) operator
still worked in Pari. I thought everything had to be done with Mod(). Thanks
for that insight.
> The latter worries me a bit, as it might imply wasted effort. I'm trying to
> picture how these duplicates arise. Given an n, the maximal prime factor p|n is
> uniquely defined, and r as order_2(p) is uniquely defined. Therefore n can only
> appear with pair (r,p)?
And apologies here too, I mis-remembered a statement from his Category S page
and mis-spoke by applying it to the Category E psp's.
On 3/14/2013 11:02 AM, WarrenS wrote:
> 2. Consulting the Cunningham project pages,
> every Mersenne-form number 2^r - 1 now is fully factored if
> r<929. Apparently the first two open cases are r=929 and 947
> yielding 214 and 217 digit numbers to factor.
An update here: M929 has been factored, and a group of people have already
started factoring M947. You can find the factor for M929 here:
And, you can see who is factoring which Cunningham number here:
And more importantly, you can find all known*1 factors for all important*2
numbers in the online factor database here:
Once there, you can type in 2^929-1, and it will show you all the factors of
that number and that it is Fully Factored (FF). Currently, you can type in
2^947-1 and see that it is a CF, composite number, with factors known, but not
yet fully factored.
*1 = All known factors that have been stored into the factordb.
*2 = All numbers that people are interested in and store in the factordb.
Also, the factordb stores prime numbers. Below 300 digits it will just
prove the number prime, and above that it will accept Primo certificates and
verify them locally.