## Re: Does such a factoring algorithm exist?

Message 1 of 7, May 3, 2004
> Pollard's rho algorithm is not deterministic;
> it is probabilistic and could fail even with small numbers.

Sigh. That's what the original poster said.

I won't claim to be an expert but I'm not aware of any algorithm that
fits the bill.

X iterations of Rho, P+-1 and ECM won't give you a deterministic lower
bound. SQUFOF, Schnorr-Lenstra, Pollard-Strassen, CFRAC, QS, and
G/SNFS all depend on the size of N. Fermat's method and Lehman's
method depend on special properties of p and q in relation to each
other.

-Don Leclair

> Pollard's rho algorithm is not deterministic;
>
> It is probabilistic and could fail even with small numbers.
>
> It should not be part of discussions about
> "proven lower bounds"
>
> Milton L. Brown
>
>
> -----Original Message-----
> From: Décio Luiz Gazzoni Filho <decio@r...>
> Sent: May 2, 2004 9:17 PM
> Subject: [PrimeNumbers] Does such a factoring algorithm exist?
>
> Is there a factoring algorithm which depends on the size of the
> factors (and not the integer being factored itself), and is faster
> than trial division, yet upon failure it provides a proven lower
> bound on the factors?
>
> With Pollard rho, for instance, if I perform O(t) iterations of the
> method unsuccessfully, this is a clue that the integer in question
> is not likely to have prime factors < t^2, but for my application I
> need a proof of that assertion. If I trial divide with primes up to
> t without finding anything, then this is proof that the integer
> being trial divided has no factors up to t -- thus trial division is
> an algorithm that I could use, except that it's slow.
>
> So, anyone knows of an algorithm that fits the bill?
>
> Décio
>
>
>
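The proof-by-exhaustion property Décio describes for trial division can be made concrete in a short sketch (the function name and example values are mine, not from the thread): if no divisor up to t is found, that run is itself a proof that no prime factor <= t exists.

```python
def trial_divide_bound(N, t):
    """Return (d, None) if a divisor d <= t of N is found; otherwise
    (None, t), certifying that N has no prime factor <= t."""
    d = 2
    while d <= t:
        if N % d == 0:
            return d, None
        d += 1 if d == 2 else 2  # after 2, test odd candidates only
    return None, t

# 1000003 is the smallest prime above 10^6, so the search up to 1000
# (just past its square root) fails and yields the proven bound.
f, bound = trial_divide_bound(10**6 + 3, 1000)  # -> (None, 1000)
```

This is exactly the guarantee the faster probabilistic methods lack: the failure case is itself the certificate.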
Message 1 of 7, May 3, 2004
Don Leclair wrote:
> > Pollard's rho algorithm is not deterministic;
> > it is probabilistic and could fail even with small numbers.
>
> Sigh. That's what the original poster said.
>
> I won't claim to be an expert but I'm not aware of any algorithm
> that fits the bill.
>
> X iterations of Rho, P+-1 and ECM won't give you a deterministic
> lower bound. SQUFOF, Schnorr-Lenstra, Pollard-Strassen, CFRAC, QS,
> and G/SNFS all depend on the size of N. Fermat's method and Lehman's
> method depend on special properties of p and q in relation to each
> other.

p+-1 for a given pair of B1 and B2 will have a lower bound, as there
will be some smallest prime that doesn't fit the smoothness criterion
of the algorithm. However, for large B1 and B2 it may be difficult to
determine what this is.

Andrew
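
Andrew's point can be illustrated with a sketch (my own helper names, stage 1 only, tiny bounds): for a stage-1 bound B1, the p-1 method is guaranteed to find any prime factor p whose p-1 is B1-powersmooth, so the smallest prime failing that condition is the lower bound in question.

```python
def is_powersmooth(n, B1):
    """True if every prime power dividing n is <= B1."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            pp = 1
            while n % d == 0:
                n //= d
                pp *= d
            if pp > B1:
                return False
        d += 1
    return n <= B1  # any leftover factor is prime

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

def smallest_missed_prime(B1):
    """Smallest prime p with p-1 not B1-powersmooth: the proven bound
    below which stage-1 p-1 with bound B1 cannot miss a factor."""
    q = 2
    while True:
        if is_prime(q) and not is_powersmooth(q - 1, B1):
            return q
        q += 1

smallest_missed_prime(10)  # 17, since 16 = 2^4 > 10
```

As Andrew notes, for realistic B1 and B2 this search itself becomes the hard part.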
Message 1 of 7, May 4, 2004
> Is there a factoring algorithm which depends on the size of
> the factors (and
> not the integer being factored itself), and is faster than
> trial division,
> yet upon failure it provides a proven lower bound on the factors?

(Remark: almost all factoring algorithms have a superpolynomial
dependency on the size of N, the integer being factored. I can't,
off hand, think of any that do not. Some, such as ECM and NFS,
have subexponential dependency; others, such as trial division and
SQUFOF, are exponential.)

Yes, if you regard Dan Bernstein's product-tree method of finding
small prime divisors as being different from "trial division". It's
certainly faster asymptotically than the algorithm conventionally
known as trial division.
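
A hedged sketch of the product-tree idea (my simplification, not Bernstein's exact algorithm): multiply the candidate primes together in a balanced tree, then push N back down the tree with modular reductions, learning N mod p for every leaf prime p at once instead of dividing by each prime separately.

```python
def product_tree(leaves):
    """Rows of pairwise products, from the leaves up to one root."""
    tree = [leaves]
    while len(tree[-1]) > 1:
        row = tree[-1]
        tree.append([row[i] * row[i + 1] if i + 1 < len(row) else row[i]
                     for i in range(0, len(row), 2)])
    return tree

def remainder_tree(N, tree):
    """Return [N mod p for each leaf p], reducing level by level."""
    rems = [N % tree[-1][0]]
    for row in reversed(tree[:-1]):
        rems = [rems[i // 2] % x for i, x in enumerate(row)]
    return rems

primes = [2, 3, 5, 7, 11, 13]
tree = product_tree(primes)            # root is 30030 = 13#
rs = remainder_tree(1001, tree)
hits = [p for p, r in zip(primes, rs) if r == 0]  # 1001 = 7 * 11 * 13
```

With fast multiplication, working on balanced operands is what gives the asymptotic win over one-prime-at-a-time trial division.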

Another technique: precompute the primorial of a prime p and
evaluate g = gcd(N, p#) repeatedly, replacing N by N/g until g = 1.
p is now your lower bound. Pull out the factors from the
gcds with trial division (or recursive application of this
algorithm).

Note this is not faster than trial division for a single N,
because of the precomputation, but will run faster if you
amortize over many N.
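
Paul's primorial-gcd recipe, sketched with illustrative helper names (the slow primorial construction here is exactly the precomputation he says must be amortized):

```python
from math import gcd

def primorial(p):
    """p# = product of all primes <= p (naive version; in practice
    this precomputation would use a sieve and a product tree)."""
    P, n = 1, 2
    while n <= p:
        if all(n % d for d in range(2, int(n**0.5) + 1)):
            P *= n
        n += 1
    return P

def strip_small_factors(N, P):
    """Divide out gcd(N, P) until it is 1. The result provably has
    no prime factor <= p, so p is the lower bound Paul describes."""
    while True:
        g = gcd(N, P)
        if g == 1:
            return N
        N //= g  # repeated gcds also remove higher multiplicities

P = primorial(13)                        # 2*3*5*7*11*13 = 30030
cofactor = strip_small_factors(2**4 * 3 * 101, P)  # -> 101
```

Each gcd costs far less than dividing by every prime individually, which is where the amortized speedup over trial division comes from.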

There may be other techniques.

Paul
Message 1 of 7, May 4, 2004
Paul Leyland wrote:
> > Is there a factoring algorithm which depends on the size of
> > the factors (and
> > not the integer being factored itself), and is faster than
> > trial division,
> > yet upon failure it provides a proven lower bound on the factors?
>
> Yes, if you regard Dan Bernstein's product-tree method of finding
> small prime divisors as being different from "trial division". It's
> certainly faster asymptotically than the algorithm conventionally
> known as trial division.

I have not heard of Bernstein's product-tree before but it sounds like my
TreeSieve:
http://groups.yahoo.com/group/primeform/message/4231
The GMP-based implementation is 12 times faster than individual trial
factoring by primeform for 50000-digit numbers, but slow for small
numbers. This version can only search for factors up to 2^32.

--
Jens Kruse Andersen
Message 1 of 7, May 4, 2004
The question is how it can prove a lower bound for the factors of a
number. I imagine this would be easy for numbers such as repunits
where factors are restricted to special forms, but for other numbers
it might not work very well because of the astonishing number of
primes.

For example, for the repunits R311 (not to be confused with R317),
R509 and R557, whose status is "Composite But No Factor Known", this
method seems very promising, because what I once read about repunits
suggests that these numbers must have two very large factors. If
primality tests for numbers of the correct form are available, it
might solve the problem well, especially for R311, which mystified me
on seeing it in a table of repunits.

The problem is how to test the primality of the numbers of the
appropriate form, though the effect of not doing this is most likely
to be a mere waste of time rather than errors, as for instance with
the large 1187-digit cofactor of F12.
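
The "restricted forms" point above can be sketched (my illustration, with made-up helper names): a prime q dividing the repunit R_p = (10^p - 1)/9, with p prime and q != 3, has 10^p ≡ 1 (mod q), so the order of 10 mod q is exactly p, forcing q ≡ 1 (mod p); since q is odd, in fact q ≡ 1 (mod 2p). Trial division for repunits therefore only needs candidates of the form 2kp + 1.

```python
def repunit(p):
    """R_p = 111...1 with p ones."""
    return (10**p - 1) // 9

def small_repunit_factors(p, kmax):
    """Try only candidates q = 2kp + 1; each hit is a proven factor
    of R_p, and exhausting k proves a lower bound of that form."""
    R = repunit(p)
    hits = []
    for k in range(1, kmax + 1):
        q = 2 * k * p + 1
        if R % q == 0:
            hits.append(q)
    return hits

small_repunit_factors(7, 400)  # R_7 = 1111111 = 239 * 4649
```

This is why the approach looks so much more promising for repunits than for arbitrary integers: only one residue class per modulus 2p needs testing.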