
Re: Does such a factoring algorithm exist?

  • andrew_j_walker
    Message 1 of 7 , May 3, 2004
      --- In primenumbers@yahoogroups.com, "dleclair55" <dleclair55@y...>
      wrote:
      > > Pollard's rho algorithm is not deterministic;
      > > it is probabilistic and could fail even with small numbers.
      >
      > Sigh. That's what the original poster said.
      >
      > I won't claim to be an expert but I'm not aware of any algorithm
      > that fits the bill.
      >
      > X iterations of Rho, P+-1 and ECM won't give you a deterministic
      > lower bound. SQUFOF, Schnorr-Lenstra, Pollard-Strassen, CFRAC, QS, and
      > G/SNFS all depend on the size of N. Fermat's method and Lehman's
      > method depend on special properties of p and q in relation to each
      > other.
      >

      P+-1 for a given pair of B1 and B2 will give a lower bound, as there
      will be some smallest prime that doesn't fit the smoothness criterion
      of the algorithm. However, for large B1 and B2 it may be difficult to
      determine what this is.
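
      As a rough illustration of that bound (the p-1 side only, and stage 1
      with bound B1 only; the helper names and the search limit below are
      just illustrative, not from any particular implementation): stage 1
      finds any prime factor p whose p-1 is B1-powersmooth, so when it fails,
      the smallest prime whose p-1 is not B1-powersmooth is a proven lower
      bound on the remaining factors.

      from math import gcd, isqrt

      def small_primes(limit):
          """Primes up to limit by a simple sieve."""
          sieve = bytearray([1]) * (limit + 1)
          sieve[0:2] = b"\x00\x00"
          for i in range(2, isqrt(limit) + 1):
              if sieve[i]:
                  sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
          return [i for i, flag in enumerate(sieve) if flag]

      def pminus1_stage1(N, B1, a=2):
          """Return a factor of N if some prime p | N has p-1 B1-powersmooth."""
          for q in small_primes(B1):
              qe = q
              while qe * q <= B1:      # largest power of q not exceeding B1
                  qe *= q
              a = pow(a, qe, N)
          g = gcd(a - 1, N)
          return g if 1 < g < N else None

      def is_powersmooth(n, B1):
          """True if every prime-power factor of n is <= B1."""
          for q in small_primes(isqrt(n) + 1):
              if n % q == 0:
                  qe = 1
                  while n % q == 0:
                      n //= q
                      qe *= q
                  if qe > B1:
                      return False
          return n <= B1               # leftover is 1 or a single prime

      def implied_lower_bound(B1, search_limit=100000):
          """Smallest prime p with p-1 not B1-powersmooth; any factor that
          stage 1 with this B1 can miss must be at least this large."""
          for p in small_primes(search_limit):
              if not is_powersmooth(p - 1, B1):
                  return p
          return None                  # bound is larger than search_limit

      The brute-force search in implied_lower_bound is exactly why the bound
      is awkward to pin down for large B1, and more so with a stage 2 bound
      B2, which loosens the smoothness condition by one extra prime up to B2.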

      Andrew
    • Paul Leyland
      Message 2 of 7 , May 4, 2004
        > Is there a factoring algorithm which depends on the size of
        > the factors (and not the integer being factored itself), and
        > is faster than trial division, yet upon failure it provides
        > a proven lower bound on the factors?

        (Remark: almost all factoring algorithms have a superpolynomial
        dependency on the size of N, the integer being factored. I can't,
        off hand, think of any that do not. Some, such as ECM and NFS,
        have subexponential dependency; others, such as trial division and
        SQUFOF, are exponential.)

        Yes, if you regard Dan Bernstein's product-tree method of finding
        small prime divisors as being different from "trial division". It's
        certainly faster asymptotically than the algorithm conventionally
        known as trial division.
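
        Roughly, a sketch of the product-tree / remainder-tree idea (the
        function names and the final batch-gcd step here are only
        illustrative; the real method is considerably more refined):

        from math import gcd, prod

        def product_tree(nums):
            """Balanced product tree: tree[0] are the leaves, tree[-1] == [prod(nums)]."""
            tree = [list(nums)]
            while len(tree[-1]) > 1:
                level = tree[-1]
                tree.append([prod(level[i:i + 2]) for i in range(0, len(level), 2)])
            return tree

        def remainders(P, moduli):
            """P mod m for every m in moduli, sharing work via a remainder tree."""
            tree = product_tree(moduli)
            rems = [P]
            for level in reversed(tree):
                rems = [rems[i // 2] % m for i, m in enumerate(level)]
            return rems

        def small_factor_parts(numbers, primes):
            """gcd(N, product of primes) for each N: > 1 exactly when a listed
            prime divides N (each such prime shows up to the first power only)."""
            P = product_tree(primes)[-1][0]
            return [gcd(r, n) for r, n in zip(remainders(P, numbers), numbers)]

        The saving over one-by-one trial division comes from doing the
        reductions against large shared products instead of against each
        prime separately, which is where the asymptotic advantage comes from.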

        Another technique: precompute the primorial of a prime and
        evaluate g=gcd(N,p#) repeatedly, replacing N by N/g until g=1.
        p is now your lower bound. Pull out the factors from the
        gcds with trial division (or recursive application of this
        algorithm).

        Note this is not faster than trial division for a single N,
        because of the precomputation, but will run faster if you
        amortize over many N.
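
        In code, the loop might look something like this rough sketch (the
        helper names are only illustrative):

        from math import gcd, isqrt, prod

        def primorial(p):
            """p# = the product of all primes <= p (simple sieve; fine for modest p)."""
            sieve = bytearray([1]) * (p + 1)
            sieve[0:2] = b"\x00\x00"
            for i in range(2, isqrt(p) + 1):
                if sieve[i]:
                    sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
            return prod(i for i in range(2, p + 1) if sieve[i])

        def strip_small_factors(N, psharp):
            """Repeatedly divide out g = gcd(N, p#) until g = 1.
            Returns (cofactor, gcds); every prime factor of the cofactor
            exceeds the p used to build psharp."""
            gcds = []
            g = gcd(N, psharp)
            while g > 1:
                gcds.append(g)
                N //= g
                g = gcd(N, psharp)
            return N, gcds

        psharp = primorial(p) is the precomputation that gets reused across
        many N; the individual primes hiding in each returned g are then
        recovered by trial division up to p, as described above.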

        There may be other techniques.


        Paul
      • Jens Kruse Andersen
        Message 3 of 7 , May 4, 2004
          Paul Leyland wrote:
          > > Is there a factoring algorithm which depends on the size of
          > > the factors (and not the integer being factored itself), and
          > > is faster than trial division, yet upon failure it provides
          > > a proven lower bound on the factors?
          >
          > Yes, if you regard Dan Bernstein's product-tree method of finding
          > small prime divisors as being different from "trial division". It's
          > certainly faster asymptotically than the algorithm conventionally
          > known as trial division.

          I have not heard of Bernstein's product-tree before but it sounds like my
          TreeSieve:
          http://groups.yahoo.com/group/primeform/message/4231
          The GMP-based implementation is 12 times faster than individual trial factoring
          by primeform for 50000-digit numbers, but slow for small numbers.
          This version can only factor to 2^32.

          --
          Jens Kruse Andersen
        • julienbenney
          Message 4 of 7 , May 4, 2004
            The question is how such a method can prove a lower bound for the
            factors of a number. I imagine this would be easy for numbers such
            as repunits, where factors are restricted to special forms, but for
            other numbers it might not work very well because of the sheer
            number of candidate primes.

            For example, for the repunits R311 (not to be confused with R317),
            R509 and R557 - whose status is "Composite But No Factor Known" -
            this method seems very promising, because what I once read about
            repunits suggests that these numbers must have two very large
            factors. If primality tests for numbers of the correct form are
            available, it might solve the problem well, especially for R311,
            which mystified me when I saw it in a table of repunits.

            The problem is how to test the primality of the numbers of the
            appropriate form - though the effect of not doing this is most
            likely to be a mere waste of time rather than errors - for
            instance with the large 1187-digit cofactor of F12.