
Does such a factoring algorithm exist?

  • Décio Luiz Gazzoni Filho
    Message 1 of 7 , May 2 9:17 PM

      Is there a factoring algorithm which depends on the size of the factors (and
      not the integer being factored itself), and is faster than trial division,
      yet upon failure it provides a proven lower bound on the factors?

      With Pollard rho, for instance, if I perform O(t) iterations of the method
      unsuccessfully, this is a clue that the integer in question is not likely to
      have prime factors < t^2, but for my application I need a proof of that
      assertion. If I trial divide with primes up to t without finding anything,
      then this is proof that the integer being trial divided has no factors up to
      t -- thus trial division is an algorithm that I could use, except that it's
      slow.
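      [Editorial sketch: to make the heuristic nature of rho concrete, here is a minimal Pollard rho in Python. The iteration function x -> x^2 + c and the iteration cap are arbitrary illustrative choices, not anything prescribed by the thread.]

```python
from math import gcd

def pollard_rho(n, c=1, max_iters=10_000):
    """Floyd-cycle Pollard rho with f(x) = x*x + c (mod n).
    Heuristically finds a prime factor p of composite n in roughly
    sqrt(p) iterations, but exhausting max_iters proves nothing
    about the size of n's smallest factor."""
    x = y = 2
    for _ in range(max_iters):
        x = (x * x + c) % n               # tortoise: one step
        y = ((y * y + c) ** 2 + c) % n    # hare: two steps
        d = gcd(x - y, n)
        if d == n:
            return None  # cycle degenerated; retry with another c
        if d > 1:
            return d
    return None  # no factor found -- a hint, not a proof
```

      For n = 8051 = 83 * 97 this finds a factor after only a few iterations; when it returns None, nothing has been proven.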

      So, does anyone know of an algorithm that fits the bill?

      Décio
    • miltbrown@earthlink.net
      Message 2 of 7 , May 3 9:16 AM
        Pollard's rho algorithm is not deterministic.

        It is probabilistic and can fail even on small numbers.

        It should not be part of discussions about
        "proven lower bounds".

        Milton L. Brown
        miltbrown at earthlink.net


      • dleclair55
        Message 3 of 7 , May 3 7:00 PM
          > Pollard's rho algorithm is not deterministic,
          > It is probabilistic and could fail even with small numbers.

          Sigh. That's what the original poster said.

          I won't claim to be an expert, but I'm not aware of any algorithm
          that fits the bill.

          X iterations of Rho, P+-1 and ECM won't give you a deterministic lower
          bound. SQUFOF, Schnorr-Lenstra, Pollard-Strassen, CFRAC, QS, and
          G/SNFS all depend on the size of N. Fermat's method and Lehman's
          method depend on special properties of p and q in relation to each
          other.

          -Don Leclair

        • andrew_j_walker
          Message 4 of 7 , May 3 10:08 PM
            --- In primenumbers@yahoogroups.com, "dleclair55" <dleclair55@y...>
            wrote:
            > X iterations of Rho, P+-1 and ECM won't give you a deterministic
            > lower bound. SQUFOF, Schnorr-Lenstra, Pollard-Strassen, CFRAC, QS,
            > and G/SNFS all depend on the size of N. Fermat's method and
            > Lehman's method depend on special properties of p and q in
            > relation to each other.

            p+-1 for a given pair of B1 and B2 will have a lower bound, as there
            will be some smallest prime that doesn't fit the smoothness
            criterion of the algorithm. However, for large B1 and B2 it may be
            difficult to determine what this is.
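            [Editorial sketch: a toy stage 1 of Pollard p-1 in Python makes the smoothness criterion concrete. Base 2 and "every prime power <= B1" are the standard stage-1 conventions; stage 2 / B2 is omitted.]

```python
from math import gcd

def pm1_stage1(n, B1, primes):
    """Stage 1 of Pollard p-1: raise 2 to every prime power <= B1.
    A prime p dividing n is found whenever p - 1 factors entirely
    into prime powers <= B1 (p - 1 is 'B1-powersmooth'), so the
    smallest prime failing that criterion bounds what this choice
    of B1 can possibly miss."""
    a = 2
    for q in primes:
        if q > B1:
            break
        qk = q
        while qk * q <= B1:   # largest power of q not exceeding B1
            qk *= q
        a = pow(a, qk, n)
    return gcd(a - 1, n)      # > 1 exactly when some p | n was caught
```

            With n = 1007 = 19 * 53 and B1 = 10, the factor 19 is found because 18 = 2 * 3^2 is 10-powersmooth, while 53 escapes because 52 = 4 * 13 is not.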

            Andrew
          • Paul Leyland
            Message 5 of 7 , May 4 1:37 AM
              > Is there a factoring algorithm which depends on the size of the
              > factors (and not the integer being factored itself), and is
              > faster than trial division, yet upon failure it provides a
              > proven lower bound on the factors?

              (Remark: almost all factoring algorithms have a superpolynomial
              dependency on the size of N, the integer being factored. I can't,
              offhand, think of any that do not. Some, such as ECM and NFS,
              have subexponential dependency; others, such as trial division
              and SQUFOF, are exponential.)

              Yes, if you regard Dan Bernstein's product-tree method of finding
              small prime divisors as being different from "trial division".
              It's certainly faster asymptotically than the algorithm
              conventionally known as trial division.
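              [Editorial sketch: a toy Python version conveying the flavor of the product/remainder-tree idea -- not Bernstein's actual algorithm, which is considerably more refined. One big prime product P is pushed down a product tree of the inputs, so each number ends with a single cheap gcd.]

```python
from math import gcd, prod

def batch_small_prime_gcds(numbers, primes):
    """Toy product/remainder-tree sketch: returns gcd(P, N) for
    each N in numbers, where P is the product of the given primes.
    gcd(P, N) == 1 proves N has no factor among those primes."""
    P = prod(primes)
    # Product tree: leaves are the numbers, each level pairs up.
    tree = [list(numbers)]
    while len(tree[-1]) > 1:
        row = tree[-1]
        tree.append([prod(row[i:i + 2]) for i in range(0, len(row), 2)])
    # Remainder tree: reduce P modulo each node, root to leaves.
    rems = [P % tree[-1][0]]
    for row in reversed(tree[:-1]):
        rems = [rems[i // 2] % row[i] for i in range(len(row))]
    # gcd(P mod N, N) == gcd(P, N): the product of the distinct
    # listed primes dividing N.
    return [gcd(r, n) for r, n in zip(rems, numbers)]
```

              The point is that the reductions share work across all the inputs, which is where the asymptotic win over number-by-number trial division comes from.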

              Another technique: precompute the primorial p# of a prime p and
              evaluate g = gcd(N, p#) repeatedly, replacing N by N/g, until
              g = 1. p is now your lower bound. Pull out the factors from the
              gcds with trial division (or recursive application of this
              algorithm).
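              [Editorial sketch of this technique in Python. The sieve is only there to make the example self-contained; the whole point is that p# is computed once and amortized over many N.]

```python
from math import gcd, prod

def primes_upto(p):
    """Sieve of Eratosthenes, used here to build the primorial."""
    sieve = bytearray([1]) * (p + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(p ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, p + 1, i)))
    return [i for i in range(2, p + 1) if sieve[i]]

def gcd_with_primorial(N, p):
    """Repeatedly strip g = gcd(N, p#) from N.  Returns (gcds,
    cofactor); an empty gcd list proves N has no prime factor <= p."""
    primorial = prod(primes_upto(p))  # the amortizable precomputation
    gcds = []
    g = gcd(N, primorial)
    while g > 1:
        N //= g
        gcds.append(g)
        g = gcd(N, primorial)  # repeated prime powers come out on later passes
    return gcds, N
```

              For example N = 2424 = 2^3 * 3 * 101 with p = 7 (so p# = 210) yields gcds [6, 2, 2] and cofactor 101, while a coprime N returns no gcds at all, which is the proof of the lower bound.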

              Note this is not faster than trial division for a single N,
              because of the precomputation, but will run faster if you
              amortize over many N.

              There may be other techniques.


              Paul
            • Jens Kruse Andersen
              Message 6 of 7 , May 4 3:50 PM
                Paul Leyland wrote:
                > Yes, if you regard Dan Bernstein's product-tree method of
                > finding small prime divisors as being different from "trial
                > division". It's certainly faster asymptotically than the
                > algorithm conventionally known as trial division.

                I had not heard of Bernstein's product-tree method before, but
                it sounds like my TreeSieve:
                http://groups.yahoo.com/group/primeform/message/4231
                The GMP-based implementation is 12 times faster than individual
                trial factoring by primeform for 50000-digit numbers, but slow
                for small numbers. This version can only find factors up to 2^32.

                --
                Jens Kruse Andersen
              • julienbenney
                Message 7 of 7 , May 4 5:36 PM
                  The question is how it can prove a lower bound for the
                  factors of a number. I imagine this would be easy for numbers
                  such as repunits, where factors are restricted to special
                  forms, but for other numbers it might not work very well
                  because of the sheer number of candidate primes.

                  For example, for the repunits R311 (not to be confused with
                  R317), R509 and R557 -- whose status is "Composite But No
                  Factor Known" -- this method seems very promising, because
                  what I once read about repunits suggests that these numbers
                  must have two very large factors. If primality tests for
                  numbers of the correct form are available, it might solve the
                  problem well, especially for R311, which mystified me when I
                  saw it in a table of repunits.

                  The problem is how to test the primality of the numbers of
                  the appropriate form -- though the effect of not doing this
                  is most likely a mere waste of time rather than errors -- as,
                  for instance, with the large 1187-digit cofactor of F12.