
New Algorithms for Integer Factoring and Integer Factorials (draft #1)

  • WarrenS
    Message 1 of 2 , Dec 3, 2011
      A bunch of people wanted to see my new algorithms, but not a single one was going to supply any info about whether it was worth writing up. Sigh. Well, ok, I wrote
      it up, somewhat sketchily in places, here you are:

      New Algorithms for Integer Factoring and Integer Factorials
      ====Warren D. Smith Dec 2011===PRELIMINARY FIRST DRAFT==============

      In this note I explain new algorithms for factoring integer N.
      I also explain new algorithms for computing N!, the factorial of N.
      These algorithms both appear to be world records, although the senses
      in which that is true are not necessarily very impressive.

      *****I. Factorial.

      To compute N!, the naive method is 1*2*3*...*(N-1)*N.
      We want faster methods.

      Mentally divide the sequence 1,2,3,...,N into R=floor(squareroot(N)) chunks
      each R long, plus a leftover smaller chunk if N is not a perfect square.

      Consider the degree-M polynomial x*(x-1)*(x-2)*...*(x-M+1),
      which we can abbreviate as x^_M
      ["falling factorial"].

      We have N! = N^_R * (N-R)^_R * (N-2R)^_R *... * (N-[R-1]R)^_R * (N-RR)!.
      The final chunk (N-RR)! can be evaluated recursively or just naively using
      <R operations. All the other chunk values are different values of the same
      polynomial x^_R, it is just that this polynomial is evaluated at R
      different values of its argument x, where the different arguments form an
      arithmetic sequence. Finally multiply all the R+1 terms together going
      up a binary tree.
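      As a sanity check on this identity (not the fast algorithm itself), here is a naive Python sketch in which every chunk value of x^_R is expanded directly; the speedup only appears when the R values of x^_R are produced by fast multipoint evaluation as described next. The function names are mine, for illustration.

```python
import math

def falling(x, m):
    """Falling factorial x^_m = x*(x-1)*...*(x-m+1)."""
    p = 1
    for i in range(m):
        p *= x - i
    return p

def factorial_by_chunks(n):
    """N! = N^_R * (N-R)^_R * ... * (N-(R-1)R)^_R * (N-RR)!  with R=floor(sqrt(N))."""
    r = math.isqrt(n)
    result = 1
    for k in range(r):               # R values of the same polynomial x^_R
        result *= falling(n - k * r, r)
    leftover = 1                     # final chunk (N-RR)!, naively, < R operations
    for i in range(2, n - r * r + 1):
        leftover *= i
    return result * leftover
```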

      There are known fast algorithms for simultaneously-evaluating
      a polynomial of degree M at M different argument values
      (forming an arithmetic sequence) in
      a number of arithmetic operations O(pm(M)*logM)
      where pm(M) denotes the number of arithmetic operations required to multiply
      two degree-M polynomials symbolically. See, e.g.
      A.Bostan & E.Schost:
      Polynomial evaluation and interpolation on special sets of points,
      Journal of Complexity 21,4 (2005) 420-446.
      [I also have a different algorithm than theirs which matches their time bound, though
      probably with a worse constant factor.]
      By using the FFT convolution theorem to multiply polynomials symbolically, pm(M)=O(M*logM).
      Note, this FFT approach takes us outside the integers into the real and complex numbers,
      but at the end of the process you get integers back (and the reals did not need to have
      much higher precision than the integers had to keep the errors below +-0.1).
      If this bothers you, you can use Toom multiplication with pm(M)=O(M^(1+epsilon)) instead.
      This keeps everything rational with bounded denominators.

      Also, the polynomial x^_M can be computed in symbolic form if desired by
      polynomial multiplication on a binary tree, in O(pm(M)*logM) arithmetic steps.
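      A minimal sketch of that product tree, with schoolbook multiplication standing in for the fast polynomial multiply (the names `poly_mul` and `falling_factorial_poly` are mine, not from any reference):

```python
def poly_mul(a, b):
    """Schoolbook product of coefficient lists (constant term first).
    A real implementation would use FFT or Toom multiplication here."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def falling_factorial_poly(m):
    """Coefficients of x^_M = x*(x-1)*...*(x-M+1), multiplied up a binary tree."""
    level = [[-k, 1] for k in range(m)]          # linear factors (x - k)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(poly_mul(level[i], level[i + 1]))
        if len(level) % 2:                       # odd factor rides up a level
            nxt.append(level[-1])
        level = nxt
    return level[0]
```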

      1. N! can be computed in O(squareroot(N)*(logN)^2) arithmetic operations.
      2. The total time usage is more interesting to most people than the arithmetic operation
      count. If we compute over the ring of integers, then the time should be of the same order
      as performing logN multiplications on numbers of the same length as N!. If
      we compute over some small ring (such as "the integers mod 1009") then the arithmetic-op
      count really does govern the time since all operations are on bounded-length numbers.
      3. The memory usage is large: need to store order squareroot(N) numbers.

      *****II. Even faster "primorial"-like things such as LCM(1,2,3,...,N).

      It is possible to speed up the above algorithm somewhat further by cleverness about primes.
      I have not thought carefully about how far that can be pushed.

      But just to make it clear that speedup is possible... suppose instead of computing N!
      we were satisfied to compute some multiple of primorial(N).
      Then we do not need to multiply 1*2*3*...*(N-1)*N.
      It suffices to multiply
      (product of primes<L) * (product of numbers from L to N which are relatively prime to all primes<L)
      where L is chosen so that primorial(L)=squareroot(N) roughly, i.e. L has order logN/loglogN roughly.
      In this case each of my "falling factorial" polynomials can be replaced by a polynomial whose terms
      have been "sieved" to remove ones that would have been multiples of the primes<L. The
      degree of this polynomial would no longer be R, but rather about R/loglogN.
      Actually you need to adjust the parameters a bit... and if you want factorials not
      primorials you need to keep track of the stuff you sacrificed and put it back in... but anyhow

      N! can be computed in O(squareroot(N/loglogN)*(logN)^2) arithmetic operations
      and comparable-factor savings should be possible re time savings and memory usage too.
      Quite possibly one can do better than
      these since my prime-using methods were very straightforward with no attempt to be clever.
      Peter Luschny and Arnold Schönhage each invented clever prime-using methods that
      compute N! in the same time as doing a constant number of multiplications of numbers of the same
      length as N! [i.e. O(NlogN) bits long], but their methods, though better than mine by a log
      factor in TIME, are far worse than mine in ARITHMETIC-OP COUNT...
      so we have to wonder if it is possible to get the best of both worlds.
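      To make the sieving idea concrete, a naive sketch (straight loops, no polynomial tricks; function names are mine) that computes a multiple of primorial(N) while skipping multiples of the primes below L:

```python
def small_primes(limit):
    """Primes below limit, by trial division (fine for tiny limits)."""
    return [p for p in range(2, limit) if all(p % q for q in range(2, p))]

def primorial_multiple(n, L):
    """A multiple of primorial(n): the primes below L times everything in
    [L, n] coprime to those primes.  The survivors thin out by roughly a
    factor of prod(1 - 1/p) over primes p < L, the source of the savings."""
    ps = small_primes(L)
    result = 1
    for p in ps:
        result *= p
    for m in range(L, n + 1):
        if all(m % p for p in ps):   # m survived the sieve
            result *= m
    return result
```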

      *****III. Factoring the integer N.

      Assume we know N is composite (fast primality tests are known) and it is not divisible by
      (say) the primes<100.

      Define the function F(x)=gcd(x!, N). This is a nondecreasing function of x
      for 1<=x<=N, with F(1)=1 and F(N)=N. We can evaluate it fast by computing (x! mod N)
      using the fast algorithm above in the ring of integers mod N, then fast gcd algorithm.
      (The "improved" algorithm II will also work.)

      To factor N, one approach is to binary-search on x with 100<x<squareroot(N) for the smallest x
      with F(x)>1; then F(x)>F(x-1), so x shares a prime with N and gcd(x,N) is a nontrivial factor of N.
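      A sketch of this search in Python, with x! mod N computed naively here; in the real algorithm the `fact_mod` placeholder is replaced by the fast chunked evaluation of Section I:

```python
from math import gcd, isqrt

def fact_mod(x, n):
    """x! mod n, naively; the fast algorithm of Section I replaces this."""
    p = 1
    for i in range(2, x + 1):
        p = p * i % n
    return p

def find_factor(n):
    """Binary search for the smallest x with gcd(x!, n) > 1.  That x shares
    a prime with n, and since x <= sqrt(n) < n, gcd(x, n) is nontrivial.
    For composite n the smallest prime factor is <= sqrt(n), so it is in range."""
    lo, hi = 2, isqrt(n)
    while lo < hi:
        mid = (lo + hi) // 2
        if gcd(fact_mod(mid, n), n) > 1:
            hi = mid        # the first jump of F happens at or before mid
        else:
            lo = mid + 1    # gcd(mid!, n) = 1, so the jump is later
    return gcd(lo, n)
```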

      This will run in
      time = O( (squareroot(N)/loglogN)^(1/2) * (logN)^2) arithmetic operations
      on numbers each O(logN) bits long [same length as const*N^2]
      memory = O( (squareroot(N)/loglogN)^(1/2) ) numbers each O(logN) bits long.

      That is:
      Total time = O( N^(1/4) * (logN)^2 * (loglogN)^(1/2) ) * (logloglogN or lower) bit-ops.

      Total memory = O( N^(1/4) * logN * (loglogN)^(1/2) ) bits.

      As far as I know, this is a new world record time bound for integer factoring provided we only
      allow algorithms that are (i) deterministic and (ii) have rigorous worst-case time bound.
      Further, as I said it might be possible to improve this further by some kind of log factor
      with cleverer prime-using tricks.

      *****IV. Factoring the integer N using less memory but more time.

      Our proposal so far can be viewed as a way to speed up "trial division."
      As R.S.Lehman pointed out, it is possible to speed up factoring by doing a combination
      of "trial division to find factors < L" with
      "Fermat's difference of squares method to find factors > L".
      Lehman chose the breakpoint L to be of order N^(1/3) thus achieving a rigorous
      factoring algorithm running in O(N^(1/3)) arithmetic operations.
      R. Sherman Lehman: Factoring Large Integers,
      Mathematics of Computation 28, 126 (April 1974) 637-646
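      For reference, the Fermat difference-of-squares step that Lehman builds on, in a minimal sketch; it assumes n is odd and composite, and it is only fast when the two factors are near sqrt(n):

```python
from math import isqrt

def fermat_factor(n):
    """Find a, b with a^2 - b^2 = n; then n = (a-b)*(a+b)."""
    a = isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:          # a^2 - n is a perfect square
            return a - b, a + b
        a += 1
```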

      We now propose to do the same thing as Lehman but now using "sped-up trial division."

      I haven't checked this as carefully as I should, but I believe what happens is this.
      Choose the breakpoint L=N^P for 1/4<=P<=1/2. Besides P, there will also be another
      tunable parameter Q, with 0<Q<=P/2.
      The runtime will then be
      time = O( N^(P-Q) * (loglogN)^(-1/2) * (logN)^2 + N^([1-P]/2) ) arithmetic operations
      memory = O( (N^Q / loglogN)^(1/2) ) numbers.

      If P=1/2 and Q=1/4 then we get our fast (but memory-hog) algorithm without using Fermat.

      If 1/4<=P<1/2 we slow down, but can enjoy using less memory.
      For example, choose P=0.4 and Q=0.1 to get a factoring algorithm running in time O(N^0.3)
      ignoring log terms, and with memory usage O(N^0.1) ignoring log terms.

      This compromise strikes me as actually pretty attractive -- more attractive than either
      plain Lehman, plain Fermat, or plain trial division.
      However, it will still be heavily outperformed by
      Pollard-rho method, ECM, Quadratic Sieve, and NF-Sieve factorers, if you
      are willing to accept the fact they are nondeterministic and/or have nonrigorous time bounds.

      *****V. History -- is this just a rediscovery?

      It now seems likely that V. Strassen, in a 1976 paper I have not seen, invented something similar
      to my algorithms here -- but I suspect his log terms must have been slightly worse than mine?
      Volker Strassen: Einige Resultate über Berechnungskomplexität,
      Jahresber. Deutsch. Math.-Verein 78,1 (1976/7) 1-8.

    • WarrenS
      Message 2 of 2 , Dec 10, 2011
        My "new" Algorithms for Integer Factoring
        turn out to be a rediscovery (albeit with some slight improvements
        which reduce the runtime by log factors) of the "Pollard-Strassen polynomial evaluation
        algorithm", according to the book by Crandall & Pomerance.

        C&P confirm this is the fastest known rigorous deterministic factoring algorithm (and now my slightly improved version supplants that role).