halfbaked idea about fast primality test

  • WarrenS
    Message 1 of 16 , Feb 16, 2011
      Warning: This whole post, while I believe it is correct, quite
      likely is useless and stupid and of mere curiosity or recreational
      value.

      As you probably know, the AKS primality test runs in
      polynomial time
      http://en.wikipedia.org/wiki/AKS_primality_test
      but it is of little or no practical interest.

      Wilson's theorem from about 1770
      states N is prime iff
      (N-1)! + 1 is divisible by N.
      http://en.wikipedia.org/wiki/Wilson%27s_theorem
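Wilson's criterion can be checked directly in a few lines of Python; this is only a statement check, since the naive factorial takes about N multiplications and is hopelessly slow as a real test:

```python
# Wilson's theorem: N > 1 is prime iff (N-1)! + 1 is divisible by N.
def wilson_is_prime(n: int) -> bool:
    if n < 2:
        return False
    fact = 1
    for k in range(2, n):
        fact = fact * k % n   # reduce mod n as we go to keep numbers small
    return (fact + 1) % n == 0

print([n for n in range(2, 30) if wilson_is_prime(n)])
# -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```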

Suppose temporarily we are not worried about computer TIME (BIT OPS), but merely about the NUMBER OF INTEGER ARITHMETIC OPERATIONS
      (we have fantasy computer hardware that'll do a
      huge-precision +,-,* or / in
      1 time step no matter how many digits are in the integers)
      then I point out it is possible to use Wilson's theorem to
test primality in a polynomial number of steps; that is, in a number of steps polynomial
in the number of digits of N. A rough sketch of how:
      Using integer / and * and + and - we can compute A mod B.
You can compute 10000000...00001^N [a very long number whose digits are all 0s except for a leading and trailing 1, raised to the power N]
      and from the digits of the result you get via Binomial theorem, the
      binomial coefficients (N choose K). (The extraction
      of the desired digit-range can be done with powering and mod
      operations.) Do this for the right N's
      and K's and combine the binomial coefficient results
      using (N choose K) = N!/[K!(N-K)!] to get a
      fast computation of the factorial (N-1)!.
      The number 1000...0001 needs to be order N^2
      digits long (ouch!). The whole computation should run in O(logN)^2
      arithmetic operations using "binary powering" and a mildly clever scheme for combining binomial coeffs which I'll leave to you.
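The digit-block trick above can be sketched concretely. With B = (10^m + 1)^N and m chosen so every binomial coefficient fits in an m-digit block (2^N < 10^N, so m = N suffices), block k of B's decimal digits is binomial(N,k), and extraction uses only powering, division, and mod:

```python
# Read binomial coefficients out of the digits of (10**m + 1)**N.
from math import comb

def binomials_from_power(N: int):
    m = N                      # block width: C(N,k) < 2**N < 10**N
    B = (10**m + 1)**N         # one giant integer, about N*m digits (ouch!)
    return [(B // 10**(m*k)) % 10**m for k in range(N + 1)]

print(binomials_from_power(5))   # [1, 5, 10, 10, 5, 1]
assert binomials_from_power(12) == [comb(12, k) for k in range(13)]
```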

      THE HOLY GRAIL:
      OK, now the question is, can we turn this into an actually-fast scheme?
To do so, you'd want to somehow compute things using modular arithmetic with cleverly chosen moduli (and cleverly chosen radices in the huge multidigit numbers; or use symbolic polynomials, not numbers, and compute only the coefficient of the desired term, not the undesired terms; or use some flavor of Chinese remaindering; or something) so that we never need huge-precision numbers and the
      whole algorithm can be done in polynomial memory-space and/or time.

      I do not have a method to propose to accomplish this. On the other hand it sounds mildly plausible there might be a way to do it, and I do not have any proof to propose that you can't do it!


      --
      Warren D. Smith
      http://RangeVoting.org <-- add your endorsement (by clicking "endorse" as 1st step)
      and
      math.temple.edu/~wds/homepage/works.html
    • Paul Leyland
      Message 2 of 16 , Feb 17, 2011
        On Thu, 2011-02-17 at 00:46 +0000, WarrenS wrote:
        > Warning: This whole post, while I believe it is correct, quite
        > likely is useless and stupid and of mere curiosity or recreational
        > value.
        >
        > As you probably know, the AKS primality test runs in
        > polynomial time
        > http://en.wikipedia.org/wiki/AKS_primality_test
        > but it is of little or no practical interest.
        >
        > Wilson's theorem from about 1770
        > states N is prime iff
        > (N-1)! + 1 is divisible by N.
        > http://en.wikipedia.org/wiki/Wilson%27s_theorem
        >
        > Suppose temporarily we are not worried about computer TIME (BIT OPS), but merely about the NUMBER OF INTEGER ARITHMETIC OPERATIONS
        > (we have fantasy computer hardware that'll do a
        > huge-precision +,-,* or / in
        > 1 time step no matter how many digits are in the integers)
        > then I point out it is possible to use Wilson's theorem to
        > test primality in polynomial #steps. That is, in a #steps polynomial

        It is also well-known that such a machine can factor a general integer
        in polynomial time. Read Knuth's The Art of Computer Programming,
        Volume 2: Seminumerical Algorithms, section 4.5.4, Exercise 38. Knuth
        credits Adi Shamir for this exercise.

        I, and doubtless many others, have been pondering this approach for many
        years. If anyone has had any success, and I very much doubt they have,
        you would almost certainly have heard about it now.


        Paul
      • WarrenS
        Message 3 of 16 , Feb 17, 2011
          Continuing to investigate the whole idea (we don't want to
          be too discouraged by the fact Paul Leyland has been pondering it in private for many years... what a party-pooper :)
          let me now discuss, sort of, "analytic number theory as a factoring algorithm" (?!) where again this is likely only to be of
          recreational interest.

          A different approach for computing binomial(N,K)
          fast modularly would be to use the Euler "Beta function" integral

          1/binomial(X+Y,Y) =
          (X+Y+1) * integral[from t=0 to t=1] t^X * (1-t)^Y dt

          defined for general complex X,Y with re(X)>-1 and re(Y)>-1.
There are also other integral representations of related quantities.

          The one above seems not so great, since it outputs 1/integer rather than an integer. One can compute within the ring of (real numbers mod M) and do numerical integration if one wants to use integral
          representations to compute integers mod M. That could allow
          using fairly low-precision real arithmetic. However, if so, the key word is "ring" -- we can only do the ring operations +, -, and *, and NOT
          division and square root. That is why the above integral representation of the RECIPROCAL of what we want is bad news.

          If N=A+B is approximately an equal split
          of N [if N is even make A=B, if odd make B=A+1] then
          N! = A! * A! * binomial(N,A) if N=A+A=even
          N! = A! * A! * (A+1) * binomial(N,A) if N=A+A+1=odd
          are the recurrences we need to compute N! fast if
          we can compute binomial coefficients fast. (There might also be other
          schemes, this is only one of many possibilities.)
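The balanced-split recurrence can be sketched as follows, with math.comb standing in for whatever fast binomial oracle one hopes for; each level halves N, so only O(logN) recursive calls are needed (plus the binomial cost):

```python
# N! via the balanced split N = A + B, using the two recurrences above.
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def fact_by_split(n: int) -> int:
    if n <= 1:
        return 1
    a = n // 2
    if n % 2 == 0:                   # n = a + a
        return fact_by_split(a)**2 * comb(n, a)
    # n = a + (a + 1)
    return fact_by_split(a)**2 * (a + 1) * comb(n, a)

assert fact_by_split(20) == factorial(20)
```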

          Here is another integral I designed to overcome the reciprocation
          problem:
          binomial(N,K) = [1/(2*pi*i)] * c^(K-N) *
          ContourIntegral (c+z)^N * z^(-K-1) dz
          where you integrate round the origin and c is an arbitrary
          nonzero constant.
          [Proof: Cauchy residue theorem.]

          Here is another integral:
          N! = integral[-infin to +infin] exp([N+1]*t-exp(t)) dt
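As a numerical sanity check of that last integral (plain stdlib trapezoidal summation; the integrand decays like exp((N+1)t) on the left and double-exponentially on the right, so a modest window suffices):

```python
# Check N! = integral over R of exp((N+1)*t - exp(t)) dt numerically.
from math import exp, factorial

def factorial_integral(N: int, lo=-30.0, hi=8.0, steps=200_000) -> float:
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        t = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0   # trapezoid endpoint weights
        total += w * exp((N + 1) * t - exp(t))
    return total * h

for N in (1, 3, 5):
    print(N, factorial_integral(N), factorial(N))
```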

          More integrals and other fun stuff may be found in my paper "The Gamma Function Revisited" paper #92 here
          http://www.math.temple.edu/~wds/homepage/works.html .

          The lesson of the latter two integrals is:
          If we had the ability to compute these integrals in the ring of real numbers mod M accurate to additive error +-O(1) fast, then we would have the ability to factorize M fast.

          The lesson of the first integral is the same except it's "if we had the ability to compute its RECIPROCAL fast... then..."

          Perhaps that contour integral CAN be computed fast... provided you have a "quantum computer." Peter Shor already famously showed how to factorize integers fast with a quantum computer. However, this (if it works) would be a different way to do it.
        • maximilian_hasler
          Message 4 of 16 , Feb 17, 2011
            > Continuing to investigate the whole idea (we don't want to
            > be too discouraged by the fact Paul Leyland has been pondering it

            I thought that should rather have encouraged you :)


            > The one above seems not so great since it outputs 1/integer not
            > integer. One can compute within the ring of (real numbers mod M)

            except for the detail that there's no such ring,
            since MZ is not an ideal of R.


            Maximilian
          • WarrenS
            Message 5 of 16 , Feb 17, 2011
              The function

              F(x) = 2/(1+squareroot(1-4*x)) = (1-squareroot(1-4*x))/(2*x)

              is the generating function of the Catalan numbers C[n]=binomial(2*n,n)/(n+1)
              [which are integers]

              F(x) = sum[n>=0] x^n * C[n]

              These -- if we had a fast way to compute C[n] mod M -- could be used for fast factorizing and prime-testing also,
              as per previous messages in this thread.

              And we have the integral formula

              C[n] = [1/(2*pi*i)] * ContourIntegral F(z) * z^(-1-n) dz ;

              integrate round the origin.

              This, if we integrate round the unit circle, would be a very well-behaved
              integral (bounded integrand)... but unfortunately the circle needs to have radius < 1/4, or the integral would be invalidated by F(z)'s branch cut.

              This integral is actually pretty much perfect for us... but does not lead to a fast integer factorization algorithm (that I can see), sorry. (It leads to a very slow one... :)
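The contour integral is easy to check numerically: C[n] equals the mean of F(z)*z^(-n) over a circle of radius r < 1/4 (here r = 0.2 and 512 equally spaced sample points; the trapezoid rule on a circle converges extremely fast for an analytic integrand):

```python
# Catalan numbers by numerical contour integration of F(z)/z**(n+1).
import cmath

def F(z: complex) -> complex:
    # 1 - 4z stays in the right half-plane for |z| = 0.2, so the
    # principal branch of cmath.sqrt is the right one here.
    return 2 / (1 + cmath.sqrt(1 - 4*z))

def catalan_by_contour(n: int, r: float = 0.2, M: int = 512) -> float:
    total = 0j
    for j in range(M):
        z = r * cmath.exp(2j * cmath.pi * j / M)
        total += F(z) * z**(-n)
    return (total / M).real

print([round(catalan_by_contour(n)) for n in range(8)])
# -> [1, 1, 2, 5, 14, 42, 132, 429]
```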

              This F(x) function trick is able to speed up and simplify my/Shamir's
              factorization algorithm (on a model of computation with FLOOR and squareroot functions), but it's an impractical algorithm before
              and after this speedup.

              --

              A correction to one of my earlier posts - I spoke of "the ring of real numbers mod M"... that was a mistake since adding and subtracting reals mod M works, but multiplying runs into the problem
              that (A*B) mod M does not equal (A mod M)*(B mod M) mod M.
              That oversight by me means this whole approach is even less likely
              to be useful.
            • WarrenS
              Message 6 of 16 , Feb 17, 2011
                >Catalan numbers...
                > This F(x) function trick is able to speed up and simplify my/Shamir's
                > factorization algorithm (on a model of computation with FLOOR and squareroot functions), but it's an impractical algorithm before
                > and after this speedup.

                --to be precise. My/Shamir's algorithm seems to take
                order (logN)^2 arithmetic operations to factorize N (on its ridiculous
                model of computation) but the Catalan generating function trick seems
                to be able to speed it up to O(logN) on a slightly more ridiculous model of computation.
              • djbroadhurst
                Message 7 of 16 , Feb 17, 2011
                  --- In primenumbers@yahoogroups.com,
                  "WarrenS" <warren.wds@...> wrote:

                  > the Catalan generating function trick seems
                  > to be able to speed it up to O(logN)
                  > on a slightly more ridiculous model of computation.

                  I had been trying to find an O(log(N)) model,
                  in your delightful "size-does-not-matter" universe.

                  However, I failed. So, please, Warren, if you are
                  able to do so, spell it out, more fully.

                  Since "size does not matter", we shall not be able to check
                  out your claim empirically. Yet we may, perchance, be able
                  to descry it, with our inner minds.

                  With thanks for these excellently half-baked ideas,

                  David
                • WarrenS
                  Message 8 of 16 , Feb 18, 2011
                    > > Warren D. Smith: the Catalan generating function trick seems
                    > > to be able to speed it up to O(logN)
                    > > on a slightly more ridiculous model of computation.
                    >
                    > D.J.Broadhurst: I had been trying to find an O(log(N)) model,
                    > in your delightful "size-does-not-matter" universe.
                    > However, I failed. So, please, Warren, if you are
                    > able to do so, spell it out, more fully.

                    --I love that "size does not matter" sobriquet.
                    OK, let me try to spell it out in (almost) full detail
                    and unfortunately with a (slight) weakening. (I'm not sure if the weakening is really needed, maybe you can unweaken it,
                    but it's only by a loglogN factor and only for smooth N...)

                    The "size does not matter" computational model shall mean
                    a magic machine which in 1 step can add, subtract, multiply, or divide (with truncation to integer)
                    any two integers, no matter how large; and can do positive, negative or zero testing in 1 step.

                    Lemma1: can compute A mod B = A-B*(A/B) in O(1) steps also.

                    Lemma2: can test if X is even or odd in O(1) steps. Proof: X mod 2.

                    Lemma3: can compute A^B in O(log(|B|+1)) steps via "binary powering"
                    using recurrence: If B=2*C+(0 or 1)>0, then
                    A^B = (A^C)^2 * (1 or A).
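Lemma 3's recurrence, written out:

```python
# Binary powering: O(log B) multiplications for A**B.
def power(a: int, b: int) -> int:
    if b == 0:
        return 1
    half = power(a, b // 2)          # B = 2*C + (0 or 1)
    return half * half * (a if b % 2 else 1)

assert power(3, 13) == 3**13
```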

                    Lemma4: we can simulate fixed point arithmetic carrying D decimal places,
                    by first computing 10^D in O(log(|D|+1)) steps in a one-time only computation,
                    then premultiply all integers by 10^D, then every time you do an operation
                    A+B, A-B, fine; if A*B you need also to divide result by 10^D; if A/B you need
                    also to multiply result by 10^D. This slows down everything by only a constant
                    factor after initial precomputation of 10^D.
                    This also allows the FLOOR function in O(1) time by dividing by then multiplying by 10^D.

                    Warning: watch out, from now on I will sometimes work in the fixpoint universe instead of
                    the integer universe, depending on my whim.

                    Lemma5: can compute squareroot(X) to D decimals in O(logD) steps.
                    Proof: various well known iterations work... see lemmas 11+12 below
                    for details.

                    Notation: Let
                    C[n] = binomial(2*n, n)/(n+1) = nth catalan number.
                    Its generating function is:
                    F(x) = (1-squareroot(1-4*x))/(2*x) = 2/(1+squareroot(1-4*x)) = sum(n>=0) C[n] * x^n
                    [Series converges if |x|<=1/4.]

                    Lemma6: can compute C[N] in O(log(N+1)) steps.
                    The Algorithm:
                    First compute Z=10^N and Q=10^(N*N).
                    Then compute F(1/Z) carrying N*N decimals.
                    Then C[N] = FLOOR(F(1/Z)*Q) mod Z.

                    Remark: of course this also allows us to compute central binomial coefficients.
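Lemma 6 can be made exact in plain integer arithmetic: since F(1/Z) = (Z - sqrt(Z*Z - 4*Z))/2, the whole computation reduces to one big-integer square root (math.isqrt here). The N+2 guard digits are my own precision bookkeeping, not part of the original sketch:

```python
# C[N] = FLOOR(F(1/Z) * 10**(N*N)) mod Z, with Z = 10**N.
from math import isqrt, comb

def catalan_wilson_style(N: int) -> int:
    Z = 10**N                           # radix block; C[N] < 4**N < Z
    G = 10**(N*N + N + 2)               # N*N working digits + N+2 guard digits
    s = isqrt((Z*Z - 4*Z) * G * G)      # floor(sqrt(Z*Z - 4*Z) * G)
    A = (Z*G - s) // (2 * 10**(N + 2))  # = FLOOR(F(1/Z) * 10**(N*N))
    return A % Z

for N in range(1, 25):
    assert catalan_wilson_style(N) == comb(2*N, N) // (N + 1)
```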

                    Lemma7: can compute A^N for all integers N>=0 in some pre-specified pre-sorted set S
                    SIMULTANEOUSLY, in O(cardinality(S) + log(the maximum N)) steps, PROVIDED
                    we know how to obtain each successive N from its predecessors in an average number of
                    O(1) add, subtract, and/or add-constant operations. For example if
                    S={0,1,2,4,8,16,32,65,129,259}. (Knuth book discusses "addition chains.")

                    Lemma8: can compute all C[N] for a prespecified SET S of positive N
                    SIMULTANEOUSLY, same proviso as in lemma7,
                    in O(cardinality(S) + log(the maximum N)) steps.
                    The Algorithm:
                    Find maxN.
                    Compute Z=10^maxN and Q=10^(maxN*maxN).
                    Then compute F(1/Z) carrying maxN*maxN decimals.
                    Then for each N in S, in order, compute C[N] = FLOOR(F(1/Z)*Z^N) mod Z.

                    Lemma9: can compute N! in time O(logN).
                    We use this recurrence:
                    if N=2*M=even then N! = M!^2 * binomial(N,M).
                    if N=2*M+1=odd then N! = N*(N-1)!.
                    The binomial coefficients are obtained from the Catalan numbers of Lemma 8, via binomial(2n,n) = (n+1)*C[n].

                    Theorem1: can test primality in O(logN) steps.
                    You compute T=(N-1)!+1 using preceding lemma, then "Wilson's theorem" says N prime iff
                    T is divisible by N. For example 6!=720 and 7 divides 721.

                    Lemma10: can compute GCD(A,B) in O(log(min(A,B)+2)) steps. Proof: Euclid's algorithm.

                    Theorem2: If N=prime*prime then can find a nontrivial factor of N in O(logN) steps.
                    Proof: if N is a square, output R=FLOOR(squareroot(N)). Otherwise there is
                    exactly one prime factor <=R. Compute it as GCD(N, R!).
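Theorem 2 on a toy semiprime (math.factorial is the stand-in for the O(logN)-step factorial of Lemma 9; on real hardware that line is of course exponentially big):

```python
# Either N is a square, or exactly one prime factor is <= R = floor(sqrt(N)),
# and gcd(N, R!) reveals it.
from math import gcd, isqrt, factorial

def factor_semiprime(N: int) -> int:
    R = isqrt(N)
    if R * R == N:
        return R                       # N = R * R, square case
    return gcd(N, factorial(R))        # the unique prime factor <= R

assert factor_semiprime(91) == 7       # 91 = 7 * 13, R = 9
assert factor_semiprime(169) == 13     # square case
```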

                    Lemma11: can compute the Kth root of N accurate to +-1
                    in time O(log(N)*log(K)) for N>K>1.
                    [Generalizes lemma5.]
                    Algorithm: Use the always-convergent iteration
                    X <-- [(K-1)*X + N/X^(K-1)]/K
                    using binary powering to compute each iterate.
                    We need to start from a decent initial guess X to get good speed, i.e. quadratic convergence.
                    You can compute L=log10(N) accurate to +-0.5 by repeatedly multiplying
                    by 10 until you exceed N (keep count); and then an initial guess of 10^(L/K)
                    will do.
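Lemma 11's iteration in integer arithmetic, started from the rough 10^ceil(L/K) guess described above (the stopping rule, stop once X stops decreasing, is my own detail):

```python
# Integer Kth root: X <- ((K-1)*X + N // X**(K-1)) // K, accurate to +-1
# (in fact it returns floor(N**(1/K)) exactly).
def kth_root(N: int, K: int) -> int:
    L = 0                              # count digits: 10**L > N >= 10**(L-1)
    p = 1
    while p <= N:
        p *= 10
        L += 1
    X = 10 ** (-(-L // K))             # initial guess 10**ceil(L/K) >= root
    while True:
        Y = ((K - 1) * X + N // X**(K - 1)) // K
        if Y >= X:
            return X                   # converged: X = floor(N**(1/K))
        X = Y

assert kth_root(10**12, 3) == 10**4
```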

                    Lemma12: [sped up version of lemma11] can compute the Kth root of N accurate to +-1
                    in time O(loglog(N)*log(K)) for N>K>1.
                    Idea: The method of lemma11 actually will take O(loglog(N)*log(K)) steps if we start from
                    that initial guess and only turn on our step counter then
                    (thanks to quadratic rather than geometric convergence). But you can find log10(N)
                    accurate to +-0.5 in a somewhat cleverer way using a binary rather than sequential search,
                    so we can get the initial guess in only O(loglog(N)) steps.

                    Theorem3: If somebody gives you N and tells you "N is the product of K primes"
                    (and tells you the value of K>1) then you can find a nontrivial factor of N in O(logN) steps.
                    Proof: There must be some prime factors above and some below R=FLOOR(N^(1/K))
                    (unless R itself is the factor) and you can find a factor as GCD(N, R!).

                    Remark:
                    It's a bit annoying but if I am not told the value of K, then it takes me slightly longer to find a nontrivial factor of N. (Can any of you overcome this last niggling annoyance?) We know a priori that 0<K<logN and we can do a binary search to find
                    the right (or anyhow a good-enough) value of K.
                    If your K is too small then GCD(N, R!) will just be N. If it is too large it will be 1.
                    So doing this binary search requires at most log(logN) "too-small
                    or too-large?" question-answers. Hence

                    Theorem4: Finding a nontrivial factor of a given integer N>1
                    [or proving N is prime] can be done in O(logN * loglogN) steps, worst case.

                    But we can do better for average case.

                    The key thing is to find a threshold T so that some prime factor of N exceeds T, but
                    not all of them. Then GCD(T!, N) finds a factor of N and we are done.

                    You can keep doubling X starting from X=1 and also computing Y=X! each time as a side effect,
                    until X exceeds N, and this whole process takes O(logN)
                    steps because we were smart about it (the lemmas above suffice... I hope you realize how).

                    Now for each of those Y's we can test whether GCD(Y, N)=N
                    in only 1 step each. The greatest X for which this is not the case ought to correspond, to within a factor of 2, to a good threshold.
                    Now we do that binary search for the right K as before, except only searching among those Ks that are compatible with our factor-2-wide range. If K<squareroot(logN) then there will be only O(1) K's in the range so the binary search requires only a constant number of whacks.
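A toy version of the doubling scheme (math.factorial again stands in for the fast Lemma 9 factorial). Note that it returns nothing exactly in the hard case the text describes, where all prime factors land inside one doubling interval:

```python
# Try X = 1, 2, 4, 8, ... and look for a threshold where gcd(X!, N) is a
# nontrivial factor, i.e. some but not all of N's prime factors are <= X.
from math import gcd, factorial

def factor_by_doubling(N: int):
    X = 1
    while X <= N:
        g = gcd(factorial(X), N)
        if 1 < g < N:
            return g          # nontrivial factor found
        X *= 2
    return None               # N prime, or all factors inside one doubling step

assert factor_by_doubling(77) == 7     # 77 = 7 * 11, found at X = 8
assert factor_by_doubling(97) is None  # prime
assert factor_by_doubling(35) is None  # the hard "balanced" case: 5 * 7
```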

                    It is a well known theorem in "probabilistic number theory" (Erdos+Kac?) that the
                    typical number of prime factors of a number N is of order loglogN
                    (mean) with variance also of order loglogN, with asymptotically-normal distribution. This is safely way below squareroot(logN).
                    Therefore

                    Theorem5: Finding a nontrivial factor of a given integer N>1
                    [or proving N is prime] can be done in O(logN) steps for almost all integer N ("almost all" here meaning, the fraction of good N goes to 100% as N-->infinity), specifically all N with O(squareroot(logN))
                    prime factors; and in O(logN * loglogN) steps, in the worst case.
                    The primality test is O(logN) time worst case.

                    Remark: It is kind of weird, but the Ns with more prime factors (which you'd naively think are the easiest to factor) are the ones causing me the most difficulty.

                    In case you forgot, everything here was in the "size doesn't matter" computational model.
                    Warren D. Smith, Feb. 2011.
                  • WarrenS
                    Message 9 of 16 , Feb 18, 2011
                      SLIGHT WEAKNESS:
                      > and unfortunately with a (slight) weakening. (I'm not sure if the weakening is really needed, maybe you can unweaken it,
                      > but it's only by a loglogN factor and only for smooth N...)

                      --to be precise. The only N for which the loglogN weakening happens,
                      are those which have more than (2^100)*squareroot(log(N)) prime factors, and such that the maximum prime factor is at most 1.01 times
                      the minimum prime factor.

                      These "bad" N are extremely rare.
                      In fact, the density of them below X should be less than
                      2^(-100*Squareroot(logX))
                      which if X is >100 digits long is so small that you are never
                      going to encounter one in the age of the universe with random
                      trial... note this is rare enough to make the average case runtime
                      O(logN).

                      Nevertheless I find this weakening highly annoying since
                      it makes the result seem slightly "suboptimal." O(logN) time
                      is "optimal" in the sense that it takes that long just to read in the input (or print the output factor in balanced 2-factor cases).

                      It might nevertheless be the case that O(logN * loglogN)
                      steps actually IS optimal (for worst case input N) in the
                      "size doesn't matter" model. I do not currently see how to do better, although I have some halfbaked ideas that might manage it,
                      for example you can actually compute not merely 1 row of the Pascal triangle, but the whole triangle, fast using a double generating function...


                      LITERATURE SEARCH
                      So I should really check the literature to see what Adi Shamir did, and whether this result of mine is really new... oh dear...
                      it seems the cite is

                      Adi Shamir: Factoring Numbers in O(log n) Arithmetic Steps,
                      Info. Processing Letters 8,1 (January 1979) 28-31

                      which I have not seen, but which seems to imply that Shamir already had the optimal result in 1979 (contrary to what I was thinking, namely that I was improving over Shamir by nearly a factor of logN?!).
                      Unfortunately my library does not have this so it could be a while before I see whatever Shamir did. Maybe somebody else can check this out.


                      Qi Cheng: On the ultimate complexity of factorials
                      Theoretical Computer Science 326,1-3 (October 2004) 419-429
                      showed you can compute N! with only +,- and * (without /) in
                      exp( O(loglogN * squareroot(logN)) ) steps. But he needed
                      an unproven conjecture about smooth numbers. One wonders if
                      this can be reduced... I mean, we can factor faster than that
                      using the Number Field Sieve...

                      M.Shub & S.Smale: On the intractability of Hilbert's Nullstellensatz and an algebraic version of P=NP?, Duke Math J 81 (1995) 47-54
                      showed that if N! is asymptotically hard to compute in our "size doesn't matter" model BUT without truncation after division, i.e. without the FLOOR function, then the algebraic version of NP notequal P is true.
                    • djbroadhurst
                      Message 10 of 16 , Feb 18, 2011
                        --- In primenumbers@yahoogroups.com,
                        "WarrenS" <warren.wds@...> wrote:

                        > Adi Shamir: Factoring Numbers in O(log n) Arithmetic Steps,
                        > Info. Processing Letters 8,1 (January 1979) 28-31
                        > which I have not seen, but seems to imply that Shamir already
                        > had the optimal result in 1979

                        I too am unable to access Shamir's article.
                        However I did note what Qi Cheng says in
                        Theoretical Computer Science 326 (2004) 419–429,
                        "On the ultimate complexity of factorials":

                        > Throughout this paper, we assume that a straight-line program
                        > only contains ring operations. Shamir [11] showed that if
                        > division (computing remainder and quotient) is allowed, then
                        > n! can be computed by a straight-line program of polynomial
                        > length.

                        So it seems that different authors stipulate different rules?

                        With thanks to Warren for his posting,

                        David
                      • Paul Leyland
                        Message 11 of 16 , Feb 18, 2011
                          On Sat, 2011-02-19 at 07:35 +0000, djbroadhurst wrote:
                          >
                          > --- In primenumbers@yahoogroups.com,
                          > "WarrenS" <warren.wds@...> wrote:
                          >
                          > > Adi Shamir: Factoring Numbers in O(log n) Arithmetic Steps,
                          > > Info. Processing Letters 8,1 (January 1979) 28-31
                          > > which I have not seen, but seems to imply that Shamir already
                          > > had the optimal result in 1979
                          >
                          > I too am unable to access Shamir's article.
                          > However I did note what Qi Cheng says in
                          > Theoretical Computer Science 326 (2004) 419--429,
                          > "On the ultimate complexity of factorials":

                          Why not just mail Adi and ask if he has a copy? Written in 1978, it is
                          very unlikely that it was created in electronic format but he may well
                          have reprints available.

                          Paul
                        • djbroadhurst
                          Message 12 of 16 , Feb 19, 2011
                            --- In primenumbers@yahoogroups.com,
                            Paul Leyland <paul@...> wrote:

                            > > Adi Shamir: Factoring Numbers in O(log n) Arithmetic Steps,
                            > > Info. Processing Letters 8,1 (January 1979) 28-31
                            > Why not just mail Adi and ask if he has a copy?

                            For members who might not already know, I remark
                            that Adi and Paul are co-authors:
                            http://www.springerlink.com/content/3nmr8v0u6n2v02eq/

                            David
                          • Paul Leyland
                            Message 13 of 16 , Feb 19, 2011
                              On Sat, 2011-02-19 at 01:05 +0000, WarrenS wrote:

                              > LITERATURE SEARCH
                              > So I should really check the literature to see what Adi Shamir did to
                              > see if this result by me is really new... oh dear...
                              > it seems the cite is
                              >
                              > Adi Shamir: Factoring Numbers in O(log n) Arithmetic Steps,
                              > Info. Processing Letters 8,1 (January 1979) 28-31
                              >
                              > which I have not seen, but seems to imply that Shamir already had the
                              > optimal result in 1979 (contrary to what I was thinking, which was I
                              > was improving over Shamir by nearly a factor of logN ?!).
                              > Unfortunately my library does not have this so it could be a while
                              > before I see whatever Shamir did. Maybe somebody else can check this
                              > out.

                              Did you actually take my hint and read Knuth? In my edition at least
                              (the 2nd from 1980) the answers section contains a sketch of the
                              complete algorithm.

                              Paul
                            • WarrenS
                              Message 14 of 16, Feb 19, 2011
                                I'll definitely check Knuth vol. 2, sec. 4.5.4, ex. 38 when I get a chance, but
                                that book (which I own, though I'm not sure which edition) is in storage where I cannot get at it, so I'll have to wait until my next trip to the library. If you want, you can post a sketch of the exercise and its answer to this bulletin board, or at least tell us exactly what the result is. I presume Shamir's paper says a good deal more than Knuth's book (?).

                                If Shamir's O(logN) result is merely existential, not an algorithm, then the result I posted here already achieves it: just take the final step of my algorithm's binary search -- the factorial and GCD computation within that step is an O(logN)-long arithmetic chain which computes a factor of N. I suspect, or hope, that my result is therefore actually better than Shamir's, but I cannot decide until I see his.
                                I occasionally go to faraway libraries, at which point I'll be able to see Shamir's paper.
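The "binary search + factorial + GCD" step mentioned above can be sketched as follows. This is a toy illustration only: the helper below computes k! mod N naively with about k multiplications, whereas the whole point of the arithmetic-chain model is that the factorial can be reached far more cheaply; the function names are mine, not from any of the papers discussed.

```python
from math import gcd

def factor_via_factorial_gcd(N):
    """Find the smallest prime factor p of composite N by binary-searching
    for the least k with gcd(k!, N) > 1.  That least k equals p, because
    p divides k! precisely when k >= p, and gcd(k! mod N, N) = gcd(k!, N)."""

    def fact_mod(k):
        # k! mod N, computed naively -- O(k) multiplications, illustration only
        r = 1
        for i in range(2, k + 1):
            r = r * i % N
        return r

    lo, hi = 1, N - 1    # gcd(1!, N) = 1; gcd((N-1)!, N) > 1 for composite N
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if gcd(fact_mod(mid), N) > 1:
            hi = mid     # the predicate is monotone in k: search the left half
        else:
            lo = mid
    return gcd(fact_mod(hi), N)
```

For example, `factor_via_factorial_gcd(91)` returns 7, the smallest prime factor of 91.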

                                Anyway, those of you (if any) who wish to look into this more deeply now have a number of interesting questions to ponder, such as:
                                1. is there any hope to actually turn any of this into a fast algorithm on a non-crazy computer?
                                2. what is the ring-operation complexity of the factorial function?
                                3. the field-operation complexity?

                                All these questions are pretty much wide open and are elementary (at least to ask, maybe not to solve). Furthermore, they all seem fairly important (to the extent anything in number theory is "important").
                              • WarrenS
                                Message 15 of 16, Feb 19, 2011
                                  --- In primenumbers@yahoogroups.com, "djbroadhurst" <d.broadhurst@...> wrote:
                                  >
                                  >
                                  >
                                  > --- In primenumbers@yahoogroups.com,
                                  > "WarrenS" <warren.wds@> wrote:
                                  >
                                  > > Adi Shamir: Factoring Numbers in O(log n) Arithmetic Steps,
                                  > > Info. Processing Letters 8,1 (January 1979) 28-31
                                  > > which I have not seen, but seems to imply that Shamir already
                                  > > had the optimal result in 1979
                                  >
                                  > I too am unable to access Shamir's article.
                                  > However I did note what Qi Cheng says in
                                  > Theoretical Computer Science 326 (2004) 419-429,
                                  > "On the ultimate complexity of factorials":
                                  >
                                  > > Throughout this paper, we assume that a straight-line program
                                  > > only contains ring operations. Shamir [11] showed that if
                                  > > division (computing remainder and quotient) is allowed, then
                                  > > n! can be computed by a straight-line program of polynomial
                                  > > length.
                                  >
                                  > So it seems that different authors stipulate different rules?

                                  --yah. And actually, if we had a "size doesn't matter" computer with
                                  only +, - and * (without /) we could program it to implement
                                  division A/B with remainder in O(log(max(A,B)+2)) steps.
                                  But Cheng is interested in, say, modular computation, such as computing N! mod M, and for that the ring model really is reasonably sensible and you cannot use it to simulate general division efficiently unless you know the factorization of M into primes.
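Here is a minimal sketch of such a division routine, using only +, -, * and comparisons (the name and the recursive-doubling formulation are mine; the recursion depth is about log2(A/B) + 1, matching the O(log(max(A,B)+2)) claim above):

```python
def divmod_no_division(A, B):
    """Quotient and remainder of A divided by B, using only +, -, * and
    comparisons.  Recurse on the doubled divisor: each level contributes
    one bit of the quotient, so the depth is about log2(A/B) + 1."""
    assert A >= 0 and B > 0
    if B > A:
        return 0, A                        # base case: quotient 0, remainder A
    q, r = divmod_no_division(A, B + B)    # divide by 2B first
    q = q + q                              # shift that quotient left one bit
    if r >= B:                             # peel off one more copy of B if possible
        q, r = q + 1, r - B
    return q, r
```

For example, `divmod_no_division(1000, 7)` gives `(142, 6)`.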

                                  Cheng's short ring-arithmetic chains for N!, and his fast algorithms for finding them, are both based on "reverse engineering" the elliptic curve factorization algorithm. So one might hope that the
                                  Number Field Sieve could be fooled with in order to improve Cheng's
                                  results, essentially replacing Cheng's square root with a cube root. Even better would be to show abstractly that ANY
                                  factorization algorithm can be turned into an N! algorithm without much slowdown, thus proving computational equivalence. It might
                                  be necessary to assume conjectures about smooth numbers to do that.

                                  Cheng's paper:
                                  http://www.cs.ou.edu/~qcheng/paper/factorial.pdf

                                  Here is a quick result by me to get you in the right mood... :)
                                  THEOREM. Let N>1 be an integer.
                                  A Turing machine can compute (N-1)! mod N in an amount of time polynomial in logN.
                                  PROOF.
                                  Test N for primality using the AKS test.
                                  If N is prime then the answer is -1 by Wilson's theorem; if N=4 it is 2; for any other composite N it is 0, since then N divides (N-1)!.
                                  QED
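The proof can be sketched directly. Below, plain trial division stands in for the AKS test (so this sketch is emphatically NOT polynomial in logN; substituting a real AKS implementation is what gives the claimed bound), and Wilson's lone composite exception N = 4 is handled explicitly:

```python
from math import isqrt

def factorial_mod_self(N):
    """(N-1)! mod N for N > 1, without ever computing the factorial:
    N-1 (i.e. -1 mod N) if N is prime, by Wilson's theorem;
    2 if N == 4 (since 3! = 6, and 6 mod 4 = 2);
    0 for every other composite N, since such N divides (N-1)!."""
    assert N > 1
    if N == 4:
        return 2
    # Stand-in primality test; replace with AKS for the polynomial-time claim.
    is_prime = all(N % d != 0 for d in range(2, isqrt(N) + 1))
    return N - 1 if is_prime else 0
```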
                                • djbroadhurst
                                    Message 16 of 16, Feb 19, 2011
                                    --- In primenumbers@yahoogroups.com,
                                    "WarrenS" <warren.wds@...> wrote:

                                    > THEOREM. Let N>1 be an integer.
                                    > Turing machine can compute (N-1)! mod N in an amount of
                                    > time polynomial in logN.
                                    > PROOF.
                                    > Test N for primality using AKS test.
                                    > If N is prime then the answer is -1 by Wilson's theorem,
                                    > otherwise it is 0.

                                    We don't need a Turing machine for that, do we?
                                    We can do it with a real machine and any real
                                    machine is less powerful than a Turing machine.

                                    David