Re: [PrimeNumbers] has SB been tested?

  • Décio Luiz Gazzoni Filho
    Message 1 of 8, Apr 1, 2003

      On Tuesday 01 April 2003 09:58, mikeoakes2@... wrote:
      > In a message dated 01/04/03 13:25:21 GMT Daylight Time,
      >
      > decio@... writes:
      > > While I do agree that such testing would be reassuring and consume few
      > > resources, it is my experience that coding an FFT for the 1 million digit
      > > range is rather trivial.
      >
      > As is coding an FFT for 100 Million digits: size has nothing to do with it.

      The choice of FFT parameters for a given operand size has everything to do
      with the speed, memory usage and correctness of an FFT implementation.
      Colin Percival has written a paper on roundoff error bounds for FFTs, and
      according to what his results predict, the parameter set I suggested
      (radix 2^16, transform size 2^18) is predicted to overflow by 3.5 bits on
      any IEEE 754/854-compliant FPU with a 53-bit mantissa, so accuracy isn't
      guaranteed. However, going down to radix 2^8 and transform size 2^19 would
      ruin speed. Keep in mind, though, that Percival's estimates are worst-case
      ones, and as with almost everything in the study of roundoff error, there
      is a lot of headroom beyond these estimates before the routines actually
      start to fail.

      For reference, and some plots of predicted error rate vs. actual error rate,
      see http://members.tripod.com/careybloodworth/HTMLobj-174/mpaper.ps.
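      For anyone who wants to see how far below the worst case the actual
      errors sit, here is a quick sketch (Python with numpy; the function
      name is mine, purely illustrative). It multiplies random operands with
      a floating-point FFT at the parameters above and reports how far the
      convolution outputs stray from the nearest integer:

      import numpy as np

      def observed_roundoff(radix_bits=16, log_size=18):
          n = 1 << log_size
          # Random limbs in [0, 2^radix_bits); zero-pad the upper half so
          # the cyclic convolution equals the acyclic (integer product) one.
          a = np.zeros(n)
          b = np.zeros(n)
          a[:n // 2] = np.random.randint(0, 1 << radix_bits, n // 2)
          b[:n // 2] = np.random.randint(0, 1 << radix_bits, n // 2)
          c = np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n)
          # The exact convolution coefficients are integers, so the
          # distance to the nearest integer is the accumulated roundoff.
          return np.abs(c - np.round(c)).max()

      print(observed_roundoff())

      On random data this comes out well under the worst-case prediction,
      which is exactly the headroom I mean.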

      > What is /not/ trivial is to optimise it to the extent done for the GMP
      > library incorporated (I believe) in the Proth search programs used by SB
      > team.

      Instruction-scheduling optimizations are unlikely to affect roundoff error
      rates. Sloppiness in computing the primitive roots of unity is the only
      way I can see to introduce significant errors into the computation. Other
      than that, most algorithmic improvements actually _improve_ the error
      rate: right-angle convolution via the DWT, the split-radix butterfly,
      balanced data representation, probably the four-step decomposition too,
      etc.
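
      To illustrate the last point: a balanced representation recodes limbs
      from [0, B) into [-B/2, B/2), which roughly halves the magnitudes fed
      into the transform and so buys some headroom in the convolution
      outputs. A sketch (mine, illustrative only):

      def to_balanced(limbs, radix):
          # Recode limbs from [0, radix) into [-radix/2, radix/2),
          # propagating a carry so the represented value is unchanged.
          out = list(limbs) + [0]       # room for a final carry
          carry = 0
          for i in range(len(out)):
              d = out[i] + carry
              carry = 0
              if d >= radix // 2:
                  d -= radix            # subtract radix here and carry 1
                  carry = 1             # into the next limb: net change 0
              out[i] = d
          return out

      For example, to_balanced([65535, 65535], 65536) gives [-1, 0, 1],
      which still represents 2^32 - 1.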

      > To maximize speed, George Woltman's FFT code incorporates an unprecedented
      > amount of optimisation, and is constantly riding on the extreme edge of
      > failing (roundoff >= 0.5 bits), so it needs an exceptional amount of
      > testing, on many different architectures, especially near the many
      > FFT-size breakpoints.
      > See e.g. http://www.mersenne.org/source.htm

      It should be clear by now that this is more an issue of optimal parameter
      choice than of optimization itself.
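
      In practice that edge-riding is survivable because the roundoff is
      checked at runtime: after each multiplication, look at how far the
      convolution outputs are from integers, and if the maximum crosses a
      threshold, redo the operation at the next larger FFT size. A sketch of
      the idea, reusing numpy from the snippet above (the 0.4 threshold is
      my assumption, not necessarily George's actual constant):

      def check_roundoff(conv, threshold=0.4):
          # conv holds the raw floating-point convolution outputs.
          err = np.abs(conv - np.round(conv)).max()
          if err > threshold:
              # Too close to the fatal 0.5: the result may already be
              # wrong, so retry at a larger transform size.
              raise ArithmeticError("roundoff %.4f > %.1f" % (err, threshold))
          return np.round(conv)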

      Décio
    • Michael Bell
      Message 2 of 8, Apr 1, 2003
        Hi,

        I think we are getting confused here between roundoff errors (which are
        implementation-independent, assuming the exact same FFT algorithm, limb
        size, etc. running on a processor with the same precision) and coding
        errors. The latter have little to do with size.

        I believe SoB is using George's code, which is fairly well tested, but I
        think they may have been tweaking it a bit. Certainly it is beginning to
        seem odd that there've been no primes, but I'm not sure it's that far
        out of line with expectations.

        I agree with David that running a couple of test cases isn't going to
        harm throughput much and will give a bit more confidence in the results.
        But this probably isn't the best place to suggest an idea for SoB; they
        have a forum at http://www.free-dc.org/forum/forumdisplay.php?s=&forumid=38
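
        The sort of test I have in mind is cheap to write: compare the fast
        multiplication against an exact bignum product on random operands of
        the sizes being run. Something like this sketch (Python; fft_mul
        stands for whatever routine is under test, the name is illustrative):

        import random

        def self_test(fft_mul, bits, trials=10):
            # Python's built-in * is exact, so any mismatch here is a
            # coding error rather than a roundoff subtlety.
            for _ in range(trials):
                a = random.getrandbits(bits) | (1 << (bits - 1))
                b = random.getrandbits(bits) | (1 << (bits - 1))
                assert fft_mul(a, b) == a * b, "multiplication mismatch"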

        Mike.
      • Décio Luiz Gazzoni Filho
        Message 3 of 8, Apr 1, 2003

          On Tuesday 01 April 2003 12:53, Michael Bell wrote:
          > Hi,
          >
          > I think we are getting confused here between roundoff errors (which
          > are implementation-independent, assuming the exact same FFT
          > algorithm, limb size, etc. running on a processor with the same
          > precision) and coding errors. The latter have little to do with size.

          Exactly. Since SoB has already found 5 or 6 primes, I assume their
          code has no errors of the latter kind. The only errors that might
          creep in now are of the roundoff kind, which I argued against in my
          other posts.

          > I believe SoB is using George's code, which is fairly well tested,
          > but I think they may have been tweaking it a bit. Certainly it is
          > beginning to seem odd that there've been no primes, but I'm not
          > sure it's that far out of line with expectations.
          >
          > I agree with David that running a couple of test cases isn't going
          > to harm throughput much and will give a bit more confidence in the
          > results. But this probably isn't the best place to suggest an idea
          > for SoB; they have a forum at
          > http://www.free-dc.org/forum/forumdisplay.php?s=&forumid=38

          I also agree with David, but IMO no one should suspect that SoB isn't
          finding new primes because the multiplication code is pushing the
          limits of roundoff error (and even less because of outright buggy
          code).

          Décio
        • Michael Bell
          Message 4 of 8, Apr 1, 2003
            Décio Luiz Gazzoni Filho wrote:
            > On Tuesday 01 April 2003 12:53, Michael Bell wrote:
            > > Hi,
            > >
            > > I think we are getting confused here between roundoff errors
            > > (which are implementation-independent, assuming the exact same
            > > FFT algorithm, limb size, etc. running on a processor with the
            > > same precision) and coding errors. The latter have little to do
            > > with size.
            >
            > Exactly. Since SoB has already found 5 or 6 primes, I assume
            > their code has no errors of the latter kind. The only errors that
            > might creep in now are of the roundoff kind, which I argued
            > against in my other posts.

            Except that George's code uses different assembler routines for each FFT
            length, so that doesn't really hold.
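
            So a self-test should exercise every code path: one pair of
            operands per FFT length, sized to land just below and just above
            each breakpoint, where roundoff is worst. Roughly (reusing the
            self_test sketch from my earlier message; the 16-bit limb
            assumption and fft_mul are illustrative):

            for log_size in range(14, 21):
                # With 16-bit limbs and 2x zero padding, a length-2^log_size
                # transform holds operands of up to 2^log_size * 8 bits.
                capacity = (1 << log_size) * 8
                self_test(fft_mul, capacity - 64)  # fullest: worst roundoff
                self_test(fft_mul, capacity + 64)  # just past the breakpoint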

            >
            > I also agree with David, but IMO no one should suspect that SoB
            > isn't finding new primes because the multiplication code is
            > pushing the limits of roundoff error (and even less because of
            > outright buggy code).
            >

            Indeed, I should think this is pretty unlikely.

            Mike.