
has SB been tested?

  • David Broadhurst (Message 1 of 8, Mar 31, 2003)
      http://sb.pns.net/stats/
      shows that SB are testing k*2^n-1 at about a million decimal digits.
      A stray thought: did they slip in the case
      k=1, n=3021377,
      to test that the distributed PowerMod is working OK?
      It would be relatively little extra work,
      for the sake of peace of mind.
      If it were declared PrP 10 times out of 10, then one would have
      some confidence that any FFT failure rate is less than 10%.
      At 3.3 THz, that's only about an hour's work for 10 runs, I guess?
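      For illustration, a minimal sketch of the check in Python, using the
      built-in modular exponentiation rather than SB's FFT-based PowerMod
      (the choice of base 3 is my own):

          # A known prime must come back "probably prime"; broken bignum
          # arithmetic would almost certainly make it fail this test.
          def is_fermat_prp(n, base=3):
              return pow(base, n - 1, n) == 1

          # Small stand-in; the real check would use n = 2^3021377 - 1.
          assert is_fermat_prp(2**127 - 1)      # known Mersenne prime
          assert not is_fermat_prp(2**127 + 1)  # composite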
      David (belt and braces school)
    • David Broadhurst (Message 2 of 8, Mar 31, 2003)
        As no doubt everyone noticed, I foolishly confused
        Proths with Riesels. But the principle stands:
        is there a way of setting a decent limit on FFT failures?
        Even the rather smaller Cosgrave Proth 3*2^2145353+1
        would be some reassurance.
        David (foolish, though trying to be helpful)
      • Décio Luiz Gazzoni Filho (Message 3 of 8, Apr 1, 3:55 AM)
          While I do agree that such testing would be reassuring and consume few
          resources, it is my experience that coding an FFT for the 1 million digit
          range is rather trivial. That's less than 4194304 bits or 2^22 bits. Assuming
          radix 2^16, that would imply a transform size of 2^18, if the routine employs
          the right-angle DWT, which cuts the transform length in half by using the
          imaginary part of the transform elements to store useful data.
          Dominique Delande's pi calculation program does length-2^25
          transforms (that's seven additional bits of convolution pyramid
          to absorb) with radix 10^4, a radix less than 3 bits smaller
          than 2^16. SB's routine should have a lot of headroom if
          correctly done and using all the usual tricks -- balanced
          representation, fast and accurate trig identities for computing
          the primitive complex roots of unity, etc.

          So, barring a stupid mistake, I am not worried that such a routine would fail
          at this range.
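
          To make that concrete, here is a toy version in Python/numpy. It
          is only a sketch, not SB's routine: plain zero-padded cyclic
          convolution, so it needs twice the transform length of the
          right-angle scheme, and no balanced representation.

              import numpy as np
              import random

              def fft_mul(a, b, radix_bits=16):
                  mask = (1 << radix_bits) - 1
                  def limbs(x):  # little-endian base-2^16 limbs
                      out = []
                      while x:
                          out.append(x & mask)
                          x >>= radix_bits
                      return out or [0]
                  la, lb = limbs(a), limbs(b)
                  n = 1
                  while n < len(la) + len(lb):  # pad to a power of two
                      n *= 2
                  raw = np.fft.ifft(np.fft.fft(la, n) * np.fft.fft(lb, n)).real
                  # worst distance of any coefficient from an integer;
                  # the routine fails once this reaches 0.5
                  err = float(np.max(np.abs(raw - np.rint(raw))))
                  prod = 0
                  for i, c in enumerate(np.rint(raw)):  # carry propagation
                      prod += int(c) << (radix_bits * i)
                  return prod, err

              x = random.getrandbits(250000)  # ~75000 decimal digits
              y = random.getrandbits(250000)
              p, err = fft_mul(x, y)
              assert p == x * y
              print("worst roundoff:", err)  # comfortably below 0.5 here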

          Décio

          On Monday 31 March 2003 23:04, David Broadhurst wrote:
          > As no doubt everyone noticed, I foolishly confused
          > Proths with Riesels. But the principle stands:
          > is there a way of setting a decent limit on FFT failures?
          > Even the rather smaller Cosgrave Proth 3*2^2145353+1
          > would be some reassurance.
          > David (foolish, though trying to be helpful)
        • mikeoakes2@aol.com (Message 4 of 8, Apr 1, 4:58 AM)
            In a message dated 01/04/03 13:25:21 GMT Daylight Time,
            decio@... writes:


            > While I do agree that such testing would be reassuring and consume few
            > resources, it is my experience that coding an FFT for the 1 million digit
            > range is rather trivial.
            >

            As is coding an FFT for 100 Million digits: size has nothing to do with it.

            What is /not/ trivial is to optimise it to the extent done for
            the GMP library incorporated (I believe) in the Proth search
            programs used by the SB team.

            To maximize speed, George Woltman's FFT code incorporates an unprecedented
            amount of optimisation, and is constantly riding on the extreme edge of
            failing (roundoff >= 0.5 bits), so it needs an exceptional amount of testing,
            on many different architectures, especially near the many FFT-size
            breakpoints.
            See e.g. http://www.mersenne.org/source.htm
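
            The edge is easy to exhibit with a throwaway experiment -- a
            plain float64 FFT in Python/numpy, nothing like George's
            hand-tuned assembler, but the same cliff (the limb widths and
            operand size below are my own choices):

                import numpy as np
                import random

                def trial(bits, w):
                    # Convolve the base-2^w limb vectors of two random
                    # `bits`-bit integers, then check the product.
                    mask = (1 << w) - 1
                    def limbs(x):
                        out = []
                        while x:
                            out.append(x & mask)
                            x >>= w
                        return out
                    x, y = random.getrandbits(bits), random.getrandbits(bits)
                    a, b = limbs(x), limbs(y)
                    n = 1
                    while n < len(a) + len(b):
                        n *= 2
                    raw = np.fft.ifft(np.fft.fft(a, n) * np.fft.fft(b, n)).real
                    err = float(np.max(np.abs(raw - np.rint(raw))))
                    prod = sum(int(c) << (w * i)
                               for i, c in enumerate(np.rint(raw)))
                    return prod == x * y, err

                for w in (8, 12, 16, 18, 20, 22):
                    ok, err = trial(2**18, w)
                    print(w, ok, err)

            Note that the reported roundoff can look deceptively small past
            the cliff: once coefficients outgrow the 53-bit mantissa, every
            stored value is forcibly integral and the product is silently
            wrong -- which is exactly why testing against known results
            matters.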

            Mike Oakes


          • Décio Luiz Gazzoni Filho (Message 5 of 8, Apr 1, 6:37 AM)
              On Tuesday 01 April 2003 09:58, mikeoakes2@... wrote:
              > In a message dated 01/04/03 13:25:21 GMT Daylight Time,
              >
              > decio@... writes:
              > > While I do agree that such testing would be reassuring and consume few
              > > resources, it is my experience that coding an FFT for the 1 million digit
              > > range is rather trivial.
              >
              > As is coding an FFT for 100 Million digits: size has nothing to do with it.

              The choice of FFT parameters for a given operand size has
              everything to do with the speed, memory usage and correctness
              of an FFT implementation. Colin Percival has written a paper
              on roundoff error bounds for FFTs, and according to what his
              results predict, the parameter set I suggested (radix 2^16,
              transform size 2^18) overflows a 53-bit mantissa by 3.5 bits
              in the worst case on any IEEE 754/854-compliant FPU, so
              accuracy isn't guaranteed. Going down to radix 2^8 and
              transform size 2^19 would ruin speed, however. But keep in
              mind that Percival's estimates are worst-case ones, and as
              with most things in the study of roundoff error, there is a
              lot of headroom beyond these estimates before the routines
              start to fail.

              For reference, and some plots of predicted error rate vs. actual error rate,
              see http://members.tripod.com/careybloodworth/HTMLobj-174/mpaper.ps.
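
              To spell out the naive part of that budget (my own
              back-of-envelope count, not Percival's bound, which also
              tracks the FFT's internal rounding):

                  import math

                  # radix 2^16, transform size 2^18: each convolution
                  # coefficient is a sum of up to 2^18 products of two
                  # 16-bit limbs
                  worst = 2**18 * (2**16 - 1)**2
                  print(math.log2(worst))  # ~50.0 bits against a 53-bit
                                           # mantissa, so only ~3 bits are
                                           # left to absorb accumulated FFT
                                           # roundoff -- a worst-case bound
                                           # can overflow while typical
                                           # runs do not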

              > What is /not/ trivial is to optimise it to the extent done for the GMP
              > library incorporated (I believe) in the Proth search programs used by SB
              > team.

              Instruction-scheduling optimizations are unlikely to affect roundoff error
              rates. Sloppiness in computing the primitive roots of unity is the only way I
              can see of significantly introducing errors into the computation. Other than
              that, most algorithmic improvements introduced actually _improve_ the error
              rate: right-angle convolution via DWT, split-radix butterfly, balanced data
              representation, probably the four-step decomposition too, etc.
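
              As a quick illustration of the balanced-representation point,
              a toy comparison in Python/numpy (my own sketch: digits
              recoded into [-2^(w-1), 2^(w-1)) so the convolution terms
              cancel on average instead of accumulating):

                  import numpy as np
                  import random

                  def limbs(x, w):  # little-endian base-2^w digits
                      out, mask = [], (1 << w) - 1
                      while x:
                          out.append(x & mask)
                          x >>= w
                      return out

                  def balance(ds, w):
                      # recode digits into [-2^(w-1), 2^(w-1)), pushing a
                      # carry into the next position; the value is unchanged
                      out, carry, half = [], 0, 1 << (w - 1)
                      for d in ds:
                          d += carry
                          carry = 1 if d >= half else 0
                          out.append(d - (carry << w))
                      out.append(carry)
                      return out

                  def worst_err(da, db):
                      n = 1
                      while n < len(da) + len(db):
                          n *= 2
                      fa, fb = np.fft.fft(da, n), np.fft.fft(db, n)
                      raw = np.fft.ifft(fa * fb).real
                      return float(np.max(np.abs(raw - np.rint(raw))))

                  w = 18
                  x, y = random.getrandbits(2**18), random.getrandbits(2**18)
                  print("plain:   ", worst_err(limbs(x, w), limbs(y, w)))
                  print("balanced:", worst_err(balance(limbs(x, w), w),
                                               balance(limbs(y, w), w)))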

              > To maximize speed, George Woltman's FFT code incorporates an unprecedented
              > amount of optimisation, and is constantly riding on the extreme edge of
              > failing (roundoff >= 0.5 bits), so it needs an exceptional amount of
              > testing, on many different architectures, especially near the many
              > FFT-size breakpoints.
              > See e.g. http://www.mersenne.org/source.htm

              It should be clear now that this is more of an issue regarding optimal
              parameter choice than optimization itself.

              Décio
            • Michael Bell (Message 6 of 8, Apr 1, 7:53 AM)
                Hi,

                I think we are getting confused here between roundoff errors (which are
                implementation independent assuming the exact same FFT algorithm, limb size,
                etc. being run on a processor with the same precision) and coding errors.
                The latter are nothing much to do with size.

                I believe SoB is using George's code, which is fairly well
                tested, but I think they may have been tweaking it a bit.
                Certainly it is beginning to seem odd that there've been no
                primes, but I'm not sure it's that far out of line with the
                expectations.

                I agree with David that running a couple of test cases isn't
                going to harm throughput much and will give a bit more
                confidence in the results. But here probably isn't the best
                place to suggest an idea for SoB; they have a forum at
                http://www.free-dc.org/forum/forumdisplay.php?s=&forumid=38

                Mike.
              • Décio Luiz Gazzoni Filho (Message 7 of 8, Apr 1, 8:08 AM)
                  On Tuesday 01 April 2003 12:53, Michael Bell wrote:
                  > Hi,
                  >
                  > I think we are getting confused here between roundoff errors (which are
                  > implementation independent assuming the exact same FFT algorithm, limb
                  > size, etc. being run on a processor with the same precision) and coding
                  > errors. The latter are nothing much to do with size.

                  Exactly. Since SoB has already found 5 or 6 primes, I
                  assume their code has no errors of the latter kind. The
                  only errors that might creep in now are of the roundoff
                  kind, which I argued against in my other posts.

                  > I believe SoB is using George's code which is fairly well tested, but I
                  > think they may have been tweaking it a bit. Certainly it is beginning to
                  > seem odd that there've been no primes, but I'm not sure its that far out of
                  > line with the expectations.
                  >
                  > I agree with David that running a couple of test cases isn't going to harm
                  > throughput much and will give a bit more confidence to the results. But
                  > here probably isn't the best place to suggest an idea for SoB, they have a
                  > forum at http://www.free-dc.org/forum/forumdisplay.php?s=&forumid=38

                  I also agree with David, but IMO no one should suspect
                  that SoB is failing to find new primes because the
                  multiplication code is pushing the limits of roundoff
                  error (and even less because of outright buggy code).

                  Décio
                • Michael Bell (Message 8 of 8, Apr 1, 8:23 AM)
                    Décio Luiz Gazzoni Filho wrote:
                    > On Tuesday 01 April 2003 12:53, Michael Bell wrote:
                    > > Hi,
                    > >
                    > > I think we are getting confused here between roundoff errors (which are
                    > > implementation independent assuming the exact same FFT algorithm, limb
                    > > size, etc. being run on a processor with the same precision) and coding
                    > > errors. The latter are nothing much to do with size.
                    >
                    > Exactly. Since SoB has already found 5 or 6 primes, I
                    > assume their code has no errors of the latter kind.
                    > The only errors that might creep in now are of the
                    > roundoff kind, which I argued against in my other
                    > posts.

                    Except that George's code uses different assembler routines for each FFT
                    length, so that doesn't really hold.

                    >
                    > I also agree with David, but IMO no one should suspect
                    > that SoB is failing to find new primes because the
                    > multiplication code is pushing the limits of roundoff
                    > error (and even less because of outright buggy code).

                    Indeed, I should think this is pretty unlikely.

                    Mike.