Re: 62-digit IsPrime

  • maximilian_hasler
    Message 1 of 20 , Apr 24, 2010
      It can't be prime because it's not pseudoprime.
      I suggest you install PARI/gp (done in less than a minute)
      and do such preliminary checks prior to posting in this group.
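      For illustration, the kind of preliminary check meant here is a
      one-liner in PARI/gp (ispseudoprime(n)).  The sketch below is a
      minimal Python version of the same idea, a strong (Miller-Rabin)
      pseudoprime test to the single base 2, which is already enough to
      flag the 309-digit number as composite; it is only a sketch, not
      anyone's production code.

          def is_strong_probable_prime(n: int, base: int = 2) -> bool:
              """Strong (Miller-Rabin) pseudoprime test to one base."""
              if n < 2:
                  return False
              for p in (2, 3, 5, 7, 11, 13):       # cheap trial division first
                  if n % p == 0:
                      return n == p
              d, s = n - 1, 0
              while d % 2 == 0:                    # write n - 1 = d * 2^s, d odd
                  d //= 2
                  s += 1
              x = pow(base, d, n)
              if x in (1, n - 1):
                  return True
              for _ in range(s - 1):
                  x = pow(x, 2, n)
                  if x == n - 1:
                      return True
              return False                         # n is definitely composite

      A composite that slips past even this single-base check is rare;
      the discussion later in this thread puts numbers on just how rare.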

      Maximilian

      --- In primenumbers@yahoogroups.com, Ali Adams <alipoland@...> wrote:
      >
      > Thanks Walter for your help.
      >
      > What about this 309-digit number?
      >
      > 259336006801222696014182798990654577020329185417451253947966996203374778056958592994128470622708352120230964321433370545343196089458222530238878359827583627468563462233210398589090852507947007265127498998595582306765369537411175275870858814651979141558307396316154291312142703185675301452916463755740936626397
      >
      > How do I estimate how long it would take?
      >
      > And even if after a few days/weeks the result is prime,
      > what does it mean for the certificate to be validated by other programs?
      >
      > Is there still a chance that it is not 100% prime?
      >
      > Is the Elliptic Curve Method approximate?
      >
      > Sorry for so many questions and thanks in advance.
      >
      > Ali Adams
      > God > infinity
      > www.heliwave.com
      >
      >
      >
      >
      > ________________________________
      > From: Walter Nissen <nissen@...>
      > To: Ali Adams <alipoland@...>; primenumbers@yahoogroups.com
      > Cc: nissen@...
      > Sent: Fri, April 23, 2010 10:44:14 AM
      > Subject: Re: [PrimeNumbers] 62-digit IsPrime
      >
      > Greetings ,
      >
      > UBASIC , running the supplied primality test , APRT-CLE , on
      > an 11-year-old Pentium II at 266-MHz , produced this output :
      >
      > 11108195956680805165653650502135350605769090617575464617311659 is prime.
      > 0:00:04
      >
      > 4 seconds on the wall clock .
      > Without experience running these programs , it is difficult even
      > to ballpark run times .
      > As you become more experienced , it will become easier .
      > It would help if someone would collect experience from people
      > and organize it .
      > Even isolated cases would be somewhat helpful .
      > I don't know if there is a textbook that gives recent info .
      > Some web sites have some data , but how can they be located ?
      >
      > Cheers ,
      >
      > Walter
      > http://upforthecount.com
    • Norman Luhn
      Message 2 of 20 , Apr 24, 2010
        The next prime number is your number+334
        N=...626731


        PRIMO takes 6.25 s to create a certificate.

        Best

        Norman
      • Alan Eliasen
        Message 3 of 20 , Apr 25, 2010
          On 04/21/2010 09:01 PM, Ali Adams wrote:
          > Greetings all,
          >
          > Can someone please help with validating if the following 62-digit
          > number is a prime or not:
          >
          > 11108195956680805165653650502135350605769090617575464617311659
          >
          > I am aware of IsProbablePrime but need a definite primality test.

          (in another posting, asked)
          > What about this 309-digit number?

          >259336006801222696014182798990654577020329185417451253947966996203374778056958592994128470622708352120230964321433370545343196089458222530238878359827583627468563462233210398589090852507947007265127498998595582306765369537411175275870858814651979141558307396316154291312142703185675301452916463755740936626397

          (Which is not even probable-prime, as one single strong pseudoprime
          test shows.)

          I'm about to sermonize, so be prepared, my brothers and sisters.

          Whenever someone says they "need" a definite primality test, they, by
          their actions, usually prove that they don't understand that a
          primality-proving test, by itself, is not an infallible proof of
          primality (in the real world) and they rarely if ever do the correct
          thing to decrease the probability of error, giving them a result that's
          effectively no more certain than a working pseudoprime test (and, by
          probabilities and failure modes that I'll cite here, probably quite a
          bit weaker. And by "quite a bit" I mean 20 orders of magnitude, easy.)

          Note: Keep in mind that all of this analysis applies in the *real
          world*, not to some perfect, fictitious mathematical abstraction where
          programmers don't make errors and hardware never fails and you can
          ignore most algorithms and divide out big pesky constants because the
          ideal theoretical asymptotic performance is always what you get when you
          run programs.

          It is simple to implement and perform a strong pseudoprime test in
          which the probability that a randomly-chosen composite number is
          mistakenly stated to be prime is so low that it would never happen in
          your lifetime.[1] Not only that, you can trivially make the probability
          of this mistake so low that you could test trillions of numbers every
          second for the lifetime of the universe, and the probability of *any* of
           those tests failing is still astronomically low, and that's even taking
          the old ultra-conservative bound that as many as 1/4 of strong
          pseudoprime tests can fail. See citations in the link below for numbers
          about how conservative that estimate really is.
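           As a minimal sketch of that point (illustrative Python with a
           hypothetical round count, not the exact code anyone here ran):
           repeating a strong pseudoprime test with independently chosen
           random bases drives the error bound down geometrically; even
           under the worst-case 1/4-per-round bound, k rounds leave at
           most 4^-k, so 64 rounds is already below 10^-38.

               import random

               def miller_rabin(n: int, rounds: int = 64) -> bool:
                   """Strong pseudoprime test with randomly chosen bases."""
                   if n < 4:
                       return n in (2, 3)
                   if n % 2 == 0:
                       return False
                   d, s = n - 1, 0
                   while d % 2 == 0:                # write n - 1 = d * 2^s, d odd
                       d //= 2
                       s += 1
                   for _ in range(rounds):
                       a = random.randrange(2, n - 1)   # random base (see footnote [1])
                       x = pow(a, d, n)
                       if x in (1, n - 1):
                           continue
                       for _ in range(s - 1):
                           x = pow(x, 2, n)
                           if x == n - 1:
                               break
                       else:
                           return False             # witness found: n is composite
                   return True                      # error bound <= 4**-rounds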

          Many people have pointed out over the years that the probability that
          your *hardware* or *software* fails (say, due to a high-energy particle
          passing through your processor, or random thermal drift of electrons[2],
          or a bad transistor) during your primality test or primality "proof"
          becomes rapidly much, much higher than the probability that, say, a
          Rabin-Miller test incorrectly declares a composite to be prime.
          Depending on the size of the number, this is true *even if you only do
          one pass of a Rabin-Miller test*, and the probability of error in the
           algorithm is far less than the probability of hardware failure, possibly
           by hundreds, thousands, or even tens of thousands of *orders of magnitude*!

          It may also be far, far more probable that the number you meant to
          test or the reply (with certificate) was corrupted during communication
          with someone else. For example, TCP only has a 16-bit checksum, and can
          miss, say, two single-bit errors separated by 16 bits, for a possible
          undetected error rate of 2^-16, or 1/65536 in a noisy channel. (Other
          error-correcting algorithms apply to communication across the internet,
          and most channels have fairly good s/n ratios so the end-to-end
          probability of error is luckily usually lower than this.)
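           A small illustration of where that 2^-16 figure comes from (a
           Python sketch of the RFC 1071 ones'-complement checksum that TCP
           uses; the two packets below are made-up data): two bit errors
           that cancel in the 16-bit sum are completely invisible to the
           checksum.

               def internet_checksum(data: bytes) -> int:
                   """Ones'-complement sum of 16-bit words (RFC 1071 style)."""
                   if len(data) % 2:
                       data += b"\x00"              # pad to whole 16-bit words
                   total = 0
                   for i in range(0, len(data), 2):
                       total += (data[i] << 8) | data[i + 1]
                   while total >> 16:               # fold carries back in
                       total = (total & 0xFFFF) + (total >> 16)
                   return ~total & 0xFFFF

               original  = bytes([0x00, 0x01, 0x80, 0x01])   # two 16-bit words
               corrupted = bytes([0x80, 0x01, 0x00, 0x01])   # bit 15 flipped in each word

               assert original != corrupted
               assert internet_checksum(original) == internet_checksum(corrupted)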

          There's also the probability that Yahoo does something stupid with
          the long line and cuts off a digit or something. (Some would say the
          probability of Yahoo's software doing something stupid when adding their
          cruft to a posting is 1, but that's being mean. Their software isn't
          really that bad. The probability is actually 1-epsilon.)

          To see approximate probabilities of the Rabin-Miller test failing
          for certain number sizes, even with a *single* round of tests, see:

          http://primes.utm.edu/notes/prp_prob.html

          Those probabilities of failure, especially for large numbers, are
          mind-bogglingly small! Far smaller than the probability of some other
          source of error.

          Note that for the 309-digit number posted above, the probability of a
          strong pseudoprime test mistakenly returning "prime" for a composite
          number is less than 5.8*10^-29. Is that probability higher than other
           probabilities we've already listed? (And note that a strong
           pseudoprime test easily catches that this number is actually composite,
           no matter what base you choose. I'm sure that many here wouldn't
           hesitate to give a cash reward for anyone who can find *any* prime base
          smaller than the number for which a strong pseudoprime test fails. I
          will initially offer all the money in my wallet. And I haven't even
          tried to look for one. I'm that confident in the probabilities, or my
          wallet is at its usual sad level.)

          It's hard to approximate the probability that a particular piece of
          software or hardware or communication will fail, but you can never
          expect your primality "proof" to be any stronger than the most likely
          error in the chain.

          If you really *need* a prime number, you *must* then be sure to take
          the certificate produced by one program's prime number proof *and verify
          this certificate*, presumably with different software on a different
          machine and hardware architecture! (I feel I should repeat this
          sentence many times!) And then verify it again with another piece of
          software! If you *don't* do this, the probability that the primality
          "proof" is in error is approximately the probability of the thing most
          likely to go wrong in the entire chain. (Again, maybe cosmic rays,
          software bugs, Pentiums, race conditions in software, power glitches,
          probability of human error, etc.) The primality certificate gives you a
          way of verifying (without performing the whole proof again) that the
          proof is indeed valid for that number.

          However, there's nothing that prevents even the *verification* of the
          certificate from having a similar unlikely failure! (Or similar
          implementation bugs.)[3] Assuming the implementations and hardware are
          completely independent, the best you can do is to multiply the
          probability of failure in both systems. Thus, if the probability of
          failure in each system independently is 10^-32, then the probability
          that *both* fail completely independently but in a way that gives the
          same flawed result is at best 10^-64. Look at the URL cited above and
          compare that to the probability of a probable-prime test failing. Is it
          larger or smaller? If the probability is larger, the likelihood of your
          "proof" being better than a pseudoprime test is purely illusory.

          I'm not sure what the exact failure probability for a single
          instruction is in modern hardware, but it's almost certainly *not*
          better than 10^-32. (Multiply this by the number of instructions
          required to perform the calculation.)
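           As a back-of-the-envelope sketch (the per-instruction rate and the
           instruction count below are made-up placeholders, not measured
           values), the failure probability grows roughly linearly with the
           length of the computation:

               import math

               p = 1e-20      # hypothetical per-instruction failure probability
               N = 1e15       # hypothetical number of instructions in a long proof

               # P(at least one failure) = 1 - (1 - p)^N, computed stably for tiny p
               prob = -math.expm1(N * math.log1p(-p))
               print(prob)    # ~ 1e-5: dwarfs any 10^-64 algorithmic error bound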

          One of the best sources of information I've seen about actual failure
          rates in installed hardware was done by the SETI@Home team, (which had
          the world's largest distributed supercomputer at some points) which
          cited detected error rates in returned work units (each work unit was
          given to at least two people for validation, so errors could be
           detected.) I don't remember the exact numbers, but the actual failure
           rate appeared to be many, many orders of magnitude higher than 10^-32. I
          don't seem to be able to find these stats at the moment. Does anyone
          have the link?[4]

          If you haven't verified a primality certificate independently to
          decrease your probability of hardware or software error, (and there is
          *always* a non-zero probability of error on real physical hardware and
          where humans are involved, even in a primality "proof") you're likely
           not doing any better than a probabilistic prime test, since the
           unreliability of hardware, software, communications, and humans rapidly
           becomes the limiting factor.

           It's also much more likely that some random mischiefmaker will say,
          "sure it's prime" when it's not because they're annoyed with you for not
          doing these simple tests yourself, and for refusing to take advice on
          tools that will do the work for you. But if people didn't refuse to
          listen, I wouldn't get a chance to gratuitously sermonize.

          If you didn't verify the primality certificate as many ways as
          possible, you clearly can't claim you understand that you "need" a prime
          number, and don't really understand the probability of all the different
          ways that failure could have occurred. Do you even know that the
          primality certificate that was posted here was valid for the number you
           gave, or did someone maybe miss the last digit during
           cut-and-paste? Do you even know that the certificate was anything but a
           cat walking across a keyboard? The probability that *you*
           cut-and-pasted the number incorrectly is orders of magnitude higher than
           the probability of failure of a pseudoprime algorithm.[5]

          Homework: 1.) Can anyone else estimate probabilities of certain
          types of errors? What do you think are the probabilities of various
          failure modes in posting "prove this number prime for me" to a public
          group? Do any of those probabilities exceed that of failure of a
          probabilistic test? (Hint: The answer is yes.)

           2.) Even if you get a provable prime number handed down unimpeachably
          from the ghost of Fermat, what's the probability that when you're
          *using* this prime number that something goes wrong in those
          calculations? (Hint: Weakest link in the chain.)

          ================

          Footnotes:

          [1] I intentionally state "randomly-chosen" because there are ways
          to generate a very sparse set of numbers that can fool one of these
          tests if you know in advance the bases it's going to test, which is why
          most Rabin-Miller tests choose some bases randomly if there is the
          potential for adversarial assault. See, for example:
          François Arnault, Constructing Carmichael numbers which are strong
          pseudoprimes to several bases, Journal of Symbolic Computation, v.20
          n.2, p.151-161, Aug. 1995

          [2] There's a certain nonzero probability that an electron will go
          "the wrong way" and potentially tunnel "backwards" through a potential
          barrier. As voltages used in processors get lower, and as gates get
          smaller and the number of electrons required to switch a gate get fewer,
          this becomes increasingly more probable. (I don't have my "Feynman
          Lectures on Computation" at hand or I'd post some equations for this
          probability. Very highly recommended!
          http://tinyurl.com/9q4p8o )

          [3] Multiple programmers creating similar software bugs is more
          common than you might think. Sun's Java implementation had a famous bug
          in their calculation of the Jacobi symbol, which caused
          primality-testing routines to fail:

          http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4624738

          When I implemented a function to calculate the Jacobi symbol in my
          Frink programming language ( http://futureboy.us/frinkdocs/ ) I
          initially created a similar bug, because my algorithm didn't work for
          some negative numbers either. I didn't use Sun's algorithm, but I can
          see how they went wrong. The algorithm often cited on the web, in
          several number theory books, etc. needs preconditioning to handle
          negative numbers correctly. Even more insanely, when evaluating
          negative numbers, Java uses different sign conventions for the %
          (modulus) operator (used for "int" values) and the BigInteger.mod(n)
          function! It's easy to see how Sun even confuses their own programmers.
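           For reference, here is a sketch of a Jacobi-symbol routine of the
           kind being discussed (illustrative Python, not Sun's or Frink's
           actual code).  The very first step reduces a modulo n, and that is
           exactly where a sign-convention mix-up like the one described above
           would bite; Python's % always returns a value in [0, n) for
           positive n, which sidesteps the trap.

               def jacobi(a: int, n: int) -> int:
                   """Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity."""
                   if n <= 0 or n % 2 == 0:
                       raise ValueError("n must be a positive odd integer")
                   a %= n                           # handles negative a correctly here
                   result = 1
                   while a != 0:
                       while a % 2 == 0:            # pull out factors of 2
                           a //= 2
                           if n % 8 in (3, 5):      # (2/n) = -1 when n = 3, 5 (mod 8)
                               result = -result
                       a, n = n, a                  # reciprocity step
                       if a % 4 == 3 and n % 4 == 3:
                           result = -result
                       a %= n
                   return result if n == 1 else 0   # gcd(a, n) > 1 gives 0

               # e.g. jacobi(-1, 7) == -1, since 7 = 3 (mod 4)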

          This bug in Sun's implementation caused some significant pain. Their
          primality testing was originally a probabilistic Rabin-Miller test (with
          probabilities that can easily be set that it won't return a wrong result
          during the lifetime of the universe, and many users set them *way*
          higher than that to be very safe) yet that bug caused failures much more
           often, and introduced a method of failure that Rabin-Miller *can't*
           produce (Rabin-Miller can never declare a prime to be composite, but
           it can, very rarely, declare a composite to be probably prime), which is
          why this failure mode was particularly unexpected to a lot of people,
          and caused numbers above a certain size to fail mysteriously and
          sporadically.

          Note: When I was informed of this bug in my implementation of my
          JacobiSymbol[x] function, (it didn't affect my primality testing, which
          has always worked properly,) I was ashamed that I wasn't able to release
          a fix until I got home from work later that day. Sun, on the other
          hand, took *3 years* to release a fix. (This bug was present in Java
          versions 1.3 through 1.4.2.)

          [4] Of course, some of their "failures" were due to intentional
          attempts of people to corrupt their results, so exact probabilities of
          failure are impossible to come by. There were interesting stories,
          though, of *all* or *almost all* of the work units returned by several
          well-meaning participants returning incorrect results. The SETI team
          contacted these people and were able to verify that their computers'
          floating-point units were indeed failing. (Sometimes subtly enough to
          not make everything crash, but enough to make all extended calculations
          wrong.) This is interesting and probably indicates that the frequency
          of subtly or explicitly broken processors in the world is far less than
          10^-9, setting a bound for what reliability we might expect for
          primality testing.

          [5] Don't laugh. This happened on this very list. A few years ago,
          an intrepid researcher stated that they had been running a computer for
          3+ years to find the factors of one of the RSA factoring challenge
          numbers. Awesome persistence! And then one day it beeped! (I can't
          imagine the excitement!) He announced to this list that he had
          submitted his solution to RSA and was awaiting confirmation of the
          factors, and he wasn't sure if the numbers were factors. I asked him
          the obvious question, "did you multiply the two factors together and did
          they come up with the original number?" The next day, with a leaden
          heart, he responded to me and indicated that he had accidentally cut off
          a digit when pasting in the original RSA number to his factoring
          program. It still hurts me to think about it.

          --
          Alan Eliasen
          eliasen@...
          http://futureboy.us/
        • Jens Kruse Andersen
          Message 4 of 20 , Apr 25, 2010
            Alan Eliasen wrote:
            > It is simple to implement and perform a strong pseudoprime test in
            > which the probability that a randomly-chosen composite number is
            > mistakenly stated to be prime is so low that it would never happen in
            > your lifetime.[1] Not only that, you can trivially make the probability
            > of this mistake so low that you could test trillions of numbers every
            > second for the lifetime of the universe, and the probability of *any* of
            > those tests failing are still astronomically low, and that's even taking
            > the old ultra-conservative bound that as many as 1/4 of strong
            > pseudoprime tests can fail. See citations in the link below for numbers
            > about how conservative that estimate really is.

            The below is largely my own speculation not based on careful research.

            Your numbers apparently assume there are no hardware, software,
            operator or other errors when the pseudoprime tests are performed.
            But they can have the same type of errors as primality proofs.
            In practice it seems more important to use independent reliable software
            and hardware for the pseudoprime tests than to run a large number of them.
             And if you have pseudoprime tests saying both composite and probable
             prime for a large number, then be extremely careful before claiming
             to have proved compositeness.

             Fast pseudoprime test programs may implement more complicated
             algorithms with greater risk of programming error, and may be written
             in assembler, which tends to produce more errors for human programmers
             (partly because of increased source code size), so it may be better to
             run a smaller number of pseudoprime tests with a slow, simple program
             than a large number with a fast program.

            If you run the same type of pseudoprime test with multiple supposedly
            independent programs then some things risk causing the same error,
            for example:
            An error in a text read by multiple programmers.
            A systematic hardware error affecting operations likely to be performed
            by different programs implementing the same algorithm.
            A tricky part of the implementation which may fool multiple programmers.
            An error in a software library or other routine used by multiple programs.

            So it also seems a good idea in practice to use different types of
            pseudoprime tests instead of many uses of the one with the best
            theoretical accuracy.
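             As a sketch of that idea (assuming the third-party Python packages
             sympy and gmpy2 are installed; PARI/gp, UBASIC or PRIMO on another
             machine would serve the same purpose), ask two unrelated
             implementations that use different algorithms and only trust
             agreement:

                 from sympy import isprime       # pure-Python, strong + Lucas style test
                 import gmpy2                    # wraps GMP's independent C implementation

                 def cross_checked(n: int) -> str:
                     a = bool(isprime(n))
                     b = bool(gmpy2.is_prime(n))
                     if a != b:
                         return "disagreement: suspect a bug, a typo or failing hardware"
                     return "probably prime" if a else "composite"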

             If the same person runs many tests then a copy-and-paste error
             or lack of honesty, knowledge, intelligence, carefulness or other
             human factors (some of which may affect you without your
             knowledge) can make all tests invalid, so also get multiple people
             to run tests.

             In brief, if you want to increase the "real" chance that a number is
             prime then I think you should bet more on independence of as
             much as possible (software, hardware, algorithms, people, source
             of the number) than on increasing the number of pseudoprime tests.
             And aim for not just two independent instances of something but as
             many as possible or practical.

            However, in practice the negative consequences of an alleged prime
            being composite are usually so small that I don't think people should
            bother with all my advice (I don't myself).
             Most large primes are just announced, with no practical consequence
             (except maybe to your reputation or that of the software used) if
             somebody else proves them composite.
             It will often matter more to the audience of an announced prime
             that you can say "trusted program X proved primality" than that you
             argue about the microscopic risk that something went wrong in all
             the pseudoprime tests.

            --
            Jens Kruse Andersen
          • Ali Adams
            Message 5 of 20 , Apr 25, 2010
               Thank you for the reply, Alan; advice taken well.
               Here in China I don't have access to the PRIMO website, so the one I am using now is http://www.alpertron.com.ar/ECM.HTM, which says the number is composite but has not produced any prime factors yet.

               I am interested in the number either being prime, in which case I will check whether its digit sum is also prime (an additive prime number), or being composite, in which case the maths stop and the interpretation of the prime factors starts.

               Thank you again for your advice. I am a software engineer, so I know about the weakest link :)

               Salam,
              Ali
               God > infinity
            • Alan Eliasen
              Message 6 of 20 , Apr 25, 2010
                On 04/25/2010 07:14 AM, Jens Kruse Andersen wrote:
                > Your numbers apparently assume there are no hardware, software,
                > operator or other errors when the pseudoprime tests are performed.
                > But they can have the same type of errors as primality proofs.

                I didn't mean to imply in any way that pseudoprime tests were
                 magically free of the same kinds of hardware/software/human error. Of
                course they're not, but perhaps I didn't state this clearly enough. The
                limiting factor for reliability is always going to be the weakest link
                in the chain (possibly the person cutting-and-pasting in the number.)

                > Fast pseudoprime test programs may implement more complicated
                > algorithms with greater risk of programming error, and may be written
                > in assembler that tends to produce more errors for human programmers
                > (partly because of increassed source code size), so it may be better to
                > run a smaller number of pseudoprime tests with a slow simple program
                > than a large number with a fast program.

                A Rabin-Miller algorithm can be written so simply that testing
                against the trivial, non-optimized version is likely always beneficial.
                I definitely agree that the probability of undetected programming
                errors increases with source size and complexity of algorithms, though.
                We've all seen how hard it is to get even the multiplication of two
                integers always correct using something like an FFT algorithm. From the
                notes of the GMP project, many of these errors are even due to broken
                compilers!
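                 A sketch of that "test against the trivial version" idea
                 (illustrative Python; the reference routine below is a
                 deliberately naive schoolbook multiply, used only to
                 cross-check the interpreter's optimized big-integer
                 multiplication on random inputs):

                     import random

                     def schoolbook_multiply(a: int, b: int) -> int:
                         """Digit-by-digit (Horner) multiply using only small-by-large steps."""
                         neg = (a < 0) != (b < 0)
                         a, b = abs(a), abs(b)
                         result = 0
                         for digit in str(b):             # most significant digit first
                             result = result * 10 + int(digit) * a
                         return -result if neg else result

                     for _ in range(200):
                         x = random.getrandbits(2048)
                         y = random.getrandbits(2048)
                         assert schoolbook_multiply(x, y) == x * y   # compare with fast multiply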

                > If you run the same type of pseudoprime test with multiple supposedly
                > independent programs then some things risk causing the same error,
                > for example:
                > An error in a text read by multiple programmers.
                > A systematic hardware error affecting operations likely to be performed
                > by different programs implementing the same algorithm.
                > A tricky part of the implementation which may fool multiple programmers.
                > An error in a software library or other routine used by multiple programs.

                Don't forget the probability of cutting-and-pasting the same wrong
                number into all of those programs! Or receiving the wrong number due to
                transmission errors.

                The concept of independent verification reducing probability of error
                rests on the idea that you don't have some sort of systematic error. If
                you *do* have systematic error, no amount of independent validation will
                improve your answers, and that needs to be understood.

                > In brief, if you want to increase the "real" chance that a number is
                > prime then I think you should bet more on independence of as
                > much as possible (software, hardware, algorithms, people, source
                > of the number), than on increasing the number of pseudoprime tests.
                > And aim for not only two independent of something but as many as
                > possible or practical.

                Yes, my point was that reduction of error probability below the most
                likely mode of failure could only be achieved by truly independent
                tests. This means you have to look very closely indeed at eliminating
                potential systematic errors. There are many of these failure modes and
                some of their probabilities are very high. I cited human error or TCP
                transmission error or mail/web client/server bugs as being
                high-probability, systematic sources of failure.

                 The probability of these systematic errors probably also increases if
                 the number isn't of a simple form, e.g. 2^12345-1, but rather an
                 incompressible number like the 309-digit number cited, which has more
                 probability of corruption in transmission, errors in wrapping, or
                 undetected cut-and-paste errors, etc.

                --
                Alan Eliasen
                eliasen@...
                http://futureboy.us/
              • Ali Adams
                Message 7 of 20 , Apr 26, 2010
                  Alan you will be happy to see this article on the BBC, UK :)
                  http://news.bbc.co.uk/2/hi/technology/8637845.stm
                  Web security attack 'makes silicon chips more reliable'
                  ---------------
                  An attack on a widely used web security system could soon help make silicon chips more powerful and reliable.
                  Many websites use cryptographic systems to scramble key data, such as credit card numbers, when customers pay.
                  Scientists have found that by varying the voltage to key parts of a computer's processor, the ability to keep this data secret is compromised.
                  The researchers also discovered that a method that helps chips beat the attack could also make them more reliable.
                  Secure sites
                  Many modern security systems, such as the ones websites use to encrypt the credit card numbers of their customers, are based around a system known as public key cryptography.
                  This uses two keys, one public and one private, to scramble data. One of the most widely used implementations of this is known as RSA authentication.
                  "If data is locked with a public key, it can only be unlocked with the corresponding private key," said Professor Todd Austin, from the electrical engineering and computer science department at the University of Michigan who helped conduct the research.
                  "Within 10 years a chip will have transistor failures every day" - Professor Valeria Bertacco
                  "It's the kind of algorithm you use when you go to a website and you see the little padlock in the lower right hand corner to indicate a secure connection," he said.
                  The keys take the form of large numbers more than 1,000 bits long. Security is ensured because trying to guess a private key by trying all possible combinations would take longer than the age of the universe, using current computer technology.
                  Professor Austin, working with Andrea Pellegrini and Professor Valeria Bertacco, found a much quicker route to guessing the keys by varying the voltage to a processor.
                  "You need to be able to control the voltage to the power source to the device," said Professor Bertacco. "By putting the voltage just below where it should be means the device makes computational mistakes - it suffers temporary transistor failure."
                  The voltage was varied when a target machine was communicating with another machine via the web and the data flying between the two was encrypted using the public key system.
                  "It makes one mistake every now and again," she said. "But we need just a few mistakes."
                  During their test, the three researchers collected 8800 corrupted signatures in 10 hours and then analysed them using software that could call on 81 separate machines to boost its number crunching power.
                  The end result of the research was an attack method that could extract all the parts of a 1024 bit key in about 100 hours.
                  'Error prone'
                  Initially, said Professor Bertacco, the work will lead to improvements in the way the public key security system works to make it less susceptible to such an attack. Future versions of the system will be "salted" with fake values to confuse any attempt to reconstruct a private key.
                  "It's part of the ongoing process of hardening RSA," said Professor Austin.
                  The implications of the research do not stop at security. It is also helping to produce error correction systems that spot when transistors fail and ensure that data is not corrupted as a result.
                  Professor Bertacco said the research would be useful when chips are made of even smaller components than those in use today. The widely-known Moore's Law predicts that the number of transistors on a given size of silicon wafer doubles roughly every 18 months.
                  Often that doubling is due to the transistors on the chip getting smaller. The transistors on Intel's most up to date desktop computers are about 32 nanometres in size.
                  Intel has said that it expects to soon start producing chips with components 22 and 16nm wide. A nanometre is a billionth of a metre.
                  However, as components get smaller they can get less reliable and need error checking and correction software to help cope with any errors that get introduced.
                  "Our mainstream research in this area is to make microchips operate correctly even in the face of transistor failure," she said. "Within 10 years a chip will have transistor failures every day. As transistors get smaller so they are more prone to failure."
                  ---------------

                   
                  Ali
                  God > infinity




                  ________________________________
                  From: Alan Eliasen <eliasen@...>
                  To: Ali Adams <alipoland@...>; Prime Number <primenumbers@yahoogroups.com>
                  Sent: Sun, April 25, 2010 3:59:17 PM
                  Subject: [PrimeNumbers] Strong pseudosermon (was 62-digit IsPrime)

                   
                  On 04/21/2010 09:01 PM, Ali Adams wrote:
                  > Greetings all,
                  >
                  > Can someone please help with validating if the following 62-digit
                  > number is a prime or not:
                  >
                  > 1110819595668080516 5653650502135350 6057690906175754 64617311659
                  >
                  > I am aware of IsProbablePrime but need a definite primality test.

                  (in another posting, asked)
                  > What about this 309-digit number?

                  >259336006801222696 0141827989906545 7702032918541745 1253947966996203 3747780569585929 9412847062270835 2120230964321433 3705453431960894 5822253023887835 9827583627468563 4622332103985890 9085250794700726 5127498998595582 3067653695374111 7527587085881465 1979141558307396 3161542913121427 0318567530145291 6463755740936626 397

                  (Which is not even probable-prime, as one single strong pseudoprime
                  test shows.)

                  I'm about to sermonize, so be prepared, my brothers and sisters.

                  Whenever someone says they "need" a definite primality test, they, by
                  their actions, usually prove that they don't understand that a
                  primality-proving test, by itself, is not an infallible proof of
                  primality (in the real world) and they rarely if ever do the correct
                  thing to decrease the probability of error, giving them a result that's
                  effectively no more certain than a working pseudoprime test (and, by
                  probabilities and failure modes that I'll cite here, probably quite a
                  bit weaker. And by "quite a bit" I mean 20 orders of magnitude, easy.)

                  Note: Keep in mind that all of this analysis applies in the *real
                  world*, not to some perfect, fictitious mathematical abstraction where
                  programmers don't make errors and hardware never fails and you can
                  ignore most algorithms and divide out big pesky constants because the
                  ideal theoretical asymptotic performance is always what you get when you
                  run programs.

                  It is simple to implement and perform a strong pseudoprime test in
                  which the probability that a randomly-chosen composite number is
                  mistakenly stated to be prime is so low that it would never happen in
                  your lifetime.[1] Not only that, you can trivially make the probability
                  of this mistake so low that you could test trillions of numbers every
                  second for the lifetime of the universe, and the probability of *any* of
                  those tests failing are still astronomically low, and that's even taking
                  the old ultra-conservative bound that as many as 1/4 of strong
                  pseudoprime tests can fail. See citations in the link below for numbers
                  about how conservative that estimate really is.
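
                  For the curious, here is a minimal sketch of such a strong pseudoprime
                  (Miller-Rabin) test in Python, with randomly chosen bases. It is an
                  illustration of the idea only -- the function name and the default round
                  count are my own choices, not what any particular package uses:

                      import random

                      def is_strong_probable_prime(n, rounds=40):
                          # Miller-Rabin strong pseudoprime test with randomly chosen bases.
                          # False means a witness proved n composite; True means n was a
                          # strong probable prime to every base that was tried.
                          if n < 2:
                              return False
                          for p in (2, 3, 5, 7, 11, 13):     # quick trial division
                              if n % p == 0:
                                  return n == p
                          # write n - 1 = d * 2^s with d odd
                          d, s = n - 1, 0
                          while d % 2 == 0:
                              d //= 2
                              s += 1
                          for _ in range(rounds):
                              a = random.randrange(2, n - 1)
                              x = pow(a, d, n)
                              if x in (1, n - 1):
                                  continue
                              for _ in range(s - 1):
                                  x = pow(x, 2, n)
                                  if x == n - 1:
                                      break
                              else:
                                  return False               # a witnesses that n is composite
                          return True

                  With 40 random rounds, even the old worst-case 1/4-per-round bound alone
                  puts the failure probability below 10^-24, before any of the sharper
                  bounds cited below are taken into account.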

                  Many people have pointed out over the years that the probability that
                  your *hardware* or *software* fails (say, due to a high-energy particle
                  passing through your processor, or random thermal drift of electrons[2],
                  or a bad transistor) during your primality test or primality "proof"
                  becomes rapidly much, much higher than the probability that, say, a
                  Rabin-Miller test incorrectly declares a composite to be prime.
                  Depending on the size of the number, this is true *even if you only do
                  one pass of a Rabin-Miller test*, and the probability of error in the
                  algorithm is far less than the probability of hardware failure, possibly
                  by hundreds, thousands, or even tens of thousands of *orders of magnitude*!

                  It may also be far, far more probable that the number you meant to
                  test or the reply (with certificate) was corrupted during communication
                  with someone else. For example, TCP only has a 16-bit checksum, and can
                  miss, say, two single-bit errors separated by 16 bits, for a possible
                  undetected error rate of 2^-16, or 1/65536 in a noisy channel. (Other
                  error-correcting algorithms apply to communication across the internet,
                  and most channels have fairly good s/n ratios so the end-to-end
                  probability of error is luckily usually lower than this.)

                  There's also the probability that Yahoo does something stupid with
                  the long line and cuts off a digit or something. (Some would say the
                  probability of Yahoo's software doing something stupid when adding their
                  cruft to a posting is 1, but that's being mean. Their software isn't
                  really that bad. The probability is actually 1-epsilon.)

                  To see approximate probabilities of the Rabin-Miller test failing
                  for certain number sizes, even with a *single* round of tests, see:

                  http://primes.utm.edu/notes/prp_prob.html

                  Those probabilities of failure, especially for large numbers, are
                  mind-bogglingly small! Far smaller than the probability of some other
                  source of error.

                  Note that for the 309-digit number posted above, the probability of a
                  strong pseudoprime test mistakenly returning "prime" for a composite
                  number is less than 5.8*10^-29. Is that probability higher than other
                  probabilities we've already listed? (And note that a strong
                  pseudoprime test easily catches that this number is actually composite,
                  no matter what base you choose. I'm sure that many here wouldn't
                  hesitate to give a cash reward to anyone who can find *any* prime base
                  smaller than the number to which it is a strong pseudoprime. I
                  will initially offer all the money in my wallet. And I haven't even
                  tried to look for one. I'm that confident in the probabilities, or my
                  wallet is at its usual sad level.)
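
                  As an illustration of how cheap such a check is, here is a sketch of a
                  single-base strong test in Python. The function name is mine, and the
                  "expected" empty result is just that -- an expectation, which anyone
                  chasing the wallet money is welcome to test for themselves:

                      def strong_test_base(n, a):
                          # One strong pseudoprime test of odd n > 2 to base a.
                          # True  -> n is a strong probable prime to base a
                          # False -> a is a witness that n is composite
                          d, s = n - 1, 0
                          while d % 2 == 0:
                              d //= 2
                              s += 1
                          x = pow(a, d, n)
                          if x in (1, n - 1):
                              return True
                          for _ in range(s - 1):
                              x = pow(x, 2, n)
                              if x == n - 1:
                                  return True
                          return False

                      # n = int("<paste the 309-digit number from above -- carefully!>")
                      # [a for a in range(2, 1000) if strong_test_base(n, a)]
                      # expected result for a random composite of this size: []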

                  It's hard to approximate the probability that a particular piece of
                  software or hardware or communication will fail, but you can never
                  expect your primality "proof" to be any stronger than the most likely
                  error in the chain.

                  If you really *need* a prime number, you *must* then be sure to take
                  the certificate produced by one program's prime number proof *and verify
                  this certificate*, presumably with different software on a different
                  machine and hardware architecture! (I feel I should repeat this
                  sentence many times!) And then verify it again with another piece of
                  software! If you *don't* do this, the probability that the primality
                  "proof" is in error is approximately the probability of the thing most
                  likely to go wrong in the entire chain. (Again, maybe cosmic rays,
                  software bugs, Pentiums, race conditions in software, power glitches,
                  probability of human error, etc.) The primality certificate gives you a
                  way of verifying (without performing the whole proof again) that the
                  proof is indeed valid for that number.
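
                  Short of re-verifying a full certificate, even just asking two unrelated
                  implementations the same question buys some independence against software
                  bugs. A sketch in Python, assuming the third-party sympy and gmpy2
                  packages are installed (the helper name is mine):

                      import sympy      # pure-Python probable-prime test
                      import gmpy2      # wraps GMP's mpz_probab_prime_p

                      def cross_check(n):
                          # Ask two independently written code bases the same question.
                          a = bool(sympy.isprime(n))
                          b = bool(gmpy2.is_prime(n))
                          if a != b:
                              raise RuntimeError("implementations disagree -- investigate!")
                          return a

                  Of course both calls still run on the same processor and the same
                  interpreter, so this guards against independent software bugs only; it is
                  no substitute for checking a certificate on genuinely different hardware.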

                  However, there's nothing that prevents even the *verification* of the
                  certificate from having a similar unlikely failure! (Or similar
                  implementation bugs.)[3] Assuming the implementations and hardware are
                  completely independent, the best you can do is to multiply the
                  probability of failure in both systems. Thus, if the probability of
                  failure in each system independently is 10^-32, then the probability
                  that *both* fail completely independently but in a way that gives the
                  same flawed result is at best 10^-64. Look at the URL cited above and
                  compare that to the probability of a probable-prime test failing. Is it
                  larger or smaller? If the probability is larger, the likelihood of your
                  "proof" being better than a pseudoprime test is purely illusory.

                  I'm not sure what the exact failure probability for a single
                  instruction is in modern hardware, but it's almost certainly *not*
                  better than 10^-32. (Multiply this by the number of instructions
                  required to perform the calculation.)

                  One of the best sources of information I've seen about actual failure
                  rates in installed hardware came from the SETI@Home team (which had
                  the world's largest distributed supercomputer at some points), which
                  cited detected error rates in returned work units (each work unit was
                  given to at least two people for validation, so errors could be
                  detected). I don't remember the exact numbers, but the actual failure
                  rate appeared to be many, many orders of magnitude higher than 10^-32. I
                  don't seem to be able to find these stats at the moment. Does anyone
                  have the link?[4]

                  If you haven't verified a primality certificate independently to
                  decrease your probability of hardware or software error (and there is
                  *always* a non-zero probability of error on real physical hardware and
                  where humans are involved, even in a primality "proof"), you're likely
                  not doing any better than a probabilistic prime test, in which the
                  unreliability of hardware, software, communications, and humans rapidly
                  becomes the limiting factor.

                  It's also much more likely that some random mischiefmaker will say,
                  "sure, it's prime" when it's not, because they're annoyed with you for not
                  doing these simple tests yourself, and for refusing to take advice on
                  tools that will do the work for you. But if people didn't refuse to
                  listen, I wouldn't get a chance to gratuitously sermonize.

                  If you didn't verify the primality certificate in as many ways as
                  possible, you clearly can't claim you understand that you "need" a prime
                  number, and don't really understand the probability of all the different
                  ways that failure could have occurred. Do you even know that the
                  primality certificate that was posted here was valid for the number you
                  gave, or did someone maybe miss the last digit during
                  cut-and-paste? Do you even know that the certificate was anything but a
                  cat walking across a keyboard? The probability that *you*
                  cut-and-pasted the number incorrectly is orders of magnitude higher than
                  the probability of failure of a pseudoprime algorithm.[5]

                  Homework: 1.) Can anyone else estimate probabilities of certain
                  types of errors? What do you think are the probabilities of various
                  failure modes in posting "prove this number prime for me" to a public
                  group? Do any of those probabilities exceed that of failure of a
                  probabilistic test? (Hint: The answer is yes.)

                  2.) Even if you get a provable prime number handed down unimpeachably
                  from the ghost of Fermat, what's the probability that when you're
                  *using* this prime number that something goes wrong in those
                  calculations? (Hint: Weakest link in the chain.)

                  ================

                  Footnotes:

                  [1] I intentionally state "randomly-chosen" because there are ways
                  to generate a very sparse set of numbers that can fool one of these
                  tests if you know in advance the bases it's going to test, which is why
                  most Rabin-Miller tests choose some bases randomly if there is the
                  potential for adversarial assault. See, for example:
                  François Arnault, Constructing Carmichael numbers which are strong
                  pseudoprimes to several bases, Journal of Symbolic Computation, v.20
                  n.2, p.151-161, Aug. 1995

                  [2] There's a certain nonzero probability that an electron will go
                  "the wrong way" and potentially tunnel "backwards" through a potential
                  barrier. As voltages used in processors get lower, and as gates get
                  smaller and the number of electrons required to switch a gate gets
                  smaller, this becomes increasingly probable. (I don't have my "Feynman
                  Lectures on Computation" at hand or I'd post some equations for this
                  probability. Very highly recommended!
                  http://tinyurl.com/9q4p8o )

                  [3] Multiple programmers creating similar software bugs is more
                  common than you might think. Sun's Java implementation had a famous bug
                  in their calculation of the Jacobi symbol, which caused
                  primality-testing routines to fail:

                  http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4624738

                  When I implemented a function to calculate the Jacobi symbol in my
                  Frink programming language ( http://futureboy.us/frinkdocs/ ), I
                  initially created a similar bug, because my algorithm didn't work for
                  some negative numbers either. I didn't use Sun's algorithm, but I can
                  see how they went wrong. The algorithm often cited on the web, in
                  several number theory books, etc. needs preconditioning to handle
                  negative numbers correctly. Even more insanely, when evaluating
                  negative numbers, Java uses different sign conventions for the %
                  (modulus) operator (used for "int" values) and the BigInteger.mod(n)
                  function! It's easy to see how Sun confuses even its own programmers.
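
                  For the curious, here is a sketch of the usual binary algorithm in Python
                  (the function name is mine). Python's % operator already returns a
                  nonnegative residue for a positive modulus, so the whole "preconditioning"
                  issue collapses to the single a %= n line -- exactly the convention that
                  Java's % operator for ints does not follow:

                      def jacobi(a, n):
                          # Jacobi symbol (a/n) for odd n > 0, via the binary algorithm.
                          if n <= 0 or n % 2 == 0:
                              raise ValueError("n must be a positive odd integer")
                          a %= n          # preconditioning: maps negative a into [0, n)
                          result = 1
                          while a != 0:
                              while a % 2 == 0:
                                  a //= 2
                                  if n % 8 in (3, 5):
                                      result = -result
                              a, n = n, a  # quadratic reciprocity swap
                              if a % 4 == 3 and n % 4 == 3:
                                  result = -result
                              a %= n
                          return result if n == 1 else 0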

                  This bug in Sun's implementation caused some significant pain. Their
                  primality testing was originally a probabilistic Rabin-Miller test (whose
                  failure probability can easily be made so small that it won't return a
                  wrong result during the lifetime of the universe, and many users set the
                  margin *way* beyond even that to be very safe), yet that bug caused
                  failures much more often, and introduced a mode of failure that
                  Rabin-Miller *can't* produce (Rabin-Miller can't ever declare a prime to
                  be composite, but it can, very rarely, declare a composite to be probably
                  prime), which is
                  why this failure mode was particularly unexpected to a lot of people,
                  and caused numbers above a certain size to fail mysteriously and
                  sporadically.

                  Note: When I was informed of this bug in my implementation of my
                  JacobiSymbol[x] function (it didn't affect my primality testing, which
                  has always worked properly), I was ashamed that I wasn't able to release
                  a fix until I got home from work later that day. Sun, on the other
                  hand, took *3 years* to release a fix. (This bug was present in Java
                  versions 1.3 through 1.4.2.)

                  [4] Of course, some of their "failures" were due to intentional
                  attempts by people to corrupt their results, so exact probabilities of
                  failure are impossible to come by. There were interesting stories,
                  though, of *all* or *almost all* of the work units returned by several
                  well-meaning participants being incorrect. The SETI team
                  contacted these people and were able to verify that their computers'
                  floating-point units were indeed failing. (Sometimes subtly enough to
                  not make everything crash, but enough to make all extended calculations
                  wrong.) This is interesting and probably indicates that the frequency
                  of subtly or explicitly broken processors in the world is far greater
                  than 10^-9, setting a bound on the reliability we can expect from any
                  single-machine primality test.

                  [5] Don't laugh. This happened on this very list. A few years ago,
                  an intrepid researcher stated that he had been running a computer for
                  3+ years to find the factors of one of the RSA factoring challenge
                  numbers. Awesome persistence! And then one day it beeped! (I can't
                  imagine the excitement!) He announced to this list that he had
                  submitted his solution to RSA and was awaiting confirmation, but he
                  wasn't sure if the numbers really were factors. I asked him
                  the obvious question, "did you multiply the two factors together and did
                  they come up with the original number?" The next day, with a leaden
                  heart, he responded to me and indicated that he had accidentally cut off
                  a digit when pasting in the original RSA number to his factoring
                  program. It still hurts me to think about it.

                  --
                  Alan Eliasen
                  eliasen@mindspring.com
                  http://futureboy.us/






                • Jack
                  Message 8 of 20 , Apr 26, 2010
                    Alan,

                    Thank you for your thoughtful and considerate reply to the original poster, and to the group in general. I often have questions that I'd like to post here but do not because I fear being publicly flamed for my ignorance.


                    --- In primenumbers@yahoogroups.com, Alan Eliasen <eliasen@...> wrote:
                    >
                    > On 04/21/2010 09:01 PM, Ali Adams wrote:
                    > > Greetings all,
                    > >
                    > > Can someone please help with validating if the following 62-digit
                    > > number is a prime or not:
                    > >
                    > > 11108195956680805165653650502135350605769090617575464617311659
                    > >
                    > > I am aware of IsProbablePrime but need a definite primality test.
                    >
                    > (in another posting, asked)
                    > > What about this 309-digit number?
                    >
                    >


                    <snip>
                  • djbroadhurst
                    Message 9 of 20 , Apr 30, 2010
                      --- In primenumbers@yahoogroups.com,
                      "Jens Kruse Andersen" <jens.k.a@...> wrote:

                      > It will often be more important to an audience of an announced prime
                      > that you say "Trusted program X proved primality" than you argue
                      > about the microscopic risk that something went wrong in all the
                      > pseudoprime tests.

                      Jens, as per usual, put his finger on the core of this debate.

                      In primality proving, we subject ourselves to two disciplines:

                      1) do not proclaim a proof if you cannot understand the proof method

                      2) take reasonable precautions that your claim has not been
                      vitiated by egregious soft/hard-ware errors.

                      Alan's points are well put. Yet he has missed the essential
                      gravamen of Jens' dictum

                      > in practice the negative consequences of an alleged prime
                      > being composite are usually so small

                      No-one suffers if a cosmic ray hits your computer during a test.
                      No-one suffers if George's FFTs screw up during that test.
                      We are trying to be as careful and honest as humanly possible.
                      The pursuit of excellence is a greater cause than its achievement.

                      David