
Re: [PrimeNumbers] Boland's Distribution of Primes

  • Dick Boland
    Message 1 of 14, Jun 3, 2001
      Hello,

      When g=2762, g^2=7628644,
      My distribution function,
      pi(3*g/2)-pi(g/2) ~ pi(g^2)-pi((g-1)^2)
      predicts 350 primes vs. actual 390 primes,
      error= -40 or -10.2564%

      I conjecture that g=2762 is the highest g for which the
      deviation error exceeds 10%,
      and I hope that someone more skilled than I on this
      list can search for a counterexample.

      So far I've tested all g up to g=5250, and I had previously
      tested all prime g up to ~17,000.

      Since I continue to suspect that the percentage error
      of this function grows progressively smaller in
      amplitude, I began testing a range starting at
      g=25000, and I further conjecture that the highest
      g with percentage error > 8% will have occurred
      prior to g=25000.

      It was a theoretical scenario that brought me to test this
      function in this neighborhood. I believe my theoretical
      argument will make it clear why this phenomenon must
      exist within the distribution of prime numbers.

      > where the constant c = 3/2*ln(3/2)-1/2*ln(1/2) = 0.95477...

      Be aware that my first formulation of
      pi(3*g/2)-pi(g/2) ~ pi(g^2)-pi((g-1)^2)
      may not be the most exact center for this
      "order 1 order 2 codependency"
      within the distribution of primes, but it is close enough
      that the percentage error goes to zero with increasing g.

      I conjecture that one could consider

      pi(3*g/2)-pi(g/2) ~ pi((g-1)^2)-pi((g-2)^2) or
      pi(3*g/2)-pi(g/2) ~ pi((g+1)^2)-pi(g^2), for example

      and these functions will also yield a percentage error that
      goes to zero, maybe slower, maybe faster, somewhere there may
      be an exact center (error drops fastest).

      Here's as far as I got from g=25,000. The
      highest percentage error found in the tests below is < 4%.
      The sign of the error continues to change frequently,
      and the percentage error continues to average
      lower and lower.

      Can someone please verify some of these numbers for me?

      Thanks,

      -Dick Boland

      Data for g>25000
      g     g^2        PRED.  ACT.  ERROR  %DEVIATION
      ______________________________________________________
      25000 625000000 2476 2431 45 1.8510900863842040312
      25001 625050001 2477 2475 2 0.080808080808080808
      25002 625100004 2477 2421 56 2.3130937629078893018
      25003 625150009 2477 2472 5 0.2022653721682847896
      25004 625200016 2477 2465 12 0.4868154158215010141
      25005 625250025 2478 2465 13 0.527383367139959432
      25006 625300036 2478 2439 39 1.5990159901599015989
      25007 625350049 2478 2470 8 0.3238866396761133602
      25008 625400064 2478 2390 88 3.68200836820083682
      25009 625450081 2478 2503 -25 -0.9988014382740711146
      25010 625500100 2478 2489 -11 -0.4419445560466050622
      25011 625550121 2478 2480 -2 -0.0806451612903225806
      25012 625600144 2479 2466 13 0.5271695052716950526
      25013 625650169 2479 2497 -18 -0.7208650380456547856
      25014 625700196 2479 2483 -4 -0.1610954490535642368
      25015 625750225 2479 2473 6 0.2426202992317023857
      25016 625800256 2479 2468 11 0.4457050243111831442
      25017 625850289 2479 2428 51 2.1004942339373970345
      25018 625900324 2479 2428 51 2.1004942339373970345
      25019 625950361 2479 2467 12 0.4864207539521686258
      25020 626000400 2480 2466 14 0.5677210056772100567
      25021 626050441 2480 2470 10 0.4048582995951417003
      25022 626100484 2480 2453 27 1.1006930289441500203
      25023 626150529 2480 2487 -7 -0.2814636107760353839
      25024 626200576 2479 2493 -14 -0.5615724027276373846
      25025 626250625 2480 2429 51 2.0996294771510909839
      25026 626300676 2480 2465 15 0.6085192697768762677
      25027 626350729 2480 2492 -12 -0.4815409309791332263
      25028 626400784 2480 2400 80 3.3333333333333333333
      25029 626450841 2480 2516 -36 -1.4308426073131955484
      25030 626500900 2480 2512 -32 -1.2738853503184713375
      25031 626550961 2480 2520 -40 -1.5873015873015873015
      25032 626601024 2481 2490 -9 -0.3614457831325301204
      25033 626651089 2482 2471 11 0.4451639012545528126
      25034 626701156 2482 2486 -4 -0.1609010458567980691
      25035 626751225 2482 2489 -7 -0.2812374447569304941
      25036 626801296 2481 2426 55 2.2671063478977741137
      25037 626851369 2481 2510 -29 -1.1553784860557768923
      25038 626901444 2481 2448 33 1.3480392156862745097
      25039 626951521 2481 2456 25 1.0179153094462540716
      25040 627001600 2481 2469 12 0.4860267314702308626
      25041 627051681 2482 2486 -4 -0.1609010458567980691
      25042 627101764 2482 2472 10 0.4045307443365695792
      25043 627151849 2482 2477 5 0.2018570851836899474
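
      For anyone who wants to verify rows like these, here is a minimal sieve-of-Eratosthenes sketch (an illustrative program of my own, not the one that produced the table; for small g the counts can shift by one depending on whether pi(x) is taken to include x itself):

```python
# Minimal sieve-of-Eratosthenes check of
#   pi(3*g/2) - pi(g/2)  ~  pi(g^2) - pi((g-1)^2)
# (an illustrative sketch, not the program that produced the table above).

def sieve_flags(limit):
    """Bytearray s with s[n] = 1 iff n is prime, for 0 <= n <= limit."""
    s = bytearray([1]) * (limit + 1)
    s[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if s[p]:
            s[p * p :: p] = bytearray((limit - p * p) // p + 1)
    return s

def compare(g):
    """Return (predicted, actual): primes in (g/2, 3g/2] vs ((g-1)^2, g^2]."""
    s = sieve_flags(g * g)
    pred = sum(s[g // 2 + 1 : 3 * g // 2 + 1])
    act = sum(s[(g - 1) ** 2 + 1 : g * g + 1])
    return pred, act

if __name__ == "__main__":
    # The message above reports 350 predicted vs. 390 actual for g=2762;
    # exact counts can shift by one with the pi(x) boundary convention.
    print(compare(2762))
```

      For g=2762 this should reproduce the 350-vs.-390 comparison from the start of this message, up to that boundary convention.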

      __________________________________________________
      Do You Yahoo!?
      Get personalized email addresses from Yahoo! Mail - only $35
      a year! http://personal.mail.yahoo.com/
    • Dick Boland
      Message 2 of 14, Jun 4, 2001
        Hello,

        Has anyone on this list checked out my numbers?
        Anyone want to know the theory?
        I need help writing the paper(s),
        can anyone help me?
        Nothing worth writing about here? - I need to
        understand why not before wasting my time, or yours.

        Thank you

        -Dick Boland


        --- Dick Boland <richard042@...> wrote:
        > Hello,
        >
        > When g=2762, g^2=7628644,
        > My distribution function,
        > pi(3*g/2)-pi(g/2) ~ pi(g^2)-pi((g-1)^2)
        > predicts 350 primes vs. actual 390 primes,
        > error= -40 or -10.2564%
        >
        > [snip: rest of Message 1 and its data table]


      • Phil Carmody
        Message 3 of 14, Jun 4, 2001
          On Mon, 04 June 2001, Dick Boland wrote:
          >
          > Hello,
          >
          > Has anyone on this list checked out my numbers?

          We probably all trust you to have got the numerics correct, so 'checked' may not be the right word. They certainly look believable.

          > Anyone want to know the theory?
          > I need help writing the paper(s),
          > can anyone help me?
          > Nothing worth writing about here? - I need to
          > understand why not before wasting my time, or yours.

          You need more data, from far higher ranges, before such a prediction makes much sense. When g is small the real deviation may be smaller than the noise.

          If you look at www.wolfram.com (the Mathematica website), then I know in the 'Mathematica Book' section, there's an implementation note:
          <<<
          Prime and PrimePi use sparse caching and sieving. For large n, the Lagarias-Miller-Odlyzko algorithm for PrimePi is
          used, based on asymptotic estimates of the density of primes, and is inverted to give Prime.
          >>>

          Using those names you could try to find the algorithm in question, and using that find some far higher ranges to prove (in the original sense, meaning 'test') your hypothesis.

          You might be able to find an online calculator, or a Java applet which does the calculation for you. ('Prime Pi' is the standard name for the function, so it is probably a good search string.)

          Good luck,
          Phil

          Mathematics should not have to involve martyrdom;
          Support Eric Weisstein, see http://mathworld.wolfram.com
        • d.broadhurst@open.ac.uk
          Message 4 of 14, Jun 4, 2001
            Phil Carmody wrote:
            > You might be able to find an online calculator
            http://www.math.Princeton.EDU/~arbooker/nthprime.html
          • Ferenc Adorjan
            Message 5 of 14, Jun 5, 2001
              Hi,

              I checked the conjecture by using the "nthprime"
              page which David Broadhurst proposed and found
              for
              g=10^6, that
              pi(g^2)-pi((g-1)^2)= 72470 while
              pi(3*g/2)-pi(g/2) = 72617
              with a relative difference of 3.4e-3.
              Thus, it seems to work pretty well. An exact
              proof would be most interesting, especially if
              it provided error bounds.

              Ferenc
              2,3,5,7,17,23,47,103,107,137,283,313,347,373,...
            • d.broadhurst@open.ac.uk
              Message 6 of 14, Jun 5, 2001
                pi(x) ~ x/ln(x)*(1+1/ln(x)+O(1/ln(x)^2))
                lhs = pi(g^2)-pi((g-1)^2)
                rhs = pi(3*g/2)-pi(g/2)
                rhs/lhs = 1 + k/log(g) + O(1/ln(g)^2)
                k = 1 - log(27/4)/2 = 0.04522874755778077232...

                Hence rhs > lhs, at large g, because the
                base of Naperian logarithms exceeds sqrt(27/4).
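
                A numeric cross-check (my addition, not part of David's post): this k is exactly 1 - c, where c = 3/2*ln(3/2)-1/2*ln(1/2) = 0.95477... is the constant quoted earlier in the thread, since 3/2*ln(3/2)-1/2*ln(1/2) = (3/2)*ln(3)-ln(2) = ln(27/4)/2:

```python
import math

# Cross-check of the constants (my addition): David's k equals 1 - c, where
# c = 3/2*ln(3/2) - 1/2*ln(1/2) is the constant quoted earlier in the thread,
# because 3/2*ln(3/2) - 1/2*ln(1/2) = (3/2)*ln(3) - ln(2) = ln(27/4)/2.

c = 1.5 * math.log(1.5) - 0.5 * math.log(0.5)
k = 1 - math.log(27 / 4) / 2

print(c)                     # ~0.95477...
print(k)                     # ~0.04522874755778...
print(math.e ** 2 > 27 / 4)  # True: e exceeds sqrt(27/4), hence k > 0
```
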
              • d.broadhurst@open.ac.uk
                Message 7 of 14, Jun 6, 2001
                  Let

                  L(g) = pi(g^2) - pi((g-1)^2)
                  R(g) = pi(3*g/2) - pi(g/2)
                  D(g) = R(g) - L(g)

                  where pi(g) is the number of primes not exceeding g.

                  Dick Boland conjectured that D(g) changes
                  sign an infinite number of times.

                  On the contrary, I claimed that

                  k = lim_{g to infty} log(g)^2*D(g)/g = 1 - log(27/4)/2 > 0.

                  If you replace pi(x) by Riemann's estimator R(x)
                  (Ribenboim p224) you will find a single sign change
                  around g=10^4. Superimposed on this upward trend
                  are sqrt fluctuations from the complex zeros of zeta.
                  Dick was misled by the fact that these can easily buck
                  the trend for his small g's, around 2.5*10^4.

                  But for how much longer can this go on?

                  Already it's getting difficult for g around 10^6,
                  where a simple sieve of Eratosthenes gave

                  g R(g) L(g) D(g)
                  1000000 72617 72450 167 [Pace Ferenc]
                  999999 72617 72569 48
                  999998 72617 72340 277
                  999997 72617 72573 44
                  999996 72617 72546 71
                  999995 72617 72381 236
                  999994 72617 72542 75
                  999993 72617 72425 192
                  999992 72617 72548 69
                  999991 72617 72180 437
                  999990 72617 72195 422
                  999989 72617 72561 56
                  999988 72617 72434 183
                  999987 72617 72703 -86 [Made it!]
                  999986 72617 72099 518
                  999985 72617 72162 455
                  999984 72616 72378 238
                  999983 72616 72317 299
                  999982 72616 72511 105
                  999981 72616 72371 245
                  999980 72616 72579 37
                  999979 72616 72311 305
                  999978 72616 72352 264
                  999977 72616 72548 68
                  999976 72616 72645 -29 [And again!]

                  These *roughly* agree with a mean k*g/log(g)^2 = 237
                  and a deviation that is of order sqrt(g/log(g))= 269.

                  Puzzle: Is there a g>10^7 for which D(g)<0 ?

                  Here it won't be so easy to
                  buck the Riemann trend, since
                  (k*g/log(g)^2)/sqrt(g/log(g)) > 1741/788 > 2.2
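
                  To see this smooth trend without sieving, one can replace pi by the logarithmic integral li, which tracks Riemann's estimator closely. This is an illustrative sketch of that substitution (my sketch, not David's program), using the standard series li(x) = gamma + ln(ln x) + sum_{n>=1} (ln x)^n/(n*n!):

```python
import math

# Smooth-model sketch (my addition, not David's program): replace pi(x) by the
# logarithmic integral li(x), which tracks Riemann's estimator closely,
# to see the trend in D(g) = R(g) - L(g) without the zeta-zero fluctuations.

def li(x):
    """Logarithmic integral via the series
    li(x) = gamma + ln(ln x) + sum_{n>=1} (ln x)^n / (n * n!)."""
    gamma = 0.5772156649015329
    logx = math.log(x)
    total = gamma + math.log(logx)
    term = 1.0
    for n in range(1, 300):
        term *= logx / n          # term = (ln x)^n / n!
        total += term / n
        if term / n < 1e-16 * total:
            break
    return total

def smooth_D(g):
    """D(g) = [li(3g/2) - li(g/2)] - [li(g^2) - li((g-1)^2)]."""
    return (li(1.5 * g) - li(0.5 * g)) - (li(float(g) ** 2) - li((g - 1.0) ** 2))

if __name__ == "__main__":
    for g in (10 ** 4, 10 ** 5, 10 ** 6):
        print(g, round(smooth_D(g), 1))
```

                  With the fluctuations stripped out, the printed D(g) values are positive and growing, consistent with the upward trend and with the claimed mean of roughly k*g/log(g)^2.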
                • Dick Boland
                  Message 8 of 14, Jun 6, 2001
                    Hello,

                    > Prime and PrimePi use sparse caching and sieving. For large n, the Lagarias-Miller-Odlyzko
                    > algorithm for PrimePi is
                    > used, based on asymptotic estimates of the density of primes, and is inverted to give Prime.

                    Thanks Phil,
                    Interesting stuff resulting from this search (besides the algorithm);
                    I will be doing some research to try to put it into the context of my theory.
                    I haven't gotten my hands on the algorithm in a form that I can use,
                    and it would be good to get some higher data, but it may not be necessary.
                    The nth-prime page is good for some spot checking, as Ferenc showed,
                    and still no counterexamples :)

                    As for Dave's proposition
                    > pi(x) ~ x/ln(x)*(1+1/ln(x)+O(1/ln(x)^2))
                    > lhs = pi(g^2)-pi((g-1)^2)
                    > rhs = pi(3*g/2)-pi(g/2)
                    > rhs/lhs = 1 + k/log(g) + O(1/ln(g)^2)
                    > k = 1 - log(27/4)/2 = 0.04522874755778077232...
                    > Hence rhs > lhs, at large g, because the
                    > base of Naperian logarithms exceeds sqrt(27/4).

                    I'm not sure that the above proves anything,
                    or if it simply reflects what current
                    wisdom on the subject would have us believe.
                    If it's a hard mathematical proof, it would seem to disprove
                    the conjecture that the sign of the error in my function
                    changes infinitely often, but not necessarily to disprove the
                    percentage error going to zero.
                    I need to understand it better, so I have some homework.

                    I should be able to put something together to share after the weekend.

                    Thank you,

                    -Dick Boland



                  • d.broadhurst@open.ac.uk
                    Message 9 of 14, Jun 6, 2001
                      Dick Boland wrote:

                      > it would seem to disprove
                      > the conjecture that the sign of the error in my function
                      > changes infinitely often, but not necessarily disprove the
                      > percentage error going to zero.

                      Yes Dick, that is what I claim: constant sign
                      of difference at sufficiently large g, because you
                      missed a term whose fractional contribution is
                      k/log(g), which of course goes to zero, relative to
                      each side, but *dominates* the difference,
                      when the (roughly!) order 1/sqrt(g) fluctuations die away.

                      Your less interesting conjecture, that lhs/rhs
                      goes to unity seems eminently plausible:
                      both sides are g/log(g) + sub-leading.
                      No one has *ever* suggested that fluctuations
                      remain of finite relative size!

                      My emphasis is on the sub-leading k/log(g) which becomes
                      (I claim) leading in the relative *difference* R/L-1.
                      It is masked by fluctuations for g^2 < 10^12,
                      so you ain't learned nuffin yet :-)
                      because you stayed at g < 3*10^4.
                      I believe that k/log(g) dominates fluctuations in R/L-1
                      *eventually*.

                      You can use the Nth-prime page, for g^2 in [10^12,3*10^13],
                      like Ferenc, or write an Erato sieve, like me.
                      Nothing smaller counts, it seems to me.
                      Wobble masks Riemann for tiny log(g)!

                      But you are in good company, Andrew Odlyzko
                      got very worried at g=O(10^22), a few years
                      ago, when statistical correlations were not
                      in accord with the *asymptotic* predictions of
                      the Riemann hypothesis. Then some of Mike Berry's
                      colleagues in Bristol observed that they could
                      mock up Andrew's data with random N by N matrices
                      (Gaussian unitary ensemble, to be technical)
                      where N is something like log(g)/pi.
                      So they simulated Odlyzko in tiny amounts of
                      time (compared to finding the 10^22'nd zero of zeta)
                      with very modest random matrices (16 by 16 as I recall)
                      and then easily upped their matrix size to see the onset
                      of the expected Riemannian behaviour.

                      Log is a cruel function,
                      for people interested in asymptotics...
                      Alain Connes told me that it gave him
                      the creeps that 10^22 is such a *small* number
                      when you take its log (and divide by pi as I recall).
                      You find the 10^22'nd zero of zeta and still
                      are far away from the prediction of Riemann!

                      On the other hand, log is good news for prime provers,
                      with cheap Proths coming at merely log^3 prices.

                      Best

                      David
                    • d.broadhurst@open.ac.uk
                      Message 10 of 14, Jun 6, 2001
                        PS:

                        > I believe that k/log(g) dominates fluctuations in R/L-1
                        > *eventually*.

                        This is a belief, not a proof!

                        The subtlety is that your "between squares"
                        L(g) = pi(g^2)-pi((g-1)^2) is very intriguing.

                        If
                        pi(g^2) ~ R(g^2)*(1 +/- O(1/sqrt(g^2)))
                        then naively we get
                        L(g) ~ g/log(g)*(1 +/- O(1)) [whoops!]

                        I don't believe that nightmare, since the
                        ends of the range [(g-1)^2,g^2] are
                        relatively close together, and hence
                        tightly correlated.

                        But you have clearly taken us into
                        novel (to us) territory, thanks.

                        David
                      • bhelmes_1
                        Message 11 of 14, Jan 23, 2010
                          A beautiful day

                          Results for Primes mod some numbers up to 10^14 are ready
                          http://beablue.selfip.net/devalco/table_of_primes.htm
                          I checked the results for P mod 4 = 3 and P mod 4 = 1
                          against the existing table.

                          I used a sieve of Eratosthenes with a heap construction for collecting the primes and a helper array in the first-level cache for sieving the primes.

                          Program under
                          http://beablue.selfip.net/devalco/sieb_des_eratosthenes.htm
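
                          As a rough illustration of the segmented idea (a toy sketch of my own in Python, not Bernhard's optimized program), one can sieve fixed-size windows, with the window buffer standing in for his cache-resident helper array, and tally primes by residue mod 4:

```python
# Toy segmented sieve of Eratosthenes (my sketch, not Bernhard's program):
# sieve [2, limit] in fixed-size windows, tallying primes by residue mod 4.
# The window buffer plays the role of a cache-resident helper array.

def count_primes_mod4(limit, segment=32768):
    # Base primes up to sqrt(limit) by a plain sieve.
    root = int(limit ** 0.5) + 1
    base = bytearray([1]) * (root + 1)
    base[0:2] = b"\x00\x00"
    for p in range(2, int(root ** 0.5) + 1):
        if base[p]:
            base[p * p :: p] = bytearray((root - p * p) // p + 1)
    base_primes = [p for p in range(2, root + 1) if base[p]]

    counts = {1: 0, 3: 0}   # odd primes by residue mod 4 (prime 2 excluded)
    lo = 2
    while lo <= limit:
        hi = min(lo + segment - 1, limit)
        seg = bytearray([1]) * (hi - lo + 1)
        for p in base_primes:
            start = max(p * p, (lo + p - 1) // p * p)
            if start > hi:
                continue
            seg[start - lo :: p] = bytearray((hi - start) // p + 1)
        for i, is_prime in enumerate(seg):
            n = lo + i
            if is_prime and n % 2 == 1:
                counts[n % 4] += 1
        lo = hi + 1
    return counts

if __name__ == "__main__":
    print(count_primes_mod4(10 ** 6))
```
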

                          Runtime of the program was 7*14 days; I distributed the work across 7 nodes.

                          Some improvements could be made using assembler.

                          I would like to expand the tables with the distribution of Primes up to 10^15 or 10^16.

                          Is there a chance to run the computation on a grid or cluster?
                          A connection between the nodes is not necessary.

                          Besides, the results could be useful for physics or biology research.

                          Nice Greetings from the primes
                          Bernhard

                          http://www.devalco.de
                        • Andrey Kulsha
                          Message 12 of 14, Jan 23, 2010
                            > I would like to expand the tables with the distribution of Primes up to
                            > 10^15 or 10^16.

                            http://listserv.nodak.edu/cgi-bin/wa.exe?A2=ind1001&L=nmbrthry&T=0&X=14ADB57FE44944E3D4&P=327

                            Best regards,

                            Andrey