Mar 4, 2013

I have been working on some notions for compressing lists of primes. If anybody has references on what is already out there on this matter, please share them; if what I am doing turns out to be superior, it might combine well with this fact about pre-computed primes (or those references would). Wikipedia links to a prime collection that is small in comparison to what is possible: http://en.wikipedia.org/wiki/Prime_number

JGM

P.S. Compression problems don't (generally) have a single correct answer, since a trade-off of time versus space is inevitable. Finding the 6th term (where the 7th looks certainly impossible) of a sequence I have submitted to the OEIS would benefit from pre-computation through 17-digit primes; good space compression, with time savings beyond any implementation of a NEXTPRIME or ISPSEUDOPRIME function (that is not itself derived directly from such a list), would help solve my problem and others.
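For concreteness, one simple baseline that prime-compression schemes are usually measured against is delta-encoding: store half the gap between consecutive odd primes (every such gap is even), one byte each, which suffices up to roughly 3x10^11, where the first gap exceeding 510 appears. The sketch below is my own illustration of that baseline, not the scheme alluded to above:

```python
# Illustrative baseline: delta-encode a prime list as half-gaps,
# one byte per prime. Names and scheme are an example, not the
# poster's compression method.

def primes_below(n):
    """Plain sieve of Eratosthenes, returning all primes < n."""
    sieve = bytearray([1]) * n
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, n, p)))
    return [i for i in range(n) if sieve[i]]

def compress(primes):
    # Gaps between odd primes are even, so store gap/2 per byte.
    odd = [p for p in primes if p > 2]
    return bytes((b - a) // 2 for a, b in zip(odd, odd[1:]))

def decompress(halfgaps):
    out, p = [2, 3], 3
    for h in halfgaps:
        p += 2 * h
        out.append(p)
    return out

ps = primes_below(1000)
enc = compress(ps)
assert decompress(enc) == ps
print(len(ps), "primes stored in", len(enc), "bytes")
```

One byte per prime compares against 8 bytes for a raw uint64 list; real schemes (bit arrays over a wheel, arithmetic coding of gaps) do better still.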

--- On Mon, 3/4/13, Jack Brennen <jfb@...> wrote:

From: Jack Brennen <jfb@...>
Subject: Re: [PrimeNumbers] Calculate first 3.2 trillion primes in 39 hours on standard PC
To: "James Firth" <jimothy_bear@...>
Cc: "primenumbers@yahoogroups.com" <primenumbers@yahoogroups.com>
Date: Monday, March 4, 2013, 2:39 PM

Take a look here: http://code.google.com/p/primesieve/

Looks like you can extrapolate that implementation to get the first 3.2 trillion primes in about 5-6 hours on an Intel i7-920, assuming sufficient RAM available.

I'm not sure RAM is that much of a limiting factor though; I've written fast sieves that sieved in what I called "slices", and could easily do sieves of this size with much less RAM than today's machines commonly have available.

Generally, as you chase speed, you end up tuning the algorithm parameters to minimize cache misses -- cache misses are the biggest impediment to speed.
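A minimal sketch of the "slices" idea, i.e. a segmented sieve whose working array is sized to stay cache-resident, assuming nothing about Jack's actual code:

```python
# Sketch of a segmented ("sliced") sieve: count primes below `limit`
# using a fixed-size segment so working memory stays small and
# cache-friendly. An illustration of the idea, not primesieve's code.

def count_primes(limit, segment_size=32768):
    # Base primes up to sqrt(limit), via a small plain sieve.
    root = int(limit ** 0.5) + 1
    base = bytearray([1]) * (root + 1)
    base[0:2] = b"\x00\x00"
    for p in range(2, int(root ** 0.5) + 1):
        if base[p]:
            base[p * p::p] = bytearray(len(range(p * p, root + 1, p)))
    base_primes = [p for p in range(2, root + 1) if base[p]]

    count = 0
    for low in range(2, limit, segment_size):
        high = min(low + segment_size, limit)
        seg = bytearray([1]) * (high - low)
        for p in base_primes:
            if p * p >= high:
                break
            # First multiple of p in [low, high), never below p*p.
            start = max(p * p, ((low + p - 1) // p) * p)
            seg[start - low::p] = bytearray(len(range(start, high, p)))
        count += sum(seg)
    return count

print(count_primes(10**6))  # -> 78498
```

Peak memory is one segment plus the base primes, independent of `limit`; shrinking `segment_size` to fit L1/L2 cache is exactly the tuning Jack describes.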

On 3/4/2013 2:20 PM, James Firth wrote:

> Hi,
>
> Forgive the intrusion, as I'm a physicist with an interest in efficient computing rather than a mathematician who has studied number theory.
>
> In order to benchmark system performance I've been playing with an algorithm I created to calculate primes, and I was surprised by the results; I wondered how it compares to other methods and implementations.
>
> I calculated the first 3.2 trillion primes in 38 hours and 32 minutes on a standard Intel home PC (4 cores, 4 GHz, Linux, mean memory usage of ~300 MB). In this pass I merely calculated pi(x) rather than output the data, as I don't have 25 terabytes of disk space to hand. My results for pi(x) agree with other published data for the number of primes up to 1x10^14.
>
> A little research after the fact shows my method is a multi-threaded variant of a sieve of Eratosthenes with an additional sieve to filter multiples of the first 8 integers coprime with 2.

>

> A bit of tinkering shows the algorithm scales reasonably well; calculation time increases by around 15% as x increases by 10-fold but memory usage creaps up to keep the algorithm efficient at high values of x. Also there is a uint64 limit as I haven't used a big number library.

>

> Since this is my first stab I wondered what I could realistically aim for in terms of cycle time for finding primes with realtively low values of x (<1.8x10^19)?

>

> fdj

>

>
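The "additional sieve to filter multiples" that James describes sounds like wheel factorization: skip every number sharing a factor with a small primorial, so only residues coprime to it are ever examined. A deliberately simple mod-30 sketch follows; this is my reading of the description, not the actual implementation (note a mod-30 wheel leaves exactly 8 residues, which may be the "8" James mentions):

```python
# Sketch of wheel factorization mod 30 = 2*3*5: above 5, only the
# 8 residues coprime to 30 can be prime, so a sieve need only visit
# 8 of every 30 numbers. An illustration, not the poster's algorithm.

WHEEL = [r for r in range(1, 30) if r % 2 and r % 3 and r % 5]
# WHEEL == [1, 7, 11, 13, 17, 19, 23, 29]

def wheel_candidates(limit):
    """Yield 2, 3, 5, then every n coprime to 30 with 1 < n < limit."""
    for p in (2, 3, 5):
        if p < limit:
            yield p
    for base in range(0, limit, 30):
        for r in WHEEL:
            n = base + r
            if 1 < n < limit:
                yield n

def primes_below(limit):
    """Sieve the wheel candidates; a set of composites keeps it clear.
    A real implementation would use a bit array indexed by wheel slot."""
    is_comp = set()
    primes = []
    for n in wheel_candidates(limit):
        if n in is_comp:
            continue
        primes.append(n)
        for m in range(n * n, limit, n):
            is_comp.add(m)
    return primes

print(len(primes_below(10**5)))  # -> 9592
```

The payoff is that 22 of every 30 integers are never touched at all, which stacks with segmentation and multi-threading.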
