Estimating log (B^pi(B))
Hello there,
In the course of my investigation on the complexity of p-1 factoring, I found
out I had made a mistake when estimating the least common multiple of all
numbers up to B. It turns out that an upper bound for this is B^pi(B). Since
the cost of a modular exponentiation (in modular multiplies) is asymptotic to
the logarithm of the exponent, the cost for modular exponentiation here is
log(B^pi(B)) = pi(B) * log B = log B * B/log B = B. And this is actually an
excellent estimate:
?
a=0;B=400000000;logB=log(B);forprime(p=2,B,logp=log(p);a=a+floor(logB/logp)*logp);print(a);
400002778.0577266417502590976
This is as far as PARI/GP will go to the best of my knowledge (with the
forprime command at least). Still, it's a very small error.
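For readers without PARI/GP handy, the same check scales down to a short Python sketch (function names are mine; B is kept small so it runs quickly):

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [p for p in range(2, n + 1) if sieve[p]]

def log_lcm(B):
    """log(lcm(1..B)): each prime p <= B contributes floor(log B/log p)
    copies of log p, exactly as in the PARI/GP one-liner above."""
    logB = math.log(B)
    return sum(math.floor(logB / math.log(p)) * math.log(p)
               for p in primes_up_to(B))

for B in (10 ** 4, 10 ** 5, 10 ** 6):
    print(B, log_lcm(B))  # the second column stays close to B
```

Agreement is already within a fraction of a percent at B = 10^6, consistent with the B = 4*10^8 run quoted above.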
In other good news, B*(log(p-1)/log B)^(log(p-1)/log B), which is the cost
of a modular exponentiation to the exponent lcm(x <= B) divided by the
probability that the number p-1 is B-smooth, has a local minimum, which I
couldn't approximate algebraically, but B is equal to ~1200 for p-1 = 10^10,
or ~8100 for 10^15.
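The minimization is easy to reproduce numerically; a Python sketch (the cost function simply transcribes the expression above with u = log(p-1)/log B; the grid bounds and function names are my choice):

```python
import math

def cost(B, p_minus_1):
    """B * u^u with u = log(p-1)/log B: stage-1 work divided by the
    rough probability that p-1 is B-smooth."""
    u = math.log(p_minus_1) / math.log(B)
    return B * u ** u

def best_B(p_minus_1):
    """Coarse grid search for the minimizing B, on a log scale."""
    grid = [10 ** (e / 100.0) for e in range(200, 700)]  # B from 10^2 to 10^7
    return min(grid, key=lambda B: cost(B, p_minus_1))

print(best_B(1e10))  # lands near the ~1200 quoted above
print(best_B(1e15))  # lands near the ~8100 quoted above
```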
Now I need to calculate the cost of doing stage 2. Any good estimates to the
value of the summation of 1/p, for p prime, over a finite interval?
Thanks,
Décio
For your "stage 2"
I'm thinking:
By the mean-value theorem, with f(x)=ln(x), there exists a point c in the closed interval [a,b] such that 1/c = (ln(b)-ln(a))/(b-a); if the interval is [1,n], then (ln(n)-ln(1))/(n-1) = 1/c =>
=> 1/n <= ln(n)/(n-1) <= 1 => (n-1)/n <= ln(n) <= n-1
Also
The product of the primes up to the nth prime is greater than 2^n (n>1):
(p_n)# > 2^n => PRODUCT_n(1/p_k) < 2^(-n)
then, applying logarithms to both sides, we have that the sum of the logarithms of the inverses of the primes up to the nth is less than -n*ln(2):
SUM(ln(1/p_k)) < -n*ln 2
Now we use the first inequality:
if ln(m) = ln(1/p) then (m-1)/m = (p-1)/p = (1-1/p) <= ln(1/p) <= 1/(p-1)
so, SUM(1-1/p) <= SUM(ln(1/p)) < -n*ln2 =>
=> SUM(1) - SUM(1/p) < -n*ln2 => -SUM(1/p) < -(n*ln2 + SUM(1)) =>
=> SUM(1/p) > n*ln2 + SUM(1), where n is counting the primes, not the natural numbers; in fact there are n terms, and so
SUM(1/p(n)) > n*(ln2 + 1) ≈ 1.693·n (p(n) is the nth prime)
I've surely committed a mistake somewhere, but I can't find it... try to catch it and maybe then this reasoning will be of some utility (although when n increases, 2^n is a worse lower limit, but you can use as base a greater prime you know has already been used in the product).
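Taking up that invitation: a quick numeric check (a Python sketch of mine, not part of the original message) shows the final inequality fails, consistent with the suspicion above; the sum of prime reciprocals grows only like log log p_n:

```python
def first_primes(n):
    """First n primes by trial division (fine at this scale)."""
    primes = []
    cand = 2
    while len(primes) < n:
        if all(cand % p for p in primes):
            primes.append(cand)
        cand += 1
    return primes

n = 25
s = sum(1.0 / p for p in first_primes(n))
# s is about 1.8, far below 1.693*n = 42.3, so SUM(1/p) > 1.693n cannot hold
print(s, 1.693 * n)
```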
Jose Brox
PS: I apologize in case I have said something stupid. Justifications: novelty, Discrete Signal Processing exam tomorrow :S

Original Message:
From: Décio Luiz Gazzoni Filho
To: primenumbers@yahoogroups.com
Sent: Thursday, January 30, 2003 9:30 PM
Subject: [PrimeNumbers] Estimating log (B^pi(B))
> Any good estimates to the value of the summation of 1/p,
> for p prime, over a finite interval?

sum(p=p1,p2,1/p) for p2>>p1>>1 is given by PNT as
approximately log(log(p2)/log(p1)),
since integral dx/(x*log(x)) = log(log(x)).
If the lower limit is small and the upper large,
do a finite sum from p1 to p_big and then add the integral
from p_big to p2.
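That estimate is easy to check numerically; a Python sketch (function name mine):

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [p for p in range(2, n + 1) if sieve[p]]

p1, p2 = 1000, 10 ** 6
exact = sum(1.0 / p for p in primes_up_to(p2) if p >= p1)
approx = math.log(math.log(p2) / math.log(p1))
print(exact, approx)  # both close to log(2) = 0.693...
```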
David
On Thursday 30 January 2003 21:29, David Broadhurst <d.broadhurst@...> wrote:
> > Any good estimates to the value of the summation of 1/p,
> > for p prime, over a finite interval?
>
> sum(p=p1,p2,1/p) for p2>>p1>>1 is given by PNT as
> approximately log(log(p2)/log(p1))
> since integral dx/(x*log(x))=log(log(x)).
> If the lower limit is small and the upper large,
> do a finite sum from p1 to p_big and then add the integral
> from p_big to p2.
> David

I plugged some values and I realized there was something wrong. Not with your estimate, of course, but rather with the approach I tried.
I guess in order to estimate the probability that p-1 is divisible by one
single prime in the interval (B, B'] (say these primes are p_k, p_(k+1), ...,
p_j), then I need to calculate prob(p_k | p-1)*prob(p_(k+1) !|
p-1)*...*prob(p_j !| p-1) + prob(p_(k+1) | p-1)*prob(p_k !| p-1)*prob(p_(k+2)
!| p-1)*...*prob(p_j !| p-1) + ... + prob(p_j | p-1)*prob(p_k !|
p-1)*...*prob(p_(j-1) !| p-1), where "!|" means "does not divide". This
doesn't seem particularly easy to evaluate.
I'd rather approximate it as a continuous variable and find a suitable
probability density function to estimate the probability that p-1 is
divisible by a prime in that interval. Anyone care to help? I'm sure this is
very intimately connected to the PNT.
Thanks
Décio
Décio Luiz Gazzoni Filho <decio@...> wrote:
> I guess in order to estimate the probability that p-1 is divisible by one
> single prime in the interval (B, B'] (say these primes are p_k, p_(k+1), ...,
> p_j), then I need to calculate prob(p_k | p-1)*prob(p_(k+1) !|
> p-1)*...*prob(p_j !| p-1) + prob(p_(k+1) | p-1)*prob(p_k !| p-1)*prob(p_(k+2)
> !| p-1)*...*prob(p_j !| p-1) + ... + prob(p_j | p-1)*prob(p_k !|
> p-1)*...*prob(p_(j-1) !| p-1). This doesn't seem particularly easy to
> evaluate.
>
> I'd rather approximate it as a continuous variable and find a suitable
> probability density function to estimate the probability that p-1 is
> divisible by a prime in that interval. Anyone care to help? I'm sure this is
> very intimately connected to the PNT.

Dickman's theorem (Riesel, Thm. 5.5):
The probability of a number N, chosen at random, having a prime factor
between N^a and N^(a+a.d) is approximately = d, independent of the magnitude
of a, if d is small.
So set N^a = B, N^(a.d)=B'/B.
=> d=log(B'/B)/log(B)
(logs in any base, obviously, as you're taking a ratio of 2 of them)
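As a sketch in Python (function name is mine; note that N cancels out of the formula, so only B and B' enter):

```python
import math

def single_prime_factor_prob(B, B_prime):
    """d = log(B'/B)/log(B): heuristic probability that a random N
    (with B = N^a, a smallish) has a prime factor in (B, B']."""
    return math.log(B_prime / B) / math.log(B)

# the 200-digit / 40-to-44-digit case discussed below: a = 0.2, d = 0.1
print(single_prime_factor_prob(1e40, 1e44))
```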
Phil
> The probability of a number N, chosen at random,
> having a prime factor between N^a and N^(a+a.d)
> is approximately = d, independent of the magnitude
> of a, if d is small.

Well I guess you should also demand that a < 0.5, else there is _no_ prime factor :)
How small does a have to be for us to use this?
Suppose that a=0.2 is small enough. Now set d=0.1.
Then a 200 digit number has a 10% chance
of having a prime factor between 40 and 44 digits,
if we can use Dickman.
This is the sort of range that has been probed
by ECM effort on Cunningham cofactors.
Is this 10% estimate consistent with Cunningham
experience accumulated by Paul Leyland and company
when using ECM prior to contemplating SNFS?
David

"David Broadhurst <d.broadhurst@...>" <d.broadhurst@...> wrote:
> > The probability of a number N, chosen at random,
> > having a prime factor between N^a and N^(a+a.d)
> > is approximately = d, independent of the magnitude
> > of a, if d is small.
>
> Well I guess you should also demand that a < 0.5
> else there is _no_ prime factor :)

Eep, no! /Your/ number isn't chosen at random, you know it's even. Once you've divided it by 2, however, it effectively becomes random again (within the limits of what's relevant here, he says waving hands).

> How small does a have to be for us to use this?
>
> Suppose that a=0.2 is small enough. Now set d=0.1.
> Then a 200 digit number has a 10% chance
> of having a prime factor between 40 and 44 digits,
> if we can use Dickman.
>
> This is the sort of range that has been probed
> by ECM effort on Cunningham cofactors.

Eep, no! None of the Cunningham numbers are chosen at random! Cofactors in particular will be much less random, for the reasons you raise above.

I'd give something like the Home Prime sequences and the Champernowne-based songs and dances a better chance of providing random data. (Remember that these won't be completely random either; at least 2 and 5 will be anti-gimmes, and possibly 3 as well will have some skew. I wouldn't bet against 11 being weird either.)

> Is this 10% estimate consistent with Cunningham
> experience accumulated by Paul Leyland and company
> when using ECM prior to contemplating SNFS?

It's hard to think of a corpus that would be entirely devoid of bias, to be honest...
However, I'm not claiming that there isn't an exact analogue for numbers
with restrictive divisibility properties. However, Dickman says _nothing_
about such numbers, and is therefore undemonstrable/unrefutable using them.
... brilliant numbers. If someone has kept a full record of all the numbers
that were in the range they tested for brilliantness, then that would be a
fair corpus, surely? I'm assuming they weren't sieved terribly deeply, and
therefore only the medium sized factors would need to have been recorded, as
the small ones can be reproduced easily (hey, I'll offer to find all the
small factors to test my new factoring algorithm!). Sure they're smaller
than 200 digits by a factor of 2, but they're about as unbiased as you can
get, and there's a fair number of them.
Anyone with such a collection?
Phil
Phil:
> Once you've divided it by 2, however it effectively
> becomes random again

The Carmody sieving doctrine seems to assert that when you have divided anything (where possible) by primes less than a trillion then you ain't got anything special left, except a number with no prime factor less than a trillion :)

However I was very well aware that the Cunningham cofactors are special numbers, which is why I asked the *open* question: does Dickman square with the data?
Puzzle: How did I exploit Dickman to gain a factor of
2 speedup for finding a recently posted prime? (And yes, I
do [unfortunately] have enough data to support the
factor of 2 speedup.)
David

"David Broadhurst <d.broadhurst@...>" <d.broadhurst@...> wrote:
> Phil:
>
> > Once you've divided it by 2, however it effectively
> > becomes random again
>
> The Carmody sieving doctrine seems to assert that when
> you have divided anything (where possible) by primes
> less than a trillion then you ain't got anything special
> left, except a number with no prime factor less than a trillion :)

I wouldn't quite broadbrush it like that. I'd say that I know a factor of 60 from a factor of one-plus-itsybitsy though, and I don't tend to worry about the itsybitsies.

> However I was very well aware that the Cunningham cofactors
> are special numbers, which is why I asked the *open* question:
> does Dickman square with the data?

I reckon there's a perpendicular merger of Dickman and Dirichlet waiting to be performed, i.e. I don't reckon there's much room for misbehaviour. (Not full Dirichlet, just 1(n) and maybe -1(n). Aside: was there a name for this restricted theory, which I've been told drops out far more easily than full Dirichlet?)
> Puzzle: How did I exploit Dickman to gain a factor of
> 2 speedup for finding a recently posted prime? (And yes, I
> do [unfortunately] have enough data to support the
> factor of 2 speedup.)

I find it hard to imagine using Dickman for that purpose, whatever the particular number you were thinking of. And I genuinely mean, like the landing party on Star Trek that discovers some bizarre alien artefact, I wouldn't know how to use Dickman for that purpose: where should I hold it, in which direction should I point it, and what button should I press?!?!
I'm just glad I'm not wearing red...
Phil
"David Broadhurst <d.broadhurst@...>" <d.broadhurst@...> wrote:
> > what button should I press?!?!
>
> the {1,2,12} button or
> the {1,2,15} button or
> the {1,4,6} button
> but not many others :)

Woh! I can't see Dickman in that at all. What am I missing?
Phil
> Woh! I can't see Dickman in that at all.
> What am I missing?

Let me backtrack a bit, to Aurifeuille.
Some time ago, Bouk and I were trying
to develop a theory of Generalized Lucas
Aurifeuillian parts: the generalization of
http://www.utm.edu/research/primes/lists/top20/LucasA.html
from Lucas(1,-1,n) to Lucas(p,q,n).
[Actually with q=+/-1 in our case.]
Before we articulated the full theory
we needed an empirical diagnostic that an
Aurifeuillian factorization exists, in situations where
the numbers were too big to completely factorize.
Dickman (informally construed, with all the usual
caveats) provided it. If N=A*B exists as an algebraic
factorization of N, and we have taken out primes
up to (more or less) T1 digits and ask ECM for primes between
(more or less) T1 and T2 digits then Dickman suggests
(don't eek me, I only said suggests) that we will
get them twice as fast, compared with when there
isn't such a factorization. Anyway that turned
out to be a tolerable diagnostic which enabled
us to guess the relationship between (p,q,n) that
gives an Aurifeuillian factorization.
Now this also suggests that the seeds [Markus: sic!]
{1,2,12}
{1,2,15}
{1,4,6}
will repay people like him who use ECM to get
4-Carmichaels [without using CRT, which first
I and then, more ably, you used].
They need ECM because when the 3-seed has
produced primes {p,q,r} then the 4th Carmichael
factor 1+(p*q*r-1)/d is gotten by hacking primes out of
(p*q*r-1)/lcm(p-1,q-1,r-1) and combining them
to make d such that the 4th factor is also prime.
Now what's special about those 3 buttons?
? print(factor((1+x)*(1+2*x)*(1+12*x)-1))
[x, 1; 4*x + 3, 1; 6*x + 5, 1]
? print(factor((1+x)*(1+2*x)*(1+15*x)-1))
[x, 1; 3*x + 2, 1; 10*x + 9, 1]
? print(factor((1+x)*(1+4*x)*(1+6*x)-1))
[x, 1; 2*x + 1, 1; 12*x + 11, 1]
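The three identities can also be confirmed without PARI/GP; a Python sketch (helper name mine) checks them by integer evaluation at enough points:

```python
def poly_ok(seed, linear_factors):
    """Check (1+s1*x)(1+s2*x)(1+s3*x) - 1 == x * prod(a*x + b)
    at 21 integer points; enough to distinguish cubics."""
    s1, s2, s3 = seed
    for x in range(-10, 11):
        lhs = (1 + s1 * x) * (1 + s2 * x) * (1 + s3 * x) - 1
        rhs = x
        for a, b in linear_factors:
            rhs *= a * x + b
        if lhs != rhs:
            return False
    return True

print(poly_ok((1, 2, 12), [(4, 3), (6, 5)]))   # True
print(poly_ok((1, 2, 15), [(3, 2), (10, 9)]))  # True
print(poly_ok((1, 4, 6), [(2, 1), (12, 11)]))  # True
```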
Of course you and I did not bother about this when we used CRT.
But Markus is stuck with his NewPgen output and so
can't use CRT. So I thought it would be useful
to give {1,2,12} a proof in principle by beating the
4-Carmichael 4th factor record.
In fact I did it twice over:
2763260532*((1591638066432*3061#+19)^2-1)*3061#/
286610757607008951353515277095+1 3909 p44 03
4-Carmichael factor (4)
757849549*((72753556704*3061#+19)^2-1)*3061#/
56731664597567983567442428202862519351615138+1 3891 p44 03
4-Carmichael factor (4)
finding ECM factors at about twice
the rate for this seed as compared with
the other seeds in my plantpot.
OK, it's a trivial [and suspect] use of Dickman.
But it was funny that you should mention him
just after I had commented on another such seed
{1,2,15}. But Markus can't use that
Dickman-enhanced seed, since he
can't get x=1 mod 5 in
? print(factor((1+x)*(1+2*x)*(1+15*x)-1))
[x, 1; 3*x + 2, 1; 10*x + 9, 1]
because he used primorial mode in NewPgen.
David

Oh, I just thought of a nice grace note.
One can use algebraically factorizing
seeds to find a 4-Carm with
D+D+D+2*D digits,
better than your
D+D+D+D
but short of
D+D+D+3*D.
This piggy-in-the-middle method would enable one to use
both CRT and BLS, whereas you went for the small pig
because your parallel-seed CRT had taken up too
much of the action to allow BLS.
Bet that idea isn't in your 40 pages of Carmichael notes :)
David

"David Broadhurst <d.broadhurst@...>" <d.broadhurst@...> wrote:
> Oh, I just thought of a nice grace note.
> One can use algebraically factorizing
> seeds to find a 4-Carm with
> D+D+D+2*D digits,
> better than your
> D+D+D+D
> but short of
> D+D+D+3*D.
> This piggy-in-the-middle method would enable one to use
> both CRT and BLS, whereas you went for the small pig
> because your parallel-seed CRT had taken up too
> much of the action to allow BLS.
> Bet that idea isn't in your 40 pages of Carmichael notes :)
> David

4-Carms were a long time back. I don't believe I expended more than about 2 pages on them. I spent a while writing the code that optimally placed factors into the different slots, but that's a different matter!
Phil
"David Broadhurst <d.broadhurst@...>" <d.broadhurst@...> wrote:
> > Woh! I can't see Dickman in that at all.
> > What am I missing?
>
> Let me backtrack a bit, to Aurifeuille.
> ...
> (more or less) T1 and T2 digits then Dickman suggests
> (don't eek me, I only said suggests) that we will
> get them twice as fast, compared with when there
> isn't such a factorization.

Spot on. I like it. That's a completely sideways way of looking at things! I like it!
> Now what's special about those 3 buttons?

Speak for yourself. I used {1,2,12} and {1,4,6} /precisely/ for that reason.

> ? print(factor((1+x)*(1+2*x)*(1+12*x)-1))
> [x, 1; 4*x + 3, 1; 6*x + 5, 1]
>
> ? print(factor((1+x)*(1+2*x)*(1+15*x)-1))
> [x, 1; 3*x + 2, 1; 10*x + 9, 1]
>
> ? print(factor((1+x)*(1+4*x)*(1+6*x)-1))
> [x, 1; 2*x + 1, 1; 12*x + 11, 1]
>
> Of course you and I did not bother about this when we used CRT.

I think one of my results was a {1,4,6} wasn't it?

> But Markus is stuck with his NewPgen output and so
> can't use CRT.

Indeed it is.

> So I thought it would be useful
> to give {1,2,12} a proof in principle by beating the
> 4-Carmichael 4th factor record.
>
> In fact I did it twice over:
>
> 2763260532*((1591638066432*3061#+19)^2-1)*3061#/
> 286610757607008951353515277095+1 3909 p44 03
> 4-Carmichael factor (4)
>
> 757849549*((72753556704*3061#+19)^2-1)*3061#/
> 56731664597567983567442428202862519351615138+1 3891 p44 03
> 4-Carmichael factor (4)
>
> finding ECM factors at about twice
> the rate for this seed as compared with
> the other seeds in my plantpot.
>
> OK, it's a trivial [and suspect] use of Dickman.
It's a great idea for trying to squeeze out factors quicker, certainly.
I'm surprised it works, because the factors need to be of a particular
residue in order to make the thing Carmichael (like your lucky 2s in the
3-Carmichael).
There's the Saarinen DLOG trick one can use. However, he's not published
that yet. (It's one of the tricks that helped him generate the
multi-billion-digit Carmichael so easily last month.)
You seem to have claimed a *2 speedup over the naive technique? I think I
claimed either a *4 or *6 speedup. However, as you have noted elsewhere I
become constipated with CRT. (Not every element in my speedup contributed a
linear factorisation, and therefore caused doping-bloat.)
Phil
Phil:
> That's a completely sideways way of looking at things!
Thank ye, kind sir.
> I like it!
> I think one of my results was a {1,4,6} wasn't it?
Yep. I said you didn't need to bother, not that you
didn't bother.
> I'm surprised it works, because the factors need to
> be of a particular residue in order to make the thing
> Carmichael

\begin{explanation}
lcm{1,2,12} = 3*4
We get the 3 from 4*x+3, since
x=0 mod 3, because of the primorial.
We get the 4 from (1+1/d).
All possible d's are odd.
So half of the combinations of ECM primes work.
But we double these using the 5 from 6*x+5,
since x=0 mod 5, because of the primorial.
So {1,2,12} is as rich as {1,2,3}.
Except that it's twice as fast
as {1,2,3} for ECMing.
\end{explanation}

Given the {1,2,15} seed, or {1,2,12} seed, etc.:
Couldn't we check for a 4th element using ECM, and a 5th element (e) such
that we can form a 5-Carm of the form N = e*1*2*15*ECMFactor?
Carms of that form:
5,7,13,193,439,9637697
5,7,13,193,499,2657,42829
5,19,37,547,1009,3211937
7,19,37,541,601,58741
7,79,157,2341,3541,9181
13,19,37,541,631,2689
13,19,37,541,739,811,1231
13,19,37,541,1009,2311
17,37,73,1093,2081,65521
17,37,73,1093,93809
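Tuples like these can be sanity-checked with Korselt's criterion (a squarefree n with p-1 | n-1 for every prime factor p is Carmichael); a Python sketch, function name mine, with primality of the listed entries taken on trust:

```python
def korselt(prime_factors):
    """Korselt's criterion for n = product of the given distinct primes:
    n is Carmichael iff (p - 1) divides (n - 1) for every factor p."""
    n = 1
    for p in prime_factors:
        n *= p
    return all((n - 1) % (p - 1) == 0 for p in prime_factors)

print(korselt([3, 11, 17]))                    # True: 561, the classic example
print(korselt([5, 7, 13, 193, 439, 9637697]))  # True: first tuple listed above
```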
With my dataset I will have 4 billion possible e values and 350 titanic
seeds of whatever 3-part seed is picked. Is it possible to create a
Carm by extending a "seed" at both ends, i.e. creating a custom formula for
each e*1*2*15 which is then ECM'd to get a prime that
makes e*1*2*15*ECMFactor PRP?
Markus
Phil Carmody wrote:
Phil,

> ... brilliant numbers. If someone has kept a full record of all the numbers
> that were in the range they tested for brilliantness, then that would be a
> fair corpus, surely?
>
> Anyone with such a collection?
I have been keeping such a list and was going to offer up all my
findings once I have found my next brilliant number. The program I used
to do the "sieving" was Miracl's factor.exe (slightly modified by me)
and it does some ecm to remove up to 25 digit factors. So, would my
list be too sparse to be useful? I didn't keep any of the sieved
numbers, but everything that fell through that I have nfsx or ppsiqs
factorings of.
I was also thinking of giving the timings for the work thus far, and
maybe also giving a list of the primes that were found in this range.
Would either of the last two pieces of information be useful to anyone?
I'm still compiling the information and still waiting for the next
brilliant, but as soon as both are ready I'll email the list.
David C.

David Cleaver asked:
> So, would my list be too sparse to be useful?
I think that what you have so far cleaved/cleft/cloven
is rather impressive and that the more you document it
the better. Since the files are presumably not huge,
I can't see that Phil is going to fret if you
create a "brilliant" folder in
http://groups.yahoo.com/group/primenumbers/files/
and add whatever you think might be informative therein.
David