At 10:46 PM 10/4/2003, Décio Luiz Gazzoni Filho wrote:

>I'm not trying to downplay the contributions of these methods -- far from
>it, and experimental evidence shows that these heuristics are on track,
>but it appears to me that researchers were treading on far more solid
>ground when it came to primality tests.

Yes, it wasn't too much of a surprise for there to be good algorithms for
testing primality, since Fermat's little theorem and pseudoprimality tests
already got close.
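To make the point concrete, here is a minimal sketch of a Fermat pseudoprimality test (the function name and choice of bases are mine, purely for illustration). By Fermat's little theorem, if n is prime and gcd(a, n) = 1 then a^(n-1) ≡ 1 (mod n), so a single failing base proves compositeness; a composite that passes anyway is a base-a Fermat pseudoprime.

```python
def is_fermat_probable_prime(n, bases=(2, 3, 5, 7)):
    """Fermat test: returns False only for proven composites; True means
    n is prime or a Fermat pseudoprime to every tested base."""
    if n < 2:
        return False
    for a in bases:
        if a % n == 0:
            continue  # base shares all of n's factors trivially; skip it
        if pow(a, n - 1, n) != 1:  # Fermat's congruence fails
            return False  # definitely composite
    return True

# 341 = 11 * 31 is the smallest base-2 Fermat pseudoprime:
# pow(2, 340, 341) == 1, but base 3 exposes it as composite.
```

Note the test is one-sided: Carmichael numbers pass for every base coprime to them, which is why practical probable-prime tests use the stronger Miller-Rabin variant.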


Sorry, I sent the previous message without a signature...

Décio

----

> I see. So "running out" of dependencies is not the reason for the
> worst-case time required by NFS?

No. If nothing else, getting an epsilon of extra dependencies costs
nothing. If you're processing a 4,000,010 by 4,000,000 matrix, it doesn't
cost much more to process a 4,000,100 by 4,000,000 matrix -- but the
probability of finding a solution increases exponentially with each
dependency. (That's not quite how it works in practice; I'm just arguing
from a complexity standpoint.)
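The exponential claim above can be sketched numerically (the function name is mine). Heuristically, each dependency yields a congruence x^2 ≡ y^2 (mod n) that splits n with probability at least 1/2, so with k independent dependencies the failure probability is at most 2^-k:

```python
def split_success_lower_bound(k):
    """Heuristic lower bound on the chance that at least one of k
    independent dependencies yields a nontrivial factor of n, assuming
    each one succeeds with probability >= 1/2."""
    return 1 - 0.5 ** k

# Ten extra dependencies already push the success bound past 99.9%:
# split_success_lower_bound(10) == 0.9990234375
```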

The real bottleneck of NFS is the rate at which one can produce relations
(which in turn depends on the yield of the polynomials, which in turn is
mostly determined by the size of the number being factored, whether it has
special form, and how much computational power has been thrown at
polynomial selection).

Also, since the sieving step is usually done in a massively parallel
fashion, whereas the linear algebra (LA) is performed on a vector computer
or a cluster, LA ends up being a bottleneck too. It can be reduced by two
means: better filtering strategies (involving more complex algorithms and
requiring many extra relations before creating the matrix), which may
improve the matrix's sparseness and possibly chop off some columns from
it, and of course a smaller matrix to boot -- but that means a smaller
factor base, which places a higher burden on the sieving stage. Of course
there's an optimal factor base size which balances sieving and LA work,
and any shift away from this optimal point increases computational
requirements subexponentially (read: faster than polynomially).

Then there are other improvements, like the large prime variations, which
facilitate sieving while requiring more complex filtering strategies and
more computational resources for filtering, and which increase the matrix
density in the process. So there's a very delicate balance to be reached
by the NFS practitioner.
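For reference, the "subexponential" cost being balanced here is the standard heuristic NFS running time in L-notation, L_n[1/3, c] = exp((c + o(1)) (ln n)^(1/3) (ln ln n)^(2/3)) with c = (64/9)^(1/3) for general numbers. A sketch that ignores the o(1) term (the function name is mine):

```python
import math

def nfs_l_estimate(n, c=(64 / 9) ** (1 / 3)):
    """Heuristic GNFS cost L_n[1/3, c], with the o(1) term dropped.
    Shifting the factor base off its optimum effectively raises c,
    and any change in c changes the total work subexponentially."""
    ln_n = math.log(n)
    return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))
```

This is why doubling the bit length of n costs far more than a constant factor, yet far less than squaring the work as a truly exponential method would.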

Décio
