--- Paul Leyland <pleyland@...> wrote:

[quoth P.M.]

> > I imagine something like the 16 bit signed multiplication
> > is the unit?

>

> There are two "natural" units --- that is, units which have some
> theoretical justification and have some practical application.

>

> One work unit is "arithmetical operations in the underlying
> field/ring/group/whatever". Mathematicians tend to prefer this one
> as it sweeps under the carpet all the messy details of the machine
> architecture underlying actual computations. It also makes
> comparison between different classes of algorithms easier.

>

> The other one is "bit operations", which tends to be favoured by
> the engineers, programmers and other such pigs in the rose garden.
> This measure counts how many times a bit is
> set/reset/toggled/compared/whatever and is fairly clearly well
> matched to the operations of binary computers. It's relatively easy
> to cast bit-operations into the language of some other machines,
> classical Turing machines for instance, but the concept tends to
> fall down badly when applied to some of the weirder models of
> computation that have been proposed.

Absolutely. The first one, with your wordy title, is so preferred by
mathematicians that, by default, if you hear someone with a
mathematical bent describing an algorithm, the units will simply be
called "operations". I cite Crandall & Pomerance here, as they
unambiguously raise the issue of the two measures in their /Prime
Numbers: A Computational Perspective/.

Describing the algorithm in terms of operations is the first stage;
describing the implementation of those operations at the lowest level
of your (maybe hypothetical) machine architecture is the second
stage. For real-world work factors, you need to "multiply the two
together", so to speak. Breakthroughs in either stage are welcomed,
of course, but if you manage to improve a low-level primitive, then
you may well have improved the work-factor of _all_ algorithms that
make use of said primitive.
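A toy sketch of that "multiply the two together" idea, with invented
numbers purely for illustration (the n^2 and w^2 costs are just the
schoolbook assumptions, not anything claimed above):

```python
# Toy sketch (costs are invented for illustration): combining the
# two stages into a rough bit-operation work factor.
def work_factor_bits(n, w=32):
    # Stage 1: suppose the algorithm costs ~n^2 operations in the
    # underlying ring (e.g. schoolbook multiplication of n-word numbers).
    ring_ops = n * n
    # Stage 2: suppose each ring operation costs ~w^2 bit operations
    # (schoolbook multiply of two w-bit machine words).
    bits_per_op = w * w
    # "Multiply the two together" to estimate total bit operations.
    return ring_ops * bits_per_op
```

Halve either factor -- a better algorithm, or a better primitive --
and the product halves too.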

However, it's not black and white! Many of the high-level algorithms
(e.g. finding a quadratic non-residue), which rely on lower-level
primitives (e.g. expmods, which rely on mulmods), can themselves be
considered 'low-level primitives' by other algorithms!
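To make that layering concrete, here's a minimal sketch of the
non-residue example, testing via Euler's criterion: the "high level"
algorithm is a few lines on top of an expmod primitive (Python's
three-argument pow), which in turn rests on mulmod:

```python
# Find a quadratic non-residue mod an odd prime p -- a "high level"
# algorithm built on the expmod primitive pow(a, e, p), which itself
# rests on mulmod.
def quadratic_non_residue(p):
    """Return the smallest a with (a|p) = -1, for an odd prime p > 2."""
    for a in range(2, p):
        # Euler's criterion: a^((p-1)/2) mod p is 1 for residues
        # and p-1 for non-residues.
        if pow(a, (p - 1) // 2, p) == p - 1:
            return a
```

Speed up mulmod and you've sped up expmod, and hence this routine,
and hence anything built on top of it in turn.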

This does mean that evaluating complexities can be very tricky
indeed, as you sometimes have to rely on more than just two layers.

Browse sci.crypt for recent subject lines including 'Bernstein' to
see quite how up in the air NFS's complexity is. (Sure, he's changed
the 'space' parameters, but he has given most of the time ones quite
a shuffle in the process. So much so that no one yet has /any idea/
how useful the asymptotic complexity of his modified algorithm is for
describing the process for the ranges of numbers it's likely to be
used for.)

Phil

