>> I learnt algorithmic complexity as part of a Maths degree,
>> others learn it as part of a CompSci degree. It belongs to
>> both areas. The pigeon-holing doesn't really do the subject
>> any good at all. The more cross-over the better.
> So it remains now to define a unit of work in
> mathematics/computer science. I imagine something like
> the 16 bit signed multiplication is the unit?
It's true that the "unit of work" has to be defined. Algorithmic
complexity is traditionally based on counting operations (additions and
multiplications), and that measure is clearly obsolete today. The unit of
work is certainly not something like a single multiplication.
Algorithmic complexity is useful as a general guide: an algorithm in
O(n*log(n)) will, beyond some point, be faster than one in O(n^2), but it
tells you nothing more. Some people compute a complexity by counting the
number of multiplications and additions. You then often see boring papers
explaining that algorithm B is faster than A because its multiplication
count is 9*n*log_2(n) rather than 10*n*log_2(n). The problem is that on
today's real computers, the number of multiplications and additions is no
longer the bottleneck.
Today, the major factor is where your data live. If you read your data
from main memory, your algorithm will run only as fast as (or rather, as
slow as) the memory bus. If you read your data from the L1 cache, however,
the algorithm will run at the full speed of the core. The result is that an
algorithm with a complexity of 5*n*log_2(n) that uses the data cache poorly
can be 5 times slower than an algorithm with a complexity of 10*n*log_2(n)
that processes data in blocks the size of the L1 cache.
The problem with a modern definition of algorithmic complexity is that it
is itself a complex problem (:o) ). To obtain a correct definition (by
correct, I mean one that is actually useful for writing a program), you
would have to introduce parameters such as the L1, L2, ... cache sizes, the
latencies, and the bus speeds, weight the operations by the number of
execution units, and take the parallelism into account. Then you would have
to merge all of that with some min and max functions, and certainly a bit
of statistics, because instructions are no longer executed in order and the
reordering logic cannot be completely modelled.
You can imagine how complex the final formula would be, even for a simple
algorithm such as the FFT, just by noting that the fastest FFT on the PIII
is not the fastest on the P4, even though the technology took only a small
step between the two!
Algorithmic complexity is dead, long live complexity!