## Re: [sicp-vsg] Gripe about "orders of growth" explanation

Message 1 of 3, Apr 14, 2005
> There is no further discussion of k1f(n) and k2f(n), as far as
> I know. I find this annoying and frustrating.

But k's are just arbitrary constants?

> In any event, the authors themselves say in footnote 36, "These
> statements mask a great deal of oversimplification."

But they also go on to say in the same footnote why it's an
oversimplification. While an algorithm might be theta(log n) in the
number of steps, that count doesn't account for operations it uses
whose cost can itself grow, like multiplication of large numbers. If
for some reason those operations took more than constant time each,
then your algorithm's running time is no longer theta(log n).

It's also an oversimplification of the idea of steps, since you're not
really counting steps, just their order of growth. As they said,
processes that grow as n^2, 3n^2, or, say, n^2 + 1000n + 50000 are all
theta(n^2).
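That's exactly where the arbitrary constants k1 and k2 come in: theta(n^2) just means the step count is eventually sandwiched between k1*n^2 and k2*n^2 for some choice of constants. A quick Python check (the constants k1 = 1, k2 = 2 are my choice, not the book's) for the n^2 + 1000n + 50000 case:

```python
# Check that n^2 + 1000n + 50000 is sandwiched between k1*n^2 and k2*n^2.
# With k1 = 1 and k2 = 2, the upper bound holds once n^2 >= 1000n + 50000,
# i.e. for all n >= 1049 or so; the lower bound holds for every n >= 0.
def f(n):
    return n * n + 1000 * n + 50000

k1, k2 = 1, 2
for n in range(2000, 100000, 1000):
    assert k1 * n * n <= f(n) <= k2 * n * n
```

Different constants would work too; the point is only that some pair exists, which is why the k's never need further discussion.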

> If not, then where did 14 come from? If anybody knows how to derive
> 14, I'd like them to show me how. (I.e., given what has been
> said previously about orders of growth, prove that a process that
> grows by theta(log n) for n = 1000 will require exactly 14
> multiplications.)

Their footnote kind of explains it. The 14 multiplications come from:

"More precisely, the number of multiplications required is equal to 1
less than the log base 2 of n plus the number of ones in the binary
representation of n. This total is always less than twice the log base
2 of n."

They derived this informally from the definition of fast-expt, afaict.
When do multiplications occur? The even? cases total up to log base 2
of n, which they've explained before. The odd? case adds one
multiplication every time you meet an odd n, which is the same as the
number of 1's in the binary representation of n (dividing a binary
number by 2 is the same as shifting it left once).
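You can check the count by instrumenting the process. Here's a Python sketch of fast-expt (the Scheme original is in the book; the multiplication counter and the n = 1 base case are mine — the final b^1 needs no multiplication, which is where the footnote's "1 less" comes from):

```python
def fast_expt(b, n):
    """b**n by successive squaring, also returning the multiplication count."""
    if n == 0:
        return 1, 0
    if n == 1:
        return b, 0                    # b^1 is b: no multiplication needed
    if n % 2 == 0:
        v, c = fast_expt(b, n // 2)
        return v * v, c + 1            # one squaring per halving (even? case)
    v, c = fast_expt(b, n - 1)
    return b * v, c + 1                # one extra multiply per odd n

value, count = fast_expt(2, 1000)
ones = bin(1000).count("1")            # 1000 = 0b1111101000, six 1's
print(count)                           # 14
print((1000).bit_length() - 1 + ones - 1)   # floor(log2 1000) + 6 - 1 = 14
```

So for n = 1000 you get 9 squarings (floor of log base 2 of 1000) plus 6 odd-case multiplies, minus the free b^1, which is the footnote's 14.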

> Similarly, I see an "empirical" approach in Exercise 1.22. (Or
> maybe the term should be "pragmatic.") I'm referring to the
> idea of confirming a theoretical prediction with an experiment,
> in this case timing the test for primality against the system
> clock.

Nothing's wrong with the empirical approach here; they were just
trying to have you explore whether the number of "steps" in a
process relates directly to the time it takes to complete on your
computer. AFAIK, that isn't necessarily true, depending on how your
machine is implemented.
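For a concrete version of that experiment, here's a rough Python analogue of the book's timed-prime-test from Exercise 1.22 (the function names are mine, and trial division stands in for smallest-divisor). Since the test is theta(sqrt(n)) in steps, the prediction is that inputs 10 times larger take roughly sqrt(10) times longer — the experiment checks whether the wall clock agrees:

```python
import time

def prime(n):
    """Trial division up to sqrt(n), like the book's smallest-divisor test."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def timed_prime_test(n):
    """Return (is_prime, elapsed_seconds) for a single primality test."""
    start = time.perf_counter()
    result = prime(n)
    elapsed = time.perf_counter() - start
    return result, elapsed

# Primes near 10^3, 10^4, 10^5: theta(sqrt n) predicts ~sqrt(10) ratios.
for n in (1009, 10007, 100003):
    print(n, timed_prime_test(n))
```

On a real machine the ratios only approximate sqrt(10), which is the point: step counts predict growth, not exact times.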

> Another way of putting the above is to say that the authors
> are not living up to their aim of presenting a "procedural
> epistemology." (See Preface to the First Edition, p. xviii.)
> According to that, the mathematics of orders of growth could
> just be omitted. The reader should be more interested in "how to"
> than in "what is."

I think it can be omitted only because it isn't really important until
you run into different algorithms with different orders of growth and
need orders of growth as a way of comparing them. Sorting is the
canonical example, but in real life you might be facing the same
choice in some other problem.

--
Myron Wu
(http://homepage.mac.com/myronwu/)
Message 2 of 3, Apr 14, 2005
> The odd case is every time
> you meet an odd n, which is the same as saying the number of 1's in
> the binary representation of n (dividing a binary number by 2 is the
> same as shifting it left once).

Oops that's right, not left.