Let F(N) be the smallest number of operations

required by any algorithm

which employs only +, -, *, and /, starts

out with 0 and 1, and computes a number N.

THEOREM: For an infinite set of integers X>0,

F(X) > lnX / [2*lnlnX + O(1)].

And for every integer X>0,

F(X) < log2(X) * [1+o(1)].

PROOF SKETCH:

LOWER BOUND: At its Jth step a chain has J+1 numbers

available (0, 1, and the J-1 earlier results), so it can

do any of 4*(J+1)^2 things at that step; hence k-operation

chains can produce at most 4^k * ((k+1)!)^2 different

possible outputs.

Stirling's formula and the pigeonhole principle then

show that for infinitely many X, K = F(X) satisfies

2K*lnK + O(K) > lnX.
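The counting step can be checked numerically. Below is a small sketch (the function names are mine); note that a value computable in fewer than k operations is also the output of some k-operation chain (pad the chain with dummy operations such as adding 0), so the bound covers those values too.

```python
import math

def chain_output_bound(k):
    """4^k * ((k+1)!)^2: upper bound on the number of distinct values
    that chains of k operations can output (4 operation choices and at
    most (J+1)^2 ordered operand pairs at step J, for J = 1..k)."""
    return 4**k * math.factorial(k + 1)**2

def min_ops_lower_bound(X):
    """Smallest k whose counting bound reaches X.  By pigeonhole, some
    positive integer <= X then needs at least this many operations."""
    k = 0
    while chain_output_bound(k) < X:
        k += 1
    return k
```

For X = 10^100, min_ops_lower_bound returns a value in the thirties, comfortably above the theorem's floor of lnX / (2*lnlnX) ≈ 21.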

UPPER BOUND:

Addition chains alone suffice to show this

(the only allowed operation is +): use the

"radix-R powering algorithm" with R = 2^w(X),

where w(X) is any function of X that grows

to infinity slowly enough.
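A sketch of the radix-R method in this addition-only setting (the function name and the operation accounting are mine): precompute 1..R-1 with R-2 additions, then scan the base-R digits of X most-significant first, doubling w times (so multiplying by R = 2^w) and adding the precomputed digit. The total is (R-2) + w*(#digits - 1) + (#nonzero trailing digits), which is log2(X)*(1+o(1)) when w(X) grows to infinity slowly enough.

```python
def addition_chain_ops(X, w):
    """Count the '+' operations used by the radix-R powering method,
    R = 2**w, to build X >= 1 starting from 1.  Returns (ops, value);
    value always equals X."""
    R = 1 << w
    ops = R - 2                  # precompute 2, 3, ..., R-1 by repeated +1
    digits = []
    n = X
    while n:
        digits.append(n % R)
        n //= R
    digits.reverse()             # most significant digit first
    acc = digits[0]              # taken from the precomputed table, 0 ops
    for d in digits[1:]:
        for _ in range(w):       # acc *= 2 costs one addition each time
            acc += acc
            ops += 1
        if d:                    # add the precomputed value of this digit
            acc += d
            ops += 1
    return ops, acc
```

For instance, addition_chain_ops(10**6, 4) reports 33 additions against log2(10**6) ≈ 20; the precomputation and digit overhead shrink relative to log2(X) as X grows and w is tuned.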

QED

The lower bound suggests that the factoring algorithm in the

"size doesn't matter" computer model could not be improved to run in

fewer than order logN / loglogN steps... but it does not prove this,

since that algorithm takes N as an additional input (not just 0 and 1).

What is the sequence of "hard to represent" numbers

such that the Nth number in the sequence (N=0,1,2,3,...)

is the least positive integer

that requires at least N operations to compute?

This sequence begins

1, 2, 3, 5, 7, 13, x

where x>40, because

0: 0 1

1: 2

2: 3 4

3: 5 6 8 9 16

4: 7 10 11 12 14 15 17 18 20 24 25 27 32 36 64 81 256

5: 13 19 21 22 23 26 28 29 30 31 33 34 35 37 38 39 40...;

you could take it a bit further with a computer.
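A brute-force sketch of that computer search (the function name and the memoized depth-first strategy are mine; exact rationals avoid floating-point artifacts):

```python
from fractions import Fraction

def reachable(max_ops):
    """All values computable from {0, 1} by at most max_ops applications
    of +, -, *, / (exact rational arithmetic, so no rounding errors)."""
    found = {Fraction(0), Fraction(1)}
    seen = set()

    def extend(vals, ops_left):
        key = (vals, ops_left)
        if ops_left == 0 or key in seen:
            return
        seen.add(key)
        for a in vals:
            for b in vals:
                results = [a + b, a - b, a * b]
                if b != 0:
                    results.append(a / b)
                for r in results:
                    found.add(r)            # reachable in one more op
                    if r not in vals:       # only new values extend the chain
                        extend(vals | {r}, ops_left - 1)

    extend(frozenset(found), max_ops)
    return found
```

For instance, 7 first appears in reachable(4), and 13 is absent from reachable(4), consistent with the sequence 1, 2, 3, 5, 7, 13, ... above; depth 5 and beyond gets expensive quickly.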