Re: wolfram's small universal computer
- If you haven't been following the discussion at:
http://cs.nyu.edu/pipermail/fom/2007-October/012178.html
Alex Smith has chimed in to address Vaughan Pratt's comments.
Very interesting reading indeed.
Vaughan Pratt said:
> This line of reasoning works for numbers but not machines. For
> it would show that a linear-bounded automaton (LBA) is universal: a
> non-universal machine can easily add to the input a string giving in
> unary the number of steps to emulate the given Turing machine, and a
> suitable LBA can then run the computation that far without exceeding
> the linear space bound.
Alex Smith said:
There is a mistake in your reasoning at this point: the
linear-bounded-automaton+preprocessor system that you suggest here is
not universal. 'A non-universal machine can easily add to the input a
string giving in unary the number of steps to emulate the given Turing
machine' is false; a universal machine is capable of infinite searches
where the existence and value of any answer searched for is not known
in advance. For instance, a universal machine can search for
counterexamples to the Riemann hypothesis; the
linear-bounded-automaton+preprocessor system cannot, unless the
preprocessor is itself sufficiently advanced to search for a
counterexample, in order to know how large the numbers can get before
a counterexample is found. (It would be possible to do this for the
specific case of the Riemann hypothesis - or any other specific case -
by engineering a specific preprocessor for the exact problem at hand,
but this does not work in the general case.) Likewise, a system
consisting of one push-down automaton is not universal, not even with
a preprocessor that is itself a push-down automaton, because
information can only be sent one way (from the preprocessor to the
main system). Push-down automata are therefore not universal by this
definition, even though systems that are like push-down automata
except with two memory stacks (this is one way of interpreting
Böhm's system P'', for instance) are universal.
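The two-stack point can be made concrete: a Turing-machine tape splits into the cells left of the head and the cells at and right of it, and each half behaves as a stack. A minimal sketch (my own illustration, not taken from any of the proofs discussed here):

```python
# Sketch of why two stacks suffice to emulate a Turing-machine tape
# (one reading of Boehm's P''): the cells left of the head live on one
# stack, the head cell and everything to its right on the other.
# Illustrative only; names and encoding are my own.

BLANK = '0'

def move_right(left, right):
    # Current head cell goes onto the left stack; the new head cell
    # comes off the right stack (or is a fresh blank).
    left.append(right.pop() if right else BLANK)

def move_left(left, right):
    right.append(left.pop() if left else BLANK)

def read(right):
    return right[-1] if right else BLANK

def write(right, symbol):
    if right:
        right[-1] = symbol
    else:
        right.append(symbol)

# Write '1' into three consecutive cells, then walk back and read them.
left, right = [], []
for _ in range(3):
    write(right, '1')
    move_right(left, right)
cells = []
for _ in range(3):
    move_left(left, right)
    cells.append(read(right))
print(cells)  # ['1', '1', '1']
```

A single stack cannot do this, because moving the head right destroys the information needed to restore cells on a later leftward pass; the second stack is exactly what preserves it.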
The definition of universality used is important in the situation in
question; the problem of what sorts of initial conditions are allowed
is a tricky one. There seems to be a lemma in question, which is that
if a sequence of systems, each of which reads input and produces
output, are set up so that each one reads the output of the previous
one, and none of them are universal, then the resulting system cannot
be universal. In the case of the prize problem, this lemma was
basically incorporated into the definition of universality. (The
technical details of the original problem specified 'One common
definition requires that the encoding of input for the universal
system, and the decoding of its output, must both be done by a
computation that always halts after a finite number of steps. One
difficulty with this definition is that it may be too restrictive for
infinite tapes. It is not clear, for instance, whether one should
allow a Thue-Morse sequence
<http://www.wolframscience.com/nksonline/page-890> in the initial
conditions.'; the definition that was used in the prize basically said
that what was required was that the encoding of the initial condition
could be infinite but had to be done in an obviously non-universal
way, and given this definition the lemma in question is trivial.) To
prove or disprove this lemma for a different definition, one must
first define 'universal' with a different treatment of initial
conditions; the lemma may well be true under some definitions of
'universal' and false under others. In that case, absent the
clarifications of the problem's meaning decided upon by the committee
that decided the outcome of the Wolfram 2,3 Turing Machine Research
Prize, the problem of deciding whether the machine in question is
universal would be ambiguous (but these clarifications were given and
decided upon, so any ambiguity was resolved in this case).
In any event, the construction of the initial condition is done in a
highly computationally simple manner, making it intuitively reasonable
that the system is universal; from my point of view, a definition of
universality that excludes this sort of initial condition would be
inconsistent with an intuitive view of what universality is.
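For what it's worth, the Thue-Morse sequence mentioned in the prize's technical notes is easy to produce by an obviously non-universal computation; a quick sketch (my own illustration, not part of the prize materials):

```python
# The Thue-Morse sequence: bit n is the parity of the count of 1s in
# the binary expansion of n. Whether such an infinite (but highly
# regular) initial condition should be allowed is exactly the
# definitional question discussed above.

def thue_morse(n):
    """Return the first n bits of the Thue-Morse sequence."""
    return [bin(i).count('1') % 2 for i in range(n)]

print(thue_morse(16))  # [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
```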
- --- In email@example.com, "John J. Gagne" wrote:
> If you haven't been following the discussion at:
> Alex Smith has chimed in to address Vaughan Pratt's comments.
I don't understand any of this.
For any finite machine, isn't there some program that is one bit
larger, that it can't run, and thus not be "universal"?
- --- In firstname.lastname@example.org, "jrstern" <jrstern@...> wrote:
> --- In email@example.com, "John J. Gagne"
> <john_j_gagne@> wrote:
> > If you haven't been following the discussion at:
> > http://cs.nyu.edu/pipermail/fom/2007-October/012178.html
> > Alex Smith has chimed in to address Vaughan Pratt's comments.
> I don't understand any of this.
> For any finite machine, isn't there some program that is one bit
> larger, that it can't run, and thus not be "universal"?
I think that's part of his argument actually.
If the definition of universality critically depends on the presence
or absence of an infinite tape, then LBAs are trivially not universal.
If, on the other hand, universality is *independent* of the presence
or absence of an infinite tape, then the question of the universality
of LBAs is not trivial.
I'm no expert, but it seems to me that universality is really about
the language which the machine accepts or rejects. If the input to an
LBA must always be finite (which it must, or else universality just
trivially means "must have an infinite tape"), then all you have to do
is show that a halting-equivalent (acceptor) function can always be
established to define the language (or something like that) for any
input string. The fact that it can't store all possible strings "at
the same time" is more about the tape issue than about the algorithm
doing the computation, which is what we should be focusing on in any
theory.
As far as I can tell, the reason Turing referenced an infinite tape in
the first place was for just this reason above...
- --- In firstname.lastname@example.org, "John J. Gagne"
> > I don't understand any of this.
> > For any finite machine, isn't there some program that is one bit
> > larger, that it can't run, and thus not be "universal"?
> I think that's part of his argument actually.
> If the definition of universality critically depends on the presence
> or absence of an infinite tape, then LBAs are trivially not
> universal.
OK.
> If, on the other hand, universality is *independent* of the presence
> or absence of an infinite tape, then the question of the
> universality of LBAs is not trivial.
So, what are the options here, prove that the infinite tape machine
is NOT universal, or prove that the finite machine IS universal -
even though we already know trivially that it's not?
Isn't that a lot like searching for a hornless unicorn, or a married
bachelor, or an honest politician?
> I'm no expert, but it seems to me that universality is really about
> the language which the machine accepts or rejects. If the input to a
> LBA must always be finite (which it must or universality just means
> trivially must have an infinite tape)
... then you're looking for various finite equivalence classes.
> As far as I can tell, the reason Turing referenced an infinite tape
> in the first place was for just this reason above...
The paper was "On Computable Numbers" and the putative topic was
something involving infinite enumerations. When he wandered onto the
topic of UTMs, specifying an infinite tape simplified the discussion
greatly, but as the individual Turing machines already had infinite
tape, it was a freebie.
I just cannot see an issue here, unless it's the parlor game of
minimal UTMs, but no, that doesn't seem to be the topic. It sure
seems to me they've simply decided to ignore a trivial fact about
Turing machines, and gone on to argue happily about what the universe
would be like, if it wasn't how it really is. This is within
spittin' distance of ignoring entropy and arguing about whose
perpetual motion machine is the best.
in my humble opinion, of course
- JJG said:
> > If, on the other hand, universality is *independent* of the
> > presence or absence of an infinite tape, then the question of the
> > universality of LBAs is not trivial.
Josh said:
> So, what are the options here, prove that the infinite tape machine
> is NOT universal, or prove that the finite machine IS universal -
> even though we already know trivially that it's not?
> Isn't that a lot like searching for a hornless unicorn, or a married
> bachelor, or an honest politician?
If all computable functions must halt and the set of all computable
functions is complete, then why waste the tape?
Searching for a computable function which requires an infinite tape is
like searching for a hornless unicorn, or a married bachelor, or an
honest politician.
Rather than infinite tape, why not just say "an unspecified but
necessarily finite length of tape"?
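One way to read "an unspecified but necessarily finite length of tape": the tape is unbounded in principle, but at any given step only finitely many cells have ever been written. A toy sketch of such a grow-on-demand tape (my own illustration):

```python
# Sketch: a "tape" that is unbounded in principle but, at every step,
# finite in storage. Only cells that have actually been written occupy
# memory; every other position reads as the blank symbol.

class Tape:
    def __init__(self, blank=0):
        self.blank = blank
        self.cells = {}          # position -> symbol, grows on demand

    def read(self, pos):
        return self.cells.get(pos, self.blank)

    def write(self, pos, symbol):
        if symbol == self.blank:
            self.cells.pop(pos, None)   # keep the storage finite
        else:
            self.cells[pos] = symbol

t = Tape()
t.write(10**6, 1)    # a far-away cell costs only one stored entry
print(t.read(10**6), t.read(-5), len(t.cells))  # 1 0 1
```

No bound on the tape length is ever specified in advance, yet the amount actually allocated after any finite number of steps is finite, which is the distinction being debated above.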
- John McCarthy said:
In the 1950s I thought that the smallest possible (symbol-state
product) universal Turing machine would tell something about the
nature of computation. Unfortunately, it didn't. Instead, as simpler
universal machines were discovered, the proofs that they were
universal became more elaborate, and so did the encodings of the
input.
The proof that Wolfram's 2,3 machine is universal is reported to be
44 pages, and goes via Minsky's(?) tag systems.
Then while surfing around over the past week or so, I ran across the
term "Turing tarpit" and found this quote:
In the final chapter, Very Simple Bases for Computability, of his 1967
book Computation: Finite and Infinite Machines, Marvin Minsky writes:
The reader is welcome to enter the competition [to design the smallest
universal Turing machine ...] although the reader should understand
clearly that the question is an intensely tricky puzzle and has
essentially no serious mathematical interest.
Well, after reading the 44-page proof submitted by Alex Smith, not
being able to make heads or tails of it myself, and then not feeling
too bad about that because it seems that others much smarter than I
are having trouble with it too, I started wondering if John and
Marvin were right.
McCarthy's relationship says: the simpler the machine, the more
complex the proof that it is universal.
It seems to me that the more complex the "proof" is the less of a
"proof" it is too. Given a proof, there should be no doubt that the
proof is proof. If there is any doubt then certainly we shouldn't call
it a proof...
But above I'm using the term "complex" simply to mean *hard to
understand* or difficult to verify. This is somewhat different than
the usage of "complex" within complexity theory, as far as I can tell.
Eray had talked about porting Alex Smith's compiler to LISP. Certainly
if this could be done then there would be no doubt that a 2/3 LBA was
universal. Even McCarthy, as he said here:
In my previous message, I forgot to brag that it is easy to show that
the Lisp function eval is universal.
would have to accept this as an undeniable "proof" that the 2/3
machine was universal.
But such an undertaking, of porting such a *rich* language as LISP to
run under the constraints imposed by the 2/3 machine, would probably
be a very labor-intensive task if done directly.
But, I happened to run across a reference to the brainfuck (or more PC
brainf*ck) language which consists of just eight commands and has
already been proven to be just as universal as LISP!
> Increment the pointer.
< Decrement the pointer.
+ Increment the byte at the pointer.
- Decrement the byte at the pointer.
. Output the byte at the pointer.
, Input a byte and store it in the byte at the pointer.
[ Jump forward past the matching ] if the byte at the pointer is zero.
] Jump backward to the [ unless the byte at the pointer is zero.
Here is "Hello World!" in BF (truncated):
>+.
According to the author, it was originally developed on the Amiga with
the intention of being the world's smallest compiler, and at its
smallest was under 200 bytes.
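To make the eight commands concrete, here is a minimal brainfuck interpreter in Python - an illustrative sketch of my own, not the author's sub-200-byte compiler, and with no error handling for unmatched brackets:

```python
# A minimal brainfuck interpreter: a fixed tape of byte cells, a data
# pointer, and the eight commands listed above. Illustrative only.

def bf(code, data=b''):
    tape, ptr, out = [0] * 30000, 0, []
    inp = iter(data)
    # Precompute matching-bracket positions for [ and ].
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    pc = 0
    while pc < len(code):
        c = code[pc]
        if c == '>': ptr += 1
        elif c == '<': ptr -= 1
        elif c == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.': out.append(tape[ptr])
        elif c == ',': tape[ptr] = next(inp, 0)
        elif c == '[' and tape[ptr] == 0: pc = jumps[pc]
        elif c == ']' and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return bytes(out)

# '++++++++[>++++++++<-]>+.' computes 8*8 + 1 = 65 and prints 'A'.
print(bf('++++++++[>++++++++<-]>+.'))  # b'A'
```

Even this interpreter fits in a few dozen lines, which is part of why BF makes a plausible intermediate target between the 2/3 machine and a richer language.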
Certainly it would be easier to implement brainfuck's eight commands
than it would be to do the richness of LISP, and since BF can do LISP,
if the 2/3 machine can do BF, it can do LISP.
I was talking to a friend of mine (who's rather computer illiterate)
and I was telling him how this language had only eight commands to
which he said "So, it's a really simple language to learn". I tried to
explain that while it might be easy to remember the complete set of
eight commands, this does not mean it's a simple language to program in.
The richness of a language means there are many more ways to
accomplish an arbitrary programming task *within a given number of
statements* in the language. There may be 1000s of ways to accomplish
a given task in LISP within 100 lines of code, while the same task in
BF might take tens of 1000s of lines of code. This is what gives
high-level languages the power to be "simple". But here "simple"
simply means redundant. You reduce the length of the "programs"
written in the language only at the expense of the length (redundancy)
of the given compiler.
But, with respect to the 2/3 LBA, the question of universality *of the
machine* just means "can it do a universal language"?
Machines, ***all physical machines***, be they multi-core Intel LBAs
or 2 state 3 color LBAs, are finite and complete. Universal languages,
from brainfuck to LISP, are infinite only at the expense of being
incomplete.
If the machine is (complete and finite) and the language the machine
is doing is (infinite and incomplete), then we can be just as sure as
we can be that we are doing universal computation...
John J. Gagne