If you haven't been following the discussion at:

http://cs.nyu.edu/pipermail/fom/2007-October/012178.html

Alex Smith has chimed in to address Vaughan Pratt's comments. Very interesting reading indeed.

Vaughan Pratt said:

> This line of reasoning works for numbers but not machines. For machines
> it would show that a linear-bounded automaton (LBA) is universal: a
> non-universal machine can easily add to the input a string giving in
> unary the number of steps to emulate the given Turing machine, and a
> suitable LBA can then run the computation that far without exceeding
> its linear space bound.

Alex Smith said:

Quote:

There is a mistake in your reasoning at this point: the linear-bounded-automaton+preprocessor system that you suggest here is not universal. 'A non-universal machine can easily add to the input a string giving in unary the number of steps to emulate the given Turing machine' is false; a universal machine is capable of infinite searches where the existence and value of any answer searched for is not known in advance. For instance, a universal machine can search for counterexamples to the Riemann hypothesis; the linear-bounded-automaton+preprocessor system cannot, unless the preprocessor itself is sufficiently advanced to search for a counterexample itself to know how large the numbers can get before a counterexample can be found. (It would be possible to do this for the specific case of the Riemann hypothesis - or any other specific case - by engineering a specific preprocessor for the exact problem at hand, but this does not work in the general case.)

Likewise, a system consisting of one push-down automaton is not universal, not even with a preprocessor that's a push-down automaton itself, due to the fact that information can only be sent one way (from the preprocessor to the main system), and therefore push-down automata are not universal by this definition, even though systems that are like push-down automata except with two memory stacks (this is one way of interpreting Boehm's system P'', for instance) are universal.

The definition of universality used is important in the situation in question; the problem of what sorts of initial conditions are allowed is a tricky one. There seems to be a lemma in question, which is that if a sequence of systems, each of which reads input and produces output, are set up so that each one reads the output of the previous one, and none of them are universal, then the resulting system cannot be universal. In the case of the prize problem, this lemma was basically incorporated into the definition of universality. (The technical details of the original problem specified 'One common definition requires that the encoding of input for the universal system, and the decoding of its output, must both be done by a computation that always halts after a finite number of steps. One difficulty with this definition is that it may be too restrictive for infinite tapes. It is not clear, for instance, whether one should allow a Thue-Morse sequence <http://www.wolframscience.com/nksonline/page-890> in the initial conditions.'; the definition that was used in the prize basically said that what was required was that the encoding of the initial condition could be infinite but had to be done in an obviously non-universal way, and given this definition the lemma in question is trivial.)

To prove or disprove this lemma for a different definition, it is necessary to first define 'universal' in a way which uses a different treatment of the way initial conditions are handled; it may well be that it is true with some definitions of 'universal' and false with others. In this case the situation would be that in the absence of the clarifications of the meaning of the problem that were decided upon by the committee that decided the outcome of the Wolfram 2,3 Turing Machine Research Prize, the problem of deciding whether the machine in question is universal would be ambiguous (but these clarifications were given and decided upon, so any ambiguity was resolved in this case).

In any event, the construction of the initial condition is done in a highly computationally simple manner, making it intuitively reasonable that the system is universal; from my point of view, a definition of universality that excludes this sort of initial condition would be inconsistent with an intuitive view of what universality is.

End quote

John McCarthy said:

http://cs.nyu.edu/pipermail/fom/2007-October/012141.html

Quote:

In the 1950s I thought that the smallest possible (symbol-state product) universal Turing machine would tell something about the nature of computation. Unfortunately, it didn't. Instead, as simpler universal machines were discovered, the proofs that they were universal became more elaborate, and so did the encodings of information.

The proof that Wolfram's 3x2 is universal is reported to be 44 pages, and to go via Minsky's tag systems.

End quote

Then, while surfing around over the past week or so, I ran across the term "Turing tarpit" and found this quote:

http://esoteric.voxelperfect.net/wiki/Turing_tarpit

In the final chapter, "Very Simple Bases for Computability," of his 1967 book Computation: Finite and Infinite Machines, Marvin Minsky writes:

Quote:

The reader is welcome to enter the competition [to design the smallest universal Turing machine ...] although the reader should understand clearly that the question is an intensely tricky puzzle and has essentially no serious mathematical interest.

End quote

Well, after reading the 44-page proof submitted by Alex Smith, not being able to make heads or tails of it myself, and not feeling too bad about that because it seems that others much smarter than I am are having trouble with it too, I started wondering whether John and Marvin were right.

McCarthy's relationship says: the simpler the machine, the more complex the proof that it is universal.

It seems to me that the more complex the "proof" is, the less of a "proof" it is, too. Given a proof, there should be no doubt that the proof is a proof. If there is any doubt, then certainly we shouldn't call it a proof...

But above I'm using the term "complex" simply to mean *hard to understand* or difficult to verify. This is somewhat different from the usage of "complex" within complexity theory, as far as I can tell.

Eray had talked about porting Alex Smith's compiler to LISP. Certainly, if this could be done, then there would be no doubt that a 2/3 LBA was universal. Even McCarthy, as he said here:

http://cs.nyu.edu/pipermail/fom/2007-October/012142.html

Quote:

In my previous message, I forgot to brag that it is easy to show that the Lisp function eval is universal.

End quote

would have to accept this as an undeniable "proof" that the 2/3 machine was universal.
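As an aside, the spirit of McCarthy's remark can at least be gestured at in a few lines. This is only a toy sketch, written in Python rather than Lisp, and the function names and the tiny set of supported forms are my own simplifications, nothing like a full eval:

```python
# Toy Lisp-style evaluator -- a sketch of McCarthy's point that eval is
# short to write.  Handles only variables, numbers, if, lambda, and
# application; the names here are my own choices, not McCarthy's code.
import operator

def evaluate(expr, env):
    if isinstance(expr, str):            # variable reference
        return env[expr]
    if not isinstance(expr, list):       # number: self-evaluating
        return expr
    head = expr[0]
    if head == 'if':                     # (if test then else)
        _, test, then, alt = expr
        return evaluate(then if evaluate(test, env) else alt, env)
    if head == 'lambda':                 # (lambda (params) body)
        _, params, body = expr
        return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
    fn = evaluate(head, env)             # application: (f arg ...)
    return fn(*[evaluate(a, env) for a in expr[1:]])

env = {'+': operator.add, '-': operator.sub, '*': operator.mul, '=': operator.eq}
# ((lambda (n) (* n n)) 7)
print(evaluate([['lambda', ['n'], ['*', 'n', 'n']], 7], env))  # prints 49
```

A real argument for universality would of course need recursion and a proper reader, but the core dispatch really is this small.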

But such an undertaking, porting a language as *rich* as LISP to run within the constraints imposed by the 2/3 machine, would probably be a very labor-intensive task if done directly.

But I happened to run across a reference to the brainfuck (or, more PC, brainf*ck) language, which consists of just eight commands and has already been proven to be just as universal as LISP!

> Increment the pointer.
< Decrement the pointer.
+ Increment the byte at the pointer.
- Decrement the byte at the pointer.
. Output the byte at the pointer.
, Input a byte and store it in the byte at the pointer.
[ Jump forward past the matching ] if the byte at the pointer is zero.
] Jump backward to the [ unless the byte at the pointer is zero.
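The eight commands above are simple enough that the whole language fits in one short function. Here is a minimal sketch of an interpreter, written in Python for readability (the function name `bf`, the 30,000-cell tape, and the byte-wraparound behavior are my own conventional choices, not part of any official spec):

```python
# Minimal brainfuck interpreter -- one branch per command.
def bf(program, inp=""):
    tape = [0] * 30000          # conventional 30,000-cell tape of bytes
    ptr = 0                     # data pointer
    pc = 0                      # program counter
    out = []                    # collected output characters
    ins = list(inp)             # pending input characters
    # Pre-match brackets so [ and ] can jump in one step.
    jumps, stack = {}, []
    for i, c in enumerate(program):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(program):
        c = program[pc]
        if c == '>':
            ptr += 1
        elif c == '<':
            ptr -= 1
        elif c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.':
            out.append(chr(tape[ptr]))
        elif c == ',':
            tape[ptr] = ord(ins.pop(0)) if ins else 0
        elif c == '[' and tape[ptr] == 0:
            pc = jumps[pc]      # jump past the matching ]
        elif c == ']' and tape[ptr] != 0:
            pc = jumps[pc]      # jump back to the matching [
        pc += 1
    return ''.join(out)

print(bf("++++++++[>++++++++<-]>+."))  # prints "A" (8 * 8 + 1 = 65)
```

Under 40 lines, which is part of the point: the language itself is tiny; only the programs written in it are big.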

Here is "Hello World!" in BF:

++++++++++[>+++++++>++++++++++>+++>+<<<<-]>++.
>+.
+++++++.
.
+++.>++.
<<+++++++++++++++.
>.
+++.
------.
--------.>+.
>.

According to the author, it was originally developed on the Amiga with the intention of being the world's smallest compiler, and at its smallest it was under 200 bytes.

Certainly it would be easier to implement brainfuck's eight commands than it would be to implement the richness of LISP, and since BF can do LISP, if the 2/3 machine can do BF, it can do LISP.

I was talking to a friend of mine (who's rather computer illiterate) and I was telling him how this language had only eight commands, to which he said, "So, it's a really simple language to learn." I tried to explain that while it might be easy to remember the complete set of eight commands, this does not mean it's a simple language to program in.

The richness of a language means there are many more ways to accomplish an arbitrary programming task *within a given number of statements* of the language. There may be thousands of ways to accomplish a given task in LISP within 100 lines of code, while the same task in BF might take ten to thousands of lines of code. This is what gives high-level languages the power to be "simple". But here "simple" simply means redundant. You reduce the length of the "programs" written in the language only at the expense of the length (redundancy) of the given compiler.

But, with respect to the 2/3 LBA, the question of universality *of the machine* just means "can it do a universal language?"

Machines, ***all physical machines***, be they multi-core Intel LBAs or 2-state 3-color LBAs, are finite and complete. Universal languages, from brainfuck to LISP, are infinite only at the expense of being incomplete.

If the machine is (complete and finite) and the language the machine is doing is (infinite and incomplete), then we can be just as sure as we can be that we are doing universal computation...

John J. Gagne