I haven't gotten back to this religion thread because I've been swamped, not because I didn't have anything else to add.

If you go back and look, some of you might be surprised to realize that I did not in fact profess to be on either side of the "science is just another religion" debate, because I'm not on either side. I do appreciate Chris Phoenix's exuberant confirmation of my up-to-that-point thinly supported assertions about one of the common stances, and I hope he won't attribute too malicious an intent to my deliberately delayed confession of sympathy for both viewpoints.

The problem, as it so often is, is that the sides are talking right past each other. Of course it's not really true that science is just another belief system, and it is true that some of the people on the other side of academia mean to flatly deny this. But there is another contingent which will concede that science is in fact a more sophisticated and theoretically distinguished belief system, while still insisting that this distinction is not very significant. And their point is much more than just that scientific "knowledge" is always by definition both contingent and incomplete; the much bigger point is that much of our "reality", particularly including most of its morally and politically important aspects, is socially constructed, and thus in a much more profound sense our reality really is _not_ objective.

Ironically, in fact, the more advanced our scientific and technological knowledge becomes, the less and less relevant it becomes to moral and political issues. While on the one hand technology often seems to take issues out of the hands of legislators, by distributing capabilities so widely as to put them beyond governmental control, and on the other hand it produces issues the political system and culture are ill-prepared to deal with, both of these are merely the immediate, incremental effects. The broader, overarching effect is to successively remove scientific and technological constraints on the range of feasible political, economic, and cultural systems people can adopt, thereby putting a progressively greater demand on our collective capacity for imagination, courage, and discretion if we are to determine and follow wise paths rather than go down very dystopian ones.

Stewart Brand made a similar observation in his book "How Buildings Learn": the most successfully adaptable buildings turn out to be those with constraints, such as support columns, which greatly reduce the "design space" that can be considered when contemplating modifications. (Perhaps professional architects could do more with fewer constraints, but most building dwellers are not architects themselves, so apparently less quite often turns out to be more.) Many video game critics (and some movie critics) have similarly suggested that games (or movies) were better back when designers (or directors) couldn't fall back on eye-popping graphics (or stunts & f/x, or sex and violence) to keep players (audiences) entertained. And Jaron Lanier is one among several who have voiced the opinion that while the capabilities of software have in fact gone up as hardware has improved, they have not maintained the same pace of improvement, largely because the quality of the _code_ has at the same time gone very much downhill.

This doesn't bode well for our ability to "cope", as it were, with the continually expanding possibilities that accelerating scientific and technological progress will continue to bring us. JFK observed back in the '60s that we had the power to eliminate hunger in the world, and yet it still hasn't happened. Instead our politicians spend their time, for example, facilitating ever greater abuse of increasingly counter-productive IP laws to hinder everything from online music sharing to the provision of patented drugs to third-world patients. Both failures are due not to technological constraints but to political ones. I don't want to preach to the choir, so I'll stop there, but I'm sure all of you can think of at least a couple of other widely recognized problems which society is either failing to address or is continuing to cause itself because of "political constraints".

On a related theme, "Mark L."'s musings on the likely nature of a native or innate philosophy in AIs actually made something click for me, in a moment of tiredness when I let my guard down enough to truly consider it. One of the memes Jaron Lanier puts forward in his Half a Manifesto is "cybernetic totalism", which is basically the digerati version of George Soros's "market fundamentalism" schtick. It is also, I think, a fair description of the philosophy that could be considered the obvious predisposition, if there is any, of any A.I. system. It is essentially a perfection of the reductionist hypothesis, holding not only that reductionism is valid, but that perception _is_ reality, and that recognizing this "fact" is essential to true understanding and sound moral judgment. The problem, of course, is that it's exactly the same type of ends-trump-means philosophy which produced the devastating seduction of much of the world by nazism, fascism, and despotic communism last century. This philosophy _is_ dangerous, to an even greater extent than Lanier tried to explain.

Fortunately (for my own sanity), I'm still in the John Holland camp (as he articulated it at the 2000 Stanford "Spiritual Robots" debate, shortly after the publication of Bill Joy's infamous Wired article), and don't believe the emergence of A.I. will be nearly as automatic, inevitable, or early as Kurzweil and company expect, so I'm not terribly worried about it. Barring, of course, the frightening possibility of Lanier's inversion hypothesis being validated, producing a perceived success by moving the goalposts. If we let that happen, then we will in fact create our own dystopia, but only by (at least implicit) choice, not due to any force of technological determinism.

I'll try to elaborate my thoughts on Zen and the self-other dichotomy soon as well.

--

Kevin D. Keck

On Apr 24, 2004, at 2:44 PM, Chris Phoenix wrote:
> You mean there's theoretical justification for what I said? Cool! Is
> it thought to extend to systems that are not algorithmically finite as
> well? What about algorithmic approximations to non-A.F. systems? Can
> you give me a reference or two for this?

It is only true for algorithmically finite cases, but since this seems to cover all likely "real" spaces, you get a lot of bang for that buck as a pragmatic matter. As for references, they are sparse, but what you are looking for is probably "non-axiomatic reasoning systems", and Pei Wang's work in this area is probably the best and most accessible on the Internet. There has been an interesting bit of activity over the last year or two toward unifying probability theory, information theory, computational theory, reasoning/logics, and a couple of other bits and pieces as different facets of a single elegant universal conceptual model for algorithmically finite systems. My theoretical point comes from some of the bridgework that is unifying reasoning logics and algorithmic information theory. There isn't a lot out there; the general result is first implied in some papers from the early '90s on universal predictors and in Pei Wang's work, but it has really only been worked out in the last couple of years (and is still a work in progress).

Finite versus Infinite mathematics:

Algorithmically infinite systems are actually the standard assumption for classic theory in these areas, and that assumption is of limited utility. It is how you end up with things like standard first-order logics. The problem is that we missed a lot because of this. Some very interesting things emerge when you restrict the mathematics purely to the finite case, often in areas that were considered mathematically "undefined" in the general case (mostly because the inclusion of infinite parameters forces an undefined value for theorems and functions that have rich, interesting, and definable properties when restricted to purely finite parameters).
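To make that finite/infinite contrast concrete, here is a toy sketch (my illustration, not from the original post): a quantified first-order statement can be decided by brute enumeration over a finite domain, whereas validity over unrestricted (infinite) domains is famously undecidable.

```python
def forall_exists(domain, rel):
    """Decide the statement "for all x there exists y such that rel(x, y)"
    over a finite domain by brute enumeration. Trivially decidable here;
    first-order validity over unrestricted infinite domains is not
    (Church, 1936) -- a toy instance of the finite-case contrast above."""
    return all(any(rel(x, y) for y in domain) for x in domain)

# Example: over the integers mod 5, every element has an additive inverse.
has_inverse = forall_exists(range(5), lambda x, y: (x + y) % 5 == 0)
```

The point is only that restriction to the finite case turns an undecidable question into a mechanical one; nothing deeper is claimed.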

As for what "algorithmically finite" means:

The classic "finite state" is an inadequate system descriptor for the above area of mathematics, and the term "algorithmically finite" denotes something distinct from "finite state", though there are conceptual similarities. I actually coined the distinction a couple of years ago. I used to regularly argue with a math-savvy retired Christian lady about the nature of religion and God in a mathematical context -- I've developed a lot of good pure theory angles in the course of trying to prove mathematical points to her; it was the best exercise of theory I ever got. She made the poignant observation that the apparent algorithmic finiteness of the universe did not seem to have any obvious dependency on the universe actually being a finite state machine in the classical sense. And she seemed to have a point after I thought about it for a bit, which I later formalized.

"Algorithmically finite" means (very roughly) a system that can only express finite intrinsic Kolmogorov complexity in finite time. A properly rigorous definition is fairly difficult to express well, and tonight is not the night. Interesting things that fall out of this are:

1.) This is inclusive of all finite state systems.

2.) The effective Kolmogorov complexity of these systems can vary in time.

3.) This is inclusive of some infinite state systems.

The second property looks mundane, but is actually relatively interesting. It essentially replaces an important given constant in classic computational theory with a function. Since expressible intelligence also varies with Kolmogorov complexity, this has interesting implications. It is worth noting that this can also break the assumptions of some theorems from classic theory.
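The "complexity as a function of time" idea can be illustrated (very loosely -- this is my own sketch, not rogers's formalism) with a compression-based proxy: compressed size is a computable upper bound on Kolmogorov complexity, which is itself uncomputable. A toy system that switches from repetitive to pseudo-random output shows the bound growing with time.

```python
import random
import zlib

def complexity_proxy(data: bytes) -> int:
    # Compressed size in bits: a computable *upper bound* on Kolmogorov
    # complexity; the true quantity is uncomputable.
    return len(zlib.compress(data, 9)) * 8

def system_output(t: int) -> bytes:
    # A toy "system" whose effective complexity varies with time: a
    # repetitive phase for the first 50 steps, then a pseudo-random phase.
    rng = random.Random(0)  # fixed seed so the demo is deterministic
    out = bytearray()
    for step in range(t):
        if step < 50:
            out += b"ab"
        else:
            out += bytes([rng.randrange(256), rng.randrange(256)])
    return bytes(out)

early = complexity_proxy(system_output(50))   # repetitive phase only
late = complexity_proxy(system_output(100))   # includes the random phase
```

Here the complexity estimate of the system's history is a function of time rather than a fixed constant, which is the flavor of the second property above.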

The third property is interesting in that you can have infinite state systems that are mathematically bound to express the computational properties of finite systems over any finite span of time. An example of such a system would be one with a countably infinite state fabric (say, at the resolution of the Planck length) and a finite bound on information propagation (say, the speed of light), resulting in a system which would be mathematically required to do things like express an analog of the Laws of Thermodynamics that falls out of algorithmic information theory. While such a system is nominally infinite state, it is theoretically limited to the expression of finite algorithms, with a Kolmogorov complexity limit that varies in finite time.
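A rough back-of-envelope version of that argument (again my own illustration, with deliberately crude assumptions of one bit per Planck-volume cell): a finite propagation speed means only the cells inside a sphere of radius c*t can causally interact within time t, so the information bound at any finite time is finite, and it grows as the cube of t.

```python
import math

C = 2.998e8              # speed of light, m/s
PLANCK_LENGTH = 1.616e-35  # meters

def bits_in_light_cone(t_seconds: float, bits_per_cell: float = 1.0) -> float:
    # Count Planck-volume cells inside a sphere of radius c*t: a crude
    # upper bound on the information that can causally influence a point
    # within time t. Finite for any finite t, though the state fabric
    # itself is (countably) infinite.
    radius = C * t_seconds
    volume = (4.0 / 3.0) * math.pi * radius ** 3
    return bits_per_cell * volume / PLANCK_LENGTH ** 3

one_second = bits_in_light_cone(1.0)
two_seconds = bits_in_light_cone(2.0)  # radius doubles, bound scales by 8
```

Nothing here is rigorous; it just shows how a finite propagation bound turns a nominally infinite state space into a finite, time-varying complexity limit.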

From a functional standpoint, I would say that the AF model is more general than the classic finite state machine model.

Okay, it's past my bedtime,

j. andrew rogers