Once again, as before, my replies are inline.

--- In artificialintelligencegroup@y..., "al0nz0tg" <alangrimes@s...>
wrote:
> --- In artificialintelligencegroup@y..., wizard_of_frozzbozz
> > > Graph theory is generally used to describe decision trees...
>
> > Very much a myopic view, given that a decision tree is a simple
> > subset of graph theory. By definition, a connection network is in
> > fact a graph.

> While a schematic of the neural pathways would be of great help,
> applying graph theory to it would be most awkward, especially if one
> were to consider and describe each of the millions of cortical
> columns - and that would fail to capture the properties of each
> column.

I would not suggest building a schematic and then applying graph
theory to it - that would indeed be tedious, boring, and unnecessary
(though perhaps fun as torture for unsuspecting people on the net, but
I digress). Rather the other way around: use graph theory to help
build a general theory of the properties of ANNs with a certain
topology.
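A minimal sketch of that "graph theory first" approach, assuming nothing about any particular model: treat the network's topology as a directed graph and read off holistic properties (edge count, acyclicity, maximum fan-in) instead of cataloguing individual units. Every name below is invented for illustration.

```python
from collections import deque

def layered_topology(sizes):
    """Edge list of a fully connected feed-forward net, e.g.
    sizes=[3, 4, 2] gives nodes 0..8 with dense layer-to-layer edges."""
    edges, offset = [], 0
    for a, b in zip(sizes, sizes[1:]):
        for i in range(a):
            for j in range(b):
                edges.append((offset + i, offset + a + j))
        offset += a
    return edges

def holistic_properties(n_nodes, edges):
    """Global (not per-node) measures of a topology."""
    in_deg = [0] * n_nodes
    adj = [[] for _ in range(n_nodes)]
    for u, v in edges:
        in_deg[v] += 1
        adj[u].append(v)
    # Kahn's algorithm: the graph is feed-forward iff it is acyclic.
    indeg = in_deg[:]
    queue = deque(i for i in range(n_nodes) if indeg[i] == 0)
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return {"nodes": n_nodes, "edges": len(edges),
            "acyclic": seen == n_nodes, "max_fan_in": max(in_deg)}

edges = layered_topology([3, 4, 2])
props = holistic_properties(9, edges)   # 20 edges, acyclic, fan-in 4
```

The point is that these are properties of the topology as a whole, not descriptions of any single unit.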

> > > That is not possible. You can predict what this cortical emulator
> > > will do just as easily as you could predict when your Windows
> > > machine will crash by looking at the memory hardware...
>
> > Prove that. Not looking for exactly what it will do, but a way
> > to predict behaviour - not exactly what it will do.
>
> Can you predict the behavior of an infant five years from now? We are
> looking for computational properties; behavior has little to do with
> the cortex. Look at deep brain structures: they are both central to
> behavior and highly predictable. That's how your genes control your
> behavior. The computational properties of the cortex only act as
> faculties.

If you talk to a five-year-old and a ten-year-old, on average is there
a difference in behaviour? Once again, my graph theory approach was
intended to look for HOLISTIC properties (I just like to capitalize
things once in a while), not reductionistic ones.

> Computations are notoriously unpredictable. Go google "computability"
> and "the halting problem" and other texts about the impossibility of
> various meta-programs.
>
> > Each one is likely to have its own properties; this is what I mean
> > by global behaviour.
>
> It's not important... Furthermore, several analytic or synthetic
> computations may be in progress simultaneously throughout the cortex.

Actually, the properties are quite important, as each one will behave
differently - in fact, much differently; each topology will work best
in a different instance. You can't reliably build a system if you
don't know how the parts work (at least not when you have to build the
parts as well).

> > For example, given a natural number x, you can probably
> > tell me something about it, but you can't expect exact results (aka
> > you can tell me that the successor of x is not 0, but you can't tell
> > me what the exact number of the successor is, beyond x + 1).
>
> According to what I have read, such things can be said of cortical
> columns individually but not of the entire cortex...

>

> > > The cortex _IS_ a Turing machine, though probably not a universal
> > > Turing machine... Its halting properties are no more predictable
> > > than those of a Turing machine. -- It just can't be predicted.
>
> > Prove it. If you wish to make this statement, I challenge you then
> > to produce a proof that no level of prediction is possible.

> A. The cortex has been observed to have a hexagonally tiled
> organization. This organization, along with some functional
> inferences, leads me to believe that it is a class of cellular
> automata.
>
> B. It has been proven that a cellular automaton is equivalent to a
> Turing machine.
>
> C. Therefore I deduce that the cortex, being like a cellular
> automaton, is computationally equivalent to a Turing machine.

That is conjecture; B does not imply A, as you are well aware, and
"leads me to believe" is just a longer version of what you said
before, unless you wish to give out said functional inferences. I am
not saying that you are wrong: if you can make A imply that the cortex
is a cellular automaton without a hunch, your proof is correct.
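For what it's worth, point B can at least be made concrete. Here is a minimal sketch of elementary cellular automaton Rule 110, which Cook proved Turing-complete, so at least one very simple CA really is computationally equivalent to a universal Turing machine. (Whether the cortex is such a CA is, of course, exactly what is in dispute above.)

```python
# Elementary CA Rule 110: each cell's next state is looked up from its
# 3-cell neighborhood (left, self, right) in the bits of the number 110.
RULE = 110

def step(cells, rule=RULE):
    """One synchronous update of a 1-D binary CA with wraparound."""
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]

# Run a few generations from a single live cell.
row = [0] * 15
row[7] = 1
history = [row]
for _ in range(5):
    row = step(row)
    history.append(row)
```

A single seed cell grows a structure leftward; after one step, cells 6 and 7 are live while cell 8 stays dead, per the rule's lookup table.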

> It should be noted that there are at least two classes of Turing
> machines: the set of all Turing machines, and the set of UNIVERSAL
> Turing machines (any of which can emulate _all_ other Turing
> machines).
>
> I have not read anything about the cortex that would indicate that
> it, in fact, is computationally universal. Yet, if the cellular
> automata analogy holds, then we know that the cortex is
> computationally at level 1 on the Chomsky hierarchy...

>

> > > These types of predictions are not required to engineer a working
> > > cortex.
>
> > Oh, so then you have built a working cortex without one? I should
> > like to once again see valid proof of this beyond your word.
>
> The only way that I can think of to do that is constructively... That
> would be inconvenient for me at this time because I need to implement
> some testing tools first... (anyone want to help??)

> > Yes, that may be so, but we are not attaching our equipment to a
> > biological brain; we would be attaching it to the electronic,
> > human-made equipment, so we need to know how best to handle that
> > information.
>
> The ideas that come to my mind are simple bit-fields or a string of
> integers that act as the cell's state.. (cell != neuron)

Yes, yes, I know that cell != neuron (unless on a biological level,
where a neuron is a cell :) ).
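A minimal sketch of the quoted bit-field idea: each cell's state packed into a single integer, and a whole sheet of cells as a flat array of ints. The field names and widths below are invented for illustration; nothing here comes from an actual cortical model.

```python
# Hypothetical fields packed into one int per cell:
ACTIVE_BIT   = 0        # 1 bit:  is the cell currently firing?
PHASE_SHIFT  = 1        # 3 bits: position in a local update cycle
CHARGE_SHIFT = 4        # 8 bits: accumulated activation, 0..255

def make_cell(active, phase, charge):
    assert 0 <= phase < 8 and 0 <= charge < 256
    return (active & 1) | (phase << PHASE_SHIFT) | (charge << CHARGE_SHIFT)

def is_active(cell):
    return cell & 1

def phase(cell):
    return (cell >> PHASE_SHIFT) & 0b111

def charge(cell):
    return (cell >> CHARGE_SHIFT) & 0xFF

grid = [make_cell(0, 0, 0) for _ in range(16)]   # one tiny "column"
grid[3] = make_cell(1, 5, 200)
```

The attraction of this representation is exactly the one the post hints at: the state is opaque machine words, so the hardware interface never needs to know what the fields mean.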

> Abstract data types are built up by the system and should not be
> included explicitly in its design.

I am not talking about building the abstract data types, but the
physical hardware to support them.

> > > GAH!!!
> > > "pattern recognition" is such a foobared concept!!!
>
> > Yes, agreed it is, but without at least some pattern recog. the
> > machine's just not going to be able to work now, is it? Unless you
> > have a way around it, in which case I would be most interested in
> > hearing it.
>
> I call the concept "Abstraction", which is broken down into the
> processes of synthesis and analysis.

The system still has to recognize patterns, does it not? You've
merely made advances in what you're calling it.