
re: chaos and mind.

  • al0nz0tg
    Message 1 of 3 , Jul 16, 2002
      --- In artificialintelligencegroup@y..., wizard_of_frozzbozz wrote:

      > > Graph theory is generally used to describe decision trees...

      > A very myopic view, given that a decision tree is a simple
      > subset of graph theory. By definition, a connection network is in
      > fact a graph.

      While a schematic of the neural pathways would be of great help,
      applying graph theory to it would be most awkward, especially if one
      were to describe each of the millions of cortical columns; even that
      would fail to capture the properties of each column.

      > > That is not possible. You can predict what this cortical emulator
      > > will do just as easily as you could predict when your windows
      > > machine will crash by looking at the memory hardware...

      > Prove that. I am not looking for exactly what it will do, but for
      > some way to predict its behaviour.

      Can you predict the behavior of an infant five years from now? We are
      looking for computational properties; behavior has little to do with
      the cortex. Look at deep brain structures: they are both central to
      behavior and highly predictable. That's how your genes control your
      behavior. The computational properties of the cortex only act as
      faculties.

      Computations are notoriously unpredictable. Go google "computability"
      and "the halting problem" and other texts about the impossibility of
      various meta-programs.
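
The halting-problem point can be illustrated with a small sketch (my illustration, not from the original poster): the best a finite observer can do is simulate a program for a bounded number of steps, so "hasn't halted yet" can never be upgraded to "never halts".

```python
# Illustrative sketch only -- not a real halting decider (none can exist).
# A bounded simulator can confirm halting, but can only answer "unknown"
# for programs that outlast its step budget.

def bounded_halts(program, max_steps=1000):
    """Run `program` (a zero-argument generator) for at most `max_steps`
    steps. Returns True if it halts in time, None if still undecided."""
    steps = 0
    for _ in program():
        steps += 1
        if steps >= max_steps:
            return None  # cannot distinguish "slow" from "never halts"
    return True

def halting_program():
    for i in range(10):
        yield i  # halts after 10 steps

def looping_program():
    while True:
        yield 0  # never halts

print(bounded_halts(halting_program))  # True
print(bounded_halts(looping_program))  # None -- undecided, not "False"
```

Note that the looping case returns None rather than False: no finite budget lets the observer assert non-halting, which is exactly the asymmetry the halting problem formalizes.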

      > Each one is likely to have its own properties; this is what I mean
      > by global behaviour.

      It's not important... Furthermore, several analytic or synthetic
      computations may be in progress simultaneously throughout the cortex.

      > For example, given a natural number x, you can probably
      > tell me something about it, but you can't expect exact results (e.g.
      > you can tell me that the successor of x is not 0, but you can't tell
      > me anything about the exact value of the successor beyond x + 1).

      According to what I have read, such things can be said of cortical
      columns individually but not of the entire cortex...
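
The successor example in the quote above can be written out as a tiny Peano-style sketch (my illustration): general facts about succ(x) hold for every natural number x without identifying x any further.

```python
# Peano-flavored sketch of the quoted example: without knowing which
# natural number x is, we can still assert general properties of succ(x).

def succ(x):
    return x + 1

for x in range(100):          # stands in for "any natural number x"
    assert succ(x) != 0       # the successor is never zero
    assert succ(x) == x + 1   # and it is exactly x + 1; nothing more follows
```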

      > > The cortex _IS_ a Turing machine, though probably not a universal
      > > Turing machine... Its halting properties are no more predictable
      > > than those of any other Turing machine. It just can't be predicted.
      >
      > Prove it. If you wish to make this statement, I challenge you then
      > to produce a proof that no level of prediction is possible.

      A. The cortex has been observed to have a hexagonally tiled
      organization. This organization, along with some functional
      inferences, leads me to believe that it is a kind of cellular
      automaton.

      B. It has been proven that some cellular automata are equivalent to
      Turing machines.

      C. Therefore I deduce that the cortex, being like a cellular automaton,
      is computationally equivalent to a Turing machine.

      It should be noted that there are at least two classes of Turing
      machines: the set of all Turing machines, and the set of UNIVERSAL
      Turing machines (any of which can emulate _all_ other Turing machines).

      I have not read anything about the cortex that would indicate that it,
      in fact, is computationally universal. Yet, if the cellular automata
      analogy holds, then we know that the cortex is computationally at
      level 1 on the Chomsky hierarchy...
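
To make step B concrete, here is a minimal elementary cellular automaton in Python (my illustration), using Rule 110, which has actually been proven Turing-universal. This shows only the CA formalism itself; the cortex-as-CA step A remains the poster's conjecture.

```python
# One-dimensional elementary cellular automaton with wrap-around edges.
# Rule 110 is shown because it is a proven Turing-universal rule.

def step(cells, rule=110):
    """Apply one synchronous update: each cell's next state is the rule
    table bit indexed by its 3-cell neighborhood (left, center, right)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right  # neighborhood as 3 bits
        out.append((rule >> index) & 1)              # look up the rule bit
    return out

row = [0] * 15 + [1] + [0] * 15   # start from a single live cell
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

The rule number itself encodes the lookup table: bit k of 110 gives the next state for the neighborhood whose three cells spell k in binary.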

      > > These types of predictions are not required to engineer a working
      > > cortex.

      > Oh, so then you have built a working cortex without one? I should
      > like to once again see valid proof of this beyond your word.

      The only way that I can think of to do that is constructively... That
      would be inconvenient for me at this time because I need to implement
      some testing tools first... (anyone want to help??)

      > Yes, that may be so, but we are not attaching our equipment to a
      > biological brain; we would be attaching it to the electronic,
      > human-made equipment, so we need to know how best to handle that
      > information.

      The ideas that come to my mind are simple bit-fields or a string of
      integers that act as the cell's state... (cell != neuron)

      Abstract data types are built up by the system and should not be
      included explicitly in its design.
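
A sketch of the bit-field idea, with bit positions that are my own illustrative assumptions (the message does not specify a layout): each cell's state packs into a single integer, and the system builds anything more abstract on top of that.

```python
# Hypothetical bit layout -- the original message specifies none of this.
ACTIVE_BIT    = 0   # assumed: is the cell active this tick?
REFRACT_BIT   = 1   # assumed: is the cell in a refractory period?
POTENTIAL_LSB = 2   # assumed: bits 2..9 hold an 8-bit potential value

def make_state(active, refractory, potential):
    """Pack the three assumed fields into one integer cell state."""
    return (active << ACTIVE_BIT) | (refractory << REFRACT_BIT) \
        | ((potential & 0xFF) << POTENTIAL_LSB)

def is_active(state):
    return (state >> ACTIVE_BIT) & 1

def potential(state):
    return (state >> POTENTIAL_LSB) & 0xFF

s = make_state(1, 0, 200)
print(is_active(s), potential(s))  # 1 200
```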

      > > GAH!!!
      > > "pattern recognition" is such a foobared concept!!!

      > Yes, agreed it is, but without at least some pattern recog. the
      > machine's just not going to be able to work, now is it? Unless you
      > have a way around it, in which case I would be most interested in
      > hearing it.

      I call the concept "Abstraction", which is broken down into the
      processes of synthesis and analysis.
    • wizard_of_frozzbozz
      Message 2 of 3 , Jul 16, 2002
        Once again, as before my replies are of course inline.

        --- In artificialintelligencegroup@y..., "al0nz0tg" <alangrimes@s...>
        wrote:
        > --- In artificialintelligencegroup@y..., wizard_of_frozzbozz wrote:
        >
        > > > Graph theory is generally used to describe decision trees...
        >
        > > A very myopic view, given that a decision tree is a simple
        > > subset of graph theory. By definition, a connection network is in
        > > fact a graph.
        >
        > While a schematic of the neural pathways would be of great help,
        > applying graph theory to it would be most awkward, especially if
        > one were to describe each of the millions of cortical columns;
        > even that would fail to capture the properties of each column.

        I would not suggest building a schematic and then applying graph
        theory to it; that indeed would be tedious, boring, unnecessary
        (hmm, fun for use as torture for unsuspecting people on the net, but
        I digress). Rather, the other way around: use graph theory to help
        build a general theory of the properties of ANNs in a certain
        topology.
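
The "use graph theory for holistic properties of a topology" suggestion can be sketched like this (my illustration; the tiny example networks are assumptions): represent a connection network as an adjacency mapping and ask global questions about the topology, rather than tracing individual units.

```python
# Represent a connection network as node -> list of successor nodes and
# compute a holistic property: whether every unit can influence every other.
from collections import deque

def reachable(graph, start):
    """Breadth-first search: the set of nodes reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def strongly_connected(graph):
    """True if every node can reach every other node."""
    nodes = set(graph)
    return all(reachable(graph, n) == nodes for n in nodes)

ring = {0: [1], 1: [2], 2: [0]}    # feedback loop: strongly connected
chain = {0: [1], 1: [2], 2: []}    # feed-forward chain: not
print(strongly_connected(ring), strongly_connected(chain))  # True False
```

The point is the level of description: the same analysis applies unchanged whether the nodes are neurons, columns, or whole modules.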


        > > > That is not possible. You can predict what this cortical
        > > > emulator will do just as easily as you could predict when your
        > > > windows machine will crash by looking at the memory hardware...
        >
        > > Prove that. I am not looking for exactly what it will do, but
        > > for some way to predict its behaviour.
        >
        > Can you predict the behavior of an infant five years from now? We
        > are looking for computational properties; behavior has little to
        > do with the cortex. Look at deep brain structures: they are both
        > central to behavior and highly predictable. That's how your genes
        > control your behavior. The computational properties of the cortex
        > only act as faculties.

        If you talk to a five-year-old and a ten-year-old, on average is
        there a difference in behaviour? Once again, my graph theory
        approach was intended to look for HOLISTIC properties (I just like
        to capitalize things once in a while), not reductionistic ones.

        > Computations are notoriously unpredictable. Go google
        > "computability" and "the halting problem" and other texts about
        > the impossibility of various meta-programs.
        >
        > > Each one is likely to have its own properties; this is what I
        > > mean by global behaviour.
        >
        > It's not important... Furthermore, several analytic or synthetic
        > computations may be in progress simultaneously throughout the
        > cortex.

        Actually, the properties are quite important, as each one will
        behave differently; in fact they will behave much differently, and
        each topology will work best in a different instance. You can't
        reliably build a system if you don't know how the parts work (at
        least not when you have to build the parts as well).

        > > For example, given a natural number x, you can probably
        > > tell me something about it, but you can't expect exact results
        > > (e.g. you can tell me that the successor of x is not 0, but you
        > > can't tell me anything about the exact value of the successor
        > > beyond x + 1).
        >
        > According to what I have read, such things can be said of cortical
        > columns individually but not of the entire cortex...
        >
        > > > The cortex _IS_ a Turing machine, though probably not a
        > > > universal Turing machine... Its halting properties are no more
        > > > predictable than those of any other Turing machine. It just
        > > > can't be predicted.
        > >
        > > Prove it. If you wish to make this statement, I challenge you
        > > then to produce a proof that no level of prediction is possible.
        >
        > A. The cortex has been observed to have a hexagonally tiled
        > organization. This organization, along with some functional
        > inferences, leads me to believe that it is a kind of cellular
        > automaton.
        >
        > B. It has been proven that some cellular automata are equivalent
        > to Turing machines.
        >
        > C. Therefore I deduce that the cortex, being like a cellular
        > automaton, is computationally equivalent to a Turing machine.

        That is conjecture; B does not imply A, as you are well aware.
        "Leads me to believe" is just a longer version of what you said
        before, unless you wish to give out said functional inferences. I am
        not saying that you are wrong: if you can show, without a hunch,
        that A implies the cortex is a cellular automaton, your proof is
        correct.


        > It should be noted that there are at least two classes of Turing
        > machines: the set of all Turing machines, and the set of UNIVERSAL
        > Turing machines (any of which can emulate _all_ other Turing
        > machines).
        >
        > I have not read anything about the cortex that would indicate that
        > it, in fact, is computationally universal. Yet, if the cellular
        > automata analogy holds, then we know that the cortex is
        > computationally at level 1 on the Chomsky hierarchy...
        >
        > > > These types of predictions are not required to engineer a
        > > > working cortex.
        >
        > > Oh, so then you have built a working cortex without one? I
        > > should like to once again see valid proof of this beyond your
        > > word.
        >
        > The only way that I can think of to do that is constructively...
        > That would be inconvenient for me at this time because I need to
        > implement some testing tools first... (anyone want to help??)
        >
        > > Yes, that may be so, but we are not attaching our equipment to a
        > > biological brain; we would be attaching it to the electronic,
        > > human-made equipment, so we need to know how best to handle that
        > > information.
        >
        > The ideas that come to my mind are simple bit-fields or a string
        > of integers that act as the cell's state... (cell != neuron)

        Yes yes, I know that cell != neuron (unless on the biological level,
        where a neuron is a cell :) )

        > Abstract data types are built up by the system and should not be
        > included explicitly in its design.

        Not building the abstract data types, but the physical hardware to
        support them.

        > > > GAH!!!
        > > > "pattern recognition" is such a foobared concept!!!
        >
        > > Yes, agreed it is, but without at least some pattern recog. the
        > > machine's just not going to be able to work, now is it? Unless
        > > you have a way around it, in which case I would be most
        > > interested in hearing it.
        >
        > I call the concept "Abstraction", which is broken down into the
        > processes of synthesis and analysis.

        The system still has to recognize patterns, does it not? You've
        merely made advances in what you're calling it.