
Re: [Artificial Intelligence Group] My current thinking on AI.

  • Troy
    Message 1 of 6 , Oct 19, 2002
      Alan.. I have been reading this and re-reading it.. give me a few more
      days to digest.

      Troy

      Alan Grimes wrote:

      > hi...
      >
      > While I may prove to be a very good theorist, I simply don't currently
      > have the knowledge-base of a competent programmer. I know the theory
      > behind a language but I don't know all the "headers" and libraries
      > needed to do practical programming with it. It is uncertain whether it
      > would be a good investment for me to program some of my all-too-scarce
      > cortex with said knowledge when I might be better served by studying
      > neuroscience instead and improving myself as a theorist.
      >
      > While this decision might make sense, I am unable, by myself, to test
      > my theories now or in the near future. Because I want to see the
      > Singularity happen Real Soon Now I will go ahead and dump what I have
      > now on this list with hopes that someone who is a better programmer than
      > I can take it the last mile.
      >
      > The ideas I present here owe the greatest debt to a person identified as
      > "neuromorph" who used to post to "arcondev". In a similar way I hope
      > that my contribution to the idea will be enough to carry it forward to a
      > working implementation. That isn't to say I won't continue to refine my
      > theories or that what I present in this [series] of articles is all
      > there is. I will try to point out any "missing links" that I am aware
      > of.
      >
      > Before I begin with my theories, I should make some points about
      > methodology that may turn out to be the only useful piece of my
      > ramblings...
      >
      > My basic method is that of a reverse engineer. I deliberately excluded
      > all philosophical and metaphysical nonsense from consideration. What
      > does it take to build a system that has capabilities equivalent to what
      > we know about the human brain? The human brain processes tactile,
      > proprioceptive, chemical, acoustic and visual information. The human
      > brain produces a sequence of commands to what essentially is a
      > biological robot. For my purposes, this and various qualitative
      > requirements on the outputs produced are the only things to be
      > considered. Explicitly excluded considerations include "Self Awareness",
      > "Free Will", the soul, and other arcania along those lines.
      >
      > This is a NO BULL approach to AI. If a concept is not absolutely
      > essential to meeting one of the basic functional requirements, IT GOES.
      > The things that are absolutely required are quite few.
      >
      > Okay, now for the details.
      >
      > My first big break on the problem came from two things Neuromorph did for
      > me. He recommended the book "Principles of Neuroscience", which is
      > already dated material from three years ago. And, more importantly, he put
      > me on to the idea that what the brain does is the process of
      > abstraction.
      >
      > Abstraction is a conceptual improvement over blindly wiring together
      > neural nets. A neural network can be reduced to an "Adaptive Logic
      > Network". This network has two key features. As a whole, it implements a
      > single behavior through blind adaptation. At a significant depth, a
      > single line of connection has an effect on a specific pattern in the
      > output.
      >
      > Neuromorph's great insight was that the key is this mechanism of
      > Abstraction. His proposal is a specific organization of neurons that
      > maximize their utility for abstracting input and output patterns. By
      > clustering neurons into abstractors, features of the output behavior can
      > be manipulated by activating different abstractions. Instead of one
      > fixed trained behavior, you build abstractions which each orchestrate
      > lower-level abstractions which eventually implement arbitrary behaviors.
      >
      > Unfortunately, he has been unable to progress beyond a very
      > computationally expensive simulation of biological neurons to achieve
      > this. I hope my contribution will be the proposal of a distilled version
      > of this process which should have some very attractive features and be
      > quite cheap to implement.
      >
      > The system-level model I am using is that of a cybernetic system. It is
      > a system that only functions when it is incorporated into some organized
      > feedback loop. This can be a robot, a VR body, or some abstract system
      > that meets the basic requirement of inputs with associated outputs. For
      > clarity I will use the robotic example whenever necessary.
      >
      > In regular computer programming the language that serves as the most
      > elegant example of abstraction is FORTH. In FORTH, _ALL_ program
      > elements are called words, the most elemental being : and ; . A
      > definition is of the form:
      >
      > : newWord oldword1 oldword2 oldword1 oldword3 ;
      >
      > Instead of programming an abstract machine with this as forth does, I
      > have been working on a model that I hope can transcend the limitations
      > of previous systems by discarding the symbols. In this new model, every
      > member of the set N (the natural numbers) is potentially a meaningful program in
      > this system. I don't know whether this idea will hold up but I'll try
      > to explain the inference that led me to it. I was reading GEB:EGB and
      > learning about Godel numbers and Godel incompleteness. While doing that
      > I began to wonder about what would qualify as an "informal" system and
      > whether that would have any features that would make it easier to
      > implement the meta-logical processes the brain accomplishes. (This is
      > nothing more than the mind's ability to compose and manipulate arbitrary
      > formal systems.)
      >
      > I used the working name of "Spherical" to describe this rather bizzare
      > notion. Now that my ideas have made a few baby-steps towards maturity I
      > can now talk about them. It doesn't matter what the language is called
      > only that it is created afresh for each cybernetic system.
      >
      > Every atomic output such as "step_servo" or "increase tension on muscle"
      > is reduced to a word in my "unforth" syntax. These symbols are, as in
      > forth, held in a dictionary. As such, each is assigned a number from N.
      > When a new definition is read, each symbol is looked up and each word in
      > the definition is replaced by its number, creating a Godel number.
      >
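      > As a rough illustration only -- the exact encoding is still open; this
      > sketch assumes the standard prime-power Godel encoding and made-up word
      > names, in Python just for concreteness:
      >
      > def primes():
      >     n, found = 2, []
      >     while True:
      >         if all(n % p for p in found):
      >             found.append(n)
      >             yield n
      >         n += 1
      >
      > class Dictionary:
      >     def __init__(self, atoms):
      >         # atomic outputs ("step_servo" etc.) get the first indices in N
      >         self.words = {name: i + 1 for i, name in enumerate(atoms)}
      >         self.bodies = {}                    # index -> list of indices
      >
      >     def define(self, name, body):
      >         # ": name w1 w2 ... ;" -- look up each word, then fold the
      >         # sequence of indices into a single Godel number
      >         seq = [self.words[w] for w in body]
      >         g = 1
      >         for p, idx in zip(primes(), seq):
      >             g *= p ** idx
      >         self.words[name] = g                # the number *is* the new word
      >         self.bodies[g] = seq
      >         return g
      >
      > d = Dictionary(["step_servo", "reverse_servo", "wait_tick"])
      > print(d.define("step_twice", ["step_servo", "wait_tick", "step_servo"]))
      >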
      > The "static" or pre-defined words of the cybernetic system, "step_servo"
      > and the like, form the semantic foundation of the AI paradigm.
      >
      > At this point, the attentive reader should be able to compose a program
      > for his robot which implements this funky variant of FORTH. At this
      > point the system should be an effective scripting tool for causing the
      > robot to do specific behaviors. There is an issue with time and
      > coordination. The time problem is also shared by the cortex of the brain
      > and apparently is rectified by the cerebellum. I have not worked out the
      > programming equivalent yet. Coordinating the robot's actions with
      > outside events is accomplished by linking specific input abstractions to
      > output modifier abstractions. The details of this are still fuzzy to me.
      > =(
      >
      > Generating behaviors like this is pretty trivial to this point. Perhaps
      > a more conventional system could accomplish the same things. The key
      > here is, again, abstraction. Let's say we automated the abstraction
      > definition routine. The system will watch the "kernel actor" (which I
      > will describe eventually) trigger the atomic abstractions. It will
      > record every pattern that is executed and then store it as if it were a
      > compiled word. If that pattern is executed again, the underlying
      > execution mechanism will notice that an abstraction already exists for
      > that behavior and then suggest it to the "kernel actor". Later, the
      > "kernel actor" will tend to prefer the higher level abstraction. This is
      > the mechanism underlying the learning of behaviors in humans. Over time
      > you need to think less and less about the stuff you do. This is how that
      > process works. I make this claim with complete confidence.
      >
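      > A toy sketch of that loop (names like AbstractionStore and the "repeat
      > twice then compile" policy are my own placeholders, not a finished
      > design):
      >
      > from collections import Counter
      >
      > class AbstractionStore:
      >     def __init__(self):
      >         self.seen = Counter()      # pattern -> times it has been executed
      >         self.compiled = {}         # pattern -> abstraction id
      >
      >     def observe(self, pattern):
      >         # record a sequence of atomic actions the kernel actor just ran;
      >         # once a pattern repeats, compile it and suggest it back
      >         pattern = tuple(pattern)
      >         self.seen[pattern] += 1
      >         if self.seen[pattern] >= 2 and pattern not in self.compiled:
      >             self.compiled[pattern] = len(self.compiled) + 1
      >         return self.compiled.get(pattern)   # None until an abstraction exists
      >
      > store = AbstractionStore()
      > store.observe(["step_servo", "wait_tick", "step_servo"])
      > hint = store.observe(["step_servo", "wait_tick", "step_servo"])
      > # the kernel actor would now prefer invoking abstraction `hint` over
      > # re-issuing the three atomic commands
      >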
      > On the input side of things, abstractions are generated in almost
      > exactly the same way. Instead of learning patterns first generated by
      > the "kernel actor", the input side learns patterns by observing the
      > behavior of the input atoms. When a pattern match is found, those raw
      > inputs are "consumed" by the abstraction and removed from the list of
      > active patterns. The "Kernel Actor" only the top level report which
      > makes it to consciousness. This is the basic mechanism for making sense
      > out of the world.
      >
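      > Something like this, maybe (just a placeholder sketch; the actual
      > matching rule is the open question):
      >
      > def consume(active, known_patterns):
      >     # active: currently firing input atoms/abstractions
      >     # known_patterns: dict mapping a tuple of atoms -> abstraction name
      >     changed = True
      >     while changed:
      >         changed = False
      >         for pattern, name in known_patterns.items():
      >             if all(a in active for a in pattern):
      >                 active = [a for a in active if a not in pattern] + [name]
      >                 changed = True
      >     return active   # only these top-level reports reach the kernel actor
      >
      > atoms = ["edge_left", "edge_right", "dark_center", "warm"]
      > patterns = {("edge_left", "edge_right", "dark_center"): "doorway_ahead"}
      > print(consume(atoms, patterns))   # ['warm', 'doorway_ahead']
      >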
      > Memory also functions on the basis of abstractions, except that it works
      > from the top down, an inverted version of the sensation system. The base
      > abstractions of memory are the most abstract concepts. The details are
      > held at the lowest level. I haven't worked out the specifics of this yet.
      > This organization, however, is apparent in studies of the temporal lobe.
      > The replay mechanism is rather simple.
      >
      > A vaguely familiar stimulus first activates the most abstract layer of
      > the memory system. Say, a picture of President Ford. This first level of
      > recognition will activate the next layer down, which then scans the
      > input and classifies it further. Each memory abstraction, once
      > triggered, will activate a corresponding primary sensory abstraction, in
      > effect recalling the memory to immediate perception. The time and space
      > requirements for this should both be logarithmic.
      >
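      > If the memory abstractions form a roughly balanced tree, a recall looks
      > something like this (a sketch under that assumption; the labels are
      > invented):
      >
      > class MemoryNode:
      >     def __init__(self, label, children=None, sensory=None):
      >         self.label = label           # e.g. "U.S. president", "Gerald Ford"
      >         self.children = children or []
      >         self.sensory = sensory       # primary sensory abstraction to re-activate
      >
      >     def recall(self, stimulus, replay):
      >         # walk down one branch per layer, so time is O(depth) ~ O(log n)
      >         if self.sensory:
      >             replay(self.sensory)     # recall the memory to immediate perception
      >         for child in self.children:
      >             if child.label in stimulus:
      >                 return child.recall(stimulus, replay)
      >         return self.label
      >
      > ford = MemoryNode("Gerald Ford", sensory="face_of_ford")
      > presidents = MemoryNode("U.S. president", children=[ford])
      > people = MemoryNode("person", children=[presidents])
      > people.recall({"U.S. president", "Gerald Ford"}, replay=print)
      >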
      > In a complete mind, the input and output systems, previously described,
      > have a symmetric relationship and interact with each other in the
      > following ways.
      >
      > Since all behavior is implemented through output abstractions and
      > literals, "internal" behaviors are handled through output abstractions
      > as well. Output abstractions can manipulate the input circuitry and the
      > results are imagination and focused attention. Your internal voice is
      > nothing more than your output abstractions operating on your sensory
      > systems creating a cognitive loop.
      >
      > Similarly, one function of the input system is to report back to the
      > output mechanism the success or failure of given behaviors. The disorder
      > of communicative dysphasia is an example of what happens when this
      > reporting function is damaged. In such patients, the input system is
      > unable to tune the output system and the output abstractions begin to
      > become corrupted, resulting in disorganized speech. The exact protocol
      > for this tuning function is yet to be discovered.
      >
      > What I've discussed so far should cover about 70% of all cortical
      > function. A system, as described, should be able to form fairly complex
      > understandings of the world composed of built-up abstractions. This
      > "Kernel Actor" I have been talking about is composed of the limbic
      > system and some areas of cortex. This system is responsible for
      > directing the intellectual faculties mentioned earlier. By this system
      > the understandings discovered through the abstractions mentioned before
      > are processed in a more sequential manner. Of critical importance,
      > there is a mechanism which implements motivation and several instinctual
      > behaviors. Hopefully further reading in "Principles of Neuroscience"
      > will reveal more about how these motivations function.
      >
      > For the present, let's assume that these instincts and root qualia are
      > implemented by a combination of hard-wired abstractions and state
      > machines. On top of this kernel there is an auxiliary abstraction system
      > which simply records patterns of activity in the kernel actor itself.
      > These abstractions serve the purpose of guiding the "kernel actor"
      > towards activating certain output abstractions. The function of this
      > abstraction net is to work in conjunction with the root behavior state
      > machine and guide it to the best output abstraction for the job.
      >
      > As the computational faculties of this root actor are limited, it can
      > only remember sequences of output abstractions of a fixed length. As
      > such, it is unable to implement complex behaviors without a
      > well-developed list of available output abstractions.
      >
      > My understanding of these things is far from perfect, but I do have a few
      > more things to say about the "metaformal system" I mentioned above.
      >
      > I was intrigued by the 1854 classic by George Boole, _The Laws of
      > Thought_. Though I haven't studied it directly, the idea was that natural
      > language can be mapped onto a calculus of predicates. The abstract idea
      > expressed by each sentence can be expressed in an equation. This is an
      > extremely powerful idea because the activation of one input abstraction
      > can be linked to an abstract set of rules which describe it, which are,
      > in turn, expressed in abstract rules, in such a way that its structure
      > is its semantics, a pure abstract semantics.
      >
      > Using a prolog-type execution mechanism, it should be possible to
      > evaluate any possible formal system. Cognition in the brain is somewhat
      > limited because it is using a lookup table methodology (A table of
      > compiled abstractions), as described above. It should be possible to
      > find a mechanism which can yield "good enough" solutions in the general
      > case.
      > The problem _IS_ in NP so god-like reasoning does require quantum
      > computation. Again, I must stress that the brain isn't nearly that
      > smart. Its ability to do math is based on memorizing rule-lookup tables
      > and applying them using chalk. Advanced mathematicians have more and
      > better abstractions and can make further leaps of logic than the rest of
      > us but they DO NOT have a truly general-purpose execution mechanism in
      > their heads.
      >
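      > To make the "rule-lookup table plus chalk" point concrete (a toy
      > illustration, not a claim about how neurons actually encode it):
      >
      > # add two numbers using only a memorized single-digit addition table
      > TABLE = {(a, b): a + b for a in range(10) for b in range(10)}
      >
      > def add_with_chalk(x, y):
      >     result, carry = [], 0
      >     xs, ys = list(map(int, str(x)))[::-1], list(map(int, str(y)))[::-1]
      >     for i in range(max(len(xs), len(ys))):
      >         a = xs[i] if i < len(xs) else 0
      >         b = ys[i] if i < len(ys) else 0
      >         s = TABLE[(a, b)] + carry   # look up the memorized fact, add the carry
      >         result.append(s % 10)
      >         carry = s // 10
      >     if carry:
      >         result.append(carry)
      >     return int("".join(map(str, result[::-1])))
      >
      > print(add_with_chalk(478, 256))   # 734
      >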
      > This is, of course, a beginning. I have high hopes for this line of
      > research and hope someone can pick up the football from here and make a
      > touchdown. I will continue to work on it and try to assist, in any way I
      > can, anyone who is working on this. I just want the AI problem solved at
      > this point.
      >
      > --
      > Sometime in 1996 God reached down from the heavens and created a game
      > called "Terranigma". ;)
      > http://users.rcn.com/alangrimes/
      >




    • Troy
      Message 2 of 6 , Oct 20, 2002
        Some initial thoughts below. Perhaps more later.

        Alan Grimes wrote:

        > Abstraction is a conceptual improvement over blindly wiring together
        > neural nets. A neural network can be reduced to an "Adaptive Logic
        > Network". This network has two key features. As a whole, it implements a
        > single behavior through blind adaptation. At a significant depth, a
        > single line of connection has an effect on a specific pattern in the
        > output.

        In some respects, the "single behavior through blind adaptation" can the
        thought of as a strength of NNets. I like to think of NNets as the low
        end of intelligence, while abstraction needs to be done somewhat
        differently, more symbolically.

        Getting neural nets to abstract complex relationships has been a
        challenge for many years, and it is a difficult one. Two of the best
        results to date have been the NN that could play backgammon better than
        experts and the NN that could conjugate past tenses of English verbs.
        That is about as good as it has gotten.


        >
        > Instead of programming an abstract machine with this as forth does, I
        > have been working on a model that I hope can transcend the limitations
        > of previous systems by discarding the symbols.

        Not sure what you mean here. Symbols are a very important part of
        higher level reasoning.. as you mention later on.

        > In this new model, every
        > member of the set N (the natural numbers) is potentially a meaningful program in
        > this system. I don't know whether this idea will hold up but I'll try
        > to explain the inference that led me to it. I was reading GEB:EGB and
        > learning about Godel numbers and Godel incompleteness. While doing that
        > I began to wonder about what would qualify as an "informal" system and
        > whether that would have any features that would make it easier to
        > implement the meta-logical processes the brain accomplishes. (This is
        > nothing more than the mind's ability to compose and manipulate arbitrary
        > formal systems.)

        The GEB:EGB ideas on incompleteness are interesting, but I think they
        are mainly philosophical. We are so far away from having a system that
        can understand incompleteness that it is almost a non-issue currently.

        I think the brain can understand the incompleteness of formal systems,
        but that is simply a logical deduction, because it has many formal
        systems to work with. I think it can be arrived at logically, once you
        step outside the formal system itself and realize there are other
        systems. A formal system can't deduce incompleteness on its own,
        because it is bounded by its own constraints (this is exactly what godel
        found) but the brain is a collection of *different* formal systems and
        it eventually realizes there are tricks that each system can play on
        another. For example, perceptual tricks - or even magic tricks, are
        events that occur outside a single formal system, but the brain can
        logically recognize them for what they are - perceptual tricks to make
        you think something is happening that you know is impossible. I don't
        think this is too difficult once you have an agent with a number of
        cooperating formal systems. Programming an agent to have a
        meta-awareness of the limitations and capabilities of each system would
        be the next step to solving this problem.

        > I used the working name of "Spherical" to describe this rather bizzare
        > notion. Now that my ideas have made a few baby-steps towards maturity I
        > can now talk about them. It doesn't matter what the language is called
        > only that it is created afresh for each cybernetic system.
        >
        > Every atomic output such as "step_servo" or "increase tension on muscle"
        > is reduced to a word in my "unforth" syntax. These symbols are, as in
        > forth, held in a dictionary. As such, each is assigned a number from N.
        > When a new definition is read, each symbol is looked up and each word in
        > the definition is replaced by its number, creating a Godel number.
        >
        > The "static" or pre-defined words of the cybernetic system, "step_servo"
        > and the like, form the semantic foundation of the AI paradigm.
        >
        > At this point, the attentive reader should be able to compose a program
        > for his robot which implements this funky variant of FORTH. At this
        > point the system should be an effective scripting tool for causing the
        > robot to do specific behaviors. There is an issue with time and
        > coordination. The time problem is also shared by the cortex of the brain
        > and apparently is rectified by the cerebellum. I have not worked out the
        > programming equivalent yet. Coordinating the robot's actions with
        > outside events is accomplished by linking specific input abstractions to
        > output modifier abstractions. The details of this are still fuzzy to me.
        > =(

        Yes.. this is the hard part.

        > Generating behaviors like this is pretty trivial to this point. Perhaps
        > a more conventional system could accomplish the same things. The key
        > here is, again, abstraction. Let's say we automated the abstraction
        > definition routine. The system will watch the "kernel actor" (which I
        > will describe eventually) trigger the atomic abstractions. It will
        > record every pattern that is executed and then store it as if it were a
        > compiled word. If that pattern is executed again, the underlying
        > execution mechanism will notice that an abstraction already exists for
        > that behavior and then suggest it to the "kernel actor". Later, the
        > "kernel actor" will tend to prefer the higher level abstraction. This is
        > the mechanism underlying the learning of behaviors in humans. Over time
        > you need to think less and less about the stuff you do. This is how that
        > process works. I make this claim with complete confidence.

        Yes.. what you are describing is procedural knowledge and declarative
        knowledge. Check out anything written by John R. Anderson of CMU. He
        describes this as "procedurallizing" knowledge where knowledge is
        compiled into somekind of compressed format.


        > On the input side of things, abstractions are generated in almost

        > exactly the same way. Instead of learning patterns first generated by
        > the "kernel actor", the input side learns patterns by observing the
        > behavior of the input atoms. When a pattern match is found, those raw
        > inputs are "consumed" by the abstraction and removed from the list of
        > active patterns. The "Kernel Actor" only the top level report which
        > makes it to consciousness. This is the basic mechanism for making sense
        > out of the world.

        Yes.. you are right, however, this is very difficult to implement
        computationally.

        > Memory also functions on the basis of abstractions, except that it works
        > from the top down, an inverted version of the sensation system. The base
        > abstractions of memory are the most abstract concepts. The details are
        > held at the lowest level. I haven't worked out the specifics of this yet.
        > This organization, however, is apparent in studies of the temporal lobe.
        > The replay mechanism is rather simple.

        Yes.. this is correct.


        > A vaguely familiar stimulus first activates the most abstract layer of
        > the memory system. Say, a picture of President Ford. This first level of
        > recognition will activate the next layer down, which then scans the
        > input and classifies it further. Each memory abstraction, once
        > triggered, will activate a corresponding primary sensory abstraction. In
        > effect, recalling the memory to immediate perception. The time and space
        > requirements for this should both be logarithmic.

        Yes, this is correct. John Anderson has done a lot of work on this. He
        calls it "spreading activation" and has implemented it quite well in a
        system called ACT-R. Yes, the computations are logarithmic.
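
        Roughly, spreading activation looks something like this (a toy sketch
        of the idea, not the actual ACT-R equations):

        def spread(network, sources, decay=0.5, steps=3):
            # network: dict node -> list of associated nodes
            # activation starts at the sources and decays as it fans out
            activation = {node: 0.0 for node in network}
            for s in sources:
                activation[s] = 1.0
            for _ in range(steps):
                nxt = dict(activation)
                for node, links in network.items():
                    for linked in links:
                        nxt[linked] = max(nxt[linked], activation[node] * decay)
                activation = nxt
            return activation

        net = {"ford": ["president", "car"], "president": ["white_house"],
               "car": ["road"], "white_house": [], "road": []}
        print(spread(net, ["ford"]))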

        > Since all behavior is implemented through output abstractions and
        > literals, "internal" behaviors are handled through output abstractions
        > as well. Output abstractions can manipulate the input circuitry and the
        > results are imagination and focused attention. Your internal voice is
        > nothing more than your output abstractions operating on your sensory
        > systems creating a cognitive loop.

        The inner voice can also be thought of as a symbolic manipulation
        system, which is basically what it does. It allows for cognitive
        manipulation of symbolic representations.

        >
        > Similarly, one function of the input system is to report back to the
        > output mechanism the success or failure of given behaviors.


        This can be hard as well, especially in reinforcement learning. The
        basic problem again is time. It is difficult to know when to assign
        credit to a given behavior that has resulted from a given stimulus.
        This is the credit assignment problem for behaviors and is a problem in
        AI. However, since humans frequently fall victim to the credit
        assignment problem, I see no reason why it should be implemented any
        better in an autonomous agent.


        > function. A system, as described, should be able to form fairly complex
        > understandings of the world composed of built-up abstractions. This
        > "Kernel Actor" I have been talking about is composed of the limbic
        > system and some areas of cortex.

        Remember too that the limbic system plays a large role in emotions as
        well. There is a lot of research that shows that emotions help us to
        learn - which is why you remember traumatic events very clearly.

        >
        > For the present, let's assume that these instincts and root qualia are
        > implemented by a combination of hard-wired abstractions and state
        > machines. On top of this kernel there is an auxiliary abstraction system
        > which simply records patterns of activity in the kernel actor itself.
        > These abstractions serve the purpose of guiding the "kernel actor"
        > towards activating certain output abstractions. The function of this
        > abstraction net is to work in conjunction with the root behavior state
        > machine and guide it to the best output abstraction for the job.
        >
        > As the computational faculties of this root actor are limited, it can
        > only remember sequences of output abstractions of a fixed length. As
        > such, it is unable to implement complex behaviors without a
        > well-developed list of available output abstractions.

        Yes.. these abstractions are, however, difficult to manage because humans are
        constantly reorganizing their abstractions.

        You might want to check out a book called "Conceptual Spaces" by
        Gardenfors (SP?) He talks at great length about how to organize
        abstractions into a conceptual space. The idea of conceptual spaces has
        been around for a while. I have been looking at Latent Semantic Analysis
        (LSA) which can take any language and create a conceptual hyperbolic
        space out of text. It is a first step, but this is the hard part...
        organizing abstractions.
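
        In case it helps, the core of LSA is just a singular value
        decomposition of a term-by-document count matrix, something like this
        bare-bones sketch (numpy; the toy documents are made up):

        import numpy as np

        docs = ["the robot steps the servo",
                "the servo tension increases",
                "memory recalls the picture"]
        vocab = sorted({w for d in docs for w in d.split()})
        counts = np.array([[d.split().count(w) for d in docs] for w in vocab],
                          dtype=float)

        U, S, Vt = np.linalg.svd(counts, full_matrices=False)
        k = 2                                  # keep the top k "concepts"
        word_vectors = U[:, :k] * S[:k]        # each row: a word in concept space
        doc_vectors = Vt[:k, :].T * S[:k]      # each row: a document in concept space
        print(word_vectors.shape, doc_vectors.shape)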


        > My understanding of these things is far from perfect, but I do have a few
        > more things to say about the "metaformal system" I mentioned above.
        >
        > I was intrigued by the 1854 classic by George Boole, _The Laws of
        > Thought_. Though I haven't studied it directly, the idea was that natural
        > language can be mapped onto a calculus of predicates. The abstract idea
        > expressed by each sentence can be expressed in an equation. This is an
        > extremely powerful idea because the activation of one input abstraction
        > can be linked to an abstract set of rules which describe it, which are,
        > in turn, expressed in abstract rules, in such a way that its structure
        > is its semantics, a pure abstract semantics.

        Yes.. he was the father of symbolic representations of thought.
        Symbolic logic has a long history, culminating with Chomsky.

        >
        >
        > Using a prolog-type execution mechanism, it should be possible to
        > evaluate any possible formal system. Cognition in the brain is somewhat
        > limited because it is using a lookup table methodology (A table of
        > compiled abstractions), as described above. It should be possible to
        > find a mechanism which can yield "good enough" solutions in the general
        > case.
        > The problem _IS_ in NP so god-like reasoning does require quantum
        > computation.

        I have always wondered if creativity might require quantum computation;
        however, the typical problem solving which occurs in everyday situations
        is, I think, pretty linear and non-chaotic.


        > Again, I must stress that the brain isn't nearly that
        > smart. Its ability to do math is based on memorizing rule-lookup tables
        > and applying them using chalk. Advanced mathematicians have more and
        > better abstractions and can make further leaps of logic than the rest of
        > us but they DO NOT have a truly general-purpose execution mechanism in
        > their heads.

        Yes.. you might want to look at the novice-to-expert research. There has
        been a lot of study about how people transition from novices to experts.
        This is the core of what you were describing earlier about
        abstractions. How do these abstractions get formed? And when? How do
        you reorganize your knowledge as you gain more and more skill? What you
        are saying is essentially correct. Lower level knowledge gets
        proceduralized and frees up more cognitive resources for higher level
        symbolic manipulation.

        The problem with much of this however, is that even when we have an
        understanding of how things work in the brain, it is very difficult to
        implement computationally.

        Good Thoughts!!

        Troy
      • Alan Grimes
        Message 3 of 6 , Oct 20, 2002
          Troy wrote:
          > > Instead of programming an abstract machine with this as forth does, I
          > > have been working on a model that I hope can transcend the
          >> limitations of previous systems by discarding the symbols.

          > Not sure what you mean here. Symbols are a very important part of
          > higher level reasoning.. as you mention later on.

          What I'm saying is that I don't care what a "word" is called anymore,
          but only its index number.

          As you describe, the system tuning the abstractions may be quite
          sophisticated...

          > The GEB:EGB ideas on incompleteness are interesting, but I think they
          > are mainly philosophical. We are so far away from having a system that
          > can understand incompleteness that it is almost a non-issue
          > currently.

          I think that problem is the whole game...

          > I think the brain can understand the incompleteness of formal systems,
          > but that is simply a logical deduction, because it has many formal
          > systems to work with. I think it can be arrived at logically, once you
          > step outside the formal system itself and realize there are other
          > systems.

          That is one of the key ideas, yes.

          > godel found) but the brain is a collection of *different* formal
          > systems and it eventually realizes there are tricks that each system
          > can play on another.

          I just might be on the tail of something that's even better....

          > I don't think this is too difficult once you have an agent with a number
          > of cooperating formal systems.

          WRONG!!! er, I shouldn't be so dramatic with you as you are being very
          kind to my ideas. (just one more quote down!)

          > Programming an agent to have a meta-awareness of the limitations and
          > capabilities of each system would be the next step to solving this
          > problem.

          Just a step further down the wrong path.

          We don't care about formal systems but are, instead, seeking something
          that starts out with almost literally nothing and builds formal systems
          or haphazard fragments thereof as needed...

          That's the real trick here. I am not certain that my proposal can be
          successful, as the things you have pointed out may not be able to be
          resolved with an approach as primitive as mine, but I am very certain
          that we are on the right road here.

          >> On the input side of things, abstractions are generated in almost
          > > exactly the same way. Instead of learning patterns first generated by
          > > the "kernel actor", the input side learns patterns by observing the
          > > behavior of the input atoms. When a pattern match is found, those raw
          > > inputs are "consumed" by the abstraction and removed from the list of
          > > active patterns. The "Kernel Actor" only sees the top level report
          >> which makes it to consciousness. This is the basic mechanism for
          >> making sense out of the world.

          > Yes.. you are right, however, this is very difficult to implement
          > computationally.

          =(

          > Yes, this is correct. John Anderson has done a lot of work on this.
          > He calls it "spreading activation" and has implemented it quite well in
          > a system called ACT-R. Yes, the computations are logarithmic.

          c00l inf0z.
          I guess you really need to spill your gutz to get the real dope. =P

          > > Similarly, one function of the input system is to report back to the
          > > output mechanism the success or failure of given behaviors.

          > This can be hard as well, especially in reinforcement learning.

          Reinforcement is only necessary in neural systems.

          It should be possible to achieve deterministic learning with the
          abstraction approach...

          > > function. A system, as described, should be able to form fairly
          >> complex understandings of the world composed of built-up abstractions.
          >> This "Kernel Actor" I have been talking about is composed of the
          >> limbic system and some areas of cortex.

          > Remember too that the limbic system plays a large role in emotions as
          > well.

          Exactly. As we evolved from primitive systems, the lowest level
          programming was retained. Even today it plays a central role in many
          human behaviors.

          > There is a lot of research that shows that emotions help us to
          > learn - which is why you remember traumatic events very clearly.

          Definitely.

          > hyperbolic space out of text. It is a first step, but this is the hard
          > part... organizing abstractions.

          uh, hyperbolic space? I only recently graduated from a local community
          college... =\

          > > in turn, expressed in abstract rules, in such a way that its
          >> structure is its semantics, a pure abstract semantics.

          > Yes.. he was the father of symbolic representations of thought.
          > Symbolic logic has a long history, culminating with Chomsky.

          I should spend more time on him...

          > > The problem _IS_ in NP so god-like reasoning does require quantum
          > > computation.

          > I have always wondered if creativity might require quantum computation,
          > however, typical problem solving which occurs during everyday
          > situations I think is pretty linear, non-chaotic.

          A lot of creativity actually comes from failures in normal neural
          processing that happen to produce some new interesting result. Either a
          controlled error generator or some other mechanism should be devised for
          AI.

          > The problem with much of this however, is that even when we have an
          > understanding of how things work in the brain, it is very difficult to
          > implement computationally.

          At least it seems that we know what we're doing at this point and know
          what we need to get better at. =)

          --
          Sometime in 1996 God reached down from the heavens and created a game
          called "Terranigma". ;)
          http://users.rcn.com/alangrimes/
        • Ed Minchau
          Message 4 of 6 , Oct 20, 2002
            --- In artificialintelligencegroup@y..., Alan Grimes
            <alangrimes@s...> wrote:
            > Troy wrote:
            > > > Instead of programming an abstract machine with this as forth
            > > > does, I have been working on a model that I hope can transcend the
            > > > limitations of previous systems by discarding the symbols.
            >
            > > Not sure what you mean here. Symbols are a very important part of
            > > higher level reasoning.. as you mention later on.
            >
            > What I'm saying is that I don't care what a "word" is called
            > anymore, but only its index number.

            A good starting point for the index numbering system for words can be
            found in an old Roget's Thesaurus, where words are divided into 1000
            categories. Words within a category could be similarly broken down
            along sub-axes (greater and lesser degrees of whatever quality is
            expressed by the category, part of speech/number/tense, and so on).
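
            Something along these lines, say (the category numbers and sub-axes
            here are made up for illustration):

            # hypothetical Roget-style index: category * 1000 + degree * 10 + part of speech
            CATEGORIES = {"existence": 1, "heat": 382, "velocity": 274}   # invented numbers
            POS = {"noun": 0, "verb": 1, "adjective": 2}

            def word_index(category, degree, pos):
                # pack a word's position along the three sub-axes into one integer
                return CATEGORIES[category] * 1000 + degree * 10 + POS[pos]

            print(word_index("heat", degree=7, pos="adjective"))   # e.g. "scorching" -> 382072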


            >
            > > Programming an agent to have a meta-awareness of the limitations
            > > and capabilities of each system would be the next step to solving
            > > this problem.
            >
            > Just a step further down the wrong path.
            >
            > We don't care about formal systems but are, instead, seeking
            > something that starts out with almost literally nothing and builds
            > formal systems or haphazard fragments thereof as needed...

            It is the _almost_ in the above sentence that is the kicker. What
            are the bare basic amounts that must be programmed?

            >
            > That's the real trick here. I am not certain that my proposal can be
            > sucessful as the things you have pointed out may not be able to be
            > resolved with an approach as primitive as mine but I am very certain
            > that we are on the right road here.
            >
            > > > On the input side of things, abstractions are generated in almost
            > > > exactly the same way. Instead of learning patterns first generated
            > > > by the "kernel actor", the input side learns patterns by observing
            > > > the behavior of the input atoms. When a pattern match is found,
            > > > those raw inputs are "consumed" by the abstraction and removed from
            > > > the list of active patterns. The "Kernel Actor" only sees the top
            > > > level report which makes it to consciousness. This is the basic
            > > > mechanism for making sense out of the world.
            >
            > > Yes.. you are right, however, this is very difficult to implement
            > > computationally.
            >
            > =(

            I agree here. What is considered a matching pattern between any two
            random inputs? That becomes more difficult as more inputs are added
            to the system.

            My robots will each have 64 inputs and 22 outputs. This would be
            completely unmanageable by a single processor. Instead, I have
            distributed the workload among 10 networked processors, each
            responsible for at most ten inputs and four outputs. At least one
            layer of abstraction is needed for any processor to pass along
            information about its sensors and actuators to the rest of the
            network.

            The source of sensor data itself helps to divide the information
            handled by a single processor as well. For instance, a
            microprocessor board (containing the uP, RAM, ROM, and four parallel
            I/O ports) may be connected to a board which digitizes 8 channels of
            analog input, one board which has four analog input channels and four
            analog output channels, and to two parallel networking PCBs. The
            data from the 8-channel input board (body segment pressure sensors)
            is automatically grouped together in its own subnetwork of input
            neurons, the four inputs and four outputs on the second board
            comprise two subnetworks (with a third subnet to tie them together),
            and the networking boards have a similar setup.

            >
            > > Yes, this is correct. John Anderson has done a lot of work on this.
            > > He calls it "spreading activation" and has implemented it quite well
            > > in a system called ACT-R. Yes, the computations are logarithmic.

            The system I have worked out is not nearly as computation-intensive
            as neural net (or fuzzy cognitive map) systems have been so far.
            Instead of the O(n*n*log(n)) operations for n neurons in most neural
            nets (or n fuzzy rules in fuzzy cognitive maps), mine are O(n).
            This, combined with near-infinite scalability (although
            intra-processor lag time does come into play somewhat), is probably the
            biggest advantage that FUNGAL has over other AI systems. There are
            ten thousand neurons per robot, so this project would be nearly
            impossible otherwise.

            >
            > c00l inf0z.
            > I guess you really need to spill your gutz to get the real dope. =P
            >
            > > > Similarly, one function of the input system is to report back to
            > > > the output mechanism the success or failure of given behaviors.
            >
            > > This can be hard as well, especially in reinforcement learning.
            >
            > Reinforcement is only necessary in neural systems.
            >
            > It should be possible to achieve deterministic learning with the
            > abstraction approach...
            >

            Neural systems, the abstraction approach you mention, fuzzy rule
            sets, subsumption architecture, Minsky's Society of Mind... all of
            these are simply different expressions of the same concept.

            > > > function. A system, as described, should be able to form fairly
            > > > complex understandings of the world composed of built-up
            > > > abstractions. This "Kernel Actor" I have been talking about is
            > > > composed of the limbic system and some areas of cortex.
            >
            > > Remember too that the limbic system plays a large role in emotions
            > > as well.
            >
            > Exactly. As we evolved from primitive systems, the lowest level
            > programming was retained. Even today it plays a central role in many
            > human behaviors.

            I often see it stated that "we will never be able to program a
            machine to have emotions". As if there were something magical or
            supernatural about emotion - or by extension, thought, consciousness,
            and so on. It represents one of the last remnants of the view that
            mankind is the center of the universe.

            Thoughts are not the firing of a single neuron, and memories are not
            a cluster of interconnected nerves. It is the parallel processing of
            digital logic (burst or spike neurons) and fuzzy logic (pulse train
            neurons) signals accomplished by neurons which produces thoughts and
            memories. The neurons are simply the substrates upon which thoughts
            and memories occur. Emotions are the different overlapping modes of
            neuron operation, mediated by the chemicals released by synapses, and
            it is these modes of operation which alter the processing of
            signals, making people think in different ways and about different
            things based upon those emotions.

            Of course AI will have emotions; even if at first there is only the
            single emotion equivalent to "pay attention and obey". Robot
            emotions will be very different from human emotions. The form they
            take will follow the various functions for which those emotions are
            required.


            >
            > > > The problem _IS_ in NP so god-like reasoning does require quantum
            > > > computation.
            >
            > > I have always wondered if creativity might require quantum
            > > computation, however, typical problem solving which occurs during
            > > everyday situations I think is pretty linear, non-chaotic.

            Roger Penrose (in The Emperor's New Mind) suggests that quantum
            mechanical effects are necessary to completely describe the operation
            of the human brain. This ignores the readily-observable nonlinear
            signal-processing effect that occurs in each neuron. In any case, I
            am not seeking to make a machine with god-like reasoning. I just
            want to build one that safely navigates its environment with or
            without explicit external control.

            :) ed
          • Alan Grimes
            Message 5 of 6 , Oct 21, 2002
              Ed Minchau wrote:

              > > Just a step further down the wrong path.

              > > We don't care about formal systems but are, instead, seeking
              >> something that starts out with almost literally nothing and builds
              >> formal systems or hap-hazard fragments thereof as needed...

              > It is the _almost_ in the above sentence that is the kicker. What
              > are the bare basic amounts that must be programmed?

              The basic abstraction system in a framework that applies it to the
              various cognitive tasks.

              Now, the abstraction system itself will require several functions that
              permit abstractions to be optimized over time.

              One key function that we see in the brain is the pair of language
              association areas, Broca's and Wernicke's areas. Whether they must be
              implemented separately in a computer implementation is unclear.

              One way to accomplish this is to create an association list that
              consists of 2-tuples of abstraction pairs. As natural language semantics
              is complex, something beyond 2-tuples will probably be necessary.
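
              Concretely, I imagine something like this to start with (a
              throwaway sketch; the abstraction names are placeholders):

              # a word-association area as a flat list of 2-tuples of abstraction ids
              associations = [
                  ("sound:/dog/", "vision:dog_shape"),
                  ("sound:/ball/", "vision:round_object"),
              ]

              def associated(abstraction):
                  # return everything linked to the given abstraction, either direction
                  out = []
                  for a, b in associations:
                      if a == abstraction:
                          out.append(b)
                      elif b == abstraction:
                          out.append(a)
                  return out

              print(associated("sound:/dog/"))   # ['vision:dog_shape']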

              > > > Yes.. you are right, however, this is very difficult to implement
              > > > computationally.

              > > =(

              > I agree here. What is considered a matching pattern between any two
              > random inputs? That becomes more difficult as more inputs are added
              > to the system.

              The pattern is matched to previous patterns or is stored in a newly
              generated abstraction.

              For complex sensory systems such as hearing and vision, literal mappings
              are used that contain hard-wired associations and relationships.

              For visual information, the primary input is not given as literals but
              as a matrix of abstractions that encode color and brightness. Visual
              information is mapped and analyzed before it is fed into the
              abstraction system, as is seen in the retina and occipital lobe. Since
              the type of patterns you are looking for in visual information can be
              known beforehand, it is possible to analyze it in a linear fashion
              without the generality of the full abstraction system.

              In nature we see that the human eye is optimized for recognizing faces.
              Frogs are optimized for detecting flies...

              I think the best way to solve this is to hack together a system and see
              how bad the problem is...

              > My robots will each have 64 inputs and 22 outputs. This would be
              > completely unmanageable by a single processor. Instead, I have
              > distributed the workload among 10 networked processors, each
              > responsible for at most ten inputs and four outputs. At least one
              > layer of abstraction is needed for any processor to pass along
              > information about its sensors and actuators to the rest of the
              > network.

              There is nothing special about multiprocessor systems. Though there may
              be some engineering considerations that make multiprocessor systems more
              practical for certain applications.

              --
              Sometime in 1996 God reached down from the heavens and created a game
              called "Terranigma". ;)
              http://users.rcn.com/alangrimes/
            • Alan Grimes
              Message 6 of 6 , Oct 25, 2002
                > Yes.. these abstractions are, however, difficult to manage because
                > humans are constantly reorganizing their abstractions.

                > You might want to check out a book called "Conceptual Spaces" by
                > Gardenfors (SP?) He talks at great length about how to organize
                > abstractions into a conceptual space. The idea of conceptual spaces
                > has been around for awhile. I have been looking at Latent Semantic
                > Analysis (LSA) which can take any language and create a conceptual
                > hyperbolic space out of text. It is a first step, but this is the hard
                > part... organizing abstractions.

                I've been bugged by this since I read it but didn't really nail it
                until today. While I haven't yet looked up "concept spaces", it sounds
                like a prime example of putting the model before the system --
                devoting inordinate amounts of effort to creating an overly elegant
                mathematical theory when what we see in the brain is a few relatively
                straightforward concepts wired together into fairly well defined
                functional subsystems...

                While it may turn out to be true that this depth of mathematical
                analysis will enhance our understanding of mind and allow us to create
                more elegant and more powerful minds, for the time being we should force
                ourselves to justify any jump into high theory with a direct comparison to
                some circuit in the brain.

                This is definitely an example of letting the better be the enemy of the
                good.

                --
                Sometime in 1996 God reached down from the heavens and created a game
                called "Terranigma". ;)
                http://users.rcn.com/alangrimes/