Wrappers, Architectures and/or Agent Frameworks for Evolving AI Techniques?
I posted this to comp.ai, but it occurred to me that there might be some
folk here who don't read newsgroups, so....
During my background reading for Living@Home
(http://LivingAtHome.sourceforge.net) I've encountered what seems to
be a blank area in existing research.
Living@Home (L@H) is about evolving behaviour for simulated
creatures (biots). At first I'm using genetic programming, but an ANN
approach is also relevant. I can see ANN as an alternative
core-architecture for the L@H biots or, more interestingly, as the
second strand in a hybrid approach.
In reading up on the area, one of the things I was expecting
to find was studies where people had wrapped an evolving core-system
in some other AI techniques, but I found very little of this nature.
By "evolving core" I mean GP or GA or EC or NN (in fact,
"evolving" in this case is in the sense of "changing with time", so
any "learning" or "adapting" core might apply). For my purposes, I'm
generally considering that the core would hold multiple entities (call
them genes or memes) which compete to be considered "relevant" to the
current situation. I would not consider the single gene case to be so
different that I wouldn't discuss it, however.
From the perspective of some of those core techniques, you
might think that my wrapper functionality (details below) overlaps
with things the core can do. For example, "memory" is something I'd
consider having in the wrapper, but a recurrent NN core would support
its own form of "memory". I have no problem with these clashes. As I
hope you'll see below, I want to use the wrapper to extend the core,
rather than take anything from it. So if the genes can store memories
as internal state (and longer-term by evolving) and the wrapper also
does that (buffering recent input, and longer-term as deliberately
written memories) then I'm sure the system will find uses for both.
For what the wrapper might do, I include quite a wide range of
things (this topic gets very hard to discuss at this point, largely
because it cuts across several areas of study, and doesn't have its
own terminology :) ). However, the sorts of services that the wrapper
might provide to the core are:
-(long term) output from the core which is flagged for keeping
-(immediate) a buffer of recent input
-(working data) output from the core not flagged for keeping
-(working state) activity status of competing core elements
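The four services above could be sketched as a single container the wrapper owns. This is purely illustrative: the class, field names and the "keep" flag are my invention, not anything from the L@H code.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class WrapperState:
    """Hypothetical holder for the four wrapper services listed above."""
    # (long term) core output explicitly flagged for keeping
    long_term: dict = field(default_factory=dict)
    # (immediate) a rolling buffer of recent input
    recent_input: deque = field(default_factory=lambda: deque(maxlen=8))
    # (working data) unflagged core output, free to be discarded
    working_data: dict = field(default_factory=dict)
    # (working state) activity status of competing core elements
    activity: dict = field(default_factory=dict)

    def accept_output(self, key, value, keep=False):
        """Route core output to long-term or working storage by its flag."""
        (self.long_term if keep else self.working_data)[key] = value

state = WrapperState()
state.recent_input.append({"sensor": "eye", "value": 0.7})
state.accept_output("seen-food", True, keep=True)
state.accept_output("scratch", 42)
```

The point of the split is just that the core never decides storage policy itself; it only flags output, and the wrapper decides where it lives and when it is culled.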
To increase evolvability, I would like to fill the core with
many loosely-interacting components (genes). The wrapper would be
responsible for:
-calculating the activity of each gene
-gene to gene communication
(maybe via the "working data", above)
-running any "somatic" evolution that the core might involve
-tracking gene "homogeneity" during sexual reproduction
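The first two responsibilities above (activity calculation, and gene-to-gene communication via the working data) might look something like the following tick loop. The gene representation, the "match"/"emit" callables and the threshold are all assumptions for the sketch:

```python
def wrapper_tick(genes, blackboard, working_data, threshold=0.5):
    """One wrapper cycle: score every gene against the blackboard, then
    let sufficiently-active genes publish into the shared working data,
    where other genes can read it on a later tick."""
    activity = {}
    for gene in genes:
        # (a) calculate the activity of each gene from its BB match
        activity[gene["name"]] = gene["match"](blackboard)
    for gene in genes:
        if activity[gene["name"]] >= threshold:
            # (b) gene-to-gene communication via the working data
            working_data.update(gene["emit"](blackboard))
    return activity

# two toy genes: one fires on threats, one idles at low activity
genes = [
    {"name": "flee", "match": lambda bb: 1.0 if bb.get("threat") else 0.0,
     "emit": lambda bb: {"alarm": True}},
    {"name": "graze", "match": lambda bb: 0.3,
     "emit": lambda bb: {"eating": True}},
]
wd = {}
act = wrapper_tick(genes, {"threat": True}, wd)
```

Here "graze" scores below threshold, so its output never reaches the working data; only "flee" gets to communicate this tick.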
One area where I really see the wrapper extending the core is
in decoupling evolved components from the precise details of their
history. For example, in the context of Living@Home, one might add a
new entity to the world. Previously evolved biots would have no genes
applying directly to the new entity. However, if the new entity
shared characteristics with existing ones (position, size, velocity,
...) then some of the more-general genes might be usefully applied,
_IF_ the resemblance could be detected. Structuring the biot's data
hierarchically so that e.g. position was contained _within_ some
higher-order record might allow this, as both old and new entities
would merely look (to the relevant gene) like "things with positions".
We've been talking about such an organised store as a "blackboard"
(BB), although not with any slavish devotion to the detail of other
blackboard systems. Other uses of hierarchic data-storage are:
-store partial data as records with fewer fields
-store age of data in parallel?
-store reliability of data in parallel?
-genes could "annotate" existing records with synthesized data
(which other genes could then read)
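A minimal sketch of the hierarchical-blackboard idea: entities are nested records, and a general-purpose gene asks only for "things with positions", so it applies equally to an old entity type and a brand-new one. All the names here (entities, the "body" record, the distance gene) are invented for illustration:

```python
import math

blackboard = {
    "wolf":  {"body": {"position": (3.0, 4.0), "size": 2.1}},
    # a newly-added entity type: no gene evolved against it directly,
    # but it still nests a "position" inside a higher-order record
    "drone": {"body": {"position": (6.0, 8.0)}},
}

def things_with_positions(bb):
    """Generic filter: yield (name, position) for any entity whose
    record hierarchy contains a position, whatever else it holds."""
    for name, record in bb.items():
        pos = record.get("body", {}).get("position")
        if pos is not None:
            yield name, pos

def distance_gene(bb, me=(0.0, 0.0)):
    """A 'more-general' gene: ranks everything with a position by
    distance from the biot, old and new entity types alike."""
    return sorted((math.dist(me, pos), name)
                  for name, pos in things_with_positions(bb))

ranked = distance_gene(blackboard)
```

The resemblance between wolf and drone is "detected" simply because the gene only ever looks one level into the hierarchy; it never names the entity type at all.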
Genes need to know when to activate, and a component of their
activity should certainly be based on how well the current data on the
BB matches their area of application. To cover this, each gene should
have one or more "input filters" which are pattern-matched against the
current BB. Such filters could be quite complex, such as "anything
new with a position and less than 20m from me and containing any
dangerous weapon and cannot be eaten" (although the alternative to
this is several genes chained together). Other things filters do:
-map entries in the BB to gene input "channels"
-optionally supply defaults for missing data
-match single or multiple cases
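One way to package those three filter jobs together is a predicate plus a channel mapping with optional defaults. The complex example from the text ("new, positioned, within 20m, armed, inedible") would just be a longer predicate. Field names, channel names and the entities are all made up:

```python
def make_filter(predicate, channels, defaults=None):
    """Build an input filter: 'predicate' decides what matches,
    'channels' maps BB fields onto the gene's input channels, and
    'defaults' fills in missing data."""
    defaults = defaults or {}
    def apply(blackboard):
        matches = []
        for name, record in blackboard.items():
            if predicate(record):
                matches.append({ch: record.get(fld, defaults.get(fld))
                                for ch, fld in channels.items()})
        return matches          # may be empty, single, or multiple
    return apply

threat_filter = make_filter(
    predicate=lambda r: r.get("dangerous") and r.get("distance", 999) < 20,
    channels={"dist": "distance", "speed": "velocity"},
    defaults={"velocity": 0.0},
)

bb = {
    "bear":  {"dangerous": True, "distance": 12},           # no velocity
    "hawk":  {"dangerous": True, "distance": 8, "velocity": 3.5},
    "daisy": {"dangerous": False, "distance": 1},
}
inputs = threat_filter(bb)
```

Note the filter matched two cases here; what the wrapper does with multiple matches is exactly the multiplicity question raised next.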
One problem with evolving techniques is variable multiplicity
of input. With a suitable wrapper, however, when an input filter
matches multiple cases, it is an option to clone and activate multiple
instances of the gene.
If CPU resources are limited, the wrapper need not activate
every gene with a partial match on the BB. Instead some priority
queuing might be applied.
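Cloning one gene instance per match and priority-queuing them under a CPU budget could be combined like this (a min-heap on negated activity; the instance tuples and budget are assumptions of the sketch):

```python
import heapq

def schedule(instances, budget):
    """instances: list of (activity, gene_name, match) tuples, one
    cloned instance per filter match. Activate only the 'budget' most
    active clones; the rest simply don't run this tick."""
    heap = [(-activity, name, match) for activity, name, match in instances]
    heapq.heapify(heap)          # heapq is a min-heap, hence the negation
    ran = []
    for _ in range(min(budget, len(heap))):
        _, name, match = heapq.heappop(heap)
        ran.append((name, match))
    return ran

# three clones of one gene, one of another; CPU budget of two
instances = [(0.9, "flee", "wolf"), (0.4, "flee", "fox"),
             (0.1, "flee", "mouse"), (0.6, "graze", "grass")]
ran = schedule(instances, budget=2)
```

The unscheduled clones aren't errors, just partial matches that lost the competition this tick; their matches may win priority on a later tick if the situation changes.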
Similarly, old abandoned memories would get culled.
This is a "maybe" but a further element of evolvability in
biology is that embryo development processes its genetic input in a
fairly loose and flexible manner - basically interpreting it in a
"common sense" manner. The is why, for example, two-headed snakes can
live (where a two-nosed 747 would not). There are several layers of
such systems in biology, from early cell-migration to roughly the
right place to, most obviously, nerve growth. In some cases,
similar processing could increase evolvability by buffering
against semi-detrimental changes for long enough to allow accommodating
mutations to occur.
Obviously the wrapper is adding a second layer of evolution to
the core, in that its configuration is susceptible to mutation in
addition to the genes in the core. This mutation is what enables the
wrapper services and genes to (hopefully) co-evolve into effective
systems. And the soft-wired nature of the wrapper<->core interactions
(with the implied softness of gene<->gene interactions) hopefully adds a
lot to evolvability and flexibility.
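As a rough picture of that second layer, the wrapper's own configuration (filter thresholds, buffer sizes, cull ages) could mutate alongside the genes, so wrapper services and genes get the chance to co-evolve. The parameter names and jitter range below are placeholders, not a proposal for actual values:

```python
import random

def mutate_wrapper_config(config, rate=0.1, rng=random):
    """Jitter each numeric wrapper parameter with probability 'rate',
    leaving the rest untouched - the wrapper-level analogue of gene
    mutation in the core."""
    mutated = dict(config)
    for key, value in config.items():
        if rng.random() < rate:
            mutated[key] = value * rng.uniform(0.8, 1.25)
    return mutated

config = {"activation_threshold": 0.5, "buffer_size": 8, "cull_age": 100}
child = mutate_wrapper_config(config, rate=0.5, rng=random.Random(1))
```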
So, is anybody working on anything like this? Or do you know
of somebody who is? Or would you just like to discuss it? There's
some outlining of the issues on the L@H mailing list (follow the link
from the web-site) but actual discussion never got off the ground.
That's why I'm asking here.
I'll talk to anybody here, or by email, or I'll create and
administer a mailing list if there's enough interest.
Free transport into the future - while you wait.