
RE: [neat] Evolving Substrates in HyperNEAT

  • Ken Lloyd
    Message 1 of 10, May 1, 2008
      Hi Jeff,
       
      Thought-provoking question.  Here's my gut response:

      Seems to me that a hidden layer is a possibility (probability)
      space for hidden nodes: n^2 candidate positions that may be
      described.
       
      The patterned placement of nodes in a substrate layer can be learned (discovered) by a meta-evolution.
       
      The functor(s) that relate the patterned placements of nodes may
      be used to reduce the dimensionality of a possibility space (much
      in the same way that derivatives reduce the dimensionality of a
      series of equation changes).
       
      The idea here is to look in the most likely places where you might find "right" answers.  Sort of forward and inverse Bayesian cycles for evolving neural networks, see?  There are all sorts of caveats that go with this idea, of course.
       
      Ken Lloyd


      From: neat@yahoogroups.com [mailto:neat@yahoogroups.com] On Behalf Of Jeff Clune
      Sent: Wednesday, April 30, 2008 11:55 PM
      To: neat@yahoogroups.com
      Subject: [neat] Evolving Substrates in HyperNEAT

      Hello-

      Many of you have said that it would be nice if the substrate configuration
      of HyperNEAT was not pre-defined. What elements of it do you think are
      important to evolve?

      I can think of at least three options (am I missing any?)

      1) The number of hidden layers
      2) The number of hidden nodes per layer
      3) The geometric placement of the nodes in every layer

      If you think all of them are important, how would you prioritize them? In
      what order would you prefer researchers tackled them? I am just curious what
      different people think about these questions.

      Cheers,
      Jeff Clune

      Digital Evolution Lab, Michigan State University

      jclune@...

    • petar_chervenski
      Message 2 of 10, May 1, 2008
        Hello Jeff,

        I don't think that such assumptions about layered topology are
        the way to achieve this. There are two fundamental things
        associated with any substrate in 2D/3D: node presence and node
        density. In fact, you can skip the first; presence can be
        thought of as a spatial CPPN output whose value gives the
        overall node density at that place. And if the output there is
        less than 0.2, there are no nodes (sounds familiar? :)).
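
        As a minimal sketch of what I mean (cppn() here is only a
        stand-in for querying a real evolved CPPN, and the grid size
        is arbitrary):

        import math

        def cppn(x, y):
            # Stand-in CPPN: any function mapping substrate
            # coordinates to a density value.
            return abs(math.sin(3 * x) * math.cos(3 * y))

        def place_nodes(resolution=10, threshold=0.2):
            # Query the CPPN over a grid; keep the points whose
            # density output clears the 0.2 cutoff.
            nodes = []
            for i in range(resolution):
                for j in range(resolution):
                    x = -1.0 + 2.0 * i / (resolution - 1)  # map to [-1, 1]
                    y = -1.0 + 2.0 * j / (resolution - 1)
                    if cppn(x, y) >= threshold:
                        nodes.append((x, y))
            return nodes

        print(len(place_nodes()))  # hidden nodes this CPPN expresses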

        So the same connective CPPN is actually capable of representing
        its own substrate. But this approach would generate substrates
        that are far too big. Even the simplest CPPNs can generate a
        substrate with thousands of nodes, and now imagine how many
        connections there could be. And this is for the simplest (!)
        CPPN. That sounds like a huge waste of computational effort,
        doesn't it? ;)

        So what is needed is a way to restrict the substrate complexity for
        small CPPNs. Of course more complex CPPNs can be allowed to generate
        more complex/dense substrates.

        This is actually just like complexification. First the major concepts
        are established on a substrate with very low resolution and as the
        CPPNs complexify, the substrate becomes more complex. In fact each
        CPPN will generate a substrate based on its complexity (say,
        num_nodes+num_links). Oh I think I mentioned that.
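
        A tiny sketch of such a budget (the base, per-unit and cap
        constants are arbitrary assumptions, just to show the shape of
        the rule):

        def substrate_resolution(num_nodes, num_links,
                                 base=4, per_unit=0.5, cap=64):
            # Substrate resolution grows with CPPN size, so simple
            # CPPNs get coarse substrates and complex CPPNs get
            # dense ones.
            complexity = num_nodes + num_links
            return min(cap, base + int(per_unit * complexity))

        print(substrate_resolution(5, 6))    # small CPPN -> 9x9 grid
        print(substrate_resolution(40, 90))  # complex CPPN -> capped at 64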

        What I am thinking is: if we actually allow substrates to
        evolve, does this mean that humans cannot inject a priori
        geometric knowledge any more? Or only for the inputs/outputs?

        It essentially becomes just a good indirect encoding.

        Peter

      • Kenneth Stanley
        Message 3 of 10, May 1, 2008
          Peter, these are interesting thoughts and similar to the way I think
          about it. When I think about evolving the substrate, I usually think
          of a distribution of varying densities rather than strict layers or
          preconceived architectures, which is closest to Jeff's option (3).
          One reference for me is the human brain: It has dense masses that
          look different from each other architecturally and do not exist in
          layers with respect to each other (e.g. the cerebellum vs. the basal
          ganglia). However, *within* particular masses there is some layering,
          such as in the neocortex, which is perhaps the most important for
          high-level intelligence. It would be nice if all that could just
          arise on its own to suit the task.

          What you said about CPPN complexity and substrate complexity
          increasing together is something I never thought of. It's an
          interesting idea to correlate the two. You are right that in general
          there is a problem with letting the nodes evolve in the substrate
          because it is so easy to express a massive number of nodes, which
          would not always be needed.

          One problem I see is that we are talking about astronomical ranges, so
          the range in density may not really make sense to vary on a continuum.
          For example, if you have a number between 0 and 1 that represents a
          density somehow, then that density may range from several dozen
          neurons to several billion (if we are talking about natural scales).
          Does this range really make sense on a continuum between 0 and 1?
          Maybe we need to scale it exponentially or something like that, but
          still, then you end up with a tiny mutation potentially increasing the
          number of neurons by billions. That seems odd. In a way, we'd like
          to not even be in a circumscribed range and just let increases keep
          happening indefinitely, but not too much at a time.
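
          To make that arithmetic concrete (the constants below are
          purely illustrative):

          def neuron_count(x, lo=10, hi=10**10):
              # Log-linear map from a [0, 1] gene to a neuron count.
              return int(lo * (hi / lo) ** x)

          print(neuron_count(0.90))  # ~1.3 billion neurons
          print(neuron_count(0.91))  # a 0.01 mutation adds ~300 million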

          Finally, the inputs and outputs may be a special case. We may want to
          preserve the current human control of the geometry there and only let
          evolution increase density or something like that.

          Anyway, this is a great topic and a completely untouched area of
          research with plenty of things left to try.

          ken

        • petar_chervenski
          Message 4 of 10, May 1, 2008
            Hi Ken,

            I am happy that you find my thoughts interesting. I am
            trying to find a solution for this, but not a constrained
            one like most; it has to be as general as CPPNs and NEAT
            themselves. Basically I agree with your comments. About the
            brain, where several different types of connectivity exist:
            it is easy to imagine a CPPN dividing the space into
            segments, or at least distinct regions, and applying a
            different connectivity concept to each. So in fact it is
            already capable of this.

            More interesting is the other problem you mentioned.
            Following the deeper philosophy behind NEAT, it is clear
            that there should not be a bound on complexity. We could
            easily map 0 to "1 node" and 1.0 to "a billion nodes", but
            I don't think evolution should be in that "box", so to say.
            Another option is to use abs(x) as the activation function
            for the output node. This way it is unbounded:
            [0 .. +infinity). Therefore, in order to reach very high
            substrate complexity, the weights into this output node
            must have very large magnitudes. Which in fact is a good
            thing, since this raises the overall weight difference
            between individuals as well, and CPPNs with different
            substrate complexity will end up in separate species. So
            cases like a billion-node brain mating with a thousand-node
            brain will not happen.
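
            A quick sketch of that output node (the weighted sums below
            are made-up values):

            def density_output(weighted_sum):
                # abs() leaves the output unbounded above, [0, +inf),
                # unlike a squashing activation.
                return abs(weighted_sum)

            print(density_output(0.3))     # small weights -> sparse substrate
            print(density_output(-250.0))  # only big weights buy density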

            This reminds me of Mattias's question, but I can state it
            in a more generalized way: how can individuals that
            generate different substrates mate in a meaningful way, so
            that they do not produce bad offspring?

            One suggestion I have is to change the way speciation is
            done. We currently compare genotypes to do speciation. What
            if we compared *phenotypes*? Maybe that is a stupid idea,
            but.. I don't know. In biology, mating happens because the
            phenotypes actually choose each other. And they do so
            because they *look* like each other.
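
            One hedged way to read "compare phenotypes" here: sample
            both CPPNs at the same coordinates and speciate on the mean
            output difference (cppn_a and cppn_b stand for hypothetical
            query functions):

            def phenotype_distance(cppn_a, cppn_b, resolution=8):
                total, count = 0.0, 0
                for i in range(resolution):
                    for j in range(resolution):
                        x = -1.0 + 2.0 * i / (resolution - 1)
                        y = -1.0 + 2.0 * j / (resolution - 1)
                        total += abs(cppn_a(x, y) - cppn_b(x, y))
                        count += 1
                # Small distance -> same species -> allowed to mate.
                return total / count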

            Peter

          • Jeff Clune
            Message 5 of 10, May 1, 2008
              This is an exhilarating conversation. I really like the ideas that have been
              expressed.

              I agree that, in the spirit of complexification, we would want to start with
              a small number of nodes on the substrate and then allow them to increase
              (but not increase too fast).

              I also like the idea of getting away from discrete layers.

              Why correlate the CPPN complexity with the substrate complexity, though? My
              instincts tell me that such a correlation might create evolutionary
              pathologies, such as a pressure to increase CPPN complexity to 'buy'
              substrate complexity. This is only a vague foreboding though. I would be
              interested to see if it works!

              Speaking of which, what do you all think would be a good test domain to see
              if evolving the substrate is effective? What would prove that it was worth
              the trouble?

              Cheers,
              Jeff Clune

              Digital Evolution Lab, Michigan State University

              jclune@...
            • Luca_PI
              Message 6 of 10, May 7, 2008
                Brainstorming...

                What about evolving the substrate in a fractal way? :P

                Instead of evolving one "main" substrate as in NEAT, we
                could do it with several substrates: one to define the
                primitive "main" substrate, another to define how many
                of them there are, and another the relations between
                them... for example.

                In this way NEAT would act as DNA containing all the
                information for the complex object that the CPPN builds.
              • Kenneth Stanley
                Message 7 of 10, May 8, 2008
                  Jeff, I think the issue of the right domain for evolving the
                  substrate is very tricky. The risk is that in some domains there
                  is little to gain by evolving the substrate. Also, it depends on
                  what aspect of the substrate is being evolved. For
                  example, evolving the configuration of sensors has
                  different implications than evolving the
                  configuration of hidden nodes, or both together.

                  In general, I think the important thing is to guide
                  the choice of domain based on the principle you hope
                  to demonstrate. If the principle is, e.g., that an
                  algorithm able to increase substrate density can
                  evolve a very complex network in which that
                  complexity is leveraged effectively, then it is
                  important to choose a domain where such complexity
                  is needed. That is not a trivial task, because in
                  some domains the problem might simply be too hard to
                  get started.

                  ken

                • Kenneth Stanley
                  Message 8 of 10, May 8, 2008
                    Luca, that could be interesting and makes sense,
                    but it moves a bit more toward traditional indirect
                    encodings with explicit hierarchies. I'm curious
                    whether there is an encoding for the substrate that
                    is more implicit and similar in spirit to how
                    HyperNEAT evolves the connectivity pattern. In any
                    case, there is room for many ideas to be tried.

                    ken

                  • petar_chervenski
                    Message 9 of 10, May 8, 2008
                      One domain I suggest is Alife. Imagine 2D
                      creatures whose bodies are directly derived from
                      a spatial CPPN's output and converted to
                      mass-spring models. Now let's extend this and let
                      that CPPN output types of tissue as well. There
                      can be three types of tissue: sensor tissue that
                      excites on touch/collision, muscle tissue that
                      contracts, and another tissue used to connect
                      both and just fill the gaps. So these creatures
                      will have brains. Since we have the body and we
                      know its tissue structure, we can build a
                      HyperNEAT substrate pretty easily. We can assign
                      positions of input nodes where sensor tissue is
                      located, output nodes where muscle tissue is
                      located, and hidden nodes for the rest. If the
                      model is a mass-spring system, then this can be
                      related directly. For example, colliding with an
                      object would excite the sensor tissue, thus
                      inputting 1.0 into the HyperNEAT substrate. And
                      when the outputs in the substrate excite above a
                      certain threshold, the springs in the physics
                      system will contract. Evolution can pick the best
                      parents capable of doing something, for example
                      tasks like swimming, or running away from
                      objects. I think this will be a pretty
                      interesting thing to see. It can show how
                      HyperNEAT/CPPN-NEAT can utilize a simulated
                      physical environment.
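
                      A rough sketch of that body-to-substrate mapping
                      (the tissue codes and the dict layout are
                      assumptions for illustration):

                      SENSOR, MUSCLE, FILLER = 0, 1, 2

                      def build_substrate(tissue_map):
                          # tissue_map: dict {(x, y): tissue_type} for
                          # each body cell.
                          inputs, outputs, hidden = [], [], []
                          for pos, tissue in tissue_map.items():
                              if tissue == SENSOR:
                                  inputs.append(pos)   # excited on touch
                              elif tissue == MUSCLE:
                                  outputs.append(pos)  # contracts a spring
                              else:
                                  hidden.append(pos)   # fills the gaps
                          return inputs, outputs, hidden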

                      Peter
