Re: ES-HyperNEAT for non-geometric problems

  • Ken
    Message 1 of 4, Oct 17 12:44 PM
      Hi Oliver, even though the individual depicted in the "Unified Approach" paper does not appear to reflect the symmetry of its inputs and outputs, my guess is that it still benefits from the fact there is some geometry to them. For example, aside from symmetry, sensors that are next to each other are responsible for adjacent areas of the sensory field. It's also possible that even though the node placement is asymmetric that their internal connectivity among each other is still somehow a derivation of the more explicit symmetric geometry of the sensors and outputs, like if you took a symmetric pattern and twisted it in a half-pretzel - within the pretzel itself (ignoring its curvature) there is still a notion of symmetry. Also, often we do see ES-HyperNEAT produce nice symmetric patterns that even the human eye can appreciate.

      However, none of that really answers your question about what it implies if there is no clear geometry to begin with. I believe Jeff Clune can point you to some of his work where he tried training with randomized geometries and found that even there HyperNEAT can still gain some advantage from arbitrary geometric relations. But in terms of ES-HyperNEAT, I don't know of any explicit studies of this issue. If there is really no geometric principle whatsoever, then I think it won't work out as well as if there is, so I'd try to have one. But if you really can't (e.g. because of the domain), there still might be hope of something interesting happening in the hidden node configuration nevertheless.

      ken



      --- In neat@yahoogroups.com, Oliver Coleman <oliver.coleman@...> wrote:
      >
      > Hi all,
      >
      > Does anyone have thoughts or experience on the benefits or issues of using
      > ES-HyperNEAT for evolving recurrent neural networks which deal with input
      > and output that does not necessarily/explicitly contain geometric
      > regularities (compared to, say, NEAT)? I notice that in the example evolved
      > substrate shown in "A Unified Approach to Evolving Plasticity and Neural
      > Geometry" (Risi and Stanley, 2012) the neuron placement does seem to
      > contain regularities (perhaps reflecting the geometric regularity in the
      > input, but this isn't clear) but isn't symmetrical, even though the input
      > and output spaces for the substrate do contain symmetry.
      >
      > Cheers,
      > Oliver
      >
    • Jeff Clune
      Message 2 of 4, Oct 19 8:21 AM
        Hello Oliver,

        The paper of mine that Ken is referring to is

        Clune, J., Ofria, C. & Pennock, R. T. The Sensitivity of HyperNEAT to Different Geometric Representations of a Problem. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO) 144–148 (2009).

        In it are a couple of ideas that could be helpful to you:

        The first is a test for whether HyperNEAT is exploiting geometric information in the input domain. We show that you can simply compare the performance of HyperNEAT with the geometrically arranged inputs to runs of HyperNEAT where you randomize the geometric locations of the inputs. Any performance advantage (averaged over many runs) of the geometrically arranged runs over the randomized runs reveals the extent to which those geometric relationships help.
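
        In rough Python (run_hyperneat below is a hypothetical stand-in for a single evolutionary trial, not a real API), the test might look like this:

        import random
        import statistics

        # run_hyperneat(input_coords, seed) is hypothetical: one evolutionary
        # trial with the given input coordinates, returning the best fitness.
        def geometry_advantage(geometric_coords, num_runs=30):
            geo_scores, rand_scores = [], []
            for seed in range(num_runs):
                # Baseline: inputs at their meaningful geometric locations.
                geo_scores.append(run_hyperneat(geometric_coords, seed=seed))
                # Control: the same coordinates randomly reassigned to inputs,
                # destroying any geometric relationship between them.
                shuffled = list(geometric_coords)
                random.Random(seed).shuffle(shuffled)
                rand_scores.append(run_hyperneat(shuffled, seed=seed))
            # The gap between the means (over many runs) estimates how much
            # the geometric arrangement actually helps.
            return statistics.mean(geo_scores) - statistics.mean(rand_scores)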

        More to your point, as Ken mentioned, I also found that HyperNEAT outcompetes a direct-encoding control (Fixed-Topology NEAT, or FT-NEAT) even when inputs are randomized. That performance gain can either be interpreted as a benefit of generative encodings over direct encodings, or it could be that HyperNEAT learns to map its inputs into a geometric arrangement in the hidden layer and then exploits that geometry in the hidden layer.

        Either way, the results suggest that even if you don't have a good idea of how to geometrically arrange the inputs, HyperNEAT will still do better than direct-encoding controls. None of that, of course, answers whether ES-HyperNEAT is uniquely better than non-ES HyperNEAT in such situations, aside from the general differences between the two algorithms.

        I hope that helps.

        Best regards,
        Jeff Clune

        Visiting Scientist
        Cornell University
        jeffclune@...
        jeffclune.com

      • Oliver Coleman
        Message 3 of 4, Oct 19 7:58 PM
          Thanks for your responses, Ken and Jeff, very useful as always.

          I'm familiar with Jeff's paper (I've read it at least a couple of times :)). I started putting in something about it, but then decided it probably wasn't relevant enough: the input/output spaces were still derived from input/output patterns containing explicit geometric relationships, and the paper compares HyperNEAT to FT-NEAT rather than to regular NEAT (which made sense for the tasks in the paper, but I don't think it would for what I'm looking at).

          You're right, Ken: there's no reason (ES-)HyperNEAT couldn't exploit geometric relationships in ways that aren't directly obvious from looking at the evolved substrate. As I wrote my question I wondered if it could also work the other way: geometric relationships in the input/output space that aren't obvious (to a human) might nevertheless be exploitable by (ES-)HyperNEAT. Taking it further, perhaps all sorts of relationships in the input/output could be interpreted as "geometric" relationships.

          Jeff's paper shows that, for one reason or another, HyperNEAT can exploit randomised geometries, which (I think) means it can exploit relationships between inputs that are not reflected in the substrate's input geometry. In other words, all that is required for HyperNEAT to exploit the correlations (at least better than FT-NEAT can) is that two inputs (or outputs) are correlated in some way; they don't need to be placed next to each other (or in some other geometric pattern) on the substrate. Of course, like you say Jeff, that doesn't say much about (ES-)HyperNEAT vs NEAT. I'll just have to try both. :)
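
          As a tiny numpy sketch of that last point (illustrative only), shuffling which substrate position each input occupies reorders the correlation structure of the inputs but doesn't destroy it:

          import numpy as np

          rng = np.random.default_rng(0)
          signal = rng.normal(size=1000)
          # Four input channels that are all noisy copies of one signal.
          data = np.stack([signal + 0.1 * rng.normal(size=1000)
                           for _ in range(4)])

          corr = np.corrcoef(data)                 # strongly correlated
          perm = rng.permutation(4)                # randomize positions
          corr_shuffled = np.corrcoef(data[perm])  # same values, reordered

          # The same correlations are still present, just in permuted slots.
          assert np.allclose(np.sort(corr.ravel()),
                             np.sort(corr_shuffled.ravel()))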

          Cheers,
          Oliver
