Re: Questions regarding ANJI and JNEAT

  • Kenneth Stanley
    Message 1 of 12, May 1, 2009
      Matt, what kind of experiment is implemented in your hyperneat? Is it one of the same experiments from our versions?

      Thanks,

      ken

      --- In neat@yahoogroups.com, "Matt Simmerson" <m.simmerson@...> wrote:
      >
      > Hi Ken
      >
      > My Neat4j v2 (as yet unreleased) implements hyperneat. I believe it works, but could do with testing by someone other than me. If anyone is up for that I can let them have the code. Not sure when I will release this, looking for a new job as I have been made redundant. Don't you just love the global economy!
      >
      > Cheers
      >
      > Matt
      >
      > --- In neat@yahoogroups.com, "Kenneth Stanley" <kstanley@> wrote:
      > >
      > > JT, is your Java-based HyperNEAT something you'd be interested in sharing in general? There is currently no Java version of HyperNEAT, so others might be interested as well. Do you have any experiments working in it?
      > >
      > > (I understand if it's not ready for that but I wanted to at least raise the possibility.)
      > >
      > > Thanks,
      > >
      > > ken
      > >
      > > --- In neat@yahoogroups.com, "JT" <skybro77@> wrote:
      > > >
      > > > I actually implemented HyperNeat in Java about a year ago. (I had some trouble understanding what to do with ANJI and JNEAT too.)
      > > >
      > > > It was designed to be able to modify the number of hidden layers. It was my first Java project and it was unnecessarily complicated. However, a large chunk of the code was ported from the original C++ HyperNeat. I'm sure it's reusable with a bit of tuning (e.g. not mutating the number of layers and their connection patterns.)
      > > >
      > > > If you want it, I will email you the code.
      > > >
      > >
      >
    • Kenneth Stanley
      Message 2 of 12, May 1, 2009
        JT, thanks for the upload. If you don't feel it's polished enough it may not be the best idea to release it widely (i.e. from our HyperNEAT page) since it could cause unintended confusion. In any case, I'm still curious what experiments are implemented in your version?

        Thanks,

        ken

        --- In neat@yahoogroups.com, "JT" <skybro77@...> wrote:
        >
        > I have uploaded my code in the files section of this group. It's called JN-HyperNeat.zip
        >
        > This was my first Java project, so it's really rough. I want to say I will polish it, but I really don't have the time. Plus, nowadays I prefer coding in Common Lisp and Clojure.
        >
        > However I'm still very interested in HyperNeat, so I may eventually implement it in Lisp.
        >
        > Feel free to do whatever you want with JN-HyperNeat.
        >
      • Matt Simmerson
        Message 3 of 12, May 3, 2009
          Hi Ken

          I had done an experiment in conjunction with the Torcs competition that Julian Togelius et al. are running. The results were not that encouraging when compared with a standard NEAT experiment (maybe I need to sort out the geometry, though given the recent paper by Jeff, any geometry should work better than standard NEAT). I implemented David B. D'Ambrosio's Food Finder experiment and that worked just fine.

          I know you used this experiment in your alife 09 paper and I have one question regarding the fitness function of this experiment. First, the algorithms are slightly different - could be typos or could be just different. However, in both, they seem to reward (increasing fitness) bots that take longer with the (Ttot * 1000r) or (Ttot / 1000r) terms of yours and David's experiments. This seems at odds with the goal. I have implemented (1000r / Ttot) which after all food is found rewards quicker bots.

          Maybe I have missed something or misunderstood.

          Cheers

          Matt

          --- In neat@yahoogroups.com, "Kenneth Stanley" <kstanley@...> wrote:
          >
          > Matt, what kind of experiment is implemented in your hyperneat? Is it one of the same experiments from our versions?
          >
          > Thanks,
          >
          > ken
          >
          > --- In neat@yahoogroups.com, "Matt Simmerson" <m.simmerson@> wrote:
          > >
          > > Hi Ken
          > >
          > > My Neat4j v2 (as yet unreleased) implements hyperneat. I believe it works, but could do with testing by someone other than me. If anyone is up for that I can let them have the code. Not sure when I will release this, looking for a new job as I have been made redundant. Don't you just love the global economy!
          > >
          > > Cheers
          > >
          > > Matt
          > >
          > > --- In neat@yahoogroups.com, "Kenneth Stanley" <kstanley@> wrote:
          > > >
          > > > JT, is your Java-based HyperNEAT something you'd be interested in sharing in general? There is currently no Java version of HyperNEAT, so others might be interested as well. Do you have any experiments working in it?
          > > >
          > > > (I understand if it's not ready for that but I wanted to at least raise the possibility.)
          > > >
          > > > Thanks,
          > > >
          > > > ken
          > > >
          > > > --- In neat@yahoogroups.com, "JT" <skybro77@> wrote:
          > > > >
          > > > > I actually implemented HyperNeat in Java about a year ago. (I had some trouble understanding what to do with ANJI and JNEAT too.)
          > > > >
          > > > > It was designed to be able to modify the number of hidden layers. It was my first Java project and it was unnecessarily complicated. However, a large chunk of the code was ported from the original C++ HyperNeat. I'm sure it's reusable with a bit of tuning (e.g. not mutating the number of layers and their connection patterns.)
          > > > >
          > > > > If you want it, I will email you the code.
          > > > >
          > > >
          > >
          >
        • David D'Ambrosio
          Message 4 of 12, May 7, 2009
            Hi Matt,

            Thanks for pointing out this inconsistency. You're right that both papers managed to somehow list different but still incorrect fitness functions. I actually used 1000r - Ttot to get the amount of time remaining, since each trial was allocated 1000 time steps. However, your fitness function makes perfect sense too, so I doubt choosing one or the other would significantly affect the results.

            David
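For concreteness, the corrected formula and Matt's alternative can be sketched side by side. This is a minimal illustration, assuming (per the thread) that r is the number of food items collected and Ttot is the elapsed time steps out of the 1000 allocated per trial; the class and method names are hypothetical:

```java
public class FoodGatherFitness {
    static final int MAX_STEPS = 1000; // time steps allocated per trial

    // David's corrected formula, 1000r - Ttot: rewards each food item
    // collected, plus the time remaining when the run ends.
    static double timeRemaining(int r, int tTot) {
        return (double) MAX_STEPS * r - tTot;
    }

    // Matt's alternative, 1000r / Ttot: food collected per unit time.
    // Like the formula above, it scores faster bots higher.
    static double foodRate(int r, int tTot) {
        return (double) MAX_STEPS * r / tTot;
    }

    public static void main(String[] args) {
        // Two bots that each collect 4 food items, one finishing faster:
        System.out.println(timeRemaining(4, 300)); // 3700.0
        System.out.println(timeRemaining(4, 600)); // 3400.0
    }
}
```

Under either formula a faster bot outscores a slower one at equal food count, which matches the intent both posts describe.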


            --- In neat@yahoogroups.com, "Matt Simmerson" <m.simmerson@...> wrote:
            >
            > Hi Ken
            >
            > I had done an experiment in conjunction with the Torcs competition that Julian Togelius et al. are running. The results were not that encouraging when compared with a standard NEAT experiment (maybe I need to sort out the geometry, though given the recent paper by Jeff, any geometry should work better than standard NEAT). I implemented David B. D'Ambrosio's Food Finder experiment and that worked just fine.
            >
            > I know you used this experiment in your alife 09 paper and I have one question regarding the fitness function of this experiment. First, the algorithms are slightly different - could be typos or could be just different. However, in both, they seem to reward (increasing fitness) bots that take longer with the (Ttot * 1000r) or (Ttot / 1000r) terms of yours and David's experiments. This seems at odds with the goal. I have implemented (1000r / Ttot) which after all food is found rewards quicker bots.
            >
            > Maybe I have missed something or misunderstood.
            >
            > Cheers
            >
            > Matt
            >
            > --- In neat@yahoogroups.com, "Kenneth Stanley" <kstanley@> wrote:
            > >
            > > Matt, what kind of experiment is implemented in your hyperneat? Is it one of the same experiments from our versions?
            > >
            > > Thanks,
            > >
            > > ken
            > >
            > > --- In neat@yahoogroups.com, "Matt Simmerson" <m.simmerson@> wrote:
            > > >
            > > > Hi Ken
            > > >
            > > > My Neat4j v2 (as yet unreleased) implements hyperneat. I believe it works, but could do with testing by someone other than me. If anyone is up for that I can let them have the code. Not sure when I will release this, looking for a new job as I have been made redundant. Don't you just love the global economy!
            > > >
            > > > Cheers
            > > >
            > > > Matt
            > > >
            > > > --- In neat@yahoogroups.com, "Kenneth Stanley" <kstanley@> wrote:
            > > > >
            > > > > JT, is your Java-based HyperNEAT something you'd be interested in sharing in general? There is currently no Java version of HyperNEAT, so others might be interested as well. Do you have any experiments working in it?
            > > > >
            > > > > (I understand if it's not ready for that but I wanted to at least raise the possibility.)
            > > > >
            > > > > Thanks,
            > > > >
            > > > > ken
            > > > >
            > > > > --- In neat@yahoogroups.com, "JT" <skybro77@> wrote:
            > > > > >
            > > > > > I actually implemented HyperNeat in Java about a year ago. (I had some trouble understanding what to do with ANJI and JNEAT too.)
            > > > > >
            > > > > > It was designed to be able to modify the number of hidden layers. It was my first Java project and it was unnecessarily complicated. However, a large chunk of the code was ported from the original C++ HyperNeat. I'm sure it's reusable with a bit of tuning (e.g. not mutating the number of layers and their connection patterns.)
            > > > > >
            > > > > > If you want it, I will email you the code.
            > > > > >
            > > > >
            > > >
            > >
            >
          • Kenneth Stanley
            Message 5 of 12, May 8, 2009
              Matt, please let me also apologize for the typos in the fitness function in the food gathering experiment publications. We try to be vigilant about the accuracy of our formulas, so it's a surprise to us that those somehow slipped through with errors. I hope it did not cause you or anyone else inconvenience. Thank you very much for bringing that to our attention.

              To remedy the mistake, we have just corrected the formula in both online versions of these papers, so from now on anyone who downloads them from eplex will have the right formula, thanks to you.

              Note that our HyperSharpNEAT source code distribution did have the correct formula, so there is no problem with the experiment if anyone downloaded it.

              ken

              --- In neat@yahoogroups.com, "David D'Ambrosio" <ddambro84@...> wrote:
              >
              > Hi Matt,
              >
              > Thanks for pointing out this inconsistency. You're right that both papers managed to somehow list different but still incorrect fitness functions. I actually used 1000r - Ttot to get the amount of time remaining, since each trial was allocated 1000 time steps. However, your fitness function makes perfect sense too, so I doubt choosing one or the other would significantly affect the results.
              >
              > David
              >
              >
              > --- In neat@yahoogroups.com, "Matt Simmerson" <m.simmerson@> wrote:
              > >
              > > Hi Ken
              > >
              > > I had done an experiment in conjunction with the Torcs competition that Julian Togelius et al. are running. The results were not that encouraging when compared with a standard NEAT experiment (maybe I need to sort out the geometry, though given the recent paper by Jeff, any geometry should work better than standard NEAT). I implemented David B. D'Ambrosio's Food Finder experiment and that worked just fine.
              > >
              > > I know you used this experiment in your alife 09 paper and I have one question regarding the fitness function of this experiment. First, the algorithms are slightly different - could be typos or could be just different. However, in both, they seem to reward (increasing fitness) bots that take longer with the (Ttot * 1000r) or (Ttot / 1000r) terms of yours and David's experiments. This seems at odds with the goal. I have implemented (1000r / Ttot) which after all food is found rewards quicker bots.
              > >
              > > Maybe I have missed something or misunderstood.
              > >
              > > Cheers
              > >
              > > Matt
              > >
              > > --- In neat@yahoogroups.com, "Kenneth Stanley" <kstanley@> wrote:
              > > >
              > > > Matt, what kind of experiment is implemented in your hyperneat? Is it one of the same experiments from our versions?
              > > >
              > > > Thanks,
              > > >
              > > > ken
              > > >
              > > > --- In neat@yahoogroups.com, "Matt Simmerson" <m.simmerson@> wrote:
              > > > >
              > > > > Hi Ken
              > > > >
              > > > > My Neat4j v2 (as yet unreleased) implements hyperneat. I believe it works, but could do with testing by someone other than me. If anyone is up for that I can let them have the code. Not sure when I will release this, looking for a new job as I have been made redundant. Don't you just love the global economy!
              > > > >
              > > > > Cheers
              > > > >
              > > > > Matt
              > > > >
              > > > > --- In neat@yahoogroups.com, "Kenneth Stanley" <kstanley@> wrote:
              > > > > >
              > > > > > JT, is your Java-based HyperNEAT something you'd be interested in sharing in general? There is currently no Java version of HyperNEAT, so others might be interested as well. Do you have any experiments working in it?
              > > > > >
              > > > > > (I understand if it's not ready for that but I wanted to at least raise the possibility.)
              > > > > >
              > > > > > Thanks,
              > > > > >
              > > > > > ken
              > > > > >
              > > > > > --- In neat@yahoogroups.com, "JT" <skybro77@> wrote:
              > > > > > >
              > > > > > > I actually implemented HyperNeat in Java about a year ago. (I had some trouble understanding what to do with ANJI and JNEAT too.)
              > > > > > >
              > > > > > > It was designed to be able to modify the number of hidden layers. It was my first Java project and it was unnecessarily complicated. However, a large chunk of the code was ported from the original C++ HyperNeat. I'm sure it's reusable with a bit of tuning (e.g. not mutating the number of layers and their connection patterns.)
              > > > > > >
              > > > > > > If you want it, I will email you the code.
              > > > > > >
              > > > > >
              > > > >
              > > >
              > >
              >
            • Kenneth Stanley
              Message 6 of 12, May 8, 2009
                Matt, you mentioned attempting to apply HyperNEAT to the Torcs competition, but not doing better than NEAT. Perhaps it is indeed the geometry that could be improved, or the domain may simply not be complicated enough to require an indirect encoding to do well. However, have you seen this interesting application of HyperNEAT to driving in traffic, from the Computational Intelligence Group at the Czech Technical University in Prague:

                http://www.youtube.com/watch?v=lmPJeKRs8gE

                They show briefly (at 0:07) how they organize the sensor geometry. They discretize each rangefinder reading into several bits. Apparently, this geometry was enough for the cars to come up on their own with the idea of driving on one side of the road, which is a pretty nice result. However, it doesn't necessarily mean they'd win a race.
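The discretization step might look something like the following sketch. This is an illustrative threshold encoding only; the exact scheme used in the video is not specified in this thread, and the class and method names are hypothetical:

```java
public class RangefinderBits {
    // Discretize one continuous rangefinder reading in [0, maxRange] into
    // nBits threshold bits: bit i fires if the obstacle is closer than
    // (i+1)/nBits of the maximum range, giving the CPPN a spatial layout
    // of inputs instead of a single scalar.
    static double[] discretize(double reading, double maxRange, int nBits) {
        double[] bits = new double[nBits];
        for (int i = 0; i < nBits; i++) {
            double threshold = maxRange * (i + 1) / nBits;
            bits[i] = reading <= threshold ? 1.0 : 0.0;
        }
        return bits;
    }

    public static void main(String[] args) {
        // A reading of 30 out of a max range of 100, split into 5 bits:
        double[] b = discretize(30.0, 100.0, 5);
        System.out.println(java.util.Arrays.toString(b));
        // prints [0.0, 1.0, 1.0, 1.0, 1.0]
    }
}
```

Laying the bits of each rangefinder out along the substrate gives nearby sensors nearby coordinates, which is the kind of geometry HyperNEAT can exploit.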

                ken

                --- In neat@yahoogroups.com, "Matt Simmerson" <m.simmerson@...> wrote:
                >
                > Hi Ken
                >
                > I had done an experiment in conjunction with the Torcs competition that Julian Togelius et al. are running. The results were not that encouraging when compared with a standard NEAT experiment (maybe I need to sort out the geometry, though given the recent paper by Jeff, any geometry should work better than standard NEAT). I implemented David B. D'Ambrosio's Food Finder experiment and that worked just fine.
                >
                > I know you used this experiment in your alife 09 paper and I have one question regarding the fitness function of this experiment. First, the algorithms are slightly different - could be typos or could be just different. However, in both, they seem to reward (increasing fitness) bots that take longer with the (Ttot * 1000r) or (Ttot / 1000r) terms of yours and David's experiments. This seems at odds with the goal. I have implemented (1000r / Ttot) which after all food is found rewards quicker bots.
                >
                > Maybe I have missed something or misunderstood.
                >
                > Cheers
                >
                > Matt
                >
                > --- In neat@yahoogroups.com, "Kenneth Stanley" <kstanley@> wrote:
                > >
                > > Matt, what kind of experiment is implemented in your hyperneat? Is it one of the same experiments from our versions?
                > >
                > > Thanks,
                > >
                > > ken
                > >
                > > --- In neat@yahoogroups.com, "Matt Simmerson" <m.simmerson@> wrote:
                > > >
                > > > Hi Ken
                > > >
                > > > My Neat4j v2 (as yet unreleased) implements hyperneat. I believe it works, but could do with testing by someone other than me. If anyone is up for that I can let them have the code. Not sure when I will release this, looking for a new job as I have been made redundant. Don't you just love the global economy!
                > > >
                > > > Cheers
                > > >
                > > > Matt
                > > >
                > > > --- In neat@yahoogroups.com, "Kenneth Stanley" <kstanley@> wrote:
                > > > >
                > > > > JT, is your Java-based HyperNEAT something you'd be interested in sharing in general? There is currently no Java version of HyperNEAT, so others might be interested as well. Do you have any experiments working in it?
                > > > >
                > > > > (I understand if it's not ready for that but I wanted to at least raise the possibility.)
                > > > >
                > > > > Thanks,
                > > > >
                > > > > ken
                > > > >
                > > > > --- In neat@yahoogroups.com, "JT" <skybro77@> wrote:
                > > > > >
                > > > > > I actually implemented HyperNeat in Java about a year ago. (I had some trouble understanding what to do with ANJI and JNEAT too.)
                > > > > >
                > > > > > It was designed to be able to modify the number of hidden layers. It was my first Java project and it was unnecessarily complicated. However, a large chunk of the code was ported from the original C++ HyperNeat. I'm sure it's reusable with a bit tuning (e.g. not mutating number of layers and their connection patterns.)
                > > > > >
                > > > > > If you want it, I will email you the code.
                > > > > >
                > > > >
                > > >
                > >
                >