
Re: first steps with NEAT

  • Ken
    Message 1 of 7, Jul 25, 2013
      Hi Aline, I'm guessing from your message that you already read, "How should I test my own version of NEAT to make sure it works?" from the NEAT Software FAQ.

      One other thing that may be useful is to start evolution with networks that already have the structure necessary for a solution. That is, they would have a single hidden node and full connectivity. (And of course make sure they have a bias node as a third input as well.) Then turn off all structural mutation and see how long it takes to solve. If it takes a long time, then you know there is something poorly calibrated about the weight mutation setup. In XOR, it is often the way weights are mutated that turns out to be causing a problem. It should solve XOR very quickly if you start it with a solution topology and it only has to search for weights.
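      To make that concrete, here is a minimal standalone sketch (illustrative code only, not taken from the NEAT package) of the fixed-topology check: a 2-input/bias/1-hidden/1-output network evaluated with the usual XOR fitness (4 minus the summed output error, squared), where only the weights are searched. The keep-if-better loop and the ±8.0 weight cap are assumptions made for the sketch, not NEAT's actual reproduction scheme.

      // Illustrative sketch only (not NEAT code): fixed XOR topology, weight-only search.
      // Topology: x1, x2, bias -> one hidden node -> output, plus direct
      // x1/x2/bias -> output connections (7 weights total).
      #include <cmath>
      #include <cstdio>
      #include <random>

      static double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-4.9 * x)); }

      // The usual XOR fitness: (4 - sum of output errors)^2, approaching 16 for a solution.
      static double xorFitness(const double w[7]) {
        static const double in[4][2] = {{0,0},{0,1},{1,0},{1,1}};
        static const double target[4] = {0,1,1,0};
        double errorSum = 0.0;
        for (int i = 0; i < 4; ++i) {
          double x1 = in[i][0], x2 = in[i][1], bias = 1.0;
          double h   = sigmoid(w[0]*x1 + w[1]*x2 + w[2]*bias);
          double out = sigmoid(w[3]*x1 + w[4]*x2 + w[5]*bias + w[6]*h);
          errorSum += std::fabs(target[i] - out);
        }
        double f = 4.0 - errorSum;
        return f * f;
      }

      int main() {
        std::mt19937 rng(0);
        std::uniform_real_distribution<double> init(-1.0, 1.0);
        std::normal_distribution<double> perturb(0.0, 0.5);

        double best[7];
        for (double &w : best) w = init(rng);
        double bestFit = xorFitness(best);

        // Weight-only search: perturb all weights, keep improvements, clamp to a cap.
        for (int gen = 0; gen < 5000 && bestFit < 15.5; ++gen) {
          double cand[7];
          for (int i = 0; i < 7; ++i) {
            cand[i] = best[i] + perturb(rng);
            if (cand[i] >  8.0) cand[i] =  8.0;   // illustrative weight cap
            if (cand[i] < -8.0) cand[i] = -8.0;
          }
          double f = xorFitness(cand);
          if (f > bestFit) { bestFit = f; for (int i = 0; i < 7; ++i) best[i] = cand[i]; }
        }
        std::printf("best fitness = %f (out of 16)\n", bestFit);
        return 0;
      }

      If a setup like this cannot reach high fitness quickly, the weight mutation scheme itself is the first thing to inspect.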

      Also, it's important to note that special parameters that reward genomes for being "young" or "old" are probably very domain-specific. Maybe I'd try to get it solved without those kinds of bonuses and penalties at all, since whatever effect they have on XOR may not apply to other problems.

      Best,

      ken

      --- In neat@yahoogroups.com, "alinehuf2" <alinehuf2@...> wrote:
      >
      > Hello all,
      >
      > I'm a French student at a Paris university (sorry for my "broken" English).
      > I'm taking my first steps with NEAT and I am encountering some difficulties. I have read the FAQ on this web page: http://www.cs.ucf.edu/~kstanley/neat.html but I have not found answers to my problems.
      > I converted the Buckland implementation from C++ to C (I am not comfortable with C++): https://github.com/alinehuf/neattest.
      > I tried the XOR test as described by Stanley in "Evolving Neural Networks through Augmenting Topologies", but I had really poor results. My program found a solution in 981 out of 1000 tests (98%), it took an average of 184 generations, and the solutions have 5 hidden nodes…
      >
      > I tried to change my implementation to make it look more like Stanley's NEAT 1.1 (reproduction and delta coding). In his article "Evolving Neural Networks through Augmenting Topologies" he says that NEAT can solve the XOR problem 100% of the time, in an average of 32 generations with 2.35 hidden nodes… but even with the modifications, the results of my program are still disappointing (a solution found 100% of the time, but in an average of 130 generations and with 5 hidden neurons)…
      >
      > Buckland uses parameters quite different from Stanley's, especially concerning the penalty or bonus awarded to species:
      > iYoungBonusAgeThreshhold => 10
      > dYoungFitnessBonus => Stanley 1.0 - Buckland 1.3
      > iOldAgeThreshold => Stanley 15 - Buckland 50
      > dOldAgePenalty => Stanley 0.01 - Buckland 0.7
      > I tested both sets of parameters without any significant difference in the average result for the 1000 tests.
      >
      > I tried different variations of the reproduction functions, speciation, mutation... but without any more success.
      >
      > When I look at my log file, I see that the best species (or best genome) dies regularly. Sometimes the best species doesn't improve for a few generations and is fatally penalized. The best species doesn't reproduce, and then the best fitness disappears…
      >
      > My goal is to use NEAT to evaluate the final positions of games in General Game Playing. If NEAT can predict the score, then I hope to be able to use the generated network to evaluate non-terminal positions. But for the moment, I cannot achieve satisfactory results even on the simple XOR problem, and I do not know what the problem could be. Do you have any advice or tips?
      >
      > Best regards,
      > Aline Huf.
      >
    • alinehuf2
      Message 2 of 7, Jul 29, 2013
        Thank you all for your help, I will try these tips to debug my code.
        I will come back to you if I still have questions.

        Regards,
        Aline Huf.
      • alinehuf2
        Message 3 of 7, Oct 2, 2013

          Hi All, 


          As Ken suggested, I've tried my implementation of NEAT with all structural mutation turned off, and with some improvements I can get the same results as his NEAT 1.2.1. However, the results of my program are much worse when it has to find the right topology for the networks. Most of the time it fails.


          NEAT 1.2.1 (100 epochs):

          Failures: 109 out of 1000 runs (89.1% success)
          Average Nodes: 5.55331
          Average Genes: 9.05387
          Average number of epochs for successful runs: 45.3951
          Average Evals: 6770.18


          My program:

          500 epochs:
          Failures: 11 out of 100 runs (89% success). It took an average of 134 generations, with an average of 1.98 hidden neurons.


          100 epochs:
          Failures: 531 out of 1000 runs (46% success). It took an average of 39.916844 generations, with an average of 1.19 hidden neurons.


          I tried to find differences in the behavior of my program. I use the same parameters and I compute the compatibility threshold in the same way, but I noticed that I have twice as many species after the same number of epochs, and I cannot understand why…

          Do you have any tips to help me understand this?

          Best regards,
          Aline Huf.




          PS: Ken, I would like to point out one little thing I noticed. When I tried to run your program with recur_prob enabled, it got stuck in an infinite loop.
          As I'm not very comfortable with C++, my friend Yoch helped me, and he found something strange in your function:

          int NNode::depth(int d, Network *mynet)

          He proposes the following amendment. I did not verify this correction, but the program no longer gets stuck with recur_prob enabled (though of course I have not enabled it for the XOR problem)…

          // debug by yoch
          //Find the greatest depth starting from this neuron at depth d
          int NNode::depth(int d, Network *mynet) {
            std::vector<Link*> innodes=this->incoming;
            std::vector<Link*>::iterator curlink;
            int cur_depth; //The depth of the current node
            int max=d; //The max depth

            if (d>100) {
              //std::cout<<mynet->genotype<<std::endl;
              //std::cout<<"** DEPTH NOT DETERMINED FOR NETWORK WITH LOOP"<<std::endl;
              return d;
            }

            //Base Case
            if ((this->type)==SENSOR)
              return d;
            //Recursion
            else {
              for(curlink=innodes.begin();curlink!=innodes.end();++curlink) {
                cur_depth=((*curlink)->in_node)->depth(d+1,mynet);
                if (cur_depth>max) max=cur_depth;
                if (max>100) break;
              }

              return (d == 0 && max > 100) ? 10 : max;

            } //end else
          }



        • kenstanley01
          Message 4 of 7, Oct 15, 2013

            Hi Aline, your observation regarding the number of species is a potential clue to the remaining problem. It is indeed concerning if you have twice as many species, because that means each species is, on average, half the size of those in my version. If individual species are generally too small, then there will be insufficient diversity within each species for them to evolve. So I would indeed recommend adjusting the compatibility threshold to decrease the number of species.


            However, there is still some deeper issue here, because the question remains why the same parameters would not lead to the same result (assuming your parameters all match mine). There are different possibilities. One is that the weight range might be quite different in your code than in mine. Because the compatibility measure includes the average weight difference, that could lead to bigger differences in compatibility. It might also slow down evolution if connections are being pushed to their maximum weights and those maximum weights are too large (assuming there is even a maximum, as there should be). You could check the maximum in my own code for mutating weights to make sure you use the same.
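            For illustration only (this is a hypothetical helper, not code from NEAT 1.2.1), a capped weight perturbation would look something like the sketch below; the cap of 8.0 and the perturbation range of 2.5 are assumptions here and should be matched by hand against whatever the reference code actually uses.

            #include <random>

            // Hypothetical sketch of capped weight perturbation (not the NEAT 1.2.1 code).
            // kWeightCap and the perturbation range are assumptions to check against the reference.
            double mutateWeight(double w, std::mt19937 &rng) {
              const double kWeightCap = 8.0;
              std::uniform_real_distribution<double> perturb(-2.5, 2.5); // "mutation power"
              w += perturb(rng);
              if (w >  kWeightCap) w =  kWeightCap;
              if (w < -kWeightCap) w = -kWeightCap;
              return w;
            }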


            Another debugging idea is to take two different networks in my NEAT and record their compatibility distance, and then import those *exact* same networks into your version and check their compatibility distance there. That would settle whether the compatibilities are really being measured the same way. It is important to note that in my code the normalization term N in the compatibility function is generally set to 1, that is, N is generally not used. So if you had N turned on, you would be seeing different results.
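            For reference, the compatibility measure from the NEAT paper is delta = c1*E/N + c2*D/N + c3*W, where E is the number of excess genes, D the number of disjoint genes, and W the average weight difference of matching genes. A minimal sketch (the struct and function names here are invented for illustration) with N fixed at 1, as described above:

            // Sketch of the compatibility distance from the NEAT paper; the names are
            // invented for illustration. N is fixed at 1 rather than the size of the
            // larger genome, matching the default setup described above.
            struct GeneComparison {
              double excess;        // E: excess genes
              double disjoint;      // D: disjoint genes
              double avgWeightDiff; // W: mean |weight difference| over matching genes
            };

            double compatibility(const GeneComparison &g, double c1, double c2, double c3) {
              const double N = 1.0; // normalization term, generally left at 1
              return c1 * g.excess / N + c2 * g.disjoint / N + c3 * g.avgWeightDiff;
            }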


            In the long run, whatever the issue turns out to be, it would make sense, as is done in most NEAT implementations, to have a target for the number of species so that you don't have to guess the right compatibility threshold yourself. This idea is called "dynamic compatibility thresholding." My code includes a way to do that, as does SharpNEAT, though they do it in different ways. But the general idea is to let NEAT shift the compatibility threshold itself to aim for a target number of species. (A good target could be pop_size/15, which would mean the average species would have 15 members.)
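            As a minimal sketch of that idea (illustrative only; my code and SharpNEAT each implement it differently), the threshold can simply be nudged after each speciation pass depending on whether the species count is above or below the target:

            // Illustrative sketch of dynamic compatibility thresholding (not taken from
            // any particular NEAT implementation). Call after speciating each generation.
            void adjustCompatThreshold(double &compatThreshold, int numSpecies,
                                       int targetSpecies, double step = 0.3) {
              if (numSpecies > targetSpecies)
                compatThreshold += step;      // fewer, larger species next generation
              else if (numSpecies < targetSpecies)
                compatThreshold -= step;      // more, smaller species next generation
              if (compatThreshold < 0.3)      // keep the threshold positive
                compatThreshold = 0.3;
            }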


            Finally, I appreciate your suggested modification to the depth check, but it is not clear which lines were altered in the new version.


            I hope the suggestions above are helpful.


            Best,


            ken



