Some Questions Regarding SharpNEAT v2

  • Sina Iravanian
    Message 1 of 7 , Dec 2, 2010
      Hi all,

      I am using SharpNEAT v2 for some NEAT and HyperNEAT experiments, and wondered if:

      1. Is it possible to train the evolved neural networks (as in Lamarckian evolution)? Are there any learning mechanisms that exist for recurrent networks (such as backpropagation through time) implemented for them? How about CPPNs? Are there any ways to train them in a way other than evolution?

      2. Do the evolved networks contain standard feed-forward links implicitly? For example, consider a network with 5 inputs and 5 outputs, where the visualizer shows only one link between one of the inputs and one of the outputs. Are the other links missing, or just not visualized by the library?

      3. Why can't I define a CPPN with only 1 output? When I do so the library throws an IndexOutOfRangeException. But whenever I set the number of outputs to 2, everything works just fine.

      4. As far as I know, the set of activation functions and their parameters is fixed; e.g., in order to create a repeating pattern the library contains only Sin(2 * x). But what if the regularities to be modeled in some problem require a repetition with a higher (or lower) frequency?

      In the end, a lot of thanks go to Colin Green for his nice library.

      Regards,
      Sina


    • Colin Green
      Message 2 of 7 , Jan 2, 2011
        On 2 December 2010 21:22, Sina Iravanian <sina_iravanian@...> wrote:
        >
        >
        > I am using SharpNEAT v2 for some NEAT and HyperNEAT experiments, and wondered if:
        >
        > 1. Is it possible to train the evolved neural networks (as in Lamarckian evolution)? Are there any learning mechanisms that exist for recurrent
        > networks (such as backpropagation through time) implemented for them? How about CPPNs? Are there any ways to train them in a way
        > other than evolution?

        It's a good question. I'm not 100% sure; all I can say is that, of the
        learning methods I know of, they generally aren't used on recurrent
        networks. One of the outstanding tasks I have is to provide support
        for evolving feedforward-only nets, as they are more useful in some
        respects, e.g. easier to reverse engineer and to apply further
        learning methods to, as you point out.


        > 2. Do the evolved networks contain standard feed-forward links implicitly? For example, consider a network with 5 inputs and 5 outputs, that
        > the visualizer shows only one link between one of the inputs and one of the outputs. Are the other links missing or just not visualized by the
        > library?

        What you see is what you get. The initial random populations have just
        inputs and outputs, and typically have some small proportion of the
        inputs and outputs connected up; evolution works from there.


        > 3. Why can't I define a CPPN with only 1 output? When I do so the library throws an IndexOutOfRangeException. But whenever I
        > set the number of outputs to 2, everything works just fine.

        Sounds like a bug. As you're probably aware, the dimensionality of the
        substrate is not restricted in any way, but I may have made an
        assumption somewhere in the code that there would be at least 2
        dimensions. Let me know if this is still a problem and I'll look into
        it.



        > 4. As far as I know the set of activation functions and their parameters are fixed, e.g., in order to create a repeating pattern the library
        > contains only Sin(2 * x). But what if the regularities to be modeled in some problem requires a repetition with a higher (or lower) frequency?

        The set of activation functions is defined by
        IActivationFunctionLibrary, there are standard sets defined by
        DefaultActivationFunctionLibrary.CreateLibraryNeat() and
        CreateLibraryCppn(), but this can be replaced with a custom set of
        functions when you instantiate the IGenomeFactory, e.g. see the
        CreateGenomeFactory() method in
        BoxesVisualDiscriminationExperiment.cs.
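        As a concrete illustration of the higher-frequency case from question 4: the one-method interface below is a deliberately simplified stand-in for SharpNEAT's IActivationFunction (the real interface has additional members); it just shows the shape of a custom periodic function you could register in a custom library.

```csharp
using System;

// Simplified stand-in for SharpNEAT's activation function abstraction.
// The real IActivationFunction interface has more members than this.
public interface ISimpleActivationFunction
{
    double Calculate(double x);
}

// A sine with configurable frequency, e.g. Sin(4 * x) instead of the
// built-in Sin(2 * x), for regularities that repeat at a higher frequency.
public class ScaledSine : ISimpleActivationFunction
{
    private readonly double _freq;

    public ScaledSine(double freq)
    {
        _freq = freq;
    }

    public double Calculate(double x)
    {
        return Math.Sin(_freq * x);
    }
}
```

        A real library would hold a weighted set of such functions and be passed to the genome factory in your experiment's CreateGenomeFactory() override, as described above.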


        > In the end, a lot of thanks go to Colin Green for his nice library.

        Thanks, hope the above helps. Apologies for the late response.

        Colin.
      • gotnobluemilk
        Message 3 of 7 , Feb 6, 2012
          The bug from Item 3 (the requirement that at least 2 CPPN outputs be defined) appears to be in SharpNeat.Decoders.HyperNeat.Substrate, line 299:

          // Read bias connection weight from output 1.
          double weight = outputSignalArr[1];

          Should this be outputSignalArr[0]? Sure would love to get rid of having to have 2 outputs, since I ignore the 2nd one for fitness.

        • Colin Green
          Message 4 of 7 , Feb 6, 2012
            Hi,

            On 6 February 2012 13:36, gotnobluemilk <gotnobluemilk@...> wrote:
            >
            > // Read bias connection weight from output 1.
            > double weight = outputSignalArr[1];
            >
            > Should this be outputSignalArr[0]? Sure would love to get rid of having to have 2 outputs, since I ignore the 2nd one for fitness.
            >

            The code you're referring to takes the evolved CPPN and queries it to
            create the neural net that you will go on to evaluate. The CPPN
            accepts inputs describing the coordinates of the source and target
            nodes within the substrate. Output [0] is the connection strength
            between the source and target; output [1] is the strength of the bias
            connection going into the target node.
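            To make the query concrete, here is a rough, self-contained sketch of reading one substrate connection from a CPPN; the CPPN is mocked here as a plain delegate, whereas SharpNEAT's actual decoder works through IBlackBox and Substrate.cs:

```csharp
using System;

public static class SubstrateQuerySketch
{
    // Query a CPPN for one substrate connection. The inputs are the source
    // and target node coordinates; output [0] is the connection weight and
    // output [1], when present, is the bias weight for the target node.
    public static (double Weight, double Bias) QueryConnection(
        Func<double[], double[]> cppn,
        double srcX, double srcY, double tgtX, double tgtY)
    {
        double[] outputs = cppn(new[] { srcX, srcY, tgtX, tgtY });
        double weight = outputs[0];
        // A single-output CPPN simply has no bias signal to read.
        double bias = outputs.Length > 1 ? outputs[1] : 0.0;
        return (weight, bias);
    }
}
```

            With a single-output CPPN the bias read is skipped entirely, which is essentially the guard needed to avoid the out-of-range error.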

            If you are trying to evaluate the CPPN directly then ignoring the
            second output is probably the easiest approach, unless you wish to
            improve the performance by eliminating it. I think this is easy to do
            as the number of outputs on the CPPN is defined on each
            IGuiNeatExperiment (or INeatExperiment) class, on the OutputCount
            property, hence it's not fixed for SharpNEAT as a whole.

            Hope that helps,

            Colin.
          • gotnobluemilk
            Message 5 of 7 , Feb 6, 2012
              I set the outputs to 1 in my experiment class:

              public int OutputCount
              {
                  //get { return 1; }
                  get { return 2; }
              }

              When I return 1, I get the error I described. If it is 2 or higher, no error.

              Based on your comment, that error occurs because there is no bias strength being created. So I'm not quite sure why this is occurring. I have reduced my experiment to a single input, no hidden layer, and a single output, trying to figure this out. I really hate to spend twice the time evolving my network because I'm spending half the time evaluating a 2nd output that never gets used.


            • Colin Green
              Message 6 of 7 , Feb 7, 2012
                On 7 February 2012 01:21, gotnobluemilk <gotnobluemilk@...> wrote:
                >
                >
                > I set the outputs to 1 in my experiment class:
                > public int OutputCount
                > {
                > //get { return 1; }
                > get { return 2; }
                > }
                >
                > When I return 1, I get the error I described. If it is 2 or higher, no error.
                >
                > Based on your comment, that error occurs because there is no
                > bias strength being created. So I'm not quite sure why this is occurring.

                So I would set OutputCount to 1, and also modify Substrate.cs so that
                it doesn't try to read the bias weight from output[1]. Basically you
                can comment out lines 298 to 311 (the section of code that adds a bias
                connection). It's not clear if that is what you did, so report back if
                this isn't working.

                Colin
              • gotnobluemilk
                Message 7 of 7 , Feb 7, 2012
                  Changed the code to this:

                  if (blackbox.OutputCount > 1)
                  {
                      // Read bias connection weight from output 1.
                      double weight = outputSignalArr[1];

                      // Skip connections with a weight magnitude less than _weightThreshold.
                      double weightAbs = Math.Abs(weight);
                      if (weightAbs > _weightThreshold)
                      {
                          // For weights over the threshold we re-scale into the range [-_maxWeight,_maxWeight],
                          // assuming IBlackBox outputs are in the range [-1,1].
                          weight = (weightAbs - _weightThreshold) * _weightRescalingCoeff * Math.Sign(weight);

                          // Create network definition connection and add to list. Bias node is always ID 0.
                          networkConnList.Add(new NetworkConnection(0, node._id, weight));
                      }
                  }

                  Set OutputCount = 1 in the experiment, created only 1 output, and it works like a champ. Thanks.
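                  As an aside, the rescaling step in that block maps surviving weight magnitudes in (_weightThreshold, 1] onto (0, _maxWeight]. In isolation it looks like the sketch below; I'm assuming _weightRescalingCoeff = _maxWeight / (1 - _weightThreshold), which follows from the comment about outputs being in [-1,1], but check Substrate.cs for the actual definition.

```csharp
using System;

public static class WeightRescaling
{
    // Re-scale a CPPN output w in [-1,1] whose magnitude exceeds the
    // threshold t into the range (0, maxWeight], preserving sign. Weights
    // at or below the threshold mean "no connection" and are skipped by
    // the caller.
    public static double Rescale(double w, double t, double maxWeight)
    {
        double coeff = maxWeight / (1.0 - t); // assumed _weightRescalingCoeff
        return (Math.Abs(w) - t) * coeff * Math.Sign(w);
    }
}
```

                  For example, with t = 0.2 and maxWeight = 5.0, an output of 1.0 rescales to 5.0 and an output of -1.0 rescales to -5.0.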