
HyperNEAT experiment ..

  • petar_chervenski
    Message 1 of 8, May 1, 2007
      Hello,

      I am experimenting with my implementation of HyperNEAT in C++. The
      application is a roving eye visual discrimination system.
      The CPPNs use sigmoid, Gaussian, and sine activation functions. The
      substrate space is 3D Cartesian.
      The results are a bit disappointing, so I am looking for an answer
      as to why.
      I set up the substrate like this: the inputs are organized in an NxN
      grid on the Z == -1 plane (N is currently 11), the hidden nodes are
      arranged in a circle on the Z == 0 plane, and the outputs are
      arranged the same way on the Z == 1 plane.
      The eye learns to distinguish between a square and a circle very
      easily, but things got weird when I increased the complexity of the
      task: it had to learn to distinguish the target shape regardless of
      its position.
      It never learned that. The eye moves and zooms chaotically, without
      concentrating on the shape's corners or anything like that; instead
      it switches state and flees when some pixel on the retina is set. I
      hope you can imagine this.
      I hope you can help me find the problem. It may be the substrate
      configuration or dimensionality, or the NEAT parameters (which I
      checked without finding anything strange)...
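For concreteness, the substrate layout described above can be sketched as follows. This is a Python sketch of the geometry only (the actual implementation is C++), and the hidden/output node counts are assumptions, since the post does not give them:

```python
import math

def build_substrate(n=11, n_hidden=8, n_outputs=4):
    """Node coordinates for the 3D substrate described above.
    n_hidden and n_outputs are illustrative guesses."""
    # Inputs: an N x N grid on the Z == -1 plane, spanning [-1, 1].
    step = 2.0 / (n - 1)
    inputs = [(-1.0 + i * step, -1.0 + j * step, -1.0)
              for i in range(n) for j in range(n)]
    # Hidden nodes: evenly spaced on a unit circle in the Z == 0 plane.
    hidden = [(math.cos(2 * math.pi * k / n_hidden),
               math.sin(2 * math.pi * k / n_hidden), 0.0)
              for k in range(n_hidden)]
    # Outputs: the same circular arrangement on the Z == 1 plane.
    outputs = [(math.cos(2 * math.pi * k / n_outputs),
                math.sin(2 * math.pi * k / n_outputs), 1.0)
               for k in range(n_outputs)]
    return inputs, hidden, outputs
```

With N=11 this yields 121 input nodes, matching the link counts discussed later in the thread.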

      Peter
    • Jason Gauci
      Message 2 of 8, May 1, 2007
        Hey Peter,

        Good to hear that you are making use of the software! As soon as I
        get through this next milestone, I'll release a 1.1 version which has
        support for multi-individual experiment evaluations and a better
        random number generator with some different random distributions to
        play with, and a few bug fixes.

        I have a few suggestions which might improve the performance of your
        app:

        First off, you have to account for the multiplication effect.
        Basically, with an 11x11 input grid, you have potentially 121 links
        going into a single hidden node. The problem is that there
        is so much magnitude there that your sigmoids can end up always
        outputting high values, because you are operating at the far
        reaches of the sigmoid's domain. It also forces your system to
        make distinctions between .9990 and .9991. This can explain the
        erratic behavior. In my GECCO paper I mentioned that I fixed this by scaling
        down the inputs so that instead of putting a 1.0 for each input that
        is filled in, I put a fraction based on the number of nodes. In
        another experiment I'm running, I decided to take a different route
        and instead I divided all of the link weights by N (in your case,
        11.0). This seemed to produce values which were in the domain [-5,5]
        the majority of the time. If this still produces high values, you
        can try dividing by N*N.
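The scaling idea can be sketched like this (a Python sketch; the sigmoid is a stand-in for whatever activation the substrate actually uses, and 11 is Peter's N):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def node_output(weights, activations, n=11):
    """Weighted sum into one substrate node, with each link weight
    divided by N so that up to N*N incoming links don't push the
    sigmoid as deep into its saturated region. If values still run
    high, divide by n * n instead, as suggested above."""
    total = sum((w / n) * a for w, a in zip(weights, activations))
    return sigmoid(total)

# With 121 links of weight 1 and every input on, the raw net input is
# 121 (far into saturation); dividing weights by N reduces it to 11.
raw = sum(1.0 for _ in range(121))
scaled = sum(1.0 / 11 for _ in range(121))
```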

        When your roving eye zooms in, do you change N? That seems like
        the right thing to do, but I don't think you are doing it.
        What I think you should do is create substrates for N=22,
        N=11, and N=5. Then, when your eye zooms in, just switch to the
        substrate with the smaller N. Hopefully this will let
        HyperNEAT handle the granularity of the data, something it's
        already very good at. I would suggest not using a roving eye at
        all, but since I already did that in the GECCO paper, I think
        this approach would be more interesting.
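That switching could be sketched as below (Python; the zoom thresholds are invented for illustration, since the thread only names the three resolutions):

```python
def pick_resolution(zoom, resolutions=(22, 11, 5)):
    """Map a zoom factor to a substrate grid resolution N: the more
    the eye zooms in, the coarser the substrate it should drive.
    The thresholds 1.5 and 3.0 are arbitrary illustrations."""
    if zoom < 1.5:
        return resolutions[0]   # zoomed out: fine grid, N=22
    if zoom < 3.0:
        return resolutions[1]   # mid zoom:   N=11
    return resolutions[2]       # zoomed in:  coarse grid, N=5
```

The eye controller would then feed its retina into whichever pre-built substrate `pick_resolution` selects for the current zoom.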

        Also, be careful about using Z for each layer. You are in some sense
        saying that the output layer is the "negative image" of the input
        layer. Of course, the neural network will eventually learn anything,
        but it's harder to bootstrap when the initial conditions are a little
        wacky.

      • Stephen Waits
        Message 3 of 8, May 1, 2007

          On May 1, 2007, at 4:51 AM, petar_chervenski wrote:

          > The substrate space is 3D Cartesian.

          Have you tried it on a 2D space? Perhaps the extra dimension is stretching the NEAT part of HyperNEAT a bit too far for this problem?

          --Steve
        • petar_chervenski
          Message 4 of 8, May 1, 2007
            Yes, a 2D space was the first version. But things got complicated,
            mainly because there is no way to make a matrix-like input layer
            and to place the hidden and output nodes in positions that are
            meaningful. An intuitive idea is to put the output at the center
            and the inputs and hidden neurons around it, but then it is no
            longer a visual field, you know, a grid of inputs. The solution
            was to add an extra dimension and to put the hidden and output
            nodes behind the input layer, not on it.
            I think this is good; after all, real neural networks are 3D,
            and 2D is just a simplification.
            Someone once mentioned that a roving eye is a controller and a
            classifier at the same time. I guess it is hard for HyperNEAT to
            figure out how to control the eye at first. This is very important.
            Currently I am trying out Jason's suggestion about it (normalizing
            the inputs to avoid the multiplication problem), but it still
            doesn't learn to find the shape on the field.
            Perhaps I should not use a roving eye at all?



          • Kenneth Stanley
            Message 5 of 8, May 1, 2007
              Yes, I would suggest trying the experiment without a roving eye, since
              in a sense the problem of dealing with different parts of the same
              field in the same way is what HyperNEAT is designed to do, so a
              roving eye may be overkill.

              Also, HyperNEAT is very new so every decision that is made is in some
              way an experiment. Therefore, it makes sense to start with the
              simplest possible concept and work towards more experimental
              complexity slowly. Otherwise, there are too many simultaneous open
              issues.

              One other note: You probably should give the CPPN a way to express
              biases for the nodes in the substrate. Jason has done work on this
              issue, and there are a number of ways to do it. But without bias,
              the substrate ANN is a little bit disabled.
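One way this is commonly done is to read a node's bias from a dedicated CPPN output, querying with the node's own coordinates and zeros for the second point. A sketch (the `cppn` callable and its two-output shape are assumptions for illustration, not the exact scheme from Jason's work):

```python
def query_bias(cppn, node):
    """Query the CPPN for a per-node bias: feed the node's (x, y, z)
    as the first point and the origin as the second point, then read
    the CPPN's bias output. `cppn` maps 6 coords -> (weight, bias)."""
    x, y, z = node
    _weight, bias = cppn(x, y, z, 0.0, 0.0, 0.0)
    return bias

# Usage with a toy stand-in CPPN (weight = x1*x2, bias = (x1+y1)/2):
toy_cppn = lambda x1, y1, z1, x2, y2, z2: (x1 * x2, 0.5 * (x1 + y1))
```

Because the bias comes from the same CPPN that paints the weights, it varies smoothly across the substrate just as the connectivity pattern does.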

              ken



            • Ken Lloyd
              Message 6 of 8, May 1, 2007
                Petar,

                I have used an n-dimensional context space to hold the input and output nodes, and embed the hidden nodes (graph) / genome dual in that space.

                If you use a 3D representation, you can always project it down to 2D by reciprocal homogeneous w values if necessary.

                While this may be computationally expensive, there are ways of using the processing power of your GPU to do the number crunching without involving much CPU processing; cf. NVIDIA's CUDA.

                Ken



              • Kenneth Stanley
                Message 7 of 8, May 1, 2007
                  Ken,

                  Out of curiosity, what have you been able to do using this
                  representation?

                  ken

                • Ken Lloyd
                  Message 8 of 8, May 2, 2007
                    Ken,

                    I have a paper being submitted to the Software and Systems Modeling Journal (SoSyM) that illustrates some accomplishments. If it is accepted, I will be happy to present a general account for the group and the role NEAT plays using a context space. If it isn't accepted, then depending upon the reasons, I will still probably post it to the group for comment, along with the objections.

                    Ken


