- Jan 1, 2014
I have continued working on this problem, and my results have been pretty poor, as expected.

The biggest concept I am struggling to visualize is how NEAT differs from a normal ANN. One of the features of NEAT that struck me as most interesting is that it can recognize the relationships between inputs, e.g. chess pieces on a board, where a plain backpropagation ANN would not understand those relationships.

Stepping through my problem (described in the first post), I take the first grid location (0,0); its input coordinates will be -1, -1. I want to map this to the same location on the output grid, so the coordinates for the output location will also be -1, -1. Thus, the input to my CPPN looks like the below:

Input 1 = -1 -> Input grid X location 0

Input 2 = -1 -> Input grid Y location 0

Input 3 = -1 -> Output grid X location 0

Input 4 = -1 -> Output grid Y location 0

Input 5 = 1 -> Fixed bias value of 1

I feed all of the above into the CPPN, and it generates a value (say 0.123, for example).
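The query described above can be sketched like this. The `cppn` function here is a hypothetical stand-in for the evolved network (any smooth function of the five inputs); the coordinate mapping assumes grid index 0 maps to -1 and index n-1 maps to +1, as in the post:

```python
import math

def grid_to_substrate(i, n):
    """Map grid index 0..n-1 to a substrate coordinate in [-1, 1]."""
    return 2.0 * i / (n - 1) - 1.0

def cppn(x1, y1, x2, y2, bias):
    """Stand-in for an evolved CPPN (hypothetical): any smooth
    function of the five inputs serves as a placeholder here."""
    return math.tanh(x1 * x2 + y1 * y2 + 0.1 * bias)

n = 11  # square grid of 11x11, as in the post
x1, y1 = grid_to_substrate(0, n), grid_to_substrate(0, n)  # input (0,0) -> (-1,-1)
x2, y2 = grid_to_substrate(0, n), grid_to_substrate(0, n)  # output (0,0) -> (-1,-1)
weight = cppn(x1, y1, x2, y2, 1.0)  # fifth input is the fixed bias of 1
```

The returned value is then used as the connection weight between that input location and that output location.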

So, I take the input pixel value (let's say it is 1) and multiply it by the weight, so the value of the output grid at (0,0) = 1 * 0.123.

Because every input pixel (grid location) is mapped to every output pixel, I simply keep summing the values into the output grid. E.g. at output grid location (0,0), for a square grid of side 11, I will have 11 x 11 = 121 values that contribute to the output value at (0,0). So I just sum these values to produce the output value?
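Put together, the full pass over the substrate looks roughly like the sketch below. The `cppn` stub and the final `tanh` squashing of the sum are assumptions for illustration (squashing the summed activation is a common choice, not something stated above):

```python
import math

def cppn(x1, y1, x2, y2, bias):
    # hypothetical stand-in for the evolved CPPN
    return math.tanh(x1 * x2 + y1 * y2 + 0.1 * bias)

def coord(i, n):
    # map grid index 0..n-1 to substrate coordinate in [-1, 1]
    return 2.0 * i / (n - 1) - 1.0

n = 11
inp = [[1.0] * n for _ in range(n)]  # example input grid, all ones

out = [[0.0] * n for _ in range(n)]
for ox in range(n):
    for oy in range(n):
        total = 0.0
        # every input pixel contributes one weighted value: 121 terms per output cell
        for ix in range(n):
            for iy in range(n):
                w = cppn(coord(ix, n), coord(iy, n), coord(ox, n), coord(oy, n), 1.0)
                total += w * inp[ix][iy]
        out[ox][oy] = math.tanh(total)  # assumed: squash the summed activation
```

Each output cell is thus the (squashed) sum of 121 weighted input values, matching the description above.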

Even doing this, I fail to understand how the network learns the relationships between grid locations. Basically, at the moment I am feeding each output cell with 121 weighted values, each produced from networks that only take the current grid locations as input values.

I am sure I am missing a key point here. Advice as always is appreciated.

PS I find the processing time slow for my current network. Each population of 100 members, with a CPPN of 10 functions, takes 2500 ms per 10 random grids, so evolving a population takes hours. Does this seem normal? Once I have the basics understood, I think moving the process to a GPU will speed things up...
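As a rough sanity check on the cost (back-of-envelope only): fully connecting two 11x11 grids means 121 x 121 CPPN queries per network. Note that, in the setup described above, the weights depend only on the grid coordinates, not on the grid contents, so they need only be queried once per member rather than once per random grid:

```python
n = 11
connections = (n * n) ** 2        # 121 inputs x 121 outputs = 14641 CPPN queries
members = 100

# weights depend only on coordinates, so one query pass per member suffices
queries_per_generation = members * connections  # 1,464,100

assert connections == 14641
assert queries_per_generation == 1464100
```

If the weights are instead requeried for every random grid, the query count multiplies by the number of grids, which may explain part of the slowness.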
