NEAT Users Group is a Restricted Group with 631 members.
CPPNs: Recursive vs Feedforward
Quick question: what's the prevailing opinion on whether CPPNs should
be recursive or feedforward-only?

Also, there were discussions some years back about algorithms for
efficiently implementing feedforward networks, e.g. calculating the
longest path from input to output. I think Derek and Phillip worked on
these ideas in ANJI, so I'll be looking in the ANJI code (and NEAT
group archive), but if anyone's got any more recent ideas/pointers on
the subject then I'd be interested to hear them. Thanks.
Hi Colin
Colin Green <colin.green1@...> wrote:
> Also, there were discussions some years back about algorithms for
> efficiently implementing feedforward networks, e.g. calculating the
> longest path from input to output. I think Derek and Phillip worked on
> these ideas in ANJI, so I'll be looking in the ANJI code (and NEAT
> group archive), but if anyone's got any more recent ideas/pointers on
> the subject then I'd be interested to hear them. Thanks.
I've recently implemented an algorithm that finds this longest
path. It is written using boost::graph for my own framework (sferes2),
but you could probably translate it to use it in NEAT. Here is the
code (check boost.org for the documentation of boost::graph),
where this->_g is the (feedforward) graph. I haven't heavily
tested this code, so I don't guarantee it's bug-free, but it should
at least help you to get started.
void _compute_depth()
{
  using namespace boost;
  typedef std::map<vertex_desc_t, size_t> int_map_t;
  typedef std::map<vertex_desc_t, vertex_desc_t> vertex_map_t;
  typedef std::map<vertex_desc_t, default_color_type> color_map_t;
  typedef std::map<edge_desc_t, int> edge_map_t;
  typedef associative_property_map<int_map_t> a_map_t;
  typedef associative_property_map<color_map_t> c_map_t;
  typedef associative_property_map<vertex_map_t> v_map_t;
  typedef associative_property_map<edge_map_t> e_map_t;

  color_map_t cm; c_map_t cmap(cm);
  vertex_map_t vm; v_map_t pmap(vm);
  edge_map_t em;
  // every edge has weight 1: we are counting hops
  BGL_FORALL_EDGES_T(e, this->_g, graph_t)
    em[e] = 1;
  e_map_t wmap(em);

  _depth = 0;
  // we compute the longest path between inputs and outputs, by running
  // dag_shortest_paths "upside down" (greater<int> comparison, min() as infinity)
  BOOST_FOREACH(vertex_desc_t s, this->_inputs)
  {
    int_map_t im; a_map_t dmap(im);
    dag_shortest_paths
      (this->_g, s, dmap, wmap, cmap, pmap,
       dijkstra_visitor<null_visitor>(),
       std::greater<int>(),
       closed_plus<int>(),
       std::numeric_limits<int>::min(), 0);
    BGL_FORALL_VERTICES_T(v, this->_g, graph_t)
    {
      size_t d = get(dmap, v);
      // skip unreachable vertices (their distance stays at the "infinity" value)
      if (this->_g[v].get_out() != 1 && d <= num_vertices(this->_g))
        _depth = std::max(_depth, d);
    }
  }
  // add one "to be sure"
  _depth++;
}

Jean-Baptiste Mouret / Mandor
tel : (+33) 6 28 35 10 49
3 ((4&({*.(=+/))++/=3:)@([:,/0&,^:(i.3)@:"2^:2))&.>@]^:(i.@[) <#:3 6 2
Colin, it is probably not a critical decision whether CPPNs are recurrent or not (at least for the purposes of HyperNEAT), but it does have implications. In the context of patterns, recurrence has the effect of potentially creating fractals, that is, self-similarity at different levels of scale. If you think about it, recurrence is like recreating a coordinate frame situated within its own self, which means you will get it repeating at different scales. Note that because of the complexity of the overall network, such repetitions may not be exact (and likely won't be), so you'd get repetition with variation at different scales.
So it introduces a new kind of regularity that the other activation functions don't genuinely offer. However, my feeling from looking at 2D patterns created by recurrent CPPNs is that they are brittle, and such patterns can be difficult to optimize. While they may produce fractals in principle, in practice they can produce messy noise-like patterns because of their brittleness and interaction with the other parts of the network (which may include even more recurrence). I have a hunch that a more elegant kind of recurrence could be introduced to artificially enforce certain types of activation paths (while freezing others) to produce much more elegant fractal-like patterns consistently, but barring that, a CPPN with recurrent connections will not necessarily produce patterns that are easy to characterize. (Note that it is also an open and interesting question whether a neural network should have fractal-like characteristics or not.)
Nevertheless, it is ultimately not clear whether or not such connections cause a problem. While they may not look like very pretty fractals, for the purposes of HyperNEAT it is not clear that they need to look pretty, and maybe they aren't hurting anything, or are even helping a bit by giving extra degrees of freedom through the search space of patterns. I believe HyperSharpNEAT actually does allow such connections, and it doesn't seem to have any serious performance implication, though it's hard to be certain without extensive testing.
But if you really want to understand what types of things CPPNs will output, it's best left feedforward, because then it can be understood as sums and compositions of functions, which is conceptually pure and follows the theory in the papers. For example, Picbreeder is strictly feedforward, and that helps us to understand the function of internal nodes in Picbreeder networks (we hope to soon make public the structure of CPPNs within Picbreeder, which is very educational).
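To make the function-composition view concrete, here is a toy sketch (the topology and weights are purely illustrative, not taken from Picbreeder): a feedforward CPPN with one Gaussian hidden node feeding a sine output node is literally just nested function calls, and a regularity like symmetry is immediately readable from the composition.

```cpp
#include <cassert>
#include <cmath>

// Gaussian activation function (a common CPPN primitive)
double gaussian(double x) { return std::exp(-x * x); }

// A tiny feedforward CPPN: output = sin(2 * gauss(0.8*x + 0.8*y)).
// Because the hidden node receives x and y with equal weights, the
// composed function is symmetric in x and y -- a regularity visible
// directly from the feedforward structure.
double cppn(double x, double y)
{
    double h = gaussian(0.8 * x + 0.8 * y); // hidden Gaussian node
    return std::sin(2.0 * h);               // sine output node
}
```

Add a recurrent connection and this clean reading as a closed-form composition of functions disappears.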
I hope this provides some insight.
ken
--- In neat@yahoogroups.com, Colin Green <colin.green1@...> wrote:
>
> Quick question: what's the prevailing opinion on whether CPPNs should
> be recursive or feedforward-only?
>
> Also, there were discussions some years back about algorithms for
> efficiently implementing feedforward networks, e.g. calculating the
> longest path from input to output. I think Derek and Phillip worked on
> these ideas in ANJI, so I'll be looking in the ANJI code (and NEAT
> group archive), but if anyone's got any more recent ideas/pointers on
> the subject then I'd be interested to hear them. Thanks.
Colin, there is indeed a fast way to compute feedforward activation, but it does not actually involve computing the longest path. Rather, it entails recursing back from the outputs to the inputs, each time summing the incoming activation from the recursive calls. If a node that has already been computed is called again, it simply returns the result of that prior computation. This approach is efficient because every activation function is guaranteed to be run only once. In contrast, in the more common arbitrary-topology approach to activation, at time t each node's activation is computed from the incoming nodes' activations at time t-1, so it can take several ticks (and therefore reactivations) for activation to travel through the whole network.
Note that the two different ways of activating do not always yield equivalent results because of some subtle implications. However, these differences generally only come up in continual control scenarios (i.e. scenarios in which a network is activated over and over again continually). If you just want a single "final answer" (e.g. a pattern that the CPPN outputs) they should be equivalent, as long as the network really is feedforward.
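A minimal sketch of this recurse-and-memoize activation (the node structure and names are my own illustration, not taken from any particular NEAT codebase):

```cpp
#include <cassert>
#include <cmath>
#include <map>
#include <utility>
#include <vector>

// Illustrative node: a list of (source node id, connection weight) pairs.
struct Node
{
    std::vector<std::pair<int, double>> incoming;
    bool is_input = false;
    double input_value = 0.0;
};

// Recursive activation with memoization: recurse from an output node back
// toward the inputs; each node's activation function runs at most once.
double activate(int id, const std::vector<Node>& net, std::map<int, double>& memo)
{
    const Node& n = net[id];
    if (n.is_input)
        return n.input_value;       // base case: inputs just hold their value
    auto it = memo.find(id);
    if (it != memo.end())
        return it->second;          // already computed: return the cached result
    double sum = 0.0;
    for (const auto& c : n.incoming)
        sum += c.second * activate(c.first, net, memo);
    double out = std::tanh(sum);    // activation function (tanh as an example)
    memo[id] = out;
    return out;
}
```

This only works on feedforward (acyclic) networks; a recurrent connection would make the recursion loop forever.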
ken
--- In neat@yahoogroups.com, Colin Green <colin.green1@...> wrote:
>
> Quick question: what's the prevailing opinion on whether CPPNs should
> be recursive or feedforward-only?
>
> Also, there were discussions some years back about algorithms for
> efficiently implementing feedforward networks, e.g. calculating the
> longest path from input to output. I think Derek and Phillip worked on
> these ideas in ANJI, so I'll be looking in the ANJI code (and NEAT
> group archive), but if anyone's got any more recent ideas/pointers on
> the subject then I'd be interested to hear them. Thanks.
2009/11/3 Ken <kstanley@...>
[...]
> Colin, there is indeed a fast way to compute feedforward activation, [...] it entails recursing back from the outputs to the inputs, each time summing the incoming activation from the recursive calls. If a node that has already been computed is called again, it simply returns the result of that prior computation.

Yes, I see. It's like evaluating an expression tree, but with an extra
rule for handling the fact that it's not a tree (it's a directed acyclic
graph). With the timestep approach, nodes are being activated even when
the signal 'wave' hasn't reached them yet, hence there is some
inefficiency.

> Note that the two different ways of activating do not always yield equivalent results because of some subtle implications.

Noted. Thanks.
2009/11/3 Jean-Baptiste Mouret / Mandor <mandor@...>
> I've recently implemented an algorithm that finds this longest
> path. It is written using boost::graph for my own framework (sferes2),
> but you could probably translate it to use it in NEAT. Here is the
> code (check boost.org for the documentation of boost::graph),
> where this->_g is the (feedforward) graph. I haven't heavily
> tested this code, so I don't guarantee it's bug-free, but it should
> at least help you to get started.

Thanks. I wonder if this is the same approach for DAGs listed here:
http://en.wikipedia.org/wiki/Longest_path_problem

Colin.
2009/11/3 Ken <kstanley@...>
[...]
> Colin, it is probably not a critical decision whether CPPNs are recurrent or not (at least for the purposes of HyperNEAT), but it does have implications.

Thanks. From reading your response, my instinct is to use feedforward
as the default/baseline for CPPNs and to look toward recursive CPPNs
as a research path from there. The function composition aspect of
feedforward networks is simple to understand (again, it's like an
expression tree with a little extra complexity, but not as much as a
recursive network).

> But if you really want to understand what types of things CPPNs will output, it's best left feedforward, because then it can be understood as sums and compositions of functions, which is conceptually pure and follows the theory in the papers.

Totally agree.

> For example, Picbreeder is strictly feedforward, and that helps us to understand the function of internal nodes in Picbreeder networks (we hope to soon make public the structure of CPPNs within Picbreeder, which is very educational).

Will look forward to it. I appreciate the amount of work it takes to
get this kind of thing up and running.

Colin.
Hi Colin,

Colin Green <colin.green1@...> wrote:
> 2009/11/3 Jean-Baptiste Mouret / Mandor <mandor@...>
> >
> > I've recently implemented an algorithm that finds this longest
> > path. It is written using boost::graph for my own framework (sferes2),
> > but you could probably translate it to use it in NEAT. Here is the
> > code (check boost.org for the documentation of boost::graph),
> > where this->_g is the (feedforward) graph. I haven't heavily
> > tested this code, so I don't guarantee it's bug-free, but it should
> > at least help you to get started.
>
> Thanks. I wonder if this is the same approach for DAGs listed here:
> http://en.wikipedia.org/wiki/Longest_path_problem

Yes, this is exactly the second approach listed in the DAG section
(i.e. "Using shortest-path algorithms"), since a feedforward NN is
obviously a DAG.

I used a shortest-path algorithm specifically optimized for DAGs (this one:
http://www.boost.org/doc/libs/1_40_0/libs/graph/doc/dag_shortest_paths.html)
in which I "reversed" the distance.
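Equivalently, for anyone who doesn't want to pull in boost, the same unit-weight longest-path computation can be sketched as plain dynamic programming over a topological order (Kahn's algorithm). This is my own illustrative version, not the sferes2 code:

```cpp
#include <algorithm>
#include <cassert>
#include <queue>
#include <utility>
#include <vector>

// Longest path (in edges) over a DAG with n vertices, all edge weights 1.
// Process vertices in topological order; each vertex's longest distance is
// final by the time it is dequeued, so one pass suffices.
int dag_longest_path(int n, const std::vector<std::pair<int, int>>& edges)
{
    std::vector<std::vector<int>> adj(n);
    std::vector<int> indeg(n, 0);
    for (const auto& e : edges) {
        adj[e.first].push_back(e.second);
        ++indeg[e.second];
    }
    std::queue<int> q;
    for (int v = 0; v < n; ++v)
        if (indeg[v] == 0)
            q.push(v);                  // sources start at distance 0
    std::vector<int> dist(n, 0);
    int longest = 0;
    while (!q.empty()) {
        int u = q.front(); q.pop();
        for (int v : adj[u]) {
            dist[v] = std::max(dist[v], dist[u] + 1); // relax edge u -> v
            longest = std::max(longest, dist[v]);
            if (--indeg[v] == 0)        // all predecessors done: v is ready
                q.push(v);
        }
    }
    return longest;
}
```

For a feedforward NEAT network you would build `edges` from the enabled connection genes and take `longest` (plus one, if counting layers rather than hops) as the depth.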
Best regards,

Jean-Baptiste Mouret / Mandor
tel : (+33) 6 28 35 10 49
3 ((4&({*.(=+/))++/=3:)@([:,/0&,^:(i.3)@:"2^:2))&.>@]^:(i.@[) <#:3 6 2
Hey Colin,
CPPNs, being an indirect encoding, allow small changes in the genotype to have profound effects on the resulting substrate. While this is what makes indirect encodings so powerful, the drawback is that it is hard to make a population drift along a particular gradient in the fitness landscape. In practice, I've noticed that adding recurrence magnifies this problem without providing much benefit in terms of overall fitness.
Jason G.
 In neat@yahoogroups.com, Colin Green <colin.green1@...> wrote:
>
> 2009/11/3 Ken <kstanley@...>
> >
> > Colin, it is probably not a critical decision whether CPPNs are recurrent or not (at least for the purposes of HyperNEAT), but it
> > does have implications.
> [...]
>
> Thanks. From reading your response, my instinct is to use feedforward
> as the default/baseline for CPPNs and to look toward recursive CPPNs
> as a research path from there. The function composition aspect of
> feedforward networks is simple to understand (again, it's like an
> expression tree with a little extra complexity, but not as much as a
> recursive network).
>
>
> > But if you really want to understand what types of things CPPNs will output, it's best left feedforward, because then it can
> > be understood as sums and compositions of functions, which is conceptually pure and follows the theory in the papers.
>
> Totally agree.
>
> > For example, Picbreeder is strictly feedforward, and that helps us to understand the function of internal nodes in Picbreeder
> > networks (we hope to soon make public the structure of CPPNs within Picbreeder, which is very educational).
>
>
> Will look forward to it. I appreciate the amount of work it takes to
> get this kind of thing up and running.
>
> Colin.
2009/11/4 jgmath2000 <jgmath2000@...>
> CPPNs, being an indirect encoding, allow for small changes in the genotype to have profound effects on the resulting substrate.
> While this is what makes indirect encodings so powerful, the drawback is that it is hard to make a population drift along a particular
> gradient in the fitness landscape. In practice, I've noticed that adding recurrence magnifies this problem without providing
> much benefit in terms of overall fitness.

Noted. Thanks.