New paper in PLoS One: On the Relationships between Generative Encodings, Regularity, and Learning Abilities when Evolving Plastic Artificial Neural Networks.

  • Jean-Baptiste Mouret
    Jan 16, 2014
      Hello all

      I am very happy to announce that our latest paper is now online:
      Tonelli, P. and Mouret, J.-B. (2013). On the Relationships between Generative Encodings, Regularity, and Learning Abilities when Evolving Plastic Artificial Neural Networks.
      PLoS ONE, 8(11): e79138.
      -> http://www.isir.upmc.fr/files/2013ACLI2965.pdf
      -> http://dx.doi.org/10.1371/journal.pone.0079138

      This paper shows that developmental systems, synaptic plasticity, regularity, and flexibility are deeply intertwined topics.

      Besides the scientific contributions about encodings and neural networks, you might be interested in the new technique used in this paper to evaluate the regularity of neural networks. This technique is based on counting the number of automorphisms (http://en.wikipedia.org/wiki/Graph_automorphism), a problem that can be solved efficiently by widely available libraries. Our implementation is available on GitHub (https://github.com/jbmouret/network_toolbox), but it is mainly a call to bliss (http://www.tcs.hut.fi/Software/bliss/), so you should be able to do the same easily in your own framework. There are examples at the end of the paper that illustrate how the number of automorphisms relates to regularity. For those interested, there is also a theoretical paper that links the number of automorphisms to Kolmogorov complexity (http://arxiv.org/abs/1306.0322).
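
      If you want to experiment with the idea without setting up bliss, the small sketch below counts automorphisms with networkx's GraphMatcher (an isomorphism search of a graph against itself). This is only an illustrative Python example of the general technique, not the code from network_toolbox, and it is far slower than bliss on anything but small graphs.

      # Illustrative sketch (not from network_toolbox): count graph
      # automorphisms with networkx instead of bliss.
      import networkx as nx
      from networkx.algorithms import isomorphism

      def count_automorphisms(graph):
          # An automorphism is an isomorphism of the graph onto itself,
          # so we match the graph against itself and count the mappings.
          matcher = isomorphism.GraphMatcher(graph, graph)
          return sum(1 for _ in matcher.isomorphisms_iter())

      # A highly regular topology (an 8-node cycle) has many automorphisms
      # (16: the rotations and reflections)...
      print(count_automorphisms(nx.cycle_graph(8)))

      # ...whereas an irregular topology has only the identity mapping.
      irregular = nx.Graph([(0, 1), (0, 2), (0, 3), (1, 2), (2, 4), (4, 5)])
      print(count_automorphisms(irregular))

      Note that this brute-force search is only practical for small graphs; for larger networks, the bliss-based implementation in network_toolbox is the way to go.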

      Overall, while this tool is not perfect, I think we should, as a scientific community, work on a common set of measures to compare and analyze our work (and not focus on performance benchmarks, which are too short-sighted). This measure of regularity is a first step in this direction.

      Abstract:
      A major goal of bio-inspired artificial intelligence is to design artificial neural networks with abilities that resemble those of animal nervous systems. It is commonly believed that two keys for evolving nature-like artificial neural networks are (1) the developmental process that links genes to nervous systems, which enables the evolution of large, regular neural networks, and (2) synaptic plasticity, which allows neural networks to change during their lifetime. So far, these two topics have been mainly studied separately. The present paper shows that they are actually deeply connected. Using a simple operant conditioning task and a classic evolutionary algorithm, we compare three ways to encode plastic neural networks: a direct encoding, a developmental encoding inspired by computational neuroscience models, and a developmental encoding inspired by morphogen gradients (similar to HyperNEAT). Our results suggest that using a developmental encoding could improve the learning abilities of evolved, plastic neural networks. Complementary experiments reveal that this result is likely the consequence of the bias of developmental encodings towards regular structures: (1) in our experimental setup, encodings that tend to produce more regular networks yield networks with better general learning abilities; (2) whatever the encoding is, the most regular networks are statistically those with the best learning abilities.

      Best regards,
      --
      Jean-Baptiste Mouret / Mandor
      http://pages.isir.upmc.fr/~mouret/