
Re: [neat] Digest Number 2051

  • afcarl1
    Jun 11, 2014
      NEAT Users Group
      Ken,
       
      Very interesting paper; one of my areas of interest is a synergistic combination of NEAT and deep learning approaches to unsupervised feature discovery.
       
      Is there a software/application zip file available so I can attempt to duplicate the paper's results? I note the reference to SharpNEAT for the default parameter specification, but I was unable to find any reference to a configured, turn-key package for reproducing the paper's results. Please advise.
       
      Thanks,
      Andy
      ----- Original Message -----
      Sent: Tuesday, June 10, 2014 11:22 PM
      Subject: [neat] Digest Number 2051

      1 Message

      Digest #2051

      Tue Jun 10, 2014 7:56 pm (PDT), posted by kenstanley01

      My coauthors Paul Szerlip, Greg Morse, and Justin Pugh and I are excited to announce our new arXiv e-print, "Unsupervised Feature Learning through Divergent Discriminative Feature Accumulation":


      eplex link: http://eplex.cs.ucf.edu/papers/szerlip_arxiv1406.1833v2.pdf


      arXiv link: http://arxiv.org/abs/1406.1833


      While this paper is not yet published in a journal or conference, we decided it's important to share it now because we think its implications are broad for the neuroevolution community and beyond. "Unsupervised feature learning" has become a very popular research area in recent years with the rise of "deep learning." In fact, the field of deep learning has a whole subfield dedicated to this topic, usually centered on the idea of pre-training the layers of a future classifier network, often through an autoencoder or a related technique. The autoencoder is often viewed as the key piece of unsupervised apparatus for learning features without the need for labeled data. It is often argued that pretraining a network in this way sets it up for increased success later on when training more conventionally (e.g. with backprop) on a classification problem.


      We realized recently that there is an appealing alternative to autoencoders that derives much of its power from recent progress in the field of neuroevolution. This alternative, called "divergent discriminative feature accumulation" (DDFA), uses novelty search to accumulate a continual stream of novel discriminative features. In other words, novelty search is the feature learning algorithm (and in the paper, HyperNEAT represents the features). This setup provides an entirely new perspective on feature learning that is quite different from autoencoders. For example, it can run indefinitely and keep accumulating new features, which means you don't need to know how many features are needed when you start the search. It also does not converge, because novelty search is divergent, so it just keeps on going. It further benefits from being non-objective, so the representations of the features you get out of it are likely more evolvable (i.e. better representations). On top of all that, it benefits from the geometric capabilities of HyperNEAT. I think it also offers an interesting new way to think about learning creatively through a divergent process. After all, divergent thought is often attributed to the most creative people. This algorithm literally accumulates new perspectives on the world divergently.
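
      To make the idea concrete, here is a minimal, heavily simplified Python sketch of that accumulation loop. It is not the paper's implementation: DDFA as described evolves CPPN-encoded HyperNEAT features under novelty search, whereas this sketch substitutes random linear-sigmoid features and Gaussian mutation for the evolutionary machinery, and every function name, parameter, and threshold in it is an illustrative assumption.

      # A minimal, simplified sketch in the spirit of DDFA, NOT the paper's
      # implementation: real DDFA evolves CPPN-encoded HyperNEAT features with
      # novelty search; here each "feature" is just a linear projection with a
      # sigmoid, and "evolution" is Gaussian mutation, purely for illustration.
      import numpy as np

      rng = np.random.default_rng(0)

      def feature_response(w, X):
          """Response of one candidate feature (weight vector w) over a sample X."""
          return 1.0 / (1.0 + np.exp(-X @ w))

      def novelty(response, archive_responses, k=10):
          """Mean distance to the k nearest archived feature responses."""
          if len(archive_responses) == 0:
              return np.inf
          dists = np.sort([np.linalg.norm(response - r) for r in archive_responses])
          return float(np.mean(dists[:k]))

      def accumulate_features(X, n_features=100, threshold=1.0):
          """Keep generating candidates; archive those whose behavior is novel."""
          dim = X.shape[1]
          archive_w, archive_r = [], []
          while len(archive_w) < n_features:
              if archive_w and rng.random() < 0.5:
                  # mutate an existing archived feature
                  parent = archive_w[rng.integers(len(archive_w))]
                  w = parent + 0.1 * rng.standard_normal(dim)
              else:
                  w = 0.1 * rng.standard_normal(dim)
              r = feature_response(w, X)
              if novelty(r, archive_r) > threshold:
                  archive_w.append(w)
                  archive_r.append(r)
          return np.array(archive_w)

      # Example: accumulate 100 "features" over a random stand-in for image data.
      X_sample = rng.standard_normal((200, 784))
      W = accumulate_features(X_sample, n_features=100, threshold=1.0)
      print(W.shape)  # (100, 784)

      The key property the sketch tries to show is that the archive only grows: there is no error being minimized, just a novelty criterion deciding which new features are worth keeping.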


      We tried running DDFA on MNIST by generating a bunch of features (3,000 in the larger case) and then training a classifier on top of them with simple backprop (similar to the procedures used with autoencoders in deep learning). With only a simple one-hidden-layer network and none of the usual tricks used in deep learning (i.e. no preprocessing, regularization, special activation functions, dropout, etc.), DDFA was able to achieve 1.25% error on MNIST. For perspective, Hinton's original deep network achieved 1.2% with a much deeper, 4-layer architecture. One big conclusion for this group is that neuroevolution can contribute meaningfully to deep learning and has a lot to offer, and that we can indeed achieve seriously competitive results. More broadly, the results raise interesting questions for the wider field of machine learning, like whether optimization (i.e. minimizing error) is really always the best way to think about learning, and whether sometimes divergence is more powerful than convergence.
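
      As a rough illustration of that classifier stage (again not the paper's exact setup, which trains a one-hidden-layer network with backprop on top of the accumulated features), the sketch below only trains a softmax readout on the fixed feature responses; the function name and hyperparameters are assumptions for illustration.

      # Rough sketch of the classifier stage, NOT the paper's exact setup: the
      # accumulated features W act as a fixed hidden layer, and only a softmax
      # output layer is trained on top of it with plain gradient descent.
      import numpy as np

      def train_softmax_on_features(X, y, W, n_classes=10, lr=0.1, epochs=50):
          """Train a linear softmax classifier on the fixed feature responses."""
          H = 1.0 / (1.0 + np.exp(-X @ W.T))            # fixed feature layer
          H = np.hstack([H, np.ones((H.shape[0], 1))])  # bias column
          V = np.zeros((H.shape[1], n_classes))         # trainable output weights
          Y = np.eye(n_classes)[y]                      # one-hot labels
          for _ in range(epochs):
              logits = H @ V
              logits -= logits.max(axis=1, keepdims=True)
              P = np.exp(logits)
              P /= P.sum(axis=1, keepdims=True)
              grad = H.T @ (P - Y) / len(y)             # cross-entropy gradient
              V -= lr * grad
          return V

      # Usage with the W accumulated earlier and (hypothetical) MNIST arrays:
      # V = train_softmax_on_features(X_train, y_train, W)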



      There are a lot of interesting future possibilities for DDFA and we are happy to hear your thoughts!


      Best,


      ken


