Re: [neat] New Paper on RAAHN, a New Kind of Neural Plasticity
- Jun 7, 2014
Hello Ken, Justin and Andrea,
Congrats on the new paper! Looks fascinating! Can't wait to read it.
Vassilis

On Jun 8, 2014 1:04 AM, "kstanley@... [neat]" <firstname.lastname@example.org> wrote:
I am pleased to announce with my coauthors Justin Pugh and Andrea Soltoggio our new ALIFE conference paper, "Real-time Hebbian Learning from Autoencoder Features for Control Tasks."
While the first thing you'll probably notice about this paper is that there is no evolution, the idea was still conceived very much with evolution in mind. It's based on the insight that one thing holding back neuroevolution research involving plasticity is that local plasticity rules, while helpful for learning correlations, do not tend to build up new representations (i.e., features) of the world over the agent's lifetime. My coauthors and I think that's a big roadblock to evolving much more interesting kinds of plastic brains. Basically, we'd like to see ANNs that build up new representations of the world at the same time as they learn to control themselves based on those new representations.
That's why we introduce RAAHN in this paper, which stands for Real-time Autoencoder-Augmented Hebbian Network. It's a pretty straightforward idea: an autoencoder running in real time picks up features, while neuromodulated Hebbian connections projecting from the autoencoder learn control policies from those developing features, guided by rewards and penalties (i.e., neuromodulation). The paper shows a proof of concept in which RAAHN works even without evolution, but I believe a big part of the future for RAAHN is to embed it within evolved networks (even to evolve its embedding), and there are numerous exciting possibilities for that.
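Since the post describes the mechanism only in words, here is a minimal toy sketch of the idea. All layer sizes, learning rates, activation choices, and the scalar reward signal are illustrative assumptions, not the paper's configuration: an autoencoder is updated online with one reconstruction-gradient step per input, and a Hebbian output layer reading from its features is updated by a correlation rule scaled by a neuromodulatory reward.

```python
import numpy as np

rng = np.random.default_rng(0)

class RAAHN:
    """Toy sketch of a Real-time Autoencoder-Augmented Hebbian Network.
    Sizes, learning rates, and tanh activations are assumptions for
    illustration, not the paper's exact setup."""

    def __init__(self, n_in, n_hidden, n_out, lr_ae=0.05, lr_hebb=0.01):
        self.W_enc = rng.normal(0.0, 0.1, (n_hidden, n_in))  # encoder weights
        self.W_dec = rng.normal(0.0, 0.1, (n_in, n_hidden))  # decoder weights
        self.W_heb = np.zeros((n_out, n_hidden))             # Hebbian control weights
        self.lr_ae, self.lr_hebb = lr_ae, lr_hebb

    def step(self, x, modulation):
        # 1) Encode the current sensor input into features.
        h = np.tanh(self.W_enc @ x)

        # 2) Online autoencoder update: one gradient step on squared
        #    reconstruction error, so features develop as the agent acts.
        x_hat = self.W_dec @ h
        err = x_hat - x
        self.W_dec -= self.lr_ae * np.outer(err, h)
        dh = (self.W_dec.T @ err) * (1.0 - h**2)  # backprop through tanh
        self.W_enc -= self.lr_ae * np.outer(dh, x)

        # 3) Control outputs read from the developing features.
        y = np.tanh(self.W_heb @ h)

        # 4) Neuromodulated Hebbian update: the pre/post correlation is
        #    scaled by the reward/penalty signal (positive or negative).
        self.W_heb += self.lr_hebb * modulation * np.outer(y, h)
        return y

# One simulated timestep: a sensor reading plus a reward signal (+1 here).
net = RAAHN(n_in=4, n_hidden=6, n_out=2)
action = net.step(np.array([0.5, -0.3, 0.8, 0.1]), modulation=1.0)
```

The key design point the sketch tries to capture is that both updates happen in the same real-time loop: the features are not pretrained, so the Hebbian layer learns its policy on top of representations that are themselves still forming.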
It's also interesting for its potential to unify ideas from deep learning (where autoencoders made their splash) with neuroevolution. After all, the latest unsupervised feature learning technology from deep learning (like current autoencoders) can easily be swapped into RAAHN in the future and then embedded in ANNs through neuroevolution. I think it also suggests that deep learning has missed some of the more interesting implications of its own work by focusing so much on classification tasks, which tend to obscure how cool it is to be able to generate unsupervised reinterpretations of the world on the fly as you go about your business, which, if you think about it, is a big part of life.