
Meta-Learning through Hebbian Plasticity in Random Networks

Comments

Sebastian Risi: @enasmel and I are excited to announce our paper "Meta-Learning through Hebbian Plasticity in Random Networks" https://arxiv.org/abs/2007.02686 Instead of optimizing the neural network's weights directly, we only search for synapse-specific Hebbian learning rules. Thread 👇 https://t.co/zDiZEUuKLL

12 replies, 378 likes
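A minimal sketch of the synapse-specific update the tweet describes, following the generalized Hebbian ABCD rule from the paper; the NumPy formulation and array shapes here are illustrative assumptions, not the repository's actual code:

```python
import numpy as np

def hebbian_update(W, pre, post, A, B, C, D, eta):
    """One local Hebbian step. Each synapse w_ij carries its own evolved
    coefficients: W, A, B, C, D, eta are (n_post, n_pre) arrays,
    pre is (n_pre,), post is (n_post,)."""
    corr = np.outer(post, pre)  # pre/post activity correlation per synapse
    return W + eta * (A * corr + B * pre[None, :] + C * post[:, None] + D)
```

Because only these per-synapse coefficients are optimized, the same set of learning rules must organise useful weights from any random initialisation.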


Sebastian Risi: Congratulations to @enasmel for his (and my!) first accepted #NeurIPS paper! 🎉 We added an experiment showing that Hebbian networks can sometimes generalize to robot morphologies not seen during training. PDF: https://arxiv.org/pdf/2007.02686.pdf Code: https://github.com/enajx/HebbianMetaLearning https://t.co/KOqpEHK6iD

11 replies, 232 likes


Kenneth Stanley: Intriguing demonstration of the potential of Hebbian plasticity in large networks at #NeurIPS. Congrats @risi1979 and @enasmel!

1 reply, 43 likes


Timothy O'Hear: A thought-provoking paper that shows back-propagation isn't the only game in town. It's very well written and will pull you down a rabbit hole where neuroscience and deep learning converge in unexpected ways.

0 replies, 32 likes


Elias Najarro: Excited to finally share our work on meta-learning Hebbian networks. Instead of being static, the weights of the Hebbian network evolve dynamically during the agent's lifetime, allowing it to keep learning through weight self-organisation.

0 replies, 16 likes
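A sketch of the lifetime dynamic Najarro describes, under assumed layer sizes and a placeholder observation stream: the weights start random and are rewritten by the local rule at every timestep, so behaviour emerges from self-organisation rather than from fixed trained weights:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 8, 2  # illustrative sizes

# Weights start random each lifetime; only the Hebbian coefficients are meta-learned.
W = rng.uniform(-0.1, 0.1, (n_out, n_in))
A, B, C, D, eta = (0.01 * rng.standard_normal((n_out, n_in)) for _ in range(5))

obs = rng.standard_normal(n_in)  # stand-in for an environment observation
for t in range(100):             # one "lifetime" of the agent
    act = np.tanh(W @ obs)       # forward pass through the plastic layer
    # Local update per synapse: correlation term plus pre, post, and bias terms.
    W += eta * (A * np.outer(act, obs) + B * obs[None, :] + C * act[:, None] + D)
    obs = rng.standard_normal(n_in)  # next observation (placeholder)
```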


Elias Najarro: Our work on Hebbian random networks was accepted at NeurIPS 🤖 Code to train them on any Gym environment 👇

1 reply, 13 likes
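For the outer loop, a hedged sketch of how meta-learning the coefficients on a Gym environment might look, assuming a simple evolution-strategy search over the flattened per-synapse coefficients; the population size, noise scale, learning rate, single-layer policy, and classic gym API are illustrative choices, with the linked repository as the authoritative implementation:

```python
import numpy as np
import gym

def lifetime_fitness(theta, env, n_in, n_out, steps=500):
    """Roll out one agent: theta packs the per-synapse A, B, C, D, eta."""
    A, B, C, D, eta = theta.reshape(5, n_out, n_in)
    W = np.random.uniform(-0.1, 0.1, (n_out, n_in))  # fresh random weights each life
    obs, total = env.reset(), 0.0
    for _ in range(steps):
        act = np.tanh(W @ obs)
        # Weights self-organise during the episode via the Hebbian rule.
        W += eta * (A * np.outer(act, obs) + B * obs[None, :] + C * act[:, None] + D)
        obs, r, done, _ = env.step(int(act.argmax()))  # classic gym step API
        total += r
        if done:
            break
    return total

# Simple evolution-strategy search over the coefficients (illustrative hyperparameters).
env = gym.make("CartPole-v1")
n_in, n_out = env.observation_space.shape[0], env.action_space.n
theta = np.zeros(5 * n_out * n_in)
for gen in range(100):
    noise = np.random.randn(64, theta.size)
    fit = np.array([lifetime_fitness(theta + 0.1 * eps, env, n_in, n_out) for eps in noise])
    theta += 0.01 / (64 * 0.1) * noise.T @ (fit - fit.mean())
```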


Noah Guzmán: This is big

0 replies, 6 likes


Adam Safron: Does this relate to work by DeepMind (described in the Matt Botvinick interview with Lex Fridman, linked below) in which meta-learning spontaneously happens in RNNs, given sufficient shared task structure across training epochs? https://www.youtube.com/watch?v=bbr4bozvNp4 @lexfridman @DeepMind

2 replies, 3 likes


Maxwell Ramstead: Wow

0 replies, 2 likes


Marcel Fröhlich: Wow! Getting closer.

0 replies, 2 likes


Sebastian Risi: @EnricGuinovart @WiringTheBrain That's really cool! @WiringTheBrain, we recently showed that random weights combined with Hebbian learning can produce high-performing RL agents, and we will now focus on evolving developmental rules. https://twitter.com/risi1979/status/1280544779630186499

0 replies, 1 like


Content

Found on Jul 07 2020 at https://arxiv.org/pdf/2007.02686.pdf

PDF content of a computer science paper: Meta-Learning through Hebbian Plasticity in Random Networks