Papers of the day

Neuroevolution of Self-Interpretable Agents

Comments

hardmaru: Neuroevolution of Self-Interpretable Agents. Agents with a self-attention “bottleneck” can not only solve these tasks from pixel inputs with only 4,000 parameters, but they are also better at generalization! article: https://attentionagent.github.io/ pdf: https://arxiv.org/abs/2003.08165

18 replies, 1185 likes
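The “bottleneck” described above can be sketched in a few lines: score image patches against each other with a tiny single-head attention matrix, then keep only the top-k patch locations as the agent's observation. This is an illustrative NumPy sketch, not the paper's exact architecture; the patch size, key dimension `d`, and `top_k` are placeholder values, and the projection matrices `Wq`/`Wk` would be evolved (e.g. with an evolution strategy) rather than drawn randomly as here.

```python
import numpy as np

def attention_patch_selection(image, patch_size=7, d=4, top_k=10, seed=0):
    """Toy self-attention bottleneck: rank image patches and keep
    only the top-k patch centers. Shapes and values are illustrative."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]

    # Slice the frame into flattened, non-overlapping patches.
    patches, centers = [], []
    for i in range(0, h - patch_size + 1, patch_size):
        for j in range(0, w - patch_size + 1, patch_size):
            patches.append(image[i:i + patch_size, j:j + patch_size].ravel())
            centers.append((i + patch_size // 2, j + patch_size // 2))
    X = np.stack(patches)                       # (n_patches, patch_dim)

    # Tiny query/key projections -- the only "learned" parameters,
    # which is what keeps the whole agent so small.
    Wq = rng.normal(size=(X.shape[1], d))
    Wk = rng.normal(size=(X.shape[1], d))
    A = (X @ Wq) @ (X @ Wk).T / np.sqrt(d)      # attention logits
    A = np.exp(A - A.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)           # row-wise softmax

    # Each column sum is that patch's total attention received;
    # keep only the most-attended patch centers.
    importance = A.sum(axis=0)
    keep = np.argsort(importance)[::-1][:top_k]
    return [centers[i] for i in keep]           # (row, col) of kept patches
```

Only the k selected coordinates are passed downstream, so everything else in the frame is invisible to the controller, which is the property the comments below connect to generalization.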


Kyle McDonald: this is a super interesting approach to generalization for reinforcement learning. in order to minimize parameters and distractions, figure out what to focus on.

1 replies, 82 likes


Ryota Kanai: Finally had a chance to read this exciting work. This reminded me of blindsight, which I interpret as saliency without conscious contents. The result that connects not seeing irrelevant things to generalisation also makes an interesting prediction for human psychophysics.

1 replies, 46 likes


Shane Gu 顾世翔: Bottlenecks are all you need (for generalization). Back then I really wanted to make hard attention work through MuProp and Gumbel-Softmax, **properly end-to-end with backprop** :) #betrayedbybackprop

0 replies, 37 likes


roadrunner01: Neuroevolution of Self-Interpretable Agents pdf: https://arxiv.org/pdf/2003.08165.pdf abs: https://arxiv.org/abs/2003.08165

0 replies, 23 likes


NexUS 🇺🇸 Software Developers ⭐️⭐️⭐️⭐️⭐️: #MachineLearning 👇 Neuroevolution of Self-Interpretable Agents. Demonstrating that self-attention is a powerful module for creating RL agents capable of solving challenging vision-based tasks. PDF: https://arxiv.org/pdf/2003.08165.pdf #ArtificialIntelligence #AI

0 replies, 4 likes


arXiv CS-CV: Neuroevolution of Self-Interpretable Agents http://arxiv.org/abs/2003.08165

0 replies, 3 likes


Sam Greydanus: Indirect encoding ftw! Love the reference to HyperNEAT too...such an underrated idea

0 replies, 2 likes


Cem F Dagdelen: Constraint = opportunity (wink wink)

0 replies, 2 likes


Mitchell Gordon: As always, excellent work from David Ha. Self-interpretable agents are cool. I'd argue that the parameter efficiency and generalization improvements are the coolest part of the post.

1 replies, 2 likes


(((JReuben1))): Neuroevolution of Self-Interpretable Agents https://arxiv.org/abs/2003.08165

0 replies, 1 like




Content

Found on Mar 19 2020 at https://arxiv.org/pdf/2003.08165.pdf

PDF content of a computer science paper: Neuroevolution of Self-Interpretable Agents