Papers of the day

How recurrent networks implement contextual processing in sentiment analysis

Comments

Niru Maheswaranathan: #tweeprint time for our new work out on arXiv!📖We've been trying to understand how recurrent neural networks (RNNs) work, by reverse engineering them using tools from dynamical systems analysis—with @SussilloDavid. https://arxiv.org/abs/2004.08013 https://t.co/dLmzPi8tZA

9 replies, 952 likes
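
The "tools from dynamical systems analysis" mentioned in the thread are, in this line of work, built around finding approximate fixed points of the trained network's state-update function and linearizing around them. Below is a minimal sketch of such a fixed-point search, not the paper's code: the vanilla tanh RNN, the random weights `W` and `b`, and the clamped zero input are placeholder assumptions standing in for a trained network.

```python
# Minimal fixed-point-search sketch (not the paper's code).
# Assumes a vanilla tanh RNN h_{t+1} = tanh(W h_t + b) with random
# placeholder weights; a real analysis would use the trained update function.
import numpy as np

rng = np.random.default_rng(0)
N = 64                                    # hidden-state size (assumed)
W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))  # placeholder recurrent weights
b = np.zeros(N)                           # placeholder bias; input clamped to zero

def step(h):
    """One RNN update with the input held fixed."""
    return np.tanh(W @ h + b)

def find_fixed_point(h0, lr=0.05, steps=5000):
    """Minimize the 'speed' q(h) = 0.5 * ||step(h) - h||^2 by gradient descent."""
    h = h0.copy()
    for _ in range(steps):
        f = step(h)
        r = f - h                         # residual; zero exactly at a fixed point
        # dq/dh = (J_F - I)^T r, with J_F = diag(1 - f**2) @ W for tanh units
        grad = W.T @ ((1.0 - f ** 2) * r) - r
        h -= lr * grad
    return h, 0.5 * np.sum((step(h) - h) ** 2)

h_star, speed = find_fixed_point(rng.normal(size=N))
print(f"final speed q(h*) = {speed:.2e}")  # small value => approximate fixed point
```

In practice this kind of search is repeated from many initial states sampled along the trained network's hidden-state trajectories, and the Jacobian at each approximate fixed point is eigendecomposed to read off the local linear dynamics.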


Jascha: This is very cool work. Read this if you want to really, really understand how a neural network solves a specific problem -- like actual scientific understanding.

1 reply, 98 likes


Niru Maheswaranathan: We think of this work as building new tools for reverse engineering neural networks to really understand their learned mechanisms and how to perturb/amplify/isolate their effects. For more information, check out the paper! 😍 https://arxiv.org/abs/2004.08013

2 replies, 76 likes


David Sussillo 🏡💻🤞🤓: For those of you who have been following our RNN reverse-engineering research, check out our latest advance (w/ @niru_m). We figured out how to understand contextual input processing in a stream of inputs: how RNNs understand "not good" vs "very good" in sentiment analysis.

0 replies, 57 likes
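
The "not good" vs "very good" contrast described above is the kind of effect one can probe directly: run the same head word under different modifiers through a trained sentiment RNN and compare the readout after each token. The sketch below only illustrates that probing pattern; the toy tanh RNN, the random embeddings, and the linear readout `w_out` are placeholders, not the paper's trained model.

```python
# Probing sketch for modifier words (placeholder model, not the paper's).
import numpy as np

rng = np.random.default_rng(1)
N, D = 64, 16                                     # hidden / embedding sizes (assumed)
vocab = {w: i for i, w in enumerate(["good", "very", "not"])}
E = rng.normal(size=(len(vocab), D))              # placeholder word embeddings
W_h = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))
W_x = rng.normal(scale=1.0 / np.sqrt(D), size=(N, D))
w_out = rng.normal(scale=1.0 / np.sqrt(N), size=N)  # linear sentiment readout

def run(phrase):
    """Return the sentiment readout after each token of the phrase."""
    h, outs = np.zeros(N), []
    for word in phrase.split():
        h = np.tanh(W_h @ h + W_x @ E[vocab[word]])
        outs.append(float(w_out @ h))
    return outs

# With a trained network, "very" should amplify and "not" should flip the
# readout reached on bare "good"; with these random weights the numbers are
# meaningless, only the probing procedure is illustrated.
for phrase in ["good", "very good", "not good"]:
    print(f"{phrase!r:>12}: {run(phrase)}")
```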


Stanford NLP Group: Very interesting thread on how modifier words (e.g., adverbs) can be captured by recurrent neural networks (e.g., for sentiment analysis) 👇

0 replies, 42 likes


Sam Schoenholz: Niru and David's paper substantially improved my understanding of RNNs. Highly recommended!!

1 reply, 40 likes


Jonathan A. Michaels: Understanding how artificial neural networks work is an essential step towards understanding how biological neural networks work. Great work all!

1 reply, 15 likes


Scott Linderman: Latest in an important line of research on reverse engineering RNNs and understanding their low dimensional dynamics. Very nice, @niru_m and @SussilloDavid!

1 reply, 12 likes


A Neurocrackpot: Terrific Thread! Deep insights into RNNs, contextual processing, modifier subspaces, and the sure-to-come consilience to complex and dynamical systems approaches!

0 replies, 10 likes


Chethan Pandarinath: Tour de force in understanding how recurrent networks perform computations on their inputs. A qualitatively different level of "understanding" than many papers you'll read. Really impressive, @niru_m and @SussilloDavid ! One of the most exciting things I've read in a while.

1 reply, 8 likes


Irenes (many): This thread is a really fun read. We have mixed feelings about the existence of widely-deployed technology that is literally beyond human understanding (at least until reverse-engineering efforts get further along), but... still a fun read.

0 replies, 1 like


Laura Driscoll: new work from RNN cool kids @SussilloDavid and @niru_m 🥳

0 replies, 1 like


Paragon Science: Very interesting research, @niru_m! Thanks for sharing! cc: @pacoid @stevenstrogatz @duncanjwatts @barrydauber @sgourley @jasonkessler @jasonbaldridge

0 replies, 1 like


Content

Found on Apr 20, 2020 at https://arxiv.org/pdf/2004.08013.pdf

PDF content of a computer science paper: How recurrent networks implement contextual processing in sentiment analysis