
How recurrent networks implement contextual processing in sentiment analysis


Niru Maheswaranathan: #tweeprint time for our new work out on arXiv! 📖 We've been trying to understand how recurrent neural networks (RNNs) work, by reverse engineering them using tools from dynamical systems analysis—with @SussilloDavid.

9 replies, 969 likes

David Sussillo 🏡💻🤞🤓: Folks, we have open sourced a Jupyter notebook that reproduces from scratch the model and analyses of the toy language example in our contextual analysis paper (w/ @niru_m). It demos techniques to understand, e.g., how 'not good' flips the valence of 'good'.

1 reply, 110 likes
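The mechanism the notebook demos can be caricatured in a few lines of plain Python. This is a hypothetical, hand-built sketch for intuition only, not the trained RNN or the notebook's actual code: a running sentiment accumulator in which modifier words ("not", "very") set a context gain that flips or amplifies the valence of the word that follows. The lexicon and gain values are invented for illustration.

```python
# Illustrative toy, NOT the paper's model: a hand-built accumulator in
# which modifier words set a context variable that scales or flips the
# valence of the next word, mimicking how "not good" flips "good".

VALENCE = {"good": 1.0, "bad": -1.0}    # assumed toy lexicon
MODIFIER = {"not": -1.0, "very": 1.5}   # assumed modifier gains

def score(sentence: str) -> float:
    """Accumulate sentiment over tokens, applying any pending modifier."""
    state = 0.0   # integrated sentiment (the readout)
    gain = 1.0    # context set by the most recent modifier word
    for token in sentence.lower().split():
        if token in MODIFIER:
            gain = MODIFIER[token]          # modifier updates the context
        elif token in VALENCE:
            state += gain * VALENCE[token]  # valence input, scaled by context
            gain = 1.0                      # context resets after one use
    return state
```

With these made-up values, `score("good")` gives 1.0, `score("not good")` gives -1.0, and `score("very good")` gives 1.5, matching the qualitative flip/amplify behavior the thread describes; the paper's contribution is showing how a trained RNN implements an analogous computation in its learned dynamics.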

Jascha: This is very cool work. Read this if you want to really, really understand how a neural network solves a specific problem -- like actual scientific understanding.

1 reply, 98 likes

David Sussillo 🏡💻🤞🤓: For those of you who have been following our RNN reverse-engineering research, check out our latest advance (w/ @niru_m). We figured out how to understand contextual input processing in a stream of inputs: how RNNs understand "not good" vs "very good" in sentiment analysis.

0 replies, 79 likes

Niru Maheswaranathan: We think of this work as building new tools for reverse engineering neural networks to really understand their learned mechanisms and how to perturb/amplify/isolate their effects. For more information, check out the paper! 😍

2 replies, 76 likes

Stanford NLP Group: Very interesting thread on how modifier words (e.g., adverbs) can be captured by recurrent neural networks (e.g., for sentiment analysis) 👇

0 replies, 42 likes

Sam Schoenholz: Niru and David's paper substantially improved my understanding of RNNs. Highly recommended!!

1 reply, 40 likes

Jonathan A. Michaels: Understanding how artificial neural networks work is an essential step towards understanding how biological neural networks work. Great work all!

1 reply, 15 likes

Scott Linderman: Latest in an important line of research on reverse engineering RNNs and understanding their low dimensional dynamics. Very nice, @niru_m and @SussilloDavid!

1 reply, 12 likes

A Neurocrackpot: Terrific Thread! Deep insights into RNNs, contextual processing, modifier subspaces, and the sure-to-come consilience to complex and dynamical systems approaches!

0 replies, 10 likes

Chethan Pandarinath: Tour de force in understanding how recurrent networks perform computations on their inputs. A qualitatively different level of "understanding" than many papers you'll read. Really impressive, @niru_m and @SussilloDavid ! One of the most exciting things I've read in a while.

1 reply, 8 likes

Laura Driscoll: new work from RNN cool kids @SussilloDavid and @niru_m 🥳

0 replies, 1 like

Paragon Science: Very interesting research, @niru_m! Thanks for sharing! cc: @pacoid @stevenstrogatz @duncanjwatts @barrydauber @sgourley @jasonkessler @jasonbaldridge

0 replies, 1 like

Irenes (many): This thread is a really fun read. We have mixed feelings about the existence of widely-deployed technology that is literally beyond human understanding (at least until reverse-engineering efforts get further along), but... still a fun read.

0 replies, 1 like


Found on Apr 20 2020
