# How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks

Keyulu Xu: How do neural networks extrapolate, i.e., predict outside the training distribution? We study MLPs and Graph Neural Networks trained by gradient descent, and show how a good representation and architecture can help extrapolation. https://arxiv.org/abs/2009.11848 https://t.co/BBYJdQxiS6

2 replies, 288 likes

Chaitanya Joshi: If you are interested in out-of-distribution generalization, GNNs, graph algorithms (my favorite things!), check out this super exciting paper from @KeyuluXu et al.: "How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks" 🔥 PDF: https://arxiv.org/abs/2009.11848 https://t.co/vP382m4Ke4

1 reply, 168 likes

Alfredo Canziani: This paper is really visually pleasing. 😍 Even though I'd have used an upright $\mathrm{d}$ in those ∬ 😬

0 replies, 14 likes

Joanna J Bryson: How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks https://arxiv.org/abs/2009.11848 I can't believe I never ran into Stefanie Jegelka before; I guess she's so productive because she doesn't tweet!

1 reply, 9 likes

Daisuke Okanohara: MLPs cannot extrapolate well because their predictions on out-of-distribution inputs become linear along directions from the origin. GNNs can extrapolate well if the target problem can be solved by dynamic programming and the required non-linearity is encoded in the architecture or input features. https://arxiv.org/abs/2009.11848

0 replies, 9 likes
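The linearity Okanohara describes can be seen directly in a ReLU MLP: far enough from the origin along a fixed direction, every ReLU's on/off pattern stops changing, so the network reduces to an affine function of the distance travelled. A minimal sketch (my own illustration with arbitrary random weights, not code from the paper):

```python
import random

random.seed(0)

# A tiny randomly initialized 2-layer ReLU MLP, R^2 -> R.
# Weights are arbitrary; this only illustrates the claimed behavior.
D, H = 2, 16
W1 = [[random.gauss(0, 1) for _ in range(D)] for _ in range(H)]
b1 = [random.gauss(0, 1) for _ in range(H)]
w2 = [random.gauss(0, 1) for _ in range(H)]

def relu(x):
    return x if x > 0 else 0.0

def mlp(x):
    hidden = [relu(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(w2, hidden))

# Walk far out along a fixed direction v: each unit's preactivation is
# t*(w.v) + b, so for large t its sign (the ReLU on/off pattern) is fixed
# and the output becomes linear in t.
v = (0.6, -0.8)
f = [mlp((t * v[0], t * v[1])) for t in (1e4, 2e4, 3e4)]
slope1, slope2 = f[1] - f[0], f[2] - f[1]
print(abs(slope1 - slope2))  # ~0: equal increments, i.e. linear along the ray
```

Equal increments between equally spaced points on the ray are exactly what "linear along directions from the origin" means; a quadratic target, for instance, would show growing increments, which is why an MLP trained on one cannot extrapolate it.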

arxiv: How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks. http://arxiv.org/abs/2009.11848 https://t.co/SIlQ83GZs0

0 replies, 7 likes

## Content

Found on Oct 14 2020 at https://arxiv.org/pdf/2009.11848.pdf