BERTology Meets Biology: Interpreting Attention in Protein Language Models

Comments

Richard Socher: Fascinating insight: Trained solely on unsupervised language modeling, the Transformer's attention mechanism recovers high-level structural (folding) and functional properties of proteins! Blog: https://blog.einstein.ai/provis/ Paper: https://arxiv.org/abs/2006.15222 Code: https://github.com/salesforce/provis https://t.co/yOMpV1WGVo

10 replies, 846 likes


Jesse Vig: Excited to announce "BERTology Meets Biology: Interpreting Attention in Protein Language Models", including a new tool for visualizing attention in 3D protein structure (1/5): Paper: https://arxiv.org/abs/2006.15222 Blog: https://blog.einstein.ai/provis Code: https://github.com/salesforce/provis https://t.co/70sQn52g8u

7 replies, 277 likes
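

The tool announced above renders attention onto 3D structure; the raw attention tensors it works from can be pulled out of any Transformer protein model. A minimal sketch, using the Hugging Face checkpoint Rostlab/prot_bert as an illustrative stand-in (the checkpoint name and toy sequence are assumptions here, not taken from the paper's code):

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)
model = BertModel.from_pretrained("Rostlab/prot_bert", output_attentions=True)
model.eval()

# ProtBert-style tokenizers expect amino acids separated by spaces.
sequence = "M K T A Y I A K Q R Q I S F V K S H F S R Q L E E R"
inputs = tokenizer(sequence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer, each of shape
# (batch, num_heads, seq_len, seq_len); stack into (layers, heads, len, len).
attn = torch.stack(outputs.attentions).squeeze(1)
print(attn.shape)

From here, attn[layer, head] is the per-head map that a visualization tool like the one linked above can project onto the folded structure.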


Stanford NLP Group: Self-supervised language modeling also allows you to recover higher-level properties of proteins

0 replies, 44 likes


Ali Madani: [my take] In the next few years, *large-scale* language modeling will enable the most impactful leaps in protein science/engineering. We take an interpretability lens to understand how attention operates via masked language modeling (MLM). Check out and use the visualization tool for your protein language models

1 reply, 40 likes


roadrunner01: BERTology Meets Biology: Interpreting Attention in Protein Language Models pdf: https://arxiv.org/pdf/2006.15222.pdf abs: https://arxiv.org/abs/2006.15222 github: https://github.com/salesforce/provis https://t.co/p0TNXS5ork

0 replies, 29 likes


Yannic Kilcher: This sounds like a headline from Babylon Bee 😁

1 reply, 23 likes


Nazneen Rajani: New preprint on the interpretability of protein language models. The most fascinating result for me was how well attention learns the 3D structure of proteins! We found that attention captures not just structure but also protein functions such as binding sites.

1 reply, 19 likes
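

The structure result mentioned above is quantified in the paper by measuring how much of a head's attention falls on residue pairs that are in contact in the folded protein. A minimal sketch of that style of analysis; the function name and toy inputs are hypothetical stand-ins for real model attention and a real structure-derived contact map:

import numpy as np

def attention_to_contacts(attn, contact_map, threshold=0.0):
    """Share of (thresholded) attention that lands on contacting residue pairs.
    attn: (L, L) attention weights for one head; contact_map: (L, L) binary,
    1 where residues i and j are in contact in the folded structure."""
    mask = attn > threshold
    total = attn[mask].sum()
    if total == 0:
        return 0.0
    return float(attn[mask & (contact_map == 1)].sum() / total)

# Toy inputs: random attention rows normalized like a softmax, and a random
# binary "contact map" standing in for one derived from a real structure.
rng = np.random.default_rng(0)
L = 30
attn = rng.random((L, L))
attn /= attn.sum(axis=-1, keepdims=True)
contacts = (rng.random((L, L)) < 0.1).astype(int)
print(round(attention_to_contacts(attn, contacts), 3))

The threshold simply filters out near-zero weights, so the ratio reflects where a head's substantial attention mass goes rather than diffuse background attention.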


Lav Varshney: I really enjoyed participating in this work @SFResearch on interpreting attention in protein language models to support scientific discovery: I think we are now in a position to actually make some new discoveries!

0 replies, 10 likes


arXiv CS-CL: BERTology Meets Biology: Interpreting Attention in Protein Language Models http://arxiv.org/abs/2006.15222

0 replies, 9 likes


Julian Klug: Amazing work! BERT, a language model applied to linear amino-acid sequences, seems to focus on higher-level links found only in the tertiary structure. 🍩 https://arxiv.org/abs/2006.15222 @jesse_vig @thisismadani @lrvarshney @caimingxiong @richardsocher @nazneenrajani https://t.co/nwlLpqnOhJ

1 reply, 9 likes


Connor Shorten: BERTology meets Biology 🧬 "Proteins as a sequence of amino acids" "A great deal of NLP research now focuses on interpreting the Transformer [...] we adapt and extend this line of interpretability research to protein sequences." https://arxiv.org/pdf/2006.15222.pdf

1 reply, 7 likes


Philippe Schwaller: Impressive to see what language models capture in their attention weights 🧬

1 reply, 7 likes


Jesse Vig: Enjoyed presenting our work on interpreting protein language models to @AISC_TO: https://www.youtube.com/watch?v=jlEKIPA0Yvk w/ @thisismadani @lrvarshney @nazneenrajani @CaimingXiong @SFResearch Preprint: https://arxiv.org/abs/2006.15222

0 replies, 5 likes


arxiv: BERTology Meets Biology: Interpreting Attention in Protein Language Models. http://arxiv.org/abs/2006.15222 https://t.co/lz1slA4QkH

0 replies, 5 likes


Mihail Eric: I guess Transformers decided NLP was getting too simple, so they're trying their hand at solving biology.

0 replies, 4 likes


Edward Dixon: A given trained model performs well only on a given (narrow) problem, but #DeepLearning topologies transfer incredibly well between domains. Interesting paper on the intersection of #MachineLearning and #Biology from @salesforce (!) incl. @RichardSocher

0 replies, 3 likes


Thomas Dorfer: Nice paper on #NLP interpretability by @SFResearch. Using attention, they show how a transformer protein model captures sequence characteristics such as folding structures and binding sites. https://arxiv.org/abs/2006.15222 https://t.co/117MDmY8mT

0 replies, 3 likes


HotComputerScience: Most popular computer science paper of the day: "BERTology Meets Biology: Interpreting Attention in Protein Language Models" https://hotcomputerscience.com/paper/bertology-meets-biology-interpreting-attention-in-protein-language-models https://twitter.com/RichardSocher/status/1278058096481333253

0 replies, 2 likes


ML Papers Explained | A.I. Socratic Circles: great to have you @jesse_vig

0 replies, 2 likes


Salesforce Research: Learn how we're using language models to fuel scientific discovery in biology with our new work, BERTology meets Biology. Keep reading below ⬇️ #AI #ML #Biology

0 replies, 1 like


Marek Bardoński: A medical use of BERT. The authors use attention to analyze the inner workings of the Transformer and explore how the model discerns the structural and functional properties of proteins. https://arxiv.org/pdf/2006.15222.pdf

1 reply, 1 like


Kalai Ramea: This is a super cool project!

0 replies, 1 like


Le Jaime: @Xiomena4 👀

1 reply, 1 like


arXiv in review: #NeurIPS2020 BERTology Meets Biology: Interpreting Attention in Protein Language Models. (arXiv:2006.15222v1 [cs.CL]) http://arxiv.org/abs/2006.15222

0 replies, 1 like


Content

Found on Jun 30, 2020 at https://arxiv.org/pdf/2006.15222.pdf

PDF content of a computer science paper: BERTology Meets Biology: Interpreting Attention in Protein Language Models