Papers of the day

Investigating Transferability in Pretrained Language Models

Comments

Alex Tamkin: Which of BERT's layers really matter for finetuning? (Spoiler: it's not what probing tells you!) New work on understanding transfer learning in BERT: https://arxiv.org/abs/2004.14975 w/ Trisha Singh, Davide Giovanardi and Noah Goodman @stanfordnlp @StanfordAILab ⬇ 1/ https://t.co/OxnPKdorh4

4 replies, 443 likes


HotComputerScience: Most popular computer science paper of the day: "Investigating Transferability in Pretrained Language Models" https://hotcomputerscience.com/paper/investigating-transferability-in-pretrained-language-models https://twitter.com/AlexTamkin/status/1256474705122344965

0 replies, 8 likes


James Liao: #NLP #DeepLearning

0 replies, 4 likes


Pranav Rajpurkar: Cool idea to investigate the decrease in finetuning performance by partial-reinitialization of a BERT model!

0 replies, 4 likes


Alex Tamkin (@EMNLP!): At #EMNLP2020? Come check out our work! I'll be presenting some work on understanding transfer learning in BERT through lesion studies Time: Friday, 11-1 PT Talk: https://virtual.2020.emnlp.org/paper_WS-1.1165_F.html Paper: https://arxiv.org/abs/2004.14975 w/ Trisha Singh, Davide Giovanardi, Noah Goodman

1 reply, 3 likes


ML and Data Projects To Know: 📙 Investigating Transferability in Pretrained Language Models Authors: @AlexTamkin, Trisha Singh, Davide Giovanardi, Noah Goodman Paper: https://arxiv.org/abs/2004.14975 Featured in this week's newsletter: https://mailchi.mp/amplifypartners.com/ptk37?e=db9be68785 https://t.co/zblh6SPJSp

0 replies, 1 likes


arXiv CS-CL: Investigating Transferability in Pretrained Language Models http://arxiv.org/abs/2004.14975

0 replies, 1 likes


Content

Found on May 02 2020 at https://arxiv.org/pdf/2004.14975.pdf

PDF content of a computer science paper: Investigating Transferability in Pretrained Language Models