
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

Comments

Jimmy Lin: Two years ago today, BERT was shared with the #NLProc community. What amazing progress we've seen since! https://arxiv.org/abs/1810.04805

4 replies, 96 likes


Bill Slawski ⚓: No, it's not LSI that Google is using. It is BERT! https://arxiv.org/abs/1810.04805

7 replies, 87 likes


Lavanya: Join our reading group tomorrow, Saturday, 10am PST / 1pm EST / 10:30pm IST. We’ll discuss the BERT and Q*BERT papers. ‣ BERT https://arxiv.org/abs/1810.04805 ‣ Q*BERT https://arxiv.org/pdf/1909.05840.pdf 🧑🏼‍💻 Register here: https://us02web.zoom.us/webinar/register/WN_sPXYZcBmS8-tKIGnl6eRMA #machinelearning #deeplearning #100daysofmlcode

1 reply, 16 likes


Jelle Zuidema: [13] J. Devlin, M. Chang, K. Lee and K. Toutanova: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. https://arxiv.org/abs/1810.04805 @toutanova

1 reply, 15 likes


Lavanya: Our reading group tomorrow will discuss the BERT and Q*BERT papers together! BERT – https://arxiv.org/abs/1810.04805 Q*BERT – https://arxiv.org/pdf/1909.05840.pdf 🗓️ Saturday, 10am PST / 1pm EST / 10:30pm IST 📍 Join us here: https://us02web.zoom.us/webinar/register/WN_sPXYZcBmS8-tKIGnl6eRMA #machinelearning #deeplearning #100daysofmlcode

0 replies, 14 likes


Lavanya: If you've been wanting to read the #BERT paper but haven't had the chance, join our reading group in an hour! BERT – https://arxiv.org/abs/1810.04805 🗓️ Aug 1, 10am PST / 1pm EST / 10:30pm IST 📍 Join us here: https://us02web.zoom.us/webinar/register/WN_sPXYZcBmS8-tKIGnl6eRMA #machinelearning #deeplearning #100daysofmlcode

0 replies, 11 likes


Spiros Denaxas: I really liked this paper "A Primer in BERTology: What we know about how BERT works" by @annargrs & coauthors; it gives a really useful overview of BERT that is accessible to non-NLP researchers like myself :) Paper: https://arxiv.org/pdf/2002.12327.pdf original BERT paper: https://arxiv.org/abs/1810.04805 https://t.co/e96Sbwp1B4

0 replies, 10 likes


wordmetrics: This is an excellent explanation of BERT. https://towardsdatascience.com/bert-explained-state-of-the-art-language-model-for-nlp-f8b21a9b6270 Original Google AI paper here: https://arxiv.org/pdf/1810.04805.pdf #nlproc #ai #google #textclassification #seo #machinelearning #ml https://t.co/S9MPMMLQPU

0 replies, 9 likes


Vikas Bahirwani: Get up to date on #NLP #DeepLearning in 3 easy steps? It is fun! 1) Read the Transformer paper by @ashVaswani (https://arxiv.org/pdf/1706.03762.pdf) 2) Quickly review CoVe, ELMo, GPT and GPT-2 via https://lilianweng.github.io/lil-log/2019/01/31/generalized-language-models.html#zero-shot-transfer. Thanx @lilianweng. 3) Read BERT https://arxiv.org/pdf/1810.04805.pdf Easy Peasy.

0 replies, 9 likes


Dawn Anderson: @Suzzicks @BritneyMuller @AlexisKSanders My dog was here first. My Bert is 6 years old. https://twitter.com/dawnieando/status/1050652542755958784

2 replies, 7 likes


HubBucket | We're Helping Save Lives: ⚕️#HealthIT @Microsoft makes it easier to build Bidirectional Encoder Representations from Transformers - #BERT for Language Understanding at large scale Article: https://azure.microsoft.com/en-us/blog/microsoft-makes-it-easier-to-build-popular-language-representation-model-bert-at-large-scale/ Paper: https://arxiv.org/pdf/1810.04805.pdf @Azure @HubBucket @HubDataScience #MachineLearning #NLP https://t.co/tuD08uu4CN

0 replies, 6 likes


School of AI Algiers: It's time to announce next Friday's article! Here's the link to the article: https://arxiv.org/pdf/1810.04805.pdf Go ahead and start learning about BERT so that we can all discuss it together next Friday at 7 pm. Check out the recordings of our previous sessions on our YouTube channel! https://t.co/SQClSIKlIA

0 replies, 6 likes


HubBucket | Technology for Healthcare and Medicine: #BioBert - Bidirectional Encoder Representations from Transformers for #Biomedical #Text #Mining Deep Bidirectional Transformers for Language Understanding #MachineLearning #DeepLearning @ARXIV_ORG 🖥️https://arxiv.org/pdf/1810.04805.pdf @HubBucket @HubAnalytics2 @HubAnalysis1 @HubXpress1 https://t.co/DbFhh6ZqMi

0 replies, 5 likes


HubBase | HubBucket Biomedical Data Science: #BioBert - Bidirectional Encoder Representations from Transformers for #Biomedical #Text #Mining Deep Bidirectional Transformers for Language Understanding #MachineLearning #DeepLearning @ARXIV_ORG 🖥️https://arxiv.org/pdf/1810.04805.pdf @HubBucket @HubAnalytics2 @HubAnalysis1 @HubXpress1 https://t.co/1O30Gm7qpI

0 replies, 4 likes


小猫遊りょう(たかにゃし・りょう): BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding https://arxiv.org/abs/1810.04805

1 reply, 1 like


John Locke: If you want to read a technical description of how BERT works, here you go: https://arxiv.org/pdf/1810.04805.pdf Be forewarned, there's a lot of engineering-speak here.

0 replies, 1 like


Ivan Makohon: Google's new Bidirectional Encoder Representations from Transformers (BERT) helps understand the nuances and context of words in searches. Paper: https://arxiv.org/pdf/1810.04805.pdf #cs800s20 #TIL

0 replies, 1 like


VonVictor V. Rosenchild | HubBucket Founder/CEO: #BioBert - Bidirectional Encoder Representations from Transformers for #Biomedical #Text #Mining Deep Bidirectional Transformers for Language Understanding #MachineLearning #DeepLearning @ARXIV_ORG 🖥️https://arxiv.org/pdf/1810.04805.pdf @HubBucket @HubAnalytics2 @HubAnalysis1 @HubXpress1 https://t.co/pGYqCfWr4F

0 replies, 1 like


Valentin Pletzer: @MordyOberstein I had to look it up. This doesn't say Google is using BERT for named entity recognition, but in the original paper they do compare BERT with other approaches on NER https://arxiv.org/pdf/1810.04805.pdf (it's the paper referenced from here: https://ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html)

1 reply, 1 like


Weights & Biases: Join our reading group tomorrow, Saturday, 10am PST / 1pm EST / 10:30pm IST. We’ll discuss the BERT and Q*BERT papers. ‣ BERT https://arxiv.org/abs/1810.04805 ‣ Q*BERT https://arxiv.org/pdf/1909.05840.pdf 🧑🏼‍💻 Register here: https://us02web.zoom.us/webinar/register/WN_sPXYZcBmS8-tKIGnl6eRMA #machinelearning #deeplearning #100daysofmlcode

0 replies, 1 like


Asif Razzaq: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (AI Paper Summary) Video: https://www.youtube.com/watch?v=M4rq1Ce6wyM&t=53s Paper: https://arxiv.org/pdf/1810.04805.pdf #MachineLearning #ArtificialIntelligence #DeepLearning

0 replies, 1 like


Saroosh Khan: The words "pre-train, bidirectional representations, left and right context" are worth considering ☺️

0 replies, 1 like


𝄢 Jason Barnard: That last sentence almost slips by unnoticed: "outperforming human performance by 2.0%." https://t.co/srMLiCQQvr

0 replies, 1 like


Content

Found on Oct 11 2020 at https://arxiv.org/pdf/1810.04805.pdf

PDF content of a computer science paper: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding