BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

Comments

Oct 12 2018 Thang Luong

A new era of NLP began just a few days ago: large pre-trained models (Transformer, 24 layers, 1024 dim, 16 heads) + massive compute is all you need. BERT from @GoogleAI: SOTA results on everything https://arxiv.org/abs/1810.04805. Results on SQuAD are just mind-blowing. Fun times ahead! https://t.co/1phsCZpqWR
13 replies, 1108 likes
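
A minimal sketch of the configuration quoted above ("24 layers, 1024 dim, 16 heads" is BERT-large), assuming the Hugging Face transformers library rather than the original TensorFlow release:

from transformers import BertConfig, BertModel

# BERT-large as quoted in the tweet: 24 layers, 1024 hidden units, 16 attention
# heads; intermediate_size=4096 is the feed-forward width BERT-large uses.
config = BertConfig(
    num_hidden_layers=24,
    hidden_size=1024,
    num_attention_heads=16,
    intermediate_size=4096,
)
model = BertModel(config)  # randomly initialised; architecture only, no pre-trained weights
total = sum(p.numel() for p in model.parameters())
print(f"{total / 1e6:.0f}M parameters")  # roughly 335M (the paper reports 340M)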


May 26 2019 Bill Slawski ⚓

No, it's not LSI that Google is using. It is BERT! https://arxiv.org/abs/1810.04805
7 replies, 87 likes


Oct 25 2019 Oren Etzioni

Contextual Word Embedding is now part of Google Search https://www.wired.com/story/google-search-advancing-grade-reading/ Terrific insight and context by @tsimonite
1 reply, 74 likes
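
A minimal sketch of what "contextual" means here, assuming the bert-base-uncased checkpoint and the Hugging Face transformers library: the same surface word receives a different vector in each sentence it appears in.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence, word):
    # Final-layer hidden state for `word` within `sentence`.
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    idx = enc.input_ids[0].tolist().index(tokenizer.convert_tokens_to_ids(word))
    return hidden[idx]

river = word_vector("He sat on the river bank.", "bank")
money = word_vector("She paid the money into the bank.", "bank")
# Same spelling, different vectors: cosine similarity is well below 1.0.
print(torch.cosine_similarity(river, money, dim=0).item())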


Dec 06 2019 Full Fact

This election, we've been using our automated fact-checking tools to scrutinise the party manifestos. In this blog post, David, our Data Scientist, explains how AI helps us identify and categorise claims [1/2] https://fullfact.org/blog/2019/dec/how-we-use-ai-help-fact-check-party-manifestos/
1 reply, 34 likes


Oct 12 2018 Dawn Anderson

Meet BERT from Google AI @GoogleAI (great name, although I am biased as it is the same name as my Pomeranian) -> 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding' (submitted Oct 2018) https://arxiv.org/pdf/1810.04805.pdf
2 replies, 18 likes


Jan 02 2020 Jelle Zuidema

[13] J. Devlin, M. Chang, K. Lee and K. Toutanova: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. https://arxiv.org/abs/1810.04805 @toutanova
1 reply, 15 likes


Nov 30 2017 🍌 John 🍌

@chutty @seroundtable Awesomeness.
2 replies, 14 likes


Sep 10 2019 Vikas Bahirwani

Get up to date on #NLP #DeepLearning in 3 easy steps. It is fun! 1) Read the Transformer paper by @ashVaswani (https://arxiv.org/pdf/1706.03762.pdf) 2) Quickly review CoVe, ELMo, GPT and GPT-2 via https://lilianweng.github.io/lil-log/2019/01/31/generalized-language-models.html#zero-shot-transfer. Thanks @lilianweng. 3) Read BERT https://arxiv.org/pdf/1810.04805.pdf Easy peasy.
0 replies, 9 likes


Jun 03 2019 wordmetrics

This is an excellent explanation of BERT. https://towardsdatascience.com/bert-explained-state-of-the-art-language-model-for-nlp-f8b21a9b6270 Original Google AI paper here: https://arxiv.org/pdf/1810.04805.pdf #nlproc #ai #google #textclassification #seo #machinelearning #ml https://t.co/S9MPMMLQPU
0 replies, 9 likes


Oct 25 2019 Dawn Anderson

@Suzzicks @BritneyMuller @AlexisKSanders My dog was here first. My Bert is 6 years old. https://twitter.com/dawnieando/status/1050652542755958784
2 replies, 7 likes


Jul 19 2019 HubBucket | We're Helping Save Lives

⚕️#HealthIT @Microsoft makes it easier to build Bidirectional Encoder Representations from Transformers (#BERT) for Language Understanding at large scale. Article: https://azure.microsoft.com/en-us/blog/microsoft-makes-it-easier-to-build-popular-language-representation-model-bert-at-large-scale/ Paper: https://arxiv.org/pdf/1810.04805.pdf @Azure @HubBucket @HubDataScience #MachineLearning #NLP https://t.co/tuD08uu4CN
0 replies, 6 likes


Sep 24 2019 HubBucket | Technology for Healthcare and Medicine

#BioBert - Bidirectional Encoder Representations from Transformers for #Biomedical #Text #Mining, based on "Deep Bidirectional Transformers for Language Understanding" #MachineLearning #DeepLearning @ARXIV_ORG 🖥️https://arxiv.org/pdf/1810.04805.pdf @HubBucket @HubAnalytics2 @HubAnalysis1 @HubXpress1 https://t.co/DbFhh6ZqMi
0 replies, 5 likes


Sep 24 2019 HubBase | HubBucket Biomedical Data Science

#BioBert - Bidirectional Encoder Representations from Transformers for #Biomedical #Text #Mining, based on "Deep Bidirectional Transformers for Language Understanding" #MachineLearning #DeepLearning @ARXIV_ORG 🖥️https://arxiv.org/pdf/1810.04805.pdf @HubBucket @HubAnalytics2 @HubAnalysis1 @HubXpress1 https://t.co/1O30Gm7qpI
0 replies, 4 likes


Oct 30 2019 John Locke

If you want to read a technical description of how BERT works, here you go: https://arxiv.org/pdf/1810.04805.pdf Be forewarned, there's a lot of engineering-speak here.
0 replies, 1 like


Sep 24 2019 VonVictor V. Rosenchild | HubBucket Founder/CEO

#BioBert - Bidirectional Encoder Representations from Transformers for #Biomedical #Text #Mining, based on "Deep Bidirectional Transformers for Language Understanding" #MachineLearning #DeepLearning @ARXIV_ORG 🖥️https://arxiv.org/pdf/1810.04805.pdf @HubBucket @HubAnalytics2 @HubAnalysis1 @HubXpress1 https://t.co/pGYqCfWr4F
0 replies, 1 like


Feb 10 2020 Ivan Makohon

Google's new Bidirectional Encoder Representations from Transformers (BERT) helps Google Search understand the nuances and context of words in searches. Paper: https://arxiv.org/pdf/1810.04805.pdf #cs800s20 #TIL
0 replies, 1 like


May 26 2019 𝄢 Jason Barnard

That last sentence almost slips by unnoticed "outperforming human performance by 2.0%." https://t.co/srMLiCQQvr
0 replies, 1 like


May 28 2019 Saroosh Khan

The words "pre-train", "bidirectional representations", and "left and right context" are the ones worth noting ☺️
0 replies, 1 like
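
Those three phrases are the heart of the paper: BERT is pre-trained as a masked language model, so each prediction conditions on both the left and the right context. A minimal sketch, assuming the bert-base-uncased checkpoint and the Hugging Face transformers pipeline:

from transformers import pipeline

# The fill-mask pipeline uses BERT's masked-language-model head; the token
# predicted for [MASK] can attend to the words on both sides of it.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for pred in fill_mask("The capital of France is [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
# "paris" ranks at or near the top because the left context ("capital of
# France") and the right context (the sentence-final ".") are both visible.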


Dec 17 2019 小猫遊りょう(たかにゃし・りょう)

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding https://arxiv.org/abs/1810.04805
1 reply, 1 like


Dec 13 2019 Valentin Pletzer

@MordyOberstein I had to look it up. This doesn't say Google is using BERT for named entity recognition, but in the original paper they do compare BERT on NER with other approaches: https://arxiv.org/pdf/1810.04805.pdf (it's the paper referenced from here: https://ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html)
1 reply, 1 like
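
The comparison referred to is the paper's CoNLL-2003 experiment, where BERT is fine-tuned for token classification. A minimal sketch of that setup, assuming the Hugging Face transformers pipeline and dslim/bert-base-NER, a community checkpoint fine-tuned on CoNLL-2003 (not the model evaluated in the paper):

from transformers import pipeline

# Named entity recognition as token classification on top of a BERT encoder;
# the checkpoint below is a community fine-tune used purely for illustration.
ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",
    aggregation_strategy="simple",
)

for ent in ner("BERT was released by Google AI in October 2018."):
    print(ent["entity_group"], ent["word"], f"{ent['score']:.3f}")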

