Papers of the day

Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks

Comments

Patrick Lewis: Thrilled to share new work! “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks”. Big gains on Open-Domain QA, with new State-of-the-Art results on NaturalQuestions, CuratedTrec and WebQuestions. Check it out here: https://arxiv.org/abs/2005.11401. 1/N https://t.co/w4CwLxiWxr

4 replies, 578 likes


Hugging Face: We're excited to announce the🤗Transformers release of the Retrieval-Augmented Generation model in collaboration with @facebookai! Paper: https://arxiv.org/abs/2005.11401 Demo: https://huggingface.co/rag/ 🤗Doc: https://huggingface.co/transformers/master/model_doc/rag.html Blog post: https://ai.facebook.com/blog/retrieval-augmented-generation-streamlining-the-creation-of-intelligent-natural-language-processing-models https://t.co/NBjy4tEjSz

6 replies, 387 likes


Ola Piktus: The first retrieval-augmented model in 🤗transformers, RAG goes open source today! A result of a great collaboration between @facebookai and @huggingface - check out our blog post below, as well as the paper: https://arxiv.org/abs/2005.11401 and the demo https://huggingface.co/rag/ 🎉

6 replies, 299 likes


Yann LeCun: New advance in open-domain question answering from Facebook AI.

4 replies, 275 likes


Ethan Perez: New work! We present a single, retrieval-based architecture that can learn a variety of knowledge-intensive tasks: extractive and generative! Cool results (and SOTAs) on open-domain extractive QA, abstractive QA, fact verification, and question generation. W/ many at @facebookai

0 replies, 80 likes


Douwe Kiela: New work! What happens when you add retrieval over Wikipedia to the "work horse of NLP", sequence-to-sequence models? Turns out it works really well. Amazing job by @PSH_Lewis, @EthanJPerez and other @facebookai colleagues.

0 replies, 45 likes


Sebastian Riedel: It continues to fascinate me what a pre-trained seq2seq model like @ml_perception et al's BART (or T5) can do, out-of-the-box. Super excited to see how well it plays with a strong differentiable retriever like DPR!

0 replies, 29 likes
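The "strong differentiable retriever" Riedel refers to is DPR, which scores passages by the dot product between dense query and passage embeddings and can be normalized into a distribution over documents. A toy sketch of that scoring with made-up 3-dimensional vectors (real DPR uses BERT-based encoders producing 768-dimensional embeddings; the names here are illustrative, not from any library):

```python
import math

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(u, v):
    """Inner product — DPR's similarity function between query and passage."""
    return sum(a * b for a, b in zip(u, v))

# Toy dense embeddings standing in for the output of DPR's encoders.
query_vec = [1.0, 0.0, 1.0]
doc_vecs = {
    "doc_a": [0.9, 0.1, 0.8],  # close to the query
    "doc_b": [0.0, 1.0, 0.0],  # unrelated
    "doc_c": [0.5, 0.2, 0.4],
}

# Score every document, then normalize into p(z | x).
scores = {d: dot(query_vec, v) for d, v in doc_vecs.items()}
p_z_given_x = dict(zip(scores, softmax(list(scores.values()))))
top_doc = max(p_z_given_x, key=p_z_given_x.get)  # "doc_a"
```

Because the score is a plain dot product, the whole retrieval step stays differentiable with respect to the query encoder, which is what lets the seq2seq generator and retriever be trained together.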


Tim Rocktäschel: Glad to have played a small part in this excellent work by @PSH_Lewis @EthanJPerez @riedelcastro @douwekiela and many others at @FacebookAI. It's a great step towards models that incorporate external textual knowledge, and one reason I am excited about language-conditioned RL!

0 replies, 17 likes


Yacine Jernite: After a tiny little Matplotlib multi-threading kerfuffle, the RAG demo is back online with full visualization capacity! PLEASE send over any good Jeopardy! questions you get from playing with it :D

1 reply, 14 likes


Timo Schick: This is some exciting new work from @facebookai - definitely worth reading!

0 replies, 11 likes


UCL Natural Language Processing: Patrick is up to some paradigm-shifting research on unsupervised QA (e.g. https://arxiv.org/abs/1906.04980) & retrieval-augmented generation (https://arxiv.org/abs/2005.11401) & a great guy - highly recommended!

0 replies, 9 likes


Aran Komatsuzaki: SOTA in open-domain QA with substantially fewer params, and with neither a re-ranker nor an extractive reader, unlike REALM or T5-SSM

0 replies, 9 likes
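Instead of extracting an answer span from a single top document, RAG treats the retrieved document as a latent variable and marginalizes the generator's probabilities over the top-K retrieved documents: p(y | x) = Σ_z p(z | x) · p(y | x, z). A minimal numeric sketch of that marginalization (the probabilities below are invented for illustration, not from the paper):

```python
# Toy retrieval distribution p(z | x) over three retrieved documents.
p_doc = {"doc_a": 0.7, "doc_b": 0.2, "doc_c": 0.1}

# Hypothetical generator probabilities p(y | x, z) for one candidate answer y,
# conditioned on each retrieved document.
p_answer_given_doc = {"doc_a": 0.9, "doc_b": 0.3, "doc_c": 0.05}

# RAG-style marginalization: p(y | x) = sum over z of p(z | x) * p(y | x, z).
p_answer = sum(p_doc[z] * p_answer_given_doc[z] for z in p_doc)
# 0.7*0.9 + 0.2*0.3 + 0.1*0.05 = 0.695
```

Because the answer probability is a weighted sum over documents, no hard retrieve-then-extract decision is needed, which is why no re-ranker or extractive reader appears in the pipeline.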


Alexander R Johansen: Generate sequences conditioned on all of Wikipedia, no more retrieve-and-extract. The progress in open-domain seq2seq is impressive! https://arxiv.org/abs/2005.11401 @PSH_Lewis https://t.co/EutJlimePP

0 replies, 8 likes


Boris Dayma: This is such a well designed demo! I love when we can play with a model and see how it works interactively (vs cherry picked examples from papers)!

0 replies, 6 likes


Fabio Petroni: New Work! 💥 SOTA results for NaturalQuestions, CuratedTrec and WebQuestions. 🚀 Check out our preprint here: https://arxiv.org/abs/2005.11401.

0 replies, 3 likes


Luca Soldaini 🏳️‍🌈: @danqi_chen Oooh @danqi_chen is also covering RAG (@PSH_Lewis et al), a retrieve-and-generate QA model that recently popped up on arXiv: https://arxiv.org/abs/2005.11401 this work is super neat, one of my favorite QA papers recently, definitely give it a read! #acl2020nlp #acl2020en

1 reply, 2 likes


HotComputerScience: Most popular computer science paper of the day: "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks" https://hotcomputerscience.com/paper/retrieval-augmented-generation-for-knowledge-intensive-nlp-tasks https://twitter.com/PSH_Lewis/status/1265300549777440775

0 replies, 2 likes


Content

Found on May 26 2020 at https://arxiv.org/pdf/2005.11401.pdf

PDF content of a computer science paper: Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks