
Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks


Patrick Lewis: Thrilled to share new work! “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks”. Big gains on Open-Domain QA, with new state-of-the-art results on NaturalQuestions, CuratedTrec and WebQuestions. Check it out here: 1/N

4 replies, 578 likes

Hugging Face: We're excited to announce the🤗Transformers release of the Retrieval-Augmented Generation model in collaboration with @facebookai! Paper: Demo: 🤗Doc: Blog post:

6 replies, 387 likes

Ola Piktus: The first retrieval-augmented model in 🤗transformers, RAG goes open source today! A result of a great collaboration between @facebookai and @huggingface - check out our blog post below, as well as the paper: and the demo 🎉

6 replies, 299 likes

Yann LeCun: New advance in open-domain question answering from Facebook AI.

4 replies, 275 likes

Ethan Perez: New work! We present a single, retrieval-based architecture that can learn a variety of knowledge-intensive tasks: extractive and generative! Cool results (and SOTAs) on open-domain extractive QA, abstractive QA, fact verification, and question generation. W/ many at @facebookai

0 replies, 80 likes
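The "single, retrieval-based architecture" described in these threads retrieves documents and then marginalizes the generator's answer distribution over them, roughly p(y|x) = Σ_z p(z|x)·p(y|x,z). A minimal sketch of that marginalization step, using toy hand-made retriever scores and per-document answer distributions (all numbers and names below are illustrative assumptions, not the paper's code):

```python
# Toy sketch of RAG-style marginalization over retrieved documents.
# The generator's answer distribution is weighted by the retriever's
# distribution p(z | x) and summed:  p(y | x) = sum_z p(z | x) * p(y | x, z).
import math

def softmax(scores):
    """Turn raw retrieval scores into a distribution p(z | x)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def marginalize(doc_scores, per_doc_answer_probs):
    """Combine per-document answer distributions, weighted by p(z | x).

    doc_scores: raw retriever scores for each retrieved document z.
    per_doc_answer_probs: for each document, a dict answer -> p(y | x, z).
    Returns a dict answer -> p(y | x).
    """
    p_z = softmax(doc_scores)
    marginal = {}
    for weight, answer_probs in zip(p_z, per_doc_answer_probs):
        for answer, p in answer_probs.items():
            marginal[answer] = marginal.get(answer, 0.0) + weight * p
    return marginal

# Two retrieved documents; the retriever scores the first one higher.
scores = [2.0, 0.5]
answers = [
    {"Paris": 0.9, "Lyon": 0.1},   # generator conditioned on doc 1
    {"Paris": 0.4, "Lyon": 0.6},   # generator conditioned on doc 2
]
p_y = marginalize(scores, answers)
best = max(p_y, key=p_y.get)
```

Because every term is a product of differentiable quantities, gradients flow through both the generator and the retriever scores, which is what lets the whole pipeline train end to end.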

Douwe Kiela: New work! What happens when you add retrieval over Wikipedia to the "work horse of NLP", sequence-to-sequence models? Turns out it works really well. Amazing job by @PSH_Lewis, @EthanJPerez and other @facebookai colleagues.

0 replies, 45 likes

Sebastian Riedel: It continues to fascinate me what a pre-trained seq2seq model like @ml_perception et al's BART (or T5) can do, out-of-the-box. Super excited to see how well it plays with a strong differentiable retriever like DPR!

0 replies, 29 likes
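The "strong differentiable retriever" mentioned above, DPR, embeds queries and passages into the same dense vector space and scores relevance by their dot product, so the retrieval scores stay usable in end-to-end training. A toy sketch of that scoring step, with hand-made 3-d vectors standing in for the real BERT-based encoders (an illustrative assumption, not DPR's actual code):

```python
# Toy sketch of DPR-style dense retrieval: passages and queries live in
# one embedding space, and relevance is the dot product of their vectors.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def retrieve(query_vec, passage_vecs, k=2):
    """Return indices of the top-k passages by dot-product score."""
    ranked = sorted(range(len(passage_vecs)),
                    key=lambda i: dot(query_vec, passage_vecs[i]),
                    reverse=True)
    return ranked[:k]

# Hand-made 3-d "embeddings" (illustrative only).
passages = [
    [0.9, 0.1, 0.0],  # passage about topic A
    [0.1, 0.8, 0.1],  # passage about topic B
    [0.0, 0.2, 0.9],  # passage about topic C
]
query = [0.8, 0.2, 0.0]  # a question on topic A
top = retrieve(query, passages, k=2)
```

In the real system the passage vectors are precomputed and indexed (e.g. with FAISS) so the top-k dot products can be found over millions of Wikipedia passages.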

Tim Rocktäschel: Glad to have played a small part in this excellent work by @PSH_Lewis @EthanJPerez @riedelcastro @douwekiela and many others at @FacebookAI. It's a great step towards models that incorporate external textual knowledge, and one reason I am excited about language-conditioned RL!

0 replies, 17 likes

Yacine Jernite: After a tiny little Matplotlib multi-threading kerfuffle, the RAG demo is back online with full visualization capacity! PLEASE send over any good Jeopardy! questions you get from playing with it :D

1 reply, 14 likes

Timo Schick: This is some exciting new work from @facebookai - definitely worth reading!

0 replies, 11 likes

UCL Natural Language Processing: Patrick is up to some paradigm-shifting research on unsupervised QA (e.g. ) & retrieval-augmented generation ( ), & a great guy - highly recommended!

0 replies, 9 likes

Aran Komatsuzaki: SOTA in open-domain QA with substantially fewer params, with neither a re-ranker nor an extractive reader, unlike REALM or T5-SSM

0 replies, 9 likes

Alexander R Johansen: Generate sequences conditioned on all of Wikipedia, no more retrieve-and-extract. The progress in open-domain seq2seq is impressive! @PSH_Lewis

0 replies, 8 likes

Boris Dayma: This is such a well designed demo! I love when we can play with a model and see how it works interactively (vs cherry picked examples from papers)!

0 replies, 6 likes

Fabio Petroni: New Work! 💥 SOTA results for NaturalQuestions, CuratedTrec and WebQuestions. 🚀 Check out our preprint here:

0 replies, 3 likes

Luca Soldaini 🏳️‍🌈: @danqi_chen Oooh @danqi_chen is also covering RAG (@PSH_Lewis et al), a retrieve-and-generate QA model that recently popped up on arXiv: this work is super neat, one of my favorite QA papers recently, definitely give it a read! #acl2020nlp #acl2020en

1 reply, 2 likes

HotComputerScience: Most popular computer science paper of the day: "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks"

0 replies, 2 likes


Found on May 26 2020 at
