Do Massively Pretrained Language Models Make Better Storytellers?

Comments

Nov 18 2019 Abigail See

The code for our paper "Do Massively Pretrained Language Models Make Better Storytellers?" is now online! https://github.com/abisee/story-generation-eval You can browse the generated stories, perform new analyses, or apply our metrics to your own generated text.
0 replies, 225 likes


Nov 04 2019 Abigail See

I'll be at the @conll2019 poster session at 4:30pm today to present "Do Massively Pretrained Language Models Make Better Storytellers?". Come say hi! Paper: https://arxiv.org/abs/1909.10705 Work with @aneeshpappu @RohunSaxena @yakhila_04 @stanfordnlp #NLProc #DeepLearning #AI https://t.co/5zw4qNJVb3
3 replies, 171 likes


Sep 25 2019 Abigail See

Impressed by Ovid's 🦄 but want a deeper eval of GPT2 open-ended NLG? See our @conll2019 paper "Do Massively Pretrained Language Models Make Better Storytellers?" https://arxiv.org/abs/1909.10705 Work with @aneeshpappu @RohunSaxena @yakhila_04 @stanfordnlp #NLProc #DeepLearning #AI
3 replies, 167 likes


Sep 25 2019 Jeremy Howard

Interesting paper - studies similar issues to https://arxiv.org/abs/1904.09751 but with a somewhat different focus. It adds some additional insight to the issue of sampling methods in NLP generation.
0 replies, 97 likes


Nov 02 2019 Stanford NLP Group

.@stanfordnlp at CoNLL2019: Monday @abigail_e_see Pretrained Language Models as Storytellers https://arxiv.org/abs/1909.10705; @chrmanning keynote Multi-step reasoning for complex questions. Shout outs: Matt Lamm compositional image captioning, Justin Dieter dialog responses #emnlp2019
0 replies, 94 likes


Oct 17 2019 Mark 🎃. Riedl ✈️ SEA

"Do Massively Pretrained Language Models Make Better Storytellers?” by @abigail_e_see https://arxiv.org/pdf/1909.10705.pdf TL;DR: not so much
0 replies, 41 likes


Sep 25 2019 AI Papers

Do Massively Pretrained Language Models Make Better Storytellers? http://arxiv.org/abs/1909.10705
0 replies, 1 likes
