Papers of the day

NEURAL TEXT GENERATION WITH UNLIKELIHOOD TRAINING

Comments

Aug 14 2019 Sean Welleck

our new paper: "Neural Text d̶e̶Generation with Unlikelihood Training" is now on arxiv! (w/ @uralik1, @stephenroller, Emily Dinan, @kchonyc, @jaseweston) https://arxiv.org/pdf/1908.04319.pdf A step towards solving the case of neural text degeneration 🔎 https://t.co/4fJOfUflm9
7 replies, 239 likes


Sep 16 2019 Sean Welleck

code and pre-trained models for "Neural Text Generation with Unlikelihood Training" now available! - Train and fine-tune LMs with unlikelihood - 🚨fine-tune a GPT-2 model from pytorch-transformers with unlikelihood https://github.com/facebookresearch/unlikelihood_training
0 replies, 126 likes
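The tweet above mentions fine-tuning language models with unlikelihood. A key ingredient in the paper's token-level variant is choosing negative candidates from the previous context (tokens the model has already seen, which it tends to repeat). A minimal sketch of that candidate selection, with hypothetical function and variable names (the actual implementation is in the linked repo):

```python
def negative_candidates(tokens, t):
    """Negative candidates for step t: tokens already seen in the
    prefix tokens[:t], excluding the current target tokens[t].
    Penalizing these discourages the repetition typical of
    maximum-likelihood-trained LMs."""
    return set(tokens[:t]) - {tokens[t]}


# Example: at step 3 the prefix is ["a", "b", "a"]; the target is "c",
# so both previously seen tokens become negative candidates.
negs = negative_candidates(["a", "b", "a", "c"], 3)
```

This per-step candidate set is then plugged into the unlikelihood term of the training objective.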


Oct 02 2019 Ilia Kulikov

💡Update on "Neural Text Generation with Unlikelihood Training" !💡 new: - beam+ngram blocking & nucleus sampling in the human evaluation - analysis of token generation frequency distributions https://ikulikov.name/ul.html (with examples!) arxiv: https://arxiv.org/abs/1908.04319 w/ @wellecks https://t.co/yb6Xn8fk7o
0 replies, 123 likes


Aug 14 2019 Kyunghyun Cho

since and if we know there are problems that we don't necessarily talk about, let's try to tackle one problem at a time, and let us, @uralik1, @wellecks, @jaseweston, @stephenroller and Emily Dinan, take one step for now. thanks to @YejinChoinka and @AlecRad and their (cont)
2 replies, 72 likes


Oct 02 2019 jaseweston

Unlikelihood training beats nucleus sampling and beam blocking for LM generation (new human eval results added on arXiv paper!)
0 replies, 69 likes


Aug 14 2019 Thomas Lahore

Neural Text Generation with Unlikelihood Training "We propose a new objective, unlikelihood training, which forces unlikely generations to be assigned lower probability by the model." https://arxiv.org/abs/1908.04319 https://t.co/EJQ4BPes5Z
0 replies, 18 likes
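The objective quoted above combines the usual negative log-likelihood on the target token with a term that pushes down the probability of negative candidates. A minimal single-step sketch under simplifying assumptions (probabilities given as a plain dict, a hypothetical `alpha` weight; the paper's implementation operates on model logits in batch):

```python
import math

def unlikelihood_loss(probs, target, negatives, alpha=1.0):
    """Token-level loss for one step: NLL on the target plus an
    unlikelihood term -sum_c log(1 - p(c)) over negative candidates,
    which lowers the probability the model assigns to unlikely
    (e.g. repeated) generations."""
    eps = 1e-12  # numerical guard against log(0)
    nll = -math.log(probs[target] + eps)
    ul = -sum(math.log(1.0 - probs[c] + eps)
              for c in negatives if c != target)
    return nll + alpha * ul


# Example: target "cat" with "the" as a negative candidate.
loss = unlikelihood_loss({"the": 0.5, "cat": 0.3, "dog": 0.2},
                         "cat", {"the"})
```

With `alpha=0` this reduces to standard maximum-likelihood training; increasing `alpha` trades likelihood of the target against suppressing the negatives.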


Aug 14 2019 Ilya Kulikov

our new work is on arxiv! w/ @wellecks @kchonyc @jaseweston @stephenroller Emily Dinan!
0 replies, 15 likes


Oct 02 2019 Sean Welleck

and results on fine-tuning GPT-2 with unlikelihood!
1 reply, 7 likes

