Sean Welleck: our new paper:
"Neural Text d̶e̶Generation with Unlikelihood Training"
is now on arxiv! (w/ @uralik1, @stephenroller, Emily Dinan, @kchonyc, @jaseweston) https://arxiv.org/pdf/1908.04319.pdf
A step towards solving the case of neural text degeneration 🔎
7 replies, 244 likes
Sean Welleck: code and pre-trained models for "Neural Text Generation with Unlikelihood Training" now available!
- Train and fine-tune LMs with unlikelihood (a minimal sketch of the objective follows this tweet)
- 🚨 fine-tune a GPT-2 model from pytorch-transformers with unlikelihood
0 replies, 155 likes
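For readers skimming the thread, here is a minimal sketch of the token-level unlikelihood objective the training scripts implement. This is an illustrative reimplementation, not the released code: the function name, the single-sequence tensor shapes, and the pad_idx convention are assumptions.

```python
import torch
import torch.nn.functional as F

def token_unlikelihood_loss(lprobs, target, alpha=1.0, pad_idx=1):
    # lprobs: (T, V) log-probabilities from the LM for one sequence.
    # target: (T,) gold next-token ids.
    T, V = lprobs.size()

    # Likelihood term: the usual negative log-likelihood of the gold tokens.
    mle_loss = F.nll_loss(lprobs, target, ignore_index=pad_idx, reduction="sum")

    # Negative candidates C^t: at step t, every *previous* gold token except
    # the current gold token, so that repeating the context is penalized.
    prev = torch.ones(T, T, dtype=torch.bool, device=target.device).tril(-1)
    ctx = target.unsqueeze(0).expand(T, T).masked_fill(~prev, pad_idx)
    ctx = ctx.masked_fill(ctx == target.unsqueeze(1), pad_idx)
    neg_mask = torch.zeros_like(lprobs).scatter_(1, ctx, 1.0)
    neg_mask[:, pad_idx] = 0.0  # never treat padding as a candidate

    # Unlikelihood term: -log(1 - p(c | x_<t)) summed over candidates c.
    one_minus_p = torch.clamp(1.0 - lprobs.exp(), min=1e-5)
    ul_loss = -(torch.log(one_minus_p) * neg_mask).sum()

    return mle_loss + alpha * ul_loss
```

The paper also defines a sequence-level variant whose negative candidates are repeating n-grams in the model's own continuations; the released code covers both.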
Ilia Kulikov: 💡Update on "Neural Text Generation with Unlikelihood Training"!💡
- beam search with n-gram blocking & nucleus sampling added to the human evaluation (nucleus sampling is sketched below)
- analysis of token generation frequency distributions
https://ikulikov.name/ul.html (with examples!)
w/ @wellecks
0 replies, 123 likes
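For reference, nucleus (top-p) sampling (Holtzman et al., 2019) is one of the decoding baselines named in the update above. A minimal sketch, assuming a single next-token distribution; the threshold p=0.9 and the function name are illustrative:

```python
import torch
import torch.nn.functional as F

def nucleus_sample(logits, p=0.9):
    # logits: (V,) unnormalized next-token scores from the LM.
    probs = F.softmax(logits, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)

    # Keep the smallest prefix of tokens whose cumulative mass exceeds p.
    cutoff = int((cumulative > p).float().argmax().item()) + 1
    kept = sorted_probs[:cutoff] / sorted_probs[:cutoff].sum()  # renormalize

    # Sample from the truncated ("nucleus") distribution only.
    choice = torch.multinomial(kept, num_samples=1)
    return sorted_idx[choice].item()
```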
Kyunghyun Cho: since, and if, we know there are problems that we don't necessarily talk about, let's try to tackle them one at a time. Let us, @uralik1, @wellecks, @jaseweston, @stephenroller, and Emily Dinan, take one step for now. Thanks to @YejinChoinka and @AlecRad and their (cont)
2 replies, 72 likes
jaseweston: Unlikelihood training beats nucleus sampling and beam blocking for LM generation
(new human eval results added to the arXiv paper!)
0 replies, 69 likes
Thomas Lahore: Neural Text Generation with Unlikelihood Training
"We propose a new objective, unlikelihood training, which forces unlikely generations to be assigned lower probability by the model."
0 replies, 18 likes
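For concreteness, the objective quoted above can be written per step as follows (notation follows the paper: C^t is the set of negative candidates at step t, e.g. previously seen context tokens, and alpha weights the two terms):

```latex
% Token-level unlikelihood loss: push down each negative candidate c
% while keeping the usual likelihood term on the gold token x_t.
\mathcal{L}^t_{\mathrm{UL}}
  = -\alpha \sum_{c \in \mathcal{C}^t} \log\bigl(1 - p_\theta(c \mid x_{<t})\bigr)
    - \log p_\theta(x_t \mid x_{<t})
```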
Ilya Kulikov: our new work is on arxiv! w/ @wellecks @kchonyc @jaseweston @stephenroller Emily Dinan!
0 replies, 15 likes
Sean Welleck: and results on fine-tuning GPT-2 with unlikelihood!
1 reply, 7 likes
Found on Aug 14 2019 at https://arxiv.org/pdf/1908.04319.pdf