Chitwan Saharia: We introduce Imputer, a non-autoregressive sequence model that generates output sequences in a constant number of iterations. Imputer advances SOTA for non-autoregressive models in both speech recognition and machine translation.
3 replies, 260 likes
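The tweet's key claim is decoding in a constant number of iterations rather than one token at a time. A minimal toy sketch of that style of decoding, under assumptions not spelled out in the thread: start from a fully masked target and, over a fixed number of steps, commit the most confident predictions at still-masked positions. The `score` function here is a hypothetical stand-in for the model; it is not the Imputer architecture itself.

```python
# Toy sketch of constant-step imputation decoding (hedged illustration only;
# the real Imputer uses CTC-style latent alignments, not this dummy model).
import random

MASK = "_"

def score(context):
    # Hypothetical stand-in for a model: returns a (token, confidence)
    # pair for every position; already-committed tokens keep confidence 1.0.
    return [(chr(ord("a") + i % 26), random.random()) if t == MASK else (t, 1.0)
            for i, t in enumerate(context)]

def impute_decode(length, steps=4):
    out = [MASK] * length
    per_step = -(-length // steps)  # ceil: positions committed per iteration
    for _ in range(steps):
        preds = score(out)
        # Rank still-masked positions by model confidence, highest first.
        masked = sorted((i for i, t in enumerate(out) if t == MASK),
                        key=lambda i: preds[i][1], reverse=True)
        for i in masked[:per_step]:
            out[i] = preds[i][0]
    return "".join(out)
```

With `steps=4`, every position is filled after four parallel passes regardless of sequence length, which is what makes the iteration count constant.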
Geoffrey Hinton: The sequence modeling group at the Toronto lab of Google Research has some really impressive new work on generating the words in a sequence in parallel. Imputers rock!
0 replies, 219 likes
Mo_Norouzi: If you're interested in parallel generation of output sequences in machine translation and speech recognition, check out our new work on "Imputer", achieving 28 BLEU on WMT'16 En>De in just 4 generation steps.
6 replies, 186 likes
arXiv CS-CL: Non-Autoregressive Machine Translation with Latent Alignments http://arxiv.org/abs/2004.07437
0 replies, 18 likes
Found on Apr 23 2020 at https://arxiv.org/pdf/2004.07437.pdf