
Non-Autoregressive Machine Translation with Latent Alignments

Comments

Chitwan Saharia: We introduce Imputer, a non-autoregressive sequence model that generates output sequences in a constant number of iterations. Imputer advances SOTA for non-autoregressive models in both speech recognition and machine translation. http://arxiv.org/abs/2002.08926 http://arxiv.org/abs/2004.07437 https://t.co/F9E0u6gl95

3 replies, 260 likes


Geoffrey Hinton: The sequence modeling group at the Toronto lab of Google Research has some really impressive new work on generating the words in a sequence in parallel. Imputers rock!

0 replies, 219 likes


Mo_Norouzi: If you're interested in parallel generation of output sequences in machine translation and speech recognition, check out our new work on "Imputer", achieving 28 BLEU on WMT'16 En>De in just 4 generation steps. translation: http://arxiv.org/abs/2004.07437 speech: http://arxiv.org/abs/2002.08926

6 replies, 186 likes
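The tweets above describe the key idea: instead of emitting tokens one at a time, the model predicts all positions in parallel and refines the hypothesis over a fixed number of steps (4 in the quoted result). A minimal sketch of this style of constant-step, mask-based parallel decoding is shown below. The function names (`mask_predict`, `predict_fn`) and the confidence-based masking schedule are illustrative assumptions, not the paper's exact Imputer algorithm, which uses CTC-style latent alignments.

```python
def mask_predict(predict_fn, length, steps=4, mask="<mask>"):
    """Toy sketch of constant-step parallel decoding (mask-predict style).

    `predict_fn(tokens)` is a hypothetical model call that returns, for every
    position in parallel, a (token, confidence) pair. Each of the `steps`
    iterations commits the full parallel prediction, then re-masks the least
    confident positions so the next iteration can revise them.
    """
    tokens = [mask] * length  # start from a fully masked target
    for t in range(steps):
        preds = predict_fn(tokens)            # parallel predictions, all positions at once
        tokens = [tok for tok, _ in preds]    # take the full hypothesis
        # Linearly decaying mask schedule: re-mask fewer tokens each step,
        # down to zero at the final step.
        n_mask = length * (steps - t - 1) // steps
        if n_mask:
            worst = sorted(range(length), key=lambda i: preds[i][1])[:n_mask]
            for i in worst:
                tokens[i] = mask
    return tokens
```

With a stub model that always proposes the same target, four iterations suffice to fill in every position, which is the constant-iteration property the tweets highlight; the runtime is independent of sequence length in decoding steps, unlike autoregressive left-to-right generation.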


arXiv CS-CL: Non-Autoregressive Machine Translation with Latent Alignments http://arxiv.org/abs/2004.07437

0 replies, 18 likes


Content

Found on Apr 23 2020 at https://arxiv.org/pdf/2004.07437.pdf

PDF content of a computer science paper: Non-Autoregressive Machine Translation with Latent Alignments