
Multilingual Denoising Pre-training for Neural Machine Translation

Comments

Jan 24 2020 Facebook AI

We're releasing mBART, a new seq2seq multilingual pretraining system for machine translation across 25 languages. It gives significant improvements for document-level translation and low-resource languages. Read our paper to learn more: https://arxiv.org/pdf/2001.08210.pdf https://t.co/tJbRcOTqik
15 replies, 1068 likes


Jan 25 2020 Jiatao Gu

[1/7] Super excited to present our recent work -- mBART. We demonstrate multilingual denoising pre-training produces significant gains across a variety of machine translation tasks! Joint work with @YinhanL @NamanGoyal21 @xl_nlp @edunov @gh_marjan @ml_perception @LukeZettlemoyer
2 replies, 178 likes


Jan 25 2020 Xian Li

Check out our recent work on multilingual pretraining!
0 replies, 35 likes


Jan 23 2020 arXiv CS-CL

Multilingual Denoising Pre-training for Neural Machine Translation http://arxiv.org/abs/2001.08210
0 replies, 11 likes


Jan 29 2020 Christian R. Jänsch

Facebook releases new multilingual system mBART for machine translation #MachineLearning #MachineTranslation #AI @postoff25 @ShiCooks @theomitsa @AshokNellikar @BecomingDataSci @CEO_AISOMA @JonKrohnLearns @data_nerd @evankirstel @KirkDBorne @mvollmer1 @ValaAfshar @gp_pulipaka
2 replies, 11 likes


Jan 29 2020 DataScienceNigeria

@facebookai releases mBART, the first method for pre-training a complete sequence-to-sequence model by denoising full texts in 25 languages, demonstrating that adding mBART initialization produces performance gains. For more info: https://arxiv.org/pdf/2001.08210.pdf https://t.co/nQEZf6BEPX
0 replies, 5 likes
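The pre-training objective summarized above is simple to sketch. Below is a toy Python illustration (not the released implementation) of the two noise functions the paper describes: sentence permutation plus text infilling, where spans with Poisson-distributed lengths are each replaced by a single mask token. Whitespace tokens stand in for the SentencePiece subwords used in the actual model, and the sampling loop is simplified for readability.

```python
import numpy as np


def noise_document(sentences, mask_token="<mask>",
                   mask_ratio=0.35, poisson_lambda=3.5, rng=None):
    """Toy mBART-style denoising noise: permute sentence order, then mask
    roughly `mask_ratio` of the tokens in spans whose lengths are drawn from
    a Poisson(poisson_lambda) distribution, each span collapsing to a single
    mask token. A simplified sketch, not the fairseq implementation."""
    rng = rng or np.random.default_rng(0)

    # Sentence permutation: shuffle sentence order within the document.
    order = rng.permutation(len(sentences))
    tokens = [tok for i in order for tok in sentences[i].split()]

    # Text infilling: mask spans until the masking budget is used up.
    budget = int(round(mask_ratio * len(tokens)))
    out, i = [], 0
    while i < len(tokens):
        if budget > 0 and rng.random() < mask_ratio:
            span = min(max(1, rng.poisson(poisson_lambda)), budget, len(tokens) - i)
            out.append(mask_token)  # the whole span becomes one mask token
            i += span
            budget -= span
        else:
            out.append(tokens[i])
            i += 1
    return " ".join(out)


doc = ["The cat sat on the mat .",
       "It was a sunny day .",
       "Everyone went outside ."]
print(noise_document(doc))  # the seq2seq target is the original, unnoised document
```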


Feb 07 2020 Jindřich Libovický

In this week's blog post, I comment on mBART by @FacebookAI (https://arxiv.org/pdf/2001.08210.pdf). Pre-trained models creep into machine translation 👾👻 https://jlibovicky.github.io/2020/02/07/MT-Weekly-MBART.html #mtweekly #NLProc
0 replies, 4 likes


Jan 28 2020 Masato Hagiwara

If you use non-English examples in your slides / diagrams, always try to get a native speaker to check your example. A few seconds can save you from making such an embarrassing mistake. Can you spot a typo in Japanese? https://twitter.com/facebookai/status/1220857120356044800
0 replies, 3 likes


Feb 01 2020 NiuTrans, MT

Document-level machine translation and MT for low-resource languages are two hot topics. Congrats on the progress!
0 replies, 2 likes


Jan 25 2020 Jiatao Gu

[7/7] Check out the paper (https://arxiv.org/abs/2001.08210) for more details!! The code & the pre-trained/fine-tuned models will be released soon in Fairseq (https://github.com/pytorch/fairseq) 😀 Feel free to comment and give more suggestions! Thanks!
0 replies, 2 likes
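For readers who want to try the checkpoints mentioned above without setting up fairseq, here is a minimal sketch using the later Hugging Face transformers port of mBART. The model name facebook/mbart-large-en-ro and the MBart classes belong to that port, not to the fairseq release announced in the tweet, so treat them as assumptions.

```python
# Illustrative only: this uses the later Hugging Face transformers port of
# mBART, not the fairseq release announced in the tweet above.
from transformers import MBartForConditionalGeneration, MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained(
    "facebook/mbart-large-en-ro", src_lang="en_XX", tgt_lang="ro_RO")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")

# Encode an English sentence and force the decoder to start with the Romanian
# language token, as mBART's fine-tuned translation checkpoints expect.
inputs = tokenizer("The UN chief says there is no military solution in Syria.",
                   return_tensors="pt")
generated = model.generate(
    **inputs, decoder_start_token_id=tokenizer.lang_code_to_id["ro_RO"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```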


Jan 26 2020 arXiv CS-CL

Multilingual Denoising Pre-training for Neural Machine Translation http://arxiv.org/abs/2001.08210
0 replies, 2 likes


Feb 04 2020 akira

https://arxiv.org/abs/2001.08210 Research that improves translation by pre-training on multiple languages. The model is trained in a seq2seq style on noised text in 25 languages, including Japanese. BLEU scores improve significantly, especially for low-resource languages. https://t.co/b1rkbOlYTZ
0 replies, 2 likes


Jan 24 2020 arXiv CS-CL

Multilingual Denoising Pre-training for Neural Machine Translation http://arxiv.org/abs/2001.08210
0 replies, 2 likes

