Papers of the day

MultiFiT: Efficient Multi-lingual Language Model Fine-tuning

Comments

Oct 22 2019 Sebastian Ruder

Most of the world’s text is not in English. We are releasing MultiFiT to train and fine-tune language models efficiently in any language. Post: http://nlp.fast.ai/classification/2019/09/10/multifit.html Paper: https://arxiv.org/abs/1909.04761 With @eisenjulian @PiotrCzapla Marcin Kardas @GuggerSylvain @jeremyphoward https://t.co/QtcWhKqxyL
12 replies, 1251 likes


Oct 29 2019 DataScienceNigeria

NLP to the next level with MultiFiT - novel methods for multilingual fine-tuning that outperform models trained on far more data. Kudos @seb_ruder @eisenjulian @PiotrCzapla @GuggerSylvain @jeremyphoward Paper: https://arxiv.org/pdf/1909.04761.pdf Code: https://github.com/n-waves/ulmfit-multilingual https://t.co/u2x6IfsP4i
2 replies, 170 likes


Oct 23 2019 Suzana Ilić

Great effort! πŸ‘πŸ‘πŸ‘
0 replies, 44 likes


Oct 23 2019 Jade Abbott

Very important work by some very wonderful people on fine-tuning models efficiently for any language! 🀩🀩🀩 Thank you for the work you all do in supporting Low-Resourced languages 🌍✨πŸ’ͺβ™₯️
0 replies, 32 likes


Oct 23 2019 Dat Tran

Interestingly, they don’t build on BERT like many NLP models these days, but instead use an efficient variant of the LSTM. Hence, it’s cheaper to pretrain and also results in a smaller model.
0 replies, 2 likes
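The "efficient variant of LSTM" referenced above is the Quasi-Recurrent Neural Network (QRNN): gates and candidate vectors are computed in parallel across all timesteps, and only a cheap elementwise recurrence ("fo-pooling") remains sequential. Below is a minimal NumPy sketch of fo-pooling with filter width 1, not the authors' implementation; all weight names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def qrnn_layer(x, Wz, Wf, Wo):
    """QRNN with filter width 1: gates computed in parallel,
    then fo-pooling (the only sequential, elementwise step)."""
    T, _ = x.shape
    z = np.tanh(x @ Wz)                  # candidate vectors, all timesteps at once
    f = 1.0 / (1.0 + np.exp(-(x @ Wf)))  # forget gates
    o = 1.0 / (1.0 + np.exp(-(x @ Wo)))  # output gates
    c = np.zeros(z.shape[1])
    h = np.empty_like(z)
    for t in range(T):                   # cheap elementwise recurrence
        c = f[t] * c + (1.0 - f[t]) * z[t]
        h[t] = o[t] * c
    return h

# Illustrative sizes: 10 timesteps, 8 input dims, 16 hidden units.
d_in, d_hid, T = 8, 16, 10
x = rng.standard_normal((T, d_in))
Wz, Wf, Wo = (rng.standard_normal((d_in, d_hid)) * 0.1 for _ in range(3))
h = qrnn_layer(x, Wz, Wf, Wo)
print(h.shape)  # (10, 16)
```

Because the matrix multiplies are hoisted out of the time loop, the recurrence touches only elementwise operations, which is what makes pretraining cheaper than a full LSTM of the same width.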


Oct 22 2019 Robert Munro

This is one of the most important technology releases of the year. Only 5% of daily conversations are in English but most AI only works in English or other privileged languages. Congrats to everyone involved!
1 reply, 1 like


Oct 22 2019 Elias W. BA

πŸ˜‹πŸ˜‹πŸ˜‹ @MasakhaneMt @galsenai @baamtusarl
0 replies, 1 like


Oct 23 2019 Graciela Gonzalez-Hernandez, PhD

Could be useful for #SMM4H 2020 @UPennHLP
0 replies, 1 like

