Mia Xu Chen
Also published as: Mia Chen
2020
Leveraging Monolingual Data with Self-Supervision for Multilingual Neural Machine Translation
Aditya Siddhant | Ankur Bapna | Yuan Cao | Orhan Firat | Mia Chen | Sneha Kudugunta | Naveen Arivazhagan | Yonghui Wu
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Over the last few years, two promising research directions in low-resource neural machine translation (NMT) have emerged. The first focuses on utilizing high-resource languages to improve the quality of low-resource languages via multilingual NMT. The second direction employs monolingual data with self-supervision to pre-train translation models, followed by fine-tuning on small amounts of supervised data. In this work, we join these two lines of research and demonstrate the efficacy of monolingual data with self-supervision in multilingual NMT. We offer three major results: (i) Using monolingual data significantly boosts the translation quality of low-resource languages in multilingual models. (ii) Self-supervision improves zero-shot translation quality in multilingual models. (iii) Leveraging monolingual data with self-supervision provides a viable path towards adding new languages to multilingual models, reaching up to 33 BLEU on ro-en translation without any parallel data or back-translation.
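The self-supervised objective on monolingual data mentioned in the abstract can be illustrated with a minimal, hypothetical sketch of span masking in the style of MASS-like denoising: a contiguous span of a monolingual sentence is hidden on the source side, and the model learns to reconstruct it. The `mask_span` function and `MASK` token below are illustrative assumptions, not the paper's actual implementation.

```python
import random

MASK = "<mask>"  # illustrative mask token, not the paper's actual vocabulary

def mask_span(tokens, mask_ratio=0.5, rng=None):
    """Build a self-supervised (source, target) pair from monolingual text.

    A contiguous span covering roughly `mask_ratio` of the sentence is
    replaced by MASK tokens on the source side; the hidden span becomes
    the target the model must reconstruct. This is a sketch of the kind
    of denoising objective used to exploit monolingual data.
    """
    rng = rng or random.Random(0)
    n = len(tokens)
    span_len = max(1, int(n * mask_ratio))
    start = rng.randrange(0, n - span_len + 1)
    source = tokens[:start] + [MASK] * span_len + tokens[start + span_len:]
    target = tokens[start:start + span_len]
    return source, target

# Example: one monolingual sentence yields a supervised-looking pair.
tokens = "the cat sat on the mat".split()
source, target = mask_span(tokens, mask_ratio=0.5, rng=random.Random(0))
```

In a multilingual setup, such synthetic pairs from many languages' monolingual corpora would be mixed with the parallel data during training, which is how monolingual text can help low-resource and even unseen language directions.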