Efficient and High-Quality Neural Machine Translation with OpenNMT
Guillaume Klein, Dakun Zhang, Clément Chouteau, Josep Crego, Jean Senellart
Abstract
This paper describes the OpenNMT submissions to the WNGT 2020 efficiency shared task. We explore training and acceleration of Transformer models with various sizes that are trained in a teacher-student setup. We also present a custom and optimized C++ inference engine that enables fast CPU and GPU decoding with few dependencies. By combining additional optimizations and parallelization techniques, we create small, efficient, and high-quality neural machine translation models.
- Anthology ID: 2020.ngt-1.25
- Volume: Proceedings of the Fourth Workshop on Neural Generation and Translation
- Month: July
- Year: 2020
- Address: Online
- Venues: ACL | NGT | WS
- Publisher: Association for Computational Linguistics
- Pages: 211–217
- URL: https://www.aclweb.org/anthology/2020.ngt-1.25
- PDF: https://www.aclweb.org/anthology/2020.ngt-1.25.pdf