Weihua Luo
2020
Multiscale Collaborative Deep Models for Neural Machine Translation
Xiangpeng Wei | Heng Yu | Yue Hu | Yue Zhang | Rongxiang Weng | Weihua Luo
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Recent evidence reveals that Neural Machine Translation (NMT) models with deeper neural networks can be more effective but are difficult to train. In this paper, we present a MultiScale Collaborative (MSC) framework to ease the training of NMT models that are substantially deeper than those used previously. We explicitly boost the gradient back-propagation from top to bottom levels by introducing a block-scale collaboration mechanism into deep NMT models. Then, instead of forcing the whole encoder stack to directly learn a desired representation, we let each encoder block learn a fine-grained representation and enhance it by encoding spatial dependencies using a context-scale collaboration. We provide empirical evidence showing that the MSC nets are easy to optimize and can obtain translation-quality improvements from considerably increased depth. On IWSLT translation tasks with three translation directions, our extremely deep models (with 72-layer encoders) surpass strong baselines by +2.2~+3.1 BLEU points. In addition, our deep MSC achieves a BLEU score of 30.56 on the WMT14 English-to-German task, significantly outperforming state-of-the-art deep NMT models. We have included the source code in supplementary materials.
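The block-scale collaboration idea above can be illustrated with a toy sketch. This is not the paper's exact architecture; it is a minimal, hypothetical example of dense cross-block fusion, where each block reads an average of all earlier block outputs so that top-level gradients reach bottom blocks through short paths. The `block` function and fusion rule are illustrative assumptions.

```python
import numpy as np

def block(x, w):
    """A toy 'encoder block': one nonlinear map with a residual connection."""
    return x + np.tanh(x @ w)

def deep_encoder_with_fusion(x, weights):
    """Each block consumes the mean of ALL previous outputs, not just the last.

    This mimics the spirit of block-scale collaboration: gradients from
    the top of a deep stack have short paths back to every lower block.
    """
    outputs = [x]
    for w in weights:
        fused = np.mean(outputs, axis=0)   # collaborate across block outputs
        outputs.append(block(fused, w))
    return outputs[-1]

rng = np.random.default_rng(0)
d = 8
weights = [rng.normal(scale=0.1, size=(d, d)) for _ in range(12)]  # 12 toy blocks
x = rng.normal(size=(5, d))                # 5 "token" vectors of width d
y = deep_encoder_with_fusion(x, weights)
print(y.shape)  # (5, 8)
```

Without the fusion step, each block would see only its immediate predecessor, and gradient paths would grow linearly with depth.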
Bilingual Dictionary Based Neural Machine Translation without Using Parallel Sentences
Xiangyu Duan | Baijun Ji | Hao Jia | Min Tan | Min Zhang | Boxing Chen | Weihua Luo | Yue Zhang
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
In this paper, we propose a new task of machine translation (MT), which is based on no parallel sentences but can refer to a ground-truth bilingual dictionary. Motivated by a monolingual speaker's ability to learn to translate by looking up a bilingual dictionary, we propose the task to see how much potential an MT system can attain using the bilingual dictionary and large-scale monolingual corpora, while remaining independent of parallel sentences. We propose anchored training (AT) to tackle the task. AT uses the bilingual dictionary to establish anchoring points for closing the gap between the source language and the target language. Experiments on various language pairs show that our approaches are significantly better than various baselines, including dictionary-based word-by-word translation, dictionary-supervised cross-lingual word embedding transformation, and unsupervised MT. On distant language pairs that are hard for unsupervised MT to perform well on, AT performs remarkably better, achieving performance comparable to supervised SMT trained on more than 4M parallel sentences.
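The anchoring idea can be sketched in a few lines. This is a hypothetical illustration, not the paper's training procedure: a bilingual dictionary maps dictionary-covered source tokens onto their target-side translations ("anchors"), pulling source and target sentences toward a shared space. The dictionary entries and sentences below are invented for the example.

```python
# Invented toy German->English dictionary for illustration only.
bilingual_dict = {"haus": "house", "katze": "cat", "hund": "dog"}

def anchor(tokens, dictionary):
    """Replace every dictionary-covered source token with its
    target-side translation, leaving unknown tokens untouched."""
    return [dictionary.get(tok, tok) for tok in tokens]

src = ["die", "katze", "und", "der", "hund"]
anchored = anchor(src, bilingual_dict)
print(anchored)  # ['die', 'cat', 'und', 'der', 'dog']
```

Anchored sentences of this kind give a model trained only on monolingual corpora fixed points that are shared across the two languages.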
Language-aware Interlingua for Multilingual Neural Machine Translation
Changfeng Zhu | Heng Yu | Shanbo Cheng | Weihua Luo
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Multilingual neural machine translation (NMT) has led to impressive accuracy improvements in low-resource scenarios by sharing common linguistic information across languages. However, the traditional multilingual model fails to capture the diversity and specificity of different languages, resulting in inferior performance compared with individual models that are sufficiently trained. In this paper, we incorporate a language-aware interlingua into the Encoder-Decoder architecture. The interlingual network enables the model to learn a language-independent representation from the semantic spaces of different languages, while still allowing for language-specific specialization of a particular language pair. Experiments show that our proposed method achieves remarkable improvements over state-of-the-art multilingual NMT baselines and produces comparable performance with strong individual models.
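A minimal sketch of the interlingua concept, under stated assumptions: a shared projection produces a language-independent representation, and a per-language gate allows language-specific specialization. The shapes, the gating form, and the class name `Interlingua` are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

class Interlingua:
    """Toy language-aware interlingua layer: one shared projection plus
    a small language-specific component."""

    def __init__(self, dim, languages, seed=0):
        rng = np.random.default_rng(seed)
        # Shared weights: the language-independent part of the mapping.
        self.shared = rng.normal(scale=0.1, size=(dim, dim))
        # One gate vector per language: the language-specific part.
        self.gates = {lang: rng.uniform(size=dim) for lang in languages}

    def __call__(self, x, lang):
        common = np.tanh(x @ self.shared)   # language-independent representation
        return common * self.gates[lang]    # language-specific specialization

inter = Interlingua(dim=4, languages=["de", "fr"])
x = np.ones((2, 4))                         # 2 toy hidden states
out = inter(x, "de")
print(out.shape)  # (2, 4)
```

The design choice mirrored here is that cross-lingual sharing lives in the common parameters, while a small per-language component preserves each language's specificity.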
Co-authors
- Heng Yu 2
- Yue Zhang 2
- Xiangpeng Wei 1
- Yue Hu 1
- Rongxiang Weng 1
Venues
- ACL 3