Richard Yuanzhe Pang
2020
ENGINE: Energy-Based Inference Networks for Non-Autoregressive Machine Translation
Lifu Tu | Richard Yuanzhe Pang | Sam Wiseman | Kevin Gimpel
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
We propose to train a non-autoregressive machine translation model to minimize the energy defined by a pretrained autoregressive model. In particular, we view our non-autoregressive translation system as an inference network (Tu and Gimpel, 2018) trained to minimize the autoregressive teacher energy. This contrasts with the popular approach of training a non-autoregressive model on a distilled corpus consisting of the beam-searched outputs of such a teacher model. Our approach, which we call ENGINE (ENerGy-based Inference NEtworks), achieves state-of-the-art non-autoregressive results on the IWSLT 2014 DE-EN and WMT 2016 RO-EN datasets, approaching the performance of autoregressive models.
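The core idea of the abstract, training an inference network so that its (relaxed) output minimizes a pretrained teacher's energy, can be illustrated with a toy NumPy sketch. This is not the paper's implementation: the "teacher" here is just fixed per-position logits standing in for a pretrained autoregressive model, and the "inference network" is a free logit table rather than a real non-autoregressive translation model; all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "autoregressive teacher": fixed per-position logits over a small vocab
# (a hypothetical stand-in for a pretrained AR translation model).
V, T = 5, 4                              # vocab size, target length
teacher_logits = rng.normal(size=(T, V))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Teacher energy of a relaxed output q (T x V rows of probabilities):
# cross-entropy against the teacher's distribution, summed over positions.
log_p = teacher_logits - np.log(np.exp(teacher_logits).sum(-1, keepdims=True))

def teacher_energy(q):
    return -(q * log_p).sum()

# "Inference network": free logits trained by gradient descent on the energy.
inf_logits = np.zeros((T, V))
lr = 0.5
for _ in range(200):
    q = softmax(inf_logits)
    grad_q = -log_p                      # dE/dq for the linear energy above
    # Softmax backward pass: dE/dlogits = q * (grad_q - sum_j grad_q_j * q_j)
    grad_logits = q * (grad_q - (grad_q * q).sum(-1, keepdims=True))
    inf_logits -= lr * grad_logits

# Minimizing this energy drives the relaxed output toward the teacher's
# most probable token at each position.
print(softmax(inf_logits).argmax(-1), teacher_logits.argmax(-1))
```

Since the energy is linear in q, its minimum over the simplex sits at the teacher's argmax token per position, which is what the gradient descent recovers; ENGINE applies the same principle with a real NAT model and a Transformer teacher.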
Intermediate-Task Transfer Learning with Pretrained Language Models: When and Why Does It Work?
Yada Pruksachatkun | Jason Phang | Haokun Liu | Phu Mon Htut | Xiaoyi Zhang | Richard Yuanzhe Pang | Clara Vania | Katharina Kann | Samuel R. Bowman
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
While pretrained models such as BERT have shown large gains across natural language understanding tasks, their performance can be improved by further training the model on a data-rich intermediate task, before fine-tuning it on a target task. However, it is still poorly understood when and why intermediate-task training is beneficial for a given target task. To investigate this, we perform a large-scale study on the pretrained RoBERTa model with 110 intermediate-target task combinations. We further evaluate all trained models with 25 probing tasks meant to reveal the specific skills that drive transfer. We observe that intermediate tasks requiring high-level inference and reasoning abilities tend to work best. We also observe that target task performance is strongly correlated with higher-level abilities such as coreference resolution. However, we fail to observe more granular correlations between probing and target task performance, highlighting the need for further work on broad-coverage probing benchmarks. We also observe evidence that the forgetting of knowledge learned during pretraining may limit our analysis, highlighting the need for further work on transfer learning methods in these settings.
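The intermediate-task pipeline the abstract studies (pretrained model → data-rich intermediate task → low-resource target task) can be sketched with a toy NumPy stand-in. This is an illustration of the training order only, not the paper's setup: logistic regression replaces RoBERTa, and the two synthetic tasks share an underlying weight vector so that transfer can help; all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def finetune(w, X, y, lr=0.1, steps=500):
    """Gradient-descent logistic regression, standing in for fine-tuning."""
    for _ in range(steps):
        p = sigmoid(X @ w)
        w = w - lr * X.T @ (p - y) / len(y)
    return w

D = 20
true_w = rng.normal(size=D)              # shared structure across the two tasks

# Data-rich intermediate task (many labeled examples).
X_int = rng.normal(size=(1000, D))
y_int = (X_int @ true_w > 0).astype(float)

# Low-resource target task, plus a held-out test set.
X_tgt = rng.normal(size=(40, D))
y_tgt = (X_tgt @ true_w > 0).astype(float)
X_test = rng.normal(size=(500, D))
y_test = (X_test @ true_w > 0).astype(float)

w0 = np.zeros(D)                                          # "pretrained" start
baseline = finetune(w0, X_tgt, y_tgt)                     # target-only
transfer = finetune(finetune(w0, X_int, y_int), X_tgt, y_tgt)  # intermediate, then target

acc = lambda w: ((sigmoid(X_test @ w) > 0.5) == y_test).mean()
print(f"target-only: {acc(baseline):.2f}  intermediate+target: {acc(transfer):.2f}")
```

The two `finetune` calls in sequence mirror the study's design: the benefit of the intermediate stage depends on how much structure the intermediate task shares with the target, which is exactly the "when and why" question the paper probes at scale.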