Chi Hu
2020
Learning Architectures from an Extended Search Space for Language Modeling
Yinqiao Li | Chi Hu | Yuhao Zhang | Nuo Xu | Yufan Jiang | Tong Xiao | Jingbo Zhu | Tongran Liu | Changliang Li
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Neural architecture search (NAS) has advanced significantly in recent years, but most NAS systems restrict the search to learning the architecture of a recurrent or convolutional cell. In this paper, we extend the search space of NAS. In particular, we present a general approach to learning both intra-cell and inter-cell architectures, which we call ESS. For better search results, we design a joint learning method to perform intra-cell and inter-cell NAS simultaneously. We implement our model in a differentiable architecture search system. For recurrent neural language modeling, it significantly outperforms a strong baseline on the PTB and WikiText data, setting a new state of the art on PTB. Moreover, the learned architectures transfer well to other systems. For example, they improve state-of-the-art systems on the CoNLL and WNUT named entity recognition (NER) tasks and on the CoNLL chunking task, indicating a promising line of research on large-scale pre-learned architectures.
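The joint intra-cell and inter-cell search builds on differentiable architecture search, where each candidate connection computes a softmax-weighted mixture of operations whose weights are trained by gradient descent alongside the network weights. The PyTorch sketch below illustrates only this core mechanism under assumed names (`MixedOp`, `CANDIDATE_OPS`); it is not the authors' ESS implementation.

```python
# Minimal sketch of the differentiable-NAS mechanism underlying ESS:
# each searchable edge mixes candidate operations with learned softmax
# weights. Illustrative only -- the op set and class names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

CANDIDATE_OPS = {
    "tanh":     lambda d: nn.Sequential(nn.Linear(d, d), nn.Tanh()),
    "relu":     lambda d: nn.Sequential(nn.Linear(d, d), nn.ReLU()),
    "sigmoid":  lambda d: nn.Sequential(nn.Linear(d, d), nn.Sigmoid()),
    "identity": lambda d: nn.Identity(),
}

class MixedOp(nn.Module):
    """One searchable edge: a softmax-weighted mixture of candidate ops.
    ESS learns such mixtures both inside a cell (intra-cell) and over
    connections between cells (inter-cell), trained jointly."""
    def __init__(self, dim):
        super().__init__()
        self.ops = nn.ModuleList([f(dim) for f in CANDIDATE_OPS.values()])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # arch params

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

edge = MixedOp(dim=16)
x = torch.randn(4, 16)
edge(x).pow(2).mean().backward()  # gradients reach both ops and alpha
```

After search, the highest-weighted operation on each edge is typically kept and the rest discarded, yielding a discrete architecture.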
The NiuTrans System for WNGT 2020 Efficiency Task
Chi Hu | Bei Li | Yinqiao Li | Ye Lin | Yanyang Li | Chenglong Wang | Tong Xiao | Jingbo Zhu
Proceedings of the Fourth Workshop on Neural Generation and Translation
This paper describes the submissions of the NiuTrans Team to the WNGT 2020 Efficiency Shared Task. We focus on the efficient implementation of deep Transformer models (Wang et al., 2019; Li et al., 2019) using NiuTensor, a flexible toolkit for NLP tasks. We explore the combination of a deep encoder and a shallow decoder in Transformer models via model compression and knowledge distillation. Neural machine translation decoding also benefits from FP16 inference, attention caching, dynamic batching, and batch pruning. Our systems achieve promising results in both translation quality and efficiency; for example, our fastest system can translate more than 40,000 tokens per second on an RTX 2080 Ti while maintaining 42.9 BLEU on newstest2018.
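As a rough illustration of the deep-encoder/shallow-decoder recipe with FP16 inference described above, here is a minimal PyTorch sketch; `torch.nn.Transformer` stands in for NiuTensor, and the layer counts and sizes are assumptions rather than the team's actual settings.

```python
# Hedged sketch: asymmetric-depth Transformer (deep encoder, shallow
# decoder) run in FP16 on GPU. Illustrative only; not the NiuTrans
# configuration, and the paper's systems use NiuTensor, not PyTorch.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.half if device == "cuda" else torch.float32  # FP16 on GPU

model = nn.Transformer(
    d_model=512, nhead=8,
    num_encoder_layers=12,  # deep encoder carries most of the capacity
    num_decoder_layers=1,   # shallow decoder keeps per-step cost low
    batch_first=True,
).to(device=device, dtype=dtype).eval()

src = torch.randn(8, 30, 512, device=device, dtype=dtype)  # source states
tgt = torch.randn(8, 20, 512, device=device, dtype=dtype)  # target states
with torch.no_grad():
    out = model(src, tgt)  # -> (8, 20, 512)
```

Since decoding is autoregressive and its cost is dominated by repeated decoder passes, shifting depth to the encoder (together with caching attention states across steps) is what buys most of the speedup.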
Co-authors
- Yinqiao Li 2
- Tong Xiao 2
- Jingbo Zhu 2
- Yuhao Zhang 1
- Nuo Xu 1