Raphael Tang
2020
DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference
Ji Xin | Raphael Tang | Jaejun Lee | Yaoliang Yu | Jimmy Lin
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Large-scale pre-trained language models such as BERT have brought significant improvements to NLP applications. However, they are also notorious for being slow in inference, which makes them difficult to deploy in real-time applications. We propose a simple but effective method, DeeBERT, to accelerate BERT inference. Our approach allows samples to exit earlier without passing through the entire model. Experiments show that DeeBERT is able to save up to ~40% inference time with minimal degradation in model quality. Further analyses show different behaviors in the BERT transformer layers and also reveal their redundancy. Our work provides new ideas to efficiently apply deep transformer-based models to downstream tasks. Code is available at https://github.com/castorini/DeeBERT.
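DeeBERT attaches a small classifier ("off-ramp") after each transformer layer and stops forward computation as soon as one off-ramp is confident enough, measured by the entropy of its output distribution. Below is a minimal sketch of that exit rule, assuming parallel lists of layer and off-ramp modules and single-example inference; the names and control flow are illustrative placeholders, not the released implementation at the repository above:

```python
import torch

def entropy(logits: torch.Tensor) -> torch.Tensor:
    # Shannon entropy of the softmax distribution over classes.
    probs = torch.softmax(logits, dim=-1)
    return -(probs * torch.log(probs + 1e-12)).sum(dim=-1)

@torch.no_grad()
def early_exit_forward(layers, off_ramps, hidden, threshold):
    """Run one example through the encoder, exiting at the first
    off-ramp whose prediction entropy falls below `threshold`.

    `layers` and `off_ramps` are parallel module lists; both names
    are hypothetical stand-ins for DeeBERT's actual components.
    """
    for layer, ramp in zip(layers, off_ramps):
        hidden = layer(hidden)
        logits = ramp(hidden)
        if entropy(logits).item() < threshold:
            return logits  # confident enough: skip remaining layers
    return logits  # fell through: the full model was used
```

The threshold controls the speed/quality trade-off: a higher threshold lets more samples exit at shallow layers (faster, less accurate), while a threshold of zero recovers the full model.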
Showing Your Work Doesn’t Always Work
Raphael Tang | Jaejun Lee | Ji Xin | Xinyu Liu | Yaoliang Yu | Jimmy Lin
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
In natural language processing, a recently popular line of work explores how to best report the experimental results of neural networks. One exemplar publication, titled “Show Your Work: Improved Reporting of Experimental Results” (Dodge et al., 2019), advocates for reporting the expected validation effectiveness of the best-tuned model, with respect to the computational budget. In the present work, we critically examine this paper. As far as statistical generalizability is concerned, we find unspoken pitfalls and caveats with this approach. We analytically show that their estimator is biased and uses error-prone assumptions. We find that the estimator favors negative errors and yields poor bootstrapped confidence intervals. We derive an unbiased alternative and bolster our claims with empirical evidence from statistical simulation. Our codebase is at https://github.com/castorini/meanmax.
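The contrast at issue can be made concrete. Given B validation scores from a hyperparameter sweep, the expected best score over a budget of n trials can be estimated by plugging the empirical distribution into the with-replacement max formula, or by an unbiased without-replacement construction from order statistics. The sketch below uses textbook formulas under that framing; the function names are ours and may differ in detail from the paper's derivation and the MeanMax codebase:

```python
from math import comb

def expected_max_with_replacement(scores, n):
    """Plug-in estimate of E[max of n draws] when sampling *with*
    replacement from the empirical distribution (the style of
    estimator the paper argues is biased): P(max <= v_(i)) = (i/B)^n.
    """
    v = sorted(scores)
    B = len(v)
    return sum(((i / B) ** n - ((i - 1) / B) ** n) * v[i - 1]
               for i in range(1, B + 1))

def expected_max_without_replacement(scores, n):
    """Unbiased estimate of E[max of n draws] when sampling *without*
    replacement: the max equals the i-th smallest score v_(i) with
    probability C(i-1, n-1) / C(B, n), for i >= n.
    """
    v = sorted(scores)
    B = len(v)
    return sum(comb(i - 1, n - 1) / comb(B, n) * v[i - 1]
               for i in range(n, B + 1))
```

For example, `expected_max_without_replacement([0.69, 0.71, 0.74, 0.80], n=2)` averages the pairwise maxima of the four scores, whereas the with-replacement version also counts draws of the same trial twice, which pulls the estimate downward.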
Exploring the Limits of Simple Learners in Knowledge Distillation for Document Classification with DocBERT
Ashutosh Adhikari | Achyudh Ram | Raphael Tang | William L. Hamilton | Jimmy Lin
Proceedings of the 5th Workshop on Representation Learning for NLP
Fine-tuned variants of BERT are able to achieve state-of-the-art accuracy on many natural language processing tasks, although at significant computational costs. In this paper, we verify BERT’s effectiveness for document classification and investigate the extent to which BERT-level effectiveness can be obtained by different baselines, combined with knowledge distillation—a popular model compression method. The results show that BERT-level effectiveness can be achieved by a single-layer LSTM with at least 40× fewer FLOPS and only ∼3% of the parameters. More importantly, this study analyzes the limits of knowledge distillation as we distill BERT’s knowledge all the way down to linear models—a relevant baseline for the task. We report substantial improvement in effectiveness for even the simplest models, as they capture the knowledge learnt by BERT.
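For readers unfamiliar with the compression method the abstract relies on, here is a minimal sketch of a standard soft-target distillation objective in the Hinton et al. (2015) style, blending hard-label cross-entropy with a temperature-softened teacher term; the paper's exact objective and hyperparameters may differ (e.g., distillation over logits with an MSE loss is also common in this line of work):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Hard-label cross-entropy plus a soft-target KL term that
    pushes the student (e.g., a single-layer LSTM or linear model)
    toward the teacher's (e.g., BERT's) softened distribution.
    """
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2  # rescale so gradients match the hard term
    return alpha * hard + (1 - alpha) * soft
```

The softened teacher distribution carries "dark knowledge" about relative class similarities that one-hot labels discard, which is why even very simple students improve under distillation.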
Co-authors
- Jimmy Lin 3
- Ji Xin 2
- Jaejun Lee 2
- Yaoliang Yu 2
- Xinyu Liu 1