William L. Hamilton
2020
Learning an Unreferenced Metric for Online Dialogue Evaluation
Koustuv Sinha | Prasanna Parthasarathi | Jasmine Wang | Ryan Lowe | William L. Hamilton | Joelle Pineau
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Evaluating the quality of a dialogue interaction between two agents is a difficult task, especially in open-domain chit-chat style dialogue. There have been recent efforts to develop automatic dialogue evaluation metrics, but most of them do not generalize to unseen datasets or require a human-generated reference response during inference, which makes them infeasible for online evaluation. Here, we propose an unreferenced automated evaluation metric that uses large pre-trained language models to extract latent representations of utterances and leverages the temporal transitions that exist between them. We show that our model achieves higher correlation with human annotations in an online setting, while not requiring true responses for comparison during inference.
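To make the idea concrete, below is a minimal sketch of one way such an unreferenced metric can be wired together: a frozen pre-trained encoder extracts latent utterance representations, and a small learned head scores how plausibly a candidate response follows its context. The encoder checkpoint, the `TransitionScorer` head, and the negative-sampling training note are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch (not the paper's code): score a candidate response
# against its dialogue context using frozen BERT representations plus a
# small learned transition scorer, trained with randomly sampled negatives.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()  # frozen; only the scorer below would be trained

@torch.no_grad()
def embed(utterance: str) -> torch.Tensor:
    """Return the [CLS] latent representation of one utterance."""
    inputs = tokenizer(utterance, return_tensors="pt", truncation=True)
    return encoder(**inputs).last_hidden_state[:, 0]  # shape (1, hidden)

class TransitionScorer(nn.Module):
    """Hypothetical head scoring the context -> response transition."""
    def __init__(self, hidden: int = 768):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden, 256), nn.ReLU(), nn.Linear(256, 1)
        )

    def forward(self, ctx: torch.Tensor, rsp: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([ctx, rsp], dim=-1)).squeeze(-1)

scorer = TransitionScorer()
# Training would push scores of true (context, next-utterance) pairs above
# scores of random negative responses, e.g. with a margin or BCE loss.
score = scorer(embed("how are you today?"), embed("i'm doing great, thanks!"))
print(float(score))  # quality score computed without any gold reference
```

Note that at inference time the scorer consumes only the live context and the generated response, which is what makes a metric of this shape usable for online evaluation.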
Exploring the Limits of Simple Learners in Knowledge Distillation for Document Classification with DocBERT
Ashutosh Adhikari | Achyudh Ram | Raphael Tang | William L. Hamilton | Jimmy Lin
Proceedings of the 5th Workshop on Representation Learning for NLP
Fine-tuned variants of BERT are able to achieve state-of-the-art accuracy on many natural language processing tasks, although at significant computational cost. In this paper, we verify BERT’s effectiveness for document classification and investigate the extent to which BERT-level effectiveness can be obtained by different baselines, combined with knowledge distillation, a popular model compression method. The results show that BERT-level effectiveness can be achieved by a single-layer LSTM with at least 40× fewer FLOPs and only ∼3% of the parameters. More importantly, this study analyzes the limits of knowledge distillation as we distill BERT’s knowledge all the way down to linear models, a relevant baseline for the task. We report substantial improvements in effectiveness for even the simplest models, as they capture the knowledge learned by BERT.
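For readers unfamiliar with the setup, here is a minimal sketch of how a single-layer LSTM student can be distilled from a fine-tuned BERT teacher. The student architecture, the KL-based soft-target objective, and the temperature and mixing hyperparameters are common distillation choices assumed for illustration; the paper's exact objective and configuration may differ.

```python
# Minimal distillation sketch (assumed setup, not the paper's exact code):
# a single-layer LSTM student is trained on a blend of the teacher's soft
# targets (temperature-scaled logits) and the ordinary hard labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSTMStudent(nn.Module):
    def __init__(self, vocab_size: int, num_labels: int,
                 embed_dim: int = 300, hidden_dim: int = 512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_labels)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        _, (h_n, _) = self.lstm(self.embed(token_ids))
        return self.classifier(h_n[-1])  # logits from the final hidden state

def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 2.0, alpha: float = 0.5):
    """Blend soft-target KL divergence (scaled by T^2) with cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage with random tensors standing in for a real batch.
student = LSTMStudent(vocab_size=30522, num_labels=4)
tokens = torch.randint(0, 30522, (8, 128))   # token ids for 8 documents
teacher_logits = torch.randn(8, 4)           # from a fine-tuned BERT teacher
labels = torch.randint(0, 4, (8,))
loss = distillation_loss(student(tokens), teacher_logits, labels)
loss.backward()
```

The same loss applies unchanged when the student is shrunk further, e.g. to a bag-of-words linear model, which is how the study probes the limits of what simple learners can absorb from the teacher.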