Gabriel Stanovsky
2020
The Right Tool for the Job: Matching Model and Instance Complexities
Roy Schwartz | Gabriel Stanovsky | Swabha Swayamdipta | Jesse Dodge | Noah A. Smith
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
As NLP models become larger, executing a trained model requires significant computational resources, incurring monetary and environmental costs. To better respect a given inference budget, we propose a modification to contextual representation fine-tuning which, during inference, allows for an early (and fast) “exit” from neural network calculations for simple instances, and a late (and accurate) exit for hard instances. To achieve this, we add classifiers to different layers of BERT and use their calibrated confidence scores to make early exit decisions. We test our proposed modification on five different datasets in two tasks: three text classification datasets and two natural language inference benchmarks. Our method presents a favorable speed/accuracy tradeoff in almost all cases, producing models which are up to five times faster than the state of the art, while preserving their accuracy. Our method also requires almost no additional training resources (in either time or parameters) compared to the baseline BERT model. Finally, our method alleviates the need for costly retraining of multiple models at different levels of efficiency; we allow users to control the inference speed/accuracy tradeoff using a single trained model, by setting a single variable at inference time. We publicly release our code.
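A minimal PyTorch sketch of the early-exit mechanism the abstract describes. The per-layer heads, the temperature value, and the threshold below are illustrative assumptions rather than the paper's released code, and temperature scaling stands in for the calibration step:

```python
import torch
import torch.nn as nn

def early_exit_predict(hidden_states, heads, temperature=2.0, threshold=0.9):
    """Classify with the first layer whose calibrated confidence clears
    `threshold`; otherwise fall through to the final layer."""
    for i, (h, head) in enumerate(zip(hidden_states, heads)):
        logits = head(h[:, 0])                           # predict from the [CLS] vector
        probs = torch.softmax(logits / temperature, -1)  # temperature-scaled confidence
        conf, label = probs.max(dim=-1)
        if conf.item() >= threshold or i == len(heads) - 1:
            return label.item(), conf.item(), i + 1      # layers actually executed

# Toy usage: 12 "layers" of random hidden states, one linear head per layer.
hidden_states = [torch.randn(1, 16, 768) for _ in range(12)]
heads = [nn.Linear(768, 2) for _ in range(12)]
label, conf, layers_used = early_exit_predict(hidden_states, heads)
```

Because `threshold` is a single inference-time variable, one trained model covers the whole speed/accuracy curve: lowering it trades accuracy for speed, which matches the abstract's claim of controlling the tradeoff without retraining.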
Controlled Crowdsourcing for High-Quality QA-SRL Annotation
Paul Roit | Ayal Klein | Daniela Stepanov | Jonathan Mamou | Julian Michael | Gabriel Stanovsky | Luke Zettlemoyer | Ido Dagan
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Question-answer driven Semantic Role Labeling (QA-SRL) was proposed as an attractive open and natural flavour of SRL, potentially attainable from laymen. Recently, a large-scale crowdsourced QA-SRL corpus and a trained parser were released. Trying to replicate the QA-SRL annotation for new texts, we found that the resulting annotations were lacking in quality, particularly in coverage, making them insufficient for further research and evaluation. In this paper, we present an improved crowdsourcing protocol for complex semantic annotation, involving worker selection and training, and a data consolidation phase. Applying this protocol to QA-SRL yielded high-quality annotation with drastically higher coverage, producing a new gold evaluation dataset. We believe that our annotation protocol and gold standard will facilitate future replicable research of natural semantic annotations.
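The consolidation phase can be pictured as merging several workers' QA sets per predicate. The sketch below is a hypothetical simplification (in the paper, consolidation is performed by a trained human worker, not by a vote): QA pairs produced by more than one worker are kept, and singletons are routed for review.

```python
from collections import Counter

def consolidate(worker_qa_sets):
    """Toy merge of per-worker QA-SRL annotations for one predicate.
    Each set holds (question, answer_span) tuples from one worker."""
    counts = Counter(qa for qa_set in worker_qa_sets for qa in qa_set)
    agreed = {qa for qa, c in counts.items() if c > 1}      # multiple workers agree
    to_review = {qa for qa, c in counts.items() if c == 1}  # a consolidator decides
    return agreed, to_review

# Example: two workers annotating the predicate "gave"
w1 = {("Who gave something?", "the teacher"), ("What was given?", "a book")}
w2 = {("Who gave something?", "the teacher"),
      ("To whom was something given?", "the student")}
agreed, to_review = consolidate([w1, w2])
```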
Active Learning for Coreference Resolution using Discrete Annotation
Belinda Z. Li | Gabriel Stanovsky | Luke Zettlemoyer
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
We improve upon pairwise annotation for active learning in coreference resolution, by asking annotators to identify mention antecedents if a presented mention pair is deemed not coreferent. This simple modification, when combined with a novel mention clustering algorithm for selecting which examples to label, is much more efficient in terms of the performance obtained per annotation budget. In experiments with existing benchmark coreference datasets, we show that the signal from this additional question leads to significant performance gains per human-annotation hour. Future work can use our annotation protocol to effectively develop coreference models for new domains. Our code is publicly available.
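A sketch of the discrete-annotation query: the pairwise question is upgraded to an antecedent question whenever the pair is judged not coreferent, so even a negative answer yields a usable coreference link. The `SimulatedAnnotator` class below stands in for a human by answering from gold clusters; all names here are illustrative, not the authors' code.

```python
from typing import Dict, List, Optional, Tuple

class SimulatedAnnotator:
    """Stand-in for a human annotator, answering from gold clusters."""
    def __init__(self, gold_cluster: Dict[str, int]):
        self.gold = gold_cluster                     # mention -> entity id

    def is_coreferent(self, m1: str, m2: str) -> bool:
        return self.gold[m1] == self.gold[m2]

    def select_antecedent(self, mention: str, earlier: List[str]) -> Optional[str]:
        for m in reversed(earlier):                  # nearest gold antecedent
            if self.gold[m] == self.gold[mention]:
                return m
        return None                                  # mention starts a new entity

def discrete_query(mention: str, candidate: str, earlier: List[str],
                   annotator: SimulatedAnnotator) -> Tuple[str, Optional[str]]:
    """One labeling step: pairwise question first; on a "no", ask for the
    true antecedent instead of discarding the example."""
    if annotator.is_coreferent(mention, candidate):
        return mention, candidate
    return mention, annotator.select_antecedent(mention, earlier)

# Example: gold clusters put "Obama", "he", "the senator" in one entity.
ann = SimulatedAnnotator({"Obama": 0, "he": 0, "the senator": 0, "Chicago": 1})
print(discrete_query("the senator", "Chicago", ["Obama", "he"], ann))
# -> ('the senator', 'he'): the negative pair still produced a positive link
```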