Yuval Pinter
2020
Learning to Faithfully Rationalize by Construction
Sarthak Jain | Sarah Wiegreffe | Yuval Pinter | Byron C. Wallace
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
In many settings it is important for one to be able to understand why a model made a particular prediction. In NLP this often entails extracting snippets of an input text ‘responsible for’ corresponding model output; when such a snippet comprises tokens that indeed informed the model’s prediction, it is a faithful explanation. In some settings, faithfulness may be critical to ensure transparency. Lei et al. (2016) proposed a model to produce faithful rationales for neural text classification by defining independent snippet extraction and prediction modules. However, the discrete selection over input tokens performed by this method complicates training, leading to high variance and requiring careful hyperparameter tuning. We propose a simpler variant of this approach that provides faithful explanations by construction. In our scheme, named FRESH, arbitrary feature importance scores (e.g., gradients from a trained model) are used to induce binary labels over token inputs, which an extractor can be trained to predict. An independent classifier module is then trained exclusively on snippets provided by the extractor; these snippets thus constitute faithful explanations, even if the classifier is arbitrarily complex. In both automatic and manual evaluations we find that variants of this simple framework yield predictive performance superior to ‘end-to-end’ approaches, while being more general and easier to train. Code is available at https://github.com/successar/FRESH.
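The three stages described above (arbitrary importance scores, binarized rationale extraction, and a classifier that only ever sees the extracted snippet) can be illustrated with a toy sketch. This is not the authors' code (which is linked above); the keyword-based scorer and the tiny lexicon classifier are stand-ins for the trained models the paper uses.

```python
# Illustrative sketch of the FRESH-style pipeline, NOT the authors' implementation.
# Stage 1: per-token importance scores (toy stand-in for e.g. gradients).
# Stage 2: binarize scores into a rationale by keeping the top-k tokens.
# Stage 3: an independent classifier sees ONLY the rationale, so the
#          explanation is faithful by construction.

def importance_scores(tokens, salient):
    """Toy scorer: high score for tokens in a hypothetical salient set."""
    return [1.0 if t.lower() in salient else 0.1 for t in tokens]

def extract_rationale(tokens, scores, k=2):
    """Binarize scores: keep the k highest-scoring tokens, in order."""
    top = sorted(range(len(tokens)), key=lambda i: -scores[i])[:k]
    return [tokens[i] for i in sorted(top)]

def classify(rationale, positive=frozenset({"great", "excellent"})):
    """Independent classifier module; its input is only the snippet."""
    return "pos" if any(t.lower() in positive for t in rationale) else "neg"

tokens = "The acting was great but the plot dragged".split()
scores = importance_scores(tokens, {"great", "plot"})
rationale = extract_rationale(tokens, scores)
print(rationale, classify(rationale))  # → ['great', 'plot'] pos
```

Because the classifier never observes tokens outside the rationale, the extracted snippet fully accounts for the prediction, regardless of how complex the classifier is.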
UniMorph 3.0: Universal Morphology
Arya D. McCarthy | Christo Kirov | Matteo Grella | Amrit Nidhi | Patrick Xia | Kyle Gorman | Ekaterina Vylomova | Sabrina J. Mielke | Garrett Nicolai | Miikka Silfverberg | Timofey Arkhangelskiy | Nataly Krizhanovsky | Andrew Krizhanovsky | Elena Klyachko | Alexey Sorokin | John Mansfield | Valts Ernštreits | Yuval Pinter | Cassandra L. Jacobs | Ryan Cotterell | Mans Hulden | David Yarowsky
Proceedings of The 12th Language Resources and Evaluation Conference
The Universal Morphology (UniMorph) project is a collaborative effort providing broad-coverage instantiated normalized morphological paradigms for hundreds of diverse world languages. The project comprises two major thrusts: a language-independent feature schema for rich morphological annotation and a type-level resource of annotated data in diverse languages realizing that schema. We have implemented several improvements to the extraction pipeline which creates most of our data, so that it is both more complete and more correct. We have added 66 new languages, as well as new parts of speech for 12 languages. We have also amended the schema in several ways. Finally, we present three new community tools: two to validate data for resource creators, and one to make morphological data available from the command line. UniMorph is based at the Center for Language and Speech Processing (CLSP) at Johns Hopkins University in Baltimore, Maryland. This paper details advances made to the schema, tooling, and dissemination of project resources since the UniMorph 2.0 release described at LREC 2018.
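UniMorph's type-level resource is distributed as three-column TSV triples of lemma, inflected form, and a semicolon-separated feature bundle drawn from the schema. A minimal reader for that layout might look like the sketch below; the two English triples are illustrative examples, not rows copied from a released dataset.

```python
# Minimal sketch of reading UniMorph-style triples, assuming the standard
# three-column TSV layout: lemma <TAB> inflected form <TAB> feature bundle.

def parse_unimorph(lines):
    """Yield (lemma, form, features), with features as a frozenset of tags."""
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank separator lines between paradigms
        lemma, form, feats = line.split("\t")
        yield lemma, form, frozenset(feats.split(";"))

# Illustrative English triples using UniMorph schema features.
data = [
    "run\trunning\tV;V.PTCP;PRS",
    "run\tran\tV;PST",
]
for lemma, form, feats in parse_unimorph(data):
    print(lemma, form, sorted(feats))
```

Representing the feature bundle as a set makes tag lookups order-independent, which matches the schema's treatment of a bundle as an unordered combination of features.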
Co-authors
- Sarthak Jain 1
- Sarah Wiegreffe 1
- Byron C. Wallace 1
- Arya D. McCarthy 1
- Christo Kirov 1
- Matteo Grella 1
- Amrit Nidhi 1
- Patrick Xia 1
- Kyle Gorman 1
- Ekaterina Vylomova 1
- Sabrina J. Mielke 1
- Garrett Nicolai 1
- Miikka Silfverberg 1
- Timofey Arkhangelskiy 1
- Nataly Krizhanovsky 1
- Andrew Krizhanovsky 1
- Elena Klyachko 1
- Alexey Sorokin 1
- John Mansfield 1
- Valts Ernštreits 1
- Cassandra L. Jacobs 1
- Ryan Cotterell 1
- Mans Hulden 1
- David Yarowsky 1