Karen Livescu
2020
Discrete Latent Variable Representations for Low-Resource Text Classification
Shuning Jin | Sam Wiseman | Karl Stratos | Karen Livescu
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
While much work on deep latent variable models of text uses continuous latent variables, discrete latent variables are interesting because they are more interpretable and typically more space-efficient. We consider several approaches to learning discrete latent variable models for text in the case where exact marginalization over these variables is intractable. We compare the performance of the learned representations as features for low-resource document and sentence classification. Our best models outperform the previous best reported results with continuous representations in these low-resource settings, while learning significantly more compressed representations. Interestingly, we find that an amortized variant of Hard EM performs particularly well in the lowest-resource regimes.
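To make the training setup concrete, below is a minimal PyTorch sketch of amortized Hard EM for a single discrete latent code: the hard E-step selects the best-scoring code under the decoder (enumerable since K is small), the M-step fits the decoder to that code, and an inference network is trained to predict the selected code so the E-step is amortized at test time. All names, dimensions, and the bag-of-words decoder are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K, D, V = 16, 128, 5000  # number of discrete codes, hidden size, vocabulary size

class AmortizedHardEM(nn.Module):
    """Sketch: one discrete latent code z in {1..K}, bag-of-words decoder."""
    def __init__(self):
        super().__init__()
        self.inference_net = nn.Linear(V, K)   # q(z|x), the amortized E-step
        self.code_emb = nn.Embedding(K, D)     # one embedding per latent code
        self.decoder = nn.Linear(D, V)         # p(x|z) as unigram logits

    def forward(self, bow):                    # bow: (batch, V) word counts
        # Score every code under the decoder (K is small, so enumerate).
        all_logits = self.decoder(self.code_emb.weight)           # (K, V)
        log_px_z = bow @ torch.log_softmax(all_logits, -1).T      # (batch, K)
        z_star = log_px_z.argmax(-1).detach()                     # hard E-step
        # M-step losses: fit the decoder to the selected code, and train
        # the inference network to predict it (amortization).
        decoder_loss = -log_px_z.gather(1, z_star[:, None]).mean()
        infer_loss = F.cross_entropy(self.inference_net(bow), z_star)
        return decoder_loss + infer_loss

model = AmortizedHardEM()
loss = model(torch.randint(0, 3, (8, V)).float())  # toy batch of count vectors
loss.backward()
```

At classification time, one would use the inference network's predicted code (or its logits) as the compressed document representation; that usage detail is an assumption here.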
PeTra: A Sparsely Supervised Memory Model for People Tracking
Shubham Toshniwal | Allyson Ettinger | Kevin Gimpel | Karen Livescu
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
We propose PeTra, a memory-augmented neural network designed to track entities in its memory slots. PeTra is trained using sparse annotation from the GAP pronoun resolution dataset and outperforms a prior memory model on the task while using a simpler architecture. We empirically compare key modeling choices, finding that we can simplify several aspects of the design of the memory module while retaining strong performance. To measure the people tracking capability of memory models, we (a) propose a new diagnostic evaluation based on counting the number of unique entities in text, and (b) conduct a small-scale human evaluation to compare evidence of people tracking in the memory logs of PeTra relative to a previous approach. PeTra is highly effective in both evaluations, demonstrating its ability to track people in its memory despite being trained with limited annotation.
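As a rough picture of what a slot-based people-tracking memory can look like, the following sketch attends each token over a set of memory slots and writes a gated update, logging per-token slot usage (the kind of memory log one could inspect, as in the human evaluation above). The specific gating scheme, scorers, and dimensions are assumptions for illustration, not PeTra's actual design.

```python
import torch
import torch.nn as nn

class SlotMemory(nn.Module):
    """Toy entity-memory module: each token softly selects a slot and
    writes a gated update into it. Illustrative sketch only."""
    def __init__(self, num_slots=8, dim=64):
        super().__init__()
        self.num_slots, self.dim = num_slots, dim
        self.slot_scorer = nn.Linear(2 * dim, 1)   # token-slot match score
        self.gate = nn.Linear(2 * dim, 1)          # how strongly to write
        self.update = nn.Linear(2 * dim, dim)      # new slot content

    def forward(self, tokens):                     # tokens: (seq, dim)
        memory = torch.zeros(self.num_slots, self.dim)
        usage = []                                 # per-token slot attention, for analysis
        for tok in tokens:                         # process text left to right
            tok_exp = tok.unsqueeze(0).expand(self.num_slots, -1)
            pair = torch.cat([memory, tok_exp], dim=-1)            # (slots, 2*dim)
            attn = torch.softmax(self.slot_scorer(pair).squeeze(-1), dim=0)
            g = torch.sigmoid(self.gate(pair)).squeeze(-1)         # write gate
            new_content = torch.tanh(self.update(pair))            # (slots, dim)
            w = (attn * g).unsqueeze(-1)                           # write strength
            memory = (1 - w) * memory + w * new_content            # gated overwrite
            usage.append(attn)
        return memory, torch.stack(usage)          # final memory + slot-usage log

mem = SlotMemory()
final_memory, usage_log = mem(torch.randn(12, 64))  # 12 contextual token vectors
```

The returned usage_log is what makes such models inspectable: a diagnostic like counting unique entities can be read off from how many distinct slots receive substantial writes.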
A Cross-Task Analysis of Text Span Representations
Shubham Toshniwal | Haoyue Shi | Bowen Shi | Lingyu Gao | Karen Livescu | Kevin Gimpel
Proceedings of the 5th Workshop on Representation Learning for NLP
Many natural language processing (NLP) tasks involve reasoning with textual spans, including question answering, entity recognition, and coreference resolution. While extensive research has focused on functional architectures for representing words and sentences, there is less work on representing arbitrary spans of text within sentences. In this paper, we conduct a comprehensive empirical evaluation of six span representation methods using eight pretrained language representation models across six tasks, including two tasks that we introduce. We find that, although some simple span representations are fairly reliable across tasks, in general the optimal span representation varies by task, and can also vary within different facets of individual tasks. We also find that the choice of span representation has a bigger impact with a fixed pretrained encoder than with a fine-tuned encoder.
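For concreteness, this sketch shows several common ways to pool contextual token vectors into a single span vector (endpoint concatenation, difference-sum, mean, max, and attention pooling). These are standard variants in the literature; the paper's exact six methods may differ, and all names and dimensions here are illustrative.

```python
import torch
import torch.nn as nn

def span_representations(hidden, i, j, attn_scorer):
    """Given contextual token vectors hidden: (seq, dim) and an inclusive
    span [i, j], return a dict of common span pooling methods."""
    span = hidden[i : j + 1]                       # (span_len, dim)
    first, last = hidden[i], hidden[j]
    scores = torch.softmax(attn_scorer(span).squeeze(-1), dim=0)
    return {
        "endpoints": torch.cat([first, last]),     # concatenated boundaries, 2*dim
        "diff_sum": torch.cat([last - first, last + first]),
        "mean": span.mean(dim=0),
        "max": span.max(dim=0).values,
        "attention": scores @ span,                # learned weighted average
    }

dim = 768                                          # e.g., BERT-base hidden size
scorer = nn.Linear(dim, 1)                         # learned attention head
reps = span_representations(torch.randn(20, dim), 3, 7, scorer)
for name, vec in reps.items():
    print(name, tuple(vec.shape))
```

In a probing setup of this kind, each pooled vector would be fed to a task-specific classifier on top of a fixed or fine-tuned pretrained encoder, which is where the choice of pooling method can matter.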