Christopher Pal
Also published as: Chris Pal
2020
Interactive Machine Comprehension with Information Seeking Agents
Xingdi Yuan | Jie Fu | Marc-Alexandre Côté | Yi Tay | Chris Pal | Adam Trischler
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Existing machine reading comprehension (MRC) models do not scale effectively to real-world applications like web-level information retrieval and question answering (QA). We argue that this stems from the nature of MRC datasets: most of these are static environments wherein the supporting documents and all necessary information are fully observed. In this paper, we propose a simple method that reframes existing MRC datasets as interactive, partially observable environments. Specifically, we “occlude” the majority of a document’s text and add context-sensitive commands that reveal “glimpses” of the hidden text to a model. We repurpose SQuAD and NewsQA as an initial case study, and then show how the interactive corpora can be used to train a model that seeks relevant information through sequential decision making. We believe that this setting can contribute to scaling models to web-level QA scenarios.
Would you Rather? A New Benchmark for Learning Machine Alignment with Cultural Values and Social Preferences
Yi Tay | Donovan Ong | Jie Fu | Alvin Chan | Nancy Chen | Anh Tuan Luu | Chris Pal
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Understanding human preferences, along with cultural and social nuances, lies at the heart of natural language understanding. Concretely, we present a new task and corpus for learning alignments between machine and human preferences. Our newly introduced problem is concerned with predicting the preferable option from two sentences describing scenarios that may involve social and cultural situations. Our problem is framed as a natural language inference task with crowd-sourced preference votes by human players, obtained from a gamified voting platform. We benchmark several state-of-the-art neural models, along with BERT and friends, on this task. Our experimental results show that current state-of-the-art NLP models still leave much room for improvement.
Co-authors
- Jie Fu 2
- Yi Tay 2
- Xingdi Yuan 1
- Marc-Alexandre Côté 1
- Adam Trischler 1
Venues
- ACL (2)