Stephen Roller
2020
The Dialogue Dodecathlon: Open-Domain Knowledge and Image Grounded Conversational Agents
Kurt Shuster | Da Ju | Stephen Roller | Emily Dinan | Y-Lan Boureau | Jason Weston
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
We introduce dodecaDialogue: a set of 12 tasks that measures if a conversational agent can communicate engagingly with personality and empathy, ask questions, answer questions by utilizing knowledge resources, discuss topics and situations, and perceive and converse about images. By multi-tasking on such a broad large-scale set of data, we hope to both move towards and measure progress in producing a single unified agent that can perceive, reason and converse with humans in an open-domain setting. We show that such multi-tasking improves over a BERT pre-trained baseline, largely due to multi-tasking with very large dialogue datasets in a similar domain, and that the multi-tasking in general provides gains to both text and image-based tasks using several metrics in both the fine-tune and task transfer settings. We obtain state-of-the-art results on many of the tasks, providing a strong baseline for this challenge.
Don’t Say That! Making Inconsistent Dialogue Unlikely with Unlikelihood Training
Margaret Li | Stephen Roller | Ilia Kulikov | Sean Welleck | Y-Lan Boureau | Kyunghyun Cho | Jason Weston
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Generative dialogue models currently suffer from a number of problems which standard maximum likelihood training does not address. They tend to produce generations that (i) rely too much on copying from the context, (ii) contain repetitions within utterances, (iii) overuse frequent words, and (iv) at a deeper level, contain logical flaws. In this work we show how all of these problems can be addressed by extending the recently introduced unlikelihood loss (Welleck et al., 2019) to these cases. We show that appropriate loss functions which regularize generated outputs to match human distributions are effective for the first three issues. For the last important general issue, we show applying unlikelihood to collected data of what a model should not do is effective for improving logical consistency, potentially paving the way to generative models with greater reasoning ability. We demonstrate the efficacy of our approach across several dialogue tasks.
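The token-level unlikelihood formulation from Welleck et al. (2019) that this abstract builds on can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the `alpha` weight, and the toy probability vector are assumptions, and the choice of negative candidates (e.g. already-generated tokens for repetition, overused frequent words) is up to the application.

```python
import math

def unlikelihood_loss(probs, target, negatives, alpha=1.0):
    """Token-level unlikelihood loss at one decoding step (sketch).

    probs: model probabilities over the vocabulary at this step.
    target: index of the ground-truth token (standard MLE term).
    negatives: indices of tokens the model should NOT produce here,
               e.g. tokens already generated (to curb repetition).
    alpha: weight on the unlikelihood term.
    """
    # Likelihood term: push probability of the target token up.
    mle = -math.log(probs[target])
    # Unlikelihood term: push probability of each negative candidate down,
    # via -log(1 - p(c)), which grows as p(c) approaches 1.
    ul = -sum(math.log(1.0 - probs[c]) for c in negatives)
    return mle + alpha * ul

# Toy step: vocabulary of 3 tokens; token 1 is a negative candidate.
loss = unlikelihood_loss([0.5, 0.25, 0.25], target=0, negatives=[1])
```

With no negatives the loss reduces to ordinary negative log-likelihood; the `alpha` term only activates mass-penalizing behavior on the chosen candidates.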
Co-authors
- Y-Lan Boureau (2)
- Jason Weston (2)
- Kurt Shuster (1)
- Da Ju (1)
- Emily Dinan (1)
Venues
- ACL (2)