Xiaoyu Shen
2020
Diversifying Dialogue Generation with Non-Conversational Text
Hui Su | Xiaoyu Shen | Sanqiang Zhao | Zhou Xiao | Pengwei Hu | Randy Zhong | Cheng Niu | Jie Zhou
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Neural network-based sequence-to-sequence (seq2seq) models strongly suffer from the low-diversity problem when it comes to open-domain dialogue generation. As bland and generic utterances usually dominate the frequency distribution in our daily chitchat, avoiding them to generate more interesting responses requires complex data filtering, sampling techniques or modifying the training objective. In this paper, we propose a new perspective to diversify dialogue generation by leveraging non-conversational text. Compared with bilateral conversations, non-conversational text is easier to obtain, more diverse and covers a much broader range of topics. We collect a large-scale non-conversational corpus from multiple sources including forum comments, idioms and book snippets. We further present a training paradigm to effectively incorporate this text via iterative back translation. The resulting model is tested on two conversational datasets from different domains and is shown to produce significantly more diverse responses without sacrificing relevance to the context.
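The iterative back-translation paradigm described in the abstract can be illustrated with a toy sketch. The `train_step` and `generate` functions below are hypothetical dictionary-based stand-ins for seq2seq training and decoding, not the paper's actual implementation:

```python
# A minimal sketch of iterative back translation for incorporating
# non-conversational text into dialogue training. The models here are
# toy lookup tables standing in for seq2seq networks (assumption).

def generate(model, sources):
    """Toy 'decoding': apply the model's memorized mapping, else echo."""
    return [model.get(s, s) for s in sources]

def train_step(model, pairs):
    """Toy 'training': memorize source -> target pairs."""
    for src, tgt in pairs:
        model[src] = tgt
    return model

def iterative_back_translation(conv_pairs, non_conv_texts, rounds=2):
    """Alternate between a forward model (context -> response) and a
    backward model (response -> context). Non-conversational sentences
    serve as extra target-side responses: the backward model synthesizes
    pseudo contexts for them, and the forward model retrains on the
    union of real and pseudo pairs."""
    fwd, bwd = {}, {}
    # Seed both directions on the real conversational pairs.
    train_step(fwd, conv_pairs)
    train_step(bwd, [(r, c) for c, r in conv_pairs])
    contexts = [c for c, _ in conv_pairs]
    for _ in range(rounds):
        # Back-translate: synthesize contexts for non-conversational text.
        pseudo_contexts = generate(bwd, non_conv_texts)
        pseudo_pairs = list(zip(pseudo_contexts, non_conv_texts))
        # Retrain the forward model on real + pseudo pairs.
        train_step(fwd, conv_pairs + pseudo_pairs)
        # Symmetrically refresh the backward model.
        pseudo_responses = generate(fwd, contexts)
        train_step(bwd, list(zip(pseudo_responses, contexts)))
    return fwd, bwd
```

In the real setting each `train_step` is a round of seq2seq optimization, but the alternating data-flow between the two directions is the same.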
Neural Data-to-Text Generation via Jointly Learning the Segmentation and Correspondence
Xiaoyu Shen | Ernie Chang | Hui Su | Cheng Niu | Dietrich Klakow
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
The neural attention model has achieved great success in data-to-text generation tasks. Though usually excelling at producing fluent text, it suffers from the problems of missing information, repetition and “hallucination”. Due to the black-box nature of the neural attention architecture, avoiding these problems in a systematic way is non-trivial. To address this concern, we propose to explicitly segment target text into fragment units and align them with their data correspondences. The segmentation and correspondence are jointly learned as latent variables without any human annotations. We further impose a soft statistical constraint to regularize the segmental granularity. The resulting architecture maintains the same expressive power as neural attention models, while being able to generate fully interpretable outputs at several times lower computational cost. On both the E2E and WebNLG benchmarks, we show the proposed model consistently outperforms its neural attention counterparts.
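The segment-and-align idea can be illustrated with a toy sketch that enumerates segmentations of a short target text and scores each fragment by word overlap with a data field. The overlap score is a hypothetical surrogate for the paper's learned latent alignments, and the per-fragment penalty loosely stands in for the soft granularity constraint:

```python
# A minimal sketch of jointly choosing a segmentation and its data
# correspondences. Overlap scoring and the granularity penalty are
# illustrative assumptions, not the paper's learned model.

def segmentations(tokens):
    """Yield every way to split `tokens` into contiguous fragments."""
    n = len(tokens)
    for cuts in range(2 ** (n - 1)):
        segs, start = [], 0
        for i in range(1, n):
            if cuts >> (i - 1) & 1:
                segs.append(tokens[start:i])
                start = i
        segs.append(tokens[start:])
        yield segs

def align_score(segment, records):
    """Score a fragment by its best word overlap with any data field."""
    best_score, best_field = 0, None
    for field, value in records.items():
        overlap = len(set(segment) & set(value.split()))
        if overlap > best_score:
            best_score, best_field = overlap, field
    return best_score, best_field

def segment_and_align(text, records):
    """Pick the segmentation whose fragments best match data fields,
    with a small penalty per fragment to regularize granularity."""
    tokens = text.split()
    best_segs, best_total = None, float("-inf")
    for segs in segmentations(tokens):
        total = sum(align_score(s, records)[0] for s in segs)
        total -= 0.1 * len(segs)  # soft preference for fewer fragments
        if total > best_total:
            best_total, best_segs = total, segs
    return [(" ".join(s), align_score(s, records)[1]) for s in best_segs]
```

Unlike this brute-force enumeration, the paper treats segmentation and correspondence as latent variables learned jointly, but the output shape (fragment, aligned field) is the interpretable structure the abstract describes.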
Co-authors
- Hui Su 2
- Cheng Niu 2
- Sanqiang Zhao 1
- Zhou Xiao 1
- Pengwei Hu 1
Venues
- ACL 2