YIPING SONG
2020
Response-Anticipated Memory for On-Demand Knowledge Integration in Response Generation
Zhiliang Tian | Wei Bi | Dongkyu Lee | Lanqing Xue | YIPING SONG | Xiaojiang Liu | Nevin L. Zhang
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Neural conversation models are known to generate appropriate but non-informative responses in general. A scenario where informativeness can be significantly enhanced is Conversing by Reading (CbR), where conversations take place with respect to a given external document. In previous work, the external document is utilized by (1) creating a context-aware document memory that integrates information from the document and the conversational context, and then (2) generating responses referring to the memory. In this paper, we propose to create the document memory with some anticipated responses in mind. This is achieved using a teacher-student framework. The teacher is given the external document, the context, and the ground-truth response, and learns how to build a response-aware document memory from these three sources of information. The student learns to construct a response-anticipated document memory from the first two sources and the teacher's insight on memory creation. Empirical results show that our model outperforms the previous state-of-the-art for the CbR task.
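A minimal sketch of the teacher-student idea described in the abstract, in plain Python. The toy `encode` function and the 0.5 mixing weights are hypothetical stand-ins for the paper's real encoders and memory construction; the point is only that the teacher's memory conditions on the ground-truth response while the student's does not, and the student is trained to imitate the teacher.

```python
import random

DIM = 8  # toy embedding size

def encode(text, seed):
    """Toy encoder: a fixed pseudo-random vector per input (hypothetical
    stand-in for the real sentence encoders used in the paper)."""
    rng = random.Random(seed)
    return [rng.uniform(-1, 1) for _ in range(DIM)]

def build_memory(doc, ctx, resp=None):
    """The teacher sees the ground-truth response; the student must
    anticipate it from the document and context alone."""
    mem = [d + 0.5 * c for d, c in zip(doc, ctx)]
    if resp is not None:  # teacher branch: response-aware memory
        mem = [m + 0.5 * r for m, r in zip(mem, resp)]
    return mem

doc = encode("external document", seed=1)
ctx = encode("conversation context", seed=2)
resp = encode("ground-truth response", seed=3)

teacher_mem = build_memory(doc, ctx, resp)   # response-aware
student_mem = build_memory(doc, ctx)         # response-anticipated

# Distillation-style objective: push the student's memory toward the
# teacher's (here, a mean squared error over the memory vectors).
distill_loss = sum((s - t) ** 2 for s, t in zip(student_mem, teacher_mem)) / DIM
```

At generation time only the student is used, so no ground-truth response is needed; the imitation loss is what transfers the teacher's response-awareness.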
Learning to Customize Model Structures for Few-shot Dialogue Generation Tasks
YIPING SONG | Zequn Liu | Wei Bi | Rui Yan | Ming Zhang
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Training generative models with a minimal corpus is one of the critical challenges in building open-domain dialogue systems. Existing methods tend to use the meta-learning framework, which pre-trains the parameters on all non-target tasks and then fine-tunes on the target task. However, fine-tuning distinguishes tasks from the parameter perspective but ignores the model-structure perspective, resulting in similar dialogue models for different tasks. In this paper, we propose an algorithm that can customize a unique dialogue model for each task in the few-shot setting. In our approach, each dialogue model consists of a shared module, a gating module, and a private module. The first two modules are shared among all the tasks, while the third differentiates into different network structures to better capture the characteristics of the corresponding task. Extensive experiments on two datasets show that our method outperforms all the baselines in terms of task consistency, response quality, and diversity.
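The shared/gating/private decomposition in the abstract can be sketched as follows, in plain Python. The dimensions, random weights, and tanh/sigmoid choices are illustrative assumptions, not the paper's actual architecture; the sketch only shows the shape of the idea: the shared and gating modules are common to all tasks, while each task carries its own private module, and the gate mixes the two outputs per dimension.

```python
import math
import random

DIM = 4  # toy hidden size

def matvec(W, x):
    """Multiply a DIM x DIM weight matrix by a vector."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def rand_matrix(rng):
    return [[rng.uniform(-1, 1) for _ in range(DIM)] for _ in range(DIM)]

rng = random.Random(0)
W_shared = rand_matrix(rng)  # shared module: common to all tasks
W_gate = rand_matrix(rng)    # gating module: also shared

def task_model(x, W_private):
    """One per-task dialogue model: the gate decides, per dimension,
    how much to rely on the task-specific private module."""
    shared_out = [math.tanh(v) for v in matvec(W_shared, x)]
    gate = [1.0 / (1.0 + math.exp(-v)) for v in matvec(W_gate, x)]
    private_out = [math.tanh(v) for v in matvec(W_private, x)]
    return [g * p + (1 - g) * s
            for g, p, s in zip(gate, private_out, shared_out)]

# Each few-shot task differentiates its own private module, so two tasks
# produce different models even on the same input.
W_task_a = rand_matrix(rng)
W_task_b = rand_matrix(rng)
x = [rng.uniform(-1, 1) for _ in range(DIM)]
out_a = task_model(x, W_task_a)
out_b = task_model(x, W_task_b)
```

Because only `W_private` varies across tasks, the shared and gating parameters can be meta-trained on all non-target tasks while each target task customizes its own structure from a few examples.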
Co-authors
- Wei Bi 2
- Zhiliang Tian 1
- Dongkyu Lee 1
- Lanqing Xue 1
- Xiaojiang Liu 1
Venues
- ACL 2