Jianxing Yu
2020
Low-Resource Generation of Multi-hop Reasoning Questions
Jianxing Yu | Wei Liu | Shuang Qiu | Qinliang Su | Kai Wang | Xiaojun Quan | Jian Yin
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
This paper focuses on generating multi-hop reasoning questions from raw text in a low-resource setting. Such questions must be syntactically valid and logically correlated with their answers through reasoning over multiple relations across several sentences in the text. Specifically, we first build a multi-hop generation model and guide it toward logical rationality using the reasoning chain extracted from a given text. Since the labeled data is limited and insufficient for training, we propose to learn the model with the help of a large amount of unlabeled data that is much easier to obtain. Such data contains rich expressive forms of questions, with structural patterns in syntax and semantics. These patterns can be estimated by a neural hidden semi-Markov model using latent variables. With the latent patterns as a prior, we regularize the generation model and produce better results. Experimental results on the HotpotQA dataset demonstrate the effectiveness of our model. Moreover, we apply the generated results to the task of machine reading comprehension and achieve significant performance improvements.
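The abstract's regularization idea, using latent patterns estimated from unlabeled data as a prior on the generation model, can be sketched at a high level. This is a minimal illustration, not the paper's implementation: it assumes the prior and the model's pattern posterior are available as discrete distributions, and the names `kl_divergence`, `regularized_loss`, and the weight `beta` are hypothetical.

```python
import math

def kl_divergence(posterior, prior, eps=1e-12):
    """KL(posterior || prior) between two discrete distributions
    over latent structural patterns."""
    return sum(p * math.log((p + eps) / (q + eps))
               for p, q in zip(posterior, prior))

def regularized_loss(generation_loss, posterior, prior, beta=0.1):
    """Generation loss plus a KL term pulling the model's pattern
    distribution toward the prior estimated from unlabeled data."""
    return generation_loss + beta * kl_divergence(posterior, prior)
```

When the model's posterior already matches the prior the KL term vanishes and only the generation loss remains; as the two distributions diverge, the penalty grows and pushes the generator back toward the patterns seen in unlabeled data.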
Multi-Domain Dialogue Acts and Response Co-Generation
Kai Wang | Junfeng Tian | Rui Wang | Xiaojun Quan | Jianxing Yu
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Generating fluent and informative responses is of critical importance for task-oriented dialogue systems. Existing pipeline approaches generally predict multiple dialogue acts first and use them to assist response generation. There are at least two shortcomings with such approaches. First, the inherent structures of multi-domain dialogue acts are neglected. Second, the semantic associations between acts and responses are not taken into account for response generation. To address these issues, we propose a neural co-generation model that generates dialogue acts and responses concurrently. Unlike pipeline approaches, our act generation module preserves the semantic structures of multi-domain dialogue acts, and our response generation module dynamically attends to different acts as needed. We train the two modules jointly using an uncertainty loss to adjust their task weights adaptively. Extensive experiments on the large-scale MultiWOZ dataset show that our model achieves favorable improvements over several state-of-the-art models in both automatic and human evaluations.
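The "uncertainty loss" mentioned here for adaptive task weighting can be sketched in the standard homoscedastic-uncertainty form (exp(-s_i)·L_i + s_i, with a learnable log-variance s_i per task). This is a generic illustration of that weighting scheme, not the paper's exact formulation; the function name and arguments are assumptions.

```python
import math

def uncertainty_weighted_loss(losses, log_vars):
    """Combine per-task losses using learnable log-variances s_i:
    total = sum_i exp(-s_i) * L_i + s_i.
    A large s_i down-weights a noisy task while the +s_i term
    keeps s_i from growing without bound."""
    return sum(math.exp(-s) * loss + s
               for loss, s in zip(losses, log_vars))
```

With all log-variances at zero the combination reduces to a plain sum of the task losses; during training the s_i values are optimized jointly with the model so the balance between act generation and response generation adapts automatically.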
Co-authors
- Kai Wang 2
- Xiaojun Quan 2
- Wei Liu 1
- Shuang Qiu 1
- Qinliang Su 1
Venues
- ACL 2