2020
PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable
Siqi Bao | Huang He | Fan Wang | Hua Wu | Haifeng Wang
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Pre-trained models have proven effective for a wide range of natural language processing tasks. Inspired by this, we propose a novel dialogue generation pre-training framework to support various kinds of conversations, including chit-chat, knowledge-grounded dialogues, and conversational question answering. In this framework, we adopt flexible attention mechanisms to fully leverage the bi-directional context and the uni-directional characteristic of language generation. We also introduce discrete latent variables to tackle the inherent one-to-many mapping problem in response generation. Two reciprocal tasks, response generation and latent act recognition, are designed and carried out simultaneously within a shared network. Comprehensive experiments on three publicly available datasets verify the effectiveness and superiority of the proposed framework.
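As a rough illustration of the flexible attention the abstract describes, the sketch below (not the authors' code; shapes and names are assumptions) builds a joint mask in which context tokens attend bi-directionally while response tokens attend causally:

```python
# Sketch of a PLATO-style joint attention mask (illustrative only):
# context positions attend bi-directionally; response positions attend
# to the full context plus a causal window over the response prefix.
import torch

def build_joint_mask(context_len: int, response_len: int) -> torch.Tensor:
    """Return an (L, L) boolean mask; True means 'may attend'."""
    total = context_len + response_len
    mask = torch.zeros(total, total, dtype=torch.bool)
    # Bi-directional attention within the context.
    mask[:context_len, :context_len] = True
    # Response tokens see the whole context ...
    mask[context_len:, :context_len] = True
    # ... and a causal (lower-triangular) window over the response so far.
    mask[context_len:, context_len:] = torch.ones(
        response_len, response_len).tril().bool()
    return mask

print(build_joint_mask(3, 4).int())
```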
Towards Conversational Recommendation over Multi-Type Dialogs
Zeming Liu | Haifeng Wang | Zheng-Yu Niu | Hua Wu | Wanxiang Che | Ting Liu
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
We focus on the study of conversational recommendation in the context of multi-type dialogs, where the bot can proactively and naturally lead a conversation from a non-recommendation dialog (e.g., QA) to a recommendation dialog, taking into account the user's interests and feedback. To facilitate the study of this task, we create a human-to-human Chinese dialog dataset, DuRecDial (about 10k dialogs, 156k utterances), which contains multiple sequential dialogs for each pair of a recommendation seeker (user) and a recommender (bot). In each dialog, the recommender proactively leads a multi-type dialog to approach recommendation targets and then makes multiple recommendations with rich interaction behavior. This dataset allows us to systematically investigate different parts of the overall problem, e.g., how to naturally lead a dialog and how to interact with users during recommendation. Finally, we establish baseline results on DuRecDial for future studies.
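To make the session structure concrete, here is a hypothetical record layout for one seeker-recommender session with a multi-type goal path; all field names and contents are illustrative and do not reflect the released schema:

```python
# Hypothetical DuRecDial-style session record (illustrative only).
session = {
    "seeker_profile": {"accepted_movies": ["The Message"], "age_range": "18-25"},
    "dialogs": [  # multiple sequential dialogs per seeker-recommender pair
        {
            "goal_path": ["QA", "chat about movie star", "movie recommendation"],
            "utterances": [
                {"speaker": "seeker", "goal": "QA",
                 "text": "Who starred in The Message?"},
                {"speaker": "recommender", "goal": "QA",
                 "text": "Li Bingbing did. Speaking of her, have you seen Detective Dee?"},
            ],
        },
    ],
}
print(session["dialogs"][0]["goal_path"])
```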
Conversational Graph Grounded Policy Learning for Open-Domain Conversation Generation
Jun Xu | Haifeng Wang | Zheng-Yu Niu | Hua Wu | Wanxiang Che | Ting Liu
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
To address the challenge of policy learning in open-domain multi-turn conversation, we propose to represent prior information about dialog transitions as a graph and learn a graph-grounded dialog policy, aimed at fostering a more coherent and controllable dialog. To this end, we first construct a conversational graph (CG) from dialog corpora, in which vertices represent “what to say” and “how to say”, and edges represent natural transitions between a message (the last utterance in a dialog context) and its response. We then present a novel CG-grounded policy learning framework that conducts dialog flow planning by graph traversal, learning to identify a what-vertex and a how-vertex from the CG at each turn to guide response generation. In this way, we effectively leverage the CG to facilitate policy learning as follows: (1) it enables more effective long-term reward design, (2) it provides high-quality candidate actions, and (3) it gives us more control over the policy. Results on two benchmark corpora demonstrate the effectiveness of this framework.
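A toy sketch of the traversal idea follows; the graph contents and function names are assumptions for illustration, not the paper's implementation. One planning step picks a what-vertex and then a how-vertex to guide the next response:

```python
# Toy conversational graph (illustrative): "what" edges record content
# transitions; "how" maps a topic to dialog acts available at that topic.
import random

cg = {
    "what": {"travel": ["food", "weather"], "food": ["restaurant"]},
    "how": {"travel": ["ask", "inform"], "food": ["recommend"]},
}

def plan_turn(current_what: str):
    """One graph-traversal step: choose a what-vertex, then a how-vertex."""
    next_what = random.choice(cg["what"].get(current_what, [current_what]))
    next_how = random.choice(cg["how"].get(next_what, ["inform"]))
    return next_what, next_how

print(plan_turn("travel"))  # e.g. ('food', 'recommend')
```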
SKEP: Sentiment Knowledge Enhanced Pre-training for Sentiment Analysis
Hao Tian | Can Gao | Xinyan Xiao | Hao Liu | Bolei He | Hua Wu | Haifeng Wang | Feng Wu
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Recently, sentiment analysis has seen remarkable advances with the help of pre-training approaches. However, sentiment knowledge, such as sentiment words and aspect-sentiment pairs, is ignored in the process of pre-training, despite being widely used in traditional sentiment analysis approaches. In this paper, we introduce Sentiment Knowledge Enhanced Pre-training (SKEP) in order to learn a unified sentiment representation for multiple sentiment analysis tasks. With the help of automatically mined knowledge, SKEP conducts sentiment masking and constructs three sentiment knowledge prediction objectives, so as to embed sentiment information at the word, polarity, and aspect levels into the pre-trained sentiment representation. In particular, the prediction of aspect-sentiment pairs is converted into multi-label classification, aiming to capture the dependency between the words in a pair. Experiments on three kinds of sentiment tasks show that SKEP significantly outperforms strong pre-training baselines and achieves new state-of-the-art results on most of the test datasets. We release our code at https://github.com/baidu/Senta.
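A minimal sketch of the sentiment-masking step, assuming a toy mined lexicon and whitespace tokenization; this is an illustration of the idea, not the released implementation:

```python
# SKEP-style sentiment masking (illustrative): sentiment words from a
# mined lexicon are masked preferentially, so pre-training objectives
# must recover the word and its polarity.
SENTIMENT_LEXICON = {"great": "positive", "terrible": "negative"}  # toy lexicon

def sentiment_mask(tokens, mask_token="[MASK]", max_ratio=0.15):
    out, targets = list(tokens), {}
    budget = max(1, int(len(tokens) * max_ratio))  # masking budget
    for i, tok in enumerate(tokens):
        if budget and tok.lower() in SENTIMENT_LEXICON:
            targets[i] = (tok, SENTIMENT_LEXICON[tok.lower()])  # word + polarity
            out[i] = mask_token
            budget -= 1
    return out, targets

print(sentiment_mask("the food was great but service terrible".split()))
```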
Leveraging Graph to Improve Abstractive Multi-Document Summarization
Wei Li | Xinyan Xiao | Jiachen Liu | Hua Wu | Haifeng Wang | Junping Du
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Graphs that capture relations between textual units have great benefits for detecting salient information across multiple documents and generating overall coherent summaries. In this paper, we develop a neural abstractive multi-document summarization (MDS) model that leverages well-known graph representations of documents, such as similarity graphs and discourse graphs, to more effectively process multiple input documents and produce abstractive summaries. Our model utilizes graphs to encode documents in order to capture cross-document relations, which is crucial for summarizing long documents. It can also take advantage of graphs to guide the summary generation process, which is beneficial for generating coherent and concise summaries. Furthermore, pre-trained language models can be easily combined with our model, which further improves summarization performance significantly. Empirical results on the WikiSum and MultiNews datasets show that the proposed architecture brings substantial improvements over several strong baselines.
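As an illustration of one such graph representation, the snippet below builds a TF-IDF similarity graph over input paragraphs; the featurization and pruning threshold are assumptions for the sketch, not the paper's settings:

```python
# Build a similarity graph over input paragraphs (illustrative):
# nodes are paragraphs, edge weights are TF-IDF cosine similarities.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

paragraphs = [
    "The committee approved the new climate bill.",
    "Lawmakers passed climate legislation after debate.",
    "The local team won its third straight game.",
]
tfidf = TfidfVectorizer().fit_transform(paragraphs)
graph = cosine_similarity(tfidf)   # dense adjacency matrix
graph[graph < 0.2] = 0.0           # prune weak edges (threshold is a choice)
np.fill_diagonal(graph, 0.0)       # drop self-loops
print(np.round(graph, 2))
```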
Exploring Contextual Word-level Style Relevance for Unsupervised Style Transfer
Chulun Zhou | Liangyu Chen | Jiachen Liu | Xinyan Xiao | Jinsong Su | Sheng Guo | Hua Wu
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Unsupervised style transfer aims to change the style of an input sentence while preserving its original content, without using parallel training data. Current dominant approaches lack fine-grained control over the influence of the target style and are thus unable to yield desirable output sentences. In this paper, we propose a novel attentional sequence-to-sequence (Seq2seq) model that dynamically exploits the relevance of each output word to the target style for unsupervised style transfer. Specifically, we first pretrain a style classifier, from which the relevance of each input word to the original style can be quantified via layer-wise relevance propagation. We then train an attentional Seq2seq model in a denoising auto-encoding manner to reconstruct input sentences and simultaneously re-predict the previously quantified word-level style relevance. In this way, the model is endowed with the ability to automatically predict the style relevance of each output word. We then equip the decoder of this model with a neural style component that exploits the predicted word-level style relevance for better style transfer. In particular, we fine-tune this model using a carefully designed objective function involving style transfer, style relevance consistency, content preservation, and fluency modeling loss terms. Experimental results show that our proposed model achieves state-of-the-art performance in terms of both transfer accuracy and content preservation.
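A schematic sketch of the fine-tuning objective, with term names and weights as placeholders rather than the paper's actual formulation:

```python
# Weighted combination of the four loss terms named in the abstract
# (illustrative; weights and names are assumptions, not the paper's).
import torch

def total_loss(l_style, l_relevance, l_content, l_fluency,
               w=(1.0, 1.0, 1.0, 1.0)):
    """Sum of style-transfer, style-relevance-consistency,
    content-preservation and fluency losses."""
    terms = torch.stack([l_style, l_relevance, l_content, l_fluency])
    return (torch.tensor(w) * terms).sum()

loss = total_loss(torch.tensor(0.7), torch.tensor(0.2),
                  torch.tensor(0.5), torch.tensor(0.3))
print(loss)  # scalar objective used for fine-tuning
```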
Proceedings of the First Workshop on Automatic Simultaneous Translation
Hua Wu | Colin Cherry | Liang Huang | Zhongjun He | Mark Liberman | James Cross | Yang Liu
Proceedings of the First Workshop on Automatic Simultaneous Translation