Kai Wang
2020
Relational Graph Attention Network for Aspect-based Sentiment Analysis
Kai Wang | Weizhou Shen | Yunyi Yang | Xiaojun Quan | Rui Wang
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Aspect-based sentiment analysis aims to determine the sentiment polarity towards a specific aspect in online reviews. Most recent efforts adopt attention-based neural network models to implicitly connect aspects with opinion words. However, due to the complexity of language and the existence of multiple aspects in a single sentence, these models often confuse the connections. In this paper, we address this problem by means of effective encoding of syntax information. Firstly, we define a unified aspect-oriented dependency tree structure rooted at a target aspect by reshaping and pruning an ordinary dependency parse tree. Then, we propose a relational graph attention network (R-GAT) to encode the new tree structure for sentiment prediction. Extensive experiments are conducted on the SemEval 2014 and Twitter datasets, and the experimental results confirm that the connections between aspects and opinion words can be better established with our approach, and the performance of the graph attention network (GAT) is significantly improved as a consequence.
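The core idea above is that attention over the reshaped dependency tree should be conditioned not only on neighbor node features but also on the label of the dependency relation connecting each neighbor to the aspect. A minimal numpy sketch of such relational attention follows; the dimensions, random features, and the particular scoring form (bilinear node term plus a relation-embedding term) are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d = 4  # hidden size (toy)

# features of the aspect node and its 3 dependency neighbors
h_aspect = rng.standard_normal(d)
neighbors = rng.standard_normal((3, d))

# one embedding per dependency relation type on each edge (e.g. nsubj, amod, dobj)
rel_emb = rng.standard_normal((3, d))
W = rng.standard_normal((d, d))

# relational attention: score each neighbor by its features AND its relation label
scores = neighbors @ W @ h_aspect + (rel_emb * h_aspect).sum(axis=1)
alpha = softmax(scores)      # attention distribution over neighbors
h_new = alpha @ neighbors    # relation-aware aggregated aspect representation
```

Ignoring the relation term recovers plain GAT-style attention; the `rel_emb` contribution is what lets the model weight, say, an `amod` edge to an opinion word differently from an `nsubj` edge.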
Low-Resource Generation of Multi-hop Reasoning Questions
Jianxing Yu | Wei Liu | Shuang Qiu | Qinliang Su | Kai Wang | Xiaojun Quan | Jian Yin
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
This paper focuses on generating multi-hop reasoning questions from raw text in low-resource settings. Such questions must be syntactically valid and logically correlated with their answers through deduction over multiple relations across several sentences in the text. Specifically, we first build a multi-hop generation model and guide it to satisfy logical rationality using the reasoning chain extracted from a given text. Since the labeled data is limited and insufficient for training, we propose to learn the model with the help of large amounts of unlabeled data, which are much easier to obtain. Such data contain rich expressive forms of questions with structural patterns in syntax and semantics. These patterns can be estimated by a neural hidden semi-Markov model using latent variables. With the latent patterns as a prior, we can regularize the generation model and produce better results. Experimental results on the HotpotQA dataset demonstrate the effectiveness of our model. Moreover, we apply the generated results to the task of machine reading comprehension and achieve significant performance improvements.
Multi-Domain Dialogue Acts and Response Co-Generation
Kai Wang | Junfeng Tian | Rui Wang | Xiaojun Quan | Jianxing Yu
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Generating fluent and informative responses is of critical importance for task-oriented dialogue systems. Existing pipeline approaches generally predict multiple dialogue acts first and use them to assist response generation. There are at least two shortcomings with such approaches. First, the inherent structures of multi-domain dialogue acts are neglected. Second, the semantic associations between acts and responses are not taken into account for response generation. To address these issues, we propose a neural co-generation model that generates dialogue acts and responses concurrently. Unlike those pipeline approaches, our act generation module preserves the semantic structures of multi-domain dialogue acts and our response generation module dynamically attends to different acts as needed. We train the two modules jointly using an uncertainty loss to adjust their task weights adaptively. Extensive experiments are conducted on the large-scale MultiWOZ dataset and the results show that our model achieves very favorable improvement over several state-of-the-art models in both automatic and human evaluations.
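The joint training described above balances the act-generation and response-generation objectives with an uncertainty loss rather than fixed hand-tuned weights. A minimal sketch of one common formulation of such a loss (homoscedastic-uncertainty weighting, where each task's weight is derived from a learned log-variance) is below; whether the paper uses exactly this form is an assumption, and the function name and toy loss values are illustrative:

```python
import numpy as np

def uncertainty_weighted_loss(losses, log_vars):
    """Combine per-task losses with learned uncertainty weights:
    L = sum_i exp(-s_i) * L_i + s_i, where s_i = log(sigma_i^2).
    Larger learned s_i down-weights task i while the +s_i term
    penalizes claiming unbounded uncertainty."""
    losses = np.asarray(losses, dtype=float)
    log_vars = np.asarray(log_vars, dtype=float)
    return float(np.sum(np.exp(-log_vars) * losses + log_vars))

# two tasks: dialogue-act generation and response generation (toy values)
total = uncertainty_weighted_loss([2.0, 0.5], [0.0, 0.0])
# with s_i = 0, exp(-s_i) = 1, so this reduces to the plain sum of the losses
```

In practice the `log_vars` would be trainable parameters updated jointly with the two generation modules, so the relative task weights adapt during training instead of being fixed hyperparameters.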
Co-authors
- Xiaojun Quan 3
- Rui Wang 2
- Jianxing Yu 2
- Weizhou Shen 1
- Yunyi Yang 1
Venues
- ACL 3