Ying Li
2020
Diverse and Informative Dialogue Generation with Context-Specific Commonsense Knowledge Awareness
Sixing Wu
|
Ying Li
|
Dawei Zhang
|
Yang Zhou
|
Zhonghai Wu
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Generative dialogue systems tend to produce generic responses, which often leads to boring conversations. To alleviate this issue, recent studies proposed retrieving and introducing knowledge facts from knowledge graphs. While this paradigm works to a certain extent, it usually retrieves knowledge facts based only on the entity word itself, without considering the specific dialogue context. Thus, the introduction of context-irrelevant knowledge facts can degrade the quality of the generated responses. To this end, this paper proposes a novel commonsense knowledge-aware dialogue generation model, ConKADI. We design a Felicitous Fact mechanism to help the model focus on the knowledge facts that are highly relevant to the context; furthermore, two techniques, Context-Knowledge Fusion and Flexible Mode Fusion, are proposed to facilitate the integration of knowledge into ConKADI. We collect and build a large-scale Chinese dataset aligned with commonsense knowledge for dialogue generation. Extensive evaluations over both an open-released English dataset and our Chinese dataset demonstrate that our approach, ConKADI, outperforms the state-of-the-art approach, CCM, in most experiments.
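The core idea of focusing on context-relevant knowledge facts can be sketched as a context-conditioned attention over the embeddings of retrieved facts. The sketch below is a simplified illustration, not the paper's exact formulation (ConKADI's Felicitous Fact mechanism additionally uses the target response as posterior supervision during training); the function name and shapes are illustrative assumptions.

```python
import numpy as np

def felicitous_fact_weights(context_vec, fact_matrix):
    """Score each retrieved knowledge fact against the dialogue-context
    vector and normalize with a softmax, so that facts relevant to this
    specific context receive high weight. Simplified sketch only.

    context_vec: (d,) encoding of the dialogue context
    fact_matrix: (num_facts, d) embeddings of retrieved knowledge facts
    """
    scores = fact_matrix @ context_vec   # dot-product relevance, (num_facts,)
    scores -= scores.max()               # shift for numerical stability
    weights = np.exp(scores)
    return weights / weights.sum()       # softmax over facts

rng = np.random.default_rng(0)
context = rng.normal(size=8)
facts = rng.normal(size=(5, 8))
w = felicitous_fact_weights(context, facts)
```

The resulting distribution can then be used to form a weighted knowledge summary (e.g., `w @ facts`) that is fused with the decoder state, which is the role the Context-Knowledge Fusion step plays in the full model.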
Neural-DINF: A Neural Network based Framework for Measuring Document Influence
Jie Tan
|
Changlin Yang
|
Ying Li
|
Siliang Tang
|
Chen Huang
|
Yueting Zhuang
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Measuring the scholarly impact of a document without citations is an important and challenging problem. Existing approaches such as the Document Influence Model (DIM) are based on dynamic topic models, which consider only word-frequency change. In this paper, we use both frequency changes and word semantic shifts to measure document influence by developing a neural network framework. Our model has three steps. First, we train word embeddings for different time periods. Next, we propose an unsupervised method to align the vectors across time periods. Finally, we compute the influence value of each document. Our experimental results show that our model outperforms DIM.
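Because embeddings trained independently on different time periods live in arbitrarily rotated spaces, the alignment step is essential before semantic shifts can be compared. A common way to do this is orthogonal Procrustes alignment; the sketch below uses that technique as an illustrative stand-in, not as the authors' exact unsupervised method.

```python
import numpy as np

def procrustes_align(X_old, X_new):
    """Find the orthogonal matrix W minimizing ||X_old @ W - X_new||_F,
    so embeddings from an earlier period can be mapped into the later
    period's space. Illustrative stand-in for the paper's alignment step.

    X_old, X_new: (vocab_size, dim) embedding matrices for shared words.
    """
    U, _, Vt = np.linalg.svd(X_old.T @ X_new)
    return U @ Vt  # closed-form orthogonal Procrustes solution

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 16))           # period-1 embeddings
Q, _ = np.linalg.qr(rng.normal(size=(16, 16)))
Y = X @ Q                                # period-2 embeddings (rotated copy)
W = procrustes_align(X, Y)               # recovers the rotation Q
```

After alignment, the semantic shift of a word can be measured as the (cosine) distance between its mapped old vector and its new vector, which is the signal this framework combines with frequency change.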