Xiaolong Li
2020
Slot-consistent NLG for Task-oriented Dialogue Systems with Iterative Rectification Network
Yangming Li | Kaisheng Yao | Libo Qin | Wanxiang Che | Xiaolong Li | Ting Liu
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Data-driven approaches using neural networks have achieved promising performance in natural language generation (NLG). However, neural generators are prone to making mistakes, e.g., neglecting an input slot value or generating a redundant slot value. Prior works refer to this as the hallucination phenomenon. In this paper, we study slot consistency for building reliable NLG systems, in which all slot values of the input dialogue act (DA) are properly generated in output sentences. We propose the Iterative Rectification Network (IRN) for improving general NLG systems to produce both correct and fluent responses. It applies a bootstrapping algorithm to sample training candidates and uses reinforcement learning to incorporate a discrete reward related to slot inconsistency into training. Comprehensive studies conducted on multiple benchmark datasets show that the proposed methods significantly reduce the slot error rate (ERR) for all strong baselines. Human evaluations have also confirmed its effectiveness.
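For context, the slot error rate (ERR) mentioned above counts how many required slot values are missing from a generated response and how many extra slot values are hallucinated into it, relative to the total number of slots in the input DA. Below is a minimal illustrative sketch of that check in Python; the function name, the exact string-matching rule, and the example values are assumptions for illustration, not the paper's implementation.

```python
def slot_error_rate(da_slot_values, generated, candidate_values):
    """Slot error rate: (missing + redundant) / total input slots.

    da_slot_values: slot values required by the input dialogue act.
    generated: the system's output sentence.
    candidate_values: all slot values the generator could mention,
        used to detect redundant (hallucinated) values.
    Note: simple substring matching is an assumption made here for
    illustration only.
    """
    text = generated.lower()
    missing = [v for v in da_slot_values if v.lower() not in text]
    redundant = [v for v in candidate_values
                 if v not in da_slot_values and v.lower() in text]
    return (len(missing) + len(redundant)) / max(len(da_slot_values), 1)

# Example: one required value is dropped, one extra value appears.
err = slot_error_rate(
    da_slot_values=["Loch Fyne", "cheap"],
    generated="Loch Fyne is an expensive restaurant.",
    candidate_values=["Loch Fyne", "cheap", "expensive"],
)
print(err)  # 1.0: "cheap" missing, "expensive" redundant (2 errors / 2 slots)
```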
Handling Rare Entities for Neural Sequence Labeling
Yangming Li | Han Li | Kaisheng Yao | Xiaolong Li
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
One great challenge in neural sequence labeling is the data sparsity problem for rare entity words and phrases. Most test-set entities appear only a few times or are even unseen in the training corpus, yielding a large number of out-of-vocabulary (OOV) and low-frequency (LF) entities during evaluation. In this work, we propose approaches to address this problem. For OOV entities, we introduce local context reconstruction to implicitly incorporate contextual information into their representations. For LF entities, we present delexicalized entity identification to explicitly extract their frequency-agnostic and entity-type-specific representations. Extensive experiments on multiple benchmark datasets show that our model significantly outperforms all previous methods and achieves new state-of-the-art results. Notably, our methods surpass models fine-tuned on pre-trained language models, without using external resources.
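To make the delexicalization idea concrete: rare entity mentions are replaced with entity-type placeholders, so all entities of a given type share one frequency-agnostic symbol. The following is a minimal sketch under assumed inputs (a token list plus entity spans); the function name, span format, and placeholder tokens are hypothetical, not the authors' code.

```python
def delexicalize(tokens, entity_spans):
    """Replace entity token spans with type placeholders so rare
    entities share a single frequency-agnostic symbol per type.

    entity_spans: list of (start, end, type) spans over `tokens`,
        end exclusive, e.g. from annotations or a gazetteer
        (an assumed input format for this sketch).
    """
    out = list(tokens)
    # Replace from right to left so earlier indices stay valid.
    for start, end, ent_type in sorted(entity_spans, reverse=True):
        out[start:end] = [f"<{ent_type}>"]
    return out

tokens = ["Yangming", "Li", "works", "on", "sequence", "labeling"]
spans = [(0, 2, "PER")]
print(delexicalize(tokens, spans))
# ['<PER>', 'works', 'on', 'sequence', 'labeling']
```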
Co-authors
- Yangming Li 2
- Kaisheng Yao 2
- Libo Qin 1
- Wanxiang Che 1
- Ting Liu 1
- Han Li 1
Venues
- ACL 2