Jiajun CHEN
2020
Dialogue State Tracking with Explicit Slot Connection Modeling
Yawen Ouyang | Moxin Chen | Xinyu Dai | Yinggong Zhao | Shujian Huang | Jiajun CHEN
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Recently proposed approaches have made promising progress in dialogue state tracking (DST). However, in multi-domain scenarios, users frequently use ellipsis and reference to express values that have already been mentioned by slots from other domains. To handle these phenomena, we propose a Dialogue State Tracking with Slot Connections (DST-SC) model to explicitly consider slot correlations across different domains. Given a target slot, the slot connecting mechanism in DST-SC can infer its source slot and copy the source slot value directly, thus significantly reducing the difficulty of learning and reasoning. Experimental results verify the benefits of explicit slot connection modeling, and our model achieves state-of-the-art performance on the MultiWOZ 2.0 and MultiWOZ 2.1 datasets.
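The core idea of the slot connecting mechanism — copying a value from an inferred source slot in another domain into the target slot — can be illustrated with a minimal sketch (the function and slot names below are illustrative assumptions, not the authors' code):

```python
def update_state(state, target_slot, source_slot=None, generated_value=None):
    """If a source slot is inferred (e.g. by a slot connecting mechanism),
    copy its value directly; otherwise fall back to a generated value."""
    if source_slot is not None and source_slot in state:
        state[target_slot] = state[source_slot]  # direct cross-domain copy
    elif generated_value is not None:
        state[target_slot] = generated_value
    return state

state = {"hotel-area": "east"}
# User: "Find a restaurant in the same area as the hotel."
state = update_state(state, "restaurant-area", source_slot="hotel-area")
# state["restaurant-area"] is now "east", copied from the hotel domain
```

The copy avoids forcing the model to re-generate a value it has already resolved in another domain.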
Explicit Semantic Decomposition for Definition Generation
Jiahuan Li | Yu Bao | Shujian Huang | Xinyu Dai | Jiajun CHEN
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Definition generation, which aims to automatically generate dictionary definitions for words, has recently been proposed to assist the construction of dictionaries and help people understand unfamiliar texts. However, previous works hardly consider explicitly modeling the “components” of definitions, leading to under-specific generation results. In this paper, we propose ESD, namely Explicit Semantic Decomposition for definition generation, which explicitly decomposes the meaning of words into semantic components and models them with discrete latent variables for definition generation. Experimental results show that ESD achieves top results on the WordNet and Oxford benchmarks, outperforming strong previous baselines.
A Reinforced Generation of Adversarial Examples for Neural Machine Translation
Wei Zou | Shujian Huang | Jun Xie | Xinyu Dai | Jiajun CHEN
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Neural machine translation systems tend to fail on less decent inputs despite their significant efficacy, which may significantly harm the credibility of these systems—fathoming how and when neural-based systems fail in such cases is critical for industrial maintenance. Instead of collecting and analyzing bad cases using limited handcrafted error features, here we investigate this issue by generating adversarial examples via a new paradigm based on reinforcement learning. Our paradigm can expose pitfalls for a given performance metric, e.g., BLEU, and can target any given neural machine translation architecture. We conduct adversarial attack experiments on two mainstream neural machine translation architectures, RNN-search and Transformer. The results show that our method efficiently produces stable attacks with meaning-preserving adversarial examples. We also present a qualitative and quantitative analysis of the attack's preference pattern, demonstrating its capability of pitfall exposure.
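The metric-driven attack idea — searching for input edits that degrade a target score such as BLEU — can be sketched in a few lines. This is a toy greedy search standing in for the learned RL agent; the victim score function, candidate list, and all names below are illustrative assumptions, not the paper's method:

```python
import random

def attack(src_tokens, candidates, score_fn, steps=20, seed=0):
    """Greedy sketch of metric-driven adversarial search: propose a
    one-token substitution and keep it if it lowers the victim system's
    score. A real RL agent would learn which edits to make; random
    proposals stand in here."""
    rng = random.Random(seed)
    best = list(src_tokens)
    best_score = score_fn(best)
    for _ in range(steps):
        i = rng.randrange(len(best))
        cand = list(best)
        cand[i] = rng.choice(candidates)
        s = score_fn(cand)
        if s < best_score:  # a lower metric means a more damaging input
            best, best_score = cand, s
    return best, best_score

# Toy victim: "translates" only tokens it knows; score counts successes.
known = {"the", "cat", "sat"}
score = lambda toks: sum(t in known for t in toks)
adv, s = attack(["the", "cat", "sat"], ["teh", "kat", "satt"], score)
```

In the actual paradigm the proposals would also be constrained to preserve the source meaning, which this toy search does not model.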