Lawrence Carin
2020
Improving Adversarial Text Generation by Modeling the Distant Future
Ruiyi Zhang | Changyou Chen | Zhe Gan | Wenlin Wang | Dinghan Shen | Guoyin Wang | Zheng Wen | Lawrence Carin
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Auto-regressive text generation models usually focus on local fluency, which can result in semantically inconsistent long-form text. Further, automatically generating words with similar semantics is challenging, and hand-crafted linguistic rules are difficult to apply. We consider a text planning scheme and present a model-based imitation-learning approach to alleviate these issues. Specifically, we propose a novel guider network that focuses on the generative process over a longer horizon, assisting next-word prediction and providing intermediate rewards for generator optimization. Extensive experiments demonstrate that the proposed method leads to improved performance.
Improving Disentangled Text Representation Learning with Information-Theoretic Guidance
Pengyu Cheng | Martin Renqiang Min | Dinghan Shen | Christopher Malon | Yizhe Zhang | Yitong Li | Lawrence Carin
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Learning disentangled representations of natural language is essential for many NLP tasks, e.g., conditional text generation, style transfer, personalized dialogue systems, etc. Similar problems have been studied extensively for other forms of data, such as images and videos. However, the discrete nature of natural language makes the disentangling of textual representations more challenging (e.g., the manipulation over the data space cannot be easily achieved). Inspired by information theory, we propose a novel method that effectively manifests disentangled representations of text, without any supervision on semantics. A new mutual information upper bound is derived and leveraged to measure dependence between style and content. By minimizing this upper bound, the proposed method induces style and content embeddings into two independent low-dimensional spaces. Experiments on both conditional text generation and text-style transfer demonstrate the high quality of our disentangled representation in terms of content and style preservation.