Zhe Gan
2020
Improving Adversarial Text Generation by Modeling the Distant Future
Ruiyi Zhang | Changyou Chen | Zhe Gan | Wenlin Wang | Dinghan Shen | Guoyin Wang | Zheng Wen | Lawrence Carin
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Auto-regressive text generation models usually focus on local fluency, which can lead to semantically inconsistent long-form text. Further, automatically generating words with similar semantics is challenging, and hand-crafted linguistic rules are difficult to apply. We consider a text-planning scheme and present a model-based imitation-learning approach to alleviate these issues. Specifically, we propose a novel guider network that focuses on the generative process over a longer horizon, which can assist next-word prediction and provide intermediate rewards for generator optimization. Extensive experiments demonstrate that the proposed method leads to improved performance.
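One way to read the guider's role is as a source of dense, per-step rewards: at each generation step it predicts a representation of the text's future, and the reward measures how well the realized continuation matches that prediction. The sketch below illustrates this idea only; the function names and the use of cosine similarity over sentence vectors are illustrative assumptions, not the paper's exact formulation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def intermediate_rewards(predicted_futures, actual_futures):
    """One reward per generation step: how closely the guider's
    predicted future representation matches the realized one."""
    return [cosine(p, a) for p, a in zip(predicted_futures, actual_futures)]

# Toy example: two generation steps, 3-dim sentence representations.
pred = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
actual = [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
rewards = intermediate_rewards(pred, actual)  # -> [1.0, 0.0]
```

Dense per-step rewards of this kind are what let the generator be optimized without waiting for a single sparse signal at the end of the sequence.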
Discourse-Aware Neural Extractive Text Summarization
Jiacheng Xu | Zhe Gan | Yu Cheng | Jingjing Liu
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Recently BERT has been adopted for document encoding in state-of-the-art text summarization models. However, sentence-based extractive models often result in redundant or uninformative phrases in the extracted summaries. Also, long-range dependencies throughout a document are not well captured by BERT, which is pre-trained on sentence pairs instead of documents. To address these issues, we present a discourse-aware neural summarization model, DiscoBERT. DiscoBERT extracts sub-sentential discourse units (instead of sentences) as candidates for extractive selection on a finer granularity. To capture the long-range dependencies among discourse units, structural discourse graphs are constructed based on RST trees and coreference mentions, and encoded with Graph Convolutional Networks. Experiments show that the proposed model outperforms state-of-the-art BERT-base models by a significant margin on popular summarization benchmarks.
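The graph-encoding step in this abstract is a standard graph convolution over the discourse graph: each discourse unit's representation is updated by averaging over its neighbors (with self-loops and symmetric degree normalization) and applying a learned projection. A minimal pure-Python sketch of one such layer, with illustrative names and a ReLU nonlinearity assumed:

```python
import math

def matmul(A, B):
    """Naive matrix product of two lists-of-lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W),
    where A is the adjacency matrix over discourse units."""
    n = len(A)
    # Add self-loops so each unit keeps its own features.
    A_hat = [[A[i][j] + (1 if i == j else 0) for j in range(n)]
             for i in range(n)]
    deg = [sum(row) for row in A_hat]
    # Symmetric degree normalization.
    A_norm = [[A_hat[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
              for i in range(n)]
    Z = matmul(matmul(A_norm, H), W)
    return [[max(0.0, z) for z in row] for row in Z]

# Two connected discourse units, identity features and weights:
out = gcn_layer([[0, 1], [1, 0]],
                [[1.0, 0.0], [0.0, 1.0]],
                [[1.0, 0.0], [0.0, 1.0]])  # each unit averages itself and its neighbor
```

Stacking a few such layers lets information flow along RST and coreference edges, which is how long-range dependencies between distant discourse units enter the candidate representations.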
Distilling Knowledge Learned in BERT for Text Generation
Yen-Chun Chen | Zhe Gan | Yu Cheng | Jingzhou Liu | Jingjing Liu
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Large-scale pre-trained language models such as BERT have achieved great success in language understanding tasks. However, it remains an open question how to utilize BERT for language generation. In this paper, we present a novel approach, Conditional Masked Language Modeling (C-MLM), to enable the fine-tuning of BERT on target generation tasks. The fine-tuned BERT (teacher) is exploited as extra supervision to improve conventional Seq2Seq models (student) for better text generation performance. By leveraging BERT's bidirectional nature, distilling knowledge learned in BERT can encourage auto-regressive Seq2Seq models to plan ahead, imposing global sequence-level supervision for coherent text generation. Experiments show that the proposed approach significantly outperforms strong Transformer baselines on multiple language generation tasks such as machine translation and text summarization. Our proposed model also achieves new state of the art on IWSLT German-English and English-Vietnamese MT datasets.
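The teacher-student setup described here is a form of knowledge distillation: the student's per-token loss interpolates the usual hard cross-entropy on the gold token with a soft cross-entropy against the teacher's predicted distribution. The sketch below shows only this generic distillation objective; the function names and the interpolation weight `alpha` are illustrative assumptions, not the paper's exact loss.

```python
import math

def softmax(logits):
    """Convert raw logits to a probability distribution."""
    exps = [math.exp(l) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(student_logits, teacher_probs, gold_index, alpha=0.5):
    """Interpolate hard cross-entropy on the gold token with soft
    cross-entropy against the teacher's distribution over the vocabulary."""
    p = softmax(student_logits)
    hard = -math.log(p[gold_index])
    soft = -sum(t * math.log(s) for t, s in zip(teacher_probs, p))
    return alpha * hard + (1 - alpha) * soft

# Toy 2-token vocabulary: uniform student, uniform teacher, gold token 0.
loss = distill_loss([0.0, 0.0], [0.5, 0.5], gold_index=0)
```

Because the C-MLM teacher conditions on both left and right context, its soft targets carry information about the rest of the sequence, which is what gives the left-to-right student a sequence-level planning signal.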