Haoran Li
2020
Self-Attention Guided Copy Mechanism for Abstractive Summarization
Song Xu | Haoran Li | Peng Yuan | Youzheng Wu | Xiaodong He | Bowen Zhou
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
The copy module has been widely adopted in recent abstractive summarization models; it allows the decoder to extract words from the source directly into the summary. Generally, the encoder-decoder attention serves as the copy distribution, but how to guarantee that important words in the source are copied remains a challenge. In this work, we propose a Transformer-based model to enhance the copy mechanism. Specifically, we identify the importance of each source word by its degree centrality in a directed graph built from the self-attention layer of the Transformer, and we use the centrality of each source word to guide the copy process explicitly. Experimental results show that the self-attention graph provides useful guidance for the copy distribution. Our proposed models significantly outperform the baseline methods on the CNN/Daily Mail and Gigaword datasets.
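The abstract describes the core computation: build a directed graph over source tokens from a self-attention head, score each token by its degree centrality, and use that score to reshape the copy distribution. The snippet below is a minimal sketch of that idea for a single attention head; the `threshold` and `alpha` hyperparameters and the linear interpolation are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch: degree-centrality guidance for a copy distribution.
# Assumptions (not from the paper): `attn` is one head's source-side
# self-attention matrix (row j attends to column i), `copy_dist` is the
# encoder-decoder attention used as the copy distribution, and
# `threshold`/`alpha` are hypothetical hyperparameters.
import numpy as np

def centrality_guided_copy(attn, copy_dist, threshold=0.1, alpha=0.5):
    # Directed graph: edge j -> i if token j attends to token i strongly enough.
    adjacency = (attn > threshold).astype(np.float32)

    # In-degree centrality: how many tokens point to each source word.
    in_degree = adjacency.sum(axis=0)
    centrality = in_degree / max(in_degree.sum(), 1e-8)

    # Interpolate the copy distribution with the centrality scores so that
    # important source words receive more copy probability.
    guided = (1.0 - alpha) * copy_dist + alpha * centrality
    return guided / guided.sum()

# Toy usage with 4 source tokens.
attn = np.array([[0.1, 0.6, 0.2, 0.1],
                 [0.3, 0.1, 0.5, 0.1],
                 [0.2, 0.6, 0.1, 0.1],
                 [0.1, 0.7, 0.1, 0.1]])
copy_dist = np.full(4, 0.25)
print(centrality_guided_copy(attn, copy_dist))
```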
Emerging Cross-lingual Structure in Pretrained Language Models
Alexis Conneau | Shijie Wu | Haoran Li | Luke Zettlemoyer | Veselin Stoyanov
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
We study the problem of multilingual masked language modeling, i.e., training a single model on concatenated text from multiple languages, and present a detailed study of several factors that influence why these models are so effective for cross-lingual transfer. We show, contrary to what was previously hypothesized, that transfer is possible even when there is no shared vocabulary across the monolingual corpora and also when the text comes from very different domains. The only requirement is that there are some shared parameters in the top layers of the multilingual encoder. To better understand this result, we also show that representations from monolingual BERT models in different languages can be aligned post-hoc quite effectively, strongly suggesting that, much like for non-contextual word embeddings, there are universal latent symmetries in the learned embedding spaces. For multilingual masked language modeling, these symmetries are automatically discovered and aligned during the joint training process.
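The post-hoc alignment of monolingual representations mentioned in the abstract is, for non-contextual embeddings, typically done with an orthogonal Procrustes mapping. The sketch below illustrates that idea under the assumption that `src_vecs` and `tgt_vecs` hold representations for aligned token pairs from the two monolingual encoders; it is an illustrative stand-in, not the paper's exact procedure.

```python
# Minimal sketch: orthogonal Procrustes alignment of two embedding spaces.
# Assumptions (not from the paper): `src_vecs` and `tgt_vecs` are matrices of
# representations for hypothetically word-aligned token pairs.
import numpy as np

def procrustes_align(src_vecs, tgt_vecs):
    # Find the orthogonal W minimizing ||src_vecs @ W - tgt_vecs||_F.
    u, _, vt = np.linalg.svd(src_vecs.T @ tgt_vecs)
    return u @ vt

# Toy usage with random stand-ins for contextual vectors.
rng = np.random.default_rng(0)
src_vecs = rng.normal(size=(100, 768))
rotation = np.linalg.qr(rng.normal(size=(768, 768)))[0]   # random orthogonal map
tgt_vecs = src_vecs @ rotation                             # rotated copy of the source space
W = procrustes_align(src_vecs, tgt_vecs)
print(np.allclose(src_vecs @ W, tgt_vecs, atol=1e-6))      # alignment recovers the rotation
```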
Co-authors
- Song Xu 1
- Peng Yuan 1
- Youzheng Wu 1
- Xiaodong He 1
- Bowen Zhou 1
Venues
- ACL 2