Ruslan Salakhutdinov
2020
Politeness Transfer: A Tag and Generate Approach
Aman Madaan | Amrith Setlur | Tanmay Parekh | Barnabas Poczos | Graham Neubig | Yiming Yang | Ruslan Salakhutdinov | Alan W Black | Shrimai Prabhumoye
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
This paper introduces a new task of politeness transfer, which involves converting non-polite sentences to polite sentences while preserving the meaning. We also provide a dataset of more than 1.39 million instances automatically labeled for politeness to encourage benchmark evaluations on this new task. We design a tag and generate pipeline that identifies stylistic attributes and subsequently generates a sentence in the target style while preserving most of the source content. For politeness as well as five other transfer tasks, our model outperforms the state-of-the-art methods on automatic metrics for content preservation, with comparable or better performance on style transfer accuracy. Additionally, our model surpasses existing methods on human evaluations for grammaticality, meaning preservation, and transfer accuracy across all six style transfer tasks. The data and code are located at https://github.com/tag-and-generate.
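The two-stage idea behind the tag and generate pipeline can be illustrated with a toy rule-based sketch: stage one replaces spans carrying the source style with a placeholder tag, and stage two generates target-style text at the tagged positions. Note this is only an illustration of the pipeline's shape; the actual paper learns both stages with neural models, and the phrase table below is invented for the example.

```python
# Toy sketch of a tag-and-generate pipeline.
# The phrase mappings here are invented for illustration;
# the real system learns tagging and generation from data.
IMPOLITE_TO_POLITE = {
    "send me": "could you please send me",
    "give me": "would you mind giving me",
}

def tag(sentence):
    """Stage 1: mark style-carrying spans with a placeholder tag."""
    tagged, spans = sentence, []
    for phrase in IMPOLITE_TO_POLITE:
        if phrase in tagged:
            tagged = tagged.replace(phrase, "[TAG]")
            spans.append(phrase)
    return tagged, spans

def generate(tagged, spans):
    """Stage 2: realize target-style text at each tagged position."""
    out = tagged
    for phrase in spans:
        out = out.replace("[TAG]", IMPOLITE_TO_POLITE[phrase], 1)
    return out
```

For example, `tag("send me the report")` yields `("[TAG] the report", ["send me"])`, and generation produces "could you please send me the report"; the content outside the tagged span is preserved verbatim, which is the property the pipeline is designed around.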
Topological Sort for Sentence Ordering
Shrimai Prabhumoye | Ruslan Salakhutdinov | Alan W Black
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Sentence ordering is the task of arranging the sentences of a given text in the correct order. Recent work using deep neural networks for this task has framed it as a sequence prediction problem. In this paper, we propose a new framing of this task as a constraint solving problem and introduce a new technique to solve it. Additionally, we propose a human evaluation for this task. The results on both automatic and human metrics across four different datasets show that this new technique is better at capturing coherence in documents.
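As the title suggests, framing sentence ordering as constraint solving means recovering an order consistent with pairwise "sentence i precedes sentence j" constraints, which a topological sort handles directly. A minimal sketch using Kahn's algorithm follows; in the paper's setting the pairwise constraints would come from a learned classifier, whereas here they are assumed as input.

```python
from collections import defaultdict, deque

def topological_order(n, constraints):
    """Order n sentences given pairwise (i, j) constraints meaning
    'sentence i must precede sentence j', via Kahn's algorithm."""
    graph = defaultdict(list)
    indegree = [0] * n
    for i, j in constraints:
        graph[i].append(j)
        indegree[j] += 1
    # Start from sentences with no predecessors.
    queue = deque(k for k in range(n) if indegree[k] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in graph[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    return order  # shorter than n if the constraints contain a cycle
```

With constraints {(2, 0), (0, 1)} for three sentences, the recovered order is [2, 0, 1]; contradictory classifier predictions would surface as a cycle, which the sort detects by producing a shorter order.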
Towards Debiasing Sentence Representations
Paul Pu Liang | Irene Mengze Li | Emily Zheng | Yao Chong Lim | Ruslan Salakhutdinov | Louis-Philippe Morency
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
As natural language processing methods are increasingly deployed in real-world scenarios such as healthcare, legal systems, and social science, it becomes necessary to recognize the role they potentially play in shaping social biases and stereotypes. Previous work has revealed the presence of social biases in widely used word embeddings involving gender, race, religion, and other social constructs. While some methods were proposed to debias these word-level embeddings, there is a need to perform debiasing at the sentence-level given the recent shift towards new contextualized sentence representations such as ELMo and BERT. In this paper, we investigate the presence of social biases in sentence-level representations and propose a new method, Sent-Debias, to reduce these biases. We show that Sent-Debias is effective in removing biases, and at the same time, preserves performance on sentence-level downstream tasks such as sentiment analysis, linguistic acceptability, and natural language understanding. We hope that our work will inspire future research on characterizing and removing social biases from widely adopted sentence representations for fairer NLP.
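The core operation behind this style of debiasing can be sketched as subspace projection: given directions spanning a bias subspace, remove each sentence embedding's component along that subspace. This is a minimal sketch only, assuming the bias directions have already been estimated (Sent-Debias estimates them from contextualized sentence templates); the function names are illustrative, not the paper's API.

```python
import numpy as np

def remove_bias_subspace(embeddings, bias_directions):
    """Project sentence embeddings off a bias subspace.

    embeddings:      (num_sentences, dim) array
    bias_directions: (num_directions, dim) array spanning the bias subspace
    """
    # Orthonormalize the subspace so the projection is well-defined
    # even if the supplied directions are not orthogonal.
    V, _ = np.linalg.qr(np.asarray(bias_directions).T)  # (dim, num_directions)
    projection = embeddings @ V @ V.T  # component lying in the bias subspace
    return embeddings - projection
```

After this step each debiased embedding is orthogonal to every bias direction, while the components outside the subspace, which carry the task-relevant content, are left untouched.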
Co-authors
- Alan W. Black 2
- Shrimai Prabhumoye 2
- Aman Madaan 1
- Amrith Setlur 1
- Tanmay Parekh 1
Venues
- ACL 3