Defining and Evaluating Fair Natural Language Generation

Catherine Yeo, Alyssa Chen


Abstract
Our work focuses on the biases that emerge in the natural language generation (NLG) task of sentence completion. In this paper, we introduce a mathematical framework of fairness for NLG followed by an evaluation of gender biases in two state-of-the-art language models. Our analysis provides a theoretical formulation for biases in NLG and empirical evidence that existing language generation models embed gender bias.
Anthology ID:
2020.winlp-1.27
Volume:
Proceedings of the Fourth Widening Natural Language Processing Workshop
Month:
July
Year:
2020
Address:
Seattle, USA
Venues:
ACL | WS | WiNLP
Publisher:
Association for Computational Linguistics
Pages:
107–109
