Ryan McDonald
2020
On Faithfulness and Factuality in Abstractive Summarization
Joshua Maynez | Shashi Narayan | Bernd Bohnet | Ryan McDonald
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
It is well known that the standard likelihood training and approximate decoding objectives in neural text generation models lead to less human-like responses for open-ended tasks such as language modeling and story generation. In this paper we analyze the limitations of these models for abstractive document summarization and find that they are highly prone to hallucinate content that is unfaithful to the input document. We conducted a large-scale human evaluation of several neural abstractive summarization systems to better understand the types of hallucinations they produce. Our human annotators found substantial amounts of hallucinated content in all model-generated summaries. However, our analysis does show that pretrained models are better summarizers not only in terms of raw metrics, i.e., ROUGE, but also in generating faithful and factual summaries as evaluated by humans. Furthermore, we show that textual entailment measures correlate better with faithfulness than standard metrics, potentially leading the way to automatic evaluation metrics as well as training and decoding criteria.
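The abstract contrasts standard overlap metrics with faithfulness to the input document. A crude, commonly used proxy for hallucinated content is the fraction of summary n-grams that never appear in the source document. The sketch below is illustrative only (the function name and tokenization are assumptions, not the paper's method, which relies on human annotation and trained entailment models):

```python
def ngrams(tokens, n):
    """Set of n-grams (as tuples) in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def novel_ngram_rate(document, summary, n=2):
    """Fraction of summary n-grams absent from the document.

    A high rate suggests the summary contains content not grounded
    in the input -- a rough, lexical proxy for hallucination.
    This is a hypothetical helper for illustration, not the
    evaluation protocol used in the paper.
    """
    doc_ngrams = ngrams(document.lower().split(), n)
    sum_ngrams = ngrams(summary.lower().split(), n)
    if not sum_ngrams:
        return 0.0
    return len(sum_ngrams - doc_ngrams) / len(sum_ngrams)
```

Such lexical proxies miss paraphrases, which is one reason the paper finds textual entailment measures (which compare meaning, not surface overlap) correlate better with human faithfulness judgments.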
BioMRC: A Dataset for Biomedical Machine Reading Comprehension
Dimitris Pappas | Petros Stavropoulos | Ion Androutsopoulos | Ryan McDonald
Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing
We introduce BIOMRC, a large-scale cloze-style biomedical MRC dataset. Care was taken to reduce noise, compared to the previous BIOREAD dataset of Pappas et al. (2018). Experiments show that simple heuristics do not perform well on the new dataset and that two neural MRC models that had been tested on BIOREAD perform much better on BIOMRC, indicating that the new dataset is indeed less noisy, or at least that its task is more feasible. Non-expert human performance is also higher on the new dataset compared to BIOREAD, and biomedical experts perform even better. We also introduce a new BERT-based MRC model, the best version of which substantially outperforms all other methods tested, reaching or surpassing the accuracy of biomedical experts in some experiments. We make the new dataset available in three different sizes, also releasing our code and providing a leaderboard.
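In a cloze-style MRC dataset, a passage mentions masked biomedical entities and the system must fill a blanked-out entity in a question. One of the simple heuristics the abstract alludes to is answering with the candidate entity that appears most often in the passage. A minimal sketch, assuming a tokenized passage with `@entityN`-style placeholders (the function name and data format here are illustrative assumptions, not the exact BIOMRC schema):

```python
from collections import Counter

def most_frequent_entity_baseline(passage_tokens, candidates):
    """Heuristic cloze-MRC baseline: predict the candidate entity
    that occurs most frequently in the passage.

    passage_tokens: list of tokens, with entities as placeholder
                    strings such as "@entity1".
    candidates:     list of candidate entity placeholders.
    """
    candidate_set = set(candidates)
    counts = Counter(t for t in passage_tokens if t in candidate_set)
    if not counts:
        return candidates[0]  # fall back to the first candidate
    return counts.most_common(1)[0][0]
```

That such frequency heuristics score poorly on BIOMRC (while neural readers do much better) is part of the evidence that the dataset requires genuine comprehension rather than surface statistics.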