Peter Szolovits
2020
Hooks in the Headline: Learning to Generate Headlines with Controlled Styles
Di Jin | Zhijing Jin | Joey Tianyi Zhou | Lisa Orii | Peter Szolovits
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Current summarization systems produce only plain, factual headlines, which fall short of the practical need to make articles visible and memorable. We propose a new task, Stylistic Headline Generation (SHG), to enrich headlines with three style options (humor, romance, and clickbait) and thus attract more readers. Given no style-specific article-headline pairs (only a standard headline summarization dataset and mono-style corpora), our method, TitleStylist, generates stylistic headlines by combining the summarization and reconstruction tasks into a multitasking framework. We also introduce a novel parameter sharing scheme to further disentangle style from text. Through both automatic and human evaluation, we demonstrate that TitleStylist can generate relevant, fluent headlines with three target styles: humor, romance, and clickbait. The attraction score of headlines generated by our model exceeds that of the state-of-the-art summarization model by 9.68%, and even outperforms human-written references.
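A minimal sketch of the multitasking idea the abstract describes, not the authors' released code: one shared sequence-to-sequence model is updated with two losses at once, supervised summarization on (article, headline) pairs and denoising reconstruction on mono-style sentences. The model, dimensions, noising scheme, and all variable names below are illustrative assumptions.

# Sketch of joint summarization + reconstruction training (illustrative,
# not TitleStylist's actual architecture or parameter sharing scheme).
import torch
import torch.nn as nn

VOCAB, EMB, HID, PAD = 10000, 128, 256, 0

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB, padding_idx=PAD)
        self.encoder = nn.GRU(EMB, HID, batch_first=True)
        self.decoder = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, src, tgt_in):
        _, h = self.encoder(self.embed(src))          # encode source tokens
        dec, _ = self.decoder(self.embed(tgt_in), h)  # teacher-forced decoding
        return self.out(dec)                          # logits over vocabulary

model = Seq2Seq()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss(ignore_index=PAD)

def seq_loss(src, tgt):
    # Shift the target right for teacher forcing; predict the full target.
    logits = model(src, tgt[:, :-1])
    return ce(logits.reshape(-1, VOCAB), tgt[:, 1:].reshape(-1))

# Toy batches standing in for the two data sources.
article = torch.randint(1, VOCAB, (4, 50))     # paired summarization data
headline = torch.randint(1, VOCAB, (4, 12))
style_sent = torch.randint(1, VOCAB, (4, 12))  # mono-style corpus sentence
noised = style_sent.clone()
# Crude denoising noise: mask ~15% of tokens with the PAD embedding.
noised[torch.rand_like(noised, dtype=torch.float) < 0.15] = PAD

# One combined update: summarize paired data, reconstruct noised style text.
loss = seq_loss(article, headline) + seq_loss(noised, style_sent)
opt.zero_grad()
loss.backward()
opt.step()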
Entity-Enriched Neural Models for Clinical Question Answering
Bhanu Pratap Singh Rawat | Wei-Hung Weng | So Yeon Min | Preethi Raghavan | Peter Szolovits
Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing
We explore state-of-the-art neural models for question answering on electronic medical records and improve their ability to generalize to previously unseen (paraphrased) questions at test time. We enable this by learning to predict logical forms as an auxiliary task alongside the main task of answer span detection; the predicted logical forms also serve as a rationale for the answer. We further incorporate medical entity information into these models via the ERNIE architecture. Training on the large-scale emrQA dataset, we observe that our multi-task entity-enriched models generalize to paraphrased questions ~5% better than the baseline BERT model.
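A minimal sketch of the multi-task objective described above, assuming a generic Transformer encoder in place of BERT/ERNIE: a shared encoder feeds one head that scores answer-span start and end positions and a second head that classifies a logical-form template as the auxiliary task. Dimensions, the number of templates, and all names are illustrative, not from the released code.

# Sketch of span detection + auxiliary logical-form prediction
# (illustrative stand-in for the paper's BERT/ERNIE-based models).
import torch
import torch.nn as nn

VOCAB, HID, N_LOGICAL_FORMS = 30000, 256, 50  # assumed toy sizes

class MultiTaskQA(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HID)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=HID, nhead=4, batch_first=True),
            num_layers=2)
        self.span_head = nn.Linear(HID, 2)              # start/end logit per token
        self.lf_head = nn.Linear(HID, N_LOGICAL_FORMS)  # logical-form template

    def forward(self, tokens):
        h = self.encoder(self.embed(tokens))        # (batch, seq, HID)
        start, end = self.span_head(h).unbind(-1)   # token-level span scores
        lf = self.lf_head(h[:, 0])                  # pool first token for LF
        return start, end, lf

model = MultiTaskQA()
ce = nn.CrossEntropyLoss()

tokens = torch.randint(0, VOCAB, (2, 128))      # toy question + record input
start_gold = torch.tensor([5, 17])              # gold answer-span positions
end_gold = torch.tensor([8, 20])
lf_gold = torch.tensor([3, 12])                 # gold logical-form templates

start, end, lf = model(tokens)
# Joint objective: span detection plus the auxiliary logical-form loss.
loss = ce(start, start_gold) + ce(end, end_gold) + ce(lf, lf_gold)
loss.backward()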