Transformer-based Context-aware Sarcasm Detection in Conversation Threads from Social Media
Xiangjue Dong, Changmao Li, Jinho D. Choi
Abstract
We present a transformer-based sarcasm detection model that accounts for the context of the entire conversation thread to make more robust predictions. Our model uses deep transformer layers to perform multi-head attention between the target utterance and the relevant context in the thread. The context-aware models are evaluated on two social media datasets, Twitter and Reddit, and show 3.1% and 7.0% improvements over their baselines. Our best models achieve F1-scores of 79.0% and 75.0% on the Twitter and Reddit datasets respectively, ranking among the highest-performing systems of the 36 participants in this shared task.

- Anthology ID: 2020.figlang-1.38
- Volume: Proceedings of the Second Workshop on Figurative Language Processing
- Month: July
- Year: 2020
- Address: Online
- Venues: ACL | Fig-Lang | WS
- Publisher: Association for Computational Linguistics
- Pages: 276–280
- URL: https://www.aclweb.org/anthology/2020.figlang-1.38
- PDF: https://www.aclweb.org/anthology/2020.figlang-1.38.pdf
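The context-aware setup described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' actual code: it assumes a BERT-style encoder with `[CLS]`/`[SEP]` special tokens, and the helper name `build_input` and the `max_context` cutoff are hypothetical.

```python
# Hypothetical sketch of context-aware input construction: the target
# utterance is paired with the preceding thread context so a transformer
# encoder can attend across both. The [CLS]/[SEP] tokens assume a
# BERT-style encoder; the authors' actual model and separators may differ.

def build_input(context_utterances, target_utterance, max_context=3):
    """Concatenate recent thread context with the target utterance
    into a single BERT-style sentence-pair input string."""
    # Keep only the most recent utterances so the sequence stays short.
    context = context_utterances[-max_context:]
    context_text = " ".join(context)
    # Segment A: thread context; segment B: target utterance.
    return f"[CLS] {context_text} [SEP] {target_utterance} [SEP]"

example = build_input(
    ["Great weather today.", "Yeah, I love getting soaked."],
    "Totally, rain is my favorite.",
)
```

The resulting string would then be tokenized and fed through the transformer layers, letting multi-head attention relate tokens of the target utterance to tokens of the surrounding thread.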