Ekaterina Shutova
2020
Joint Modelling of Emotion and Abusive Language Detection
Santhosh Rajamanickam | Pushkar Mishra | Helen Yannakoudakis | Ekaterina Shutova
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
The rise of online communication platforms has been accompanied by some undesirable effects, such as the proliferation of aggressive and abusive behaviour online. Aiming to tackle this problem, the natural language processing (NLP) community has experimented with a range of techniques for abuse detection. While achieving substantial success, these methods have so far only focused on modelling the linguistic properties of the comments and the online communities of users, disregarding the emotional state of the users and how this might affect their language. The latter is, however, inextricably linked to abusive behaviour. In this paper, we present the first joint model of emotion and abusive language detection, experimenting in a multi-task learning framework that allows one task to inform the other. Our results demonstrate that incorporating affective features leads to significant improvements in abuse detection performance across datasets.
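The multi-task setup the abstract describes — one task informing the other — is commonly realised as hard parameter sharing: a shared encoder feeds two task-specific classification heads. The sketch below illustrates that pattern only; all dimensions, weights, and the six-class emotion head are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

d_in, d_hid = 50, 32                               # toy input/hidden sizes (assumed)
W_shared = rng.normal(size=(d_in, d_hid)) * 0.1    # encoder shared by both tasks
W_abuse = rng.normal(size=(d_hid, 2)) * 0.1        # head 1: abusive vs. non-abusive
W_emotion = rng.normal(size=(d_hid, 6)) * 0.1      # head 2: hypothetical 6 emotion classes

def forward(x):
    # One shared representation; gradients from either task's loss would
    # update W_shared, which is how the tasks inform each other.
    h = relu(x @ W_shared)
    return softmax(h @ W_abuse), softmax(h @ W_emotion)

x = rng.normal(size=(4, d_in))                     # a batch of 4 "comment" vectors
p_abuse, p_emotion = forward(x)
print(p_abuse.shape, p_emotion.shape)              # (4, 2) (4, 6)
```

In training, the two cross-entropy losses would be summed (often with a weighting coefficient) so that both tasks shape the shared encoder.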
Proceedings of the Second Workshop on Figurative Language Processing
Beata Beigman Klebanov | Ekaterina Shutova | Patricia Lichtenstein | Smaranda Muresan | Chee Wee | Anna Feldman | Debanjan Ghosh
Proceedings of the Second Workshop on Figurative Language Processing
Being neighbourly: Neural metaphor identification in discourse
Verna Dankers | Karan Malhotra | Gaurav Kudva | Volodymyr Medentsiy | Ekaterina Shutova
Proceedings of the Second Workshop on Figurative Language Processing
Existing approaches to metaphor processing typically rely on local features, such as immediate lexico-syntactic contexts or information within a given sentence. However, a large body of corpus-linguistic research suggests that situational information and broader discourse properties influence metaphor production and comprehension. In this paper, we present the first neural metaphor processing architecture that models a broader discourse through the use of attention mechanisms. Our models advance the state of the art on the all POS track of the 2018 VU Amsterdam metaphor identification task. The inclusion of discourse-level information yields further significant improvements.
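Attending over a broader discourse, as the abstract describes, amounts to scoring each surrounding sentence's representation against a query vector and taking the weighted sum. The following is a minimal dot-product-attention sketch of that idea; the vector sizes and the scaling choice are illustrative assumptions, not the paper's model.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, context):
    """Score each context sentence vector against the query (scaled dot
    product), normalise the scores, and return the attention-weighted
    summary of the discourse context plus the weights themselves."""
    scores = context @ query / np.sqrt(query.shape[0])
    weights = softmax(scores)          # one weight per context sentence
    return weights @ context, weights

rng = np.random.default_rng(1)
d = 16
context = rng.normal(size=(5, d))      # 5 surrounding "sentence" vectors (assumed)
query = rng.normal(size=d)             # representation of the target sentence
summary, weights = attend(query, context)
print(summary.shape)                   # (16,)
```

The summary vector can then be concatenated with the target sentence's local features, giving the classifier access to discourse-level information.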