2020
From English to Code-Switching: Transfer Learning with Strong Morphological Clues
Gustavo Aguilar | Thamar Solorio
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Linguistic code-switching (CS) is still an understudied phenomenon in natural language processing. The NLP community has mostly focused on monolingual and multilingual scenarios, but little attention has been given to CS in particular. This is partly due to the lack of resources and annotated data, despite its increasing occurrence on social media platforms. In this paper, we aim to adapt monolingual models to code-switched text in various tasks. Specifically, we transfer English knowledge from a pre-trained ELMo model to different code-switched language pairs (i.e., Nepali-English, Spanish-English, and Hindi-English) using the task of language identification. Our method, CS-ELMo, is an extension of ELMo with a simple yet effective position-aware attention mechanism inside its character convolutions. We show the effectiveness of this transfer learning step by outperforming multilingual BERT and homologous CS-unaware ELMo models, establishing a new state of the art in CS tasks such as NER and POS tagging. Our technique can be expanded to more English-paired code-switched languages, providing more resources to the CS community.
Let Me Choose: From Verbal Context to Font Selection
Amirreza Shirani | Franck Dernoncourt | Jose Echevarria | Paul Asente | Nedim Lipka | Thamar Solorio
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
In this paper, we aim to learn associations between visual attributes of fonts and the verbal context of the texts they are typically applied to. Compared to related work leveraging the surrounding visual context, we choose to focus only on the input text, which can enable new applications for which the text is the only visual element in the document. We introduce a new dataset, containing examples of different topics in social media posts and ads, labeled through crowd-sourcing. Due to the subjective nature of the task, multiple fonts might be perceived as acceptable for an input text, which makes this problem challenging. To address this, we investigate different end-to-end models that learn label distributions on the crowd-sourced data, capturing inter-subjectivity across all annotations.
Age Suitability Rating: Predicting the MPAA Rating Based on Movie Dialogues
Mahsa Shafaei | Niloofar Safi Samghabadi | Sudipta Kar | Thamar Solorio
Proceedings of the 12th Language Resources and Evaluation Conference
Movies help us learn and inspire societal change. But they can also contain objectionable content that negatively affects viewers’ behaviour, especially children’s. In this paper, our goal is to predict the suitability of movie content for children and young adults based on scripts. The criterion we use to measure suitability is the MPAA rating, which is specifically designed for this purpose. We create a corpus of movie MPAA ratings and propose an RNN-based architecture with attention that jointly models the genre and the emotions in the script to predict the MPAA rating. Our classification model achieves an 81% weighted F1-score, outperforming the traditional machine learning baseline by 7%.
LinCE: A Centralized Benchmark for Linguistic Code-switching Evaluation
Gustavo Aguilar | Sudipta Kar | Thamar Solorio
Proceedings of the 12th Language Resources and Evaluation Conference
Recent trends in NLP research have raised an interest in linguistic code-switching (CS); modern approaches have been proposed to solve a wide range of NLP tasks on multiple language pairs. Unfortunately, these proposed methods are hardly generalizable to different code-switched languages. In addition, it is unclear whether a model architecture is applicable to a different task while still being compatible with the code-switching setting. This is mainly because of the lack of a centralized benchmark and the sparse corpora that researchers employ based on their specific needs and interests. To facilitate research in this direction, we propose a centralized benchmark for Linguistic Code-switching Evaluation (LinCE) that combines eleven corpora covering four different code-switched language pairs (i.e., Spanish-English, Nepali-English, Hindi-English, and Modern Standard Arabic-Egyptian Arabic) and four tasks (i.e., language identification, named entity recognition, part-of-speech tagging, and sentiment analysis). As part of the benchmark centralization effort, we provide an online platform where researchers can submit their results while comparing with others in real time. In addition, we provide the scores of different popular models, including LSTM, ELMo, and multilingual BERT, so that the NLP community can compare against state-of-the-art systems. LinCE is a continuous effort, and we will expand it with more low-resource languages and tasks.
Proceedings of the 4th Workshop on Computational Approaches to Code Switching
Thamar Solorio | Monojit Choudhury | Kalika Bali | Sunayana Sitaram | Amitava Das | Mona Diab
Proceedings of the 4th Workshop on Computational Approaches to Code Switching
Aggression and Misogyny Detection using BERT: A Multi-Task Approach
Niloofar Safi Samghabadi | Parth Patwa | Srinivas PYKL | Prerana Mukherjee | Amitava Das | Thamar Solorio
Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying
In recent times, the NLP community has increasingly focused on the detection of offensive language, aggression, and hate speech. This paper presents our system for the TRAC-2 shared task on “Aggression Identification” (sub-task A) and “Misogynistic Aggression Identification” (sub-task B). The data for this shared task is provided in three different languages: English, Hindi, and Bengali. Each data instance is annotated with one of three aggression classes (Not Aggressive, Covertly Aggressive, Overtly Aggressive), as well as one of two misogyny classes (Gendered and Non-Gendered). We propose an end-to-end neural model using attention on top of BERT that incorporates a multi-task learning paradigm to address both sub-tasks simultaneously. Our team, “na14”, scored a 0.8579 weighted F1-measure on the English sub-task B and secured 3rd rank out of 15 teams for the task. The code and the model weights are publicly available at https://github.com/NiloofarSafi/TRAC-2. Keywords: Aggression, Misogyny, Abusive Language, Hate-Speech Detection, BERT, NLP, Neural Networks, Social Media
Detecting Early Signs of Cyberbullying in Social Media
Niloofar Safi Samghabadi | Adrián Pastor López Monroy | Thamar Solorio
Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying
Nowadays, the amount of user activity on online social media is growing dramatically. These online environments provide excellent opportunities for communication and knowledge sharing. However, some people misuse them to harass and bully others online, a phenomenon called cyberbullying. Due to its harmful effects on people, especially youth, it is imperative to detect cyberbullying as early as possible, before it causes irreparable damage to victims. Most of the relevant available resources are not explicitly designed to detect cyberbullying, but rather related content, such as hate speech and abusive language. In this paper, we propose a new approach to create a corpus suited for cyberbullying detection. We also investigate the possibility of designing a framework to monitor the streams of users’ online messages and detect the signs of cyberbullying as early as possible.