<?xml version="1.0" encoding="UTF-8"?>
<modsCollection xmlns="http://www.loc.gov/mods/v3">
<mods ID="dong-etal-2020-transformer">
    <titleInfo>
        <title>Transformer-based Context-aware Sarcasm Detection in Conversation Threads from Social Media</title>
    </titleInfo>
    <name type="personal">
        <namePart type="given">Xiangjue</namePart>
        <namePart type="family">Dong</namePart>
        <role>
            <roleTerm authority="marcrelator" type="text">author</roleTerm>
        </role>
    </name>
    <name type="personal">
        <namePart type="given">Changmao</namePart>
        <namePart type="family">Li</namePart>
        <role>
            <roleTerm authority="marcrelator" type="text">author</roleTerm>
        </role>
    </name>
    <name type="personal">
        <namePart type="given">Jinho</namePart>
        <namePart type="given">D</namePart>
        <namePart type="family">Choi</namePart>
        <role>
            <roleTerm authority="marcrelator" type="text">author</roleTerm>
        </role>
    </name>
    <originInfo>
        <dateIssued>2020-07</dateIssued>
    </originInfo>
    <typeOfResource>text</typeOfResource>
    <relatedItem type="host">
        <titleInfo>
            <title>Proceedings of the Second Workshop on Figurative Language Processing</title>
        </titleInfo>
        <originInfo>
            <publisher>Association for Computational Linguistics</publisher>
            <place>
                <placeTerm type="text">Online</placeTerm>
            </place>
        </originInfo>
        <genre authority="marcgt">conference publication</genre>
    </relatedItem>
    <abstract>We present a transformer-based sarcasm detection model that accounts for the context of the entire conversation thread to make more robust predictions. Our model uses deep transformer layers to perform multi-head attention over the target utterance and the relevant context in the thread. The context-aware models are evaluated on two datasets from social media, Twitter and Reddit, and show 3.1% and 7.0% improvements over their baselines. Our best models achieve F1-scores of 79.0% and 75.0% on the Twitter and Reddit datasets, respectively, ranking among the highest-performing systems of the 36 participants in this shared task.</abstract>
    <identifier type="citekey">dong-etal-2020-transformer</identifier>
    <location>
        <url>https://www.aclweb.org/anthology/2020.figlang-1.38</url>
    </location>
    <part>
        <date>2020-07</date>
        <extent unit="page">
            <start>276</start>
            <end>280</end>
        </extent>
    </part>
</mods>
</modsCollection>
