<?xml version="1.0" encoding="UTF-8"?>
<modsCollection xmlns="http://www.loc.gov/mods/v3">
<mods ID="finch-choi-2020-towards">
    <titleInfo>
        <title>Towards Unified Dialogue System Evaluation: A Comprehensive Analysis of Current Evaluation Protocols</title>
    </titleInfo>
    <name type="personal">
        <namePart type="given">Sarah</namePart>
        <namePart type="given">E</namePart>
        <namePart type="family">Finch</namePart>
        <role>
            <roleTerm authority="marcrelator" type="text">author</roleTerm>
        </role>
    </name>
    <name type="personal">
        <namePart type="given">Jinho</namePart>
        <namePart type="given">D</namePart>
        <namePart type="family">Choi</namePart>
        <role>
            <roleTerm authority="marcrelator" type="text">author</roleTerm>
        </role>
    </name>
    <originInfo>
        <dateIssued encoding="w3cdtf">2020-07</dateIssued>
    </originInfo>
    <typeOfResource>text</typeOfResource>
    <relatedItem type="host">
        <titleInfo>
            <title>Proceedings of the 21st Annual Meeting of the Special Interest Group on Discourse and Dialogue</title>
        </titleInfo>
        <originInfo>
            <publisher>Association for Computational Linguistics</publisher>
            <place>
                <placeTerm type="text">1st virtual meeting</placeTerm>
            </place>
        </originInfo>
        <genre authority="marcgt">conference publication</genre>
    </relatedItem>
    <abstract>As conversational AI-based dialogue management has increasingly become a trending topic, the need for a standardized and reliable evaluation procedure grows even more pressing. The current state of affairs suggests various evaluation protocols to assess chat-oriented dialogue management systems, rendering it difficult to conduct fair comparative studies across different approaches and gain an insightful understanding of their values. To foster this research, a more robust evaluation protocol must be set in place. This paper presents a comprehensive synthesis of both automated and human evaluation methods on dialogue systems, identifying their shortcomings while accumulating evidence towards the most effective evaluation dimensions. A total of 20 papers from the last two years are surveyed to analyze three types of evaluation protocols: automated, static, and interactive. Finally, the evaluation dimensions used in these papers are compared against our expert evaluation on the system-user dialogue data collected from the Alexa Prize 2020.</abstract>
    <identifier type="citekey">finch-choi-2020-towards</identifier>
    <location>
        <url>https://www.aclweb.org/anthology/2020.sigdial-1.29</url>
    </location>
    <part>
        <date>2020-07</date>
        <extent unit="page">
            <start>236</start>
            <end>245</end>
        </extent>
    </part>
</mods>
</modsCollection>
