<?xml version="1.0" encoding="UTF-8"?>
<modsCollection xmlns="http://www.loc.gov/mods/v3">
<mods ID="casas-etal-2020-combining">
    <titleInfo>
        <title>Combining Subword Representations into Word-level Representations in the Transformer Architecture</title>
    </titleInfo>
    <name type="personal">
        <namePart type="given">Noe</namePart>
        <namePart type="family">Casas</namePart>
        <role>
            <roleTerm authority="marcrelator" type="text">author</roleTerm>
        </role>
    </name>
    <name type="personal">
        <namePart type="given">Marta</namePart>
        <namePart type="given">R</namePart>
        <namePart type="family">Costa-jussà</namePart>
        <role>
            <roleTerm authority="marcrelator" type="text">author</roleTerm>
        </role>
    </name>
    <name type="personal">
        <namePart type="given">José</namePart>
        <namePart type="given">A</namePart>
        <namePart type="given">R</namePart>
        <namePart type="family">Fonollosa</namePart>
        <role>
            <roleTerm authority="marcrelator" type="text">author</roleTerm>
        </role>
    </name>
    <originInfo>
        <dateIssued>2020-07</dateIssued>
    </originInfo>
    <typeOfResource>text</typeOfResource>
    <relatedItem type="host">
        <titleInfo>
            <title>Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop</title>
        </titleInfo>
        <originInfo>
            <publisher>Association for Computational Linguistics</publisher>
            <place>
                <placeTerm type="text">Online</placeTerm>
            </place>
        </originInfo>
        <genre authority="marcgt">conference publication</genre>
    </relatedItem>
    <abstract>In Neural Machine Translation, using word-level tokens leads to degraded translation quality. The dominant approaches instead use subword-level tokens, but this increases the length of the sequences and makes it difficult to profit from word-level information such as POS tags or semantic dependencies. We propose a modification to the Transformer model that combines subword-level representations into word-level ones in the first layers of the encoder, reducing the effective length of the sequences in the subsequent layers and providing a natural point at which to incorporate extra word-level information. Our experiments show that this approach maintains translation quality relative to the standard Transformer model when no extra word-level information is injected, and that it is superior to the currently dominant method for incorporating word-level source-language information into models based on subword-level vocabularies.</abstract>
    <identifier type="citekey">casas-etal-2020-combining</identifier>
    <location>
        <url>https://www.aclweb.org/anthology/2020.acl-srw.10</url>
    </location>
    <part>
        <date>2020-07</date>
        <extent unit="page">
            <start>66</start>
            <end>71</end>
        </extent>
    </part>
</mods>
</modsCollection>
