Combining Subword Representations into Word-level Representations in the Transformer Architecture
Noe Casas, Marta R. Costa-jussà, José A. R. Fonollosa
Abstract
In Neural Machine Translation, using word-level tokens leads to degraded translation quality. The dominant approaches instead use subword-level tokens, but this lengthens the sequences and makes it difficult to exploit word-level information such as POS tags or semantic dependencies. We propose a modification to the Transformer model that combines subword-level representations into word-level ones in the first layers of the encoder, reducing the effective length of the sequences in the subsequent layers and providing a natural point at which to incorporate extra word-level information. Our experiments show that this approach maintains translation quality with respect to the standard Transformer model when no extra word-level information is injected, and that it is superior to the currently dominant method for incorporating word-level source-language information into models based on subword-level vocabularies.
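The combination step described in the abstract can be pictured as pooling the encoder states of each word's subwords into a single vector, yielding a shorter word-level sequence for the remaining encoder layers. Below is a minimal PyTorch sketch, assuming mean-pooling over a precomputed subword-to-word alignment; the pooling choice, the function name `combine_subwords`, and the `pos_embedding` injection shown in the comments are illustrative assumptions, not the paper's exact mechanism.

```python
import torch

def combine_subwords(subword_states, word_ids, num_words):
    """Average the subword vectors belonging to each word.

    subword_states: (seq_len, d_model) encoder states after the first layers
    word_ids:       (seq_len,) LongTensor, index of the word each subword belongs to
    num_words:      number of words in the sentence
    """
    d_model = subword_states.size(-1)
    # Sum the subword vectors per word, then divide by the subword count.
    sums = torch.zeros(num_words, d_model)
    sums.index_add_(0, word_ids, subword_states)
    counts = torch.zeros(num_words)
    counts.index_add_(0, word_ids, torch.ones_like(word_ids, dtype=torch.float))
    # (num_words, d_model): a shorter sequence for the following layers.
    return sums / counts.unsqueeze(-1)

# Usage sketch: "the un@@ believ@@ able" -> 4 subwords forming 2 words.
states = torch.randn(4, 512)
word_ids = torch.tensor([0, 1, 1, 1])
words = combine_subwords(states, word_ids, num_words=2)  # shape (2, 512)
# Extra word-level information could be injected at this point, e.g.
# words = words + pos_embedding(pos_tags)  # pos_embedding is hypothetical
```

Mean-pooling is only one plausible combination function; any permutation-invariant reduction over each word's subword states (max-pooling, attention-based pooling) would fit the same interface.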
- Anthology ID: 2020.acl-srw.10
- Volume: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop
- Month: July
- Year: 2020
- Address: Online
- Venue: ACL
- Publisher: Association for Computational Linguistics
- Pages: 66–71
- URL: https://www.aclweb.org/anthology/2020.acl-srw.10
- PDF: https://www.aclweb.org/anthology/2020.acl-srw.10.pdf