On-The-Fly Information Retrieval Augmentation for Language Models

Hai Wang, David McAllester


Abstract
Here we experiment with the use of information retrieval as an augmentation for pre-trained language models. The text corpus used in information retrieval can be viewed as a form of episodic memory which grows over time. By augmenting GPT-2 with information retrieval, we achieve a zero-shot 15% relative reduction in perplexity on the Gigaword corpus without any re-training. We also validate our IR augmentation on an event co-reference task.
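The mechanism implied by the abstract, conditioning the language model on retrieved text at inference time rather than re-training it, can be illustrated with a short sketch. The code below is an assumption-laden illustration, not the authors' implementation: it uses the Hugging Face transformers GPT-2 and a hypothetical toy word-overlap retriever standing in for a real IR system, and compares the perplexity of a sentence with and without a retrieved passage prepended as context.

import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Sketch only: a real system would use an IR engine (e.g. BM25 over a large,
# growing corpus); this toy retriever just ranks passages by word overlap.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def retrieve(query, corpus, k=1):
    """Hypothetical stand-in retriever: rank passages by word overlap."""
    q = set(query.lower().split())
    return sorted(corpus, key=lambda p: -len(q & set(p.lower().split())))[:k]

def perplexity(text, prefix=""):
    """Perplexity of `text`, optionally conditioned on retrieved `prefix`.

    Prefix tokens are labeled -100 so they condition the model but are never
    themselves scored; only the tokens of `text` contribute to the loss.
    """
    context_ids = [tokenizer.eos_token_id] + tokenizer.encode(prefix)
    text_ids = tokenizer.encode(text)
    input_ids = torch.tensor([context_ids + text_ids])
    labels = torch.tensor([[-100] * len(context_ids) + text_ids])
    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss
    return math.exp(loss.item())

corpus = [
    "An earthquake struck off the coast on Tuesday, damaging several ports.",
    "The central bank raised interest rates for the third time this year.",
]
query = "Rescue teams searched the harbor after the earthquake."

plain = perplexity(query)
augmented = perplexity(query, prefix=" ".join(retrieve(query, corpus)) + " ")
print(f"plain: {plain:.1f}  retrieval-augmented: {augmented:.1f}")

The eos token acts as a neutral start-of-sequence marker so that every token of the query is scored in both conditions. Whether the augmented perplexity is actually lower depends on the relevance of the retrieved passage; the paper's 15% figure is the corpus-level version of this comparison, and nothing in this sketch reproduces it.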
Anthology ID: 2020.nuse-1.14
Volume: Proceedings of the First Joint Workshop on Narrative Understanding, Storylines, and Events
Month: July
Year: 2020
Address: Online
Venues: ACL | NUSE | WS
Publisher: Association for Computational Linguistics
Pages: 114–119
URL: https://www.aclweb.org/anthology/2020.nuse-1.14
PDF: https://www.aclweb.org/anthology/2020.nuse-1.14.pdf
