Sinong Wang
2020
To Pretrain or Not to Pretrain: Examining the Benefits of Pretraining on Resource Rich Tasks
Sinong Wang | Madian Khabsa | Hao Ma
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Pretraining NLP models with variants of Masked Language Model (MLM) objectives has recently led to significant improvements on many tasks. This paper examines the benefits of pretrained models as a function of the number of training samples used in the downstream task. On several text classification tasks, we show that as the number of training examples grows into the millions, the accuracy gap between finetuning a BERT-based model and training a vanilla LSTM from scratch narrows to within 1%. Our findings indicate that MLM-based models might reach a diminishing return point as the supervised data size increases significantly.
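A minimal sketch (not the authors' released code) of the two model families the abstract compares: a pretrained BERT classifier that is finetuned, and a vanilla LSTM classifier trained from scratch. The checkpoint name, hidden sizes, and two-label setup are illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Finetuned side: pretrained BERT encoder plus a randomly initialized task head.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert_clf = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# From-scratch side: a vanilla LSTM classifier with randomly initialized embeddings.
class LSTMClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, num_labels=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_labels)

    def forward(self, input_ids):
        hidden, _ = self.lstm(self.embed(input_ids))
        return self.head(hidden[:, -1])  # classify from the last hidden state

lstm_clf = LSTMClassifier(vocab_size=tokenizer.vocab_size)

# Both models map a tokenized sentence to class logits; the paper's question is how
# their accuracy gap changes as the number of supervised training examples grows.
batch = tokenizer(["an example review"], return_tensors="pt")
bert_logits = bert_clf(**batch).logits
lstm_logits = lstm_clf(batch["input_ids"])
```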
Language Models as Fact Checkers?
Nayeon Lee | Belinda Li | Sinong Wang | Wen-tau Yih | Hao Ma | Madian Khabsa
Proceedings of the Third Workshop on Fact Extraction and VERification (FEVER)
Recent work has suggested that language models (LMs) store both common-sense and factual knowledge learned from pre-training data. In this paper, we leverage this implicit knowledge to create an effective end-to-end fact checker using solely a language model, without any external knowledge or explicit retrieval components. While previous work on extracting knowledge from LMs has focused on the task of open-domain question answering, to the best of our knowledge, this is the first work to examine the use of language models as fact checkers. In a closed-book setting, we show that our zero-shot LM approach outperforms a random baseline on the standard FEVER task, and that our finetuned LM compares favorably with standard baselines. Though we do not ultimately outperform methods which use explicit knowledge bases, we believe our exploration shows that this method is viable and has much room for exploration.
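A minimal sketch, under assumptions, of probing a masked LM as a closed-book fact checker in the zero-shot spirit of the abstract: append a cloze template to a FEVER-style claim and compare the model's scores for "true" versus "false" at the masked position. The template, checkpoint, and two-way SUPPORTS/REFUTES decision are illustrative simplifications, not the paper's exact prompts or label set.

```python
from transformers import pipeline

# Fill-mask pipeline over a masked LM; the checkpoint here is an assumption.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def zero_shot_verdict(claim: str) -> str:
    # Illustrative cloze template appended to the claim; no retrieval, no evidence.
    prompt = f"{claim} This statement is [MASK]."
    scores = {
        pred["token_str"].strip(): pred["score"]
        for pred in fill_mask(prompt, targets=["true", "false"])
    }
    return "SUPPORTS" if scores.get("true", 0.0) >= scores.get("false", 0.0) else "REFUTES"

print(zero_shot_verdict("Paris is the capital of France."))
```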