Sebastian Riedel
2020
MLQA: Evaluating Cross-lingual Extractive Question Answering
Patrick Lewis | Barlas Oguz | Ruty Rinott | Sebastian Riedel | Holger Schwenk
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Question answering (QA) models have shown rapid progress enabled by the availability of large, high-quality benchmark datasets. Such annotated datasets are difficult and costly to collect, and rarely exist in languages other than English, making it challenging to build QA systems that work well in other languages. In order to develop such systems, it is crucial to invest in high-quality multilingual evaluation benchmarks to measure progress. We present MLQA, a multi-way aligned extractive QA evaluation benchmark intended to spur research in this area. MLQA contains QA instances in 7 languages: English, Arabic, German, Spanish, Hindi, Vietnamese and Simplified Chinese. MLQA has over 12K instances in English and 5K in each of the other languages, with each instance parallel between 4 languages on average. We evaluate state-of-the-art cross-lingual models and machine-translation-based baselines on MLQA. In all cases, transfer results are shown to be significantly behind training-language performance.
TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data
Pengcheng Yin | Graham Neubig | Wen-tau Yih | Sebastian Riedel
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Recent years have witnessed the burgeoning of pretrained language models (LMs) for text-based natural language (NL) understanding tasks. Such models are typically trained on free-form NL text, hence may not be suitable for tasks like semantic parsing over structured data, which require reasoning over both free-form NL questions and structured tabular data (e.g., database tables). In this paper we present TaBERT, a pretrained LM that jointly learns representations for NL sentences and (semi-)structured tables. TaBERT is trained on a large corpus of 26 million tables and their English contexts. In experiments, neural semantic parsers using TaBERT as feature representation layers achieve new best results on the challenging weakly-supervised semantic parsing benchmark WikiTableQuestions, while performing competitively on the text-to-SQL dataset Spider.