Pengcheng Yin
2020
Incorporating External Knowledge through Pre-training for Natural Language to Code Generation
Frank F. Xu | Zhengbao Jiang | Pengcheng Yin | Bogdan Vasilescu | Graham Neubig
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Open-domain code generation aims to generate code in a general-purpose programming language (such as Python) from natural language (NL) intents. Motivated by the intuition that developers usually retrieve resources on the web when writing code, we explore the effectiveness of incorporating two varieties of external knowledge into NL-to-code generation: automatically mined NL-code pairs from the online programming QA forum StackOverflow and programming language API documentation. Our evaluations show that combining the two sources with data augmentation and retrieval-based data re-sampling improves the current state-of-the-art by up to 2.2% absolute BLEU score on the code generation testbed CoNaLa. The code and resources are available at https://github.com/neulab/external-knowledge-codegen.
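The retrieval-based data re-sampling mentioned above can be illustrated with a minimal sketch: mined NL-code pairs are re-weighted by similarity to the target dataset's NL intents, so in-distribution examples are drawn more often. The token-overlap scorer and example data below are illustrative stand-ins, not the paper's actual setup.

```python
import random


def token_overlap(a: str, b: str) -> float:
    """Jaccard similarity over whitespace tokens (a stand-in scorer)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0


def resample(mined_pairs, target_intents, k, seed=0):
    """Sample k mined (intent, code) pairs, each weighted by the max
    similarity of its intent to any target-domain intent."""
    weights = [
        max(token_overlap(intent, t) for t in target_intents)
        for intent, _ in mined_pairs
    ]
    rng = random.Random(seed)
    return rng.choices(mined_pairs, weights=weights, k=k)


mined = [
    ("sort a list in reverse order", "xs.sort(reverse=True)"),
    ("parse json from a string", "json.loads(s)"),
    ("install a package", "pip install requests"),
]
target = ["how to sort a python list", "read json string into dict"]
sample = resample(mined, target, k=2)
print(len(sample))  # 2
```

In practice the scorer would be a learned retriever or TF-IDF over a large mined corpus; the weighted sampling step is the same.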
TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data
Pengcheng Yin | Graham Neubig | Wen-tau Yih | Sebastian Riedel
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Recent years have witnessed the burgeoning of pretrained language models (LMs) for text-based natural language (NL) understanding tasks. Such models are typically trained on free-form NL text, hence may not be suitable for tasks like semantic parsing over structured data, which require reasoning over both free-form NL questions and structured tabular data (e.g., database tables). In this paper we present TaBERT, a pretrained LM that jointly learns representations for NL sentences and (semi-)structured tables. TaBERT is trained on a large corpus of 26 million tables and their English contexts. In experiments, neural semantic parsers using TaBERT as feature representation layers achieve new best results on the challenging weakly-supervised semantic parsing benchmark WikiTableQuestions, while performing competitively on the text-to-SQL dataset Spider.
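A joint encoder like TaBERT needs table content and the NL question in one token sequence. The sketch below shows a TaBERT-style row linearization, where each cell is flattened as "column | type | value" and concatenated with the utterance; the exact template and special tokens here are illustrative, not the paper's precise format.

```python
# Hedged sketch: flatten one table row plus the NL utterance into a single
# string that a BERT-style encoder could attend over jointly.
def linearize_row(headers, types, row):
    """Render each cell as 'column | type | value', joined by [SEP]."""
    cells = [f"{h} | {t} | {v}" for h, t, v in zip(headers, types, row)]
    return " [SEP] ".join(cells)


def build_input(utterance, headers, types, row):
    """Pair the NL utterance with one linearized table row."""
    return f"[CLS] {utterance} [SEP] {linearize_row(headers, types, row)}"


headers = ["Year", "Venue"]
types = ["real", "text"]
row = ["2020", "ACL"]
print(build_input("in which year was the paper published?", headers, types, row))
```

In the actual model, several such row encodings are produced (one per selected row) and column representations are pooled across them; this sketch covers only the input-construction step.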
Co-authors
- Graham Neubig 2
- Frank F. Xu 1
- Zhengbao Jiang 1
- Bogdan Vasilescu 1
- Wen-tau Yih 1
Venues
- ACL (2)