Zhengbao Jiang
2020
Generalizing Natural Language Analysis through Span-relation Representations
Zhengbao Jiang | Wei Xu | Jun Araki | Graham Neubig
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Natural language processing covers a wide variety of tasks predicting syntax, semantics, and information content, and usually each type of output is generated with a specially designed architecture. In this paper, we provide the simple insight that a great variety of tasks can be represented in a single unified format consisting of labeling spans and relations between spans, and thus a single task-independent model can be used across different tasks. We perform extensive experiments to test this insight on 10 disparate tasks spanning dependency parsing (syntax), semantic role labeling (semantics), relation extraction (information content), aspect-based sentiment analysis (sentiment), and many others, achieving performance comparable to state-of-the-art specialized models. We further demonstrate the benefits of multi-task learning, and also show that the proposed method makes it easy to analyze differences and similarities in how the model handles different tasks. Finally, we convert these datasets into a unified format to build a benchmark, which provides a holistic testbed for evaluating future models for generalized natural language analysis.
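The unified format described in the abstract can be illustrated with a minimal sketch. The `Span` and `Relation` containers below are hypothetical names chosen for this example (not the paper's actual code); the point is that dependency parsing, SRL, relation extraction, and the other tasks all reduce to labeled spans plus labeled relations between spans, with only the label inventories changing per task.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Span:
    start: int   # token index, inclusive
    end: int     # token index, exclusive
    label: str   # task-specific span label

@dataclass(frozen=True)
class Relation:
    head: Span
    tail: Span
    label: str   # task-specific relation label

# "the cat sat" encoded as a dependency parse in span-relation form:
tokens = ["the", "cat", "sat"]
the = Span(0, 1, "word")
cat = Span(1, 2, "word")
sat = Span(2, 3, "word")
analysis = [
    Relation(head=cat, tail=the, label="det"),    # "the" modifies "cat"
    Relation(head=sat, tail=cat, label="nsubj"),  # "cat" is subject of "sat"
]

# The same two containers would encode SRL predicates/arguments or
# relation-extraction entity pairs, so one task-independent model
# can in principle predict any of these analyses.
for r in analysis:
    print(" ".join(tokens[r.tail.start:r.tail.end]),
          f"--{r.label}-->",
          " ".join(tokens[r.head.start:r.head.end]))
```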
Incorporating External Knowledge through Pre-training for Natural Language to Code Generation
Frank F. Xu | Zhengbao Jiang | Pengcheng Yin | Bogdan Vasilescu | Graham Neubig
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Open-domain code generation aims to generate code in a general-purpose programming language (such as Python) from natural language (NL) intents. Motivated by the intuition that developers usually retrieve resources on the web when writing code, we explore the effectiveness of incorporating two varieties of external knowledge into NL-to-code generation: automatically mined NL-code pairs from the online programming QA forum StackOverflow and programming language API documentation. Our evaluations show that combining the two sources with data augmentation and retrieval-based data re-sampling improves the current state-of-the-art by up to 2.2% absolute BLEU score on the code generation testbed CoNaLa. The code and resources are available at https://github.com/neulab/external-knowledge-codegen.
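The "retrieval-based data re-sampling" step can be sketched in a few lines. This is a toy stand-in, not the paper's implementation: it uses `difflib` string similarity in place of a real retrieval model, and the function and variable names are invented for illustration. The idea is to weight mined NL-code pairs by how similar their NL intents are to the target-domain intents, then sample the augmentation data in proportion to those weights.

```python
import difflib
import random

def resample(mined_pairs, target_intents, k=4, seed=0):
    """Re-sample mined (NL, code) pairs in proportion to how closely
    each NL intent matches the target-domain intents. difflib's ratio
    is a crude stand-in for a learned retrieval score."""
    def score(intent):
        return max(difflib.SequenceMatcher(None, intent, t).ratio()
                   for t in target_intents)
    weights = [score(nl) for nl, _ in mined_pairs]
    rng = random.Random(seed)  # fixed seed for reproducibility
    return rng.choices(mined_pairs, weights=weights, k=k)

# Hypothetical mined pairs (e.g. from Stack Overflow) and target intents:
mined = [
    ("sort a list in reverse", "sorted(x, reverse=True)"),
    ("open a tcp socket", "s = socket.socket()"),
    ("reverse sort a list of tuples", "sorted(x, key=f, reverse=True)"),
]
target = ["how to sort a list of dicts by a key"]
augmented = resample(mined, target)  # sorting-related pairs dominate
```

Pairs relevant to the target distribution are drawn more often, so the augmented training set is biased toward the kind of intents the model will see at test time.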
Co-authors
- Graham Neubig 2
- Wei Xu 1
- Jun Araki 1
- Frank F. Xu 1
- Pengcheng Yin 1
Venues
- ACL 2