Julia Hockenmaier
2020
Learning to execute instructions in a Minecraft dialogue
Prashant Jayannavar | Anjali Narayan-Chen | Julia Hockenmaier
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
The Minecraft Collaborative Building Task is a two-player game in which an Architect (A) instructs a Builder (B) to construct a target structure in a simulated Blocks World environment. We define the subtask of predicting correct action sequences (block placements and removals) in a given game context, and show that capturing B’s past actions as well as B’s perspective leads to a significant improvement in performance on this challenging language understanding problem.
A Multi-Perspective Architecture for Semantic Code Search
Rajarshi Haldar | Lingfei Wu | JinJun Xiong | Julia Hockenmaier
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
The ability to match pieces of code to their corresponding natural language descriptions and vice versa is fundamental for natural language search interfaces to software repositories. In this paper, we propose a novel multi-perspective cross-lingual neural framework for code–text matching, inspired in part by a previous model for monolingual text-to-text matching, to capture both global and local similarities. Our experiments on the CoNaLa dataset show that our proposed model yields better performance on this cross-lingual text-to-code matching task than previous approaches that map code and text to a single joint embedding space.
University of Illinois Submission to the SIGMORPHON 2020 Shared Task 0: Typologically Diverse Morphological Inflection
Marc Canby | Aidana Karipbayeva | Bryan Lunt | Sahand Mozaffari | Charlotte Yoder | Julia Hockenmaier
Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology
The objective of this shared task is to produce an inflected form of a word, given its lemma and a set of tags describing the attributes of the desired form. In this paper, we describe a transformer-based model that uses a bidirectional decoder to perform this task, and evaluate its performance on the 90 languages and 18 language families used in this task.