2020
pdf
bib
abs
A Summary of the First Workshop on Language Technology for Language Documentation and Revitalization
Graham Neubig
|
Shruti Rijhwani
|
Alexis Palmer
|
Jordan MacKenzie
|
Hilaria Cruz
|
Xinjian Li
|
Matthew Lee
|
Aditi Chaudhary
|
Luke Gessler
|
Steven Abney
|
Shirley Anugrah Hayati
|
Antonios Anastasopoulos
|
Olga Zamaraeva
|
Emily Prud’hommeaux
|
Jennette Child
|
Sara Child
|
Rebecca Knowles
|
Sarah Moeller
|
Jeffrey Micher
|
Yiyuan Li
|
Sydney Zink
|
Mengzhou Xia
|
Roshan S Sharma
|
Patrick Littell
Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)
Despite recent advances in natural language processing and other language technology, the application of such technology to language documentation and conservation has been limited. In August 2019, a workshop was held at Carnegie Mellon University in Pittsburgh, PA, USA to attempt to bring together language community members, documentary linguists, and technologists to discuss how to bridge this gap and create prototypes of novel and practical language revitalization technologies. The workshop focused on developing technologies to aid language documentation and revitalization in four areas: 1) spoken language (speech transcription, phone to orthography decoding, text-to-speech and text-speech forced alignment), 2) dictionary extraction and management, 3) search tools for corpora, and 4) social media (language learning bots and social media analysis). This paper reports the results of this workshop, including issues discussed, and various conceived and implemented technologies for nine languages: Arapaho, Cayuga, Inuktitut, Irish Gaelic, Kidaw’ida, Kwak’wala, Ojibwe, San Juan Quiahije Chatino, and Seneca.
pdf
bib
abs
Politeness Transfer: A Tag and Generate Approach
Aman Madaan
|
Amrith Setlur
|
Tanmay Parekh
|
Barnabas Poczos
|
Graham Neubig
|
Yiming Yang
|
Ruslan Salakhutdinov
|
Alan W Black
|
Shrimai Prabhumoye
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
This paper introduces a new task of politeness transfer which involves converting non-polite sentences to polite sentences while preserving the meaning. We also provide a dataset of more than 1.39 instances automatically labeled for politeness to encourage benchmark evaluations on this new task. We design a tag and generate pipeline that identifies stylistic attributes and subsequently generates a sentence in the target style while preserving most of the source content. For politeness as well as five other transfer tasks, our model outperforms the state-of-the-art methods on automatic metrics for content preservation, with a comparable or better performance on style transfer accuracy. Additionally, our model surpasses existing methods on human evaluations for grammaticality, meaning preservation and transfer accuracy across all the six style transfer tasks. The data and code is located at https://github.com/tag-and-generate.
pdf
bib
abs
Generalizing Natural Language Analysis through Span-relation Representations
Zhengbao Jiang
|
Wei Xu
|
Jun Araki
|
Graham Neubig
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Natural language processing covers a wide variety of tasks predicting syntax, semantics, and information content, and usually each type of output is generated with specially designed architectures. In this paper, we provide the simple insight that a great variety of tasks can be represented in a single unified format consisting of labeling spans and relations between spans, thus a single task-independent model can be used across different tasks. We perform extensive experiments to test this insight on 10 disparate tasks spanning dependency parsing (syntax), semantic role labeling (semantics), relation extraction (information content), aspect based sentiment analysis (sentiment), and many others, achieving performance comparable to state-of-the-art specialized models. We further demonstrate benefits of multi-task learning, and also show that the proposed method makes it easy to analyze differences and similarities in how the model handles different tasks. Finally, we convert these datasets into a unified format to build a benchmark, which provides a holistic testbed for evaluating future models for generalized natural language analysis.
pdf
bib
abs
Weight Poisoning Attacks on Pretrained Models
Keita Kurita
|
Paul Michel
|
Graham Neubig
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Recently, NLP has seen a surge in the usage of large pre-trained models. Users download weights of models pre-trained on large datasets, then fine-tune the weights on a task of their choice. This raises the question of whether downloading untrusted pre-trained weights can pose a security threat. In this paper, we show that it is possible to construct “weight poisoning” attacks where pre-trained weights are injected with vulnerabilities that expose “backdoors” after fine-tuning, enabling the attacker to manipulate the model prediction simply by injecting an arbitrary keyword. We show that by applying a regularization method which we call RIPPLe and an initialization procedure we call Embedding Surgery, such attacks are possible even with limited knowledge of the dataset and fine-tuning procedure. Our experiments on sentiment classification, toxicity detection, and spam detection show that this attack is widely applicable and poses a serious threat. Finally, we outline practical defenses against such attacks.
pdf
bib
abs
Learning to Deceive with Attention-Based Explanations
Danish Pruthi
|
Mansi Gupta
|
Bhuwan Dhingra
|
Graham Neubig
|
Zachary C. Lipton
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Attention mechanisms are ubiquitous components in neural architectures applied to natural language processing. In addition to yielding gains in predictive accuracy, attention weights are often claimed to confer interpretability, purportedly useful both for providing insights to practitioners and for explaining why a model makes its decisions to stakeholders. We call the latter use of attention mechanisms into question by demonstrating a simple method for training models to produce deceptive attention masks. Our method diminishes the total weight assigned to designated impermissible tokens, even when the models can be shown to nevertheless rely on these features to drive predictions. Across multiple models and tasks, our approach manipulates attention weights while paying surprisingly little cost in accuracy. Through a human study, we show that our manipulated attention-based explanations deceive people into thinking that predictions from a model biased against gender minorities do not rely on the gender. Consequently, our results cast doubt on attention’s reliability as a tool for auditing algorithms in the context of fairness and accountability.
pdf
bib
abs
Incorporating External Knowledge through Pre-training for Natural Language to Code Generation
Frank F. Xu
|
Zhengbao Jiang
|
Pengcheng Yin
|
Bogdan Vasilescu
|
Graham Neubig
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Open-domain code generation aims to generate code in a general-purpose programming language (such as Python) from natural language (NL) intents. Motivated by the intuition that developers usually retrieve resources on the web when writing code, we explore the effectiveness of incorporating two varieties of external knowledge into NL-to-code generation: automatically mined NL-code pairs from the online programming QA forum StackOverflow and programming language API documentation. Our evaluations show that combining the two sources with data augmentation and retrieval-based data re-sampling improves the current state-of-the-art by up to 2.2% absolute BLEU score on the code generation testbed CoNaLa. The code and resources are available at https://github.com/neulab/external-knowledge-codegen.
pdf
bib
abs
Soft Gazetteers for Low-Resource Named Entity Recognition
Shruti Rijhwani
|
Shuyan Zhou
|
Graham Neubig
|
Jaime Carbonell
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Traditional named entity recognition models use gazetteers (lists of entities) as features to improve performance. Although modern neural network models do not require such hand-crafted features for strong performance, recent work has demonstrated their utility for named entity recognition on English data. However, designing such features for low-resource languages is challenging, because exhaustive entity gazetteers do not exist in these languages. To address this problem, we propose a method of “soft gazetteers” that incorporates ubiquitously available information from English knowledge bases, such as Wikipedia, into neural named entity recognition models through cross-lingual entity linking. Our experiments on four low-resource languages show an average improvement of 4 points in F1 score.
pdf
bib
abs
TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data
Pengcheng Yin
|
Graham Neubig
|
Wen-tau Yih
|
Sebastian Riedel
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Recent years have witnessed the burgeoning of pretrained language models (LMs) for text-based natural language (NL) understanding tasks. Such models are typically trained on free-form NL text, hence may not be suitable for tasks like semantic parsing over structured data, which require reasoning over both free-form NL questions and structured tabular data (e.g., database tables). In this paper we present TaBERT, a pretrained LM that jointly learns representations for NL sentences and (semi-)structured tables. TaBERT is trained on a large corpus of 26 million tables and their English contexts. In experiments, neural semantic parsers using TaBERT as feature representation layers achieve new best results on the challenging weakly-supervised semantic parsing benchmark WikiTableQuestions, while performing competitively on the text-to-SQL dataset Spider.
pdf
bib
abs
Balancing Training for Multilingual Neural Machine Translation
Xinyi Wang
|
Yulia Tsvetkov
|
Graham Neubig
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
When training multilingual machine translation (MT) models that can translate to/from multiple languages, we are faced with imbalanced training sets: some languages have much more training data than others. Standard practice is to up-sample less resourced languages to increase representation, and the degree of up-sampling has a large effect on the overall performance. In this paper, we propose a method that instead automatically learns how to weight training data through a data scorer that is optimized to maximize performance on all test languages. Experiments on two sets of languages under both one-to-many and many-to-one MT settings show our method not only consistently outperforms heuristic baselines in terms of average performance, but also offers flexible control over the performance of which languages are optimized.
pdf
bib
abs
Predicting Performance for Natural Language Processing Tasks
Mengzhou Xia
|
Antonios Anastasopoulos
|
Ruochen Xu
|
Yiming Yang
|
Graham Neubig
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Given the complexity of combinations of tasks, languages, and domains in natural language processing (NLP) research, it is computationally prohibitive to exhaustively test newly proposed models on each possible experimental setting. In this work, we attempt to explore the possibility of gaining plausible judgments of how well an NLP model can perform under an experimental setting, without actually training or testing the model. To do so, we build regression models to predict the evaluation score of an NLP experiment given the experimental settings as input. Experimenting on~9 different NLP tasks, we find that our predictors can produce meaningful predictions over unseen languages and different modeling architectures, outperforming reasonable baselines as well as human experts. %we represent experimental settings using an array of features. Going further, we outline how our predictor can be used to find a small subset of representative experiments that should be run in order to obtain plausible predictions for all other experimental settings.
pdf
bib
abs
Should All Cross-Lingual Embeddings Speak English?
Antonios Anastasopoulos
|
Graham Neubig
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Most of recent work in cross-lingual word embeddings is severely Anglocentric. The vast majority of lexicon induction evaluation dictionaries are between English and another language, and the English embedding space is selected by default as the hub when learning in a multilingual setting. With this work, however, we challenge these practices. First, we show that the choice of hub language can significantly impact downstream lexicon induction zero-shot POS tagging performance. Second, we both expand a standard English-centered evaluation dictionary collection to include all language pairs using triangulation, and create new dictionaries for under-represented languages. Evaluating established methods over all these language pairs sheds light into their suitability for aligning embeddings from distant languages and presents new challenges for the field. Finally, in our analysis we identify general guidelines for strong cross-lingual embedding baselines, that extend to language pairs that do not include English.
pdf
bib
Proceedings of the Fourth Workshop on Neural Generation and Translation
Alexandra Birch
|
Andrew Finch
|
Hiroaki Hayashi
|
Kenneth Heafield
|
Marcin Junczys-Dowmunt
|
Ioannis Konstas
|
Xian Li
|
Graham Neubig
|
Yusuke Oda
Proceedings of the Fourth Workshop on Neural Generation and Translation
pdf
bib
abs
Findings of the Fourth Workshop on Neural Generation and Translation
Kenneth Heafield
|
Hiroaki Hayashi
|
Yusuke Oda
|
Ioannis Konstas
|
Andrew Finch
|
Graham Neubig
|
Xian Li
|
Alexandra Birch
Proceedings of the Fourth Workshop on Neural Generation and Translation
We describe the finding of the Fourth Workshop on Neural Generation and Translation, held in concert with the annual conference of the Association for Computational Linguistics (ACL 2020). First, we summarize the research trends of papers presented in the proceedings. Second, we describe the results of the three shared tasks 1) efficient neural machine translation (NMT) where participants were tasked with creating NMT systems that are both accurate and efficient, and 2) document-level generation and translation (DGT) where participants were tasked with developing systems that generate summaries from structured data, potentially with assistance from text in another language and 3) STAPLE task: creation of as many possible translations of a given input text. This last shared task was organised by Duolingo.
pdf
bib
abs
AlloVera: A Multilingual Allophone Database
David R. Mortensen
|
Xinjian Li
|
Patrick Littell
|
Alexis Michaud
|
Shruti Rijhwani
|
Antonios Anastasopoulos
|
Alan W Black
|
Florian Metze
|
Graham Neubig
Proceedings of The 12th Language Resources and Evaluation Conference
We introduce a new resource, AlloVera, which provides mappings from 218 allophones to phonemes for 14 languages. Phonemes are contrastive phonological units, and allophones are their various concrete realizations, which are predictable from phonological context. While phonemic representations are language specific, phonetic representations (stated in terms of (allo)phones) are much closer to a universal (language-independent) transcription. AlloVera allows the training of speech recognition models that output phonetic transcriptions in the International Phonetic Alphabet (IPA), regardless of the input language. We show that a “universal” allophone model, Allosaurus, built with AlloVera, outperforms “universal” phonemic models and language-specific models on a speech-transcription task. We explore the implications of this technology (and related technologies) for the documentation of endangered and minority languages. We further explore other applications for which AlloVera will be suitable as it grows, including phonological typology.
pdf
bib
abs
Transliteration for Cross-Lingual Morphological Inflection
Nikitha Murikinati
|
Antonios Anastasopoulos
|
Graham Neubig
Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology
Cross-lingual transfer between typologically related languages has been proven successful for the task of morphological inflection. However, if the languages do not share the same script, current methods yield more modest improvements. We explore the use of transliteration between related languages, as well as grapheme-to-phoneme conversion, as data preprocessing methods in order to alleviate this issue. We experimented with several diverse language pairs, finding that in most cases transliterating the transfer language data into the target one leads to accuracy improvements, even up to 9 percentage points. Converting both languages into a shared space like the International Phonetic Alphabet or the Latin alphabet is also beneficial, leading to improvements of up to 16 percentage points.