2020
A Summary of the First Workshop on Language Technology for Language Documentation and Revitalization
Graham Neubig | Shruti Rijhwani | Alexis Palmer | Jordan MacKenzie | Hilaria Cruz | Xinjian Li | Matthew Lee | Aditi Chaudhary | Luke Gessler | Steven Abney | Shirley Anugrah Hayati | Antonios Anastasopoulos | Olga Zamaraeva | Emily Prud’hommeaux | Jennette Child | Sara Child | Rebecca Knowles | Sarah Moeller | Jeffrey Micher | Yiyuan Li | Sydney Zink | Mengzhou Xia | Roshan S Sharma | Patrick Littell
Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)
Despite recent advances in natural language processing and other language technology, the application of such technology to language documentation and conservation has been limited. In August 2019, a workshop was held at Carnegie Mellon University in Pittsburgh, PA, USA to bring together language community members, documentary linguists, and technologists to discuss how to bridge this gap and create prototypes of novel and practical language revitalization technologies. The workshop focused on developing technologies to aid language documentation and revitalization in four areas: 1) spoken language (speech transcription, phone-to-orthography decoding, text-to-speech, and text-speech forced alignment), 2) dictionary extraction and management, 3) search tools for corpora, and 4) social media (language learning bots and social media analysis). This paper reports the results of the workshop, including the issues discussed and the technologies conceived and implemented for nine languages: Arapaho, Cayuga, Inuktitut, Irish Gaelic, Kidaw’ida, Kwak’wala, Ojibwe, San Juan Quiahije Chatino, and Seneca.
It’s Easier to Translate out of English than into it: Measuring Neural Translation Difficulty by Cross-Mutual Information
Emanuele Bugliarello | Sabrina J. Mielke | Antonios Anastasopoulos | Ryan Cotterell | Naoaki Okazaki
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
The performance of neural machine translation systems is commonly evaluated in terms of BLEU. However, due to its reliance on target language properties and generation, the BLEU metric does not allow an assessment of which translation directions are more difficult to model. In this paper, we propose cross-mutual information (XMI): an asymmetric information-theoretic metric of machine translation difficulty that exploits the probabilistic nature of most neural machine translation models. XMI allows us to better evaluate the difficulty of translating text into the target language while controlling for the difficulty of the target-side generation component independent of the translation task. We then present the first systematic and controlled study of cross-lingual translation difficulties using modern neural translation systems. Code for replicating our experiments is available online at https://github.com/e-bug/nmt-difficulty.
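For readers who want the quantity at a glance, one formalization consistent with the abstract (notation ours, not copied from the paper) contrasts the target-side cross-entropy of a language model with that of the translation model:

```latex
% Cross-mutual information for direction S -> T, as sketched in the
% abstract: how much easier the target T becomes to predict once the
% source S is available, controlling for target-side generation.
\[
\mathrm{XMI}(S \rightarrow T) \;=\; H_{q_{\mathrm{LM}}}(T) \;-\; H_{q_{\mathrm{MT}}}(T \mid S)
\]
% H_q denotes cross-entropy under model q; a larger gap means the
% source contributes more information about the target.
```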
Predicting Performance for Natural Language Processing Tasks
Mengzhou Xia | Antonios Anastasopoulos | Ruochen Xu | Yiming Yang | Graham Neubig
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Given the complexity of combinations of tasks, languages, and domains in natural language processing (NLP) research, it is computationally prohibitive to exhaustively test newly proposed models on each possible experimental setting. In this work, we explore the possibility of gaining plausible judgments of how well an NLP model will perform under an experimental setting, without actually training or testing the model. To do so, we build regression models that predict the evaluation score of an NLP experiment given the experimental settings as input. Experimenting on 9 different NLP tasks, we find that our predictors can produce meaningful predictions over unseen languages and different modeling architectures, outperforming reasonable baselines as well as human experts. Going further, we outline how our predictor can be used to find a small subset of representative experiments that should be run in order to obtain plausible predictions for all other experimental settings.
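As a rough illustration of the setup the abstract describes (not the paper's actual feature set or regressor), one can featurize each experimental setting as a dictionary and fit an off-the-shelf regressor; the feature names and scores below are hypothetical placeholders:

```python
# Sketch: predict an NLP experiment's score from its settings,
# without running the experiment itself.
from sklearn.feature_extraction import DictVectorizer
from sklearn.ensemble import GradientBoostingRegressor

settings = [
    {"task": "mt", "src": "de", "tgt": "en", "train_size_log10": 5.3},
    {"task": "mt", "src": "ru", "tgt": "en", "train_size_log10": 4.7},
    {"task": "pos", "src": "fi", "tgt": "fi", "train_size_log10": 4.1},
]
scores = [34.2, 28.9, 91.5]  # made-up BLEU / accuracy values, for illustration

vec = DictVectorizer(sparse=False)   # one-hot categoricals, pass numbers through
X = vec.fit_transform(settings)
model = GradientBoostingRegressor().fit(X, scores)

# Estimate the score of an unseen setting.
unseen = {"task": "mt", "src": "cs", "tgt": "en", "train_size_log10": 5.0}
print(model.predict(vec.transform([unseen])))
```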
Should All Cross-Lingual Embeddings Speak English?
Antonios Anastasopoulos | Graham Neubig
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Most recent work on cross-lingual word embeddings is severely Anglocentric: the vast majority of lexicon induction evaluation dictionaries are between English and another language, and the English embedding space is selected by default as the hub when learning in a multilingual setting. With this work, we challenge these practices. First, we show that the choice of hub language can significantly impact downstream lexicon induction and zero-shot POS tagging performance. Second, we both expand a standard English-centered evaluation dictionary collection to include all language pairs using triangulation, and create new dictionaries for under-represented languages. Evaluating established methods over all these language pairs sheds light on their suitability for aligning embeddings from distant languages and presents new challenges for the field. Finally, in our analysis we identify general guidelines for strong cross-lingual embedding baselines that extend to language pairs that do not include English.
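A minimal sketch of the triangulation step mentioned in the abstract: composing two English-centered dictionaries through their shared English entries to obtain a dictionary between the two non-English languages (the toy entries are ours):

```python
# Sketch: derive an A-B lexicon by pivoting A->EN and EN->B word
# pairs through English.
from collections import defaultdict

def triangulate(a_to_en, en_to_b):
    """Compose A->EN and EN->B pairs into A->B candidate pairs."""
    en_index = defaultdict(set)
    for en, b in en_to_b:
        en_index[en].add(b)
    return {(a, b) for a, en in a_to_en for b in en_index.get(en, ())}

el_en = [("σκύλος", "dog"), ("γάτα", "cat")]          # Greek -> English
en_pt = [("dog", "cão"), ("cat", "gato"), ("cat", "gata")]  # English -> Portuguese
print(triangulate(el_en, en_pt))
# {('σκύλος', 'cão'), ('γάτα', 'gato'), ('γάτα', 'gata')}
```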
A Resource for Studying Chatino Verbal Morphology
Hilaria Cruz | Antonios Anastasopoulos | Gregory Stump
Proceedings of The 12th Language Resources and Evaluation Conference
We present the first resource focusing on the verbal inflectional morphology of San Juan Quiahije Chatino, a tonal Mesoamerican language spoken in Mexico. We provide a collection of complete inflection tables for 198 lemmata, with morphological tags based on the UniMorph schema. We also provide baseline results on three core NLP tasks: morphological analysis, lemmatization, and morphological inflection.
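For concreteness, a small sketch of how one might load inflection tables stored in the UniMorph convention the abstract references (tab-separated lemma, inflected form, and semicolon-joined feature tags); the file name and tag string below are hypothetical:

```python
# Sketch: read a UniMorph-style TSV into per-lemma inflection tables.
from collections import defaultdict

def load_unimorph(path):
    tables = defaultdict(dict)  # lemma -> {feature tags -> inflected form}
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                lemma, form, tags = line.rstrip("\n").split("\t")
                tables[lemma][tags] = form
    return tables

# tables = load_unimorph("ctp.unimorph.tsv")       # hypothetical file name
# tables[some_lemma]["V;PROG;1;SG"]                # hypothetical tag lookup
```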
A Resource for Computational Experiments on Mapudungun
Mingjun Duan | Carlos Fasola | Sai Krishna Rallabandi | Rodolfo Vega | Antonios Anastasopoulos | Lori Levin | Alan W Black
Proceedings of The 12th Language Resources and Evaluation Conference
We present a resource for computational experiments on Mapudungun, a polysynthetic indigenous language spoken in Chile with upwards of 200 thousand speakers. We provide 142 hours of culturally significant conversations in the domain of medical treatment. The conversations are fully transcribed and translated into Spanish. The transcriptions also include annotations for code-switching and non-standard pronunciations. We also provide baseline results on three core NLP tasks: speech recognition, speech synthesis, and machine translation between Spanish and Mapudungun. We further explore other applications for which the corpus will be suitable, including the study of code-switching, historical orthography change, linguistic structure, and sociological and anthropological studies.
AlloVera: A Multilingual Allophone Database
David R. Mortensen | Xinjian Li | Patrick Littell | Alexis Michaud | Shruti Rijhwani | Antonios Anastasopoulos | Alan W Black | Florian Metze | Graham Neubig
Proceedings of The 12th Language Resources and Evaluation Conference
We introduce a new resource, AlloVera, which provides mappings from 218 allophones to phonemes for 14 languages. Phonemes are contrastive phonological units, and allophones are their various concrete realizations, which are predictable from phonological context. While phonemic representations are language specific, phonetic representations (stated in terms of (allo)phones) are much closer to a universal (language-independent) transcription. AlloVera allows the training of speech recognition models that output phonetic transcriptions in the International Phonetic Alphabet (IPA), regardless of the input language. We show that a “universal” allophone model, Allosaurus, built with AlloVera, outperforms “universal” phonemic models and language-specific models on a speech-transcription task. We explore the implications of this technology (and related technologies) for the documentation of endangered and minority languages. We further explore other applications for which AlloVera will be suitable as it grows, including phonological typology.
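A minimal sketch of the mapping AlloVera provides, under the simplifying assumption that it can be read as a per-language allophone-to-phoneme table; the English entries below are textbook examples (aspirated [pʰ] and flapped [ɾ] as realizations of /p/ and /t/), not actual AlloVera records:

```python
# Sketch: collapse a universal recognizer's IPA phone output to
# language-specific phonemes via a per-language lookup table.
ALLOPHONE_TO_PHONEME = {
    "eng": {"pʰ": "p", "p": "p", "ɾ": "t", "t": "t"},  # illustrative entries
}

def phonemicize(phones, lang):
    table = ALLOPHONE_TO_PHONEME[lang]
    # Pass through any phone without a known mapping.
    return [table.get(p, p) for p in phones]

print(phonemicize(["pʰ", "ɾ"], "eng"))  # ['p', 't']
```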
Proceedings of the Fourth Widening Natural Language Processing Workshop
Rossana Cunha | Samira Shaikh | Erika Varis | Ryan Georgi | Alicia Tsai | Antonios Anastasopoulos | Khyathi Raghavi Chandu
Proceedings of the Fourth Widening Natural Language Processing Workshop
SIGMORPHON 2020 Shared Task 0: Typologically Diverse Morphological Inflection
Ekaterina Vylomova | Jennifer White | Elizabeth Salesky | Sabrina J. Mielke | Shijie Wu | Edoardo Maria Ponti | Rowan Hall Maudslay | Ran Zmigrod | Josef Valvoda | Svetlana Toldova | Francis Tyers | Elena Klyachko | Ilya Yegorov | Natalia Krizhanovsky | Paula Czarnowska | Irene Nikkarinen | Andrew Krizhanovsky | Tiago Pimentel | Lucas Torroba Hennigen | Christo Kirov | Garrett Nicolai | Adina Williams | Antonios Anastasopoulos | Hilaria Cruz | Eleanor Chodroff | Ryan Cotterell | Miikka Silfverberg | Mans Hulden
Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology
A broad goal in natural language processing (NLP) is to develop a system that has the capacity to process any natural language. Most systems, however, are developed using data from just one language, such as English. The SIGMORPHON 2020 shared task on morphological reinflection aims to investigate systems’ ability to generalize across typologically distinct languages, many of which are low-resource. Systems were developed using data from 45 languages and just 5 language families, fine-tuned with data from an additional 45 languages and 10 language families (13 in total), and evaluated on all 90 languages. A total of 22 systems (19 neural) from 10 teams were submitted to the task. All four winning systems were neural (two monolingual transformers and two massively multilingual RNN-based models with gated attention). Most teams demonstrated the utility of data hallucination and augmentation, ensembling, and multilingual training for low-resource languages. Non-neural learners and manually designed grammars showed competitive and even superior performance on some languages (such as Ingrian, Tajik, Tagalog, Zarma, and Lingala), especially with very limited data. Some language families (Afro-Asiatic, Niger-Congo, Turkic) were relatively easy for most systems, which achieved over 90% mean accuracy on them, while others were more challenging.
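To make the "data hallucination" idea concrete, here is a heavily simplified sketch in the spirit the abstract describes: swap a stem shared between lemma and inflected form for a random string, keeping the affixation pattern intact (the published methods are more careful about character alignment):

```python
# Sketch: generate synthetic inflection pairs by replacing a crude
# stem (longest common substring) with random characters.
import random
import string

def hallucinate(lemma, form, alphabet=string.ascii_lowercase):
    stem = max(
        (lemma[i:j] for i in range(len(lemma))
         for j in range(i + 1, len(lemma) + 1) if lemma[i:j] in form),
        key=len, default="",
    )
    if len(stem) < 3:
        return lemma, form  # too little shared material to swap safely
    fake = "".join(random.choice(alphabet) for _ in stem)
    return lemma.replace(stem, fake, 1), form.replace(stem, fake, 1)

print(hallucinate("walk", "walking"))  # e.g. ('qzmv', 'qzmving')
```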
The CMU-LTI submission to the SIGMORPHON 2020 Shared Task 0: Language-Specific Cross-Lingual Transfer
Nikitha Murikinati | Antonios Anastasopoulos
Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology
This paper describes the CMU-LTI submission to the SIGMORPHON 2020 Shared Task 0 on typologically diverse morphological inflection. The (unrestricted) submission uses the cross-lingual approach of last year’s winning submission (Anastasopoulos and Neubig, 2019), adapted to use specific transfer languages for each test language. Our system, with fixed non-tuned hyperparameters, achieved a macro-averaged accuracy of 80.65%, ranking 20th among 31 systems, but it was still tied for best system on 25 of the 90 languages.
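A toy sketch of per-language transfer selection in the spirit of the abstract: score candidate transfer languages by similarity of typological feature vectors to the test language and pick the closest. The vectors and the cosine criterion are illustrative assumptions, not the paper's actual procedure:

```python
# Sketch: choose a transfer language by typological similarity.
import math

FEATS = {  # made-up typological feature vectors
    "deu": [1.0, 0.0, 1.0],
    "nld": [1.0, 0.0, 0.9],
    "tur": [0.0, 1.0, 0.1],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def best_transfer(test_vec, candidates):
    return max(candidates, key=lambda lang: cosine(test_vec, FEATS[lang]))

print(best_transfer([1.0, 0.0, 0.88], ["deu", "nld", "tur"]))  # 'nld'
```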
Transliteration for Cross-Lingual Morphological Inflection
Nikitha Murikinati | Antonios Anastasopoulos | Graham Neubig
Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology
Cross-lingual transfer between typologically related languages has been proven successful for the task of morphological inflection. However, if the languages do not share the same script, current methods yield more modest improvements. We explore the use of transliteration between related languages, as well as grapheme-to-phoneme conversion, as data preprocessing methods in order to alleviate this issue. We experimented with several diverse language pairs, finding that in most cases transliterating the transfer language data into the target one leads to accuracy improvements, even up to 9 percentage points. Converting both languages into a shared space like the International Phonetic Alphabet or the Latin alphabet is also beneficial, leading to improvements of up to 16 percentage points.