2020
The Johns Hopkins University Bible Corpus: 1600+ Tongues for Typological Exploration
Arya D. McCarthy | Rachel Wicks | Dylan Lewis | Aaron Mueller | Winston Wu | Oliver Adams | Garrett Nicolai | Matt Post | David Yarowsky
Proceedings of The 12th Language Resources and Evaluation Conference
We present findings from the creation of a massively parallel corpus in over 1600 languages, the Johns Hopkins University Bible Corpus (JHUBC). The corpus consists of over 4000 unique translations of the Christian Bible and counting. Our data are derived from scraping several online resources and merging them with existing corpora under a common scheme that is verse-parallel across all translations. We detail our effort to scrape, clean, align, and utilize this rich multilingual dataset. The corpus captures the great typological variety of the world’s languages: the proportions of Ethnologue’s typological features represented in our corpus closely track their proportions among the world’s languages. We also give an example application: projecting pronoun features like clusivity across alignments to richly annotate languages which do not mark the distinction.
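The verse-parallel scheme described in the abstract can be sketched as a join keyed on verse identifiers: each translation maps (book, chapter, verse) IDs to text, so any pair of languages yields a parallel corpus wherever their verse IDs overlap. This is a minimal illustrative sketch; the function and field names are assumptions, not the JHUBC implementation.

```python
def merge_verse_parallel(translations):
    """translations: dict mapping language code -> {verse_id: text}.
    Returns {verse_id: {lang: text}} over all verses seen in any translation."""
    merged = {}
    for lang, verses in translations.items():
        for verse_id, text in verses.items():
            merged.setdefault(verse_id, {})[lang] = text
    return merged

def parallel_pairs(merged, lang_a, lang_b):
    """Extract sentence pairs for two languages over their shared verses."""
    return [(v[lang_a], v[lang_b])
            for v in merged.values()
            if lang_a in v and lang_b in v]

# Toy example: one shared verse keyed by (book, chapter, verse).
eng = {("GEN", 1, 1): "In the beginning God created the heaven and the earth."}
deu = {("GEN", 1, 1): "Am Anfang schuf Gott Himmel und Erde."}
merged = merge_verse_parallel({"eng": eng, "deu": deu})
pairs = parallel_pairs(merged, "eng", "deu")
```

Because the join is on verse IDs rather than on any one pivot language, the same merged table serves every language pair at once.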
Computational Etymology and Word Emergence
Winston Wu | David Yarowsky
Proceedings of The 12th Language Resources and Evaluation Conference
We developed an extensible, comprehensive Wiktionary parser that improves over several existing parsers. We predict the etymology of a word across the full range of etymology types and languages in Wiktionary, showing improvements over a strong baseline. We also model word emergence and show the application of etymology in modeling this phenomenon. We release our parser to further research in this understudied field.
An Analysis of Massively Multilingual Neural Machine Translation for Low-Resource Languages
Aaron Mueller | Garrett Nicolai | Arya D. McCarthy | Dylan Lewis | Winston Wu | David Yarowsky
Proceedings of The 12th Language Resources and Evaluation Conference
In this work, we explore massively multilingual low-resource neural machine translation. Using translations of the Bible (which have parallel structure across languages), we train models with up to 1,107 source languages. We create various multilingual corpora, varying the number and relatedness of source languages. Using these, we investigate the best ways to use this many-way aligned resource for multilingual machine translation. Our experiments employ a grammatically and phylogenetically diverse set of source languages during testing for more representative evaluations. We find that best practices in this domain are highly language-specific: adding more languages to a training set is often better, but too many harms performance—the best number depends on the source language. Furthermore, training on related languages can improve or degrade performance, depending on the language. As there is no one-size-fits-most answer, we find that it is critical to tailor one’s approach to the source language and its typology.
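Pooling many source languages against one target, as the abstract describes, is commonly done by tagging each source sentence with its language so a single model can learn from the mixture. The sketch below assembles such a many-to-one training set from verse-parallel data; the tagging convention and names are illustrative assumptions, not the paper's code.

```python
def build_training_set(sources, target):
    """sources: dict mapping language code -> {verse_id: text};
    target: {verse_id: text} for the single target language.
    Returns (tagged_source, target_text) pairs over shared verses."""
    pairs = []
    for lang, verses in sources.items():
        for verse_id, text in verses.items():
            if verse_id in target:
                # Prepend a language tag so one model handles all sources.
                pairs.append((f"<{lang}> {text}", target[verse_id]))
    return pairs

# Toy example: two source languages sharing one verse with the target.
sources = {"deu": {1: "Am Anfang"}, "fra": {1: "Au commencement"}}
target = {1: "In the beginning"}
pairs = build_training_set(sources, target)
```

Varying which languages go into `sources` is then enough to produce the corpora of differing size and relatedness that the experiments compare.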
UniMorph 3.0: Universal Morphology
Arya D. McCarthy | Christo Kirov | Matteo Grella | Amrit Nidhi | Patrick Xia | Kyle Gorman | Ekaterina Vylomova | Sabrina J. Mielke | Garrett Nicolai | Miikka Silfverberg | Timofey Arkhangelskiy | Nataly Krizhanovsky | Andrew Krizhanovsky | Elena Klyachko | Alexey Sorokin | John Mansfield | Valts Ernštreits | Yuval Pinter | Cassandra L. Jacobs | Ryan Cotterell | Mans Hulden | David Yarowsky
Proceedings of The 12th Language Resources and Evaluation Conference
The Universal Morphology (UniMorph) project is a collaborative effort providing broad-coverage instantiated normalized morphological paradigms for hundreds of diverse world languages. The project comprises two major thrusts: a language-independent feature schema for rich morphological annotation and a type-level resource of annotated data in diverse languages realizing that schema. We have implemented several improvements to the extraction pipeline which creates most of our data, so that it is both more complete and more correct. We have added 66 new languages, as well as new parts of speech for 12 languages. We have also amended the schema in several ways. Finally, we present three new community tools: two to validate data for resource creators, and one to make morphological data available from the command line. UniMorph is based at the Center for Language and Speech Processing (CLSP) at Johns Hopkins University in Baltimore, Maryland. This paper details advances made to the schema, tooling, and dissemination of project resources since the UniMorph 2.0 release described at LREC 2018.
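UniMorph's type-level data is distributed as tab-separated triples of lemma, inflected form, and a semicolon-joined feature bundle (e.g. "N;NOM;SG"). The sketch below parses such triples and applies one toy validation rule (feature labels drawn from a known set); the rule and the tiny feature inventory are illustrative stand-ins for the project's actual validators.

```python
# Tiny illustrative subset of schema feature labels, not the full inventory.
KNOWN_FEATURES = {"N", "V", "ADJ", "NOM", "ACC", "SG", "PL", "PST", "PRS"}

def parse_unimorph_line(line):
    """Split one data line into (lemma, inflected form, feature list)."""
    lemma, form, features = line.rstrip("\n").split("\t")
    return lemma, form, features.split(";")

def validate(triples, known=KNOWN_FEATURES):
    """Return feature labels used in the data but absent from the known set."""
    unknown = set()
    for _, _, feats in triples:
        unknown.update(f for f in feats if f not in known)
    return unknown

data = ["cat\tcats\tN;PL", "run\tran\tV;PST"]
triples = [parse_unimorph_line(line) for line in data]
```

Checks of this shape let resource creators catch malformed feature bundles before data is merged into the shared repository.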
Fine-grained Morphosyntactic Analysis and Generation Tools for More Than One Thousand Languages
Garrett Nicolai | Dylan Lewis | Arya D. McCarthy | Aaron Mueller | Winston Wu | David Yarowsky
Proceedings of The 12th Language Resources and Evaluation Conference
Exploiting the broad translation of the Bible into the world’s languages, we train and distribute morphosyntactic tools for approximately one thousand languages, vastly outstripping previous distributions of tools devoted to the processing of inflectional morphology. Evaluation of the tools on a subset of available inflectional dictionaries demonstrates strong initial models, supplemented and improved through ensembling and dictionary-based reranking. Likewise, a novel type-to-token-based evaluation metric allows us to confirm that models generalize well across rare and common forms alike.
Multilingual Dictionary Based Construction of Core Vocabulary
Winston Wu | Garrett Nicolai | David Yarowsky
Proceedings of The 12th Language Resources and Evaluation Conference
We propose a new functional definition and construction method for core vocabulary sets for multiple applications, based on the relative coverage of a target concept in thousands of bilingual dictionaries. Our newly developed core concept vocabulary list derived from these dictionary consensus methods achieves high overlap with existing widely utilized core vocabulary lists targeted at applications such as first and second language learning or field linguistics. Our in-depth analysis illustrates multiple desirable properties of our newly proposed core vocabulary set, including the non-compositionality of its entries. We employ a cognate prediction method to recover missing coverage of this core vocabulary in massively multilingual dictionary construction, and we argue that this core vocabulary should be prioritized for elicitation when creating new dictionaries for low-resource languages, benefiting multiple downstream tasks including machine translation and language learning.
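The dictionary-consensus idea can be sketched simply: rank concepts by how many bilingual dictionaries include an entry for them, and take the most widely covered concepts as core vocabulary. This is a hedged illustration of the counting step only; the names and toy data are assumptions, not the paper's method in full.

```python
from collections import Counter

def core_vocabulary(dictionaries, top_k):
    """dictionaries: list of sets, each the concepts covered by one
    bilingual dictionary. Returns the top_k concepts ranked by how many
    dictionaries cover them (a simple consensus score)."""
    counts = Counter()
    for concepts in dictionaries:
        counts.update(concepts)
    return [concept for concept, _ in counts.most_common(top_k)]

# Toy example: three dictionaries with partially overlapping coverage.
dicts = [{"water", "fire", "mother"},
         {"water", "mother"},
         {"water", "dog"}]
core = core_vocabulary(dicts, 2)
```

Concepts that nearly every dictionary bothers to cover rise to the top, which is the intuition behind treating cross-dictionary coverage as a functional definition of coreness.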
Induced Inflection-Set Keyword Search in Speech
Oliver Adams | Matthew Wiesner | Jan Trmal | Garrett Nicolai | David Yarowsky
Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology
We investigate the problem of searching for a lexeme-set in speech by searching for its inflectional variants. Experimental results indicate how lexeme-set search performance changes with the number of hypothesized inflections, while ablation experiments highlight the relative importance of different components in the lexeme-set search pipeline and the value of using curated inflectional paradigms. We provide a recipe and evaluation set for the community to use as an extrinsic measure of the performance of inflection generation approaches.