Yoann Dupont
2020
CamemBERT: a Tasty French Language Model
Louis Martin | Benjamin Muller | Pedro Javier Ortiz Suárez | Yoann Dupont | Laurent Romary | Éric de la Clergerie | Djamé Seddah | Benoît Sagot
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Pretrained language models are now ubiquitous in Natural Language Processing. Despite their success, most available models have either been trained on English data or on the concatenation of data in multiple languages. This makes practical use of such models, in all languages except English, very limited. In this paper, we investigate the feasibility of training monolingual Transformer-based language models for other languages, taking French as an example and evaluating our language models on part-of-speech tagging, dependency parsing, named entity recognition and natural language inference tasks. We show that the use of web-crawled data is preferable to the use of Wikipedia data. More surprisingly, we show that a relatively small web-crawled dataset (4GB) leads to results that are as good as those obtained using larger datasets (130+GB). Our best-performing model, CamemBERT, reaches or improves the state of the art in all four downstream tasks.
Establishing a New State-of-the-Art for French Named Entity Recognition
Pedro Javier Ortiz Suárez | Yoann Dupont | Benjamin Muller | Laurent Romary | Benoît Sagot
Proceedings of The 12th Language Resources and Evaluation Conference
The French TreeBank developed at the University Paris 7 is the main source of morphosyntactic and syntactic annotations for French. However, it does not include explicit information related to named entities, which are among the most useful pieces of information for several natural language processing tasks and applications. Moreover, no large-scale French corpus with named entity annotations contains referential information, which complements the type and the span of each mention with an indication of the entity it refers to. We have manually annotated the French TreeBank with such information, after an automatic pre-annotation step. We sketch the underlying annotation guidelines and we provide a few figures about the resulting annotations.
Co-authors
- Benjamin Muller 2
- Pedro Javier Ortiz Suárez 2
- Laurent Romary 2
- Benoît Sagot 2
- Louis Martin 1