CamemBERT: a Tasty French Language Model - Archive ouverte HAL
Preprint (2019)

CamemBERT: a Tasty French Language Model


Abstract

Pretrained language models are now ubiquitous in Natural Language Processing. Despite their success, most available models have either been trained on English data or on the concatenation of data in multiple languages. This makes practical use of such models—in all languages except English—very limited. Aiming to address this issue for French, we release CamemBERT, a French version of the Bi-directional Encoders for Transformers (BERT). We measure the performance of CamemBERT compared to multilingual models in multiple downstream tasks, namely part-of-speech tagging, dependency parsing, named-entity recognition, and natural language inference. CamemBERT improves the state of the art for most of the tasks considered. We release the pretrained model for CamemBERT hoping to foster research and downstream applications for French NLP.

Dates and versions

hal-02445946, version 1 (20-01-2020)

Cite

Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, et al. CamemBERT: a Tasty French Language Model. 2019. ⟨hal-02445946⟩