
CamemBERT: a Tasty French Language Model

Abstract: Pretrained language models are now ubiquitous in Natural Language Processing. Despite their success, most available models have been trained either on English data or on the concatenation of data in multiple languages, which makes their practical use in languages other than English very limited. Aiming to address this issue for French, we release CamemBERT, a French version of BERT (Bidirectional Encoder Representations from Transformers). We measure the performance of CamemBERT against multilingual models on multiple downstream tasks, namely part-of-speech tagging, dependency parsing, named-entity recognition, and natural language inference. CamemBERT improves the state of the art on most of the tasks considered. We release the pretrained CamemBERT model in the hope of fostering research and downstream applications for French NLP.
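The record itself does not describe how the released model is consumed. As a minimal sketch, assuming the checkpoint is distributed through the Hugging Face transformers library under the identifier "camembert-base" (a model name commonly associated with this release, not stated in this record), one could extract contextual embeddings to feed a task-specific head for the downstream tasks listed in the abstract:

```python
# Minimal sketch: loading CamemBERT for downstream French NLP.
# Assumption: the checkpoint is available on the Hugging Face Hub
# as "camembert-base" (not stated in this record).
import torch
from transformers import CamembertTokenizer, CamembertModel

tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
model = CamembertModel.from_pretrained("camembert-base")

# Encode a French sentence and run a forward pass; the resulting
# contextual embeddings can feed a task-specific head for tagging,
# parsing, named-entity recognition, or natural language inference.
inputs = tokenizer("Le camembert est délicieux !", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One vector per subword token: (batch_size, sequence_length, hidden_size).
print(outputs.last_hidden_state.shape)
```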
Document type: Preprints, Working Papers, ...

https://hal.inria.fr/hal-02445946
Contributor: Benoît Sagot
Submitted on: Monday, January 20, 2020 - 3:01:47 PM
Last modification on: Tuesday, May 26, 2020 - 7:00:03 PM


Identifiers

  • HAL Id: hal-02445946, version 1
  • arXiv: 1911.03894


Citation

Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, et al. CamemBERT: a Tasty French Language Model. 2019. ⟨hal-02445946⟩
