
First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT

Abstract: Multilingual pretrained language models have demonstrated remarkable zero-shot cross-lingual transfer capabilities. Such transfer emerges by fine-tuning on a task of interest in one language and evaluating on a distinct language not seen during fine-tuning. Despite promising results, we still lack a proper understanding of the source of this transfer. Using a novel layer ablation technique and analyses of the model's internal representations, we show that multilingual BERT, a popular multilingual language model, can be viewed as the stacking of two sub-networks: a multilingual encoder followed by a task-specific, language-agnostic predictor. While the encoder is crucial for cross-lingual transfer and remains mostly unchanged during fine-tuning, the task predictor has little impact on the transfer and can be reinitialized during fine-tuning. We present extensive experiments with three distinct tasks, seventeen typologically diverse languages and multiple domains to support our hypothesis.
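As a rough illustration of the layer-ablation idea sketched in the abstract, the snippet below re-initializes the top k Transformer layers of multilingual BERT before fine-tuning, keeping the lower, pretrained "aligner" layers intact. This is a minimal sketch, not the authors' code: the model name, the task head, the value of k, and the use of PyTorch's default reset_parameters initialization are assumptions made for illustration.

    # Minimal sketch (assumed setup, not the paper's implementation):
    # re-initialize the top-k Transformer layers of multilingual BERT before
    # fine-tuning, keeping the lower layers' pretrained multilingual weights.
    import torch
    from transformers import BertForSequenceClassification

    model = BertForSequenceClassification.from_pretrained(
        "bert-base-multilingual-cased", num_labels=2  # num_labels is illustrative
    )

    k = 4  # number of top layers to re-initialize; the value is an assumption
    for layer in model.bert.encoder.layer[-k:]:
        for module in layer.modules():
            # Reset affine sub-modules with PyTorch's default initializers
            # (a stand-in for whatever initialization scheme the paper used).
            if isinstance(module, (torch.nn.Linear, torch.nn.LayerNorm)):
                module.reset_parameters()

    # The model would then be fine-tuned on the source language (e.g. English)
    # and evaluated zero-shot on the other languages; comparing transfer with
    # and without such re-initialization is the spirit of the layer ablation
    # described in the abstract.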
Document type: Preprints, Working Papers, ...

https://hal.inria.fr/hal-03161685
Contributor: Djamé Seddah
Submitted on: Monday, March 8, 2021 - 12:20:52 AM
Last modification on: Thursday, February 3, 2022 - 11:13:50 AM

Identifiers

  • HAL Id: hal-03161685, version 1
  • arXiv: 2101.11109

Citation

Benjamin Muller, Yanai Elazar, Benoît Sagot, Djamé Seddah. First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT. 2021. ⟨hal-03161685⟩
