Probing for Bridging Inference in Transformer Language Models

Onkar Pandit 1, Yufang Hou 2
1 MAGNET - Machine Learning in Information Networks
Inria Lille - Nord Europe, CRIStAL - Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189
Abstract : We probe pre-trained transformer language models for bridging inference. We first investigate individual attention heads in BERT and observe that attention heads at higher layers focus on bridging relations more prominently than those at lower and middle layers; moreover, a few specific attention heads concentrate consistently on bridging. More importantly, in our second approach we consider language models as a whole and formulate bridging anaphora resolution as a masked token prediction task (Of-Cloze test). Our formulation produces optimistic results without any fine-tuning, which indicates that pre-trained language models substantially capture bridging inference. Our further investigation shows that the distance between anaphor and antecedent, as well as the context provided to the language model, plays an important role in the inference.
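The following is a minimal sketch, not the authors' code, of how an Of-Cloze-style probe can be run with an off-the-shelf masked language model: the bridging anaphor is followed by "of the [MASK]" and the model is asked to fill the mask, ideally with the antecedent. It assumes the Hugging Face transformers fill-mask pipeline with bert-base-uncased; the example context and the candidate antecedent list are purely illustrative, not taken from the paper's corpus.

```python
# Sketch of an Of-Cloze probe: score candidate antecedents for a
# bridging anaphor by masked token prediction (illustrative only).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

context = "I walked past a house yesterday."
# The bridging anaphor "the door" is linked to its antecedent via
# the inserted "of the [MASK]" pattern appended to the context.
query = context + " The door of the [MASK] was painted red."

# Hypothetical candidate antecedent mentions drawn from the context;
# restricting the prediction to these turns fill-in-the-blank into ranking.
candidates = ["house", "garden", "street"]

for pred in fill_mask(query, targets=candidates):
    print(f"{pred['token_str']:>8s}  score={pred['score']:.4f}")
```

In this sketch the candidate with the highest masked-token probability is taken as the predicted antecedent, which mirrors the idea of probing the language model without any fine-tuning.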
Document type :
Conference papers

https://hal.inria.fr/hal-03284110
Contributor : Team Magnet
Submitted on : Monday, July 12, 2021 - 3:57:45 PM
Last modification on : Thursday, March 24, 2022 - 3:42:54 AM
Long-term archiving on : Wednesday, October 13, 2021 - 6:53:42 PM

File

BridgingProbingBERT.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-03284110, version 1

Citation

Onkar Pandit, Yufang Hou. Probing for Bridging Inference in Transformer Language Models. NAACL 2021 - Annual Conference of the North American Chapter of the Association for Computational Linguistics, Jun 2021, Online Conference, Mexico. ⟨hal-03284110⟩
