QUACKIE: A NLP Classification Task With Ground Truth Explanations

Yves Rychener, Xavier Renard, Djamé Seddah, Pascal Frossard, Marcin Detyniecki

Abstract

NLP interpretability aims to increase trust in model predictions, which makes evaluating interpretability approaches a pressing issue. Several datasets exist for evaluating NLP interpretability, but their dependence on human-provided ground truths raises questions about whether they are unbiased. In this work, we take a different approach and formulate a specific classification task by diverting question-answering datasets. For this custom classification task, the interpretability ground truth arises directly from the definition of the classification problem. We use this method to propose a benchmark and lay the groundwork for future research in NLP interpretability by evaluating a wide range of current state-of-the-art methods.
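To make the abstract's construction concrete, the following is a minimal sketch, not the authors' code: assuming each QA record provides a question, a context paragraph, and the character offset of the answer (as in SQuAD-style datasets), one can derive a binary label (does the context answer the question?) whose ground-truth rationale is, by construction, the sentence containing the answer span. The function name and the naive sentence splitting are illustrative assumptions.

def make_classification_example(question, context, answer_start):
    """Build one classification example from a QA record.

    Label: 1 if the context answers the question, else 0.
    Ground-truth rationale: the sentence containing the answer span,
    which follows directly from how the task is defined.
    """
    sentences = context.split(". ")  # naive sentence segmentation for illustration
    label = answer_start is not None
    explanation = None
    if label:
        offset = 0
        for sent in sentences:
            # +2 accounts for the ". " delimiter removed by split()
            if offset <= answer_start < offset + len(sent) + 2:
                explanation = sent  # the sentence holding the answer
                break
            offset += len(sent) + 2
    return {
        "text": (question, context),
        "label": int(label),
        "ground_truth_rationale": explanation,
    }

# Illustrative usage: an unanswerable pair yields label 0 and no rationale,
# so no human annotation of the explanation is ever needed.
example = make_classification_example("Who wrote Hamlet?", "Paris is in France. It is a city.", None)

Because the rationale is fixed by the task definition rather than by annotators, an interpretability method can be scored by how well its attributions recover that sentence, which is the evaluation the benchmark builds on.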

Dates and versions

hal-03138351, version 1 (11-02-2021)

Identifiers

Cite

Yves Rychener, Xavier Renard, Djamé Seddah, Pascal Frossard, Marcin Detyniecki. QUACKIE: A NLP Classification Task With Ground Truth Explanations. 2021. ⟨hal-03138351⟩