QUACKIE: A NLP Classification Task With Ground Truth Explanations - Inria - Institut national de recherche en sciences et technologies du numérique
Preprint, Working Paper. Year: 2021


Abstract

NLP interpretability aims to increase trust in model predictions. This makes evaluating interpretability approaches a pressing issue. Several datasets exist for evaluating NLP interpretability, but their dependence on human-provided ground truths raises questions about their unbiasedness. In this work, we take a different approach and formulate a specific classification task by diverting question-answering datasets. For this custom classification task, the interpretability ground truth arises directly from the definition of the classification problem. We use this method to propose a benchmark and lay the groundwork for future research in NLP interpretability by evaluating a wide range of current state-of-the-art methods.
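The construction described in the abstract can be sketched roughly as follows. This is a minimal illustration of the general idea only (deriving a classification example, with a sentence-level explanation ground truth, from a QA example); the exact task definition, labels, and helper names here are assumptions, not the paper's implementation:

```python
# Hypothetical sketch: a QA example supplies a question, a context, and an
# answer span. From it we derive a classification input whose interpretability
# ground truth is the sentence containing the answer -- no human annotation of
# explanations is needed. (Function names and the label scheme are illustrative
# assumptions, not taken from the paper.)

def split_sentences(text):
    # Naive sentence splitter, for illustration only.
    return [s.strip() + "." for s in text.split(".") if s.strip()]

def qa_to_classification(question, context, answer):
    """Build a classification example with a ground-truth explanation."""
    sentences = split_sentences(context)
    # Sentences containing the answer span serve as the ground-truth rationale.
    rationale = [i for i, s in enumerate(sentences) if answer in s]
    return {
        "input": question + " " + context,
        "label": 1 if rationale else 0,  # does the context answer the question?
        "ground_truth_sentences": rationale,
    }

example = qa_to_classification(
    "Where is the Eiffel Tower?",
    "Paris is the capital of France. The Eiffel Tower is located in Paris.",
    "Eiffel Tower",
)
```

An interpretability method applied to the resulting classifier can then be scored by how well its highlighted sentences match `ground_truth_sentences`, which follows from the task definition rather than from human judgment.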

Dates and versions

hal-03138351, version 1 (11-02-2021)

Identifiers

Cite

Yves Rychener, Xavier Renard, Djamé Seddah, Pascal Frossard, Marcin Detyniecki. QUACKIE: A NLP Classification Task With Ground Truth Explanations. 2021. ⟨hal-03138351⟩