QUACKIE: A NLP Classification Task With Ground Truth Explanations

Abstract: NLP interpretability aims to increase trust in model predictions, which makes evaluating interpretability approaches a pressing issue. Several datasets exist for evaluating NLP interpretability, but their dependence on human-provided ground truths raises questions about their objectivity. In this work, we take a different approach and formulate a specific classification task by repurposing question-answering datasets. For this custom classification task, the interpretability ground truth arises directly from the definition of the classification problem. We use this method to propose a benchmark and lay the groundwork for future research in NLP interpretability by evaluating a wide range of current state-of-the-art methods.
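
To make the construction concrete, below is a minimal sketch of how a single SQuAD-style QA example could be recast as a classification instance whose explanation ground truth follows from the task definition itself. It assumes the public SQuAD field layout (question, context, answers/answer_start); build_instance and the output keys are hypothetical names for illustration, not the authors' implementation.

    # Hypothetical sketch: recast one SQuAD-style QA example as a
    # classification instance whose interpretability ground truth (the
    # sentence containing the answer) follows from the task definition,
    # not from human annotation. Names are illustrative only.
    def build_instance(qa_example):
        context = qa_example["context"]
        question = qa_example["question"]
        answer_start = qa_example["answers"]["answer_start"][0]

        # Naive sentence split on ". "; a real pipeline would use a
        # proper sentence tokenizer.
        offsets, pos = [], 0
        for sentence in context.split(". "):
            offsets.append(pos)
            pos += len(sentence) + 2  # +2 for the removed ". " separator

        # Ground-truth explanation: index of the sentence in which the
        # answer span begins.
        gt_sentence = max(i for i, off in enumerate(offsets)
                          if off <= answer_start)

        return {
            "text": question + " [SEP] " + context,  # classifier input
            "label": 1,                              # context answers the question
            "ground_truth": gt_sentence,             # sentence-level explanation
        }

    example = {
        "question": "Where is the Eiffel Tower?",
        "context": "The Eiffel Tower is in Paris. It was built in 1889.",
        "answers": {"text": ["Paris"], "answer_start": [23]},
    }
    print(build_instance(example))  # "ground_truth": 0 (first sentence)

Because the label and the explanation are both derived mechanically from the QA annotations, no additional human judgment about "what counts as a good explanation" enters the benchmark.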
Document type: Preprints, Working Papers, ...

https://hal.inria.fr/hal-03138351
Contributor: Djamé Seddah
Submitted on: Thursday, February 11, 2021 - 10:14:44 AM
Last modification on: Saturday, December 4, 2021 - 4:08:24 AM

Identifiers

  • HAL Id: hal-03138351, version 1
  • arXiv: 2012.13190

Citation

Yves Rychener, Xavier Renard, Djamé Seddah, Pascal Frossard, Marcin Detyniecki. QUACKIE: A NLP Classification Task With Ground Truth Explanations. 2021. ⟨hal-03138351⟩
