Conference paper, 2019

New Frontiers in Explainable AI: Understanding the GI to Interpret the GO

Abstract

In this paper, we focus on the importance of interpreting the quality of the input of predictive models (potentially a GI, i.e., Garbage In) in order to make sense of the reliability of their output (potentially a GO, Garbage Out) in support of human decision making, especially in critical domains like medicine. To this end, we propose a framework that distinguishes between the Gold Standard (or Ground Truth) and the set of annotations from which it is derived, together with a set of quality dimensions that help assess and interpret the AI advice: fineness, trueness, representativeness, conformity, dryness. We then discuss the implications for obtaining more informative training sets and for the design of more usable Decision Support Systems.
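The paper itself ships no code; purely as an illustration of the distinction the abstract draws between a set of annotations and the Gold Standard derived from it, here is a minimal, hypothetical Python sketch: labels are aggregated by majority vote, and a simple mean-agreement score serves as a crude proxy for one facet of input quality (low agreement hints at a Garbage-In risk). The function names, the aggregation rule, and the toy data are assumptions for illustration, not the authors' method.

```python
from collections import Counter

def majority_gold_standard(annotations):
    """Derive a gold-standard label per item by majority vote.

    `annotations` maps each item id to the list of labels assigned
    by individual raters (hypothetical input format, not the paper's).
    """
    gold = {}
    for item, labels in annotations.items():
        label, _ = Counter(labels).most_common(1)[0]
        gold[item] = label
    return gold

def mean_agreement(annotations):
    """Average fraction of raters agreeing with the majority label:
    a crude proxy for how reliable the derived gold standard is."""
    total = 0.0
    for labels in annotations.values():
        _, count = Counter(labels).most_common(1)[0]
        total += count / len(labels)
    return total / len(annotations)

# Toy example: three raters labelling four cases.
annotations = {
    "case-1": ["benign", "benign", "benign"],
    "case-2": ["benign", "malignant", "benign"],
    "case-3": ["malignant", "malignant", "benign"],
    "case-4": ["malignant", "malignant", "malignant"],
}
print(majority_gold_standard(annotations))
print(f"mean agreement: {mean_agreement(annotations):.2f}")  # 0.83
```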
Main file
485369_1_En_3_Chapter.pdf (647.53 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02520038, version 1 (26-03-2020)

Licence

Attribution

Identifiers

HAL Id: hal-02520038
DOI: 10.1007/978-3-030-29726-8_3

Cite

Federico Cabitza, Andrea Campagner, Davide Ciucci. New Frontiers in Explainable AI: Understanding the GI to Interpret the GO. 3rd International Cross-Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE), Aug 2019, Canterbury, United Kingdom. pp.27-47, ⟨10.1007/978-3-030-29726-8_3⟩. ⟨hal-02520038⟩