
New Frontiers in Explainable AI: Understanding the GI to Interpret the GO

Abstract: In this paper we focus on the importance of interpreting the quality of the input of predictive models (potentially a GI, i.e., a Garbage In) to make sense of the reliability of their output (potentially a GO, a Garbage Out) in support of human decision making, especially in critical domains like medicine. To this end, we propose a framework in which we distinguish between the Gold Standard (or Ground Truth) and the set of annotations from which it is derived, and we identify a set of quality dimensions that help to assess and interpret the AI advice: fineness, trueness, representativeness, conformity, dryness. We then discuss the implications for obtaining more informative training sets and for the design of more usable Decision Support Systems.
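The framework itself is conceptual, but the distinction the abstract draws between the Gold Standard and the set of annotations it is derived from can be made concrete with a short sketch. The following Python snippet is a minimal, hypothetical example, not taken from the paper: it derives gold labels by simple majority vote over annotator labels and reports per-item agreement, showing how identical "ground truth" labels can rest on very different input quality. The function name, the voting rule, and the toy data are assumptions for illustration only.

from collections import Counter

def majority_gold_standard(annotations):
    # Derive a gold-standard label per item by majority vote.
    # `annotations`: one list of annotator labels per item.
    # Returns, for every item, the winning label and the fraction
    # of annotators who agreed with it.
    gold, agreement = [], []
    for labels in annotations:
        label, votes = Counter(labels).most_common(1)[0]
        gold.append(label)
        agreement.append(votes / len(labels))
    return gold, agreement

# Toy data: three annotators labelling four cases as 0 (benign) or 1 (malignant).
annotations = [
    [1, 1, 1],  # unanimous
    [1, 1, 0],  # one dissenter
    [0, 1, 1],  # one dissenter
    [1, 0, 0],  # contested: the gold label hides real disagreement
]

gold, agreement = majority_gold_standard(annotations)
for i, (g, a) in enumerate(zip(gold, agreement)):
    print(f"case {i}: gold = {g}, agreement = {a:.2f}")

Two cases with the same gold label can thus differ sharply in how trustworthy that label is; inspecting such agreement scores is one simple way to probe the "GI" before relying on the "GO".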
Document type: Conference papers

Cited literature: 42 references

https://hal.inria.fr/hal-02520038
Contributor: Hal Ifip
Submitted on: Thursday, March 26, 2020 - 1:48:52 PM
Last modification on: Friday, June 11, 2021 - 9:26:03 AM
Long-term archiving on: Saturday, June 27, 2020 - 2:35:23 PM

File

Restricted access
To satisfy the distribution rights of the publisher, the document is embargoed until 2022-01-01.


Licence
Distributed under a Creative Commons Attribution 4.0 International License

Citation

Federico Cabitza, Andrea Campagner, Davide Ciucci. New Frontiers in Explainable AI: Understanding the GI to Interpret the GO. 3rd International Cross-Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE), Aug 2019, Canterbury, United Kingdom. pp.27-47, ⟨10.1007/978-3-030-29726-8_3⟩. ⟨hal-02520038⟩
