Conference paper, Year: 2020

Back to the Feature: A Neural-Symbolic Perspective on Explainable AI

Abstract

We discuss a perspective aimed at making black-box models more explainable, within the eXplainable AI (XAI) strand of research. We argue that the traditional end-to-end learning approach used to train Deep Learning (DL) models does not fit the tenets and aims of XAI. Going back to the idea of hand-crafted feature engineering, we suggest a hybrid DL approach to XAI: instead of employing end-to-end learning, we suggest using DL for the automatic detection of meaningful, hand-crafted high-level symbolic features, which are then used by a standard, more interpretable learning model. We exemplify this hybrid learning model in a proof of concept, based on the recently proposed Kandinsky Patterns benchmark, which focuses on the symbolic learning part of the pipeline by using both Logic Tensor Networks and interpretable rule ensembles. After showing that the proposed methodology is able to deliver highly accurate and explainable models, we discuss potential implementation issues and future directions that can be explored.
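To make the two-stage pipeline described above concrete, here is a minimal sketch, not the authors' implementation: `detect_symbolic_features` is a hypothetical stand-in for a trained neural detector of symbolic features (e.g., shape and colour counts in a Kandinsky Pattern), and a scikit-learn decision tree substitutes for the rule ensembles and Logic Tensor Networks used in the paper.

```python
# Hedged sketch of the hybrid neural-symbolic pipeline; all names below are
# illustrative assumptions, not the paper's code.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

FEATURES = ["n_squares", "n_circles", "n_triangles",
            "n_red", "n_yellow", "n_blue"]
rng = np.random.default_rng(0)

def detect_symbolic_features(image: np.ndarray) -> np.ndarray:
    """Stage 1 (mocked): map a raw image to high-level symbolic features.
    A trained CNN would consume `image` here; random counts are returned
    only so the sketch runs end-to-end."""
    return rng.integers(0, 5, size=len(FEATURES))

# Build a toy dataset of images and extract symbolic features from each,
# replacing end-to-end learning on raw pixels.
images = [rng.random((64, 64, 3)) for _ in range(200)]
X = np.stack([detect_symbolic_features(img) for img in images])
# Toy target concept: "the figure contains at least two squares".
y = (X[:, 0] >= 2).astype(int)

# Stage 2: fit an interpretable learner on the symbolic features; the
# printed rules make the recovered concept directly inspectable.
clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(clf, feature_names=FEATURES))
```

Because the second stage operates only on named symbolic features, its output (here, the printed decision rules) can be read directly as an explanation, which is the property the hybrid design is after.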
Main file

497121_1_En_3_Chapter.pdf (456.88 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03414734, version 1 (04-11-2021)

License

Attribution (CC BY)

Identifiers

HAL Id: hal-03414734
DOI: 10.1007/978-3-030-57321-8_3

Cite

Andrea Campagner, Federico Cabitza. Back to the Feature: A Neural-Symbolic Perspective on Explainable AI. 4th International Cross-Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE), Aug 2020, Dublin, Ireland. pp.39-55, ⟨10.1007/978-3-030-57321-8_3⟩. ⟨hal-03414734⟩