Conference paper, 2019

Machine Learning Explainability Through Comprehensible Decision Trees

Abstract

The role of decisions made by machine learning algorithms in our lives is ever increasing. In reaction to this phenomenon, the European General Data Protection Regulation establishes that citizens have the right to receive an explanation of automated decisions affecting them. For explainability to be scalable, it should be possible to derive explanations in an automated way. A common approach is to use simpler, more intuitive decision algorithms to build a surrogate model of the black-box model (for example a deep learning algorithm) used to make a decision. Yet, there is a risk that the surrogate model is too large to be truly comprehensible to humans. We focus on explaining black-box models by using decision trees of limited size as a surrogate model. Specifically, we propose an approach based on microaggregation to achieve a trade-off between comprehensibility and representativeness of the surrogate model on the one hand and privacy of the subjects used for training the black-box model on the other.
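The surrogate-tree idea described above can be sketched in a few lines. The snippet below is a hypothetical illustration using scikit-learn, not the authors' exact algorithm: a random forest stands in for the black box, and the microaggregation step is simplified to a fixed-size partition along the first principal direction, with each group of at least k records replaced by its centroid before a small decision tree is fitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# 1. Train an opaque "black-box" model on synthetic data.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# 2. Simplified microaggregation: sort records along the first principal
#    direction, partition them into groups of k, and keep the centroids.
#    (Real microaggregation algorithms minimise within-group spread.)
k = 5
direction = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)[2][0]
order = np.argsort(X @ direction)
groups = [order[i:i + k] for i in range(0, len(order) - len(order) % k, k)]
centroids = np.array([X[g].mean(axis=0) for g in groups])

# 3. Label the centroids with the black box and fit a small surrogate tree,
#    whose limited depth keeps it comprehensible to humans.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(centroids, black_box.predict(centroids))

# Fidelity: how often the surrogate agrees with the black box on the data.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
```

Because only centroids of groups of at least k records are exposed to the surrogate, no individual training record is directly reproduced, which is the privacy side of the trade-off the paper discusses; `max_depth` controls the comprehensibility side.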
Main file: 485369_1_En_2_Chapter.pdf (2.18 MB). Origin: files produced by the author(s).

Dates and versions

hal-02520062 , version 1 (26-03-2020)

License

Attribution

Identifiers

Cite

Alberto Blanco-Justicia, Josep Domingo-Ferrer. Machine Learning Explainability Through Comprehensible Decision Trees. 3rd International Cross-Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE), Aug 2019, Canterbury, United Kingdom. pp.15-26, ⟨10.1007/978-3-030-29726-8_2⟩. ⟨hal-02520062⟩
64 views
269 downloads
