
Machine Learning Explainability Through Comprehensible Decision Trees

Abstract: The role of decisions made by machine learning algorithms in our lives is ever increasing. In response to this phenomenon, the European General Data Protection Regulation establishes that citizens have the right to receive an explanation of automated decisions affecting them. For explainability to be scalable, it should be possible to derive explanations automatically. A common approach is to use simpler, more intuitive decision algorithms to build a surrogate model of the black-box model (for example, a deep learning model) used to make a decision. Yet there is a risk that the surrogate model is too large to be truly comprehensible to humans. We focus on explaining black-box models by using decision trees of limited size as surrogate models. Specifically, we propose an approach based on microaggregation to achieve a trade-off between comprehensibility and representativeness of the surrogate model on the one side, and privacy of the subjects used for training the black-box model on the other side.
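The microaggregation idea the abstract refers to can be illustrated with a minimal sketch: records are grouped into clusters of at least k similar records, and each record is replaced by its group centroid, so released training data never isolates fewer than k subjects. The sketch below is a hypothetical 1-D illustration under assumed simplifications (sorted 1-D values, fixed group size k, mean as centroid); it is not the authors' algorithm from the paper.

```python
from statistics import mean

def microaggregate(values, k=3):
    """Toy 1-D microaggregation sketch (assumed simplification, not the
    paper's method): sort the values, cut them into groups of at least k
    neighbours, and replace every value with its group mean so that no
    released value corresponds to fewer than k original records."""
    if k < 1 or len(values) < k:
        raise ValueError("need at least k values")
    n = len(values)
    # Work on sort order so groups contain similar (adjacent) values.
    order = sorted(range(n), key=lambda i: values[i])
    result = [None] * n
    i = 0
    while i < n:
        # The last group absorbs the remainder, keeping every group >= k.
        j = n if n - i < 2 * k else i + k
        centroid = mean(values[idx] for idx in order[i:j])
        for idx in order[i:j]:
            result[idx] = centroid
        i = j
    return result

# Two clear clusters of 3 collapse onto their means:
# microaggregate([1, 2, 3, 10, 11, 12], k=3) -> [2, 2, 2, 11, 11, 11]
```

A surrogate decision tree trained on such aggregated data sees at most one distinct value per group of k subjects, which is how comprehensibility (fewer distinct splits) and privacy can be traded off against representativeness.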
Document type: Conference papers

Cited literature: 14 references
Contributor: Hal Ifip
Submitted on: Thursday, March 26, 2020 - 1:52:09 PM
Last modification on: Tuesday, March 31, 2020 - 3:50:22 PM
Long-term archiving on: Saturday, June 27, 2020 - 2:31:22 PM


Files produced by the author(s)


Distributed under a Creative Commons Attribution 4.0 International License



Alberto Blanco-Justicia, Josep Domingo-Ferrer. Machine Learning Explainability Through Comprehensible Decision Trees. 3rd International Cross-Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE), Aug 2019, Canterbury, United Kingdom. pp.15-26, ⟨10.1007/978-3-030-29726-8_2⟩. ⟨hal-02520062⟩


