
Interpretability of a Deep Learning Model for Rodents Brain Semantic Segmentation

Abstract: In recent years, as machine learning research has matured into real products and applications, some of them critical, the need for additional model evaluation mechanisms has become apparent. Commonly used metrics such as accuracy or the F-score are no longer sufficient in the deployment phase. This has fostered the emergence of methods for model interpretability. In this work, we discuss an approach to improving a model's predictions by interpreting what it has learned and using that knowledge in a second phase. As a case study, we use the semantic segmentation of rodent brain tissue in Magnetic Resonance Imaging. By analogy with the human visual system, the experiment provides a way to draw deeper conclusions about a scene by carefully observing what attracts more attention after a first glance en passant.
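The abstract does not spell out the mechanism of the second phase, but the "second glance" idea it describes can be illustrated with a small, hypothetical sketch: after a first segmentation pass produces per-pixel class probabilities, an uncertainty map (here Shannon entropy, an assumption, not necessarily the authors' measure of attention) selects the most ambiguous regions for closer inspection. The function names `entropy_map` and `second_pass_regions` are illustrative and do not come from the paper.

```python
import numpy as np

def entropy_map(probs):
    """Per-pixel Shannon entropy of class probabilities, shape (H, W, C).

    High entropy marks pixels the first-pass model is unsure about.
    """
    eps = 1e-12  # avoid log(0)
    return -np.sum(probs * np.log(probs + eps), axis=-1)

def second_pass_regions(probs, frac=0.2):
    """Boolean mask of the top `frac` most ambiguous pixels.

    These are the regions a second, more attentive pass would revisit.
    """
    h = entropy_map(probs)
    thresh = np.quantile(h, 1.0 - frac)
    return h >= thresh

# Toy example: three confident pixels, one maximally uncertain one.
probs = np.array([[[0.99, 0.01], [0.99, 0.01]],
                  [[0.99, 0.01], [0.50, 0.50]]])
mask = second_pass_regions(probs, frac=0.25)
# The uncertain pixel at (1, 1) is the one flagged for a second look.
```

In a real pipeline, `probs` would be the softmax output of the segmentation network, and the masked regions could be re-examined at higher resolution or with a refinement model.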
Document type: Conference papers
Submitted on: Thursday, October 24, 2019 - 12:52:12 PM
Last modification on: Thursday, October 24, 2019 - 12:54:33 PM
Long-term archiving on: Saturday, January 25, 2020 - 3:20:16 PM




Distributed under a Creative Commons Attribution 4.0 International License



Leonardo Nogueira Matos, Mariana Fontainhas Rodrigues, Ricardo Magalhães, Victor Alves, Paulo Novais. Interpretability of a Deep Learning Model for Rodents Brain Semantic Segmentation. 15th IFIP International Conference on Artificial Intelligence Applications and Innovations (AIAI), May 2019, Hersonissos, Greece. pp.307-318, ⟨10.1007/978-3-030-19823-7_25⟩. ⟨hal-02331345⟩


