
Interpretability of a Deep Learning Model for Rodents Brain Semantic Segmentation

Abstract: In recent years, as machine learning research has turned into real products and applications, some of them safety-critical, it has become clear that additional model evaluation mechanisms are needed. Commonly used metrics such as accuracy or the F-score are no longer sufficient at deployment time. This has fostered the emergence of methods for model interpretability. In this work, we discuss an approach that improves a model's predictions by interpreting what it has learned and using that knowledge in a second phase. As a case study we use the semantic segmentation of rodent brain tissue in Magnetic Resonance Imaging. By analogy with the human visual system, the experiment provides a way to draw more in-depth conclusions about a scene by carefully observing what attracts the most attention after a first glance en passant.
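The "look where attention falls after a first glance" idea in the abstract resembles standard post-hoc interpretability probes. As a minimal, hedged sketch (not the paper's actual method or architecture), the following NumPy-only occlusion-sensitivity routine shows one way to ask a trained segmentation model which image regions it relies on: slide an occluding patch over the input and record how much the model's score drops. Here `predict_fn` and `toy_score` are hypothetical stand-ins for a real model's scoring function.

```python
import numpy as np

def occlusion_sensitivity(image, predict_fn, patch=8, stride=8, fill=0.0):
    """Coarse saliency map: occlude each patch of `image` in turn and
    record the drop in the model's score caused by that occlusion."""
    base = predict_fn(image)            # score on the unmodified image
    h, w = image.shape
    heat = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill
            # Larger drop => the occluded region mattered more.
            heat[y:y + patch, x:x + patch] = base - predict_fn(occluded)
    return heat

# Hypothetical stand-in for a trained segmentation model: it scores the
# mean intensity of a fixed "tissue" region of the image.
def toy_score(img):
    return float(img[8:24, 8:24].mean())

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                   # synthetic bright region
heat = occlusion_sensitivity(img, toy_score)
```

Regions whose occlusion lowers the score light up in `heat`; in a second phase, such maps could guide where the model should look more carefully.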
Document type :
Conference papers

Cited literature [16 references]

https://hal.inria.fr/hal-02331345
Contributor: Hal Ifip
Submitted on: Thursday, October 24, 2019 - 12:52:12 PM
Last modification on: Thursday, October 24, 2019 - 12:54:33 PM
Long-term archiving on: Saturday, January 25, 2020 - 3:20:16 PM

File

 Restricted access
To satisfy the distribution rights of the publisher, the document is embargoed until: 2022-01-01


Licence


Distributed under a Creative Commons Attribution 4.0 International License

Citation

Leonardo Matos, Mariana Rodrigues, Ricardo Magalhães, Victor Alves, Paulo Novais. Interpretability of a Deep Learning Model for Rodents Brain Semantic Segmentation. 15th IFIP International Conference on Artificial Intelligence Applications and Innovations (AIAI), May 2019, Hersonissos, Greece. pp.307-318, ⟨10.1007/978-3-030-19823-7_25⟩. ⟨hal-02331345⟩
