Interpretability of a Deep Learning Model for Rodents Brain Semantic Segmentation
Conference paper, 2019


Abstract

In recent years, as machine learning research has matured into real products and applications, some of them critical, it has become clear that additional model evaluation mechanisms are needed. Commonly used metrics such as accuracy or the F-score are no longer sufficient in the deployment phase. This has fostered the emergence of methods for model interpretability. In this work, we discuss an approach to improving a model's predictions by interpreting what it has learned and using that knowledge in a second phase. As a case study, we use the semantic segmentation of rodent brain tissue in Magnetic Resonance Imaging. By analogy with the human visual system, the experiment provides a way to draw more in-depth conclusions about a scene by carefully observing what attracts more attention after a first glance en passant.
Main file: 483292_1_En_25_Chapter.pdf (633.39 KB). Origin: files produced by the author(s).

Dates and versions

hal-02331345, version 1 (24-10-2019)

License

Attribution


Cite

Leonardo Nogueira Matos, Mariana Fontainhas Rodrigues, Ricardo Magalhães, Victor Alves, Paulo Novais. Interpretability of a Deep Learning Model for Rodents Brain Semantic Segmentation. 15th IFIP International Conference on Artificial Intelligence Applications and Innovations (AIAI), May 2019, Hersonissos, Greece. pp.307-318, ⟨10.1007/978-3-030-19823-7_25⟩. ⟨hal-02331345⟩

