
Visualization approach to assess the robustness of neural networks for medical image classification

Elina Thibeau-Sutre 1 Olivier Colliot 1 Didier Dormont 1, 2 Ninon Burgos 1
1 ARAMIS - Algorithms, models and methods for images and signals of the human brain
SU - Sorbonne Université, Inria de Paris, ICM - Institut du Cerveau et de la Moelle Épinière = Brain and Spine Institute
Abstract : The use of neural networks for diagnosis classification is becoming increasingly prevalent in the medical imaging community. However, the outputs of deep learning methods remain hard to explain. Another difficulty is choosing among the large number of techniques developed to analyze how networks learn, as they all present different limitations. In this paper, we extended the framework of Fong and Vedaldi [IEEE International Conference on Computer Vision (ICCV), 2017] to visualize the training of convolutional neural networks (CNNs) on 3D quantitative neuroimaging data. Our application focuses on the detection of Alzheimer’s disease with gray matter probability maps extracted from structural MRI. We first assessed the robustness of the visualization method by studying the coherence of the longitudinal patterns and regions identified by the network. We then studied the stability of the CNN training by computing visualization-based similarity indexes between different re-runs of the CNN. We demonstrated that the areas identified by the CNN were consistent with what is known of Alzheimer’s disease and that the visualization approach extracts coherent longitudinal patterns. We also showed that the CNN training is not stable and that the identified areas mainly depend on the initialization and the training process. This issue may affect many other medical studies that apply deep learning to datasets in which the number of samples is small and the data dimension is high. This means that it may not yet be possible to rely on deep learning to detect stable regions of interest in this field.
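The stability analysis described above compares the brain regions highlighted by different re-runs of the same CNN. As a minimal sketch of one possible visualization-based similarity index, the snippet below computes a Dice overlap between the most salient voxels of two 3D attribution maps; the function name, the top-fraction threshold, and the choice of Dice are illustrative assumptions, not necessarily the exact index used in the paper.

```python
import numpy as np

def mask_similarity(map_a, map_b, top_fraction=0.05):
    """Dice overlap between the top `top_fraction` voxels of two saliency maps.

    Illustrative index: binarize each 3D map by keeping its most salient
    voxels, then measure how much the two binary masks agree.
    """
    k = max(1, int(top_fraction * map_a.size))
    # Threshold at the k-th largest value of each map.
    thr_a = np.partition(map_a.ravel(), -k)[-k]
    thr_b = np.partition(map_b.ravel(), -k)[-k]
    bin_a = map_a >= thr_a
    bin_b = map_b >= thr_b
    inter = np.logical_and(bin_a, bin_b).sum()
    # Dice coefficient: 1.0 for identical masks, 0.0 for disjoint ones.
    return 2.0 * inter / (bin_a.sum() + bin_b.sum())
```

An index of 1.0 across re-runs would indicate that training converges to the same regions regardless of initialization; values near 0 reflect the instability reported in the abstract.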
Document type :
Conference papers

Cited literature [18 references]

https://hal.archives-ouvertes.fr/hal-02370532
Contributor : Elina Thibeau-Sutre
Submitted on : Friday, February 7, 2020 - 1:40:11 PM
Last modification on : Friday, May 20, 2022 - 11:06:54 AM

File

Submission.pdf
Files produced by the author(s)

Identifiers

HAL Id : hal-02370532, version 3
DOI : 10.1117/12.2548952
Citation

Elina Thibeau-Sutre, Olivier Colliot, Didier Dormont, Ninon Burgos. Visualization approach to assess the robustness of neural networks for medical image classification. SPIE Medical Imaging 2020, Feb 2020, Houston, United States. ⟨10.1117/12.2548952⟩. ⟨hal-02370532v3⟩

Metrics

Record views: 220
File downloads: 61