Journal article in Signal Processing, 2022

Multimodal image fusion via coupled feature learning

Farshad Veshki
Nora Ouzir
Sergiy Vorobyov
Esa Ollila

Abstract

This paper presents a multimodal image fusion method using a novel decomposition model based on coupled dictionary learning. The proposed method is general and can be applied to a variety of imaging modalities. In particular, the images to be fused are decomposed into correlated and uncorrelated components, using sparse representations with identical supports and a Pearson correlation constraint, respectively. The resulting optimization problem is solved by an alternating minimization algorithm. In contrast to other learning-based fusion methods, the proposed approach requires no training data; the correlated features are extracted online from the data itself. By preserving the uncorrelated components in the fused images, the proposed method significantly improves on current fusion approaches in terms of maintaining texture details and modality-specific information. The maximum-absolute-value rule is applied to the correlated components only. This yields enhanced contrast resolution without causing intensity attenuation or loss of important information. Experimental results show that the proposed method achieves superior performance in both visual and objective evaluations compared with state-of-the-art image fusion methods.
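The fusion step described in the abstract is simple enough to sketch. The snippet below is a minimal illustration, not the authors' implementation: it assumes each source image has already been decomposed into a correlated component (c) and an uncorrelated component (u), applies the maximum-absolute-value rule to the correlated parts only, and carries the uncorrelated parts over to the fused result. The function names, the additive recombination, and the Pearson correlation helper are illustrative assumptions.

```python
import numpy as np

def pearson_corr(u, v):
    # Pearson correlation between two flattened components;
    # the small epsilon guards against division by zero.
    u = u.ravel() - u.mean()
    v = v.ravel() - v.mean()
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def fuse_max_abs(c1, c2):
    # Maximum-absolute-value rule: per pixel, keep the correlated
    # component value with the larger magnitude.
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)

def fuse(c1, u1, c2, u2):
    # Fuse two decomposed images: max-abs on the correlated parts;
    # the uncorrelated (modality-specific) parts are preserved.
    # Additive recombination is an assumption, not taken from the paper.
    return fuse_max_abs(c1, c2) + u1 + u2

# Toy usage with random arrays standing in for a real decomposition.
rng = np.random.default_rng(0)
c1, c2 = rng.standard_normal((2, 64, 64))
u1, u2 = rng.standard_normal((2, 64, 64))
fused = fuse(c1, u1, c2, u2)
print(fused.shape, pearson_corr(u1, u2))
```

Note that the actual decomposition in the paper is obtained by coupled dictionary learning with an alternating minimization algorithm; the sketch above only covers the recombination stage that follows it.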

Dates and versions

hal-03763293, version 1 (29-08-2022)

Identifiers

Cite

Farshad Veshki, Nora Ouzir, Sergiy Vorobyov, Esa Ollila. Multimodal image fusion via coupled feature learning. Signal Processing, 2022, 200, pp.108637. ⟨10.1016/j.sigpro.2022.108637⟩. ⟨hal-03763293⟩