Conference Paper, Year: 2006

Combining Textual and Visual Ontologies to Solve Medical Multimodal Queries

Abstract

To solve medical multimodal queries, we propose to split each query into different dimensions using an ontology. We extract both textual and visual terms according to the ontology dimension they belong to. From these terms, we build sub-queries, each corresponding to one query dimension, and use Boolean expressions over the sub-queries to filter the entire document collection. The filtered document set is then ranked using vector space model techniques. We also combine the ranked lists generated from the text and image indexes to further improve retrieval performance. We achieved the best overall performance on the Medical Image Retrieval Task at CLEF 2005. These experimental results show that, while most queries are better handled by text query processing because most of the semantic information is contained in the medical text cases, the textual and visual ontology dimensions are complementary and improve the results during media fusion.
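The pipeline the abstract describes (split a query into per-dimension sub-queries, Boolean-filter the collection, rank the survivors with a vector-space measure, then fuse text and image ranked lists) can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the toy documents, the `dimension:term` tagging convention, binary term weights, and the linear fusion scheme are all assumptions made for the example.

```python
from math import sqrt

# Hypothetical toy collection: each document is a bag of dimension-tagged terms.
docs = {
    "d1": {"anatomy:lung", "pathology:nodule", "modality:xray"},
    "d2": {"anatomy:lung", "modality:ct"},
    "d3": {"anatomy:brain", "pathology:tumor", "modality:mri"},
}

def boolean_filter(docs, subqueries):
    """Keep only documents matching every dimension sub-query (AND of ORs)."""
    return {
        doc_id: terms
        for doc_id, terms in docs.items()
        if all(terms & sub for sub in subqueries)
    }

def cosine_rank(filtered, query_terms):
    """Rank the filtered set with a binary-weight cosine similarity."""
    scores = {}
    for doc_id, terms in filtered.items():
        overlap = len(terms & query_terms)
        if overlap:
            scores[doc_id] = overlap / (sqrt(len(terms)) * sqrt(len(query_terms)))
    return sorted(scores.items(), key=lambda kv: -kv[1])

def linear_fusion(text_scores, image_scores, alpha=0.7):
    """Combine text- and image-index scores; linear weighting is an assumed scheme."""
    ids = set(text_scores) | set(image_scores)
    fused = {
        i: alpha * text_scores.get(i, 0.0) + (1 - alpha) * image_scores.get(i, 0.0)
        for i in ids
    }
    return sorted(fused.items(), key=lambda kv: -kv[1])

# Query split into per-dimension sub-queries: anatomy AND modality.
subqueries = [{"anatomy:lung"}, {"modality:xray", "modality:ct"}]
query_terms = {"anatomy:lung", "pathology:nodule", "modality:xray"}

filtered = boolean_filter(docs, subqueries)   # d3 fails the anatomy dimension
ranking = cosine_rank(filtered, query_terms)  # d1 matches all three query terms
print(ranking)
```

The Boolean step enforces that each query dimension is satisfied before any ranking happens, which mirrors the abstract's filter-then-rank order; fusion then blends the modality-specific rankings.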
File not deposited

Dates and versions

hal-00953896, version 1 (28-02-2014)

Identifiers

  • HAL Id: hal-00953896, version 1

Cite

Saïd Radhouani, Joo Hwee Lim, Jean-Pierre Chevallet, Gilles Falquet. Combining Textual and Visual Ontologies to Solve Medical Multimodal Queries. IEEE International Conference on Multimedia and Expo (ICME 2006), 2006, Toronto, Canada. ⟨hal-00953896⟩
