
Combining Textual and Visual Ontologies to Solve Medical Multimodal Queries

Abstract : To solve medical multimodal queries, we propose splitting each query into different dimensions using an ontology. We extract both textual and visual terms according to the ontology dimension they belong to. From these terms we build sub-queries, each corresponding to one query dimension, and use Boolean expressions over these sub-queries to filter the entire document collection. The filtered document set is then ranked using Vector Space Model techniques. We also combine the ranked lists generated from the text and image indexes to further improve retrieval performance. We achieved the best overall performance in the Medical Image Retrieval Task at CLEF 2005. These experimental results show that while most queries are better handled by text query processing, since most semantic information is contained in the medical text cases, the textual and visual ontology dimensions are complementary in improving results during media fusion.
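The pipeline described in the abstract — Boolean filtering on per-dimension sub-queries, Vector Space Model ranking, then fusion of text and image ranked lists — could be sketched roughly as below. This is an illustrative reconstruction, not the authors' implementation: the document layout, the cosine weighting, and the `alpha` fusion weight are all assumptions.

```python
from collections import Counter
from math import sqrt

# Hypothetical sketch of the retrieval pipeline. Each document is a dict with
# an "id" and a flat list of indexing "terms"; a query dimension is a set of
# acceptable terms (e.g. one set for anatomy, one for imaging modality).

def boolean_filter(docs, required_dims):
    # Keep only documents matching at least one term from EVERY query dimension
    # (a conjunction of per-dimension disjunctions).
    return [d for d in docs
            if all(dim & set(d["terms"]) for dim in required_dims)]

def cosine_rank(query_terms, docs):
    # Simple Vector Space Model ranking: raw term-frequency vectors and
    # cosine similarity (no idf weighting, for brevity).
    q = Counter(query_terms)
    q_norm = sqrt(sum(v * v for v in q.values()))
    scored = []
    for d in docs:
        v = Counter(d["terms"])
        dot = sum(q[t] * v[t] for t in q)
        d_norm = sqrt(sum(x * x for x in v.values()))
        score = dot / (q_norm * d_norm) if q_norm and d_norm else 0.0
        scored.append((d["id"], score))
    return sorted(scored, key=lambda s: -s[1])

def fuse(text_ranked, image_ranked, alpha=0.7):
    # CombSUM-style media fusion: weighted sum of the two modality scores.
    # The 0.7 text weight is purely illustrative.
    scores = {}
    for doc_id, s in text_ranked:
        scores[doc_id] = scores.get(doc_id, 0.0) + alpha * s
    for doc_id, s in image_ranked:
        scores[doc_id] = scores.get(doc_id, 0.0) + (1 - alpha) * s
    return sorted(scores.items(), key=lambda s: -s[1])
```

A query such as "x-ray or MRI of the lung" would then be split into one dimension set per aspect (e.g. `{"lung"}` and `{"xray", "mri"}`), filtered, ranked per index, and fused.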
Document type :
Conference papers

https://hal.inria.fr/hal-00953896
Contributor : Marie-Christine Fauvet
Submitted on : Friday, February 28, 2014 - 4:03:41 PM
Last modification on : Tuesday, December 8, 2020 - 10:42:35 AM

Identifiers

  • HAL Id : hal-00953896, version 1

Citation

Saïd Radhouani, Joo Hwee Lim, Jean-Pierre Chevallet, Gilles Falquet. Combining Textual and Visual Ontologies to Solve Medical Multimodal Queries. IEEE International Conference on Multimedia and Expo (ICME 2006), 2006, Toronto, Canada. ⟨hal-00953896⟩
