Evaluating Focused Retrieval Tasks

Abstract: Focused retrieval, identified by question answering, passage retrieval, and XML element retrieval, is becoming increasingly important within the broad task of information retrieval. In this paper, we present a taxonomy of text retrieval tasks based on the structure of the answers required by a task. Of particular importance are the in-context tasks of focused retrieval, where not only relevant documents should be retrieved but also relevant information within each document should be correctly identified. Answers containing relevant information could be, for example, best entry points, or non-overlapping passages or elements. Our main research question is: how should the effectiveness of focused retrieval be evaluated? We propose an evaluation framework in which different aspects of the in-context focused retrieval tasks can be consistently evaluated and compared, and use fidelity tests on simulated runs to show what is measured. Results from our fidelity experiments demonstrate the usefulness of the proposed evaluation framework, and show its ability to measure different aspects and model different evaluation assumptions of focused retrieval.
Document type:
Conference paper
SIGIR 2007 Workshop on Focused Retrieval, Jul 2007, Amsterdam, Netherlands. 2007

Cited literature: 22 references

Contributor: Jovan Pehcevski <>
Submitted on: Thursday, 9 August 2007 - 17:22:25
Last modified on: Friday, 25 May 2018 - 12:02:04
Long-term archiving on: Friday, 9 April 2010 - 00:35:09


Files produced by the author(s)


  • HAL Id: inria-00166790, version 1



Jovan Pehcevski, James A. Thom. Evaluating Focused Retrieval Tasks. SIGIR 2007 Workshop on Focused Retrieval, Jul 2007, Amsterdam, Netherlands. 2007. 〈inria-00166790〉


