Ranking user-annotated images for multiple query terms

Moray Allan (1), Jakob Verbeek (1)
(1) LEAR - Learning and recognition in vision, Inria Grenoble - Rhône-Alpes; LJK - Laboratoire Jean Kuntzmann; Grenoble INP - Institut polytechnique de Grenoble - Grenoble Institute of Technology
Abstract: We show that web image search can be improved by taking into account which users provided the different images, and that performance on multiple-term queries can be increased by learning a single combined model and exploiting images that only partially match the query. Search queries are answered by using a mixture of kernel density estimators to rank the visual content of web images from the Flickr website whose noisy tag annotations match the given query terms. Experiments show that requiring agreement between images from different users allows a better model of the visual class to be learnt, and that precision can be increased by rejecting images from 'untrustworthy' users. We focus on search queries for multiple terms, and demonstrate enhanced performance by learning a single model for the overall query, treating images which only satisfy a subset of the search terms as negative training examples.
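The abstract's ranking scheme can be sketched in code. The sketch below is a simplified illustration, not the paper's implementation: it assumes Gaussian kernels, a fixed bandwidth, uniform weighting of per-user KDE components (so that one prolific user cannot dominate the visual model), and a score that contrasts the positive-class density against the density of "partial match" negatives — images whose tags satisfy only a subset of the query terms. Feature vectors here are hypothetical toy 2-D descriptors.

```python
import math

def gaussian_kernel(x, c, h):
    # Isotropic Gaussian kernel between feature vectors x and c, bandwidth h.
    d2 = sum((a - b) ** 2 for a, b in zip(x, c))
    return math.exp(-d2 / (2.0 * h * h))

def mixture_kde_score(x, images_by_user, h=1.0):
    """Density of x under a mixture of per-user kernel density estimators.
    Each user's images form one KDE component; components are mixed with
    uniform weights, so agreement across different users is rewarded while
    a single user uploading many near-duplicates is not."""
    component_scores = []
    for user_images in images_by_user.values():
        s = sum(gaussian_kernel(x, c, h) for c in user_images) / len(user_images)
        component_scores.append(s)
    return sum(component_scores) / len(component_scores)

def rank_images(candidates, pos_by_user, neg_by_user, h=1.0):
    """Rank candidate images for a multi-term query: score each candidate
    by its density under the positive model (images matching all query
    terms) minus its density under the negative model (images matching
    only a subset of the terms), then sort best-first."""
    scored = [(mixture_kde_score(x, pos_by_user, h)
               - mixture_kde_score(x, neg_by_user, h), x)
              for x in candidates]
    scored.sort(key=lambda t: -t[0])
    return [x for _, x in scored]
```

Usage on toy data: with positives from two users clustered near the origin and one user's partial-match negatives near (5, 5), a candidate near the origin is ranked above one near the negatives.

```python
pos = {"user_a": [(0.0, 0.0), (0.2, 0.1)], "user_b": [(0.1, -0.1)]}
neg = {"user_c": [(5.0, 5.0)]}
candidates = [(4.9, 5.1), (0.05, 0.0)]
print(rank_images(candidates, pos, neg, h=0.5)[0])  # → (0.05, 0.0)
```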
Document type: Conference papers
Cited literature: 10 references
Contributor: Jakob Verbeek
Submitted on: Monday, April 11, 2011 - 3:11:37 PM
Last modification on: Thursday, January 20, 2022 - 5:31:01 PM
Long-term archiving on: Saturday, December 3, 2016 - 10:45:24 PM


Files produced by the author(s)
Moray Allan, Jakob Verbeek. Ranking user-annotated images for multiple query terms. BMVC 2009 - British Machine Vision Conference, Sep 2009, London, United Kingdom. pp.20.1-20.10, ⟨10.5244/C.23.20⟩. ⟨inria-00439278v2⟩


