
Ranking user-annotated images for multiple query terms

Moray Allan¹, Jakob Verbeek¹
¹ LEAR - Learning and Recognition in Vision, Inria Grenoble - Rhône-Alpes; Laboratoire Jean Kuntzmann (LJK); Grenoble INP - Grenoble Institute of Technology
Abstract : We show how web image search can be improved by taking into account the users who provided different images, and that performance when searching for multiple terms can be increased by learning a new combined model and taking account of images which partially match the query. Search queries are answered by using a mixture of kernel density estimators to rank the visual content of web images from the Flickr website whose noisy tag annotations match the given query terms. Experiments show that requiring agreement between images from different users allows a better model of the visual class to be learnt, and that precision can be increased by rejecting images from 'untrustworthy' users. We focus on search queries for multiple terms, and demonstrate enhanced performance by learning a single model for the overall query, treating images which only satisfy a subset of the search terms as negative training examples.
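The abstract describes ranking candidate images by a kernel density estimate fit to the visual features of tag-matched images, with partial matches serving as negative examples. The sketch below is an illustrative reconstruction of that idea only, not the paper's method: it uses a plain isotropic Gaussian KDE rather than the paper's mixture model, and the feature vectors, bandwidth, and function names are all assumptions.

```python
import numpy as np

def kde_log_density(x, train, bandwidth):
    # Log of an isotropic Gaussian kernel density estimate at point x,
    # averaged over the training feature vectors (shape: n_samples x dim).
    d = train.shape[1]
    sq_dists = np.sum((train - x) ** 2, axis=1) / (2.0 * bandwidth ** 2)
    log_kernels = -sq_dists - 0.5 * d * np.log(2.0 * np.pi * bandwidth ** 2)
    return np.logaddexp.reduce(log_kernels) - np.log(len(train))

def rank_images(candidates, positives, negatives, bandwidth=0.5):
    """Score each candidate by p_pos(x) / (p_pos(x) + p_neg(x)),
    where p_pos is a KDE over images matching all query terms and
    p_neg a KDE over partial matches used as negatives (hypothetical
    simplification of the paper's model). Returns (ranking, scores)."""
    scores = []
    for x in candidates:
        lp = kde_log_density(x, positives, bandwidth)
        ln = kde_log_density(x, negatives, bandwidth)
        scores.append(1.0 / (1.0 + np.exp(ln - lp)))  # sigmoid of log-ratio
    scores = np.asarray(scores)
    return np.argsort(scores)[::-1], scores
```

On toy 2-D "features", a candidate lying near the positive cluster ranks above one near the negative cluster, mirroring how full-query matches would outrank partial matches.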
Document type :
Conference papers

Cited literature: 10 references
Contributor: Jakob Verbeek
Submitted on: Monday, April 11, 2011 - 3:11:37 PM
Last modification on: Friday, July 17, 2020 - 11:38:58 AM
Long-term archiving on: Saturday, December 3, 2016 - 10:45:24 PM


Files produced by the author(s)




Moray Allan, Jakob Verbeek. Ranking user-annotated images for multiple query terms. BMVC 2009 - British Machine Vision Conference, Sep 2009, London, United Kingdom. pp.20.1-20.10, ⟨10.5244/C.23.20⟩. ⟨inria-00439278v2⟩


