Ranking user-annotated images for multiple query terms

Moray Allan (1), Jakob Verbeek (1)
(1) LEAR (Learning and Recognition in Vision), Inria Grenoble - Rhône-Alpes, LJK - Laboratoire Jean Kuntzmann, INPG - Institut National Polytechnique de Grenoble
Abstract: We show how web image search can be improved by taking into account the users who provided the images, and that performance when searching for multiple terms can be increased by learning a new combined model and by taking into account images which only partially match the query. Search queries are answered using a mixture of kernel density estimators to rank the visual content of web images from the Flickr website whose noisy tag annotations match the given query terms. Experiments show that requiring agreement between images from different users allows a better model of the visual class to be learnt, and that precision can be increased by rejecting images from 'untrustworthy' users. We focus on search queries for multiple terms, and demonstrate improved performance by learning a single model for the overall query, treating images which satisfy only a subset of the search terms as negative training examples.
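To make the ranking idea concrete, here is a minimal Python sketch of density-based image ranking. It assumes precomputed visual feature vectors; the function names, the fixed bandwidth, and the simple two-component foreground/background posterior are illustrative assumptions, not the paper's actual model, which learns a mixture of kernel density estimators and, for multi-term queries, uses partially matching images as negative (background) training examples.

```python
import numpy as np

def gaussian_kde(x, train_feats, h):
    # Average of isotropic Gaussian kernels centred on the training features.
    # The normalisation constant is omitted: with a shared bandwidth it
    # cancels in the posterior ratio used for ranking below.
    d2 = np.sum((train_feats - x) ** 2, axis=1)
    return np.mean(np.exp(-d2 / (2.0 * h * h)))

def rank_by_foreground_posterior(candidates, fg_feats, bg_feats, h=1.0, prior=0.5):
    """Rank candidate images by the posterior probability that each was drawn
    from the foreground (query-relevant) density rather than the background
    density, where bg_feats could come from images matching only a subset of
    the query terms, as the abstract suggests."""
    scores = []
    for x in candidates:
        p_fg = gaussian_kde(x, fg_feats, h)
        p_bg = gaussian_kde(x, bg_feats, h)
        scores.append(prior * p_fg / (prior * p_fg + (1.0 - prior) * p_bg + 1e-300))
    order = np.argsort(scores)[::-1]  # highest posterior first
    return order, np.asarray(scores)
```

In this simplified view, requiring agreement between different users would amount to building fg_feats only from images whose visual content is supported by several distinct uploaders, rather than many images from a single (possibly untrustworthy) user.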
Document type: Conference paper

https://hal.inria.fr/inria-00439278
Citation

Moray Allan, Jakob Verbeek. Ranking user-annotated images for multiple query terms. BMVC 2009 - British Machine Vision Conference, Sep 2009, London, United Kingdom. pp.20.1-20.10, ⟨10.5244/C.23.20⟩. ⟨inria-00439278v2⟩
