Convolutional Patch Representations for Image Retrieval: an Unsupervised Approach

Abstract: Convolutional neural networks (CNNs) have recently received a lot of attention due to their ability to model local stationary structures in natural images in a multi-scale fashion, when learning all model parameters with supervision. While excellent performance has been achieved for image classification when large amounts of labeled visual data are available, their success on unsupervised tasks such as image retrieval has been moderate so far. Our paper focuses on this latter setting and explores several methods for learning patch descriptors without supervision, with application to matching and instance-level retrieval. To that end, we propose a new family of convolutional descriptors for patch representation, based on the recently introduced convolutional kernel networks. We show that our descriptor, named Patch-CKN, performs better than SIFT as well as other convolutional networks learned by artificially introducing supervision, and is significantly faster to train. To demonstrate its effectiveness, we perform an extensive evaluation on standard benchmarks for patch and image retrieval, where we obtain state-of-the-art results. We also introduce a new dataset called RomePatches, which allows one to study descriptor performance for patch and image retrieval simultaneously.
Document type: Journal article
Submitted on: Tuesday, March 1, 2016 - 9:12:05 AM
Last modification on: Friday, July 27, 2018 - 3:32:06 PM
Document(s) archived on: Sunday, November 13, 2016 - 6:34:50 AM

Mattis Paulin, Julien Mairal, Matthijs Douze, Zaid Harchaoui, Florent Perronnin, et al. Convolutional Patch Representations for Image Retrieval: an Unsupervised Approach. International Journal of Computer Vision, Springer Verlag, 2017, 121 (1), pp. 149-168. ⟨10.1007/s11263-016-0924-3⟩. ⟨hal-01277109v2⟩