A maximum entropy framework for combining parts and relations for texture and object recognition

Svetlana Lazebnik (1), Cordelia Schmid (2, *), Jean Ponce (1)
* Corresponding author
(2) LEAR - Learning and recognition in vision
GRAVIR - IMAG - Laboratoire d'informatique GRAphique, VIsion et Robotique de Grenoble, Inria Grenoble - Rhône-Alpes, CNRS - Centre National de la Recherche Scientifique : FR71
Abstract: This paper presents a probabilistic part-based approach for texture and object recognition. Textures are represented using a part dictionary found by quantizing the appearance of scale- or affine-invariant keypoints. Object classes are represented using a dictionary of composite semi-local parts, or groups of neighboring keypoints with stable and distinctive appearance and geometric layout. A discriminative maximum entropy framework is used to learn the posterior distribution of the class label given the occurrences of parts from the dictionary in the training set. Experiments on two texture and two object databases demonstrate the effectiveness of this framework for visual classification.
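As a rough illustration of the texture pipeline the abstract describes (quantize keypoint appearance into a part dictionary, count part occurrences per image, and train a discriminative maximum entropy classifier on those counts), here is a minimal sketch in Python with scikit-learn. This is not the authors' implementation: the function names are invented for this example, k-means stands in for the quantization step, and multinomial logistic regression is used as the standard parametric form of a maximum entropy classifier. The composite semi-local parts and geometric-layout modeling used for object classes are omitted.

```python
# Minimal sketch only, NOT the authors' code. Descriptors are assumed
# precomputed: one (num_keypoints x descriptor_dim) array per image,
# from scale- or affine-invariant keypoints.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def build_part_dictionary(all_descriptors, num_parts=200, seed=0):
    """Quantize keypoint appearance into a dictionary of num_parts parts."""
    return KMeans(n_clusters=num_parts, random_state=seed).fit(all_descriptors)

def part_occurrence_features(dictionary, image_descriptors):
    """Normalized histogram of part occurrences for one image."""
    labels = dictionary.predict(image_descriptors)
    hist = np.bincount(labels, minlength=dictionary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def train_maxent(dictionary, train_images, train_labels):
    """Fit a maximum entropy classifier (multinomial logistic regression)
    on part-occurrence features, modeling P(class | part occurrences)."""
    X = np.stack([part_occurrence_features(dictionary, d) for d in train_images])
    return LogisticRegression(max_iter=1000).fit(X, train_labels)
```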
Document type: Conference papers

https://hal.inria.fr/inria-00548509
Contributor: THOTH Team
Submitted on: Monday, December 20, 2010 - 9:07:57 AM
Last modification on: Wednesday, February 2, 2022 - 3:58:34 PM

Identifiers

  • HAL Id: inria-00548509, version 1

Citation

Svetlana Lazebnik, Cordelia Schmid, Jean Ponce. A maximum entropy framework for combining parts and relations for texture and object recognition. Tenth IEEE International Conference on Computer Vision (ICCV '05), Oct 2005, Beijing, China. pp. 832-838. ⟨inria-00548509⟩
